problem_id (string, 18-22 chars) | source (string, 1 distinct value) | task_type (string, 1 distinct value) | in_source_id (string, 13-58 chars) | prompt (string, 1.1k-25.4k chars) | golden_diff (string, 145-5.13k chars) | verification_info (string, 582-39.1k chars) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_30157 | rasdani/github-patches | git_diff | xonsh__xonsh-3796 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bad documentation or bug: _.rtn does not work
[In the Documentation](https://xon.sh/bash_to_xsh.html) you write that `_.rtn` is the equivalent of the shell `$?` and that it `Returns the exit code, or status, of the previous command.`. Either I understand the documentation wrong or there is a bug:
```
#!/usr/bin/env xonsh
echo "abc"
print(_.rtn)
```
Outputs
```
abc
Traceback (most recent call last):
File "/home/volker/.local/bin/xonsh", line 8, in <module>
sys.exit(main())
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 426, in main
_failback_to_other_shells(args, err)
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 373, in _failback_to_other_shells
raise err
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 424, in main
sys.exit(main_xonsh(args))
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py", line 471, in main_xonsh
run_script_with_cache(
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py", line 162, in run_script_with_cache
run_compiled_code(ccode, glb, loc, mode)
File "/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py", line 67, in run_compiled_code
func(code, glb, loc)
File "./generateIso.xonsh", line 24, in <module>
print(_.rtn)
NameError: name '_' is not defined
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xontrib/bashisms.py`
Content:
```
1 """Bash-like interface extensions for xonsh."""
2 import shlex
3 import sys
4 import re
5 import builtins
6
7
8 __all__ = ()
9
10
11 @events.on_transform_command
12 def bash_preproc(cmd, **kw):
13 bang_previous = {
14 "!": lambda x: x,
15 "$": lambda x: shlex.split(x)[-1],
16 "^": lambda x: shlex.split(x)[0],
17 "*": lambda x: " ".join(shlex.split(x)[1:]),
18 }
19
20 def replace_bang(m):
21 arg = m.group(1)
22 inputs = __xonsh__.history.inps
23
24 # Dissect the previous command.
25 if arg in bang_previous:
26 try:
27 return bang_previous[arg](inputs[-1])
28 except IndexError:
29 print("xonsh: no history for '!{}'".format(arg))
30 return ""
31
32 # Look back in history for a matching command.
33 else:
34 try:
35 return next((x for x in reversed(inputs) if x.startswith(arg)))
36 except StopIteration:
37 print("xonsh: no previous commands match '!{}'".format(arg))
38 return ""
39
40 return re.sub(r"!([!$^*]|[\w]+)", replace_bang, cmd)
41
42
43 def alias(args, stdin=None):
44 ret = 0
45
46 if args:
47 for arg in args:
48 if "=" in arg:
49 # shlex.split to remove quotes, e.g. "foo='echo hey'" into
50 # "foo=echo hey"
51 name, cmd = shlex.split(arg)[0].split("=", 1)
52 aliases[name] = shlex.split(cmd)
53 elif arg in aliases:
54 print("{}={}".format(arg, aliases[arg]))
55 else:
56 print("alias: {}: not found".format(arg), file=sys.stderr)
57 ret = 1
58 else:
59 for alias, cmd in aliases.items():
60 print("{}={}".format(alias, cmd))
61
62 return ret
63
64
65 aliases["alias"] = alias
66 builtins.__xonsh__.env["THREAD_SUBPROCS"] = False
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xontrib/bashisms.py b/xontrib/bashisms.py
--- a/xontrib/bashisms.py
+++ b/xontrib/bashisms.py
@@ -64,3 +64,86 @@
aliases["alias"] = alias
builtins.__xonsh__.env["THREAD_SUBPROCS"] = False
+
+
+def _unset(args):
+ if not args:
+ print("Usage: unset ENV_VARIABLE", file=sys.stderr)
+
+ for v in args:
+ try:
+ __xonsh__.env.pop(v)
+ except KeyError:
+ print(f"{v} not found", file=sys.stderr)
+
+
+aliases["unset"] = _unset
+
+
+def _export(args):
+ if not args:
+ print("Usage: export ENV_VARIABLE=VALUE", file=sys.stderr)
+
+ for eq in args:
+ if "=" in eq:
+ name, val = shlex.split(eq)[0].split("=", 1)
+ __xonsh__.env[name] = val
+ else:
+ print(f"{eq} equal sign not found", file=sys.stderr)
+
+
+aliases["export"] = _export
+
+
+def _set(args):
+ arg = args[0]
+ if arg == "-e":
+ __xonsh__.env["RAISE_SUBPROC_ERROR"] = True
+ elif arg == "+e":
+ __xonsh__.env["RAISE_SUBPROC_ERROR"] = False
+ elif arg == "-x":
+ __xonsh__.env["XONSH_TRACE_SUBPROC"] = True
+ elif arg == "+x":
+ __xonsh__.env["XONSH_TRACE_SUBPROC"] = False
+ else:
+ print(
+ "Not supported in xontrib bashisms.\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py",
+ file=sys.stderr,
+ )
+
+
+aliases["set"] = _set
+
+
+def _shopt(args):
+
+ supported_shopt = ["DOTGLOB"]
+
+ args_len = len(args)
+ if args_len == 0:
+ for so in supported_shopt:
+ onoff = "on" if so in __xonsh__.env and __xonsh__.env[so] else "off"
+ print(f"dotglob\t{onoff}")
+ return
+ elif args_len < 2 or args[0] in ["-h", "--help"]:
+ print(f'Usage: shopt <-s|-u> <{"|".join(supported_shopt).lower()}>')
+ return
+
+ opt = args[0]
+ optname = args[1]
+
+ if opt == "-s" and optname == "dotglob":
+ __xonsh__.env["DOTGLOB"] = True
+ elif opt == "-u" and optname == "dotglob":
+ __xonsh__.env["DOTGLOB"] = False
+ else:
+ print(
+ "Not supported in xontrib bashisms.\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py",
+ file=sys.stderr,
+ )
+
+
+aliases["shopt"] = _shopt
+
+
+aliases["complete"] = "completer list"
| {"golden_diff": "diff --git a/xontrib/bashisms.py b/xontrib/bashisms.py\n--- a/xontrib/bashisms.py\n+++ b/xontrib/bashisms.py\n@@ -64,3 +64,86 @@\n \n aliases[\"alias\"] = alias\n builtins.__xonsh__.env[\"THREAD_SUBPROCS\"] = False\n+\n+\n+def _unset(args):\n+ if not args:\n+ print(\"Usage: unset ENV_VARIABLE\", file=sys.stderr)\n+\n+ for v in args:\n+ try:\n+ __xonsh__.env.pop(v)\n+ except KeyError:\n+ print(f\"{v} not found\", file=sys.stderr)\n+\n+\n+aliases[\"unset\"] = _unset\n+\n+\n+def _export(args):\n+ if not args:\n+ print(\"Usage: export ENV_VARIABLE=VALUE\", file=sys.stderr)\n+\n+ for eq in args:\n+ if \"=\" in eq:\n+ name, val = shlex.split(eq)[0].split(\"=\", 1)\n+ __xonsh__.env[name] = val\n+ else:\n+ print(f\"{eq} equal sign not found\", file=sys.stderr)\n+\n+\n+aliases[\"export\"] = _export\n+\n+\n+def _set(args):\n+ arg = args[0]\n+ if arg == \"-e\":\n+ __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = True\n+ elif arg == \"+e\":\n+ __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = False\n+ elif arg == \"-x\":\n+ __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = True\n+ elif arg == \"+x\":\n+ __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = False\n+ else:\n+ print(\n+ \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n+ file=sys.stderr,\n+ )\n+\n+\n+aliases[\"set\"] = _set\n+\n+\n+def _shopt(args):\n+\n+ supported_shopt = [\"DOTGLOB\"]\n+\n+ args_len = len(args)\n+ if args_len == 0:\n+ for so in supported_shopt:\n+ onoff = \"on\" if so in __xonsh__.env and __xonsh__.env[so] else \"off\"\n+ print(f\"dotglob\\t{onoff}\")\n+ return\n+ elif args_len < 2 or args[0] in [\"-h\", \"--help\"]:\n+ print(f'Usage: shopt <-s|-u> <{\"|\".join(supported_shopt).lower()}>')\n+ return\n+\n+ opt = args[0]\n+ optname = args[1]\n+\n+ if opt == \"-s\" and optname == \"dotglob\":\n+ __xonsh__.env[\"DOTGLOB\"] = True\n+ elif opt == \"-u\" and optname == \"dotglob\":\n+ __xonsh__.env[\"DOTGLOB\"] = False\n+ else:\n+ print(\n+ \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n+ file=sys.stderr,\n+ )\n+\n+\n+aliases[\"shopt\"] = _shopt\n+\n+\n+aliases[\"complete\"] = \"completer list\"\n", "issue": "Bad documentation or bug: _.rtn does not work\n[In the Documentation](https://xon.sh/bash_to_xsh.html) you write that `_.rtn` is the equivalent of the shell `$?` and that it `Returns the exit code, or status, of the previous command.`. 
Either I understand the documentation wrong or there is a bug:\r\n```\r\n#!/usr/bin/env xonsh\r\necho \"abc\"\r\nprint(_.rtn)\r\n```\r\nOutputs\r\n```\r\nabc\r\nTraceback (most recent call last):\r\n File \"/home/volker/.local/bin/xonsh\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 426, in main\r\n _failback_to_other_shells(args, err)\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 373, in _failback_to_other_shells\r\n raise err\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 424, in main\r\n sys.exit(main_xonsh(args))\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/main.py\", line 471, in main_xonsh\r\n run_script_with_cache(\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py\", line 162, in run_script_with_cache\r\n run_compiled_code(ccode, glb, loc, mode)\r\n File \"/home/volker/.local/lib/python3.8/site-packages/xonsh/codecache.py\", line 67, in run_compiled_code\r\n func(code, glb, loc)\r\n File \"./generateIso.xonsh\", line 24, in <module>\r\n print(_.rtn)\r\nNameError: name '_' is not defined\r\n```\n", "before_files": [{"content": "\"\"\"Bash-like interface extensions for xonsh.\"\"\"\nimport shlex\nimport sys\nimport re\nimport builtins\n\n\n__all__ = ()\n\n\[email protected]_transform_command\ndef bash_preproc(cmd, **kw):\n bang_previous = {\n \"!\": lambda x: x,\n \"$\": lambda x: shlex.split(x)[-1],\n \"^\": lambda x: shlex.split(x)[0],\n \"*\": lambda x: \" \".join(shlex.split(x)[1:]),\n }\n\n def replace_bang(m):\n arg = m.group(1)\n inputs = __xonsh__.history.inps\n\n # Dissect the previous command.\n if arg in bang_previous:\n try:\n return bang_previous[arg](inputs[-1])\n except IndexError:\n print(\"xonsh: no history for '!{}'\".format(arg))\n return \"\"\n\n # Look back in history for a matching command.\n else:\n try:\n return next((x for x in reversed(inputs) if x.startswith(arg)))\n except StopIteration:\n print(\"xonsh: no previous commands match '!{}'\".format(arg))\n return \"\"\n\n return re.sub(r\"!([!$^*]|[\\w]+)\", replace_bang, cmd)\n\n\ndef alias(args, stdin=None):\n ret = 0\n\n if args:\n for arg in args:\n if \"=\" in arg:\n # shlex.split to remove quotes, e.g. 
\"foo='echo hey'\" into\n # \"foo=echo hey\"\n name, cmd = shlex.split(arg)[0].split(\"=\", 1)\n aliases[name] = shlex.split(cmd)\n elif arg in aliases:\n print(\"{}={}\".format(arg, aliases[arg]))\n else:\n print(\"alias: {}: not found\".format(arg), file=sys.stderr)\n ret = 1\n else:\n for alias, cmd in aliases.items():\n print(\"{}={}\".format(alias, cmd))\n\n return ret\n\n\naliases[\"alias\"] = alias\nbuiltins.__xonsh__.env[\"THREAD_SUBPROCS\"] = False\n", "path": "xontrib/bashisms.py"}], "after_files": [{"content": "\"\"\"Bash-like interface extensions for xonsh.\"\"\"\nimport shlex\nimport sys\nimport re\nimport builtins\n\n\n__all__ = ()\n\n\[email protected]_transform_command\ndef bash_preproc(cmd, **kw):\n bang_previous = {\n \"!\": lambda x: x,\n \"$\": lambda x: shlex.split(x)[-1],\n \"^\": lambda x: shlex.split(x)[0],\n \"*\": lambda x: \" \".join(shlex.split(x)[1:]),\n }\n\n def replace_bang(m):\n arg = m.group(1)\n inputs = __xonsh__.history.inps\n\n # Dissect the previous command.\n if arg in bang_previous:\n try:\n return bang_previous[arg](inputs[-1])\n except IndexError:\n print(\"xonsh: no history for '!{}'\".format(arg))\n return \"\"\n\n # Look back in history for a matching command.\n else:\n try:\n return next((x for x in reversed(inputs) if x.startswith(arg)))\n except StopIteration:\n print(\"xonsh: no previous commands match '!{}'\".format(arg))\n return \"\"\n\n return re.sub(r\"!([!$^*]|[\\w]+)\", replace_bang, cmd)\n\n\ndef alias(args, stdin=None):\n ret = 0\n\n if args:\n for arg in args:\n if \"=\" in arg:\n # shlex.split to remove quotes, e.g. \"foo='echo hey'\" into\n # \"foo=echo hey\"\n name, cmd = shlex.split(arg)[0].split(\"=\", 1)\n aliases[name] = shlex.split(cmd)\n elif arg in aliases:\n print(\"{}={}\".format(arg, aliases[arg]))\n else:\n print(\"alias: {}: not found\".format(arg), file=sys.stderr)\n ret = 1\n else:\n for alias, cmd in aliases.items():\n print(\"{}={}\".format(alias, cmd))\n\n return ret\n\n\naliases[\"alias\"] = alias\nbuiltins.__xonsh__.env[\"THREAD_SUBPROCS\"] = False\n\n\ndef _unset(args):\n if not args:\n print(\"Usage: unset ENV_VARIABLE\", file=sys.stderr)\n\n for v in args:\n try:\n __xonsh__.env.pop(v)\n except KeyError:\n print(f\"{v} not found\", file=sys.stderr)\n\n\naliases[\"unset\"] = _unset\n\n\ndef _export(args):\n if not args:\n print(\"Usage: export ENV_VARIABLE=VALUE\", file=sys.stderr)\n\n for eq in args:\n if \"=\" in eq:\n name, val = shlex.split(eq)[0].split(\"=\", 1)\n __xonsh__.env[name] = val\n else:\n print(f\"{eq} equal sign not found\", file=sys.stderr)\n\n\naliases[\"export\"] = _export\n\n\ndef _set(args):\n arg = args[0]\n if arg == \"-e\":\n __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = True\n elif arg == \"+e\":\n __xonsh__.env[\"RAISE_SUBPROC_ERROR\"] = False\n elif arg == \"-x\":\n __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = True\n elif arg == \"+x\":\n __xonsh__.env[\"XONSH_TRACE_SUBPROC\"] = False\n else:\n print(\n \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n file=sys.stderr,\n )\n\n\naliases[\"set\"] = _set\n\n\ndef _shopt(args):\n\n supported_shopt = [\"DOTGLOB\"]\n\n args_len = len(args)\n if args_len == 0:\n for so in supported_shopt:\n onoff = \"on\" if so in __xonsh__.env and __xonsh__.env[so] else \"off\"\n print(f\"dotglob\\t{onoff}\")\n return\n elif args_len < 2 or args[0] in [\"-h\", \"--help\"]:\n print(f'Usage: shopt <-s|-u> <{\"|\".join(supported_shopt).lower()}>')\n return\n\n opt = args[0]\n optname = 
args[1]\n\n if opt == \"-s\" and optname == \"dotglob\":\n __xonsh__.env[\"DOTGLOB\"] = True\n elif opt == \"-u\" and optname == \"dotglob\":\n __xonsh__.env[\"DOTGLOB\"] = False\n else:\n print(\n \"Not supported in xontrib bashisms.\\nPRs are welcome - https://github.com/xonsh/xonsh/blob/master/xontrib/bashisms.py\",\n file=sys.stderr,\n )\n\n\naliases[\"shopt\"] = _shopt\n\n\naliases[\"complete\"] = \"completer list\"\n", "path": "xontrib/bashisms.py"}]} | 1,263 | 751 |
gh_patches_debug_13527 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spawner custom form validation
Are there ideas for allowing form validation for spawners that have a custom form?
I was thinking of raising an exception in `options_from_form()` and moving the `try` up by one line in [SpawnHandler](https://github.com/jupyter/jupyterhub/blob/master/jupyterhub/handlers/pages.py#L97).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/handlers/pages.py`
Content:
```
1 """Basic html-rendering handlers."""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from tornado import web, gen
7
8 from .. import orm
9 from ..utils import admin_only, url_path_join
10 from .base import BaseHandler
11 from .login import LoginHandler
12
13
14 class RootHandler(BaseHandler):
15 """Render the Hub root page.
16
17 If logged in, redirects to:
18
19 - single-user server if running
20 - hub home, otherwise
21
22 Otherwise, renders login page.
23 """
24 def get(self):
25 user = self.get_current_user()
26 if user:
27 if user.running:
28 url = user.server.base_url
29 self.log.debug("User is running: %s", url)
30 else:
31 url = url_path_join(self.hub.server.base_url, 'home')
32 self.log.debug("User is not running: %s", url)
33 self.redirect(url)
34 return
35 url = url_path_join(self.hub.server.base_url, 'login')
36 self.redirect(url)
37
38
39 class HomeHandler(BaseHandler):
40 """Render the user's home page."""
41
42 @web.authenticated
43 def get(self):
44 html = self.render_template('home.html',
45 user=self.get_current_user(),
46 )
47 self.finish(html)
48
49
50 class SpawnHandler(BaseHandler):
51 """Handle spawning of single-user servers via form.
52
53 GET renders the form, POST handles form submission.
54
55 Only enabled when Spawner.options_form is defined.
56 """
57 def _render_form(self, message=''):
58 user = self.get_current_user()
59 return self.render_template('spawn.html',
60 user=user,
61 spawner_options_form=user.spawner.options_form,
62 error_message=message,
63 )
64
65 @web.authenticated
66 def get(self):
67 """GET renders form for spawning with user-specified options"""
68 user = self.get_current_user()
69 if user.running:
70 url = user.server.base_url
71 self.log.debug("User is running: %s", url)
72 self.redirect(url)
73 return
74 if user.spawner.options_form:
75 self.finish(self._render_form())
76 else:
77 # not running, no form. Trigger spawn.
78 url = url_path_join(self.base_url, 'user', user.name)
79 self.redirect(url)
80
81 @web.authenticated
82 @gen.coroutine
83 def post(self):
84 """POST spawns with user-specified options"""
85 user = self.get_current_user()
86 if user.running:
87 url = user.server.base_url
88 self.log.warning("User is already running: %s", url)
89 self.redirect(url)
90 return
91 form_options = {}
92 for key, byte_list in self.request.body_arguments.items():
93 form_options[key] = [ bs.decode('utf8') for bs in byte_list ]
94 for key, byte_list in self.request.files.items():
95 form_options["%s_file"%key] = byte_list
96 options = user.spawner.options_from_form(form_options)
97 try:
98 yield self.spawn_single_user(user, options=options)
99 except Exception as e:
100 self.log.error("Failed to spawn single-user server with form", exc_info=True)
101 self.finish(self._render_form(str(e)))
102 return
103 self.set_login_cookie(user)
104 url = user.server.base_url
105 self.redirect(url)
106
107 class AdminHandler(BaseHandler):
108 """Render the admin page."""
109
110 @admin_only
111 def get(self):
112 available = {'name', 'admin', 'running', 'last_activity'}
113 default_sort = ['admin', 'name']
114 mapping = {
115 'running': '_server_id'
116 }
117 default_order = {
118 'name': 'asc',
119 'last_activity': 'desc',
120 'admin': 'desc',
121 'running': 'desc',
122 }
123 sorts = self.get_arguments('sort') or default_sort
124 orders = self.get_arguments('order')
125
126 for bad in set(sorts).difference(available):
127 self.log.warn("ignoring invalid sort: %r", bad)
128 sorts.remove(bad)
129 for bad in set(orders).difference({'asc', 'desc'}):
130 self.log.warn("ignoring invalid order: %r", bad)
131 orders.remove(bad)
132
133 # add default sort as secondary
134 for s in default_sort:
135 if s not in sorts:
136 sorts.append(s)
137 if len(orders) < len(sorts):
138 for col in sorts[len(orders):]:
139 orders.append(default_order[col])
140 else:
141 orders = orders[:len(sorts)]
142
143 # this could be one incomprehensible nested list comprehension
144 # get User columns
145 cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]
146 # get User.col.desc() order objects
147 ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]
148
149 users = self.db.query(orm.User).order_by(*ordered)
150 users = [ self._user_from_orm(u) for u in users ]
151 running = [ u for u in users if u.running ]
152
153 html = self.render_template('admin.html',
154 user=self.get_current_user(),
155 admin_access=self.settings.get('admin_access', False),
156 users=users,
157 running=running,
158 sort={s:o for s,o in zip(sorts, orders)},
159 )
160 self.finish(html)
161
162
163 default_handlers = [
164 (r'/', RootHandler),
165 (r'/home', HomeHandler),
166 (r'/admin', AdminHandler),
167 (r'/spawn', SpawnHandler),
168 ]
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py
--- a/jupyterhub/handlers/pages.py
+++ b/jupyterhub/handlers/pages.py
@@ -93,8 +93,8 @@
form_options[key] = [ bs.decode('utf8') for bs in byte_list ]
for key, byte_list in self.request.files.items():
form_options["%s_file"%key] = byte_list
- options = user.spawner.options_from_form(form_options)
try:
+ options = user.spawner.options_from_form(form_options)
yield self.spawn_single_user(user, options=options)
except Exception as e:
self.log.error("Failed to spawn single-user server with form", exc_info=True)
| {"golden_diff": "diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py\n--- a/jupyterhub/handlers/pages.py\n+++ b/jupyterhub/handlers/pages.py\n@@ -93,8 +93,8 @@\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n- options = user.spawner.options_from_form(form_options)\n try:\n+ options = user.spawner.options_from_form(form_options)\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n", "issue": "Spawner custom form validation\nAre there ideas for allowing form validation for spawners that have a custom form?\n\nI was thinking of raising an exception in `options_from_form()` and moving the `try` up by one line in [SpawnHandler](https://github.com/jupyter/jupyterhub/blob/master/jupyterhub/handlers/pages.py#L97).\n\n", "before_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web, gen\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\nfrom .login import LoginHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url)\n return\n url = url_path_join(self.hub.server.base_url, 'login')\n self.redirect(url)\n\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass SpawnHandler(BaseHandler):\n \"\"\"Handle spawning of single-user servers via form.\n \n GET renders the form, POST handles form submission.\n \n Only enabled when Spawner.options_form is defined.\n \"\"\"\n def _render_form(self, message=''):\n user = self.get_current_user()\n return self.render_template('spawn.html',\n user=user,\n spawner_options_form=user.spawner.options_form,\n error_message=message,\n )\n\n @web.authenticated\n def get(self):\n \"\"\"GET renders form for spawning with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n self.redirect(url)\n return\n if user.spawner.options_form:\n self.finish(self._render_form())\n else:\n # not running, no form. 
Trigger spawn.\n url = url_path_join(self.base_url, 'user', user.name)\n self.redirect(url)\n \n @web.authenticated\n @gen.coroutine\n def post(self):\n \"\"\"POST spawns with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.warning(\"User is already running: %s\", url)\n self.redirect(url)\n return\n form_options = {}\n for key, byte_list in self.request.body_arguments.items():\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n options = user.spawner.options_from_form(form_options)\n try:\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n self.finish(self._render_form(str(e)))\n return\n self.set_login_cookie(user)\n url = user.server.base_url\n self.redirect(url)\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n users = [ self._user_from_orm(u) for u in users ]\n running = [ u for u in users if u.running ]\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}], "after_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web, gen\n\nfrom .. 
import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\nfrom .login import LoginHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url)\n return\n url = url_path_join(self.hub.server.base_url, 'login')\n self.redirect(url)\n\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass SpawnHandler(BaseHandler):\n \"\"\"Handle spawning of single-user servers via form.\n \n GET renders the form, POST handles form submission.\n \n Only enabled when Spawner.options_form is defined.\n \"\"\"\n def _render_form(self, message=''):\n user = self.get_current_user()\n return self.render_template('spawn.html',\n user=user,\n spawner_options_form=user.spawner.options_form,\n error_message=message,\n )\n\n @web.authenticated\n def get(self):\n \"\"\"GET renders form for spawning with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n self.redirect(url)\n return\n if user.spawner.options_form:\n self.finish(self._render_form())\n else:\n # not running, no form. Trigger spawn.\n url = url_path_join(self.base_url, 'user', user.name)\n self.redirect(url)\n \n @web.authenticated\n @gen.coroutine\n def post(self):\n \"\"\"POST spawns with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.warning(\"User is already running: %s\", url)\n self.redirect(url)\n return\n form_options = {}\n for key, byte_list in self.request.body_arguments.items():\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n for key, byte_list in self.request.files.items():\n form_options[\"%s_file\"%key] = byte_list\n try:\n options = user.spawner.options_from_form(form_options)\n yield self.spawn_single_user(user, options=options)\n except Exception as e:\n self.log.error(\"Failed to spawn single-user server with form\", exc_info=True)\n self.finish(self._render_form(str(e)))\n return\n self.set_login_cookie(user)\n url = user.server.base_url\n self.redirect(url)\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in 
sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n users = [ self._user_from_orm(u) for u in users ]\n running = [ u for u in users if u.running ]\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}]} | 1,928 | 165 |
gh_patches_debug_34164 | rasdani/github-patches | git_diff | tensorflow__tensor2tensor-1281 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MRPC dev data is being used for training
### Description
I expected that the dev dataset would be different from the training dataset. However, all dev examples of MRPC are actually included in the training dataset.
### Environment information
```
OS: macOS 10.13.4
$ pip freeze | grep tensor
mesh-tensorflow==0.0.4
-e [email protected]:tensorflow/tensor2tensor.git@7de63449a98375011e2a8715482dfeea946e6de7#egg=tensor2tensor
tensorboard==1.12.0
tensorflow==1.12.0
tensorflow-metadata==0.9.0
tensorflow-probability==0.5.0
$ python -V
Python 3.6.4
```
### For bugs: reproduction and error logs
```python
import tensorflow as tf
from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators.mrpc import MSRParaphraseCorpus
data_dir = "/tmp/t2t_mrpc"
mrpc = MSRParaphraseCorpus()
tf.gfile.MakeDirs(data_dir)
mrpc.generate_data(data_dir, "/tmp")
encoder = mrpc.feature_encoders(data_dir).get("inputs")
tfe = tf.contrib.eager
tfe.enable_eager_execution()
train_dataset = set(
encoder.decode(example["inputs"])
for example in tfe.Iterator(mrpc.dataset(problem.DatasetSplit.TRAIN, data_dir)))
eval_dataset = set(
encoder.decode(example["inputs"])
for example in tfe.Iterator(mrpc.dataset(problem.DatasetSplit.EVAL, data_dir)))
print("TRAIN Dataset: {}".format(len(train_dataset)))
print("EVAL Dataset: {}".format(len(eval_dataset)))
print("Duplication: {}".format(len(train_dataset & eval_dataset)))
```
Output:
```
TRAIN Dataset: 8152
EVAL Dataset: 816
Duplication: 816
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensor2tensor/data_generators/mrpc.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2018 The Tensor2Tensor Authors.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """Data generators for the MSR Paraphrase Corpus."""
17
18 from __future__ import absolute_import
19 from __future__ import division
20 from __future__ import print_function
21
22 import os
23 import six
24 from tensor2tensor.data_generators import generator_utils
25 from tensor2tensor.data_generators import problem
26 from tensor2tensor.data_generators import text_encoder
27 from tensor2tensor.data_generators import text_problems
28 from tensor2tensor.utils import registry
29 import tensorflow as tf
30
31 EOS = text_encoder.EOS
32
33
34 @registry.register_problem
35 class MSRParaphraseCorpus(text_problems.TextConcat2ClassProblem):
36 """MSR Paraphrase Identification problems."""
37
38 # Link to data from GLUE: https://gluebenchmark.com/tasks
39 DEV_IDS = ("https://firebasestorage.googleapis.com/v0/b/"
40 "mtl-sentence-representations.appspot.com/o/"
41 "data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-"
42 "48f4-b431-7480817f1adc")
43 MRPC_TRAIN = ("https://s3.amazonaws.com/senteval/senteval_data/"
44 "msr_paraphrase_train.txt")
45 MRPC_TEST = ("https://s3.amazonaws.com/senteval/senteval_data/"
46 "msr_paraphrase_test.txt")
47 DATA_DIR = "MRPC"
48
49 @property
50 def is_generate_per_split(self):
51 return True
52
53 @property
54 def dataset_splits(self):
55 return [{
56 "split": problem.DatasetSplit.TRAIN,
57 "shards": 10,
58 }, {
59 "split": problem.DatasetSplit.EVAL,
60 "shards": 1,
61 }]
62
63 @property
64 def approx_vocab_size(self):
65 return 2**13 # 8k vocab suffices for this small dataset.
66
67 @property
68 def num_classes(self):
69 return 2
70
71 def class_labels(self, data_dir):
72 del data_dir
73 return ["not_paraphrase", "paraphrase"]
74
75 def _maybe_download_corpora(self, tmp_dir):
76 mrpc_dir = os.path.join(tmp_dir, self.DATA_DIR)
77 tf.gfile.MakeDirs(mrpc_dir)
78 mrpc_train_finalpath = os.path.join(mrpc_dir, "msr_paraphrase_train.txt")
79 mrpc_test_finalpath = os.path.join(mrpc_dir, "msr_paraphrase_test.txt")
80 mrpc_dev_ids_finalpath = os.path.join(mrpc_dir, "dev_ids.tsv")
81
82 def download_file(tdir, filepath, url):
83 if not tf.gfile.Exists(filepath):
84 generator_utils.maybe_download(tdir, filepath, url)
85
86 download_file(mrpc_dir, mrpc_train_finalpath, self.MRPC_TRAIN)
87 download_file(mrpc_dir, mrpc_test_finalpath, self.MRPC_TEST)
88 download_file(mrpc_dir, mrpc_dev_ids_finalpath, self.DEV_IDS)
89
90 return mrpc_dir
91
92 def example_generator(self, filename, dev_ids):
93 for idx, line in enumerate(tf.gfile.Open(filename, "rb")):
94 if idx == 0: continue # skip header
95 if six.PY2:
96 line = unicode(line.strip(), "utf-8")
97 else:
98 line = line.strip().decode("utf-8")
99 l, id1, id2, s1, s2 = line.split("\t")
100 if dev_ids and [id1, id2] not in dev_ids:
101 continue
102 inputs = [[s1, s2], [s2, s1]]
103 for inp in inputs:
104 yield {
105 "inputs": inp,
106 "label": int(l)
107 }
108
109 def generate_samples(self, data_dir, tmp_dir, dataset_split):
110 mrpc_dir = self._maybe_download_corpora(tmp_dir)
111 filesplit = "msr_paraphrase_train.txt"
112 dev_ids = []
113 if dataset_split != problem.DatasetSplit.TRAIN:
114 for row in tf.gfile.Open(os.path.join(mrpc_dir, "dev_ids.tsv")):
115 dev_ids.append(row.strip().split("\t"))
116
117 filename = os.path.join(mrpc_dir, filesplit)
118 for example in self.example_generator(filename, dev_ids):
119 yield example
120
121
122 @registry.register_problem
123 class MSRParaphraseCorpusCharacters(MSRParaphraseCorpus):
124 """MSR Paraphrase Identification problems, character level"""
125
126 @property
127 def vocab_type(self):
128 return text_problems.VocabType.CHARACTER
129
130 def global_task_id(self):
131 return problem.TaskID.EN_SIM
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensor2tensor/data_generators/mrpc.py b/tensor2tensor/data_generators/mrpc.py
--- a/tensor2tensor/data_generators/mrpc.py
+++ b/tensor2tensor/data_generators/mrpc.py
@@ -58,6 +58,9 @@
}, {
"split": problem.DatasetSplit.EVAL,
"shards": 1,
+ }, {
+ "split": problem.DatasetSplit.TEST,
+ "shards": 1,
}]
@property
@@ -89,7 +92,7 @@
return mrpc_dir
- def example_generator(self, filename, dev_ids):
+ def example_generator(self, filename, dev_ids, dataset_split):
for idx, line in enumerate(tf.gfile.Open(filename, "rb")):
if idx == 0: continue # skip header
if six.PY2:
@@ -97,7 +100,10 @@
else:
line = line.strip().decode("utf-8")
l, id1, id2, s1, s2 = line.split("\t")
- if dev_ids and [id1, id2] not in dev_ids:
+ is_dev = [id1, id2] in dev_ids
+ if dataset_split == problem.DatasetSplit.TRAIN and is_dev:
+ continue
+ if dataset_split == problem.DatasetSplit.EVAL and not is_dev:
continue
inputs = [[s1, s2], [s2, s1]]
for inp in inputs:
@@ -108,14 +114,17 @@
def generate_samples(self, data_dir, tmp_dir, dataset_split):
mrpc_dir = self._maybe_download_corpora(tmp_dir)
- filesplit = "msr_paraphrase_train.txt"
+ if dataset_split != problem.DatasetSplit.TEST:
+ filesplit = "msr_paraphrase_train.txt"
+ else:
+ filesplit = "msr_paraphrase_test.txt"
dev_ids = []
- if dataset_split != problem.DatasetSplit.TRAIN:
+ if dataset_split != problem.DatasetSplit.TEST:
for row in tf.gfile.Open(os.path.join(mrpc_dir, "dev_ids.tsv")):
dev_ids.append(row.strip().split("\t"))
filename = os.path.join(mrpc_dir, filesplit)
- for example in self.example_generator(filename, dev_ids):
+ for example in self.example_generator(filename, dev_ids, dataset_split):
yield example
| {"golden_diff": "diff --git a/tensor2tensor/data_generators/mrpc.py b/tensor2tensor/data_generators/mrpc.py\n--- a/tensor2tensor/data_generators/mrpc.py\n+++ b/tensor2tensor/data_generators/mrpc.py\n@@ -58,6 +58,9 @@\n }, {\n \"split\": problem.DatasetSplit.EVAL,\n \"shards\": 1,\n+ }, {\n+ \"split\": problem.DatasetSplit.TEST,\n+ \"shards\": 1,\n }]\n \n @property\n@@ -89,7 +92,7 @@\n \n return mrpc_dir\n \n- def example_generator(self, filename, dev_ids):\n+ def example_generator(self, filename, dev_ids, dataset_split):\n for idx, line in enumerate(tf.gfile.Open(filename, \"rb\")):\n if idx == 0: continue # skip header\n if six.PY2:\n@@ -97,7 +100,10 @@\n else:\n line = line.strip().decode(\"utf-8\")\n l, id1, id2, s1, s2 = line.split(\"\\t\")\n- if dev_ids and [id1, id2] not in dev_ids:\n+ is_dev = [id1, id2] in dev_ids\n+ if dataset_split == problem.DatasetSplit.TRAIN and is_dev:\n+ continue\n+ if dataset_split == problem.DatasetSplit.EVAL and not is_dev:\n continue\n inputs = [[s1, s2], [s2, s1]]\n for inp in inputs:\n@@ -108,14 +114,17 @@\n \n def generate_samples(self, data_dir, tmp_dir, dataset_split):\n mrpc_dir = self._maybe_download_corpora(tmp_dir)\n- filesplit = \"msr_paraphrase_train.txt\"\n+ if dataset_split != problem.DatasetSplit.TEST:\n+ filesplit = \"msr_paraphrase_train.txt\"\n+ else:\n+ filesplit = \"msr_paraphrase_test.txt\"\n dev_ids = []\n- if dataset_split != problem.DatasetSplit.TRAIN:\n+ if dataset_split != problem.DatasetSplit.TEST:\n for row in tf.gfile.Open(os.path.join(mrpc_dir, \"dev_ids.tsv\")):\n dev_ids.append(row.strip().split(\"\\t\"))\n \n filename = os.path.join(mrpc_dir, filesplit)\n- for example in self.example_generator(filename, dev_ids):\n+ for example in self.example_generator(filename, dev_ids, dataset_split):\n yield example\n", "issue": "MRPC dev data is being used for training\n### Description\r\n\r\nI expected that the dev dataset would be different from the training dataset. 
However, all dev examples of MRPC are actually included in the training dataset.\r\n\r\n### Environment information\r\n\r\n```\r\nOS: macOS 10.13.4\r\n\r\n$ pip freeze | grep tensor\r\nmesh-tensorflow==0.0.4\r\n-e [email protected]:tensorflow/tensor2tensor.git@7de63449a98375011e2a8715482dfeea946e6de7#egg=tensor2tensor\r\ntensorboard==1.12.0\r\ntensorflow==1.12.0\r\ntensorflow-metadata==0.9.0\r\ntensorflow-probability==0.5.0\r\n\r\n$ python -V\r\nPython 3.6.4\r\n```\r\n\r\n### For bugs: reproduction and error logs\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom tensor2tensor.data_generators import problem\r\nfrom tensor2tensor.data_generators.mrpc import MSRParaphraseCorpus\r\n\r\ndata_dir = \"/tmp/t2t_mrpc\"\r\nmrpc = MSRParaphraseCorpus()\r\ntf.gfile.MakeDirs(data_dir)\r\nmrpc.generate_data(data_dir, \"/tmp\")\r\nencoder = mrpc.feature_encoders(data_dir).get(\"inputs\")\r\n\r\ntfe = tf.contrib.eager\r\ntfe.enable_eager_execution()\r\ntrain_dataset = set(\r\n encoder.decode(example[\"inputs\"])\r\n for example in tfe.Iterator(mrpc.dataset(problem.DatasetSplit.TRAIN, data_dir)))\r\neval_dataset = set(\r\n encoder.decode(example[\"inputs\"])\r\n for example in tfe.Iterator(mrpc.dataset(problem.DatasetSplit.EVAL, data_dir)))\r\n\r\nprint(\"TRAIN Dataset: {}\".format(len(train_dataset)))\r\nprint(\"EVAL Dataset: {}\".format(len(eval_dataset)))\r\nprint(\"Duplication: {}\".format(len(train_dataset & eval_dataset)))\r\n```\r\n\r\nOutput:\r\n```\r\nTRAIN Dataset: 8152\r\nEVAL Dataset: 816\r\nDuplication: 816\r\n```\r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2018 The Tensor2Tensor Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Data generators for the MSR Paraphrase Corpus.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport six\nfrom tensor2tensor.data_generators import generator_utils\nfrom tensor2tensor.data_generators import problem\nfrom tensor2tensor.data_generators import text_encoder\nfrom tensor2tensor.data_generators import text_problems\nfrom tensor2tensor.utils import registry\nimport tensorflow as tf\n\nEOS = text_encoder.EOS\n\n\[email protected]_problem\nclass MSRParaphraseCorpus(text_problems.TextConcat2ClassProblem):\n \"\"\"MSR Paraphrase Identification problems.\"\"\"\n\n # Link to data from GLUE: https://gluebenchmark.com/tasks\n DEV_IDS = (\"https://firebasestorage.googleapis.com/v0/b/\"\n \"mtl-sentence-representations.appspot.com/o/\"\n \"data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-\"\n \"48f4-b431-7480817f1adc\")\n MRPC_TRAIN = (\"https://s3.amazonaws.com/senteval/senteval_data/\"\n \"msr_paraphrase_train.txt\")\n MRPC_TEST = (\"https://s3.amazonaws.com/senteval/senteval_data/\"\n \"msr_paraphrase_test.txt\")\n DATA_DIR = \"MRPC\"\n\n @property\n def is_generate_per_split(self):\n return True\n\n @property\n def dataset_splits(self):\n return [{\n \"split\": problem.DatasetSplit.TRAIN,\n \"shards\": 10,\n }, {\n \"split\": 
problem.DatasetSplit.EVAL,\n \"shards\": 1,\n }]\n\n @property\n def approx_vocab_size(self):\n return 2**13 # 8k vocab suffices for this small dataset.\n\n @property\n def num_classes(self):\n return 2\n\n def class_labels(self, data_dir):\n del data_dir\n return [\"not_paraphrase\", \"paraphrase\"]\n\n def _maybe_download_corpora(self, tmp_dir):\n mrpc_dir = os.path.join(tmp_dir, self.DATA_DIR)\n tf.gfile.MakeDirs(mrpc_dir)\n mrpc_train_finalpath = os.path.join(mrpc_dir, \"msr_paraphrase_train.txt\")\n mrpc_test_finalpath = os.path.join(mrpc_dir, \"msr_paraphrase_test.txt\")\n mrpc_dev_ids_finalpath = os.path.join(mrpc_dir, \"dev_ids.tsv\")\n\n def download_file(tdir, filepath, url):\n if not tf.gfile.Exists(filepath):\n generator_utils.maybe_download(tdir, filepath, url)\n\n download_file(mrpc_dir, mrpc_train_finalpath, self.MRPC_TRAIN)\n download_file(mrpc_dir, mrpc_test_finalpath, self.MRPC_TEST)\n download_file(mrpc_dir, mrpc_dev_ids_finalpath, self.DEV_IDS)\n\n return mrpc_dir\n\n def example_generator(self, filename, dev_ids):\n for idx, line in enumerate(tf.gfile.Open(filename, \"rb\")):\n if idx == 0: continue # skip header\n if six.PY2:\n line = unicode(line.strip(), \"utf-8\")\n else:\n line = line.strip().decode(\"utf-8\")\n l, id1, id2, s1, s2 = line.split(\"\\t\")\n if dev_ids and [id1, id2] not in dev_ids:\n continue\n inputs = [[s1, s2], [s2, s1]]\n for inp in inputs:\n yield {\n \"inputs\": inp,\n \"label\": int(l)\n }\n\n def generate_samples(self, data_dir, tmp_dir, dataset_split):\n mrpc_dir = self._maybe_download_corpora(tmp_dir)\n filesplit = \"msr_paraphrase_train.txt\"\n dev_ids = []\n if dataset_split != problem.DatasetSplit.TRAIN:\n for row in tf.gfile.Open(os.path.join(mrpc_dir, \"dev_ids.tsv\")):\n dev_ids.append(row.strip().split(\"\\t\"))\n\n filename = os.path.join(mrpc_dir, filesplit)\n for example in self.example_generator(filename, dev_ids):\n yield example\n\n\[email protected]_problem\nclass MSRParaphraseCorpusCharacters(MSRParaphraseCorpus):\n \"\"\"MSR Paraphrase Identification problems, character level\"\"\"\n\n @property\n def vocab_type(self):\n return text_problems.VocabType.CHARACTER\n\n def global_task_id(self):\n return problem.TaskID.EN_SIM\n", "path": "tensor2tensor/data_generators/mrpc.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2018 The Tensor2Tensor Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Data generators for the MSR Paraphrase Corpus.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport six\nfrom tensor2tensor.data_generators import generator_utils\nfrom tensor2tensor.data_generators import problem\nfrom tensor2tensor.data_generators import text_encoder\nfrom tensor2tensor.data_generators import text_problems\nfrom tensor2tensor.utils import registry\nimport tensorflow as tf\n\nEOS = text_encoder.EOS\n\n\[email protected]_problem\nclass MSRParaphraseCorpus(text_problems.TextConcat2ClassProblem):\n \"\"\"MSR 
Paraphrase Identification problems.\"\"\"\n\n # Link to data from GLUE: https://gluebenchmark.com/tasks\n DEV_IDS = (\"https://firebasestorage.googleapis.com/v0/b/\"\n \"mtl-sentence-representations.appspot.com/o/\"\n \"data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-\"\n \"48f4-b431-7480817f1adc\")\n MRPC_TRAIN = (\"https://s3.amazonaws.com/senteval/senteval_data/\"\n \"msr_paraphrase_train.txt\")\n MRPC_TEST = (\"https://s3.amazonaws.com/senteval/senteval_data/\"\n \"msr_paraphrase_test.txt\")\n DATA_DIR = \"MRPC\"\n\n @property\n def is_generate_per_split(self):\n return True\n\n @property\n def dataset_splits(self):\n return [{\n \"split\": problem.DatasetSplit.TRAIN,\n \"shards\": 10,\n }, {\n \"split\": problem.DatasetSplit.EVAL,\n \"shards\": 1,\n }, {\n \"split\": problem.DatasetSplit.TEST,\n \"shards\": 1,\n }]\n\n @property\n def approx_vocab_size(self):\n return 2**13 # 8k vocab suffices for this small dataset.\n\n @property\n def num_classes(self):\n return 2\n\n def class_labels(self, data_dir):\n del data_dir\n return [\"not_paraphrase\", \"paraphrase\"]\n\n def _maybe_download_corpora(self, tmp_dir):\n mrpc_dir = os.path.join(tmp_dir, self.DATA_DIR)\n tf.gfile.MakeDirs(mrpc_dir)\n mrpc_train_finalpath = os.path.join(mrpc_dir, \"msr_paraphrase_train.txt\")\n mrpc_test_finalpath = os.path.join(mrpc_dir, \"msr_paraphrase_test.txt\")\n mrpc_dev_ids_finalpath = os.path.join(mrpc_dir, \"dev_ids.tsv\")\n\n def download_file(tdir, filepath, url):\n if not tf.gfile.Exists(filepath):\n generator_utils.maybe_download(tdir, filepath, url)\n\n download_file(mrpc_dir, mrpc_train_finalpath, self.MRPC_TRAIN)\n download_file(mrpc_dir, mrpc_test_finalpath, self.MRPC_TEST)\n download_file(mrpc_dir, mrpc_dev_ids_finalpath, self.DEV_IDS)\n\n return mrpc_dir\n\n def example_generator(self, filename, dev_ids, dataset_split):\n for idx, line in enumerate(tf.gfile.Open(filename, \"rb\")):\n if idx == 0: continue # skip header\n if six.PY2:\n line = unicode(line.strip(), \"utf-8\")\n else:\n line = line.strip().decode(\"utf-8\")\n l, id1, id2, s1, s2 = line.split(\"\\t\")\n is_dev = [id1, id2] in dev_ids\n if dataset_split == problem.DatasetSplit.TRAIN and is_dev:\n continue\n if dataset_split == problem.DatasetSplit.EVAL and not is_dev:\n continue\n inputs = [[s1, s2], [s2, s1]]\n for inp in inputs:\n yield {\n \"inputs\": inp,\n \"label\": int(l)\n }\n\n def generate_samples(self, data_dir, tmp_dir, dataset_split):\n mrpc_dir = self._maybe_download_corpora(tmp_dir)\n if dataset_split != problem.DatasetSplit.TEST:\n filesplit = \"msr_paraphrase_train.txt\"\n else:\n filesplit = \"msr_paraphrase_test.txt\"\n dev_ids = []\n if dataset_split != problem.DatasetSplit.TEST:\n for row in tf.gfile.Open(os.path.join(mrpc_dir, \"dev_ids.tsv\")):\n dev_ids.append(row.strip().split(\"\\t\"))\n\n filename = os.path.join(mrpc_dir, filesplit)\n for example in self.example_generator(filename, dev_ids, dataset_split):\n yield example\n\n\[email protected]_problem\nclass MSRParaphraseCorpusCharacters(MSRParaphraseCorpus):\n \"\"\"MSR Paraphrase Identification problems, character level\"\"\"\n\n @property\n def vocab_type(self):\n return text_problems.VocabType.CHARACTER\n\n def global_task_id(self):\n return problem.TaskID.EN_SIM\n", "path": "tensor2tensor/data_generators/mrpc.py"}]} | 2,142 | 552 |
gh_patches_debug_30656 | rasdani/github-patches | git_diff | rucio__rucio-2150 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test reaper console script
Motivation
----------
The reaper console script `rucio-reaper` is not tested in the testsuite.
Modification
------------
- Add test for the reaper console script.
- Install the environnement with `python setup.py develop` in the docker env to have the generated console scripts available in the docker.
- Extend the reaper argparse method and the reaper tests to validate the argparse main method and console script.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/rucio/clis/daemons/reaper/reaper.py`
Content:
```
1 # Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # Authors:
16 # - Vincent Garonne, <[email protected]>, 2012-2018
17 # - Wen Guan, <[email protected]>, 2014
18 # - Hannes Hansen, <[email protected]>, 2018
19
20 """
21 Reaper is a daemon to manage file deletion
22 """
23
24 import argparse
25 import signal
26
27 from rucio.daemons.reaper.reaper import run, stop
28
29
30 def get_parser():
31 """
32 Returns the argparse parser.
33 """
34 parser = argparse.ArgumentParser(description="The Reaper daemon is responsible for replica deletion. It deletes them by checking if there are replicas that are not locked and have a tombstone to indicate that they can be deleted.", epilog='''
35 Upload a file and prepare the rules and replicas for deletion by using the judge-cleaner daemon::
36
37 $ rucio upload --rse MOCK --scope mock --name file filename.txt
38 $ rucio add-rule mock:file 1 MOCK2 --lifetime 1
39 $ rucio-judge-cleaner --run-once
40
41 Check if the replica was created::
42
43 $ rucio list-file-replica mock:file
44 +---------+--------+------------+-----------+---------------------------------------------------------+
45 | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |
46 |---------+--------+------------+-----------+---------------------------------------------------------|
47 | mock | file | 1.542 kB | 1268ee71 | MOCK: file://localhost:0/tmp/rucio_rse/mock/15/58/file |
48 +---------+--------+------------+-----------+---------------------------------------------------------+
49
50 Run the daemon::
51
52 $ rucio-reaper --run-once
53
54 Check if the replica exists::
55
56 $ rucio list-file-replica mock:file
57 +---------+--------+------------+-----------+---------------------------------------------------------+
58 | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |
59 |---------+--------+------------+-----------+---------------------------------------------------------|
60 +---------+--------+------------+-----------+---------------------------------------------------------+
61 ''')
62 parser.add_argument("--run-once", action="store_true", default=False, help='One iteration only')
63 parser.add_argument("--total-workers", action="store", default=1, type=int, help='Total number of workers per process')
64 parser.add_argument("--threads-per-worker", action="store", default=None, type=int, help='Total number of threads created by each worker')
65 parser.add_argument("--chunk-size", action="store", default=10, type=int, help='Chunk size')
66 parser.add_argument("--scheme", action="store", default=None, type=str, help='Force the reaper to use a particular protocol, e.g., mock.')
67 parser.add_argument('--greedy', action='store_true', default=False, help='Greedy mode')
68 parser.add_argument('--exclude-rses', action="store", default=None, type=str, help='RSEs expression to exclude RSEs')
69 parser.add_argument('--include-rses', action="store", default=None, type=str, help='RSEs expression to include RSEs')
70 parser.add_argument('--rses', nargs='+', type=str, help='List of RSEs')
71 parser.add_argument('--delay-seconds', action="store", default=3600, type=int, help='Delay to retry failed deletion')
72 return parser
73
74
75 def main():
76
77 signal.signal(signal.SIGTERM, stop)
78 parser = get_parser()
79 args = parser.parse_args()
80 try:
81 run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,
82 once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,
83 exclude_rses=args.exclude_rses, include_rses=args.include_rses, delay_seconds=args.delay_seconds)
84 except KeyboardInterrupt:
85 stop()
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/rucio/clis/daemons/reaper/reaper.py b/lib/rucio/clis/daemons/reaper/reaper.py
--- a/lib/rucio/clis/daemons/reaper/reaper.py
+++ b/lib/rucio/clis/daemons/reaper/reaper.py
@@ -1,4 +1,4 @@
-# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.
+# Copyright 2012-2019 CERN for the benefit of the ATLAS collaboration.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,7 +13,7 @@
# limitations under the License.
#
# Authors:
-# - Vincent Garonne, <[email protected]>, 2012-2018
+# - Vincent Garonne, <[email protected]>, 2012-2019
# - Wen Guan, <[email protected]>, 2014
# - Hannes Hansen, <[email protected]>, 2018
@@ -23,6 +23,7 @@
import argparse
import signal
+import sys
from rucio.daemons.reaper.reaper import run, stop
@@ -72,11 +73,19 @@
return parser
-def main():
+def main(argv=None):
+ """
+ The main reaper method called by the command.
+ :param argv: Command-line arguments. Default to sys.argv if not set.
+ """
signal.signal(signal.SIGTERM, stop)
+
+ if argv is None:
+ argv = sys.argv[1:]
+
parser = get_parser()
- args = parser.parse_args()
+ args = parser.parse_args(argv)
try:
run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,
once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,
| {"golden_diff": "diff --git a/lib/rucio/clis/daemons/reaper/reaper.py b/lib/rucio/clis/daemons/reaper/reaper.py\n--- a/lib/rucio/clis/daemons/reaper/reaper.py\n+++ b/lib/rucio/clis/daemons/reaper/reaper.py\n@@ -1,4 +1,4 @@\n-# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.\n+# Copyright 2012-2019 CERN for the benefit of the ATLAS collaboration.\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n@@ -13,7 +13,7 @@\n # limitations under the License.\n #\n # Authors:\n-# - Vincent Garonne, <[email protected]>, 2012-2018\n+# - Vincent Garonne, <[email protected]>, 2012-2019\n # - Wen Guan, <[email protected]>, 2014\n # - Hannes Hansen, <[email protected]>, 2018\n \n@@ -23,6 +23,7 @@\n \n import argparse\n import signal\n+import sys\n \n from rucio.daemons.reaper.reaper import run, stop\n \n@@ -72,11 +73,19 @@\n return parser\n \n \n-def main():\n+def main(argv=None):\n+ \"\"\"\n+ The main reaper method called by the command.\n \n+ :param argv: Command-line arguments. Default to sys.argv if not set.\n+ \"\"\"\n signal.signal(signal.SIGTERM, stop)\n+\n+ if argv is None:\n+ argv = sys.argv[1:]\n+\n parser = get_parser()\n- args = parser.parse_args()\n+ args = parser.parse_args(argv)\n try:\n run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,\n once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,\n", "issue": "Test reaper console script\nMotivation\r\n----------\r\n\r\nThe reaper console script `rucio-reaper` is not tested in the testsuite.\r\n\r\nModification\r\n------------\r\n- Add test for the reaper console script.\r\n- Install the environnement with `python setup.py develop` in the docker env to have the generated console scripts available in the docker.\r\n- Extend the reaper argparse method and the reaper tests to validate the argparse main method and console script.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2012-2018 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne, <[email protected]>, 2012-2018\n# - Wen Guan, <[email protected]>, 2014\n# - Hannes Hansen, <[email protected]>, 2018\n\n\"\"\"\nReaper is a daemon to manage file deletion\n\"\"\"\n\nimport argparse\nimport signal\n\nfrom rucio.daemons.reaper.reaper import run, stop\n\n\ndef get_parser():\n \"\"\"\n Returns the argparse parser.\n \"\"\"\n parser = argparse.ArgumentParser(description=\"The Reaper daemon is responsible for replica deletion. 
It deletes them by checking if there are replicas that are not locked and have a tombstone to indicate that they can be deleted.\", epilog='''\nUpload a file and prepare the rules and replicas for deletion by using the judge-cleaner daemon::\n\n $ rucio upload --rse MOCK --scope mock --name file filename.txt\n $ rucio add-rule mock:file 1 MOCK2 --lifetime 1\n $ rucio-judge-cleaner --run-once\n\nCheck if the replica was created::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n | mock | file | 1.542 kB | 1268ee71 | MOCK: file://localhost:0/tmp/rucio_rse/mock/15/58/file |\n +---------+--------+------------+-----------+---------------------------------------------------------+\n\nRun the daemon::\n\n $ rucio-reaper --run-once\n\nCheck if the replica exists::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n +---------+--------+------------+-----------+---------------------------------------------------------+\n ''')\n parser.add_argument(\"--run-once\", action=\"store_true\", default=False, help='One iteration only')\n parser.add_argument(\"--total-workers\", action=\"store\", default=1, type=int, help='Total number of workers per process')\n parser.add_argument(\"--threads-per-worker\", action=\"store\", default=None, type=int, help='Total number of threads created by each worker')\n parser.add_argument(\"--chunk-size\", action=\"store\", default=10, type=int, help='Chunk size')\n parser.add_argument(\"--scheme\", action=\"store\", default=None, type=str, help='Force the reaper to use a particular protocol, e.g., mock.')\n parser.add_argument('--greedy', action='store_true', default=False, help='Greedy mode')\n parser.add_argument('--exclude-rses', action=\"store\", default=None, type=str, help='RSEs expression to exclude RSEs')\n parser.add_argument('--include-rses', action=\"store\", default=None, type=str, help='RSEs expression to include RSEs')\n parser.add_argument('--rses', nargs='+', type=str, help='List of RSEs')\n parser.add_argument('--delay-seconds', action=\"store\", default=3600, type=int, help='Delay to retry failed deletion')\n return parser\n\n\ndef main():\n\n signal.signal(signal.SIGTERM, stop)\n parser = get_parser()\n args = parser.parse_args()\n try:\n run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,\n once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,\n exclude_rses=args.exclude_rses, include_rses=args.include_rses, delay_seconds=args.delay_seconds)\n except KeyboardInterrupt:\n stop()\n", "path": "lib/rucio/clis/daemons/reaper/reaper.py"}], "after_files": [{"content": "# Copyright 2012-2019 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is 
distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Vincent Garonne, <[email protected]>, 2012-2019\n# - Wen Guan, <[email protected]>, 2014\n# - Hannes Hansen, <[email protected]>, 2018\n\n\"\"\"\nReaper is a daemon to manage file deletion\n\"\"\"\n\nimport argparse\nimport signal\nimport sys\n\nfrom rucio.daemons.reaper.reaper import run, stop\n\n\ndef get_parser():\n \"\"\"\n Returns the argparse parser.\n \"\"\"\n parser = argparse.ArgumentParser(description=\"The Reaper daemon is responsible for replica deletion. It deletes them by checking if there are replicas that are not locked and have a tombstone to indicate that they can be deleted.\", epilog='''\nUpload a file and prepare the rules and replicas for deletion by using the judge-cleaner daemon::\n\n $ rucio upload --rse MOCK --scope mock --name file filename.txt\n $ rucio add-rule mock:file 1 MOCK2 --lifetime 1\n $ rucio-judge-cleaner --run-once\n\nCheck if the replica was created::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n | mock | file | 1.542 kB | 1268ee71 | MOCK: file://localhost:0/tmp/rucio_rse/mock/15/58/file |\n +---------+--------+------------+-----------+---------------------------------------------------------+\n\nRun the daemon::\n\n $ rucio-reaper --run-once\n\nCheck if the replica exists::\n\n $ rucio list-file-replica mock:file\n +---------+--------+------------+-----------+---------------------------------------------------------+\n | SCOPE | NAME | FILESIZE | ADLER32 | RSE: REPLICA |\n |---------+--------+------------+-----------+---------------------------------------------------------|\n +---------+--------+------------+-----------+---------------------------------------------------------+\n ''')\n parser.add_argument(\"--run-once\", action=\"store_true\", default=False, help='One iteration only')\n parser.add_argument(\"--total-workers\", action=\"store\", default=1, type=int, help='Total number of workers per process')\n parser.add_argument(\"--threads-per-worker\", action=\"store\", default=None, type=int, help='Total number of threads created by each worker')\n parser.add_argument(\"--chunk-size\", action=\"store\", default=10, type=int, help='Chunk size')\n parser.add_argument(\"--scheme\", action=\"store\", default=None, type=str, help='Force the reaper to use a particular protocol, e.g., mock.')\n parser.add_argument('--greedy', action='store_true', default=False, help='Greedy mode')\n parser.add_argument('--exclude-rses', action=\"store\", default=None, type=str, help='RSEs expression to exclude RSEs')\n parser.add_argument('--include-rses', action=\"store\", default=None, type=str, help='RSEs expression to include RSEs')\n parser.add_argument('--rses', nargs='+', type=str, help='List of RSEs')\n parser.add_argument('--delay-seconds', action=\"store\", default=3600, type=int, help='Delay to retry failed deletion')\n return parser\n\n\ndef main(argv=None):\n \"\"\"\n The main reaper method called by the command.\n\n :param argv: Command-line arguments. 
Default to sys.argv if not set.\n \"\"\"\n signal.signal(signal.SIGTERM, stop)\n\n if argv is None:\n argv = sys.argv[1:]\n\n parser = get_parser()\n args = parser.parse_args(argv)\n try:\n run(total_workers=args.total_workers, chunk_size=args.chunk_size, greedy=args.greedy,\n once=args.run_once, scheme=args.scheme, rses=args.rses, threads_per_worker=args.threads_per_worker,\n exclude_rses=args.exclude_rses, include_rses=args.include_rses, delay_seconds=args.delay_seconds)\n except KeyboardInterrupt:\n stop()\n", "path": "lib/rucio/clis/daemons/reaper/reaper.py"}]} | 1,546 | 476 |
gh_patches_debug_4942 | rasdani/github-patches | git_diff | saleor__saleor-11327 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Lack of validation in cleaning manifest data
### What are you trying to achieve?
The `KeyError` is raised when `manifest_data` doesn't have `tokenTargetUrl`.
### Steps to reproduce the problem
1. Run `AppFetchManifest` with a URL that contains JSON data without `tokenTargetUrl`.
2. You will get a `KeyError`.
### What did you expect to happen?
The `ValidationError` should be raised.
### Logs
https://sentry.io/organizations/saleor/issues/3749157627/?project=6417854
### Environment
Saleor version: 3.9 (to check if it also affects other versions)
--- END ISSUE ---
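To illustrate the failure mode, here is a small self-contained sketch. The manifest dict is hypothetical and the `ValidationError` class is a stand-in for Django's, but the `except` clause mirrors the one in `clean_manifest_data()`: it only handles `ValidationError` and `AttributeError`, so a missing `tokenTargetUrl` key escapes as an unhandled `KeyError`.
```python
class ValidationError(Exception):
    """Stand-in for django.core.exceptions.ValidationError."""


def check_token_target_url(manifest_data):
    try:
        # KeyError if the field is missing entirely; only the format is guarded.
        url = manifest_data["tokenTargetUrl"]
        if not url.startswith("http"):  # crude stand-in for _clean_app_url()
            raise ValidationError("Incorrect format.")
    except (ValidationError, AttributeError):
        return "handled: invalid tokenTargetUrl"
    return "ok"


try:
    check_token_target_url({"id": "demo.app", "version": "1.0", "name": "Demo"})
except KeyError as exc:
    print("escapes as KeyError:", exc)  # this is what surfaced in Sentry
```
Guarding the lookup turns the missing field back into the expected `ValidationError` path, which is what the patch below does with an `if "tokenTargetUrl" in manifest_data` check.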
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/app/manifest_validations.py`
Content:
```
1 import logging
2 from collections import defaultdict
3 from typing import Dict, Iterable, List
4
5 from django.contrib.auth.models import Permission
6 from django.core.exceptions import ValidationError
7 from django.db.models import Value
8 from django.db.models.functions import Concat
9
10 from ..core.permissions import (
11 get_permissions,
12 get_permissions_enum_list,
13 split_permission_codename,
14 )
15 from ..graphql.core.utils import str_to_enum
16 from ..graphql.webhook.subscription_payload import validate_subscription_query
17 from ..webhook.event_types import WebhookEventAsyncType, WebhookEventSyncType
18 from .error_codes import AppErrorCode
19 from .types import AppExtensionMount, AppExtensionTarget
20 from .validators import AppURLValidator
21
22 logger = logging.getLogger(__name__)
23
24 T_ERRORS = Dict[str, List[ValidationError]]
25
26
27 def _clean_app_url(url):
28 url_validator = AppURLValidator()
29 url_validator(url)
30
31
32 def _clean_extension_url_with_only_path(
33 manifest_data: dict, target: str, extension_url: str
34 ):
35 if target == AppExtensionTarget.APP_PAGE:
36 return
37 elif manifest_data["appUrl"]:
38 _clean_app_url(manifest_data["appUrl"])
39 else:
40 msg = (
41 "Incorrect relation between extension's target and URL fields. "
42 "APP_PAGE can be used only with relative URL path."
43 )
44 logger.warning(msg, extra={"target": target, "url": extension_url})
45 raise ValidationError(msg)
46
47
48 def clean_extension_url(extension: dict, manifest_data: dict):
49 """Clean assigned extension url.
50
51 Make sure that format of url is correct based on the rest of manifest fields.
52 - url can start with '/' when one of these conditions is true:
53 a) extension.target == APP_PAGE
54 b) appUrl is provided
55 - url cannot start with protocol when target == "APP_PAGE"
56 """
57 extension_url = extension["url"]
58 target = extension.get("target") or AppExtensionTarget.POPUP
59 if extension_url.startswith("/"):
60 _clean_extension_url_with_only_path(manifest_data, target, extension_url)
61 elif target == AppExtensionTarget.APP_PAGE:
62 msg = "Url cannot start with protocol when target == APP_PAGE"
63 logger.warning(msg)
64 raise ValidationError(msg)
65 else:
66 _clean_app_url(extension_url)
67
68
69 def clean_manifest_url(manifest_url):
70 try:
71 _clean_app_url(manifest_url)
72 except (ValidationError, AttributeError):
73 msg = "Enter a valid URL."
74 code = AppErrorCode.INVALID_URL_FORMAT.value
75 raise ValidationError({"manifest_url": ValidationError(msg, code=code)})
76
77
78 def clean_permissions(
79 required_permissions: List[str], saleor_permissions: Iterable[Permission]
80 ) -> List[Permission]:
81 missing_permissions = []
82 all_permissions = {perm[0]: perm[1] for perm in get_permissions_enum_list()}
83 for perm in required_permissions:
84 if not all_permissions.get(perm):
85 missing_permissions.append(perm)
86 if missing_permissions:
87 error_msg = "Given permissions don't exist."
88 code = AppErrorCode.INVALID_PERMISSION.value
89 params = {"permissions": missing_permissions}
90 raise ValidationError(error_msg, code=code, params=params)
91
92 permissions = [all_permissions[perm] for perm in required_permissions]
93 permissions = split_permission_codename(permissions)
94 return [p for p in saleor_permissions if p.codename in permissions]
95
96
97 def clean_manifest_data(manifest_data):
98 errors: T_ERRORS = defaultdict(list)
99
100 validate_required_fields(manifest_data, errors)
101 try:
102 _clean_app_url(manifest_data["tokenTargetUrl"])
103 except (ValidationError, AttributeError):
104 errors["tokenTargetUrl"].append(
105 ValidationError(
106 "Incorrect format.",
107 code=AppErrorCode.INVALID_URL_FORMAT.value,
108 )
109 )
110
111 saleor_permissions = get_permissions().annotate(
112 formated_codename=Concat("content_type__app_label", Value("."), "codename")
113 )
114 try:
115 app_permissions = clean_permissions(
116 manifest_data.get("permissions", []), saleor_permissions
117 )
118 except ValidationError as e:
119 errors["permissions"].append(e)
120 app_permissions = []
121
122 manifest_data["permissions"] = app_permissions
123
124 if not errors:
125 clean_extensions(manifest_data, app_permissions, errors)
126 clean_webhooks(manifest_data, errors)
127
128 if errors:
129 raise ValidationError(errors)
130
131
132 def _clean_extension_permissions(extension, app_permissions, errors):
133 permissions_data = extension.get("permissions", [])
134 try:
135 extension_permissions = clean_permissions(permissions_data, app_permissions)
136 except ValidationError as e:
137 e.params["label"] = extension.get("label")
138 errors["extensions"].append(e)
139 return
140
141 if len(extension_permissions) != len(permissions_data):
142 errors["extensions"].append(
143 ValidationError(
144 "Extension permission must be listed in App's permissions.",
145 code=AppErrorCode.OUT_OF_SCOPE_PERMISSION.value,
146 )
147 )
148
149 extension["permissions"] = extension_permissions
150
151
152 def clean_extension_enum_field(enum, field_name, extension, errors):
153 if extension[field_name] in [code.upper() for code, _ in enum.CHOICES]:
154 extension[field_name] = getattr(enum, extension[field_name])
155 else:
156 errors["extensions"].append(
157 ValidationError(
158 f"Incorrect value for field: {field_name}",
159 code=AppErrorCode.INVALID.value,
160 )
161 )
162
163
164 def clean_extensions(manifest_data, app_permissions, errors):
165 extensions = manifest_data.get("extensions", [])
166 for extension in extensions:
167 if "target" not in extension:
168 extension["target"] = AppExtensionTarget.POPUP
169 else:
170 clean_extension_enum_field(AppExtensionTarget, "target", extension, errors)
171 clean_extension_enum_field(AppExtensionMount, "mount", extension, errors)
172
173 try:
174 clean_extension_url(extension, manifest_data)
175 except (ValidationError, AttributeError):
176 errors["extensions"].append(
177 ValidationError(
178 "Incorrect value for field: url.",
179 code=AppErrorCode.INVALID_URL_FORMAT.value,
180 )
181 )
182 _clean_extension_permissions(extension, app_permissions, errors)
183
184
185 def clean_webhooks(manifest_data, errors):
186 webhooks = manifest_data.get("webhooks", [])
187
188 async_types = {
189 str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventAsyncType.CHOICES
190 }
191 sync_types = {
192 str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventSyncType.CHOICES
193 }
194
195 target_url_validator = AppURLValidator(
196 schemes=["http", "https", "awssqs", "gcpubsub"]
197 )
198
199 for webhook in webhooks:
200 if not validate_subscription_query(webhook["query"]):
201 errors["webhooks"].append(
202 ValidationError(
203 "Subscription query is not valid.",
204 code=AppErrorCode.INVALID.value,
205 )
206 )
207
208 webhook["events"] = []
209 for e_type in webhook.get("asyncEvents", []):
210 try:
211 webhook["events"].append(async_types[e_type])
212 except KeyError:
213 errors["webhooks"].append(
214 ValidationError(
215 "Invalid asynchronous event.",
216 code=AppErrorCode.INVALID.value,
217 )
218 )
219 for e_type in webhook.get("syncEvents", []):
220 try:
221 webhook["events"].append(sync_types[e_type])
222 except KeyError:
223 errors["webhooks"].append(
224 ValidationError(
225 "Invalid synchronous event.",
226 code=AppErrorCode.INVALID.value,
227 )
228 )
229
230 try:
231 target_url_validator(webhook["targetUrl"])
232 except ValidationError:
233 errors["webhooks"].append(
234 ValidationError(
235 "Invalid target url.",
236 code=AppErrorCode.INVALID_URL_FORMAT.value,
237 )
238 )
239
240
241 def validate_required_fields(manifest_data, errors):
242 manifest_required_fields = {"id", "version", "name", "tokenTargetUrl"}
243 extension_required_fields = {"label", "url", "mount"}
244 webhook_required_fields = {"name", "targetUrl", "query"}
245
246 if manifest_missing_fields := manifest_required_fields.difference(manifest_data):
247 for missing_field in manifest_missing_fields:
248 errors[missing_field].append(
249 ValidationError("Field required.", code=AppErrorCode.REQUIRED.value)
250 )
251
252 app_extensions_data = manifest_data.get("extensions", [])
253 for extension in app_extensions_data:
254 extension_fields = set(extension.keys())
255 if missing_fields := extension_required_fields.difference(extension_fields):
256 errors["extensions"].append(
257 ValidationError(
258 "Missing required fields for app extension: "
259 f'{", ".join(missing_fields)}.',
260 code=AppErrorCode.REQUIRED.value,
261 )
262 )
263
264 webhooks = manifest_data.get("webhooks", [])
265 for webhook in webhooks:
266 webhook_fields = set(webhook.keys())
267 if missing_fields := webhook_required_fields.difference(webhook_fields):
268 errors["webhooks"].append(
269 ValidationError(
270 f"Missing required fields for webhook: "
271 f'{", ".join(missing_fields)}.',
272 code=AppErrorCode.REQUIRED.value,
273 )
274 )
275
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/app/manifest_validations.py b/saleor/app/manifest_validations.py
--- a/saleor/app/manifest_validations.py
+++ b/saleor/app/manifest_validations.py
@@ -99,7 +99,8 @@
validate_required_fields(manifest_data, errors)
try:
- _clean_app_url(manifest_data["tokenTargetUrl"])
+ if "tokenTargetUrl" in manifest_data:
+ _clean_app_url(manifest_data["tokenTargetUrl"])
except (ValidationError, AttributeError):
errors["tokenTargetUrl"].append(
ValidationError(
| {"golden_diff": "diff --git a/saleor/app/manifest_validations.py b/saleor/app/manifest_validations.py\n--- a/saleor/app/manifest_validations.py\n+++ b/saleor/app/manifest_validations.py\n@@ -99,7 +99,8 @@\n \n validate_required_fields(manifest_data, errors)\n try:\n- _clean_app_url(manifest_data[\"tokenTargetUrl\"])\n+ if \"tokenTargetUrl\" in manifest_data:\n+ _clean_app_url(manifest_data[\"tokenTargetUrl\"])\n except (ValidationError, AttributeError):\n errors[\"tokenTargetUrl\"].append(\n ValidationError(\n", "issue": "Bug: Lack of validation in cleaning manifest data\n### What are you trying to achieve?\n\nThe `KeyError` is raised when `manifest_data` doesn't have `tokenTargetUrl`.\n\n### Steps to reproduce the problem\n\n1. Run `AppFetchManifest` with URL that contains JSON data without `tokenTargetUrl`.\r\n2. You will get the `KeyError`.\n\n### What did you expect to happen?\n\nThe `ValidationError` should be raised.\n\n### Logs\n\nhttps://sentry.io/organizations/saleor/issues/3749157627/?project=6417854\n\n### Environment\n\nSaleor version: 3.9 (to check if it also affect other versions)\n", "before_files": [{"content": "import logging\nfrom collections import defaultdict\nfrom typing import Dict, Iterable, List\n\nfrom django.contrib.auth.models import Permission\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Value\nfrom django.db.models.functions import Concat\n\nfrom ..core.permissions import (\n get_permissions,\n get_permissions_enum_list,\n split_permission_codename,\n)\nfrom ..graphql.core.utils import str_to_enum\nfrom ..graphql.webhook.subscription_payload import validate_subscription_query\nfrom ..webhook.event_types import WebhookEventAsyncType, WebhookEventSyncType\nfrom .error_codes import AppErrorCode\nfrom .types import AppExtensionMount, AppExtensionTarget\nfrom .validators import AppURLValidator\n\nlogger = logging.getLogger(__name__)\n\nT_ERRORS = Dict[str, List[ValidationError]]\n\n\ndef _clean_app_url(url):\n url_validator = AppURLValidator()\n url_validator(url)\n\n\ndef _clean_extension_url_with_only_path(\n manifest_data: dict, target: str, extension_url: str\n):\n if target == AppExtensionTarget.APP_PAGE:\n return\n elif manifest_data[\"appUrl\"]:\n _clean_app_url(manifest_data[\"appUrl\"])\n else:\n msg = (\n \"Incorrect relation between extension's target and URL fields. 
\"\n \"APP_PAGE can be used only with relative URL path.\"\n )\n logger.warning(msg, extra={\"target\": target, \"url\": extension_url})\n raise ValidationError(msg)\n\n\ndef clean_extension_url(extension: dict, manifest_data: dict):\n \"\"\"Clean assigned extension url.\n\n Make sure that format of url is correct based on the rest of manifest fields.\n - url can start with '/' when one of these conditions is true:\n a) extension.target == APP_PAGE\n b) appUrl is provided\n - url cannot start with protocol when target == \"APP_PAGE\"\n \"\"\"\n extension_url = extension[\"url\"]\n target = extension.get(\"target\") or AppExtensionTarget.POPUP\n if extension_url.startswith(\"/\"):\n _clean_extension_url_with_only_path(manifest_data, target, extension_url)\n elif target == AppExtensionTarget.APP_PAGE:\n msg = \"Url cannot start with protocol when target == APP_PAGE\"\n logger.warning(msg)\n raise ValidationError(msg)\n else:\n _clean_app_url(extension_url)\n\n\ndef clean_manifest_url(manifest_url):\n try:\n _clean_app_url(manifest_url)\n except (ValidationError, AttributeError):\n msg = \"Enter a valid URL.\"\n code = AppErrorCode.INVALID_URL_FORMAT.value\n raise ValidationError({\"manifest_url\": ValidationError(msg, code=code)})\n\n\ndef clean_permissions(\n required_permissions: List[str], saleor_permissions: Iterable[Permission]\n) -> List[Permission]:\n missing_permissions = []\n all_permissions = {perm[0]: perm[1] for perm in get_permissions_enum_list()}\n for perm in required_permissions:\n if not all_permissions.get(perm):\n missing_permissions.append(perm)\n if missing_permissions:\n error_msg = \"Given permissions don't exist.\"\n code = AppErrorCode.INVALID_PERMISSION.value\n params = {\"permissions\": missing_permissions}\n raise ValidationError(error_msg, code=code, params=params)\n\n permissions = [all_permissions[perm] for perm in required_permissions]\n permissions = split_permission_codename(permissions)\n return [p for p in saleor_permissions if p.codename in permissions]\n\n\ndef clean_manifest_data(manifest_data):\n errors: T_ERRORS = defaultdict(list)\n\n validate_required_fields(manifest_data, errors)\n try:\n _clean_app_url(manifest_data[\"tokenTargetUrl\"])\n except (ValidationError, AttributeError):\n errors[\"tokenTargetUrl\"].append(\n ValidationError(\n \"Incorrect format.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n\n saleor_permissions = get_permissions().annotate(\n formated_codename=Concat(\"content_type__app_label\", Value(\".\"), \"codename\")\n )\n try:\n app_permissions = clean_permissions(\n manifest_data.get(\"permissions\", []), saleor_permissions\n )\n except ValidationError as e:\n errors[\"permissions\"].append(e)\n app_permissions = []\n\n manifest_data[\"permissions\"] = app_permissions\n\n if not errors:\n clean_extensions(manifest_data, app_permissions, errors)\n clean_webhooks(manifest_data, errors)\n\n if errors:\n raise ValidationError(errors)\n\n\ndef _clean_extension_permissions(extension, app_permissions, errors):\n permissions_data = extension.get(\"permissions\", [])\n try:\n extension_permissions = clean_permissions(permissions_data, app_permissions)\n except ValidationError as e:\n e.params[\"label\"] = extension.get(\"label\")\n errors[\"extensions\"].append(e)\n return\n\n if len(extension_permissions) != len(permissions_data):\n errors[\"extensions\"].append(\n ValidationError(\n \"Extension permission must be listed in App's permissions.\",\n code=AppErrorCode.OUT_OF_SCOPE_PERMISSION.value,\n )\n )\n\n 
extension[\"permissions\"] = extension_permissions\n\n\ndef clean_extension_enum_field(enum, field_name, extension, errors):\n if extension[field_name] in [code.upper() for code, _ in enum.CHOICES]:\n extension[field_name] = getattr(enum, extension[field_name])\n else:\n errors[\"extensions\"].append(\n ValidationError(\n f\"Incorrect value for field: {field_name}\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n\ndef clean_extensions(manifest_data, app_permissions, errors):\n extensions = manifest_data.get(\"extensions\", [])\n for extension in extensions:\n if \"target\" not in extension:\n extension[\"target\"] = AppExtensionTarget.POPUP\n else:\n clean_extension_enum_field(AppExtensionTarget, \"target\", extension, errors)\n clean_extension_enum_field(AppExtensionMount, \"mount\", extension, errors)\n\n try:\n clean_extension_url(extension, manifest_data)\n except (ValidationError, AttributeError):\n errors[\"extensions\"].append(\n ValidationError(\n \"Incorrect value for field: url.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n _clean_extension_permissions(extension, app_permissions, errors)\n\n\ndef clean_webhooks(manifest_data, errors):\n webhooks = manifest_data.get(\"webhooks\", [])\n\n async_types = {\n str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventAsyncType.CHOICES\n }\n sync_types = {\n str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventSyncType.CHOICES\n }\n\n target_url_validator = AppURLValidator(\n schemes=[\"http\", \"https\", \"awssqs\", \"gcpubsub\"]\n )\n\n for webhook in webhooks:\n if not validate_subscription_query(webhook[\"query\"]):\n errors[\"webhooks\"].append(\n ValidationError(\n \"Subscription query is not valid.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n webhook[\"events\"] = []\n for e_type in webhook.get(\"asyncEvents\", []):\n try:\n webhook[\"events\"].append(async_types[e_type])\n except KeyError:\n errors[\"webhooks\"].append(\n ValidationError(\n \"Invalid asynchronous event.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n for e_type in webhook.get(\"syncEvents\", []):\n try:\n webhook[\"events\"].append(sync_types[e_type])\n except KeyError:\n errors[\"webhooks\"].append(\n ValidationError(\n \"Invalid synchronous event.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n try:\n target_url_validator(webhook[\"targetUrl\"])\n except ValidationError:\n errors[\"webhooks\"].append(\n ValidationError(\n \"Invalid target url.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n\n\ndef validate_required_fields(manifest_data, errors):\n manifest_required_fields = {\"id\", \"version\", \"name\", \"tokenTargetUrl\"}\n extension_required_fields = {\"label\", \"url\", \"mount\"}\n webhook_required_fields = {\"name\", \"targetUrl\", \"query\"}\n\n if manifest_missing_fields := manifest_required_fields.difference(manifest_data):\n for missing_field in manifest_missing_fields:\n errors[missing_field].append(\n ValidationError(\"Field required.\", code=AppErrorCode.REQUIRED.value)\n )\n\n app_extensions_data = manifest_data.get(\"extensions\", [])\n for extension in app_extensions_data:\n extension_fields = set(extension.keys())\n if missing_fields := extension_required_fields.difference(extension_fields):\n errors[\"extensions\"].append(\n ValidationError(\n \"Missing required fields for app extension: \"\n f'{\", \".join(missing_fields)}.',\n code=AppErrorCode.REQUIRED.value,\n )\n )\n\n webhooks = manifest_data.get(\"webhooks\", [])\n for webhook in webhooks:\n webhook_fields = set(webhook.keys())\n if 
missing_fields := webhook_required_fields.difference(webhook_fields):\n errors[\"webhooks\"].append(\n ValidationError(\n f\"Missing required fields for webhook: \"\n f'{\", \".join(missing_fields)}.',\n code=AppErrorCode.REQUIRED.value,\n )\n )\n", "path": "saleor/app/manifest_validations.py"}], "after_files": [{"content": "import logging\nfrom collections import defaultdict\nfrom typing import Dict, Iterable, List\n\nfrom django.contrib.auth.models import Permission\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Value\nfrom django.db.models.functions import Concat\n\nfrom ..core.permissions import (\n get_permissions,\n get_permissions_enum_list,\n split_permission_codename,\n)\nfrom ..graphql.core.utils import str_to_enum\nfrom ..graphql.webhook.subscription_payload import validate_subscription_query\nfrom ..webhook.event_types import WebhookEventAsyncType, WebhookEventSyncType\nfrom .error_codes import AppErrorCode\nfrom .types import AppExtensionMount, AppExtensionTarget\nfrom .validators import AppURLValidator\n\nlogger = logging.getLogger(__name__)\n\nT_ERRORS = Dict[str, List[ValidationError]]\n\n\ndef _clean_app_url(url):\n url_validator = AppURLValidator()\n url_validator(url)\n\n\ndef _clean_extension_url_with_only_path(\n manifest_data: dict, target: str, extension_url: str\n):\n if target == AppExtensionTarget.APP_PAGE:\n return\n elif manifest_data[\"appUrl\"]:\n _clean_app_url(manifest_data[\"appUrl\"])\n else:\n msg = (\n \"Incorrect relation between extension's target and URL fields. \"\n \"APP_PAGE can be used only with relative URL path.\"\n )\n logger.warning(msg, extra={\"target\": target, \"url\": extension_url})\n raise ValidationError(msg)\n\n\ndef clean_extension_url(extension: dict, manifest_data: dict):\n \"\"\"Clean assigned extension url.\n\n Make sure that format of url is correct based on the rest of manifest fields.\n - url can start with '/' when one of these conditions is true:\n a) extension.target == APP_PAGE\n b) appUrl is provided\n - url cannot start with protocol when target == \"APP_PAGE\"\n \"\"\"\n extension_url = extension[\"url\"]\n target = extension.get(\"target\") or AppExtensionTarget.POPUP\n if extension_url.startswith(\"/\"):\n _clean_extension_url_with_only_path(manifest_data, target, extension_url)\n elif target == AppExtensionTarget.APP_PAGE:\n msg = \"Url cannot start with protocol when target == APP_PAGE\"\n logger.warning(msg)\n raise ValidationError(msg)\n else:\n _clean_app_url(extension_url)\n\n\ndef clean_manifest_url(manifest_url):\n try:\n _clean_app_url(manifest_url)\n except (ValidationError, AttributeError):\n msg = \"Enter a valid URL.\"\n code = AppErrorCode.INVALID_URL_FORMAT.value\n raise ValidationError({\"manifest_url\": ValidationError(msg, code=code)})\n\n\ndef clean_permissions(\n required_permissions: List[str], saleor_permissions: Iterable[Permission]\n) -> List[Permission]:\n missing_permissions = []\n all_permissions = {perm[0]: perm[1] for perm in get_permissions_enum_list()}\n for perm in required_permissions:\n if not all_permissions.get(perm):\n missing_permissions.append(perm)\n if missing_permissions:\n error_msg = \"Given permissions don't exist.\"\n code = AppErrorCode.INVALID_PERMISSION.value\n params = {\"permissions\": missing_permissions}\n raise ValidationError(error_msg, code=code, params=params)\n\n permissions = [all_permissions[perm] for perm in required_permissions]\n permissions = split_permission_codename(permissions)\n return [p for p in saleor_permissions 
if p.codename in permissions]\n\n\ndef clean_manifest_data(manifest_data):\n errors: T_ERRORS = defaultdict(list)\n\n validate_required_fields(manifest_data, errors)\n try:\n if \"tokenTargetUrl\" in manifest_data:\n _clean_app_url(manifest_data[\"tokenTargetUrl\"])\n except (ValidationError, AttributeError):\n errors[\"tokenTargetUrl\"].append(\n ValidationError(\n \"Incorrect format.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n\n saleor_permissions = get_permissions().annotate(\n formated_codename=Concat(\"content_type__app_label\", Value(\".\"), \"codename\")\n )\n try:\n app_permissions = clean_permissions(\n manifest_data.get(\"permissions\", []), saleor_permissions\n )\n except ValidationError as e:\n errors[\"permissions\"].append(e)\n app_permissions = []\n\n manifest_data[\"permissions\"] = app_permissions\n\n if not errors:\n clean_extensions(manifest_data, app_permissions, errors)\n clean_webhooks(manifest_data, errors)\n\n if errors:\n raise ValidationError(errors)\n\n\ndef _clean_extension_permissions(extension, app_permissions, errors):\n permissions_data = extension.get(\"permissions\", [])\n try:\n extension_permissions = clean_permissions(permissions_data, app_permissions)\n except ValidationError as e:\n e.params[\"label\"] = extension.get(\"label\")\n errors[\"extensions\"].append(e)\n return\n\n if len(extension_permissions) != len(permissions_data):\n errors[\"extensions\"].append(\n ValidationError(\n \"Extension permission must be listed in App's permissions.\",\n code=AppErrorCode.OUT_OF_SCOPE_PERMISSION.value,\n )\n )\n\n extension[\"permissions\"] = extension_permissions\n\n\ndef clean_extension_enum_field(enum, field_name, extension, errors):\n if extension[field_name] in [code.upper() for code, _ in enum.CHOICES]:\n extension[field_name] = getattr(enum, extension[field_name])\n else:\n errors[\"extensions\"].append(\n ValidationError(\n f\"Incorrect value for field: {field_name}\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n\ndef clean_extensions(manifest_data, app_permissions, errors):\n extensions = manifest_data.get(\"extensions\", [])\n for extension in extensions:\n if \"target\" not in extension:\n extension[\"target\"] = AppExtensionTarget.POPUP\n else:\n clean_extension_enum_field(AppExtensionTarget, \"target\", extension, errors)\n clean_extension_enum_field(AppExtensionMount, \"mount\", extension, errors)\n\n try:\n clean_extension_url(extension, manifest_data)\n except (ValidationError, AttributeError):\n errors[\"extensions\"].append(\n ValidationError(\n \"Incorrect value for field: url.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n _clean_extension_permissions(extension, app_permissions, errors)\n\n\ndef clean_webhooks(manifest_data, errors):\n webhooks = manifest_data.get(\"webhooks\", [])\n\n async_types = {\n str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventAsyncType.CHOICES\n }\n sync_types = {\n str_to_enum(e_type[0]): e_type[0] for e_type in WebhookEventSyncType.CHOICES\n }\n\n target_url_validator = AppURLValidator(\n schemes=[\"http\", \"https\", \"awssqs\", \"gcpubsub\"]\n )\n\n for webhook in webhooks:\n if not validate_subscription_query(webhook[\"query\"]):\n errors[\"webhooks\"].append(\n ValidationError(\n \"Subscription query is not valid.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n webhook[\"events\"] = []\n for e_type in webhook.get(\"asyncEvents\", []):\n try:\n webhook[\"events\"].append(async_types[e_type])\n except KeyError:\n errors[\"webhooks\"].append(\n ValidationError(\n 
\"Invalid asynchronous event.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n for e_type in webhook.get(\"syncEvents\", []):\n try:\n webhook[\"events\"].append(sync_types[e_type])\n except KeyError:\n errors[\"webhooks\"].append(\n ValidationError(\n \"Invalid synchronous event.\",\n code=AppErrorCode.INVALID.value,\n )\n )\n\n try:\n target_url_validator(webhook[\"targetUrl\"])\n except ValidationError:\n errors[\"webhooks\"].append(\n ValidationError(\n \"Invalid target url.\",\n code=AppErrorCode.INVALID_URL_FORMAT.value,\n )\n )\n\n\ndef validate_required_fields(manifest_data, errors):\n manifest_required_fields = {\"id\", \"version\", \"name\", \"tokenTargetUrl\"}\n extension_required_fields = {\"label\", \"url\", \"mount\"}\n webhook_required_fields = {\"name\", \"targetUrl\", \"query\"}\n\n if manifest_missing_fields := manifest_required_fields.difference(manifest_data):\n for missing_field in manifest_missing_fields:\n errors[missing_field].append(\n ValidationError(\"Field required.\", code=AppErrorCode.REQUIRED.value)\n )\n\n app_extensions_data = manifest_data.get(\"extensions\", [])\n for extension in app_extensions_data:\n extension_fields = set(extension.keys())\n if missing_fields := extension_required_fields.difference(extension_fields):\n errors[\"extensions\"].append(\n ValidationError(\n \"Missing required fields for app extension: \"\n f'{\", \".join(missing_fields)}.',\n code=AppErrorCode.REQUIRED.value,\n )\n )\n\n webhooks = manifest_data.get(\"webhooks\", [])\n for webhook in webhooks:\n webhook_fields = set(webhook.keys())\n if missing_fields := webhook_required_fields.difference(webhook_fields):\n errors[\"webhooks\"].append(\n ValidationError(\n f\"Missing required fields for webhook: \"\n f'{\", \".join(missing_fields)}.',\n code=AppErrorCode.REQUIRED.value,\n )\n )\n", "path": "saleor/app/manifest_validations.py"}]} | 3,051 | 133 |
gh_patches_debug_30904 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1591 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Cannot Download a File if it doesn't have a `file_path` or if `custom_path` is not provided to `download()`
<!--
Thanks for reporting issues of python-telegram-bot!
Use this template to notify us if you found a bug.
To make it easier for us to help you please enter detailed information below.
Please note, we only support the latest version of python-telegram-bot and
master branch. Please make sure to upgrade & recreate the issue on the latest
version prior to opening an issue.
-->
### Steps to reproduce
1. Create an operation with the above mentioned aspects
Ex:
```
def run_bot(token):
    def make_pasta(update, context):
        msg = update.message

        if msg.reply_to_message is None:
            msg.reply_text(responses.NOT_A_REPLY)
            return
        if msg.reply_to_message.document is None:
            msg.reply_text(responses.NOT_A_DOC)
            return
        telegram_file = File(msg.reply_to_message.document.file_id)
        telegram_file.download()

    updater = Updater(token, use_context=True)
    dp = updater.dispatcher

    dp.add_handler(CommandHandler('hello', hello))
    dp.add_handler(CommandHandler('make_pasta', make_pasta))

    print('Log: Seu bot iniciou! (:')
    updater.start_polling()
    updater.idle()
```
(I know this is not exactly a MWE, sorry)
### Expected behaviour
According to the documentation, it should download the file directly to my current working directory `Download this file. By default, the file is saved in the current working directory with its original filename as reported by Telegram.`
### Actual behaviour
In `telegram/files/file.py`, in the `download` function, we get a `TypeError` since `self.file_path` is `None` (NoneType), not a str or os.PathLike object.
### Configuration
Ubuntu 18.04
python-telegram-bot 12.2.0
certifi 2019.09.11
future 0.18.1
Python 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
--- END ISSUE ---
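The error can be reduced to a single call. When `file_path` is `None` and neither `custom_path` nor `out` is given, `download()` reaches `basename(self.file_path)`; the snippet below is only an isolated illustration of that failing call, not python-telegram-bot code.
```python
from os.path import basename

try:
    basename(None)  # what download() effectively does when file_path is None
except TypeError as exc:
    print(exc)  # e.g. "expected str, bytes or os.PathLike object, not NoneType"
```
The patch below guards this case by falling back to `os.path.join(os.getcwd(), self.file_id)` when no `file_path` is available, so the file is saved under its file ID instead of crashing.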
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/files/file.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2018
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains an object that represents a Telegram File."""
20 from base64 import b64decode
21 from os.path import basename
22
23 from future.backports.urllib import parse as urllib_parse
24
25 from telegram import TelegramObject
26 from telegram.passport.credentials import decrypt
27
28
29 class File(TelegramObject):
30 """
31 This object represents a file ready to be downloaded. The file can be downloaded with
32 :attr:`download`. It is guaranteed that the link will be valid for at least 1 hour. When the
33 link expires, a new one can be requested by calling getFile.
34
35 Note:
36 Maximum file size to download is 20 MB
37
38 Attributes:
39 file_id (:obj:`str`): Unique identifier for this file.
40 file_size (:obj:`str`): Optional. File size.
41 file_path (:obj:`str`): Optional. File path. Use :attr:`download` to get the file.
42
43 Args:
44 file_id (:obj:`str`): Unique identifier for this file.
45 file_size (:obj:`int`, optional): Optional. File size, if known.
46 file_path (:obj:`str`, optional): File path. Use :attr:`download` to get the file.
47 bot (:obj:`telegram.Bot`, optional): Bot to use with shortcut method.
48 **kwargs (:obj:`dict`): Arbitrary keyword arguments.
49
50 Note:
51 If you obtain an instance of this class from :attr:`telegram.PassportFile.get_file`,
52 then it will automatically be decrypted as it downloads when you call :attr:`download()`.
53
54 """
55
56 def __init__(self, file_id, bot=None, file_size=None, file_path=None, **kwargs):
57 # Required
58 self.file_id = str(file_id)
59
60 # Optionals
61 self.file_size = file_size
62 self.file_path = file_path
63
64 self.bot = bot
65 self._credentials = None
66
67 self._id_attrs = (self.file_id,)
68
69 @classmethod
70 def de_json(cls, data, bot):
71 if not data:
72 return None
73
74 return cls(bot=bot, **data)
75
76 def download(self, custom_path=None, out=None, timeout=None):
77 """
78 Download this file. By default, the file is saved in the current working directory with its
79 original filename as reported by Telegram. If a :attr:`custom_path` is supplied, it will be
80 saved to that path instead. If :attr:`out` is defined, the file contents will be saved to
81 that object using the ``out.write`` method.
82
83 Note:
84 :attr:`custom_path` and :attr:`out` are mutually exclusive.
85
86 Args:
87 custom_path (:obj:`str`, optional): Custom path.
88 out (:obj:`io.BufferedWriter`, optional): A file-like object. Must be opened for
89 writing in binary mode, if applicable.
90 timeout (:obj:`int` | :obj:`float`, optional): If this value is specified, use it as
91 the read timeout from the server (instead of the one specified during creation of
92 the connection pool).
93
94 Returns:
95 :obj:`str` | :obj:`io.BufferedWriter`: The same object as :attr:`out` if specified.
96 Otherwise, returns the filename downloaded to.
97
98 Raises:
99 ValueError: If both :attr:`custom_path` and :attr:`out` are passed.
100
101 """
102 if custom_path is not None and out is not None:
103 raise ValueError('custom_path and out are mutually exclusive')
104
105 # Convert any UTF-8 char into a url encoded ASCII string.
106 url = self._get_encoded_url()
107
108 if out:
109 buf = self.bot.request.retrieve(url)
110 if self._credentials:
111 buf = decrypt(b64decode(self._credentials.secret),
112 b64decode(self._credentials.hash),
113 buf)
114 out.write(buf)
115 return out
116 else:
117 if custom_path:
118 filename = custom_path
119 else:
120 filename = basename(self.file_path)
121
122 buf = self.bot.request.retrieve(url, timeout=timeout)
123 if self._credentials:
124 buf = decrypt(b64decode(self._credentials.secret),
125 b64decode(self._credentials.hash),
126 buf)
127 with open(filename, 'wb') as fobj:
128 fobj.write(buf)
129 return filename
130
131 def _get_encoded_url(self):
132 """Convert any UTF-8 char in :obj:`File.file_path` into a url encoded ASCII string."""
133 sres = urllib_parse.urlsplit(self.file_path)
134 return urllib_parse.urlunsplit(urllib_parse.SplitResult(
135 sres.scheme, sres.netloc, urllib_parse.quote(sres.path), sres.query, sres.fragment))
136
137 def download_as_bytearray(self, buf=None):
138 """Download this file and return it as a bytearray.
139
140 Args:
141 buf (:obj:`bytearray`, optional): Extend the given bytearray with the downloaded data.
142
143 Returns:
144 :obj:`bytearray`: The same object as :attr:`buf` if it was specified. Otherwise a newly
145 allocated :obj:`bytearray`.
146
147 """
148 if buf is None:
149 buf = bytearray()
150
151 buf.extend(self.bot.request.retrieve(self._get_encoded_url()))
152 return buf
153
154 def set_credentials(self, credentials):
155 self._credentials = credentials
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/telegram/files/file.py b/telegram/files/file.py
--- a/telegram/files/file.py
+++ b/telegram/files/file.py
@@ -19,6 +19,7 @@
"""This module contains an object that represents a Telegram File."""
from base64 import b64decode
from os.path import basename
+import os
from future.backports.urllib import parse as urllib_parse
@@ -76,9 +77,10 @@
def download(self, custom_path=None, out=None, timeout=None):
"""
Download this file. By default, the file is saved in the current working directory with its
- original filename as reported by Telegram. If a :attr:`custom_path` is supplied, it will be
- saved to that path instead. If :attr:`out` is defined, the file contents will be saved to
- that object using the ``out.write`` method.
+ original filename as reported by Telegram. If the file has no filename, it the file ID will
+ be used as filename. If a :attr:`custom_path` is supplied, it will be saved to that path
+ instead. If :attr:`out` is defined, the file contents will be saved to that object using
+ the ``out.write`` method.
Note:
:attr:`custom_path` and :attr:`out` are mutually exclusive.
@@ -116,8 +118,10 @@
else:
if custom_path:
filename = custom_path
- else:
+ elif self.file_path:
filename = basename(self.file_path)
+ else:
+ filename = os.path.join(os.getcwd(), self.file_id)
buf = self.bot.request.retrieve(url, timeout=timeout)
if self._credentials:
| {"golden_diff": "diff --git a/telegram/files/file.py b/telegram/files/file.py\n--- a/telegram/files/file.py\n+++ b/telegram/files/file.py\n@@ -19,6 +19,7 @@\n \"\"\"This module contains an object that represents a Telegram File.\"\"\"\n from base64 import b64decode\n from os.path import basename\n+import os\n \n from future.backports.urllib import parse as urllib_parse\n \n@@ -76,9 +77,10 @@\n def download(self, custom_path=None, out=None, timeout=None):\n \"\"\"\n Download this file. By default, the file is saved in the current working directory with its\n- original filename as reported by Telegram. If a :attr:`custom_path` is supplied, it will be\n- saved to that path instead. If :attr:`out` is defined, the file contents will be saved to\n- that object using the ``out.write`` method.\n+ original filename as reported by Telegram. If the file has no filename, it the file ID will\n+ be used as filename. If a :attr:`custom_path` is supplied, it will be saved to that path\n+ instead. If :attr:`out` is defined, the file contents will be saved to that object using\n+ the ``out.write`` method.\n \n Note:\n :attr:`custom_path` and :attr:`out` are mutually exclusive.\n@@ -116,8 +118,10 @@\n else:\n if custom_path:\n filename = custom_path\n- else:\n+ elif self.file_path:\n filename = basename(self.file_path)\n+ else:\n+ filename = os.path.join(os.getcwd(), self.file_id)\n \n buf = self.bot.request.retrieve(url, timeout=timeout)\n if self._credentials:\n", "issue": "[BUG] Cannot Download a File if it doesnt have a `file_path` or if `custom_path` is not provided to `download()`\n<!--\r\nThanks for reporting issues of python-telegram-bot!\r\n\r\nUse this template to notify us if you found a bug.\r\n\r\nTo make it easier for us to help you please enter detailed information below.\r\n\r\nPlease note, we only support the latest version of python-telegram-bot and\r\nmaster branch. Please make sure to upgrade & recreate the issue on the latest\r\nversion prior to opening an issue.\r\n-->\r\n### Steps to reproduce\r\n1. Create an operation with the above mentioned aspects\r\n\r\nEx:\r\n```\r\ndef run_bot(token):\r\n def make_pasta(update, context):\r\n msg = update.message\r\n\r\n if msg.reply_to_message is None:\r\n msg.reply_text(responses.NOT_A_REPLY)\r\n return\r\n if msg.reply_to_message.document is None:\r\n msg.reply_text(responses.NOT_A_DOC)\r\n return\r\n telegram_file = File(msg.reply_to_message.document.file_id)\r\n telegram_file.download() \r\n\r\n updater = Updater(token, use_context=True)\r\n dp = updater.dispatcher\r\n\r\n dp.add_handler(CommandHandler('hello', hello))\r\n dp.add_handler(CommandHandler('make_pasta', make_pasta))\r\n\r\n print('Log: Seu bot iniciou! (:')\r\n updater.start_polling()\r\n updater.idle() \r\n```\r\n(I know this is not exactly a MWE, sorry)\r\n\r\n### Expected behaviour\r\nAccording to the documentation, it should download the file directly to my current working directory `Download this file. 
By default, the file is saved in the current working directory with its original filename as reported by Telegram.`\r\n\r\n### Actual behaviour\r\non `telegram/files/file.py`, on the download function, we get a type error since `self.file_path` is NoneType, not str or os.PathLike object\r\n\r\n### Configuration\r\nUbuntu 18.04\r\n\r\npython-telegram-bot 12.2.0\r\ncertifi 2019.09.11\r\nfuture 0.18.1\r\nPython 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram File.\"\"\"\nfrom base64 import b64decode\nfrom os.path import basename\n\nfrom future.backports.urllib import parse as urllib_parse\n\nfrom telegram import TelegramObject\nfrom telegram.passport.credentials import decrypt\n\n\nclass File(TelegramObject):\n \"\"\"\n This object represents a file ready to be downloaded. The file can be downloaded with\n :attr:`download`. It is guaranteed that the link will be valid for at least 1 hour. When the\n link expires, a new one can be requested by calling getFile.\n\n Note:\n Maximum file size to download is 20 MB\n\n Attributes:\n file_id (:obj:`str`): Unique identifier for this file.\n file_size (:obj:`str`): Optional. File size.\n file_path (:obj:`str`): Optional. File path. Use :attr:`download` to get the file.\n\n Args:\n file_id (:obj:`str`): Unique identifier for this file.\n file_size (:obj:`int`, optional): Optional. File size, if known.\n file_path (:obj:`str`, optional): File path. Use :attr:`download` to get the file.\n bot (:obj:`telegram.Bot`, optional): Bot to use with shortcut method.\n **kwargs (:obj:`dict`): Arbitrary keyword arguments.\n\n Note:\n If you obtain an instance of this class from :attr:`telegram.PassportFile.get_file`,\n then it will automatically be decrypted as it downloads when you call :attr:`download()`.\n\n \"\"\"\n\n def __init__(self, file_id, bot=None, file_size=None, file_path=None, **kwargs):\n # Required\n self.file_id = str(file_id)\n\n # Optionals\n self.file_size = file_size\n self.file_path = file_path\n\n self.bot = bot\n self._credentials = None\n\n self._id_attrs = (self.file_id,)\n\n @classmethod\n def de_json(cls, data, bot):\n if not data:\n return None\n\n return cls(bot=bot, **data)\n\n def download(self, custom_path=None, out=None, timeout=None):\n \"\"\"\n Download this file. By default, the file is saved in the current working directory with its\n original filename as reported by Telegram. If a :attr:`custom_path` is supplied, it will be\n saved to that path instead. 
If :attr:`out` is defined, the file contents will be saved to\n that object using the ``out.write`` method.\n\n Note:\n :attr:`custom_path` and :attr:`out` are mutually exclusive.\n\n Args:\n custom_path (:obj:`str`, optional): Custom path.\n out (:obj:`io.BufferedWriter`, optional): A file-like object. Must be opened for\n writing in binary mode, if applicable.\n timeout (:obj:`int` | :obj:`float`, optional): If this value is specified, use it as\n the read timeout from the server (instead of the one specified during creation of\n the connection pool).\n\n Returns:\n :obj:`str` | :obj:`io.BufferedWriter`: The same object as :attr:`out` if specified.\n Otherwise, returns the filename downloaded to.\n\n Raises:\n ValueError: If both :attr:`custom_path` and :attr:`out` are passed.\n\n \"\"\"\n if custom_path is not None and out is not None:\n raise ValueError('custom_path and out are mutually exclusive')\n\n # Convert any UTF-8 char into a url encoded ASCII string.\n url = self._get_encoded_url()\n\n if out:\n buf = self.bot.request.retrieve(url)\n if self._credentials:\n buf = decrypt(b64decode(self._credentials.secret),\n b64decode(self._credentials.hash),\n buf)\n out.write(buf)\n return out\n else:\n if custom_path:\n filename = custom_path\n else:\n filename = basename(self.file_path)\n\n buf = self.bot.request.retrieve(url, timeout=timeout)\n if self._credentials:\n buf = decrypt(b64decode(self._credentials.secret),\n b64decode(self._credentials.hash),\n buf)\n with open(filename, 'wb') as fobj:\n fobj.write(buf)\n return filename\n\n def _get_encoded_url(self):\n \"\"\"Convert any UTF-8 char in :obj:`File.file_path` into a url encoded ASCII string.\"\"\"\n sres = urllib_parse.urlsplit(self.file_path)\n return urllib_parse.urlunsplit(urllib_parse.SplitResult(\n sres.scheme, sres.netloc, urllib_parse.quote(sres.path), sres.query, sres.fragment))\n\n def download_as_bytearray(self, buf=None):\n \"\"\"Download this file and return it as a bytearray.\n\n Args:\n buf (:obj:`bytearray`, optional): Extend the given bytearray with the downloaded data.\n\n Returns:\n :obj:`bytearray`: The same object as :attr:`buf` if it was specified. Otherwise a newly\n allocated :obj:`bytearray`.\n\n \"\"\"\n if buf is None:\n buf = bytearray()\n\n buf.extend(self.bot.request.retrieve(self._get_encoded_url()))\n return buf\n\n def set_credentials(self, credentials):\n self._credentials = credentials\n", "path": "telegram/files/file.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. 
If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram File.\"\"\"\nfrom base64 import b64decode\nfrom os.path import basename\nimport os\n\nfrom future.backports.urllib import parse as urllib_parse\n\nfrom telegram import TelegramObject\nfrom telegram.passport.credentials import decrypt\n\n\nclass File(TelegramObject):\n \"\"\"\n This object represents a file ready to be downloaded. The file can be downloaded with\n :attr:`download`. It is guaranteed that the link will be valid for at least 1 hour. When the\n link expires, a new one can be requested by calling getFile.\n\n Note:\n Maximum file size to download is 20 MB\n\n Attributes:\n file_id (:obj:`str`): Unique identifier for this file.\n file_size (:obj:`str`): Optional. File size.\n file_path (:obj:`str`): Optional. File path. Use :attr:`download` to get the file.\n\n Args:\n file_id (:obj:`str`): Unique identifier for this file.\n file_size (:obj:`int`, optional): Optional. File size, if known.\n file_path (:obj:`str`, optional): File path. Use :attr:`download` to get the file.\n bot (:obj:`telegram.Bot`, optional): Bot to use with shortcut method.\n **kwargs (:obj:`dict`): Arbitrary keyword arguments.\n\n Note:\n If you obtain an instance of this class from :attr:`telegram.PassportFile.get_file`,\n then it will automatically be decrypted as it downloads when you call :attr:`download()`.\n\n \"\"\"\n\n def __init__(self, file_id, bot=None, file_size=None, file_path=None, **kwargs):\n # Required\n self.file_id = str(file_id)\n\n # Optionals\n self.file_size = file_size\n self.file_path = file_path\n\n self.bot = bot\n self._credentials = None\n\n self._id_attrs = (self.file_id,)\n\n @classmethod\n def de_json(cls, data, bot):\n if not data:\n return None\n\n return cls(bot=bot, **data)\n\n def download(self, custom_path=None, out=None, timeout=None):\n \"\"\"\n Download this file. By default, the file is saved in the current working directory with its\n original filename as reported by Telegram. If the file has no filename, it the file ID will\n be used as filename. If a :attr:`custom_path` is supplied, it will be saved to that path\n instead. If :attr:`out` is defined, the file contents will be saved to that object using\n the ``out.write`` method.\n\n Note:\n :attr:`custom_path` and :attr:`out` are mutually exclusive.\n\n Args:\n custom_path (:obj:`str`, optional): Custom path.\n out (:obj:`io.BufferedWriter`, optional): A file-like object. 
Must be opened for\n writing in binary mode, if applicable.\n timeout (:obj:`int` | :obj:`float`, optional): If this value is specified, use it as\n the read timeout from the server (instead of the one specified during creation of\n the connection pool).\n\n Returns:\n :obj:`str` | :obj:`io.BufferedWriter`: The same object as :attr:`out` if specified.\n Otherwise, returns the filename downloaded to.\n\n Raises:\n ValueError: If both :attr:`custom_path` and :attr:`out` are passed.\n\n \"\"\"\n if custom_path is not None and out is not None:\n raise ValueError('custom_path and out are mutually exclusive')\n\n # Convert any UTF-8 char into a url encoded ASCII string.\n url = self._get_encoded_url()\n\n if out:\n buf = self.bot.request.retrieve(url)\n if self._credentials:\n buf = decrypt(b64decode(self._credentials.secret),\n b64decode(self._credentials.hash),\n buf)\n out.write(buf)\n return out\n else:\n if custom_path:\n filename = custom_path\n elif self.file_path:\n filename = basename(self.file_path)\n else:\n filename = os.path.join(os.getcwd(), self.file_id)\n\n buf = self.bot.request.retrieve(url, timeout=timeout)\n if self._credentials:\n buf = decrypt(b64decode(self._credentials.secret),\n b64decode(self._credentials.hash),\n buf)\n with open(filename, 'wb') as fobj:\n fobj.write(buf)\n return filename\n\n def _get_encoded_url(self):\n \"\"\"Convert any UTF-8 char in :obj:`File.file_path` into a url encoded ASCII string.\"\"\"\n sres = urllib_parse.urlsplit(self.file_path)\n return urllib_parse.urlunsplit(urllib_parse.SplitResult(\n sres.scheme, sres.netloc, urllib_parse.quote(sres.path), sres.query, sres.fragment))\n\n def download_as_bytearray(self, buf=None):\n \"\"\"Download this file and return it as a bytearray.\n\n Args:\n buf (:obj:`bytearray`, optional): Extend the given bytearray with the downloaded data.\n\n Returns:\n :obj:`bytearray`: The same object as :attr:`buf` if it was specified. Otherwise a newly\n allocated :obj:`bytearray`.\n\n \"\"\"\n if buf is None:\n buf = bytearray()\n\n buf.extend(self.bot.request.retrieve(self._get_encoded_url()))\n return buf\n\n def set_credentials(self, credentials):\n self._credentials = credentials\n", "path": "telegram/files/file.py"}]} | 2,451 | 387 |
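As an aside between records: the fix captured in the row above boils down to a three-step filename fallback (explicit custom path, then the basename of the Telegram-reported path, then the file ID in the current working directory). The snippet below is a standalone sketch of that ordering only; `pick_filename` and its arguments are illustrative placeholders, not part of python-telegram-bot's API.

```python
import os
from os.path import basename


def pick_filename(custom_path=None, file_path=None, file_id="file-id"):
    """Fallback order mirroring the patch above: explicit path, then the
    basename of the reported file path, then the file ID in the CWD."""
    if custom_path:
        return custom_path
    if file_path:
        return basename(file_path)
    return os.path.join(os.getcwd(), file_id)


print(pick_filename(custom_path="/tmp/out.bin"))     # /tmp/out.bin
print(pick_filename(file_path="photos/file_1.jpg"))  # file_1.jpg
print(pick_filename(file_id="AgADAgAD"))             # <cwd>/AgADAgAD
```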
gh_patches_debug_32170 | rasdani/github-patches | git_diff | pwndbg__pwndbg-1855 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `telescope --frame`
It'd be nice to have a `telescope` command that prints out the stack frame, from `rsp` to `rbp`. Obviously this only works if the program actually uses a frame pointer.
It would be equivalent to this:
```
pwndbg> p ($rbp-$rsp)/8 + 1
$5 = 9
pwndbg> telescope $rsp $$
00:0000│ rsp 0x7ffe5f4951a0 ◂— 0x300000001
01:0008│ 0x7ffe5f4951a8 —▸ 0x7ffe5f495220 ◂— 0x170c94ca0
02:0010│ 0x7ffe5f4951b0 —▸ 0x7f6870c96168 —▸ 0x563e09600000 ◂— jg 0x563e09600047
03:0018│ 0x7ffe5f4951b8 ◂— 0xf0
04:0020│ rsi 0x7ffe5f4951c0 ◂— 0xb1ed2074ada5ce5
05:0028│ 0x7ffe5f4951c8 ◂— 0xda37756c736484c1
06:0030│ 0x7ffe5f4951d0 —▸ 0x7ffe5f4951fe ◂— 0x563e096013e00000
07:0038│ 0x7ffe5f4951d8 ◂— 0x56657596c3d91600
08:0040│ rbp 0x7ffe5f4951e0 —▸ 0x7ffe5f495200 —▸ 0x563e096013e0 ◂— push r15
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/telescope.py`
Content:
```
1 """
2 Prints out pointer chains starting at some address in memory.
3
4 Generally used to print out the stack or register values.
5 """
6
7 from __future__ import annotations
8
9 import argparse
10 import collections
11 import math
12
13 import pwndbg.chain
14 import pwndbg.color.telescope as T
15 import pwndbg.commands
16 import pwndbg.gdblib.arch
17 import pwndbg.gdblib.config
18 import pwndbg.gdblib.memory
19 import pwndbg.gdblib.regs
20 import pwndbg.gdblib.typeinfo
21 from pwndbg.color import theme
22 from pwndbg.commands import CommandCategory
23
24 telescope_lines = pwndbg.gdblib.config.add_param(
25 "telescope-lines", 8, "number of lines to printed by the telescope command"
26 )
27 skip_repeating_values = pwndbg.gdblib.config.add_param(
28 "telescope-skip-repeating-val",
29 True,
30 "whether to skip repeating values of the telescope command",
31 )
32 skip_repeating_values_minimum = pwndbg.gdblib.config.add_param(
33 "telescope-skip-repeating-val-minimum",
34 3,
35 "minimum amount of repeated values before skipping lines",
36 )
37
38 offset_separator = theme.add_param(
39 "telescope-offset-separator", "│", "offset separator of the telescope command"
40 )
41 offset_delimiter = theme.add_param(
42 "telescope-offset-delimiter", ":", "offset delimiter of the telescope command"
43 )
44 repeating_marker = theme.add_param(
45 "telescope-repeating-marker", "... ↓", "repeating values marker of the telescope command"
46 )
47
48
49 parser = argparse.ArgumentParser(
50 description="Recursively dereferences pointers starting at the specified address."
51 )
52 parser.add_argument(
53 "-r",
54 "--reverse",
55 dest="reverse",
56 action="store_true",
57 default=False,
58 help="Show <count> previous addresses instead of next ones",
59 )
60
61 parser.add_argument(
62 "address", nargs="?", default="$sp", type=int, help="The address to telescope at."
63 )
64
65 parser.add_argument(
66 "count", nargs="?", default=telescope_lines, type=int, help="The number of lines to show."
67 )
68
69
70 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.MEMORY)
71 @pwndbg.commands.OnlyWhenRunning
72 def telescope(address=None, count=telescope_lines, to_string=False, reverse=False):
73 """
74 Recursively dereferences pointers starting at the specified address
75 ($sp by default)
76 """
77 ptrsize = pwndbg.gdblib.typeinfo.ptrsize
78 if telescope.repeat:
79 address = telescope.last_address + ptrsize
80 telescope.offset += 1
81 else:
82 telescope.offset = 0
83
84 address = int(address if address else pwndbg.gdblib.regs.sp) & pwndbg.gdblib.arch.ptrmask
85 input_address = address
86 count = max(int(count), 1) & pwndbg.gdblib.arch.ptrmask
87 delimiter = T.delimiter(offset_delimiter)
88 separator = T.separator(offset_separator)
89
90 # Allow invocation of "telescope 20" to dump 20 bytes at the stack pointer
91 if address < pwndbg.gdblib.memory.MMAP_MIN_ADDR and not pwndbg.gdblib.memory.peek(address):
92 count = address
93 address = pwndbg.gdblib.regs.sp
94
95 # Allow invocation of telescope -r to dump previous addresses
96 if reverse:
97 address -= (count - 1) * ptrsize
98
99 # Allow invocation of "telescope a b" to dump all bytes from A to B
100 if int(address) <= int(count):
101 # adjust count if it is an address. use ceil division as count is number of
102 # ptrsize values and we don't want to strip out a value if dest is unaligned
103 count -= address
104 count = max(math.ceil(count / ptrsize), 1)
105
106 reg_values = collections.defaultdict(lambda: [])
107 for reg in pwndbg.gdblib.regs.common:
108 reg_values[pwndbg.gdblib.regs[reg]].append(reg)
109
110 start = address
111 stop = address + (count * ptrsize)
112 step = ptrsize
113
114 # Find all registers which show up in the trace
115 regs = {}
116 for i in range(start, stop, step):
117 values = list(reg_values[i])
118
119 for width in range(1, pwndbg.gdblib.arch.ptrsize):
120 values.extend("%s-%i" % (r, width) for r in reg_values[i + width])
121
122 regs[i] = " ".join(values)
123
124 # Find the longest set of register information
125 if regs:
126 longest_regs = max(map(len, regs.values()))
127 else:
128 longest_regs = 0
129
130 # Print everything out
131 result = []
132 last = None
133 collapse_buffer: list[str] = []
134 skipped_padding = (
135 2
136 + len(offset_delimiter)
137 + 4
138 + len(offset_separator)
139 + 1
140 + longest_regs
141 + 1
142 - len(repeating_marker)
143 )
144
145 # Collapse repeating values exceeding minimum delta.
146 def collapse_repeating_values() -> None:
147 # The first line was already printed, hence increment by 1
148 if collapse_buffer and len(collapse_buffer) + 1 >= skip_repeating_values_minimum:
149 result.append(
150 T.repeating_marker(
151 "%s%s%i skipped"
152 % (repeating_marker, " " * skipped_padding, len(collapse_buffer))
153 )
154 )
155 else:
156 result.extend(collapse_buffer)
157 collapse_buffer.clear()
158
159 for i, addr in enumerate(range(start, stop, step)):
160 if not pwndbg.gdblib.memory.peek(addr):
161 collapse_repeating_values()
162 result.append("<Could not read memory at %#x>" % addr)
163 break
164
165 line = " ".join(
166 (
167 T.offset(
168 "%02x%s%04x%s"
169 % (
170 i + telescope.offset,
171 delimiter,
172 addr - start + (telescope.offset * ptrsize),
173 separator,
174 )
175 ),
176 T.register(regs[addr].ljust(longest_regs)),
177 pwndbg.chain.format(addr),
178 )
179 )
180
181 # Buffer repeating values.
182 if skip_repeating_values:
183 value = pwndbg.gdblib.memory.pvoid(addr)
184 if last == value and addr != input_address:
185 collapse_buffer.append(line)
186 continue
187 collapse_repeating_values()
188 last = value
189
190 result.append(line)
191
192 collapse_repeating_values()
193 telescope.offset += i
194 telescope.last_address = addr
195
196 if not to_string:
197 print("\n".join(result))
198
199 return result
200
201
202 parser = argparse.ArgumentParser(
203 description="Dereferences on stack data with specified count and offset."
204 )
205 parser.add_argument("count", nargs="?", default=8, type=int, help="number of element to dump")
206 parser.add_argument(
207 "offset",
208 nargs="?",
209 default=0,
210 type=int,
211 help="Element offset from $sp (support negative offset)",
212 )
213
214
215 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.STACK)
216 @pwndbg.commands.OnlyWhenRunning
217 def stack(count, offset) -> None:
218 ptrsize = pwndbg.gdblib.typeinfo.ptrsize
219 telescope.repeat = stack.repeat
220 telescope(address=pwndbg.gdblib.regs.sp + offset * ptrsize, count=count)
221
222
223 telescope.last_address = 0
224 telescope.offset = 0
225
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/commands/telescope.py b/pwndbg/commands/telescope.py
--- a/pwndbg/commands/telescope.py
+++ b/pwndbg/commands/telescope.py
@@ -58,6 +58,15 @@
help="Show <count> previous addresses instead of next ones",
)
+parser.add_argument(
+ "-f",
+ "--frame",
+ dest="frame",
+ action="store_true",
+ default=False,
+ help="Show the stack frame, from rsp to rbp",
+)
+
parser.add_argument(
"address", nargs="?", default="$sp", type=int, help="The address to telescope at."
)
@@ -69,7 +78,7 @@
@pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.MEMORY)
@pwndbg.commands.OnlyWhenRunning
-def telescope(address=None, count=telescope_lines, to_string=False, reverse=False):
+def telescope(address=None, count=telescope_lines, to_string=False, reverse=False, frame=False):
"""
Recursively dereferences pointers starting at the specified address
($sp by default)
@@ -96,6 +105,24 @@
if reverse:
address -= (count - 1) * ptrsize
+ # Allow invocation of telescope -f (--frame) to dump frame addresses
+ if frame:
+ sp = pwndbg.gdblib.regs.sp
+ bp = pwndbg.gdblib.regs[pwndbg.gdblib.regs.frame]
+ if sp > bp:
+ print("Cannot display stack frame because base pointer is below stack pointer")
+ return
+
+ for page in pwndbg.gdblib.vmmap.get():
+ if sp in page and bp not in page:
+ print(
+ "Cannot display stack frame because base pointer is not on the same page with stack pointer"
+ )
+ return
+
+ address = sp
+ count = int((bp - sp) / ptrsize) + 1
+
# Allow invocation of "telescope a b" to dump all bytes from A to B
if int(address) <= int(count):
# adjust count if it is an address. use ceil division as count is number of
| {"golden_diff": "diff --git a/pwndbg/commands/telescope.py b/pwndbg/commands/telescope.py\n--- a/pwndbg/commands/telescope.py\n+++ b/pwndbg/commands/telescope.py\n@@ -58,6 +58,15 @@\n help=\"Show <count> previous addresses instead of next ones\",\n )\n \n+parser.add_argument(\n+ \"-f\",\n+ \"--frame\",\n+ dest=\"frame\",\n+ action=\"store_true\",\n+ default=False,\n+ help=\"Show the stack frame, from rsp to rbp\",\n+)\n+\n parser.add_argument(\n \"address\", nargs=\"?\", default=\"$sp\", type=int, help=\"The address to telescope at.\"\n )\n@@ -69,7 +78,7 @@\n \n @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.MEMORY)\n @pwndbg.commands.OnlyWhenRunning\n-def telescope(address=None, count=telescope_lines, to_string=False, reverse=False):\n+def telescope(address=None, count=telescope_lines, to_string=False, reverse=False, frame=False):\n \"\"\"\n Recursively dereferences pointers starting at the specified address\n ($sp by default)\n@@ -96,6 +105,24 @@\n if reverse:\n address -= (count - 1) * ptrsize\n \n+ # Allow invocation of telescope -f (--frame) to dump frame addresses\n+ if frame:\n+ sp = pwndbg.gdblib.regs.sp\n+ bp = pwndbg.gdblib.regs[pwndbg.gdblib.regs.frame]\n+ if sp > bp:\n+ print(\"Cannot display stack frame because base pointer is below stack pointer\")\n+ return\n+\n+ for page in pwndbg.gdblib.vmmap.get():\n+ if sp in page and bp not in page:\n+ print(\n+ \"Cannot display stack frame because base pointer is not on the same page with stack pointer\"\n+ )\n+ return\n+\n+ address = sp\n+ count = int((bp - sp) / ptrsize) + 1\n+\n # Allow invocation of \"telescope a b\" to dump all bytes from A to B\n if int(address) <= int(count):\n # adjust count if it is an address. use ceil division as count is number of\n", "issue": "Add `telescope --frame`\nIt'd be nice to have a `telescope` command that prints out the stack frame, from `rsp` to `rbp`. 
Obviously this only works if the program actually uses a frame pointer.\r\n\r\nIt would be equivalent to this:\r\n```\r\npwndbg> p ($rbp-$rsp)/8 + 1\r\n$5 = 9\r\npwndbg> telescope $rsp $$\r\n00:0000\u2502 rsp 0x7ffe5f4951a0 \u25c2\u2014 0x300000001\r\n01:0008\u2502 0x7ffe5f4951a8 \u2014\u25b8 0x7ffe5f495220 \u25c2\u2014 0x170c94ca0\r\n02:0010\u2502 0x7ffe5f4951b0 \u2014\u25b8 0x7f6870c96168 \u2014\u25b8 0x563e09600000 \u25c2\u2014 jg 0x563e09600047\r\n03:0018\u2502 0x7ffe5f4951b8 \u25c2\u2014 0xf0\r\n04:0020\u2502 rsi 0x7ffe5f4951c0 \u25c2\u2014 0xb1ed2074ada5ce5\r\n05:0028\u2502 0x7ffe5f4951c8 \u25c2\u2014 0xda37756c736484c1\r\n06:0030\u2502 0x7ffe5f4951d0 \u2014\u25b8 0x7ffe5f4951fe \u25c2\u2014 0x563e096013e00000\r\n07:0038\u2502 0x7ffe5f4951d8 \u25c2\u2014 0x56657596c3d91600\r\n08:0040\u2502 rbp 0x7ffe5f4951e0 \u2014\u25b8 0x7ffe5f495200 \u2014\u25b8 0x563e096013e0 \u25c2\u2014 push r15\r\n```\n", "before_files": [{"content": "\"\"\"\nPrints out pointer chains starting at some address in memory.\n\nGenerally used to print out the stack or register values.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport collections\nimport math\n\nimport pwndbg.chain\nimport pwndbg.color.telescope as T\nimport pwndbg.commands\nimport pwndbg.gdblib.arch\nimport pwndbg.gdblib.config\nimport pwndbg.gdblib.memory\nimport pwndbg.gdblib.regs\nimport pwndbg.gdblib.typeinfo\nfrom pwndbg.color import theme\nfrom pwndbg.commands import CommandCategory\n\ntelescope_lines = pwndbg.gdblib.config.add_param(\n \"telescope-lines\", 8, \"number of lines to printed by the telescope command\"\n)\nskip_repeating_values = pwndbg.gdblib.config.add_param(\n \"telescope-skip-repeating-val\",\n True,\n \"whether to skip repeating values of the telescope command\",\n)\nskip_repeating_values_minimum = pwndbg.gdblib.config.add_param(\n \"telescope-skip-repeating-val-minimum\",\n 3,\n \"minimum amount of repeated values before skipping lines\",\n)\n\noffset_separator = theme.add_param(\n \"telescope-offset-separator\", \"\u2502\", \"offset separator of the telescope command\"\n)\noffset_delimiter = theme.add_param(\n \"telescope-offset-delimiter\", \":\", \"offset delimiter of the telescope command\"\n)\nrepeating_marker = theme.add_param(\n \"telescope-repeating-marker\", \"... 
\u2193\", \"repeating values marker of the telescope command\"\n)\n\n\nparser = argparse.ArgumentParser(\n description=\"Recursively dereferences pointers starting at the specified address.\"\n)\nparser.add_argument(\n \"-r\",\n \"--reverse\",\n dest=\"reverse\",\n action=\"store_true\",\n default=False,\n help=\"Show <count> previous addresses instead of next ones\",\n)\n\nparser.add_argument(\n \"address\", nargs=\"?\", default=\"$sp\", type=int, help=\"The address to telescope at.\"\n)\n\nparser.add_argument(\n \"count\", nargs=\"?\", default=telescope_lines, type=int, help=\"The number of lines to show.\"\n)\n\n\[email protected](parser, category=CommandCategory.MEMORY)\[email protected]\ndef telescope(address=None, count=telescope_lines, to_string=False, reverse=False):\n \"\"\"\n Recursively dereferences pointers starting at the specified address\n ($sp by default)\n \"\"\"\n ptrsize = pwndbg.gdblib.typeinfo.ptrsize\n if telescope.repeat:\n address = telescope.last_address + ptrsize\n telescope.offset += 1\n else:\n telescope.offset = 0\n\n address = int(address if address else pwndbg.gdblib.regs.sp) & pwndbg.gdblib.arch.ptrmask\n input_address = address\n count = max(int(count), 1) & pwndbg.gdblib.arch.ptrmask\n delimiter = T.delimiter(offset_delimiter)\n separator = T.separator(offset_separator)\n\n # Allow invocation of \"telescope 20\" to dump 20 bytes at the stack pointer\n if address < pwndbg.gdblib.memory.MMAP_MIN_ADDR and not pwndbg.gdblib.memory.peek(address):\n count = address\n address = pwndbg.gdblib.regs.sp\n\n # Allow invocation of telescope -r to dump previous addresses\n if reverse:\n address -= (count - 1) * ptrsize\n\n # Allow invocation of \"telescope a b\" to dump all bytes from A to B\n if int(address) <= int(count):\n # adjust count if it is an address. 
use ceil division as count is number of\n # ptrsize values and we don't want to strip out a value if dest is unaligned\n count -= address\n count = max(math.ceil(count / ptrsize), 1)\n\n reg_values = collections.defaultdict(lambda: [])\n for reg in pwndbg.gdblib.regs.common:\n reg_values[pwndbg.gdblib.regs[reg]].append(reg)\n\n start = address\n stop = address + (count * ptrsize)\n step = ptrsize\n\n # Find all registers which show up in the trace\n regs = {}\n for i in range(start, stop, step):\n values = list(reg_values[i])\n\n for width in range(1, pwndbg.gdblib.arch.ptrsize):\n values.extend(\"%s-%i\" % (r, width) for r in reg_values[i + width])\n\n regs[i] = \" \".join(values)\n\n # Find the longest set of register information\n if regs:\n longest_regs = max(map(len, regs.values()))\n else:\n longest_regs = 0\n\n # Print everything out\n result = []\n last = None\n collapse_buffer: list[str] = []\n skipped_padding = (\n 2\n + len(offset_delimiter)\n + 4\n + len(offset_separator)\n + 1\n + longest_regs\n + 1\n - len(repeating_marker)\n )\n\n # Collapse repeating values exceeding minimum delta.\n def collapse_repeating_values() -> None:\n # The first line was already printed, hence increment by 1\n if collapse_buffer and len(collapse_buffer) + 1 >= skip_repeating_values_minimum:\n result.append(\n T.repeating_marker(\n \"%s%s%i skipped\"\n % (repeating_marker, \" \" * skipped_padding, len(collapse_buffer))\n )\n )\n else:\n result.extend(collapse_buffer)\n collapse_buffer.clear()\n\n for i, addr in enumerate(range(start, stop, step)):\n if not pwndbg.gdblib.memory.peek(addr):\n collapse_repeating_values()\n result.append(\"<Could not read memory at %#x>\" % addr)\n break\n\n line = \" \".join(\n (\n T.offset(\n \"%02x%s%04x%s\"\n % (\n i + telescope.offset,\n delimiter,\n addr - start + (telescope.offset * ptrsize),\n separator,\n )\n ),\n T.register(regs[addr].ljust(longest_regs)),\n pwndbg.chain.format(addr),\n )\n )\n\n # Buffer repeating values.\n if skip_repeating_values:\n value = pwndbg.gdblib.memory.pvoid(addr)\n if last == value and addr != input_address:\n collapse_buffer.append(line)\n continue\n collapse_repeating_values()\n last = value\n\n result.append(line)\n\n collapse_repeating_values()\n telescope.offset += i\n telescope.last_address = addr\n\n if not to_string:\n print(\"\\n\".join(result))\n\n return result\n\n\nparser = argparse.ArgumentParser(\n description=\"Dereferences on stack data with specified count and offset.\"\n)\nparser.add_argument(\"count\", nargs=\"?\", default=8, type=int, help=\"number of element to dump\")\nparser.add_argument(\n \"offset\",\n nargs=\"?\",\n default=0,\n type=int,\n help=\"Element offset from $sp (support negative offset)\",\n)\n\n\[email protected](parser, category=CommandCategory.STACK)\[email protected]\ndef stack(count, offset) -> None:\n ptrsize = pwndbg.gdblib.typeinfo.ptrsize\n telescope.repeat = stack.repeat\n telescope(address=pwndbg.gdblib.regs.sp + offset * ptrsize, count=count)\n\n\ntelescope.last_address = 0\ntelescope.offset = 0\n", "path": "pwndbg/commands/telescope.py"}], "after_files": [{"content": "\"\"\"\nPrints out pointer chains starting at some address in memory.\n\nGenerally used to print out the stack or register values.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport collections\nimport math\n\nimport pwndbg.chain\nimport pwndbg.color.telescope as T\nimport pwndbg.commands\nimport pwndbg.gdblib.arch\nimport pwndbg.gdblib.config\nimport pwndbg.gdblib.memory\nimport 
pwndbg.gdblib.regs\nimport pwndbg.gdblib.typeinfo\nfrom pwndbg.color import theme\nfrom pwndbg.commands import CommandCategory\n\ntelescope_lines = pwndbg.gdblib.config.add_param(\n \"telescope-lines\", 8, \"number of lines to printed by the telescope command\"\n)\nskip_repeating_values = pwndbg.gdblib.config.add_param(\n \"telescope-skip-repeating-val\",\n True,\n \"whether to skip repeating values of the telescope command\",\n)\nskip_repeating_values_minimum = pwndbg.gdblib.config.add_param(\n \"telescope-skip-repeating-val-minimum\",\n 3,\n \"minimum amount of repeated values before skipping lines\",\n)\n\noffset_separator = theme.add_param(\n \"telescope-offset-separator\", \"\u2502\", \"offset separator of the telescope command\"\n)\noffset_delimiter = theme.add_param(\n \"telescope-offset-delimiter\", \":\", \"offset delimiter of the telescope command\"\n)\nrepeating_marker = theme.add_param(\n \"telescope-repeating-marker\", \"... \u2193\", \"repeating values marker of the telescope command\"\n)\n\n\nparser = argparse.ArgumentParser(\n description=\"Recursively dereferences pointers starting at the specified address.\"\n)\nparser.add_argument(\n \"-r\",\n \"--reverse\",\n dest=\"reverse\",\n action=\"store_true\",\n default=False,\n help=\"Show <count> previous addresses instead of next ones\",\n)\n\nparser.add_argument(\n \"-f\",\n \"--frame\",\n dest=\"frame\",\n action=\"store_true\",\n default=False,\n help=\"Show the stack frame, from rsp to rbp\",\n)\n\nparser.add_argument(\n \"address\", nargs=\"?\", default=\"$sp\", type=int, help=\"The address to telescope at.\"\n)\n\nparser.add_argument(\n \"count\", nargs=\"?\", default=telescope_lines, type=int, help=\"The number of lines to show.\"\n)\n\n\[email protected](parser, category=CommandCategory.MEMORY)\[email protected]\ndef telescope(address=None, count=telescope_lines, to_string=False, reverse=False, frame=False):\n \"\"\"\n Recursively dereferences pointers starting at the specified address\n ($sp by default)\n \"\"\"\n ptrsize = pwndbg.gdblib.typeinfo.ptrsize\n if telescope.repeat:\n address = telescope.last_address + ptrsize\n telescope.offset += 1\n else:\n telescope.offset = 0\n\n address = int(address if address else pwndbg.gdblib.regs.sp) & pwndbg.gdblib.arch.ptrmask\n input_address = address\n count = max(int(count), 1) & pwndbg.gdblib.arch.ptrmask\n delimiter = T.delimiter(offset_delimiter)\n separator = T.separator(offset_separator)\n\n # Allow invocation of \"telescope 20\" to dump 20 bytes at the stack pointer\n if address < pwndbg.gdblib.memory.MMAP_MIN_ADDR and not pwndbg.gdblib.memory.peek(address):\n count = address\n address = pwndbg.gdblib.regs.sp\n\n # Allow invocation of telescope -r to dump previous addresses\n if reverse:\n address -= (count - 1) * ptrsize\n\n # Allow invocation of telescope -f (--frame) to dump frame addresses\n if frame:\n sp = pwndbg.gdblib.regs.sp\n bp = pwndbg.gdblib.regs[pwndbg.gdblib.regs.frame]\n if sp > bp:\n print(\"Cannot display stack frame because base pointer is below stack pointer\")\n return\n\n for page in pwndbg.gdblib.vmmap.get():\n if sp in page and bp not in page:\n print(\n \"Cannot display stack frame because base pointer is not on the same page with stack pointer\"\n )\n return\n\n address = sp\n count = int((bp - sp) / ptrsize) + 1\n\n # Allow invocation of \"telescope a b\" to dump all bytes from A to B\n if int(address) <= int(count):\n # adjust count if it is an address. 
use ceil division as count is number of\n # ptrsize values and we don't want to strip out a value if dest is unaligned\n count -= address\n count = max(math.ceil(count / ptrsize), 1)\n\n reg_values = collections.defaultdict(lambda: [])\n for reg in pwndbg.gdblib.regs.common:\n reg_values[pwndbg.gdblib.regs[reg]].append(reg)\n\n start = address\n stop = address + (count * ptrsize)\n step = ptrsize\n\n # Find all registers which show up in the trace\n regs = {}\n for i in range(start, stop, step):\n values = list(reg_values[i])\n\n for width in range(1, pwndbg.gdblib.arch.ptrsize):\n values.extend(\"%s-%i\" % (r, width) for r in reg_values[i + width])\n\n regs[i] = \" \".join(values)\n\n # Find the longest set of register information\n if regs:\n longest_regs = max(map(len, regs.values()))\n else:\n longest_regs = 0\n\n # Print everything out\n result = []\n last = None\n collapse_buffer: list[str] = []\n skipped_padding = (\n 2\n + len(offset_delimiter)\n + 4\n + len(offset_separator)\n + 1\n + longest_regs\n + 1\n - len(repeating_marker)\n )\n\n # Collapse repeating values exceeding minimum delta.\n def collapse_repeating_values() -> None:\n # The first line was already printed, hence increment by 1\n if collapse_buffer and len(collapse_buffer) + 1 >= skip_repeating_values_minimum:\n result.append(\n T.repeating_marker(\n \"%s%s%i skipped\"\n % (repeating_marker, \" \" * skipped_padding, len(collapse_buffer))\n )\n )\n else:\n result.extend(collapse_buffer)\n collapse_buffer.clear()\n\n for i, addr in enumerate(range(start, stop, step)):\n if not pwndbg.gdblib.memory.peek(addr):\n collapse_repeating_values()\n result.append(\"<Could not read memory at %#x>\" % addr)\n break\n\n line = \" \".join(\n (\n T.offset(\n \"%02x%s%04x%s\"\n % (\n i + telescope.offset,\n delimiter,\n addr - start + (telescope.offset * ptrsize),\n separator,\n )\n ),\n T.register(regs[addr].ljust(longest_regs)),\n pwndbg.chain.format(addr),\n )\n )\n\n # Buffer repeating values.\n if skip_repeating_values:\n value = pwndbg.gdblib.memory.pvoid(addr)\n if last == value and addr != input_address:\n collapse_buffer.append(line)\n continue\n collapse_repeating_values()\n last = value\n\n result.append(line)\n\n collapse_repeating_values()\n telescope.offset += i\n telescope.last_address = addr\n\n if not to_string:\n print(\"\\n\".join(result))\n\n return result\n\n\nparser = argparse.ArgumentParser(\n description=\"Dereferences on stack data with specified count and offset.\"\n)\nparser.add_argument(\"count\", nargs=\"?\", default=8, type=int, help=\"number of element to dump\")\nparser.add_argument(\n \"offset\",\n nargs=\"?\",\n default=0,\n type=int,\n help=\"Element offset from $sp (support negative offset)\",\n)\n\n\[email protected](parser, category=CommandCategory.STACK)\[email protected]\ndef stack(count, offset) -> None:\n ptrsize = pwndbg.gdblib.typeinfo.ptrsize\n telescope.repeat = stack.repeat\n telescope(address=pwndbg.gdblib.regs.sp + offset * ptrsize, count=count)\n\n\ntelescope.last_address = 0\ntelescope.offset = 0\n", "path": "pwndbg/commands/telescope.py"}]} | 3,001 | 505 |
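A short aside on the record above: the heart of the `--frame` patch is counting how many pointer-sized slots sit between the stack pointer and the base pointer, inclusive, and refusing to proceed when the frame looks invalid. The following is a dependency-free sketch of just that arithmetic; `frame_slot_count` is a made-up helper, not a pwndbg function.

```python
def frame_slot_count(sp, bp, ptrsize=8):
    """Pointer-sized slots from sp up to and including bp, i.e. the
    count = (bp - sp) / ptrsize + 1 computation used in the patch above."""
    if sp > bp:  # analogous to the patch's "base pointer is below stack pointer" guard
        raise ValueError("base pointer is below stack pointer")
    return (bp - sp) // ptrsize + 1


# Register values from the issue's sample session: 9 slots from rsp to rbp,
# matching the `p ($rbp-$rsp)/8 + 1` check shown there.
print(frame_slot_count(0x7ffe5f4951a0, 0x7ffe5f4951e0))  # 9
```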
gh_patches_debug_57588 | rasdani/github-patches | git_diff | joke2k__faker-1043 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BBAN for en_GB too short
* Faker version: v2.0.3
* OS: linux
Numeric part of the en_GB BBAN needs to be 14 digits long, it currently only returns 13, failing further validation.
### Steps to reproduce
Invoke `fake.iban()` or `fake.bban()` with the en_GB locale, an IBAN or BBAN with 1 digit missing is returned.
### Expected behavior
GB ibans should be 22 chars long: https://www.xe.com/ibancalculator/sample/?ibancountry=united kingdom
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/bank/en_GB/__init__.py`
Content:
```
1 from .. import Provider as BankProvider
2
3
4 class Provider(BankProvider):
5 bban_format = '????#############'
6 country_code = 'GB'
7
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/bank/en_GB/__init__.py b/faker/providers/bank/en_GB/__init__.py
--- a/faker/providers/bank/en_GB/__init__.py
+++ b/faker/providers/bank/en_GB/__init__.py
@@ -2,5 +2,5 @@
class Provider(BankProvider):
- bban_format = '????#############'
+ bban_format = '????##############'
country_code = 'GB'
| {"golden_diff": "diff --git a/faker/providers/bank/en_GB/__init__.py b/faker/providers/bank/en_GB/__init__.py\n--- a/faker/providers/bank/en_GB/__init__.py\n+++ b/faker/providers/bank/en_GB/__init__.py\n@@ -2,5 +2,5 @@\n \n \n class Provider(BankProvider):\n- bban_format = '????#############'\n+ bban_format = '????##############'\n country_code = 'GB'\n", "issue": "BBAN for en_GB too short\n* Faker version: v2.0.3\r\n* OS: linux\r\n\r\nNumeric part of the en_GB BBAN needs to be 14 digits long, it currently only returns 13, failing further validation.\r\n\r\n### Steps to reproduce\r\n\r\nInvoke `fake.iban()` or `fake.bban()` with the en_GB locale, an IBAN or BBAN with 1 digit missing is returned.\r\n\r\n### Expected behavior\r\n\r\nGB ibans should be 22 chars long: https://www.xe.com/ibancalculator/sample/?ibancountry=united kingdom\r\n\r\n\n", "before_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n bban_format = '????#############'\n country_code = 'GB'\n", "path": "faker/providers/bank/en_GB/__init__.py"}], "after_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n bban_format = '????##############'\n country_code = 'GB'\n", "path": "faker/providers/bank/en_GB/__init__.py"}]} | 432 | 102 |
gh_patches_debug_10673 | rasdani/github-patches | git_diff | ultralytics__yolov5-3973 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
model ensembling isn't working
## 🐛 Bug
When I detect some image by using ensembling, it doesn't work.
## To Reproduce (REQUIRED)
Input:
```
!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd yolov5
%pip install -qr requirements.txt # install dependencies
import torch
from IPython.display import Image, clear_output # to display images
import urllib
clear_output()
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
def download_file(url, dst_path):
try:
with urllib.request.urlopen(url) as web_file:
data = web_file.read()
with open(dst_path, mode='wb') as local_file:
local_file.write(data)
except urllib.error.URLError as e:
print(e)
download_file('https://user-images.githubusercontent.com/26833433/124489091-ea4f9a00-ddb0-11eb-8ef1-d6f335c97f6f.jpg', "zidane.jpg")
!python detect.py --weights yolov5x.pt yolov5s.pt --img 640 --conf 0.25 --source data/images/
Image(filename='zidane.jpg', width=600)
```
Output:
```
image 1/2 /content/yolov5/yolov5/yolov5/yolov5/data/images/bus.jpg: Traceback (most recent call last):
File "detect.py", line 228, in <module>
main(opt)
File "detect.py", line 223, in main
run(**vars(opt))
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "detect.py", line 106, in run
visualize=increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False)[0]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'visualize'
```
<img width="1093" alt="スクリーンショット 2021-07-12 1 31 10" src="https://user-images.githubusercontent.com/33506506/125202974-31f28c00-e2b1-11eb-8d50-ff518011c32e.png">
## Expected behavior
detect image with ensembling correctly.
## Environment
If applicable, add screenshots to help explain your problem.
google colab
https://colab.research.google.com/drive/1rXRjuFTiHdJwbxhSIY8EywwQMrg3zCbV?usp=sharing
- OS: [e.g. Ubuntu]
- GPU Tesla P100
## Additional context
I'm trying to fix it now. might be one day from now I will make pull request
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `models/experimental.py`
Content:
```
1 # YOLOv5 experimental modules
2
3 import numpy as np
4 import torch
5 import torch.nn as nn
6
7 from models.common import Conv, DWConv
8 from utils.google_utils import attempt_download
9
10
11 class CrossConv(nn.Module):
12 # Cross Convolution Downsample
13 def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
14 # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
15 super(CrossConv, self).__init__()
16 c_ = int(c2 * e) # hidden channels
17 self.cv1 = Conv(c1, c_, (1, k), (1, s))
18 self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
19 self.add = shortcut and c1 == c2
20
21 def forward(self, x):
22 return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
23
24
25 class Sum(nn.Module):
26 # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
27 def __init__(self, n, weight=False): # n: number of inputs
28 super(Sum, self).__init__()
29 self.weight = weight # apply weights boolean
30 self.iter = range(n - 1) # iter object
31 if weight:
32 self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
33
34 def forward(self, x):
35 y = x[0] # no weight
36 if self.weight:
37 w = torch.sigmoid(self.w) * 2
38 for i in self.iter:
39 y = y + x[i + 1] * w[i]
40 else:
41 for i in self.iter:
42 y = y + x[i + 1]
43 return y
44
45
46 class GhostConv(nn.Module):
47 # Ghost Convolution https://github.com/huawei-noah/ghostnet
48 def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
49 super(GhostConv, self).__init__()
50 c_ = c2 // 2 # hidden channels
51 self.cv1 = Conv(c1, c_, k, s, None, g, act)
52 self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
53
54 def forward(self, x):
55 y = self.cv1(x)
56 return torch.cat([y, self.cv2(y)], 1)
57
58
59 class GhostBottleneck(nn.Module):
60 # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
61 def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
62 super(GhostBottleneck, self).__init__()
63 c_ = c2 // 2
64 self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw
65 DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
66 GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
67 self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
68 Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()
69
70 def forward(self, x):
71 return self.conv(x) + self.shortcut(x)
72
73
74 class MixConv2d(nn.Module):
75 # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
76 def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
77 super(MixConv2d, self).__init__()
78 groups = len(k)
79 if equal_ch: # equal c_ per group
80 i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices
81 c_ = [(i == g).sum() for g in range(groups)] # intermediate channels
82 else: # equal weight.numel() per group
83 b = [c2] + [0] * groups
84 a = np.eye(groups + 1, groups, k=-1)
85 a -= np.roll(a, 1, axis=1)
86 a *= np.array(k) ** 2
87 a[0] = 1
88 c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
89
90 self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
91 self.bn = nn.BatchNorm2d(c2)
92 self.act = nn.LeakyReLU(0.1, inplace=True)
93
94 def forward(self, x):
95 return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
96
97
98 class Ensemble(nn.ModuleList):
99 # Ensemble of models
100 def __init__(self):
101 super(Ensemble, self).__init__()
102
103 def forward(self, x, augment=False):
104 y = []
105 for module in self:
106 y.append(module(x, augment)[0])
107 # y = torch.stack(y).max(0)[0] # max ensemble
108 # y = torch.stack(y).mean(0) # mean ensemble
109 y = torch.cat(y, 1) # nms ensemble
110 return y, None # inference, train output
111
112
113 def attempt_load(weights, map_location=None, inplace=True):
114 from models.yolo import Detect, Model
115
116 # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
117 model = Ensemble()
118 for w in weights if isinstance(weights, list) else [weights]:
119 ckpt = torch.load(attempt_download(w), map_location=map_location) # load
120 model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model
121
122 # Compatibility updates
123 for m in model.modules():
124 if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]:
125 m.inplace = inplace # pytorch 1.7.0 compatibility
126 elif type(m) is Conv:
127 m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility
128
129 if len(model) == 1:
130 return model[-1] # return model
131 else:
132 print(f'Ensemble created with {weights}\n')
133 for k in ['names']:
134 setattr(model, k, getattr(model[-1], k))
135 model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride
136 return model # return ensemble
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/models/experimental.py b/models/experimental.py
--- a/models/experimental.py
+++ b/models/experimental.py
@@ -100,10 +100,10 @@
def __init__(self):
super(Ensemble, self).__init__()
- def forward(self, x, augment=False):
+ def forward(self, x, augment=False, profile=False, visualize=False):
y = []
for module in self:
- y.append(module(x, augment)[0])
+ y.append(module(x, augment, profile, visualize)[0])
# y = torch.stack(y).max(0)[0] # max ensemble
# y = torch.stack(y).mean(0) # mean ensemble
y = torch.cat(y, 1) # nms ensemble
| {"golden_diff": "diff --git a/models/experimental.py b/models/experimental.py\n--- a/models/experimental.py\n+++ b/models/experimental.py\n@@ -100,10 +100,10 @@\n def __init__(self):\n super(Ensemble, self).__init__()\n \n- def forward(self, x, augment=False):\n+ def forward(self, x, augment=False, profile=False, visualize=False):\n y = []\n for module in self:\n- y.append(module(x, augment)[0])\n+ y.append(module(x, augment, profile, visualize)[0])\n # y = torch.stack(y).max(0)[0] # max ensemble\n # y = torch.stack(y).mean(0) # mean ensemble\n y = torch.cat(y, 1) # nms ensemble\n", "issue": "model ensembling isn't working\n## \ud83d\udc1b Bug\r\nWhen I detect some image by using ensembling, it doesn't work.\r\n\r\n## To Reproduce (REQUIRED)\r\nInput:\r\n```\r\n!git clone https://github.com/ultralytics/yolov5 # clone repo\r\n%cd yolov5\r\n%pip install -qr requirements.txt # install dependencies\r\n\r\nimport torch\r\nfrom IPython.display import Image, clear_output # to display images\r\nimport urllib\r\n\r\nclear_output()\r\nprint(f\"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})\")\r\n\r\ndef download_file(url, dst_path):\r\n try:\r\n with urllib.request.urlopen(url) as web_file:\r\n data = web_file.read()\r\n with open(dst_path, mode='wb') as local_file:\r\n local_file.write(data)\r\n except urllib.error.URLError as e:\r\n print(e)\r\n\r\ndownload_file('https://user-images.githubusercontent.com/26833433/124489091-ea4f9a00-ddb0-11eb-8ef1-d6f335c97f6f.jpg', \"zidane.jpg\")\r\n\r\n!python detect.py --weights yolov5x.pt yolov5s.pt --img 640 --conf 0.25 --source data/images/\r\nImage(filename='zidane.jpg', width=600)\r\n```\r\n\r\nOutput:\r\n```\r\nimage 1/2 /content/yolov5/yolov5/yolov5/yolov5/data/images/bus.jpg: Traceback (most recent call last):\r\n File \"detect.py\", line 228, in <module>\r\n main(opt)\r\n File \"detect.py\", line 223, in main\r\n run(**vars(opt))\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"detect.py\", line 106, in run\r\n visualize=increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False)[0]\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'visualize'\r\n```\r\n<img width=\"1093\" alt=\"\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8 2021-07-12 1 31 10\" src=\"https://user-images.githubusercontent.com/33506506/125202974-31f28c00-e2b1-11eb-8d50-ff518011c32e.png\">\r\n\r\n\r\n## Expected behavior\r\ndetect image with ensembling correctly.\r\n\r\n## Environment\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\ngoogle colab\r\n\r\nhttps://colab.research.google.com/drive/1rXRjuFTiHdJwbxhSIY8EywwQMrg3zCbV?usp=sharing\r\n\r\n - OS: [e.g. Ubuntu]\r\n - GPU Tesla P100\r\n\r\n## Additional context\r\nI'm trying to fix it now. 
might be one day from now I will make pull request\r\n\n", "before_files": [{"content": "# YOLOv5 experimental modules\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom models.common import Conv, DWConv\nfrom utils.google_utils import attempt_download\n\n\nclass CrossConv(nn.Module):\n # Cross Convolution Downsample\n def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):\n # ch_in, ch_out, kernel, stride, groups, expansion, shortcut\n super(CrossConv, self).__init__()\n c_ = int(c2 * e) # hidden channels\n self.cv1 = Conv(c1, c_, (1, k), (1, s))\n self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)\n self.add = shortcut and c1 == c2\n\n def forward(self, x):\n return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))\n\n\nclass Sum(nn.Module):\n # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070\n def __init__(self, n, weight=False): # n: number of inputs\n super(Sum, self).__init__()\n self.weight = weight # apply weights boolean\n self.iter = range(n - 1) # iter object\n if weight:\n self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights\n\n def forward(self, x):\n y = x[0] # no weight\n if self.weight:\n w = torch.sigmoid(self.w) * 2\n for i in self.iter:\n y = y + x[i + 1] * w[i]\n else:\n for i in self.iter:\n y = y + x[i + 1]\n return y\n\n\nclass GhostConv(nn.Module):\n # Ghost Convolution https://github.com/huawei-noah/ghostnet\n def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups\n super(GhostConv, self).__init__()\n c_ = c2 // 2 # hidden channels\n self.cv1 = Conv(c1, c_, k, s, None, g, act)\n self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)\n\n def forward(self, x):\n y = self.cv1(x)\n return torch.cat([y, self.cv2(y)], 1)\n\n\nclass GhostBottleneck(nn.Module):\n # Ghost Bottleneck https://github.com/huawei-noah/ghostnet\n def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride\n super(GhostBottleneck, self).__init__()\n c_ = c2 // 2\n self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw\n DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw\n GhostConv(c_, c2, 1, 1, act=False)) # pw-linear\n self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),\n Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()\n\n def forward(self, x):\n return self.conv(x) + self.shortcut(x)\n\n\nclass MixConv2d(nn.Module):\n # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595\n def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):\n super(MixConv2d, self).__init__()\n groups = len(k)\n if equal_ch: # equal c_ per group\n i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices\n c_ = [(i == g).sum() for g in range(groups)] # intermediate channels\n else: # equal weight.numel() per group\n b = [c2] + [0] * groups\n a = np.eye(groups + 1, groups, k=-1)\n a -= np.roll(a, 1, axis=1)\n a *= np.array(k) ** 2\n a[0] = 1\n c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b\n\n self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])\n self.bn = nn.BatchNorm2d(c2)\n self.act = nn.LeakyReLU(0.1, inplace=True)\n\n def forward(self, x):\n return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))\n\n\nclass Ensemble(nn.ModuleList):\n # Ensemble of models\n def __init__(self):\n super(Ensemble, self).__init__()\n\n def forward(self, x, augment=False):\n y = []\n for module in self:\n y.append(module(x, augment)[0])\n # y = 
torch.stack(y).max(0)[0] # max ensemble\n # y = torch.stack(y).mean(0) # mean ensemble\n y = torch.cat(y, 1) # nms ensemble\n return y, None # inference, train output\n\n\ndef attempt_load(weights, map_location=None, inplace=True):\n from models.yolo import Detect, Model\n\n # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a\n model = Ensemble()\n for w in weights if isinstance(weights, list) else [weights]:\n ckpt = torch.load(attempt_download(w), map_location=map_location) # load\n model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model\n\n # Compatibility updates\n for m in model.modules():\n if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]:\n m.inplace = inplace # pytorch 1.7.0 compatibility\n elif type(m) is Conv:\n m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility\n\n if len(model) == 1:\n return model[-1] # return model\n else:\n print(f'Ensemble created with {weights}\\n')\n for k in ['names']:\n setattr(model, k, getattr(model[-1], k))\n model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride\n return model # return ensemble\n", "path": "models/experimental.py"}], "after_files": [{"content": "# YOLOv5 experimental modules\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom models.common import Conv, DWConv\nfrom utils.google_utils import attempt_download\n\n\nclass CrossConv(nn.Module):\n # Cross Convolution Downsample\n def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):\n # ch_in, ch_out, kernel, stride, groups, expansion, shortcut\n super(CrossConv, self).__init__()\n c_ = int(c2 * e) # hidden channels\n self.cv1 = Conv(c1, c_, (1, k), (1, s))\n self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)\n self.add = shortcut and c1 == c2\n\n def forward(self, x):\n return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))\n\n\nclass Sum(nn.Module):\n # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070\n def __init__(self, n, weight=False): # n: number of inputs\n super(Sum, self).__init__()\n self.weight = weight # apply weights boolean\n self.iter = range(n - 1) # iter object\n if weight:\n self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights\n\n def forward(self, x):\n y = x[0] # no weight\n if self.weight:\n w = torch.sigmoid(self.w) * 2\n for i in self.iter:\n y = y + x[i + 1] * w[i]\n else:\n for i in self.iter:\n y = y + x[i + 1]\n return y\n\n\nclass GhostConv(nn.Module):\n # Ghost Convolution https://github.com/huawei-noah/ghostnet\n def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups\n super(GhostConv, self).__init__()\n c_ = c2 // 2 # hidden channels\n self.cv1 = Conv(c1, c_, k, s, None, g, act)\n self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)\n\n def forward(self, x):\n y = self.cv1(x)\n return torch.cat([y, self.cv2(y)], 1)\n\n\nclass GhostBottleneck(nn.Module):\n # Ghost Bottleneck https://github.com/huawei-noah/ghostnet\n def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride\n super(GhostBottleneck, self).__init__()\n c_ = c2 // 2\n self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw\n DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw\n GhostConv(c_, c2, 1, 1, act=False)) # pw-linear\n self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),\n Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()\n\n def forward(self, x):\n 
return self.conv(x) + self.shortcut(x)\n\n\nclass MixConv2d(nn.Module):\n # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595\n def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):\n super(MixConv2d, self).__init__()\n groups = len(k)\n if equal_ch: # equal c_ per group\n i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices\n c_ = [(i == g).sum() for g in range(groups)] # intermediate channels\n else: # equal weight.numel() per group\n b = [c2] + [0] * groups\n a = np.eye(groups + 1, groups, k=-1)\n a -= np.roll(a, 1, axis=1)\n a *= np.array(k) ** 2\n a[0] = 1\n c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b\n\n self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])\n self.bn = nn.BatchNorm2d(c2)\n self.act = nn.LeakyReLU(0.1, inplace=True)\n\n def forward(self, x):\n return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))\n\n\nclass Ensemble(nn.ModuleList):\n # Ensemble of models\n def __init__(self):\n super(Ensemble, self).__init__()\n\n def forward(self, x, augment=False, profile=False, visualize=False):\n y = []\n for module in self:\n y.append(module(x, augment, profile, visualize)[0])\n # y = torch.stack(y).max(0)[0] # max ensemble\n # y = torch.stack(y).mean(0) # mean ensemble\n y = torch.cat(y, 1) # nms ensemble\n return y, None # inference, train output\n\n\ndef attempt_load(weights, map_location=None, inplace=True):\n from models.yolo import Detect, Model\n\n # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a\n model = Ensemble()\n for w in weights if isinstance(weights, list) else [weights]:\n ckpt = torch.load(attempt_download(w), map_location=map_location) # load\n model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model\n\n # Compatibility updates\n for m in model.modules():\n if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]:\n m.inplace = inplace # pytorch 1.7.0 compatibility\n elif type(m) is Conv:\n m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility\n\n if len(model) == 1:\n return model[-1] # return model\n else:\n print(f'Ensemble created with {weights}\\n')\n for k in ['names']:\n setattr(model, k, getattr(model[-1], k))\n model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride\n return model # return ensemble\n", "path": "models/experimental.py"}]} | 2,915 | 178 |
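One more aside: the traceback in the row above is the ordinary Python error for passing a keyword that a method's signature does not accept, and the patch simply widens `Ensemble.forward` so the extra keywords are accepted and forwarded. Below is a plain-Python sketch of that failure and fix; the class names are placeholders, not YOLOv5's actual modules.

```python
class Member:
    def forward(self, x, augment=False, profile=False, visualize=False):
        return x


class NarrowEnsemble:
    """Container whose forward() only knows about `augment` (pre-patch shape)."""

    def __init__(self, members):
        self.members = members

    def forward(self, x, augment=False):
        return [m.forward(x, augment) for m in self.members]


class WideEnsemble(NarrowEnsemble):
    """Same container, but the signature matches what callers pass (post-patch shape)."""

    def forward(self, x, augment=False, profile=False, visualize=False):
        return [m.forward(x, augment, profile, visualize) for m in self.members]


try:
    NarrowEnsemble([Member()]).forward(1, visualize=True)
except TypeError as err:
    print(err)  # ... got an unexpected keyword argument 'visualize'

print(WideEnsemble([Member()]).forward(1, visualize=True))  # [1]
```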
gh_patches_debug_60487 | rasdani/github-patches | git_diff | mars-project__mars-284 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Fuse operand's sparse value is wrong
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A fuse operand's sparseness should be the same as the tail node's, but it is not set correctly now.
**To Reproduce**
``` Python
In [1]: import scipy.sparse as sps
In [2]: import mars.tensor as mt
In [3]: data = sps.rand(10, 10, density=0.05)
In [4]: a = mt.tensor(data, chunk_size=3)
In [5]: b = (a * 2) * 2
In [6]: g = b.build_graph(tiled=True, compose=True)
In [7]: list(g)[0].op.sparse
Out[7]: False
In [8]: list(g)[0].op
Out[8]: <mars.tensor.expressions.fuse.core.TensorFuseChunk at 0xa208b7048>
In [9]: list(g)[0].composed[-1].op.sparse
Out[9]: True
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/tensor/expressions/fuse/core.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from .... import operands
18 from ....tiles import NotSupportTile
19 from ..core import TensorOperandMixin
20
21
22 class TensorFuseChunk(operands.Fuse, TensorOperandMixin):
23 def __init__(self, dtype=None, **kw):
24 super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)
25
26 def calc_shape(self, *inputs_shape):
27 in_shapes = inputs_shape
28 out_shape = None
29
30 # TODO: the logic will be changed when fusion is not only straight line
31 for c in self.outputs[0].composed:
32 out_shape = c.op.calc_shape(*in_shapes)
33 in_shapes = [out_shape]
34 return out_shape
35
36 @classmethod
37 def tile(cls, op):
38 raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')
39
40
41 class TensorFuseChunkMixin(TensorOperandMixin):
42 __slots__ = ()
43
44 @classmethod
45 def tile(cls, op):
46 raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')
47
48 def __call__(self, fuse_chunks):
49 head_chunk = fuse_chunks[0]
50 tail_chunk = fuse_chunks[-1]
51 setattr(self, '_operands', [c.op for c in fuse_chunks])
52 return self.new_chunk(head_chunk.inputs, tail_chunk.shape,
53 _composed=fuse_chunks, _key=tail_chunk.key)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mars/tensor/expressions/fuse/core.py b/mars/tensor/expressions/fuse/core.py
--- a/mars/tensor/expressions/fuse/core.py
+++ b/mars/tensor/expressions/fuse/core.py
@@ -20,8 +20,8 @@
class TensorFuseChunk(operands.Fuse, TensorOperandMixin):
- def __init__(self, dtype=None, **kw):
- super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)
+ def __init__(self, dtype=None, sparse=False, **kw):
+ super(TensorFuseChunk, self).__init__(_dtype=dtype, _sparse=sparse, **kw)
def calc_shape(self, *inputs_shape):
in_shapes = inputs_shape
| {"golden_diff": "diff --git a/mars/tensor/expressions/fuse/core.py b/mars/tensor/expressions/fuse/core.py\n--- a/mars/tensor/expressions/fuse/core.py\n+++ b/mars/tensor/expressions/fuse/core.py\n@@ -20,8 +20,8 @@\n \n \n class TensorFuseChunk(operands.Fuse, TensorOperandMixin):\n- def __init__(self, dtype=None, **kw):\n- super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)\n+ def __init__(self, dtype=None, sparse=False, **kw):\n+ super(TensorFuseChunk, self).__init__(_dtype=dtype, _sparse=sparse, **kw)\n \n def calc_shape(self, *inputs_shape):\n in_shapes = inputs_shape\n", "issue": "[BUG] Fuse operand's sparse value is wrong\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\nA fuse operand's sparseness should be the same as tail node's, it is not set correctly now.\r\n\r\n**To Reproduce**\r\n``` Python\r\nIn [1]: import scipy.sparse as sps \r\n\r\nIn [2]: import mars.tensor as mt \r\n\r\nIn [3]: data = sps.rand(10, 10, density=0.05) \r\n\r\nIn [4]: a = mt.tensor(data, chunk_size=3) \r\n\r\nIn [5]: b = (a * 2) * 2 \r\n\r\nIn [6]: g = b.build_graph(tiled=True, compose=True) \r\n\r\nIn [7]: list(g)[0].op.sparse \r\nOut[7]: False\r\n\r\nIn [8]: list(g)[0].op \r\nOut[8]: <mars.tensor.expressions.fuse.core.TensorFuseChunk at 0xa208b7048>\r\n\r\nIn [9]: list(g)[0].composed[-1].op.sparse \r\nOut[9]: True\r\n```\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .... 
import operands\nfrom ....tiles import NotSupportTile\nfrom ..core import TensorOperandMixin\n\n\nclass TensorFuseChunk(operands.Fuse, TensorOperandMixin):\n def __init__(self, dtype=None, **kw):\n super(TensorFuseChunk, self).__init__(_dtype=dtype, **kw)\n\n def calc_shape(self, *inputs_shape):\n in_shapes = inputs_shape\n out_shape = None\n\n # TODO: the logic will be changed when fusion is not only straight line\n for c in self.outputs[0].composed:\n out_shape = c.op.calc_shape(*in_shapes)\n in_shapes = [out_shape]\n return out_shape\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n\nclass TensorFuseChunkMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n def __call__(self, fuse_chunks):\n head_chunk = fuse_chunks[0]\n tail_chunk = fuse_chunks[-1]\n setattr(self, '_operands', [c.op for c in fuse_chunks])\n return self.new_chunk(head_chunk.inputs, tail_chunk.shape,\n _composed=fuse_chunks, _key=tail_chunk.key)\n", "path": "mars/tensor/expressions/fuse/core.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .... import operands\nfrom ....tiles import NotSupportTile\nfrom ..core import TensorOperandMixin\n\n\nclass TensorFuseChunk(operands.Fuse, TensorOperandMixin):\n def __init__(self, dtype=None, sparse=False, **kw):\n super(TensorFuseChunk, self).__init__(_dtype=dtype, _sparse=sparse, **kw)\n\n def calc_shape(self, *inputs_shape):\n in_shapes = inputs_shape\n out_shape = None\n\n # TODO: the logic will be changed when fusion is not only straight line\n for c in self.outputs[0].composed:\n out_shape = c.op.calc_shape(*in_shapes)\n in_shapes = [out_shape]\n return out_shape\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n\nclass TensorFuseChunkMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def tile(cls, op):\n raise NotSupportTile('TensorFuseChunk is a chunk operand which does not support tile')\n\n def __call__(self, fuse_chunks):\n head_chunk = fuse_chunks[0]\n tail_chunk = fuse_chunks[-1]\n setattr(self, '_operands', [c.op for c in fuse_chunks])\n return self.new_chunk(head_chunk.inputs, tail_chunk.shape,\n _composed=fuse_chunks, _key=tail_chunk.key)\n", "path": "mars/tensor/expressions/fuse/core.py"}]} | 1,092 | 176 |
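A minimal standalone sketch of the failure mode fixed by the patch above — the class names here are hypothetical stand-ins, not the real mars operand hierarchy: the base operand only reports sparse when its constructor explicitly receives the flag, so a fuse operand that does not forward `sparse` always comes out dense, regardless of the tail chunk.

```python
# Simplified stand-ins for the mars operand classes (hypothetical names).
class Operand:
    def __init__(self, sparse=False):
        self.sparse = sparse  # stays False unless a subclass forwards the flag


class TensorMultiply(Operand):
    """Tail node of the fused chain; knows it is sparse."""


class FuseChunkBuggy(Operand):
    # mirrors the old __init__: `sparse` is never forwarded, so it defaults to False
    def __init__(self, dtype=None, **kw):
        super().__init__(**kw)
        self.dtype = dtype


class FuseChunkPatched(Operand):
    # mirrors the patched __init__: the tail node's sparseness is passed through
    def __init__(self, dtype=None, sparse=False, **kw):
        super().__init__(sparse=sparse, **kw)
        self.dtype = dtype


tail = TensorMultiply(sparse=True)
print(FuseChunkBuggy(dtype=float).sparse)                        # False (the bug)
print(FuseChunkPatched(dtype=float, sparse=tail.sparse).sparse)  # True
```

The sketch only illustrates why forwarding the flag through `__init__` is what makes `op.sparse` report `True`; where the real codebase constructs the fused operand is outside this snippet.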
gh_patches_debug_11091 | rasdani/github-patches | git_diff | chainer__chainer-7202 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`eps` is doubly added to variance in inference of `L.BatchRenormalization`
- `running_var` learns variances with `eps`
- `train=False` mode uses `running_var + eps`
### Conditions
I tested with Chainer versions: `3.0.0`, `4.5.0`, `5.4.0`, `7.0.0a1`
### Code to reproduce
```python
import chainer
import numpy as np
np.random.seed(0)
brn = chainer.links.BatchRenormalization(3, eps=1.)
for _ in range(1000):
x = np.random.randn(1000, 3).astype('f')
brn(x)
x = np.random.randn(1000, 3).astype('f')
y = brn(x)
print(y.array.var(axis=0))
with chainer.using_config('train', False):
y = brn(x)
print(y.array.var(axis=0))
```
### Error messages, stack traces, or logs
```
[0.51281106 0.49953052 0.48790243]
[0.3506052 0.33283928 0.31892547]
```
Here, the "normalized" variance is around 1/(1+eps) in training mode, whereas it is around 1/(1+2*eps) in inference mode.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/normalization/batch_renormalization.py`
Content:
```
1 import warnings
2
3 import numpy
4
5 from chainer import backend
6 from chainer.backends import cuda
7 from chainer import configuration
8 from chainer import function
9 from chainer.functions.normalization import batch_normalization
10 from chainer.utils import type_check
11
12
13 def _xhat(x, mean, std, expander):
14 x_mu = x - mean[expander]
15 x_mu /= std[expander]
16 return x_mu
17
18
19 class BatchRenormalizationFunction(function.Function):
20
21 def __init__(self, eps=2e-5, mean=None, var=None, decay=0.9,
22 rmax=1, dmax=0, update_statistics=True):
23 self._running_mean = mean
24 self._running_var = var
25 self.rmax = rmax
26 self.dmax = dmax
27 self.r = None
28 self.update_statistics = update_statistics
29
30 self.eps = eps
31 self.decay = decay
32
33 def _warn_accessing_property(self):
34 warnings.warn(
35 'The attributes of BatchRenormalizationFunction '
36 'are deprecated. '
37 'Consider setting update_statistics=True to '
38 'batch_renormalization to update running statistics.',
39 DeprecationWarning)
40
41 @property
42 def running_mean(self):
43 self._warn_accessing_property()
44 return self._running_mean
45
46 @property
47 def running_var(self):
48 self._warn_accessing_property()
49 return self._running_var
50
51 def check_type_forward(self, in_types):
52 type_check.expect(in_types.size() == 3)
53 x_type, gamma_type, beta_type = in_types
54 M = type_check.eval(gamma_type.ndim)
55 type_check.expect(
56 x_type.dtype.kind == 'f',
57 x_type.ndim >= gamma_type.ndim + 1,
58 x_type.shape[1:1 + M] == gamma_type.shape,
59 # TODO(tkerola): Check shape
60 gamma_type.dtype.kind == 'f',
61 gamma_type.dtype == beta_type.dtype,
62 gamma_type.shape == beta_type.shape,
63 )
64
65 def forward(self, inputs):
66 xp = backend.get_array_module(*inputs)
67 x, gamma, beta = inputs
68
69 # Note: we must be in train mode.
70 assert configuration.config.train
71
72 head_ndim = gamma.ndim + 1
73 expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)
74
75 # NOTE(tommi): cuDNN is not used since it does not support
76 # batch renormalization
77 axis = (0,) + tuple(range(head_ndim, x.ndim))
78 mean = x.mean(axis=axis, dtype=gamma.dtype)
79 var = x.var(axis=axis, dtype=gamma.dtype) + self.eps
80 self.std = xp.sqrt(var, dtype=var.dtype)
81
82 running_sigma = xp.sqrt(self._running_var + self.eps,
83 dtype=self._running_mean.dtype)
84 self.r = xp.clip(self.std / running_sigma,
85 1.0 / self.rmax, self.rmax)
86 d = xp.clip(
87 (mean - self._running_mean) / running_sigma,
88 -self.dmax, self.dmax)
89
90 gamma = gamma[expander]
91 beta = beta[expander]
92
93 if xp is numpy:
94 self.x_hat = _xhat(x, mean, self.std, expander)
95 self.x_hat_renorm = self.x_hat * self.r[expander] + d[expander]
96 y = gamma * self.x_hat_renorm
97 y += beta
98 y = y.astype(dtype=x.dtype)
99 else:
100 self.x_hat, self.x_hat_renorm, y = cuda.elementwise(
101 'T x, U mean, U std, U gamma, U beta, U r, U d',
102 'U x_hat, U x_hat_renorm, T y',
103 '''
104 x_hat = (x - mean) / std;
105 x_hat_renorm = x_hat * r + d;
106 y = gamma * x_hat_renorm + beta;
107 ''',
108 'brn_fwd')(
109 x, mean[expander], self.std[expander], gamma, beta,
110 self.r[expander], d[expander])
111
112 if self.update_statistics:
113 m = x.size // gamma[expander].size
114 self._running_mean *= self.decay
115 adjust = m / max(m - 1., 1.) # unbiased estimation
116 temp_ar = xp.array(mean)
117 temp_ar *= (1 - self.decay)
118 self._running_mean += temp_ar
119 del temp_ar
120 self._running_var *= self.decay
121 temp_ar = xp.array(var)
122 temp_ar *= (1 - self.decay) * adjust
123 self._running_var += temp_ar
124 del temp_ar
125
126 return y,
127
128 def backward(self, inputs, grad_outputs):
129 x, gamma, _ = inputs
130 gy = grad_outputs[0]
131 head_ndim = gamma.ndim + 1
132 expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)
133 m = gamma.dtype.type(x.size // gamma.size)
134 axis = (0,) + tuple(range(head_ndim, x.ndim))
135 xp = backend.get_array_module(x)
136
137 # Note: we must be in train mode.
138 assert configuration.config.train
139 # NOTE(tommi): cuDNN is not used since it does not support
140 # batch renormalization
141 gbeta = gy.sum(axis=axis, dtype=gamma.dtype)
142 ggamma = (gy * self.x_hat_renorm).sum(axis=axis)
143 gsigma_batch = (gy * self.x_hat).sum(axis=axis)
144 if xp is numpy:
145 scale = (self.r * gamma / self.std)[expander]
146 gx = scale * (gy - (self.x_hat * gsigma_batch[expander] +
147 gbeta[expander]) / m)
148 gx = gx.astype(dtype=x.dtype)
149 else:
150 inv_m = numpy.float32(1) / m
151 gx = cuda.elementwise(
152 'T gy, U x_hat, U gamma, U std, U gsigma_batch, U gbeta, \
153 U inv_m, U r',
154 'T gx',
155 'gx = (r * gamma / std) * (gy - (x_hat * gsigma_batch + gbeta) * \
156 inv_m)',
157 'brn_bwd')(
158 gy, self.x_hat, gamma[expander],
159 self.std[expander], gsigma_batch[expander],
160 gbeta[expander], inv_m, self.r[expander])
161 return gx, ggamma, gbeta
162
163
164 def batch_renormalization(x, gamma, beta, rmax, dmax, eps=2e-5,
165 running_mean=None, running_var=None, decay=0.9,
166 update_statistics=False):
167 """Batch renormalization function.
168
169 This is an extension of batch normalization, which ensures that the
170 training and inference models generate the same outputs that depend on
171 individual examples rather than the entire minibatch.
172
173 .. note::
174
175 This function does not perform in-place update to
176 ``running_mean`` and ``running_var`` by default, contrary to
177 :func:`~chainer.functions.batch_normalization`.
178 If the function is called, it will not be possible to access the
179 updated running mean and variance statistics, because they are members
180 of the function object, which cannot be accessed by the caller.
181 If it is desired to update the running statistics, call the function
182 with ``update_statistics=True`` option.
183
184 .. note::
185
186 For the consistency with Batch Normalization, this function
187 intentionally ignores some of the theoretical flaws in Algorithm 1 of
188 the Batch Renormalization paper:
189
190 - ``F.batch_renormalization`` maintains the moving average of variances
191 :math:`\\sigma^2`, while the original paper maintains the moving
192 average of standard deviations :math:`\\sigma`.
193 - ``F.batch_renormalization`` applies Bessel's correction to update the
194 moving average of variances.
195
196 See: `Batch Renormalization: Towards Reducing Minibatch Dependence in
197 Batch-Normalized Models <https://arxiv.org/abs/1702.03275>`_
198
199 .. seealso:: :class:`~chainer.links.BatchRenormalization`
200
201 """
202 if running_mean is None:
203 raise TypeError('running_mean is required')
204 if running_var is None:
205 raise TypeError('running_var is required')
206 return BatchRenormalizationFunction(
207 eps, running_mean, running_var, decay, rmax, dmax, update_statistics
208 )(x, gamma, beta)
209
210
211 def fixed_batch_renormalization(x, gamma, beta, mean, var, eps=2e-5):
212 warnings.warn(
213 'fixed_batch_renormalization is deprecated. '
214 'Use fixed_batch_normalization instead.',
215 DeprecationWarning)
216 with configuration.using_config('train', False):
217 return batch_normalization.fixed_batch_normalization(
218 x, gamma, beta, mean, var, eps
219 )
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/functions/normalization/batch_renormalization.py b/chainer/functions/normalization/batch_renormalization.py
--- a/chainer/functions/normalization/batch_renormalization.py
+++ b/chainer/functions/normalization/batch_renormalization.py
@@ -76,8 +76,8 @@
# batch renormalization
axis = (0,) + tuple(range(head_ndim, x.ndim))
mean = x.mean(axis=axis, dtype=gamma.dtype)
- var = x.var(axis=axis, dtype=gamma.dtype) + self.eps
- self.std = xp.sqrt(var, dtype=var.dtype)
+ var = x.var(axis=axis, dtype=gamma.dtype)
+ self.std = xp.sqrt(var + self.eps, dtype=var.dtype)
running_sigma = xp.sqrt(self._running_var + self.eps,
dtype=self._running_mean.dtype)
| {"golden_diff": "diff --git a/chainer/functions/normalization/batch_renormalization.py b/chainer/functions/normalization/batch_renormalization.py\n--- a/chainer/functions/normalization/batch_renormalization.py\n+++ b/chainer/functions/normalization/batch_renormalization.py\n@@ -76,8 +76,8 @@\n # batch renormalization\n axis = (0,) + tuple(range(head_ndim, x.ndim))\n mean = x.mean(axis=axis, dtype=gamma.dtype)\n- var = x.var(axis=axis, dtype=gamma.dtype) + self.eps\n- self.std = xp.sqrt(var, dtype=var.dtype)\n+ var = x.var(axis=axis, dtype=gamma.dtype)\n+ self.std = xp.sqrt(var + self.eps, dtype=var.dtype)\n \n running_sigma = xp.sqrt(self._running_var + self.eps,\n dtype=self._running_mean.dtype)\n", "issue": "`eps` is doubly added to variance in inference of `L.BatchRenormalization`\n- `runninng_var` learns variances with `eps`\r\n- `train=False` mode uses `running_var + eps`\r\n\r\n### Conditions\r\nI tested with Chainer versions: `3.0.0`, `4.5.0`, `5.4.0`, `7.0.0a1`\r\n\r\n### Code to reproduce\r\n```python\r\nimport chainer\r\nimport numpy as np\r\nnp.random.seed(0)\r\n\r\nbrn = chainer.links.BatchRenormalization(3, eps=1.)\r\nfor _ in range(1000):\r\n x = np.random.randn(1000, 3).astype('f')\r\n brn(x)\r\n\r\nx = np.random.randn(1000, 3).astype('f')\r\n\r\ny = brn(x)\r\nprint(y.array.var(axis=0))\r\n\r\nwith chainer.using_config('train', False):\r\n y = brn(x)\r\nprint(y.array.var(axis=0))\r\n```\r\n\r\n### Error messages, stack traces, or logs\r\n```\r\n[0.51281106 0.49953052 0.48790243]\r\n[0.3506052 0.33283928 0.31892547]\r\n```\r\n\r\nHere, the \"normalized\" variance is around 1/(1+eps) if train, whereas it's around 1/(1+2 eps) otherwise.\n", "before_files": [{"content": "import warnings\n\nimport numpy\n\nfrom chainer import backend\nfrom chainer.backends import cuda\nfrom chainer import configuration\nfrom chainer import function\nfrom chainer.functions.normalization import batch_normalization\nfrom chainer.utils import type_check\n\n\ndef _xhat(x, mean, std, expander):\n x_mu = x - mean[expander]\n x_mu /= std[expander]\n return x_mu\n\n\nclass BatchRenormalizationFunction(function.Function):\n\n def __init__(self, eps=2e-5, mean=None, var=None, decay=0.9,\n rmax=1, dmax=0, update_statistics=True):\n self._running_mean = mean\n self._running_var = var\n self.rmax = rmax\n self.dmax = dmax\n self.r = None\n self.update_statistics = update_statistics\n\n self.eps = eps\n self.decay = decay\n\n def _warn_accessing_property(self):\n warnings.warn(\n 'The attributes of BatchRenormalizationFunction '\n 'are deprecated. 
'\n 'Consider setting update_statistics=True to '\n 'batch_renormalization to update running statistics.',\n DeprecationWarning)\n\n @property\n def running_mean(self):\n self._warn_accessing_property()\n return self._running_mean\n\n @property\n def running_var(self):\n self._warn_accessing_property()\n return self._running_var\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 3)\n x_type, gamma_type, beta_type = in_types\n M = type_check.eval(gamma_type.ndim)\n type_check.expect(\n x_type.dtype.kind == 'f',\n x_type.ndim >= gamma_type.ndim + 1,\n x_type.shape[1:1 + M] == gamma_type.shape,\n # TODO(tkerola): Check shape\n gamma_type.dtype.kind == 'f',\n gamma_type.dtype == beta_type.dtype,\n gamma_type.shape == beta_type.shape,\n )\n\n def forward(self, inputs):\n xp = backend.get_array_module(*inputs)\n x, gamma, beta = inputs\n\n # Note: we must be in train mode.\n assert configuration.config.train\n\n head_ndim = gamma.ndim + 1\n expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)\n\n # NOTE(tommi): cuDNN is not used since it does not support\n # batch renormalization\n axis = (0,) + tuple(range(head_ndim, x.ndim))\n mean = x.mean(axis=axis, dtype=gamma.dtype)\n var = x.var(axis=axis, dtype=gamma.dtype) + self.eps\n self.std = xp.sqrt(var, dtype=var.dtype)\n\n running_sigma = xp.sqrt(self._running_var + self.eps,\n dtype=self._running_mean.dtype)\n self.r = xp.clip(self.std / running_sigma,\n 1.0 / self.rmax, self.rmax)\n d = xp.clip(\n (mean - self._running_mean) / running_sigma,\n -self.dmax, self.dmax)\n\n gamma = gamma[expander]\n beta = beta[expander]\n\n if xp is numpy:\n self.x_hat = _xhat(x, mean, self.std, expander)\n self.x_hat_renorm = self.x_hat * self.r[expander] + d[expander]\n y = gamma * self.x_hat_renorm\n y += beta\n y = y.astype(dtype=x.dtype)\n else:\n self.x_hat, self.x_hat_renorm, y = cuda.elementwise(\n 'T x, U mean, U std, U gamma, U beta, U r, U d',\n 'U x_hat, U x_hat_renorm, T y',\n '''\n x_hat = (x - mean) / std;\n x_hat_renorm = x_hat * r + d;\n y = gamma * x_hat_renorm + beta;\n ''',\n 'brn_fwd')(\n x, mean[expander], self.std[expander], gamma, beta,\n self.r[expander], d[expander])\n\n if self.update_statistics:\n m = x.size // gamma[expander].size\n self._running_mean *= self.decay\n adjust = m / max(m - 1., 1.) 
# unbiased estimation\n temp_ar = xp.array(mean)\n temp_ar *= (1 - self.decay)\n self._running_mean += temp_ar\n del temp_ar\n self._running_var *= self.decay\n temp_ar = xp.array(var)\n temp_ar *= (1 - self.decay) * adjust\n self._running_var += temp_ar\n del temp_ar\n\n return y,\n\n def backward(self, inputs, grad_outputs):\n x, gamma, _ = inputs\n gy = grad_outputs[0]\n head_ndim = gamma.ndim + 1\n expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)\n m = gamma.dtype.type(x.size // gamma.size)\n axis = (0,) + tuple(range(head_ndim, x.ndim))\n xp = backend.get_array_module(x)\n\n # Note: we must be in train mode.\n assert configuration.config.train\n # NOTE(tommi): cuDNN is not used since it does not support\n # batch renormalization\n gbeta = gy.sum(axis=axis, dtype=gamma.dtype)\n ggamma = (gy * self.x_hat_renorm).sum(axis=axis)\n gsigma_batch = (gy * self.x_hat).sum(axis=axis)\n if xp is numpy:\n scale = (self.r * gamma / self.std)[expander]\n gx = scale * (gy - (self.x_hat * gsigma_batch[expander] +\n gbeta[expander]) / m)\n gx = gx.astype(dtype=x.dtype)\n else:\n inv_m = numpy.float32(1) / m\n gx = cuda.elementwise(\n 'T gy, U x_hat, U gamma, U std, U gsigma_batch, U gbeta, \\\n U inv_m, U r',\n 'T gx',\n 'gx = (r * gamma / std) * (gy - (x_hat * gsigma_batch + gbeta) * \\\n inv_m)',\n 'brn_bwd')(\n gy, self.x_hat, gamma[expander],\n self.std[expander], gsigma_batch[expander],\n gbeta[expander], inv_m, self.r[expander])\n return gx, ggamma, gbeta\n\n\ndef batch_renormalization(x, gamma, beta, rmax, dmax, eps=2e-5,\n running_mean=None, running_var=None, decay=0.9,\n update_statistics=False):\n \"\"\"Batch renormalization function.\n\n This is an extension of batch normalization, which ensures that the\n training and inference models generate the same outputs that depend on\n individual examples rather than the entire minibatch.\n\n .. note::\n\n This function does not perform in-place update to\n ``running_mean`` and ``running_var`` by default, contrary to\n :func:`~chainer.functions.batch_normalization`.\n If the function is called, it will not be possible to access the\n updated running mean and variance statistics, because they are members\n of the function object, which cannot be accessed by the caller.\n If it is desired to update the running statistics, call the function\n with ``update_statistics=True`` option.\n\n .. note::\n\n For the consistency with Batch Normalization, this function\n intentionally ignores some of the theoretical flaws in Algorithm 1 of\n the Batch Renormalization paper:\n\n - ``F.batch_renormalization`` maintains the moving average of variances\n :math:`\\\\sigma^2`, while the original paper maintains the moving\n average of standard deviations :math:`\\\\sigma`.\n - ``F.batch_renormalization`` applies Bessel's correction to update the\n moving average of variances.\n\n See: `Batch Renormalization: Towards Reducing Minibatch Dependence in\n Batch-Normalized Models <https://arxiv.org/abs/1702.03275>`_\n\n .. seealso:: :class:`~chainer.links.BatchRenormalization`\n\n \"\"\"\n if running_mean is None:\n raise TypeError('running_mean is required')\n if running_var is None:\n raise TypeError('running_var is required')\n return BatchRenormalizationFunction(\n eps, running_mean, running_var, decay, rmax, dmax, update_statistics\n )(x, gamma, beta)\n\n\ndef fixed_batch_renormalization(x, gamma, beta, mean, var, eps=2e-5):\n warnings.warn(\n 'fixed_batch_renormalization is deprecated. 
'\n 'Use fixed_batch_normalization instead.',\n DeprecationWarning)\n with configuration.using_config('train', False):\n return batch_normalization.fixed_batch_normalization(\n x, gamma, beta, mean, var, eps\n )\n", "path": "chainer/functions/normalization/batch_renormalization.py"}], "after_files": [{"content": "import warnings\n\nimport numpy\n\nfrom chainer import backend\nfrom chainer.backends import cuda\nfrom chainer import configuration\nfrom chainer import function\nfrom chainer.functions.normalization import batch_normalization\nfrom chainer.utils import type_check\n\n\ndef _xhat(x, mean, std, expander):\n x_mu = x - mean[expander]\n x_mu /= std[expander]\n return x_mu\n\n\nclass BatchRenormalizationFunction(function.Function):\n\n def __init__(self, eps=2e-5, mean=None, var=None, decay=0.9,\n rmax=1, dmax=0, update_statistics=True):\n self._running_mean = mean\n self._running_var = var\n self.rmax = rmax\n self.dmax = dmax\n self.r = None\n self.update_statistics = update_statistics\n\n self.eps = eps\n self.decay = decay\n\n def _warn_accessing_property(self):\n warnings.warn(\n 'The attributes of BatchRenormalizationFunction '\n 'are deprecated. '\n 'Consider setting update_statistics=True to '\n 'batch_renormalization to update running statistics.',\n DeprecationWarning)\n\n @property\n def running_mean(self):\n self._warn_accessing_property()\n return self._running_mean\n\n @property\n def running_var(self):\n self._warn_accessing_property()\n return self._running_var\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 3)\n x_type, gamma_type, beta_type = in_types\n M = type_check.eval(gamma_type.ndim)\n type_check.expect(\n x_type.dtype.kind == 'f',\n x_type.ndim >= gamma_type.ndim + 1,\n x_type.shape[1:1 + M] == gamma_type.shape,\n # TODO(tkerola): Check shape\n gamma_type.dtype.kind == 'f',\n gamma_type.dtype == beta_type.dtype,\n gamma_type.shape == beta_type.shape,\n )\n\n def forward(self, inputs):\n xp = backend.get_array_module(*inputs)\n x, gamma, beta = inputs\n\n # Note: we must be in train mode.\n assert configuration.config.train\n\n head_ndim = gamma.ndim + 1\n expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)\n\n # NOTE(tommi): cuDNN is not used since it does not support\n # batch renormalization\n axis = (0,) + tuple(range(head_ndim, x.ndim))\n mean = x.mean(axis=axis, dtype=gamma.dtype)\n var = x.var(axis=axis, dtype=gamma.dtype)\n self.std = xp.sqrt(var + self.eps, dtype=var.dtype)\n\n running_sigma = xp.sqrt(self._running_var + self.eps,\n dtype=self._running_mean.dtype)\n self.r = xp.clip(self.std / running_sigma,\n 1.0 / self.rmax, self.rmax)\n d = xp.clip(\n (mean - self._running_mean) / running_sigma,\n -self.dmax, self.dmax)\n\n gamma = gamma[expander]\n beta = beta[expander]\n\n if xp is numpy:\n self.x_hat = _xhat(x, mean, self.std, expander)\n self.x_hat_renorm = self.x_hat * self.r[expander] + d[expander]\n y = gamma * self.x_hat_renorm\n y += beta\n y = y.astype(dtype=x.dtype)\n else:\n self.x_hat, self.x_hat_renorm, y = cuda.elementwise(\n 'T x, U mean, U std, U gamma, U beta, U r, U d',\n 'U x_hat, U x_hat_renorm, T y',\n '''\n x_hat = (x - mean) / std;\n x_hat_renorm = x_hat * r + d;\n y = gamma * x_hat_renorm + beta;\n ''',\n 'brn_fwd')(\n x, mean[expander], self.std[expander], gamma, beta,\n self.r[expander], d[expander])\n\n if self.update_statistics:\n m = x.size // gamma[expander].size\n self._running_mean *= self.decay\n adjust = m / max(m - 1., 1.) 
# unbiased estimation\n temp_ar = xp.array(mean)\n temp_ar *= (1 - self.decay)\n self._running_mean += temp_ar\n del temp_ar\n self._running_var *= self.decay\n temp_ar = xp.array(var)\n temp_ar *= (1 - self.decay) * adjust\n self._running_var += temp_ar\n del temp_ar\n\n return y,\n\n def backward(self, inputs, grad_outputs):\n x, gamma, _ = inputs\n gy = grad_outputs[0]\n head_ndim = gamma.ndim + 1\n expander = (None, Ellipsis) + (None,) * (x.ndim - head_ndim)\n m = gamma.dtype.type(x.size // gamma.size)\n axis = (0,) + tuple(range(head_ndim, x.ndim))\n xp = backend.get_array_module(x)\n\n # Note: we must be in train mode.\n assert configuration.config.train\n # NOTE(tommi): cuDNN is not used since it does not support\n # batch renormalization\n gbeta = gy.sum(axis=axis, dtype=gamma.dtype)\n ggamma = (gy * self.x_hat_renorm).sum(axis=axis)\n gsigma_batch = (gy * self.x_hat).sum(axis=axis)\n if xp is numpy:\n scale = (self.r * gamma / self.std)[expander]\n gx = scale * (gy - (self.x_hat * gsigma_batch[expander] +\n gbeta[expander]) / m)\n gx = gx.astype(dtype=x.dtype)\n else:\n inv_m = numpy.float32(1) / m\n gx = cuda.elementwise(\n 'T gy, U x_hat, U gamma, U std, U gsigma_batch, U gbeta, \\\n U inv_m, U r',\n 'T gx',\n 'gx = (r * gamma / std) * (gy - (x_hat * gsigma_batch + gbeta) * \\\n inv_m)',\n 'brn_bwd')(\n gy, self.x_hat, gamma[expander],\n self.std[expander], gsigma_batch[expander],\n gbeta[expander], inv_m, self.r[expander])\n return gx, ggamma, gbeta\n\n\ndef batch_renormalization(x, gamma, beta, rmax, dmax, eps=2e-5,\n running_mean=None, running_var=None, decay=0.9,\n update_statistics=False):\n \"\"\"Batch renormalization function.\n\n This is an extension of batch normalization, which ensures that the\n training and inference models generate the same outputs that depend on\n individual examples rather than the entire minibatch.\n\n .. note::\n\n This function does not perform in-place update to\n ``running_mean`` and ``running_var`` by default, contrary to\n :func:`~chainer.functions.batch_normalization`.\n If the function is called, it will not be possible to access the\n updated running mean and variance statistics, because they are members\n of the function object, which cannot be accessed by the caller.\n If it is desired to update the running statistics, call the function\n with ``update_statistics=True`` option.\n\n .. note::\n\n For the consistency with Batch Normalization, this function\n intentionally ignores some of the theoretical flaws in Algorithm 1 of\n the Batch Renormalization paper:\n\n - ``F.batch_renormalization`` maintains the moving average of variances\n :math:`\\\\sigma^2`, while the original paper maintains the moving\n average of standard deviations :math:`\\\\sigma`.\n - ``F.batch_renormalization`` applies Bessel's correction to update the\n moving average of variances.\n\n See: `Batch Renormalization: Towards Reducing Minibatch Dependence in\n Batch-Normalized Models <https://arxiv.org/abs/1702.03275>`_\n\n .. seealso:: :class:`~chainer.links.BatchRenormalization`\n\n \"\"\"\n if running_mean is None:\n raise TypeError('running_mean is required')\n if running_var is None:\n raise TypeError('running_var is required')\n return BatchRenormalizationFunction(\n eps, running_mean, running_var, decay, rmax, dmax, update_statistics\n )(x, gamma, beta)\n\n\ndef fixed_batch_renormalization(x, gamma, beta, mean, var, eps=2e-5):\n warnings.warn(\n 'fixed_batch_renormalization is deprecated. 
'\n 'Use fixed_batch_normalization instead.',\n DeprecationWarning)\n with configuration.using_config('train', False):\n return batch_normalization.fixed_batch_normalization(\n x, gamma, beta, mean, var, eps\n )\n", "path": "chainer/functions/normalization/batch_renormalization.py"}]} | 3,151 | 197 |
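A small NumPy sketch (illustrative only, independent of Chainer; moving-average decay and Bessel correction are ignored) of the arithmetic behind the numbers in the report above: training divides by sqrt(var + eps) while the running statistic already stores var + eps, so inference effectively divides by sqrt(var + 2*eps).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)
eps = 1.0
var = x.var()                      # ~1.0 for standard normal data

# Training path of the buggy forward(): eps is folded into `var` before sqrt,
# and that same eps-inflated value is what accumulates into running_var.
y_train = x / np.sqrt(var + eps)
running_var = var + eps

# Inference path: fixed_batch_normalization adds eps to running_var again.
y_infer = x / np.sqrt(running_var + eps)

print(y_train.var())   # ~ var / (var + eps)      -> about 0.5
print(y_infer.var())   # ~ var / (var + 2 * eps)  -> about 0.33
```

The patch above removes the first addition (sqrt is taken over `var + eps` without inflating the stored `var`), so both paths normalize by the same quantity.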
gh_patches_debug_38830 | rasdani/github-patches | git_diff | searxng__searxng-436 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing language: pt-BR
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
1.0.0-950-dfdf71bf
(docker)
**How did you install SearXNG?**
Docker
**What happened?**
Language `pt-BR` is missing.
https://github.com/searxng/searxng/blob/master/searx/languages.py
It was present in a previous version.
Google (and some other engines) supports this language, but it is not showing up in the list.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searxng_extra/update/update_languages.py`
Content:
```
1 #!/usr/bin/env python
2 # SPDX-License-Identifier: AGPL-3.0-or-later
3
4 # This script generates languages.py from intersecting each engine's supported languages.
5 #
6 # Output files: searx/data/engines_languages.json and searx/languages.py
7
8 import json
9 from pathlib import Path
10 from pprint import pformat
11 from babel import Locale, UnknownLocaleError
12 from babel.languages import get_global
13
14 from searx import settings, searx_dir
15 from searx.engines import load_engines, engines
16 from searx.network import set_timeout_for_thread
17
18 # Output files.
19 engines_languages_file = Path(searx_dir) / 'data' / 'engines_languages.json'
20 languages_file = Path(searx_dir) / 'languages.py'
21
22
23 # Fetchs supported languages for each engine and writes json file with those.
24 def fetch_supported_languages():
25 set_timeout_for_thread(10.0)
26
27 engines_languages = dict()
28 names = list(engines)
29 names.sort()
30
31 for engine_name in names:
32 if hasattr(engines[engine_name], 'fetch_supported_languages'):
33 engines_languages[engine_name] = engines[engine_name].fetch_supported_languages()
34 print("fetched %s languages from engine %s" % (
35 len(engines_languages[engine_name]), engine_name))
36 if type(engines_languages[engine_name]) == list:
37 engines_languages[engine_name] = sorted(engines_languages[engine_name])
38
39 # write json file
40 with open(engines_languages_file, 'w', encoding='utf-8') as f:
41 json.dump(engines_languages, f, indent=2, sort_keys=True)
42
43 return engines_languages
44
45
46 # Get babel Locale object from lang_code if possible.
47 def get_locale(lang_code):
48 try:
49 locale = Locale.parse(lang_code, sep='-')
50 return locale
51 except (UnknownLocaleError, ValueError):
52 return None
53
54
55 # Join all language lists.
56 def join_language_lists(engines_languages):
57 language_list = dict()
58 for engine_name in engines_languages:
59 for lang_code in engines_languages[engine_name]:
60
61 # apply custom fixes if necessary
62 if lang_code in getattr(engines[engine_name], 'language_aliases', {}).values():
63 lang_code = next(lc for lc, alias in engines[engine_name].language_aliases.items()
64 if lang_code == alias)
65
66 locale = get_locale(lang_code)
67
68 # ensure that lang_code uses standard language and country codes
69 if locale and locale.territory:
70 lang_code = "{lang}-{country}".format(lang=locale.language, country=locale.territory)
71 short_code = lang_code.split('-')[0]
72
73 # add language without country if not in list
74 if short_code not in language_list:
75 if locale:
76 # get language's data from babel's Locale object
77 language_name = locale.get_language_name().title()
78 english_name = locale.english_name.split(' (')[0]
79 elif short_code in engines_languages['wikipedia']:
80 # get language's data from wikipedia if not known by babel
81 language_name = engines_languages['wikipedia'][short_code]['name']
82 english_name = engines_languages['wikipedia'][short_code]['english_name']
83 else:
84 language_name = None
85 english_name = None
86
87 # add language to list
88 language_list[short_code] = {'name': language_name,
89 'english_name': english_name,
90 'counter': set(),
91 'countries': dict()}
92
93 # add language with country if not in list
94 if lang_code != short_code and lang_code not in language_list[short_code]['countries']:
95 country_name = ''
96 if locale:
97 # get country name from babel's Locale object
98 country_name = locale.get_territory_name()
99
100 language_list[short_code]['countries'][lang_code] = {'country_name': country_name,
101 'counter': set()}
102
103 # count engine for both language_country combination and language alone
104 language_list[short_code]['counter'].add(engine_name)
105 if lang_code != short_code:
106 language_list[short_code]['countries'][lang_code]['counter'].add(engine_name)
107
108 return language_list
109
110
111 # Filter language list so it only includes the most supported languages and countries
112 def filter_language_list(all_languages):
113 min_engines_per_lang = 13
114 min_engines_per_country = 10
115 main_engines = [engine_name for engine_name in engines.keys()
116 if 'general' in engines[engine_name].categories and
117 engines[engine_name].supported_languages and
118 not engines[engine_name].disabled]
119
120 # filter list to include only languages supported by most engines or all default general engines
121 filtered_languages = {code: lang for code, lang
122 in all_languages.items()
123 if (len(lang['counter']) >= min_engines_per_lang or
124 all(main_engine in lang['counter']
125 for main_engine in main_engines))}
126
127 def _copy_lang_data(lang, country_name=None):
128 new_dict = dict()
129 new_dict['name'] = all_languages[lang]['name']
130 new_dict['english_name'] = all_languages[lang]['english_name']
131 if country_name:
132 new_dict['country_name'] = country_name
133 return new_dict
134
135 # for each language get country codes supported by most engines or at least one country code
136 filtered_languages_with_countries = dict()
137 for lang, lang_data in filtered_languages.items():
138 countries = lang_data['countries']
139 filtered_countries = dict()
140
141 # get language's country codes with enough supported engines
142 for lang_country, country_data in countries.items():
143 if len(country_data['counter']) >= min_engines_per_country:
144 filtered_countries[lang_country] = _copy_lang_data(lang, country_data['country_name'])
145
146 # add language without countries too if there's more than one country to choose from
147 if len(filtered_countries) > 1:
148 filtered_countries[lang] = _copy_lang_data(lang)
149 elif len(filtered_countries) == 1:
150 # if there's only one country per language, it's not necessary to show country name
151 lang_country = next(iter(filtered_countries))
152 filtered_countries[lang_country]['country_name'] = None
153
154 # if no country has enough engines try to get most likely country code from babel
155 if not filtered_countries:
156 lang_country = None
157 subtags = get_global('likely_subtags').get(lang)
158 if subtags:
159 country_code = subtags.split('_')[-1]
160 if len(country_code) == 2:
161 lang_country = "{lang}-{country}".format(lang=lang, country=country_code)
162
163 if lang_country:
164 filtered_countries[lang_country] = _copy_lang_data(lang)
165 else:
166 filtered_countries[lang] = _copy_lang_data(lang)
167
168 filtered_languages_with_countries.update(filtered_countries)
169
170 return filtered_languages_with_countries
171
172
173 # Write languages.py.
174 def write_languages_file(languages):
175 file_headers = (
176 "# -*- coding: utf-8 -*-",
177 "# list of language codes",
178 "# this file is generated automatically by utils/fetch_languages.py",
179 "language_codes ="
180 )
181
182 language_codes = tuple([
183 (
184 code,
185 languages[code]['name'].split(' (')[0],
186 languages[code].get('country_name') or '',
187 languages[code].get('english_name') or ''
188 ) for code in sorted(languages)
189 ])
190
191 with open(languages_file, 'w') as new_file:
192 file_content = "{file_headers} \\\n{language_codes}".format(
193 file_headers='\n'.join(file_headers),
194 language_codes=pformat(language_codes, indent=4)
195 )
196 new_file.write(file_content)
197 new_file.close()
198
199
200 if __name__ == "__main__":
201 load_engines(settings['engines'])
202 engines_languages = fetch_supported_languages()
203 all_languages = join_language_lists(engines_languages)
204 filtered_languages = filter_language_list(all_languages)
205 write_languages_file(filtered_languages)
206
```
Path: `searx/languages.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # list of language codes
3 # this file is generated automatically by utils/fetch_languages.py
4 language_codes = \
5 ( ('af-ZA', 'Afrikaans', '', 'Afrikaans'),
6 ('ar-EG', 'العربية', '', 'Arabic'),
7 ('be-BY', 'Беларуская', '', 'Belarusian'),
8 ('bg-BG', 'Български', '', 'Bulgarian'),
9 ('ca-ES', 'Català', '', 'Catalan'),
10 ('cs-CZ', 'Čeština', '', 'Czech'),
11 ('da-DK', 'Dansk', '', 'Danish'),
12 ('de', 'Deutsch', '', 'German'),
13 ('de-AT', 'Deutsch', 'Österreich', 'German'),
14 ('de-CH', 'Deutsch', 'Schweiz', 'German'),
15 ('de-DE', 'Deutsch', 'Deutschland', 'German'),
16 ('el-GR', 'Ελληνικά', '', 'Greek'),
17 ('en', 'English', '', 'English'),
18 ('en-AU', 'English', 'Australia', 'English'),
19 ('en-CA', 'English', 'Canada', 'English'),
20 ('en-GB', 'English', 'United Kingdom', 'English'),
21 ('en-IE', 'English', 'Ireland', 'English'),
22 ('en-NZ', 'English', 'New Zealand', 'English'),
23 ('en-US', 'English', 'United States', 'English'),
24 ('es', 'Español', '', 'Spanish'),
25 ('es-AR', 'Español', 'Argentina', 'Spanish'),
26 ('es-CL', 'Español', 'Chile', 'Spanish'),
27 ('es-ES', 'Español', 'España', 'Spanish'),
28 ('es-MX', 'Español', 'México', 'Spanish'),
29 ('et-EE', 'Eesti', '', 'Estonian'),
30 ('fa-IR', 'فارسی', '', 'Persian'),
31 ('fi-FI', 'Suomi', '', 'Finnish'),
32 ('fr', 'Français', '', 'French'),
33 ('fr-BE', 'Français', 'Belgique', 'French'),
34 ('fr-CA', 'Français', 'Canada', 'French'),
35 ('fr-CH', 'Français', 'Suisse', 'French'),
36 ('fr-FR', 'Français', 'France', 'French'),
37 ('he-IL', 'עברית', '', 'Hebrew'),
38 ('hr-HR', 'Hrvatski', '', 'Croatian'),
39 ('hu-HU', 'Magyar', '', 'Hungarian'),
40 ('id-ID', 'Indonesia', '', 'Indonesian'),
41 ('is-IS', 'Íslenska', '', 'Icelandic'),
42 ('it-IT', 'Italiano', '', 'Italian'),
43 ('ja-JP', '日本語', '', 'Japanese'),
44 ('ko-KR', '한국어', '', 'Korean'),
45 ('lt-LT', 'Lietuvių', '', 'Lithuanian'),
46 ('lv-LV', 'Latviešu', '', 'Latvian'),
47 ('nb-NO', 'Norsk Bokmål', '', 'Norwegian Bokmål'),
48 ('nl', 'Nederlands', '', 'Dutch'),
49 ('nl-BE', 'Nederlands', 'België', 'Dutch'),
50 ('nl-NL', 'Nederlands', 'Nederland', 'Dutch'),
51 ('pl-PL', 'Polski', '', 'Polish'),
52 ('pt-PT', 'Português', '', 'Portuguese'),
53 ('ro-RO', 'Română', '', 'Romanian'),
54 ('ru-RU', 'Русский', '', 'Russian'),
55 ('sk-SK', 'Slovenčina', '', 'Slovak'),
56 ('sl-SI', 'Slovenščina', '', 'Slovenian'),
57 ('sr-RS', 'Srpski', '', 'Serbian'),
58 ('sv-SE', 'Svenska', '', 'Swedish'),
59 ('sw-TZ', 'Kiswahili', '', 'Swahili'),
60 ('th-TH', 'ไทย', '', 'Thai'),
61 ('tr-TR', 'Türkçe', '', 'Turkish'),
62 ('uk-UA', 'Українська', '', 'Ukrainian'),
63 ('vi-VN', 'Tiếng Việt', '', 'Vietnamese'),
64 ('zh', '中文', '', 'Chinese'),
65 ('zh-CN', '中文', '中国', 'Chinese'),
66 ('zh-HK', '中文', '中國香港特別行政區', 'Chinese'),
67 ('zh-TW', '中文', '台灣', 'Chinese'))
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/languages.py b/searx/languages.py
--- a/searx/languages.py
+++ b/searx/languages.py
@@ -19,13 +19,21 @@
('en-CA', 'English', 'Canada', 'English'),
('en-GB', 'English', 'United Kingdom', 'English'),
('en-IE', 'English', 'Ireland', 'English'),
+ ('en-IN', 'English', 'India', 'English'),
+ ('en-MY', 'English', 'Malaysia', 'English'),
('en-NZ', 'English', 'New Zealand', 'English'),
+ ('en-PH', 'English', 'Philippines', 'English'),
+ ('en-SG', 'English', 'Singapore', 'English'),
('en-US', 'English', 'United States', 'English'),
+ ('en-ZA', 'English', 'South Africa', 'English'),
('es', 'Español', '', 'Spanish'),
('es-AR', 'Español', 'Argentina', 'Spanish'),
('es-CL', 'Español', 'Chile', 'Spanish'),
+ ('es-CO', 'Español', 'Colombia', 'Spanish'),
('es-ES', 'Español', 'España', 'Spanish'),
('es-MX', 'Español', 'México', 'Spanish'),
+ ('es-PE', 'Español', 'Perú', 'Spanish'),
+ ('es-US', 'Español', 'Estados Unidos', 'Spanish'),
('et-EE', 'Eesti', '', 'Estonian'),
('fa-IR', 'فارسی', '', 'Persian'),
('fi-FI', 'Suomi', '', 'Finnish'),
@@ -49,12 +57,14 @@
('nl-BE', 'Nederlands', 'België', 'Dutch'),
('nl-NL', 'Nederlands', 'Nederland', 'Dutch'),
('pl-PL', 'Polski', '', 'Polish'),
- ('pt-PT', 'Português', '', 'Portuguese'),
+ ('pt', 'Português', '', 'Portuguese'),
+ ('pt-BR', 'Português', 'Brasil', 'Portuguese'),
+ ('pt-PT', 'Português', 'Portugal', 'Portuguese'),
('ro-RO', 'Română', '', 'Romanian'),
('ru-RU', 'Русский', '', 'Russian'),
('sk-SK', 'Slovenčina', '', 'Slovak'),
('sl-SI', 'Slovenščina', '', 'Slovenian'),
- ('sr-RS', 'Srpski', '', 'Serbian'),
+ ('sr-RS', 'Српски', '', 'Serbian'),
('sv-SE', 'Svenska', '', 'Swedish'),
('sw-TZ', 'Kiswahili', '', 'Swahili'),
('th-TH', 'ไทย', '', 'Thai'),
diff --git a/searxng_extra/update/update_languages.py b/searxng_extra/update/update_languages.py
--- a/searxng_extra/update/update_languages.py
+++ b/searxng_extra/update/update_languages.py
@@ -111,7 +111,7 @@
# Filter language list so it only includes the most supported languages and countries
def filter_language_list(all_languages):
min_engines_per_lang = 13
- min_engines_per_country = 10
+ min_engines_per_country = 7
main_engines = [engine_name for engine_name in engines.keys()
if 'general' in engines[engine_name].categories and
engines[engine_name].supported_languages and
| {"golden_diff": "diff --git a/searx/languages.py b/searx/languages.py\n--- a/searx/languages.py\n+++ b/searx/languages.py\n@@ -19,13 +19,21 @@\n ('en-CA', 'English', 'Canada', 'English'),\n ('en-GB', 'English', 'United Kingdom', 'English'),\n ('en-IE', 'English', 'Ireland', 'English'),\n+ ('en-IN', 'English', 'India', 'English'),\n+ ('en-MY', 'English', 'Malaysia', 'English'),\n ('en-NZ', 'English', 'New Zealand', 'English'),\n+ ('en-PH', 'English', 'Philippines', 'English'),\n+ ('en-SG', 'English', 'Singapore', 'English'),\n ('en-US', 'English', 'United States', 'English'),\n+ ('en-ZA', 'English', 'South Africa', 'English'),\n ('es', 'Espa\u00f1ol', '', 'Spanish'),\n ('es-AR', 'Espa\u00f1ol', 'Argentina', 'Spanish'),\n ('es-CL', 'Espa\u00f1ol', 'Chile', 'Spanish'),\n+ ('es-CO', 'Espa\u00f1ol', 'Colombia', 'Spanish'),\n ('es-ES', 'Espa\u00f1ol', 'Espa\u00f1a', 'Spanish'),\n ('es-MX', 'Espa\u00f1ol', 'M\u00e9xico', 'Spanish'),\n+ ('es-PE', 'Espa\u00f1ol', 'Per\u00fa', 'Spanish'),\n+ ('es-US', 'Espa\u00f1ol', 'Estados Unidos', 'Spanish'),\n ('et-EE', 'Eesti', '', 'Estonian'),\n ('fa-IR', '\u0641\u0627\u0631\u0633\u06cc', '', 'Persian'),\n ('fi-FI', 'Suomi', '', 'Finnish'),\n@@ -49,12 +57,14 @@\n ('nl-BE', 'Nederlands', 'Belgi\u00eb', 'Dutch'),\n ('nl-NL', 'Nederlands', 'Nederland', 'Dutch'),\n ('pl-PL', 'Polski', '', 'Polish'),\n- ('pt-PT', 'Portugu\u00eas', '', 'Portuguese'),\n+ ('pt', 'Portugu\u00eas', '', 'Portuguese'),\n+ ('pt-BR', 'Portugu\u00eas', 'Brasil', 'Portuguese'),\n+ ('pt-PT', 'Portugu\u00eas', 'Portugal', 'Portuguese'),\n ('ro-RO', 'Rom\u00e2n\u0103', '', 'Romanian'),\n ('ru-RU', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439', '', 'Russian'),\n ('sk-SK', 'Sloven\u010dina', '', 'Slovak'),\n ('sl-SI', 'Sloven\u0161\u010dina', '', 'Slovenian'),\n- ('sr-RS', 'Srpski', '', 'Serbian'),\n+ ('sr-RS', '\u0421\u0440\u043f\u0441\u043a\u0438', '', 'Serbian'),\n ('sv-SE', 'Svenska', '', 'Swedish'),\n ('sw-TZ', 'Kiswahili', '', 'Swahili'),\n ('th-TH', '\u0e44\u0e17\u0e22', '', 'Thai'),\ndiff --git a/searxng_extra/update/update_languages.py b/searxng_extra/update/update_languages.py\n--- a/searxng_extra/update/update_languages.py\n+++ b/searxng_extra/update/update_languages.py\n@@ -111,7 +111,7 @@\n # Filter language list so it only includes the most supported languages and countries\n def filter_language_list(all_languages):\n min_engines_per_lang = 13\n- min_engines_per_country = 10\n+ min_engines_per_country = 7\n main_engines = [engine_name for engine_name in engines.keys()\n if 'general' in engines[engine_name].categories and\n engines[engine_name].supported_languages and\n", "issue": "Missing language: pt-BR\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n\r\n1.0.0-950-dfdf71bf\r\n(docker)\r\n**How did you install SearXNG?**\r\n\r\nDocker\r\n\r\n**What happened?**\r\nLanguage `pt-BR` is missing.\r\nhttps://github.com/searxng/searxng/blob/master/searx/languages.py\r\n\r\nIt was present in a previous version.\r\n\r\nGoogle supports this language (and some other engines) but it is not showing up on the list.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# SPDX-License-Identifier: AGPL-3.0-or-later\n\n# This script generates languages.py from intersecting each engine's supported languages.\n#\n# Output files: searx/data/engines_languages.json and searx/languages.py\n\nimport json\nfrom pathlib import Path\nfrom pprint import 
pformat\nfrom babel import Locale, UnknownLocaleError\nfrom babel.languages import get_global\n\nfrom searx import settings, searx_dir\nfrom searx.engines import load_engines, engines\nfrom searx.network import set_timeout_for_thread\n\n# Output files.\nengines_languages_file = Path(searx_dir) / 'data' / 'engines_languages.json'\nlanguages_file = Path(searx_dir) / 'languages.py'\n\n\n# Fetchs supported languages for each engine and writes json file with those.\ndef fetch_supported_languages():\n set_timeout_for_thread(10.0)\n\n engines_languages = dict()\n names = list(engines)\n names.sort()\n\n for engine_name in names:\n if hasattr(engines[engine_name], 'fetch_supported_languages'):\n engines_languages[engine_name] = engines[engine_name].fetch_supported_languages()\n print(\"fetched %s languages from engine %s\" % (\n len(engines_languages[engine_name]), engine_name))\n if type(engines_languages[engine_name]) == list:\n engines_languages[engine_name] = sorted(engines_languages[engine_name])\n\n # write json file\n with open(engines_languages_file, 'w', encoding='utf-8') as f:\n json.dump(engines_languages, f, indent=2, sort_keys=True)\n\n return engines_languages\n\n\n# Get babel Locale object from lang_code if possible.\ndef get_locale(lang_code):\n try:\n locale = Locale.parse(lang_code, sep='-')\n return locale\n except (UnknownLocaleError, ValueError):\n return None\n\n\n# Join all language lists.\ndef join_language_lists(engines_languages):\n language_list = dict()\n for engine_name in engines_languages:\n for lang_code in engines_languages[engine_name]:\n\n # apply custom fixes if necessary\n if lang_code in getattr(engines[engine_name], 'language_aliases', {}).values():\n lang_code = next(lc for lc, alias in engines[engine_name].language_aliases.items()\n if lang_code == alias)\n\n locale = get_locale(lang_code)\n\n # ensure that lang_code uses standard language and country codes\n if locale and locale.territory:\n lang_code = \"{lang}-{country}\".format(lang=locale.language, country=locale.territory)\n short_code = lang_code.split('-')[0]\n\n # add language without country if not in list\n if short_code not in language_list:\n if locale:\n # get language's data from babel's Locale object\n language_name = locale.get_language_name().title()\n english_name = locale.english_name.split(' (')[0]\n elif short_code in engines_languages['wikipedia']:\n # get language's data from wikipedia if not known by babel\n language_name = engines_languages['wikipedia'][short_code]['name']\n english_name = engines_languages['wikipedia'][short_code]['english_name']\n else:\n language_name = None\n english_name = None\n\n # add language to list\n language_list[short_code] = {'name': language_name,\n 'english_name': english_name,\n 'counter': set(),\n 'countries': dict()}\n\n # add language with country if not in list\n if lang_code != short_code and lang_code not in language_list[short_code]['countries']:\n country_name = ''\n if locale:\n # get country name from babel's Locale object\n country_name = locale.get_territory_name()\n\n language_list[short_code]['countries'][lang_code] = {'country_name': country_name,\n 'counter': set()}\n\n # count engine for both language_country combination and language alone\n language_list[short_code]['counter'].add(engine_name)\n if lang_code != short_code:\n language_list[short_code]['countries'][lang_code]['counter'].add(engine_name)\n\n return language_list\n\n\n# Filter language list so it only includes the most supported languages and countries\ndef 
filter_language_list(all_languages):\n min_engines_per_lang = 13\n min_engines_per_country = 10\n main_engines = [engine_name for engine_name in engines.keys()\n if 'general' in engines[engine_name].categories and\n engines[engine_name].supported_languages and\n not engines[engine_name].disabled]\n\n # filter list to include only languages supported by most engines or all default general engines\n filtered_languages = {code: lang for code, lang\n in all_languages.items()\n if (len(lang['counter']) >= min_engines_per_lang or\n all(main_engine in lang['counter']\n for main_engine in main_engines))}\n\n def _copy_lang_data(lang, country_name=None):\n new_dict = dict()\n new_dict['name'] = all_languages[lang]['name']\n new_dict['english_name'] = all_languages[lang]['english_name']\n if country_name:\n new_dict['country_name'] = country_name\n return new_dict\n\n # for each language get country codes supported by most engines or at least one country code\n filtered_languages_with_countries = dict()\n for lang, lang_data in filtered_languages.items():\n countries = lang_data['countries']\n filtered_countries = dict()\n\n # get language's country codes with enough supported engines\n for lang_country, country_data in countries.items():\n if len(country_data['counter']) >= min_engines_per_country:\n filtered_countries[lang_country] = _copy_lang_data(lang, country_data['country_name'])\n\n # add language without countries too if there's more than one country to choose from\n if len(filtered_countries) > 1:\n filtered_countries[lang] = _copy_lang_data(lang)\n elif len(filtered_countries) == 1:\n # if there's only one country per language, it's not necessary to show country name\n lang_country = next(iter(filtered_countries))\n filtered_countries[lang_country]['country_name'] = None\n\n # if no country has enough engines try to get most likely country code from babel\n if not filtered_countries:\n lang_country = None\n subtags = get_global('likely_subtags').get(lang)\n if subtags:\n country_code = subtags.split('_')[-1]\n if len(country_code) == 2:\n lang_country = \"{lang}-{country}\".format(lang=lang, country=country_code)\n\n if lang_country:\n filtered_countries[lang_country] = _copy_lang_data(lang)\n else:\n filtered_countries[lang] = _copy_lang_data(lang)\n\n filtered_languages_with_countries.update(filtered_countries)\n\n return filtered_languages_with_countries\n\n\n# Write languages.py.\ndef write_languages_file(languages):\n file_headers = (\n \"# -*- coding: utf-8 -*-\",\n \"# list of language codes\",\n \"# this file is generated automatically by utils/fetch_languages.py\",\n \"language_codes =\"\n )\n\n language_codes = tuple([\n (\n code,\n languages[code]['name'].split(' (')[0],\n languages[code].get('country_name') or '',\n languages[code].get('english_name') or ''\n ) for code in sorted(languages)\n ])\n\n with open(languages_file, 'w') as new_file:\n file_content = \"{file_headers} \\\\\\n{language_codes}\".format(\n file_headers='\\n'.join(file_headers),\n language_codes=pformat(language_codes, indent=4)\n )\n new_file.write(file_content)\n new_file.close()\n\n\nif __name__ == \"__main__\":\n load_engines(settings['engines'])\n engines_languages = fetch_supported_languages()\n all_languages = join_language_lists(engines_languages)\n filtered_languages = filter_language_list(all_languages)\n write_languages_file(filtered_languages)\n", "path": "searxng_extra/update/update_languages.py"}, {"content": "# -*- coding: utf-8 -*-\n# list of language codes\n# this file is generated 
automatically by utils/fetch_languages.py\nlanguage_codes = \\\n( ('af-ZA', 'Afrikaans', '', 'Afrikaans'),\n ('ar-EG', '\u0627\u0644\u0639\u0631\u0628\u064a\u0629', '', 'Arabic'),\n ('be-BY', '\u0411\u0435\u043b\u0430\u0440\u0443\u0441\u043a\u0430\u044f', '', 'Belarusian'),\n ('bg-BG', '\u0411\u044a\u043b\u0433\u0430\u0440\u0441\u043a\u0438', '', 'Bulgarian'),\n ('ca-ES', 'Catal\u00e0', '', 'Catalan'),\n ('cs-CZ', '\u010ce\u0161tina', '', 'Czech'),\n ('da-DK', 'Dansk', '', 'Danish'),\n ('de', 'Deutsch', '', 'German'),\n ('de-AT', 'Deutsch', '\u00d6sterreich', 'German'),\n ('de-CH', 'Deutsch', 'Schweiz', 'German'),\n ('de-DE', 'Deutsch', 'Deutschland', 'German'),\n ('el-GR', '\u0395\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ac', '', 'Greek'),\n ('en', 'English', '', 'English'),\n ('en-AU', 'English', 'Australia', 'English'),\n ('en-CA', 'English', 'Canada', 'English'),\n ('en-GB', 'English', 'United Kingdom', 'English'),\n ('en-IE', 'English', 'Ireland', 'English'),\n ('en-NZ', 'English', 'New Zealand', 'English'),\n ('en-US', 'English', 'United States', 'English'),\n ('es', 'Espa\u00f1ol', '', 'Spanish'),\n ('es-AR', 'Espa\u00f1ol', 'Argentina', 'Spanish'),\n ('es-CL', 'Espa\u00f1ol', 'Chile', 'Spanish'),\n ('es-ES', 'Espa\u00f1ol', 'Espa\u00f1a', 'Spanish'),\n ('es-MX', 'Espa\u00f1ol', 'M\u00e9xico', 'Spanish'),\n ('et-EE', 'Eesti', '', 'Estonian'),\n ('fa-IR', '\u0641\u0627\u0631\u0633\u06cc', '', 'Persian'),\n ('fi-FI', 'Suomi', '', 'Finnish'),\n ('fr', 'Fran\u00e7ais', '', 'French'),\n ('fr-BE', 'Fran\u00e7ais', 'Belgique', 'French'),\n ('fr-CA', 'Fran\u00e7ais', 'Canada', 'French'),\n ('fr-CH', 'Fran\u00e7ais', 'Suisse', 'French'),\n ('fr-FR', 'Fran\u00e7ais', 'France', 'French'),\n ('he-IL', '\u05e2\u05d1\u05e8\u05d9\u05ea', '', 'Hebrew'),\n ('hr-HR', 'Hrvatski', '', 'Croatian'),\n ('hu-HU', 'Magyar', '', 'Hungarian'),\n ('id-ID', 'Indonesia', '', 'Indonesian'),\n ('is-IS', '\u00cdslenska', '', 'Icelandic'),\n ('it-IT', 'Italiano', '', 'Italian'),\n ('ja-JP', '\u65e5\u672c\u8a9e', '', 'Japanese'),\n ('ko-KR', '\ud55c\uad6d\uc5b4', '', 'Korean'),\n ('lt-LT', 'Lietuvi\u0173', '', 'Lithuanian'),\n ('lv-LV', 'Latvie\u0161u', '', 'Latvian'),\n ('nb-NO', 'Norsk Bokm\u00e5l', '', 'Norwegian Bokm\u00e5l'),\n ('nl', 'Nederlands', '', 'Dutch'),\n ('nl-BE', 'Nederlands', 'Belgi\u00eb', 'Dutch'),\n ('nl-NL', 'Nederlands', 'Nederland', 'Dutch'),\n ('pl-PL', 'Polski', '', 'Polish'),\n ('pt-PT', 'Portugu\u00eas', '', 'Portuguese'),\n ('ro-RO', 'Rom\u00e2n\u0103', '', 'Romanian'),\n ('ru-RU', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439', '', 'Russian'),\n ('sk-SK', 'Sloven\u010dina', '', 'Slovak'),\n ('sl-SI', 'Sloven\u0161\u010dina', '', 'Slovenian'),\n ('sr-RS', 'Srpski', '', 'Serbian'),\n ('sv-SE', 'Svenska', '', 'Swedish'),\n ('sw-TZ', 'Kiswahili', '', 'Swahili'),\n ('th-TH', '\u0e44\u0e17\u0e22', '', 'Thai'),\n ('tr-TR', 'T\u00fcrk\u00e7e', '', 'Turkish'),\n ('uk-UA', '\u0423\u043a\u0440\u0430\u0457\u043d\u0441\u044c\u043a\u0430', '', 'Ukrainian'),\n ('vi-VN', 'Ti\u1ebfng Vi\u1ec7t', '', 'Vietnamese'),\n ('zh', '\u4e2d\u6587', '', 'Chinese'),\n ('zh-CN', '\u4e2d\u6587', '\u4e2d\u56fd', 'Chinese'),\n ('zh-HK', '\u4e2d\u6587', '\u4e2d\u570b\u9999\u6e2f\u7279\u5225\u884c\u653f\u5340', 'Chinese'),\n ('zh-TW', '\u4e2d\u6587', '\u53f0\u7063', 'Chinese'))", "path": "searx/languages.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# SPDX-License-Identifier: AGPL-3.0-or-later\n\n# This script generates languages.py from intersecting each engine's supported languages.\n#\n# Output files: 
searx/data/engines_languages.json and searx/languages.py\n\nimport json\nfrom pathlib import Path\nfrom pprint import pformat\nfrom babel import Locale, UnknownLocaleError\nfrom babel.languages import get_global\n\nfrom searx import settings, searx_dir\nfrom searx.engines import load_engines, engines\nfrom searx.network import set_timeout_for_thread\n\n# Output files.\nengines_languages_file = Path(searx_dir) / 'data' / 'engines_languages.json'\nlanguages_file = Path(searx_dir) / 'languages.py'\n\n\n# Fetchs supported languages for each engine and writes json file with those.\ndef fetch_supported_languages():\n set_timeout_for_thread(10.0)\n\n engines_languages = dict()\n names = list(engines)\n names.sort()\n\n for engine_name in names:\n if hasattr(engines[engine_name], 'fetch_supported_languages'):\n engines_languages[engine_name] = engines[engine_name].fetch_supported_languages()\n print(\"fetched %s languages from engine %s\" % (\n len(engines_languages[engine_name]), engine_name))\n if type(engines_languages[engine_name]) == list:\n engines_languages[engine_name] = sorted(engines_languages[engine_name])\n\n # write json file\n with open(engines_languages_file, 'w', encoding='utf-8') as f:\n json.dump(engines_languages, f, indent=2, sort_keys=True)\n\n return engines_languages\n\n\n# Get babel Locale object from lang_code if possible.\ndef get_locale(lang_code):\n try:\n locale = Locale.parse(lang_code, sep='-')\n return locale\n except (UnknownLocaleError, ValueError):\n return None\n\n\n# Join all language lists.\ndef join_language_lists(engines_languages):\n language_list = dict()\n for engine_name in engines_languages:\n for lang_code in engines_languages[engine_name]:\n\n # apply custom fixes if necessary\n if lang_code in getattr(engines[engine_name], 'language_aliases', {}).values():\n lang_code = next(lc for lc, alias in engines[engine_name].language_aliases.items()\n if lang_code == alias)\n\n locale = get_locale(lang_code)\n\n # ensure that lang_code uses standard language and country codes\n if locale and locale.territory:\n lang_code = \"{lang}-{country}\".format(lang=locale.language, country=locale.territory)\n short_code = lang_code.split('-')[0]\n\n # add language without country if not in list\n if short_code not in language_list:\n if locale:\n # get language's data from babel's Locale object\n language_name = locale.get_language_name().title()\n english_name = locale.english_name.split(' (')[0]\n elif short_code in engines_languages['wikipedia']:\n # get language's data from wikipedia if not known by babel\n language_name = engines_languages['wikipedia'][short_code]['name']\n english_name = engines_languages['wikipedia'][short_code]['english_name']\n else:\n language_name = None\n english_name = None\n\n # add language to list\n language_list[short_code] = {'name': language_name,\n 'english_name': english_name,\n 'counter': set(),\n 'countries': dict()}\n\n # add language with country if not in list\n if lang_code != short_code and lang_code not in language_list[short_code]['countries']:\n country_name = ''\n if locale:\n # get country name from babel's Locale object\n country_name = locale.get_territory_name()\n\n language_list[short_code]['countries'][lang_code] = {'country_name': country_name,\n 'counter': set()}\n\n # count engine for both language_country combination and language alone\n language_list[short_code]['counter'].add(engine_name)\n if lang_code != short_code:\n language_list[short_code]['countries'][lang_code]['counter'].add(engine_name)\n\n return 
language_list\n\n\n# Filter language list so it only includes the most supported languages and countries\ndef filter_language_list(all_languages):\n min_engines_per_lang = 13\n min_engines_per_country = 7\n main_engines = [engine_name for engine_name in engines.keys()\n if 'general' in engines[engine_name].categories and\n engines[engine_name].supported_languages and\n not engines[engine_name].disabled]\n\n # filter list to include only languages supported by most engines or all default general engines\n filtered_languages = {code: lang for code, lang\n in all_languages.items()\n if (len(lang['counter']) >= min_engines_per_lang or\n all(main_engine in lang['counter']\n for main_engine in main_engines))}\n\n def _copy_lang_data(lang, country_name=None):\n new_dict = dict()\n new_dict['name'] = all_languages[lang]['name']\n new_dict['english_name'] = all_languages[lang]['english_name']\n if country_name:\n new_dict['country_name'] = country_name\n return new_dict\n\n # for each language get country codes supported by most engines or at least one country code\n filtered_languages_with_countries = dict()\n for lang, lang_data in filtered_languages.items():\n countries = lang_data['countries']\n filtered_countries = dict()\n\n # get language's country codes with enough supported engines\n for lang_country, country_data in countries.items():\n if len(country_data['counter']) >= min_engines_per_country:\n filtered_countries[lang_country] = _copy_lang_data(lang, country_data['country_name'])\n\n # add language without countries too if there's more than one country to choose from\n if len(filtered_countries) > 1:\n filtered_countries[lang] = _copy_lang_data(lang)\n elif len(filtered_countries) == 1:\n # if there's only one country per language, it's not necessary to show country name\n lang_country = next(iter(filtered_countries))\n filtered_countries[lang_country]['country_name'] = None\n\n # if no country has enough engines try to get most likely country code from babel\n if not filtered_countries:\n lang_country = None\n subtags = get_global('likely_subtags').get(lang)\n if subtags:\n country_code = subtags.split('_')[-1]\n if len(country_code) == 2:\n lang_country = \"{lang}-{country}\".format(lang=lang, country=country_code)\n\n if lang_country:\n filtered_countries[lang_country] = _copy_lang_data(lang)\n else:\n filtered_countries[lang] = _copy_lang_data(lang)\n\n filtered_languages_with_countries.update(filtered_countries)\n\n return filtered_languages_with_countries\n\n\n# Write languages.py.\ndef write_languages_file(languages):\n file_headers = (\n \"# -*- coding: utf-8 -*-\",\n \"# list of language codes\",\n \"# this file is generated automatically by utils/fetch_languages.py\",\n \"language_codes =\"\n )\n\n language_codes = tuple([\n (\n code,\n languages[code]['name'].split(' (')[0],\n languages[code].get('country_name') or '',\n languages[code].get('english_name') or ''\n ) for code in sorted(languages)\n ])\n\n with open(languages_file, 'w') as new_file:\n file_content = \"{file_headers} \\\\\\n{language_codes}\".format(\n file_headers='\\n'.join(file_headers),\n language_codes=pformat(language_codes, indent=4)\n )\n new_file.write(file_content)\n new_file.close()\n\n\nif __name__ == \"__main__\":\n load_engines(settings['engines'])\n engines_languages = fetch_supported_languages()\n all_languages = join_language_lists(engines_languages)\n filtered_languages = filter_language_list(all_languages)\n write_languages_file(filtered_languages)\n", "path": 
"searxng_extra/update/update_languages.py"}, {"content": "# -*- coding: utf-8 -*-\n# list of language codes\n# this file is generated automatically by utils/fetch_languages.py\nlanguage_codes = \\\n( ('af-ZA', 'Afrikaans', '', 'Afrikaans'),\n ('ar-EG', '\u0627\u0644\u0639\u0631\u0628\u064a\u0629', '', 'Arabic'),\n ('be-BY', '\u0411\u0435\u043b\u0430\u0440\u0443\u0441\u043a\u0430\u044f', '', 'Belarusian'),\n ('bg-BG', '\u0411\u044a\u043b\u0433\u0430\u0440\u0441\u043a\u0438', '', 'Bulgarian'),\n ('ca-ES', 'Catal\u00e0', '', 'Catalan'),\n ('cs-CZ', '\u010ce\u0161tina', '', 'Czech'),\n ('da-DK', 'Dansk', '', 'Danish'),\n ('de', 'Deutsch', '', 'German'),\n ('de-AT', 'Deutsch', '\u00d6sterreich', 'German'),\n ('de-CH', 'Deutsch', 'Schweiz', 'German'),\n ('de-DE', 'Deutsch', 'Deutschland', 'German'),\n ('el-GR', '\u0395\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ac', '', 'Greek'),\n ('en', 'English', '', 'English'),\n ('en-AU', 'English', 'Australia', 'English'),\n ('en-CA', 'English', 'Canada', 'English'),\n ('en-GB', 'English', 'United Kingdom', 'English'),\n ('en-IE', 'English', 'Ireland', 'English'),\n ('en-IN', 'English', 'India', 'English'),\n ('en-MY', 'English', 'Malaysia', 'English'),\n ('en-NZ', 'English', 'New Zealand', 'English'),\n ('en-PH', 'English', 'Philippines', 'English'),\n ('en-SG', 'English', 'Singapore', 'English'),\n ('en-US', 'English', 'United States', 'English'),\n ('en-ZA', 'English', 'South Africa', 'English'),\n ('es', 'Espa\u00f1ol', '', 'Spanish'),\n ('es-AR', 'Espa\u00f1ol', 'Argentina', 'Spanish'),\n ('es-CL', 'Espa\u00f1ol', 'Chile', 'Spanish'),\n ('es-CO', 'Espa\u00f1ol', 'Colombia', 'Spanish'),\n ('es-ES', 'Espa\u00f1ol', 'Espa\u00f1a', 'Spanish'),\n ('es-MX', 'Espa\u00f1ol', 'M\u00e9xico', 'Spanish'),\n ('es-PE', 'Espa\u00f1ol', 'Per\u00fa', 'Spanish'),\n ('es-US', 'Espa\u00f1ol', 'Estados Unidos', 'Spanish'),\n ('et-EE', 'Eesti', '', 'Estonian'),\n ('fa-IR', '\u0641\u0627\u0631\u0633\u06cc', '', 'Persian'),\n ('fi-FI', 'Suomi', '', 'Finnish'),\n ('fr', 'Fran\u00e7ais', '', 'French'),\n ('fr-BE', 'Fran\u00e7ais', 'Belgique', 'French'),\n ('fr-CA', 'Fran\u00e7ais', 'Canada', 'French'),\n ('fr-CH', 'Fran\u00e7ais', 'Suisse', 'French'),\n ('fr-FR', 'Fran\u00e7ais', 'France', 'French'),\n ('he-IL', '\u05e2\u05d1\u05e8\u05d9\u05ea', '', 'Hebrew'),\n ('hr-HR', 'Hrvatski', '', 'Croatian'),\n ('hu-HU', 'Magyar', '', 'Hungarian'),\n ('id-ID', 'Indonesia', '', 'Indonesian'),\n ('is-IS', '\u00cdslenska', '', 'Icelandic'),\n ('it-IT', 'Italiano', '', 'Italian'),\n ('ja-JP', '\u65e5\u672c\u8a9e', '', 'Japanese'),\n ('ko-KR', '\ud55c\uad6d\uc5b4', '', 'Korean'),\n ('lt-LT', 'Lietuvi\u0173', '', 'Lithuanian'),\n ('lv-LV', 'Latvie\u0161u', '', 'Latvian'),\n ('nb-NO', 'Norsk Bokm\u00e5l', '', 'Norwegian Bokm\u00e5l'),\n ('nl', 'Nederlands', '', 'Dutch'),\n ('nl-BE', 'Nederlands', 'Belgi\u00eb', 'Dutch'),\n ('nl-NL', 'Nederlands', 'Nederland', 'Dutch'),\n ('pl-PL', 'Polski', '', 'Polish'),\n ('pt', 'Portugu\u00eas', '', 'Portuguese'),\n ('pt-BR', 'Portugu\u00eas', 'Brasil', 'Portuguese'),\n ('pt-PT', 'Portugu\u00eas', 'Portugal', 'Portuguese'),\n ('ro-RO', 'Rom\u00e2n\u0103', '', 'Romanian'),\n ('ru-RU', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439', '', 'Russian'),\n ('sk-SK', 'Sloven\u010dina', '', 'Slovak'),\n ('sl-SI', 'Sloven\u0161\u010dina', '', 'Slovenian'),\n ('sr-RS', '\u0421\u0440\u043f\u0441\u043a\u0438', '', 'Serbian'),\n ('sv-SE', 'Svenska', '', 'Swedish'),\n ('sw-TZ', 'Kiswahili', '', 'Swahili'),\n ('th-TH', '\u0e44\u0e17\u0e22', '', 'Thai'),\n ('tr-TR', 
'T\u00fcrk\u00e7e', '', 'Turkish'),\n ('uk-UA', '\u0423\u043a\u0440\u0430\u0457\u043d\u0441\u044c\u043a\u0430', '', 'Ukrainian'),\n ('vi-VN', 'Ti\u1ebfng Vi\u1ec7t', '', 'Vietnamese'),\n ('zh', '\u4e2d\u6587', '', 'Chinese'),\n ('zh-CN', '\u4e2d\u6587', '\u4e2d\u56fd', 'Chinese'),\n ('zh-HK', '\u4e2d\u6587', '\u4e2d\u570b\u9999\u6e2f\u7279\u5225\u884c\u653f\u5340', 'Chinese'),\n ('zh-TW', '\u4e2d\u6587', '\u53f0\u7063', 'Chinese'))", "path": "searx/languages.py"}]} | 3,881 | 856 |
gh_patches_debug_33075 | rasdani/github-patches | git_diff | sanic-org__sanic-2858 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Websocket invalid upgrade exception handling b0rkage
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
A client apparently sent no Upgrade header to a websocket endpoint, leading to an error as it should. However, an ugly traceback is printed on the terminal even though the error eventually seems to get handled correctly.
It would appear that the websockets module attempts to attach its exception to a `request._exception` field, which Sanic's Request doesn't have a slot for. This could be hidden if Sanic later used `raise BadRequest(...) from None` rather than `raise SanicException(...)`, suppressing the chain and giving a non-500 error for what really is not a server error. Not sure, though, whether that would ever reach the client from this context anyway, but at least it could avoid a traceback in the server log.
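A rough Python sketch of that suggestion, inside `websocket_handshake` — the exception type and message wording are assumptions for illustration, not the actual fix:

```python
# Sketch only: re-raise the handshake failure as a client error with the
# exception chain suppressed, instead of a chained 500 SanicException.
from sanic.exceptions import BadRequest

try:
    resp = ws_proto.accept(request)
except Exception as exc:
    # `from None` hides the chained traceback in the log, and a 400-class
    # error avoids reporting a client mistake (e.g. a missing Upgrade
    # header) as a server error.
    raise BadRequest(f"Failed to open a WebSocket connection: {exc}") from None
```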
If anyone wants to investigate and make a PR, feel free to (I am currently busy and cannot do that unfortunately).
```python
Traceback (most recent call last):
File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 111, in accept
) = self.process_request(request)
File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 218, in process_request
raise InvalidUpgrade("Upgrade", ", ".join(upgrade) if upgrade else None)
websockets.exceptions.InvalidUpgrade: missing Upgrade header
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/sanic/sanic/server/protocols/websocket_protocol.py", line 120, in websocket_handshake
resp: "http11.Response" = ws_proto.accept(request)
File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 122, in accept
request._exception = exc
AttributeError: 'Request' object has no attribute '_exception'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "handle_request", line 97, in handle_request
File "/home/user/sanic/sanic/app.py", line 1047, in _websocket_handler
ws = await protocol.websocket_handshake(request, subprotocols)
File "/home/user/sanic/sanic/server/protocols/websocket_protocol.py", line 126, in websocket_handshake
raise SanicException(msg, status_code=500)
sanic.exceptions.SanicException: Failed to open a WebSocket connection.
See server log for more information.
```
### Code snippet
_No response_
### Expected Behavior
A 400 Bad Request error reaching the client, with less noise on the server side. Including the message **missing Upgrade header** would be helpful for debugging (e.g. in case an Nginx proxy config forgot to forward that header).
### How do you run Sanic?
Sanic CLI
### Operating System
Linux
### Sanic Version
Almost 23.03.0 (a git version slightly before release)
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/server/protocols/websocket_protocol.py`
Content:
```
1 from typing import TYPE_CHECKING, Optional, Sequence, cast
2
3
4 try: # websockets < 11.0
5 from websockets.connection import State
6 from websockets.server import ServerConnection as ServerProtocol
7 except ImportError: # websockets >= 11.0
8 from websockets.protocol import State # type: ignore
9 from websockets.server import ServerProtocol # type: ignore
10
11 from websockets.typing import Subprotocol
12
13 from sanic.exceptions import SanicException
14 from sanic.log import logger
15 from sanic.server import HttpProtocol
16
17 from ..websockets.impl import WebsocketImplProtocol
18
19
20 if TYPE_CHECKING:
21 from websockets import http11
22
23
24 OPEN = State.OPEN
25 CLOSING = State.CLOSING
26 CLOSED = State.CLOSED
27
28
29 class WebSocketProtocol(HttpProtocol):
30 __slots__ = (
31 "websocket",
32 "websocket_timeout",
33 "websocket_max_size",
34 "websocket_ping_interval",
35 "websocket_ping_timeout",
36 )
37
38 def __init__(
39 self,
40 *args,
41 websocket_timeout: float = 10.0,
42 websocket_max_size: Optional[int] = None,
43 websocket_ping_interval: Optional[float] = 20.0,
44 websocket_ping_timeout: Optional[float] = 20.0,
45 **kwargs,
46 ):
47 super().__init__(*args, **kwargs)
48 self.websocket: Optional[WebsocketImplProtocol] = None
49 self.websocket_timeout = websocket_timeout
50 self.websocket_max_size = websocket_max_size
51 self.websocket_ping_interval = websocket_ping_interval
52 self.websocket_ping_timeout = websocket_ping_timeout
53
54 def connection_lost(self, exc):
55 if self.websocket is not None:
56 self.websocket.connection_lost(exc)
57 super().connection_lost(exc)
58
59 def data_received(self, data):
60 if self.websocket is not None:
61 self.websocket.data_received(data)
62 else:
63 # Pass it to HttpProtocol handler first
64 # That will (hopefully) upgrade it to a websocket.
65 super().data_received(data)
66
67 def eof_received(self) -> Optional[bool]:
68 if self.websocket is not None:
69 return self.websocket.eof_received()
70 else:
71 return False
72
73 def close(self, timeout: Optional[float] = None):
74 # Called by HttpProtocol at the end of connection_task
75 # If we've upgraded to websocket, we do our own closing
76 if self.websocket is not None:
77 # Note, we don't want to use websocket.close()
78 # That is used for user's application code to send a
79 # websocket close packet. This is different.
80 self.websocket.end_connection(1001)
81 else:
82 super().close()
83
84 def close_if_idle(self):
85 # Called by Sanic Server when shutting down
86 # If we've upgraded to websocket, shut it down
87 if self.websocket is not None:
88 if self.websocket.ws_proto.state in (CLOSING, CLOSED):
89 return True
90 elif self.websocket.loop is not None:
91 self.websocket.loop.create_task(self.websocket.close(1001))
92 else:
93 self.websocket.end_connection(1001)
94 else:
95 return super().close_if_idle()
96
97 async def websocket_handshake(
98 self, request, subprotocols: Optional[Sequence[str]] = None
99 ):
100 # let the websockets package do the handshake with the client
101 try:
102 if subprotocols is not None:
103 # subprotocols can be a set or frozenset,
104 # but ServerProtocol needs a list
105 subprotocols = cast(
106 Optional[Sequence[Subprotocol]],
107 list(
108 [
109 Subprotocol(subprotocol)
110 for subprotocol in subprotocols
111 ]
112 ),
113 )
114 ws_proto = ServerProtocol(
115 max_size=self.websocket_max_size,
116 subprotocols=subprotocols,
117 state=OPEN,
118 logger=logger,
119 )
120 resp: "http11.Response" = ws_proto.accept(request)
121 except Exception:
122 msg = (
123 "Failed to open a WebSocket connection.\n"
124 "See server log for more information.\n"
125 )
126 raise SanicException(msg, status_code=500)
127 if 100 <= resp.status_code <= 299:
128 first_line = (
129 f"HTTP/1.1 {resp.status_code} {resp.reason_phrase}\r\n"
130 ).encode()
131 rbody = bytearray(first_line)
132 rbody += (
133 "".join([f"{k}: {v}\r\n" for k, v in resp.headers.items()])
134 ).encode()
135 rbody += b"\r\n"
136 if resp.body is not None:
137 rbody += resp.body
138 rbody += b"\r\n\r\n"
139 await super().send(rbody)
140 else:
141 raise SanicException(resp.body, resp.status_code)
142 self.websocket = WebsocketImplProtocol(
143 ws_proto,
144 ping_interval=self.websocket_ping_interval,
145 ping_timeout=self.websocket_ping_timeout,
146 close_timeout=self.websocket_timeout,
147 )
148 loop = (
149 request.transport.loop
150 if hasattr(request, "transport")
151 and hasattr(request.transport, "loop")
152 else None
153 )
154 await self.websocket.connection_made(self, loop=loop)
155 return self.websocket
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/server/protocols/websocket_protocol.py b/sanic/server/protocols/websocket_protocol.py
--- a/sanic/server/protocols/websocket_protocol.py
+++ b/sanic/server/protocols/websocket_protocol.py
@@ -1,4 +1,4 @@
-from typing import TYPE_CHECKING, Optional, Sequence, cast
+from typing import Optional, Sequence, cast
try: # websockets < 11.0
@@ -8,19 +8,18 @@
from websockets.protocol import State # type: ignore
from websockets.server import ServerProtocol # type: ignore
+from websockets import http11
+from websockets.datastructures import Headers as WSHeaders
from websockets.typing import Subprotocol
from sanic.exceptions import SanicException
from sanic.log import logger
+from sanic.request import Request
from sanic.server import HttpProtocol
from ..websockets.impl import WebsocketImplProtocol
-if TYPE_CHECKING:
- from websockets import http11
-
-
OPEN = State.OPEN
CLOSING = State.CLOSING
CLOSED = State.CLOSED
@@ -94,6 +93,13 @@
else:
return super().close_if_idle()
+ @staticmethod
+ def sanic_request_to_ws_request(request: Request):
+ return http11.Request(
+ path=request.path,
+ headers=WSHeaders(request.headers),
+ )
+
async def websocket_handshake(
self, request, subprotocols: Optional[Sequence[str]] = None
):
@@ -117,7 +123,7 @@
state=OPEN,
logger=logger,
)
- resp: "http11.Response" = ws_proto.accept(request)
+ resp = ws_proto.accept(self.sanic_request_to_ws_request(request))
except Exception:
msg = (
"Failed to open a WebSocket connection.\n"
| {"golden_diff": "diff --git a/sanic/server/protocols/websocket_protocol.py b/sanic/server/protocols/websocket_protocol.py\n--- a/sanic/server/protocols/websocket_protocol.py\n+++ b/sanic/server/protocols/websocket_protocol.py\n@@ -1,4 +1,4 @@\n-from typing import TYPE_CHECKING, Optional, Sequence, cast\n+from typing import Optional, Sequence, cast\n \n \n try: # websockets < 11.0\n@@ -8,19 +8,18 @@\n from websockets.protocol import State # type: ignore\n from websockets.server import ServerProtocol # type: ignore\n \n+from websockets import http11\n+from websockets.datastructures import Headers as WSHeaders\n from websockets.typing import Subprotocol\n \n from sanic.exceptions import SanicException\n from sanic.log import logger\n+from sanic.request import Request\n from sanic.server import HttpProtocol\n \n from ..websockets.impl import WebsocketImplProtocol\n \n \n-if TYPE_CHECKING:\n- from websockets import http11\n-\n-\n OPEN = State.OPEN\n CLOSING = State.CLOSING\n CLOSED = State.CLOSED\n@@ -94,6 +93,13 @@\n else:\n return super().close_if_idle()\n \n+ @staticmethod\n+ def sanic_request_to_ws_request(request: Request):\n+ return http11.Request(\n+ path=request.path,\n+ headers=WSHeaders(request.headers),\n+ )\n+\n async def websocket_handshake(\n self, request, subprotocols: Optional[Sequence[str]] = None\n ):\n@@ -117,7 +123,7 @@\n state=OPEN,\n logger=logger,\n )\n- resp: \"http11.Response\" = ws_proto.accept(request)\n+ resp = ws_proto.accept(self.sanic_request_to_ws_request(request))\n except Exception:\n msg = (\n \"Failed to open a WebSocket connection.\\n\"\n", "issue": "Websocket invalid upgrade exception handling b0rkage\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Describe the bug\n\nA client apparently sent no Upgrade header to a websocket endpoint, leading to an error as it should. An ugly traceback is printed on terminal even though the error eventually gets handled correctly it would seem.\r\n\r\nIt would appear that the websockets module attempts to attach its exception on `request._exception` field which Sanic's Request doesn't have a slot for. This could be hidden if Sanic later used `raise BadRequest(...) from None` rather than `raise SanicException(...)`, suppressing the chain and giving a non-500 error for what really is no server error. 
Not sure though if that would from this context ever reach the client anyway but at least it could avoid a traceback in server log.\r\n\r\nIf anyone wants to investigate and make a PR, feel free to (I am currently busy and cannot do that unfortunately).\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"/home/user/.local/lib/python3.10/site-packages/websockets/server.py\", line 111, in accept\r\n ) = self.process_request(request)\r\n File \"/home/user/.local/lib/python3.10/site-packages/websockets/server.py\", line 218, in process_request\r\n raise InvalidUpgrade(\"Upgrade\", \", \".join(upgrade) if upgrade else None)\r\nwebsockets.exceptions.InvalidUpgrade: missing Upgrade header\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/user/sanic/sanic/server/protocols/websocket_protocol.py\", line 120, in websocket_handshake\r\n resp: \"http11.Response\" = ws_proto.accept(request)\r\n File \"/home/user/.local/lib/python3.10/site-packages/websockets/server.py\", line 122, in accept\r\n request._exception = exc\r\nAttributeError: 'Request' object has no attribute '_exception'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"handle_request\", line 97, in handle_request\r\n File \"/home/user/sanic/sanic/app.py\", line 1047, in _websocket_handler\r\n ws = await protocol.websocket_handshake(request, subprotocols)\r\n File \"/home/user/sanic/sanic/server/protocols/websocket_protocol.py\", line 126, in websocket_handshake\r\n raise SanicException(msg, status_code=500)\r\nsanic.exceptions.SanicException: Failed to open a WebSocket connection.\r\nSee server log for more information.\r\n```\r\n\n\n### Code snippet\n\n_No response_\n\n### Expected Behavior\n\n400 Bad Request error reaching the client and being more silent on server side. Including the message of **missing Upgrade header** would be helpful for debugging (e.g. 
in case Nginx proxy config forgot to forward that header).\n\n### How do you run Sanic?\n\nSanic CLI\n\n### Operating System\n\nLinux\n\n### Sanic Version\n\nAlmost 23.03.0 (a git version slightly before release)\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "from typing import TYPE_CHECKING, Optional, Sequence, cast\n\n\ntry: # websockets < 11.0\n from websockets.connection import State\n from websockets.server import ServerConnection as ServerProtocol\nexcept ImportError: # websockets >= 11.0\n from websockets.protocol import State # type: ignore\n from websockets.server import ServerProtocol # type: ignore\n\nfrom websockets.typing import Subprotocol\n\nfrom sanic.exceptions import SanicException\nfrom sanic.log import logger\nfrom sanic.server import HttpProtocol\n\nfrom ..websockets.impl import WebsocketImplProtocol\n\n\nif TYPE_CHECKING:\n from websockets import http11\n\n\nOPEN = State.OPEN\nCLOSING = State.CLOSING\nCLOSED = State.CLOSED\n\n\nclass WebSocketProtocol(HttpProtocol):\n __slots__ = (\n \"websocket\",\n \"websocket_timeout\",\n \"websocket_max_size\",\n \"websocket_ping_interval\",\n \"websocket_ping_timeout\",\n )\n\n def __init__(\n self,\n *args,\n websocket_timeout: float = 10.0,\n websocket_max_size: Optional[int] = None,\n websocket_ping_interval: Optional[float] = 20.0,\n websocket_ping_timeout: Optional[float] = 20.0,\n **kwargs,\n ):\n super().__init__(*args, **kwargs)\n self.websocket: Optional[WebsocketImplProtocol] = None\n self.websocket_timeout = websocket_timeout\n self.websocket_max_size = websocket_max_size\n self.websocket_ping_interval = websocket_ping_interval\n self.websocket_ping_timeout = websocket_ping_timeout\n\n def connection_lost(self, exc):\n if self.websocket is not None:\n self.websocket.connection_lost(exc)\n super().connection_lost(exc)\n\n def data_received(self, data):\n if self.websocket is not None:\n self.websocket.data_received(data)\n else:\n # Pass it to HttpProtocol handler first\n # That will (hopefully) upgrade it to a websocket.\n super().data_received(data)\n\n def eof_received(self) -> Optional[bool]:\n if self.websocket is not None:\n return self.websocket.eof_received()\n else:\n return False\n\n def close(self, timeout: Optional[float] = None):\n # Called by HttpProtocol at the end of connection_task\n # If we've upgraded to websocket, we do our own closing\n if self.websocket is not None:\n # Note, we don't want to use websocket.close()\n # That is used for user's application code to send a\n # websocket close packet. 
This is different.\n self.websocket.end_connection(1001)\n else:\n super().close()\n\n def close_if_idle(self):\n # Called by Sanic Server when shutting down\n # If we've upgraded to websocket, shut it down\n if self.websocket is not None:\n if self.websocket.ws_proto.state in (CLOSING, CLOSED):\n return True\n elif self.websocket.loop is not None:\n self.websocket.loop.create_task(self.websocket.close(1001))\n else:\n self.websocket.end_connection(1001)\n else:\n return super().close_if_idle()\n\n async def websocket_handshake(\n self, request, subprotocols: Optional[Sequence[str]] = None\n ):\n # let the websockets package do the handshake with the client\n try:\n if subprotocols is not None:\n # subprotocols can be a set or frozenset,\n # but ServerProtocol needs a list\n subprotocols = cast(\n Optional[Sequence[Subprotocol]],\n list(\n [\n Subprotocol(subprotocol)\n for subprotocol in subprotocols\n ]\n ),\n )\n ws_proto = ServerProtocol(\n max_size=self.websocket_max_size,\n subprotocols=subprotocols,\n state=OPEN,\n logger=logger,\n )\n resp: \"http11.Response\" = ws_proto.accept(request)\n except Exception:\n msg = (\n \"Failed to open a WebSocket connection.\\n\"\n \"See server log for more information.\\n\"\n )\n raise SanicException(msg, status_code=500)\n if 100 <= resp.status_code <= 299:\n first_line = (\n f\"HTTP/1.1 {resp.status_code} {resp.reason_phrase}\\r\\n\"\n ).encode()\n rbody = bytearray(first_line)\n rbody += (\n \"\".join([f\"{k}: {v}\\r\\n\" for k, v in resp.headers.items()])\n ).encode()\n rbody += b\"\\r\\n\"\n if resp.body is not None:\n rbody += resp.body\n rbody += b\"\\r\\n\\r\\n\"\n await super().send(rbody)\n else:\n raise SanicException(resp.body, resp.status_code)\n self.websocket = WebsocketImplProtocol(\n ws_proto,\n ping_interval=self.websocket_ping_interval,\n ping_timeout=self.websocket_ping_timeout,\n close_timeout=self.websocket_timeout,\n )\n loop = (\n request.transport.loop\n if hasattr(request, \"transport\")\n and hasattr(request.transport, \"loop\")\n else None\n )\n await self.websocket.connection_made(self, loop=loop)\n return self.websocket\n", "path": "sanic/server/protocols/websocket_protocol.py"}], "after_files": [{"content": "from typing import Optional, Sequence, cast\n\n\ntry: # websockets < 11.0\n from websockets.connection import State\n from websockets.server import ServerConnection as ServerProtocol\nexcept ImportError: # websockets >= 11.0\n from websockets.protocol import State # type: ignore\n from websockets.server import ServerProtocol # type: ignore\n\nfrom websockets import http11\nfrom websockets.datastructures import Headers as WSHeaders\nfrom websockets.typing import Subprotocol\n\nfrom sanic.exceptions import SanicException\nfrom sanic.log import logger\nfrom sanic.request import Request\nfrom sanic.server import HttpProtocol\n\nfrom ..websockets.impl import WebsocketImplProtocol\n\n\nOPEN = State.OPEN\nCLOSING = State.CLOSING\nCLOSED = State.CLOSED\n\n\nclass WebSocketProtocol(HttpProtocol):\n __slots__ = (\n \"websocket\",\n \"websocket_timeout\",\n \"websocket_max_size\",\n \"websocket_ping_interval\",\n \"websocket_ping_timeout\",\n )\n\n def __init__(\n self,\n *args,\n websocket_timeout: float = 10.0,\n websocket_max_size: Optional[int] = None,\n websocket_ping_interval: Optional[float] = 20.0,\n websocket_ping_timeout: Optional[float] = 20.0,\n **kwargs,\n ):\n super().__init__(*args, **kwargs)\n self.websocket: Optional[WebsocketImplProtocol] = None\n self.websocket_timeout = websocket_timeout\n 
self.websocket_max_size = websocket_max_size\n self.websocket_ping_interval = websocket_ping_interval\n self.websocket_ping_timeout = websocket_ping_timeout\n\n def connection_lost(self, exc):\n if self.websocket is not None:\n self.websocket.connection_lost(exc)\n super().connection_lost(exc)\n\n def data_received(self, data):\n if self.websocket is not None:\n self.websocket.data_received(data)\n else:\n # Pass it to HttpProtocol handler first\n # That will (hopefully) upgrade it to a websocket.\n super().data_received(data)\n\n def eof_received(self) -> Optional[bool]:\n if self.websocket is not None:\n return self.websocket.eof_received()\n else:\n return False\n\n def close(self, timeout: Optional[float] = None):\n # Called by HttpProtocol at the end of connection_task\n # If we've upgraded to websocket, we do our own closing\n if self.websocket is not None:\n # Note, we don't want to use websocket.close()\n # That is used for user's application code to send a\n # websocket close packet. This is different.\n self.websocket.end_connection(1001)\n else:\n super().close()\n\n def close_if_idle(self):\n # Called by Sanic Server when shutting down\n # If we've upgraded to websocket, shut it down\n if self.websocket is not None:\n if self.websocket.ws_proto.state in (CLOSING, CLOSED):\n return True\n elif self.websocket.loop is not None:\n self.websocket.loop.create_task(self.websocket.close(1001))\n else:\n self.websocket.end_connection(1001)\n else:\n return super().close_if_idle()\n\n @staticmethod\n def sanic_request_to_ws_request(request: Request):\n return http11.Request(\n path=request.path,\n headers=WSHeaders(request.headers),\n )\n\n async def websocket_handshake(\n self, request, subprotocols: Optional[Sequence[str]] = None\n ):\n # let the websockets package do the handshake with the client\n try:\n if subprotocols is not None:\n # subprotocols can be a set or frozenset,\n # but ServerProtocol needs a list\n subprotocols = cast(\n Optional[Sequence[Subprotocol]],\n list(\n [\n Subprotocol(subprotocol)\n for subprotocol in subprotocols\n ]\n ),\n )\n ws_proto = ServerProtocol(\n max_size=self.websocket_max_size,\n subprotocols=subprotocols,\n state=OPEN,\n logger=logger,\n )\n resp = ws_proto.accept(self.sanic_request_to_ws_request(request))\n except Exception:\n msg = (\n \"Failed to open a WebSocket connection.\\n\"\n \"See server log for more information.\\n\"\n )\n raise SanicException(msg, status_code=500)\n if 100 <= resp.status_code <= 299:\n first_line = (\n f\"HTTP/1.1 {resp.status_code} {resp.reason_phrase}\\r\\n\"\n ).encode()\n rbody = bytearray(first_line)\n rbody += (\n \"\".join([f\"{k}: {v}\\r\\n\" for k, v in resp.headers.items()])\n ).encode()\n rbody += b\"\\r\\n\"\n if resp.body is not None:\n rbody += resp.body\n rbody += b\"\\r\\n\\r\\n\"\n await super().send(rbody)\n else:\n raise SanicException(resp.body, resp.status_code)\n self.websocket = WebsocketImplProtocol(\n ws_proto,\n ping_interval=self.websocket_ping_interval,\n ping_timeout=self.websocket_ping_timeout,\n close_timeout=self.websocket_timeout,\n )\n loop = (\n request.transport.loop\n if hasattr(request, \"transport\")\n and hasattr(request.transport, \"loop\")\n else None\n )\n await self.websocket.connection_made(self, loop=loop)\n return self.websocket\n", "path": "sanic/server/protocols/websocket_protocol.py"}]} | 2,412 | 418 |
gh_patches_debug_16846 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1691 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import intensity could fallback on yearly averages when missing/unknown
When a country, or area, is importing electricity from another country and the exporting country's production sources are unknown, it seems as if the intensity of the imported electricity is set to be equal to the intensity of the importing country. But this is hardly meaningful. Would it be possible to set the unknown intensity of imported electricity to an average or mean value from a historical period? E.g. the last month or the same month last year. Or to the last available dataset (depending on how old that is).
I can see that it happens quite often for Norway that "Data [is] temporarily unavailable". The intensity of the electricity exported to Sweden is low, while it is medium-high when exported to West Denmark.
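A minimal Python sketch of the idea, purely illustrative — the function and the `yearly_averages` mapping are hypothetical and not part of the current codebase:

```python
# Hypothetical sketch: prefer a historical (e.g. yearly) average intensity for the
# exporting zone over silently reusing the importer's own intensity.
def exchange_carbon_intensity(exporter_zone, live_intensity, yearly_averages):
    """yearly_averages maps a zone key to its average gCO2eq/kWh over a past period."""
    if live_intensity is not None:
        return live_intensity
    # Fall back to the exporter's historical average when live data is unavailable.
    return yearly_averages.get(exporter_zone)
```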
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/config.py`
Content:
```
1 import json
2 import os
3
4 def relative_path(script_reference_path, rel_path):
5 # __file__ should be passed as script_reference_path
6 script_path = os.path.abspath(
7 script_reference_path) # i.e. /path/to/dir/foobar.py
8 script_dir = os.path.split(script_path)[0] # i.e. /path/to/dir/
9 return os.path.join(script_dir, rel_path)
10
11
12 # Prepare zone bounding boxes
13 ZONE_BOUNDING_BOXES = {}
14
15 # Read parser import list from config jsons
16 ZONES_CONFIG = json.load(open(relative_path(
17 __file__, '../config/zones.json')))
18
19 # Read all zones
20 for zone_id, zone_config in ZONES_CONFIG.items():
21 if 'bounding_box' in zone_config:
22 ZONE_BOUNDING_BOXES[zone_id] = zone_config['bounding_box']
23
24 # Read parser import list from config jsons
25 ZONES_CONFIG = json.load(open(relative_path(
26 __file__, '../config/zones.json')))
27 EXCHANGES_CONFIG = json.load(open(relative_path(
28 __file__, '../config/exchanges.json')))
29 ZONE_NEIGHBOURS = {}
30 for k, v in EXCHANGES_CONFIG.items():
31 zone_names = k.split('->')
32 pairs = [
33 (zone_names[0], zone_names[1]),
34 (zone_names[1], zone_names[0])
35 ]
36 for zone_name_1, zone_name_2 in pairs:
37 if zone_name_1 not in ZONE_NEIGHBOURS:
38 ZONE_NEIGHBOURS[zone_name_1] = set()
39 ZONE_NEIGHBOURS[zone_name_1].add(zone_name_2)
40 # we want neighbors to always be in the same order
41 for zone, neighbors in ZONE_NEIGHBOURS.items():
42 ZONE_NEIGHBOURS[zone] = sorted(neighbors)
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/config.py b/utils/config.py
--- a/utils/config.py
+++ b/utils/config.py
@@ -40,3 +40,22 @@
# we want neighbors to always be in the same order
for zone, neighbors in ZONE_NEIGHBOURS.items():
ZONE_NEIGHBOURS[zone] = sorted(neighbors)
+
+CO2EQ_PARAMETERS = json.load(open(relative_path(
+ __file__, '../config/co2eq_parameters.json')))
+
+def emission_factors(zone_key):
+ fallback_carbon_intensity = CO2EQ_PARAMETERS['fallbackZoneMixes'].get(zone_key, {}).get('carbonIntensity');
+ override = CO2EQ_PARAMETERS['emissionFactors']['zoneOverrides'].get(zone_key, {})
+ defaults = CO2EQ_PARAMETERS['emissionFactors']['defaults']
+ merged = {**defaults, **override}
+ if fallback_carbon_intensity:
+ merged['battery storage'] = {
+ 'value': fallback_carbon_intensity,
+ 'source': 'Annual carbon intensity'
+ }
+ merged['hydro storage'] = {
+ 'value': fallback_carbon_intensity,
+ 'source': 'Annual carbon intensity'
+ }
+ return dict([(k, (v or {}).get('value')) for (k, v) in merged.items()])
| {"golden_diff": "diff --git a/utils/config.py b/utils/config.py\n--- a/utils/config.py\n+++ b/utils/config.py\n@@ -40,3 +40,22 @@\n # we want neighbors to always be in the same order\n for zone, neighbors in ZONE_NEIGHBOURS.items():\n ZONE_NEIGHBOURS[zone] = sorted(neighbors)\n+\n+CO2EQ_PARAMETERS = json.load(open(relative_path(\n+ __file__, '../config/co2eq_parameters.json')))\n+\n+def emission_factors(zone_key):\n+ fallback_carbon_intensity = CO2EQ_PARAMETERS['fallbackZoneMixes'].get(zone_key, {}).get('carbonIntensity');\n+ override = CO2EQ_PARAMETERS['emissionFactors']['zoneOverrides'].get(zone_key, {})\n+ defaults = CO2EQ_PARAMETERS['emissionFactors']['defaults']\n+ merged = {**defaults, **override}\n+ if fallback_carbon_intensity:\n+ merged['battery storage'] = {\n+ 'value': fallback_carbon_intensity,\n+ 'source': 'Annual carbon intensity'\n+ }\n+ merged['hydro storage'] = {\n+ 'value': fallback_carbon_intensity,\n+ 'source': 'Annual carbon intensity'\n+ }\n+ return dict([(k, (v or {}).get('value')) for (k, v) in merged.items()])\n", "issue": "Import intensity could fallback on yearly averages when missing/unknown\nWhen a country, or area, is importing electricity from another country and the exporting country's production sources are unknown, it seems as if the intensity of the imported electricity is set to be equal to the intensity of the importing country. But this is hardly meaningful. Would it be possible to set the unknown intensity of imported electricity to an average or mean value from a historical period? E.g. the last month or the same month last year. Or to the last available dataset (depending on how old that is).\r\n\r\nI can see that it happens quite often for Norway, that \"Data [is] temporarily unavailable\". The intensity of the electricity exported to Sweden is low, while it is medium high when exported to West Denmark.\n", "before_files": [{"content": "import json\nimport os\n\ndef relative_path(script_reference_path, rel_path):\n # __file__ should be passed as script_reference_path\n script_path = os.path.abspath(\n script_reference_path) # i.e. /path/to/dir/foobar.py\n script_dir = os.path.split(script_path)[0] # i.e. 
/path/to/dir/\n return os.path.join(script_dir, rel_path)\n\n\n# Prepare zone bounding boxes\nZONE_BOUNDING_BOXES = {}\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\n\n# Read all zones\nfor zone_id, zone_config in ZONES_CONFIG.items():\n if 'bounding_box' in zone_config:\n ZONE_BOUNDING_BOXES[zone_id] = zone_config['bounding_box']\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\nEXCHANGES_CONFIG = json.load(open(relative_path(\n __file__, '../config/exchanges.json')))\nZONE_NEIGHBOURS = {}\nfor k, v in EXCHANGES_CONFIG.items():\n zone_names = k.split('->')\n pairs = [\n (zone_names[0], zone_names[1]),\n (zone_names[1], zone_names[0])\n ]\n for zone_name_1, zone_name_2 in pairs:\n if zone_name_1 not in ZONE_NEIGHBOURS:\n ZONE_NEIGHBOURS[zone_name_1] = set()\n ZONE_NEIGHBOURS[zone_name_1].add(zone_name_2)\n# we want neighbors to always be in the same order\nfor zone, neighbors in ZONE_NEIGHBOURS.items():\n ZONE_NEIGHBOURS[zone] = sorted(neighbors)\n", "path": "utils/config.py"}], "after_files": [{"content": "import json\nimport os\n\ndef relative_path(script_reference_path, rel_path):\n # __file__ should be passed as script_reference_path\n script_path = os.path.abspath(\n script_reference_path) # i.e. /path/to/dir/foobar.py\n script_dir = os.path.split(script_path)[0] # i.e. /path/to/dir/\n return os.path.join(script_dir, rel_path)\n\n\n# Prepare zone bounding boxes\nZONE_BOUNDING_BOXES = {}\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\n\n# Read all zones\nfor zone_id, zone_config in ZONES_CONFIG.items():\n if 'bounding_box' in zone_config:\n ZONE_BOUNDING_BOXES[zone_id] = zone_config['bounding_box']\n\n# Read parser import list from config jsons\nZONES_CONFIG = json.load(open(relative_path(\n __file__, '../config/zones.json')))\nEXCHANGES_CONFIG = json.load(open(relative_path(\n __file__, '../config/exchanges.json')))\nZONE_NEIGHBOURS = {}\nfor k, v in EXCHANGES_CONFIG.items():\n zone_names = k.split('->')\n pairs = [\n (zone_names[0], zone_names[1]),\n (zone_names[1], zone_names[0])\n ]\n for zone_name_1, zone_name_2 in pairs:\n if zone_name_1 not in ZONE_NEIGHBOURS:\n ZONE_NEIGHBOURS[zone_name_1] = set()\n ZONE_NEIGHBOURS[zone_name_1].add(zone_name_2)\n# we want neighbors to always be in the same order\nfor zone, neighbors in ZONE_NEIGHBOURS.items():\n ZONE_NEIGHBOURS[zone] = sorted(neighbors)\n\nCO2EQ_PARAMETERS = json.load(open(relative_path(\n __file__, '../config/co2eq_parameters.json')))\n\ndef emission_factors(zone_key):\n fallback_carbon_intensity = CO2EQ_PARAMETERS['fallbackZoneMixes'].get(zone_key, {}).get('carbonIntensity');\n override = CO2EQ_PARAMETERS['emissionFactors']['zoneOverrides'].get(zone_key, {})\n defaults = CO2EQ_PARAMETERS['emissionFactors']['defaults']\n merged = {**defaults, **override}\n if fallback_carbon_intensity:\n merged['battery storage'] = {\n 'value': fallback_carbon_intensity,\n 'source': 'Annual carbon intensity'\n }\n merged['hydro storage'] = {\n 'value': fallback_carbon_intensity,\n 'source': 'Annual carbon intensity'\n }\n return dict([(k, (v or {}).get('value')) for (k, v) in merged.items()])\n", "path": "utils/config.py"}]} | 893 | 284 |
gh_patches_debug_43085 | rasdani/github-patches | git_diff | SeldonIO__MLServer-1337 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[huggingface] Merge predictive unit parameters from env with model settings
# Background
## What are PREDICTIVE_UNIT_PARAMETERS
This is a collection of parameters passed through the environment to a HuggingFace model, as opposed to declaring them in a `model-settings.json` file as is the case in Seldon Core v2. As of this moment, those parameters are injected via Seldon Core v1 only, whereas Seldon Core v2 uses `model-settings.json` to provide the metadata for the model.
PREDICTIVE_UNIT_PARAMETERS are used only by the HuggingFace runtime and injected only by SCv1.
You can find the code for creating `HuggingFaceSettings` in `./runtimes/huggingface/ml-server-huggingface/settings.py` fle along with functions for parsing those params from env vars or from `model-settings.json`
# What is the problem
Currently, `HuggingFaceSettings` are created either from parsing the PREDICTIVE_UNIT_PARAMETERS from the environment OR from `model-settings.json`. This means that if there is at least one parameter set in the env var, the `model-settings.json` extra parameters will be ignored. This makes it cumbersome when a deployment is created from the UI, because additional params such as `task`, `pretrained_model`, `pretrained_tokenizer`, `framework`, etc. have to be added one by one in the wizard. Why do they have to be added from the wizard and not just specified in `model-settings.json`? Because currently SCv1 always injects the `model_uri` param into the PREDICTIVE_UNIT_PARAMETERS env var, so it is never empty. Because this var is not empty, the HF settings are initialised from it and `model-settings.json` is ignored.
# What needs to be done
When creating HuggingFace settings, env vars need to be merged with params from `model-settings.json`, giving priority to env vars. For example:
If such an env var exists:
```
PREDICTIVE_UNIT_PARAMETERS = [{"name":"model_uri","value":"/mnt/models","type":"STRING"}, {"name":"task_suffix","value":"else","type":"STRING"}]
```
and such `model-settings.json` file exists:
```
{
"name": "transformer",
"implementation": "mlserver_huggingface.HuggingFaceRuntime",
"parameters": {
"extra": {
"task": "text-generation",
"task_suffix": "something",
"framework": "pt"
}
}
}
```
The outcome should be that the `task` parameter doesn't need to be specified in the wizard, and the HuggingFace settings should contain the values: task = text-generation, task_suffix = else, framework = pt.
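A minimal sketch of the intended merge semantics (the helper name is illustrative; the point is the precedence order):

```python
# Sketch: model-settings.json supplies the defaults,
# PREDICTIVE_UNIT_PARAMETERS from the environment overrides them.
def merge_extra_params(model_settings_extra: dict, env_params: dict) -> dict:
    return {**(model_settings_extra or {}), **env_params}

merged = merge_extra_params(
    {"task": "text-generation", "task_suffix": "something", "framework": "pt"},
    {"model_uri": "/mnt/models", "task_suffix": "else"},
)
# merged == {"task": "text-generation", "task_suffix": "else",
#            "framework": "pt", "model_uri": "/mnt/models"}
```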
# Scope
This relates only to the `HuggingFace` runtime when it is used from SCv1, and is only valid as long as SCv1 is still operational and the related code is present in MLServer.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `runtimes/huggingface/mlserver_huggingface/settings.py`
Content:
```
1 import os
2 import orjson
3
4 from typing import Optional, Dict
5 from pydantic import BaseSettings
6 from distutils.util import strtobool
7 from transformers.pipelines import SUPPORTED_TASKS
8
9 try:
10 # Optimum 1.7 changed the import name from `SUPPORTED_TASKS` to
11 # `ORT_SUPPORTED_TASKS`.
12 # We'll try to import the more recent one, falling back to the previous
13 # import name if not present.
14 # https://github.com/huggingface/optimum/blob/987b02e4f6e2a1c9325b364ff764da2e57e89902/optimum/pipelines/__init__.py#L18
15 from optimum.pipelines import ORT_SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS
16 except ImportError:
17 from optimum.pipelines import SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS
18
19 from mlserver.settings import ModelSettings
20
21 from .errors import (
22 MissingHuggingFaceSettings,
23 InvalidTransformersTask,
24 InvalidOptimumTask,
25 InvalidModelParameter,
26 InvalidModelParameterType,
27 )
28
29 ENV_PREFIX_HUGGINGFACE_SETTINGS = "MLSERVER_MODEL_HUGGINGFACE_"
30 PARAMETERS_ENV_NAME = "PREDICTIVE_UNIT_PARAMETERS"
31
32
33 class HuggingFaceSettings(BaseSettings):
34 """
35 Parameters that apply only to HuggingFace models
36 """
37
38 class Config:
39 env_prefix = ENV_PREFIX_HUGGINGFACE_SETTINGS
40
41 # TODO: Document fields
42 task: str = ""
43 """
44 Pipeline task to load.
45 You can see the available Optimum and Transformers tasks available in the
46 links below:
47
48 - `Optimum Tasks <https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines#inference-pipelines-with-the-onnx-runtime-accelerator>`_
49 - `Transformer Tasks <https://huggingface.co/docs/transformers/task_summary>`_
50 """ # noqa: E501
51
52 task_suffix: str = ""
53 """
54 Suffix to append to the base task name.
55 Useful for, e.g. translation tasks which require a suffix on the task name
56 to specify source and target.
57 """
58
59 pretrained_model: Optional[str] = None
60 """
61 Name of the model that should be loaded in the pipeline.
62 """
63
64 pretrained_tokenizer: Optional[str] = None
65 """
66 Name of the tokenizer that should be loaded in the pipeline.
67 """
68
69 framework: Optional[str] = None
70 """
71 The framework to use, either "pt" for PyTorch or "tf" for TensorFlow.
72 """
73
74 optimum_model: bool = False
75 """
76 Flag to decide whether the pipeline should use a Optimum-optimised model or
77 the standard Transformers model.
78 Under the hood, this will enable the model to use the optimised ONNX
79 runtime.
80 """
81
82 device: int = -1
83 """
84 Device in which this pipeline will be loaded (e.g., "cpu", "cuda:1", "mps",
85 or a GPU ordinal rank like 1).
86 """
87
88 inter_op_threads: Optional[int] = None
89 """
90 Threads used for parallelism between independent operations.
91 PyTorch:
92 https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html
93 Tensorflow:
94 https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads
95 """
96
97 intra_op_threads: Optional[int] = None
98 """
99 Threads used within an individual op for parallelism.
100 PyTorch:
101 https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html
102 Tensorflow:
103 https://www.tensorflow.org/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads
104 """
105
106 @property
107 def task_name(self):
108 if self.task == "translation":
109 return f"{self.task}{self.task_suffix}"
110 return self.task
111
112
113 def parse_parameters_from_env() -> Dict:
114 """
115 This method parses the environment variables injected via SCv1.
116 """
117 # TODO: Once support for SCv1 is deprecated, we should remove this method and rely
118 # purely on settings coming via the `model-settings.json` file.
119 parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, "[]"))
120
121 type_dict = {
122 "INT": int,
123 "FLOAT": float,
124 "DOUBLE": float,
125 "STRING": str,
126 "BOOL": bool,
127 }
128
129 parsed_parameters = {}
130 for param in parameters:
131 name = param.get("name")
132 value = param.get("value")
133 type_ = param.get("type")
134 if type_ == "BOOL":
135 parsed_parameters[name] = bool(strtobool(value))
136 else:
137 try:
138 parsed_parameters[name] = type_dict[type_](value)
139 except ValueError:
140 raise InvalidModelParameter(name, value, type_)
141 except KeyError:
142 raise InvalidModelParameterType(type_)
143 return parsed_parameters
144
145
146 def get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:
147 env_params = parse_parameters_from_env()
148 if not env_params and (
149 not model_settings.parameters or not model_settings.parameters.extra
150 ):
151 raise MissingHuggingFaceSettings()
152
153 extra = env_params or model_settings.parameters.extra # type: ignore
154 hf_settings = HuggingFaceSettings(**extra) # type: ignore
155
156 if hf_settings.task not in SUPPORTED_TASKS:
157 raise InvalidTransformersTask(hf_settings.task, SUPPORTED_TASKS.keys())
158
159 if hf_settings.optimum_model:
160 if hf_settings.task not in SUPPORTED_OPTIMUM_TASKS:
161 raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())
162
163 return hf_settings
164
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/runtimes/huggingface/mlserver_huggingface/settings.py b/runtimes/huggingface/mlserver_huggingface/settings.py
--- a/runtimes/huggingface/mlserver_huggingface/settings.py
+++ b/runtimes/huggingface/mlserver_huggingface/settings.py
@@ -1,7 +1,7 @@
import os
import orjson
-from typing import Optional, Dict
+from typing import Optional, Dict, Union, NewType
from pydantic import BaseSettings
from distutils.util import strtobool
from transformers.pipelines import SUPPORTED_TASKS
@@ -110,23 +110,33 @@
return self.task
-def parse_parameters_from_env() -> Dict:
+EXTRA_TYPE_DICT = {
+ "INT": int,
+ "FLOAT": float,
+ "DOUBLE": float,
+ "STRING": str,
+ "BOOL": bool,
+}
+
+ExtraDict = NewType("ExtraDict", Dict[str, Union[str, bool, float, int]])
+
+
+def parse_parameters_from_env() -> ExtraDict:
"""
This method parses the environment variables injected via SCv1.
+
+ At least an empty dict is always returned.
"""
# TODO: Once support for SCv1 is deprecated, we should remove this method and rely
# purely on settings coming via the `model-settings.json` file.
parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, "[]"))
- type_dict = {
- "INT": int,
- "FLOAT": float,
- "DOUBLE": float,
- "STRING": str,
- "BOOL": bool,
- }
+ parsed_parameters: ExtraDict = ExtraDict({})
+
+ # Guard: Exit early if there's no parameters
+ if len(parameters) == 0:
+ return parsed_parameters
- parsed_parameters = {}
for param in parameters:
name = param.get("name")
value = param.get("value")
@@ -135,22 +145,20 @@
parsed_parameters[name] = bool(strtobool(value))
else:
try:
- parsed_parameters[name] = type_dict[type_](value)
+ parsed_parameters[name] = EXTRA_TYPE_DICT[type_](value)
except ValueError:
raise InvalidModelParameter(name, value, type_)
except KeyError:
raise InvalidModelParameterType(type_)
+
return parsed_parameters
def get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:
- env_params = parse_parameters_from_env()
- if not env_params and (
- not model_settings.parameters or not model_settings.parameters.extra
- ):
- raise MissingHuggingFaceSettings()
+ """Get the HuggingFace settings provided to the runtime"""
- extra = env_params or model_settings.parameters.extra # type: ignore
+ env_params = parse_parameters_from_env()
+ extra = merge_huggingface_settings_extra(model_settings, env_params)
hf_settings = HuggingFaceSettings(**extra) # type: ignore
if hf_settings.task not in SUPPORTED_TASKS:
@@ -161,3 +169,35 @@
raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())
return hf_settings
+
+
+def merge_huggingface_settings_extra(
+ model_settings: ModelSettings, env_params: ExtraDict
+) -> ExtraDict:
+ """
+ This function returns the Extra field of the Settings.
+
+ It merges them, iff they're both present, from the
+ environment AND model settings file. Precedence is
+ giving to the environment.
+ """
+
+ # Both `parameters` and `extra` are Optional, so we
+ # need to get the value, or nothing.
+ settings_params = (
+ model_settings.parameters.extra
+ if model_settings.parameters is not None
+ else None
+ )
+
+ if settings_params is None and env_params == {}:
+ # There must be settings provided by at least the environment OR model settings
+ raise MissingHuggingFaceSettings()
+
+ # Set the default value
+ settings_params = settings_params or {}
+
+ # Overwrite any conflicting keys, giving precedence to the environment
+ settings_params.update(env_params)
+
+ return ExtraDict(settings_params)
| {"golden_diff": "diff --git a/runtimes/huggingface/mlserver_huggingface/settings.py b/runtimes/huggingface/mlserver_huggingface/settings.py\n--- a/runtimes/huggingface/mlserver_huggingface/settings.py\n+++ b/runtimes/huggingface/mlserver_huggingface/settings.py\n@@ -1,7 +1,7 @@\n import os\n import orjson\n \n-from typing import Optional, Dict\n+from typing import Optional, Dict, Union, NewType\n from pydantic import BaseSettings\n from distutils.util import strtobool\n from transformers.pipelines import SUPPORTED_TASKS\n@@ -110,23 +110,33 @@\n return self.task\n \n \n-def parse_parameters_from_env() -> Dict:\n+EXTRA_TYPE_DICT = {\n+ \"INT\": int,\n+ \"FLOAT\": float,\n+ \"DOUBLE\": float,\n+ \"STRING\": str,\n+ \"BOOL\": bool,\n+}\n+\n+ExtraDict = NewType(\"ExtraDict\", Dict[str, Union[str, bool, float, int]])\n+\n+\n+def parse_parameters_from_env() -> ExtraDict:\n \"\"\"\n This method parses the environment variables injected via SCv1.\n+\n+ At least an empty dict is always returned.\n \"\"\"\n # TODO: Once support for SCv1 is deprecated, we should remove this method and rely\n # purely on settings coming via the `model-settings.json` file.\n parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, \"[]\"))\n \n- type_dict = {\n- \"INT\": int,\n- \"FLOAT\": float,\n- \"DOUBLE\": float,\n- \"STRING\": str,\n- \"BOOL\": bool,\n- }\n+ parsed_parameters: ExtraDict = ExtraDict({})\n+\n+ # Guard: Exit early if there's no parameters\n+ if len(parameters) == 0:\n+ return parsed_parameters\n \n- parsed_parameters = {}\n for param in parameters:\n name = param.get(\"name\")\n value = param.get(\"value\")\n@@ -135,22 +145,20 @@\n parsed_parameters[name] = bool(strtobool(value))\n else:\n try:\n- parsed_parameters[name] = type_dict[type_](value)\n+ parsed_parameters[name] = EXTRA_TYPE_DICT[type_](value)\n except ValueError:\n raise InvalidModelParameter(name, value, type_)\n except KeyError:\n raise InvalidModelParameterType(type_)\n+\n return parsed_parameters\n \n \n def get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:\n- env_params = parse_parameters_from_env()\n- if not env_params and (\n- not model_settings.parameters or not model_settings.parameters.extra\n- ):\n- raise MissingHuggingFaceSettings()\n+ \"\"\"Get the HuggingFace settings provided to the runtime\"\"\"\n \n- extra = env_params or model_settings.parameters.extra # type: ignore\n+ env_params = parse_parameters_from_env()\n+ extra = merge_huggingface_settings_extra(model_settings, env_params)\n hf_settings = HuggingFaceSettings(**extra) # type: ignore\n \n if hf_settings.task not in SUPPORTED_TASKS:\n@@ -161,3 +169,35 @@\n raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())\n \n return hf_settings\n+\n+\n+def merge_huggingface_settings_extra(\n+ model_settings: ModelSettings, env_params: ExtraDict\n+) -> ExtraDict:\n+ \"\"\"\n+ This function returns the Extra field of the Settings.\n+\n+ It merges them, iff they're both present, from the\n+ environment AND model settings file. 
Precedence is\n+ giving to the environment.\n+ \"\"\"\n+\n+ # Both `parameters` and `extra` are Optional, so we\n+ # need to get the value, or nothing.\n+ settings_params = (\n+ model_settings.parameters.extra\n+ if model_settings.parameters is not None\n+ else None\n+ )\n+\n+ if settings_params is None and env_params == {}:\n+ # There must be settings provided by at least the environment OR model settings\n+ raise MissingHuggingFaceSettings()\n+\n+ # Set the default value\n+ settings_params = settings_params or {}\n+\n+ # Overwrite any conflicting keys, giving precedence to the environment\n+ settings_params.update(env_params)\n+\n+ return ExtraDict(settings_params)\n", "issue": "[huggingface] Merge predictive unit parameters from env with model settings\n# Background\r\n## What are PREDICTIVE_UNIT_PARAMETERS\r\nThis is a collection of parameters passed though the environment to a HuggingFace model. This is opposed to declaring them in a `model-settings.json` file as it is the case in Seldon Core v2. As of this moment, those parameters are injected via Seldon Core v1 only, where as Seldon Core v2 uses `model-settings.json` to provide the metadata for the model.\r\n\r\nPREDICTIVE_UNIT_PARAMETERS are used only by the HuggingFace runtime and injected only by SCv1.\r\n\r\nYou can find the code for creating `HuggingFaceSettings` in `./runtimes/huggingface/ml-server-huggingface/settings.py` fle along with functions for parsing those params from env vars or from `model-settings.json` \r\n\r\n# What is the problem\r\nCurrently, `HuggingFaceSettings` are created either from parsing the PREDICTIVE_UNIT_PARAMETERS from the environment OR from the `model-settings.json`. Meaning that if there is at least one parameter set in the env var, the `model-settings.json` extra parameters will be ignored. This makes it cumbersome when a deployment is created from the UI because additional params such as `task`, `pretrained_model`, `pretrained_tokenizer`, `framework`, etc. will have to be added one by one in the Wizard. Why they have to be added from the wizard and not just specified in `model-settings.json` - because currently SCv1 always injects `model_uri` param so the PREDICTIVE_UNIT_PARAMETERS env var so it's not empty. Because this var is not empty, the HF settings are initialised from it and the `model-settings.json` is ignored.\r\n\r\n# What needs to be done\r\nWhen creating HuggingFace settings, env vars needs to be merged with params from `model-settings.json`, giving priority to env vars. 
For example:\r\nIf such env var exists:\r\n```\r\nPREDICTIVE_UNIT_PARAMETERS = [{\"name\":\"model_uri\",\"value\":\"/mnt/models\",\"type\":\"STRING\"}, {\"name\":\"task_suffix\",\"value\":\"else\",\"type\":\"STRING\"}]\r\n```\r\nand such `model-settings.json` file exists:\r\n```\r\n{\r\n \"name\": \"transformer\",\r\n \"implementation\": \"mlserver_huggingface.HuggingFaceRuntime\",\r\n \"parameters\": {\r\n \"extra\": {\r\n \"task\": \"text-generation\",\r\n \"task_suffix\": \"something\",\r\n \"framework\": \"pt\"\r\n }\r\n }\r\n}\r\n```\r\nThe outcome should be that the `task` parameter doesn't need to be specified in the wizard and\r\nThe HuggingFace settings should contain values: task = text-generation, task_suffix = else, framework = pt\r\n\r\n# Scope\r\nThis only relating to `HuggingFace` runtime and when it's used from SCv1 and only valid as long as SCv1 is still operational and related code is present in MLServer.\n", "before_files": [{"content": "import os\nimport orjson\n\nfrom typing import Optional, Dict\nfrom pydantic import BaseSettings\nfrom distutils.util import strtobool\nfrom transformers.pipelines import SUPPORTED_TASKS\n\ntry:\n # Optimum 1.7 changed the import name from `SUPPORTED_TASKS` to\n # `ORT_SUPPORTED_TASKS`.\n # We'll try to import the more recent one, falling back to the previous\n # import name if not present.\n # https://github.com/huggingface/optimum/blob/987b02e4f6e2a1c9325b364ff764da2e57e89902/optimum/pipelines/__init__.py#L18\n from optimum.pipelines import ORT_SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS\nexcept ImportError:\n from optimum.pipelines import SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS\n\nfrom mlserver.settings import ModelSettings\n\nfrom .errors import (\n MissingHuggingFaceSettings,\n InvalidTransformersTask,\n InvalidOptimumTask,\n InvalidModelParameter,\n InvalidModelParameterType,\n)\n\nENV_PREFIX_HUGGINGFACE_SETTINGS = \"MLSERVER_MODEL_HUGGINGFACE_\"\nPARAMETERS_ENV_NAME = \"PREDICTIVE_UNIT_PARAMETERS\"\n\n\nclass HuggingFaceSettings(BaseSettings):\n \"\"\"\n Parameters that apply only to HuggingFace models\n \"\"\"\n\n class Config:\n env_prefix = ENV_PREFIX_HUGGINGFACE_SETTINGS\n\n # TODO: Document fields\n task: str = \"\"\n \"\"\"\n Pipeline task to load.\n You can see the available Optimum and Transformers tasks available in the\n links below:\n\n - `Optimum Tasks <https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines#inference-pipelines-with-the-onnx-runtime-accelerator>`_\n - `Transformer Tasks <https://huggingface.co/docs/transformers/task_summary>`_\n \"\"\" # noqa: E501\n\n task_suffix: str = \"\"\n \"\"\"\n Suffix to append to the base task name.\n Useful for, e.g. 
translation tasks which require a suffix on the task name\n to specify source and target.\n \"\"\"\n\n pretrained_model: Optional[str] = None\n \"\"\"\n Name of the model that should be loaded in the pipeline.\n \"\"\"\n\n pretrained_tokenizer: Optional[str] = None\n \"\"\"\n Name of the tokenizer that should be loaded in the pipeline.\n \"\"\"\n\n framework: Optional[str] = None\n \"\"\"\n The framework to use, either \"pt\" for PyTorch or \"tf\" for TensorFlow.\n \"\"\"\n\n optimum_model: bool = False\n \"\"\"\n Flag to decide whether the pipeline should use a Optimum-optimised model or\n the standard Transformers model.\n Under the hood, this will enable the model to use the optimised ONNX\n runtime.\n \"\"\"\n\n device: int = -1\n \"\"\"\n Device in which this pipeline will be loaded (e.g., \"cpu\", \"cuda:1\", \"mps\",\n or a GPU ordinal rank like 1).\n \"\"\"\n\n inter_op_threads: Optional[int] = None\n \"\"\"\n Threads used for parallelism between independent operations.\n PyTorch:\n https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html\n Tensorflow:\n https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads\n \"\"\"\n\n intra_op_threads: Optional[int] = None\n \"\"\"\n Threads used within an individual op for parallelism.\n PyTorch:\n https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html\n Tensorflow:\n https://www.tensorflow.org/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads\n \"\"\"\n\n @property\n def task_name(self):\n if self.task == \"translation\":\n return f\"{self.task}{self.task_suffix}\"\n return self.task\n\n\ndef parse_parameters_from_env() -> Dict:\n \"\"\"\n This method parses the environment variables injected via SCv1.\n \"\"\"\n # TODO: Once support for SCv1 is deprecated, we should remove this method and rely\n # purely on settings coming via the `model-settings.json` file.\n parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, \"[]\"))\n\n type_dict = {\n \"INT\": int,\n \"FLOAT\": float,\n \"DOUBLE\": float,\n \"STRING\": str,\n \"BOOL\": bool,\n }\n\n parsed_parameters = {}\n for param in parameters:\n name = param.get(\"name\")\n value = param.get(\"value\")\n type_ = param.get(\"type\")\n if type_ == \"BOOL\":\n parsed_parameters[name] = bool(strtobool(value))\n else:\n try:\n parsed_parameters[name] = type_dict[type_](value)\n except ValueError:\n raise InvalidModelParameter(name, value, type_)\n except KeyError:\n raise InvalidModelParameterType(type_)\n return parsed_parameters\n\n\ndef get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:\n env_params = parse_parameters_from_env()\n if not env_params and (\n not model_settings.parameters or not model_settings.parameters.extra\n ):\n raise MissingHuggingFaceSettings()\n\n extra = env_params or model_settings.parameters.extra # type: ignore\n hf_settings = HuggingFaceSettings(**extra) # type: ignore\n\n if hf_settings.task not in SUPPORTED_TASKS:\n raise InvalidTransformersTask(hf_settings.task, SUPPORTED_TASKS.keys())\n\n if hf_settings.optimum_model:\n if hf_settings.task not in SUPPORTED_OPTIMUM_TASKS:\n raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())\n\n return hf_settings\n", "path": "runtimes/huggingface/mlserver_huggingface/settings.py"}], "after_files": [{"content": "import os\nimport orjson\n\nfrom typing import Optional, Dict, Union, NewType\nfrom pydantic import BaseSettings\nfrom distutils.util import strtobool\nfrom 
transformers.pipelines import SUPPORTED_TASKS\n\ntry:\n # Optimum 1.7 changed the import name from `SUPPORTED_TASKS` to\n # `ORT_SUPPORTED_TASKS`.\n # We'll try to import the more recent one, falling back to the previous\n # import name if not present.\n # https://github.com/huggingface/optimum/blob/987b02e4f6e2a1c9325b364ff764da2e57e89902/optimum/pipelines/__init__.py#L18\n from optimum.pipelines import ORT_SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS\nexcept ImportError:\n from optimum.pipelines import SUPPORTED_TASKS as SUPPORTED_OPTIMUM_TASKS\n\nfrom mlserver.settings import ModelSettings\n\nfrom .errors import (\n MissingHuggingFaceSettings,\n InvalidTransformersTask,\n InvalidOptimumTask,\n InvalidModelParameter,\n InvalidModelParameterType,\n)\n\nENV_PREFIX_HUGGINGFACE_SETTINGS = \"MLSERVER_MODEL_HUGGINGFACE_\"\nPARAMETERS_ENV_NAME = \"PREDICTIVE_UNIT_PARAMETERS\"\n\n\nclass HuggingFaceSettings(BaseSettings):\n \"\"\"\n Parameters that apply only to HuggingFace models\n \"\"\"\n\n class Config:\n env_prefix = ENV_PREFIX_HUGGINGFACE_SETTINGS\n\n # TODO: Document fields\n task: str = \"\"\n \"\"\"\n Pipeline task to load.\n You can see the available Optimum and Transformers tasks available in the\n links below:\n\n - `Optimum Tasks <https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines#inference-pipelines-with-the-onnx-runtime-accelerator>`_\n - `Transformer Tasks <https://huggingface.co/docs/transformers/task_summary>`_\n \"\"\" # noqa: E501\n\n task_suffix: str = \"\"\n \"\"\"\n Suffix to append to the base task name.\n Useful for, e.g. translation tasks which require a suffix on the task name\n to specify source and target.\n \"\"\"\n\n pretrained_model: Optional[str] = None\n \"\"\"\n Name of the model that should be loaded in the pipeline.\n \"\"\"\n\n pretrained_tokenizer: Optional[str] = None\n \"\"\"\n Name of the tokenizer that should be loaded in the pipeline.\n \"\"\"\n\n framework: Optional[str] = None\n \"\"\"\n The framework to use, either \"pt\" for PyTorch or \"tf\" for TensorFlow.\n \"\"\"\n\n optimum_model: bool = False\n \"\"\"\n Flag to decide whether the pipeline should use a Optimum-optimised model or\n the standard Transformers model.\n Under the hood, this will enable the model to use the optimised ONNX\n runtime.\n \"\"\"\n\n device: int = -1\n \"\"\"\n Device in which this pipeline will be loaded (e.g., \"cpu\", \"cuda:1\", \"mps\",\n or a GPU ordinal rank like 1).\n \"\"\"\n\n inter_op_threads: Optional[int] = None\n \"\"\"\n Threads used for parallelism between independent operations.\n PyTorch:\n https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html\n Tensorflow:\n https://www.tensorflow.org/api_docs/python/tf/config/threading/set_inter_op_parallelism_threads\n \"\"\"\n\n intra_op_threads: Optional[int] = None\n \"\"\"\n Threads used within an individual op for parallelism.\n PyTorch:\n https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html\n Tensorflow:\n https://www.tensorflow.org/api_docs/python/tf/config/threading/set_intra_op_parallelism_threads\n \"\"\"\n\n @property\n def task_name(self):\n if self.task == \"translation\":\n return f\"{self.task}{self.task_suffix}\"\n return self.task\n\n\nEXTRA_TYPE_DICT = {\n \"INT\": int,\n \"FLOAT\": float,\n \"DOUBLE\": float,\n \"STRING\": str,\n \"BOOL\": bool,\n}\n\nExtraDict = NewType(\"ExtraDict\", Dict[str, Union[str, bool, float, int]])\n\n\ndef parse_parameters_from_env() -> ExtraDict:\n \"\"\"\n This method parses the environment variables 
injected via SCv1.\n\n At least an empty dict is always returned.\n \"\"\"\n # TODO: Once support for SCv1 is deprecated, we should remove this method and rely\n # purely on settings coming via the `model-settings.json` file.\n parameters = orjson.loads(os.environ.get(PARAMETERS_ENV_NAME, \"[]\"))\n\n parsed_parameters: ExtraDict = ExtraDict({})\n\n # Guard: Exit early if there's no parameters\n if len(parameters) == 0:\n return parsed_parameters\n\n for param in parameters:\n name = param.get(\"name\")\n value = param.get(\"value\")\n type_ = param.get(\"type\")\n if type_ == \"BOOL\":\n parsed_parameters[name] = bool(strtobool(value))\n else:\n try:\n parsed_parameters[name] = EXTRA_TYPE_DICT[type_](value)\n except ValueError:\n raise InvalidModelParameter(name, value, type_)\n except KeyError:\n raise InvalidModelParameterType(type_)\n\n return parsed_parameters\n\n\ndef get_huggingface_settings(model_settings: ModelSettings) -> HuggingFaceSettings:\n \"\"\"Get the HuggingFace settings provided to the runtime\"\"\"\n\n env_params = parse_parameters_from_env()\n extra = merge_huggingface_settings_extra(model_settings, env_params)\n hf_settings = HuggingFaceSettings(**extra) # type: ignore\n\n if hf_settings.task not in SUPPORTED_TASKS:\n raise InvalidTransformersTask(hf_settings.task, SUPPORTED_TASKS.keys())\n\n if hf_settings.optimum_model:\n if hf_settings.task not in SUPPORTED_OPTIMUM_TASKS:\n raise InvalidOptimumTask(hf_settings.task, SUPPORTED_OPTIMUM_TASKS.keys())\n\n return hf_settings\n\n\ndef merge_huggingface_settings_extra(\n model_settings: ModelSettings, env_params: ExtraDict\n) -> ExtraDict:\n \"\"\"\n This function returns the Extra field of the Settings.\n\n It merges them, iff they're both present, from the\n environment AND model settings file. Precedence is\n giving to the environment.\n \"\"\"\n\n # Both `parameters` and `extra` are Optional, so we\n # need to get the value, or nothing.\n settings_params = (\n model_settings.parameters.extra\n if model_settings.parameters is not None\n else None\n )\n\n if settings_params is None and env_params == {}:\n # There must be settings provided by at least the environment OR model settings\n raise MissingHuggingFaceSettings()\n\n # Set the default value\n settings_params = settings_params or {}\n\n # Overwrite any conflicting keys, giving precedence to the environment\n settings_params.update(env_params)\n\n return ExtraDict(settings_params)\n", "path": "runtimes/huggingface/mlserver_huggingface/settings.py"}]} | 2,536 | 957 |
gh_patches_debug_16532 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-1516 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update default port for OTLP exporter to 4317
With this change https://github.com/open-telemetry/opentelemetry-specification/pull/1221 the default port for the OTLP exporter is 4317, while the current default port in the OTLP exporter is 55680. This should be updated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 """
17 This library allows to export tracing data to an OTLP collector.
18
19 Usage
20 -----
21
22 The **OTLP Span Exporter** allows to export `OpenTelemetry`_ traces to the
23 `OTLP`_ collector.
24
25
26 .. _OTLP: https://github.com/open-telemetry/opentelemetry-collector/
27 .. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
28
29 .. envvar:: OTEL_EXPORTER_OTLP_COMPRESSION
30
31 The :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION` environment variable allows a
32 compression algorithm to be passed to the OTLP exporter. The compression
33 algorithms that are supported include gzip and no compression. The value should
34 be in the format of a string "gzip" for gzip compression, and no value specified
35 if no compression is the desired choice.
36 Additional details are available `in the specification
37 <https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/protocol/exporter.md#opentelemetry-protocol-exporter>`_.
38
39 .. code:: python
40
41 from opentelemetry import trace
42 from opentelemetry.exporter.otlp.trace_exporter import OTLPSpanExporter
43 from opentelemetry.sdk.resources import Resource
44 from opentelemetry.sdk.trace import TracerProvider
45 from opentelemetry.sdk.trace.export import BatchExportSpanProcessor
46
47 # Resource can be required for some backends, e.g. Jaeger
48 # If resource wouldn't be set - traces wouldn't appears in Jaeger
49 resource = Resource(attributes={
50 "service.name": "service"
51 })
52
53 trace.set_tracer_provider(TracerProvider(resource=resource))
54 tracer = trace.get_tracer(__name__)
55
56 otlp_exporter = OTLPSpanExporter(endpoint="localhost:55680", insecure=True)
57
58 span_processor = BatchExportSpanProcessor(otlp_exporter)
59
60 trace.get_tracer_provider().add_span_processor(span_processor)
61
62 with tracer.start_as_current_span("foo"):
63 print("Hello world!")
64
65 API
66 ---
67 """
68
```
Path: `exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """OTLP Exporter"""
16
17 import enum
18 import logging
19 from abc import ABC, abstractmethod
20 from collections.abc import Mapping, Sequence
21 from time import sleep
22 from typing import Any, Callable, Dict, Generic, List, Optional
23 from typing import Sequence as TypingSequence
24 from typing import Text, TypeVar
25
26 from backoff import expo
27 from google.rpc.error_details_pb2 import RetryInfo
28 from grpc import (
29 ChannelCredentials,
30 Compression,
31 RpcError,
32 StatusCode,
33 insecure_channel,
34 secure_channel,
35 ssl_channel_credentials,
36 )
37
38 from opentelemetry.configuration import Configuration
39 from opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue
40 from opentelemetry.proto.resource.v1.resource_pb2 import Resource
41 from opentelemetry.sdk.resources import Resource as SDKResource
42
43 logger = logging.getLogger(__name__)
44 SDKDataT = TypeVar("SDKDataT")
45 ResourceDataT = TypeVar("ResourceDataT")
46 TypingResourceT = TypeVar("TypingResourceT")
47 ExportServiceRequestT = TypeVar("ExportServiceRequestT")
48 ExportResultT = TypeVar("ExportResultT")
49
50
51 class OTLPCompression(enum.Enum):
52 gzip = "gzip"
53
54
55 def _translate_key_values(key: Text, value: Any) -> KeyValue:
56
57 if isinstance(value, bool):
58 any_value = AnyValue(bool_value=value)
59
60 elif isinstance(value, str):
61 any_value = AnyValue(string_value=value)
62
63 elif isinstance(value, int):
64 any_value = AnyValue(int_value=value)
65
66 elif isinstance(value, float):
67 any_value = AnyValue(double_value=value)
68
69 elif isinstance(value, Sequence):
70 any_value = AnyValue(array_value=value)
71
72 elif isinstance(value, Mapping):
73 any_value = AnyValue(kvlist_value=value)
74
75 else:
76 raise Exception(
77 "Invalid type {} of value {}".format(type(value), value)
78 )
79
80 return KeyValue(key=key, value=any_value)
81
82
83 def _get_resource_data(
84 sdk_resource_instrumentation_library_data: Dict[
85 SDKResource, ResourceDataT
86 ],
87 resource_class: Callable[..., TypingResourceT],
88 name: str,
89 ) -> List[TypingResourceT]:
90
91 resource_data = []
92
93 for (
94 sdk_resource,
95 instrumentation_library_data,
96 ) in sdk_resource_instrumentation_library_data.items():
97
98 collector_resource = Resource()
99
100 for key, value in sdk_resource.attributes.items():
101
102 try:
103 # pylint: disable=no-member
104 collector_resource.attributes.append(
105 _translate_key_values(key, value)
106 )
107 except Exception as error: # pylint: disable=broad-except
108 logger.exception(error)
109
110 resource_data.append(
111 resource_class(
112 **{
113 "resource": collector_resource,
114 "instrumentation_library_{}".format(name): [
115 instrumentation_library_data
116 ],
117 }
118 )
119 )
120
121 return resource_data
122
123
124 def _load_credential_from_file(filepath) -> ChannelCredentials:
125 try:
126 with open(filepath, "rb") as creds_file:
127 credential = creds_file.read()
128 return ssl_channel_credentials(credential)
129 except FileNotFoundError:
130 logger.exception("Failed to read credential file")
131 return None
132
133
134 # pylint: disable=no-member
135 class OTLPExporterMixin(
136 ABC, Generic[SDKDataT, ExportServiceRequestT, ExportResultT]
137 ):
138 """OTLP span/metric exporter
139
140 Args:
141 endpoint: OpenTelemetry Collector receiver endpoint
142 insecure: Connection type
143 credentials: ChannelCredentials object for server authentication
144 headers: Headers to send when exporting
145 compression: Compression algorithm to be used in channel
146 timeout: Backend request timeout in seconds
147 """
148
149 def __init__(
150 self,
151 endpoint: Optional[str] = None,
152 insecure: Optional[bool] = None,
153 credentials: Optional[ChannelCredentials] = None,
154 headers: Optional[Sequence] = None,
155 timeout: Optional[int] = None,
156 compression: str = None,
157 ):
158 super().__init__()
159
160 endpoint = (
161 endpoint
162 or Configuration().EXPORTER_OTLP_ENDPOINT
163 or "localhost:55680"
164 )
165
166 if insecure is None:
167 insecure = Configuration().EXPORTER_OTLP_INSECURE
168 if insecure is None:
169 insecure = False
170
171 self._headers = headers or Configuration().EXPORTER_OTLP_HEADERS
172 if isinstance(self._headers, str):
173 self._headers = tuple(
174 tuple(item.split("=")) for item in self._headers.split(",")
175 )
176 self._timeout = (
177 timeout
178 or Configuration().EXPORTER_OTLP_TIMEOUT
179 or 10 # default: 10 seconds
180 )
181 self._collector_span_kwargs = None
182
183 if compression is None:
184 compression_algorithm = Compression.NoCompression
185 elif (
186 compression in OTLPCompression._value2member_map_
187 and OTLPCompression(compression) is OTLPCompression.gzip
188 ):
189 compression_algorithm = Compression.Gzip
190 else:
191 compression_str = Configuration().EXPORTER_OTLP_INSECURE or None
192 if compression_str is None:
193 compression_algorithm = Compression.NoCompression
194 elif (
195 compression_str in OTLPCompression._value2member_map_
196 and OTLPCompression(compression_str) is OTLPCompression.gzip
197 ):
198 compression_algorithm = Compression.Gzip
199 else:
200 raise ValueError(
201 "OTEL_EXPORTER_OTLP_COMPRESSION environment variable does not match gzip."
202 )
203
204 if insecure:
205 self._client = self._stub(
206 insecure_channel(endpoint, compression=compression_algorithm)
207 )
208 return
209
210 # secure mode
211 if (
212 credentials is None
213 and Configuration().EXPORTER_OTLP_CERTIFICATE is None
214 ):
215 # use the default location chosen by gRPC runtime
216 credentials = ssl_channel_credentials()
217 else:
218 credentials = credentials or _load_credential_from_file(
219 Configuration().EXPORTER_OTLP_CERTIFICATE
220 )
221 self._client = self._stub(
222 secure_channel(
223 endpoint, credentials, compression=compression_algorithm
224 )
225 )
226
227 @abstractmethod
228 def _translate_data(
229 self, data: TypingSequence[SDKDataT]
230 ) -> ExportServiceRequestT:
231 pass
232
233 def _export(self, data: TypingSequence[SDKDataT]) -> ExportResultT:
234 # expo returns a generator that yields delay values which grow
235 # exponentially. Once delay is greater than max_value, the yielded
236 # value will remain constant.
237 # max_value is set to 900 (900 seconds is 15 minutes) to use the same
238 # value as used in the Go implementation.
239
240 max_value = 900
241
242 for delay in expo(max_value=max_value):
243
244 if delay == max_value:
245 return self._result.FAILURE
246
247 try:
248 self._client.Export(
249 request=self._translate_data(data),
250 metadata=self._headers,
251 timeout=self._timeout,
252 )
253
254 return self._result.SUCCESS
255
256 except RpcError as error:
257
258 if error.code() in [
259 StatusCode.CANCELLED,
260 StatusCode.DEADLINE_EXCEEDED,
261 StatusCode.PERMISSION_DENIED,
262 StatusCode.UNAUTHENTICATED,
263 StatusCode.RESOURCE_EXHAUSTED,
264 StatusCode.ABORTED,
265 StatusCode.OUT_OF_RANGE,
266 StatusCode.UNAVAILABLE,
267 StatusCode.DATA_LOSS,
268 ]:
269
270 retry_info_bin = dict(error.trailing_metadata()).get(
271 "google.rpc.retryinfo-bin"
272 )
273 if retry_info_bin is not None:
274 retry_info = RetryInfo()
275 retry_info.ParseFromString(retry_info_bin)
276 delay = (
277 retry_info.retry_delay.seconds
278 + retry_info.retry_delay.nanos / 1.0e9
279 )
280
281 logger.debug(
282 "Waiting %ss before retrying export of span", delay
283 )
284 sleep(delay)
285 continue
286
287 if error.code() == StatusCode.OK:
288 return self._result.SUCCESS
289
290 return self._result.FAILURE
291
292 return self._result.FAILURE
293
294 def shutdown(self) -> None:
295 pass
296
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py
--- a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py
+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py
@@ -53,7 +53,7 @@
trace.set_tracer_provider(TracerProvider(resource=resource))
tracer = trace.get_tracer(__name__)
- otlp_exporter = OTLPSpanExporter(endpoint="localhost:55680", insecure=True)
+ otlp_exporter = OTLPSpanExporter(endpoint="localhost:4317", insecure=True)
span_processor = BatchExportSpanProcessor(otlp_exporter)
diff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py
--- a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py
+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py
@@ -160,7 +160,7 @@
endpoint = (
endpoint
or Configuration().EXPORTER_OTLP_ENDPOINT
- or "localhost:55680"
+ or "localhost:4317"
)
if insecure is None:
| {"golden_diff": "diff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py\n--- a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py\n+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py\n@@ -53,7 +53,7 @@\n trace.set_tracer_provider(TracerProvider(resource=resource))\n tracer = trace.get_tracer(__name__)\n \n- otlp_exporter = OTLPSpanExporter(endpoint=\"localhost:55680\", insecure=True)\n+ otlp_exporter = OTLPSpanExporter(endpoint=\"localhost:4317\", insecure=True)\n \n span_processor = BatchExportSpanProcessor(otlp_exporter)\n \ndiff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py\n--- a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py\n+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py\n@@ -160,7 +160,7 @@\n endpoint = (\n endpoint\n or Configuration().EXPORTER_OTLP_ENDPOINT\n- or \"localhost:55680\"\n+ or \"localhost:4317\"\n )\n \n if insecure is None:\n", "issue": "Update default port for OTLP exporter to 4317\nWith this change https://github.com/open-telemetry/opentelemetry-specification/pull/1221 default port for OTLP exporter is 4317, the current default port in the OTLP exporter is 55680. This should be updated.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\nThis library allows to export tracing data to an OTLP collector.\n\nUsage\n-----\n\nThe **OTLP Span Exporter** allows to export `OpenTelemetry`_ traces to the\n`OTLP`_ collector.\n\n\n.. _OTLP: https://github.com/open-telemetry/opentelemetry-collector/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. envvar:: OTEL_EXPORTER_OTLP_COMPRESSION\n\nThe :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION` environment variable allows a\ncompression algorithm to be passed to the OTLP exporter. The compression\nalgorithms that are supported include gzip and no compression. The value should\nbe in the format of a string \"gzip\" for gzip compression, and no value specified\nif no compression is the desired choice.\nAdditional details are available `in the specification\n<https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/protocol/exporter.md#opentelemetry-protocol-exporter>`_.\n\n.. code:: python\n\n from opentelemetry import trace\n from opentelemetry.exporter.otlp.trace_exporter import OTLPSpanExporter\n from opentelemetry.sdk.resources import Resource\n from opentelemetry.sdk.trace import TracerProvider\n from opentelemetry.sdk.trace.export import BatchExportSpanProcessor\n\n # Resource can be required for some backends, e.g. 
Jaeger\n # If resource wouldn't be set - traces wouldn't appears in Jaeger\n resource = Resource(attributes={\n \"service.name\": \"service\"\n })\n\n trace.set_tracer_provider(TracerProvider(resource=resource))\n tracer = trace.get_tracer(__name__)\n\n otlp_exporter = OTLPSpanExporter(endpoint=\"localhost:55680\", insecure=True)\n\n span_processor = BatchExportSpanProcessor(otlp_exporter)\n\n trace.get_tracer_provider().add_span_processor(span_processor)\n\n with tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n\nAPI\n---\n\"\"\"\n", "path": "exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OTLP Exporter\"\"\"\n\nimport enum\nimport logging\nfrom abc import ABC, abstractmethod\nfrom collections.abc import Mapping, Sequence\nfrom time import sleep\nfrom typing import Any, Callable, Dict, Generic, List, Optional\nfrom typing import Sequence as TypingSequence\nfrom typing import Text, TypeVar\n\nfrom backoff import expo\nfrom google.rpc.error_details_pb2 import RetryInfo\nfrom grpc import (\n ChannelCredentials,\n Compression,\n RpcError,\n StatusCode,\n insecure_channel,\n secure_channel,\n ssl_channel_credentials,\n)\n\nfrom opentelemetry.configuration import Configuration\nfrom opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue\nfrom opentelemetry.proto.resource.v1.resource_pb2 import Resource\nfrom opentelemetry.sdk.resources import Resource as SDKResource\n\nlogger = logging.getLogger(__name__)\nSDKDataT = TypeVar(\"SDKDataT\")\nResourceDataT = TypeVar(\"ResourceDataT\")\nTypingResourceT = TypeVar(\"TypingResourceT\")\nExportServiceRequestT = TypeVar(\"ExportServiceRequestT\")\nExportResultT = TypeVar(\"ExportResultT\")\n\n\nclass OTLPCompression(enum.Enum):\n gzip = \"gzip\"\n\n\ndef _translate_key_values(key: Text, value: Any) -> KeyValue:\n\n if isinstance(value, bool):\n any_value = AnyValue(bool_value=value)\n\n elif isinstance(value, str):\n any_value = AnyValue(string_value=value)\n\n elif isinstance(value, int):\n any_value = AnyValue(int_value=value)\n\n elif isinstance(value, float):\n any_value = AnyValue(double_value=value)\n\n elif isinstance(value, Sequence):\n any_value = AnyValue(array_value=value)\n\n elif isinstance(value, Mapping):\n any_value = AnyValue(kvlist_value=value)\n\n else:\n raise Exception(\n \"Invalid type {} of value {}\".format(type(value), value)\n )\n\n return KeyValue(key=key, value=any_value)\n\n\ndef _get_resource_data(\n sdk_resource_instrumentation_library_data: Dict[\n SDKResource, ResourceDataT\n ],\n resource_class: Callable[..., TypingResourceT],\n name: str,\n) -> List[TypingResourceT]:\n\n resource_data = []\n\n for (\n sdk_resource,\n instrumentation_library_data,\n ) in sdk_resource_instrumentation_library_data.items():\n\n collector_resource = Resource()\n\n for key, value in sdk_resource.attributes.items():\n\n try:\n # pylint: disable=no-member\n 
collector_resource.attributes.append(\n _translate_key_values(key, value)\n )\n except Exception as error: # pylint: disable=broad-except\n logger.exception(error)\n\n resource_data.append(\n resource_class(\n **{\n \"resource\": collector_resource,\n \"instrumentation_library_{}\".format(name): [\n instrumentation_library_data\n ],\n }\n )\n )\n\n return resource_data\n\n\ndef _load_credential_from_file(filepath) -> ChannelCredentials:\n try:\n with open(filepath, \"rb\") as creds_file:\n credential = creds_file.read()\n return ssl_channel_credentials(credential)\n except FileNotFoundError:\n logger.exception(\"Failed to read credential file\")\n return None\n\n\n# pylint: disable=no-member\nclass OTLPExporterMixin(\n ABC, Generic[SDKDataT, ExportServiceRequestT, ExportResultT]\n):\n \"\"\"OTLP span/metric exporter\n\n Args:\n endpoint: OpenTelemetry Collector receiver endpoint\n insecure: Connection type\n credentials: ChannelCredentials object for server authentication\n headers: Headers to send when exporting\n compression: Compression algorithm to be used in channel\n timeout: Backend request timeout in seconds\n \"\"\"\n\n def __init__(\n self,\n endpoint: Optional[str] = None,\n insecure: Optional[bool] = None,\n credentials: Optional[ChannelCredentials] = None,\n headers: Optional[Sequence] = None,\n timeout: Optional[int] = None,\n compression: str = None,\n ):\n super().__init__()\n\n endpoint = (\n endpoint\n or Configuration().EXPORTER_OTLP_ENDPOINT\n or \"localhost:55680\"\n )\n\n if insecure is None:\n insecure = Configuration().EXPORTER_OTLP_INSECURE\n if insecure is None:\n insecure = False\n\n self._headers = headers or Configuration().EXPORTER_OTLP_HEADERS\n if isinstance(self._headers, str):\n self._headers = tuple(\n tuple(item.split(\"=\")) for item in self._headers.split(\",\")\n )\n self._timeout = (\n timeout\n or Configuration().EXPORTER_OTLP_TIMEOUT\n or 10 # default: 10 seconds\n )\n self._collector_span_kwargs = None\n\n if compression is None:\n compression_algorithm = Compression.NoCompression\n elif (\n compression in OTLPCompression._value2member_map_\n and OTLPCompression(compression) is OTLPCompression.gzip\n ):\n compression_algorithm = Compression.Gzip\n else:\n compression_str = Configuration().EXPORTER_OTLP_INSECURE or None\n if compression_str is None:\n compression_algorithm = Compression.NoCompression\n elif (\n compression_str in OTLPCompression._value2member_map_\n and OTLPCompression(compression_str) is OTLPCompression.gzip\n ):\n compression_algorithm = Compression.Gzip\n else:\n raise ValueError(\n \"OTEL_EXPORTER_OTLP_COMPRESSION environment variable does not match gzip.\"\n )\n\n if insecure:\n self._client = self._stub(\n insecure_channel(endpoint, compression=compression_algorithm)\n )\n return\n\n # secure mode\n if (\n credentials is None\n and Configuration().EXPORTER_OTLP_CERTIFICATE is None\n ):\n # use the default location chosen by gRPC runtime\n credentials = ssl_channel_credentials()\n else:\n credentials = credentials or _load_credential_from_file(\n Configuration().EXPORTER_OTLP_CERTIFICATE\n )\n self._client = self._stub(\n secure_channel(\n endpoint, credentials, compression=compression_algorithm\n )\n )\n\n @abstractmethod\n def _translate_data(\n self, data: TypingSequence[SDKDataT]\n ) -> ExportServiceRequestT:\n pass\n\n def _export(self, data: TypingSequence[SDKDataT]) -> ExportResultT:\n # expo returns a generator that yields delay values which grow\n # exponentially. 
Once delay is greater than max_value, the yielded\n # value will remain constant.\n # max_value is set to 900 (900 seconds is 15 minutes) to use the same\n # value as used in the Go implementation.\n\n max_value = 900\n\n for delay in expo(max_value=max_value):\n\n if delay == max_value:\n return self._result.FAILURE\n\n try:\n self._client.Export(\n request=self._translate_data(data),\n metadata=self._headers,\n timeout=self._timeout,\n )\n\n return self._result.SUCCESS\n\n except RpcError as error:\n\n if error.code() in [\n StatusCode.CANCELLED,\n StatusCode.DEADLINE_EXCEEDED,\n StatusCode.PERMISSION_DENIED,\n StatusCode.UNAUTHENTICATED,\n StatusCode.RESOURCE_EXHAUSTED,\n StatusCode.ABORTED,\n StatusCode.OUT_OF_RANGE,\n StatusCode.UNAVAILABLE,\n StatusCode.DATA_LOSS,\n ]:\n\n retry_info_bin = dict(error.trailing_metadata()).get(\n \"google.rpc.retryinfo-bin\"\n )\n if retry_info_bin is not None:\n retry_info = RetryInfo()\n retry_info.ParseFromString(retry_info_bin)\n delay = (\n retry_info.retry_delay.seconds\n + retry_info.retry_delay.nanos / 1.0e9\n )\n\n logger.debug(\n \"Waiting %ss before retrying export of span\", delay\n )\n sleep(delay)\n continue\n\n if error.code() == StatusCode.OK:\n return self._result.SUCCESS\n\n return self._result.FAILURE\n\n return self._result.FAILURE\n\n def shutdown(self) -> None:\n pass\n", "path": "exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"\nThis library allows to export tracing data to an OTLP collector.\n\nUsage\n-----\n\nThe **OTLP Span Exporter** allows to export `OpenTelemetry`_ traces to the\n`OTLP`_ collector.\n\n\n.. _OTLP: https://github.com/open-telemetry/opentelemetry-collector/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. envvar:: OTEL_EXPORTER_OTLP_COMPRESSION\n\nThe :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION` environment variable allows a\ncompression algorithm to be passed to the OTLP exporter. The compression\nalgorithms that are supported include gzip and no compression. The value should\nbe in the format of a string \"gzip\" for gzip compression, and no value specified\nif no compression is the desired choice.\nAdditional details are available `in the specification\n<https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/protocol/exporter.md#opentelemetry-protocol-exporter>`_.\n\n.. code:: python\n\n from opentelemetry import trace\n from opentelemetry.exporter.otlp.trace_exporter import OTLPSpanExporter\n from opentelemetry.sdk.resources import Resource\n from opentelemetry.sdk.trace import TracerProvider\n from opentelemetry.sdk.trace.export import BatchExportSpanProcessor\n\n # Resource can be required for some backends, e.g. 
Jaeger\n # If resource wouldn't be set - traces wouldn't appears in Jaeger\n resource = Resource(attributes={\n \"service.name\": \"service\"\n })\n\n trace.set_tracer_provider(TracerProvider(resource=resource))\n tracer = trace.get_tracer(__name__)\n\n otlp_exporter = OTLPSpanExporter(endpoint=\"localhost:4317\", insecure=True)\n\n span_processor = BatchExportSpanProcessor(otlp_exporter)\n\n trace.get_tracer_provider().add_span_processor(span_processor)\n\n with tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n\nAPI\n---\n\"\"\"\n", "path": "exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/__init__.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OTLP Exporter\"\"\"\n\nimport enum\nimport logging\nfrom abc import ABC, abstractmethod\nfrom collections.abc import Mapping, Sequence\nfrom time import sleep\nfrom typing import Any, Callable, Dict, Generic, List, Optional\nfrom typing import Sequence as TypingSequence\nfrom typing import Text, TypeVar\n\nfrom backoff import expo\nfrom google.rpc.error_details_pb2 import RetryInfo\nfrom grpc import (\n ChannelCredentials,\n Compression,\n RpcError,\n StatusCode,\n insecure_channel,\n secure_channel,\n ssl_channel_credentials,\n)\n\nfrom opentelemetry.configuration import Configuration\nfrom opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue\nfrom opentelemetry.proto.resource.v1.resource_pb2 import Resource\nfrom opentelemetry.sdk.resources import Resource as SDKResource\n\nlogger = logging.getLogger(__name__)\nSDKDataT = TypeVar(\"SDKDataT\")\nResourceDataT = TypeVar(\"ResourceDataT\")\nTypingResourceT = TypeVar(\"TypingResourceT\")\nExportServiceRequestT = TypeVar(\"ExportServiceRequestT\")\nExportResultT = TypeVar(\"ExportResultT\")\n\n\nclass OTLPCompression(enum.Enum):\n gzip = \"gzip\"\n\n\ndef _translate_key_values(key: Text, value: Any) -> KeyValue:\n\n if isinstance(value, bool):\n any_value = AnyValue(bool_value=value)\n\n elif isinstance(value, str):\n any_value = AnyValue(string_value=value)\n\n elif isinstance(value, int):\n any_value = AnyValue(int_value=value)\n\n elif isinstance(value, float):\n any_value = AnyValue(double_value=value)\n\n elif isinstance(value, Sequence):\n any_value = AnyValue(array_value=value)\n\n elif isinstance(value, Mapping):\n any_value = AnyValue(kvlist_value=value)\n\n else:\n raise Exception(\n \"Invalid type {} of value {}\".format(type(value), value)\n )\n\n return KeyValue(key=key, value=any_value)\n\n\ndef _get_resource_data(\n sdk_resource_instrumentation_library_data: Dict[\n SDKResource, ResourceDataT\n ],\n resource_class: Callable[..., TypingResourceT],\n name: str,\n) -> List[TypingResourceT]:\n\n resource_data = []\n\n for (\n sdk_resource,\n instrumentation_library_data,\n ) in sdk_resource_instrumentation_library_data.items():\n\n collector_resource = Resource()\n\n for key, value in sdk_resource.attributes.items():\n\n try:\n # pylint: disable=no-member\n 
collector_resource.attributes.append(\n _translate_key_values(key, value)\n )\n except Exception as error: # pylint: disable=broad-except\n logger.exception(error)\n\n resource_data.append(\n resource_class(\n **{\n \"resource\": collector_resource,\n \"instrumentation_library_{}\".format(name): [\n instrumentation_library_data\n ],\n }\n )\n )\n\n return resource_data\n\n\ndef _load_credential_from_file(filepath) -> ChannelCredentials:\n try:\n with open(filepath, \"rb\") as creds_file:\n credential = creds_file.read()\n return ssl_channel_credentials(credential)\n except FileNotFoundError:\n logger.exception(\"Failed to read credential file\")\n return None\n\n\n# pylint: disable=no-member\nclass OTLPExporterMixin(\n ABC, Generic[SDKDataT, ExportServiceRequestT, ExportResultT]\n):\n \"\"\"OTLP span/metric exporter\n\n Args:\n endpoint: OpenTelemetry Collector receiver endpoint\n insecure: Connection type\n credentials: ChannelCredentials object for server authentication\n headers: Headers to send when exporting\n compression: Compression algorithm to be used in channel\n timeout: Backend request timeout in seconds\n \"\"\"\n\n def __init__(\n self,\n endpoint: Optional[str] = None,\n insecure: Optional[bool] = None,\n credentials: Optional[ChannelCredentials] = None,\n headers: Optional[Sequence] = None,\n timeout: Optional[int] = None,\n compression: str = None,\n ):\n super().__init__()\n\n endpoint = (\n endpoint\n or Configuration().EXPORTER_OTLP_ENDPOINT\n or \"localhost:4317\"\n )\n\n if insecure is None:\n insecure = Configuration().EXPORTER_OTLP_INSECURE\n if insecure is None:\n insecure = False\n\n self._headers = headers or Configuration().EXPORTER_OTLP_HEADERS\n if isinstance(self._headers, str):\n self._headers = tuple(\n tuple(item.split(\"=\")) for item in self._headers.split(\",\")\n )\n self._timeout = (\n timeout\n or Configuration().EXPORTER_OTLP_TIMEOUT\n or 10 # default: 10 seconds\n )\n self._collector_span_kwargs = None\n\n if compression is None:\n compression_algorithm = Compression.NoCompression\n elif (\n compression in OTLPCompression._value2member_map_\n and OTLPCompression(compression) is OTLPCompression.gzip\n ):\n compression_algorithm = Compression.Gzip\n else:\n compression_str = Configuration().EXPORTER_OTLP_INSECURE or None\n if compression_str is None:\n compression_algorithm = Compression.NoCompression\n elif (\n compression_str in OTLPCompression._value2member_map_\n and OTLPCompression(compression_str) is OTLPCompression.gzip\n ):\n compression_algorithm = Compression.Gzip\n else:\n raise ValueError(\n \"OTEL_EXPORTER_OTLP_COMPRESSION environment variable does not match gzip.\"\n )\n\n if insecure:\n self._client = self._stub(\n insecure_channel(endpoint, compression=compression_algorithm)\n )\n return\n\n # secure mode\n if (\n credentials is None\n and Configuration().EXPORTER_OTLP_CERTIFICATE is None\n ):\n # use the default location chosen by gRPC runtime\n credentials = ssl_channel_credentials()\n else:\n credentials = credentials or _load_credential_from_file(\n Configuration().EXPORTER_OTLP_CERTIFICATE\n )\n self._client = self._stub(\n secure_channel(\n endpoint, credentials, compression=compression_algorithm\n )\n )\n\n @abstractmethod\n def _translate_data(\n self, data: TypingSequence[SDKDataT]\n ) -> ExportServiceRequestT:\n pass\n\n def _export(self, data: TypingSequence[SDKDataT]) -> ExportResultT:\n # expo returns a generator that yields delay values which grow\n # exponentially. 
Once delay is greater than max_value, the yielded\n # value will remain constant.\n # max_value is set to 900 (900 seconds is 15 minutes) to use the same\n # value as used in the Go implementation.\n\n max_value = 900\n\n for delay in expo(max_value=max_value):\n\n if delay == max_value:\n return self._result.FAILURE\n\n try:\n self._client.Export(\n request=self._translate_data(data),\n metadata=self._headers,\n timeout=self._timeout,\n )\n\n return self._result.SUCCESS\n\n except RpcError as error:\n\n if error.code() in [\n StatusCode.CANCELLED,\n StatusCode.DEADLINE_EXCEEDED,\n StatusCode.PERMISSION_DENIED,\n StatusCode.UNAUTHENTICATED,\n StatusCode.RESOURCE_EXHAUSTED,\n StatusCode.ABORTED,\n StatusCode.OUT_OF_RANGE,\n StatusCode.UNAVAILABLE,\n StatusCode.DATA_LOSS,\n ]:\n\n retry_info_bin = dict(error.trailing_metadata()).get(\n \"google.rpc.retryinfo-bin\"\n )\n if retry_info_bin is not None:\n retry_info = RetryInfo()\n retry_info.ParseFromString(retry_info_bin)\n delay = (\n retry_info.retry_delay.seconds\n + retry_info.retry_delay.nanos / 1.0e9\n )\n\n logger.debug(\n \"Waiting %ss before retrying export of span\", delay\n )\n sleep(delay)\n continue\n\n if error.code() == StatusCode.OK:\n return self._result.SUCCESS\n\n return self._result.FAILURE\n\n return self._result.FAILURE\n\n def shutdown(self) -> None:\n pass\n", "path": "exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/exporter.py"}]} | 3,722 | 371 |
gh_patches_debug_19649 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-3246 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
I cannot migrate a Plone 5.2 ZODB to 6
I took a Plone 5.2.3 database with some history and tried to migrate it to Plone 6 (buildout coredev, dx site root plip cfg).
When I browse to the site root at http://localhost:8080, I get this:
```
...
AttributeError: 'MountedObject' object has no attribute 'getId'
- Expression: "view/sites"
- Filename: ... ne/Products/CMFPlone/browser/templates/plone-overview.pt
...
```
The reason is that we still have the `temp_folder`, which is already meant to be removed in Plone 6, but unfortunately this object is persistent-broken, so while its `meta_type` still resolves, `getId` cannot be found:
```
((Pdb++)) pp root.values()
[<BrowserIdManager at /browser_id_manager>,
<SessionDataManager at /session_data_manager>,
<SiteErrorLog at /error_log>,
<persistent broken Products.ZODBMountPoint.MountedObject.MountedObject instance b'\x00\x00\x00\x00\x00\x00\x00\t'>,
<Products.SiteAccess.VirtualHostMonster.VirtualHostMonster object at 0x7f304b3984a0 oid 0xa in <Connection at 7f3051c6a490>>,
<ZopePageTemplate at /index_html>,
<PloneSite at /Plone>,
<PluggableAuthService at /acl_users>]
```
--- END ISSUE ---
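For context on the failure mode: ZODB loads any object whose original class can no longer be imported as a `Broken` placeholder, which keeps the pickled state but not methods such as `getId()`. Below is a minimal sketch of guarding an iteration against such objects; it is illustrative only (not the wording of the actual fix), and `root` stands for the Zope application root from the pdb session above.

```python
from ZODB.broken import Broken


def usable_children(container):
    """Yield only children whose classes could actually be imported."""
    for obj in container.values():
        # Persistent-broken objects (like the leftover temp_folder mount point)
        # would raise AttributeError on getId(), so skip them up front.
        if isinstance(obj, Broken):
            continue
        yield obj


# e.g. in the pdb session above: list(usable_children(root))
```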
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/browser/admin.py`
Content:
```
1 from AccessControl import getSecurityManager
2 from AccessControl.Permissions import view as View
3 from OFS.interfaces import IApplication
4 from Products.CMFCore.permissions import ManagePortal
5 from Products.CMFPlone.factory import _DEFAULT_PROFILE
6 from Products.CMFPlone.factory import addPloneSite
7 from Products.CMFPlone.interfaces import INonInstallable
8 from Products.CMFPlone.interfaces import IPloneSiteRoot
9 from Products.CMFPlone.utils import get_installer
10 from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
11 from Products.GenericSetup import BASE, EXTENSION
12 from Products.GenericSetup import profile_registry
13 from Products.GenericSetup.upgrade import normalize_version
14 from ZPublisher.BaseRequest import DefaultPublishTraverse
15 from collections import OrderedDict
16 from plone.i18n.locales.interfaces import IContentLanguageAvailability
17 from plone.keyring.interfaces import IKeyManager
18 from plone.protect.authenticator import check as checkCSRF
19 from plone.protect.interfaces import IDisableCSRFProtection
20 from urllib import parse
21 from zope.component import adapts
22 from zope.component import getAllUtilitiesRegisteredFor
23 from zope.component import getUtility
24 from zope.component import queryMultiAdapter
25 from zope.component import queryUtility
26 from zope.i18n.interfaces import IUserPreferredLanguages
27 from zope.i18n.locales import locales, LoadLocaleError
28 from zope.interface import Interface
29 from zope.interface import alsoProvides
30 from zope.publisher.browser import BrowserView
31 from zope.publisher.interfaces import IRequest
32 from zope.schema.interfaces import IVocabularyFactory
33
34 import logging
35 LOGGER = logging.getLogger('Products.CMFPlone')
36
37
38 class AppTraverser(DefaultPublishTraverse):
39 adapts(IApplication, IRequest)
40
41 def publishTraverse(self, request, name):
42 if name == 'index_html':
43 view = queryMultiAdapter(
44 (self.context, request), Interface, 'plone-overview')
45 if view is not None:
46 return view
47 return DefaultPublishTraverse.publishTraverse(self, request, name)
48
49
50 class Overview(BrowserView):
51
52 def sites(self, root=None):
53 if root is None:
54 root = self.context
55
56 result = []
57 secman = getSecurityManager()
58 for obj in root.values():
59 if obj.meta_type == 'Folder':
60 result = result + self.sites(obj)
61 elif IPloneSiteRoot.providedBy(obj):
62 if secman.checkPermission(View, obj):
63 result.append(obj)
64 elif obj.getId() in getattr(root, '_mount_points', {}):
65 result.extend(self.sites(root=obj))
66 return result
67
68 def outdated(self, obj):
69 mig = obj.get('portal_migration', None)
70 if mig is not None:
71 return mig.needUpgrading()
72 return False
73
74 def can_manage(self):
75 secman = getSecurityManager()
76 return secman.checkPermission(ManagePortal, self.context)
77
78 def upgrade_url(self, site, can_manage=None):
79 if can_manage is None:
80 can_manage = self.can_manage()
81 if can_manage:
82 return site.absolute_url() + '/@@plone-upgrade'
83 else:
84 return self.context.absolute_url() + '/@@plone-root-login'
85
86
87 class RootLoginRedirect(BrowserView):
88 """ @@plone-root-login
89
90 This view of the Zope root forces authentication via the root
91 acl_users and then redirects elsewhere.
92 """
93
94 def __call__(self, came_from=None):
95 if came_from is not None:
96 # see if this is a relative url or an absolute
97 if len(parse.urlparse(came_from)[1]) == 0:
98 # No host specified, so url is relative. Get an absolute url.
99 # Note: '\\domain.org' is not recognised as host,
100 # which is good.
101 came_from = parse.urljoin(
102 self.context.absolute_url() + '/', came_from,
103 )
104 elif not came_from.startswith(self.context.absolute_url()):
105 # Note: we cannot use portal_url.isURLInPortal here, because we
106 # are not in a Plone portal, but in the Zope root.
107 came_from = None
108 if came_from is None:
109 came_from = self.context.absolute_url()
110 self.request.response.redirect(came_from)
111
112
113 class RootLogout(BrowserView):
114 """ @@plone-root-logout """
115
116 logout = ViewPageTemplateFile('templates/plone-admin-logged-out.pt')
117
118 def __call__(self):
119 response = self.request.response
120 realm = response.realm
121 response.setStatus(401)
122 response.setHeader('WWW-Authenticate', 'basic realm="%s"' % realm, 1)
123 response.setBody(self.logout())
124 return
125
126
127 class FrontPage(BrowserView):
128
129 index = ViewPageTemplateFile('templates/plone-frontpage.pt')
130
131
132 class AddPloneSite(BrowserView):
133
134 # Profiles that are installed by default,
135 # but can be removed later.
136 default_extension_profiles = (
137 'plone.app.caching:default',
138 'plonetheme.barceloneta:default',
139 )
140
141 def profiles(self):
142 base_profiles = []
143 extension_profiles = []
144
145 # profiles available for install/uninstall, but hidden at the time
146 # the Plone site is created
147 not_installable = [
148 'Products.CMFPlacefulWorkflow:CMFPlacefulWorkflow',
149 ]
150 utils = getAllUtilitiesRegisteredFor(INonInstallable)
151 for util in utils:
152 not_installable.extend(util.getNonInstallableProfiles())
153
154 for info in profile_registry.listProfileInfo():
155 if info.get('type') == EXTENSION and \
156 info.get('for') in (IPloneSiteRoot, None):
157 profile_id = info.get('id')
158 if profile_id not in not_installable:
159 if profile_id in self.default_extension_profiles:
160 info['selected'] = 'selected'
161 extension_profiles.append(info)
162
163 def _key(v):
164 # Make sure implicitly selected items come first
165 selected = v.get('selected') and 'automatic' or 'manual'
166 return '{}-{}'.format(selected, v.get('title', ''))
167 extension_profiles.sort(key=_key)
168
169 for info in profile_registry.listProfileInfo():
170 if info.get('type') == BASE and \
171 info.get('for') in (IPloneSiteRoot, None):
172 base_profiles.append(info)
173
174 return dict(
175 base=tuple(base_profiles),
176 default=_DEFAULT_PROFILE,
177 extensions=tuple(extension_profiles),
178 )
179
180 def browser_language(self):
181 language = 'en'
182 pl = IUserPreferredLanguages(self.request)
183 if pl is not None:
184 languages = pl.getPreferredLanguages()
185 for httplang in languages:
186 parts = (httplang.split('-') + [None, None])[:3]
187 if parts[0] == parts[1]:
188 # Avoid creating a country code for simple languages codes
189 parts = [parts[0], None, None]
190 try:
191 locale = locales.getLocale(*parts)
192 language = locale.getLocaleID().replace('_', '-').lower()
193 break
194 except LoadLocaleError:
195 # Just try the next combination
196 pass
197 return language
198
199 def grouped_languages(self, default='en'):
200 util = queryUtility(IContentLanguageAvailability)
201 available = util.getLanguages(combined=True)
202 languages = dict(util.getLanguageListing())
203
204 # Group country specific versions by language
205 grouped = OrderedDict()
206 for langcode, data in available.items():
207 lang = langcode.split('-')[0]
208 language = languages.get(lang, lang) # Label
209
210 struct = grouped.get(lang, {'label': language, 'languages': []})
211
212 langs = struct['languages']
213 langs.append({
214 'langcode': langcode,
215 'label': data.get('native', data.get('name')),
216 })
217
218 grouped[lang] = struct
219
220 # Sort list by language, next by country
221 data = sorted(grouped.values(), key=lambda k: k['label'])
222 for item in data:
223 item['languages'] = sorted(
224 item['languages'], key=lambda k: k['label'].lower())
225 return data
226
227 def timezones(self):
228 tz_vocab = getUtility(
229 IVocabularyFactory,
230 'plone.app.vocabularies.CommonTimezones'
231 )(self.context)
232
233 grouped = OrderedDict()
234 tz_values = [it.value for it in tz_vocab]
235 for value in tz_values:
236 splitted = value.split('/')
237 group = splitted.pop(0)
238 label = '/'.join(splitted)
239
240 entries = grouped.get(group, [])
241 entries.append({'label': label or group, 'value': value})
242 grouped[group] = entries
243
244 return grouped
245
246 def __call__(self):
247 context = self.context
248 form = self.request.form
249 submitted = form.get('form.submitted', False)
250 if submitted:
251 site_id = form.get('site_id', 'Plone')
252
253 # CSRF protect. DO NOT use auto CSRF protection for adding a site
254 alsoProvides(self.request, IDisableCSRFProtection)
255
256 # check if keyring is installed on root, disable CSRF protection
257 # if it is because it is not installed until a plone site
258 # is created
259 if queryUtility(IKeyManager) is None:
260 LOGGER.info('CSRF protection disabled on initial site '
261 'creation')
262 else:
263 # we have a keymanager, check csrf protection manually now
264 checkCSRF(self.request)
265 site = addPloneSite(
266 context, site_id,
267 title=form.get('title', ''),
268 profile_id=form.get('profile_id', _DEFAULT_PROFILE),
269 extension_ids=form.get('extension_ids', ()),
270 setup_content=form.get('setup_content', False),
271 default_language=form.get('default_language', 'en'),
272 portal_timezone=form.get('portal_timezone', 'UTC')
273 )
274 self.request.response.redirect(site.absolute_url())
275 return ''
276
277 return self.index()
278
279
280 class Upgrade(BrowserView):
281
282 def upgrades(self):
283 pm = getattr(self.context, 'portal_migration')
284 return pm.listUpgrades()
285
286 def versions(self):
287 pm = getattr(self.context, 'portal_migration')
288 result = {}
289 result['instance'] = pm.getInstanceVersion()
290 result['fs'] = pm.getFileSystemVersion()
291 result['equal'] = result['instance'] == result['fs']
292 instance_version = normalize_version(result['instance'])
293 fs_version = normalize_version(result['fs'])
294 result['instance_gt'] = instance_version > fs_version
295 result['instance_lt'] = instance_version < fs_version
296 result['corelist'] = pm.coreVersions()
297 return result
298
299 def __call__(self):
300 form = self.request.form
301 submitted = form.get('form.submitted', False)
302 if submitted:
303 # CSRF protect. DO NOT use auto CSRF protection for upgrading sites
304 alsoProvides(self.request, IDisableCSRFProtection)
305
306 pm = getattr(self.context, 'portal_migration')
307 report = pm.upgrade(
308 REQUEST=self.request,
309 dry_run=form.get('dry_run', False),
310 )
311 return self.index(
312 report=report,
313 )
314
315 return self.index()
316
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Products/CMFPlone/browser/admin.py b/Products/CMFPlone/browser/admin.py
--- a/Products/CMFPlone/browser/admin.py
+++ b/Products/CMFPlone/browser/admin.py
@@ -18,6 +18,7 @@
from plone.protect.authenticator import check as checkCSRF
from plone.protect.interfaces import IDisableCSRFProtection
from urllib import parse
+from ZODB.broken import Broken
from zope.component import adapts
from zope.component import getAllUtilitiesRegisteredFor
from zope.component import getUtility
@@ -55,7 +56,10 @@
result = []
secman = getSecurityManager()
- for obj in root.values():
+ candidates = (
+ obj for obj in root.values() if not isinstance(obj, Broken)
+ )
+ for obj in candidates:
if obj.meta_type == 'Folder':
result = result + self.sites(obj)
elif IPloneSiteRoot.providedBy(obj):
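A note on the shape of the fix: the filtering happens lazily in a generator, before any attribute access on the objects, so broken placeholders are never touched. To confirm the behaviour in the reporter's pdb session, something like the following would do (illustrative, reusing the same `root`):

```python
from ZODB.broken import Broken

candidates = [obj for obj in root.values() if not isinstance(obj, Broken)]
# The persistent-broken MountedObject from the listing is excluded; the
# healthy tools and the PloneSite remain.
print(candidates)
```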
| {"golden_diff": "diff --git a/Products/CMFPlone/browser/admin.py b/Products/CMFPlone/browser/admin.py\n--- a/Products/CMFPlone/browser/admin.py\n+++ b/Products/CMFPlone/browser/admin.py\n@@ -18,6 +18,7 @@\n from plone.protect.authenticator import check as checkCSRF\n from plone.protect.interfaces import IDisableCSRFProtection\n from urllib import parse\n+from ZODB.broken import Broken\n from zope.component import adapts\n from zope.component import getAllUtilitiesRegisteredFor\n from zope.component import getUtility\n@@ -55,7 +56,10 @@\n \n result = []\n secman = getSecurityManager()\n- for obj in root.values():\n+ candidates = (\n+ obj for obj in root.values() if not isinstance(obj, Broken)\n+ )\n+ for obj in candidates:\n if obj.meta_type == 'Folder':\n result = result + self.sites(obj)\n elif IPloneSiteRoot.providedBy(obj):\n", "issue": "I cannot migrate a Plone 5.2 ZoDB to 6\nI took a Plone 5.2.3 Database with some history and I tried to migrate it to Plone 6 (buildout coredev, dx site root plip cfg).\r\n\r\nWhen I browse to the site root http://localhost:8080, I have this:\r\n```\r\n...\r\nAttributeError: 'MountedObject' object has no attribute 'getId'\r\n\r\n - Expression: \"view/sites\"\r\n - Filename: ... ne/Products/CMFPlone/browser/templates/plone-overview.pt\r\n ...\r\n```\r\n\r\nThe reason is that we have the `temp_folder` which is already meant to be removed in Plone 6 but unluckily this is broken and so the meta_type attribute can be found:\r\n```\r\n((Pdb++)) pp root.values() \r\n[<BrowserIdManager at /browser_id_manager>,\r\n <SessionDataManager at /session_data_manager>,\r\n <SiteErrorLog at /error_log>,\r\n <persistent broken Products.ZODBMountPoint.MountedObject.MountedObject instance b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\t'>,\r\n <Products.SiteAccess.VirtualHostMonster.VirtualHostMonster object at 0x7f304b3984a0 oid 0xa in <Connection at 7f3051c6a490>>,\r\n <ZopePageTemplate at /index_html>,\r\n <PloneSite at /Plone>,\r\n <PluggableAuthService at /acl_users>]\r\n```\r\n\n", "before_files": [{"content": "from AccessControl import getSecurityManager\nfrom AccessControl.Permissions import view as View\nfrom OFS.interfaces import IApplication\nfrom Products.CMFCore.permissions import ManagePortal\nfrom Products.CMFPlone.factory import _DEFAULT_PROFILE\nfrom Products.CMFPlone.factory import addPloneSite\nfrom Products.CMFPlone.interfaces import INonInstallable\nfrom Products.CMFPlone.interfaces import IPloneSiteRoot\nfrom Products.CMFPlone.utils import get_installer\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom Products.GenericSetup import BASE, EXTENSION\nfrom Products.GenericSetup import profile_registry\nfrom Products.GenericSetup.upgrade import normalize_version\nfrom ZPublisher.BaseRequest import DefaultPublishTraverse\nfrom collections import OrderedDict\nfrom plone.i18n.locales.interfaces import IContentLanguageAvailability\nfrom plone.keyring.interfaces import IKeyManager\nfrom plone.protect.authenticator import check as checkCSRF\nfrom plone.protect.interfaces import IDisableCSRFProtection\nfrom urllib import parse\nfrom zope.component import adapts\nfrom zope.component import getAllUtilitiesRegisteredFor\nfrom zope.component import getUtility\nfrom zope.component import queryMultiAdapter\nfrom zope.component import queryUtility\nfrom zope.i18n.interfaces import IUserPreferredLanguages\nfrom zope.i18n.locales import locales, LoadLocaleError\nfrom zope.interface import Interface\nfrom zope.interface import alsoProvides\nfrom 
zope.publisher.browser import BrowserView\nfrom zope.publisher.interfaces import IRequest\nfrom zope.schema.interfaces import IVocabularyFactory\n\nimport logging\nLOGGER = logging.getLogger('Products.CMFPlone')\n\n\nclass AppTraverser(DefaultPublishTraverse):\n adapts(IApplication, IRequest)\n\n def publishTraverse(self, request, name):\n if name == 'index_html':\n view = queryMultiAdapter(\n (self.context, request), Interface, 'plone-overview')\n if view is not None:\n return view\n return DefaultPublishTraverse.publishTraverse(self, request, name)\n\n\nclass Overview(BrowserView):\n\n def sites(self, root=None):\n if root is None:\n root = self.context\n\n result = []\n secman = getSecurityManager()\n for obj in root.values():\n if obj.meta_type == 'Folder':\n result = result + self.sites(obj)\n elif IPloneSiteRoot.providedBy(obj):\n if secman.checkPermission(View, obj):\n result.append(obj)\n elif obj.getId() in getattr(root, '_mount_points', {}):\n result.extend(self.sites(root=obj))\n return result\n\n def outdated(self, obj):\n mig = obj.get('portal_migration', None)\n if mig is not None:\n return mig.needUpgrading()\n return False\n\n def can_manage(self):\n secman = getSecurityManager()\n return secman.checkPermission(ManagePortal, self.context)\n\n def upgrade_url(self, site, can_manage=None):\n if can_manage is None:\n can_manage = self.can_manage()\n if can_manage:\n return site.absolute_url() + '/@@plone-upgrade'\n else:\n return self.context.absolute_url() + '/@@plone-root-login'\n\n\nclass RootLoginRedirect(BrowserView):\n \"\"\" @@plone-root-login\n\n This view of the Zope root forces authentication via the root\n acl_users and then redirects elsewhere.\n \"\"\"\n\n def __call__(self, came_from=None):\n if came_from is not None:\n # see if this is a relative url or an absolute\n if len(parse.urlparse(came_from)[1]) == 0:\n # No host specified, so url is relative. 
Get an absolute url.\n # Note: '\\\\domain.org' is not recognised as host,\n # which is good.\n came_from = parse.urljoin(\n self.context.absolute_url() + '/', came_from,\n )\n elif not came_from.startswith(self.context.absolute_url()):\n # Note: we cannot use portal_url.isURLInPortal here, because we\n # are not in a Plone portal, but in the Zope root.\n came_from = None\n if came_from is None:\n came_from = self.context.absolute_url()\n self.request.response.redirect(came_from)\n\n\nclass RootLogout(BrowserView):\n \"\"\" @@plone-root-logout \"\"\"\n\n logout = ViewPageTemplateFile('templates/plone-admin-logged-out.pt')\n\n def __call__(self):\n response = self.request.response\n realm = response.realm\n response.setStatus(401)\n response.setHeader('WWW-Authenticate', 'basic realm=\"%s\"' % realm, 1)\n response.setBody(self.logout())\n return\n\n\nclass FrontPage(BrowserView):\n\n index = ViewPageTemplateFile('templates/plone-frontpage.pt')\n\n\nclass AddPloneSite(BrowserView):\n\n # Profiles that are installed by default,\n # but can be removed later.\n default_extension_profiles = (\n 'plone.app.caching:default',\n 'plonetheme.barceloneta:default',\n )\n\n def profiles(self):\n base_profiles = []\n extension_profiles = []\n\n # profiles available for install/uninstall, but hidden at the time\n # the Plone site is created\n not_installable = [\n 'Products.CMFPlacefulWorkflow:CMFPlacefulWorkflow',\n ]\n utils = getAllUtilitiesRegisteredFor(INonInstallable)\n for util in utils:\n not_installable.extend(util.getNonInstallableProfiles())\n\n for info in profile_registry.listProfileInfo():\n if info.get('type') == EXTENSION and \\\n info.get('for') in (IPloneSiteRoot, None):\n profile_id = info.get('id')\n if profile_id not in not_installable:\n if profile_id in self.default_extension_profiles:\n info['selected'] = 'selected'\n extension_profiles.append(info)\n\n def _key(v):\n # Make sure implicitly selected items come first\n selected = v.get('selected') and 'automatic' or 'manual'\n return '{}-{}'.format(selected, v.get('title', ''))\n extension_profiles.sort(key=_key)\n\n for info in profile_registry.listProfileInfo():\n if info.get('type') == BASE and \\\n info.get('for') in (IPloneSiteRoot, None):\n base_profiles.append(info)\n\n return dict(\n base=tuple(base_profiles),\n default=_DEFAULT_PROFILE,\n extensions=tuple(extension_profiles),\n )\n\n def browser_language(self):\n language = 'en'\n pl = IUserPreferredLanguages(self.request)\n if pl is not None:\n languages = pl.getPreferredLanguages()\n for httplang in languages:\n parts = (httplang.split('-') + [None, None])[:3]\n if parts[0] == parts[1]:\n # Avoid creating a country code for simple languages codes\n parts = [parts[0], None, None]\n try:\n locale = locales.getLocale(*parts)\n language = locale.getLocaleID().replace('_', '-').lower()\n break\n except LoadLocaleError:\n # Just try the next combination\n pass\n return language\n\n def grouped_languages(self, default='en'):\n util = queryUtility(IContentLanguageAvailability)\n available = util.getLanguages(combined=True)\n languages = dict(util.getLanguageListing())\n\n # Group country specific versions by language\n grouped = OrderedDict()\n for langcode, data in available.items():\n lang = langcode.split('-')[0]\n language = languages.get(lang, lang) # Label\n\n struct = grouped.get(lang, {'label': language, 'languages': []})\n\n langs = struct['languages']\n langs.append({\n 'langcode': langcode,\n 'label': data.get('native', data.get('name')),\n })\n\n grouped[lang] = 
struct\n\n # Sort list by language, next by country\n data = sorted(grouped.values(), key=lambda k: k['label'])\n for item in data:\n item['languages'] = sorted(\n item['languages'], key=lambda k: k['label'].lower())\n return data\n\n def timezones(self):\n tz_vocab = getUtility(\n IVocabularyFactory,\n 'plone.app.vocabularies.CommonTimezones'\n )(self.context)\n\n grouped = OrderedDict()\n tz_values = [it.value for it in tz_vocab]\n for value in tz_values:\n splitted = value.split('/')\n group = splitted.pop(0)\n label = '/'.join(splitted)\n\n entries = grouped.get(group, [])\n entries.append({'label': label or group, 'value': value})\n grouped[group] = entries\n\n return grouped\n\n def __call__(self):\n context = self.context\n form = self.request.form\n submitted = form.get('form.submitted', False)\n if submitted:\n site_id = form.get('site_id', 'Plone')\n\n # CSRF protect. DO NOT use auto CSRF protection for adding a site\n alsoProvides(self.request, IDisableCSRFProtection)\n\n # check if keyring is installed on root, disable CSRF protection\n # if it is because it is not installed until a plone site\n # is created\n if queryUtility(IKeyManager) is None:\n LOGGER.info('CSRF protection disabled on initial site '\n 'creation')\n else:\n # we have a keymanager, check csrf protection manually now\n checkCSRF(self.request)\n site = addPloneSite(\n context, site_id,\n title=form.get('title', ''),\n profile_id=form.get('profile_id', _DEFAULT_PROFILE),\n extension_ids=form.get('extension_ids', ()),\n setup_content=form.get('setup_content', False),\n default_language=form.get('default_language', 'en'),\n portal_timezone=form.get('portal_timezone', 'UTC')\n )\n self.request.response.redirect(site.absolute_url())\n return ''\n\n return self.index()\n\n\nclass Upgrade(BrowserView):\n\n def upgrades(self):\n pm = getattr(self.context, 'portal_migration')\n return pm.listUpgrades()\n\n def versions(self):\n pm = getattr(self.context, 'portal_migration')\n result = {}\n result['instance'] = pm.getInstanceVersion()\n result['fs'] = pm.getFileSystemVersion()\n result['equal'] = result['instance'] == result['fs']\n instance_version = normalize_version(result['instance'])\n fs_version = normalize_version(result['fs'])\n result['instance_gt'] = instance_version > fs_version\n result['instance_lt'] = instance_version < fs_version\n result['corelist'] = pm.coreVersions()\n return result\n\n def __call__(self):\n form = self.request.form\n submitted = form.get('form.submitted', False)\n if submitted:\n # CSRF protect. 
DO NOT use auto CSRF protection for upgrading sites\n alsoProvides(self.request, IDisableCSRFProtection)\n\n pm = getattr(self.context, 'portal_migration')\n report = pm.upgrade(\n REQUEST=self.request,\n dry_run=form.get('dry_run', False),\n )\n return self.index(\n report=report,\n )\n\n return self.index()\n", "path": "Products/CMFPlone/browser/admin.py"}], "after_files": [{"content": "from AccessControl import getSecurityManager\nfrom AccessControl.Permissions import view as View\nfrom OFS.interfaces import IApplication\nfrom Products.CMFCore.permissions import ManagePortal\nfrom Products.CMFPlone.factory import _DEFAULT_PROFILE\nfrom Products.CMFPlone.factory import addPloneSite\nfrom Products.CMFPlone.interfaces import INonInstallable\nfrom Products.CMFPlone.interfaces import IPloneSiteRoot\nfrom Products.CMFPlone.utils import get_installer\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom Products.GenericSetup import BASE, EXTENSION\nfrom Products.GenericSetup import profile_registry\nfrom Products.GenericSetup.upgrade import normalize_version\nfrom ZPublisher.BaseRequest import DefaultPublishTraverse\nfrom collections import OrderedDict\nfrom plone.i18n.locales.interfaces import IContentLanguageAvailability\nfrom plone.keyring.interfaces import IKeyManager\nfrom plone.protect.authenticator import check as checkCSRF\nfrom plone.protect.interfaces import IDisableCSRFProtection\nfrom urllib import parse\nfrom ZODB.broken import Broken\nfrom zope.component import adapts\nfrom zope.component import getAllUtilitiesRegisteredFor\nfrom zope.component import getUtility\nfrom zope.component import queryMultiAdapter\nfrom zope.component import queryUtility\nfrom zope.i18n.interfaces import IUserPreferredLanguages\nfrom zope.i18n.locales import locales, LoadLocaleError\nfrom zope.interface import Interface\nfrom zope.interface import alsoProvides\nfrom zope.publisher.browser import BrowserView\nfrom zope.publisher.interfaces import IRequest\nfrom zope.schema.interfaces import IVocabularyFactory\n\nimport logging\nLOGGER = logging.getLogger('Products.CMFPlone')\n\n\nclass AppTraverser(DefaultPublishTraverse):\n adapts(IApplication, IRequest)\n\n def publishTraverse(self, request, name):\n if name == 'index_html':\n view = queryMultiAdapter(\n (self.context, request), Interface, 'plone-overview')\n if view is not None:\n return view\n return DefaultPublishTraverse.publishTraverse(self, request, name)\n\n\nclass Overview(BrowserView):\n\n def sites(self, root=None):\n if root is None:\n root = self.context\n\n result = []\n secman = getSecurityManager()\n candidates = (\n obj for obj in root.values() if not isinstance(obj, Broken)\n )\n for obj in candidates:\n if obj.meta_type == 'Folder':\n result = result + self.sites(obj)\n elif IPloneSiteRoot.providedBy(obj):\n if secman.checkPermission(View, obj):\n result.append(obj)\n elif obj.getId() in getattr(root, '_mount_points', {}):\n result.extend(self.sites(root=obj))\n return result\n\n def outdated(self, obj):\n mig = obj.get('portal_migration', None)\n if mig is not None:\n return mig.needUpgrading()\n return False\n\n def can_manage(self):\n secman = getSecurityManager()\n return secman.checkPermission(ManagePortal, self.context)\n\n def upgrade_url(self, site, can_manage=None):\n if can_manage is None:\n can_manage = self.can_manage()\n if can_manage:\n return site.absolute_url() + '/@@plone-upgrade'\n else:\n return self.context.absolute_url() + '/@@plone-root-login'\n\n\nclass RootLoginRedirect(BrowserView):\n 
\"\"\" @@plone-root-login\n\n This view of the Zope root forces authentication via the root\n acl_users and then redirects elsewhere.\n \"\"\"\n\n def __call__(self, came_from=None):\n if came_from is not None:\n # see if this is a relative url or an absolute\n if len(parse.urlparse(came_from)[1]) == 0:\n # No host specified, so url is relative. Get an absolute url.\n # Note: '\\\\domain.org' is not recognised as host,\n # which is good.\n came_from = parse.urljoin(\n self.context.absolute_url() + '/', came_from,\n )\n elif not came_from.startswith(self.context.absolute_url()):\n # Note: we cannot use portal_url.isURLInPortal here, because we\n # are not in a Plone portal, but in the Zope root.\n came_from = None\n if came_from is None:\n came_from = self.context.absolute_url()\n self.request.response.redirect(came_from)\n\n\nclass RootLogout(BrowserView):\n \"\"\" @@plone-root-logout \"\"\"\n\n logout = ViewPageTemplateFile('templates/plone-admin-logged-out.pt')\n\n def __call__(self):\n response = self.request.response\n realm = response.realm\n response.setStatus(401)\n response.setHeader('WWW-Authenticate', 'basic realm=\"%s\"' % realm, 1)\n response.setBody(self.logout())\n return\n\n\nclass FrontPage(BrowserView):\n\n index = ViewPageTemplateFile('templates/plone-frontpage.pt')\n\n\nclass AddPloneSite(BrowserView):\n\n # Profiles that are installed by default,\n # but can be removed later.\n default_extension_profiles = (\n 'plone.app.caching:default',\n 'plonetheme.barceloneta:default',\n )\n\n def profiles(self):\n base_profiles = []\n extension_profiles = []\n\n # profiles available for install/uninstall, but hidden at the time\n # the Plone site is created\n not_installable = [\n 'Products.CMFPlacefulWorkflow:CMFPlacefulWorkflow',\n ]\n utils = getAllUtilitiesRegisteredFor(INonInstallable)\n for util in utils:\n not_installable.extend(util.getNonInstallableProfiles())\n\n for info in profile_registry.listProfileInfo():\n if info.get('type') == EXTENSION and \\\n info.get('for') in (IPloneSiteRoot, None):\n profile_id = info.get('id')\n if profile_id not in not_installable:\n if profile_id in self.default_extension_profiles:\n info['selected'] = 'selected'\n extension_profiles.append(info)\n\n def _key(v):\n # Make sure implicitly selected items come first\n selected = v.get('selected') and 'automatic' or 'manual'\n return '{}-{}'.format(selected, v.get('title', ''))\n extension_profiles.sort(key=_key)\n\n for info in profile_registry.listProfileInfo():\n if info.get('type') == BASE and \\\n info.get('for') in (IPloneSiteRoot, None):\n base_profiles.append(info)\n\n return dict(\n base=tuple(base_profiles),\n default=_DEFAULT_PROFILE,\n extensions=tuple(extension_profiles),\n )\n\n def browser_language(self):\n language = 'en'\n pl = IUserPreferredLanguages(self.request)\n if pl is not None:\n languages = pl.getPreferredLanguages()\n for httplang in languages:\n parts = (httplang.split('-') + [None, None])[:3]\n if parts[0] == parts[1]:\n # Avoid creating a country code for simple languages codes\n parts = [parts[0], None, None]\n try:\n locale = locales.getLocale(*parts)\n language = locale.getLocaleID().replace('_', '-').lower()\n break\n except LoadLocaleError:\n # Just try the next combination\n pass\n return language\n\n def grouped_languages(self, default='en'):\n util = queryUtility(IContentLanguageAvailability)\n available = util.getLanguages(combined=True)\n languages = dict(util.getLanguageListing())\n\n # Group country specific versions by language\n grouped = 
OrderedDict()\n for langcode, data in available.items():\n lang = langcode.split('-')[0]\n language = languages.get(lang, lang) # Label\n\n struct = grouped.get(lang, {'label': language, 'languages': []})\n\n langs = struct['languages']\n langs.append({\n 'langcode': langcode,\n 'label': data.get('native', data.get('name')),\n })\n\n grouped[lang] = struct\n\n # Sort list by language, next by country\n data = sorted(grouped.values(), key=lambda k: k['label'])\n for item in data:\n item['languages'] = sorted(\n item['languages'], key=lambda k: k['label'].lower())\n return data\n\n def timezones(self):\n tz_vocab = getUtility(\n IVocabularyFactory,\n 'plone.app.vocabularies.CommonTimezones'\n )(self.context)\n\n grouped = OrderedDict()\n tz_values = [it.value for it in tz_vocab]\n for value in tz_values:\n splitted = value.split('/')\n group = splitted.pop(0)\n label = '/'.join(splitted)\n\n entries = grouped.get(group, [])\n entries.append({'label': label or group, 'value': value})\n grouped[group] = entries\n\n return grouped\n\n def __call__(self):\n context = self.context\n form = self.request.form\n submitted = form.get('form.submitted', False)\n if submitted:\n site_id = form.get('site_id', 'Plone')\n\n # CSRF protect. DO NOT use auto CSRF protection for adding a site\n alsoProvides(self.request, IDisableCSRFProtection)\n\n # check if keyring is installed on root, disable CSRF protection\n # if it is because it is not installed until a plone site\n # is created\n if queryUtility(IKeyManager) is None:\n LOGGER.info('CSRF protection disabled on initial site '\n 'creation')\n else:\n # we have a keymanager, check csrf protection manually now\n checkCSRF(self.request)\n site = addPloneSite(\n context, site_id,\n title=form.get('title', ''),\n profile_id=form.get('profile_id', _DEFAULT_PROFILE),\n extension_ids=form.get('extension_ids', ()),\n setup_content=form.get('setup_content', False),\n default_language=form.get('default_language', 'en'),\n portal_timezone=form.get('portal_timezone', 'UTC')\n )\n self.request.response.redirect(site.absolute_url())\n return ''\n\n return self.index()\n\n\nclass Upgrade(BrowserView):\n\n def upgrades(self):\n pm = getattr(self.context, 'portal_migration')\n return pm.listUpgrades()\n\n def versions(self):\n pm = getattr(self.context, 'portal_migration')\n result = {}\n result['instance'] = pm.getInstanceVersion()\n result['fs'] = pm.getFileSystemVersion()\n result['equal'] = result['instance'] == result['fs']\n instance_version = normalize_version(result['instance'])\n fs_version = normalize_version(result['fs'])\n result['instance_gt'] = instance_version > fs_version\n result['instance_lt'] = instance_version < fs_version\n result['corelist'] = pm.coreVersions()\n return result\n\n def __call__(self):\n form = self.request.form\n submitted = form.get('form.submitted', False)\n if submitted:\n # CSRF protect. DO NOT use auto CSRF protection for upgrading sites\n alsoProvides(self.request, IDisableCSRFProtection)\n\n pm = getattr(self.context, 'portal_migration')\n report = pm.upgrade(\n REQUEST=self.request,\n dry_run=form.get('dry_run', False),\n )\n return self.index(\n report=report,\n )\n\n return self.index()\n", "path": "Products/CMFPlone/browser/admin.py"}]} | 3,855 | 227 |
gh_patches_debug_34020 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1888 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Path is not mounted correctly when running Docker hooks from Docker
**Situation**:
- In our CI we want to run `pre-commit` inside Docker.
- Some of our hooks use the `docker_image` language
**Problem**
The problem is mostly this line: https://github.com/pre-commit/pre-commit/blob/528c7afd18dafa6e47ce73add2c8e1550d105674/pre_commit/languages/docker.py#L94
Currently `pre-commit` mounts the current working directory to `/src`, using that directory's path as the mount source.
However, this does not work when `pre-commit` is itself run inside a container on an already-mounted path, because bind-mount sources are resolved on the host, not inside the container.
Example:
```
/opt/my_code <- host, mounts /opt/my_code:/project
/project <- in Docker running pre-commit, pre-commit is doing mount /project:/src
/src <- (in Dockerized hook)
```
Currently pre-commit will try to mount it as `-v /project:/src:rw,Z`. Expected behaviour: mount it as `-v /opt/my_code:/src:rw,Z`.
**Possible solution**:
When I replaced `os.getcwd()` in the code above with `translate_path(os.getcwd())`, where `translate_path` is taken from https://gist.github.com/dpfoose/f96d4e4b76c2e01265619d545b77987a, it worked perfectly. It does add an extra `docker` pip dependency, though.
**See also**: https://forums.docker.com/t/mounting-a-volume-not-working-with-running-docker-in-docker/25775/2
--- END ISSUE ---
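For reference, the direction the patch shown further down takes: detect that pre-commit itself is inside a container, ask the Docker daemon about that container's mounts, and remap the working directory back to its host path. A standalone sketch of the idea follows; the helper name is illustrative, and it assumes the `docker` CLI is reachable from inside the container.

```python
import json
import os
import socket
import subprocess


def host_path(path: str) -> str:
    """Best-effort translation of a container path to its host path."""
    try:
        with open("/proc/1/cgroup", "rb") as f:
            in_docker = b"docker" in f.read()
    except FileNotFoundError:
        in_docker = False
    if not in_docker:
        return path

    # A container's hostname defaults to its short container id.
    out = subprocess.check_output(["docker", "inspect", socket.gethostname()])
    (container,) = json.loads(out)
    for mount in container["Mounts"]:
        src, dst = mount["Source"], mount["Destination"]
        if os.path.commonpath((path, dst)) == dst:
            return path.replace(dst, src, 1)
    return path  # not on a mounted path; nothing better we can do


# e.g. host_path("/project") -> "/opt/my_code" in the layout from the example above
```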
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/docker.py`
Content:
```
1 import hashlib
2 import os
3 from typing import Sequence
4 from typing import Tuple
5
6 import pre_commit.constants as C
7 from pre_commit.hook import Hook
8 from pre_commit.languages import helpers
9 from pre_commit.prefix import Prefix
10 from pre_commit.util import clean_path_on_failure
11
12 ENVIRONMENT_DIR = 'docker'
13 PRE_COMMIT_LABEL = 'PRE_COMMIT'
14 get_default_version = helpers.basic_get_default_version
15 healthy = helpers.basic_healthy
16
17
18 def md5(s: str) -> str: # pragma: win32 no cover
19 return hashlib.md5(s.encode()).hexdigest()
20
21
22 def docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover
23 md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()
24 return f'pre-commit-{md5sum}'
25
26
27 def build_docker_image(
28 prefix: Prefix,
29 *,
30 pull: bool,
31 ) -> None: # pragma: win32 no cover
32 cmd: Tuple[str, ...] = (
33 'docker', 'build',
34 '--tag', docker_tag(prefix),
35 '--label', PRE_COMMIT_LABEL,
36 )
37 if pull:
38 cmd += ('--pull',)
39 # This must come last for old versions of docker. See #477
40 cmd += ('.',)
41 helpers.run_setup_cmd(prefix, cmd)
42
43
44 def install_environment(
45 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
46 ) -> None: # pragma: win32 no cover
47 helpers.assert_version_default('docker', version)
48 helpers.assert_no_additional_deps('docker', additional_dependencies)
49
50 directory = prefix.path(
51 helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),
52 )
53
54 # Docker doesn't really have relevant disk environment, but pre-commit
55 # still needs to cleanup its state files on failure
56 with clean_path_on_failure(directory):
57 build_docker_image(prefix, pull=True)
58 os.mkdir(directory)
59
60
61 def get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover
62 try:
63 return ('-u', f'{os.getuid()}:{os.getgid()}')
64 except AttributeError:
65 return ()
66
67
68 def docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover
69 return (
70 'docker', 'run',
71 '--rm',
72 *get_docker_user(),
73 # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
74 # The `Z` option tells Docker to label the content with a private
75 # unshared label. Only the current container can use a private volume.
76 '-v', f'{os.getcwd()}:/src:rw,Z',
77 '--workdir', '/src',
78 )
79
80
81 def run_hook(
82 hook: Hook,
83 file_args: Sequence[str],
84 color: bool,
85 ) -> Tuple[int, bytes]: # pragma: win32 no cover
86 # Rebuild the docker image in case it has gone missing, as many people do
87 # automated cleanup of docker images.
88 build_docker_image(hook.prefix, pull=False)
89
90 entry_exe, *cmd_rest = hook.cmd
91
92 entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))
93 cmd = (*docker_cmd(), *entry_tag, *cmd_rest)
94 return helpers.run_xargs(hook, cmd, file_args, color=color)
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -1,5 +1,7 @@
import hashlib
+import json
import os
+import socket
from typing import Sequence
from typing import Tuple
@@ -8,6 +10,7 @@
from pre_commit.languages import helpers
from pre_commit.prefix import Prefix
from pre_commit.util import clean_path_on_failure
+from pre_commit.util import cmd_output_b
ENVIRONMENT_DIR = 'docker'
PRE_COMMIT_LABEL = 'PRE_COMMIT'
@@ -15,6 +18,34 @@
healthy = helpers.basic_healthy
+def _is_in_docker() -> bool:
+ try:
+ with open('/proc/1/cgroup', 'rb') as f:
+ return b'docker' in f.read()
+ except FileNotFoundError:
+ return False
+
+
+def _get_docker_path(path: str) -> str:
+ if not _is_in_docker():
+ return path
+ hostname = socket.gethostname()
+
+ _, out, _ = cmd_output_b('docker', 'inspect', hostname)
+
+ container, = json.loads(out)
+ for mount in container['Mounts']:
+ src_path = mount['Source']
+ to_path = mount['Destination']
+ if os.path.commonpath((path, to_path)) == to_path:
+ # So there is something in common,
+ # and we can proceed remapping it
+ return path.replace(to_path, src_path)
+ # we're in Docker, but the path is not mounted, cannot really do anything,
+ # so fall back to original path
+ return path
+
+
def md5(s: str) -> str: # pragma: win32 no cover
return hashlib.md5(s.encode()).hexdigest()
@@ -73,7 +104,7 @@
# https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
# The `Z` option tells Docker to label the content with a private
# unshared label. Only the current container can use a private volume.
- '-v', f'{os.getcwd()}:/src:rw,Z',
+ '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',
'--workdir', '/src',
)
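To make the remapping concrete, here is the path arithmetic from the hunk above applied to the layout in the issue. This is a throwaway snippet with the mount entry hard-coded rather than read from `docker inspect`.

```python
import os

mounts = [{"Source": "/opt/my_code", "Destination": "/project"}]
path = os.path.join("/project", "some", "file.py")

for mount in mounts:
    src, dst = mount["Source"], mount["Destination"]
    if os.path.commonpath((path, dst)) == dst:
        path = path.replace(dst, src)

print(path)  # /opt/my_code/some/file.py
```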
| {"golden_diff": "diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py\n--- a/pre_commit/languages/docker.py\n+++ b/pre_commit/languages/docker.py\n@@ -1,5 +1,7 @@\n import hashlib\n+import json\n import os\n+import socket\n from typing import Sequence\n from typing import Tuple\n \n@@ -8,6 +10,7 @@\n from pre_commit.languages import helpers\n from pre_commit.prefix import Prefix\n from pre_commit.util import clean_path_on_failure\n+from pre_commit.util import cmd_output_b\n \n ENVIRONMENT_DIR = 'docker'\n PRE_COMMIT_LABEL = 'PRE_COMMIT'\n@@ -15,6 +18,34 @@\n healthy = helpers.basic_healthy\n \n \n+def _is_in_docker() -> bool:\n+ try:\n+ with open('/proc/1/cgroup', 'rb') as f:\n+ return b'docker' in f.read()\n+ except FileNotFoundError:\n+ return False\n+\n+\n+def _get_docker_path(path: str) -> str:\n+ if not _is_in_docker():\n+ return path\n+ hostname = socket.gethostname()\n+\n+ _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n+\n+ container, = json.loads(out)\n+ for mount in container['Mounts']:\n+ src_path = mount['Source']\n+ to_path = mount['Destination']\n+ if os.path.commonpath((path, to_path)) == to_path:\n+ # So there is something in common,\n+ # and we can proceed remapping it\n+ return path.replace(to_path, src_path)\n+ # we're in Docker, but the path is not mounted, cannot really do anything,\n+ # so fall back to original path\n+ return path\n+\n+\n def md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n \n@@ -73,7 +104,7 @@\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n- '-v', f'{os.getcwd()}:/src:rw,Z',\n+ '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',\n '--workdir', '/src',\n )\n", "issue": "Path is not mounted correctly when running Docker hooks from Docker\n**Situation**:\r\n\r\n- In our CI we want to run `pre-commit` inside Docker.\r\n- Some of our hooks are `docker_image`\r\n\r\n**Problem**\r\nThis line mostly https://github.com/pre-commit/pre-commit/blob/528c7afd18dafa6e47ce73add2c8e1550d105674/pre_commit/languages/docker.py#L94\r\n\r\nCurrently `pre-commit` mounts the current directory to `/src` and uses current directory name as mount base.\r\nHowever this does not work when `pre-commit` is run inside the container on some mounted path already, because mount points are relative to the host, not to the container.\r\n\r\n Example: \r\n```\r\n/opt/my_code <- host, mounts /opt/my_code:/project\r\n/project <- in Docker running pre-commit, pre-commit is doing mount /project:/src\r\n/src <- (in Dockerized hook)\r\n```\r\n\r\nCurrently pre-commit will try to mount it as `-v /project:/src,rw,Z`. Expected - to mount it as `-v /opt/my_code:/src`\r\n\r\n**Possible solution**:\r\n\r\nWhen I replaced `os.getcwd()` from the code above to `translate_path(os.getcwd())` where `translate_path` is taken from https://gist.github.com/dpfoose/f96d4e4b76c2e01265619d545b77987a, it worked perfectly. 
It does add extra `docker` pip-dependency though.\r\n\r\n**See also**: https://forums.docker.com/t/mounting-a-volume-not-working-with-running-docker-in-docker/25775/2\n", "before_files": [{"content": "import hashlib\nimport os\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n\n\ndef docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return f'pre-commit-{md5sum}'\n\n\ndef build_docker_image(\n prefix: Prefix,\n *,\n pull: bool,\n) -> None: # pragma: win32 no cover\n cmd: Tuple[str, ...] = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None: # pragma: win32 no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup its state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover\n try:\n return ('-u', f'{os.getuid()}:{os.getgid()}')\n except AttributeError:\n return ()\n\n\ndef docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover\n return (\n 'docker', 'run',\n '--rm',\n *get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. 
Only the current container can use a private volume.\n '-v', f'{os.getcwd()}:/src:rw,Z',\n '--workdir', '/src',\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]: # pragma: win32 no cover\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n entry_exe, *cmd_rest = hook.cmd\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = (*docker_cmd(), *entry_tag, *cmd_rest)\n return helpers.run_xargs(hook, cmd, file_args, color=color)\n", "path": "pre_commit/languages/docker.py"}], "after_files": [{"content": "import hashlib\nimport json\nimport os\nimport socket\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef _is_in_docker() -> bool:\n try:\n with open('/proc/1/cgroup', 'rb') as f:\n return b'docker' in f.read()\n except FileNotFoundError:\n return False\n\n\ndef _get_docker_path(path: str) -> str:\n if not _is_in_docker():\n return path\n hostname = socket.gethostname()\n\n _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n\n container, = json.loads(out)\n for mount in container['Mounts']:\n src_path = mount['Source']\n to_path = mount['Destination']\n if os.path.commonpath((path, to_path)) == to_path:\n # So there is something in common,\n # and we can proceed remapping it\n return path.replace(to_path, src_path)\n # we're in Docker, but the path is not mounted, cannot really do anything,\n # so fall back to original path\n return path\n\n\ndef md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n\n\ndef docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return f'pre-commit-{md5sum}'\n\n\ndef build_docker_image(\n prefix: Prefix,\n *,\n pull: bool,\n) -> None: # pragma: win32 no cover\n cmd: Tuple[str, ...] = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. 
See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None: # pragma: win32 no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup its state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover\n try:\n return ('-u', f'{os.getuid()}:{os.getgid()}')\n except AttributeError:\n return ()\n\n\ndef docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover\n return (\n 'docker', 'run',\n '--rm',\n *get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',\n '--workdir', '/src',\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]: # pragma: win32 no cover\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n entry_exe, *cmd_rest = hook.cmd\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = (*docker_cmd(), *entry_tag, *cmd_rest)\n return helpers.run_xargs(hook, cmd, file_args, color=color)\n", "path": "pre_commit/languages/docker.py"}]} | 1,540 | 533 |
gh_patches_debug_41877 | rasdani/github-patches | git_diff | litestar-org__litestar-1794 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks that the given directories exist on the local filesystem.
That is not generally the case, especially with any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
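The core of the request is that a "directory" may only exist inside a package, not on disk. A small illustration with the stdlib `importlib.resources` API (the `importlib_resources` backport behaves the same); `my_package` is a made-up package name used purely for the example.

```python
from importlib.resources import files

# files() returns a Traversable: it walks package data even when the package
# is installed as a zip, where no real on-disk DirectoryPath exists to validate.
static_root = files("my_package") / "static"
print(static_root.is_dir())                      # works inside a zip too
print((static_root / "index.html").read_text())  # read a bundled file
```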
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/contrib/sqlalchemy/base.py`
Content:
```
1 """Application ORM configuration."""
2 from __future__ import annotations
3
4 import re
5 from datetime import date, datetime
6 from typing import TYPE_CHECKING, Any, ClassVar, Protocol, TypeVar, runtime_checkable
7 from uuid import UUID, uuid4
8
9 from pydantic import AnyHttpUrl, AnyUrl, EmailStr
10 from sqlalchemy import Date, DateTime, MetaData, Sequence, String
11 from sqlalchemy.event import listens_for
12 from sqlalchemy.orm import (
13 DeclarativeBase,
14 Mapped,
15 Session,
16 declared_attr,
17 mapped_column,
18 orm_insert_sentinel,
19 registry,
20 )
21
22 from .types import GUID, JSON, BigIntIdentity
23
24 if TYPE_CHECKING:
25 from sqlalchemy.sql import FromClause
26
27 __all__ = (
28 "AuditColumns",
29 "BigIntAuditBase",
30 "BigIntBase",
31 "BigIntPrimaryKey",
32 "CommonTableAttributes",
33 "create_registry",
34 "ModelProtocol",
35 "touch_updated_timestamp",
36 "UUIDAuditBase",
37 "UUIDBase",
38 "UUIDPrimaryKey",
39 )
40
41
42 UUIDBaseT = TypeVar("UUIDBaseT", bound="UUIDBase")
43 BigIntBaseT = TypeVar("BigIntBaseT", bound="BigIntBase")
44
45 convention = {
46 "ix": "ix_%(column_0_label)s",
47 "uq": "uq_%(table_name)s_%(column_0_name)s",
48 "ck": "ck_%(table_name)s_%(constraint_name)s",
49 "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
50 "pk": "pk_%(table_name)s",
51 }
52 """Templates for automated constraint name generation."""
53
54
55 @listens_for(Session, "before_flush")
56 def touch_updated_timestamp(session: Session, *_: Any) -> None:
57 """Set timestamp on update.
58
59 Called from SQLAlchemy's
60 :meth:`before_flush <sqlalchemy.orm.SessionEvents.before_flush>` event to bump the ``updated``
61 timestamp on modified instances.
62
63 Args:
64 session: The sync :class:`Session <sqlalchemy.orm.Session>` instance that underlies the async
65 session.
66 """
67 for instance in session.dirty:
68 if hasattr(instance, "updated"):
69 instance.updated = datetime.now() # noqa: DTZ005
70
71
72 @runtime_checkable
73 class ModelProtocol(Protocol):
74 """The base SQLAlchemy model protocol."""
75
76 __table__: FromClause
77 __name__: ClassVar[str]
78
79 def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:
80 """Convert model to dictionary.
81
82 Returns:
83 dict[str, Any]: A dict representation of the model
84 """
85 ...
86
87
88 class UUIDPrimaryKey:
89 """UUID Primary Key Field Mixin."""
90
91 id: Mapped[UUID] = mapped_column(default=uuid4, primary_key=True) # pyright: ignore
92 """UUID Primary key column."""
93
94 @declared_attr
95 def _sentinel(cls) -> Mapped[int]:
96 return orm_insert_sentinel()
97
98
99 class BigIntPrimaryKey:
100 """BigInt Primary Key Field Mixin."""
101
102 @declared_attr
103 def id(cls) -> Mapped[int]:
104 """BigInt Primary key column."""
105 return mapped_column(
106 BigIntIdentity,
107 Sequence(f"{cls.__tablename__}_id_seq", optional=False), # type: ignore[attr-defined] # pyright: ignore
108 primary_key=True,
109 )
110
111
112 class AuditColumns:
113 """Created/Updated At Fields Mixin."""
114
115 created: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore
116 """Date/time of instance creation."""
117 updated: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore
118 """Date/time of instance last update."""
119
120
121 class CommonTableAttributes:
122 """Common attributes for SQLALchemy tables."""
123
124 __name__: ClassVar[str]
125 __table__: FromClause
126
127 # noinspection PyMethodParameters
128 @declared_attr.directive
129 def __tablename__(cls) -> str: # pylint: disable=no-self-argument
130 """Infer table name from class name."""
131 regexp = re.compile("((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))")
132 return regexp.sub(r"_\1", cls.__name__).lower()
133
134 def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:
135 """Convert model to dictionary.
136
137 Returns:
138 dict[str, Any]: A dict representation of the model
139 """
140 exclude = exclude.union("_sentinel") if exclude else {"_sentinel"}
141 return {field.name: getattr(self, field.name) for field in self.__table__.columns if field.name not in exclude}
142
143
144 def create_registry() -> registry:
145 """Create a new SQLAlchemy registry."""
146 meta = MetaData(naming_convention=convention)
147 return registry(
148 metadata=meta,
149 type_annotation_map={
150 UUID: GUID,
151 EmailStr: String,
152 AnyUrl: String,
153 AnyHttpUrl: String,
154 dict: JSON,
155 datetime: DateTime,
156 date: Date,
157 },
158 )
159
160
161 orm_registry = create_registry()
162
163
164 class UUIDBase(UUIDPrimaryKey, CommonTableAttributes, DeclarativeBase):
165 """Base for all SQLAlchemy declarative models with UUID primary keys."""
166
167 registry = orm_registry
168
169
170 class UUIDAuditBase(CommonTableAttributes, UUIDPrimaryKey, AuditColumns, DeclarativeBase):
171 """Base for declarative models with UUID primary keys and audit columns."""
172
173 registry = orm_registry
174
175
176 class BigIntBase(BigIntPrimaryKey, CommonTableAttributes, DeclarativeBase):
177 """Base for all SQLAlchemy declarative models with BigInt primary keys."""
178
179 registry = orm_registry
180
181
182 class BigIntAuditBase(CommonTableAttributes, BigIntPrimaryKey, AuditColumns, DeclarativeBase):
183 """Base for declarative models with BigInt primary keys and audit columns."""
184
185 registry = orm_registry
186
```
Path: `litestar/contrib/sqlalchemy/types.py`
Content:
```
1 from __future__ import annotations
2
3 import uuid
4 from base64 import b64decode
5 from typing import TYPE_CHECKING, Any, cast
6
7 from sqlalchemy import text, util
8 from sqlalchemy.dialects.oracle import BLOB as ORA_BLOB
9 from sqlalchemy.dialects.oracle import RAW as ORA_RAW
10 from sqlalchemy.dialects.postgresql import JSONB as PG_JSONB
11 from sqlalchemy.dialects.postgresql import UUID as PG_UUID
12 from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator
13 from sqlalchemy.types import JSON as _JSON
14
15 if TYPE_CHECKING:
16 from sqlalchemy.engine import Dialect
17
18 BigIntIdentity = BigInteger().with_variant(Integer, "sqlite")
19
20
21 class GUID(TypeDecorator):
22 """Platform-independent GUID type.
23
24 Uses PostgreSQL's UUID type, Oracle's RAW(16) type, otherwise uses
25 BINARY(16) or CHAR(32), storing as stringified hex values.
26
27 Will accept stringified UUIDs as a hexstring or an actual UUID
28
29 """
30
31 impl = BINARY(16)
32 cache_ok = True
33
34 @property
35 def python_type(self) -> type[uuid.UUID]:
36 return uuid.UUID
37
38 def __init__(self, *args: Any, binary: bool = True, **kwargs: Any) -> None:
39 self.binary = binary
40
41 def load_dialect_impl(self, dialect: Dialect) -> Any:
42 if dialect.name in {"postgresql", "duckdb"}:
43 return dialect.type_descriptor(PG_UUID())
44 if dialect.name == "oracle":
45 return dialect.type_descriptor(ORA_RAW(16))
46 if self.binary:
47 return dialect.type_descriptor(BINARY(16))
48 return dialect.type_descriptor(CHAR(32))
49
50 def process_bind_param(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> bytes | str | None:
51 if value is None:
52 return value
53 if dialect.name in {"postgresql", "duckdb"}:
54 return str(value)
55 value = self.to_uuid(value)
56 if value is None:
57 return value
58 if dialect.name in {"oracle", "spanner+spanner"}:
59 return value.bytes
60 return value.bytes if self.binary else value.hex
61
62 def process_result_value(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> uuid.UUID | None:
63 if value is None:
64 return value
65 if isinstance(value, uuid.UUID):
66 return value
67 if dialect.name == "spanner+spanner":
68 return uuid.UUID(bytes=b64decode(value))
69 if self.binary:
70 return uuid.UUID(bytes=cast("bytes", value))
71 return uuid.UUID(hex=cast("str", value))
72
73 @staticmethod
74 def to_uuid(value: Any) -> uuid.UUID | None:
75 if isinstance(value, uuid.UUID) or value is None:
76 return value
77 try:
78 value = uuid.UUID(hex=value)
79 except (TypeError, ValueError):
80 value = uuid.UUID(bytes=value)
81 return cast("uuid.UUID | None", value)
82
83
84 class JSON(TypeDecorator, SchemaType): # type: ignore
85 """Platform-independent JSON type.
86
87 Uses JSONB type for postgres, BLOB for Oracle, otherwise uses the generic JSON data type.
88
89 JSON = _JSON().with_variant(PG_JSONB, "postgresql").with_variant(ORA_BLOB, "oracle")
90
91 """
92
93 impl = _JSON
94 cache_ok = True
95
96 @property
97 def python_type(self) -> type[dict]:
98 return dict
99
100 def __init__(self, *args: Any, **kwargs: Any) -> None:
101 """Initialize JSON type"""
102 self.name = kwargs.pop("name", None)
103 self.oracle_strict = kwargs.pop("oracle_strict", True)
104
105 def load_dialect_impl(self, dialect: Dialect) -> Any:
106 if dialect.name == "postgresql":
107 return dialect.type_descriptor(PG_JSONB()) # type: ignore
108 if dialect.name == "oracle":
109 return dialect.type_descriptor(ORA_BLOB())
110 return dialect.type_descriptor(_JSON())
111
112 def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:
113 return bool(compiler.dialect.name == "oracle")
114
115 def _variant_mapping_for_set_table(self, column: Any) -> dict | None:
116 if column.type._variant_mapping:
117 variant_mapping = dict(column.type._variant_mapping)
118 variant_mapping["_default"] = column.type
119 else:
120 variant_mapping = None
121 return variant_mapping
122
123 @util.preload_module("sqlalchemy.sql.schema")
124 def _set_table(self, column: Any, table: Any) -> None:
125 schema = util.preloaded.sql_schema
126 variant_mapping = self._variant_mapping_for_set_table(column)
127 constraint_options = "(strict)" if self.oracle_strict else ""
128 sqltext = text(f"{column.name} is json {constraint_options}")
129 e = schema.CheckConstraint(
130 sqltext,
131 name=f"{column.name}_is_json",
132 _create_rule=util.portable_instancemethod( # type: ignore[no-untyped-call]
133 self._should_create_constraint,
134 {"variant_mapping": variant_mapping},
135 ),
136 _type_bound=True,
137 )
138 table.append_constraint(e)
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/contrib/sqlalchemy/base.py b/litestar/contrib/sqlalchemy/base.py
--- a/litestar/contrib/sqlalchemy/base.py
+++ b/litestar/contrib/sqlalchemy/base.py
@@ -19,7 +19,7 @@
registry,
)
-from .types import GUID, JSON, BigIntIdentity
+from .types import GUID, BigIntIdentity, JsonB
if TYPE_CHECKING:
from sqlalchemy.sql import FromClause
@@ -151,7 +151,7 @@
EmailStr: String,
AnyUrl: String,
AnyHttpUrl: String,
- dict: JSON,
+ dict: JsonB,
datetime: DateTime,
date: Date,
},
diff --git a/litestar/contrib/sqlalchemy/types.py b/litestar/contrib/sqlalchemy/types.py
--- a/litestar/contrib/sqlalchemy/types.py
+++ b/litestar/contrib/sqlalchemy/types.py
@@ -9,14 +9,14 @@
from sqlalchemy.dialects.oracle import RAW as ORA_RAW
from sqlalchemy.dialects.postgresql import JSONB as PG_JSONB
from sqlalchemy.dialects.postgresql import UUID as PG_UUID
-from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator
+from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator, TypeEngine
from sqlalchemy.types import JSON as _JSON
+from litestar.serialization import decode_json, encode_json
+
if TYPE_CHECKING:
from sqlalchemy.engine import Dialect
-BigIntIdentity = BigInteger().with_variant(Integer, "sqlite")
-
class GUID(TypeDecorator):
"""Platform-independent GUID type.
@@ -81,16 +81,14 @@
return cast("uuid.UUID | None", value)
-class JSON(TypeDecorator, SchemaType): # type: ignore
- """Platform-independent JSON type.
-
- Uses JSONB type for postgres, BLOB for Oracle, otherwise uses the generic JSON data type.
+class ORA_JSONB(TypeDecorator, SchemaType): # type: ignore # noqa: N801
+ """Oracle Binary JSON type.
- JSON = _JSON().with_variant(PG_JSONB, "postgresql").with_variant(ORA_BLOB, "oracle")
+ JsonB = _JSON().with_variant(PG_JSONB, "postgresql").with_variant(ORA_JSONB, "oracle")
"""
- impl = _JSON
+ impl = ORA_BLOB
cache_ok = True
@property
@@ -102,12 +100,21 @@
self.name = kwargs.pop("name", None)
self.oracle_strict = kwargs.pop("oracle_strict", True)
- def load_dialect_impl(self, dialect: Dialect) -> Any:
- if dialect.name == "postgresql":
- return dialect.type_descriptor(PG_JSONB()) # type: ignore
- if dialect.name == "oracle":
- return dialect.type_descriptor(ORA_BLOB())
- return dialect.type_descriptor(_JSON())
+ def coerce_compared_value(self, op: Any, value: Any) -> Any:
+ return self.impl.coerce_compared_value(op=op, value=value) # type: ignore
+
+ def load_dialect_impl(self, dialect: Dialect) -> TypeEngine[Any]:
+ return dialect.type_descriptor(ORA_BLOB())
+
+ def process_bind_param(self, value: Any, dialect: Dialect) -> Any | None:
+ if value is None:
+ return value
+ return encode_json(value)
+
+ def process_result_value(self, value: bytes | None, dialect: Dialect) -> Any | None:
+ if value is None:
+ return value
+ return decode_json(value)
def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:
return bool(compiler.dialect.name == "oracle")
@@ -136,3 +143,7 @@
_type_bound=True,
)
table.append_constraint(e)
+
+
+BigIntIdentity = BigInteger().with_variant(Integer, "sqlite")
+JsonB = _JSON().with_variant(PG_JSONB, "postgresql").with_variant(ORA_JSONB, "oracle")
| {"golden_diff": "diff --git a/litestar/contrib/sqlalchemy/base.py b/litestar/contrib/sqlalchemy/base.py\n--- a/litestar/contrib/sqlalchemy/base.py\n+++ b/litestar/contrib/sqlalchemy/base.py\n@@ -19,7 +19,7 @@\n registry,\n )\n \n-from .types import GUID, JSON, BigIntIdentity\n+from .types import GUID, BigIntIdentity, JsonB\n \n if TYPE_CHECKING:\n from sqlalchemy.sql import FromClause\n@@ -151,7 +151,7 @@\n EmailStr: String,\n AnyUrl: String,\n AnyHttpUrl: String,\n- dict: JSON,\n+ dict: JsonB,\n datetime: DateTime,\n date: Date,\n },\ndiff --git a/litestar/contrib/sqlalchemy/types.py b/litestar/contrib/sqlalchemy/types.py\n--- a/litestar/contrib/sqlalchemy/types.py\n+++ b/litestar/contrib/sqlalchemy/types.py\n@@ -9,14 +9,14 @@\n from sqlalchemy.dialects.oracle import RAW as ORA_RAW\n from sqlalchemy.dialects.postgresql import JSONB as PG_JSONB\n from sqlalchemy.dialects.postgresql import UUID as PG_UUID\n-from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator\n+from sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator, TypeEngine\n from sqlalchemy.types import JSON as _JSON\n \n+from litestar.serialization import decode_json, encode_json\n+\n if TYPE_CHECKING:\n from sqlalchemy.engine import Dialect\n \n-BigIntIdentity = BigInteger().with_variant(Integer, \"sqlite\")\n-\n \n class GUID(TypeDecorator):\n \"\"\"Platform-independent GUID type.\n@@ -81,16 +81,14 @@\n return cast(\"uuid.UUID | None\", value)\n \n \n-class JSON(TypeDecorator, SchemaType): # type: ignore\n- \"\"\"Platform-independent JSON type.\n-\n- Uses JSONB type for postgres, BLOB for Oracle, otherwise uses the generic JSON data type.\n+class ORA_JSONB(TypeDecorator, SchemaType): # type: ignore # noqa: N801\n+ \"\"\"Oracle Binary JSON type.\n \n- JSON = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_BLOB, \"oracle\")\n+ JsonB = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_JSONB, \"oracle\")\n \n \"\"\"\n \n- impl = _JSON\n+ impl = ORA_BLOB\n cache_ok = True\n \n @property\n@@ -102,12 +100,21 @@\n self.name = kwargs.pop(\"name\", None)\n self.oracle_strict = kwargs.pop(\"oracle_strict\", True)\n \n- def load_dialect_impl(self, dialect: Dialect) -> Any:\n- if dialect.name == \"postgresql\":\n- return dialect.type_descriptor(PG_JSONB()) # type: ignore\n- if dialect.name == \"oracle\":\n- return dialect.type_descriptor(ORA_BLOB())\n- return dialect.type_descriptor(_JSON())\n+ def coerce_compared_value(self, op: Any, value: Any) -> Any:\n+ return self.impl.coerce_compared_value(op=op, value=value) # type: ignore\n+\n+ def load_dialect_impl(self, dialect: Dialect) -> TypeEngine[Any]:\n+ return dialect.type_descriptor(ORA_BLOB())\n+\n+ def process_bind_param(self, value: Any, dialect: Dialect) -> Any | None:\n+ if value is None:\n+ return value\n+ return encode_json(value)\n+\n+ def process_result_value(self, value: bytes | None, dialect: Dialect) -> Any | None:\n+ if value is None:\n+ return value\n+ return decode_json(value)\n \n def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:\n return bool(compiler.dialect.name == \"oracle\")\n@@ -136,3 +143,7 @@\n _type_bound=True,\n )\n table.append_constraint(e)\n+\n+\n+BigIntIdentity = BigInteger().with_variant(Integer, \"sqlite\")\n+JsonB = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_JSONB, \"oracle\")\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using 
[importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "\"\"\"Application ORM configuration.\"\"\"\nfrom __future__ import annotations\n\nimport re\nfrom datetime import date, datetime\nfrom typing import TYPE_CHECKING, Any, ClassVar, Protocol, TypeVar, runtime_checkable\nfrom uuid import UUID, uuid4\n\nfrom pydantic import AnyHttpUrl, AnyUrl, EmailStr\nfrom sqlalchemy import Date, DateTime, MetaData, Sequence, String\nfrom sqlalchemy.event import listens_for\nfrom sqlalchemy.orm import (\n DeclarativeBase,\n Mapped,\n Session,\n declared_attr,\n mapped_column,\n orm_insert_sentinel,\n registry,\n)\n\nfrom .types import GUID, JSON, BigIntIdentity\n\nif TYPE_CHECKING:\n from sqlalchemy.sql import FromClause\n\n__all__ = (\n \"AuditColumns\",\n \"BigIntAuditBase\",\n \"BigIntBase\",\n \"BigIntPrimaryKey\",\n \"CommonTableAttributes\",\n \"create_registry\",\n \"ModelProtocol\",\n \"touch_updated_timestamp\",\n \"UUIDAuditBase\",\n \"UUIDBase\",\n \"UUIDPrimaryKey\",\n)\n\n\nUUIDBaseT = TypeVar(\"UUIDBaseT\", bound=\"UUIDBase\")\nBigIntBaseT = TypeVar(\"BigIntBaseT\", bound=\"BigIntBase\")\n\nconvention = {\n \"ix\": \"ix_%(column_0_label)s\",\n \"uq\": \"uq_%(table_name)s_%(column_0_name)s\",\n \"ck\": \"ck_%(table_name)s_%(constraint_name)s\",\n \"fk\": \"fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s\",\n \"pk\": \"pk_%(table_name)s\",\n}\n\"\"\"Templates for automated constraint name generation.\"\"\"\n\n\n@listens_for(Session, \"before_flush\")\ndef touch_updated_timestamp(session: Session, *_: Any) -> None:\n \"\"\"Set timestamp on update.\n\n Called from SQLAlchemy's\n :meth:`before_flush <sqlalchemy.orm.SessionEvents.before_flush>` event to bump the ``updated``\n timestamp on modified instances.\n\n Args:\n session: The sync :class:`Session <sqlalchemy.orm.Session>` instance that underlies the async\n session.\n \"\"\"\n for instance in session.dirty:\n if hasattr(instance, \"updated\"):\n instance.updated = datetime.now() # noqa: DTZ005\n\n\n@runtime_checkable\nclass ModelProtocol(Protocol):\n \"\"\"The base SQLAlchemy model protocol.\"\"\"\n\n __table__: FromClause\n __name__: ClassVar[str]\n\n def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:\n \"\"\"Convert model to dictionary.\n\n Returns:\n dict[str, Any]: A dict representation of the model\n \"\"\"\n ...\n\n\nclass UUIDPrimaryKey:\n \"\"\"UUID Primary Key Field Mixin.\"\"\"\n\n id: Mapped[UUID] = mapped_column(default=uuid4, primary_key=True) # pyright: ignore\n \"\"\"UUID Primary key column.\"\"\"\n\n @declared_attr\n def _sentinel(cls) -> Mapped[int]:\n return orm_insert_sentinel()\n\n\nclass BigIntPrimaryKey:\n \"\"\"BigInt Primary Key Field Mixin.\"\"\"\n\n @declared_attr\n def id(cls) -> Mapped[int]:\n \"\"\"BigInt Primary key column.\"\"\"\n return mapped_column(\n BigIntIdentity,\n Sequence(f\"{cls.__tablename__}_id_seq\", optional=False), # type: ignore[attr-defined] # pyright: ignore\n primary_key=True,\n )\n\n\nclass AuditColumns:\n \"\"\"Created/Updated At Fields Mixin.\"\"\"\n\n 
created: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore\n \"\"\"Date/time of instance creation.\"\"\"\n updated: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore\n \"\"\"Date/time of instance last update.\"\"\"\n\n\nclass CommonTableAttributes:\n \"\"\"Common attributes for SQLALchemy tables.\"\"\"\n\n __name__: ClassVar[str]\n __table__: FromClause\n\n # noinspection PyMethodParameters\n @declared_attr.directive\n def __tablename__(cls) -> str: # pylint: disable=no-self-argument\n \"\"\"Infer table name from class name.\"\"\"\n regexp = re.compile(\"((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))\")\n return regexp.sub(r\"_\\1\", cls.__name__).lower()\n\n def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:\n \"\"\"Convert model to dictionary.\n\n Returns:\n dict[str, Any]: A dict representation of the model\n \"\"\"\n exclude = exclude.union(\"_sentinel\") if exclude else {\"_sentinel\"}\n return {field.name: getattr(self, field.name) for field in self.__table__.columns if field.name not in exclude}\n\n\ndef create_registry() -> registry:\n \"\"\"Create a new SQLAlchemy registry.\"\"\"\n meta = MetaData(naming_convention=convention)\n return registry(\n metadata=meta,\n type_annotation_map={\n UUID: GUID,\n EmailStr: String,\n AnyUrl: String,\n AnyHttpUrl: String,\n dict: JSON,\n datetime: DateTime,\n date: Date,\n },\n )\n\n\norm_registry = create_registry()\n\n\nclass UUIDBase(UUIDPrimaryKey, CommonTableAttributes, DeclarativeBase):\n \"\"\"Base for all SQLAlchemy declarative models with UUID primary keys.\"\"\"\n\n registry = orm_registry\n\n\nclass UUIDAuditBase(CommonTableAttributes, UUIDPrimaryKey, AuditColumns, DeclarativeBase):\n \"\"\"Base for declarative models with UUID primary keys and audit columns.\"\"\"\n\n registry = orm_registry\n\n\nclass BigIntBase(BigIntPrimaryKey, CommonTableAttributes, DeclarativeBase):\n \"\"\"Base for all SQLAlchemy declarative models with BigInt primary keys.\"\"\"\n\n registry = orm_registry\n\n\nclass BigIntAuditBase(CommonTableAttributes, BigIntPrimaryKey, AuditColumns, DeclarativeBase):\n \"\"\"Base for declarative models with BigInt primary keys and audit columns.\"\"\"\n\n registry = orm_registry\n", "path": "litestar/contrib/sqlalchemy/base.py"}, {"content": "from __future__ import annotations\n\nimport uuid\nfrom base64 import b64decode\nfrom typing import TYPE_CHECKING, Any, cast\n\nfrom sqlalchemy import text, util\nfrom sqlalchemy.dialects.oracle import BLOB as ORA_BLOB\nfrom sqlalchemy.dialects.oracle import RAW as ORA_RAW\nfrom sqlalchemy.dialects.postgresql import JSONB as PG_JSONB\nfrom sqlalchemy.dialects.postgresql import UUID as PG_UUID\nfrom sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator\nfrom sqlalchemy.types import JSON as _JSON\n\nif TYPE_CHECKING:\n from sqlalchemy.engine import Dialect\n\nBigIntIdentity = BigInteger().with_variant(Integer, \"sqlite\")\n\n\nclass GUID(TypeDecorator):\n \"\"\"Platform-independent GUID type.\n\n Uses PostgreSQL's UUID type, Oracle's RAW(16) type, otherwise uses\n BINARY(16) or CHAR(32), storing as stringified hex values.\n\n Will accept stringified UUIDs as a hexstring or an actual UUID\n\n \"\"\"\n\n impl = BINARY(16)\n cache_ok = True\n\n @property\n def python_type(self) -> type[uuid.UUID]:\n return uuid.UUID\n\n def __init__(self, *args: Any, binary: bool = True, **kwargs: Any) -> None:\n self.binary = binary\n\n def load_dialect_impl(self, dialect: Dialect) -> Any:\n if dialect.name in 
{\"postgresql\", \"duckdb\"}:\n return dialect.type_descriptor(PG_UUID())\n if dialect.name == \"oracle\":\n return dialect.type_descriptor(ORA_RAW(16))\n if self.binary:\n return dialect.type_descriptor(BINARY(16))\n return dialect.type_descriptor(CHAR(32))\n\n def process_bind_param(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> bytes | str | None:\n if value is None:\n return value\n if dialect.name in {\"postgresql\", \"duckdb\"}:\n return str(value)\n value = self.to_uuid(value)\n if value is None:\n return value\n if dialect.name in {\"oracle\", \"spanner+spanner\"}:\n return value.bytes\n return value.bytes if self.binary else value.hex\n\n def process_result_value(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> uuid.UUID | None:\n if value is None:\n return value\n if isinstance(value, uuid.UUID):\n return value\n if dialect.name == \"spanner+spanner\":\n return uuid.UUID(bytes=b64decode(value))\n if self.binary:\n return uuid.UUID(bytes=cast(\"bytes\", value))\n return uuid.UUID(hex=cast(\"str\", value))\n\n @staticmethod\n def to_uuid(value: Any) -> uuid.UUID | None:\n if isinstance(value, uuid.UUID) or value is None:\n return value\n try:\n value = uuid.UUID(hex=value)\n except (TypeError, ValueError):\n value = uuid.UUID(bytes=value)\n return cast(\"uuid.UUID | None\", value)\n\n\nclass JSON(TypeDecorator, SchemaType): # type: ignore\n \"\"\"Platform-independent JSON type.\n\n Uses JSONB type for postgres, BLOB for Oracle, otherwise uses the generic JSON data type.\n\n JSON = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_BLOB, \"oracle\")\n\n \"\"\"\n\n impl = _JSON\n cache_ok = True\n\n @property\n def python_type(self) -> type[dict]:\n return dict\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Initialize JSON type\"\"\"\n self.name = kwargs.pop(\"name\", None)\n self.oracle_strict = kwargs.pop(\"oracle_strict\", True)\n\n def load_dialect_impl(self, dialect: Dialect) -> Any:\n if dialect.name == \"postgresql\":\n return dialect.type_descriptor(PG_JSONB()) # type: ignore\n if dialect.name == \"oracle\":\n return dialect.type_descriptor(ORA_BLOB())\n return dialect.type_descriptor(_JSON())\n\n def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:\n return bool(compiler.dialect.name == \"oracle\")\n\n def _variant_mapping_for_set_table(self, column: Any) -> dict | None:\n if column.type._variant_mapping:\n variant_mapping = dict(column.type._variant_mapping)\n variant_mapping[\"_default\"] = column.type\n else:\n variant_mapping = None\n return variant_mapping\n\n @util.preload_module(\"sqlalchemy.sql.schema\")\n def _set_table(self, column: Any, table: Any) -> None:\n schema = util.preloaded.sql_schema\n variant_mapping = self._variant_mapping_for_set_table(column)\n constraint_options = \"(strict)\" if self.oracle_strict else \"\"\n sqltext = text(f\"{column.name} is json {constraint_options}\")\n e = schema.CheckConstraint(\n sqltext,\n name=f\"{column.name}_is_json\",\n _create_rule=util.portable_instancemethod( # type: ignore[no-untyped-call]\n self._should_create_constraint,\n {\"variant_mapping\": variant_mapping},\n ),\n _type_bound=True,\n )\n table.append_constraint(e)\n", "path": "litestar/contrib/sqlalchemy/types.py"}], "after_files": [{"content": "\"\"\"Application ORM configuration.\"\"\"\nfrom __future__ import annotations\n\nimport re\nfrom datetime import date, datetime\nfrom typing import TYPE_CHECKING, Any, ClassVar, Protocol, TypeVar, runtime_checkable\nfrom uuid 
import UUID, uuid4\n\nfrom pydantic import AnyHttpUrl, AnyUrl, EmailStr\nfrom sqlalchemy import Date, DateTime, MetaData, Sequence, String\nfrom sqlalchemy.event import listens_for\nfrom sqlalchemy.orm import (\n DeclarativeBase,\n Mapped,\n Session,\n declared_attr,\n mapped_column,\n orm_insert_sentinel,\n registry,\n)\n\nfrom .types import GUID, BigIntIdentity, JsonB\n\nif TYPE_CHECKING:\n from sqlalchemy.sql import FromClause\n\n__all__ = (\n \"AuditColumns\",\n \"BigIntAuditBase\",\n \"BigIntBase\",\n \"BigIntPrimaryKey\",\n \"CommonTableAttributes\",\n \"create_registry\",\n \"ModelProtocol\",\n \"touch_updated_timestamp\",\n \"UUIDAuditBase\",\n \"UUIDBase\",\n \"UUIDPrimaryKey\",\n)\n\n\nUUIDBaseT = TypeVar(\"UUIDBaseT\", bound=\"UUIDBase\")\nBigIntBaseT = TypeVar(\"BigIntBaseT\", bound=\"BigIntBase\")\n\nconvention = {\n \"ix\": \"ix_%(column_0_label)s\",\n \"uq\": \"uq_%(table_name)s_%(column_0_name)s\",\n \"ck\": \"ck_%(table_name)s_%(constraint_name)s\",\n \"fk\": \"fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s\",\n \"pk\": \"pk_%(table_name)s\",\n}\n\"\"\"Templates for automated constraint name generation.\"\"\"\n\n\n@listens_for(Session, \"before_flush\")\ndef touch_updated_timestamp(session: Session, *_: Any) -> None:\n \"\"\"Set timestamp on update.\n\n Called from SQLAlchemy's\n :meth:`before_flush <sqlalchemy.orm.SessionEvents.before_flush>` event to bump the ``updated``\n timestamp on modified instances.\n\n Args:\n session: The sync :class:`Session <sqlalchemy.orm.Session>` instance that underlies the async\n session.\n \"\"\"\n for instance in session.dirty:\n if hasattr(instance, \"updated\"):\n instance.updated = datetime.now() # noqa: DTZ005\n\n\n@runtime_checkable\nclass ModelProtocol(Protocol):\n \"\"\"The base SQLAlchemy model protocol.\"\"\"\n\n __table__: FromClause\n __name__: ClassVar[str]\n\n def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:\n \"\"\"Convert model to dictionary.\n\n Returns:\n dict[str, Any]: A dict representation of the model\n \"\"\"\n ...\n\n\nclass UUIDPrimaryKey:\n \"\"\"UUID Primary Key Field Mixin.\"\"\"\n\n id: Mapped[UUID] = mapped_column(default=uuid4, primary_key=True) # pyright: ignore\n \"\"\"UUID Primary key column.\"\"\"\n\n @declared_attr\n def _sentinel(cls) -> Mapped[int]:\n return orm_insert_sentinel()\n\n\nclass BigIntPrimaryKey:\n \"\"\"BigInt Primary Key Field Mixin.\"\"\"\n\n @declared_attr\n def id(cls) -> Mapped[int]:\n \"\"\"BigInt Primary key column.\"\"\"\n return mapped_column(\n BigIntIdentity,\n Sequence(f\"{cls.__tablename__}_id_seq\", optional=False), # type: ignore[attr-defined] # pyright: ignore\n primary_key=True,\n )\n\n\nclass AuditColumns:\n \"\"\"Created/Updated At Fields Mixin.\"\"\"\n\n created: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore\n \"\"\"Date/time of instance creation.\"\"\"\n updated: Mapped[datetime] = mapped_column(default=datetime.now) # pyright: ignore\n \"\"\"Date/time of instance last update.\"\"\"\n\n\nclass CommonTableAttributes:\n \"\"\"Common attributes for SQLALchemy tables.\"\"\"\n\n __name__: ClassVar[str]\n __table__: FromClause\n\n # noinspection PyMethodParameters\n @declared_attr.directive\n def __tablename__(cls) -> str: # pylint: disable=no-self-argument\n \"\"\"Infer table name from class name.\"\"\"\n regexp = re.compile(\"((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))\")\n return regexp.sub(r\"_\\1\", cls.__name__).lower()\n\n def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:\n \"\"\"Convert 
model to dictionary.\n\n Returns:\n dict[str, Any]: A dict representation of the model\n \"\"\"\n exclude = exclude.union(\"_sentinel\") if exclude else {\"_sentinel\"}\n return {field.name: getattr(self, field.name) for field in self.__table__.columns if field.name not in exclude}\n\n\ndef create_registry() -> registry:\n \"\"\"Create a new SQLAlchemy registry.\"\"\"\n meta = MetaData(naming_convention=convention)\n return registry(\n metadata=meta,\n type_annotation_map={\n UUID: GUID,\n EmailStr: String,\n AnyUrl: String,\n AnyHttpUrl: String,\n dict: JsonB,\n datetime: DateTime,\n date: Date,\n },\n )\n\n\norm_registry = create_registry()\n\n\nclass UUIDBase(UUIDPrimaryKey, CommonTableAttributes, DeclarativeBase):\n \"\"\"Base for all SQLAlchemy declarative models with UUID primary keys.\"\"\"\n\n registry = orm_registry\n\n\nclass UUIDAuditBase(CommonTableAttributes, UUIDPrimaryKey, AuditColumns, DeclarativeBase):\n \"\"\"Base for declarative models with UUID primary keys and audit columns.\"\"\"\n\n registry = orm_registry\n\n\nclass BigIntBase(BigIntPrimaryKey, CommonTableAttributes, DeclarativeBase):\n \"\"\"Base for all SQLAlchemy declarative models with BigInt primary keys.\"\"\"\n\n registry = orm_registry\n\n\nclass BigIntAuditBase(CommonTableAttributes, BigIntPrimaryKey, AuditColumns, DeclarativeBase):\n \"\"\"Base for declarative models with BigInt primary keys and audit columns.\"\"\"\n\n registry = orm_registry\n", "path": "litestar/contrib/sqlalchemy/base.py"}, {"content": "from __future__ import annotations\n\nimport uuid\nfrom base64 import b64decode\nfrom typing import TYPE_CHECKING, Any, cast\n\nfrom sqlalchemy import text, util\nfrom sqlalchemy.dialects.oracle import BLOB as ORA_BLOB\nfrom sqlalchemy.dialects.oracle import RAW as ORA_RAW\nfrom sqlalchemy.dialects.postgresql import JSONB as PG_JSONB\nfrom sqlalchemy.dialects.postgresql import UUID as PG_UUID\nfrom sqlalchemy.types import BINARY, CHAR, BigInteger, Integer, SchemaType, TypeDecorator, TypeEngine\nfrom sqlalchemy.types import JSON as _JSON\n\nfrom litestar.serialization import decode_json, encode_json\n\nif TYPE_CHECKING:\n from sqlalchemy.engine import Dialect\n\n\nclass GUID(TypeDecorator):\n \"\"\"Platform-independent GUID type.\n\n Uses PostgreSQL's UUID type, Oracle's RAW(16) type, otherwise uses\n BINARY(16) or CHAR(32), storing as stringified hex values.\n\n Will accept stringified UUIDs as a hexstring or an actual UUID\n\n \"\"\"\n\n impl = BINARY(16)\n cache_ok = True\n\n @property\n def python_type(self) -> type[uuid.UUID]:\n return uuid.UUID\n\n def __init__(self, *args: Any, binary: bool = True, **kwargs: Any) -> None:\n self.binary = binary\n\n def load_dialect_impl(self, dialect: Dialect) -> Any:\n if dialect.name in {\"postgresql\", \"duckdb\"}:\n return dialect.type_descriptor(PG_UUID())\n if dialect.name == \"oracle\":\n return dialect.type_descriptor(ORA_RAW(16))\n if self.binary:\n return dialect.type_descriptor(BINARY(16))\n return dialect.type_descriptor(CHAR(32))\n\n def process_bind_param(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> bytes | str | None:\n if value is None:\n return value\n if dialect.name in {\"postgresql\", \"duckdb\"}:\n return str(value)\n value = self.to_uuid(value)\n if value is None:\n return value\n if dialect.name in {\"oracle\", \"spanner+spanner\"}:\n return value.bytes\n return value.bytes if self.binary else value.hex\n\n def process_result_value(self, value: bytes | str | uuid.UUID | None, dialect: Dialect) -> uuid.UUID | None:\n if 
value is None:\n return value\n if isinstance(value, uuid.UUID):\n return value\n if dialect.name == \"spanner+spanner\":\n return uuid.UUID(bytes=b64decode(value))\n if self.binary:\n return uuid.UUID(bytes=cast(\"bytes\", value))\n return uuid.UUID(hex=cast(\"str\", value))\n\n @staticmethod\n def to_uuid(value: Any) -> uuid.UUID | None:\n if isinstance(value, uuid.UUID) or value is None:\n return value\n try:\n value = uuid.UUID(hex=value)\n except (TypeError, ValueError):\n value = uuid.UUID(bytes=value)\n return cast(\"uuid.UUID | None\", value)\n\n\nclass ORA_JSONB(TypeDecorator, SchemaType): # type: ignore # noqa: N801\n \"\"\"Oracle Binary JSON type.\n\n JsonB = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_JSONB, \"oracle\")\n\n \"\"\"\n\n impl = ORA_BLOB\n cache_ok = True\n\n @property\n def python_type(self) -> type[dict]:\n return dict\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Initialize JSON type\"\"\"\n self.name = kwargs.pop(\"name\", None)\n self.oracle_strict = kwargs.pop(\"oracle_strict\", True)\n\n def coerce_compared_value(self, op: Any, value: Any) -> Any:\n return self.impl.coerce_compared_value(op=op, value=value) # type: ignore\n\n def load_dialect_impl(self, dialect: Dialect) -> TypeEngine[Any]:\n return dialect.type_descriptor(ORA_BLOB())\n\n def process_bind_param(self, value: Any, dialect: Dialect) -> Any | None:\n if value is None:\n return value\n return encode_json(value)\n\n def process_result_value(self, value: bytes | None, dialect: Dialect) -> Any | None:\n if value is None:\n return value\n return decode_json(value)\n\n def _should_create_constraint(self, compiler: Any, **kw: Any) -> bool:\n return bool(compiler.dialect.name == \"oracle\")\n\n def _variant_mapping_for_set_table(self, column: Any) -> dict | None:\n if column.type._variant_mapping:\n variant_mapping = dict(column.type._variant_mapping)\n variant_mapping[\"_default\"] = column.type\n else:\n variant_mapping = None\n return variant_mapping\n\n @util.preload_module(\"sqlalchemy.sql.schema\")\n def _set_table(self, column: Any, table: Any) -> None:\n schema = util.preloaded.sql_schema\n variant_mapping = self._variant_mapping_for_set_table(column)\n constraint_options = \"(strict)\" if self.oracle_strict else \"\"\n sqltext = text(f\"{column.name} is json {constraint_options}\")\n e = schema.CheckConstraint(\n sqltext,\n name=f\"{column.name}_is_json\",\n _create_rule=util.portable_instancemethod( # type: ignore[no-untyped-call]\n self._should_create_constraint,\n {\"variant_mapping\": variant_mapping},\n ),\n _type_bound=True,\n )\n table.append_constraint(e)\n\n\nBigIntIdentity = BigInteger().with_variant(Integer, \"sqlite\")\nJsonB = _JSON().with_variant(PG_JSONB, \"postgresql\").with_variant(ORA_JSONB, \"oracle\")\n", "path": "litestar/contrib/sqlalchemy/types.py"}]} | 3,647 | 935 |
gh_patches_debug_33085 | rasdani/github-patches | git_diff | goauthentik__authentik-7028 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Date field can't be serialized to JSON on user write stage
**Describe the bug**
`Date` fields can't be serialised to JSON and thus are not saved to the user.
**To Reproduce**
I've added a field of type `Date` to my user settings by adding it in the `default-user-settings` prompt stage.
When I go into my user settings, set a value and save it, my user's fields are no longer displayed; instead they are replaced by an `Open settings` button.

When I click the button I get this error message:
```
builtins.TypeError: Object of type date is not JSON serializable
```

- authentik version: 2023.6.1
- Deployment: docker-compose
**Additional context**
https://discord.com/channels/809154715984199690/1129892642080161913
--- END ISSUE ---
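For context, a minimal standalone sketch (not taken from authentik) of the failure mode and one common workaround: Python's default JSON encoder has no rule for `date`/`datetime` objects, and an `isoformat()` fallback such as the hypothetical `encode_value` below sidesteps it.
```python
import json
from datetime import date, datetime

profile = {"username": "alice", "birthday": date(1990, 5, 17)}

try:
    json.dumps(profile)
except TypeError as exc:
    print(exc)  # Object of type date is not JSON serializable

def encode_value(value):
    """Hypothetical fallback: render date/datetime values as ISO 8601 strings."""
    if isinstance(value, (date, datetime)):
        return value.isoformat()
    raise TypeError(f"Object of type {type(value).__name__} is not JSON serializable")

print(json.dumps(profile, default=encode_value))
# {"username": "alice", "birthday": "1990-05-17"}
```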
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/events/utils.py`
Content:
```
1 """event utilities"""
2 import re
3 from copy import copy
4 from dataclasses import asdict, is_dataclass
5 from enum import Enum
6 from pathlib import Path
7 from types import GeneratorType
8 from typing import Any, Optional
9 from uuid import UUID
10
11 from django.contrib.auth.models import AnonymousUser
12 from django.core.handlers.wsgi import WSGIRequest
13 from django.db import models
14 from django.db.models.base import Model
15 from django.http.request import HttpRequest
16 from django.views.debug import SafeExceptionReporterFilter
17 from geoip2.models import City
18 from guardian.utils import get_anonymous_user
19
20 from authentik.blueprints.v1.common import YAMLTag
21 from authentik.core.models import User
22 from authentik.events.geo import GEOIP_READER
23 from authentik.policies.types import PolicyRequest
24
25 # Special keys which are *not* cleaned, even when the default filter
26 # is matched
27 ALLOWED_SPECIAL_KEYS = re.compile("passing", flags=re.I)
28
29
30 def cleanse_item(key: str, value: Any) -> Any:
31 """Cleanse a single item"""
32 if isinstance(value, dict):
33 return cleanse_dict(value)
34 if isinstance(value, (list, tuple, set)):
35 for idx, item in enumerate(value):
36 value[idx] = cleanse_item(key, item)
37 return value
38 try:
39 if SafeExceptionReporterFilter.hidden_settings.search(
40 key
41 ) and not ALLOWED_SPECIAL_KEYS.search(key):
42 return SafeExceptionReporterFilter.cleansed_substitute
43 except TypeError: # pragma: no cover
44 return value
45 return value
46
47
48 def cleanse_dict(source: dict[Any, Any]) -> dict[Any, Any]:
49 """Cleanse a dictionary, recursively"""
50 final_dict = {}
51 for key, value in source.items():
52 new_value = cleanse_item(key, value)
53 if new_value is not ...:
54 final_dict[key] = new_value
55 return final_dict
56
57
58 def model_to_dict(model: Model) -> dict[str, Any]:
59 """Convert model to dict"""
60 name = str(model)
61 if hasattr(model, "name"):
62 name = model.name
63 return {
64 "app": model._meta.app_label,
65 "model_name": model._meta.model_name,
66 "pk": model.pk,
67 "name": name,
68 }
69
70
71 def get_user(user: User, original_user: Optional[User] = None) -> dict[str, Any]:
72 """Convert user object to dictionary, optionally including the original user"""
73 if isinstance(user, AnonymousUser):
74 user = get_anonymous_user()
75 user_data = {
76 "username": user.username,
77 "pk": user.pk,
78 "email": user.email,
79 }
80 if original_user:
81 original_data = get_user(original_user)
82 original_data["on_behalf_of"] = user_data
83 return original_data
84 return user_data
85
86
87 # pylint: disable=too-many-return-statements
88 def sanitize_item(value: Any) -> Any:
89 """Sanitize a single item, ensure it is JSON parsable"""
90 if is_dataclass(value):
91 # Because asdict calls `copy.deepcopy(obj)` on everything that's not tuple/dict,
92 # and deepcopy doesn't work with HttpRequest (neither django nor rest_framework).
93 # (more specifically doesn't work with ResolverMatch)
94 # rest_framework's custom Request class makes this more complicated as it also holds a
95 # thread lock.
96 # Since this class is mainly used for Events which already hold the http request context
97 # we just remove the http_request from the shallow policy request
98 # Currently, the only dataclass that actually holds an http request is a PolicyRequest
99 if isinstance(value, PolicyRequest) and value.http_request is not None:
100 value: PolicyRequest = copy(value)
101 value.http_request = None
102 value = asdict(value)
103 if isinstance(value, dict):
104 return sanitize_dict(value)
105 if isinstance(value, GeneratorType):
106 return sanitize_item(list(value))
107 if isinstance(value, (list, tuple, set)):
108 new_values = []
109 for item in value:
110 new_value = sanitize_item(item)
111 if new_value:
112 new_values.append(new_value)
113 return new_values
114 if isinstance(value, (User, AnonymousUser)):
115 return sanitize_dict(get_user(value))
116 if isinstance(value, models.Model):
117 return sanitize_dict(model_to_dict(value))
118 if isinstance(value, UUID):
119 return value.hex
120 if isinstance(value, (HttpRequest, WSGIRequest)):
121 return ...
122 if isinstance(value, City):
123 return GEOIP_READER.city_to_dict(value)
124 if isinstance(value, Path):
125 return str(value)
126 if isinstance(value, Exception):
127 return str(value)
128 if isinstance(value, YAMLTag):
129 return str(value)
130 if isinstance(value, Enum):
131 return value.value
132 if isinstance(value, type):
133 return {
134 "type": value.__name__,
135 "module": value.__module__,
136 }
137 return value
138
139
140 def sanitize_dict(source: dict[Any, Any]) -> dict[Any, Any]:
141 """clean source of all Models that would interfere with the JSONField.
142 Models are replaced with a dictionary of {
143 app: str,
144 name: str,
145 pk: Any
146 }"""
147 final_dict = {}
148 for key, value in source.items():
149 new_value = sanitize_item(value)
150 if new_value is not ...:
151 final_dict[key] = new_value
152 return final_dict
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/events/utils.py b/authentik/events/utils.py
--- a/authentik/events/utils.py
+++ b/authentik/events/utils.py
@@ -2,6 +2,7 @@
import re
from copy import copy
from dataclasses import asdict, is_dataclass
+from datetime import date, datetime, time, timedelta
from enum import Enum
from pathlib import Path
from types import GeneratorType
@@ -13,6 +14,7 @@
from django.db import models
from django.db.models.base import Model
from django.http.request import HttpRequest
+from django.utils import timezone
from django.views.debug import SafeExceptionReporterFilter
from geoip2.models import City
from guardian.utils import get_anonymous_user
@@ -84,7 +86,7 @@
return user_data
-# pylint: disable=too-many-return-statements
+# pylint: disable=too-many-return-statements,too-many-branches
def sanitize_item(value: Any) -> Any:
"""Sanitize a single item, ensure it is JSON parsable"""
if is_dataclass(value):
@@ -134,6 +136,23 @@
"type": value.__name__,
"module": value.__module__,
}
+ # See
+ # https://github.com/encode/django-rest-framework/blob/master/rest_framework/utils/encoders.py
+ # For Date Time string spec, see ECMA 262
+ # https://ecma-international.org/ecma-262/5.1/#sec-15.9.1.15
+ if isinstance(value, datetime):
+ representation = value.isoformat()
+ if representation.endswith("+00:00"):
+ representation = representation[:-6] + "Z"
+ return representation
+ if isinstance(value, date):
+ return value.isoformat()
+ if isinstance(value, time):
+ if timezone and timezone.is_aware(value):
+ raise ValueError("JSON can't represent timezone-aware times.")
+ return value.isoformat()
+ if isinstance(value, timedelta):
+ return str(value.total_seconds())
return value
| {"golden_diff": "diff --git a/authentik/events/utils.py b/authentik/events/utils.py\n--- a/authentik/events/utils.py\n+++ b/authentik/events/utils.py\n@@ -2,6 +2,7 @@\n import re\n from copy import copy\n from dataclasses import asdict, is_dataclass\n+from datetime import date, datetime, time, timedelta\n from enum import Enum\n from pathlib import Path\n from types import GeneratorType\n@@ -13,6 +14,7 @@\n from django.db import models\n from django.db.models.base import Model\n from django.http.request import HttpRequest\n+from django.utils import timezone\n from django.views.debug import SafeExceptionReporterFilter\n from geoip2.models import City\n from guardian.utils import get_anonymous_user\n@@ -84,7 +86,7 @@\n return user_data\n \n \n-# pylint: disable=too-many-return-statements\n+# pylint: disable=too-many-return-statements,too-many-branches\n def sanitize_item(value: Any) -> Any:\n \"\"\"Sanitize a single item, ensure it is JSON parsable\"\"\"\n if is_dataclass(value):\n@@ -134,6 +136,23 @@\n \"type\": value.__name__,\n \"module\": value.__module__,\n }\n+ # See\n+ # https://github.com/encode/django-rest-framework/blob/master/rest_framework/utils/encoders.py\n+ # For Date Time string spec, see ECMA 262\n+ # https://ecma-international.org/ecma-262/5.1/#sec-15.9.1.15\n+ if isinstance(value, datetime):\n+ representation = value.isoformat()\n+ if representation.endswith(\"+00:00\"):\n+ representation = representation[:-6] + \"Z\"\n+ return representation\n+ if isinstance(value, date):\n+ return value.isoformat()\n+ if isinstance(value, time):\n+ if timezone and timezone.is_aware(value):\n+ raise ValueError(\"JSON can't represent timezone-aware times.\")\n+ return value.isoformat()\n+ if isinstance(value, timedelta):\n+ return str(value.total_seconds())\n return value\n", "issue": "Date field can't be serialized to JSON on user write stage\n**Describe the bug**\r\n\r\n`Date` fields can't be serialised to JSON and thus are not saved to the user.\r\n\r\n**To Reproduce**\r\n\r\nI've added a field of type `Date` to my user settings by adding it in the `default-user-settings` prompt stage.\r\nWhen I go into my user settings, set a value and save it, my user's fields are no longer displayed, instead replaced by a `Open settings` button.\r\n\r\n\r\n\r\nWhen I click the button I get this error message:\r\n\r\n```\r\nbuiltins.TypeError: Object of type date is not JSON serializable\r\n```\r\n\r\n\r\n\r\n- authentik version: 2023.6.1\r\n- Deployment: docker-compose\r\n\r\n**Additional context**\r\nhttps://discord.com/channels/809154715984199690/1129892642080161913\r\n\n", "before_files": [{"content": "\"\"\"event utilities\"\"\"\nimport re\nfrom copy import copy\nfrom dataclasses import asdict, is_dataclass\nfrom enum import Enum\nfrom pathlib import Path\nfrom types import GeneratorType\nfrom typing import Any, Optional\nfrom uuid import UUID\n\nfrom django.contrib.auth.models import AnonymousUser\nfrom django.core.handlers.wsgi import WSGIRequest\nfrom django.db import models\nfrom django.db.models.base import Model\nfrom django.http.request import HttpRequest\nfrom django.views.debug import SafeExceptionReporterFilter\nfrom geoip2.models import City\nfrom guardian.utils import get_anonymous_user\n\nfrom authentik.blueprints.v1.common import YAMLTag\nfrom authentik.core.models import User\nfrom authentik.events.geo import GEOIP_READER\nfrom authentik.policies.types import PolicyRequest\n\n# Special keys which are *not* cleaned, even when the default filter\n# is matched\nALLOWED_SPECIAL_KEYS 
= re.compile(\"passing\", flags=re.I)\n\n\ndef cleanse_item(key: str, value: Any) -> Any:\n \"\"\"Cleanse a single item\"\"\"\n if isinstance(value, dict):\n return cleanse_dict(value)\n if isinstance(value, (list, tuple, set)):\n for idx, item in enumerate(value):\n value[idx] = cleanse_item(key, item)\n return value\n try:\n if SafeExceptionReporterFilter.hidden_settings.search(\n key\n ) and not ALLOWED_SPECIAL_KEYS.search(key):\n return SafeExceptionReporterFilter.cleansed_substitute\n except TypeError: # pragma: no cover\n return value\n return value\n\n\ndef cleanse_dict(source: dict[Any, Any]) -> dict[Any, Any]:\n \"\"\"Cleanse a dictionary, recursively\"\"\"\n final_dict = {}\n for key, value in source.items():\n new_value = cleanse_item(key, value)\n if new_value is not ...:\n final_dict[key] = new_value\n return final_dict\n\n\ndef model_to_dict(model: Model) -> dict[str, Any]:\n \"\"\"Convert model to dict\"\"\"\n name = str(model)\n if hasattr(model, \"name\"):\n name = model.name\n return {\n \"app\": model._meta.app_label,\n \"model_name\": model._meta.model_name,\n \"pk\": model.pk,\n \"name\": name,\n }\n\n\ndef get_user(user: User, original_user: Optional[User] = None) -> dict[str, Any]:\n \"\"\"Convert user object to dictionary, optionally including the original user\"\"\"\n if isinstance(user, AnonymousUser):\n user = get_anonymous_user()\n user_data = {\n \"username\": user.username,\n \"pk\": user.pk,\n \"email\": user.email,\n }\n if original_user:\n original_data = get_user(original_user)\n original_data[\"on_behalf_of\"] = user_data\n return original_data\n return user_data\n\n\n# pylint: disable=too-many-return-statements\ndef sanitize_item(value: Any) -> Any:\n \"\"\"Sanitize a single item, ensure it is JSON parsable\"\"\"\n if is_dataclass(value):\n # Because asdict calls `copy.deepcopy(obj)` on everything that's not tuple/dict,\n # and deepcopy doesn't work with HttpRequest (neither django nor rest_framework).\n # (more specifically doesn't work with ResolverMatch)\n # rest_framework's custom Request class makes this more complicated as it also holds a\n # thread lock.\n # Since this class is mainly used for Events which already hold the http request context\n # we just remove the http_request from the shallow policy request\n # Currently, the only dataclass that actually holds an http request is a PolicyRequest\n if isinstance(value, PolicyRequest) and value.http_request is not None:\n value: PolicyRequest = copy(value)\n value.http_request = None\n value = asdict(value)\n if isinstance(value, dict):\n return sanitize_dict(value)\n if isinstance(value, GeneratorType):\n return sanitize_item(list(value))\n if isinstance(value, (list, tuple, set)):\n new_values = []\n for item in value:\n new_value = sanitize_item(item)\n if new_value:\n new_values.append(new_value)\n return new_values\n if isinstance(value, (User, AnonymousUser)):\n return sanitize_dict(get_user(value))\n if isinstance(value, models.Model):\n return sanitize_dict(model_to_dict(value))\n if isinstance(value, UUID):\n return value.hex\n if isinstance(value, (HttpRequest, WSGIRequest)):\n return ...\n if isinstance(value, City):\n return GEOIP_READER.city_to_dict(value)\n if isinstance(value, Path):\n return str(value)\n if isinstance(value, Exception):\n return str(value)\n if isinstance(value, YAMLTag):\n return str(value)\n if isinstance(value, Enum):\n return value.value\n if isinstance(value, type):\n return {\n \"type\": value.__name__,\n \"module\": value.__module__,\n }\n return 
value\n\n\ndef sanitize_dict(source: dict[Any, Any]) -> dict[Any, Any]:\n \"\"\"clean source of all Models that would interfere with the JSONField.\n Models are replaced with a dictionary of {\n app: str,\n name: str,\n pk: Any\n }\"\"\"\n final_dict = {}\n for key, value in source.items():\n new_value = sanitize_item(value)\n if new_value is not ...:\n final_dict[key] = new_value\n return final_dict\n", "path": "authentik/events/utils.py"}], "after_files": [{"content": "\"\"\"event utilities\"\"\"\nimport re\nfrom copy import copy\nfrom dataclasses import asdict, is_dataclass\nfrom datetime import date, datetime, time, timedelta\nfrom enum import Enum\nfrom pathlib import Path\nfrom types import GeneratorType\nfrom typing import Any, Optional\nfrom uuid import UUID\n\nfrom django.contrib.auth.models import AnonymousUser\nfrom django.core.handlers.wsgi import WSGIRequest\nfrom django.db import models\nfrom django.db.models.base import Model\nfrom django.http.request import HttpRequest\nfrom django.utils import timezone\nfrom django.views.debug import SafeExceptionReporterFilter\nfrom geoip2.models import City\nfrom guardian.utils import get_anonymous_user\n\nfrom authentik.blueprints.v1.common import YAMLTag\nfrom authentik.core.models import User\nfrom authentik.events.geo import GEOIP_READER\nfrom authentik.policies.types import PolicyRequest\n\n# Special keys which are *not* cleaned, even when the default filter\n# is matched\nALLOWED_SPECIAL_KEYS = re.compile(\"passing\", flags=re.I)\n\n\ndef cleanse_item(key: str, value: Any) -> Any:\n \"\"\"Cleanse a single item\"\"\"\n if isinstance(value, dict):\n return cleanse_dict(value)\n if isinstance(value, (list, tuple, set)):\n for idx, item in enumerate(value):\n value[idx] = cleanse_item(key, item)\n return value\n try:\n if SafeExceptionReporterFilter.hidden_settings.search(\n key\n ) and not ALLOWED_SPECIAL_KEYS.search(key):\n return SafeExceptionReporterFilter.cleansed_substitute\n except TypeError: # pragma: no cover\n return value\n return value\n\n\ndef cleanse_dict(source: dict[Any, Any]) -> dict[Any, Any]:\n \"\"\"Cleanse a dictionary, recursively\"\"\"\n final_dict = {}\n for key, value in source.items():\n new_value = cleanse_item(key, value)\n if new_value is not ...:\n final_dict[key] = new_value\n return final_dict\n\n\ndef model_to_dict(model: Model) -> dict[str, Any]:\n \"\"\"Convert model to dict\"\"\"\n name = str(model)\n if hasattr(model, \"name\"):\n name = model.name\n return {\n \"app\": model._meta.app_label,\n \"model_name\": model._meta.model_name,\n \"pk\": model.pk,\n \"name\": name,\n }\n\n\ndef get_user(user: User, original_user: Optional[User] = None) -> dict[str, Any]:\n \"\"\"Convert user object to dictionary, optionally including the original user\"\"\"\n if isinstance(user, AnonymousUser):\n user = get_anonymous_user()\n user_data = {\n \"username\": user.username,\n \"pk\": user.pk,\n \"email\": user.email,\n }\n if original_user:\n original_data = get_user(original_user)\n original_data[\"on_behalf_of\"] = user_data\n return original_data\n return user_data\n\n\n# pylint: disable=too-many-return-statements,too-many-branches\ndef sanitize_item(value: Any) -> Any:\n \"\"\"Sanitize a single item, ensure it is JSON parsable\"\"\"\n if is_dataclass(value):\n # Because asdict calls `copy.deepcopy(obj)` on everything that's not tuple/dict,\n # and deepcopy doesn't work with HttpRequest (neither django nor rest_framework).\n # (more specifically doesn't work with ResolverMatch)\n # rest_framework's custom 
Request class makes this more complicated as it also holds a\n # thread lock.\n # Since this class is mainly used for Events which already hold the http request context\n # we just remove the http_request from the shallow policy request\n # Currently, the only dataclass that actually holds an http request is a PolicyRequest\n if isinstance(value, PolicyRequest) and value.http_request is not None:\n value: PolicyRequest = copy(value)\n value.http_request = None\n value = asdict(value)\n if isinstance(value, dict):\n return sanitize_dict(value)\n if isinstance(value, GeneratorType):\n return sanitize_item(list(value))\n if isinstance(value, (list, tuple, set)):\n new_values = []\n for item in value:\n new_value = sanitize_item(item)\n if new_value:\n new_values.append(new_value)\n return new_values\n if isinstance(value, (User, AnonymousUser)):\n return sanitize_dict(get_user(value))\n if isinstance(value, models.Model):\n return sanitize_dict(model_to_dict(value))\n if isinstance(value, UUID):\n return value.hex\n if isinstance(value, (HttpRequest, WSGIRequest)):\n return ...\n if isinstance(value, City):\n return GEOIP_READER.city_to_dict(value)\n if isinstance(value, Path):\n return str(value)\n if isinstance(value, Exception):\n return str(value)\n if isinstance(value, YAMLTag):\n return str(value)\n if isinstance(value, Enum):\n return value.value\n if isinstance(value, type):\n return {\n \"type\": value.__name__,\n \"module\": value.__module__,\n }\n # See\n # https://github.com/encode/django-rest-framework/blob/master/rest_framework/utils/encoders.py\n # For Date Time string spec, see ECMA 262\n # https://ecma-international.org/ecma-262/5.1/#sec-15.9.1.15\n if isinstance(value, datetime):\n representation = value.isoformat()\n if representation.endswith(\"+00:00\"):\n representation = representation[:-6] + \"Z\"\n return representation\n if isinstance(value, date):\n return value.isoformat()\n if isinstance(value, time):\n if timezone and timezone.is_aware(value):\n raise ValueError(\"JSON can't represent timezone-aware times.\")\n return value.isoformat()\n if isinstance(value, timedelta):\n return str(value.total_seconds())\n return value\n\n\ndef sanitize_dict(source: dict[Any, Any]) -> dict[Any, Any]:\n \"\"\"clean source of all Models that would interfere with the JSONField.\n Models are replaced with a dictionary of {\n app: str,\n name: str,\n pk: Any\n }\"\"\"\n final_dict = {}\n for key, value in source.items():\n new_value = sanitize_item(value)\n if new_value is not ...:\n final_dict[key] = new_value\n return final_dict\n", "path": "authentik/events/utils.py"}]} | 2,101 | 468 |
gh_patches_debug_1619 | rasdani/github-patches | git_diff | getredash__redash-3008 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GA Data Source throws an error when no rows returned
### Issue Summary
Google Analytics Data Source throws `Error running query: 'rows'` when the query result is empty.
I have a pretty simple query with dimensions and filters, like:
```json
{
"ids": "ga:177xxxxxx",
"start_date": "2018-10-08",
"end_date": "2018-10-12",
"metrics": "ga:uniqueEvents",
"dimensions": "ga:dimension1,ga:dimension3",
"filters": "ga:dimension2==userrole;ga:eventCategory==eventcategory;ga:eventAction==enentaction;ga:dimension1!=demo"
}
```
Sometimes it returns empty result as there is no data. This results in error in redash.
### Steps to Reproduce
1. Create the Google Analytics Data Source
2. Make some query returning zero rows
3. Execute it in query editor
`Error running query: 'rows'` will be thrown. While this might be considered not a bug, I'd expect just an empty result with no errors.
### Technical details:
* Redash Version: 5.0.1
* Browser/OS: Chrome/macOS
* How did you install Redash: docker-compose
GA Data Source throws an error when no rows returned
### Issue Summary
Google Analytics Data Source throws `Error running query: 'rows'` when the query result is empty.
I have a pretty simple query with dimensions and filters, like:
```json
{
"ids": "ga:177xxxxxx",
"start_date": "2018-10-08",
"end_date": "2018-10-12",
"metrics": "ga:uniqueEvents",
"dimensions": "ga:dimension1,ga:dimension3",
"filters": "ga:dimension2==userrole;ga:eventCategory==eventcategory;ga:eventAction==enentaction;ga:dimension1!=demo"
}
```
Sometimes it returns empty result as there is no data. This results in error in redash.
### Steps to Reproduce
1. Create the Google Analytics Data Source
2. Make some query returning zero rows
3. Execute it in query editor
`Error running query: 'rows'` will be thrown. While this might be considered not a bug, I'd expect just an empty result with no errors.
### Technical details:
* Redash Version: 5.0.1
* Browser/OS: Chrome/macOS
* How did you install Redash: docker-compose
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/query_runner/google_analytics.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import logging
4 from base64 import b64decode
5 from datetime import datetime
6 from urlparse import parse_qs, urlparse
7
8 from redash.query_runner import *
9 from redash.utils import json_dumps, json_loads
10
11 logger = logging.getLogger(__name__)
12
13 try:
14 from oauth2client.service_account import ServiceAccountCredentials
15 from apiclient.discovery import build
16 from apiclient.errors import HttpError
17 import httplib2
18 enabled = True
19 except ImportError as e:
20 enabled = False
21
22
23 types_conv = dict(
24 STRING=TYPE_STRING,
25 INTEGER=TYPE_INTEGER,
26 FLOAT=TYPE_FLOAT,
27 DATE=TYPE_DATE,
28 DATETIME=TYPE_DATETIME
29 )
30
31
32 def parse_ga_response(response):
33 columns = []
34 for h in response['columnHeaders']:
35 if h['name'] in ('ga:date', 'mcf:conversionDate'):
36 h['dataType'] = 'DATE'
37 elif h['name'] == 'ga:dateHour':
38 h['dataType'] = 'DATETIME'
39 columns.append({
40 'name': h['name'],
41 'friendly_name': h['name'].split(':', 1)[1],
42 'type': types_conv.get(h['dataType'], 'string')
43 })
44
45 rows = []
46 for r in response['rows']:
47 d = {}
48 for c, value in enumerate(r):
49 column_name = response['columnHeaders'][c]['name']
50 column_type = filter(lambda col: col['name'] == column_name, columns)[0]['type']
51
52 # mcf results come a bit different than ga results:
53 if isinstance(value, dict):
54 if 'primitiveValue' in value:
55 value = value['primitiveValue']
56 elif 'conversionPathValue' in value:
57 steps = []
58 for step in value['conversionPathValue']:
59 steps.append('{}:{}'.format(step['interactionType'], step['nodeValue']))
60 value = ', '.join(steps)
61 else:
62 raise Exception("Results format not supported")
63
64 if column_type == TYPE_DATE:
65 value = datetime.strptime(value, '%Y%m%d')
66 elif column_type == TYPE_DATETIME:
67 if len(value) == 10:
68 value = datetime.strptime(value, '%Y%m%d%H')
69 elif len(value) == 12:
70 value = datetime.strptime(value, '%Y%m%d%H%M')
71 else:
72 raise Exception("Unknown date/time format in results: '{}'".format(value))
73
74 d[column_name] = value
75 rows.append(d)
76
77 return {'columns': columns, 'rows': rows}
78
79
80 class GoogleAnalytics(BaseSQLQueryRunner):
81 @classmethod
82 def annotate_query(cls):
83 return False
84
85 @classmethod
86 def type(cls):
87 return "google_analytics"
88
89 @classmethod
90 def name(cls):
91 return "Google Analytics"
92
93 @classmethod
94 def enabled(cls):
95 return enabled
96
97 @classmethod
98 def configuration_schema(cls):
99 return {
100 'type': 'object',
101 'properties': {
102 'jsonKeyFile': {
103 "type": "string",
104 'title': 'JSON Key File'
105 }
106 },
107 'required': ['jsonKeyFile'],
108 'secret': ['jsonKeyFile']
109 }
110
111 def __init__(self, configuration):
112 super(GoogleAnalytics, self).__init__(configuration)
113 self.syntax = 'json'
114
115 def _get_analytics_service(self):
116 scope = ['https://www.googleapis.com/auth/analytics.readonly']
117 key = json_loads(b64decode(self.configuration['jsonKeyFile']))
118 creds = ServiceAccountCredentials.from_json_keyfile_dict(key, scope)
119 return build('analytics', 'v3', http=creds.authorize(httplib2.Http()))
120
121 def _get_tables(self, schema):
122 accounts = self._get_analytics_service().management().accounts().list().execute().get('items')
123 if accounts is None:
124 raise Exception("Failed getting accounts.")
125 else:
126 for account in accounts:
127 schema[account['name']] = {'name': account['name'], 'columns': []}
128 properties = self._get_analytics_service().management().webproperties().list(
129 accountId=account['id']).execute().get('items', [])
130 for property_ in properties:
131 if 'defaultProfileId' in property_ and 'name' in property_:
132 schema[account['name']]['columns'].append(
133 u'{0} (ga:{1})'.format(property_['name'], property_['defaultProfileId'])
134 )
135
136 return schema.values()
137
138 def test_connection(self):
139 try:
140 service = self._get_analytics_service()
141 service.management().accounts().list().execute()
142 except HttpError as e:
143 # Make sure we return a more readable error to the end user
144 raise Exception(e._get_reason())
145
146 def run_query(self, query, user):
147 logger.debug("Analytics is about to execute query: %s", query)
148 try:
149 params = json_loads(query)
150 except:
151 params = parse_qs(urlparse(query).query, keep_blank_values=True)
152 for key in params.keys():
153 params[key] = ','.join(params[key])
154 if '-' in key:
155 params[key.replace('-', '_')] = params.pop(key)
156
157 if 'mcf:' in params['metrics'] and 'ga:' in params['metrics']:
158 raise Exception("Can't mix mcf: and ga: metrics.")
159
160 if 'mcf:' in params.get('dimensions', '') and 'ga:' in params.get('dimensions', ''):
161 raise Exception("Can't mix mcf: and ga: dimensions.")
162
163 if 'mcf:' in params['metrics']:
164 api = self._get_analytics_service().data().mcf()
165 else:
166 api = self._get_analytics_service().data().ga()
167
168 if len(params) > 0:
169 try:
170 response = api.get(**params).execute()
171 data = parse_ga_response(response)
172 error = None
173 json_data = json_dumps(data)
174 except HttpError as e:
175 # Make sure we return a more readable error to the end user
176 error = e._get_reason()
177 json_data = None
178 else:
179 error = 'Wrong query format.'
180 json_data = None
181 return json_data, error
182
183
184 register(GoogleAnalytics)
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/query_runner/google_analytics.py b/redash/query_runner/google_analytics.py
--- a/redash/query_runner/google_analytics.py
+++ b/redash/query_runner/google_analytics.py
@@ -43,7 +43,7 @@
})
rows = []
- for r in response['rows']:
+ for r in response.get('rows', []):
d = {}
for c, value in enumerate(r):
column_name = response['columnHeaders'][c]['name']
| {"golden_diff": "diff --git a/redash/query_runner/google_analytics.py b/redash/query_runner/google_analytics.py\n--- a/redash/query_runner/google_analytics.py\n+++ b/redash/query_runner/google_analytics.py\n@@ -43,7 +43,7 @@\n })\n \n rows = []\n- for r in response['rows']:\n+ for r in response.get('rows', []):\n d = {}\n for c, value in enumerate(r):\n column_name = response['columnHeaders'][c]['name']\n", "issue": "GA Data Source throws an error when no rows returned\n### Issue Summary\r\n\r\nGoogle Analytics Data Source throws `Error running query: 'rows'` when the query result is empty.\r\nI have a pretty simple query with dimensions and filters, like:\r\n\r\n```json\r\n{\r\n \"ids\": \"ga:177xxxxxx\",\r\n \"start_date\": \"2018-10-08\",\r\n \"end_date\": \"2018-10-12\",\r\n \"metrics\": \"ga:uniqueEvents\",\r\n \"dimensions\": \"ga:dimension1,ga:dimension3\",\r\n \"filters\": \"ga:dimension2==userrole;ga:eventCategory==eventcategory;ga:eventAction==enentaction;ga:dimension1!=demo\"\r\n}\r\n```\r\n\r\nSometimes it returns empty result as there is no data. This results in error in redash.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create the Google Analytics Data Source\r\n2. Make some query returning zero rows\r\n3. Execute it in query editor\r\n\r\n`Error running query: 'rows'` will be thrown. While this might be considered not a bug, I'd expect just an empty result with no errors.\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 5.0.1\r\n* Browser/OS: Chrome/macOS\r\n* How did you install Redash: docker-compose\r\n\nGA Data Source throws an error when no rows returned\n### Issue Summary\r\n\r\nGoogle Analytics Data Source throws `Error running query: 'rows'` when the query result is empty.\r\nI have a pretty simple query with dimensions and filters, like:\r\n\r\n```json\r\n{\r\n \"ids\": \"ga:177xxxxxx\",\r\n \"start_date\": \"2018-10-08\",\r\n \"end_date\": \"2018-10-12\",\r\n \"metrics\": \"ga:uniqueEvents\",\r\n \"dimensions\": \"ga:dimension1,ga:dimension3\",\r\n \"filters\": \"ga:dimension2==userrole;ga:eventCategory==eventcategory;ga:eventAction==enentaction;ga:dimension1!=demo\"\r\n}\r\n```\r\n\r\nSometimes it returns empty result as there is no data. This results in error in redash.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create the Google Analytics Data Source\r\n2. Make some query returning zero rows\r\n3. Execute it in query editor\r\n\r\n`Error running query: 'rows'` will be thrown. 
While this might be considered not a bug, I'd expect just an empty result with no errors.\r\n\r\n### Technical details:\r\n\r\n* Redash Version: 5.0.1\r\n* Browser/OS: Chrome/macOS\r\n* How did you install Redash: docker-compose\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom base64 import b64decode\nfrom datetime import datetime\nfrom urlparse import parse_qs, urlparse\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\ntry:\n from oauth2client.service_account import ServiceAccountCredentials\n from apiclient.discovery import build\n from apiclient.errors import HttpError\n import httplib2\n enabled = True\nexcept ImportError as e:\n enabled = False\n\n\ntypes_conv = dict(\n STRING=TYPE_STRING,\n INTEGER=TYPE_INTEGER,\n FLOAT=TYPE_FLOAT,\n DATE=TYPE_DATE,\n DATETIME=TYPE_DATETIME\n)\n\n\ndef parse_ga_response(response):\n columns = []\n for h in response['columnHeaders']:\n if h['name'] in ('ga:date', 'mcf:conversionDate'):\n h['dataType'] = 'DATE'\n elif h['name'] == 'ga:dateHour':\n h['dataType'] = 'DATETIME'\n columns.append({\n 'name': h['name'],\n 'friendly_name': h['name'].split(':', 1)[1],\n 'type': types_conv.get(h['dataType'], 'string')\n })\n\n rows = []\n for r in response['rows']:\n d = {}\n for c, value in enumerate(r):\n column_name = response['columnHeaders'][c]['name']\n column_type = filter(lambda col: col['name'] == column_name, columns)[0]['type']\n\n # mcf results come a bit different than ga results:\n if isinstance(value, dict):\n if 'primitiveValue' in value:\n value = value['primitiveValue']\n elif 'conversionPathValue' in value:\n steps = []\n for step in value['conversionPathValue']:\n steps.append('{}:{}'.format(step['interactionType'], step['nodeValue']))\n value = ', '.join(steps)\n else:\n raise Exception(\"Results format not supported\")\n\n if column_type == TYPE_DATE:\n value = datetime.strptime(value, '%Y%m%d')\n elif column_type == TYPE_DATETIME:\n if len(value) == 10:\n value = datetime.strptime(value, '%Y%m%d%H')\n elif len(value) == 12:\n value = datetime.strptime(value, '%Y%m%d%H%M')\n else:\n raise Exception(\"Unknown date/time format in results: '{}'\".format(value))\n\n d[column_name] = value\n rows.append(d)\n\n return {'columns': columns, 'rows': rows}\n\n\nclass GoogleAnalytics(BaseSQLQueryRunner):\n @classmethod\n def annotate_query(cls):\n return False\n\n @classmethod\n def type(cls):\n return \"google_analytics\"\n\n @classmethod\n def name(cls):\n return \"Google Analytics\"\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def configuration_schema(cls):\n return {\n 'type': 'object',\n 'properties': {\n 'jsonKeyFile': {\n \"type\": \"string\",\n 'title': 'JSON Key File'\n }\n },\n 'required': ['jsonKeyFile'],\n 'secret': ['jsonKeyFile']\n }\n\n def __init__(self, configuration):\n super(GoogleAnalytics, self).__init__(configuration)\n self.syntax = 'json'\n\n def _get_analytics_service(self):\n scope = ['https://www.googleapis.com/auth/analytics.readonly']\n key = json_loads(b64decode(self.configuration['jsonKeyFile']))\n creds = ServiceAccountCredentials.from_json_keyfile_dict(key, scope)\n return build('analytics', 'v3', http=creds.authorize(httplib2.Http()))\n\n def _get_tables(self, schema):\n accounts = self._get_analytics_service().management().accounts().list().execute().get('items')\n if accounts is None:\n raise Exception(\"Failed getting accounts.\")\n else:\n for account in accounts:\n 
schema[account['name']] = {'name': account['name'], 'columns': []}\n properties = self._get_analytics_service().management().webproperties().list(\n accountId=account['id']).execute().get('items', [])\n for property_ in properties:\n if 'defaultProfileId' in property_ and 'name' in property_:\n schema[account['name']]['columns'].append(\n u'{0} (ga:{1})'.format(property_['name'], property_['defaultProfileId'])\n )\n\n return schema.values()\n\n def test_connection(self):\n try:\n service = self._get_analytics_service()\n service.management().accounts().list().execute()\n except HttpError as e:\n # Make sure we return a more readable error to the end user\n raise Exception(e._get_reason())\n\n def run_query(self, query, user):\n logger.debug(\"Analytics is about to execute query: %s\", query)\n try:\n params = json_loads(query)\n except:\n params = parse_qs(urlparse(query).query, keep_blank_values=True)\n for key in params.keys():\n params[key] = ','.join(params[key])\n if '-' in key:\n params[key.replace('-', '_')] = params.pop(key)\n\n if 'mcf:' in params['metrics'] and 'ga:' in params['metrics']:\n raise Exception(\"Can't mix mcf: and ga: metrics.\")\n\n if 'mcf:' in params.get('dimensions', '') and 'ga:' in params.get('dimensions', ''):\n raise Exception(\"Can't mix mcf: and ga: dimensions.\")\n\n if 'mcf:' in params['metrics']:\n api = self._get_analytics_service().data().mcf()\n else:\n api = self._get_analytics_service().data().ga()\n\n if len(params) > 0:\n try:\n response = api.get(**params).execute()\n data = parse_ga_response(response)\n error = None\n json_data = json_dumps(data)\n except HttpError as e:\n # Make sure we return a more readable error to the end user\n error = e._get_reason()\n json_data = None\n else:\n error = 'Wrong query format.'\n json_data = None\n return json_data, error\n\n\nregister(GoogleAnalytics)\n", "path": "redash/query_runner/google_analytics.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom base64 import b64decode\nfrom datetime import datetime\nfrom urlparse import parse_qs, urlparse\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\ntry:\n from oauth2client.service_account import ServiceAccountCredentials\n from apiclient.discovery import build\n from apiclient.errors import HttpError\n import httplib2\n enabled = True\nexcept ImportError as e:\n enabled = False\n\n\ntypes_conv = dict(\n STRING=TYPE_STRING,\n INTEGER=TYPE_INTEGER,\n FLOAT=TYPE_FLOAT,\n DATE=TYPE_DATE,\n DATETIME=TYPE_DATETIME\n)\n\n\ndef parse_ga_response(response):\n columns = []\n for h in response['columnHeaders']:\n if h['name'] in ('ga:date', 'mcf:conversionDate'):\n h['dataType'] = 'DATE'\n elif h['name'] == 'ga:dateHour':\n h['dataType'] = 'DATETIME'\n columns.append({\n 'name': h['name'],\n 'friendly_name': h['name'].split(':', 1)[1],\n 'type': types_conv.get(h['dataType'], 'string')\n })\n\n rows = []\n for r in response.get('rows', []):\n d = {}\n for c, value in enumerate(r):\n column_name = response['columnHeaders'][c]['name']\n column_type = filter(lambda col: col['name'] == column_name, columns)[0]['type']\n\n # mcf results come a bit different than ga results:\n if isinstance(value, dict):\n if 'primitiveValue' in value:\n value = value['primitiveValue']\n elif 'conversionPathValue' in value:\n steps = []\n for step in value['conversionPathValue']:\n steps.append('{}:{}'.format(step['interactionType'], step['nodeValue']))\n value = ', '.join(steps)\n 
else:\n raise Exception(\"Results format not supported\")\n\n if column_type == TYPE_DATE:\n value = datetime.strptime(value, '%Y%m%d')\n elif column_type == TYPE_DATETIME:\n if len(value) == 10:\n value = datetime.strptime(value, '%Y%m%d%H')\n elif len(value) == 12:\n value = datetime.strptime(value, '%Y%m%d%H%M')\n else:\n raise Exception(\"Unknown date/time format in results: '{}'\".format(value))\n\n d[column_name] = value\n rows.append(d)\n\n return {'columns': columns, 'rows': rows}\n\n\nclass GoogleAnalytics(BaseSQLQueryRunner):\n @classmethod\n def annotate_query(cls):\n return False\n\n @classmethod\n def type(cls):\n return \"google_analytics\"\n\n @classmethod\n def name(cls):\n return \"Google Analytics\"\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def configuration_schema(cls):\n return {\n 'type': 'object',\n 'properties': {\n 'jsonKeyFile': {\n \"type\": \"string\",\n 'title': 'JSON Key File'\n }\n },\n 'required': ['jsonKeyFile'],\n 'secret': ['jsonKeyFile']\n }\n\n def __init__(self, configuration):\n super(GoogleAnalytics, self).__init__(configuration)\n self.syntax = 'json'\n\n def _get_analytics_service(self):\n scope = ['https://www.googleapis.com/auth/analytics.readonly']\n key = json_loads(b64decode(self.configuration['jsonKeyFile']))\n creds = ServiceAccountCredentials.from_json_keyfile_dict(key, scope)\n return build('analytics', 'v3', http=creds.authorize(httplib2.Http()))\n\n def _get_tables(self, schema):\n accounts = self._get_analytics_service().management().accounts().list().execute().get('items')\n if accounts is None:\n raise Exception(\"Failed getting accounts.\")\n else:\n for account in accounts:\n schema[account['name']] = {'name': account['name'], 'columns': []}\n properties = self._get_analytics_service().management().webproperties().list(\n accountId=account['id']).execute().get('items', [])\n for property_ in properties:\n if 'defaultProfileId' in property_ and 'name' in property_:\n schema[account['name']]['columns'].append(\n u'{0} (ga:{1})'.format(property_['name'], property_['defaultProfileId'])\n )\n\n return schema.values()\n\n def test_connection(self):\n try:\n service = self._get_analytics_service()\n service.management().accounts().list().execute()\n except HttpError as e:\n # Make sure we return a more readable error to the end user\n raise Exception(e._get_reason())\n\n def run_query(self, query, user):\n logger.debug(\"Analytics is about to execute query: %s\", query)\n try:\n params = json_loads(query)\n except:\n params = parse_qs(urlparse(query).query, keep_blank_values=True)\n for key in params.keys():\n params[key] = ','.join(params[key])\n if '-' in key:\n params[key.replace('-', '_')] = params.pop(key)\n\n if 'mcf:' in params['metrics'] and 'ga:' in params['metrics']:\n raise Exception(\"Can't mix mcf: and ga: metrics.\")\n\n if 'mcf:' in params.get('dimensions', '') and 'ga:' in params.get('dimensions', ''):\n raise Exception(\"Can't mix mcf: and ga: dimensions.\")\n\n if 'mcf:' in params['metrics']:\n api = self._get_analytics_service().data().mcf()\n else:\n api = self._get_analytics_service().data().ga()\n\n if len(params) > 0:\n try:\n response = api.get(**params).execute()\n data = parse_ga_response(response)\n error = None\n json_data = json_dumps(data)\n except HttpError as e:\n # Make sure we return a more readable error to the end user\n error = e._get_reason()\n json_data = None\n else:\n error = 'Wrong query format.'\n json_data = None\n return json_data, 
error\n\n\nregister(GoogleAnalytics)\n", "path": "redash/query_runner/google_analytics.py"}]} | 2,643 | 110 |
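A small standalone sketch of the defensive-lookup pattern that the golden diff above applies: when the Analytics API matches no rows, the `rows` key is simply absent from the payload, so the parser should treat it as optional rather than index it directly. The payload shape and helper below are simplified stand-ins for illustration, not Redash's real `parse_ga_response`.

```python
from typing import Any, Dict, List


def parse_rows(response: Dict[str, Any]) -> List[Dict[str, Any]]:
    """Pair each raw row with the reported column names, tolerating empty results."""
    columns = [h["name"] for h in response.get("columnHeaders", [])]
    rows = []
    # .get("rows", []) turns "no data" into an empty list instead of a KeyError
    # (the KeyError is what surfaces as "Error running query: 'rows'").
    for raw in response.get("rows", []):
        rows.append(dict(zip(columns, raw)))
    return rows


# A query with no matches: the API omits "rows" entirely.
empty = {"columnHeaders": [{"name": "ga:uniqueEvents"}]}
assert parse_rows(empty) == []

# A query with data behaves as before.
full = {"columnHeaders": [{"name": "ga:uniqueEvents"}], "rows": [["42"]]}
assert parse_rows(full) == [{"ga:uniqueEvents": "42"}]
```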
gh_patches_debug_1849 | rasdani/github-patches | git_diff | WordPress__openverse-api-556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sound category mismatch
## Description
<!-- Concisely describe the bug. -->
The `sound` category for audio doesn't work on the front-end.
There seems to be a mismatch between the `audio` category of `sound_effect`:
If you go to `https://api.openverse.engineering/v1/audio/?q=cat&categories=sound`, you will get a 400 response:
```
HTTP 400 Bad Request
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept
{
"detail": {
"categories": [
"Invalid category: sound. Available options: {'music', 'audiobook', 'podcast', 'news', 'sound_effect'}"
]
}
}
```
However, if you access a single audio result, you will see that it returns `sound` for the category:
https://api.openverse.engineering/v1/audio/1bb94f50-009c-4371-a605-dd289562a9f5/
## Expectation
<!-- Concisely describe what you expected to happen. -->
Both the query category parameter and the result category property for sound effect should have the same name.
## Additional context
The catalog sets the category as `sound`, so that is the value we get from the database:
https://github.com/WordPress/openverse-catalog/blob/cb19f839e96de7ae1a55e8b7dc82a7d2bf5588e8/openverse_catalog/dags/providers/provider_api_scripts/freesound.py#L33-L34
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in resolving this bug.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api/catalog/api/serializers/audio_serializers.py`
Content:
```
1 from catalog.api.controllers.search_controller import get_sources
2 from catalog.api.docs.media_docs import fields_to_md
3 from catalog.api.models import AudioReport
4 from catalog.api.models.audio import Audio
5 from catalog.api.serializers.media_serializers import (
6 MediaSearchRequestSerializer,
7 MediaSearchSerializer,
8 MediaSerializer,
9 _validate_enum,
10 )
11 from elasticsearch_dsl.response import Hit
12 from rest_framework import serializers
13
14
15 class AudioSetSerializer(serializers.Serializer):
16 """An audio set, rendered as a part of the ``AudioSerializer`` output."""
17
18 title = serializers.CharField(help_text="The name of the media.", required=False)
19 foreign_landing_url = serializers.URLField(
20 required=False, help_text="A foreign landing link for the image."
21 )
22
23 creator = serializers.CharField(
24 help_text="The name of the media creator.", required=False, allow_blank=True
25 )
26 creator_url = serializers.URLField(
27 required=False, help_text="A direct link to the media creator."
28 )
29
30 url = serializers.URLField(help_text="The actual URL to the media file.")
31 filesize = serializers.CharField(
32 required=False, help_text="Number in bytes, e.g. 1024."
33 )
34 filetype = serializers.CharField(
35 required=False,
36 help_text="The type of the file, related to the file extension.",
37 )
38
39
40 class AudioSearchRequestSerializer(MediaSearchRequestSerializer):
41 """Parse and validate search query string parameters."""
42
43 fields_names = [
44 *MediaSearchRequestSerializer.fields_names,
45 "source",
46 "categories",
47 "duration",
48 ]
49 """
50 Keep the fields names in sync with the actual fields below as this list is
51 used to generate Swagger documentation.
52 """
53
54 source = serializers.CharField(
55 label="provider",
56 help_text="A comma separated list of data sources to search. Valid "
57 "inputs: "
58 f"`{list(get_sources('audio').keys())}`",
59 required=False,
60 )
61 categories = serializers.CharField(
62 label="categories",
63 help_text="A comma separated list of categories; available categories "
64 "include `music`, `sound_effect`, `podcast`, `audiobook`, "
65 "and `news`.",
66 required=False,
67 )
68 duration = serializers.CharField(
69 label="duration",
70 help_text="A comma separated list of audio lengths; available lengths "
71 "include `short`, and `long`.",
72 required=False,
73 )
74
75 @staticmethod
76 def validate_source(input_sources):
77 allowed_sources = list(get_sources("audio").keys())
78 input_sources = input_sources.split(",")
79 input_sources = [x for x in input_sources if x in allowed_sources]
80 input_sources = ",".join(input_sources)
81 return input_sources.lower()
82
83 @staticmethod
84 def validate_categories(value):
85 valid_categories = {
86 "music",
87 "sound_effect",
88 "podcast",
89 "news",
90 "audiobook",
91 }
92 _validate_enum("category", valid_categories, value)
93 return value.lower()
94
95 @staticmethod
96 def validate_duration(value):
97 valid_durations = {"short", "long"} # TODO: Finalise duration filters
98 _validate_enum("duration", valid_durations, value)
99 return value.lower()
100
101
102 class AudioSerializer(MediaSerializer):
103 """A single audio file. Used in search results."""
104
105 fields_names = [
106 *MediaSerializer.fields_names,
107 "audio_set",
108 "genre",
109 "duration",
110 "bit_rate",
111 "sample_rate",
112 "alt_files",
113 "detail_url",
114 "related_url",
115 "category",
116 ]
117 """
118 Keep the fields names in sync with the actual fields below as this list is
119 used to generate Swagger documentation.
120 """
121
122 audio_set = AudioSetSerializer(
123 required=False,
124 help_text="Reference to set of which this track is a part.",
125 read_only=True,
126 )
127
128 genres = serializers.ListField(
129 child=serializers.CharField(),
130 required=False,
131 help_text="An array of audio genres such as "
132 "`rock`, `electronic` for `music` category, or "
133 "`politics`, `sport`, `education` for `podcast` category",
134 )
135
136 duration = serializers.IntegerField(
137 required=False, help_text="The time length of the audio file in milliseconds."
138 )
139 bit_rate = serializers.IntegerField(
140 required=False, help_text="Number in bits per second, eg. 128000."
141 )
142 sample_rate = serializers.IntegerField(
143 required=False, help_text="Number in hertz, eg. 44100."
144 )
145
146 alt_files = serializers.JSONField(
147 required=False, help_text="JSON describing alternative files for this audio."
148 )
149
150 # Hyperlinks
151 thumbnail = serializers.HyperlinkedIdentityField(
152 read_only=True,
153 view_name="audio-thumb",
154 lookup_field="identifier",
155 help_text="A direct link to the miniature artwork.",
156 )
157 waveform = serializers.HyperlinkedIdentityField(
158 read_only=True,
159 view_name="audio-waveform",
160 lookup_field="identifier",
161 help_text="A direct link to the waveform peaks.",
162 )
163 detail_url = serializers.HyperlinkedIdentityField(
164 read_only=True,
165 view_name="audio-detail",
166 lookup_field="identifier",
167 help_text="A direct link to the detail view of this audio file.",
168 )
169 related_url = serializers.HyperlinkedIdentityField(
170 read_only=True,
171 view_name="audio-related",
172 lookup_field="identifier",
173 help_text="A link to an endpoint that provides similar audio files.",
174 )
175
176 # Add-on data
177 peaks = serializers.SerializerMethodField()
178
179 @staticmethod
180 def get_peaks(obj):
181 if isinstance(obj, Hit):
182 obj = Audio.objects.get(identifier=obj.identifier)
183 return obj.get_waveform()
184
185
186 class AudioSearchSerializer(MediaSearchSerializer):
187 """
188 The full audio search response.
189 This serializer is purely representational and not actually used to
190 serialize the response.
191 """
192
193 results = AudioSerializer(
194 many=True,
195 help_text=(
196 "An array of audios and their details such as "
197 f"{fields_to_md(AudioSerializer.fields_names)}."
198 ),
199 )
200
201
202 class AudioReportSerializer(serializers.ModelSerializer):
203 class Meta:
204 model = AudioReport
205 fields = ("identifier", "reason", "description")
206 read_only_fields = ("identifier",)
207
208 def create(self, validated_data):
209 if (
210 validated_data["reason"] == "other"
211 and (
212 "description" not in validated_data
213 or len(validated_data["description"])
214 )
215 < 20
216 ):
217 raise serializers.ValidationError(
218 "Description must be at least be 20 characters long"
219 )
220 return AudioReport.objects.create(**validated_data)
221
222
223 class AudioWaveformSerializer(serializers.Serializer):
224 len = serializers.SerializerMethodField()
225 points = serializers.ListField(
226 child=serializers.FloatField(min_value=0, max_value=1)
227 )
228
229 @staticmethod
230 def get_len(obj) -> int:
231 return len(obj.get("points", []))
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api/catalog/api/serializers/audio_serializers.py b/api/catalog/api/serializers/audio_serializers.py
--- a/api/catalog/api/serializers/audio_serializers.py
+++ b/api/catalog/api/serializers/audio_serializers.py
@@ -88,6 +88,7 @@
"podcast",
"news",
"audiobook",
+ "pronunciation",
}
_validate_enum("category", valid_categories, value)
return value.lower()
| {"golden_diff": "diff --git a/api/catalog/api/serializers/audio_serializers.py b/api/catalog/api/serializers/audio_serializers.py\n--- a/api/catalog/api/serializers/audio_serializers.py\n+++ b/api/catalog/api/serializers/audio_serializers.py\n@@ -88,6 +88,7 @@\n \"podcast\",\n \"news\",\n \"audiobook\",\n+ \"pronunciation\",\n }\n _validate_enum(\"category\", valid_categories, value)\n return value.lower()\n", "issue": "Sound category mismatch\n## Description\r\n<!-- Concisely describe the bug. -->\r\nThe `sound` category for audio doesn't work on the front-end.\r\nThere seems to be a mismatch between the `audio` category of `sound_effect`:\r\nIf you go to `https://api.openverse.engineering/v1/audio/?q=cat&categories=sound`, you will get a 400 response:\r\n```\r\nHTTP 400 Bad Request\r\nAllow: GET, HEAD, OPTIONS\r\nContent-Type: application/json\r\nVary: Accept\r\n\r\n{\r\n \"detail\": {\r\n \"categories\": [\r\n \"Invalid category: sound. Available options: {'music', 'audiobook', 'podcast', 'news', 'sound_effect'}\"\r\n ]\r\n }\r\n}\r\n```\r\n\r\nHowever, if you access a single audio result, you will see that it returns `sound` for the category:\r\nhttps://api.openverse.engineering/v1/audio/1bb94f50-009c-4371-a605-dd289562a9f5/\r\n\r\n## Expectation\r\n<!-- Concisely describe what you expected to happen. -->\r\nBoth the query category parameter and the result category property for sound effect should have the same name.\r\n\r\n## Additional context\r\nThe catalog sets the category as `sound`, so that is the value we get from the database:\r\nhttps://github.com/WordPress/openverse-catalog/blob/cb19f839e96de7ae1a55e8b7dc82a7d2bf5588e8/openverse_catalog/dags/providers/provider_api_scripts/freesound.py#L33-L34\r\n\r\n## Resolution\r\n<!-- Replace the [ ] with [x] to check the box. -->\r\n- [ ] \ud83d\ude4b I would be interested in resolving this bug.\r\n\n", "before_files": [{"content": "from catalog.api.controllers.search_controller import get_sources\nfrom catalog.api.docs.media_docs import fields_to_md\nfrom catalog.api.models import AudioReport\nfrom catalog.api.models.audio import Audio\nfrom catalog.api.serializers.media_serializers import (\n MediaSearchRequestSerializer,\n MediaSearchSerializer,\n MediaSerializer,\n _validate_enum,\n)\nfrom elasticsearch_dsl.response import Hit\nfrom rest_framework import serializers\n\n\nclass AudioSetSerializer(serializers.Serializer):\n \"\"\"An audio set, rendered as a part of the ``AudioSerializer`` output.\"\"\"\n\n title = serializers.CharField(help_text=\"The name of the media.\", required=False)\n foreign_landing_url = serializers.URLField(\n required=False, help_text=\"A foreign landing link for the image.\"\n )\n\n creator = serializers.CharField(\n help_text=\"The name of the media creator.\", required=False, allow_blank=True\n )\n creator_url = serializers.URLField(\n required=False, help_text=\"A direct link to the media creator.\"\n )\n\n url = serializers.URLField(help_text=\"The actual URL to the media file.\")\n filesize = serializers.CharField(\n required=False, help_text=\"Number in bytes, e.g. 
1024.\"\n )\n filetype = serializers.CharField(\n required=False,\n help_text=\"The type of the file, related to the file extension.\",\n )\n\n\nclass AudioSearchRequestSerializer(MediaSearchRequestSerializer):\n \"\"\"Parse and validate search query string parameters.\"\"\"\n\n fields_names = [\n *MediaSearchRequestSerializer.fields_names,\n \"source\",\n \"categories\",\n \"duration\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n source = serializers.CharField(\n label=\"provider\",\n help_text=\"A comma separated list of data sources to search. Valid \"\n \"inputs: \"\n f\"`{list(get_sources('audio').keys())}`\",\n required=False,\n )\n categories = serializers.CharField(\n label=\"categories\",\n help_text=\"A comma separated list of categories; available categories \"\n \"include `music`, `sound_effect`, `podcast`, `audiobook`, \"\n \"and `news`.\",\n required=False,\n )\n duration = serializers.CharField(\n label=\"duration\",\n help_text=\"A comma separated list of audio lengths; available lengths \"\n \"include `short`, and `long`.\",\n required=False,\n )\n\n @staticmethod\n def validate_source(input_sources):\n allowed_sources = list(get_sources(\"audio\").keys())\n input_sources = input_sources.split(\",\")\n input_sources = [x for x in input_sources if x in allowed_sources]\n input_sources = \",\".join(input_sources)\n return input_sources.lower()\n\n @staticmethod\n def validate_categories(value):\n valid_categories = {\n \"music\",\n \"sound_effect\",\n \"podcast\",\n \"news\",\n \"audiobook\",\n }\n _validate_enum(\"category\", valid_categories, value)\n return value.lower()\n\n @staticmethod\n def validate_duration(value):\n valid_durations = {\"short\", \"long\"} # TODO: Finalise duration filters\n _validate_enum(\"duration\", valid_durations, value)\n return value.lower()\n\n\nclass AudioSerializer(MediaSerializer):\n \"\"\"A single audio file. Used in search results.\"\"\"\n\n fields_names = [\n *MediaSerializer.fields_names,\n \"audio_set\",\n \"genre\",\n \"duration\",\n \"bit_rate\",\n \"sample_rate\",\n \"alt_files\",\n \"detail_url\",\n \"related_url\",\n \"category\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n audio_set = AudioSetSerializer(\n required=False,\n help_text=\"Reference to set of which this track is a part.\",\n read_only=True,\n )\n\n genres = serializers.ListField(\n child=serializers.CharField(),\n required=False,\n help_text=\"An array of audio genres such as \"\n \"`rock`, `electronic` for `music` category, or \"\n \"`politics`, `sport`, `education` for `podcast` category\",\n )\n\n duration = serializers.IntegerField(\n required=False, help_text=\"The time length of the audio file in milliseconds.\"\n )\n bit_rate = serializers.IntegerField(\n required=False, help_text=\"Number in bits per second, eg. 128000.\"\n )\n sample_rate = serializers.IntegerField(\n required=False, help_text=\"Number in hertz, eg. 
44100.\"\n )\n\n alt_files = serializers.JSONField(\n required=False, help_text=\"JSON describing alternative files for this audio.\"\n )\n\n # Hyperlinks\n thumbnail = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-thumb\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the miniature artwork.\",\n )\n waveform = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-waveform\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the waveform peaks.\",\n )\n detail_url = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-detail\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the detail view of this audio file.\",\n )\n related_url = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-related\",\n lookup_field=\"identifier\",\n help_text=\"A link to an endpoint that provides similar audio files.\",\n )\n\n # Add-on data\n peaks = serializers.SerializerMethodField()\n\n @staticmethod\n def get_peaks(obj):\n if isinstance(obj, Hit):\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n\n\nclass AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n The full audio search response.\n This serializer is purely representational and not actually used to\n serialize the response.\n \"\"\"\n\n results = AudioSerializer(\n many=True,\n help_text=(\n \"An array of audios and their details such as \"\n f\"{fields_to_md(AudioSerializer.fields_names)}.\"\n ),\n )\n\n\nclass AudioReportSerializer(serializers.ModelSerializer):\n class Meta:\n model = AudioReport\n fields = (\"identifier\", \"reason\", \"description\")\n read_only_fields = (\"identifier\",)\n\n def create(self, validated_data):\n if (\n validated_data[\"reason\"] == \"other\"\n and (\n \"description\" not in validated_data\n or len(validated_data[\"description\"])\n )\n < 20\n ):\n raise serializers.ValidationError(\n \"Description must be at least be 20 characters long\"\n )\n return AudioReport.objects.create(**validated_data)\n\n\nclass AudioWaveformSerializer(serializers.Serializer):\n len = serializers.SerializerMethodField()\n points = serializers.ListField(\n child=serializers.FloatField(min_value=0, max_value=1)\n )\n\n @staticmethod\n def get_len(obj) -> int:\n return len(obj.get(\"points\", []))\n", "path": "api/catalog/api/serializers/audio_serializers.py"}], "after_files": [{"content": "from catalog.api.controllers.search_controller import get_sources\nfrom catalog.api.docs.media_docs import fields_to_md\nfrom catalog.api.models import AudioReport\nfrom catalog.api.models.audio import Audio\nfrom catalog.api.serializers.media_serializers import (\n MediaSearchRequestSerializer,\n MediaSearchSerializer,\n MediaSerializer,\n _validate_enum,\n)\nfrom elasticsearch_dsl.response import Hit\nfrom rest_framework import serializers\n\n\nclass AudioSetSerializer(serializers.Serializer):\n \"\"\"An audio set, rendered as a part of the ``AudioSerializer`` output.\"\"\"\n\n title = serializers.CharField(help_text=\"The name of the media.\", required=False)\n foreign_landing_url = serializers.URLField(\n required=False, help_text=\"A foreign landing link for the image.\"\n )\n\n creator = serializers.CharField(\n help_text=\"The name of the media creator.\", required=False, allow_blank=True\n )\n creator_url = serializers.URLField(\n required=False, help_text=\"A direct link to the media creator.\"\n )\n\n url = serializers.URLField(help_text=\"The actual URL to the media 
file.\")\n filesize = serializers.CharField(\n required=False, help_text=\"Number in bytes, e.g. 1024.\"\n )\n filetype = serializers.CharField(\n required=False,\n help_text=\"The type of the file, related to the file extension.\",\n )\n\n\nclass AudioSearchRequestSerializer(MediaSearchRequestSerializer):\n \"\"\"Parse and validate search query string parameters.\"\"\"\n\n fields_names = [\n *MediaSearchRequestSerializer.fields_names,\n \"source\",\n \"categories\",\n \"duration\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n source = serializers.CharField(\n label=\"provider\",\n help_text=\"A comma separated list of data sources to search. Valid \"\n \"inputs: \"\n f\"`{list(get_sources('audio').keys())}`\",\n required=False,\n )\n categories = serializers.CharField(\n label=\"categories\",\n help_text=\"A comma separated list of categories; available categories \"\n \"include `music`, `sound_effect`, `podcast`, `audiobook`, \"\n \"and `news`.\",\n required=False,\n )\n duration = serializers.CharField(\n label=\"duration\",\n help_text=\"A comma separated list of audio lengths; available lengths \"\n \"include `short`, and `long`.\",\n required=False,\n )\n\n @staticmethod\n def validate_source(input_sources):\n allowed_sources = list(get_sources(\"audio\").keys())\n input_sources = input_sources.split(\",\")\n input_sources = [x for x in input_sources if x in allowed_sources]\n input_sources = \",\".join(input_sources)\n return input_sources.lower()\n\n @staticmethod\n def validate_categories(value):\n valid_categories = {\n \"music\",\n \"sound_effect\",\n \"podcast\",\n \"news\",\n \"audiobook\",\n \"pronunciation\",\n }\n _validate_enum(\"category\", valid_categories, value)\n return value.lower()\n\n @staticmethod\n def validate_duration(value):\n valid_durations = {\"short\", \"long\"} # TODO: Finalise duration filters\n _validate_enum(\"duration\", valid_durations, value)\n return value.lower()\n\n\nclass AudioSerializer(MediaSerializer):\n \"\"\"A single audio file. Used in search results.\"\"\"\n\n fields_names = [\n *MediaSerializer.fields_names,\n \"audio_set\",\n \"genre\",\n \"duration\",\n \"bit_rate\",\n \"sample_rate\",\n \"alt_files\",\n \"detail_url\",\n \"related_url\",\n \"category\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n audio_set = AudioSetSerializer(\n required=False,\n help_text=\"Reference to set of which this track is a part.\",\n read_only=True,\n )\n\n genres = serializers.ListField(\n child=serializers.CharField(),\n required=False,\n help_text=\"An array of audio genres such as \"\n \"`rock`, `electronic` for `music` category, or \"\n \"`politics`, `sport`, `education` for `podcast` category\",\n )\n\n duration = serializers.IntegerField(\n required=False, help_text=\"The time length of the audio file in milliseconds.\"\n )\n bit_rate = serializers.IntegerField(\n required=False, help_text=\"Number in bits per second, eg. 128000.\"\n )\n sample_rate = serializers.IntegerField(\n required=False, help_text=\"Number in hertz, eg. 
44100.\"\n )\n\n alt_files = serializers.JSONField(\n required=False, help_text=\"JSON describing alternative files for this audio.\"\n )\n\n # Hyperlinks\n thumbnail = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-thumb\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the miniature artwork.\",\n )\n waveform = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-waveform\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the waveform peaks.\",\n )\n detail_url = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-detail\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the detail view of this audio file.\",\n )\n related_url = serializers.HyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-related\",\n lookup_field=\"identifier\",\n help_text=\"A link to an endpoint that provides similar audio files.\",\n )\n\n # Add-on data\n peaks = serializers.SerializerMethodField()\n\n @staticmethod\n def get_peaks(obj):\n if isinstance(obj, Hit):\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n\n\nclass AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n The full audio search response.\n This serializer is purely representational and not actually used to\n serialize the response.\n \"\"\"\n\n results = AudioSerializer(\n many=True,\n help_text=(\n \"An array of audios and their details such as \"\n f\"{fields_to_md(AudioSerializer.fields_names)}.\"\n ),\n )\n\n\nclass AudioReportSerializer(serializers.ModelSerializer):\n class Meta:\n model = AudioReport\n fields = (\"identifier\", \"reason\", \"description\")\n read_only_fields = (\"identifier\",)\n\n def create(self, validated_data):\n if (\n validated_data[\"reason\"] == \"other\"\n and (\n \"description\" not in validated_data\n or len(validated_data[\"description\"])\n )\n < 20\n ):\n raise serializers.ValidationError(\n \"Description must be at least be 20 characters long\"\n )\n return AudioReport.objects.create(**validated_data)\n\n\nclass AudioWaveformSerializer(serializers.Serializer):\n len = serializers.SerializerMethodField()\n points = serializers.ListField(\n child=serializers.FloatField(min_value=0, max_value=1)\n )\n\n @staticmethod\n def get_len(obj) -> int:\n return len(obj.get(\"points\", []))\n", "path": "api/catalog/api/serializers/audio_serializers.py"}]} | 2,755 | 103 |
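For the entry above, a self-contained sketch of the comma-separated enum validation that the audio search serializer performs; the error message format mirrors the 400 response quoted in the issue. The category set includes `pronunciation` because that is the value the golden diff adds to the serializer; everything else here is an illustrative assumption, not the project's final behaviour.

```python
VALID_AUDIO_CATEGORIES = {
    "music",
    "sound_effect",
    "podcast",
    "news",
    "audiobook",
    "pronunciation",  # the value added by the golden diff above
}


def validate_categories(value: str) -> str:
    """Validate a comma-separated category list such as 'music,podcast'."""
    requested = [item.strip().lower() for item in value.split(",") if item.strip()]
    invalid = [item for item in requested if item not in VALID_AUDIO_CATEGORIES]
    if invalid:
        raise ValueError(
            f"Invalid category: {invalid[0]}. "
            f"Available options: {VALID_AUDIO_CATEGORIES}"
        )
    return ",".join(requested)


print(validate_categories("music,podcast"))   # -> music,podcast
# validate_categories("sound") would raise, matching the 400 shown in the issue.
```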
gh_patches_debug_1052 | rasdani/github-patches | git_diff | mindee__doctr-404 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WeasyPrint import error Python 3.7
## 🐛 Bug
When importing weasyprint with python 3.7 I have an error: `AttributeError: 'OutStream' object has no attribute 'buffer'`*
## To Reproduce
Steps to reproduce the behavior:
`from doctr.models import ocr_predictor`
leads to:
```
AttributeError Traceback (most recent call last)
<ipython-input-4-19f78ebc9b57> in <module>()
----> 1 from doctr.models import ocr_predictor
2
3 # Load predictor
4 model = ocr_predictor(pretrained=True)
7 frames
/usr/local/lib/python3.7/dist-packages/doctr/__init__.py in <module>()
1 from .file_utils import is_tf_available, is_torch_available
2 from .version import __version__ # noqa: F401
----> 3 from . import documents
4 from . import transforms
5 from . import utils
/usr/local/lib/python3.7/dist-packages/doctr/documents/__init__.py in <module>()
1 from .elements import *
----> 2 from .reader import *
/usr/local/lib/python3.7/dist-packages/doctr/documents/reader.py in <module>()
8 from pathlib import Path
9 import fitz
---> 10 from weasyprint import HTML
11 from typing import List, Tuple, Optional, Any, Union, Sequence, Dict
12
/usr/local/lib/python3.7/dist-packages/weasyprint/__init__.py in <module>()
321 # Work around circular imports.
322 from .css import preprocess_stylesheet # noqa isort:skip
--> 323 from .html import ( # noqa isort:skip
324 HTML5_UA_COUNTER_STYLE, HTML5_UA_STYLESHEET, HTML5_PH_STYLESHEET,
325 find_base_url)
/usr/local/lib/python3.7/dist-packages/weasyprint/html.py in <module>()
21 from .css.counters import CounterStyle
22 from .formatting_structure import boxes
---> 23 from .images import SVGImage
24 from .logger import LOGGER
25 from .urls import get_url_attribute
/usr/local/lib/python3.7/dist-packages/weasyprint/images.py in <module>()
11 from itertools import cycle
12
---> 13 import pydyf
14 from PIL import Image
15
/usr/local/lib/python3.7/dist-packages/pydyf/__init__.py in <module>()
402
403
--> 404 class PDF:
405 """PDF document."""
406 def __init__(self):
/usr/local/lib/python3.7/dist-packages/pydyf/__init__.py in PDF()
506 self.write_line(b'%%EOF', output)
507
--> 508 def write(self, output=sys.stdout.buffer):
509 """Write PDF to output.
510
AttributeError: 'OutStream' object has no attribute 'buffer'
```
## Expected behavior
Nothing, special
## Environment
```
DocTR version: 0.3.0
TensorFlow version: 2.5.0
PyTorch version: 1.9.0+cu102 (torchvision 0.10.0+cu102)
OpenCV version: 4.5.3
OS: Ubuntu 18.04.5 LTS
Python version: 3.7
Is CUDA available (TensorFlow): No
Is CUDA available (PyTorch): No
CUDA runtime version: 11.0.221
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 """
7 Package installation setup
8 """
9
10 import os
11 import re
12 from pathlib import Path
13 import subprocess
14
15 from setuptools import find_packages, setup
16
17
18 version = "0.3.1a0"
19 sha = 'Unknown'
20 package_name = 'doctr'
21
22 cwd = Path(__file__).parent.absolute()
23
24 if os.getenv('BUILD_VERSION'):
25 version = os.getenv('BUILD_VERSION')
26 elif sha != 'Unknown':
27 try:
28 sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()
29 except Exception:
30 pass
31 version += '+' + sha[:7]
32 print(f"Building wheel {package_name}-{version}")
33
34 with open(cwd.joinpath(package_name, 'version.py'), 'w') as f:
35 f.write(f"__version__ = '{version}'\n")
36
37 with open('README.md', 'r') as f:
38 readme = f.read()
39
40 # Borrowed from https://github.com/huggingface/transformers/blob/master/setup.py
41 _deps = [
42 "importlib_metadata",
43 "numpy>=1.16.0",
44 "scipy>=1.4.0",
45 "opencv-python>=4.2",
46 "tensorflow>=2.4.0",
47 "PyMuPDF>=1.16.0,<1.18.11",
48 "pyclipper>=1.2.0",
49 "shapely>=1.6.0",
50 "matplotlib>=3.1.0",
51 "mplcursors>=0.3",
52 "weasyprint>=52.2",
53 "unidecode>=1.0.0",
54 "tensorflow-cpu>=2.4.0",
55 "torch>=1.8.0",
56 "torchvision>=0.9.0",
57 "Pillow>=8.0.0,<8.3.0", # cf. https://github.com/python-pillow/Pillow/issues/5571
58 "tqdm>=4.30.0",
59 "tensorflow-addons>=0.13.0"
60 ]
61
62 deps = {b: a for a, b in (re.findall(r"^(([^!=<>]+)(?:[!=<>].*)?$)", x)[0] for x in _deps)}
63
64
65 def deps_list(*pkgs):
66 return [deps[pkg] for pkg in pkgs]
67
68
69 install_requires = [
70 deps["importlib_metadata"] + ";python_version<'3.8'", # importlib_metadata for Python versions that don't have it
71 deps["numpy"],
72 deps["scipy"],
73 deps["opencv-python"],
74 deps["PyMuPDF"],
75 deps["pyclipper"],
76 deps["shapely"],
77 deps["matplotlib"],
78 deps["mplcursors"],
79 deps["weasyprint"],
80 deps["unidecode"],
81 deps["Pillow"],
82 deps["tqdm"],
83 ]
84
85 extras = {}
86 extras["tf"] = deps_list("tensorflow", "tensorflow-addons")
87 extras["tf-cpu"] = deps_list("tensorflow-cpu", "tensorflow-addons")
88 extras["torch"] = deps_list("torch", "torchvision")
89 extras["all"] = (
90 extras["tf"]
91 + extras["torch"]
92 )
93
94 setup(
95 # Metadata
96 name=os.getenv('PKG_INDEX') if os.getenv('PKG_INDEX') else package_name,
97 version=version,
98 author='François-Guillaume Fernandez, Charles Gaillard',
99 author_email='[email protected]',
100 description='Extract valuable text information from your documents',
101 long_description=readme,
102 long_description_content_type="text/markdown",
103 url='https://github.com/mindee/doctr',
104 download_url='https://github.com/mindee/doctr/tags',
105 license='Apache',
106 classifiers=[
107 'Development Status :: 4 - Beta',
108 'Intended Audience :: Developers',
109 "Intended Audience :: Education",
110 'Intended Audience :: Science/Research',
111 'License :: OSI Approved :: Apache Software License',
112 'Natural Language :: English',
113 'Operating System :: OS Independent',
114 'Programming Language :: Python :: 3',
115 'Programming Language :: Python :: 3.6',
116 'Programming Language :: Python :: 3.7',
117 'Topic :: Scientific/Engineering :: Artificial Intelligence',
118 ],
119 keywords=['OCR', 'deep learning', 'computer vision', 'tensorflow', 'pytorch', 'text detection', 'text recognition'],
120
121 # Package info
122 packages=find_packages(exclude=('test',)),
123 zip_safe=True,
124 python_requires='>=3.6.0',
125 include_package_data=True,
126 install_requires=install_requires,
127 extras_require=extras,
128 package_data={'': ['LICENSE']}
129 )
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,7 +49,7 @@
"shapely>=1.6.0",
"matplotlib>=3.1.0",
"mplcursors>=0.3",
- "weasyprint>=52.2",
+ "weasyprint>=52.2,<53.0",
"unidecode>=1.0.0",
"tensorflow-cpu>=2.4.0",
"torch>=1.8.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -49,7 +49,7 @@\n \"shapely>=1.6.0\",\n \"matplotlib>=3.1.0\",\n \"mplcursors>=0.3\",\n- \"weasyprint>=52.2\",\n+ \"weasyprint>=52.2,<53.0\",\n \"unidecode>=1.0.0\",\n \"tensorflow-cpu>=2.4.0\",\n \"torch>=1.8.0\",\n", "issue": "WeasyPrint import error Python 3.7\n## \ud83d\udc1b Bug\r\n\r\nWhen importing weasyprint with python 3.7 I have an error: `AttributeError: 'OutStream' object has no attribute 'buffer'`*\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n`from doctr.models import ocr_predictor`\r\n\r\nleads to:\r\n\r\n```\r\nAttributeError Traceback (most recent call last)\r\n\r\n<ipython-input-4-19f78ebc9b57> in <module>()\r\n----> 1 from doctr.models import ocr_predictor\r\n 2 \r\n 3 # Load predictor\r\n 4 model = ocr_predictor(pretrained=True)\r\n\r\n7 frames\r\n\r\n/usr/local/lib/python3.7/dist-packages/doctr/__init__.py in <module>()\r\n 1 from .file_utils import is_tf_available, is_torch_available\r\n 2 from .version import __version__ # noqa: F401\r\n----> 3 from . import documents\r\n 4 from . import transforms\r\n 5 from . import utils\r\n\r\n/usr/local/lib/python3.7/dist-packages/doctr/documents/__init__.py in <module>()\r\n 1 from .elements import *\r\n----> 2 from .reader import *\r\n\r\n/usr/local/lib/python3.7/dist-packages/doctr/documents/reader.py in <module>()\r\n 8 from pathlib import Path\r\n 9 import fitz\r\n---> 10 from weasyprint import HTML\r\n 11 from typing import List, Tuple, Optional, Any, Union, Sequence, Dict\r\n 12 \r\n\r\n/usr/local/lib/python3.7/dist-packages/weasyprint/__init__.py in <module>()\r\n 321 # Work around circular imports.\r\n 322 from .css import preprocess_stylesheet # noqa isort:skip\r\n--> 323 from .html import ( # noqa isort:skip\r\n 324 HTML5_UA_COUNTER_STYLE, HTML5_UA_STYLESHEET, HTML5_PH_STYLESHEET,\r\n 325 find_base_url)\r\n\r\n/usr/local/lib/python3.7/dist-packages/weasyprint/html.py in <module>()\r\n 21 from .css.counters import CounterStyle\r\n 22 from .formatting_structure import boxes\r\n---> 23 from .images import SVGImage\r\n 24 from .logger import LOGGER\r\n 25 from .urls import get_url_attribute\r\n\r\n/usr/local/lib/python3.7/dist-packages/weasyprint/images.py in <module>()\r\n 11 from itertools import cycle\r\n 12 \r\n---> 13 import pydyf\r\n 14 from PIL import Image\r\n 15 \r\n\r\n/usr/local/lib/python3.7/dist-packages/pydyf/__init__.py in <module>()\r\n 402 \r\n 403 \r\n--> 404 class PDF:\r\n 405 \"\"\"PDF document.\"\"\"\r\n 406 def __init__(self):\r\n\r\n/usr/local/lib/python3.7/dist-packages/pydyf/__init__.py in PDF()\r\n 506 self.write_line(b'%%EOF', output)\r\n 507 \r\n--> 508 def write(self, output=sys.stdout.buffer):\r\n 509 \"\"\"Write PDF to output.\r\n 510 \r\n\r\nAttributeError: 'OutStream' object has no attribute 'buffer'\r\n\r\n```\r\n\r\n\r\n## Expected behavior\r\n\r\nNothing, special\r\n\r\n## Environment\r\n```\r\nDocTR version: 0.3.0\r\nTensorFlow version: 2.5.0\r\nPyTorch version: 1.9.0+cu102 (torchvision 0.10.0+cu102)\r\nOpenCV version: 4.5.3\r\nOS: Ubuntu 18.04.5 LTS\r\nPython version: 3.7\r\nIs CUDA available (TensorFlow): No\r\nIs CUDA available (PyTorch): No\r\nCUDA runtime version: 11.0.221\r\nGPU models and configuration: Could not collect\r\nNvidia driver version: Could not collect\r\n```\r\n\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to 
<https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\n\"\"\"\nPackage installation setup\n\"\"\"\n\nimport os\nimport re\nfrom pathlib import Path\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\n\nversion = \"0.3.1a0\"\nsha = 'Unknown'\npackage_name = 'doctr'\n\ncwd = Path(__file__).parent.absolute()\n\nif os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\nelif sha != 'Unknown':\n try:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\n except Exception:\n pass\n version += '+' + sha[:7]\nprint(f\"Building wheel {package_name}-{version}\")\n\nwith open(cwd.joinpath(package_name, 'version.py'), 'w') as f:\n f.write(f\"__version__ = '{version}'\\n\")\n\nwith open('README.md', 'r') as f:\n readme = f.read()\n\n# Borrowed from https://github.com/huggingface/transformers/blob/master/setup.py\n_deps = [\n \"importlib_metadata\",\n \"numpy>=1.16.0\",\n \"scipy>=1.4.0\",\n \"opencv-python>=4.2\",\n \"tensorflow>=2.4.0\",\n \"PyMuPDF>=1.16.0,<1.18.11\",\n \"pyclipper>=1.2.0\",\n \"shapely>=1.6.0\",\n \"matplotlib>=3.1.0\",\n \"mplcursors>=0.3\",\n \"weasyprint>=52.2\",\n \"unidecode>=1.0.0\",\n \"tensorflow-cpu>=2.4.0\",\n \"torch>=1.8.0\",\n \"torchvision>=0.9.0\",\n \"Pillow>=8.0.0,<8.3.0\", # cf. https://github.com/python-pillow/Pillow/issues/5571\n \"tqdm>=4.30.0\",\n \"tensorflow-addons>=0.13.0\"\n]\n\ndeps = {b: a for a, b in (re.findall(r\"^(([^!=<>]+)(?:[!=<>].*)?$)\", x)[0] for x in _deps)}\n\n\ndef deps_list(*pkgs):\n return [deps[pkg] for pkg in pkgs]\n\n\ninstall_requires = [\n deps[\"importlib_metadata\"] + \";python_version<'3.8'\", # importlib_metadata for Python versions that don't have it\n deps[\"numpy\"],\n deps[\"scipy\"],\n deps[\"opencv-python\"],\n deps[\"PyMuPDF\"],\n deps[\"pyclipper\"],\n deps[\"shapely\"],\n deps[\"matplotlib\"],\n deps[\"mplcursors\"],\n deps[\"weasyprint\"],\n deps[\"unidecode\"],\n deps[\"Pillow\"],\n deps[\"tqdm\"],\n]\n\nextras = {}\nextras[\"tf\"] = deps_list(\"tensorflow\", \"tensorflow-addons\")\nextras[\"tf-cpu\"] = deps_list(\"tensorflow-cpu\", \"tensorflow-addons\")\nextras[\"torch\"] = deps_list(\"torch\", \"torchvision\")\nextras[\"all\"] = (\n extras[\"tf\"]\n + extras[\"torch\"]\n)\n\nsetup(\n # Metadata\n name=os.getenv('PKG_INDEX') if os.getenv('PKG_INDEX') else package_name,\n version=version,\n author='Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard',\n author_email='[email protected]',\n description='Extract valuable text information from your documents',\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n url='https://github.com/mindee/doctr',\n download_url='https://github.com/mindee/doctr/tags',\n license='Apache',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n \"Intended Audience :: Education\",\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n keywords=['OCR', 'deep learning', 'computer vision', 'tensorflow', 'pytorch', 'text detection', 'text recognition'],\n\n # Package info\n packages=find_packages(exclude=('test',)),\n zip_safe=True,\n python_requires='>=3.6.0',\n include_package_data=True,\n install_requires=install_requires,\n 
extras_require=extras,\n package_data={'': ['LICENSE']}\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\n\"\"\"\nPackage installation setup\n\"\"\"\n\nimport os\nimport re\nfrom pathlib import Path\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\n\nversion = \"0.3.1a0\"\nsha = 'Unknown'\npackage_name = 'doctr'\n\ncwd = Path(__file__).parent.absolute()\n\nif os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\nelif sha != 'Unknown':\n try:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\n except Exception:\n pass\n version += '+' + sha[:7]\nprint(f\"Building wheel {package_name}-{version}\")\n\nwith open(cwd.joinpath(package_name, 'version.py'), 'w') as f:\n f.write(f\"__version__ = '{version}'\\n\")\n\nwith open('README.md', 'r') as f:\n readme = f.read()\n\n# Borrowed from https://github.com/huggingface/transformers/blob/master/setup.py\n_deps = [\n \"importlib_metadata\",\n \"numpy>=1.16.0\",\n \"scipy>=1.4.0\",\n \"opencv-python>=4.2\",\n \"tensorflow>=2.4.0\",\n \"PyMuPDF>=1.16.0,<1.18.11\",\n \"pyclipper>=1.2.0\",\n \"shapely>=1.6.0\",\n \"matplotlib>=3.1.0\",\n \"mplcursors>=0.3\",\n \"weasyprint>=52.2,<53.0\",\n \"unidecode>=1.0.0\",\n \"tensorflow-cpu>=2.4.0\",\n \"torch>=1.8.0\",\n \"torchvision>=0.9.0\",\n \"Pillow>=8.0.0,<8.3.0\", # cf. https://github.com/python-pillow/Pillow/issues/5571\n \"tqdm>=4.30.0\",\n \"tensorflow-addons>=0.13.0\"\n]\n\ndeps = {b: a for a, b in (re.findall(r\"^(([^!=<>]+)(?:[!=<>].*)?$)\", x)[0] for x in _deps)}\n\n\ndef deps_list(*pkgs):\n return [deps[pkg] for pkg in pkgs]\n\n\ninstall_requires = [\n deps[\"importlib_metadata\"] + \";python_version<'3.8'\", # importlib_metadata for Python versions that don't have it\n deps[\"numpy\"],\n deps[\"scipy\"],\n deps[\"opencv-python\"],\n deps[\"PyMuPDF\"],\n deps[\"pyclipper\"],\n deps[\"shapely\"],\n deps[\"matplotlib\"],\n deps[\"mplcursors\"],\n deps[\"weasyprint\"],\n deps[\"unidecode\"],\n deps[\"Pillow\"],\n deps[\"tqdm\"],\n]\n\nextras = {}\nextras[\"tf\"] = deps_list(\"tensorflow\", \"tensorflow-addons\")\nextras[\"tf-cpu\"] = deps_list(\"tensorflow-cpu\", \"tensorflow-addons\")\nextras[\"torch\"] = deps_list(\"torch\", \"torchvision\")\nextras[\"all\"] = (\n extras[\"tf\"]\n + extras[\"torch\"]\n)\n\nsetup(\n # Metadata\n name=os.getenv('PKG_INDEX') if os.getenv('PKG_INDEX') else package_name,\n version=version,\n author='Fran\u00e7ois-Guillaume Fernandez, Charles Gaillard',\n author_email='[email protected]',\n description='Extract valuable text information from your documents',\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n url='https://github.com/mindee/doctr',\n download_url='https://github.com/mindee/doctr/tags',\n license='Apache',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n \"Intended Audience :: Education\",\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n keywords=['OCR', 'deep learning', 'computer vision', 
'tensorflow', 'pytorch', 'text detection', 'text recognition'],\n\n # Package info\n packages=find_packages(exclude=('test',)),\n zip_safe=True,\n python_requires='>=3.6.0',\n include_package_data=True,\n install_requires=install_requires,\n extras_require=extras,\n package_data={'': ['LICENSE']}\n)\n", "path": "setup.py"}]} | 2,497 | 127 |
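The fix recorded above pins `weasyprint>=52.2,<53.0`: the 53.x releases pull in `pydyf`, which evaluates `sys.stdout.buffer` in a default argument at import time and therefore crashes under IPython's `OutStream` (see the traceback in the issue). A minimal sketch of what that version specifier admits, using the third-party `packaging` library (illustrative only, not part of the doctr codebase):

```python
from packaging.specifiers import SpecifierSet

# The pinned requirement taken from the recorded fix.
spec = SpecifierSet(">=52.2,<53.0")

for candidate in ("52.2", "52.5", "53.0", "53.1"):
    status = "allowed" if candidate in spec else "excluded"
    print(f"weasyprint {candidate}: {status}")
# 52.2 and 52.5 are allowed; 53.0 and 53.1 are excluded, so the
# pydyf-based 53.x line can no longer be resolved into the environment.
```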
gh_patches_debug_37126 | rasdani/github-patches | git_diff | open-mmlab__mmdetection3d-69 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
iou3d failed when inference with gpu:1
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**
Training on a single GPU with the default device (gpu:0) works fine.
After switching to gpu:1, inference reports `an illegal memory access was encountered mmdet3d/ops/iou3d/src/iou3d.cpp 121`, while training still works.
**Reproduction**
1. What command or script did you run?
```
python tools/train.py CONFIG_PATH --gpu-ids 1
```
2. Did you make any modifications on the code or config? Did you understand what you have modified?
3. What dataset did you use?
- kitti
**Environment**
1. Please run `python mmdet3d/utils/collect_env.py` to collect the necessary environment information and paste it here.
2. You may add additional context that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
If applicable, paste the error traceback here.
```
A placeholder for the traceback.
```
**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet3d/ops/iou3d/iou3d_utils.py`
Content:
```
1 import torch
2
3 from . import iou3d_cuda
4
5
6 def boxes_iou_bev(boxes_a, boxes_b):
7 """
8 :param boxes_a: (M, 5)
9 :param boxes_b: (N, 5)
10 :return:
11 ans_iou: (M, N)
12 """
13
14 ans_iou = torch.cuda.FloatTensor(
15 torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()
16
17 iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),
18 ans_iou)
19
20 return ans_iou
21
22
23 def nms_gpu(boxes, scores, thresh):
24 """
25 :param boxes: (N, 5) [x1, y1, x2, y2, ry]
26 :param scores: (N)
27 :param thresh:
28 :return:
29 """
30 # areas = (x2 - x1) * (y2 - y1)
31 order = scores.sort(0, descending=True)[1]
32
33 boxes = boxes[order].contiguous()
34
35 keep = torch.LongTensor(boxes.size(0))
36 num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)
37 return order[keep[:num_out].cuda()].contiguous()
38
39
40 def nms_normal_gpu(boxes, scores, thresh):
41 """
42 :param boxes: (N, 5) [x1, y1, x2, y2, ry]
43 :param scores: (N)
44 :param thresh:
45 :return:
46 """
47 # areas = (x2 - x1) * (y2 - y1)
48 order = scores.sort(0, descending=True)[1]
49
50 boxes = boxes[order].contiguous()
51
52 keep = torch.LongTensor(boxes.size(0))
53 num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)
54 return order[keep[:num_out].cuda()].contiguous()
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmdet3d/ops/iou3d/iou3d_utils.py b/mmdet3d/ops/iou3d/iou3d_utils.py
--- a/mmdet3d/ops/iou3d/iou3d_utils.py
+++ b/mmdet3d/ops/iou3d/iou3d_utils.py
@@ -4,15 +4,17 @@
def boxes_iou_bev(boxes_a, boxes_b):
- """
- :param boxes_a: (M, 5)
- :param boxes_b: (N, 5)
- :return:
- ans_iou: (M, N)
- """
+ """Calculate boxes IoU in the bird view.
- ans_iou = torch.cuda.FloatTensor(
- torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()
+ Args:
+ boxes_a (torch.Tensor): Input boxes a with shape (M, 5).
+ boxes_b (torch.Tensor): Input boxes b with shape (N, 5).
+
+ Returns:
+ ans_iou (torch.Tensor): IoU result with shape (M, N).
+ """
+ ans_iou = boxes_a.new_zeros(
+ torch.Size((boxes_a.shape[0], boxes_b.shape[0])))
iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),
ans_iou)
@@ -21,34 +23,41 @@
def nms_gpu(boxes, scores, thresh):
+ """Non maximum suppression on GPU.
+
+ Args:
+ boxes (torch.Tensor): Input boxes with shape (N, 5).
+ scores (torch.Tensor): Scores of predicted boxes with shape (N).
+ thresh (torch.Tensor): Threshold of non maximum suppression.
+
+ Returns:
+ torch.Tensor: Remaining indices with scores in descending order.
"""
- :param boxes: (N, 5) [x1, y1, x2, y2, ry]
- :param scores: (N)
- :param thresh:
- :return:
- """
- # areas = (x2 - x1) * (y2 - y1)
order = scores.sort(0, descending=True)[1]
boxes = boxes[order].contiguous()
- keep = torch.LongTensor(boxes.size(0))
- num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)
- return order[keep[:num_out].cuda()].contiguous()
+ keep = boxes.new_zeros(boxes.size(0))
+ num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh, boxes.device.index)
+ return order[keep[:num_out].cuda(boxes.device)].contiguous()
def nms_normal_gpu(boxes, scores, thresh):
+ """Normal non maximum suppression on GPU.
+
+ Args:
+ boxes (torch.Tensor): Input boxes with shape (N, 5).
+ scores (torch.Tensor): Scores of predicted boxes with shape (N).
+ thresh (torch.Tensor): Threshold of non maximum suppression.
+
+ Returns:
+ torch.Tensor: Remaining indices with scores in descending order.
"""
- :param boxes: (N, 5) [x1, y1, x2, y2, ry]
- :param scores: (N)
- :param thresh:
- :return:
- """
- # areas = (x2 - x1) * (y2 - y1)
order = scores.sort(0, descending=True)[1]
boxes = boxes[order].contiguous()
- keep = torch.LongTensor(boxes.size(0))
- num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)
- return order[keep[:num_out].cuda()].contiguous()
+ keep = boxes.new_zeros(boxes.size(0))
+ num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh,
+ boxes.device.index)
+ return order[keep[:num_out].cuda(boxes.device)].contiguous()
| {"golden_diff": "diff --git a/mmdet3d/ops/iou3d/iou3d_utils.py b/mmdet3d/ops/iou3d/iou3d_utils.py\n--- a/mmdet3d/ops/iou3d/iou3d_utils.py\n+++ b/mmdet3d/ops/iou3d/iou3d_utils.py\n@@ -4,15 +4,17 @@\n \n \n def boxes_iou_bev(boxes_a, boxes_b):\n- \"\"\"\n- :param boxes_a: (M, 5)\n- :param boxes_b: (N, 5)\n- :return:\n- ans_iou: (M, N)\n- \"\"\"\n+ \"\"\"Calculate boxes IoU in the bird view.\n \n- ans_iou = torch.cuda.FloatTensor(\n- torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()\n+ Args:\n+ boxes_a (torch.Tensor): Input boxes a with shape (M, 5).\n+ boxes_b (torch.Tensor): Input boxes b with shape (N, 5).\n+\n+ Returns:\n+ ans_iou (torch.Tensor): IoU result with shape (M, N).\n+ \"\"\"\n+ ans_iou = boxes_a.new_zeros(\n+ torch.Size((boxes_a.shape[0], boxes_b.shape[0])))\n \n iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),\n ans_iou)\n@@ -21,34 +23,41 @@\n \n \n def nms_gpu(boxes, scores, thresh):\n+ \"\"\"Non maximum suppression on GPU.\n+\n+ Args:\n+ boxes (torch.Tensor): Input boxes with shape (N, 5).\n+ scores (torch.Tensor): Scores of predicted boxes with shape (N).\n+ thresh (torch.Tensor): Threshold of non maximum suppression.\n+\n+ Returns:\n+ torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n- :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n- :param scores: (N)\n- :param thresh:\n- :return:\n- \"\"\"\n- # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n \n boxes = boxes[order].contiguous()\n \n- keep = torch.LongTensor(boxes.size(0))\n- num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)\n- return order[keep[:num_out].cuda()].contiguous()\n+ keep = boxes.new_zeros(boxes.size(0))\n+ num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh, boxes.device.index)\n+ return order[keep[:num_out].cuda(boxes.device)].contiguous()\n \n \n def nms_normal_gpu(boxes, scores, thresh):\n+ \"\"\"Normal non maximum suppression on GPU.\n+\n+ Args:\n+ boxes (torch.Tensor): Input boxes with shape (N, 5).\n+ scores (torch.Tensor): Scores of predicted boxes with shape (N).\n+ thresh (torch.Tensor): Threshold of non maximum suppression.\n+\n+ Returns:\n+ torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n- :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n- :param scores: (N)\n- :param thresh:\n- :return:\n- \"\"\"\n- # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n \n boxes = boxes[order].contiguous()\n \n- keep = torch.LongTensor(boxes.size(0))\n- num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)\n- return order[keep[:num_out].cuda()].contiguous()\n+ keep = boxes.new_zeros(boxes.size(0))\n+ num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh,\n+ boxes.device.index)\n+ return order[keep[:num_out].cuda(boxes.device)].contiguous()\n", "issue": "iou3d failed when inference with gpu:1\nThanks for your error report and we appreciate it a lot.\r\n\r\n**Checklist**\r\n1. I have searched related issues but cannot get the expected help.\r\n2. The bug has not been fixed in the latest version.\r\n\r\n**Describe the bug**\r\nTraining on single GPU, when using default gpu (gpu:0) , everything is ok. \r\nSwitch to gpu:1, report `an illegal memory access was encountered mmdet3d/ops/iou3d/src/iou3d.cpp 121` during inference, however training is ok.\r\n\r\n**Reproduction**\r\n1. What command or script did you run?\r\n```\r\npython tools/train.py CONFIG_PATH --gpu-ids 1\r\n```\r\n2. Did you make any modifications on the code or config? Did you understand what you have modified?\r\n3. 
What dataset did you use?\r\n- kitti\r\n\r\n**Environment**\r\n\r\n1. Please run `python mmdet3d/utils/collect_env.py` to collect necessary environment infomation and paste it here.\r\n2. You may add addition that may be helpful for locating the problem, such as\r\n - How you installed PyTorch [e.g., pip, conda, source]\r\n - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)\r\n\r\n**Error traceback**\r\nIf applicable, paste the error trackback here.\r\n```\r\nA placeholder for trackback.\r\n```\r\n\r\n**Bug fix**\r\nIf you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!\r\n\n", "before_files": [{"content": "import torch\n\nfrom . import iou3d_cuda\n\n\ndef boxes_iou_bev(boxes_a, boxes_b):\n \"\"\"\n :param boxes_a: (M, 5)\n :param boxes_b: (N, 5)\n :return:\n ans_iou: (M, N)\n \"\"\"\n\n ans_iou = torch.cuda.FloatTensor(\n torch.Size((boxes_a.shape[0], boxes_b.shape[0]))).zero_()\n\n iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),\n ans_iou)\n\n return ans_iou\n\n\ndef nms_gpu(boxes, scores, thresh):\n \"\"\"\n :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n :param scores: (N)\n :param thresh:\n :return:\n \"\"\"\n # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = torch.LongTensor(boxes.size(0))\n num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh)\n return order[keep[:num_out].cuda()].contiguous()\n\n\ndef nms_normal_gpu(boxes, scores, thresh):\n \"\"\"\n :param boxes: (N, 5) [x1, y1, x2, y2, ry]\n :param scores: (N)\n :param thresh:\n :return:\n \"\"\"\n # areas = (x2 - x1) * (y2 - y1)\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = torch.LongTensor(boxes.size(0))\n num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh)\n return order[keep[:num_out].cuda()].contiguous()\n", "path": "mmdet3d/ops/iou3d/iou3d_utils.py"}], "after_files": [{"content": "import torch\n\nfrom . 
import iou3d_cuda\n\n\ndef boxes_iou_bev(boxes_a, boxes_b):\n \"\"\"Calculate boxes IoU in the bird view.\n\n Args:\n boxes_a (torch.Tensor): Input boxes a with shape (M, 5).\n boxes_b (torch.Tensor): Input boxes b with shape (N, 5).\n\n Returns:\n ans_iou (torch.Tensor): IoU result with shape (M, N).\n \"\"\"\n ans_iou = boxes_a.new_zeros(\n torch.Size((boxes_a.shape[0], boxes_b.shape[0])))\n\n iou3d_cuda.boxes_iou_bev_gpu(boxes_a.contiguous(), boxes_b.contiguous(),\n ans_iou)\n\n return ans_iou\n\n\ndef nms_gpu(boxes, scores, thresh):\n \"\"\"Non maximum suppression on GPU.\n\n Args:\n boxes (torch.Tensor): Input boxes with shape (N, 5).\n scores (torch.Tensor): Scores of predicted boxes with shape (N).\n thresh (torch.Tensor): Threshold of non maximum suppression.\n\n Returns:\n torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = boxes.new_zeros(boxes.size(0))\n num_out = iou3d_cuda.nms_gpu(boxes, keep, thresh, boxes.device.index)\n return order[keep[:num_out].cuda(boxes.device)].contiguous()\n\n\ndef nms_normal_gpu(boxes, scores, thresh):\n \"\"\"Normal non maximum suppression on GPU.\n\n Args:\n boxes (torch.Tensor): Input boxes with shape (N, 5).\n scores (torch.Tensor): Scores of predicted boxes with shape (N).\n thresh (torch.Tensor): Threshold of non maximum suppression.\n\n Returns:\n torch.Tensor: Remaining indices with scores in descending order.\n \"\"\"\n order = scores.sort(0, descending=True)[1]\n\n boxes = boxes[order].contiguous()\n\n keep = boxes.new_zeros(boxes.size(0))\n num_out = iou3d_cuda.nms_normal_gpu(boxes, keep, thresh,\n boxes.device.index)\n return order[keep[:num_out].cuda(boxes.device)].contiguous()\n", "path": "mmdet3d/ops/iou3d/iou3d_utils.py"}]} | 1,162 | 924 |
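The fix recorded above replaces `torch.cuda.FloatTensor(...)`, which always allocates on the current CUDA device (`cuda:0` by default), with `boxes_a.new_zeros(...)`, and additionally passes `boxes.device.index` to the CUDA NMS entry points. A minimal, CPU-runnable sketch of the device-inheritance idea (the tensors and shapes below are made up for illustration):

```python
import torch

def iou_output_buffer(boxes_a, boxes_b):
    # new_zeros inherits dtype *and* device from boxes_a, so when the
    # inputs live on cuda:1 the output buffer does too, instead of
    # silently landing on cuda:0 like torch.cuda.FloatTensor would.
    return boxes_a.new_zeros(torch.Size((boxes_a.shape[0], boxes_b.shape[0])))

a = torch.rand(3, 5)   # on CPU here; on cuda:1 in the bug report
b = torch.rand(4, 5)
out = iou_output_buffer(a, b)
print(out.shape, out.device)  # torch.Size([3, 4]) cpu
```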
gh_patches_debug_5988 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4916 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
#6963 Too many codes in 1 package
URL: https://meinberlin-dev.liqd.net/dashboard/modules/burgerinnenhaushalt-3-phasen-21/download-codes/
user: admin, initiator
expected behaviour: Each code-package should contain a max. of 1.000.000 codes. ~~The wording of the helptext should have also the right number of 1.000.000 codes per package as each package should contain a maximum of 1.000.000 codes per excel-file.~~
behaviour: ~~the number in the wording of the helptext is "10.000.000" and~~ the packages can contain more than 1.000.000 codes.
important screensize: -
device & browser: mac ff
Comment/Question: I tried generating two million codes and they were all put into a single code-package. I also couldn't download that package, probably because it was too big.
Linked: https://github.com/liqd/a4-meinberlin/issues/4907
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/votes/tasks.py`
Content:
```
1 from background_task import background
2
3 from adhocracy4.modules.models import Module
4 from meinberlin.apps.votes.models import VotingToken
5 from meinberlin.apps.votes.models import get_token_12
6
7 # Number of tokens to insert into database per bulk_create
8 BATCH_SIZE = 1000000
9 # Max number of tokens in one download / package
10 PACKAGE_SIZE = 10000000
11
12
13 def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):
14 module = Module.objects.get(pk=module_id)
15 package_number = VotingToken.next_package_number(module)
16 module_name = module.name
17 project_id = module.project.id
18 project_name = module.project.name
19
20 number_to_generate = number_of_tokens
21 package_number_limit = 0
22 if number_of_tokens > PACKAGE_SIZE:
23 package_number_limit = number_of_tokens - PACKAGE_SIZE
24 while number_to_generate > 0:
25 if number_to_generate >= BATCH_SIZE:
26 generate_voting_tokens_batch(
27 module_id,
28 BATCH_SIZE,
29 package_number,
30 number_of_tokens,
31 module_name,
32 project_id,
33 project_name,
34 existing_tokens,
35 )
36 number_to_generate = number_to_generate - BATCH_SIZE
37 else:
38 generate_voting_tokens_batch(
39 module_id,
40 number_to_generate,
41 package_number,
42 number_of_tokens,
43 module_name,
44 project_id,
45 project_name,
46 existing_tokens,
47 )
48 number_to_generate = 0
49 if package_number_limit >= number_to_generate:
50 package_number += 1
51 package_number_limit - PACKAGE_SIZE
52
53
54 @background(schedule=1)
55 def generate_voting_tokens_batch(
56 module_id,
57 batch_size,
58 package_number,
59 number_of_tokens,
60 module_name,
61 project_id,
62 project_name,
63 existing_tokens,
64 ):
65 module = Module.objects.get(pk=module_id)
66 VotingToken.objects.bulk_create(
67 [get_token_and_hash(module, package_number) for i in range(batch_size)]
68 )
69
70
71 def get_token_and_hash(module, package_number):
72 token = get_token_12()
73 token_hash = VotingToken.hash_token(token, module)
74 return VotingToken(
75 token=token, token_hash=token_hash, module=module, package_number=package_number
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/votes/tasks.py b/meinberlin/apps/votes/tasks.py
--- a/meinberlin/apps/votes/tasks.py
+++ b/meinberlin/apps/votes/tasks.py
@@ -5,9 +5,9 @@
from meinberlin.apps.votes.models import get_token_12
# Number of tokens to insert into database per bulk_create
-BATCH_SIZE = 1000000
+BATCH_SIZE = 100000
# Max number of tokens in one download / package
-PACKAGE_SIZE = 10000000
+PACKAGE_SIZE = 1000000
def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):
| {"golden_diff": "diff --git a/meinberlin/apps/votes/tasks.py b/meinberlin/apps/votes/tasks.py\n--- a/meinberlin/apps/votes/tasks.py\n+++ b/meinberlin/apps/votes/tasks.py\n@@ -5,9 +5,9 @@\n from meinberlin.apps.votes.models import get_token_12\n \n # Number of tokens to insert into database per bulk_create\n-BATCH_SIZE = 1000000\n+BATCH_SIZE = 100000\n # Max number of tokens in one download / package\n-PACKAGE_SIZE = 10000000\n+PACKAGE_SIZE = 1000000\n \n \n def generate_voting_tokens(module_id, number_of_tokens, existing_tokens):\n", "issue": "#6963 Too many codes in 1 package\nURL: https://meinberlin-dev.liqd.net/dashboard/modules/burgerinnenhaushalt-3-phasen-21/download-codes/\r\nuser: admin, initiator\r\nexpected behaviour: Each code-package should contain a max. of 1.000.000 codes. ~~The wording of the helptext should have also the right number of 1.000.000 codes per package as each package should contain a maximum of 1.000.000 codes per excel-file.~~\r\nbehaviour: ~~the number in the wording of the helptext is \"10.000.000\" and~~ the packages can contain more than 1.000.000 codes.\r\nimportant screensize: -\r\ndevice & browser: mac ff\r\nComment/Question: I tried it with generating two mill codes and the codes were put in only one code-package. I also couldn't download the package probably because it was too big.\r\n\r\nLinked: https://github.com/liqd/a4-meinberlin/issues/4907\r\n\n", "before_files": [{"content": "from background_task import background\n\nfrom adhocracy4.modules.models import Module\nfrom meinberlin.apps.votes.models import VotingToken\nfrom meinberlin.apps.votes.models import get_token_12\n\n# Number of tokens to insert into database per bulk_create\nBATCH_SIZE = 1000000\n# Max number of tokens in one download / package\nPACKAGE_SIZE = 10000000\n\n\ndef generate_voting_tokens(module_id, number_of_tokens, existing_tokens):\n module = Module.objects.get(pk=module_id)\n package_number = VotingToken.next_package_number(module)\n module_name = module.name\n project_id = module.project.id\n project_name = module.project.name\n\n number_to_generate = number_of_tokens\n package_number_limit = 0\n if number_of_tokens > PACKAGE_SIZE:\n package_number_limit = number_of_tokens - PACKAGE_SIZE\n while number_to_generate > 0:\n if number_to_generate >= BATCH_SIZE:\n generate_voting_tokens_batch(\n module_id,\n BATCH_SIZE,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = number_to_generate - BATCH_SIZE\n else:\n generate_voting_tokens_batch(\n module_id,\n number_to_generate,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = 0\n if package_number_limit >= number_to_generate:\n package_number += 1\n package_number_limit - PACKAGE_SIZE\n\n\n@background(schedule=1)\ndef generate_voting_tokens_batch(\n module_id,\n batch_size,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n):\n module = Module.objects.get(pk=module_id)\n VotingToken.objects.bulk_create(\n [get_token_and_hash(module, package_number) for i in range(batch_size)]\n )\n\n\ndef get_token_and_hash(module, package_number):\n token = get_token_12()\n token_hash = VotingToken.hash_token(token, module)\n return VotingToken(\n token=token, token_hash=token_hash, module=module, package_number=package_number\n )\n", "path": "meinberlin/apps/votes/tasks.py"}], "after_files": [{"content": "from background_task import 
background\n\nfrom adhocracy4.modules.models import Module\nfrom meinberlin.apps.votes.models import VotingToken\nfrom meinberlin.apps.votes.models import get_token_12\n\n# Number of tokens to insert into database per bulk_create\nBATCH_SIZE = 100000\n# Max number of tokens in one download / package\nPACKAGE_SIZE = 1000000\n\n\ndef generate_voting_tokens(module_id, number_of_tokens, existing_tokens):\n module = Module.objects.get(pk=module_id)\n package_number = VotingToken.next_package_number(module)\n module_name = module.name\n project_id = module.project.id\n project_name = module.project.name\n\n number_to_generate = number_of_tokens\n package_number_limit = 0\n if number_of_tokens > PACKAGE_SIZE:\n package_number_limit = number_of_tokens - PACKAGE_SIZE\n while number_to_generate > 0:\n if number_to_generate >= BATCH_SIZE:\n generate_voting_tokens_batch(\n module_id,\n BATCH_SIZE,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = number_to_generate - BATCH_SIZE\n else:\n generate_voting_tokens_batch(\n module_id,\n number_to_generate,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n )\n number_to_generate = 0\n if package_number_limit >= number_to_generate:\n package_number += 1\n package_number_limit - PACKAGE_SIZE\n\n\n@background(schedule=1)\ndef generate_voting_tokens_batch(\n module_id,\n batch_size,\n package_number,\n number_of_tokens,\n module_name,\n project_id,\n project_name,\n existing_tokens,\n):\n module = Module.objects.get(pk=module_id)\n VotingToken.objects.bulk_create(\n [get_token_and_hash(module, package_number) for i in range(batch_size)]\n )\n\n\ndef get_token_and_hash(module, package_number):\n token = get_token_12()\n token_hash = VotingToken.hash_token(token, module)\n return VotingToken(\n token=token, token_hash=token_hash, module=module, package_number=package_number\n )\n", "path": "meinberlin/apps/votes/tasks.py"}]} | 1,148 | 165 |
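The fix recorded above lowers `PACKAGE_SIZE` to 1,000,000 and `BATCH_SIZE` to 100,000, so a request for two million codes is split across two download packages rather than one oversized file. A small, self-contained sketch of the intended split (plain Python; the helper below is illustrative and not part of the meinberlin code):

```python
PACKAGE_SIZE = 1_000_000  # max codes per downloadable package (per the issue)

def split_into_packages(number_of_tokens, package_size=PACKAGE_SIZE):
    """Return the package sizes a token count should be split into."""
    packages = []
    remaining = number_of_tokens
    while remaining > 0:
        size = min(remaining, package_size)
        packages.append(size)
        remaining -= size
    return packages

print(split_into_packages(2_000_000))  # [1000000, 1000000] -> two packages
print(split_into_packages(1_500_000))  # [1000000, 500000]
```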
gh_patches_debug_13326 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-5879 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Requesting help on a content does not trigger anything
**Bug description**
Clicking a help type in the sidebar does not seem to trigger any help request.
**How to reproduce?**
Steps to reproduce the bug:
1. Go to the site '...'
2. Log in with the account '....'
3. Open the draft of a content (in my case it was the draft of an already published content)
4. Click on "Repreneur" in the sidebar and notice that a check button briefly appears to the left of the text, then disappears without any message whatsoever.
**Expected behaviour**
I would like clicking this button to at least tell me that my request has been taken into account, and I would like to find my content in the list of contents that need help.
**Screenshot**

**Technical information**
- OS: Linux
- Browser: Brave (Chromium)
- zds version/environment: beta
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/tutorialv2/urls/urls_contents.py`
Content:
```
1 from django.urls import path, re_path
2 from django.views.generic.base import RedirectView
3
4 from zds.tutorialv2.views.contents import (DisplayContent, CreateContent, EditContent, EditContentLicense,
5 DeleteContent, CreateContainer, DisplayContainer, EditContainer,
6 CreateExtract, EditExtract,
7 DeleteContainerOrExtract, ManageBetaContent, DisplayHistory, DisplayDiff,
8 ActivateJSFiddleInContent, MoveChild,
9 DownloadContent, UpdateContentWithArchive, CreateContentFromArchive,
10 ContentsWithHelps, AddAuthorToContent,
11 RemoveAuthorFromContent, WarnTypo, DisplayBetaContent, DisplayBetaContainer,
12 ContentOfAuthor, RedirectOldContentOfAuthor, AddContributorToContent,
13 RemoveContributorFromContent, ContentOfContributors,
14 AddSuggestion, RemoveSuggestion, ChangeHelp, EditContentTags)
15
16 from zds.tutorialv2.views.published import (SendNoteFormView, UpdateNoteView,
17 HideReaction, ShowReaction, SendNoteAlert, SolveNoteAlert, TagsListView,
18 FollowContentReaction, FollowNewContent, SendContentAlert,
19 SolveContentAlert,
20 RequestFeaturedContent, ContentStatisticsView)
21
22 urlpatterns = [
23 # Flux
24 re_path(r'^flux/rss/$', RedirectView.as_view(pattern_name='publication:feed-rss',
25 permanent=True), name='feed-rss'),
26 re_path(r'^flux/atom/$', RedirectView.as_view(pattern_name='publication:feed-atom',
27 permanent=True), name='feed-atom'),
28
29 path('voir/<str:username>/',
30 ContentOfAuthor.as_view(
31 type='ALL', context_object_name='contents'),
32 name='find-all'),
33 path('contributions/<str:username>/',
34 ContentOfContributors.as_view(
35 type='ALL', context_object_name='contribution_contents'),
36 name='find-contribution-all'),
37
38 path('tutoriels/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='TUTORIAL')),
39 path('articles/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='ARTICLE')),
40 path('tribunes/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='OPINION')),
41
42 re_path(r'^aides/$', ContentsWithHelps.as_view(), name='helps'),
43 re_path(r'^aides/(?P<pk>\d+)/change$', ChangeHelp.as_view(), name='helps-change'),
44 re_path(r'^(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',
45 DisplayContainer.as_view(public_is_prioritary=False),
46 name='view-container'),
47 re_path(r'^(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
48 DisplayContainer.as_view(public_is_prioritary=False),
49 name='view-container'),
50
51 re_path(r'^(?P<pk>\d+)/(?P<slug>.+)/$', DisplayContent.as_view(public_is_prioritary=False),
52 name='view'),
53
54 re_path(r'^telecharger/(?P<pk>\d+)/(?P<slug>.+)/$', DownloadContent.as_view(),
55 name='download-zip'),
56
57 # beta:
58 re_path(r'^beta/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',
59 DisplayBetaContainer.as_view(public_is_prioritary=False),
60 name='beta-view-container'),
61 re_path(r'^beta/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
62 DisplayBetaContainer.as_view(public_is_prioritary=False),
63 name='beta-view-container'),
64
65 re_path(r'^beta/(?P<pk>\d+)/(?P<slug>.+)/$',
66 DisplayBetaContent.as_view(), name='beta-view'),
67
68 # reactions:
69 re_path(r'^reactions/ajouter/$',
70 SendNoteFormView.as_view(redirection_is_needed=False), name='add-reaction'),
71 re_path(r'^reactions/editer/$',
72 UpdateNoteView.as_view(redirection_is_needed=False), name='update-reaction'),
73 re_path(r'^reactions/cacher/(?P<pk>\d+)/$',
74 HideReaction.as_view(), name='hide-reaction'),
75 re_path(r'^reactions/afficher/(?P<pk>\d+)/$',
76 ShowReaction.as_view(), name='show-reaction'),
77 re_path(r'^reactions/alerter/(?P<pk>\d+)/$',
78 SendNoteAlert.as_view(), name='alert-reaction'),
79 re_path(r'^reactions/resoudre/$',
80 SolveNoteAlert.as_view(), name='resolve-reaction'),
81
82 # follow:
83 re_path(r'^suivre/(?P<pk>\d+)/reactions/$',
84 FollowContentReaction.as_view(), name='follow-reactions'),
85 re_path(r'^suivre/membres/(?P<pk>\d+)/$',
86 FollowNewContent.as_view(), name='follow'),
87
88 # request
89 re_path(r'^requete/(?P<pk>\d+)/$', RequestFeaturedContent.as_view(), name='request-featured'),
90
91 # content alerts:
92 re_path(r'^alerter/(?P<pk>\d+)/$',
93 SendContentAlert.as_view(), name='alert-content'),
94 re_path(r'^resoudre/(?P<pk>\d+)/$',
95 SolveContentAlert.as_view(), name='resolve-content'),
96
97 # typo:
98 re_path(r'^reactions/typo/$', WarnTypo.as_view(), name='warn-typo'),
99
100 # create:
101 re_path(r'^nouveau-tutoriel/$',
102 CreateContent.as_view(created_content_type='TUTORIAL'), name='create-tutorial'),
103 re_path(r'^nouvel-article/$',
104 CreateContent.as_view(created_content_type='ARTICLE'), name='create-article'),
105 re_path(r'^nouveau-billet/$',
106 CreateContent.as_view(created_content_type='OPINION'), name='create-opinion'),
107 re_path(r'^nouveau-conteneur/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
108 CreateContainer.as_view(),
109 name='create-container'),
110 re_path(r'^nouveau-conteneur/(?P<pk>\d+)/(?P<slug>.+)/$',
111 CreateContainer.as_view(),
112 name='create-container'),
113
114 re_path(r'^nouvelle-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',
115 CreateExtract.as_view(),
116 name='create-extract'),
117 re_path(r'^nouvelle-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
118 CreateExtract.as_view(),
119 name='create-extract'),
120 re_path(r'^nouvelle-section/(?P<pk>\d+)/(?P<slug>.+)/$',
121 CreateExtract.as_view(),
122 name='create-extract'),
123
124 # edit:
125 re_path(r'^editer-conteneur/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'
126 r'(?P<container_slug>.+)/$',
127 EditContainer.as_view(),
128 name='edit-container'),
129 re_path(r'^editer-conteneur/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
130 EditContainer.as_view(),
131 name='edit-container'),
132
133 re_path(r'^editer-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'
134 r'(?P<container_slug>.+)/(?P<extract_slug>.+)/$',
135 EditExtract.as_view(),
136 name='edit-extract'),
137 re_path(r'^editer-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<extract_slug>.+)/$',
138 EditExtract.as_view(),
139 name='edit-extract'),
140 re_path(r'^editer-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<extract_slug>.+)/$',
141 EditExtract.as_view(),
142 name='edit-extract'),
143
144 re_path(r'^editer/(?P<pk>\d+)/(?P<slug>.+)/$',
145 EditContent.as_view(), name='edit'),
146 re_path(r'^deplacer/$', MoveChild.as_view(), name='move-element'),
147
148 re_path(r'^historique/(?P<pk>\d+)/(?P<slug>.+)/$',
149 DisplayHistory.as_view(), name='history'),
150 re_path(r'^comparaison/(?P<pk>\d+)/(?P<slug>.+)/$',
151 DisplayDiff.as_view(), name='diff'),
152 re_path(r'^ajouter-contributeur/(?P<pk>\d+)/$',
153 AddContributorToContent.as_view(), name='add-contributor'),
154 re_path(r'^enlever-contributeur/(?P<pk>\d+)/$',
155 RemoveContributorFromContent.as_view(), name='remove-contributor'),
156 re_path(r'^ajouter-auteur/(?P<pk>\d+)/$',
157 AddAuthorToContent.as_view(), name='add-author'),
158 re_path(r'^enlever-auteur/(?P<pk>\d+)/$',
159 RemoveAuthorFromContent.as_view(), name='remove-author'),
160
161 # Modify the license
162 re_path(r'^modifier-licence/(?P<pk>\d+)/$', EditContentLicense.as_view(), name='edit-license'),
163
164 # Modify the tags
165 re_path(r'^modifier-tags/(?P<pk>\d+)/$', EditContentTags.as_view(), name='edit-tags'),
166
167 # beta:
168 re_path(r'^activer-beta/(?P<pk>\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='set'),
169 name='set-beta'),
170 re_path(r'^desactiver-beta/(?P<pk>\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='inactive'),
171 name='inactive-beta'),
172 re_path(r'^stats/(?P<pk>\d+)/(?P<slug>.+)/$', ContentStatisticsView.as_view(),
173 name='stats-content'),
174 re_path(r'^ajouter-suggestion/(?P<pk>\d+)/$', AddSuggestion.as_view(),
175 name='add-suggestion'),
176 re_path(r'^enlever-suggestion/(?P<pk>\d+)/$',
177 RemoveSuggestion.as_view(), name='remove-suggestion'),
178
179
180 # jsfiddle support:
181 re_path(r'activer-js/', ActivateJSFiddleInContent.as_view(),
182 name='activate-jsfiddle'),
183
184 # delete:
185 re_path(r'^supprimer/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/'
186 r'(?P<object_slug>.+)/$',
187 DeleteContainerOrExtract.as_view(),
188 name='delete'),
189 re_path(r'^supprimer/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<object_slug>.+)/$',
190 DeleteContainerOrExtract.as_view(),
191 name='delete'),
192 re_path(r'^supprimer/(?P<pk>\d+)/(?P<slug>.+)/(?P<object_slug>.+)/$',
193 DeleteContainerOrExtract.as_view(),
194 name='delete'),
195
196 re_path(r'^supprimer/(?P<pk>\d+)/(?P<slug>.+)/$',
197 DeleteContent.as_view(), name='delete'),
198
199 # markdown import
200 re_path(r'^importer/archive/nouveau/$',
201 CreateContentFromArchive.as_view(), name='import-new'),
202 re_path(r'^importer/(?P<pk>\d+)/(?P<slug>.+)/$',
203 UpdateContentWithArchive.as_view(), name='import'),
204
205 # tags
206 re_path(r'^tags/$', TagsListView.as_view(), name='tags'),
207
208 re_path(r'^$', RedirectView.as_view(
209 pattern_name='publication:list', permanent=True), name='list'),
210 ]
211
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zds/tutorialv2/urls/urls_contents.py b/zds/tutorialv2/urls/urls_contents.py
--- a/zds/tutorialv2/urls/urls_contents.py
+++ b/zds/tutorialv2/urls/urls_contents.py
@@ -40,7 +40,7 @@
path('tribunes/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='OPINION')),
re_path(r'^aides/$', ContentsWithHelps.as_view(), name='helps'),
- re_path(r'^aides/(?P<pk>\d+)/change$', ChangeHelp.as_view(), name='helps-change'),
+ re_path(r'^aides/(?P<pk>\d+)/change/$', ChangeHelp.as_view(), name='helps-change'),
re_path(r'^(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',
DisplayContainer.as_view(public_is_prioritary=False),
name='view-container'),
| {"golden_diff": "diff --git a/zds/tutorialv2/urls/urls_contents.py b/zds/tutorialv2/urls/urls_contents.py\n--- a/zds/tutorialv2/urls/urls_contents.py\n+++ b/zds/tutorialv2/urls/urls_contents.py\n@@ -40,7 +40,7 @@\n path('tribunes/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='OPINION')),\n \n re_path(r'^aides/$', ContentsWithHelps.as_view(), name='helps'),\n- re_path(r'^aides/(?P<pk>\\d+)/change$', ChangeHelp.as_view(), name='helps-change'),\n+ re_path(r'^aides/(?P<pk>\\d+)/change/$', ChangeHelp.as_view(), name='helps-change'),\n re_path(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n", "issue": "La demande d'aide sur un contenu ne d\u00e9clenche rien\n**Description du bug**\r\n\r\nCliquez sur un type d'aide dans la sidebar ne semble pas d\u00e9clencher de demande d'aide.\r\n\r\n**Comment reproduire ?**\r\n\r\nLa liste des \u00e9tapes qui permet de reproduire le bug :\r\n\r\n1. Allez sur le site '...'\r\n2. Connectez vous avec le compte '....'\r\n3. Allez le brouillon d'un contenu (dans mon cas c'\u00e9tait le brouillon d'un contenu d\u00e9j\u00e0 publi\u00e9)\r\n4. Cliquez sur \"Repreneur\" dans la sidebar et contatez qu'un bref bouton check s'affiche \u00e0 gauche du texte, puis disparait sans aucune forme de message.\r\n\r\n**Comportement attendu**\r\n\r\nJ'aimerai que le clic sur ce bouton m'informe \u00e0 minima que ma demande a \u00e9t\u00e9 prise en compte et que je retrouve mon contenu dans la liste des contenus qui ont besoin d'une aide.\r\n\r\n**Capture d'\u00e9cran**\r\n\r\n\r\n\r\n\r\n**Informations techniques**\r\n\r\n- OS: Linux\r\n- Navigateur: Brave (Chromium)\r\n- Version/environnement de zds: beta\r\n\nLa demande d'aide sur un contenu ne d\u00e9clenche rien\n**Description du bug**\r\n\r\nCliquez sur un type d'aide dans la sidebar ne semble pas d\u00e9clencher de demande d'aide.\r\n\r\n**Comment reproduire ?**\r\n\r\nLa liste des \u00e9tapes qui permet de reproduire le bug :\r\n\r\n1. Allez sur le site '...'\r\n2. Connectez vous avec le compte '....'\r\n3. Allez le brouillon d'un contenu (dans mon cas c'\u00e9tait le brouillon d'un contenu d\u00e9j\u00e0 publi\u00e9)\r\n4. 
Cliquez sur \"Repreneur\" dans la sidebar et contatez qu'un bref bouton check s'affiche \u00e0 gauche du texte, puis disparait sans aucune forme de message.\r\n\r\n**Comportement attendu**\r\n\r\nJ'aimerai que le clic sur ce bouton m'informe \u00e0 minima que ma demande a \u00e9t\u00e9 prise en compte et que je retrouve mon contenu dans la liste des contenus qui ont besoin d'une aide.\r\n\r\n**Capture d'\u00e9cran**\r\n\r\n\r\n\r\n\r\n**Informations techniques**\r\n\r\n- OS: Linux\r\n- Navigateur: Brave (Chromium)\r\n- Version/environnement de zds: beta\r\n\n", "before_files": [{"content": "from django.urls import path, re_path\nfrom django.views.generic.base import RedirectView\n\nfrom zds.tutorialv2.views.contents import (DisplayContent, CreateContent, EditContent, EditContentLicense,\n DeleteContent, CreateContainer, DisplayContainer, EditContainer,\n CreateExtract, EditExtract,\n DeleteContainerOrExtract, ManageBetaContent, DisplayHistory, DisplayDiff,\n ActivateJSFiddleInContent, MoveChild,\n DownloadContent, UpdateContentWithArchive, CreateContentFromArchive,\n ContentsWithHelps, AddAuthorToContent,\n RemoveAuthorFromContent, WarnTypo, DisplayBetaContent, DisplayBetaContainer,\n ContentOfAuthor, RedirectOldContentOfAuthor, AddContributorToContent,\n RemoveContributorFromContent, ContentOfContributors,\n AddSuggestion, RemoveSuggestion, ChangeHelp, EditContentTags)\n\nfrom zds.tutorialv2.views.published import (SendNoteFormView, UpdateNoteView,\n HideReaction, ShowReaction, SendNoteAlert, SolveNoteAlert, TagsListView,\n FollowContentReaction, FollowNewContent, SendContentAlert,\n SolveContentAlert,\n RequestFeaturedContent, ContentStatisticsView)\n\nurlpatterns = [\n # Flux\n re_path(r'^flux/rss/$', RedirectView.as_view(pattern_name='publication:feed-rss',\n permanent=True), name='feed-rss'),\n re_path(r'^flux/atom/$', RedirectView.as_view(pattern_name='publication:feed-atom',\n permanent=True), name='feed-atom'),\n\n path('voir/<str:username>/',\n ContentOfAuthor.as_view(\n type='ALL', context_object_name='contents'),\n name='find-all'),\n path('contributions/<str:username>/',\n ContentOfContributors.as_view(\n type='ALL', context_object_name='contribution_contents'),\n name='find-contribution-all'),\n\n path('tutoriels/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='TUTORIAL')),\n path('articles/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='ARTICLE')),\n path('tribunes/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='OPINION')),\n\n re_path(r'^aides/$', ContentsWithHelps.as_view(), name='helps'),\n re_path(r'^aides/(?P<pk>\\d+)/change$', ChangeHelp.as_view(), name='helps-change'),\n re_path(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n re_path(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n\n re_path(r'^(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayContent.as_view(public_is_prioritary=False),\n name='view'),\n\n re_path(r'^telecharger/(?P<pk>\\d+)/(?P<slug>.+)/$', DownloadContent.as_view(),\n name='download-zip'),\n\n # beta:\n re_path(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayBetaContainer.as_view(public_is_prioritary=False),\n name='beta-view-container'),\n re_path(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n DisplayBetaContainer.as_view(public_is_prioritary=False),\n 
name='beta-view-container'),\n\n re_path(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/$',\n DisplayBetaContent.as_view(), name='beta-view'),\n\n # reactions:\n re_path(r'^reactions/ajouter/$',\n SendNoteFormView.as_view(redirection_is_needed=False), name='add-reaction'),\n re_path(r'^reactions/editer/$',\n UpdateNoteView.as_view(redirection_is_needed=False), name='update-reaction'),\n re_path(r'^reactions/cacher/(?P<pk>\\d+)/$',\n HideReaction.as_view(), name='hide-reaction'),\n re_path(r'^reactions/afficher/(?P<pk>\\d+)/$',\n ShowReaction.as_view(), name='show-reaction'),\n re_path(r'^reactions/alerter/(?P<pk>\\d+)/$',\n SendNoteAlert.as_view(), name='alert-reaction'),\n re_path(r'^reactions/resoudre/$',\n SolveNoteAlert.as_view(), name='resolve-reaction'),\n\n # follow:\n re_path(r'^suivre/(?P<pk>\\d+)/reactions/$',\n FollowContentReaction.as_view(), name='follow-reactions'),\n re_path(r'^suivre/membres/(?P<pk>\\d+)/$',\n FollowNewContent.as_view(), name='follow'),\n\n # request\n re_path(r'^requete/(?P<pk>\\d+)/$', RequestFeaturedContent.as_view(), name='request-featured'),\n\n # content alerts:\n re_path(r'^alerter/(?P<pk>\\d+)/$',\n SendContentAlert.as_view(), name='alert-content'),\n re_path(r'^resoudre/(?P<pk>\\d+)/$',\n SolveContentAlert.as_view(), name='resolve-content'),\n\n # typo:\n re_path(r'^reactions/typo/$', WarnTypo.as_view(), name='warn-typo'),\n\n # create:\n re_path(r'^nouveau-tutoriel/$',\n CreateContent.as_view(created_content_type='TUTORIAL'), name='create-tutorial'),\n re_path(r'^nouvel-article/$',\n CreateContent.as_view(created_content_type='ARTICLE'), name='create-article'),\n re_path(r'^nouveau-billet/$',\n CreateContent.as_view(created_content_type='OPINION'), name='create-opinion'),\n re_path(r'^nouveau-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n CreateContainer.as_view(),\n name='create-container'),\n re_path(r'^nouveau-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/$',\n CreateContainer.as_view(),\n name='create-container'),\n\n re_path(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n re_path(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n re_path(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n\n # edit:\n re_path(r'^editer-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'\n r'(?P<container_slug>.+)/$',\n EditContainer.as_view(),\n name='edit-container'),\n re_path(r'^editer-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n EditContainer.as_view(),\n name='edit-container'),\n\n re_path(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'\n r'(?P<container_slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n re_path(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n re_path(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n\n re_path(r'^editer/(?P<pk>\\d+)/(?P<slug>.+)/$',\n EditContent.as_view(), name='edit'),\n re_path(r'^deplacer/$', MoveChild.as_view(), name='move-element'),\n\n re_path(r'^historique/(?P<pk>\\d+)/(?P<slug>.+)/$',\n DisplayHistory.as_view(), name='history'),\n re_path(r'^comparaison/(?P<pk>\\d+)/(?P<slug>.+)/$',\n DisplayDiff.as_view(), name='diff'),\n 
re_path(r'^ajouter-contributeur/(?P<pk>\\d+)/$',\n AddContributorToContent.as_view(), name='add-contributor'),\n re_path(r'^enlever-contributeur/(?P<pk>\\d+)/$',\n RemoveContributorFromContent.as_view(), name='remove-contributor'),\n re_path(r'^ajouter-auteur/(?P<pk>\\d+)/$',\n AddAuthorToContent.as_view(), name='add-author'),\n re_path(r'^enlever-auteur/(?P<pk>\\d+)/$',\n RemoveAuthorFromContent.as_view(), name='remove-author'),\n\n # Modify the license\n re_path(r'^modifier-licence/(?P<pk>\\d+)/$', EditContentLicense.as_view(), name='edit-license'),\n\n # Modify the tags\n re_path(r'^modifier-tags/(?P<pk>\\d+)/$', EditContentTags.as_view(), name='edit-tags'),\n\n # beta:\n re_path(r'^activer-beta/(?P<pk>\\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='set'),\n name='set-beta'),\n re_path(r'^desactiver-beta/(?P<pk>\\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='inactive'),\n name='inactive-beta'),\n re_path(r'^stats/(?P<pk>\\d+)/(?P<slug>.+)/$', ContentStatisticsView.as_view(),\n name='stats-content'),\n re_path(r'^ajouter-suggestion/(?P<pk>\\d+)/$', AddSuggestion.as_view(),\n name='add-suggestion'),\n re_path(r'^enlever-suggestion/(?P<pk>\\d+)/$',\n RemoveSuggestion.as_view(), name='remove-suggestion'),\n\n\n # jsfiddle support:\n re_path(r'activer-js/', ActivateJSFiddleInContent.as_view(),\n name='activate-jsfiddle'),\n\n # delete:\n re_path(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/'\n r'(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n re_path(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n re_path(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n\n re_path(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/$',\n DeleteContent.as_view(), name='delete'),\n\n # markdown import\n re_path(r'^importer/archive/nouveau/$',\n CreateContentFromArchive.as_view(), name='import-new'),\n re_path(r'^importer/(?P<pk>\\d+)/(?P<slug>.+)/$',\n UpdateContentWithArchive.as_view(), name='import'),\n\n # tags\n re_path(r'^tags/$', TagsListView.as_view(), name='tags'),\n\n re_path(r'^$', RedirectView.as_view(\n pattern_name='publication:list', permanent=True), name='list'),\n]\n", "path": "zds/tutorialv2/urls/urls_contents.py"}], "after_files": [{"content": "from django.urls import path, re_path\nfrom django.views.generic.base import RedirectView\n\nfrom zds.tutorialv2.views.contents import (DisplayContent, CreateContent, EditContent, EditContentLicense,\n DeleteContent, CreateContainer, DisplayContainer, EditContainer,\n CreateExtract, EditExtract,\n DeleteContainerOrExtract, ManageBetaContent, DisplayHistory, DisplayDiff,\n ActivateJSFiddleInContent, MoveChild,\n DownloadContent, UpdateContentWithArchive, CreateContentFromArchive,\n ContentsWithHelps, AddAuthorToContent,\n RemoveAuthorFromContent, WarnTypo, DisplayBetaContent, DisplayBetaContainer,\n ContentOfAuthor, RedirectOldContentOfAuthor, AddContributorToContent,\n RemoveContributorFromContent, ContentOfContributors,\n AddSuggestion, RemoveSuggestion, ChangeHelp, EditContentTags)\n\nfrom zds.tutorialv2.views.published import (SendNoteFormView, UpdateNoteView,\n HideReaction, ShowReaction, SendNoteAlert, SolveNoteAlert, TagsListView,\n FollowContentReaction, FollowNewContent, SendContentAlert,\n SolveContentAlert,\n RequestFeaturedContent, ContentStatisticsView)\n\nurlpatterns = [\n # Flux\n 
re_path(r'^flux/rss/$', RedirectView.as_view(pattern_name='publication:feed-rss',\n permanent=True), name='feed-rss'),\n re_path(r'^flux/atom/$', RedirectView.as_view(pattern_name='publication:feed-atom',\n permanent=True), name='feed-atom'),\n\n path('voir/<str:username>/',\n ContentOfAuthor.as_view(\n type='ALL', context_object_name='contents'),\n name='find-all'),\n path('contributions/<str:username>/',\n ContentOfContributors.as_view(\n type='ALL', context_object_name='contribution_contents'),\n name='find-contribution-all'),\n\n path('tutoriels/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='TUTORIAL')),\n path('articles/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='ARTICLE')),\n path('tribunes/<int:pk>/', RedirectOldContentOfAuthor.as_view(type='OPINION')),\n\n re_path(r'^aides/$', ContentsWithHelps.as_view(), name='helps'),\n re_path(r'^aides/(?P<pk>\\d+)/change/$', ChangeHelp.as_view(), name='helps-change'),\n re_path(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n re_path(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n\n re_path(r'^(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayContent.as_view(public_is_prioritary=False),\n name='view'),\n\n re_path(r'^telecharger/(?P<pk>\\d+)/(?P<slug>.+)/$', DownloadContent.as_view(),\n name='download-zip'),\n\n # beta:\n re_path(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayBetaContainer.as_view(public_is_prioritary=False),\n name='beta-view-container'),\n re_path(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n DisplayBetaContainer.as_view(public_is_prioritary=False),\n name='beta-view-container'),\n\n re_path(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/$',\n DisplayBetaContent.as_view(), name='beta-view'),\n\n # reactions:\n re_path(r'^reactions/ajouter/$',\n SendNoteFormView.as_view(redirection_is_needed=False), name='add-reaction'),\n re_path(r'^reactions/editer/$',\n UpdateNoteView.as_view(redirection_is_needed=False), name='update-reaction'),\n re_path(r'^reactions/cacher/(?P<pk>\\d+)/$',\n HideReaction.as_view(), name='hide-reaction'),\n re_path(r'^reactions/afficher/(?P<pk>\\d+)/$',\n ShowReaction.as_view(), name='show-reaction'),\n re_path(r'^reactions/alerter/(?P<pk>\\d+)/$',\n SendNoteAlert.as_view(), name='alert-reaction'),\n re_path(r'^reactions/resoudre/$',\n SolveNoteAlert.as_view(), name='resolve-reaction'),\n\n # follow:\n re_path(r'^suivre/(?P<pk>\\d+)/reactions/$',\n FollowContentReaction.as_view(), name='follow-reactions'),\n re_path(r'^suivre/membres/(?P<pk>\\d+)/$',\n FollowNewContent.as_view(), name='follow'),\n\n # request\n re_path(r'^requete/(?P<pk>\\d+)/$', RequestFeaturedContent.as_view(), name='request-featured'),\n\n # content alerts:\n re_path(r'^alerter/(?P<pk>\\d+)/$',\n SendContentAlert.as_view(), name='alert-content'),\n re_path(r'^resoudre/(?P<pk>\\d+)/$',\n SolveContentAlert.as_view(), name='resolve-content'),\n\n # typo:\n re_path(r'^reactions/typo/$', WarnTypo.as_view(), name='warn-typo'),\n\n # create:\n re_path(r'^nouveau-tutoriel/$',\n CreateContent.as_view(created_content_type='TUTORIAL'), name='create-tutorial'),\n re_path(r'^nouvel-article/$',\n CreateContent.as_view(created_content_type='ARTICLE'), name='create-article'),\n re_path(r'^nouveau-billet/$',\n CreateContent.as_view(created_content_type='OPINION'), 
name='create-opinion'),\n re_path(r'^nouveau-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n CreateContainer.as_view(),\n name='create-container'),\n re_path(r'^nouveau-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/$',\n CreateContainer.as_view(),\n name='create-container'),\n\n re_path(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n re_path(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n re_path(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n\n # edit:\n re_path(r'^editer-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'\n r'(?P<container_slug>.+)/$',\n EditContainer.as_view(),\n name='edit-container'),\n re_path(r'^editer-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n EditContainer.as_view(),\n name='edit-container'),\n\n re_path(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'\n r'(?P<container_slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n re_path(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n re_path(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n\n re_path(r'^editer/(?P<pk>\\d+)/(?P<slug>.+)/$',\n EditContent.as_view(), name='edit'),\n re_path(r'^deplacer/$', MoveChild.as_view(), name='move-element'),\n\n re_path(r'^historique/(?P<pk>\\d+)/(?P<slug>.+)/$',\n DisplayHistory.as_view(), name='history'),\n re_path(r'^comparaison/(?P<pk>\\d+)/(?P<slug>.+)/$',\n DisplayDiff.as_view(), name='diff'),\n re_path(r'^ajouter-contributeur/(?P<pk>\\d+)/$',\n AddContributorToContent.as_view(), name='add-contributor'),\n re_path(r'^enlever-contributeur/(?P<pk>\\d+)/$',\n RemoveContributorFromContent.as_view(), name='remove-contributor'),\n re_path(r'^ajouter-auteur/(?P<pk>\\d+)/$',\n AddAuthorToContent.as_view(), name='add-author'),\n re_path(r'^enlever-auteur/(?P<pk>\\d+)/$',\n RemoveAuthorFromContent.as_view(), name='remove-author'),\n\n # Modify the license\n re_path(r'^modifier-licence/(?P<pk>\\d+)/$', EditContentLicense.as_view(), name='edit-license'),\n\n # Modify the tags\n re_path(r'^modifier-tags/(?P<pk>\\d+)/$', EditContentTags.as_view(), name='edit-tags'),\n\n # beta:\n re_path(r'^activer-beta/(?P<pk>\\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='set'),\n name='set-beta'),\n re_path(r'^desactiver-beta/(?P<pk>\\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='inactive'),\n name='inactive-beta'),\n re_path(r'^stats/(?P<pk>\\d+)/(?P<slug>.+)/$', ContentStatisticsView.as_view(),\n name='stats-content'),\n re_path(r'^ajouter-suggestion/(?P<pk>\\d+)/$', AddSuggestion.as_view(),\n name='add-suggestion'),\n re_path(r'^enlever-suggestion/(?P<pk>\\d+)/$',\n RemoveSuggestion.as_view(), name='remove-suggestion'),\n\n\n # jsfiddle support:\n re_path(r'activer-js/', ActivateJSFiddleInContent.as_view(),\n name='activate-jsfiddle'),\n\n # delete:\n re_path(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/'\n r'(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n re_path(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n 
name='delete'),\n re_path(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n\n re_path(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/$',\n DeleteContent.as_view(), name='delete'),\n\n # markdown import\n re_path(r'^importer/archive/nouveau/$',\n CreateContentFromArchive.as_view(), name='import-new'),\n re_path(r'^importer/(?P<pk>\\d+)/(?P<slug>.+)/$',\n UpdateContentWithArchive.as_view(), name='import'),\n\n # tags\n re_path(r'^tags/$', TagsListView.as_view(), name='tags'),\n\n re_path(r'^$', RedirectView.as_view(\n pattern_name='publication:list', permanent=True), name='list'),\n]\n", "path": "zds/tutorialv2/urls/urls_contents.py"}]} | 4,095 | 223 |
gh_patches_debug_17265 | rasdani/github-patches | git_diff | netbox-community__netbox-2694 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add "White" as a cable color
### Environment
* Python version: 3.6
* NetBox version: 2.5.1
### Proposed Functionality
Add color white to the cable colors.
Optionally add:
* ~~slate~~(Dark Grey works, almost identical color)
* rose
* ~~violet~~ (Fuschia works, almost identical color)
* aqua
### Use Case
These fiber strand colors are missing
### Database Changes
None
### External Dependencies
None
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/utilities/constants.py`
Content:
```
1 COLOR_CHOICES = (
2 ('aa1409', 'Dark red'),
3 ('f44336', 'Red'),
4 ('e91e63', 'Pink'),
5 ('ff66ff', 'Fuschia'),
6 ('9c27b0', 'Purple'),
7 ('673ab7', 'Dark purple'),
8 ('3f51b5', 'Indigo'),
9 ('2196f3', 'Blue'),
10 ('03a9f4', 'Light blue'),
11 ('00bcd4', 'Cyan'),
12 ('009688', 'Teal'),
13 ('2f6a31', 'Dark green'),
14 ('4caf50', 'Green'),
15 ('8bc34a', 'Light green'),
16 ('cddc39', 'Lime'),
17 ('ffeb3b', 'Yellow'),
18 ('ffc107', 'Amber'),
19 ('ff9800', 'Orange'),
20 ('ff5722', 'Dark orange'),
21 ('795548', 'Brown'),
22 ('c0c0c0', 'Light grey'),
23 ('9e9e9e', 'Grey'),
24 ('607d8b', 'Dark grey'),
25 ('111111', 'Black'),
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/utilities/constants.py b/netbox/utilities/constants.py
--- a/netbox/utilities/constants.py
+++ b/netbox/utilities/constants.py
@@ -2,6 +2,7 @@
('aa1409', 'Dark red'),
('f44336', 'Red'),
('e91e63', 'Pink'),
+ ('ffe4e1', 'Rose'),
('ff66ff', 'Fuschia'),
('9c27b0', 'Purple'),
('673ab7', 'Dark purple'),
@@ -10,6 +11,7 @@
('03a9f4', 'Light blue'),
('00bcd4', 'Cyan'),
('009688', 'Teal'),
+ ('00ffff', 'Aqua'),
('2f6a31', 'Dark green'),
('4caf50', 'Green'),
('8bc34a', 'Light green'),
@@ -23,4 +25,5 @@
('9e9e9e', 'Grey'),
('607d8b', 'Dark grey'),
('111111', 'Black'),
+ ('ffffff', 'White'),
)
| {"golden_diff": "diff --git a/netbox/utilities/constants.py b/netbox/utilities/constants.py\n--- a/netbox/utilities/constants.py\n+++ b/netbox/utilities/constants.py\n@@ -2,6 +2,7 @@\n ('aa1409', 'Dark red'),\n ('f44336', 'Red'),\n ('e91e63', 'Pink'),\n+ ('ffe4e1', 'Rose'),\n ('ff66ff', 'Fuschia'),\n ('9c27b0', 'Purple'),\n ('673ab7', 'Dark purple'),\n@@ -10,6 +11,7 @@\n ('03a9f4', 'Light blue'),\n ('00bcd4', 'Cyan'),\n ('009688', 'Teal'),\n+ ('00ffff', 'Aqua'),\n ('2f6a31', 'Dark green'),\n ('4caf50', 'Green'),\n ('8bc34a', 'Light green'),\n@@ -23,4 +25,5 @@\n ('9e9e9e', 'Grey'),\n ('607d8b', 'Dark grey'),\n ('111111', 'Black'),\n+ ('ffffff', 'White'),\n )\n", "issue": "Add \"White\" as a cable color\n### Environment\r\n* Python version: 3.6\r\n* NetBox version: 2.5.1\r\n\r\n### Proposed Functionality\r\n\r\nAdd color white to the cable colors.\r\n\r\nOptionally add:\r\n\r\n* ~~slate~~(Dark Grey works, almost identical color)\r\n* rose\r\n* ~~violet~~ (Fuschia works, almost identical color)\r\n* aqua\r\n\r\n### Use Case\r\n\r\nThese fiber strand colors are missing\r\n\r\n### Database Changes\r\n\r\nNone\r\n\r\n### External Dependencies\r\n\r\nNone\n", "before_files": [{"content": "COLOR_CHOICES = (\n ('aa1409', 'Dark red'),\n ('f44336', 'Red'),\n ('e91e63', 'Pink'),\n ('ff66ff', 'Fuschia'),\n ('9c27b0', 'Purple'),\n ('673ab7', 'Dark purple'),\n ('3f51b5', 'Indigo'),\n ('2196f3', 'Blue'),\n ('03a9f4', 'Light blue'),\n ('00bcd4', 'Cyan'),\n ('009688', 'Teal'),\n ('2f6a31', 'Dark green'),\n ('4caf50', 'Green'),\n ('8bc34a', 'Light green'),\n ('cddc39', 'Lime'),\n ('ffeb3b', 'Yellow'),\n ('ffc107', 'Amber'),\n ('ff9800', 'Orange'),\n ('ff5722', 'Dark orange'),\n ('795548', 'Brown'),\n ('c0c0c0', 'Light grey'),\n ('9e9e9e', 'Grey'),\n ('607d8b', 'Dark grey'),\n ('111111', 'Black'),\n)\n", "path": "netbox/utilities/constants.py"}], "after_files": [{"content": "COLOR_CHOICES = (\n ('aa1409', 'Dark red'),\n ('f44336', 'Red'),\n ('e91e63', 'Pink'),\n ('ffe4e1', 'Rose'),\n ('ff66ff', 'Fuschia'),\n ('9c27b0', 'Purple'),\n ('673ab7', 'Dark purple'),\n ('3f51b5', 'Indigo'),\n ('2196f3', 'Blue'),\n ('03a9f4', 'Light blue'),\n ('00bcd4', 'Cyan'),\n ('009688', 'Teal'),\n ('00ffff', 'Aqua'),\n ('2f6a31', 'Dark green'),\n ('4caf50', 'Green'),\n ('8bc34a', 'Light green'),\n ('cddc39', 'Lime'),\n ('ffeb3b', 'Yellow'),\n ('ffc107', 'Amber'),\n ('ff9800', 'Orange'),\n ('ff5722', 'Dark orange'),\n ('795548', 'Brown'),\n ('c0c0c0', 'Light grey'),\n ('9e9e9e', 'Grey'),\n ('607d8b', 'Dark grey'),\n ('111111', 'Black'),\n ('ffffff', 'White'),\n)\n", "path": "netbox/utilities/constants.py"}]} | 708 | 282 |
gh_patches_debug_4763 | rasdani/github-patches | git_diff | pytorch__ignite-3199 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mean Absolute Percentage Error (MAPE)
## 🚀 Feature
I'd like to implement the mean absolute percentage error [(MAPE)](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error) in `ignite/metrics`.
It is a commonly used metric for regression problems and it would be really convenient to be able to use it directly with ignite evaluators.
For that, I would write a custom Metric class in a new file `mean_absolute_percentage_error.py` inheriting from the base `Metric` class in `ignite/metrics/metric.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/contrib/metrics/regression/mean_absolute_relative_error.py`
Content:
```
1 from typing import Tuple
2
3 import torch
4
5 from ignite.contrib.metrics.regression._base import _BaseRegression
6 from ignite.exceptions import NotComputableError
7 from ignite.metrics.metric import reinit__is_reduced, sync_all_reduce
8
9
10 class MeanAbsoluteRelativeError(_BaseRegression):
11 r"""Calculate Mean Absolute Relative Error.
12
13 .. math::
14 \text{MARE} = \frac{1}{n}\sum_{j=1}^n\frac{\left|A_j-P_j\right|}{\left|A_j\right|}
15
16 where :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.
17
18 More details can be found in the reference `Botchkarev 2018`__.
19
20 - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
21 - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.
22
23 __ https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf
24
25 Parameters are inherited from ``Metric.__init__``.
26
27 Args:
28 output_transform: a callable that is used to transform the
29 :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
30 form expected by the metric. This can be useful if, for example, you have a multi-output model and
31 you want to compute the metric with respect to one of the outputs.
32 By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
33 device: specifies which device updates are accumulated on. Setting the
34 metric's device to be the same as your ``update`` arguments ensures the ``update`` method is
35 non-blocking. By default, CPU.
36
37 Examples:
38 To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.
39 The output of the engine's ``process_function`` needs to be in format of
40 ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.
41
42 .. include:: defaults.rst
43 :start-after: :orphan:
44
45 .. testcode::
46
47 metric = MeanAbsoluteRelativeError()
48 metric.attach(default_evaluator, 'mare')
49 y_true = torch.tensor([1., 2., 3., 4., 5.])
50 y_pred = y_true * 0.75
51 state = default_evaluator.run([[y_pred, y_true]])
52 print(state.metrics['mare'])
53
54 .. testoutput::
55
56 0.25...
57
58 .. versionchanged:: 0.4.5
59 - Works with DDP.
60 """
61 _state_dict_all_req_keys = ("_sum_of_absolute_relative_errors", "_num_samples")
62
63 @reinit__is_reduced
64 def reset(self) -> None:
65 self._sum_of_absolute_relative_errors = torch.tensor(0.0, device=self._device)
66 self._num_samples = 0
67
68 def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:
69 y_pred, y = output[0].detach(), output[1].detach()
70 if (y == 0).any():
71 raise NotComputableError("The ground truth has 0.")
72 absolute_error = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))
73 self._sum_of_absolute_relative_errors += torch.sum(absolute_error).to(self._device)
74 self._num_samples += y.size()[0]
75
76 @sync_all_reduce("_sum_of_absolute_relative_errors", "_num_samples")
77 def compute(self) -> float:
78 if self._num_samples == 0:
79 raise NotComputableError(
80 "MeanAbsoluteRelativeError must have at least one sample before it can be computed."
81 )
82 return self._sum_of_absolute_relative_errors.item() / self._num_samples
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
--- a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
+++ b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py
@@ -8,7 +8,7 @@
class MeanAbsoluteRelativeError(_BaseRegression):
- r"""Calculate Mean Absolute Relative Error.
+ r"""Calculate Mean Absolute Relative Error (MARE), also known as Mean Absolute Percentage Error (MAPE).
.. math::
\text{MARE} = \frac{1}{n}\sum_{j=1}^n\frac{\left|A_j-P_j\right|}{\left|A_j\right|}
| {"golden_diff": "diff --git a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n--- a/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n+++ b/ignite/contrib/metrics/regression/mean_absolute_relative_error.py\n@@ -8,7 +8,7 @@\n \n \n class MeanAbsoluteRelativeError(_BaseRegression):\n- r\"\"\"Calculate Mean Absolute Relative Error.\n+ r\"\"\"Calculate Mean Absolute Relative Error (MARE), also known as Mean Absolute Percentage Error (MAPE).\n \n .. math::\n \\text{MARE} = \\frac{1}{n}\\sum_{j=1}^n\\frac{\\left|A_j-P_j\\right|}{\\left|A_j\\right|}\n", "issue": "Mean Absolute Percentage Error (MAPE)\n## \ud83d\ude80 Feature\r\n\r\nI'd like to implement the mean absolute percentage error [(MAPE)](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error) in `ignite/metrics`.\r\n\r\nIt is a commonly used metric for regression problems and it would be really convenient to be able to use it directly with ignite evaluators.\r\n\r\nFor that, I would write a custom Metric class in a new file `mean_absolute_percentage_error.py` inheriting from the base `Metric` class in `ignite/metrics/metric.py`.\r\n\n", "before_files": [{"content": "from typing import Tuple\n\nimport torch\n\nfrom ignite.contrib.metrics.regression._base import _BaseRegression\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import reinit__is_reduced, sync_all_reduce\n\n\nclass MeanAbsoluteRelativeError(_BaseRegression):\n r\"\"\"Calculate Mean Absolute Relative Error.\n\n .. math::\n \\text{MARE} = \\frac{1}{n}\\sum_{j=1}^n\\frac{\\left|A_j-P_j\\right|}{\\left|A_j\\right|}\n\n where :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.\n\n More details can be found in the reference `Botchkarev 2018`__.\n\n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.\n\n __ https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf\n\n Parameters are inherited from ``Metric.__init__``.\n\n Args:\n output_transform: a callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n device: specifies which device updates are accumulated on. Setting the\n metric's device to be the same as your ``update`` arguments ensures the ``update`` method is\n non-blocking. By default, CPU.\n\n Examples:\n To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.\n The output of the engine's ``process_function`` needs to be in format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.\n\n .. include:: defaults.rst\n :start-after: :orphan:\n\n .. testcode::\n\n metric = MeanAbsoluteRelativeError()\n metric.attach(default_evaluator, 'mare')\n y_true = torch.tensor([1., 2., 3., 4., 5.])\n y_pred = y_true * 0.75\n state = default_evaluator.run([[y_pred, y_true]])\n print(state.metrics['mare'])\n\n .. testoutput::\n\n 0.25...\n\n .. 
versionchanged:: 0.4.5\n - Works with DDP.\n \"\"\"\n _state_dict_all_req_keys = (\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_absolute_relative_errors = torch.tensor(0.0, device=self._device)\n self._num_samples = 0\n\n def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:\n y_pred, y = output[0].detach(), output[1].detach()\n if (y == 0).any():\n raise NotComputableError(\"The ground truth has 0.\")\n absolute_error = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))\n self._sum_of_absolute_relative_errors += torch.sum(absolute_error).to(self._device)\n self._num_samples += y.size()[0]\n\n @sync_all_reduce(\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n def compute(self) -> float:\n if self._num_samples == 0:\n raise NotComputableError(\n \"MeanAbsoluteRelativeError must have at least one sample before it can be computed.\"\n )\n return self._sum_of_absolute_relative_errors.item() / self._num_samples\n", "path": "ignite/contrib/metrics/regression/mean_absolute_relative_error.py"}], "after_files": [{"content": "from typing import Tuple\n\nimport torch\n\nfrom ignite.contrib.metrics.regression._base import _BaseRegression\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import reinit__is_reduced, sync_all_reduce\n\n\nclass MeanAbsoluteRelativeError(_BaseRegression):\n r\"\"\"Calculate Mean Absolute Relative Error (MARE), also known as Mean Absolute Percentage Error (MAPE).\n\n .. math::\n \\text{MARE} = \\frac{1}{n}\\sum_{j=1}^n\\frac{\\left|A_j-P_j\\right|}{\\left|A_j\\right|}\n\n where :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.\n\n More details can be found in the reference `Botchkarev 2018`__.\n\n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.\n\n __ https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf\n\n Parameters are inherited from ``Metric.__init__``.\n\n Args:\n output_transform: a callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n device: specifies which device updates are accumulated on. Setting the\n metric's device to be the same as your ``update`` arguments ensures the ``update`` method is\n non-blocking. By default, CPU.\n\n Examples:\n To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.\n The output of the engine's ``process_function`` needs to be in format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``.\n\n .. include:: defaults.rst\n :start-after: :orphan:\n\n .. testcode::\n\n metric = MeanAbsoluteRelativeError()\n metric.attach(default_evaluator, 'mare')\n y_true = torch.tensor([1., 2., 3., 4., 5.])\n y_pred = y_true * 0.75\n state = default_evaluator.run([[y_pred, y_true]])\n print(state.metrics['mare'])\n\n .. testoutput::\n\n 0.25...\n\n .. 
versionchanged:: 0.4.5\n - Works with DDP.\n \"\"\"\n _state_dict_all_req_keys = (\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_absolute_relative_errors = torch.tensor(0.0, device=self._device)\n self._num_samples = 0\n\n def _update(self, output: Tuple[torch.Tensor, torch.Tensor]) -> None:\n y_pred, y = output[0].detach(), output[1].detach()\n if (y == 0).any():\n raise NotComputableError(\"The ground truth has 0.\")\n absolute_error = torch.abs(y_pred - y.view_as(y_pred)) / torch.abs(y.view_as(y_pred))\n self._sum_of_absolute_relative_errors += torch.sum(absolute_error).to(self._device)\n self._num_samples += y.size()[0]\n\n @sync_all_reduce(\"_sum_of_absolute_relative_errors\", \"_num_samples\")\n def compute(self) -> float:\n if self._num_samples == 0:\n raise NotComputableError(\n \"MeanAbsoluteRelativeError must have at least one sample before it can be computed.\"\n )\n return self._sum_of_absolute_relative_errors.item() / self._num_samples\n", "path": "ignite/contrib/metrics/regression/mean_absolute_relative_error.py"}]} | 1,432 | 171 |
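The record above resolves the MAPE request by documenting that Ignite's existing `MeanAbsoluteRelativeError` (MARE) already computes the same quantity. As a cross-check, here is a minimal standalone sketch of that formula in plain PyTorch (a hypothetical helper, not part of Ignite; it assumes the ground truth contains no zeros), reproducing the 0.25 result from the docstring example in the record.

```python
# Standalone sketch (hypothetical helper): the MARE/MAPE formula computed directly.
import torch


def mean_absolute_percentage_error(y_pred: torch.Tensor, y_true: torch.Tensor) -> float:
    """MAPE = mean(|A - P| / |A|); assumes y_true has no zero entries."""
    if (y_true == 0).any():
        raise ValueError("The ground truth has 0.")
    return torch.mean(torch.abs(y_pred - y_true) / torch.abs(y_true)).item()


if __name__ == "__main__":
    y_true = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
    y_pred = y_true * 0.75
    # Matches the 0.25 value shown in the MeanAbsoluteRelativeError docstring example.
    print(mean_absolute_percentage_error(y_pred, y_true))
```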
gh_patches_debug_26538 | rasdani/github-patches | git_diff | speechbrain__speechbrain-304 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stats precision of FileTrainLogger
Now, all the stats logged by a FileTrainLogger have the precision 2 after their decimal points. In some training scenarios, precision 2 is not enough for some stats. I suggest allowing users to decide precision for each stats or adding precision number to 4 or 5 uniformly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `speechbrain/utils/train_logger.py`
Content:
```
1 """
2 Loggers for experiment monitoring
3
4 Authors
5 * Peter Plantinga 2020
6 """
7 import logging
8 from speechbrain.utils.edit_distance import wer_summary
9
10 logger = logging.getLogger(__name__)
11
12
13 class TrainLogger:
14 """Abstract class defining an interface for training loggers."""
15
16 def log_stats(
17 self,
18 stats_meta,
19 train_stats=None,
20 valid_stats=None,
21 test_stats=None,
22 verbose=False,
23 ):
24 """Log the stats for one epoch.
25
26 Arguments
27 ---------
28 stats_meta : dict of str:scalar pairs
29 Meta information about the stats (e.g. epoch, learning-rate, etc.)
30 train_stats : dict of str:list pairs
31 Each loss type is represented with a str : list pair including
32 all the values for the training pass.
33 valid_stats : dict of str:list pairs
34 Each loss type is represented with a str : list pair including
35 all the values for the validation pass.
36 test_stats : dict of str:list pairs
37 Each loss type is represented with a str : list pair including
38 all the values for the test pass.
39 verbose : bool
40 Whether to also put logging information to the standard logger.
41 """
42 raise NotImplementedError
43
44
45 class FileTrainLogger(TrainLogger):
46 """Text logger of training information
47
48 Arguments
49 ---------
50 save_file : str
51 The file to use for logging train information.
52 summary_fns : dict of str:function pairs
53 Each summary function should take a list produced as output
54 from a training/validation pass and summarize it to a single scalar.
55 """
56
57 def __init__(self, save_file, summary_fns=None):
58 self.save_file = save_file
59 self.summary_fns = summary_fns or {}
60
61 def _item_to_string(self, key, value, dataset=None):
62 """Convert one item to string, handling floats"""
63 if isinstance(value, float) and 0.01 < value < 100.0:
64 value = f"{value:.2f}"
65 elif isinstance(value, float):
66 value = f"{value:.2e}"
67 if dataset is not None:
68 key = f"{dataset} {key}"
69 return f"{key}: {value}"
70
71 def _stats_to_string(self, stats, dataset=None):
72 """Convert all stats to a single string summary"""
73 return ", ".join(
74 [self._item_to_string(k, v, dataset) for k, v in stats.items()]
75 )
76
77 def log_stats(
78 self,
79 stats_meta,
80 train_stats=None,
81 valid_stats=None,
82 test_stats=None,
83 verbose=True,
84 ):
85 """See TrainLogger.log_stats()"""
86 string_summary = self._stats_to_string(stats_meta)
87 for dataset, stats in [
88 ("train", train_stats),
89 ("valid", valid_stats),
90 ("test", test_stats),
91 ]:
92 if stats is None:
93 continue
94 summary = {}
95 for stat, value_list in stats.items():
96 if stat in self.summary_fns:
97 summary[stat] = self.summary_fns[stat](value_list)
98 else:
99 summary[stat] = summarize_average(value_list)
100 string_summary += " - " + self._stats_to_string(summary, dataset)
101
102 with open(self.save_file, "a") as fout:
103 print(string_summary, file=fout)
104 if verbose:
105 logger.info(string_summary)
106
107
108 class TensorboardLogger(TrainLogger):
109 """Logs training information in the format required by Tensorboard.
110
111 Arguments
112 ---------
113 save_dir : str
114 A directory for storing all the relevant logs
115
116 Raises
117 ------
118 ImportError if Tensorboard is not installed.
119 """
120
121 def __init__(self, save_dir):
122 self.save_dir = save_dir
123
124 # Raises ImportError if TensorBoard is not installed
125 from torch.utils.tensorboard import SummaryWriter
126
127 self.writer = SummaryWriter(self.save_dir)
128 self.global_step = {"train": {}, "valid": {}, "meta": 0}
129
130 def log_stats(
131 self,
132 stats_meta,
133 train_stats=None,
134 valid_stats=None,
135 test_stats=None,
136 verbose=False,
137 ):
138 """See TrainLogger.log_stats()"""
139 self.global_step["meta"] += 1
140 for name, value in stats_meta.items():
141 self.writer.add_scalar(name, value, self.global_step["meta"])
142
143 for dataset, stats in [
144 ("train", train_stats),
145 ("valid", valid_stats),
146 ("test", test_stats),
147 ]:
148 if stats is None:
149 continue
150 for stat, value_list in stats.items():
151 if stat not in self.global_step[dataset]:
152 self.global_step[dataset][stat] = 0
153 tag = f"{stat}/{dataset}"
154 for value in value_list:
155 new_global_step = self.global_step[dataset][stat] + 1
156 self.writer.add_scalar(tag, value, new_global_step)
157 self.global_step[dataset][stat] = new_global_step
158
159
160 def summarize_average(stat_list):
161 return float(sum(stat_list) / len(stat_list))
162
163
164 def summarize_error_rate(stat_list):
165 summary = wer_summary(stat_list)
166 return summary["WER"]
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py
--- a/speechbrain/utils/train_logger.py
+++ b/speechbrain/utils/train_logger.py
@@ -49,21 +49,24 @@
---------
save_file : str
The file to use for logging train information.
+ precision : int
+ Number of decimal places to display. Default 2, example: 1.35e-5
summary_fns : dict of str:function pairs
Each summary function should take a list produced as output
from a training/validation pass and summarize it to a single scalar.
"""
- def __init__(self, save_file, summary_fns=None):
+ def __init__(self, save_file, precision=2, summary_fns=None):
self.save_file = save_file
+ self.precision = precision
self.summary_fns = summary_fns or {}
def _item_to_string(self, key, value, dataset=None):
"""Convert one item to string, handling floats"""
- if isinstance(value, float) and 0.01 < value < 100.0:
- value = f"{value:.2f}"
+ if isinstance(value, float) and 1.0 < value < 100.0:
+ value = f"{value:.{self.precision}f}"
elif isinstance(value, float):
- value = f"{value:.2e}"
+ value = f"{value:.{self.precision}e}"
if dataset is not None:
key = f"{dataset} {key}"
return f"{key}: {value}"
| {"golden_diff": "diff --git a/speechbrain/utils/train_logger.py b/speechbrain/utils/train_logger.py\n--- a/speechbrain/utils/train_logger.py\n+++ b/speechbrain/utils/train_logger.py\n@@ -49,21 +49,24 @@\n ---------\n save_file : str\n The file to use for logging train information.\n+ precision : int\n+ Number of decimal places to display. Default 2, example: 1.35e-5\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n \n- def __init__(self, save_file, summary_fns=None):\n+ def __init__(self, save_file, precision=2, summary_fns=None):\n self.save_file = save_file\n+ self.precision = precision\n self.summary_fns = summary_fns or {}\n \n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n- if isinstance(value, float) and 0.01 < value < 100.0:\n- value = f\"{value:.2f}\"\n+ if isinstance(value, float) and 1.0 < value < 100.0:\n+ value = f\"{value:.{self.precision}f}\"\n elif isinstance(value, float):\n- value = f\"{value:.2e}\"\n+ value = f\"{value:.{self.precision}e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n", "issue": "Stats precision of FileTrainLogger\nNow, all the stats logged by a FileTrainLogger have the precision 2 after their decimal points. In some training scenarios, precision 2 is not enough for some stats. I suggest allowing users to decide precision for each stats or adding precision number to 4 or 5 uniformly.\n", "before_files": [{"content": "\"\"\"\nLoggers for experiment monitoring\n\nAuthors\n * Peter Plantinga 2020\n\"\"\"\nimport logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. 
epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, summary_fns=None):\n self.save_file = save_file\n self.summary_fns = summary_fns or {}\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float) and 0.01 < value < 100.0:\n value = f\"{value:.2f}\"\n elif isinstance(value, float):\n value = f\"{value:.2e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n if stat in self.summary_fns:\n summary[stat] = self.summary_fns[stat](value_list)\n else:\n summary[stat] = summarize_average(value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def __init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in 
value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}], "after_files": [{"content": "\"\"\"\nLoggers for experiment monitoring\n\nAuthors\n * Peter Plantinga 2020\n\"\"\"\nimport logging\nfrom speechbrain.utils.edit_distance import wer_summary\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrainLogger:\n \"\"\"Abstract class defining an interface for training loggers.\"\"\"\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"Log the stats for one epoch.\n\n Arguments\n ---------\n stats_meta : dict of str:scalar pairs\n Meta information about the stats (e.g. epoch, learning-rate, etc.)\n train_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the training pass.\n valid_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the validation pass.\n test_stats : dict of str:list pairs\n Each loss type is represented with a str : list pair including\n all the values for the test pass.\n verbose : bool\n Whether to also put logging information to the standard logger.\n \"\"\"\n raise NotImplementedError\n\n\nclass FileTrainLogger(TrainLogger):\n \"\"\"Text logger of training information\n\n Arguments\n ---------\n save_file : str\n The file to use for logging train information.\n precision : int\n Number of decimal places to display. 
Default 2, example: 1.35e-5\n summary_fns : dict of str:function pairs\n Each summary function should take a list produced as output\n from a training/validation pass and summarize it to a single scalar.\n \"\"\"\n\n def __init__(self, save_file, precision=2, summary_fns=None):\n self.save_file = save_file\n self.precision = precision\n self.summary_fns = summary_fns or {}\n\n def _item_to_string(self, key, value, dataset=None):\n \"\"\"Convert one item to string, handling floats\"\"\"\n if isinstance(value, float) and 1.0 < value < 100.0:\n value = f\"{value:.{self.precision}f}\"\n elif isinstance(value, float):\n value = f\"{value:.{self.precision}e}\"\n if dataset is not None:\n key = f\"{dataset} {key}\"\n return f\"{key}: {value}\"\n\n def _stats_to_string(self, stats, dataset=None):\n \"\"\"Convert all stats to a single string summary\"\"\"\n return \", \".join(\n [self._item_to_string(k, v, dataset) for k, v in stats.items()]\n )\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=True,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n string_summary = self._stats_to_string(stats_meta)\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n summary = {}\n for stat, value_list in stats.items():\n if stat in self.summary_fns:\n summary[stat] = self.summary_fns[stat](value_list)\n else:\n summary[stat] = summarize_average(value_list)\n string_summary += \" - \" + self._stats_to_string(summary, dataset)\n\n with open(self.save_file, \"a\") as fout:\n print(string_summary, file=fout)\n if verbose:\n logger.info(string_summary)\n\n\nclass TensorboardLogger(TrainLogger):\n \"\"\"Logs training information in the format required by Tensorboard.\n\n Arguments\n ---------\n save_dir : str\n A directory for storing all the relevant logs\n\n Raises\n ------\n ImportError if Tensorboard is not installed.\n \"\"\"\n\n def __init__(self, save_dir):\n self.save_dir = save_dir\n\n # Raises ImportError if TensorBoard is not installed\n from torch.utils.tensorboard import SummaryWriter\n\n self.writer = SummaryWriter(self.save_dir)\n self.global_step = {\"train\": {}, \"valid\": {}, \"meta\": 0}\n\n def log_stats(\n self,\n stats_meta,\n train_stats=None,\n valid_stats=None,\n test_stats=None,\n verbose=False,\n ):\n \"\"\"See TrainLogger.log_stats()\"\"\"\n self.global_step[\"meta\"] += 1\n for name, value in stats_meta.items():\n self.writer.add_scalar(name, value, self.global_step[\"meta\"])\n\n for dataset, stats in [\n (\"train\", train_stats),\n (\"valid\", valid_stats),\n (\"test\", test_stats),\n ]:\n if stats is None:\n continue\n for stat, value_list in stats.items():\n if stat not in self.global_step[dataset]:\n self.global_step[dataset][stat] = 0\n tag = f\"{stat}/{dataset}\"\n for value in value_list:\n new_global_step = self.global_step[dataset][stat] + 1\n self.writer.add_scalar(tag, value, new_global_step)\n self.global_step[dataset][stat] = new_global_step\n\n\ndef summarize_average(stat_list):\n return float(sum(stat_list) / len(stat_list))\n\n\ndef summarize_error_rate(stat_list):\n summary = wer_summary(stat_list)\n return summary[\"WER\"]\n", "path": "speechbrain/utils/train_logger.py"}]} | 1,842 | 362 |
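The golden diff in the record above makes the display precision configurable by switching to nested f-string format specifiers (`f"{value:.{self.precision}f}"`). A small standalone sketch (not SpeechBrain code) shows how that formatting behaves for a few precision settings.

```python
# Standalone sketch: the patched formatting logic, isolated from the logger class.
def item_to_string(key, value, precision=2):
    # Floats in (1.0, 100.0) use fixed-point notation; everything else scientific.
    if isinstance(value, float) and 1.0 < value < 100.0:
        value = f"{value:.{precision}f}"
    elif isinstance(value, float):
        value = f"{value:.{precision}e}"
    return f"{key}: {value}"


if __name__ == "__main__":
    print(item_to_string("loss", 3.14159))               # loss: 3.14
    print(item_to_string("loss", 3.14159, precision=5))  # loss: 3.14159
    print(item_to_string("lr", 1.35e-5, precision=2))    # lr: 1.35e-05
```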
gh_patches_debug_6949 | rasdani/github-patches | git_diff | mkdocs__mkdocs-409 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
problem with config in command line
If I run the follow command in current development version:
mkdocs serve --config=/home/lf/git/mywork/bb/bog/mkdocs.yml
it will raise error:
```
Config file 'mkdocs.yml' does not exist.
```
But if I run the same command use version 0.11.1
Everything is OK
Is there any thing wrong with code below in [config](https://github.com/tomchristie/mkdocs/blob/master/mkdocs/config.py#L79)
```
if 'config' in options:
filename = options.pop('config')
```
Should it be:
```
if 'config' in options:
filename = options.get('config')
```
Because when we run `mkdocs serve` , we will execute this block of code two times, filename will use the default `mkdocs.yml` in the second time, this file may not exist.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/config.py`
Content:
```
1 # coding: utf-8
2
3 from mkdocs import utils
4 from mkdocs.compat import urlparse
5 from mkdocs.exceptions import ConfigurationError
6
7 import logging
8 import os
9 import yaml
10
11 log = logging.getLogger(__name__)
12
13 DEFAULT_CONFIG = {
14 'site_name': None,
15 'pages': None,
16
17 'site_url': None,
18 'site_description': None,
19 'site_author': None,
20 'site_favicon': None,
21
22 'theme': 'mkdocs',
23 'docs_dir': 'docs',
24 'site_dir': 'site',
25 'theme_dir': None,
26
27 'copyright': None,
28 'google_analytics': None,
29
30 # The address on which to serve the livereloading docs server.
31 'dev_addr': '127.0.0.1:8000',
32
33 # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.
34 # If `False`, use `<page_name>.html style file with hyperlinks to the file.
35 # True generates nicer URLs, but False is useful if browsing the output on a filesystem.
36 'use_directory_urls': True,
37
38 # Specify a link to the project source repo to be included
39 # in the documentation pages.
40 'repo_url': None,
41
42 # A name to use for the link to the project source repo.
43 # Default: If repo_url is unset then None, otherwise
44 # "GitHub" or "Bitbucket" for known url or Hostname for unknown urls.
45 'repo_name': None,
46
47 # Specify which css or javascript files from the docs
48 # directionary should be additionally included in the site.
49 # Default: List of all .css and .js files in the docs dir.
50 'extra_css': None,
51 'extra_javascript': None,
52
53 # Determine if the site should include the nav and next/prev elements.
54 # Default: True if the site has more than one page, False otherwise.
55 'include_nav': None,
56 'include_next_prev': None,
57
58 # PyMarkdown extension names.
59 'markdown_extensions': (),
60
61 # Determine if the site should generate a json search index and include
62 # search elements in the theme. - TODO
63 'include_search': False,
64
65 # Determine if the site should include a 404.html page.
66 # TODO: Implment this. Make this None, have it True if a 404.html
67 # template exists in the theme or docs dir.
68 'include_404': False,
69
70 # enabling strict mode causes MkDocs to stop the build when a problem is
71 # encountered rather than display an error.
72 'strict': False,
73 }
74
75
76 def load_config(filename='mkdocs.yml', options=None):
77 options = options or {}
78 if 'config' in options:
79 filename = options.pop('config')
80 if not os.path.exists(filename):
81 raise ConfigurationError("Config file '%s' does not exist." % filename)
82 with open(filename, 'r') as fp:
83 user_config = yaml.load(fp)
84 if not isinstance(user_config, dict):
85 raise ConfigurationError("The mkdocs.yml file is invalid. See http://www.mkdocs.org/user-guide/configuration/ for more information.")
86 user_config.update(options)
87 return validate_config(user_config)
88
89
90 def validate_config(user_config):
91 config = DEFAULT_CONFIG.copy()
92
93 theme_in_config = 'theme' in user_config
94
95 config.update(user_config)
96
97 if not config['site_name']:
98 raise ConfigurationError("Config must contain 'site_name' setting.")
99
100 # Validate that the docs_dir and site_dir don't contain the
101 # other as this will lead to copying back and forth on each
102 # and eventually make a deep nested mess.
103 abs_site_dir = os.path.abspath(config['site_dir'])
104 abs_docs_dir = os.path.abspath(config['docs_dir'])
105 if abs_docs_dir.startswith(abs_site_dir):
106 raise ConfigurationError(
107 "The 'docs_dir' can't be within the 'site_dir'.")
108 elif abs_site_dir.startswith(abs_docs_dir):
109 raise ConfigurationError(
110 "The 'site_dir' can't be within the 'docs_dir'.")
111
112 # If not specified, then the 'pages' config simply includes all
113 # markdown files in the docs dir, without generating any header items
114 # for them.
115 pages = []
116 extra_css = []
117 extra_javascript = []
118 for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):
119 for filename in sorted(filenames):
120 fullpath = os.path.join(dirpath, filename)
121 relpath = os.path.relpath(fullpath, config['docs_dir'])
122
123 if utils.is_markdown_file(filename):
124 # index pages should always be the first listed page.
125 if os.path.splitext(relpath)[0] == 'index':
126 pages.insert(0, relpath)
127 else:
128 pages.append(relpath)
129 elif utils.is_css_file(filename):
130 extra_css.append(relpath)
131 elif utils.is_javascript_file(filename):
132 extra_javascript.append(relpath)
133
134 if config['pages'] is None:
135 config['pages'] = pages
136
137 if config['extra_css'] is None:
138 config['extra_css'] = extra_css
139
140 if config['extra_javascript'] is None:
141 config['extra_javascript'] = extra_javascript
142
143 package_dir = os.path.dirname(__file__)
144 theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]
145
146 if config['theme_dir'] is not None:
147 # If the user has given us a custom theme but not a
148 # builtin theme name then we don't want to merge them.
149 if not theme_in_config:
150 theme_dir = []
151 theme_dir.insert(0, config['theme_dir'])
152
153 config['theme_dir'] = theme_dir
154
155 if config['repo_url'] is not None and config['repo_name'] is None:
156 repo_host = urlparse(config['repo_url']).netloc.lower()
157 if repo_host == 'github.com':
158 config['repo_name'] = 'GitHub'
159 elif repo_host == 'bitbucket.org':
160 config['repo_name'] = 'Bitbucket'
161 else:
162 config['repo_name'] = repo_host.split('.')[0].title()
163
164 if config['include_next_prev'] is None:
165 config['include_next_prev'] = len(config['pages']) > 1
166
167 if config['include_nav'] is None:
168 config['include_nav'] = len(config['pages']) > 1
169
170 # To Do:
171
172 # The docs dir must exist.
173 # The theme dir must exist.
174 # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'
175 # A homepage 'index' must exist.
176 # The theme 'base.html' file must exist.
177 # Cannot set repo_name without setting repo_url.
178 # Cannot set 'include_next_prev: true' when only one page exists.
179 # Cannot set 'include_nav: true' when only one page exists.
180 # Error if any config keys provided that are not in the DEFAULT_CONFIG.
181
182 return config
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/config.py b/mkdocs/config.py
--- a/mkdocs/config.py
+++ b/mkdocs/config.py
@@ -76,7 +76,7 @@
def load_config(filename='mkdocs.yml', options=None):
options = options or {}
if 'config' in options:
- filename = options.pop('config')
+ filename = options['config']
if not os.path.exists(filename):
raise ConfigurationError("Config file '%s' does not exist." % filename)
with open(filename, 'r') as fp:
| {"golden_diff": "diff --git a/mkdocs/config.py b/mkdocs/config.py\n--- a/mkdocs/config.py\n+++ b/mkdocs/config.py\n@@ -76,7 +76,7 @@\n def load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n- filename = options.pop('config')\n+ filename = options['config']\n if not os.path.exists(filename):\n raise ConfigurationError(\"Config file '%s' does not exist.\" % filename)\n with open(filename, 'r') as fp:\n", "issue": "problem with config in command line \nIf I run the follow command in current development version:\n\nmkdocs serve --config=/home/lf/git/mywork/bb/bog/mkdocs.yml\n\nit will raise error:\n\n```\nConfig file 'mkdocs.yml' does not exist.\n```\n\nBut if I run the same command use version 0.11.1\n\nEverything is OK\n\nIs there any thing wrong with code below in [config](https://github.com/tomchristie/mkdocs/blob/master/mkdocs/config.py#L79)\n\n```\nif 'config' in options:\n filename = options.pop('config')\n```\n\nShould it be:\n\n```\nif 'config' in options:\n filename = options.get('config')\n```\n\nBecause when we run `mkdocs serve` , we will execute this block of code two times, filename will use the default `mkdocs.yml` in the second time, this file may not exist.\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse\nfrom mkdocs.exceptions import ConfigurationError\n\nimport logging\nimport os\nimport yaml\n\nlog = logging.getLogger(__name__)\n\nDEFAULT_CONFIG = {\n 'site_name': None,\n 'pages': None,\n\n 'site_url': None,\n 'site_description': None,\n 'site_author': None,\n 'site_favicon': None,\n\n 'theme': 'mkdocs',\n 'docs_dir': 'docs',\n 'site_dir': 'site',\n 'theme_dir': None,\n\n 'copyright': None,\n 'google_analytics': None,\n\n # The address on which to serve the livereloading docs server.\n 'dev_addr': '127.0.0.1:8000',\n\n # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.\n # If `False`, use `<page_name>.html style file with hyperlinks to the file.\n # True generates nicer URLs, but False is useful if browsing the output on a filesystem.\n 'use_directory_urls': True,\n\n # Specify a link to the project source repo to be included\n # in the documentation pages.\n 'repo_url': None,\n\n # A name to use for the link to the project source repo.\n # Default: If repo_url is unset then None, otherwise\n # \"GitHub\" or \"Bitbucket\" for known url or Hostname for unknown urls.\n 'repo_name': None,\n\n # Specify which css or javascript files from the docs\n # directionary should be additionally included in the site.\n # Default: List of all .css and .js files in the docs dir.\n 'extra_css': None,\n 'extra_javascript': None,\n\n # Determine if the site should include the nav and next/prev elements.\n # Default: True if the site has more than one page, False otherwise.\n 'include_nav': None,\n 'include_next_prev': None,\n\n # PyMarkdown extension names.\n 'markdown_extensions': (),\n\n # Determine if the site should generate a json search index and include\n # search elements in the theme. - TODO\n 'include_search': False,\n\n # Determine if the site should include a 404.html page.\n # TODO: Implment this. 
Make this None, have it True if a 404.html\n # template exists in the theme or docs dir.\n 'include_404': False,\n\n # enabling strict mode causes MkDocs to stop the build when a problem is\n # encountered rather than display an error.\n 'strict': False,\n}\n\n\ndef load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n filename = options.pop('config')\n if not os.path.exists(filename):\n raise ConfigurationError(\"Config file '%s' does not exist.\" % filename)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n if not isinstance(user_config, dict):\n raise ConfigurationError(\"The mkdocs.yml file is invalid. See http://www.mkdocs.org/user-guide/configuration/ for more information.\")\n user_config.update(options)\n return validate_config(user_config)\n\n\ndef validate_config(user_config):\n config = DEFAULT_CONFIG.copy()\n\n theme_in_config = 'theme' in user_config\n\n config.update(user_config)\n\n if not config['site_name']:\n raise ConfigurationError(\"Config must contain 'site_name' setting.\")\n\n # Validate that the docs_dir and site_dir don't contain the\n # other as this will lead to copying back and forth on each\n # and eventually make a deep nested mess.\n abs_site_dir = os.path.abspath(config['site_dir'])\n abs_docs_dir = os.path.abspath(config['docs_dir'])\n if abs_docs_dir.startswith(abs_site_dir):\n raise ConfigurationError(\n \"The 'docs_dir' can't be within the 'site_dir'.\")\n elif abs_site_dir.startswith(abs_docs_dir):\n raise ConfigurationError(\n \"The 'site_dir' can't be within the 'docs_dir'.\")\n\n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n # for them.\n pages = []\n extra_css = []\n extra_javascript = []\n for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):\n for filename in sorted(filenames):\n fullpath = os.path.join(dirpath, filename)\n relpath = os.path.relpath(fullpath, config['docs_dir'])\n\n if utils.is_markdown_file(filename):\n # index pages should always be the first listed page.\n if os.path.splitext(relpath)[0] == 'index':\n pages.insert(0, relpath)\n else:\n pages.append(relpath)\n elif utils.is_css_file(filename):\n extra_css.append(relpath)\n elif utils.is_javascript_file(filename):\n extra_javascript.append(relpath)\n\n if config['pages'] is None:\n config['pages'] = pages\n\n if config['extra_css'] is None:\n config['extra_css'] = extra_css\n\n if config['extra_javascript'] is None:\n config['extra_javascript'] = extra_javascript\n\n package_dir = os.path.dirname(__file__)\n theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]\n\n if config['theme_dir'] is not None:\n # If the user has given us a custom theme but not a\n # builtin theme name then we don't want to merge them.\n if not theme_in_config:\n theme_dir = []\n theme_dir.insert(0, config['theme_dir'])\n\n config['theme_dir'] = theme_dir\n\n if config['repo_url'] is not None and config['repo_name'] is None:\n repo_host = urlparse(config['repo_url']).netloc.lower()\n if repo_host == 'github.com':\n config['repo_name'] = 'GitHub'\n elif repo_host == 'bitbucket.org':\n config['repo_name'] = 'Bitbucket'\n else:\n config['repo_name'] = repo_host.split('.')[0].title()\n\n if config['include_next_prev'] is None:\n config['include_next_prev'] = len(config['pages']) > 1\n\n if config['include_nav'] is None:\n config['include_nav'] = len(config['pages']) > 1\n\n # To Do:\n\n # The docs dir must exist.\n # The theme 
dir must exist.\n # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'\n # A homepage 'index' must exist.\n # The theme 'base.html' file must exist.\n # Cannot set repo_name without setting repo_url.\n # Cannot set 'include_next_prev: true' when only one page exists.\n # Cannot set 'include_nav: true' when only one page exists.\n # Error if any config keys provided that are not in the DEFAULT_CONFIG.\n\n return config\n", "path": "mkdocs/config.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse\nfrom mkdocs.exceptions import ConfigurationError\n\nimport logging\nimport os\nimport yaml\n\nlog = logging.getLogger(__name__)\n\nDEFAULT_CONFIG = {\n 'site_name': None,\n 'pages': None,\n\n 'site_url': None,\n 'site_description': None,\n 'site_author': None,\n 'site_favicon': None,\n\n 'theme': 'mkdocs',\n 'docs_dir': 'docs',\n 'site_dir': 'site',\n 'theme_dir': None,\n\n 'copyright': None,\n 'google_analytics': None,\n\n # The address on which to serve the livereloading docs server.\n 'dev_addr': '127.0.0.1:8000',\n\n # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.\n # If `False`, use `<page_name>.html style file with hyperlinks to the file.\n # True generates nicer URLs, but False is useful if browsing the output on a filesystem.\n 'use_directory_urls': True,\n\n # Specify a link to the project source repo to be included\n # in the documentation pages.\n 'repo_url': None,\n\n # A name to use for the link to the project source repo.\n # Default: If repo_url is unset then None, otherwise\n # \"GitHub\" or \"Bitbucket\" for known url or Hostname for unknown urls.\n 'repo_name': None,\n\n # Specify which css or javascript files from the docs\n # directionary should be additionally included in the site.\n # Default: List of all .css and .js files in the docs dir.\n 'extra_css': None,\n 'extra_javascript': None,\n\n # Determine if the site should include the nav and next/prev elements.\n # Default: True if the site has more than one page, False otherwise.\n 'include_nav': None,\n 'include_next_prev': None,\n\n # PyMarkdown extension names.\n 'markdown_extensions': (),\n\n # Determine if the site should generate a json search index and include\n # search elements in the theme. - TODO\n 'include_search': False,\n\n # Determine if the site should include a 404.html page.\n # TODO: Implment this. Make this None, have it True if a 404.html\n # template exists in the theme or docs dir.\n 'include_404': False,\n\n # enabling strict mode causes MkDocs to stop the build when a problem is\n # encountered rather than display an error.\n 'strict': False,\n}\n\n\ndef load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n filename = options['config']\n if not os.path.exists(filename):\n raise ConfigurationError(\"Config file '%s' does not exist.\" % filename)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n if not isinstance(user_config, dict):\n raise ConfigurationError(\"The mkdocs.yml file is invalid. 
See http://www.mkdocs.org/user-guide/configuration/ for more information.\")\n user_config.update(options)\n return validate_config(user_config)\n\n\ndef validate_config(user_config):\n config = DEFAULT_CONFIG.copy()\n\n theme_in_config = 'theme' in user_config\n\n config.update(user_config)\n\n if not config['site_name']:\n raise ConfigurationError(\"Config must contain 'site_name' setting.\")\n\n # Validate that the docs_dir and site_dir don't contain the\n # other as this will lead to copying back and forth on each\n # and eventually make a deep nested mess.\n abs_site_dir = os.path.abspath(config['site_dir'])\n abs_docs_dir = os.path.abspath(config['docs_dir'])\n if abs_docs_dir.startswith(abs_site_dir):\n raise ConfigurationError(\n \"The 'docs_dir' can't be within the 'site_dir'.\")\n elif abs_site_dir.startswith(abs_docs_dir):\n raise ConfigurationError(\n \"The 'site_dir' can't be within the 'docs_dir'.\")\n\n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n # for them.\n pages = []\n extra_css = []\n extra_javascript = []\n for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):\n for filename in sorted(filenames):\n fullpath = os.path.join(dirpath, filename)\n relpath = os.path.relpath(fullpath, config['docs_dir'])\n\n if utils.is_markdown_file(filename):\n # index pages should always be the first listed page.\n if os.path.splitext(relpath)[0] == 'index':\n pages.insert(0, relpath)\n else:\n pages.append(relpath)\n elif utils.is_css_file(filename):\n extra_css.append(relpath)\n elif utils.is_javascript_file(filename):\n extra_javascript.append(relpath)\n\n if config['pages'] is None:\n config['pages'] = pages\n\n if config['extra_css'] is None:\n config['extra_css'] = extra_css\n\n if config['extra_javascript'] is None:\n config['extra_javascript'] = extra_javascript\n\n package_dir = os.path.dirname(__file__)\n theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]\n\n if config['theme_dir'] is not None:\n # If the user has given us a custom theme but not a\n # builtin theme name then we don't want to merge them.\n if not theme_in_config:\n theme_dir = []\n theme_dir.insert(0, config['theme_dir'])\n\n config['theme_dir'] = theme_dir\n\n if config['repo_url'] is not None and config['repo_name'] is None:\n repo_host = urlparse(config['repo_url']).netloc.lower()\n if repo_host == 'github.com':\n config['repo_name'] = 'GitHub'\n elif repo_host == 'bitbucket.org':\n config['repo_name'] = 'Bitbucket'\n else:\n config['repo_name'] = repo_host.split('.')[0].title()\n\n if config['include_next_prev'] is None:\n config['include_next_prev'] = len(config['pages']) > 1\n\n if config['include_nav'] is None:\n config['include_nav'] = len(config['pages']) > 1\n\n # To Do:\n\n # The docs dir must exist.\n # The theme dir must exist.\n # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'\n # A homepage 'index' must exist.\n # The theme 'base.html' file must exist.\n # Cannot set repo_name without setting repo_url.\n # Cannot set 'include_next_prev: true' when only one page exists.\n # Cannot set 'include_nav: true' when only one page exists.\n # Error if any config keys provided that are not in the DEFAULT_CONFIG.\n\n return config\n", "path": "mkdocs/config.py"}]} | 2,449 | 123 |
gh_patches_debug_30298 | rasdani/github-patches | git_diff | pulp__pulpcore-3755 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
There's a race when creating the same content in multiple processes
`Task 90df818f-3645-4b7c-8aaa-fae400a10ae0 failed (duplicate key value violates unique constraint "file_filecontent_relative_path_digest__pu_b4bae2c2_uniq"
DETAIL: Key (relative_path, digest, _pulp_domain_id)=(B, df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c, be9b7087-f318-48ef-81ce-f3141524c659) already exists.
)`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/serializers/content.py`
Content:
```
1 from gettext import gettext as _
2
3 from django.db import transaction
4 from rest_framework import serializers
5 from rest_framework.validators import UniqueValidator
6
7 from pulpcore.app import models
8 from pulpcore.app.serializers import base, fields
9 from pulpcore.app.util import get_domain
10
11
12 class BaseContentSerializer(base.ModelSerializer):
13 pulp_href = base.DetailIdentityField(view_name_pattern=r"contents(-.*/.*)-detail")
14
15 class Meta:
16 model = models.Content
17 fields = base.ModelSerializer.Meta.fields
18
19
20 class NoArtifactContentSerializer(BaseContentSerializer):
21 class Meta:
22 model = models.Content
23 fields = BaseContentSerializer.Meta.fields
24
25
26 class SingleArtifactContentSerializer(BaseContentSerializer):
27 artifact = fields.SingleContentArtifactField(
28 help_text=_("Artifact file representing the physical content"),
29 )
30
31 relative_path = serializers.CharField(
32 help_text=_("Path where the artifact is located relative to distributions base_path"),
33 validators=[fields.relative_path_validator],
34 write_only=True,
35 )
36
37 def __init__(self, *args, **kwargs):
38 """
39 Initializer for SingleArtifactContentSerializer
40 """
41 super().__init__(*args, **kwargs)
42
43 # If the content model has its own database field 'relative_path',
44 # we should not mark the field write_only
45 if hasattr(self.Meta.model, "relative_path") and "relative_path" in self.fields:
46 self.fields["relative_path"].write_only = False
47
48 @transaction.atomic
49 def create(self, validated_data):
50 """
51 Create the content and associate it with its Artifact, or retrieve the existing content.
52
53 Args:
54 validated_data (dict): Data to save to the database
55 """
56 content = self.retrieve(validated_data)
57
58 if content is not None:
59 content.touch()
60 else:
61 artifact = validated_data.pop("artifact")
62 if "relative_path" not in self.fields or self.fields["relative_path"].write_only:
63 relative_path = validated_data.pop("relative_path")
64 else:
65 relative_path = validated_data.get("relative_path")
66 content = self.Meta.model.objects.create(**validated_data)
67 models.ContentArtifact.objects.create(
68 artifact=artifact, content=content, relative_path=relative_path
69 )
70
71 return content
72
73 def retrieve(self, validated_data):
74 """
75 Retrieve existing content unit if it exists, else return None.
76
77 This method is plugin-specific and implementing it for a specific content type
78 allows for uploading already existing content units of that type.
79 """
80 return None
81
82 class Meta:
83 model = models.Content
84 fields = BaseContentSerializer.Meta.fields + ("artifact", "relative_path")
85
86
87 class MultipleArtifactContentSerializer(BaseContentSerializer):
88 artifacts = fields.ContentArtifactsField(
89 help_text=_(
90 "A dict mapping relative paths inside the Content to the corresponding"
91 "Artifact URLs. E.g.: {'relative/path': "
92 "'/artifacts/1/'"
93 ),
94 )
95
96 @transaction.atomic
97 def create(self, validated_data):
98 """
99 Create the content and associate it with all its Artifacts.
100
101 Args:
102 validated_data (dict): Data to save to the database
103 """
104 artifacts = validated_data.pop("artifacts")
105 content = self.Meta.model.objects.create(**validated_data)
106 for relative_path, artifact in artifacts.items():
107 models.ContentArtifact.objects.create(
108 artifact=artifact, content=content, relative_path=relative_path
109 )
110 return content
111
112 class Meta:
113 model = models.Content
114 fields = BaseContentSerializer.Meta.fields + ("artifacts",)
115
116
117 class ContentChecksumSerializer(serializers.Serializer):
118 """
119 Provide a serializer with artifact checksum fields for single artifact content.
120
121 If you use this serializer, it's recommended that you prefetch artifacts:
122
123 Content.objects.prefetch_related("_artifacts").all()
124 """
125
126 md5 = fields.ContentArtifactChecksumField(
127 help_text=_("The MD5 checksum if available."),
128 checksum="md5",
129 )
130
131 sha1 = fields.ContentArtifactChecksumField(
132 help_text=_("The SHA-1 checksum if available."),
133 checksum="sha1",
134 )
135
136 sha224 = fields.ContentArtifactChecksumField(
137 help_text=_("The SHA-224 checksum if available."),
138 checksum="sha224",
139 )
140
141 sha256 = fields.ContentArtifactChecksumField(
142 help_text=_("The SHA-256 checksum if available."),
143 checksum="sha256",
144 )
145
146 sha384 = fields.ContentArtifactChecksumField(
147 help_text=_("The SHA-384 checksum if available."),
148 checksum="sha384",
149 )
150
151 sha512 = fields.ContentArtifactChecksumField(
152 help_text=_("The SHA-512 checksum if available."),
153 checksum="sha512",
154 )
155
156 class Meta:
157 model = models.Content
158 fields = base.ModelSerializer.Meta.fields + (
159 "md5",
160 "sha1",
161 "sha224",
162 "sha256",
163 "sha384",
164 "sha512",
165 )
166
167
168 class ArtifactSerializer(base.ModelSerializer):
169 pulp_href = base.IdentityField(view_name="artifacts-detail")
170
171 file = serializers.FileField(help_text=_("The stored file."), allow_empty_file=True)
172
173 size = serializers.IntegerField(help_text=_("The size of the file in bytes."), required=False)
174
175 md5 = serializers.CharField(
176 help_text=_("The MD5 checksum of the file if available."), required=False, allow_null=True
177 )
178
179 sha1 = serializers.CharField(
180 help_text=_("The SHA-1 checksum of the file if available."),
181 required=False,
182 allow_null=True,
183 )
184
185 sha224 = serializers.CharField(
186 help_text=_("The SHA-224 checksum of the file if available."),
187 required=False,
188 allow_null=True,
189 )
190
191 sha256 = serializers.CharField(
192 help_text=_("The SHA-256 checksum of the file if available."),
193 required=False,
194 allow_null=True,
195 )
196
197 sha384 = serializers.CharField(
198 help_text=_("The SHA-384 checksum of the file if available."),
199 required=False,
200 allow_null=True,
201 )
202
203 sha512 = serializers.CharField(
204 help_text=_("The SHA-512 checksum of the file if available."),
205 required=False,
206 allow_null=True,
207 )
208
209 def validate(self, data):
210 """
211 Validate file by size and by all checksums provided.
212
213 Args:
214 data (:class:`django.http.QueryDict`): QueryDict mapping Artifact model fields to their
215 values
216
217 Raises:
218 :class:`rest_framework.exceptions.ValidationError`: When the expected file size or any
219 of the checksums don't match their actual values.
220 """
221 super().validate(data)
222 if "size" in data:
223 if data["file"].size != int(data["size"]):
224 raise serializers.ValidationError(_("The size did not match actual size of file."))
225 else:
226 data["size"] = data["file"].size
227
228 bad_algs = []
229 for algorithm in models.Artifact.FORBIDDEN_DIGESTS:
230 if algorithm in data:
231 bad_algs.append(algorithm)
232 if bad_algs:
233 raise serializers.ValidationError(
234 _("Checksum algorithms {} forbidden for this Pulp instance.").format(bad_algs)
235 )
236
237 for algorithm in reversed(models.Artifact.DIGEST_FIELDS):
238 digest = data["file"].hashers[algorithm].hexdigest()
239
240 if algorithm in data and digest != data[algorithm]:
241 raise serializers.ValidationError(_("The %s checksum did not match.") % algorithm)
242 else:
243 data[algorithm] = digest
244
245 if algorithm in models.Artifact.RELIABLE_DIGEST_FIELDS:
246 validator = UniqueValidator(
247 models.Artifact.objects.filter(pulp_domain=get_domain()),
248 message=_("Artifact with {0} checksum of '{1}' already exists.").format(
249 algorithm, digest
250 ),
251 )
252 validator.instance = None
253 validator(digest, self.fields[algorithm])
254
255 return data
256
257 class Meta:
258 model = models.Artifact
259 fields = base.ModelSerializer.Meta.fields + (
260 "file",
261 "size",
262 "md5",
263 "sha1",
264 "sha224",
265 "sha256",
266 "sha384",
267 "sha512",
268 )
269
270
271 class SigningServiceSerializer(base.ModelSerializer):
272 """
273 A serializer for the model declaring a signing service.
274 """
275
276 pulp_href = base.IdentityField(view_name="signing-services-detail")
277 name = serializers.CharField(help_text=_("A unique name used to recognize a script."))
278 public_key = serializers.CharField(
279 help_text=_("The value of a public key used for the repository verification.")
280 )
281 pubkey_fingerprint = serializers.CharField(help_text=_("The fingerprint of the public key."))
282 script = serializers.CharField(
283 help_text=_("An absolute path to a script which is going to be used for the signing.")
284 )
285
286 class Meta:
287 model = models.SigningService
288 fields = BaseContentSerializer.Meta.fields + (
289 "name",
290 "public_key",
291 "pubkey_fingerprint",
292 "script",
293 )
294
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/serializers/content.py b/pulpcore/app/serializers/content.py
--- a/pulpcore/app/serializers/content.py
+++ b/pulpcore/app/serializers/content.py
@@ -1,6 +1,6 @@
from gettext import gettext as _
-from django.db import transaction
+from django.db import transaction, IntegrityError
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
@@ -45,7 +45,6 @@
if hasattr(self.Meta.model, "relative_path") and "relative_path" in self.fields:
self.fields["relative_path"].write_only = False
- @transaction.atomic
def create(self, validated_data):
"""
Create the content and associate it with its Artifact, or retrieve the existing content.
@@ -63,10 +62,16 @@
relative_path = validated_data.pop("relative_path")
else:
relative_path = validated_data.get("relative_path")
- content = self.Meta.model.objects.create(**validated_data)
- models.ContentArtifact.objects.create(
- artifact=artifact, content=content, relative_path=relative_path
- )
+ try:
+ with transaction.atomic():
+ content = self.Meta.model.objects.create(**validated_data)
+ models.ContentArtifact.objects.create(
+ artifact=artifact, content=content, relative_path=relative_path
+ )
+ except IntegrityError:
+ content = self.retrieve(validated_data)
+ if content is None:
+ raise
return content
| {"golden_diff": "diff --git a/pulpcore/app/serializers/content.py b/pulpcore/app/serializers/content.py\n--- a/pulpcore/app/serializers/content.py\n+++ b/pulpcore/app/serializers/content.py\n@@ -1,6 +1,6 @@\n from gettext import gettext as _\n \n-from django.db import transaction\n+from django.db import transaction, IntegrityError\n from rest_framework import serializers\n from rest_framework.validators import UniqueValidator\n \n@@ -45,7 +45,6 @@\n if hasattr(self.Meta.model, \"relative_path\") and \"relative_path\" in self.fields:\n self.fields[\"relative_path\"].write_only = False\n \n- @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with its Artifact, or retrieve the existing content.\n@@ -63,10 +62,16 @@\n relative_path = validated_data.pop(\"relative_path\")\n else:\n relative_path = validated_data.get(\"relative_path\")\n- content = self.Meta.model.objects.create(**validated_data)\n- models.ContentArtifact.objects.create(\n- artifact=artifact, content=content, relative_path=relative_path\n- )\n+ try:\n+ with transaction.atomic():\n+ content = self.Meta.model.objects.create(**validated_data)\n+ models.ContentArtifact.objects.create(\n+ artifact=artifact, content=content, relative_path=relative_path\n+ )\n+ except IntegrityError:\n+ content = self.retrieve(validated_data)\n+ if content is None:\n+ raise\n \n return content\n", "issue": "There's a race when creating the same content in multiple processes\n`Task 90df818f-3645-4b7c-8aaa-fae400a10ae0 failed (duplicate key value violates unique constraint \"file_filecontent_relative_path_digest__pu_b4bae2c2_uniq\"\r\nDETAIL: Key (relative_path, digest, _pulp_domain_id)=(B, df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c, be9b7087-f318-48ef-81ce-f3141524c659) already exists.\r\n)`\n", "before_files": [{"content": "from gettext import gettext as _\n\nfrom django.db import transaction\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueValidator\n\nfrom pulpcore.app import models\nfrom pulpcore.app.serializers import base, fields\nfrom pulpcore.app.util import get_domain\n\n\nclass BaseContentSerializer(base.ModelSerializer):\n pulp_href = base.DetailIdentityField(view_name_pattern=r\"contents(-.*/.*)-detail\")\n\n class Meta:\n model = models.Content\n fields = base.ModelSerializer.Meta.fields\n\n\nclass NoArtifactContentSerializer(BaseContentSerializer):\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields\n\n\nclass SingleArtifactContentSerializer(BaseContentSerializer):\n artifact = fields.SingleContentArtifactField(\n help_text=_(\"Artifact file representing the physical content\"),\n )\n\n relative_path = serializers.CharField(\n help_text=_(\"Path where the artifact is located relative to distributions base_path\"),\n validators=[fields.relative_path_validator],\n write_only=True,\n )\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Initializer for SingleArtifactContentSerializer\n \"\"\"\n super().__init__(*args, **kwargs)\n\n # If the content model has its own database field 'relative_path',\n # we should not mark the field write_only\n if hasattr(self.Meta.model, \"relative_path\") and \"relative_path\" in self.fields:\n self.fields[\"relative_path\"].write_only = False\n\n @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with its Artifact, or retrieve the existing content.\n\n Args:\n validated_data (dict): Data to save to the database\n 
\"\"\"\n content = self.retrieve(validated_data)\n\n if content is not None:\n content.touch()\n else:\n artifact = validated_data.pop(\"artifact\")\n if \"relative_path\" not in self.fields or self.fields[\"relative_path\"].write_only:\n relative_path = validated_data.pop(\"relative_path\")\n else:\n relative_path = validated_data.get(\"relative_path\")\n content = self.Meta.model.objects.create(**validated_data)\n models.ContentArtifact.objects.create(\n artifact=artifact, content=content, relative_path=relative_path\n )\n\n return content\n\n def retrieve(self, validated_data):\n \"\"\"\n Retrieve existing content unit if it exists, else return None.\n\n This method is plugin-specific and implementing it for a specific content type\n allows for uploading already existing content units of that type.\n \"\"\"\n return None\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields + (\"artifact\", \"relative_path\")\n\n\nclass MultipleArtifactContentSerializer(BaseContentSerializer):\n artifacts = fields.ContentArtifactsField(\n help_text=_(\n \"A dict mapping relative paths inside the Content to the corresponding\"\n \"Artifact URLs. E.g.: {'relative/path': \"\n \"'/artifacts/1/'\"\n ),\n )\n\n @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with all its Artifacts.\n\n Args:\n validated_data (dict): Data to save to the database\n \"\"\"\n artifacts = validated_data.pop(\"artifacts\")\n content = self.Meta.model.objects.create(**validated_data)\n for relative_path, artifact in artifacts.items():\n models.ContentArtifact.objects.create(\n artifact=artifact, content=content, relative_path=relative_path\n )\n return content\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields + (\"artifacts\",)\n\n\nclass ContentChecksumSerializer(serializers.Serializer):\n \"\"\"\n Provide a serializer with artifact checksum fields for single artifact content.\n\n If you use this serializer, it's recommended that you prefetch artifacts:\n\n Content.objects.prefetch_related(\"_artifacts\").all()\n \"\"\"\n\n md5 = fields.ContentArtifactChecksumField(\n help_text=_(\"The MD5 checksum if available.\"),\n checksum=\"md5\",\n )\n\n sha1 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-1 checksum if available.\"),\n checksum=\"sha1\",\n )\n\n sha224 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-224 checksum if available.\"),\n checksum=\"sha224\",\n )\n\n sha256 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-256 checksum if available.\"),\n checksum=\"sha256\",\n )\n\n sha384 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-384 checksum if available.\"),\n checksum=\"sha384\",\n )\n\n sha512 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-512 checksum if available.\"),\n checksum=\"sha512\",\n )\n\n class Meta:\n model = models.Content\n fields = base.ModelSerializer.Meta.fields + (\n \"md5\",\n \"sha1\",\n \"sha224\",\n \"sha256\",\n \"sha384\",\n \"sha512\",\n )\n\n\nclass ArtifactSerializer(base.ModelSerializer):\n pulp_href = base.IdentityField(view_name=\"artifacts-detail\")\n\n file = serializers.FileField(help_text=_(\"The stored file.\"), allow_empty_file=True)\n\n size = serializers.IntegerField(help_text=_(\"The size of the file in bytes.\"), required=False)\n\n md5 = serializers.CharField(\n help_text=_(\"The MD5 checksum of the file if available.\"), required=False, allow_null=True\n )\n\n sha1 = 
serializers.CharField(\n help_text=_(\"The SHA-1 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha224 = serializers.CharField(\n help_text=_(\"The SHA-224 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha256 = serializers.CharField(\n help_text=_(\"The SHA-256 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha384 = serializers.CharField(\n help_text=_(\"The SHA-384 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha512 = serializers.CharField(\n help_text=_(\"The SHA-512 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n def validate(self, data):\n \"\"\"\n Validate file by size and by all checksums provided.\n\n Args:\n data (:class:`django.http.QueryDict`): QueryDict mapping Artifact model fields to their\n values\n\n Raises:\n :class:`rest_framework.exceptions.ValidationError`: When the expected file size or any\n of the checksums don't match their actual values.\n \"\"\"\n super().validate(data)\n if \"size\" in data:\n if data[\"file\"].size != int(data[\"size\"]):\n raise serializers.ValidationError(_(\"The size did not match actual size of file.\"))\n else:\n data[\"size\"] = data[\"file\"].size\n\n bad_algs = []\n for algorithm in models.Artifact.FORBIDDEN_DIGESTS:\n if algorithm in data:\n bad_algs.append(algorithm)\n if bad_algs:\n raise serializers.ValidationError(\n _(\"Checksum algorithms {} forbidden for this Pulp instance.\").format(bad_algs)\n )\n\n for algorithm in reversed(models.Artifact.DIGEST_FIELDS):\n digest = data[\"file\"].hashers[algorithm].hexdigest()\n\n if algorithm in data and digest != data[algorithm]:\n raise serializers.ValidationError(_(\"The %s checksum did not match.\") % algorithm)\n else:\n data[algorithm] = digest\n\n if algorithm in models.Artifact.RELIABLE_DIGEST_FIELDS:\n validator = UniqueValidator(\n models.Artifact.objects.filter(pulp_domain=get_domain()),\n message=_(\"Artifact with {0} checksum of '{1}' already exists.\").format(\n algorithm, digest\n ),\n )\n validator.instance = None\n validator(digest, self.fields[algorithm])\n\n return data\n\n class Meta:\n model = models.Artifact\n fields = base.ModelSerializer.Meta.fields + (\n \"file\",\n \"size\",\n \"md5\",\n \"sha1\",\n \"sha224\",\n \"sha256\",\n \"sha384\",\n \"sha512\",\n )\n\n\nclass SigningServiceSerializer(base.ModelSerializer):\n \"\"\"\n A serializer for the model declaring a signing service.\n \"\"\"\n\n pulp_href = base.IdentityField(view_name=\"signing-services-detail\")\n name = serializers.CharField(help_text=_(\"A unique name used to recognize a script.\"))\n public_key = serializers.CharField(\n help_text=_(\"The value of a public key used for the repository verification.\")\n )\n pubkey_fingerprint = serializers.CharField(help_text=_(\"The fingerprint of the public key.\"))\n script = serializers.CharField(\n help_text=_(\"An absolute path to a script which is going to be used for the signing.\")\n )\n\n class Meta:\n model = models.SigningService\n fields = BaseContentSerializer.Meta.fields + (\n \"name\",\n \"public_key\",\n \"pubkey_fingerprint\",\n \"script\",\n )\n", "path": "pulpcore/app/serializers/content.py"}], "after_files": [{"content": "from gettext import gettext as _\n\nfrom django.db import transaction, IntegrityError\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueValidator\n\nfrom pulpcore.app import models\nfrom 
pulpcore.app.serializers import base, fields\nfrom pulpcore.app.util import get_domain\n\n\nclass BaseContentSerializer(base.ModelSerializer):\n pulp_href = base.DetailIdentityField(view_name_pattern=r\"contents(-.*/.*)-detail\")\n\n class Meta:\n model = models.Content\n fields = base.ModelSerializer.Meta.fields\n\n\nclass NoArtifactContentSerializer(BaseContentSerializer):\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields\n\n\nclass SingleArtifactContentSerializer(BaseContentSerializer):\n artifact = fields.SingleContentArtifactField(\n help_text=_(\"Artifact file representing the physical content\"),\n )\n\n relative_path = serializers.CharField(\n help_text=_(\"Path where the artifact is located relative to distributions base_path\"),\n validators=[fields.relative_path_validator],\n write_only=True,\n )\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Initializer for SingleArtifactContentSerializer\n \"\"\"\n super().__init__(*args, **kwargs)\n\n # If the content model has its own database field 'relative_path',\n # we should not mark the field write_only\n if hasattr(self.Meta.model, \"relative_path\") and \"relative_path\" in self.fields:\n self.fields[\"relative_path\"].write_only = False\n\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with its Artifact, or retrieve the existing content.\n\n Args:\n validated_data (dict): Data to save to the database\n \"\"\"\n content = self.retrieve(validated_data)\n\n if content is not None:\n content.touch()\n else:\n artifact = validated_data.pop(\"artifact\")\n if \"relative_path\" not in self.fields or self.fields[\"relative_path\"].write_only:\n relative_path = validated_data.pop(\"relative_path\")\n else:\n relative_path = validated_data.get(\"relative_path\")\n try:\n with transaction.atomic():\n content = self.Meta.model.objects.create(**validated_data)\n models.ContentArtifact.objects.create(\n artifact=artifact, content=content, relative_path=relative_path\n )\n except IntegrityError:\n content = self.retrieve(validated_data)\n if content is None:\n raise\n\n return content\n\n def retrieve(self, validated_data):\n \"\"\"\n Retrieve existing content unit if it exists, else return None.\n\n This method is plugin-specific and implementing it for a specific content type\n allows for uploading already existing content units of that type.\n \"\"\"\n return None\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields + (\"artifact\", \"relative_path\")\n\n\nclass MultipleArtifactContentSerializer(BaseContentSerializer):\n artifacts = fields.ContentArtifactsField(\n help_text=_(\n \"A dict mapping relative paths inside the Content to the corresponding\"\n \"Artifact URLs. 
E.g.: {'relative/path': \"\n \"'/artifacts/1/'\"\n ),\n )\n\n @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with all its Artifacts.\n\n Args:\n validated_data (dict): Data to save to the database\n \"\"\"\n artifacts = validated_data.pop(\"artifacts\")\n content = self.Meta.model.objects.create(**validated_data)\n for relative_path, artifact in artifacts.items():\n models.ContentArtifact.objects.create(\n artifact=artifact, content=content, relative_path=relative_path\n )\n return content\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields + (\"artifacts\",)\n\n\nclass ContentChecksumSerializer(serializers.Serializer):\n \"\"\"\n Provide a serializer with artifact checksum fields for single artifact content.\n\n If you use this serializer, it's recommended that you prefetch artifacts:\n\n Content.objects.prefetch_related(\"_artifacts\").all()\n \"\"\"\n\n md5 = fields.ContentArtifactChecksumField(\n help_text=_(\"The MD5 checksum if available.\"),\n checksum=\"md5\",\n )\n\n sha1 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-1 checksum if available.\"),\n checksum=\"sha1\",\n )\n\n sha224 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-224 checksum if available.\"),\n checksum=\"sha224\",\n )\n\n sha256 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-256 checksum if available.\"),\n checksum=\"sha256\",\n )\n\n sha384 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-384 checksum if available.\"),\n checksum=\"sha384\",\n )\n\n sha512 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-512 checksum if available.\"),\n checksum=\"sha512\",\n )\n\n class Meta:\n model = models.Content\n fields = base.ModelSerializer.Meta.fields + (\n \"md5\",\n \"sha1\",\n \"sha224\",\n \"sha256\",\n \"sha384\",\n \"sha512\",\n )\n\n\nclass ArtifactSerializer(base.ModelSerializer):\n pulp_href = base.IdentityField(view_name=\"artifacts-detail\")\n\n file = serializers.FileField(help_text=_(\"The stored file.\"), allow_empty_file=True)\n\n size = serializers.IntegerField(help_text=_(\"The size of the file in bytes.\"), required=False)\n\n md5 = serializers.CharField(\n help_text=_(\"The MD5 checksum of the file if available.\"), required=False, allow_null=True\n )\n\n sha1 = serializers.CharField(\n help_text=_(\"The SHA-1 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha224 = serializers.CharField(\n help_text=_(\"The SHA-224 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha256 = serializers.CharField(\n help_text=_(\"The SHA-256 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha384 = serializers.CharField(\n help_text=_(\"The SHA-384 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha512 = serializers.CharField(\n help_text=_(\"The SHA-512 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n def validate(self, data):\n \"\"\"\n Validate file by size and by all checksums provided.\n\n Args:\n data (:class:`django.http.QueryDict`): QueryDict mapping Artifact model fields to their\n values\n\n Raises:\n :class:`rest_framework.exceptions.ValidationError`: When the expected file size or any\n of the checksums don't match their actual values.\n \"\"\"\n super().validate(data)\n if \"size\" in data:\n if data[\"file\"].size != int(data[\"size\"]):\n raise 
serializers.ValidationError(_(\"The size did not match actual size of file.\"))\n else:\n data[\"size\"] = data[\"file\"].size\n\n bad_algs = []\n for algorithm in models.Artifact.FORBIDDEN_DIGESTS:\n if algorithm in data:\n bad_algs.append(algorithm)\n if bad_algs:\n raise serializers.ValidationError(\n _(\"Checksum algorithms {} forbidden for this Pulp instance.\").format(bad_algs)\n )\n\n for algorithm in reversed(models.Artifact.DIGEST_FIELDS):\n digest = data[\"file\"].hashers[algorithm].hexdigest()\n\n if algorithm in data and digest != data[algorithm]:\n raise serializers.ValidationError(_(\"The %s checksum did not match.\") % algorithm)\n else:\n data[algorithm] = digest\n\n if algorithm in models.Artifact.RELIABLE_DIGEST_FIELDS:\n validator = UniqueValidator(\n models.Artifact.objects.filter(pulp_domain=get_domain()),\n message=_(\"Artifact with {0} checksum of '{1}' already exists.\").format(\n algorithm, digest\n ),\n )\n validator.instance = None\n validator(digest, self.fields[algorithm])\n\n return data\n\n class Meta:\n model = models.Artifact\n fields = base.ModelSerializer.Meta.fields + (\n \"file\",\n \"size\",\n \"md5\",\n \"sha1\",\n \"sha224\",\n \"sha256\",\n \"sha384\",\n \"sha512\",\n )\n\n\nclass SigningServiceSerializer(base.ModelSerializer):\n \"\"\"\n A serializer for the model declaring a signing service.\n \"\"\"\n\n pulp_href = base.IdentityField(view_name=\"signing-services-detail\")\n name = serializers.CharField(help_text=_(\"A unique name used to recognize a script.\"))\n public_key = serializers.CharField(\n help_text=_(\"The value of a public key used for the repository verification.\")\n )\n pubkey_fingerprint = serializers.CharField(help_text=_(\"The fingerprint of the public key.\"))\n script = serializers.CharField(\n help_text=_(\"An absolute path to a script which is going to be used for the signing.\")\n )\n\n class Meta:\n model = models.SigningService\n fields = BaseContentSerializer.Meta.fields + (\n \"name\",\n \"public_key\",\n \"pubkey_fingerprint\",\n \"script\",\n )\n", "path": "pulpcore/app/serializers/content.py"}]} | 3,176 | 332 |
gh_patches_debug_1757 | rasdani/github-patches | git_diff | mne-tools__mne-bids-1156 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MNE-BIDS 0.13 release
A release of MNE-BIDS has been requested: https://mne.discourse.group/t/mne-bids-0-13-release-date/7291/2
Our last release has been in December 2022, so I feel like cutting a release now is reasonable.
I'll migrate issues from the [0.13 milestone](https://github.com/mne-tools/mne-bids/milestone/14) to a new 0.14 milestone.
Please comment here if you need some particular thing to be fixed before the release.
cc @agramfort @hoechenberger @larsoner
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mne_bids/__init__.py`
Content:
```
1 """MNE software for easily interacting with BIDS compatible datasets."""
2
3 __version__ = "0.13.dev0"
4 from mne_bids import commands
5 from mne_bids.report import make_report
6 from mne_bids.path import (
7 BIDSPath,
8 get_datatypes,
9 get_entity_vals,
10 print_dir_tree,
11 get_entities_from_fname,
12 search_folder_for_text,
13 get_bids_path_from_fname,
14 find_matching_paths,
15 )
16 from mne_bids.read import get_head_mri_trans, read_raw_bids
17 from mne_bids.utils import get_anonymization_daysback
18 from mne_bids.write import (
19 make_dataset_description,
20 write_anat,
21 write_raw_bids,
22 mark_channels,
23 write_meg_calibration,
24 write_meg_crosstalk,
25 get_anat_landmarks,
26 anonymize_dataset,
27 )
28 from mne_bids.sidecar_updates import update_sidecar_json, update_anat_landmarks
29 from mne_bids.inspect import inspect_dataset
30 from mne_bids.dig import (
31 template_to_head,
32 convert_montage_to_ras,
33 convert_montage_to_mri,
34 )
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mne_bids/__init__.py b/mne_bids/__init__.py
--- a/mne_bids/__init__.py
+++ b/mne_bids/__init__.py
@@ -1,6 +1,6 @@
"""MNE software for easily interacting with BIDS compatible datasets."""
-__version__ = "0.13.dev0"
+__version__ = "0.13"
from mne_bids import commands
from mne_bids.report import make_report
from mne_bids.path import (
| {"golden_diff": "diff --git a/mne_bids/__init__.py b/mne_bids/__init__.py\n--- a/mne_bids/__init__.py\n+++ b/mne_bids/__init__.py\n@@ -1,6 +1,6 @@\n \"\"\"MNE software for easily interacting with BIDS compatible datasets.\"\"\"\n \n-__version__ = \"0.13.dev0\"\n+__version__ = \"0.13\"\n from mne_bids import commands\n from mne_bids.report import make_report\n from mne_bids.path import (\n", "issue": "MNE-BIDS 0.13 release\nA release of MNE-BIDS has been requested: https://mne.discourse.group/t/mne-bids-0-13-release-date/7291/2\r\n\r\nOur last release has been in December 2022, so I feel like cutting a release now is reasonable.\r\n\r\nI'll migrate issues from the [0.13 milestone](https://github.com/mne-tools/mne-bids/milestone/14) to a new 0.14 milestone.\r\n\r\nPlease comment here if you need some particular thing to be fixed before the release.\r\n\r\ncc @agramfort @hoechenberger @larsoner \n", "before_files": [{"content": "\"\"\"MNE software for easily interacting with BIDS compatible datasets.\"\"\"\n\n__version__ = \"0.13.dev0\"\nfrom mne_bids import commands\nfrom mne_bids.report import make_report\nfrom mne_bids.path import (\n BIDSPath,\n get_datatypes,\n get_entity_vals,\n print_dir_tree,\n get_entities_from_fname,\n search_folder_for_text,\n get_bids_path_from_fname,\n find_matching_paths,\n)\nfrom mne_bids.read import get_head_mri_trans, read_raw_bids\nfrom mne_bids.utils import get_anonymization_daysback\nfrom mne_bids.write import (\n make_dataset_description,\n write_anat,\n write_raw_bids,\n mark_channels,\n write_meg_calibration,\n write_meg_crosstalk,\n get_anat_landmarks,\n anonymize_dataset,\n)\nfrom mne_bids.sidecar_updates import update_sidecar_json, update_anat_landmarks\nfrom mne_bids.inspect import inspect_dataset\nfrom mne_bids.dig import (\n template_to_head,\n convert_montage_to_ras,\n convert_montage_to_mri,\n)\n", "path": "mne_bids/__init__.py"}], "after_files": [{"content": "\"\"\"MNE software for easily interacting with BIDS compatible datasets.\"\"\"\n\n__version__ = \"0.13\"\nfrom mne_bids import commands\nfrom mne_bids.report import make_report\nfrom mne_bids.path import (\n BIDSPath,\n get_datatypes,\n get_entity_vals,\n print_dir_tree,\n get_entities_from_fname,\n search_folder_for_text,\n get_bids_path_from_fname,\n find_matching_paths,\n)\nfrom mne_bids.read import get_head_mri_trans, read_raw_bids\nfrom mne_bids.utils import get_anonymization_daysback\nfrom mne_bids.write import (\n make_dataset_description,\n write_anat,\n write_raw_bids,\n mark_channels,\n write_meg_calibration,\n write_meg_crosstalk,\n get_anat_landmarks,\n anonymize_dataset,\n)\nfrom mne_bids.sidecar_updates import update_sidecar_json, update_anat_landmarks\nfrom mne_bids.inspect import inspect_dataset\nfrom mne_bids.dig import (\n template_to_head,\n convert_montage_to_ras,\n convert_montage_to_mri,\n)\n", "path": "mne_bids/__init__.py"}]} | 713 | 118 |
gh_patches_debug_36782 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2006 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cfn-lint 0.49.1 does not catch `/` as an invalid character in a Mapping element name
*cfn-lint version: cfn-lint 0.49.1*
*cfn-lint did not catch `/` as an invalid character in a Mapping element name*
cfn-lint passed successfully with this mapping included in the template:
```yaml
Mappings:
NameServers:
10.90.0.0/16:
NameServer1: 10.90.0.10
NameServer2: 10.90.4.10
10.91.0.0/16:
NameServer1: 10.91.0.10
NameServer2: 10.91.4.10
```
However AWS rejected it:
> Template format error: Mappings element name '10.93.0.0/16' must be non-empty and can contain only alphanumerics, '-' or '.'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/mappings/KeyName.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 import six
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9 from cfnlint.helpers import REGEX_ALPHANUMERIC
10
11
12 class KeyName(CloudFormationLintRule):
13 """Check if Mapping Keys are type string"""
14 id = 'E7003'
15 shortdesc = 'Mapping keys are strings and alphanumeric'
16 description = 'Check if Mappings keys are properly typed as strings and alphanumeric'
17 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'
18 tags = ['mappings']
19
20 def check_key(self, key, path, check_alphanumeric=True):
21 """ Check the key name for string and alphanumeric"""
22 matches = []
23 if not isinstance(key, six.string_types):
24 message = 'Mapping key ({0}) has to be a string.'
25 matches.append(RuleMatch(path[:], message.format(key)))
26 elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:
27 message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'
28 matches.append(RuleMatch(path[:], message.format(key)))
29
30 return matches
31
32 def match(self, cfn):
33 matches = []
34
35 mappings = cfn.template.get('Mappings', {})
36 for mapping_name, mapping_value in mappings.items():
37 if isinstance(mapping_value, dict):
38 for key_name, key_value in mapping_value.items():
39 matches.extend(self.check_key(
40 key_name, ['Mappings', mapping_name, key_name], False))
41 if isinstance(key_value, dict):
42 for sub_key_name, _ in key_value.items():
43 matches.extend(
44 self.check_key(
45 sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))
46
47 return matches
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/mappings/KeyName.py b/src/cfnlint/rules/mappings/KeyName.py
--- a/src/cfnlint/rules/mappings/KeyName.py
+++ b/src/cfnlint/rules/mappings/KeyName.py
@@ -17,14 +17,26 @@
source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'
tags = ['mappings']
- def check_key(self, key, path, check_alphanumeric=True):
+ def check_attribute(self, key, path):
+ """ Check the key name for string and alphanumeric"""
+ matches = []
+ if not isinstance(key, six.string_types):
+ message = 'Mapping attribute ({0}) has to be a string.'
+ matches.append(RuleMatch(path[:], message.format(key)))
+ elif not re.match(REGEX_ALPHANUMERIC, key):
+ message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'
+ matches.append(RuleMatch(path[:], message.format(key)))
+
+ return matches
+
+ def check_key(self, key, path):
""" Check the key name for string and alphanumeric"""
matches = []
if not isinstance(key, six.string_types):
message = 'Mapping key ({0}) has to be a string.'
matches.append(RuleMatch(path[:], message.format(key)))
- elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:
- message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'
+ elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):
+ message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \'-\' or \'.\''
matches.append(RuleMatch(path[:], message.format(key)))
return matches
@@ -37,11 +49,11 @@
if isinstance(mapping_value, dict):
for key_name, key_value in mapping_value.items():
matches.extend(self.check_key(
- key_name, ['Mappings', mapping_name, key_name], False))
+ key_name, ['Mappings', mapping_name, key_name]))
if isinstance(key_value, dict):
for sub_key_name, _ in key_value.items():
matches.extend(
- self.check_key(
+ self.check_attribute(
sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/mappings/KeyName.py b/src/cfnlint/rules/mappings/KeyName.py\n--- a/src/cfnlint/rules/mappings/KeyName.py\n+++ b/src/cfnlint/rules/mappings/KeyName.py\n@@ -17,14 +17,26 @@\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n \n- def check_key(self, key, path, check_alphanumeric=True):\n+ def check_attribute(self, key, path):\n+ \"\"\" Check the key name for string and alphanumeric\"\"\"\n+ matches = []\n+ if not isinstance(key, six.string_types):\n+ message = 'Mapping attribute ({0}) has to be a string.'\n+ matches.append(RuleMatch(path[:], message.format(key)))\n+ elif not re.match(REGEX_ALPHANUMERIC, key):\n+ message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'\n+ matches.append(RuleMatch(path[:], message.format(key)))\n+\n+ return matches\n+\n+ def check_key(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n- elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:\n- message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'\n+ elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):\n+ message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric, \\'-\\' or \\'.\\''\n matches.append(RuleMatch(path[:], message.format(key)))\n \n return matches\n@@ -37,11 +49,11 @@\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n- key_name, ['Mappings', mapping_name, key_name], False))\n+ key_name, ['Mappings', mapping_name, key_name]))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n- self.check_key(\n+ self.check_attribute(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n \n return matches\n", "issue": "cfn-lint 0.49.1 does not catch `/` as an invalid character in a Mapping element name\n*cfn-lint version: cfn-lint 0.49.1*\r\n\r\n*cfn-lint did not catch `/` as an invalid character in a Mapping element name*\r\n\r\ncfn-lint passed successfully with this mapping included in the template:\r\n```yaml\r\nMappings:\r\n NameServers:\r\n 10.90.0.0/16:\r\n NameServer1: 10.90.0.10\r\n NameServer2: 10.90.4.10\r\n 10.91.0.0/16:\r\n NameServer1: 10.91.0.10\r\n NameServer2: 10.91.4.10\r\n```\r\n\r\nHowever AWS rejected it:\r\n> Template format error: Mappings element name '10.93.0.0/16' must be non-empty and can contain only alphanumerics, '-' or '.'\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import REGEX_ALPHANUMERIC\n\n\nclass KeyName(CloudFormationLintRule):\n \"\"\"Check if Mapping Keys are type string\"\"\"\n id = 'E7003'\n shortdesc = 'Mapping keys are strings and alphanumeric'\n description = 'Check if Mappings keys are properly typed as strings and alphanumeric'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n\n def check_key(self, key, path, check_alphanumeric=True):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match(REGEX_ALPHANUMERIC, key) and check_alphanumeric:\n message = 'Mapping key ({0}) has invalid name. Name has to be alphanumeric.'\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n mappings = cfn.template.get('Mappings', {})\n for mapping_name, mapping_value in mappings.items():\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n key_name, ['Mappings', mapping_name, key_name], False))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n self.check_key(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n\n return matches\n", "path": "src/cfnlint/rules/mappings/KeyName.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import REGEX_ALPHANUMERIC\n\n\nclass KeyName(CloudFormationLintRule):\n \"\"\"Check if Mapping Keys are type string\"\"\"\n id = 'E7003'\n shortdesc = 'Mapping keys are strings and alphanumeric'\n description = 'Check if Mappings keys are properly typed as strings and alphanumeric'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html'\n tags = ['mappings']\n\n def check_attribute(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping attribute ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match(REGEX_ALPHANUMERIC, key):\n message = 'Mapping attribute ({0}) has invalid name. Name has to be alphanumeric.'\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def check_key(self, key, path):\n \"\"\" Check the key name for string and alphanumeric\"\"\"\n matches = []\n if not isinstance(key, six.string_types):\n message = 'Mapping key ({0}) has to be a string.'\n matches.append(RuleMatch(path[:], message.format(key)))\n elif not re.match('^[a-zA-Z0-9.-]{1,255}$', key):\n message = 'Mapping key ({0}) has invalid name. 
Name has to be alphanumeric, \\'-\\' or \\'.\\''\n matches.append(RuleMatch(path[:], message.format(key)))\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n mappings = cfn.template.get('Mappings', {})\n for mapping_name, mapping_value in mappings.items():\n if isinstance(mapping_value, dict):\n for key_name, key_value in mapping_value.items():\n matches.extend(self.check_key(\n key_name, ['Mappings', mapping_name, key_name]))\n if isinstance(key_value, dict):\n for sub_key_name, _ in key_value.items():\n matches.extend(\n self.check_attribute(\n sub_key_name, ['Mappings', mapping_name, key_name, sub_key_name]))\n\n return matches\n", "path": "src/cfnlint/rules/mappings/KeyName.py"}]} | 1,082 | 547 |
gh_patches_debug_8668 | rasdani/github-patches | git_diff | wright-group__WrightTools-1132 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
shift supported Python 3 versions
Since users are increasingly relying on 3.10 and 3.11, I propose we move testing from 3.7-9 to 3.8-11.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python3
2
3 import os
4 from setuptools import setup, find_packages
5
6
7 here = os.path.abspath(os.path.dirname(__file__))
8
9
10 def read(fname):
11 with open(os.path.join(here, fname)) as f:
12 return f.read()
13
14
15 extra_files = {
16 "WrightTools": [
17 "datasets",
18 "datasets/*",
19 "datasets/*/*",
20 "datasets/*/*/*",
21 "datasets/*/*/*/*",
22 "CITATION",
23 "VERSION",
24 "WT5_VERSION",
25 ]
26 }
27
28 with open(os.path.join(here, "WrightTools", "VERSION")) as version_file:
29 version = version_file.read().strip()
30
31 docs_require = ["sphinx", "sphinx-gallery==0.8.2", "sphinx-rtd-theme"]
32
33 setup(
34 name="WrightTools",
35 packages=find_packages(exclude=("tests", "tests.*")),
36 package_data=extra_files,
37 python_requires=">=3.7",
38 install_requires=[
39 "h5py",
40 "imageio",
41 "matplotlib>=3.4.0",
42 "numexpr",
43 "numpy>=1.15.0",
44 "pint",
45 "python-dateutil",
46 "scipy",
47 "tidy_headers>=1.0.0",
48 ],
49 extras_require={
50 "docs": docs_require,
51 "dev": [
52 "black",
53 "pre-commit",
54 "pydocstyle",
55 "pytest",
56 "pytest-cov",
57 "databroker>=1.2",
58 "msgpack",
59 ]
60 + docs_require,
61 },
62 version=version,
63 description="Tools for loading, processing, and plotting multidimensional spectroscopy data.",
64 long_description=read("README.rst"),
65 author="WrightTools Developers",
66 license="MIT",
67 url="http://wright.tools",
68 keywords="spectroscopy science multidimensional visualization",
69 entry_points={"console_scripts": ["wt-tree=WrightTools.__main__:wt_tree"]},
70 classifiers=[
71 "Development Status :: 5 - Production/Stable",
72 "Intended Audience :: Science/Research",
73 "License :: OSI Approved :: MIT License",
74 "Framework :: Matplotlib",
75 "Natural Language :: English",
76 "Programming Language :: Python :: 3",
77 "Programming Language :: Python :: 3.7",
78 "Programming Language :: Python :: 3.8",
79 "Programming Language :: Python :: 3.9",
80 "Topic :: Scientific/Engineering",
81 ],
82 )
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -74,9 +74,10 @@
"Framework :: Matplotlib",
"Natural Language :: English",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering",
],
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -74,9 +74,10 @@\n \"Framework :: Matplotlib\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n+ \"Programming Language :: Python :: 3.10\",\n+ \"Programming Language :: Python :: 3.11\",\n \"Topic :: Scientific/Engineering\",\n ],\n )\n", "issue": "shift supported Python 3 versions\nSince users are increasingly relying on 3.10 and 3.11, I propose we move testing from 3.7-9 to 3.8-11.\r\n\n", "before_files": [{"content": "#! /usr/bin/env python3\n\nimport os\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(fname):\n with open(os.path.join(here, fname)) as f:\n return f.read()\n\n\nextra_files = {\n \"WrightTools\": [\n \"datasets\",\n \"datasets/*\",\n \"datasets/*/*\",\n \"datasets/*/*/*\",\n \"datasets/*/*/*/*\",\n \"CITATION\",\n \"VERSION\",\n \"WT5_VERSION\",\n ]\n}\n\nwith open(os.path.join(here, \"WrightTools\", \"VERSION\")) as version_file:\n version = version_file.read().strip()\n\ndocs_require = [\"sphinx\", \"sphinx-gallery==0.8.2\", \"sphinx-rtd-theme\"]\n\nsetup(\n name=\"WrightTools\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n package_data=extra_files,\n python_requires=\">=3.7\",\n install_requires=[\n \"h5py\",\n \"imageio\",\n \"matplotlib>=3.4.0\",\n \"numexpr\",\n \"numpy>=1.15.0\",\n \"pint\",\n \"python-dateutil\",\n \"scipy\",\n \"tidy_headers>=1.0.0\",\n ],\n extras_require={\n \"docs\": docs_require,\n \"dev\": [\n \"black\",\n \"pre-commit\",\n \"pydocstyle\",\n \"pytest\",\n \"pytest-cov\",\n \"databroker>=1.2\",\n \"msgpack\",\n ]\n + docs_require,\n },\n version=version,\n description=\"Tools for loading, processing, and plotting multidimensional spectroscopy data.\",\n long_description=read(\"README.rst\"),\n author=\"WrightTools Developers\",\n license=\"MIT\",\n url=\"http://wright.tools\",\n keywords=\"spectroscopy science multidimensional visualization\",\n entry_points={\"console_scripts\": [\"wt-tree=WrightTools.__main__:wt_tree\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Framework :: Matplotlib\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#! 
/usr/bin/env python3\n\nimport os\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(fname):\n with open(os.path.join(here, fname)) as f:\n return f.read()\n\n\nextra_files = {\n \"WrightTools\": [\n \"datasets\",\n \"datasets/*\",\n \"datasets/*/*\",\n \"datasets/*/*/*\",\n \"datasets/*/*/*/*\",\n \"CITATION\",\n \"VERSION\",\n \"WT5_VERSION\",\n ]\n}\n\nwith open(os.path.join(here, \"WrightTools\", \"VERSION\")) as version_file:\n version = version_file.read().strip()\n\ndocs_require = [\"sphinx\", \"sphinx-gallery==0.8.2\", \"sphinx-rtd-theme\"]\n\nsetup(\n name=\"WrightTools\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n package_data=extra_files,\n python_requires=\">=3.7\",\n install_requires=[\n \"h5py\",\n \"imageio\",\n \"matplotlib>=3.4.0\",\n \"numexpr\",\n \"numpy>=1.15.0\",\n \"pint\",\n \"python-dateutil\",\n \"scipy\",\n \"tidy_headers>=1.0.0\",\n ],\n extras_require={\n \"docs\": docs_require,\n \"dev\": [\n \"black\",\n \"pre-commit\",\n \"pydocstyle\",\n \"pytest\",\n \"pytest-cov\",\n \"databroker>=1.2\",\n \"msgpack\",\n ]\n + docs_require,\n },\n version=version,\n description=\"Tools for loading, processing, and plotting multidimensional spectroscopy data.\",\n long_description=read(\"README.rst\"),\n author=\"WrightTools Developers\",\n license=\"MIT\",\n url=\"http://wright.tools\",\n keywords=\"spectroscopy science multidimensional visualization\",\n entry_points={\"console_scripts\": [\"wt-tree=WrightTools.__main__:wt_tree\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Framework :: Matplotlib\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Scientific/Engineering\",\n ],\n)\n", "path": "setup.py"}]} | 1,002 | 133 |
gh_patches_debug_44771 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1140 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
W2030 Default value required on conditionally included property
*cfn-lint version: 0.21.3*
CloudFormation provides the AWS::NoValue pseudo-parameter, which allows for a property to be included based on a given Condition. However, cfn-lint will still validate the potential value provided for the property, even if it will not actually be used in the deployment.
Example template:
```yaml
Parameters:
Retention:
Type: Number
Description: Retention in days for the log group (-1 for no retention)
Default: -1
Conditions:
IsRetention:
!Not [!Equals [!Ref 'Retention', '-1']]
Resources:
LogGroup:
Type: AWS::Logs::LogGroup
Properties:
LogGroupName: 'some-log-group'
RetentionInDays: !If [IsRetention, !Ref Retention, !Ref 'AWS::NoValue']
```
This template allows the user to specify the retention on a log group, or use the number -1 if they wish to have unlimited retention. This is achieved via a Condition as well as an If block that conditionally includes the property.
This leads to the following linter output:
```
cfn-lint --template template.yaml
W2030 You must specify a valid Default value for Retention (-1).
Valid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']
cloudformation/template.yaml:5:5
```
This can of course be avoided by disabling this specific check in the template Metadata block. Unfortunately it cannot be disabled in the resource Metadata, as the validation error happens on the Parameter:
```yaml
Metadata:
cfn-lint:
config:
ignore_checks:
- W2030
```
This might be a difficult situation to account for, since it would require the Condition to be evaluated to determine whether the property itself should even be checked.
--- END ISSUE ---
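The issue's closing point about needing to evaluate the Condition can be made concrete with a minimal, self-contained sketch. This is not cfn-lint's actual rule API; the function name, the path shape, and the template dict below are illustrative assumptions only. The idea is that a Ref check could be skipped when the value sits under `Fn::If` (it may resolve to `AWS::NoValue`) or when the referencing resource itself carries a `Condition` key (the whole resource may never be created):

```python
# Hypothetical helper (not cfn-lint's real API): decide whether a Ref check
# should be skipped because the referenced value may be conditioned out.
def should_skip_ref_check(template: dict, path: list) -> bool:
    if "Fn::If" in path:
        # The Ref may resolve to AWS::NoValue, so it cannot be pinned statically.
        return True
    if len(path) >= 2 and path[0] == "Resources":
        resource = template.get("Resources", {}).get(path[1], {})
        # A resource-level Condition means the resource may never be rendered.
        return "Condition" in resource
    return False


template = {
    "Resources": {
        "LogGroup": {
            "Type": "AWS::Logs::LogGroup",
            "Properties": {
                "RetentionInDays": {
                    "Fn::If": ["IsRetention", {"Ref": "Retention"}, {"Ref": "AWS::NoValue"}]
                }
            },
        }
    }
}

# Path roughly as a rule might see it while walking the template.
path = ["Resources", "LogGroup", "Properties", "RetentionInDays", "Fn::If", 1, "Ref"]
print(should_skip_ref_check(template, path))  # True -> don't flag the parameter default
```

Either signal means the parameter's value may never actually reach the property, so validating the parameter's Default against the property's allowed values would be a false positive.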
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/parameters/AllowedValue.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import six
18 from cfnlint.rules import CloudFormationLintRule
19 from cfnlint.rules import RuleMatch
20
21 from cfnlint.helpers import RESOURCE_SPECS
22
23
24 class AllowedValue(CloudFormationLintRule):
25 """Check if parameters have a valid value"""
26 id = 'W2030'
27 shortdesc = 'Check if parameters have a valid value'
28 description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'
29 source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'
30 tags = ['resources', 'property', 'allowed value']
31
32 def initialize(self, cfn):
33 """Initialize the rule"""
34 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):
35 self.resource_property_types.append(resource_type_spec)
36 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
37 self.resource_sub_property_types.append(property_type_spec)
38
39 def check_value_ref(self, value, path, **kwargs):
40 """Check Ref"""
41 matches = []
42
43 if 'Fn::If' in path:
44 self.logger.debug('Not able to guarentee that the default value hasn\'t been conditioned out')
45 return matches
46
47 allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})
48 cfn = kwargs.get('cfn')
49
50 if allowed_value_specs:
51 if value in cfn.template.get('Parameters', {}):
52 param = cfn.template.get('Parameters').get(value, {})
53 parameter_values = param.get('AllowedValues')
54 default_value = param.get('Default')
55 parameter_type = param.get('Type')
56 if isinstance(parameter_type, six.string_types):
57 if ((not parameter_type.startswith('List<')) and
58 (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and
59 parameter_type not in ['CommaDelimitedList', 'List<String>']):
60 # Check Allowed Values
61 if parameter_values:
62 for index, allowed_value in enumerate(parameter_values):
63 if str(allowed_value) not in allowed_value_specs:
64 param_path = ['Parameters', value, 'AllowedValues', index]
65 message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
66 matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
67 if default_value:
68 # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
69 if str(default_value) not in allowed_value_specs:
70 param_path = ['Parameters', value, 'Default']
71 message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
72 matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
73
74 return matches
75
76 def check(self, cfn, properties, value_specs, property_specs, path):
77 """Check itself"""
78 matches = list()
79 for p_value, p_path in properties.items_safe(path[:]):
80 for prop in p_value:
81 if prop in value_specs:
82 value = value_specs.get(prop).get('Value', {})
83 if value:
84 value_type = value.get('ValueType', '')
85 property_type = property_specs.get('Properties').get(prop).get('Type')
86 matches.extend(
87 cfn.check_value(
88 p_value, prop, p_path,
89 check_ref=self.check_value_ref,
90 value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),
91 cfn=cfn, property_type=property_type, property_name=prop
92 )
93 )
94
95 return matches
96
97 def match_resource_sub_properties(self, properties, property_type, path, cfn):
98 """Match for sub properties"""
99 matches = list()
100
101 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})
102 property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)
103 matches.extend(self.check(cfn, properties, specs, property_specs, path))
104
105 return matches
106
107 def match_resource_properties(self, properties, resource_type, path, cfn):
108 """Check CloudFormation Properties"""
109 matches = list()
110
111 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})
112 resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)
113 matches.extend(self.check(cfn, properties, specs, resource_specs, path))
114
115 return matches
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py
--- a/src/cfnlint/rules/parameters/AllowedValue.py
+++ b/src/cfnlint/rules/parameters/AllowedValue.py
@@ -40,12 +40,19 @@
"""Check Ref"""
matches = []
+ cfn = kwargs.get('cfn')
if 'Fn::If' in path:
- self.logger.debug('Not able to guarentee that the default value hasn\'t been conditioned out')
+ self.logger.debug(
+ 'Not able to guarentee that the default value hasn\'t been conditioned out')
+ return matches
+ if path[0] == 'Resources' and 'Condition' in cfn.template.get(
+ path[0], {}).get(path[1]):
+ self.logger.debug(
+ 'Not able to guarentee that the default value '
+ 'hasn\'t been conditioned out')
return matches
allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})
- cfn = kwargs.get('cfn')
if allowed_value_specs:
if value in cfn.template.get('Parameters', {}):
@@ -63,13 +70,15 @@
if str(allowed_value) not in allowed_value_specs:
param_path = ['Parameters', value, 'AllowedValues', index]
message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
- matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
+ matches.append(RuleMatch(param_path, message.format(
+ value, allowed_value, allowed_value_specs)))
if default_value:
# Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
if str(default_value) not in allowed_value_specs:
param_path = ['Parameters', value, 'Default']
message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
- matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
+ matches.append(RuleMatch(param_path, message.format(
+ value, default_value, allowed_value_specs)))
return matches
@@ -87,7 +96,8 @@
cfn.check_value(
p_value, prop, p_path,
check_ref=self.check_value_ref,
- value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),
+ value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(
+ 'ValueTypes').get(value_type, {}),
cfn=cfn, property_type=property_type, property_name=prop
)
)
@@ -98,7 +108,8 @@
"""Match for sub properties"""
matches = list()
- specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})
+ specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
+ 'PropertyTypes').get(property_type, {}).get('Properties', {})
property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)
matches.extend(self.check(cfn, properties, specs, property_specs, path))
@@ -108,7 +119,8 @@
"""Check CloudFormation Properties"""
matches = list()
- specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})
+ specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
+ 'ResourceTypes').get(resource_type, {}).get('Properties', {})
resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)
matches.extend(self.check(cfn, properties, specs, resource_specs, path))
| {"golden_diff": "diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py\n--- a/src/cfnlint/rules/parameters/AllowedValue.py\n+++ b/src/cfnlint/rules/parameters/AllowedValue.py\n@@ -40,12 +40,19 @@\n \"\"\"Check Ref\"\"\"\n matches = []\n \n+ cfn = kwargs.get('cfn')\n if 'Fn::If' in path:\n- self.logger.debug('Not able to guarentee that the default value hasn\\'t been conditioned out')\n+ self.logger.debug(\n+ 'Not able to guarentee that the default value hasn\\'t been conditioned out')\n+ return matches\n+ if path[0] == 'Resources' and 'Condition' in cfn.template.get(\n+ path[0], {}).get(path[1]):\n+ self.logger.debug(\n+ 'Not able to guarentee that the default value '\n+ 'hasn\\'t been conditioned out')\n return matches\n \n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n- cfn = kwargs.get('cfn')\n \n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n@@ -63,13 +70,15 @@\n if str(allowed_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n- matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n+ matches.append(RuleMatch(param_path, message.format(\n+ value, allowed_value, allowed_value_specs)))\n if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n- matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n+ matches.append(RuleMatch(param_path, message.format(\n+ value, default_value, allowed_value_specs)))\n \n return matches\n \n@@ -87,7 +96,8 @@\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n- value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n+ value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(\n+ 'ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n@@ -98,7 +108,8 @@\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n \n- specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n+ specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n+ 'PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n \n@@ -108,7 +119,8 @@\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n \n- specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n+ specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n+ 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n", "issue": "W2030 Default value required on conditionally included property\n*cfn-lint version: 0.21.3*\r\n\r\nCloudFormation provides the AWS::NoValue pseudo-parameter, which allows for a property to be included based on a given Condition. 
However, cfn-lint will still validate the potential value provided for the property, even if it will not actually be used in the deployment.\r\n\r\nExample template:\r\n\r\n```yaml\r\nParameters:\r\n Retention:\r\n Type: Number\r\n Description: Retention in days for the log group (-1 for no retention)\r\n Default: -1\r\nConditions:\r\n IsRetention: \r\n !Not [!Equals [!Ref 'Retention', '-1']]\r\nResources:\r\n LogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties:\r\n LogGroupName: 'some-log-group'\r\n RetentionInDays: !If [IsRetention, !Ref Retention, !Ref 'AWS::NoValue']\r\n```\r\n\r\nThis template allows the user to specify the retention on a log group, or use the number -1 if they wish to have unlimited retention. This is achieved via a Condition as well as an If block that conditionally includes the property.\r\n\r\nThis leads to the following linter output:\r\n\r\n```\r\ncfn-lint --template template.yaml\r\nW2030 You must specify a valid Default value for Retention (-1). \r\nValid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']\r\ncloudformation/template.yaml:5:5\r\n```\r\n\r\nThis can of course be avoided by disabling this specific check in the template Metadata block. Unfortunately it cannot be disabled in the resource Metadata, as the validation error happens on the Parameter:\r\n\r\n```yaml\r\nMetadata:\r\n cfn-lint:\r\n config:\r\n ignore_checks:\r\n - W2030\r\n```\r\n\r\nThis might be a difficult situation to account for, since it would require the Condition to be evaluated to determine whether the property itself should even be checked.\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass AllowedValue(CloudFormationLintRule):\n \"\"\"Check if parameters have a valid value\"\"\"\n id = 'W2030'\n shortdesc = 'Check if parameters have a valid value'\n description = 'Check if parameters have a valid value in case of an enumator. 
The Parameter''s allowed values is based on the usages in property (Ref)'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'allowed value']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def check_value_ref(self, value, path, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n\n if 'Fn::If' in path:\n self.logger.debug('Not able to guarentee that the default value hasn\\'t been conditioned out')\n return matches\n\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n cfn = kwargs.get('cfn')\n\n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n parameter_type = param.get('Type')\n if isinstance(parameter_type, six.string_types):\n if ((not parameter_type.startswith('List<')) and\n (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and\n parameter_type not in ['CommaDelimitedList', 'List<String>']):\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n if str(allowed_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs 
= RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/AllowedValue.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass AllowedValue(CloudFormationLintRule):\n \"\"\"Check if parameters have a valid value\"\"\"\n id = 'W2030'\n shortdesc = 'Check if parameters have a valid value'\n description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'allowed value']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def check_value_ref(self, value, path, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n\n cfn = kwargs.get('cfn')\n if 'Fn::If' in path:\n self.logger.debug(\n 'Not able to guarentee that the default value hasn\\'t been conditioned out')\n return matches\n if path[0] == 'Resources' and 'Condition' in cfn.template.get(\n path[0], {}).get(path[1]):\n self.logger.debug(\n 'Not able to guarentee that the default value '\n 'hasn\\'t been conditioned out')\n return matches\n\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n\n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n parameter_type = param.get('Type')\n if isinstance(parameter_type, six.string_types):\n if ((not parameter_type.startswith('List<')) and\n (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and\n parameter_type not in ['CommaDelimitedList', 'List<String>']):\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n if str(allowed_value) 
not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(\n value, allowed_value, allowed_value_specs)))\n if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(\n value, default_value, allowed_value_specs)))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/AllowedValue.py"}]} | 2,223 | 889 |
gh_patches_debug_63372 | rasdani/github-patches | git_diff | mkdocs__mkdocs-2481 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gh_deploy doesn't work when a config file is supplied
```
$ mkdocs gh-deploy --force --config-file mkdocs-editable.yml
...
Traceback (most recent call last):
File "/usr/local/bin/mkdocs", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/mkdocs/__main__.py", line 205, in gh_deploy_command
gh_deploy.gh_deploy(cfg, message=message, force=force, ignore_version=ignore_version, shell=shell)
File "/usr/local/lib/python3.9/site-packages/mkdocs/commands/gh_deploy.py", line 102, in gh_deploy
sha = _get_current_sha(os.path.dirname(config.config_file_path))
File "/usr/local/lib/python3.9/site-packages/mkdocs/commands/gh_deploy.py", line 32, in _get_current_sha
proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,
File "/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: ''
```
The issue is that `sha = _get_current_sha(os.path.dirname(config.config_file_path))` from `gh_deploy.py` returns an empty string for `dirname` if a relative config file path is passed in.
Workaround: `--config-file $(pwd)/mkdocs-editable.yml`
--- END ISSUE ---
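To make the root cause concrete, here is a small reproduction sketch (the file name is illustrative, and it assumes git is installed and the script runs inside a git checkout): `os.path.dirname()` of a bare relative filename is the empty string, and `subprocess.Popen` rejects `cwd=''` with the `FileNotFoundError` shown in the traceback. Coercing the empty string to `None`, i.e. "use the current working directory", avoids the crash and matches what an absolute `--config-file` path effectively gives you:

```python
import os
import subprocess

config_file_path = "mkdocs-editable.yml"   # relative path with no directory part
repo_path = os.path.dirname(config_file_path)
print(repr(repo_path))  # '' -> Popen(cwd='') raises FileNotFoundError

# Falling back to None keeps the current working directory instead of ''.
proc = subprocess.Popen(
    ["git", "rev-parse", "--short", "HEAD"],
    cwd=repo_path or None,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
stdout, _ = proc.communicate()
print(stdout.decode("utf-8").strip())
```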
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/commands/gh_deploy.py`
Content:
```
1 import logging
2 import subprocess
3 import os
4 import re
5 from packaging import version
6
7 import mkdocs
8 import ghp_import
9 from mkdocs.exceptions import Abort
10
11 log = logging.getLogger(__name__)
12
13 default_message = """Deployed {sha} with MkDocs version: {version}"""
14
15
16 def _is_cwd_git_repo():
17 try:
18 proc = subprocess.Popen(
19 ['git', 'rev-parse', '--is-inside-work-tree'],
20 stdout=subprocess.PIPE,
21 stderr=subprocess.PIPE
22 )
23 except FileNotFoundError:
24 log.error("Could not find git - is it installed and on your path?")
25 raise Abort('Deployment Aborted!')
26 proc.communicate()
27 return proc.wait() == 0
28
29
30 def _get_current_sha(repo_path):
31
32 proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,
33 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
34
35 stdout, _ = proc.communicate()
36 sha = stdout.decode('utf-8').strip()
37 return sha
38
39
40 def _get_remote_url(remote_name):
41
42 # No CNAME found. We will use the origin URL to determine the GitHub
43 # pages location.
44 remote = f"remote.{remote_name}.url"
45 proc = subprocess.Popen(["git", "config", "--get", remote],
46 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
47
48 stdout, _ = proc.communicate()
49 url = stdout.decode('utf-8').strip()
50
51 host = None
52 path = None
53 if 'github.com/' in url:
54 host, path = url.split('github.com/', 1)
55 elif 'github.com:' in url:
56 host, path = url.split('github.com:', 1)
57
58 return host, path
59
60
61 def _check_version(branch):
62
63 proc = subprocess.Popen(['git', 'show', '-s', '--format=%s', f'refs/heads/{branch}'],
64 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
65
66 stdout, _ = proc.communicate()
67 msg = stdout.decode('utf-8').strip()
68 m = re.search(r'\d+(\.\d+)+((a|b|rc)\d+)?(\.post\d+)?(\.dev\d+)?', msg, re.X | re.I)
69 previousv = version.parse(m.group()) if m else None
70 currentv = version.parse(mkdocs.__version__)
71 if not previousv:
72 log.warning('Version check skipped: No version specified in previous deployment.')
73 elif currentv > previousv:
74 log.info(
75 f'Previous deployment was done with MkDocs version {previousv}; '
76 f'you are deploying with a newer version ({currentv})'
77 )
78 elif currentv < previousv:
79 log.error(
80 f'Deployment terminated: Previous deployment was made with MkDocs version {previousv}; '
81 f'you are attempting to deploy with an older version ({currentv}). Use --ignore-version '
82 'to deploy anyway.'
83 )
84 raise Abort('Deployment Aborted!')
85
86
87 def gh_deploy(config, message=None, force=False, ignore_version=False, shell=False):
88
89 if not _is_cwd_git_repo():
90 log.error('Cannot deploy - this directory does not appear to be a git '
91 'repository')
92
93 remote_branch = config['remote_branch']
94 remote_name = config['remote_name']
95
96 if not ignore_version:
97 _check_version(remote_branch)
98
99 if message is None:
100 message = default_message
101 sha = _get_current_sha(os.path.dirname(config.config_file_path))
102 message = message.format(version=mkdocs.__version__, sha=sha)
103
104 log.info("Copying '%s' to '%s' branch and pushing to GitHub.",
105 config['site_dir'], config['remote_branch'])
106
107 try:
108 ghp_import.ghp_import(
109 config['site_dir'],
110 mesg=message,
111 remote=remote_name,
112 branch=remote_branch,
113 push=True,
114 force=force,
115 use_shell=shell,
116 nojekyll=True
117 )
118 except ghp_import.GhpError as e:
119 log.error("Failed to deploy to GitHub with error: \n{}".format(e.message))
120 raise Abort('Deployment Aborted!')
121
122 cname_file = os.path.join(config['site_dir'], 'CNAME')
123 # Does this repository have a CNAME set for GitHub pages?
124 if os.path.isfile(cname_file):
125 # This GitHub pages repository has a CNAME configured.
126 with(open(cname_file, 'r')) as f:
127 cname_host = f.read().strip()
128 log.info(f'Based on your CNAME file, your documentation should be '
129 f'available shortly at: http://{cname_host}')
130 log.info('NOTE: Your DNS records must be configured appropriately for '
131 'your CNAME URL to work.')
132 return
133
134 host, path = _get_remote_url(remote_name)
135
136 if host is None:
137 # This could be a GitHub Enterprise deployment.
138 log.info('Your documentation should be available shortly.')
139 else:
140 username, repo = path.split('/', 1)
141 if repo.endswith('.git'):
142 repo = repo[:-len('.git')]
143 url = f'https://{username}.github.io/{repo}/'
144 log.info(f"Your documentation should shortly be available at: {url}")
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/commands/gh_deploy.py b/mkdocs/commands/gh_deploy.py
--- a/mkdocs/commands/gh_deploy.py
+++ b/mkdocs/commands/gh_deploy.py
@@ -29,7 +29,7 @@
def _get_current_sha(repo_path):
- proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,
+ proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path or None,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, _ = proc.communicate()
| {"golden_diff": "diff --git a/mkdocs/commands/gh_deploy.py b/mkdocs/commands/gh_deploy.py\n--- a/mkdocs/commands/gh_deploy.py\n+++ b/mkdocs/commands/gh_deploy.py\n@@ -29,7 +29,7 @@\n \n def _get_current_sha(repo_path):\n \n- proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,\n+ proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path or None,\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n \n stdout, _ = proc.communicate()\n", "issue": "gh_deploy doesn't work when a config file is supplied\n```\r\n$ mkdocs gh-deploy --force --config-file mkdocs-editable.yml\r\n...\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/mkdocs\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1137, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1062, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1668, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.9/site-packages/click/core.py\", line 763, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/mkdocs/__main__.py\", line 205, in gh_deploy_command\r\n gh_deploy.gh_deploy(cfg, message=message, force=force, ignore_version=ignore_version, shell=shell)\r\n File \"/usr/local/lib/python3.9/site-packages/mkdocs/commands/gh_deploy.py\", line 102, in gh_deploy\r\n sha = _get_current_sha(os.path.dirname(config.config_file_path))\r\n File \"/usr/local/lib/python3.9/site-packages/mkdocs/commands/gh_deploy.py\", line 32, in _get_current_sha\r\n proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,\r\n File \"/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py\", line 951, in __init__\r\n self._execute_child(args, executable, preexec_fn, close_fds,\r\n File \"/usr/local/Cellar/[email protected]/3.9.4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py\", line 1821, in _execute_child\r\n raise child_exception_type(errno_num, err_msg, err_filename)\r\nFileNotFoundError: [Errno 2] No such file or directory: ''\r\n```\r\n\r\nThe issue is that `sha = _get_current_sha(os.path.dirname(config.config_file_path))` from `gh_deploy.py` returns an empty string for `dirname` if a relative config file path is passed in.\r\n\r\nWorkaround: `--config-file $(pwd)/mkdocs-editable.yml`\r\n\n", "before_files": [{"content": "import logging\nimport subprocess\nimport os\nimport re\nfrom packaging import version\n\nimport mkdocs\nimport ghp_import\nfrom mkdocs.exceptions import Abort\n\nlog = logging.getLogger(__name__)\n\ndefault_message = \"\"\"Deployed {sha} with MkDocs version: {version}\"\"\"\n\n\ndef _is_cwd_git_repo():\n try:\n proc = subprocess.Popen(\n ['git', 'rev-parse', '--is-inside-work-tree'],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE\n )\n except FileNotFoundError:\n log.error(\"Could not find git - is it installed and on your path?\")\n raise Abort('Deployment Aborted!')\n proc.communicate()\n return proc.wait() == 0\n\n\ndef _get_current_sha(repo_path):\n\n proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path,\n stdout=subprocess.PIPE, 
stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n sha = stdout.decode('utf-8').strip()\n return sha\n\n\ndef _get_remote_url(remote_name):\n\n # No CNAME found. We will use the origin URL to determine the GitHub\n # pages location.\n remote = f\"remote.{remote_name}.url\"\n proc = subprocess.Popen([\"git\", \"config\", \"--get\", remote],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n url = stdout.decode('utf-8').strip()\n\n host = None\n path = None\n if 'github.com/' in url:\n host, path = url.split('github.com/', 1)\n elif 'github.com:' in url:\n host, path = url.split('github.com:', 1)\n\n return host, path\n\n\ndef _check_version(branch):\n\n proc = subprocess.Popen(['git', 'show', '-s', '--format=%s', f'refs/heads/{branch}'],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n msg = stdout.decode('utf-8').strip()\n m = re.search(r'\\d+(\\.\\d+)+((a|b|rc)\\d+)?(\\.post\\d+)?(\\.dev\\d+)?', msg, re.X | re.I)\n previousv = version.parse(m.group()) if m else None\n currentv = version.parse(mkdocs.__version__)\n if not previousv:\n log.warning('Version check skipped: No version specified in previous deployment.')\n elif currentv > previousv:\n log.info(\n f'Previous deployment was done with MkDocs version {previousv}; '\n f'you are deploying with a newer version ({currentv})'\n )\n elif currentv < previousv:\n log.error(\n f'Deployment terminated: Previous deployment was made with MkDocs version {previousv}; '\n f'you are attempting to deploy with an older version ({currentv}). Use --ignore-version '\n 'to deploy anyway.'\n )\n raise Abort('Deployment Aborted!')\n\n\ndef gh_deploy(config, message=None, force=False, ignore_version=False, shell=False):\n\n if not _is_cwd_git_repo():\n log.error('Cannot deploy - this directory does not appear to be a git '\n 'repository')\n\n remote_branch = config['remote_branch']\n remote_name = config['remote_name']\n\n if not ignore_version:\n _check_version(remote_branch)\n\n if message is None:\n message = default_message\n sha = _get_current_sha(os.path.dirname(config.config_file_path))\n message = message.format(version=mkdocs.__version__, sha=sha)\n\n log.info(\"Copying '%s' to '%s' branch and pushing to GitHub.\",\n config['site_dir'], config['remote_branch'])\n\n try:\n ghp_import.ghp_import(\n config['site_dir'],\n mesg=message,\n remote=remote_name,\n branch=remote_branch,\n push=True,\n force=force,\n use_shell=shell,\n nojekyll=True\n )\n except ghp_import.GhpError as e:\n log.error(\"Failed to deploy to GitHub with error: \\n{}\".format(e.message))\n raise Abort('Deployment Aborted!')\n\n cname_file = os.path.join(config['site_dir'], 'CNAME')\n # Does this repository have a CNAME set for GitHub pages?\n if os.path.isfile(cname_file):\n # This GitHub pages repository has a CNAME configured.\n with(open(cname_file, 'r')) as f:\n cname_host = f.read().strip()\n log.info(f'Based on your CNAME file, your documentation should be '\n f'available shortly at: http://{cname_host}')\n log.info('NOTE: Your DNS records must be configured appropriately for '\n 'your CNAME URL to work.')\n return\n\n host, path = _get_remote_url(remote_name)\n\n if host is None:\n # This could be a GitHub Enterprise deployment.\n log.info('Your documentation should be available shortly.')\n else:\n username, repo = path.split('/', 1)\n if repo.endswith('.git'):\n repo = repo[:-len('.git')]\n url = f'https://{username}.github.io/{repo}/'\n log.info(f\"Your documentation should shortly be 
available at: {url}\")\n", "path": "mkdocs/commands/gh_deploy.py"}], "after_files": [{"content": "import logging\nimport subprocess\nimport os\nimport re\nfrom packaging import version\n\nimport mkdocs\nimport ghp_import\nfrom mkdocs.exceptions import Abort\n\nlog = logging.getLogger(__name__)\n\ndefault_message = \"\"\"Deployed {sha} with MkDocs version: {version}\"\"\"\n\n\ndef _is_cwd_git_repo():\n try:\n proc = subprocess.Popen(\n ['git', 'rev-parse', '--is-inside-work-tree'],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE\n )\n except FileNotFoundError:\n log.error(\"Could not find git - is it installed and on your path?\")\n raise Abort('Deployment Aborted!')\n proc.communicate()\n return proc.wait() == 0\n\n\ndef _get_current_sha(repo_path):\n\n proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], cwd=repo_path or None,\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n sha = stdout.decode('utf-8').strip()\n return sha\n\n\ndef _get_remote_url(remote_name):\n\n # No CNAME found. We will use the origin URL to determine the GitHub\n # pages location.\n remote = f\"remote.{remote_name}.url\"\n proc = subprocess.Popen([\"git\", \"config\", \"--get\", remote],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n url = stdout.decode('utf-8').strip()\n\n host = None\n path = None\n if 'github.com/' in url:\n host, path = url.split('github.com/', 1)\n elif 'github.com:' in url:\n host, path = url.split('github.com:', 1)\n\n return host, path\n\n\ndef _check_version(branch):\n\n proc = subprocess.Popen(['git', 'show', '-s', '--format=%s', f'refs/heads/{branch}'],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n msg = stdout.decode('utf-8').strip()\n m = re.search(r'\\d+(\\.\\d+)+((a|b|rc)\\d+)?(\\.post\\d+)?(\\.dev\\d+)?', msg, re.X | re.I)\n previousv = version.parse(m.group()) if m else None\n currentv = version.parse(mkdocs.__version__)\n if not previousv:\n log.warning('Version check skipped: No version specified in previous deployment.')\n elif currentv > previousv:\n log.info(\n f'Previous deployment was done with MkDocs version {previousv}; '\n f'you are deploying with a newer version ({currentv})'\n )\n elif currentv < previousv:\n log.error(\n f'Deployment terminated: Previous deployment was made with MkDocs version {previousv}; '\n f'you are attempting to deploy with an older version ({currentv}). 
Use --ignore-version '\n 'to deploy anyway.'\n )\n raise Abort('Deployment Aborted!')\n\n\ndef gh_deploy(config, message=None, force=False, ignore_version=False, shell=False):\n\n if not _is_cwd_git_repo():\n log.error('Cannot deploy - this directory does not appear to be a git '\n 'repository')\n\n remote_branch = config['remote_branch']\n remote_name = config['remote_name']\n\n if not ignore_version:\n _check_version(remote_branch)\n\n if message is None:\n message = default_message\n sha = _get_current_sha(os.path.dirname(config.config_file_path))\n message = message.format(version=mkdocs.__version__, sha=sha)\n\n log.info(\"Copying '%s' to '%s' branch and pushing to GitHub.\",\n config['site_dir'], config['remote_branch'])\n\n try:\n ghp_import.ghp_import(\n config['site_dir'],\n mesg=message,\n remote=remote_name,\n branch=remote_branch,\n push=True,\n force=force,\n use_shell=shell,\n nojekyll=True\n )\n except ghp_import.GhpError as e:\n log.error(\"Failed to deploy to GitHub with error: \\n{}\".format(e.message))\n raise Abort('Deployment Aborted!')\n\n cname_file = os.path.join(config['site_dir'], 'CNAME')\n # Does this repository have a CNAME set for GitHub pages?\n if os.path.isfile(cname_file):\n # This GitHub pages repository has a CNAME configured.\n with(open(cname_file, 'r')) as f:\n cname_host = f.read().strip()\n log.info(f'Based on your CNAME file, your documentation should be '\n f'available shortly at: http://{cname_host}')\n log.info('NOTE: Your DNS records must be configured appropriately for '\n 'your CNAME URL to work.')\n return\n\n host, path = _get_remote_url(remote_name)\n\n if host is None:\n # This could be a GitHub Enterprise deployment.\n log.info('Your documentation should be available shortly.')\n else:\n username, repo = path.split('/', 1)\n if repo.endswith('.git'):\n repo = repo[:-len('.git')]\n url = f'https://{username}.github.io/{repo}/'\n log.info(f\"Your documentation should shortly be available at: {url}\")\n", "path": "mkdocs/commands/gh_deploy.py"}]} | 2,343 | 139 |
gh_patches_debug_35416 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-1664 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] New yooksmart D10110 inverted with quirk
**Describe the bug**
I purchased a new yooksmart D10110 cover and paired it with Home Assistant. The controls
seemed inverted and I had to move the bar twice in order to get it to move. I read past reports
suggesting to unpair and pair again, and tried multiple times with no luck.
So I disabled the quirk (apologies for the brute force: moved the file to a different directory
and reloaded) and it works now. For completeness:
Before:
- buttons up and down wouldn't work
- the available button was inverted (e.g. the cover was all the way down, yet the down button was the one enabled)
- in order to control the cover, I'd move the progress bar all the way to 0 or to 100 and then to the opposite end to get it to move
After:
- buttons up and down work
- enabled button matches the direction of the cover: if open, it shows down button enabled
**To Reproduce**
Behavior is consistent across multiple pair/unpair cycles and full Home Assistant instance restarts.
**Additional context**
Since the cover is new, it is possible that they corrected the behavior in their firmware
and the quirk isn't needed anymore.
This device reports Firmware: 0x10013001
I can provide any debugging information necessary. I'm using the official Home Assistant virtual machine image and keeping
it up to date.
Edited: formatting
--- END ISSUE ---
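As a quick illustration of the reporter's hypothesis (purely a sketch, not taken from the quirk or the firmware): if newer firmware already reports `current_position_lift_percentage` the way the integration expects, the quirk's `100 - value` translation flips every report, which would produce exactly the inverted buttons described above.

```python
# Sketch only: what an inversion quirk does to an already-correct report.
def with_inversion_quirk(reported_percent: int) -> int:
    return 100 - reported_percent

for reported_percent in (0, 25, 100):
    print(f"firmware reports {reported_percent} -> quirk forwards {with_inversion_quirk(reported_percent)}")
```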
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zhaquirks/yooksmart/D10110blinds.py`
Content:
```
1 """Device handler for Yooksmart D10110 roller blinds."""
2 from zigpy.profiles import zha
3 from zigpy.quirks import CustomCluster, CustomDevice
4 from zigpy.zcl.clusters.closures import WindowCovering
5 from zigpy.zcl.clusters.general import (
6 Basic,
7 Groups,
8 Identify,
9 Ota,
10 PollControl,
11 PowerConfiguration,
12 Scenes,
13 )
14
15 from zhaquirks.const import (
16 DEVICE_TYPE,
17 ENDPOINTS,
18 INPUT_CLUSTERS,
19 MODELS_INFO,
20 OUTPUT_CLUSTERS,
21 PROFILE_ID,
22 )
23
24
25 class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):
26 """WindowCovering cluster implementation.
27
28 This implementation inverts the reported covering percent for non standard
29 devices that don't follow the reporting spec.
30 """
31
32 cluster_id = WindowCovering.cluster_id
33 CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008
34
35 def _update_attribute(self, attrid, value):
36 if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:
37 value = 100 - value
38 super()._update_attribute(attrid, value)
39
40
41 class D10110Blinds(CustomDevice):
42 """Custom device representing Yooksmart D10110 roller blinds."""
43
44 signature = {
45 # <SimpleDescriptor endpoint=1 profile=260 device_type=514
46 # device_version=1
47 # input_clusters=[0, 1, 3, 4, 5, 32, 258]
48 # output_clusters=[3, 25]>
49 MODELS_INFO: [
50 ("yooksmart", "D10110"),
51 ],
52 ENDPOINTS: {
53 1: {
54 PROFILE_ID: zha.PROFILE_ID,
55 DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
56 INPUT_CLUSTERS: [
57 Basic.cluster_id,
58 PowerConfiguration.cluster_id,
59 Identify.cluster_id,
60 Groups.cluster_id,
61 Scenes.cluster_id,
62 PollControl.cluster_id,
63 WindowCovering.cluster_id,
64 ],
65 OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
66 }
67 },
68 }
69
70 replacement = {
71 ENDPOINTS: {
72 1: {
73 PROFILE_ID: zha.PROFILE_ID,
74 DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
75 INPUT_CLUSTERS: [
76 Basic.cluster_id,
77 PowerConfiguration.cluster_id,
78 Identify.cluster_id,
79 Groups.cluster_id,
80 Scenes.cluster_id,
81 PollControl.cluster_id,
82 InvertedWindowCoveringCluster,
83 ],
84 OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
85 }
86 }
87 }
88
```
Path: `zhaquirks/yooksmart/__init__.py`
Content:
```
1 """Yooksmart module for custom device handlers."""
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zhaquirks/yooksmart/D10110blinds.py b/zhaquirks/yooksmart/D10110blinds.py
deleted file mode 100644
--- a/zhaquirks/yooksmart/D10110blinds.py
+++ /dev/null
@@ -1,87 +0,0 @@
-"""Device handler for Yooksmart D10110 roller blinds."""
-from zigpy.profiles import zha
-from zigpy.quirks import CustomCluster, CustomDevice
-from zigpy.zcl.clusters.closures import WindowCovering
-from zigpy.zcl.clusters.general import (
- Basic,
- Groups,
- Identify,
- Ota,
- PollControl,
- PowerConfiguration,
- Scenes,
-)
-
-from zhaquirks.const import (
- DEVICE_TYPE,
- ENDPOINTS,
- INPUT_CLUSTERS,
- MODELS_INFO,
- OUTPUT_CLUSTERS,
- PROFILE_ID,
-)
-
-
-class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):
- """WindowCovering cluster implementation.
-
- This implementation inverts the reported covering percent for non standard
- devices that don't follow the reporting spec.
- """
-
- cluster_id = WindowCovering.cluster_id
- CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008
-
- def _update_attribute(self, attrid, value):
- if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:
- value = 100 - value
- super()._update_attribute(attrid, value)
-
-
-class D10110Blinds(CustomDevice):
- """Custom device representing Yooksmart D10110 roller blinds."""
-
- signature = {
- # <SimpleDescriptor endpoint=1 profile=260 device_type=514
- # device_version=1
- # input_clusters=[0, 1, 3, 4, 5, 32, 258]
- # output_clusters=[3, 25]>
- MODELS_INFO: [
- ("yooksmart", "D10110"),
- ],
- ENDPOINTS: {
- 1: {
- PROFILE_ID: zha.PROFILE_ID,
- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
- INPUT_CLUSTERS: [
- Basic.cluster_id,
- PowerConfiguration.cluster_id,
- Identify.cluster_id,
- Groups.cluster_id,
- Scenes.cluster_id,
- PollControl.cluster_id,
- WindowCovering.cluster_id,
- ],
- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
- }
- },
- }
-
- replacement = {
- ENDPOINTS: {
- 1: {
- PROFILE_ID: zha.PROFILE_ID,
- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,
- INPUT_CLUSTERS: [
- Basic.cluster_id,
- PowerConfiguration.cluster_id,
- Identify.cluster_id,
- Groups.cluster_id,
- Scenes.cluster_id,
- PollControl.cluster_id,
- InvertedWindowCoveringCluster,
- ],
- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],
- }
- }
- }
diff --git a/zhaquirks/yooksmart/__init__.py b/zhaquirks/yooksmart/__init__.py
deleted file mode 100644
--- a/zhaquirks/yooksmart/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Yooksmart module for custom device handlers."""
| {"golden_diff": "diff --git a/zhaquirks/yooksmart/D10110blinds.py b/zhaquirks/yooksmart/D10110blinds.py\ndeleted file mode 100644\n--- a/zhaquirks/yooksmart/D10110blinds.py\n+++ /dev/null\n@@ -1,87 +0,0 @@\n-\"\"\"Device handler for Yooksmart D10110 roller blinds.\"\"\"\n-from zigpy.profiles import zha\n-from zigpy.quirks import CustomCluster, CustomDevice\n-from zigpy.zcl.clusters.closures import WindowCovering\n-from zigpy.zcl.clusters.general import (\n- Basic,\n- Groups,\n- Identify,\n- Ota,\n- PollControl,\n- PowerConfiguration,\n- Scenes,\n-)\n-\n-from zhaquirks.const import (\n- DEVICE_TYPE,\n- ENDPOINTS,\n- INPUT_CLUSTERS,\n- MODELS_INFO,\n- OUTPUT_CLUSTERS,\n- PROFILE_ID,\n-)\n-\n-\n-class InvertedWindowCoveringCluster(CustomCluster, WindowCovering):\n- \"\"\"WindowCovering cluster implementation.\n-\n- This implementation inverts the reported covering percent for non standard\n- devices that don't follow the reporting spec.\n- \"\"\"\n-\n- cluster_id = WindowCovering.cluster_id\n- CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008\n-\n- def _update_attribute(self, attrid, value):\n- if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:\n- value = 100 - value\n- super()._update_attribute(attrid, value)\n-\n-\n-class D10110Blinds(CustomDevice):\n- \"\"\"Custom device representing Yooksmart D10110 roller blinds.\"\"\"\n-\n- signature = {\n- # <SimpleDescriptor endpoint=1 profile=260 device_type=514\n- # device_version=1\n- # input_clusters=[0, 1, 3, 4, 5, 32, 258]\n- # output_clusters=[3, 25]>\n- MODELS_INFO: [\n- (\"yooksmart\", \"D10110\"),\n- ],\n- ENDPOINTS: {\n- 1: {\n- PROFILE_ID: zha.PROFILE_ID,\n- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n- INPUT_CLUSTERS: [\n- Basic.cluster_id,\n- PowerConfiguration.cluster_id,\n- Identify.cluster_id,\n- Groups.cluster_id,\n- Scenes.cluster_id,\n- PollControl.cluster_id,\n- WindowCovering.cluster_id,\n- ],\n- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n- }\n- },\n- }\n-\n- replacement = {\n- ENDPOINTS: {\n- 1: {\n- PROFILE_ID: zha.PROFILE_ID,\n- DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n- INPUT_CLUSTERS: [\n- Basic.cluster_id,\n- PowerConfiguration.cluster_id,\n- Identify.cluster_id,\n- Groups.cluster_id,\n- Scenes.cluster_id,\n- PollControl.cluster_id,\n- InvertedWindowCoveringCluster,\n- ],\n- OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n- }\n- }\n- }\ndiff --git a/zhaquirks/yooksmart/__init__.py b/zhaquirks/yooksmart/__init__.py\ndeleted file mode 100644\n--- a/zhaquirks/yooksmart/__init__.py\n+++ /dev/null\n@@ -1 +0,0 @@\n-\"\"\"Yooksmart module for custom device handlers.\"\"\"\n", "issue": "[BUG] New yooksmart D10110 inverted with quirk\n**Describe the bug**\r\nI purchased a new yooksmart D10110 cover and paired with home assistant. The controls\r\nseemed inverted and I had to move the bar twice in order to get it to move. I read reports\r\nin the past with the suggestion to unpair and pair again, tried multiple times with no luck.\r\nSo I disabled the quirk (apologies for the brute force: moved the file to a different directory\r\nand reloaded) and it works now. 
For completeness:\r\nBefore:\r\n- buttons up and down wouldn't work\r\n- available button would be inverted (e.g.: cover was all the way down and the down button was enabled)\r\n- in order to control the cover I'd move the progress bar all the way to 0 or to 100 then the opposite in order to work\r\nAfter:\r\n- buttons up and down work\r\n- enabled button matches the direction of the cover: if open, it shows down button enabled\r\n\r\n**To Reproduce**\r\nBehavior is consistent across multiple pair/unpair cycles and full home assistant instance restarts\r\n\r\n**Additional context**\r\nSomething that is possible, since the cover is new, is that they corrected the behavior in their firmware\r\nand the quirk isn't needed anymore.\r\nThis device has: Firmware: 0x10013001\r\n\r\nI can provide any debugging necessary. I'm using homeassistant official virtual machine image and keeping\r\nit up to date.\r\n\r\nEditted: formatting\n", "before_files": [{"content": "\"\"\"Device handler for Yooksmart D10110 roller blinds.\"\"\"\nfrom zigpy.profiles import zha\nfrom zigpy.quirks import CustomCluster, CustomDevice\nfrom zigpy.zcl.clusters.closures import WindowCovering\nfrom zigpy.zcl.clusters.general import (\n Basic,\n Groups,\n Identify,\n Ota,\n PollControl,\n PowerConfiguration,\n Scenes,\n)\n\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n)\n\n\nclass InvertedWindowCoveringCluster(CustomCluster, WindowCovering):\n \"\"\"WindowCovering cluster implementation.\n\n This implementation inverts the reported covering percent for non standard\n devices that don't follow the reporting spec.\n \"\"\"\n\n cluster_id = WindowCovering.cluster_id\n CURRENT_POSITION_LIFT_PERCENTAGE = 0x0008\n\n def _update_attribute(self, attrid, value):\n if attrid == self.CURRENT_POSITION_LIFT_PERCENTAGE:\n value = 100 - value\n super()._update_attribute(attrid, value)\n\n\nclass D10110Blinds(CustomDevice):\n \"\"\"Custom device representing Yooksmart D10110 roller blinds.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=514\n # device_version=1\n # input_clusters=[0, 1, 3, 4, 5, 32, 258]\n # output_clusters=[3, 25]>\n MODELS_INFO: [\n (\"yooksmart\", \"D10110\"),\n ],\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n PollControl.cluster_id,\n WindowCovering.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n }\n },\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.WINDOW_COVERING_DEVICE,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n PollControl.cluster_id,\n InvertedWindowCoveringCluster,\n ],\n OUTPUT_CLUSTERS: [Identify.cluster_id, Ota.cluster_id],\n }\n }\n }\n", "path": "zhaquirks/yooksmart/D10110blinds.py"}, {"content": "\"\"\"Yooksmart module for custom device handlers.\"\"\"\n", "path": "zhaquirks/yooksmart/__init__.py"}], "after_files": [{"content": null, "path": "zhaquirks/yooksmart/D10110blinds.py"}, {"content": null, "path": "zhaquirks/yooksmart/__init__.py"}]} | 1,387 | 825 |
gh_patches_debug_10567 | rasdani/github-patches | git_diff | internetarchive__openlibrary-8102 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Author dropdown not working as expected
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
Since the last deployment on July 13, the author dropdown on the edit form behaves differently.
### Evidence / Screenshot (if possible)
<img width="1012" alt="Screenshot 2023-07-13 at 08 35 17" src="https://github.com/internetarchive/openlibrary/assets/17739465/389b1544-9d04-4de1-b218-0145867ec284">
### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to ...add book or edit book form
2. Do ... try to add Plato as an author
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: the most obvious choice is missing (Plato, the philosopher). Instead, there are authors that merely have "plato" as part of the spelling of their names, or less prolific authors with the last name Plato.
* Expected: The most likely choice, probably determined by spelling and number of works, should appear on the list.
### Details
- **Logged in (Y/N)?**
- **Browser type/version?**
- **Operating system?**
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
@cdrini
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/plugins/worksearch/autocomplete.py`
Content:
```
1 import itertools
2 import web
3 import json
4
5
6 from infogami.utils import delegate
7 from infogami.utils.view import safeint
8 from openlibrary.core.models import Thing
9 from openlibrary.plugins.upstream import utils
10 from openlibrary.plugins.worksearch.search import get_solr
11 from openlibrary.utils import (
12 find_olid_in_string,
13 olid_to_key,
14 )
15
16
17 def to_json(d):
18 web.header('Content-Type', 'application/json')
19 return delegate.RawText(json.dumps(d))
20
21
22 class autocomplete(delegate.page):
23 path = "/_autocomplete"
24 fq = ['-type:edition']
25 fl = 'key,type,name,title,score'
26 olid_suffix: str | None = None
27 query = 'title:"{q}"^2 OR title:({q}*) OR name:"{q}"^2 OR name:({q}*)'
28
29 def db_fetch(self, key: str) -> Thing | None:
30 if thing := web.ctx.site.get(key):
31 return thing.as_fake_solr_record()
32 else:
33 return None
34
35 def doc_wrap(self, doc: dict):
36 """Modify the returned solr document in place."""
37 if 'name' not in doc:
38 doc['name'] = doc.get('title')
39
40 def GET(self):
41 return self.direct_get()
42
43 def direct_get(self, fq: list[str] | None = None):
44 i = web.input(q="", limit=5)
45 i.limit = safeint(i.limit, 5)
46
47 solr = get_solr()
48
49 # look for ID in query string here
50 q = solr.escape(i.q).strip()
51 embedded_olid = None
52 if self.olid_suffix:
53 embedded_olid = find_olid_in_string(q, self.olid_suffix)
54
55 if embedded_olid:
56 solr_q = f'key:"{olid_to_key(embedded_olid)}"'
57 else:
58 solr_q = self.query.format(q=q)
59
60 fq = fq or self.fq
61 params = {
62 'q_op': 'AND',
63 'rows': i.limit,
64 **({'fq': fq} if fq else {}),
65 # limit the fields returned for better performance
66 'fl': self.fl,
67 }
68
69 data = solr.select(solr_q, **params)
70 docs = data['docs']
71
72 if embedded_olid and not docs:
73 # Grumble! Work not in solr yet. Create a dummy.
74 fake_doc = self.db_fetch(olid_to_key(embedded_olid))
75 if fake_doc:
76 docs = [fake_doc]
77
78 for d in docs:
79 self.doc_wrap(d)
80
81 return to_json(docs)
82
83
84 class languages_autocomplete(delegate.page):
85 path = "/languages/_autocomplete"
86
87 def GET(self):
88 i = web.input(q="", limit=5)
89 i.limit = safeint(i.limit, 5)
90 return to_json(
91 list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))
92 )
93
94
95 class works_autocomplete(autocomplete):
96 path = "/works/_autocomplete"
97 fq = [
98 'type:work',
99 # Exclude orphaned editions from search results
100 'key:*W',
101 ]
102 fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'
103 olid_suffix = 'W'
104 query = 'title:"{q}"^2 OR title:({q}*)'
105
106 def doc_wrap(self, doc: dict):
107 doc['full_title'] = doc['title']
108 if 'subtitle' in doc:
109 doc['full_title'] += ": " + doc['subtitle']
110 doc['name'] = doc.get('title')
111
112
113 class authors_autocomplete(autocomplete):
114 path = "/authors/_autocomplete"
115 fq = ['type:author']
116 fl = 'key,name,alternate_names,birth_date,death_date,work_count,works,subjects'
117 olid_suffix = 'A'
118 query = 'name:({q}*) OR alternate_names:({q}*)'
119
120 def doc_wrap(self, doc: dict):
121 if 'top_work' in doc:
122 doc['works'] = [doc.pop('top_work')]
123 else:
124 doc['works'] = []
125 doc['subjects'] = doc.pop('top_subjects', [])
126
127
128 class subjects_autocomplete(autocomplete):
129 # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+
130 path = "/subjects_autocomplete"
131 fq = ['type:subject']
132 fl = 'key,name'
133 query = 'name:({q}*)'
134
135 def GET(self):
136 i = web.input(type="")
137 fq = self.fq
138 if i.type:
139 fq = fq + [f'subject_type:{i.type}']
140
141 return super().direct_get(fq=fq)
142
143
144 def setup():
145 """Do required setup."""
146 pass
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openlibrary/plugins/worksearch/autocomplete.py b/openlibrary/plugins/worksearch/autocomplete.py
--- a/openlibrary/plugins/worksearch/autocomplete.py
+++ b/openlibrary/plugins/worksearch/autocomplete.py
@@ -113,9 +113,9 @@
class authors_autocomplete(autocomplete):
path = "/authors/_autocomplete"
fq = ['type:author']
- fl = 'key,name,alternate_names,birth_date,death_date,work_count,works,subjects'
+ fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'
olid_suffix = 'A'
- query = 'name:({q}*) OR alternate_names:({q}*)'
+ query = 'name:({q}*) OR alternate_names:({q}*) OR name:"{q}"^2 OR alternate_names:"{q}"^2'
def doc_wrap(self, doc: dict):
if 'top_work' in doc:
| {"golden_diff": "diff --git a/openlibrary/plugins/worksearch/autocomplete.py b/openlibrary/plugins/worksearch/autocomplete.py\n--- a/openlibrary/plugins/worksearch/autocomplete.py\n+++ b/openlibrary/plugins/worksearch/autocomplete.py\n@@ -113,9 +113,9 @@\n class authors_autocomplete(autocomplete):\n path = \"/authors/_autocomplete\"\n fq = ['type:author']\n- fl = 'key,name,alternate_names,birth_date,death_date,work_count,works,subjects'\n+ fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'\n olid_suffix = 'A'\n- query = 'name:({q}*) OR alternate_names:({q}*)'\n+ query = 'name:({q}*) OR alternate_names:({q}*) OR name:\"{q}\"^2 OR alternate_names:\"{q}\"^2'\n \n def doc_wrap(self, doc: dict):\n if 'top_work' in doc:\n", "issue": "Author dropdown not working as expected\n<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->\r\nSince the last deployment on July 13, the author dropdown on the edit form behaves differently. \r\n### Evidence / Screenshot (if possible)\r\n<img width=\"1012\" alt=\"Screenshot 2023-07-13 at 08 35 17\" src=\"https://github.com/internetarchive/openlibrary/assets/17739465/389b1544-9d04-4de1-b218-0145867ec284\">\r\n\r\n\r\n### Relevant url?\r\n<!-- `https://openlibrary.org/...` -->\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to ...add book or edit book form\r\n2. Do ... try to add Plato as an author\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: the most obvious choice is missing (Plato, the philosopher). Instead, there are authors that have plato as part of the spelling of their names or less prolific authors with the last name Plato.\r\n* Expected: The most likely choice, probably determined by spelling and number of works, should appear on the list.\r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?**\r\n- **Browser type/version?**\r\n- **Operating system?**\r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. 
-->\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n@cdrini \n", "before_files": [{"content": "import itertools\nimport web\nimport json\n\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import safeint\nfrom openlibrary.core.models import Thing\nfrom openlibrary.plugins.upstream import utils\nfrom openlibrary.plugins.worksearch.search import get_solr\nfrom openlibrary.utils import (\n find_olid_in_string,\n olid_to_key,\n)\n\n\ndef to_json(d):\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(d))\n\n\nclass autocomplete(delegate.page):\n path = \"/_autocomplete\"\n fq = ['-type:edition']\n fl = 'key,type,name,title,score'\n olid_suffix: str | None = None\n query = 'title:\"{q}\"^2 OR title:({q}*) OR name:\"{q}\"^2 OR name:({q}*)'\n\n def db_fetch(self, key: str) -> Thing | None:\n if thing := web.ctx.site.get(key):\n return thing.as_fake_solr_record()\n else:\n return None\n\n def doc_wrap(self, doc: dict):\n \"\"\"Modify the returned solr document in place.\"\"\"\n if 'name' not in doc:\n doc['name'] = doc.get('title')\n\n def GET(self):\n return self.direct_get()\n\n def direct_get(self, fq: list[str] | None = None):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n\n solr = get_solr()\n\n # look for ID in query string here\n q = solr.escape(i.q).strip()\n embedded_olid = None\n if self.olid_suffix:\n embedded_olid = find_olid_in_string(q, self.olid_suffix)\n\n if embedded_olid:\n solr_q = f'key:\"{olid_to_key(embedded_olid)}\"'\n else:\n solr_q = self.query.format(q=q)\n\n fq = fq or self.fq\n params = {\n 'q_op': 'AND',\n 'rows': i.limit,\n **({'fq': fq} if fq else {}),\n # limit the fields returned for better performance\n 'fl': self.fl,\n }\n\n data = solr.select(solr_q, **params)\n docs = data['docs']\n\n if embedded_olid and not docs:\n # Grumble! Work not in solr yet. 
Create a dummy.\n fake_doc = self.db_fetch(olid_to_key(embedded_olid))\n if fake_doc:\n docs = [fake_doc]\n\n for d in docs:\n self.doc_wrap(d)\n\n return to_json(docs)\n\n\nclass languages_autocomplete(delegate.page):\n path = \"/languages/_autocomplete\"\n\n def GET(self):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n return to_json(\n list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))\n )\n\n\nclass works_autocomplete(autocomplete):\n path = \"/works/_autocomplete\"\n fq = [\n 'type:work',\n # Exclude orphaned editions from search results\n 'key:*W',\n ]\n fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'\n olid_suffix = 'W'\n query = 'title:\"{q}\"^2 OR title:({q}*)'\n\n def doc_wrap(self, doc: dict):\n doc['full_title'] = doc['title']\n if 'subtitle' in doc:\n doc['full_title'] += \": \" + doc['subtitle']\n doc['name'] = doc.get('title')\n\n\nclass authors_autocomplete(autocomplete):\n path = \"/authors/_autocomplete\"\n fq = ['type:author']\n fl = 'key,name,alternate_names,birth_date,death_date,work_count,works,subjects'\n olid_suffix = 'A'\n query = 'name:({q}*) OR alternate_names:({q}*)'\n\n def doc_wrap(self, doc: dict):\n if 'top_work' in doc:\n doc['works'] = [doc.pop('top_work')]\n else:\n doc['works'] = []\n doc['subjects'] = doc.pop('top_subjects', [])\n\n\nclass subjects_autocomplete(autocomplete):\n # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+\n path = \"/subjects_autocomplete\"\n fq = ['type:subject']\n fl = 'key,name'\n query = 'name:({q}*)'\n\n def GET(self):\n i = web.input(type=\"\")\n fq = self.fq\n if i.type:\n fq = fq + [f'subject_type:{i.type}']\n\n return super().direct_get(fq=fq)\n\n\ndef setup():\n \"\"\"Do required setup.\"\"\"\n pass\n", "path": "openlibrary/plugins/worksearch/autocomplete.py"}], "after_files": [{"content": "import itertools\nimport web\nimport json\n\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import safeint\nfrom openlibrary.core.models import Thing\nfrom openlibrary.plugins.upstream import utils\nfrom openlibrary.plugins.worksearch.search import get_solr\nfrom openlibrary.utils import (\n find_olid_in_string,\n olid_to_key,\n)\n\n\ndef to_json(d):\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(d))\n\n\nclass autocomplete(delegate.page):\n path = \"/_autocomplete\"\n fq = ['-type:edition']\n fl = 'key,type,name,title,score'\n olid_suffix: str | None = None\n query = 'title:\"{q}\"^2 OR title:({q}*) OR name:\"{q}\"^2 OR name:({q}*)'\n\n def db_fetch(self, key: str) -> Thing | None:\n if thing := web.ctx.site.get(key):\n return thing.as_fake_solr_record()\n else:\n return None\n\n def doc_wrap(self, doc: dict):\n \"\"\"Modify the returned solr document in place.\"\"\"\n if 'name' not in doc:\n doc['name'] = doc.get('title')\n\n def GET(self):\n return self.direct_get()\n\n def direct_get(self, fq: list[str] | None = None):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n\n solr = get_solr()\n\n # look for ID in query string here\n q = solr.escape(i.q).strip()\n embedded_olid = None\n if self.olid_suffix:\n embedded_olid = find_olid_in_string(q, self.olid_suffix)\n\n if embedded_olid:\n solr_q = f'key:\"{olid_to_key(embedded_olid)}\"'\n else:\n solr_q = self.query.format(q=q)\n\n fq = fq or self.fq\n params = {\n 'q_op': 'AND',\n 'rows': i.limit,\n **({'fq': fq} if fq else {}),\n # limit the fields returned for better performance\n 'fl': self.fl,\n }\n\n data = 
solr.select(solr_q, **params)\n docs = data['docs']\n\n if embedded_olid and not docs:\n # Grumble! Work not in solr yet. Create a dummy.\n fake_doc = self.db_fetch(olid_to_key(embedded_olid))\n if fake_doc:\n docs = [fake_doc]\n\n for d in docs:\n self.doc_wrap(d)\n\n return to_json(docs)\n\n\nclass languages_autocomplete(delegate.page):\n path = \"/languages/_autocomplete\"\n\n def GET(self):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n return to_json(\n list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))\n )\n\n\nclass works_autocomplete(autocomplete):\n path = \"/works/_autocomplete\"\n fq = [\n 'type:work',\n # Exclude orphaned editions from search results\n 'key:*W',\n ]\n fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'\n olid_suffix = 'W'\n query = 'title:\"{q}\"^2 OR title:({q}*)'\n\n def doc_wrap(self, doc: dict):\n doc['full_title'] = doc['title']\n if 'subtitle' in doc:\n doc['full_title'] += \": \" + doc['subtitle']\n doc['name'] = doc.get('title')\n\n\nclass authors_autocomplete(autocomplete):\n path = \"/authors/_autocomplete\"\n fq = ['type:author']\n fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'\n olid_suffix = 'A'\n query = 'name:({q}*) OR alternate_names:({q}*) OR name:\"{q}\"^2 OR alternate_names:\"{q}\"^2'\n\n def doc_wrap(self, doc: dict):\n if 'top_work' in doc:\n doc['works'] = [doc.pop('top_work')]\n else:\n doc['works'] = []\n doc['subjects'] = doc.pop('top_subjects', [])\n\n\nclass subjects_autocomplete(autocomplete):\n # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+\n path = \"/subjects_autocomplete\"\n fq = ['type:subject']\n fl = 'key,name'\n query = 'name:({q}*)'\n\n def GET(self):\n i = web.input(type=\"\")\n fq = self.fq\n if i.type:\n fq = fq + [f'subject_type:{i.type}']\n\n return super().direct_get(fq=fq)\n\n\ndef setup():\n \"\"\"Do required setup.\"\"\"\n pass\n", "path": "openlibrary/plugins/worksearch/autocomplete.py"}]} | 2,064 | 215 |
gh_patches_debug_36709 | rasdani/github-patches | git_diff | gammapy__gammapy-3306 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow to specify spectral model in ExcessMapEstimator
Currently the `ExcessMapEstimator` does not allow defining the spectral model that is used for the flux computation. It is easy to support and should be done...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gammapy/estimators/excess_map.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 import copy
3 import logging
4 import numpy as np
5 import astropy.units as u
6 from astropy.convolution import Tophat2DKernel
7 from astropy.coordinates import Angle
8 from gammapy.datasets import MapDataset, MapDatasetOnOff
9 from gammapy.maps import Map, MapAxis
10 from gammapy.stats import CashCountsStatistic, WStatCountsStatistic
11 from .core import Estimator
12 from .utils import estimate_exposure_reco_energy
13
14 __all__ = [
15 "ExcessMapEstimator",
16 ]
17
18 log = logging.getLogger(__name__)
19
20
21 def convolved_map_dataset_counts_statistics(dataset, kernel, mask, correlate_off):
22 """Return CountsDataset objects containing smoothed maps from the MapDataset"""
23 # Kernel is modified later make a copy here
24 kernel = copy.deepcopy(kernel)
25 kernel.normalize("peak")
26
27 # fft convolution adds numerical noise, to ensure integer results we call
28 # np.rint
29 n_on = dataset.counts * mask
30 n_on_conv = np.rint(n_on.convolve(kernel.array).data)
31
32 if isinstance(dataset, MapDatasetOnOff):
33 n_off = dataset.counts_off * mask
34 npred_sig = dataset.npred_signal() * mask
35 acceptance_on = dataset.acceptance * mask
36 acceptance_off = dataset.acceptance_off * mask
37
38 npred_sig_convolve = npred_sig.convolve(kernel.array)
39 acceptance_on_convolve = acceptance_on.convolve(kernel.array)
40 if correlate_off:
41 n_off = n_off.convolve(kernel.array)
42 acceptance_off = acceptance_off.convolve(kernel.array)
43
44 with np.errstate(invalid="ignore", divide="ignore"):
45 alpha = acceptance_on_convolve / acceptance_off
46
47 return WStatCountsStatistic(
48 n_on_conv.data, n_off.data, alpha.data, npred_sig_convolve.data
49 )
50 else:
51
52 npred = dataset.npred() * mask
53 background_conv = npred.convolve(kernel.array)
54 return CashCountsStatistic(n_on_conv.data, background_conv.data)
55
56
57 class ExcessMapEstimator(Estimator):
58 """Computes correlated excess, sqrt TS (i.e. Li-Ma significance) and errors for MapDatasets.
59
60 If a model is set on the dataset the excess map estimator will compute the excess taking into account
61 the predicted counts of the model.
62
63 Some background estimation techniques like ring background or adaptive ring background will provide already
64 correlated data for OFF. In the case of already correlated OFF data, the OFF data should not be correlated again,
65 and so the option correlate_off should set to False (default).
66
67 Parameters
68 ----------
69 correlation_radius : ~astropy.coordinate.Angle
70 correlation radius to use
71 n_sigma : float
72 Confidence level for the asymmetric errors expressed in number of sigma.
73 Default is 1.
74 n_sigma_ul : float
75 Confidence level for the upper limits expressed in number of sigma.
76 Default is 3.
77 selection_optional : list of str
78 Which additional maps to estimate besides delta TS, significance and symmetric error.
79 Available options are:
80
81 * "errn-errp": estimate asymmetric errors.
82 * "ul": estimate upper limits.
83
84 By default all additional quantities are estimated.
85 energy_edges : `~astropy.units.Quantity`
86 Energy edges of the target excess maps bins.
87 apply_mask_fit : Bool
88 Apply a mask for the computation.
89 A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset
90 correlate_off : Bool
91 Correlate OFF events in the case of a MapDatasetOnOff
92 """
93
94 tag = "ExcessMapEstimator"
95 _available_selection_optional = ["errn-errp", "ul"]
96
97 def __init__(
98 self,
99 correlation_radius="0.1 deg",
100 n_sigma=1,
101 n_sigma_ul=3,
102 selection_optional=None,
103 energy_edges=None,
104 apply_mask_fit=False,
105 correlate_off=False
106 ):
107 self.correlation_radius = correlation_radius
108 self.n_sigma = n_sigma
109 self.n_sigma_ul = n_sigma_ul
110 self.apply_mask_fit = apply_mask_fit
111 self.selection_optional = selection_optional
112 self.energy_edges = energy_edges
113 self.correlate_off = correlate_off
114
115 @property
116 def correlation_radius(self):
117 return self._correlation_radius
118
119 @correlation_radius.setter
120 def correlation_radius(self, correlation_radius):
121 """Sets radius"""
122 self._correlation_radius = Angle(correlation_radius)
123
124 def run(self, dataset):
125 """Compute correlated excess, Li & Ma significance and flux maps
126
127 If a model is set on the dataset the excess map estimator will compute the excess taking into account
128 the predicted counts of the model.
129
130 Parameters
131 ----------
132 dataset : `~gammapy.datasets.MapDataset` or `~gammapy.datasets.MapDatasetOnOff`
133 input dataset
134
135 Returns
136 -------
137 images : dict
138 Dictionary containing result correlated maps. Keys are:
139
140 * counts : correlated counts map
141 * background : correlated background map
142 * excess : correlated excess map
143 * ts : TS map
144 * sqrt_ts : sqrt(delta TS), or Li-Ma significance map
145 * err : symmetric error map (from covariance)
146 * flux : flux map. An exposure map must be present in the dataset to compute flux map
147 * errn : negative error map
148 * errp : positive error map
149 * ul : upper limit map
150
151 """
152 if not isinstance(dataset, MapDataset):
153 raise ValueError("Unsupported dataset type")
154
155 if self.energy_edges is None:
156 energy_axis = dataset.counts.geom.axes["energy"]
157 energy_edges = u.Quantity([energy_axis.edges[0], energy_axis.edges[-1]])
158 else:
159 energy_edges = self.energy_edges
160
161 axis = MapAxis.from_energy_edges(energy_edges)
162
163 resampled_dataset = dataset.resample_energy_axis(energy_axis=axis)
164
165 # Beware we rely here on the correct npred background in MapDataset.resample_energy_axis
166 resampled_dataset.models = dataset.models
167
168 result = self.estimate_excess_map(resampled_dataset)
169
170 return result
171
172 def estimate_excess_map(self, dataset):
173 """Estimate excess and ts maps for single dataset.
174
175 If exposure is defined, a flux map is also computed.
176
177 Parameters
178 ----------
179 dataset : `MapDataset`
180 Map dataset
181 """
182
183 pixel_size = np.mean(np.abs(dataset.counts.geom.wcs.wcs.cdelt))
184 size = self.correlation_radius.deg / pixel_size
185 kernel = Tophat2DKernel(size)
186
187 geom = dataset.counts.geom
188
189 if self.apply_mask_fit:
190 mask = dataset.mask
191 elif dataset.mask_safe:
192 mask = dataset.mask_safe
193 else:
194 mask = np.ones(dataset.data_shape, dtype=bool)
195
196 counts_stat = convolved_map_dataset_counts_statistics(dataset, kernel, mask, self.correlate_off)
197
198 n_on = Map.from_geom(geom, data=counts_stat.n_on)
199 bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)
200 excess = Map.from_geom(geom, data=counts_stat.n_sig)
201
202 result = {"counts": n_on, "background": bkg, "excess": excess}
203
204 tsmap = Map.from_geom(geom, data=counts_stat.ts)
205 sqrt_ts = Map.from_geom(geom, data=counts_stat.sqrt_ts)
206 result.update({"ts": tsmap, "sqrt_ts": sqrt_ts})
207
208 err = Map.from_geom(geom, data=counts_stat.error * self.n_sigma)
209 result.update({"err": err})
210
211 if dataset.exposure:
212 reco_exposure = estimate_exposure_reco_energy(dataset)
213 with np.errstate(invalid="ignore", divide="ignore"):
214 flux = excess / reco_exposure
215 flux.quantity = flux.quantity.to("1 / (cm2 s)")
216 else:
217 flux = Map.from_geom(
218 geom=dataset.counts.geom, data=np.nan * np.ones(dataset.data_shape)
219 )
220 result.update({"flux": flux})
221
222 if "errn-errp" in self.selection_optional:
223 errn = Map.from_geom(geom, data=counts_stat.compute_errn(self.n_sigma))
224 errp = Map.from_geom(geom, data=counts_stat.compute_errp(self.n_sigma))
225 result.update({"errn": errn, "errp": errp})
226
227 if "ul" in self.selection_optional:
228 ul = Map.from_geom(
229 geom, data=counts_stat.compute_upper_limit(self.n_sigma_ul)
230 )
231 result.update({"ul": ul})
232
233 # return nan values outside mask
234 for key in result:
235 result[key].data[~mask] = np.nan
236
237 return result
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gammapy/estimators/excess_map.py b/gammapy/estimators/excess_map.py
--- a/gammapy/estimators/excess_map.py
+++ b/gammapy/estimators/excess_map.py
@@ -89,6 +89,9 @@
A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset
correlate_off : Bool
Correlate OFF events in the case of a MapDatasetOnOff
+ spectral_model : `~gammapy.modeling.models.SpectralModel`
+ Spectral model used for the computation of the flux map.
+ If None, a Power Law of index 2 is assumed (default).
"""
tag = "ExcessMapEstimator"
@@ -102,7 +105,8 @@
selection_optional=None,
energy_edges=None,
apply_mask_fit=False,
- correlate_off=False
+ correlate_off=False,
+ spectral_model=None,
):
self.correlation_radius = correlation_radius
self.n_sigma = n_sigma
@@ -111,6 +115,7 @@
self.selection_optional = selection_optional
self.energy_edges = energy_edges
self.correlate_off = correlate_off
+ self.spectral_model = spectral_model
@property
def correlation_radius(self):
@@ -193,7 +198,9 @@
else:
mask = np.ones(dataset.data_shape, dtype=bool)
- counts_stat = convolved_map_dataset_counts_statistics(dataset, kernel, mask, self.correlate_off)
+ counts_stat = convolved_map_dataset_counts_statistics(
+ dataset, kernel, mask, self.correlate_off
+ )
n_on = Map.from_geom(geom, data=counts_stat.n_on)
bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)
@@ -209,7 +216,7 @@
result.update({"err": err})
if dataset.exposure:
- reco_exposure = estimate_exposure_reco_energy(dataset)
+ reco_exposure = estimate_exposure_reco_energy(dataset, self.spectral_model)
with np.errstate(invalid="ignore", divide="ignore"):
flux = excess / reco_exposure
flux.quantity = flux.quantity.to("1 / (cm2 s)")
| {"golden_diff": "diff --git a/gammapy/estimators/excess_map.py b/gammapy/estimators/excess_map.py\n--- a/gammapy/estimators/excess_map.py\n+++ b/gammapy/estimators/excess_map.py\n@@ -89,6 +89,9 @@\n A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset\n correlate_off : Bool\n Correlate OFF events in the case of a MapDatasetOnOff\n+ spectral_model : `~gammapy.modeling.models.SpectralModel`\n+ Spectral model used for the computation of the flux map. \n+ If None, a Power Law of index 2 is assumed (default). \n \"\"\"\n \n tag = \"ExcessMapEstimator\"\n@@ -102,7 +105,8 @@\n selection_optional=None,\n energy_edges=None,\n apply_mask_fit=False,\n- correlate_off=False\n+ correlate_off=False,\n+ spectral_model=None,\n ):\n self.correlation_radius = correlation_radius\n self.n_sigma = n_sigma\n@@ -111,6 +115,7 @@\n self.selection_optional = selection_optional\n self.energy_edges = energy_edges\n self.correlate_off = correlate_off\n+ self.spectral_model = spectral_model\n \n @property\n def correlation_radius(self):\n@@ -193,7 +198,9 @@\n else:\n mask = np.ones(dataset.data_shape, dtype=bool)\n \n- counts_stat = convolved_map_dataset_counts_statistics(dataset, kernel, mask, self.correlate_off)\n+ counts_stat = convolved_map_dataset_counts_statistics(\n+ dataset, kernel, mask, self.correlate_off\n+ )\n \n n_on = Map.from_geom(geom, data=counts_stat.n_on)\n bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)\n@@ -209,7 +216,7 @@\n result.update({\"err\": err})\n \n if dataset.exposure:\n- reco_exposure = estimate_exposure_reco_energy(dataset)\n+ reco_exposure = estimate_exposure_reco_energy(dataset, self.spectral_model)\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n flux = excess / reco_exposure\n flux.quantity = flux.quantity.to(\"1 / (cm2 s)\")\n", "issue": "Allow to specify spectral model in ExcessMapEstimator\nCurrently the `ExcessMapEstimator` does not allow to define the spectral model, that is used for the flux computation. 
It is easy to support and should be done...\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nimport copy\nimport logging\nimport numpy as np\nimport astropy.units as u\nfrom astropy.convolution import Tophat2DKernel\nfrom astropy.coordinates import Angle\nfrom gammapy.datasets import MapDataset, MapDatasetOnOff\nfrom gammapy.maps import Map, MapAxis\nfrom gammapy.stats import CashCountsStatistic, WStatCountsStatistic\nfrom .core import Estimator\nfrom .utils import estimate_exposure_reco_energy\n\n__all__ = [\n \"ExcessMapEstimator\",\n]\n\nlog = logging.getLogger(__name__)\n\n\ndef convolved_map_dataset_counts_statistics(dataset, kernel, mask, correlate_off):\n \"\"\"Return CountsDataset objects containing smoothed maps from the MapDataset\"\"\"\n # Kernel is modified later make a copy here\n kernel = copy.deepcopy(kernel)\n kernel.normalize(\"peak\")\n\n # fft convolution adds numerical noise, to ensure integer results we call\n # np.rint\n n_on = dataset.counts * mask\n n_on_conv = np.rint(n_on.convolve(kernel.array).data)\n\n if isinstance(dataset, MapDatasetOnOff):\n n_off = dataset.counts_off * mask\n npred_sig = dataset.npred_signal() * mask\n acceptance_on = dataset.acceptance * mask\n acceptance_off = dataset.acceptance_off * mask\n\n npred_sig_convolve = npred_sig.convolve(kernel.array)\n acceptance_on_convolve = acceptance_on.convolve(kernel.array)\n if correlate_off:\n n_off = n_off.convolve(kernel.array)\n acceptance_off = acceptance_off.convolve(kernel.array)\n\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n alpha = acceptance_on_convolve / acceptance_off\n\n return WStatCountsStatistic(\n n_on_conv.data, n_off.data, alpha.data, npred_sig_convolve.data\n )\n else:\n\n npred = dataset.npred() * mask\n background_conv = npred.convolve(kernel.array)\n return CashCountsStatistic(n_on_conv.data, background_conv.data)\n\n\nclass ExcessMapEstimator(Estimator):\n \"\"\"Computes correlated excess, sqrt TS (i.e. Li-Ma significance) and errors for MapDatasets.\n\n If a model is set on the dataset the excess map estimator will compute the excess taking into account\n the predicted counts of the model.\n\n Some background estimation techniques like ring background or adaptive ring background will provide already\n correlated data for OFF. 
In the case of already correlated OFF data, the OFF data should not be correlated again,\n and so the option correlate_off should set to False (default).\n\n Parameters\n ----------\n correlation_radius : ~astropy.coordinate.Angle\n correlation radius to use\n n_sigma : float\n Confidence level for the asymmetric errors expressed in number of sigma.\n Default is 1.\n n_sigma_ul : float\n Confidence level for the upper limits expressed in number of sigma.\n Default is 3.\n selection_optional : list of str\n Which additional maps to estimate besides delta TS, significance and symmetric error.\n Available options are:\n\n * \"errn-errp\": estimate asymmetric errors.\n * \"ul\": estimate upper limits.\n\n By default all additional quantities are estimated.\n energy_edges : `~astropy.units.Quantity`\n Energy edges of the target excess maps bins.\n apply_mask_fit : Bool\n Apply a mask for the computation.\n A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset\n correlate_off : Bool\n Correlate OFF events in the case of a MapDatasetOnOff\n \"\"\"\n\n tag = \"ExcessMapEstimator\"\n _available_selection_optional = [\"errn-errp\", \"ul\"]\n\n def __init__(\n self,\n correlation_radius=\"0.1 deg\",\n n_sigma=1,\n n_sigma_ul=3,\n selection_optional=None,\n energy_edges=None,\n apply_mask_fit=False,\n correlate_off=False\n ):\n self.correlation_radius = correlation_radius\n self.n_sigma = n_sigma\n self.n_sigma_ul = n_sigma_ul\n self.apply_mask_fit = apply_mask_fit\n self.selection_optional = selection_optional\n self.energy_edges = energy_edges\n self.correlate_off = correlate_off\n\n @property\n def correlation_radius(self):\n return self._correlation_radius\n\n @correlation_radius.setter\n def correlation_radius(self, correlation_radius):\n \"\"\"Sets radius\"\"\"\n self._correlation_radius = Angle(correlation_radius)\n\n def run(self, dataset):\n \"\"\"Compute correlated excess, Li & Ma significance and flux maps\n\n If a model is set on the dataset the excess map estimator will compute the excess taking into account\n the predicted counts of the model.\n\n Parameters\n ----------\n dataset : `~gammapy.datasets.MapDataset` or `~gammapy.datasets.MapDatasetOnOff`\n input dataset\n\n Returns\n -------\n images : dict\n Dictionary containing result correlated maps. Keys are:\n\n * counts : correlated counts map\n * background : correlated background map\n * excess : correlated excess map\n * ts : TS map\n * sqrt_ts : sqrt(delta TS), or Li-Ma significance map\n * err : symmetric error map (from covariance)\n * flux : flux map. 
An exposure map must be present in the dataset to compute flux map\n * errn : negative error map\n * errp : positive error map\n * ul : upper limit map\n\n \"\"\"\n if not isinstance(dataset, MapDataset):\n raise ValueError(\"Unsupported dataset type\")\n\n if self.energy_edges is None:\n energy_axis = dataset.counts.geom.axes[\"energy\"]\n energy_edges = u.Quantity([energy_axis.edges[0], energy_axis.edges[-1]])\n else:\n energy_edges = self.energy_edges\n\n axis = MapAxis.from_energy_edges(energy_edges)\n\n resampled_dataset = dataset.resample_energy_axis(energy_axis=axis)\n\n # Beware we rely here on the correct npred background in MapDataset.resample_energy_axis\n resampled_dataset.models = dataset.models\n\n result = self.estimate_excess_map(resampled_dataset)\n\n return result\n\n def estimate_excess_map(self, dataset):\n \"\"\"Estimate excess and ts maps for single dataset.\n\n If exposure is defined, a flux map is also computed.\n\n Parameters\n ----------\n dataset : `MapDataset`\n Map dataset\n \"\"\"\n\n pixel_size = np.mean(np.abs(dataset.counts.geom.wcs.wcs.cdelt))\n size = self.correlation_radius.deg / pixel_size\n kernel = Tophat2DKernel(size)\n\n geom = dataset.counts.geom\n\n if self.apply_mask_fit:\n mask = dataset.mask\n elif dataset.mask_safe:\n mask = dataset.mask_safe\n else:\n mask = np.ones(dataset.data_shape, dtype=bool)\n\n counts_stat = convolved_map_dataset_counts_statistics(dataset, kernel, mask, self.correlate_off)\n\n n_on = Map.from_geom(geom, data=counts_stat.n_on)\n bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)\n excess = Map.from_geom(geom, data=counts_stat.n_sig)\n\n result = {\"counts\": n_on, \"background\": bkg, \"excess\": excess}\n\n tsmap = Map.from_geom(geom, data=counts_stat.ts)\n sqrt_ts = Map.from_geom(geom, data=counts_stat.sqrt_ts)\n result.update({\"ts\": tsmap, \"sqrt_ts\": sqrt_ts})\n\n err = Map.from_geom(geom, data=counts_stat.error * self.n_sigma)\n result.update({\"err\": err})\n\n if dataset.exposure:\n reco_exposure = estimate_exposure_reco_energy(dataset)\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n flux = excess / reco_exposure\n flux.quantity = flux.quantity.to(\"1 / (cm2 s)\")\n else:\n flux = Map.from_geom(\n geom=dataset.counts.geom, data=np.nan * np.ones(dataset.data_shape)\n )\n result.update({\"flux\": flux})\n\n if \"errn-errp\" in self.selection_optional:\n errn = Map.from_geom(geom, data=counts_stat.compute_errn(self.n_sigma))\n errp = Map.from_geom(geom, data=counts_stat.compute_errp(self.n_sigma))\n result.update({\"errn\": errn, \"errp\": errp})\n\n if \"ul\" in self.selection_optional:\n ul = Map.from_geom(\n geom, data=counts_stat.compute_upper_limit(self.n_sigma_ul)\n )\n result.update({\"ul\": ul})\n\n # return nan values outside mask\n for key in result:\n result[key].data[~mask] = np.nan\n\n return result\n", "path": "gammapy/estimators/excess_map.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nimport copy\nimport logging\nimport numpy as np\nimport astropy.units as u\nfrom astropy.convolution import Tophat2DKernel\nfrom astropy.coordinates import Angle\nfrom gammapy.datasets import MapDataset, MapDatasetOnOff\nfrom gammapy.maps import Map, MapAxis\nfrom gammapy.stats import CashCountsStatistic, WStatCountsStatistic\nfrom .core import Estimator\nfrom .utils import estimate_exposure_reco_energy\n\n__all__ = [\n \"ExcessMapEstimator\",\n]\n\nlog = logging.getLogger(__name__)\n\n\ndef 
convolved_map_dataset_counts_statistics(dataset, kernel, mask, correlate_off):\n \"\"\"Return CountsDataset objects containing smoothed maps from the MapDataset\"\"\"\n # Kernel is modified later make a copy here\n kernel = copy.deepcopy(kernel)\n kernel.normalize(\"peak\")\n\n # fft convolution adds numerical noise, to ensure integer results we call\n # np.rint\n n_on = dataset.counts * mask\n n_on_conv = np.rint(n_on.convolve(kernel.array).data)\n\n if isinstance(dataset, MapDatasetOnOff):\n n_off = dataset.counts_off * mask\n npred_sig = dataset.npred_signal() * mask\n acceptance_on = dataset.acceptance * mask\n acceptance_off = dataset.acceptance_off * mask\n\n npred_sig_convolve = npred_sig.convolve(kernel.array)\n acceptance_on_convolve = acceptance_on.convolve(kernel.array)\n if correlate_off:\n n_off = n_off.convolve(kernel.array)\n acceptance_off = acceptance_off.convolve(kernel.array)\n\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n alpha = acceptance_on_convolve / acceptance_off\n\n return WStatCountsStatistic(\n n_on_conv.data, n_off.data, alpha.data, npred_sig_convolve.data\n )\n else:\n\n npred = dataset.npred() * mask\n background_conv = npred.convolve(kernel.array)\n return CashCountsStatistic(n_on_conv.data, background_conv.data)\n\n\nclass ExcessMapEstimator(Estimator):\n \"\"\"Computes correlated excess, sqrt TS (i.e. Li-Ma significance) and errors for MapDatasets.\n\n If a model is set on the dataset the excess map estimator will compute the excess taking into account\n the predicted counts of the model.\n\n Some background estimation techniques like ring background or adaptive ring background will provide already\n correlated data for OFF. In the case of already correlated OFF data, the OFF data should not be correlated again,\n and so the option correlate_off should set to False (default).\n\n Parameters\n ----------\n correlation_radius : ~astropy.coordinate.Angle\n correlation radius to use\n n_sigma : float\n Confidence level for the asymmetric errors expressed in number of sigma.\n Default is 1.\n n_sigma_ul : float\n Confidence level for the upper limits expressed in number of sigma.\n Default is 3.\n selection_optional : list of str\n Which additional maps to estimate besides delta TS, significance and symmetric error.\n Available options are:\n\n * \"errn-errp\": estimate asymmetric errors.\n * \"ul\": estimate upper limits.\n\n By default all additional quantities are estimated.\n energy_edges : `~astropy.units.Quantity`\n Energy edges of the target excess maps bins.\n apply_mask_fit : Bool\n Apply a mask for the computation.\n A `~gammapy.datasets.MapDataset.mask_fit` must be present on the input dataset\n correlate_off : Bool\n Correlate OFF events in the case of a MapDatasetOnOff\n spectral_model : `~gammapy.modeling.models.SpectralModel`\n Spectral model used for the computation of the flux map. \n If None, a Power Law of index 2 is assumed (default). 
\n \"\"\"\n\n tag = \"ExcessMapEstimator\"\n _available_selection_optional = [\"errn-errp\", \"ul\"]\n\n def __init__(\n self,\n correlation_radius=\"0.1 deg\",\n n_sigma=1,\n n_sigma_ul=3,\n selection_optional=None,\n energy_edges=None,\n apply_mask_fit=False,\n correlate_off=False,\n spectral_model=None,\n ):\n self.correlation_radius = correlation_radius\n self.n_sigma = n_sigma\n self.n_sigma_ul = n_sigma_ul\n self.apply_mask_fit = apply_mask_fit\n self.selection_optional = selection_optional\n self.energy_edges = energy_edges\n self.correlate_off = correlate_off\n self.spectral_model = spectral_model\n\n @property\n def correlation_radius(self):\n return self._correlation_radius\n\n @correlation_radius.setter\n def correlation_radius(self, correlation_radius):\n \"\"\"Sets radius\"\"\"\n self._correlation_radius = Angle(correlation_radius)\n\n def run(self, dataset):\n \"\"\"Compute correlated excess, Li & Ma significance and flux maps\n\n If a model is set on the dataset the excess map estimator will compute the excess taking into account\n the predicted counts of the model.\n\n Parameters\n ----------\n dataset : `~gammapy.datasets.MapDataset` or `~gammapy.datasets.MapDatasetOnOff`\n input dataset\n\n Returns\n -------\n images : dict\n Dictionary containing result correlated maps. Keys are:\n\n * counts : correlated counts map\n * background : correlated background map\n * excess : correlated excess map\n * ts : TS map\n * sqrt_ts : sqrt(delta TS), or Li-Ma significance map\n * err : symmetric error map (from covariance)\n * flux : flux map. An exposure map must be present in the dataset to compute flux map\n * errn : negative error map\n * errp : positive error map\n * ul : upper limit map\n\n \"\"\"\n if not isinstance(dataset, MapDataset):\n raise ValueError(\"Unsupported dataset type\")\n\n if self.energy_edges is None:\n energy_axis = dataset.counts.geom.axes[\"energy\"]\n energy_edges = u.Quantity([energy_axis.edges[0], energy_axis.edges[-1]])\n else:\n energy_edges = self.energy_edges\n\n axis = MapAxis.from_energy_edges(energy_edges)\n\n resampled_dataset = dataset.resample_energy_axis(energy_axis=axis)\n\n # Beware we rely here on the correct npred background in MapDataset.resample_energy_axis\n resampled_dataset.models = dataset.models\n\n result = self.estimate_excess_map(resampled_dataset)\n\n return result\n\n def estimate_excess_map(self, dataset):\n \"\"\"Estimate excess and ts maps for single dataset.\n\n If exposure is defined, a flux map is also computed.\n\n Parameters\n ----------\n dataset : `MapDataset`\n Map dataset\n \"\"\"\n\n pixel_size = np.mean(np.abs(dataset.counts.geom.wcs.wcs.cdelt))\n size = self.correlation_radius.deg / pixel_size\n kernel = Tophat2DKernel(size)\n\n geom = dataset.counts.geom\n\n if self.apply_mask_fit:\n mask = dataset.mask\n elif dataset.mask_safe:\n mask = dataset.mask_safe\n else:\n mask = np.ones(dataset.data_shape, dtype=bool)\n\n counts_stat = convolved_map_dataset_counts_statistics(\n dataset, kernel, mask, self.correlate_off\n )\n\n n_on = Map.from_geom(geom, data=counts_stat.n_on)\n bkg = Map.from_geom(geom, data=counts_stat.n_on - counts_stat.n_sig)\n excess = Map.from_geom(geom, data=counts_stat.n_sig)\n\n result = {\"counts\": n_on, \"background\": bkg, \"excess\": excess}\n\n tsmap = Map.from_geom(geom, data=counts_stat.ts)\n sqrt_ts = Map.from_geom(geom, data=counts_stat.sqrt_ts)\n result.update({\"ts\": tsmap, \"sqrt_ts\": sqrt_ts})\n\n err = Map.from_geom(geom, data=counts_stat.error * self.n_sigma)\n 
result.update({\"err\": err})\n\n if dataset.exposure:\n reco_exposure = estimate_exposure_reco_energy(dataset, self.spectral_model)\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n flux = excess / reco_exposure\n flux.quantity = flux.quantity.to(\"1 / (cm2 s)\")\n else:\n flux = Map.from_geom(\n geom=dataset.counts.geom, data=np.nan * np.ones(dataset.data_shape)\n )\n result.update({\"flux\": flux})\n\n if \"errn-errp\" in self.selection_optional:\n errn = Map.from_geom(geom, data=counts_stat.compute_errn(self.n_sigma))\n errp = Map.from_geom(geom, data=counts_stat.compute_errp(self.n_sigma))\n result.update({\"errn\": errn, \"errp\": errp})\n\n if \"ul\" in self.selection_optional:\n ul = Map.from_geom(\n geom, data=counts_stat.compute_upper_limit(self.n_sigma_ul)\n )\n result.update({\"ul\": ul})\n\n # return nan values outside mask\n for key in result:\n result[key].data[~mask] = np.nan\n\n return result\n", "path": "gammapy/estimators/excess_map.py"}]} | 2,842 | 523 |
gh_patches_debug_3138 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1231 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PORT] [Authentication] updates to support Arlington
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/3734
# Changed projects
* Microsoft.Bot.Connector
* Microsoft.Bot.Connector.Tests
[R9]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botframework-connector/botframework/connector/auth/government_constants.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 from abc import ABC
4
5
6 class GovernmentConstants(ABC):
7
8 """
9 Government Channel Service property value
10 """
11
12 CHANNEL_SERVICE = "https://botframework.azure.us"
13
14 """
15 TO CHANNEL FROM BOT: Login URL
16 """
17 TO_CHANNEL_FROM_BOT_LOGIN_URL = (
18 "https://login.microsoftonline.us/"
19 "cab8a31a-1906-4287-a0d8-4eef66b95f6e/"
20 "oauth2/v2.0/token"
21 )
22
23 """
24 TO CHANNEL FROM BOT: OAuth scope to request
25 """
26 TO_CHANNEL_FROM_BOT_OAUTH_SCOPE = "https://api.botframework.us/.default"
27
28 """
29 TO BOT FROM CHANNEL: Token issuer
30 """
31 TO_BOT_FROM_CHANNEL_TOKEN_ISSUER = "https://api.botframework.us"
32
33 """
34 TO BOT FROM CHANNEL: OpenID metadata document for tokens coming from MSA
35 """
36 TO_BOT_FROM_CHANNEL_OPEN_ID_METADATA_URL = (
37 "https://login.botframework.azure.us/v1/.well-known/openidconfiguration"
38 )
39
40 """
41 TO BOT FROM GOV EMULATOR: OpenID metadata document for tokens coming from MSA
42 """
43 TO_BOT_FROM_EMULATOR_OPEN_ID_METADATA_URL = (
44 "https://login.microsoftonline.us/"
45 "cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0/"
46 ".well-known/openid-configuration"
47 )
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botframework-connector/botframework/connector/auth/government_constants.py b/libraries/botframework-connector/botframework/connector/auth/government_constants.py
--- a/libraries/botframework-connector/botframework/connector/auth/government_constants.py
+++ b/libraries/botframework-connector/botframework/connector/auth/government_constants.py
@@ -15,9 +15,7 @@
TO CHANNEL FROM BOT: Login URL
"""
TO_CHANNEL_FROM_BOT_LOGIN_URL = (
- "https://login.microsoftonline.us/"
- "cab8a31a-1906-4287-a0d8-4eef66b95f6e/"
- "oauth2/v2.0/token"
+ "https://login.microsoftonline.us/MicrosoftServices.onmicrosoft.us"
)
"""
| {"golden_diff": "diff --git a/libraries/botframework-connector/botframework/connector/auth/government_constants.py b/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n--- a/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n+++ b/libraries/botframework-connector/botframework/connector/auth/government_constants.py\n@@ -15,9 +15,7 @@\n TO CHANNEL FROM BOT: Login URL\n \"\"\"\n TO_CHANNEL_FROM_BOT_LOGIN_URL = (\n- \"https://login.microsoftonline.us/\"\n- \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/\"\n- \"oauth2/v2.0/token\"\n+ \"https://login.microsoftonline.us/MicrosoftServices.onmicrosoft.us\"\n )\n \n \"\"\"\n", "issue": "[PORT] [Authentication] updates to support Arlington\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/3734\n\n\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Connector\r\n* Microsoft.Bot.Connector.Tests\r\n\r\n[R9]\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom abc import ABC\n\n\nclass GovernmentConstants(ABC):\n\n \"\"\"\n Government Channel Service property value\n \"\"\"\n\n CHANNEL_SERVICE = \"https://botframework.azure.us\"\n\n \"\"\"\n TO CHANNEL FROM BOT: Login URL\n \"\"\"\n TO_CHANNEL_FROM_BOT_LOGIN_URL = (\n \"https://login.microsoftonline.us/\"\n \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/\"\n \"oauth2/v2.0/token\"\n )\n\n \"\"\"\n TO CHANNEL FROM BOT: OAuth scope to request\n \"\"\"\n TO_CHANNEL_FROM_BOT_OAUTH_SCOPE = \"https://api.botframework.us/.default\"\n\n \"\"\"\n TO BOT FROM CHANNEL: Token issuer\n \"\"\"\n TO_BOT_FROM_CHANNEL_TOKEN_ISSUER = \"https://api.botframework.us\"\n\n \"\"\"\n TO BOT FROM CHANNEL: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_CHANNEL_OPEN_ID_METADATA_URL = (\n \"https://login.botframework.azure.us/v1/.well-known/openidconfiguration\"\n )\n\n \"\"\"\n TO BOT FROM GOV EMULATOR: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_EMULATOR_OPEN_ID_METADATA_URL = (\n \"https://login.microsoftonline.us/\"\n \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0/\"\n \".well-known/openid-configuration\"\n )\n", "path": "libraries/botframework-connector/botframework/connector/auth/government_constants.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\nfrom abc import ABC\n\n\nclass GovernmentConstants(ABC):\n\n \"\"\"\n Government Channel Service property value\n \"\"\"\n\n CHANNEL_SERVICE = \"https://botframework.azure.us\"\n\n \"\"\"\n TO CHANNEL FROM BOT: Login URL\n \"\"\"\n TO_CHANNEL_FROM_BOT_LOGIN_URL = (\n \"https://login.microsoftonline.us/MicrosoftServices.onmicrosoft.us\"\n )\n\n \"\"\"\n TO CHANNEL FROM BOT: OAuth scope to request\n \"\"\"\n TO_CHANNEL_FROM_BOT_OAUTH_SCOPE = \"https://api.botframework.us/.default\"\n\n \"\"\"\n TO BOT FROM CHANNEL: Token issuer\n \"\"\"\n TO_BOT_FROM_CHANNEL_TOKEN_ISSUER = \"https://api.botframework.us\"\n\n \"\"\"\n TO BOT FROM CHANNEL: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_CHANNEL_OPEN_ID_METADATA_URL = (\n \"https://login.botframework.azure.us/v1/.well-known/openidconfiguration\"\n )\n\n \"\"\"\n TO BOT FROM GOV EMULATOR: OpenID metadata document for tokens coming from MSA\n \"\"\"\n TO_BOT_FROM_EMULATOR_OPEN_ID_METADATA_URL = (\n \"https://login.microsoftonline.us/\"\n \"cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0/\"\n \".well-known/openid-configuration\"\n )\n", "path": "libraries/botframework-connector/botframework/connector/auth/government_constants.py"}]} | 779 | 192 |
gh_patches_debug_272 | rasdani/github-patches | git_diff | cupy__cupy-1028 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cupy.copyto behaves differently from numpy.copyto when src is a python scalar
Code:
```python
import numpy
import cupy
def copyto_check(xp):
x = xp.zeros(3, dtype=numpy.float32)
# replace first and third items with 1.0
xp.copyto(x, 1.0, where=xp.asarray([True, False, True]))
print(x)
print('numpy', numpy.__version__)
copyto_check(numpy)
print('cupy', cupy.__version__)
copyto_check(cupy)
```
Output:
```
numpy 1.14.0
[1. 0. 1.]
cupy 2.2.0
[1. 1. 1.]
```
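For clarity, here is a minimal numpy-only restatement of the semantics the report expects (illustrative only, not part of the original report): with a scalar `src` and a `where` mask, `copyto` should behave like a masked fill rather than an unconditional fill.
```python
import numpy as np

x = np.zeros(3, dtype=np.float32)
mask = np.asarray([True, False, True])

# Scalar src + where mask: only the masked positions should change.
np.copyto(x, 1.0, where=mask)
assert x.tolist() == [1.0, 0.0, 1.0]

# Equivalent masked fill, for comparison.
y = np.zeros(3, dtype=np.float32)
y[mask] = 1.0
assert x.tolist() == y.tolist()
```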
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/manipulation/basic.py`
Content:
```
1 import numpy
2 import six
3
4 from cupy import core
5
6
7 def copyto(dst, src, casting='same_kind', where=None):
8 """Copies values from one array to another with broadcasting.
9
10 This function can be called for arrays on different devices. In this case,
11 casting, ``where``, and broadcasting is not supported, and an exception is
12 raised if these are used.
13
14 Args:
15 dst (cupy.ndarray): Target array.
16 src (cupy.ndarray): Source array.
17 casting (str): Casting rule. See :func:`numpy.can_cast` for detail.
18 where (cupy.ndarray of bool): If specified, this array acts as a mask,
19 and an element is copied only if the corresponding element of
20 ``where`` is True.
21
22 .. seealso:: :func:`numpy.copyto`
23
24 """
25
26 src_type = type(src)
27 src_is_python_scalar = (src_type in six.integer_types or
28 src_type in (bool, float, complex))
29 if src_is_python_scalar:
30 src_dtype = numpy.dtype(type(src))
31 can_cast = numpy.can_cast(src, dst.dtype, casting)
32 else:
33 src_dtype = src.dtype
34 can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)
35
36 if not can_cast:
37 raise TypeError('Cannot cast %s to %s in %s casting mode' %
38 (src_dtype, dst.dtype, casting))
39 if dst.size == 0:
40 return
41
42 if src_is_python_scalar:
43 dst.fill(src)
44 return
45
46 if where is None:
47 if _can_memcpy(dst, src):
48 dst.data.copy_from(src.data, src.nbytes)
49 else:
50 device = dst.device
51 with device:
52 if src.device != device:
53 src = src.copy()
54 core.elementwise_copy(src, dst)
55 else:
56 core.elementwise_copy_where(src, where, dst)
57
58
59 def _can_memcpy(dst, src):
60 c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous
61 f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous
62 return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \
63 dst.size == src.size
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/manipulation/basic.py b/cupy/manipulation/basic.py
--- a/cupy/manipulation/basic.py
+++ b/cupy/manipulation/basic.py
@@ -39,7 +39,7 @@
if dst.size == 0:
return
- if src_is_python_scalar:
+ if src_is_python_scalar and where is None:
dst.fill(src)
return
| {"golden_diff": "diff --git a/cupy/manipulation/basic.py b/cupy/manipulation/basic.py\n--- a/cupy/manipulation/basic.py\n+++ b/cupy/manipulation/basic.py\n@@ -39,7 +39,7 @@\n if dst.size == 0:\n return\n \n- if src_is_python_scalar:\n+ if src_is_python_scalar and where is None:\n dst.fill(src)\n return\n", "issue": "cupy.copyto behaves differently from numpy.copyto when src is a python scalar\nCode:\r\n```python\r\nimport numpy\r\nimport cupy\r\n\r\ndef copyto_check(xp):\r\n x = xp.zeros(3, dtype=numpy.float32)\r\n # replace first and third items with 1.0\r\n xp.copyto(x, 1.0, where=xp.asarray([True, False, True]))\r\n print(x)\r\n\r\nprint('numpy', numpy.__version__)\r\ncopyto_check(numpy)\r\nprint('cupy', cupy.__version__)\r\ncopyto_check(cupy)\r\n```\r\nOutput:\r\n```\r\nnumpy 1.14.0\r\n[1. 0. 1.]\r\ncupy 2.2.0\r\n[1. 1. 1.]\r\n```\n", "before_files": [{"content": "import numpy\nimport six\n\nfrom cupy import core\n\n\ndef copyto(dst, src, casting='same_kind', where=None):\n \"\"\"Copies values from one array to another with broadcasting.\n\n This function can be called for arrays on different devices. In this case,\n casting, ``where``, and broadcasting is not supported, and an exception is\n raised if these are used.\n\n Args:\n dst (cupy.ndarray): Target array.\n src (cupy.ndarray): Source array.\n casting (str): Casting rule. See :func:`numpy.can_cast` for detail.\n where (cupy.ndarray of bool): If specified, this array acts as a mask,\n and an element is copied only if the corresponding element of\n ``where`` is True.\n\n .. seealso:: :func:`numpy.copyto`\n\n \"\"\"\n\n src_type = type(src)\n src_is_python_scalar = (src_type in six.integer_types or\n src_type in (bool, float, complex))\n if src_is_python_scalar:\n src_dtype = numpy.dtype(type(src))\n can_cast = numpy.can_cast(src, dst.dtype, casting)\n else:\n src_dtype = src.dtype\n can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)\n\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n if dst.size == 0:\n return\n\n if src_is_python_scalar:\n dst.fill(src)\n return\n\n if where is None:\n if _can_memcpy(dst, src):\n dst.data.copy_from(src.data, src.nbytes)\n else:\n device = dst.device\n with device:\n if src.device != device:\n src = src.copy()\n core.elementwise_copy(src, dst)\n else:\n core.elementwise_copy_where(src, where, dst)\n\n\ndef _can_memcpy(dst, src):\n c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous\n f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous\n return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \\\n dst.size == src.size\n", "path": "cupy/manipulation/basic.py"}], "after_files": [{"content": "import numpy\nimport six\n\nfrom cupy import core\n\n\ndef copyto(dst, src, casting='same_kind', where=None):\n \"\"\"Copies values from one array to another with broadcasting.\n\n This function can be called for arrays on different devices. In this case,\n casting, ``where``, and broadcasting is not supported, and an exception is\n raised if these are used.\n\n Args:\n dst (cupy.ndarray): Target array.\n src (cupy.ndarray): Source array.\n casting (str): Casting rule. See :func:`numpy.can_cast` for detail.\n where (cupy.ndarray of bool): If specified, this array acts as a mask,\n and an element is copied only if the corresponding element of\n ``where`` is True.\n\n .. 
seealso:: :func:`numpy.copyto`\n\n \"\"\"\n\n src_type = type(src)\n src_is_python_scalar = (src_type in six.integer_types or\n src_type in (bool, float, complex))\n if src_is_python_scalar:\n src_dtype = numpy.dtype(type(src))\n can_cast = numpy.can_cast(src, dst.dtype, casting)\n else:\n src_dtype = src.dtype\n can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)\n\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n if dst.size == 0:\n return\n\n if src_is_python_scalar and where is None:\n dst.fill(src)\n return\n\n if where is None:\n if _can_memcpy(dst, src):\n dst.data.copy_from(src.data, src.nbytes)\n else:\n device = dst.device\n with device:\n if src.device != device:\n src = src.copy()\n core.elementwise_copy(src, dst)\n else:\n core.elementwise_copy_where(src, where, dst)\n\n\ndef _can_memcpy(dst, src):\n c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous\n f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous\n return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \\\n dst.size == src.size\n", "path": "cupy/manipulation/basic.py"}]} | 1,028 | 91 |
gh_patches_debug_28700 | rasdani/github-patches | git_diff | meltano__meltano-6552 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature]: Collect telemetry data about how `send_anonymous_usage_stats` was configured
The project context (and its schema) should be updated to include the key `send_anonymous_usage_stats_source` with the value `ProjectSettingsService.get_with_metadata('send_anonymous_usage_stats')[1]['source'].value`, which can be one of the following strings:
- `auto`
- `config_override`
- `db`
- `default`
- `dotenv`
- `env`
- `inherited`
- `meltano_env`
- `meltano_yml`
CC @pnadolny13 @aaronsteers
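
As a rough sketch of what this asks for (hypothetical code, not the final patch), the setting value and its source can be read together via `get_with_metadata`; `settings_service` is assumed here to be a `ProjectSettingsService` bound to the current project:
```python
def telemetry_source_fields(settings_service):
    """settings_service: a ProjectSettingsService for the current project (assumed)."""
    value, metadata = settings_service.get_with_metadata("send_anonymous_usage_stats")
    return {
        "send_anonymous_usage_stats": value,
        # metadata["source"] is an enum member; .value is one of the strings listed above.
        "send_anonymous_usage_stats_source": metadata["source"].value,
    }
```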
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/core/tracking/contexts/project.py`
Content:
```
1 """Project context for the Snowplow tracker."""
2
3 from __future__ import annotations
4
5 import uuid
6 from enum import Enum, auto
7
8 from cached_property import cached_property
9 from snowplow_tracker import SelfDescribingJson
10 from structlog.stdlib import get_logger
11
12 from meltano.core.project import Project
13 from meltano.core.project_settings_service import ProjectSettingsService
14 from meltano.core.tracking.schemas import ProjectContextSchema
15 from meltano.core.utils import hash_sha256
16
17 logger = get_logger(__name__)
18
19
20 class ProjectUUIDSource(Enum):
21 """The source of the `project_uuid` used for telemetry."""
22
23 # The UUID was explicitly provided in the config as the `project_id`.
24 explicit = auto()
25
26 # The UUID was derived by hashing the `project_id` in the config.
27 derived = auto()
28
29 # The UUID was randomly generated (UUID v4) since no `project_id` was configured.
30 random = auto()
31
32
33 class ProjectContext(SelfDescribingJson):
34 """Tracking context for the Meltano project."""
35
36 def __init__(self, project: Project, client_id: uuid.UUID):
37 """Initialize a meltano tracking "project" context.
38
39 Args:
40 project: The Meltano project.
41 client_id: The client ID from `analytics.json`.
42 """
43 self.project = project
44 self.settings_service = ProjectSettingsService(project)
45 self.send_anonymous_usage_stats = self.settings_service.get(
46 "send_anonymous_usage_stats", True
47 )
48
49 super().__init__(
50 ProjectContextSchema.url,
51 {
52 "context_uuid": str(uuid.uuid4()),
53 "project_uuid": str(self.project_uuid),
54 "project_uuid_source": self.project_uuid_source.name,
55 "client_uuid": str(client_id),
56 "environment_name_hash": (
57 hash_sha256(self.project.active_environment.name)
58 if self.project.active_environment
59 else None
60 ),
61 },
62 )
63
64 @property
65 def project_uuid_source(self) -> ProjectUUIDSource:
66 """Obtain the source of the `project_uuid` used for telemetry.
67
68 Returns:
69 ProjectUUIDSource: The source of the `project_uuid` used for telemetry.
70 """
71 # Ensure the `project_uuid` has been generated
72 self.project_uuid # noqa: WPS428
73 return self._project_uuid_source
74
75 @cached_property
76 def project_uuid(self) -> uuid.UUID:
77 """Obtain the `project_id` from the project config file.
78
79 If it is not found (e.g. first time run), generate a valid v4 UUID, and and store it in the
80 project config file.
81
82 Returns:
83 The project UUID.
84 """
85 project_id_str = self.settings_service.get("project_id")
86
87 if project_id_str:
88 try:
89 # Project ID might already be a UUID
90 project_id = uuid.UUID(project_id_str)
91 except ValueError:
92 # If the project ID is not a UUID, then we hash it, and use the hash to make a UUID
93 project_id = uuid.UUID(hash_sha256(project_id_str)[::2])
94 self._project_uuid_source = ProjectUUIDSource.derived
95 else:
96 self._project_uuid_source = ProjectUUIDSource.explicit
97 else:
98 project_id = uuid.uuid4()
99 self._project_uuid_source = ProjectUUIDSource.random
100
101 return project_id
102
```
Path: `src/meltano/core/tracking/schemas.py`
Content:
```
1 """Meltano Iglu schemas metadata & utilities."""
2
3 from __future__ import annotations
4
5 from dataclasses import dataclass
6
7 DEFAULT_VENDOR = "com.meltano"
8
9
10 @dataclass
11 class IgluSchema:
12 """Dataclass to store the name, version, vendor, and URL for an Iglu schema."""
13
14 name: str
15 version: str
16 vendor: str = DEFAULT_VENDOR
17
18 @property
19 def url(self) -> str:
20 """Construct an iglu schema URL.
21
22 Returns:
23 The URL to the schema.
24 """
25 return f"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}"
26
27
28 CliContextSchema = IgluSchema("cli_context", "1-1-0")
29 CliEventSchema = IgluSchema("cli_event", "1-0-1")
30 BlockEventSchema = IgluSchema("block_event", "1-0-0")
31 EnvironmentContextSchema = IgluSchema("environment_context", "1-0-0")
32 ExceptionContextSchema = IgluSchema("exception_context", "1-0-0")
33 ExitEventSchema = IgluSchema("exit_event", "1-0-0")
34 PluginsContextSchema = IgluSchema("plugins_context", "1-0-0")
35 ProjectContextSchema = IgluSchema("project_context", "1-0-0")
36 TelemetryStateChangeEventSchema = IgluSchema("telemetry_state_change_event", "1-0-0")
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/meltano/core/tracking/contexts/project.py b/src/meltano/core/tracking/contexts/project.py
--- a/src/meltano/core/tracking/contexts/project.py
+++ b/src/meltano/core/tracking/contexts/project.py
@@ -42,9 +42,10 @@
"""
self.project = project
self.settings_service = ProjectSettingsService(project)
- self.send_anonymous_usage_stats = self.settings_service.get(
- "send_anonymous_usage_stats", True
- )
+ (
+ send_anonymous_usage_stats,
+ send_anonymous_usage_stats_metadata,
+ ) = self.settings_service.get_with_metadata("send_anonymous_usage_stats")
super().__init__(
ProjectContextSchema.url,
@@ -58,6 +59,10 @@
if self.project.active_environment
else None
),
+ "send_anonymous_usage_stats": send_anonymous_usage_stats,
+ "send_anonymous_usage_stats_source": (
+ send_anonymous_usage_stats_metadata["source"].value
+ ),
},
)
diff --git a/src/meltano/core/tracking/schemas.py b/src/meltano/core/tracking/schemas.py
--- a/src/meltano/core/tracking/schemas.py
+++ b/src/meltano/core/tracking/schemas.py
@@ -32,5 +32,5 @@
ExceptionContextSchema = IgluSchema("exception_context", "1-0-0")
ExitEventSchema = IgluSchema("exit_event", "1-0-0")
PluginsContextSchema = IgluSchema("plugins_context", "1-0-0")
-ProjectContextSchema = IgluSchema("project_context", "1-0-0")
+ProjectContextSchema = IgluSchema("project_context", "1-1-0")
TelemetryStateChangeEventSchema = IgluSchema("telemetry_state_change_event", "1-0-0")
| {"golden_diff": "diff --git a/src/meltano/core/tracking/contexts/project.py b/src/meltano/core/tracking/contexts/project.py\n--- a/src/meltano/core/tracking/contexts/project.py\n+++ b/src/meltano/core/tracking/contexts/project.py\n@@ -42,9 +42,10 @@\n \"\"\"\n self.project = project\n self.settings_service = ProjectSettingsService(project)\n- self.send_anonymous_usage_stats = self.settings_service.get(\n- \"send_anonymous_usage_stats\", True\n- )\n+ (\n+ send_anonymous_usage_stats,\n+ send_anonymous_usage_stats_metadata,\n+ ) = self.settings_service.get_with_metadata(\"send_anonymous_usage_stats\")\n \n super().__init__(\n ProjectContextSchema.url,\n@@ -58,6 +59,10 @@\n if self.project.active_environment\n else None\n ),\n+ \"send_anonymous_usage_stats\": send_anonymous_usage_stats,\n+ \"send_anonymous_usage_stats_source\": (\n+ send_anonymous_usage_stats_metadata[\"source\"].value\n+ ),\n },\n )\n \ndiff --git a/src/meltano/core/tracking/schemas.py b/src/meltano/core/tracking/schemas.py\n--- a/src/meltano/core/tracking/schemas.py\n+++ b/src/meltano/core/tracking/schemas.py\n@@ -32,5 +32,5 @@\n ExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\n ExitEventSchema = IgluSchema(\"exit_event\", \"1-0-0\")\n PluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\n-ProjectContextSchema = IgluSchema(\"project_context\", \"1-0-0\")\n+ProjectContextSchema = IgluSchema(\"project_context\", \"1-1-0\")\n TelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "issue": "[Feature]: Collect telemetry data about how `send_anonymous_usage_stats` was configured\nThe project context (and its schema) should be updated to include the key `send_anonymous_usage_stats_source` with the value `ProjectSettingService.get_with_metadata('send_anonymous_usage_stats')[1]['source'].value`, which can be one of the following strings:\r\n- `auto`\r\n- `config_override`\r\n- `db`\r\n- `default`\r\n- `dotenv`\r\n- `env`\r\n- `inherited`\r\n- `meltano_env`\r\n- `meltano_yml`\r\n\r\nCC @pnadolny13 @aaronsteers \n", "before_files": [{"content": "\"\"\"Project context for the Snowplow tracker.\"\"\"\n\nfrom __future__ import annotations\n\nimport uuid\nfrom enum import Enum, auto\n\nfrom cached_property import cached_property\nfrom snowplow_tracker import SelfDescribingJson\nfrom structlog.stdlib import get_logger\n\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import ProjectSettingsService\nfrom meltano.core.tracking.schemas import ProjectContextSchema\nfrom meltano.core.utils import hash_sha256\n\nlogger = get_logger(__name__)\n\n\nclass ProjectUUIDSource(Enum):\n \"\"\"The source of the `project_uuid` used for telemetry.\"\"\"\n\n # The UUID was explicitly provided in the config as the `project_id`.\n explicit = auto()\n\n # The UUID was derived by hashing the `project_id` in the config.\n derived = auto()\n\n # The UUID was randomly generated (UUID v4) since no `project_id` was configured.\n random = auto()\n\n\nclass ProjectContext(SelfDescribingJson):\n \"\"\"Tracking context for the Meltano project.\"\"\"\n\n def __init__(self, project: Project, client_id: uuid.UUID):\n \"\"\"Initialize a meltano tracking \"project\" context.\n\n Args:\n project: The Meltano project.\n client_id: The client ID from `analytics.json`.\n \"\"\"\n self.project = project\n self.settings_service = ProjectSettingsService(project)\n self.send_anonymous_usage_stats = self.settings_service.get(\n \"send_anonymous_usage_stats\", True\n )\n\n 
super().__init__(\n ProjectContextSchema.url,\n {\n \"context_uuid\": str(uuid.uuid4()),\n \"project_uuid\": str(self.project_uuid),\n \"project_uuid_source\": self.project_uuid_source.name,\n \"client_uuid\": str(client_id),\n \"environment_name_hash\": (\n hash_sha256(self.project.active_environment.name)\n if self.project.active_environment\n else None\n ),\n },\n )\n\n @property\n def project_uuid_source(self) -> ProjectUUIDSource:\n \"\"\"Obtain the source of the `project_uuid` used for telemetry.\n\n Returns:\n ProjectUUIDSource: The source of the `project_uuid` used for telemetry.\n \"\"\"\n # Ensure the `project_uuid` has been generated\n self.project_uuid # noqa: WPS428\n return self._project_uuid_source\n\n @cached_property\n def project_uuid(self) -> uuid.UUID:\n \"\"\"Obtain the `project_id` from the project config file.\n\n If it is not found (e.g. first time run), generate a valid v4 UUID, and and store it in the\n project config file.\n\n Returns:\n The project UUID.\n \"\"\"\n project_id_str = self.settings_service.get(\"project_id\")\n\n if project_id_str:\n try:\n # Project ID might already be a UUID\n project_id = uuid.UUID(project_id_str)\n except ValueError:\n # If the project ID is not a UUID, then we hash it, and use the hash to make a UUID\n project_id = uuid.UUID(hash_sha256(project_id_str)[::2])\n self._project_uuid_source = ProjectUUIDSource.derived\n else:\n self._project_uuid_source = ProjectUUIDSource.explicit\n else:\n project_id = uuid.uuid4()\n self._project_uuid_source = ProjectUUIDSource.random\n\n return project_id\n", "path": "src/meltano/core/tracking/contexts/project.py"}, {"content": "\"\"\"Meltano Iglu schemas metadata & utilities.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nDEFAULT_VENDOR = \"com.meltano\"\n\n\n@dataclass\nclass IgluSchema:\n \"\"\"Dataclass to store the name, version, vendor, and URL for an Iglu schema.\"\"\"\n\n name: str\n version: str\n vendor: str = DEFAULT_VENDOR\n\n @property\n def url(self) -> str:\n \"\"\"Construct an iglu schema URL.\n\n Returns:\n The URL to the schema.\n \"\"\"\n return f\"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}\"\n\n\nCliContextSchema = IgluSchema(\"cli_context\", \"1-1-0\")\nCliEventSchema = IgluSchema(\"cli_event\", \"1-0-1\")\nBlockEventSchema = IgluSchema(\"block_event\", \"1-0-0\")\nEnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-0-0\")\nExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\nExitEventSchema = IgluSchema(\"exit_event\", \"1-0-0\")\nPluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\nProjectContextSchema = IgluSchema(\"project_context\", \"1-0-0\")\nTelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "path": "src/meltano/core/tracking/schemas.py"}], "after_files": [{"content": "\"\"\"Project context for the Snowplow tracker.\"\"\"\n\nfrom __future__ import annotations\n\nimport uuid\nfrom enum import Enum, auto\n\nfrom cached_property import cached_property\nfrom snowplow_tracker import SelfDescribingJson\nfrom structlog.stdlib import get_logger\n\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import ProjectSettingsService\nfrom meltano.core.tracking.schemas import ProjectContextSchema\nfrom meltano.core.utils import hash_sha256\n\nlogger = get_logger(__name__)\n\n\nclass ProjectUUIDSource(Enum):\n \"\"\"The source of the `project_uuid` used for telemetry.\"\"\"\n\n # The UUID was explicitly 
provided in the config as the `project_id`.\n explicit = auto()\n\n # The UUID was derived by hashing the `project_id` in the config.\n derived = auto()\n\n # The UUID was randomly generated (UUID v4) since no `project_id` was configured.\n random = auto()\n\n\nclass ProjectContext(SelfDescribingJson):\n \"\"\"Tracking context for the Meltano project.\"\"\"\n\n def __init__(self, project: Project, client_id: uuid.UUID):\n \"\"\"Initialize a meltano tracking \"project\" context.\n\n Args:\n project: The Meltano project.\n client_id: The client ID from `analytics.json`.\n \"\"\"\n self.project = project\n self.settings_service = ProjectSettingsService(project)\n (\n send_anonymous_usage_stats,\n send_anonymous_usage_stats_metadata,\n ) = self.settings_service.get_with_metadata(\"send_anonymous_usage_stats\")\n\n super().__init__(\n ProjectContextSchema.url,\n {\n \"context_uuid\": str(uuid.uuid4()),\n \"project_uuid\": str(self.project_uuid),\n \"project_uuid_source\": self.project_uuid_source.name,\n \"client_uuid\": str(client_id),\n \"environment_name_hash\": (\n hash_sha256(self.project.active_environment.name)\n if self.project.active_environment\n else None\n ),\n \"send_anonymous_usage_stats\": send_anonymous_usage_stats,\n \"send_anonymous_usage_stats_source\": (\n send_anonymous_usage_stats_metadata[\"source\"].value\n ),\n },\n )\n\n @property\n def project_uuid_source(self) -> ProjectUUIDSource:\n \"\"\"Obtain the source of the `project_uuid` used for telemetry.\n\n Returns:\n ProjectUUIDSource: The source of the `project_uuid` used for telemetry.\n \"\"\"\n # Ensure the `project_uuid` has been generated\n self.project_uuid # noqa: WPS428\n return self._project_uuid_source\n\n @cached_property\n def project_uuid(self) -> uuid.UUID:\n \"\"\"Obtain the `project_id` from the project config file.\n\n If it is not found (e.g. 
first time run), generate a valid v4 UUID, and and store it in the\n project config file.\n\n Returns:\n The project UUID.\n \"\"\"\n project_id_str = self.settings_service.get(\"project_id\")\n\n if project_id_str:\n try:\n # Project ID might already be a UUID\n project_id = uuid.UUID(project_id_str)\n except ValueError:\n # If the project ID is not a UUID, then we hash it, and use the hash to make a UUID\n project_id = uuid.UUID(hash_sha256(project_id_str)[::2])\n self._project_uuid_source = ProjectUUIDSource.derived\n else:\n self._project_uuid_source = ProjectUUIDSource.explicit\n else:\n project_id = uuid.uuid4()\n self._project_uuid_source = ProjectUUIDSource.random\n\n return project_id\n", "path": "src/meltano/core/tracking/contexts/project.py"}, {"content": "\"\"\"Meltano Iglu schemas metadata & utilities.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nDEFAULT_VENDOR = \"com.meltano\"\n\n\n@dataclass\nclass IgluSchema:\n \"\"\"Dataclass to store the name, version, vendor, and URL for an Iglu schema.\"\"\"\n\n name: str\n version: str\n vendor: str = DEFAULT_VENDOR\n\n @property\n def url(self) -> str:\n \"\"\"Construct an iglu schema URL.\n\n Returns:\n The URL to the schema.\n \"\"\"\n return f\"iglu:{self.vendor}/{self.name}/jsonschema/{self.version}\"\n\n\nCliContextSchema = IgluSchema(\"cli_context\", \"1-1-0\")\nCliEventSchema = IgluSchema(\"cli_event\", \"1-0-1\")\nBlockEventSchema = IgluSchema(\"block_event\", \"1-0-0\")\nEnvironmentContextSchema = IgluSchema(\"environment_context\", \"1-0-0\")\nExceptionContextSchema = IgluSchema(\"exception_context\", \"1-0-0\")\nExitEventSchema = IgluSchema(\"exit_event\", \"1-0-0\")\nPluginsContextSchema = IgluSchema(\"plugins_context\", \"1-0-0\")\nProjectContextSchema = IgluSchema(\"project_context\", \"1-1-0\")\nTelemetryStateChangeEventSchema = IgluSchema(\"telemetry_state_change_event\", \"1-0-0\")\n", "path": "src/meltano/core/tracking/schemas.py"}]} | 1,733 | 422 |
gh_patches_debug_26978 | rasdani/github-patches | git_diff | PrefectHQ__prefect-2727 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Write More Idioms
We should write some more idioms:
- [x] how to define conditional logic using the [new conditional api](https://github.com/PrefectHQ/prefect/pull/2443) and the "old" way
- [x] how to use `target`s (0.11.0+)
- [x] how to configure notifications (three options: write a downstream task, a state handler, or a cloud hook)
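
To make the first checklist item concrete, here is a rough sketch of the conditional-logic idiom using the `ifelse`/`merge` helpers (an illustrative example only; the task names are made up for the sketch):
```python
from prefect import Flow, task
from prefect.tasks.control_flow import ifelse, merge

@task
def check_condition():
    return True

@task
def action_if_true():
    return "took the true branch"

@task
def action_if_false():
    return "took the false branch"

with Flow("conditional-idiom") as flow:
    true_branch = action_if_true()
    false_branch = action_if_false()
    # Only one branch runs; the other is skipped.
    ifelse(check_condition(), true_branch, false_branch)
    # merge() collapses the branches back into a single downstream result.
    result = merge(true_branch, false_branch)
```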
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/tasks/control_flow/conditional.py`
Content:
```
1 from typing import Any, Dict
2
3 import prefect
4 from prefect import Task
5 from prefect.engine import signals
6
7 __all__ = ["switch", "ifelse"]
8
9
10 class Merge(Task):
11 def __init__(self, **kwargs) -> None:
12 if kwargs.setdefault("skip_on_upstream_skip", False):
13 raise ValueError("Merge tasks must have `skip_on_upstream_skip=False`.")
14 kwargs.setdefault("trigger", prefect.triggers.not_all_skipped)
15 super().__init__(**kwargs)
16
17 def run(self, **task_results: Any) -> Any:
18 return next(
19 (v for k, v in sorted(task_results.items()) if v is not None), None,
20 )
21
22
23 class CompareValue(Task):
24 """
25 This task stores a `value` at initialization and compares it to a `value` received at runtime.
26 If the values don't match, it raises a SKIP exception.
27
28 Args:
29 - value (Any): the value this task will attempt to match when it runs
30 - **kwargs: keyword arguments for the Task
31 """
32
33 def __init__(self, value: Any, **kwargs: Any):
34 self.value = value
35 kwargs.setdefault("name", 'CompareValue: "{}"'.format(value))
36 super().__init__(**kwargs)
37
38 def run(self, value: Any) -> None:
39 """
40 Raises a SKIP signal if the passed value does not match the task's match value;
41 succeeds silently otherwise.
42
43 Args:
44 - value (Any): the value that will be matched against the task's value.
45 """
46 if value != self.value:
47 raise signals.SKIP(
48 'Provided value "{}" did not match "{}"'.format(value, self.value)
49 )
50
51
52 def switch(condition: Task, cases: Dict[Any, Task]) -> None:
53 """
54 Adds a SWITCH to a workflow.
55
56 The condition task is evaluated and the result is compared to the keys of the cases
57 dictionary. The task corresponding to the matching key is run; all other tasks are
58 skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
59 `skip_on_upstream_skip=False`.
60
61 Example:
62 ```python
63 @task
64 def condition():
65 return "b" # returning 'b' will take the b_branch
66
67 @task
68 def a_branch():
69 return "A Branch"
70
71 @task
72 def b_branch():
73 return "B Branch"
74
75 with Flow("switch-flow") as flow:
76 switch(condition, dict(a=a_branch, b=b_branch))
77 ```
78
79 Args:
80 - condition (Task): a task whose result forms the condition for the switch
81 - cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
82 The value of the `condition` task will be compared to the keys of this dict, and
83 the matching task will be executed.
84
85 Raises:
86 - PrefectWarning: if any of the tasks in "cases" have upstream dependencies,
87 then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this
88 is passing a list of tasks as one of the cases, which adds the `List` task
89 to the switch condition but leaves the tasks themselves upstream.
90 """
91
92 with prefect.tags("switch"):
93 for value, task in cases.items():
94 task = prefect.utilities.tasks.as_task(task)
95 match_condition = CompareValue(value=value).bind(value=condition)
96 task.set_dependencies(upstream_tasks=[match_condition])
97
98
99 def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:
100 """
101 Builds a conditional branch into a workflow.
102
103 If the condition evaluates True(ish), the true_task will run. If it
104 evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are
105 all downstream tasks that don't set `skip_on_upstream_skip=False`.
106
107 Args:
108 - condition (Task): a task whose boolean result forms the condition for the ifelse
109 - true_task (Task): a task that will be executed if the condition is True
110 - false_task (Task): a task that will be executed if the condition is False
111 """
112
113 @prefect.task
114 def as_bool(x):
115 return bool(x)
116
117 cases = {c: t for c, t in [(True, true_task), (False, false_task)] if t is not None}
118 if cases:
119 switch(condition=as_bool(condition), cases=cases)
120
121
122 def merge(*tasks: Task) -> Task:
123 """
124 Merges conditional branches back together.
125
126 A conditional branch in a flow results in one or more tasks proceeding and one or
127 more tasks skipping. It is often convenient to merge those branches back into a
128 single result. This function is a simple way to achieve that goal. By default this
129 task will skip if all its upstream dependencies are also skipped.
130
131 The merge will return the first real result it encounters, or `None`. If multiple
132 tasks might return a result, group them with a list.
133
134 Example:
135 ```python
136 with Flow("My Flow"):
137 true_branch = ActionIfTrue()
138 false_branch = ActionIfFalse()
139 ifelse(CheckCondition(), true_branch, false_branch)
140
141 merged_result = merge(true_branch, false_branch)
142 ```
143
144 Args:
145 - *tasks (Task): tasks whose results should be merged into a single result. The tasks are
146 assumed to all sit downstream of different `switch` branches, such that only
147 one of them will contain a result and the others will all be skipped.
148
149 Returns:
150 - Task: a Task representing the merged result.
151
152 """
153 return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py
--- a/src/prefect/tasks/control_flow/conditional.py
+++ b/src/prefect/tasks/control_flow/conditional.py
@@ -4,7 +4,7 @@
from prefect import Task
from prefect.engine import signals
-__all__ = ["switch", "ifelse"]
+__all__ = ["switch", "ifelse", "merge"]
class Merge(Task):
@@ -119,7 +119,7 @@
switch(condition=as_bool(condition), cases=cases)
-def merge(*tasks: Task) -> Task:
+def merge(*tasks: Task, flow=None) -> Task:
"""
Merges conditional branches back together.
@@ -145,9 +145,13 @@
- *tasks (Task): tasks whose results should be merged into a single result. The tasks are
assumed to all sit downstream of different `switch` branches, such that only
one of them will contain a result and the others will all be skipped.
+ - flow (Flow, optional): The flow to use, defaults to the current flow
+ in context if no flow is specified
Returns:
- Task: a Task representing the merged result.
"""
- return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
+ return Merge().bind(
+ **{"task_{}".format(i + 1): t for i, t in enumerate(tasks)}, flow=flow
+ )
| {"golden_diff": "diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py\n--- a/src/prefect/tasks/control_flow/conditional.py\n+++ b/src/prefect/tasks/control_flow/conditional.py\n@@ -4,7 +4,7 @@\n from prefect import Task\n from prefect.engine import signals\n \n-__all__ = [\"switch\", \"ifelse\"]\n+__all__ = [\"switch\", \"ifelse\", \"merge\"]\n \n \n class Merge(Task):\n@@ -119,7 +119,7 @@\n switch(condition=as_bool(condition), cases=cases)\n \n \n-def merge(*tasks: Task) -> Task:\n+def merge(*tasks: Task, flow=None) -> Task:\n \"\"\"\n Merges conditional branches back together.\n \n@@ -145,9 +145,13 @@\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n+ - flow (Flow, optional): The flow to use, defaults to the current flow\n+ in context if no flow is specified\n \n Returns:\n - Task: a Task representing the merged result.\n \n \"\"\"\n- return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n+ return Merge().bind(\n+ **{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)}, flow=flow\n+ )\n", "issue": "Write More Idioms\nWe should write some more idioms:\r\n\r\n- [x] how to define conditional logic using the [new conditional api](https://github.com/PrefectHQ/prefect/pull/2443) and the \"old\" way\r\n- [x] how to use `target`s (0.11.0+)\r\n- [x] how to configure notifications (three options: write a downstream task, state handler, cloud hook)\n", "before_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n kwargs.setdefault(\"trigger\", prefect.triggers.not_all_skipped)\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next(\n (v for k, v in sorted(task_results.items()) if v is not None), None,\n )\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. 
Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Example:\n ```python\n @task\n def condition():\n return \"b\" # returning 'b' will take the b_branch\n\n @task\n def a_branch():\n return \"A Branch\"\n\n @task\n def b_branch():\n return \"B Branch\"\n\n with Flow(\"switch-flow\") as flow:\n switch(condition, dict(a=a_branch, b=b_branch))\n ```\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n @prefect.task\n def as_bool(x):\n return bool(x)\n\n cases = {c: t for c, t in [(True, true_task), (False, false_task)] if t is not None}\n if cases:\n switch(condition=as_bool(condition), cases=cases)\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal. By default this\n task will skip if all its upstream dependencies are also skipped.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. 
The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}], "after_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\n\n__all__ = [\"switch\", \"ifelse\", \"merge\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n kwargs.setdefault(\"trigger\", prefect.triggers.not_all_skipped)\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next(\n (v for k, v in sorted(task_results.items()) if v is not None), None,\n )\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Example:\n ```python\n @task\n def condition():\n return \"b\" # returning 'b' will take the b_branch\n\n @task\n def a_branch():\n return \"A Branch\"\n\n @task\n def b_branch():\n return \"B Branch\"\n\n with Flow(\"switch-flow\") as flow:\n switch(condition, dict(a=a_branch, b=b_branch))\n ```\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. 
The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n @prefect.task\n def as_bool(x):\n return bool(x)\n\n cases = {c: t for c, t in [(True, true_task), (False, false_task)] if t is not None}\n if cases:\n switch(condition=as_bool(condition), cases=cases)\n\n\ndef merge(*tasks: Task, flow=None) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal. By default this\n task will skip if all its upstream dependencies are also skipped.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n - flow (Flow, optional): The flow to use, defaults to the current flow\n in context if no flow is specified\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(\n **{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)}, flow=flow\n )\n", "path": "src/prefect/tasks/control_flow/conditional.py"}]} | 1,981 | 347 |
gh_patches_debug_8710 | rasdani/github-patches | git_diff | ietf-tools__datatracker-4911 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrapping issues in ballot modal on narrow screens
### Describe the issue
<img width="582" alt="Screenshot 2022-12-15 at 18 02 42" src="https://user-images.githubusercontent.com/200328/207908976-51568fb5-a3b4-4ccc-8026-8065d13da38d.png">
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/doc/templatetags/ballot_icon.py`
Content:
```
1 # Copyright The IETF Trust 2012-2021, All Rights Reserved
2 # Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).
3 # All rights reserved. Contact: Pasi Eronen <[email protected]>
4 #
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions
7 # are met:
8 #
9 # * Redistributions of source code must retain the above copyright
10 # notice, this list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above
13 # copyright notice, this list of conditions and the following
14 # disclaimer in the documentation and/or other materials provided
15 # with the distribution.
16 #
17 # * Neither the name of the Nokia Corporation and/or its
18 # subsidiary(-ies) nor the names of its contributors may be used
19 # to endorse or promote products derived from this software
20 # without specific prior written permission.
21 #
22 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
23 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
24 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
25 # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
26 # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
27 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
28 # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
29 # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
30 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
31 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
32 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33
34 import datetime
35
36 import debug # pyflakes:ignore
37
38 from django import template
39 from django.urls import reverse as urlreverse
40 from django.db.models import Q
41 from django.utils import timezone
42 from django.utils.safestring import mark_safe
43
44 from ietf.ietfauth.utils import user_is_person, has_role
45 from ietf.doc.models import BallotPositionDocEvent, IESG_BALLOT_ACTIVE_STATES
46 from ietf.name.models import BallotPositionName
47
48
49 register = template.Library()
50
51 @register.filter
52 def showballoticon(doc):
53 if doc.type_id == "draft":
54 if doc.stream_id == 'ietf' and doc.get_state_slug("draft-iesg") not in IESG_BALLOT_ACTIVE_STATES:
55 return False
56 elif doc.stream_id == 'irtf' and doc.get_state_slug("draft-stream-irtf") not in ['irsgpoll']:
57 return False
58 elif doc.type_id == "charter":
59 if doc.get_state_slug() not in ("intrev", "extrev", "iesgrev"):
60 return False
61 elif doc.type_id == "conflrev":
62 if doc.get_state_slug() not in ("iesgeval","defer"):
63 return False
64 elif doc.type_id == "statchg":
65 if doc.get_state_slug() not in ("iesgeval","defer", "in-lc"):
66 return False
67
68 return True
69
70 @register.simple_tag(takes_context=True)
71 def ballot_icon(context, doc):
72 user = context.get("user")
73
74 if not doc:
75 return ""
76
77 if not showballoticon(doc):
78 return ""
79
80 ballot = doc.ballot if hasattr(doc, 'ballot') else doc.active_ballot()
81
82 if not ballot:
83 return ""
84
85 def sort_key(t):
86 _, pos = t
87 if not pos:
88 return (2, 0)
89 elif pos.pos.blocking:
90 return (0, pos.pos.order)
91 else:
92 return (1, pos.pos.order)
93
94 positions = list(ballot.active_balloter_positions().items())
95 positions.sort(key=sort_key)
96
97 right_click_string = ''
98 if has_role(user, "Area Director"):
99 right_click_string = 'oncontextmenu="window.location.href=\'%s\';return false;"' % urlreverse('ietf.doc.views_ballot.edit_position', kwargs=dict(name=doc.name, ballot_id=ballot.pk))
100
101 my_blocking = False
102 for i, (balloter, pos) in enumerate(positions):
103 if user_is_person(user,balloter) and pos and pos.pos.blocking:
104 my_blocking = True
105 break
106
107 typename = "Unknown"
108 if ballot.ballot_type.slug=='irsg-approve':
109 typename = "IRSG"
110 else:
111 typename = "IESG"
112
113 res = ['<a %s href="%s" data-bs-toggle="modal" data-bs-target="#modal-%d" aria-label="%s positions" title="%s positions (click to show more)" class="ballot-icon"><table' % (
114 right_click_string,
115 urlreverse("ietf.doc.views_doc.ballot_popup", kwargs=dict(name=doc.name, ballot_id=ballot.pk)),
116 ballot.pk,
117 typename,
118 typename,)]
119 if my_blocking:
120 res.append(' class="is-blocking" ')
121 res.append('><tbody>')
122
123 res.append("<tr>")
124
125 for i, (ad, pos) in enumerate(positions):
126 # The IRSG has many more members than the IESG, so make the table wider
127 if i > 0 and i % (5 if len(positions) <= 15 else 10) == 0:
128 res.append("</tr><tr>")
129
130 c = "position-%s" % (pos.pos.slug if pos else "norecord")
131
132 if user_is_person(user, ad):
133 c += " my"
134
135 res.append('<td class="%s"></td>' % c)
136
137 # add sufficient table calls to last row to avoid HTML validation warning
138 while (i + 1) % 5 != 0:
139 res.append('<td class="position-empty"></td>')
140 i = i + 1
141
142 res.append("</tr></tbody></table></a>")
143 res.append('<div id="modal-%d" class="modal fade" tabindex="-1" role="dialog" aria-hidden="true"><div class="modal-dialog modal-dialog-scrollable modal-xl"><div class="modal-content"></div></div></div>' % ballot.pk)
144
145 return mark_safe("".join(res))
146
147 @register.filter
148 def ballotposition(doc, user):
149 if not showballoticon(doc) or not has_role(user, "Area Director"):
150 return None
151
152 ballot = doc.active_ballot()
153 if not ballot:
154 return None
155
156 changed_pos = doc.latest_event(BallotPositionDocEvent, type="changed_ballot_position", balloter__user=user, ballot=ballot)
157 if changed_pos:
158 pos = changed_pos.pos
159 else:
160 pos = BallotPositionName.objects.get(slug="norecord")
161 return pos
162
163
164 @register.filter
165 def state_age_colored(doc):
166 if doc.type_id == "draft":
167 if not doc.get_state_slug() in ["active", "rfc"]:
168 # Don't show anything for expired/withdrawn/replaced drafts
169 return ""
170 iesg_state = doc.get_state_slug("draft-iesg")
171 if not iesg_state:
172 return ""
173
174 if iesg_state in ["dead", "watching", "pub", "idexists"]:
175 return ""
176 try:
177 state_datetime = (
178 doc.docevent_set.filter(
179 Q(type="started_iesg_process")
180 | Q(type="changed_state", statedocevent__state_type="draft-iesg")
181 )
182 .order_by("-time")[0]
183 .time
184 )
185 except IndexError:
186 state_datetime = datetime.datetime(1990, 1, 1, tzinfo=datetime.timezone.utc)
187 days = (timezone.now() - state_datetime).days
188 # loosely based on
189 # https://trac.ietf.org/trac/iesg/wiki/PublishPath
190 if iesg_state == "lc":
191 goal1 = 30
192 goal2 = 30
193 elif iesg_state == "rfcqueue":
194 goal1 = 60
195 goal2 = 120
196 elif iesg_state in ["lc-req", "ann"]:
197 goal1 = 4
198 goal2 = 7
199 elif "need-rev" in [x.slug for x in doc.tags.all()]:
200 goal1 = 14
201 goal2 = 28
202 elif iesg_state == "pub-req":
203 goal1 = 7
204 goal2 = 14
205 elif iesg_state == "ad-eval":
206 goal1 = 14
207 goal2 = 28
208 else:
209 goal1 = 14
210 goal2 = 28
211 if days > goal2:
212 class_name = "bg-danger"
213 elif days > goal1:
214 class_name = "bg-warning"
215 else:
216 # don't show a badge when things are in the green; clutters display
217 # class_name = "text-success"
218 return ""
219 if days > goal1:
220 title = ' title="In state for %d day%s; goal is <%d days."' % (
221 days,
222 "s" if days != 1 else "",
223 goal1,
224 )
225 else:
226 title = ""
227 return mark_safe(
228 '<span class="badge rounded-pill %s" %s><i class="bi bi-clock-fill"></i> %d</span>'
229 % (class_name, title, days)
230 )
231 else:
232 return ""
233
234
235 @register.filter
236 def auth48_alert_badge(doc):
237 """Return alert badge, if any, for a document"""
238 if doc.type_id != 'draft':
239 return ''
240
241 iesg_state = doc.get_state_slug('draft-iesg')
242 if iesg_state != 'rfcqueue':
243 return ''
244
245 rfced_state = doc.get_state_slug('draft-rfceditor')
246 if rfced_state == 'auth48':
247 return mark_safe('<span class="badge rounded-pill bg-info" title="AUTH48">AUTH48</span>')
248
249 return ''
250
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ietf/doc/templatetags/ballot_icon.py b/ietf/doc/templatetags/ballot_icon.py
--- a/ietf/doc/templatetags/ballot_icon.py
+++ b/ietf/doc/templatetags/ballot_icon.py
@@ -140,7 +140,7 @@
i = i + 1
res.append("</tr></tbody></table></a>")
- res.append('<div id="modal-%d" class="modal fade" tabindex="-1" role="dialog" aria-hidden="true"><div class="modal-dialog modal-dialog-scrollable modal-xl"><div class="modal-content"></div></div></div>' % ballot.pk)
+ res.append('<div id="modal-%d" class="modal fade" tabindex="-1" role="dialog" aria-hidden="true"><div class="modal-dialog modal-dialog-scrollable modal-xl modal-fullscreen-lg-down"><div class="modal-content"></div></div></div>' % ballot.pk)
return mark_safe("".join(res))
| {"golden_diff": "diff --git a/ietf/doc/templatetags/ballot_icon.py b/ietf/doc/templatetags/ballot_icon.py\n--- a/ietf/doc/templatetags/ballot_icon.py\n+++ b/ietf/doc/templatetags/ballot_icon.py\n@@ -140,7 +140,7 @@\n i = i + 1\n \n res.append(\"</tr></tbody></table></a>\")\n- res.append('<div id=\"modal-%d\" class=\"modal fade\" tabindex=\"-1\" role=\"dialog\" aria-hidden=\"true\"><div class=\"modal-dialog modal-dialog-scrollable modal-xl\"><div class=\"modal-content\"></div></div></div>' % ballot.pk)\n+ res.append('<div id=\"modal-%d\" class=\"modal fade\" tabindex=\"-1\" role=\"dialog\" aria-hidden=\"true\"><div class=\"modal-dialog modal-dialog-scrollable modal-xl modal-fullscreen-lg-down\"><div class=\"modal-content\"></div></div></div>' % ballot.pk)\n \n return mark_safe(\"\".join(res))\n", "issue": "Wrapping issues in ballot modal on narrow screens\n### Describe the issue\n\n<img width=\"582\" alt=\"Screenshot 2022-12-15 at 18 02 42\" src=\"https://user-images.githubusercontent.com/200328/207908976-51568fb5-a3b4-4ccc-8026-8065d13da38d.png\">\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2012-2021, All Rights Reserved\n# Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).\n# All rights reserved. Contact: Pasi Eronen <[email protected]>\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n#\n# * Neither the name of the Nokia Corporation and/or its\n# subsidiary(-ies) nor the names of its contributors may be used\n# to endorse or promote products derived from this software\n# without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport datetime\n\nimport debug # pyflakes:ignore\n\nfrom django import template\nfrom django.urls import reverse as urlreverse\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\n\nfrom ietf.ietfauth.utils import user_is_person, has_role\nfrom ietf.doc.models import BallotPositionDocEvent, IESG_BALLOT_ACTIVE_STATES\nfrom ietf.name.models import BallotPositionName\n\n\nregister = template.Library()\n\[email protected]\ndef showballoticon(doc):\n if doc.type_id == \"draft\":\n if doc.stream_id == 'ietf' and doc.get_state_slug(\"draft-iesg\") not in IESG_BALLOT_ACTIVE_STATES:\n return False\n elif doc.stream_id == 'irtf' and doc.get_state_slug(\"draft-stream-irtf\") not in ['irsgpoll']:\n return False\n elif doc.type_id == \"charter\":\n if doc.get_state_slug() not in (\"intrev\", \"extrev\", \"iesgrev\"):\n return False\n elif doc.type_id == \"conflrev\":\n if doc.get_state_slug() not in (\"iesgeval\",\"defer\"):\n return False\n elif doc.type_id == \"statchg\":\n if doc.get_state_slug() not in (\"iesgeval\",\"defer\", \"in-lc\"):\n return False\n\n return True\n\[email protected]_tag(takes_context=True)\ndef ballot_icon(context, doc):\n user = context.get(\"user\")\n\n if not doc:\n return \"\"\n\n if not showballoticon(doc):\n return \"\"\n\n ballot = doc.ballot if hasattr(doc, 'ballot') else doc.active_ballot()\n\n if not ballot:\n return \"\"\n\n def sort_key(t):\n _, pos = t\n if not pos:\n return (2, 0)\n elif pos.pos.blocking:\n return (0, pos.pos.order)\n else:\n return (1, pos.pos.order)\n\n positions = list(ballot.active_balloter_positions().items())\n positions.sort(key=sort_key)\n\n right_click_string = ''\n if has_role(user, \"Area Director\"):\n right_click_string = 'oncontextmenu=\"window.location.href=\\'%s\\';return false;\"' % urlreverse('ietf.doc.views_ballot.edit_position', kwargs=dict(name=doc.name, ballot_id=ballot.pk))\n\n my_blocking = False\n for i, (balloter, pos) in enumerate(positions):\n if user_is_person(user,balloter) and pos and pos.pos.blocking:\n my_blocking = True\n break\n\n typename = \"Unknown\"\n if ballot.ballot_type.slug=='irsg-approve':\n typename = \"IRSG\"\n else:\n typename = \"IESG\"\n\n res = ['<a %s href=\"%s\" data-bs-toggle=\"modal\" data-bs-target=\"#modal-%d\" aria-label=\"%s positions\" title=\"%s positions (click to show more)\" class=\"ballot-icon\"><table' % (\n right_click_string,\n urlreverse(\"ietf.doc.views_doc.ballot_popup\", kwargs=dict(name=doc.name, ballot_id=ballot.pk)),\n ballot.pk,\n typename,\n typename,)]\n if my_blocking:\n res.append(' class=\"is-blocking\" ')\n res.append('><tbody>')\n\n res.append(\"<tr>\")\n\n for i, (ad, pos) in enumerate(positions):\n # The IRSG has many more members than the IESG, so make the table wider\n if i > 0 and i % (5 if len(positions) <= 15 else 10) == 0:\n res.append(\"</tr><tr>\")\n\n c = \"position-%s\" % (pos.pos.slug if pos else \"norecord\")\n\n if user_is_person(user, ad):\n c += \" my\"\n\n 
res.append('<td class=\"%s\"></td>' % c)\n\n # add sufficient table calls to last row to avoid HTML validation warning\n while (i + 1) % 5 != 0:\n res.append('<td class=\"position-empty\"></td>')\n i = i + 1\n\n res.append(\"</tr></tbody></table></a>\")\n res.append('<div id=\"modal-%d\" class=\"modal fade\" tabindex=\"-1\" role=\"dialog\" aria-hidden=\"true\"><div class=\"modal-dialog modal-dialog-scrollable modal-xl\"><div class=\"modal-content\"></div></div></div>' % ballot.pk)\n\n return mark_safe(\"\".join(res))\n\[email protected]\ndef ballotposition(doc, user):\n if not showballoticon(doc) or not has_role(user, \"Area Director\"):\n return None\n\n ballot = doc.active_ballot()\n if not ballot:\n return None\n\n changed_pos = doc.latest_event(BallotPositionDocEvent, type=\"changed_ballot_position\", balloter__user=user, ballot=ballot)\n if changed_pos:\n pos = changed_pos.pos\n else:\n pos = BallotPositionName.objects.get(slug=\"norecord\")\n return pos\n\n\[email protected]\ndef state_age_colored(doc):\n if doc.type_id == \"draft\":\n if not doc.get_state_slug() in [\"active\", \"rfc\"]:\n # Don't show anything for expired/withdrawn/replaced drafts\n return \"\"\n iesg_state = doc.get_state_slug(\"draft-iesg\")\n if not iesg_state:\n return \"\"\n\n if iesg_state in [\"dead\", \"watching\", \"pub\", \"idexists\"]:\n return \"\"\n try:\n state_datetime = (\n doc.docevent_set.filter(\n Q(type=\"started_iesg_process\")\n | Q(type=\"changed_state\", statedocevent__state_type=\"draft-iesg\")\n )\n .order_by(\"-time\")[0]\n .time\n )\n except IndexError:\n state_datetime = datetime.datetime(1990, 1, 1, tzinfo=datetime.timezone.utc)\n days = (timezone.now() - state_datetime).days\n # loosely based on\n # https://trac.ietf.org/trac/iesg/wiki/PublishPath\n if iesg_state == \"lc\":\n goal1 = 30\n goal2 = 30\n elif iesg_state == \"rfcqueue\":\n goal1 = 60\n goal2 = 120\n elif iesg_state in [\"lc-req\", \"ann\"]:\n goal1 = 4\n goal2 = 7\n elif \"need-rev\" in [x.slug for x in doc.tags.all()]:\n goal1 = 14\n goal2 = 28\n elif iesg_state == \"pub-req\":\n goal1 = 7\n goal2 = 14\n elif iesg_state == \"ad-eval\":\n goal1 = 14\n goal2 = 28\n else:\n goal1 = 14\n goal2 = 28\n if days > goal2:\n class_name = \"bg-danger\"\n elif days > goal1:\n class_name = \"bg-warning\"\n else:\n # don't show a badge when things are in the green; clutters display\n # class_name = \"text-success\"\n return \"\"\n if days > goal1:\n title = ' title=\"In state for %d day%s; goal is <%d days.\"' % (\n days,\n \"s\" if days != 1 else \"\",\n goal1,\n )\n else:\n title = \"\"\n return mark_safe(\n '<span class=\"badge rounded-pill %s\" %s><i class=\"bi bi-clock-fill\"></i> %d</span>'\n % (class_name, title, days)\n )\n else:\n return \"\"\n\n\[email protected]\ndef auth48_alert_badge(doc):\n \"\"\"Return alert badge, if any, for a document\"\"\"\n if doc.type_id != 'draft':\n return ''\n\n iesg_state = doc.get_state_slug('draft-iesg')\n if iesg_state != 'rfcqueue':\n return ''\n\n rfced_state = doc.get_state_slug('draft-rfceditor')\n if rfced_state == 'auth48':\n return mark_safe('<span class=\"badge rounded-pill bg-info\" title=\"AUTH48\">AUTH48</span>')\n\n return ''\n", "path": "ietf/doc/templatetags/ballot_icon.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2012-2021, All Rights Reserved\n# Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).\n# All rights reserved. 
Contact: Pasi Eronen <[email protected]>\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n#\n# * Neither the name of the Nokia Corporation and/or its\n# subsidiary(-ies) nor the names of its contributors may be used\n# to endorse or promote products derived from this software\n# without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport datetime\n\nimport debug # pyflakes:ignore\n\nfrom django import template\nfrom django.urls import reverse as urlreverse\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\n\nfrom ietf.ietfauth.utils import user_is_person, has_role\nfrom ietf.doc.models import BallotPositionDocEvent, IESG_BALLOT_ACTIVE_STATES\nfrom ietf.name.models import BallotPositionName\n\n\nregister = template.Library()\n\[email protected]\ndef showballoticon(doc):\n if doc.type_id == \"draft\":\n if doc.stream_id == 'ietf' and doc.get_state_slug(\"draft-iesg\") not in IESG_BALLOT_ACTIVE_STATES:\n return False\n elif doc.stream_id == 'irtf' and doc.get_state_slug(\"draft-stream-irtf\") not in ['irsgpoll']:\n return False\n elif doc.type_id == \"charter\":\n if doc.get_state_slug() not in (\"intrev\", \"extrev\", \"iesgrev\"):\n return False\n elif doc.type_id == \"conflrev\":\n if doc.get_state_slug() not in (\"iesgeval\",\"defer\"):\n return False\n elif doc.type_id == \"statchg\":\n if doc.get_state_slug() not in (\"iesgeval\",\"defer\", \"in-lc\"):\n return False\n\n return True\n\[email protected]_tag(takes_context=True)\ndef ballot_icon(context, doc):\n user = context.get(\"user\")\n\n if not doc:\n return \"\"\n\n if not showballoticon(doc):\n return \"\"\n\n ballot = doc.ballot if hasattr(doc, 'ballot') else doc.active_ballot()\n\n if not ballot:\n return \"\"\n\n def sort_key(t):\n _, pos = t\n if not pos:\n return (2, 0)\n elif pos.pos.blocking:\n return (0, pos.pos.order)\n else:\n return (1, pos.pos.order)\n\n positions = list(ballot.active_balloter_positions().items())\n positions.sort(key=sort_key)\n\n right_click_string = ''\n if has_role(user, \"Area Director\"):\n right_click_string = 'oncontextmenu=\"window.location.href=\\'%s\\';return false;\"' % urlreverse('ietf.doc.views_ballot.edit_position', kwargs=dict(name=doc.name, ballot_id=ballot.pk))\n\n my_blocking = False\n for i, 
(balloter, pos) in enumerate(positions):\n if user_is_person(user,balloter) and pos and pos.pos.blocking:\n my_blocking = True\n break\n\n typename = \"Unknown\"\n if ballot.ballot_type.slug=='irsg-approve':\n typename = \"IRSG\"\n else:\n typename = \"IESG\"\n\n res = ['<a %s href=\"%s\" data-bs-toggle=\"modal\" data-bs-target=\"#modal-%d\" aria-label=\"%s positions\" title=\"%s positions (click to show more)\" class=\"ballot-icon\"><table' % (\n right_click_string,\n urlreverse(\"ietf.doc.views_doc.ballot_popup\", kwargs=dict(name=doc.name, ballot_id=ballot.pk)),\n ballot.pk,\n typename,\n typename,)]\n if my_blocking:\n res.append(' class=\"is-blocking\" ')\n res.append('><tbody>')\n\n res.append(\"<tr>\")\n\n for i, (ad, pos) in enumerate(positions):\n # The IRSG has many more members than the IESG, so make the table wider\n if i > 0 and i % (5 if len(positions) <= 15 else 10) == 0:\n res.append(\"</tr><tr>\")\n\n c = \"position-%s\" % (pos.pos.slug if pos else \"norecord\")\n\n if user_is_person(user, ad):\n c += \" my\"\n\n res.append('<td class=\"%s\"></td>' % c)\n\n # add sufficient table calls to last row to avoid HTML validation warning\n while (i + 1) % 5 != 0:\n res.append('<td class=\"position-empty\"></td>')\n i = i + 1\n\n res.append(\"</tr></tbody></table></a>\")\n res.append('<div id=\"modal-%d\" class=\"modal fade\" tabindex=\"-1\" role=\"dialog\" aria-hidden=\"true\"><div class=\"modal-dialog modal-dialog-scrollable modal-xl modal-fullscreen-lg-down\"><div class=\"modal-content\"></div></div></div>' % ballot.pk)\n\n return mark_safe(\"\".join(res))\n\[email protected]\ndef ballotposition(doc, user):\n if not showballoticon(doc) or not has_role(user, \"Area Director\"):\n return None\n\n ballot = doc.active_ballot()\n if not ballot:\n return None\n\n changed_pos = doc.latest_event(BallotPositionDocEvent, type=\"changed_ballot_position\", balloter__user=user, ballot=ballot)\n if changed_pos:\n pos = changed_pos.pos\n else:\n pos = BallotPositionName.objects.get(slug=\"norecord\")\n return pos\n\n\[email protected]\ndef state_age_colored(doc):\n if doc.type_id == \"draft\":\n if not doc.get_state_slug() in [\"active\", \"rfc\"]:\n # Don't show anything for expired/withdrawn/replaced drafts\n return \"\"\n iesg_state = doc.get_state_slug(\"draft-iesg\")\n if not iesg_state:\n return \"\"\n\n if iesg_state in [\"dead\", \"watching\", \"pub\", \"idexists\"]:\n return \"\"\n try:\n state_datetime = (\n doc.docevent_set.filter(\n Q(type=\"started_iesg_process\")\n | Q(type=\"changed_state\", statedocevent__state_type=\"draft-iesg\")\n )\n .order_by(\"-time\")[0]\n .time\n )\n except IndexError:\n state_datetime = datetime.datetime(1990, 1, 1, tzinfo=datetime.timezone.utc)\n days = (timezone.now() - state_datetime).days\n # loosely based on\n # https://trac.ietf.org/trac/iesg/wiki/PublishPath\n if iesg_state == \"lc\":\n goal1 = 30\n goal2 = 30\n elif iesg_state == \"rfcqueue\":\n goal1 = 60\n goal2 = 120\n elif iesg_state in [\"lc-req\", \"ann\"]:\n goal1 = 4\n goal2 = 7\n elif \"need-rev\" in [x.slug for x in doc.tags.all()]:\n goal1 = 14\n goal2 = 28\n elif iesg_state == \"pub-req\":\n goal1 = 7\n goal2 = 14\n elif iesg_state == \"ad-eval\":\n goal1 = 14\n goal2 = 28\n else:\n goal1 = 14\n goal2 = 28\n if days > goal2:\n class_name = \"bg-danger\"\n elif days > goal1:\n class_name = \"bg-warning\"\n else:\n # don't show a badge when things are in the green; clutters display\n # class_name = \"text-success\"\n return \"\"\n if days > goal1:\n title = ' title=\"In state for %d 
day%s; goal is <%d days.\"' % (\n days,\n \"s\" if days != 1 else \"\",\n goal1,\n )\n else:\n title = \"\"\n return mark_safe(\n '<span class=\"badge rounded-pill %s\" %s><i class=\"bi bi-clock-fill\"></i> %d</span>'\n % (class_name, title, days)\n )\n else:\n return \"\"\n\n\[email protected]\ndef auth48_alert_badge(doc):\n \"\"\"Return alert badge, if any, for a document\"\"\"\n if doc.type_id != 'draft':\n return ''\n\n iesg_state = doc.get_state_slug('draft-iesg')\n if iesg_state != 'rfcqueue':\n return ''\n\n rfced_state = doc.get_state_slug('draft-rfceditor')\n if rfced_state == 'auth48':\n return mark_safe('<span class=\"badge rounded-pill bg-info\" title=\"AUTH48\">AUTH48</span>')\n\n return ''\n", "path": "ietf/doc/templatetags/ballot_icon.py"}]} | 3,281 | 229 |
gh_patches_debug_12227 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2041 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add PyInstaller indicator to `mitmproxy --version`
We currently cannot distinguish if users use our precompiled binaries or if they installed mitmproxy using pip/brew/$packagemanager. It would be very useful to output if we are running the precompiled PyInstaller binary.
https://pythonhosted.org/PyInstaller/runtime-information.html
--- END ISSUE ---
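As a sketch of the mechanism the issue points to (an illustration only, not the project's final patch — the variable name and printed text here are made up): PyInstaller's bootloader sets a `frozen` attribute on `sys` inside the bundled executable, so the version output can simply branch on it.
```python
import sys

# PyInstaller's bootloader sets sys.frozen on the bundled executable;
# a plain interpreter run (pip/brew install) has no such attribute.
bin_indicator = "Precompiled Binary" if getattr(sys, "frozen", False) else ""
print(f"mitmproxy (runtime: {bin_indicator or 'source/pip install'})")
```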
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/utils/debug.py`
Content:
```
1 import gc
2 import os
3 import sys
4 import threading
5 import signal
6 import platform
7 import traceback
8 import subprocess
9
10 from mitmproxy import version
11 from mitmproxy import utils
12
13 from OpenSSL import SSL
14
15
16 def dump_system_info():
17 git_describe = 'release version'
18 with utils.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))):
19 try:
20 c = ['git', 'describe', '--tags', '--long']
21 git_describe = subprocess.check_output(c, stderr=subprocess.STDOUT)
22 last_tag, tag_dist, commit = git_describe.decode().strip().rsplit("-", 2)
23
24 if last_tag.startswith('v'):
25 # remove the 'v' prefix
26 last_tag = last_tag[1:]
27 if commit.startswith('g'):
28 # remove the 'g' prefix added by recent git versions
29 commit = commit[1:]
30
31 # build the same version specifier as used for snapshots by rtool
32 git_describe = "{version}dev{tag:04}-0x{commit}".format(
33 version=last_tag,
34 tag=int(tag_dist),
35 commit=commit,
36 )
37 except:
38 pass
39
40 data = [
41 "Mitmproxy version: {} ({})".format(version.VERSION, git_describe),
42 "Python version: {}".format(platform.python_version()),
43 "Platform: {}".format(platform.platform()),
44 "SSL version: {}".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),
45 ]
46 d = platform.linux_distribution()
47 t = "Linux distro: %s %s %s" % d
48 if d[0]: # pragma: no cover
49 data.append(t)
50
51 d = platform.mac_ver()
52 t = "Mac version: %s %s %s" % d
53 if d[0]: # pragma: no cover
54 data.append(t)
55
56 d = platform.win32_ver()
57 t = "Windows version: %s %s %s %s" % d
58 if d[0]: # pragma: no cover
59 data.append(t)
60
61 return "\n".join(data)
62
63
64 def dump_info(signal=None, frame=None, file=sys.stdout, testing=False): # pragma: no cover
65 print("****************************************************", file=file)
66 print("Summary", file=file)
67 print("=======", file=file)
68
69 try:
70 import psutil
71 except:
72 print("(psutil not installed, skipping some debug info)", file=file)
73 else:
74 p = psutil.Process()
75 print("num threads: ", p.num_threads(), file=file)
76 if hasattr(p, "num_fds"):
77 print("num fds: ", p.num_fds(), file=file)
78 print("memory: ", p.memory_info(), file=file)
79
80 print(file=file)
81 print("Files", file=file)
82 print("=====", file=file)
83 for i in p.open_files():
84 print(i, file=file)
85
86 print(file=file)
87 print("Connections", file=file)
88 print("===========", file=file)
89 for i in p.connections():
90 print(i, file=file)
91
92 print(file=file)
93 print("Threads", file=file)
94 print("=======", file=file)
95 bthreads = []
96 for i in threading.enumerate():
97 if hasattr(i, "_threadinfo"):
98 bthreads.append(i)
99 else:
100 print(i.name, file=file)
101 bthreads.sort(key=lambda x: x._thread_started)
102 for i in bthreads:
103 print(i._threadinfo(), file=file)
104
105 print(file=file)
106 print("Memory", file=file)
107 print("=======", file=file)
108 gc.collect()
109 d = {}
110 for i in gc.get_objects():
111 t = str(type(i))
112 if "mitmproxy" in t:
113 d[t] = d.setdefault(t, 0) + 1
114 itms = list(d.items())
115 itms.sort(key=lambda x: x[1])
116 for i in itms[-20:]:
117 print(i[1], i[0], file=file)
118 print("****************************************************", file=file)
119
120 if not testing:
121 sys.exit(1)
122
123
124 def dump_stacks(signal=None, frame=None, file=sys.stdout, testing=False):
125 id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
126 code = []
127 for threadId, stack in sys._current_frames().items():
128 code.append(
129 "\n# Thread: %s(%d)" % (
130 id2name.get(threadId, ""), threadId
131 )
132 )
133 for filename, lineno, name, line in traceback.extract_stack(stack):
134 code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
135 if line:
136 code.append(" %s" % (line.strip()))
137 print("\n".join(code), file=file)
138 if not testing: # pragma: no cover
139 sys.exit(1)
140
141
142 def register_info_dumpers():
143 if os.name != "nt": # pragma: windows no cover
144 signal.signal(signal.SIGUSR1, dump_info)
145 signal.signal(signal.SIGUSR2, dump_stacks)
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/utils/debug.py b/mitmproxy/utils/debug.py
--- a/mitmproxy/utils/debug.py
+++ b/mitmproxy/utils/debug.py
@@ -37,8 +37,12 @@
except:
pass
+ bin_indicator = "" # PyInstaller builds indicator, if using precompiled binary
+ if getattr(sys, 'frozen', False):
+ bin_indicator = "Precompiled Binary"
+
data = [
- "Mitmproxy version: {} ({})".format(version.VERSION, git_describe),
+ "Mitmproxy version: {} ({}) {}".format(version.VERSION, git_describe, bin_indicator),
"Python version: {}".format(platform.python_version()),
"Platform: {}".format(platform.platform()),
"SSL version: {}".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),
| {"golden_diff": "diff --git a/mitmproxy/utils/debug.py b/mitmproxy/utils/debug.py\n--- a/mitmproxy/utils/debug.py\n+++ b/mitmproxy/utils/debug.py\n@@ -37,8 +37,12 @@\n except:\n pass\n \n+ bin_indicator = \"\" # PyInstaller builds indicator, if using precompiled binary\n+ if getattr(sys, 'frozen', False):\n+ bin_indicator = \"Precompiled Binary\"\n+\n data = [\n- \"Mitmproxy version: {} ({})\".format(version.VERSION, git_describe),\n+ \"Mitmproxy version: {} ({}) {}\".format(version.VERSION, git_describe, bin_indicator),\n \"Python version: {}\".format(platform.python_version()),\n \"Platform: {}\".format(platform.platform()),\n \"SSL version: {}\".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),\n", "issue": "Add PyInstaller indicator to `mitmproxy --version`\nWe currently cannot distinguish if users use our precompiled binaries or if they installed mitmproxy using pip/brew/$packagemanager. It would be very useful to output if we are running the precompiled PyInstaller binary. \r\n\r\nhttps://pythonhosted.org/PyInstaller/runtime-information.html\n", "before_files": [{"content": "import gc\nimport os\nimport sys\nimport threading\nimport signal\nimport platform\nimport traceback\nimport subprocess\n\nfrom mitmproxy import version\nfrom mitmproxy import utils\n\nfrom OpenSSL import SSL\n\n\ndef dump_system_info():\n git_describe = 'release version'\n with utils.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))):\n try:\n c = ['git', 'describe', '--tags', '--long']\n git_describe = subprocess.check_output(c, stderr=subprocess.STDOUT)\n last_tag, tag_dist, commit = git_describe.decode().strip().rsplit(\"-\", 2)\n\n if last_tag.startswith('v'):\n # remove the 'v' prefix\n last_tag = last_tag[1:]\n if commit.startswith('g'):\n # remove the 'g' prefix added by recent git versions\n commit = commit[1:]\n\n # build the same version specifier as used for snapshots by rtool\n git_describe = \"{version}dev{tag:04}-0x{commit}\".format(\n version=last_tag,\n tag=int(tag_dist),\n commit=commit,\n )\n except:\n pass\n\n data = [\n \"Mitmproxy version: {} ({})\".format(version.VERSION, git_describe),\n \"Python version: {}\".format(platform.python_version()),\n \"Platform: {}\".format(platform.platform()),\n \"SSL version: {}\".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),\n ]\n d = platform.linux_distribution()\n t = \"Linux distro: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.mac_ver()\n t = \"Mac version: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.win32_ver()\n t = \"Windows version: %s %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n return \"\\n\".join(data)\n\n\ndef dump_info(signal=None, frame=None, file=sys.stdout, testing=False): # pragma: no cover\n print(\"****************************************************\", file=file)\n print(\"Summary\", file=file)\n print(\"=======\", file=file)\n\n try:\n import psutil\n except:\n print(\"(psutil not installed, skipping some debug info)\", file=file)\n else:\n p = psutil.Process()\n print(\"num threads: \", p.num_threads(), file=file)\n if hasattr(p, \"num_fds\"):\n print(\"num fds: \", p.num_fds(), file=file)\n print(\"memory: \", p.memory_info(), file=file)\n\n print(file=file)\n print(\"Files\", file=file)\n print(\"=====\", file=file)\n for i in p.open_files():\n print(i, file=file)\n\n print(file=file)\n print(\"Connections\", file=file)\n print(\"===========\", file=file)\n for i in p.connections():\n print(i, 
file=file)\n\n print(file=file)\n print(\"Threads\", file=file)\n print(\"=======\", file=file)\n bthreads = []\n for i in threading.enumerate():\n if hasattr(i, \"_threadinfo\"):\n bthreads.append(i)\n else:\n print(i.name, file=file)\n bthreads.sort(key=lambda x: x._thread_started)\n for i in bthreads:\n print(i._threadinfo(), file=file)\n\n print(file=file)\n print(\"Memory\", file=file)\n print(\"=======\", file=file)\n gc.collect()\n d = {}\n for i in gc.get_objects():\n t = str(type(i))\n if \"mitmproxy\" in t:\n d[t] = d.setdefault(t, 0) + 1\n itms = list(d.items())\n itms.sort(key=lambda x: x[1])\n for i in itms[-20:]:\n print(i[1], i[0], file=file)\n print(\"****************************************************\", file=file)\n\n if not testing:\n sys.exit(1)\n\n\ndef dump_stacks(signal=None, frame=None, file=sys.stdout, testing=False):\n id2name = dict([(th.ident, th.name) for th in threading.enumerate()])\n code = []\n for threadId, stack in sys._current_frames().items():\n code.append(\n \"\\n# Thread: %s(%d)\" % (\n id2name.get(threadId, \"\"), threadId\n )\n )\n for filename, lineno, name, line in traceback.extract_stack(stack):\n code.append('File: \"%s\", line %d, in %s' % (filename, lineno, name))\n if line:\n code.append(\" %s\" % (line.strip()))\n print(\"\\n\".join(code), file=file)\n if not testing: # pragma: no cover\n sys.exit(1)\n\n\ndef register_info_dumpers():\n if os.name != \"nt\": # pragma: windows no cover\n signal.signal(signal.SIGUSR1, dump_info)\n signal.signal(signal.SIGUSR2, dump_stacks)\n", "path": "mitmproxy/utils/debug.py"}], "after_files": [{"content": "import gc\nimport os\nimport sys\nimport threading\nimport signal\nimport platform\nimport traceback\nimport subprocess\n\nfrom mitmproxy import version\nfrom mitmproxy import utils\n\nfrom OpenSSL import SSL\n\n\ndef dump_system_info():\n git_describe = 'release version'\n with utils.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))):\n try:\n c = ['git', 'describe', '--tags', '--long']\n git_describe = subprocess.check_output(c, stderr=subprocess.STDOUT)\n last_tag, tag_dist, commit = git_describe.decode().strip().rsplit(\"-\", 2)\n\n if last_tag.startswith('v'):\n # remove the 'v' prefix\n last_tag = last_tag[1:]\n if commit.startswith('g'):\n # remove the 'g' prefix added by recent git versions\n commit = commit[1:]\n\n # build the same version specifier as used for snapshots by rtool\n git_describe = \"{version}dev{tag:04}-0x{commit}\".format(\n version=last_tag,\n tag=int(tag_dist),\n commit=commit,\n )\n except:\n pass\n\n bin_indicator = \"\" # PyInstaller builds indicator, if using precompiled binary\n if getattr(sys, 'frozen', False):\n bin_indicator = \"Precompiled Binary\"\n\n data = [\n \"Mitmproxy version: {} ({}) {}\".format(version.VERSION, git_describe, bin_indicator),\n \"Python version: {}\".format(platform.python_version()),\n \"Platform: {}\".format(platform.platform()),\n \"SSL version: {}\".format(SSL.SSLeay_version(SSL.SSLEAY_VERSION).decode()),\n ]\n d = platform.linux_distribution()\n t = \"Linux distro: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.mac_ver()\n t = \"Mac version: %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n d = platform.win32_ver()\n t = \"Windows version: %s %s %s %s\" % d\n if d[0]: # pragma: no cover\n data.append(t)\n\n return \"\\n\".join(data)\n\n\ndef dump_info(signal=None, frame=None, file=sys.stdout, testing=False): # pragma: no cover\n 
print(\"****************************************************\", file=file)\n print(\"Summary\", file=file)\n print(\"=======\", file=file)\n\n try:\n import psutil\n except:\n print(\"(psutil not installed, skipping some debug info)\", file=file)\n else:\n p = psutil.Process()\n print(\"num threads: \", p.num_threads(), file=file)\n if hasattr(p, \"num_fds\"):\n print(\"num fds: \", p.num_fds(), file=file)\n print(\"memory: \", p.memory_info(), file=file)\n\n print(file=file)\n print(\"Files\", file=file)\n print(\"=====\", file=file)\n for i in p.open_files():\n print(i, file=file)\n\n print(file=file)\n print(\"Connections\", file=file)\n print(\"===========\", file=file)\n for i in p.connections():\n print(i, file=file)\n\n print(file=file)\n print(\"Threads\", file=file)\n print(\"=======\", file=file)\n bthreads = []\n for i in threading.enumerate():\n if hasattr(i, \"_threadinfo\"):\n bthreads.append(i)\n else:\n print(i.name, file=file)\n bthreads.sort(key=lambda x: x._thread_started)\n for i in bthreads:\n print(i._threadinfo(), file=file)\n\n print(file=file)\n print(\"Memory\", file=file)\n print(\"=======\", file=file)\n gc.collect()\n d = {}\n for i in gc.get_objects():\n t = str(type(i))\n if \"mitmproxy\" in t:\n d[t] = d.setdefault(t, 0) + 1\n itms = list(d.items())\n itms.sort(key=lambda x: x[1])\n for i in itms[-20:]:\n print(i[1], i[0], file=file)\n print(\"****************************************************\", file=file)\n\n if not testing:\n sys.exit(1)\n\n\ndef dump_stacks(signal=None, frame=None, file=sys.stdout, testing=False):\n id2name = dict([(th.ident, th.name) for th in threading.enumerate()])\n code = []\n for threadId, stack in sys._current_frames().items():\n code.append(\n \"\\n# Thread: %s(%d)\" % (\n id2name.get(threadId, \"\"), threadId\n )\n )\n for filename, lineno, name, line in traceback.extract_stack(stack):\n code.append('File: \"%s\", line %d, in %s' % (filename, lineno, name))\n if line:\n code.append(\" %s\" % (line.strip()))\n print(\"\\n\".join(code), file=file)\n if not testing: # pragma: no cover\n sys.exit(1)\n\n\ndef register_info_dumpers():\n if os.name != \"nt\": # pragma: windows no cover\n signal.signal(signal.SIGUSR1, dump_info)\n signal.signal(signal.SIGUSR2, dump_stacks)\n", "path": "mitmproxy/utils/debug.py"}]} | 1,788 | 188 |
gh_patches_debug_28571 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-5219 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Poundland spider address parsing issue
The addr:street_address field returned by the poundland.py spider is sometimes broken, giving results such as:
`"addr:street_address": "5, 6, -, 5, 8, , T, a, f, f, , S, t, r, e, e, t"`
The problem is caused by line 20 in the code:
` item["street_address"] = ", ".join(filter(None, store["address"].get("line")))`
where it is assumed that "line" from the scraped JSON will be an array of values. But sometimes "line" is just a single string. When this happens, the string itself is split into individual characters, giving results like the one above.
I guess that before applying that code we should test whether "line" is a single string. I don't think I know enough python to know the best way to fix this, and a quick Google suggests there may be a difference between Python 2 and Python 3 (which would make it difficult for me to test any solutions).
--- END ISSUE ---
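As a rough sketch of the guard the reporter is describing (illustrative only — the repository's actual fix, shown in the diff further down, routes the value through a shared `clean_address` helper instead):
```python
def join_address_lines(line):
    """Join the scraped "line" field whether it is a list of parts or a bare string."""
    if isinstance(line, str):  # a single string: use it as-is
        return line.strip()
    return ", ".join(filter(None, line or []))  # a list: drop empty entries, then join
```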
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/poundland.py`
Content:
```
1 import scrapy
2
3 from locations.dict_parser import DictParser
4 from locations.hours import OpeningHours
5
6
7 class PoundlandSpider(scrapy.Spider):
8 name = "poundland"
9 item_attributes = {"brand": "Poundland", "brand_wikidata": "Q1434528"}
10 start_urls = [
11 "https://www.poundland.co.uk/rest/poundland/V1/locator/?searchCriteria[scope]=store-locator&searchCriteria[current_page]=1&searchCriteria[page_size]=10000"
12 ]
13 custom_settings = {"DEFAULT_REQUEST_HEADERS": {"Accept": "application/json"}}
14
15 def parse(self, response):
16 # We may have to handle pagination at some point
17 for store in response.json()["locations"]:
18 item = DictParser.parse(store)
19
20 item["street_address"] = ", ".join(filter(None, store["address"].get("line")))
21
22 # "store_id" seems to be a better ref than "id"
23 item["ref"] = store.get("store_id")
24 item["website"] = "https://www.poundland.co.uk/store-finder/store_page/view/id/" + item["ref"] + "/"
25
26 oh = OpeningHours()
27 for rule in store["opening_hours"]:
28 if rule["hours"] == "Closed":
29 continue
30 open_time, close_time = rule["hours"].split(" - ")
31 oh.add_range(rule["day"][:2], open_time, close_time)
32
33 item["opening_hours"] = oh.as_opening_hours()
34
35 item["extras"] = {}
36 item["extras"]["atm"] = "yes" if store.get("atm") == "1" else "no"
37 item["extras"]["icestore"] = "yes" if store.get("icestore") == "1" else "no"
38
39 if store["is_pep_co_only"] == "1":
40 item["brand"] = "Pep&Co"
41 item["brand_wikidata"] = "Q24908166"
42 else:
43 if store.get("pepshopinshop") == "1":
44 # Pep and Poundland at this location
45 pep = item.copy()
46
47 pep["ref"] = pep["ref"] + "_pep"
48
49 pep["brand"] = "Pep&Co"
50 pep["brand_wikidata"] = "Q24908166"
51
52 pep["located_in"] = self.item_attributes["brand"]
53 pep["located_in_wikidata"] = self.item_attributes["brand_wikidata"]
54
55 yield pep
56
57 yield item
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/poundland.py b/locations/spiders/poundland.py
--- a/locations/spiders/poundland.py
+++ b/locations/spiders/poundland.py
@@ -1,7 +1,9 @@
import scrapy
+from locations.categories import Extras, apply_yes_no
from locations.dict_parser import DictParser
from locations.hours import OpeningHours
+from locations.spiders.vapestore_gb import clean_address
class PoundlandSpider(scrapy.Spider):
@@ -17,7 +19,7 @@
for store in response.json()["locations"]:
item = DictParser.parse(store)
- item["street_address"] = ", ".join(filter(None, store["address"].get("line")))
+ item["street_address"] = clean_address(store["address"].get("line"))
# "store_id" seems to be a better ref than "id"
item["ref"] = store.get("store_id")
@@ -30,10 +32,9 @@
open_time, close_time = rule["hours"].split(" - ")
oh.add_range(rule["day"][:2], open_time, close_time)
- item["opening_hours"] = oh.as_opening_hours()
+ item["opening_hours"] = oh
- item["extras"] = {}
- item["extras"]["atm"] = "yes" if store.get("atm") == "1" else "no"
+ apply_yes_no(Extras.ATM, item, store.get("atm") == "1")
item["extras"]["icestore"] = "yes" if store.get("icestore") == "1" else "no"
if store["is_pep_co_only"] == "1":
| {"golden_diff": "diff --git a/locations/spiders/poundland.py b/locations/spiders/poundland.py\n--- a/locations/spiders/poundland.py\n+++ b/locations/spiders/poundland.py\n@@ -1,7 +1,9 @@\n import scrapy\n \n+from locations.categories import Extras, apply_yes_no\n from locations.dict_parser import DictParser\n from locations.hours import OpeningHours\n+from locations.spiders.vapestore_gb import clean_address\n \n \n class PoundlandSpider(scrapy.Spider):\n@@ -17,7 +19,7 @@\n for store in response.json()[\"locations\"]:\n item = DictParser.parse(store)\n \n- item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))\n+ item[\"street_address\"] = clean_address(store[\"address\"].get(\"line\"))\n \n # \"store_id\" seems to be a better ref than \"id\"\n item[\"ref\"] = store.get(\"store_id\")\n@@ -30,10 +32,9 @@\n open_time, close_time = rule[\"hours\"].split(\" - \")\n oh.add_range(rule[\"day\"][:2], open_time, close_time)\n \n- item[\"opening_hours\"] = oh.as_opening_hours()\n+ item[\"opening_hours\"] = oh\n \n- item[\"extras\"] = {}\n- item[\"extras\"][\"atm\"] = \"yes\" if store.get(\"atm\") == \"1\" else \"no\"\n+ apply_yes_no(Extras.ATM, item, store.get(\"atm\") == \"1\")\n item[\"extras\"][\"icestore\"] = \"yes\" if store.get(\"icestore\") == \"1\" else \"no\"\n \n if store[\"is_pep_co_only\"] == \"1\":\n", "issue": "Poundland spider address parsing issue\nThe addr:street_address field returned by the poundland.py spider is sometimes broken, giving results such as:\r\n`\"addr:street_address\": \"5, 6, -, 5, 8, , T, a, f, f, , S, t, r, e, e, t\"`\r\nThe problem is caused by line 20 in the code:\r\n` item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))`\r\nwhere is is assumed that \"line\" from the scraped JSON will be an array of values. But it is sometimes \"line\" is just a single string. When this happens, the string itself is split into individual characters, giving results like the one above.\r\n\r\nI guess that before applying that code we should test whether \"line\" is a single string. 
I don't think I know enough python to know the best way to fix this, and a quick Google suggests there may be a difference between Python 2 and Python 3 (which would make it difficult for me to test any solutions).\n", "before_files": [{"content": "import scrapy\n\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours\n\n\nclass PoundlandSpider(scrapy.Spider):\n name = \"poundland\"\n item_attributes = {\"brand\": \"Poundland\", \"brand_wikidata\": \"Q1434528\"}\n start_urls = [\n \"https://www.poundland.co.uk/rest/poundland/V1/locator/?searchCriteria[scope]=store-locator&searchCriteria[current_page]=1&searchCriteria[page_size]=10000\"\n ]\n custom_settings = {\"DEFAULT_REQUEST_HEADERS\": {\"Accept\": \"application/json\"}}\n\n def parse(self, response):\n # We may have to handle pagination at some point\n for store in response.json()[\"locations\"]:\n item = DictParser.parse(store)\n\n item[\"street_address\"] = \", \".join(filter(None, store[\"address\"].get(\"line\")))\n\n # \"store_id\" seems to be a better ref than \"id\"\n item[\"ref\"] = store.get(\"store_id\")\n item[\"website\"] = \"https://www.poundland.co.uk/store-finder/store_page/view/id/\" + item[\"ref\"] + \"/\"\n\n oh = OpeningHours()\n for rule in store[\"opening_hours\"]:\n if rule[\"hours\"] == \"Closed\":\n continue\n open_time, close_time = rule[\"hours\"].split(\" - \")\n oh.add_range(rule[\"day\"][:2], open_time, close_time)\n\n item[\"opening_hours\"] = oh.as_opening_hours()\n\n item[\"extras\"] = {}\n item[\"extras\"][\"atm\"] = \"yes\" if store.get(\"atm\") == \"1\" else \"no\"\n item[\"extras\"][\"icestore\"] = \"yes\" if store.get(\"icestore\") == \"1\" else \"no\"\n\n if store[\"is_pep_co_only\"] == \"1\":\n item[\"brand\"] = \"Pep&Co\"\n item[\"brand_wikidata\"] = \"Q24908166\"\n else:\n if store.get(\"pepshopinshop\") == \"1\":\n # Pep and Poundland at this location\n pep = item.copy()\n\n pep[\"ref\"] = pep[\"ref\"] + \"_pep\"\n\n pep[\"brand\"] = \"Pep&Co\"\n pep[\"brand_wikidata\"] = \"Q24908166\"\n\n pep[\"located_in\"] = self.item_attributes[\"brand\"]\n pep[\"located_in_wikidata\"] = self.item_attributes[\"brand_wikidata\"]\n\n yield pep\n\n yield item\n", "path": "locations/spiders/poundland.py"}], "after_files": [{"content": "import scrapy\n\nfrom locations.categories import Extras, apply_yes_no\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours\nfrom locations.spiders.vapestore_gb import clean_address\n\n\nclass PoundlandSpider(scrapy.Spider):\n name = \"poundland\"\n item_attributes = {\"brand\": \"Poundland\", \"brand_wikidata\": \"Q1434528\"}\n start_urls = [\n \"https://www.poundland.co.uk/rest/poundland/V1/locator/?searchCriteria[scope]=store-locator&searchCriteria[current_page]=1&searchCriteria[page_size]=10000\"\n ]\n custom_settings = {\"DEFAULT_REQUEST_HEADERS\": {\"Accept\": \"application/json\"}}\n\n def parse(self, response):\n # We may have to handle pagination at some point\n for store in response.json()[\"locations\"]:\n item = DictParser.parse(store)\n\n item[\"street_address\"] = clean_address(store[\"address\"].get(\"line\"))\n\n # \"store_id\" seems to be a better ref than \"id\"\n item[\"ref\"] = store.get(\"store_id\")\n item[\"website\"] = \"https://www.poundland.co.uk/store-finder/store_page/view/id/\" + item[\"ref\"] + \"/\"\n\n oh = OpeningHours()\n for rule in store[\"opening_hours\"]:\n if rule[\"hours\"] == \"Closed\":\n continue\n open_time, close_time = rule[\"hours\"].split(\" - \")\n 
oh.add_range(rule[\"day\"][:2], open_time, close_time)\n\n item[\"opening_hours\"] = oh\n\n apply_yes_no(Extras.ATM, item, store.get(\"atm\") == \"1\")\n item[\"extras\"][\"icestore\"] = \"yes\" if store.get(\"icestore\") == \"1\" else \"no\"\n\n if store[\"is_pep_co_only\"] == \"1\":\n item[\"brand\"] = \"Pep&Co\"\n item[\"brand_wikidata\"] = \"Q24908166\"\n else:\n if store.get(\"pepshopinshop\") == \"1\":\n # Pep and Poundland at this location\n pep = item.copy()\n\n pep[\"ref\"] = pep[\"ref\"] + \"_pep\"\n\n pep[\"brand\"] = \"Pep&Co\"\n pep[\"brand_wikidata\"] = \"Q24908166\"\n\n pep[\"located_in\"] = self.item_attributes[\"brand\"]\n pep[\"located_in_wikidata\"] = self.item_attributes[\"brand_wikidata\"]\n\n yield pep\n\n yield item\n", "path": "locations/spiders/poundland.py"}]} | 1,164 | 378 |
gh_patches_debug_5530 | rasdani/github-patches | git_diff | urllib3__urllib3-2204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
is_connection_dropped checks against None but uses False as default value for getattr
I happened to read this line and the code looks fishy. I did not otherwise verify the potential bug.
See implementation of `is_connection_dropped(conn: socket.socket) -> bool`:
https://github.com/urllib3/urllib3/blob/287052a16a59bcaba5772387de36fa9a49eb8378/src/urllib3/util/connection.py#L19-L23
If there is no property `sock` on `conn`, then we will call `wait_for_read(False, timeout=0.0)`, which e.g. may end up putting the `False` into the iterable passed to `select`.
Since this seemed to never have caused problems, the `sock = getattr(conn, "sock", False)` can probably be replaced with just `sock = conn.sock`.
Alternatives would be to replace the default (last argument of `getattr`) of `False` with `None` or replace the `if sock is None` with `if not sock`.
--- END ISSUE ---
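A minimal sketch of the reporter's last alternative — keeping the `is None` check but making the `getattr` default agree with it, which is also what the accepted diff below does:
```python
from urllib3.util.wait import wait_for_read  # same helper the module already imports


def is_connection_dropped(conn) -> bool:
    # With None (not False) as the default, a missing .sock attribute is
    # treated like an already-closed connection instead of leaking False
    # into wait_for_read()/select().
    sock = getattr(conn, "sock", None)
    if sock is None:
        return True
    return wait_for_read(sock, timeout=0.0)  # readable here means dropped
```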
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/urllib3/util/connection.py`
Content:
```
1 import socket
2 from typing import List, Optional, Tuple, Union
3
4 from urllib3.exceptions import LocationParseError
5
6 from .wait import wait_for_read
7
8 SOCKET_GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT # type: ignore
9 SocketOptions = List[Tuple[int, int, Union[int, bytes]]]
10
11
12 def is_connection_dropped(conn: socket.socket) -> bool: # Platform-specific
13 """
14 Returns True if the connection is dropped and should be closed.
15
16 :param conn:
17 :class:`http.client.HTTPConnection` object.
18 """
19 sock = getattr(conn, "sock", False)
20 if sock is None: # Connection already closed (such as by httplib).
21 return True
22 # Returns True if readable, which here means it's been dropped
23 return wait_for_read(sock, timeout=0.0)
24
25
26 # This function is copied from socket.py in the Python 2.7 standard
27 # library test suite. Added to its signature is only `socket_options`.
28 # One additional modification is that we avoid binding to IPv6 servers
29 # discovered in DNS if the system doesn't have IPv6 functionality.
30 def create_connection(
31 address: Tuple[str, int],
32 timeout: Optional[float] = SOCKET_GLOBAL_DEFAULT_TIMEOUT,
33 source_address: Optional[Tuple[str, int]] = None,
34 socket_options: Optional[SocketOptions] = None,
35 ) -> socket.socket:
36 """Connect to *address* and return the socket object.
37
38 Convenience function. Connect to *address* (a 2-tuple ``(host,
39 port)``) and return the socket object. Passing the optional
40 *timeout* parameter will set the timeout on the socket instance
41 before attempting to connect. If no *timeout* is supplied, the
42 global default timeout setting returned by :func:`socket.getdefaulttimeout`
43 is used. If *source_address* is set it must be a tuple of (host, port)
44 for the socket to bind as a source address before making the connection.
45 An host of '' or port 0 tells the OS to use the default.
46 """
47
48 host, port = address
49 if host.startswith("["):
50 host = host.strip("[]")
51 err = None
52
53 # Using the value from allowed_gai_family() in the context of getaddrinfo lets
54 # us select whether to work with IPv4 DNS records, IPv6 records, or both.
55 # The original create_connection function always returns all records.
56 family = allowed_gai_family()
57
58 try:
59 host.encode("idna")
60 except UnicodeError:
61 raise LocationParseError(f"'{host}', label empty or too long") from None
62
63 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
64 af, socktype, proto, canonname, sa = res
65 sock = None
66 try:
67 sock = socket.socket(af, socktype, proto)
68
69 # If provided, set socket level options before connecting.
70 _set_socket_options(sock, socket_options)
71
72 if timeout is not SOCKET_GLOBAL_DEFAULT_TIMEOUT:
73 sock.settimeout(timeout)
74 if source_address:
75 sock.bind(source_address)
76 sock.connect(sa)
77 return sock
78
79 except OSError as e:
80 err = e
81 if sock is not None:
82 sock.close()
83 sock = None
84
85 if err is not None:
86 raise err
87
88 raise OSError("getaddrinfo returns an empty list")
89
90
91 def _set_socket_options(sock: socket.socket, options: Optional[SocketOptions]) -> None:
92 if options is None:
93 return
94
95 for opt in options:
96 sock.setsockopt(*opt)
97
98
99 def allowed_gai_family() -> socket.AddressFamily:
100 """This function is designed to work in the context of
101 getaddrinfo, where family=socket.AF_UNSPEC is the default and
102 will perform a DNS search for both IPv6 and IPv4 records."""
103
104 family = socket.AF_INET
105 if HAS_IPV6:
106 family = socket.AF_UNSPEC
107 return family
108
109
110 def _has_ipv6(host: str) -> bool:
111 """ Returns True if the system can bind an IPv6 address. """
112 sock = None
113 has_ipv6 = False
114
115 if socket.has_ipv6:
116 # has_ipv6 returns true if cPython was compiled with IPv6 support.
117 # It does not tell us if the system has IPv6 support enabled. To
118 # determine that we must bind to an IPv6 address.
119 # https://github.com/urllib3/urllib3/pull/611
120 # https://bugs.python.org/issue658327
121 try:
122 sock = socket.socket(socket.AF_INET6)
123 sock.bind((host, 0))
124 has_ipv6 = True
125 except Exception:
126 pass
127
128 if sock:
129 sock.close()
130 return has_ipv6
131
132
133 HAS_IPV6 = _has_ipv6("::1")
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/urllib3/util/connection.py b/src/urllib3/util/connection.py
--- a/src/urllib3/util/connection.py
+++ b/src/urllib3/util/connection.py
@@ -16,7 +16,7 @@
:param conn:
:class:`http.client.HTTPConnection` object.
"""
- sock = getattr(conn, "sock", False)
+ sock = getattr(conn, "sock", None)
if sock is None: # Connection already closed (such as by httplib).
return True
# Returns True if readable, which here means it's been dropped
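To make the behavioural difference concrete, here is a tiny standalone sketch of the `getattr` default change; it does not import urllib3, and `FakeConn` is just a hypothetical stand-in for a connection object whose `sock` attribute was never set:

```python
class FakeConn:
    """Stand-in for an HTTPConnection whose socket was never created."""
    pass  # deliberately defines no ``sock`` attribute


conn = FakeConn()

# Old default: the missing attribute comes back as False, which is not None,
# so the early return is skipped and False would be passed on to the
# readability check instead of a real socket.
old_value = getattr(conn, "sock", False)
print(old_value is None)  # False

# Patched default: the missing attribute comes back as None and is treated
# exactly like a connection that httplib already closed.
new_value = getattr(conn, "sock", None)
print(new_value is None)  # True
```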
| {"golden_diff": "diff --git a/src/urllib3/util/connection.py b/src/urllib3/util/connection.py\n--- a/src/urllib3/util/connection.py\n+++ b/src/urllib3/util/connection.py\n@@ -16,7 +16,7 @@\n :param conn:\n :class:`http.client.HTTPConnection` object.\n \"\"\"\n- sock = getattr(conn, \"sock\", False)\n+ sock = getattr(conn, \"sock\", None)\n if sock is None: # Connection already closed (such as by httplib).\n return True\n # Returns True if readable, which here means it's been dropped\n", "issue": "is_connection_dropped checks against None but uses False as default value for getattr\nI happened to read this line and the code looks fishy. I did not otherwise verify the potential bug.\r\n\r\nSee implementation of `is_connection_dropped(conn: socket.socket) -> bool`:\r\n\r\nhttps://github.com/urllib3/urllib3/blob/287052a16a59bcaba5772387de36fa9a49eb8378/src/urllib3/util/connection.py#L19-L23\r\n\r\nIf there is no property `sock` on `conn`, then we will call `wait_for_read(False, timeout=0.0)`, which e.g. may end up putting the `False` into the iterable passed to `select`.\r\n\r\nSince this seemed to never have caused problems, the `sock = getattr(conn, \"sock\", False)` can probably be replaced with just `sock = conn.sock`.\r\n\r\nAlternatives would be to replace the default (last argument of `getattr`) of `False` with `None` or replace the `if sock is None` with `if not sock`.\n", "before_files": [{"content": "import socket\nfrom typing import List, Optional, Tuple, Union\n\nfrom urllib3.exceptions import LocationParseError\n\nfrom .wait import wait_for_read\n\nSOCKET_GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT # type: ignore\nSocketOptions = List[Tuple[int, int, Union[int, bytes]]]\n\n\ndef is_connection_dropped(conn: socket.socket) -> bool: # Platform-specific\n \"\"\"\n Returns True if the connection is dropped and should be closed.\n\n :param conn:\n :class:`http.client.HTTPConnection` object.\n \"\"\"\n sock = getattr(conn, \"sock\", False)\n if sock is None: # Connection already closed (such as by httplib).\n return True\n # Returns True if readable, which here means it's been dropped\n return wait_for_read(sock, timeout=0.0)\n\n\n# This function is copied from socket.py in the Python 2.7 standard\n# library test suite. Added to its signature is only `socket_options`.\n# One additional modification is that we avoid binding to IPv6 servers\n# discovered in DNS if the system doesn't have IPv6 functionality.\ndef create_connection(\n address: Tuple[str, int],\n timeout: Optional[float] = SOCKET_GLOBAL_DEFAULT_TIMEOUT,\n source_address: Optional[Tuple[str, int]] = None,\n socket_options: Optional[SocketOptions] = None,\n) -> socket.socket:\n \"\"\"Connect to *address* and return the socket object.\n\n Convenience function. Connect to *address* (a 2-tuple ``(host,\n port)``) and return the socket object. Passing the optional\n *timeout* parameter will set the timeout on the socket instance\n before attempting to connect. If no *timeout* is supplied, the\n global default timeout setting returned by :func:`socket.getdefaulttimeout`\n is used. 
If *source_address* is set it must be a tuple of (host, port)\n for the socket to bind as a source address before making the connection.\n An host of '' or port 0 tells the OS to use the default.\n \"\"\"\n\n host, port = address\n if host.startswith(\"[\"):\n host = host.strip(\"[]\")\n err = None\n\n # Using the value from allowed_gai_family() in the context of getaddrinfo lets\n # us select whether to work with IPv4 DNS records, IPv6 records, or both.\n # The original create_connection function always returns all records.\n family = allowed_gai_family()\n\n try:\n host.encode(\"idna\")\n except UnicodeError:\n raise LocationParseError(f\"'{host}', label empty or too long\") from None\n\n for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\n af, socktype, proto, canonname, sa = res\n sock = None\n try:\n sock = socket.socket(af, socktype, proto)\n\n # If provided, set socket level options before connecting.\n _set_socket_options(sock, socket_options)\n\n if timeout is not SOCKET_GLOBAL_DEFAULT_TIMEOUT:\n sock.settimeout(timeout)\n if source_address:\n sock.bind(source_address)\n sock.connect(sa)\n return sock\n\n except OSError as e:\n err = e\n if sock is not None:\n sock.close()\n sock = None\n\n if err is not None:\n raise err\n\n raise OSError(\"getaddrinfo returns an empty list\")\n\n\ndef _set_socket_options(sock: socket.socket, options: Optional[SocketOptions]) -> None:\n if options is None:\n return\n\n for opt in options:\n sock.setsockopt(*opt)\n\n\ndef allowed_gai_family() -> socket.AddressFamily:\n \"\"\"This function is designed to work in the context of\n getaddrinfo, where family=socket.AF_UNSPEC is the default and\n will perform a DNS search for both IPv6 and IPv4 records.\"\"\"\n\n family = socket.AF_INET\n if HAS_IPV6:\n family = socket.AF_UNSPEC\n return family\n\n\ndef _has_ipv6(host: str) -> bool:\n \"\"\" Returns True if the system can bind an IPv6 address. \"\"\"\n sock = None\n has_ipv6 = False\n\n if socket.has_ipv6:\n # has_ipv6 returns true if cPython was compiled with IPv6 support.\n # It does not tell us if the system has IPv6 support enabled. To\n # determine that we must bind to an IPv6 address.\n # https://github.com/urllib3/urllib3/pull/611\n # https://bugs.python.org/issue658327\n try:\n sock = socket.socket(socket.AF_INET6)\n sock.bind((host, 0))\n has_ipv6 = True\n except Exception:\n pass\n\n if sock:\n sock.close()\n return has_ipv6\n\n\nHAS_IPV6 = _has_ipv6(\"::1\")\n", "path": "src/urllib3/util/connection.py"}], "after_files": [{"content": "import socket\nfrom typing import List, Optional, Tuple, Union\n\nfrom urllib3.exceptions import LocationParseError\n\nfrom .wait import wait_for_read\n\nSOCKET_GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT # type: ignore\nSocketOptions = List[Tuple[int, int, Union[int, bytes]]]\n\n\ndef is_connection_dropped(conn: socket.socket) -> bool: # Platform-specific\n \"\"\"\n Returns True if the connection is dropped and should be closed.\n\n :param conn:\n :class:`http.client.HTTPConnection` object.\n \"\"\"\n sock = getattr(conn, \"sock\", None)\n if sock is None: # Connection already closed (such as by httplib).\n return True\n # Returns True if readable, which here means it's been dropped\n return wait_for_read(sock, timeout=0.0)\n\n\n# This function is copied from socket.py in the Python 2.7 standard\n# library test suite. 
Added to its signature is only `socket_options`.\n# One additional modification is that we avoid binding to IPv6 servers\n# discovered in DNS if the system doesn't have IPv6 functionality.\ndef create_connection(\n address: Tuple[str, int],\n timeout: Optional[float] = SOCKET_GLOBAL_DEFAULT_TIMEOUT,\n source_address: Optional[Tuple[str, int]] = None,\n socket_options: Optional[SocketOptions] = None,\n) -> socket.socket:\n \"\"\"Connect to *address* and return the socket object.\n\n Convenience function. Connect to *address* (a 2-tuple ``(host,\n port)``) and return the socket object. Passing the optional\n *timeout* parameter will set the timeout on the socket instance\n before attempting to connect. If no *timeout* is supplied, the\n global default timeout setting returned by :func:`socket.getdefaulttimeout`\n is used. If *source_address* is set it must be a tuple of (host, port)\n for the socket to bind as a source address before making the connection.\n An host of '' or port 0 tells the OS to use the default.\n \"\"\"\n\n host, port = address\n if host.startswith(\"[\"):\n host = host.strip(\"[]\")\n err = None\n\n # Using the value from allowed_gai_family() in the context of getaddrinfo lets\n # us select whether to work with IPv4 DNS records, IPv6 records, or both.\n # The original create_connection function always returns all records.\n family = allowed_gai_family()\n\n try:\n host.encode(\"idna\")\n except UnicodeError:\n raise LocationParseError(f\"'{host}', label empty or too long\") from None\n\n for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\n af, socktype, proto, canonname, sa = res\n sock = None\n try:\n sock = socket.socket(af, socktype, proto)\n\n # If provided, set socket level options before connecting.\n _set_socket_options(sock, socket_options)\n\n if timeout is not SOCKET_GLOBAL_DEFAULT_TIMEOUT:\n sock.settimeout(timeout)\n if source_address:\n sock.bind(source_address)\n sock.connect(sa)\n return sock\n\n except OSError as e:\n err = e\n if sock is not None:\n sock.close()\n sock = None\n\n if err is not None:\n raise err\n\n raise OSError(\"getaddrinfo returns an empty list\")\n\n\ndef _set_socket_options(sock: socket.socket, options: Optional[SocketOptions]) -> None:\n if options is None:\n return\n\n for opt in options:\n sock.setsockopt(*opt)\n\n\ndef allowed_gai_family() -> socket.AddressFamily:\n \"\"\"This function is designed to work in the context of\n getaddrinfo, where family=socket.AF_UNSPEC is the default and\n will perform a DNS search for both IPv6 and IPv4 records.\"\"\"\n\n family = socket.AF_INET\n if HAS_IPV6:\n family = socket.AF_UNSPEC\n return family\n\n\ndef _has_ipv6(host: str) -> bool:\n \"\"\" Returns True if the system can bind an IPv6 address. \"\"\"\n sock = None\n has_ipv6 = False\n\n if socket.has_ipv6:\n # has_ipv6 returns true if cPython was compiled with IPv6 support.\n # It does not tell us if the system has IPv6 support enabled. To\n # determine that we must bind to an IPv6 address.\n # https://github.com/urllib3/urllib3/pull/611\n # https://bugs.python.org/issue658327\n try:\n sock = socket.socket(socket.AF_INET6)\n sock.bind((host, 0))\n has_ipv6 = True\n except Exception:\n pass\n\n if sock:\n sock.close()\n return has_ipv6\n\n\nHAS_IPV6 = _has_ipv6(\"::1\")\n", "path": "src/urllib3/util/connection.py"}]} | 1,866 | 135 |
gh_patches_debug_9785 | rasdani/github-patches | git_diff | freedomofpress__securedrop-603 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Database error when trying to delete replies in the journalist interface
An error is thrown when deleting replies in the journalist interface. An attempt is made to remove a record for the reply from the database, but replies are only recorded on the filesystem.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/journalist.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import config
3 import version
4 import crypto_util
5 import store
6 import template_filters
7 from db import db_session, Source, Submission, SourceStar, get_one_or_else
8
9 import os
10 from datetime import datetime
11 from flask import (Flask, request, render_template, send_file, redirect, flash, url_for, g, abort)
12 from flask_wtf.csrf import CsrfProtect
13 from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound
14
15 import background
16
17 app = Flask(__name__, template_folder=config.JOURNALIST_TEMPLATES_DIR)
18 app.config.from_object(config.JournalistInterfaceFlaskConfig)
19 CsrfProtect(app)
20
21 app.jinja_env.globals['version'] = version.__version__
22 if getattr(config, 'CUSTOM_HEADER_IMAGE', None):
23 app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE
24 app.jinja_env.globals['use_custom_header_image'] = True
25 else:
26 app.jinja_env.globals['header_image'] = 'logo.png'
27 app.jinja_env.globals['use_custom_header_image'] = False
28
29 app.jinja_env.filters['datetimeformat'] = template_filters.datetimeformat
30
31
32 @app.teardown_appcontext
33 def shutdown_session(exception=None):
34 """Automatically remove database sessions at the end of the request, or
35 when the application shuts down"""
36 db_session.remove()
37
38
39 def get_source(sid):
40 """Return a Source object, representing the database row, for the source
41 with id `sid`"""
42 source = None
43 query = Source.query.filter(Source.filesystem_id == sid)
44 source = get_one_or_else(query, app.logger, abort)
45
46 return source
47
48
49 @app.before_request
50 def setup_g():
51 """Store commonly used values in Flask's special g object"""
52 if request.method == 'POST':
53 sid = request.form.get('sid')
54 if sid:
55 g.sid = sid
56 g.source = get_source(sid)
57
58
59 def get_docs(sid):
60 """Get docs associated with source id `sid`, sorted by submission date"""
61 docs = []
62 for filename in os.listdir(store.path(sid)):
63 os_stat = os.stat(store.path(sid, filename))
64 docs.append(dict(
65 name=filename,
66 date=datetime.fromtimestamp(os_stat.st_mtime),
67 size=os_stat.st_size,
68 ))
69 # sort in chronological order
70 docs.sort(key=lambda x: int(x['name'].split('-')[0]))
71 return docs
72
73
74 def make_star_true(sid):
75 source = get_source(sid)
76 if source.star:
77 source.star.starred = True
78 else:
79 source_star = SourceStar(source)
80 db_session.add(source_star)
81
82
83 def make_star_false(sid):
84 source = get_source(sid)
85 source.star.starred = False
86
87
88 @app.route('/col/add_star/<sid>', methods=('POST',))
89 def add_star(sid):
90 make_star_true(sid)
91 db_session.commit()
92 return redirect(url_for('index'))
93
94
95 @app.route("/col/remove_star/<sid>", methods=('POST',))
96 def remove_star(sid):
97 make_star_false(sid)
98 db_session.commit()
99 return redirect(url_for('index'))
100
101
102 @app.route('/')
103 def index():
104 unstarred = []
105 starred = []
106 for source in Source.query.filter_by(pending=False).order_by(Source.last_updated.desc()).all():
107 star = SourceStar.query.filter(SourceStar.source_id == source.id).first()
108 if star and star.starred:
109 starred.append(source)
110 else:
111 unstarred.append(source)
112 source.num_unread = len(
113 Submission.query.filter(Submission.source_id == source.id, Submission.downloaded == False).all())
114
115 return render_template('index.html', unstarred=unstarred, starred=starred)
116
117
118 @app.route('/col/<sid>')
119 def col(sid):
120 source = get_source(sid)
121 docs = get_docs(sid)
122 submissions = [submission.filename for submission in Submission.query.filter(Submission.source_id == source.id).all()]
123 # Only include documents loaded from the filesystem which are replies or which are also listed in the
124 # submissions table to avoid displaying partially uploaded files (#561).
125 docs = [doc for doc in docs if doc['name'] in submissions or doc['name'].endswith('reply.gpg')]
126
127 haskey = crypto_util.getkey(sid)
128 return render_template("col.html", sid=sid,
129 codename=source.journalist_designation, docs=docs, haskey=haskey,
130 flagged=source.flagged)
131
132
133 def delete_collection(source_id):
134 # Delete the source's collection of submissions
135 store.delete_source_directory(source_id)
136
137 # Delete the source's reply keypair
138 crypto_util.delete_reply_keypair(source_id)
139
140 # Delete their entry in the db
141 source = get_source(source_id)
142 db_session.delete(source)
143 db_session.commit()
144
145
146 @app.route('/col/process', methods=('POST',))
147 def col_process():
148 actions = {'delete': col_delete, 'star': col_star, 'un-star': col_un_star}
149 if 'cols_selected' not in request.form:
150 return redirect(url_for('index'))
151
152 cols_selected = request.form.getlist('cols_selected') # getlist is cgi.FieldStorage.getlist
153 action = request.form['action']
154
155 if action not in actions:
156 return abort(500)
157
158 method = actions[action]
159 return method(cols_selected)
160
161
162 def col_star(cols_selected):
163 for sid in cols_selected:
164 make_star_true(sid)
165
166 db_session.commit()
167 return redirect(url_for('index'))
168
169
170 def col_un_star(cols_selected):
171 for source_id in cols_selected:
172 make_star_false(source_id)
173
174 db_session.commit()
175 return redirect(url_for('index'))
176
177
178 @app.route('/col/delete/<sid>', methods=('POST',))
179 def col_delete_single(sid):
180 """deleting a single collection from its /col page"""
181 source = get_source(sid)
182 delete_collection(sid)
183 flash("%s's collection deleted" % (source.journalist_designation,), "notification")
184 return redirect(url_for('index'))
185
186
187 def col_delete(cols_selected):
188 """deleting multiple collections from the index"""
189 if len(cols_selected) < 1:
190 flash("No collections selected to delete!", "error")
191 else:
192 for source_id in cols_selected:
193 delete_collection(source_id)
194 flash("%s %s deleted" % (
195 len(cols_selected),
196 "collection" if len(cols_selected) == 1 else "collections"
197 ), "notification")
198
199 return redirect(url_for('index'))
200
201
202 @app.route('/col/<sid>/<fn>')
203 def doc(sid, fn):
204 if '..' in fn or fn.startswith('/'):
205 abort(404)
206 try:
207 Submission.query.filter(Submission.filename == fn).one().downloaded = True
208 except NoResultFound as e:
209 app.logger.error("Could not mark " + fn + " as downloaded: %s" % (e,))
210 db_session.commit()
211 return send_file(store.path(sid, fn), mimetype="application/pgp-encrypted")
212
213
214 @app.route('/reply', methods=('POST',))
215 def reply():
216 msg = request.form['msg']
217 g.source.interaction_count += 1
218 filename = "{0}-reply.gpg".format(g.source.interaction_count)
219
220 crypto_util.encrypt(crypto_util.getkey(g.sid), msg, output=
221 store.path(g.sid, filename))
222
223 db_session.commit()
224 return render_template('reply.html', sid=g.sid,
225 codename=g.source.journalist_designation)
226
227
228 @app.route('/regenerate-code', methods=('POST',))
229 def generate_code():
230 original_journalist_designation = g.source.journalist_designation
231 g.source.journalist_designation = crypto_util.display_id()
232
233 for doc in Submission.query.filter(Submission.source_id == g.source.id).all():
234 doc.filename = store.rename_submission(g.sid, doc.filename, g.source.journalist_filename())
235 db_session.commit()
236
237 flash("The source '%s' has been renamed to '%s'" % (original_journalist_designation, g.source.journalist_designation), "notification")
238 return redirect('/col/' + g.sid)
239
240
241 @app.route('/download_unread/<sid>')
242 def download_unread(sid):
243 id = Source.query.filter(Source.filesystem_id == sid).one().id
244 docs = [doc.filename for doc in
245 Submission.query.filter(Submission.source_id == id, Submission.downloaded == False).all()]
246 return bulk_download(sid, docs)
247
248
249 @app.route('/bulk', methods=('POST',))
250 def bulk():
251 action = request.form['action']
252
253 doc_names_selected = request.form.getlist('doc_names_selected')
254 docs_selected = [
255 doc for doc in get_docs(g.sid) if doc['name'] in doc_names_selected]
256 filenames_selected = [
257 doc['name'] for doc in docs_selected]
258
259 if not docs_selected:
260 if action == 'download':
261 flash("No collections selected to download!", "error")
262 elif action == 'delete':
263 flash("No collections selected to delete!", "error")
264 return redirect(url_for('col', sid=g.sid))
265
266 if action == 'download':
267 return bulk_download(g.sid, filenames_selected)
268 elif action == 'delete':
269 return bulk_delete(g.sid, docs_selected)
270 else:
271 abort(400)
272
273
274 def bulk_delete(sid, docs_selected):
275 source = get_source(sid)
276 confirm_delete = bool(request.form.get('confirm_delete', False))
277 if confirm_delete:
278 for doc in docs_selected:
279 db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())
280 fn = store.path(sid, doc['name'])
281 store.secure_unlink(fn)
282 db_session.commit()
283 return render_template('delete.html', sid=sid,
284 codename=source.journalist_designation,
285 docs_selected=docs_selected, confirm_delete=confirm_delete)
286
287
288 def bulk_download(sid, docs_selected):
289 source = get_source(sid)
290 filenames = []
291 for doc in docs_selected:
292 filenames.append(store.path(sid, doc))
293 try:
294 Submission.query.filter(Submission.filename == doc).one().downloaded = True
295 except NoResultFound as e:
296 app.logger.error("Could not mark " + doc + " as downloaded: %s" % (e,))
297 db_session.commit()
298 zip = store.get_bulk_archive(filenames, zip_directory=source.journalist_filename())
299 return send_file(zip.name, mimetype="application/zip",
300 attachment_filename=source.journalist_filename() + ".zip",
301 as_attachment=True)
302
303
304 @app.route('/flag', methods=('POST',))
305 def flag():
306 g.source.flagged = True
307 db_session.commit()
308 return render_template('flag.html', sid=g.sid,
309 codename=g.source.journalist_designation)
310 def write_pidfile():
311 pid = str(os.getpid())
312 with open(config.JOURNALIST_PIDFILE, 'w') as fp:
313 fp.write(pid)
314
315 if __name__ == "__main__":
316 write_pidfile()
317 # TODO make sure debug=False in production
318 app.run(debug=True, host='0.0.0.0', port=8081)
319
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/journalist.py b/securedrop/journalist.py
--- a/securedrop/journalist.py
+++ b/securedrop/journalist.py
@@ -276,7 +276,8 @@
confirm_delete = bool(request.form.get('confirm_delete', False))
if confirm_delete:
for doc in docs_selected:
- db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())
+ if not doc['name'].endswith('reply.gpg'):
+ db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())
fn = store.path(sid, doc['name'])
store.secure_unlink(fn)
db_session.commit()
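A dependency-free sketch of the control flow this patch introduces: database rows are deleted only for submissions, while every selected file (including replies, which exist only on disk) is still unlinked. The file names and `print` calls below are made-up placeholders for the real `Submission`/`store` operations:

```python
docs_selected = [
    {"name": "1-msg.gpg"},    # hypothetical submission name: has a Submission row
    {"name": "2-reply.gpg"},  # journalist reply: recorded only on the filesystem
]

for doc in docs_selected:
    if not doc["name"].endswith("reply.gpg"):
        # Only submissions have a database record to remove.
        print("delete Submission row for", doc["name"])
    # Every selected document is still securely removed from the store.
    print("secure_unlink", doc["name"])
```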
| {"golden_diff": "diff --git a/securedrop/journalist.py b/securedrop/journalist.py\n--- a/securedrop/journalist.py\n+++ b/securedrop/journalist.py\n@@ -276,7 +276,8 @@\n confirm_delete = bool(request.form.get('confirm_delete', False))\n if confirm_delete:\n for doc in docs_selected:\n- db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())\n+ if not doc['name'].endswith('reply.gpg'):\n+ db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())\n fn = store.path(sid, doc['name'])\n store.secure_unlink(fn)\n db_session.commit()\n", "issue": "Database error when trying to delete replies in the journalist interface\nAn error is thrown deleting replies in the journalist interface. An attempt is made to remove a record for the reply from the database but replies are only recorded on the filesystem.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport config\nimport version\nimport crypto_util\nimport store\nimport template_filters\nfrom db import db_session, Source, Submission, SourceStar, get_one_or_else\n\nimport os\nfrom datetime import datetime\nfrom flask import (Flask, request, render_template, send_file, redirect, flash, url_for, g, abort)\nfrom flask_wtf.csrf import CsrfProtect\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\n\nimport background\n\napp = Flask(__name__, template_folder=config.JOURNALIST_TEMPLATES_DIR)\napp.config.from_object(config.JournalistInterfaceFlaskConfig)\nCsrfProtect(app)\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\napp.jinja_env.filters['datetimeformat'] = template_filters.datetimeformat\n\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef get_source(sid):\n \"\"\"Return a Source object, representing the database row, for the source\n with id `sid`\"\"\"\n source = None\n query = Source.query.filter(Source.filesystem_id == sid)\n source = get_one_or_else(query, app.logger, abort)\n\n return source\n\n\[email protected]_request\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n if request.method == 'POST':\n sid = request.form.get('sid')\n if sid:\n g.sid = sid\n g.source = get_source(sid)\n\n\ndef get_docs(sid):\n \"\"\"Get docs associated with source id `sid`, sorted by submission date\"\"\"\n docs = []\n for filename in os.listdir(store.path(sid)):\n os_stat = os.stat(store.path(sid, filename))\n docs.append(dict(\n name=filename,\n date=datetime.fromtimestamp(os_stat.st_mtime),\n size=os_stat.st_size,\n ))\n # sort in chronological order\n docs.sort(key=lambda x: int(x['name'].split('-')[0]))\n return docs\n\n\ndef make_star_true(sid):\n source = get_source(sid)\n if source.star:\n source.star.starred = True\n else:\n source_star = SourceStar(source)\n db_session.add(source_star)\n\n\ndef make_star_false(sid):\n source = get_source(sid)\n source.star.starred = False\n\n\[email protected]('/col/add_star/<sid>', methods=('POST',))\ndef add_star(sid):\n make_star_true(sid)\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email 
protected](\"/col/remove_star/<sid>\", methods=('POST',))\ndef remove_star(sid):\n make_star_false(sid)\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected]('/')\ndef index():\n unstarred = []\n starred = []\n for source in Source.query.filter_by(pending=False).order_by(Source.last_updated.desc()).all():\n star = SourceStar.query.filter(SourceStar.source_id == source.id).first()\n if star and star.starred:\n starred.append(source)\n else:\n unstarred.append(source)\n source.num_unread = len(\n Submission.query.filter(Submission.source_id == source.id, Submission.downloaded == False).all())\n\n return render_template('index.html', unstarred=unstarred, starred=starred)\n\n\[email protected]('/col/<sid>')\ndef col(sid):\n source = get_source(sid)\n docs = get_docs(sid)\n submissions = [submission.filename for submission in Submission.query.filter(Submission.source_id == source.id).all()]\n # Only include documents loaded from the filesystem which are replies or which are also listed in the\n # submissions table to avoid displaying partially uploaded files (#561).\n docs = [doc for doc in docs if doc['name'] in submissions or doc['name'].endswith('reply.gpg')]\n\n haskey = crypto_util.getkey(sid)\n return render_template(\"col.html\", sid=sid,\n codename=source.journalist_designation, docs=docs, haskey=haskey,\n flagged=source.flagged)\n\n\ndef delete_collection(source_id):\n # Delete the source's collection of submissions\n store.delete_source_directory(source_id)\n\n # Delete the source's reply keypair\n crypto_util.delete_reply_keypair(source_id)\n\n # Delete their entry in the db\n source = get_source(source_id)\n db_session.delete(source)\n db_session.commit()\n\n\[email protected]('/col/process', methods=('POST',))\ndef col_process():\n actions = {'delete': col_delete, 'star': col_star, 'un-star': col_un_star}\n if 'cols_selected' not in request.form:\n return redirect(url_for('index'))\n\n cols_selected = request.form.getlist('cols_selected') # getlist is cgi.FieldStorage.getlist\n action = request.form['action']\n\n if action not in actions:\n return abort(500)\n\n method = actions[action]\n return method(cols_selected)\n\n\ndef col_star(cols_selected):\n for sid in cols_selected:\n make_star_true(sid)\n\n db_session.commit()\n return redirect(url_for('index'))\n\n\ndef col_un_star(cols_selected):\n for source_id in cols_selected:\n make_star_false(source_id)\n\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected]('/col/delete/<sid>', methods=('POST',))\ndef col_delete_single(sid):\n \"\"\"deleting a single collection from its /col page\"\"\"\n source = get_source(sid)\n delete_collection(sid)\n flash(\"%s's collection deleted\" % (source.journalist_designation,), \"notification\")\n return redirect(url_for('index'))\n\n\ndef col_delete(cols_selected):\n \"\"\"deleting multiple collections from the index\"\"\"\n if len(cols_selected) < 1:\n flash(\"No collections selected to delete!\", \"error\")\n else:\n for source_id in cols_selected:\n delete_collection(source_id)\n flash(\"%s %s deleted\" % (\n len(cols_selected),\n \"collection\" if len(cols_selected) == 1 else \"collections\"\n ), \"notification\")\n\n return redirect(url_for('index'))\n\n\[email protected]('/col/<sid>/<fn>')\ndef doc(sid, fn):\n if '..' 
in fn or fn.startswith('/'):\n abort(404)\n try:\n Submission.query.filter(Submission.filename == fn).one().downloaded = True\n except NoResultFound as e:\n app.logger.error(\"Could not mark \" + fn + \" as downloaded: %s\" % (e,))\n db_session.commit()\n return send_file(store.path(sid, fn), mimetype=\"application/pgp-encrypted\")\n\n\[email protected]('/reply', methods=('POST',))\ndef reply():\n msg = request.form['msg']\n g.source.interaction_count += 1\n filename = \"{0}-reply.gpg\".format(g.source.interaction_count)\n\n crypto_util.encrypt(crypto_util.getkey(g.sid), msg, output=\n store.path(g.sid, filename))\n\n db_session.commit()\n return render_template('reply.html', sid=g.sid,\n codename=g.source.journalist_designation)\n\n\[email protected]('/regenerate-code', methods=('POST',))\ndef generate_code():\n original_journalist_designation = g.source.journalist_designation\n g.source.journalist_designation = crypto_util.display_id()\n \n for doc in Submission.query.filter(Submission.source_id == g.source.id).all():\n doc.filename = store.rename_submission(g.sid, doc.filename, g.source.journalist_filename())\n db_session.commit()\n\n flash(\"The source '%s' has been renamed to '%s'\" % (original_journalist_designation, g.source.journalist_designation), \"notification\")\n return redirect('/col/' + g.sid)\n\n\[email protected]('/download_unread/<sid>')\ndef download_unread(sid):\n id = Source.query.filter(Source.filesystem_id == sid).one().id\n docs = [doc.filename for doc in\n Submission.query.filter(Submission.source_id == id, Submission.downloaded == False).all()]\n return bulk_download(sid, docs)\n\n\[email protected]('/bulk', methods=('POST',))\ndef bulk():\n action = request.form['action']\n\n doc_names_selected = request.form.getlist('doc_names_selected')\n docs_selected = [\n doc for doc in get_docs(g.sid) if doc['name'] in doc_names_selected]\n filenames_selected = [\n doc['name'] for doc in docs_selected]\n\n if not docs_selected:\n if action == 'download':\n flash(\"No collections selected to download!\", \"error\")\n elif action == 'delete':\n flash(\"No collections selected to delete!\", \"error\")\n return redirect(url_for('col', sid=g.sid))\n\n if action == 'download':\n return bulk_download(g.sid, filenames_selected)\n elif action == 'delete':\n return bulk_delete(g.sid, docs_selected)\n else:\n abort(400)\n\n\ndef bulk_delete(sid, docs_selected):\n source = get_source(sid)\n confirm_delete = bool(request.form.get('confirm_delete', False))\n if confirm_delete:\n for doc in docs_selected:\n db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())\n fn = store.path(sid, doc['name'])\n store.secure_unlink(fn)\n db_session.commit()\n return render_template('delete.html', sid=sid,\n codename=source.journalist_designation,\n docs_selected=docs_selected, confirm_delete=confirm_delete)\n\n\ndef bulk_download(sid, docs_selected):\n source = get_source(sid)\n filenames = []\n for doc in docs_selected:\n filenames.append(store.path(sid, doc))\n try:\n Submission.query.filter(Submission.filename == doc).one().downloaded = True\n except NoResultFound as e:\n app.logger.error(\"Could not mark \" + doc + \" as downloaded: %s\" % (e,))\n db_session.commit()\n zip = store.get_bulk_archive(filenames, zip_directory=source.journalist_filename())\n return send_file(zip.name, mimetype=\"application/zip\",\n attachment_filename=source.journalist_filename() + \".zip\",\n as_attachment=True)\n\n\[email protected]('/flag', methods=('POST',))\ndef flag():\n 
g.source.flagged = True\n db_session.commit()\n return render_template('flag.html', sid=g.sid,\n codename=g.source.journalist_designation)\ndef write_pidfile():\n pid = str(os.getpid())\n with open(config.JOURNALIST_PIDFILE, 'w') as fp:\n fp.write(pid)\n\nif __name__ == \"__main__\":\n write_pidfile()\n # TODO make sure debug=False in production\n app.run(debug=True, host='0.0.0.0', port=8081)\n", "path": "securedrop/journalist.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport config\nimport version\nimport crypto_util\nimport store\nimport template_filters\nfrom db import db_session, Source, Submission, SourceStar, get_one_or_else\n\nimport os\nfrom datetime import datetime\nfrom flask import (Flask, request, render_template, send_file, redirect, flash, url_for, g, abort)\nfrom flask_wtf.csrf import CsrfProtect\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\n\nimport background\n\napp = Flask(__name__, template_folder=config.JOURNALIST_TEMPLATES_DIR)\napp.config.from_object(config.JournalistInterfaceFlaskConfig)\nCsrfProtect(app)\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\napp.jinja_env.filters['datetimeformat'] = template_filters.datetimeformat\n\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef get_source(sid):\n \"\"\"Return a Source object, representing the database row, for the source\n with id `sid`\"\"\"\n source = None\n query = Source.query.filter(Source.filesystem_id == sid)\n source = get_one_or_else(query, app.logger, abort)\n\n return source\n\n\[email protected]_request\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n if request.method == 'POST':\n sid = request.form.get('sid')\n if sid:\n g.sid = sid\n g.source = get_source(sid)\n\n\ndef get_docs(sid):\n \"\"\"Get docs associated with source id `sid`, sorted by submission date\"\"\"\n docs = []\n for filename in os.listdir(store.path(sid)):\n os_stat = os.stat(store.path(sid, filename))\n docs.append(dict(\n name=filename,\n date=datetime.fromtimestamp(os_stat.st_mtime),\n size=os_stat.st_size,\n ))\n # sort in chronological order\n docs.sort(key=lambda x: int(x['name'].split('-')[0]))\n return docs\n\n\ndef make_star_true(sid):\n source = get_source(sid)\n if source.star:\n source.star.starred = True\n else:\n source_star = SourceStar(source)\n db_session.add(source_star)\n\n\ndef make_star_false(sid):\n source = get_source(sid)\n source.star.starred = False\n\n\[email protected]('/col/add_star/<sid>', methods=('POST',))\ndef add_star(sid):\n make_star_true(sid)\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected](\"/col/remove_star/<sid>\", methods=('POST',))\ndef remove_star(sid):\n make_star_false(sid)\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected]('/')\ndef index():\n unstarred = []\n starred = []\n for source in Source.query.filter_by(pending=False).order_by(Source.last_updated.desc()).all():\n star = SourceStar.query.filter(SourceStar.source_id == source.id).first()\n if star and star.starred:\n 
starred.append(source)\n else:\n unstarred.append(source)\n source.num_unread = len(\n Submission.query.filter(Submission.source_id == source.id, Submission.downloaded == False).all())\n\n return render_template('index.html', unstarred=unstarred, starred=starred)\n\n\[email protected]('/col/<sid>')\ndef col(sid):\n source = get_source(sid)\n docs = get_docs(sid)\n submissions = [submission.filename for submission in Submission.query.filter(Submission.source_id == source.id).all()]\n # Only include documents loaded from the filesystem which are replies or which are also listed in the\n # submissions table to avoid displaying partially uploaded files (#561).\n docs = [doc for doc in docs if doc['name'] in submissions or doc['name'].endswith('reply.gpg')]\n\n haskey = crypto_util.getkey(sid)\n return render_template(\"col.html\", sid=sid,\n codename=source.journalist_designation, docs=docs, haskey=haskey,\n flagged=source.flagged)\n\n\ndef delete_collection(source_id):\n # Delete the source's collection of submissions\n store.delete_source_directory(source_id)\n\n # Delete the source's reply keypair\n crypto_util.delete_reply_keypair(source_id)\n\n # Delete their entry in the db\n source = get_source(source_id)\n db_session.delete(source)\n db_session.commit()\n\n\[email protected]('/col/process', methods=('POST',))\ndef col_process():\n actions = {'delete': col_delete, 'star': col_star, 'un-star': col_un_star}\n if 'cols_selected' not in request.form:\n return redirect(url_for('index'))\n\n cols_selected = request.form.getlist('cols_selected') # getlist is cgi.FieldStorage.getlist\n action = request.form['action']\n\n if action not in actions:\n return abort(500)\n\n method = actions[action]\n return method(cols_selected)\n\n\ndef col_star(cols_selected):\n for sid in cols_selected:\n make_star_true(sid)\n\n db_session.commit()\n return redirect(url_for('index'))\n\n\ndef col_un_star(cols_selected):\n for source_id in cols_selected:\n make_star_false(source_id)\n\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected]('/col/delete/<sid>', methods=('POST',))\ndef col_delete_single(sid):\n \"\"\"deleting a single collection from its /col page\"\"\"\n source = get_source(sid)\n delete_collection(sid)\n flash(\"%s's collection deleted\" % (source.journalist_designation,), \"notification\")\n return redirect(url_for('index'))\n\n\ndef col_delete(cols_selected):\n \"\"\"deleting multiple collections from the index\"\"\"\n if len(cols_selected) < 1:\n flash(\"No collections selected to delete!\", \"error\")\n else:\n for source_id in cols_selected:\n delete_collection(source_id)\n flash(\"%s %s deleted\" % (\n len(cols_selected),\n \"collection\" if len(cols_selected) == 1 else \"collections\"\n ), \"notification\")\n\n return redirect(url_for('index'))\n\n\[email protected]('/col/<sid>/<fn>')\ndef doc(sid, fn):\n if '..' 
in fn or fn.startswith('/'):\n abort(404)\n try:\n Submission.query.filter(Submission.filename == fn).one().downloaded = True\n except NoResultFound as e:\n app.logger.error(\"Could not mark \" + fn + \" as downloaded: %s\" % (e,))\n db_session.commit()\n return send_file(store.path(sid, fn), mimetype=\"application/pgp-encrypted\")\n\n\[email protected]('/reply', methods=('POST',))\ndef reply():\n msg = request.form['msg']\n g.source.interaction_count += 1\n filename = \"{0}-reply.gpg\".format(g.source.interaction_count)\n\n crypto_util.encrypt(crypto_util.getkey(g.sid), msg, output=\n store.path(g.sid, filename))\n\n db_session.commit()\n return render_template('reply.html', sid=g.sid,\n codename=g.source.journalist_designation)\n\n\[email protected]('/regenerate-code', methods=('POST',))\ndef generate_code():\n original_journalist_designation = g.source.journalist_designation\n g.source.journalist_designation = crypto_util.display_id()\n \n for doc in Submission.query.filter(Submission.source_id == g.source.id).all():\n doc.filename = store.rename_submission(g.sid, doc.filename, g.source.journalist_filename())\n db_session.commit()\n\n flash(\"The source '%s' has been renamed to '%s'\" % (original_journalist_designation, g.source.journalist_designation), \"notification\")\n return redirect('/col/' + g.sid)\n\n\[email protected]('/download_unread/<sid>')\ndef download_unread(sid):\n id = Source.query.filter(Source.filesystem_id == sid).one().id\n docs = [doc.filename for doc in\n Submission.query.filter(Submission.source_id == id, Submission.downloaded == False).all()]\n return bulk_download(sid, docs)\n\n\[email protected]('/bulk', methods=('POST',))\ndef bulk():\n action = request.form['action']\n\n doc_names_selected = request.form.getlist('doc_names_selected')\n docs_selected = [\n doc for doc in get_docs(g.sid) if doc['name'] in doc_names_selected]\n filenames_selected = [\n doc['name'] for doc in docs_selected]\n\n if not docs_selected:\n if action == 'download':\n flash(\"No collections selected to download!\", \"error\")\n elif action == 'delete':\n flash(\"No collections selected to delete!\", \"error\")\n return redirect(url_for('col', sid=g.sid))\n\n if action == 'download':\n return bulk_download(g.sid, filenames_selected)\n elif action == 'delete':\n return bulk_delete(g.sid, docs_selected)\n else:\n abort(400)\n\n\ndef bulk_delete(sid, docs_selected):\n source = get_source(sid)\n confirm_delete = bool(request.form.get('confirm_delete', False))\n if confirm_delete:\n for doc in docs_selected:\n if not doc['name'].endswith('reply.gpg'):\n db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())\n fn = store.path(sid, doc['name'])\n store.secure_unlink(fn)\n db_session.commit()\n return render_template('delete.html', sid=sid,\n codename=source.journalist_designation,\n docs_selected=docs_selected, confirm_delete=confirm_delete)\n\n\ndef bulk_download(sid, docs_selected):\n source = get_source(sid)\n filenames = []\n for doc in docs_selected:\n filenames.append(store.path(sid, doc))\n try:\n Submission.query.filter(Submission.filename == doc).one().downloaded = True\n except NoResultFound as e:\n app.logger.error(\"Could not mark \" + doc + \" as downloaded: %s\" % (e,))\n db_session.commit()\n zip = store.get_bulk_archive(filenames, zip_directory=source.journalist_filename())\n return send_file(zip.name, mimetype=\"application/zip\",\n attachment_filename=source.journalist_filename() + \".zip\",\n as_attachment=True)\n\n\[email protected]('/flag', 
methods=('POST',))\ndef flag():\n g.source.flagged = True\n db_session.commit()\n return render_template('flag.html', sid=g.sid,\n codename=g.source.journalist_designation)\ndef write_pidfile():\n pid = str(os.getpid())\n with open(config.JOURNALIST_PIDFILE, 'w') as fp:\n fp.write(pid)\n\nif __name__ == \"__main__\":\n write_pidfile()\n # TODO make sure debug=False in production\n app.run(debug=True, host='0.0.0.0', port=8081)\n", "path": "securedrop/journalist.py"}]} | 3,576 | 157 |
gh_patches_debug_30382 | rasdani/github-patches | git_diff | pytorch__audio-3 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need API for saving to file.
Currently we only have a load function, but after training the network it would be great if we could save the generated tensor to a file.
@soumith I think we can reuse a lot of code from this repo https://github.com/MattVitelli/GRUV
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchaudio/__init__.py`
Content:
```
1 import torch
2
3 from cffi import FFI
4 ffi = FFI()
5 from ._ext import th_sox
6
7 def load(filename, out=None):
8 if out is not None:
9 assert torch.is_tensor(out)
10 assert not out.is_cuda
11 else:
12 out = torch.FloatTensor()
13
14 if isinstance(out, torch.FloatTensor):
15 func = th_sox.libthsox_Float_read_audio_file
16 elif isinstance(out, torch.DoubleTensor):
17 func = th_sox.libthsox_Double_read_audio_file
18 elif isinstance(out, torch.ByteTensor):
19 func = th_sox.libthsox_Byte_read_audio_file
20 elif isinstance(out, torch.CharTensor):
21 func = th_sox.libthsox_Char_read_audio_file
22 elif isinstance(out, torch.ShortTensor):
23 func = th_sox.libthsox_Short_read_audio_file
24 elif isinstance(out, torch.IntTensor):
25 func = th_sox.libthsox_Int_read_audio_file
26 elif isinstance(out, torch.LongTensor):
27 func = th_sox.libthsox_Long_read_audio_file
28
29 sample_rate_p = ffi.new('int*')
30 func(bytes(filename), out, sample_rate_p)
31 sample_rate = sample_rate_p[0]
32 return out, sample_rate
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchaudio/__init__.py b/torchaudio/__init__.py
--- a/torchaudio/__init__.py
+++ b/torchaudio/__init__.py
@@ -1,32 +1,40 @@
+import os
+
import torch
from cffi import FFI
+
ffi = FFI()
from ._ext import th_sox
+
+def check_input(src):
+ if not torch.is_tensor(src):
+ raise TypeError('Expected a tensor, got %s' % type(src))
+ if not src.__module__ == 'torch':
+ raise TypeError('Expected a CPU based tensor, got %s' % type(src))
+
+
def load(filename, out=None):
if out is not None:
- assert torch.is_tensor(out)
- assert not out.is_cuda
+ check_input(out)
else:
out = torch.FloatTensor()
-
- if isinstance(out, torch.FloatTensor):
- func = th_sox.libthsox_Float_read_audio_file
- elif isinstance(out, torch.DoubleTensor):
- func = th_sox.libthsox_Double_read_audio_file
- elif isinstance(out, torch.ByteTensor):
- func = th_sox.libthsox_Byte_read_audio_file
- elif isinstance(out, torch.CharTensor):
- func = th_sox.libthsox_Char_read_audio_file
- elif isinstance(out, torch.ShortTensor):
- func = th_sox.libthsox_Short_read_audio_file
- elif isinstance(out, torch.IntTensor):
- func = th_sox.libthsox_Int_read_audio_file
- elif isinstance(out, torch.LongTensor):
- func = th_sox.libthsox_Long_read_audio_file
-
- sample_rate_p = ffi.new('int*')
+ typename = type(out).__name__.replace('Tensor', '')
+ func = getattr(th_sox, 'libthsox_{}_read_audio_file'.format(typename))
+ sample_rate_p = ffi.new('int*')
func(bytes(filename), out, sample_rate_p)
sample_rate = sample_rate_p[0]
return out, sample_rate
+
+
+def save(filepath, src, sample_rate):
+ filename, extension = os.path.splitext(filepath)
+ if type(sample_rate) != int:
+ raise TypeError('Sample rate should be a integer')
+
+ check_input(src)
+ typename = type(src).__name__.replace('Tensor', '')
+ func = getattr(th_sox, 'libthsox_{}_write_audio_file'.format(typename))
+
+ func(bytes(filepath), src, extension[1:], sample_rate)
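For context, a minimal usage sketch of the API this patch adds; it assumes a patched torchaudio build is installed and that the placeholder paths point at real files:

```python
import torchaudio

signal, sample_rate = torchaudio.load("speech.wav")  # FloatTensor plus int sample rate

generated = signal * 0.5  # stand-in for a model's generated output

# New save() call: the file extension ("wav" here) selects the sox output format,
# and sample_rate must be an int or a TypeError is raised.
torchaudio.save("generated.wav", generated, sample_rate)
```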
| {"golden_diff": "diff --git a/torchaudio/__init__.py b/torchaudio/__init__.py\n--- a/torchaudio/__init__.py\n+++ b/torchaudio/__init__.py\n@@ -1,32 +1,40 @@\n+import os\n+\n import torch\n \n from cffi import FFI\n+\n ffi = FFI()\n from ._ext import th_sox\n \n+\n+def check_input(src):\n+ if not torch.is_tensor(src):\n+ raise TypeError('Expected a tensor, got %s' % type(src))\n+ if not src.__module__ == 'torch':\n+ raise TypeError('Expected a CPU based tensor, got %s' % type(src))\n+\n+\n def load(filename, out=None):\n if out is not None:\n- assert torch.is_tensor(out)\n- assert not out.is_cuda\n+ check_input(out)\n else:\n out = torch.FloatTensor()\n-\n- if isinstance(out, torch.FloatTensor):\n- func = th_sox.libthsox_Float_read_audio_file\n- elif isinstance(out, torch.DoubleTensor):\n- func = th_sox.libthsox_Double_read_audio_file\n- elif isinstance(out, torch.ByteTensor):\n- func = th_sox.libthsox_Byte_read_audio_file\n- elif isinstance(out, torch.CharTensor):\n- func = th_sox.libthsox_Char_read_audio_file\n- elif isinstance(out, torch.ShortTensor):\n- func = th_sox.libthsox_Short_read_audio_file\n- elif isinstance(out, torch.IntTensor):\n- func = th_sox.libthsox_Int_read_audio_file\n- elif isinstance(out, torch.LongTensor):\n- func = th_sox.libthsox_Long_read_audio_file\n- \n- sample_rate_p = ffi.new('int*') \n+ typename = type(out).__name__.replace('Tensor', '')\n+ func = getattr(th_sox, 'libthsox_{}_read_audio_file'.format(typename))\n+ sample_rate_p = ffi.new('int*')\n func(bytes(filename), out, sample_rate_p)\n sample_rate = sample_rate_p[0]\n return out, sample_rate\n+\n+\n+def save(filepath, src, sample_rate):\n+ filename, extension = os.path.splitext(filepath)\n+ if type(sample_rate) != int:\n+ raise TypeError('Sample rate should be a integer')\n+\n+ check_input(src)\n+ typename = type(src).__name__.replace('Tensor', '')\n+ func = getattr(th_sox, 'libthsox_{}_write_audio_file'.format(typename))\n+\n+ func(bytes(filepath), src, extension[1:], sample_rate)\n", "issue": "Need API for saving to file.\nCurrently we only have a load function. 
But after training the network it would be great if we can save the generated tensor to a file.\r\n\r\n@soumith I think we can reuse a lot of code from this repo https://github.com/MattVitelli/GRUV\n", "before_files": [{"content": "import torch\n\nfrom cffi import FFI\nffi = FFI()\nfrom ._ext import th_sox\n\ndef load(filename, out=None):\n if out is not None:\n assert torch.is_tensor(out)\n assert not out.is_cuda\n else:\n out = torch.FloatTensor()\n\n if isinstance(out, torch.FloatTensor):\n func = th_sox.libthsox_Float_read_audio_file\n elif isinstance(out, torch.DoubleTensor):\n func = th_sox.libthsox_Double_read_audio_file\n elif isinstance(out, torch.ByteTensor):\n func = th_sox.libthsox_Byte_read_audio_file\n elif isinstance(out, torch.CharTensor):\n func = th_sox.libthsox_Char_read_audio_file\n elif isinstance(out, torch.ShortTensor):\n func = th_sox.libthsox_Short_read_audio_file\n elif isinstance(out, torch.IntTensor):\n func = th_sox.libthsox_Int_read_audio_file\n elif isinstance(out, torch.LongTensor):\n func = th_sox.libthsox_Long_read_audio_file\n \n sample_rate_p = ffi.new('int*') \n func(bytes(filename), out, sample_rate_p)\n sample_rate = sample_rate_p[0]\n return out, sample_rate\n", "path": "torchaudio/__init__.py"}], "after_files": [{"content": "import os\n\nimport torch\n\nfrom cffi import FFI\n\nffi = FFI()\nfrom ._ext import th_sox\n\n\ndef check_input(src):\n if not torch.is_tensor(src):\n raise TypeError('Expected a tensor, got %s' % type(src))\n if not src.__module__ == 'torch':\n raise TypeError('Expected a CPU based tensor, got %s' % type(src))\n\n\ndef load(filename, out=None):\n if out is not None:\n check_input(out)\n else:\n out = torch.FloatTensor()\n typename = type(out).__name__.replace('Tensor', '')\n func = getattr(th_sox, 'libthsox_{}_read_audio_file'.format(typename))\n sample_rate_p = ffi.new('int*')\n func(bytes(filename), out, sample_rate_p)\n sample_rate = sample_rate_p[0]\n return out, sample_rate\n\n\ndef save(filepath, src, sample_rate):\n filename, extension = os.path.splitext(filepath)\n if type(sample_rate) != int:\n raise TypeError('Sample rate should be a integer')\n\n check_input(src)\n typename = type(src).__name__.replace('Tensor', '')\n func = getattr(th_sox, 'libthsox_{}_write_audio_file'.format(typename))\n\n func(bytes(filepath), src, extension[1:], sample_rate)\n", "path": "torchaudio/__init__.py"}]} | 654 | 579 |
gh_patches_debug_10609 | rasdani/github-patches | git_diff | kubeflow__pipelines-6240 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v2 sample test] kaniko build times out / OOM
We've observed significantly more build timeouts and OOMs with Kaniko recently.
I've tried several combinations:
1. 1.3.0-debug with/without --snapshotMode=redo + 4GB memory
2. 1.6.0-debug with/without --snapshotMode=redo + 8GB memory https://github.com/kubeflow/pipelines/pull/6226
but none of them runs stably in a reasonable amount of time.
The memory and timeout issues can be found upstream:
* https://github.com/GoogleContainerTools/kaniko/issues/1680
* https://github.com/GoogleContainerTools/kaniko/issues/1333
but they are both long-standing, and no one actively maintains the repo anymore.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/kfp/compiler/v2_compat.py`
Content:
```
1 # Copyright 2021 The Kubeflow Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Utility functions for enabling v2-compatible pipelines in v1."""
15 import collections
16 import json
17 from typing import Optional
18
19 from kfp import dsl
20 from kfp.compiler import _default_transformers
21 from kfp.pipeline_spec import pipeline_spec_pb2
22 from kfp.v2 import compiler
23
24 from kubernetes import client as k8s_client
25
26 _DEFAULT_LAUNCHER_IMAGE = "gcr.io/ml-pipeline/kfp-launcher:1.6.6"
27
28
29 def update_op(op: dsl.ContainerOp,
30 pipeline_name: dsl.PipelineParam,
31 pipeline_root: dsl.PipelineParam,
32 launcher_image: Optional[str] = None) -> None:
33 """Updates the passed in Op for running in v2-compatible mode.
34
35 Args:
36 op: The Op to update.
37 pipeline_spec: The PipelineSpec for the pipeline under which `op`
38 runs.
39 pipeline_root: The root output directory for pipeline artifacts.
40 launcher_image: An optional launcher image. Useful for tests.
41 """
42 op.is_v2 = True
43 # Inject the launcher binary and overwrite the entrypoint.
44 image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE
45 launcher_container = dsl.UserContainer(name="kfp-launcher",
46 image=image_name,
47 command="/bin/mount_launcher.sh",
48 mirror_volume_mounts=True)
49
50 op.add_init_container(launcher_container)
51 op.add_volume(k8s_client.V1Volume(name='kfp-launcher'))
52 op.add_volume_mount(
53 k8s_client.V1VolumeMount(name='kfp-launcher', mount_path='/kfp-launcher'))
54
55 # op.command + op.args will have the following sections:
56 # 1. args passed to kfp-launcher
57 # 2. a separator "--"
58 # 3. parameters in format "key1=value1", "key2=value2", ...
59 # 4. a separator "--" as end of arguments passed to launcher
60 # 5. (start of op.args) arguments of the original user program command + args
61 #
62 # example:
63 # - command:
64 # - /kfp-launcher/launch
65 # - '--mlmd_server_address'
66 # - $(METADATA_GRPC_SERVICE_HOST)
67 # - '--mlmd_server_port'
68 # - $(METADATA_GRPC_SERVICE_PORT)
69 # - ... # more launcher params
70 # - '--pipeline_task_id'
71 # - $(KFP_POD_NAME)
72 # - '--pipeline_root'
73 # - ''
74 # - '--' # start of parameter values
75 # - first=first
76 # - second=second
77 # - '--' # start of user command and args
78 # args:
79 # - sh
80 # - '-ec'
81 # - |
82 # program_path=$(mktemp)
83 # printf "%s" "$0" > "$program_path"
84 # python3 -u "$program_path" "$@"
85 # - >
86 # import json
87 # import xxx
88 # ...
89 op.command = [
90 "/kfp-launcher/launch",
91 "--mlmd_server_address",
92 "$(METADATA_GRPC_SERVICE_HOST)",
93 "--mlmd_server_port",
94 "$(METADATA_GRPC_SERVICE_PORT)",
95 "--runtime_info_json",
96 "$(KFP_V2_RUNTIME_INFO)",
97 "--container_image",
98 "$(KFP_V2_IMAGE)",
99 "--task_name",
100 op.name,
101 "--pipeline_name",
102 pipeline_name,
103 "--run_id",
104 "$(KFP_RUN_ID)",
105 "--run_resource",
106 "workflows.argoproj.io/$(WORKFLOW_ID)",
107 "--namespace",
108 "$(KFP_NAMESPACE)",
109 "--pod_name",
110 "$(KFP_POD_NAME)",
111 "--pod_uid",
112 "$(KFP_POD_UID)",
113 "--pipeline_root",
114 pipeline_root,
115 "--enable_caching",
116 "$(ENABLE_CACHING)",
117 ]
118
119 # Mount necessary environment variables.
120 op.apply(_default_transformers.add_kfp_pod_env)
121 op.container.add_env_variable(
122 k8s_client.V1EnvVar(name="KFP_V2_IMAGE", value=op.container.image))
123
124 config_map_ref = k8s_client.V1ConfigMapEnvSource(
125 name='metadata-grpc-configmap', optional=True)
126 op.container.add_env_from(
127 k8s_client.V1EnvFromSource(config_map_ref=config_map_ref))
128
129 op.arguments = list(op.container_spec.command) + list(op.container_spec.args)
130
131 runtime_info = {
132 "inputParameters": collections.OrderedDict(),
133 "inputArtifacts": collections.OrderedDict(),
134 "outputParameters": collections.OrderedDict(),
135 "outputArtifacts": collections.OrderedDict(),
136 }
137
138 op.command += ["--"]
139 component_spec = op.component_spec
140 for parameter, spec in sorted(
141 component_spec.input_definitions.parameters.items()):
142 parameter_info = {
143 "type":
144 pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),
145 }
146 op.command += [f"{parameter}={op._parameter_arguments[parameter]}"]
147 runtime_info["inputParameters"][parameter] = parameter_info
148 op.command += ["--"]
149
150 for artifact_name, spec in sorted(
151 component_spec.input_definitions.artifacts.items()):
152 artifact_info = {
153 "metadataPath": op.input_artifact_paths[artifact_name],
154 "schemaTitle": spec.artifact_type.schema_title,
155 "instanceSchema": spec.artifact_type.instance_schema,
156 }
157 runtime_info["inputArtifacts"][artifact_name] = artifact_info
158
159 for parameter, spec in sorted(
160 component_spec.output_definitions.parameters.items()):
161 parameter_info = {
162 "type":
163 pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),
164 "path":
165 op.file_outputs[parameter],
166 }
167 runtime_info["outputParameters"][parameter] = parameter_info
168
169 for artifact_name, spec in sorted(
170 component_spec.output_definitions.artifacts.items()):
171 # TODO: Assert instance_schema.
172 artifact_info = {
173 # Type used to register output artifacts.
174 "schemaTitle": spec.artifact_type.schema_title,
175 "instanceSchema": spec.artifact_type.instance_schema,
176 # File used to write out the registered artifact ID.
177 "metadataPath": op.file_outputs[artifact_name],
178 }
179 runtime_info["outputArtifacts"][artifact_name] = artifact_info
180
181 op.container.add_env_variable(
182 k8s_client.V1EnvVar(name="KFP_V2_RUNTIME_INFO",
183 value=json.dumps(runtime_info)))
184
185 op.pod_annotations['pipelines.kubeflow.org/v2_component'] = "true"
186 op.pod_labels['pipelines.kubeflow.org/v2_component']= "true"
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/kfp/compiler/v2_compat.py b/sdk/python/kfp/compiler/v2_compat.py
--- a/sdk/python/kfp/compiler/v2_compat.py
+++ b/sdk/python/kfp/compiler/v2_compat.py
@@ -44,7 +44,7 @@
image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE
launcher_container = dsl.UserContainer(name="kfp-launcher",
image=image_name,
- command="/bin/mount_launcher.sh",
+ command=["launcher", "--copy", "/kfp-launcher/launch"],
mirror_volume_mounts=True)
op.add_init_container(launcher_container)
| {"golden_diff": "diff --git a/sdk/python/kfp/compiler/v2_compat.py b/sdk/python/kfp/compiler/v2_compat.py\n--- a/sdk/python/kfp/compiler/v2_compat.py\n+++ b/sdk/python/kfp/compiler/v2_compat.py\n@@ -44,7 +44,7 @@\n image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE\n launcher_container = dsl.UserContainer(name=\"kfp-launcher\",\n image=image_name,\n- command=\"/bin/mount_launcher.sh\",\n+ command=[\"launcher\", \"--copy\", \"/kfp-launcher/launch\"],\n mirror_volume_mounts=True)\n \n op.add_init_container(launcher_container)\n", "issue": "[v2 sample test] kaniko build times out / OOM\nWe've observed significantly high build time outs and OOMs with Kaniko recently.\r\nI've tried several combinations:\r\n1. 1.3.0-debug with/without --snapshotMode=redo + 4GB memory\r\n2. 1.6.0-debug with/without --snapshotMode=redo + 8GB memory https://github.com/kubeflow/pipelines/pull/6226\r\n\r\nbut none of them run stably in reasonable amount of time.\r\n\r\nThe memory and timeout issues can be found upstream:\r\n* https://github.com/GoogleContainerTools/kaniko/issues/1680\r\n* https://github.com/GoogleContainerTools/kaniko/issues/1333\r\n\r\nbut they are both long standing, and no one still maintains the repo actively.\n", "before_files": [{"content": "# Copyright 2021 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Utility functions for enabling v2-compatible pipelines in v1.\"\"\"\nimport collections\nimport json\nfrom typing import Optional\n\nfrom kfp import dsl\nfrom kfp.compiler import _default_transformers\nfrom kfp.pipeline_spec import pipeline_spec_pb2\nfrom kfp.v2 import compiler\n\nfrom kubernetes import client as k8s_client\n\n_DEFAULT_LAUNCHER_IMAGE = \"gcr.io/ml-pipeline/kfp-launcher:1.6.6\"\n\n\ndef update_op(op: dsl.ContainerOp,\n pipeline_name: dsl.PipelineParam,\n pipeline_root: dsl.PipelineParam,\n launcher_image: Optional[str] = None) -> None:\n \"\"\"Updates the passed in Op for running in v2-compatible mode.\n\n Args:\n op: The Op to update.\n pipeline_spec: The PipelineSpec for the pipeline under which `op`\n runs.\n pipeline_root: The root output directory for pipeline artifacts.\n launcher_image: An optional launcher image. Useful for tests.\n \"\"\"\n op.is_v2 = True\n # Inject the launcher binary and overwrite the entrypoint.\n image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE\n launcher_container = dsl.UserContainer(name=\"kfp-launcher\",\n image=image_name,\n command=\"/bin/mount_launcher.sh\",\n mirror_volume_mounts=True)\n\n op.add_init_container(launcher_container)\n op.add_volume(k8s_client.V1Volume(name='kfp-launcher'))\n op.add_volume_mount(\n k8s_client.V1VolumeMount(name='kfp-launcher', mount_path='/kfp-launcher'))\n\n # op.command + op.args will have the following sections:\n # 1. args passed to kfp-launcher\n # 2. a separator \"--\"\n # 3. parameters in format \"key1=value1\", \"key2=value2\", ...\n # 4. a separator \"--\" as end of arguments passed to launcher\n # 5. 
(start of op.args) arguments of the original user program command + args\n #\n # example:\n # - command:\n # - /kfp-launcher/launch\n # - '--mlmd_server_address'\n # - $(METADATA_GRPC_SERVICE_HOST)\n # - '--mlmd_server_port'\n # - $(METADATA_GRPC_SERVICE_PORT)\n # - ... # more launcher params\n # - '--pipeline_task_id'\n # - $(KFP_POD_NAME)\n # - '--pipeline_root'\n # - ''\n # - '--' # start of parameter values\n # - first=first\n # - second=second\n # - '--' # start of user command and args\n # args:\n # - sh\n # - '-ec'\n # - |\n # program_path=$(mktemp)\n # printf \"%s\" \"$0\" > \"$program_path\"\n # python3 -u \"$program_path\" \"$@\"\n # - >\n # import json\n # import xxx\n # ...\n op.command = [\n \"/kfp-launcher/launch\",\n \"--mlmd_server_address\",\n \"$(METADATA_GRPC_SERVICE_HOST)\",\n \"--mlmd_server_port\",\n \"$(METADATA_GRPC_SERVICE_PORT)\",\n \"--runtime_info_json\",\n \"$(KFP_V2_RUNTIME_INFO)\",\n \"--container_image\",\n \"$(KFP_V2_IMAGE)\",\n \"--task_name\",\n op.name,\n \"--pipeline_name\",\n pipeline_name,\n \"--run_id\",\n \"$(KFP_RUN_ID)\",\n \"--run_resource\",\n \"workflows.argoproj.io/$(WORKFLOW_ID)\",\n \"--namespace\",\n \"$(KFP_NAMESPACE)\",\n \"--pod_name\",\n \"$(KFP_POD_NAME)\",\n \"--pod_uid\",\n \"$(KFP_POD_UID)\",\n \"--pipeline_root\",\n pipeline_root,\n \"--enable_caching\",\n \"$(ENABLE_CACHING)\",\n ]\n\n # Mount necessary environment variables.\n op.apply(_default_transformers.add_kfp_pod_env)\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name=\"KFP_V2_IMAGE\", value=op.container.image))\n\n config_map_ref = k8s_client.V1ConfigMapEnvSource(\n name='metadata-grpc-configmap', optional=True)\n op.container.add_env_from(\n k8s_client.V1EnvFromSource(config_map_ref=config_map_ref))\n\n op.arguments = list(op.container_spec.command) + list(op.container_spec.args)\n\n runtime_info = {\n \"inputParameters\": collections.OrderedDict(),\n \"inputArtifacts\": collections.OrderedDict(),\n \"outputParameters\": collections.OrderedDict(),\n \"outputArtifacts\": collections.OrderedDict(),\n }\n\n op.command += [\"--\"]\n component_spec = op.component_spec\n for parameter, spec in sorted(\n component_spec.input_definitions.parameters.items()):\n parameter_info = {\n \"type\":\n pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),\n }\n op.command += [f\"{parameter}={op._parameter_arguments[parameter]}\"]\n runtime_info[\"inputParameters\"][parameter] = parameter_info\n op.command += [\"--\"]\n\n for artifact_name, spec in sorted(\n component_spec.input_definitions.artifacts.items()):\n artifact_info = {\n \"metadataPath\": op.input_artifact_paths[artifact_name],\n \"schemaTitle\": spec.artifact_type.schema_title,\n \"instanceSchema\": spec.artifact_type.instance_schema,\n }\n runtime_info[\"inputArtifacts\"][artifact_name] = artifact_info\n\n for parameter, spec in sorted(\n component_spec.output_definitions.parameters.items()):\n parameter_info = {\n \"type\":\n pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),\n \"path\":\n op.file_outputs[parameter],\n }\n runtime_info[\"outputParameters\"][parameter] = parameter_info\n\n for artifact_name, spec in sorted(\n component_spec.output_definitions.artifacts.items()):\n # TODO: Assert instance_schema.\n artifact_info = {\n # Type used to register output artifacts.\n \"schemaTitle\": spec.artifact_type.schema_title,\n \"instanceSchema\": spec.artifact_type.instance_schema,\n # File used to write out the registered artifact ID.\n \"metadataPath\": op.file_outputs[artifact_name],\n 
}\n runtime_info[\"outputArtifacts\"][artifact_name] = artifact_info\n\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name=\"KFP_V2_RUNTIME_INFO\",\n value=json.dumps(runtime_info)))\n\n op.pod_annotations['pipelines.kubeflow.org/v2_component'] = \"true\"\n op.pod_labels['pipelines.kubeflow.org/v2_component']= \"true\"\n", "path": "sdk/python/kfp/compiler/v2_compat.py"}], "after_files": [{"content": "# Copyright 2021 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Utility functions for enabling v2-compatible pipelines in v1.\"\"\"\nimport collections\nimport json\nfrom typing import Optional\n\nfrom kfp import dsl\nfrom kfp.compiler import _default_transformers\nfrom kfp.pipeline_spec import pipeline_spec_pb2\nfrom kfp.v2 import compiler\n\nfrom kubernetes import client as k8s_client\n\n_DEFAULT_LAUNCHER_IMAGE = \"gcr.io/ml-pipeline/kfp-launcher:1.6.6\"\n\n\ndef update_op(op: dsl.ContainerOp,\n pipeline_name: dsl.PipelineParam,\n pipeline_root: dsl.PipelineParam,\n launcher_image: Optional[str] = None) -> None:\n \"\"\"Updates the passed in Op for running in v2-compatible mode.\n\n Args:\n op: The Op to update.\n pipeline_spec: The PipelineSpec for the pipeline under which `op`\n runs.\n pipeline_root: The root output directory for pipeline artifacts.\n launcher_image: An optional launcher image. Useful for tests.\n \"\"\"\n op.is_v2 = True\n # Inject the launcher binary and overwrite the entrypoint.\n image_name = launcher_image or _DEFAULT_LAUNCHER_IMAGE\n launcher_container = dsl.UserContainer(name=\"kfp-launcher\",\n image=image_name,\n command=[\"launcher\", \"--copy\", \"/kfp-launcher/launch\"],\n mirror_volume_mounts=True)\n\n op.add_init_container(launcher_container)\n op.add_volume(k8s_client.V1Volume(name='kfp-launcher'))\n op.add_volume_mount(\n k8s_client.V1VolumeMount(name='kfp-launcher', mount_path='/kfp-launcher'))\n\n # op.command + op.args will have the following sections:\n # 1. args passed to kfp-launcher\n # 2. a separator \"--\"\n # 3. parameters in format \"key1=value1\", \"key2=value2\", ...\n # 4. a separator \"--\" as end of arguments passed to launcher\n # 5. (start of op.args) arguments of the original user program command + args\n #\n # example:\n # - command:\n # - /kfp-launcher/launch\n # - '--mlmd_server_address'\n # - $(METADATA_GRPC_SERVICE_HOST)\n # - '--mlmd_server_port'\n # - $(METADATA_GRPC_SERVICE_PORT)\n # - ... 
# more launcher params\n # - '--pipeline_task_id'\n # - $(KFP_POD_NAME)\n # - '--pipeline_root'\n # - ''\n # - '--' # start of parameter values\n # - first=first\n # - second=second\n # - '--' # start of user command and args\n # args:\n # - sh\n # - '-ec'\n # - |\n # program_path=$(mktemp)\n # printf \"%s\" \"$0\" > \"$program_path\"\n # python3 -u \"$program_path\" \"$@\"\n # - >\n # import json\n # import xxx\n # ...\n op.command = [\n \"/kfp-launcher/launch\",\n \"--mlmd_server_address\",\n \"$(METADATA_GRPC_SERVICE_HOST)\",\n \"--mlmd_server_port\",\n \"$(METADATA_GRPC_SERVICE_PORT)\",\n \"--runtime_info_json\",\n \"$(KFP_V2_RUNTIME_INFO)\",\n \"--container_image\",\n \"$(KFP_V2_IMAGE)\",\n \"--task_name\",\n op.name,\n \"--pipeline_name\",\n pipeline_name,\n \"--run_id\",\n \"$(KFP_RUN_ID)\",\n \"--run_resource\",\n \"workflows.argoproj.io/$(WORKFLOW_ID)\",\n \"--namespace\",\n \"$(KFP_NAMESPACE)\",\n \"--pod_name\",\n \"$(KFP_POD_NAME)\",\n \"--pod_uid\",\n \"$(KFP_POD_UID)\",\n \"--pipeline_root\",\n pipeline_root,\n \"--enable_caching\",\n \"$(ENABLE_CACHING)\",\n ]\n\n # Mount necessary environment variables.\n op.apply(_default_transformers.add_kfp_pod_env)\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name=\"KFP_V2_IMAGE\", value=op.container.image))\n\n config_map_ref = k8s_client.V1ConfigMapEnvSource(\n name='metadata-grpc-configmap', optional=True)\n op.container.add_env_from(\n k8s_client.V1EnvFromSource(config_map_ref=config_map_ref))\n\n op.arguments = list(op.container_spec.command) + list(op.container_spec.args)\n\n runtime_info = {\n \"inputParameters\": collections.OrderedDict(),\n \"inputArtifacts\": collections.OrderedDict(),\n \"outputParameters\": collections.OrderedDict(),\n \"outputArtifacts\": collections.OrderedDict(),\n }\n\n op.command += [\"--\"]\n component_spec = op.component_spec\n for parameter, spec in sorted(\n component_spec.input_definitions.parameters.items()):\n parameter_info = {\n \"type\":\n pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),\n }\n op.command += [f\"{parameter}={op._parameter_arguments[parameter]}\"]\n runtime_info[\"inputParameters\"][parameter] = parameter_info\n op.command += [\"--\"]\n\n for artifact_name, spec in sorted(\n component_spec.input_definitions.artifacts.items()):\n artifact_info = {\n \"metadataPath\": op.input_artifact_paths[artifact_name],\n \"schemaTitle\": spec.artifact_type.schema_title,\n \"instanceSchema\": spec.artifact_type.instance_schema,\n }\n runtime_info[\"inputArtifacts\"][artifact_name] = artifact_info\n\n for parameter, spec in sorted(\n component_spec.output_definitions.parameters.items()):\n parameter_info = {\n \"type\":\n pipeline_spec_pb2.PrimitiveType.PrimitiveTypeEnum.Name(spec.type),\n \"path\":\n op.file_outputs[parameter],\n }\n runtime_info[\"outputParameters\"][parameter] = parameter_info\n\n for artifact_name, spec in sorted(\n component_spec.output_definitions.artifacts.items()):\n # TODO: Assert instance_schema.\n artifact_info = {\n # Type used to register output artifacts.\n \"schemaTitle\": spec.artifact_type.schema_title,\n \"instanceSchema\": spec.artifact_type.instance_schema,\n # File used to write out the registered artifact ID.\n \"metadataPath\": op.file_outputs[artifact_name],\n }\n runtime_info[\"outputArtifacts\"][artifact_name] = artifact_info\n\n op.container.add_env_variable(\n k8s_client.V1EnvVar(name=\"KFP_V2_RUNTIME_INFO\",\n value=json.dumps(runtime_info)))\n\n op.pod_annotations['pipelines.kubeflow.org/v2_component'] = \"true\"\n 
op.pod_labels['pipelines.kubeflow.org/v2_component']= \"true\"\n", "path": "sdk/python/kfp/compiler/v2_compat.py"}]} | 2,458 | 139 |
gh_patches_debug_14248 | rasdani/github-patches | git_diff | ivy-llc__ivy-21042 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gumble_softmax
add mindspore gumble_softmax
- [gumble_softmax] #21042
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/mindspore/ops/function/nn_func.py`
Content:
```
1 """Includes Mindspore Frontend functions listed in the TODO list
2 https://github.com/unifyai/ivy/issues/14951."""
3
4 # local
5 import ivy
6 from ivy.func_wrapper import with_supported_dtypes
7 from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
8
9
10 def _broadcast_pooling_helper(x, pool_dims: str = "2d", name: str = "padding"):
11 dims = {"1d": 1, "2d": 2, "3d": 3}
12
13 if isinstance(x, int):
14 return tuple([x for _ in range(dims[pool_dims])])
15
16 if len(x) == 1:
17 return tuple([x[0] for _ in range(dims[pool_dims])])
18 elif len(x) == dims[pool_dims]:
19 return tuple(x)
20 elif len(x) != dims[pool_dims]:
21 raise ValueError(
22 f"`{name}` must either be a single int, "
23 f"or a tuple of {dims[pool_dims]} ints. "
24 )
25
26
27 @with_supported_dtypes(
28 {
29 "2.0.0 and below": (
30 "int8",
31 "int16",
32 "int32",
33 "int64",
34 "float16",
35 "float32",
36 "float64",
37 )
38 },
39 "mindspore",
40 )
41 @to_ivy_arrays_and_back
42 def dropout2d(input, p=0.5, training=True):
43 return ivy.dropout2d(input, p, training=training, data_format="NCHW")
44
45
46 @with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
47 @to_ivy_arrays_and_back
48 def selu(input_x):
49 return ivy.selu(input_x)
50
51
52 @with_supported_dtypes({"2.0 and below": ("float16", "float32")}, "mindspore")
53 @to_ivy_arrays_and_back
54 def softsign(x):
55 return ivy.divide(x, ivy.add(1, ivy.abs(x)))
56
57
58 @with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
59 @to_ivy_arrays_and_back
60 def log_softmax(input, axis=-1):
61 return ivy.log_softmax(input)
62
63
64 @with_supported_dtypes({"2.0 and below": ("float16", "float32")}, "mindspore")
65 @to_ivy_arrays_and_back
66 def kl_div(logits, labels, reduction="mean"):
67 """
68 Computes the Kullback-Leibler (KL) Divergence between the logits and the labels.
69
70 Parameters:
71 logits (numpy array): The input logits array.
72 labels (numpy array): The label array which has the same shape as logits.
73 reduction (str): Specifies the reduction to be applied to the output.
74 Its value must be one of 'none', 'mean', 'batchmean',
75 or 'sum'. Default: 'mean'.
76
77 Returns:
78 float or numpy array: If reduction is 'none', then output is
79 a numpy array and has the same shape as logits.
80 Otherwise, it is a scalar (float).
81 """
82 assert ivy.shape(logits) == ivy.shape(
83 labels
84 ), "logits and labels must have the same shape."
85 L = labels * (ivy.log(labels) - logits)
86 if reduction == "none":
87 return L
88 elif reduction == "mean":
89 return ivy.mean(L)
90 elif reduction == "batchmean":
91 return ivy.mean(L, axis=0)
92 elif reduction == "sum":
93 return ivy.sum(L)
94 else:
95 raise ValueError(
96 "Invalid reduction mode. Supported values are 'none', 'mean', 'batchmean',"
97 " or 'sum'."
98 )
99
100
101 @with_supported_dtypes(
102 {
103 "2.0.0 and below": (
104 "int8",
105 "int16",
106 "int32",
107 "int64",
108 "float16",
109 "float32",
110 "float64",
111 )
112 },
113 "mindspore",
114 )
115 @to_ivy_arrays_and_back
116 def dropout3d(input, p=0.5, training=True):
117 return ivy.dropout3d(input, p, training=training, data_format="NCDHW")
118
119
120 @with_supported_dtypes(
121 {
122 "2.0.0 and below": (
123 "int8",
124 "int16",
125 "int32",
126 "int64",
127 "float16",
128 "float32",
129 "float64",
130 )
131 },
132 "mindspore",
133 )
134 @to_ivy_arrays_and_back
135 def interpolate(
136 input,
137 size=None,
138 scale_factor=None,
139 mode="nearest",
140 align_corners=False,
141 recompute_scale_factor=False,
142 ):
143 return ivy.interpolate(
144 input,
145 size=size,
146 scale_factor=scale_factor,
147 mode=mode,
148 align_corners=align_corners,
149 recompute_scale_factor=recompute_scale_factor,
150 )
151
152
153 @with_supported_dtypes(
154 {
155 "2.0 and below": (
156 "int8",
157 "int16",
158 "int32",
159 "int64",
160 "float16",
161 "float32",
162 "float64",
163 )
164 },
165 "mindspore",
166 )
167 @to_ivy_arrays_and_back
168 def pad(input, pad_width, mode="constant", constant_values=0):
169 return ivy.pad(input, pad_width, mode=mode, constant_values=constant_values)
170
171
172 @with_supported_dtypes(
173 {"2.0.0 and below": ("float16", "float32", "float64")}, "mindspore"
174 )
175 @to_ivy_arrays_and_back
176 def adaptive_avg_pool2d(input, output_size):
177 return ivy.adaptive_avg_pool2d(input, output_size)
178
179
180 @to_ivy_arrays_and_back
181 def avg_pool2d(
182 input,
183 kernel_size,
184 stride=None,
185 padding=0,
186 pad_mode=False,
187 count_include_pad=True,
188 divisor_override=None,
189 ):
190 # Figure out input dims N
191 input_rank = input.ndim
192
193 if input_rank == 4:
194 # NCHW
195 data_format = "NCHW"
196
197 kernel_size = _broadcast_pooling_helper(kernel_size, "2d", name="kernel_size")
198 stride = _broadcast_pooling_helper(stride, "2d", name="stride")
199 padding = _broadcast_pooling_helper(padding, "2d", name="padding")
200 kernel_pads = list(zip(kernel_size, padding))
201
202 # Padding should be less than or equal to half of kernel size
203 if not all([pad <= kernel / 2 for kernel, pad in kernel_pads]):
204 raise ValueError(
205 "pad should be smaller than or equal to half of kernel size, "
206 f"but got padding={padding}, kernel_size={kernel_size}. "
207 )
208
209 # Figure out padding string
210 if all([pad == ivy.ceil((kernel - 1) / 2) for kernel, pad in kernel_pads]):
211 padding_str = "SAME"
212 else:
213 padding_str = "VALID"
214
215 return ivy.avg_pool2d(
216 input,
217 kernel_size,
218 stride,
219 padding_str,
220 data_format=data_format,
221 pad_mode=pad_mode,
222 count_include_pad=count_include_pad,
223 divisor_override=divisor_override,
224 )
225
226
227 @to_ivy_arrays_and_back
228 def flatten(input, order="C", *, start_dim=1, end_dim=-1):
229 return ivy.flatten(input, order=order, start_dim=start_dim, end_dim=end_dim)
230
231
232 @with_supported_dtypes(
233 {"2.0.0 and below": ("float16", "float32", "float64")},
234 "mindspore",
235 )
236 @to_ivy_arrays_and_back
237 def fast_gelu(input_x):
238 return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(
239 0.851 * (input_x - ivy.abs(input_x))
240 )
241
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/mindspore/ops/function/nn_func.py b/ivy/functional/frontends/mindspore/ops/function/nn_func.py
--- a/ivy/functional/frontends/mindspore/ops/function/nn_func.py
+++ b/ivy/functional/frontends/mindspore/ops/function/nn_func.py
@@ -238,3 +238,23 @@
return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(
0.851 * (input_x - ivy.abs(input_x))
)
+
+
+@with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
+@to_ivy_arrays_and_back
+def gumbel_softmax(logits, tau=1, hard=False, dim=-1):
+ gumbels = -ivy.empty_like(logits).exponential().log()
+ gumbels = (logits + gumbels) / tau
+ y_soft = ivy.softmax(gumbels, axis=dim)
+
+ if hard:
+ indices = y_soft.max(axis=dim, keepdims=True)[1]
+ y_hard = ivy.zeros_like(logits)
+ updates = ivy.ones_like(indices)
+ y_hard = ivy.scatter_nd(indices, updates, reduction="replace", out=y_hard)
+
+ ret = y_hard - y_soft.stop_gradient(preserve_type=True) + y_soft
+ else:
+ ret = y_soft
+
+ return ret
| {"golden_diff": "diff --git a/ivy/functional/frontends/mindspore/ops/function/nn_func.py b/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n--- a/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n+++ b/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n@@ -238,3 +238,23 @@\n return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(\n 0.851 * (input_x - ivy.abs(input_x))\n )\n+\n+\n+@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n+@to_ivy_arrays_and_back\n+def gumbel_softmax(logits, tau=1, hard=False, dim=-1):\n+ gumbels = -ivy.empty_like(logits).exponential().log()\n+ gumbels = (logits + gumbels) / tau\n+ y_soft = ivy.softmax(gumbels, axis=dim)\n+\n+ if hard:\n+ indices = y_soft.max(axis=dim, keepdims=True)[1]\n+ y_hard = ivy.zeros_like(logits)\n+ updates = ivy.ones_like(indices)\n+ y_hard = ivy.scatter_nd(indices, updates, reduction=\"replace\", out=y_hard)\n+\n+ ret = y_hard - y_soft.stop_gradient(preserve_type=True) + y_soft\n+ else:\n+ ret = y_soft\n+\n+ return ret\n", "issue": "gumble_softmax\nadd mindspore gumble_softmax\r\n\r\n- [gumble_softmax] #21042\n", "before_files": [{"content": "\"\"\"Includes Mindspore Frontend functions listed in the TODO list\nhttps://github.com/unifyai/ivy/issues/14951.\"\"\"\n\n# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\ndef _broadcast_pooling_helper(x, pool_dims: str = \"2d\", name: str = \"padding\"):\n dims = {\"1d\": 1, \"2d\": 2, \"3d\": 3}\n\n if isinstance(x, int):\n return tuple([x for _ in range(dims[pool_dims])])\n\n if len(x) == 1:\n return tuple([x[0] for _ in range(dims[pool_dims])])\n elif len(x) == dims[pool_dims]:\n return tuple(x)\n elif len(x) != dims[pool_dims]:\n raise ValueError(\n f\"`{name}` must either be a single int, \"\n f\"or a tuple of {dims[pool_dims]} ints. \"\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef dropout2d(input, p=0.5, training=True):\n return ivy.dropout2d(input, p, training=training, data_format=\"NCHW\")\n\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef selu(input_x):\n return ivy.selu(input_x)\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef softsign(x):\n return ivy.divide(x, ivy.add(1, ivy.abs(x)))\n\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef log_softmax(input, axis=-1):\n return ivy.log_softmax(input)\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef kl_div(logits, labels, reduction=\"mean\"):\n \"\"\"\n Computes the Kullback-Leibler (KL) Divergence between the logits and the labels.\n\n Parameters:\n logits (numpy array): The input logits array.\n labels (numpy array): The label array which has the same shape as logits.\n reduction (str): Specifies the reduction to be applied to the output.\n Its value must be one of 'none', 'mean', 'batchmean',\n or 'sum'. 
Default: 'mean'.\n\n Returns:\n float or numpy array: If reduction is 'none', then output is\n a numpy array and has the same shape as logits.\n Otherwise, it is a scalar (float).\n \"\"\"\n assert ivy.shape(logits) == ivy.shape(\n labels\n ), \"logits and labels must have the same shape.\"\n L = labels * (ivy.log(labels) - logits)\n if reduction == \"none\":\n return L\n elif reduction == \"mean\":\n return ivy.mean(L)\n elif reduction == \"batchmean\":\n return ivy.mean(L, axis=0)\n elif reduction == \"sum\":\n return ivy.sum(L)\n else:\n raise ValueError(\n \"Invalid reduction mode. Supported values are 'none', 'mean', 'batchmean',\"\n \" or 'sum'.\"\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef dropout3d(input, p=0.5, training=True):\n return ivy.dropout3d(input, p, training=training, data_format=\"NCDHW\")\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef interpolate(\n input,\n size=None,\n scale_factor=None,\n mode=\"nearest\",\n align_corners=False,\n recompute_scale_factor=False,\n):\n return ivy.interpolate(\n input,\n size=size,\n scale_factor=scale_factor,\n mode=mode,\n align_corners=align_corners,\n recompute_scale_factor=recompute_scale_factor,\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef pad(input, pad_width, mode=\"constant\", constant_values=0):\n return ivy.pad(input, pad_width, mode=mode, constant_values=constant_values)\n\n\n@with_supported_dtypes(\n {\"2.0.0 and below\": (\"float16\", \"float32\", \"float64\")}, \"mindspore\"\n)\n@to_ivy_arrays_and_back\ndef adaptive_avg_pool2d(input, output_size):\n return ivy.adaptive_avg_pool2d(input, output_size)\n\n\n@to_ivy_arrays_and_back\ndef avg_pool2d(\n input,\n kernel_size,\n stride=None,\n padding=0,\n pad_mode=False,\n count_include_pad=True,\n divisor_override=None,\n):\n # Figure out input dims N\n input_rank = input.ndim\n\n if input_rank == 4:\n # NCHW\n data_format = \"NCHW\"\n\n kernel_size = _broadcast_pooling_helper(kernel_size, \"2d\", name=\"kernel_size\")\n stride = _broadcast_pooling_helper(stride, \"2d\", name=\"stride\")\n padding = _broadcast_pooling_helper(padding, \"2d\", name=\"padding\")\n kernel_pads = list(zip(kernel_size, padding))\n\n # Padding should be less than or equal to half of kernel size\n if not all([pad <= kernel / 2 for kernel, pad in kernel_pads]):\n raise ValueError(\n \"pad should be smaller than or equal to half of kernel size, \"\n f\"but got padding={padding}, kernel_size={kernel_size}. 
\"\n )\n\n # Figure out padding string\n if all([pad == ivy.ceil((kernel - 1) / 2) for kernel, pad in kernel_pads]):\n padding_str = \"SAME\"\n else:\n padding_str = \"VALID\"\n\n return ivy.avg_pool2d(\n input,\n kernel_size,\n stride,\n padding_str,\n data_format=data_format,\n pad_mode=pad_mode,\n count_include_pad=count_include_pad,\n divisor_override=divisor_override,\n )\n\n\n@to_ivy_arrays_and_back\ndef flatten(input, order=\"C\", *, start_dim=1, end_dim=-1):\n return ivy.flatten(input, order=order, start_dim=start_dim, end_dim=end_dim)\n\n\n@with_supported_dtypes(\n {\"2.0.0 and below\": (\"float16\", \"float32\", \"float64\")},\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef fast_gelu(input_x):\n return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(\n 0.851 * (input_x - ivy.abs(input_x))\n )\n", "path": "ivy/functional/frontends/mindspore/ops/function/nn_func.py"}], "after_files": [{"content": "\"\"\"Includes Mindspore Frontend functions listed in the TODO list\nhttps://github.com/unifyai/ivy/issues/14951.\"\"\"\n\n# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\ndef _broadcast_pooling_helper(x, pool_dims: str = \"2d\", name: str = \"padding\"):\n dims = {\"1d\": 1, \"2d\": 2, \"3d\": 3}\n\n if isinstance(x, int):\n return tuple([x for _ in range(dims[pool_dims])])\n\n if len(x) == 1:\n return tuple([x[0] for _ in range(dims[pool_dims])])\n elif len(x) == dims[pool_dims]:\n return tuple(x)\n elif len(x) != dims[pool_dims]:\n raise ValueError(\n f\"`{name}` must either be a single int, \"\n f\"or a tuple of {dims[pool_dims]} ints. \"\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef dropout2d(input, p=0.5, training=True):\n return ivy.dropout2d(input, p, training=training, data_format=\"NCHW\")\n\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef selu(input_x):\n return ivy.selu(input_x)\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef softsign(x):\n return ivy.divide(x, ivy.add(1, ivy.abs(x)))\n\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef log_softmax(input, axis=-1):\n return ivy.log_softmax(input)\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef kl_div(logits, labels, reduction=\"mean\"):\n \"\"\"\n Computes the Kullback-Leibler (KL) Divergence between the logits and the labels.\n\n Parameters:\n logits (numpy array): The input logits array.\n labels (numpy array): The label array which has the same shape as logits.\n reduction (str): Specifies the reduction to be applied to the output.\n Its value must be one of 'none', 'mean', 'batchmean',\n or 'sum'. 
Default: 'mean'.\n\n Returns:\n float or numpy array: If reduction is 'none', then output is\n a numpy array and has the same shape as logits.\n Otherwise, it is a scalar (float).\n \"\"\"\n assert ivy.shape(logits) == ivy.shape(\n labels\n ), \"logits and labels must have the same shape.\"\n L = labels * (ivy.log(labels) - logits)\n if reduction == \"none\":\n return L\n elif reduction == \"mean\":\n return ivy.mean(L)\n elif reduction == \"batchmean\":\n return ivy.mean(L, axis=0)\n elif reduction == \"sum\":\n return ivy.sum(L)\n else:\n raise ValueError(\n \"Invalid reduction mode. Supported values are 'none', 'mean', 'batchmean',\"\n \" or 'sum'.\"\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef dropout3d(input, p=0.5, training=True):\n return ivy.dropout3d(input, p, training=training, data_format=\"NCDHW\")\n\n\n@with_supported_dtypes(\n {\n \"2.0.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef interpolate(\n input,\n size=None,\n scale_factor=None,\n mode=\"nearest\",\n align_corners=False,\n recompute_scale_factor=False,\n):\n return ivy.interpolate(\n input,\n size=size,\n scale_factor=scale_factor,\n mode=mode,\n align_corners=align_corners,\n recompute_scale_factor=recompute_scale_factor,\n )\n\n\n@with_supported_dtypes(\n {\n \"2.0 and below\": (\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float16\",\n \"float32\",\n \"float64\",\n )\n },\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef pad(input, pad_width, mode=\"constant\", constant_values=0):\n return ivy.pad(input, pad_width, mode=mode, constant_values=constant_values)\n\n\n@with_supported_dtypes(\n {\"2.0.0 and below\": (\"float16\", \"float32\", \"float64\")}, \"mindspore\"\n)\n@to_ivy_arrays_and_back\ndef adaptive_avg_pool2d(input, output_size):\n return ivy.adaptive_avg_pool2d(input, output_size)\n\n\n@to_ivy_arrays_and_back\ndef avg_pool2d(\n input,\n kernel_size,\n stride=None,\n padding=0,\n pad_mode=False,\n count_include_pad=True,\n divisor_override=None,\n):\n # Figure out input dims N\n input_rank = input.ndim\n\n if input_rank == 4:\n # NCHW\n data_format = \"NCHW\"\n\n kernel_size = _broadcast_pooling_helper(kernel_size, \"2d\", name=\"kernel_size\")\n stride = _broadcast_pooling_helper(stride, \"2d\", name=\"stride\")\n padding = _broadcast_pooling_helper(padding, \"2d\", name=\"padding\")\n kernel_pads = list(zip(kernel_size, padding))\n\n # Padding should be less than or equal to half of kernel size\n if not all([pad <= kernel / 2 for kernel, pad in kernel_pads]):\n raise ValueError(\n \"pad should be smaller than or equal to half of kernel size, \"\n f\"but got padding={padding}, kernel_size={kernel_size}. 
\"\n )\n\n # Figure out padding string\n if all([pad == ivy.ceil((kernel - 1) / 2) for kernel, pad in kernel_pads]):\n padding_str = \"SAME\"\n else:\n padding_str = \"VALID\"\n\n return ivy.avg_pool2d(\n input,\n kernel_size,\n stride,\n padding_str,\n data_format=data_format,\n pad_mode=pad_mode,\n count_include_pad=count_include_pad,\n divisor_override=divisor_override,\n )\n\n\n@to_ivy_arrays_and_back\ndef flatten(input, order=\"C\", *, start_dim=1, end_dim=-1):\n return ivy.flatten(input, order=order, start_dim=start_dim, end_dim=end_dim)\n\n\n@with_supported_dtypes(\n {\"2.0.0 and below\": (\"float16\", \"float32\", \"float64\")},\n \"mindspore\",\n)\n@to_ivy_arrays_and_back\ndef fast_gelu(input_x):\n return (input_x / (1 + ivy.exp(-1.702 * ivy.abs(input_x)))) * ivy.exp(\n 0.851 * (input_x - ivy.abs(input_x))\n )\n\n\n@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef gumbel_softmax(logits, tau=1, hard=False, dim=-1):\n gumbels = -ivy.empty_like(logits).exponential().log()\n gumbels = (logits + gumbels) / tau\n y_soft = ivy.softmax(gumbels, axis=dim)\n\n if hard:\n indices = y_soft.max(axis=dim, keepdims=True)[1]\n y_hard = ivy.zeros_like(logits)\n updates = ivy.ones_like(indices)\n y_hard = ivy.scatter_nd(indices, updates, reduction=\"replace\", out=y_hard)\n\n ret = y_hard - y_soft.stop_gradient(preserve_type=True) + y_soft\n else:\n ret = y_soft\n\n return ret\n", "path": "ivy/functional/frontends/mindspore/ops/function/nn_func.py"}]} | 2,722 | 359 |
gh_patches_debug_4602 | rasdani/github-patches | git_diff | inventree__InvenTree-6300 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Plugin with git, but no commits causes all plugins to fail to load
### Please verify that this bug has NOT been raised before.
- [X] I checked and didn't find a similar issue
### Describe the bug*
Creating a plugin in `InvenTree/plugins/your_plugin/` with the required files and then initializing a git repo in that directory will cause all plugins to fail on the next reload of the server
### Steps to Reproduce
1. `mkdir InvenTree/plugins/someplugin && touch InvenTree/plugins/someplugin/__init__.py && cd InvenTree/plugins/someplugin && git init`
2. Start up the InvenTree instance
3. All plugins will be listed with "?" icons, and even installed plugins will be unavailable
### Expected behaviour
Ignore that no commits have been issued
### Deployment Method
- [X] Docker
- [X] Bare metal
### Version Information
# Version Information:
InvenTree-Version: 0.14.0 dev
Django Version: 3.2.23
Commit Hash: 5d018e8
Commit Date: 2024-01-15
Commit Branch: details-panel
Database: sqlite3
Debug-Mode: True
Deployed using Docker: True
Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with
Installer: DOC
Active plugins: []
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
```shell
a
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/plugin/helpers.py`
Content:
```
1 """Helpers for plugin app."""
2
3 import inspect
4 import logging
5 import os
6 import pathlib
7 import pkgutil
8 import sysconfig
9 import traceback
10 from importlib.metadata import entry_points
11
12 from django import template
13 from django.conf import settings
14 from django.core.exceptions import AppRegistryNotReady
15 from django.db.utils import IntegrityError
16
17 logger = logging.getLogger('inventree')
18
19
20 # region logging / errors
21 class IntegrationPluginError(Exception):
22 """Error that encapsulates another error and adds the path / reference of the raising plugin."""
23
24 def __init__(self, path, message):
25 """Init a plugin error.
26
27 Args:
28 path: Path on which the error occurred - used to find out which plugin it was
29 message: The original error message
30 """
31 self.path = path
32 self.message = message
33
34 def __str__(self):
35 """Returns the error message."""
36 return self.message # pragma: no cover
37
38
39 class MixinImplementationError(ValueError):
40 """Error if mixin was implemented wrong in plugin.
41
42 Mostly raised if constant is missing
43 """
44
45 pass
46
47
48 class MixinNotImplementedError(NotImplementedError):
49 """Error if necessary mixin function was not overwritten."""
50
51 pass
52
53
54 def log_error(error, reference: str = 'general'):
55 """Log an plugin error."""
56 from plugin import registry
57
58 # make sure the registry is set up
59 if reference not in registry.errors:
60 registry.errors[reference] = []
61
62 # add error to stack
63 registry.errors[reference].append(error)
64
65
66 def handle_error(error, do_raise: bool = True, do_log: bool = True, log_name: str = ''):
67 """Handles an error and casts it as an IntegrationPluginError."""
68 package_path = traceback.extract_tb(error.__traceback__)[-1].filename
69 install_path = sysconfig.get_paths()['purelib']
70
71 try:
72 package_name = pathlib.Path(package_path).relative_to(install_path).parts[0]
73 except ValueError:
74 # is file - loaded -> form a name for that
75 try:
76 path_obj = pathlib.Path(package_path).relative_to(settings.BASE_DIR)
77 path_parts = [*path_obj.parts]
78 path_parts[-1] = path_parts[-1].replace(
79 path_obj.suffix, ''
80 ) # remove suffix
81
82 # remove path prefixes
83 if path_parts[0] == 'plugin':
84 path_parts.remove('plugin')
85 path_parts.pop(0)
86 else:
87 path_parts.remove('plugins') # pragma: no cover
88
89 package_name = '.'.join(path_parts)
90 except Exception:
91 package_name = package_path
92
93 if do_log:
94 log_kwargs = {}
95 if log_name:
96 log_kwargs['reference'] = log_name
97 log_error({package_name: str(error)}, **log_kwargs)
98
99 if do_raise:
100 # do a straight raise if we are playing with environment variables at execution time, ignore the broken sample
101 if (
102 settings.TESTING_ENV
103 and package_name != 'integration.broken_sample'
104 and isinstance(error, IntegrityError)
105 ):
106 raise error # pragma: no cover
107
108 raise IntegrationPluginError(package_name, str(error))
109
110
111 def get_entrypoints():
112 """Returns list for entrypoints for InvenTree plugins."""
113 return entry_points().get('inventree_plugins', [])
114
115
116 # endregion
117
118
119 # region git-helpers
120 def get_git_log(path):
121 """Get dict with info of the last commit to file named in path."""
122 import datetime
123
124 from dulwich.repo import NotGitRepository, Repo
125
126 from InvenTree.ready import isInTestMode
127
128 output = None
129 path = os.path.abspath(path)
130
131 if os.path.exists(path) and os.path.isfile(path):
132 path = os.path.dirname(path)
133
134 # only do this if we are not in test mode
135 if not isInTestMode(): # pragma: no cover
136 try:
137 repo = Repo(path)
138 head = repo.head()
139 commit = repo[head]
140
141 output = [
142 head.decode(),
143 commit.author.decode().split('<')[0][:-1],
144 commit.author.decode().split('<')[1][:-1],
145 datetime.datetime.fromtimestamp(commit.author_time).isoformat(),
146 commit.message.decode().split('\n')[0],
147 ]
148 except NotGitRepository:
149 pass
150
151 if not output:
152 output = 5 * [''] # pragma: no cover
153
154 return {
155 'hash': output[0],
156 'author': output[1],
157 'mail': output[2],
158 'date': output[3],
159 'message': output[4],
160 }
161
162
163 # endregion
164
165
166 # region plugin finders
167 def get_modules(pkg, path=None):
168 """Get all modules in a package."""
169 context = {}
170
171 if path is None:
172 path = pkg.__path__
173 elif type(path) is not list:
174 path = [path]
175
176 for loader, name, _ in pkgutil.walk_packages(path):
177 try:
178 module = loader.find_module(name).load_module(name)
179 pkg_names = getattr(module, '__all__', None)
180 for k, v in vars(module).items():
181 if not k.startswith('_') and (pkg_names is None or k in pkg_names):
182 context[k] = v
183 context[name] = module
184 except AppRegistryNotReady: # pragma: no cover
185 pass
186 except Exception as error:
187 # this 'protects' against malformed plugin modules by more or less silently failing
188
189 # log to stack
190 log_error({name: str(error)}, 'discovery')
191
192 return [v for k, v in context.items()]
193
194
195 def get_classes(module):
196 """Get all classes in a given module."""
197 return inspect.getmembers(module, inspect.isclass)
198
199
200 def get_plugins(pkg, baseclass, path=None):
201 """Return a list of all modules under a given package.
202
203 - Modules must be a subclass of the provided 'baseclass'
204 - Modules must have a non-empty NAME parameter
205 """
206 plugins = []
207
208 modules = get_modules(pkg, path=path)
209
210 # Iterate through each module in the package
211 for mod in modules:
212 # Iterate through each class in the module
213 for item in get_classes(mod):
214 plugin = item[1]
215 if issubclass(plugin, baseclass) and plugin.NAME:
216 plugins.append(plugin)
217
218 return plugins
219
220
221 # endregion
222
223
224 # region templates
225 def render_template(plugin, template_file, context=None):
226 """Locate and render a template file, available in the global template context."""
227 try:
228 tmp = template.loader.get_template(template_file)
229 except template.TemplateDoesNotExist:
230 logger.exception(
231 "Plugin %s could not locate template '%s'", plugin.slug, template_file
232 )
233
234 return f"""
235 <div class='alert alert-block alert-danger'>
236 Template file <em>{template_file}</em> does not exist.
237 </div>
238 """
239
240 # Render with the provided context
241 html = tmp.render(context)
242
243 return html
244
245
246 def render_text(text, context=None):
247 """Locate a raw string with provided context."""
248 ctx = template.Context(context)
249
250 return template.Template(text).render(ctx)
251
252
253 # endregion
254
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/InvenTree/plugin/helpers.py b/InvenTree/plugin/helpers.py
--- a/InvenTree/plugin/helpers.py
+++ b/InvenTree/plugin/helpers.py
@@ -145,6 +145,8 @@
datetime.datetime.fromtimestamp(commit.author_time).isoformat(),
commit.message.decode().split('\n')[0],
]
+ except KeyError as err:
+ logger.debug('No HEAD tag found in git repo at path %s', path)
except NotGitRepository:
pass
| {"golden_diff": "diff --git a/InvenTree/plugin/helpers.py b/InvenTree/plugin/helpers.py\n--- a/InvenTree/plugin/helpers.py\n+++ b/InvenTree/plugin/helpers.py\n@@ -145,6 +145,8 @@\n datetime.datetime.fromtimestamp(commit.author_time).isoformat(),\n commit.message.decode().split('\\n')[0],\n ]\n+ except KeyError as err:\n+ logger.debug('No HEAD tag found in git repo at path %s', path)\n except NotGitRepository:\n pass\n", "issue": "Plugin with git, but no commits causes all plugins to fail to load\n### Please verify that this bug has NOT been raised before.\n\n- [X] I checked and didn't find a similar issue\n\n### Describe the bug*\n\nCreating a plugin in `InvenTree/plugins/your_plugin/` with the proper files required, and then initiating a git repo in this directory will cause all plugins to fail on the next reload of the server\n\n### Steps to Reproduce\n\n1. `mkdir InvenTree/plugins/someplugin && touch InvenTree/plugins/someplugin/__init__.py && cd InvenTree/plugins/someplugin && git init`\r\n2. 2. Start up the InvenTree instance\r\n3. All plugins will be listed with \"?\" icons, and even installed plugins will be unavailable\n\n### Expected behaviour\n\nIgnore that no commits have been issues\n\n### Deployment Method\n\n- [X] Docker\n- [X] Bare metal\n\n### Version Information\n\n# Version Information:\r\nInvenTree-Version: 0.14.0 dev\r\nDjango Version: 3.2.23\r\nCommit Hash: 5d018e8\r\nCommit Date: 2024-01-15\r\nCommit Branch: details-panel\r\nDatabase: sqlite3\r\nDebug-Mode: True\r\nDeployed using Docker: True\r\nPlatform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with\r\nInstaller: DOC\r\n\r\nActive plugins: []\r\n\n\n### Please verify if you can reproduce this bug on the demo site.\n\n- [ ] I can reproduce this bug on the demo site.\n\n### Relevant log output\n\n```shell\na\n```\n\n", "before_files": [{"content": "\"\"\"Helpers for plugin app.\"\"\"\n\nimport inspect\nimport logging\nimport os\nimport pathlib\nimport pkgutil\nimport sysconfig\nimport traceback\nfrom importlib.metadata import entry_points\n\nfrom django import template\nfrom django.conf import settings\nfrom django.core.exceptions import AppRegistryNotReady\nfrom django.db.utils import IntegrityError\n\nlogger = logging.getLogger('inventree')\n\n\n# region logging / errors\nclass IntegrationPluginError(Exception):\n \"\"\"Error that encapsulates another error and adds the path / reference of the raising plugin.\"\"\"\n\n def __init__(self, path, message):\n \"\"\"Init a plugin error.\n\n Args:\n path: Path on which the error occurred - used to find out which plugin it was\n message: The original error message\n \"\"\"\n self.path = path\n self.message = message\n\n def __str__(self):\n \"\"\"Returns the error message.\"\"\"\n return self.message # pragma: no cover\n\n\nclass MixinImplementationError(ValueError):\n \"\"\"Error if mixin was implemented wrong in plugin.\n\n Mostly raised if constant is missing\n \"\"\"\n\n pass\n\n\nclass MixinNotImplementedError(NotImplementedError):\n \"\"\"Error if necessary mixin function was not overwritten.\"\"\"\n\n pass\n\n\ndef log_error(error, reference: str = 'general'):\n \"\"\"Log an plugin error.\"\"\"\n from plugin import registry\n\n # make sure the registry is set up\n if reference not in registry.errors:\n registry.errors[reference] = []\n\n # add error to stack\n registry.errors[reference].append(error)\n\n\ndef handle_error(error, do_raise: bool = True, do_log: bool = True, log_name: str = ''):\n \"\"\"Handles an error and casts it as an 
IntegrationPluginError.\"\"\"\n package_path = traceback.extract_tb(error.__traceback__)[-1].filename\n install_path = sysconfig.get_paths()['purelib']\n\n try:\n package_name = pathlib.Path(package_path).relative_to(install_path).parts[0]\n except ValueError:\n # is file - loaded -> form a name for that\n try:\n path_obj = pathlib.Path(package_path).relative_to(settings.BASE_DIR)\n path_parts = [*path_obj.parts]\n path_parts[-1] = path_parts[-1].replace(\n path_obj.suffix, ''\n ) # remove suffix\n\n # remove path prefixes\n if path_parts[0] == 'plugin':\n path_parts.remove('plugin')\n path_parts.pop(0)\n else:\n path_parts.remove('plugins') # pragma: no cover\n\n package_name = '.'.join(path_parts)\n except Exception:\n package_name = package_path\n\n if do_log:\n log_kwargs = {}\n if log_name:\n log_kwargs['reference'] = log_name\n log_error({package_name: str(error)}, **log_kwargs)\n\n if do_raise:\n # do a straight raise if we are playing with environment variables at execution time, ignore the broken sample\n if (\n settings.TESTING_ENV\n and package_name != 'integration.broken_sample'\n and isinstance(error, IntegrityError)\n ):\n raise error # pragma: no cover\n\n raise IntegrationPluginError(package_name, str(error))\n\n\ndef get_entrypoints():\n \"\"\"Returns list for entrypoints for InvenTree plugins.\"\"\"\n return entry_points().get('inventree_plugins', [])\n\n\n# endregion\n\n\n# region git-helpers\ndef get_git_log(path):\n \"\"\"Get dict with info of the last commit to file named in path.\"\"\"\n import datetime\n\n from dulwich.repo import NotGitRepository, Repo\n\n from InvenTree.ready import isInTestMode\n\n output = None\n path = os.path.abspath(path)\n\n if os.path.exists(path) and os.path.isfile(path):\n path = os.path.dirname(path)\n\n # only do this if we are not in test mode\n if not isInTestMode(): # pragma: no cover\n try:\n repo = Repo(path)\n head = repo.head()\n commit = repo[head]\n\n output = [\n head.decode(),\n commit.author.decode().split('<')[0][:-1],\n commit.author.decode().split('<')[1][:-1],\n datetime.datetime.fromtimestamp(commit.author_time).isoformat(),\n commit.message.decode().split('\\n')[0],\n ]\n except NotGitRepository:\n pass\n\n if not output:\n output = 5 * [''] # pragma: no cover\n\n return {\n 'hash': output[0],\n 'author': output[1],\n 'mail': output[2],\n 'date': output[3],\n 'message': output[4],\n }\n\n\n# endregion\n\n\n# region plugin finders\ndef get_modules(pkg, path=None):\n \"\"\"Get all modules in a package.\"\"\"\n context = {}\n\n if path is None:\n path = pkg.__path__\n elif type(path) is not list:\n path = [path]\n\n for loader, name, _ in pkgutil.walk_packages(path):\n try:\n module = loader.find_module(name).load_module(name)\n pkg_names = getattr(module, '__all__', None)\n for k, v in vars(module).items():\n if not k.startswith('_') and (pkg_names is None or k in pkg_names):\n context[k] = v\n context[name] = module\n except AppRegistryNotReady: # pragma: no cover\n pass\n except Exception as error:\n # this 'protects' against malformed plugin modules by more or less silently failing\n\n # log to stack\n log_error({name: str(error)}, 'discovery')\n\n return [v for k, v in context.items()]\n\n\ndef get_classes(module):\n \"\"\"Get all classes in a given module.\"\"\"\n return inspect.getmembers(module, inspect.isclass)\n\n\ndef get_plugins(pkg, baseclass, path=None):\n \"\"\"Return a list of all modules under a given package.\n\n - Modules must be a subclass of the provided 'baseclass'\n - Modules must have a non-empty 
NAME parameter\n \"\"\"\n plugins = []\n\n modules = get_modules(pkg, path=path)\n\n # Iterate through each module in the package\n for mod in modules:\n # Iterate through each class in the module\n for item in get_classes(mod):\n plugin = item[1]\n if issubclass(plugin, baseclass) and plugin.NAME:\n plugins.append(plugin)\n\n return plugins\n\n\n# endregion\n\n\n# region templates\ndef render_template(plugin, template_file, context=None):\n \"\"\"Locate and render a template file, available in the global template context.\"\"\"\n try:\n tmp = template.loader.get_template(template_file)\n except template.TemplateDoesNotExist:\n logger.exception(\n \"Plugin %s could not locate template '%s'\", plugin.slug, template_file\n )\n\n return f\"\"\"\n <div class='alert alert-block alert-danger'>\n Template file <em>{template_file}</em> does not exist.\n </div>\n \"\"\"\n\n # Render with the provided context\n html = tmp.render(context)\n\n return html\n\n\ndef render_text(text, context=None):\n \"\"\"Locate a raw string with provided context.\"\"\"\n ctx = template.Context(context)\n\n return template.Template(text).render(ctx)\n\n\n# endregion\n", "path": "InvenTree/plugin/helpers.py"}], "after_files": [{"content": "\"\"\"Helpers for plugin app.\"\"\"\n\nimport inspect\nimport logging\nimport os\nimport pathlib\nimport pkgutil\nimport sysconfig\nimport traceback\nfrom importlib.metadata import entry_points\n\nfrom django import template\nfrom django.conf import settings\nfrom django.core.exceptions import AppRegistryNotReady\nfrom django.db.utils import IntegrityError\n\nlogger = logging.getLogger('inventree')\n\n\n# region logging / errors\nclass IntegrationPluginError(Exception):\n \"\"\"Error that encapsulates another error and adds the path / reference of the raising plugin.\"\"\"\n\n def __init__(self, path, message):\n \"\"\"Init a plugin error.\n\n Args:\n path: Path on which the error occurred - used to find out which plugin it was\n message: The original error message\n \"\"\"\n self.path = path\n self.message = message\n\n def __str__(self):\n \"\"\"Returns the error message.\"\"\"\n return self.message # pragma: no cover\n\n\nclass MixinImplementationError(ValueError):\n \"\"\"Error if mixin was implemented wrong in plugin.\n\n Mostly raised if constant is missing\n \"\"\"\n\n pass\n\n\nclass MixinNotImplementedError(NotImplementedError):\n \"\"\"Error if necessary mixin function was not overwritten.\"\"\"\n\n pass\n\n\ndef log_error(error, reference: str = 'general'):\n \"\"\"Log an plugin error.\"\"\"\n from plugin import registry\n\n # make sure the registry is set up\n if reference not in registry.errors:\n registry.errors[reference] = []\n\n # add error to stack\n registry.errors[reference].append(error)\n\n\ndef handle_error(error, do_raise: bool = True, do_log: bool = True, log_name: str = ''):\n \"\"\"Handles an error and casts it as an IntegrationPluginError.\"\"\"\n package_path = traceback.extract_tb(error.__traceback__)[-1].filename\n install_path = sysconfig.get_paths()['purelib']\n\n try:\n package_name = pathlib.Path(package_path).relative_to(install_path).parts[0]\n except ValueError:\n # is file - loaded -> form a name for that\n try:\n path_obj = pathlib.Path(package_path).relative_to(settings.BASE_DIR)\n path_parts = [*path_obj.parts]\n path_parts[-1] = path_parts[-1].replace(\n path_obj.suffix, ''\n ) # remove suffix\n\n # remove path prefixes\n if path_parts[0] == 'plugin':\n path_parts.remove('plugin')\n path_parts.pop(0)\n else:\n path_parts.remove('plugins') # 
pragma: no cover\n\n package_name = '.'.join(path_parts)\n except Exception:\n package_name = package_path\n\n if do_log:\n log_kwargs = {}\n if log_name:\n log_kwargs['reference'] = log_name\n log_error({package_name: str(error)}, **log_kwargs)\n\n if do_raise:\n # do a straight raise if we are playing with environment variables at execution time, ignore the broken sample\n if (\n settings.TESTING_ENV\n and package_name != 'integration.broken_sample'\n and isinstance(error, IntegrityError)\n ):\n raise error # pragma: no cover\n\n raise IntegrationPluginError(package_name, str(error))\n\n\ndef get_entrypoints():\n \"\"\"Returns list for entrypoints for InvenTree plugins.\"\"\"\n return entry_points().get('inventree_plugins', [])\n\n\n# endregion\n\n\n# region git-helpers\ndef get_git_log(path):\n \"\"\"Get dict with info of the last commit to file named in path.\"\"\"\n import datetime\n\n from dulwich.repo import NotGitRepository, Repo\n\n from InvenTree.ready import isInTestMode\n\n output = None\n path = os.path.abspath(path)\n\n if os.path.exists(path) and os.path.isfile(path):\n path = os.path.dirname(path)\n\n # only do this if we are not in test mode\n if not isInTestMode(): # pragma: no cover\n try:\n repo = Repo(path)\n head = repo.head()\n commit = repo[head]\n\n output = [\n head.decode(),\n commit.author.decode().split('<')[0][:-1],\n commit.author.decode().split('<')[1][:-1],\n datetime.datetime.fromtimestamp(commit.author_time).isoformat(),\n commit.message.decode().split('\\n')[0],\n ]\n except KeyError as err:\n logger.debug('No HEAD tag found in git repo at path %s', path)\n except NotGitRepository:\n pass\n\n if not output:\n output = 5 * [''] # pragma: no cover\n\n return {\n 'hash': output[0],\n 'author': output[1],\n 'mail': output[2],\n 'date': output[3],\n 'message': output[4],\n }\n\n\n# endregion\n\n\n# region plugin finders\ndef get_modules(pkg, path=None):\n \"\"\"Get all modules in a package.\"\"\"\n context = {}\n\n if path is None:\n path = pkg.__path__\n elif type(path) is not list:\n path = [path]\n\n for loader, name, _ in pkgutil.walk_packages(path):\n try:\n module = loader.find_module(name).load_module(name)\n pkg_names = getattr(module, '__all__', None)\n for k, v in vars(module).items():\n if not k.startswith('_') and (pkg_names is None or k in pkg_names):\n context[k] = v\n context[name] = module\n except AppRegistryNotReady: # pragma: no cover\n pass\n except Exception as error:\n # this 'protects' against malformed plugin modules by more or less silently failing\n\n # log to stack\n log_error({name: str(error)}, 'discovery')\n\n return [v for k, v in context.items()]\n\n\ndef get_classes(module):\n \"\"\"Get all classes in a given module.\"\"\"\n return inspect.getmembers(module, inspect.isclass)\n\n\ndef get_plugins(pkg, baseclass, path=None):\n \"\"\"Return a list of all modules under a given package.\n\n - Modules must be a subclass of the provided 'baseclass'\n - Modules must have a non-empty NAME parameter\n \"\"\"\n plugins = []\n\n modules = get_modules(pkg, path=path)\n\n # Iterate through each module in the package\n for mod in modules:\n # Iterate through each class in the module\n for item in get_classes(mod):\n plugin = item[1]\n if issubclass(plugin, baseclass) and plugin.NAME:\n plugins.append(plugin)\n\n return plugins\n\n\n# endregion\n\n\n# region templates\ndef render_template(plugin, template_file, context=None):\n \"\"\"Locate and render a template file, available in the global template context.\"\"\"\n try:\n tmp = 
template.loader.get_template(template_file)\n except template.TemplateDoesNotExist:\n logger.exception(\n \"Plugin %s could not locate template '%s'\", plugin.slug, template_file\n )\n\n return f\"\"\"\n <div class='alert alert-block alert-danger'>\n Template file <em>{template_file}</em> does not exist.\n </div>\n \"\"\"\n\n # Render with the provided context\n html = tmp.render(context)\n\n return html\n\n\ndef render_text(text, context=None):\n \"\"\"Locate a raw string with provided context.\"\"\"\n ctx = template.Context(context)\n\n return template.Template(text).render(ctx)\n\n\n# endregion\n", "path": "InvenTree/plugin/helpers.py"}]} | 2,850 | 114 |
gh_patches_debug_163 | rasdani/github-patches | git_diff | fedora-infra__bodhi-1935 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The CI yaml file is invalid yaml
I noticed today that our CentOS CI service jobs have been failing for a week or two due to the yaml being invalid:
```
>>> with open('devel/ci/githubprb-project.yml') as yml:
... a = yaml.load(yml.read())
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/usr/lib64/python2.7/site-packages/yaml/__init__.py", line 71, in load
return loader.get_single_data()
File "/usr/lib64/python2.7/site-packages/yaml/constructor.py", line 37, in get_single_data
node = self.get_single_node()
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 36, in get_single_node
document = self.compose_document()
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 55, in compose_document
node = self.compose_node(None, None)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 82, in compose_node
node = self.compose_sequence_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 111, in compose_sequence_node
node.value.append(self.compose_node(node, index))
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 133, in compose_mapping_node
item_value = self.compose_node(node, item_key)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 133, in compose_mapping_node
item_value = self.compose_node(node, item_key)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 82, in compose_node
node = self.compose_sequence_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 111, in compose_sequence_node
node.value.append(self.compose_node(node, index))
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 133, in compose_mapping_node
item_value = self.compose_node(node, item_key)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python2.7/site-packages/yaml/composer.py", line 127, in compose_mapping_node
while not self.check_event(MappingEndEvent):
File "/usr/lib64/python2.7/site-packages/yaml/parser.py", line 98, in check_event
self.current_event = self.state()
File "/usr/lib64/python2.7/site-packages/yaml/parser.py", line 428, in parse_block_mapping_key
if self.check_token(KeyToken):
File "/usr/lib64/python2.7/site-packages/yaml/scanner.py", line 116, in check_token
self.fetch_more_tokens()
File "/usr/lib64/python2.7/site-packages/yaml/scanner.py", line 220, in fetch_more_tokens
return self.fetch_value()
File "/usr/lib64/python2.7/site-packages/yaml/scanner.py", line 576, in fetch_value
self.get_mark())
yaml.scanner.ScannerError: mapping values are not allowed here
in "<string>", line 20, column 99:
... ase review the Jenkins job. Hint: You can search for "JENKIES FA ...
^
```
I am personally responsible, since I made https://github.com/fedora-infra/bodhi/commit/791d4e3ea98d252daa6fb4856cb394eb8b07d0b3. Shame!
Anyways, it's easy to fix, and we should add a test that ensures the YAML is at least parseable.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import __main__
2 __requires__ = __main__.__requires__ = 'WebOb>=1.4.1'
3 import pkg_resources # noqa
4
5 # The following two imports are required to shut up an
6 # atexit error when running tests with python 2.7
7 from setuptools import setup, find_packages # noqa
8 import logging # noqa
9 import multiprocessing # noqa
10 import os # noqa
11 import setuptools.command.egg_info # noqa
12 import sys # noqa
13
14
15 def get_requirements(requirements_file='requirements.txt'):
16 """
17 Get the contents of a file listing the requirements.
18
19 Args:
20 requirements_file (str): path to a requirements file
21
22 Returns:
23 list: the list of requirements, or an empty list if
24 `requirements_file` could not be opened or read
25 """
26 lines = open(requirements_file).readlines()
27 dependencies = []
28 for line in lines:
29 maybe_dep = line.strip()
30 if maybe_dep.startswith('#'):
31 # Skip pure comment lines
32 continue
33 if maybe_dep.startswith('git+'):
34 # VCS reference for dev purposes, expect a trailing comment
35 # with the normal requirement
36 __, __, maybe_dep = maybe_dep.rpartition('#')
37 else:
38 # Ignore any trailing comment
39 maybe_dep, __, __ = maybe_dep.partition('#')
40 # Remove any whitespace and assume non-empty results are dependencies
41 maybe_dep = maybe_dep.strip()
42 if maybe_dep:
43 dependencies.append(maybe_dep)
44 return dependencies
45
46
47 here = os.path.abspath(os.path.dirname(__file__))
48 README = open(os.path.join(here, 'README.rst')).read()
49 VERSION = '3.0.0'
50 # Possible options are at https://pypi.python.org/pypi?%3Aaction=list_classifiers
51 CLASSIFIERS = [
52 'Development Status :: 5 - Production/Stable',
53 'Intended Audience :: Developers',
54 'Intended Audience :: System Administrators',
55 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
56 'Operating System :: POSIX :: Linux',
57 'Programming Language :: Python :: 2.7',
58 'Topic :: System :: Software Distribution']
59 LICENSE = 'GPLv2+'
60 MAINTAINER = 'Fedora Infrastructure Team'
61 MAINTAINER_EMAIL = '[email protected]'
62 PLATFORMS = ['Fedora', 'GNU/Linux']
63 URL = 'https://github.com/fedora-infra/bodhi'
64
65
66 setuptools.command.egg_info.manifest_maker.template = 'BODHI_MANIFEST.in'
67
68
69 setup(
70 name='bodhi',
71 version=VERSION,
72 description='bodhi common package',
73 long_description=README,
74 classifiers=CLASSIFIERS,
75 license=LICENSE,
76 maintainer=MAINTAINER,
77 maintainer_email=MAINTAINER_EMAIL,
78 platforms=PLATFORMS,
79 url=URL,
80 keywords='fedora',
81 packages=['bodhi'],
82 include_package_data=True,
83 zip_safe=False,
84 install_requires=[],
85 tests_require=[
86 'flake8',
87 'pytest',
88 'pytest-cov',
89 'webtest',
90 'mock',
91 ],
92 )
93
94
95 setuptools.command.egg_info.manifest_maker.template = 'CLIENT_MANIFEST.in'
96
97
98 setup(
99 name='bodhi-client',
100 version=VERSION,
101 description='bodhi client',
102 long_description=README,
103 classifiers=CLASSIFIERS,
104 license=LICENSE,
105 maintainer=MAINTAINER,
106 maintainer_email=MAINTAINER_EMAIL,
107 platforms=PLATFORMS,
108 url=URL,
109 keywords='fedora',
110 packages=['bodhi.client'],
111 include_package_data=False,
112 zip_safe=False,
113 install_requires=['click', 'iniparse', 'python-fedora >= 0.9.0', 'six'],
114 entry_points="""\
115 [console_scripts]
116 bodhi = bodhi.client:cli
117 """)
118
119
120 setuptools.command.egg_info.manifest_maker.template = 'SERVER_MANIFEST.in'
121 # Due to https://github.com/pypa/setuptools/issues/808, we need to include the bodhi superpackage
122 # and then remove it if we want find_packages() to find the bodhi.server package and its
123 # subpackages without including the bodhi top level package.
124 server_packages = find_packages(
125 exclude=['bodhi.client', 'bodhi.client.*', 'bodhi.tests', 'bodhi.tests.*'])
126 server_packages.remove('bodhi')
127
128
129 setup(
130 name='bodhi-server',
131 version=VERSION,
132 description='bodhi server',
133 long_description=README,
134 classifiers=CLASSIFIERS + [
135 'Framework :: Pyramid',
136 'Programming Language :: JavaScript',
137 "Topic :: Internet :: WWW/HTTP",
138 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application"],
139 license=LICENSE,
140 maintainer=MAINTAINER,
141 maintainer_email=MAINTAINER_EMAIL,
142 platforms=PLATFORMS,
143 url=URL,
144 keywords='web fedora pyramid',
145 packages=server_packages,
146 include_package_data=True,
147 zip_safe=False,
148 install_requires=get_requirements(),
149 message_extractors={'.': []},
150 entry_points="""\
151 [paste.app_factory]
152 main = bodhi.server:main
153 [console_scripts]
154 initialize_bodhi_db = bodhi.server.scripts.initializedb:main
155 bodhi-clean-old-mashes = bodhi.server.scripts.clean_old_mashes:clean_up
156 bodhi-dequeue-stable = bodhi.server.scripts.dequeue_stable:dequeue_stable
157 bodhi-push = bodhi.server.push:push
158 bodhi-expire-overrides = bodhi.server.scripts.expire_overrides:main
159 bodhi-untag-branched = bodhi.server.scripts.untag_branched:main
160 bodhi-approve-testing = bodhi.server.scripts.approve_testing:main
161 bodhi-manage-releases = bodhi.server.scripts.manage_releases:main
162 bodhi-check-policies = bodhi.server.scripts.check_policies:check
163 [moksha.consumer]
164 masher = bodhi.server.consumers.masher:Masher
165 updates = bodhi.server.consumers.updates:UpdatesHandler
166 signed = bodhi.server.consumers.signed:SignedHandler
167 """,
168 paster_plugins=['pyramid'])
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -86,6 +86,7 @@
'flake8',
'pytest',
'pytest-cov',
+ 'pyyaml',
'webtest',
'mock',
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -86,6 +86,7 @@\n 'flake8',\n 'pytest',\n 'pytest-cov',\n+ 'pyyaml',\n 'webtest',\n 'mock',\n ],\n", "issue": "The CI yaml file is invalid yaml\nI noticed today that our CentOS CI service jobs have been failing for a week or two due to the yaml being invalid:\r\n\r\n```\r\n>>> with open('devel/ci/githubprb-project.yml') as yml:\r\n... a = yaml.load(yml.read()) \r\n... \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 2, in <module>\r\n File \"/usr/lib64/python2.7/site-packages/yaml/__init__.py\", line 71, in load\r\n return loader.get_single_data()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/constructor.py\", line 37, in get_single_data\r\n node = self.get_single_node()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 36, in get_single_node\r\n document = self.compose_document()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 55, in compose_document\r\n node = self.compose_node(None, None)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 82, in compose_node\r\n node = self.compose_sequence_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 111, in compose_sequence_node\r\n node.value.append(self.compose_node(node, index))\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 84, in compose_node\r\n node = self.compose_mapping_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 133, in compose_mapping_node\r\n item_value = self.compose_node(node, item_key)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 84, in compose_node\r\n node = self.compose_mapping_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 133, in compose_mapping_node\r\n item_value = self.compose_node(node, item_key)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 82, in compose_node\r\n node = self.compose_sequence_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 111, in compose_sequence_node\r\n node.value.append(self.compose_node(node, index))\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 84, in compose_node\r\n node = self.compose_mapping_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 133, in compose_mapping_node\r\n item_value = self.compose_node(node, item_key)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 84, in compose_node\r\n node = self.compose_mapping_node(anchor)\r\n File \"/usr/lib64/python2.7/site-packages/yaml/composer.py\", line 127, in compose_mapping_node\r\n while not self.check_event(MappingEndEvent):\r\n File \"/usr/lib64/python2.7/site-packages/yaml/parser.py\", line 98, in check_event\r\n self.current_event = self.state()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/parser.py\", line 428, in parse_block_mapping_key\r\n if self.check_token(KeyToken):\r\n File \"/usr/lib64/python2.7/site-packages/yaml/scanner.py\", line 116, in check_token\r\n self.fetch_more_tokens()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/scanner.py\", line 220, in fetch_more_tokens\r\n return self.fetch_value()\r\n File \"/usr/lib64/python2.7/site-packages/yaml/scanner.py\", line 576, in fetch_value\r\n self.get_mark())\r\nyaml.scanner.ScannerError: mapping values are not allowed here\r\n in \"<string>\", line 20, column 99:\r\n ... ase review the Jenkins job. 
Hint: You can search for \"JENKIES FA ... \r\n ^\r\n```\r\n\r\nI personally am responsible, when I made https://github.com/fedora-infra/bodhi/commit/791d4e3ea98d252daa6fb4856cb394eb8b07d0b3. Shame!\r\n\r\nAnywyays, it's easy to fix and we should add a test that ensures the YAML is at least parseable.\n", "before_files": [{"content": "import __main__\n__requires__ = __main__.__requires__ = 'WebOb>=1.4.1'\nimport pkg_resources # noqa\n\n# The following two imports are required to shut up an\n# atexit error when running tests with python 2.7\nfrom setuptools import setup, find_packages # noqa\nimport logging # noqa\nimport multiprocessing # noqa\nimport os # noqa\nimport setuptools.command.egg_info # noqa\nimport sys # noqa\n\n\ndef get_requirements(requirements_file='requirements.txt'):\n \"\"\"\n Get the contents of a file listing the requirements.\n\n Args:\n requirements_file (str): path to a requirements file\n\n Returns:\n list: the list of requirements, or an empty list if\n `requirements_file` could not be opened or read\n \"\"\"\n lines = open(requirements_file).readlines()\n dependencies = []\n for line in lines:\n maybe_dep = line.strip()\n if maybe_dep.startswith('#'):\n # Skip pure comment lines\n continue\n if maybe_dep.startswith('git+'):\n # VCS reference for dev purposes, expect a trailing comment\n # with the normal requirement\n __, __, maybe_dep = maybe_dep.rpartition('#')\n else:\n # Ignore any trailing comment\n maybe_dep, __, __ = maybe_dep.partition('#')\n # Remove any whitespace and assume non-empty results are dependencies\n maybe_dep = maybe_dep.strip()\n if maybe_dep:\n dependencies.append(maybe_dep)\n return dependencies\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(here, 'README.rst')).read()\nVERSION = '3.0.0'\n# Possible options are at https://pypi.python.org/pypi?%3Aaction=list_classifiers\nCLASSIFIERS = [\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: System :: Software Distribution']\nLICENSE = 'GPLv2+'\nMAINTAINER = 'Fedora Infrastructure Team'\nMAINTAINER_EMAIL = '[email protected]'\nPLATFORMS = ['Fedora', 'GNU/Linux']\nURL = 'https://github.com/fedora-infra/bodhi'\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'BODHI_MANIFEST.in'\n\n\nsetup(\n name='bodhi',\n version=VERSION,\n description='bodhi common package',\n long_description=README,\n classifiers=CLASSIFIERS,\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='fedora',\n packages=['bodhi'],\n include_package_data=True,\n zip_safe=False,\n install_requires=[],\n tests_require=[\n 'flake8',\n 'pytest',\n 'pytest-cov',\n 'webtest',\n 'mock',\n ],\n)\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'CLIENT_MANIFEST.in'\n\n\nsetup(\n name='bodhi-client',\n version=VERSION,\n description='bodhi client',\n long_description=README,\n classifiers=CLASSIFIERS,\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='fedora',\n packages=['bodhi.client'],\n include_package_data=False,\n zip_safe=False,\n install_requires=['click', 'iniparse', 'python-fedora >= 0.9.0', 'six'],\n entry_points=\"\"\"\\\n [console_scripts]\n bodhi = bodhi.client:cli\n 
\"\"\")\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'SERVER_MANIFEST.in'\n# Due to https://github.com/pypa/setuptools/issues/808, we need to include the bodhi superpackage\n# and then remove it if we want find_packages() to find the bodhi.server package and its\n# subpackages without including the bodhi top level package.\nserver_packages = find_packages(\n exclude=['bodhi.client', 'bodhi.client.*', 'bodhi.tests', 'bodhi.tests.*'])\nserver_packages.remove('bodhi')\n\n\nsetup(\n name='bodhi-server',\n version=VERSION,\n description='bodhi server',\n long_description=README,\n classifiers=CLASSIFIERS + [\n 'Framework :: Pyramid',\n 'Programming Language :: JavaScript',\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\"],\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='web fedora pyramid',\n packages=server_packages,\n include_package_data=True,\n zip_safe=False,\n install_requires=get_requirements(),\n message_extractors={'.': []},\n entry_points=\"\"\"\\\n [paste.app_factory]\n main = bodhi.server:main\n [console_scripts]\n initialize_bodhi_db = bodhi.server.scripts.initializedb:main\n bodhi-clean-old-mashes = bodhi.server.scripts.clean_old_mashes:clean_up\n bodhi-dequeue-stable = bodhi.server.scripts.dequeue_stable:dequeue_stable\n bodhi-push = bodhi.server.push:push\n bodhi-expire-overrides = bodhi.server.scripts.expire_overrides:main\n bodhi-untag-branched = bodhi.server.scripts.untag_branched:main\n bodhi-approve-testing = bodhi.server.scripts.approve_testing:main\n bodhi-manage-releases = bodhi.server.scripts.manage_releases:main\n bodhi-check-policies = bodhi.server.scripts.check_policies:check\n [moksha.consumer]\n masher = bodhi.server.consumers.masher:Masher\n updates = bodhi.server.consumers.updates:UpdatesHandler\n signed = bodhi.server.consumers.signed:SignedHandler\n \"\"\",\n paster_plugins=['pyramid'])\n", "path": "setup.py"}], "after_files": [{"content": "import __main__\n__requires__ = __main__.__requires__ = 'WebOb>=1.4.1'\nimport pkg_resources # noqa\n\n# The following two imports are required to shut up an\n# atexit error when running tests with python 2.7\nfrom setuptools import setup, find_packages # noqa\nimport logging # noqa\nimport multiprocessing # noqa\nimport os # noqa\nimport setuptools.command.egg_info # noqa\nimport sys # noqa\n\n\ndef get_requirements(requirements_file='requirements.txt'):\n \"\"\"\n Get the contents of a file listing the requirements.\n\n Args:\n requirements_file (str): path to a requirements file\n\n Returns:\n list: the list of requirements, or an empty list if\n `requirements_file` could not be opened or read\n \"\"\"\n lines = open(requirements_file).readlines()\n dependencies = []\n for line in lines:\n maybe_dep = line.strip()\n if maybe_dep.startswith('#'):\n # Skip pure comment lines\n continue\n if maybe_dep.startswith('git+'):\n # VCS reference for dev purposes, expect a trailing comment\n # with the normal requirement\n __, __, maybe_dep = maybe_dep.rpartition('#')\n else:\n # Ignore any trailing comment\n maybe_dep, __, __ = maybe_dep.partition('#')\n # Remove any whitespace and assume non-empty results are dependencies\n maybe_dep = maybe_dep.strip()\n if maybe_dep:\n dependencies.append(maybe_dep)\n return dependencies\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(here, 'README.rst')).read()\nVERSION = '3.0.0'\n# Possible options are at 
https://pypi.python.org/pypi?%3Aaction=list_classifiers\nCLASSIFIERS = [\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: System :: Software Distribution']\nLICENSE = 'GPLv2+'\nMAINTAINER = 'Fedora Infrastructure Team'\nMAINTAINER_EMAIL = '[email protected]'\nPLATFORMS = ['Fedora', 'GNU/Linux']\nURL = 'https://github.com/fedora-infra/bodhi'\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'BODHI_MANIFEST.in'\n\n\nsetup(\n name='bodhi',\n version=VERSION,\n description='bodhi common package',\n long_description=README,\n classifiers=CLASSIFIERS,\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='fedora',\n packages=['bodhi'],\n include_package_data=True,\n zip_safe=False,\n install_requires=[],\n tests_require=[\n 'flake8',\n 'pytest',\n 'pytest-cov',\n 'pyyaml',\n 'webtest',\n 'mock',\n ],\n)\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'CLIENT_MANIFEST.in'\n\n\nsetup(\n name='bodhi-client',\n version=VERSION,\n description='bodhi client',\n long_description=README,\n classifiers=CLASSIFIERS,\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='fedora',\n packages=['bodhi.client'],\n include_package_data=False,\n zip_safe=False,\n install_requires=['click', 'iniparse', 'python-fedora >= 0.9.0', 'six'],\n entry_points=\"\"\"\\\n [console_scripts]\n bodhi = bodhi.client:cli\n \"\"\")\n\n\nsetuptools.command.egg_info.manifest_maker.template = 'SERVER_MANIFEST.in'\n# Due to https://github.com/pypa/setuptools/issues/808, we need to include the bodhi superpackage\n# and then remove it if we want find_packages() to find the bodhi.server package and its\n# subpackages without including the bodhi top level package.\nserver_packages = find_packages(\n exclude=['bodhi.client', 'bodhi.client.*', 'bodhi.tests', 'bodhi.tests.*'])\nserver_packages.remove('bodhi')\n\n\nsetup(\n name='bodhi-server',\n version=VERSION,\n description='bodhi server',\n long_description=README,\n classifiers=CLASSIFIERS + [\n 'Framework :: Pyramid',\n 'Programming Language :: JavaScript',\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\"],\n license=LICENSE,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n platforms=PLATFORMS,\n url=URL,\n keywords='web fedora pyramid',\n packages=server_packages,\n include_package_data=True,\n zip_safe=False,\n install_requires=get_requirements(),\n message_extractors={'.': []},\n entry_points=\"\"\"\\\n [paste.app_factory]\n main = bodhi.server:main\n [console_scripts]\n initialize_bodhi_db = bodhi.server.scripts.initializedb:main\n bodhi-clean-old-mashes = bodhi.server.scripts.clean_old_mashes:clean_up\n bodhi-dequeue-stable = bodhi.server.scripts.dequeue_stable:dequeue_stable\n bodhi-push = bodhi.server.push:push\n bodhi-expire-overrides = bodhi.server.scripts.expire_overrides:main\n bodhi-untag-branched = bodhi.server.scripts.untag_branched:main\n bodhi-approve-testing = bodhi.server.scripts.approve_testing:main\n bodhi-manage-releases = bodhi.server.scripts.manage_releases:main\n bodhi-check-policies = bodhi.server.scripts.check_policies:check\n [moksha.consumer]\n masher = 
bodhi.server.consumers.masher:Masher\n updates = bodhi.server.consumers.updates:UpdatesHandler\n signed = bodhi.server.consumers.signed:SignedHandler\n \"\"\",\n paster_plugins=['pyramid'])\n", "path": "setup.py"}]} | 3,062 | 64 |
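The patch above only wires `pyyaml` into `tests_require`; the test the issue asks for (asserting that the CI YAML parses) is not part of this record. A minimal sketch of such a test is below; the file path comes from the issue, while the test name and the use of `yaml.safe_load` are assumptions, not code from the Bodhi repository.

```python
# Hypothetical test sketch: it passes as long as PyYAML can parse the CI config.
import yaml


def test_ci_yaml_is_parseable():
    # Path taken from the issue; adjust if the repository layout differs.
    with open('devel/ci/githubprb-project.yml') as yml_file:
        # safe_load raises a yaml.YAMLError subclass (e.g. ScannerError) on
        # invalid YAML, which is exactly the failure mode shown in the issue.
        yaml.safe_load(yml_file.read())
```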
gh_patches_debug_37125 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-546 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scout breaks caching for django
We gave Scout a try and it looked promising, but we quickly had to disable it in production.
The issue is that Scout adds a `Vary: Cookie` header, which breaks caching.
This normally happens when some code in Django accesses the request.user object. I'm assuming Scout tries to add some metadata and accesses it, causing this issue.
We've run into this problem ourselves in the past, and the way around it is to look for the internally cached user on the request object, like this:
```
# going request.user will generate cookie vary headers, but since
# we aren't changing the output based on this we want to see the user
# without adding the header, so look for the lazy user
if request and hasattr(request, '_cached_user'):
# noinspection PyProtectedMember
user = request._cached_user
```
I think this could be fixed by replacing the check for the user here:
https://github.com/scoutapp/scout_apm_python/blob/master/src/scout_apm/django/middleware.py#L139
--- END ISSUE ---
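For context on the workaround quoted above: Django's AuthenticationMiddleware exposes request.user as a lazy object, and the first real access resolves the user through the session and caches it on request._cached_user; that session access is what makes Django add "Cookie" to the response's Vary header. A simplified illustration of the pattern (not Scout or Django source) is sketched below; resolve_user stands in for django.contrib.auth.get_user and is an assumption.

```python
# Simplified sketch of Django's lazy-user pattern; resolve_user is a
# placeholder for django.contrib.auth.get_user, not a real Scout API.
from django.utils.functional import SimpleLazyObject


def attach_lazy_user(request, resolve_user):
    def _get_user():
        if not hasattr(request, "_cached_user"):
            # Resolving the user reads the session, which is what causes
            # Django to add "Cookie" to the response's Vary header.
            request._cached_user = resolve_user(request)
        return request._cached_user

    # request.user stays unresolved until something actually touches it, so
    # checking request._cached_user observes the user without triggering the
    # session access itself.
    request.user = SimpleLazyObject(_get_user)
```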
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/django/middleware.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import sys
5
6 import django
7 from django.conf import settings
8
9 from scout_apm.compat import string_types
10 from scout_apm.core.config import scout_config
11 from scout_apm.core.tracked_request import TrackedRequest
12 from scout_apm.core.web_requests import (
13 create_filtered_path,
14 ignore_path,
15 track_amazon_request_queue_time,
16 track_request_queue_time,
17 )
18
19 if django.VERSION >= (1, 11):
20 from django.urls import get_urlconf
21 else:
22 from django.core.urlresolvers import get_urlconf
23
24
25 def get_operation_name(request):
26 view_func = request.resolver_match.func
27 view_name = request.resolver_match._func_path
28
29 if hasattr(view_func, "model_admin"):
30 # Seems to comes from Django admin (attribute only set on Django 1.9+)
31 admin_class = view_func.model_admin.__class__
32 view_name = (
33 admin_class.__module__
34 + "."
35 + admin_class.__name__
36 + "."
37 + view_func.__name__
38 )
39
40 django_rest_framework_name = _get_django_rest_framework_name(
41 request, view_func, view_name
42 )
43 if django_rest_framework_name is not None:
44 return django_rest_framework_name
45
46 # Seems to be a Tastypie Resource. Need to resort to some stack inspection
47 # to find a better name since its decorators don't wrap very well
48 if view_name == "tastypie.resources.wrapper":
49 tastypie_name = _get_tastypie_operation_name(request, view_func)
50 if tastypie_name is not None:
51 return tastypie_name
52
53 return "Controller/" + view_name
54
55
56 def _get_django_rest_framework_name(request, view_func, view_name):
57 try:
58 from rest_framework.viewsets import ViewSetMixin
59 except ImportError:
60 return None
61
62 kls = getattr(view_func, "cls", None)
63 if isinstance(kls, type) and not issubclass(kls, ViewSetMixin):
64 return None
65
66 # Get 'actions' set in ViewSetMixin.as_view
67 actions = getattr(view_func, "actions", None)
68 if not actions or not isinstance(actions, dict):
69 return None
70
71 method_lower = request.method.lower()
72 if method_lower not in actions:
73 return None
74
75 return "Controller/{}.{}".format(view_name, actions[method_lower])
76
77
78 def _get_tastypie_operation_name(request, view_func):
79 try:
80 from tastypie.resources import Resource
81 except ImportError:
82 return None
83
84 if sys.version_info[0] == 2: # pragma: no cover
85 try:
86 wrapper = view_func.__closure__[0].cell_contents
87 except (AttributeError, IndexError):
88 return None
89 elif sys.version_info[0] == 3:
90 try:
91 wrapper = view_func.__wrapped__
92 except AttributeError:
93 return None
94
95 if not hasattr(wrapper, "__closure__") or len(wrapper.__closure__) != 2:
96 return None
97
98 instance = wrapper.__closure__[0].cell_contents
99 if not isinstance(instance, Resource): # pragma: no cover
100 return None
101
102 method_name = wrapper.__closure__[1].cell_contents
103 if not isinstance(method_name, string_types): # pragma: no cover
104 return None
105
106 if method_name.startswith("dispatch_"): # pragma: no cover
107 method_name = request.method.lower() + method_name.split("dispatch", 1)[1]
108
109 return "Controller/{}.{}.{}".format(
110 instance.__module__, instance.__class__.__name__, method_name
111 )
112
113
114 def track_request_view_data(request, tracked_request):
115 path = request.path
116 tracked_request.tag(
117 "path",
118 create_filtered_path(
119 path, [(k, v) for k, vs in request.GET.lists() for v in vs]
120 ),
121 )
122 if ignore_path(path):
123 tracked_request.tag("ignore_transaction", True)
124
125 if scout_config.value("collect_remote_ip"):
126 try:
127 # Determine a remote IP to associate with the request. The value is
128 # spoofable by the requester so this is not suitable to use in any
129 # security sensitive context.
130 user_ip = (
131 request.META.get("HTTP_X_FORWARDED_FOR", "").split(",")[0]
132 or request.META.get("HTTP_CLIENT_IP", "").split(",")[0]
133 or request.META.get("REMOTE_ADDR", None)
134 )
135 tracked_request.tag("user_ip", user_ip)
136 except Exception:
137 pass
138
139 user = getattr(request, "user", None)
140 if user is not None:
141 try:
142 tracked_request.tag("username", user.get_username())
143 except Exception:
144 pass
145
146 tracked_request.tag("urlconf", get_urlconf(settings.ROOT_URLCONF))
147
148
149 class MiddlewareTimingMiddleware(object):
150 """
151 Insert as early into the Middleware stack as possible (outermost layers),
152 so that other middlewares called after can be timed.
153 """
154
155 def __init__(self, get_response):
156 self.get_response = get_response
157
158 def __call__(self, request):
159 if not scout_config.value("monitor"):
160 return self.get_response(request)
161
162 tracked_request = TrackedRequest.instance()
163
164 tracked_request.start_span(
165 operation="Middleware", should_capture_backtrace=False
166 )
167 queue_time = request.META.get("HTTP_X_QUEUE_START") or request.META.get(
168 "HTTP_X_REQUEST_START", ""
169 )
170 queue_time_tracked = track_request_queue_time(queue_time, tracked_request)
171 if not queue_time_tracked:
172 track_amazon_request_queue_time(
173 request.META.get("HTTP_X_AMZN_TRACE_ID", ""), tracked_request
174 )
175
176 try:
177 return self.get_response(request)
178 finally:
179 tracked_request.stop_span()
180
181
182 class ViewTimingMiddleware(object):
183 """
184 Insert as deep into the middleware stack as possible, ideally wrapping no
185 other middleware. Designed to time the View itself
186 """
187
188 def __init__(self, get_response):
189 self.get_response = get_response
190
191 def __call__(self, request):
192 """
193 Wrap a single incoming request with start and stop calls.
194 This will start timing, but relies on the process_view callback to
195 capture more details about what view was really called, and other
196 similar info.
197
198 If process_view isn't called, then the request will not
199 be recorded. This can happen if a middleware further along the stack
200 doesn't call onward, and instead returns a response directly.
201 """
202 if not scout_config.value("monitor"):
203 return self.get_response(request)
204
205 tracked_request = TrackedRequest.instance()
206
207 # This operation name won't be recorded unless changed later in
208 # process_view
209 tracked_request.start_span(operation="Unknown", should_capture_backtrace=False)
210 try:
211 response = self.get_response(request)
212 if 500 <= response.status_code <= 599:
213 tracked_request.tag("error", "true")
214 return response
215 finally:
216 tracked_request.stop_span()
217
218 def process_view(self, request, view_func, view_args, view_kwargs):
219 """
220 Capture details about the view_func that is about to execute
221 """
222 if not scout_config.value("monitor"):
223 return
224 tracked_request = TrackedRequest.instance()
225 tracked_request.is_real_request = True
226
227 track_request_view_data(request, tracked_request)
228
229 span = tracked_request.current_span()
230 if span is not None:
231 span.operation = get_operation_name(request)
232
233 def process_exception(self, request, exception):
234 """
235 Mark this request as having errored out
236
237 Does not modify or catch or otherwise change the exception thrown
238 """
239 if not scout_config.value("monitor"):
240 return
241 TrackedRequest.instance().tag("error", "true")
242
243
244 class OldStyleMiddlewareTimingMiddleware(object):
245 """
246 Insert as early into the Middleware stack as possible (outermost layers),
247 so that other middlewares called after can be timed.
248 """
249
250 def process_request(self, request):
251 if not scout_config.value("monitor"):
252 return
253 tracked_request = TrackedRequest.instance()
254 request._scout_tracked_request = tracked_request
255
256 queue_time = request.META.get("HTTP_X_QUEUE_START") or request.META.get(
257 "HTTP_X_REQUEST_START", ""
258 )
259 queue_time_tracked = track_request_queue_time(queue_time, tracked_request)
260 if not queue_time_tracked:
261 track_amazon_request_queue_time(
262 request.META.get("HTTP_X_AMZN_TRACE_ID", ""), tracked_request
263 )
264
265 tracked_request.start_span(
266 operation="Middleware", should_capture_backtrace=False
267 )
268
269 def process_response(self, request, response):
270 # Only stop span if there's a request, but presume we are balanced,
271 # i.e. that custom instrumentation within the application is not
272 # causing errors
273 tracked_request = getattr(request, "_scout_tracked_request", None)
274 if 500 <= response.status_code <= 599:
275 tracked_request.tag("error", "true")
276 if tracked_request is not None:
277 tracked_request.stop_span()
278 return response
279
280
281 class OldStyleViewMiddleware(object):
282 def process_view(self, request, view_func, view_func_args, view_func_kwargs):
283 tracked_request = getattr(request, "_scout_tracked_request", None)
284 if tracked_request is None:
285 # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so
286 # don't do anything
287 return
288
289 tracked_request.is_real_request = True
290
291 track_request_view_data(request, tracked_request)
292
293 span = tracked_request.start_span(
294 operation=get_operation_name(request), should_capture_backtrace=False
295 )
296 # Save the span into the request, so we can check
297 # if we're matched up when stopping
298 request._scout_view_span = span
299
300 def process_response(self, request, response):
301 tracked_request = getattr(request, "_scout_tracked_request", None)
302 if tracked_request is None:
303 # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so
304 # don't do anything
305 return response
306
307 # Only stop span if we started, but presume we are balanced, i.e. that
308 # custom instrumentation within the application is not causing errors
309 span = getattr(request, "_scout_view_span", None)
310 if span is not None:
311 tracked_request.stop_span()
312 return response
313
314 def process_exception(self, request, exception):
315 tracked_request = getattr(request, "_scout_tracked_request", None)
316 if tracked_request is None:
317 # Looks like OldStyleMiddlewareTimingMiddleware didn't run, so
318 # don't do anything
319 return
320
321 tracked_request.tag("error", "true")
322
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/django/middleware.py b/src/scout_apm/django/middleware.py
--- a/src/scout_apm/django/middleware.py
+++ b/src/scout_apm/django/middleware.py
@@ -136,7 +136,11 @@
except Exception:
pass
- user = getattr(request, "user", None)
+ # Django's request.user caches in this attribute on first access. We only
+ # want to track the user if the application code has touched request.user
+ # because touching it causes session access, which adds "Cookie" to the
+ # "Vary" header.
+ user = getattr(request, "_cached_user", None)
if user is not None:
try:
tracked_request.tag("username", user.get_username())
@@ -209,6 +213,7 @@
tracked_request.start_span(operation="Unknown", should_capture_backtrace=False)
try:
response = self.get_response(request)
+ track_request_view_data(request, tracked_request)
if 500 <= response.status_code <= 599:
tracked_request.tag("error", "true")
return response
@@ -224,8 +229,6 @@
tracked_request = TrackedRequest.instance()
tracked_request.is_real_request = True
- track_request_view_data(request, tracked_request)
-
span = tracked_request.current_span()
if span is not None:
span.operation = get_operation_name(request)
@@ -288,8 +291,6 @@
tracked_request.is_real_request = True
- track_request_view_data(request, tracked_request)
-
span = tracked_request.start_span(
operation=get_operation_name(request), should_capture_backtrace=False
)
@@ -304,6 +305,8 @@
# don't do anything
return response
+ track_request_view_data(request, tracked_request)
+
# Only stop span if we started, but presume we are balanced, i.e. that
# custom instrumentation within the application is not causing errors
span = getattr(request, "_scout_view_span", None)
| {"golden_diff": "diff --git a/src/scout_apm/django/middleware.py b/src/scout_apm/django/middleware.py\n--- a/src/scout_apm/django/middleware.py\n+++ b/src/scout_apm/django/middleware.py\n@@ -136,7 +136,11 @@\n except Exception:\n pass\n \n- user = getattr(request, \"user\", None)\n+ # Django's request.user caches in this attribute on first access. We only\n+ # want to track the user if the application code has touched request.user\n+ # because touching it causes session access, which adds \"Cookie\" to the\n+ # \"Vary\" header.\n+ user = getattr(request, \"_cached_user\", None)\n if user is not None:\n try:\n tracked_request.tag(\"username\", user.get_username())\n@@ -209,6 +213,7 @@\n tracked_request.start_span(operation=\"Unknown\", should_capture_backtrace=False)\n try:\n response = self.get_response(request)\n+ track_request_view_data(request, tracked_request)\n if 500 <= response.status_code <= 599:\n tracked_request.tag(\"error\", \"true\")\n return response\n@@ -224,8 +229,6 @@\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n \n- track_request_view_data(request, tracked_request)\n-\n span = tracked_request.current_span()\n if span is not None:\n span.operation = get_operation_name(request)\n@@ -288,8 +291,6 @@\n \n tracked_request.is_real_request = True\n \n- track_request_view_data(request, tracked_request)\n-\n span = tracked_request.start_span(\n operation=get_operation_name(request), should_capture_backtrace=False\n )\n@@ -304,6 +305,8 @@\n # don't do anything\n return response\n \n+ track_request_view_data(request, tracked_request)\n+\n # Only stop span if we started, but presume we are balanced, i.e. that\n # custom instrumentation within the application is not causing errors\n span = getattr(request, \"_scout_view_span\", None)\n", "issue": "scout breaks caching for django\nWe gave scout a try and looked promising but then we quickly had to disable it on production.\r\n\r\nIssue is that scout is adding a `Vary: Cookie` header which breaks caching\r\n\r\n\r\n\r\nThis normally happens when some code in django accesses the request.user object. I'm assuming scout is trying to add some meta data and accesses it causing this issue.\r\n\r\nWe've run into this problem ourselves in the past and the way around is it to look for the internal cached user on the request object. 
Like this:\r\n\r\n```\r\n # going request.user will generate cookie vary headers, but since\r\n # we aren't changing the output based on this we want to see the user\r\n # without adding the header, so look for the lazy user\r\n if request and hasattr(request, '_cached_user'):\r\n # noinspection PyProtectedMember\r\n user = request._cached_user\r\n```\r\n\r\nI think if replaced the check for user here\r\n\r\nhttps://github.com/scoutapp/scout_apm_python/blob/master/src/scout_apm/django/middleware.py#L139\r\n\r\n\r\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport sys\n\nimport django\nfrom django.conf import settings\n\nfrom scout_apm.compat import string_types\nfrom scout_apm.core.config import scout_config\nfrom scout_apm.core.tracked_request import TrackedRequest\nfrom scout_apm.core.web_requests import (\n create_filtered_path,\n ignore_path,\n track_amazon_request_queue_time,\n track_request_queue_time,\n)\n\nif django.VERSION >= (1, 11):\n from django.urls import get_urlconf\nelse:\n from django.core.urlresolvers import get_urlconf\n\n\ndef get_operation_name(request):\n view_func = request.resolver_match.func\n view_name = request.resolver_match._func_path\n\n if hasattr(view_func, \"model_admin\"):\n # Seems to comes from Django admin (attribute only set on Django 1.9+)\n admin_class = view_func.model_admin.__class__\n view_name = (\n admin_class.__module__\n + \".\"\n + admin_class.__name__\n + \".\"\n + view_func.__name__\n )\n\n django_rest_framework_name = _get_django_rest_framework_name(\n request, view_func, view_name\n )\n if django_rest_framework_name is not None:\n return django_rest_framework_name\n\n # Seems to be a Tastypie Resource. Need to resort to some stack inspection\n # to find a better name since its decorators don't wrap very well\n if view_name == \"tastypie.resources.wrapper\":\n tastypie_name = _get_tastypie_operation_name(request, view_func)\n if tastypie_name is not None:\n return tastypie_name\n\n return \"Controller/\" + view_name\n\n\ndef _get_django_rest_framework_name(request, view_func, view_name):\n try:\n from rest_framework.viewsets import ViewSetMixin\n except ImportError:\n return None\n\n kls = getattr(view_func, \"cls\", None)\n if isinstance(kls, type) and not issubclass(kls, ViewSetMixin):\n return None\n\n # Get 'actions' set in ViewSetMixin.as_view\n actions = getattr(view_func, \"actions\", None)\n if not actions or not isinstance(actions, dict):\n return None\n\n method_lower = request.method.lower()\n if method_lower not in actions:\n return None\n\n return \"Controller/{}.{}\".format(view_name, actions[method_lower])\n\n\ndef _get_tastypie_operation_name(request, view_func):\n try:\n from tastypie.resources import Resource\n except ImportError:\n return None\n\n if sys.version_info[0] == 2: # pragma: no cover\n try:\n wrapper = view_func.__closure__[0].cell_contents\n except (AttributeError, IndexError):\n return None\n elif sys.version_info[0] == 3:\n try:\n wrapper = view_func.__wrapped__\n except AttributeError:\n return None\n\n if not hasattr(wrapper, \"__closure__\") or len(wrapper.__closure__) != 2:\n return None\n\n instance = wrapper.__closure__[0].cell_contents\n if not isinstance(instance, Resource): # pragma: no cover\n return None\n\n method_name = wrapper.__closure__[1].cell_contents\n if not isinstance(method_name, string_types): # pragma: no cover\n return None\n\n if method_name.startswith(\"dispatch_\"): # pragma: no cover\n 
[verification_info continues: escaped pre- and post-patch contents of "src/scout_apm/django/middleware.py", matching the source file and golden diff shown earlier in this record] | 3,707 | 473
gh_patches_debug_21704 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-3407 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Different colors per completion column
I think it would add a lot of visual clarity to be able to style the completion list's “sections” separately.
For example in pentadactyl, I can very easily visually distinguish between URLs and URL titles based on color alone:

I'm missing this visual feedback in qutebrowser. As a rough first approximation it would be fine to give explicit colors to the Nth section (regardless of what it contains), but in principle, for this to work as well as possible, the section order would have to be somewhat consistent between commands (so the URL is always in the same place, the title is always in the same place, etc.)
--- END ISSUE ---
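The core of the request can be captured by keying the foreground color off the column index, wrapping with modulo so any number of configured colors works. A minimal standalone sketch of that selection logic (hypothetical helper and color list, not qutebrowser API):
```python
# Hypothetical sketch of per-column color selection.
COLUMN_COLORS = ["white", "#aaaaaa", "#777777"]  # e.g. name, URL/title, misc

def column_color(col):
    # Wrap around so any column index maps onto the configured list.
    return COLUMN_COLORS[col % len(COLUMN_COLORS)]
```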
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutebrowser/completion/completiondelegate.py`
Content:
```
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Completion item delegate for CompletionView.
21
22 We use this to be able to highlight parts of the text.
23 """
24
25 import re
26 import html
27
28 from PyQt5.QtWidgets import QStyle, QStyleOptionViewItem, QStyledItemDelegate
29 from PyQt5.QtCore import QRectF, QSize, Qt
30 from PyQt5.QtGui import (QIcon, QPalette, QTextDocument, QTextOption,
31 QAbstractTextDocumentLayout)
32
33 from qutebrowser.config import config
34 from qutebrowser.utils import qtutils, jinja
35
36
37 _cached_stylesheet = None
38
39
40 class CompletionItemDelegate(QStyledItemDelegate):
41
42 """Delegate used by CompletionView to draw individual items.
43
44 Mainly a cleaned up port of Qt's way to draw a TreeView item, except it
45 uses a QTextDocument to draw the text and add marking.
46
47 Original implementation:
48 qt/src/gui/styles/qcommonstyle.cpp:drawControl:2153
49
50 Attributes:
51 _opt: The QStyleOptionViewItem which is used.
52 _style: The style to be used.
53 _painter: The QPainter to be used.
54 _doc: The QTextDocument to be used.
55 """
56
57 # FIXME this is horribly slow when resizing.
58 # We should probably cache something in _get_textdoc or so, but as soon as
59 # we implement eliding that cache probably isn't worth much anymore...
60 # https://github.com/qutebrowser/qutebrowser/issues/121
61
62 def __init__(self, parent=None):
63 self._painter = None
64 self._opt = None
65 self._doc = None
66 self._style = None
67 super().__init__(parent)
68
69 def _draw_background(self):
70 """Draw the background of an ItemViewItem."""
71 self._style.drawPrimitive(self._style.PE_PanelItemViewItem, self._opt,
72 self._painter, self._opt.widget)
73
74 def _draw_icon(self):
75 """Draw the icon of an ItemViewItem."""
76 icon_rect = self._style.subElementRect(
77 self._style.SE_ItemViewItemDecoration, self._opt, self._opt.widget)
78 if not icon_rect.isValid():
79 # The rect seems to be wrong in all kind of ways if no icon should
80 # be displayed.
81 return
82
83 mode = QIcon.Normal
84 if not self._opt.state & QStyle.State_Enabled:
85 mode = QIcon.Disabled
86 elif self._opt.state & QStyle.State_Selected:
87 mode = QIcon.Selected
88 state = QIcon.On if self._opt.state & QStyle.State_Open else QIcon.Off
89 self._opt.icon.paint(self._painter, icon_rect,
90 self._opt.decorationAlignment, mode, state)
91
92 def _draw_text(self, index):
93 """Draw the text of an ItemViewItem.
94
95 This is the main part where we differ from the original implementation
96 in Qt: We use a QTextDocument to draw text.
97
98 Args:
99 index: The QModelIndex of the item to draw.
100 """
101 if not self._opt.text:
102 return
103
104 text_rect_ = self._style.subElementRect(
105 self._style.SE_ItemViewItemText, self._opt, self._opt.widget)
106 qtutils.ensure_valid(text_rect_)
107 margin = self._style.pixelMetric(QStyle.PM_FocusFrameHMargin,
108 self._opt, self._opt.widget) + 1
109 # remove width padding
110 text_rect = text_rect_.adjusted(margin, 0, -margin, 0)
111 qtutils.ensure_valid(text_rect)
112 # move text upwards a bit
113 if index.parent().isValid():
114 text_rect.adjust(0, -1, 0, -1)
115 else:
116 text_rect.adjust(0, -2, 0, -2)
117 self._painter.save()
118 state = self._opt.state
119 if state & QStyle.State_Enabled and state & QStyle.State_Active:
120 cg = QPalette.Normal
121 elif state & QStyle.State_Enabled:
122 cg = QPalette.Inactive
123 else:
124 cg = QPalette.Disabled
125
126 if state & QStyle.State_Selected:
127 self._painter.setPen(self._opt.palette.color(
128 cg, QPalette.HighlightedText))
129 # This is a dirty fix for the text jumping by one pixel for
130 # whatever reason.
131 text_rect.adjust(0, -1, 0, 0)
132 else:
133 self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))
134
135 if state & QStyle.State_Editing:
136 self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))
137 self._painter.drawRect(text_rect_.adjusted(0, 0, -1, -1))
138
139 self._painter.translate(text_rect.left(), text_rect.top())
140 self._get_textdoc(index)
141 self._draw_textdoc(text_rect)
142 self._painter.restore()
143
144 def _draw_textdoc(self, rect):
145 """Draw the QTextDocument of an item.
146
147 Args:
148 rect: The QRect to clip the drawing to.
149 """
150 # We can't use drawContents because then the color would be ignored.
151 clip = QRectF(0, 0, rect.width(), rect.height())
152 self._painter.save()
153
154 if self._opt.state & QStyle.State_Selected:
155 color = config.val.colors.completion.item.selected.fg
156 elif not self._opt.state & QStyle.State_Enabled:
157 color = config.val.colors.completion.category.fg
158 else:
159 color = config.val.colors.completion.fg
160 self._painter.setPen(color)
161
162 ctx = QAbstractTextDocumentLayout.PaintContext()
163 ctx.palette.setColor(QPalette.Text, self._painter.pen().color())
164 if clip.isValid():
165 self._painter.setClipRect(clip)
166 ctx.clip = clip
167 self._doc.documentLayout().draw(self._painter, ctx)
168 self._painter.restore()
169
170 def _get_textdoc(self, index):
171 """Create the QTextDocument of an item.
172
173 Args:
174 index: The QModelIndex of the item to draw.
175 """
176 # FIXME we probably should do eliding here. See
177 # qcommonstyle.cpp:viewItemDrawText
178 # https://github.com/qutebrowser/qutebrowser/issues/118
179 text_option = QTextOption()
180 if self._opt.features & QStyleOptionViewItem.WrapText:
181 text_option.setWrapMode(QTextOption.WordWrap)
182 else:
183 text_option.setWrapMode(QTextOption.ManualWrap)
184 text_option.setTextDirection(self._opt.direction)
185 text_option.setAlignment(QStyle.visualAlignment(
186 self._opt.direction, self._opt.displayAlignment))
187
188 if self._doc is not None:
189 self._doc.deleteLater()
190 self._doc = QTextDocument(self)
191 self._doc.setDefaultFont(self._opt.font)
192 self._doc.setDefaultTextOption(text_option)
193 self._doc.setDocumentMargin(2)
194
195 assert _cached_stylesheet is not None
196 self._doc.setDefaultStyleSheet(_cached_stylesheet)
197
198 if index.parent().isValid():
199 view = self.parent()
200 pattern = view.pattern
201 columns_to_filter = index.model().columns_to_filter(index)
202 if index.column() in columns_to_filter and pattern:
203 repl = r'<span class="highlight">\g<0></span>'
204 text = re.sub(re.escape(pattern).replace(r'\ ', r'|'),
205 repl, html.escape(self._opt.text),
206 flags=re.IGNORECASE)
207 self._doc.setHtml(text)
208 else:
209 self._doc.setPlainText(self._opt.text)
210 else:
211 self._doc.setHtml(
212 '<span style="font: {};">{}</span>'.format(
213 html.escape(config.val.fonts.completion.category),
214 html.escape(self._opt.text)))
215
216 def _draw_focus_rect(self):
217 """Draw the focus rectangle of an ItemViewItem."""
218 state = self._opt.state
219 if not state & QStyle.State_HasFocus:
220 return
221 o = self._opt
222 o.rect = self._style.subElementRect(
223 self._style.SE_ItemViewItemFocusRect, self._opt, self._opt.widget)
224 o.state |= QStyle.State_KeyboardFocusChange | QStyle.State_Item
225 qtutils.ensure_valid(o.rect)
226 if state & QStyle.State_Enabled:
227 cg = QPalette.Normal
228 else:
229 cg = QPalette.Disabled
230 if state & QStyle.State_Selected:
231 role = QPalette.Highlight
232 else:
233 role = QPalette.Window
234 o.backgroundColor = self._opt.palette.color(cg, role)
235 self._style.drawPrimitive(QStyle.PE_FrameFocusRect, o, self._painter,
236 self._opt.widget)
237
238 def sizeHint(self, option, index):
239 """Override sizeHint of QStyledItemDelegate.
240
241 Return the cell size based on the QTextDocument size, but might not
242 work correctly yet.
243
244 Args:
245 option: const QStyleOptionViewItem & option
246 index: const QModelIndex & index
247
248 Return:
249 A QSize with the recommended size.
250 """
251 value = index.data(Qt.SizeHintRole)
252 if value is not None:
253 return value
254 self._opt = QStyleOptionViewItem(option)
255 self.initStyleOption(self._opt, index)
256 self._style = self._opt.widget.style()
257 self._get_textdoc(index)
258 docsize = self._doc.size().toSize()
259 size = self._style.sizeFromContents(QStyle.CT_ItemViewItem, self._opt,
260 docsize, self._opt.widget)
261 qtutils.ensure_valid(size)
262 return size + QSize(10, 3)
263
264 def paint(self, painter, option, index):
265 """Override the QStyledItemDelegate paint function.
266
267 Args:
268 painter: QPainter * painter
269 option: const QStyleOptionViewItem & option
270 index: const QModelIndex & index
271 """
272 self._painter = painter
273 self._painter.save()
274 self._opt = QStyleOptionViewItem(option)
275 self.initStyleOption(self._opt, index)
276 self._style = self._opt.widget.style()
277
278 self._draw_background()
279 self._draw_icon()
280 self._draw_text(index)
281 self._draw_focus_rect()
282
283 self._painter.restore()
284
285
286 @config.change_filter('colors.completion.match.fg', function=True)
287 def _update_stylesheet():
288 """Update the cached stylesheet."""
289 stylesheet = """
290 .highlight {
291 color: {{ conf.colors.completion.match.fg }};
292 }
293 """
294 with jinja.environment.no_autoescape():
295 template = jinja.environment.from_string(stylesheet)
296
297 global _cached_stylesheet
298 _cached_stylesheet = template.render(conf=config.val)
299
300
301 def init():
302 """Initialize the cached stylesheet."""
303 _update_stylesheet()
304 config.instance.changed.connect(_update_stylesheet)
305
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qutebrowser/completion/completiondelegate.py b/qutebrowser/completion/completiondelegate.py
--- a/qutebrowser/completion/completiondelegate.py
+++ b/qutebrowser/completion/completiondelegate.py
@@ -138,10 +138,10 @@
self._painter.translate(text_rect.left(), text_rect.top())
self._get_textdoc(index)
- self._draw_textdoc(text_rect)
+ self._draw_textdoc(text_rect, index.column())
self._painter.restore()
- def _draw_textdoc(self, rect):
+ def _draw_textdoc(self, rect, col):
"""Draw the QTextDocument of an item.
Args:
@@ -156,7 +156,9 @@
elif not self._opt.state & QStyle.State_Enabled:
color = config.val.colors.completion.category.fg
else:
- color = config.val.colors.completion.fg
+ colors = config.val.colors.completion.fg
+ # if multiple colors are set, use different colors per column
+ color = colors[col % len(colors)]
self._painter.setPen(color)
ctx = QAbstractTextDocumentLayout.PaintContext()
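With the patch above, `colors.completion.fg` is indexed per completion column. A user-side sketch of how that might then be configured in `config.py` (assuming the setting accepts a list of colors after this change):
```python
# Assumption: colors.completion.fg accepts a list once this patch lands.
c.colors.completion.fg = ["white", "#bbbbbb", "#888888"]
# A single-element list keeps the old uniform look, since col % 1 == 0.
```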
| {"golden_diff": "diff --git a/qutebrowser/completion/completiondelegate.py b/qutebrowser/completion/completiondelegate.py\n--- a/qutebrowser/completion/completiondelegate.py\n+++ b/qutebrowser/completion/completiondelegate.py\n@@ -138,10 +138,10 @@\n \n self._painter.translate(text_rect.left(), text_rect.top())\n self._get_textdoc(index)\n- self._draw_textdoc(text_rect)\n+ self._draw_textdoc(text_rect, index.column())\n self._painter.restore()\n \n- def _draw_textdoc(self, rect):\n+ def _draw_textdoc(self, rect, col):\n \"\"\"Draw the QTextDocument of an item.\n \n Args:\n@@ -156,7 +156,9 @@\n elif not self._opt.state & QStyle.State_Enabled:\n color = config.val.colors.completion.category.fg\n else:\n- color = config.val.colors.completion.fg\n+ colors = config.val.colors.completion.fg\n+ # if multiple colors are set, use different colors per column\n+ color = colors[col % len(colors)]\n self._painter.setPen(color)\n \n ctx = QAbstractTextDocumentLayout.PaintContext()\n", "issue": "Different colors per completion column\nI think it would present a lot of visual clarity to be able to style the completion list's \u201csections\u201d separately.\n\nFor example in pentadactyl, I can very easily visually distinguish between URLs and URL titles based on color alone:\n\n\n\nI'm lacking this visual feedback in qutebrowser. I think as a rough first approximation it would be fine to give explicit colors to the Nth sections (regardless of what they are), but I think in principle for this to work as well as possible the section order would have to be somewhat consistent between commands (so the URL is always in the same place, the title is always in the same place, etc.)\n\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Completion item delegate for CompletionView.\n\nWe use this to be able to highlight parts of the text.\n\"\"\"\n\nimport re\nimport html\n\nfrom PyQt5.QtWidgets import QStyle, QStyleOptionViewItem, QStyledItemDelegate\nfrom PyQt5.QtCore import QRectF, QSize, Qt\nfrom PyQt5.QtGui import (QIcon, QPalette, QTextDocument, QTextOption,\n QAbstractTextDocumentLayout)\n\nfrom qutebrowser.config import config\nfrom qutebrowser.utils import qtutils, jinja\n\n\n_cached_stylesheet = None\n\n\nclass CompletionItemDelegate(QStyledItemDelegate):\n\n \"\"\"Delegate used by CompletionView to draw individual items.\n\n Mainly a cleaned up port of Qt's way to draw a TreeView item, except it\n uses a QTextDocument to draw the text and add marking.\n\n Original implementation:\n qt/src/gui/styles/qcommonstyle.cpp:drawControl:2153\n\n Attributes:\n _opt: The QStyleOptionViewItem which is used.\n _style: The style to be used.\n _painter: The QPainter to be used.\n _doc: The QTextDocument to be used.\n \"\"\"\n\n # FIXME this is horribly slow when resizing.\n # We should probably cache something in _get_textdoc or so, but as soon as\n # we implement eliding that cache probably isn't worth much anymore...\n # https://github.com/qutebrowser/qutebrowser/issues/121\n\n def __init__(self, parent=None):\n self._painter = None\n self._opt = None\n self._doc = None\n self._style = None\n super().__init__(parent)\n\n def _draw_background(self):\n \"\"\"Draw the background of an ItemViewItem.\"\"\"\n self._style.drawPrimitive(self._style.PE_PanelItemViewItem, self._opt,\n self._painter, self._opt.widget)\n\n def _draw_icon(self):\n \"\"\"Draw the icon of an ItemViewItem.\"\"\"\n icon_rect = self._style.subElementRect(\n self._style.SE_ItemViewItemDecoration, self._opt, self._opt.widget)\n if not icon_rect.isValid():\n # The rect seems to be wrong in all kind of ways if no icon should\n # be displayed.\n return\n\n mode = QIcon.Normal\n if not self._opt.state & QStyle.State_Enabled:\n mode = QIcon.Disabled\n elif self._opt.state & QStyle.State_Selected:\n mode = QIcon.Selected\n state = QIcon.On if self._opt.state & QStyle.State_Open else QIcon.Off\n self._opt.icon.paint(self._painter, icon_rect,\n self._opt.decorationAlignment, mode, state)\n\n def _draw_text(self, index):\n \"\"\"Draw the text of an ItemViewItem.\n\n This is the main part where we differ from the original implementation\n in Qt: We use a QTextDocument to draw text.\n\n Args:\n index: The QModelIndex of the item to draw.\n \"\"\"\n if not self._opt.text:\n return\n\n text_rect_ = self._style.subElementRect(\n self._style.SE_ItemViewItemText, self._opt, self._opt.widget)\n qtutils.ensure_valid(text_rect_)\n margin = self._style.pixelMetric(QStyle.PM_FocusFrameHMargin,\n self._opt, self._opt.widget) + 1\n # remove width padding\n text_rect = text_rect_.adjusted(margin, 0, -margin, 0)\n qtutils.ensure_valid(text_rect)\n # move text upwards a bit\n if index.parent().isValid():\n text_rect.adjust(0, -1, 0, -1)\n else:\n text_rect.adjust(0, -2, 0, -2)\n self._painter.save()\n state = self._opt.state\n if state & QStyle.State_Enabled and state & QStyle.State_Active:\n cg = QPalette.Normal\n elif state & QStyle.State_Enabled:\n cg = QPalette.Inactive\n else:\n cg = QPalette.Disabled\n\n if state & QStyle.State_Selected:\n self._painter.setPen(self._opt.palette.color(\n cg, QPalette.HighlightedText))\n # This is a dirty fix for the text jumping by one pixel for\n # whatever reason.\n text_rect.adjust(0, -1, 0, 
0)\n else:\n self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))\n\n if state & QStyle.State_Editing:\n self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))\n self._painter.drawRect(text_rect_.adjusted(0, 0, -1, -1))\n\n self._painter.translate(text_rect.left(), text_rect.top())\n self._get_textdoc(index)\n self._draw_textdoc(text_rect)\n self._painter.restore()\n\n def _draw_textdoc(self, rect):\n \"\"\"Draw the QTextDocument of an item.\n\n Args:\n rect: The QRect to clip the drawing to.\n \"\"\"\n # We can't use drawContents because then the color would be ignored.\n clip = QRectF(0, 0, rect.width(), rect.height())\n self._painter.save()\n\n if self._opt.state & QStyle.State_Selected:\n color = config.val.colors.completion.item.selected.fg\n elif not self._opt.state & QStyle.State_Enabled:\n color = config.val.colors.completion.category.fg\n else:\n color = config.val.colors.completion.fg\n self._painter.setPen(color)\n\n ctx = QAbstractTextDocumentLayout.PaintContext()\n ctx.palette.setColor(QPalette.Text, self._painter.pen().color())\n if clip.isValid():\n self._painter.setClipRect(clip)\n ctx.clip = clip\n self._doc.documentLayout().draw(self._painter, ctx)\n self._painter.restore()\n\n def _get_textdoc(self, index):\n \"\"\"Create the QTextDocument of an item.\n\n Args:\n index: The QModelIndex of the item to draw.\n \"\"\"\n # FIXME we probably should do eliding here. See\n # qcommonstyle.cpp:viewItemDrawText\n # https://github.com/qutebrowser/qutebrowser/issues/118\n text_option = QTextOption()\n if self._opt.features & QStyleOptionViewItem.WrapText:\n text_option.setWrapMode(QTextOption.WordWrap)\n else:\n text_option.setWrapMode(QTextOption.ManualWrap)\n text_option.setTextDirection(self._opt.direction)\n text_option.setAlignment(QStyle.visualAlignment(\n self._opt.direction, self._opt.displayAlignment))\n\n if self._doc is not None:\n self._doc.deleteLater()\n self._doc = QTextDocument(self)\n self._doc.setDefaultFont(self._opt.font)\n self._doc.setDefaultTextOption(text_option)\n self._doc.setDocumentMargin(2)\n\n assert _cached_stylesheet is not None\n self._doc.setDefaultStyleSheet(_cached_stylesheet)\n\n if index.parent().isValid():\n view = self.parent()\n pattern = view.pattern\n columns_to_filter = index.model().columns_to_filter(index)\n if index.column() in columns_to_filter and pattern:\n repl = r'<span class=\"highlight\">\\g<0></span>'\n text = re.sub(re.escape(pattern).replace(r'\\ ', r'|'),\n repl, html.escape(self._opt.text),\n flags=re.IGNORECASE)\n self._doc.setHtml(text)\n else:\n self._doc.setPlainText(self._opt.text)\n else:\n self._doc.setHtml(\n '<span style=\"font: {};\">{}</span>'.format(\n html.escape(config.val.fonts.completion.category),\n html.escape(self._opt.text)))\n\n def _draw_focus_rect(self):\n \"\"\"Draw the focus rectangle of an ItemViewItem.\"\"\"\n state = self._opt.state\n if not state & QStyle.State_HasFocus:\n return\n o = self._opt\n o.rect = self._style.subElementRect(\n self._style.SE_ItemViewItemFocusRect, self._opt, self._opt.widget)\n o.state |= QStyle.State_KeyboardFocusChange | QStyle.State_Item\n qtutils.ensure_valid(o.rect)\n if state & QStyle.State_Enabled:\n cg = QPalette.Normal\n else:\n cg = QPalette.Disabled\n if state & QStyle.State_Selected:\n role = QPalette.Highlight\n else:\n role = QPalette.Window\n o.backgroundColor = self._opt.palette.color(cg, role)\n self._style.drawPrimitive(QStyle.PE_FrameFocusRect, o, self._painter,\n self._opt.widget)\n\n def sizeHint(self, option, index):\n 
\"\"\"Override sizeHint of QStyledItemDelegate.\n\n Return the cell size based on the QTextDocument size, but might not\n work correctly yet.\n\n Args:\n option: const QStyleOptionViewItem & option\n index: const QModelIndex & index\n\n Return:\n A QSize with the recommended size.\n \"\"\"\n value = index.data(Qt.SizeHintRole)\n if value is not None:\n return value\n self._opt = QStyleOptionViewItem(option)\n self.initStyleOption(self._opt, index)\n self._style = self._opt.widget.style()\n self._get_textdoc(index)\n docsize = self._doc.size().toSize()\n size = self._style.sizeFromContents(QStyle.CT_ItemViewItem, self._opt,\n docsize, self._opt.widget)\n qtutils.ensure_valid(size)\n return size + QSize(10, 3)\n\n def paint(self, painter, option, index):\n \"\"\"Override the QStyledItemDelegate paint function.\n\n Args:\n painter: QPainter * painter\n option: const QStyleOptionViewItem & option\n index: const QModelIndex & index\n \"\"\"\n self._painter = painter\n self._painter.save()\n self._opt = QStyleOptionViewItem(option)\n self.initStyleOption(self._opt, index)\n self._style = self._opt.widget.style()\n\n self._draw_background()\n self._draw_icon()\n self._draw_text(index)\n self._draw_focus_rect()\n\n self._painter.restore()\n\n\[email protected]_filter('colors.completion.match.fg', function=True)\ndef _update_stylesheet():\n \"\"\"Update the cached stylesheet.\"\"\"\n stylesheet = \"\"\"\n .highlight {\n color: {{ conf.colors.completion.match.fg }};\n }\n \"\"\"\n with jinja.environment.no_autoescape():\n template = jinja.environment.from_string(stylesheet)\n\n global _cached_stylesheet\n _cached_stylesheet = template.render(conf=config.val)\n\n\ndef init():\n \"\"\"Initialize the cached stylesheet.\"\"\"\n _update_stylesheet()\n config.instance.changed.connect(_update_stylesheet)\n", "path": "qutebrowser/completion/completiondelegate.py"}], "after_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Completion item delegate for CompletionView.\n\nWe use this to be able to highlight parts of the text.\n\"\"\"\n\nimport re\nimport html\n\nfrom PyQt5.QtWidgets import QStyle, QStyleOptionViewItem, QStyledItemDelegate\nfrom PyQt5.QtCore import QRectF, QSize, Qt\nfrom PyQt5.QtGui import (QIcon, QPalette, QTextDocument, QTextOption,\n QAbstractTextDocumentLayout)\n\nfrom qutebrowser.config import config\nfrom qutebrowser.utils import qtutils, jinja\n\n\n_cached_stylesheet = None\n\n\nclass CompletionItemDelegate(QStyledItemDelegate):\n\n \"\"\"Delegate used by CompletionView to draw individual items.\n\n Mainly a cleaned up port of Qt's way to draw a TreeView item, except it\n uses a QTextDocument to draw the text and add marking.\n\n Original implementation:\n qt/src/gui/styles/qcommonstyle.cpp:drawControl:2153\n\n Attributes:\n _opt: The QStyleOptionViewItem which is used.\n _style: The style to be used.\n _painter: The QPainter to be used.\n _doc: The QTextDocument to be used.\n \"\"\"\n\n # FIXME this is horribly slow when resizing.\n # We should probably cache something in _get_textdoc or so, but as soon as\n # we implement eliding that cache probably isn't worth much anymore...\n # https://github.com/qutebrowser/qutebrowser/issues/121\n\n def __init__(self, parent=None):\n self._painter = None\n self._opt = None\n self._doc = None\n self._style = None\n super().__init__(parent)\n\n def _draw_background(self):\n \"\"\"Draw the background of an ItemViewItem.\"\"\"\n self._style.drawPrimitive(self._style.PE_PanelItemViewItem, self._opt,\n self._painter, self._opt.widget)\n\n def _draw_icon(self):\n \"\"\"Draw the icon of an ItemViewItem.\"\"\"\n icon_rect = self._style.subElementRect(\n self._style.SE_ItemViewItemDecoration, self._opt, self._opt.widget)\n if not icon_rect.isValid():\n # The rect seems to be wrong in all kind of ways if no icon should\n # be displayed.\n return\n\n mode = QIcon.Normal\n if not self._opt.state & QStyle.State_Enabled:\n mode = QIcon.Disabled\n elif self._opt.state & QStyle.State_Selected:\n mode = QIcon.Selected\n state = QIcon.On if self._opt.state & QStyle.State_Open else QIcon.Off\n self._opt.icon.paint(self._painter, icon_rect,\n self._opt.decorationAlignment, mode, state)\n\n def _draw_text(self, index):\n \"\"\"Draw the text of an ItemViewItem.\n\n This is the main part where we differ from the original implementation\n in Qt: We use a QTextDocument to draw text.\n\n Args:\n index: The QModelIndex of the item to draw.\n \"\"\"\n if not self._opt.text:\n return\n\n text_rect_ = self._style.subElementRect(\n self._style.SE_ItemViewItemText, self._opt, self._opt.widget)\n qtutils.ensure_valid(text_rect_)\n margin = self._style.pixelMetric(QStyle.PM_FocusFrameHMargin,\n self._opt, self._opt.widget) + 1\n # remove width padding\n text_rect = text_rect_.adjusted(margin, 0, -margin, 0)\n qtutils.ensure_valid(text_rect)\n # move text upwards a bit\n if index.parent().isValid():\n text_rect.adjust(0, -1, 0, -1)\n else:\n text_rect.adjust(0, -2, 0, -2)\n self._painter.save()\n state = self._opt.state\n if state & QStyle.State_Enabled and state & QStyle.State_Active:\n cg = QPalette.Normal\n elif state & QStyle.State_Enabled:\n cg = QPalette.Inactive\n else:\n cg = QPalette.Disabled\n\n if state & QStyle.State_Selected:\n self._painter.setPen(self._opt.palette.color(\n cg, QPalette.HighlightedText))\n # This is a dirty fix for the text jumping by one pixel for\n # whatever reason.\n text_rect.adjust(0, -1, 0, 
0)\n else:\n self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))\n\n if state & QStyle.State_Editing:\n self._painter.setPen(self._opt.palette.color(cg, QPalette.Text))\n self._painter.drawRect(text_rect_.adjusted(0, 0, -1, -1))\n\n self._painter.translate(text_rect.left(), text_rect.top())\n self._get_textdoc(index)\n self._draw_textdoc(text_rect, index.column())\n self._painter.restore()\n\n def _draw_textdoc(self, rect, col):\n \"\"\"Draw the QTextDocument of an item.\n\n Args:\n rect: The QRect to clip the drawing to.\n \"\"\"\n # We can't use drawContents because then the color would be ignored.\n clip = QRectF(0, 0, rect.width(), rect.height())\n self._painter.save()\n\n if self._opt.state & QStyle.State_Selected:\n color = config.val.colors.completion.item.selected.fg\n elif not self._opt.state & QStyle.State_Enabled:\n color = config.val.colors.completion.category.fg\n else:\n colors = config.val.colors.completion.fg\n # if multiple colors are set, use different colors per column\n color = colors[col % len(colors)]\n self._painter.setPen(color)\n\n ctx = QAbstractTextDocumentLayout.PaintContext()\n ctx.palette.setColor(QPalette.Text, self._painter.pen().color())\n if clip.isValid():\n self._painter.setClipRect(clip)\n ctx.clip = clip\n self._doc.documentLayout().draw(self._painter, ctx)\n self._painter.restore()\n\n def _get_textdoc(self, index):\n \"\"\"Create the QTextDocument of an item.\n\n Args:\n index: The QModelIndex of the item to draw.\n \"\"\"\n # FIXME we probably should do eliding here. See\n # qcommonstyle.cpp:viewItemDrawText\n # https://github.com/qutebrowser/qutebrowser/issues/118\n text_option = QTextOption()\n if self._opt.features & QStyleOptionViewItem.WrapText:\n text_option.setWrapMode(QTextOption.WordWrap)\n else:\n text_option.setWrapMode(QTextOption.ManualWrap)\n text_option.setTextDirection(self._opt.direction)\n text_option.setAlignment(QStyle.visualAlignment(\n self._opt.direction, self._opt.displayAlignment))\n\n if self._doc is not None:\n self._doc.deleteLater()\n self._doc = QTextDocument(self)\n self._doc.setDefaultFont(self._opt.font)\n self._doc.setDefaultTextOption(text_option)\n self._doc.setDocumentMargin(2)\n\n assert _cached_stylesheet is not None\n self._doc.setDefaultStyleSheet(_cached_stylesheet)\n\n if index.parent().isValid():\n view = self.parent()\n pattern = view.pattern\n columns_to_filter = index.model().columns_to_filter(index)\n if index.column() in columns_to_filter and pattern:\n repl = r'<span class=\"highlight\">\\g<0></span>'\n text = re.sub(re.escape(pattern).replace(r'\\ ', r'|'),\n repl, html.escape(self._opt.text),\n flags=re.IGNORECASE)\n self._doc.setHtml(text)\n else:\n self._doc.setPlainText(self._opt.text)\n else:\n self._doc.setHtml(\n '<span style=\"font: {};\">{}</span>'.format(\n html.escape(config.val.fonts.completion.category),\n html.escape(self._opt.text)))\n\n def _draw_focus_rect(self):\n \"\"\"Draw the focus rectangle of an ItemViewItem.\"\"\"\n state = self._opt.state\n if not state & QStyle.State_HasFocus:\n return\n o = self._opt\n o.rect = self._style.subElementRect(\n self._style.SE_ItemViewItemFocusRect, self._opt, self._opt.widget)\n o.state |= QStyle.State_KeyboardFocusChange | QStyle.State_Item\n qtutils.ensure_valid(o.rect)\n if state & QStyle.State_Enabled:\n cg = QPalette.Normal\n else:\n cg = QPalette.Disabled\n if state & QStyle.State_Selected:\n role = QPalette.Highlight\n else:\n role = QPalette.Window\n o.backgroundColor = self._opt.palette.color(cg, role)\n 
self._style.drawPrimitive(QStyle.PE_FrameFocusRect, o, self._painter,\n self._opt.widget)\n\n def sizeHint(self, option, index):\n \"\"\"Override sizeHint of QStyledItemDelegate.\n\n Return the cell size based on the QTextDocument size, but might not\n work correctly yet.\n\n Args:\n option: const QStyleOptionViewItem & option\n index: const QModelIndex & index\n\n Return:\n A QSize with the recommended size.\n \"\"\"\n value = index.data(Qt.SizeHintRole)\n if value is not None:\n return value\n self._opt = QStyleOptionViewItem(option)\n self.initStyleOption(self._opt, index)\n self._style = self._opt.widget.style()\n self._get_textdoc(index)\n docsize = self._doc.size().toSize()\n size = self._style.sizeFromContents(QStyle.CT_ItemViewItem, self._opt,\n docsize, self._opt.widget)\n qtutils.ensure_valid(size)\n return size + QSize(10, 3)\n\n def paint(self, painter, option, index):\n \"\"\"Override the QStyledItemDelegate paint function.\n\n Args:\n painter: QPainter * painter\n option: const QStyleOptionViewItem & option\n index: const QModelIndex & index\n \"\"\"\n self._painter = painter\n self._painter.save()\n self._opt = QStyleOptionViewItem(option)\n self.initStyleOption(self._opt, index)\n self._style = self._opt.widget.style()\n\n self._draw_background()\n self._draw_icon()\n self._draw_text(index)\n self._draw_focus_rect()\n\n self._painter.restore()\n\n\[email protected]_filter('colors.completion.match.fg', function=True)\ndef _update_stylesheet():\n \"\"\"Update the cached stylesheet.\"\"\"\n stylesheet = \"\"\"\n .highlight {\n color: {{ conf.colors.completion.match.fg }};\n }\n \"\"\"\n with jinja.environment.no_autoescape():\n template = jinja.environment.from_string(stylesheet)\n\n global _cached_stylesheet\n _cached_stylesheet = template.render(conf=config.val)\n\n\ndef init():\n \"\"\"Initialize the cached stylesheet.\"\"\"\n _update_stylesheet()\n config.instance.changed.connect(_update_stylesheet)\n", "path": "qutebrowser/completion/completiondelegate.py"}]} | 3,771 | 269 |
gh_patches_debug_26586 | rasdani/github-patches | git_diff | getsentry__sentry-python-2086 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DjangoCache `IndexError` raised when using keywords approach
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.22.1
### Steps to Reproduce
1. Install the latest version
2. Run the code with django cache and get the key using keyword approach
3. Observe `IndexError` issue
Snippet:
```python
from django.core.cache import cache
cache.get(key="my_key")  # <-- `IndexError`, because there is no `args[0]`, which is used for the span description
```
### Expected Result
No exception raised and value retrieved
### Actual Result
`IndexError` raised:
```python
IndexError
tuple index out of range
```
--- END ISSUE ---
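The traceback points at the instrumentation rather than Django: the cache key is assumed to arrive positionally, so a keyword-only call leaves `args` empty. A standalone sketch of just that failure mode (hypothetical names, not the SDK code itself):
```python
def describe(method_name, *args, **kwargs):
    # Mirrors the fragile pattern: assumes the key is always positional.
    return "{} {}".format(method_name, args[0])

describe("get", "my_key")      # "get my_key"
describe("get", key="my_key")  # IndexError: tuple index out of range
```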
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/django/caching.py`
Content:
```
1 import functools
2 from typing import TYPE_CHECKING
3
4 from django import VERSION as DJANGO_VERSION
5 from django.core.cache import CacheHandler
6
7 from sentry_sdk import Hub
8 from sentry_sdk.consts import OP, SPANDATA
9 from sentry_sdk._compat import text_type
10
11
12 if TYPE_CHECKING:
13 from typing import Any
14 from typing import Callable
15
16
17 METHODS_TO_INSTRUMENT = [
18 "get",
19 "get_many",
20 ]
21
22
23 def _patch_cache_method(cache, method_name):
24 # type: (CacheHandler, str) -> None
25 from sentry_sdk.integrations.django import DjangoIntegration
26
27 def _instrument_call(cache, method_name, original_method, args, kwargs):
28 # type: (CacheHandler, str, Callable[..., Any], Any, Any) -> Any
29 hub = Hub.current
30 integration = hub.get_integration(DjangoIntegration)
31 if integration is None or not integration.cache_spans:
32 return original_method(*args, **kwargs)
33
34 description = "{} {}".format(method_name, args[0])
35
36 with hub.start_span(op=OP.CACHE, description=description) as span:
37 value = original_method(*args, **kwargs)
38
39 if value:
40 span.set_data(SPANDATA.CACHE_HIT, True)
41
42 size = len(text_type(value).encode("utf-8"))
43 span.set_data(SPANDATA.CACHE_ITEM_SIZE, size)
44
45 else:
46 span.set_data(SPANDATA.CACHE_HIT, False)
47
48 return value
49
50 original_method = getattr(cache, method_name)
51
52 @functools.wraps(original_method)
53 def sentry_method(*args, **kwargs):
54 # type: (*Any, **Any) -> Any
55 return _instrument_call(cache, method_name, original_method, args, kwargs)
56
57 setattr(cache, method_name, sentry_method)
58
59
60 def _patch_cache(cache):
61 # type: (CacheHandler) -> None
62 if not hasattr(cache, "_sentry_patched"):
63 for method_name in METHODS_TO_INSTRUMENT:
64 _patch_cache_method(cache, method_name)
65 cache._sentry_patched = True
66
67
68 def patch_caching():
69 # type: () -> None
70 from sentry_sdk.integrations.django import DjangoIntegration
71
72 if not hasattr(CacheHandler, "_sentry_patched"):
73 if DJANGO_VERSION < (3, 2):
74 original_get_item = CacheHandler.__getitem__
75
76 @functools.wraps(original_get_item)
77 def sentry_get_item(self, alias):
78 # type: (CacheHandler, str) -> Any
79 cache = original_get_item(self, alias)
80
81 integration = Hub.current.get_integration(DjangoIntegration)
82 if integration and integration.cache_spans:
83 _patch_cache(cache)
84
85 return cache
86
87 CacheHandler.__getitem__ = sentry_get_item
88 CacheHandler._sentry_patched = True
89
90 else:
91 original_create_connection = CacheHandler.create_connection
92
93 @functools.wraps(original_create_connection)
94 def sentry_create_connection(self, alias):
95 # type: (CacheHandler, str) -> Any
96 cache = original_create_connection(self, alias)
97
98 integration = Hub.current.get_integration(DjangoIntegration)
99 if integration and integration.cache_spans:
100 _patch_cache(cache)
101
102 return cache
103
104 CacheHandler.create_connection = sentry_create_connection
105 CacheHandler._sentry_patched = True
106
```
Path: `sentry_sdk/consts.py`
Content:
```
1 from sentry_sdk._types import TYPE_CHECKING
2
3 if TYPE_CHECKING:
4 import sentry_sdk
5
6 from typing import Optional
7 from typing import Callable
8 from typing import Union
9 from typing import List
10 from typing import Type
11 from typing import Dict
12 from typing import Any
13 from typing import Sequence
14 from typing_extensions import TypedDict
15
16 from sentry_sdk.integrations import Integration
17
18 from sentry_sdk._types import (
19 BreadcrumbProcessor,
20 Event,
21 EventProcessor,
22 ProfilerMode,
23 TracesSampler,
24 TransactionProcessor,
25 )
26
27 # Experiments are feature flags to enable and disable certain unstable SDK
28 # functionality. Changing them from the defaults (`None`) in production
29 # code is highly discouraged. They are not subject to any stability
30 # guarantees such as the ones from semantic versioning.
31 Experiments = TypedDict(
32 "Experiments",
33 {
34 "max_spans": Optional[int],
35 "record_sql_params": Optional[bool],
36 # TODO: Remove these 2 profiling related experiments
37 "profiles_sample_rate": Optional[float],
38 "profiler_mode": Optional[ProfilerMode],
39 },
40 total=False,
41 )
42
43 DEFAULT_QUEUE_SIZE = 100
44 DEFAULT_MAX_BREADCRUMBS = 100
45
46 MATCH_ALL = r".*"
47
48
49 class INSTRUMENTER:
50 SENTRY = "sentry"
51 OTEL = "otel"
52
53
54 class SPANDATA:
55 """
56 Additional information describing the type of the span.
57 See: https://develop.sentry.dev/sdk/performance/span-data-conventions/
58 """
59
60 DB_SYSTEM = "db.system"
61 """
62 An identifier for the database management system (DBMS) product being used.
63 See: https://github.com/open-telemetry/opentelemetry-specification/blob/24de67b3827a4e3ab2515cd8ab62d5bcf837c586/specification/trace/semantic_conventions/database.md
64 Example: postgresql
65 """
66
67 CACHE_HIT = "cache.hit"
68 """
69 A boolean indicating whether the requested data was found in the cache.
70 Example: true
71 """
72
73 CACHE_ITEM_SIZE = "cache.item_size"
74 """
75 The size of the requested data in bytes.
76 Example: 58
77 """
78
79 HTTP_QUERY = "http.query"
80 """
81 The Query string present in the URL.
82 Example: ?foo=bar&bar=baz
83 """
84
85 HTTP_FRAGMENT = "http.fragment"
86 """
87 The Fragments present in the URL.
88 Example: #foo=bar
89 """
90
91 HTTP_METHOD = "http.method"
92 """
93 The HTTP method used.
94 Example: GET
95 """
96
97
98 class OP:
99 CACHE = "cache"
100 DB = "db"
101 DB_REDIS = "db.redis"
102 EVENT_DJANGO = "event.django"
103 FUNCTION = "function"
104 FUNCTION_AWS = "function.aws"
105 FUNCTION_GCP = "function.gcp"
106 GRPC_CLIENT = "grpc.client"
107 GRPC_SERVER = "grpc.server"
108 HTTP_CLIENT = "http.client"
109 HTTP_CLIENT_STREAM = "http.client.stream"
110 HTTP_SERVER = "http.server"
111 MIDDLEWARE_DJANGO = "middleware.django"
112 MIDDLEWARE_STARLETTE = "middleware.starlette"
113 MIDDLEWARE_STARLETTE_RECEIVE = "middleware.starlette.receive"
114 MIDDLEWARE_STARLETTE_SEND = "middleware.starlette.send"
115 MIDDLEWARE_STARLITE = "middleware.starlite"
116 MIDDLEWARE_STARLITE_RECEIVE = "middleware.starlite.receive"
117 MIDDLEWARE_STARLITE_SEND = "middleware.starlite.send"
118 QUEUE_SUBMIT_ARQ = "queue.submit.arq"
119 QUEUE_TASK_ARQ = "queue.task.arq"
120 QUEUE_SUBMIT_CELERY = "queue.submit.celery"
121 QUEUE_TASK_CELERY = "queue.task.celery"
122 QUEUE_TASK_RQ = "queue.task.rq"
123 QUEUE_SUBMIT_HUEY = "queue.submit.huey"
124 QUEUE_TASK_HUEY = "queue.task.huey"
125 SUBPROCESS = "subprocess"
126 SUBPROCESS_WAIT = "subprocess.wait"
127 SUBPROCESS_COMMUNICATE = "subprocess.communicate"
128 TEMPLATE_RENDER = "template.render"
129 VIEW_RENDER = "view.render"
130 VIEW_RESPONSE_RENDER = "view.response.render"
131 WEBSOCKET_SERVER = "websocket.server"
132 SOCKET_CONNECTION = "socket.connection"
133 SOCKET_DNS = "socket.dns"
134
135
136 # This type exists to trick mypy and PyCharm into thinking `init` and `Client`
137 # take these arguments (even though they take opaque **kwargs)
138 class ClientConstructor(object):
139 def __init__(
140 self,
141 dsn=None, # type: Optional[str]
142 max_breadcrumbs=DEFAULT_MAX_BREADCRUMBS, # type: int
143 release=None, # type: Optional[str]
144 environment=None, # type: Optional[str]
145 server_name=None, # type: Optional[str]
146 shutdown_timeout=2, # type: float
147 integrations=[], # type: Sequence[Integration] # noqa: B006
148 in_app_include=[], # type: List[str] # noqa: B006
149 in_app_exclude=[], # type: List[str] # noqa: B006
150 default_integrations=True, # type: bool
151 dist=None, # type: Optional[str]
152 transport=None, # type: Optional[Union[sentry_sdk.transport.Transport, Type[sentry_sdk.transport.Transport], Callable[[Event], None]]]
153 transport_queue_size=DEFAULT_QUEUE_SIZE, # type: int
154 sample_rate=1.0, # type: float
155 send_default_pii=False, # type: bool
156 http_proxy=None, # type: Optional[str]
157 https_proxy=None, # type: Optional[str]
158 ignore_errors=[], # type: Sequence[Union[type, str]] # noqa: B006
159 request_bodies="medium", # type: str
160 before_send=None, # type: Optional[EventProcessor]
161 before_breadcrumb=None, # type: Optional[BreadcrumbProcessor]
162 debug=False, # type: bool
163 attach_stacktrace=False, # type: bool
164 ca_certs=None, # type: Optional[str]
165 propagate_traces=True, # type: bool
166 traces_sample_rate=None, # type: Optional[float]
167 traces_sampler=None, # type: Optional[TracesSampler]
168 profiles_sample_rate=None, # type: Optional[float]
169 profiles_sampler=None, # type: Optional[TracesSampler]
170 profiler_mode=None, # type: Optional[ProfilerMode]
171 auto_enabling_integrations=True, # type: bool
172 auto_session_tracking=True, # type: bool
173 send_client_reports=True, # type: bool
174 _experiments={}, # type: Experiments # noqa: B006
175 proxy_headers=None, # type: Optional[Dict[str, str]]
176 instrumenter=INSTRUMENTER.SENTRY, # type: Optional[str]
177 before_send_transaction=None, # type: Optional[TransactionProcessor]
178 project_root=None, # type: Optional[str]
179 enable_tracing=None, # type: Optional[bool]
180 include_local_variables=True, # type: Optional[bool]
181 trace_propagation_targets=[ # noqa: B006
182 MATCH_ALL
183 ], # type: Optional[Sequence[str]]
184 functions_to_trace=[], # type: Sequence[str] # noqa: B006
185 event_scrubber=None, # type: Optional[sentry_sdk.scrubber.EventScrubber]
186 ):
187 # type: (...) -> None
188 pass
189
190
191 def _get_default_options():
192 # type: () -> Dict[str, Any]
193 import inspect
194
195 if hasattr(inspect, "getfullargspec"):
196 getargspec = inspect.getfullargspec
197 else:
198 getargspec = inspect.getargspec # type: ignore
199
200 a = getargspec(ClientConstructor.__init__)
201 defaults = a.defaults or ()
202 return dict(zip(a.args[-len(defaults) :], defaults))
203
204
205 DEFAULT_OPTIONS = _get_default_options()
206 del _get_default_options
207
208
209 VERSION = "1.22.1"
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/consts.py b/sentry_sdk/consts.py
--- a/sentry_sdk/consts.py
+++ b/sentry_sdk/consts.py
@@ -96,7 +96,7 @@
class OP:
- CACHE = "cache"
+ CACHE_GET_ITEM = "cache.get_item"
DB = "db"
DB_REDIS = "db.redis"
EVENT_DJANGO = "event.django"
diff --git a/sentry_sdk/integrations/django/caching.py b/sentry_sdk/integrations/django/caching.py
--- a/sentry_sdk/integrations/django/caching.py
+++ b/sentry_sdk/integrations/django/caching.py
@@ -20,6 +20,18 @@
]
+def _get_span_description(method_name, args, kwargs):
+ # type: (str, Any, Any) -> str
+ description = "{} ".format(method_name)
+
+ if args is not None and len(args) >= 1:
+ description += text_type(args[0])
+ elif kwargs is not None and "key" in kwargs:
+ description += text_type(kwargs["key"])
+
+ return description
+
+
def _patch_cache_method(cache, method_name):
# type: (CacheHandler, str) -> None
from sentry_sdk.integrations.django import DjangoIntegration
@@ -31,9 +43,9 @@
if integration is None or not integration.cache_spans:
return original_method(*args, **kwargs)
- description = "{} {}".format(method_name, args[0])
+ description = _get_span_description(method_name, args, kwargs)
- with hub.start_span(op=OP.CACHE, description=description) as span:
+ with hub.start_span(op=OP.CACHE_GET_ITEM, description=description) as span:
value = original_method(*args, **kwargs)
if value:
| {"golden_diff": "diff --git a/sentry_sdk/consts.py b/sentry_sdk/consts.py\n--- a/sentry_sdk/consts.py\n+++ b/sentry_sdk/consts.py\n@@ -96,7 +96,7 @@\n \n \n class OP:\n- CACHE = \"cache\"\n+ CACHE_GET_ITEM = \"cache.get_item\"\n DB = \"db\"\n DB_REDIS = \"db.redis\"\n EVENT_DJANGO = \"event.django\"\ndiff --git a/sentry_sdk/integrations/django/caching.py b/sentry_sdk/integrations/django/caching.py\n--- a/sentry_sdk/integrations/django/caching.py\n+++ b/sentry_sdk/integrations/django/caching.py\n@@ -20,6 +20,18 @@\n ]\n \n \n+def _get_span_description(method_name, args, kwargs):\n+ # type: (str, Any, Any) -> str\n+ description = \"{} \".format(method_name)\n+\n+ if args is not None and len(args) >= 1:\n+ description += text_type(args[0])\n+ elif kwargs is not None and \"key\" in kwargs:\n+ description += text_type(kwargs[\"key\"])\n+\n+ return description\n+\n+\n def _patch_cache_method(cache, method_name):\n # type: (CacheHandler, str) -> None\n from sentry_sdk.integrations.django import DjangoIntegration\n@@ -31,9 +43,9 @@\n if integration is None or not integration.cache_spans:\n return original_method(*args, **kwargs)\n \n- description = \"{} {}\".format(method_name, args[0])\n+ description = _get_span_description(method_name, args, kwargs)\n \n- with hub.start_span(op=OP.CACHE, description=description) as span:\n+ with hub.start_span(op=OP.CACHE_GET_ITEM, description=description) as span:\n value = original_method(*args, **kwargs)\n \n if value:\n", "issue": "DjangoCache `IndexError` raised when using keywords approach\n### How do you use Sentry?\r\n\r\nSentry Saas (sentry.io)\r\n\r\n### Version\r\n\r\n1.22.1\r\n\r\n### Steps to Reproduce\r\n\r\n1. Install the latest version\r\n2. Run the code with django cache and get the key using keyword approach\r\n3. 
Observe `IndexError` issue\r\n\r\nSnippet:\r\n```python\r\nfrom djang.core.cache import cache\r\n\r\ncache.get(key=\"my_key\") # <-- `IndexError` as there will no `args[0]` which is used for spans\r\n```\r\n\r\n### Expected Result\r\n\r\nNo exception raised and value retrieved\r\n\r\n### Actual Result\r\n`IndexError` raised:\r\n```python\r\nIndexError\r\ntuple index out of range\r\n```\n", "before_files": [{"content": "import functools\nfrom typing import TYPE_CHECKING\n\nfrom django import VERSION as DJANGO_VERSION\nfrom django.core.cache import CacheHandler\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.consts import OP, SPANDATA\nfrom sentry_sdk._compat import text_type\n\n\nif TYPE_CHECKING:\n from typing import Any\n from typing import Callable\n\n\nMETHODS_TO_INSTRUMENT = [\n \"get\",\n \"get_many\",\n]\n\n\ndef _patch_cache_method(cache, method_name):\n # type: (CacheHandler, str) -> None\n from sentry_sdk.integrations.django import DjangoIntegration\n\n def _instrument_call(cache, method_name, original_method, args, kwargs):\n # type: (CacheHandler, str, Callable[..., Any], Any, Any) -> Any\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is None or not integration.cache_spans:\n return original_method(*args, **kwargs)\n\n description = \"{} {}\".format(method_name, args[0])\n\n with hub.start_span(op=OP.CACHE, description=description) as span:\n value = original_method(*args, **kwargs)\n\n if value:\n span.set_data(SPANDATA.CACHE_HIT, True)\n\n size = len(text_type(value).encode(\"utf-8\"))\n span.set_data(SPANDATA.CACHE_ITEM_SIZE, size)\n\n else:\n span.set_data(SPANDATA.CACHE_HIT, False)\n\n return value\n\n original_method = getattr(cache, method_name)\n\n @functools.wraps(original_method)\n def sentry_method(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n return _instrument_call(cache, method_name, original_method, args, kwargs)\n\n setattr(cache, method_name, sentry_method)\n\n\ndef _patch_cache(cache):\n # type: (CacheHandler) -> None\n if not hasattr(cache, \"_sentry_patched\"):\n for method_name in METHODS_TO_INSTRUMENT:\n _patch_cache_method(cache, method_name)\n cache._sentry_patched = True\n\n\ndef patch_caching():\n # type: () -> None\n from sentry_sdk.integrations.django import DjangoIntegration\n\n if not hasattr(CacheHandler, \"_sentry_patched\"):\n if DJANGO_VERSION < (3, 2):\n original_get_item = CacheHandler.__getitem__\n\n @functools.wraps(original_get_item)\n def sentry_get_item(self, alias):\n # type: (CacheHandler, str) -> Any\n cache = original_get_item(self, alias)\n\n integration = Hub.current.get_integration(DjangoIntegration)\n if integration and integration.cache_spans:\n _patch_cache(cache)\n\n return cache\n\n CacheHandler.__getitem__ = sentry_get_item\n CacheHandler._sentry_patched = True\n\n else:\n original_create_connection = CacheHandler.create_connection\n\n @functools.wraps(original_create_connection)\n def sentry_create_connection(self, alias):\n # type: (CacheHandler, str) -> Any\n cache = original_create_connection(self, alias)\n\n integration = Hub.current.get_integration(DjangoIntegration)\n if integration and integration.cache_spans:\n _patch_cache(cache)\n\n return cache\n\n CacheHandler.create_connection = sentry_create_connection\n CacheHandler._sentry_patched = True\n", "path": "sentry_sdk/integrations/django/caching.py"}, {"content": "from sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n import sentry_sdk\n\n from typing import Optional\n from typing import Callable\n from 
typing import Union\n from typing import List\n from typing import Type\n from typing import Dict\n from typing import Any\n from typing import Sequence\n from typing_extensions import TypedDict\n\n from sentry_sdk.integrations import Integration\n\n from sentry_sdk._types import (\n BreadcrumbProcessor,\n Event,\n EventProcessor,\n ProfilerMode,\n TracesSampler,\n TransactionProcessor,\n )\n\n # Experiments are feature flags to enable and disable certain unstable SDK\n # functionality. Changing them from the defaults (`None`) in production\n # code is highly discouraged. They are not subject to any stability\n # guarantees such as the ones from semantic versioning.\n Experiments = TypedDict(\n \"Experiments\",\n {\n \"max_spans\": Optional[int],\n \"record_sql_params\": Optional[bool],\n # TODO: Remove these 2 profiling related experiments\n \"profiles_sample_rate\": Optional[float],\n \"profiler_mode\": Optional[ProfilerMode],\n },\n total=False,\n )\n\nDEFAULT_QUEUE_SIZE = 100\nDEFAULT_MAX_BREADCRUMBS = 100\n\nMATCH_ALL = r\".*\"\n\n\nclass INSTRUMENTER:\n SENTRY = \"sentry\"\n OTEL = \"otel\"\n\n\nclass SPANDATA:\n \"\"\"\n Additional information describing the type of the span.\n See: https://develop.sentry.dev/sdk/performance/span-data-conventions/\n \"\"\"\n\n DB_SYSTEM = \"db.system\"\n \"\"\"\n An identifier for the database management system (DBMS) product being used.\n See: https://github.com/open-telemetry/opentelemetry-specification/blob/24de67b3827a4e3ab2515cd8ab62d5bcf837c586/specification/trace/semantic_conventions/database.md\n Example: postgresql\n \"\"\"\n\n CACHE_HIT = \"cache.hit\"\n \"\"\"\n A boolean indicating whether the requested data was found in the cache.\n Example: true\n \"\"\"\n\n CACHE_ITEM_SIZE = \"cache.item_size\"\n \"\"\"\n The size of the requested data in bytes.\n Example: 58\n \"\"\"\n\n HTTP_QUERY = \"http.query\"\n \"\"\"\n The Query string present in the URL.\n Example: ?foo=bar&bar=baz\n \"\"\"\n\n HTTP_FRAGMENT = \"http.fragment\"\n \"\"\"\n The Fragments present in the URL.\n Example: #foo=bar\n \"\"\"\n\n HTTP_METHOD = \"http.method\"\n \"\"\"\n The HTTP method used.\n Example: GET\n \"\"\"\n\n\nclass OP:\n CACHE = \"cache\"\n DB = \"db\"\n DB_REDIS = \"db.redis\"\n EVENT_DJANGO = \"event.django\"\n FUNCTION = \"function\"\n FUNCTION_AWS = \"function.aws\"\n FUNCTION_GCP = \"function.gcp\"\n GRPC_CLIENT = \"grpc.client\"\n GRPC_SERVER = \"grpc.server\"\n HTTP_CLIENT = \"http.client\"\n HTTP_CLIENT_STREAM = \"http.client.stream\"\n HTTP_SERVER = \"http.server\"\n MIDDLEWARE_DJANGO = \"middleware.django\"\n MIDDLEWARE_STARLETTE = \"middleware.starlette\"\n MIDDLEWARE_STARLETTE_RECEIVE = \"middleware.starlette.receive\"\n MIDDLEWARE_STARLETTE_SEND = \"middleware.starlette.send\"\n MIDDLEWARE_STARLITE = \"middleware.starlite\"\n MIDDLEWARE_STARLITE_RECEIVE = \"middleware.starlite.receive\"\n MIDDLEWARE_STARLITE_SEND = \"middleware.starlite.send\"\n QUEUE_SUBMIT_ARQ = \"queue.submit.arq\"\n QUEUE_TASK_ARQ = \"queue.task.arq\"\n QUEUE_SUBMIT_CELERY = \"queue.submit.celery\"\n QUEUE_TASK_CELERY = \"queue.task.celery\"\n QUEUE_TASK_RQ = \"queue.task.rq\"\n QUEUE_SUBMIT_HUEY = \"queue.submit.huey\"\n QUEUE_TASK_HUEY = \"queue.task.huey\"\n SUBPROCESS = \"subprocess\"\n SUBPROCESS_WAIT = \"subprocess.wait\"\n SUBPROCESS_COMMUNICATE = \"subprocess.communicate\"\n TEMPLATE_RENDER = \"template.render\"\n VIEW_RENDER = \"view.render\"\n VIEW_RESPONSE_RENDER = \"view.response.render\"\n WEBSOCKET_SERVER = \"websocket.server\"\n SOCKET_CONNECTION = 
\"socket.connection\"\n SOCKET_DNS = \"socket.dns\"\n\n\n# This type exists to trick mypy and PyCharm into thinking `init` and `Client`\n# take these arguments (even though they take opaque **kwargs)\nclass ClientConstructor(object):\n def __init__(\n self,\n dsn=None, # type: Optional[str]\n max_breadcrumbs=DEFAULT_MAX_BREADCRUMBS, # type: int\n release=None, # type: Optional[str]\n environment=None, # type: Optional[str]\n server_name=None, # type: Optional[str]\n shutdown_timeout=2, # type: float\n integrations=[], # type: Sequence[Integration] # noqa: B006\n in_app_include=[], # type: List[str] # noqa: B006\n in_app_exclude=[], # type: List[str] # noqa: B006\n default_integrations=True, # type: bool\n dist=None, # type: Optional[str]\n transport=None, # type: Optional[Union[sentry_sdk.transport.Transport, Type[sentry_sdk.transport.Transport], Callable[[Event], None]]]\n transport_queue_size=DEFAULT_QUEUE_SIZE, # type: int\n sample_rate=1.0, # type: float\n send_default_pii=False, # type: bool\n http_proxy=None, # type: Optional[str]\n https_proxy=None, # type: Optional[str]\n ignore_errors=[], # type: Sequence[Union[type, str]] # noqa: B006\n request_bodies=\"medium\", # type: str\n before_send=None, # type: Optional[EventProcessor]\n before_breadcrumb=None, # type: Optional[BreadcrumbProcessor]\n debug=False, # type: bool\n attach_stacktrace=False, # type: bool\n ca_certs=None, # type: Optional[str]\n propagate_traces=True, # type: bool\n traces_sample_rate=None, # type: Optional[float]\n traces_sampler=None, # type: Optional[TracesSampler]\n profiles_sample_rate=None, # type: Optional[float]\n profiles_sampler=None, # type: Optional[TracesSampler]\n profiler_mode=None, # type: Optional[ProfilerMode]\n auto_enabling_integrations=True, # type: bool\n auto_session_tracking=True, # type: bool\n send_client_reports=True, # type: bool\n _experiments={}, # type: Experiments # noqa: B006\n proxy_headers=None, # type: Optional[Dict[str, str]]\n instrumenter=INSTRUMENTER.SENTRY, # type: Optional[str]\n before_send_transaction=None, # type: Optional[TransactionProcessor]\n project_root=None, # type: Optional[str]\n enable_tracing=None, # type: Optional[bool]\n include_local_variables=True, # type: Optional[bool]\n trace_propagation_targets=[ # noqa: B006\n MATCH_ALL\n ], # type: Optional[Sequence[str]]\n functions_to_trace=[], # type: Sequence[str] # noqa: B006\n event_scrubber=None, # type: Optional[sentry_sdk.scrubber.EventScrubber]\n ):\n # type: (...) 
-> None\n pass\n\n\ndef _get_default_options():\n # type: () -> Dict[str, Any]\n import inspect\n\n if hasattr(inspect, \"getfullargspec\"):\n getargspec = inspect.getfullargspec\n else:\n getargspec = inspect.getargspec # type: ignore\n\n a = getargspec(ClientConstructor.__init__)\n defaults = a.defaults or ()\n return dict(zip(a.args[-len(defaults) :], defaults))\n\n\nDEFAULT_OPTIONS = _get_default_options()\ndel _get_default_options\n\n\nVERSION = \"1.22.1\"\n", "path": "sentry_sdk/consts.py"}], "after_files": [{"content": "import functools\nfrom typing import TYPE_CHECKING\n\nfrom django import VERSION as DJANGO_VERSION\nfrom django.core.cache import CacheHandler\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.consts import OP, SPANDATA\nfrom sentry_sdk._compat import text_type\n\n\nif TYPE_CHECKING:\n from typing import Any\n from typing import Callable\n\n\nMETHODS_TO_INSTRUMENT = [\n \"get\",\n \"get_many\",\n]\n\n\ndef _get_span_description(method_name, args, kwargs):\n # type: (str, Any, Any) -> str\n description = \"{} \".format(method_name)\n\n if args is not None and len(args) >= 1:\n description += text_type(args[0])\n elif kwargs is not None and \"key\" in kwargs:\n description += text_type(kwargs[\"key\"])\n\n return description\n\n\ndef _patch_cache_method(cache, method_name):\n # type: (CacheHandler, str) -> None\n from sentry_sdk.integrations.django import DjangoIntegration\n\n def _instrument_call(cache, method_name, original_method, args, kwargs):\n # type: (CacheHandler, str, Callable[..., Any], Any, Any) -> Any\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is None or not integration.cache_spans:\n return original_method(*args, **kwargs)\n\n description = _get_span_description(method_name, args, kwargs)\n\n with hub.start_span(op=OP.CACHE_GET_ITEM, description=description) as span:\n value = original_method(*args, **kwargs)\n\n if value:\n span.set_data(SPANDATA.CACHE_HIT, True)\n\n size = len(text_type(value).encode(\"utf-8\"))\n span.set_data(SPANDATA.CACHE_ITEM_SIZE, size)\n\n else:\n span.set_data(SPANDATA.CACHE_HIT, False)\n\n return value\n\n original_method = getattr(cache, method_name)\n\n @functools.wraps(original_method)\n def sentry_method(*args, **kwargs):\n # type: (*Any, **Any) -> Any\n return _instrument_call(cache, method_name, original_method, args, kwargs)\n\n setattr(cache, method_name, sentry_method)\n\n\ndef _patch_cache(cache):\n # type: (CacheHandler) -> None\n if not hasattr(cache, \"_sentry_patched\"):\n for method_name in METHODS_TO_INSTRUMENT:\n _patch_cache_method(cache, method_name)\n cache._sentry_patched = True\n\n\ndef patch_caching():\n # type: () -> None\n from sentry_sdk.integrations.django import DjangoIntegration\n\n if not hasattr(CacheHandler, \"_sentry_patched\"):\n if DJANGO_VERSION < (3, 2):\n original_get_item = CacheHandler.__getitem__\n\n @functools.wraps(original_get_item)\n def sentry_get_item(self, alias):\n # type: (CacheHandler, str) -> Any\n cache = original_get_item(self, alias)\n\n integration = Hub.current.get_integration(DjangoIntegration)\n if integration and integration.cache_spans:\n _patch_cache(cache)\n\n return cache\n\n CacheHandler.__getitem__ = sentry_get_item\n CacheHandler._sentry_patched = True\n\n else:\n original_create_connection = CacheHandler.create_connection\n\n @functools.wraps(original_create_connection)\n def sentry_create_connection(self, alias):\n # type: (CacheHandler, str) -> Any\n cache = original_create_connection(self, alias)\n\n integration 
= Hub.current.get_integration(DjangoIntegration)\n if integration and integration.cache_spans:\n _patch_cache(cache)\n\n return cache\n\n CacheHandler.create_connection = sentry_create_connection\n CacheHandler._sentry_patched = True\n", "path": "sentry_sdk/integrations/django/caching.py"}, {"content": "from sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n import sentry_sdk\n\n from typing import Optional\n from typing import Callable\n from typing import Union\n from typing import List\n from typing import Type\n from typing import Dict\n from typing import Any\n from typing import Sequence\n from typing_extensions import TypedDict\n\n from sentry_sdk.integrations import Integration\n\n from sentry_sdk._types import (\n BreadcrumbProcessor,\n Event,\n EventProcessor,\n ProfilerMode,\n TracesSampler,\n TransactionProcessor,\n )\n\n # Experiments are feature flags to enable and disable certain unstable SDK\n # functionality. Changing them from the defaults (`None`) in production\n # code is highly discouraged. They are not subject to any stability\n # guarantees such as the ones from semantic versioning.\n Experiments = TypedDict(\n \"Experiments\",\n {\n \"max_spans\": Optional[int],\n \"record_sql_params\": Optional[bool],\n # TODO: Remove these 2 profiling related experiments\n \"profiles_sample_rate\": Optional[float],\n \"profiler_mode\": Optional[ProfilerMode],\n },\n total=False,\n )\n\nDEFAULT_QUEUE_SIZE = 100\nDEFAULT_MAX_BREADCRUMBS = 100\n\nMATCH_ALL = r\".*\"\n\n\nclass INSTRUMENTER:\n SENTRY = \"sentry\"\n OTEL = \"otel\"\n\n\nclass SPANDATA:\n \"\"\"\n Additional information describing the type of the span.\n See: https://develop.sentry.dev/sdk/performance/span-data-conventions/\n \"\"\"\n\n DB_SYSTEM = \"db.system\"\n \"\"\"\n An identifier for the database management system (DBMS) product being used.\n See: https://github.com/open-telemetry/opentelemetry-specification/blob/24de67b3827a4e3ab2515cd8ab62d5bcf837c586/specification/trace/semantic_conventions/database.md\n Example: postgresql\n \"\"\"\n\n CACHE_HIT = \"cache.hit\"\n \"\"\"\n A boolean indicating whether the requested data was found in the cache.\n Example: true\n \"\"\"\n\n CACHE_ITEM_SIZE = \"cache.item_size\"\n \"\"\"\n The size of the requested data in bytes.\n Example: 58\n \"\"\"\n\n HTTP_QUERY = \"http.query\"\n \"\"\"\n The Query string present in the URL.\n Example: ?foo=bar&bar=baz\n \"\"\"\n\n HTTP_FRAGMENT = \"http.fragment\"\n \"\"\"\n The Fragments present in the URL.\n Example: #foo=bar\n \"\"\"\n\n HTTP_METHOD = \"http.method\"\n \"\"\"\n The HTTP method used.\n Example: GET\n \"\"\"\n\n\nclass OP:\n CACHE_GET_ITEM = \"cache.get_item\"\n DB = \"db\"\n DB_REDIS = \"db.redis\"\n EVENT_DJANGO = \"event.django\"\n FUNCTION = \"function\"\n FUNCTION_AWS = \"function.aws\"\n FUNCTION_GCP = \"function.gcp\"\n GRPC_CLIENT = \"grpc.client\"\n GRPC_SERVER = \"grpc.server\"\n HTTP_CLIENT = \"http.client\"\n HTTP_CLIENT_STREAM = \"http.client.stream\"\n HTTP_SERVER = \"http.server\"\n MIDDLEWARE_DJANGO = \"middleware.django\"\n MIDDLEWARE_STARLETTE = \"middleware.starlette\"\n MIDDLEWARE_STARLETTE_RECEIVE = \"middleware.starlette.receive\"\n MIDDLEWARE_STARLETTE_SEND = \"middleware.starlette.send\"\n MIDDLEWARE_STARLITE = \"middleware.starlite\"\n MIDDLEWARE_STARLITE_RECEIVE = \"middleware.starlite.receive\"\n MIDDLEWARE_STARLITE_SEND = \"middleware.starlite.send\"\n QUEUE_SUBMIT_ARQ = \"queue.submit.arq\"\n QUEUE_TASK_ARQ = \"queue.task.arq\"\n QUEUE_SUBMIT_CELERY = \"queue.submit.celery\"\n 
QUEUE_TASK_CELERY = \"queue.task.celery\"\n QUEUE_TASK_RQ = \"queue.task.rq\"\n QUEUE_SUBMIT_HUEY = \"queue.submit.huey\"\n QUEUE_TASK_HUEY = \"queue.task.huey\"\n SUBPROCESS = \"subprocess\"\n SUBPROCESS_WAIT = \"subprocess.wait\"\n SUBPROCESS_COMMUNICATE = \"subprocess.communicate\"\n TEMPLATE_RENDER = \"template.render\"\n VIEW_RENDER = \"view.render\"\n VIEW_RESPONSE_RENDER = \"view.response.render\"\n WEBSOCKET_SERVER = \"websocket.server\"\n SOCKET_CONNECTION = \"socket.connection\"\n SOCKET_DNS = \"socket.dns\"\n\n\n# This type exists to trick mypy and PyCharm into thinking `init` and `Client`\n# take these arguments (even though they take opaque **kwargs)\nclass ClientConstructor(object):\n def __init__(\n self,\n dsn=None, # type: Optional[str]\n max_breadcrumbs=DEFAULT_MAX_BREADCRUMBS, # type: int\n release=None, # type: Optional[str]\n environment=None, # type: Optional[str]\n server_name=None, # type: Optional[str]\n shutdown_timeout=2, # type: float\n integrations=[], # type: Sequence[Integration] # noqa: B006\n in_app_include=[], # type: List[str] # noqa: B006\n in_app_exclude=[], # type: List[str] # noqa: B006\n default_integrations=True, # type: bool\n dist=None, # type: Optional[str]\n transport=None, # type: Optional[Union[sentry_sdk.transport.Transport, Type[sentry_sdk.transport.Transport], Callable[[Event], None]]]\n transport_queue_size=DEFAULT_QUEUE_SIZE, # type: int\n sample_rate=1.0, # type: float\n send_default_pii=False, # type: bool\n http_proxy=None, # type: Optional[str]\n https_proxy=None, # type: Optional[str]\n ignore_errors=[], # type: Sequence[Union[type, str]] # noqa: B006\n request_bodies=\"medium\", # type: str\n before_send=None, # type: Optional[EventProcessor]\n before_breadcrumb=None, # type: Optional[BreadcrumbProcessor]\n debug=False, # type: bool\n attach_stacktrace=False, # type: bool\n ca_certs=None, # type: Optional[str]\n propagate_traces=True, # type: bool\n traces_sample_rate=None, # type: Optional[float]\n traces_sampler=None, # type: Optional[TracesSampler]\n profiles_sample_rate=None, # type: Optional[float]\n profiles_sampler=None, # type: Optional[TracesSampler]\n profiler_mode=None, # type: Optional[ProfilerMode]\n auto_enabling_integrations=True, # type: bool\n auto_session_tracking=True, # type: bool\n send_client_reports=True, # type: bool\n _experiments={}, # type: Experiments # noqa: B006\n proxy_headers=None, # type: Optional[Dict[str, str]]\n instrumenter=INSTRUMENTER.SENTRY, # type: Optional[str]\n before_send_transaction=None, # type: Optional[TransactionProcessor]\n project_root=None, # type: Optional[str]\n enable_tracing=None, # type: Optional[bool]\n include_local_variables=True, # type: Optional[bool]\n trace_propagation_targets=[ # noqa: B006\n MATCH_ALL\n ], # type: Optional[Sequence[str]]\n functions_to_trace=[], # type: Sequence[str] # noqa: B006\n event_scrubber=None, # type: Optional[sentry_sdk.scrubber.EventScrubber]\n ):\n # type: (...) -> None\n pass\n\n\ndef _get_default_options():\n # type: () -> Dict[str, Any]\n import inspect\n\n if hasattr(inspect, \"getfullargspec\"):\n getargspec = inspect.getfullargspec\n else:\n getargspec = inspect.getargspec # type: ignore\n\n a = getargspec(ClientConstructor.__init__)\n defaults = a.defaults or ()\n return dict(zip(a.args[-len(defaults) :], defaults))\n\n\nDEFAULT_OPTIONS = _get_default_options()\ndel _get_default_options\n\n\nVERSION = \"1.22.1\"\n", "path": "sentry_sdk/consts.py"}]} | 3,762 | 421 |
gh_patches_debug_24978 | rasdani/github-patches | git_diff | chainer__chainer-310 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
split_axis.backward fails on incomplete gradients
When there is a None in the grad_outputs, split_axis fails to backprop the incomplete gradients.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/split_axis.py`
Content:
```
1 import collections
2
3 import numpy
4
5 from chainer import cuda
6 from chainer import function
7 from chainer.utils import type_check
8
9
10 _args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'
11 _preamble = '''
12 #define COPY(statement) \
13 int l = i / (rdim * cdimy); \
14 int c = i / rdim % cdimy + coffset; \
15 int r = i % rdim; \
16 int idx = r + rdim * (c + cdimx * l); \
17 statement;
18 '''
19
20
21 class SplitAxis(function.Function):
22
23 """Function that splits multiple arrays towards the specified axis."""
24
25 def __init__(self, indices_or_sections, axis):
26 if not isinstance(indices_or_sections, (int, collections.Iterable)):
27 raise TypeError('indices_or_sections must be integer or 1-D array')
28 self.indices_or_sections = indices_or_sections
29 self.axis = axis
30
31 def check_type_forward(self, in_types):
32 type_check.expect(in_types.size() == 1)
33 type_check.expect(in_types[0].ndim >= self.axis)
34
35 if isinstance(self.indices_or_sections, collections.Iterable):
36 max_index = type_check.Variable(
37 self.indices_or_sections[-1], 'max_index')
38 type_check.expect(in_types[0].shape[self.axis] > max_index)
39 else:
40 sections = type_check.Variable(
41 self.indices_or_sections, 'sections')
42 type_check.expect(in_types[0].shape[self.axis] % sections == 0)
43
44 def forward_cpu(self, x):
45 if isinstance(self.indices_or_sections, collections.Iterable):
46 cdimx = x[0].shape[self.axis]
47 ind = list(self.indices_or_sections)
48 ind.append(cdimx)
49 prev_i = 0
50 for i in ind:
51 cdimy = max(0, min(i, cdimx) - prev_i)
52 if cdimy == 0:
53 raise ValueError('Not support if shape contains 0')
54 prev_i = i
55 return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))
56
57 def forward_gpu(self, x):
58 xshape = x[0].shape
59 self.cdimx = xshape[self.axis]
60 self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)
61
62 if isinstance(self.indices_or_sections, collections.Iterable):
63 ind = list(self.indices_or_sections)
64 ind.append(self.cdimx)
65 else:
66 sec = self.indices_or_sections
67 if self.cdimx % sec:
68 raise ValueError(
69 'array split does not result in an equal division')
70 ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)
71 ys = []
72 kernel = cuda.elementwise(
73 _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)
74 prev_i = 0
75 for i in ind:
76 cdimy = max(0, min(i, self.cdimx) - prev_i)
77 s = list(xshape)
78 s[self.axis] = cdimy
79 y = cuda.empty(tuple(s), dtype=x[0].dtype)
80 if cdimy == 0:
81 raise ValueError('Not support if shape contains 0')
82 kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)
83 prev_i = i
84 ys.append(y)
85 return tuple(ys)
86
87 def backward_cpu(self, x, gys):
88 return numpy.concatenate(gys, axis=self.axis),
89
90 def backward_gpu(self, x, gys):
91 gx = cuda.empty_like(x[0])
92 coffset = 0
93 kernel = cuda.elementwise(
94 _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)
95 for gy in gys:
96 cdimy = gy.shape[self.axis]
97 if cdimy != 0:
98 kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)
99 coffset += cdimy
100 return gx,
101
102
103 def split_axis(x, indices_or_sections, axis):
104 """Splits given variables along an axis.
105
106 Args:
107 x (tuple of Variables): Variables to be split.
108 indices_or_sections (int or 1-D array): If this argument is an integer,
109 N, the array will be divided into N equal arrays along axis.
110 If it is a 1-D array of sorted integers, it
111 indicates the positions where the array is split.
112 axis (int): Axis that the input array is split along.
113
114 Returns:
115 ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects
116 if the number of outputs is more than 1 or
117 :class:`~chainer.Variable` otherwise.
118
119 .. note::
120 This function raises ``ValueError`` if at least
121 one of the outputs is splitted to zero-size
122 (i.e. `axis`-th value of its shape is zero).
123
124 """
125 return SplitAxis(indices_or_sections, axis)(x)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/functions/split_axis.py b/chainer/functions/split_axis.py
--- a/chainer/functions/split_axis.py
+++ b/chainer/functions/split_axis.py
@@ -1,6 +1,7 @@
import collections
import numpy
+import six
from chainer import cuda
from chainer import function
@@ -85,14 +86,25 @@
return tuple(ys)
def backward_cpu(self, x, gys):
- return numpy.concatenate(gys, axis=self.axis),
+ if any(gy is None for gy in gys):
+ gx = numpy.zeros_like(x[0])
+ gxs = numpy.split(gx, self.indices_or_sections, self.axis)
+ for gxi, gy in six.moves.zip(gxs, gys):
+ if gy is None:
+ continue
+ gxi[:] = gy
+ return gx,
+ else:
+ return numpy.concatenate(gys, axis=self.axis),
def backward_gpu(self, x, gys):
- gx = cuda.empty_like(x[0])
+ gx = cuda.zeros_like(x[0])
coffset = 0
kernel = cuda.elementwise(
_args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)
for gy in gys:
+ if gy is None:
+ continue
cdimy = gy.shape[self.axis]
if cdimy != 0:
kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)
| {"golden_diff": "diff --git a/chainer/functions/split_axis.py b/chainer/functions/split_axis.py\n--- a/chainer/functions/split_axis.py\n+++ b/chainer/functions/split_axis.py\n@@ -1,6 +1,7 @@\n import collections\n \n import numpy\n+import six\n \n from chainer import cuda\n from chainer import function\n@@ -85,14 +86,25 @@\n return tuple(ys)\n \n def backward_cpu(self, x, gys):\n- return numpy.concatenate(gys, axis=self.axis),\n+ if any(gy is None for gy in gys):\n+ gx = numpy.zeros_like(x[0])\n+ gxs = numpy.split(gx, self.indices_or_sections, self.axis)\n+ for gxi, gy in six.moves.zip(gxs, gys):\n+ if gy is None:\n+ continue\n+ gxi[:] = gy\n+ return gx,\n+ else:\n+ return numpy.concatenate(gys, axis=self.axis),\n \n def backward_gpu(self, x, gys):\n- gx = cuda.empty_like(x[0])\n+ gx = cuda.zeros_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n+ if gy is None:\n+ continue\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n", "issue": "split_axis.backward fails on incomplete gradients\nWhen there is a None in the grad_outputs, split_axis fails to backprop the incomplete gradients.\n\n", "before_files": [{"content": "import collections\n\nimport numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\n_args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimy); \\\n int c = i / rdim % cdimy + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimx * l); \\\n statement;\n'''\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays towards the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim >= self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward_cpu(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))\n\n def forward_gpu(self, x):\n xshape = x[0].shape\n self.cdimx = xshape[self.axis]\n self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n ind = list(self.indices_or_sections)\n ind.append(self.cdimx)\n else:\n sec = self.indices_or_sections\n if self.cdimx % sec:\n raise ValueError(\n 'array split does not result in an equal division')\n ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)\n ys = []\n kernel = cuda.elementwise(\n _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)\n 
prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, self.cdimx) - prev_i)\n s = list(xshape)\n s[self.axis] = cdimy\n y = cuda.empty(tuple(s), dtype=x[0].dtype)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)\n prev_i = i\n ys.append(y)\n return tuple(ys)\n\n def backward_cpu(self, x, gys):\n return numpy.concatenate(gys, axis=self.axis),\n\n def backward_gpu(self, x, gys):\n gx = cuda.empty_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n coffset += cdimy\n return gx,\n\n\ndef split_axis(x, indices_or_sections, axis):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n\n Returns:\n ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n\n .. note::\n This function raises ``ValueError`` if at least\n one of the outputs is splitted to zero-size\n (i.e. `axis`-th value of its shape is zero).\n\n \"\"\"\n return SplitAxis(indices_or_sections, axis)(x)\n", "path": "chainer/functions/split_axis.py"}], "after_files": [{"content": "import collections\n\nimport numpy\nimport six\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\n_args = 'float* y, float* x, int cdimy, int cdimx, int rdim, int coffset'\n_preamble = '''\n#define COPY(statement) \\\n int l = i / (rdim * cdimy); \\\n int c = i / rdim % cdimy + coffset; \\\n int r = i % rdim; \\\n int idx = r + rdim * (c + cdimx * l); \\\n statement;\n'''\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays towards the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim >= self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward_cpu(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n return tuple(numpy.split(x[0], self.indices_or_sections, self.axis))\n\n def forward_gpu(self, x):\n xshape = x[0].shape\n self.cdimx = xshape[self.axis]\n self.rdim = numpy.prod(xshape[self.axis + 1:], dtype=int)\n\n if 
isinstance(self.indices_or_sections, collections.Iterable):\n ind = list(self.indices_or_sections)\n ind.append(self.cdimx)\n else:\n sec = self.indices_or_sections\n if self.cdimx % sec:\n raise ValueError(\n 'array split does not result in an equal division')\n ind = numpy.arange(1, sec + 1) * (self.cdimx // sec)\n ys = []\n kernel = cuda.elementwise(\n _args, 'COPY(y[i] = x[idx])', 'split_fwd', preamble=_preamble)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, self.cdimx) - prev_i)\n s = list(xshape)\n s[self.axis] = cdimy\n y = cuda.empty(tuple(s), dtype=x[0].dtype)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n kernel(y, x[0], cdimy, self.cdimx, self.rdim, prev_i)\n prev_i = i\n ys.append(y)\n return tuple(ys)\n\n def backward_cpu(self, x, gys):\n if any(gy is None for gy in gys):\n gx = numpy.zeros_like(x[0])\n gxs = numpy.split(gx, self.indices_or_sections, self.axis)\n for gxi, gy in six.moves.zip(gxs, gys):\n if gy is None:\n continue\n gxi[:] = gy\n return gx,\n else:\n return numpy.concatenate(gys, axis=self.axis),\n\n def backward_gpu(self, x, gys):\n gx = cuda.zeros_like(x[0])\n coffset = 0\n kernel = cuda.elementwise(\n _args, 'COPY(x[idx] = y[i])', 'split_bwd', preamble=_preamble)\n for gy in gys:\n if gy is None:\n continue\n cdimy = gy.shape[self.axis]\n if cdimy != 0:\n kernel(gy, gx, cdimy, self.cdimx, self.rdim, coffset)\n coffset += cdimy\n return gx,\n\n\ndef split_axis(x, indices_or_sections, axis):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n\n Returns:\n ``tuple`` or ``Variable``: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n\n .. note::\n This function raises ``ValueError`` if at least\n one of the outputs is splitted to zero-size\n (i.e. `axis`-th value of its shape is zero).\n\n \"\"\"\n return SplitAxis(indices_or_sections, axis)(x)\n", "path": "chainer/functions/split_axis.py"}]} | 1,698 | 348 |
gh_patches_debug_61068 | rasdani/github-patches | git_diff | Mailu__Mailu-719 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Alternatives useless after podop
After updating to master to get all the up-to-date fixes it also moves postfix to use podop and it seems to no longer support receiving external mail from alternative domains 😢
Sending internal mail between alternatives works as expected but not with external mail, a "relay denied" message is shown in the logs and when checking the postfix podop views it looks like alternative is never mentioned.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/views/postfix.py`
Content:
```
1 from mailu import db, models
2 from mailu.internal import internal
3
4 import flask
5
6
7 @internal.route("/postfix/domain/<domain_name>")
8 def postfix_mailbox_domain(domain_name):
9 domain = models.Domain.query.get(domain_name) or flask.abort(404)
10 return flask.jsonify(domain.name)
11
12
13 @internal.route("/postfix/mailbox/<email>")
14 def postfix_mailbox_map(email):
15 user = models.User.query.get(email) or flask.abort(404)
16 return flask.jsonify(user.email)
17
18
19 @internal.route("/postfix/alias/<alias>")
20 def postfix_alias_map(alias):
21 localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)
22 alternative = models.Alternative.query.get(domain)
23 if alternative:
24 domain = alternative.domain_name
25 email = '{}@{}'.format(localpart, domain)
26 if localpart is None:
27 return flask.jsonify(domain)
28 else:
29 alias_obj = models.Alias.resolve(localpart, domain)
30 if alias_obj:
31 return flask.jsonify(",".join(alias_obj.destination))
32 user_obj = models.User.query.get(email)
33 if user_obj:
34 return flask.jsonify(user_obj.destination)
35 return flask.abort(404)
36
37
38 @internal.route("/postfix/transport/<email>")
39 def postfix_transport(email):
40 localpart, domain = email.split('@', 1) if '@' in email else (None, email)
41 relay = models.Relay.query.get(domain) or flask.abort(404)
42 return flask.jsonify("smtp:[{}]".format(relay.smtp))
43
44
45 @internal.route("/postfix/sender/<sender>")
46 def postfix_sender(sender):
47 """ Simply reject any sender that pretends to be from a local domain
48 """
49 localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)
50 domain = models.Domain.query.get(domain_name)
51 alternative = models.Alternative.query.get(domain_name)
52 if domain or alternative:
53 return flask.jsonify("REJECT")
54 return flask.abort(404)
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py
--- a/core/admin/mailu/internal/views/postfix.py
+++ b/core/admin/mailu/internal/views/postfix.py
@@ -6,7 +6,9 @@
@internal.route("/postfix/domain/<domain_name>")
def postfix_mailbox_domain(domain_name):
- domain = models.Domain.query.get(domain_name) or flask.abort(404)
+ domain = models.Domain.query.get(domain_name) or \
+ models.Alternative.query.get(domain_name) or \
+ flask.abort(404)
return flask.jsonify(domain.name)
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py\n--- a/core/admin/mailu/internal/views/postfix.py\n+++ b/core/admin/mailu/internal/views/postfix.py\n@@ -6,7 +6,9 @@\n \n @internal.route(\"/postfix/domain/<domain_name>\")\n def postfix_mailbox_domain(domain_name):\n- domain = models.Domain.query.get(domain_name) or flask.abort(404)\n+ domain = models.Domain.query.get(domain_name) or \\\n+ models.Alternative.query.get(domain_name) or \\\n+ flask.abort(404)\n return flask.jsonify(domain.name)\n", "issue": "Alternatives useless after podop\nAfter updating to master to get all the up-to-date fixes it also moves postfix to use podop and it seems to no longer support receiving external mail from alternative domains \ud83d\ude22 \r\n\r\nSending internal mail between alternatives works as expected but not with external mail, a \"relay denied\" message is shown in the logs and when checking the postfix podop views it looks like alternative is never mentioned.\n", "before_files": [{"content": "from mailu import db, models\nfrom mailu.internal import internal\n\nimport flask\n\n\[email protected](\"/postfix/domain/<domain_name>\")\ndef postfix_mailbox_domain(domain_name):\n domain = models.Domain.query.get(domain_name) or flask.abort(404)\n return flask.jsonify(domain.name)\n\n\[email protected](\"/postfix/mailbox/<email>\")\ndef postfix_mailbox_map(email):\n user = models.User.query.get(email) or flask.abort(404)\n return flask.jsonify(user.email)\n\n\[email protected](\"/postfix/alias/<alias>\")\ndef postfix_alias_map(alias):\n localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)\n alternative = models.Alternative.query.get(domain)\n if alternative:\n domain = alternative.domain_name\n email = '{}@{}'.format(localpart, domain)\n if localpart is None:\n return flask.jsonify(domain)\n else:\n alias_obj = models.Alias.resolve(localpart, domain)\n if alias_obj:\n return flask.jsonify(\",\".join(alias_obj.destination))\n user_obj = models.User.query.get(email)\n if user_obj:\n return flask.jsonify(user_obj.destination)\n return flask.abort(404)\n\n\[email protected](\"/postfix/transport/<email>\")\ndef postfix_transport(email):\n localpart, domain = email.split('@', 1) if '@' in email else (None, email)\n relay = models.Relay.query.get(domain) or flask.abort(404)\n return flask.jsonify(\"smtp:[{}]\".format(relay.smtp))\n\n\[email protected](\"/postfix/sender/<sender>\")\ndef postfix_sender(sender):\n \"\"\" Simply reject any sender that pretends to be from a local domain\n \"\"\"\n localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)\n domain = models.Domain.query.get(domain_name)\n alternative = models.Alternative.query.get(domain_name)\n if domain or alternative:\n return flask.jsonify(\"REJECT\")\n return flask.abort(404)\n", "path": "core/admin/mailu/internal/views/postfix.py"}], "after_files": [{"content": "from mailu import db, models\nfrom mailu.internal import internal\n\nimport flask\n\n\[email protected](\"/postfix/domain/<domain_name>\")\ndef postfix_mailbox_domain(domain_name):\n domain = models.Domain.query.get(domain_name) or \\\n models.Alternative.query.get(domain_name) or \\\n flask.abort(404)\n return flask.jsonify(domain.name)\n\n\[email protected](\"/postfix/mailbox/<email>\")\ndef postfix_mailbox_map(email):\n user = models.User.query.get(email) or flask.abort(404)\n return flask.jsonify(user.email)\n\n\[email protected](\"/postfix/alias/<alias>\")\ndef 
postfix_alias_map(alias):\n localpart, domain = alias.split('@', 1) if '@' in alias else (None, alias)\n alternative = models.Alternative.query.get(domain)\n if alternative:\n domain = alternative.domain_name\n email = '{}@{}'.format(localpart, domain)\n if localpart is None:\n return flask.jsonify(domain)\n else:\n alias_obj = models.Alias.resolve(localpart, domain)\n if alias_obj:\n return flask.jsonify(\",\".join(alias_obj.destination))\n user_obj = models.User.query.get(email)\n if user_obj:\n return flask.jsonify(user_obj.destination)\n return flask.abort(404)\n\n\[email protected](\"/postfix/transport/<email>\")\ndef postfix_transport(email):\n localpart, domain = email.split('@', 1) if '@' in email else (None, email)\n relay = models.Relay.query.get(domain) or flask.abort(404)\n return flask.jsonify(\"smtp:[{}]\".format(relay.smtp))\n\n\[email protected](\"/postfix/sender/<sender>\")\ndef postfix_sender(sender):\n \"\"\" Simply reject any sender that pretends to be from a local domain\n \"\"\"\n localpart, domain_name = sender.split('@', 1) if '@' in sender else (None, sender)\n domain = models.Domain.query.get(domain_name)\n alternative = models.Alternative.query.get(domain_name)\n if domain or alternative:\n return flask.jsonify(\"REJECT\")\n return flask.abort(404)\n", "path": "core/admin/mailu/internal/views/postfix.py"}]} | 898 | 140 |
gh_patches_debug_16390 | rasdani/github-patches | git_diff | open-mmlab__mmdetection3d-398 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Question about 3d NMS
As I can see [iou3d_utils](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/ops/iou3d/iou3d_utils.py) is based on [iou3d_nms_utils](https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/iou3d_nms_utils.py) of `OpenPCDet`. The implementation in `OpenPCDet` supports fair `iou` and `nms` with full 3d parametrization: `[x, y, z, dx, dy, dz, heading]`. However the implementation in `mmdetection3d` supports only `[x1, y1, x2, y2, ry]`. This design choice brings a couple of disadvantages. For example, for `VoteNet` on `SUNRGBD` we first predict boxes with angles and then [apply](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/dense_heads/vote_head.py#L627) `nms` to the aligned boxes without angle.
So, my question is, why not to use `nms` from `OpenPCDet` instead of applying lifehacks with `aligned_nms` or `bev_nms`?
Thanks in advance.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet3d/core/post_processing/box3d_nms.py`
Content:
```
1 import numba
2 import numpy as np
3 import torch
4
5 from mmdet3d.ops.iou3d.iou3d_utils import nms_gpu, nms_normal_gpu
6
7
8 def box3d_multiclass_nms(mlvl_bboxes,
9 mlvl_bboxes_for_nms,
10 mlvl_scores,
11 score_thr,
12 max_num,
13 cfg,
14 mlvl_dir_scores=None):
15 """Multi-class nms for 3D boxes.
16
17 Args:
18 mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).
19 M is the dimensions of boxes.
20 mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape
21 (N, 4). N is the number of boxes.
22 mlvl_scores (torch.Tensor): Multi-level boxes with shape
23 (N, ). N is the number of boxes.
24 score_thr (float): Score thredhold to filter boxes with low
25 confidence.
26 max_num (int): Maximum number of boxes will be kept.
27 cfg (dict): Configuration dict of NMS.
28 mlvl_dir_scores (torch.Tensor, optional): Multi-level scores
29 of direction classifier. Defaults to None.
30
31 Returns:
32 tuple[torch.Tensor]: Return results after nms, including 3D \
33 bounding boxes, scores, labels and direction scores.
34 """
35 # do multi class nms
36 # the fg class id range: [0, num_classes-1]
37 num_classes = mlvl_scores.shape[1] - 1
38 bboxes = []
39 scores = []
40 labels = []
41 dir_scores = []
42 for i in range(0, num_classes):
43 # get bboxes and scores of this class
44 cls_inds = mlvl_scores[:, i] > score_thr
45 if not cls_inds.any():
46 continue
47
48 _scores = mlvl_scores[cls_inds, i]
49 _bboxes_for_nms = mlvl_bboxes_for_nms[cls_inds, :]
50
51 if cfg.use_rotate_nms:
52 nms_func = nms_gpu
53 else:
54 nms_func = nms_normal_gpu
55
56 selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr)
57 _mlvl_bboxes = mlvl_bboxes[cls_inds, :]
58 bboxes.append(_mlvl_bboxes[selected])
59 scores.append(_scores[selected])
60 cls_label = mlvl_bboxes.new_full((len(selected), ),
61 i,
62 dtype=torch.long)
63 labels.append(cls_label)
64
65 if mlvl_dir_scores is not None:
66 _mlvl_dir_scores = mlvl_dir_scores[cls_inds]
67 dir_scores.append(_mlvl_dir_scores[selected])
68
69 if bboxes:
70 bboxes = torch.cat(bboxes, dim=0)
71 scores = torch.cat(scores, dim=0)
72 labels = torch.cat(labels, dim=0)
73 if mlvl_dir_scores is not None:
74 dir_scores = torch.cat(dir_scores, dim=0)
75 if bboxes.shape[0] > max_num:
76 _, inds = scores.sort(descending=True)
77 inds = inds[:max_num]
78 bboxes = bboxes[inds, :]
79 labels = labels[inds]
80 scores = scores[inds]
81 if mlvl_dir_scores is not None:
82 dir_scores = dir_scores[inds]
83 else:
84 bboxes = mlvl_scores.new_zeros((0, mlvl_bboxes.size(-1)))
85 scores = mlvl_scores.new_zeros((0, ))
86 labels = mlvl_scores.new_zeros((0, ), dtype=torch.long)
87 dir_scores = mlvl_scores.new_zeros((0, ))
88 return bboxes, scores, labels, dir_scores
89
90
91 def aligned_3d_nms(boxes, scores, classes, thresh):
92 """3d nms for aligned boxes.
93
94 Args:
95 boxes (torch.Tensor): Aligned box with shape [n, 6].
96 scores (torch.Tensor): Scores of each box.
97 classes (torch.Tensor): Class of each box.
98 thresh (float): Iou threshold for nms.
99
100 Returns:
101 torch.Tensor: Indices of selected boxes.
102 """
103 x1 = boxes[:, 0]
104 y1 = boxes[:, 1]
105 z1 = boxes[:, 2]
106 x2 = boxes[:, 3]
107 y2 = boxes[:, 4]
108 z2 = boxes[:, 5]
109 area = (x2 - x1) * (y2 - y1) * (z2 - z1)
110 zero = boxes.new_zeros(1, )
111
112 score_sorted = torch.argsort(scores)
113 pick = []
114 while (score_sorted.shape[0] != 0):
115 last = score_sorted.shape[0]
116 i = score_sorted[-1]
117 pick.append(i)
118
119 xx1 = torch.max(x1[i], x1[score_sorted[:last - 1]])
120 yy1 = torch.max(y1[i], y1[score_sorted[:last - 1]])
121 zz1 = torch.max(z1[i], z1[score_sorted[:last - 1]])
122 xx2 = torch.min(x2[i], x2[score_sorted[:last - 1]])
123 yy2 = torch.min(y2[i], y2[score_sorted[:last - 1]])
124 zz2 = torch.min(z2[i], z2[score_sorted[:last - 1]])
125 classes1 = classes[i]
126 classes2 = classes[score_sorted[:last - 1]]
127 inter_l = torch.max(zero, xx2 - xx1)
128 inter_w = torch.max(zero, yy2 - yy1)
129 inter_h = torch.max(zero, zz2 - zz1)
130
131 inter = inter_l * inter_w * inter_h
132 iou = inter / (area[i] + area[score_sorted[:last - 1]] - inter)
133 iou = iou * (classes1 == classes2).float()
134 score_sorted = score_sorted[torch.nonzero(
135 iou <= thresh, as_tuple=False).flatten()]
136
137 indices = boxes.new_tensor(pick, dtype=torch.long)
138 return indices
139
140
141 @numba.jit(nopython=True)
142 def circle_nms(dets, thresh, post_max_size=83):
143 """Circular NMS.
144
145 An object is only counted as positive if no other center
146 with a higher confidence exists within a radius r using a
147 bird-eye view distance metric.
148
149 Args:
150 dets (torch.Tensor): Detection results with the shape of [N, 3].
151 thresh (float): Value of threshold.
152 post_max_size (int): Max number of prediction to be kept. Defaults
153 to 83
154
155 Returns:
156 torch.Tensor: Indexes of the detections to be kept.
157 """
158 x1 = dets[:, 0]
159 y1 = dets[:, 1]
160 scores = dets[:, 2]
161 order = scores.argsort()[::-1].astype(np.int32) # highest->lowest
162 ndets = dets.shape[0]
163 suppressed = np.zeros((ndets), dtype=np.int32)
164 keep = []
165 for _i in range(ndets):
166 i = order[_i] # start with highest score box
167 if suppressed[
168 i] == 1: # if any box have enough iou with this, remove it
169 continue
170 keep.append(i)
171 for _j in range(_i + 1, ndets):
172 j = order[_j]
173 if suppressed[j] == 1:
174 continue
175 # calculate center distance between i and j box
176 dist = (x1[i] - x1[j])**2 + (y1[i] - y1[j])**2
177
178 # ovr = inter / areas[j]
179 if dist <= thresh:
180 suppressed[j] = 1
181 return keep[:post_max_size]
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmdet3d/core/post_processing/box3d_nms.py b/mmdet3d/core/post_processing/box3d_nms.py
--- a/mmdet3d/core/post_processing/box3d_nms.py
+++ b/mmdet3d/core/post_processing/box3d_nms.py
@@ -18,9 +18,9 @@
mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).
M is the dimensions of boxes.
mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape
- (N, 4). N is the number of boxes.
+ (N, 5) ([x1, y1, x2, y2, ry]). N is the number of boxes.
mlvl_scores (torch.Tensor): Multi-level boxes with shape
- (N, ). N is the number of boxes.
+ (N, C + 1). N is the number of boxes. C is the number of classes.
score_thr (float): Score thredhold to filter boxes with low
confidence.
max_num (int): Maximum number of boxes will be kept.
| {"golden_diff": "diff --git a/mmdet3d/core/post_processing/box3d_nms.py b/mmdet3d/core/post_processing/box3d_nms.py\n--- a/mmdet3d/core/post_processing/box3d_nms.py\n+++ b/mmdet3d/core/post_processing/box3d_nms.py\n@@ -18,9 +18,9 @@\n mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).\n M is the dimensions of boxes.\n mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape\n- (N, 4). N is the number of boxes.\n+ (N, 5) ([x1, y1, x2, y2, ry]). N is the number of boxes.\n mlvl_scores (torch.Tensor): Multi-level boxes with shape\n- (N, ). N is the number of boxes.\n+ (N, C + 1). N is the number of boxes. C is the number of classes.\n score_thr (float): Score thredhold to filter boxes with low\n confidence.\n max_num (int): Maximum number of boxes will be kept.\n", "issue": "Question about 3d NMS\nAs I can see [iou3d_utils](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/ops/iou3d/iou3d_utils.py) is based on [iou3d_nms_utils](https://github.com/open-mmlab/OpenPCDet/blob/master/pcdet/ops/iou3d_nms/iou3d_nms_utils.py) of `OpenPCDet`. The implementation in `OpenPCDet` supports fair `iou` and `nms` with full 3d parametrization: `[x, y, z, dx, dy, dz, heading]`. However the implementation in `mmdetection3d` supports only `[x1, y1, x2, y2, ry]`. This design choice brings a couple of disadvantages. For example, for `VoteNet` on `SUNRGBD` we first predict boxes with angles and then [apply](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/dense_heads/vote_head.py#L627) `nms` to the aligned boxes without angle.\r\n\r\nSo, my question is, why not to use `nms` from `OpenPCDet` instead of applying lifehacks with `aligned_nms` or `bev_nms`?\r\nThanks in advance.\n", "before_files": [{"content": "import numba\nimport numpy as np\nimport torch\n\nfrom mmdet3d.ops.iou3d.iou3d_utils import nms_gpu, nms_normal_gpu\n\n\ndef box3d_multiclass_nms(mlvl_bboxes,\n mlvl_bboxes_for_nms,\n mlvl_scores,\n score_thr,\n max_num,\n cfg,\n mlvl_dir_scores=None):\n \"\"\"Multi-class nms for 3D boxes.\n\n Args:\n mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).\n M is the dimensions of boxes.\n mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape\n (N, 4). N is the number of boxes.\n mlvl_scores (torch.Tensor): Multi-level boxes with shape\n (N, ). N is the number of boxes.\n score_thr (float): Score thredhold to filter boxes with low\n confidence.\n max_num (int): Maximum number of boxes will be kept.\n cfg (dict): Configuration dict of NMS.\n mlvl_dir_scores (torch.Tensor, optional): Multi-level scores\n of direction classifier. 
Defaults to None.\n\n Returns:\n tuple[torch.Tensor]: Return results after nms, including 3D \\\n bounding boxes, scores, labels and direction scores.\n \"\"\"\n # do multi class nms\n # the fg class id range: [0, num_classes-1]\n num_classes = mlvl_scores.shape[1] - 1\n bboxes = []\n scores = []\n labels = []\n dir_scores = []\n for i in range(0, num_classes):\n # get bboxes and scores of this class\n cls_inds = mlvl_scores[:, i] > score_thr\n if not cls_inds.any():\n continue\n\n _scores = mlvl_scores[cls_inds, i]\n _bboxes_for_nms = mlvl_bboxes_for_nms[cls_inds, :]\n\n if cfg.use_rotate_nms:\n nms_func = nms_gpu\n else:\n nms_func = nms_normal_gpu\n\n selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr)\n _mlvl_bboxes = mlvl_bboxes[cls_inds, :]\n bboxes.append(_mlvl_bboxes[selected])\n scores.append(_scores[selected])\n cls_label = mlvl_bboxes.new_full((len(selected), ),\n i,\n dtype=torch.long)\n labels.append(cls_label)\n\n if mlvl_dir_scores is not None:\n _mlvl_dir_scores = mlvl_dir_scores[cls_inds]\n dir_scores.append(_mlvl_dir_scores[selected])\n\n if bboxes:\n bboxes = torch.cat(bboxes, dim=0)\n scores = torch.cat(scores, dim=0)\n labels = torch.cat(labels, dim=0)\n if mlvl_dir_scores is not None:\n dir_scores = torch.cat(dir_scores, dim=0)\n if bboxes.shape[0] > max_num:\n _, inds = scores.sort(descending=True)\n inds = inds[:max_num]\n bboxes = bboxes[inds, :]\n labels = labels[inds]\n scores = scores[inds]\n if mlvl_dir_scores is not None:\n dir_scores = dir_scores[inds]\n else:\n bboxes = mlvl_scores.new_zeros((0, mlvl_bboxes.size(-1)))\n scores = mlvl_scores.new_zeros((0, ))\n labels = mlvl_scores.new_zeros((0, ), dtype=torch.long)\n dir_scores = mlvl_scores.new_zeros((0, ))\n return bboxes, scores, labels, dir_scores\n\n\ndef aligned_3d_nms(boxes, scores, classes, thresh):\n \"\"\"3d nms for aligned boxes.\n\n Args:\n boxes (torch.Tensor): Aligned box with shape [n, 6].\n scores (torch.Tensor): Scores of each box.\n classes (torch.Tensor): Class of each box.\n thresh (float): Iou threshold for nms.\n\n Returns:\n torch.Tensor: Indices of selected boxes.\n \"\"\"\n x1 = boxes[:, 0]\n y1 = boxes[:, 1]\n z1 = boxes[:, 2]\n x2 = boxes[:, 3]\n y2 = boxes[:, 4]\n z2 = boxes[:, 5]\n area = (x2 - x1) * (y2 - y1) * (z2 - z1)\n zero = boxes.new_zeros(1, )\n\n score_sorted = torch.argsort(scores)\n pick = []\n while (score_sorted.shape[0] != 0):\n last = score_sorted.shape[0]\n i = score_sorted[-1]\n pick.append(i)\n\n xx1 = torch.max(x1[i], x1[score_sorted[:last - 1]])\n yy1 = torch.max(y1[i], y1[score_sorted[:last - 1]])\n zz1 = torch.max(z1[i], z1[score_sorted[:last - 1]])\n xx2 = torch.min(x2[i], x2[score_sorted[:last - 1]])\n yy2 = torch.min(y2[i], y2[score_sorted[:last - 1]])\n zz2 = torch.min(z2[i], z2[score_sorted[:last - 1]])\n classes1 = classes[i]\n classes2 = classes[score_sorted[:last - 1]]\n inter_l = torch.max(zero, xx2 - xx1)\n inter_w = torch.max(zero, yy2 - yy1)\n inter_h = torch.max(zero, zz2 - zz1)\n\n inter = inter_l * inter_w * inter_h\n iou = inter / (area[i] + area[score_sorted[:last - 1]] - inter)\n iou = iou * (classes1 == classes2).float()\n score_sorted = score_sorted[torch.nonzero(\n iou <= thresh, as_tuple=False).flatten()]\n\n indices = boxes.new_tensor(pick, dtype=torch.long)\n return indices\n\n\[email protected](nopython=True)\ndef circle_nms(dets, thresh, post_max_size=83):\n \"\"\"Circular NMS.\n\n An object is only counted as positive if no other center\n with a higher confidence exists within a radius r using a\n bird-eye view distance 
metric.\n\n Args:\n dets (torch.Tensor): Detection results with the shape of [N, 3].\n thresh (float): Value of threshold.\n post_max_size (int): Max number of prediction to be kept. Defaults\n to 83\n\n Returns:\n torch.Tensor: Indexes of the detections to be kept.\n \"\"\"\n x1 = dets[:, 0]\n y1 = dets[:, 1]\n scores = dets[:, 2]\n order = scores.argsort()[::-1].astype(np.int32) # highest->lowest\n ndets = dets.shape[0]\n suppressed = np.zeros((ndets), dtype=np.int32)\n keep = []\n for _i in range(ndets):\n i = order[_i] # start with highest score box\n if suppressed[\n i] == 1: # if any box have enough iou with this, remove it\n continue\n keep.append(i)\n for _j in range(_i + 1, ndets):\n j = order[_j]\n if suppressed[j] == 1:\n continue\n # calculate center distance between i and j box\n dist = (x1[i] - x1[j])**2 + (y1[i] - y1[j])**2\n\n # ovr = inter / areas[j]\n if dist <= thresh:\n suppressed[j] = 1\n return keep[:post_max_size]\n", "path": "mmdet3d/core/post_processing/box3d_nms.py"}], "after_files": [{"content": "import numba\nimport numpy as np\nimport torch\n\nfrom mmdet3d.ops.iou3d.iou3d_utils import nms_gpu, nms_normal_gpu\n\n\ndef box3d_multiclass_nms(mlvl_bboxes,\n mlvl_bboxes_for_nms,\n mlvl_scores,\n score_thr,\n max_num,\n cfg,\n mlvl_dir_scores=None):\n \"\"\"Multi-class nms for 3D boxes.\n\n Args:\n mlvl_bboxes (torch.Tensor): Multi-level boxes with shape (N, M).\n M is the dimensions of boxes.\n mlvl_bboxes_for_nms (torch.Tensor): Multi-level boxes with shape\n (N, 5) ([x1, y1, x2, y2, ry]). N is the number of boxes.\n mlvl_scores (torch.Tensor): Multi-level boxes with shape\n (N, C + 1). N is the number of boxes. C is the number of classes.\n score_thr (float): Score thredhold to filter boxes with low\n confidence.\n max_num (int): Maximum number of boxes will be kept.\n cfg (dict): Configuration dict of NMS.\n mlvl_dir_scores (torch.Tensor, optional): Multi-level scores\n of direction classifier. 
Defaults to None.\n\n Returns:\n tuple[torch.Tensor]: Return results after nms, including 3D \\\n bounding boxes, scores, labels and direction scores.\n \"\"\"\n # do multi class nms\n # the fg class id range: [0, num_classes-1]\n num_classes = mlvl_scores.shape[1] - 1\n bboxes = []\n scores = []\n labels = []\n dir_scores = []\n for i in range(0, num_classes):\n # get bboxes and scores of this class\n cls_inds = mlvl_scores[:, i] > score_thr\n if not cls_inds.any():\n continue\n\n _scores = mlvl_scores[cls_inds, i]\n _bboxes_for_nms = mlvl_bboxes_for_nms[cls_inds, :]\n\n if cfg.use_rotate_nms:\n nms_func = nms_gpu\n else:\n nms_func = nms_normal_gpu\n\n selected = nms_func(_bboxes_for_nms, _scores, cfg.nms_thr)\n _mlvl_bboxes = mlvl_bboxes[cls_inds, :]\n bboxes.append(_mlvl_bboxes[selected])\n scores.append(_scores[selected])\n cls_label = mlvl_bboxes.new_full((len(selected), ),\n i,\n dtype=torch.long)\n labels.append(cls_label)\n\n if mlvl_dir_scores is not None:\n _mlvl_dir_scores = mlvl_dir_scores[cls_inds]\n dir_scores.append(_mlvl_dir_scores[selected])\n\n if bboxes:\n bboxes = torch.cat(bboxes, dim=0)\n scores = torch.cat(scores, dim=0)\n labels = torch.cat(labels, dim=0)\n if mlvl_dir_scores is not None:\n dir_scores = torch.cat(dir_scores, dim=0)\n if bboxes.shape[0] > max_num:\n _, inds = scores.sort(descending=True)\n inds = inds[:max_num]\n bboxes = bboxes[inds, :]\n labels = labels[inds]\n scores = scores[inds]\n if mlvl_dir_scores is not None:\n dir_scores = dir_scores[inds]\n else:\n bboxes = mlvl_scores.new_zeros((0, mlvl_bboxes.size(-1)))\n scores = mlvl_scores.new_zeros((0, ))\n labels = mlvl_scores.new_zeros((0, ), dtype=torch.long)\n dir_scores = mlvl_scores.new_zeros((0, ))\n return bboxes, scores, labels, dir_scores\n\n\ndef aligned_3d_nms(boxes, scores, classes, thresh):\n \"\"\"3d nms for aligned boxes.\n\n Args:\n boxes (torch.Tensor): Aligned box with shape [n, 6].\n scores (torch.Tensor): Scores of each box.\n classes (torch.Tensor): Class of each box.\n thresh (float): Iou threshold for nms.\n\n Returns:\n torch.Tensor: Indices of selected boxes.\n \"\"\"\n x1 = boxes[:, 0]\n y1 = boxes[:, 1]\n z1 = boxes[:, 2]\n x2 = boxes[:, 3]\n y2 = boxes[:, 4]\n z2 = boxes[:, 5]\n area = (x2 - x1) * (y2 - y1) * (z2 - z1)\n zero = boxes.new_zeros(1, )\n\n score_sorted = torch.argsort(scores)\n pick = []\n while (score_sorted.shape[0] != 0):\n last = score_sorted.shape[0]\n i = score_sorted[-1]\n pick.append(i)\n\n xx1 = torch.max(x1[i], x1[score_sorted[:last - 1]])\n yy1 = torch.max(y1[i], y1[score_sorted[:last - 1]])\n zz1 = torch.max(z1[i], z1[score_sorted[:last - 1]])\n xx2 = torch.min(x2[i], x2[score_sorted[:last - 1]])\n yy2 = torch.min(y2[i], y2[score_sorted[:last - 1]])\n zz2 = torch.min(z2[i], z2[score_sorted[:last - 1]])\n classes1 = classes[i]\n classes2 = classes[score_sorted[:last - 1]]\n inter_l = torch.max(zero, xx2 - xx1)\n inter_w = torch.max(zero, yy2 - yy1)\n inter_h = torch.max(zero, zz2 - zz1)\n\n inter = inter_l * inter_w * inter_h\n iou = inter / (area[i] + area[score_sorted[:last - 1]] - inter)\n iou = iou * (classes1 == classes2).float()\n score_sorted = score_sorted[torch.nonzero(\n iou <= thresh, as_tuple=False).flatten()]\n\n indices = boxes.new_tensor(pick, dtype=torch.long)\n return indices\n\n\[email protected](nopython=True)\ndef circle_nms(dets, thresh, post_max_size=83):\n \"\"\"Circular NMS.\n\n An object is only counted as positive if no other center\n with a higher confidence exists within a radius r using a\n bird-eye view distance 
metric.\n\n Args:\n dets (torch.Tensor): Detection results with the shape of [N, 3].\n thresh (float): Value of threshold.\n post_max_size (int): Max number of prediction to be kept. Defaults\n to 83\n\n Returns:\n torch.Tensor: Indexes of the detections to be kept.\n \"\"\"\n x1 = dets[:, 0]\n y1 = dets[:, 1]\n scores = dets[:, 2]\n order = scores.argsort()[::-1].astype(np.int32) # highest->lowest\n ndets = dets.shape[0]\n suppressed = np.zeros((ndets), dtype=np.int32)\n keep = []\n for _i in range(ndets):\n i = order[_i] # start with highest score box\n if suppressed[\n i] == 1: # if any box have enough iou with this, remove it\n continue\n keep.append(i)\n for _j in range(_i + 1, ndets):\n j = order[_j]\n if suppressed[j] == 1:\n continue\n # calculate center distance between i and j box\n dist = (x1[i] - x1[j])**2 + (y1[i] - y1[j])**2\n\n # ovr = inter / areas[j]\n if dist <= thresh:\n suppressed[j] = 1\n return keep[:post_max_size]\n", "path": "mmdet3d/core/post_processing/box3d_nms.py"}]} | 2,703 | 258 |
gh_patches_debug_34169 | rasdani/github-patches | git_diff | conan-io__conan-center-index-253 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] catch2/2.9.2: Expected CMake scripts to be included in the package
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **catch2/2.9.2**
I expected to have access to cmake scripts that are installed with Catch2.
The helper scripts are set to be installed.
https://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L33
Then they are deleted during packaging.
https://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L51
Currently, I am using the older bincrafters package (catch2/2.5.0@bincrafters/stable) which still includes the CMake scripts. I would need to maintain my own conan package to use the newer version of Catch2.
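One possible shape for a fix, sketched as a drop-in `package()` method for the recipe shown below (imports and class context as in that file; the helper script names come from Catch2's `contrib/` directory, and this is an illustration rather than the final recipe):

```python
def package(self):
    # sketch of ConanRecipe.package() that keeps the CMake helper scripts
    self.copy(pattern="LICENSE.txt", dst="licenses", src=self._source_subfolder)
    cmake = self._configure_cmake()
    cmake.install()
    tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
    tools.rmdir(os.path.join(self.package_folder, "share"))
    # copy the helper scripts from the source tree, since the rmdir above removed
    # the copies that cmake.install() had placed under lib/cmake
    for cmake_file in ("Catch.cmake", "ParseAndAddCatchTests.cmake"):
        self.copy(cmake_file,
                  src=os.path.join(self._source_subfolder, "contrib"),
                  dst=os.path.join("lib", "cmake", "Catch2"))
```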
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/catch2/2.x.x/conanfile.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4
5 from conans import ConanFile, CMake, tools
6
7
8 class ConanRecipe(ConanFile):
9 name = "catch2"
10 description = "A modern, C++-native, header-only, framework for unit-tests, TDD and BDD"
11 topics = ("conan", "catch2", "header-only", "unit-test", "tdd", "bdd")
12 homepage = "https://github.com/catchorg/Catch2"
13 url = "https://github.com/conan-io/conan-center-index"
14 license = "BSL-1.0"
15
16 settings = "os", "compiler", "build_type", "arch"
17
18 generators = "cmake"
19
20 _source_subfolder = "source_subfolder"
21
22 def source(self):
23 tools.get(**self.conan_data["sources"][self.version])
24 extracted_dir = "Catch2-" + self.version
25 os.rename(extracted_dir, self._source_subfolder)
26
27 _build_subfolder = "build_subfolder"
28
29 def _configure_cmake(self):
30 cmake = CMake(self)
31 cmake.definitions["BUILD_TESTING"] = "OFF"
32 cmake.definitions["CATCH_INSTALL_DOCS"] = "OFF"
33 cmake.definitions["CATCH_INSTALL_HELPERS"] = "ON"
34 cmake.configure(
35 source_folder=self._source_subfolder,
36 build_folder=self._build_subfolder
37 )
38 return cmake
39
40 def build(self):
41 cmake = self._configure_cmake()
42 cmake.build()
43
44 def package(self):
45 self.copy(pattern="LICENSE.txt", dst="licenses",
46 src=self._source_subfolder)
47
48 cmake = self._configure_cmake()
49 cmake.install()
50
51 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
52 tools.rmdir(os.path.join(self.package_folder, "share"))
53
54 def package_id(self):
55 self.info.header_only()
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/catch2/2.x.x/conanfile.py b/recipes/catch2/2.x.x/conanfile.py
--- a/recipes/catch2/2.x.x/conanfile.py
+++ b/recipes/catch2/2.x.x/conanfile.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python
-
import os
from conans import ConanFile, CMake, tools
@@ -12,20 +10,16 @@
homepage = "https://github.com/catchorg/Catch2"
url = "https://github.com/conan-io/conan-center-index"
license = "BSL-1.0"
-
settings = "os", "compiler", "build_type", "arch"
-
generators = "cmake"
-
_source_subfolder = "source_subfolder"
+ _build_subfolder = "build_subfolder"
def source(self):
tools.get(**self.conan_data["sources"][self.version])
extracted_dir = "Catch2-" + self.version
os.rename(extracted_dir, self._source_subfolder)
- _build_subfolder = "build_subfolder"
-
def _configure_cmake(self):
cmake = CMake(self)
cmake.definitions["BUILD_TESTING"] = "OFF"
@@ -42,14 +36,18 @@
cmake.build()
def package(self):
- self.copy(pattern="LICENSE.txt", dst="licenses",
- src=self._source_subfolder)
-
+ self.copy(pattern="LICENSE.txt", dst="licenses", src=self._source_subfolder)
cmake = self._configure_cmake()
cmake.install()
-
tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
tools.rmdir(os.path.join(self.package_folder, "share"))
+ for cmake_file in ["ParseAndAddCatchTests.cmake", "Catch.cmake"]:
+ self.copy(cmake_file,
+ src=os.path.join(self._source_subfolder, "contrib"),
+ dst=os.path.join("lib", "cmake", "Catch2"))
def package_id(self):
self.info.header_only()
+
+ def package_info(self):
+ self.cpp_info.builddirs = [os.path.join("lib", "cmake", "Catch2")]
| {"golden_diff": "diff --git a/recipes/catch2/2.x.x/conanfile.py b/recipes/catch2/2.x.x/conanfile.py\n--- a/recipes/catch2/2.x.x/conanfile.py\n+++ b/recipes/catch2/2.x.x/conanfile.py\n@@ -1,5 +1,3 @@\n-#!/usr/bin/env python\n-\n import os\n \n from conans import ConanFile, CMake, tools\n@@ -12,20 +10,16 @@\n homepage = \"https://github.com/catchorg/Catch2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSL-1.0\"\n-\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n-\n generators = \"cmake\"\n-\n _source_subfolder = \"source_subfolder\"\n+ _build_subfolder = \"build_subfolder\"\n \n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"Catch2-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n \n- _build_subfolder = \"build_subfolder\"\n-\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"BUILD_TESTING\"] = \"OFF\"\n@@ -42,14 +36,18 @@\n cmake.build()\n \n def package(self):\n- self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\",\n- src=self._source_subfolder)\n-\n+ self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n-\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n+ for cmake_file in [\"ParseAndAddCatchTests.cmake\", \"Catch.cmake\"]:\n+ self.copy(cmake_file,\n+ src=os.path.join(self._source_subfolder, \"contrib\"),\n+ dst=os.path.join(\"lib\", \"cmake\", \"Catch2\"))\n \n def package_id(self):\n self.info.header_only()\n+\n+ def package_info(self):\n+ self.cpp_info.builddirs = [os.path.join(\"lib\", \"cmake\", \"Catch2\")]\n", "issue": "[package] catch2/2.9.2: Expected CMake scripts to be included in the package \n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **catch2/2.9.2**\r\n\r\nI expected to have access to cmake scripts that are installed with Catch2.\r\n\r\nThe helper scripts are set to be installed.\r\n\r\nhttps://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L33\r\n\r\nThen they are deleted during packaging.\r\n\r\nhttps://github.com/conan-io/conan-center-index/blob/6a7ff72be4e6fa6362112459f7319f6e6e565a99/recipes/catch2/2.x.x/conanfile.py#L51\r\n\r\nCurrently, I am using the older bincrafters package (catch2/2.5.0@bincrafters/stable) which still includes the CMake scripts. 
I would need to maintain my own conan package to use the newer version of Catch2.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\n\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanRecipe(ConanFile):\n name = \"catch2\"\n description = \"A modern, C++-native, header-only, framework for unit-tests, TDD and BDD\"\n topics = (\"conan\", \"catch2\", \"header-only\", \"unit-test\", \"tdd\", \"bdd\")\n homepage = \"https://github.com/catchorg/Catch2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSL-1.0\"\n\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n\n generators = \"cmake\"\n\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"Catch2-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n _build_subfolder = \"build_subfolder\"\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"BUILD_TESTING\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_DOCS\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_HELPERS\"] = \"ON\"\n cmake.configure(\n source_folder=self._source_subfolder,\n build_folder=self._build_subfolder\n )\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\",\n src=self._source_subfolder)\n\n cmake = self._configure_cmake()\n cmake.install()\n\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n\n def package_id(self):\n self.info.header_only()\n", "path": "recipes/catch2/2.x.x/conanfile.py"}], "after_files": [{"content": "import os\n\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanRecipe(ConanFile):\n name = \"catch2\"\n description = \"A modern, C++-native, header-only, framework for unit-tests, TDD and BDD\"\n topics = (\"conan\", \"catch2\", \"header-only\", \"unit-test\", \"tdd\", \"bdd\")\n homepage = \"https://github.com/catchorg/Catch2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSL-1.0\"\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n generators = \"cmake\"\n _source_subfolder = \"source_subfolder\"\n _build_subfolder = \"build_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"Catch2-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"BUILD_TESTING\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_DOCS\"] = \"OFF\"\n cmake.definitions[\"CATCH_INSTALL_HELPERS\"] = \"ON\"\n cmake.configure(\n source_folder=self._source_subfolder,\n build_folder=self._build_subfolder\n )\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE.txt\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n for cmake_file in [\"ParseAndAddCatchTests.cmake\", \"Catch.cmake\"]:\n self.copy(cmake_file,\n src=os.path.join(self._source_subfolder, \"contrib\"),\n dst=os.path.join(\"lib\", \"cmake\", \"Catch2\"))\n\n def package_id(self):\n self.info.header_only()\n\n def package_info(self):\n self.cpp_info.builddirs = [os.path.join(\"lib\", \"cmake\", 
\"Catch2\")]\n", "path": "recipes/catch2/2.x.x/conanfile.py"}]} | 1,065 | 508 |
gh_patches_debug_35972 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-1241 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Interface values don't convert correctly from Pydantic models
When calling `from_pydantic` on a class with an interface field, the field value is always converted into an instance of the base class, no matter what its starting type is. The expected behavior should probably be to convert to the corresponding subtype class instead. See here for an example: https://gist.github.com/Matt343/fbce0cdffe1523bb22016bed6f65473f
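A stripped-down illustration of the mismatch, using plain Python stand-ins rather than the real strawberry/pydantic classes (`_strawberry_type` is the attribute the integration stores on registered pydantic models):

```python
class PetInterface:                            # strawberry type declared on the field
    def __init__(self, name): self.name = name

class Dog(PetInterface):                       # strawberry type registered for the subtype
    pass

class DogModel:                                # stand-in for the concrete pydantic instance
    _strawberry_type = Dog
    def __init__(self, name): self.name = name

data = DogModel("rex")
declared = PetInterface                        # what the field annotation says

current = declared(data.name)                      # conversion today: always the interface class
expected = type(data)._strawberry_type(data.name)  # what callers expect: the concrete subtype

print(type(current).__name__, type(expected).__name__)  # PetInterface Dog
```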
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/experimental/pydantic/__init__.py`
Content:
```
1 from .error_type import error_type
2 from .exceptions import UnregisteredTypeException
3 from .object_type import input, type
4
5
6 __all__ = ["error_type", "UnregisteredTypeException", "input", "type"]
7
```
Path: `strawberry/experimental/pydantic/conversion.py`
Content:
```
1 from typing import Union, cast
2
3 from strawberry.field import StrawberryField
4 from strawberry.scalars import is_scalar
5 from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType
6 from strawberry.union import StrawberryUnion
7
8
9 def _convert_from_pydantic_to_strawberry_type(
10 type_: Union[StrawberryType, type], data_from_model=None, extra=None
11 ):
12 data = data_from_model if data_from_model is not None else extra
13
14 if isinstance(type_, StrawberryOptional):
15 if data is None:
16 return data
17 return _convert_from_pydantic_to_strawberry_type(
18 type_.of_type, data_from_model=data, extra=extra
19 )
20 if isinstance(type_, StrawberryUnion):
21 for option_type in type_.types:
22 if hasattr(option_type, "_pydantic_type"):
23 source_type = option_type._pydantic_type # type: ignore
24 else:
25 source_type = cast(type, option_type)
26 if isinstance(data, source_type):
27 return _convert_from_pydantic_to_strawberry_type(
28 option_type, data_from_model=data, extra=extra
29 )
30 if isinstance(type_, StrawberryList):
31 items = []
32 for index, item in enumerate(data):
33 items.append(
34 _convert_from_pydantic_to_strawberry_type(
35 type_.of_type,
36 data_from_model=item,
37 extra=extra[index] if extra else None,
38 )
39 )
40
41 return items
42 elif is_scalar(type_):
43 return data
44 else:
45 return convert_pydantic_model_to_strawberry_class(
46 type_, model_instance=data_from_model, extra=extra
47 )
48
49
50 def convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):
51 extra = extra or {}
52 kwargs = {}
53
54 for field in cls._type_definition.fields:
55 field = cast(StrawberryField, field)
56 python_name = field.python_name
57
58 data_from_extra = extra.get(python_name, None)
59 data_from_model = (
60 getattr(model_instance, python_name, None) if model_instance else None
61 )
62 kwargs[python_name] = _convert_from_pydantic_to_strawberry_type(
63 field.type, data_from_model, extra=data_from_extra
64 )
65
66 return cls(**kwargs)
67
```
Path: `strawberry/experimental/pydantic/object_type.py`
Content:
```
1 import builtins
2 import dataclasses
3 from functools import partial
4 from typing import Any, Dict, List, Optional, Tuple, Type, cast
5
6 from pydantic import BaseModel
7 from pydantic.fields import ModelField
8
9 from strawberry.arguments import UNSET
10 from strawberry.experimental.pydantic.conversion import (
11 convert_pydantic_model_to_strawberry_class,
12 )
13 from strawberry.experimental.pydantic.fields import get_basic_type
14 from strawberry.field import StrawberryField
15 from strawberry.object_type import _process_type, _wrap_dataclass
16 from strawberry.private import Private
17 from strawberry.types.type_resolver import _get_fields
18 from strawberry.types.types import FederationTypeParams, TypeDefinition
19
20 from .exceptions import MissingFieldsListError, UnregisteredTypeException
21
22
23 def replace_pydantic_types(type_: Any):
24 if hasattr(type_, "__args__"):
25 new_type = type_.copy_with(
26 tuple(replace_pydantic_types(t) for t in type_.__args__)
27 )
28
29 if isinstance(new_type, TypeDefinition):
30 # TODO: Not sure if this is necessary. No coverage in tests
31 # TODO: Unnecessary with StrawberryObject
32
33 new_type = builtins.type(
34 new_type.name,
35 (),
36 {"_type_definition": new_type},
37 )
38
39 return new_type
40
41 if issubclass(type_, BaseModel):
42 if hasattr(type_, "_strawberry_type"):
43 return type_._strawberry_type
44 else:
45 raise UnregisteredTypeException(type_)
46
47 return type_
48
49
50 def get_type_for_field(field: ModelField):
51 type_ = field.outer_type_
52 type_ = get_basic_type(type_)
53 type_ = replace_pydantic_types(type_)
54
55 if not field.required:
56 type_ = Optional[type_]
57
58 return type_
59
60
61 def _get_private_fields(cls: Type) -> List[dataclasses.Field]:
62 private_fields: List[dataclasses.Field] = []
63 for field in dataclasses.fields(cls):
64 if isinstance(field.type, Private):
65 private_fields.append(field)
66 return private_fields
67
68
69 def type(
70 model: Type[BaseModel],
71 *,
72 fields: List[str],
73 name: Optional[str] = None,
74 is_input: bool = False,
75 is_interface: bool = False,
76 description: Optional[str] = None,
77 federation: Optional[FederationTypeParams] = None,
78 ):
79 def wrap(cls):
80 if not fields:
81 raise MissingFieldsListError(model)
82
83 model_fields = model.__fields__
84 fields_set = set(fields)
85
86 all_fields: List[Tuple[str, Any, dataclasses.Field]] = [
87 (
88 name,
89 get_type_for_field(field),
90 StrawberryField(
91 python_name=field.name,
92 graphql_name=field.alias if field.has_alias else None,
93 default=field.default if not field.required else UNSET,
94 default_factory=(
95 field.default_factory if field.default_factory else UNSET
96 ),
97 type_annotation=get_type_for_field(field),
98 ),
99 )
100 for name, field in model_fields.items()
101 if name in fields_set
102 ]
103
104 wrapped = _wrap_dataclass(cls)
105 extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))
106 private_fields = _get_private_fields(wrapped)
107
108 all_fields.extend(
109 (
110 (
111 field.name,
112 field.type,
113 field,
114 )
115 for field in extra_fields + private_fields
116 )
117 )
118
119 # Sort fields so that fields with missing defaults go first
120 # because dataclasses require that fields with no defaults are defined
121 # first
122 missing_default = []
123 has_default = []
124 for field in all_fields:
125 if field[2].default is dataclasses.MISSING:
126 missing_default.append(field)
127 else:
128 has_default.append(field)
129
130 sorted_fields = missing_default + has_default
131
132 cls = dataclasses.make_dataclass(
133 cls.__name__,
134 sorted_fields,
135 )
136
137 _process_type(
138 cls,
139 name=name,
140 is_input=is_input,
141 is_interface=is_interface,
142 description=description,
143 federation=federation,
144 )
145
146 model._strawberry_type = cls # type: ignore
147 cls._pydantic_type = model # type: ignore
148
149 def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:
150 return convert_pydantic_model_to_strawberry_class(
151 cls=cls, model_instance=instance, extra=extra
152 )
153
154 def to_pydantic(self) -> Any:
155 instance_kwargs = dataclasses.asdict(self)
156
157 return model(**instance_kwargs)
158
159 cls.from_pydantic = staticmethod(from_pydantic)
160 cls.to_pydantic = to_pydantic
161
162 return cls
163
164 return wrap
165
166
167 input = partial(type, is_input=True)
168
```
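For context, the decorator defined above is used roughly like this (a sketch; assumes `strawberry` with this experimental module and `pydantic` are installed):

```python
from pydantic import BaseModel
from strawberry.experimental import pydantic as strawberry_pydantic

class UserModel(BaseModel):
    age: int

@strawberry_pydantic.type(model=UserModel, fields=["age"])
class User:
    pass

user = User.from_pydantic(UserModel(age=3))
print(user.age)  # 3
```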
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/experimental/pydantic/__init__.py b/strawberry/experimental/pydantic/__init__.py
--- a/strawberry/experimental/pydantic/__init__.py
+++ b/strawberry/experimental/pydantic/__init__.py
@@ -1,6 +1,6 @@
from .error_type import error_type
from .exceptions import UnregisteredTypeException
-from .object_type import input, type
+from .object_type import input, interface, type
-__all__ = ["error_type", "UnregisteredTypeException", "input", "type"]
+__all__ = ["error_type", "UnregisteredTypeException", "input", "type", "interface"]
diff --git a/strawberry/experimental/pydantic/conversion.py b/strawberry/experimental/pydantic/conversion.py
--- a/strawberry/experimental/pydantic/conversion.py
+++ b/strawberry/experimental/pydantic/conversion.py
@@ -1,5 +1,6 @@
from typing import Union, cast
+from strawberry.enum import EnumDefinition
from strawberry.field import StrawberryField
from strawberry.scalars import is_scalar
from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType
@@ -27,6 +28,8 @@
return _convert_from_pydantic_to_strawberry_type(
option_type, data_from_model=data, extra=extra
)
+ if isinstance(type_, EnumDefinition):
+ return data
if isinstance(type_, StrawberryList):
items = []
for index, item in enumerate(data):
@@ -42,6 +45,10 @@
elif is_scalar(type_):
return data
else:
+ # in the case of an interface, the concrete type may be more specific
+ # than the type in the field definition
+ if hasattr(type(data), "_strawberry_type"):
+ type_ = type(data)._strawberry_type
return convert_pydantic_model_to_strawberry_class(
type_, model_instance=data_from_model, extra=extra
)
diff --git a/strawberry/experimental/pydantic/object_type.py b/strawberry/experimental/pydantic/object_type.py
--- a/strawberry/experimental/pydantic/object_type.py
+++ b/strawberry/experimental/pydantic/object_type.py
@@ -132,6 +132,7 @@
cls = dataclasses.make_dataclass(
cls.__name__,
sorted_fields,
+ bases=cls.__bases__,
)
_process_type(
@@ -165,3 +166,5 @@
input = partial(type, is_input=True)
+
+interface = partial(type, is_interface=True)
| {"golden_diff": "diff --git a/strawberry/experimental/pydantic/__init__.py b/strawberry/experimental/pydantic/__init__.py\n--- a/strawberry/experimental/pydantic/__init__.py\n+++ b/strawberry/experimental/pydantic/__init__.py\n@@ -1,6 +1,6 @@\n from .error_type import error_type\n from .exceptions import UnregisteredTypeException\n-from .object_type import input, type\n+from .object_type import input, interface, type\n \n \n-__all__ = [\"error_type\", \"UnregisteredTypeException\", \"input\", \"type\"]\n+__all__ = [\"error_type\", \"UnregisteredTypeException\", \"input\", \"type\", \"interface\"]\ndiff --git a/strawberry/experimental/pydantic/conversion.py b/strawberry/experimental/pydantic/conversion.py\n--- a/strawberry/experimental/pydantic/conversion.py\n+++ b/strawberry/experimental/pydantic/conversion.py\n@@ -1,5 +1,6 @@\n from typing import Union, cast\n \n+from strawberry.enum import EnumDefinition\n from strawberry.field import StrawberryField\n from strawberry.scalars import is_scalar\n from strawberry.type import StrawberryList, StrawberryOptional, StrawberryType\n@@ -27,6 +28,8 @@\n return _convert_from_pydantic_to_strawberry_type(\n option_type, data_from_model=data, extra=extra\n )\n+ if isinstance(type_, EnumDefinition):\n+ return data\n if isinstance(type_, StrawberryList):\n items = []\n for index, item in enumerate(data):\n@@ -42,6 +45,10 @@\n elif is_scalar(type_):\n return data\n else:\n+ # in the case of an interface, the concrete type may be more specific\n+ # than the type in the field definition\n+ if hasattr(type(data), \"_strawberry_type\"):\n+ type_ = type(data)._strawberry_type\n return convert_pydantic_model_to_strawberry_class(\n type_, model_instance=data_from_model, extra=extra\n )\ndiff --git a/strawberry/experimental/pydantic/object_type.py b/strawberry/experimental/pydantic/object_type.py\n--- a/strawberry/experimental/pydantic/object_type.py\n+++ b/strawberry/experimental/pydantic/object_type.py\n@@ -132,6 +132,7 @@\n cls = dataclasses.make_dataclass(\n cls.__name__,\n sorted_fields,\n+ bases=cls.__bases__,\n )\n \n _process_type(\n@@ -165,3 +166,5 @@\n \n \n input = partial(type, is_input=True)\n+\n+interface = partial(type, is_interface=True)\n", "issue": "Interface values don't convert correctly from Pydantic models\nWhen calling `from_pydantic` on a class with an interface field, the field value is always converted into an instance of the base class, no matter what its starting type is. The expected behavior should probably be to convert to the corresponding subtype class instead. 
See here for an example: https://gist.github.com/Matt343/fbce0cdffe1523bb22016bed6f65473f\n", "before_files": [{"content": "from .error_type import error_type\nfrom .exceptions import UnregisteredTypeException\nfrom .object_type import input, type\n\n\n__all__ = [\"error_type\", \"UnregisteredTypeException\", \"input\", \"type\"]\n", "path": "strawberry/experimental/pydantic/__init__.py"}, {"content": "from typing import Union, cast\n\nfrom strawberry.field import StrawberryField\nfrom strawberry.scalars import is_scalar\nfrom strawberry.type import StrawberryList, StrawberryOptional, StrawberryType\nfrom strawberry.union import StrawberryUnion\n\n\ndef _convert_from_pydantic_to_strawberry_type(\n type_: Union[StrawberryType, type], data_from_model=None, extra=None\n):\n data = data_from_model if data_from_model is not None else extra\n\n if isinstance(type_, StrawberryOptional):\n if data is None:\n return data\n return _convert_from_pydantic_to_strawberry_type(\n type_.of_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, StrawberryUnion):\n for option_type in type_.types:\n if hasattr(option_type, \"_pydantic_type\"):\n source_type = option_type._pydantic_type # type: ignore\n else:\n source_type = cast(type, option_type)\n if isinstance(data, source_type):\n return _convert_from_pydantic_to_strawberry_type(\n option_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, StrawberryList):\n items = []\n for index, item in enumerate(data):\n items.append(\n _convert_from_pydantic_to_strawberry_type(\n type_.of_type,\n data_from_model=item,\n extra=extra[index] if extra else None,\n )\n )\n\n return items\n elif is_scalar(type_):\n return data\n else:\n return convert_pydantic_model_to_strawberry_class(\n type_, model_instance=data_from_model, extra=extra\n )\n\n\ndef convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):\n extra = extra or {}\n kwargs = {}\n\n for field in cls._type_definition.fields:\n field = cast(StrawberryField, field)\n python_name = field.python_name\n\n data_from_extra = extra.get(python_name, None)\n data_from_model = (\n getattr(model_instance, python_name, None) if model_instance else None\n )\n kwargs[python_name] = _convert_from_pydantic_to_strawberry_type(\n field.type, data_from_model, extra=data_from_extra\n )\n\n return cls(**kwargs)\n", "path": "strawberry/experimental/pydantic/conversion.py"}, {"content": "import builtins\nimport dataclasses\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional, Tuple, Type, cast\n\nfrom pydantic import BaseModel\nfrom pydantic.fields import ModelField\n\nfrom strawberry.arguments import UNSET\nfrom strawberry.experimental.pydantic.conversion import (\n convert_pydantic_model_to_strawberry_class,\n)\nfrom strawberry.experimental.pydantic.fields import get_basic_type\nfrom strawberry.field import StrawberryField\nfrom strawberry.object_type import _process_type, _wrap_dataclass\nfrom strawberry.private import Private\nfrom strawberry.types.type_resolver import _get_fields\nfrom strawberry.types.types import FederationTypeParams, TypeDefinition\n\nfrom .exceptions import MissingFieldsListError, UnregisteredTypeException\n\n\ndef replace_pydantic_types(type_: Any):\n if hasattr(type_, \"__args__\"):\n new_type = type_.copy_with(\n tuple(replace_pydantic_types(t) for t in type_.__args__)\n )\n\n if isinstance(new_type, TypeDefinition):\n # TODO: Not sure if this is necessary. 
No coverage in tests\n # TODO: Unnecessary with StrawberryObject\n\n new_type = builtins.type(\n new_type.name,\n (),\n {\"_type_definition\": new_type},\n )\n\n return new_type\n\n if issubclass(type_, BaseModel):\n if hasattr(type_, \"_strawberry_type\"):\n return type_._strawberry_type\n else:\n raise UnregisteredTypeException(type_)\n\n return type_\n\n\ndef get_type_for_field(field: ModelField):\n type_ = field.outer_type_\n type_ = get_basic_type(type_)\n type_ = replace_pydantic_types(type_)\n\n if not field.required:\n type_ = Optional[type_]\n\n return type_\n\n\ndef _get_private_fields(cls: Type) -> List[dataclasses.Field]:\n private_fields: List[dataclasses.Field] = []\n for field in dataclasses.fields(cls):\n if isinstance(field.type, Private):\n private_fields.append(field)\n return private_fields\n\n\ndef type(\n model: Type[BaseModel],\n *,\n fields: List[str],\n name: Optional[str] = None,\n is_input: bool = False,\n is_interface: bool = False,\n description: Optional[str] = None,\n federation: Optional[FederationTypeParams] = None,\n):\n def wrap(cls):\n if not fields:\n raise MissingFieldsListError(model)\n\n model_fields = model.__fields__\n fields_set = set(fields)\n\n all_fields: List[Tuple[str, Any, dataclasses.Field]] = [\n (\n name,\n get_type_for_field(field),\n StrawberryField(\n python_name=field.name,\n graphql_name=field.alias if field.has_alias else None,\n default=field.default if not field.required else UNSET,\n default_factory=(\n field.default_factory if field.default_factory else UNSET\n ),\n type_annotation=get_type_for_field(field),\n ),\n )\n for name, field in model_fields.items()\n if name in fields_set\n ]\n\n wrapped = _wrap_dataclass(cls)\n extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))\n private_fields = _get_private_fields(wrapped)\n\n all_fields.extend(\n (\n (\n field.name,\n field.type,\n field,\n )\n for field in extra_fields + private_fields\n )\n )\n\n # Sort fields so that fields with missing defaults go first\n # because dataclasses require that fields with no defaults are defined\n # first\n missing_default = []\n has_default = []\n for field in all_fields:\n if field[2].default is dataclasses.MISSING:\n missing_default.append(field)\n else:\n has_default.append(field)\n\n sorted_fields = missing_default + has_default\n\n cls = dataclasses.make_dataclass(\n cls.__name__,\n sorted_fields,\n )\n\n _process_type(\n cls,\n name=name,\n is_input=is_input,\n is_interface=is_interface,\n description=description,\n federation=federation,\n )\n\n model._strawberry_type = cls # type: ignore\n cls._pydantic_type = model # type: ignore\n\n def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:\n return convert_pydantic_model_to_strawberry_class(\n cls=cls, model_instance=instance, extra=extra\n )\n\n def to_pydantic(self) -> Any:\n instance_kwargs = dataclasses.asdict(self)\n\n return model(**instance_kwargs)\n\n cls.from_pydantic = staticmethod(from_pydantic)\n cls.to_pydantic = to_pydantic\n\n return cls\n\n return wrap\n\n\ninput = partial(type, is_input=True)\n", "path": "strawberry/experimental/pydantic/object_type.py"}], "after_files": [{"content": "from .error_type import error_type\nfrom .exceptions import UnregisteredTypeException\nfrom .object_type import input, interface, type\n\n\n__all__ = [\"error_type\", \"UnregisteredTypeException\", \"input\", \"type\", \"interface\"]\n", "path": "strawberry/experimental/pydantic/__init__.py"}, {"content": "from typing import Union, cast\n\nfrom strawberry.enum 
import EnumDefinition\nfrom strawberry.field import StrawberryField\nfrom strawberry.scalars import is_scalar\nfrom strawberry.type import StrawberryList, StrawberryOptional, StrawberryType\nfrom strawberry.union import StrawberryUnion\n\n\ndef _convert_from_pydantic_to_strawberry_type(\n type_: Union[StrawberryType, type], data_from_model=None, extra=None\n):\n data = data_from_model if data_from_model is not None else extra\n\n if isinstance(type_, StrawberryOptional):\n if data is None:\n return data\n return _convert_from_pydantic_to_strawberry_type(\n type_.of_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, StrawberryUnion):\n for option_type in type_.types:\n if hasattr(option_type, \"_pydantic_type\"):\n source_type = option_type._pydantic_type # type: ignore\n else:\n source_type = cast(type, option_type)\n if isinstance(data, source_type):\n return _convert_from_pydantic_to_strawberry_type(\n option_type, data_from_model=data, extra=extra\n )\n if isinstance(type_, EnumDefinition):\n return data\n if isinstance(type_, StrawberryList):\n items = []\n for index, item in enumerate(data):\n items.append(\n _convert_from_pydantic_to_strawberry_type(\n type_.of_type,\n data_from_model=item,\n extra=extra[index] if extra else None,\n )\n )\n\n return items\n elif is_scalar(type_):\n return data\n else:\n # in the case of an interface, the concrete type may be more specific\n # than the type in the field definition\n if hasattr(type(data), \"_strawberry_type\"):\n type_ = type(data)._strawberry_type\n return convert_pydantic_model_to_strawberry_class(\n type_, model_instance=data_from_model, extra=extra\n )\n\n\ndef convert_pydantic_model_to_strawberry_class(cls, *, model_instance=None, extra=None):\n extra = extra or {}\n kwargs = {}\n\n for field in cls._type_definition.fields:\n field = cast(StrawberryField, field)\n python_name = field.python_name\n\n data_from_extra = extra.get(python_name, None)\n data_from_model = (\n getattr(model_instance, python_name, None) if model_instance else None\n )\n kwargs[python_name] = _convert_from_pydantic_to_strawberry_type(\n field.type, data_from_model, extra=data_from_extra\n )\n\n return cls(**kwargs)\n", "path": "strawberry/experimental/pydantic/conversion.py"}, {"content": "import builtins\nimport dataclasses\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional, Tuple, Type, cast\n\nfrom pydantic import BaseModel\nfrom pydantic.fields import ModelField\n\nfrom strawberry.arguments import UNSET\nfrom strawberry.experimental.pydantic.conversion import (\n convert_pydantic_model_to_strawberry_class,\n)\nfrom strawberry.experimental.pydantic.fields import get_basic_type\nfrom strawberry.field import StrawberryField\nfrom strawberry.object_type import _process_type, _wrap_dataclass\nfrom strawberry.private import Private\nfrom strawberry.types.type_resolver import _get_fields\nfrom strawberry.types.types import FederationTypeParams, TypeDefinition\n\nfrom .exceptions import MissingFieldsListError, UnregisteredTypeException\n\n\ndef replace_pydantic_types(type_: Any):\n if hasattr(type_, \"__args__\"):\n new_type = type_.copy_with(\n tuple(replace_pydantic_types(t) for t in type_.__args__)\n )\n\n if isinstance(new_type, TypeDefinition):\n # TODO: Not sure if this is necessary. 
No coverage in tests\n # TODO: Unnecessary with StrawberryObject\n\n new_type = builtins.type(\n new_type.name,\n (),\n {\"_type_definition\": new_type},\n )\n\n return new_type\n\n if issubclass(type_, BaseModel):\n if hasattr(type_, \"_strawberry_type\"):\n return type_._strawberry_type\n else:\n raise UnregisteredTypeException(type_)\n\n return type_\n\n\ndef get_type_for_field(field: ModelField):\n type_ = field.outer_type_\n type_ = get_basic_type(type_)\n type_ = replace_pydantic_types(type_)\n\n if not field.required:\n type_ = Optional[type_]\n\n return type_\n\n\ndef _get_private_fields(cls: Type) -> List[dataclasses.Field]:\n private_fields: List[dataclasses.Field] = []\n for field in dataclasses.fields(cls):\n if isinstance(field.type, Private):\n private_fields.append(field)\n return private_fields\n\n\ndef type(\n model: Type[BaseModel],\n *,\n fields: List[str],\n name: Optional[str] = None,\n is_input: bool = False,\n is_interface: bool = False,\n description: Optional[str] = None,\n federation: Optional[FederationTypeParams] = None,\n):\n def wrap(cls):\n if not fields:\n raise MissingFieldsListError(model)\n\n model_fields = model.__fields__\n fields_set = set(fields)\n\n all_fields: List[Tuple[str, Any, dataclasses.Field]] = [\n (\n name,\n get_type_for_field(field),\n StrawberryField(\n python_name=field.name,\n graphql_name=field.alias if field.has_alias else None,\n default=field.default if not field.required else UNSET,\n default_factory=(\n field.default_factory if field.default_factory else UNSET\n ),\n type_annotation=get_type_for_field(field),\n ),\n )\n for name, field in model_fields.items()\n if name in fields_set\n ]\n\n wrapped = _wrap_dataclass(cls)\n extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))\n private_fields = _get_private_fields(wrapped)\n\n all_fields.extend(\n (\n (\n field.name,\n field.type,\n field,\n )\n for field in extra_fields + private_fields\n )\n )\n\n # Sort fields so that fields with missing defaults go first\n # because dataclasses require that fields with no defaults are defined\n # first\n missing_default = []\n has_default = []\n for field in all_fields:\n if field[2].default is dataclasses.MISSING:\n missing_default.append(field)\n else:\n has_default.append(field)\n\n sorted_fields = missing_default + has_default\n\n cls = dataclasses.make_dataclass(\n cls.__name__,\n sorted_fields,\n bases=cls.__bases__,\n )\n\n _process_type(\n cls,\n name=name,\n is_input=is_input,\n is_interface=is_interface,\n description=description,\n federation=federation,\n )\n\n model._strawberry_type = cls # type: ignore\n cls._pydantic_type = model # type: ignore\n\n def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:\n return convert_pydantic_model_to_strawberry_class(\n cls=cls, model_instance=instance, extra=extra\n )\n\n def to_pydantic(self) -> Any:\n instance_kwargs = dataclasses.asdict(self)\n\n return model(**instance_kwargs)\n\n cls.from_pydantic = staticmethod(from_pydantic)\n cls.to_pydantic = to_pydantic\n\n return cls\n\n return wrap\n\n\ninput = partial(type, is_input=True)\n\ninterface = partial(type, is_interface=True)\n", "path": "strawberry/experimental/pydantic/object_type.py"}]} | 2,525 | 600 |
gh_patches_debug_3764 | rasdani/github-patches | git_diff | quantumlib__Cirq-2296 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug in supremacy test circuit
I ran the function
```
generate_boixo_2018_supremacy_circuits_v2_bristlecone(
n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit
```
in `cirq.experiments.google_v2_supremacy_circuit` with `n_rows = 1` and got trapped in an infinite loop.
I think this is because when we have `n_rows = 1`, `_make_cz_layer` never returns any CZ gate, since there is only one qubit; thus, in `_add_cz_layer`, the loop
```
while not cz_layer:
qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())
cz_layer = list(_make_cz_layer(qubits, layer_index))
layer_index += 1
```
would never end.
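A quick way to confirm this (a minimal probe; it assumes cirq is importable and reaches into the private `_make_cz_layer` helper):

```python
from cirq import devices
from cirq.experiments.google_v2_supremacy_circuit import _make_cz_layer

qubits = [devices.GridQubit(0, 0)]                            # the lone qubit left in this case
layers = [list(_make_cz_layer(qubits, i)) for i in range(8)]  # the CZ pattern repeats mod 8
print(layers)  # [[], [], ..., []] -- every layer is empty, so `while not cz_layer` never exits
```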
My suggestion would be to change `assert 1 <= n_rows <= 11` in `generate_boixo_2018_supremacy_circuits_v2_bristlecone` to `assert 2 <= n_rows <= 11`, since it does not make any sense to have a one-qubit CZ layer in the first place.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/experiments/google_v2_supremacy_circuit.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import random
16 from typing import Callable, Iterable, TypeVar, cast, Sequence
17
18 from cirq.circuits import InsertStrategy
19 from cirq import circuits, devices, google, ops
20
21
22 def generate_boixo_2018_supremacy_circuits_v2(
23 qubits: Iterable[devices.GridQubit], cz_depth: int,
24 seed: int) -> circuits.Circuit:
25 """
26 Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.
27 See also https://arxiv.org/abs/1807.10749
28
29 Args:
30 qubits: qubit grid in which to generate the circuit.
31 cz_depth: number of layers with CZ gates.
32 seed: seed for the random instance.
33
34 Returns:
35 A circuit corresponding to instance
36 inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}
37
38 The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]
39 (as in the QASM mapping)
40 """
41
42 non_diagonal_gates = [ops.pauli_gates.X**(1/2), ops.pauli_gates.Y**(1/2)]
43 rand_gen = random.Random(seed).random
44
45 circuit = circuits.Circuit()
46
47 # Add an initial moment of Hadamards
48 circuit.append(ops.common_gates.H(qubit) for qubit in qubits)
49
50 layer_index = 0
51 if cz_depth:
52 layer_index = _add_cz_layer(layer_index, circuit)
53 # In the first moment, add T gates when possible
54 for qubit in qubits:
55 if not circuit.operation_at(qubit, 1):
56 circuit.append(ops.common_gates.T(qubit),
57 strategy=InsertStrategy.EARLIEST)
58
59 for moment_index in range(2, cz_depth+1):
60 layer_index = _add_cz_layer(layer_index, circuit)
61 # Add single qubit gates in the same moment
62 for qubit in qubits:
63 if not circuit.operation_at(qubit, moment_index):
64 last_op = circuit.operation_at(qubit, moment_index-1)
65 if last_op:
66 gate = cast(ops.GateOperation, last_op).gate
67 # Add a random non diagonal gate after a CZ
68 if gate == ops.CZ:
69 circuit.append(_choice(rand_gen,
70 non_diagonal_gates).on(qubit),
71 strategy=InsertStrategy.EARLIEST)
72 # Add a T gate after a non diagonal gate
73 elif not gate == ops.T:
74 circuit.append(ops.common_gates.T(qubit),
75 strategy=InsertStrategy.EARLIEST)
76
77 # Add a final moment of Hadamards
78 circuit.append([ops.common_gates.H(qubit) for qubit in qubits],
79 strategy=InsertStrategy.NEW_THEN_INLINE)
80
81 return circuit
82
83
84 def generate_boixo_2018_supremacy_circuits_v2_grid(n_rows: int, n_cols: int,
85 cz_depth: int, seed: int
86 ) -> circuits.Circuit:
87 """
88 Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.
89 See also https://arxiv.org/abs/1807.10749
90
91 Args:
92 n_rows: number of rows of a 2D lattice.
93 n_cols: number of columns.
94 cz_depth: number of layers with CZ gates.
95 seed: seed for the random instance.
96
97 Returns:
98 A circuit corresponding to instance
99 inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}
100
101 The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]
102 (as in the QASM mapping)
103 """
104 qubits = [devices.GridQubit(i, j) for i in range(n_rows)
105 for j in range(n_cols)]
106 return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)
107
108
109 def generate_boixo_2018_supremacy_circuits_v2_bristlecone(
110 n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit:
111 """
112 Generates Google Random Circuits v2 in Bristlecone.
113 See also https://arxiv.org/abs/1807.10749
114
115 Args:
116 n_rows: number of rows in a Bristlecone lattice.
117 Note that we do not include single qubit corners.
118 cz_depth: number of layers with CZ gates.
119 seed: seed for the random instance.
120
121 Returns:
122 A circuit with given size and seed.
123 """
124 def get_qubits(n_rows):
125 def count_neighbors(qubits, qubit):
126 """Counts the qubits that the given qubit can interact with."""
127 possibles = [
128 devices.GridQubit(qubit.row + 1, qubit.col),
129 devices.GridQubit(qubit.row - 1, qubit.col),
130 devices.GridQubit(qubit.row, qubit.col + 1),
131 devices.GridQubit(qubit.row, qubit.col - 1),
132 ]
133 return len(list(e for e in possibles if e in qubits))
134
135 assert 1 <= n_rows <= 11
136 max_row = n_rows - 1
137 dev = google.Bristlecone
138 # we need a consistent order of qubits
139 qubits = list(dev.qubits)
140 qubits.sort()
141 qubits = [q for q in qubits
142 if q.row <= max_row and q.row + q.col < n_rows + 6
143 and q.row - q.col < n_rows - 5]
144 qubits = [q for q in qubits if count_neighbors(qubits, q) > 1]
145 return qubits
146
147 qubits = get_qubits(n_rows)
148 return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)
149
150
151 T = TypeVar('T')
152
153
154 def _choice(rand_gen: Callable[[], float], sequence: Sequence[T]) -> T:
155 """Choose a random element from a non-empty sequence.
156
157 Use this instead of random.choice, with random.random(), for reproducibility
158 """
159 return sequence[int(rand_gen() * len(sequence))]
160
161
162 def _add_cz_layer(layer_index: int, circuit: circuits.Circuit) -> int:
163 cz_layer = None
164 while not cz_layer:
165 qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())
166 cz_layer = list(_make_cz_layer(qubits, layer_index))
167 layer_index += 1
168
169 circuit.append(cz_layer, strategy=InsertStrategy.NEW_THEN_INLINE)
170 return layer_index
171
172
173 def _make_cz_layer(qubits: Iterable[devices.GridQubit], layer_index: int
174 ) -> Iterable[ops.Operation]:
175 """
176 Each layer index corresponds to a shift/transpose of this CZ pattern:
177
178 ●───● ● ● ●───● ● ● . . .
179
180 ● ● ●───● ● ● ●───● . . .
181
182 ●───● ● ● ●───● ● ● . . .
183
184 ● ● ●───● ● ● ●───● . . .
185
186 ●───● ● ● ●───● ● ● . . .
187
188 ● ● ●───● ● ● ●───● . . .
189 . . . . . . . . .
190 . . . . . . . . .
191 . . . . . . . . .
192
193 Labelled edges, showing the exact index-to-CZs mapping (mod 8):
194
195 ●─0─●─2─●─4─●─6─●─0─. . .
196 3│ 7│ 3│ 7│ 3│
197 ●─4─●─6─●─0─●─2─●─4─. . .
198 1│ 5│ 1│ 5│ 1│
199 ●─0─●─2─●─4─●─6─●─0─. . .
200 7│ 3│ 7│ 3│ 7│
201 ●─4─●─6─●─0─●─2─●─4─. . .
202 5│ 1│ 5│ 1│ 5│
203 ●─0─●─2─●─4─●─6─●─0─. . .
204 3│ 7│ 3│ 7│ 3│
205 . . . . . .
206 . . . . . .
207 . . . . . .
208
209 Note that, for small devices, some layers will be empty because the layer
210 only contains edges not present on the device.
211 """
212
213 # map to an internal layer index to match the cycle order of public circuits
214 layer_index_map = [0, 3, 2, 1, 4, 7, 6, 5]
215 internal_layer_index = layer_index_map[layer_index % 8]
216
217 dir_row = internal_layer_index % 2
218 dir_col = 1 - dir_row
219 shift = (internal_layer_index >> 1) % 4
220
221 for q in qubits:
222 q2 = devices.GridQubit(q.row + dir_row, q.col + dir_col)
223 if q2 not in qubits:
224 continue # This edge isn't on the device.
225 if (q.row * (2 - dir_row) + q.col * (2 - dir_col)) % 4 != shift:
226 continue # No CZ along this edge for this layer.
227
228 yield ops.common_gates.CZ(q, q2)
229
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cirq/experiments/google_v2_supremacy_circuit.py b/cirq/experiments/google_v2_supremacy_circuit.py
--- a/cirq/experiments/google_v2_supremacy_circuit.py
+++ b/cirq/experiments/google_v2_supremacy_circuit.py
@@ -132,7 +132,7 @@
]
return len(list(e for e in possibles if e in qubits))
- assert 1 <= n_rows <= 11
+ assert 2 <= n_rows <= 11
max_row = n_rows - 1
dev = google.Bristlecone
# we need a consistent order of qubits
| {"golden_diff": "diff --git a/cirq/experiments/google_v2_supremacy_circuit.py b/cirq/experiments/google_v2_supremacy_circuit.py\n--- a/cirq/experiments/google_v2_supremacy_circuit.py\n+++ b/cirq/experiments/google_v2_supremacy_circuit.py\n@@ -132,7 +132,7 @@\n ]\n return len(list(e for e in possibles if e in qubits))\n \n- assert 1 <= n_rows <= 11\n+ assert 2 <= n_rows <= 11\n max_row = n_rows - 1\n dev = google.Bristlecone\n # we need a consistent order of qubits\n", "issue": "bug in supremacy test circuit\nI ran the function\r\n```\r\ngenerate_boixo_2018_supremacy_circuits_v2_bristlecone(\r\n n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit\r\n```\r\nin `cirq.experiments.google_v2_supremacy_circuit` with `n_rows = 1` and got trapped in infinite loops.\r\n\r\nI think this is because when we have `n_rows = 1`, `_make_cz_layer` would never return any cz gate, since there's only one qubit, thus in ` _add_cz_layer`, the loop\r\n```\r\nwhile not cz_layer:\r\n qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())\r\n cz_layer = list(_make_cz_layer(qubits, layer_index))\r\n layer_index += 1\r\n```\r\nwould never end.\r\n\r\nMy suggestion would be change `assert 1 <= n_rows <= 11` in `generate_boixo_2018_supremacy_circuits_v2_bristlecone` to `assert 2 <= n_rows <= 11`, since it does not make anysense to have one-qubit cz layer in the first place.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport random\nfrom typing import Callable, Iterable, TypeVar, cast, Sequence\n\nfrom cirq.circuits import InsertStrategy\nfrom cirq import circuits, devices, google, ops\n\n\ndef generate_boixo_2018_supremacy_circuits_v2(\n qubits: Iterable[devices.GridQubit], cz_depth: int,\n seed: int) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n qubits: qubit grid in which to generate the circuit.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n Returns:\n A circuit corresponding to instance\n inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}\n\n The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]\n (as in the QASM mapping)\n \"\"\"\n\n non_diagonal_gates = [ops.pauli_gates.X**(1/2), ops.pauli_gates.Y**(1/2)]\n rand_gen = random.Random(seed).random\n\n circuit = circuits.Circuit()\n\n # Add an initial moment of Hadamards\n circuit.append(ops.common_gates.H(qubit) for qubit in qubits)\n\n layer_index = 0\n if cz_depth:\n layer_index = _add_cz_layer(layer_index, circuit)\n # In the first moment, add T gates when possible\n for qubit in qubits:\n if not circuit.operation_at(qubit, 1):\n circuit.append(ops.common_gates.T(qubit),\n strategy=InsertStrategy.EARLIEST)\n\n for moment_index in range(2, cz_depth+1):\n layer_index = _add_cz_layer(layer_index, circuit)\n # Add single qubit gates in the same moment\n for qubit in qubits:\n if not circuit.operation_at(qubit, 
moment_index):\n last_op = circuit.operation_at(qubit, moment_index-1)\n if last_op:\n gate = cast(ops.GateOperation, last_op).gate\n # Add a random non diagonal gate after a CZ\n if gate == ops.CZ:\n circuit.append(_choice(rand_gen,\n non_diagonal_gates).on(qubit),\n strategy=InsertStrategy.EARLIEST)\n # Add a T gate after a non diagonal gate\n elif not gate == ops.T:\n circuit.append(ops.common_gates.T(qubit),\n strategy=InsertStrategy.EARLIEST)\n\n # Add a final moment of Hadamards\n circuit.append([ops.common_gates.H(qubit) for qubit in qubits],\n strategy=InsertStrategy.NEW_THEN_INLINE)\n\n return circuit\n\n\ndef generate_boixo_2018_supremacy_circuits_v2_grid(n_rows: int, n_cols: int,\n cz_depth: int, seed: int\n ) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n n_rows: number of rows of a 2D lattice.\n n_cols: number of columns.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n Returns:\n A circuit corresponding to instance\n inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}\n\n The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]\n (as in the QASM mapping)\n \"\"\"\n qubits = [devices.GridQubit(i, j) for i in range(n_rows)\n for j in range(n_cols)]\n return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)\n\n\ndef generate_boixo_2018_supremacy_circuits_v2_bristlecone(\n n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 in Bristlecone.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n n_rows: number of rows in a Bristlecone lattice.\n Note that we do not include single qubit corners.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n Returns:\n A circuit with given size and seed.\n \"\"\"\n def get_qubits(n_rows):\n def count_neighbors(qubits, qubit):\n \"\"\"Counts the qubits that the given qubit can interact with.\"\"\"\n possibles = [\n devices.GridQubit(qubit.row + 1, qubit.col),\n devices.GridQubit(qubit.row - 1, qubit.col),\n devices.GridQubit(qubit.row, qubit.col + 1),\n devices.GridQubit(qubit.row, qubit.col - 1),\n ]\n return len(list(e for e in possibles if e in qubits))\n\n assert 1 <= n_rows <= 11\n max_row = n_rows - 1\n dev = google.Bristlecone\n # we need a consistent order of qubits\n qubits = list(dev.qubits)\n qubits.sort()\n qubits = [q for q in qubits\n if q.row <= max_row and q.row + q.col < n_rows + 6\n and q.row - q.col < n_rows - 5]\n qubits = [q for q in qubits if count_neighbors(qubits, q) > 1]\n return qubits\n\n qubits = get_qubits(n_rows)\n return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)\n\n\nT = TypeVar('T')\n\n\ndef _choice(rand_gen: Callable[[], float], sequence: Sequence[T]) -> T:\n \"\"\"Choose a random element from a non-empty sequence.\n\n Use this instead of random.choice, with random.random(), for reproducibility\n \"\"\"\n return sequence[int(rand_gen() * len(sequence))]\n\n\ndef _add_cz_layer(layer_index: int, circuit: circuits.Circuit) -> int:\n cz_layer = None\n while not cz_layer:\n qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())\n cz_layer = list(_make_cz_layer(qubits, layer_index))\n layer_index += 1\n\n circuit.append(cz_layer, strategy=InsertStrategy.NEW_THEN_INLINE)\n return layer_index\n\n\ndef _make_cz_layer(qubits: Iterable[devices.GridQubit], layer_index: int\n ) -> Iterable[ops.Operation]:\n \"\"\"\n Each layer index corresponds 
to a shift/transpose of this CZ pattern:\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n . . . . . . . . .\n . . . . . . . . .\n . . . . . . . . .\n\n Labelled edges, showing the exact index-to-CZs mapping (mod 8):\n\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 3\u2502 7\u2502 3\u2502 7\u2502 3\u2502\n \u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500. . .\n 1\u2502 5\u2502 1\u2502 5\u2502 1\u2502\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 7\u2502 3\u2502 7\u2502 3\u2502 7\u2502\n \u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500. . .\n 5\u2502 1\u2502 5\u2502 1\u2502 5\u2502\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 3\u2502 7\u2502 3\u2502 7\u2502 3\u2502\n . . . . . .\n . . . . . .\n . . . . . .\n\n Note that, for small devices, some layers will be empty because the layer\n only contains edges not present on the device.\n \"\"\"\n\n # map to an internal layer index to match the cycle order of public circuits\n layer_index_map = [0, 3, 2, 1, 4, 7, 6, 5]\n internal_layer_index = layer_index_map[layer_index % 8]\n\n dir_row = internal_layer_index % 2\n dir_col = 1 - dir_row\n shift = (internal_layer_index >> 1) % 4\n\n for q in qubits:\n q2 = devices.GridQubit(q.row + dir_row, q.col + dir_col)\n if q2 not in qubits:\n continue # This edge isn't on the device.\n if (q.row * (2 - dir_row) + q.col * (2 - dir_col)) % 4 != shift:\n continue # No CZ along this edge for this layer.\n\n yield ops.common_gates.CZ(q, q2)\n", "path": "cirq/experiments/google_v2_supremacy_circuit.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport random\nfrom typing import Callable, Iterable, TypeVar, cast, Sequence\n\nfrom cirq.circuits import InsertStrategy\nfrom cirq import circuits, devices, google, ops\n\n\ndef generate_boixo_2018_supremacy_circuits_v2(\n qubits: Iterable[devices.GridQubit], cz_depth: int,\n seed: int) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n qubits: qubit grid in which to generate the circuit.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n 
Returns:\n A circuit corresponding to instance\n inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}\n\n The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]\n (as in the QASM mapping)\n \"\"\"\n\n non_diagonal_gates = [ops.pauli_gates.X**(1/2), ops.pauli_gates.Y**(1/2)]\n rand_gen = random.Random(seed).random\n\n circuit = circuits.Circuit()\n\n # Add an initial moment of Hadamards\n circuit.append(ops.common_gates.H(qubit) for qubit in qubits)\n\n layer_index = 0\n if cz_depth:\n layer_index = _add_cz_layer(layer_index, circuit)\n # In the first moment, add T gates when possible\n for qubit in qubits:\n if not circuit.operation_at(qubit, 1):\n circuit.append(ops.common_gates.T(qubit),\n strategy=InsertStrategy.EARLIEST)\n\n for moment_index in range(2, cz_depth+1):\n layer_index = _add_cz_layer(layer_index, circuit)\n # Add single qubit gates in the same moment\n for qubit in qubits:\n if not circuit.operation_at(qubit, moment_index):\n last_op = circuit.operation_at(qubit, moment_index-1)\n if last_op:\n gate = cast(ops.GateOperation, last_op).gate\n # Add a random non diagonal gate after a CZ\n if gate == ops.CZ:\n circuit.append(_choice(rand_gen,\n non_diagonal_gates).on(qubit),\n strategy=InsertStrategy.EARLIEST)\n # Add a T gate after a non diagonal gate\n elif not gate == ops.T:\n circuit.append(ops.common_gates.T(qubit),\n strategy=InsertStrategy.EARLIEST)\n\n # Add a final moment of Hadamards\n circuit.append([ops.common_gates.H(qubit) for qubit in qubits],\n strategy=InsertStrategy.NEW_THEN_INLINE)\n\n return circuit\n\n\ndef generate_boixo_2018_supremacy_circuits_v2_grid(n_rows: int, n_cols: int,\n cz_depth: int, seed: int\n ) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 as in github.com/sboixo/GRCS cz_v2.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n n_rows: number of rows of a 2D lattice.\n n_cols: number of columns.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n Returns:\n A circuit corresponding to instance\n inst_{n_rows}x{n_cols}_{cz_depth+1}_{seed}\n\n The mapping of qubits is cirq.GridQubit(j,k) -> q[j*n_cols+k]\n (as in the QASM mapping)\n \"\"\"\n qubits = [devices.GridQubit(i, j) for i in range(n_rows)\n for j in range(n_cols)]\n return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)\n\n\ndef generate_boixo_2018_supremacy_circuits_v2_bristlecone(\n n_rows: int, cz_depth: int, seed: int) -> circuits.Circuit:\n \"\"\"\n Generates Google Random Circuits v2 in Bristlecone.\n See also https://arxiv.org/abs/1807.10749\n\n Args:\n n_rows: number of rows in a Bristlecone lattice.\n Note that we do not include single qubit corners.\n cz_depth: number of layers with CZ gates.\n seed: seed for the random instance.\n\n Returns:\n A circuit with given size and seed.\n \"\"\"\n def get_qubits(n_rows):\n def count_neighbors(qubits, qubit):\n \"\"\"Counts the qubits that the given qubit can interact with.\"\"\"\n possibles = [\n devices.GridQubit(qubit.row + 1, qubit.col),\n devices.GridQubit(qubit.row - 1, qubit.col),\n devices.GridQubit(qubit.row, qubit.col + 1),\n devices.GridQubit(qubit.row, qubit.col - 1),\n ]\n return len(list(e for e in possibles if e in qubits))\n\n assert 2 <= n_rows <= 11\n max_row = n_rows - 1\n dev = google.Bristlecone\n # we need a consistent order of qubits\n qubits = list(dev.qubits)\n qubits.sort()\n qubits = [q for q in qubits\n if q.row <= max_row and q.row + q.col < n_rows + 6\n and q.row - q.col < n_rows - 5]\n qubits = [q for q in qubits if 
count_neighbors(qubits, q) > 1]\n return qubits\n\n qubits = get_qubits(n_rows)\n return generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth, seed)\n\n\nT = TypeVar('T')\n\n\ndef _choice(rand_gen: Callable[[], float], sequence: Sequence[T]) -> T:\n \"\"\"Choose a random element from a non-empty sequence.\n\n Use this instead of random.choice, with random.random(), for reproducibility\n \"\"\"\n return sequence[int(rand_gen() * len(sequence))]\n\n\ndef _add_cz_layer(layer_index: int, circuit: circuits.Circuit) -> int:\n cz_layer = None\n while not cz_layer:\n qubits = cast(Iterable[devices.GridQubit], circuit.all_qubits())\n cz_layer = list(_make_cz_layer(qubits, layer_index))\n layer_index += 1\n\n circuit.append(cz_layer, strategy=InsertStrategy.NEW_THEN_INLINE)\n return layer_index\n\n\ndef _make_cz_layer(qubits: Iterable[devices.GridQubit], layer_index: int\n ) -> Iterable[ops.Operation]:\n \"\"\"\n Each layer index corresponds to a shift/transpose of this CZ pattern:\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n\n \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf . . .\n\n \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf \u25cf \u25cf \u25cf\u2500\u2500\u2500\u25cf . . .\n . . . . . . . . .\n . . . . . . . . .\n . . . . . . . . .\n\n Labelled edges, showing the exact index-to-CZs mapping (mod 8):\n\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 3\u2502 7\u2502 3\u2502 7\u2502 3\u2502\n \u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500. . .\n 1\u2502 5\u2502 1\u2502 5\u2502 1\u2502\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 7\u2502 3\u2502 7\u2502 3\u2502 7\u2502\n \u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500. . .\n 5\u2502 1\u2502 5\u2502 1\u2502 5\u2502\n \u25cf\u25000\u2500\u25cf\u25002\u2500\u25cf\u25004\u2500\u25cf\u25006\u2500\u25cf\u25000\u2500. . .\n 3\u2502 7\u2502 3\u2502 7\u2502 3\u2502\n . . . . . .\n . . . . . .\n . . . . . .\n\n Note that, for small devices, some layers will be empty because the layer\n only contains edges not present on the device.\n \"\"\"\n\n # map to an internal layer index to match the cycle order of public circuits\n layer_index_map = [0, 3, 2, 1, 4, 7, 6, 5]\n internal_layer_index = layer_index_map[layer_index % 8]\n\n dir_row = internal_layer_index % 2\n dir_col = 1 - dir_row\n shift = (internal_layer_index >> 1) % 4\n\n for q in qubits:\n q2 = devices.GridQubit(q.row + dir_row, q.col + dir_col)\n if q2 not in qubits:\n continue # This edge isn't on the device.\n if (q.row * (2 - dir_row) + q.col * (2 - dir_col)) % 4 != shift:\n continue # No CZ along this edge for this layer.\n\n yield ops.common_gates.CZ(q, q2)\n", "path": "cirq/experiments/google_v2_supremacy_circuit.py"}]} | 3,479 | 153 |
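The hang described in the cirq instance above can be reproduced without the library. The toy below is a deliberately simplified sketch — plain `(row, col)` tuples stand in for `cirq.GridQubit`, and `_make_cz_layer`'s shift pattern is dropped so only the neighbour check remains — but it shows why `_add_cz_layer`'s `while not cz_layer` loop never terminates once the filtered qubit set admits no CZ pair.

```python
def make_cz_layer(qubits, layer_index):
    # Simplified stand-in: yield a CZ only when a qubit and its neighbour in
    # the layer's direction are both present in `qubits`.
    dir_row = layer_index % 2
    dir_col = 1 - dir_row
    for (row, col) in qubits:
        neighbour = (row + dir_row, col + dir_col)
        if neighbour in qubits:
            yield ((row, col), neighbour)


def add_cz_layer(layer_index, qubits):
    cz_layer = None
    while not cz_layer:  # stays falsy forever when no CZ pair exists
        cz_layer = list(make_cz_layer(qubits, layer_index))
        layer_index += 1
    return layer_index


print(list(make_cz_layer({(0, 0), (0, 1)}, 0)))  # horizontal pair -> one CZ
print(list(make_cz_layer({(0, 0)}, 2)))          # lone qubit -> always empty
# add_cz_layer(0, {(0, 0)})  # never returns: every layer comes back empty
```

With `n_rows = 1` the Bristlecone filter (`count_neighbors(qubits, q) > 1`) leaves at most a qubit or two with no usable edge, which is exactly this degenerate case; raising the lower bound of the assertion rules it out before any layer is built.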
gh_patches_debug_5 | rasdani/github-patches | git_diff | freedomofpress__securedrop-1117 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update kernel module blacklist
During an installation last week, we encountered an issue with the kernel module blacklist. The install was using the new generation of Intel NUCs ([NUC5i5RYK](http://www.amazon.com/dp/B00SD9ISIQ) and [NUC5i5RYH](http://www.amazon.com/dp/B00SD9IS1S/)). Unlike the previous generation of NUCs, which did not include wireless networking hardware by default, the new generation includes wireless networking hardware for Wifi and Bluetooth on the motherboard.
This means that Ubuntu running on the servers not only loaded the high-level kernel modules for wifi and bluetooth support (`iwlwifi` and `bluetooth`), it also loaded modules necessary for support on the specific (included) hardware: `iwlmvm` and `btusb`. When the `remove kernel modules` Ansible role ran, it failed with an error because it could not remove the top-level modules without removing their dependencies first.
A quickfix to get this working on the new hardware was to change `disabled_kernel_modules` in `group_vars/securedrop.yml` from:
``` yml
disabled_kernel_modules:
- bluetooth
- iwlwifi
```
to:
``` yml
disabled_kernel_modules:
- btusb
- bluetooth
- iwlmvm
- iwlwifi
```
The order of the modules is important! We need to make sure the dependencies are removed prior to the target modules that depend on them.
This list is also likely specific to the new generation of Intel NUCs. If we want to support a wider variety of hardware, we may want to try being smart about removing kernel modules and their dependencies, e.g. something akin to this technique from [Stack Exchange](https://askubuntu.com/questions/317230/how-can-i-temporarily-disable-a-kernel-module).
Finally, we need to make sure this updated module blacklist still works on the old hardware as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/version.py`
Content:
```
1 __version__ = '0.3.4'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = '0.3.4'
+__version__ = '0.3.5'
| {"golden_diff": "diff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = '0.3.4'\n+__version__ = '0.3.5'\n", "issue": "Update kernel module blacklist\nDuring an installation last week, we encountered an issue with the kernel module blacklist. The install was using the new generation of Intel NUCs ([NUC5i5RYK](http://www.amazon.com/dp/B00SD9ISIQ) and [NUC5i5RYH](http://www.amazon.com/dp/B00SD9IS1S/)). Unlike the previous generation of NUCs, which did not include wireless networking hardware by default, the new generation includes wireless networking hardware for Wifi and Bluetooth on the motherboard.\n\nThis means that Ubuntu running on the servers not only loaded the high-level kernel modules for wifi and bluetooth support (`iwlwifi` and `bluetooth`), it also loaded modules necessary for support on the specific (included) hardware: `iwlmvm` and `btusb`. When the `remove kernel modules` Ansible role ran, it failed with an error because it could not remove the top-level modules without removing their dependencies first.\n\nA quickfix to get this working on the new hardware was to change `disabled_kernel_modules` in `group_vars/securedrop.yml` from:\n\n``` yml\ndisabled_kernel_modules:\n - bluetooth\n - iwlwifi\n```\n\nto:\n\n``` yml\ndisabled_kernel_modules:\n - btusb\n - bluetooth\n - iwlmvm\n - iwlwifi\n```\n\nThe order of the modules is important! We need to make sure the the dependencies are removed prior to the target modules that depend on them.\n\nThis list is also likely specific to the new generation of Intel NUCs. If we want to support a wider variety of hardware, we may want to try being smart about removing kernel modules and their dependencies, e.g. something akin to this technique from [Stack Exchange](https://askubuntu.com/questions/317230/how-can-i-temporarily-disable-a-kernel-module).\n\nFinally, we need to make sure this updated module blacklist still works on the old hardware as well.\n\n", "before_files": [{"content": "__version__ = '0.3.4'\n", "path": "securedrop/version.py"}], "after_files": [{"content": "__version__ = '0.3.5'\n", "path": "securedrop/version.py"}]} | 688 | 62 |
gh_patches_debug_6718 | rasdani/github-patches | git_diff | getmoto__moto-556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix S3 issues with botocore 1.3.29
botocore 1.3.29 breaks s3 in tests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `moto/__init__.py`
Content:
```
1 from __future__ import unicode_literals
2 import logging
3 logging.getLogger('boto').setLevel(logging.CRITICAL)
4
5 __title__ = 'moto'
6 __version__ = '0.4.22'
7
8 from .autoscaling import mock_autoscaling # flake8: noqa
9 from .awslambda import mock_lambda # flake8: noqa
10 from .cloudformation import mock_cloudformation # flake8: noqa
11 from .cloudwatch import mock_cloudwatch # flake8: noqa
12 from .datapipeline import mock_datapipeline # flake8: noqa
13 from .dynamodb import mock_dynamodb # flake8: noqa
14 from .dynamodb2 import mock_dynamodb2 # flake8: noqa
15 from .ec2 import mock_ec2 # flake8: noqa
16 from .ecs import mock_ecs # flake8: noqa
17 from .elb import mock_elb # flake8: noqa
18 from .emr import mock_emr # flake8: noqa
19 from .glacier import mock_glacier # flake8: noqa
20 from .iam import mock_iam # flake8: noqa
21 from .kinesis import mock_kinesis # flake8: noqa
22 from .kms import mock_kms # flake8: noqa
23 from .rds import mock_rds # flake8: noqa
24 from .rds2 import mock_rds2 # flake8: noqa
25 from .redshift import mock_redshift # flake8: noqa
26 from .s3 import mock_s3 # flake8: noqa
27 from .s3bucket_path import mock_s3bucket_path # flake8: noqa
28 from .ses import mock_ses # flake8: noqa
29 from .sns import mock_sns # flake8: noqa
30 from .sqs import mock_sqs # flake8: noqa
31 from .sts import mock_sts # flake8: noqa
32 from .route53 import mock_route53 # flake8: noqa
33 from .swf import mock_swf # flake8: noqa
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/moto/__init__.py b/moto/__init__.py
--- a/moto/__init__.py
+++ b/moto/__init__.py
@@ -31,3 +31,13 @@
from .sts import mock_sts # flake8: noqa
from .route53 import mock_route53 # flake8: noqa
from .swf import mock_swf # flake8: noqa
+
+
+try:
+ # Need to monkey-patch botocore requests back to underlying urllib3 classes
+ from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection
+except ImportError:
+ pass
+else:
+ HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection
+ HTTPConnectionPool.ConnectionCls = HTTPConnection
| {"golden_diff": "diff --git a/moto/__init__.py b/moto/__init__.py\n--- a/moto/__init__.py\n+++ b/moto/__init__.py\n@@ -31,3 +31,13 @@\n from .sts import mock_sts # flake8: noqa\n from .route53 import mock_route53 # flake8: noqa\n from .swf import mock_swf # flake8: noqa\n+\n+\n+try:\n+ # Need to monkey-patch botocore requests back to underlying urllib3 classes\n+ from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection\n+except ImportError:\n+ pass\n+else:\n+ HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection\n+ HTTPConnectionPool.ConnectionCls = HTTPConnection\n", "issue": "Fix S3 issues with botocore 1.3.29\nbotocore 1.3.29 breaks s3 in tests\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nimport logging\nlogging.getLogger('boto').setLevel(logging.CRITICAL)\n\n__title__ = 'moto'\n__version__ = '0.4.22'\n\nfrom .autoscaling import mock_autoscaling # flake8: noqa\nfrom .awslambda import mock_lambda # flake8: noqa\nfrom .cloudformation import mock_cloudformation # flake8: noqa\nfrom .cloudwatch import mock_cloudwatch # flake8: noqa\nfrom .datapipeline import mock_datapipeline # flake8: noqa\nfrom .dynamodb import mock_dynamodb # flake8: noqa\nfrom .dynamodb2 import mock_dynamodb2 # flake8: noqa\nfrom .ec2 import mock_ec2 # flake8: noqa\nfrom .ecs import mock_ecs # flake8: noqa\nfrom .elb import mock_elb # flake8: noqa\nfrom .emr import mock_emr # flake8: noqa\nfrom .glacier import mock_glacier # flake8: noqa\nfrom .iam import mock_iam # flake8: noqa\nfrom .kinesis import mock_kinesis # flake8: noqa\nfrom .kms import mock_kms # flake8: noqa\nfrom .rds import mock_rds # flake8: noqa\nfrom .rds2 import mock_rds2 # flake8: noqa\nfrom .redshift import mock_redshift # flake8: noqa\nfrom .s3 import mock_s3 # flake8: noqa\nfrom .s3bucket_path import mock_s3bucket_path # flake8: noqa\nfrom .ses import mock_ses # flake8: noqa\nfrom .sns import mock_sns # flake8: noqa\nfrom .sqs import mock_sqs # flake8: noqa\nfrom .sts import mock_sts # flake8: noqa\nfrom .route53 import mock_route53 # flake8: noqa\nfrom .swf import mock_swf # flake8: noqa\n", "path": "moto/__init__.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nimport logging\nlogging.getLogger('boto').setLevel(logging.CRITICAL)\n\n__title__ = 'moto'\n__version__ = '0.4.22'\n\nfrom .autoscaling import mock_autoscaling # flake8: noqa\nfrom .awslambda import mock_lambda # flake8: noqa\nfrom .cloudformation import mock_cloudformation # flake8: noqa\nfrom .cloudwatch import mock_cloudwatch # flake8: noqa\nfrom .datapipeline import mock_datapipeline # flake8: noqa\nfrom .dynamodb import mock_dynamodb # flake8: noqa\nfrom .dynamodb2 import mock_dynamodb2 # flake8: noqa\nfrom .ec2 import mock_ec2 # flake8: noqa\nfrom .ecs import mock_ecs # flake8: noqa\nfrom .elb import mock_elb # flake8: noqa\nfrom .emr import mock_emr # flake8: noqa\nfrom .glacier import mock_glacier # flake8: noqa\nfrom .iam import mock_iam # flake8: noqa\nfrom .kinesis import mock_kinesis # flake8: noqa\nfrom .kms import mock_kms # flake8: noqa\nfrom .rds import mock_rds # flake8: noqa\nfrom .rds2 import mock_rds2 # flake8: noqa\nfrom .redshift import mock_redshift # flake8: noqa\nfrom .s3 import mock_s3 # flake8: noqa\nfrom .s3bucket_path import mock_s3bucket_path # flake8: noqa\nfrom .ses import mock_ses # flake8: noqa\nfrom .sns import mock_sns # flake8: noqa\nfrom .sqs import mock_sqs # flake8: noqa\nfrom .sts import mock_sts # flake8: noqa\nfrom .route53 
import mock_route53 # flake8: noqa\nfrom .swf import mock_swf # flake8: noqa\n\n\ntry:\n # Need to monkey-patch botocore requests back to underlying urllib3 classes\n from botocore.awsrequest import HTTPSConnectionPool, HTTPConnectionPool, HTTPConnection, VerifiedHTTPSConnection\nexcept ImportError:\n pass\nelse:\n HTTPSConnectionPool.ConnectionCls = VerifiedHTTPSConnection\n HTTPConnectionPool.ConnectionCls = HTTPConnection\n", "path": "moto/__init__.py"}]} | 819 | 180 |
gh_patches_debug_4681 | rasdani/github-patches | git_diff | awslabs__gluonts-1159 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multiprocessing hangs when num_workers > len(dataset)
## Description
I'm trying to serialize a predictor trained on multiple cores. When calling the `serialize` method nothing happens.
Running the same code, but without specifying `num_workers`, it works as expected.
## To Reproduce
```python
from pathlib import Path
from typing import Optional
from gluonts.dataset.multivariate_grouper import MultivariateGrouper
from gluonts.dataset.common import TrainDatasets
from gluonts.model.gpvar import GPVAREstimator
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.mx.trainer import Trainer
def load_multivariate_dataset(dataset_name: str, target_dim: Optional[int] = None):
ds = get_dataset(dataset_name)
if target_dim is None:
target_dim = len(ds.train)
grouper = MultivariateGrouper(max_target_dim=target_dim)
meta = ds.metadata
meta.feat_static_cat[0].cardinality = target_dim
return (TrainDatasets(
metadata=meta,
train=grouper(ds.train),
test=grouper(ds.test)
), target_dim)
ds, target_dim = load_multivariate_dataset("exchange_rate")
metadata = ds.metadata
estimator = GPVAREstimator(
prediction_length=metadata.prediction_length,
freq=metadata.freq,
target_dim=target_dim,
trainer=Trainer(
epochs=2,
num_batches_per_epoch=10,
batch_size=8,
),
)
predictor = estimator.train(training_data=ds.train, num_workers=2)
predictor.serialize(Path("/tmp"))
```
## Error message or code output
Nothing happens.
## Environment
- Operating system: Mac OSX 10.15.7
- Python version: 3.6.12
- GluonTS version: 0.6.0
- MXNet version: 1.7.0post1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/gluonts/itertools.py`
Content:
```
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License").
4 # You may not use this file except in compliance with the License.
5 # A copy of the License is located at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # or in the "license" file accompanying this file. This file is distributed
10 # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
11 # express or implied. See the License for the specific language governing
12 # permissions and limitations under the License.
13
14 from typing import Iterable, Iterator, List, TypeVar
15 import itertools
16 import random
17
18 T = TypeVar("T")
19
20
21 def cyclic(it):
22 """Like `itertools.cycle`, but does not store the data."""
23
24 while True:
25 yield from it
26
27
28 def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:
29 """Groups elements from `iterable` into batches of size `batch_size`.
30
31 >>> list(batcher("ABCDEFG", 3))
32 [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]
33
34 Unlike the grouper proposed in the documentation of itertools, `batcher`
35 doesn't fill up missing values.
36 """
37 it: Iterator[T] = iter(iterable)
38
39 def get_batch():
40 return list(itertools.islice(it, batch_size))
41
42 # has an empty list so that we have a 2D array for sure
43 return iter(get_batch, [])
44
45
46 class cached(Iterable):
47 """
48 An iterable wrapper, which caches values in a list the first time it is iterated.
49
50 The primary use-case for this is to avoid re-computing the element of the sequence,
51 in case the inner iterable does it on demand.
52
53 This should be used to wrap deterministic iterables, i.e. iterables where the data
54 generation process is not random, and that yield the same elements when iterated
55 multiple times.
56 """
57
58 def __init__(self, iterable: Iterable) -> None:
59 self.iterable = iterable
60 self.cache = None
61
62 def __iter__(self):
63 if self.cache is None:
64 self.cache = []
65 for element in self.iterable:
66 yield element
67 self.cache.append(element)
68 else:
69 yield from self.cache
70
71
72 def pseudo_shuffled(iterator: Iterator, shuffle_buffer_length: int):
73 """
74 An iterator that yields item from a given iterator in a pseudo-shuffled order.
75 """
76 shuffle_buffer = []
77
78 for element in iterator:
79 shuffle_buffer.append(element)
80 if len(shuffle_buffer) >= shuffle_buffer_length:
81 yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))
82
83 while shuffle_buffer:
84 yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/gluonts/itertools.py b/src/gluonts/itertools.py
--- a/src/gluonts/itertools.py
+++ b/src/gluonts/itertools.py
@@ -21,8 +21,13 @@
def cyclic(it):
"""Like `itertools.cycle`, but does not store the data."""
+ at_least_one = False
while True:
- yield from it
+ for el in it:
+ at_least_one = True
+ yield el
+ if not at_least_one:
+ break
def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:
| {"golden_diff": "diff --git a/src/gluonts/itertools.py b/src/gluonts/itertools.py\n--- a/src/gluonts/itertools.py\n+++ b/src/gluonts/itertools.py\n@@ -21,8 +21,13 @@\n def cyclic(it):\n \"\"\"Like `itertools.cycle`, but does not store the data.\"\"\"\n \n+ at_least_one = False\n while True:\n- yield from it\n+ for el in it:\n+ at_least_one = True\n+ yield el\n+ if not at_least_one:\n+ break\n \n \n def batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:\n", "issue": "Multiprocessing hangs when num_workers > len(dataset)\n## Description\r\nI'm trying to serialize a predictor trained on multiple cores. When calling the `serialize` method nothing happens.\r\nRunning the same code, but without specifying `num_workers`, it works as expected.\r\n\r\n## To Reproduce\r\n\r\n```python\r\nfrom pathlib import Path\r\nfrom typing import Optional\r\n\r\nfrom gluonts.dataset.multivariate_grouper import MultivariateGrouper\r\nfrom gluonts.dataset.common import TrainDatasets\r\nfrom gluonts.model.gpvar import GPVAREstimator\r\nfrom gluonts.dataset.repository.datasets import get_dataset\r\nfrom gluonts.mx.trainer import Trainer\r\n\r\n\r\ndef load_multivariate_dataset(dataset_name: str, target_dim: Optional[int] = None):\r\n ds = get_dataset(dataset_name)\r\n\r\n if target_dim is None:\r\n target_dim = len(ds.train)\r\n\r\n grouper = MultivariateGrouper(max_target_dim=target_dim)\r\n\r\n meta = ds.metadata\r\n meta.feat_static_cat[0].cardinality = target_dim\r\n\r\n return (TrainDatasets(\r\n metadata=meta,\r\n train=grouper(ds.train),\r\n test=grouper(ds.test)\r\n ), target_dim)\r\n\r\n\r\nds, target_dim = load_multivariate_dataset(\"exchange_rate\")\r\nmetadata = ds.metadata\r\n\r\nestimator = GPVAREstimator(\r\n prediction_length=metadata.prediction_length,\r\n freq=metadata.freq,\r\n target_dim=target_dim,\r\n trainer=Trainer(\r\n epochs=2,\r\n num_batches_per_epoch=10,\r\n batch_size=8,\r\n ),\r\n)\r\n\r\npredictor = estimator.train(training_data=ds.train, num_workers=2)\r\n\r\npredictor.serialize(Path(\"/tmp\"))\r\n\r\n```\r\n\r\n## Error message or code output\r\nNothing happens.\r\n\r\n\r\n## Environment\r\n- Operating system: Mac OSX 10.15.7\r\n- Python version: 3.6.12\r\n- GluonTS version: 0.6.0\r\n- MXNet version: 1.7.0post1\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom typing import Iterable, Iterator, List, TypeVar\nimport itertools\nimport random\n\nT = TypeVar(\"T\")\n\n\ndef cyclic(it):\n \"\"\"Like `itertools.cycle`, but does not store the data.\"\"\"\n\n while True:\n yield from it\n\n\ndef batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:\n \"\"\"Groups elements from `iterable` into batches of size `batch_size`.\n\n >>> list(batcher(\"ABCDEFG\", 3))\n [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]\n\n Unlike the grouper proposed in the documentation of itertools, `batcher`\n doesn't fill up missing values.\n \"\"\"\n it: Iterator[T] = iter(iterable)\n\n def get_batch():\n return list(itertools.islice(it, batch_size))\n\n # has an empty list so that we have a 2D array for sure\n return iter(get_batch, [])\n\n\nclass cached(Iterable):\n \"\"\"\n An iterable wrapper, which caches values in a list the first time it is iterated.\n\n The primary use-case for this is to avoid re-computing the element of the sequence,\n in case the inner iterable does it on demand.\n\n This should be used to wrap deterministic iterables, i.e. iterables where the data\n generation process is not random, and that yield the same elements when iterated\n multiple times.\n \"\"\"\n\n def __init__(self, iterable: Iterable) -> None:\n self.iterable = iterable\n self.cache = None\n\n def __iter__(self):\n if self.cache is None:\n self.cache = []\n for element in self.iterable:\n yield element\n self.cache.append(element)\n else:\n yield from self.cache\n\n\ndef pseudo_shuffled(iterator: Iterator, shuffle_buffer_length: int):\n \"\"\"\n An iterator that yields item from a given iterator in a pseudo-shuffled order.\n \"\"\"\n shuffle_buffer = []\n\n for element in iterator:\n shuffle_buffer.append(element)\n if len(shuffle_buffer) >= shuffle_buffer_length:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n\n while shuffle_buffer:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n", "path": "src/gluonts/itertools.py"}], "after_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom typing import Iterable, Iterator, List, TypeVar\nimport itertools\nimport random\n\nT = TypeVar(\"T\")\n\n\ndef cyclic(it):\n \"\"\"Like `itertools.cycle`, but does not store the data.\"\"\"\n\n at_least_one = False\n while True:\n for el in it:\n at_least_one = True\n yield el\n if not at_least_one:\n break\n\n\ndef batcher(iterable: Iterable[T], batch_size: int) -> Iterator[List[T]]:\n \"\"\"Groups elements from `iterable` into batches of size `batch_size`.\n\n >>> list(batcher(\"ABCDEFG\", 3))\n [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]\n\n Unlike the grouper proposed in the documentation of itertools, `batcher`\n doesn't fill up missing values.\n \"\"\"\n it: Iterator[T] = iter(iterable)\n\n def get_batch():\n return list(itertools.islice(it, batch_size))\n\n # has an empty list so that we have a 2D array for sure\n return iter(get_batch, [])\n\n\nclass cached(Iterable):\n \"\"\"\n An iterable wrapper, which caches values in a list the first time it is iterated.\n\n The primary use-case for this is to avoid re-computing the element of the sequence,\n in case the inner iterable does it on demand.\n\n This should be used to wrap deterministic iterables, i.e. iterables where the data\n generation process is not random, and that yield the same elements when iterated\n multiple times.\n \"\"\"\n\n def __init__(self, iterable: Iterable) -> None:\n self.iterable = iterable\n self.cache = None\n\n def __iter__(self):\n if self.cache is None:\n self.cache = []\n for element in self.iterable:\n yield element\n self.cache.append(element)\n else:\n yield from self.cache\n\n\ndef pseudo_shuffled(iterator: Iterator, shuffle_buffer_length: int):\n \"\"\"\n An iterator that yields item from a given iterator in a pseudo-shuffled order.\n \"\"\"\n shuffle_buffer = []\n\n for element in iterator:\n shuffle_buffer.append(element)\n if len(shuffle_buffer) >= shuffle_buffer_length:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n\n while shuffle_buffer:\n yield shuffle_buffer.pop(random.randrange(len(shuffle_buffer)))\n", "path": "src/gluonts/itertools.py"}]} | 1,467 | 151 |
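The gluonts patch above only changes what happens when the wrapped iterable yields nothing, which is presumably the situation a DataLoader worker ends up in when `num_workers` exceeds the number of series and its shard is empty. A minimal illustration of the patched `cyclic` against plain lists (no MXNet or DataLoader involved):

```python
import itertools


def cyclic_fixed(it):
    # patched behaviour: stop instead of spinning when `it` yields nothing
    at_least_one = False
    while True:
        for el in it:
            at_least_one = True
            yield el
        if not at_least_one:
            break


print(list(itertools.islice(cyclic_fixed([1, 2, 3]), 7)))  # [1, 2, 3, 1, 2, 3, 1]
print(list(cyclic_fixed([])))  # [] -- the original `while True: yield from it` never returns here
```

For a non-empty re-iterable source the behaviour is unchanged, so training loops that rely on endless cycling are unaffected.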
gh_patches_debug_33074 | rasdani/github-patches | git_diff | ARM-DOE__ACT-817 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in xsection plot map code
* ACT version: Current Version
* Python version: All
* Operating System: All
### Description
xsection plot map is generating images with duplicate axes, see the image below. I believe this is probably the cause of our baseline image failure.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `act/plotting/xsectiondisplay.py`
Content:
```
1 """
2 Stores the class for XSectionDisplay.
3
4 """
5
6 # Import third party libraries
7 import matplotlib.pyplot as plt
8 import numpy as np
9
10 try:
11 import cartopy.crs as ccrs
12
13 CARTOPY_AVAILABLE = True
14 except ImportError:
15 CARTOPY_AVAILABLE = False
16
17 # Import Local Libs
18 from ..utils import data_utils
19 from .plot import Display
20
21
22 class XSectionDisplay(Display):
23 """
24 Plots cross sections of multidimensional datasets. The data
25 must be able to be sliced into a 2 dimensional slice using the
26 xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands.
27
28 This is inherited from the :func:`act.plotting.Display`
29     class and therefore has the same attributes as that class.
30 See :func:`act.plotting.Display`
31 for more information. There are no additional attributes or parameters
32 to this class.
33
34 In order to create geographic plots, ACT needs the Cartopy package to be
35 installed on your system. More information about
36     Cartopy is available here: https://scitools.org.uk/cartopy/docs/latest/.
37
38 Examples
39 --------
40 For example, if you only want to do a cross section through the first
41 time period of a 3D dataset called :code:`ir_temperature`, you would
42 do the following in xarray:
43
44 .. code-block:: python
45
46 time_slice = my_ds["ir_temperature"].isel(time=0)
47
48 The methods of this class support passing in keyword arguments into
49 xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands
50 so that new datasets do not need to be created when slicing by specific time
51 periods or spatial slices. For example, to plot the first time period
52 from :code:`my_ds`, simply do:
53
54 .. code-block:: python
55
56 xsection = XSectionDisplay(my_ds, figsize=(15, 8))
57 xsection.plot_xsection_map(
58 None,
59 "ir_temperature",
60 vmin=220,
61 vmax=300,
62 cmap="Greys",
63 x="longitude",
64 y="latitude",
65 isel_kwargs={"time": 0},
66 )
67
68 Here, the array is sliced by the first time period as specified
69 in :code:`isel_kwargs`. The other keyword arguments are standard keyword
70 arguments taken by :func:`matplotlib.pyplot.pcolormesh`.
71
72 """
73
74 def __init__(self, ds, subplot_shape=(1,), ds_name=None, **kwargs):
75 super().__init__(ds, subplot_shape, ds_name, **kwargs)
76
77 def set_subplot_to_map(self, subplot_index):
78 total_num_plots = self.axes.shape
79
80 if len(total_num_plots) == 2:
81 second_number = total_num_plots[0]
82 j = subplot_index[1]
83 else:
84 second_number = 1
85 j = 0
86
87 third_number = second_number * subplot_index[0] + j + 1
88
89 self.axes[subplot_index] = plt.subplot(
90 total_num_plots[0],
91 second_number,
92 third_number,
93 projection=ccrs.PlateCarree(),
94 )
95
96 def set_xrng(self, xrng, subplot_index=(0,)):
97 """
98 Sets the x range of the plot.
99
100 Parameters
101 ----------
102 xrng : 2 number array
103 The x limits of the plot.
104 subplot_index : 1 or 2D tuple, list, or array
105 The index of the subplot to set the x range of.
106
107 """
108 if self.axes is None:
109 raise RuntimeError('set_xrng requires the plot to be displayed.')
110
111 if not hasattr(self, 'xrng') and len(self.axes.shape) == 2:
112 self.xrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=xrng[0].dtype)
113 elif not hasattr(self, 'xrng') and len(self.axes.shape) == 1:
114 self.xrng = np.zeros((self.axes.shape[0], 2), dtype=xrng[0].dtype)
115
116 self.axes[subplot_index].set_xlim(xrng)
117 self.xrng[subplot_index, :] = np.array(xrng)
118
119 def set_yrng(self, yrng, subplot_index=(0,)):
120 """
121 Sets the y range of the plot.
122
123 Parameters
124 ----------
125 yrng : 2 number array
126 The y limits of the plot.
127 subplot_index : 1 or 2D tuple, list, or array
128 The index of the subplot to set the x range of.
129
130 """
131 if self.axes is None:
132 raise RuntimeError('set_yrng requires the plot to be displayed.')
133
134 if not hasattr(self, 'yrng') and len(self.axes.shape) == 2:
135 self.yrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=yrng[0].dtype)
136 elif not hasattr(self, 'yrng') and len(self.axes.shape) == 1:
137 self.yrng = np.zeros((self.axes.shape[0], 2), dtype=yrng[0].dtype)
138
139 if yrng[0] == yrng[1]:
140 yrng[1] = yrng[1] + 1
141
142 self.axes[subplot_index].set_ylim(yrng)
143
144 self.yrng[subplot_index, :] = yrng
145
146 def plot_xsection(
147 self,
148 dsname,
149 varname,
150 x=None,
151 y=None,
152 subplot_index=(0,),
153 sel_kwargs=None,
154 isel_kwargs=None,
155 **kwargs,
156 ):
157 """
158 This function plots a cross section whose x and y coordinates are
159 specified by the variable names either provided by the user or
160 automatically detected by xarray.
161
162 Parameters
163 ----------
164 dsname : str or None
165 The name of the datastream to plot from. Set to None to have
166 ACT attempt to automatically detect this.
167 varname : str
168 The name of the variable to plot.
169 x : str or None
170 The name of the x coordinate variable.
171 y : str or None
172 The name of the y coordinate variable.
173 subplot_index : tuple
174 The index of the subplot to create the plot in.
175 sel_kwargs : dict
176 The keyword arguments to pass into :py:func:`xarray.DataArray.sel`
177 This is useful when your data is in 3 or more dimensions and you
178 want to only view a cross section on a specific x-y plane. For more
179 information on how to use xarray's .sel and .isel functionality
180 to slice datasets, see the documentation on :func:`xarray.DataArray.sel`.
181 isel_kwargs : dict
182 The keyword arguments to pass into :py:func:`xarray.DataArray.sel`
183 **kwargs : keyword arguments
184 Additional keyword arguments will be passed into
185 :func:`xarray.DataArray.plot`.
186
187 Returns
188 -------
189 ax : matplotlib axis handle
190 The matplotlib axis handle corresponding to the plot.
191
192 """
193 if dsname is None and len(self._ds.keys()) > 1:
194 raise ValueError(
195 'You must choose a datastream when there are 2 '
196 'or more datasets in the TimeSeriesDisplay '
197 'object.'
198 )
199 elif dsname is None:
200 dsname = list(self._ds.keys())[0]
201 temp_ds = self._ds[dsname].copy()
202
203 if sel_kwargs is not None:
204 temp_ds = temp_ds.sel(**sel_kwargs, method='nearest')
205
206 if isel_kwargs is not None:
207 temp_ds = temp_ds.isel(**isel_kwargs)
208
209 if (x is not None and y is None) or (y is None and x is not None):
210 raise RuntimeError(
211 'Both x and y must be specified if we are'
212 + 'not trying to automatically detect them!'
213 )
214
215 if x is not None:
216 coord_list = {}
217 x_coord_dim = temp_ds[x].dims[0]
218 coord_list[x] = x_coord_dim
219 y_coord_dim = temp_ds[y].dims[0]
220 coord_list[y] = y_coord_dim
221 new_ds = data_utils.assign_coordinates(temp_ds, coord_list)
222 my_dataarray = new_ds[varname]
223 else:
224 my_dataarray = temp_ds[varname]
225
226 coord_keys = [key for key in my_dataarray.coords.keys()]
227 # X-array will sometimes shorten latitude and longitude variables
228 if x == 'longitude' and x not in coord_keys:
229 xc = 'lon'
230 else:
231 xc = x
232 if y == 'latitude' and y not in coord_keys:
233 yc = 'lat'
234 else:
235 yc = y
236
237 if x is None:
238 ax = my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)
239 else:
240 ax = my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)
241
242 the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]
243 if x is None:
244 x = the_coords[0]
245 else:
246 x = coord_list[x]
247
248 if y is None:
249 y = the_coords[1]
250 else:
251 y = coord_list[y]
252
253 xrng = self.axes[subplot_index].get_xlim()
254 self.set_xrng(xrng, subplot_index)
255 yrng = self.axes[subplot_index].get_ylim()
256 self.set_yrng(yrng, subplot_index)
257 del temp_ds
258 return ax
259
260 def plot_xsection_map(
261 self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs
262 ):
263 """
264 Plots a cross section of 2D data on a geographical map.
265
266 Parameters
267 ----------
268 dsname : str or None
269 The name of the datastream to plot from. Set to None
270 to have ACT attempt to automatically detect this.
271 varname : str
272 The name of the variable to plot.
273 subplot_index : tuple
274 The index of the subplot to plot inside.
275 coastlines : bool
276 Set to True to plot the coastlines.
277 background : bool
278 Set to True to plot a stock image background.
279 **kwargs : keyword arguments
280 Additional keyword arguments will be passed into
281 :func:`act.plotting.XSectionDisplay.plot_xsection`
282
283 Returns
284 -------
285 ax : matplotlib axis handle
286 The matplotlib axis handle corresponding to the plot.
287
288 """
289 if not CARTOPY_AVAILABLE:
290 raise ImportError(
291 'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'
292 )
293
294 self.set_subplot_to_map(subplot_index)
295 self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)
296 xlims = self.xrng[subplot_index].flatten()
297 ylims = self.yrng[subplot_index].flatten()
298 self.axes[subplot_index].set_xticks(np.linspace(round(xlims[0], 0), round(xlims[1], 0), 10))
299 self.axes[subplot_index].set_yticks(np.linspace(round(ylims[0], 0), round(ylims[1], 0), 10))
300
301 if coastlines:
302 self.axes[subplot_index].coastlines(resolution='10m')
303
304 if background:
305 self.axes[subplot_index].stock_img()
306
307 return self.axes[subplot_index]
308
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/act/plotting/xsectiondisplay.py b/act/plotting/xsectiondisplay.py
--- a/act/plotting/xsectiondisplay.py
+++ b/act/plotting/xsectiondisplay.py
@@ -75,6 +75,7 @@
super().__init__(ds, subplot_shape, ds_name, **kwargs)
def set_subplot_to_map(self, subplot_index):
+ self.fig.delaxes(self.axes[subplot_index])
total_num_plots = self.axes.shape
if len(total_num_plots) == 2:
@@ -235,9 +236,9 @@
yc = y
if x is None:
- ax = my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)
+ my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)
else:
- ax = my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)
+ my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)
the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]
if x is None:
@@ -255,7 +256,7 @@
yrng = self.axes[subplot_index].get_ylim()
self.set_yrng(yrng, subplot_index)
del temp_ds
- return ax
+ return self.axes[subplot_index]
def plot_xsection_map(
self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs
@@ -290,7 +291,6 @@
raise ImportError(
'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'
)
-
self.set_subplot_to_map(subplot_index)
self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)
xlims = self.xrng[subplot_index].flatten()
| {"golden_diff": "diff --git a/act/plotting/xsectiondisplay.py b/act/plotting/xsectiondisplay.py\n--- a/act/plotting/xsectiondisplay.py\n+++ b/act/plotting/xsectiondisplay.py\n@@ -75,6 +75,7 @@\n super().__init__(ds, subplot_shape, ds_name, **kwargs)\n \n def set_subplot_to_map(self, subplot_index):\n+ self.fig.delaxes(self.axes[subplot_index])\n total_num_plots = self.axes.shape\n \n if len(total_num_plots) == 2:\n@@ -235,9 +236,9 @@\n yc = y\n \n if x is None:\n- ax = my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)\n+ my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)\n else:\n- ax = my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)\n+ my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)\n \n the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]\n if x is None:\n@@ -255,7 +256,7 @@\n yrng = self.axes[subplot_index].get_ylim()\n self.set_yrng(yrng, subplot_index)\n del temp_ds\n- return ax\n+ return self.axes[subplot_index]\n \n def plot_xsection_map(\n self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs\n@@ -290,7 +291,6 @@\n raise ImportError(\n 'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'\n )\n-\n self.set_subplot_to_map(subplot_index)\n self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)\n xlims = self.xrng[subplot_index].flatten()\n", "issue": "Bug in xsection plot map code\n* ACT version: Current Version\r\n* Python version: All\r\n* Operating System: All\r\n\r\n### Description\r\n\r\nxsection plot map is generating images with duplicate axes, see image below. I believe this is probably the cause to our baseline image failure.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nStores the class for XSectionDisplay.\n\n\"\"\"\n\n# Import third party libraries\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ntry:\n import cartopy.crs as ccrs\n\n CARTOPY_AVAILABLE = True\nexcept ImportError:\n CARTOPY_AVAILABLE = False\n\n# Import Local Libs\nfrom ..utils import data_utils\nfrom .plot import Display\n\n\nclass XSectionDisplay(Display):\n \"\"\"\n Plots cross sections of multidimensional datasets. The data\n must be able to be sliced into a 2 dimensional slice using the\n xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands.\n\n This is inherited from the :func:`act.plotting.Display`\n class and has therefore has the same attributes as that class.\n See :func:`act.plotting.Display`\n for more information. There are no additional attributes or parameters\n to this class.\n\n In order to create geographic plots, ACT needs the Cartopy package to be\n installed on your system. More information about\n Cartopy go here:https://scitools.org.uk/cartopy/docs/latest/.\n\n Examples\n --------\n For example, if you only want to do a cross section through the first\n time period of a 3D dataset called :code:`ir_temperature`, you would\n do the following in xarray:\n\n .. code-block:: python\n\n time_slice = my_ds[\"ir_temperature\"].isel(time=0)\n\n The methods of this class support passing in keyword arguments into\n xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands\n so that new datasets do not need to be created when slicing by specific time\n periods or spatial slices. For example, to plot the first time period\n from :code:`my_ds`, simply do:\n\n .. 
code-block:: python\n\n xsection = XSectionDisplay(my_ds, figsize=(15, 8))\n xsection.plot_xsection_map(\n None,\n \"ir_temperature\",\n vmin=220,\n vmax=300,\n cmap=\"Greys\",\n x=\"longitude\",\n y=\"latitude\",\n isel_kwargs={\"time\": 0},\n )\n\n Here, the array is sliced by the first time period as specified\n in :code:`isel_kwargs`. The other keyword arguments are standard keyword\n arguments taken by :func:`matplotlib.pyplot.pcolormesh`.\n\n \"\"\"\n\n def __init__(self, ds, subplot_shape=(1,), ds_name=None, **kwargs):\n super().__init__(ds, subplot_shape, ds_name, **kwargs)\n\n def set_subplot_to_map(self, subplot_index):\n total_num_plots = self.axes.shape\n\n if len(total_num_plots) == 2:\n second_number = total_num_plots[0]\n j = subplot_index[1]\n else:\n second_number = 1\n j = 0\n\n third_number = second_number * subplot_index[0] + j + 1\n\n self.axes[subplot_index] = plt.subplot(\n total_num_plots[0],\n second_number,\n third_number,\n projection=ccrs.PlateCarree(),\n )\n\n def set_xrng(self, xrng, subplot_index=(0,)):\n \"\"\"\n Sets the x range of the plot.\n\n Parameters\n ----------\n xrng : 2 number array\n The x limits of the plot.\n subplot_index : 1 or 2D tuple, list, or array\n The index of the subplot to set the x range of.\n\n \"\"\"\n if self.axes is None:\n raise RuntimeError('set_xrng requires the plot to be displayed.')\n\n if not hasattr(self, 'xrng') and len(self.axes.shape) == 2:\n self.xrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=xrng[0].dtype)\n elif not hasattr(self, 'xrng') and len(self.axes.shape) == 1:\n self.xrng = np.zeros((self.axes.shape[0], 2), dtype=xrng[0].dtype)\n\n self.axes[subplot_index].set_xlim(xrng)\n self.xrng[subplot_index, :] = np.array(xrng)\n\n def set_yrng(self, yrng, subplot_index=(0,)):\n \"\"\"\n Sets the y range of the plot.\n\n Parameters\n ----------\n yrng : 2 number array\n The y limits of the plot.\n subplot_index : 1 or 2D tuple, list, or array\n The index of the subplot to set the x range of.\n\n \"\"\"\n if self.axes is None:\n raise RuntimeError('set_yrng requires the plot to be displayed.')\n\n if not hasattr(self, 'yrng') and len(self.axes.shape) == 2:\n self.yrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=yrng[0].dtype)\n elif not hasattr(self, 'yrng') and len(self.axes.shape) == 1:\n self.yrng = np.zeros((self.axes.shape[0], 2), dtype=yrng[0].dtype)\n\n if yrng[0] == yrng[1]:\n yrng[1] = yrng[1] + 1\n\n self.axes[subplot_index].set_ylim(yrng)\n\n self.yrng[subplot_index, :] = yrng\n\n def plot_xsection(\n self,\n dsname,\n varname,\n x=None,\n y=None,\n subplot_index=(0,),\n sel_kwargs=None,\n isel_kwargs=None,\n **kwargs,\n ):\n \"\"\"\n This function plots a cross section whose x and y coordinates are\n specified by the variable names either provided by the user or\n automatically detected by xarray.\n\n Parameters\n ----------\n dsname : str or None\n The name of the datastream to plot from. Set to None to have\n ACT attempt to automatically detect this.\n varname : str\n The name of the variable to plot.\n x : str or None\n The name of the x coordinate variable.\n y : str or None\n The name of the y coordinate variable.\n subplot_index : tuple\n The index of the subplot to create the plot in.\n sel_kwargs : dict\n The keyword arguments to pass into :py:func:`xarray.DataArray.sel`\n This is useful when your data is in 3 or more dimensions and you\n want to only view a cross section on a specific x-y plane. 
For more\n information on how to use xarray's .sel and .isel functionality\n to slice datasets, see the documentation on :func:`xarray.DataArray.sel`.\n isel_kwargs : dict\n The keyword arguments to pass into :py:func:`xarray.DataArray.sel`\n **kwargs : keyword arguments\n Additional keyword arguments will be passed into\n :func:`xarray.DataArray.plot`.\n\n Returns\n -------\n ax : matplotlib axis handle\n The matplotlib axis handle corresponding to the plot.\n\n \"\"\"\n if dsname is None and len(self._ds.keys()) > 1:\n raise ValueError(\n 'You must choose a datastream when there are 2 '\n 'or more datasets in the TimeSeriesDisplay '\n 'object.'\n )\n elif dsname is None:\n dsname = list(self._ds.keys())[0]\n temp_ds = self._ds[dsname].copy()\n\n if sel_kwargs is not None:\n temp_ds = temp_ds.sel(**sel_kwargs, method='nearest')\n\n if isel_kwargs is not None:\n temp_ds = temp_ds.isel(**isel_kwargs)\n\n if (x is not None and y is None) or (y is None and x is not None):\n raise RuntimeError(\n 'Both x and y must be specified if we are'\n + 'not trying to automatically detect them!'\n )\n\n if x is not None:\n coord_list = {}\n x_coord_dim = temp_ds[x].dims[0]\n coord_list[x] = x_coord_dim\n y_coord_dim = temp_ds[y].dims[0]\n coord_list[y] = y_coord_dim\n new_ds = data_utils.assign_coordinates(temp_ds, coord_list)\n my_dataarray = new_ds[varname]\n else:\n my_dataarray = temp_ds[varname]\n\n coord_keys = [key for key in my_dataarray.coords.keys()]\n # X-array will sometimes shorten latitude and longitude variables\n if x == 'longitude' and x not in coord_keys:\n xc = 'lon'\n else:\n xc = x\n if y == 'latitude' and y not in coord_keys:\n yc = 'lat'\n else:\n yc = y\n\n if x is None:\n ax = my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)\n else:\n ax = my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)\n\n the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]\n if x is None:\n x = the_coords[0]\n else:\n x = coord_list[x]\n\n if y is None:\n y = the_coords[1]\n else:\n y = coord_list[y]\n\n xrng = self.axes[subplot_index].get_xlim()\n self.set_xrng(xrng, subplot_index)\n yrng = self.axes[subplot_index].get_ylim()\n self.set_yrng(yrng, subplot_index)\n del temp_ds\n return ax\n\n def plot_xsection_map(\n self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs\n ):\n \"\"\"\n Plots a cross section of 2D data on a geographical map.\n\n Parameters\n ----------\n dsname : str or None\n The name of the datastream to plot from. 
Set to None\n to have ACT attempt to automatically detect this.\n varname : str\n The name of the variable to plot.\n subplot_index : tuple\n The index of the subplot to plot inside.\n coastlines : bool\n Set to True to plot the coastlines.\n background : bool\n Set to True to plot a stock image background.\n **kwargs : keyword arguments\n Additional keyword arguments will be passed into\n :func:`act.plotting.XSectionDisplay.plot_xsection`\n\n Returns\n -------\n ax : matplotlib axis handle\n The matplotlib axis handle corresponding to the plot.\n\n \"\"\"\n if not CARTOPY_AVAILABLE:\n raise ImportError(\n 'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'\n )\n\n self.set_subplot_to_map(subplot_index)\n self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)\n xlims = self.xrng[subplot_index].flatten()\n ylims = self.yrng[subplot_index].flatten()\n self.axes[subplot_index].set_xticks(np.linspace(round(xlims[0], 0), round(xlims[1], 0), 10))\n self.axes[subplot_index].set_yticks(np.linspace(round(ylims[0], 0), round(ylims[1], 0), 10))\n\n if coastlines:\n self.axes[subplot_index].coastlines(resolution='10m')\n\n if background:\n self.axes[subplot_index].stock_img()\n\n return self.axes[subplot_index]\n", "path": "act/plotting/xsectiondisplay.py"}], "after_files": [{"content": "\"\"\"\nStores the class for XSectionDisplay.\n\n\"\"\"\n\n# Import third party libraries\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ntry:\n import cartopy.crs as ccrs\n\n CARTOPY_AVAILABLE = True\nexcept ImportError:\n CARTOPY_AVAILABLE = False\n\n# Import Local Libs\nfrom ..utils import data_utils\nfrom .plot import Display\n\n\nclass XSectionDisplay(Display):\n \"\"\"\n Plots cross sections of multidimensional datasets. The data\n must be able to be sliced into a 2 dimensional slice using the\n xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands.\n\n This is inherited from the :func:`act.plotting.Display`\n class and has therefore has the same attributes as that class.\n See :func:`act.plotting.Display`\n for more information. There are no additional attributes or parameters\n to this class.\n\n In order to create geographic plots, ACT needs the Cartopy package to be\n installed on your system. More information about\n Cartopy go here:https://scitools.org.uk/cartopy/docs/latest/.\n\n Examples\n --------\n For example, if you only want to do a cross section through the first\n time period of a 3D dataset called :code:`ir_temperature`, you would\n do the following in xarray:\n\n .. code-block:: python\n\n time_slice = my_ds[\"ir_temperature\"].isel(time=0)\n\n The methods of this class support passing in keyword arguments into\n xarray :func:`xarray.Dataset.sel` and :func:`xarray.Dataset.isel` commands\n so that new datasets do not need to be created when slicing by specific time\n periods or spatial slices. For example, to plot the first time period\n from :code:`my_ds`, simply do:\n\n .. code-block:: python\n\n xsection = XSectionDisplay(my_ds, figsize=(15, 8))\n xsection.plot_xsection_map(\n None,\n \"ir_temperature\",\n vmin=220,\n vmax=300,\n cmap=\"Greys\",\n x=\"longitude\",\n y=\"latitude\",\n isel_kwargs={\"time\": 0},\n )\n\n Here, the array is sliced by the first time period as specified\n in :code:`isel_kwargs`. 
The other keyword arguments are standard keyword\n arguments taken by :func:`matplotlib.pyplot.pcolormesh`.\n\n \"\"\"\n\n def __init__(self, ds, subplot_shape=(1,), ds_name=None, **kwargs):\n super().__init__(ds, subplot_shape, ds_name, **kwargs)\n\n def set_subplot_to_map(self, subplot_index):\n self.fig.delaxes(self.axes[subplot_index])\n total_num_plots = self.axes.shape\n\n if len(total_num_plots) == 2:\n second_number = total_num_plots[0]\n j = subplot_index[1]\n else:\n second_number = 1\n j = 0\n\n third_number = second_number * subplot_index[0] + j + 1\n\n self.axes[subplot_index] = plt.subplot(\n total_num_plots[0],\n second_number,\n third_number,\n projection=ccrs.PlateCarree(),\n )\n\n def set_xrng(self, xrng, subplot_index=(0,)):\n \"\"\"\n Sets the x range of the plot.\n\n Parameters\n ----------\n xrng : 2 number array\n The x limits of the plot.\n subplot_index : 1 or 2D tuple, list, or array\n The index of the subplot to set the x range of.\n\n \"\"\"\n if self.axes is None:\n raise RuntimeError('set_xrng requires the plot to be displayed.')\n\n if not hasattr(self, 'xrng') and len(self.axes.shape) == 2:\n self.xrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=xrng[0].dtype)\n elif not hasattr(self, 'xrng') and len(self.axes.shape) == 1:\n self.xrng = np.zeros((self.axes.shape[0], 2), dtype=xrng[0].dtype)\n\n self.axes[subplot_index].set_xlim(xrng)\n self.xrng[subplot_index, :] = np.array(xrng)\n\n def set_yrng(self, yrng, subplot_index=(0,)):\n \"\"\"\n Sets the y range of the plot.\n\n Parameters\n ----------\n yrng : 2 number array\n The y limits of the plot.\n subplot_index : 1 or 2D tuple, list, or array\n The index of the subplot to set the x range of.\n\n \"\"\"\n if self.axes is None:\n raise RuntimeError('set_yrng requires the plot to be displayed.')\n\n if not hasattr(self, 'yrng') and len(self.axes.shape) == 2:\n self.yrng = np.zeros((self.axes.shape[0], self.axes.shape[1], 2), dtype=yrng[0].dtype)\n elif not hasattr(self, 'yrng') and len(self.axes.shape) == 1:\n self.yrng = np.zeros((self.axes.shape[0], 2), dtype=yrng[0].dtype)\n\n if yrng[0] == yrng[1]:\n yrng[1] = yrng[1] + 1\n\n self.axes[subplot_index].set_ylim(yrng)\n\n self.yrng[subplot_index, :] = yrng\n\n def plot_xsection(\n self,\n dsname,\n varname,\n x=None,\n y=None,\n subplot_index=(0,),\n sel_kwargs=None,\n isel_kwargs=None,\n **kwargs,\n ):\n \"\"\"\n This function plots a cross section whose x and y coordinates are\n specified by the variable names either provided by the user or\n automatically detected by xarray.\n\n Parameters\n ----------\n dsname : str or None\n The name of the datastream to plot from. Set to None to have\n ACT attempt to automatically detect this.\n varname : str\n The name of the variable to plot.\n x : str or None\n The name of the x coordinate variable.\n y : str or None\n The name of the y coordinate variable.\n subplot_index : tuple\n The index of the subplot to create the plot in.\n sel_kwargs : dict\n The keyword arguments to pass into :py:func:`xarray.DataArray.sel`\n This is useful when your data is in 3 or more dimensions and you\n want to only view a cross section on a specific x-y plane. 
For more\n information on how to use xarray's .sel and .isel functionality\n to slice datasets, see the documentation on :func:`xarray.DataArray.sel`.\n isel_kwargs : dict\n The keyword arguments to pass into :py:func:`xarray.DataArray.sel`\n **kwargs : keyword arguments\n Additional keyword arguments will be passed into\n :func:`xarray.DataArray.plot`.\n\n Returns\n -------\n ax : matplotlib axis handle\n The matplotlib axis handle corresponding to the plot.\n\n \"\"\"\n if dsname is None and len(self._ds.keys()) > 1:\n raise ValueError(\n 'You must choose a datastream when there are 2 '\n 'or more datasets in the TimeSeriesDisplay '\n 'object.'\n )\n elif dsname is None:\n dsname = list(self._ds.keys())[0]\n temp_ds = self._ds[dsname].copy()\n\n if sel_kwargs is not None:\n temp_ds = temp_ds.sel(**sel_kwargs, method='nearest')\n\n if isel_kwargs is not None:\n temp_ds = temp_ds.isel(**isel_kwargs)\n\n if (x is not None and y is None) or (y is None and x is not None):\n raise RuntimeError(\n 'Both x and y must be specified if we are'\n + 'not trying to automatically detect them!'\n )\n\n if x is not None:\n coord_list = {}\n x_coord_dim = temp_ds[x].dims[0]\n coord_list[x] = x_coord_dim\n y_coord_dim = temp_ds[y].dims[0]\n coord_list[y] = y_coord_dim\n new_ds = data_utils.assign_coordinates(temp_ds, coord_list)\n my_dataarray = new_ds[varname]\n else:\n my_dataarray = temp_ds[varname]\n\n coord_keys = [key for key in my_dataarray.coords.keys()]\n # X-array will sometimes shorten latitude and longitude variables\n if x == 'longitude' and x not in coord_keys:\n xc = 'lon'\n else:\n xc = x\n if y == 'latitude' and y not in coord_keys:\n yc = 'lat'\n else:\n yc = y\n\n if x is None:\n my_dataarray.plot(ax=self.axes[subplot_index], **kwargs)\n else:\n my_dataarray.plot(ax=self.axes[subplot_index], x=xc, y=yc, **kwargs)\n\n the_coords = [the_keys for the_keys in my_dataarray.coords.keys()]\n if x is None:\n x = the_coords[0]\n else:\n x = coord_list[x]\n\n if y is None:\n y = the_coords[1]\n else:\n y = coord_list[y]\n\n xrng = self.axes[subplot_index].get_xlim()\n self.set_xrng(xrng, subplot_index)\n yrng = self.axes[subplot_index].get_ylim()\n self.set_yrng(yrng, subplot_index)\n del temp_ds\n return self.axes[subplot_index]\n\n def plot_xsection_map(\n self, dsname, varname, subplot_index=(0,), coastlines=True, background=False, **kwargs\n ):\n \"\"\"\n Plots a cross section of 2D data on a geographical map.\n\n Parameters\n ----------\n dsname : str or None\n The name of the datastream to plot from. 
Set to None\n to have ACT attempt to automatically detect this.\n varname : str\n The name of the variable to plot.\n subplot_index : tuple\n The index of the subplot to plot inside.\n coastlines : bool\n Set to True to plot the coastlines.\n background : bool\n Set to True to plot a stock image background.\n **kwargs : keyword arguments\n Additional keyword arguments will be passed into\n :func:`act.plotting.XSectionDisplay.plot_xsection`\n\n Returns\n -------\n ax : matplotlib axis handle\n The matplotlib axis handle corresponding to the plot.\n\n \"\"\"\n if not CARTOPY_AVAILABLE:\n raise ImportError(\n 'Cartopy needs to be installed in order to plot ' + 'cross sections on maps!'\n )\n self.set_subplot_to_map(subplot_index)\n self.plot_xsection(dsname, varname, subplot_index=subplot_index, **kwargs)\n xlims = self.xrng[subplot_index].flatten()\n ylims = self.yrng[subplot_index].flatten()\n self.axes[subplot_index].set_xticks(np.linspace(round(xlims[0], 0), round(xlims[1], 0), 10))\n self.axes[subplot_index].set_yticks(np.linspace(round(ylims[0], 0), round(ylims[1], 0), 10))\n\n if coastlines:\n self.axes[subplot_index].coastlines(resolution='10m')\n\n if background:\n self.axes[subplot_index].stock_img()\n\n return self.axes[subplot_index]\n", "path": "act/plotting/xsectiondisplay.py"}]} | 3,702 | 429 |
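The patch in the record above hinges on one Cartopy detail: a plain Matplotlib Axes cannot be given a map projection after creation, so the existing subplot is dropped with `delaxes` before a GeoAxes is created in the same grid slot. A minimal standalone sketch of that pattern, with illustrative figure/axes names rather than the display class's own attributes:

```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# Sketch of the delaxes-then-recreate pattern from the diff above
# (assumes cartopy is installed; names here are illustrative).
fig, axes = plt.subplots(1, 2)                 # ordinary, non-map Axes
fig.delaxes(axes[1])                           # remove the plain Axes first
axes[1] = fig.add_subplot(1, 2, 2,             # recreate the slot as a GeoAxes
                          projection=ccrs.PlateCarree())
axes[1].coastlines(resolution="110m")          # GeoAxes-only methods now work
fig.savefig("xsection_map_sketch.png")
```

Without the `delaxes` call the figure keeps both the old blank Axes and the new GeoAxes stacked in the same position, which is the duplicate-axes symptom described in the record's issue.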
gh_patches_debug_10230 | rasdani/github-patches | git_diff | streamlink__streamlink-925 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BBC iPlayer plugin cannot find VPID
### Checklist
- [x] This is a bug report.
- [ ] This is a feature request.
- [ ] This is a plugin (improvement) request.
- [ ] I have read the contribution guidelines.
### Description
The BBC iPlayer plugin cannot find the VPID for valid URLs.
### Reproduction steps / Explicit stream URLs to test
The following command:
`streamlink -l debug 'http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars' best`
produces this output:
```
[cli][info] Found matching plugin bbciplayer for URL http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
[plugin.bbciplayer][debug] Loading streams for episode: b013pnv4
[plugin.bbciplayer][debug] Looking for vpid on http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
[plugin.bbciplayer][error] Could not find VPID for episode b013pnv4
error: No playable streams found on this URL: http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars
```
and the same goes for any other valid iplayer url.
### Environment details
Operating system: arch linux
Streamlink and Python versions: streamlink-0.6.0 and python-3.6.1
### Comments, logs, screenshots, etc.
AFAICS, the page downloaded from the iPlayer URL does not contain the string "vpid".
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/bbciplayer.py`
Content:
```
1 from __future__ import print_function
2
3 import base64
4 import re
5 from functools import partial
6 from hashlib import sha1
7
8 from streamlink.plugin import Plugin
9 from streamlink.plugin.api import http
10 from streamlink.plugin.api import validate
11 from streamlink.stream import HDSStream
12 from streamlink.stream import HLSStream
13 from streamlink.utils import parse_xml, parse_json
14
15
16 class BBCiPlayer(Plugin):
17 url_re = re.compile(r"""https?://(?:www\.)?bbc.co.uk/iplayer/
18 (
19 episode/(?P<episode_id>\w+)|
20 live/(?P<channel_name>\w+)
21 )
22 """, re.VERBOSE)
23 vpid_re = re.compile(r'"vpid"\s*:\s*"(\w+)"')
24 tvip_re = re.compile(r'event_master_brand=(\w+?)&')
25 swf_url = "http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf"
26 hash = base64.b64decode(b"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==")
27 api_url = ("http://open.live.bbc.co.uk/mediaselector/5/select/"
28 "version/2.0/mediaset/{platform}/vpid/{vpid}/atk/{vpid_hash}/asn/1/")
29 platforms = ("pc", "iptv-all")
30
31 mediaselector_schema = validate.Schema(
32 validate.transform(partial(parse_xml, ignore_ns=True)),
33 validate.union({
34 "hds": validate.xml_findall(".//media[@kind='video']//connection[@transferFormat='hds']"),
35 "hls": validate.xml_findall(".//media[@kind='video']//connection[@transferFormat='hls']")
36 }),
37 {validate.text: validate.all(
38 [validate.all(validate.getattr("attrib"), validate.get("href"))],
39 validate.transform(lambda x: list(set(x))) # unique
40 )}
41 )
42
43 @classmethod
44 def can_handle_url(cls, url):
45 return cls.url_re.match(url) is not None
46
47 @classmethod
48 def _hash_vpid(cls, vpid):
49 return sha1(cls.hash + str(vpid).encode("utf8")).hexdigest()
50
51 def find_vpid(self, url):
52 self.logger.debug("Looking for vpid on {0}", url)
53 res = http.get(url)
54 m = self.vpid_re.search(res.text)
55 return m and m.group(1)
56
57 def find_tvip(self, url):
58 self.logger.debug("Looking for tvip on {0}", url)
59 res = http.get(url)
60 m = self.tvip_re.search(res.text)
61 return m and m.group(1)
62
63 def mediaselector(self, vpid):
64 for platform in self.platforms:
65 url = self.api_url.format(vpid=vpid, vpid_hash=self._hash_vpid(vpid), platform=platform)
66 stream_urls = http.get(url, schema=self.mediaselector_schema)
67 for surl in stream_urls.get("hls"):
68 for s in HLSStream.parse_variant_playlist(self.session, surl).items():
69 yield s
70 for surl in stream_urls.get("hds"):
71 for s in HDSStream.parse_manifest(self.session, surl).items():
72 yield s
73
74 def _get_streams(self):
75 m = self.url_re.match(self.url)
76 episode_id = m.group("episode_id")
77 channel_name = m.group("channel_name")
78
79 if episode_id:
80 self.logger.debug("Loading streams for episode: {0}", episode_id)
81 vpid = self.find_vpid(self.url)
82 if vpid:
83 self.logger.debug("Found VPID: {0}", vpid)
84 for s in self.mediaselector(vpid):
85 yield s
86 else:
87 self.logger.error("Could not find VPID for episode {0}", episode_id)
88 elif channel_name:
89 self.logger.debug("Loading stream for live channel: {0}", channel_name)
90 tvip = self.find_tvip(self.url)
91 if tvip:
92 self.logger.debug("Found TVIP: {0}", tvip)
93 for s in self.mediaselector(tvip):
94 yield s
95
96
97 __plugin__ = BBCiPlayer
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/bbciplayer.py b/src/streamlink/plugins/bbciplayer.py
--- a/src/streamlink/plugins/bbciplayer.py
+++ b/src/streamlink/plugins/bbciplayer.py
@@ -20,7 +20,7 @@
live/(?P<channel_name>\w+)
)
""", re.VERBOSE)
- vpid_re = re.compile(r'"vpid"\s*:\s*"(\w+)"')
+ vpid_re = re.compile(r'"ident_id"\s*:\s*"(\w+)"')
tvip_re = re.compile(r'event_master_brand=(\w+?)&')
swf_url = "http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf"
hash = base64.b64decode(b"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==")
| {"golden_diff": "diff --git a/src/streamlink/plugins/bbciplayer.py b/src/streamlink/plugins/bbciplayer.py\n--- a/src/streamlink/plugins/bbciplayer.py\n+++ b/src/streamlink/plugins/bbciplayer.py\n@@ -20,7 +20,7 @@\n live/(?P<channel_name>\\w+)\n )\n \"\"\", re.VERBOSE)\n- vpid_re = re.compile(r'\"vpid\"\\s*:\\s*\"(\\w+)\"')\n+ vpid_re = re.compile(r'\"ident_id\"\\s*:\\s*\"(\\w+)\"')\n tvip_re = re.compile(r'event_master_brand=(\\w+?)&')\n swf_url = \"http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf\"\n hash = base64.b64decode(b\"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==\")\n", "issue": "BBC iPlayer plugin cannot find VPID\n### Checklist\r\n\r\n- [x] This is a bug report.\r\n- [ ] This is a feature request.\r\n- [ ] This is a plugin (improvement) request.\r\n- [ ] I have read the contribution guidelines.\r\n\r\n### Description\r\n\r\nThe BBC IPlayer plugin cannot find the VPID for valid urls.\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\nThe following command:\r\n\r\n`streamlink -l debug 'http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars' best`\r\n\r\nproduces this output:\r\n\r\n```\r\n[cli][info] Found matching plugin bbciplayer for URL http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n[plugin.bbciplayer][debug] Loading streams for episode: b013pnv4\r\n[plugin.bbciplayer][debug] Looking for vpid on http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n[plugin.bbciplayer][error] Could not find VPID for episode b013pnv4\r\nerror: No playable streams found on this URL: http://www.bbc.co.uk/iplayer/episode/b013pnv4/horizon-20112012-2-seeing-stars\r\n\r\n```\r\n\r\nand the same goes for any other valid iplayer url.\r\n\r\n### Environment details\r\n\r\nOperating system: arch linux\r\nStreamlink and Python versions: streamlink-0.6.0 and python-3.6.1\r\n\r\n### Comments, logs, screenshots, etc.\r\n\r\nAFAICS, the page downloaded from the iplayer url does not contain the string \"vpid\".\r\n\n", "before_files": [{"content": "from __future__ import print_function\n\nimport base64\nimport re\nfrom functools import partial\nfrom hashlib import sha1\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HDSStream\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_xml, parse_json\n\n\nclass BBCiPlayer(Plugin):\n url_re = re.compile(r\"\"\"https?://(?:www\\.)?bbc.co.uk/iplayer/\n (\n episode/(?P<episode_id>\\w+)|\n live/(?P<channel_name>\\w+)\n )\n \"\"\", re.VERBOSE)\n vpid_re = re.compile(r'\"vpid\"\\s*:\\s*\"(\\w+)\"')\n tvip_re = re.compile(r'event_master_brand=(\\w+?)&')\n swf_url = \"http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf\"\n hash = base64.b64decode(b\"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==\")\n api_url = (\"http://open.live.bbc.co.uk/mediaselector/5/select/\"\n \"version/2.0/mediaset/{platform}/vpid/{vpid}/atk/{vpid_hash}/asn/1/\")\n platforms = (\"pc\", \"iptv-all\")\n\n mediaselector_schema = validate.Schema(\n validate.transform(partial(parse_xml, ignore_ns=True)),\n validate.union({\n \"hds\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hds']\"),\n \"hls\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hls']\")\n }),\n {validate.text: validate.all(\n [validate.all(validate.getattr(\"attrib\"), 
validate.get(\"href\"))],\n validate.transform(lambda x: list(set(x))) # unique\n )}\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n @classmethod\n def _hash_vpid(cls, vpid):\n return sha1(cls.hash + str(vpid).encode(\"utf8\")).hexdigest()\n\n def find_vpid(self, url):\n self.logger.debug(\"Looking for vpid on {0}\", url)\n res = http.get(url)\n m = self.vpid_re.search(res.text)\n return m and m.group(1)\n\n def find_tvip(self, url):\n self.logger.debug(\"Looking for tvip on {0}\", url)\n res = http.get(url)\n m = self.tvip_re.search(res.text)\n return m and m.group(1)\n\n def mediaselector(self, vpid):\n for platform in self.platforms:\n url = self.api_url.format(vpid=vpid, vpid_hash=self._hash_vpid(vpid), platform=platform)\n stream_urls = http.get(url, schema=self.mediaselector_schema)\n for surl in stream_urls.get(\"hls\"):\n for s in HLSStream.parse_variant_playlist(self.session, surl).items():\n yield s\n for surl in stream_urls.get(\"hds\"):\n for s in HDSStream.parse_manifest(self.session, surl).items():\n yield s\n\n def _get_streams(self):\n m = self.url_re.match(self.url)\n episode_id = m.group(\"episode_id\")\n channel_name = m.group(\"channel_name\")\n\n if episode_id:\n self.logger.debug(\"Loading streams for episode: {0}\", episode_id)\n vpid = self.find_vpid(self.url)\n if vpid:\n self.logger.debug(\"Found VPID: {0}\", vpid)\n for s in self.mediaselector(vpid):\n yield s\n else:\n self.logger.error(\"Could not find VPID for episode {0}\", episode_id)\n elif channel_name:\n self.logger.debug(\"Loading stream for live channel: {0}\", channel_name)\n tvip = self.find_tvip(self.url)\n if tvip:\n self.logger.debug(\"Found TVIP: {0}\", tvip)\n for s in self.mediaselector(tvip):\n yield s\n\n\n__plugin__ = BBCiPlayer\n", "path": "src/streamlink/plugins/bbciplayer.py"}], "after_files": [{"content": "from __future__ import print_function\n\nimport base64\nimport re\nfrom functools import partial\nfrom hashlib import sha1\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HDSStream\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_xml, parse_json\n\n\nclass BBCiPlayer(Plugin):\n url_re = re.compile(r\"\"\"https?://(?:www\\.)?bbc.co.uk/iplayer/\n (\n episode/(?P<episode_id>\\w+)|\n live/(?P<channel_name>\\w+)\n )\n \"\"\", re.VERBOSE)\n vpid_re = re.compile(r'\"ident_id\"\\s*:\\s*\"(\\w+)\"')\n tvip_re = re.compile(r'event_master_brand=(\\w+?)&')\n swf_url = \"http://emp.bbci.co.uk/emp/SMPf/1.18.3/StandardMediaPlayerChromelessFlash.swf\"\n hash = base64.b64decode(b\"N2RmZjc2NzFkMGM2OTdmZWRiMWQ5MDVkOWExMjE3MTk5MzhiOTJiZg==\")\n api_url = (\"http://open.live.bbc.co.uk/mediaselector/5/select/\"\n \"version/2.0/mediaset/{platform}/vpid/{vpid}/atk/{vpid_hash}/asn/1/\")\n platforms = (\"pc\", \"iptv-all\")\n\n mediaselector_schema = validate.Schema(\n validate.transform(partial(parse_xml, ignore_ns=True)),\n validate.union({\n \"hds\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hds']\"),\n \"hls\": validate.xml_findall(\".//media[@kind='video']//connection[@transferFormat='hls']\")\n }),\n {validate.text: validate.all(\n [validate.all(validate.getattr(\"attrib\"), validate.get(\"href\"))],\n validate.transform(lambda x: list(set(x))) # unique\n )}\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n @classmethod\n def 
_hash_vpid(cls, vpid):\n return sha1(cls.hash + str(vpid).encode(\"utf8\")).hexdigest()\n\n def find_vpid(self, url):\n self.logger.debug(\"Looking for vpid on {0}\", url)\n res = http.get(url)\n m = self.vpid_re.search(res.text)\n return m and m.group(1)\n\n def find_tvip(self, url):\n self.logger.debug(\"Looking for tvip on {0}\", url)\n res = http.get(url)\n m = self.tvip_re.search(res.text)\n return m and m.group(1)\n\n def mediaselector(self, vpid):\n for platform in self.platforms:\n url = self.api_url.format(vpid=vpid, vpid_hash=self._hash_vpid(vpid), platform=platform)\n stream_urls = http.get(url, schema=self.mediaselector_schema)\n for surl in stream_urls.get(\"hls\"):\n for s in HLSStream.parse_variant_playlist(self.session, surl).items():\n yield s\n for surl in stream_urls.get(\"hds\"):\n for s in HDSStream.parse_manifest(self.session, surl).items():\n yield s\n\n def _get_streams(self):\n m = self.url_re.match(self.url)\n episode_id = m.group(\"episode_id\")\n channel_name = m.group(\"channel_name\")\n\n if episode_id:\n self.logger.debug(\"Loading streams for episode: {0}\", episode_id)\n vpid = self.find_vpid(self.url)\n if vpid:\n self.logger.debug(\"Found VPID: {0}\", vpid)\n for s in self.mediaselector(vpid):\n yield s\n else:\n self.logger.error(\"Could not find VPID for episode {0}\", episode_id)\n elif channel_name:\n self.logger.debug(\"Loading stream for live channel: {0}\", channel_name)\n tvip = self.find_tvip(self.url)\n if tvip:\n self.logger.debug(\"Found TVIP: {0}\", tvip)\n for s in self.mediaselector(tvip):\n yield s\n\n\n__plugin__ = BBCiPlayer\n", "path": "src/streamlink/plugins/bbciplayer.py"}]} | 1,814 | 233 |
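The fix in the record above is a one-token regex change, from matching `"vpid"` to matching `"ident_id"`. A quick way to see the effect is to run both patterns over a made-up fragment of the JSON an episode page might embed (the sample string below is invented for illustration, not captured from bbc.co.uk):

```python
import re

# Hypothetical page fragment: the id now appears under "ident_id"
# rather than "vpid", per the patch above.
sample = '{"ident_id": "b013pnv4", "title": "Horizon"}'

old_re = re.compile(r'"vpid"\s*:\s*"(\w+)"')        # pre-patch pattern
new_re = re.compile(r'"ident_id"\s*:\s*"(\w+)"')    # post-patch pattern

print(old_re.search(sample))                # None -> "Could not find VPID"
print(new_re.search(sample).group(1))       # b013pnv4
```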
gh_patches_debug_21634 | rasdani/github-patches | git_diff | encode__uvicorn-507 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FORWARDED_ALLOW_IPS cannot be loaded from ENV
The following two lines together make it impossible to read `FORWARDED_ALLOW_IPS` from an env variable.
https://github.com/encode/uvicorn/blob/4da64be4335ba50e43d7ac8c3b5535460a9c1594/uvicorn/config.py#L160
https://github.com/encode/uvicorn/blob/4da64be4335ba50e43d7ac8c3b5535460a9c1594/uvicorn/config.py#L193
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/config.py`
Content:
```
1 import asyncio
2 import inspect
3 import logging
4 import logging.config
5 import os
6 import socket
7 import ssl
8 import sys
9 from typing import List, Tuple
10
11 import click
12
13 from uvicorn.importer import ImportFromStringError, import_from_string
14 from uvicorn.middleware.asgi2 import ASGI2Middleware
15 from uvicorn.middleware.debug import DebugMiddleware
16 from uvicorn.middleware.message_logger import MessageLoggerMiddleware
17 from uvicorn.middleware.proxy_headers import ProxyHeadersMiddleware
18 from uvicorn.middleware.wsgi import WSGIMiddleware
19
20 TRACE_LOG_LEVEL = 5
21
22 LOG_LEVELS = {
23 "critical": logging.CRITICAL,
24 "error": logging.ERROR,
25 "warning": logging.WARNING,
26 "info": logging.INFO,
27 "debug": logging.DEBUG,
28 "trace": TRACE_LOG_LEVEL,
29 }
30 HTTP_PROTOCOLS = {
31 "auto": "uvicorn.protocols.http.auto:AutoHTTPProtocol",
32 "h11": "uvicorn.protocols.http.h11_impl:H11Protocol",
33 "httptools": "uvicorn.protocols.http.httptools_impl:HttpToolsProtocol",
34 }
35 WS_PROTOCOLS = {
36 "auto": "uvicorn.protocols.websockets.auto:AutoWebSocketsProtocol",
37 "none": None,
38 "websockets": "uvicorn.protocols.websockets.websockets_impl:WebSocketProtocol",
39 "wsproto": "uvicorn.protocols.websockets.wsproto_impl:WSProtocol",
40 }
41 LIFESPAN = {
42 "auto": "uvicorn.lifespan.on:LifespanOn",
43 "on": "uvicorn.lifespan.on:LifespanOn",
44 "off": "uvicorn.lifespan.off:LifespanOff",
45 }
46 LOOP_SETUPS = {
47 "none": None,
48 "auto": "uvicorn.loops.auto:auto_loop_setup",
49 "asyncio": "uvicorn.loops.asyncio:asyncio_setup",
50 "uvloop": "uvicorn.loops.uvloop:uvloop_setup",
51 "iocp": "uvicorn.loops.iocp:iocp_setup",
52 }
53 INTERFACES = ["auto", "asgi3", "asgi2", "wsgi"]
54
55
56 # Fallback to 'ssl.PROTOCOL_SSLv23' in order to support Python < 3.5.3.
57 SSL_PROTOCOL_VERSION = getattr(ssl, "PROTOCOL_TLS", ssl.PROTOCOL_SSLv23)
58
59
60 LOGGING_CONFIG = {
61 "version": 1,
62 "disable_existing_loggers": False,
63 "formatters": {
64 "default": {
65 "()": "uvicorn.logging.DefaultFormatter",
66 "fmt": "%(levelprefix)s %(message)s",
67 },
68 "access": {
69 "()": "uvicorn.logging.AccessFormatter",
70 "fmt": '%(levelprefix)s %(client_addr)s - "%(request_line)s" %(status_code)s',
71 },
72 },
73 "handlers": {
74 "default": {
75 "formatter": "default",
76 "class": "logging.StreamHandler",
77 "stream": "ext://sys.stderr",
78 },
79 "access": {
80 "formatter": "access",
81 "class": "logging.StreamHandler",
82 "stream": "ext://sys.stdout",
83 },
84 },
85 "loggers": {
86 "": {"handlers": ["default"], "level": "INFO"},
87 "uvicorn.error": {"level": "INFO"},
88 "uvicorn.access": {"handlers": ["access"], "level": "INFO", "propagate": False},
89 },
90 }
91
92 logger = logging.getLogger("uvicorn.error")
93
94
95 def create_ssl_context(certfile, keyfile, ssl_version, cert_reqs, ca_certs, ciphers):
96 ctx = ssl.SSLContext(ssl_version)
97 ctx.load_cert_chain(certfile, keyfile)
98 ctx.verify_mode = cert_reqs
99 if ca_certs:
100 ctx.load_verify_locations(ca_certs)
101 if ciphers:
102 ctx.set_ciphers(ciphers)
103 return ctx
104
105
106 class Config:
107 def __init__(
108 self,
109 app,
110 host="127.0.0.1",
111 port=8000,
112 uds=None,
113 fd=None,
114 loop="auto",
115 http="auto",
116 ws="auto",
117 lifespan="auto",
118 env_file=None,
119 log_config=LOGGING_CONFIG,
120 log_level=None,
121 access_log=True,
122 interface="auto",
123 debug=False,
124 reload=False,
125 reload_dirs=None,
126 workers=None,
127 proxy_headers=True,
128 forwarded_allow_ips=None,
129 root_path="",
130 limit_concurrency=None,
131 limit_max_requests=None,
132 timeout_keep_alive=5,
133 timeout_notify=30,
134 callback_notify=None,
135 ssl_keyfile=None,
136 ssl_certfile=None,
137 ssl_version=SSL_PROTOCOL_VERSION,
138 ssl_cert_reqs=ssl.CERT_NONE,
139 ssl_ca_certs=None,
140 ssl_ciphers="TLSv1",
141 headers=None,
142 ):
143 self.app = app
144 self.host = host
145 self.port = port
146 self.uds = uds
147 self.fd = fd
148 self.loop = loop
149 self.http = http
150 self.ws = ws
151 self.lifespan = lifespan
152 self.log_config = log_config
153 self.log_level = log_level
154 self.access_log = access_log
155 self.interface = interface
156 self.debug = debug
157 self.reload = reload
158 self.workers = workers or 1
159 self.proxy_headers = proxy_headers
160 self.forwarded_allow_ips = forwarded_allow_ips or "127.0.0.1"
161 self.root_path = root_path
162 self.limit_concurrency = limit_concurrency
163 self.limit_max_requests = limit_max_requests
164 self.timeout_keep_alive = timeout_keep_alive
165 self.timeout_notify = timeout_notify
166 self.callback_notify = callback_notify
167 self.ssl_keyfile = ssl_keyfile
168 self.ssl_certfile = ssl_certfile
169 self.ssl_version = ssl_version
170 self.ssl_cert_reqs = ssl_cert_reqs
171 self.ssl_ca_certs = ssl_ca_certs
172 self.ssl_ciphers = ssl_ciphers
173 self.headers = headers if headers else [] # type: List[str]
174 self.encoded_headers = None # type: List[Tuple[bytes, bytes]]
175
176 self.loaded = False
177 self.configure_logging()
178
179 if reload_dirs is None:
180 self.reload_dirs = [os.getcwd()]
181 else:
182 self.reload_dirs = reload_dirs
183
184 if env_file is not None:
185 from dotenv import load_dotenv
186
187 logger.info("Loading environment from '%s'", env_file)
188 load_dotenv(dotenv_path=env_file)
189
190 if workers is None and "WEB_CONCURRENCY" in os.environ:
191 self.workers = int(os.environ["WEB_CONCURRENCY"])
192
193 if forwarded_allow_ips is None and "FORWARDED_ALLOW_IPS" in os.environ:
194 self.forwarded_allow_ips = os.environ["FORWARDED_ALLOW_IPS"]
195
196 @property
197 def is_ssl(self) -> bool:
198 return bool(self.ssl_keyfile or self.ssl_certfile)
199
200 def configure_logging(self):
201 logging.addLevelName(TRACE_LOG_LEVEL, "TRACE")
202
203 if sys.version_info < (3, 7):
204 # https://bugs.python.org/issue30520
205 import pickle
206
207 def __reduce__(self):
208 if isinstance(self, logging.RootLogger):
209 return logging.getLogger, ()
210
211 if logging.getLogger(self.name) is not self:
212 raise pickle.PicklingError("logger cannot be pickled")
213 return logging.getLogger, (self.name,)
214
215 logging.Logger.__reduce__ = __reduce__
216
217 if self.log_config is not None:
218 if isinstance(self.log_config, dict):
219 logging.config.dictConfig(self.log_config)
220 else:
221 logging.config.fileConfig(self.log_config)
222
223 if self.log_level is not None:
224 if isinstance(self.log_level, str):
225 log_level = LOG_LEVELS[self.log_level]
226 else:
227 log_level = self.log_level
228 logging.getLogger("").setLevel(log_level)
229 logging.getLogger("uvicorn.error").setLevel(log_level)
230 logging.getLogger("uvicorn.access").setLevel(log_level)
231 logging.getLogger("uvicorn.asgi").setLevel(log_level)
232 if self.access_log is False:
233 logging.getLogger("uvicorn.access").handlers = []
234 logging.getLogger("uvicorn.access").propagate = False
235
236 def load(self):
237 assert not self.loaded
238
239 if self.is_ssl:
240 self.ssl = create_ssl_context(
241 keyfile=self.ssl_keyfile,
242 certfile=self.ssl_certfile,
243 ssl_version=self.ssl_version,
244 cert_reqs=self.ssl_cert_reqs,
245 ca_certs=self.ssl_ca_certs,
246 ciphers=self.ssl_ciphers,
247 )
248 else:
249 self.ssl = None
250
251 encoded_headers = [
252 (key.lower().encode("latin1"), value.encode("latin1"))
253 for key, value in self.headers
254 ]
255 self.encoded_headers = (
256 encoded_headers
257 if b"server" in dict(encoded_headers)
258 else [(b"server", b"uvicorn")] + encoded_headers
259 ) # type: List[Tuple[bytes, bytes]]
260
261 if isinstance(self.http, str):
262 self.http_protocol_class = import_from_string(HTTP_PROTOCOLS[self.http])
263 else:
264 self.http_protocol_class = self.http
265
266 if isinstance(self.ws, str):
267 self.ws_protocol_class = import_from_string(WS_PROTOCOLS[self.ws])
268 else:
269 self.ws_protocol_class = self.ws
270
271 self.lifespan_class = import_from_string(LIFESPAN[self.lifespan])
272
273 try:
274 self.loaded_app = import_from_string(self.app)
275 except ImportFromStringError as exc:
276 logger.error("Error loading ASGI app. %s" % exc)
277 sys.exit(1)
278
279 if self.interface == "auto":
280 if inspect.isclass(self.loaded_app):
281 use_asgi_3 = hasattr(self.loaded_app, "__await__")
282 elif inspect.isfunction(self.loaded_app):
283 use_asgi_3 = asyncio.iscoroutinefunction(self.loaded_app)
284 else:
285 call = getattr(self.loaded_app, "__call__", None)
286 use_asgi_3 = asyncio.iscoroutinefunction(call)
287 self.interface = "asgi3" if use_asgi_3 else "asgi2"
288
289 if self.interface == "wsgi":
290 self.loaded_app = WSGIMiddleware(self.loaded_app)
291 self.ws_protocol_class = None
292 elif self.interface == "asgi2":
293 self.loaded_app = ASGI2Middleware(self.loaded_app)
294
295 if self.debug:
296 self.loaded_app = DebugMiddleware(self.loaded_app)
297 if logger.level <= TRACE_LOG_LEVEL:
298 self.loaded_app = MessageLoggerMiddleware(self.loaded_app)
299 if self.proxy_headers:
300 self.loaded_app = ProxyHeadersMiddleware(
301 self.loaded_app, trusted_hosts=self.forwarded_allow_ips
302 )
303
304 self.loaded = True
305
306 def setup_event_loop(self):
307 loop_setup = import_from_string(LOOP_SETUPS[self.loop])
308 if loop_setup is not None:
309 loop_setup()
310
311 def bind_socket(self):
312 sock = socket.socket()
313 sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
314 try:
315 sock.bind((self.host, self.port))
316 except OSError as exc:
317 logger.error(exc)
318 sys.exit(1)
319 sock.set_inheritable(True)
320
321 message = "Uvicorn running on %s://%s:%d (Press CTRL+C to quit)"
322 color_message = (
323 "Uvicorn running on "
324 + click.style("%s://%s:%d", bold=True)
325 + " (Press CTRL+C to quit)"
326 )
327 protocol_name = "https" if self.is_ssl else "http"
328 logger.info(
329 message,
330 protocol_name,
331 self.host,
332 self.port,
333 extra={"color_message": color_message},
334 )
335 return sock
336
337 @property
338 def should_reload(self):
339 return isinstance(self.app, str) and (self.debug or self.reload)
340
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/uvicorn/config.py b/uvicorn/config.py
--- a/uvicorn/config.py
+++ b/uvicorn/config.py
@@ -157,7 +157,6 @@
self.reload = reload
self.workers = workers or 1
self.proxy_headers = proxy_headers
- self.forwarded_allow_ips = forwarded_allow_ips or "127.0.0.1"
self.root_path = root_path
self.limit_concurrency = limit_concurrency
self.limit_max_requests = limit_max_requests
@@ -190,8 +189,12 @@
if workers is None and "WEB_CONCURRENCY" in os.environ:
self.workers = int(os.environ["WEB_CONCURRENCY"])
- if forwarded_allow_ips is None and "FORWARDED_ALLOW_IPS" in os.environ:
- self.forwarded_allow_ips = os.environ["FORWARDED_ALLOW_IPS"]
+ if forwarded_allow_ips is None:
+ self.forwarded_allow_ips = os.environ.get(
+ "FORWARDED_ALLOW_IPS", "127.0.0.1"
+ )
+ else:
+ self.forwarded_allow_ips = forwarded_allow_ips
@property
def is_ssl(self) -> bool:
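The diff above only moves where the default is applied: an explicit `forwarded_allow_ips` argument still wins, but the `FORWARDED_ALLOW_IPS` environment variable is now consulted (after any `.env` file is loaded) before falling back to `127.0.0.1`. A hedged sketch of that precedence, using an illustrative helper rather than uvicorn's actual `Config.__init__` code:

```python
import os

# Illustrative precedence: explicit argument > env var > built-in default.
def resolve_forwarded_allow_ips(explicit=None):
    if explicit is None:
        return os.environ.get("FORWARDED_ALLOW_IPS", "127.0.0.1")
    return explicit

os.environ["FORWARDED_ALLOW_IPS"] = "10.0.0.0/8"
print(resolve_forwarded_allow_ips())           # 10.0.0.0/8  (env beats default)
print(resolve_forwarded_allow_ips("1.2.3.4"))  # 1.2.3.4     (argument beats env)
```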
| {"golden_diff": "diff --git a/uvicorn/config.py b/uvicorn/config.py\n--- a/uvicorn/config.py\n+++ b/uvicorn/config.py\n@@ -157,7 +157,6 @@\n self.reload = reload\n self.workers = workers or 1\n self.proxy_headers = proxy_headers\n- self.forwarded_allow_ips = forwarded_allow_ips or \"127.0.0.1\"\n self.root_path = root_path\n self.limit_concurrency = limit_concurrency\n self.limit_max_requests = limit_max_requests\n@@ -190,8 +189,12 @@\n if workers is None and \"WEB_CONCURRENCY\" in os.environ:\n self.workers = int(os.environ[\"WEB_CONCURRENCY\"])\n \n- if forwarded_allow_ips is None and \"FORWARDED_ALLOW_IPS\" in os.environ:\n- self.forwarded_allow_ips = os.environ[\"FORWARDED_ALLOW_IPS\"]\n+ if forwarded_allow_ips is None:\n+ self.forwarded_allow_ips = os.environ.get(\n+ \"FORWARDED_ALLOW_IPS\", \"127.0.0.1\"\n+ )\n+ else:\n+ self.forwarded_allow_ips = forwarded_allow_ips\n \n @property\n def is_ssl(self) -> bool:\n", "issue": "FORWARDED_ALLOW_IPS cannot be loaded from ENV\nThe following two lines together makes impossible to read `FORWARDED_ALLOW_IPS` from env variable.\r\n\r\nhttps://github.com/encode/uvicorn/blob/4da64be4335ba50e43d7ac8c3b5535460a9c1594/uvicorn/config.py#L160\r\nhttps://github.com/encode/uvicorn/blob/4da64be4335ba50e43d7ac8c3b5535460a9c1594/uvicorn/config.py#L193\n", "before_files": [{"content": "import asyncio\nimport inspect\nimport logging\nimport logging.config\nimport os\nimport socket\nimport ssl\nimport sys\nfrom typing import List, Tuple\n\nimport click\n\nfrom uvicorn.importer import ImportFromStringError, import_from_string\nfrom uvicorn.middleware.asgi2 import ASGI2Middleware\nfrom uvicorn.middleware.debug import DebugMiddleware\nfrom uvicorn.middleware.message_logger import MessageLoggerMiddleware\nfrom uvicorn.middleware.proxy_headers import ProxyHeadersMiddleware\nfrom uvicorn.middleware.wsgi import WSGIMiddleware\n\nTRACE_LOG_LEVEL = 5\n\nLOG_LEVELS = {\n \"critical\": logging.CRITICAL,\n \"error\": logging.ERROR,\n \"warning\": logging.WARNING,\n \"info\": logging.INFO,\n \"debug\": logging.DEBUG,\n \"trace\": TRACE_LOG_LEVEL,\n}\nHTTP_PROTOCOLS = {\n \"auto\": \"uvicorn.protocols.http.auto:AutoHTTPProtocol\",\n \"h11\": \"uvicorn.protocols.http.h11_impl:H11Protocol\",\n \"httptools\": \"uvicorn.protocols.http.httptools_impl:HttpToolsProtocol\",\n}\nWS_PROTOCOLS = {\n \"auto\": \"uvicorn.protocols.websockets.auto:AutoWebSocketsProtocol\",\n \"none\": None,\n \"websockets\": \"uvicorn.protocols.websockets.websockets_impl:WebSocketProtocol\",\n \"wsproto\": \"uvicorn.protocols.websockets.wsproto_impl:WSProtocol\",\n}\nLIFESPAN = {\n \"auto\": \"uvicorn.lifespan.on:LifespanOn\",\n \"on\": \"uvicorn.lifespan.on:LifespanOn\",\n \"off\": \"uvicorn.lifespan.off:LifespanOff\",\n}\nLOOP_SETUPS = {\n \"none\": None,\n \"auto\": \"uvicorn.loops.auto:auto_loop_setup\",\n \"asyncio\": \"uvicorn.loops.asyncio:asyncio_setup\",\n \"uvloop\": \"uvicorn.loops.uvloop:uvloop_setup\",\n \"iocp\": \"uvicorn.loops.iocp:iocp_setup\",\n}\nINTERFACES = [\"auto\", \"asgi3\", \"asgi2\", \"wsgi\"]\n\n\n# Fallback to 'ssl.PROTOCOL_SSLv23' in order to support Python < 3.5.3.\nSSL_PROTOCOL_VERSION = getattr(ssl, \"PROTOCOL_TLS\", ssl.PROTOCOL_SSLv23)\n\n\nLOGGING_CONFIG = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"default\": {\n \"()\": \"uvicorn.logging.DefaultFormatter\",\n \"fmt\": \"%(levelprefix)s %(message)s\",\n },\n \"access\": {\n \"()\": \"uvicorn.logging.AccessFormatter\",\n \"fmt\": '%(levelprefix)s %(client_addr)s - 
\"%(request_line)s\" %(status_code)s',\n },\n },\n \"handlers\": {\n \"default\": {\n \"formatter\": \"default\",\n \"class\": \"logging.StreamHandler\",\n \"stream\": \"ext://sys.stderr\",\n },\n \"access\": {\n \"formatter\": \"access\",\n \"class\": \"logging.StreamHandler\",\n \"stream\": \"ext://sys.stdout\",\n },\n },\n \"loggers\": {\n \"\": {\"handlers\": [\"default\"], \"level\": \"INFO\"},\n \"uvicorn.error\": {\"level\": \"INFO\"},\n \"uvicorn.access\": {\"handlers\": [\"access\"], \"level\": \"INFO\", \"propagate\": False},\n },\n}\n\nlogger = logging.getLogger(\"uvicorn.error\")\n\n\ndef create_ssl_context(certfile, keyfile, ssl_version, cert_reqs, ca_certs, ciphers):\n ctx = ssl.SSLContext(ssl_version)\n ctx.load_cert_chain(certfile, keyfile)\n ctx.verify_mode = cert_reqs\n if ca_certs:\n ctx.load_verify_locations(ca_certs)\n if ciphers:\n ctx.set_ciphers(ciphers)\n return ctx\n\n\nclass Config:\n def __init__(\n self,\n app,\n host=\"127.0.0.1\",\n port=8000,\n uds=None,\n fd=None,\n loop=\"auto\",\n http=\"auto\",\n ws=\"auto\",\n lifespan=\"auto\",\n env_file=None,\n log_config=LOGGING_CONFIG,\n log_level=None,\n access_log=True,\n interface=\"auto\",\n debug=False,\n reload=False,\n reload_dirs=None,\n workers=None,\n proxy_headers=True,\n forwarded_allow_ips=None,\n root_path=\"\",\n limit_concurrency=None,\n limit_max_requests=None,\n timeout_keep_alive=5,\n timeout_notify=30,\n callback_notify=None,\n ssl_keyfile=None,\n ssl_certfile=None,\n ssl_version=SSL_PROTOCOL_VERSION,\n ssl_cert_reqs=ssl.CERT_NONE,\n ssl_ca_certs=None,\n ssl_ciphers=\"TLSv1\",\n headers=None,\n ):\n self.app = app\n self.host = host\n self.port = port\n self.uds = uds\n self.fd = fd\n self.loop = loop\n self.http = http\n self.ws = ws\n self.lifespan = lifespan\n self.log_config = log_config\n self.log_level = log_level\n self.access_log = access_log\n self.interface = interface\n self.debug = debug\n self.reload = reload\n self.workers = workers or 1\n self.proxy_headers = proxy_headers\n self.forwarded_allow_ips = forwarded_allow_ips or \"127.0.0.1\"\n self.root_path = root_path\n self.limit_concurrency = limit_concurrency\n self.limit_max_requests = limit_max_requests\n self.timeout_keep_alive = timeout_keep_alive\n self.timeout_notify = timeout_notify\n self.callback_notify = callback_notify\n self.ssl_keyfile = ssl_keyfile\n self.ssl_certfile = ssl_certfile\n self.ssl_version = ssl_version\n self.ssl_cert_reqs = ssl_cert_reqs\n self.ssl_ca_certs = ssl_ca_certs\n self.ssl_ciphers = ssl_ciphers\n self.headers = headers if headers else [] # type: List[str]\n self.encoded_headers = None # type: List[Tuple[bytes, bytes]]\n\n self.loaded = False\n self.configure_logging()\n\n if reload_dirs is None:\n self.reload_dirs = [os.getcwd()]\n else:\n self.reload_dirs = reload_dirs\n\n if env_file is not None:\n from dotenv import load_dotenv\n\n logger.info(\"Loading environment from '%s'\", env_file)\n load_dotenv(dotenv_path=env_file)\n\n if workers is None and \"WEB_CONCURRENCY\" in os.environ:\n self.workers = int(os.environ[\"WEB_CONCURRENCY\"])\n\n if forwarded_allow_ips is None and \"FORWARDED_ALLOW_IPS\" in os.environ:\n self.forwarded_allow_ips = os.environ[\"FORWARDED_ALLOW_IPS\"]\n\n @property\n def is_ssl(self) -> bool:\n return bool(self.ssl_keyfile or self.ssl_certfile)\n\n def configure_logging(self):\n logging.addLevelName(TRACE_LOG_LEVEL, \"TRACE\")\n\n if sys.version_info < (3, 7):\n # https://bugs.python.org/issue30520\n import pickle\n\n def __reduce__(self):\n if isinstance(self, 
logging.RootLogger):\n return logging.getLogger, ()\n\n if logging.getLogger(self.name) is not self:\n raise pickle.PicklingError(\"logger cannot be pickled\")\n return logging.getLogger, (self.name,)\n\n logging.Logger.__reduce__ = __reduce__\n\n if self.log_config is not None:\n if isinstance(self.log_config, dict):\n logging.config.dictConfig(self.log_config)\n else:\n logging.config.fileConfig(self.log_config)\n\n if self.log_level is not None:\n if isinstance(self.log_level, str):\n log_level = LOG_LEVELS[self.log_level]\n else:\n log_level = self.log_level\n logging.getLogger(\"\").setLevel(log_level)\n logging.getLogger(\"uvicorn.error\").setLevel(log_level)\n logging.getLogger(\"uvicorn.access\").setLevel(log_level)\n logging.getLogger(\"uvicorn.asgi\").setLevel(log_level)\n if self.access_log is False:\n logging.getLogger(\"uvicorn.access\").handlers = []\n logging.getLogger(\"uvicorn.access\").propagate = False\n\n def load(self):\n assert not self.loaded\n\n if self.is_ssl:\n self.ssl = create_ssl_context(\n keyfile=self.ssl_keyfile,\n certfile=self.ssl_certfile,\n ssl_version=self.ssl_version,\n cert_reqs=self.ssl_cert_reqs,\n ca_certs=self.ssl_ca_certs,\n ciphers=self.ssl_ciphers,\n )\n else:\n self.ssl = None\n\n encoded_headers = [\n (key.lower().encode(\"latin1\"), value.encode(\"latin1\"))\n for key, value in self.headers\n ]\n self.encoded_headers = (\n encoded_headers\n if b\"server\" in dict(encoded_headers)\n else [(b\"server\", b\"uvicorn\")] + encoded_headers\n ) # type: List[Tuple[bytes, bytes]]\n\n if isinstance(self.http, str):\n self.http_protocol_class = import_from_string(HTTP_PROTOCOLS[self.http])\n else:\n self.http_protocol_class = self.http\n\n if isinstance(self.ws, str):\n self.ws_protocol_class = import_from_string(WS_PROTOCOLS[self.ws])\n else:\n self.ws_protocol_class = self.ws\n\n self.lifespan_class = import_from_string(LIFESPAN[self.lifespan])\n\n try:\n self.loaded_app = import_from_string(self.app)\n except ImportFromStringError as exc:\n logger.error(\"Error loading ASGI app. 
%s\" % exc)\n sys.exit(1)\n\n if self.interface == \"auto\":\n if inspect.isclass(self.loaded_app):\n use_asgi_3 = hasattr(self.loaded_app, \"__await__\")\n elif inspect.isfunction(self.loaded_app):\n use_asgi_3 = asyncio.iscoroutinefunction(self.loaded_app)\n else:\n call = getattr(self.loaded_app, \"__call__\", None)\n use_asgi_3 = asyncio.iscoroutinefunction(call)\n self.interface = \"asgi3\" if use_asgi_3 else \"asgi2\"\n\n if self.interface == \"wsgi\":\n self.loaded_app = WSGIMiddleware(self.loaded_app)\n self.ws_protocol_class = None\n elif self.interface == \"asgi2\":\n self.loaded_app = ASGI2Middleware(self.loaded_app)\n\n if self.debug:\n self.loaded_app = DebugMiddleware(self.loaded_app)\n if logger.level <= TRACE_LOG_LEVEL:\n self.loaded_app = MessageLoggerMiddleware(self.loaded_app)\n if self.proxy_headers:\n self.loaded_app = ProxyHeadersMiddleware(\n self.loaded_app, trusted_hosts=self.forwarded_allow_ips\n )\n\n self.loaded = True\n\n def setup_event_loop(self):\n loop_setup = import_from_string(LOOP_SETUPS[self.loop])\n if loop_setup is not None:\n loop_setup()\n\n def bind_socket(self):\n sock = socket.socket()\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n try:\n sock.bind((self.host, self.port))\n except OSError as exc:\n logger.error(exc)\n sys.exit(1)\n sock.set_inheritable(True)\n\n message = \"Uvicorn running on %s://%s:%d (Press CTRL+C to quit)\"\n color_message = (\n \"Uvicorn running on \"\n + click.style(\"%s://%s:%d\", bold=True)\n + \" (Press CTRL+C to quit)\"\n )\n protocol_name = \"https\" if self.is_ssl else \"http\"\n logger.info(\n message,\n protocol_name,\n self.host,\n self.port,\n extra={\"color_message\": color_message},\n )\n return sock\n\n @property\n def should_reload(self):\n return isinstance(self.app, str) and (self.debug or self.reload)\n", "path": "uvicorn/config.py"}], "after_files": [{"content": "import asyncio\nimport inspect\nimport logging\nimport logging.config\nimport os\nimport socket\nimport ssl\nimport sys\nfrom typing import List, Tuple\n\nimport click\n\nfrom uvicorn.importer import ImportFromStringError, import_from_string\nfrom uvicorn.middleware.asgi2 import ASGI2Middleware\nfrom uvicorn.middleware.debug import DebugMiddleware\nfrom uvicorn.middleware.message_logger import MessageLoggerMiddleware\nfrom uvicorn.middleware.proxy_headers import ProxyHeadersMiddleware\nfrom uvicorn.middleware.wsgi import WSGIMiddleware\n\nTRACE_LOG_LEVEL = 5\n\nLOG_LEVELS = {\n \"critical\": logging.CRITICAL,\n \"error\": logging.ERROR,\n \"warning\": logging.WARNING,\n \"info\": logging.INFO,\n \"debug\": logging.DEBUG,\n \"trace\": TRACE_LOG_LEVEL,\n}\nHTTP_PROTOCOLS = {\n \"auto\": \"uvicorn.protocols.http.auto:AutoHTTPProtocol\",\n \"h11\": \"uvicorn.protocols.http.h11_impl:H11Protocol\",\n \"httptools\": \"uvicorn.protocols.http.httptools_impl:HttpToolsProtocol\",\n}\nWS_PROTOCOLS = {\n \"auto\": \"uvicorn.protocols.websockets.auto:AutoWebSocketsProtocol\",\n \"none\": None,\n \"websockets\": \"uvicorn.protocols.websockets.websockets_impl:WebSocketProtocol\",\n \"wsproto\": \"uvicorn.protocols.websockets.wsproto_impl:WSProtocol\",\n}\nLIFESPAN = {\n \"auto\": \"uvicorn.lifespan.on:LifespanOn\",\n \"on\": \"uvicorn.lifespan.on:LifespanOn\",\n \"off\": \"uvicorn.lifespan.off:LifespanOff\",\n}\nLOOP_SETUPS = {\n \"none\": None,\n \"auto\": \"uvicorn.loops.auto:auto_loop_setup\",\n \"asyncio\": \"uvicorn.loops.asyncio:asyncio_setup\",\n \"uvloop\": \"uvicorn.loops.uvloop:uvloop_setup\",\n \"iocp\": 
\"uvicorn.loops.iocp:iocp_setup\",\n}\nINTERFACES = [\"auto\", \"asgi3\", \"asgi2\", \"wsgi\"]\n\n\n# Fallback to 'ssl.PROTOCOL_SSLv23' in order to support Python < 3.5.3.\nSSL_PROTOCOL_VERSION = getattr(ssl, \"PROTOCOL_TLS\", ssl.PROTOCOL_SSLv23)\n\n\nLOGGING_CONFIG = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"default\": {\n \"()\": \"uvicorn.logging.DefaultFormatter\",\n \"fmt\": \"%(levelprefix)s %(message)s\",\n },\n \"access\": {\n \"()\": \"uvicorn.logging.AccessFormatter\",\n \"fmt\": '%(levelprefix)s %(client_addr)s - \"%(request_line)s\" %(status_code)s',\n },\n },\n \"handlers\": {\n \"default\": {\n \"formatter\": \"default\",\n \"class\": \"logging.StreamHandler\",\n \"stream\": \"ext://sys.stderr\",\n },\n \"access\": {\n \"formatter\": \"access\",\n \"class\": \"logging.StreamHandler\",\n \"stream\": \"ext://sys.stdout\",\n },\n },\n \"loggers\": {\n \"\": {\"handlers\": [\"default\"], \"level\": \"INFO\"},\n \"uvicorn.error\": {\"level\": \"INFO\"},\n \"uvicorn.access\": {\"handlers\": [\"access\"], \"level\": \"INFO\", \"propagate\": False},\n },\n}\n\nlogger = logging.getLogger(\"uvicorn.error\")\n\n\ndef create_ssl_context(certfile, keyfile, ssl_version, cert_reqs, ca_certs, ciphers):\n ctx = ssl.SSLContext(ssl_version)\n ctx.load_cert_chain(certfile, keyfile)\n ctx.verify_mode = cert_reqs\n if ca_certs:\n ctx.load_verify_locations(ca_certs)\n if ciphers:\n ctx.set_ciphers(ciphers)\n return ctx\n\n\nclass Config:\n def __init__(\n self,\n app,\n host=\"127.0.0.1\",\n port=8000,\n uds=None,\n fd=None,\n loop=\"auto\",\n http=\"auto\",\n ws=\"auto\",\n lifespan=\"auto\",\n env_file=None,\n log_config=LOGGING_CONFIG,\n log_level=None,\n access_log=True,\n interface=\"auto\",\n debug=False,\n reload=False,\n reload_dirs=None,\n workers=None,\n proxy_headers=True,\n forwarded_allow_ips=None,\n root_path=\"\",\n limit_concurrency=None,\n limit_max_requests=None,\n timeout_keep_alive=5,\n timeout_notify=30,\n callback_notify=None,\n ssl_keyfile=None,\n ssl_certfile=None,\n ssl_version=SSL_PROTOCOL_VERSION,\n ssl_cert_reqs=ssl.CERT_NONE,\n ssl_ca_certs=None,\n ssl_ciphers=\"TLSv1\",\n headers=None,\n ):\n self.app = app\n self.host = host\n self.port = port\n self.uds = uds\n self.fd = fd\n self.loop = loop\n self.http = http\n self.ws = ws\n self.lifespan = lifespan\n self.log_config = log_config\n self.log_level = log_level\n self.access_log = access_log\n self.interface = interface\n self.debug = debug\n self.reload = reload\n self.workers = workers or 1\n self.proxy_headers = proxy_headers\n self.root_path = root_path\n self.limit_concurrency = limit_concurrency\n self.limit_max_requests = limit_max_requests\n self.timeout_keep_alive = timeout_keep_alive\n self.timeout_notify = timeout_notify\n self.callback_notify = callback_notify\n self.ssl_keyfile = ssl_keyfile\n self.ssl_certfile = ssl_certfile\n self.ssl_version = ssl_version\n self.ssl_cert_reqs = ssl_cert_reqs\n self.ssl_ca_certs = ssl_ca_certs\n self.ssl_ciphers = ssl_ciphers\n self.headers = headers if headers else [] # type: List[str]\n self.encoded_headers = None # type: List[Tuple[bytes, bytes]]\n\n self.loaded = False\n self.configure_logging()\n\n if reload_dirs is None:\n self.reload_dirs = [os.getcwd()]\n else:\n self.reload_dirs = reload_dirs\n\n if env_file is not None:\n from dotenv import load_dotenv\n\n logger.info(\"Loading environment from '%s'\", env_file)\n load_dotenv(dotenv_path=env_file)\n\n if workers is None and \"WEB_CONCURRENCY\" in os.environ:\n 
self.workers = int(os.environ[\"WEB_CONCURRENCY\"])\n\n if forwarded_allow_ips is None:\n self.forwarded_allow_ips = os.environ.get(\n \"FORWARDED_ALLOW_IPS\", \"127.0.0.1\"\n )\n else:\n self.forwarded_allow_ips = forwarded_allow_ips\n\n @property\n def is_ssl(self) -> bool:\n return bool(self.ssl_keyfile or self.ssl_certfile)\n\n def configure_logging(self):\n logging.addLevelName(TRACE_LOG_LEVEL, \"TRACE\")\n\n if sys.version_info < (3, 7):\n # https://bugs.python.org/issue30520\n import pickle\n\n def __reduce__(self):\n if isinstance(self, logging.RootLogger):\n return logging.getLogger, ()\n\n if logging.getLogger(self.name) is not self:\n raise pickle.PicklingError(\"logger cannot be pickled\")\n return logging.getLogger, (self.name,)\n\n logging.Logger.__reduce__ = __reduce__\n\n if self.log_config is not None:\n if isinstance(self.log_config, dict):\n logging.config.dictConfig(self.log_config)\n else:\n logging.config.fileConfig(self.log_config)\n\n if self.log_level is not None:\n if isinstance(self.log_level, str):\n log_level = LOG_LEVELS[self.log_level]\n else:\n log_level = self.log_level\n logging.getLogger(\"\").setLevel(log_level)\n logging.getLogger(\"uvicorn.error\").setLevel(log_level)\n logging.getLogger(\"uvicorn.access\").setLevel(log_level)\n logging.getLogger(\"uvicorn.asgi\").setLevel(log_level)\n if self.access_log is False:\n logging.getLogger(\"uvicorn.access\").handlers = []\n logging.getLogger(\"uvicorn.access\").propagate = False\n\n def load(self):\n assert not self.loaded\n\n if self.is_ssl:\n self.ssl = create_ssl_context(\n keyfile=self.ssl_keyfile,\n certfile=self.ssl_certfile,\n ssl_version=self.ssl_version,\n cert_reqs=self.ssl_cert_reqs,\n ca_certs=self.ssl_ca_certs,\n ciphers=self.ssl_ciphers,\n )\n else:\n self.ssl = None\n\n encoded_headers = [\n (key.lower().encode(\"latin1\"), value.encode(\"latin1\"))\n for key, value in self.headers\n ]\n self.encoded_headers = (\n encoded_headers\n if b\"server\" in dict(encoded_headers)\n else [(b\"server\", b\"uvicorn\")] + encoded_headers\n ) # type: List[Tuple[bytes, bytes]]\n\n if isinstance(self.http, str):\n self.http_protocol_class = import_from_string(HTTP_PROTOCOLS[self.http])\n else:\n self.http_protocol_class = self.http\n\n if isinstance(self.ws, str):\n self.ws_protocol_class = import_from_string(WS_PROTOCOLS[self.ws])\n else:\n self.ws_protocol_class = self.ws\n\n self.lifespan_class = import_from_string(LIFESPAN[self.lifespan])\n\n try:\n self.loaded_app = import_from_string(self.app)\n except ImportFromStringError as exc:\n logger.error(\"Error loading ASGI app. 
%s\" % exc)\n sys.exit(1)\n\n if self.interface == \"auto\":\n if inspect.isclass(self.loaded_app):\n use_asgi_3 = hasattr(self.loaded_app, \"__await__\")\n elif inspect.isfunction(self.loaded_app):\n use_asgi_3 = asyncio.iscoroutinefunction(self.loaded_app)\n else:\n call = getattr(self.loaded_app, \"__call__\", None)\n use_asgi_3 = asyncio.iscoroutinefunction(call)\n self.interface = \"asgi3\" if use_asgi_3 else \"asgi2\"\n\n if self.interface == \"wsgi\":\n self.loaded_app = WSGIMiddleware(self.loaded_app)\n self.ws_protocol_class = None\n elif self.interface == \"asgi2\":\n self.loaded_app = ASGI2Middleware(self.loaded_app)\n\n if self.debug:\n self.loaded_app = DebugMiddleware(self.loaded_app)\n if logger.level <= TRACE_LOG_LEVEL:\n self.loaded_app = MessageLoggerMiddleware(self.loaded_app)\n if self.proxy_headers:\n self.loaded_app = ProxyHeadersMiddleware(\n self.loaded_app, trusted_hosts=self.forwarded_allow_ips\n )\n\n self.loaded = True\n\n def setup_event_loop(self):\n loop_setup = import_from_string(LOOP_SETUPS[self.loop])\n if loop_setup is not None:\n loop_setup()\n\n def bind_socket(self):\n sock = socket.socket()\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n try:\n sock.bind((self.host, self.port))\n except OSError as exc:\n logger.error(exc)\n sys.exit(1)\n sock.set_inheritable(True)\n\n message = \"Uvicorn running on %s://%s:%d (Press CTRL+C to quit)\"\n color_message = (\n \"Uvicorn running on \"\n + click.style(\"%s://%s:%d\", bold=True)\n + \" (Press CTRL+C to quit)\"\n )\n protocol_name = \"https\" if self.is_ssl else \"http\"\n logger.info(\n message,\n protocol_name,\n self.host,\n self.port,\n extra={\"color_message\": color_message},\n )\n return sock\n\n @property\n def should_reload(self):\n return isinstance(self.app, str) and (self.debug or self.reload)\n", "path": "uvicorn/config.py"}]} | 3,908 | 277 |
gh_patches_debug_149 | rasdani/github-patches | git_diff | apache__tvm-6399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`import tvm` now requires pytest
With the merge of #6331, `import tvm` now requires pytest. I created this issue just to check whether this is something intentional or something that we want to fix.
The chain from `import tvm` to `import pytest` happens due to the `from . import testing` in `python/tvm/__init__.py`. Nothing is actually done with that import.
https://github.com/apache/incubator-tvm/blob/a4ebb16ed76bfea4ce4eed7be7ea73d4a01027e2/python/tvm/__init__.py#L53-L56
Within `python/tvm/testing.py` there is then the `import pytest`. I was thinking that we might want to remove these lines from `__init__.py`, so that we don't load `tvm.testing` and only import it when required. I'm happy to submit a PR removing those lines if there is agreement that this makes sense.
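A quick way to confirm the unwanted dependency (a rough check, assuming a TVM build from around this commit is importable) is shown below:
```python
# Rough check: after `import tvm`, pytest should not have been pulled in.
import sys

import tvm  # noqa: F401  -- currently triggers `from . import testing`, which imports pytest

print("pytest loaded:", "pytest" in sys.modules)  # prints True before the proposed change
```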
cc @tqchen
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/tvm/__init__.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 # pylint: disable=redefined-builtin, wildcard-import
18 """TVM: Open Deep Learning Compiler Stack."""
19 import multiprocessing
20 import sys
21 import traceback
22
23 # top-level alias
24 # tvm._ffi
25 from ._ffi.base import TVMError, __version__
26 from ._ffi.runtime_ctypes import DataTypeCode, DataType
27 from ._ffi import register_object, register_func, register_extension, get_global_func
28
29 # top-level alias
30 # tvm.runtime
31 from .runtime.object import Object
32 from .runtime.ndarray import context, cpu, gpu, opencl, cl, vulkan, metal, mtl
33 from .runtime.ndarray import vpi, rocm, ext_dev, micro_dev, hexagon
34 from .runtime import ndarray as nd
35
36 # tvm.error
37 from . import error
38
39 # tvm.ir
40 from .ir import IRModule
41 from .ir import transform
42 from .ir import container
43 from . import ir
44
45 # tvm.tir
46 from . import tir
47
48 # tvm.target
49 from . import target
50
51 # tvm.te
52 from . import te
53
54 # tvm.testing
55 from . import testing
56
57 # tvm.driver
58 from .driver import build, lower
59
60 # tvm.parser
61 from . import parser
62
63 # tvm tir hybrid script
64 from . import hybrid
65
66 # others
67 from . import arith
68
69 # support infra
70 from . import support
71
72 # Contrib initializers
73 from .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel
74
75
76 def tvm_wrap_excepthook(exception_hook):
77 """Wrap given excepthook with TVM additional work."""
78
79 def wrapper(exctype, value, trbk):
80 """Clean subprocesses when TVM is interrupted."""
81 exception_hook(exctype, value, trbk)
82 if hasattr(multiprocessing, 'active_children'):
83 # pylint: disable=not-callable
84 for p in multiprocessing.active_children():
85 p.terminate()
86
87 return wrapper
88
89
90 sys.excepthook = tvm_wrap_excepthook(sys.excepthook)
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/tvm/__init__.py b/python/tvm/__init__.py
--- a/python/tvm/__init__.py
+++ b/python/tvm/__init__.py
@@ -51,9 +51,6 @@
# tvm.te
from . import te
-# tvm.testing
-from . import testing
-
# tvm.driver
from .driver import build, lower
| {"golden_diff": "diff --git a/python/tvm/__init__.py b/python/tvm/__init__.py\n--- a/python/tvm/__init__.py\n+++ b/python/tvm/__init__.py\n@@ -51,9 +51,6 @@\n # tvm.te\n from . import te\n \n-# tvm.testing\n-from . import testing\n-\n # tvm.driver\n from .driver import build, lower\n", "issue": "`import tvm` now requires pytest\nWith the merge of #6331, `import tvm` now requires pytest. I created this issue just to check whether this is something intentional or something that we want to fix.\r\n\r\nThe chain from `import tvm` to `import pytest` happens due to the `from .import testing` on `python/tvm/__init__.py`. There is nothing actually done with that import.\r\n\r\nhttps://github.com/apache/incubator-tvm/blob/a4ebb16ed76bfea4ce4eed7be7ea73d4a01027e2/python/tvm/__init__.py#L53-L56\r\n\r\nWithin `python/tvm/testing.py` then there is the `import pytest`. I was thinking that we might want to remove these lines from `__init__.py`, so that we don't load `tvm.testing` and will only import it when required. I'm happy to submit a PR removing those lines, in case there is an understanding that it makes sense.\r\n\r\ncc @tqchen \n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=redefined-builtin, wildcard-import\n\"\"\"TVM: Open Deep Learning Compiler Stack.\"\"\"\nimport multiprocessing\nimport sys\nimport traceback\n\n# top-level alias\n# tvm._ffi\nfrom ._ffi.base import TVMError, __version__\nfrom ._ffi.runtime_ctypes import DataTypeCode, DataType\nfrom ._ffi import register_object, register_func, register_extension, get_global_func\n\n# top-level alias\n# tvm.runtime\nfrom .runtime.object import Object\nfrom .runtime.ndarray import context, cpu, gpu, opencl, cl, vulkan, metal, mtl\nfrom .runtime.ndarray import vpi, rocm, ext_dev, micro_dev, hexagon\nfrom .runtime import ndarray as nd\n\n# tvm.error\nfrom . import error\n\n# tvm.ir\nfrom .ir import IRModule\nfrom .ir import transform\nfrom .ir import container\nfrom . import ir\n\n# tvm.tir\nfrom . import tir\n\n# tvm.target\nfrom . import target\n\n# tvm.te\nfrom . import te\n\n# tvm.testing\nfrom . import testing\n\n# tvm.driver\nfrom .driver import build, lower\n\n# tvm.parser\nfrom . import parser\n\n# tvm tir hybrid script\nfrom . import hybrid\n\n# others\nfrom . import arith\n\n# support infra\nfrom . 
import support\n\n# Contrib initializers\nfrom .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel\n\n\ndef tvm_wrap_excepthook(exception_hook):\n \"\"\"Wrap given excepthook with TVM additional work.\"\"\"\n\n def wrapper(exctype, value, trbk):\n \"\"\"Clean subprocesses when TVM is interrupted.\"\"\"\n exception_hook(exctype, value, trbk)\n if hasattr(multiprocessing, 'active_children'):\n # pylint: disable=not-callable\n for p in multiprocessing.active_children():\n p.terminate()\n\n return wrapper\n\n\nsys.excepthook = tvm_wrap_excepthook(sys.excepthook)\n", "path": "python/tvm/__init__.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=redefined-builtin, wildcard-import\n\"\"\"TVM: Open Deep Learning Compiler Stack.\"\"\"\nimport multiprocessing\nimport sys\nimport traceback\n\n# top-level alias\n# tvm._ffi\nfrom ._ffi.base import TVMError, __version__\nfrom ._ffi.runtime_ctypes import DataTypeCode, DataType\nfrom ._ffi import register_object, register_func, register_extension, get_global_func\n\n# top-level alias\n# tvm.runtime\nfrom .runtime.object import Object\nfrom .runtime.ndarray import context, cpu, gpu, opencl, cl, vulkan, metal, mtl\nfrom .runtime.ndarray import vpi, rocm, ext_dev, micro_dev, hexagon\nfrom .runtime import ndarray as nd\n\n# tvm.error\nfrom . import error\n\n# tvm.ir\nfrom .ir import IRModule\nfrom .ir import transform\nfrom .ir import container\nfrom . import ir\n\n# tvm.tir\nfrom . import tir\n\n# tvm.target\nfrom . import target\n\n# tvm.te\nfrom . import te\n\n# tvm.driver\nfrom .driver import build, lower\n\n# tvm.parser\nfrom . import parser\n\n# tvm tir hybrid script\nfrom . import hybrid\n\n# others\nfrom . import arith\n\n# support infra\nfrom . import support\n\n# Contrib initializers\nfrom .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel\n\n\ndef tvm_wrap_excepthook(exception_hook):\n \"\"\"Wrap given excepthook with TVM additional work.\"\"\"\n\n def wrapper(exctype, value, trbk):\n \"\"\"Clean subprocesses when TVM is interrupted.\"\"\"\n exception_hook(exctype, value, trbk)\n if hasattr(multiprocessing, 'active_children'):\n # pylint: disable=not-callable\n for p in multiprocessing.active_children():\n p.terminate()\n\n return wrapper\n\n\nsys.excepthook = tvm_wrap_excepthook(sys.excepthook)\n", "path": "python/tvm/__init__.py"}]} | 1,283 | 87 |
gh_patches_debug_1023 | rasdani/github-patches | git_diff | pyca__cryptography-4037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in HKDF?
I think the computation of [`max_length`](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L70) in `src/cryptography/hazmat/primitives/kdf/hkdf.py` is wrong.
[RFC5869](https://tools.ietf.org/html/rfc5869) states on page 3 that the input `L` of the HKDF-Expand function describes the "length of output keying material in octets (<= 255*HashLen)".
An octet consists of 8 bits.
Currently, `max_length` is computed as:
```
max_length = 255 * (algorithm.digest_size // 8)
```
The problem is that `algorithm.digest_size` returns the size of the digest in bytes (there are 8 bits per byte). Therefore, the division by 8 is wrong, and thus `max_length` is unnecessarily small.
(The same applies to the computation of `salt` ([line 33](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L33)) in the case where `salt` is None.)
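A quick sanity check with SHA-256 (whose digest size is 32 bytes) shows how much smaller the current cap is than what RFC 5869 allows:
```python
from cryptography.hazmat.primitives import hashes

algorithm = hashes.SHA256()
print(algorithm.digest_size)                # 32 -- bytes, not bits
print(255 * (algorithm.digest_size // 8))   # 1020 octets -- current limit, too small
print(255 * algorithm.digest_size)          # 8160 octets -- 255 * HashLen per RFC 5869
```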
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/primitives/kdf/hkdf.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import six
8
9 from cryptography import utils
10 from cryptography.exceptions import (
11 AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons
12 )
13 from cryptography.hazmat.backends.interfaces import HMACBackend
14 from cryptography.hazmat.primitives import constant_time, hmac
15 from cryptography.hazmat.primitives.kdf import KeyDerivationFunction
16
17
18 @utils.register_interface(KeyDerivationFunction)
19 class HKDF(object):
20 def __init__(self, algorithm, length, salt, info, backend):
21 if not isinstance(backend, HMACBackend):
22 raise UnsupportedAlgorithm(
23 "Backend object does not implement HMACBackend.",
24 _Reasons.BACKEND_MISSING_INTERFACE
25 )
26
27 self._algorithm = algorithm
28
29 if not (salt is None or isinstance(salt, bytes)):
30 raise TypeError("salt must be bytes.")
31
32 if salt is None:
33 salt = b"\x00" * self._algorithm.digest_size
34
35 self._salt = salt
36
37 self._backend = backend
38
39 self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)
40
41 def _extract(self, key_material):
42 h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)
43 h.update(key_material)
44 return h.finalize()
45
46 def derive(self, key_material):
47 if not isinstance(key_material, bytes):
48 raise TypeError("key_material must be bytes.")
49
50 return self._hkdf_expand.derive(self._extract(key_material))
51
52 def verify(self, key_material, expected_key):
53 if not constant_time.bytes_eq(self.derive(key_material), expected_key):
54 raise InvalidKey
55
56
57 @utils.register_interface(KeyDerivationFunction)
58 class HKDFExpand(object):
59 def __init__(self, algorithm, length, info, backend):
60 if not isinstance(backend, HMACBackend):
61 raise UnsupportedAlgorithm(
62 "Backend object does not implement HMACBackend.",
63 _Reasons.BACKEND_MISSING_INTERFACE
64 )
65
66 self._algorithm = algorithm
67
68 self._backend = backend
69
70 max_length = 255 * (algorithm.digest_size // 8)
71
72 if length > max_length:
73 raise ValueError(
74 "Can not derive keys larger than {0} octets.".format(
75 max_length
76 ))
77
78 self._length = length
79
80 if not (info is None or isinstance(info, bytes)):
81 raise TypeError("info must be bytes.")
82
83 if info is None:
84 info = b""
85
86 self._info = info
87
88 self._used = False
89
90 def _expand(self, key_material):
91 output = [b""]
92 counter = 1
93
94 while self._algorithm.digest_size * (len(output) - 1) < self._length:
95 h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)
96 h.update(output[-1])
97 h.update(self._info)
98 h.update(six.int2byte(counter))
99 output.append(h.finalize())
100 counter += 1
101
102 return b"".join(output)[:self._length]
103
104 def derive(self, key_material):
105 if not isinstance(key_material, bytes):
106 raise TypeError("key_material must be bytes.")
107
108 if self._used:
109 raise AlreadyFinalized
110
111 self._used = True
112 return self._expand(key_material)
113
114 def verify(self, key_material, expected_key):
115 if not constant_time.bytes_eq(self.derive(key_material), expected_key):
116 raise InvalidKey
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/primitives/kdf/hkdf.py b/src/cryptography/hazmat/primitives/kdf/hkdf.py
--- a/src/cryptography/hazmat/primitives/kdf/hkdf.py
+++ b/src/cryptography/hazmat/primitives/kdf/hkdf.py
@@ -67,7 +67,7 @@
self._backend = backend
- max_length = 255 * (algorithm.digest_size // 8)
+ max_length = 255 * algorithm.digest_size
if length > max_length:
raise ValueError(
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/kdf/hkdf.py b/src/cryptography/hazmat/primitives/kdf/hkdf.py\n--- a/src/cryptography/hazmat/primitives/kdf/hkdf.py\n+++ b/src/cryptography/hazmat/primitives/kdf/hkdf.py\n@@ -67,7 +67,7 @@\n \n self._backend = backend\n \n- max_length = 255 * (algorithm.digest_size // 8)\n+ max_length = 255 * algorithm.digest_size\n \n if length > max_length:\n raise ValueError(\n", "issue": "Bug in HKDF?\nI think the computation of [`max_length`](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L70) in `src/cryptography/hazmat/primitives/kdf/hkdf.py` is wrong.\r\n\r\n[RFC5869](https://tools.ietf.org/html/rfc5869) states on page 3 that the input `L` of the HKDF-Expand function describes the \"length of output keying material in octets (<= 255*HashLen)\".\r\nAn octet consists of 8 bit. \r\n\r\nCurrently, `max_length` is computed as:\r\n\r\n```\r\nmax_length = 255 * (algorithm.digest_size // 8)\r\n```\r\n\r\nThe problem is, that `algorithm.digest_size` returns the size of the digest in bytes. (There are 8 bits per byte). Therefore, the division by 8 is wrong, and thus, `max_length` is unnecessarily small.\r\n\r\n(same applies for the computation of `salt` as well ([line 33](https://github.com/pyca/cryptography/blob/66460d8f62b3f27a009bb61be6ce7675c8451b6e/src/cryptography/hazmat/primitives/kdf/hkdf.py#L33)), in the case where `salt is None`)\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.backends.interfaces import HMACBackend\nfrom cryptography.hazmat.primitives import constant_time, hmac\nfrom cryptography.hazmat.primitives.kdf import KeyDerivationFunction\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDF(object):\n def __init__(self, algorithm, length, salt, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n if not (salt is None or isinstance(salt, bytes)):\n raise TypeError(\"salt must be bytes.\")\n\n if salt is None:\n salt = b\"\\x00\" * self._algorithm.digest_size\n\n self._salt = salt\n\n self._backend = backend\n\n self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)\n\n def _extract(self, key_material):\n h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)\n h.update(key_material)\n return h.finalize()\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n return self._hkdf_expand.derive(self._extract(key_material))\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDFExpand(object):\n def __init__(self, algorithm, length, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n 
)\n\n self._algorithm = algorithm\n\n self._backend = backend\n\n max_length = 255 * (algorithm.digest_size // 8)\n\n if length > max_length:\n raise ValueError(\n \"Can not derive keys larger than {0} octets.\".format(\n max_length\n ))\n\n self._length = length\n\n if not (info is None or isinstance(info, bytes)):\n raise TypeError(\"info must be bytes.\")\n\n if info is None:\n info = b\"\"\n\n self._info = info\n\n self._used = False\n\n def _expand(self, key_material):\n output = [b\"\"]\n counter = 1\n\n while self._algorithm.digest_size * (len(output) - 1) < self._length:\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\n h.update(output[-1])\n h.update(self._info)\n h.update(six.int2byte(counter))\n output.append(h.finalize())\n counter += 1\n\n return b\"\".join(output)[:self._length]\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n if self._used:\n raise AlreadyFinalized\n\n self._used = True\n return self._expand(key_material)\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n", "path": "src/cryptography/hazmat/primitives/kdf/hkdf.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n AlreadyFinalized, InvalidKey, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.backends.interfaces import HMACBackend\nfrom cryptography.hazmat.primitives import constant_time, hmac\nfrom cryptography.hazmat.primitives.kdf import KeyDerivationFunction\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDF(object):\n def __init__(self, algorithm, length, salt, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n if not (salt is None or isinstance(salt, bytes)):\n raise TypeError(\"salt must be bytes.\")\n\n if salt is None:\n salt = b\"\\x00\" * self._algorithm.digest_size\n\n self._salt = salt\n\n self._backend = backend\n\n self._hkdf_expand = HKDFExpand(self._algorithm, length, info, backend)\n\n def _extract(self, key_material):\n h = hmac.HMAC(self._salt, self._algorithm, backend=self._backend)\n h.update(key_material)\n return h.finalize()\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n return self._hkdf_expand.derive(self._extract(key_material))\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n\n\[email protected]_interface(KeyDerivationFunction)\nclass HKDFExpand(object):\n def __init__(self, algorithm, length, info, backend):\n if not isinstance(backend, HMACBackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement HMACBackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n self._algorithm = algorithm\n\n self._backend = backend\n\n max_length = 255 * algorithm.digest_size\n\n if length > max_length:\n raise ValueError(\n \"Can not derive keys larger than {0} octets.\".format(\n max_length\n ))\n\n 
self._length = length\n\n if not (info is None or isinstance(info, bytes)):\n raise TypeError(\"info must be bytes.\")\n\n if info is None:\n info = b\"\"\n\n self._info = info\n\n self._used = False\n\n def _expand(self, key_material):\n output = [b\"\"]\n counter = 1\n\n while self._algorithm.digest_size * (len(output) - 1) < self._length:\n h = hmac.HMAC(key_material, self._algorithm, backend=self._backend)\n h.update(output[-1])\n h.update(self._info)\n h.update(six.int2byte(counter))\n output.append(h.finalize())\n counter += 1\n\n return b\"\".join(output)[:self._length]\n\n def derive(self, key_material):\n if not isinstance(key_material, bytes):\n raise TypeError(\"key_material must be bytes.\")\n\n if self._used:\n raise AlreadyFinalized\n\n self._used = True\n return self._expand(key_material)\n\n def verify(self, key_material, expected_key):\n if not constant_time.bytes_eq(self.derive(key_material), expected_key):\n raise InvalidKey\n", "path": "src/cryptography/hazmat/primitives/kdf/hkdf.py"}]} | 1,658 | 131 |
gh_patches_debug_17802 | rasdani/github-patches | git_diff | python-discord__bot-919 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use appropriate log level for exceptions from event listeners
From @SebastiaanZ:
> Finally, `discord.py` currently "hides" errors/tracebacks that happen in event listeners as we only have a custom error handler for commands. This isn't too bad locally, since `d.py` **prints** those exceptions to stderr, but it obviously means they'll never show up in Sentry, as they are **not actually logged** with the appropriate level.
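A minimal sketch of the kind of listener-level hook that would surface these errors (assuming a `commands.Bot` subclass and a module-level logger; the names here are illustrative):
```python
import logging

from discord.ext import commands

log = logging.getLogger("bot")


class Bot(commands.Bot):
    async def on_error(self, event: str, *args, **kwargs) -> None:
        # discord.py dispatches unhandled listener exceptions here; logging at
        # exception level records the traceback so a logging-based Sentry
        # integration can pick it up instead of it only going to stderr.
        log.exception(f"Unhandled exception in {event}.")
```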
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/bot.py`
Content:
```
1 import asyncio
2 import logging
3 import socket
4 import warnings
5 from typing import Optional
6
7 import aiohttp
8 import discord
9 from discord.ext import commands
10
11 from bot import DEBUG_MODE, api, constants
12 from bot.async_stats import AsyncStatsClient
13
14 log = logging.getLogger('bot')
15
16
17 class Bot(commands.Bot):
18 """A subclass of `discord.ext.commands.Bot` with an aiohttp session and an API client."""
19
20 def __init__(self, *args, **kwargs):
21 if "connector" in kwargs:
22 warnings.warn(
23 "If login() is called (or the bot is started), the connector will be overwritten "
24 "with an internal one"
25 )
26
27 super().__init__(*args, **kwargs)
28
29 self.http_session: Optional[aiohttp.ClientSession] = None
30 self.api_client = api.APIClient(loop=self.loop)
31
32 self._connector = None
33 self._resolver = None
34 self._guild_available = asyncio.Event()
35
36 statsd_url = constants.Stats.statsd_host
37
38 if DEBUG_MODE:
39 # Since statsd is UDP, there are no errors for sending to a down port.
40 # For this reason, setting the statsd host to 127.0.0.1 for development
41 # will effectively disable stats.
42 statsd_url = "127.0.0.1"
43
44 self.stats = AsyncStatsClient(self.loop, statsd_url, 8125, prefix="bot")
45
46 def add_cog(self, cog: commands.Cog) -> None:
47 """Adds a "cog" to the bot and logs the operation."""
48 super().add_cog(cog)
49 log.info(f"Cog loaded: {cog.qualified_name}")
50
51 def clear(self) -> None:
52 """
53 Clears the internal state of the bot and recreates the connector and sessions.
54
55 Will cause a DeprecationWarning if called outside a coroutine.
56 """
57 # Because discord.py recreates the HTTPClient session, may as well follow suit and recreate
58 # our own stuff here too.
59 self._recreate()
60 super().clear()
61
62 async def close(self) -> None:
63 """Close the Discord connection and the aiohttp session, connector, statsd client, and resolver."""
64 await super().close()
65
66 await self.api_client.close()
67
68 if self.http_session:
69 await self.http_session.close()
70
71 if self._connector:
72 await self._connector.close()
73
74 if self._resolver:
75 await self._resolver.close()
76
77 if self.stats._transport:
78 await self.stats._transport.close()
79
80 async def login(self, *args, **kwargs) -> None:
81 """Re-create the connector and set up sessions before logging into Discord."""
82 self._recreate()
83 await self.stats.create_socket()
84 await super().login(*args, **kwargs)
85
86 def _recreate(self) -> None:
87 """Re-create the connector, aiohttp session, and the APIClient."""
88 # Use asyncio for DNS resolution instead of threads so threads aren't spammed.
89 # Doesn't seem to have any state with regards to being closed, so no need to worry?
90 self._resolver = aiohttp.AsyncResolver()
91
92 # Its __del__ does send a warning but it doesn't always show up for some reason.
93 if self._connector and not self._connector._closed:
94 log.warning(
95 "The previous connector was not closed; it will remain open and be overwritten"
96 )
97
98 # Use AF_INET as its socket family to prevent HTTPS related problems both locally
99 # and in production.
100 self._connector = aiohttp.TCPConnector(
101 resolver=self._resolver,
102 family=socket.AF_INET,
103 )
104
105 # Client.login() will call HTTPClient.static_login() which will create a session using
106 # this connector attribute.
107 self.http.connector = self._connector
108
109 # Its __del__ does send a warning but it doesn't always show up for some reason.
110 if self.http_session and not self.http_session.closed:
111 log.warning(
112 "The previous session was not closed; it will remain open and be overwritten"
113 )
114
115 self.http_session = aiohttp.ClientSession(connector=self._connector)
116 self.api_client.recreate(force=True, connector=self._connector)
117
118 async def on_guild_available(self, guild: discord.Guild) -> None:
119 """
120 Set the internal guild available event when constants.Guild.id becomes available.
121
122 If the cache appears to still be empty (no members, no channels, or no roles), the event
123 will not be set.
124 """
125 if guild.id != constants.Guild.id:
126 return
127
128 if not guild.roles or not guild.members or not guild.channels:
129 msg = "Guild available event was dispatched but the cache appears to still be empty!"
130 log.warning(msg)
131
132 try:
133 webhook = await self.fetch_webhook(constants.Webhooks.dev_log)
134 except discord.HTTPException as e:
135 log.error(f"Failed to fetch webhook to send empty cache warning: status {e.status}")
136 else:
137 await webhook.send(f"<@&{constants.Roles.admin}> {msg}")
138
139 return
140
141 self._guild_available.set()
142
143 async def on_guild_unavailable(self, guild: discord.Guild) -> None:
144 """Clear the internal guild available event when constants.Guild.id becomes unavailable."""
145 if guild.id != constants.Guild.id:
146 return
147
148 self._guild_available.clear()
149
150 async def wait_until_guild_available(self) -> None:
151 """
152 Wait until the constants.Guild.id guild is available (and the cache is ready).
153
154 The on_ready event is inadequate because it only waits 2 seconds for a GUILD_CREATE
155 gateway event before giving up and thus not populating the cache for unavailable guilds.
156 """
157 await self._guild_available.wait()
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/bot.py b/bot/bot.py
--- a/bot/bot.py
+++ b/bot/bot.py
@@ -7,6 +7,7 @@
import aiohttp
import discord
from discord.ext import commands
+from sentry_sdk import push_scope
from bot import DEBUG_MODE, api, constants
from bot.async_stats import AsyncStatsClient
@@ -155,3 +156,14 @@
gateway event before giving up and thus not populating the cache for unavailable guilds.
"""
await self._guild_available.wait()
+
+ async def on_error(self, event: str, *args, **kwargs) -> None:
+ """Log errors raised in event listeners rather than printing them to stderr."""
+ self.stats.incr(f"errors.event.{event}")
+
+ with push_scope() as scope:
+ scope.set_tag("event", event)
+ scope.set_extra("args", args)
+ scope.set_extra("kwargs", kwargs)
+
+ log.exception(f"Unhandled exception in {event}.")
| {"golden_diff": "diff --git a/bot/bot.py b/bot/bot.py\n--- a/bot/bot.py\n+++ b/bot/bot.py\n@@ -7,6 +7,7 @@\n import aiohttp\n import discord\n from discord.ext import commands\n+from sentry_sdk import push_scope\n \n from bot import DEBUG_MODE, api, constants\n from bot.async_stats import AsyncStatsClient\n@@ -155,3 +156,14 @@\n gateway event before giving up and thus not populating the cache for unavailable guilds.\n \"\"\"\n await self._guild_available.wait()\n+\n+ async def on_error(self, event: str, *args, **kwargs) -> None:\n+ \"\"\"Log errors raised in event listeners rather than printing them to stderr.\"\"\"\n+ self.stats.incr(f\"errors.event.{event}\")\n+\n+ with push_scope() as scope:\n+ scope.set_tag(\"event\", event)\n+ scope.set_extra(\"args\", args)\n+ scope.set_extra(\"kwargs\", kwargs)\n+\n+ log.exception(f\"Unhandled exception in {event}.\")\n", "issue": "Use appropriate log level for exceptions from event listeners\nFrom @SebastiaanZ:\r\n\r\n> Finally, `discord.py` currently \"hides\" errors/tracebacks that happen in event listeners as we only have a custom error handler for commands. This isn't too bad locally, since `d.py` **prints** those exceptions to stderr, but it obviously means they'll never show up in Sentry, as they are **not actually logged** with the appropriate level.\n", "before_files": [{"content": "import asyncio\nimport logging\nimport socket\nimport warnings\nfrom typing import Optional\n\nimport aiohttp\nimport discord\nfrom discord.ext import commands\n\nfrom bot import DEBUG_MODE, api, constants\nfrom bot.async_stats import AsyncStatsClient\n\nlog = logging.getLogger('bot')\n\n\nclass Bot(commands.Bot):\n \"\"\"A subclass of `discord.ext.commands.Bot` with an aiohttp session and an API client.\"\"\"\n\n def __init__(self, *args, **kwargs):\n if \"connector\" in kwargs:\n warnings.warn(\n \"If login() is called (or the bot is started), the connector will be overwritten \"\n \"with an internal one\"\n )\n\n super().__init__(*args, **kwargs)\n\n self.http_session: Optional[aiohttp.ClientSession] = None\n self.api_client = api.APIClient(loop=self.loop)\n\n self._connector = None\n self._resolver = None\n self._guild_available = asyncio.Event()\n\n statsd_url = constants.Stats.statsd_host\n\n if DEBUG_MODE:\n # Since statsd is UDP, there are no errors for sending to a down port.\n # For this reason, setting the statsd host to 127.0.0.1 for development\n # will effectively disable stats.\n statsd_url = \"127.0.0.1\"\n\n self.stats = AsyncStatsClient(self.loop, statsd_url, 8125, prefix=\"bot\")\n\n def add_cog(self, cog: commands.Cog) -> None:\n \"\"\"Adds a \"cog\" to the bot and logs the operation.\"\"\"\n super().add_cog(cog)\n log.info(f\"Cog loaded: {cog.qualified_name}\")\n\n def clear(self) -> None:\n \"\"\"\n Clears the internal state of the bot and recreates the connector and sessions.\n\n Will cause a DeprecationWarning if called outside a coroutine.\n \"\"\"\n # Because discord.py recreates the HTTPClient session, may as well follow suit and recreate\n # our own stuff here too.\n self._recreate()\n super().clear()\n\n async def close(self) -> None:\n \"\"\"Close the Discord connection and the aiohttp session, connector, statsd client, and resolver.\"\"\"\n await super().close()\n\n await self.api_client.close()\n\n if self.http_session:\n await self.http_session.close()\n\n if self._connector:\n await self._connector.close()\n\n if self._resolver:\n await self._resolver.close()\n\n if self.stats._transport:\n await 
self.stats._transport.close()\n\n async def login(self, *args, **kwargs) -> None:\n \"\"\"Re-create the connector and set up sessions before logging into Discord.\"\"\"\n self._recreate()\n await self.stats.create_socket()\n await super().login(*args, **kwargs)\n\n def _recreate(self) -> None:\n \"\"\"Re-create the connector, aiohttp session, and the APIClient.\"\"\"\n # Use asyncio for DNS resolution instead of threads so threads aren't spammed.\n # Doesn't seem to have any state with regards to being closed, so no need to worry?\n self._resolver = aiohttp.AsyncResolver()\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self._connector and not self._connector._closed:\n log.warning(\n \"The previous connector was not closed; it will remain open and be overwritten\"\n )\n\n # Use AF_INET as its socket family to prevent HTTPS related problems both locally\n # and in production.\n self._connector = aiohttp.TCPConnector(\n resolver=self._resolver,\n family=socket.AF_INET,\n )\n\n # Client.login() will call HTTPClient.static_login() which will create a session using\n # this connector attribute.\n self.http.connector = self._connector\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self.http_session and not self.http_session.closed:\n log.warning(\n \"The previous session was not closed; it will remain open and be overwritten\"\n )\n\n self.http_session = aiohttp.ClientSession(connector=self._connector)\n self.api_client.recreate(force=True, connector=self._connector)\n\n async def on_guild_available(self, guild: discord.Guild) -> None:\n \"\"\"\n Set the internal guild available event when constants.Guild.id becomes available.\n\n If the cache appears to still be empty (no members, no channels, or no roles), the event\n will not be set.\n \"\"\"\n if guild.id != constants.Guild.id:\n return\n\n if not guild.roles or not guild.members or not guild.channels:\n msg = \"Guild available event was dispatched but the cache appears to still be empty!\"\n log.warning(msg)\n\n try:\n webhook = await self.fetch_webhook(constants.Webhooks.dev_log)\n except discord.HTTPException as e:\n log.error(f\"Failed to fetch webhook to send empty cache warning: status {e.status}\")\n else:\n await webhook.send(f\"<@&{constants.Roles.admin}> {msg}\")\n\n return\n\n self._guild_available.set()\n\n async def on_guild_unavailable(self, guild: discord.Guild) -> None:\n \"\"\"Clear the internal guild available event when constants.Guild.id becomes unavailable.\"\"\"\n if guild.id != constants.Guild.id:\n return\n\n self._guild_available.clear()\n\n async def wait_until_guild_available(self) -> None:\n \"\"\"\n Wait until the constants.Guild.id guild is available (and the cache is ready).\n\n The on_ready event is inadequate because it only waits 2 seconds for a GUILD_CREATE\n gateway event before giving up and thus not populating the cache for unavailable guilds.\n \"\"\"\n await self._guild_available.wait()\n", "path": "bot/bot.py"}], "after_files": [{"content": "import asyncio\nimport logging\nimport socket\nimport warnings\nfrom typing import Optional\n\nimport aiohttp\nimport discord\nfrom discord.ext import commands\nfrom sentry_sdk import push_scope\n\nfrom bot import DEBUG_MODE, api, constants\nfrom bot.async_stats import AsyncStatsClient\n\nlog = logging.getLogger('bot')\n\n\nclass Bot(commands.Bot):\n \"\"\"A subclass of `discord.ext.commands.Bot` with an aiohttp session and an API client.\"\"\"\n\n def __init__(self, *args, 
**kwargs):\n if \"connector\" in kwargs:\n warnings.warn(\n \"If login() is called (or the bot is started), the connector will be overwritten \"\n \"with an internal one\"\n )\n\n super().__init__(*args, **kwargs)\n\n self.http_session: Optional[aiohttp.ClientSession] = None\n self.api_client = api.APIClient(loop=self.loop)\n\n self._connector = None\n self._resolver = None\n self._guild_available = asyncio.Event()\n\n statsd_url = constants.Stats.statsd_host\n\n if DEBUG_MODE:\n # Since statsd is UDP, there are no errors for sending to a down port.\n # For this reason, setting the statsd host to 127.0.0.1 for development\n # will effectively disable stats.\n statsd_url = \"127.0.0.1\"\n\n self.stats = AsyncStatsClient(self.loop, statsd_url, 8125, prefix=\"bot\")\n\n def add_cog(self, cog: commands.Cog) -> None:\n \"\"\"Adds a \"cog\" to the bot and logs the operation.\"\"\"\n super().add_cog(cog)\n log.info(f\"Cog loaded: {cog.qualified_name}\")\n\n def clear(self) -> None:\n \"\"\"\n Clears the internal state of the bot and recreates the connector and sessions.\n\n Will cause a DeprecationWarning if called outside a coroutine.\n \"\"\"\n # Because discord.py recreates the HTTPClient session, may as well follow suit and recreate\n # our own stuff here too.\n self._recreate()\n super().clear()\n\n async def close(self) -> None:\n \"\"\"Close the Discord connection and the aiohttp session, connector, statsd client, and resolver.\"\"\"\n await super().close()\n\n await self.api_client.close()\n\n if self.http_session:\n await self.http_session.close()\n\n if self._connector:\n await self._connector.close()\n\n if self._resolver:\n await self._resolver.close()\n\n if self.stats._transport:\n await self.stats._transport.close()\n\n async def login(self, *args, **kwargs) -> None:\n \"\"\"Re-create the connector and set up sessions before logging into Discord.\"\"\"\n self._recreate()\n await self.stats.create_socket()\n await super().login(*args, **kwargs)\n\n def _recreate(self) -> None:\n \"\"\"Re-create the connector, aiohttp session, and the APIClient.\"\"\"\n # Use asyncio for DNS resolution instead of threads so threads aren't spammed.\n # Doesn't seem to have any state with regards to being closed, so no need to worry?\n self._resolver = aiohttp.AsyncResolver()\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self._connector and not self._connector._closed:\n log.warning(\n \"The previous connector was not closed; it will remain open and be overwritten\"\n )\n\n # Use AF_INET as its socket family to prevent HTTPS related problems both locally\n # and in production.\n self._connector = aiohttp.TCPConnector(\n resolver=self._resolver,\n family=socket.AF_INET,\n )\n\n # Client.login() will call HTTPClient.static_login() which will create a session using\n # this connector attribute.\n self.http.connector = self._connector\n\n # Its __del__ does send a warning but it doesn't always show up for some reason.\n if self.http_session and not self.http_session.closed:\n log.warning(\n \"The previous session was not closed; it will remain open and be overwritten\"\n )\n\n self.http_session = aiohttp.ClientSession(connector=self._connector)\n self.api_client.recreate(force=True, connector=self._connector)\n\n async def on_guild_available(self, guild: discord.Guild) -> None:\n \"\"\"\n Set the internal guild available event when constants.Guild.id becomes available.\n\n If the cache appears to still be empty (no members, no channels, or no roles), the event\n 
will not be set.\n \"\"\"\n if guild.id != constants.Guild.id:\n return\n\n if not guild.roles or not guild.members or not guild.channels:\n msg = \"Guild available event was dispatched but the cache appears to still be empty!\"\n log.warning(msg)\n\n try:\n webhook = await self.fetch_webhook(constants.Webhooks.dev_log)\n except discord.HTTPException as e:\n log.error(f\"Failed to fetch webhook to send empty cache warning: status {e.status}\")\n else:\n await webhook.send(f\"<@&{constants.Roles.admin}> {msg}\")\n\n return\n\n self._guild_available.set()\n\n async def on_guild_unavailable(self, guild: discord.Guild) -> None:\n \"\"\"Clear the internal guild available event when constants.Guild.id becomes unavailable.\"\"\"\n if guild.id != constants.Guild.id:\n return\n\n self._guild_available.clear()\n\n async def wait_until_guild_available(self) -> None:\n \"\"\"\n Wait until the constants.Guild.id guild is available (and the cache is ready).\n\n The on_ready event is inadequate because it only waits 2 seconds for a GUILD_CREATE\n gateway event before giving up and thus not populating the cache for unavailable guilds.\n \"\"\"\n await self._guild_available.wait()\n\n async def on_error(self, event: str, *args, **kwargs) -> None:\n \"\"\"Log errors raised in event listeners rather than printing them to stderr.\"\"\"\n self.stats.incr(f\"errors.event.{event}\")\n\n with push_scope() as scope:\n scope.set_tag(\"event\", event)\n scope.set_extra(\"args\", args)\n scope.set_extra(\"kwargs\", kwargs)\n\n log.exception(f\"Unhandled exception in {event}.\")\n", "path": "bot/bot.py"}]} | 1,974 | 230 |
gh_patches_debug_2160 | rasdani/github-patches | git_diff | facebookresearch__hydra-1593 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Config composition error with latest version of OmegaConf
# 🐛 Bug
## Description
When using OmegaConf at commit 2dd15f9 (the first commit where this problem occurs), there are multiple Hydra test failures, for instance:
```
pytest "tests/test_basic_launcher.py::TestBasicLauncher::test_sweep_1_job[basic-overrides0]"
(...)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = __INVALID__, value = None
def validate_and_convert(self, value: Any) -> Any:
"""
Validates input and converts to canonical form
:param value: input value
:return: converted value ("100" may be converted to 100 for example)
"""
if value is None:
if self._is_optional():
return None
> raise ValidationError("Non optional field cannot be assigned None")
E hydra.errors.ConfigCompositionException
../omegaconf/omegaconf/nodes.py:55: ConfigCompositionException
```
## Checklist
- [X] I checked on the latest version of Hydra
- [X] I created a minimal repro (See [this](https://stackoverflow.com/help/minimal-reproducible-example) for tips).
## To reproduce
Use master branch of Hydra with OmegaConf's commit 2dd15f9
## Additional context
This might actually be an OmegaConf bug (I'm not sure).
--- END ISSUE ---
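As an illustrative aside (not part of the original issue or the files below), the failure mode can be reproduced outside Hydra with a minimal sketch against a recent OmegaConf: a non-Optional `Dict[str, str]` field rejects `None` values, while `Dict[str, Any]` accepts them. The class names `StrictChoices`/`LooseChoices` and the `env` key are made up for illustration.
```python
from dataclasses import dataclass, field
from typing import Any, Dict

from omegaconf import OmegaConf
from omegaconf.errors import ValidationError


@dataclass
class StrictChoices:
    choices: Dict[str, str] = field(default_factory=dict)


@dataclass
class LooseChoices:
    choices: Dict[str, Any] = field(default_factory=dict)


strict = OmegaConf.structured(StrictChoices)
try:
    strict.choices["env"] = None  # element type is str and not Optional
except ValidationError as err:
    print("strict schema rejected None:", err)

loose = OmegaConf.structured(LooseChoices)
loose.choices["env"] = None  # Any admits None, mirroring the eventual fix
print(OmegaConf.to_yaml(loose))
```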
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/conf/__init__.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, List, Optional
4
5 from omegaconf import MISSING
6
7 from hydra.core.config_store import ConfigStore
8
9
10 @dataclass
11 class HelpConf:
12 app_name: str = MISSING
13 header: str = MISSING
14 footer: str = MISSING
15 template: str = MISSING
16
17
18 @dataclass
19 class HydraHelpConf:
20 hydra_help: str = MISSING
21 template: str = MISSING
22
23
24 @dataclass
25 class RunDir:
26 dir: str = MISSING
27
28
29 @dataclass
30 class SweepDir:
31 dir: str = MISSING
32 subdir: str = MISSING
33
34
35 @dataclass
36 class OverridesConf:
37 # Overrides for the hydra configuration
38 hydra: List[str] = field(default_factory=lambda: [])
39 # Overrides for the task configuration
40 task: List[str] = field(default_factory=lambda: [])
41
42
43 # job runtime information will be populated here
44 @dataclass
45 class JobConf:
46 # Job name, populated automatically unless specified by the user (in config or cli)
47 name: str = MISSING
48
49 # Populated automatically by Hydra.
50 # Concatenation of job overrides that can be used as a part
51 # of the directory name.
52 # This can be configured via hydra.job.config.override_dirname
53 override_dirname: str = MISSING
54
55 # Job ID in underlying scheduling system
56 id: str = MISSING
57
58 # Job number if job is a part of a sweep
59 num: int = MISSING
60
61 # The config name used by the job
62 config_name: Optional[str] = MISSING
63
64 # Environment variables to set remotely
65 env_set: Dict[str, str] = field(default_factory=dict)
66 # Environment variables to copy from the launching machine
67 env_copy: List[str] = field(default_factory=list)
68
69 # Job config
70 @dataclass
71 class JobConfig:
72 @dataclass
73 # configuration for the ${hydra.job.override_dirname} runtime variable
74 class OverrideDirname:
75 kv_sep: str = "="
76 item_sep: str = ","
77 exclude_keys: List[str] = field(default_factory=list)
78
79 override_dirname: OverrideDirname = OverrideDirname()
80
81 config: JobConfig = JobConfig()
82
83
84 @dataclass
85 class ConfigSourceInfo:
86 path: str
87 schema: str
88 provider: str
89
90
91 @dataclass
92 class RuntimeConf:
93 version: str = MISSING
94 cwd: str = MISSING
95 config_sources: List[ConfigSourceInfo] = MISSING
96
97 # Composition choices dictionary
98 choices: Dict[str, str] = field(default_factory=lambda: {})
99
100
101 @dataclass
102 class HydraConf:
103 defaults: List[Any] = field(
104 default_factory=lambda: [
105 {"output": "default"},
106 {"launcher": "basic"},
107 {"sweeper": "basic"},
108 {"help": "default"},
109 {"hydra_help": "default"},
110 {"hydra_logging": "default"},
111 {"job_logging": "default"},
112 {"callbacks": None},
113 # env specific overrides
114 {"env": "default"},
115 ]
116 )
117
118 # Elements to append to the config search path.
119 # Note: This can only be configured in the primary config.
120 searchpath: List[str] = field(default_factory=list)
121
122 # Normal run output configuration
123 run: RunDir = RunDir()
124 # Multi-run output configuration
125 sweep: SweepDir = SweepDir()
126 # Logging configuration for Hydra
127 hydra_logging: Any = MISSING
128 # Logging configuration for the job
129 job_logging: Any = MISSING
130
131 # Sweeper configuration
132 sweeper: Any = MISSING
133 # Launcher configuration
134 launcher: Any = MISSING
135 # Callbacks configuration
136 callbacks: Dict[str, Any] = field(default_factory=dict)
137
138 # Program Help template
139 help: HelpConf = HelpConf()
140 # Hydra's Help template
141 hydra_help: HydraHelpConf = HydraHelpConf()
142
143 # Output directory for produced configuration files and overrides.
144 # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging
145 # and extra context when looking at past runs.
146 # Setting to None will prevent the creation of the output subdir.
147 output_subdir: Optional[str] = ".hydra"
148
149 # Those lists will contain runtime overrides
150 overrides: OverridesConf = OverridesConf()
151
152 job: JobConf = JobConf()
153
154 # populated at runtime
155 runtime: RuntimeConf = RuntimeConf()
156
157 # Can be a boolean, string or a list of strings
158 # If a boolean, setting to true will set the log level for the root logger to debug
159 # If a string, it's interpreted as a the list [string]
160 # If a list, each element is interpreted as a logger to have logging level set to debug.
161 # Typical command lines to manipulate hydra.verbose:
162 # hydra.verbose=true
163 # hydra.verbose=[hydra,__main__]
164 # TODO: good use case for Union support in OmegaConf
165 verbose: Any = False
166
167
168 cs = ConfigStore.instance()
169
170 cs.store(
171 group="hydra",
172 name="config",
173 node=HydraConf(),
174 provider="hydra",
175 )
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py
--- a/hydra/conf/__init__.py
+++ b/hydra/conf/__init__.py
@@ -95,7 +95,8 @@
config_sources: List[ConfigSourceInfo] = MISSING
# Composition choices dictionary
- choices: Dict[str, str] = field(default_factory=lambda: {})
+ # Ideally, the value type would be Union[str, List[str], None]
+ choices: Dict[str, Any] = field(default_factory=lambda: {})
@dataclass
| {"golden_diff": "diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py\n--- a/hydra/conf/__init__.py\n+++ b/hydra/conf/__init__.py\n@@ -95,7 +95,8 @@\n config_sources: List[ConfigSourceInfo] = MISSING\n \n # Composition choices dictionary\n- choices: Dict[str, str] = field(default_factory=lambda: {})\n+ # Ideally, the value type would be Union[str, List[str], None]\n+ choices: Dict[str, Any] = field(default_factory=lambda: {})\n \n \n @dataclass\n", "issue": "[Bug] Config composition error with latest version of OmegaConf\n# \ud83d\udc1b Bug\r\n## Description\r\n\r\nWhen using OmegaConf at commit 2dd15f9 (first commit where this problem occurs), there are multiple Hydra tests failures, for instance:\r\n\r\n```\r\npytest \"tests/test_basic_launcher.py::TestBasicLauncher::test_sweep_1_job[basic-overrides0]\"\r\n(...)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = __INVALID__, value = None\r\n\r\n def validate_and_convert(self, value: Any) -> Any:\r\n \"\"\"\r\n Validates input and converts to canonical form\r\n :param value: input value\r\n :return: converted value (\"100\" may be converted to 100 for example)\r\n \"\"\"\r\n if value is None:\r\n if self._is_optional():\r\n return None\r\n> raise ValidationError(\"Non optional field cannot be assigned None\")\r\nE hydra.errors.ConfigCompositionException\r\n\r\n../omegaconf/omegaconf/nodes.py:55: ConfigCompositionException\r\n```\r\n\r\n## Checklist\r\n- [X] I checked on the latest version of Hydra\r\n- [X] I created a minimal repro (See [this](https://stackoverflow.com/help/minimal-reproducible-example) for tips).\r\n\r\n## To reproduce\r\n\r\nUse master branch of Hydra with OmegaConf's commit 2dd15f9\r\n\r\n## Additional context\r\n\r\nThis might actually be an OmegaConf bug (I'm not sure).\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n# job runtime information will be populated here\n@dataclass\nclass JobConf:\n # Job name, populated automatically unless specified by the user (in config or cli)\n name: str = MISSING\n\n # Populated automatically by Hydra.\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured via hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: int = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n # Environment variables to set remotely\n env_set: Dict[str, str] = field(default_factory=dict)\n # Environment variables to copy from the launching machine\n env_copy: List[str] = field(default_factory=list)\n\n # Job config\n @dataclass\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=list)\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass ConfigSourceInfo:\n path: str\n schema: str\n provider: str\n\n\n@dataclass\nclass RuntimeConf:\n version: str = MISSING\n cwd: str = MISSING\n config_sources: List[ConfigSourceInfo] = MISSING\n\n # Composition choices dictionary\n choices: Dict[str, str] = field(default_factory=lambda: {})\n\n\n@dataclass\nclass HydraConf:\n defaults: List[Any] = field(\n default_factory=lambda: [\n {\"output\": \"default\"},\n {\"launcher\": \"basic\"},\n {\"sweeper\": \"basic\"},\n {\"help\": \"default\"},\n {\"hydra_help\": \"default\"},\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n # env specific overrides\n {\"env\": \"default\"},\n ]\n )\n\n # Elements to append to the config search path.\n # Note: This can only be configured in the primary config.\n searchpath: List[str] = field(default_factory=list)\n\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: Any = MISSING\n # Launcher configuration\n launcher: Any = MISSING\n # Callbacks configuration\n callbacks: Dict[str, Any] = field(default_factory=dict)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go 
here. Useful for debugging\n # and extra context when looking at past runs.\n # Setting to None will prevent the creation of the output subdir.\n output_subdir: Optional[str] = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n\ncs = ConfigStore.instance()\n\ncs.store(\n group=\"hydra\",\n name=\"config\",\n node=HydraConf(),\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n# job runtime information will be populated here\n@dataclass\nclass JobConf:\n # Job name, populated automatically unless specified by the user (in config or cli)\n name: str = MISSING\n\n # Populated automatically by Hydra.\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured via hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: int = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n # Environment variables to set remotely\n env_set: Dict[str, str] = field(default_factory=dict)\n # Environment variables to copy from the launching machine\n env_copy: List[str] = field(default_factory=list)\n\n # Job config\n @dataclass\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=list)\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass ConfigSourceInfo:\n path: str\n schema: str\n provider: str\n\n\n@dataclass\nclass RuntimeConf:\n version: str = MISSING\n cwd: str = MISSING\n config_sources: List[ConfigSourceInfo] = MISSING\n\n # Composition choices dictionary\n # Ideally, the value type would be Union[str, List[str], None]\n choices: Dict[str, Any] = field(default_factory=lambda: {})\n\n\n@dataclass\nclass HydraConf:\n defaults: List[Any] = field(\n default_factory=lambda: [\n 
{\"output\": \"default\"},\n {\"launcher\": \"basic\"},\n {\"sweeper\": \"basic\"},\n {\"help\": \"default\"},\n {\"hydra_help\": \"default\"},\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n # env specific overrides\n {\"env\": \"default\"},\n ]\n )\n\n # Elements to append to the config search path.\n # Note: This can only be configured in the primary config.\n searchpath: List[str] = field(default_factory=list)\n\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: Any = MISSING\n # Launcher configuration\n launcher: Any = MISSING\n # Callbacks configuration\n callbacks: Dict[str, Any] = field(default_factory=dict)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging\n # and extra context when looking at past runs.\n # Setting to None will prevent the creation of the output subdir.\n output_subdir: Optional[str] = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n\ncs = ConfigStore.instance()\n\ncs.store(\n group=\"hydra\",\n name=\"config\",\n node=HydraConf(),\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}]} | 2,220 | 130 |
gh_patches_debug_18475 | rasdani/github-patches | git_diff | getnikola__nikola-1957 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
handle include tag in mako templates
Currently templates used via include tags are not considered dependencies. It's not hard.
handle include tag in mako templates
Currently templates used via include tags are not considered dependencies. It's not hard.
--- END ISSUE ---
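As an illustrative aside (not part of the original issue), a minimal sketch of how Mako's lexer exposes `<%include>` tags alongside `inherit`/`namespace` tags, which is what a dependency scanner needs to pick up. The template body and the `demo.tmpl` filename are made up, and this assumes the `mako` package is available.
```python
from mako import lexer, parsetree

TEMPLATE = """\
<%inherit file="base.tmpl"/>
<%namespace name="helpers" file="helpers.tmpl"/>
<%include file="sidebar.tmpl"/>
Hello, world!
"""

lex = lexer.Lexer(text=TEMPLATE, filename="demo.tmpl")
lex.parse()

deps = []
for node in lex.template.nodes:
    keyword = getattr(node, "keyword", None)
    # inherit/namespace are matched by keyword; include tags are IncludeTag nodes
    if keyword in ("inherit", "namespace") or isinstance(node, parsetree.IncludeTag):
        deps.append(node.attributes["file"])

print(deps)  # expected: ['base.tmpl', 'helpers.tmpl', 'sidebar.tmpl']
```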
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/template/mako.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Mako template handler."""
28
29 from __future__ import unicode_literals, print_function, absolute_import
30 import os
31 import shutil
32 import sys
33 import tempfile
34
35 from mako import util, lexer
36 from mako.lookup import TemplateLookup
37 from mako.template import Template
38 from markupsafe import Markup # It's ok, Mako requires it
39
40 from nikola.plugin_categories import TemplateSystem
41 from nikola.utils import makedirs, get_logger, STDERR_HANDLER
42
43 LOGGER = get_logger('mako', STDERR_HANDLER)
44
45
46 class MakoTemplates(TemplateSystem):
47
48 """Support for Mako templates."""
49
50 name = "mako"
51
52 lookup = None
53 cache = {}
54 filters = {}
55 directories = []
56 cache_dir = None
57
58 def get_deps(self, filename):
59 """Get dependencies for a template (internal function)."""
60 text = util.read_file(filename)
61 lex = lexer.Lexer(text=text, filename=filename)
62 lex.parse()
63
64 deps = []
65 for n in lex.template.nodes:
66 keyword = getattr(n, 'keyword', None)
67 if keyword in ["inherit", "namespace"]:
68 deps.append(n.attributes['file'])
69 # TODO: include tags are not handled
70 return deps
71
72 def set_directories(self, directories, cache_folder):
73 """Create a new template lookup with set directories."""
74 cache_dir = os.path.join(cache_folder, '.mako.tmp')
75 # Workaround for a Mako bug, Issue #825
76 if sys.version_info[0] == 2:
77 try:
78 os.path.abspath(cache_dir).decode('ascii')
79 except UnicodeEncodeError:
80 cache_dir = tempfile.mkdtemp()
81 LOGGER.warning('Because of a Mako bug, setting cache_dir to {0}'.format(cache_dir))
82 if os.path.exists(cache_dir):
83 shutil.rmtree(cache_dir)
84 self.directories = directories
85 self.cache_dir = cache_dir
86 self.create_lookup()
87
88 def inject_directory(self, directory):
89 """Add a directory to the lookup and recreate it if it's not there yet."""
90 if directory not in self.directories:
91 self.directories.append(directory)
92 self.create_lookup()
93
94 def create_lookup(self):
95 """Create a template lookup."""
96 self.lookup = TemplateLookup(
97 directories=self.directories,
98 module_directory=self.cache_dir,
99 output_encoding='utf-8')
100
101 def set_site(self, site):
102 """Set the Nikola site."""
103 self.site = site
104 self.filters.update(self.site.config['TEMPLATE_FILTERS'])
105
106 def render_template(self, template_name, output_name, context):
107 """Render the template into output_name using context."""
108 context['striphtml'] = striphtml
109 template = self.lookup.get_template(template_name)
110 data = template.render_unicode(**context)
111 if output_name is not None:
112 makedirs(os.path.dirname(output_name))
113 with open(output_name, 'w+') as output:
114 output.write(data)
115 return data
116
117 def render_template_to_string(self, template, context):
118 """Render template to a string using context."""
119 context.update(self.filters)
120 return Template(template).render(**context)
121
122 def template_deps(self, template_name):
123 """Generate list of dependencies for a template."""
124 # We can cache here because dependencies should
125 # not change between runs
126 if self.cache.get(template_name, None) is None:
127 template = self.lookup.get_template(template_name)
128 dep_filenames = self.get_deps(template.filename)
129 deps = [template.filename]
130 for fname in dep_filenames:
131 deps += self.template_deps(fname)
132 self.cache[template_name] = tuple(deps)
133 return list(self.cache[template_name])
134
135
136 def striphtml(text):
137 """Strip HTML tags from text."""
138 return Markup(text).striptags()
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/template/mako.py b/nikola/plugins/template/mako.py
--- a/nikola/plugins/template/mako.py
+++ b/nikola/plugins/template/mako.py
@@ -32,7 +32,7 @@
import sys
import tempfile
-from mako import util, lexer
+from mako import util, lexer, parsetree
from mako.lookup import TemplateLookup
from mako.template import Template
from markupsafe import Markup # It's ok, Mako requires it
@@ -64,9 +64,8 @@
deps = []
for n in lex.template.nodes:
keyword = getattr(n, 'keyword', None)
- if keyword in ["inherit", "namespace"]:
+ if keyword in ["inherit", "namespace"] or isinstance(n, parsetree.IncludeTag):
deps.append(n.attributes['file'])
- # TODO: include tags are not handled
return deps
def set_directories(self, directories, cache_folder):
| {"golden_diff": "diff --git a/nikola/plugins/template/mako.py b/nikola/plugins/template/mako.py\n--- a/nikola/plugins/template/mako.py\n+++ b/nikola/plugins/template/mako.py\n@@ -32,7 +32,7 @@\n import sys\n import tempfile\n \n-from mako import util, lexer\n+from mako import util, lexer, parsetree\n from mako.lookup import TemplateLookup\n from mako.template import Template\n from markupsafe import Markup # It's ok, Mako requires it\n@@ -64,9 +64,8 @@\n deps = []\n for n in lex.template.nodes:\n keyword = getattr(n, 'keyword', None)\n- if keyword in [\"inherit\", \"namespace\"]:\n+ if keyword in [\"inherit\", \"namespace\"] or isinstance(n, parsetree.IncludeTag):\n deps.append(n.attributes['file'])\n- # TODO: include tags are not handled\n return deps\n \n def set_directories(self, directories, cache_folder):\n", "issue": "handle include tag in mako templates\nCurrently templates used via include tags are not considered dependencies. It's not hard.\n\nhandle include tag in mako templates\nCurrently templates used via include tags are not considered dependencies. It's not hard.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Mako template handler.\"\"\"\n\nfrom __future__ import unicode_literals, print_function, absolute_import\nimport os\nimport shutil\nimport sys\nimport tempfile\n\nfrom mako import util, lexer\nfrom mako.lookup import TemplateLookup\nfrom mako.template import Template\nfrom markupsafe import Markup # It's ok, Mako requires it\n\nfrom nikola.plugin_categories import TemplateSystem\nfrom nikola.utils import makedirs, get_logger, STDERR_HANDLER\n\nLOGGER = get_logger('mako', STDERR_HANDLER)\n\n\nclass MakoTemplates(TemplateSystem):\n\n \"\"\"Support for Mako templates.\"\"\"\n\n name = \"mako\"\n\n lookup = None\n cache = {}\n filters = {}\n directories = []\n cache_dir = None\n\n def get_deps(self, filename):\n \"\"\"Get dependencies for a template (internal function).\"\"\"\n text = util.read_file(filename)\n lex = lexer.Lexer(text=text, filename=filename)\n lex.parse()\n\n deps = []\n for n in lex.template.nodes:\n keyword = getattr(n, 'keyword', None)\n if keyword in [\"inherit\", \"namespace\"]:\n deps.append(n.attributes['file'])\n # TODO: include tags are not handled\n return deps\n\n def set_directories(self, directories, cache_folder):\n \"\"\"Create a new template lookup with set directories.\"\"\"\n cache_dir = os.path.join(cache_folder, '.mako.tmp')\n # Workaround for a Mako bug, Issue #825\n if sys.version_info[0] == 2:\n try:\n os.path.abspath(cache_dir).decode('ascii')\n except UnicodeEncodeError:\n cache_dir = tempfile.mkdtemp()\n LOGGER.warning('Because of a Mako bug, setting cache_dir to {0}'.format(cache_dir))\n if os.path.exists(cache_dir):\n shutil.rmtree(cache_dir)\n self.directories = directories\n self.cache_dir = cache_dir\n self.create_lookup()\n\n def inject_directory(self, directory):\n \"\"\"Add a directory to the lookup and recreate it if it's not there yet.\"\"\"\n if directory not in self.directories:\n self.directories.append(directory)\n self.create_lookup()\n\n def create_lookup(self):\n \"\"\"Create a template lookup.\"\"\"\n self.lookup = TemplateLookup(\n directories=self.directories,\n module_directory=self.cache_dir,\n output_encoding='utf-8')\n\n def set_site(self, site):\n \"\"\"Set the Nikola site.\"\"\"\n self.site = site\n self.filters.update(self.site.config['TEMPLATE_FILTERS'])\n\n def render_template(self, template_name, output_name, context):\n \"\"\"Render the template into output_name using context.\"\"\"\n context['striphtml'] = striphtml\n template = self.lookup.get_template(template_name)\n data = template.render_unicode(**context)\n if output_name is not None:\n makedirs(os.path.dirname(output_name))\n with open(output_name, 'w+') as output:\n output.write(data)\n return data\n\n def render_template_to_string(self, template, context):\n \"\"\"Render template to a string using context.\"\"\"\n context.update(self.filters)\n return Template(template).render(**context)\n\n def template_deps(self, template_name):\n \"\"\"Generate list of dependencies for a template.\"\"\"\n # We can cache here because dependencies should\n # not change between runs\n if self.cache.get(template_name, None) is None:\n template = self.lookup.get_template(template_name)\n dep_filenames = self.get_deps(template.filename)\n deps = [template.filename]\n for fname in dep_filenames:\n deps += 
self.template_deps(fname)\n self.cache[template_name] = tuple(deps)\n return list(self.cache[template_name])\n\n\ndef striphtml(text):\n \"\"\"Strip HTML tags from text.\"\"\"\n return Markup(text).striptags()\n", "path": "nikola/plugins/template/mako.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Mako template handler.\"\"\"\n\nfrom __future__ import unicode_literals, print_function, absolute_import\nimport os\nimport shutil\nimport sys\nimport tempfile\n\nfrom mako import util, lexer, parsetree\nfrom mako.lookup import TemplateLookup\nfrom mako.template import Template\nfrom markupsafe import Markup # It's ok, Mako requires it\n\nfrom nikola.plugin_categories import TemplateSystem\nfrom nikola.utils import makedirs, get_logger, STDERR_HANDLER\n\nLOGGER = get_logger('mako', STDERR_HANDLER)\n\n\nclass MakoTemplates(TemplateSystem):\n\n \"\"\"Support for Mako templates.\"\"\"\n\n name = \"mako\"\n\n lookup = None\n cache = {}\n filters = {}\n directories = []\n cache_dir = None\n\n def get_deps(self, filename):\n \"\"\"Get dependencies for a template (internal function).\"\"\"\n text = util.read_file(filename)\n lex = lexer.Lexer(text=text, filename=filename)\n lex.parse()\n\n deps = []\n for n in lex.template.nodes:\n keyword = getattr(n, 'keyword', None)\n if keyword in [\"inherit\", \"namespace\"] or isinstance(n, parsetree.IncludeTag):\n deps.append(n.attributes['file'])\n return deps\n\n def set_directories(self, directories, cache_folder):\n \"\"\"Create a new template lookup with set directories.\"\"\"\n cache_dir = os.path.join(cache_folder, '.mako.tmp')\n # Workaround for a Mako bug, Issue #825\n if sys.version_info[0] == 2:\n try:\n os.path.abspath(cache_dir).decode('ascii')\n except UnicodeEncodeError:\n cache_dir = tempfile.mkdtemp()\n LOGGER.warning('Because of a Mako bug, setting cache_dir to {0}'.format(cache_dir))\n if os.path.exists(cache_dir):\n shutil.rmtree(cache_dir)\n self.directories = directories\n self.cache_dir = cache_dir\n self.create_lookup()\n\n def inject_directory(self, directory):\n \"\"\"Add a directory to the lookup and recreate it if it's not there yet.\"\"\"\n if directory not in self.directories:\n self.directories.append(directory)\n self.create_lookup()\n\n def create_lookup(self):\n \"\"\"Create a template lookup.\"\"\"\n self.lookup = TemplateLookup(\n directories=self.directories,\n 
module_directory=self.cache_dir,\n output_encoding='utf-8')\n\n def set_site(self, site):\n \"\"\"Set the Nikola site.\"\"\"\n self.site = site\n self.filters.update(self.site.config['TEMPLATE_FILTERS'])\n\n def render_template(self, template_name, output_name, context):\n \"\"\"Render the template into output_name using context.\"\"\"\n context['striphtml'] = striphtml\n template = self.lookup.get_template(template_name)\n data = template.render_unicode(**context)\n if output_name is not None:\n makedirs(os.path.dirname(output_name))\n with open(output_name, 'w+') as output:\n output.write(data)\n return data\n\n def render_template_to_string(self, template, context):\n \"\"\"Render template to a string using context.\"\"\"\n context.update(self.filters)\n return Template(template).render(**context)\n\n def template_deps(self, template_name):\n \"\"\"Generate list of dependencies for a template.\"\"\"\n # We can cache here because dependencies should\n # not change between runs\n if self.cache.get(template_name, None) is None:\n template = self.lookup.get_template(template_name)\n dep_filenames = self.get_deps(template.filename)\n deps = [template.filename]\n for fname in dep_filenames:\n deps += self.template_deps(fname)\n self.cache[template_name] = tuple(deps)\n return list(self.cache[template_name])\n\n\ndef striphtml(text):\n \"\"\"Strip HTML tags from text.\"\"\"\n return Markup(text).striptags()\n", "path": "nikola/plugins/template/mako.py"}]} | 1,684 | 217 |
gh_patches_debug_31198 | rasdani/github-patches | git_diff | cloudtools__troposphere-654 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AutoScalingRollingUpdate with 'If' Aws Helper Condition fail during validate
Hi, in an AutoScalingGroup UpdatePolicy, if I have an AutoScalingRollingUpdate wrapped in an If AWS helper condition, it fails during validate with:
##
AttributeError: 'If' object has no attribute 'MinInstancesInService'
##
Example code:
##
AutoScalingRollingUpdate=If(
'RollingUpdate',
pol.AutoScalingRollingUpdate(
MaxBatchSize=get_mapped_value('RollingUpdateMaxBatchSize'),
MinInstancesInService=get_mapped_value('RollingUpdateMinInstancesInService'),
MinSuccessfulInstancesPercent=get_mapped_value('RollingUpdateMinSuccessfulInstancesPercent'),
PauseTime=get_mapped_value('RollingUpdatePauseTime'),
SuspendProcesses=[
'HealthCheck',
'ReplaceUnhealthy',
'AlarmNotification',
'ScheduledActions'
],
WaitOnResourceSignals=True
),
Ref('AWS::NoValue')
),
##
To solve the issue, the validate function in troposphere/autoscaling.py should change as follows:
```
diff --git a/troposphere/autoscaling.py b/troposphere/autoscaling.py
index cc5873f..8f7a43d 100644
--- a/troposphere/autoscaling.py
+++ b/troposphere/autoscaling.py
@@ -136,7 +136,8 @@ class AutoScalingGroup(AWSObject):
update_policy = self.resource['UpdatePolicy']
if (not isinstance(update_policy, AWSHelperFn) and
- 'AutoScalingRollingUpdate' in update_policy.properties):
+ 'AutoScalingRollingUpdate' in update_policy.properties and
+ not isinstance(update_policy.AutoScalingRollingUpdate, AWSHelperFn)):
rolling_update = update_policy.AutoScalingRollingUpdate
isMinNoCheck = isinstance(
```
##
Regards, Alberto.
--- END ISSUE ---
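As an illustrative aside (not part of the original issue), a minimal sketch of the guard being proposed: when the rolling-update policy is itself an AWS helper function such as `If()`, it cannot be introspected, so the MinInstancesInService check has to be skipped. The condition name `RollingUpdate` and the counts are made up, and this assumes a reasonably recent `troposphere` on Python 3.
```python
from troposphere import AWSHelperFn, If, Ref
from troposphere.policies import AutoScalingRollingUpdate, UpdatePolicy

update_policy = UpdatePolicy(
    AutoScalingRollingUpdate=If(
        "RollingUpdate",
        AutoScalingRollingUpdate(MinInstancesInService="1", MaxBatchSize="1"),
        Ref("AWS::NoValue"),
    )
)

rolling_update = update_policy.AutoScalingRollingUpdate
if isinstance(rolling_update, AWSHelperFn):
    # If()/Ref() cannot be introspected at template-build time, so the
    # MinInstancesInService < MaxSize comparison has to be skipped.
    print("rolling update is an AWS helper function; skipping count check")
else:
    print("MinInstancesInService =", rolling_update.MinInstancesInService)
```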
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/autoscaling.py`
Content:
```
1 # Copyright (c) 2012-2013, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSHelperFn, AWSObject, AWSProperty, If, FindInMap, Ref
7 from .validators import boolean, integer
8 from . import cloudformation
9
10
11 EC2_INSTANCE_LAUNCH = "autoscaling:EC2_INSTANCE_LAUNCH"
12 EC2_INSTANCE_LAUNCH_ERROR = "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
13 EC2_INSTANCE_TERMINATE = "autoscaling:EC2_INSTANCE_TERMINATE"
14 EC2_INSTANCE_TERMINATE_ERROR = "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
15 TEST_NOTIFICATION = "autoscaling:TEST_NOTIFICATION"
16
17 # Termination Policy constants
18 Default = 'Default'
19 OldestInstance = 'OldestInstance'
20 NewestInstance = 'NewestInstance'
21 OldestLaunchConfiguration = 'OldestLaunchConfiguration'
22 ClosestToNextInstanceHour = 'ClosestToNextInstanceHour'
23
24
25 class Tag(AWSHelperFn):
26 def __init__(self, key, value, propogate):
27 self.data = {
28 'Key': key,
29 'Value': value,
30 'PropagateAtLaunch': propogate,
31 }
32
33 def JSONrepr(self):
34 return self.data
35
36
37 class Tags(AWSHelperFn):
38 defaultPropagateAtLaunch = True
39 manyType = [type([]), type(())]
40
41 def __init__(self, **kwargs):
42 self.tags = []
43 for k, v in sorted(kwargs.iteritems()):
44 if type(v) in self.manyType:
45 propagate = str(v[1]).lower()
46 v = v[0]
47 else:
48 propagate = str(self.defaultPropagateAtLaunch).lower()
49 self.tags.append({
50 'Key': k,
51 'Value': v,
52 'PropagateAtLaunch': propagate,
53 })
54
55 # append tags to list
56 def __add__(self, newtags):
57 newtags.tags = self.tags + newtags.tags
58 return newtags
59
60 def JSONrepr(self):
61 return self.tags
62
63
64 class NotificationConfigurations(AWSProperty):
65 props = {
66 'TopicARN': (basestring, True),
67 'NotificationTypes': (list, True),
68 }
69
70
71 class MetricsCollection(AWSProperty):
72 props = {
73 'Granularity': (basestring, True),
74 'Metrics': (list, False),
75 }
76
77
78 class Metadata(AWSHelperFn):
79 def __init__(self, init, authentication=None):
80 self.validate(init, authentication)
81 # get keys and values from init and authentication
82
83 # if there's only one data point, then we know its the default
84 # cfn-init; where the key is 'config'
85 if len(init.data) == 1:
86 initKey, initValue = init.data.popitem()
87 self.data = {initKey: initValue}
88 else:
89 self.data = init.data
90
91 if authentication:
92 authKey, authValue = authentication.data.popitem()
93 self.data[authKey] = authValue
94
95 def validate(self, init, authentication):
96 if not isinstance(init, cloudformation.Init):
97 raise ValueError(
98 'init must be of type cloudformation.Init'
99 )
100
101 is_instance = isinstance(authentication, cloudformation.Authentication)
102 if authentication and not is_instance:
103 raise ValueError(
104 'authentication must be of type cloudformation.Authentication'
105 )
106
107 def JSONrepr(self):
108 return self.data
109
110
111 class AutoScalingGroup(AWSObject):
112 resource_type = "AWS::AutoScaling::AutoScalingGroup"
113
114 props = {
115 'AvailabilityZones': (list, False),
116 'Cooldown': (integer, False),
117 'DesiredCapacity': (integer, False),
118 'HealthCheckGracePeriod': (integer, False),
119 'HealthCheckType': (basestring, False),
120 'InstanceId': (basestring, False),
121 'LaunchConfigurationName': (basestring, False),
122 'LoadBalancerNames': (list, False),
123 'MaxSize': (integer, True),
124 'MetricsCollection': ([MetricsCollection], False),
125 'MinSize': (integer, True),
126 'NotificationConfigurations': ([NotificationConfigurations], False),
127 'PlacementGroup': (basestring, False),
128 'Tags': (list, False),
129 'TargetGroupARNs': ([basestring], False),
130 'TerminationPolicies': ([basestring], False),
131 'VPCZoneIdentifier': (list, False),
132 }
133
134 def validate(self):
135 if 'UpdatePolicy' in self.resource:
136 update_policy = self.resource['UpdatePolicy']
137
138 if (not isinstance(update_policy, AWSHelperFn) and
139 'AutoScalingRollingUpdate' in update_policy.properties):
140 rolling_update = update_policy.AutoScalingRollingUpdate
141
142 isMinNoCheck = isinstance(
143 rolling_update.MinInstancesInService,
144 (FindInMap, Ref)
145 )
146 isMaxNoCheck = isinstance(self.MaxSize, (If, FindInMap, Ref))
147
148 if not (isMinNoCheck or isMaxNoCheck):
149 maxCount = int(self.MaxSize)
150 minCount = int(rolling_update.MinInstancesInService)
151
152 if minCount >= maxCount:
153 raise ValueError(
154 "The UpdatePolicy attribute "
155 "MinInstancesInService must be less than the "
156 "autoscaling group's MaxSize")
157
158 launch_config = self.properties.get('LaunchConfigurationName')
159 instance_id = self.properties.get('InstanceId')
160 if launch_config and instance_id:
161 raise ValueError("LaunchConfigurationName and InstanceId "
162 "are mutually exclusive.")
163 if not launch_config and not instance_id:
164 raise ValueError("Must specify either LaunchConfigurationName or "
165 "InstanceId: http://docs.aws.amazon.com/AWSCloud"
166 "Formation/latest/UserGuide/aws-properties-as-gr"
167 "oup.html#cfn-as-group-instanceid")
168
169 availability_zones = self.properties.get('AvailabilityZones')
170 vpc_zone_identifier = self.properties.get('VPCZoneIdentifier')
171 if not availability_zones and not vpc_zone_identifier:
172 raise ValueError("Must specify AvailabilityZones and/or "
173 "VPCZoneIdentifier: http://docs.aws.amazon.com/A"
174 "WSCloudFormation/latest/UserGuide/aws-propertie"
175 "s-as-group.html#cfn-as-group-vpczoneidentifier")
176 return True
177
178
179 class LaunchConfiguration(AWSObject):
180 resource_type = "AWS::AutoScaling::LaunchConfiguration"
181
182 props = {
183 'AssociatePublicIpAddress': (boolean, False),
184 'BlockDeviceMappings': (list, False),
185 'ClassicLinkVPCId': (basestring, False),
186 'ClassicLinkVPCSecurityGroups': ([basestring], False),
187 'EbsOptimized': (boolean, False),
188 'IamInstanceProfile': (basestring, False),
189 'ImageId': (basestring, True),
190 'InstanceId': (basestring, False),
191 'InstanceMonitoring': (boolean, False),
192 'InstanceType': (basestring, True),
193 'KernelId': (basestring, False),
194 'KeyName': (basestring, False),
195 'Metadata': (Metadata, False),
196 'PlacementTenancy': (basestring, False),
197 'RamDiskId': (basestring, False),
198 'SecurityGroups': (list, False),
199 'SpotPrice': (basestring, False),
200 'UserData': (basestring, False),
201 }
202
203
204 class StepAdjustments(AWSProperty):
205 props = {
206 'MetricIntervalLowerBound': (integer, False),
207 'MetricIntervalUpperBound': (integer, False),
208 'ScalingAdjustment': (integer, True),
209 }
210
211
212 class ScalingPolicy(AWSObject):
213 resource_type = "AWS::AutoScaling::ScalingPolicy"
214
215 props = {
216 'AdjustmentType': (basestring, True),
217 'AutoScalingGroupName': (basestring, True),
218 'Cooldown': (integer, False),
219 'EstimatedInstanceWarmup': (integer, False),
220 'MetricAggregationType': (basestring, False),
221 'MinAdjustmentMagnitude': (integer, False),
222 'PolicyType': (basestring, False),
223 'ScalingAdjustment': (integer, False),
224 'StepAdjustments': ([StepAdjustments], False),
225 }
226
227
228 class ScheduledAction(AWSObject):
229 resource_type = "AWS::AutoScaling::ScheduledAction"
230
231 props = {
232 'AutoScalingGroupName': (basestring, True),
233 'DesiredCapacity': (integer, False),
234 'EndTime': (basestring, False),
235 'MaxSize': (integer, False),
236 'MinSize': (integer, False),
237 'Recurrence': (basestring, False),
238 'StartTime': (basestring, False),
239 }
240
241
242 class LifecycleHook(AWSObject):
243 resource_type = "AWS::AutoScaling::LifecycleHook"
244
245 props = {
246 'AutoScalingGroupName': (basestring, True),
247 'DefaultResult': (basestring, False),
248 'HeartbeatTimeout': (integer, False),
249 'LifecycleHookName': (basestring, False),
250 'LifecycleTransition': (basestring, True),
251 'NotificationMetadata': (basestring, False),
252 'NotificationTargetARN': (basestring, True),
253 'RoleARN': (basestring, True),
254 }
255
256
257 class Trigger(AWSObject):
258 resource_type = "AWS::AutoScaling::Trigger"
259
260 props = {
261 'AutoScalingGroupName': (basestring, True),
262 'BreachDuration': (integer, True),
263 'Dimensions': (list, True),
264 'LowerBreachScaleIncrement': (integer, False),
265 'LowerThreshold': (integer, True),
266 'MetricName': (basestring, True),
267 'Namespace': (basestring, True),
268 'Period': (integer, True),
269 'Statistic': (basestring, True),
270 'Unit': (basestring, False),
271 'UpperBreachScaleIncrement': (integer, False),
272 'UpperThreshold': (integer, True),
273 }
274
275
276 class EBSBlockDevice(AWSProperty):
277 # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-template.html
278 props = {
279 'DeleteOnTermination': (boolean, False),
280 'Encrypted': (boolean, False),
281 'Iops': (integer, False),
282 'SnapshotId': (basestring, False),
283 'VolumeSize': (integer, False),
284 'VolumeType': (basestring, False),
285 }
286
287
288 class BlockDeviceMapping(AWSProperty):
289 # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-mapping.html
290 props = {
291 'DeviceName': (basestring, True),
292 'Ebs': (EBSBlockDevice, False),
293 'NoDevice': (boolean, False),
294 'VirtualName': (basestring, False),
295 }
296
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/autoscaling.py b/troposphere/autoscaling.py
--- a/troposphere/autoscaling.py
+++ b/troposphere/autoscaling.py
@@ -137,23 +137,26 @@
if (not isinstance(update_policy, AWSHelperFn) and
'AutoScalingRollingUpdate' in update_policy.properties):
- rolling_update = update_policy.AutoScalingRollingUpdate
-
- isMinNoCheck = isinstance(
- rolling_update.MinInstancesInService,
- (FindInMap, Ref)
- )
- isMaxNoCheck = isinstance(self.MaxSize, (If, FindInMap, Ref))
-
- if not (isMinNoCheck or isMaxNoCheck):
- maxCount = int(self.MaxSize)
- minCount = int(rolling_update.MinInstancesInService)
-
- if minCount >= maxCount:
- raise ValueError(
- "The UpdatePolicy attribute "
- "MinInstancesInService must be less than the "
- "autoscaling group's MaxSize")
+ if not isinstance(
+ update_policy.AutoScalingRollingUpdate, AWSHelperFn):
+ rolling_update = update_policy.AutoScalingRollingUpdate
+
+ is_min_no_check = isinstance(
+ rolling_update.MinInstancesInService,
+ (FindInMap, Ref)
+ )
+ is_max_no_check = isinstance(self.MaxSize,
+ (If, FindInMap, Ref))
+
+ if not (is_min_no_check or is_max_no_check):
+ max_count = int(self.MaxSize)
+ min_count = int(rolling_update.MinInstancesInService)
+
+ if min_count >= max_count:
+ raise ValueError(
+ "The UpdatePolicy attribute "
+ "MinInstancesInService must be less than the "
+ "autoscaling group's MaxSize")
launch_config = self.properties.get('LaunchConfigurationName')
instance_id = self.properties.get('InstanceId')
| {"golden_diff": "diff --git a/troposphere/autoscaling.py b/troposphere/autoscaling.py\n--- a/troposphere/autoscaling.py\n+++ b/troposphere/autoscaling.py\n@@ -137,23 +137,26 @@\n \n if (not isinstance(update_policy, AWSHelperFn) and\n 'AutoScalingRollingUpdate' in update_policy.properties):\n- rolling_update = update_policy.AutoScalingRollingUpdate\n-\n- isMinNoCheck = isinstance(\n- rolling_update.MinInstancesInService,\n- (FindInMap, Ref)\n- )\n- isMaxNoCheck = isinstance(self.MaxSize, (If, FindInMap, Ref))\n-\n- if not (isMinNoCheck or isMaxNoCheck):\n- maxCount = int(self.MaxSize)\n- minCount = int(rolling_update.MinInstancesInService)\n-\n- if minCount >= maxCount:\n- raise ValueError(\n- \"The UpdatePolicy attribute \"\n- \"MinInstancesInService must be less than the \"\n- \"autoscaling group's MaxSize\")\n+ if not isinstance(\n+ update_policy.AutoScalingRollingUpdate, AWSHelperFn):\n+ rolling_update = update_policy.AutoScalingRollingUpdate\n+\n+ is_min_no_check = isinstance(\n+ rolling_update.MinInstancesInService,\n+ (FindInMap, Ref)\n+ )\n+ is_max_no_check = isinstance(self.MaxSize,\n+ (If, FindInMap, Ref))\n+\n+ if not (is_min_no_check or is_max_no_check):\n+ max_count = int(self.MaxSize)\n+ min_count = int(rolling_update.MinInstancesInService)\n+\n+ if min_count >= max_count:\n+ raise ValueError(\n+ \"The UpdatePolicy attribute \"\n+ \"MinInstancesInService must be less than the \"\n+ \"autoscaling group's MaxSize\")\n \n launch_config = self.properties.get('LaunchConfigurationName')\n instance_id = self.properties.get('InstanceId')\n", "issue": "AutoScalingRollingUpdate with 'If' Aws Helper Condition fail during validate\nHi, in autoscalinggroup UpdatePolicy if i have a AutoScalingRollingUpdate with a IF AWS Helper Condition it fails during validate with:\r\n##\r\nAttributeError: 'If' object has no attribute 'MinInstancesInService'\r\n##\r\n\r\nExample code:\r\n##\r\n AutoScalingRollingUpdate=If(\r\n 'RollingUpdate',\r\n pol.AutoScalingRollingUpdate(\r\n MaxBatchSize=get_mapped_value('RollingUpdateMaxBatchSize'),\r\n MinInstancesInService=get_mapped_value('RollingUpdateMinInstancesInService'),\r\n MinSuccessfulInstancesPercent=get_mapped_value('RollingUpdateMinSuccessfulInstancesPercent'),\r\n PauseTime=get_mapped_value('RollingUpdatePauseTime'),\r\n SuspendProcesses=[\r\n 'HealthCheck',\r\n 'ReplaceUnhealthy',\r\n 'AlarmNotification',\r\n 'ScheduledActions'\r\n ],\r\n WaitOnResourceSignals=True\r\n ),\r\n Ref('AWS::NoValue')\r\n ),\r\n##\r\n\r\nTo solve issue, in troposphere/autoscaling.py function validate should be:\r\n```\r\ndiff --git a/troposphere/autoscaling.py b/troposphere/autoscaling.py\r\nindex cc5873f..8f7a43d 100644\r\n--- a/troposphere/autoscaling.py\r\n+++ b/troposphere/autoscaling.py\r\n@@ -136,7 +136,8 @@ class AutoScalingGroup(AWSObject):\r\n update_policy = self.resource['UpdatePolicy']\r\n \r\n if (not isinstance(update_policy, AWSHelperFn) and\r\n- 'AutoScalingRollingUpdate' in update_policy.properties):\r\n+ 'AutoScalingRollingUpdate' in update_policy.properties and\r\n+ not isinstance(update_policy.AutoScalingRollingUpdate, AWSHelperFn)):\r\n rolling_update = update_policy.AutoScalingRollingUpdate\r\n \r\n isMinNoCheck = isinstance(\r\n```\r\n##\r\n\r\nRegards, Alberto.\n", "before_files": [{"content": "# Copyright (c) 2012-2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSHelperFn, AWSObject, AWSProperty, If, FindInMap, Ref\nfrom .validators import boolean, integer\nfrom . 
import cloudformation\n\n\nEC2_INSTANCE_LAUNCH = \"autoscaling:EC2_INSTANCE_LAUNCH\"\nEC2_INSTANCE_LAUNCH_ERROR = \"autoscaling:EC2_INSTANCE_LAUNCH_ERROR\"\nEC2_INSTANCE_TERMINATE = \"autoscaling:EC2_INSTANCE_TERMINATE\"\nEC2_INSTANCE_TERMINATE_ERROR = \"autoscaling:EC2_INSTANCE_TERMINATE_ERROR\"\nTEST_NOTIFICATION = \"autoscaling:TEST_NOTIFICATION\"\n\n# Termination Policy constants\nDefault = 'Default'\nOldestInstance = 'OldestInstance'\nNewestInstance = 'NewestInstance'\nOldestLaunchConfiguration = 'OldestLaunchConfiguration'\nClosestToNextInstanceHour = 'ClosestToNextInstanceHour'\n\n\nclass Tag(AWSHelperFn):\n def __init__(self, key, value, propogate):\n self.data = {\n 'Key': key,\n 'Value': value,\n 'PropagateAtLaunch': propogate,\n }\n\n def JSONrepr(self):\n return self.data\n\n\nclass Tags(AWSHelperFn):\n defaultPropagateAtLaunch = True\n manyType = [type([]), type(())]\n\n def __init__(self, **kwargs):\n self.tags = []\n for k, v in sorted(kwargs.iteritems()):\n if type(v) in self.manyType:\n propagate = str(v[1]).lower()\n v = v[0]\n else:\n propagate = str(self.defaultPropagateAtLaunch).lower()\n self.tags.append({\n 'Key': k,\n 'Value': v,\n 'PropagateAtLaunch': propagate,\n })\n\n # append tags to list\n def __add__(self, newtags):\n newtags.tags = self.tags + newtags.tags\n return newtags\n\n def JSONrepr(self):\n return self.tags\n\n\nclass NotificationConfigurations(AWSProperty):\n props = {\n 'TopicARN': (basestring, True),\n 'NotificationTypes': (list, True),\n }\n\n\nclass MetricsCollection(AWSProperty):\n props = {\n 'Granularity': (basestring, True),\n 'Metrics': (list, False),\n }\n\n\nclass Metadata(AWSHelperFn):\n def __init__(self, init, authentication=None):\n self.validate(init, authentication)\n # get keys and values from init and authentication\n\n # if there's only one data point, then we know its the default\n # cfn-init; where the key is 'config'\n if len(init.data) == 1:\n initKey, initValue = init.data.popitem()\n self.data = {initKey: initValue}\n else:\n self.data = init.data\n\n if authentication:\n authKey, authValue = authentication.data.popitem()\n self.data[authKey] = authValue\n\n def validate(self, init, authentication):\n if not isinstance(init, cloudformation.Init):\n raise ValueError(\n 'init must be of type cloudformation.Init'\n )\n\n is_instance = isinstance(authentication, cloudformation.Authentication)\n if authentication and not is_instance:\n raise ValueError(\n 'authentication must be of type cloudformation.Authentication'\n )\n\n def JSONrepr(self):\n return self.data\n\n\nclass AutoScalingGroup(AWSObject):\n resource_type = \"AWS::AutoScaling::AutoScalingGroup\"\n\n props = {\n 'AvailabilityZones': (list, False),\n 'Cooldown': (integer, False),\n 'DesiredCapacity': (integer, False),\n 'HealthCheckGracePeriod': (integer, False),\n 'HealthCheckType': (basestring, False),\n 'InstanceId': (basestring, False),\n 'LaunchConfigurationName': (basestring, False),\n 'LoadBalancerNames': (list, False),\n 'MaxSize': (integer, True),\n 'MetricsCollection': ([MetricsCollection], False),\n 'MinSize': (integer, True),\n 'NotificationConfigurations': ([NotificationConfigurations], False),\n 'PlacementGroup': (basestring, False),\n 'Tags': (list, False),\n 'TargetGroupARNs': ([basestring], False),\n 'TerminationPolicies': ([basestring], False),\n 'VPCZoneIdentifier': (list, False),\n }\n\n def validate(self):\n if 'UpdatePolicy' in self.resource:\n update_policy = self.resource['UpdatePolicy']\n\n if (not isinstance(update_policy, AWSHelperFn) and\n 
'AutoScalingRollingUpdate' in update_policy.properties):\n rolling_update = update_policy.AutoScalingRollingUpdate\n\n isMinNoCheck = isinstance(\n rolling_update.MinInstancesInService,\n (FindInMap, Ref)\n )\n isMaxNoCheck = isinstance(self.MaxSize, (If, FindInMap, Ref))\n\n if not (isMinNoCheck or isMaxNoCheck):\n maxCount = int(self.MaxSize)\n minCount = int(rolling_update.MinInstancesInService)\n\n if minCount >= maxCount:\n raise ValueError(\n \"The UpdatePolicy attribute \"\n \"MinInstancesInService must be less than the \"\n \"autoscaling group's MaxSize\")\n\n launch_config = self.properties.get('LaunchConfigurationName')\n instance_id = self.properties.get('InstanceId')\n if launch_config and instance_id:\n raise ValueError(\"LaunchConfigurationName and InstanceId \"\n \"are mutually exclusive.\")\n if not launch_config and not instance_id:\n raise ValueError(\"Must specify either LaunchConfigurationName or \"\n \"InstanceId: http://docs.aws.amazon.com/AWSCloud\"\n \"Formation/latest/UserGuide/aws-properties-as-gr\"\n \"oup.html#cfn-as-group-instanceid\")\n\n availability_zones = self.properties.get('AvailabilityZones')\n vpc_zone_identifier = self.properties.get('VPCZoneIdentifier')\n if not availability_zones and not vpc_zone_identifier:\n raise ValueError(\"Must specify AvailabilityZones and/or \"\n \"VPCZoneIdentifier: http://docs.aws.amazon.com/A\"\n \"WSCloudFormation/latest/UserGuide/aws-propertie\"\n \"s-as-group.html#cfn-as-group-vpczoneidentifier\")\n return True\n\n\nclass LaunchConfiguration(AWSObject):\n resource_type = \"AWS::AutoScaling::LaunchConfiguration\"\n\n props = {\n 'AssociatePublicIpAddress': (boolean, False),\n 'BlockDeviceMappings': (list, False),\n 'ClassicLinkVPCId': (basestring, False),\n 'ClassicLinkVPCSecurityGroups': ([basestring], False),\n 'EbsOptimized': (boolean, False),\n 'IamInstanceProfile': (basestring, False),\n 'ImageId': (basestring, True),\n 'InstanceId': (basestring, False),\n 'InstanceMonitoring': (boolean, False),\n 'InstanceType': (basestring, True),\n 'KernelId': (basestring, False),\n 'KeyName': (basestring, False),\n 'Metadata': (Metadata, False),\n 'PlacementTenancy': (basestring, False),\n 'RamDiskId': (basestring, False),\n 'SecurityGroups': (list, False),\n 'SpotPrice': (basestring, False),\n 'UserData': (basestring, False),\n }\n\n\nclass StepAdjustments(AWSProperty):\n props = {\n 'MetricIntervalLowerBound': (integer, False),\n 'MetricIntervalUpperBound': (integer, False),\n 'ScalingAdjustment': (integer, True),\n }\n\n\nclass ScalingPolicy(AWSObject):\n resource_type = \"AWS::AutoScaling::ScalingPolicy\"\n\n props = {\n 'AdjustmentType': (basestring, True),\n 'AutoScalingGroupName': (basestring, True),\n 'Cooldown': (integer, False),\n 'EstimatedInstanceWarmup': (integer, False),\n 'MetricAggregationType': (basestring, False),\n 'MinAdjustmentMagnitude': (integer, False),\n 'PolicyType': (basestring, False),\n 'ScalingAdjustment': (integer, False),\n 'StepAdjustments': ([StepAdjustments], False),\n }\n\n\nclass ScheduledAction(AWSObject):\n resource_type = \"AWS::AutoScaling::ScheduledAction\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 'DesiredCapacity': (integer, False),\n 'EndTime': (basestring, False),\n 'MaxSize': (integer, False),\n 'MinSize': (integer, False),\n 'Recurrence': (basestring, False),\n 'StartTime': (basestring, False),\n }\n\n\nclass LifecycleHook(AWSObject):\n resource_type = \"AWS::AutoScaling::LifecycleHook\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 
'DefaultResult': (basestring, False),\n 'HeartbeatTimeout': (integer, False),\n 'LifecycleHookName': (basestring, False),\n 'LifecycleTransition': (basestring, True),\n 'NotificationMetadata': (basestring, False),\n 'NotificationTargetARN': (basestring, True),\n 'RoleARN': (basestring, True),\n }\n\n\nclass Trigger(AWSObject):\n resource_type = \"AWS::AutoScaling::Trigger\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 'BreachDuration': (integer, True),\n 'Dimensions': (list, True),\n 'LowerBreachScaleIncrement': (integer, False),\n 'LowerThreshold': (integer, True),\n 'MetricName': (basestring, True),\n 'Namespace': (basestring, True),\n 'Period': (integer, True),\n 'Statistic': (basestring, True),\n 'Unit': (basestring, False),\n 'UpperBreachScaleIncrement': (integer, False),\n 'UpperThreshold': (integer, True),\n }\n\n\nclass EBSBlockDevice(AWSProperty):\n # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-template.html\n props = {\n 'DeleteOnTermination': (boolean, False),\n 'Encrypted': (boolean, False),\n 'Iops': (integer, False),\n 'SnapshotId': (basestring, False),\n 'VolumeSize': (integer, False),\n 'VolumeType': (basestring, False),\n }\n\n\nclass BlockDeviceMapping(AWSProperty):\n # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-mapping.html\n props = {\n 'DeviceName': (basestring, True),\n 'Ebs': (EBSBlockDevice, False),\n 'NoDevice': (boolean, False),\n 'VirtualName': (basestring, False),\n }\n", "path": "troposphere/autoscaling.py"}], "after_files": [{"content": "# Copyright (c) 2012-2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSHelperFn, AWSObject, AWSProperty, If, FindInMap, Ref\nfrom .validators import boolean, integer\nfrom . 
import cloudformation\n\n\nEC2_INSTANCE_LAUNCH = \"autoscaling:EC2_INSTANCE_LAUNCH\"\nEC2_INSTANCE_LAUNCH_ERROR = \"autoscaling:EC2_INSTANCE_LAUNCH_ERROR\"\nEC2_INSTANCE_TERMINATE = \"autoscaling:EC2_INSTANCE_TERMINATE\"\nEC2_INSTANCE_TERMINATE_ERROR = \"autoscaling:EC2_INSTANCE_TERMINATE_ERROR\"\nTEST_NOTIFICATION = \"autoscaling:TEST_NOTIFICATION\"\n\n# Termination Policy constants\nDefault = 'Default'\nOldestInstance = 'OldestInstance'\nNewestInstance = 'NewestInstance'\nOldestLaunchConfiguration = 'OldestLaunchConfiguration'\nClosestToNextInstanceHour = 'ClosestToNextInstanceHour'\n\n\nclass Tag(AWSHelperFn):\n def __init__(self, key, value, propogate):\n self.data = {\n 'Key': key,\n 'Value': value,\n 'PropagateAtLaunch': propogate,\n }\n\n def JSONrepr(self):\n return self.data\n\n\nclass Tags(AWSHelperFn):\n defaultPropagateAtLaunch = True\n manyType = [type([]), type(())]\n\n def __init__(self, **kwargs):\n self.tags = []\n for k, v in sorted(kwargs.iteritems()):\n if type(v) in self.manyType:\n propagate = str(v[1]).lower()\n v = v[0]\n else:\n propagate = str(self.defaultPropagateAtLaunch).lower()\n self.tags.append({\n 'Key': k,\n 'Value': v,\n 'PropagateAtLaunch': propagate,\n })\n\n # append tags to list\n def __add__(self, newtags):\n newtags.tags = self.tags + newtags.tags\n return newtags\n\n def JSONrepr(self):\n return self.tags\n\n\nclass NotificationConfigurations(AWSProperty):\n props = {\n 'TopicARN': (basestring, True),\n 'NotificationTypes': (list, True),\n }\n\n\nclass MetricsCollection(AWSProperty):\n props = {\n 'Granularity': (basestring, True),\n 'Metrics': (list, False),\n }\n\n\nclass Metadata(AWSHelperFn):\n def __init__(self, init, authentication=None):\n self.validate(init, authentication)\n # get keys and values from init and authentication\n\n # if there's only one data point, then we know its the default\n # cfn-init; where the key is 'config'\n if len(init.data) == 1:\n initKey, initValue = init.data.popitem()\n self.data = {initKey: initValue}\n else:\n self.data = init.data\n\n if authentication:\n authKey, authValue = authentication.data.popitem()\n self.data[authKey] = authValue\n\n def validate(self, init, authentication):\n if not isinstance(init, cloudformation.Init):\n raise ValueError(\n 'init must be of type cloudformation.Init'\n )\n\n is_instance = isinstance(authentication, cloudformation.Authentication)\n if authentication and not is_instance:\n raise ValueError(\n 'authentication must be of type cloudformation.Authentication'\n )\n\n def JSONrepr(self):\n return self.data\n\n\nclass AutoScalingGroup(AWSObject):\n resource_type = \"AWS::AutoScaling::AutoScalingGroup\"\n\n props = {\n 'AvailabilityZones': (list, False),\n 'Cooldown': (integer, False),\n 'DesiredCapacity': (integer, False),\n 'HealthCheckGracePeriod': (integer, False),\n 'HealthCheckType': (basestring, False),\n 'InstanceId': (basestring, False),\n 'LaunchConfigurationName': (basestring, False),\n 'LoadBalancerNames': (list, False),\n 'MaxSize': (integer, True),\n 'MetricsCollection': ([MetricsCollection], False),\n 'MinSize': (integer, True),\n 'NotificationConfigurations': ([NotificationConfigurations], False),\n 'PlacementGroup': (basestring, False),\n 'Tags': (list, False),\n 'TargetGroupARNs': ([basestring], False),\n 'TerminationPolicies': ([basestring], False),\n 'VPCZoneIdentifier': (list, False),\n }\n\n def validate(self):\n if 'UpdatePolicy' in self.resource:\n update_policy = self.resource['UpdatePolicy']\n\n if (not isinstance(update_policy, AWSHelperFn) and\n 
'AutoScalingRollingUpdate' in update_policy.properties):\n if not isinstance(\n update_policy.AutoScalingRollingUpdate, AWSHelperFn):\n rolling_update = update_policy.AutoScalingRollingUpdate\n\n is_min_no_check = isinstance(\n rolling_update.MinInstancesInService,\n (FindInMap, Ref)\n )\n is_max_no_check = isinstance(self.MaxSize,\n (If, FindInMap, Ref))\n\n if not (is_min_no_check or is_max_no_check):\n max_count = int(self.MaxSize)\n min_count = int(rolling_update.MinInstancesInService)\n\n if min_count >= max_count:\n raise ValueError(\n \"The UpdatePolicy attribute \"\n \"MinInstancesInService must be less than the \"\n \"autoscaling group's MaxSize\")\n\n launch_config = self.properties.get('LaunchConfigurationName')\n instance_id = self.properties.get('InstanceId')\n if launch_config and instance_id:\n raise ValueError(\"LaunchConfigurationName and InstanceId \"\n \"are mutually exclusive.\")\n if not launch_config and not instance_id:\n raise ValueError(\"Must specify either LaunchConfigurationName or \"\n \"InstanceId: http://docs.aws.amazon.com/AWSCloud\"\n \"Formation/latest/UserGuide/aws-properties-as-gr\"\n \"oup.html#cfn-as-group-instanceid\")\n\n availability_zones = self.properties.get('AvailabilityZones')\n vpc_zone_identifier = self.properties.get('VPCZoneIdentifier')\n if not availability_zones and not vpc_zone_identifier:\n raise ValueError(\"Must specify AvailabilityZones and/or \"\n \"VPCZoneIdentifier: http://docs.aws.amazon.com/A\"\n \"WSCloudFormation/latest/UserGuide/aws-propertie\"\n \"s-as-group.html#cfn-as-group-vpczoneidentifier\")\n return True\n\n\nclass LaunchConfiguration(AWSObject):\n resource_type = \"AWS::AutoScaling::LaunchConfiguration\"\n\n props = {\n 'AssociatePublicIpAddress': (boolean, False),\n 'BlockDeviceMappings': (list, False),\n 'ClassicLinkVPCId': (basestring, False),\n 'ClassicLinkVPCSecurityGroups': ([basestring], False),\n 'EbsOptimized': (boolean, False),\n 'IamInstanceProfile': (basestring, False),\n 'ImageId': (basestring, True),\n 'InstanceId': (basestring, False),\n 'InstanceMonitoring': (boolean, False),\n 'InstanceType': (basestring, True),\n 'KernelId': (basestring, False),\n 'KeyName': (basestring, False),\n 'Metadata': (Metadata, False),\n 'PlacementTenancy': (basestring, False),\n 'RamDiskId': (basestring, False),\n 'SecurityGroups': (list, False),\n 'SpotPrice': (basestring, False),\n 'UserData': (basestring, False),\n }\n\n\nclass StepAdjustments(AWSProperty):\n props = {\n 'MetricIntervalLowerBound': (integer, False),\n 'MetricIntervalUpperBound': (integer, False),\n 'ScalingAdjustment': (integer, True),\n }\n\n\nclass ScalingPolicy(AWSObject):\n resource_type = \"AWS::AutoScaling::ScalingPolicy\"\n\n props = {\n 'AdjustmentType': (basestring, True),\n 'AutoScalingGroupName': (basestring, True),\n 'Cooldown': (integer, False),\n 'EstimatedInstanceWarmup': (integer, False),\n 'MetricAggregationType': (basestring, False),\n 'MinAdjustmentMagnitude': (integer, False),\n 'PolicyType': (basestring, False),\n 'ScalingAdjustment': (integer, False),\n 'StepAdjustments': ([StepAdjustments], False),\n }\n\n\nclass ScheduledAction(AWSObject):\n resource_type = \"AWS::AutoScaling::ScheduledAction\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 'DesiredCapacity': (integer, False),\n 'EndTime': (basestring, False),\n 'MaxSize': (integer, False),\n 'MinSize': (integer, False),\n 'Recurrence': (basestring, False),\n 'StartTime': (basestring, False),\n }\n\n\nclass LifecycleHook(AWSObject):\n resource_type = 
\"AWS::AutoScaling::LifecycleHook\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 'DefaultResult': (basestring, False),\n 'HeartbeatTimeout': (integer, False),\n 'LifecycleHookName': (basestring, False),\n 'LifecycleTransition': (basestring, True),\n 'NotificationMetadata': (basestring, False),\n 'NotificationTargetARN': (basestring, True),\n 'RoleARN': (basestring, True),\n }\n\n\nclass Trigger(AWSObject):\n resource_type = \"AWS::AutoScaling::Trigger\"\n\n props = {\n 'AutoScalingGroupName': (basestring, True),\n 'BreachDuration': (integer, True),\n 'Dimensions': (list, True),\n 'LowerBreachScaleIncrement': (integer, False),\n 'LowerThreshold': (integer, True),\n 'MetricName': (basestring, True),\n 'Namespace': (basestring, True),\n 'Period': (integer, True),\n 'Statistic': (basestring, True),\n 'Unit': (basestring, False),\n 'UpperBreachScaleIncrement': (integer, False),\n 'UpperThreshold': (integer, True),\n }\n\n\nclass EBSBlockDevice(AWSProperty):\n # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-template.html\n props = {\n 'DeleteOnTermination': (boolean, False),\n 'Encrypted': (boolean, False),\n 'Iops': (integer, False),\n 'SnapshotId': (basestring, False),\n 'VolumeSize': (integer, False),\n 'VolumeType': (basestring, False),\n }\n\n\nclass BlockDeviceMapping(AWSProperty):\n # http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-mapping.html\n props = {\n 'DeviceName': (basestring, True),\n 'Ebs': (EBSBlockDevice, False),\n 'NoDevice': (boolean, False),\n 'VirtualName': (basestring, False),\n }\n", "path": "troposphere/autoscaling.py"}]} | 3,804 | 431 |
gh_patches_debug_3836 | rasdani/github-patches | git_diff | StackStorm__st2-2925 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"st2 key delete" does not obey interface and populate user query params
st2 --debug key delete -s user -u stanley netdev_servicewrapper_address
2016-09-23 17:13:48,545 DEBUG - Using cached token from file "/home/vagrant/.st2/token-st2admin"
curl -X GET -H 'Connection: keep-alive' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: _/_' -H 'User-Agent: python-requests/2.11.1' -H 'X-Auth-Token: 235ca028e8e545efbd28806090ca3bd6' 'http://127.0.0.1:9101/v1/keys/netdev_servicewrapper_address?scope=user'
{
"faultstring": "KeyValuePair with name: st2admin:netdev_servicewrapper_address and scope: user not found in db."
}
--- END ISSUE ---
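To make the failure easier to see, here is a small illustration of the query string involved, built with the `requests` library directly rather than with st2client, so treat it as a sketch; the host, key name, and user come from the trace above, and the fallback-to-token-owner behaviour is inferred from the faultstring.

```python
# Build (but do not send) the GET the CLI issues today versus the one that
# --user stanley suggests it should issue. requests.Request(...).prepare()
# lets us inspect the resulting URL offline, without a running st2 API.
import requests

base = "http://127.0.0.1:9101/v1/keys/netdev_servicewrapper_address"

sent = requests.Request("GET", base, params={"scope": "user"}).prepare()
expected = requests.Request("GET", base, params={"scope": "user", "user": "stanley"}).prepare()

print(sent.url)      # ...?scope=user            -> server looks up st2admin:<name> and 404s
print(expected.url)  # ...?scope=user&user=stanley
```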
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2client/st2client/commands/keyvalue.py`
Content:
```
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import json
18 import logging
19 from os.path import join as pjoin
20
21 from st2client.commands import resource
22 from st2client.commands.noop import NoopCommand
23 from st2client.commands.resource import add_auth_token_to_kwargs_from_cli
24 from st2client.formatters import table
25 from st2client.models.keyvalue import KeyValuePair
26 from st2client.utils.date import format_isodate_for_user_timezone
27
28 LOG = logging.getLogger(__name__)
29
30 DEFAULT_SCOPE = 'system'
31
32
33 class KeyValuePairBranch(resource.ResourceBranch):
34
35 def __init__(self, description, app, subparsers, parent_parser=None):
36 super(KeyValuePairBranch, self).__init__(
37 KeyValuePair, description, app, subparsers,
38 parent_parser=parent_parser,
39 commands={
40 'list': KeyValuePairListCommand,
41 'get': KeyValuePairGetCommand,
42 'delete': KeyValuePairDeleteCommand,
43 'create': NoopCommand,
44 'update': NoopCommand
45 })
46
47 # Registers extended commands
48 self.commands['set'] = KeyValuePairSetCommand(self.resource, self.app,
49 self.subparsers)
50 self.commands['load'] = KeyValuePairLoadCommand(
51 self.resource, self.app, self.subparsers)
52 self.commands['delete_by_prefix'] = KeyValuePairDeleteByPrefixCommand(
53 self.resource, self.app, self.subparsers)
54
55 # Remove unsupported commands
56 # TODO: Refactor parent class and make it nicer
57 del self.commands['create']
58 del self.commands['update']
59
60
61 class KeyValuePairListCommand(resource.ResourceListCommand):
62 display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'user',
63 'expire_timestamp']
64 attribute_transform_functions = {
65 'expire_timestamp': format_isodate_for_user_timezone,
66 }
67
68 def __init__(self, *args, **kwargs):
69 super(KeyValuePairListCommand, self).__init__(*args, **kwargs)
70
71 # Filter options
72 self.parser.add_argument('--prefix', help=('Only return values which name starts with the '
73 ' provided prefix.'))
74 self.parser.add_argument('--decrypt', action='store_true',
75 help='Decrypt secrets and display plain text.')
76 self.parser.add_argument('-s', '--scope', default='system', dest='scope',
77 help='Scope item is under. Example: "user".')
78 self.parser.add_argument('-u', '--user', dest='user', default=None,
79 help='User for user scoped items (admin only).')
80
81 def run_and_print(self, args, **kwargs):
82 if args.prefix:
83 kwargs['prefix'] = args.prefix
84
85 decrypt = getattr(args, 'decrypt', False)
86 kwargs['params'] = {'decrypt': str(decrypt).lower()}
87 scope = getattr(args, 'scope', DEFAULT_SCOPE)
88 kwargs['params']['scope'] = scope
89 kwargs['params']['user'] = args.user
90
91 instances = self.run(args, **kwargs)
92 self.print_output(reversed(instances), table.MultiColumnTable,
93 attributes=args.attr, widths=args.width,
94 json=args.json,
95 yaml=args.yaml,
96 attribute_transform_functions=self.attribute_transform_functions)
97
98
99 class KeyValuePairGetCommand(resource.ResourceGetCommand):
100 pk_argument_name = 'name'
101 display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'expire_timestamp']
102
103 def __init__(self, kv_resource, *args, **kwargs):
104 super(KeyValuePairGetCommand, self).__init__(kv_resource, *args, **kwargs)
105 self.parser.add_argument('-d', '--decrypt', action='store_true',
106 help='Decrypt secret if encrypted and show plain text.')
107 self.parser.add_argument('-s', '--scope', default=DEFAULT_SCOPE, dest='scope',
108 help='Scope item is under. Example: "user".')
109
110 @resource.add_auth_token_to_kwargs_from_cli
111 def run(self, args, **kwargs):
112 resource_name = getattr(args, self.pk_argument_name, None)
113 decrypt = getattr(args, 'decrypt', False)
114 scope = getattr(args, 'scope', DEFAULT_SCOPE)
115 kwargs['params'] = {'decrypt': str(decrypt).lower()}
116 kwargs['params']['scope'] = scope
117 return self.get_resource_by_id(id=resource_name, **kwargs)
118
119
120 class KeyValuePairSetCommand(resource.ResourceCommand):
121 display_attributes = ['name', 'value', 'expire_timestamp']
122
123 def __init__(self, resource, *args, **kwargs):
124 super(KeyValuePairSetCommand, self).__init__(
125 resource, 'set',
126 'Set an existing %s.' % resource.get_display_name().lower(),
127 *args, **kwargs
128 )
129
130 self.parser.add_argument('name',
131 metavar='name',
132 help='Name of the key value pair.')
133 self.parser.add_argument('value', help='Value paired with the key.')
134 self.parser.add_argument('-l', '--ttl', dest='ttl', type=int, default=None,
135 help='TTL (in seconds) for this value.')
136 self.parser.add_argument('-e', '--encrypt', dest='secret',
137 action='store_true',
138 help='Encrypt value before saving the value.')
139 self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,
140 help='Specify the scope under which you want ' +
141 'to place the item.')
142 self.parser.add_argument('-u', '--user', dest='user', default=None,
143 help='User for user scoped items (admin only).')
144
145 @add_auth_token_to_kwargs_from_cli
146 def run(self, args, **kwargs):
147 instance = KeyValuePair()
148 instance.id = args.name # TODO: refactor and get rid of id
149 instance.name = args.name
150 instance.value = args.value
151 instance.scope = args.scope
152 instance.user = args.user
153
154 if args.secret:
155 instance.secret = args.secret
156
157 if args.ttl:
158 instance.ttl = args.ttl
159
160 return self.manager.update(instance, **kwargs)
161
162 def run_and_print(self, args, **kwargs):
163 instance = self.run(args, **kwargs)
164 self.print_output(instance, table.PropertyValueTable,
165 attributes=self.display_attributes, json=args.json,
166 yaml=args.yaml)
167
168
169 class KeyValuePairDeleteCommand(resource.ResourceDeleteCommand):
170 pk_argument_name = 'name'
171
172 def __init__(self, resource, *args, **kwargs):
173 super(KeyValuePairDeleteCommand, self).__init__(resource, *args, **kwargs)
174
175 self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,
176 help='Specify the scope under which you want ' +
177 'to place the item.')
178 self.parser.add_argument('-u', '--user', dest='user', default=None,
179 help='User for user scoped items (admin only).')
180
181 @add_auth_token_to_kwargs_from_cli
182 def run(self, args, **kwargs):
183 resource_id = getattr(args, self.pk_argument_name, None)
184 scope = getattr(args, 'scope', DEFAULT_SCOPE)
185 kwargs['params'] = {}
186 kwargs['params']['scope'] = scope
187 instance = self.get_resource(resource_id, **kwargs)
188
189 if not instance:
190 raise resource.ResourceNotFoundError('KeyValuePair with id "%s" not found', resource_id)
191
192 instance.id = resource_id # TODO: refactor and get rid of id
193 self.manager.delete(instance, **kwargs)
194
195
196 class KeyValuePairDeleteByPrefixCommand(resource.ResourceCommand):
197 """
198 Commands which delete all the key value pairs which match the provided
199 prefix.
200 """
201 def __init__(self, resource, *args, **kwargs):
202 super(KeyValuePairDeleteByPrefixCommand, self).__init__(resource, 'delete_by_prefix',
203 'Delete KeyValue pairs which match the provided prefix', *args, **kwargs)
204
205 self.parser.add_argument('-p', '--prefix', required=True,
206 help='Name prefix (e.g. twitter.TwitterSensor:)')
207
208 @add_auth_token_to_kwargs_from_cli
209 def run(self, args, **kwargs):
210 prefix = args.prefix
211 key_pairs = self.manager.get_all(prefix=prefix)
212
213 to_delete = []
214 for key_pair in key_pairs:
215 key_pair.id = key_pair.name
216 to_delete.append(key_pair)
217
218 deleted = []
219 for key_pair in to_delete:
220 self.manager.delete(instance=key_pair, **kwargs)
221 deleted.append(key_pair)
222
223 return deleted
224
225 def run_and_print(self, args, **kwargs):
226 # TODO: Need to use args, instead of kwargs (args=) because of bad API
227 # FIX ME
228 deleted = self.run(args, **kwargs)
229 key_ids = [key_pair.id for key_pair in deleted]
230
231 print('Deleted %s keys' % (len(deleted)))
232 print('Deleted key ids: %s' % (', '.join(key_ids)))
233
234
235 class KeyValuePairLoadCommand(resource.ResourceCommand):
236 pk_argument_name = 'name'
237 display_attributes = ['name', 'value']
238
239 def __init__(self, resource, *args, **kwargs):
240 help_text = ('Load a list of %s from file.' %
241 resource.get_plural_display_name().lower())
242 super(KeyValuePairLoadCommand, self).__init__(resource, 'load',
243 help_text, *args, **kwargs)
244
245 self.parser.add_argument(
246 'file', help=('JSON file containing the %s to create.'
247 % resource.get_plural_display_name().lower()))
248
249 @add_auth_token_to_kwargs_from_cli
250 def run(self, args, **kwargs):
251 file_path = os.path.normpath(pjoin(os.getcwd(), args.file))
252
253 if not os.path.exists(args.file):
254 raise ValueError('File "%s" doesn\'t exist' % (file_path))
255
256 if not os.path.isfile(args.file):
257 raise ValueError('"%s" is not a file' % (file_path))
258
259 with open(file_path, 'r') as f:
260 kvps = json.loads(f.read())
261
262 instances = []
263 for item in kvps:
264 name = item['name']
265 value = item['value']
266
267 instance = KeyValuePair()
268 instance.id = name # TODO: refactor and get rid of id
269 instance.name = name
270 instance.value = value
271
272 self.manager.update(instance, **kwargs)
273 instances.append(instance)
274 return instances
275
276 def run_and_print(self, args, **kwargs):
277 instances = self.run(args, **kwargs)
278 self.print_output(instances, table.MultiColumnTable,
279 attributes=['id', 'name', 'value'], json=args.json, yaml=args.yaml)
280
```
--- END FILES ---
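Note the asymmetry in the file above: `KeyValuePairListCommand` and `KeyValuePairSetCommand` both forward `args.user`, while `KeyValuePairDeleteCommand.run` only populates `scope`. The snippet below is a rough, hypothetical model of how the API appears to resolve user-scoped key names, inferred solely from the faultstring in the issue; it is not StackStorm server code.

```python
# When no ?user= is supplied, the lookup seems to fall back to the
# authenticated user, which is why the error mentions "st2admin:..." even
# though the CLI was invoked with --user stanley.
def resolve_key_name(name, scope, requesting_user, user_param=None):
    if scope == "user":
        owner = user_param or requesting_user
        return "%s:%s" % (owner, name)
    return name

print(resolve_key_name("netdev_servicewrapper_address", "user", "st2admin"))
# st2admin:netdev_servicewrapper_address
print(resolve_key_name("netdev_servicewrapper_address", "user", "st2admin", "stanley"))
# stanley:netdev_servicewrapper_address
```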
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/st2client/st2client/commands/keyvalue.py b/st2client/st2client/commands/keyvalue.py
--- a/st2client/st2client/commands/keyvalue.py
+++ b/st2client/st2client/commands/keyvalue.py
@@ -184,6 +184,7 @@
scope = getattr(args, 'scope', DEFAULT_SCOPE)
kwargs['params'] = {}
kwargs['params']['scope'] = scope
+ kwargs['params']['user'] = args.user
instance = self.get_resource(resource_id, **kwargs)
if not instance:
| {"golden_diff": "diff --git a/st2client/st2client/commands/keyvalue.py b/st2client/st2client/commands/keyvalue.py\n--- a/st2client/st2client/commands/keyvalue.py\n+++ b/st2client/st2client/commands/keyvalue.py\n@@ -184,6 +184,7 @@\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params'] = {}\n kwargs['params']['scope'] = scope\n+ kwargs['params']['user'] = args.user\n instance = self.get_resource(resource_id, **kwargs)\n \n if not instance:\n", "issue": "\"st2 key delete\" does not obey interface and populate user query params\nst2 --debug key delete -s user -u stanley netdev_servicewrapper_address\n2016-09-23 17:13:48,545 DEBUG - Using cached token from file \"/home/vagrant/.st2/token-st2admin\"\n\ncurl -X GET -H 'Connection: keep-alive' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: _/_' -H 'User-Agent: python-requests/2.11.1' -H 'X-Auth-Token: 235ca028e8e545efbd28806090ca3bd6' 'http://127.0.0.1:9101/v1/keys/netdev_servicewrapper_address?scope=user'\n\n{\n \"faultstring\": \"KeyValuePair with name: st2admin:netdev_servicewrapper_address and scope: user not found in db.\"\n}\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport json\nimport logging\nfrom os.path import join as pjoin\n\nfrom st2client.commands import resource\nfrom st2client.commands.noop import NoopCommand\nfrom st2client.commands.resource import add_auth_token_to_kwargs_from_cli\nfrom st2client.formatters import table\nfrom st2client.models.keyvalue import KeyValuePair\nfrom st2client.utils.date import format_isodate_for_user_timezone\n\nLOG = logging.getLogger(__name__)\n\nDEFAULT_SCOPE = 'system'\n\n\nclass KeyValuePairBranch(resource.ResourceBranch):\n\n def __init__(self, description, app, subparsers, parent_parser=None):\n super(KeyValuePairBranch, self).__init__(\n KeyValuePair, description, app, subparsers,\n parent_parser=parent_parser,\n commands={\n 'list': KeyValuePairListCommand,\n 'get': KeyValuePairGetCommand,\n 'delete': KeyValuePairDeleteCommand,\n 'create': NoopCommand,\n 'update': NoopCommand\n })\n\n # Registers extended commands\n self.commands['set'] = KeyValuePairSetCommand(self.resource, self.app,\n self.subparsers)\n self.commands['load'] = KeyValuePairLoadCommand(\n self.resource, self.app, self.subparsers)\n self.commands['delete_by_prefix'] = KeyValuePairDeleteByPrefixCommand(\n self.resource, self.app, self.subparsers)\n\n # Remove unsupported commands\n # TODO: Refactor parent class and make it nicer\n del self.commands['create']\n del self.commands['update']\n\n\nclass KeyValuePairListCommand(resource.ResourceListCommand):\n display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'user',\n 'expire_timestamp']\n attribute_transform_functions = {\n 'expire_timestamp': 
format_isodate_for_user_timezone,\n }\n\n def __init__(self, *args, **kwargs):\n super(KeyValuePairListCommand, self).__init__(*args, **kwargs)\n\n # Filter options\n self.parser.add_argument('--prefix', help=('Only return values which name starts with the '\n ' provided prefix.'))\n self.parser.add_argument('--decrypt', action='store_true',\n help='Decrypt secrets and display plain text.')\n self.parser.add_argument('-s', '--scope', default='system', dest='scope',\n help='Scope item is under. Example: \"user\".')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n def run_and_print(self, args, **kwargs):\n if args.prefix:\n kwargs['prefix'] = args.prefix\n\n decrypt = getattr(args, 'decrypt', False)\n kwargs['params'] = {'decrypt': str(decrypt).lower()}\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params']['scope'] = scope\n kwargs['params']['user'] = args.user\n\n instances = self.run(args, **kwargs)\n self.print_output(reversed(instances), table.MultiColumnTable,\n attributes=args.attr, widths=args.width,\n json=args.json,\n yaml=args.yaml,\n attribute_transform_functions=self.attribute_transform_functions)\n\n\nclass KeyValuePairGetCommand(resource.ResourceGetCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'expire_timestamp']\n\n def __init__(self, kv_resource, *args, **kwargs):\n super(KeyValuePairGetCommand, self).__init__(kv_resource, *args, **kwargs)\n self.parser.add_argument('-d', '--decrypt', action='store_true',\n help='Decrypt secret if encrypted and show plain text.')\n self.parser.add_argument('-s', '--scope', default=DEFAULT_SCOPE, dest='scope',\n help='Scope item is under. Example: \"user\".')\n\n @resource.add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n resource_name = getattr(args, self.pk_argument_name, None)\n decrypt = getattr(args, 'decrypt', False)\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params'] = {'decrypt': str(decrypt).lower()}\n kwargs['params']['scope'] = scope\n return self.get_resource_by_id(id=resource_name, **kwargs)\n\n\nclass KeyValuePairSetCommand(resource.ResourceCommand):\n display_attributes = ['name', 'value', 'expire_timestamp']\n\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairSetCommand, self).__init__(\n resource, 'set',\n 'Set an existing %s.' 
% resource.get_display_name().lower(),\n *args, **kwargs\n )\n\n self.parser.add_argument('name',\n metavar='name',\n help='Name of the key value pair.')\n self.parser.add_argument('value', help='Value paired with the key.')\n self.parser.add_argument('-l', '--ttl', dest='ttl', type=int, default=None,\n help='TTL (in seconds) for this value.')\n self.parser.add_argument('-e', '--encrypt', dest='secret',\n action='store_true',\n help='Encrypt value before saving the value.')\n self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,\n help='Specify the scope under which you want ' +\n 'to place the item.')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n instance = KeyValuePair()\n instance.id = args.name # TODO: refactor and get rid of id\n instance.name = args.name\n instance.value = args.value\n instance.scope = args.scope\n instance.user = args.user\n\n if args.secret:\n instance.secret = args.secret\n\n if args.ttl:\n instance.ttl = args.ttl\n\n return self.manager.update(instance, **kwargs)\n\n def run_and_print(self, args, **kwargs):\n instance = self.run(args, **kwargs)\n self.print_output(instance, table.PropertyValueTable,\n attributes=self.display_attributes, json=args.json,\n yaml=args.yaml)\n\n\nclass KeyValuePairDeleteCommand(resource.ResourceDeleteCommand):\n pk_argument_name = 'name'\n\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairDeleteCommand, self).__init__(resource, *args, **kwargs)\n\n self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,\n help='Specify the scope under which you want ' +\n 'to place the item.')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n resource_id = getattr(args, self.pk_argument_name, None)\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params'] = {}\n kwargs['params']['scope'] = scope\n instance = self.get_resource(resource_id, **kwargs)\n\n if not instance:\n raise resource.ResourceNotFoundError('KeyValuePair with id \"%s\" not found', resource_id)\n\n instance.id = resource_id # TODO: refactor and get rid of id\n self.manager.delete(instance, **kwargs)\n\n\nclass KeyValuePairDeleteByPrefixCommand(resource.ResourceCommand):\n \"\"\"\n Commands which delete all the key value pairs which match the provided\n prefix.\n \"\"\"\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairDeleteByPrefixCommand, self).__init__(resource, 'delete_by_prefix',\n 'Delete KeyValue pairs which match the provided prefix', *args, **kwargs)\n\n self.parser.add_argument('-p', '--prefix', required=True,\n help='Name prefix (e.g. 
twitter.TwitterSensor:)')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n prefix = args.prefix\n key_pairs = self.manager.get_all(prefix=prefix)\n\n to_delete = []\n for key_pair in key_pairs:\n key_pair.id = key_pair.name\n to_delete.append(key_pair)\n\n deleted = []\n for key_pair in to_delete:\n self.manager.delete(instance=key_pair, **kwargs)\n deleted.append(key_pair)\n\n return deleted\n\n def run_and_print(self, args, **kwargs):\n # TODO: Need to use args, instead of kwargs (args=) because of bad API\n # FIX ME\n deleted = self.run(args, **kwargs)\n key_ids = [key_pair.id for key_pair in deleted]\n\n print('Deleted %s keys' % (len(deleted)))\n print('Deleted key ids: %s' % (', '.join(key_ids)))\n\n\nclass KeyValuePairLoadCommand(resource.ResourceCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value']\n\n def __init__(self, resource, *args, **kwargs):\n help_text = ('Load a list of %s from file.' %\n resource.get_plural_display_name().lower())\n super(KeyValuePairLoadCommand, self).__init__(resource, 'load',\n help_text, *args, **kwargs)\n\n self.parser.add_argument(\n 'file', help=('JSON file containing the %s to create.'\n % resource.get_plural_display_name().lower()))\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n file_path = os.path.normpath(pjoin(os.getcwd(), args.file))\n\n if not os.path.exists(args.file):\n raise ValueError('File \"%s\" doesn\\'t exist' % (file_path))\n\n if not os.path.isfile(args.file):\n raise ValueError('\"%s\" is not a file' % (file_path))\n\n with open(file_path, 'r') as f:\n kvps = json.loads(f.read())\n\n instances = []\n for item in kvps:\n name = item['name']\n value = item['value']\n\n instance = KeyValuePair()\n instance.id = name # TODO: refactor and get rid of id\n instance.name = name\n instance.value = value\n\n self.manager.update(instance, **kwargs)\n instances.append(instance)\n return instances\n\n def run_and_print(self, args, **kwargs):\n instances = self.run(args, **kwargs)\n self.print_output(instances, table.MultiColumnTable,\n attributes=['id', 'name', 'value'], json=args.json, yaml=args.yaml)\n", "path": "st2client/st2client/commands/keyvalue.py"}], "after_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport json\nimport logging\nfrom os.path import join as pjoin\n\nfrom st2client.commands import resource\nfrom st2client.commands.noop import NoopCommand\nfrom st2client.commands.resource import add_auth_token_to_kwargs_from_cli\nfrom st2client.formatters import table\nfrom st2client.models.keyvalue import KeyValuePair\nfrom st2client.utils.date import format_isodate_for_user_timezone\n\nLOG = logging.getLogger(__name__)\n\nDEFAULT_SCOPE = 'system'\n\n\nclass KeyValuePairBranch(resource.ResourceBranch):\n\n def __init__(self, description, app, subparsers, parent_parser=None):\n super(KeyValuePairBranch, self).__init__(\n KeyValuePair, description, app, subparsers,\n parent_parser=parent_parser,\n commands={\n 'list': KeyValuePairListCommand,\n 'get': KeyValuePairGetCommand,\n 'delete': KeyValuePairDeleteCommand,\n 'create': NoopCommand,\n 'update': NoopCommand\n })\n\n # Registers extended commands\n self.commands['set'] = KeyValuePairSetCommand(self.resource, self.app,\n self.subparsers)\n self.commands['load'] = KeyValuePairLoadCommand(\n self.resource, self.app, self.subparsers)\n self.commands['delete_by_prefix'] = KeyValuePairDeleteByPrefixCommand(\n self.resource, self.app, self.subparsers)\n\n # Remove unsupported commands\n # TODO: Refactor parent class and make it nicer\n del self.commands['create']\n del self.commands['update']\n\n\nclass KeyValuePairListCommand(resource.ResourceListCommand):\n display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'user',\n 'expire_timestamp']\n attribute_transform_functions = {\n 'expire_timestamp': format_isodate_for_user_timezone,\n }\n\n def __init__(self, *args, **kwargs):\n super(KeyValuePairListCommand, self).__init__(*args, **kwargs)\n\n # Filter options\n self.parser.add_argument('--prefix', help=('Only return values which name starts with the '\n ' provided prefix.'))\n self.parser.add_argument('--decrypt', action='store_true',\n help='Decrypt secrets and display plain text.')\n self.parser.add_argument('-s', '--scope', default='system', dest='scope',\n help='Scope item is under. 
Example: \"user\".')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n def run_and_print(self, args, **kwargs):\n if args.prefix:\n kwargs['prefix'] = args.prefix\n\n decrypt = getattr(args, 'decrypt', False)\n kwargs['params'] = {'decrypt': str(decrypt).lower()}\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params']['scope'] = scope\n kwargs['params']['user'] = args.user\n\n instances = self.run(args, **kwargs)\n self.print_output(reversed(instances), table.MultiColumnTable,\n attributes=args.attr, widths=args.width,\n json=args.json,\n yaml=args.yaml,\n attribute_transform_functions=self.attribute_transform_functions)\n\n\nclass KeyValuePairGetCommand(resource.ResourceGetCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value', 'secret', 'encrypted', 'scope', 'expire_timestamp']\n\n def __init__(self, kv_resource, *args, **kwargs):\n super(KeyValuePairGetCommand, self).__init__(kv_resource, *args, **kwargs)\n self.parser.add_argument('-d', '--decrypt', action='store_true',\n help='Decrypt secret if encrypted and show plain text.')\n self.parser.add_argument('-s', '--scope', default=DEFAULT_SCOPE, dest='scope',\n help='Scope item is under. Example: \"user\".')\n\n @resource.add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n resource_name = getattr(args, self.pk_argument_name, None)\n decrypt = getattr(args, 'decrypt', False)\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params'] = {'decrypt': str(decrypt).lower()}\n kwargs['params']['scope'] = scope\n return self.get_resource_by_id(id=resource_name, **kwargs)\n\n\nclass KeyValuePairSetCommand(resource.ResourceCommand):\n display_attributes = ['name', 'value', 'expire_timestamp']\n\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairSetCommand, self).__init__(\n resource, 'set',\n 'Set an existing %s.' 
% resource.get_display_name().lower(),\n *args, **kwargs\n )\n\n self.parser.add_argument('name',\n metavar='name',\n help='Name of the key value pair.')\n self.parser.add_argument('value', help='Value paired with the key.')\n self.parser.add_argument('-l', '--ttl', dest='ttl', type=int, default=None,\n help='TTL (in seconds) for this value.')\n self.parser.add_argument('-e', '--encrypt', dest='secret',\n action='store_true',\n help='Encrypt value before saving the value.')\n self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,\n help='Specify the scope under which you want ' +\n 'to place the item.')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n instance = KeyValuePair()\n instance.id = args.name # TODO: refactor and get rid of id\n instance.name = args.name\n instance.value = args.value\n instance.scope = args.scope\n instance.user = args.user\n\n if args.secret:\n instance.secret = args.secret\n\n if args.ttl:\n instance.ttl = args.ttl\n\n return self.manager.update(instance, **kwargs)\n\n def run_and_print(self, args, **kwargs):\n instance = self.run(args, **kwargs)\n self.print_output(instance, table.PropertyValueTable,\n attributes=self.display_attributes, json=args.json,\n yaml=args.yaml)\n\n\nclass KeyValuePairDeleteCommand(resource.ResourceDeleteCommand):\n pk_argument_name = 'name'\n\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairDeleteCommand, self).__init__(resource, *args, **kwargs)\n\n self.parser.add_argument('-s', '--scope', dest='scope', default=DEFAULT_SCOPE,\n help='Specify the scope under which you want ' +\n 'to place the item.')\n self.parser.add_argument('-u', '--user', dest='user', default=None,\n help='User for user scoped items (admin only).')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n resource_id = getattr(args, self.pk_argument_name, None)\n scope = getattr(args, 'scope', DEFAULT_SCOPE)\n kwargs['params'] = {}\n kwargs['params']['scope'] = scope\n kwargs['params']['user'] = args.user\n instance = self.get_resource(resource_id, **kwargs)\n\n if not instance:\n raise resource.ResourceNotFoundError('KeyValuePair with id \"%s\" not found', resource_id)\n\n instance.id = resource_id # TODO: refactor and get rid of id\n self.manager.delete(instance, **kwargs)\n\n\nclass KeyValuePairDeleteByPrefixCommand(resource.ResourceCommand):\n \"\"\"\n Commands which delete all the key value pairs which match the provided\n prefix.\n \"\"\"\n def __init__(self, resource, *args, **kwargs):\n super(KeyValuePairDeleteByPrefixCommand, self).__init__(resource, 'delete_by_prefix',\n 'Delete KeyValue pairs which match the provided prefix', *args, **kwargs)\n\n self.parser.add_argument('-p', '--prefix', required=True,\n help='Name prefix (e.g. 
twitter.TwitterSensor:)')\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n prefix = args.prefix\n key_pairs = self.manager.get_all(prefix=prefix)\n\n to_delete = []\n for key_pair in key_pairs:\n key_pair.id = key_pair.name\n to_delete.append(key_pair)\n\n deleted = []\n for key_pair in to_delete:\n self.manager.delete(instance=key_pair, **kwargs)\n deleted.append(key_pair)\n\n return deleted\n\n def run_and_print(self, args, **kwargs):\n # TODO: Need to use args, instead of kwargs (args=) because of bad API\n # FIX ME\n deleted = self.run(args, **kwargs)\n key_ids = [key_pair.id for key_pair in deleted]\n\n print('Deleted %s keys' % (len(deleted)))\n print('Deleted key ids: %s' % (', '.join(key_ids)))\n\n\nclass KeyValuePairLoadCommand(resource.ResourceCommand):\n pk_argument_name = 'name'\n display_attributes = ['name', 'value']\n\n def __init__(self, resource, *args, **kwargs):\n help_text = ('Load a list of %s from file.' %\n resource.get_plural_display_name().lower())\n super(KeyValuePairLoadCommand, self).__init__(resource, 'load',\n help_text, *args, **kwargs)\n\n self.parser.add_argument(\n 'file', help=('JSON file containing the %s to create.'\n % resource.get_plural_display_name().lower()))\n\n @add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n file_path = os.path.normpath(pjoin(os.getcwd(), args.file))\n\n if not os.path.exists(args.file):\n raise ValueError('File \"%s\" doesn\\'t exist' % (file_path))\n\n if not os.path.isfile(args.file):\n raise ValueError('\"%s\" is not a file' % (file_path))\n\n with open(file_path, 'r') as f:\n kvps = json.loads(f.read())\n\n instances = []\n for item in kvps:\n name = item['name']\n value = item['value']\n\n instance = KeyValuePair()\n instance.id = name # TODO: refactor and get rid of id\n instance.name = name\n instance.value = value\n\n self.manager.update(instance, **kwargs)\n instances.append(instance)\n return instances\n\n def run_and_print(self, args, **kwargs):\n instances = self.run(args, **kwargs)\n self.print_output(instances, table.MultiColumnTable,\n attributes=['id', 'name', 'value'], json=args.json, yaml=args.yaml)\n", "path": "st2client/st2client/commands/keyvalue.py"}]} | 3,657 | 129 |
gh_patches_debug_34374 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-9297 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set Athena Workgroup Encryption also tries to change a read-only field
### Describe the bug
Hi, we are currently trying to set the encryption for our primary athena workgroups that are unencrypted. The policy looks like this:
```yaml
- name: set-athena-workgroup-encryption
resource: awscc.athena_workgroup
filters:
- type: value
key: Name
value: "primary"
- type: value
key: "WorkGroupConfiguration.ResultConfiguration.EncryptionConfiguration.EncryptionOption"
value: absent
actions:
- type: update
WorkGroupConfiguration:
EnforceWorkGroupConfiguration: true
- type: update
WorkGroupConfiguration:
ResultConfiguration:
EncryptionConfiguration:
EncryptionOption: SSE_S3
```
When executing this policy we get this error though:
```
2024-02-15 09:59:25,626: custodian.commands:ERROR Error while executing policy set-athena-workgroup-encryption, continuing
Traceback (most recent call last):
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/commands.py", line 307, in run
policy()
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py", line 1357, in __call__
resources = mode.run()
^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py", line 364, in run
results = a.process(resources)
^^^^^^^^^^^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n_awscc/actions.py", line 43, in process
client.update_resource(
File "/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateResource operation: Invalid patch update: readOnlyProperties [/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion] cannot be updated
2024-02-15 09:59:25,627: custodian.commands:ERROR The following policies had errors while executing
- set-athena-workgroup-encryption
```
But we don't want to update the engine version itself.
### What did you expect to happen?
We expected the policy to update the encryption setting and not touch the engine version, because the attribute was not specified in our policy
### Cloud Provider
Amazon Web Services (AWS)
### Cloud Custodian version and dependency information
```shell
Custodian: 0.9.34
Python: 3.11.4 (main, Dec 7 2023, 15:43:41) [GCC 12.3.0]
Platform: posix.uname_result(sysname='Linux', nodename='marcel', release='6.2.0-39-generic', version='#40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023', machine='x86_64')
Using venv: True
Docker: False
Installed:
argcomplete==3.2.1
attrs==23.2.0
boto3==1.34.21
botocore==1.34.21
docutils==0.18.1
importlib-metadata==6.11.0
jmespath==1.0.1
jsonschema==4.21.0
jsonschema-specifications==2023.12.1
python-dateutil==2.8.2
pyyaml==6.0.1
referencing==0.31.1
rpds-py==0.17.1
s3transfer==0.10.0
six==1.16.0
tabulate==0.9.0
urllib3==1.26.18
zipp==3.17.0
```
### Policy
```shell
- name: set-athena-workgroup-encryption
resource: awscc.athena_workgroup
filters:
- type: value
key: Name
value: "primary"
- type: value
key: "WorkGroupConfiguration.ResultConfiguration.EncryptionConfiguration.EncryptionOption"
value: absent
actions:
- type: update
WorkGroupConfiguration:
EnforceWorkGroupConfiguration: true
- type: update
WorkGroupConfiguration:
ResultConfiguration:
EncryptionConfiguration:
EncryptionOption: SSE_S3
```
### Relevant log/traceback output
```shell
2024-02-15 09:59:25,626: custodian.commands:ERROR Error while executing policy set-athena-workgroup-encryption, continuing
Traceback (most recent call last):
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/commands.py", line 307, in run
policy()
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py", line 1357, in __call__
resources = mode.run()
^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py", line 364, in run
results = a.process(resources)
^^^^^^^^^^^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/c7n_awscc/actions.py", line 43, in process
client.update_resource(
File "/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateResource operation: Invalid patch update: readOnlyProperties [/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion] cannot be updated
2024-02-15 09:59:25,627: custodian.commands:ERROR The following policies had errors while executing
- set-athena-workgroup-encryption
```
### Extra information or context
We tried to use the update attributes like this:
```yaml
- type: update
WorkGroupConfigurationUpdates:
EnforceWorkGroupConfiguration: true
ResultConfigurationUpdates:
EncryptionConfiguration:
EncryptionOption: SSE_S3
```
but there is currently a bug in AWS that resets the workgroup right after the operation. We are in communication with AWS Support about it, but in the meantime we tried to make it work with the approach described above.
--- END ISSUE ---
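To see where the invalid patch comes from, the snippet below reproduces what `Update.get_patch` (shown in the files that follow) does with the `jsonpatch` library, using a simplified, hypothetical workgroup state; the real resource returned by Cloud Control carries more fields, but the shape of the problem is the same.

```python
# The action data replaces the whole WorkGroupConfiguration value, so the
# generated patch also removes/replaces sibling keys the policy never
# mentioned, including the EngineVersion subtree whose EffectiveEngineVersion
# is read-only, which is exactly what the Cloud Control API rejects.
import jsonpatch

current = {
    "Name": "primary",
    "WorkGroupConfiguration": {
        "EngineVersion": {"EffectiveEngineVersion": "Athena engine version 3"},
        "EnforceWorkGroupConfiguration": False,
    },
}

desired = dict(current)
desired["WorkGroupConfiguration"] = {
    "ResultConfiguration": {"EncryptionConfiguration": {"EncryptionOption": "SSE_S3"}}
}

print(jsonpatch.make_patch(current, desired).to_string())
```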
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/c7n_awscc/c7n_awscc/actions.py`
Content:
```
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 import json
4
5 import jsonpatch
6
7 from c7n.actions import Action
8 from c7n.utils import local_session, type_schema
9
10
11 class ControlAction(Action):
12 def get_identity(self, r):
13 id_fields = self.manager.schema["primaryIdentifier"]
14 idv = {}
15 for idf in id_fields:
16 idn = idf.rsplit("/", 1)[-1]
17 idv[idn] = r[idn]
18 if len(idv) == 1:
19 return idv[idn]
20 return json.dumps(idv)
21
22
23 class Delete(ControlAction):
24 schema = type_schema("delete")
25
26 def process(self, resources):
27 client = local_session(self.manager.session_factory).client("cloudcontrol")
28 for r in resources:
29 client.delete_resource(
30 TypeName=self.manager.resource_type.cfn_type,
31 Identifier=self.get_identity(r),
32 )
33
34
35 class Update(ControlAction):
36 # schema is setup at resource type initialization
37
38 def process(self, resources):
39 client = local_session(self.manager.session_factory).client("cloudcontrol")
40 for r in resources:
41 patch = self.get_patch(r)
42 client.update_resource(
43 TypeName=self.manager.resource_type.cfn_type,
44 Identifier=self.get_identity(r),
45 PatchDocument=patch.to_string(),
46 )
47
48 def get_patch(self, r):
49 tgt = dict(r)
50 for k, v in self.data.items():
51 if k == "type":
52 continue
53 tgt[k] = v
54 return jsonpatch.make_patch(r, tgt)
55
```
Path: `tools/c7n_awscc/c7n_awscc/manager.py`
Content:
```
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 import json
4 from pathlib import Path
5
6 from c7n.filters import Filter # noqa
7
8 from .actions import Delete, Update
9 from .query import CloudControl
10 from .provider import resources
11
12 from c7n.query import TypeInfo, QueryResourceManager
13
14
15 _IndexData = None
16
17
18 def get_index():
19 global _IndexData
20
21 if _IndexData is not None:
22 return _IndexData
23
24 index_path = Path(__file__).parent / "data" / "index.json"
25 _IndexData = json.loads(index_path.read_text(encoding="utf8"))
26 return _IndexData
27
28
29 def initialize_resource(resource_name):
30 """Load a resource class from its name"""
31 rpath = Path(__file__).parent / "data" / f"aws_{resource_name}.json"
32 if not rpath.exists():
33 return None
34 rinfo = json.loads(rpath.read_text(encoding="utf8"))
35
36 type_info = type(
37 "resource_type",
38 (TypeInfo,),
39 dict(
40 id=rinfo["primaryIdentifier"][0].split("/", 1)[-1],
41 service=rinfo["typeName"].split("::")[1].lower(),
42 cfn_type=rinfo["typeName"],
43 ),
44 )
45
46 rname = "_".join([s.lower() for s in rinfo["typeName"].split("::")[1:]])
47 class_name = "".join([s.lower().capitalize() for s in rinfo["typeName"].split("::")[1:]])
48 mod_name = f"c7n_awscc.resources.{resource_name}"
49
50 permissions = rinfo.get("handlers", {}).get("read", {}).get("permissions", []) + rinfo.get(
51 "handlers", {}
52 ).get("list", {}).get("permissions", [])
53
54 rtype = type(
55 class_name,
56 (QueryResourceManager,),
57 dict(
58 __module__=mod_name,
59 source_mapping={"describe": CloudControl},
60 resource_type=type_info,
61 permissions=permissions,
62 schema=rinfo,
63 ),
64 )
65
66 rtype.action_registry.register(
67 "delete",
68 type(
69 class_name + "Delete",
70 (Delete,),
71 {
72 "permissions": rinfo["handlers"]["delete"]["permissions"],
73 "__module__": mod_name,
74 },
75 ),
76 )
77
78 if "update" in rinfo["handlers"]:
79 rtype.action_registry.register(
80 "update",
81 type(
82 class_name + "Update",
83 (Update,),
84 {
85 "schema": get_update_schema(rtype.schema, rname),
86 "permissions": rinfo["handlers"]["update"]["permissions"],
87 "__module__": mod_name,
88 },
89 ),
90 )
91
92 process_supplementary_data(rtype)
93 resources.register(rname, rtype)
94
95 return {rtype.__name__: rtype}
96
97
98 def process_supplementary_data(rtype):
99 idx = get_index()
100 augment = idx["augment"][rtype.resource_type.cfn_type]
101 rtype.resource_type.service = augment.get("service") or ""
102
103
104 def get_update_schema(schema, rname):
105 prop_names = set(schema["properties"])
106 create_only = {s.rsplit("/", 1)[-1] for s in schema.get("createOnlyProperties", ())}
107 read_only = {s.rsplit("/", 1)[-1] for s in schema.get("readOnlyProperties", ())}
108
109 updatable = prop_names - (create_only | read_only)
110 update_schema = {
111 "additionalProperties": False,
112 "properties": {u: schema["properties"][u] for u in updatable},
113 }
114 update_schema["properties"]["type"] = {"enum": ["update"]}
115
116 if "definitions" in schema:
117 update_schema["definitions"] = dict(schema["definitions"])
118 update_refs(update_schema, rname)
119
120 return update_schema
121
122
123 def update_refs(schema_node, rname):
124 for k, v in schema_node.items():
125 if k == "$ref" and v.startswith("#/definitions/"):
126 # mutating while iterating but there's only ref value ever
127 schema_node[k] = "#/definitions/resources/awscc.%s/actions/update/%s" % (
128 rname,
129 v[2:],
130 )
131 elif isinstance(v, dict):
132 update_refs(v, rname)
133
```
--- END FILES ---
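A side note on `get_update_schema` in manager.py (lines 104-109 of the listing above): only the last path segment of each `readOnlyProperties` entry is collected, so a nested read-only leaf such as `EffectiveEngineVersion` never makes its parent property non-updatable. The self-contained re-run below uses a trimmed, hypothetical schema to show that `WorkGroupConfiguration` remains fully updatable in the generated action schema:

```python
# Re-run of the set arithmetic from get_update_schema() on a toy schema.
schema = {
    "properties": {"Name": {}, "Tags": {}, "WorkGroupConfiguration": {}},
    "createOnlyProperties": ["/properties/Name"],
    "readOnlyProperties": [
        "/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion",
    ],
}

create_only = {s.rsplit("/", 1)[-1] for s in schema.get("createOnlyProperties", ())}
read_only = {s.rsplit("/", 1)[-1] for s in schema.get("readOnlyProperties", ())}
updatable = set(schema["properties"]) - (create_only | read_only)

# Only "EffectiveEngineVersion" lands in read_only, so the parent property
# stays updatable even though one of its nested leaves is read-only.
print(sorted(updatable))  # ['Tags', 'WorkGroupConfiguration']
```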
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/c7n_awscc/c7n_awscc/actions.py b/tools/c7n_awscc/c7n_awscc/actions.py
--- a/tools/c7n_awscc/c7n_awscc/actions.py
+++ b/tools/c7n_awscc/c7n_awscc/actions.py
@@ -33,8 +33,16 @@
class Update(ControlAction):
+ """Update a resource.
+
+ Supports either whole key/value replacement via an attribute mapping
+ or jsonpatch `patch:`
+ """
+
# schema is setup at resource type initialization
+ _action_meta = ("type", "patch")
+
def process(self, resources):
client = local_session(self.manager.session_factory).client("cloudcontrol")
for r in resources:
@@ -46,9 +54,23 @@
)
def get_patch(self, r):
- tgt = dict(r)
+ # we support either using json patch to do a partial modification.
+ if self.data.get("patch"):
+ return jsonpatch.JsonPatch(self.data["patch"])
+
+ current = dict(r)
+
+ # the action's schema reflects updatable properties
+ updatable = {k for k in self.schema["properties"] if k not in self._action_meta}
+ for k in list(set(current) - updatable):
+ del current[k]
+
+ # shallow copy for patch generation
+ tgt = dict(current)
+
+ # or whole key value replacement.
for k, v in self.data.items():
if k == "type":
continue
tgt[k] = v
- return jsonpatch.make_patch(r, tgt)
+ return jsonpatch.make_patch(current, tgt)
diff --git a/tools/c7n_awscc/c7n_awscc/manager.py b/tools/c7n_awscc/c7n_awscc/manager.py
--- a/tools/c7n_awscc/c7n_awscc/manager.py
+++ b/tools/c7n_awscc/c7n_awscc/manager.py
@@ -112,6 +112,19 @@
"properties": {u: schema["properties"][u] for u in updatable},
}
update_schema["properties"]["type"] = {"enum": ["update"]}
+ update_schema["properties"]["patch"] = {
+ # This schema is pretty minimal
+ "description": "Json patch to apply to resources",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "required": ["op", "path"],
+ "properties": {
+ "path": {"type": "string"},
+ "op": {"enum": ["add", "remove", "update", "replace", "move", "copy", "test"]},
+ },
+ },
+ }
if "definitions" in schema:
update_schema["definitions"] = dict(schema["definitions"])
| {"golden_diff": "diff --git a/tools/c7n_awscc/c7n_awscc/actions.py b/tools/c7n_awscc/c7n_awscc/actions.py\n--- a/tools/c7n_awscc/c7n_awscc/actions.py\n+++ b/tools/c7n_awscc/c7n_awscc/actions.py\n@@ -33,8 +33,16 @@\n \n \n class Update(ControlAction):\n+ \"\"\"Update a resource.\n+\n+ Supports either whole key/value replacement via an attribute mapping\n+ or jsonpatch `patch:`\n+ \"\"\"\n+\n # schema is setup at resource type initialization\n \n+ _action_meta = (\"type\", \"patch\")\n+\n def process(self, resources):\n client = local_session(self.manager.session_factory).client(\"cloudcontrol\")\n for r in resources:\n@@ -46,9 +54,23 @@\n )\n \n def get_patch(self, r):\n- tgt = dict(r)\n+ # we support either using json patch to do a partial modification.\n+ if self.data.get(\"patch\"):\n+ return jsonpatch.JsonPatch(self.data[\"patch\"])\n+\n+ current = dict(r)\n+\n+ # the action's schema reflects updatable properties\n+ updatable = {k for k in self.schema[\"properties\"] if k not in self._action_meta}\n+ for k in list(set(current) - updatable):\n+ del current[k]\n+\n+ # shallow copy for patch generation\n+ tgt = dict(current)\n+\n+ # or whole key value replacement.\n for k, v in self.data.items():\n if k == \"type\":\n continue\n tgt[k] = v\n- return jsonpatch.make_patch(r, tgt)\n+ return jsonpatch.make_patch(current, tgt)\ndiff --git a/tools/c7n_awscc/c7n_awscc/manager.py b/tools/c7n_awscc/c7n_awscc/manager.py\n--- a/tools/c7n_awscc/c7n_awscc/manager.py\n+++ b/tools/c7n_awscc/c7n_awscc/manager.py\n@@ -112,6 +112,19 @@\n \"properties\": {u: schema[\"properties\"][u] for u in updatable},\n }\n update_schema[\"properties\"][\"type\"] = {\"enum\": [\"update\"]}\n+ update_schema[\"properties\"][\"patch\"] = {\n+ # This schema is pretty minimal\n+ \"description\": \"Json patch to apply to resources\",\n+ \"type\": \"array\",\n+ \"items\": {\n+ \"type\": \"object\",\n+ \"required\": [\"op\", \"path\"],\n+ \"properties\": {\n+ \"path\": {\"type\": \"string\"},\n+ \"op\": {\"enum\": [\"add\", \"remove\", \"update\", \"replace\", \"move\", \"copy\", \"test\"]},\n+ },\n+ },\n+ }\n \n if \"definitions\" in schema:\n update_schema[\"definitions\"] = dict(schema[\"definitions\"])\n", "issue": "Set Athena Workgroup Encryption also tries to change readonly field\n### Describe the bug\n\nHi, we are currently trying to set the encryption for our primary athena workgroups that are unencrypted. 
The policy looks like this:\r\n\r\n```yaml\r\n- name: set-athena-workgroup-encryption\r\n resource: awscc.athena_workgroup\r\n filters:\r\n - type: value\r\n key: Name\r\n value: \"primary\"\r\n - type: value\r\n key: \"WorkGroupConfiguration.ResultConfiguration.EncryptionConfiguration.EncryptionOption\"\r\n value: absent\r\n actions:\r\n - type: update\r\n WorkGroupConfiguration:\r\n EnforceWorkGroupConfiguration: true\r\n - type: update\r\n WorkGroupConfiguration:\r\n ResultConfiguration:\r\n EncryptionConfiguration:\r\n EncryptionOption: SSE_S3\r\n```\r\n\r\nWhen executing this policy we get this error though: \r\n\r\n```\r\n2024-02-15 09:59:25,626: custodian.commands:ERROR Error while executing policy set-athena-workgroup-encryption, continuing\r\nTraceback (most recent call last):\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/commands.py\", line 307, in run\r\n policy()\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py\", line 1357, in __call__\r\n resources = mode.run()\r\n ^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py\", line 364, in run\r\n results = a.process(resources)\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n_awscc/actions.py\", line 43, in process\r\n client.update_resource(\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py\", line 553, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py\", line 1009, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateResource operation: Invalid patch update: readOnlyProperties [/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion] cannot be updated\r\n2024-02-15 09:59:25,627: custodian.commands:ERROR The following policies had errors while executing\r\n - set-athena-workgroup-encryption\r\n```\r\n\r\nBut we don't want to update the engine version itself.\n\n### What did you expect to happen?\n\nWe expected the policy to update the encryption setting and not touch the engine version, because the attribute was not specified in our policy\n\n### Cloud Provider\n\nAmazon Web Services (AWS)\n\n### Cloud Custodian version and dependency information\n\n```shell\nCustodian: 0.9.34\r\nPython: 3.11.4 (main, Dec 7 2023, 15:43:41) [GCC 12.3.0]\r\nPlatform: posix.uname_result(sysname='Linux', nodename='marcel', release='6.2.0-39-generic', version='#40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023', machine='x86_64')\r\nUsing venv: True\r\nDocker: False\r\nInstalled:\r\n\r\nargcomplete==3.2.1\r\nattrs==23.2.0\r\nboto3==1.34.21\r\nbotocore==1.34.21\r\ndocutils==0.18.1\r\nimportlib-metadata==6.11.0\r\njmespath==1.0.1\r\njsonschema==4.21.0\r\njsonschema-specifications==2023.12.1\r\npython-dateutil==2.8.2\r\npyyaml==6.0.1\r\nreferencing==0.31.1\r\nrpds-py==0.17.1\r\ns3transfer==0.10.0\r\nsix==1.16.0\r\ntabulate==0.9.0\r\nurllib3==1.26.18\r\nzipp==3.17.0\n```\n\n\n### Policy\n\n```shell\n- name: set-athena-workgroup-encryption\r\n resource: awscc.athena_workgroup\r\n filters:\r\n - type: value\r\n key: Name\r\n value: \"primary\"\r\n - type: value\r\n key: \"WorkGroupConfiguration.ResultConfiguration.EncryptionConfiguration.EncryptionOption\"\r\n value: absent\r\n actions:\r\n - type: 
update\r\n WorkGroupConfiguration:\r\n EnforceWorkGroupConfiguration: true\r\n - type: update\r\n WorkGroupConfiguration:\r\n ResultConfiguration:\r\n EncryptionConfiguration:\r\n EncryptionOption: SSE_S3\n```\n\n\n### Relevant log/traceback output\n\n```shell\n2024-02-15 09:59:25,626: custodian.commands:ERROR Error while executing policy set-athena-workgroup-encryption, continuing\r\nTraceback (most recent call last):\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/commands.py\", line 307, in run\r\n policy()\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py\", line 1357, in __call__\r\n resources = mode.run()\r\n ^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n/policy.py\", line 364, in run\r\n results = a.process(resources)\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/c7n_awscc/actions.py\", line 43, in process\r\n client.update_resource(\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py\", line 553, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/.../policies/.venv/lib/python3.11/site-packages/botocore/client.py\", line 1009, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateResource operation: Invalid patch update: readOnlyProperties [/properties/WorkGroupConfiguration/EngineVersion/EffectiveEngineVersion] cannot be updated\r\n2024-02-15 09:59:25,627: custodian.commands:ERROR The following policies had errors while executing\r\n - set-athena-workgroup-encryption\n```\n\n\n### Extra information or context\n\nWe tried to use the update attributes like this \r\n\r\n```yaml\r\n - type: update\r\n WorkGroupConfigurationUpdates:\r\n EnforceWorkGroupConfiguration: true\r\n ResultConfigurationUpdates:\r\n EncryptionConfiguration:\r\n EncryptionOption: SSE_S3\r\n```\r\n\r\nbut there is currently a bug in AWS which resets the workgroup right after the operation. 
We are in communication with AWS Support there, but in the meantime we tried to make it work with the approach described above.\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport json\n\nimport jsonpatch\n\nfrom c7n.actions import Action\nfrom c7n.utils import local_session, type_schema\n\n\nclass ControlAction(Action):\n def get_identity(self, r):\n id_fields = self.manager.schema[\"primaryIdentifier\"]\n idv = {}\n for idf in id_fields:\n idn = idf.rsplit(\"/\", 1)[-1]\n idv[idn] = r[idn]\n if len(idv) == 1:\n return idv[idn]\n return json.dumps(idv)\n\n\nclass Delete(ControlAction):\n schema = type_schema(\"delete\")\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client(\"cloudcontrol\")\n for r in resources:\n client.delete_resource(\n TypeName=self.manager.resource_type.cfn_type,\n Identifier=self.get_identity(r),\n )\n\n\nclass Update(ControlAction):\n # schema is setup at resource type initialization\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client(\"cloudcontrol\")\n for r in resources:\n patch = self.get_patch(r)\n client.update_resource(\n TypeName=self.manager.resource_type.cfn_type,\n Identifier=self.get_identity(r),\n PatchDocument=patch.to_string(),\n )\n\n def get_patch(self, r):\n tgt = dict(r)\n for k, v in self.data.items():\n if k == \"type\":\n continue\n tgt[k] = v\n return jsonpatch.make_patch(r, tgt)\n", "path": "tools/c7n_awscc/c7n_awscc/actions.py"}, {"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport json\nfrom pathlib import Path\n\nfrom c7n.filters import Filter # noqa\n\nfrom .actions import Delete, Update\nfrom .query import CloudControl\nfrom .provider import resources\n\nfrom c7n.query import TypeInfo, QueryResourceManager\n\n\n_IndexData = None\n\n\ndef get_index():\n global _IndexData\n\n if _IndexData is not None:\n return _IndexData\n\n index_path = Path(__file__).parent / \"data\" / \"index.json\"\n _IndexData = json.loads(index_path.read_text(encoding=\"utf8\"))\n return _IndexData\n\n\ndef initialize_resource(resource_name):\n \"\"\"Load a resource class from its name\"\"\"\n rpath = Path(__file__).parent / \"data\" / f\"aws_{resource_name}.json\"\n if not rpath.exists():\n return None\n rinfo = json.loads(rpath.read_text(encoding=\"utf8\"))\n\n type_info = type(\n \"resource_type\",\n (TypeInfo,),\n dict(\n id=rinfo[\"primaryIdentifier\"][0].split(\"/\", 1)[-1],\n service=rinfo[\"typeName\"].split(\"::\")[1].lower(),\n cfn_type=rinfo[\"typeName\"],\n ),\n )\n\n rname = \"_\".join([s.lower() for s in rinfo[\"typeName\"].split(\"::\")[1:]])\n class_name = \"\".join([s.lower().capitalize() for s in rinfo[\"typeName\"].split(\"::\")[1:]])\n mod_name = f\"c7n_awscc.resources.{resource_name}\"\n\n permissions = rinfo.get(\"handlers\", {}).get(\"read\", {}).get(\"permissions\", []) + rinfo.get(\n \"handlers\", {}\n ).get(\"list\", {}).get(\"permissions\", [])\n\n rtype = type(\n class_name,\n (QueryResourceManager,),\n dict(\n __module__=mod_name,\n source_mapping={\"describe\": CloudControl},\n resource_type=type_info,\n permissions=permissions,\n schema=rinfo,\n ),\n )\n\n rtype.action_registry.register(\n \"delete\",\n type(\n class_name + \"Delete\",\n (Delete,),\n {\n \"permissions\": rinfo[\"handlers\"][\"delete\"][\"permissions\"],\n \"__module__\": mod_name,\n },\n ),\n )\n\n if \"update\" in rinfo[\"handlers\"]:\n 
rtype.action_registry.register(\n \"update\",\n type(\n class_name + \"Update\",\n (Update,),\n {\n \"schema\": get_update_schema(rtype.schema, rname),\n \"permissions\": rinfo[\"handlers\"][\"update\"][\"permissions\"],\n \"__module__\": mod_name,\n },\n ),\n )\n\n process_supplementary_data(rtype)\n resources.register(rname, rtype)\n\n return {rtype.__name__: rtype}\n\n\ndef process_supplementary_data(rtype):\n idx = get_index()\n augment = idx[\"augment\"][rtype.resource_type.cfn_type]\n rtype.resource_type.service = augment.get(\"service\") or \"\"\n\n\ndef get_update_schema(schema, rname):\n prop_names = set(schema[\"properties\"])\n create_only = {s.rsplit(\"/\", 1)[-1] for s in schema.get(\"createOnlyProperties\", ())}\n read_only = {s.rsplit(\"/\", 1)[-1] for s in schema.get(\"readOnlyProperties\", ())}\n\n updatable = prop_names - (create_only | read_only)\n update_schema = {\n \"additionalProperties\": False,\n \"properties\": {u: schema[\"properties\"][u] for u in updatable},\n }\n update_schema[\"properties\"][\"type\"] = {\"enum\": [\"update\"]}\n\n if \"definitions\" in schema:\n update_schema[\"definitions\"] = dict(schema[\"definitions\"])\n update_refs(update_schema, rname)\n\n return update_schema\n\n\ndef update_refs(schema_node, rname):\n for k, v in schema_node.items():\n if k == \"$ref\" and v.startswith(\"#/definitions/\"):\n # mutating while iterating but there's only ref value ever\n schema_node[k] = \"#/definitions/resources/awscc.%s/actions/update/%s\" % (\n rname,\n v[2:],\n )\n elif isinstance(v, dict):\n update_refs(v, rname)\n", "path": "tools/c7n_awscc/c7n_awscc/manager.py"}], "after_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport json\n\nimport jsonpatch\n\nfrom c7n.actions import Action\nfrom c7n.utils import local_session, type_schema\n\n\nclass ControlAction(Action):\n def get_identity(self, r):\n id_fields = self.manager.schema[\"primaryIdentifier\"]\n idv = {}\n for idf in id_fields:\n idn = idf.rsplit(\"/\", 1)[-1]\n idv[idn] = r[idn]\n if len(idv) == 1:\n return idv[idn]\n return json.dumps(idv)\n\n\nclass Delete(ControlAction):\n schema = type_schema(\"delete\")\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client(\"cloudcontrol\")\n for r in resources:\n client.delete_resource(\n TypeName=self.manager.resource_type.cfn_type,\n Identifier=self.get_identity(r),\n )\n\n\nclass Update(ControlAction):\n \"\"\"Update a resource.\n\n Supports either whole key/value replacement via an attribute mapping\n or jsonpatch `patch:`\n \"\"\"\n\n # schema is setup at resource type initialization\n\n _action_meta = (\"type\", \"patch\")\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client(\"cloudcontrol\")\n for r in resources:\n patch = self.get_patch(r)\n client.update_resource(\n TypeName=self.manager.resource_type.cfn_type,\n Identifier=self.get_identity(r),\n PatchDocument=patch.to_string(),\n )\n\n def get_patch(self, r):\n # we support either using json patch to do a partial modification.\n if self.data.get(\"patch\"):\n return jsonpatch.JsonPatch(self.data[\"patch\"])\n\n current = dict(r)\n\n # the action's schema reflects updatable properties\n updatable = {k for k in self.schema[\"properties\"] if k not in self._action_meta}\n for k in list(set(current) - updatable):\n del current[k]\n\n # shallow copy for patch generation\n tgt = dict(current)\n\n # or whole key value replacement.\n for k, v in 
self.data.items():\n if k == \"type\":\n continue\n tgt[k] = v\n return jsonpatch.make_patch(current, tgt)\n", "path": "tools/c7n_awscc/c7n_awscc/actions.py"}, {"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport json\nfrom pathlib import Path\n\nfrom c7n.filters import Filter # noqa\n\nfrom .actions import Delete, Update\nfrom .query import CloudControl\nfrom .provider import resources\n\nfrom c7n.query import TypeInfo, QueryResourceManager\n\n\n_IndexData = None\n\n\ndef get_index():\n global _IndexData\n\n if _IndexData is not None:\n return _IndexData\n\n index_path = Path(__file__).parent / \"data\" / \"index.json\"\n _IndexData = json.loads(index_path.read_text(encoding=\"utf8\"))\n return _IndexData\n\n\ndef initialize_resource(resource_name):\n \"\"\"Load a resource class from its name\"\"\"\n rpath = Path(__file__).parent / \"data\" / f\"aws_{resource_name}.json\"\n if not rpath.exists():\n return None\n rinfo = json.loads(rpath.read_text(encoding=\"utf8\"))\n\n type_info = type(\n \"resource_type\",\n (TypeInfo,),\n dict(\n id=rinfo[\"primaryIdentifier\"][0].split(\"/\", 1)[-1],\n service=rinfo[\"typeName\"].split(\"::\")[1].lower(),\n cfn_type=rinfo[\"typeName\"],\n ),\n )\n\n rname = \"_\".join([s.lower() for s in rinfo[\"typeName\"].split(\"::\")[1:]])\n class_name = \"\".join([s.lower().capitalize() for s in rinfo[\"typeName\"].split(\"::\")[1:]])\n mod_name = f\"c7n_awscc.resources.{resource_name}\"\n\n permissions = rinfo.get(\"handlers\", {}).get(\"read\", {}).get(\"permissions\", []) + rinfo.get(\n \"handlers\", {}\n ).get(\"list\", {}).get(\"permissions\", [])\n\n rtype = type(\n class_name,\n (QueryResourceManager,),\n dict(\n __module__=mod_name,\n source_mapping={\"describe\": CloudControl},\n resource_type=type_info,\n permissions=permissions,\n schema=rinfo,\n ),\n )\n\n rtype.action_registry.register(\n \"delete\",\n type(\n class_name + \"Delete\",\n (Delete,),\n {\n \"permissions\": rinfo[\"handlers\"][\"delete\"][\"permissions\"],\n \"__module__\": mod_name,\n },\n ),\n )\n\n if \"update\" in rinfo[\"handlers\"]:\n rtype.action_registry.register(\n \"update\",\n type(\n class_name + \"Update\",\n (Update,),\n {\n \"schema\": get_update_schema(rtype.schema, rname),\n \"permissions\": rinfo[\"handlers\"][\"update\"][\"permissions\"],\n \"__module__\": mod_name,\n },\n ),\n )\n\n process_supplementary_data(rtype)\n resources.register(rname, rtype)\n\n return {rtype.__name__: rtype}\n\n\ndef process_supplementary_data(rtype):\n idx = get_index()\n augment = idx[\"augment\"][rtype.resource_type.cfn_type]\n rtype.resource_type.service = augment.get(\"service\") or \"\"\n\n\ndef get_update_schema(schema, rname):\n prop_names = set(schema[\"properties\"])\n create_only = {s.rsplit(\"/\", 1)[-1] for s in schema.get(\"createOnlyProperties\", ())}\n read_only = {s.rsplit(\"/\", 1)[-1] for s in schema.get(\"readOnlyProperties\", ())}\n\n updatable = prop_names - (create_only | read_only)\n update_schema = {\n \"additionalProperties\": False,\n \"properties\": {u: schema[\"properties\"][u] for u in updatable},\n }\n update_schema[\"properties\"][\"type\"] = {\"enum\": [\"update\"]}\n update_schema[\"properties\"][\"patch\"] = {\n # This schema is pretty minimal\n \"description\": \"Json patch to apply to resources\",\n \"type\": \"array\",\n \"items\": {\n \"type\": \"object\",\n \"required\": [\"op\", \"path\"],\n \"properties\": {\n \"path\": {\"type\": \"string\"},\n \"op\": {\"enum\": [\"add\", \"remove\", \"update\", 
\"replace\", \"move\", \"copy\", \"test\"]},\n },\n },\n }\n\n if \"definitions\" in schema:\n update_schema[\"definitions\"] = dict(schema[\"definitions\"])\n update_refs(update_schema, rname)\n\n return update_schema\n\n\ndef update_refs(schema_node, rname):\n for k, v in schema_node.items():\n if k == \"$ref\" and v.startswith(\"#/definitions/\"):\n # mutating while iterating but there's only ref value ever\n schema_node[k] = \"#/definitions/resources/awscc.%s/actions/update/%s\" % (\n rname,\n v[2:],\n )\n elif isinstance(v, dict):\n update_refs(v, rname)\n", "path": "tools/c7n_awscc/c7n_awscc/manager.py"}]} | 3,672 | 653 |
gh_patches_debug_40894 | rasdani/github-patches | git_diff | dask__distributed-3786 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Version mismatch warning is a little scary
## Background
When clients/scheduler/workers have mismatched versions, users get an informative error message like the following:
```
/home/mrocklin/workspace/distributed/distributed/client.py:1079: VersionMismatchWarning: Mismatched versions found
blosc
+---------------------------+---------+
| | version |
+---------------------------+---------+
| client | None |
| scheduler | 1.9.1 |
| tcp://172.31.15.170:46853 | 1.9.1 |
| tcp://172.31.18.92:41153 | 1.9.1 |
| tcp://172.31.42.33:42009 | 1.9.1 |
| tcp://172.31.7.159:38461 | 1.9.1 |
+---------------------------+---------+
cloudpickle
+---------------------------+---------+
| | version |
+---------------------------+---------+
| client | 1.3.0 |
| scheduler | 1.4.0 |
| tcp://172.31.15.170:46853 | 1.4.0 |
| tcp://172.31.18.92:41153 | 1.4.0 |
| tcp://172.31.42.33:42009 | 1.4.0 |
| tcp://172.31.7.159:38461 | 1.4.0 |
+---------------------------+---------+
dask
+---------------------------+---------------------+
| | version |
+---------------------------+---------------------+
| client | 2.14.0+34.g8ab7f942 |
| scheduler | 2.15.0 |
| tcp://172.31.15.170:46853 | 2.15.0 |
| tcp://172.31.18.92:41153 | 2.15.0 |
| tcp://172.31.42.33:42009 | 2.15.0 |
| tcp://172.31.7.159:38461 | 2.15.0 |
+---------------------------+---------------------+
distributed
+---------------------------+---------------------+
| | version |
+---------------------------+---------------------+
| client | 2.14.0+47.gb4dc9c64 |
| scheduler | 2.15.0 |
| tcp://172.31.15.170:46853 | 2.15.0 |
| tcp://172.31.18.92:41153 | 2.15.0 |
| tcp://172.31.42.33:42009 | 2.15.0 |
| tcp://172.31.7.159:38461 | 2.15.0 |
+---------------------------+---------------------+
lz4
+---------------------------+---------+
| | version |
+---------------------------+---------+
| client | 2.2.1 |
| scheduler | 3.0.2 |
| tcp://172.31.15.170:46853 | 3.0.2 |
| tcp://172.31.18.92:41153 | 3.0.2 |
| tcp://172.31.42.33:42009 | 3.0.2 |
| tcp://172.31.7.159:38461 | 3.0.2 |
+---------------------------+---------+
msgpack
+---------------------------+---------+
| | version |
+---------------------------+---------+
| client | 0.6.2 |
| scheduler | 1.0.0 |
| tcp://172.31.15.170:46853 | 1.0.0 |
| tcp://172.31.18.92:41153 | 1.0.0 |
| tcp://172.31.42.33:42009 | 1.0.0 |
| tcp://172.31.7.159:38461 | 1.0.0 |
+---------------------------+---------+
python
+---------------------------+---------------+
| | version |
+---------------------------+---------------+
| client | 3.7.6.final.0 |
| scheduler | 3.7.4.final.0 |
| tcp://172.31.15.170:46853 | 3.7.4.final.0 |
| tcp://172.31.18.92:41153 | 3.7.4.final.0 |
| tcp://172.31.42.33:42009 | 3.7.4.final.0 |
| tcp://172.31.7.159:38461 | 3.7.4.final.0 |
+---------------------------+---------------+
warnings.warn(version_module.VersionMismatchWarning(msg[0]["warning"]))
```
This is generally pretty great. We used to get a ton of github issues that reduced down to version mismatches, and now we don't. Hooray for informative error messages.
## Moving forward
However, I've run into a couple of issues that arise with these messages in practice, where I think that we might be able to improve them a bit.
1. They can get very long, especially if you have lots of workers.
2. We don't call out really important issues in relation to less important issues. It's entirely ok if your `msgpack` version is a little off, but probably not ok if some machines have `lz4` and some don't.
So I wonder if we might reorganize this message a bit. We might have something like the following:
```
+-----------+----------+------------+------------+
| Package | client | scheduler | workers |
+-----------+----------+------------+------------+
| python | 3.7 | 3.8 | {3.7, 3.8} |
| lz4 | ... | ... | ... |
| msgpack | | ... | ... |
+-----------+----------+------------+------------+
Notes:
- msgpack: Variation is ok, as long as everything is above 0.6
- lz4: Variation is ok, but missing libraries are not
- python: Variation is sometimes ok, sometimes not. It depends on your workloads
```
We pack down the one-line-per-worker policy, and instead include sets of versions in the table if necessary. This makes it a bit harder to debug if one of the workers is mismatched, but I think that this behavior isn't common.
We include optional prose around each of the libraries and include that prose if that library is found to be mismatched.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/versions.py`
Content:
```
1 """ utilities for package version introspection """
2
3 from __future__ import print_function, division, absolute_import
4
5 from collections import defaultdict
6 import platform
7 import struct
8 import os
9 import sys
10 import importlib
11
12
13 required_packages = [
14 ("dask", lambda p: p.__version__),
15 ("distributed", lambda p: p.__version__),
16 ("msgpack", lambda p: ".".join([str(v) for v in p.version])),
17 ("cloudpickle", lambda p: p.__version__),
18 ("tornado", lambda p: p.version),
19 ("toolz", lambda p: p.__version__),
20 ]
21
22 optional_packages = [
23 ("numpy", lambda p: p.__version__),
24 ("lz4", lambda p: p.__version__),
25 ("blosc", lambda p: p.__version__),
26 ]
27
28
29 # only these scheduler packages will be checked for version mismatch
30 scheduler_relevant_packages = set(pkg for pkg, _ in required_packages) | set(
31 ["lz4", "blosc"]
32 )
33
34
35 def get_versions(packages=None):
36 """
37 Return basic information on our software installation, and our installed versions of packages.
38 """
39 if packages is None:
40 packages = []
41
42 d = {
43 "host": get_system_info(),
44 "packages": get_package_info(
45 required_packages + optional_packages + list(packages)
46 ),
47 }
48
49 return d
50
51
52 def get_system_info():
53 (sysname, nodename, release, version, machine, processor) = platform.uname()
54 host = {
55 "python": "%d.%d.%d.%s.%s" % sys.version_info[:],
56 "python-bits": struct.calcsize("P") * 8,
57 "OS": "%s" % sysname,
58 "OS-release": "%s" % release,
59 "machine": "%s" % machine,
60 "processor": "%s" % processor,
61 "byteorder": "%s" % sys.byteorder,
62 "LC_ALL": "%s" % os.environ.get("LC_ALL", "None"),
63 "LANG": "%s" % os.environ.get("LANG", "None"),
64 }
65
66 return host
67
68
69 def version_of_package(pkg):
70 """ Try a variety of common ways to get the version of a package """
71 from .utils import ignoring
72
73 with ignoring(AttributeError):
74 return pkg.__version__
75 with ignoring(AttributeError):
76 return str(pkg.version)
77 with ignoring(AttributeError):
78 return ".".join(map(str, pkg.version_info))
79 return None
80
81
82 def get_package_info(pkgs):
83 """ get package versions for the passed required & optional packages """
84
85 pversions = [("python", ".".join(map(str, sys.version_info)))]
86 for pkg in pkgs:
87 if isinstance(pkg, (tuple, list)):
88 modname, ver_f = pkg
89 else:
90 modname = pkg
91 ver_f = version_of_package
92
93 if ver_f is None:
94 ver_f = version_of_package
95
96 try:
97 mod = importlib.import_module(modname)
98 ver = ver_f(mod)
99 pversions.append((modname, ver))
100 except Exception:
101 pversions.append((modname, None))
102
103 return dict(pversions)
104
105
106 def error_message(scheduler, workers, client, client_name="client"):
107 from .utils import asciitable
108
109 nodes = {**{client_name: client}, **{"scheduler": scheduler}, **workers}
110
111 # Hold all versions, e.g. versions["scheduler"]["distributed"] = 2.9.3
112 node_packages = defaultdict(dict)
113
114 # Collect all package versions
115 packages = set()
116 for node, info in nodes.items():
117 if info is None or not (isinstance(info, dict)) or "packages" not in info:
118 node_packages[node] = defaultdict(lambda: "UNKNOWN")
119 else:
120 node_packages[node] = defaultdict(lambda: "MISSING")
121 for pkg, version in info["packages"].items():
122 node_packages[node][pkg] = version
123 packages.add(pkg)
124
125 errs = []
126 for pkg in sorted(packages):
127 versions = set(
128 node_packages[node][pkg]
129 for node in nodes
130 if node != "scheduler" or pkg in scheduler_relevant_packages
131 )
132 if len(versions) <= 1:
133 continue
134 rows = [
135 (node_name, node_packages[node_name][pkg]) for node_name in nodes.keys()
136 ]
137 errs.append("%s\n%s" % (pkg, asciitable(["", "version"], rows)))
138 if errs:
139 return "Mismatched versions found\n" "\n" "%s" % ("\n\n".join(errs))
140 else:
141 return ""
142
143
144 class VersionMismatchWarning(Warning):
145 """Indicates version mismatch between nodes"""
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/distributed/versions.py b/distributed/versions.py
--- a/distributed/versions.py
+++ b/distributed/versions.py
@@ -2,7 +2,6 @@
from __future__ import print_function, division, absolute_import
-from collections import defaultdict
import platform
import struct
import os
@@ -32,6 +31,14 @@
)
+# notes to be displayed for mismatch packages
+notes_mismatch_package = {
+ "msgpack": "Variation is ok, as long as everything is above 0.6",
+ "lz4": "Variation is ok, but missing libraries are not",
+ "python": "Variation is sometimes ok, sometimes not. It depends on your workloads",
+}
+
+
def get_versions(packages=None):
"""
Return basic information on our software installation, and our installed versions of packages.
@@ -106,37 +113,56 @@
def error_message(scheduler, workers, client, client_name="client"):
from .utils import asciitable
- nodes = {**{client_name: client}, **{"scheduler": scheduler}, **workers}
-
- # Hold all versions, e.g. versions["scheduler"]["distributed"] = 2.9.3
- node_packages = defaultdict(dict)
+ client = client.get("packages") if client else "UNKNOWN"
+ scheduler = scheduler.get("packages") if scheduler else "UNKNOWN"
+ workers = {k: v.get("packages") if v else "UNKNOWN" for k, v in workers.items()}
- # Collect all package versions
packages = set()
- for node, info in nodes.items():
- if info is None or not (isinstance(info, dict)) or "packages" not in info:
- node_packages[node] = defaultdict(lambda: "UNKNOWN")
- else:
- node_packages[node] = defaultdict(lambda: "MISSING")
- for pkg, version in info["packages"].items():
- node_packages[node][pkg] = version
- packages.add(pkg)
+ packages.update(client)
+ packages.update(scheduler)
+ for worker in workers:
+ packages.update(workers.get(worker))
errs = []
+ notes = []
for pkg in sorted(packages):
- versions = set(
- node_packages[node][pkg]
- for node in nodes
- if node != "scheduler" or pkg in scheduler_relevant_packages
+ versions = set()
+ scheduler_version = (
+ scheduler.get(pkg, "MISSING") if isinstance(scheduler, dict) else scheduler
+ )
+ if pkg in scheduler_relevant_packages:
+ versions.add(scheduler_version)
+
+ client_version = (
+ client.get(pkg, "MISSING") if isinstance(client, dict) else client
)
+ versions.add(client_version)
+
+ worker_versions = set(
+ workers[w].get(pkg, "MISSING")
+ if isinstance(workers[w], dict)
+ else workers[w]
+ for w in workers
+ )
+ versions |= worker_versions
+
if len(versions) <= 1:
continue
- rows = [
- (node_name, node_packages[node_name][pkg]) for node_name in nodes.keys()
- ]
- errs.append("%s\n%s" % (pkg, asciitable(["", "version"], rows)))
+ if len(worker_versions) == 1:
+ worker_versions = list(worker_versions)[0]
+ elif len(worker_versions) == 0:
+ worker_versions = None
+
+ errs.append((pkg, client_version, scheduler_version, worker_versions))
+ if pkg in notes_mismatch_package.keys():
+ notes.append(f"- {pkg}: {notes_mismatch_package[pkg]}")
+
if errs:
- return "Mismatched versions found\n" "\n" "%s" % ("\n\n".join(errs))
+ err_table = asciitable(["Package", client_name, "scheduler", "workers"], errs)
+ err_msg = f"Mismatched versions found\n\n{err_table}"
+ if notes:
+ err_msg += "\nNotes: \n{}".format("\n".join(notes))
+ return err_msg
else:
return ""
| {"golden_diff": "diff --git a/distributed/versions.py b/distributed/versions.py\n--- a/distributed/versions.py\n+++ b/distributed/versions.py\n@@ -2,7 +2,6 @@\n \n from __future__ import print_function, division, absolute_import\n \n-from collections import defaultdict\n import platform\n import struct\n import os\n@@ -32,6 +31,14 @@\n )\n \n \n+# notes to be displayed for mismatch packages\n+notes_mismatch_package = {\n+ \"msgpack\": \"Variation is ok, as long as everything is above 0.6\",\n+ \"lz4\": \"Variation is ok, but missing libraries are not\",\n+ \"python\": \"Variation is sometimes ok, sometimes not. It depends on your workloads\",\n+}\n+\n+\n def get_versions(packages=None):\n \"\"\"\n Return basic information on our software installation, and our installed versions of packages.\n@@ -106,37 +113,56 @@\n def error_message(scheduler, workers, client, client_name=\"client\"):\n from .utils import asciitable\n \n- nodes = {**{client_name: client}, **{\"scheduler\": scheduler}, **workers}\n-\n- # Hold all versions, e.g. versions[\"scheduler\"][\"distributed\"] = 2.9.3\n- node_packages = defaultdict(dict)\n+ client = client.get(\"packages\") if client else \"UNKNOWN\"\n+ scheduler = scheduler.get(\"packages\") if scheduler else \"UNKNOWN\"\n+ workers = {k: v.get(\"packages\") if v else \"UNKNOWN\" for k, v in workers.items()}\n \n- # Collect all package versions\n packages = set()\n- for node, info in nodes.items():\n- if info is None or not (isinstance(info, dict)) or \"packages\" not in info:\n- node_packages[node] = defaultdict(lambda: \"UNKNOWN\")\n- else:\n- node_packages[node] = defaultdict(lambda: \"MISSING\")\n- for pkg, version in info[\"packages\"].items():\n- node_packages[node][pkg] = version\n- packages.add(pkg)\n+ packages.update(client)\n+ packages.update(scheduler)\n+ for worker in workers:\n+ packages.update(workers.get(worker))\n \n errs = []\n+ notes = []\n for pkg in sorted(packages):\n- versions = set(\n- node_packages[node][pkg]\n- for node in nodes\n- if node != \"scheduler\" or pkg in scheduler_relevant_packages\n+ versions = set()\n+ scheduler_version = (\n+ scheduler.get(pkg, \"MISSING\") if isinstance(scheduler, dict) else scheduler\n+ )\n+ if pkg in scheduler_relevant_packages:\n+ versions.add(scheduler_version)\n+\n+ client_version = (\n+ client.get(pkg, \"MISSING\") if isinstance(client, dict) else client\n )\n+ versions.add(client_version)\n+\n+ worker_versions = set(\n+ workers[w].get(pkg, \"MISSING\")\n+ if isinstance(workers[w], dict)\n+ else workers[w]\n+ for w in workers\n+ )\n+ versions |= worker_versions\n+\n if len(versions) <= 1:\n continue\n- rows = [\n- (node_name, node_packages[node_name][pkg]) for node_name in nodes.keys()\n- ]\n- errs.append(\"%s\\n%s\" % (pkg, asciitable([\"\", \"version\"], rows)))\n+ if len(worker_versions) == 1:\n+ worker_versions = list(worker_versions)[0]\n+ elif len(worker_versions) == 0:\n+ worker_versions = None\n+\n+ errs.append((pkg, client_version, scheduler_version, worker_versions))\n+ if pkg in notes_mismatch_package.keys():\n+ notes.append(f\"- {pkg}: {notes_mismatch_package[pkg]}\")\n+\n if errs:\n- return \"Mismatched versions found\\n\" \"\\n\" \"%s\" % (\"\\n\\n\".join(errs))\n+ err_table = asciitable([\"Package\", client_name, \"scheduler\", \"workers\"], errs)\n+ err_msg = f\"Mismatched versions found\\n\\n{err_table}\"\n+ if notes:\n+ err_msg += \"\\nNotes: \\n{}\".format(\"\\n\".join(notes))\n+ return err_msg\n else:\n return \"\"\n", "issue": "Version mismatch warning is a little scary\n## 
Background\r\n\r\nWhen clients/scheduler/workers have mismatched versions, users get an informative error message like the following:\r\n\r\n```\r\n/home/mrocklin/workspace/distributed/distributed/client.py:1079: VersionMismatchWarning: Mismatched versions found\r\n\r\nblosc\r\n+---------------------------+---------+\r\n| | version |\r\n+---------------------------+---------+\r\n| client | None |\r\n| scheduler | 1.9.1 |\r\n| tcp://172.31.15.170:46853 | 1.9.1 |\r\n| tcp://172.31.18.92:41153 | 1.9.1 |\r\n| tcp://172.31.42.33:42009 | 1.9.1 |\r\n| tcp://172.31.7.159:38461 | 1.9.1 |\r\n+---------------------------+---------+\r\n\r\ncloudpickle\r\n+---------------------------+---------+\r\n| | version |\r\n+---------------------------+---------+\r\n| client | 1.3.0 |\r\n| scheduler | 1.4.0 |\r\n| tcp://172.31.15.170:46853 | 1.4.0 |\r\n| tcp://172.31.18.92:41153 | 1.4.0 |\r\n| tcp://172.31.42.33:42009 | 1.4.0 |\r\n| tcp://172.31.7.159:38461 | 1.4.0 |\r\n+---------------------------+---------+\r\n\r\ndask\r\n+---------------------------+---------------------+\r\n| | version |\r\n+---------------------------+---------------------+\r\n| client | 2.14.0+34.g8ab7f942 |\r\n| scheduler | 2.15.0 |\r\n| tcp://172.31.15.170:46853 | 2.15.0 |\r\n| tcp://172.31.18.92:41153 | 2.15.0 |\r\n| tcp://172.31.42.33:42009 | 2.15.0 |\r\n| tcp://172.31.7.159:38461 | 2.15.0 |\r\n+---------------------------+---------------------+\r\n\r\ndistributed\r\n+---------------------------+---------------------+\r\n| | version |\r\n+---------------------------+---------------------+\r\n| client | 2.14.0+47.gb4dc9c64 |\r\n| scheduler | 2.15.0 |\r\n| tcp://172.31.15.170:46853 | 2.15.0 |\r\n| tcp://172.31.18.92:41153 | 2.15.0 |\r\n| tcp://172.31.42.33:42009 | 2.15.0 |\r\n| tcp://172.31.7.159:38461 | 2.15.0 |\r\n+---------------------------+---------------------+\r\n\r\nlz4\r\n+---------------------------+---------+\r\n| | version |\r\n+---------------------------+---------+\r\n| client | 2.2.1 |\r\n| scheduler | 3.0.2 |\r\n| tcp://172.31.15.170:46853 | 3.0.2 |\r\n| tcp://172.31.18.92:41153 | 3.0.2 |\r\n| tcp://172.31.42.33:42009 | 3.0.2 |\r\n| tcp://172.31.7.159:38461 | 3.0.2 |\r\n+---------------------------+---------+\r\n\r\nmsgpack\r\n+---------------------------+---------+\r\n| | version |\r\n+---------------------------+---------+\r\n| client | 0.6.2 |\r\n| scheduler | 1.0.0 |\r\n| tcp://172.31.15.170:46853 | 1.0.0 |\r\n| tcp://172.31.18.92:41153 | 1.0.0 |\r\n| tcp://172.31.42.33:42009 | 1.0.0 |\r\n| tcp://172.31.7.159:38461 | 1.0.0 |\r\n+---------------------------+---------+\r\n\r\npython\r\n+---------------------------+---------------+\r\n| | version |\r\n+---------------------------+---------------+\r\n| client | 3.7.6.final.0 |\r\n| scheduler | 3.7.4.final.0 |\r\n| tcp://172.31.15.170:46853 | 3.7.4.final.0 |\r\n| tcp://172.31.18.92:41153 | 3.7.4.final.0 |\r\n| tcp://172.31.42.33:42009 | 3.7.4.final.0 |\r\n| tcp://172.31.7.159:38461 | 3.7.4.final.0 |\r\n+---------------------------+---------------+\r\n warnings.warn(version_module.VersionMismatchWarning(msg[0][\"warning\"]))\r\n```\r\n\r\nThis is generally pretty great. We used to get a ton of github issues that reduced down to version mismatches, and now we don't. Hooray for informative error messages.\r\n\r\n## Moving forward\r\n\r\nHowever, I've run into a couple of issues that arise with these messages in practice, where I think that we might be able to improve them a bit.\r\n\r\n1. They can get very long, especially if you have lots of workers.\r\n2. 
We don't call out really important issues in relation to less important issues. It's entirely ok if your `msgpack` version is a little off, but probably not ok if some machines have `lz4` and some don't. \r\n\r\nSo I wonder if we might reorganize this message a bit. We might have something like the following:\r\n\r\n```\r\n+-----------+----------+------------+------------+\r\n| Package | client | scheduler | workers |\r\n+-----------+----------+------------+------------+\r\n| python | 3.7 | 3.8 | {3.7, 3.8} |\r\n| lz4 | ... | ... | ... |\r\n| msgpack | | ... | ... |\r\n+-----------+----------+------------+------------+\r\n\r\nNotes:\r\n\r\n- msgpack: Variation is ok, as long as everything is above 0.6\r\n- lz4: Variation is ok, but missing libraries are not\r\n- python: Variation is sometimes ok, sometimes not. It depends on your workloads\r\n```\r\n\r\nWe pack down the one-line-per-worker policy, and instead include sets of versions in the table if necessary. This makes it a bit harder to debug if one of the workers is mismatched, but I think that this behavior isn't common. \r\n\r\nWe include optional prose around each of the libraries and include that prose if that library is found to be mismatched.\n", "before_files": [{"content": "\"\"\" utilities for package version introspection \"\"\"\n\nfrom __future__ import print_function, division, absolute_import\n\nfrom collections import defaultdict\nimport platform\nimport struct\nimport os\nimport sys\nimport importlib\n\n\nrequired_packages = [\n (\"dask\", lambda p: p.__version__),\n (\"distributed\", lambda p: p.__version__),\n (\"msgpack\", lambda p: \".\".join([str(v) for v in p.version])),\n (\"cloudpickle\", lambda p: p.__version__),\n (\"tornado\", lambda p: p.version),\n (\"toolz\", lambda p: p.__version__),\n]\n\noptional_packages = [\n (\"numpy\", lambda p: p.__version__),\n (\"lz4\", lambda p: p.__version__),\n (\"blosc\", lambda p: p.__version__),\n]\n\n\n# only these scheduler packages will be checked for version mismatch\nscheduler_relevant_packages = set(pkg for pkg, _ in required_packages) | set(\n [\"lz4\", \"blosc\"]\n)\n\n\ndef get_versions(packages=None):\n \"\"\"\n Return basic information on our software installation, and our installed versions of packages.\n \"\"\"\n if packages is None:\n packages = []\n\n d = {\n \"host\": get_system_info(),\n \"packages\": get_package_info(\n required_packages + optional_packages + list(packages)\n ),\n }\n\n return d\n\n\ndef get_system_info():\n (sysname, nodename, release, version, machine, processor) = platform.uname()\n host = {\n \"python\": \"%d.%d.%d.%s.%s\" % sys.version_info[:],\n \"python-bits\": struct.calcsize(\"P\") * 8,\n \"OS\": \"%s\" % sysname,\n \"OS-release\": \"%s\" % release,\n \"machine\": \"%s\" % machine,\n \"processor\": \"%s\" % processor,\n \"byteorder\": \"%s\" % sys.byteorder,\n \"LC_ALL\": \"%s\" % os.environ.get(\"LC_ALL\", \"None\"),\n \"LANG\": \"%s\" % os.environ.get(\"LANG\", \"None\"),\n }\n\n return host\n\n\ndef version_of_package(pkg):\n \"\"\" Try a variety of common ways to get the version of a package \"\"\"\n from .utils import ignoring\n\n with ignoring(AttributeError):\n return pkg.__version__\n with ignoring(AttributeError):\n return str(pkg.version)\n with ignoring(AttributeError):\n return \".\".join(map(str, pkg.version_info))\n return None\n\n\ndef get_package_info(pkgs):\n \"\"\" get package versions for the passed required & optional packages \"\"\"\n\n pversions = [(\"python\", \".\".join(map(str, sys.version_info)))]\n for 
pkg in pkgs:\n if isinstance(pkg, (tuple, list)):\n modname, ver_f = pkg\n else:\n modname = pkg\n ver_f = version_of_package\n\n if ver_f is None:\n ver_f = version_of_package\n\n try:\n mod = importlib.import_module(modname)\n ver = ver_f(mod)\n pversions.append((modname, ver))\n except Exception:\n pversions.append((modname, None))\n\n return dict(pversions)\n\n\ndef error_message(scheduler, workers, client, client_name=\"client\"):\n from .utils import asciitable\n\n nodes = {**{client_name: client}, **{\"scheduler\": scheduler}, **workers}\n\n # Hold all versions, e.g. versions[\"scheduler\"][\"distributed\"] = 2.9.3\n node_packages = defaultdict(dict)\n\n # Collect all package versions\n packages = set()\n for node, info in nodes.items():\n if info is None or not (isinstance(info, dict)) or \"packages\" not in info:\n node_packages[node] = defaultdict(lambda: \"UNKNOWN\")\n else:\n node_packages[node] = defaultdict(lambda: \"MISSING\")\n for pkg, version in info[\"packages\"].items():\n node_packages[node][pkg] = version\n packages.add(pkg)\n\n errs = []\n for pkg in sorted(packages):\n versions = set(\n node_packages[node][pkg]\n for node in nodes\n if node != \"scheduler\" or pkg in scheduler_relevant_packages\n )\n if len(versions) <= 1:\n continue\n rows = [\n (node_name, node_packages[node_name][pkg]) for node_name in nodes.keys()\n ]\n errs.append(\"%s\\n%s\" % (pkg, asciitable([\"\", \"version\"], rows)))\n if errs:\n return \"Mismatched versions found\\n\" \"\\n\" \"%s\" % (\"\\n\\n\".join(errs))\n else:\n return \"\"\n\n\nclass VersionMismatchWarning(Warning):\n \"\"\"Indicates version mismatch between nodes\"\"\"\n", "path": "distributed/versions.py"}], "after_files": [{"content": "\"\"\" utilities for package version introspection \"\"\"\n\nfrom __future__ import print_function, division, absolute_import\n\nimport platform\nimport struct\nimport os\nimport sys\nimport importlib\n\n\nrequired_packages = [\n (\"dask\", lambda p: p.__version__),\n (\"distributed\", lambda p: p.__version__),\n (\"msgpack\", lambda p: \".\".join([str(v) for v in p.version])),\n (\"cloudpickle\", lambda p: p.__version__),\n (\"tornado\", lambda p: p.version),\n (\"toolz\", lambda p: p.__version__),\n]\n\noptional_packages = [\n (\"numpy\", lambda p: p.__version__),\n (\"lz4\", lambda p: p.__version__),\n (\"blosc\", lambda p: p.__version__),\n]\n\n\n# only these scheduler packages will be checked for version mismatch\nscheduler_relevant_packages = set(pkg for pkg, _ in required_packages) | set(\n [\"lz4\", \"blosc\"]\n)\n\n\n# notes to be displayed for mismatch packages\nnotes_mismatch_package = {\n \"msgpack\": \"Variation is ok, as long as everything is above 0.6\",\n \"lz4\": \"Variation is ok, but missing libraries are not\",\n \"python\": \"Variation is sometimes ok, sometimes not. 
It depends on your workloads\",\n}\n\n\ndef get_versions(packages=None):\n \"\"\"\n Return basic information on our software installation, and our installed versions of packages.\n \"\"\"\n if packages is None:\n packages = []\n\n d = {\n \"host\": get_system_info(),\n \"packages\": get_package_info(\n required_packages + optional_packages + list(packages)\n ),\n }\n\n return d\n\n\ndef get_system_info():\n (sysname, nodename, release, version, machine, processor) = platform.uname()\n host = {\n \"python\": \"%d.%d.%d.%s.%s\" % sys.version_info[:],\n \"python-bits\": struct.calcsize(\"P\") * 8,\n \"OS\": \"%s\" % sysname,\n \"OS-release\": \"%s\" % release,\n \"machine\": \"%s\" % machine,\n \"processor\": \"%s\" % processor,\n \"byteorder\": \"%s\" % sys.byteorder,\n \"LC_ALL\": \"%s\" % os.environ.get(\"LC_ALL\", \"None\"),\n \"LANG\": \"%s\" % os.environ.get(\"LANG\", \"None\"),\n }\n\n return host\n\n\ndef version_of_package(pkg):\n \"\"\" Try a variety of common ways to get the version of a package \"\"\"\n from .utils import ignoring\n\n with ignoring(AttributeError):\n return pkg.__version__\n with ignoring(AttributeError):\n return str(pkg.version)\n with ignoring(AttributeError):\n return \".\".join(map(str, pkg.version_info))\n return None\n\n\ndef get_package_info(pkgs):\n \"\"\" get package versions for the passed required & optional packages \"\"\"\n\n pversions = [(\"python\", \".\".join(map(str, sys.version_info)))]\n for pkg in pkgs:\n if isinstance(pkg, (tuple, list)):\n modname, ver_f = pkg\n else:\n modname = pkg\n ver_f = version_of_package\n\n if ver_f is None:\n ver_f = version_of_package\n\n try:\n mod = importlib.import_module(modname)\n ver = ver_f(mod)\n pversions.append((modname, ver))\n except Exception:\n pversions.append((modname, None))\n\n return dict(pversions)\n\n\ndef error_message(scheduler, workers, client, client_name=\"client\"):\n from .utils import asciitable\n\n client = client.get(\"packages\") if client else \"UNKNOWN\"\n scheduler = scheduler.get(\"packages\") if scheduler else \"UNKNOWN\"\n workers = {k: v.get(\"packages\") if v else \"UNKNOWN\" for k, v in workers.items()}\n\n packages = set()\n packages.update(client)\n packages.update(scheduler)\n for worker in workers:\n packages.update(workers.get(worker))\n\n errs = []\n notes = []\n for pkg in sorted(packages):\n versions = set()\n scheduler_version = (\n scheduler.get(pkg, \"MISSING\") if isinstance(scheduler, dict) else scheduler\n )\n if pkg in scheduler_relevant_packages:\n versions.add(scheduler_version)\n\n client_version = (\n client.get(pkg, \"MISSING\") if isinstance(client, dict) else client\n )\n versions.add(client_version)\n\n worker_versions = set(\n workers[w].get(pkg, \"MISSING\")\n if isinstance(workers[w], dict)\n else workers[w]\n for w in workers\n )\n versions |= worker_versions\n\n if len(versions) <= 1:\n continue\n if len(worker_versions) == 1:\n worker_versions = list(worker_versions)[0]\n elif len(worker_versions) == 0:\n worker_versions = None\n\n errs.append((pkg, client_version, scheduler_version, worker_versions))\n if pkg in notes_mismatch_package.keys():\n notes.append(f\"- {pkg}: {notes_mismatch_package[pkg]}\")\n\n if errs:\n err_table = asciitable([\"Package\", client_name, \"scheduler\", \"workers\"], errs)\n err_msg = f\"Mismatched versions found\\n\\n{err_table}\"\n if notes:\n err_msg += \"\\nNotes: \\n{}\".format(\"\\n\".join(notes))\n return err_msg\n else:\n return \"\"\n\n\nclass VersionMismatchWarning(Warning):\n \"\"\"Indicates version 
mismatch between nodes\"\"\"\n", "path": "distributed/versions.py"}]} | 3,395 | 934 |
gh_patches_debug_35936 | rasdani/github-patches | git_diff | ESMCI__cime-3019 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Comment at the top of env_batch.xml is wrong
The comment at the top of env_batch.xml (imported from config/config_headers.xml) is:
> These variables may be changed anytime during a run, they
> control arguments to the batch submit command.
>
However, if I submit a job and then try:
```
./xmlchange JOB_WALLCLOCK_TIME=40:00:00
./xmlchange JOB_QUEUE=long
```
My job is killed (if it is in the queue when I make the change) or resubmit fails and I see the following message in my output file:
```
goldy@hobart File /scratch/cluster/goldy/FQ3D_ne5pg3_ne5pg3_mg37/LockedFiles/env_batch.xml has been modified
found difference in JOB_WALLCLOCK_TIME : case '40:00:00' locked '80:00:00'
found difference in USER_REQUESTED_WALLTIME : case '40:00:00' locked ''
ERROR: Batch configuration has changed, please run case.setup --reset
```
The CIME documentation does not seem to mention this ability to change (cf http://esmci.github.io/cime/users_guide/running-a-case.html?highlight=job_wallclock_time).
Please fix the header with correct information.
Question: Is there a way to see the `<header>` information via `xmlquery`?
Reported by a CESM 2.0 user.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/case/case_submit.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 case.submit - Submit a cesm workflow to the queueing system or run it
5 if there is no queueing system. A cesm workflow may include multiple
6 jobs.
7 submit, check_case and check_da_settings are members of class Case in file case.py
8 """
9 import socket
10 from six.moves import configparser
11 from CIME.XML.standard_module_setup import *
12 from CIME.utils import expect, run_and_log_case_status, verbatim_success_msg
13 from CIME.locked_files import unlock_file, lock_file
14 from CIME.test_status import *
15
16 logger = logging.getLogger(__name__)
17
18 def _build_prereq_str(case, prev_job_ids):
19 delimiter = case.get_value("depend_separator")
20 prereq_str = ""
21 for job_id in prev_job_ids.values():
22 prereq_str += str(job_id) + delimiter
23 return prereq_str[:-1]
24
25 def _submit(case, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,
26 resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,
27 batch_args=None):
28 if job is None:
29 job = case.get_primary_job()
30
31 rundir = case.get_value("RUNDIR")
32 if job != "case.test":
33 continue_run = case.get_value("CONTINUE_RUN")
34 expect(os.path.isdir(rundir) or not continue_run,
35 " CONTINUE_RUN is true but RUNDIR {} does not exist".format(rundir))
36
37 # if case.submit is called with the no_batch flag then we assume that this
38 # flag will stay in effect for the duration of the RESUBMITs
39 env_batch = case.get_env("batch")
40 if resubmit:
41 if env_batch.get_batch_system_type() == "none":
42 no_batch = True
43
44 # This is a resubmission, do not reinitialize test values
45 if job == "case.test":
46 case.set_value("IS_FIRST_RUN", False)
47
48 resub = case.get_value("RESUBMIT")
49 logger.info("Submitting job '{}', resubmit={:d}".format(job, resub))
50 case.set_value("RESUBMIT", resub-1)
51 if case.get_value("RESUBMIT_SETS_CONTINUE_RUN"):
52 case.set_value("CONTINUE_RUN", True)
53
54 else:
55 if job == "case.test":
56 case.set_value("IS_FIRST_RUN", True)
57
58 if no_batch:
59 batch_system = "none"
60 else:
61 batch_system = env_batch.get_batch_system_type()
62
63 case.set_value("BATCH_SYSTEM", batch_system)
64
65 env_batch_has_changed = False
66 try:
67 case.check_lockedfile(os.path.basename(env_batch.filename))
68 except SystemExit:
69 env_batch_has_changed = True
70
71 if env_batch.get_batch_system_type() != "none" and env_batch_has_changed:
72 # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
73 logger.warning(\
74 """
75 env_batch.xml appears to have changed, regenerating batch scripts
76 manual edits to these file will be lost!
77 """)
78 env_batch.make_all_batch_files(case)
79
80 unlock_file(os.path.basename(env_batch.filename))
81 lock_file(os.path.basename(env_batch.filename))
82
83 if job == case.get_primary_job():
84 case.check_case()
85 case.check_DA_settings()
86 if case.get_value("MACH") == "mira":
87 with open(".original_host", "w") as fd:
88 fd.write( socket.gethostname())
89
90 #Load Modules
91 case.load_env()
92
93 case.flush()
94
95 logger.warning("submit_jobs {}".format(job))
96 job_ids = case.submit_jobs(no_batch=no_batch, job=job, prereq=prereq,
97 skip_pnl=skip_pnl, resubmit_immediate=resubmit_immediate,
98 allow_fail=allow_fail, mail_user=mail_user,
99 mail_type=mail_type, batch_args=batch_args)
100
101 xml_jobids = []
102 for jobname, jobid in job_ids.items():
103 logger.info("Submitted job {} with id {}".format(jobname, jobid))
104 if jobid:
105 xml_jobids.append("{}:{}".format(jobname, jobid))
106
107 xml_jobid_text = ", ".join(xml_jobids)
108 if xml_jobid_text:
109 case.set_value("JOB_IDS", xml_jobid_text)
110
111 return xml_jobid_text
112
113 def submit(self, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,
114 resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,
115 batch_args=None):
116 if resubmit_immediate and self.get_value("MACH") in ['mira', 'cetus']:
117 logger.warning("resubmit_immediate does not work on Mira/Cetus, submitting normally")
118 resubmit_immediate = False
119
120 if self.get_value("TEST"):
121 caseroot = self.get_value("CASEROOT")
122 casebaseid = self.get_value("CASEBASEID")
123 # This should take care of the race condition where the submitted job
124 # begins immediately and tries to set RUN phase. We proactively assume
125 # a passed SUBMIT phase. If this state is already PASS, don't set it again
126 # because then we'll lose RUN phase info if it's there. This info is important
127 # for system_tests_common to know if it needs to reinitialize the test or not.
128 with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
129 phase_status = ts.get_status(SUBMIT_PHASE)
130 if phase_status != TEST_PASS_STATUS:
131 ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)
132
133 # If this is a resubmit check the hidden file .submit_options for
134 # any submit options used on the original submit and use them again
135 caseroot = self.get_value("CASEROOT")
136 submit_options = os.path.join(caseroot, ".submit_options")
137 if resubmit and os.path.exists(submit_options):
138 config = configparser.SafeConfigParser()
139 config.read(submit_options)
140 if not skip_pnl and config.has_option('SubmitOptions','skip_pnl'):
141 skip_pnl = config.getboolean('SubmitOptions', 'skip_pnl')
142 if mail_user is None and config.has_option('SubmitOptions', 'mail_user'):
143 mail_user = config.get('SubmitOptions', 'mail_user')
144 if mail_type is None and config.has_option('SubmitOptions', 'mail_type'):
145 mail_type = config.get('SubmitOptions', 'mail_type').split(',')
146 if batch_args is None and config.has_option('SubmitOptions', 'batch_args'):
147 batch_args = config.get('SubmitOptions', 'batch_args')
148
149 try:
150 functor = lambda: _submit(self, job=job, no_batch=no_batch, prereq=prereq,
151 allow_fail=allow_fail, resubmit=resubmit,
152 resubmit_immediate=resubmit_immediate, skip_pnl=skip_pnl,
153 mail_user=mail_user, mail_type=mail_type,
154 batch_args=batch_args)
155 run_and_log_case_status(functor, "case.submit", caseroot=caseroot,
156 custom_success_msg_functor=verbatim_success_msg)
157 except:
158 # If something failed in the batch system, make sure to mark
159 # the test as failed if we are running a test.
160 if self.get_value("TEST"):
161 with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:
162 ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)
163
164 raise
165
166 def check_case(self):
167 self.check_lockedfiles()
168 self.create_namelists() # Must be called before check_all_input_data
169 logger.info("Checking that inputdata is available as part of case submission")
170 self.check_all_input_data()
171
172 if self.get_value('COMP_WAV') == 'ww':
173 # the ww3 buildnml has dependancies on inputdata so we must run it again
174 self.create_namelists(component='WAV')
175
176
177 expect(self.get_value("BUILD_COMPLETE"), "Build complete is "
178 "not True please rebuild the model by calling case.build")
179 logger.info("Check case OK")
180
181 def check_DA_settings(self):
182 script = self.get_value("DATA_ASSIMILATION_SCRIPT")
183 cycles = self.get_value("DATA_ASSIMILATION_CYCLES")
184 if len(script) > 0 and os.path.isfile(script) and cycles > 0:
185 logger.info("Data Assimilation enabled using script {} with {:d} cycles".format(script,
186 cycles))
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/lib/CIME/case/case_submit.py b/scripts/lib/CIME/case/case_submit.py
--- a/scripts/lib/CIME/case/case_submit.py
+++ b/scripts/lib/CIME/case/case_submit.py
@@ -37,10 +37,35 @@
# if case.submit is called with the no_batch flag then we assume that this
# flag will stay in effect for the duration of the RESUBMITs
env_batch = case.get_env("batch")
- if resubmit:
- if env_batch.get_batch_system_type() == "none":
- no_batch = True
+ if resubmit and env_batch.get_batch_system_type() == "none":
+ no_batch = True
+ if no_batch:
+ batch_system = "none"
+ else:
+ batch_system = env_batch.get_batch_system_type()
+
+ case.set_value("BATCH_SYSTEM", batch_system)
+
+ env_batch_has_changed = False
+ try:
+ case.check_lockedfile(os.path.basename(env_batch.filename))
+ except:
+ env_batch_has_changed = True
+
+ if batch_system != "none" and env_batch_has_changed:
+ # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
+ logger.warning(\
+"""
+env_batch.xml appears to have changed, regenerating batch scripts
+manual edits to these file will be lost!
+""")
+ env_batch.make_all_batch_files(case)
+
+ unlock_file(os.path.basename(env_batch.filename))
+ lock_file(os.path.basename(env_batch.filename))
+
+ if resubmit:
# This is a resubmission, do not reinitialize test values
if job == "case.test":
case.set_value("IS_FIRST_RUN", False)
@@ -55,31 +80,6 @@
if job == "case.test":
case.set_value("IS_FIRST_RUN", True)
- if no_batch:
- batch_system = "none"
- else:
- batch_system = env_batch.get_batch_system_type()
-
- case.set_value("BATCH_SYSTEM", batch_system)
-
- env_batch_has_changed = False
- try:
- case.check_lockedfile(os.path.basename(env_batch.filename))
- except SystemExit:
- env_batch_has_changed = True
-
- if env_batch.get_batch_system_type() != "none" and env_batch_has_changed:
- # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
- logger.warning(\
-"""
-env_batch.xml appears to have changed, regenerating batch scripts
-manual edits to these file will be lost!
-""")
- env_batch.make_all_batch_files(case)
-
- unlock_file(os.path.basename(env_batch.filename))
- lock_file(os.path.basename(env_batch.filename))
-
if job == case.get_primary_job():
case.check_case()
case.check_DA_settings()
| {"golden_diff": "diff --git a/scripts/lib/CIME/case/case_submit.py b/scripts/lib/CIME/case/case_submit.py\n--- a/scripts/lib/CIME/case/case_submit.py\n+++ b/scripts/lib/CIME/case/case_submit.py\n@@ -37,10 +37,35 @@\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n- if resubmit:\n- if env_batch.get_batch_system_type() == \"none\":\n- no_batch = True\n \n+ if resubmit and env_batch.get_batch_system_type() == \"none\":\n+ no_batch = True\n+ if no_batch:\n+ batch_system = \"none\"\n+ else:\n+ batch_system = env_batch.get_batch_system_type()\n+\n+ case.set_value(\"BATCH_SYSTEM\", batch_system)\n+\n+ env_batch_has_changed = False\n+ try:\n+ case.check_lockedfile(os.path.basename(env_batch.filename))\n+ except:\n+ env_batch_has_changed = True\n+\n+ if batch_system != \"none\" and env_batch_has_changed:\n+ # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)\n+ logger.warning(\\\n+\"\"\"\n+env_batch.xml appears to have changed, regenerating batch scripts\n+manual edits to these file will be lost!\n+\"\"\")\n+ env_batch.make_all_batch_files(case)\n+\n+ unlock_file(os.path.basename(env_batch.filename))\n+ lock_file(os.path.basename(env_batch.filename))\n+\n+ if resubmit:\n # This is a resubmission, do not reinitialize test values\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", False)\n@@ -55,31 +80,6 @@\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", True)\n \n- if no_batch:\n- batch_system = \"none\"\n- else:\n- batch_system = env_batch.get_batch_system_type()\n-\n- case.set_value(\"BATCH_SYSTEM\", batch_system)\n-\n- env_batch_has_changed = False\n- try:\n- case.check_lockedfile(os.path.basename(env_batch.filename))\n- except SystemExit:\n- env_batch_has_changed = True\n-\n- if env_batch.get_batch_system_type() != \"none\" and env_batch_has_changed:\n- # May need to regen batch files if user made batch setting changes (e.g. 
walltime, queue, etc)\n- logger.warning(\\\n-\"\"\"\n-env_batch.xml appears to have changed, regenerating batch scripts\n-manual edits to these file will be lost!\n-\"\"\")\n- env_batch.make_all_batch_files(case)\n-\n- unlock_file(os.path.basename(env_batch.filename))\n- lock_file(os.path.basename(env_batch.filename))\n-\n if job == case.get_primary_job():\n case.check_case()\n case.check_DA_settings()\n", "issue": "Comment at the top of env_batch.xml is wrong\nThe comment at the top of env_batch.xml (imported from config/config_headers.xml) is:\r\n\r\n> These variables may be changed anytime during a run, they\r\n> control arguments to the batch submit command.\r\n> \r\nHowever, if I submit a job and then try:\r\n```\r\n./xmlchange JOB_WALLCLOCK_TIME=40:00:00\r\n./xmlchange JOB_QUEUE=long\r\n```\r\nMy job is killed (if it is in the queue when I make the change) or resubmit fails and I see the following message in my output file:\r\n```\r\ngoldy@hobart^GFile /scratch/cluster/goldy/FQ3D_ne5pg3_ne5pg3_mg37/LockedFiles/env_batch.xml has been modified\r\n found difference in JOB_WALLCLOCK_TIME : case '40:00:00' locked '80:00:00'\r\n found difference in USER_REQUESTED_WALLTIME : case '40:00:00' locked ''\r\nERROR: Batch configuration has changed, please run case.setup --reset\r\n```\r\nThe CIME documentation does not seem to mention this ability to change (cf http://esmci.github.io/cime/users_guide/running-a-case.html?highlight=job_wallclock_time).\r\n\r\nPlease fix the header with correct information.\r\n\r\nQuestion: Is there a way to see the `<header>` information via `xmlquery`?\r\n\r\nReported by a CESM 2.0 user.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\ncase.submit - Submit a cesm workflow to the queueing system or run it\nif there is no queueing system. 
A cesm workflow may include multiple\njobs.\nsubmit, check_case and check_da_settings are members of class Case in file case.py\n\"\"\"\nimport socket\nfrom six.moves import configparser\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_and_log_case_status, verbatim_success_msg\nfrom CIME.locked_files import unlock_file, lock_file\nfrom CIME.test_status import *\n\nlogger = logging.getLogger(__name__)\n\ndef _build_prereq_str(case, prev_job_ids):\n delimiter = case.get_value(\"depend_separator\")\n prereq_str = \"\"\n for job_id in prev_job_ids.values():\n prereq_str += str(job_id) + delimiter\n return prereq_str[:-1]\n\ndef _submit(case, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,\n resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,\n batch_args=None):\n if job is None:\n job = case.get_primary_job()\n\n rundir = case.get_value(\"RUNDIR\")\n if job != \"case.test\":\n continue_run = case.get_value(\"CONTINUE_RUN\")\n expect(os.path.isdir(rundir) or not continue_run,\n \" CONTINUE_RUN is true but RUNDIR {} does not exist\".format(rundir))\n\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n if resubmit:\n if env_batch.get_batch_system_type() == \"none\":\n no_batch = True\n\n # This is a resubmission, do not reinitialize test values\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", False)\n\n resub = case.get_value(\"RESUBMIT\")\n logger.info(\"Submitting job '{}', resubmit={:d}\".format(job, resub))\n case.set_value(\"RESUBMIT\", resub-1)\n if case.get_value(\"RESUBMIT_SETS_CONTINUE_RUN\"):\n case.set_value(\"CONTINUE_RUN\", True)\n\n else:\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", True)\n\n if no_batch:\n batch_system = \"none\"\n else:\n batch_system = env_batch.get_batch_system_type()\n\n case.set_value(\"BATCH_SYSTEM\", batch_system)\n\n env_batch_has_changed = False\n try:\n case.check_lockedfile(os.path.basename(env_batch.filename))\n except SystemExit:\n env_batch_has_changed = True\n\n if env_batch.get_batch_system_type() != \"none\" and env_batch_has_changed:\n # May need to regen batch files if user made batch setting changes (e.g. 
walltime, queue, etc)\n logger.warning(\\\n\"\"\"\nenv_batch.xml appears to have changed, regenerating batch scripts\nmanual edits to these file will be lost!\n\"\"\")\n env_batch.make_all_batch_files(case)\n\n unlock_file(os.path.basename(env_batch.filename))\n lock_file(os.path.basename(env_batch.filename))\n\n if job == case.get_primary_job():\n case.check_case()\n case.check_DA_settings()\n if case.get_value(\"MACH\") == \"mira\":\n with open(\".original_host\", \"w\") as fd:\n fd.write( socket.gethostname())\n\n #Load Modules\n case.load_env()\n\n case.flush()\n\n logger.warning(\"submit_jobs {}\".format(job))\n job_ids = case.submit_jobs(no_batch=no_batch, job=job, prereq=prereq,\n skip_pnl=skip_pnl, resubmit_immediate=resubmit_immediate,\n allow_fail=allow_fail, mail_user=mail_user,\n mail_type=mail_type, batch_args=batch_args)\n\n xml_jobids = []\n for jobname, jobid in job_ids.items():\n logger.info(\"Submitted job {} with id {}\".format(jobname, jobid))\n if jobid:\n xml_jobids.append(\"{}:{}\".format(jobname, jobid))\n\n xml_jobid_text = \", \".join(xml_jobids)\n if xml_jobid_text:\n case.set_value(\"JOB_IDS\", xml_jobid_text)\n\n return xml_jobid_text\n\ndef submit(self, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,\n resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,\n batch_args=None):\n if resubmit_immediate and self.get_value(\"MACH\") in ['mira', 'cetus']:\n logger.warning(\"resubmit_immediate does not work on Mira/Cetus, submitting normally\")\n resubmit_immediate = False\n\n if self.get_value(\"TEST\"):\n caseroot = self.get_value(\"CASEROOT\")\n casebaseid = self.get_value(\"CASEBASEID\")\n # This should take care of the race condition where the submitted job\n # begins immediately and tries to set RUN phase. We proactively assume\n # a passed SUBMIT phase. If this state is already PASS, don't set it again\n # because then we'll lose RUN phase info if it's there. 
This info is important\n # for system_tests_common to know if it needs to reinitialize the test or not.\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n phase_status = ts.get_status(SUBMIT_PHASE)\n if phase_status != TEST_PASS_STATUS:\n ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)\n\n # If this is a resubmit check the hidden file .submit_options for\n # any submit options used on the original submit and use them again\n caseroot = self.get_value(\"CASEROOT\")\n submit_options = os.path.join(caseroot, \".submit_options\")\n if resubmit and os.path.exists(submit_options):\n config = configparser.SafeConfigParser()\n config.read(submit_options)\n if not skip_pnl and config.has_option('SubmitOptions','skip_pnl'):\n skip_pnl = config.getboolean('SubmitOptions', 'skip_pnl')\n if mail_user is None and config.has_option('SubmitOptions', 'mail_user'):\n mail_user = config.get('SubmitOptions', 'mail_user')\n if mail_type is None and config.has_option('SubmitOptions', 'mail_type'):\n mail_type = config.get('SubmitOptions', 'mail_type').split(',')\n if batch_args is None and config.has_option('SubmitOptions', 'batch_args'):\n batch_args = config.get('SubmitOptions', 'batch_args')\n\n try:\n functor = lambda: _submit(self, job=job, no_batch=no_batch, prereq=prereq,\n allow_fail=allow_fail, resubmit=resubmit,\n resubmit_immediate=resubmit_immediate, skip_pnl=skip_pnl,\n mail_user=mail_user, mail_type=mail_type,\n batch_args=batch_args)\n run_and_log_case_status(functor, \"case.submit\", caseroot=caseroot,\n custom_success_msg_functor=verbatim_success_msg)\n except:\n # If something failed in the batch system, make sure to mark\n # the test as failed if we are running a test.\n if self.get_value(\"TEST\"):\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)\n\n raise\n\ndef check_case(self):\n self.check_lockedfiles()\n self.create_namelists() # Must be called before check_all_input_data\n logger.info(\"Checking that inputdata is available as part of case submission\")\n self.check_all_input_data()\n\n if self.get_value('COMP_WAV') == 'ww':\n # the ww3 buildnml has dependancies on inputdata so we must run it again\n self.create_namelists(component='WAV')\n\n\n expect(self.get_value(\"BUILD_COMPLETE\"), \"Build complete is \"\n \"not True please rebuild the model by calling case.build\")\n logger.info(\"Check case OK\")\n\ndef check_DA_settings(self):\n script = self.get_value(\"DATA_ASSIMILATION_SCRIPT\")\n cycles = self.get_value(\"DATA_ASSIMILATION_CYCLES\")\n if len(script) > 0 and os.path.isfile(script) and cycles > 0:\n logger.info(\"Data Assimilation enabled using script {} with {:d} cycles\".format(script,\n cycles))\n", "path": "scripts/lib/CIME/case/case_submit.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\ncase.submit - Submit a cesm workflow to the queueing system or run it\nif there is no queueing system. 
A cesm workflow may include multiple\njobs.\nsubmit, check_case and check_da_settings are members of class Case in file case.py\n\"\"\"\nimport socket\nfrom six.moves import configparser\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_and_log_case_status, verbatim_success_msg\nfrom CIME.locked_files import unlock_file, lock_file\nfrom CIME.test_status import *\n\nlogger = logging.getLogger(__name__)\n\ndef _build_prereq_str(case, prev_job_ids):\n delimiter = case.get_value(\"depend_separator\")\n prereq_str = \"\"\n for job_id in prev_job_ids.values():\n prereq_str += str(job_id) + delimiter\n return prereq_str[:-1]\n\ndef _submit(case, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,\n resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,\n batch_args=None):\n if job is None:\n job = case.get_primary_job()\n\n rundir = case.get_value(\"RUNDIR\")\n if job != \"case.test\":\n continue_run = case.get_value(\"CONTINUE_RUN\")\n expect(os.path.isdir(rundir) or not continue_run,\n \" CONTINUE_RUN is true but RUNDIR {} does not exist\".format(rundir))\n\n # if case.submit is called with the no_batch flag then we assume that this\n # flag will stay in effect for the duration of the RESUBMITs\n env_batch = case.get_env(\"batch\")\n\n if resubmit and env_batch.get_batch_system_type() == \"none\":\n no_batch = True\n if no_batch:\n batch_system = \"none\"\n else:\n batch_system = env_batch.get_batch_system_type()\n\n case.set_value(\"BATCH_SYSTEM\", batch_system)\n\n env_batch_has_changed = False\n try:\n case.check_lockedfile(os.path.basename(env_batch.filename))\n except:\n env_batch_has_changed = True\n\n if batch_system != \"none\" and env_batch_has_changed:\n # May need to regen batch files if user made batch setting changes (e.g. 
walltime, queue, etc)\n logger.warning(\\\n\"\"\"\nenv_batch.xml appears to have changed, regenerating batch scripts\nmanual edits to these file will be lost!\n\"\"\")\n env_batch.make_all_batch_files(case)\n\n unlock_file(os.path.basename(env_batch.filename))\n lock_file(os.path.basename(env_batch.filename))\n\n if resubmit:\n # This is a resubmission, do not reinitialize test values\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", False)\n\n resub = case.get_value(\"RESUBMIT\")\n logger.info(\"Submitting job '{}', resubmit={:d}\".format(job, resub))\n case.set_value(\"RESUBMIT\", resub-1)\n if case.get_value(\"RESUBMIT_SETS_CONTINUE_RUN\"):\n case.set_value(\"CONTINUE_RUN\", True)\n\n else:\n if job == \"case.test\":\n case.set_value(\"IS_FIRST_RUN\", True)\n\n if job == case.get_primary_job():\n case.check_case()\n case.check_DA_settings()\n if case.get_value(\"MACH\") == \"mira\":\n with open(\".original_host\", \"w\") as fd:\n fd.write( socket.gethostname())\n\n #Load Modules\n case.load_env()\n\n case.flush()\n\n logger.warning(\"submit_jobs {}\".format(job))\n job_ids = case.submit_jobs(no_batch=no_batch, job=job, prereq=prereq,\n skip_pnl=skip_pnl, resubmit_immediate=resubmit_immediate,\n allow_fail=allow_fail, mail_user=mail_user,\n mail_type=mail_type, batch_args=batch_args)\n\n xml_jobids = []\n for jobname, jobid in job_ids.items():\n logger.info(\"Submitted job {} with id {}\".format(jobname, jobid))\n if jobid:\n xml_jobids.append(\"{}:{}\".format(jobname, jobid))\n\n xml_jobid_text = \", \".join(xml_jobids)\n if xml_jobid_text:\n case.set_value(\"JOB_IDS\", xml_jobid_text)\n\n return xml_jobid_text\n\ndef submit(self, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False,\n resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None,\n batch_args=None):\n if resubmit_immediate and self.get_value(\"MACH\") in ['mira', 'cetus']:\n logger.warning(\"resubmit_immediate does not work on Mira/Cetus, submitting normally\")\n resubmit_immediate = False\n\n if self.get_value(\"TEST\"):\n caseroot = self.get_value(\"CASEROOT\")\n casebaseid = self.get_value(\"CASEBASEID\")\n # This should take care of the race condition where the submitted job\n # begins immediately and tries to set RUN phase. We proactively assume\n # a passed SUBMIT phase. If this state is already PASS, don't set it again\n # because then we'll lose RUN phase info if it's there. 
This info is important\n # for system_tests_common to know if it needs to reinitialize the test or not.\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n phase_status = ts.get_status(SUBMIT_PHASE)\n if phase_status != TEST_PASS_STATUS:\n ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS)\n\n # If this is a resubmit check the hidden file .submit_options for\n # any submit options used on the original submit and use them again\n caseroot = self.get_value(\"CASEROOT\")\n submit_options = os.path.join(caseroot, \".submit_options\")\n if resubmit and os.path.exists(submit_options):\n config = configparser.SafeConfigParser()\n config.read(submit_options)\n if not skip_pnl and config.has_option('SubmitOptions','skip_pnl'):\n skip_pnl = config.getboolean('SubmitOptions', 'skip_pnl')\n if mail_user is None and config.has_option('SubmitOptions', 'mail_user'):\n mail_user = config.get('SubmitOptions', 'mail_user')\n if mail_type is None and config.has_option('SubmitOptions', 'mail_type'):\n mail_type = config.get('SubmitOptions', 'mail_type').split(',')\n if batch_args is None and config.has_option('SubmitOptions', 'batch_args'):\n batch_args = config.get('SubmitOptions', 'batch_args')\n\n try:\n functor = lambda: _submit(self, job=job, no_batch=no_batch, prereq=prereq,\n allow_fail=allow_fail, resubmit=resubmit,\n resubmit_immediate=resubmit_immediate, skip_pnl=skip_pnl,\n mail_user=mail_user, mail_type=mail_type,\n batch_args=batch_args)\n run_and_log_case_status(functor, \"case.submit\", caseroot=caseroot,\n custom_success_msg_functor=verbatim_success_msg)\n except:\n # If something failed in the batch system, make sure to mark\n # the test as failed if we are running a test.\n if self.get_value(\"TEST\"):\n with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts:\n ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS)\n\n raise\n\ndef check_case(self):\n self.check_lockedfiles()\n self.create_namelists() # Must be called before check_all_input_data\n logger.info(\"Checking that inputdata is available as part of case submission\")\n self.check_all_input_data()\n\n if self.get_value('COMP_WAV') == 'ww':\n # the ww3 buildnml has dependancies on inputdata so we must run it again\n self.create_namelists(component='WAV')\n\n\n expect(self.get_value(\"BUILD_COMPLETE\"), \"Build complete is \"\n \"not True please rebuild the model by calling case.build\")\n logger.info(\"Check case OK\")\n\ndef check_DA_settings(self):\n script = self.get_value(\"DATA_ASSIMILATION_SCRIPT\")\n cycles = self.get_value(\"DATA_ASSIMILATION_CYCLES\")\n if len(script) > 0 and os.path.isfile(script) and cycles > 0:\n logger.info(\"Data Assimilation enabled using script {} with {:d} cycles\".format(script,\n cycles))\n", "path": "scripts/lib/CIME/case/case_submit.py"}]} | 2,917 | 655 |
gh_patches_debug_339 | rasdani/github-patches | git_diff | pyro-ppl__pyro-3164 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyTorch 2.0 compatibility: Explicit PyTorch 1.x check causing issues with packages that depend on PyTorch / pyro (e.g. BoTorch)
### Issue Description
The explicit check for PyTorch 1.x here (https://github.com/pyro-ppl/pyro/blob/dev/pyro/distributions/torch_patch.py#L10) is causing problems when another package has a dependency on PyTorch + Pyro, since PyTorch is now at 2.0. For example, it is causing BoTorch tests to fail here (https://github.com/pytorch/botorch/pull/1551).
Could this check be removed to allow for PyTorch 2.0?
### Environment
Mac OS 11.7.1
Python 3.10
PyTorch 2.0
Pyro 1.8.3
### Code Snippet
https://github.com/pytorch/botorch/actions/runs/3659534850/jobs/6185642011
--- END ISSUE ---
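One possible relaxation of the version check flagged in the issue above, shown purely as an illustrative sketch (the maintainers may equally well drop the check altogether), is to turn the hard assertion into a soft warning:

```python
# Illustrative sketch only: soften the hard PyTorch 1.x pin instead of asserting on it.
import warnings

import torch

if not torch.__version__.startswith("1."):
    warnings.warn(
        f"pyro's torch_patch was written against PyTorch 1.x; found torch=={torch.__version__}",
        UserWarning,
    )
```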
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/distributions/torch_patch.py`
Content:
```
1 # Copyright (c) 2017-2019 Uber Technologies, Inc.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import functools
5 import math
6 import weakref
7
8 import torch
9
10 assert torch.__version__.startswith("1.")
11
12
13 def patch_dependency(target, root_module=torch):
14 parts = target.split(".")
15 assert parts[0] == root_module.__name__
16 module = root_module
17 for part in parts[1:-1]:
18 module = getattr(module, part)
19 name = parts[-1]
20 old_fn = getattr(module, name, None)
21 old_fn = getattr(old_fn, "_pyro_unpatched", old_fn) # ensure patching is idempotent
22
23 def decorator(new_fn):
24 try:
25 functools.update_wrapper(new_fn, old_fn)
26 except Exception:
27 for attr in functools.WRAPPER_ASSIGNMENTS:
28 if hasattr(old_fn, attr):
29 setattr(new_fn, attr, getattr(old_fn, attr))
30 new_fn._pyro_unpatched = old_fn
31 setattr(module, name, new_fn)
32 return new_fn
33
34 return decorator
35
36
37 # TODO: Move upstream to allow for pickle serialization of transforms
38 @patch_dependency("torch.distributions.transforms.Transform.__getstate__")
39 def _Transform__getstate__(self):
40 attrs = {}
41 for k, v in self.__dict__.items():
42 if isinstance(v, weakref.ref):
43 attrs[k] = None
44 else:
45 attrs[k] = v
46 return attrs
47
48
49 # TODO move upstream
50 @patch_dependency("torch.distributions.transforms.Transform.clear_cache")
51 def _Transform_clear_cache(self):
52 if self._cache_size == 1:
53 self._cached_x_y = None, None
54
55
56 # TODO move upstream
57 @patch_dependency("torch.distributions.TransformedDistribution.clear_cache")
58 def _TransformedDistribution_clear_cache(self):
59 for t in self.transforms:
60 t.clear_cache()
61
62
63 # TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream
64 @patch_dependency("torch.distributions.HalfCauchy.log_prob")
65 def _HalfCauchy_logprob(self, value):
66 if self._validate_args:
67 self._validate_sample(value)
68 value = torch.as_tensor(
69 value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device
70 )
71 log_prob = self.base_dist.log_prob(value) + math.log(2)
72 log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float("inf"))
73 return log_prob
74
75
76 # TODO fix batch_shape have an extra singleton dimension upstream
77 @patch_dependency("torch.distributions.constraints._PositiveDefinite.check")
78 def _PositiveDefinite_check(self, value):
79 matrix_shape = value.shape[-2:]
80 batch_shape = value.shape[:-2]
81 flattened_value = value.reshape((-1,) + matrix_shape)
82 return torch.stack(
83 [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]
84 ).view(batch_shape)
85
86
87 @patch_dependency("torch.distributions.constraints._CorrCholesky.check")
88 def _CorrCholesky_check(self, value):
89 row_norm = torch.linalg.norm(value.detach(), dim=-1)
90 unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)
91 return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm
92
93
94 # This adds a __call__ method to satisfy sphinx.
95 @patch_dependency("torch.distributions.utils.lazy_property.__call__")
96 def _lazy_property__call__(self):
97 raise NotImplementedError
98
99
100 __all__ = []
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyro/distributions/torch_patch.py b/pyro/distributions/torch_patch.py
--- a/pyro/distributions/torch_patch.py
+++ b/pyro/distributions/torch_patch.py
@@ -7,8 +7,6 @@
import torch
-assert torch.__version__.startswith("1.")
-
def patch_dependency(target, root_module=torch):
parts = target.split(".")
| {"golden_diff": "diff --git a/pyro/distributions/torch_patch.py b/pyro/distributions/torch_patch.py\n--- a/pyro/distributions/torch_patch.py\n+++ b/pyro/distributions/torch_patch.py\n@@ -7,8 +7,6 @@\n \n import torch\n \n-assert torch.__version__.startswith(\"1.\")\n-\n \n def patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n", "issue": "PyTorch 2.0 compatibility: Explicit PyTorch 1.x check causing issues with packages that depend on PyTorch / pyro (e.g. BoTorch)\n### Issue Description\r\nThe explicit check for PyTorch 1.x here (https://github.com/pyro-ppl/pyro/blob/dev/pyro/distributions/torch_patch.py#L10) is causing problems when another package has a dependency on PyTorch + Pyro, since PyTorch is now at 2.0. For example, it is causing BoTorch tests to fail here (https://github.com/pytorch/botorch/pull/1551).\r\n\r\nCould this check be removed to allow for PyTorch 2.0?\r\n\r\n### Environment\r\nMac OS 11.7.1\r\nPython 3.10\r\nPyTorch 2.0\r\nPyro 1.8.3\r\n\r\n### Code Snippet\r\nhttps://github.com/pytorch/botorch/actions/runs/3659534850/jobs/6185642011\n", "before_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport functools\nimport math\nimport weakref\n\nimport torch\n\nassert torch.__version__.startswith(\"1.\")\n\n\ndef patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n assert parts[0] == root_module.__name__\n module = root_module\n for part in parts[1:-1]:\n module = getattr(module, part)\n name = parts[-1]\n old_fn = getattr(module, name, None)\n old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn) # ensure patching is idempotent\n\n def decorator(new_fn):\n try:\n functools.update_wrapper(new_fn, old_fn)\n except Exception:\n for attr in functools.WRAPPER_ASSIGNMENTS:\n if hasattr(old_fn, attr):\n setattr(new_fn, attr, getattr(old_fn, attr))\n new_fn._pyro_unpatched = old_fn\n setattr(module, name, new_fn)\n return new_fn\n\n return decorator\n\n\n# TODO: Move upstream to allow for pickle serialization of transforms\n@patch_dependency(\"torch.distributions.transforms.Transform.__getstate__\")\ndef _Transform__getstate__(self):\n attrs = {}\n for k, v in self.__dict__.items():\n if isinstance(v, weakref.ref):\n attrs[k] = None\n else:\n attrs[k] = v\n return attrs\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.transforms.Transform.clear_cache\")\ndef _Transform_clear_cache(self):\n if self._cache_size == 1:\n self._cached_x_y = None, None\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.TransformedDistribution.clear_cache\")\ndef _TransformedDistribution_clear_cache(self):\n for t in self.transforms:\n t.clear_cache()\n\n\n# TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream\n@patch_dependency(\"torch.distributions.HalfCauchy.log_prob\")\ndef _HalfCauchy_logprob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n value = torch.as_tensor(\n value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device\n )\n log_prob = self.base_dist.log_prob(value) + math.log(2)\n log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float(\"inf\"))\n return log_prob\n\n\n# TODO fix batch_shape have an extra singleton dimension upstream\n@patch_dependency(\"torch.distributions.constraints._PositiveDefinite.check\")\ndef _PositiveDefinite_check(self, value):\n matrix_shape = value.shape[-2:]\n batch_shape = value.shape[:-2]\n flattened_value = value.reshape((-1,) + matrix_shape)\n return 
torch.stack(\n [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]\n ).view(batch_shape)\n\n\n@patch_dependency(\"torch.distributions.constraints._CorrCholesky.check\")\ndef _CorrCholesky_check(self, value):\n row_norm = torch.linalg.norm(value.detach(), dim=-1)\n unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)\n return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm\n\n\n# This adds a __call__ method to satisfy sphinx.\n@patch_dependency(\"torch.distributions.utils.lazy_property.__call__\")\ndef _lazy_property__call__(self):\n raise NotImplementedError\n\n\n__all__ = []\n", "path": "pyro/distributions/torch_patch.py"}], "after_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport functools\nimport math\nimport weakref\n\nimport torch\n\n\ndef patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n assert parts[0] == root_module.__name__\n module = root_module\n for part in parts[1:-1]:\n module = getattr(module, part)\n name = parts[-1]\n old_fn = getattr(module, name, None)\n old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn) # ensure patching is idempotent\n\n def decorator(new_fn):\n try:\n functools.update_wrapper(new_fn, old_fn)\n except Exception:\n for attr in functools.WRAPPER_ASSIGNMENTS:\n if hasattr(old_fn, attr):\n setattr(new_fn, attr, getattr(old_fn, attr))\n new_fn._pyro_unpatched = old_fn\n setattr(module, name, new_fn)\n return new_fn\n\n return decorator\n\n\n# TODO: Move upstream to allow for pickle serialization of transforms\n@patch_dependency(\"torch.distributions.transforms.Transform.__getstate__\")\ndef _Transform__getstate__(self):\n attrs = {}\n for k, v in self.__dict__.items():\n if isinstance(v, weakref.ref):\n attrs[k] = None\n else:\n attrs[k] = v\n return attrs\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.transforms.Transform.clear_cache\")\ndef _Transform_clear_cache(self):\n if self._cache_size == 1:\n self._cached_x_y = None, None\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.TransformedDistribution.clear_cache\")\ndef _TransformedDistribution_clear_cache(self):\n for t in self.transforms:\n t.clear_cache()\n\n\n# TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream\n@patch_dependency(\"torch.distributions.HalfCauchy.log_prob\")\ndef _HalfCauchy_logprob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n value = torch.as_tensor(\n value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device\n )\n log_prob = self.base_dist.log_prob(value) + math.log(2)\n log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float(\"inf\"))\n return log_prob\n\n\n# TODO fix batch_shape have an extra singleton dimension upstream\n@patch_dependency(\"torch.distributions.constraints._PositiveDefinite.check\")\ndef _PositiveDefinite_check(self, value):\n matrix_shape = value.shape[-2:]\n batch_shape = value.shape[:-2]\n flattened_value = value.reshape((-1,) + matrix_shape)\n return torch.stack(\n [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]\n ).view(batch_shape)\n\n\n@patch_dependency(\"torch.distributions.constraints._CorrCholesky.check\")\ndef _CorrCholesky_check(self, value):\n row_norm = torch.linalg.norm(value.detach(), dim=-1)\n unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)\n return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm\n\n\n# This adds a __call__ method to satisfy 
sphinx.\n@patch_dependency(\"torch.distributions.utils.lazy_property.__call__\")\ndef _lazy_property__call__(self):\n raise NotImplementedError\n\n\n__all__ = []\n", "path": "pyro/distributions/torch_patch.py"}]} | 1,484 | 86 |
gh_patches_debug_30012 | rasdani/github-patches | git_diff | TheAlgorithms__Python-2443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dev sprint ideas: More tests, type hints and less complexity
Currently, some of the programs use static type checking, like this [program](https://github.com/TheAlgorithms/Python/blob/master/dynamic_programming/fast_fibonacci.py), but many others do not use static typing.
It is good practice to use static typing because it makes code clearer and more readable, so should we make it a standard for this repository? We can use [mypy](http://mypy-lang.org/) to check the code; a small sketch of the kind of annotations this would add is shown below.
[more on static typing](https://medium.com/@ageitgey/learn-how-to-use-static-type-checking-in-python-3-6-in-10-minutes-12c86d72677b)
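As a minimal illustration (a generic sketch, not a file from this repository) of the annotations mypy would check:

```python
# Sketch only: the style of type annotations the issue is asking for.
from typing import List


def fibonacci(n: int) -> List[int]:
    """Return the first n Fibonacci numbers."""
    series: List[int] = [0, 1]
    while len(series) < n:
        series.append(series[-1] + series[-2])
    return series[:n]


print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```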
thank you
### Dev sprint ideas:
* [ ] [Add tests to Python files with <10% test coverage.](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645231020)
* [ ] [Add static typing to functions and methods.](https://github.com/TheAlgorithms/Python/issues/2128)
* [ ] [Set `flake8 --max-complexity=15`](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645190839) (Ensure files have strong tests ___before___ refactoring). Test results from #2139...
* [ ] ./boolean_algebra/quine_mc_cluskey.py:82:1: C901 'selection' is too complex (17)
* [ ] ./digital_image_processing/edge_detection/canny.py:20:1: C901 'canny' is too complex (17) @lighttxu
* [ ] ./graphs/minimum_spanning_tree_prims.py:5:1: C901 'PrimsAlgorithm' is too complex (21)
* [ ] Add doctests aligned with https://en.wikipedia.org/wiki/Prim%27s_algorithm
* [ ] In a ___separate___ PR reduce the McCabe complexity
* [ ] ./linear_algebra/src/polynom-for-points.py:1:1: C901 'points_to_polynomial' is too complex (23) @nic-dern
* [ ] ./machine_learning/linear_discriminant_analysis.py:251:1: C901 'main' is too complex (25)
* [x] ./hashes/hamming_code.py:71:1: C901 'emitterConverter' is too complex (16) #2140
* [x] ./hashes/hamming_code.py:153:1: C901 'receptorConverter' is too complex (20) #2140
* [x] ./project_euler/problem_551/sol1.py:20:1: C901 'next_term' is too complex (16) #2141
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searches/simple_binary_search.py`
Content:
```
1 """
2 Pure Python implementation of a binary search algorithm.
3
4 For doctests run following command:
5 python3 -m doctest -v simple_binary_search.py
6
7 For manual testing run:
8 python3 simple_binary_search.py
9 """
10 from __future__ import annotations
11
12
13 def binary_search(a_list: list[int], item: int) -> bool:
14 """
15 >>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
16 >>> print(binary_search(test_list, 3))
17 False
18 >>> print(binary_search(test_list, 13))
19 True
20 >>> print(binary_search([4, 4, 5, 6, 7], 4))
21 True
22 >>> print(binary_search([4, 4, 5, 6, 7], -10))
23 False
24 >>> print(binary_search([-18, 2], -18))
25 True
26 >>> print(binary_search([5], 5))
27 True
28 >>> print(binary_search(['a', 'c', 'd'], 'c'))
29 True
30 >>> print(binary_search(['a', 'c', 'd'], 'f'))
31 False
32 >>> print(binary_search([], 1))
33 False
34 >>> print(binary_search([-.1, .1 , .8], .1))
35 True
36 >>> binary_search(range(-5000, 5000, 10), 80)
37 True
38 >>> binary_search(range(-5000, 5000, 10), 1255)
39 False
40 >>> binary_search(range(0, 10000, 5), 2)
41 False
42 """
43 if len(a_list) == 0:
44 return False
45 midpoint = len(a_list) // 2
46 if a_list[midpoint] == item:
47 return True
48 if item < a_list[midpoint]:
49 return binary_search(a_list[:midpoint], item)
50 else:
51 return binary_search(a_list[midpoint + 1 :], item)
52
53
54 if __name__ == "__main__":
55 user_input = input("Enter numbers separated by comma:\n").strip()
56 sequence = [int(item.strip()) for item in user_input.split(",")]
57 target = int(input("Enter the number to be found in the list:\n").strip())
58 not_str = "" if binary_search(sequence, target) else "not "
59 print(f"{target} was {not_str}found in {sequence}")
60
```
Path: `graphs/minimum_spanning_tree_kruskal.py`
Content:
```
1 if __name__ == "__main__":
2 num_nodes, num_edges = list(map(int, input().strip().split()))
3
4 edges = []
5
6 for i in range(num_edges):
7 node1, node2, cost = list(map(int, input().strip().split()))
8 edges.append((i, node1, node2, cost))
9
10 edges = sorted(edges, key=lambda edge: edge[3])
11
12 parent = list(range(num_nodes))
13
14 def find_parent(i):
15 if i != parent[i]:
16 parent[i] = find_parent(parent[i])
17 return parent[i]
18
19 minimum_spanning_tree_cost = 0
20 minimum_spanning_tree = []
21
22 for edge in edges:
23 parent_a = find_parent(edge[1])
24 parent_b = find_parent(edge[2])
25 if parent_a != parent_b:
26 minimum_spanning_tree_cost += edge[3]
27 minimum_spanning_tree.append(edge)
28 parent[parent_a] = parent_b
29
30 print(minimum_spanning_tree_cost)
31 for edge in minimum_spanning_tree:
32 print(edge)
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/graphs/minimum_spanning_tree_kruskal.py b/graphs/minimum_spanning_tree_kruskal.py
--- a/graphs/minimum_spanning_tree_kruskal.py
+++ b/graphs/minimum_spanning_tree_kruskal.py
@@ -1,13 +1,5 @@
-if __name__ == "__main__":
- num_nodes, num_edges = list(map(int, input().strip().split()))
-
- edges = []
-
- for i in range(num_edges):
- node1, node2, cost = list(map(int, input().strip().split()))
- edges.append((i, node1, node2, cost))
-
- edges = sorted(edges, key=lambda edge: edge[3])
+def kruskal(num_nodes, num_edges, edges):
+ edges = sorted(edges, key=lambda edge: edge[2])
parent = list(range(num_nodes))
@@ -20,13 +12,22 @@
minimum_spanning_tree = []
for edge in edges:
- parent_a = find_parent(edge[1])
- parent_b = find_parent(edge[2])
+ parent_a = find_parent(edge[0])
+ parent_b = find_parent(edge[1])
if parent_a != parent_b:
- minimum_spanning_tree_cost += edge[3]
+ minimum_spanning_tree_cost += edge[2]
minimum_spanning_tree.append(edge)
parent[parent_a] = parent_b
- print(minimum_spanning_tree_cost)
- for edge in minimum_spanning_tree:
- print(edge)
+ return minimum_spanning_tree
+
+
+if __name__ == "__main__": # pragma: no cover
+ num_nodes, num_edges = list(map(int, input().strip().split()))
+ edges = []
+
+ for _ in range(num_edges):
+ node1, node2, cost = [int(x) for x in input().strip().split()]
+ edges.append((node1, node2, cost))
+
+ kruskal(num_nodes, num_edges, edges)
diff --git a/searches/simple_binary_search.py b/searches/simple_binary_search.py
--- a/searches/simple_binary_search.py
+++ b/searches/simple_binary_search.py
@@ -42,7 +42,7 @@
if item < a_list[midpoint]:
return binary_search(a_list[:midpoint], item)
else:
- return binary_search(a_list[midpoint + 1:], item)
+ return binary_search(a_list[midpoint + 1 :], item)
if __name__ == "__main__":
| {"golden_diff": "diff --git a/graphs/minimum_spanning_tree_kruskal.py b/graphs/minimum_spanning_tree_kruskal.py\n--- a/graphs/minimum_spanning_tree_kruskal.py\n+++ b/graphs/minimum_spanning_tree_kruskal.py\n@@ -1,13 +1,5 @@\n-if __name__ == \"__main__\":\n- num_nodes, num_edges = list(map(int, input().strip().split()))\n-\n- edges = []\n-\n- for i in range(num_edges):\n- node1, node2, cost = list(map(int, input().strip().split()))\n- edges.append((i, node1, node2, cost))\n-\n- edges = sorted(edges, key=lambda edge: edge[3])\n+def kruskal(num_nodes, num_edges, edges):\n+ edges = sorted(edges, key=lambda edge: edge[2])\n \n parent = list(range(num_nodes))\n \n@@ -20,13 +12,22 @@\n minimum_spanning_tree = []\n \n for edge in edges:\n- parent_a = find_parent(edge[1])\n- parent_b = find_parent(edge[2])\n+ parent_a = find_parent(edge[0])\n+ parent_b = find_parent(edge[1])\n if parent_a != parent_b:\n- minimum_spanning_tree_cost += edge[3]\n+ minimum_spanning_tree_cost += edge[2]\n minimum_spanning_tree.append(edge)\n parent[parent_a] = parent_b\n \n- print(minimum_spanning_tree_cost)\n- for edge in minimum_spanning_tree:\n- print(edge)\n+ return minimum_spanning_tree\n+\n+\n+if __name__ == \"__main__\": # pragma: no cover\n+ num_nodes, num_edges = list(map(int, input().strip().split()))\n+ edges = []\n+\n+ for _ in range(num_edges):\n+ node1, node2, cost = [int(x) for x in input().strip().split()]\n+ edges.append((node1, node2, cost))\n+\n+ kruskal(num_nodes, num_edges, edges)\ndiff --git a/searches/simple_binary_search.py b/searches/simple_binary_search.py\n--- a/searches/simple_binary_search.py\n+++ b/searches/simple_binary_search.py\n@@ -42,7 +42,7 @@\n if item < a_list[midpoint]:\n return binary_search(a_list[:midpoint], item)\n else:\n- return binary_search(a_list[midpoint + 1:], item)\n+ return binary_search(a_list[midpoint + 1 :], item)\n \n \n if __name__ == \"__main__\":\n", "issue": "Dev sprint ideas: More tests, type hints and less complexity\ncurrently, some of the programs use static type checking like this [program](https://github.com/TheAlgorithms/Python/blob/master/dynamic_programming/fast_fibonacci.py) but some of the programs did not use static typing.\r\n\r\nit's a good practice to use static typing as it makes code more clear and readable, should we make it a standard for this repository.we can use [mypy](http://mypy-lang.org/) for testing code \r\n\r\n[more on static typing](https://medium.com/@ageitgey/learn-how-to-use-static-type-checking-in-python-3-6-in-10-minutes-12c86d72677b)\r\n \r\nthank you\r\n\r\n### Dev sprint ideas:\r\n* [ ] [Add tests to Python files with <10% test coverage.](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645231020)\r\n* [ ] [Add static typing to functions and methods.](https://github.com/TheAlgorithms/Python/issues/2128)\r\n* [ ] [Set `flake8 --max-complexity=15`](https://github.com/TheAlgorithms/Python/issues/2128#issuecomment-645190839) (Ensure files have strong tests ___before___ refactoring). 
Test results from #2139...\r\n * [ ] ./boolean_algebra/quine_mc_cluskey.py:82:1: C901 'selection' is too complex (17)\r\n * [ ] ./digital_image_processing/edge_detection/canny.py:20:1: C901 'canny' is too complex (17) @lighttxu\r\n * [ ] ./graphs/minimum_spanning_tree_prims.py:5:1: C901 'PrimsAlgorithm' is too complex (21)\r\n * [ ] Add doctests aligned with https://en.wikipedia.org/wiki/Prim%27s_algorithm\r\n * [ ] In a ___separate___ PR reduce the McCabe complexity\r\n * [ ] ./linear_algebra/src/polynom-for-points.py:1:1: C901 'points_to_polynomial' is too complex (23) @nic-dern\r\n * [ ] ./machine_learning/linear_discriminant_analysis.py:251:1: C901 'main' is too complex (25)\r\n * [x] ./hashes/hamming_code.py:71:1: C901 'emitterConverter' is too complex (16) #2140\r\n * [x] ./hashes/hamming_code.py:153:1: C901 'receptorConverter' is too complex (20) #2140\r\n * [x] ./project_euler/problem_551/sol1.py:20:1: C901 'next_term' is too complex (16) #2141\n", "before_files": [{"content": "\"\"\"\nPure Python implementation of a binary search algorithm.\n\nFor doctests run following command:\npython3 -m doctest -v simple_binary_search.py\n\nFor manual testing run:\npython3 simple_binary_search.py\n\"\"\"\nfrom __future__ import annotations\n\n\ndef binary_search(a_list: list[int], item: int) -> bool:\n \"\"\"\n >>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]\n >>> print(binary_search(test_list, 3))\n False\n >>> print(binary_search(test_list, 13))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], 4))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], -10))\n False\n >>> print(binary_search([-18, 2], -18))\n True\n >>> print(binary_search([5], 5))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'c'))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'f'))\n False\n >>> print(binary_search([], 1))\n False\n >>> print(binary_search([-.1, .1 , .8], .1))\n True\n >>> binary_search(range(-5000, 5000, 10), 80)\n True\n >>> binary_search(range(-5000, 5000, 10), 1255)\n False\n >>> binary_search(range(0, 10000, 5), 2)\n False\n \"\"\"\n if len(a_list) == 0:\n return False\n midpoint = len(a_list) // 2\n if a_list[midpoint] == item:\n return True\n if item < a_list[midpoint]:\n return binary_search(a_list[:midpoint], item)\n else:\n return binary_search(a_list[midpoint + 1 :], item)\n\n\nif __name__ == \"__main__\":\n user_input = input(\"Enter numbers separated by comma:\\n\").strip()\n sequence = [int(item.strip()) for item in user_input.split(\",\")]\n target = int(input(\"Enter the number to be found in the list:\\n\").strip())\n not_str = \"\" if binary_search(sequence, target) else \"not \"\n print(f\"{target} was {not_str}found in {sequence}\")\n", "path": "searches/simple_binary_search.py"}, {"content": "if __name__ == \"__main__\":\n num_nodes, num_edges = list(map(int, input().strip().split()))\n\n edges = []\n\n for i in range(num_edges):\n node1, node2, cost = list(map(int, input().strip().split()))\n edges.append((i, node1, node2, cost))\n\n edges = sorted(edges, key=lambda edge: edge[3])\n\n parent = list(range(num_nodes))\n\n def find_parent(i):\n if i != parent[i]:\n parent[i] = find_parent(parent[i])\n return parent[i]\n\n minimum_spanning_tree_cost = 0\n minimum_spanning_tree = []\n\n for edge in edges:\n parent_a = find_parent(edge[1])\n parent_b = find_parent(edge[2])\n if parent_a != parent_b:\n minimum_spanning_tree_cost += edge[3]\n minimum_spanning_tree.append(edge)\n parent[parent_a] = parent_b\n\n print(minimum_spanning_tree_cost)\n for edge in minimum_spanning_tree:\n 
print(edge)\n", "path": "graphs/minimum_spanning_tree_kruskal.py"}], "after_files": [{"content": "\"\"\"\nPure Python implementation of a binary search algorithm.\n\nFor doctests run following command:\npython3 -m doctest -v simple_binary_search.py\n\nFor manual testing run:\npython3 simple_binary_search.py\n\"\"\"\nfrom typing import List\n\n\ndef binary_search(a_list: List[int], item: int) -> bool:\n \"\"\"\n >>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]\n >>> print(binary_search(test_list, 3))\n False\n >>> print(binary_search(test_list, 13))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], 4))\n True\n >>> print(binary_search([4, 4, 5, 6, 7], -10))\n False\n >>> print(binary_search([-18, 2], -18))\n True\n >>> print(binary_search([5], 5))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'c'))\n True\n >>> print(binary_search(['a', 'c', 'd'], 'f'))\n False\n >>> print(binary_search([], 1))\n False\n >>> print(binary_search([.1, .4 , -.1], .1))\n True\n \"\"\"\n if len(a_list) == 0:\n return False\n midpoint = len(a_list) // 2\n if a_list[midpoint] == item:\n return True\n if item < a_list[midpoint]:\n return binary_search(a_list[:midpoint], item)\n else:\n return binary_search(a_list[midpoint + 1 :], item)\n\n\nif __name__ == \"__main__\":\n user_input = input(\"Enter numbers separated by comma:\\n\").strip()\n sequence = [int(item.strip()) for item in user_input.split(\",\")]\n target = int(input(\"Enter the number to be found in the list:\\n\").strip())\n not_str = \"\" if binary_search(sequence, target) else \"not \"\n print(f\"{target} was {not_str}found in {sequence}\")\n", "path": "searches/simple_binary_search.py"}, {"content": "def kruskal(num_nodes, num_edges, edges):\n edges = sorted(edges, key=lambda edge: edge[2])\n\n parent = list(range(num_nodes))\n\n def find_parent(i):\n if i != parent[i]:\n parent[i] = find_parent(parent[i])\n return parent[i]\n\n minimum_spanning_tree_cost = 0\n minimum_spanning_tree = []\n\n for edge in edges:\n parent_a = find_parent(edge[0])\n parent_b = find_parent(edge[1])\n if parent_a != parent_b:\n minimum_spanning_tree_cost += edge[2]\n minimum_spanning_tree.append(edge)\n parent[parent_a] = parent_b\n\n return minimum_spanning_tree\n\n\nif __name__ == \"__main__\": # pragma: no cover\n num_nodes, num_edges = list(map(int, input().strip().split()))\n edges = []\n\n for _ in range(num_edges):\n node1, node2, cost = [int(x) for x in input().strip().split()]\n edges.append((node1, node2, cost))\n\n kruskal(num_nodes, num_edges, edges)\n", "path": "graphs/minimum_spanning_tree_kruskal.py"}]} | 1,893 | 565 |
gh_patches_debug_32139 | rasdani/github-patches | git_diff | certbot__certbot-2141 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Travis: pep8: not found
In #2138 it was discovered that `pep8` is not run because it's not installed at the right time.
--- END ISSUE ---
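For context (an illustrative note, not part of the issue above): in the `setup.py` shown below, `pep8` is declared only under `testing_extras`, so it is present on the PATH only after something like `pip install -e .[testing]` has run. A CI step could guard against that ordering problem with a sketch along these lines:

```python
# Illustrative guard only: fail fast if the [testing] extra (which provides pep8) was not installed.
import importlib.util
import sys

if importlib.util.find_spec("pep8") is None:
    sys.exit("pep8 is missing: install the [testing] extra (e.g. `pip install -e .[testing]`) before linting")
```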
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import codecs
2 import os
3 import re
4 import sys
5
6 from setuptools import setup
7 from setuptools import find_packages
8
9 # Workaround for http://bugs.python.org/issue8876, see
10 # http://bugs.python.org/issue8876#msg208792
11 # This can be removed when using Python 2.7.9 or later:
12 # https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS
13 if os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':
14 del os.link
15
16
17 def read_file(filename, encoding='utf8'):
18 """Read unicode from given file."""
19 with codecs.open(filename, encoding=encoding) as fd:
20 return fd.read()
21
22
23 here = os.path.abspath(os.path.dirname(__file__))
24
25 # read version number (and other metadata) from package init
26 init_fn = os.path.join(here, 'letsencrypt', '__init__.py')
27 meta = dict(re.findall(r"""__([a-z]+)__ = '([^']+)""", read_file(init_fn)))
28
29 readme = read_file(os.path.join(here, 'README.rst'))
30 changes = read_file(os.path.join(here, 'CHANGES.rst'))
31 version = meta['version']
32
33 # Please update tox.ini when modifying dependency version requirements
34 install_requires = [
35 'acme=={0}'.format(version),
36 'configobj',
37 'cryptography>=0.7', # load_pem_x509_certificate
38 'parsedatetime',
39 'psutil>=2.1.0', # net_connections introduced in 2.1.0
40 'PyOpenSSL',
41 'pyrfc3339',
42 'python2-pythondialog>=3.2.2rc1', # Debian squeeze support, cf. #280
43 'pytz',
44 'setuptools', # pkg_resources
45 'six',
46 'zope.component',
47 'zope.interface',
48 ]
49
50 # env markers in extras_require cause problems with older pip: #517
51 if sys.version_info < (2, 7):
52 install_requires.extend([
53 # only some distros recognize stdlib argparse as already satisfying
54 'argparse',
55 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17
56 'mock<1.1.0',
57 ])
58 else:
59 install_requires.extend([
60 'ConfigArgParse',
61 'mock',
62 ])
63
64 dev_extras = [
65 # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289
66 'astroid==1.3.5',
67 'pylint==1.4.2', # upstream #248
68 'twine',
69 'wheel',
70 ]
71
72 docs_extras = [
73 'repoze.sphinx.autointerface',
74 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
75 'sphinx_rtd_theme',
76 'sphinxcontrib-programoutput',
77 ]
78
79 testing_extras = [
80 'coverage',
81 'nose',
82 'nosexcover',
83 'pep8',
84 'tox',
85 ]
86
87 setup(
88 name='letsencrypt',
89 version=version,
90 description="Let's Encrypt client",
91 long_description=readme, # later: + '\n\n' + changes
92 url='https://github.com/letsencrypt/letsencrypt',
93 author="Let's Encrypt Project",
94 author_email='[email protected]',
95 license='Apache License 2.0',
96 classifiers=[
97 'Development Status :: 3 - Alpha',
98 'Environment :: Console',
99 'Environment :: Console :: Curses',
100 'Intended Audience :: System Administrators',
101 'License :: OSI Approved :: Apache Software License',
102 'Operating System :: POSIX :: Linux',
103 'Programming Language :: Python',
104 'Programming Language :: Python :: 2',
105 'Programming Language :: Python :: 2.6',
106 'Programming Language :: Python :: 2.7',
107 'Topic :: Internet :: WWW/HTTP',
108 'Topic :: Security',
109 'Topic :: System :: Installation/Setup',
110 'Topic :: System :: Networking',
111 'Topic :: System :: Systems Administration',
112 'Topic :: Utilities',
113 ],
114
115 packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),
116 include_package_data=True,
117
118 install_requires=install_requires,
119 extras_require={
120 'dev': dev_extras,
121 'docs': docs_extras,
122 'testing': testing_extras,
123 },
124
125 tests_require=install_requires,
126 # to test all packages run "python setup.py test -s
127 # {acme,letsencrypt_apache,letsencrypt_nginx}"
128 test_suite='letsencrypt',
129
130 entry_points={
131 'console_scripts': [
132 'letsencrypt = letsencrypt.cli:main',
133 'letsencrypt-renewer = letsencrypt.renewer:main',
134 ],
135 'letsencrypt.plugins': [
136 'manual = letsencrypt.plugins.manual:Authenticator',
137 'null = letsencrypt.plugins.null:Installer',
138 'standalone = letsencrypt.plugins.standalone:Authenticator',
139 'webroot = letsencrypt.plugins.webroot:Authenticator',
140 ],
141 },
142 )
143
```
Path: `acme/setup.py`
Content:
```
1 import sys
2
3 from setuptools import setup
4 from setuptools import find_packages
5
6
7 version = '0.2.0.dev0'
8
9 # Please update tox.ini when modifying dependency version requirements
10 install_requires = [
11 # load_pem_private/public_key (>=0.6)
12 # rsa_recover_prime_factors (>=0.8)
13 'cryptography>=0.8',
14 # Connection.set_tlsext_host_name (>=0.13)
15 'PyOpenSSL>=0.13',
16 'pyrfc3339',
17 'pytz',
18 'requests',
19 'setuptools', # pkg_resources
20 'six',
21 'werkzeug',
22 ]
23
24 # env markers in extras_require cause problems with older pip: #517
25 if sys.version_info < (2, 7):
26 install_requires.extend([
27 # only some distros recognize stdlib argparse as already satisfying
28 'argparse',
29 'mock<1.1.0',
30 ])
31 else:
32 install_requires.append('mock')
33
34 if sys.version_info < (2, 7, 9):
35 # For secure SSL connection with Python 2.7 (InsecurePlatformWarning)
36 install_requires.append('ndg-httpsclient')
37 install_requires.append('pyasn1')
38
39 docs_extras = [
40 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
41 'sphinx_rtd_theme',
42 'sphinxcontrib-programoutput',
43 ]
44
45 testing_extras = [
46 'nose',
47 'tox',
48 ]
49
50
51 setup(
52 name='acme',
53 version=version,
54 description='ACME protocol implementation in Python',
55 url='https://github.com/letsencrypt/letsencrypt',
56 author="Let's Encrypt Project",
57 author_email='[email protected]',
58 license='Apache License 2.0',
59 classifiers=[
60 'Development Status :: 3 - Alpha',
61 'Intended Audience :: Developers',
62 'License :: OSI Approved :: Apache Software License',
63 'Programming Language :: Python',
64 'Programming Language :: Python :: 2',
65 'Programming Language :: Python :: 2.6',
66 'Programming Language :: Python :: 2.7',
67 'Programming Language :: Python :: 3',
68 'Programming Language :: Python :: 3.3',
69 'Programming Language :: Python :: 3.4',
70 'Programming Language :: Python :: 3.5',
71 'Topic :: Internet :: WWW/HTTP',
72 'Topic :: Security',
73 ],
74
75 packages=find_packages(),
76 include_package_data=True,
77 install_requires=install_requires,
78 extras_require={
79 'docs': docs_extras,
80 'testing': testing_extras,
81 },
82 entry_points={
83 'console_scripts': [
84 'jws = acme.jose.jws:CLI.run',
85 ],
86 },
87 test_suite='acme',
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/acme/setup.py b/acme/setup.py
--- a/acme/setup.py
+++ b/acme/setup.py
@@ -36,17 +36,18 @@
install_requires.append('ndg-httpsclient')
install_requires.append('pyasn1')
+dev_extras = [
+ 'nose',
+ 'pep8',
+ 'tox',
+]
+
docs_extras = [
'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
'sphinx_rtd_theme',
'sphinxcontrib-programoutput',
]
-testing_extras = [
- 'nose',
- 'tox',
-]
-
setup(
name='acme',
@@ -76,8 +77,8 @@
include_package_data=True,
install_requires=install_requires,
extras_require={
+ 'dev': dev_extras,
'docs': docs_extras,
- 'testing': testing_extras,
},
entry_points={
'console_scripts': [
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -64,7 +64,12 @@
dev_extras = [
# Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289
'astroid==1.3.5',
+ 'coverage',
+ 'nose',
+ 'nosexcover',
+ 'pep8',
'pylint==1.4.2', # upstream #248
+ 'tox',
'twine',
'wheel',
]
@@ -76,14 +81,6 @@
'sphinxcontrib-programoutput',
]
-testing_extras = [
- 'coverage',
- 'nose',
- 'nosexcover',
- 'pep8',
- 'tox',
-]
-
setup(
name='letsencrypt',
version=version,
@@ -119,7 +116,6 @@
extras_require={
'dev': dev_extras,
'docs': docs_extras,
- 'testing': testing_extras,
},
tests_require=install_requires,
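The consolidated `dev` group after this change carries the packaging tools and the test/lint tools together, so a single extras install covers what CI needs. A rough sketch of the resulting list and how it might be consumed (the pip command is an assumption about the CI setup, not something taken from the repository):
```python
# Sketch only: the letsencrypt dev_extras as they end up after the patch above.
dev_extras = [
    'astroid==1.3.5',
    'coverage',
    'nose',
    'nosexcover',
    'pep8',
    'pylint==1.4.2',  # upstream #248
    'tox',
    'twine',
    'wheel',
]

# Assumed CI invocation that now pulls in pep8 as well:
#   pip install -e .[dev]
```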
| {"golden_diff": "diff --git a/acme/setup.py b/acme/setup.py\n--- a/acme/setup.py\n+++ b/acme/setup.py\n@@ -36,17 +36,18 @@\n install_requires.append('ndg-httpsclient')\n install_requires.append('pyasn1')\n \n+dev_extras = [\n+ 'nose',\n+ 'pep8',\n+ 'tox',\n+]\n+\n docs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n ]\n \n-testing_extras = [\n- 'nose',\n- 'tox',\n-]\n-\n \n setup(\n name='acme',\n@@ -76,8 +77,8 @@\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n+ 'dev': dev_extras,\n 'docs': docs_extras,\n- 'testing': testing_extras,\n },\n entry_points={\n 'console_scripts': [\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -64,7 +64,12 @@\n dev_extras = [\n # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289\n 'astroid==1.3.5',\n+ 'coverage',\n+ 'nose',\n+ 'nosexcover',\n+ 'pep8',\n 'pylint==1.4.2', # upstream #248\n+ 'tox',\n 'twine',\n 'wheel',\n ]\n@@ -76,14 +81,6 @@\n 'sphinxcontrib-programoutput',\n ]\n \n-testing_extras = [\n- 'coverage',\n- 'nose',\n- 'nosexcover',\n- 'pep8',\n- 'tox',\n-]\n-\n setup(\n name='letsencrypt',\n version=version,\n@@ -119,7 +116,6 @@\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n- 'testing': testing_extras,\n },\n \n tests_require=install_requires,\n", "issue": "Travis: pep8: not found\nIn #2138 it was discovered that `pep8` is not run because it's not installed at the right time.\n\n", "before_files": [{"content": "import codecs\nimport os\nimport re\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n# Workaround for http://bugs.python.org/issue8876, see\n# http://bugs.python.org/issue8876#msg208792\n# This can be removed when using Python 2.7.9 or later:\n# https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS\nif os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':\n del os.link\n\n\ndef read_file(filename, encoding='utf8'):\n \"\"\"Read unicode from given file.\"\"\"\n with codecs.open(filename, encoding=encoding) as fd:\n return fd.read()\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, 'letsencrypt', '__init__.py')\nmeta = dict(re.findall(r\"\"\"__([a-z]+)__ = '([^']+)\"\"\", read_file(init_fn)))\n\nreadme = read_file(os.path.join(here, 'README.rst'))\nchanges = read_file(os.path.join(here, 'CHANGES.rst'))\nversion = meta['version']\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n 'acme=={0}'.format(version),\n 'configobj',\n 'cryptography>=0.7', # load_pem_x509_certificate\n 'parsedatetime',\n 'psutil>=2.1.0', # net_connections introduced in 2.1.0\n 'PyOpenSSL',\n 'pyrfc3339',\n 'python2-pythondialog>=3.2.2rc1', # Debian squeeze support, cf. 
#280\n 'pytz',\n 'setuptools', # pkg_resources\n 'six',\n 'zope.component',\n 'zope.interface',\n]\n\n# env markers in extras_require cause problems with older pip: #517\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17\n 'mock<1.1.0',\n ])\nelse:\n install_requires.extend([\n 'ConfigArgParse',\n 'mock',\n ])\n\ndev_extras = [\n # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289\n 'astroid==1.3.5',\n 'pylint==1.4.2', # upstream #248\n 'twine',\n 'wheel',\n]\n\ndocs_extras = [\n 'repoze.sphinx.autointerface',\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n]\n\ntesting_extras = [\n 'coverage',\n 'nose',\n 'nosexcover',\n 'pep8',\n 'tox',\n]\n\nsetup(\n name='letsencrypt',\n version=version,\n description=\"Let's Encrypt client\",\n long_description=readme, # later: + '\\n\\n' + changes\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Let's Encrypt Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),\n include_package_data=True,\n\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n 'testing': testing_extras,\n },\n\n tests_require=install_requires,\n # to test all packages run \"python setup.py test -s\n # {acme,letsencrypt_apache,letsencrypt_nginx}\"\n test_suite='letsencrypt',\n\n entry_points={\n 'console_scripts': [\n 'letsencrypt = letsencrypt.cli:main',\n 'letsencrypt-renewer = letsencrypt.renewer:main',\n ],\n 'letsencrypt.plugins': [\n 'manual = letsencrypt.plugins.manual:Authenticator',\n 'null = letsencrypt.plugins.null:Installer',\n 'standalone = letsencrypt.plugins.standalone:Authenticator',\n 'webroot = letsencrypt.plugins.webroot:Authenticator',\n ],\n },\n)\n", "path": "setup.py"}, {"content": "import sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.2.0.dev0'\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n # load_pem_private/public_key (>=0.6)\n # rsa_recover_prime_factors (>=0.8)\n 'cryptography>=0.8',\n # Connection.set_tlsext_host_name (>=0.13)\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n 'requests',\n 'setuptools', # pkg_resources\n 'six',\n 'werkzeug',\n]\n\n# env markers in extras_require cause problems with older pip: #517\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'mock<1.1.0',\n ])\nelse:\n install_requires.append('mock')\n\nif sys.version_info < (2, 7, 9):\n # For secure SSL connection with Python 2.7 (InsecurePlatformWarning)\n 
install_requires.append('ndg-httpsclient')\n install_requires.append('pyasn1')\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n]\n\ntesting_extras = [\n 'nose',\n 'tox',\n]\n\n\nsetup(\n name='acme',\n version=version,\n description='ACME protocol implementation in Python',\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Let's Encrypt Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'docs': docs_extras,\n 'testing': testing_extras,\n },\n entry_points={\n 'console_scripts': [\n 'jws = acme.jose.jws:CLI.run',\n ],\n },\n test_suite='acme',\n)\n", "path": "acme/setup.py"}], "after_files": [{"content": "import codecs\nimport os\nimport re\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n# Workaround for http://bugs.python.org/issue8876, see\n# http://bugs.python.org/issue8876#msg208792\n# This can be removed when using Python 2.7.9 or later:\n# https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS\nif os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':\n del os.link\n\n\ndef read_file(filename, encoding='utf8'):\n \"\"\"Read unicode from given file.\"\"\"\n with codecs.open(filename, encoding=encoding) as fd:\n return fd.read()\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, 'letsencrypt', '__init__.py')\nmeta = dict(re.findall(r\"\"\"__([a-z]+)__ = '([^']+)\"\"\", read_file(init_fn)))\n\nreadme = read_file(os.path.join(here, 'README.rst'))\nchanges = read_file(os.path.join(here, 'CHANGES.rst'))\nversion = meta['version']\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n 'acme=={0}'.format(version),\n 'configobj',\n 'cryptography>=0.7', # load_pem_x509_certificate\n 'parsedatetime',\n 'psutil>=2.1.0', # net_connections introduced in 2.1.0\n 'PyOpenSSL',\n 'pyrfc3339',\n 'python2-pythondialog>=3.2.2rc1', # Debian squeeze support, cf. 
#280\n 'pytz',\n 'setuptools', # pkg_resources\n 'six',\n 'zope.component',\n 'zope.interface',\n]\n\n# env markers in extras_require cause problems with older pip: #517\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'ConfigArgParse>=0.10.0', # python2.6 support, upstream #17\n 'mock<1.1.0',\n ])\nelse:\n install_requires.extend([\n 'ConfigArgParse',\n 'mock',\n ])\n\ndev_extras = [\n # Pin astroid==1.3.5, pylint==1.4.2 as a workaround for #289\n 'astroid==1.3.5',\n 'coverage',\n 'nose',\n 'nosexcover',\n 'pep8',\n 'pylint==1.4.2', # upstream #248\n 'tox',\n 'twine',\n 'wheel',\n]\n\ndocs_extras = [\n 'repoze.sphinx.autointerface',\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n]\n\nsetup(\n name='letsencrypt',\n version=version,\n description=\"Let's Encrypt client\",\n long_description=readme, # later: + '\\n\\n' + changes\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Let's Encrypt Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),\n include_package_data=True,\n\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n\n tests_require=install_requires,\n # to test all packages run \"python setup.py test -s\n # {acme,letsencrypt_apache,letsencrypt_nginx}\"\n test_suite='letsencrypt',\n\n entry_points={\n 'console_scripts': [\n 'letsencrypt = letsencrypt.cli:main',\n 'letsencrypt-renewer = letsencrypt.renewer:main',\n ],\n 'letsencrypt.plugins': [\n 'manual = letsencrypt.plugins.manual:Authenticator',\n 'null = letsencrypt.plugins.null:Installer',\n 'standalone = letsencrypt.plugins.standalone:Authenticator',\n 'webroot = letsencrypt.plugins.webroot:Authenticator',\n ],\n },\n)\n", "path": "setup.py"}, {"content": "import sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.2.0.dev0'\n\n# Please update tox.ini when modifying dependency version requirements\ninstall_requires = [\n # load_pem_private/public_key (>=0.6)\n # rsa_recover_prime_factors (>=0.8)\n 'cryptography>=0.8',\n # Connection.set_tlsext_host_name (>=0.13)\n 'PyOpenSSL>=0.13',\n 'pyrfc3339',\n 'pytz',\n 'requests',\n 'setuptools', # pkg_resources\n 'six',\n 'werkzeug',\n]\n\n# env markers in extras_require cause problems with older pip: #517\nif sys.version_info < (2, 7):\n install_requires.extend([\n # only some distros recognize stdlib argparse as already satisfying\n 'argparse',\n 'mock<1.1.0',\n ])\nelse:\n install_requires.append('mock')\n\nif sys.version_info < (2, 7, 9):\n # For secure SSL connection with Python 2.7 (InsecurePlatformWarning)\n install_requires.append('ndg-httpsclient')\n 
install_requires.append('pyasn1')\n\ndev_extras = [\n 'nose',\n 'pep8',\n 'tox',\n]\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n 'sphinxcontrib-programoutput',\n]\n\n\nsetup(\n name='acme',\n version=version,\n description='ACME protocol implementation in Python',\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Let's Encrypt Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n entry_points={\n 'console_scripts': [\n 'jws = acme.jose.jws:CLI.run',\n ],\n },\n test_suite='acme',\n)\n", "path": "acme/setup.py"}]} | 2,578 | 503 |
gh_patches_debug_20698 | rasdani/github-patches | git_diff | freqtrade__freqtrade-5530 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API Server under 2021.8
<!--
Have you searched for similar issues before posting it?
Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there
Please do not use the question template to report bugs or to request new features.
-->
## Describe your environment
* Operating system: Windows Server 2019
* Python Version: Miniconda 3
 * CCXT version: 1.55.56 (`pip freeze | grep ccxt`)
* Freqtrade Version: 2021.8 (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)
## Your question
This might be a bug; I post it as a question since I am not 100% sure.
(OS and Miniconda configuration works fine for Freqtrade since 2020.12)
Trading works fine under Telegram with the current version.
With the current version and the API Server activated, the system remains idle and does not begin to trade, even though the play button in the GUI is pushed. There are also no trades visible in the DB when it is opened in a SQLite explorer.
The API Server web GUI works excellently.
(Trading with API Server works fine under 2021.7)
*Ask the question you have not been able to find an answer in our [Documentation](https://www.freqtrade.io/en/latest/)*
--- END ISSUE ---
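Given the Windows Server 2019 environment above, a useful first check is which asyncio event loop the interpreter creates by default, since Python 3.8+ on Windows switched the default to the proactor loop. A minimal diagnostic sketch (not taken from the report):
```python
# Diagnostic sketch: print the default event loop type on this interpreter.
import asyncio

loop = asyncio.new_event_loop()
print(type(loop).__name__)  # e.g. ProactorEventLoop on Windows with Python 3.8+
loop.close()
```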
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/rpc/api_server/uvicorn_threaded.py`
Content:
```
1 import contextlib
2 import threading
3 import time
4
5 import uvicorn
6
7
8 class UvicornServer(uvicorn.Server):
9 """
10 Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742
11
12 Removed install_signal_handlers() override based on changes from this commit:
13 https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6
14
15 Cannot rely on asyncio.get_event_loop() to create new event loop because of this check:
16 https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638
17
18 Fix by overriding run() and forcing creation of new event loop if uvloop is available
19 """
20
21 def run(self, sockets=None):
22 import asyncio
23
24 """
25 Parent implementation calls self.config.setup_event_loop(),
26 but we need to create uvloop event loop manually
27 """
28 try:
29 import uvloop # noqa
30 except ImportError: # pragma: no cover
31 from uvicorn.loops.asyncio import asyncio_setup
32 asyncio_setup()
33 else:
34 asyncio.set_event_loop(uvloop.new_event_loop())
35 try:
36 loop = asyncio.get_event_loop()
37 except RuntimeError:
38 # When running in a thread, we'll not have an eventloop yet.
39 loop = asyncio.new_event_loop()
40 loop.run_until_complete(self.serve(sockets=sockets))
41
42 @contextlib.contextmanager
43 def run_in_thread(self):
44 self.thread = threading.Thread(target=self.run)
45 self.thread.start()
46 while not self.started:
47 time.sleep(1e-3)
48
49 def cleanup(self):
50 self.should_exit = True
51 self.thread.join()
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/rpc/api_server/uvicorn_threaded.py b/freqtrade/rpc/api_server/uvicorn_threaded.py
--- a/freqtrade/rpc/api_server/uvicorn_threaded.py
+++ b/freqtrade/rpc/api_server/uvicorn_threaded.py
@@ -5,6 +5,20 @@
import uvicorn
+def asyncio_setup() -> None: # pragma: no cover
+ # Set eventloop for win32 setups
+ # Reverts a change done in uvicorn 0.15.0 - which now sets the eventloop
+ # via policy.
+ import sys
+
+ if sys.version_info >= (3, 8) and sys.platform == "win32":
+ import asyncio
+ import selectors
+ selector = selectors.SelectSelector()
+ loop = asyncio.SelectorEventLoop(selector)
+ asyncio.set_event_loop(loop)
+
+
class UvicornServer(uvicorn.Server):
"""
Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742
@@ -28,7 +42,7 @@
try:
import uvloop # noqa
except ImportError: # pragma: no cover
- from uvicorn.loops.asyncio import asyncio_setup
+
asyncio_setup()
else:
asyncio.set_event_loop(uvloop.new_event_loop())
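The hunk above can also be read as a standalone pattern: force a selector-based loop on Windows with Python 3.8+ before the threaded server starts, reverting the behaviour uvicorn 0.15.0 introduced when it began configuring the loop through the event loop policy. A sketch of that pattern in isolation:
```python
# Sketch of the workaround pattern used in the patch above.
import asyncio
import selectors
import sys

if sys.platform == "win32" and sys.version_info >= (3, 8):
    # Replace the proactor default with a selector-based loop.
    selector = selectors.SelectSelector()
    asyncio.set_event_loop(asyncio.SelectorEventLoop(selector))
```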
| {"golden_diff": "diff --git a/freqtrade/rpc/api_server/uvicorn_threaded.py b/freqtrade/rpc/api_server/uvicorn_threaded.py\n--- a/freqtrade/rpc/api_server/uvicorn_threaded.py\n+++ b/freqtrade/rpc/api_server/uvicorn_threaded.py\n@@ -5,6 +5,20 @@\n import uvicorn\n \n \n+def asyncio_setup() -> None: # pragma: no cover\n+ # Set eventloop for win32 setups\n+ # Reverts a change done in uvicorn 0.15.0 - which now sets the eventloop\n+ # via policy.\n+ import sys\n+\n+ if sys.version_info >= (3, 8) and sys.platform == \"win32\":\n+ import asyncio\n+ import selectors\n+ selector = selectors.SelectSelector()\n+ loop = asyncio.SelectorEventLoop(selector)\n+ asyncio.set_event_loop(loop)\n+\n+\n class UvicornServer(uvicorn.Server):\n \"\"\"\n Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742\n@@ -28,7 +42,7 @@\n try:\n import uvloop # noqa\n except ImportError: # pragma: no cover\n- from uvicorn.loops.asyncio import asyncio_setup\n+\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n", "issue": "API Server under 2021.8\n<!-- \r\nHave you searched for similar issues before posting it?\r\nDid you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there\r\n\r\nPlease do not use the question template to report bugs or to request new features.\r\n-->\r\n\r\n## Describe your environment\r\n\r\n * Operating system: Windows Server 2019\r\n * Python Version: Miniconda 3\r\n * CCXT version: 1.55.56_____ (`pip freeze | grep ccxt`)\r\n * Freqtrade Version: 2021.8 (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)\r\n \r\n## Your question\r\nThis might be a bug, I post it as question, since I am nor sure for 100%.\r\n\r\n(OS and Miniconda configuration works fine for Freqtrade since 2020.12)\r\nTrading works fine under Telegram with current version.\r\nWith current version and activated API Server, the system remains idle and does not begin to trade. Play button in GUI is pushed. 
Even no trades visible in DB, opened in a SQlite explorer.\r\nAPI Server web GUI works excellent.\r\n(Trading with API Server works fine under 2021.7)\r\n\r\n*Ask the question you have not been able to find an answer in our [Documentation](https://www.freqtrade.io/en/latest/)*\r\n\n", "before_files": [{"content": "import contextlib\nimport threading\nimport time\n\nimport uvicorn\n\n\nclass UvicornServer(uvicorn.Server):\n \"\"\"\n Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742\n\n Removed install_signal_handlers() override based on changes from this commit:\n https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6\n\n Cannot rely on asyncio.get_event_loop() to create new event loop because of this check:\n https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638\n\n Fix by overriding run() and forcing creation of new event loop if uvloop is available\n \"\"\"\n\n def run(self, sockets=None):\n import asyncio\n\n \"\"\"\n Parent implementation calls self.config.setup_event_loop(),\n but we need to create uvloop event loop manually\n \"\"\"\n try:\n import uvloop # noqa\n except ImportError: # pragma: no cover\n from uvicorn.loops.asyncio import asyncio_setup\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n try:\n loop = asyncio.get_event_loop()\n except RuntimeError:\n # When running in a thread, we'll not have an eventloop yet.\n loop = asyncio.new_event_loop()\n loop.run_until_complete(self.serve(sockets=sockets))\n\n @contextlib.contextmanager\n def run_in_thread(self):\n self.thread = threading.Thread(target=self.run)\n self.thread.start()\n while not self.started:\n time.sleep(1e-3)\n\n def cleanup(self):\n self.should_exit = True\n self.thread.join()\n", "path": "freqtrade/rpc/api_server/uvicorn_threaded.py"}], "after_files": [{"content": "import contextlib\nimport threading\nimport time\n\nimport uvicorn\n\n\ndef asyncio_setup() -> None: # pragma: no cover\n # Set eventloop for win32 setups\n # Reverts a change done in uvicorn 0.15.0 - which now sets the eventloop\n # via policy.\n import sys\n\n if sys.version_info >= (3, 8) and sys.platform == \"win32\":\n import asyncio\n import selectors\n selector = selectors.SelectSelector()\n loop = asyncio.SelectorEventLoop(selector)\n asyncio.set_event_loop(loop)\n\n\nclass UvicornServer(uvicorn.Server):\n \"\"\"\n Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742\n\n Removed install_signal_handlers() override based on changes from this commit:\n https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6\n\n Cannot rely on asyncio.get_event_loop() to create new event loop because of this check:\n https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638\n\n Fix by overriding run() and forcing creation of new event loop if uvloop is available\n \"\"\"\n\n def run(self, sockets=None):\n import asyncio\n\n \"\"\"\n Parent implementation calls self.config.setup_event_loop(),\n but we need to create uvloop event loop manually\n \"\"\"\n try:\n import uvloop # noqa\n except ImportError: # pragma: no cover\n\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n try:\n loop = asyncio.get_event_loop()\n except RuntimeError:\n # When running in a thread, we'll not have an eventloop yet.\n loop = asyncio.new_event_loop()\n loop.run_until_complete(self.serve(sockets=sockets))\n\n 
@contextlib.contextmanager\n def run_in_thread(self):\n self.thread = threading.Thread(target=self.run)\n self.thread.start()\n while not self.started:\n time.sleep(1e-3)\n\n def cleanup(self):\n self.should_exit = True\n self.thread.join()\n", "path": "freqtrade/rpc/api_server/uvicorn_threaded.py"}]} | 1,106 | 310 |
gh_patches_debug_12651 | rasdani/github-patches | git_diff | spotify__luigi-2415 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MissingParameterException when generating execution summary
Generating the execution summary may raise a MissingParameterException if a task's `requires()` accesses a Config parameter that was specified only on the command line.
Example:
```python
# trouble.py
import luigi
class conf(luigi.Config):
opt = luigi.Parameter()
class Root(luigi.WrapperTask):
def requires(self):
if conf().opt == 'A':
return A()
class A(luigi.Task):
def complete(self):
return False
def run(self):
raise Exception()
```
```
$ luigi --module trouble Root --conf-opt=A
===== Luigi Execution Summary =====
Scheduled 2 tasks of which:
* 1 failed:
- 1 A()
* 1 were left pending, among these:
* 1 had failed dependencies:
- 1 Root()
This progress looks :( because there were failed tasks
===== Luigi Execution Summary =====
Traceback (most recent call last):
File "/opt/etl-virtualenv/bin/luigi", line 11, in <module>
sys.exit(luigi_run())
File "/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/cmdline.py", line 11, in luigi_run
run_with_retcodes(argv)
File "/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/retcodes.py", line 83, in run_with_retcodes
task_sets = luigi.execution_summary._summary_dict(worker)
File "/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/execution_summary.py", line 337, in _summary_dict
_populate_unknown_statuses(set_tasks)
File "/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/execution_summary.py", line 75, in _populate_unknown_statuses
_depth_first_search(set_tasks, task, visited)
File "/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/execution_summary.py", line 88, in _depth_first_search
for task in current_task._requires():
File "/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/task.py", line 607, in _requires
return flatten(self.requires()) # base impl
File "/private/tmp/trouble.py", line 8, in requires
if conf().opt == 'A':
File "/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/task_register.py", line 88, in __call__
param_values = cls.get_param_values(params, args, kwargs)
File "/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/task.py", line 412, in get_param_values
raise parameter.MissingParameterException("%s: requires the '%s' parameter to be set" % (exc_desc, param_name))
luigi.parameter.MissingParameterException: conf[args=(), kwargs={}]: requires the 'opt' parameter to be set
```
This seems to be because the execution summary is generated outside any CmdlineParser context manager in run_with_retcodes. It should be fairly easy to avoid by extending the entire run_with_retcodes to run within the CmdlineParser context manager already there for the `retcode()` config, or, if that could cause side effects I am unaware of, by adding a separate context just for the call to _summary_dict.
I can attempt a PR for either approach.
May be related to #1964
--- END ISSUE ---
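To make the failure mode concrete: a Config value supplied only on the command line, such as `--conf-opt`, resolves only while a CmdlineParser global instance is active, and the summary code path runs without one. A rough sketch, assuming a luigi version of this era is installed and the `trouble.py` module above is importable (the exact argv handling here is an assumption, not an excerpt from the project):
```python
# Sketch: the same conf() lookup succeeds inside the parser context and fails outside it.
from luigi.cmdline_parser import CmdlineParser
from trouble import conf  # the Config class from the reproduction above

argv = ['Root', '--conf-opt=A']
with CmdlineParser.global_instance(argv):
    print(conf().opt)  # 'A' - the active parser supplies the command-line value

conf().opt  # outside the context: MissingParameterException, as in the traceback
```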
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `luigi/retcodes.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2015-2015 Spotify AB
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 #
17 """
18 Module containing the logic for exit codes for the luigi binary. It's useful
19 when you in a programmatic way need to know if luigi actually finished the
20 given task, and if not why.
21 """
22
23 import luigi
24 import sys
25 import logging
26 from luigi import IntParameter
27
28
29 class retcode(luigi.Config):
30 """
31 See the :ref:`return codes configuration section <retcode-config>`.
32 """
33 # default value inconsistent with doc/configuration.rst for backwards compatibility reasons
34 unhandled_exception = IntParameter(default=4,
35 description='For internal luigi errors.',
36 )
37 # default value inconsistent with doc/configuration.rst for backwards compatibility reasons
38 missing_data = IntParameter(default=0,
39 description="For when there are incomplete ExternalTask dependencies.",
40 )
41 # default value inconsistent with doc/configuration.rst for backwards compatibility reasons
42 task_failed = IntParameter(default=0,
43 description='''For when a task's run() method fails.''',
44 )
45 # default value inconsistent with doc/configuration.rst for backwards compatibility reasons
46 already_running = IntParameter(default=0,
47 description='For both local --lock and luigid "lock"',
48 )
49 # default value inconsistent with doc/configuration.rst for backwards compatibility reasons
50 scheduling_error = IntParameter(default=0,
51 description='''For when a task's complete() or requires() fails,
52 or task-limit reached'''
53 )
54 # default value inconsistent with doc/configuration.rst for backwards compatibility reasons
55 not_run = IntParameter(default=0,
56 description="For when a task is not granted run permission by the scheduler."
57 )
58
59
60 def run_with_retcodes(argv):
61 """
62 Run luigi with command line parsing, but raise ``SystemExit`` with the configured exit code.
63
64 Note: Usually you use the luigi binary directly and don't call this function yourself.
65
66 :param argv: Should (conceptually) be ``sys.argv[1:]``
67 """
68 logger = logging.getLogger('luigi-interface')
69 with luigi.cmdline_parser.CmdlineParser.global_instance(argv):
70 retcodes = retcode()
71
72 worker = None
73 try:
74 worker = luigi.interface._run(argv)['worker']
75 except luigi.interface.PidLockAlreadyTakenExit:
76 sys.exit(retcodes.already_running)
77 except Exception:
78 # Some errors occur before logging is set up, we set it up now
79 luigi.interface.setup_interface_logging()
80 logger.exception("Uncaught exception in luigi")
81 sys.exit(retcodes.unhandled_exception)
82
83 task_sets = luigi.execution_summary._summary_dict(worker)
84 root_task = luigi.execution_summary._root_task(worker)
85 non_empty_categories = {k: v for k, v in task_sets.items() if v}.keys()
86
87 def has(status):
88 assert status in luigi.execution_summary._ORDERED_STATUSES
89 return status in non_empty_categories
90
91 codes_and_conds = (
92 (retcodes.missing_data, has('still_pending_ext')),
93 (retcodes.task_failed, has('failed')),
94 (retcodes.already_running, has('run_by_other_worker')),
95 (retcodes.scheduling_error, has('scheduling_error')),
96 (retcodes.not_run, has('not_run')),
97 )
98 expected_ret_code = max(code * (1 if cond else 0) for code, cond in codes_and_conds)
99
100 if expected_ret_code == 0 and \
101 root_task not in task_sets["completed"] and \
102 root_task not in task_sets["already_done"]:
103 sys.exit(retcodes.not_run)
104 else:
105 sys.exit(expected_ret_code)
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/luigi/retcodes.py b/luigi/retcodes.py
--- a/luigi/retcodes.py
+++ b/luigi/retcodes.py
@@ -80,9 +80,10 @@
logger.exception("Uncaught exception in luigi")
sys.exit(retcodes.unhandled_exception)
- task_sets = luigi.execution_summary._summary_dict(worker)
- root_task = luigi.execution_summary._root_task(worker)
- non_empty_categories = {k: v for k, v in task_sets.items() if v}.keys()
+ with luigi.cmdline_parser.CmdlineParser.global_instance(argv):
+ task_sets = luigi.execution_summary._summary_dict(worker)
+ root_task = luigi.execution_summary._root_task(worker)
+ non_empty_categories = {k: v for k, v in task_sets.items() if v}.keys()
def has(status):
assert status in luigi.execution_summary._ORDERED_STATUSES
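For comparison, the narrower variant mentioned in the issue, a context manager around only the `_summary_dict` call inside run_with_retcodes, would look roughly like the sketch below; the change above instead keeps the whole summary block (including `_root_task` and the category scan) under one context.
```python
# Sketch of the narrower alternative (not the merged change); `argv` and `worker`
# are the locals already present in run_with_retcodes.
import luigi.cmdline_parser
import luigi.execution_summary

with luigi.cmdline_parser.CmdlineParser.global_instance(argv):
    task_sets = luigi.execution_summary._summary_dict(worker)
root_task = luigi.execution_summary._root_task(worker)
non_empty_categories = {k: v for k, v in task_sets.items() if v}.keys()
```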
| {"golden_diff": "diff --git a/luigi/retcodes.py b/luigi/retcodes.py\n--- a/luigi/retcodes.py\n+++ b/luigi/retcodes.py\n@@ -80,9 +80,10 @@\n logger.exception(\"Uncaught exception in luigi\")\n sys.exit(retcodes.unhandled_exception)\n \n- task_sets = luigi.execution_summary._summary_dict(worker)\n- root_task = luigi.execution_summary._root_task(worker)\n- non_empty_categories = {k: v for k, v in task_sets.items() if v}.keys()\n+ with luigi.cmdline_parser.CmdlineParser.global_instance(argv):\n+ task_sets = luigi.execution_summary._summary_dict(worker)\n+ root_task = luigi.execution_summary._root_task(worker)\n+ non_empty_categories = {k: v for k, v in task_sets.items() if v}.keys()\n \n def has(status):\n assert status in luigi.execution_summary._ORDERED_STATUSES\n", "issue": "MissingParameterException when generating execution summary\nGenerating the execution summary may generate a MissingParameterException if a `requires` attempts to access a Config parameter which was specified only on the command line.\r\n\r\nExample:\r\n\r\n```python\r\n# trouble.py\r\nimport luigi\r\n\r\nclass conf(luigi.Config):\r\n opt = luigi.Parameter()\r\n\r\nclass Root(luigi.WrapperTask):\r\n def requires(self):\r\n if conf().opt == 'A':\r\n return A()\r\n\r\nclass A(luigi.Task):\r\n def complete(self):\r\n return False\r\n\r\n def run(self):\r\n raise Exception()\r\n```\r\n\r\n```\r\n$ luigi --module trouble Root --conf-opt=A\r\n===== Luigi Execution Summary =====\r\n\r\nScheduled 2 tasks of which:\r\n* 1 failed:\r\n - 1 A()\r\n* 1 were left pending, among these:\r\n * 1 had failed dependencies:\r\n - 1 Root()\r\n\r\nThis progress looks :( because there were failed tasks\r\n\r\n===== Luigi Execution Summary =====\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/etl-virtualenv/bin/luigi\", line 11, in <module>\r\n sys.exit(luigi_run())\r\n File \"/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/cmdline.py\", line 11, in luigi_run\r\n run_with_retcodes(argv)\r\n File \"/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/retcodes.py\", line 83, in run_with_retcodes\r\n task_sets = luigi.execution_summary._summary_dict(worker)\r\n File \"/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/execution_summary.py\", line 337, in _summary_dict\r\n _populate_unknown_statuses(set_tasks)\r\n File \"/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/execution_summary.py\", line 75, in _populate_unknown_statuses\r\n _depth_first_search(set_tasks, task, visited)\r\n File \"/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/execution_summary.py\", line 88, in _depth_first_search\r\n for task in current_task._requires():\r\n File \"/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/task.py\", line 607, in _requires\r\n return flatten(self.requires()) # base impl\r\n File \"/private/tmp/trouble.py\", line 8, in requires\r\n if conf().opt == 'A':\r\n File \"/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/task_register.py\", line 88, in __call__\r\n param_values = cls.get_param_values(params, args, kwargs)\r\n File \"/opt/etl-virtualenv/lib/python3.6/site-packages/luigi/task.py\", line 412, in get_param_values\r\n raise parameter.MissingParameterException(\"%s: requires the '%s' parameter to be set\" % (exc_desc, param_name))\r\nluigi.parameter.MissingParameterException: conf[args=(), kwargs={}]: requires the 'opt' parameter to be set\r\n```\r\n\r\nThis seems to be because the execution summary is generated outside any CmdlineParser context manager in run_with_retcodes. 
So this should be fairly easy to avoid by extending the entire run_with_retcodes to be within the CmdlineParser cm already there for the `retcode()` config - or if that could cause side effects I am unaware of then a separate context just for the call to _summary_dict.\r\n\r\nI can attempt a PR for either approach.\r\n\r\nMay be related to #1964\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2015-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\"\"\"\nModule containing the logic for exit codes for the luigi binary. It's useful\nwhen you in a programmatic way need to know if luigi actually finished the\ngiven task, and if not why.\n\"\"\"\n\nimport luigi\nimport sys\nimport logging\nfrom luigi import IntParameter\n\n\nclass retcode(luigi.Config):\n \"\"\"\n See the :ref:`return codes configuration section <retcode-config>`.\n \"\"\"\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n unhandled_exception = IntParameter(default=4,\n description='For internal luigi errors.',\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n missing_data = IntParameter(default=0,\n description=\"For when there are incomplete ExternalTask dependencies.\",\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n task_failed = IntParameter(default=0,\n description='''For when a task's run() method fails.''',\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n already_running = IntParameter(default=0,\n description='For both local --lock and luigid \"lock\"',\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n scheduling_error = IntParameter(default=0,\n description='''For when a task's complete() or requires() fails,\n or task-limit reached'''\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n not_run = IntParameter(default=0,\n description=\"For when a task is not granted run permission by the scheduler.\"\n )\n\n\ndef run_with_retcodes(argv):\n \"\"\"\n Run luigi with command line parsing, but raise ``SystemExit`` with the configured exit code.\n\n Note: Usually you use the luigi binary directly and don't call this function yourself.\n\n :param argv: Should (conceptually) be ``sys.argv[1:]``\n \"\"\"\n logger = logging.getLogger('luigi-interface')\n with luigi.cmdline_parser.CmdlineParser.global_instance(argv):\n retcodes = retcode()\n\n worker = None\n try:\n worker = luigi.interface._run(argv)['worker']\n except luigi.interface.PidLockAlreadyTakenExit:\n sys.exit(retcodes.already_running)\n except Exception:\n # Some errors occur before logging is set up, we set it up now\n luigi.interface.setup_interface_logging()\n logger.exception(\"Uncaught exception in luigi\")\n sys.exit(retcodes.unhandled_exception)\n\n task_sets = luigi.execution_summary._summary_dict(worker)\n root_task = 
luigi.execution_summary._root_task(worker)\n non_empty_categories = {k: v for k, v in task_sets.items() if v}.keys()\n\n def has(status):\n assert status in luigi.execution_summary._ORDERED_STATUSES\n return status in non_empty_categories\n\n codes_and_conds = (\n (retcodes.missing_data, has('still_pending_ext')),\n (retcodes.task_failed, has('failed')),\n (retcodes.already_running, has('run_by_other_worker')),\n (retcodes.scheduling_error, has('scheduling_error')),\n (retcodes.not_run, has('not_run')),\n )\n expected_ret_code = max(code * (1 if cond else 0) for code, cond in codes_and_conds)\n\n if expected_ret_code == 0 and \\\n root_task not in task_sets[\"completed\"] and \\\n root_task not in task_sets[\"already_done\"]:\n sys.exit(retcodes.not_run)\n else:\n sys.exit(expected_ret_code)\n", "path": "luigi/retcodes.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2015-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\"\"\"\nModule containing the logic for exit codes for the luigi binary. It's useful\nwhen you in a programmatic way need to know if luigi actually finished the\ngiven task, and if not why.\n\"\"\"\n\nimport luigi\nimport sys\nimport logging\nfrom luigi import IntParameter\n\n\nclass retcode(luigi.Config):\n \"\"\"\n See the :ref:`return codes configuration section <retcode-config>`.\n \"\"\"\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n unhandled_exception = IntParameter(default=4,\n description='For internal luigi errors.',\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n missing_data = IntParameter(default=0,\n description=\"For when there are incomplete ExternalTask dependencies.\",\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n task_failed = IntParameter(default=0,\n description='''For when a task's run() method fails.''',\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n already_running = IntParameter(default=0,\n description='For both local --lock and luigid \"lock\"',\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n scheduling_error = IntParameter(default=0,\n description='''For when a task's complete() or requires() fails,\n or task-limit reached'''\n )\n # default value inconsistent with doc/configuration.rst for backwards compatibility reasons\n not_run = IntParameter(default=0,\n description=\"For when a task is not granted run permission by the scheduler.\"\n )\n\n\ndef run_with_retcodes(argv):\n \"\"\"\n Run luigi with command line parsing, but raise ``SystemExit`` with the configured exit code.\n\n Note: Usually you use the luigi binary directly and don't call this function yourself.\n\n :param argv: Should (conceptually) be ``sys.argv[1:]``\n \"\"\"\n logger = logging.getLogger('luigi-interface')\n with luigi.cmdline_parser.CmdlineParser.global_instance(argv):\n 
retcodes = retcode()\n\n worker = None\n try:\n worker = luigi.interface._run(argv)['worker']\n except luigi.interface.PidLockAlreadyTakenExit:\n sys.exit(retcodes.already_running)\n except Exception:\n # Some errors occur before logging is set up, we set it up now\n luigi.interface.setup_interface_logging()\n logger.exception(\"Uncaught exception in luigi\")\n sys.exit(retcodes.unhandled_exception)\n\n with luigi.cmdline_parser.CmdlineParser.global_instance(argv):\n task_sets = luigi.execution_summary._summary_dict(worker)\n root_task = luigi.execution_summary._root_task(worker)\n non_empty_categories = {k: v for k, v in task_sets.items() if v}.keys()\n\n def has(status):\n assert status in luigi.execution_summary._ORDERED_STATUSES\n return status in non_empty_categories\n\n codes_and_conds = (\n (retcodes.missing_data, has('still_pending_ext')),\n (retcodes.task_failed, has('failed')),\n (retcodes.already_running, has('run_by_other_worker')),\n (retcodes.scheduling_error, has('scheduling_error')),\n (retcodes.not_run, has('not_run')),\n )\n expected_ret_code = max(code * (1 if cond else 0) for code, cond in codes_and_conds)\n\n if expected_ret_code == 0 and \\\n root_task not in task_sets[\"completed\"] and \\\n root_task not in task_sets[\"already_done\"]:\n sys.exit(retcodes.not_run)\n else:\n sys.exit(expected_ret_code)\n", "path": "luigi/retcodes.py"}]} | 2,158 | 213 |
gh_patches_debug_24575 | rasdani/github-patches | git_diff | pyro-ppl__numpyro-1205 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip install numpyro[cuda111]: pb
Hi,
I am experiencing a problem with the NumPyro install on a GPU (Linux) machine:
```python
- cuda_version: '11-3'
- cuda_release: '11.3.0-1'
- cuda_drivers_version: '465.19.01-1'
- cudnn_version: '8.2.0.53'
- nccl_version: '2.9.8'
```
Using prescription on [this Numpyro webpage](https://pypi.org/project/numpyro/)
```python
pip install numpyro[cuda111] -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
I get:
```
Looking in links: https://storage.googleapis.com/jax-releases/jax_releases.html
Collecting numpyro[cuda111]
Downloading numpyro-0.8.0-py3-none-any.whl (264 kB)
|████████████████████████████████| 264 kB 8.2 MB/s
Collecting jax>=0.2.13
Using cached jax-0.2.24-py3-none-any.whl
Collecting tqdm
Downloading tqdm-4.62.3-py2.py3-none-any.whl (76 kB)
|████████████████████████████████| 76 kB 2.6 MB/s
Collecting jaxlib>=0.1.65
Using cached https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.1.73%2Bcuda11.cudnn82-cp38-none-manylinux2010_x86_64.whl (138.6 MB)
Requirement already satisfied: numpy>=1.18 in anaconda3/envs/jaxtmp/lib/python3.8/site-packages (from jax>=0.2.13->numpyro[cuda111]) (1.21.3)
Collecting typing-extensions
Using cached typing_extensions-3.10.0.2-py3-none-any.whl (26 kB)
Requirement already satisfied: scipy>=1.2.1 in /anaconda3/envs/jaxtmp/lib/python3.8/site-packages (from jax>=0.2.13->numpyro[cuda111]) (1.7.1)
Requirement already satisfied: absl-py in .local/lib/python3.8/site-packages (from jax>=0.2.13->numpyro[cuda111]) (0.13.0)
Requirement already satisfied: opt-einsum in.local/lib/python3.8/site-packages (from jax>=0.2.13->numpyro[cuda111]) (3.3.0)
WARNING: jax 0.2.24 does not provide the extra 'cuda111'
Requirement already satisfied: flatbuffers<3.0,>=1.12 in /.local/lib/python3.8/site-packages (from jaxlib>=0.1.65->numpyro[cuda111]) (2.0)
Collecting six
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Installing collected packages: six, typing-extensions, tqdm, jaxlib, jax, numpyro
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow-probability 0.13.0 requires cloudpickle>=1.3, which is not installed.
tensorflow-probability 0.13.0 requires decorator, which is not installed.
Successfully installed jax-0.2.24 jaxlib-0.1.73+cuda11.cudnn82 numpyro-0.8.0 six-1.16.0 tqdm-4.62.3 typing-extensions-3.10.0.2
```
Here is the package listing via pip:
```python
(jaxtmp) ccwgige016:$ pip list
Package Version Location
---------------------- ------------------------------- ---------------------------------------------
absl-py 0.13.0
appdirs 1.4.4
certifi 2021.10.8
configparser 5.0.2
dill 0.3.4
dm-tree 0.1.6
flatbuffers 2.0
gast 0.5.1
jax 0.2.24
jax-cosmo 0.1rc8.dev27+g3e30b6c.d20211024 /sps/lsstcest/users/campagne/jax_cosmo_forked
jaxlib 0.1.73+cuda11.cudnn82
numpy 1.21.3
numpyro 0.8.0
opt-einsum 3.3.0
pip 21.2.4
scipy 1.7.1
setuptools 58.0.4
six 1.16.0
tensorflow-probability 0.13.0
torch 1.9.0
tqdm 4.62.3
typing-extensions 3.10.0.2
wheel 0.37.0
```
The same listing with conda:
```python
# packages in environment at /sps/lsst/users/campagne/anaconda3/envs/jaxtmp:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
ca-certificates 2021.9.30 h06a4308_1
certifi 2021.10.8 py38h06a4308_0
jax 0.2.24 pypi_0 pypi
jaxlib 0.1.73+cuda11.cudnn82 pypi_0 pypi
ld_impl_linux-64 2.35.1 h7274673_9
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgomp 9.3.0 h5101ec6_17
libstdcxx-ng 9.3.0 hd4cf53a_17
ncurses 6.2 he6710b0_1
numpy 1.21.3 pypi_0 pypi
numpyro 0.8.0 pypi_0 pypi
openssl 1.1.1l h7f8727e_0
pip 21.2.4 py38h06a4308_0
python 3.8.12 h12debd9_0
readline 8.1 h27cfd23_0
scipy 1.7.1 pypi_0 pypi
setuptools 58.0.4 py38h06a4308_0
six 1.16.0 pypi_0 pypi
sqlite 3.36.0 hc218d9a_0
tk 8.6.11 h1ccaba5_0
tqdm 4.62.3 pypi_0 pypi
typing-extensions 3.10.0.2 pypi_0 pypi
wheel 0.37.0 pyhd3eb1b0_1
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
```
Now, attempting to import JAX:
```python
(jaxtmp) ccwgige016:$ python
Python 3.8.12 (default, Oct 12 2021, 13:49:34)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import jax
ModuleNotFoundError: No module named 'numpy.core'
ImportError: numpy.core._multiarray_umath failed to import
ImportError: numpy.core.umath failed to import
2021-10-27 12:44:41.774617: F external/org_tensorflow/tensorflow/compiler/xla/python/xla.cc:77] Check failed: tensorflow::RegisterNumpyBfloat16()
Aborted
```
Do you have a solution? Thanks
--- END ISSUE ---
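One detail worth flagging in the log above is the line `WARNING: jax 0.2.24 does not provide the extra 'cuda111'`: the per-CUDA-version extras no longer exist in newer jax releases, so that extra silently adds nothing. The import failure itself points at a NumPy/C-extension mismatch, and since the pip output shows packages both in the conda env and under `~/.local`, a small check like the sketch below (a suggestion, not from the report) helps confirm which numpy the interpreter actually loads:
```python
# Diagnostic sketch: confirm which numpy (and from which location) gets imported.
import numpy

print(numpy.__version__)
print(numpy.__file__)  # a path under ~/.local here would suggest a shadowing install
```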
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright Contributors to the Pyro project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from __future__ import absolute_import, division, print_function
5
6 import os
7 import sys
8
9 from setuptools import find_packages, setup
10
11 PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
12 _available_cuda_versions = [
13 "101",
14 "102",
15 "110",
16 "111",
17 ] # TODO: align these with what's available in JAX before release
18 _jax_version_constraints = ">=0.2.13"
19 _jaxlib_version_constraints = ">=0.1.65"
20
21 # Find version
22 for line in open(os.path.join(PROJECT_PATH, "numpyro", "version.py")):
23 if line.startswith("__version__ = "):
24 version = line.strip().split()[2][1:-1]
25
26 # READ README.md for long description on PyPi.
27 try:
28 long_description = open("README.md", encoding="utf-8").read()
29 except Exception as e:
30 sys.stderr.write("Failed to read README.md:\n {}\n".format(e))
31 sys.stderr.flush()
32 long_description = ""
33
34 setup(
35 name="numpyro",
36 version=version,
37 description="Pyro PPL on NumPy",
38 packages=find_packages(include=["numpyro", "numpyro.*"]),
39 url="https://github.com/pyro-ppl/numpyro",
40 author="Uber AI Labs",
41 install_requires=[
42 f"jax{_jax_version_constraints}",
43 f"jaxlib{_jaxlib_version_constraints}",
44 "tqdm",
45 ],
46 extras_require={
47 "doc": [
48 "ipython", # sphinx needs this to render codes
49 "nbsphinx>=0.8.5",
50 "readthedocs-sphinx-search==0.1.0",
51 "sphinx",
52 "sphinx_rtd_theme",
53 "sphinx-gallery",
54 ],
55 "test": [
56 "black[jupyter]>=21.8b0",
57 "flake8",
58 "isort>=5.0",
59 "pytest>=4.1",
60 "pyro-api>=0.1.1",
61 "scipy>=1.1",
62 ],
63 "dev": [
64 "dm-haiku",
65 "flax",
66 "funsor==0.4.1",
67 "graphviz",
68 "jaxns==0.0.7",
69 "optax>=0.0.6",
70 "tensorflow_probability>=0.13",
71 ],
72 "examples": [
73 "arviz",
74 "jupyter",
75 "matplotlib",
76 "pandas",
77 "seaborn",
78 "scikit-learn",
79 "wordcloud",
80 ],
81 "cpu": f"jax[cpu]{_jax_version_constraints}",
82 # TPU and CUDA installations, currently require to add package repository URL, i.e.,
83 # pip install numpyro[cuda101] -f https://storage.googleapis.com/jax-releases/jax_releases.html
84 "tpu": f"jax[tpu]{_jax_version_constraints}",
85 **{
86 f"cuda{version}": f"jax[cuda{version}]{_jax_version_constraints}"
87 for version in _available_cuda_versions
88 },
89 },
90 long_description=long_description,
91 long_description_content_type="text/markdown",
92 keywords="probabilistic machine learning bayesian statistics",
93 license="Apache License 2.0",
94 classifiers=[
95 "Intended Audience :: Developers",
96 "Intended Audience :: Education",
97 "Intended Audience :: Science/Research",
98 "License :: OSI Approved :: Apache Software License",
99 "Operating System :: POSIX :: Linux",
100 "Operating System :: MacOS :: MacOS X",
101 "Programming Language :: Python :: 3.6",
102 "Programming Language :: Python :: 3.7",
103 "Programming Language :: Python :: 3.8",
104 "Programming Language :: Python :: 3.9",
105 ],
106 )
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,12 +9,6 @@
from setuptools import find_packages, setup
PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
-_available_cuda_versions = [
- "101",
- "102",
- "110",
- "111",
-] # TODO: align these with what's available in JAX before release
_jax_version_constraints = ">=0.2.13"
_jaxlib_version_constraints = ">=0.1.65"
@@ -80,12 +74,9 @@
],
"cpu": f"jax[cpu]{_jax_version_constraints}",
# TPU and CUDA installations, currently require to add package repository URL, i.e.,
- # pip install numpyro[cuda101] -f https://storage.googleapis.com/jax-releases/jax_releases.html
+ # pip install numpyro[cuda] -f https://storage.googleapis.com/jax-releases/jax_releases.html
"tpu": f"jax[tpu]{_jax_version_constraints}",
- **{
- f"cuda{version}": f"jax[cuda{version}]{_jax_version_constraints}"
- for version in _available_cuda_versions
- },
+ "cuda": f"jax[cuda]{_jax_version_constraints}",
},
long_description=long_description,
long_description_content_type="text/markdown",
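Assuming the patch above is released, the CUDA extra becomes version-agnostic: `pip install numpyro[cuda] -f https://storage.googleapis.com/jax-releases/jax_releases.html` replaces the per-version `numpyro[cuda101]`/`numpyro[cuda111]` spellings. A hedged post-install sanity check follows; the expected output described in the comments is an assumption that depends on the local driver and cuDNN matching the installed wheel:

```python
# Quick check that the GPU-enabled jaxlib imports cleanly and sees the GPU
# after installing numpyro[cuda] from the jax-releases find-links page.
import jax

print(jax.__version__)
print(jax.devices())  # expected to list GPU devices rather than CPU-only ones
```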
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,12 +9,6 @@\n from setuptools import find_packages, setup\n \n PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\n-_available_cuda_versions = [\n- \"101\",\n- \"102\",\n- \"110\",\n- \"111\",\n-] # TODO: align these with what's available in JAX before release\n _jax_version_constraints = \">=0.2.13\"\n _jaxlib_version_constraints = \">=0.1.65\"\n \n@@ -80,12 +74,9 @@\n ],\n \"cpu\": f\"jax[cpu]{_jax_version_constraints}\",\n # TPU and CUDA installations, currently require to add package repository URL, i.e.,\n- # pip install numpyro[cuda101] -f https://storage.googleapis.com/jax-releases/jax_releases.html\n+ # pip install numpyro[cuda] -f https://storage.googleapis.com/jax-releases/jax_releases.html\n \"tpu\": f\"jax[tpu]{_jax_version_constraints}\",\n- **{\n- f\"cuda{version}\": f\"jax[cuda{version}]{_jax_version_constraints}\"\n- for version in _available_cuda_versions\n- },\n+ \"cuda\": f\"jax[cuda]{_jax_version_constraints}\",\n },\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n", "issue": "pip install numpyro[cuda111]: pb\nHi, \r\nI experience a pb with Numpyro install on a GPU (Linux)\r\n```python\r\n - cuda_version: '11-3'\r\n- cuda_release: '11.3.0-1' \r\n- cuda_drivers_version: '465.19.01-1' \r\n- cudnn_version: '8.2.0.53'\r\n- nccl_version: '2.9.8'\r\n```\r\n\r\nUsing prescription on [this Numpyro webpage](https://pypi.org/project/numpyro/)\r\n```python\r\npip install numpyro[cuda111] -f https://storage.googleapis.com/jax-releases/jax_releases.html\r\n```\r\n\r\nI get:\r\n```\r\nLooking in links: https://storage.googleapis.com/jax-releases/jax_releases.html\r\nCollecting numpyro[cuda111]\r\n Downloading numpyro-0.8.0-py3-none-any.whl (264 kB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 264 kB 8.2 MB/s \r\nCollecting jax>=0.2.13\r\n Using cached jax-0.2.24-py3-none-any.whl\r\nCollecting tqdm\r\n Downloading tqdm-4.62.3-py2.py3-none-any.whl (76 kB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 76 kB 2.6 MB/s \r\nCollecting jaxlib>=0.1.65\r\n Using cached https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.1.73%2Bcuda11.cudnn82-cp38-none-manylinux2010_x86_64.whl (138.6 MB)\r\nRequirement already satisfied: numpy>=1.18 in anaconda3/envs/jaxtmp/lib/python3.8/site-packages (from jax>=0.2.13->numpyro[cuda111]) (1.21.3)\r\nCollecting typing-extensions\r\n Using cached typing_extensions-3.10.0.2-py3-none-any.whl (26 kB)\r\nRequirement already satisfied: scipy>=1.2.1 in /anaconda3/envs/jaxtmp/lib/python3.8/site-packages (from jax>=0.2.13->numpyro[cuda111]) (1.7.1)\r\nRequirement already satisfied: absl-py in .local/lib/python3.8/site-packages (from jax>=0.2.13->numpyro[cuda111]) (0.13.0)\r\nRequirement already satisfied: opt-einsum in.local/lib/python3.8/site-packages (from jax>=0.2.13->numpyro[cuda111]) (3.3.0)\r\nWARNING: jax 0.2.24 does not provide the extra 'cuda111'\r\nRequirement already satisfied: flatbuffers<3.0,>=1.12 in /.local/lib/python3.8/site-packages (from jaxlib>=0.1.65->numpyro[cuda111]) (2.0)\r\nCollecting six\r\n Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)\r\nInstalling collected packages: six, typing-extensions, 
tqdm, jaxlib, jax, numpyro\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\ntensorflow-probability 0.13.0 requires cloudpickle>=1.3, which is not installed.\r\ntensorflow-probability 0.13.0 requires decorator, which is not installed.\r\nSuccessfully installed jax-0.2.24 jaxlib-0.1.73+cuda11.cudnn82 numpyro-0.8.0 six-1.16.0 tqdm-4.62.3 typing-extensions-3.10.0.2\r\n```\r\nSome listing via pip\r\n```python\r\n(jaxtmp) ccwgige016:$ pip list\r\nPackage Version Location\r\n---------------------- ------------------------------- ---------------------------------------------\r\nabsl-py 0.13.0\r\nappdirs 1.4.4\r\ncertifi 2021.10.8\r\nconfigparser 5.0.2\r\ndill 0.3.4\r\ndm-tree 0.1.6\r\nflatbuffers 2.0\r\ngast 0.5.1\r\njax 0.2.24\r\njax-cosmo 0.1rc8.dev27+g3e30b6c.d20211024 /sps/lsstcest/users/campagne/jax_cosmo_forked\r\njaxlib 0.1.73+cuda11.cudnn82\r\nnumpy 1.21.3\r\nnumpyro 0.8.0\r\nopt-einsum 3.3.0\r\npip 21.2.4\r\nscipy 1.7.1\r\nsetuptools 58.0.4\r\nsix 1.16.0\r\ntensorflow-probability 0.13.0\r\ntorch 1.9.0\r\ntqdm 4.62.3\r\ntyping-extensions 3.10.0.2\r\nwheel 0.37.0\r\n```\r\nidem with conda\r\n```python\r\n# packages in environment at /sps/lsst/users/campagne/anaconda3/envs/jaxtmp:\r\n#\r\n# Name Version Build Channel\r\n_libgcc_mutex 0.1 main \r\n_openmp_mutex 4.5 1_gnu \r\nca-certificates 2021.9.30 h06a4308_1 \r\ncertifi 2021.10.8 py38h06a4308_0 \r\njax 0.2.24 pypi_0 pypi\r\njaxlib 0.1.73+cuda11.cudnn82 pypi_0 pypi\r\nld_impl_linux-64 2.35.1 h7274673_9 \r\nlibffi 3.3 he6710b0_2 \r\nlibgcc-ng 9.3.0 h5101ec6_17 \r\nlibgomp 9.3.0 h5101ec6_17 \r\nlibstdcxx-ng 9.3.0 hd4cf53a_17 \r\nncurses 6.2 he6710b0_1 \r\nnumpy 1.21.3 pypi_0 pypi\r\nnumpyro 0.8.0 pypi_0 pypi\r\nopenssl 1.1.1l h7f8727e_0 \r\npip 21.2.4 py38h06a4308_0 \r\npython 3.8.12 h12debd9_0 \r\nreadline 8.1 h27cfd23_0 \r\nscipy 1.7.1 pypi_0 pypi\r\nsetuptools 58.0.4 py38h06a4308_0 \r\nsix 1.16.0 pypi_0 pypi\r\nsqlite 3.36.0 hc218d9a_0 \r\ntk 8.6.11 h1ccaba5_0 \r\ntqdm 4.62.3 pypi_0 pypi\r\ntyping-extensions 3.10.0.2 pypi_0 pypi\r\nwheel 0.37.0 pyhd3eb1b0_1 \r\nxz 5.2.5 h7b6447c_0 \r\nzlib 1.2.11 h7b6447c_3 \r\n```\r\nNow try to do something\r\n```python\r\n(jaxtmp) ccwgige016:$ python\r\nPython 3.8.12 (default, Oct 12 2021, 13:49:34) \r\n[GCC 7.5.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import jax\r\nModuleNotFoundError: No module named 'numpy.core'\r\nImportError: numpy.core._multiarray_umath failed to import\r\nImportError: numpy.core.umath failed to import\r\n2021-10-27 12:44:41.774617: F external/org_tensorflow/tensorflow/compiler/xla/python/xla.cc:77] Check failed: tensorflow::RegisterNumpyBfloat16() \r\nAborted\r\n```\r\nDo you have a solution? 
Thanks\n", "before_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\n\nfrom setuptools import find_packages, setup\n\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\n_available_cuda_versions = [\n \"101\",\n \"102\",\n \"110\",\n \"111\",\n] # TODO: align these with what's available in JAX before release\n_jax_version_constraints = \">=0.2.13\"\n_jaxlib_version_constraints = \">=0.1.65\"\n\n# Find version\nfor line in open(os.path.join(PROJECT_PATH, \"numpyro\", \"version.py\")):\n if line.startswith(\"__version__ = \"):\n version = line.strip().split()[2][1:-1]\n\n# READ README.md for long description on PyPi.\ntry:\n long_description = open(\"README.md\", encoding=\"utf-8\").read()\nexcept Exception as e:\n sys.stderr.write(\"Failed to read README.md:\\n {}\\n\".format(e))\n sys.stderr.flush()\n long_description = \"\"\n\nsetup(\n name=\"numpyro\",\n version=version,\n description=\"Pyro PPL on NumPy\",\n packages=find_packages(include=[\"numpyro\", \"numpyro.*\"]),\n url=\"https://github.com/pyro-ppl/numpyro\",\n author=\"Uber AI Labs\",\n install_requires=[\n f\"jax{_jax_version_constraints}\",\n f\"jaxlib{_jaxlib_version_constraints}\",\n \"tqdm\",\n ],\n extras_require={\n \"doc\": [\n \"ipython\", # sphinx needs this to render codes\n \"nbsphinx>=0.8.5\",\n \"readthedocs-sphinx-search==0.1.0\",\n \"sphinx\",\n \"sphinx_rtd_theme\",\n \"sphinx-gallery\",\n ],\n \"test\": [\n \"black[jupyter]>=21.8b0\",\n \"flake8\",\n \"isort>=5.0\",\n \"pytest>=4.1\",\n \"pyro-api>=0.1.1\",\n \"scipy>=1.1\",\n ],\n \"dev\": [\n \"dm-haiku\",\n \"flax\",\n \"funsor==0.4.1\",\n \"graphviz\",\n \"jaxns==0.0.7\",\n \"optax>=0.0.6\",\n \"tensorflow_probability>=0.13\",\n ],\n \"examples\": [\n \"arviz\",\n \"jupyter\",\n \"matplotlib\",\n \"pandas\",\n \"seaborn\",\n \"scikit-learn\",\n \"wordcloud\",\n ],\n \"cpu\": f\"jax[cpu]{_jax_version_constraints}\",\n # TPU and CUDA installations, currently require to add package repository URL, i.e.,\n # pip install numpyro[cuda101] -f https://storage.googleapis.com/jax-releases/jax_releases.html\n \"tpu\": f\"jax[tpu]{_jax_version_constraints}\",\n **{\n f\"cuda{version}\": f\"jax[cuda{version}]{_jax_version_constraints}\"\n for version in _available_cuda_versions\n },\n },\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n keywords=\"probabilistic machine learning bayesian statistics\",\n license=\"Apache License 2.0\",\n classifiers=[\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\n\nfrom setuptools import find_packages, setup\n\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\n_jax_version_constraints = \">=0.2.13\"\n_jaxlib_version_constraints = \">=0.1.65\"\n\n# Find version\nfor line in open(os.path.join(PROJECT_PATH, 
\"numpyro\", \"version.py\")):\n if line.startswith(\"__version__ = \"):\n version = line.strip().split()[2][1:-1]\n\n# READ README.md for long description on PyPi.\ntry:\n long_description = open(\"README.md\", encoding=\"utf-8\").read()\nexcept Exception as e:\n sys.stderr.write(\"Failed to read README.md:\\n {}\\n\".format(e))\n sys.stderr.flush()\n long_description = \"\"\n\nsetup(\n name=\"numpyro\",\n version=version,\n description=\"Pyro PPL on NumPy\",\n packages=find_packages(include=[\"numpyro\", \"numpyro.*\"]),\n url=\"https://github.com/pyro-ppl/numpyro\",\n author=\"Uber AI Labs\",\n install_requires=[\n f\"jax{_jax_version_constraints}\",\n f\"jaxlib{_jaxlib_version_constraints}\",\n \"tqdm\",\n ],\n extras_require={\n \"doc\": [\n \"ipython\", # sphinx needs this to render codes\n \"nbsphinx>=0.8.5\",\n \"readthedocs-sphinx-search==0.1.0\",\n \"sphinx\",\n \"sphinx_rtd_theme\",\n \"sphinx-gallery\",\n ],\n \"test\": [\n \"black[jupyter]>=21.8b0\",\n \"flake8\",\n \"isort>=5.0\",\n \"pytest>=4.1\",\n \"pyro-api>=0.1.1\",\n \"scipy>=1.1\",\n ],\n \"dev\": [\n \"dm-haiku\",\n \"flax\",\n \"funsor==0.4.1\",\n \"graphviz\",\n \"jaxns==0.0.7\",\n \"optax>=0.0.6\",\n \"tensorflow_probability>=0.13\",\n ],\n \"examples\": [\n \"arviz\",\n \"jupyter\",\n \"matplotlib\",\n \"pandas\",\n \"seaborn\",\n \"scikit-learn\",\n \"wordcloud\",\n ],\n \"cpu\": f\"jax[cpu]{_jax_version_constraints}\",\n # TPU and CUDA installations, currently require to add package repository URL, i.e.,\n # pip install numpyro[cuda] -f https://storage.googleapis.com/jax-releases/jax_releases.html\n \"tpu\": f\"jax[tpu]{_jax_version_constraints}\",\n \"cuda\": f\"jax[cuda]{_jax_version_constraints}\",\n },\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n keywords=\"probabilistic machine learning bayesian statistics\",\n license=\"Apache License 2.0\",\n classifiers=[\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}]} | 3,383 | 331 |
gh_patches_debug_8486 | rasdani/github-patches | git_diff | networkx__networkx-2713 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
inter_community_non_edges ignores directionality
Hi,
I think the function:
nx.algorithms.community.quality.inter_community_non_edges()
does not work properly for directed graphs. It always returns the non-edges of an undirected graph, basically halving the number of edges. This means that the performance function (nx.algorithms.community.performance) will never be higher than 50% for a directed graph.
I'm using version '2.0.dev_20170801111157', python 3.5.1
Best,
Nicolas
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `networkx/algorithms/community/quality.py`
Content:
```
1 # quality.py - functions for measuring partitions of a graph
2 #
3 # Copyright 2015-2017 NetworkX developers.
4 #
5 # This file is part of NetworkX.
6 #
7 # NetworkX is distributed under a BSD license; see LICENSE.txt for more
8 # information.
9 """Functions for measuring the quality of a partition (into
10 communities).
11
12 """
13 from __future__ import division
14
15 from functools import wraps
16 from itertools import product
17
18 import networkx as nx
19 from networkx import NetworkXError
20 from networkx.utils import not_implemented_for
21 from networkx.algorithms.community.community_utils import is_partition
22
23 __all__ = ['coverage', 'modularity', 'performance']
24
25
26 class NotAPartition(NetworkXError):
27 """Raised if a given collection is not a partition.
28
29 """
30
31 def __init__(self, G, collection):
32 msg = '{} is not a valid partition of the graph {}'
33 msg = msg.format(G, collection)
34 super(NotAPartition, self).__init__(msg)
35
36
37 def require_partition(func):
38 """Decorator that raises an exception if a partition is not a valid
39 partition of the nodes of a graph.
40
41 Raises :exc:`networkx.NetworkXError` if the partition is not valid.
42
43 This decorator should be used on functions whose first two arguments
44 are a graph and a partition of the nodes of that graph (in that
45 order)::
46
47 >>> @require_partition
48 ... def foo(G, partition):
49 ... print('partition is valid!')
50 ...
51 >>> G = nx.complete_graph(5)
52 >>> partition = [{0, 1}, {2, 3}, {4}]
53 >>> foo(G, partition)
54 partition is valid!
55 >>> partition = [{0}, {2, 3}, {4}]
56 >>> foo(G, partition) # doctest: +IGNORE_EXCEPTION_DETAIL
57 Traceback (most recent call last):
58 ...
59 NetworkXError: `partition` is not a valid partition of the nodes of G
60 >>> partition = [{0, 1}, {1, 2, 3}, {4}]
61 >>> foo(G, partition) # doctest: +IGNORE_EXCEPTION_DETAIL
62 Traceback (most recent call last):
63 ...
64 NetworkXError: `partition` is not a valid partition of the nodes of G
65
66 """
67
68 @wraps(func)
69 def new_func(*args, **kw):
70 # Here we assume that the first two arguments are (G, partition).
71 if not is_partition(*args[:2]):
72 raise nx.NetworkXError('`partition` is not a valid partition of'
73 ' the nodes of G')
74 return func(*args, **kw)
75 return new_func
76
77
78 def intra_community_edges(G, partition):
79 """Returns the number of intra-community edges according to the given
80 partition of the nodes of `G`.
81
82 `G` must be a NetworkX graph.
83
84 `partition` must be a partition of the nodes of `G`.
85
86 The "intra-community edges" are those edges joining a pair of nodes
87 in the same block of the partition.
88
89 """
90 return sum(G.subgraph(block).size() for block in partition)
91
92
93 def inter_community_edges(G, partition):
94 """Returns the number of inter-community edges according to the given
95 partition of the nodes of `G`.
96
97 `G` must be a NetworkX graph.
98
99 `partition` must be a partition of the nodes of `G`.
100
101 The *inter-community edges* are those edges joining a pair of nodes
102 in different blocks of the partition.
103
104 Implementation note: this function creates an intermediate graph
105 that may require the same amount of memory as required to store
106 `G`.
107
108 """
109 # Alternate implementation that does not require constructing a new
110 # graph object (but does require constructing an affiliation
111 # dictionary):
112 #
113 # aff = dict(chain.from_iterable(((v, block) for v in block)
114 # for block in partition))
115 # return sum(1 for u, v in G.edges() if aff[u] != aff[v])
116 #
117 return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()
118
119
120 def inter_community_non_edges(G, partition):
121 """Returns the number of inter-community non-edges according to the
122 given partition of the nodes of `G`.
123
124 `G` must be a NetworkX graph.
125
126 `partition` must be a partition of the nodes of `G`.
127
128 A *non-edge* is a pair of nodes (undirected if `G` is undirected)
129 that are not adjacent in `G`. The *inter-community non-edges* are
130 those non-edges on a pair of nodes in different blocks of the
131 partition.
132
133 Implementation note: this function creates two intermediate graphs,
134 which may require up to twice the amount of memory as required to
135 store `G`.
136
137 """
138 # Alternate implementation that does not require constructing two
139 # new graph objects (but does require constructing an affiliation
140 # dictionary):
141 #
142 # aff = dict(chain.from_iterable(((v, block) for v in block)
143 # for block in partition))
144 # return sum(1 for u, v in nx.non_edges(G) if aff[u] != aff[v])
145 #
146 return inter_community_edges(nx.complement(G), partition)
147
148
149 @not_implemented_for('multigraph')
150 @require_partition
151 def performance(G, partition):
152 """Returns the performance of a partition.
153
154 The *performance* of a partition is the ratio of the number of
155 intra-community edges plus inter-community non-edges with the total
156 number of potential edges.
157
158 Parameters
159 ----------
160 G : NetworkX graph
161 A simple graph (directed or undirected).
162
163 partition : sequence
164
165 Partition of the nodes of `G`, represented as a sequence of
166 sets of nodes. Each block of the partition represents a
167 community.
168
169 Returns
170 -------
171 float
172 The performance of the partition, as defined above.
173
174 Raises
175 ------
176 NetworkXError
177 If `partition` is not a valid partition of the nodes of `G`.
178
179 References
180 ----------
181 .. [1] Santo Fortunato.
182 "Community Detection in Graphs".
183 *Physical Reports*, Volume 486, Issue 3--5 pp. 75--174
184 <https://arxiv.org/abs/0906.0612>
185
186 """
187 # Compute the number of intra-community edges and inter-community
188 # edges.
189 intra_edges = intra_community_edges(G, partition)
190 inter_edges = inter_community_non_edges(G, partition)
191 # Compute the number of edges in the complete graph (directed or
192 # undirected, as it depends on `G`) on `n` nodes.
193 #
194 # (If `G` is an undirected graph, we divide by two since we have
195 # double-counted each potential edge. We use integer division since
196 # `total_pairs` is guaranteed to be even.)
197 n = len(G)
198 total_pairs = n * (n - 1)
199 if not G.is_directed():
200 total_pairs //= 2
201 return (intra_edges + inter_edges) / total_pairs
202
203
204 @require_partition
205 def coverage(G, partition):
206 """Returns the coverage of a partition.
207
208 The *coverage* of a partition is the ratio of the number of
209 intra-community edges to the total number of edges in the graph.
210
211 Parameters
212 ----------
213 G : NetworkX graph
214
215 partition : sequence
216 Partition of the nodes of `G`, represented as a sequence of
217 sets of nodes. Each block of the partition represents a
218 community.
219
220 Returns
221 -------
222 float
223 The coverage of the partition, as defined above.
224
225 Raises
226 ------
227 NetworkXError
228 If `partition` is not a valid partition of the nodes of `G`.
229
230 Notes
231 -----
232 If `G` is a multigraph, the multiplicity of edges is counted.
233
234 References
235 ----------
236 .. [1] Santo Fortunato.
237 "Community Detection in Graphs".
238 *Physical Reports*, Volume 486, Issue 3--5 pp. 75--174
239 <https://arxiv.org/abs/0906.0612>
240
241 """
242 intra_edges = intra_community_edges(G, partition)
243 total_edges = G.number_of_edges()
244 return intra_edges / total_edges
245
246
247 def modularity(G, communities, weight='weight'):
248 r"""Returns the modularity of the given partition of the graph.
249
250 Modularity is defined in [1]_ as
251
252 .. math::
253
254 Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_ik_j}{2m}\right)
255 \delta(c_i,c_j)
256
257 where $m$ is the number of edges, $A$ is the adjacency matrix of
258 `G`, $k_i$ is the degree of $i$ and $\delta(c_i, c_j)$
259 is 1 if $i$ and $j$ are in the same community and 0 otherwise.
260
261 Parameters
262 ----------
263 G : NetworkX Graph
264
265 communities : list
266 List of sets of nodes of `G` representing a partition of the
267 nodes.
268
269 Returns
270 -------
271 Q : float
272 The modularity of the paritition.
273
274 Raises
275 ------
276 NotAPartition
277 If `communities` is not a partition of the nodes of `G`.
278
279 Examples
280 --------
281 >>> G = nx.barbell_graph(3, 0)
282 >>> nx.algorithms.community.modularity(G, [{0, 1, 2}, {3, 4, 5}])
283 0.35714285714285704
284
285 References
286 ----------
287 .. [1] M. E. J. Newman *Networks: An Introduction*, page 224.
288 Oxford University Press, 2011.
289
290 """
291 if not is_partition(G, communities):
292 raise NotAPartition(G, communities)
293
294 multigraph = G.is_multigraph()
295 directed = G.is_directed()
296 m = G.size(weight=weight)
297 if directed:
298 out_degree = dict(G.out_degree(weight=weight))
299 in_degree = dict(G.in_degree(weight=weight))
300 norm = 1 / m
301 else:
302 out_degree = dict(G.degree(weight=weight))
303 in_degree = out_degree
304 norm = 1 / (2 * m)
305
306 def val(u, v):
307 try:
308 if multigraph:
309 w = sum(d.get(weight, 1) for k, d in G[u][v].items())
310 else:
311 w = G[u][v].get(weight, 1)
312 except KeyError:
313 w = 0
314 # Double count self-loops if the graph is undirected.
315 if u == v and not directed:
316 w *= 2
317 return w - in_degree[u] * out_degree[v] * norm
318
319 Q = sum(val(u, v) for c in communities for u, v in product(c, repeat=2))
320 return Q * norm
321
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/networkx/algorithms/community/quality.py b/networkx/algorithms/community/quality.py
--- a/networkx/algorithms/community/quality.py
+++ b/networkx/algorithms/community/quality.py
@@ -114,7 +114,10 @@
# for block in partition))
# return sum(1 for u, v in G.edges() if aff[u] != aff[v])
#
- return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()
+ if G.is_directed():
+ return nx.quotient_graph(G, partition, create_using=nx.MultiDiGraph()).size()
+ else:
+ return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()
def inter_community_non_edges(G, partition):
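To make the directed-versus-undirected counting concrete, here is a small sketch; the graph, the partition, and the expected counts are illustrative assumptions consistent with the issue report, not values taken from it:

```python
# Two 2-node communities in a directed graph, with one inter-community edge
# in each direction between them.
import networkx as nx
from networkx.algorithms.community.quality import (
    inter_community_edges, performance)

G = nx.DiGraph([(0, 1), (1, 0), (0, 2), (2, 0), (2, 3), (3, 2)])
partition = [{0, 1}, {2, 3}]

# Before the patch the quotient graph is an undirected MultiGraph, so the
# opposite edges (0, 2) and (2, 0) collapse into one counted edge; with the
# MultiDiGraph quotient from the patch both should be counted, and this value
# is expected to go from 1 to 2.
print(inter_community_edges(G, partition))

# performance() builds on inter_community_non_edges(), which reuses the same
# helper on the complement graph, so it should also rise for directed graphs
# once the patch is applied.
print(performance(G, partition))
```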
| {"golden_diff": "diff --git a/networkx/algorithms/community/quality.py b/networkx/algorithms/community/quality.py\n--- a/networkx/algorithms/community/quality.py\n+++ b/networkx/algorithms/community/quality.py\n@@ -114,7 +114,10 @@\n # for block in partition))\n # return sum(1 for u, v in G.edges() if aff[u] != aff[v])\n #\n- return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()\n+ if G.is_directed():\n+ return nx.quotient_graph(G, partition, create_using=nx.MultiDiGraph()).size()\n+ else:\n+ return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()\n \n \n def inter_community_non_edges(G, partition):\n", "issue": "inter_community_non_edges ignore directionality\nHi,\r\n\r\nI think the function:\r\n\r\nnx.algorithms.community.quality.inter_community_non_edges()\r\n\r\ndoes not work properly for directed graph. It always return the non-edge of a undirected graph, basically halving the number of edges. This mean that the performance function (nx.algorithms.community.performance) will never by higher than 50% for a directed graph.\r\n\r\nI'm using version '2.0.dev_20170801111157', python 3.5.1\r\n\r\nBest,\r\n\r\nNicolas \n", "before_files": [{"content": "# quality.py - functions for measuring partitions of a graph\n#\n# Copyright 2015-2017 NetworkX developers.\n#\n# This file is part of NetworkX.\n#\n# NetworkX is distributed under a BSD license; see LICENSE.txt for more\n# information.\n\"\"\"Functions for measuring the quality of a partition (into\ncommunities).\n\n\"\"\"\nfrom __future__ import division\n\nfrom functools import wraps\nfrom itertools import product\n\nimport networkx as nx\nfrom networkx import NetworkXError\nfrom networkx.utils import not_implemented_for\nfrom networkx.algorithms.community.community_utils import is_partition\n\n__all__ = ['coverage', 'modularity', 'performance']\n\n\nclass NotAPartition(NetworkXError):\n \"\"\"Raised if a given collection is not a partition.\n\n \"\"\"\n\n def __init__(self, G, collection):\n msg = '{} is not a valid partition of the graph {}'\n msg = msg.format(G, collection)\n super(NotAPartition, self).__init__(msg)\n\n\ndef require_partition(func):\n \"\"\"Decorator that raises an exception if a partition is not a valid\n partition of the nodes of a graph.\n\n Raises :exc:`networkx.NetworkXError` if the partition is not valid.\n\n This decorator should be used on functions whose first two arguments\n are a graph and a partition of the nodes of that graph (in that\n order)::\n\n >>> @require_partition\n ... def foo(G, partition):\n ... 
print('partition is valid!')\n ...\n >>> G = nx.complete_graph(5)\n >>> partition = [{0, 1}, {2, 3}, {4}]\n >>> foo(G, partition)\n partition is valid!\n >>> partition = [{0}, {2, 3}, {4}]\n >>> foo(G, partition) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n NetworkXError: `partition` is not a valid partition of the nodes of G\n >>> partition = [{0, 1}, {1, 2, 3}, {4}]\n >>> foo(G, partition) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n NetworkXError: `partition` is not a valid partition of the nodes of G\n\n \"\"\"\n\n @wraps(func)\n def new_func(*args, **kw):\n # Here we assume that the first two arguments are (G, partition).\n if not is_partition(*args[:2]):\n raise nx.NetworkXError('`partition` is not a valid partition of'\n ' the nodes of G')\n return func(*args, **kw)\n return new_func\n\n\ndef intra_community_edges(G, partition):\n \"\"\"Returns the number of intra-community edges according to the given\n partition of the nodes of `G`.\n\n `G` must be a NetworkX graph.\n\n `partition` must be a partition of the nodes of `G`.\n\n The \"intra-community edges\" are those edges joining a pair of nodes\n in the same block of the partition.\n\n \"\"\"\n return sum(G.subgraph(block).size() for block in partition)\n\n\ndef inter_community_edges(G, partition):\n \"\"\"Returns the number of inter-community edges according to the given\n partition of the nodes of `G`.\n\n `G` must be a NetworkX graph.\n\n `partition` must be a partition of the nodes of `G`.\n\n The *inter-community edges* are those edges joining a pair of nodes\n in different blocks of the partition.\n\n Implementation note: this function creates an intermediate graph\n that may require the same amount of memory as required to store\n `G`.\n\n \"\"\"\n # Alternate implementation that does not require constructing a new\n # graph object (but does require constructing an affiliation\n # dictionary):\n #\n # aff = dict(chain.from_iterable(((v, block) for v in block)\n # for block in partition))\n # return sum(1 for u, v in G.edges() if aff[u] != aff[v])\n #\n return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()\n\n\ndef inter_community_non_edges(G, partition):\n \"\"\"Returns the number of inter-community non-edges according to the\n given partition of the nodes of `G`.\n\n `G` must be a NetworkX graph.\n\n `partition` must be a partition of the nodes of `G`.\n\n A *non-edge* is a pair of nodes (undirected if `G` is undirected)\n that are not adjacent in `G`. 
The *inter-community non-edges* are\n those non-edges on a pair of nodes in different blocks of the\n partition.\n\n Implementation note: this function creates two intermediate graphs,\n which may require up to twice the amount of memory as required to\n store `G`.\n\n \"\"\"\n # Alternate implementation that does not require constructing two\n # new graph objects (but does require constructing an affiliation\n # dictionary):\n #\n # aff = dict(chain.from_iterable(((v, block) for v in block)\n # for block in partition))\n # return sum(1 for u, v in nx.non_edges(G) if aff[u] != aff[v])\n #\n return inter_community_edges(nx.complement(G), partition)\n\n\n@not_implemented_for('multigraph')\n@require_partition\ndef performance(G, partition):\n \"\"\"Returns the performance of a partition.\n\n The *performance* of a partition is the ratio of the number of\n intra-community edges plus inter-community non-edges with the total\n number of potential edges.\n\n Parameters\n ----------\n G : NetworkX graph\n A simple graph (directed or undirected).\n\n partition : sequence\n\n Partition of the nodes of `G`, represented as a sequence of\n sets of nodes. Each block of the partition represents a\n community.\n\n Returns\n -------\n float\n The performance of the partition, as defined above.\n\n Raises\n ------\n NetworkXError\n If `partition` is not a valid partition of the nodes of `G`.\n\n References\n ----------\n .. [1] Santo Fortunato.\n \"Community Detection in Graphs\".\n *Physical Reports*, Volume 486, Issue 3--5 pp. 75--174\n <https://arxiv.org/abs/0906.0612>\n\n \"\"\"\n # Compute the number of intra-community edges and inter-community\n # edges.\n intra_edges = intra_community_edges(G, partition)\n inter_edges = inter_community_non_edges(G, partition)\n # Compute the number of edges in the complete graph (directed or\n # undirected, as it depends on `G`) on `n` nodes.\n #\n # (If `G` is an undirected graph, we divide by two since we have\n # double-counted each potential edge. We use integer division since\n # `total_pairs` is guaranteed to be even.)\n n = len(G)\n total_pairs = n * (n - 1)\n if not G.is_directed():\n total_pairs //= 2\n return (intra_edges + inter_edges) / total_pairs\n\n\n@require_partition\ndef coverage(G, partition):\n \"\"\"Returns the coverage of a partition.\n\n The *coverage* of a partition is the ratio of the number of\n intra-community edges to the total number of edges in the graph.\n\n Parameters\n ----------\n G : NetworkX graph\n\n partition : sequence\n Partition of the nodes of `G`, represented as a sequence of\n sets of nodes. Each block of the partition represents a\n community.\n\n Returns\n -------\n float\n The coverage of the partition, as defined above.\n\n Raises\n ------\n NetworkXError\n If `partition` is not a valid partition of the nodes of `G`.\n\n Notes\n -----\n If `G` is a multigraph, the multiplicity of edges is counted.\n\n References\n ----------\n .. [1] Santo Fortunato.\n \"Community Detection in Graphs\".\n *Physical Reports*, Volume 486, Issue 3--5 pp. 75--174\n <https://arxiv.org/abs/0906.0612>\n\n \"\"\"\n intra_edges = intra_community_edges(G, partition)\n total_edges = G.number_of_edges()\n return intra_edges / total_edges\n\n\ndef modularity(G, communities, weight='weight'):\n r\"\"\"Returns the modularity of the given partition of the graph.\n\n Modularity is defined in [1]_ as\n\n .. 
math::\n\n Q = \\frac{1}{2m} \\sum_{ij} \\left( A_{ij} - \\frac{k_ik_j}{2m}\\right)\n \\delta(c_i,c_j)\n\n where $m$ is the number of edges, $A$ is the adjacency matrix of\n `G`, $k_i$ is the degree of $i$ and $\\delta(c_i, c_j)$\n is 1 if $i$ and $j$ are in the same community and 0 otherwise.\n\n Parameters\n ----------\n G : NetworkX Graph\n\n communities : list\n List of sets of nodes of `G` representing a partition of the\n nodes.\n\n Returns\n -------\n Q : float\n The modularity of the paritition.\n\n Raises\n ------\n NotAPartition\n If `communities` is not a partition of the nodes of `G`.\n\n Examples\n --------\n >>> G = nx.barbell_graph(3, 0)\n >>> nx.algorithms.community.modularity(G, [{0, 1, 2}, {3, 4, 5}])\n 0.35714285714285704\n\n References\n ----------\n .. [1] M. E. J. Newman *Networks: An Introduction*, page 224.\n Oxford University Press, 2011.\n\n \"\"\"\n if not is_partition(G, communities):\n raise NotAPartition(G, communities)\n\n multigraph = G.is_multigraph()\n directed = G.is_directed()\n m = G.size(weight=weight)\n if directed:\n out_degree = dict(G.out_degree(weight=weight))\n in_degree = dict(G.in_degree(weight=weight))\n norm = 1 / m\n else:\n out_degree = dict(G.degree(weight=weight))\n in_degree = out_degree\n norm = 1 / (2 * m)\n\n def val(u, v):\n try:\n if multigraph:\n w = sum(d.get(weight, 1) for k, d in G[u][v].items())\n else:\n w = G[u][v].get(weight, 1)\n except KeyError:\n w = 0\n # Double count self-loops if the graph is undirected.\n if u == v and not directed:\n w *= 2\n return w - in_degree[u] * out_degree[v] * norm\n\n Q = sum(val(u, v) for c in communities for u, v in product(c, repeat=2))\n return Q * norm\n", "path": "networkx/algorithms/community/quality.py"}], "after_files": [{"content": "# quality.py - functions for measuring partitions of a graph\n#\n# Copyright 2015-2017 NetworkX developers.\n#\n# This file is part of NetworkX.\n#\n# NetworkX is distributed under a BSD license; see LICENSE.txt for more\n# information.\n\"\"\"Functions for measuring the quality of a partition (into\ncommunities).\n\n\"\"\"\nfrom __future__ import division\n\nfrom functools import wraps\nfrom itertools import product\n\nimport networkx as nx\nfrom networkx import NetworkXError\nfrom networkx.utils import not_implemented_for\nfrom networkx.algorithms.community.community_utils import is_partition\n\n__all__ = ['coverage', 'modularity', 'performance']\n\n\nclass NotAPartition(NetworkXError):\n \"\"\"Raised if a given collection is not a partition.\n\n \"\"\"\n\n def __init__(self, G, collection):\n msg = '{} is not a valid partition of the graph {}'\n msg = msg.format(G, collection)\n super(NotAPartition, self).__init__(msg)\n\n\ndef require_partition(func):\n \"\"\"Decorator that raises an exception if a partition is not a valid\n partition of the nodes of a graph.\n\n Raises :exc:`networkx.NetworkXError` if the partition is not valid.\n\n This decorator should be used on functions whose first two arguments\n are a graph and a partition of the nodes of that graph (in that\n order)::\n\n >>> @require_partition\n ... def foo(G, partition):\n ... 
print('partition is valid!')\n ...\n >>> G = nx.complete_graph(5)\n >>> partition = [{0, 1}, {2, 3}, {4}]\n >>> foo(G, partition)\n partition is valid!\n >>> partition = [{0}, {2, 3}, {4}]\n >>> foo(G, partition) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n NetworkXError: `partition` is not a valid partition of the nodes of G\n >>> partition = [{0, 1}, {1, 2, 3}, {4}]\n >>> foo(G, partition) # doctest: +IGNORE_EXCEPTION_DETAIL\n Traceback (most recent call last):\n ...\n NetworkXError: `partition` is not a valid partition of the nodes of G\n\n \"\"\"\n\n @wraps(func)\n def new_func(*args, **kw):\n # Here we assume that the first two arguments are (G, partition).\n if not is_partition(*args[:2]):\n raise nx.NetworkXError('`partition` is not a valid partition of'\n ' the nodes of G')\n return func(*args, **kw)\n return new_func\n\n\ndef intra_community_edges(G, partition):\n \"\"\"Returns the number of intra-community edges according to the given\n partition of the nodes of `G`.\n\n `G` must be a NetworkX graph.\n\n `partition` must be a partition of the nodes of `G`.\n\n The \"intra-community edges\" are those edges joining a pair of nodes\n in the same block of the partition.\n\n \"\"\"\n return sum(G.subgraph(block).size() for block in partition)\n\n\ndef inter_community_edges(G, partition):\n \"\"\"Returns the number of inter-community edges according to the given\n partition of the nodes of `G`.\n\n `G` must be a NetworkX graph.\n\n `partition` must be a partition of the nodes of `G`.\n\n The *inter-community edges* are those edges joining a pair of nodes\n in different blocks of the partition.\n\n Implementation note: this function creates an intermediate graph\n that may require the same amount of memory as required to store\n `G`.\n\n \"\"\"\n # Alternate implementation that does not require constructing a new\n # graph object (but does require constructing an affiliation\n # dictionary):\n #\n # aff = dict(chain.from_iterable(((v, block) for v in block)\n # for block in partition))\n # return sum(1 for u, v in G.edges() if aff[u] != aff[v])\n #\n if G.is_directed():\n return nx.quotient_graph(G, partition, create_using=nx.MultiDiGraph()).size()\n else:\n return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()\n\n\ndef inter_community_non_edges(G, partition):\n \"\"\"Returns the number of inter-community non-edges according to the\n given partition of the nodes of `G`.\n\n `G` must be a NetworkX graph.\n\n `partition` must be a partition of the nodes of `G`.\n\n A *non-edge* is a pair of nodes (undirected if `G` is undirected)\n that are not adjacent in `G`. 
The *inter-community non-edges* are\n those non-edges on a pair of nodes in different blocks of the\n partition.\n\n Implementation note: this function creates two intermediate graphs,\n which may require up to twice the amount of memory as required to\n store `G`.\n\n \"\"\"\n # Alternate implementation that does not require constructing two\n # new graph objects (but does require constructing an affiliation\n # dictionary):\n #\n # aff = dict(chain.from_iterable(((v, block) for v in block)\n # for block in partition))\n # return sum(1 for u, v in nx.non_edges(G) if aff[u] != aff[v])\n #\n return inter_community_edges(nx.complement(G), partition)\n\n\n@not_implemented_for('multigraph')\n@require_partition\ndef performance(G, partition):\n \"\"\"Returns the performance of a partition.\n\n The *performance* of a partition is the ratio of the number of\n intra-community edges plus inter-community non-edges with the total\n number of potential edges.\n\n Parameters\n ----------\n G : NetworkX graph\n A simple graph (directed or undirected).\n\n partition : sequence\n\n Partition of the nodes of `G`, represented as a sequence of\n sets of nodes. Each block of the partition represents a\n community.\n\n Returns\n -------\n float\n The performance of the partition, as defined above.\n\n Raises\n ------\n NetworkXError\n If `partition` is not a valid partition of the nodes of `G`.\n\n References\n ----------\n .. [1] Santo Fortunato.\n \"Community Detection in Graphs\".\n *Physical Reports*, Volume 486, Issue 3--5 pp. 75--174\n <https://arxiv.org/abs/0906.0612>\n\n \"\"\"\n # Compute the number of intra-community edges and inter-community\n # edges.\n intra_edges = intra_community_edges(G, partition)\n inter_edges = inter_community_non_edges(G, partition)\n # Compute the number of edges in the complete graph (directed or\n # undirected, as it depends on `G`) on `n` nodes.\n #\n # (If `G` is an undirected graph, we divide by two since we have\n # double-counted each potential edge. We use integer division since\n # `total_pairs` is guaranteed to be even.)\n n = len(G)\n total_pairs = n * (n - 1)\n if not G.is_directed():\n total_pairs //= 2\n return (intra_edges + inter_edges) / total_pairs\n\n\n@require_partition\ndef coverage(G, partition):\n \"\"\"Returns the coverage of a partition.\n\n The *coverage* of a partition is the ratio of the number of\n intra-community edges to the total number of edges in the graph.\n\n Parameters\n ----------\n G : NetworkX graph\n\n partition : sequence\n Partition of the nodes of `G`, represented as a sequence of\n sets of nodes. Each block of the partition represents a\n community.\n\n Returns\n -------\n float\n The coverage of the partition, as defined above.\n\n Raises\n ------\n NetworkXError\n If `partition` is not a valid partition of the nodes of `G`.\n\n Notes\n -----\n If `G` is a multigraph, the multiplicity of edges is counted.\n\n References\n ----------\n .. [1] Santo Fortunato.\n \"Community Detection in Graphs\".\n *Physical Reports*, Volume 486, Issue 3--5 pp. 75--174\n <https://arxiv.org/abs/0906.0612>\n\n \"\"\"\n intra_edges = intra_community_edges(G, partition)\n total_edges = G.number_of_edges()\n return intra_edges / total_edges\n\n\ndef modularity(G, communities, weight='weight'):\n r\"\"\"Returns the modularity of the given partition of the graph.\n\n Modularity is defined in [1]_ as\n\n .. 
math::\n\n Q = \\frac{1}{2m} \\sum_{ij} \\left( A_{ij} - \\frac{k_ik_j}{2m}\\right)\n \\delta(c_i,c_j)\n\n where $m$ is the number of edges, $A$ is the adjacency matrix of\n `G`, $k_i$ is the degree of $i$ and $\\delta(c_i, c_j)$\n is 1 if $i$ and $j$ are in the same community and 0 otherwise.\n\n Parameters\n ----------\n G : NetworkX Graph\n\n communities : list\n List of sets of nodes of `G` representing a partition of the\n nodes.\n\n Returns\n -------\n Q : float\n The modularity of the paritition.\n\n Raises\n ------\n NotAPartition\n If `communities` is not a partition of the nodes of `G`.\n\n Examples\n --------\n >>> G = nx.barbell_graph(3, 0)\n >>> nx.algorithms.community.modularity(G, [{0, 1, 2}, {3, 4, 5}])\n 0.35714285714285704\n\n References\n ----------\n .. [1] M. E. J. Newman *Networks: An Introduction*, page 224.\n Oxford University Press, 2011.\n\n \"\"\"\n if not is_partition(G, communities):\n raise NotAPartition(G, communities)\n\n multigraph = G.is_multigraph()\n directed = G.is_directed()\n m = G.size(weight=weight)\n if directed:\n out_degree = dict(G.out_degree(weight=weight))\n in_degree = dict(G.in_degree(weight=weight))\n norm = 1 / m\n else:\n out_degree = dict(G.degree(weight=weight))\n in_degree = out_degree\n norm = 1 / (2 * m)\n\n def val(u, v):\n try:\n if multigraph:\n w = sum(d.get(weight, 1) for k, d in G[u][v].items())\n else:\n w = G[u][v].get(weight, 1)\n except KeyError:\n w = 0\n # Double count self-loops if the graph is undirected.\n if u == v and not directed:\n w *= 2\n return w - in_degree[u] * out_degree[v] * norm\n\n Q = sum(val(u, v) for c in communities for u, v in product(c, repeat=2))\n return Q * norm\n", "path": "networkx/algorithms/community/quality.py"}]} | 3,727 | 178 |
gh_patches_debug_25617 | rasdani/github-patches | git_diff | saleor__saleor-3169 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User type should be able to return `created` and `lastLogin` fields
There is no way to obtain information about when a user registered and when they last logged in.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/account/resolvers.py`
Content:
```
1 from django.db.models import Q
2 from i18naddress import get_validation_rules
3
4 from ...account import models
5 from ...core.utils import get_client_ip, get_country_by_ip
6 from ..utils import filter_by_query_param
7 from .types import AddressValidationData, ChoiceValue
8
9 USER_SEARCH_FIELDS = (
10 'email', 'default_shipping_address__first_name',
11 'default_shipping_address__last_name', 'default_shipping_address__city',
12 'default_shipping_address__country')
13
14
15 def resolve_customers(info, query):
16 qs = models.User.objects.filter(
17 Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))
18 ).prefetch_related('addresses')
19 return filter_by_query_param(
20 queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
21
22
23 def resolve_staff_users(info, query):
24 qs = models.User.objects.filter(is_staff=True)
25 return filter_by_query_param(
26 queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
27
28
29 def resolve_address_validator(info, input):
30 country_code = input['country_code']
31 if not country_code:
32 client_ip = get_client_ip(info.context)
33 country = get_country_by_ip(client_ip)
34 if country:
35 country_code = country.code
36 else:
37 return None
38 params = {
39 'country_code': country_code,
40 'country_area': input['country_area'],
41 'city_area': input['city_area']}
42 rules = get_validation_rules(params)
43
44 return AddressValidationData(
45 country_code=rules.country_code,
46 country_name=rules.country_name,
47 address_format=rules.address_format,
48 address_latin_format=rules.address_latin_format,
49 allowed_fields=rules.allowed_fields,
50 required_fields=rules.required_fields,
51 upper_fields=rules.upper_fields,
52 country_area_type=rules.country_area_type,
53 country_area_choices=[
54 ChoiceValue(area[0], area[1])
55 for area in rules.country_area_choices],
56 city_type=rules.city_type,
57 city_area_choices=[
58 ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],
59 postal_code_type=rules.postal_code_type,
60 postal_code_matchers=[
61 compiled.pattern for compiled in rules.postal_code_matchers],
62 postal_code_examples=rules.postal_code_examples,
63 postal_code_prefix=rules.postal_code_prefix
64 )
65
```
Path: `saleor/graphql/account/types.py`
Content:
```
1 import graphene
2 from django.contrib.auth import get_user_model
3 from graphene import relay
4
5 from ...account import models
6 from ...core.permissions import get_permissions
7 from ..core.types.common import (
8 CountableDjangoObjectType, CountryDisplay, PermissionDisplay)
9 from ..utils import format_permissions_for_display
10
11
12 class AddressInput(graphene.InputObjectType):
13 first_name = graphene.String(description='Given name.')
14 last_name = graphene.String(description='Family name.')
15 company_name = graphene.String(description='Company or organization.')
16 street_address_1 = graphene.String(description='Address.')
17 street_address_2 = graphene.String(description='Address.')
18 city = graphene.String(description='City.')
19 city_area = graphene.String(description='District.')
20 postal_code = graphene.String(description='Postal code.')
21 country = graphene.String(description='Country.')
22 country_area = graphene.String(description='State or province.')
23 phone = graphene.String(description='Phone number.')
24
25
26 class Address(CountableDjangoObjectType):
27 country = graphene.Field(
28 CountryDisplay, required=True, description='Default shop\'s country')
29
30 class Meta:
31 exclude_fields = ['user_set', 'user_addresses']
32 description = 'Represents user address data.'
33 interfaces = [relay.Node]
34 model = models.Address
35
36 def resolve_country(self, info):
37 return CountryDisplay(
38 code=self.country.code, country=self.country.name)
39
40
41 class User(CountableDjangoObjectType):
42 permissions = graphene.List(PermissionDisplay)
43
44 class Meta:
45 exclude_fields = [
46 'date_joined', 'password', 'is_superuser',
47 'OrderEvent_set', 'last_login']
48 description = 'Represents user data.'
49 interfaces = [relay.Node]
50 model = get_user_model()
51 filter_fields = ['is_staff']
52
53 def resolve_permissions(self, info, **kwargs):
54 if self.is_superuser:
55 permissions = get_permissions()
56 else:
57 permissions = self.user_permissions.prefetch_related(
58 'content_type').order_by('codename')
59 return format_permissions_for_display(permissions)
60
61
62 class AddressValidationInput(graphene.InputObjectType):
63 country_code = graphene.String()
64 country_area = graphene.String()
65 city_area = graphene.String()
66
67
68 class ChoiceValue(graphene.ObjectType):
69 raw = graphene.String()
70 verbose = graphene.String()
71
72
73 class AddressValidationData(graphene.ObjectType):
74 country_code = graphene.String()
75 country_name = graphene.String()
76 address_format = graphene.String()
77 address_latin_format = graphene.String()
78 allowed_fields = graphene.List(graphene.String)
79 required_fields = graphene.List(graphene.String)
80 upper_fields = graphene.List(graphene.String)
81 country_area_type = graphene.String()
82 country_area_choices = graphene.List(ChoiceValue)
83 city_type = graphene.String()
84 city_area_choices = graphene.List(ChoiceValue)
85 postal_code_type = graphene.String()
86 postal_code_matchers = graphene.List(graphene.String)
87 postal_code_examples = graphene.List(graphene.String)
88 postal_code_prefix = graphene.String()
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/account/resolvers.py b/saleor/graphql/account/resolvers.py
--- a/saleor/graphql/account/resolvers.py
+++ b/saleor/graphql/account/resolvers.py
@@ -16,14 +16,16 @@
qs = models.User.objects.filter(
Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))
).prefetch_related('addresses')
- return filter_by_query_param(
+ qs = filter_by_query_param(
queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
+ return qs.distinct()
def resolve_staff_users(info, query):
qs = models.User.objects.filter(is_staff=True)
- return filter_by_query_param(
+ qs = filter_by_query_param(
queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
+ return qs.distinct()
def resolve_address_validator(info, input):
diff --git a/saleor/graphql/account/types.py b/saleor/graphql/account/types.py
--- a/saleor/graphql/account/types.py
+++ b/saleor/graphql/account/types.py
@@ -42,9 +42,7 @@
permissions = graphene.List(PermissionDisplay)
class Meta:
- exclude_fields = [
- 'date_joined', 'password', 'is_superuser',
- 'OrderEvent_set', 'last_login']
+ exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']
description = 'Represents user data.'
interfaces = [relay.Node]
model = get_user_model()
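With `date_joined` and `last_login` no longer excluded, graphene should expose both fields on the `User` type under its default camelCase naming (`dateJoined` covers the registration time the issue calls `created`). A hypothetical query shape follows; the `user(id: ...)` field and the surrounding schema wiring are assumptions, not taken from the files above:

```python
# Hypothetical GraphQL document exercising the newly exposed fields; the
# field names assume graphene's default snake_case -> camelCase conversion.
USER_DATES_QUERY = """
    query UserDates($id: ID!) {
        user(id: $id) {
            email
            dateJoined
            lastLogin
        }
    }
"""
```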
| {"golden_diff": "diff --git a/saleor/graphql/account/resolvers.py b/saleor/graphql/account/resolvers.py\n--- a/saleor/graphql/account/resolvers.py\n+++ b/saleor/graphql/account/resolvers.py\n@@ -16,14 +16,16 @@\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))\n ).prefetch_related('addresses')\n- return filter_by_query_param(\n+ qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n+ return qs.distinct()\n \n \n def resolve_staff_users(info, query):\n qs = models.User.objects.filter(is_staff=True)\n- return filter_by_query_param(\n+ qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n+ return qs.distinct()\n \n \n def resolve_address_validator(info, input):\ndiff --git a/saleor/graphql/account/types.py b/saleor/graphql/account/types.py\n--- a/saleor/graphql/account/types.py\n+++ b/saleor/graphql/account/types.py\n@@ -42,9 +42,7 @@\n permissions = graphene.List(PermissionDisplay)\n \n class Meta:\n- exclude_fields = [\n- 'date_joined', 'password', 'is_superuser',\n- 'OrderEvent_set', 'last_login']\n+ exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']\n description = 'Represents user data.'\n interfaces = [relay.Node]\n model = get_user_model()\n", "issue": "User type should be able to return `created` and `lastLogin` fields\nThere is no way to obtain information when a user was registered and when she/he logged last time.\n", "before_files": [{"content": "from django.db.models import Q\nfrom i18naddress import get_validation_rules\n\nfrom ...account import models\nfrom ...core.utils import get_client_ip, get_country_by_ip\nfrom ..utils import filter_by_query_param\nfrom .types import AddressValidationData, ChoiceValue\n\nUSER_SEARCH_FIELDS = (\n 'email', 'default_shipping_address__first_name',\n 'default_shipping_address__last_name', 'default_shipping_address__city',\n 'default_shipping_address__country')\n\n\ndef resolve_customers(info, query):\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))\n ).prefetch_related('addresses')\n return filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n\n\ndef resolve_staff_users(info, query):\n qs = models.User.objects.filter(is_staff=True)\n return filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n\n\ndef resolve_address_validator(info, input):\n country_code = input['country_code']\n if not country_code:\n client_ip = get_client_ip(info.context)\n country = get_country_by_ip(client_ip)\n if country:\n country_code = country.code\n else:\n return None\n params = {\n 'country_code': country_code,\n 'country_area': input['country_area'],\n 'city_area': input['city_area']}\n rules = get_validation_rules(params)\n\n return AddressValidationData(\n country_code=rules.country_code,\n country_name=rules.country_name,\n address_format=rules.address_format,\n address_latin_format=rules.address_latin_format,\n allowed_fields=rules.allowed_fields,\n required_fields=rules.required_fields,\n upper_fields=rules.upper_fields,\n country_area_type=rules.country_area_type,\n country_area_choices=[\n ChoiceValue(area[0], area[1])\n for area in rules.country_area_choices],\n city_type=rules.city_type,\n city_area_choices=[\n ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],\n postal_code_type=rules.postal_code_type,\n postal_code_matchers=[\n compiled.pattern for compiled in 
rules.postal_code_matchers],\n postal_code_examples=rules.postal_code_examples,\n postal_code_prefix=rules.postal_code_prefix\n )\n", "path": "saleor/graphql/account/resolvers.py"}, {"content": "import graphene\nfrom django.contrib.auth import get_user_model\nfrom graphene import relay\n\nfrom ...account import models\nfrom ...core.permissions import get_permissions\nfrom ..core.types.common import (\n CountableDjangoObjectType, CountryDisplay, PermissionDisplay)\nfrom ..utils import format_permissions_for_display\n\n\nclass AddressInput(graphene.InputObjectType):\n first_name = graphene.String(description='Given name.')\n last_name = graphene.String(description='Family name.')\n company_name = graphene.String(description='Company or organization.')\n street_address_1 = graphene.String(description='Address.')\n street_address_2 = graphene.String(description='Address.')\n city = graphene.String(description='City.')\n city_area = graphene.String(description='District.')\n postal_code = graphene.String(description='Postal code.')\n country = graphene.String(description='Country.')\n country_area = graphene.String(description='State or province.')\n phone = graphene.String(description='Phone number.')\n\n\nclass Address(CountableDjangoObjectType):\n country = graphene.Field(\n CountryDisplay, required=True, description='Default shop\\'s country')\n\n class Meta:\n exclude_fields = ['user_set', 'user_addresses']\n description = 'Represents user address data.'\n interfaces = [relay.Node]\n model = models.Address\n\n def resolve_country(self, info):\n return CountryDisplay(\n code=self.country.code, country=self.country.name)\n\n\nclass User(CountableDjangoObjectType):\n permissions = graphene.List(PermissionDisplay)\n\n class Meta:\n exclude_fields = [\n 'date_joined', 'password', 'is_superuser',\n 'OrderEvent_set', 'last_login']\n description = 'Represents user data.'\n interfaces = [relay.Node]\n model = get_user_model()\n filter_fields = ['is_staff']\n\n def resolve_permissions(self, info, **kwargs):\n if self.is_superuser:\n permissions = get_permissions()\n else:\n permissions = self.user_permissions.prefetch_related(\n 'content_type').order_by('codename')\n return format_permissions_for_display(permissions)\n\n\nclass AddressValidationInput(graphene.InputObjectType):\n country_code = graphene.String()\n country_area = graphene.String()\n city_area = graphene.String()\n\n\nclass ChoiceValue(graphene.ObjectType):\n raw = graphene.String()\n verbose = graphene.String()\n\n\nclass AddressValidationData(graphene.ObjectType):\n country_code = graphene.String()\n country_name = graphene.String()\n address_format = graphene.String()\n address_latin_format = graphene.String()\n allowed_fields = graphene.List(graphene.String)\n required_fields = graphene.List(graphene.String)\n upper_fields = graphene.List(graphene.String)\n country_area_type = graphene.String()\n country_area_choices = graphene.List(ChoiceValue)\n city_type = graphene.String()\n city_area_choices = graphene.List(ChoiceValue)\n postal_code_type = graphene.String()\n postal_code_matchers = graphene.List(graphene.String)\n postal_code_examples = graphene.List(graphene.String)\n postal_code_prefix = graphene.String()\n", "path": "saleor/graphql/account/types.py"}], "after_files": [{"content": "from django.db.models import Q\nfrom i18naddress import get_validation_rules\n\nfrom ...account import models\nfrom ...core.utils import get_client_ip, get_country_by_ip\nfrom ..utils import filter_by_query_param\nfrom .types import 
AddressValidationData, ChoiceValue\n\nUSER_SEARCH_FIELDS = (\n 'email', 'default_shipping_address__first_name',\n 'default_shipping_address__last_name', 'default_shipping_address__city',\n 'default_shipping_address__country')\n\n\ndef resolve_customers(info, query):\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False))\n ).prefetch_related('addresses')\n qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n return qs.distinct()\n\n\ndef resolve_staff_users(info, query):\n qs = models.User.objects.filter(is_staff=True)\n qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n return qs.distinct()\n\n\ndef resolve_address_validator(info, input):\n country_code = input['country_code']\n if not country_code:\n client_ip = get_client_ip(info.context)\n country = get_country_by_ip(client_ip)\n if country:\n country_code = country.code\n else:\n return None\n params = {\n 'country_code': country_code,\n 'country_area': input['country_area'],\n 'city_area': input['city_area']}\n rules = get_validation_rules(params)\n\n return AddressValidationData(\n country_code=rules.country_code,\n country_name=rules.country_name,\n address_format=rules.address_format,\n address_latin_format=rules.address_latin_format,\n allowed_fields=rules.allowed_fields,\n required_fields=rules.required_fields,\n upper_fields=rules.upper_fields,\n country_area_type=rules.country_area_type,\n country_area_choices=[\n ChoiceValue(area[0], area[1])\n for area in rules.country_area_choices],\n city_type=rules.city_type,\n city_area_choices=[\n ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],\n postal_code_type=rules.postal_code_type,\n postal_code_matchers=[\n compiled.pattern for compiled in rules.postal_code_matchers],\n postal_code_examples=rules.postal_code_examples,\n postal_code_prefix=rules.postal_code_prefix\n )\n", "path": "saleor/graphql/account/resolvers.py"}, {"content": "import graphene\nfrom django.contrib.auth import get_user_model\nfrom graphene import relay\n\nfrom ...account import models\nfrom ...core.permissions import get_permissions\nfrom ..core.types.common import (\n CountableDjangoObjectType, CountryDisplay, PermissionDisplay)\nfrom ..utils import format_permissions_for_display\n\n\nclass AddressInput(graphene.InputObjectType):\n first_name = graphene.String(description='Given name.')\n last_name = graphene.String(description='Family name.')\n company_name = graphene.String(description='Company or organization.')\n street_address_1 = graphene.String(description='Address.')\n street_address_2 = graphene.String(description='Address.')\n city = graphene.String(description='City.')\n city_area = graphene.String(description='District.')\n postal_code = graphene.String(description='Postal code.')\n country = graphene.String(description='Country.')\n country_area = graphene.String(description='State or province.')\n phone = graphene.String(description='Phone number.')\n\n\nclass Address(CountableDjangoObjectType):\n country = graphene.Field(\n CountryDisplay, required=True, description='Default shop\\'s country')\n\n class Meta:\n exclude_fields = ['user_set', 'user_addresses']\n description = 'Represents user address data.'\n interfaces = [relay.Node]\n model = models.Address\n\n def resolve_country(self, info):\n return CountryDisplay(\n code=self.country.code, country=self.country.name)\n\n\nclass User(CountableDjangoObjectType):\n permissions = 
graphene.List(PermissionDisplay)\n\n class Meta:\n exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']\n description = 'Represents user data.'\n interfaces = [relay.Node]\n model = get_user_model()\n filter_fields = ['is_staff']\n\n def resolve_permissions(self, info, **kwargs):\n if self.is_superuser:\n permissions = get_permissions()\n else:\n permissions = self.user_permissions.prefetch_related(\n 'content_type').order_by('codename')\n return format_permissions_for_display(permissions)\n\n\nclass AddressValidationInput(graphene.InputObjectType):\n country_code = graphene.String()\n country_area = graphene.String()\n city_area = graphene.String()\n\n\nclass ChoiceValue(graphene.ObjectType):\n raw = graphene.String()\n verbose = graphene.String()\n\n\nclass AddressValidationData(graphene.ObjectType):\n country_code = graphene.String()\n country_name = graphene.String()\n address_format = graphene.String()\n address_latin_format = graphene.String()\n allowed_fields = graphene.List(graphene.String)\n required_fields = graphene.List(graphene.String)\n upper_fields = graphene.List(graphene.String)\n country_area_type = graphene.String()\n country_area_choices = graphene.List(ChoiceValue)\n city_type = graphene.String()\n city_area_choices = graphene.List(ChoiceValue)\n postal_code_type = graphene.String()\n postal_code_matchers = graphene.List(graphene.String)\n postal_code_examples = graphene.List(graphene.String)\n postal_code_prefix = graphene.String()\n", "path": "saleor/graphql/account/types.py"}]} | 1,739 | 340 |
gh_patches_debug_25 | rasdani/github-patches | git_diff | Zeroto521__my-data-toolkit-543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PERF: `to_set` speeds up, especially for large data
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [x] whatsnew entry
| data | `set(s)` | `set(s.unique())` |
| -------------------- | ---------------- | ----------------- |
| small, `list(range(10))` | 1.83 µs ± 31.6 ns | 1.17 ms ± 144 µs |
| large, `list(range(10)) * 1000` | 9.67 µs ± 564 ns | 255 µs ± 14.9 µs |
--- END ISSUE ---
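For reference, a minimal timing sketch of the comparison behind the table above. The element counts, the use of `pd.Index`, and the `timeit` harness are illustrative assumptions rather than the author's original benchmark script.

```python
# Illustrative micro-benchmark (assumed setup): building a set straight from a
# pandas Index versus from its unique values first, on data full of duplicates.
import timeit

import pandas as pd

small = pd.Index(list(range(10)))
large = pd.Index(list(range(10)) * 1000)  # 10,000 elements, only 10 distinct values

for name, idx in [("small", small), ("large", large)]:
    direct = timeit.timeit(lambda: set(idx), number=1_000)
    via_unique = timeit.timeit(lambda: set(idx.unique()), number=1_000)
    print(f"{name}: set(idx)={direct:.6f}s  set(idx.unique())={via_unique:.6f}s")
```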
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dtoolkit/accessor/index/to_set.py`
Content:
```
1 import pandas as pd
2
3 from dtoolkit.accessor.register import register_index_method
4
5
6 @register_index_method
7 def to_set(index: pd.Index) -> set:
8 """
9 Return a :keyword:`set` of the values.
10
11 A sugary syntax wraps :keyword:`set`::
12
13 set(index)
14
15 Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.
16
17 Returns
18 -------
19 set
20
21 See Also
22 --------
23 pandas.Index.unique
24
25 Examples
26 --------
27 >>> import dtoolkit.accessor
28 >>> import pandas as pd
29 >>> i = pd.Index([1, 2, 2])
30 >>> i
31 Int64Index([1, 2, 2], dtype='int64')
32 >>> i.to_set()
33 {1, 2}
34 """
35
36 return set(index)
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dtoolkit/accessor/index/to_set.py b/dtoolkit/accessor/index/to_set.py
--- a/dtoolkit/accessor/index/to_set.py
+++ b/dtoolkit/accessor/index/to_set.py
@@ -33,4 +33,4 @@
{1, 2}
"""
- return set(index)
+ return set(index.unique())
| {"golden_diff": "diff --git a/dtoolkit/accessor/index/to_set.py b/dtoolkit/accessor/index/to_set.py\n--- a/dtoolkit/accessor/index/to_set.py\n+++ b/dtoolkit/accessor/index/to_set.py\n@@ -33,4 +33,4 @@\n {1, 2}\n \"\"\"\n \n- return set(index)\n+ return set(index.unique())\n", "issue": "PERF: `to_set` speeds up especial for large data\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [x] whatsnew entry\r\n\r\n\r\n| data | `set(s)` | `set(s.unique())` |\r\n| -------------------- | ---------------- | ----------------- |\r\n| small, `list(range(10)` | 1.83 \u00b5s \u00b1 31.6 ns | 1.17 ms \u00b1 144 \u00b5s |\r\n| large, `list(range(10)*1000` | 9.67 \u00b5s \u00b1 564 ns | 255 \u00b5s \u00b1 14.9 \u00b5s |\r\n\n", "before_files": [{"content": "import pandas as pd\n\nfrom dtoolkit.accessor.register import register_index_method\n\n\n@register_index_method\ndef to_set(index: pd.Index) -> set:\n \"\"\"\n Return a :keyword:`set` of the values.\n\n A sugary syntax wraps :keyword:`set`::\n\n set(index)\n\n Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.\n\n Returns\n -------\n set\n\n See Also\n --------\n pandas.Index.unique\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> i = pd.Index([1, 2, 2])\n >>> i\n Int64Index([1, 2, 2], dtype='int64')\n >>> i.to_set()\n {1, 2}\n \"\"\"\n\n return set(index)\n", "path": "dtoolkit/accessor/index/to_set.py"}], "after_files": [{"content": "import pandas as pd\n\nfrom dtoolkit.accessor.register import register_index_method\n\n\n@register_index_method\ndef to_set(index: pd.Index) -> set:\n \"\"\"\n Return a :keyword:`set` of the values.\n\n A sugary syntax wraps :keyword:`set`::\n\n set(index)\n\n Different to :meth:`~pandas.Index.unique`, it returns :class:`~pandas.Index`.\n\n Returns\n -------\n set\n\n See Also\n --------\n pandas.Index.unique\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> i = pd.Index([1, 2, 2])\n >>> i\n Int64Index([1, 2, 2], dtype='int64')\n >>> i.to_set()\n {1, 2}\n \"\"\"\n\n return set(index.unique())\n", "path": "dtoolkit/accessor/index/to_set.py"}]} | 826 | 83 |
gh_patches_debug_5712 | rasdani/github-patches | git_diff | privacyidea__privacyidea-1946 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
More detailed description of splitAtSign
We should have a more precise description of how ``splitAtSign`` interacts with a given ``realm`` parameter.
We should describe that it works with the ``/validate/`` and ``/auth`` endpoints.
We can link to a more detailed description here:
https://github.com/privacyidea/privacyidea/blob/master/doc/configuration/system_config.rst#split--sign
Which links to this location
https://github.com/privacyidea/privacyidea/blob/master/doc/configuration/realms.rst
that has the detailed description.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # privacyIDEA documentation build configuration file, created by
4 # sphinx-quickstart on Fri Jun 13 07:31:01 2014.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14
15 # The version info for the project you're documenting, acts as replacement for
16 # |version| and |release|, also used in various other places throughout the
17 # built documents.
18 #
19 # The short X.Y version.
20 version = '3.1'
21 # The full version, including alpha/beta/rc tags.
22 #release = '2.16dev5'
23 release = version
24
25
26 import sys
27 import os
28
29 # Monkey-patch functools.wraps
30 # http://stackoverflow.com/questions/28366818/preserve-default-arguments-of-wrapped-decorated-python-function-in-sphinx-docume
31 import functools
32
33 def no_op_wraps(func, assigned=None, updated=None):
34 """Replaces functools.wraps in order to undo wrapping.
35
36 Can be used to preserve the decorated function's signature
37 in the documentation generated by Sphinx.
38
39 """
40 def wrapper(decorator):
41 return func
42 return wrapper
43
44 functools.wraps = no_op_wraps
45
46 # If extensions (or modules to document with autodoc) are in another directory,
47 # add these directories to sys.path here. If the directory is relative to the
48 # documentation root, use os.path.abspath to make it absolute, like shown here.
49 sys.path.insert(0, os.path.abspath('..'))
50 sys.path.append(os.path.abspath('_themes/flask-sphinx-themes'))
51 sys.path.insert(0, os.path.abspath('../privacyidea'))
52
53 # -- General configuration -----------------------------------------------------
54
55 # If your documentation needs a minimal Sphinx version, state it here.
56 #needs_sphinx = '1.0'
57
58 # Add any Sphinx extension module names here, as strings. They can be extensions
59 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
60 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.imgmath', 'sphinx.ext.viewcode',
61 'sphinxcontrib.autohttp.flask']
62 http_index_ignore_prefixes = ['/token']
63
64 # Add any paths that contain templates here, relative to this directory.
65 templates_path = ['_templates']
66
67 # The suffix of source filenames.
68 source_suffix = '.rst'
69
70 # The encoding of source files.
71 #source_encoding = 'utf-8-sig'
72
73 # The master toctree document.
74 master_doc = 'index'
75
76 # General information about the project.
77 project = u'privacyIDEA'
78 copyright = u'2014-2019, Cornelius Kölbel'
79
80 # The language for content autogenerated by Sphinx. Refer to documentation
81 # for a list of supported languages.
82 #language = None
83
84 # There are two options for replacing |today|: either, you set today to some
85 # non-false value, then it is used:
86 #today = ''
87 # Else, today_fmt is used as the format for a strftime call.
88 #today_fmt = '%B %d, %Y'
89
90 # List of patterns, relative to source directory, that match files and
91 # directories to ignore when looking for source files.
92 exclude_patterns = ['_build']
93
94 # The reST default role (used for this markup: `text`) to use for all documents.
95 #default_role = None
96
97 # If true, '()' will be appended to :func: etc. cross-reference text.
98 add_function_parentheses = True
99
100 # If true, the current module name will be prepended to all description
101 # unit titles (such as .. function::).
102 #add_module_names = True
103
104 # If true, sectionauthor and moduleauthor directives will be shown in the
105 # output. They are ignored by default.
106 #show_authors = False
107
108 # The name of the Pygments (syntax highlighting) style to use.
109 pygments_style = 'sphinx'
110
111 # A list of ignored prefixes for module index sorting.
112 #modindex_common_prefix = []
113
114
115 # -- Options for HTML output ---------------------------------------------------
116
117 # The theme to use for HTML and HTML Help pages. See the documentation for
118 # a list of builtin themes.
119 #html_theme = 'sphinxdoc'
120 #html_theme = 'sphinx_rtd_theme'
121 #html_theme = 'agogo'
122 html_theme = 'flask'
123
124 # Theme options are theme-specific and customize the look and feel of a theme
125 # further. For a list of options available for each theme, see the
126 # documentation.
127 #html_theme_options = {}
128
129 # Add any paths that contain custom themes here, relative to this directory.
130 html_theme_path = ['_themes/flask-sphinx-themes']
131
132
133 # The name for this set of Sphinx documents. If None, it defaults to
134 # "<project> v<release> documentation".
135 #html_title = None
136
137 # A shorter title for the navigation bar. Default is the same as html_title.
138 #html_short_title = None
139
140 # The name of an image file (relative to this directory) to place at the top
141 # of the sidebar.
142 html_logo = "images/privacyidea-color.png"
143
144 # The name of an image file (within the static path) to use as favicon of the
145 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
146 # pixels large.
147 #html_favicon = None
148
149 # Add any paths that contain custom static files (such as style sheets) here,
150 # relative to this directory. They are copied after the builtin static files,
151 # so a file named "default.css" will overwrite the builtin "default.css".
152 html_static_path = ['_static']
153
154 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
155 # using the given strftime format.
156 #html_last_updated_fmt = '%b %d, %Y'
157
158 # If true, SmartyPants will be used to convert quotes and dashes to
159 # typographically correct entities.
160 #html_use_smartypants = True
161
162 # Custom sidebar templates, maps document names to template names.
163 #html_sidebars = {}
164
165 # Additional templates that should be rendered to pages, maps page names to
166 # template names.
167 #html_additional_pages = {}
168
169 # If false, no module index is generated.
170 #html_domain_indices = True
171
172 # If false, no index is generated.
173 #html_use_index = True
174
175 # If true, the index is split into individual pages for each letter.
176 #html_split_index = False
177
178 # If true, links to the reST sources are added to the pages.
179 #html_show_sourcelink = True
180
181 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
182 #html_show_sphinx = True
183
184 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
185 #html_show_copyright = True
186
187 # If true, an OpenSearch description file will be output, and all pages will
188 # contain a <link> tag referring to it. The value of this option must be the
189 # base URL from which the finished HTML is served.
190 #html_use_opensearch = ''
191
192 # This is the file name suffix for HTML files (e.g. ".xhtml").
193 #html_file_suffix = None
194
195 # Output file base name for HTML help builder.
196 htmlhelp_basename = 'privacyIDEAdoc'
197
198
199 # -- Options for LaTeX output --------------------------------------------------
200
201 latex_elements = {
202 # The paper size ('letterpaper' or 'a4paper').
203 #'papersize': 'letterpaper',
204
205 # The font size ('10pt', '11pt' or '12pt').
206 #'pointsize': '10pt',
207
208 # Additional stuff for the LaTeX preamble.
209 #'preamble': '',
210 }
211
212 # Grouping the document tree into LaTeX files. List of tuples
213 # (source start file, target name, title, author, documentclass [howto/manual]).
214 latex_documents = [
215 ('index', 'privacyIDEA.tex', u'privacyIDEA Authentication System',
216 u'Cornelius Kölbel', 'manual'),
217 ]
218
219 # The name of an image file (relative to this directory) to place at the top of
220 # the title page.
221 #latex_logo = None
222
223 # For "manual" documents, if this is true, then toplevel headings are parts,
224 # not chapters.
225 #latex_use_parts = False
226
227 # If true, show page references after internal links.
228 #latex_show_pagerefs = False
229
230 # If true, show URL addresses after external links.
231 #latex_show_urls = False
232
233 # Documents to append as an appendix to all manuals.
234 #latex_appendices = []
235
236 # If false, no module index is generated.
237 #latex_domain_indices = True
238
239
240 # -- Options for manual page output --------------------------------------------
241
242 # One entry per manual page. List of tuples
243 # (source start file, name, description, authors, manual section).
244 man_pages = [
245 ('index', 'privacyidea-server', u'privacyIDEA Authentication System',
246 [u'Cornelius Kölbel'], 1)
247 ]
248
249 # If true, show URL addresses after external links.
250 #man_show_urls = False
251
252
253 # -- Options for Texinfo output ------------------------------------------------
254
255 # Grouping the document tree into Texinfo files. List of tuples
256 # (source start file, target name, title, author,
257 # dir menu entry, description, category)
258 texinfo_documents = [
259 ('index', 'privacyIDEA', u'privacyIDEA AUthentication System',
260 u'Cornelius Kölbel', 'privacyIDEA', 'One line description of project.',
261 'Miscellaneous'),
262 ]
263
264 # Documents to append as an appendix to all manuals.
265 #texinfo_appendices = []
266
267 # If false, no module index is generated.
268 #texinfo_domain_indices = True
269
270 # How to display URL addresses: 'footnote', 'no', or 'inline'.
271 #texinfo_show_urls = 'footnote'
272
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -151,6 +151,10 @@
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
+html_css_files = [
+ 'css/custom.css',
+]
+
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
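A short note on the configuration added above: in Sphinx, entries in `html_css_files` are resolved relative to the static directories listed in `html_static_path`, so this patch presumes a stylesheet exists under the docs' `_static/css/` folder (that file itself is not part of this record). A minimal sketch of the resulting settings:

```python
# conf.py sketch: 'css/custom.css' is looked up inside html_static_path entries,
# so a file at doc/_static/css/custom.css is assumed to exist.
html_static_path = ['_static']
html_css_files = [
    'css/custom.css',
]
```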
| {"golden_diff": "diff --git a/doc/conf.py b/doc/conf.py\n--- a/doc/conf.py\n+++ b/doc/conf.py\n@@ -151,6 +151,10 @@\n # so a file named \"default.css\" will overwrite the builtin \"default.css\".\n html_static_path = ['_static']\n \n+html_css_files = [\n+ 'css/custom.css',\n+]\n+\n # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n # using the given strftime format.\n #html_last_updated_fmt = '%b %d, %Y'\n", "issue": "More detailed description of splitAtSign\nWe should have a more precise description how ``splitAtSign`` interacts with a given ``realm`` parameter.\r\nWe should describe, that it works with ``/validate/`` and ``/auth`` endpoint.\r\n\r\nWe can have a link for a more detailed description here:\r\nhttps://github.com/privacyidea/privacyidea/blob/master/doc/configuration/system_config.rst#split--sign\r\n\r\nWhich links to this location\r\nhttps://github.com/privacyidea/privacyidea/blob/master/doc/configuration/realms.rst\r\nthat has the detailed description.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA documentation build configuration file, created by\n# sphinx-quickstart on Fri Jun 13 07:31:01 2014.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '3.1'\n# The full version, including alpha/beta/rc tags.\n#release = '2.16dev5'\nrelease = version\n\n\nimport sys\nimport os\n\n# Monkey-patch functools.wraps\n# http://stackoverflow.com/questions/28366818/preserve-default-arguments-of-wrapped-decorated-python-function-in-sphinx-docume\nimport functools\n\ndef no_op_wraps(func, assigned=None, updated=None):\n \"\"\"Replaces functools.wraps in order to undo wrapping.\n\n Can be used to preserve the decorated function's signature\n in the documentation generated by Sphinx.\n\n \"\"\"\n def wrapper(decorator):\n return func\n return wrapper\n\nfunctools.wraps = no_op_wraps\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.append(os.path.abspath('_themes/flask-sphinx-themes'))\nsys.path.insert(0, os.path.abspath('../privacyidea'))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.imgmath', 'sphinx.ext.viewcode', \n 'sphinxcontrib.autohttp.flask']\nhttp_index_ignore_prefixes = ['/token']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'privacyIDEA'\ncopyright = u'2014-2019, Cornelius K\u00f6lbel'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\nadd_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#html_theme = 'sphinxdoc'\n#html_theme = 'sphinx_rtd_theme'\n#html_theme = 'agogo'\nhtml_theme = 'flask'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = ['_themes/flask-sphinx-themes']\n\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = \"images/privacyidea-color.png\"\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'privacyIDEAdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'privacyIDEA.tex', u'privacyIDEA Authentication System',\n u'Cornelius K\u00f6lbel', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'privacyidea-server', u'privacyIDEA Authentication System',\n [u'Cornelius K\u00f6lbel'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'privacyIDEA', u'privacyIDEA AUthentication System',\n u'Cornelius K\u00f6lbel', 'privacyIDEA', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n", "path": "doc/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA documentation build configuration file, created by\n# sphinx-quickstart on Fri Jun 13 07:31:01 2014.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '3.1'\n# The full version, including alpha/beta/rc tags.\n#release = '2.16dev5'\nrelease = version\n\n\nimport sys\nimport os\n\n# Monkey-patch functools.wraps\n# http://stackoverflow.com/questions/28366818/preserve-default-arguments-of-wrapped-decorated-python-function-in-sphinx-docume\nimport functools\n\ndef no_op_wraps(func, assigned=None, updated=None):\n \"\"\"Replaces functools.wraps in order to undo wrapping.\n\n Can be used to preserve the decorated function's signature\n in the documentation generated by Sphinx.\n\n \"\"\"\n def wrapper(decorator):\n return func\n return wrapper\n\nfunctools.wraps = no_op_wraps\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.append(os.path.abspath('_themes/flask-sphinx-themes'))\nsys.path.insert(0, os.path.abspath('../privacyidea'))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.imgmath', 'sphinx.ext.viewcode', \n 'sphinxcontrib.autohttp.flask']\nhttp_index_ignore_prefixes = ['/token']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'privacyIDEA'\ncopyright = u'2014-2019, Cornelius K\u00f6lbel'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\nadd_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#html_theme = 'sphinxdoc'\n#html_theme = 'sphinx_rtd_theme'\n#html_theme = 'agogo'\nhtml_theme = 'flask'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = ['_themes/flask-sphinx-themes']\n\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = \"images/privacyidea-color.png\"\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nhtml_css_files = [\n 'css/custom.css',\n]\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'privacyIDEAdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'privacyIDEA.tex', u'privacyIDEA Authentication System',\n u'Cornelius K\u00f6lbel', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'privacyidea-server', u'privacyIDEA Authentication System',\n [u'Cornelius K\u00f6lbel'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'privacyIDEA', u'privacyIDEA AUthentication System',\n u'Cornelius K\u00f6lbel', 'privacyIDEA', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n", "path": "doc/conf.py"}]} | 3,284 | 120 |
gh_patches_debug_13395 | rasdani/github-patches | git_diff | facebookresearch__xformers-57 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] Failing triton dropout test
# 🐛 Bug
See https://app.circleci.com/pipelines/github/facebookresearch/xformers/212/workflows/8988c71c-84f5-4bd0-bd59-ac7d293c2370/jobs/398
Not sure why this happens just now, looking into that
## Command
can repro locally with ` pytest tests -k dropout -x -v `
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xformers/triton/k_dropout.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 # CREDITS: This comes almost as-is from the Triton dropout tutorial
8 # https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py
9
10 import triton
11 import triton.language as tl
12
13
14 # fmt: off
15 @triton.autotune(
16 configs=[
17 triton.Config({"BLOCK_SIZE" : 256}, num_warps=1),
18 triton.Config({"BLOCK_SIZE" : 512}, num_warps=2),
19 triton.Config({"BLOCK_SIZE" : 1024}, num_warps=4),
20 triton.Config({"BLOCK_SIZE" : 2048}, num_warps=8),
21 triton.Config({"BLOCK_SIZE" : 4096}, num_warps=8),
22 ],
23 key=["N"],
24 )
25 @triton.jit
26 def k_dropout(
27 Y, X, S,
28 stride,
29 N,
30 p,
31 **meta,
32 ):
33 """
34 Apply dropout on an input tensor
35 Y : Output (M, N)
36 X : Input (M, N)
37 S : Seeds (M,)
38 p : dropout probability
39 """
40 # fmt: on
41
42 # compute memory offsets of elements handled by this instance
43 BLOCK_SIZE = meta["BLOCK_SIZE"]
44 row = tl.program_id(axis=0)
45 col = tl.program_id(axis=1)
46 offsets = row * stride + col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
47 mask = col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE) < N
48
49 # load data from x
50 x_ptrs = X + offsets
51 x = tl.load(x_ptrs, mask=mask)
52
53 # randomly prune it
54 seed = S + row
55 random = tl.rand(seed.to(tl.int32), offsets)
56 x_keep = random > p
57
58 # write-back
59 zero = 0.
60 zero = zero.to(x.dtype)
61 output = tl.where(x_keep, (x / (1 - p)).to(x.dtype), zero)
62 y_ptrs = Y + offsets
63 tl.store(y_ptrs, output, mask=mask)
64
```
Path: `xformers/triton/dropout.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 # CREDITS: This comes almost as-is from the Triton dropout tutorial
8 # https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py
9
10 import torch
11 import triton
12 from torch.cuda.amp import custom_bwd, custom_fwd
13
14 from xformers.triton.k_dropout import k_dropout
15
16
17 # Helper to handle the SPMD launch grid and error cases
18 class _dropout(torch.autograd.Function):
19 @staticmethod
20 @custom_fwd(cast_inputs=torch.float16)
21 def forward(ctx, x, p):
22 # Soft-flatten an hypothetical 3rd dimension
23 x_ = x.reshape(-1, x.shape[-1])
24 y = torch.empty_like(x_)
25 _, N = x_.shape
26
27 assert y.stride(-1) == 1 and x_.stride(-1) == 1
28
29 # Generate one seed per sample
30 # seed max is int32 max for positive numbers: 2**16
31 seeds = torch.randint(65536, (x_.shape[0],), device=x.device).to(torch.int32)
32
33 # SPMD launch grid
34 def grid(meta):
35 return (
36 x_.shape[0],
37 triton.cdiv(x_.shape[1], meta["BLOCK_SIZE"]),
38 )
39
40 k_dropout[grid](y, x_, seeds, y.stride(0), N, p)
41
42 ctx.save_for_backward(seeds)
43 ctx.p = p
44
45 return y.reshape_as(x)
46
47 @staticmethod
48 @custom_bwd
49 def backward(ctx, grad_out):
50 (seeds,) = ctx.saved_tensors
51
52 # Soft-flatten an hypothetical 3rd dimension
53 grad_out_ = grad_out.reshape(-1, grad_out.shape[-1])
54 grad_in = torch.empty_like(grad_out_)
55 _, N = grad_out_.shape
56
57 assert grad_in.stride(-1) == 1 and grad_out_.stride(-1) == 1
58
59 # SPMD launch grid
60 def grid(meta):
61 return (
62 grad_out_.shape[0],
63 triton.cdiv(grad_out_.shape[1], meta["BLOCK_SIZE"]),
64 )
65
66 k_dropout[grid](grad_in, grad_out_, seeds, grad_out_.stride(0), N, ctx.p)
67
68 return grad_in.reshape_as(grad_out), None
69
70
71 def dropout(x: torch.Tensor, p: float):
72 return _dropout.apply(x, p)
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xformers/triton/dropout.py b/xformers/triton/dropout.py
--- a/xformers/triton/dropout.py
+++ b/xformers/triton/dropout.py
@@ -69,4 +69,7 @@
def dropout(x: torch.Tensor, p: float):
- return _dropout.apply(x, p)
+ if p > 0.0:
+ return _dropout.apply(x, p)
+
+ return x
diff --git a/xformers/triton/k_dropout.py b/xformers/triton/k_dropout.py
--- a/xformers/triton/k_dropout.py
+++ b/xformers/triton/k_dropout.py
@@ -24,7 +24,7 @@
)
@triton.jit
def k_dropout(
- Y, X, S,
+ Y, X, SEEDS,
stride,
N,
p,
@@ -51,7 +51,7 @@
x = tl.load(x_ptrs, mask=mask)
# randomly prune it
- seed = S + row
+ seed = SEEDS + row
random = tl.rand(seed.to(tl.int32), offsets)
x_keep = random > p
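For reference, a small usage sketch of the patched wrapper (assuming a CUDA device with triton and xformers installed; the tensor shape is arbitrary). The new `p > 0.0` guard means a zero dropout probability now returns the input unchanged instead of launching the kernel.

```python
# Minimal usage sketch of the patched dropout wrapper (assumed environment: CUDA + triton).
import torch

from xformers.triton.dropout import dropout

x = torch.randn(8, 128, device="cuda", dtype=torch.float16)

y_train = dropout(x, p=0.1)  # kernel path: randomly zeroes elements and rescales the rest
y_noop = dropout(x, p=0.0)   # guard added by the patch: no kernel launch

assert y_noop is x           # the exact same tensor object is returned when p == 0.0
```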
| {"golden_diff": "diff --git a/xformers/triton/dropout.py b/xformers/triton/dropout.py\n--- a/xformers/triton/dropout.py\n+++ b/xformers/triton/dropout.py\n@@ -69,4 +69,7 @@\n \n \n def dropout(x: torch.Tensor, p: float):\n- return _dropout.apply(x, p)\n+ if p > 0.0:\n+ return _dropout.apply(x, p)\n+\n+ return x\ndiff --git a/xformers/triton/k_dropout.py b/xformers/triton/k_dropout.py\n--- a/xformers/triton/k_dropout.py\n+++ b/xformers/triton/k_dropout.py\n@@ -24,7 +24,7 @@\n )\n @triton.jit\n def k_dropout(\n- Y, X, S,\n+ Y, X, SEEDS,\n stride,\n N,\n p,\n@@ -51,7 +51,7 @@\n x = tl.load(x_ptrs, mask=mask)\n \n # randomly prune it\n- seed = S + row\n+ seed = SEEDS + row\n random = tl.rand(seed.to(tl.int32), offsets)\n x_keep = random > p\n", "issue": "[bug] Failing triton dropout test \n# \ud83d\udc1b Bug\r\n\r\nSee https://app.circleci.com/pipelines/github/facebookresearch/xformers/212/workflows/8988c71c-84f5-4bd0-bd59-ac7d293c2370/jobs/398\r\n\r\nNot sure why this happens just now, looking into that\r\n\r\n## Command\r\ncan repro locally with ` pytest tests -k dropout -x -v `\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport triton\nimport triton.language as tl\n\n\n# fmt: off\[email protected](\n configs=[\n triton.Config({\"BLOCK_SIZE\" : 256}, num_warps=1),\n triton.Config({\"BLOCK_SIZE\" : 512}, num_warps=2),\n triton.Config({\"BLOCK_SIZE\" : 1024}, num_warps=4),\n triton.Config({\"BLOCK_SIZE\" : 2048}, num_warps=8),\n triton.Config({\"BLOCK_SIZE\" : 4096}, num_warps=8),\n ],\n key=[\"N\"],\n)\[email protected]\ndef k_dropout(\n Y, X, S,\n stride,\n N,\n p,\n **meta,\n):\n \"\"\"\n Apply dropout on an input tensor\n Y : Output (M, N)\n X : Input (M, N)\n S : Seeds (M,)\n p : dropout probability\n \"\"\"\n # fmt: on\n\n # compute memory offsets of elements handled by this instance\n BLOCK_SIZE = meta[\"BLOCK_SIZE\"]\n row = tl.program_id(axis=0)\n col = tl.program_id(axis=1)\n offsets = row * stride + col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)\n mask = col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE) < N\n\n # load data from x\n x_ptrs = X + offsets\n x = tl.load(x_ptrs, mask=mask)\n\n # randomly prune it\n seed = S + row\n random = tl.rand(seed.to(tl.int32), offsets)\n x_keep = random > p\n\n # write-back\n zero = 0.\n zero = zero.to(x.dtype)\n output = tl.where(x_keep, (x / (1 - p)).to(x.dtype), zero)\n y_ptrs = Y + offsets\n tl.store(y_ptrs, output, mask=mask)\n", "path": "xformers/triton/k_dropout.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport torch\nimport triton\nfrom torch.cuda.amp import custom_bwd, custom_fwd\n\nfrom xformers.triton.k_dropout import k_dropout\n\n\n# Helper to handle the SPMD launch grid and error cases\nclass _dropout(torch.autograd.Function):\n @staticmethod\n @custom_fwd(cast_inputs=torch.float16)\n def forward(ctx, x, p):\n # Soft-flatten an hypothetical 3rd dimension\n x_ = x.reshape(-1, x.shape[-1])\n y = torch.empty_like(x_)\n _, N = x_.shape\n\n assert y.stride(-1) == 1 and x_.stride(-1) == 1\n\n # Generate one seed per sample\n # seed max is int32 max for positive numbers: 2**16\n seeds = torch.randint(65536, (x_.shape[0],), device=x.device).to(torch.int32)\n\n # SPMD launch grid\n def grid(meta):\n return (\n x_.shape[0],\n triton.cdiv(x_.shape[1], meta[\"BLOCK_SIZE\"]),\n )\n\n k_dropout[grid](y, x_, seeds, y.stride(0), N, p)\n\n ctx.save_for_backward(seeds)\n ctx.p = p\n\n return y.reshape_as(x)\n\n @staticmethod\n @custom_bwd\n def backward(ctx, grad_out):\n (seeds,) = ctx.saved_tensors\n\n # Soft-flatten an hypothetical 3rd dimension\n grad_out_ = grad_out.reshape(-1, grad_out.shape[-1])\n grad_in = torch.empty_like(grad_out_)\n _, N = grad_out_.shape\n\n assert grad_in.stride(-1) == 1 and grad_out_.stride(-1) == 1\n\n # SPMD launch grid\n def grid(meta):\n return (\n grad_out_.shape[0],\n triton.cdiv(grad_out_.shape[1], meta[\"BLOCK_SIZE\"]),\n )\n\n k_dropout[grid](grad_in, grad_out_, seeds, grad_out_.stride(0), N, ctx.p)\n\n return grad_in.reshape_as(grad_out), None\n\n\ndef dropout(x: torch.Tensor, p: float):\n return _dropout.apply(x, p)\n", "path": "xformers/triton/dropout.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport triton\nimport triton.language as tl\n\n\n# fmt: off\[email protected](\n configs=[\n triton.Config({\"BLOCK_SIZE\" : 256}, num_warps=1),\n triton.Config({\"BLOCK_SIZE\" : 512}, num_warps=2),\n triton.Config({\"BLOCK_SIZE\" : 1024}, num_warps=4),\n triton.Config({\"BLOCK_SIZE\" : 2048}, num_warps=8),\n triton.Config({\"BLOCK_SIZE\" : 4096}, num_warps=8),\n ],\n key=[\"N\"],\n)\[email protected]\ndef k_dropout(\n Y, X, SEEDS,\n stride,\n N,\n p,\n **meta,\n):\n \"\"\"\n Apply dropout on an input tensor\n Y : Output (M, N)\n X : Input (M, N)\n S : Seeds (M,)\n p : dropout probability\n \"\"\"\n # fmt: on\n\n # compute memory offsets of elements handled by this instance\n BLOCK_SIZE = meta[\"BLOCK_SIZE\"]\n row = tl.program_id(axis=0)\n col = tl.program_id(axis=1)\n offsets = row * stride + col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)\n mask = col * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE) < N\n\n # load data from x\n x_ptrs = X + offsets\n x = tl.load(x_ptrs, mask=mask)\n\n # randomly prune it\n seed = SEEDS + row\n random = tl.rand(seed.to(tl.int32), offsets)\n x_keep = random > p\n\n # write-back\n zero = 0.\n zero = zero.to(x.dtype)\n output = tl.where(x_keep, (x / (1 - p)).to(x.dtype), zero)\n y_ptrs = Y + offsets\n tl.store(y_ptrs, output, mask=mask)\n", "path": "xformers/triton/k_dropout.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport torch\nimport triton\nfrom torch.cuda.amp import custom_bwd, custom_fwd\n\nfrom xformers.triton.k_dropout import k_dropout\n\n\n# Helper to handle the SPMD launch grid and error cases\nclass _dropout(torch.autograd.Function):\n @staticmethod\n @custom_fwd(cast_inputs=torch.float16)\n def forward(ctx, x, p):\n # Soft-flatten an hypothetical 3rd dimension\n x_ = x.reshape(-1, x.shape[-1])\n y = torch.empty_like(x_)\n _, N = x_.shape\n\n assert y.stride(-1) == 1 and x_.stride(-1) == 1\n\n # Generate one seed per sample\n # seed max is int32 max for positive numbers: 2**16\n seeds = torch.randint(65536, (x_.shape[0],), device=x.device).to(torch.int32)\n\n # SPMD launch grid\n def grid(meta):\n return (\n x_.shape[0],\n triton.cdiv(x_.shape[1], meta[\"BLOCK_SIZE\"]),\n )\n\n k_dropout[grid](y, x_, seeds, y.stride(0), N, p)\n\n ctx.save_for_backward(seeds)\n ctx.p = p\n\n return y.reshape_as(x)\n\n @staticmethod\n @custom_bwd\n def backward(ctx, grad_out):\n (seeds,) = ctx.saved_tensors\n\n # Soft-flatten an hypothetical 3rd dimension\n grad_out_ = grad_out.reshape(-1, grad_out.shape[-1])\n grad_in = torch.empty_like(grad_out_)\n _, N = grad_out_.shape\n\n assert grad_in.stride(-1) == 1 and grad_out_.stride(-1) == 1\n\n # SPMD launch grid\n def grid(meta):\n return (\n grad_out_.shape[0],\n triton.cdiv(grad_out_.shape[1], meta[\"BLOCK_SIZE\"]),\n )\n\n k_dropout[grid](grad_in, grad_out_, seeds, grad_out_.stride(0), N, ctx.p)\n\n return grad_in.reshape_as(grad_out), None\n\n\ndef dropout(x: 
torch.Tensor, p: float):\n if p > 0.0:\n return _dropout.apply(x, p)\n\n return x\n", "path": "xformers/triton/dropout.py"}]} | 1,742 | 274 |
gh_patches_debug_25915 | rasdani/github-patches | git_diff | microsoft__AzureTRE-1653 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Health check endpoint should log all the service status as it queries
Currently the `/health` endpoint queries Cosmos / Service Bus / the RP - and returns the statuses. If any are not ok, the response is a 503.
There is currently no way to query that endpoint when the gateway has blocked access - so we at least need it to log the results so we can track back and see what service was down, when.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api_app/api/routes/health.py`
Content:
```
1 from fastapi import APIRouter
2 from models.schemas.status import HealthCheck, ServiceStatus, StatusEnum
3 from resources import strings
4 from services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status
5 from fastapi import HTTPException, status
6
7 router = APIRouter()
8
9
10 @router.get("/health", name=strings.API_GET_HEALTH_STATUS)
11 async def health_check() -> HealthCheck:
12 cosmos_status, cosmos_message = create_state_store_status()
13 sb_status, sb_message = await create_service_bus_status()
14 rp_status, rp_message = create_resource_processor_status()
15 services = [ServiceStatus(service=strings.COSMOS_DB, status=cosmos_status, message=cosmos_message),
16 ServiceStatus(service=strings.SERVICE_BUS, status=sb_status, message=sb_message),
17 ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]
18 health_check_result = HealthCheck(services=services)
19 if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:
20 raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())
21 return health_check_result
22
```
Path: `api_app/_version.py`
Content:
```
1 __version__ = "0.2.10"
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api_app/_version.py b/api_app/_version.py
--- a/api_app/_version.py
+++ b/api_app/_version.py
@@ -1 +1 @@
-__version__ = "0.2.10"
+__version__ = "0.2.11"
diff --git a/api_app/api/routes/health.py b/api_app/api/routes/health.py
--- a/api_app/api/routes/health.py
+++ b/api_app/api/routes/health.py
@@ -3,6 +3,7 @@
from resources import strings
from services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status
from fastapi import HTTPException, status
+import logging
router = APIRouter()
@@ -17,5 +18,8 @@
ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]
health_check_result = HealthCheck(services=services)
if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:
+ logging.error(f'Cosmos Status: {cosmos_status}, message: {cosmos_message}')
+ logging.error(f'Service Bus Status: {sb_status}, message: {sb_message}')
+ logging.error(f'Resource Processor Status: {rp_status}, message: {rp_message}')
raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())
return health_check_result
| {"golden_diff": "diff --git a/api_app/_version.py b/api_app/_version.py\n--- a/api_app/_version.py\n+++ b/api_app/_version.py\n@@ -1 +1 @@\n-__version__ = \"0.2.10\"\n+__version__ = \"0.2.11\"\ndiff --git a/api_app/api/routes/health.py b/api_app/api/routes/health.py\n--- a/api_app/api/routes/health.py\n+++ b/api_app/api/routes/health.py\n@@ -3,6 +3,7 @@\n from resources import strings\n from services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status\n from fastapi import HTTPException, status\n+import logging\n \n router = APIRouter()\n \n@@ -17,5 +18,8 @@\n ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]\n health_check_result = HealthCheck(services=services)\n if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:\n+ logging.error(f'Cosmos Status: {cosmos_status}, message: {cosmos_message}')\n+ logging.error(f'Service Bus Status: {sb_status}, message: {sb_message}')\n+ logging.error(f'Resource Processor Status: {rp_status}, message: {rp_message}')\n raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())\n return health_check_result\n", "issue": "Health check endpoint should log all the service status as it queries\nCurrently the `/health` endpoint queries Cosmos / Service Bus / the RP - and returns the statuses. If any are not ok, the response is a 503.\r\n\r\nThere is currently no way to query that endpoint when the gateway has blocked access - so we at least need it to log the results so we can track back and see what service was down, when.\n", "before_files": [{"content": "from fastapi import APIRouter\nfrom models.schemas.status import HealthCheck, ServiceStatus, StatusEnum\nfrom resources import strings\nfrom services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status\nfrom fastapi import HTTPException, status\n\nrouter = APIRouter()\n\n\[email protected](\"/health\", name=strings.API_GET_HEALTH_STATUS)\nasync def health_check() -> HealthCheck:\n cosmos_status, cosmos_message = create_state_store_status()\n sb_status, sb_message = await create_service_bus_status()\n rp_status, rp_message = create_resource_processor_status()\n services = [ServiceStatus(service=strings.COSMOS_DB, status=cosmos_status, message=cosmos_message),\n ServiceStatus(service=strings.SERVICE_BUS, status=sb_status, message=sb_message),\n ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]\n health_check_result = HealthCheck(services=services)\n if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:\n raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())\n return health_check_result\n", "path": "api_app/api/routes/health.py"}, {"content": "__version__ = \"0.2.10\"\n", "path": "api_app/_version.py"}], "after_files": [{"content": "from fastapi import APIRouter\nfrom models.schemas.status import HealthCheck, ServiceStatus, StatusEnum\nfrom resources import strings\nfrom services.health_checker import create_resource_processor_status, create_state_store_status, create_service_bus_status\nfrom fastapi import HTTPException, status\nimport logging\n\nrouter = APIRouter()\n\n\[email protected](\"/health\", name=strings.API_GET_HEALTH_STATUS)\nasync def health_check() -> HealthCheck:\n cosmos_status, cosmos_message = 
create_state_store_status()\n sb_status, sb_message = await create_service_bus_status()\n rp_status, rp_message = create_resource_processor_status()\n services = [ServiceStatus(service=strings.COSMOS_DB, status=cosmos_status, message=cosmos_message),\n ServiceStatus(service=strings.SERVICE_BUS, status=sb_status, message=sb_message),\n ServiceStatus(service=strings.RESOURCE_PROCESSOR, status=rp_status, message=rp_message)]\n health_check_result = HealthCheck(services=services)\n if cosmos_status == StatusEnum.not_ok or sb_status == StatusEnum.not_ok or rp_status == StatusEnum.not_ok:\n logging.error(f'Cosmos Status: {cosmos_status}, message: {cosmos_message}')\n logging.error(f'Service Bus Status: {sb_status}, message: {sb_message}')\n logging.error(f'Resource Processor Status: {rp_status}, message: {rp_message}')\n raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=health_check_result.json())\n return health_check_result\n", "path": "api_app/api/routes/health.py"}, {"content": "__version__ = \"0.2.11\"\n", "path": "api_app/_version.py"}]} | 673 | 320 |
gh_patches_debug_37018 | rasdani/github-patches | git_diff | arviz-devs__arviz-1076 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plot_trace lines is unclear and it may yield unexpected results
**Describe the bug**
The argument `lines` for the function `plot_trace` can give unexpected results. Moreover, the documentation is a bit nebulous.
**To Reproduce**
A toy example is defined
```python
import pymc3 as pm
import arviz as az
import numpy as np
# fake data
mu_real = 0
sigma_real = 1
n_samples = 150
Y = np.random.normal(loc=mu_real, scale=sigma_real, size=n_samples)
with pm.Model() as model:
mu = pm.Normal('mu', mu=0, sigma=10)
sigma = pm.HalfNormal('sigma', sigma=10)
likelihood = pm.Normal('likelihood', mu=mu, sigma=sigma, observed=Y)
trace = pm.sample()
```
As per [documentation](https://arviz-devs.github.io/arviz/generated/arviz.plot_trace.html#arviz.plot_trace), the argument `lines` accepts a tuple in the form `(var_name, {‘coord’: selection}, [line, positions])`. So, the command
```python
az.plot_trace(trace, lines=(('mu', {}, mu_real),))
```
yields correctly

I can also pass a list of tuples or a list of tuples and lists and it will work fine:
```
az.plot_trace(trace, lines=[('mu', {}, mu_real)]) # list of tuples
az.plot_trace(trace, lines=[['mu', {}, mu_real]]) # list of lists
az.plot_trace(trace, lines=[['mu', {}, mu_real], ('sigma', {}, sigma_real)]) # list of lists and tuples
```
however, I cannot pass a simple tuple because I will get a `KeyError: 0`
```python
az.plot_trace(trace, lines=(['mu', {}, mu_real]))
az.plot_trace(trace, lines=(('mu', {}, mu_real)))
```
Also, I can pass a variable or coordinate name that does not exist and Arviz will not complain---but no lines will be plotted (here I would expect a warning)
```python
az.plot_trace(trace, lines=[('hey', {}, mu_real)])
az.plot_trace(trace, lines=[('mu', {'hey'}, mu_real)])
```

The weird behavior happens when I pass a string:
```python
az.plot_trace(trace, lines=[('mu', {}, 'hey')])
```

**Expected behavior**
The [documentation](https://arviz-devs.github.io/arviz/generated/arviz.plot_trace.html#arviz.plot_trace) could be improved and the function could check the inputs. In addition to what is described above, the placeholder `[line, positions]` in `(var_name, {‘coord’: selection}, [line, positions])` should be something like `[line_positions]`, otherwise one may think (like myself :) ) that two values should be inserted (one for `line` and one for `positions`).
**Additional context**
I am using Win10, fresh conda environment with PyMC3 and Arviz from master.
Possibly related https://github.com/pymc-devs/pymc3/issues/3495, https://github.com/pymc-devs/pymc3/issues/3497
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/plots/backends/matplotlib/traceplot.py`
Content:
```
1 """Matplotlib traceplot."""
2
3 import matplotlib.pyplot as plt
4 from matplotlib.lines import Line2D
5 import numpy as np
6
7 from . import backend_kwarg_defaults, backend_show
8 from ...distplot import plot_dist
9 from ...plot_utils import _scale_fig_size, get_bins, make_label
10
11
12 def plot_trace(
13 data,
14 var_names, # pylint: disable=unused-argument
15 divergences,
16 figsize,
17 rug,
18 lines,
19 combined,
20 legend,
21 plot_kwargs,
22 fill_kwargs,
23 rug_kwargs,
24 hist_kwargs,
25 trace_kwargs,
26 plotters,
27 divergence_data,
28 colors,
29 backend_kwargs,
30 show,
31 ):
32 """Plot distribution (histogram or kernel density estimates) and sampled values.
33
34 If `divergences` data is available in `sample_stats`, will plot the location of divergences as
35 dashed vertical lines.
36
37 Parameters
38 ----------
39 data : obj
40 Any object that can be converted to an az.InferenceData object
41 Refer to documentation of az.convert_to_dataset for details
42 var_names : string, or list of strings
43 One or more variables to be plotted.
44 divergences : {"bottom", "top", None, False}
45 Plot location of divergences on the traceplots. Options are "bottom", "top", or False-y.
46 figsize : figure size tuple
47 If None, size is (12, variables * 2)
48 rug : bool
49 If True adds a rugplot. Defaults to False. Ignored for 2D KDE. Only affects continuous
50 variables.
51 lines : tuple
52 Tuple of (var_name, {'coord': selection}, [line, positions]) to be overplotted as
53 vertical lines on the density and horizontal lines on the trace.
54 combined : bool
55 Flag for combining multiple chains into a single line. If False (default), chains will be
56 plotted separately.
57 legend : bool
58 Add a legend to the figure with the chain color code.
59 plot_kwargs : dict
60 Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.
61 fill_kwargs : dict
62 Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.
63 rug_kwargs : dict
64 Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.
65 hist_kwargs : dict
66 Extra keyword arguments passed to `arviz.plot_dist`. Only affects discrete variables.
67 trace_kwargs : dict
68 Extra keyword arguments passed to `plt.plot`
69 Returns
70 -------
71 axes : matplotlib axes
72
73
74 Examples
75 --------
76 Plot a subset variables
77
78 .. plot::
79 :context: close-figs
80
81 >>> import arviz as az
82 >>> data = az.load_arviz_data('non_centered_eight')
83 >>> coords = {'school': ['Choate', 'Lawrenceville']}
84 >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords)
85
86 Show all dimensions of multidimensional variables in the same plot
87
88 .. plot::
89 :context: close-figs
90
91 >>> az.plot_trace(data, compact=True)
92
93 Combine all chains into one distribution
94
95 .. plot::
96 :context: close-figs
97
98 >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords, combined=True)
99
100
101 Plot reference lines against distribution and trace
102
103 .. plot::
104 :context: close-figs
105
106 >>> lines = (('theta_t',{'school': "Choate"}, [-1]),)
107 >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords, lines=lines)
108
109 """
110 # Set plot default backend kwargs
111 if backend_kwargs is None:
112 backend_kwargs = {}
113
114 backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}
115
116 textsize = plot_kwargs.pop("textsize", 10)
117
118 figsize, _, titlesize, xt_labelsize, linewidth, _ = _scale_fig_size(
119 figsize, textsize, rows=len(plotters), cols=2
120 )
121
122 trace_kwargs.setdefault("linewidth", linewidth)
123 plot_kwargs.setdefault("linewidth", linewidth)
124
125 _, axes = plt.subplots(len(plotters), 2, squeeze=False, figsize=figsize, **backend_kwargs)
126
127 for idx, (var_name, selection, value) in enumerate(plotters):
128 value = np.atleast_2d(value)
129
130 if len(value.shape) == 2:
131 _plot_chains_mpl(
132 axes,
133 idx,
134 value,
135 data,
136 colors,
137 combined,
138 xt_labelsize,
139 rug,
140 trace_kwargs,
141 hist_kwargs,
142 plot_kwargs,
143 fill_kwargs,
144 rug_kwargs,
145 )
146 else:
147 value = value.reshape((value.shape[0], value.shape[1], -1))
148 for sub_idx in range(value.shape[2]):
149 _plot_chains_mpl(
150 axes,
151 idx,
152 value[..., sub_idx],
153 data,
154 colors,
155 combined,
156 xt_labelsize,
157 rug,
158 trace_kwargs,
159 hist_kwargs,
160 plot_kwargs,
161 fill_kwargs,
162 rug_kwargs,
163 )
164
165 if value[0].dtype.kind == "i":
166 xticks = get_bins(value)
167 axes[idx, 0].set_xticks(xticks[:-1])
168 axes[idx, 0].set_yticks([])
169 for col in (0, 1):
170 axes[idx, col].set_title(make_label(var_name, selection), fontsize=titlesize, wrap=True)
171 axes[idx, col].tick_params(labelsize=xt_labelsize)
172
173 xlims = [ax.get_xlim() for ax in axes[idx, :]]
174 ylims = [ax.get_ylim() for ax in axes[idx, :]]
175
176 if divergences:
177 div_selection = {k: v for k, v in selection.items() if k in divergence_data.dims}
178 divs = divergence_data.sel(**div_selection).values
179 # if combined:
180 # divs = divs.flatten()
181 divs = np.atleast_2d(divs)
182
183 for chain, chain_divs in enumerate(divs):
184 div_draws = data.draw.values[chain_divs]
185 div_idxs = np.arange(len(chain_divs))[chain_divs]
186 if div_idxs.size > 0:
187 if divergences == "top":
188 ylocs = [ylim[1] for ylim in ylims]
189 else:
190 ylocs = [ylim[0] for ylim in ylims]
191 values = value[chain, div_idxs]
192 axes[idx, 1].plot(
193 div_draws,
194 np.zeros_like(div_idxs) + ylocs[1],
195 marker="|",
196 color="black",
197 markeredgewidth=1.5,
198 markersize=30,
199 linestyle="None",
200 alpha=hist_kwargs["alpha"],
201 zorder=-5,
202 )
203 axes[idx, 1].set_ylim(*ylims[1])
204 axes[idx, 0].plot(
205 values,
206 np.zeros_like(values) + ylocs[0],
207 marker="|",
208 color="black",
209 markeredgewidth=1.5,
210 markersize=30,
211 linestyle="None",
212 alpha=trace_kwargs["alpha"],
213 zorder=-5,
214 )
215 axes[idx, 0].set_ylim(*ylims[0])
216
217 for _, _, vlines in (j for j in lines if j[0] == var_name and j[1] == selection):
218 if isinstance(vlines, (float, int)):
219 line_values = [vlines]
220 else:
221 line_values = np.atleast_1d(vlines).ravel()
222 axes[idx, 0].vlines(line_values, *ylims[0], colors="black", linewidth=1.5, alpha=0.75)
223 axes[idx, 1].hlines(
224 line_values, *xlims[1], colors="black", linewidth=1.5, alpha=trace_kwargs["alpha"]
225 )
226 axes[idx, 0].set_ylim(bottom=0, top=ylims[0][1])
227 axes[idx, 1].set_xlim(left=data.draw.min(), right=data.draw.max())
228 axes[idx, 1].set_ylim(*ylims[1])
229 if legend:
230 handles = [
231 Line2D([], [], color=color, label=chain_id)
232 for chain_id, color in zip(data.chain.values, colors)
233 ]
234 if combined:
235 handles.insert(0, Line2D([], [], color=colors[-1], label="combined"))
236 axes[0, 1].legend(handles=handles, title="chain")
237
238 if backend_show(show):
239 plt.show()
240
241 return axes
242
243
244 def _plot_chains_mpl(
245 axes,
246 idx,
247 value,
248 data,
249 colors,
250 combined,
251 xt_labelsize,
252 rug,
253 trace_kwargs,
254 hist_kwargs,
255 plot_kwargs,
256 fill_kwargs,
257 rug_kwargs,
258 ):
259 for chain_idx, row in enumerate(value):
260 axes[idx, 1].plot(data.draw.values, row, color=colors[chain_idx], **trace_kwargs)
261
262 if not combined:
263 plot_kwargs["color"] = colors[chain_idx]
264 plot_dist(
265 values=row,
266 textsize=xt_labelsize,
267 rug=rug,
268 ax=axes[idx, 0],
269 hist_kwargs=hist_kwargs,
270 plot_kwargs=plot_kwargs,
271 fill_kwargs=fill_kwargs,
272 rug_kwargs=rug_kwargs,
273 backend="matplotlib",
274 show=False,
275 )
276
277 if combined:
278 plot_kwargs["color"] = colors[-1]
279 plot_dist(
280 values=value.flatten(),
281 textsize=xt_labelsize,
282 rug=rug,
283 ax=axes[idx, 0],
284 hist_kwargs=hist_kwargs,
285 plot_kwargs=plot_kwargs,
286 fill_kwargs=fill_kwargs,
287 rug_kwargs=rug_kwargs,
288 backend="matplotlib",
289 show=False,
290 )
291
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/arviz/plots/backends/matplotlib/traceplot.py b/arviz/plots/backends/matplotlib/traceplot.py
--- a/arviz/plots/backends/matplotlib/traceplot.py
+++ b/arviz/plots/backends/matplotlib/traceplot.py
@@ -1,5 +1,6 @@
"""Matplotlib traceplot."""
+import warnings
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
import numpy as np
@@ -48,8 +49,8 @@
rug : bool
If True adds a rugplot. Defaults to False. Ignored for 2D KDE. Only affects continuous
variables.
- lines : tuple
- Tuple of (var_name, {'coord': selection}, [line, positions]) to be overplotted as
+ lines : tuple or list
+ list of tuple of (var_name, {'coord': selection}, [line_positions]) to be overplotted as
vertical lines on the density and horizontal lines on the trace.
combined : bool
Flag for combining multiple chains into a single line. If False (default), chains will be
@@ -124,6 +125,21 @@
_, axes = plt.subplots(len(plotters), 2, squeeze=False, figsize=figsize, **backend_kwargs)
+ # Check the input for lines
+ if lines is not None:
+ all_var_names = set(plotter[0] for plotter in plotters)
+
+ invalid_var_names = set()
+ for line in lines:
+ if line[0] not in all_var_names:
+ invalid_var_names.add(line[0])
+ if invalid_var_names:
+ warnings.warn(
+ "A valid var_name should be provided, found {} expected from {}".format(
+ invalid_var_names, all_var_names
+ )
+ )
+
for idx, (var_name, selection, value) in enumerate(plotters):
value = np.atleast_2d(value)
@@ -219,6 +235,10 @@
line_values = [vlines]
else:
line_values = np.atleast_1d(vlines).ravel()
+ if not np.issubdtype(line_values.dtype, np.number):
+ raise ValueError(
+ "line-positions should be numeric, found {}".format(line_values)
+ )
axes[idx, 0].vlines(line_values, *ylims[0], colors="black", linewidth=1.5, alpha=0.75)
axes[idx, 1].hlines(
line_values, *xlims[1], colors="black", linewidth=1.5, alpha=trace_kwargs["alpha"]
| {"golden_diff": "diff --git a/arviz/plots/backends/matplotlib/traceplot.py b/arviz/plots/backends/matplotlib/traceplot.py\n--- a/arviz/plots/backends/matplotlib/traceplot.py\n+++ b/arviz/plots/backends/matplotlib/traceplot.py\n@@ -1,5 +1,6 @@\n \"\"\"Matplotlib traceplot.\"\"\"\n \n+import warnings\n import matplotlib.pyplot as plt\n from matplotlib.lines import Line2D\n import numpy as np\n@@ -48,8 +49,8 @@\n rug : bool\n If True adds a rugplot. Defaults to False. Ignored for 2D KDE. Only affects continuous\n variables.\n- lines : tuple\n- Tuple of (var_name, {'coord': selection}, [line, positions]) to be overplotted as\n+ lines : tuple or list\n+ list of tuple of (var_name, {'coord': selection}, [line_positions]) to be overplotted as\n vertical lines on the density and horizontal lines on the trace.\n combined : bool\n Flag for combining multiple chains into a single line. If False (default), chains will be\n@@ -124,6 +125,21 @@\n \n _, axes = plt.subplots(len(plotters), 2, squeeze=False, figsize=figsize, **backend_kwargs)\n \n+ # Check the input for lines\n+ if lines is not None:\n+ all_var_names = set(plotter[0] for plotter in plotters)\n+\n+ invalid_var_names = set()\n+ for line in lines:\n+ if line[0] not in all_var_names:\n+ invalid_var_names.add(line[0])\n+ if invalid_var_names:\n+ warnings.warn(\n+ \"A valid var_name should be provided, found {} expected from {}\".format(\n+ invalid_var_names, all_var_names\n+ )\n+ )\n+\n for idx, (var_name, selection, value) in enumerate(plotters):\n value = np.atleast_2d(value)\n \n@@ -219,6 +235,10 @@\n line_values = [vlines]\n else:\n line_values = np.atleast_1d(vlines).ravel()\n+ if not np.issubdtype(line_values.dtype, np.number):\n+ raise ValueError(\n+ \"line-positions should be numeric, found {}\".format(line_values)\n+ )\n axes[idx, 0].vlines(line_values, *ylims[0], colors=\"black\", linewidth=1.5, alpha=0.75)\n axes[idx, 1].hlines(\n line_values, *xlims[1], colors=\"black\", linewidth=1.5, alpha=trace_kwargs[\"alpha\"]\n", "issue": "plot_trace lines is unclear and it may yield unexpected results\n**Describe the bug**\r\nThe argument `lines` for the function `plot_trace` can give unexpected results. Moreover, the documentation is a bit nebulous.\r\n \r\n**To Reproduce**\r\nA toy example is defined\r\n```python\r\nimport pymc3 as pm\r\nimport arviz as az\r\nimport numpy as np\r\n\r\n# fake data\r\nmu_real = 0\r\nsigma_real = 1\r\nn_samples = 150\r\n\r\nY = np.random.normal(loc=mu_real, scale=sigma_real, size=n_samples)\r\n\r\nwith pm.Model() as model:\r\n mu = pm.Normal('mu', mu=0, sigma=10) \r\n sigma = pm.HalfNormal('sigma', sigma=10)\r\n likelihood = pm.Normal('likelihood', mu=mu, sigma=sigma, observed=Y)\r\n trace = pm.sample()\r\n```\r\nAs per [documentation](https://arviz-devs.github.io/arviz/generated/arviz.plot_trace.html#arviz.plot_trace), the argument `lines` accepts a tuple in the form `(var_name, {\u2018coord\u2019: selection}, [line, positions])`. 
So, the command \r\n```python\r\naz.plot_trace(trace, lines=(('mu', {}, mu_real),))\r\n```\r\nyields correctly \r\n\r\n\r\nI can also pass a list of tuples or a list of tuples and lists and it will work fine:\r\n```\r\naz.plot_trace(trace, lines=[('mu', {}, mu_real)]) # list of tuples\r\naz.plot_trace(trace, lines=[['mu', {}, mu_real]]) # list of lists\r\naz.plot_trace(trace, lines=[['mu', {}, mu_real], ('sigma', {}, sigma_real)]) # list of lists and tuples\r\n```\r\nhowever, I cannot pass a simple tuple because I will get a `KeyError: 0`\r\n```python\r\naz.plot_trace(trace, lines=(['mu', {}, mu_real]))\r\naz.plot_trace(trace, lines=(('mu', {}, mu_real)))\r\n```\r\nAlso, I can pass a variable or coordinate name that do not exist and Arviz will not complain---but not lines will be plotted (here I would expect a warning)\r\n```python\r\naz.plot_trace(trace, lines=[('hey', {}, mu_real)])\r\naz.plot_trace(trace, lines=[('mu', {'hey'}, mu_real)])\r\n```\r\n\r\nThe weird behavior happens when I pass a string:\r\n```python\r\naz.plot_trace(trace, lines=[('mu', {}, 'hey')])\r\n```\r\n\r\n\r\n**Expected behavior**\r\nThe [documentation](https://arviz-devs.github.io/arviz/generated/arviz.plot_trace.html#arviz.plot_trace) could be improved and the function could check the inputs. In addition to what described above, the placeholder `[line, positions]` in `(var_name, {\u2018coord\u2019: selection}, [line, positions])` should be something like `[line_positions]` otherwise one may think (like myself :) ) that two values should be inserted (one for `line` and one for `positions`).\r\n\r\n**Additional context**\r\nI am using Win10, fresh conda environment with PyMC3 and Arviz from master.\r\n\r\nPossibly related https://github.com/pymc-devs/pymc3/issues/3495, https://github.com/pymc-devs/pymc3/issues/3497\n", "before_files": [{"content": "\"\"\"Matplotlib traceplot.\"\"\"\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.lines import Line2D\nimport numpy as np\n\nfrom . import backend_kwarg_defaults, backend_show\nfrom ...distplot import plot_dist\nfrom ...plot_utils import _scale_fig_size, get_bins, make_label\n\n\ndef plot_trace(\n data,\n var_names, # pylint: disable=unused-argument\n divergences,\n figsize,\n rug,\n lines,\n combined,\n legend,\n plot_kwargs,\n fill_kwargs,\n rug_kwargs,\n hist_kwargs,\n trace_kwargs,\n plotters,\n divergence_data,\n colors,\n backend_kwargs,\n show,\n):\n \"\"\"Plot distribution (histogram or kernel density estimates) and sampled values.\n\n If `divergences` data is available in `sample_stats`, will plot the location of divergences as\n dashed vertical lines.\n\n Parameters\n ----------\n data : obj\n Any object that can be converted to an az.InferenceData object\n Refer to documentation of az.convert_to_dataset for details\n var_names : string, or list of strings\n One or more variables to be plotted.\n divergences : {\"bottom\", \"top\", None, False}\n Plot location of divergences on the traceplots. Options are \"bottom\", \"top\", or False-y.\n figsize : figure size tuple\n If None, size is (12, variables * 2)\n rug : bool\n If True adds a rugplot. Defaults to False. Ignored for 2D KDE. Only affects continuous\n variables.\n lines : tuple\n Tuple of (var_name, {'coord': selection}, [line, positions]) to be overplotted as\n vertical lines on the density and horizontal lines on the trace.\n combined : bool\n Flag for combining multiple chains into a single line. 
If False (default), chains will be\n plotted separately.\n legend : bool\n Add a legend to the figure with the chain color code.\n plot_kwargs : dict\n Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.\n fill_kwargs : dict\n Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.\n rug_kwargs : dict\n Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.\n hist_kwargs : dict\n Extra keyword arguments passed to `arviz.plot_dist`. Only affects discrete variables.\n trace_kwargs : dict\n Extra keyword arguments passed to `plt.plot`\n Returns\n -------\n axes : matplotlib axes\n\n\n Examples\n --------\n Plot a subset variables\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> data = az.load_arviz_data('non_centered_eight')\n >>> coords = {'school': ['Choate', 'Lawrenceville']}\n >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords)\n\n Show all dimensions of multidimensional variables in the same plot\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_trace(data, compact=True)\n\n Combine all chains into one distribution\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords, combined=True)\n\n\n Plot reference lines against distribution and trace\n\n .. plot::\n :context: close-figs\n\n >>> lines = (('theta_t',{'school': \"Choate\"}, [-1]),)\n >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords, lines=lines)\n\n \"\"\"\n # Set plot default backend kwargs\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}\n\n textsize = plot_kwargs.pop(\"textsize\", 10)\n\n figsize, _, titlesize, xt_labelsize, linewidth, _ = _scale_fig_size(\n figsize, textsize, rows=len(plotters), cols=2\n )\n\n trace_kwargs.setdefault(\"linewidth\", linewidth)\n plot_kwargs.setdefault(\"linewidth\", linewidth)\n\n _, axes = plt.subplots(len(plotters), 2, squeeze=False, figsize=figsize, **backend_kwargs)\n\n for idx, (var_name, selection, value) in enumerate(plotters):\n value = np.atleast_2d(value)\n\n if len(value.shape) == 2:\n _plot_chains_mpl(\n axes,\n idx,\n value,\n data,\n colors,\n combined,\n xt_labelsize,\n rug,\n trace_kwargs,\n hist_kwargs,\n plot_kwargs,\n fill_kwargs,\n rug_kwargs,\n )\n else:\n value = value.reshape((value.shape[0], value.shape[1], -1))\n for sub_idx in range(value.shape[2]):\n _plot_chains_mpl(\n axes,\n idx,\n value[..., sub_idx],\n data,\n colors,\n combined,\n xt_labelsize,\n rug,\n trace_kwargs,\n hist_kwargs,\n plot_kwargs,\n fill_kwargs,\n rug_kwargs,\n )\n\n if value[0].dtype.kind == \"i\":\n xticks = get_bins(value)\n axes[idx, 0].set_xticks(xticks[:-1])\n axes[idx, 0].set_yticks([])\n for col in (0, 1):\n axes[idx, col].set_title(make_label(var_name, selection), fontsize=titlesize, wrap=True)\n axes[idx, col].tick_params(labelsize=xt_labelsize)\n\n xlims = [ax.get_xlim() for ax in axes[idx, :]]\n ylims = [ax.get_ylim() for ax in axes[idx, :]]\n\n if divergences:\n div_selection = {k: v for k, v in selection.items() if k in divergence_data.dims}\n divs = divergence_data.sel(**div_selection).values\n # if combined:\n # divs = divs.flatten()\n divs = np.atleast_2d(divs)\n\n for chain, chain_divs in enumerate(divs):\n div_draws = data.draw.values[chain_divs]\n div_idxs = np.arange(len(chain_divs))[chain_divs]\n if div_idxs.size > 0:\n if divergences == \"top\":\n ylocs = [ylim[1] for ylim in ylims]\n 
else:\n ylocs = [ylim[0] for ylim in ylims]\n values = value[chain, div_idxs]\n axes[idx, 1].plot(\n div_draws,\n np.zeros_like(div_idxs) + ylocs[1],\n marker=\"|\",\n color=\"black\",\n markeredgewidth=1.5,\n markersize=30,\n linestyle=\"None\",\n alpha=hist_kwargs[\"alpha\"],\n zorder=-5,\n )\n axes[idx, 1].set_ylim(*ylims[1])\n axes[idx, 0].plot(\n values,\n np.zeros_like(values) + ylocs[0],\n marker=\"|\",\n color=\"black\",\n markeredgewidth=1.5,\n markersize=30,\n linestyle=\"None\",\n alpha=trace_kwargs[\"alpha\"],\n zorder=-5,\n )\n axes[idx, 0].set_ylim(*ylims[0])\n\n for _, _, vlines in (j for j in lines if j[0] == var_name and j[1] == selection):\n if isinstance(vlines, (float, int)):\n line_values = [vlines]\n else:\n line_values = np.atleast_1d(vlines).ravel()\n axes[idx, 0].vlines(line_values, *ylims[0], colors=\"black\", linewidth=1.5, alpha=0.75)\n axes[idx, 1].hlines(\n line_values, *xlims[1], colors=\"black\", linewidth=1.5, alpha=trace_kwargs[\"alpha\"]\n )\n axes[idx, 0].set_ylim(bottom=0, top=ylims[0][1])\n axes[idx, 1].set_xlim(left=data.draw.min(), right=data.draw.max())\n axes[idx, 1].set_ylim(*ylims[1])\n if legend:\n handles = [\n Line2D([], [], color=color, label=chain_id)\n for chain_id, color in zip(data.chain.values, colors)\n ]\n if combined:\n handles.insert(0, Line2D([], [], color=colors[-1], label=\"combined\"))\n axes[0, 1].legend(handles=handles, title=\"chain\")\n\n if backend_show(show):\n plt.show()\n\n return axes\n\n\ndef _plot_chains_mpl(\n axes,\n idx,\n value,\n data,\n colors,\n combined,\n xt_labelsize,\n rug,\n trace_kwargs,\n hist_kwargs,\n plot_kwargs,\n fill_kwargs,\n rug_kwargs,\n):\n for chain_idx, row in enumerate(value):\n axes[idx, 1].plot(data.draw.values, row, color=colors[chain_idx], **trace_kwargs)\n\n if not combined:\n plot_kwargs[\"color\"] = colors[chain_idx]\n plot_dist(\n values=row,\n textsize=xt_labelsize,\n rug=rug,\n ax=axes[idx, 0],\n hist_kwargs=hist_kwargs,\n plot_kwargs=plot_kwargs,\n fill_kwargs=fill_kwargs,\n rug_kwargs=rug_kwargs,\n backend=\"matplotlib\",\n show=False,\n )\n\n if combined:\n plot_kwargs[\"color\"] = colors[-1]\n plot_dist(\n values=value.flatten(),\n textsize=xt_labelsize,\n rug=rug,\n ax=axes[idx, 0],\n hist_kwargs=hist_kwargs,\n plot_kwargs=plot_kwargs,\n fill_kwargs=fill_kwargs,\n rug_kwargs=rug_kwargs,\n backend=\"matplotlib\",\n show=False,\n )\n", "path": "arviz/plots/backends/matplotlib/traceplot.py"}], "after_files": [{"content": "\"\"\"Matplotlib traceplot.\"\"\"\n\nimport warnings\nimport matplotlib.pyplot as plt\nfrom matplotlib.lines import Line2D\nimport numpy as np\n\nfrom . 
import backend_kwarg_defaults, backend_show\nfrom ...distplot import plot_dist\nfrom ...plot_utils import _scale_fig_size, get_bins, make_label\n\n\ndef plot_trace(\n data,\n var_names, # pylint: disable=unused-argument\n divergences,\n figsize,\n rug,\n lines,\n combined,\n legend,\n plot_kwargs,\n fill_kwargs,\n rug_kwargs,\n hist_kwargs,\n trace_kwargs,\n plotters,\n divergence_data,\n colors,\n backend_kwargs,\n show,\n):\n \"\"\"Plot distribution (histogram or kernel density estimates) and sampled values.\n\n If `divergences` data is available in `sample_stats`, will plot the location of divergences as\n dashed vertical lines.\n\n Parameters\n ----------\n data : obj\n Any object that can be converted to an az.InferenceData object\n Refer to documentation of az.convert_to_dataset for details\n var_names : string, or list of strings\n One or more variables to be plotted.\n divergences : {\"bottom\", \"top\", None, False}\n Plot location of divergences on the traceplots. Options are \"bottom\", \"top\", or False-y.\n figsize : figure size tuple\n If None, size is (12, variables * 2)\n rug : bool\n If True adds a rugplot. Defaults to False. Ignored for 2D KDE. Only affects continuous\n variables.\n lines : tuple or list\n list of tuple of (var_name, {'coord': selection}, [line_positions]) to be overplotted as\n vertical lines on the density and horizontal lines on the trace.\n combined : bool\n Flag for combining multiple chains into a single line. If False (default), chains will be\n plotted separately.\n legend : bool\n Add a legend to the figure with the chain color code.\n plot_kwargs : dict\n Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.\n fill_kwargs : dict\n Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.\n rug_kwargs : dict\n Extra keyword arguments passed to `arviz.plot_dist`. Only affects continuous variables.\n hist_kwargs : dict\n Extra keyword arguments passed to `arviz.plot_dist`. Only affects discrete variables.\n trace_kwargs : dict\n Extra keyword arguments passed to `plt.plot`\n Returns\n -------\n axes : matplotlib axes\n\n\n Examples\n --------\n Plot a subset variables\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> data = az.load_arviz_data('non_centered_eight')\n >>> coords = {'school': ['Choate', 'Lawrenceville']}\n >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords)\n\n Show all dimensions of multidimensional variables in the same plot\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_trace(data, compact=True)\n\n Combine all chains into one distribution\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords, combined=True)\n\n\n Plot reference lines against distribution and trace\n\n .. 
plot::\n :context: close-figs\n\n >>> lines = (('theta_t',{'school': \"Choate\"}, [-1]),)\n >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords, lines=lines)\n\n \"\"\"\n # Set plot default backend kwargs\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}\n\n textsize = plot_kwargs.pop(\"textsize\", 10)\n\n figsize, _, titlesize, xt_labelsize, linewidth, _ = _scale_fig_size(\n figsize, textsize, rows=len(plotters), cols=2\n )\n\n trace_kwargs.setdefault(\"linewidth\", linewidth)\n plot_kwargs.setdefault(\"linewidth\", linewidth)\n\n _, axes = plt.subplots(len(plotters), 2, squeeze=False, figsize=figsize, **backend_kwargs)\n\n # Check the input for lines\n if lines is not None:\n all_var_names = set(plotter[0] for plotter in plotters)\n\n invalid_var_names = set()\n for line in lines:\n if line[0] not in all_var_names:\n invalid_var_names.add(line[0])\n if invalid_var_names:\n warnings.warn(\n \"A valid var_name should be provided, found {} expected from {}\".format(\n invalid_var_names, all_var_names\n )\n )\n\n for idx, (var_name, selection, value) in enumerate(plotters):\n value = np.atleast_2d(value)\n\n if len(value.shape) == 2:\n _plot_chains_mpl(\n axes,\n idx,\n value,\n data,\n colors,\n combined,\n xt_labelsize,\n rug,\n trace_kwargs,\n hist_kwargs,\n plot_kwargs,\n fill_kwargs,\n rug_kwargs,\n )\n else:\n value = value.reshape((value.shape[0], value.shape[1], -1))\n for sub_idx in range(value.shape[2]):\n _plot_chains_mpl(\n axes,\n idx,\n value[..., sub_idx],\n data,\n colors,\n combined,\n xt_labelsize,\n rug,\n trace_kwargs,\n hist_kwargs,\n plot_kwargs,\n fill_kwargs,\n rug_kwargs,\n )\n\n if value[0].dtype.kind == \"i\":\n xticks = get_bins(value)\n axes[idx, 0].set_xticks(xticks[:-1])\n axes[idx, 0].set_yticks([])\n for col in (0, 1):\n axes[idx, col].set_title(make_label(var_name, selection), fontsize=titlesize, wrap=True)\n axes[idx, col].tick_params(labelsize=xt_labelsize)\n\n xlims = [ax.get_xlim() for ax in axes[idx, :]]\n ylims = [ax.get_ylim() for ax in axes[idx, :]]\n\n if divergences:\n div_selection = {k: v for k, v in selection.items() if k in divergence_data.dims}\n divs = divergence_data.sel(**div_selection).values\n # if combined:\n # divs = divs.flatten()\n divs = np.atleast_2d(divs)\n\n for chain, chain_divs in enumerate(divs):\n div_draws = data.draw.values[chain_divs]\n div_idxs = np.arange(len(chain_divs))[chain_divs]\n if div_idxs.size > 0:\n if divergences == \"top\":\n ylocs = [ylim[1] for ylim in ylims]\n else:\n ylocs = [ylim[0] for ylim in ylims]\n values = value[chain, div_idxs]\n axes[idx, 1].plot(\n div_draws,\n np.zeros_like(div_idxs) + ylocs[1],\n marker=\"|\",\n color=\"black\",\n markeredgewidth=1.5,\n markersize=30,\n linestyle=\"None\",\n alpha=hist_kwargs[\"alpha\"],\n zorder=-5,\n )\n axes[idx, 1].set_ylim(*ylims[1])\n axes[idx, 0].plot(\n values,\n np.zeros_like(values) + ylocs[0],\n marker=\"|\",\n color=\"black\",\n markeredgewidth=1.5,\n markersize=30,\n linestyle=\"None\",\n alpha=trace_kwargs[\"alpha\"],\n zorder=-5,\n )\n axes[idx, 0].set_ylim(*ylims[0])\n\n for _, _, vlines in (j for j in lines if j[0] == var_name and j[1] == selection):\n if isinstance(vlines, (float, int)):\n line_values = [vlines]\n else:\n line_values = np.atleast_1d(vlines).ravel()\n if not np.issubdtype(line_values.dtype, np.number):\n raise ValueError(\n \"line-positions should be numeric, found {}\".format(line_values)\n )\n axes[idx, 0].vlines(line_values, *ylims[0], 
colors=\"black\", linewidth=1.5, alpha=0.75)\n axes[idx, 1].hlines(\n line_values, *xlims[1], colors=\"black\", linewidth=1.5, alpha=trace_kwargs[\"alpha\"]\n )\n axes[idx, 0].set_ylim(bottom=0, top=ylims[0][1])\n axes[idx, 1].set_xlim(left=data.draw.min(), right=data.draw.max())\n axes[idx, 1].set_ylim(*ylims[1])\n if legend:\n handles = [\n Line2D([], [], color=color, label=chain_id)\n for chain_id, color in zip(data.chain.values, colors)\n ]\n if combined:\n handles.insert(0, Line2D([], [], color=colors[-1], label=\"combined\"))\n axes[0, 1].legend(handles=handles, title=\"chain\")\n\n if backend_show(show):\n plt.show()\n\n return axes\n\n\ndef _plot_chains_mpl(\n axes,\n idx,\n value,\n data,\n colors,\n combined,\n xt_labelsize,\n rug,\n trace_kwargs,\n hist_kwargs,\n plot_kwargs,\n fill_kwargs,\n rug_kwargs,\n):\n for chain_idx, row in enumerate(value):\n axes[idx, 1].plot(data.draw.values, row, color=colors[chain_idx], **trace_kwargs)\n\n if not combined:\n plot_kwargs[\"color\"] = colors[chain_idx]\n plot_dist(\n values=row,\n textsize=xt_labelsize,\n rug=rug,\n ax=axes[idx, 0],\n hist_kwargs=hist_kwargs,\n plot_kwargs=plot_kwargs,\n fill_kwargs=fill_kwargs,\n rug_kwargs=rug_kwargs,\n backend=\"matplotlib\",\n show=False,\n )\n\n if combined:\n plot_kwargs[\"color\"] = colors[-1]\n plot_dist(\n values=value.flatten(),\n textsize=xt_labelsize,\n rug=rug,\n ax=axes[idx, 0],\n hist_kwargs=hist_kwargs,\n plot_kwargs=plot_kwargs,\n fill_kwargs=fill_kwargs,\n rug_kwargs=rug_kwargs,\n backend=\"matplotlib\",\n show=False,\n )\n", "path": "arviz/plots/backends/matplotlib/traceplot.py"}]} | 4,077 | 597 |
gh_patches_debug_26067 | rasdani/github-patches | git_diff | beeware__toga-543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A single column table leads to only showing first letter of value
## Expected Behavior
Table like this:
**Filename**
xxx
yyy
zzz
## Current Behavior
**Filename**
x
y
z
## Steps to reproduce
Use toga.Table(headings=['Filename'], data=['xxx', 'yyy', 'zzz'], on_select=self.handle_name_select)
## Analysis
The problem seems to be in the ListSource class.
```python
def _create_row(self, data):
    if isinstance(data, dict):
        row = Row(**data)
    else:
        row = Row(**dict(zip(self._accessors, data)))
    row._source = self
    return row
```
In list_source.py line 56 it says:
`row = Row(**dict(zip(self._accessors, data)))`
but the data parameter is a string when using a list of strings as data, leading to the zipping of the individual characters. When passing in the data as [('xxx',), ('yyy',), ('zzz',)] the error does not occur.
So either the API should make it explicit that it expects a list of lists, or it should handle the data-is-a-list-of-strings case correctly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/core/toga/sources/list_source.py`
Content:
```
1 from .base import Source
2
3
4 class Row:
5 def __init__(self, **data):
6 self._attrs = list(data.keys())
7 self._source = None
8 for name, value in data.items():
9 setattr(self, name, value)
10
11 ######################################################################
12 # Utility wrappers
13 ######################################################################
14
15 def __setattr__(self, attr, value):
16 super().__setattr__(attr, value)
17 if attr in self._attrs:
18 if self._source is not None:
19 self._source._notify('change', item=self)
20
21
22 class ListSource(Source):
23 """A data source to store a list of multiple data values, in a row-like fashion.
24
25 Args:
26 data (`list`): The data in the list. Each entry in the list should have the
27 same number of entries as there are accessors.
28 accessors (`list`): A list of attribute names for accessing the value
29 in each column of the row.
30 """
31 def __init__(self, data, accessors):
32 super().__init__()
33 self._accessors = accessors
34 self._data = []
35 for value in data:
36 self._data.append(self._create_row(value))
37
38 ######################################################################
39 # Methods required by the ListSource interface
40 ######################################################################
41
42 def __len__(self):
43 return len(self._data)
44
45 def __getitem__(self, index):
46 return self._data[index]
47
48 ######################################################################
49 # Factory methods for new rows
50 ######################################################################
51
52 def _create_row(self, data):
53 if isinstance(data, dict):
54 row = Row(**data)
55 else:
56 row = Row(**dict(zip(self._accessors, data)))
57 row._source = self
58 return row
59
60 ######################################################################
61 # Utility methods to make ListSources more list-like
62 ######################################################################
63
64 def __setitem__(self, index, value):
65 row = self._create_row(value)
66 self._data[index] = row
67 self._notify('insert', index=index, item=row)
68
69 def __iter__(self):
70 return iter(self._data)
71
72 def clear(self):
73 self._data = []
74 self._notify('clear')
75
76 def insert(self, index, *values, **named):
77 # Coalesce values and data into a single data dictionary,
78 # and use that to create the data row. Explicitly named data override.
79 row = self._create_row(dict(zip(self._accessors, values), **named))
80 self._data.insert(index, row)
81 self._notify('insert', index=index, item=row)
82 return row
83
84 def prepend(self, *values, **named):
85 return self.insert(0, *values, **named)
86
87 def append(self, *values, **named):
88 return self.insert(len(self), *values, **named)
89
90 def remove(self, row):
91 self._data.remove(row)
92 self._notify('remove', item=row)
93 return row
94
95 def index(self, row):
96 return self._data.index(row)
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/core/toga/sources/list_source.py b/src/core/toga/sources/list_source.py
--- a/src/core/toga/sources/list_source.py
+++ b/src/core/toga/sources/list_source.py
@@ -24,7 +24,7 @@
Args:
data (`list`): The data in the list. Each entry in the list should have the
- same number of entries as there are accessors.
+ same number of entries as there are accessors.
accessors (`list`): A list of attribute names for accessing the value
in each column of the row.
"""
@@ -50,10 +50,20 @@
######################################################################
def _create_row(self, data):
+ """Create a Row object from the given data.
+ Args:
+ data (any): The type of `data` determines how it is handled
+ ``dict``: each key corresponds to a column accessor
+ iterables, except ``str`` and ``dict``: each item corresponds to a column
+ all else: `data` will fill the first column
+ """
+
if isinstance(data, dict):
row = Row(**data)
- else:
+ elif hasattr(data, '__iter__') and not isinstance(data, str):
row = Row(**dict(zip(self._accessors, data)))
+ else:
+ row = Row(**{self._accessors[0]: data})
row._source = self
return row
| {"golden_diff": "diff --git a/src/core/toga/sources/list_source.py b/src/core/toga/sources/list_source.py\n--- a/src/core/toga/sources/list_source.py\n+++ b/src/core/toga/sources/list_source.py\n@@ -24,7 +24,7 @@\n \n Args:\n data (`list`): The data in the list. Each entry in the list should have the\n- same number of entries as there are accessors.\n+ same number of entries as there are accessors. \n accessors (`list`): A list of attribute names for accessing the value\n in each column of the row.\n \"\"\"\n@@ -50,10 +50,20 @@\n ######################################################################\n \n def _create_row(self, data):\n+ \"\"\"Create a Row object from the given data.\n+ Args:\n+ data (any): The type of `data` determines how it is handled\n+ ``dict``: each key corresponds to a column accessor\n+ iterables, except ``str`` and ``dict``: each item corresponds to a column\n+ all else: `data` will fill the first column\n+ \"\"\"\n+\n if isinstance(data, dict):\n row = Row(**data)\n- else:\n+ elif hasattr(data, '__iter__') and not isinstance(data, str):\n row = Row(**dict(zip(self._accessors, data)))\n+ else:\n+ row = Row(**{self._accessors[0]: data})\n row._source = self\n return row\n", "issue": "A single column table leads to only showing first letter of value\n## Expected Behavior\r\n\r\nTable like this:\r\n**Filename**\r\nxxx\r\nyyy\r\nzzz\r\n\r\n## Current Behavior\r\n**Filename**\r\nx\r\ny\r\nz\r\n\r\n## Steps to reproduce\r\n\r\nUse toga.Table(headings=['Filename'], data=['xxx', 'yyy', 'zzz'], on_select=self.handle_name_select)\r\n\r\n## Analysis\r\nThe problem seems to be in the ListSource class.\r\n`def _create_row(self, data):\r\n if isinstance(data, dict):\r\n row = Row(**data)\r\n else:\r\n row = Row(**dict(zip(self._accessors, data)))\r\n row._source = self\r\n return row'\r\n\r\nIn list_source.py line 56 it says:\r\n\r\n`row = Row(**dict(zip(self._accessors, data)))`\r\n\r\nbut the data parameter is a string when using a list of strings as data, leading to the zipping of the individual characters. When passing in the data as [('xxx',), ('yyy',), ('zzz',)] the error does not occur.\r\n\r\nSo either the API should make it explicit that it expects a list of lists, or handle the data-is-a-list-of-strings case correctly\n", "before_files": [{"content": "from .base import Source\n\n\nclass Row:\n def __init__(self, **data):\n self._attrs = list(data.keys())\n self._source = None\n for name, value in data.items():\n setattr(self, name, value)\n\n ######################################################################\n # Utility wrappers\n ######################################################################\n\n def __setattr__(self, attr, value):\n super().__setattr__(attr, value)\n if attr in self._attrs:\n if self._source is not None:\n self._source._notify('change', item=self)\n\n\nclass ListSource(Source):\n \"\"\"A data source to store a list of multiple data values, in a row-like fashion.\n\n Args:\n data (`list`): The data in the list. 
Each entry in the list should have the\n same number of entries as there are accessors.\n accessors (`list`): A list of attribute names for accessing the value\n in each column of the row.\n \"\"\"\n def __init__(self, data, accessors):\n super().__init__()\n self._accessors = accessors\n self._data = []\n for value in data:\n self._data.append(self._create_row(value))\n\n ######################################################################\n # Methods required by the ListSource interface\n ######################################################################\n\n def __len__(self):\n return len(self._data)\n\n def __getitem__(self, index):\n return self._data[index]\n\n ######################################################################\n # Factory methods for new rows\n ######################################################################\n\n def _create_row(self, data):\n if isinstance(data, dict):\n row = Row(**data)\n else:\n row = Row(**dict(zip(self._accessors, data)))\n row._source = self\n return row\n\n ######################################################################\n # Utility methods to make ListSources more list-like\n ######################################################################\n\n def __setitem__(self, index, value):\n row = self._create_row(value)\n self._data[index] = row\n self._notify('insert', index=index, item=row)\n\n def __iter__(self):\n return iter(self._data)\n\n def clear(self):\n self._data = []\n self._notify('clear')\n\n def insert(self, index, *values, **named):\n # Coalesce values and data into a single data dictionary,\n # and use that to create the data row. Explicitly named data override.\n row = self._create_row(dict(zip(self._accessors, values), **named))\n self._data.insert(index, row)\n self._notify('insert', index=index, item=row)\n return row\n\n def prepend(self, *values, **named):\n return self.insert(0, *values, **named)\n\n def append(self, *values, **named):\n return self.insert(len(self), *values, **named)\n\n def remove(self, row):\n self._data.remove(row)\n self._notify('remove', item=row)\n return row\n\n def index(self, row):\n return self._data.index(row)\n", "path": "src/core/toga/sources/list_source.py"}], "after_files": [{"content": "from .base import Source\n\n\nclass Row:\n def __init__(self, **data):\n self._attrs = list(data.keys())\n self._source = None\n for name, value in data.items():\n setattr(self, name, value)\n\n ######################################################################\n # Utility wrappers\n ######################################################################\n\n def __setattr__(self, attr, value):\n super().__setattr__(attr, value)\n if attr in self._attrs:\n if self._source is not None:\n self._source._notify('change', item=self)\n\n\nclass ListSource(Source):\n \"\"\"A data source to store a list of multiple data values, in a row-like fashion.\n\n Args:\n data (`list`): The data in the list. Each entry in the list should have the\n same number of entries as there are accessors. 
\n accessors (`list`): A list of attribute names for accessing the value\n in each column of the row.\n \"\"\"\n def __init__(self, data, accessors):\n super().__init__()\n self._accessors = accessors\n self._data = []\n for value in data:\n self._data.append(self._create_row(value))\n\n ######################################################################\n # Methods required by the ListSource interface\n ######################################################################\n\n def __len__(self):\n return len(self._data)\n\n def __getitem__(self, index):\n return self._data[index]\n\n ######################################################################\n # Factory methods for new rows\n ######################################################################\n\n def _create_row(self, data):\n \"\"\"Create a Row object from the given data.\n Args:\n data (any): The type of `data` determines how it is handled\n ``dict``: each key corresponds to a column accessor\n iterables, except ``str`` and ``dict``: each item corresponds to a column\n all else: `data` will fill the first column\n \"\"\"\n\n if isinstance(data, dict):\n row = Row(**data)\n elif hasattr(data, '__iter__') and not isinstance(data, str):\n row = Row(**dict(zip(self._accessors, data)))\n else:\n row = Row(**{self._accessors[0]: data})\n row._source = self\n return row\n\n ######################################################################\n # Utility methods to make ListSources more list-like\n ######################################################################\n\n def __setitem__(self, index, value):\n row = self._create_row(value)\n self._data[index] = row\n self._notify('insert', index=index, item=row)\n\n def __iter__(self):\n return iter(self._data)\n\n def clear(self):\n self._data = []\n self._notify('clear')\n\n def insert(self, index, *values, **named):\n # Coalesce values and data into a single data dictionary,\n # and use that to create the data row. Explicitly named data override.\n row = self._create_row(dict(zip(self._accessors, values), **named))\n self._data.insert(index, row)\n self._notify('insert', index=index, item=row)\n return row\n\n def prepend(self, *values, **named):\n return self.insert(0, *values, **named)\n\n def append(self, *values, **named):\n return self.insert(len(self), *values, **named)\n\n def remove(self, row):\n self._data.remove(row)\n self._notify('remove', item=row)\n return row\n\n def index(self, row):\n return self._data.index(row)\n", "path": "src/core/toga/sources/list_source.py"}]} | 1,347 | 327 |
gh_patches_debug_25481 | rasdani/github-patches | git_diff | fonttools__fonttools-1276 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[varLib.mutator] Does not mutate weight/width/italicAngle
Because those are not stored in MVAR, but come directly from the standard axes. Quoting the MVAR spec:
> Note: The OS/2.usWeightClass, OS/2.usWidthClass and post.italicAngle values are not supported by variation data in the 'MVAR' table. This is because values for these three fields correspond directly to input axis values for the 'wght', 'wdth' and 'slnt' variation axes. See the discussion of these axes in the OpenType Design-Variation Axis Tag Registry for details on the relationship between these fields and the corresponding design axes.
There are probably also some binary flags we can tweak regarding the boldness, italicness, etc. of the font.
--- END ISSUE ---
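For orientation, the change the issue asks for can be sketched as a small post-processing step on the instanced font. This is an illustrative sketch only, not the project's actual patch: the clamping ranges and the width-class bucketing rule below are assumptions.

```python
# Sketch: push user-space axis values into the static fields MVAR cannot carry.
OS2_WIDTH_PERCENTS = [50.0, 62.5, 75.0, 87.5, 100.0, 112.5, 125.0, 150.0, 200.0]

def apply_registered_axes(varfont, location):
    if "wght" in location and "OS/2" in varfont:
        # usWeightClass kept within 1..1000 here (assumed clamping)
        varfont["OS/2"].usWeightClass = int(round(max(1, min(location["wght"], 1000))))
    if "wdth" in location and "OS/2" in varfont:
        # choose the usWidthClass (1..9) whose canonical percentage is closest
        wdth = location["wdth"]
        closest = min(range(len(OS2_WIDTH_PERCENTS)),
                      key=lambda i: abs(OS2_WIDTH_PERCENTS[i] - wdth))
        varfont["OS/2"].usWidthClass = closest + 1
    if "slnt" in location and "post" in varfont:
        varfont["post"].italicAngle = max(-90, min(location["slnt"], 90))
```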
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/varLib/mutator.py`
Content:
```
1 """
2 Instantiate a variation font. Run, eg:
3
4 $ fonttools varLib.mutator ./NotoSansArabic-VF.ttf wght=140 wdth=85
5 """
6 from __future__ import print_function, division, absolute_import
7 from fontTools.misc.py23 import *
8 from fontTools.misc.fixedTools import floatToFixedToFloat, otRound
9 from fontTools.ttLib import TTFont
10 from fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates
11 from fontTools.varLib import _GetCoordinates, _SetCoordinates, _DesignspaceAxis
12 from fontTools.varLib.models import supportScalar, normalizeLocation
13 from fontTools.varLib.merger import MutatorMerger
14 from fontTools.varLib.varStore import VarStoreInstancer
15 from fontTools.varLib.mvar import MVAR_ENTRIES
16 from fontTools.varLib.iup import iup_delta
17 import os.path
18 import logging
19
20
21 log = logging.getLogger("fontTools.varlib.mutator")
22
23
24 def instantiateVariableFont(varfont, location, inplace=False):
25 """ Generate a static instance from a variable TTFont and a dictionary
26 defining the desired location along the variable font's axes.
27 The location values must be specified as user-space coordinates, e.g.:
28
29 {'wght': 400, 'wdth': 100}
30
31 By default, a new TTFont object is returned. If ``inplace`` is True, the
32 input varfont is modified and reduced to a static font.
33 """
34 if not inplace:
35 # make a copy to leave input varfont unmodified
36 stream = BytesIO()
37 varfont.save(stream)
38 stream.seek(0)
39 varfont = TTFont(stream)
40
41 fvar = varfont['fvar']
42 axes = {a.axisTag:(a.minValue,a.defaultValue,a.maxValue) for a in fvar.axes}
43 loc = normalizeLocation(location, axes)
44 if 'avar' in varfont:
45 maps = varfont['avar'].segments
46 loc = {k:_DesignspaceAxis._map(v, maps[k]) for k,v in loc.items()}
47 # Quantize to F2Dot14, to avoid surprise interpolations.
48 loc = {k:floatToFixedToFloat(v, 14) for k,v in loc.items()}
49 # Location is normalized now
50 log.info("Normalized location: %s", loc)
51
52 log.info("Mutating glyf/gvar tables")
53 gvar = varfont['gvar']
54 glyf = varfont['glyf']
55 # get list of glyph names in gvar sorted by component depth
56 glyphnames = sorted(
57 gvar.variations.keys(),
58 key=lambda name: (
59 glyf[name].getCompositeMaxpValues(glyf).maxComponentDepth
60 if glyf[name].isComposite() else 0,
61 name))
62 for glyphname in glyphnames:
63 variations = gvar.variations[glyphname]
64 coordinates,_ = _GetCoordinates(varfont, glyphname)
65 origCoords, endPts = None, None
66 for var in variations:
67 scalar = supportScalar(loc, var.axes)
68 if not scalar: continue
69 delta = var.coordinates
70 if None in delta:
71 if origCoords is None:
72 origCoords,control = _GetCoordinates(varfont, glyphname)
73 endPts = control[1] if control[0] >= 1 else list(range(len(control[1])))
74 delta = iup_delta(delta, origCoords, endPts)
75 coordinates += GlyphCoordinates(delta) * scalar
76 _SetCoordinates(varfont, glyphname, coordinates)
77
78 if 'cvar' in varfont:
79 log.info("Mutating cvt/cvar tables")
80 cvar = varfont['cvar']
81 cvt = varfont['cvt ']
82 deltas = {}
83 for var in cvar.variations:
84 scalar = supportScalar(loc, var.axes)
85 if not scalar: continue
86 for i, c in enumerate(var.coordinates):
87 if c is not None:
88 deltas[i] = deltas.get(i, 0) + scalar * c
89 for i, delta in deltas.items():
90 cvt[i] += otRound(delta)
91
92 if 'MVAR' in varfont:
93 log.info("Mutating MVAR table")
94 mvar = varfont['MVAR'].table
95 varStoreInstancer = VarStoreInstancer(mvar.VarStore, fvar.axes, loc)
96 records = mvar.ValueRecord
97 for rec in records:
98 mvarTag = rec.ValueTag
99 if mvarTag not in MVAR_ENTRIES:
100 continue
101 tableTag, itemName = MVAR_ENTRIES[mvarTag]
102 delta = otRound(varStoreInstancer[rec.VarIdx])
103 if not delta:
104 continue
105 setattr(varfont[tableTag], itemName,
106 getattr(varfont[tableTag], itemName) + delta)
107
108 if 'GDEF' in varfont:
109 log.info("Mutating GDEF/GPOS/GSUB tables")
110 merger = MutatorMerger(varfont, loc)
111
112 log.info("Building interpolated tables")
113 merger.instantiate()
114
115 if 'name' in varfont:
116 log.info("Pruning name table")
117 exclude = {a.axisNameID for a in fvar.axes}
118 for i in fvar.instances:
119 exclude.add(i.subfamilyNameID)
120 exclude.add(i.postscriptNameID)
121 varfont['name'].names[:] = [
122 n for n in varfont['name'].names
123 if n.nameID not in exclude
124 ]
125
126 log.info("Removing variable tables")
127 for tag in ('avar','cvar','fvar','gvar','HVAR','MVAR','VVAR','STAT'):
128 if tag in varfont:
129 del varfont[tag]
130
131 return varfont
132
133
134 def main(args=None):
135 from fontTools import configLogger
136 import argparse
137
138 parser = argparse.ArgumentParser(
139 "fonttools varLib.mutator", description="Instantiate a variable font")
140 parser.add_argument(
141 "input", metavar="INPUT.ttf", help="Input variable TTF file.")
142 parser.add_argument(
143 "locargs", metavar="AXIS=LOC", nargs="*",
144 help="List of space separated locations. A location consist in "
145 "the name of a variation axis, followed by '=' and a number. E.g.: "
146 " wght=700 wdth=80. The default is the location of the base master.")
147 parser.add_argument(
148 "-o", "--output", metavar="OUTPUT.ttf", default=None,
149 help="Output instance TTF file (default: INPUT-instance.ttf).")
150 logging_group = parser.add_mutually_exclusive_group(required=False)
151 logging_group.add_argument(
152 "-v", "--verbose", action="store_true", help="Run more verbosely.")
153 logging_group.add_argument(
154 "-q", "--quiet", action="store_true", help="Turn verbosity off.")
155 options = parser.parse_args(args)
156
157 varfilename = options.input
158 outfile = (
159 os.path.splitext(varfilename)[0] + '-instance.ttf'
160 if not options.output else options.output)
161 configLogger(level=(
162 "DEBUG" if options.verbose else
163 "ERROR" if options.quiet else
164 "INFO"))
165
166 loc = {}
167 for arg in options.locargs:
168 try:
169 tag, val = arg.split('=')
170 assert len(tag) <= 4
171 loc[tag.ljust(4)] = float(val)
172 except (ValueError, AssertionError):
173 parser.error("invalid location argument format: %r" % arg)
174 log.info("Location: %s", loc)
175
176 log.info("Loading variable font")
177 varfont = TTFont(varfilename)
178
179 instantiateVariableFont(varfont, loc, inplace=True)
180
181 log.info("Saving instance font %s", outfile)
182 varfont.save(outfile)
183
184
185 if __name__ == "__main__":
186 import sys
187 if len(sys.argv) > 1:
188 sys.exit(main())
189 import doctest
190 sys.exit(doctest.testmod().failed)
191
```
--- END FILES ---
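As a quick illustration of the normalization step near the top of `instantiateVariableFont` (lines 41-48 in the listing above), user-space coordinates are mapped into the -1..0..1 design space before any deltas are applied. The axis records below are made-up values, not taken from a real font:

```python
from fontTools.varLib.models import normalizeLocation

# hypothetical fvar axes: tag -> (minValue, defaultValue, maxValue)
axes = {"wght": (100, 400, 900), "wdth": (62.5, 100, 100)}
print(normalizeLocation({"wght": 700, "wdth": 85}, axes))
# -> {'wght': 0.6, 'wdth': -0.4}
```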
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Lib/fontTools/varLib/mutator.py b/Lib/fontTools/varLib/mutator.py
--- a/Lib/fontTools/varLib/mutator.py
+++ b/Lib/fontTools/varLib/mutator.py
@@ -20,6 +20,13 @@
log = logging.getLogger("fontTools.varlib.mutator")
+# map 'wdth' axis (1..200) to OS/2.usWidthClass (1..9), rounding to closest
+OS2_WIDTH_CLASS_VALUES = {}
+percents = [50.0, 62.5, 75.0, 87.5, 100.0, 112.5, 125.0, 150.0, 200.0]
+for i, (prev, curr) in enumerate(zip(percents[:-1], percents[1:]), start=1):
+ half = (prev + curr) / 2
+ OS2_WIDTH_CLASS_VALUES[half] = i
+
def instantiateVariableFont(varfont, location, inplace=False):
""" Generate a static instance from a variable TTFont and a dictionary
@@ -123,6 +130,21 @@
if n.nameID not in exclude
]
+ if "wght" in location and "OS/2" in varfont:
+ varfont["OS/2"].usWeightClass = otRound(
+ max(1, min(location["wght"], 1000))
+ )
+ if "wdth" in location:
+ wdth = location["wdth"]
+ for percent, widthClass in sorted(OS2_WIDTH_CLASS_VALUES.items()):
+ if wdth < percent:
+ varfont["OS/2"].usWidthClass = widthClass
+ break
+ else:
+ varfont["OS/2"].usWidthClass = 9
+ if "slnt" in location and "post" in varfont:
+ varfont["post"].italicAngle = max(-90, min(location["slnt"], 90))
+
log.info("Removing variable tables")
for tag in ('avar','cvar','fvar','gvar','HVAR','MVAR','VVAR','STAT'):
if tag in varfont:
| {"golden_diff": "diff --git a/Lib/fontTools/varLib/mutator.py b/Lib/fontTools/varLib/mutator.py\n--- a/Lib/fontTools/varLib/mutator.py\n+++ b/Lib/fontTools/varLib/mutator.py\n@@ -20,6 +20,13 @@\n \n log = logging.getLogger(\"fontTools.varlib.mutator\")\n \n+# map 'wdth' axis (1..200) to OS/2.usWidthClass (1..9), rounding to closest\n+OS2_WIDTH_CLASS_VALUES = {}\n+percents = [50.0, 62.5, 75.0, 87.5, 100.0, 112.5, 125.0, 150.0, 200.0]\n+for i, (prev, curr) in enumerate(zip(percents[:-1], percents[1:]), start=1):\n+\thalf = (prev + curr) / 2\n+\tOS2_WIDTH_CLASS_VALUES[half] = i\n+\n \n def instantiateVariableFont(varfont, location, inplace=False):\n \t\"\"\" Generate a static instance from a variable TTFont and a dictionary\n@@ -123,6 +130,21 @@\n \t\t\tif n.nameID not in exclude\n \t\t]\n \n+\tif \"wght\" in location and \"OS/2\" in varfont:\n+\t\tvarfont[\"OS/2\"].usWeightClass = otRound(\n+\t\t\tmax(1, min(location[\"wght\"], 1000))\n+\t\t)\n+\tif \"wdth\" in location:\n+\t\twdth = location[\"wdth\"]\n+\t\tfor percent, widthClass in sorted(OS2_WIDTH_CLASS_VALUES.items()):\n+\t\t\tif wdth < percent:\n+\t\t\t\tvarfont[\"OS/2\"].usWidthClass = widthClass\n+\t\t\t\tbreak\n+\t\telse:\n+\t\t\tvarfont[\"OS/2\"].usWidthClass = 9\n+\tif \"slnt\" in location and \"post\" in varfont:\n+\t\tvarfont[\"post\"].italicAngle = max(-90, min(location[\"slnt\"], 90))\n+\n \tlog.info(\"Removing variable tables\")\n \tfor tag in ('avar','cvar','fvar','gvar','HVAR','MVAR','VVAR','STAT'):\n \t\tif tag in varfont:\n", "issue": "[varLib.mutator] Does not mutate weight/width/italicAngle\nBecause those are not in MVAR, but in standard axes. Quoting MVAR spec:\r\n\r\n> Note: The OS/2.usWeightClass, OS/2.usWidthClass and post.italicAngle values are not supported by variation data in the 'MVAR' table. This is because values for these three fields correspond directly to input axis values for the 'wght', 'wdth' and 'slnt' variation axes. See the discussion of these axes in the OpenType Design-Variation Axis Tag Registry for details on the relationship between these fields and the corresponding design axes.\r\n\r\nThere's probably some binary flags that we can also tweak, re boldness, italicness, etc of the font.\n", "before_files": [{"content": "\"\"\"\nInstantiate a variation font. Run, eg:\n\n$ fonttools varLib.mutator ./NotoSansArabic-VF.ttf wght=140 wdth=85\n\"\"\"\nfrom __future__ import print_function, division, absolute_import\nfrom fontTools.misc.py23 import *\nfrom fontTools.misc.fixedTools import floatToFixedToFloat, otRound\nfrom fontTools.ttLib import TTFont\nfrom fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates\nfrom fontTools.varLib import _GetCoordinates, _SetCoordinates, _DesignspaceAxis\nfrom fontTools.varLib.models import supportScalar, normalizeLocation\nfrom fontTools.varLib.merger import MutatorMerger\nfrom fontTools.varLib.varStore import VarStoreInstancer\nfrom fontTools.varLib.mvar import MVAR_ENTRIES\nfrom fontTools.varLib.iup import iup_delta\nimport os.path\nimport logging\n\n\nlog = logging.getLogger(\"fontTools.varlib.mutator\")\n\n\ndef instantiateVariableFont(varfont, location, inplace=False):\n\t\"\"\" Generate a static instance from a variable TTFont and a dictionary\n\tdefining the desired location along the variable font's axes.\n\tThe location values must be specified as user-space coordinates, e.g.:\n\n\t\t{'wght': 400, 'wdth': 100}\n\n\tBy default, a new TTFont object is returned. 
If ``inplace`` is True, the\n\tinput varfont is modified and reduced to a static font.\n\t\"\"\"\n\tif not inplace:\n\t\t# make a copy to leave input varfont unmodified\n\t\tstream = BytesIO()\n\t\tvarfont.save(stream)\n\t\tstream.seek(0)\n\t\tvarfont = TTFont(stream)\n\n\tfvar = varfont['fvar']\n\taxes = {a.axisTag:(a.minValue,a.defaultValue,a.maxValue) for a in fvar.axes}\n\tloc = normalizeLocation(location, axes)\n\tif 'avar' in varfont:\n\t\tmaps = varfont['avar'].segments\n\t\tloc = {k:_DesignspaceAxis._map(v, maps[k]) for k,v in loc.items()}\n\t# Quantize to F2Dot14, to avoid surprise interpolations.\n\tloc = {k:floatToFixedToFloat(v, 14) for k,v in loc.items()}\n\t# Location is normalized now\n\tlog.info(\"Normalized location: %s\", loc)\n\n\tlog.info(\"Mutating glyf/gvar tables\")\n\tgvar = varfont['gvar']\n\tglyf = varfont['glyf']\n\t# get list of glyph names in gvar sorted by component depth\n\tglyphnames = sorted(\n\t\tgvar.variations.keys(),\n\t\tkey=lambda name: (\n\t\t\tglyf[name].getCompositeMaxpValues(glyf).maxComponentDepth\n\t\t\tif glyf[name].isComposite() else 0,\n\t\t\tname))\n\tfor glyphname in glyphnames:\n\t\tvariations = gvar.variations[glyphname]\n\t\tcoordinates,_ = _GetCoordinates(varfont, glyphname)\n\t\torigCoords, endPts = None, None\n\t\tfor var in variations:\n\t\t\tscalar = supportScalar(loc, var.axes)\n\t\t\tif not scalar: continue\n\t\t\tdelta = var.coordinates\n\t\t\tif None in delta:\n\t\t\t\tif origCoords is None:\n\t\t\t\t\torigCoords,control = _GetCoordinates(varfont, glyphname)\n\t\t\t\t\tendPts = control[1] if control[0] >= 1 else list(range(len(control[1])))\n\t\t\t\tdelta = iup_delta(delta, origCoords, endPts)\n\t\t\tcoordinates += GlyphCoordinates(delta) * scalar\n\t\t_SetCoordinates(varfont, glyphname, coordinates)\n\n\tif 'cvar' in varfont:\n\t\tlog.info(\"Mutating cvt/cvar tables\")\n\t\tcvar = varfont['cvar']\n\t\tcvt = varfont['cvt ']\n\t\tdeltas = {}\n\t\tfor var in cvar.variations:\n\t\t\tscalar = supportScalar(loc, var.axes)\n\t\t\tif not scalar: continue\n\t\t\tfor i, c in enumerate(var.coordinates):\n\t\t\t\tif c is not None:\n\t\t\t\t\tdeltas[i] = deltas.get(i, 0) + scalar * c\n\t\tfor i, delta in deltas.items():\n\t\t\tcvt[i] += otRound(delta)\n\n\tif 'MVAR' in varfont:\n\t\tlog.info(\"Mutating MVAR table\")\n\t\tmvar = varfont['MVAR'].table\n\t\tvarStoreInstancer = VarStoreInstancer(mvar.VarStore, fvar.axes, loc)\n\t\trecords = mvar.ValueRecord\n\t\tfor rec in records:\n\t\t\tmvarTag = rec.ValueTag\n\t\t\tif mvarTag not in MVAR_ENTRIES:\n\t\t\t\tcontinue\n\t\t\ttableTag, itemName = MVAR_ENTRIES[mvarTag]\n\t\t\tdelta = otRound(varStoreInstancer[rec.VarIdx])\n\t\t\tif not delta:\n\t\t\t\tcontinue\n\t\t\tsetattr(varfont[tableTag], itemName,\n\t\t\t\tgetattr(varfont[tableTag], itemName) + delta)\n\n\tif 'GDEF' in varfont:\n\t\tlog.info(\"Mutating GDEF/GPOS/GSUB tables\")\n\t\tmerger = MutatorMerger(varfont, loc)\n\n\t\tlog.info(\"Building interpolated tables\")\n\t\tmerger.instantiate()\n\n\tif 'name' in varfont:\n\t\tlog.info(\"Pruning name table\")\n\t\texclude = {a.axisNameID for a in fvar.axes}\n\t\tfor i in fvar.instances:\n\t\t\texclude.add(i.subfamilyNameID)\n\t\t\texclude.add(i.postscriptNameID)\n\t\tvarfont['name'].names[:] = [\n\t\t\tn for n in varfont['name'].names\n\t\t\tif n.nameID not in exclude\n\t\t]\n\n\tlog.info(\"Removing variable tables\")\n\tfor tag in ('avar','cvar','fvar','gvar','HVAR','MVAR','VVAR','STAT'):\n\t\tif tag in varfont:\n\t\t\tdel varfont[tag]\n\n\treturn varfont\n\n\ndef main(args=None):\n\tfrom 
fontTools import configLogger\n\timport argparse\n\n\tparser = argparse.ArgumentParser(\n\t\t\"fonttools varLib.mutator\", description=\"Instantiate a variable font\")\n\tparser.add_argument(\n\t\t\"input\", metavar=\"INPUT.ttf\", help=\"Input variable TTF file.\")\n\tparser.add_argument(\n\t\t\"locargs\", metavar=\"AXIS=LOC\", nargs=\"*\",\n\t\thelp=\"List of space separated locations. A location consist in \"\n\t\t\"the name of a variation axis, followed by '=' and a number. E.g.: \"\n\t\t\" wght=700 wdth=80. The default is the location of the base master.\")\n\tparser.add_argument(\n\t\t\"-o\", \"--output\", metavar=\"OUTPUT.ttf\", default=None,\n\t\thelp=\"Output instance TTF file (default: INPUT-instance.ttf).\")\n\tlogging_group = parser.add_mutually_exclusive_group(required=False)\n\tlogging_group.add_argument(\n\t\t\"-v\", \"--verbose\", action=\"store_true\", help=\"Run more verbosely.\")\n\tlogging_group.add_argument(\n\t\t\"-q\", \"--quiet\", action=\"store_true\", help=\"Turn verbosity off.\")\n\toptions = parser.parse_args(args)\n\n\tvarfilename = options.input\n\toutfile = (\n\t\tos.path.splitext(varfilename)[0] + '-instance.ttf'\n\t\tif not options.output else options.output)\n\tconfigLogger(level=(\n\t\t\"DEBUG\" if options.verbose else\n\t\t\"ERROR\" if options.quiet else\n\t\t\"INFO\"))\n\n\tloc = {}\n\tfor arg in options.locargs:\n\t\ttry:\n\t\t\ttag, val = arg.split('=')\n\t\t\tassert len(tag) <= 4\n\t\t\tloc[tag.ljust(4)] = float(val)\n\t\texcept (ValueError, AssertionError):\n\t\t\tparser.error(\"invalid location argument format: %r\" % arg)\n\tlog.info(\"Location: %s\", loc)\n\n\tlog.info(\"Loading variable font\")\n\tvarfont = TTFont(varfilename)\n\n\tinstantiateVariableFont(varfont, loc, inplace=True)\n\n\tlog.info(\"Saving instance font %s\", outfile)\n\tvarfont.save(outfile)\n\n\nif __name__ == \"__main__\":\n\timport sys\n\tif len(sys.argv) > 1:\n\t\tsys.exit(main())\n\timport doctest\n\tsys.exit(doctest.testmod().failed)\n", "path": "Lib/fontTools/varLib/mutator.py"}], "after_files": [{"content": "\"\"\"\nInstantiate a variation font. 
Run, eg:\n\n$ fonttools varLib.mutator ./NotoSansArabic-VF.ttf wght=140 wdth=85\n\"\"\"\nfrom __future__ import print_function, division, absolute_import\nfrom fontTools.misc.py23 import *\nfrom fontTools.misc.fixedTools import floatToFixedToFloat, otRound\nfrom fontTools.ttLib import TTFont\nfrom fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates\nfrom fontTools.varLib import _GetCoordinates, _SetCoordinates, _DesignspaceAxis\nfrom fontTools.varLib.models import supportScalar, normalizeLocation\nfrom fontTools.varLib.merger import MutatorMerger\nfrom fontTools.varLib.varStore import VarStoreInstancer\nfrom fontTools.varLib.mvar import MVAR_ENTRIES\nfrom fontTools.varLib.iup import iup_delta\nimport os.path\nimport logging\n\n\nlog = logging.getLogger(\"fontTools.varlib.mutator\")\n\n# map 'wdth' axis (1..200) to OS/2.usWidthClass (1..9), rounding to closest\nOS2_WIDTH_CLASS_VALUES = {}\npercents = [50.0, 62.5, 75.0, 87.5, 100.0, 112.5, 125.0, 150.0, 200.0]\nfor i, (prev, curr) in enumerate(zip(percents[:-1], percents[1:]), start=1):\n\thalf = (prev + curr) / 2\n\tOS2_WIDTH_CLASS_VALUES[half] = i\n\n\ndef instantiateVariableFont(varfont, location, inplace=False):\n\t\"\"\" Generate a static instance from a variable TTFont and a dictionary\n\tdefining the desired location along the variable font's axes.\n\tThe location values must be specified as user-space coordinates, e.g.:\n\n\t\t{'wght': 400, 'wdth': 100}\n\n\tBy default, a new TTFont object is returned. If ``inplace`` is True, the\n\tinput varfont is modified and reduced to a static font.\n\t\"\"\"\n\tif not inplace:\n\t\t# make a copy to leave input varfont unmodified\n\t\tstream = BytesIO()\n\t\tvarfont.save(stream)\n\t\tstream.seek(0)\n\t\tvarfont = TTFont(stream)\n\n\tfvar = varfont['fvar']\n\taxes = {a.axisTag:(a.minValue,a.defaultValue,a.maxValue) for a in fvar.axes}\n\tloc = normalizeLocation(location, axes)\n\tif 'avar' in varfont:\n\t\tmaps = varfont['avar'].segments\n\t\tloc = {k:_DesignspaceAxis._map(v, maps[k]) for k,v in loc.items()}\n\t# Quantize to F2Dot14, to avoid surprise interpolations.\n\tloc = {k:floatToFixedToFloat(v, 14) for k,v in loc.items()}\n\t# Location is normalized now\n\tlog.info(\"Normalized location: %s\", loc)\n\n\tlog.info(\"Mutating glyf/gvar tables\")\n\tgvar = varfont['gvar']\n\tglyf = varfont['glyf']\n\t# get list of glyph names in gvar sorted by component depth\n\tglyphnames = sorted(\n\t\tgvar.variations.keys(),\n\t\tkey=lambda name: (\n\t\t\tglyf[name].getCompositeMaxpValues(glyf).maxComponentDepth\n\t\t\tif glyf[name].isComposite() else 0,\n\t\t\tname))\n\tfor glyphname in glyphnames:\n\t\tvariations = gvar.variations[glyphname]\n\t\tcoordinates,_ = _GetCoordinates(varfont, glyphname)\n\t\torigCoords, endPts = None, None\n\t\tfor var in variations:\n\t\t\tscalar = supportScalar(loc, var.axes)\n\t\t\tif not scalar: continue\n\t\t\tdelta = var.coordinates\n\t\t\tif None in delta:\n\t\t\t\tif origCoords is None:\n\t\t\t\t\torigCoords,control = _GetCoordinates(varfont, glyphname)\n\t\t\t\t\tendPts = control[1] if control[0] >= 1 else list(range(len(control[1])))\n\t\t\t\tdelta = iup_delta(delta, origCoords, endPts)\n\t\t\tcoordinates += GlyphCoordinates(delta) * scalar\n\t\t_SetCoordinates(varfont, glyphname, coordinates)\n\n\tif 'cvar' in varfont:\n\t\tlog.info(\"Mutating cvt/cvar tables\")\n\t\tcvar = varfont['cvar']\n\t\tcvt = varfont['cvt ']\n\t\tdeltas = {}\n\t\tfor var in cvar.variations:\n\t\t\tscalar = supportScalar(loc, var.axes)\n\t\t\tif not scalar: continue\n\t\t\tfor i, c in 
enumerate(var.coordinates):\n\t\t\t\tif c is not None:\n\t\t\t\t\tdeltas[i] = deltas.get(i, 0) + scalar * c\n\t\tfor i, delta in deltas.items():\n\t\t\tcvt[i] += otRound(delta)\n\n\tif 'MVAR' in varfont:\n\t\tlog.info(\"Mutating MVAR table\")\n\t\tmvar = varfont['MVAR'].table\n\t\tvarStoreInstancer = VarStoreInstancer(mvar.VarStore, fvar.axes, loc)\n\t\trecords = mvar.ValueRecord\n\t\tfor rec in records:\n\t\t\tmvarTag = rec.ValueTag\n\t\t\tif mvarTag not in MVAR_ENTRIES:\n\t\t\t\tcontinue\n\t\t\ttableTag, itemName = MVAR_ENTRIES[mvarTag]\n\t\t\tdelta = otRound(varStoreInstancer[rec.VarIdx])\n\t\t\tif not delta:\n\t\t\t\tcontinue\n\t\t\tsetattr(varfont[tableTag], itemName,\n\t\t\t\tgetattr(varfont[tableTag], itemName) + delta)\n\n\tif 'GDEF' in varfont:\n\t\tlog.info(\"Mutating GDEF/GPOS/GSUB tables\")\n\t\tmerger = MutatorMerger(varfont, loc)\n\n\t\tlog.info(\"Building interpolated tables\")\n\t\tmerger.instantiate()\n\n\tif 'name' in varfont:\n\t\tlog.info(\"Pruning name table\")\n\t\texclude = {a.axisNameID for a in fvar.axes}\n\t\tfor i in fvar.instances:\n\t\t\texclude.add(i.subfamilyNameID)\n\t\t\texclude.add(i.postscriptNameID)\n\t\tvarfont['name'].names[:] = [\n\t\t\tn for n in varfont['name'].names\n\t\t\tif n.nameID not in exclude\n\t\t]\n\n\tif \"wght\" in location and \"OS/2\" in varfont:\n\t\tvarfont[\"OS/2\"].usWeightClass = otRound(\n\t\t\tmax(1, min(location[\"wght\"], 1000))\n\t\t)\n\tif \"wdth\" in location:\n\t\twdth = location[\"wdth\"]\n\t\tfor percent, widthClass in sorted(OS2_WIDTH_CLASS_VALUES.items()):\n\t\t\tif wdth < percent:\n\t\t\t\tvarfont[\"OS/2\"].usWidthClass = widthClass\n\t\t\t\tbreak\n\t\telse:\n\t\t\tvarfont[\"OS/2\"].usWidthClass = 9\n\tif \"slnt\" in location and \"post\" in varfont:\n\t\tvarfont[\"post\"].italicAngle = max(-90, min(location[\"slnt\"], 90))\n\n\tlog.info(\"Removing variable tables\")\n\tfor tag in ('avar','cvar','fvar','gvar','HVAR','MVAR','VVAR','STAT'):\n\t\tif tag in varfont:\n\t\t\tdel varfont[tag]\n\n\treturn varfont\n\n\ndef main(args=None):\n\tfrom fontTools import configLogger\n\timport argparse\n\n\tparser = argparse.ArgumentParser(\n\t\t\"fonttools varLib.mutator\", description=\"Instantiate a variable font\")\n\tparser.add_argument(\n\t\t\"input\", metavar=\"INPUT.ttf\", help=\"Input variable TTF file.\")\n\tparser.add_argument(\n\t\t\"locargs\", metavar=\"AXIS=LOC\", nargs=\"*\",\n\t\thelp=\"List of space separated locations. A location consist in \"\n\t\t\"the name of a variation axis, followed by '=' and a number. E.g.: \"\n\t\t\" wght=700 wdth=80. 
The default is the location of the base master.\")\n\tparser.add_argument(\n\t\t\"-o\", \"--output\", metavar=\"OUTPUT.ttf\", default=None,\n\t\thelp=\"Output instance TTF file (default: INPUT-instance.ttf).\")\n\tlogging_group = parser.add_mutually_exclusive_group(required=False)\n\tlogging_group.add_argument(\n\t\t\"-v\", \"--verbose\", action=\"store_true\", help=\"Run more verbosely.\")\n\tlogging_group.add_argument(\n\t\t\"-q\", \"--quiet\", action=\"store_true\", help=\"Turn verbosity off.\")\n\toptions = parser.parse_args(args)\n\n\tvarfilename = options.input\n\toutfile = (\n\t\tos.path.splitext(varfilename)[0] + '-instance.ttf'\n\t\tif not options.output else options.output)\n\tconfigLogger(level=(\n\t\t\"DEBUG\" if options.verbose else\n\t\t\"ERROR\" if options.quiet else\n\t\t\"INFO\"))\n\n\tloc = {}\n\tfor arg in options.locargs:\n\t\ttry:\n\t\t\ttag, val = arg.split('=')\n\t\t\tassert len(tag) <= 4\n\t\t\tloc[tag.ljust(4)] = float(val)\n\t\texcept (ValueError, AssertionError):\n\t\t\tparser.error(\"invalid location argument format: %r\" % arg)\n\tlog.info(\"Location: %s\", loc)\n\n\tlog.info(\"Loading variable font\")\n\tvarfont = TTFont(varfilename)\n\n\tinstantiateVariableFont(varfont, loc, inplace=True)\n\n\tlog.info(\"Saving instance font %s\", outfile)\n\tvarfont.save(outfile)\n\n\nif __name__ == \"__main__\":\n\timport sys\n\tif len(sys.argv) > 1:\n\t\tsys.exit(main())\n\timport doctest\n\tsys.exit(doctest.testmod().failed)\n", "path": "Lib/fontTools/varLib/mutator.py"}]} | 2,672 | 516 |
gh_patches_debug_64354 | rasdani/github-patches | git_diff | PyGithub__PyGithub-1470 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The documentation implies that PaginatedList has __len__, but it doesn't.
https://github.com/PyGithub/PyGithub/blob/393bd21bd168769d8ecf4b23f4b32e9a4de5f17a/github/PaginatedList.py#L107
In REPL:
```python
>>> print(len(g.search_code(query="filename:vimrc")))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'PaginatedList' has no len()
```
8ae2bcb implemented `__len__` and mentioned it in the `__doc__`. 8a589c9 removed the `__len__` implementation but left the `__doc__` as is. Related: #579.
--- END ISSUE ---
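Note that the library does expose the count through the `totalCount` property shown in the file below; a minimal usage sketch (the token is a placeholder, and code search is rate-limited):

```python
from github import Github

g = Github("<token>")  # placeholder token; code search generally needs auth
results = g.search_code(query="filename:vimrc")
print(results.totalCount)   # total number of matches reported by the API
for match in results[:3]:   # slicing and iteration work as documented
    print(match.path)
# len(results) raises TypeError because PaginatedList defines no __len__
```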
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `github/PaginatedList.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 ############################ Copyrights and license ############################
4 # #
5 # Copyright 2012 Vincent Jacques <[email protected]> #
6 # Copyright 2012 Zearin <[email protected]> #
7 # Copyright 2013 AKFish <[email protected]> #
8 # Copyright 2013 Bill Mill <[email protected]> #
9 # Copyright 2013 Vincent Jacques <[email protected]> #
10 # Copyright 2013 davidbrai <[email protected]> #
11 # Copyright 2014 Thialfihar <[email protected]> #
12 # Copyright 2014 Vincent Jacques <[email protected]> #
13 # Copyright 2015 Dan Vanderkam <[email protected]> #
14 # Copyright 2015 Eliot Walker <[email protected]> #
15 # Copyright 2016 Peter Buckley <[email protected]> #
16 # Copyright 2017 Jannis Gebauer <[email protected]> #
17 # Copyright 2018 Gilad Shefer <[email protected]> #
18 # Copyright 2018 Joel Koglin <[email protected]> #
19 # Copyright 2018 Wan Liuyang <[email protected]> #
20 # Copyright 2018 sfdye <[email protected]> #
21 # #
22 # This file is part of PyGithub. #
23 # http://pygithub.readthedocs.io/ #
24 # #
25 # PyGithub is free software: you can redistribute it and/or modify it under #
26 # the terms of the GNU Lesser General Public License as published by the Free #
27 # Software Foundation, either version 3 of the License, or (at your option) #
28 # any later version. #
29 # #
30 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
31 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
32 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
33 # details. #
34 # #
35 # You should have received a copy of the GNU Lesser General Public License #
36 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
37 # #
38 ################################################################################
39
40 from urllib.parse import parse_qs
41
42
43 class PaginatedListBase:
44 def __init__(self):
45 self.__elements = list()
46
47 def __getitem__(self, index):
48 assert isinstance(index, (int, slice))
49 if isinstance(index, int):
50 self.__fetchToIndex(index)
51 return self.__elements[index]
52 else:
53 return self._Slice(self, index)
54
55 def __iter__(self):
56 for element in self.__elements:
57 yield element
58 while self._couldGrow():
59 newElements = self._grow()
60 for element in newElements:
61 yield element
62
63 def _isBiggerThan(self, index):
64 return len(self.__elements) > index or self._couldGrow()
65
66 def __fetchToIndex(self, index):
67 while len(self.__elements) <= index and self._couldGrow():
68 self._grow()
69
70 def _grow(self):
71 newElements = self._fetchNextPage()
72 self.__elements += newElements
73 return newElements
74
75 class _Slice:
76 def __init__(self, theList, theSlice):
77 self.__list = theList
78 self.__start = theSlice.start or 0
79 self.__stop = theSlice.stop
80 self.__step = theSlice.step or 1
81
82 def __iter__(self):
83 index = self.__start
84 while not self.__finished(index):
85 if self.__list._isBiggerThan(index):
86 yield self.__list[index]
87 index += self.__step
88 else:
89 return
90
91 def __finished(self, index):
92 return self.__stop is not None and index >= self.__stop
93
94
95 class PaginatedList(PaginatedListBase):
96 """
97 This class abstracts the `pagination of the API <http://developer.github.com/v3/#pagination>`_.
98
99 You can simply enumerate through instances of this class::
100
101 for repo in user.get_repos():
102 print(repo.name)
103
104 If you want to know the total number of items in the list::
105
106 print(user.get_repos().totalCount)
107 print(len(user.get_repos()))
108
109 You can also index them or take slices::
110
111 second_repo = user.get_repos()[1]
112 first_repos = user.get_repos()[:10]
113
114 If you want to iterate in reversed order, just do::
115
116 for repo in user.get_repos().reversed:
117 print(repo.name)
118
119 And if you really need it, you can explicitly access a specific page::
120
121 some_repos = user.get_repos().get_page(0)
122 some_other_repos = user.get_repos().get_page(3)
123 """
124
125 def __init__(
126 self,
127 contentClass,
128 requester,
129 firstUrl,
130 firstParams,
131 headers=None,
132 list_item="items",
133 ):
134 super().__init__()
135 self.__requester = requester
136 self.__contentClass = contentClass
137 self.__firstUrl = firstUrl
138 self.__firstParams = firstParams or ()
139 self.__nextUrl = firstUrl
140 self.__nextParams = firstParams or {}
141 self.__headers = headers
142 self.__list_item = list_item
143 if self.__requester.per_page != 30:
144 self.__nextParams["per_page"] = self.__requester.per_page
145 self._reversed = False
146 self.__totalCount = None
147
148 @property
149 def totalCount(self):
150 if not self.__totalCount:
151 params = {} if self.__nextParams is None else self.__nextParams.copy()
152 # set per_page = 1 so the totalCount is just the number of pages
153 params.update({"per_page": 1})
154 headers, data = self.__requester.requestJsonAndCheck(
155 "GET", self.__firstUrl, parameters=params, headers=self.__headers
156 )
157 if "link" not in headers:
158 if data and "total_count" in data:
159 self.__totalCount = data["total_count"]
160 elif data:
161 self.__totalCount = len(data)
162 else:
163 self.__totalCount = 0
164 else:
165 links = self.__parseLinkHeader(headers)
166 lastUrl = links.get("last")
167 self.__totalCount = int(parse_qs(lastUrl)["page"][0])
168 return self.__totalCount
169
170 def _getLastPageUrl(self):
171 headers, data = self.__requester.requestJsonAndCheck(
172 "GET", self.__firstUrl, parameters=self.__nextParams, headers=self.__headers
173 )
174 links = self.__parseLinkHeader(headers)
175 lastUrl = links.get("last")
176 return lastUrl
177
178 @property
179 def reversed(self):
180 r = PaginatedList(
181 self.__contentClass,
182 self.__requester,
183 self.__firstUrl,
184 self.__firstParams,
185 self.__headers,
186 self.__list_item,
187 )
188 r.__reverse()
189 return r
190
191 def __reverse(self):
192 self._reversed = True
193 lastUrl = self._getLastPageUrl()
194 if lastUrl:
195 self.__nextUrl = lastUrl
196
197 def _couldGrow(self):
198 return self.__nextUrl is not None
199
200 def _fetchNextPage(self):
201 headers, data = self.__requester.requestJsonAndCheck(
202 "GET", self.__nextUrl, parameters=self.__nextParams, headers=self.__headers
203 )
204 data = data if data else []
205
206 self.__nextUrl = None
207 if len(data) > 0:
208 links = self.__parseLinkHeader(headers)
209 if self._reversed:
210 if "prev" in links:
211 self.__nextUrl = links["prev"]
212 elif "next" in links:
213 self.__nextUrl = links["next"]
214 self.__nextParams = None
215
216 if self.__list_item in data:
217 self.__totalCount = data.get("total_count")
218 data = data[self.__list_item]
219
220 content = [
221 self.__contentClass(self.__requester, headers, element, completed=False)
222 for element in data
223 if element is not None
224 ]
225 if self._reversed:
226 return content[::-1]
227 return content
228
229 def __parseLinkHeader(self, headers):
230 links = {}
231 if "link" in headers:
232 linkHeaders = headers["link"].split(", ")
233 for linkHeader in linkHeaders:
234 (url, rel) = linkHeader.split("; ")
235 url = url[1:-1]
236 rel = rel[5:-1]
237 links[rel] = url
238 return links
239
240 def get_page(self, page):
241 params = dict(self.__firstParams)
242 if page != 0:
243 params["page"] = page + 1
244 if self.__requester.per_page != 30:
245 params["per_page"] = self.__requester.per_page
246 headers, data = self.__requester.requestJsonAndCheck(
247 "GET", self.__firstUrl, parameters=params, headers=self.__headers
248 )
249
250 if self.__list_item in data:
251 self.__totalCount = data.get("total_count")
252 data = data[self.__list_item]
253
254 return [
255 self.__contentClass(self.__requester, headers, element, completed=False)
256 for element in data
257 ]
258
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/github/PaginatedList.py b/github/PaginatedList.py
--- a/github/PaginatedList.py
+++ b/github/PaginatedList.py
@@ -104,7 +104,6 @@
If you want to know the total number of items in the list::
print(user.get_repos().totalCount)
- print(len(user.get_repos()))
You can also index them or take slices::
| {"golden_diff": "diff --git a/github/PaginatedList.py b/github/PaginatedList.py\n--- a/github/PaginatedList.py\n+++ b/github/PaginatedList.py\n@@ -104,7 +104,6 @@\n If you want to know the total number of items in the list::\n \n print(user.get_repos().totalCount)\n- print(len(user.get_repos()))\n \n You can also index them or take slices::\n", "issue": "The documentation implies that PaginatedList has __len__, but it doesn't.\nhttps://github.com/PyGithub/PyGithub/blob/393bd21bd168769d8ecf4b23f4b32e9a4de5f17a/github/PaginatedList.py#L107\r\n\r\nIn REPL:\r\n\r\n```python\r\n>>> print(len(g.search_code(query=\"filename:vimrc\")))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nTypeError: object of type 'PaginatedList' has no len()\r\n```\r\n\r\n8ae2bcb implemented `__len__` and mentioned it in the `__doc__`. 8a589c9 removed `__len__` implementation but left `__doc__` as is. Related #579.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Bill Mill <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 davidbrai <[email protected]> #\n# Copyright 2014 Thialfihar <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2015 Dan Vanderkam <[email protected]> #\n# Copyright 2015 Eliot Walker <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2018 Gilad Shefer <[email protected]> #\n# Copyright 2018 Joel Koglin <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\nfrom urllib.parse import parse_qs\n\n\nclass PaginatedListBase:\n def __init__(self):\n self.__elements = list()\n\n def __getitem__(self, index):\n assert isinstance(index, (int, slice))\n if isinstance(index, int):\n self.__fetchToIndex(index)\n return self.__elements[index]\n else:\n return self._Slice(self, index)\n\n def __iter__(self):\n for element in self.__elements:\n yield element\n while self._couldGrow():\n newElements = self._grow()\n for element in newElements:\n yield element\n\n def _isBiggerThan(self, index):\n return len(self.__elements) > index or self._couldGrow()\n\n def __fetchToIndex(self, index):\n while len(self.__elements) <= index and self._couldGrow():\n self._grow()\n\n def _grow(self):\n newElements = self._fetchNextPage()\n self.__elements += newElements\n return newElements\n\n class _Slice:\n def __init__(self, theList, theSlice):\n self.__list = theList\n self.__start = theSlice.start or 0\n self.__stop = theSlice.stop\n self.__step = theSlice.step or 1\n\n def __iter__(self):\n index = self.__start\n while not self.__finished(index):\n if self.__list._isBiggerThan(index):\n yield self.__list[index]\n index += self.__step\n else:\n return\n\n def __finished(self, index):\n return self.__stop is not None and index >= self.__stop\n\n\nclass PaginatedList(PaginatedListBase):\n \"\"\"\n This class abstracts the `pagination of the API <http://developer.github.com/v3/#pagination>`_.\n\n You can simply enumerate through instances of this class::\n\n for repo in user.get_repos():\n print(repo.name)\n\n If you want to know the total number of items in the list::\n\n print(user.get_repos().totalCount)\n print(len(user.get_repos()))\n\n You can also index them or take slices::\n\n second_repo = user.get_repos()[1]\n first_repos = user.get_repos()[:10]\n\n If you want to iterate in reversed order, just do::\n\n for repo in user.get_repos().reversed:\n print(repo.name)\n\n And if you really need it, you can explicitly access a specific page::\n\n some_repos = user.get_repos().get_page(0)\n some_other_repos = user.get_repos().get_page(3)\n \"\"\"\n\n def __init__(\n self,\n contentClass,\n requester,\n firstUrl,\n firstParams,\n headers=None,\n list_item=\"items\",\n ):\n super().__init__()\n self.__requester = requester\n self.__contentClass = contentClass\n self.__firstUrl = firstUrl\n self.__firstParams = firstParams or ()\n self.__nextUrl = firstUrl\n self.__nextParams = firstParams or {}\n self.__headers = headers\n self.__list_item = list_item\n if self.__requester.per_page != 30:\n self.__nextParams[\"per_page\"] = self.__requester.per_page\n self._reversed = False\n self.__totalCount = None\n\n @property\n def totalCount(self):\n if not self.__totalCount:\n params = {} if self.__nextParams is None else self.__nextParams.copy()\n # set per_page = 1 so the totalCount is just the number of pages\n params.update({\"per_page\": 1})\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=params, headers=self.__headers\n )\n if \"link\" not in headers:\n if data and \"total_count\" in data:\n self.__totalCount = data[\"total_count\"]\n elif data:\n self.__totalCount = len(data)\n else:\n self.__totalCount = 0\n else:\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n self.__totalCount = int(parse_qs(lastUrl)[\"page\"][0])\n return self.__totalCount\n\n def _getLastPageUrl(self):\n headers, data = 
self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=self.__nextParams, headers=self.__headers\n )\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n return lastUrl\n\n @property\n def reversed(self):\n r = PaginatedList(\n self.__contentClass,\n self.__requester,\n self.__firstUrl,\n self.__firstParams,\n self.__headers,\n self.__list_item,\n )\n r.__reverse()\n return r\n\n def __reverse(self):\n self._reversed = True\n lastUrl = self._getLastPageUrl()\n if lastUrl:\n self.__nextUrl = lastUrl\n\n def _couldGrow(self):\n return self.__nextUrl is not None\n\n def _fetchNextPage(self):\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__nextUrl, parameters=self.__nextParams, headers=self.__headers\n )\n data = data if data else []\n\n self.__nextUrl = None\n if len(data) > 0:\n links = self.__parseLinkHeader(headers)\n if self._reversed:\n if \"prev\" in links:\n self.__nextUrl = links[\"prev\"]\n elif \"next\" in links:\n self.__nextUrl = links[\"next\"]\n self.__nextParams = None\n\n if self.__list_item in data:\n self.__totalCount = data.get(\"total_count\")\n data = data[self.__list_item]\n\n content = [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n if element is not None\n ]\n if self._reversed:\n return content[::-1]\n return content\n\n def __parseLinkHeader(self, headers):\n links = {}\n if \"link\" in headers:\n linkHeaders = headers[\"link\"].split(\", \")\n for linkHeader in linkHeaders:\n (url, rel) = linkHeader.split(\"; \")\n url = url[1:-1]\n rel = rel[5:-1]\n links[rel] = url\n return links\n\n def get_page(self, page):\n params = dict(self.__firstParams)\n if page != 0:\n params[\"page\"] = page + 1\n if self.__requester.per_page != 30:\n params[\"per_page\"] = self.__requester.per_page\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=params, headers=self.__headers\n )\n\n if self.__list_item in data:\n self.__totalCount = data.get(\"total_count\")\n data = data[self.__list_item]\n\n return [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n ]\n", "path": "github/PaginatedList.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Bill Mill <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 davidbrai <[email protected]> #\n# Copyright 2014 Thialfihar <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2015 Dan Vanderkam <[email protected]> #\n# Copyright 2015 Eliot Walker <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2017 Jannis Gebauer <[email protected]> #\n# Copyright 2018 Gilad Shefer <[email protected]> #\n# Copyright 2018 Joel Koglin <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. 
#\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n################################################################################\n\nfrom urllib.parse import parse_qs\n\n\nclass PaginatedListBase:\n def __init__(self):\n self.__elements = list()\n\n def __getitem__(self, index):\n assert isinstance(index, (int, slice))\n if isinstance(index, int):\n self.__fetchToIndex(index)\n return self.__elements[index]\n else:\n return self._Slice(self, index)\n\n def __iter__(self):\n for element in self.__elements:\n yield element\n while self._couldGrow():\n newElements = self._grow()\n for element in newElements:\n yield element\n\n def _isBiggerThan(self, index):\n return len(self.__elements) > index or self._couldGrow()\n\n def __fetchToIndex(self, index):\n while len(self.__elements) <= index and self._couldGrow():\n self._grow()\n\n def _grow(self):\n newElements = self._fetchNextPage()\n self.__elements += newElements\n return newElements\n\n class _Slice:\n def __init__(self, theList, theSlice):\n self.__list = theList\n self.__start = theSlice.start or 0\n self.__stop = theSlice.stop\n self.__step = theSlice.step or 1\n\n def __iter__(self):\n index = self.__start\n while not self.__finished(index):\n if self.__list._isBiggerThan(index):\n yield self.__list[index]\n index += self.__step\n else:\n return\n\n def __finished(self, index):\n return self.__stop is not None and index >= self.__stop\n\n\nclass PaginatedList(PaginatedListBase):\n \"\"\"\n This class abstracts the `pagination of the API <http://developer.github.com/v3/#pagination>`_.\n\n You can simply enumerate through instances of this class::\n\n for repo in user.get_repos():\n print(repo.name)\n\n If you want to know the total number of items in the list::\n\n print(user.get_repos().totalCount)\n\n You can also index them or take slices::\n\n second_repo = user.get_repos()[1]\n first_repos = user.get_repos()[:10]\n\n If you want to iterate in reversed order, just do::\n\n for repo in user.get_repos().reversed:\n print(repo.name)\n\n And if you really need it, you can explicitly access a specific page::\n\n some_repos = user.get_repos().get_page(0)\n some_other_repos = user.get_repos().get_page(3)\n \"\"\"\n\n def __init__(\n self,\n contentClass,\n requester,\n firstUrl,\n firstParams,\n headers=None,\n list_item=\"items\",\n ):\n super().__init__()\n self.__requester = requester\n self.__contentClass = contentClass\n self.__firstUrl = firstUrl\n self.__firstParams = firstParams or ()\n self.__nextUrl = firstUrl\n self.__nextParams = firstParams or {}\n self.__headers = headers\n self.__list_item = list_item\n if self.__requester.per_page != 30:\n self.__nextParams[\"per_page\"] = self.__requester.per_page\n self._reversed = False\n self.__totalCount = None\n\n @property\n def totalCount(self):\n if not self.__totalCount:\n params = {} if self.__nextParams is None 
else self.__nextParams.copy()\n # set per_page = 1 so the totalCount is just the number of pages\n params.update({\"per_page\": 1})\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=params, headers=self.__headers\n )\n if \"link\" not in headers:\n if data and \"total_count\" in data:\n self.__totalCount = data[\"total_count\"]\n elif data:\n self.__totalCount = len(data)\n else:\n self.__totalCount = 0\n else:\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n self.__totalCount = int(parse_qs(lastUrl)[\"page\"][0])\n return self.__totalCount\n\n def _getLastPageUrl(self):\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=self.__nextParams, headers=self.__headers\n )\n links = self.__parseLinkHeader(headers)\n lastUrl = links.get(\"last\")\n return lastUrl\n\n @property\n def reversed(self):\n r = PaginatedList(\n self.__contentClass,\n self.__requester,\n self.__firstUrl,\n self.__firstParams,\n self.__headers,\n self.__list_item,\n )\n r.__reverse()\n return r\n\n def __reverse(self):\n self._reversed = True\n lastUrl = self._getLastPageUrl()\n if lastUrl:\n self.__nextUrl = lastUrl\n\n def _couldGrow(self):\n return self.__nextUrl is not None\n\n def _fetchNextPage(self):\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__nextUrl, parameters=self.__nextParams, headers=self.__headers\n )\n data = data if data else []\n\n self.__nextUrl = None\n if len(data) > 0:\n links = self.__parseLinkHeader(headers)\n if self._reversed:\n if \"prev\" in links:\n self.__nextUrl = links[\"prev\"]\n elif \"next\" in links:\n self.__nextUrl = links[\"next\"]\n self.__nextParams = None\n\n if self.__list_item in data:\n self.__totalCount = data.get(\"total_count\")\n data = data[self.__list_item]\n\n content = [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n if element is not None\n ]\n if self._reversed:\n return content[::-1]\n return content\n\n def __parseLinkHeader(self, headers):\n links = {}\n if \"link\" in headers:\n linkHeaders = headers[\"link\"].split(\", \")\n for linkHeader in linkHeaders:\n (url, rel) = linkHeader.split(\"; \")\n url = url[1:-1]\n rel = rel[5:-1]\n links[rel] = url\n return links\n\n def get_page(self, page):\n params = dict(self.__firstParams)\n if page != 0:\n params[\"page\"] = page + 1\n if self.__requester.per_page != 30:\n params[\"per_page\"] = self.__requester.per_page\n headers, data = self.__requester.requestJsonAndCheck(\n \"GET\", self.__firstUrl, parameters=params, headers=self.__headers\n )\n\n if self.__list_item in data:\n self.__totalCount = data.get(\"total_count\")\n data = data[self.__list_item]\n\n return [\n self.__contentClass(self.__requester, headers, element, completed=False)\n for element in data\n ]\n", "path": "github/PaginatedList.py"}]} | 3,227 | 95 |
gh_patches_debug_37784 | rasdani/github-patches | git_diff | bokeh__bokeh-8738 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows phantomjs not killed on selenium termination
I reinstalled a fresh Python environment on Windows with Python 3.7 and pythonenv.
I use only pip for package dependencies.
When I tried to run the tests, some failed because temp files were locked.
<img width="726" alt="win32error" src="https://user-images.githubusercontent.com/18531147/54091987-214f4580-4387-11e9-9584-6a117a356ad2.png">
<img width="257" alt="test_failures" src="https://user-images.githubusercontent.com/18531147/54091989-24e2cc80-4387-11e9-9c42-3573dabd1813.PNG">
When the driver terminates, phantomjs is not correctly killed:
<img width="294" alt="proc_pantomjs" src="https://user-images.githubusercontent.com/18531147/54092002-45128b80-4387-11e9-9967-bf74b1e41bd7.PNG">
--- END ISSUE ---
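One way to avoid orphaned `phantomjs.exe` processes (and the locked temp files they hold) is to reap the service process's children before quitting the driver. The sketch below assumes `psutil` is installed and is illustrative only, not the project's actual fix:

```python
import psutil

def kill_child_processes(pid):
    # Kill every child of the given process (e.g. the phantomjs binary)
    parent = psutil.Process(pid)
    children = parent.children(recursive=True)
    for child in children:
        child.kill()
    psutil.wait_procs(children)

# hypothetical usage with a selenium PhantomJS driver:
#   kill_child_processes(driver.service.process.pid)
#   driver.quit()
```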
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/io/webdriver.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 '''
8
9 '''
10
11 #-----------------------------------------------------------------------------
12 # Boilerplate
13 #-----------------------------------------------------------------------------
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import logging
17 log = logging.getLogger(__name__)
18
19 #-----------------------------------------------------------------------------
20 # Imports
21 #-----------------------------------------------------------------------------
22
23 # Standard library imports
24 import atexit
25 import signal
26 import warnings
27 from os.path import devnull
28
29 # External imports
30
31 # Bokeh imports
32 from ..util.dependencies import import_required, detect_phantomjs
33
34 #-----------------------------------------------------------------------------
35 # Globals and constants
36 #-----------------------------------------------------------------------------
37
38 __all__ = (
39 'create_phantomjs_webdriver',
40 'terminate_webdriver',
41 'webdriver_control',
42 )
43
44 #-----------------------------------------------------------------------------
45 # General API
46 #-----------------------------------------------------------------------------
47
48 #-----------------------------------------------------------------------------
49 # Dev API
50 #-----------------------------------------------------------------------------
51
52 def create_phantomjs_webdriver():
53 with warnings.catch_warnings():
54 warnings.filterwarnings("ignore", ".*", UserWarning, "selenium.webdriver.phantomjs.webdriver")
55
56 webdriver = import_required('selenium.webdriver',
57 'To use bokeh.io image export functions you need selenium ' +
58 '("conda install -c bokeh selenium" or "pip install selenium")')
59
60 phantomjs_path = detect_phantomjs()
61 return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)
62
63 def terminate_webdriver(driver):
64 if driver.name == "phantomjs":
65 # https://github.com/seleniumhq/selenium/issues/767
66 if driver.service.process:
67 driver.service.process.send_signal(signal.SIGTERM)
68
69 try:
70 driver.quit()
71 except (IOError, OSError): # IOError for Python 2.7
72 pass
73
74 #-----------------------------------------------------------------------------
75 # Private API
76 #-----------------------------------------------------------------------------
77
78 class _WebdriverState(object):
79 '''
80
81 '''
82
83 def __init__(self, reuse=True, kind="phantomjs"):
84 self.reuse = reuse
85 self.kind = kind
86 self.current = None
87
88 def reset(self):
89 if self.current is not None:
90 terminate_webdriver(self.current)
91 self.current = None
92
93 def get(self):
94 if not self.reuse or self.current is None:
95 if self.current is not None:
96 terminate_webdriver(self.current)
97 self.current = self.create()
98 return self.current
99
100 def create(self):
101 if self.kind == "phantomjs":
102 return create_phantomjs_webdriver()
103 raise ValueError("Unknown webdriver kind %r" % self.kind)
104
105 @property
106 def reuse(self):
107 return self._reuse
108
109 @reuse.setter
110 def reuse(self, value):
111 self._reuse = value
112
113 @property
114 def kind(self):
115 return self._kind
116
117 @kind.setter
118 def kind(self, value):
119 # TODO (bev) enum/value check when more are added
120 self._kind = value
121
122 #-----------------------------------------------------------------------------
123 # Code
124 #-----------------------------------------------------------------------------
125
126
127 webdriver_control = _WebdriverState()
128
129 atexit.register(lambda: webdriver_control.reset())
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/io/webdriver.py b/bokeh/io/webdriver.py
--- a/bokeh/io/webdriver.py
+++ b/bokeh/io/webdriver.py
@@ -21,6 +21,7 @@
#-----------------------------------------------------------------------------
# Standard library imports
+import sys
import atexit
import signal
import warnings
@@ -29,7 +30,7 @@
# External imports
# Bokeh imports
-from ..util.dependencies import import_required, detect_phantomjs
+from ..util.dependencies import import_required, detect_phantomjs, import_optional
#-----------------------------------------------------------------------------
# Globals and constants
@@ -49,6 +50,20 @@
# Dev API
#-----------------------------------------------------------------------------
+
+def kill_proc_tree(pid, including_parent=True):
+ psutil = import_optional('psutil')
+ if psutil is not None:
+ parent = psutil.Process(pid)
+ children = parent.children(recursive=True)
+ for child in children:
+ child.kill()
+ psutil.wait_procs(children)
+ if including_parent:
+ parent.kill()
+ parent.wait(5)
+
+
def create_phantomjs_webdriver():
with warnings.catch_warnings():
warnings.filterwarnings("ignore", ".*", UserWarning, "selenium.webdriver.phantomjs.webdriver")
@@ -60,21 +75,25 @@
phantomjs_path = detect_phantomjs()
return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)
+
def terminate_webdriver(driver):
if driver.name == "phantomjs":
# https://github.com/seleniumhq/selenium/issues/767
if driver.service.process:
+ if sys.platform == 'win32':
+ kill_proc_tree(driver.service.process.pid, including_parent=False)
driver.service.process.send_signal(signal.SIGTERM)
try:
driver.quit()
- except (IOError, OSError): # IOError for Python 2.7
+ except (IOError, OSError): # IOError for Python 2.7
pass
#-----------------------------------------------------------------------------
# Private API
#-----------------------------------------------------------------------------
+
class _WebdriverState(object):
'''
| {"golden_diff": "diff --git a/bokeh/io/webdriver.py b/bokeh/io/webdriver.py\n--- a/bokeh/io/webdriver.py\n+++ b/bokeh/io/webdriver.py\n@@ -21,6 +21,7 @@\n #-----------------------------------------------------------------------------\n \n # Standard library imports\n+import sys\n import atexit\n import signal\n import warnings\n@@ -29,7 +30,7 @@\n # External imports\n \n # Bokeh imports\n-from ..util.dependencies import import_required, detect_phantomjs\n+from ..util.dependencies import import_required, detect_phantomjs, import_optional\n \n #-----------------------------------------------------------------------------\n # Globals and constants\n@@ -49,6 +50,20 @@\n # Dev API\n #-----------------------------------------------------------------------------\n \n+\n+def kill_proc_tree(pid, including_parent=True):\n+ psutil = import_optional('psutil')\n+ if psutil is not None:\n+ parent = psutil.Process(pid)\n+ children = parent.children(recursive=True)\n+ for child in children:\n+ child.kill()\n+ psutil.wait_procs(children)\n+ if including_parent:\n+ parent.kill()\n+ parent.wait(5)\n+\n+\n def create_phantomjs_webdriver():\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", \".*\", UserWarning, \"selenium.webdriver.phantomjs.webdriver\")\n@@ -60,21 +75,25 @@\n phantomjs_path = detect_phantomjs()\n return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)\n \n+\n def terminate_webdriver(driver):\n if driver.name == \"phantomjs\":\n # https://github.com/seleniumhq/selenium/issues/767\n if driver.service.process:\n+ if sys.platform == 'win32':\n+ kill_proc_tree(driver.service.process.pid, including_parent=False)\n driver.service.process.send_signal(signal.SIGTERM)\n \n try:\n driver.quit()\n- except (IOError, OSError): # IOError for Python 2.7\n+ except (IOError, OSError): # IOError for Python 2.7\n pass\n \n #-----------------------------------------------------------------------------\n # Private API\n #-----------------------------------------------------------------------------\n \n+\n class _WebdriverState(object):\n '''\n", "issue": "Windows phantomjs not killed on selenium termination\nI reinstalled a fresh python environment on windows with python 3.7 and pythonenv\r\nI use only pip for package dependencies\r\nWhen I tried to run tests some failed because temp files were locked.\r\n<img width=\"726\" alt=\"win32error\" src=\"https://user-images.githubusercontent.com/18531147/54091987-214f4580-4387-11e9-9584-6a117a356ad2.png\">\r\n<img width=\"257\" alt=\"test_failures\" src=\"https://user-images.githubusercontent.com/18531147/54091989-24e2cc80-4387-11e9-9c42-3573dabd1813.PNG\">\r\n\r\n\r\nWhen driver terminate phantomjs is not correctly killed:\r\n<img width=\"294\" alt=\"proc_pantomjs\" src=\"https://user-images.githubusercontent.com/18531147/54092002-45128b80-4387-11e9-9967-bf74b1e41bd7.PNG\">\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n'''\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport 
logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nimport atexit\nimport signal\nimport warnings\nfrom os.path import devnull\n\n# External imports\n\n# Bokeh imports\nfrom ..util.dependencies import import_required, detect_phantomjs\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'create_phantomjs_webdriver',\n 'terminate_webdriver',\n 'webdriver_control',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\ndef create_phantomjs_webdriver():\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", \".*\", UserWarning, \"selenium.webdriver.phantomjs.webdriver\")\n\n webdriver = import_required('selenium.webdriver',\n 'To use bokeh.io image export functions you need selenium ' +\n '(\"conda install -c bokeh selenium\" or \"pip install selenium\")')\n\n phantomjs_path = detect_phantomjs()\n return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)\n\ndef terminate_webdriver(driver):\n if driver.name == \"phantomjs\":\n # https://github.com/seleniumhq/selenium/issues/767\n if driver.service.process:\n driver.service.process.send_signal(signal.SIGTERM)\n\n try:\n driver.quit()\n except (IOError, OSError): # IOError for Python 2.7\n pass\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\nclass _WebdriverState(object):\n '''\n\n '''\n\n def __init__(self, reuse=True, kind=\"phantomjs\"):\n self.reuse = reuse\n self.kind = kind\n self.current = None\n\n def reset(self):\n if self.current is not None:\n terminate_webdriver(self.current)\n self.current = None\n\n def get(self):\n if not self.reuse or self.current is None:\n if self.current is not None:\n terminate_webdriver(self.current)\n self.current = self.create()\n return self.current\n\n def create(self):\n if self.kind == \"phantomjs\":\n return create_phantomjs_webdriver()\n raise ValueError(\"Unknown webdriver kind %r\" % self.kind)\n\n @property\n def reuse(self):\n return self._reuse\n\n @reuse.setter\n def reuse(self, value):\n self._reuse = value\n\n @property\n def kind(self):\n return self._kind\n\n @kind.setter\n def kind(self, value):\n # TODO (bev) enum/value check when more are added\n self._kind = value\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\n\nwebdriver_control = _WebdriverState()\n\natexit.register(lambda: webdriver_control.reset())\n", "path": "bokeh/io/webdriver.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this 
software.\n#-----------------------------------------------------------------------------\n'''\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\nimport sys\nimport atexit\nimport signal\nimport warnings\nfrom os.path import devnull\n\n# External imports\n\n# Bokeh imports\nfrom ..util.dependencies import import_required, detect_phantomjs, import_optional\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'create_phantomjs_webdriver',\n 'terminate_webdriver',\n 'webdriver_control',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n\ndef kill_proc_tree(pid, including_parent=True):\n psutil = import_optional('psutil')\n if psutil is not None:\n parent = psutil.Process(pid)\n children = parent.children(recursive=True)\n for child in children:\n child.kill()\n psutil.wait_procs(children)\n if including_parent:\n parent.kill()\n parent.wait(5)\n\n\ndef create_phantomjs_webdriver():\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", \".*\", UserWarning, \"selenium.webdriver.phantomjs.webdriver\")\n\n webdriver = import_required('selenium.webdriver',\n 'To use bokeh.io image export functions you need selenium ' +\n '(\"conda install -c bokeh selenium\" or \"pip install selenium\")')\n\n phantomjs_path = detect_phantomjs()\n return webdriver.PhantomJS(executable_path=phantomjs_path, service_log_path=devnull)\n\n\ndef terminate_webdriver(driver):\n if driver.name == \"phantomjs\":\n # https://github.com/seleniumhq/selenium/issues/767\n if driver.service.process:\n if sys.platform == 'win32':\n kill_proc_tree(driver.service.process.pid, including_parent=False)\n driver.service.process.send_signal(signal.SIGTERM)\n\n try:\n driver.quit()\n except (IOError, OSError): # IOError for Python 2.7\n pass\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n\nclass _WebdriverState(object):\n '''\n\n '''\n\n def __init__(self, reuse=True, kind=\"phantomjs\"):\n self.reuse = reuse\n self.kind = kind\n self.current = None\n\n def reset(self):\n if self.current is not None:\n terminate_webdriver(self.current)\n self.current = None\n\n def get(self):\n if not self.reuse or self.current is None:\n if self.current is not None:\n terminate_webdriver(self.current)\n self.current = self.create()\n return self.current\n\n def create(self):\n if self.kind == \"phantomjs\":\n return create_phantomjs_webdriver()\n raise ValueError(\"Unknown webdriver kind %r\" % self.kind)\n\n @property\n def reuse(self):\n return self._reuse\n\n @reuse.setter\n def reuse(self, 
value):\n self._reuse = value\n\n @property\n def kind(self):\n return self._kind\n\n @kind.setter\n def kind(self, value):\n # TODO (bev) enum/value check when more are added\n self._kind = value\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\n\nwebdriver_control = _WebdriverState()\n\natexit.register(lambda: webdriver_control.reset())\n", "path": "bokeh/io/webdriver.py"}]} | 1,529 | 473 |
gh_patches_debug_24271 | rasdani/github-patches | git_diff | ivy-llc__ivy-15738 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ihfft
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py`
Content:
```
1 import ivy
2 from ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back
3 from ivy.func_wrapper import with_unsupported_dtypes
4
5
6 @to_ivy_arrays_and_back
7 def ifft(a, n=None, axis=-1, norm=None):
8 a = ivy.array(a, dtype=ivy.complex128)
9 if norm is None:
10 norm = "backward"
11 return ivy.ifft(a, axis, norm=norm, n=n)
12
13
14 @to_ivy_arrays_and_back
15 @with_unsupported_dtypes({"1.24.3 and below": ("float16",)}, "numpy")
16 def ifftshift(x, axes=None):
17 x = ivy.asarray(x)
18
19 if axes is None:
20 axes = tuple(range(x.ndim))
21 shift = [-(dim // 2) for dim in x.shape]
22 elif isinstance(
23 axes,
24 (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),
25 ):
26 shift = -(x.shape[axes] // 2)
27 else:
28 shift = [-(x.shape[ax] // 2) for ax in axes]
29
30 roll = ivy.roll(x, shift, axis=axes)
31
32 return roll
33
34
35 @to_ivy_arrays_and_back
36 def fft(a, n=None, axis=-1, norm=None):
37 return ivy.fft(ivy.astype(a, ivy.complex128), axis, norm=norm, n=n)
38
39
40 @to_ivy_arrays_and_back
41 @with_unsupported_dtypes({"1.24.3 and below": ("float16",)}, "numpy")
42 def fftshift(x, axes=None):
43 x = ivy.asarray(x)
44
45 if axes is None:
46 axes = tuple(range(x.ndim))
47 shift = [(dim // 2) for dim in x.shape]
48 elif isinstance(
49 axes,
50 (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),
51 ):
52 shift = x.shape[axes] // 2
53 else:
54 shift = [(x.shape[ax] // 2) for ax in axes]
55
56 roll = ivy.roll(x, shift, axis=axes)
57
58 return roll
59
60
61 @with_unsupported_dtypes({"1.9.0 and below": ("float16",)}, "torch")
62 @to_ivy_arrays_and_back
63 def rfft(a, n=None, axis=-1, norm=None):
64 if norm is None:
65 norm = "backward"
66 a = ivy.array(a, dtype=ivy.float64)
67 return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)
68
69
70 @with_unsupported_dtypes({"2.4.2 and below": ("int",)}, "paddle")
71 @to_ivy_arrays_and_back
72 def fftfreq(n, d=1.0):
73 if not isinstance(
74 n, (int, type(ivy.int8), type(ivy.int16), type(ivy.int32), type(ivy.int64))
75 ):
76 raise ValueError("n should be an integer")
77
78 N = (n - 1) // 2 + 1
79 val = 1.0 / (n * d)
80 results = ivy.empty(tuple([n]), dtype=int)
81
82 p1 = ivy.arange(0, N, dtype=int)
83 results[:N] = p1
84 p2 = ivy.arange(-(n // 2), 0, dtype=int)
85 results[N:] = p2
86
87 return results * val
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py b/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py
--- a/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py
+++ b/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py
@@ -3,6 +3,23 @@
from ivy.func_wrapper import with_unsupported_dtypes
+_SWAP_DIRECTION_MAP = {
+ None: "forward",
+ "backward": "forward",
+ "ortho": "ortho",
+ "forward": "backward",
+}
+
+
+def _swap_direction(norm):
+ try:
+ return _SWAP_DIRECTION_MAP[norm]
+ except KeyError:
+ raise ValueError(
+ f'Invalid norm value {norm}; should be "backward", "ortho" or "forward".'
+ ) from None
+
+
@to_ivy_arrays_and_back
def ifft(a, n=None, axis=-1, norm=None):
a = ivy.array(a, dtype=ivy.complex128)
@@ -67,6 +84,17 @@
return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes({"1.12.0 and below": ("float16",)}, "numpy")
+def ihfft(a, n=None, axis=-1, norm=None):
+ a = ivy.array(a, dtype=ivy.float64)
+ if n is None:
+ n = a.shape[axis]
+ norm = _swap_direction(norm)
+ output = ivy.conj(rfft(a, n, axis, norm=norm).ivy_array)
+ return output
+
+
@with_unsupported_dtypes({"2.4.2 and below": ("int",)}, "paddle")
@to_ivy_arrays_and_back
def fftfreq(n, d=1.0):
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py b/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py\n--- a/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py\n+++ b/ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py\n@@ -3,6 +3,23 @@\n from ivy.func_wrapper import with_unsupported_dtypes\n \n \n+_SWAP_DIRECTION_MAP = {\n+ None: \"forward\",\n+ \"backward\": \"forward\",\n+ \"ortho\": \"ortho\",\n+ \"forward\": \"backward\",\n+}\n+\n+\n+def _swap_direction(norm):\n+ try:\n+ return _SWAP_DIRECTION_MAP[norm]\n+ except KeyError:\n+ raise ValueError(\n+ f'Invalid norm value {norm}; should be \"backward\", \"ortho\" or \"forward\".'\n+ ) from None\n+\n+\n @to_ivy_arrays_and_back\n def ifft(a, n=None, axis=-1, norm=None):\n a = ivy.array(a, dtype=ivy.complex128)\n@@ -67,6 +84,17 @@\n return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)\n \n \n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes({\"1.12.0 and below\": (\"float16\",)}, \"numpy\")\n+def ihfft(a, n=None, axis=-1, norm=None):\n+ a = ivy.array(a, dtype=ivy.float64)\n+ if n is None:\n+ n = a.shape[axis]\n+ norm = _swap_direction(norm)\n+ output = ivy.conj(rfft(a, n, axis, norm=norm).ivy_array)\n+ return output\n+\n+\n @with_unsupported_dtypes({\"2.4.2 and below\": (\"int\",)}, \"paddle\")\n @to_ivy_arrays_and_back\n def fftfreq(n, d=1.0):\n", "issue": "ihfft\n\n", "before_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\ndef ifft(a, n=None, axis=-1, norm=None):\n a = ivy.array(a, dtype=ivy.complex128)\n if norm is None:\n norm = \"backward\"\n return ivy.ifft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.24.3 and below\": (\"float16\",)}, \"numpy\")\ndef ifftshift(x, axes=None):\n x = ivy.asarray(x)\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shift = [-(dim // 2) for dim in x.shape]\n elif isinstance(\n axes,\n (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),\n ):\n shift = -(x.shape[axes] // 2)\n else:\n shift = [-(x.shape[ax] // 2) for ax in axes]\n\n roll = ivy.roll(x, shift, axis=axes)\n\n return roll\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n return ivy.fft(ivy.astype(a, ivy.complex128), axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.24.3 and below\": (\"float16\",)}, \"numpy\")\ndef fftshift(x, axes=None):\n x = ivy.asarray(x)\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shift = [(dim // 2) for dim in x.shape]\n elif isinstance(\n axes,\n (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),\n ):\n shift = x.shape[axes] // 2\n else:\n shift = [(x.shape[ax] // 2) for ax in axes]\n\n roll = ivy.roll(x, shift, axis=axes)\n\n return roll\n\n\n@with_unsupported_dtypes({\"1.9.0 and below\": (\"float16\",)}, \"torch\")\n@to_ivy_arrays_and_back\ndef rfft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n a = ivy.array(a, dtype=ivy.float64)\n return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"int\",)}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fftfreq(n, d=1.0):\n if not isinstance(\n n, (int, type(ivy.int8), type(ivy.int16), type(ivy.int32), type(ivy.int64))\n ):\n raise ValueError(\"n should be 
an integer\")\n\n N = (n - 1) // 2 + 1\n val = 1.0 / (n * d)\n results = ivy.empty(tuple([n]), dtype=int)\n\n p1 = ivy.arange(0, N, dtype=int)\n results[:N] = p1\n p2 = ivy.arange(-(n // 2), 0, dtype=int)\n results[N:] = p2\n\n return results * val\n", "path": "ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py"}], "after_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n_SWAP_DIRECTION_MAP = {\n None: \"forward\",\n \"backward\": \"forward\",\n \"ortho\": \"ortho\",\n \"forward\": \"backward\",\n}\n\n\ndef _swap_direction(norm):\n try:\n return _SWAP_DIRECTION_MAP[norm]\n except KeyError:\n raise ValueError(\n f'Invalid norm value {norm}; should be \"backward\", \"ortho\" or \"forward\".'\n ) from None\n\n\n@to_ivy_arrays_and_back\ndef ifft(a, n=None, axis=-1, norm=None):\n a = ivy.array(a, dtype=ivy.complex128)\n if norm is None:\n norm = \"backward\"\n return ivy.ifft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.24.3 and below\": (\"float16\",)}, \"numpy\")\ndef ifftshift(x, axes=None):\n x = ivy.asarray(x)\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shift = [-(dim // 2) for dim in x.shape]\n elif isinstance(\n axes,\n (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),\n ):\n shift = -(x.shape[axes] // 2)\n else:\n shift = [-(x.shape[ax] // 2) for ax in axes]\n\n roll = ivy.roll(x, shift, axis=axes)\n\n return roll\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n return ivy.fft(ivy.astype(a, ivy.complex128), axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.24.3 and below\": (\"float16\",)}, \"numpy\")\ndef fftshift(x, axes=None):\n x = ivy.asarray(x)\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shift = [(dim // 2) for dim in x.shape]\n elif isinstance(\n axes,\n (int, type(ivy.uint8), type(ivy.uint16), type(ivy.uint32), type(ivy.uint64)),\n ):\n shift = x.shape[axes] // 2\n else:\n shift = [(x.shape[ax] // 2) for ax in axes]\n\n roll = ivy.roll(x, shift, axis=axes)\n\n return roll\n\n\n@with_unsupported_dtypes({\"1.9.0 and below\": (\"float16\",)}, \"torch\")\n@to_ivy_arrays_and_back\ndef rfft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n a = ivy.array(a, dtype=ivy.float64)\n return ivy.dft(a, axis=axis, inverse=False, onesided=True, dft_length=n, norm=norm)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.12.0 and below\": (\"float16\",)}, \"numpy\")\ndef ihfft(a, n=None, axis=-1, norm=None):\n a = ivy.array(a, dtype=ivy.float64)\n if n is None:\n n = a.shape[axis]\n norm = _swap_direction(norm)\n output = ivy.conj(rfft(a, n, axis, norm=norm).ivy_array)\n return output\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"int\",)}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fftfreq(n, d=1.0):\n if not isinstance(\n n, (int, type(ivy.int8), type(ivy.int16), type(ivy.int32), type(ivy.int64))\n ):\n raise ValueError(\"n should be an integer\")\n\n N = (n - 1) // 2 + 1\n val = 1.0 / (n * d)\n results = ivy.empty(tuple([n]), dtype=int)\n\n p1 = ivy.arange(0, N, dtype=int)\n results[:N] = p1\n p2 = ivy.arange(-(n // 2), 0, dtype=int)\n results[N:] = p2\n\n return results * val\n", "path": "ivy/functional/frontends/numpy/fft/discrete_fourier_transform.py"}]} | 1,246 | 450 |
gh_patches_debug_25212 | rasdani/github-patches | git_diff | vyperlang__vyper-3030 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
can't declare variable `public` but can define custom getter
### Version Information
* vyper Version: 0.3.4+commit.f31f0ec
* OS: osx
* Python Version: 3.8.9
### What's your issue about?
This code fails to compile:
```
slates : public(HashMap[bytes32, DynArray[address, 15]])
```
but I can define my own getter for that type just fine:
```
slates : HashMap[bytes32, DynArray[address, 15]]
@external
def slate(sid :bytes32) -> DynArray[address, 15]:
return self.slates[sid]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/ast/expansion.py`
Content:
```
1 import copy
2
3 from vyper import ast as vy_ast
4 from vyper.exceptions import CompilerPanic
5
6
7 def expand_annotated_ast(vyper_module: vy_ast.Module) -> None:
8 """
9 Perform expansion / simplification operations on an annotated Vyper AST.
10
11 This pass uses annotated type information to modify the AST, simplifying
12 logic and expanding subtrees to reduce the compexity during codegen.
13
14 Arguments
15 ---------
16 vyper_module : Module
17 Top-level Vyper AST node that has been type-checked and annotated.
18 """
19 generate_public_variable_getters(vyper_module)
20 remove_unused_statements(vyper_module)
21
22
23 def generate_public_variable_getters(vyper_module: vy_ast.Module) -> None:
24 """
25 Create getter functions for public variables.
26
27 Arguments
28 ---------
29 vyper_module : Module
30 Top-level Vyper AST node.
31 """
32
33 for node in vyper_module.get_children(vy_ast.VariableDecl, {"annotation.func.id": "public"}):
34 func_type = node._metadata["func_type"]
35 input_types, return_type = func_type.get_signature()
36 input_nodes = []
37
38 # use the annotation node as a base to build the input args and return type
39 # starting with `args[0]` to remove the surrounding `public()` call`
40 annotation = copy.copy(node.annotation.args[0])
41
42 # the base return statement is an `Attribute` node, e.g. `self.<var_name>`
43 # for each input type we wrap it in a `Subscript` to access a specific member
44 return_stmt: vy_ast.VyperNode = vy_ast.Attribute(
45 value=vy_ast.Name(id="self"), attr=func_type.name
46 )
47 return_stmt._metadata["type"] = node._metadata["type"]
48
49 for i, type_ in enumerate(input_types):
50 if not isinstance(annotation, vy_ast.Subscript):
51 # if we get here something has failed in type checking
52 raise CompilerPanic("Mismatch between node and input type while building getter")
53 if annotation.value.get("id") == "HashMap": # type: ignore
54 # for a HashMap, split the key/value types and use the key type as the next arg
55 arg, annotation = annotation.slice.value.elements # type: ignore
56 else:
57 # for other types, build an input arg node from the expected type
58 # and remove the outer `Subscript` from the annotation
59 arg = vy_ast.Name(id=type_._id)
60 annotation = annotation.value
61 input_nodes.append(vy_ast.arg(arg=f"arg{i}", annotation=arg))
62
63 # wrap the return statement in a `Subscript`
64 return_stmt = vy_ast.Subscript(
65 value=return_stmt, slice=vy_ast.Index(value=vy_ast.Name(id=f"arg{i}"))
66 )
67
68 # after iterating the input types, the remaining annotation node is our return type
69 return_node = annotation
70
71 # join everything together as a new `FunctionDef` node, annotate it
72 # with the type, and append it to the existing `Module` node
73 expanded = vy_ast.FunctionDef.from_node(
74 node.annotation,
75 name=func_type.name,
76 args=vy_ast.arguments(args=input_nodes, defaults=[]),
77 body=[vy_ast.Return(value=return_stmt)],
78 decorator_list=[vy_ast.Name(id="external"), vy_ast.Name(id="view")],
79 returns=return_node,
80 )
81 expanded._metadata["type"] = func_type
82 return_node.set_parent(expanded)
83 vyper_module.add_to_body(expanded)
84
85
86 def remove_unused_statements(vyper_module: vy_ast.Module) -> None:
87 """
88 Remove statement nodes that are unused after type checking.
89
90 Once type checking is complete, we can remove now-meaningless statements to
91 simplify the AST prior to IR generation.
92
93 Arguments
94 ---------
95 vyper_module : Module
96 Top-level Vyper AST node.
97 """
98
99 # constant declarations - values were substituted within the AST during folding
100 for node in vyper_module.get_children(vy_ast.VariableDecl, {"annotation.func.id": "constant"}):
101 vyper_module.remove_from_body(node)
102
103 # `implements: interface` statements - validated during type checking
104 for node in vyper_module.get_children(vy_ast.AnnAssign, {"target.id": "implements"}):
105 vyper_module.remove_from_body(node)
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vyper/ast/expansion.py b/vyper/ast/expansion.py
--- a/vyper/ast/expansion.py
+++ b/vyper/ast/expansion.py
@@ -53,6 +53,9 @@
if annotation.value.get("id") == "HashMap": # type: ignore
# for a HashMap, split the key/value types and use the key type as the next arg
arg, annotation = annotation.slice.value.elements # type: ignore
+ elif annotation.value.get("id") == "DynArray":
+ arg = vy_ast.Name(id=type_._id)
+ annotation = annotation.slice.value.elements[0] # type: ignore
else:
# for other types, build an input arg node from the expected type
# and remove the outer `Subscript` from the annotation
@@ -66,7 +69,7 @@
)
# after iterating the input types, the remaining annotation node is our return type
- return_node = annotation
+ return_node = copy.copy(annotation)
# join everything together as a new `FunctionDef` node, annotate it
# with the type, and append it to the existing `Module` node
| {"golden_diff": "diff --git a/vyper/ast/expansion.py b/vyper/ast/expansion.py\n--- a/vyper/ast/expansion.py\n+++ b/vyper/ast/expansion.py\n@@ -53,6 +53,9 @@\n if annotation.value.get(\"id\") == \"HashMap\": # type: ignore\n # for a HashMap, split the key/value types and use the key type as the next arg\n arg, annotation = annotation.slice.value.elements # type: ignore\n+ elif annotation.value.get(\"id\") == \"DynArray\":\n+ arg = vy_ast.Name(id=type_._id)\n+ annotation = annotation.slice.value.elements[0] # type: ignore\n else:\n # for other types, build an input arg node from the expected type\n # and remove the outer `Subscript` from the annotation\n@@ -66,7 +69,7 @@\n )\n \n # after iterating the input types, the remaining annotation node is our return type\n- return_node = annotation\n+ return_node = copy.copy(annotation)\n \n # join everything together as a new `FunctionDef` node, annotate it\n # with the type, and append it to the existing `Module` node\n", "issue": "can't declare variable `public` but can define custom getter\n### Version Information\r\n\r\n* vyper Version: 0.3.4+commit.f31f0ec\r\n* OS: osx\r\n* Python Version: 3.8.9\r\n\r\n### What's your issue about?\r\n\r\nThis code fails to compile:\r\n\r\n```\r\nslates : public(HashMap[bytes32, DynArray[address, 15]])\r\n```\r\n\r\nbut I can define my own getter for that type just fine:\r\n\r\n```\r\nslates : HashMap[bytes32, DynArray[address, 15]]\r\n\r\n@external\r\ndef slate(sid :bytes32) -> DynArray[address, 15]:\r\n return self.slates[sid]\r\n```\r\n\n", "before_files": [{"content": "import copy\n\nfrom vyper import ast as vy_ast\nfrom vyper.exceptions import CompilerPanic\n\n\ndef expand_annotated_ast(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Perform expansion / simplification operations on an annotated Vyper AST.\n\n This pass uses annotated type information to modify the AST, simplifying\n logic and expanding subtrees to reduce the compexity during codegen.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node that has been type-checked and annotated.\n \"\"\"\n generate_public_variable_getters(vyper_module)\n remove_unused_statements(vyper_module)\n\n\ndef generate_public_variable_getters(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Create getter functions for public variables.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node.\n \"\"\"\n\n for node in vyper_module.get_children(vy_ast.VariableDecl, {\"annotation.func.id\": \"public\"}):\n func_type = node._metadata[\"func_type\"]\n input_types, return_type = func_type.get_signature()\n input_nodes = []\n\n # use the annotation node as a base to build the input args and return type\n # starting with `args[0]` to remove the surrounding `public()` call`\n annotation = copy.copy(node.annotation.args[0])\n\n # the base return statement is an `Attribute` node, e.g. 
`self.<var_name>`\n # for each input type we wrap it in a `Subscript` to access a specific member\n return_stmt: vy_ast.VyperNode = vy_ast.Attribute(\n value=vy_ast.Name(id=\"self\"), attr=func_type.name\n )\n return_stmt._metadata[\"type\"] = node._metadata[\"type\"]\n\n for i, type_ in enumerate(input_types):\n if not isinstance(annotation, vy_ast.Subscript):\n # if we get here something has failed in type checking\n raise CompilerPanic(\"Mismatch between node and input type while building getter\")\n if annotation.value.get(\"id\") == \"HashMap\": # type: ignore\n # for a HashMap, split the key/value types and use the key type as the next arg\n arg, annotation = annotation.slice.value.elements # type: ignore\n else:\n # for other types, build an input arg node from the expected type\n # and remove the outer `Subscript` from the annotation\n arg = vy_ast.Name(id=type_._id)\n annotation = annotation.value\n input_nodes.append(vy_ast.arg(arg=f\"arg{i}\", annotation=arg))\n\n # wrap the return statement in a `Subscript`\n return_stmt = vy_ast.Subscript(\n value=return_stmt, slice=vy_ast.Index(value=vy_ast.Name(id=f\"arg{i}\"))\n )\n\n # after iterating the input types, the remaining annotation node is our return type\n return_node = annotation\n\n # join everything together as a new `FunctionDef` node, annotate it\n # with the type, and append it to the existing `Module` node\n expanded = vy_ast.FunctionDef.from_node(\n node.annotation,\n name=func_type.name,\n args=vy_ast.arguments(args=input_nodes, defaults=[]),\n body=[vy_ast.Return(value=return_stmt)],\n decorator_list=[vy_ast.Name(id=\"external\"), vy_ast.Name(id=\"view\")],\n returns=return_node,\n )\n expanded._metadata[\"type\"] = func_type\n return_node.set_parent(expanded)\n vyper_module.add_to_body(expanded)\n\n\ndef remove_unused_statements(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Remove statement nodes that are unused after type checking.\n\n Once type checking is complete, we can remove now-meaningless statements to\n simplify the AST prior to IR generation.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node.\n \"\"\"\n\n # constant declarations - values were substituted within the AST during folding\n for node in vyper_module.get_children(vy_ast.VariableDecl, {\"annotation.func.id\": \"constant\"}):\n vyper_module.remove_from_body(node)\n\n # `implements: interface` statements - validated during type checking\n for node in vyper_module.get_children(vy_ast.AnnAssign, {\"target.id\": \"implements\"}):\n vyper_module.remove_from_body(node)\n", "path": "vyper/ast/expansion.py"}], "after_files": [{"content": "import copy\n\nfrom vyper import ast as vy_ast\nfrom vyper.exceptions import CompilerPanic\n\n\ndef expand_annotated_ast(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Perform expansion / simplification operations on an annotated Vyper AST.\n\n This pass uses annotated type information to modify the AST, simplifying\n logic and expanding subtrees to reduce the compexity during codegen.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node that has been type-checked and annotated.\n \"\"\"\n generate_public_variable_getters(vyper_module)\n remove_unused_statements(vyper_module)\n\n\ndef generate_public_variable_getters(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Create getter functions for public variables.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node.\n \"\"\"\n\n for node in vyper_module.get_children(vy_ast.VariableDecl, 
{\"annotation.func.id\": \"public\"}):\n func_type = node._metadata[\"func_type\"]\n input_types, return_type = func_type.get_signature()\n input_nodes = []\n\n # use the annotation node as a base to build the input args and return type\n # starting with `args[0]` to remove the surrounding `public()` call`\n annotation = copy.copy(node.annotation.args[0])\n\n # the base return statement is an `Attribute` node, e.g. `self.<var_name>`\n # for each input type we wrap it in a `Subscript` to access a specific member\n return_stmt: vy_ast.VyperNode = vy_ast.Attribute(\n value=vy_ast.Name(id=\"self\"), attr=func_type.name\n )\n return_stmt._metadata[\"type\"] = node._metadata[\"type\"]\n\n for i, type_ in enumerate(input_types):\n if not isinstance(annotation, vy_ast.Subscript):\n # if we get here something has failed in type checking\n raise CompilerPanic(\"Mismatch between node and input type while building getter\")\n if annotation.value.get(\"id\") == \"HashMap\": # type: ignore\n # for a HashMap, split the key/value types and use the key type as the next arg\n arg, annotation = annotation.slice.value.elements # type: ignore\n elif annotation.value.get(\"id\") == \"DynArray\":\n arg = vy_ast.Name(id=type_._id)\n annotation = annotation.slice.value.elements[0] # type: ignore\n else:\n # for other types, build an input arg node from the expected type\n # and remove the outer `Subscript` from the annotation\n arg = vy_ast.Name(id=type_._id)\n annotation = annotation.value\n input_nodes.append(vy_ast.arg(arg=f\"arg{i}\", annotation=arg))\n\n # wrap the return statement in a `Subscript`\n return_stmt = vy_ast.Subscript(\n value=return_stmt, slice=vy_ast.Index(value=vy_ast.Name(id=f\"arg{i}\"))\n )\n\n # after iterating the input types, the remaining annotation node is our return type\n return_node = copy.copy(annotation)\n\n # join everything together as a new `FunctionDef` node, annotate it\n # with the type, and append it to the existing `Module` node\n expanded = vy_ast.FunctionDef.from_node(\n node.annotation,\n name=func_type.name,\n args=vy_ast.arguments(args=input_nodes, defaults=[]),\n body=[vy_ast.Return(value=return_stmt)],\n decorator_list=[vy_ast.Name(id=\"external\"), vy_ast.Name(id=\"view\")],\n returns=return_node,\n )\n expanded._metadata[\"type\"] = func_type\n return_node.set_parent(expanded)\n vyper_module.add_to_body(expanded)\n\n\ndef remove_unused_statements(vyper_module: vy_ast.Module) -> None:\n \"\"\"\n Remove statement nodes that are unused after type checking.\n\n Once type checking is complete, we can remove now-meaningless statements to\n simplify the AST prior to IR generation.\n\n Arguments\n ---------\n vyper_module : Module\n Top-level Vyper AST node.\n \"\"\"\n\n # constant declarations - values were substituted within the AST during folding\n for node in vyper_module.get_children(vy_ast.VariableDecl, {\"annotation.func.id\": \"constant\"}):\n vyper_module.remove_from_body(node)\n\n # `implements: interface` statements - validated during type checking\n for node in vyper_module.get_children(vy_ast.AnnAssign, {\"target.id\": \"implements\"}):\n vyper_module.remove_from_body(node)\n", "path": "vyper/ast/expansion.py"}]} | 1,577 | 267 |
gh_patches_debug_6357 | rasdani/github-patches | git_diff | dmlc__gluon-nlp-1219 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecation warning due to invalid escape sequences
## Description
Deprecation warning due to invalid escape sequences. Using raw strings or escaping them again helps in resolving this. Check https://github.com/asottile/pyupgrade/ for automatic fix of this.
## To Reproduce
```
find . -iname '*.py' | grep -Ev 'rdf4|doc|benchmark|tool' | xargs -P4 -I{} python3.8 -Wall -m py_compile {}
./ci/batch/submit-job.py:95: DeprecationWarning: invalid escape sequence \-
jobName = re.sub('[^A-Za-z0-9_\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ci/batch/submit-job.py`
Content:
```
1 import argparse
2 import random
3 import re
4 import sys
5 import time
6 from datetime import datetime
7
8 import boto3
9 from botocore.compat import total_seconds
10
11 parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
12
13 parser.add_argument('--profile', help='profile name of aws account.', type=str,
14 default=None)
15 parser.add_argument('--region', help='Default region when creating new connections', type=str,
16 default=None)
17 parser.add_argument('--name', help='name of the job', type=str, default='dummy')
18 parser.add_argument('--job-queue', help='name of the job queue to submit this job', type=str,
19 default='gluon-nlp-jobs')
20 parser.add_argument('--job-definition', help='name of the job job definition', type=str,
21 default='gluon-nlp-jobs:8')
22 parser.add_argument('--source-ref',
23 help='ref in GluonNLP main github. e.g. master, refs/pull/500/head',
24 type=str, default='master')
25 parser.add_argument('--work-dir',
26 help='working directory inside the repo. e.g. scripts/sentiment_analysis',
27 type=str, default='scripts/bert')
28 parser.add_argument('--saved-output',
29 help='output to be saved, relative to working directory. '
30 'it can be either a single file or a directory',
31 type=str, default='.')
32 parser.add_argument('--save-path',
33 help='s3 path where files are saved.',
34 type=str, default='batch/temp/{}'.format(datetime.now().isoformat()))
35 parser.add_argument('--conda-env',
36 help='conda environment preset to use.',
37 type=str, default='gpu/py3')
38 parser.add_argument('--command', help='command to run', type=str,
39 default='git rev-parse HEAD | tee stdout.log')
40 parser.add_argument('--remote',
41 help='git repo address. https://github.com/dmlc/gluon-nlp',
42 type=str, default="https://github.com/dmlc/gluon-nlp")
43 parser.add_argument('--wait', help='block wait until the job completes. '
44 'Non-zero exit code if job fails.', action='store_true')
45 parser.add_argument('--timeout', help='job timeout in seconds', default=None, type=int)
46
47 args = parser.parse_args()
48
49 session = boto3.Session(profile_name=args.profile, region_name=args.region)
50 batch, cloudwatch = [session.client(service_name=sn) for sn in ['batch', 'logs']]
51
52 def printLogs(logGroupName, logStreamName, startTime):
53 kwargs = {'logGroupName': logGroupName,
54 'logStreamName': logStreamName,
55 'startTime': startTime,
56 'startFromHead': True}
57
58 lastTimestamp = 0
59 while True:
60 logEvents = cloudwatch.get_log_events(**kwargs)
61
62 for event in logEvents['events']:
63 lastTimestamp = event['timestamp']
64 timestamp = datetime.utcfromtimestamp(lastTimestamp / 1000.0).isoformat()
65 print('[{}] {}'.format((timestamp + '.000')[:23] + 'Z', event['message']))
66
67 nextToken = logEvents['nextForwardToken']
68 if nextToken and kwargs.get('nextToken') != nextToken:
69 kwargs['nextToken'] = nextToken
70 else:
71 break
72 return lastTimestamp
73
74
75 def getLogStream(logGroupName, jobName, jobId):
76 response = cloudwatch.describe_log_streams(
77 logGroupName=logGroupName,
78 logStreamNamePrefix=jobName + '/' + jobId
79 )
80 logStreams = response['logStreams']
81 if not logStreams:
82 return ''
83 else:
84 return logStreams[0]['logStreamName']
85
86 def nowInMillis():
87 endTime = long(total_seconds(datetime.utcnow() - datetime(1970, 1, 1))) * 1000
88 return endTime
89
90
91 def main():
92 spin = ['-', '/', '|', '\\', '-', '/', '|', '\\']
93 logGroupName = '/aws/batch/job'
94
95 jobName = re.sub('[^A-Za-z0-9_\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules
96 jobQueue = args.job_queue
97 jobDefinition = args.job_definition
98 command = args.command.split()
99 wait = args.wait
100
101 parameters={
102 'SOURCE_REF': args.source_ref,
103 'WORK_DIR': args.work_dir,
104 'SAVED_OUTPUT': args.saved_output,
105 'SAVE_PATH': args.save_path,
106 'CONDA_ENV': args.conda_env,
107 'COMMAND': args.command,
108 'REMOTE': args.remote
109 }
110 kwargs = dict(
111 jobName=jobName,
112 jobQueue=jobQueue,
113 jobDefinition=jobDefinition,
114 parameters=parameters,
115 )
116 if args.timeout is not None:
117 kwargs['timeout'] = {'attemptDurationSeconds': args.timeout}
118 submitJobResponse = batch.submit_job(**kwargs)
119
120 jobId = submitJobResponse['jobId']
121 print('Submitted job [{} - {}] to the job queue [{}]'.format(jobName, jobId, jobQueue))
122
123 spinner = 0
124 running = False
125 status_set = set()
126 startTime = 0
127
128 while wait:
129 time.sleep(random.randint(5, 10))
130 describeJobsResponse = batch.describe_jobs(jobs=[jobId])
131 status = describeJobsResponse['jobs'][0]['status']
132 if status == 'SUCCEEDED' or status == 'FAILED':
133 print('=' * 80)
134 print('Job [{} - {}] {}'.format(jobName, jobId, status))
135
136 sys.exit(status == 'FAILED')
137
138 elif status == 'RUNNING':
139 logStreamName = getLogStream(logGroupName, jobName, jobId)
140 if not running:
141 running = True
142 print('\rJob [{} - {}] is RUNNING.'.format(jobName, jobId))
143 if logStreamName:
144 print('Output [{}]:\n {}'.format(logStreamName, '=' * 80))
145 if logStreamName:
146 startTime = printLogs(logGroupName, logStreamName, startTime) + 1
147 elif status not in status_set:
148 status_set.add(status)
149 print('\rJob [%s - %s] is %-9s... %s' % (jobName, jobId, status, spin[spinner % len(spin)]),)
150 sys.stdout.flush()
151 spinner += 1
152
153 if __name__ == '__main__':
154 main()
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ci/batch/submit-job.py b/ci/batch/submit-job.py
--- a/ci/batch/submit-job.py
+++ b/ci/batch/submit-job.py
@@ -92,7 +92,7 @@
spin = ['-', '/', '|', '\\', '-', '/', '|', '\\']
logGroupName = '/aws/batch/job'
- jobName = re.sub('[^A-Za-z0-9_\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules
+ jobName = re.sub(r'[^A-Za-z0-9_\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules
jobQueue = args.job_queue
jobDefinition = args.job_definition
command = args.command.split()
| {"golden_diff": "diff --git a/ci/batch/submit-job.py b/ci/batch/submit-job.py\n--- a/ci/batch/submit-job.py\n+++ b/ci/batch/submit-job.py\n@@ -92,7 +92,7 @@\n spin = ['-', '/', '|', '\\\\', '-', '/', '|', '\\\\']\n logGroupName = '/aws/batch/job'\n \n- jobName = re.sub('[^A-Za-z0-9_\\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules\n+ jobName = re.sub(r'[^A-Za-z0-9_\\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules\n jobQueue = args.job_queue\n jobDefinition = args.job_definition\n command = args.command.split()\n", "issue": "Deprecation warning due to invalid escape sequences\n## Description\r\n\r\nDeprecation warning due to invalid escape sequences. Using raw strings or escaping them again helps in resolving this. Check https://github.com/asottile/pyupgrade/ for automatic fix of this.\r\n\r\n## To Reproduce\r\n\r\n```\r\nfind . -iname '*.py' | grep -Ev 'rdf4|doc|benchmark|tool' | xargs -P4 -I{} python3.8 -Wall -m py_compile {}\r\n./ci/batch/submit-job.py:95: DeprecationWarning: invalid escape sequence \\-\r\n jobName = re.sub('[^A-Za-z0-9_\\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules\r\n```\n", "before_files": [{"content": "import argparse\nimport random\nimport re\nimport sys\nimport time\nfrom datetime import datetime\n\nimport boto3\nfrom botocore.compat import total_seconds\n\nparser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\nparser.add_argument('--profile', help='profile name of aws account.', type=str,\n default=None)\nparser.add_argument('--region', help='Default region when creating new connections', type=str,\n default=None)\nparser.add_argument('--name', help='name of the job', type=str, default='dummy')\nparser.add_argument('--job-queue', help='name of the job queue to submit this job', type=str,\n default='gluon-nlp-jobs')\nparser.add_argument('--job-definition', help='name of the job job definition', type=str,\n default='gluon-nlp-jobs:8')\nparser.add_argument('--source-ref',\n help='ref in GluonNLP main github. e.g. master, refs/pull/500/head',\n type=str, default='master')\nparser.add_argument('--work-dir',\n help='working directory inside the repo. e.g. scripts/sentiment_analysis',\n type=str, default='scripts/bert')\nparser.add_argument('--saved-output',\n help='output to be saved, relative to working directory. '\n 'it can be either a single file or a directory',\n type=str, default='.')\nparser.add_argument('--save-path',\n help='s3 path where files are saved.',\n type=str, default='batch/temp/{}'.format(datetime.now().isoformat()))\nparser.add_argument('--conda-env',\n help='conda environment preset to use.',\n type=str, default='gpu/py3')\nparser.add_argument('--command', help='command to run', type=str,\n default='git rev-parse HEAD | tee stdout.log')\nparser.add_argument('--remote',\n help='git repo address. https://github.com/dmlc/gluon-nlp',\n type=str, default=\"https://github.com/dmlc/gluon-nlp\")\nparser.add_argument('--wait', help='block wait until the job completes. 
'\n 'Non-zero exit code if job fails.', action='store_true')\nparser.add_argument('--timeout', help='job timeout in seconds', default=None, type=int)\n\nargs = parser.parse_args()\n\nsession = boto3.Session(profile_name=args.profile, region_name=args.region)\nbatch, cloudwatch = [session.client(service_name=sn) for sn in ['batch', 'logs']]\n\ndef printLogs(logGroupName, logStreamName, startTime):\n kwargs = {'logGroupName': logGroupName,\n 'logStreamName': logStreamName,\n 'startTime': startTime,\n 'startFromHead': True}\n\n lastTimestamp = 0\n while True:\n logEvents = cloudwatch.get_log_events(**kwargs)\n\n for event in logEvents['events']:\n lastTimestamp = event['timestamp']\n timestamp = datetime.utcfromtimestamp(lastTimestamp / 1000.0).isoformat()\n print('[{}] {}'.format((timestamp + '.000')[:23] + 'Z', event['message']))\n\n nextToken = logEvents['nextForwardToken']\n if nextToken and kwargs.get('nextToken') != nextToken:\n kwargs['nextToken'] = nextToken\n else:\n break\n return lastTimestamp\n\n\ndef getLogStream(logGroupName, jobName, jobId):\n response = cloudwatch.describe_log_streams(\n logGroupName=logGroupName,\n logStreamNamePrefix=jobName + '/' + jobId\n )\n logStreams = response['logStreams']\n if not logStreams:\n return ''\n else:\n return logStreams[0]['logStreamName']\n\ndef nowInMillis():\n endTime = long(total_seconds(datetime.utcnow() - datetime(1970, 1, 1))) * 1000\n return endTime\n\n\ndef main():\n spin = ['-', '/', '|', '\\\\', '-', '/', '|', '\\\\']\n logGroupName = '/aws/batch/job'\n\n jobName = re.sub('[^A-Za-z0-9_\\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules\n jobQueue = args.job_queue\n jobDefinition = args.job_definition\n command = args.command.split()\n wait = args.wait\n\n parameters={\n 'SOURCE_REF': args.source_ref,\n 'WORK_DIR': args.work_dir,\n 'SAVED_OUTPUT': args.saved_output,\n 'SAVE_PATH': args.save_path,\n 'CONDA_ENV': args.conda_env,\n 'COMMAND': args.command,\n 'REMOTE': args.remote\n }\n kwargs = dict(\n jobName=jobName,\n jobQueue=jobQueue,\n jobDefinition=jobDefinition,\n parameters=parameters,\n )\n if args.timeout is not None:\n kwargs['timeout'] = {'attemptDurationSeconds': args.timeout}\n submitJobResponse = batch.submit_job(**kwargs)\n\n jobId = submitJobResponse['jobId']\n print('Submitted job [{} - {}] to the job queue [{}]'.format(jobName, jobId, jobQueue))\n\n spinner = 0\n running = False\n status_set = set()\n startTime = 0\n\n while wait:\n time.sleep(random.randint(5, 10))\n describeJobsResponse = batch.describe_jobs(jobs=[jobId])\n status = describeJobsResponse['jobs'][0]['status']\n if status == 'SUCCEEDED' or status == 'FAILED':\n print('=' * 80)\n print('Job [{} - {}] {}'.format(jobName, jobId, status))\n\n sys.exit(status == 'FAILED')\n\n elif status == 'RUNNING':\n logStreamName = getLogStream(logGroupName, jobName, jobId)\n if not running:\n running = True\n print('\\rJob [{} - {}] is RUNNING.'.format(jobName, jobId))\n if logStreamName:\n print('Output [{}]:\\n {}'.format(logStreamName, '=' * 80))\n if logStreamName:\n startTime = printLogs(logGroupName, logStreamName, startTime) + 1\n elif status not in status_set:\n status_set.add(status)\n print('\\rJob [%s - %s] is %-9s... 
%s' % (jobName, jobId, status, spin[spinner % len(spin)]),)\n sys.stdout.flush()\n spinner += 1\n\nif __name__ == '__main__':\n main()\n", "path": "ci/batch/submit-job.py"}], "after_files": [{"content": "import argparse\nimport random\nimport re\nimport sys\nimport time\nfrom datetime import datetime\n\nimport boto3\nfrom botocore.compat import total_seconds\n\nparser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\nparser.add_argument('--profile', help='profile name of aws account.', type=str,\n default=None)\nparser.add_argument('--region', help='Default region when creating new connections', type=str,\n default=None)\nparser.add_argument('--name', help='name of the job', type=str, default='dummy')\nparser.add_argument('--job-queue', help='name of the job queue to submit this job', type=str,\n default='gluon-nlp-jobs')\nparser.add_argument('--job-definition', help='name of the job job definition', type=str,\n default='gluon-nlp-jobs:8')\nparser.add_argument('--source-ref',\n help='ref in GluonNLP main github. e.g. master, refs/pull/500/head',\n type=str, default='master')\nparser.add_argument('--work-dir',\n help='working directory inside the repo. e.g. scripts/sentiment_analysis',\n type=str, default='scripts/bert')\nparser.add_argument('--saved-output',\n help='output to be saved, relative to working directory. '\n 'it can be either a single file or a directory',\n type=str, default='.')\nparser.add_argument('--save-path',\n help='s3 path where files are saved.',\n type=str, default='batch/temp/{}'.format(datetime.now().isoformat()))\nparser.add_argument('--conda-env',\n help='conda environment preset to use.',\n type=str, default='gpu/py3')\nparser.add_argument('--command', help='command to run', type=str,\n default='git rev-parse HEAD | tee stdout.log')\nparser.add_argument('--remote',\n help='git repo address. https://github.com/dmlc/gluon-nlp',\n type=str, default=\"https://github.com/dmlc/gluon-nlp\")\nparser.add_argument('--wait', help='block wait until the job completes. 
'\n 'Non-zero exit code if job fails.', action='store_true')\nparser.add_argument('--timeout', help='job timeout in seconds', default=None, type=int)\n\nargs = parser.parse_args()\n\nsession = boto3.Session(profile_name=args.profile, region_name=args.region)\nbatch, cloudwatch = [session.client(service_name=sn) for sn in ['batch', 'logs']]\n\ndef printLogs(logGroupName, logStreamName, startTime):\n kwargs = {'logGroupName': logGroupName,\n 'logStreamName': logStreamName,\n 'startTime': startTime,\n 'startFromHead': True}\n\n lastTimestamp = 0\n while True:\n logEvents = cloudwatch.get_log_events(**kwargs)\n\n for event in logEvents['events']:\n lastTimestamp = event['timestamp']\n timestamp = datetime.utcfromtimestamp(lastTimestamp / 1000.0).isoformat()\n print('[{}] {}'.format((timestamp + '.000')[:23] + 'Z', event['message']))\n\n nextToken = logEvents['nextForwardToken']\n if nextToken and kwargs.get('nextToken') != nextToken:\n kwargs['nextToken'] = nextToken\n else:\n break\n return lastTimestamp\n\n\ndef getLogStream(logGroupName, jobName, jobId):\n response = cloudwatch.describe_log_streams(\n logGroupName=logGroupName,\n logStreamNamePrefix=jobName + '/' + jobId\n )\n logStreams = response['logStreams']\n if not logStreams:\n return ''\n else:\n return logStreams[0]['logStreamName']\n\ndef nowInMillis():\n endTime = long(total_seconds(datetime.utcnow() - datetime(1970, 1, 1))) * 1000\n return endTime\n\n\ndef main():\n spin = ['-', '/', '|', '\\\\', '-', '/', '|', '\\\\']\n logGroupName = '/aws/batch/job'\n\n jobName = re.sub(r'[^A-Za-z0-9_\\-]', '', args.name)[:128] # Enforce AWS Batch jobName rules\n jobQueue = args.job_queue\n jobDefinition = args.job_definition\n command = args.command.split()\n wait = args.wait\n\n parameters={\n 'SOURCE_REF': args.source_ref,\n 'WORK_DIR': args.work_dir,\n 'SAVED_OUTPUT': args.saved_output,\n 'SAVE_PATH': args.save_path,\n 'CONDA_ENV': args.conda_env,\n 'COMMAND': args.command,\n 'REMOTE': args.remote\n }\n kwargs = dict(\n jobName=jobName,\n jobQueue=jobQueue,\n jobDefinition=jobDefinition,\n parameters=parameters,\n )\n if args.timeout is not None:\n kwargs['timeout'] = {'attemptDurationSeconds': args.timeout}\n submitJobResponse = batch.submit_job(**kwargs)\n\n jobId = submitJobResponse['jobId']\n print('Submitted job [{} - {}] to the job queue [{}]'.format(jobName, jobId, jobQueue))\n\n spinner = 0\n running = False\n status_set = set()\n startTime = 0\n\n while wait:\n time.sleep(random.randint(5, 10))\n describeJobsResponse = batch.describe_jobs(jobs=[jobId])\n status = describeJobsResponse['jobs'][0]['status']\n if status == 'SUCCEEDED' or status == 'FAILED':\n print('=' * 80)\n print('Job [{} - {}] {}'.format(jobName, jobId, status))\n\n sys.exit(status == 'FAILED')\n\n elif status == 'RUNNING':\n logStreamName = getLogStream(logGroupName, jobName, jobId)\n if not running:\n running = True\n print('\\rJob [{} - {}] is RUNNING.'.format(jobName, jobId))\n if logStreamName:\n print('Output [{}]:\\n {}'.format(logStreamName, '=' * 80))\n if logStreamName:\n startTime = printLogs(logGroupName, logStreamName, startTime) + 1\n elif status not in status_set:\n status_set.add(status)\n print('\\rJob [%s - %s] is %-9s... %s' % (jobName, jobId, status, spin[spinner % len(spin)]),)\n sys.stdout.flush()\n spinner += 1\n\nif __name__ == '__main__':\n main()\n", "path": "ci/batch/submit-job.py"}]} | 2,165 | 179 |
gh_patches_debug_27421 | rasdani/github-patches | git_diff | rdmorganiser__rdmo-585 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import from parent project list doesn't check permissions
### Description / Beschreibung
The list of projects in the import from parent project selection displays all parent projects independent of the user permissions.
### Expected behaviour / Erwartetes Verhalten
Import from parent project should display only projects where users have permissions to import data.
### Steps to reproduce / Schritte zum Reproduzieren
1. Load the dev fixtures in the latest RDMO Version: `python manage.py loaddata ../rdmo/testing/fixtures/*`
3. Log in to manager account.
4. Open Project "Child 1" or "Child 11"
5. List contains "Parent" although user does not have permissions.
6. When choosing "Parent" and trying to get to the next step, RDMO displays "Forbidden", but I think it would be nicer if the option would not be present in the list.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rdmo/projects/views/project.py`
Content:
```
1 import logging
2
3 from django.contrib.auth.mixins import LoginRequiredMixin
4 from django.core.exceptions import PermissionDenied
5 from django.db import models
6 from django.db.models import F, OuterRef, Subquery
7 from django.forms import Form
8 from django.http import Http404
9 from django.shortcuts import get_object_or_404, redirect
10 from django.urls import reverse_lazy
11 from django.utils.decorators import method_decorator
12 from django.utils.translation import gettext_lazy as _
13 from django.views.decorators.csrf import ensure_csrf_cookie
14 from django.views.generic import DeleteView, DetailView, TemplateView
15 from django.views.generic.edit import FormMixin
16 from django_filters.views import FilterView
17
18 from rdmo.accounts.utils import is_site_manager
19 from rdmo.core.plugins import get_plugin, get_plugins
20 from rdmo.core.views import ObjectPermissionMixin, RedirectViewMixin
21 from rdmo.questions.models import Catalog
22 from rdmo.questions.utils import get_widgets
23 from rdmo.tasks.models import Task
24 from rdmo.views.models import View
25
26 from ..filters import ProjectFilter
27 from ..models import Integration, Invite, Membership, Project, Value
28
29 logger = logging.getLogger(__name__)
30
31
32 class ProjectsView(LoginRequiredMixin, FilterView):
33 template_name = 'projects/projects.html'
34 context_object_name = 'projects'
35 paginate_by = 20
36 filterset_class = ProjectFilter
37
38 def get_queryset(self):
39 # prepare projects queryset for this user
40 queryset = Project.objects.filter(user=self.request.user)
41 for instance in queryset:
42 queryset |= instance.get_descendants()
43 queryset = queryset.distinct()
44
45 # prepare subquery for role
46 membership_subquery = models.Subquery(
47 Membership.objects.filter(project=models.OuterRef('pk'), user=self.request.user).values('role')
48 )
49 queryset = queryset.annotate(role=membership_subquery)
50
51 # prepare subquery for last_changed
52 last_changed_subquery = models.Subquery(
53 Value.objects.filter(project=models.OuterRef('pk')).order_by('-updated').values('updated')[:1]
54 )
55 queryset = queryset.annotate(last_changed=models.functions.Greatest('updated', last_changed_subquery))
56
57 # order by last changed
58 queryset = queryset.order_by('-last_changed')
59
60 return queryset
61
62 def get_context_data(self, **kwargs):
63 context = super(ProjectsView, self).get_context_data(**kwargs)
64 context['invites'] = Invite.objects.filter(user=self.request.user)
65 context['is_site_manager'] = is_site_manager(self.request.user)
66 return context
67
68
69 class SiteProjectsView(LoginRequiredMixin, FilterView):
70 template_name = 'projects/site_projects.html'
71 context_object_name = 'projects'
72 paginate_by = 20
73 filterset_class = ProjectFilter
74 model = Project
75
76 def get_queryset(self):
77 if is_site_manager(self.request.user):
78 # prepare projects queryset for the site manager
79 queryset = Project.objects.filter_current_site()
80
81 # prepare subquery for last_changed
82 last_changed_subquery = models.Subquery(
83 Value.objects.filter(project=models.OuterRef('pk')).order_by('-updated').values('updated')[:1]
84 )
85 queryset = queryset.annotate(last_changed=models.functions.Greatest('updated', last_changed_subquery))
86
87 return queryset
88 else:
89 raise PermissionDenied()
90
91
92 class ProjectDetailView(ObjectPermissionMixin, DetailView):
93 model = Project
94 queryset = Project.objects.prefetch_related(
95 'issues',
96 'issues__task',
97 'issues__task__conditions',
98 'issues__task__conditions__source',
99 'issues__task__conditions__target_option',
100 'tasks',
101 'views',
102 'values'
103 )
104 permission_required = 'projects.view_project_object'
105
106 def get_context_data(self, **kwargs):
107 context = super(ProjectDetailView, self).get_context_data(**kwargs)
108 project = context['project']
109 ancestors = project.get_ancestors(include_self=True)
110 values = project.values.filter(snapshot=None).select_related('attribute', 'option')
111 highest = Membership.objects.filter(project__in=ancestors, user_id=OuterRef('user_id')).order_by('-project__level')
112 memberships = Membership.objects.filter(project__in=ancestors) \
113 .annotate(highest=Subquery(highest.values('project__level')[:1])) \
114 .filter(highest=F('project__level')) \
115 .select_related('user')
116
117 integrations = Integration.objects.filter(project__in=ancestors)
118 context['catalogs'] = Catalog.objects.filter_current_site() \
119 .filter_group(self.request.user) \
120 .filter_availability(self.request.user)
121 context['tasks_available'] = Task.objects.filter_current_site() \
122 .filter_catalog(self.object.catalog) \
123 .filter_group(self.request.user) \
124 .filter_availability(self.request.user).exists()
125 context['views_available'] = View.objects.filter_current_site() \
126 .filter_catalog(self.object.catalog) \
127 .filter_group(self.request.user) \
128 .filter_availability(self.request.user).exists()
129 context['memberships'] = memberships.order_by('user__last_name', '-project__level')
130 context['integrations'] = integrations.order_by('provider_key', '-project__level')
131 context['providers'] = get_plugins('PROJECT_ISSUE_PROVIDERS')
132 context['issues'] = [issue for issue in project.issues.all() if issue.resolve(values)]
133 context['snapshots'] = project.snapshots.all()
134 context['invites'] = project.invites.all()
135 context['membership'] = Membership.objects.filter(project=project, user=self.request.user).first()
136
137 return context
138
139
140 class ProjectDeleteView(ObjectPermissionMixin, RedirectViewMixin, DeleteView):
141 model = Project
142 queryset = Project.objects.all()
143 success_url = reverse_lazy('projects')
144 permission_required = 'projects.delete_project_object'
145
146
147 class ProjectJoinView(LoginRequiredMixin, RedirectViewMixin, TemplateView):
148 template_name = 'core/error.html'
149
150 def get(self, request, token):
151 try:
152 invite = Invite.objects.get(token=token)
153
154 if invite.is_expired:
155 error = _('Sorry, your invitation has been expired.')
156 invite.delete()
157 elif invite.user and invite.user != request.user:
158 error = _('Sorry, but this invitation is for the user "%s".' % invite.user)
159 elif Membership.objects.filter(project=invite.project, user=request.user).exists():
160 invite.delete()
161 return redirect(invite.project.get_absolute_url())
162 else:
163 Membership.objects.create(
164 project=invite.project,
165 user=request.user,
166 role=invite.role
167 )
168 invite.delete()
169 return redirect(invite.project.get_absolute_url())
170
171 except Invite.DoesNotExist:
172 error = _('Sorry, the invitation link is not valid.')
173
174 return self.render_to_response({
175 'title': _('Error'),
176 'errors': [error]
177 })
178
179
180 class ProjectCancelView(LoginRequiredMixin, RedirectViewMixin, TemplateView):
181 template_name = 'core/error.html'
182 success_url = reverse_lazy('projects')
183
184 def get(self, request, token=None):
185 invite = get_object_or_404(Invite, token=token)
186 if invite.user in [None, request.user]:
187 invite.delete()
188
189 return redirect(self.success_url)
190
191
192 class ProjectLeaveView(ObjectPermissionMixin, RedirectViewMixin, FormMixin, DetailView):
193 model = Project
194 form_class = Form
195 queryset = Project.objects.all()
196 success_url = reverse_lazy('projects')
197 permission_required = 'projects.leave_project_object'
198 template_name = 'projects/project_confirm_leave.html'
199
200 def post(self, request, *args, **kwargs):
201 form = self.get_form()
202 if form.is_valid() and 'cancel' not in request.POST:
203 membership = Membership.objects.filter(project=self.get_object()).get(user=request.user)
204 if not membership.is_last_owner:
205 membership.delete()
206
207 return redirect(self.success_url)
208
209
210 class ProjectExportView(ObjectPermissionMixin, DetailView):
211 model = Project
212 queryset = Project.objects.all()
213 permission_required = 'projects.export_project_object'
214
215 def get_export_plugin(self):
216 export_plugin = get_plugin('PROJECT_EXPORTS', self.kwargs.get('format'))
217 if export_plugin is None:
218 raise Http404
219
220 export_plugin.request = self.request
221 export_plugin.project = self.object
222
223 return export_plugin
224
225 def get(self, request, *args, **kwargs):
226 self.object = self.get_object()
227 return self.get_export_plugin().render()
228
229 def post(self, request, *args, **kwargs):
230 self.object = self.get_object()
231 return self.get_export_plugin().submit()
232
233
234 class ProjectQuestionsView(ObjectPermissionMixin, DetailView):
235 model = Project
236 queryset = Project.objects.all()
237 permission_required = 'projects.view_project_object'
238 template_name = 'projects/project_questions.html'
239
240 @method_decorator(ensure_csrf_cookie)
241 def get(self, request, *args, **kwargs):
242 self.object = self.get_object()
243
244 if self.object.catalog is None:
245 return redirect('project_error', pk=self.object.pk)
246 else:
247 context = self.get_context_data(object=self.object)
248 context['widgets'] = get_widgets()
249 return self.render_to_response(context)
250
251
252 class ProjectErrorView(ObjectPermissionMixin, DetailView):
253 model = Project
254 queryset = Project.objects.all()
255 permission_required = 'projects.view_project_object'
256 template_name = 'projects/project_error.html'
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rdmo/projects/views/project.py b/rdmo/projects/views/project.py
--- a/rdmo/projects/views/project.py
+++ b/rdmo/projects/views/project.py
@@ -126,6 +126,11 @@
.filter_catalog(self.object.catalog) \
.filter_group(self.request.user) \
.filter_availability(self.request.user).exists()
+ ancestors_import = []
+ for instance in ancestors.exclude(id=project.id):
+ if self.request.user.has_perm('projects.view_project_object', instance):
+ ancestors_import.append(instance)
+ context['ancestors_import'] = ancestors_import
context['memberships'] = memberships.order_by('user__last_name', '-project__level')
context['integrations'] = integrations.order_by('provider_key', '-project__level')
context['providers'] = get_plugins('PROJECT_ISSUE_PROVIDERS')
@@ -133,7 +138,6 @@
context['snapshots'] = project.snapshots.all()
context['invites'] = project.invites.all()
context['membership'] = Membership.objects.filter(project=project, user=self.request.user).first()
-
return context
| {"golden_diff": "diff --git a/rdmo/projects/views/project.py b/rdmo/projects/views/project.py\n--- a/rdmo/projects/views/project.py\n+++ b/rdmo/projects/views/project.py\n@@ -126,6 +126,11 @@\n .filter_catalog(self.object.catalog) \\\n .filter_group(self.request.user) \\\n .filter_availability(self.request.user).exists()\n+ ancestors_import = []\n+ for instance in ancestors.exclude(id=project.id):\n+ if self.request.user.has_perm('projects.view_project_object', instance):\n+ ancestors_import.append(instance)\n+ context['ancestors_import'] = ancestors_import\n context['memberships'] = memberships.order_by('user__last_name', '-project__level')\n context['integrations'] = integrations.order_by('provider_key', '-project__level')\n context['providers'] = get_plugins('PROJECT_ISSUE_PROVIDERS')\n@@ -133,7 +138,6 @@\n context['snapshots'] = project.snapshots.all()\n context['invites'] = project.invites.all()\n context['membership'] = Membership.objects.filter(project=project, user=self.request.user).first()\n-\n return context\n", "issue": "Import from parent project list doesn't check permissions\n### Description / Beschreibung\r\n\r\nThe list of projects in the import from parent project selection displays all parent projects independent of the user permissions.\r\n\r\n### Expected behaviour / Erwartetes Verhalten\r\n\r\nImport from parent project should display only projects where users have permissions to import data.\r\n\r\n### Steps to reproduce / Schritte zum Reproduzieren\r\n\r\n1. Load the dev fixtures in the latest RDMO Version: `python manage.py loaddata ../rdmo/testing/fixtures/*`\r\n3. Log in to manager account.\r\n4. Open Project \"Child 1\" or \"Child 11\"\r\n5. List contains \"Parent\" although user does not have permissions.\r\n6. When chosing \"Parent\" and trying to get to next step, RDMO displays \"Forbidden\", but I think it would be nicer if the option would not be present in the list.\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.db import models\nfrom django.db.models import F, OuterRef, Subquery\nfrom django.forms import Form\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.urls import reverse_lazy\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.decorators.csrf import ensure_csrf_cookie\nfrom django.views.generic import DeleteView, DetailView, TemplateView\nfrom django.views.generic.edit import FormMixin\nfrom django_filters.views import FilterView\n\nfrom rdmo.accounts.utils import is_site_manager\nfrom rdmo.core.plugins import get_plugin, get_plugins\nfrom rdmo.core.views import ObjectPermissionMixin, RedirectViewMixin\nfrom rdmo.questions.models import Catalog\nfrom rdmo.questions.utils import get_widgets\nfrom rdmo.tasks.models import Task\nfrom rdmo.views.models import View\n\nfrom ..filters import ProjectFilter\nfrom ..models import Integration, Invite, Membership, Project, Value\n\nlogger = logging.getLogger(__name__)\n\n\nclass ProjectsView(LoginRequiredMixin, FilterView):\n template_name = 'projects/projects.html'\n context_object_name = 'projects'\n paginate_by = 20\n filterset_class = ProjectFilter\n\n def get_queryset(self):\n # prepare projects queryset for this user\n queryset = Project.objects.filter(user=self.request.user)\n for instance in queryset:\n queryset |= 
instance.get_descendants()\n queryset = queryset.distinct()\n\n # prepare subquery for role\n membership_subquery = models.Subquery(\n Membership.objects.filter(project=models.OuterRef('pk'), user=self.request.user).values('role')\n )\n queryset = queryset.annotate(role=membership_subquery)\n\n # prepare subquery for last_changed\n last_changed_subquery = models.Subquery(\n Value.objects.filter(project=models.OuterRef('pk')).order_by('-updated').values('updated')[:1]\n )\n queryset = queryset.annotate(last_changed=models.functions.Greatest('updated', last_changed_subquery))\n\n # order by last changed\n queryset = queryset.order_by('-last_changed')\n\n return queryset\n\n def get_context_data(self, **kwargs):\n context = super(ProjectsView, self).get_context_data(**kwargs)\n context['invites'] = Invite.objects.filter(user=self.request.user)\n context['is_site_manager'] = is_site_manager(self.request.user)\n return context\n\n\nclass SiteProjectsView(LoginRequiredMixin, FilterView):\n template_name = 'projects/site_projects.html'\n context_object_name = 'projects'\n paginate_by = 20\n filterset_class = ProjectFilter\n model = Project\n\n def get_queryset(self):\n if is_site_manager(self.request.user):\n # prepare projects queryset for the site manager\n queryset = Project.objects.filter_current_site()\n\n # prepare subquery for last_changed\n last_changed_subquery = models.Subquery(\n Value.objects.filter(project=models.OuterRef('pk')).order_by('-updated').values('updated')[:1]\n )\n queryset = queryset.annotate(last_changed=models.functions.Greatest('updated', last_changed_subquery))\n\n return queryset\n else:\n raise PermissionDenied()\n\n\nclass ProjectDetailView(ObjectPermissionMixin, DetailView):\n model = Project\n queryset = Project.objects.prefetch_related(\n 'issues',\n 'issues__task',\n 'issues__task__conditions',\n 'issues__task__conditions__source',\n 'issues__task__conditions__target_option',\n 'tasks',\n 'views',\n 'values'\n )\n permission_required = 'projects.view_project_object'\n\n def get_context_data(self, **kwargs):\n context = super(ProjectDetailView, self).get_context_data(**kwargs)\n project = context['project']\n ancestors = project.get_ancestors(include_self=True)\n values = project.values.filter(snapshot=None).select_related('attribute', 'option')\n highest = Membership.objects.filter(project__in=ancestors, user_id=OuterRef('user_id')).order_by('-project__level')\n memberships = Membership.objects.filter(project__in=ancestors) \\\n .annotate(highest=Subquery(highest.values('project__level')[:1])) \\\n .filter(highest=F('project__level')) \\\n .select_related('user')\n\n integrations = Integration.objects.filter(project__in=ancestors)\n context['catalogs'] = Catalog.objects.filter_current_site() \\\n .filter_group(self.request.user) \\\n .filter_availability(self.request.user)\n context['tasks_available'] = Task.objects.filter_current_site() \\\n .filter_catalog(self.object.catalog) \\\n .filter_group(self.request.user) \\\n .filter_availability(self.request.user).exists()\n context['views_available'] = View.objects.filter_current_site() \\\n .filter_catalog(self.object.catalog) \\\n .filter_group(self.request.user) \\\n .filter_availability(self.request.user).exists()\n context['memberships'] = memberships.order_by('user__last_name', '-project__level')\n context['integrations'] = integrations.order_by('provider_key', '-project__level')\n context['providers'] = get_plugins('PROJECT_ISSUE_PROVIDERS')\n context['issues'] = [issue for issue in project.issues.all() if 
issue.resolve(values)]\n context['snapshots'] = project.snapshots.all()\n context['invites'] = project.invites.all()\n context['membership'] = Membership.objects.filter(project=project, user=self.request.user).first()\n\n return context\n\n\nclass ProjectDeleteView(ObjectPermissionMixin, RedirectViewMixin, DeleteView):\n model = Project\n queryset = Project.objects.all()\n success_url = reverse_lazy('projects')\n permission_required = 'projects.delete_project_object'\n\n\nclass ProjectJoinView(LoginRequiredMixin, RedirectViewMixin, TemplateView):\n template_name = 'core/error.html'\n\n def get(self, request, token):\n try:\n invite = Invite.objects.get(token=token)\n\n if invite.is_expired:\n error = _('Sorry, your invitation has been expired.')\n invite.delete()\n elif invite.user and invite.user != request.user:\n error = _('Sorry, but this invitation is for the user \"%s\".' % invite.user)\n elif Membership.objects.filter(project=invite.project, user=request.user).exists():\n invite.delete()\n return redirect(invite.project.get_absolute_url())\n else:\n Membership.objects.create(\n project=invite.project,\n user=request.user,\n role=invite.role\n )\n invite.delete()\n return redirect(invite.project.get_absolute_url())\n\n except Invite.DoesNotExist:\n error = _('Sorry, the invitation link is not valid.')\n\n return self.render_to_response({\n 'title': _('Error'),\n 'errors': [error]\n })\n\n\nclass ProjectCancelView(LoginRequiredMixin, RedirectViewMixin, TemplateView):\n template_name = 'core/error.html'\n success_url = reverse_lazy('projects')\n\n def get(self, request, token=None):\n invite = get_object_or_404(Invite, token=token)\n if invite.user in [None, request.user]:\n invite.delete()\n\n return redirect(self.success_url)\n\n\nclass ProjectLeaveView(ObjectPermissionMixin, RedirectViewMixin, FormMixin, DetailView):\n model = Project\n form_class = Form\n queryset = Project.objects.all()\n success_url = reverse_lazy('projects')\n permission_required = 'projects.leave_project_object'\n template_name = 'projects/project_confirm_leave.html'\n\n def post(self, request, *args, **kwargs):\n form = self.get_form()\n if form.is_valid() and 'cancel' not in request.POST:\n membership = Membership.objects.filter(project=self.get_object()).get(user=request.user)\n if not membership.is_last_owner:\n membership.delete()\n\n return redirect(self.success_url)\n\n\nclass ProjectExportView(ObjectPermissionMixin, DetailView):\n model = Project\n queryset = Project.objects.all()\n permission_required = 'projects.export_project_object'\n\n def get_export_plugin(self):\n export_plugin = get_plugin('PROJECT_EXPORTS', self.kwargs.get('format'))\n if export_plugin is None:\n raise Http404\n\n export_plugin.request = self.request\n export_plugin.project = self.object\n\n return export_plugin\n\n def get(self, request, *args, **kwargs):\n self.object = self.get_object()\n return self.get_export_plugin().render()\n\n def post(self, request, *args, **kwargs):\n self.object = self.get_object()\n return self.get_export_plugin().submit()\n\n\nclass ProjectQuestionsView(ObjectPermissionMixin, DetailView):\n model = Project\n queryset = Project.objects.all()\n permission_required = 'projects.view_project_object'\n template_name = 'projects/project_questions.html'\n\n @method_decorator(ensure_csrf_cookie)\n def get(self, request, *args, **kwargs):\n self.object = self.get_object()\n\n if self.object.catalog is None:\n return redirect('project_error', pk=self.object.pk)\n else:\n context = 
self.get_context_data(object=self.object)\n context['widgets'] = get_widgets()\n return self.render_to_response(context)\n\n\nclass ProjectErrorView(ObjectPermissionMixin, DetailView):\n model = Project\n queryset = Project.objects.all()\n permission_required = 'projects.view_project_object'\n template_name = 'projects/project_error.html'\n", "path": "rdmo/projects/views/project.py"}], "after_files": [{"content": "import logging\n\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.db import models\nfrom django.db.models import F, OuterRef, Subquery\nfrom django.forms import Form\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.urls import reverse_lazy\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.decorators.csrf import ensure_csrf_cookie\nfrom django.views.generic import DeleteView, DetailView, TemplateView\nfrom django.views.generic.edit import FormMixin\nfrom django_filters.views import FilterView\n\nfrom rdmo.accounts.utils import is_site_manager\nfrom rdmo.core.plugins import get_plugin, get_plugins\nfrom rdmo.core.views import ObjectPermissionMixin, RedirectViewMixin\nfrom rdmo.questions.models import Catalog\nfrom rdmo.questions.utils import get_widgets\nfrom rdmo.tasks.models import Task\nfrom rdmo.views.models import View\n\nfrom ..filters import ProjectFilter\nfrom ..models import Integration, Invite, Membership, Project, Value\n\nlogger = logging.getLogger(__name__)\n\n\nclass ProjectsView(LoginRequiredMixin, FilterView):\n template_name = 'projects/projects.html'\n context_object_name = 'projects'\n paginate_by = 20\n filterset_class = ProjectFilter\n\n def get_queryset(self):\n # prepare projects queryset for this user\n queryset = Project.objects.filter(user=self.request.user)\n for instance in queryset:\n queryset |= instance.get_descendants()\n queryset = queryset.distinct()\n\n # prepare subquery for role\n membership_subquery = models.Subquery(\n Membership.objects.filter(project=models.OuterRef('pk'), user=self.request.user).values('role')\n )\n queryset = queryset.annotate(role=membership_subquery)\n\n # prepare subquery for last_changed\n last_changed_subquery = models.Subquery(\n Value.objects.filter(project=models.OuterRef('pk')).order_by('-updated').values('updated')[:1]\n )\n queryset = queryset.annotate(last_changed=models.functions.Greatest('updated', last_changed_subquery))\n\n # order by last changed\n queryset = queryset.order_by('-last_changed')\n\n return queryset\n\n def get_context_data(self, **kwargs):\n context = super(ProjectsView, self).get_context_data(**kwargs)\n context['invites'] = Invite.objects.filter(user=self.request.user)\n context['is_site_manager'] = is_site_manager(self.request.user)\n return context\n\n\nclass SiteProjectsView(LoginRequiredMixin, FilterView):\n template_name = 'projects/site_projects.html'\n context_object_name = 'projects'\n paginate_by = 20\n filterset_class = ProjectFilter\n model = Project\n\n def get_queryset(self):\n if is_site_manager(self.request.user):\n # prepare projects queryset for the site manager\n queryset = Project.objects.filter_current_site()\n\n # prepare subquery for last_changed\n last_changed_subquery = models.Subquery(\n Value.objects.filter(project=models.OuterRef('pk')).order_by('-updated').values('updated')[:1]\n )\n queryset = 
queryset.annotate(last_changed=models.functions.Greatest('updated', last_changed_subquery))\n\n return queryset\n else:\n raise PermissionDenied()\n\n\nclass ProjectDetailView(ObjectPermissionMixin, DetailView):\n model = Project\n queryset = Project.objects.prefetch_related(\n 'issues',\n 'issues__task',\n 'issues__task__conditions',\n 'issues__task__conditions__source',\n 'issues__task__conditions__target_option',\n 'tasks',\n 'views',\n 'values'\n )\n permission_required = 'projects.view_project_object'\n\n def get_context_data(self, **kwargs):\n context = super(ProjectDetailView, self).get_context_data(**kwargs)\n project = context['project']\n ancestors = project.get_ancestors(include_self=True)\n values = project.values.filter(snapshot=None).select_related('attribute', 'option')\n highest = Membership.objects.filter(project__in=ancestors, user_id=OuterRef('user_id')).order_by('-project__level')\n memberships = Membership.objects.filter(project__in=ancestors) \\\n .annotate(highest=Subquery(highest.values('project__level')[:1])) \\\n .filter(highest=F('project__level')) \\\n .select_related('user')\n\n integrations = Integration.objects.filter(project__in=ancestors)\n context['catalogs'] = Catalog.objects.filter_current_site() \\\n .filter_group(self.request.user) \\\n .filter_availability(self.request.user)\n context['tasks_available'] = Task.objects.filter_current_site() \\\n .filter_catalog(self.object.catalog) \\\n .filter_group(self.request.user) \\\n .filter_availability(self.request.user).exists()\n context['views_available'] = View.objects.filter_current_site() \\\n .filter_catalog(self.object.catalog) \\\n .filter_group(self.request.user) \\\n .filter_availability(self.request.user).exists()\n ancestors_import = []\n for instance in ancestors.exclude(id=project.id):\n if self.request.user.has_perm('projects.view_project_object', instance):\n ancestors_import.append(instance)\n context['ancestors_import'] = ancestors_import\n context['memberships'] = memberships.order_by('user__last_name', '-project__level')\n context['integrations'] = integrations.order_by('provider_key', '-project__level')\n context['providers'] = get_plugins('PROJECT_ISSUE_PROVIDERS')\n context['issues'] = [issue for issue in project.issues.all() if issue.resolve(values)]\n context['snapshots'] = project.snapshots.all()\n context['invites'] = project.invites.all()\n context['membership'] = Membership.objects.filter(project=project, user=self.request.user).first()\n return context\n\n\nclass ProjectDeleteView(ObjectPermissionMixin, RedirectViewMixin, DeleteView):\n model = Project\n queryset = Project.objects.all()\n success_url = reverse_lazy('projects')\n permission_required = 'projects.delete_project_object'\n\n\nclass ProjectJoinView(LoginRequiredMixin, RedirectViewMixin, TemplateView):\n template_name = 'core/error.html'\n\n def get(self, request, token):\n try:\n invite = Invite.objects.get(token=token)\n\n if invite.is_expired:\n error = _('Sorry, your invitation has been expired.')\n invite.delete()\n elif invite.user and invite.user != request.user:\n error = _('Sorry, but this invitation is for the user \"%s\".' 
% invite.user)\n elif Membership.objects.filter(project=invite.project, user=request.user).exists():\n invite.delete()\n return redirect(invite.project.get_absolute_url())\n else:\n Membership.objects.create(\n project=invite.project,\n user=request.user,\n role=invite.role\n )\n invite.delete()\n return redirect(invite.project.get_absolute_url())\n\n except Invite.DoesNotExist:\n error = _('Sorry, the invitation link is not valid.')\n\n return self.render_to_response({\n 'title': _('Error'),\n 'errors': [error]\n })\n\n\nclass ProjectCancelView(LoginRequiredMixin, RedirectViewMixin, TemplateView):\n template_name = 'core/error.html'\n success_url = reverse_lazy('projects')\n\n def get(self, request, token=None):\n invite = get_object_or_404(Invite, token=token)\n if invite.user in [None, request.user]:\n invite.delete()\n\n return redirect(self.success_url)\n\n\nclass ProjectLeaveView(ObjectPermissionMixin, RedirectViewMixin, FormMixin, DetailView):\n model = Project\n form_class = Form\n queryset = Project.objects.all()\n success_url = reverse_lazy('projects')\n permission_required = 'projects.leave_project_object'\n template_name = 'projects/project_confirm_leave.html'\n\n def post(self, request, *args, **kwargs):\n form = self.get_form()\n if form.is_valid() and 'cancel' not in request.POST:\n membership = Membership.objects.filter(project=self.get_object()).get(user=request.user)\n if not membership.is_last_owner:\n membership.delete()\n\n return redirect(self.success_url)\n\n\nclass ProjectExportView(ObjectPermissionMixin, DetailView):\n model = Project\n queryset = Project.objects.all()\n permission_required = 'projects.export_project_object'\n\n def get_export_plugin(self):\n export_plugin = get_plugin('PROJECT_EXPORTS', self.kwargs.get('format'))\n if export_plugin is None:\n raise Http404\n\n export_plugin.request = self.request\n export_plugin.project = self.object\n\n return export_plugin\n\n def get(self, request, *args, **kwargs):\n self.object = self.get_object()\n return self.get_export_plugin().render()\n\n def post(self, request, *args, **kwargs):\n self.object = self.get_object()\n return self.get_export_plugin().submit()\n\n\nclass ProjectQuestionsView(ObjectPermissionMixin, DetailView):\n model = Project\n queryset = Project.objects.all()\n permission_required = 'projects.view_project_object'\n template_name = 'projects/project_questions.html'\n\n @method_decorator(ensure_csrf_cookie)\n def get(self, request, *args, **kwargs):\n self.object = self.get_object()\n\n if self.object.catalog is None:\n return redirect('project_error', pk=self.object.pk)\n else:\n context = self.get_context_data(object=self.object)\n context['widgets'] = get_widgets()\n return self.render_to_response(context)\n\n\nclass ProjectErrorView(ObjectPermissionMixin, DetailView):\n model = Project\n queryset = Project.objects.all()\n permission_required = 'projects.view_project_object'\n template_name = 'projects/project_error.html'\n", "path": "rdmo/projects/views/project.py"}]} | 3,119 | 257 |
gh_patches_debug_28637 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1771 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add profile information to the verification admin
When manually reviewing verification requests it would be helpful to have more information in the admin, such as the user's full name, location, department and website.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/verifications/admin.py`
Content:
```
1 from django.contrib import admin
2 from django.utils.timezone import now
3
4 from grandchallenge.verifications.models import Verification
5
6
7 def mark_verified(modeladmin, request, queryset):
8 queryset.update(is_verified=True, verified_at=now())
9
10
11 mark_verified.short_description = "Mark selected users as verified"
12 mark_verified.allowed_permissions = ("change",)
13
14
15 def mark_not_verified(modeladmin, request, queryset):
16 queryset.update(is_verified=False, verified_at=None)
17
18
19 mark_not_verified.short_description = "Mark selected users as not verified"
20 mark_not_verified.allowed_permissions = ("change",)
21
22
23 class VerificationAdmin(admin.ModelAdmin):
24 list_display = (
25 "user",
26 "created",
27 "signup_email",
28 "signup_email_activated",
29 "email",
30 "email_is_verified",
31 "is_verified",
32 "verified_at",
33 )
34 list_filter = ("email_is_verified", "is_verified")
35 readonly_fields = (
36 "created",
37 "modified",
38 "email_is_verified",
39 "email_verified_at",
40 "is_verified",
41 "verified_at",
42 )
43 search_fields = ("user__username", "email", "user__email")
44 actions = (mark_verified, mark_not_verified)
45 autocomplete_fields = ("user",)
46
47 def signup_email_activated(self, instance):
48 return instance.signup_email_activated
49
50 signup_email_activated.boolean = True
51
52 def get_readonly_fields(self, request, obj=None):
53 if obj:
54 return ("user", "email", *self.readonly_fields)
55 else:
56 return self.readonly_fields
57
58
59 admin.site.register(Verification, VerificationAdmin)
60
```
Path: `app/grandchallenge/verifications/models.py`
Content:
```
1 from allauth.account.signals import email_confirmed
2 from django.contrib.auth import get_user_model
3 from django.db import models
4 from django.utils.timezone import now
5 from pyswot import is_academic
6
7 from grandchallenge.subdomains.utils import reverse
8 from grandchallenge.verifications.tokens import (
9 email_verification_token_generator,
10 )
11
12
13 def email_is_trusted(*, email):
14 return is_academic(email)
15
16
17 class Verification(models.Model):
18 created = models.DateTimeField(auto_now_add=True)
19 modified = models.DateTimeField(auto_now=True)
20
21 user = models.OneToOneField(
22 get_user_model(), unique=True, on_delete=models.CASCADE,
23 )
24
25 email = models.EmailField(blank=True)
26 email_is_verified = models.BooleanField(default=False, editable=False)
27 email_verified_at = models.DateTimeField(
28 blank=True, null=True, editable=False
29 )
30
31 is_verified = models.BooleanField(default=None, null=True, editable=False)
32 verified_at = models.DateTimeField(blank=True, null=True, editable=False)
33
34 def __str__(self):
35 return f"Verification for {self.user}"
36
37 @property
38 def signup_email(self):
39 return self.user.email
40
41 @property
42 def signup_email_activated(self):
43 return self.user.emailaddress_set.filter(
44 verified=True, email=self.signup_email
45 ).exists()
46
47 @property
48 def signup_email_is_trusted(self):
49 return self.signup_email_activated and email_is_trusted(
50 email=self.signup_email
51 )
52
53 @property
54 def verification_email_is_trusted(self):
55 return self.email_is_verified and email_is_trusted(email=self.email)
56
57 @property
58 def token(self):
59 return email_verification_token_generator.make_token(self.user)
60
61 @property
62 def verification_url(self):
63 return reverse("verifications:confirm", kwargs={"token": self.token},)
64
65 def save(self, *args, **kwargs):
66 if self.signup_email_is_trusted or self.verification_email_is_trusted:
67 self.is_verified = True
68 self.verified_at = now()
69
70 super().save(*args, **kwargs)
71
72
73 def create_verification(email_address, *_, **__):
74 if (
75 email_is_trusted(email=email_address.email)
76 and not Verification.objects.filter(user=email_address.user).exists()
77 ):
78 Verification.objects.create(
79 user=email_address.user, email=email_address.email
80 )
81
82
83 email_confirmed.connect(create_verification)
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/verifications/admin.py b/app/grandchallenge/verifications/admin.py
--- a/app/grandchallenge/verifications/admin.py
+++ b/app/grandchallenge/verifications/admin.py
@@ -23,14 +23,15 @@
class VerificationAdmin(admin.ModelAdmin):
list_display = (
"user",
+ "user_info",
"created",
"signup_email",
- "signup_email_activated",
"email",
"email_is_verified",
"is_verified",
"verified_at",
)
+ list_select_related = ("user__user_profile",)
list_filter = ("email_is_verified", "is_verified")
readonly_fields = (
"created",
diff --git a/app/grandchallenge/verifications/models.py b/app/grandchallenge/verifications/models.py
--- a/app/grandchallenge/verifications/models.py
+++ b/app/grandchallenge/verifications/models.py
@@ -1,6 +1,7 @@
from allauth.account.signals import email_confirmed
from django.contrib.auth import get_user_model
from django.db import models
+from django.utils.html import format_html
from django.utils.timezone import now
from pyswot import is_academic
@@ -69,6 +70,17 @@
super().save(*args, **kwargs)
+ @property
+ def user_info(self):
+ return format_html(
+ "<span>{} <br/> {} <br/> {} <br/> {} <br/> {}</span>",
+ self.user.get_full_name(),
+ self.user.user_profile.institution,
+ self.user.user_profile.department,
+ self.user.user_profile.country,
+ self.user.user_profile.website,
+ )
+
def create_verification(email_address, *_, **__):
if (
| {"golden_diff": "diff --git a/app/grandchallenge/verifications/admin.py b/app/grandchallenge/verifications/admin.py\n--- a/app/grandchallenge/verifications/admin.py\n+++ b/app/grandchallenge/verifications/admin.py\n@@ -23,14 +23,15 @@\n class VerificationAdmin(admin.ModelAdmin):\n list_display = (\n \"user\",\n+ \"user_info\",\n \"created\",\n \"signup_email\",\n- \"signup_email_activated\",\n \"email\",\n \"email_is_verified\",\n \"is_verified\",\n \"verified_at\",\n )\n+ list_select_related = (\"user__user_profile\",)\n list_filter = (\"email_is_verified\", \"is_verified\")\n readonly_fields = (\n \"created\",\ndiff --git a/app/grandchallenge/verifications/models.py b/app/grandchallenge/verifications/models.py\n--- a/app/grandchallenge/verifications/models.py\n+++ b/app/grandchallenge/verifications/models.py\n@@ -1,6 +1,7 @@\n from allauth.account.signals import email_confirmed\n from django.contrib.auth import get_user_model\n from django.db import models\n+from django.utils.html import format_html\n from django.utils.timezone import now\n from pyswot import is_academic\n \n@@ -69,6 +70,17 @@\n \n super().save(*args, **kwargs)\n \n+ @property\n+ def user_info(self):\n+ return format_html(\n+ \"<span>{} <br/> {} <br/> {} <br/> {} <br/> {}</span>\",\n+ self.user.get_full_name(),\n+ self.user.user_profile.institution,\n+ self.user.user_profile.department,\n+ self.user.user_profile.country,\n+ self.user.user_profile.website,\n+ )\n+\n \n def create_verification(email_address, *_, **__):\n if (\n", "issue": "Add profile information to the verification admin\nWhen manually reviewing verification requests it would be helpful to have more information in the admin such as the users full name, location, department and website.\n", "before_files": [{"content": "from django.contrib import admin\nfrom django.utils.timezone import now\n\nfrom grandchallenge.verifications.models import Verification\n\n\ndef mark_verified(modeladmin, request, queryset):\n queryset.update(is_verified=True, verified_at=now())\n\n\nmark_verified.short_description = \"Mark selected users as verified\"\nmark_verified.allowed_permissions = (\"change\",)\n\n\ndef mark_not_verified(modeladmin, request, queryset):\n queryset.update(is_verified=False, verified_at=None)\n\n\nmark_not_verified.short_description = \"Mark selected users as not verified\"\nmark_not_verified.allowed_permissions = (\"change\",)\n\n\nclass VerificationAdmin(admin.ModelAdmin):\n list_display = (\n \"user\",\n \"created\",\n \"signup_email\",\n \"signup_email_activated\",\n \"email\",\n \"email_is_verified\",\n \"is_verified\",\n \"verified_at\",\n )\n list_filter = (\"email_is_verified\", \"is_verified\")\n readonly_fields = (\n \"created\",\n \"modified\",\n \"email_is_verified\",\n \"email_verified_at\",\n \"is_verified\",\n \"verified_at\",\n )\n search_fields = (\"user__username\", \"email\", \"user__email\")\n actions = (mark_verified, mark_not_verified)\n autocomplete_fields = (\"user\",)\n\n def signup_email_activated(self, instance):\n return instance.signup_email_activated\n\n signup_email_activated.boolean = True\n\n def get_readonly_fields(self, request, obj=None):\n if obj:\n return (\"user\", \"email\", *self.readonly_fields)\n else:\n return self.readonly_fields\n\n\nadmin.site.register(Verification, VerificationAdmin)\n", "path": "app/grandchallenge/verifications/admin.py"}, {"content": "from allauth.account.signals import email_confirmed\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom 
django.utils.timezone import now\nfrom pyswot import is_academic\n\nfrom grandchallenge.subdomains.utils import reverse\nfrom grandchallenge.verifications.tokens import (\n email_verification_token_generator,\n)\n\n\ndef email_is_trusted(*, email):\n return is_academic(email)\n\n\nclass Verification(models.Model):\n created = models.DateTimeField(auto_now_add=True)\n modified = models.DateTimeField(auto_now=True)\n\n user = models.OneToOneField(\n get_user_model(), unique=True, on_delete=models.CASCADE,\n )\n\n email = models.EmailField(blank=True)\n email_is_verified = models.BooleanField(default=False, editable=False)\n email_verified_at = models.DateTimeField(\n blank=True, null=True, editable=False\n )\n\n is_verified = models.BooleanField(default=None, null=True, editable=False)\n verified_at = models.DateTimeField(blank=True, null=True, editable=False)\n\n def __str__(self):\n return f\"Verification for {self.user}\"\n\n @property\n def signup_email(self):\n return self.user.email\n\n @property\n def signup_email_activated(self):\n return self.user.emailaddress_set.filter(\n verified=True, email=self.signup_email\n ).exists()\n\n @property\n def signup_email_is_trusted(self):\n return self.signup_email_activated and email_is_trusted(\n email=self.signup_email\n )\n\n @property\n def verification_email_is_trusted(self):\n return self.email_is_verified and email_is_trusted(email=self.email)\n\n @property\n def token(self):\n return email_verification_token_generator.make_token(self.user)\n\n @property\n def verification_url(self):\n return reverse(\"verifications:confirm\", kwargs={\"token\": self.token},)\n\n def save(self, *args, **kwargs):\n if self.signup_email_is_trusted or self.verification_email_is_trusted:\n self.is_verified = True\n self.verified_at = now()\n\n super().save(*args, **kwargs)\n\n\ndef create_verification(email_address, *_, **__):\n if (\n email_is_trusted(email=email_address.email)\n and not Verification.objects.filter(user=email_address.user).exists()\n ):\n Verification.objects.create(\n user=email_address.user, email=email_address.email\n )\n\n\nemail_confirmed.connect(create_verification)\n", "path": "app/grandchallenge/verifications/models.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom django.utils.timezone import now\n\nfrom grandchallenge.verifications.models import Verification\n\n\ndef mark_verified(modeladmin, request, queryset):\n queryset.update(is_verified=True, verified_at=now())\n\n\nmark_verified.short_description = \"Mark selected users as verified\"\nmark_verified.allowed_permissions = (\"change\",)\n\n\ndef mark_not_verified(modeladmin, request, queryset):\n queryset.update(is_verified=False, verified_at=None)\n\n\nmark_not_verified.short_description = \"Mark selected users as not verified\"\nmark_not_verified.allowed_permissions = (\"change\",)\n\n\nclass VerificationAdmin(admin.ModelAdmin):\n list_display = (\n \"user\",\n \"user_info\",\n \"created\",\n \"signup_email\",\n \"email\",\n \"email_is_verified\",\n \"is_verified\",\n \"verified_at\",\n )\n list_select_related = (\"user__user_profile\",)\n list_filter = (\"email_is_verified\", \"is_verified\")\n readonly_fields = (\n \"created\",\n \"modified\",\n \"email_is_verified\",\n \"email_verified_at\",\n \"is_verified\",\n \"verified_at\",\n )\n search_fields = (\"user__username\", \"email\", \"user__email\")\n actions = (mark_verified, mark_not_verified)\n autocomplete_fields = (\"user\",)\n\n def signup_email_activated(self, instance):\n return 
instance.signup_email_activated\n\n signup_email_activated.boolean = True\n\n def get_readonly_fields(self, request, obj=None):\n if obj:\n return (\"user\", \"email\", *self.readonly_fields)\n else:\n return self.readonly_fields\n\n\nadmin.site.register(Verification, VerificationAdmin)\n", "path": "app/grandchallenge/verifications/admin.py"}, {"content": "from allauth.account.signals import email_confirmed\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom django.utils.html import format_html\nfrom django.utils.timezone import now\nfrom pyswot import is_academic\n\nfrom grandchallenge.subdomains.utils import reverse\nfrom grandchallenge.verifications.tokens import (\n email_verification_token_generator,\n)\n\n\ndef email_is_trusted(*, email):\n return is_academic(email)\n\n\nclass Verification(models.Model):\n created = models.DateTimeField(auto_now_add=True)\n modified = models.DateTimeField(auto_now=True)\n\n user = models.OneToOneField(\n get_user_model(), unique=True, on_delete=models.CASCADE,\n )\n\n email = models.EmailField(blank=True)\n email_is_verified = models.BooleanField(default=False, editable=False)\n email_verified_at = models.DateTimeField(\n blank=True, null=True, editable=False\n )\n\n is_verified = models.BooleanField(default=None, null=True, editable=False)\n verified_at = models.DateTimeField(blank=True, null=True, editable=False)\n\n def __str__(self):\n return f\"Verification for {self.user}\"\n\n @property\n def signup_email(self):\n return self.user.email\n\n @property\n def signup_email_activated(self):\n return self.user.emailaddress_set.filter(\n verified=True, email=self.signup_email\n ).exists()\n\n @property\n def signup_email_is_trusted(self):\n return self.signup_email_activated and email_is_trusted(\n email=self.signup_email\n )\n\n @property\n def verification_email_is_trusted(self):\n return self.email_is_verified and email_is_trusted(email=self.email)\n\n @property\n def token(self):\n return email_verification_token_generator.make_token(self.user)\n\n @property\n def verification_url(self):\n return reverse(\"verifications:confirm\", kwargs={\"token\": self.token},)\n\n def save(self, *args, **kwargs):\n if self.signup_email_is_trusted or self.verification_email_is_trusted:\n self.is_verified = True\n self.verified_at = now()\n\n super().save(*args, **kwargs)\n\n @property\n def user_info(self):\n return format_html(\n \"<span>{} <br/> {} <br/> {} <br/> {} <br/> {}</span>\",\n self.user.get_full_name(),\n self.user.user_profile.institution,\n self.user.user_profile.department,\n self.user.user_profile.country,\n self.user.user_profile.website,\n )\n\n\ndef create_verification(email_address, *_, **__):\n if (\n email_is_trusted(email=email_address.email)\n and not Verification.objects.filter(user=email_address.user).exists()\n ):\n Verification.objects.create(\n user=email_address.user, email=email_address.email\n )\n\n\nemail_confirmed.connect(create_verification)\n", "path": "app/grandchallenge/verifications/models.py"}]} | 1,446 | 393 |