problem_id
stringlengths 18
22
| source
stringclasses 1
value | task_type
stringclasses 1
value | in_source_id
stringlengths 13
58
| prompt
stringlengths 1.1k
25.4k
| golden_diff
stringlengths 145
5.13k
| verification_info
stringlengths 582
39.1k
| num_tokens
int64 271
4.1k
| num_tokens_diff
int64 47
1.02k
|
---|---|---|---|---|---|---|---|---|
gh_patches_debug_1919
|
rasdani/github-patches
|
git_diff
|
hylang__hy-885
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exclamation mark ! is not mangled
I noticed that https://github.com/hylang/hyway/blob/master/conway.hy uses "!" in `set!` and `get!`, but Hy doesn't mangle "!" into something else. The variable is added to the module as-is. That means it'll be hard to reach it from normal Python code. Also, hy2py on Hy code with `set!` returns invalid syntax: `def set!(`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/lex/parser.py`
Content:
```
1 # Copyright (c) 2013 Nicolas Dandrimont <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the "Software"),
5 # to deal in the Software without restriction, including without limitation
6 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 # and/or sell copies of the Software, and to permit persons to whom the
8 # Software is furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
16 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import sys
22 from functools import wraps
23
24 from rply import ParserGenerator
25
26 from hy.models.complex import HyComplex
27 from hy.models.cons import HyCons
28 from hy.models.dict import HyDict
29 from hy.models.expression import HyExpression
30 from hy.models.float import HyFloat
31 from hy.models.integer import HyInteger
32 from hy.models.keyword import HyKeyword
33 from hy.models.list import HyList
34 from hy.models.set import HySet
35 from hy.models.string import HyString
36 from hy.models.symbol import HySymbol
37
38 from .lexer import lexer
39 from .exceptions import LexException, PrematureEndOfInput
40
41
42 pg = ParserGenerator(
43 [rule.name for rule in lexer.rules] + ['$end'],
44 cache_id="hy_parser"
45 )
46
47
48 def set_boundaries(fun):
49 @wraps(fun)
50 def wrapped(p):
51 start = p[0].source_pos
52 end = p[-1].source_pos
53 ret = fun(p)
54 ret.start_line = start.lineno
55 ret.start_column = start.colno
56 if start is not end:
57 ret.end_line = end.lineno
58 ret.end_column = end.colno
59 else:
60 ret.end_line = start.lineno
61 ret.end_column = start.colno + len(p[0].value)
62 return ret
63 return wrapped
64
65
66 def set_quote_boundaries(fun):
67 @wraps(fun)
68 def wrapped(p):
69 start = p[0].source_pos
70 ret = fun(p)
71 ret.start_line = start.lineno
72 ret.start_column = start.colno
73 ret.end_line = p[-1].end_line
74 ret.end_column = p[-1].end_column
75 return ret
76 return wrapped
77
78
79 @pg.production("main : HASHBANG real_main")
80 def main_hashbang(p):
81 return p[1]
82
83
84 @pg.production("main : real_main")
85 def main(p):
86 return p[0]
87
88
89 @pg.production("real_main : list_contents")
90 def real_main(p):
91 return p[0]
92
93
94 @pg.production("real_main : $end")
95 def real_main_empty(p):
96 return []
97
98
99 def reject_spurious_dots(*items):
100 "Reject the spurious dots from items"
101 for list in items:
102 for tok in list:
103 if tok == "." and type(tok) == HySymbol:
104 raise LexException("Malformed dotted list",
105 tok.start_line, tok.start_column)
106
107
108 @pg.production("paren : LPAREN list_contents RPAREN")
109 @set_boundaries
110 def paren(p):
111 cont = p[1]
112
113 # Dotted lists are expressions of the form
114 # (a b c . d)
115 # that evaluate to nested cons cells of the form
116 # (a . (b . (c . d)))
117 if len(cont) >= 3 and isinstance(cont[-2], HySymbol) and cont[-2] == ".":
118
119 reject_spurious_dots(cont[:-2], cont[-1:])
120
121 if len(cont) == 3:
122 # Two-item dotted list: return the cons cell directly
123 return HyCons(cont[0], cont[2])
124 else:
125 # Return a nested cons cell
126 return HyCons(cont[0], paren([p[0], cont[1:], p[2]]))
127
128 # Warn preemptively on a malformed dotted list.
129 # Only check for dots after the first item to allow for a potential
130 # attribute accessor shorthand
131 reject_spurious_dots(cont[1:])
132
133 return HyExpression(p[1])
134
135
136 @pg.production("paren : LPAREN RPAREN")
137 @set_boundaries
138 def empty_paren(p):
139 return HyExpression([])
140
141
142 @pg.production("list_contents : term list_contents")
143 def list_contents(p):
144 return [p[0]] + p[1]
145
146
147 @pg.production("list_contents : term")
148 def list_contents_single(p):
149 return [p[0]]
150
151
152 @pg.production("term : identifier")
153 @pg.production("term : paren")
154 @pg.production("term : dict")
155 @pg.production("term : list")
156 @pg.production("term : set")
157 @pg.production("term : string")
158 def term(p):
159 return p[0]
160
161
162 @pg.production("term : QUOTE term")
163 @set_quote_boundaries
164 def term_quote(p):
165 return HyExpression([HySymbol("quote"), p[1]])
166
167
168 @pg.production("term : QUASIQUOTE term")
169 @set_quote_boundaries
170 def term_quasiquote(p):
171 return HyExpression([HySymbol("quasiquote"), p[1]])
172
173
174 @pg.production("term : UNQUOTE term")
175 @set_quote_boundaries
176 def term_unquote(p):
177 return HyExpression([HySymbol("unquote"), p[1]])
178
179
180 @pg.production("term : UNQUOTESPLICE term")
181 @set_quote_boundaries
182 def term_unquote_splice(p):
183 return HyExpression([HySymbol("unquote_splice"), p[1]])
184
185
186 @pg.production("term : HASHREADER term")
187 @set_quote_boundaries
188 def hash_reader(p):
189 st = p[0].getstr()[1]
190 str_object = HyString(st)
191 expr = p[1]
192 return HyExpression([HySymbol("dispatch_reader_macro"), str_object, expr])
193
194
195 @pg.production("set : HLCURLY list_contents RCURLY")
196 @set_boundaries
197 def t_set(p):
198 return HySet(p[1])
199
200
201 @pg.production("set : HLCURLY RCURLY")
202 @set_boundaries
203 def empty_set(p):
204 return HySet([])
205
206
207 @pg.production("dict : LCURLY list_contents RCURLY")
208 @set_boundaries
209 def t_dict(p):
210 return HyDict(p[1])
211
212
213 @pg.production("dict : LCURLY RCURLY")
214 @set_boundaries
215 def empty_dict(p):
216 return HyDict([])
217
218
219 @pg.production("list : LBRACKET list_contents RBRACKET")
220 @set_boundaries
221 def t_list(p):
222 return HyList(p[1])
223
224
225 @pg.production("list : LBRACKET RBRACKET")
226 @set_boundaries
227 def t_empty_list(p):
228 return HyList([])
229
230
231 if sys.version_info[0] >= 3:
232 def uni_hystring(s):
233 return HyString(eval(s))
234 else:
235 def uni_hystring(s):
236 return HyString(eval('u'+s))
237
238
239 @pg.production("string : STRING")
240 @set_boundaries
241 def t_string(p):
242 # remove trailing quote
243 s = p[0].value[:-1]
244 # get the header
245 header, s = s.split('"', 1)
246 # remove unicode marker
247 header = header.replace("u", "")
248 # build python string
249 s = header + '"""' + s + '"""'
250 return uni_hystring(s)
251
252
253 @pg.production("string : PARTIAL_STRING")
254 def t_partial_string(p):
255 # Any unterminated string requires more input
256 raise PrematureEndOfInput("Premature end of input")
257
258
259 @pg.production("identifier : IDENTIFIER")
260 @set_boundaries
261 def t_identifier(p):
262 obj = p[0].value
263
264 try:
265 return HyInteger(obj)
266 except ValueError:
267 pass
268
269 if '/' in obj:
270 try:
271 lhs, rhs = obj.split('/')
272 return HyExpression([HySymbol('fraction'), HyInteger(lhs),
273 HyInteger(rhs)])
274 except ValueError:
275 pass
276
277 try:
278 return HyFloat(obj)
279 except ValueError:
280 pass
281
282 if obj != 'j':
283 try:
284 return HyComplex(obj)
285 except ValueError:
286 pass
287
288 table = {
289 "true": "True",
290 "false": "False",
291 "nil": "None",
292 "null": "None",
293 }
294
295 if obj in table:
296 return HySymbol(table[obj])
297
298 if obj.startswith(":"):
299 return HyKeyword(obj)
300
301 def mangle(p):
302 if p.startswith("*") and p.endswith("*") and p not in ("*", "**"):
303 p = p[1:-1].upper()
304
305 if "-" in p and p != "-":
306 p = p.replace("-", "_")
307
308 if p.endswith("?") and p != "?":
309 p = "is_%s" % (p[:-1])
310
311 return p
312
313 obj = ".".join([mangle(part) for part in obj.split(".")])
314
315 return HySymbol(obj)
316
317
318 @pg.error
319 def error_handler(token):
320 tokentype = token.gettokentype()
321 if tokentype == '$end':
322 raise PrematureEndOfInput("Premature end of input")
323 else:
324 raise LexException(
325 "Ran into a %s where it wasn't expected." % tokentype,
326 token.source_pos.lineno, token.source_pos.colno)
327
328
329 parser = pg.build()
330
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hy/lex/parser.py b/hy/lex/parser.py
--- a/hy/lex/parser.py
+++ b/hy/lex/parser.py
@@ -308,6 +308,9 @@
if p.endswith("?") and p != "?":
p = "is_%s" % (p[:-1])
+ if p.endswith("!") and p != "!":
+ p = "%s_bang" % (p[:-1])
+
return p
obj = ".".join([mangle(part) for part in obj.split(".")])
|
{"golden_diff": "diff --git a/hy/lex/parser.py b/hy/lex/parser.py\n--- a/hy/lex/parser.py\n+++ b/hy/lex/parser.py\n@@ -308,6 +308,9 @@\n if p.endswith(\"?\") and p != \"?\":\n p = \"is_%s\" % (p[:-1])\n \n+ if p.endswith(\"!\") and p != \"!\":\n+ p = \"%s_bang\" % (p[:-1])\n+\n return p\n \n obj = \".\".join([mangle(part) for part in obj.split(\".\")])\n", "issue": "Exclamation mark ! is not mangled\nI noticed that https://github.com/hylang/hyway/blob/master/conway.hy uses \"!\" in `set!` and `get!`, but Hy doesn't mangle \"!\" into something else. The variable is added to the module as-is. That means it'll be hard to reach it from normal Python code. Also, hy2py on Hy code with `set!` returns invalid syntax: `def set!(`.\n\n", "before_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport sys\nfrom functools import wraps\n\nfrom rply import ParserGenerator\n\nfrom hy.models.complex import HyComplex\nfrom hy.models.cons import HyCons\nfrom hy.models.dict import HyDict\nfrom hy.models.expression import HyExpression\nfrom hy.models.float import HyFloat\nfrom hy.models.integer import HyInteger\nfrom hy.models.keyword import HyKeyword\nfrom hy.models.list import HyList\nfrom hy.models.set import HySet\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\n\nfrom .lexer import lexer\nfrom .exceptions import LexException, PrematureEndOfInput\n\n\npg = ParserGenerator(\n [rule.name for rule in lexer.rules] + ['$end'],\n cache_id=\"hy_parser\"\n)\n\n\ndef set_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n end = p[-1].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n if start is not end:\n ret.end_line = end.lineno\n ret.end_column = end.colno\n else:\n ret.end_line = start.lineno\n ret.end_column = start.colno + len(p[0].value)\n return ret\n return wrapped\n\n\ndef set_quote_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n ret.end_line = p[-1].end_line\n ret.end_column = p[-1].end_column\n return ret\n return wrapped\n\n\[email protected](\"main : HASHBANG real_main\")\ndef main_hashbang(p):\n return p[1]\n\n\[email protected](\"main : real_main\")\ndef main(p):\n return p[0]\n\n\[email protected](\"real_main : list_contents\")\ndef real_main(p):\n return p[0]\n\n\[email protected](\"real_main : $end\")\ndef real_main_empty(p):\n return []\n\n\ndef reject_spurious_dots(*items):\n \"Reject the spurious dots from items\"\n for list in items:\n for tok in list:\n if tok == \".\" and type(tok) == HySymbol:\n raise LexException(\"Malformed dotted list\",\n tok.start_line, tok.start_column)\n\n\[email protected](\"paren : LPAREN list_contents RPAREN\")\n@set_boundaries\ndef paren(p):\n cont = p[1]\n\n # Dotted lists are expressions of the form\n # (a b c . d)\n # that evaluate to nested cons cells of the form\n # (a . (b . (c . d)))\n if len(cont) >= 3 and isinstance(cont[-2], HySymbol) and cont[-2] == \".\":\n\n reject_spurious_dots(cont[:-2], cont[-1:])\n\n if len(cont) == 3:\n # Two-item dotted list: return the cons cell directly\n return HyCons(cont[0], cont[2])\n else:\n # Return a nested cons cell\n return HyCons(cont[0], paren([p[0], cont[1:], p[2]]))\n\n # Warn preemptively on a malformed dotted list.\n # Only check for dots after the first item to allow for a potential\n # attribute accessor shorthand\n reject_spurious_dots(cont[1:])\n\n return HyExpression(p[1])\n\n\[email protected](\"paren : LPAREN RPAREN\")\n@set_boundaries\ndef empty_paren(p):\n return HyExpression([])\n\n\[email protected](\"list_contents : term list_contents\")\ndef list_contents(p):\n return [p[0]] + p[1]\n\n\[email protected](\"list_contents : term\")\ndef list_contents_single(p):\n return [p[0]]\n\n\[email protected](\"term : identifier\")\[email protected](\"term : paren\")\[email protected](\"term : dict\")\[email protected](\"term : list\")\[email protected](\"term : set\")\[email protected](\"term : string\")\ndef term(p):\n return p[0]\n\n\[email protected](\"term : QUOTE term\")\n@set_quote_boundaries\ndef term_quote(p):\n return HyExpression([HySymbol(\"quote\"), p[1]])\n\n\[email protected](\"term : QUASIQUOTE term\")\n@set_quote_boundaries\ndef term_quasiquote(p):\n return HyExpression([HySymbol(\"quasiquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTE term\")\n@set_quote_boundaries\ndef term_unquote(p):\n return HyExpression([HySymbol(\"unquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTESPLICE term\")\n@set_quote_boundaries\ndef term_unquote_splice(p):\n return HyExpression([HySymbol(\"unquote_splice\"), p[1]])\n\n\[email protected](\"term : HASHREADER term\")\n@set_quote_boundaries\ndef hash_reader(p):\n st = p[0].getstr()[1]\n str_object = HyString(st)\n expr = p[1]\n return HyExpression([HySymbol(\"dispatch_reader_macro\"), str_object, expr])\n\n\[email protected](\"set : HLCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_set(p):\n return HySet(p[1])\n\n\[email protected](\"set : HLCURLY RCURLY\")\n@set_boundaries\ndef empty_set(p):\n return HySet([])\n\n\[email protected](\"dict : LCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_dict(p):\n return HyDict(p[1])\n\n\[email protected](\"dict : LCURLY RCURLY\")\n@set_boundaries\ndef empty_dict(p):\n return HyDict([])\n\n\[email protected](\"list : LBRACKET list_contents RBRACKET\")\n@set_boundaries\ndef t_list(p):\n return HyList(p[1])\n\n\[email protected](\"list : LBRACKET RBRACKET\")\n@set_boundaries\ndef t_empty_list(p):\n return HyList([])\n\n\nif sys.version_info[0] >= 3:\n def uni_hystring(s):\n return HyString(eval(s))\nelse:\n def uni_hystring(s):\n return HyString(eval('u'+s))\n\n\[email protected](\"string : STRING\")\n@set_boundaries\ndef t_string(p):\n # remove trailing quote\n s = p[0].value[:-1]\n # get the header\n header, s = s.split('\"', 1)\n # remove unicode marker\n header = header.replace(\"u\", \"\")\n # build python string\n s = header + '\"\"\"' + s + '\"\"\"'\n return uni_hystring(s)\n\n\[email protected](\"string : PARTIAL_STRING\")\ndef t_partial_string(p):\n # Any unterminated string requires more input\n raise PrematureEndOfInput(\"Premature end of input\")\n\n\[email protected](\"identifier : IDENTIFIER\")\n@set_boundaries\ndef t_identifier(p):\n obj = p[0].value\n\n try:\n return HyInteger(obj)\n except ValueError:\n pass\n\n if '/' in obj:\n try:\n lhs, rhs = obj.split('/')\n return HyExpression([HySymbol('fraction'), HyInteger(lhs),\n HyInteger(rhs)])\n except ValueError:\n pass\n\n try:\n return HyFloat(obj)\n except ValueError:\n pass\n\n if obj != 'j':\n try:\n return HyComplex(obj)\n except ValueError:\n pass\n\n table = {\n \"true\": \"True\",\n \"false\": \"False\",\n \"nil\": \"None\",\n \"null\": \"None\",\n }\n\n if obj in table:\n return HySymbol(table[obj])\n\n if obj.startswith(\":\"):\n return HyKeyword(obj)\n\n def mangle(p):\n if p.startswith(\"*\") and p.endswith(\"*\") and p not in (\"*\", \"**\"):\n p = p[1:-1].upper()\n\n if \"-\" in p and p != \"-\":\n p = p.replace(\"-\", \"_\")\n\n if p.endswith(\"?\") and p != \"?\":\n p = \"is_%s\" % (p[:-1])\n\n return p\n\n obj = \".\".join([mangle(part) for part in obj.split(\".\")])\n\n return HySymbol(obj)\n\n\[email protected]\ndef error_handler(token):\n tokentype = token.gettokentype()\n if tokentype == '$end':\n raise PrematureEndOfInput(\"Premature end of input\")\n else:\n raise LexException(\n \"Ran into a %s where it wasn't expected.\" % tokentype,\n token.source_pos.lineno, token.source_pos.colno)\n\n\nparser = pg.build()\n", "path": "hy/lex/parser.py"}], "after_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport sys\nfrom functools import wraps\n\nfrom rply import ParserGenerator\n\nfrom hy.models.complex import HyComplex\nfrom hy.models.cons import HyCons\nfrom hy.models.dict import HyDict\nfrom hy.models.expression import HyExpression\nfrom hy.models.float import HyFloat\nfrom hy.models.integer import HyInteger\nfrom hy.models.keyword import HyKeyword\nfrom hy.models.list import HyList\nfrom hy.models.set import HySet\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\n\nfrom .lexer import lexer\nfrom .exceptions import LexException, PrematureEndOfInput\n\n\npg = ParserGenerator(\n [rule.name for rule in lexer.rules] + ['$end'],\n cache_id=\"hy_parser\"\n)\n\n\ndef set_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n end = p[-1].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n if start is not end:\n ret.end_line = end.lineno\n ret.end_column = end.colno\n else:\n ret.end_line = start.lineno\n ret.end_column = start.colno + len(p[0].value)\n return ret\n return wrapped\n\n\ndef set_quote_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n ret.end_line = p[-1].end_line\n ret.end_column = p[-1].end_column\n return ret\n return wrapped\n\n\[email protected](\"main : HASHBANG real_main\")\ndef main_hashbang(p):\n return p[1]\n\n\[email protected](\"main : real_main\")\ndef main(p):\n return p[0]\n\n\[email protected](\"real_main : list_contents\")\ndef real_main(p):\n return p[0]\n\n\[email protected](\"real_main : $end\")\ndef real_main_empty(p):\n return []\n\n\ndef reject_spurious_dots(*items):\n \"Reject the spurious dots from items\"\n for list in items:\n for tok in list:\n if tok == \".\" and type(tok) == HySymbol:\n raise LexException(\"Malformed dotted list\",\n tok.start_line, tok.start_column)\n\n\[email protected](\"paren : LPAREN list_contents RPAREN\")\n@set_boundaries\ndef paren(p):\n cont = p[1]\n\n # Dotted lists are expressions of the form\n # (a b c . d)\n # that evaluate to nested cons cells of the form\n # (a . (b . (c . d)))\n if len(cont) >= 3 and isinstance(cont[-2], HySymbol) and cont[-2] == \".\":\n\n reject_spurious_dots(cont[:-2], cont[-1:])\n\n if len(cont) == 3:\n # Two-item dotted list: return the cons cell directly\n return HyCons(cont[0], cont[2])\n else:\n # Return a nested cons cell\n return HyCons(cont[0], paren([p[0], cont[1:], p[2]]))\n\n # Warn preemptively on a malformed dotted list.\n # Only check for dots after the first item to allow for a potential\n # attribute accessor shorthand\n reject_spurious_dots(cont[1:])\n\n return HyExpression(p[1])\n\n\[email protected](\"paren : LPAREN RPAREN\")\n@set_boundaries\ndef empty_paren(p):\n return HyExpression([])\n\n\[email protected](\"list_contents : term list_contents\")\ndef list_contents(p):\n return [p[0]] + p[1]\n\n\[email protected](\"list_contents : term\")\ndef list_contents_single(p):\n return [p[0]]\n\n\[email protected](\"term : identifier\")\[email protected](\"term : paren\")\[email protected](\"term : dict\")\[email protected](\"term : list\")\[email protected](\"term : set\")\[email protected](\"term : string\")\ndef term(p):\n return p[0]\n\n\[email protected](\"term : QUOTE term\")\n@set_quote_boundaries\ndef term_quote(p):\n return HyExpression([HySymbol(\"quote\"), p[1]])\n\n\[email protected](\"term : QUASIQUOTE term\")\n@set_quote_boundaries\ndef term_quasiquote(p):\n return HyExpression([HySymbol(\"quasiquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTE term\")\n@set_quote_boundaries\ndef term_unquote(p):\n return HyExpression([HySymbol(\"unquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTESPLICE term\")\n@set_quote_boundaries\ndef term_unquote_splice(p):\n return HyExpression([HySymbol(\"unquote_splice\"), p[1]])\n\n\[email protected](\"term : HASHREADER term\")\n@set_quote_boundaries\ndef hash_reader(p):\n st = p[0].getstr()[1]\n str_object = HyString(st)\n expr = p[1]\n return HyExpression([HySymbol(\"dispatch_reader_macro\"), str_object, expr])\n\n\[email protected](\"set : HLCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_set(p):\n return HySet(p[1])\n\n\[email protected](\"set : HLCURLY RCURLY\")\n@set_boundaries\ndef empty_set(p):\n return HySet([])\n\n\[email protected](\"dict : LCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_dict(p):\n return HyDict(p[1])\n\n\[email protected](\"dict : LCURLY RCURLY\")\n@set_boundaries\ndef empty_dict(p):\n return HyDict([])\n\n\[email protected](\"list : LBRACKET list_contents RBRACKET\")\n@set_boundaries\ndef t_list(p):\n return HyList(p[1])\n\n\[email protected](\"list : LBRACKET RBRACKET\")\n@set_boundaries\ndef t_empty_list(p):\n return HyList([])\n\n\nif sys.version_info[0] >= 3:\n def uni_hystring(s):\n return HyString(eval(s))\nelse:\n def uni_hystring(s):\n return HyString(eval('u'+s))\n\n\[email protected](\"string : STRING\")\n@set_boundaries\ndef t_string(p):\n # remove trailing quote\n s = p[0].value[:-1]\n # get the header\n header, s = s.split('\"', 1)\n # remove unicode marker\n header = header.replace(\"u\", \"\")\n # build python string\n s = header + '\"\"\"' + s + '\"\"\"'\n return uni_hystring(s)\n\n\[email protected](\"string : PARTIAL_STRING\")\ndef t_partial_string(p):\n # Any unterminated string requires more input\n raise PrematureEndOfInput(\"Premature end of input\")\n\n\[email protected](\"identifier : IDENTIFIER\")\n@set_boundaries\ndef t_identifier(p):\n obj = p[0].value\n\n try:\n return HyInteger(obj)\n except ValueError:\n pass\n\n if '/' in obj:\n try:\n lhs, rhs = obj.split('/')\n return HyExpression([HySymbol('fraction'), HyInteger(lhs),\n HyInteger(rhs)])\n except ValueError:\n pass\n\n try:\n return HyFloat(obj)\n except ValueError:\n pass\n\n if obj != 'j':\n try:\n return HyComplex(obj)\n except ValueError:\n pass\n\n table = {\n \"true\": \"True\",\n \"false\": \"False\",\n \"nil\": \"None\",\n \"null\": \"None\",\n }\n\n if obj in table:\n return HySymbol(table[obj])\n\n if obj.startswith(\":\"):\n return HyKeyword(obj)\n\n def mangle(p):\n if p.startswith(\"*\") and p.endswith(\"*\") and p not in (\"*\", \"**\"):\n p = p[1:-1].upper()\n\n if \"-\" in p and p != \"-\":\n p = p.replace(\"-\", \"_\")\n\n if p.endswith(\"?\") and p != \"?\":\n p = \"is_%s\" % (p[:-1])\n\n if p.endswith(\"!\") and p != \"!\":\n p = \"%s_bang\" % (p[:-1])\n\n return p\n\n obj = \".\".join([mangle(part) for part in obj.split(\".\")])\n\n return HySymbol(obj)\n\n\[email protected]\ndef error_handler(token):\n tokentype = token.gettokentype()\n if tokentype == '$end':\n raise PrematureEndOfInput(\"Premature end of input\")\n else:\n raise LexException(\n \"Ran into a %s where it wasn't expected.\" % tokentype,\n token.source_pos.lineno, token.source_pos.colno)\n\n\nparser = pg.build()\n", "path": "hy/lex/parser.py"}]}
| 3,411 | 126 |
gh_patches_debug_3669
|
rasdani/github-patches
|
git_diff
|
ocadotechnology__codeforlife-portal-783
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
portal API not working anymore
**Describe the bug**
When trying to access the following URLs:
/api/lastconnectedsince/YYYY/MM/DD
.../registered/YYYY/MM/DD
.../userspercountry/CC
On any of our servers, we get a 500 error.
Google console says:
> TemplateSyntaxError: 'url' is not a valid tag or filter in tag library 'future'
It happens even with the right to access them.
**To Reproduce**
If you have an authorised google account, go to https://www.codeforlife.education/api/lastconnectedsince/2018/07/20/ and you will see a 500 error
**Expected behaviour**
This page to display a number when your google account is autorised
**Desktop (please complete the following information):**
- OS:Ubuntu 16.04
- Browser:Chrome
**Additional context**
The urls.py file has been reworked for forward compatibility
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from setuptools import find_packages, setup
3 import versioneer
4 setup(name='codeforlife-portal',
5 cmdclass=versioneer.get_cmdclass(),
6 version=versioneer.get_version(),
7 packages=find_packages(),
8 include_package_data=True,
9 install_requires=[
10 'django==1.9.13',
11 'django-appconf==1.0.1',
12 'django-countries==3.4.1',
13 'djangorestframework==3.1.3',
14 'django-jquery==1.9.1',
15 'django-autoconfig==0.8.0',
16 'django-pipeline==1.5.4',
17 'django-recaptcha==1.3.1', # 1.4 dropped support for < 1.11
18
19 'pyyaml==3.10',
20 'rapid-router >= 1.0.0.post.dev1',
21 'six==1.11.0',
22 'aimmo',
23 'docutils==0.12',
24 'reportlab==3.2.0',
25 'postcodes==0.1',
26 'django-formtools==1.0',
27 'django-two-factor-auth==1.5.0',
28 'urllib3==1.22',
29 'requests==2.18.4',
30
31 'django-classy-tags==0.6.1',
32 'django-treebeard==4.3',
33 'django-sekizai==0.10.0',
34
35 'django-online-status==0.1.0',
36
37 'Pillow==3.3.2',
38 'django-reversion==2.0.0',
39 'sqlparse',
40 'libsass',
41 'django-forms-bootstrap'
42 ],
43 tests_require=[
44 'django-setuptest==0.2.1',
45 'django-selenium-clean==0.3.0',
46 'responses==0.4.0',
47 'selenium==2.48.0',
48 ],
49 test_suite='setuptest.setuptest.SetupTestSuite',
50 zip_safe=False,
51 )
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
'django==1.9.13',
'django-appconf==1.0.1',
'django-countries==3.4.1',
- 'djangorestframework==3.1.3',
+ 'djangorestframework==3.2.3',
'django-jquery==1.9.1',
'django-autoconfig==0.8.0',
'django-pipeline==1.5.4',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n 'django==1.9.13',\n 'django-appconf==1.0.1',\n 'django-countries==3.4.1',\n- 'djangorestframework==3.1.3',\n+ 'djangorestframework==3.2.3',\n 'django-jquery==1.9.1',\n 'django-autoconfig==0.8.0',\n 'django-pipeline==1.5.4',\n", "issue": "portal API not working anymore\n**Describe the bug**\r\nWhen trying to access the following URLs:\r\n/api/lastconnectedsince/YYYY/MM/DD\r\n.../registered/YYYY/MM/DD\r\n.../userspercountry/CC\r\nOn any of our servers, we get a 500 error.\r\nGoogle console says: \r\n\r\n> TemplateSyntaxError: 'url' is not a valid tag or filter in tag library 'future'\r\n\r\nIt happens even with the right to access them.\r\n\r\n**To Reproduce**\r\nIf you have an authorised google account, go to https://www.codeforlife.education/api/lastconnectedsince/2018/07/20/ and you will see a 500 error\r\n\r\n**Expected behaviour**\r\nThis page to display a number when your google account is autorised\r\n\r\n**Desktop (please complete the following information):**\r\n\r\n- OS:Ubuntu 16.04\r\n- Browser:Chrome\r\n \r\n**Additional context**\r\nThe urls.py file has been reworked for forward compatibility\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\nimport versioneer\nsetup(name='codeforlife-portal',\n cmdclass=versioneer.get_cmdclass(),\n version=versioneer.get_version(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django==1.9.13',\n 'django-appconf==1.0.1',\n 'django-countries==3.4.1',\n 'djangorestframework==3.1.3',\n 'django-jquery==1.9.1',\n 'django-autoconfig==0.8.0',\n 'django-pipeline==1.5.4',\n 'django-recaptcha==1.3.1', # 1.4 dropped support for < 1.11\n\n 'pyyaml==3.10',\n 'rapid-router >= 1.0.0.post.dev1',\n 'six==1.11.0',\n 'aimmo',\n 'docutils==0.12',\n 'reportlab==3.2.0',\n 'postcodes==0.1',\n 'django-formtools==1.0',\n 'django-two-factor-auth==1.5.0',\n 'urllib3==1.22',\n 'requests==2.18.4',\n\n 'django-classy-tags==0.6.1',\n 'django-treebeard==4.3',\n 'django-sekizai==0.10.0',\n\n 'django-online-status==0.1.0',\n\n 'Pillow==3.3.2',\n 'django-reversion==2.0.0',\n 'sqlparse',\n 'libsass',\n 'django-forms-bootstrap'\n ],\n tests_require=[\n 'django-setuptest==0.2.1',\n 'django-selenium-clean==0.3.0',\n 'responses==0.4.0',\n 'selenium==2.48.0',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n zip_safe=False,\n )\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\nimport versioneer\nsetup(name='codeforlife-portal',\n cmdclass=versioneer.get_cmdclass(),\n version=versioneer.get_version(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django==1.9.13',\n 'django-appconf==1.0.1',\n 'django-countries==3.4.1',\n 'djangorestframework==3.2.3',\n 'django-jquery==1.9.1',\n 'django-autoconfig==0.8.0',\n 'django-pipeline==1.5.4',\n 'django-recaptcha==1.3.1', # 1.4 dropped support for < 1.11\n\n 'pyyaml==3.10',\n 'rapid-router >= 1.0.0.post.dev1',\n 'six==1.11.0',\n 'aimmo',\n 'docutils==0.12',\n 'reportlab==3.2.0',\n 'postcodes==0.1',\n 'django-formtools==1.0',\n 'django-two-factor-auth==1.5.0',\n 'urllib3==1.22',\n 'requests==2.18.4',\n\n 'django-classy-tags==0.6.1',\n 'django-treebeard==4.3',\n 'django-sekizai==0.10.0',\n\n 'django-online-status==0.1.0',\n\n 'Pillow==3.3.2',\n 'django-reversion==2.0.0',\n 'sqlparse',\n 'libsass',\n 'django-forms-bootstrap'\n ],\n tests_require=[\n 'django-setuptest==0.2.1',\n 'django-selenium-clean==0.3.0',\n 'responses==0.4.0',\n 'selenium==2.48.0',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n zip_safe=False,\n )\n", "path": "setup.py"}]}
| 1,021 | 131 |
gh_patches_debug_29415
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-162
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeEncodeError when writing to stdout in python2.6
```
$ pre-commit run fixmyjs
fixmyjs............................................................................................................................................................................................Failed
hookid: fixmyjs
Traceback (most recent call last):
File "virtualenv_run/bin/pre-commit", line 14, in <module>
sys.exit(main())
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/util.py", line 41, in wrapper
return func(argv)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/main.py", line 99, in main
return run(runner, args)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 144, in run
return _run_hook(runner, args, write=write)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 116, in _run_hook
return _run_single_hook(runner, repo, hook_id, args, write=write)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 91, in _run_single_hook
write(output.strip() + '\n')
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2713' in position 0: ordinal not in range(128)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/output.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import subprocess
4
5 from pre_commit import color
6
7
8 # TODO: smell: import side-effects
9 COLS = int(
10 subprocess.Popen(
11 ['tput', 'cols'], stdout=subprocess.PIPE
12 ).communicate()[0] or
13 # Default in the case of no terminal
14 80
15 )
16
17
18 def get_hook_message(
19 start,
20 postfix='',
21 end_msg=None,
22 end_len=0,
23 end_color=None,
24 use_color=None,
25 cols=COLS,
26 ):
27 """Prints a message for running a hook.
28
29 This currently supports three approaches:
30
31 # Print `start` followed by dots, leaving 6 characters at the end
32 >>> print_hook_message('start', end_len=6)
33 start...............................................................
34
35 # Print `start` followed by dots with the end message colored if coloring
36 # is specified and a newline afterwards
37 >>> print_hook_message(
38 'start',
39 end_msg='end',
40 end_color=color.RED,
41 use_color=True,
42 )
43 start...................................................................end
44
45 # Print `start` followed by dots, followed by the `postfix` message
46 # uncolored, followed by the `end_msg` colored if specified and a newline
47 # afterwards
48 >>> print_hook_message(
49 'start',
50 postfix='postfix ',
51 end_msg='end',
52 end_color=color.RED,
53 use_color=True,
54 )
55 start...........................................................postfix end
56 """
57 if bool(end_msg) == bool(end_len):
58 raise ValueError('Expected one of (`end_msg`, `end_len`)')
59 if end_msg is not None and (end_color is None or use_color is None):
60 raise ValueError(
61 '`end_color` and `use_color` are required with `end_msg`'
62 )
63
64 if end_len:
65 return start + '.' * (cols - len(start) - end_len - 1)
66 else:
67 return '{0}{1}{2}{3}\n'.format(
68 start,
69 '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),
70 postfix,
71 color.format_color(end_msg, end_color, use_color),
72 )
73
```
Path: `pre_commit/commands/run.py`
Content:
```
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import logging
5 import os
6 import sys
7
8 from pre_commit import git
9 from pre_commit import color
10 from pre_commit.logging_handler import LoggingHandler
11 from pre_commit.output import get_hook_message
12 from pre_commit.staged_files_only import staged_files_only
13 from pre_commit.util import noop_context
14
15
16 logger = logging.getLogger('pre_commit')
17
18
19 def _get_skips(environ):
20 skips = environ.get('SKIP', '')
21 return set(skip.strip() for skip in skips.split(',') if skip.strip())
22
23
24 def _hook_msg_start(hook, verbose):
25 return '{0}{1}'.format(
26 '[{0}] '.format(hook['id']) if verbose else '',
27 hook['name'],
28 )
29
30
31 def _print_no_files_skipped(hook, write, args):
32 write(get_hook_message(
33 _hook_msg_start(hook, args.verbose),
34 postfix='(no files to check) ',
35 end_msg='Skipped',
36 end_color=color.TURQUOISE,
37 use_color=args.color,
38 ))
39
40
41 def _print_user_skipped(hook, write, args):
42 write(get_hook_message(
43 _hook_msg_start(hook, args.verbose),
44 end_msg='Skipped',
45 end_color=color.YELLOW,
46 use_color=args.color,
47 ))
48
49
50 def _run_single_hook(runner, repository, hook, args, write, skips=set()):
51 if args.all_files:
52 get_filenames = git.get_all_files_matching
53 elif git.is_in_merge_conflict():
54 get_filenames = git.get_conflicted_files_matching
55 else:
56 get_filenames = git.get_staged_files_matching
57
58 filenames = get_filenames(hook['files'], hook['exclude'])
59 if hook['id'] in skips:
60 _print_user_skipped(hook, write, args)
61 return 0
62 elif not filenames:
63 _print_no_files_skipped(hook, write, args)
64 return 0
65
66 # Print the hook and the dots first in case the hook takes hella long to
67 # run.
68 write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))
69 sys.stdout.flush()
70
71 retcode, stdout, stderr = repository.run_hook(hook, filenames)
72
73 if retcode != hook['expected_return_value']:
74 retcode = 1
75 print_color = color.RED
76 pass_fail = 'Failed'
77 else:
78 retcode = 0
79 print_color = color.GREEN
80 pass_fail = 'Passed'
81
82 write(color.format_color(pass_fail, print_color, args.color) + '\n')
83
84 if (stdout or stderr) and (retcode or args.verbose):
85 write('hookid: {0}\n'.format(hook['id']))
86 write('\n')
87 for output in (stdout, stderr):
88 if output.strip():
89 write(output.strip() + '\n')
90 write('\n')
91
92 return retcode
93
94
95 def _run_hooks(runner, args, write, environ):
96 """Actually run the hooks."""
97 retval = 0
98
99 skips = _get_skips(environ)
100
101 for repo in runner.repositories:
102 for _, hook in repo.hooks:
103 retval |= _run_single_hook(
104 runner, repo, hook, args, write, skips=skips,
105 )
106
107 return retval
108
109
110 def _run_hook(runner, args, write):
111 hook_id = args.hook
112 for repo in runner.repositories:
113 for hook_id_in_repo, hook in repo.hooks:
114 if hook_id == hook_id_in_repo:
115 return _run_single_hook(
116 runner, repo, hook, args, write=write,
117 )
118 else:
119 write('No hook with id `{0}`\n'.format(hook_id))
120 return 1
121
122
123 def _has_unmerged_paths(runner):
124 _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])
125 return bool(stdout.strip())
126
127
128 def run(runner, args, write=sys.stdout.write, environ=os.environ):
129 # Set up our logging handler
130 logger.addHandler(LoggingHandler(args.color, write=write))
131 logger.setLevel(logging.INFO)
132
133 # Check if we have unresolved merge conflict files and fail fast.
134 if _has_unmerged_paths(runner):
135 logger.error('Unmerged files. Resolve before committing.')
136 return 1
137
138 if args.no_stash or args.all_files:
139 ctx = noop_context()
140 else:
141 ctx = staged_files_only(runner.cmd_runner)
142
143 with ctx:
144 if args.hook:
145 return _run_hook(runner, args, write=write)
146 else:
147 return _run_hooks(runner, args, write=write, environ=environ)
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -9,6 +9,7 @@
from pre_commit import color
from pre_commit.logging_handler import LoggingHandler
from pre_commit.output import get_hook_message
+from pre_commit.output import sys_stdout_write_wrapper
from pre_commit.staged_files_only import staged_files_only
from pre_commit.util import noop_context
@@ -125,7 +126,7 @@
return bool(stdout.strip())
-def run(runner, args, write=sys.stdout.write, environ=os.environ):
+def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):
# Set up our logging handler
logger.addHandler(LoggingHandler(args.color, write=write))
logger.setLevel(logging.INFO)
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -1,8 +1,10 @@
from __future__ import unicode_literals
import subprocess
+import sys
from pre_commit import color
+from pre_commit import five
# TODO: smell: import side-effects
@@ -70,3 +72,14 @@
postfix,
color.format_color(end_msg, end_color, use_color),
)
+
+
+def sys_stdout_write_wrapper(s, stream=sys.stdout):
+ """Python 2.6 chokes on unicode being passed to sys.stdout.write.
+
+ This is an adapter because PY2 is ok with bytes and PY3 requires text.
+ """
+ assert type(s) is five.text
+ if five.PY2: # pragma: no cover (PY2)
+ s = s.encode('UTF-8')
+ stream.write(s)
|
{"golden_diff": "diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py\n--- a/pre_commit/commands/run.py\n+++ b/pre_commit/commands/run.py\n@@ -9,6 +9,7 @@\n from pre_commit import color\n from pre_commit.logging_handler import LoggingHandler\n from pre_commit.output import get_hook_message\n+from pre_commit.output import sys_stdout_write_wrapper\n from pre_commit.staged_files_only import staged_files_only\n from pre_commit.util import noop_context\n \n@@ -125,7 +126,7 @@\n return bool(stdout.strip())\n \n \n-def run(runner, args, write=sys.stdout.write, environ=os.environ):\n+def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color, write=write))\n logger.setLevel(logging.INFO)\ndiff --git a/pre_commit/output.py b/pre_commit/output.py\n--- a/pre_commit/output.py\n+++ b/pre_commit/output.py\n@@ -1,8 +1,10 @@\n from __future__ import unicode_literals\n \n import subprocess\n+import sys\n \n from pre_commit import color\n+from pre_commit import five\n \n \n # TODO: smell: import side-effects\n@@ -70,3 +72,14 @@\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n+\n+\n+def sys_stdout_write_wrapper(s, stream=sys.stdout):\n+ \"\"\"Python 2.6 chokes on unicode being passed to sys.stdout.write.\n+\n+ This is an adapter because PY2 is ok with bytes and PY3 requires text.\n+ \"\"\"\n+ assert type(s) is five.text\n+ if five.PY2: # pragma: no cover (PY2)\n+ s = s.encode('UTF-8')\n+ stream.write(s)\n", "issue": "UnicodeEncodeError when writing to stdout in python2.6 \n```\n$ pre-commit run fixmyjs\nfixmyjs............................................................................................................................................................................................Failed\nhookid: fixmyjs\n\nTraceback (most recent call last):\n File \"virtualenv_run/bin/pre-commit\", line 14, in <module>\n sys.exit(main())\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/util.py\", line 41, in wrapper\n return func(argv)\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/main.py\", line 99, in main\n return run(runner, args)\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py\", line 144, in run\n return _run_hook(runner, args, write=write)\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py\", line 116, in _run_hook\n return _run_single_hook(runner, repo, hook_id, args, write=write)\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py\", line 91, in _run_single_hook\n write(output.strip() + '\\n')\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\u2713' in position 0: ordinal not in range(128)\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport subprocess\n\nfrom pre_commit import color\n\n\n# TODO: smell: import side-effects\nCOLS = int(\n subprocess.Popen(\n ['tput', 'cols'], stdout=subprocess.PIPE\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n)\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and `use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n", "path": "pre_commit/output.py"}, {"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport sys\n\nfrom pre_commit import git\nfrom pre_commit import color\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.output import get_hook_message\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_skips(environ):\n skips = environ.get('SKIP', '')\n return set(skip.strip() for skip in skips.split(',') if skip.strip())\n\n\ndef _hook_msg_start(hook, verbose):\n return '{0}{1}'.format(\n '[{0}] '.format(hook['id']) if verbose else '',\n hook['name'],\n )\n\n\ndef _print_no_files_skipped(hook, write, args):\n write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n postfix='(no files to check) ',\n end_msg='Skipped',\n end_color=color.TURQUOISE,\n use_color=args.color,\n ))\n\n\ndef _print_user_skipped(hook, write, args):\n write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n end_msg='Skipped',\n end_color=color.YELLOW,\n use_color=args.color,\n ))\n\n\ndef _run_single_hook(runner, repository, hook, args, write, skips=set()):\n if args.all_files:\n get_filenames = git.get_all_files_matching\n elif git.is_in_merge_conflict():\n get_filenames = git.get_conflicted_files_matching\n else:\n get_filenames = git.get_staged_files_matching\n\n filenames = get_filenames(hook['files'], hook['exclude'])\n if hook['id'] in skips:\n _print_user_skipped(hook, write, args)\n return 0\n elif not filenames:\n _print_no_files_skipped(hook, write, args)\n return 0\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))\n sys.stdout.flush()\n\n retcode, stdout, stderr = repository.run_hook(hook, filenames)\n\n if retcode != hook['expected_return_value']:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n write(color.format_color(pass_fail, print_color, args.color) + '\\n')\n\n if (stdout or stderr) and (retcode or args.verbose):\n write('hookid: {0}\\n'.format(hook['id']))\n write('\\n')\n for output in (stdout, stderr):\n if output.strip():\n write(output.strip() + '\\n')\n write('\\n')\n\n return retcode\n\n\ndef _run_hooks(runner, args, write, environ):\n \"\"\"Actually run the hooks.\"\"\"\n retval = 0\n\n skips = _get_skips(environ)\n\n for repo in runner.repositories:\n for _, hook in repo.hooks:\n retval |= _run_single_hook(\n runner, repo, hook, args, write, skips=skips,\n )\n\n return retval\n\n\ndef _run_hook(runner, args, write):\n hook_id = args.hook\n for repo in runner.repositories:\n for hook_id_in_repo, hook in repo.hooks:\n if hook_id == hook_id_in_repo:\n return _run_single_hook(\n runner, repo, hook, args, write=write,\n )\n else:\n write('No hook with id `{0}`\\n'.format(hook_id))\n return 1\n\n\ndef _has_unmerged_paths(runner):\n _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])\n return bool(stdout.strip())\n\n\ndef run(runner, args, write=sys.stdout.write, environ=os.environ):\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color, write=write))\n logger.setLevel(logging.INFO)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths(runner):\n logger.error('Unmerged files. Resolve before committing.')\n return 1\n\n if args.no_stash or args.all_files:\n ctx = noop_context()\n else:\n ctx = staged_files_only(runner.cmd_runner)\n\n with ctx:\n if args.hook:\n return _run_hook(runner, args, write=write)\n else:\n return _run_hooks(runner, args, write=write, environ=environ)\n", "path": "pre_commit/commands/run.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport subprocess\nimport sys\n\nfrom pre_commit import color\nfrom pre_commit import five\n\n\n# TODO: smell: import side-effects\nCOLS = int(\n subprocess.Popen(\n ['tput', 'cols'], stdout=subprocess.PIPE\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n)\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and `use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n\n\ndef sys_stdout_write_wrapper(s, stream=sys.stdout):\n \"\"\"Python 2.6 chokes on unicode being passed to sys.stdout.write.\n\n This is an adapter because PY2 is ok with bytes and PY3 requires text.\n \"\"\"\n assert type(s) is five.text\n if five.PY2: # pragma: no cover (PY2)\n s = s.encode('UTF-8')\n stream.write(s)\n", "path": "pre_commit/output.py"}, {"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport sys\n\nfrom pre_commit import git\nfrom pre_commit import color\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.output import get_hook_message\nfrom pre_commit.output import sys_stdout_write_wrapper\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_skips(environ):\n skips = environ.get('SKIP', '')\n return set(skip.strip() for skip in skips.split(',') if skip.strip())\n\n\ndef _hook_msg_start(hook, verbose):\n return '{0}{1}'.format(\n '[{0}] '.format(hook['id']) if verbose else '',\n hook['name'],\n )\n\n\ndef _print_no_files_skipped(hook, write, args):\n write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n postfix='(no files to check) ',\n end_msg='Skipped',\n end_color=color.TURQUOISE,\n use_color=args.color,\n ))\n\n\ndef _print_user_skipped(hook, write, args):\n write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n end_msg='Skipped',\n end_color=color.YELLOW,\n use_color=args.color,\n ))\n\n\ndef _run_single_hook(runner, repository, hook, args, write, skips=set()):\n if args.all_files:\n get_filenames = git.get_all_files_matching\n elif git.is_in_merge_conflict():\n get_filenames = git.get_conflicted_files_matching\n else:\n get_filenames = git.get_staged_files_matching\n\n filenames = get_filenames(hook['files'], hook['exclude'])\n if hook['id'] in skips:\n _print_user_skipped(hook, write, args)\n return 0\n elif not filenames:\n _print_no_files_skipped(hook, write, args)\n return 0\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))\n sys.stdout.flush()\n\n retcode, stdout, stderr = repository.run_hook(hook, filenames)\n\n if retcode != hook['expected_return_value']:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n write(color.format_color(pass_fail, print_color, args.color) + '\\n')\n\n if (stdout or stderr) and (retcode or args.verbose):\n write('hookid: {0}\\n'.format(hook['id']))\n write('\\n')\n for output in (stdout, stderr):\n if output.strip():\n write(output.strip() + '\\n')\n write('\\n')\n\n return retcode\n\n\ndef _run_hooks(runner, args, write, environ):\n \"\"\"Actually run the hooks.\"\"\"\n retval = 0\n\n skips = _get_skips(environ)\n\n for repo in runner.repositories:\n for _, hook in repo.hooks:\n retval |= _run_single_hook(\n runner, repo, hook, args, write, skips=skips,\n )\n\n return retval\n\n\ndef _run_hook(runner, args, write):\n hook_id = args.hook\n for repo in runner.repositories:\n for hook_id_in_repo, hook in repo.hooks:\n if hook_id == hook_id_in_repo:\n return _run_single_hook(\n runner, repo, hook, args, write=write,\n )\n else:\n write('No hook with id `{0}`\\n'.format(hook_id))\n return 1\n\n\ndef _has_unmerged_paths(runner):\n _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])\n return bool(stdout.strip())\n\n\ndef run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color, write=write))\n logger.setLevel(logging.INFO)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths(runner):\n logger.error('Unmerged files. Resolve before committing.')\n return 1\n\n if args.no_stash or args.all_files:\n ctx = noop_context()\n else:\n ctx = staged_files_only(runner.cmd_runner)\n\n with ctx:\n if args.hook:\n return _run_hook(runner, args, write=write)\n else:\n return _run_hooks(runner, args, write=write, environ=environ)\n", "path": "pre_commit/commands/run.py"}]}
| 2,566 | 392 |
gh_patches_debug_27114
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1126
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix "similar-code" issue in website/activemembers/admin.py
Similar blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/svthalia/concrexit/website/activemembers/admin.py#issue_5eceacbde96d31000100042c
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/activemembers/admin.py`
Content:
```
1 """Registers admin interfaces for the activemembers module"""
2 import csv
3 import datetime
4
5 from django import forms
6 from django.contrib import admin, messages
7 from django.db.models import Q
8 from django.http import HttpResponse
9 from django.utils import timezone
10 from django.utils.translation import gettext_lazy as _
11
12 from activemembers import models
13 from activemembers.forms import MemberGroupMembershipForm, MemberGroupForm
14 from utils.snippets import datetime_to_lectureyear
15 from utils.translation import TranslatedModelAdmin
16
17
18 class MemberGroupMembershipInlineFormSet(forms.BaseInlineFormSet):
19 """
20 Solely here for performance reasons.
21
22 Needed because the `__str__()` of `MemberGroupMembership` (which is
23 displayed above each inline form) uses the username, name of the member
24 and name of the group.
25 """
26
27 def __init__(self, *args, **kwargs):
28 super().__init__(*args, **kwargs)
29 self.queryset = self.queryset.select_related("member", "group").filter(
30 until=None
31 )
32
33
34 class MemberGroupMembershipInline(admin.StackedInline):
35 """Inline for group memberships"""
36
37 model = models.MemberGroupMembership
38 formset = MemberGroupMembershipInlineFormSet
39 can_delete = False
40 ordering = ("since",)
41 extra = 0
42 autocomplete_fields = ("member",)
43
44
45 @admin.register(models.Committee)
46 class CommitteeAdmin(TranslatedModelAdmin):
47 """Manage the committees"""
48
49 inlines = (MemberGroupMembershipInline,)
50 form = MemberGroupForm
51 list_display = ("name", "since", "until", "active", "email")
52 list_filter = (
53 "until",
54 "active",
55 )
56 search_fields = ("name", "description")
57 filter_horizontal = ("permissions",)
58
59 fields = (
60 "name",
61 "description",
62 "photo",
63 "permissions",
64 "since",
65 "until",
66 "contact_mailinglist",
67 "contact_email",
68 "active",
69 "display_members",
70 )
71
72 def email(self, instance):
73 if instance.contact_email:
74 return instance.contact_email
75 elif instance.contact_mailinglist:
76 return instance.contact_mailinglist.name + "@thalia.nu"
77 return None
78
79
80 @admin.register(models.Society)
81 class SocietyAdmin(TranslatedModelAdmin):
82 """Manage the societies"""
83
84 inlines = (MemberGroupMembershipInline,)
85 form = MemberGroupForm
86 list_display = ("name", "since", "until", "active", "email")
87 list_filter = (
88 "until",
89 "active",
90 )
91 search_fields = ("name", "description")
92 filter_horizontal = ("permissions",)
93
94 fields = (
95 "name",
96 "description",
97 "photo",
98 "permissions",
99 "since",
100 "until",
101 "contact_mailinglist",
102 "contact_email",
103 "active",
104 "display_members",
105 )
106
107 def email(self, instance):
108 if instance.contact_email:
109 return instance.contact_email
110 elif instance.contact_mailinglist:
111 return instance.contact_mailinglist.name + "@thalia.nu"
112 return None
113
114
115 @admin.register(models.Board)
116 class BoardAdmin(TranslatedModelAdmin):
117 """Manage the board"""
118
119 inlines = (MemberGroupMembershipInline,)
120 form = MemberGroupForm
121 exclude = ("is_board",)
122 filter_horizontal = ("permissions",)
123
124 fields = (
125 "name",
126 "description",
127 "photo",
128 "permissions",
129 "contact_mailinglist",
130 "contact_email",
131 "since",
132 "until",
133 "display_members",
134 )
135
136
137 class TypeFilter(admin.SimpleListFilter):
138 """Filter memberships on board-only"""
139
140 title = _("group memberships")
141 parameter_name = "group_type"
142
143 def lookups(self, request, model_admin):
144 return [
145 ("boards", _("Only boards")),
146 ("committees", _("Only committees")),
147 ("societies", _("Only societies")),
148 ]
149
150 def queryset(self, request, queryset):
151 if self.value() == "boards":
152 return queryset.exclude(group__board=None)
153 elif self.value() == "committees":
154 return queryset.exclude(group__committee=None)
155 elif self.value() == "societies":
156 return queryset.exclude(group__society=None)
157
158 return queryset
159
160
161 class LectureYearFilter(admin.SimpleListFilter):
162 """Filter the memberships on those started or ended in a lecture year"""
163
164 title = _("lecture year")
165 parameter_name = "lecture_year"
166
167 def lookups(self, request, model_admin):
168 current_year = datetime_to_lectureyear(timezone.now())
169 first_year = datetime_to_lectureyear(
170 models.MemberGroupMembership.objects.earliest("since").since
171 )
172
173 return [
174 (year, "{}-{}".format(year, year + 1))
175 for year in range(first_year, current_year + 1)
176 ]
177
178 def queryset(self, request, queryset):
179 if not self.value():
180 return queryset
181
182 year = int(self.value())
183 first_of_september = datetime.date(year=year, month=9, day=1)
184
185 return queryset.exclude(until__lt=first_of_september)
186
187
188 class ActiveMembershipsFilter(admin.SimpleListFilter):
189 """Filter the memberships by whether they are active or not"""
190
191 title = _("active memberships")
192 parameter_name = "active"
193
194 def lookups(self, request, model_name):
195 return (
196 ("active", _("Active")),
197 ("inactive", _("Inactive")),
198 )
199
200 def queryset(self, request, queryset):
201 now = timezone.now()
202
203 if self.value() == "active":
204 return queryset.filter(Q(until__isnull=True) | Q(until__gte=now))
205
206 if self.value() == "inactive":
207 return queryset.filter(until__lt=now)
208
209
210 @admin.register(models.MemberGroupMembership)
211 class MemberGroupMembershipAdmin(TranslatedModelAdmin):
212 """Manage the group memberships"""
213
214 form = MemberGroupMembershipForm
215 list_display = ("member", "group", "since", "until", "chair", "role")
216 list_filter = ("group", TypeFilter, LectureYearFilter, ActiveMembershipsFilter)
217 list_select_related = (
218 "member",
219 "group",
220 )
221 search_fields = ("member__first_name", "member__last_name", "member__email")
222 date_hierarchy = "since"
223 actions = ("export",)
224
225 def changelist_view(self, request, extra_context=None):
226 self.message_user(
227 request,
228 _(
229 "Do not edit existing memberships if the "
230 "chair of a group has changed, add a "
231 "new membership instead."
232 ),
233 messages.WARNING,
234 )
235 return super().changelist_view(request, extra_context)
236
237 def export(self, request, queryset):
238 response = HttpResponse(content_type="text/csv")
239 response["Content-Disposition"] = (
240 "attachment;" "filename=" '"group_memberships.csv"'
241 )
242 writer = csv.writer(response)
243 writer.writerow(
244 [
245 _("First name"),
246 _("Last name"),
247 _("Email"),
248 _("Group"),
249 _("Member since"),
250 _("Member until"),
251 _("Chair of the group"),
252 _("Role"),
253 ]
254 )
255
256 for membership in queryset:
257 writer.writerow(
258 [
259 membership.member.first_name,
260 membership.member.last_name,
261 membership.member.email,
262 membership.group,
263 membership.since,
264 membership.until,
265 membership.chair,
266 membership.role,
267 ]
268 )
269
270 return response
271
272 export.short_description = _("Export selected memberships")
273
274
275 @admin.register(models.Mentorship)
276 class MentorshipAdmin(admin.ModelAdmin):
277 """Manage the mentorships"""
278
279 autocomplete_fields = ("member",)
280 search_fields = ("member__first_name", "member__last_name")
281 list_filter = ("year",)
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/activemembers/admin.py b/website/activemembers/admin.py
--- a/website/activemembers/admin.py
+++ b/website/activemembers/admin.py
@@ -42,9 +42,8 @@
autocomplete_fields = ("member",)
[email protected](models.Committee)
-class CommitteeAdmin(TranslatedModelAdmin):
- """Manage the committees"""
+class MemberGroupAdmin(TranslatedModelAdmin):
+ """Manage the member groups"""
inlines = (MemberGroupMembershipInline,)
form = MemberGroupForm
@@ -77,39 +76,18 @@
return None
[email protected](models.Society)
-class SocietyAdmin(TranslatedModelAdmin):
- """Manage the societies"""
[email protected](models.Committee)
+class CommitteeAdmin(MemberGroupAdmin):
+ """Manage the committees"""
- inlines = (MemberGroupMembershipInline,)
- form = MemberGroupForm
- list_display = ("name", "since", "until", "active", "email")
- list_filter = (
- "until",
- "active",
- )
- search_fields = ("name", "description")
- filter_horizontal = ("permissions",)
+ pass
- fields = (
- "name",
- "description",
- "photo",
- "permissions",
- "since",
- "until",
- "contact_mailinglist",
- "contact_email",
- "active",
- "display_members",
- )
- def email(self, instance):
- if instance.contact_email:
- return instance.contact_email
- elif instance.contact_mailinglist:
- return instance.contact_mailinglist.name + "@thalia.nu"
- return None
[email protected](models.Society)
+class SocietyAdmin(MemberGroupAdmin):
+ """Manage the societies"""
+
+ pass
@admin.register(models.Board)
|
{"golden_diff": "diff --git a/website/activemembers/admin.py b/website/activemembers/admin.py\n--- a/website/activemembers/admin.py\n+++ b/website/activemembers/admin.py\n@@ -42,9 +42,8 @@\n autocomplete_fields = (\"member\",)\n \n \[email protected](models.Committee)\n-class CommitteeAdmin(TranslatedModelAdmin):\n- \"\"\"Manage the committees\"\"\"\n+class MemberGroupAdmin(TranslatedModelAdmin):\n+ \"\"\"Manage the member groups\"\"\"\n \n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n@@ -77,39 +76,18 @@\n return None\n \n \[email protected](models.Society)\n-class SocietyAdmin(TranslatedModelAdmin):\n- \"\"\"Manage the societies\"\"\"\[email protected](models.Committee)\n+class CommitteeAdmin(MemberGroupAdmin):\n+ \"\"\"Manage the committees\"\"\"\n \n- inlines = (MemberGroupMembershipInline,)\n- form = MemberGroupForm\n- list_display = (\"name\", \"since\", \"until\", \"active\", \"email\")\n- list_filter = (\n- \"until\",\n- \"active\",\n- )\n- search_fields = (\"name\", \"description\")\n- filter_horizontal = (\"permissions\",)\n+ pass\n \n- fields = (\n- \"name\",\n- \"description\",\n- \"photo\",\n- \"permissions\",\n- \"since\",\n- \"until\",\n- \"contact_mailinglist\",\n- \"contact_email\",\n- \"active\",\n- \"display_members\",\n- )\n \n- def email(self, instance):\n- if instance.contact_email:\n- return instance.contact_email\n- elif instance.contact_mailinglist:\n- return instance.contact_mailinglist.name + \"@thalia.nu\"\n- return None\[email protected](models.Society)\n+class SocietyAdmin(MemberGroupAdmin):\n+ \"\"\"Manage the societies\"\"\"\n+\n+ pass\n \n \n @admin.register(models.Board)\n", "issue": "Fix \"similar-code\" issue in website/activemembers/admin.py\nSimilar blocks of code found in 2 locations. Consider refactoring.\n\nhttps://codeclimate.com/github/svthalia/concrexit/website/activemembers/admin.py#issue_5eceacbde96d31000100042c\n", "before_files": [{"content": "\"\"\"Registers admin interfaces for the activemembers module\"\"\"\nimport csv\nimport datetime\n\nfrom django import forms\nfrom django.contrib import admin, messages\nfrom django.db.models import Q\nfrom django.http import HttpResponse\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom activemembers import models\nfrom activemembers.forms import MemberGroupMembershipForm, MemberGroupForm\nfrom utils.snippets import datetime_to_lectureyear\nfrom utils.translation import TranslatedModelAdmin\n\n\nclass MemberGroupMembershipInlineFormSet(forms.BaseInlineFormSet):\n \"\"\"\n Solely here for performance reasons.\n\n Needed because the `__str__()` of `MemberGroupMembership` (which is\n displayed above each inline form) uses the username, name of the member\n and name of the group.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.queryset = self.queryset.select_related(\"member\", \"group\").filter(\n until=None\n )\n\n\nclass MemberGroupMembershipInline(admin.StackedInline):\n \"\"\"Inline for group memberships\"\"\"\n\n model = models.MemberGroupMembership\n formset = MemberGroupMembershipInlineFormSet\n can_delete = False\n ordering = (\"since\",)\n extra = 0\n autocomplete_fields = (\"member\",)\n\n\[email protected](models.Committee)\nclass CommitteeAdmin(TranslatedModelAdmin):\n \"\"\"Manage the committees\"\"\"\n\n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n list_display = (\"name\", \"since\", \"until\", \"active\", \"email\")\n list_filter = (\n \"until\",\n \"active\",\n )\n search_fields = (\"name\", \"description\")\n filter_horizontal = (\"permissions\",)\n\n fields = (\n \"name\",\n \"description\",\n \"photo\",\n \"permissions\",\n \"since\",\n \"until\",\n \"contact_mailinglist\",\n \"contact_email\",\n \"active\",\n \"display_members\",\n )\n\n def email(self, instance):\n if instance.contact_email:\n return instance.contact_email\n elif instance.contact_mailinglist:\n return instance.contact_mailinglist.name + \"@thalia.nu\"\n return None\n\n\[email protected](models.Society)\nclass SocietyAdmin(TranslatedModelAdmin):\n \"\"\"Manage the societies\"\"\"\n\n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n list_display = (\"name\", \"since\", \"until\", \"active\", \"email\")\n list_filter = (\n \"until\",\n \"active\",\n )\n search_fields = (\"name\", \"description\")\n filter_horizontal = (\"permissions\",)\n\n fields = (\n \"name\",\n \"description\",\n \"photo\",\n \"permissions\",\n \"since\",\n \"until\",\n \"contact_mailinglist\",\n \"contact_email\",\n \"active\",\n \"display_members\",\n )\n\n def email(self, instance):\n if instance.contact_email:\n return instance.contact_email\n elif instance.contact_mailinglist:\n return instance.contact_mailinglist.name + \"@thalia.nu\"\n return None\n\n\[email protected](models.Board)\nclass BoardAdmin(TranslatedModelAdmin):\n \"\"\"Manage the board\"\"\"\n\n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n exclude = (\"is_board\",)\n filter_horizontal = (\"permissions\",)\n\n fields = (\n \"name\",\n \"description\",\n \"photo\",\n \"permissions\",\n \"contact_mailinglist\",\n \"contact_email\",\n \"since\",\n \"until\",\n \"display_members\",\n )\n\n\nclass TypeFilter(admin.SimpleListFilter):\n \"\"\"Filter memberships on board-only\"\"\"\n\n title = _(\"group memberships\")\n parameter_name = \"group_type\"\n\n def lookups(self, request, model_admin):\n return [\n (\"boards\", _(\"Only boards\")),\n (\"committees\", _(\"Only committees\")),\n (\"societies\", _(\"Only societies\")),\n ]\n\n def queryset(self, request, queryset):\n if self.value() == \"boards\":\n return queryset.exclude(group__board=None)\n elif self.value() == \"committees\":\n return queryset.exclude(group__committee=None)\n elif self.value() == \"societies\":\n return queryset.exclude(group__society=None)\n\n return queryset\n\n\nclass LectureYearFilter(admin.SimpleListFilter):\n \"\"\"Filter the memberships on those started or ended in a lecture year\"\"\"\n\n title = _(\"lecture year\")\n parameter_name = \"lecture_year\"\n\n def lookups(self, request, model_admin):\n current_year = datetime_to_lectureyear(timezone.now())\n first_year = datetime_to_lectureyear(\n models.MemberGroupMembership.objects.earliest(\"since\").since\n )\n\n return [\n (year, \"{}-{}\".format(year, year + 1))\n for year in range(first_year, current_year + 1)\n ]\n\n def queryset(self, request, queryset):\n if not self.value():\n return queryset\n\n year = int(self.value())\n first_of_september = datetime.date(year=year, month=9, day=1)\n\n return queryset.exclude(until__lt=first_of_september)\n\n\nclass ActiveMembershipsFilter(admin.SimpleListFilter):\n \"\"\"Filter the memberships by whether they are active or not\"\"\"\n\n title = _(\"active memberships\")\n parameter_name = \"active\"\n\n def lookups(self, request, model_name):\n return (\n (\"active\", _(\"Active\")),\n (\"inactive\", _(\"Inactive\")),\n )\n\n def queryset(self, request, queryset):\n now = timezone.now()\n\n if self.value() == \"active\":\n return queryset.filter(Q(until__isnull=True) | Q(until__gte=now))\n\n if self.value() == \"inactive\":\n return queryset.filter(until__lt=now)\n\n\[email protected](models.MemberGroupMembership)\nclass MemberGroupMembershipAdmin(TranslatedModelAdmin):\n \"\"\"Manage the group memberships\"\"\"\n\n form = MemberGroupMembershipForm\n list_display = (\"member\", \"group\", \"since\", \"until\", \"chair\", \"role\")\n list_filter = (\"group\", TypeFilter, LectureYearFilter, ActiveMembershipsFilter)\n list_select_related = (\n \"member\",\n \"group\",\n )\n search_fields = (\"member__first_name\", \"member__last_name\", \"member__email\")\n date_hierarchy = \"since\"\n actions = (\"export\",)\n\n def changelist_view(self, request, extra_context=None):\n self.message_user(\n request,\n _(\n \"Do not edit existing memberships if the \"\n \"chair of a group has changed, add a \"\n \"new membership instead.\"\n ),\n messages.WARNING,\n )\n return super().changelist_view(request, extra_context)\n\n def export(self, request, queryset):\n response = HttpResponse(content_type=\"text/csv\")\n response[\"Content-Disposition\"] = (\n \"attachment;\" \"filename=\" '\"group_memberships.csv\"'\n )\n writer = csv.writer(response)\n writer.writerow(\n [\n _(\"First name\"),\n _(\"Last name\"),\n _(\"Email\"),\n _(\"Group\"),\n _(\"Member since\"),\n _(\"Member until\"),\n _(\"Chair of the group\"),\n _(\"Role\"),\n ]\n )\n\n for membership in queryset:\n writer.writerow(\n [\n membership.member.first_name,\n membership.member.last_name,\n membership.member.email,\n membership.group,\n membership.since,\n membership.until,\n membership.chair,\n membership.role,\n ]\n )\n\n return response\n\n export.short_description = _(\"Export selected memberships\")\n\n\[email protected](models.Mentorship)\nclass MentorshipAdmin(admin.ModelAdmin):\n \"\"\"Manage the mentorships\"\"\"\n\n autocomplete_fields = (\"member\",)\n search_fields = (\"member__first_name\", \"member__last_name\")\n list_filter = (\"year\",)\n", "path": "website/activemembers/admin.py"}], "after_files": [{"content": "\"\"\"Registers admin interfaces for the activemembers module\"\"\"\nimport csv\nimport datetime\n\nfrom django import forms\nfrom django.contrib import admin, messages\nfrom django.db.models import Q\nfrom django.http import HttpResponse\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom activemembers import models\nfrom activemembers.forms import MemberGroupMembershipForm, MemberGroupForm\nfrom utils.snippets import datetime_to_lectureyear\nfrom utils.translation import TranslatedModelAdmin\n\n\nclass MemberGroupMembershipInlineFormSet(forms.BaseInlineFormSet):\n \"\"\"\n Solely here for performance reasons.\n\n Needed because the `__str__()` of `MemberGroupMembership` (which is\n displayed above each inline form) uses the username, name of the member\n and name of the group.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.queryset = self.queryset.select_related(\"member\", \"group\").filter(\n until=None\n )\n\n\nclass MemberGroupMembershipInline(admin.StackedInline):\n \"\"\"Inline for group memberships\"\"\"\n\n model = models.MemberGroupMembership\n formset = MemberGroupMembershipInlineFormSet\n can_delete = False\n ordering = (\"since\",)\n extra = 0\n autocomplete_fields = (\"member\",)\n\n\nclass MemberGroupAdmin(TranslatedModelAdmin):\n \"\"\"Manage the member groups\"\"\"\n\n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n list_display = (\"name\", \"since\", \"until\", \"active\", \"email\")\n list_filter = (\n \"until\",\n \"active\",\n )\n search_fields = (\"name\", \"description\")\n filter_horizontal = (\"permissions\",)\n\n fields = (\n \"name\",\n \"description\",\n \"photo\",\n \"permissions\",\n \"since\",\n \"until\",\n \"contact_mailinglist\",\n \"contact_email\",\n \"active\",\n \"display_members\",\n )\n\n def email(self, instance):\n if instance.contact_email:\n return instance.contact_email\n elif instance.contact_mailinglist:\n return instance.contact_mailinglist.name + \"@thalia.nu\"\n return None\n\n\[email protected](models.Committee)\nclass CommitteeAdmin(MemberGroupAdmin):\n \"\"\"Manage the committees\"\"\"\n\n pass\n\n\[email protected](models.Society)\nclass SocietyAdmin(MemberGroupAdmin):\n \"\"\"Manage the societies\"\"\"\n\n pass\n\n\[email protected](models.Board)\nclass BoardAdmin(TranslatedModelAdmin):\n \"\"\"Manage the board\"\"\"\n\n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n exclude = (\"is_board\",)\n filter_horizontal = (\"permissions\",)\n\n fields = (\n \"name\",\n \"description\",\n \"photo\",\n \"permissions\",\n \"contact_mailinglist\",\n \"contact_email\",\n \"since\",\n \"until\",\n \"display_members\",\n )\n\n\nclass TypeFilter(admin.SimpleListFilter):\n \"\"\"Filter memberships on board-only\"\"\"\n\n title = _(\"group memberships\")\n parameter_name = \"group_type\"\n\n def lookups(self, request, model_admin):\n return [\n (\"boards\", _(\"Only boards\")),\n (\"committees\", _(\"Only committees\")),\n (\"societies\", _(\"Only societies\")),\n ]\n\n def queryset(self, request, queryset):\n if self.value() == \"boards\":\n return queryset.exclude(group__board=None)\n elif self.value() == \"committees\":\n return queryset.exclude(group__committee=None)\n elif self.value() == \"societies\":\n return queryset.exclude(group__society=None)\n\n return queryset\n\n\nclass LectureYearFilter(admin.SimpleListFilter):\n \"\"\"Filter the memberships on those started or ended in a lecture year\"\"\"\n\n title = _(\"lecture year\")\n parameter_name = \"lecture_year\"\n\n def lookups(self, request, model_admin):\n current_year = datetime_to_lectureyear(timezone.now())\n first_year = datetime_to_lectureyear(\n models.MemberGroupMembership.objects.earliest(\"since\").since\n )\n\n return [\n (year, \"{}-{}\".format(year, year + 1))\n for year in range(first_year, current_year + 1)\n ]\n\n def queryset(self, request, queryset):\n if not self.value():\n return queryset\n\n year = int(self.value())\n first_of_september = datetime.date(year=year, month=9, day=1)\n\n return queryset.exclude(until__lt=first_of_september)\n\n\nclass ActiveMembershipsFilter(admin.SimpleListFilter):\n \"\"\"Filter the memberships by whether they are active or not\"\"\"\n\n title = _(\"active memberships\")\n parameter_name = \"active\"\n\n def lookups(self, request, model_name):\n return (\n (\"active\", _(\"Active\")),\n (\"inactive\", _(\"Inactive\")),\n )\n\n def queryset(self, request, queryset):\n now = timezone.now()\n\n if self.value() == \"active\":\n return queryset.filter(Q(until__isnull=True) | Q(until__gte=now))\n\n if self.value() == \"inactive\":\n return queryset.filter(until__lt=now)\n\n\[email protected](models.MemberGroupMembership)\nclass MemberGroupMembershipAdmin(TranslatedModelAdmin):\n \"\"\"Manage the group memberships\"\"\"\n\n form = MemberGroupMembershipForm\n list_display = (\"member\", \"group\", \"since\", \"until\", \"chair\", \"role\")\n list_filter = (\"group\", TypeFilter, LectureYearFilter, ActiveMembershipsFilter)\n list_select_related = (\n \"member\",\n \"group\",\n )\n search_fields = (\"member__first_name\", \"member__last_name\", \"member__email\")\n date_hierarchy = \"since\"\n actions = (\"export\",)\n\n def changelist_view(self, request, extra_context=None):\n self.message_user(\n request,\n _(\n \"Do not edit existing memberships if the \"\n \"chair of a group has changed, add a \"\n \"new membership instead.\"\n ),\n messages.WARNING,\n )\n return super().changelist_view(request, extra_context)\n\n def export(self, request, queryset):\n response = HttpResponse(content_type=\"text/csv\")\n response[\"Content-Disposition\"] = (\n \"attachment;\" \"filename=\" '\"group_memberships.csv\"'\n )\n writer = csv.writer(response)\n writer.writerow(\n [\n _(\"First name\"),\n _(\"Last name\"),\n _(\"Email\"),\n _(\"Group\"),\n _(\"Member since\"),\n _(\"Member until\"),\n _(\"Chair of the group\"),\n _(\"Role\"),\n ]\n )\n\n for membership in queryset:\n writer.writerow(\n [\n membership.member.first_name,\n membership.member.last_name,\n membership.member.email,\n membership.group,\n membership.since,\n membership.until,\n membership.chair,\n membership.role,\n ]\n )\n\n return response\n\n export.short_description = _(\"Export selected memberships\")\n\n\[email protected](models.Mentorship)\nclass MentorshipAdmin(admin.ModelAdmin):\n \"\"\"Manage the mentorships\"\"\"\n\n autocomplete_fields = (\"member\",)\n search_fields = (\"member__first_name\", \"member__last_name\")\n list_filter = (\"year\",)\n", "path": "website/activemembers/admin.py"}]}
| 2,760 | 423 |
gh_patches_debug_31932
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-2390
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot load SavedModel with RSquare metric
**System information**
- macOS 11.2.1
- TensorFlow 2.4.1 , via pip install tensorflow
- TensorFlow-Addons 0.12.1 , via pip install tensorflow_addons
- Python version: 3.7.7
- Is GPU used? no
**Describe the bug**
I have a saved keras model.
If the model uses the RSquare metric, I am not able to load it back.
But with the exact same model defined without the RSquare metric, I do not have any issue.
**Code to reproduce the issue**
```
import tensorflow as tf
import tensorflow_addons as tfa
USE_R2 = True
model = tf.keras.models.Sequential(tf.keras.layers.Dense(1))
if USE_R2:
metrics = [tfa.metrics.RSquare(y_shape=(1,))]
else:
metrics = None
model.compile(loss='mse', metrics=metrics)
x = tf.constant( [[1, 2, 3, 4]] )
y = tf.constant( [[1]] )
model.fit(x,y)
model.save('./tmp')
model = tf.keras.models.load_model('./tmp')
```
**Other info / logs**
I get the following error:
> ValueError: Shapes () and (1,) are incompatible
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/metrics/r_square.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Implements R^2 scores."""
16 from typing import Tuple
17
18 import tensorflow as tf
19 from tensorflow.keras import backend as K
20 from tensorflow.keras.metrics import Metric
21 from tensorflow.python.ops import weights_broadcast_ops
22
23 from typeguard import typechecked
24 from tensorflow_addons.utils.types import AcceptableDTypes
25
26
27 VALID_MULTIOUTPUT = {"raw_values", "uniform_average", "variance_weighted"}
28
29
30 def _reduce_average(
31 input_tensor: tf.Tensor, axis=None, keepdims=False, weights=None
32 ) -> tf.Tensor:
33 """Computes the (weighted) mean of elements across dimensions of a tensor."""
34 if weights is None:
35 return tf.reduce_mean(input_tensor, axis=axis, keepdims=keepdims)
36
37 weighted_sum = tf.reduce_sum(weights * input_tensor, axis=axis, keepdims=keepdims)
38 sum_of_weights = tf.reduce_sum(weights, axis=axis, keepdims=keepdims)
39 average = weighted_sum / sum_of_weights
40 return average
41
42
43 @tf.keras.utils.register_keras_serializable(package="Addons")
44 class RSquare(Metric):
45 """Compute R^2 score.
46
47 This is also called the [coefficient of determination
48 ](https://en.wikipedia.org/wiki/Coefficient_of_determination).
49 It tells how close are data to the fitted regression line.
50
51 - Highest score can be 1.0 and it indicates that the predictors
52 perfectly accounts for variation in the target.
53 - Score 0.0 indicates that the predictors do not
54 account for variation in the target.
55 - It can also be negative if the model is worse.
56
57 The sample weighting for this metric implementation mimics the
58 behaviour of the [scikit-learn implementation
59 ](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html)
60 of the same metric.
61
62 Args:
63 multioutput: `string`, the reduce method for scores.
64 Should be one of `["raw_values", "uniform_average", "variance_weighted"]`.
65 name: (Optional) string name of the metric instance.
66 dtype: (Optional) data type of the metric result.
67
68 Usage:
69
70 >>> y_true = np.array([1, 4, 3], dtype=np.float32)
71 >>> y_pred = np.array([2, 4, 4], dtype=np.float32)
72 >>> metric = tfa.metrics.r_square.RSquare()
73 >>> metric.update_state(y_true, y_pred)
74 >>> result = metric.result()
75 >>> result.numpy()
76 0.57142854
77 """
78
79 @typechecked
80 def __init__(
81 self,
82 name: str = "r_square",
83 dtype: AcceptableDTypes = None,
84 y_shape: Tuple[int, ...] = (),
85 multioutput: str = "uniform_average",
86 **kwargs,
87 ):
88 super().__init__(name=name, dtype=dtype, **kwargs)
89 self.y_shape = y_shape
90
91 if multioutput not in VALID_MULTIOUTPUT:
92 raise ValueError(
93 "The multioutput argument must be one of {}, but was: {}".format(
94 VALID_MULTIOUTPUT, multioutput
95 )
96 )
97 self.multioutput = multioutput
98 self.squared_sum = self.add_weight(
99 name="squared_sum", shape=y_shape, initializer="zeros", dtype=dtype
100 )
101 self.sum = self.add_weight(
102 name="sum", shape=y_shape, initializer="zeros", dtype=dtype
103 )
104 self.res = self.add_weight(
105 name="residual", shape=y_shape, initializer="zeros", dtype=dtype
106 )
107 self.count = self.add_weight(
108 name="count", shape=y_shape, initializer="zeros", dtype=dtype
109 )
110
111 def update_state(self, y_true, y_pred, sample_weight=None) -> None:
112 y_true = tf.cast(y_true, dtype=self._dtype)
113 y_pred = tf.cast(y_pred, dtype=self._dtype)
114 if sample_weight is None:
115 sample_weight = 1
116 sample_weight = tf.cast(sample_weight, dtype=self._dtype)
117 sample_weight = weights_broadcast_ops.broadcast_weights(
118 weights=sample_weight, values=y_true
119 )
120
121 weighted_y_true = y_true * sample_weight
122 self.sum.assign_add(tf.reduce_sum(weighted_y_true, axis=0))
123 self.squared_sum.assign_add(tf.reduce_sum(y_true * weighted_y_true, axis=0))
124 self.res.assign_add(
125 tf.reduce_sum((y_true - y_pred) ** 2 * sample_weight, axis=0)
126 )
127 self.count.assign_add(tf.reduce_sum(sample_weight, axis=0))
128
129 def result(self) -> tf.Tensor:
130 mean = self.sum / self.count
131 total = self.squared_sum - self.sum * mean
132 raw_scores = 1 - (self.res / total)
133 raw_scores = tf.where(tf.math.is_inf(raw_scores), 0.0, raw_scores)
134
135 if self.multioutput == "raw_values":
136 return raw_scores
137 if self.multioutput == "uniform_average":
138 return tf.reduce_mean(raw_scores)
139 if self.multioutput == "variance_weighted":
140 return _reduce_average(raw_scores, weights=total)
141 raise RuntimeError(
142 "The multioutput attribute must be one of {}, but was: {}".format(
143 VALID_MULTIOUTPUT, self.multioutput
144 )
145 )
146
147 def reset_states(self) -> None:
148 # The state of the metric will be reset at the start of each epoch.
149 K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensorflow_addons/metrics/r_square.py b/tensorflow_addons/metrics/r_square.py
--- a/tensorflow_addons/metrics/r_square.py
+++ b/tensorflow_addons/metrics/r_square.py
@@ -24,7 +24,7 @@
from tensorflow_addons.utils.types import AcceptableDTypes
-VALID_MULTIOUTPUT = {"raw_values", "uniform_average", "variance_weighted"}
+_VALID_MULTIOUTPUT = {"raw_values", "uniform_average", "variance_weighted"}
def _reduce_average(
@@ -88,10 +88,10 @@
super().__init__(name=name, dtype=dtype, **kwargs)
self.y_shape = y_shape
- if multioutput not in VALID_MULTIOUTPUT:
+ if multioutput not in _VALID_MULTIOUTPUT:
raise ValueError(
"The multioutput argument must be one of {}, but was: {}".format(
- VALID_MULTIOUTPUT, multioutput
+ _VALID_MULTIOUTPUT, multioutput
)
)
self.multioutput = multioutput
@@ -138,12 +138,15 @@
return tf.reduce_mean(raw_scores)
if self.multioutput == "variance_weighted":
return _reduce_average(raw_scores, weights=total)
- raise RuntimeError(
- "The multioutput attribute must be one of {}, but was: {}".format(
- VALID_MULTIOUTPUT, self.multioutput
- )
- )
def reset_states(self) -> None:
# The state of the metric will be reset at the start of each epoch.
K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])
+
+ def get_config(self):
+ config = {
+ "y_shape": self.y_shape,
+ "multioutput": self.multioutput,
+ }
+ base_config = super().get_config()
+ return {**base_config, **config}
|
{"golden_diff": "diff --git a/tensorflow_addons/metrics/r_square.py b/tensorflow_addons/metrics/r_square.py\n--- a/tensorflow_addons/metrics/r_square.py\n+++ b/tensorflow_addons/metrics/r_square.py\n@@ -24,7 +24,7 @@\n from tensorflow_addons.utils.types import AcceptableDTypes\n \n \n-VALID_MULTIOUTPUT = {\"raw_values\", \"uniform_average\", \"variance_weighted\"}\n+_VALID_MULTIOUTPUT = {\"raw_values\", \"uniform_average\", \"variance_weighted\"}\n \n \n def _reduce_average(\n@@ -88,10 +88,10 @@\n super().__init__(name=name, dtype=dtype, **kwargs)\n self.y_shape = y_shape\n \n- if multioutput not in VALID_MULTIOUTPUT:\n+ if multioutput not in _VALID_MULTIOUTPUT:\n raise ValueError(\n \"The multioutput argument must be one of {}, but was: {}\".format(\n- VALID_MULTIOUTPUT, multioutput\n+ _VALID_MULTIOUTPUT, multioutput\n )\n )\n self.multioutput = multioutput\n@@ -138,12 +138,15 @@\n return tf.reduce_mean(raw_scores)\n if self.multioutput == \"variance_weighted\":\n return _reduce_average(raw_scores, weights=total)\n- raise RuntimeError(\n- \"The multioutput attribute must be one of {}, but was: {}\".format(\n- VALID_MULTIOUTPUT, self.multioutput\n- )\n- )\n \n def reset_states(self) -> None:\n # The state of the metric will be reset at the start of each epoch.\n K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])\n+\n+ def get_config(self):\n+ config = {\n+ \"y_shape\": self.y_shape,\n+ \"multioutput\": self.multioutput,\n+ }\n+ base_config = super().get_config()\n+ return {**base_config, **config}\n", "issue": "Cannot load SavedModel with RSquare metric\n**System information**\r\n- macOS 11.2.1\r\n- TensorFlow 2.4.1 , via pip install tensorflow\r\n- TensorFlow-Addons 0.12.1 , via pip install tensorflow_addons\r\n- Python version: 3.7.7\r\n- Is GPU used? no\r\n\r\n**Describe the bug**\r\n\r\nI have a saved keras model. \r\nIf the model uses the RSquare metric, I am not able to load it back. \r\nBut with the exact same model defined without the RSquare metric, I do not have any issue.\r\n\r\n**Code to reproduce the issue**\r\n\r\n```\r\nimport tensorflow as tf\r\nimport tensorflow_addons as tfa\r\n\r\nUSE_R2 = True\r\n\r\nmodel = tf.keras.models.Sequential(tf.keras.layers.Dense(1))\r\nif USE_R2:\r\n metrics = [tfa.metrics.RSquare(y_shape=(1,))]\r\nelse:\r\n metrics = None\r\nmodel.compile(loss='mse', metrics=metrics)\r\n\r\nx = tf.constant( [[1, 2, 3, 4]] )\r\ny = tf.constant( [[1]] )\r\n\r\nmodel.fit(x,y)\r\n\r\nmodel.save('./tmp')\r\nmodel = tf.keras.models.load_model('./tmp')\r\n```\r\n\r\n**Other info / logs**\r\n\r\nI get the following error:\r\n\r\n> ValueError: Shapes () and (1,) are incompatible\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements R^2 scores.\"\"\"\nfrom typing import Tuple\n\nimport tensorflow as tf\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras.metrics import Metric\nfrom tensorflow.python.ops import weights_broadcast_ops\n\nfrom typeguard import typechecked\nfrom tensorflow_addons.utils.types import AcceptableDTypes\n\n\nVALID_MULTIOUTPUT = {\"raw_values\", \"uniform_average\", \"variance_weighted\"}\n\n\ndef _reduce_average(\n input_tensor: tf.Tensor, axis=None, keepdims=False, weights=None\n) -> tf.Tensor:\n \"\"\"Computes the (weighted) mean of elements across dimensions of a tensor.\"\"\"\n if weights is None:\n return tf.reduce_mean(input_tensor, axis=axis, keepdims=keepdims)\n\n weighted_sum = tf.reduce_sum(weights * input_tensor, axis=axis, keepdims=keepdims)\n sum_of_weights = tf.reduce_sum(weights, axis=axis, keepdims=keepdims)\n average = weighted_sum / sum_of_weights\n return average\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass RSquare(Metric):\n \"\"\"Compute R^2 score.\n\n This is also called the [coefficient of determination\n ](https://en.wikipedia.org/wiki/Coefficient_of_determination).\n It tells how close are data to the fitted regression line.\n\n - Highest score can be 1.0 and it indicates that the predictors\n perfectly accounts for variation in the target.\n - Score 0.0 indicates that the predictors do not\n account for variation in the target.\n - It can also be negative if the model is worse.\n\n The sample weighting for this metric implementation mimics the\n behaviour of the [scikit-learn implementation\n ](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html)\n of the same metric.\n\n Args:\n multioutput: `string`, the reduce method for scores.\n Should be one of `[\"raw_values\", \"uniform_average\", \"variance_weighted\"]`.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Usage:\n\n >>> y_true = np.array([1, 4, 3], dtype=np.float32)\n >>> y_pred = np.array([2, 4, 4], dtype=np.float32)\n >>> metric = tfa.metrics.r_square.RSquare()\n >>> metric.update_state(y_true, y_pred)\n >>> result = metric.result()\n >>> result.numpy()\n 0.57142854\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n name: str = \"r_square\",\n dtype: AcceptableDTypes = None,\n y_shape: Tuple[int, ...] = (),\n multioutput: str = \"uniform_average\",\n **kwargs,\n ):\n super().__init__(name=name, dtype=dtype, **kwargs)\n self.y_shape = y_shape\n\n if multioutput not in VALID_MULTIOUTPUT:\n raise ValueError(\n \"The multioutput argument must be one of {}, but was: {}\".format(\n VALID_MULTIOUTPUT, multioutput\n )\n )\n self.multioutput = multioutput\n self.squared_sum = self.add_weight(\n name=\"squared_sum\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.sum = self.add_weight(\n name=\"sum\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.res = self.add_weight(\n name=\"residual\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.count = self.add_weight(\n name=\"count\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n\n def update_state(self, y_true, y_pred, sample_weight=None) -> None:\n y_true = tf.cast(y_true, dtype=self._dtype)\n y_pred = tf.cast(y_pred, dtype=self._dtype)\n if sample_weight is None:\n sample_weight = 1\n sample_weight = tf.cast(sample_weight, dtype=self._dtype)\n sample_weight = weights_broadcast_ops.broadcast_weights(\n weights=sample_weight, values=y_true\n )\n\n weighted_y_true = y_true * sample_weight\n self.sum.assign_add(tf.reduce_sum(weighted_y_true, axis=0))\n self.squared_sum.assign_add(tf.reduce_sum(y_true * weighted_y_true, axis=0))\n self.res.assign_add(\n tf.reduce_sum((y_true - y_pred) ** 2 * sample_weight, axis=0)\n )\n self.count.assign_add(tf.reduce_sum(sample_weight, axis=0))\n\n def result(self) -> tf.Tensor:\n mean = self.sum / self.count\n total = self.squared_sum - self.sum * mean\n raw_scores = 1 - (self.res / total)\n raw_scores = tf.where(tf.math.is_inf(raw_scores), 0.0, raw_scores)\n\n if self.multioutput == \"raw_values\":\n return raw_scores\n if self.multioutput == \"uniform_average\":\n return tf.reduce_mean(raw_scores)\n if self.multioutput == \"variance_weighted\":\n return _reduce_average(raw_scores, weights=total)\n raise RuntimeError(\n \"The multioutput attribute must be one of {}, but was: {}\".format(\n VALID_MULTIOUTPUT, self.multioutput\n )\n )\n\n def reset_states(self) -> None:\n # The state of the metric will be reset at the start of each epoch.\n K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])\n", "path": "tensorflow_addons/metrics/r_square.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements R^2 scores.\"\"\"\nfrom typing import Tuple\n\nimport tensorflow as tf\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras.metrics import Metric\nfrom tensorflow.python.ops import weights_broadcast_ops\n\nfrom typeguard import typechecked\nfrom tensorflow_addons.utils.types import AcceptableDTypes\n\n\n_VALID_MULTIOUTPUT = {\"raw_values\", \"uniform_average\", \"variance_weighted\"}\n\n\ndef _reduce_average(\n input_tensor: tf.Tensor, axis=None, keepdims=False, weights=None\n) -> tf.Tensor:\n \"\"\"Computes the (weighted) mean of elements across dimensions of a tensor.\"\"\"\n if weights is None:\n return tf.reduce_mean(input_tensor, axis=axis, keepdims=keepdims)\n\n weighted_sum = tf.reduce_sum(weights * input_tensor, axis=axis, keepdims=keepdims)\n sum_of_weights = tf.reduce_sum(weights, axis=axis, keepdims=keepdims)\n average = weighted_sum / sum_of_weights\n return average\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass RSquare(Metric):\n \"\"\"Compute R^2 score.\n\n This is also called the [coefficient of determination\n ](https://en.wikipedia.org/wiki/Coefficient_of_determination).\n It tells how close are data to the fitted regression line.\n\n - Highest score can be 1.0 and it indicates that the predictors\n perfectly accounts for variation in the target.\n - Score 0.0 indicates that the predictors do not\n account for variation in the target.\n - It can also be negative if the model is worse.\n\n The sample weighting for this metric implementation mimics the\n behaviour of the [scikit-learn implementation\n ](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html)\n of the same metric.\n\n Args:\n multioutput: `string`, the reduce method for scores.\n Should be one of `[\"raw_values\", \"uniform_average\", \"variance_weighted\"]`.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Usage:\n\n >>> y_true = np.array([1, 4, 3], dtype=np.float32)\n >>> y_pred = np.array([2, 4, 4], dtype=np.float32)\n >>> metric = tfa.metrics.r_square.RSquare()\n >>> metric.update_state(y_true, y_pred)\n >>> result = metric.result()\n >>> result.numpy()\n 0.57142854\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n name: str = \"r_square\",\n dtype: AcceptableDTypes = None,\n y_shape: Tuple[int, ...] = (),\n multioutput: str = \"uniform_average\",\n **kwargs,\n ):\n super().__init__(name=name, dtype=dtype, **kwargs)\n self.y_shape = y_shape\n\n if multioutput not in _VALID_MULTIOUTPUT:\n raise ValueError(\n \"The multioutput argument must be one of {}, but was: {}\".format(\n _VALID_MULTIOUTPUT, multioutput\n )\n )\n self.multioutput = multioutput\n self.squared_sum = self.add_weight(\n name=\"squared_sum\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.sum = self.add_weight(\n name=\"sum\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.res = self.add_weight(\n name=\"residual\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.count = self.add_weight(\n name=\"count\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n\n def update_state(self, y_true, y_pred, sample_weight=None) -> None:\n y_true = tf.cast(y_true, dtype=self._dtype)\n y_pred = tf.cast(y_pred, dtype=self._dtype)\n if sample_weight is None:\n sample_weight = 1\n sample_weight = tf.cast(sample_weight, dtype=self._dtype)\n sample_weight = weights_broadcast_ops.broadcast_weights(\n weights=sample_weight, values=y_true\n )\n\n weighted_y_true = y_true * sample_weight\n self.sum.assign_add(tf.reduce_sum(weighted_y_true, axis=0))\n self.squared_sum.assign_add(tf.reduce_sum(y_true * weighted_y_true, axis=0))\n self.res.assign_add(\n tf.reduce_sum((y_true - y_pred) ** 2 * sample_weight, axis=0)\n )\n self.count.assign_add(tf.reduce_sum(sample_weight, axis=0))\n\n def result(self) -> tf.Tensor:\n mean = self.sum / self.count\n total = self.squared_sum - self.sum * mean\n raw_scores = 1 - (self.res / total)\n raw_scores = tf.where(tf.math.is_inf(raw_scores), 0.0, raw_scores)\n\n if self.multioutput == \"raw_values\":\n return raw_scores\n if self.multioutput == \"uniform_average\":\n return tf.reduce_mean(raw_scores)\n if self.multioutput == \"variance_weighted\":\n return _reduce_average(raw_scores, weights=total)\n\n def reset_states(self) -> None:\n # The state of the metric will be reset at the start of each epoch.\n K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])\n\n def get_config(self):\n config = {\n \"y_shape\": self.y_shape,\n \"multioutput\": self.multioutput,\n }\n base_config = super().get_config()\n return {**base_config, **config}\n", "path": "tensorflow_addons/metrics/r_square.py"}]}
| 2,230 | 423 |
gh_patches_debug_14532
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-2674
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problem with user directories when sandboxing is disabled
Hi, i have strange problem in recent version. I have disabled wineprefix sandboxing, so my user directories were configured as symlinks. But in recent version programs (or even wine config) do not start with error:
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/lutris/runners/wine.py", line 676, in run_winecfg
self.prelaunch()
File "/usr/lib/python3.7/site-packages/lutris/runners/wine.py", line 778, in prelaunch
self.sandbox(prefix_manager)
File "/usr/lib/python3.7/site-packages/lutris/runners/wine.py", line 926, in sandbox
wine_prefix.desktop_integration(restore=True)
File "/usr/lib/python3.7/site-packages/lutris/util/wine/prefix.py", line 132, in desktop_integration
os.rename(old_path, path)
NotADirectoryError: [Errno 20] není adresářem: '/home/petr/.local/share/lutris/runners/winegames/drive_c/users/petr/Plocha.winecfg' -> '/home/petr/.local/share/lutris/runners/winegames/drive_c/users/petr/Plocha'
which is true, because "Plocha" is symlink to proper Desktop directory. When i delete "Plocha.winecfg", same error moves to next user directory "Mé dokumenty" (Documents). When i delete all "winecfg" files, all symlinks to existing user directories are replaced by empty directories. I.e. i am not able to run programs linked to standard user directories despite sandboxing is disabled.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/util/wine/prefix.py`
Content:
```
1 """Wine prefix management"""
2 import os
3 from lutris.util.wine.registry import WineRegistry
4 from lutris.util.log import logger
5 from lutris.util import joypad, system, xdgshortcuts
6 from lutris.util.display import DISPLAY_MANAGER
7
8 DESKTOP_KEYS = ["Desktop", "Personal", "My Music", "My Videos", "My Pictures"]
9 DEFAULT_DESKTOP_FOLDERS = ["Desktop", "My Documents", "My Music", "My Videos", "My Pictures"]
10 DESKTOP_XDG = ["DESKTOP", "DOCUMENTS", "MUSIC", "VIDEOS", "PICTURES"]
11
12
13 class WinePrefixManager:
14 """Class to allow modification of Wine prefixes without the use of Wine"""
15
16 hkcu_prefix = "HKEY_CURRENT_USER"
17
18 def __init__(self, path):
19 if not path:
20 logger.warning("No path specified for Wine prefix")
21 self.path = path
22
23 def setup_defaults(self):
24 """Sets the defaults for newly created prefixes"""
25 self.override_dll("winemenubuilder.exe", "")
26 try:
27 self.desktop_integration()
28 except OSError as ex:
29 logger.error(
30 "Failed to setup desktop integration, the prefix may not be valid."
31 )
32 logger.exception(ex)
33
34 def get_registry_path(self, key):
35 """Matches registry keys to a registry file
36
37 Currently, only HKEY_CURRENT_USER keys are supported.
38 """
39 if key.startswith(self.hkcu_prefix):
40 return os.path.join(self.path, "user.reg")
41 raise ValueError("Unsupported key '{}'".format(key))
42
43 def get_key_path(self, key):
44 if key.startswith(self.hkcu_prefix):
45 return key[len(self.hkcu_prefix) + 1:]
46 raise ValueError(
47 "The key {} is currently not supported by WinePrefixManager".format(key)
48 )
49
50 def get_registry_key(self, key, subkey):
51 registry = WineRegistry(self.get_registry_path(key))
52 return registry.query(self.get_key_path(key), subkey)
53
54 def set_registry_key(self, key, subkey, value):
55 registry = WineRegistry(self.get_registry_path(key))
56 registry.set_value(self.get_key_path(key), subkey, value)
57 registry.save()
58
59 def clear_registry_key(self, key):
60 registry = WineRegistry(self.get_registry_path(key))
61 registry.clear_key(self.get_key_path(key))
62 registry.save()
63
64 def clear_registry_subkeys(self, key, subkeys):
65 registry = WineRegistry(self.get_registry_path(key))
66 registry.clear_subkeys(self.get_key_path(key), subkeys)
67 registry.save()
68
69 def override_dll(self, dll, mode):
70 key = self.hkcu_prefix + "/Software/Wine/DllOverrides"
71 if mode.startswith("dis"):
72 mode = ""
73 if mode not in ("builtin", "native", "builtin,native", "native,builtin", ""):
74 logger.error("DLL override '%s' mode is not valid", mode)
75 return
76 self.set_registry_key(key, dll, mode)
77
78 def get_desktop_folders(self):
79 """Return the list of desktop folder names loaded from the Windows registry"""
80 desktop_folders = []
81 for key in DESKTOP_KEYS:
82 folder = self.get_registry_key(
83 self.hkcu_prefix
84 + "/Software/Microsoft/Windows/CurrentVersion/Explorer/Shell Folders",
85 key,
86 )
87 if not folder:
88 logger.warning("Couldn't load shell folder name for %s", key)
89 continue
90 desktop_folders.append(folder[folder.rfind("\\") + 1:])
91 return desktop_folders or DEFAULT_DESKTOP_FOLDERS
92
93 def desktop_integration(self, desktop_dir=None, restore=False):
94 """Overwrite desktop integration"""
95 user = os.getenv("USER")
96 user_dir = os.path.join(self.path, "drive_c/users/", user)
97 desktop_folders = self.get_desktop_folders()
98
99 if desktop_dir:
100 desktop_dir = os.path.expanduser(desktop_dir)
101 else:
102 desktop_dir = user_dir
103
104 if system.path_exists(user_dir):
105 # Replace or restore desktop integration symlinks
106 for i, item in enumerate(desktop_folders):
107 path = os.path.join(user_dir, item)
108 old_path = path + ".winecfg"
109
110 if os.path.islink(path):
111 if not restore:
112 os.unlink(path)
113 elif os.path.isdir(path):
114 try:
115 os.rmdir(path)
116 # We can't delete nonempty dir, so we rename as wine do.
117 except OSError:
118 os.rename(path, old_path)
119
120 if restore and not os.path.isdir(path):
121 os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)
122 # We don't need all the others process of the loop
123 continue
124
125 if desktop_dir != user_dir:
126 try:
127 src_path = os.path.join(desktop_dir, item)
128 except TypeError:
129 # There is supposedly a None value in there
130 # The current code shouldn't allow that
131 # Just raise a exception with the values
132 raise RuntimeError("Missing value desktop_dir=%s or item=%s"
133 % (desktop_dir, item))
134
135 os.makedirs(src_path, exist_ok=True)
136 os.symlink(src_path, path)
137 else:
138 # We use first the renamed dir, otherwise we make it.
139 if os.path.isdir(old_path):
140 os.rename(old_path, path)
141 else:
142 os.makedirs(path, exist_ok=True)
143
144 # Security: Remove other symlinks.
145 for item in os.listdir(user_dir):
146 path = os.path.join(user_dir, item)
147 if item not in DEFAULT_DESKTOP_FOLDERS and os.path.islink(path):
148 os.unlink(path)
149 os.makedirs(path)
150
151 def set_crash_dialogs(self, enabled):
152 """Enable or diable Wine crash dialogs"""
153 self.set_registry_key(
154 self.hkcu_prefix + "/Software/Wine/WineDbg",
155 "ShowCrashDialog",
156 1 if enabled else 0,
157 )
158
159 def set_virtual_desktop(self, enabled):
160 """Enable or disable wine virtual desktop.
161 The Lutris virtual desktop is refered to as 'WineDesktop', in Wine the
162 virtual desktop name is 'default'.
163 """
164 path = self.hkcu_prefix + "/Software/Wine/Explorer"
165 if enabled:
166 self.set_registry_key(path, "Desktop", "WineDesktop")
167 default_resolution = "x".join(DISPLAY_MANAGER.get_current_resolution())
168 logger.debug(
169 "Enabling wine virtual desktop with default resolution of %s",
170 default_resolution,
171 )
172 self.set_registry_key(
173 self.hkcu_prefix + "/Software/Wine/Explorer/Desktops",
174 "WineDesktop",
175 default_resolution,
176 )
177 else:
178 self.clear_registry_key(path)
179
180 def set_desktop_size(self, desktop_size):
181 """Sets the desktop size if one is given but do not reset the key if
182 one isn't.
183 """
184 path = self.hkcu_prefix + "/Software/Wine/Explorer/Desktops"
185 if desktop_size:
186 self.set_registry_key(path, "WineDesktop", desktop_size)
187
188 def use_xvid_mode(self, enabled):
189 """Set this to "Y" to allow wine switch the resolution using XVidMode extension."""
190 self.set_registry_key(
191 self.hkcu_prefix + "/Software/Wine/X11 Driver",
192 "UseXVidMode",
193 "Y" if enabled else "N",
194 )
195
196 def configure_joypads(self):
197 joypads = joypad.get_joypads()
198 key = self.hkcu_prefix + "/Software/Wine/DirectInput/Joysticks"
199 self.clear_registry_key(key)
200 for device, joypad_name in joypads:
201 if "event" in device:
202 disabled_joypad = "{} (js)".format(joypad_name)
203 else:
204 disabled_joypad = "{} (event)".format(joypad_name)
205 self.set_registry_key(key, disabled_joypad, "disabled")
206
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lutris/util/wine/prefix.py b/lutris/util/wine/prefix.py
--- a/lutris/util/wine/prefix.py
+++ b/lutris/util/wine/prefix.py
@@ -117,7 +117,11 @@
except OSError:
os.rename(path, old_path)
- if restore and not os.path.isdir(path):
+ # if we want to create a symlink and one is already there, just skip to the next item.
+ # this also makes sure the elif doesn't find a dir (isdir only looks at the target of the symlink).
+ if restore and os.path.islink(path):
+ continue
+ elif restore and not os.path.isdir(path):
os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)
# We don't need all the others process of the loop
continue
|
{"golden_diff": "diff --git a/lutris/util/wine/prefix.py b/lutris/util/wine/prefix.py\n--- a/lutris/util/wine/prefix.py\n+++ b/lutris/util/wine/prefix.py\n@@ -117,7 +117,11 @@\n except OSError:\n os.rename(path, old_path)\n \n- if restore and not os.path.isdir(path):\n+ # if we want to create a symlink and one is already there, just skip to the next item.\n+ # this also makes sure the elif doesn't find a dir (isdir only looks at the target of the symlink).\n+ if restore and os.path.islink(path):\n+ continue\n+ elif restore and not os.path.isdir(path):\n os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)\n # We don't need all the others process of the loop\n continue\n", "issue": "Problem with user directories when sandboxing is disabled\nHi, i have strange problem in recent version. I have disabled wineprefix sandboxing, so my user directories were configured as symlinks. But in recent version programs (or even wine config) do not start with error:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/lutris/runners/wine.py\", line 676, in run_winecfg\r\n self.prelaunch()\r\n File \"/usr/lib/python3.7/site-packages/lutris/runners/wine.py\", line 778, in prelaunch\r\n self.sandbox(prefix_manager)\r\n File \"/usr/lib/python3.7/site-packages/lutris/runners/wine.py\", line 926, in sandbox\r\n wine_prefix.desktop_integration(restore=True)\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/wine/prefix.py\", line 132, in desktop_integration\r\n os.rename(old_path, path)\r\nNotADirectoryError: [Errno 20] nen\u00ed adres\u00e1\u0159em: '/home/petr/.local/share/lutris/runners/winegames/drive_c/users/petr/Plocha.winecfg' -> '/home/petr/.local/share/lutris/runners/winegames/drive_c/users/petr/Plocha'\r\n\r\nwhich is true, because \"Plocha\" is symlink to proper Desktop directory. When i delete \"Plocha.winecfg\", same error moves to next user directory \"M\u00e9 dokumenty\" (Documents). When i delete all \"winecfg\" files, all symlinks to existing user directories are replaced by empty directories. I.e. i am not able to run programs linked to standard user directories despite sandboxing is disabled.\n", "before_files": [{"content": "\"\"\"Wine prefix management\"\"\"\nimport os\nfrom lutris.util.wine.registry import WineRegistry\nfrom lutris.util.log import logger\nfrom lutris.util import joypad, system, xdgshortcuts\nfrom lutris.util.display import DISPLAY_MANAGER\n\nDESKTOP_KEYS = [\"Desktop\", \"Personal\", \"My Music\", \"My Videos\", \"My Pictures\"]\nDEFAULT_DESKTOP_FOLDERS = [\"Desktop\", \"My Documents\", \"My Music\", \"My Videos\", \"My Pictures\"]\nDESKTOP_XDG = [\"DESKTOP\", \"DOCUMENTS\", \"MUSIC\", \"VIDEOS\", \"PICTURES\"]\n\n\nclass WinePrefixManager:\n \"\"\"Class to allow modification of Wine prefixes without the use of Wine\"\"\"\n\n hkcu_prefix = \"HKEY_CURRENT_USER\"\n\n def __init__(self, path):\n if not path:\n logger.warning(\"No path specified for Wine prefix\")\n self.path = path\n\n def setup_defaults(self):\n \"\"\"Sets the defaults for newly created prefixes\"\"\"\n self.override_dll(\"winemenubuilder.exe\", \"\")\n try:\n self.desktop_integration()\n except OSError as ex:\n logger.error(\n \"Failed to setup desktop integration, the prefix may not be valid.\"\n )\n logger.exception(ex)\n\n def get_registry_path(self, key):\n \"\"\"Matches registry keys to a registry file\n\n Currently, only HKEY_CURRENT_USER keys are supported.\n \"\"\"\n if key.startswith(self.hkcu_prefix):\n return os.path.join(self.path, \"user.reg\")\n raise ValueError(\"Unsupported key '{}'\".format(key))\n\n def get_key_path(self, key):\n if key.startswith(self.hkcu_prefix):\n return key[len(self.hkcu_prefix) + 1:]\n raise ValueError(\n \"The key {} is currently not supported by WinePrefixManager\".format(key)\n )\n\n def get_registry_key(self, key, subkey):\n registry = WineRegistry(self.get_registry_path(key))\n return registry.query(self.get_key_path(key), subkey)\n\n def set_registry_key(self, key, subkey, value):\n registry = WineRegistry(self.get_registry_path(key))\n registry.set_value(self.get_key_path(key), subkey, value)\n registry.save()\n\n def clear_registry_key(self, key):\n registry = WineRegistry(self.get_registry_path(key))\n registry.clear_key(self.get_key_path(key))\n registry.save()\n\n def clear_registry_subkeys(self, key, subkeys):\n registry = WineRegistry(self.get_registry_path(key))\n registry.clear_subkeys(self.get_key_path(key), subkeys)\n registry.save()\n\n def override_dll(self, dll, mode):\n key = self.hkcu_prefix + \"/Software/Wine/DllOverrides\"\n if mode.startswith(\"dis\"):\n mode = \"\"\n if mode not in (\"builtin\", \"native\", \"builtin,native\", \"native,builtin\", \"\"):\n logger.error(\"DLL override '%s' mode is not valid\", mode)\n return\n self.set_registry_key(key, dll, mode)\n\n def get_desktop_folders(self):\n \"\"\"Return the list of desktop folder names loaded from the Windows registry\"\"\"\n desktop_folders = []\n for key in DESKTOP_KEYS:\n folder = self.get_registry_key(\n self.hkcu_prefix\n + \"/Software/Microsoft/Windows/CurrentVersion/Explorer/Shell Folders\",\n key,\n )\n if not folder:\n logger.warning(\"Couldn't load shell folder name for %s\", key)\n continue\n desktop_folders.append(folder[folder.rfind(\"\\\\\") + 1:])\n return desktop_folders or DEFAULT_DESKTOP_FOLDERS\n\n def desktop_integration(self, desktop_dir=None, restore=False):\n \"\"\"Overwrite desktop integration\"\"\"\n user = os.getenv(\"USER\")\n user_dir = os.path.join(self.path, \"drive_c/users/\", user)\n desktop_folders = self.get_desktop_folders()\n\n if desktop_dir:\n desktop_dir = os.path.expanduser(desktop_dir)\n else:\n desktop_dir = user_dir\n\n if system.path_exists(user_dir):\n # Replace or restore desktop integration symlinks\n for i, item in enumerate(desktop_folders):\n path = os.path.join(user_dir, item)\n old_path = path + \".winecfg\"\n\n if os.path.islink(path):\n if not restore:\n os.unlink(path)\n elif os.path.isdir(path):\n try:\n os.rmdir(path)\n # We can't delete nonempty dir, so we rename as wine do.\n except OSError:\n os.rename(path, old_path)\n\n if restore and not os.path.isdir(path):\n os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)\n # We don't need all the others process of the loop\n continue\n\n if desktop_dir != user_dir:\n try:\n src_path = os.path.join(desktop_dir, item)\n except TypeError:\n # There is supposedly a None value in there\n # The current code shouldn't allow that\n # Just raise a exception with the values\n raise RuntimeError(\"Missing value desktop_dir=%s or item=%s\"\n % (desktop_dir, item))\n\n os.makedirs(src_path, exist_ok=True)\n os.symlink(src_path, path)\n else:\n # We use first the renamed dir, otherwise we make it.\n if os.path.isdir(old_path):\n os.rename(old_path, path)\n else:\n os.makedirs(path, exist_ok=True)\n\n # Security: Remove other symlinks.\n for item in os.listdir(user_dir):\n path = os.path.join(user_dir, item)\n if item not in DEFAULT_DESKTOP_FOLDERS and os.path.islink(path):\n os.unlink(path)\n os.makedirs(path)\n\n def set_crash_dialogs(self, enabled):\n \"\"\"Enable or diable Wine crash dialogs\"\"\"\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/WineDbg\",\n \"ShowCrashDialog\",\n 1 if enabled else 0,\n )\n\n def set_virtual_desktop(self, enabled):\n \"\"\"Enable or disable wine virtual desktop.\n The Lutris virtual desktop is refered to as 'WineDesktop', in Wine the\n virtual desktop name is 'default'.\n \"\"\"\n path = self.hkcu_prefix + \"/Software/Wine/Explorer\"\n if enabled:\n self.set_registry_key(path, \"Desktop\", \"WineDesktop\")\n default_resolution = \"x\".join(DISPLAY_MANAGER.get_current_resolution())\n logger.debug(\n \"Enabling wine virtual desktop with default resolution of %s\",\n default_resolution,\n )\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/Explorer/Desktops\",\n \"WineDesktop\",\n default_resolution,\n )\n else:\n self.clear_registry_key(path)\n\n def set_desktop_size(self, desktop_size):\n \"\"\"Sets the desktop size if one is given but do not reset the key if\n one isn't.\n \"\"\"\n path = self.hkcu_prefix + \"/Software/Wine/Explorer/Desktops\"\n if desktop_size:\n self.set_registry_key(path, \"WineDesktop\", desktop_size)\n\n def use_xvid_mode(self, enabled):\n \"\"\"Set this to \"Y\" to allow wine switch the resolution using XVidMode extension.\"\"\"\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/X11 Driver\",\n \"UseXVidMode\",\n \"Y\" if enabled else \"N\",\n )\n\n def configure_joypads(self):\n joypads = joypad.get_joypads()\n key = self.hkcu_prefix + \"/Software/Wine/DirectInput/Joysticks\"\n self.clear_registry_key(key)\n for device, joypad_name in joypads:\n if \"event\" in device:\n disabled_joypad = \"{} (js)\".format(joypad_name)\n else:\n disabled_joypad = \"{} (event)\".format(joypad_name)\n self.set_registry_key(key, disabled_joypad, \"disabled\")\n", "path": "lutris/util/wine/prefix.py"}], "after_files": [{"content": "\"\"\"Wine prefix management\"\"\"\nimport os\nfrom lutris.util.wine.registry import WineRegistry\nfrom lutris.util.log import logger\nfrom lutris.util import joypad, system, xdgshortcuts\nfrom lutris.util.display import DISPLAY_MANAGER\n\nDESKTOP_KEYS = [\"Desktop\", \"Personal\", \"My Music\", \"My Videos\", \"My Pictures\"]\nDEFAULT_DESKTOP_FOLDERS = [\"Desktop\", \"My Documents\", \"My Music\", \"My Videos\", \"My Pictures\"]\nDESKTOP_XDG = [\"DESKTOP\", \"DOCUMENTS\", \"MUSIC\", \"VIDEOS\", \"PICTURES\"]\n\n\nclass WinePrefixManager:\n \"\"\"Class to allow modification of Wine prefixes without the use of Wine\"\"\"\n\n hkcu_prefix = \"HKEY_CURRENT_USER\"\n\n def __init__(self, path):\n if not path:\n logger.warning(\"No path specified for Wine prefix\")\n self.path = path\n\n def setup_defaults(self):\n \"\"\"Sets the defaults for newly created prefixes\"\"\"\n self.override_dll(\"winemenubuilder.exe\", \"\")\n try:\n self.desktop_integration()\n except OSError as ex:\n logger.error(\n \"Failed to setup desktop integration, the prefix may not be valid.\"\n )\n logger.exception(ex)\n\n def get_registry_path(self, key):\n \"\"\"Matches registry keys to a registry file\n\n Currently, only HKEY_CURRENT_USER keys are supported.\n \"\"\"\n if key.startswith(self.hkcu_prefix):\n return os.path.join(self.path, \"user.reg\")\n raise ValueError(\"Unsupported key '{}'\".format(key))\n\n def get_key_path(self, key):\n if key.startswith(self.hkcu_prefix):\n return key[len(self.hkcu_prefix) + 1:]\n raise ValueError(\n \"The key {} is currently not supported by WinePrefixManager\".format(key)\n )\n\n def get_registry_key(self, key, subkey):\n registry = WineRegistry(self.get_registry_path(key))\n return registry.query(self.get_key_path(key), subkey)\n\n def set_registry_key(self, key, subkey, value):\n registry = WineRegistry(self.get_registry_path(key))\n registry.set_value(self.get_key_path(key), subkey, value)\n registry.save()\n\n def clear_registry_key(self, key):\n registry = WineRegistry(self.get_registry_path(key))\n registry.clear_key(self.get_key_path(key))\n registry.save()\n\n def clear_registry_subkeys(self, key, subkeys):\n registry = WineRegistry(self.get_registry_path(key))\n registry.clear_subkeys(self.get_key_path(key), subkeys)\n registry.save()\n\n def override_dll(self, dll, mode):\n key = self.hkcu_prefix + \"/Software/Wine/DllOverrides\"\n if mode.startswith(\"dis\"):\n mode = \"\"\n if mode not in (\"builtin\", \"native\", \"builtin,native\", \"native,builtin\", \"\"):\n logger.error(\"DLL override '%s' mode is not valid\", mode)\n return\n self.set_registry_key(key, dll, mode)\n\n def get_desktop_folders(self):\n \"\"\"Return the list of desktop folder names loaded from the Windows registry\"\"\"\n desktop_folders = []\n for key in DESKTOP_KEYS:\n folder = self.get_registry_key(\n self.hkcu_prefix\n + \"/Software/Microsoft/Windows/CurrentVersion/Explorer/Shell Folders\",\n key,\n )\n if not folder:\n logger.warning(\"Couldn't load shell folder name for %s\", key)\n continue\n desktop_folders.append(folder[folder.rfind(\"\\\\\") + 1:])\n return desktop_folders or DEFAULT_DESKTOP_FOLDERS\n\n def desktop_integration(self, desktop_dir=None, restore=False):\n \"\"\"Overwrite desktop integration\"\"\"\n user = os.getenv(\"USER\")\n user_dir = os.path.join(self.path, \"drive_c/users/\", user)\n desktop_folders = self.get_desktop_folders()\n\n if desktop_dir:\n desktop_dir = os.path.expanduser(desktop_dir)\n else:\n desktop_dir = user_dir\n\n if system.path_exists(user_dir):\n # Replace or restore desktop integration symlinks\n for i, item in enumerate(desktop_folders):\n path = os.path.join(user_dir, item)\n old_path = path + \".winecfg\"\n\n if os.path.islink(path):\n if not restore:\n os.unlink(path)\n elif os.path.isdir(path):\n try:\n os.rmdir(path)\n # We can't delete nonempty dir, so we rename as wine do.\n except OSError:\n os.rename(path, old_path)\n\n # if we want to create a symlink and one is already there, just skip to the next item.\n # this also makes sure the elif doesn't find a dir (isdir only looks at the target of the symlink).\n if restore and os.path.islink(path):\n continue\n elif restore and not os.path.isdir(path):\n os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)\n # We don't need all the others process of the loop\n continue\n\n if desktop_dir != user_dir:\n try:\n src_path = os.path.join(desktop_dir, item)\n except TypeError:\n # There is supposedly a None value in there\n # The current code shouldn't allow that\n # Just raise a exception with the values\n raise RuntimeError(\"Missing value desktop_dir=%s or item=%s\"\n % (desktop_dir, item))\n\n os.makedirs(src_path, exist_ok=True)\n os.symlink(src_path, path)\n else:\n # We use first the renamed dir, otherwise we make it.\n if os.path.isdir(old_path):\n os.rename(old_path, path)\n else:\n os.makedirs(path, exist_ok=True)\n\n # Security: Remove other symlinks.\n for item in os.listdir(user_dir):\n path = os.path.join(user_dir, item)\n if item not in DEFAULT_DESKTOP_FOLDERS and os.path.islink(path):\n os.unlink(path)\n os.makedirs(path)\n\n def set_crash_dialogs(self, enabled):\n \"\"\"Enable or diable Wine crash dialogs\"\"\"\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/WineDbg\",\n \"ShowCrashDialog\",\n 1 if enabled else 0,\n )\n\n def set_virtual_desktop(self, enabled):\n \"\"\"Enable or disable wine virtual desktop.\n The Lutris virtual desktop is refered to as 'WineDesktop', in Wine the\n virtual desktop name is 'default'.\n \"\"\"\n path = self.hkcu_prefix + \"/Software/Wine/Explorer\"\n if enabled:\n self.set_registry_key(path, \"Desktop\", \"WineDesktop\")\n default_resolution = \"x\".join(DISPLAY_MANAGER.get_current_resolution())\n logger.debug(\n \"Enabling wine virtual desktop with default resolution of %s\",\n default_resolution,\n )\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/Explorer/Desktops\",\n \"WineDesktop\",\n default_resolution,\n )\n else:\n self.clear_registry_key(path)\n\n def set_desktop_size(self, desktop_size):\n \"\"\"Sets the desktop size if one is given but do not reset the key if\n one isn't.\n \"\"\"\n path = self.hkcu_prefix + \"/Software/Wine/Explorer/Desktops\"\n if desktop_size:\n self.set_registry_key(path, \"WineDesktop\", desktop_size)\n\n def use_xvid_mode(self, enabled):\n \"\"\"Set this to \"Y\" to allow wine switch the resolution using XVidMode extension.\"\"\"\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/X11 Driver\",\n \"UseXVidMode\",\n \"Y\" if enabled else \"N\",\n )\n\n def configure_joypads(self):\n joypads = joypad.get_joypads()\n key = self.hkcu_prefix + \"/Software/Wine/DirectInput/Joysticks\"\n self.clear_registry_key(key)\n for device, joypad_name in joypads:\n if \"event\" in device:\n disabled_joypad = \"{} (js)\".format(joypad_name)\n else:\n disabled_joypad = \"{} (event)\".format(joypad_name)\n self.set_registry_key(key, disabled_joypad, \"disabled\")\n", "path": "lutris/util/wine/prefix.py"}]}
| 2,873 | 199 |
gh_patches_debug_59
|
rasdani/github-patches
|
git_diff
|
Anselmoo__spectrafit-662
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Docs]: Using mike for versioning docs
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Missing Information in the Docs
https://squidfunk.github.io/mkdocs-material/setup/setting-up-versioning/
### Anything else?
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spectrafit/__init__.py`
Content:
```
1 """SpectraFit, fast command line tool for fitting data."""
2 __version__ = "1.0.0a2"
3
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py
--- a/spectrafit/__init__.py
+++ b/spectrafit/__init__.py
@@ -1,2 +1,2 @@
"""SpectraFit, fast command line tool for fitting data."""
-__version__ = "1.0.0a2"
+__version__ = "1.0.0a3"
|
{"golden_diff": "diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py\n--- a/spectrafit/__init__.py\n+++ b/spectrafit/__init__.py\n@@ -1,2 +1,2 @@\n \"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n-__version__ = \"1.0.0a2\"\n+__version__ = \"1.0.0a3\"\n", "issue": "[Docs]: Using mike for versioning docs\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Missing Information in the Docs\n\nhttps://squidfunk.github.io/mkdocs-material/setup/setting-up-versioning/\n\n### Anything else?\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"1.0.0a2\"\n", "path": "spectrafit/__init__.py"}], "after_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"1.0.0a3\"\n", "path": "spectrafit/__init__.py"}]}
| 374 | 96 |
gh_patches_debug_40787
|
rasdani/github-patches
|
git_diff
|
ietf-tools__datatracker-7199
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor to drop dependency on decorator package
### Description
We have a few decorators defined in `ietf/utils/decorator.py` that use the "decorator" package. This provides the `@decorator` decorator and a `decorate` method. The built-in Python `functools.wraps()` method can fulfill the needs here without the additional dependency. As far as I can tell we're not making use of any of the features provided by the "decorator" package.
The `@decorator` mechanism also seems to interfere with Django's `@method_decorator`, which led to refactoring the `requires_api_key` decorator already.
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/utils/decorators.py`
Content:
```
1 # Copyright The IETF Trust 2016-2020, All Rights Reserved
2 # -*- coding: utf-8 -*-
3
4
5 import datetime
6
7 from decorator import decorator, decorate
8 from functools import wraps
9
10 from django.conf import settings
11 from django.contrib.auth import login
12 from django.http import HttpResponse
13 from django.shortcuts import render
14 from django.utils import timezone
15 from django.utils.encoding import force_bytes
16
17 import debug # pyflakes:ignore
18
19 from ietf.utils.test_runner import set_coverage_checking
20 from ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent
21 from ietf.utils import log
22
23 @decorator
24 def skip_coverage(f, *args, **kwargs):
25 if settings.TEST_CODE_COVERAGE_CHECKER:
26 set_coverage_checking(False)
27 result = f(*args, **kwargs)
28 set_coverage_checking(True)
29 return result
30 else:
31 return f(*args, **kwargs)
32
33 @decorator
34 def person_required(f, request, *args, **kwargs):
35 if not request.user.is_authenticated:
36 raise ValueError("The @person_required decorator should be called after @login_required.")
37 try:
38 request.user.person
39 except Person.DoesNotExist:
40 return render(request, 'registration/missing_person.html')
41 return f(request, *args, **kwargs)
42
43
44 def require_api_key(f):
45 @wraps(f)
46 def _wrapper(request, *args, **kwargs):
47 def err(code, text):
48 return HttpResponse(text, status=code, content_type='text/plain')
49 # Check method and get hash
50 if request.method == 'POST':
51 hash = request.POST.get('apikey')
52 elif request.method == 'GET':
53 hash = request.GET.get('apikey')
54 else:
55 return err(405, "Method not allowed")
56 if not hash:
57 return err(400, "Missing apikey parameter")
58 # Check hash
59 key = PersonalApiKey.validate_key(force_bytes(hash))
60 if not key:
61 return err(403, "Invalid apikey")
62 # Check endpoint
63 urlpath = request.META.get('PATH_INFO')
64 if not (urlpath and urlpath == key.endpoint):
65 return err(400, "Apikey endpoint mismatch")
66 # Check time since regular login
67 person = key.person
68 last_login = person.user.last_login
69 if not person.user.is_staff:
70 time_limit = (timezone.now() - datetime.timedelta(days=settings.UTILS_APIKEY_GUI_LOGIN_LIMIT_DAYS))
71 if last_login == None or last_login < time_limit:
72 return err(400, "Too long since last regular login")
73 # Log in
74 login(request, person.user)
75 # restore the user.last_login field, so it reflects only gui logins
76 person.user.last_login = last_login
77 person.user.save()
78 # Update stats
79 key.count += 1
80 key.latest = timezone.now()
81 key.save()
82 PersonApiKeyEvent.objects.create(person=person, type='apikey_login', key=key, desc="Logged in with key ID %s, endpoint %s" % (key.id, key.endpoint))
83 # Execute decorated function
84 try:
85 ret = f(request, *args, **kwargs)
86 except AttributeError as e:
87 log.log("Bad API call: args: %s, kwargs: %s, exception: %s" % (args, kwargs, e))
88 return err(400, "Bad or missing parameters")
89 return ret
90 return _wrapper
91
92
93 def _memoize(func, self, *args, **kwargs):
94 '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''
95 if kwargs: # frozenset is used to ensure hashability
96 key = args, frozenset(list(kwargs.items()))
97 else:
98 key = args
99 # instance method, set up cache if needed
100 if not hasattr(self, '_cache'):
101 self._cache = {}
102 if not func in self._cache:
103 self._cache[func] = {}
104 #
105 cache = self._cache[func]
106 if key not in cache:
107 cache[key] = func(self, *args, **kwargs)
108 return cache[key]
109 def memoize(func):
110 if not hasattr(func, '__class__'):
111 raise NotImplementedError("Use @lru_cache instead of memoize() for functions.")
112 # For methods, we want the cache on the object, not on the class, in order
113 # to not having to think about cache bloat and content becoming stale, so
114 # we cannot set up the cache here.
115 return decorate(func, _memoize)
116
117
118 def ignore_view_kwargs(*args):
119 """Ignore the specified kwargs if they are present
120
121 Usage:
122 @ignore_view_kwargs("ignore_arg1", "ignore_arg2")
123 def my_view(request, good_arg):
124 ...
125
126 This will allow my_view() to be used in url() paths that have zero, one, or both of
127 ignore_arg1 and ignore_arg2 captured. These will be ignored, while good_arg will still
128 be captured as usual.
129 """
130 kwargs_to_ignore = args
131
132 def decorate(view):
133 @wraps(view)
134 def wrapped(*args, **kwargs):
135 for kwarg in kwargs_to_ignore:
136 kwargs.pop(kwarg, None)
137 return view(*args, **kwargs)
138
139 return wrapped
140
141 return decorate
142
143
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ietf/utils/decorators.py b/ietf/utils/decorators.py
--- a/ietf/utils/decorators.py
+++ b/ietf/utils/decorators.py
@@ -4,7 +4,6 @@
import datetime
-from decorator import decorator, decorate
from functools import wraps
from django.conf import settings
@@ -20,25 +19,29 @@
from ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent
from ietf.utils import log
-@decorator
-def skip_coverage(f, *args, **kwargs):
- if settings.TEST_CODE_COVERAGE_CHECKER:
- set_coverage_checking(False)
- result = f(*args, **kwargs)
- set_coverage_checking(True)
- return result
- else:
- return f(*args, **kwargs)
-
-@decorator
-def person_required(f, request, *args, **kwargs):
- if not request.user.is_authenticated:
- raise ValueError("The @person_required decorator should be called after @login_required.")
- try:
- request.user.person
- except Person.DoesNotExist:
- return render(request, 'registration/missing_person.html')
- return f(request, *args, **kwargs)
+def skip_coverage(f):
+ @wraps(f)
+ def _wrapper(*args, **kwargs):
+ if settings.TEST_CODE_COVERAGE_CHECKER:
+ set_coverage_checking(False)
+ result = f(*args, **kwargs)
+ set_coverage_checking(True)
+ return result
+ else:
+ return f(*args, **kwargs)
+ return _wrapper
+
+def person_required(f):
+ @wraps(f)
+ def _wrapper(request, *args, **kwargs):
+ if not request.user.is_authenticated:
+ raise ValueError("The @person_required decorator should be called after @login_required.")
+ try:
+ request.user.person
+ except Person.DoesNotExist:
+ return render(request, 'registration/missing_person.html')
+ return f(request, *args, **kwargs)
+ return _wrapper
def require_api_key(f):
@@ -90,29 +93,31 @@
return _wrapper
-def _memoize(func, self, *args, **kwargs):
- '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''
- if kwargs: # frozenset is used to ensure hashability
- key = args, frozenset(list(kwargs.items()))
- else:
- key = args
- # instance method, set up cache if needed
- if not hasattr(self, '_cache'):
- self._cache = {}
- if not func in self._cache:
- self._cache[func] = {}
- #
- cache = self._cache[func]
- if key not in cache:
- cache[key] = func(self, *args, **kwargs)
- return cache[key]
def memoize(func):
+ @wraps(func)
+ def _memoize(self, *args, **kwargs):
+ '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''
+ if kwargs: # frozenset is used to ensure hashability
+ key = args, frozenset(list(kwargs.items()))
+ else:
+ key = args
+ # instance method, set up cache if needed
+ if not hasattr(self, '_cache'):
+ self._cache = {}
+ if not func in self._cache:
+ self._cache[func] = {}
+ #
+ cache = self._cache[func]
+ if key not in cache:
+ cache[key] = func(self, *args, **kwargs)
+ return cache[key]
+
if not hasattr(func, '__class__'):
raise NotImplementedError("Use @lru_cache instead of memoize() for functions.")
# For methods, we want the cache on the object, not on the class, in order
# to not having to think about cache bloat and content becoming stale, so
# we cannot set up the cache here.
- return decorate(func, _memoize)
+ return _memoize
def ignore_view_kwargs(*args):
|
{"golden_diff": "diff --git a/ietf/utils/decorators.py b/ietf/utils/decorators.py\n--- a/ietf/utils/decorators.py\n+++ b/ietf/utils/decorators.py\n@@ -4,7 +4,6 @@\n \n import datetime\n \n-from decorator import decorator, decorate\n from functools import wraps\n \n from django.conf import settings\n@@ -20,25 +19,29 @@\n from ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent\n from ietf.utils import log\n \n-@decorator\n-def skip_coverage(f, *args, **kwargs):\n- if settings.TEST_CODE_COVERAGE_CHECKER:\n- set_coverage_checking(False)\n- result = f(*args, **kwargs)\n- set_coverage_checking(True)\n- return result\n- else:\n- return f(*args, **kwargs)\n-\n-@decorator\n-def person_required(f, request, *args, **kwargs):\n- if not request.user.is_authenticated:\n- raise ValueError(\"The @person_required decorator should be called after @login_required.\")\n- try:\n- request.user.person\n- except Person.DoesNotExist:\n- return render(request, 'registration/missing_person.html')\n- return f(request, *args, **kwargs)\n+def skip_coverage(f):\n+ @wraps(f)\n+ def _wrapper(*args, **kwargs):\n+ if settings.TEST_CODE_COVERAGE_CHECKER:\n+ set_coverage_checking(False)\n+ result = f(*args, **kwargs)\n+ set_coverage_checking(True)\n+ return result\n+ else:\n+ return f(*args, **kwargs)\n+ return _wrapper\n+\n+def person_required(f):\n+ @wraps(f)\n+ def _wrapper(request, *args, **kwargs):\n+ if not request.user.is_authenticated:\n+ raise ValueError(\"The @person_required decorator should be called after @login_required.\")\n+ try:\n+ request.user.person\n+ except Person.DoesNotExist:\n+ return render(request, 'registration/missing_person.html')\n+ return f(request, *args, **kwargs)\n+ return _wrapper\n \n \n def require_api_key(f):\n@@ -90,29 +93,31 @@\n return _wrapper\n \n \n-def _memoize(func, self, *args, **kwargs):\n- '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''\n- if kwargs: # frozenset is used to ensure hashability\n- key = args, frozenset(list(kwargs.items()))\n- else:\n- key = args\n- # instance method, set up cache if needed\n- if not hasattr(self, '_cache'):\n- self._cache = {}\n- if not func in self._cache:\n- self._cache[func] = {} \n- #\n- cache = self._cache[func]\n- if key not in cache:\n- cache[key] = func(self, *args, **kwargs)\n- return cache[key]\n def memoize(func):\n+ @wraps(func)\n+ def _memoize(self, *args, **kwargs):\n+ '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''\n+ if kwargs: # frozenset is used to ensure hashability\n+ key = args, frozenset(list(kwargs.items()))\n+ else:\n+ key = args\n+ # instance method, set up cache if needed\n+ if not hasattr(self, '_cache'):\n+ self._cache = {}\n+ if not func in self._cache:\n+ self._cache[func] = {} \n+ #\n+ cache = self._cache[func]\n+ if key not in cache:\n+ cache[key] = func(self, *args, **kwargs)\n+ return cache[key]\n+\n if not hasattr(func, '__class__'):\n raise NotImplementedError(\"Use @lru_cache instead of memoize() for functions.\")\n # For methods, we want the cache on the object, not on the class, in order\n # to not having to think about cache bloat and content becoming stale, so\n # we cannot set up the cache here.\n- return decorate(func, _memoize)\n+ return _memoize\n \n \n def ignore_view_kwargs(*args):\n", "issue": "Refactor to drop dependency on decorator package\n### Description\n\nWe have a few decorators defined in `ietf/utils/decorator.py` that use the \"decorator\" package. This provides the `@decorator` decorator and a `decorate` method. The built-in Python `functools.wraps()` method can fulfill the needs here without the additional dependency. As far as I can tell we're not making use of any of the features provided by the \"decorator\" package.\r\n\r\nThe `@decorator` mechanism also seems to interfere with Django's `@method_decorator`, which led to refactoring the `requires_api_key` decorator already.\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2016-2020, All Rights Reserved\n# -*- coding: utf-8 -*-\n\n\nimport datetime\n\nfrom decorator import decorator, decorate\nfrom functools import wraps\n\nfrom django.conf import settings\nfrom django.contrib.auth import login\nfrom django.http import HttpResponse\nfrom django.shortcuts import render\nfrom django.utils import timezone\nfrom django.utils.encoding import force_bytes\n\nimport debug # pyflakes:ignore\n\nfrom ietf.utils.test_runner import set_coverage_checking\nfrom ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent\nfrom ietf.utils import log\n\n@decorator\ndef skip_coverage(f, *args, **kwargs):\n if settings.TEST_CODE_COVERAGE_CHECKER:\n set_coverage_checking(False)\n result = f(*args, **kwargs)\n set_coverage_checking(True)\n return result\n else:\n return f(*args, **kwargs)\n\n@decorator\ndef person_required(f, request, *args, **kwargs):\n if not request.user.is_authenticated:\n raise ValueError(\"The @person_required decorator should be called after @login_required.\")\n try:\n request.user.person\n except Person.DoesNotExist:\n return render(request, 'registration/missing_person.html')\n return f(request, *args, **kwargs)\n\n\ndef require_api_key(f):\n @wraps(f)\n def _wrapper(request, *args, **kwargs):\n def err(code, text):\n return HttpResponse(text, status=code, content_type='text/plain')\n # Check method and get hash\n if request.method == 'POST':\n hash = request.POST.get('apikey')\n elif request.method == 'GET':\n hash = request.GET.get('apikey')\n else:\n return err(405, \"Method not allowed\")\n if not hash:\n return err(400, \"Missing apikey parameter\")\n # Check hash\n key = PersonalApiKey.validate_key(force_bytes(hash))\n if not key:\n return err(403, \"Invalid apikey\")\n # Check endpoint\n urlpath = request.META.get('PATH_INFO')\n if not (urlpath and urlpath == key.endpoint):\n return err(400, \"Apikey endpoint mismatch\") \n # Check time since regular login\n person = key.person\n last_login = person.user.last_login\n if not person.user.is_staff:\n time_limit = (timezone.now() - datetime.timedelta(days=settings.UTILS_APIKEY_GUI_LOGIN_LIMIT_DAYS))\n if last_login == None or last_login < time_limit:\n return err(400, \"Too long since last regular login\")\n # Log in\n login(request, person.user)\n # restore the user.last_login field, so it reflects only gui logins\n person.user.last_login = last_login\n person.user.save()\n # Update stats\n key.count += 1\n key.latest = timezone.now()\n key.save()\n PersonApiKeyEvent.objects.create(person=person, type='apikey_login', key=key, desc=\"Logged in with key ID %s, endpoint %s\" % (key.id, key.endpoint))\n # Execute decorated function\n try:\n ret = f(request, *args, **kwargs)\n except AttributeError as e:\n log.log(\"Bad API call: args: %s, kwargs: %s, exception: %s\" % (args, kwargs, e))\n return err(400, \"Bad or missing parameters\")\n return ret\n return _wrapper\n\n\ndef _memoize(func, self, *args, **kwargs):\n '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''\n if kwargs: # frozenset is used to ensure hashability\n key = args, frozenset(list(kwargs.items()))\n else:\n key = args\n # instance method, set up cache if needed\n if not hasattr(self, '_cache'):\n self._cache = {}\n if not func in self._cache:\n self._cache[func] = {} \n #\n cache = self._cache[func]\n if key not in cache:\n cache[key] = func(self, *args, **kwargs)\n return cache[key]\ndef memoize(func):\n if not hasattr(func, '__class__'):\n raise NotImplementedError(\"Use @lru_cache instead of memoize() for functions.\")\n # For methods, we want the cache on the object, not on the class, in order\n # to not having to think about cache bloat and content becoming stale, so\n # we cannot set up the cache here.\n return decorate(func, _memoize)\n\n\ndef ignore_view_kwargs(*args):\n \"\"\"Ignore the specified kwargs if they are present\n\n Usage: \n @ignore_view_kwargs(\"ignore_arg1\", \"ignore_arg2\")\n def my_view(request, good_arg):\n ...\n\n This will allow my_view() to be used in url() paths that have zero, one, or both of\n ignore_arg1 and ignore_arg2 captured. These will be ignored, while good_arg will still\n be captured as usual.\n \"\"\"\n kwargs_to_ignore = args\n\n def decorate(view):\n @wraps(view)\n def wrapped(*args, **kwargs):\n for kwarg in kwargs_to_ignore:\n kwargs.pop(kwarg, None)\n return view(*args, **kwargs)\n\n return wrapped\n\n return decorate\n\n\n", "path": "ietf/utils/decorators.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2016-2020, All Rights Reserved\n# -*- coding: utf-8 -*-\n\n\nimport datetime\n\nfrom functools import wraps\n\nfrom django.conf import settings\nfrom django.contrib.auth import login\nfrom django.http import HttpResponse\nfrom django.shortcuts import render\nfrom django.utils import timezone\nfrom django.utils.encoding import force_bytes\n\nimport debug # pyflakes:ignore\n\nfrom ietf.utils.test_runner import set_coverage_checking\nfrom ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent\nfrom ietf.utils import log\n\ndef skip_coverage(f):\n @wraps(f)\n def _wrapper(*args, **kwargs):\n if settings.TEST_CODE_COVERAGE_CHECKER:\n set_coverage_checking(False)\n result = f(*args, **kwargs)\n set_coverage_checking(True)\n return result\n else:\n return f(*args, **kwargs)\n return _wrapper\n\ndef person_required(f):\n @wraps(f)\n def _wrapper(request, *args, **kwargs):\n if not request.user.is_authenticated:\n raise ValueError(\"The @person_required decorator should be called after @login_required.\")\n try:\n request.user.person\n except Person.DoesNotExist:\n return render(request, 'registration/missing_person.html')\n return f(request, *args, **kwargs)\n return _wrapper\n\n\ndef require_api_key(f):\n @wraps(f)\n def _wrapper(request, *args, **kwargs):\n def err(code, text):\n return HttpResponse(text, status=code, content_type='text/plain')\n # Check method and get hash\n if request.method == 'POST':\n hash = request.POST.get('apikey')\n elif request.method == 'GET':\n hash = request.GET.get('apikey')\n else:\n return err(405, \"Method not allowed\")\n if not hash:\n return err(400, \"Missing apikey parameter\")\n # Check hash\n key = PersonalApiKey.validate_key(force_bytes(hash))\n if not key:\n return err(403, \"Invalid apikey\")\n # Check endpoint\n urlpath = request.META.get('PATH_INFO')\n if not (urlpath and urlpath == key.endpoint):\n return err(400, \"Apikey endpoint mismatch\") \n # Check time since regular login\n person = key.person\n last_login = person.user.last_login\n if not person.user.is_staff:\n time_limit = (timezone.now() - datetime.timedelta(days=settings.UTILS_APIKEY_GUI_LOGIN_LIMIT_DAYS))\n if last_login == None or last_login < time_limit:\n return err(400, \"Too long since last regular login\")\n # Log in\n login(request, person.user)\n # restore the user.last_login field, so it reflects only gui logins\n person.user.last_login = last_login\n person.user.save()\n # Update stats\n key.count += 1\n key.latest = timezone.now()\n key.save()\n PersonApiKeyEvent.objects.create(person=person, type='apikey_login', key=key, desc=\"Logged in with key ID %s, endpoint %s\" % (key.id, key.endpoint))\n # Execute decorated function\n try:\n ret = f(request, *args, **kwargs)\n except AttributeError as e:\n log.log(\"Bad API call: args: %s, kwargs: %s, exception: %s\" % (args, kwargs, e))\n return err(400, \"Bad or missing parameters\")\n return ret\n return _wrapper\n\n\ndef memoize(func):\n @wraps(func)\n def _memoize(self, *args, **kwargs):\n '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''\n if kwargs: # frozenset is used to ensure hashability\n key = args, frozenset(list(kwargs.items()))\n else:\n key = args\n # instance method, set up cache if needed\n if not hasattr(self, '_cache'):\n self._cache = {}\n if not func in self._cache:\n self._cache[func] = {} \n #\n cache = self._cache[func]\n if key not in cache:\n cache[key] = func(self, *args, **kwargs)\n return cache[key]\n\n if not hasattr(func, '__class__'):\n raise NotImplementedError(\"Use @lru_cache instead of memoize() for functions.\")\n # For methods, we want the cache on the object, not on the class, in order\n # to not having to think about cache bloat and content becoming stale, so\n # we cannot set up the cache here.\n return _memoize\n\n\ndef ignore_view_kwargs(*args):\n \"\"\"Ignore the specified kwargs if they are present\n\n Usage: \n @ignore_view_kwargs(\"ignore_arg1\", \"ignore_arg2\")\n def my_view(request, good_arg):\n ...\n\n This will allow my_view() to be used in url() paths that have zero, one, or both of\n ignore_arg1 and ignore_arg2 captured. These will be ignored, while good_arg will still\n be captured as usual.\n \"\"\"\n kwargs_to_ignore = args\n\n def decorate(view):\n @wraps(view)\n def wrapped(*args, **kwargs):\n for kwarg in kwargs_to_ignore:\n kwargs.pop(kwarg, None)\n return view(*args, **kwargs)\n\n return wrapped\n\n return decorate\n\n\n", "path": "ietf/utils/decorators.py"}]}
| 1,927 | 933 |
gh_patches_debug_10024
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-8933
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] rllib train ... --checkpoint-at-end flag is ignored
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
Ray 0.8.5
Python 3.7
MacOS Catalina
PyTorch 1.4.0
No checkpoint is produced when the `--checkpoint-at-end` flag is used. Adding `--checkpoint-freq 10` does cause checkpoints to be saved.
### Reproduction (REQUIRED)
```shell
rllib train --run PPO --env CartPole-v0 --stop='{"training_iteration": 25}' --ray-address auto --checkpoint-at-end
```
No checkpoint directory exists under `~/ray_results/default/PPO...`
Add the `--checkpoint-freq 10` flag:
```shell
rllib train --run PPO --env CartPole-v0 --stop='{"training_iteration": 25}' --ray-address auto --checkpoint-at-end --checkpoint-freq 10
```
Now there are `checkpoint_10` and `checkpoint_20` directories, but not a `checkpoint_25` at the end.
Could the choice of stop criteria, `training_iteration`, have something to do with it?
> **Note:** Being persnickety, it bugs me that the final directory path for the checkpoints is `.../checkpoint_20/checkpoint-20` (underscore vs. dash). How about one or the other?
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
[rllib] rllib train ... --checkpoint-at-end flag is ignored
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
Ray 0.8.5
Python 3.7
MacOS Catalina
PyTorch 1.4.0
No checkpoint is produced when the `--checkpoint-at-end` flag is used. Adding `--checkpoint-freq 10` does cause checkpoints to be saved.
### Reproduction (REQUIRED)
```shell
rllib train --run PPO --env CartPole-v0 --stop='{"training_iteration": 25}' --ray-address auto --checkpoint-at-end
```
No checkpoint directory exists under `~/ray_results/default/PPO...`
Add the `--checkpoint-freq 10` flag:
```shell
rllib train --run PPO --env CartPole-v0 --stop='{"training_iteration": 25}' --ray-address auto --checkpoint-at-end --checkpoint-freq 10
```
Now there are `checkpoint_10` and `checkpoint_20` directories, but not a `checkpoint_25` at the end.
Could the choice of stop criteria, `training_iteration`, have something to do with it?
> **Note:** Being persnickety, it bugs me that the final directory path for the checkpoints is `.../checkpoint_20/checkpoint-20` (underscore vs. dash). How about one or the other?
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rllib/train.py`
Content:
```
1 #!/usr/bin/env python
2
3 import argparse
4 import os
5 from pathlib import Path
6 import yaml
7
8 import ray
9 from ray.cluster_utils import Cluster
10 from ray.tune.config_parser import make_parser
11 from ray.tune.result import DEFAULT_RESULTS_DIR
12 from ray.tune.resources import resources_to_json
13 from ray.tune.tune import _make_scheduler, run_experiments
14 from ray.rllib.utils.framework import try_import_tf, try_import_torch
15
16 # Try to import both backends for flag checking/warnings.
17 tf = try_import_tf()
18 torch, _ = try_import_torch()
19
20 EXAMPLE_USAGE = """
21 Training example via RLlib CLI:
22 rllib train --run DQN --env CartPole-v0
23
24 Grid search example via RLlib CLI:
25 rllib train -f tuned_examples/cartpole-grid-search-example.yaml
26
27 Grid search example via executable:
28 ./train.py -f tuned_examples/cartpole-grid-search-example.yaml
29
30 Note that -f overrides all other trial-specific command-line options.
31 """
32
33
34 def create_parser(parser_creator=None):
35 parser = make_parser(
36 parser_creator=parser_creator,
37 formatter_class=argparse.RawDescriptionHelpFormatter,
38 description="Train a reinforcement learning agent.",
39 epilog=EXAMPLE_USAGE)
40
41 # See also the base parser definition in ray/tune/config_parser.py
42 parser.add_argument(
43 "--ray-address",
44 default=None,
45 type=str,
46 help="Connect to an existing Ray cluster at this address instead "
47 "of starting a new one.")
48 parser.add_argument(
49 "--no-ray-ui",
50 action="store_true",
51 help="Whether to disable the Ray web ui.")
52 parser.add_argument(
53 "--local-mode",
54 action="store_true",
55 help="Whether to run ray with `local_mode=True`. "
56 "Only if --ray-num-nodes is not used.")
57 parser.add_argument(
58 "--ray-num-cpus",
59 default=None,
60 type=int,
61 help="--num-cpus to use if starting a new cluster.")
62 parser.add_argument(
63 "--ray-num-gpus",
64 default=None,
65 type=int,
66 help="--num-gpus to use if starting a new cluster.")
67 parser.add_argument(
68 "--ray-num-nodes",
69 default=None,
70 type=int,
71 help="Emulate multiple cluster nodes for debugging.")
72 parser.add_argument(
73 "--ray-redis-max-memory",
74 default=None,
75 type=int,
76 help="--redis-max-memory to use if starting a new cluster.")
77 parser.add_argument(
78 "--ray-memory",
79 default=None,
80 type=int,
81 help="--memory to use if starting a new cluster.")
82 parser.add_argument(
83 "--ray-object-store-memory",
84 default=None,
85 type=int,
86 help="--object-store-memory to use if starting a new cluster.")
87 parser.add_argument(
88 "--experiment-name",
89 default="default",
90 type=str,
91 help="Name of the subdirectory under `local_dir` to put results in.")
92 parser.add_argument(
93 "--local-dir",
94 default=DEFAULT_RESULTS_DIR,
95 type=str,
96 help="Local dir to save training results to. Defaults to '{}'.".format(
97 DEFAULT_RESULTS_DIR))
98 parser.add_argument(
99 "--upload-dir",
100 default="",
101 type=str,
102 help="Optional URI to sync training results to (e.g. s3://bucket).")
103 parser.add_argument(
104 "-v", action="store_true", help="Whether to use INFO level logging.")
105 parser.add_argument(
106 "-vv", action="store_true", help="Whether to use DEBUG level logging.")
107 parser.add_argument(
108 "--resume",
109 action="store_true",
110 help="Whether to attempt to resume previous Tune experiments.")
111 parser.add_argument(
112 "--torch",
113 action="store_true",
114 help="Whether to use PyTorch (instead of tf) as the DL framework.")
115 parser.add_argument(
116 "--eager",
117 action="store_true",
118 help="Whether to attempt to enable TF eager execution.")
119 parser.add_argument(
120 "--trace",
121 action="store_true",
122 help="Whether to attempt to enable tracing for eager mode.")
123 parser.add_argument(
124 "--env", default=None, type=str, help="The gym environment to use.")
125 parser.add_argument(
126 "--queue-trials",
127 action="store_true",
128 help=(
129 "Whether to queue trials when the cluster does not currently have "
130 "enough resources to launch one. This should be set to True when "
131 "running on an autoscaling cluster to enable automatic scale-up."))
132 parser.add_argument(
133 "-f",
134 "--config-file",
135 default=None,
136 type=str,
137 help="If specified, use config options from this file. Note that this "
138 "overrides any trial-specific options set via flags above.")
139 return parser
140
141
142 def run(args, parser):
143 if args.config_file:
144 with open(args.config_file) as f:
145 experiments = yaml.safe_load(f)
146 else:
147 # Note: keep this in sync with tune/config_parser.py
148 experiments = {
149 args.experiment_name: { # i.e. log to ~/ray_results/default
150 "run": args.run,
151 "checkpoint_freq": args.checkpoint_freq,
152 "keep_checkpoints_num": args.keep_checkpoints_num,
153 "checkpoint_score_attr": args.checkpoint_score_attr,
154 "local_dir": args.local_dir,
155 "resources_per_trial": (
156 args.resources_per_trial and
157 resources_to_json(args.resources_per_trial)),
158 "stop": args.stop,
159 "config": dict(args.config, env=args.env),
160 "restore": args.restore,
161 "num_samples": args.num_samples,
162 "upload_dir": args.upload_dir,
163 }
164 }
165
166 verbose = 1
167 for exp in experiments.values():
168 # Bazel makes it hard to find files specified in `args` (and `data`).
169 # Look for them here.
170 # NOTE: Some of our yaml files don't have a `config` section.
171 if exp.get("config", {}).get("input") and \
172 not os.path.exists(exp["config"]["input"]):
173 # This script runs in the ray/rllib dir.
174 rllib_dir = Path(__file__).parent
175 input_file = rllib_dir.absolute().joinpath(exp["config"]["input"])
176 exp["config"]["input"] = str(input_file)
177
178 if not exp.get("run"):
179 parser.error("the following arguments are required: --run")
180 if not exp.get("env") and not exp.get("config", {}).get("env"):
181 parser.error("the following arguments are required: --env")
182 if args.eager:
183 exp["config"]["framework"] = "tfe"
184 elif args.torch:
185 exp["config"]["framework"] = "torch"
186 else:
187 exp["config"]["framework"] = "tf"
188 if args.v:
189 exp["config"]["log_level"] = "INFO"
190 verbose = 2
191 if args.vv:
192 exp["config"]["log_level"] = "DEBUG"
193 verbose = 3
194 if args.trace:
195 if exp["config"]["framework"] != "tfe":
196 raise ValueError("Must enable --eager to enable tracing.")
197 exp["config"]["eager_tracing"] = True
198
199 if args.ray_num_nodes:
200 cluster = Cluster()
201 for _ in range(args.ray_num_nodes):
202 cluster.add_node(
203 num_cpus=args.ray_num_cpus or 1,
204 num_gpus=args.ray_num_gpus or 0,
205 object_store_memory=args.ray_object_store_memory,
206 memory=args.ray_memory,
207 redis_max_memory=args.ray_redis_max_memory)
208 ray.init(address=cluster.address)
209 else:
210 ray.init(
211 include_webui=not args.no_ray_ui,
212 address=args.ray_address,
213 object_store_memory=args.ray_object_store_memory,
214 memory=args.ray_memory,
215 redis_max_memory=args.ray_redis_max_memory,
216 num_cpus=args.ray_num_cpus,
217 num_gpus=args.ray_num_gpus,
218 local_mode=args.local_mode)
219 run_experiments(
220 experiments,
221 scheduler=_make_scheduler(args),
222 queue_trials=args.queue_trials,
223 resume=args.resume,
224 verbose=verbose,
225 concurrent=True)
226
227
228 if __name__ == "__main__":
229 parser = create_parser()
230 args = parser.parse_args()
231 run(args, parser)
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rllib/train.py b/rllib/train.py
--- a/rllib/train.py
+++ b/rllib/train.py
@@ -149,6 +149,7 @@
args.experiment_name: { # i.e. log to ~/ray_results/default
"run": args.run,
"checkpoint_freq": args.checkpoint_freq,
+ "checkpoint_at_end": args.checkpoint_at_end,
"keep_checkpoints_num": args.keep_checkpoints_num,
"checkpoint_score_attr": args.checkpoint_score_attr,
"local_dir": args.local_dir,
|
{"golden_diff": "diff --git a/rllib/train.py b/rllib/train.py\n--- a/rllib/train.py\n+++ b/rllib/train.py\n@@ -149,6 +149,7 @@\n args.experiment_name: { # i.e. log to ~/ray_results/default\n \"run\": args.run,\n \"checkpoint_freq\": args.checkpoint_freq,\n+ \"checkpoint_at_end\": args.checkpoint_at_end,\n \"keep_checkpoints_num\": args.keep_checkpoints_num,\n \"checkpoint_score_attr\": args.checkpoint_score_attr,\n \"local_dir\": args.local_dir,\n", "issue": "[rllib] rllib train ... --checkpoint-at-end flag is ignored\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nRay 0.8.5\r\nPython 3.7\r\nMacOS Catalina\r\nPyTorch 1.4.0\r\n\r\nNo checkpoint is produced when the `--checkpoint-at-end` flag is used. Adding `--checkpoint-freq 10` does cause checkpoints to be saved.\r\n\r\n### Reproduction (REQUIRED)\r\n\r\n```shell\r\nrllib train --run PPO --env CartPole-v0 --stop='{\"training_iteration\": 25}' --ray-address auto --checkpoint-at-end\r\n```\r\n\r\nNo checkpoint directory exists under `~/ray_results/default/PPO...`\r\n\r\nAdd the `--checkpoint-freq 10` flag:\r\n\r\n```shell\r\nrllib train --run PPO --env CartPole-v0 --stop='{\"training_iteration\": 25}' --ray-address auto --checkpoint-at-end --checkpoint-freq 10\r\n```\r\n\r\nNow there are `checkpoint_10` and `checkpoint_20` directories, but not a `checkpoint_25` at the end. \r\n\r\nCould the choice of stop criteria, `training_iteration`, have something to do with it?\r\n\r\n> **Note:** Being persnickety, it bugs me that the final directory path for the checkpoints is `.../checkpoint_20/checkpoint-20` (underscore vs. dash). How about one or the other?\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n[rllib] rllib train ... --checkpoint-at-end flag is ignored\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nRay 0.8.5\r\nPython 3.7\r\nMacOS Catalina\r\nPyTorch 1.4.0\r\n\r\nNo checkpoint is produced when the `--checkpoint-at-end` flag is used. Adding `--checkpoint-freq 10` does cause checkpoints to be saved.\r\n\r\n### Reproduction (REQUIRED)\r\n\r\n```shell\r\nrllib train --run PPO --env CartPole-v0 --stop='{\"training_iteration\": 25}' --ray-address auto --checkpoint-at-end\r\n```\r\n\r\nNo checkpoint directory exists under `~/ray_results/default/PPO...`\r\n\r\nAdd the `--checkpoint-freq 10` flag:\r\n\r\n```shell\r\nrllib train --run PPO --env CartPole-v0 --stop='{\"training_iteration\": 25}' --ray-address auto --checkpoint-at-end --checkpoint-freq 10\r\n```\r\n\r\nNow there are `checkpoint_10` and `checkpoint_20` directories, but not a `checkpoint_25` at the end. \r\n\r\nCould the choice of stop criteria, `training_iteration`, have something to do with it?\r\n\r\n> **Note:** Being persnickety, it bugs me that the final directory path for the checkpoints is `.../checkpoint_20/checkpoint-20` (underscore vs. dash). How about one or the other?\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport argparse\nimport os\nfrom pathlib import Path\nimport yaml\n\nimport ray\nfrom ray.cluster_utils import Cluster\nfrom ray.tune.config_parser import make_parser\nfrom ray.tune.result import DEFAULT_RESULTS_DIR\nfrom ray.tune.resources import resources_to_json\nfrom ray.tune.tune import _make_scheduler, run_experiments\nfrom ray.rllib.utils.framework import try_import_tf, try_import_torch\n\n# Try to import both backends for flag checking/warnings.\ntf = try_import_tf()\ntorch, _ = try_import_torch()\n\nEXAMPLE_USAGE = \"\"\"\nTraining example via RLlib CLI:\n rllib train --run DQN --env CartPole-v0\n\nGrid search example via RLlib CLI:\n rllib train -f tuned_examples/cartpole-grid-search-example.yaml\n\nGrid search example via executable:\n ./train.py -f tuned_examples/cartpole-grid-search-example.yaml\n\nNote that -f overrides all other trial-specific command-line options.\n\"\"\"\n\n\ndef create_parser(parser_creator=None):\n parser = make_parser(\n parser_creator=parser_creator,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n description=\"Train a reinforcement learning agent.\",\n epilog=EXAMPLE_USAGE)\n\n # See also the base parser definition in ray/tune/config_parser.py\n parser.add_argument(\n \"--ray-address\",\n default=None,\n type=str,\n help=\"Connect to an existing Ray cluster at this address instead \"\n \"of starting a new one.\")\n parser.add_argument(\n \"--no-ray-ui\",\n action=\"store_true\",\n help=\"Whether to disable the Ray web ui.\")\n parser.add_argument(\n \"--local-mode\",\n action=\"store_true\",\n help=\"Whether to run ray with `local_mode=True`. \"\n \"Only if --ray-num-nodes is not used.\")\n parser.add_argument(\n \"--ray-num-cpus\",\n default=None,\n type=int,\n help=\"--num-cpus to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-num-gpus\",\n default=None,\n type=int,\n help=\"--num-gpus to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-num-nodes\",\n default=None,\n type=int,\n help=\"Emulate multiple cluster nodes for debugging.\")\n parser.add_argument(\n \"--ray-redis-max-memory\",\n default=None,\n type=int,\n help=\"--redis-max-memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-memory\",\n default=None,\n type=int,\n help=\"--memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-object-store-memory\",\n default=None,\n type=int,\n help=\"--object-store-memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--experiment-name\",\n default=\"default\",\n type=str,\n help=\"Name of the subdirectory under `local_dir` to put results in.\")\n parser.add_argument(\n \"--local-dir\",\n default=DEFAULT_RESULTS_DIR,\n type=str,\n help=\"Local dir to save training results to. Defaults to '{}'.\".format(\n DEFAULT_RESULTS_DIR))\n parser.add_argument(\n \"--upload-dir\",\n default=\"\",\n type=str,\n help=\"Optional URI to sync training results to (e.g. s3://bucket).\")\n parser.add_argument(\n \"-v\", action=\"store_true\", help=\"Whether to use INFO level logging.\")\n parser.add_argument(\n \"-vv\", action=\"store_true\", help=\"Whether to use DEBUG level logging.\")\n parser.add_argument(\n \"--resume\",\n action=\"store_true\",\n help=\"Whether to attempt to resume previous Tune experiments.\")\n parser.add_argument(\n \"--torch\",\n action=\"store_true\",\n help=\"Whether to use PyTorch (instead of tf) as the DL framework.\")\n parser.add_argument(\n \"--eager\",\n action=\"store_true\",\n help=\"Whether to attempt to enable TF eager execution.\")\n parser.add_argument(\n \"--trace\",\n action=\"store_true\",\n help=\"Whether to attempt to enable tracing for eager mode.\")\n parser.add_argument(\n \"--env\", default=None, type=str, help=\"The gym environment to use.\")\n parser.add_argument(\n \"--queue-trials\",\n action=\"store_true\",\n help=(\n \"Whether to queue trials when the cluster does not currently have \"\n \"enough resources to launch one. This should be set to True when \"\n \"running on an autoscaling cluster to enable automatic scale-up.\"))\n parser.add_argument(\n \"-f\",\n \"--config-file\",\n default=None,\n type=str,\n help=\"If specified, use config options from this file. Note that this \"\n \"overrides any trial-specific options set via flags above.\")\n return parser\n\n\ndef run(args, parser):\n if args.config_file:\n with open(args.config_file) as f:\n experiments = yaml.safe_load(f)\n else:\n # Note: keep this in sync with tune/config_parser.py\n experiments = {\n args.experiment_name: { # i.e. log to ~/ray_results/default\n \"run\": args.run,\n \"checkpoint_freq\": args.checkpoint_freq,\n \"keep_checkpoints_num\": args.keep_checkpoints_num,\n \"checkpoint_score_attr\": args.checkpoint_score_attr,\n \"local_dir\": args.local_dir,\n \"resources_per_trial\": (\n args.resources_per_trial and\n resources_to_json(args.resources_per_trial)),\n \"stop\": args.stop,\n \"config\": dict(args.config, env=args.env),\n \"restore\": args.restore,\n \"num_samples\": args.num_samples,\n \"upload_dir\": args.upload_dir,\n }\n }\n\n verbose = 1\n for exp in experiments.values():\n # Bazel makes it hard to find files specified in `args` (and `data`).\n # Look for them here.\n # NOTE: Some of our yaml files don't have a `config` section.\n if exp.get(\"config\", {}).get(\"input\") and \\\n not os.path.exists(exp[\"config\"][\"input\"]):\n # This script runs in the ray/rllib dir.\n rllib_dir = Path(__file__).parent\n input_file = rllib_dir.absolute().joinpath(exp[\"config\"][\"input\"])\n exp[\"config\"][\"input\"] = str(input_file)\n\n if not exp.get(\"run\"):\n parser.error(\"the following arguments are required: --run\")\n if not exp.get(\"env\") and not exp.get(\"config\", {}).get(\"env\"):\n parser.error(\"the following arguments are required: --env\")\n if args.eager:\n exp[\"config\"][\"framework\"] = \"tfe\"\n elif args.torch:\n exp[\"config\"][\"framework\"] = \"torch\"\n else:\n exp[\"config\"][\"framework\"] = \"tf\"\n if args.v:\n exp[\"config\"][\"log_level\"] = \"INFO\"\n verbose = 2\n if args.vv:\n exp[\"config\"][\"log_level\"] = \"DEBUG\"\n verbose = 3\n if args.trace:\n if exp[\"config\"][\"framework\"] != \"tfe\":\n raise ValueError(\"Must enable --eager to enable tracing.\")\n exp[\"config\"][\"eager_tracing\"] = True\n\n if args.ray_num_nodes:\n cluster = Cluster()\n for _ in range(args.ray_num_nodes):\n cluster.add_node(\n num_cpus=args.ray_num_cpus or 1,\n num_gpus=args.ray_num_gpus or 0,\n object_store_memory=args.ray_object_store_memory,\n memory=args.ray_memory,\n redis_max_memory=args.ray_redis_max_memory)\n ray.init(address=cluster.address)\n else:\n ray.init(\n include_webui=not args.no_ray_ui,\n address=args.ray_address,\n object_store_memory=args.ray_object_store_memory,\n memory=args.ray_memory,\n redis_max_memory=args.ray_redis_max_memory,\n num_cpus=args.ray_num_cpus,\n num_gpus=args.ray_num_gpus,\n local_mode=args.local_mode)\n run_experiments(\n experiments,\n scheduler=_make_scheduler(args),\n queue_trials=args.queue_trials,\n resume=args.resume,\n verbose=verbose,\n concurrent=True)\n\n\nif __name__ == \"__main__\":\n parser = create_parser()\n args = parser.parse_args()\n run(args, parser)\n", "path": "rllib/train.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport argparse\nimport os\nfrom pathlib import Path\nimport yaml\n\nimport ray\nfrom ray.cluster_utils import Cluster\nfrom ray.tune.config_parser import make_parser\nfrom ray.tune.result import DEFAULT_RESULTS_DIR\nfrom ray.tune.resources import resources_to_json\nfrom ray.tune.tune import _make_scheduler, run_experiments\nfrom ray.rllib.utils.framework import try_import_tf, try_import_torch\n\n# Try to import both backends for flag checking/warnings.\ntf = try_import_tf()\ntorch, _ = try_import_torch()\n\nEXAMPLE_USAGE = \"\"\"\nTraining example via RLlib CLI:\n rllib train --run DQN --env CartPole-v0\n\nGrid search example via RLlib CLI:\n rllib train -f tuned_examples/cartpole-grid-search-example.yaml\n\nGrid search example via executable:\n ./train.py -f tuned_examples/cartpole-grid-search-example.yaml\n\nNote that -f overrides all other trial-specific command-line options.\n\"\"\"\n\n\ndef create_parser(parser_creator=None):\n parser = make_parser(\n parser_creator=parser_creator,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n description=\"Train a reinforcement learning agent.\",\n epilog=EXAMPLE_USAGE)\n\n # See also the base parser definition in ray/tune/config_parser.py\n parser.add_argument(\n \"--ray-address\",\n default=None,\n type=str,\n help=\"Connect to an existing Ray cluster at this address instead \"\n \"of starting a new one.\")\n parser.add_argument(\n \"--no-ray-ui\",\n action=\"store_true\",\n help=\"Whether to disable the Ray web ui.\")\n parser.add_argument(\n \"--local-mode\",\n action=\"store_true\",\n help=\"Whether to run ray with `local_mode=True`. \"\n \"Only if --ray-num-nodes is not used.\")\n parser.add_argument(\n \"--ray-num-cpus\",\n default=None,\n type=int,\n help=\"--num-cpus to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-num-gpus\",\n default=None,\n type=int,\n help=\"--num-gpus to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-num-nodes\",\n default=None,\n type=int,\n help=\"Emulate multiple cluster nodes for debugging.\")\n parser.add_argument(\n \"--ray-redis-max-memory\",\n default=None,\n type=int,\n help=\"--redis-max-memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-memory\",\n default=None,\n type=int,\n help=\"--memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-object-store-memory\",\n default=None,\n type=int,\n help=\"--object-store-memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--experiment-name\",\n default=\"default\",\n type=str,\n help=\"Name of the subdirectory under `local_dir` to put results in.\")\n parser.add_argument(\n \"--local-dir\",\n default=DEFAULT_RESULTS_DIR,\n type=str,\n help=\"Local dir to save training results to. Defaults to '{}'.\".format(\n DEFAULT_RESULTS_DIR))\n parser.add_argument(\n \"--upload-dir\",\n default=\"\",\n type=str,\n help=\"Optional URI to sync training results to (e.g. s3://bucket).\")\n parser.add_argument(\n \"-v\", action=\"store_true\", help=\"Whether to use INFO level logging.\")\n parser.add_argument(\n \"-vv\", action=\"store_true\", help=\"Whether to use DEBUG level logging.\")\n parser.add_argument(\n \"--resume\",\n action=\"store_true\",\n help=\"Whether to attempt to resume previous Tune experiments.\")\n parser.add_argument(\n \"--torch\",\n action=\"store_true\",\n help=\"Whether to use PyTorch (instead of tf) as the DL framework.\")\n parser.add_argument(\n \"--eager\",\n action=\"store_true\",\n help=\"Whether to attempt to enable TF eager execution.\")\n parser.add_argument(\n \"--trace\",\n action=\"store_true\",\n help=\"Whether to attempt to enable tracing for eager mode.\")\n parser.add_argument(\n \"--env\", default=None, type=str, help=\"The gym environment to use.\")\n parser.add_argument(\n \"--queue-trials\",\n action=\"store_true\",\n help=(\n \"Whether to queue trials when the cluster does not currently have \"\n \"enough resources to launch one. This should be set to True when \"\n \"running on an autoscaling cluster to enable automatic scale-up.\"))\n parser.add_argument(\n \"-f\",\n \"--config-file\",\n default=None,\n type=str,\n help=\"If specified, use config options from this file. Note that this \"\n \"overrides any trial-specific options set via flags above.\")\n return parser\n\n\ndef run(args, parser):\n if args.config_file:\n with open(args.config_file) as f:\n experiments = yaml.safe_load(f)\n else:\n # Note: keep this in sync with tune/config_parser.py\n experiments = {\n args.experiment_name: { # i.e. log to ~/ray_results/default\n \"run\": args.run,\n \"checkpoint_freq\": args.checkpoint_freq,\n \"checkpoint_at_end\": args.checkpoint_at_end,\n \"keep_checkpoints_num\": args.keep_checkpoints_num,\n \"checkpoint_score_attr\": args.checkpoint_score_attr,\n \"local_dir\": args.local_dir,\n \"resources_per_trial\": (\n args.resources_per_trial and\n resources_to_json(args.resources_per_trial)),\n \"stop\": args.stop,\n \"config\": dict(args.config, env=args.env),\n \"restore\": args.restore,\n \"num_samples\": args.num_samples,\n \"upload_dir\": args.upload_dir,\n }\n }\n\n verbose = 1\n for exp in experiments.values():\n # Bazel makes it hard to find files specified in `args` (and `data`).\n # Look for them here.\n # NOTE: Some of our yaml files don't have a `config` section.\n if exp.get(\"config\", {}).get(\"input\") and \\\n not os.path.exists(exp[\"config\"][\"input\"]):\n # This script runs in the ray/rllib dir.\n rllib_dir = Path(__file__).parent\n input_file = rllib_dir.absolute().joinpath(exp[\"config\"][\"input\"])\n exp[\"config\"][\"input\"] = str(input_file)\n\n if not exp.get(\"run\"):\n parser.error(\"the following arguments are required: --run\")\n if not exp.get(\"env\") and not exp.get(\"config\", {}).get(\"env\"):\n parser.error(\"the following arguments are required: --env\")\n if args.eager:\n exp[\"config\"][\"framework\"] = \"tfe\"\n elif args.torch:\n exp[\"config\"][\"framework\"] = \"torch\"\n else:\n exp[\"config\"][\"framework\"] = \"tf\"\n if args.v:\n exp[\"config\"][\"log_level\"] = \"INFO\"\n verbose = 2\n if args.vv:\n exp[\"config\"][\"log_level\"] = \"DEBUG\"\n verbose = 3\n if args.trace:\n if exp[\"config\"][\"framework\"] != \"tfe\":\n raise ValueError(\"Must enable --eager to enable tracing.\")\n exp[\"config\"][\"eager_tracing\"] = True\n\n if args.ray_num_nodes:\n cluster = Cluster()\n for _ in range(args.ray_num_nodes):\n cluster.add_node(\n num_cpus=args.ray_num_cpus or 1,\n num_gpus=args.ray_num_gpus or 0,\n object_store_memory=args.ray_object_store_memory,\n memory=args.ray_memory,\n redis_max_memory=args.ray_redis_max_memory)\n ray.init(address=cluster.address)\n else:\n ray.init(\n include_webui=not args.no_ray_ui,\n address=args.ray_address,\n object_store_memory=args.ray_object_store_memory,\n memory=args.ray_memory,\n redis_max_memory=args.ray_redis_max_memory,\n num_cpus=args.ray_num_cpus,\n num_gpus=args.ray_num_gpus,\n local_mode=args.local_mode)\n run_experiments(\n experiments,\n scheduler=_make_scheduler(args),\n queue_trials=args.queue_trials,\n resume=args.resume,\n verbose=verbose,\n concurrent=True)\n\n\nif __name__ == \"__main__\":\n parser = create_parser()\n args = parser.parse_args()\n run(args, parser)\n", "path": "rllib/train.py"}]}
| 3,377 | 125 |
gh_patches_debug_12265
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-273
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sources should automatically have segments
From #257:
> A source should always have a segment. It is either "Cantus Database" or "Sequence Database". It's a foreign key field. In cases where a source doesn't have a segment, it is probably a test source that we created.
> Desired behaviour: when creating a source, assign it to "Cantus Database" by default.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/models/source.py`
Content:
```
1 from django.db import models
2 from main_app.models import BaseModel
3 from django.contrib.auth import get_user_model
4
5
6 class Source(BaseModel):
7 cursus_choices = [("Monastic", "Monastic"), ("Secular", "Secular")]
8 source_status_choices = [
9 (
10 "Editing process (not all the fields have been proofread)",
11 "Editing process (not all the fields have been proofread)",
12 ),
13 ("Published / Complete", "Published / Complete"),
14 ("Published / Proofread pending", "Published / Proofread pending"),
15 ("Unpublished / Editing process", "Unpublished / Editing process"),
16 ("Unpublished / Indexing process", "Unpublished / Indexing process"),
17 ("Unpublished / Proofread pending", "Unpublished / Proofread pending"),
18 ("Unpublished / Proofreading process", "Unpublished / Proofreading process"),
19 ]
20
21 # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list
22 public = models.BooleanField(blank=True, null=True)
23 # sources with visible=False can be accessed by typing in the url, but do not appear in source list
24 visible = models.BooleanField(blank=True, null=True)
25 title = models.CharField(
26 max_length=255,
27 help_text="Full Manuscript Identification (City, Archive, Shelf-mark)",
28 )
29 # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark
30 # it is a human-readable ID for a source
31 siglum = models.CharField(
32 max_length=63,
33 null=True,
34 blank=True,
35 help_text="RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).",
36 )
37 # the RISM siglum uniquely identifies a library or holding institution
38 rism_siglum = models.ForeignKey(
39 "RismSiglum", on_delete=models.PROTECT, null=True, blank=True,
40 )
41 provenance = models.ForeignKey(
42 "Provenance",
43 on_delete=models.PROTECT,
44 help_text="If the origin is unknown, select a location where the source was "
45 "used later in its lifetime and provide details in the "
46 '"Provenance notes" field.',
47 null=True,
48 blank=True,
49 )
50 provenance_notes = models.TextField(
51 blank=True,
52 null=True,
53 help_text="More exact indication of the provenance (if necessary)",
54 )
55 full_source = models.BooleanField(blank=True, null=True)
56 date = models.CharField(
57 blank=True,
58 null=True,
59 max_length=63,
60 help_text='Date of the manuscript (e.g. "1200s", "1300-1350", etc.)',
61 )
62 century = models.ManyToManyField("Century", related_name="sources")
63 notation = models.ManyToManyField("Notation", related_name="sources")
64 cursus = models.CharField(
65 blank=True, null=True, choices=cursus_choices, max_length=63
66 )
67 # TODO: Fill this field up with JSON info when I have access to the Users
68 current_editors = models.ManyToManyField(get_user_model(), related_name="sources_user_can_edit")
69 inventoried_by = models.ManyToManyField(
70 "Indexer", related_name="sources_inventoried"
71 )
72 full_text_entered_by = models.ManyToManyField(
73 "Indexer", related_name="entered_full_text_for_sources"
74 )
75 melodies_entered_by = models.ManyToManyField(
76 "Indexer", related_name="entered_melody_for_sources"
77 )
78 proofreaders = models.ManyToManyField("Indexer", related_name="proofread_sources")
79 other_editors = models.ManyToManyField("Indexer", related_name="edited_sources")
80 segment = models.ForeignKey(
81 "Segment", on_delete=models.PROTECT, blank=True, null=True
82 )
83 source_status = models.CharField(blank=True, null=True, max_length=255)
84 complete_inventory = models.BooleanField(blank=True, null=True)
85 summary = models.TextField(blank=True, null=True)
86 liturgical_occasions = models.TextField(blank=True, null=True)
87 description = models.TextField(blank=True, null=True)
88 selected_bibliography = models.TextField(blank=True, null=True)
89 image_link = models.URLField(
90 blank=True,
91 null=True,
92 help_text='HTTP link to the image gallery of the source.',
93 )
94 indexing_notes = models.TextField(blank=True, null=True)
95 indexing_date = models.TextField(blank=True, null=True)
96 json_info = models.JSONField(blank=True, null=True)
97 fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)
98 dact_id = models.CharField(max_length=15, blank=True, null=True)
99
100 def number_of_chants(self) -> int:
101 """Returns the number of Chants and Sequences in this Source."""
102 return self.chant_set.count() + self.sequence_set.count()
103
104 def number_of_melodies(self) -> int:
105 """Returns the number of Chants in this Source that have melodies."""
106 return self.chant_set.filter(volpiano__isnull=False).count()
107
108 def __str__(self):
109 string = '{t} ({i})'.format(t=self.title, i=self.id)
110 return string
111
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py
--- a/django/cantusdb_project/main_app/models/source.py
+++ b/django/cantusdb_project/main_app/models/source.py
@@ -1,5 +1,5 @@
from django.db import models
-from main_app.models import BaseModel
+from main_app.models import BaseModel, Segment
from django.contrib.auth import get_user_model
@@ -109,4 +109,8 @@
string = '{t} ({i})'.format(t=self.title, i=self.id)
return string
-
\ No newline at end of file
+ def save(self, *args, **kwargs):
+ # when creating a source, assign it to "Cantus Database" by default
+ cantus_db_segment = Segment.objects.get(name="CANTUS Database")
+ self.segment = cantus_db_segment
+ super().save(*args, **kwargs)
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py\n--- a/django/cantusdb_project/main_app/models/source.py\n+++ b/django/cantusdb_project/main_app/models/source.py\n@@ -1,5 +1,5 @@\n from django.db import models\n-from main_app.models import BaseModel\n+from main_app.models import BaseModel, Segment\n from django.contrib.auth import get_user_model\n \n \n@@ -109,4 +109,8 @@\n string = '{t} ({i})'.format(t=self.title, i=self.id)\n return string\n \n- \n\\ No newline at end of file\n+ def save(self, *args, **kwargs):\n+ # when creating a source, assign it to \"Cantus Database\" by default\n+ cantus_db_segment = Segment.objects.get(name=\"CANTUS Database\")\n+ self.segment = cantus_db_segment\n+ super().save(*args, **kwargs)\n", "issue": "Sources should automatically have segments\nFrom #257:\r\n\r\n> A source should always have a segment. It is either \"Cantus Database\" or \"Sequence Database\". It's a foreign key field. In cases where a source doesn't have a segment, it is probably a test source that we created.\r\n> Desired behaviour: when creating a source, assign it to \"Cantus Database\" by default.\n", "before_files": [{"content": "from django.db import models\nfrom main_app.models import BaseModel\nfrom django.contrib.auth import get_user_model\n\n\nclass Source(BaseModel):\n cursus_choices = [(\"Monastic\", \"Monastic\"), (\"Secular\", \"Secular\")]\n source_status_choices = [\n (\n \"Editing process (not all the fields have been proofread)\",\n \"Editing process (not all the fields have been proofread)\",\n ),\n (\"Published / Complete\", \"Published / Complete\"),\n (\"Published / Proofread pending\", \"Published / Proofread pending\"),\n (\"Unpublished / Editing process\", \"Unpublished / Editing process\"),\n (\"Unpublished / Indexing process\", \"Unpublished / Indexing process\"),\n (\"Unpublished / Proofread pending\", \"Unpublished / Proofread pending\"),\n (\"Unpublished / Proofreading process\", \"Unpublished / Proofreading process\"),\n ]\n\n # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list\n public = models.BooleanField(blank=True, null=True)\n # sources with visible=False can be accessed by typing in the url, but do not appear in source list\n visible = models.BooleanField(blank=True, null=True)\n title = models.CharField(\n max_length=255,\n help_text=\"Full Manuscript Identification (City, Archive, Shelf-mark)\",\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n siglum = models.CharField(\n max_length=63, \n null=True, \n blank=True,\n help_text=\"RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).\",\n )\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n )\n provenance = models.ForeignKey(\n \"Provenance\",\n on_delete=models.PROTECT,\n help_text=\"If the origin is unknown, select a location where the source was \"\n \"used later in its lifetime and provide details in the \"\n '\"Provenance notes\" field.',\n null=True,\n blank=True,\n )\n provenance_notes = models.TextField(\n blank=True,\n null=True,\n help_text=\"More exact indication of the provenance (if necessary)\",\n )\n full_source = models.BooleanField(blank=True, null=True)\n date = models.CharField(\n blank=True,\n null=True,\n max_length=63,\n help_text='Date of the manuscript (e.g. \"1200s\", \"1300-1350\", etc.)',\n )\n century = models.ManyToManyField(\"Century\", related_name=\"sources\")\n notation = models.ManyToManyField(\"Notation\", related_name=\"sources\")\n cursus = models.CharField(\n blank=True, null=True, choices=cursus_choices, max_length=63\n )\n # TODO: Fill this field up with JSON info when I have access to the Users\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_user_can_edit\")\n inventoried_by = models.ManyToManyField(\n \"Indexer\", related_name=\"sources_inventoried\"\n )\n full_text_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_full_text_for_sources\"\n )\n melodies_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_melody_for_sources\"\n )\n proofreaders = models.ManyToManyField(\"Indexer\", related_name=\"proofread_sources\")\n other_editors = models.ManyToManyField(\"Indexer\", related_name=\"edited_sources\")\n segment = models.ForeignKey(\n \"Segment\", on_delete=models.PROTECT, blank=True, null=True\n )\n source_status = models.CharField(blank=True, null=True, max_length=255)\n complete_inventory = models.BooleanField(blank=True, null=True)\n summary = models.TextField(blank=True, null=True)\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n image_link = models.URLField(\n blank=True, \n null=True,\n help_text='HTTP link to the image gallery of the source.',\n )\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)\n dact_id = models.CharField(max_length=15, blank=True, null=True)\n\n def number_of_chants(self) -> int:\n \"\"\"Returns the number of Chants and Sequences in this Source.\"\"\"\n return self.chant_set.count() + self.sequence_set.count()\n\n def number_of_melodies(self) -> int:\n \"\"\"Returns the number of Chants in this Source that have melodies.\"\"\"\n return self.chant_set.filter(volpiano__isnull=False).count()\n\n def __str__(self):\n string = '{t} ({i})'.format(t=self.title, i=self.id)\n return string\n\n ", "path": "django/cantusdb_project/main_app/models/source.py"}], "after_files": [{"content": "from django.db import models\nfrom main_app.models import BaseModel, Segment\nfrom django.contrib.auth import get_user_model\n\n\nclass Source(BaseModel):\n cursus_choices = [(\"Monastic\", \"Monastic\"), (\"Secular\", \"Secular\")]\n source_status_choices = [\n (\n \"Editing process (not all the fields have been proofread)\",\n \"Editing process (not all the fields have been proofread)\",\n ),\n (\"Published / Complete\", \"Published / Complete\"),\n (\"Published / Proofread pending\", \"Published / Proofread pending\"),\n (\"Unpublished / Editing process\", \"Unpublished / Editing process\"),\n (\"Unpublished / Indexing process\", \"Unpublished / Indexing process\"),\n (\"Unpublished / Proofread pending\", \"Unpublished / Proofread pending\"),\n (\"Unpublished / Proofreading process\", \"Unpublished / Proofreading process\"),\n ]\n\n # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list\n public = models.BooleanField(blank=True, null=True)\n # sources with visible=False can be accessed by typing in the url, but do not appear in source list\n visible = models.BooleanField(blank=True, null=True)\n title = models.CharField(\n max_length=255,\n help_text=\"Full Manuscript Identification (City, Archive, Shelf-mark)\",\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n siglum = models.CharField(\n max_length=63, \n null=True, \n blank=True,\n help_text=\"RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).\",\n )\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n )\n provenance = models.ForeignKey(\n \"Provenance\",\n on_delete=models.PROTECT,\n help_text=\"If the origin is unknown, select a location where the source was \"\n \"used later in its lifetime and provide details in the \"\n '\"Provenance notes\" field.',\n null=True,\n blank=True,\n )\n provenance_notes = models.TextField(\n blank=True,\n null=True,\n help_text=\"More exact indication of the provenance (if necessary)\",\n )\n full_source = models.BooleanField(blank=True, null=True)\n date = models.CharField(\n blank=True,\n null=True,\n max_length=63,\n help_text='Date of the manuscript (e.g. \"1200s\", \"1300-1350\", etc.)',\n )\n century = models.ManyToManyField(\"Century\", related_name=\"sources\")\n notation = models.ManyToManyField(\"Notation\", related_name=\"sources\")\n cursus = models.CharField(\n blank=True, null=True, choices=cursus_choices, max_length=63\n )\n # TODO: Fill this field up with JSON info when I have access to the Users\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_user_can_edit\")\n inventoried_by = models.ManyToManyField(\n \"Indexer\", related_name=\"sources_inventoried\"\n )\n full_text_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_full_text_for_sources\"\n )\n melodies_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_melody_for_sources\"\n )\n proofreaders = models.ManyToManyField(\"Indexer\", related_name=\"proofread_sources\")\n other_editors = models.ManyToManyField(\"Indexer\", related_name=\"edited_sources\")\n segment = models.ForeignKey(\n \"Segment\", on_delete=models.PROTECT, blank=True, null=True\n )\n source_status = models.CharField(blank=True, null=True, max_length=255)\n complete_inventory = models.BooleanField(blank=True, null=True)\n summary = models.TextField(blank=True, null=True)\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n image_link = models.URLField(\n blank=True, \n null=True,\n help_text='HTTP link to the image gallery of the source.',\n )\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)\n dact_id = models.CharField(max_length=15, blank=True, null=True)\n\n def number_of_chants(self) -> int:\n \"\"\"Returns the number of Chants and Sequences in this Source.\"\"\"\n return self.chant_set.count() + self.sequence_set.count()\n\n def number_of_melodies(self) -> int:\n \"\"\"Returns the number of Chants in this Source that have melodies.\"\"\"\n return self.chant_set.filter(volpiano__isnull=False).count()\n\n def __str__(self):\n string = '{t} ({i})'.format(t=self.title, i=self.id)\n return string\n\n def save(self, *args, **kwargs):\n # when creating a source, assign it to \"Cantus Database\" by default\n cantus_db_segment = Segment.objects.get(name=\"CANTUS Database\")\n self.segment = cantus_db_segment\n super().save(*args, **kwargs)\n", "path": "django/cantusdb_project/main_app/models/source.py"}]}
| 1,709 | 220 |
gh_patches_debug_41106
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-56522
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Discord `alert.sent` event
start sending `alert.sent` analytic events for discord
add category to existing notification_sent event
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/rules/actions/integrations/base.py`
Content:
```
1 from __future__ import annotations
2
3 import abc
4 from typing import List
5
6 from django import forms
7
8 from sentry.models import OrganizationStatus
9 from sentry.rules.actions import EventAction
10 from sentry.services.hybrid_cloud.integration import (
11 RpcIntegration,
12 RpcOrganizationIntegration,
13 integration_service,
14 )
15
16 INTEGRATION_KEY = "integration"
17
18
19 class IntegrationEventAction(EventAction, abc.ABC):
20 """Intermediate abstract class to help DRY some event actions code."""
21
22 @property
23 @abc.abstractmethod
24 def prompt(self) -> str:
25 pass
26
27 @property
28 @abc.abstractmethod
29 def provider(self) -> str:
30 pass
31
32 @property
33 @abc.abstractmethod
34 def integration_key(self) -> str:
35 pass
36
37 def is_enabled(self) -> bool:
38 enabled: bool = bool(self.get_integrations())
39 return enabled
40
41 def get_integration_name(self) -> str:
42 """Get the integration's name for the label."""
43 integration = self.get_integration()
44 if not integration:
45 return "[removed]"
46
47 _name: str = integration.name
48 return _name
49
50 def get_integrations(self) -> List[RpcIntegration]:
51 return integration_service.get_integrations(
52 organization_id=self.project.organization_id,
53 status=OrganizationStatus.ACTIVE,
54 org_integration_status=OrganizationStatus.ACTIVE,
55 providers=[self.provider],
56 )
57
58 def get_integration_id(self) -> int:
59 integration_id: str | None = self.get_option(self.integration_key)
60 if integration_id:
61 return int(integration_id)
62 return 0
63
64 def get_integration(self) -> RpcIntegration | None:
65 """
66 Uses the required class variables `provider` and `integration_key` with
67 RuleBase.get_option to get the integration object from DB.
68 """
69 for integration in integration_service.get_integrations(
70 organization_id=self.project.organization_id,
71 status=OrganizationStatus.ACTIVE,
72 org_integration_status=OrganizationStatus.ACTIVE,
73 providers=[self.provider],
74 ):
75 if integration.id == self.get_integration_id():
76 return integration
77 return None
78
79 def get_organization_integration(self) -> RpcOrganizationIntegration | None:
80 return integration_service.get_organization_integration(
81 integration_id=self.get_integration_id(), organization_id=self.project.organization_id
82 )
83
84 def get_form_instance(self) -> forms.Form:
85 return self.form_cls(self.data, integrations=self.get_integrations())
86
```
Path: `src/sentry/integrations/discord/analytics.py`
Content:
```
1 from sentry import analytics
2
3
4 class DiscordIntegrationNotificationSent(analytics.Event):
5 type = "integrations.discord.notification_sent"
6
7 attributes = (
8 analytics.Attribute("organization_id"),
9 analytics.Attribute("project_id"),
10 analytics.Attribute("group_id"),
11 analytics.Attribute("notification_uuid"),
12 analytics.Attribute("alert_id", required=False),
13 )
14
15
16 class DiscordIntegrationCommandInteractionReceived(analytics.Event):
17 type = "integrations.discord.command_interaction"
18
19 attributes = (analytics.Attribute("command_name"),)
20
21
22 class DiscordIntegrationIdentityLinked(analytics.Event):
23 type = "integrations.discord.identity_linked"
24
25 attributes = (
26 analytics.Attribute("provider"),
27 analytics.Attribute("actor_id"),
28 analytics.Attribute("actor_type"),
29 )
30
31
32 class DiscordIntegrationIdentityUnlinked(analytics.Event):
33 type = "integrations.discord.identity_unlinked"
34
35 attributes = (
36 analytics.Attribute("provider"),
37 analytics.Attribute("actor_id"),
38 analytics.Attribute("actor_type"),
39 )
40
41
42 class DiscordIntegrationMessageInteractionReceived(analytics.Event):
43 type = "integrations.discord.message_interaction"
44
45 attributes = (analytics.Attribute("custom_id"),)
46
47
48 class DiscordIntegrationAssign(analytics.Event):
49 type = "integrations.discord.assign"
50
51 attributes = (analytics.Attribute("actor_id"),)
52
53
54 class DiscordIntegrationStatus(analytics.Event):
55 type = "integrations.discord.status"
56
57 attributes = (
58 analytics.Attribute("organization_id"),
59 analytics.Attribute("user_id"),
60 analytics.Attribute("status"),
61 )
62
63
64 analytics.register(DiscordIntegrationNotificationSent)
65 analytics.register(DiscordIntegrationCommandInteractionReceived)
66 analytics.register(DiscordIntegrationIdentityLinked)
67 analytics.register(DiscordIntegrationIdentityUnlinked)
68 analytics.register(DiscordIntegrationMessageInteractionReceived)
69 analytics.register(DiscordIntegrationAssign)
70 analytics.register(DiscordIntegrationStatus)
71
```
Path: `src/sentry/integrations/discord/actions/notification.py`
Content:
```
1 from typing import Any, Generator, Optional, Sequence
2
3 from sentry import analytics, features
4 from sentry.eventstore.models import GroupEvent
5 from sentry.integrations.discord.actions.form import DiscordNotifyServiceForm
6 from sentry.integrations.discord.client import DiscordClient
7 from sentry.integrations.discord.message_builder.issues import DiscordIssuesMessageBuilder
8 from sentry.rules.actions import IntegrationEventAction
9 from sentry.rules.base import CallbackFuture, EventState
10 from sentry.shared_integrations.exceptions.base import ApiError
11 from sentry.types.rules import RuleFuture
12 from sentry.utils import metrics
13
14
15 class DiscordNotifyServiceAction(IntegrationEventAction):
16 id = "sentry.integrations.discord.notify_action.DiscordNotifyServiceAction"
17 form_cls = DiscordNotifyServiceForm
18 label = "Send a notification to the {server} Discord server in the channel with ID: {channel_id} and show tags {tags} in the notification."
19 prompt = "Send a Discord notification"
20 provider = "discord"
21 integration_key = "server"
22
23 def __init__(self, *args: Any, **kwargs: Any) -> None:
24 super().__init__(*args, **kwargs)
25 self.form_fields = {
26 "server": {
27 "type": "choice",
28 "choices": [(i.id, i.name) for i in self.get_integrations()],
29 },
30 "channel_id": {"type": "string", "placeholder": "e.g., 1134274732116676679"},
31 "tags": {"type": "string", "placeholder": "e.g., environment,user,my_tag"},
32 }
33
34 def after(
35 self, event: GroupEvent, state: EventState, notification_uuid: Optional[str] = None
36 ) -> Generator[CallbackFuture, None, None]:
37 channel_id = self.get_option("channel_id")
38 tags = set(self.get_tags_list())
39
40 integration = self.get_integration()
41 if not integration:
42 # Integration removed, but rule still active
43 return
44
45 def send_notification(event: GroupEvent, futures: Sequence[RuleFuture]) -> None:
46 if not features.has(
47 "organizations:integrations-discord-notifications", event.organization
48 ):
49 return
50
51 rules = [f.rule for f in futures]
52 message = DiscordIssuesMessageBuilder(event.group, event=event, tags=tags, rules=rules)
53
54 client = DiscordClient(integration_id=integration.id)
55 try:
56 client.send_message(channel_id, message, notification_uuid=notification_uuid)
57 except ApiError as e:
58 self.logger.error(
59 "rule.fail.discord_post",
60 extra={
61 "error": str(e),
62 "project_id": event.project_id,
63 "event_id": event.event_id,
64 "guild_id": integration.external_id,
65 "channel_id": channel_id,
66 },
67 )
68
69 key = f"discord:{integration.id}:{channel_id}"
70
71 analytics.record(
72 "integrations.discord.notification_sent",
73 organization_id=event.organization.id,
74 project_id=event.project_id,
75 group_id=event.group_id,
76 notification_uuid=notification_uuid if notification_uuid else "",
77 )
78 metrics.incr("notifications.sent", instance="discord.notifications", skip_internal=False)
79 yield self.future(send_notification, key=key)
80
81 def render_label(self) -> str:
82 tags = self.get_tags_list()
83
84 return self.label.format(
85 server=self.get_integration_name(),
86 channel_id=self.get_option("channel_id"),
87 tags="[{}]".format(", ".join(tags)),
88 )
89
90 def get_tags_list(self) -> Sequence[str]:
91 return [s.strip() for s in self.get_option("tags", "").split(",")]
92
93 def get_form_instance(self) -> Any:
94 return self.form_cls(self.data, integrations=self.get_integrations())
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/integrations/discord/actions/notification.py b/src/sentry/integrations/discord/actions/notification.py
--- a/src/sentry/integrations/discord/actions/notification.py
+++ b/src/sentry/integrations/discord/actions/notification.py
@@ -1,6 +1,6 @@
from typing import Any, Generator, Optional, Sequence
-from sentry import analytics, features
+from sentry import features
from sentry.eventstore.models import GroupEvent
from sentry.integrations.discord.actions.form import DiscordNotifyServiceForm
from sentry.integrations.discord.client import DiscordClient
@@ -65,16 +65,11 @@
"channel_id": channel_id,
},
)
+ rule = rules[0] if rules else None
+ self.record_notification_sent(event, channel_id, rule, notification_uuid)
key = f"discord:{integration.id}:{channel_id}"
- analytics.record(
- "integrations.discord.notification_sent",
- organization_id=event.organization.id,
- project_id=event.project_id,
- group_id=event.group_id,
- notification_uuid=notification_uuid if notification_uuid else "",
- )
metrics.incr("notifications.sent", instance="discord.notifications", skip_internal=False)
yield self.future(send_notification, key=key)
diff --git a/src/sentry/integrations/discord/analytics.py b/src/sentry/integrations/discord/analytics.py
--- a/src/sentry/integrations/discord/analytics.py
+++ b/src/sentry/integrations/discord/analytics.py
@@ -7,6 +7,7 @@
attributes = (
analytics.Attribute("organization_id"),
analytics.Attribute("project_id"),
+ analytics.Attribute("category"),
analytics.Attribute("group_id"),
analytics.Attribute("notification_uuid"),
analytics.Attribute("alert_id", required=False),
diff --git a/src/sentry/rules/actions/integrations/base.py b/src/sentry/rules/actions/integrations/base.py
--- a/src/sentry/rules/actions/integrations/base.py
+++ b/src/sentry/rules/actions/integrations/base.py
@@ -5,7 +5,9 @@
from django import forms
-from sentry.models import OrganizationStatus
+from sentry import analytics
+from sentry.eventstore.models import GroupEvent
+from sentry.models import OrganizationStatus, Rule
from sentry.rules.actions import EventAction
from sentry.services.hybrid_cloud.integration import (
RpcIntegration,
@@ -83,3 +85,31 @@
def get_form_instance(self) -> forms.Form:
return self.form_cls(self.data, integrations=self.get_integrations())
+
+ def record_notification_sent(
+ self,
+ event: GroupEvent,
+ external_id: str,
+ rule: Rule | None = None,
+ notification_uuid: str | None = None,
+ ) -> None:
+ # Currently these actions can only be triggered by issue alerts
+ analytics.record(
+ f"integrations.{self.provider}.notification_sent",
+ category="issue_alert",
+ organization_id=event.organization.id,
+ project_id=event.project_id,
+ group_id=event.group_id,
+ notification_uuid=notification_uuid if notification_uuid else "",
+ alert_id=rule.id if rule else None,
+ )
+ analytics.record(
+ "alert.sent",
+ provider=self.provider,
+ alert_id=rule.id if rule else "",
+ alert_type="issue_alert",
+ organization_id=event.organization.id,
+ project_id=event.project_id,
+ external_id=external_id,
+ notification_uuid=notification_uuid if notification_uuid else "",
+ )
|
{"golden_diff": "diff --git a/src/sentry/integrations/discord/actions/notification.py b/src/sentry/integrations/discord/actions/notification.py\n--- a/src/sentry/integrations/discord/actions/notification.py\n+++ b/src/sentry/integrations/discord/actions/notification.py\n@@ -1,6 +1,6 @@\n from typing import Any, Generator, Optional, Sequence\n \n-from sentry import analytics, features\n+from sentry import features\n from sentry.eventstore.models import GroupEvent\n from sentry.integrations.discord.actions.form import DiscordNotifyServiceForm\n from sentry.integrations.discord.client import DiscordClient\n@@ -65,16 +65,11 @@\n \"channel_id\": channel_id,\n },\n )\n+ rule = rules[0] if rules else None\n+ self.record_notification_sent(event, channel_id, rule, notification_uuid)\n \n key = f\"discord:{integration.id}:{channel_id}\"\n \n- analytics.record(\n- \"integrations.discord.notification_sent\",\n- organization_id=event.organization.id,\n- project_id=event.project_id,\n- group_id=event.group_id,\n- notification_uuid=notification_uuid if notification_uuid else \"\",\n- )\n metrics.incr(\"notifications.sent\", instance=\"discord.notifications\", skip_internal=False)\n yield self.future(send_notification, key=key)\n \ndiff --git a/src/sentry/integrations/discord/analytics.py b/src/sentry/integrations/discord/analytics.py\n--- a/src/sentry/integrations/discord/analytics.py\n+++ b/src/sentry/integrations/discord/analytics.py\n@@ -7,6 +7,7 @@\n attributes = (\n analytics.Attribute(\"organization_id\"),\n analytics.Attribute(\"project_id\"),\n+ analytics.Attribute(\"category\"),\n analytics.Attribute(\"group_id\"),\n analytics.Attribute(\"notification_uuid\"),\n analytics.Attribute(\"alert_id\", required=False),\ndiff --git a/src/sentry/rules/actions/integrations/base.py b/src/sentry/rules/actions/integrations/base.py\n--- a/src/sentry/rules/actions/integrations/base.py\n+++ b/src/sentry/rules/actions/integrations/base.py\n@@ -5,7 +5,9 @@\n \n from django import forms\n \n-from sentry.models import OrganizationStatus\n+from sentry import analytics\n+from sentry.eventstore.models import GroupEvent\n+from sentry.models import OrganizationStatus, Rule\n from sentry.rules.actions import EventAction\n from sentry.services.hybrid_cloud.integration import (\n RpcIntegration,\n@@ -83,3 +85,31 @@\n \n def get_form_instance(self) -> forms.Form:\n return self.form_cls(self.data, integrations=self.get_integrations())\n+\n+ def record_notification_sent(\n+ self,\n+ event: GroupEvent,\n+ external_id: str,\n+ rule: Rule | None = None,\n+ notification_uuid: str | None = None,\n+ ) -> None:\n+ # Currently these actions can only be triggered by issue alerts\n+ analytics.record(\n+ f\"integrations.{self.provider}.notification_sent\",\n+ category=\"issue_alert\",\n+ organization_id=event.organization.id,\n+ project_id=event.project_id,\n+ group_id=event.group_id,\n+ notification_uuid=notification_uuid if notification_uuid else \"\",\n+ alert_id=rule.id if rule else None,\n+ )\n+ analytics.record(\n+ \"alert.sent\",\n+ provider=self.provider,\n+ alert_id=rule.id if rule else \"\",\n+ alert_type=\"issue_alert\",\n+ organization_id=event.organization.id,\n+ project_id=event.project_id,\n+ external_id=external_id,\n+ notification_uuid=notification_uuid if notification_uuid else \"\",\n+ )\n", "issue": "Discord `alert.sent` event\nstart sending `alert.sent` analytic events for discord\n\nadd category to existing notification_sent event\n", "before_files": [{"content": "from __future__ import annotations\n\nimport abc\nfrom typing import List\n\nfrom django import forms\n\nfrom sentry.models import OrganizationStatus\nfrom sentry.rules.actions import EventAction\nfrom sentry.services.hybrid_cloud.integration import (\n RpcIntegration,\n RpcOrganizationIntegration,\n integration_service,\n)\n\nINTEGRATION_KEY = \"integration\"\n\n\nclass IntegrationEventAction(EventAction, abc.ABC):\n \"\"\"Intermediate abstract class to help DRY some event actions code.\"\"\"\n\n @property\n @abc.abstractmethod\n def prompt(self) -> str:\n pass\n\n @property\n @abc.abstractmethod\n def provider(self) -> str:\n pass\n\n @property\n @abc.abstractmethod\n def integration_key(self) -> str:\n pass\n\n def is_enabled(self) -> bool:\n enabled: bool = bool(self.get_integrations())\n return enabled\n\n def get_integration_name(self) -> str:\n \"\"\"Get the integration's name for the label.\"\"\"\n integration = self.get_integration()\n if not integration:\n return \"[removed]\"\n\n _name: str = integration.name\n return _name\n\n def get_integrations(self) -> List[RpcIntegration]:\n return integration_service.get_integrations(\n organization_id=self.project.organization_id,\n status=OrganizationStatus.ACTIVE,\n org_integration_status=OrganizationStatus.ACTIVE,\n providers=[self.provider],\n )\n\n def get_integration_id(self) -> int:\n integration_id: str | None = self.get_option(self.integration_key)\n if integration_id:\n return int(integration_id)\n return 0\n\n def get_integration(self) -> RpcIntegration | None:\n \"\"\"\n Uses the required class variables `provider` and `integration_key` with\n RuleBase.get_option to get the integration object from DB.\n \"\"\"\n for integration in integration_service.get_integrations(\n organization_id=self.project.organization_id,\n status=OrganizationStatus.ACTIVE,\n org_integration_status=OrganizationStatus.ACTIVE,\n providers=[self.provider],\n ):\n if integration.id == self.get_integration_id():\n return integration\n return None\n\n def get_organization_integration(self) -> RpcOrganizationIntegration | None:\n return integration_service.get_organization_integration(\n integration_id=self.get_integration_id(), organization_id=self.project.organization_id\n )\n\n def get_form_instance(self) -> forms.Form:\n return self.form_cls(self.data, integrations=self.get_integrations())\n", "path": "src/sentry/rules/actions/integrations/base.py"}, {"content": "from sentry import analytics\n\n\nclass DiscordIntegrationNotificationSent(analytics.Event):\n type = \"integrations.discord.notification_sent\"\n\n attributes = (\n analytics.Attribute(\"organization_id\"),\n analytics.Attribute(\"project_id\"),\n analytics.Attribute(\"group_id\"),\n analytics.Attribute(\"notification_uuid\"),\n analytics.Attribute(\"alert_id\", required=False),\n )\n\n\nclass DiscordIntegrationCommandInteractionReceived(analytics.Event):\n type = \"integrations.discord.command_interaction\"\n\n attributes = (analytics.Attribute(\"command_name\"),)\n\n\nclass DiscordIntegrationIdentityLinked(analytics.Event):\n type = \"integrations.discord.identity_linked\"\n\n attributes = (\n analytics.Attribute(\"provider\"),\n analytics.Attribute(\"actor_id\"),\n analytics.Attribute(\"actor_type\"),\n )\n\n\nclass DiscordIntegrationIdentityUnlinked(analytics.Event):\n type = \"integrations.discord.identity_unlinked\"\n\n attributes = (\n analytics.Attribute(\"provider\"),\n analytics.Attribute(\"actor_id\"),\n analytics.Attribute(\"actor_type\"),\n )\n\n\nclass DiscordIntegrationMessageInteractionReceived(analytics.Event):\n type = \"integrations.discord.message_interaction\"\n\n attributes = (analytics.Attribute(\"custom_id\"),)\n\n\nclass DiscordIntegrationAssign(analytics.Event):\n type = \"integrations.discord.assign\"\n\n attributes = (analytics.Attribute(\"actor_id\"),)\n\n\nclass DiscordIntegrationStatus(analytics.Event):\n type = \"integrations.discord.status\"\n\n attributes = (\n analytics.Attribute(\"organization_id\"),\n analytics.Attribute(\"user_id\"),\n analytics.Attribute(\"status\"),\n )\n\n\nanalytics.register(DiscordIntegrationNotificationSent)\nanalytics.register(DiscordIntegrationCommandInteractionReceived)\nanalytics.register(DiscordIntegrationIdentityLinked)\nanalytics.register(DiscordIntegrationIdentityUnlinked)\nanalytics.register(DiscordIntegrationMessageInteractionReceived)\nanalytics.register(DiscordIntegrationAssign)\nanalytics.register(DiscordIntegrationStatus)\n", "path": "src/sentry/integrations/discord/analytics.py"}, {"content": "from typing import Any, Generator, Optional, Sequence\n\nfrom sentry import analytics, features\nfrom sentry.eventstore.models import GroupEvent\nfrom sentry.integrations.discord.actions.form import DiscordNotifyServiceForm\nfrom sentry.integrations.discord.client import DiscordClient\nfrom sentry.integrations.discord.message_builder.issues import DiscordIssuesMessageBuilder\nfrom sentry.rules.actions import IntegrationEventAction\nfrom sentry.rules.base import CallbackFuture, EventState\nfrom sentry.shared_integrations.exceptions.base import ApiError\nfrom sentry.types.rules import RuleFuture\nfrom sentry.utils import metrics\n\n\nclass DiscordNotifyServiceAction(IntegrationEventAction):\n id = \"sentry.integrations.discord.notify_action.DiscordNotifyServiceAction\"\n form_cls = DiscordNotifyServiceForm\n label = \"Send a notification to the {server} Discord server in the channel with ID: {channel_id} and show tags {tags} in the notification.\"\n prompt = \"Send a Discord notification\"\n provider = \"discord\"\n integration_key = \"server\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n super().__init__(*args, **kwargs)\n self.form_fields = {\n \"server\": {\n \"type\": \"choice\",\n \"choices\": [(i.id, i.name) for i in self.get_integrations()],\n },\n \"channel_id\": {\"type\": \"string\", \"placeholder\": \"e.g., 1134274732116676679\"},\n \"tags\": {\"type\": \"string\", \"placeholder\": \"e.g., environment,user,my_tag\"},\n }\n\n def after(\n self, event: GroupEvent, state: EventState, notification_uuid: Optional[str] = None\n ) -> Generator[CallbackFuture, None, None]:\n channel_id = self.get_option(\"channel_id\")\n tags = set(self.get_tags_list())\n\n integration = self.get_integration()\n if not integration:\n # Integration removed, but rule still active\n return\n\n def send_notification(event: GroupEvent, futures: Sequence[RuleFuture]) -> None:\n if not features.has(\n \"organizations:integrations-discord-notifications\", event.organization\n ):\n return\n\n rules = [f.rule for f in futures]\n message = DiscordIssuesMessageBuilder(event.group, event=event, tags=tags, rules=rules)\n\n client = DiscordClient(integration_id=integration.id)\n try:\n client.send_message(channel_id, message, notification_uuid=notification_uuid)\n except ApiError as e:\n self.logger.error(\n \"rule.fail.discord_post\",\n extra={\n \"error\": str(e),\n \"project_id\": event.project_id,\n \"event_id\": event.event_id,\n \"guild_id\": integration.external_id,\n \"channel_id\": channel_id,\n },\n )\n\n key = f\"discord:{integration.id}:{channel_id}\"\n\n analytics.record(\n \"integrations.discord.notification_sent\",\n organization_id=event.organization.id,\n project_id=event.project_id,\n group_id=event.group_id,\n notification_uuid=notification_uuid if notification_uuid else \"\",\n )\n metrics.incr(\"notifications.sent\", instance=\"discord.notifications\", skip_internal=False)\n yield self.future(send_notification, key=key)\n\n def render_label(self) -> str:\n tags = self.get_tags_list()\n\n return self.label.format(\n server=self.get_integration_name(),\n channel_id=self.get_option(\"channel_id\"),\n tags=\"[{}]\".format(\", \".join(tags)),\n )\n\n def get_tags_list(self) -> Sequence[str]:\n return [s.strip() for s in self.get_option(\"tags\", \"\").split(\",\")]\n\n def get_form_instance(self) -> Any:\n return self.form_cls(self.data, integrations=self.get_integrations())\n", "path": "src/sentry/integrations/discord/actions/notification.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport abc\nfrom typing import List\n\nfrom django import forms\n\nfrom sentry import analytics\nfrom sentry.eventstore.models import GroupEvent\nfrom sentry.models import OrganizationStatus, Rule\nfrom sentry.rules.actions import EventAction\nfrom sentry.services.hybrid_cloud.integration import (\n RpcIntegration,\n RpcOrganizationIntegration,\n integration_service,\n)\n\nINTEGRATION_KEY = \"integration\"\n\n\nclass IntegrationEventAction(EventAction, abc.ABC):\n \"\"\"Intermediate abstract class to help DRY some event actions code.\"\"\"\n\n @property\n @abc.abstractmethod\n def prompt(self) -> str:\n pass\n\n @property\n @abc.abstractmethod\n def provider(self) -> str:\n pass\n\n @property\n @abc.abstractmethod\n def integration_key(self) -> str:\n pass\n\n def is_enabled(self) -> bool:\n enabled: bool = bool(self.get_integrations())\n return enabled\n\n def get_integration_name(self) -> str:\n \"\"\"Get the integration's name for the label.\"\"\"\n integration = self.get_integration()\n if not integration:\n return \"[removed]\"\n\n _name: str = integration.name\n return _name\n\n def get_integrations(self) -> List[RpcIntegration]:\n return integration_service.get_integrations(\n organization_id=self.project.organization_id,\n status=OrganizationStatus.ACTIVE,\n org_integration_status=OrganizationStatus.ACTIVE,\n providers=[self.provider],\n )\n\n def get_integration_id(self) -> int:\n integration_id: str | None = self.get_option(self.integration_key)\n if integration_id:\n return int(integration_id)\n return 0\n\n def get_integration(self) -> RpcIntegration | None:\n \"\"\"\n Uses the required class variables `provider` and `integration_key` with\n RuleBase.get_option to get the integration object from DB.\n \"\"\"\n for integration in integration_service.get_integrations(\n organization_id=self.project.organization_id,\n status=OrganizationStatus.ACTIVE,\n org_integration_status=OrganizationStatus.ACTIVE,\n providers=[self.provider],\n ):\n if integration.id == self.get_integration_id():\n return integration\n return None\n\n def get_organization_integration(self) -> RpcOrganizationIntegration | None:\n return integration_service.get_organization_integration(\n integration_id=self.get_integration_id(), organization_id=self.project.organization_id\n )\n\n def get_form_instance(self) -> forms.Form:\n return self.form_cls(self.data, integrations=self.get_integrations())\n\n def record_notification_sent(\n self,\n event: GroupEvent,\n external_id: str,\n rule: Rule | None = None,\n notification_uuid: str | None = None,\n ) -> None:\n # Currently these actions can only be triggered by issue alerts\n analytics.record(\n f\"integrations.{self.provider}.notification_sent\",\n category=\"issue_alert\",\n organization_id=event.organization.id,\n project_id=event.project_id,\n group_id=event.group_id,\n notification_uuid=notification_uuid if notification_uuid else \"\",\n alert_id=rule.id if rule else None,\n )\n analytics.record(\n \"alert.sent\",\n provider=self.provider,\n alert_id=rule.id if rule else \"\",\n alert_type=\"issue_alert\",\n organization_id=event.organization.id,\n project_id=event.project_id,\n external_id=external_id,\n notification_uuid=notification_uuid if notification_uuid else \"\",\n )\n", "path": "src/sentry/rules/actions/integrations/base.py"}, {"content": "from sentry import analytics\n\n\nclass DiscordIntegrationNotificationSent(analytics.Event):\n type = \"integrations.discord.notification_sent\"\n\n attributes = (\n analytics.Attribute(\"organization_id\"),\n analytics.Attribute(\"project_id\"),\n analytics.Attribute(\"category\"),\n analytics.Attribute(\"group_id\"),\n analytics.Attribute(\"notification_uuid\"),\n analytics.Attribute(\"alert_id\", required=False),\n )\n\n\nclass DiscordIntegrationCommandInteractionReceived(analytics.Event):\n type = \"integrations.discord.command_interaction\"\n\n attributes = (analytics.Attribute(\"command_name\"),)\n\n\nclass DiscordIntegrationIdentityLinked(analytics.Event):\n type = \"integrations.discord.identity_linked\"\n\n attributes = (\n analytics.Attribute(\"provider\"),\n analytics.Attribute(\"actor_id\"),\n analytics.Attribute(\"actor_type\"),\n )\n\n\nclass DiscordIntegrationIdentityUnlinked(analytics.Event):\n type = \"integrations.discord.identity_unlinked\"\n\n attributes = (\n analytics.Attribute(\"provider\"),\n analytics.Attribute(\"actor_id\"),\n analytics.Attribute(\"actor_type\"),\n )\n\n\nclass DiscordIntegrationMessageInteractionReceived(analytics.Event):\n type = \"integrations.discord.message_interaction\"\n\n attributes = (analytics.Attribute(\"custom_id\"),)\n\n\nclass DiscordIntegrationAssign(analytics.Event):\n type = \"integrations.discord.assign\"\n\n attributes = (analytics.Attribute(\"actor_id\"),)\n\n\nclass DiscordIntegrationStatus(analytics.Event):\n type = \"integrations.discord.status\"\n\n attributes = (\n analytics.Attribute(\"organization_id\"),\n analytics.Attribute(\"user_id\"),\n analytics.Attribute(\"status\"),\n )\n\n\nanalytics.register(DiscordIntegrationNotificationSent)\nanalytics.register(DiscordIntegrationCommandInteractionReceived)\nanalytics.register(DiscordIntegrationIdentityLinked)\nanalytics.register(DiscordIntegrationIdentityUnlinked)\nanalytics.register(DiscordIntegrationMessageInteractionReceived)\nanalytics.register(DiscordIntegrationAssign)\nanalytics.register(DiscordIntegrationStatus)\n", "path": "src/sentry/integrations/discord/analytics.py"}, {"content": "from typing import Any, Generator, Optional, Sequence\n\nfrom sentry import features\nfrom sentry.eventstore.models import GroupEvent\nfrom sentry.integrations.discord.actions.form import DiscordNotifyServiceForm\nfrom sentry.integrations.discord.client import DiscordClient\nfrom sentry.integrations.discord.message_builder.issues import DiscordIssuesMessageBuilder\nfrom sentry.rules.actions import IntegrationEventAction\nfrom sentry.rules.base import CallbackFuture, EventState\nfrom sentry.shared_integrations.exceptions.base import ApiError\nfrom sentry.types.rules import RuleFuture\nfrom sentry.utils import metrics\n\n\nclass DiscordNotifyServiceAction(IntegrationEventAction):\n id = \"sentry.integrations.discord.notify_action.DiscordNotifyServiceAction\"\n form_cls = DiscordNotifyServiceForm\n label = \"Send a notification to the {server} Discord server in the channel with ID: {channel_id} and show tags {tags} in the notification.\"\n prompt = \"Send a Discord notification\"\n provider = \"discord\"\n integration_key = \"server\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n super().__init__(*args, **kwargs)\n self.form_fields = {\n \"server\": {\n \"type\": \"choice\",\n \"choices\": [(i.id, i.name) for i in self.get_integrations()],\n },\n \"channel_id\": {\"type\": \"string\", \"placeholder\": \"e.g., 1134274732116676679\"},\n \"tags\": {\"type\": \"string\", \"placeholder\": \"e.g., environment,user,my_tag\"},\n }\n\n def after(\n self, event: GroupEvent, state: EventState, notification_uuid: Optional[str] = None\n ) -> Generator[CallbackFuture, None, None]:\n channel_id = self.get_option(\"channel_id\")\n tags = set(self.get_tags_list())\n\n integration = self.get_integration()\n if not integration:\n # Integration removed, but rule still active\n return\n\n def send_notification(event: GroupEvent, futures: Sequence[RuleFuture]) -> None:\n if not features.has(\n \"organizations:integrations-discord-notifications\", event.organization\n ):\n return\n\n rules = [f.rule for f in futures]\n message = DiscordIssuesMessageBuilder(event.group, event=event, tags=tags, rules=rules)\n\n client = DiscordClient(integration_id=integration.id)\n try:\n client.send_message(channel_id, message, notification_uuid=notification_uuid)\n except ApiError as e:\n self.logger.error(\n \"rule.fail.discord_post\",\n extra={\n \"error\": str(e),\n \"project_id\": event.project_id,\n \"event_id\": event.event_id,\n \"guild_id\": integration.external_id,\n \"channel_id\": channel_id,\n },\n )\n rule = rules[0] if rules else None\n self.record_notification_sent(event, channel_id, rule, notification_uuid)\n\n key = f\"discord:{integration.id}:{channel_id}\"\n\n metrics.incr(\"notifications.sent\", instance=\"discord.notifications\", skip_internal=False)\n yield self.future(send_notification, key=key)\n\n def render_label(self) -> str:\n tags = self.get_tags_list()\n\n return self.label.format(\n server=self.get_integration_name(),\n channel_id=self.get_option(\"channel_id\"),\n tags=\"[{}]\".format(\", \".join(tags)),\n )\n\n def get_tags_list(self) -> Sequence[str]:\n return [s.strip() for s in self.get_option(\"tags\", \"\").split(\",\")]\n\n def get_form_instance(self) -> Any:\n return self.form_cls(self.data, integrations=self.get_integrations())\n", "path": "src/sentry/integrations/discord/actions/notification.py"}]}
| 2,550 | 785 |
gh_patches_debug_7058
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1139
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consistency on PUT with mandatory schema fields
While working on #790 I realize that there is something not clear in our specifications.
Currently, if a resource has a mandatory field (eg. groups `members`), then we cannot do a `PUT` with just the `permissions` values. This is because a PUT can lead to a creation, and the `members` fields has to be provided.
On other resources, which have no mandatory field, it is perfectly possible to only provide `permissions`.
But, I believe we should make every resources behave the same way.
For example, when we'll implement the edition of permissions in Kinto-admin, we don't want to have to pass the `data` if it was not changed.
Two solutions:
- Add a default value (`[]`) for the groups members attribute (_my prefered one, trivial and not absurd_)
- Allow `data` to be omitted only when the `PUT` replaces an existing object (_more complex to implement, but would work for any resource with mandatory fields_)
Consistency on PUT with mandatory schema fields
While working on #790 I realize that there is something not clear in our specifications.
Currently, if a resource has a mandatory field (eg. groups `members`), then we cannot do a `PUT` with just the `permissions` values. This is because a PUT can lead to a creation, and the `members` fields has to be provided.
On other resources, which have no mandatory field, it is perfectly possible to only provide `permissions`.
But, I believe we should make every resources behave the same way.
For example, when we'll implement the edition of permissions in Kinto-admin, we don't want to have to pass the `data` if it was not changed.
Two solutions:
- Add a default value (`[]`) for the groups members attribute (_my prefered one, trivial and not absurd_)
- Allow `data` to be omitted only when the `PUT` replaces an existing object (_more complex to implement, but would work for any resource with mandatory fields_)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/groups.py`
Content:
```
1 import colander
2
3 from kinto.core import resource, utils
4 from kinto.core.events import ResourceChanged, ACTIONS
5 from pyramid.events import subscriber
6
7
8 def validate_member(node, member):
9 if member.startswith('/buckets/') or member == 'system.Everyone':
10 raise colander.Invalid(node, "'{}' is not a valid user ID.".format(member))
11
12
13 class GroupSchema(resource.ResourceSchema):
14 members = colander.SchemaNode(colander.Sequence(),
15 colander.SchemaNode(colander.String(),
16 validator=validate_member))
17
18
19 @resource.register(name='group',
20 collection_path='/buckets/{{bucket_id}}/groups',
21 record_path='/buckets/{{bucket_id}}/groups/{{id}}')
22 class Group(resource.ShareableResource):
23 schema = GroupSchema
24
25 def get_parent_id(self, request):
26 bucket_id = request.matchdict['bucket_id']
27 parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)
28 return parent_id
29
30
31 @subscriber(ResourceChanged,
32 for_resources=('group',),
33 for_actions=(ACTIONS.DELETE,))
34 def on_groups_deleted(event):
35 """Some groups were deleted, remove them from users principals.
36 """
37 permission_backend = event.request.registry.permission
38
39 for change in event.impacted_records:
40 group = change['old']
41 bucket_id = event.payload['bucket_id']
42 group_uri = utils.instance_uri(event.request, 'group',
43 bucket_id=bucket_id,
44 id=group['id'])
45
46 permission_backend.remove_principal(group_uri)
47
48
49 @subscriber(ResourceChanged,
50 for_resources=('group',),
51 for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))
52 def on_groups_changed(event):
53 """Some groups were changed, update users principals.
54 """
55 permission_backend = event.request.registry.permission
56
57 for change in event.impacted_records:
58 if 'old' in change:
59 existing_record_members = set(change['old'].get('members', []))
60 else:
61 existing_record_members = set()
62
63 group = change['new']
64 group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],
65 **event.payload)
66 new_record_members = set(group.get('members', []))
67 new_members = new_record_members - existing_record_members
68 removed_members = existing_record_members - new_record_members
69
70 for member in new_members:
71 # Add the group to the member principal.
72 permission_backend.add_user_principal(member, group_uri)
73
74 for member in removed_members:
75 # Remove the group from the member principal.
76 permission_backend.remove_user_principal(member, group_uri)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/views/groups.py b/kinto/views/groups.py
--- a/kinto/views/groups.py
+++ b/kinto/views/groups.py
@@ -13,7 +13,8 @@
class GroupSchema(resource.ResourceSchema):
members = colander.SchemaNode(colander.Sequence(),
colander.SchemaNode(colander.String(),
- validator=validate_member))
+ validator=validate_member),
+ missing=[])
@resource.register(name='group',
|
{"golden_diff": "diff --git a/kinto/views/groups.py b/kinto/views/groups.py\n--- a/kinto/views/groups.py\n+++ b/kinto/views/groups.py\n@@ -13,7 +13,8 @@\n class GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n colander.SchemaNode(colander.String(),\n- validator=validate_member))\n+ validator=validate_member),\n+ missing=[])\n \n \n @resource.register(name='group',\n", "issue": "Consistency on PUT with mandatory schema fields\nWhile working on #790 I realize that there is something not clear in our specifications.\n\nCurrently, if a resource has a mandatory field (eg. groups `members`), then we cannot do a `PUT` with just the `permissions` values. This is because a PUT can lead to a creation, and the `members` fields has to be provided.\n\nOn other resources, which have no mandatory field, it is perfectly possible to only provide `permissions`.\n\nBut, I believe we should make every resources behave the same way.\n\nFor example, when we'll implement the edition of permissions in Kinto-admin, we don't want to have to pass the `data` if it was not changed.\n\nTwo solutions:\n- Add a default value (`[]`) for the groups members attribute (_my prefered one, trivial and not absurd_)\n- Allow `data` to be omitted only when the `PUT` replaces an existing object (_more complex to implement, but would work for any resource with mandatory fields_)\n\nConsistency on PUT with mandatory schema fields\nWhile working on #790 I realize that there is something not clear in our specifications.\n\nCurrently, if a resource has a mandatory field (eg. groups `members`), then we cannot do a `PUT` with just the `permissions` values. This is because a PUT can lead to a creation, and the `members` fields has to be provided.\n\nOn other resources, which have no mandatory field, it is perfectly possible to only provide `permissions`.\n\nBut, I believe we should make every resources behave the same way.\n\nFor example, when we'll implement the edition of permissions in Kinto-admin, we don't want to have to pass the `data` if it was not changed.\n\nTwo solutions:\n- Add a default value (`[]`) for the groups members attribute (_my prefered one, trivial and not absurd_)\n- Allow `data` to be omitted only when the `PUT` replaces an existing object (_more complex to implement, but would work for any resource with mandatory fields_)\n\n", "before_files": [{"content": "import colander\n\nfrom kinto.core import resource, utils\nfrom kinto.core.events import ResourceChanged, ACTIONS\nfrom pyramid.events import subscriber\n\n\ndef validate_member(node, member):\n if member.startswith('/buckets/') or member == 'system.Everyone':\n raise colander.Invalid(node, \"'{}' is not a valid user ID.\".format(member))\n\n\nclass GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n colander.SchemaNode(colander.String(),\n validator=validate_member))\n\n\[email protected](name='group',\n collection_path='/buckets/{{bucket_id}}/groups',\n record_path='/buckets/{{bucket_id}}/groups/{{id}}')\nclass Group(resource.ShareableResource):\n schema = GroupSchema\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)\n return parent_id\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.DELETE,))\ndef on_groups_deleted(event):\n \"\"\"Some groups were deleted, remove them from users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n group = change['old']\n bucket_id = event.payload['bucket_id']\n group_uri = utils.instance_uri(event.request, 'group',\n bucket_id=bucket_id,\n id=group['id'])\n\n permission_backend.remove_principal(group_uri)\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))\ndef on_groups_changed(event):\n \"\"\"Some groups were changed, update users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n if 'old' in change:\n existing_record_members = set(change['old'].get('members', []))\n else:\n existing_record_members = set()\n\n group = change['new']\n group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],\n **event.payload)\n new_record_members = set(group.get('members', []))\n new_members = new_record_members - existing_record_members\n removed_members = existing_record_members - new_record_members\n\n for member in new_members:\n # Add the group to the member principal.\n permission_backend.add_user_principal(member, group_uri)\n\n for member in removed_members:\n # Remove the group from the member principal.\n permission_backend.remove_user_principal(member, group_uri)\n", "path": "kinto/views/groups.py"}], "after_files": [{"content": "import colander\n\nfrom kinto.core import resource, utils\nfrom kinto.core.events import ResourceChanged, ACTIONS\nfrom pyramid.events import subscriber\n\n\ndef validate_member(node, member):\n if member.startswith('/buckets/') or member == 'system.Everyone':\n raise colander.Invalid(node, \"'{}' is not a valid user ID.\".format(member))\n\n\nclass GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n colander.SchemaNode(colander.String(),\n validator=validate_member),\n missing=[])\n\n\[email protected](name='group',\n collection_path='/buckets/{{bucket_id}}/groups',\n record_path='/buckets/{{bucket_id}}/groups/{{id}}')\nclass Group(resource.ShareableResource):\n schema = GroupSchema\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)\n return parent_id\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.DELETE,))\ndef on_groups_deleted(event):\n \"\"\"Some groups were deleted, remove them from users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n group = change['old']\n bucket_id = event.payload['bucket_id']\n group_uri = utils.instance_uri(event.request, 'group',\n bucket_id=bucket_id,\n id=group['id'])\n\n permission_backend.remove_principal(group_uri)\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))\ndef on_groups_changed(event):\n \"\"\"Some groups were changed, update users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n if 'old' in change:\n existing_record_members = set(change['old'].get('members', []))\n else:\n existing_record_members = set()\n\n group = change['new']\n group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],\n **event.payload)\n new_record_members = set(group.get('members', []))\n new_members = new_record_members - existing_record_members\n removed_members = existing_record_members - new_record_members\n\n for member in new_members:\n # Add the group to the member principal.\n permission_backend.add_user_principal(member, group_uri)\n\n for member in removed_members:\n # Remove the group from the member principal.\n permission_backend.remove_user_principal(member, group_uri)\n", "path": "kinto/views/groups.py"}]}
| 1,375 | 98 |
gh_patches_debug_43621
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-5642
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] convert to period bug in `_StatsModelsAdapter`
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
For panel data with datetime indexes, converting datetimes to period index causes an error in the predict stage in the `_StatsModelsAdapter` models
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
from sktime.datasets import load_airline
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
import pandas as pd
y = load_airline()
# create dummy index with hourly timestamps and panel data by hour of day
y.index = pd.date_range(start='1960-01-01', periods=len(y.index), freq='H')
y.index.names = ["datetime"]
y.name = "passengers"
y = y.to_frame()
y['hour_of_day'] = y.index.hour
y = y.reset_index().set_index(['hour_of_day', 'datetime']).sort_index()
forecaster = ExponentialSmoothing(
trend='add', sp=1
)
forecaster.fit(y)
forecaster.predict(fh=[1])
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
For each panel predict one step ahead
**Additional context**
<!--
Add any other context about the problem here.
-->
I will address this issue 👍
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
Python dependencies:
pip: 23.1.2
sktime: 0.24.1
sklearn: 1.2.2
skbase: 0.6.1
numpy: 1.24.3
scipy: 1.10.1
pandas: 2.0.2
matplotlib: None
joblib: 1.2.0
numba: None
statsmodels: 0.14.0
pmdarima: 2.0.3
statsforecast: None
tsfresh: None
tslearn: None
torch: None
tensorflow: None
tensorflow_probability: None
<!-- Thanks for contributing! -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/forecasting/base/adapters/_statsmodels.py`
Content:
```
1 # !/usr/bin/env python3 -u
2 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
3 """Implements adapter for statsmodels forecasters to be used in sktime framework."""
4
5 __author__ = ["mloning"]
6 __all__ = ["_StatsModelsAdapter"]
7
8 import inspect
9
10 import numpy as np
11 import pandas as pd
12
13 from sktime.forecasting.base import BaseForecaster
14 from sktime.utils.warnings import warn
15
16
17 class _StatsModelsAdapter(BaseForecaster):
18 """Base class for interfacing statsmodels forecasting algorithms."""
19
20 _fitted_param_names = ()
21 _tags = {
22 "ignores-exogeneous-X": True,
23 "requires-fh-in-fit": False,
24 "handles-missing-data": False,
25 "python_dependencies": "statsmodels",
26 }
27
28 def __init__(self, random_state=None):
29 self._forecaster = None
30 self.random_state = random_state
31 self._fitted_forecaster = None
32 super().__init__()
33
34 def _fit(self, y, X, fh):
35 """Fit to training data.
36
37 Parameters
38 ----------
39 y : pd.Series
40 Target time series to which to fit the forecaster.
41 fh : int, list or np.array, optional (default=None)
42 The forecasters horizon with the steps ahead to to predict.
43 X : pd.DataFrame, optional (default=None)
44 Exogenous variables are ignored
45
46 Returns
47 -------
48 self : returns an instance of self.
49 """
50 # statsmodels does not support the pd.Int64Index as required,
51 # so we coerce them here to pd.RangeIndex
52 if isinstance(y, pd.Series) and pd.api.types.is_integer_dtype(y.index):
53 y, X = _coerce_int_to_range_index(y, X)
54 self._fit_forecaster(y, X)
55 return self
56
57 def _fit_forecaster(self, y_train, X_train=None):
58 """Log used internally in fit."""
59 raise NotImplementedError("abstract method")
60
61 def _update(self, y, X=None, update_params=True):
62 """Update used internally in update."""
63 if update_params or self.is_composite():
64 super()._update(y, X, update_params=update_params)
65 else:
66 if not hasattr(self._fitted_forecaster, "append"):
67 warn(
68 f"NotImplementedWarning: {self.__class__.__name__} "
69 f"can not accept new data when update_params=False. "
70 f"Call with update_params=True to refit with new data.",
71 obj=self,
72 )
73 else:
74 # only append unseen data to fitted forecaster
75 index_diff = y.index.difference(
76 self._fitted_forecaster.fittedvalues.index
77 )
78 if index_diff.isin(y.index).all():
79 y = y.loc[index_diff]
80 self._fitted_forecaster = self._fitted_forecaster.append(y)
81
82 def _predict(self, fh, X):
83 """Make forecasts.
84
85 Parameters
86 ----------
87 fh : ForecastingHorizon
88 The forecasters horizon with the steps ahead to to predict.
89 Default is one-step ahead forecast,
90 i.e. np.array([1])
91 X : pd.DataFrame, optional (default=None)
92 Exogenous variables are ignored.
93
94 Returns
95 -------
96 y_pred : pd.Series
97 Returns series of predicted values.
98 """
99 # statsmodels requires zero-based indexing starting at the
100 # beginning of the training series when passing integers
101 start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
102 fh_abs = fh.to_absolute_index(self.cutoff)
103
104 # bug fix for evaluate function as test_plus_train indices are passed
105 # statsmodels exog must contain test indices only.
106 # For discussion see https://github.com/sktime/sktime/issues/3830
107 if X is not None:
108 ind_drop = self._X.index
109 X = X.loc[~X.index.isin(ind_drop)]
110 # Entire range of the forecast horizon is required
111 X = X[: fh_abs[-1]]
112
113 if "exog" in inspect.signature(self._forecaster.__init__).parameters.keys():
114 y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)
115 else:
116 y_pred = self._fitted_forecaster.predict(start=start, end=end)
117
118 # statsmodels forecasts all periods from start to end of forecasting
119 # horizon, but only return given time points in forecasting horizon
120 y_pred = y_pred.loc[fh_abs]
121 # ensure that name is not added nor removed
122 # otherwise this may upset conversion to pd.DataFrame
123 y_pred.name = self._y.name
124 return y_pred
125
126 @staticmethod
127 def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:
128 """Construct confidence interval at specified `alpha` for each timestep.
129
130 Parameters
131 ----------
132 prediction_results : PredictionResults
133 results class, as returned by ``self._fitted_forecaster.get_prediction``
134 alpha : float
135 one minus nominal coverage
136
137 Returns
138 -------
139 pd.DataFrame
140 confidence intervals at each timestep
141
142 The dataframe must have at least two columns ``lower`` and ``upper``, and
143 the row indices must be integers relative to ``self.cutoff``. Order of
144 columns do not matter, and row indices must be a superset of relative
145 integer horizon of ``fh``.
146 """
147 del prediction_results, alpha # tools like ``vulture`` may complain as unused
148
149 raise NotImplementedError("abstract method")
150
151 def _predict_interval(self, fh, X, coverage):
152 """Compute/return prediction interval forecasts.
153
154 private _predict_interval containing the core logic,
155 called from predict_interval and default _predict_quantiles
156
157 Parameters
158 ----------
159 fh : guaranteed to be ForecastingHorizon
160 The forecasting horizon with the steps ahead to to predict.
161 X : optional (default=None)
162 guaranteed to be of a type in self.get_tag("X_inner_mtype")
163 Exogeneous time series to predict from.
164 coverage : float or list of float, optional (default=0.95)
165 nominal coverage(s) of predictive interval(s)
166
167 Returns
168 -------
169 pred_int : pd.DataFrame
170 Column has multi-index: first level is variable name from y in fit,
171 second level coverage fractions for which intervals were computed.
172 in the same order as in input `coverage`.
173 Third level is string "lower" or "upper", for lower/upper interval end.
174 Row index is fh, with additional (upper) levels equal to instance levels,
175 from y seen in fit, if y_inner_mtype is Panel or Hierarchical.
176 Entries are forecasts of lower/upper interval end,
177 for var in col index, at nominal coverage in second col index,
178 lower/upper depending on third col index, for the row index.
179 Upper/lower interval end forecasts are equivalent to
180 quantile forecasts at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.
181 """
182 implements_interval_adapter = self._has_implementation_of("_extract_conf_int")
183 implements_quantiles = self._has_implementation_of("_predict_quantiles")
184
185 if not implements_interval_adapter and implements_quantiles:
186 return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)
187
188 start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
189 valid_indices = fh.to_absolute(self.cutoff).to_pandas()
190
191 get_prediction_arguments = {"start": start, "end": end}
192
193 if hasattr(self, "random_state"):
194 get_prediction_arguments["random_state"] = self.random_state
195
196 if inspect.signature(self._fitted_forecaster.get_prediction).parameters.get(
197 "exog"
198 ):
199 get_prediction_arguments["exog"] = X
200
201 prediction_results = self._fitted_forecaster.get_prediction(
202 **get_prediction_arguments
203 )
204
205 var_names = self._get_varnames()
206 var_name = var_names[0]
207 columns = pd.MultiIndex.from_product([var_names, coverage, ["lower", "upper"]])
208 pred_int = pd.DataFrame(index=valid_indices, columns=columns)
209
210 for c in coverage:
211 pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))
212
213 pred_int[(var_name, c, "lower")] = pred_statsmodels.loc[
214 valid_indices, "lower"
215 ]
216 pred_int[(var_name, c, "upper")] = pred_statsmodels.loc[
217 valid_indices, "upper"
218 ]
219
220 return pred_int
221
222 def _get_fitted_params(self):
223 """Get fitted parameters.
224
225 Returns
226 -------
227 fitted_params : dict
228 """
229 fitted_params = {}
230 for name in self._get_fitted_param_names():
231 if name in ["aic", "aicc", "bic", "hqic"]:
232 fitted_params[name] = getattr(self._fitted_forecaster, name, None)
233 else:
234 fitted_params[name] = self._fitted_forecaster.params.get(name)
235 return fitted_params
236
237 def _get_fitted_param_names(self):
238 """Get names of fitted parameters."""
239 return self._fitted_param_names
240
241
242 def _coerce_int_to_range_index(y, X=None):
243 new_index = pd.RangeIndex(y.index[0], y.index[-1] + 1)
244 try:
245 np.testing.assert_array_equal(y.index, new_index)
246 except AssertionError:
247 raise ValueError(
248 "Coercion of integer pd.Index to pd.RangeIndex "
249 "failed. Please provide `y_train` with a "
250 "pd.RangeIndex."
251 )
252 y.index = new_index
253 if X is not None:
254 X.index = new_index
255 return y, X
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sktime/forecasting/base/adapters/_statsmodels.py b/sktime/forecasting/base/adapters/_statsmodels.py
--- a/sktime/forecasting/base/adapters/_statsmodels.py
+++ b/sktime/forecasting/base/adapters/_statsmodels.py
@@ -2,7 +2,7 @@
# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
"""Implements adapter for statsmodels forecasters to be used in sktime framework."""
-__author__ = ["mloning"]
+__author__ = ["mloning", "ciaran-g"]
__all__ = ["_StatsModelsAdapter"]
import inspect
@@ -99,7 +99,7 @@
# statsmodels requires zero-based indexing starting at the
# beginning of the training series when passing integers
start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
- fh_abs = fh.to_absolute_index(self.cutoff)
+ fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)
# bug fix for evaluate function as test_plus_train indices are passed
# statsmodels exog must contain test indices only.
@@ -108,7 +108,7 @@
ind_drop = self._X.index
X = X.loc[~X.index.isin(ind_drop)]
# Entire range of the forecast horizon is required
- X = X[: fh_abs[-1]]
+ X = X.iloc[: (fh_int[-1] + 1)] # include end point
if "exog" in inspect.signature(self._forecaster.__init__).parameters.keys():
y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)
@@ -117,7 +117,9 @@
# statsmodels forecasts all periods from start to end of forecasting
# horizon, but only return given time points in forecasting horizon
- y_pred = y_pred.loc[fh_abs]
+ # if fh[0] > 1 steps ahead of cutoff then make relative to `start`
+ fh_int = fh_int - fh_int[0]
+ y_pred = y_pred.iloc[fh_int]
# ensure that name is not added nor removed
# otherwise this may upset conversion to pd.DataFrame
y_pred.name = self._y.name
@@ -186,7 +188,9 @@
return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)
start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
- valid_indices = fh.to_absolute(self.cutoff).to_pandas()
+ fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)
+ # if fh > 1 steps ahead of cutoff
+ fh_int = fh_int - fh_int[0]
get_prediction_arguments = {"start": start, "end": end}
@@ -205,17 +209,15 @@
var_names = self._get_varnames()
var_name = var_names[0]
columns = pd.MultiIndex.from_product([var_names, coverage, ["lower", "upper"]])
- pred_int = pd.DataFrame(index=valid_indices, columns=columns)
+ preds_index = self._extract_conf_int(prediction_results, (1 - coverage[0]))
+ preds_index = preds_index.iloc[fh_int].index
+ pred_int = pd.DataFrame(index=preds_index, columns=columns)
for c in coverage:
pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))
- pred_int[(var_name, c, "lower")] = pred_statsmodels.loc[
- valid_indices, "lower"
- ]
- pred_int[(var_name, c, "upper")] = pred_statsmodels.loc[
- valid_indices, "upper"
- ]
+ pred_int[(var_name, c, "lower")] = pred_statsmodels.iloc[fh_int]["lower"]
+ pred_int[(var_name, c, "upper")] = pred_statsmodels.iloc[fh_int]["upper"]
return pred_int
|
{"golden_diff": "diff --git a/sktime/forecasting/base/adapters/_statsmodels.py b/sktime/forecasting/base/adapters/_statsmodels.py\n--- a/sktime/forecasting/base/adapters/_statsmodels.py\n+++ b/sktime/forecasting/base/adapters/_statsmodels.py\n@@ -2,7 +2,7 @@\n # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n \"\"\"Implements adapter for statsmodels forecasters to be used in sktime framework.\"\"\"\n \n-__author__ = [\"mloning\"]\n+__author__ = [\"mloning\", \"ciaran-g\"]\n __all__ = [\"_StatsModelsAdapter\"]\n \n import inspect\n@@ -99,7 +99,7 @@\n # statsmodels requires zero-based indexing starting at the\n # beginning of the training series when passing integers\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n- fh_abs = fh.to_absolute_index(self.cutoff)\n+ fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)\n \n # bug fix for evaluate function as test_plus_train indices are passed\n # statsmodels exog must contain test indices only.\n@@ -108,7 +108,7 @@\n ind_drop = self._X.index\n X = X.loc[~X.index.isin(ind_drop)]\n # Entire range of the forecast horizon is required\n- X = X[: fh_abs[-1]]\n+ X = X.iloc[: (fh_int[-1] + 1)] # include end point\n \n if \"exog\" in inspect.signature(self._forecaster.__init__).parameters.keys():\n y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)\n@@ -117,7 +117,9 @@\n \n # statsmodels forecasts all periods from start to end of forecasting\n # horizon, but only return given time points in forecasting horizon\n- y_pred = y_pred.loc[fh_abs]\n+ # if fh[0] > 1 steps ahead of cutoff then make relative to `start`\n+ fh_int = fh_int - fh_int[0]\n+ y_pred = y_pred.iloc[fh_int]\n # ensure that name is not added nor removed\n # otherwise this may upset conversion to pd.DataFrame\n y_pred.name = self._y.name\n@@ -186,7 +188,9 @@\n return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)\n \n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n- valid_indices = fh.to_absolute(self.cutoff).to_pandas()\n+ fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)\n+ # if fh > 1 steps ahead of cutoff\n+ fh_int = fh_int - fh_int[0]\n \n get_prediction_arguments = {\"start\": start, \"end\": end}\n \n@@ -205,17 +209,15 @@\n var_names = self._get_varnames()\n var_name = var_names[0]\n columns = pd.MultiIndex.from_product([var_names, coverage, [\"lower\", \"upper\"]])\n- pred_int = pd.DataFrame(index=valid_indices, columns=columns)\n+ preds_index = self._extract_conf_int(prediction_results, (1 - coverage[0]))\n+ preds_index = preds_index.iloc[fh_int].index\n+ pred_int = pd.DataFrame(index=preds_index, columns=columns)\n \n for c in coverage:\n pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))\n \n- pred_int[(var_name, c, \"lower\")] = pred_statsmodels.loc[\n- valid_indices, \"lower\"\n- ]\n- pred_int[(var_name, c, \"upper\")] = pred_statsmodels.loc[\n- valid_indices, \"upper\"\n- ]\n+ pred_int[(var_name, c, \"lower\")] = pred_statsmodels.iloc[fh_int][\"lower\"]\n+ pred_int[(var_name, c, \"upper\")] = pred_statsmodels.iloc[fh_int][\"upper\"]\n \n return pred_int\n", "issue": "[BUG] convert to period bug in `_StatsModelsAdapter`\n**Describe the bug**\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\n\r\nFor panel data with datetime indexes, converting datetimes to period index causes an error in the predict stage in the `_StatsModelsAdapter` models\r\n\r\n**To Reproduce**\r\n<!--\r\nAdd a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve\r\n\r\nIf the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com\r\n-->\r\n\r\n```python\r\nfrom sktime.datasets import load_airline\r\nfrom sktime.forecasting.exp_smoothing import ExponentialSmoothing\r\nimport pandas as pd\r\ny = load_airline()\r\n\r\n# create dummy index with hourly timestamps and panel data by hour of day\r\ny.index = pd.date_range(start='1960-01-01', periods=len(y.index), freq='H')\r\ny.index.names = [\"datetime\"]\r\ny.name = \"passengers\"\r\ny = y.to_frame()\r\ny['hour_of_day'] = y.index.hour\r\ny = y.reset_index().set_index(['hour_of_day', 'datetime']).sort_index()\r\n\r\nforecaster = ExponentialSmoothing(\r\n trend='add', sp=1\r\n) \r\nforecaster.fit(y)\r\nforecaster.predict(fh=[1])\r\n\r\n```\r\n\r\n**Expected behavior**\r\n<!--\r\nA clear and concise description of what you expected to happen.\r\n-->\r\n\r\nFor each panel predict one step ahead\r\n\r\n**Additional context**\r\n<!--\r\nAdd any other context about the problem here.\r\n-->\r\n\r\nI will address this issue \ud83d\udc4d \r\n\r\n**Versions**\r\n<details>\r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n\r\nfrom sktime import show_versions; show_versions()\r\n-->\r\n\r\nPython dependencies:\r\n pip: 23.1.2\r\n sktime: 0.24.1\r\n sklearn: 1.2.2\r\n skbase: 0.6.1\r\n numpy: 1.24.3\r\n scipy: 1.10.1\r\n pandas: 2.0.2\r\n matplotlib: None\r\n joblib: 1.2.0\r\n numba: None\r\n statsmodels: 0.14.0\r\n pmdarima: 2.0.3\r\nstatsforecast: None\r\n tsfresh: None\r\n tslearn: None\r\n torch: None\r\n tensorflow: None\r\ntensorflow_probability: None\r\n\r\n<!-- Thanks for contributing! -->\r\n\n", "before_files": [{"content": "# !/usr/bin/env python3 -u\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Implements adapter for statsmodels forecasters to be used in sktime framework.\"\"\"\n\n__author__ = [\"mloning\"]\n__all__ = [\"_StatsModelsAdapter\"]\n\nimport inspect\n\nimport numpy as np\nimport pandas as pd\n\nfrom sktime.forecasting.base import BaseForecaster\nfrom sktime.utils.warnings import warn\n\n\nclass _StatsModelsAdapter(BaseForecaster):\n \"\"\"Base class for interfacing statsmodels forecasting algorithms.\"\"\"\n\n _fitted_param_names = ()\n _tags = {\n \"ignores-exogeneous-X\": True,\n \"requires-fh-in-fit\": False,\n \"handles-missing-data\": False,\n \"python_dependencies\": \"statsmodels\",\n }\n\n def __init__(self, random_state=None):\n self._forecaster = None\n self.random_state = random_state\n self._fitted_forecaster = None\n super().__init__()\n\n def _fit(self, y, X, fh):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n y : pd.Series\n Target time series to which to fit the forecaster.\n fh : int, list or np.array, optional (default=None)\n The forecasters horizon with the steps ahead to to predict.\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored\n\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n # statsmodels does not support the pd.Int64Index as required,\n # so we coerce them here to pd.RangeIndex\n if isinstance(y, pd.Series) and pd.api.types.is_integer_dtype(y.index):\n y, X = _coerce_int_to_range_index(y, X)\n self._fit_forecaster(y, X)\n return self\n\n def _fit_forecaster(self, y_train, X_train=None):\n \"\"\"Log used internally in fit.\"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _update(self, y, X=None, update_params=True):\n \"\"\"Update used internally in update.\"\"\"\n if update_params or self.is_composite():\n super()._update(y, X, update_params=update_params)\n else:\n if not hasattr(self._fitted_forecaster, \"append\"):\n warn(\n f\"NotImplementedWarning: {self.__class__.__name__} \"\n f\"can not accept new data when update_params=False. \"\n f\"Call with update_params=True to refit with new data.\",\n obj=self,\n )\n else:\n # only append unseen data to fitted forecaster\n index_diff = y.index.difference(\n self._fitted_forecaster.fittedvalues.index\n )\n if index_diff.isin(y.index).all():\n y = y.loc[index_diff]\n self._fitted_forecaster = self._fitted_forecaster.append(y)\n\n def _predict(self, fh, X):\n \"\"\"Make forecasts.\n\n Parameters\n ----------\n fh : ForecastingHorizon\n The forecasters horizon with the steps ahead to to predict.\n Default is one-step ahead forecast,\n i.e. np.array([1])\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored.\n\n Returns\n -------\n y_pred : pd.Series\n Returns series of predicted values.\n \"\"\"\n # statsmodels requires zero-based indexing starting at the\n # beginning of the training series when passing integers\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n fh_abs = fh.to_absolute_index(self.cutoff)\n\n # bug fix for evaluate function as test_plus_train indices are passed\n # statsmodels exog must contain test indices only.\n # For discussion see https://github.com/sktime/sktime/issues/3830\n if X is not None:\n ind_drop = self._X.index\n X = X.loc[~X.index.isin(ind_drop)]\n # Entire range of the forecast horizon is required\n X = X[: fh_abs[-1]]\n\n if \"exog\" in inspect.signature(self._forecaster.__init__).parameters.keys():\n y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)\n else:\n y_pred = self._fitted_forecaster.predict(start=start, end=end)\n\n # statsmodels forecasts all periods from start to end of forecasting\n # horizon, but only return given time points in forecasting horizon\n y_pred = y_pred.loc[fh_abs]\n # ensure that name is not added nor removed\n # otherwise this may upset conversion to pd.DataFrame\n y_pred.name = self._y.name\n return y_pred\n\n @staticmethod\n def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:\n \"\"\"Construct confidence interval at specified `alpha` for each timestep.\n\n Parameters\n ----------\n prediction_results : PredictionResults\n results class, as returned by ``self._fitted_forecaster.get_prediction``\n alpha : float\n one minus nominal coverage\n\n Returns\n -------\n pd.DataFrame\n confidence intervals at each timestep\n\n The dataframe must have at least two columns ``lower`` and ``upper``, and\n the row indices must be integers relative to ``self.cutoff``. Order of\n columns do not matter, and row indices must be a superset of relative\n integer horizon of ``fh``.\n \"\"\"\n del prediction_results, alpha # tools like ``vulture`` may complain as unused\n\n raise NotImplementedError(\"abstract method\")\n\n def _predict_interval(self, fh, X, coverage):\n \"\"\"Compute/return prediction interval forecasts.\n\n private _predict_interval containing the core logic,\n called from predict_interval and default _predict_quantiles\n\n Parameters\n ----------\n fh : guaranteed to be ForecastingHorizon\n The forecasting horizon with the steps ahead to to predict.\n X : optional (default=None)\n guaranteed to be of a type in self.get_tag(\"X_inner_mtype\")\n Exogeneous time series to predict from.\n coverage : float or list of float, optional (default=0.95)\n nominal coverage(s) of predictive interval(s)\n\n Returns\n -------\n pred_int : pd.DataFrame\n Column has multi-index: first level is variable name from y in fit,\n second level coverage fractions for which intervals were computed.\n in the same order as in input `coverage`.\n Third level is string \"lower\" or \"upper\", for lower/upper interval end.\n Row index is fh, with additional (upper) levels equal to instance levels,\n from y seen in fit, if y_inner_mtype is Panel or Hierarchical.\n Entries are forecasts of lower/upper interval end,\n for var in col index, at nominal coverage in second col index,\n lower/upper depending on third col index, for the row index.\n Upper/lower interval end forecasts are equivalent to\n quantile forecasts at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.\n \"\"\"\n implements_interval_adapter = self._has_implementation_of(\"_extract_conf_int\")\n implements_quantiles = self._has_implementation_of(\"_predict_quantiles\")\n\n if not implements_interval_adapter and implements_quantiles:\n return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)\n\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n valid_indices = fh.to_absolute(self.cutoff).to_pandas()\n\n get_prediction_arguments = {\"start\": start, \"end\": end}\n\n if hasattr(self, \"random_state\"):\n get_prediction_arguments[\"random_state\"] = self.random_state\n\n if inspect.signature(self._fitted_forecaster.get_prediction).parameters.get(\n \"exog\"\n ):\n get_prediction_arguments[\"exog\"] = X\n\n prediction_results = self._fitted_forecaster.get_prediction(\n **get_prediction_arguments\n )\n\n var_names = self._get_varnames()\n var_name = var_names[0]\n columns = pd.MultiIndex.from_product([var_names, coverage, [\"lower\", \"upper\"]])\n pred_int = pd.DataFrame(index=valid_indices, columns=columns)\n\n for c in coverage:\n pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))\n\n pred_int[(var_name, c, \"lower\")] = pred_statsmodels.loc[\n valid_indices, \"lower\"\n ]\n pred_int[(var_name, c, \"upper\")] = pred_statsmodels.loc[\n valid_indices, \"upper\"\n ]\n\n return pred_int\n\n def _get_fitted_params(self):\n \"\"\"Get fitted parameters.\n\n Returns\n -------\n fitted_params : dict\n \"\"\"\n fitted_params = {}\n for name in self._get_fitted_param_names():\n if name in [\"aic\", \"aicc\", \"bic\", \"hqic\"]:\n fitted_params[name] = getattr(self._fitted_forecaster, name, None)\n else:\n fitted_params[name] = self._fitted_forecaster.params.get(name)\n return fitted_params\n\n def _get_fitted_param_names(self):\n \"\"\"Get names of fitted parameters.\"\"\"\n return self._fitted_param_names\n\n\ndef _coerce_int_to_range_index(y, X=None):\n new_index = pd.RangeIndex(y.index[0], y.index[-1] + 1)\n try:\n np.testing.assert_array_equal(y.index, new_index)\n except AssertionError:\n raise ValueError(\n \"Coercion of integer pd.Index to pd.RangeIndex \"\n \"failed. Please provide `y_train` with a \"\n \"pd.RangeIndex.\"\n )\n y.index = new_index\n if X is not None:\n X.index = new_index\n return y, X\n", "path": "sktime/forecasting/base/adapters/_statsmodels.py"}], "after_files": [{"content": "# !/usr/bin/env python3 -u\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Implements adapter for statsmodels forecasters to be used in sktime framework.\"\"\"\n\n__author__ = [\"mloning\", \"ciaran-g\"]\n__all__ = [\"_StatsModelsAdapter\"]\n\nimport inspect\n\nimport numpy as np\nimport pandas as pd\n\nfrom sktime.forecasting.base import BaseForecaster\nfrom sktime.utils.warnings import warn\n\n\nclass _StatsModelsAdapter(BaseForecaster):\n \"\"\"Base class for interfacing statsmodels forecasting algorithms.\"\"\"\n\n _fitted_param_names = ()\n _tags = {\n \"ignores-exogeneous-X\": True,\n \"requires-fh-in-fit\": False,\n \"handles-missing-data\": False,\n \"python_dependencies\": \"statsmodels\",\n }\n\n def __init__(self, random_state=None):\n self._forecaster = None\n self.random_state = random_state\n self._fitted_forecaster = None\n super().__init__()\n\n def _fit(self, y, X, fh):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n y : pd.Series\n Target time series to which to fit the forecaster.\n fh : int, list or np.array, optional (default=None)\n The forecasters horizon with the steps ahead to to predict.\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored\n\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n # statsmodels does not support the pd.Int64Index as required,\n # so we coerce them here to pd.RangeIndex\n if isinstance(y, pd.Series) and pd.api.types.is_integer_dtype(y.index):\n y, X = _coerce_int_to_range_index(y, X)\n self._fit_forecaster(y, X)\n return self\n\n def _fit_forecaster(self, y_train, X_train=None):\n \"\"\"Log used internally in fit.\"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _update(self, y, X=None, update_params=True):\n \"\"\"Update used internally in update.\"\"\"\n if update_params or self.is_composite():\n super()._update(y, X, update_params=update_params)\n else:\n if not hasattr(self._fitted_forecaster, \"append\"):\n warn(\n f\"NotImplementedWarning: {self.__class__.__name__} \"\n f\"can not accept new data when update_params=False. \"\n f\"Call with update_params=True to refit with new data.\",\n obj=self,\n )\n else:\n # only append unseen data to fitted forecaster\n index_diff = y.index.difference(\n self._fitted_forecaster.fittedvalues.index\n )\n if index_diff.isin(y.index).all():\n y = y.loc[index_diff]\n self._fitted_forecaster = self._fitted_forecaster.append(y)\n\n def _predict(self, fh, X):\n \"\"\"Make forecasts.\n\n Parameters\n ----------\n fh : ForecastingHorizon\n The forecasters horizon with the steps ahead to to predict.\n Default is one-step ahead forecast,\n i.e. np.array([1])\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored.\n\n Returns\n -------\n y_pred : pd.Series\n Returns series of predicted values.\n \"\"\"\n # statsmodels requires zero-based indexing starting at the\n # beginning of the training series when passing integers\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)\n\n # bug fix for evaluate function as test_plus_train indices are passed\n # statsmodels exog must contain test indices only.\n # For discussion see https://github.com/sktime/sktime/issues/3830\n if X is not None:\n ind_drop = self._X.index\n X = X.loc[~X.index.isin(ind_drop)]\n # Entire range of the forecast horizon is required\n X = X.iloc[: (fh_int[-1] + 1)] # include end point\n\n if \"exog\" in inspect.signature(self._forecaster.__init__).parameters.keys():\n y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)\n else:\n y_pred = self._fitted_forecaster.predict(start=start, end=end)\n\n # statsmodels forecasts all periods from start to end of forecasting\n # horizon, but only return given time points in forecasting horizon\n # if fh[0] > 1 steps ahead of cutoff then make relative to `start`\n fh_int = fh_int - fh_int[0]\n y_pred = y_pred.iloc[fh_int]\n # ensure that name is not added nor removed\n # otherwise this may upset conversion to pd.DataFrame\n y_pred.name = self._y.name\n return y_pred\n\n @staticmethod\n def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:\n \"\"\"Construct confidence interval at specified `alpha` for each timestep.\n\n Parameters\n ----------\n prediction_results : PredictionResults\n results class, as returned by ``self._fitted_forecaster.get_prediction``\n alpha : float\n one minus nominal coverage\n\n Returns\n -------\n pd.DataFrame\n confidence intervals at each timestep\n\n The dataframe must have at least two columns ``lower`` and ``upper``, and\n the row indices must be integers relative to ``self.cutoff``. Order of\n columns do not matter, and row indices must be a superset of relative\n integer horizon of ``fh``.\n \"\"\"\n del prediction_results, alpha # tools like ``vulture`` may complain as unused\n\n raise NotImplementedError(\"abstract method\")\n\n def _predict_interval(self, fh, X, coverage):\n \"\"\"Compute/return prediction interval forecasts.\n\n private _predict_interval containing the core logic,\n called from predict_interval and default _predict_quantiles\n\n Parameters\n ----------\n fh : guaranteed to be ForecastingHorizon\n The forecasting horizon with the steps ahead to to predict.\n X : optional (default=None)\n guaranteed to be of a type in self.get_tag(\"X_inner_mtype\")\n Exogeneous time series to predict from.\n coverage : float or list of float, optional (default=0.95)\n nominal coverage(s) of predictive interval(s)\n\n Returns\n -------\n pred_int : pd.DataFrame\n Column has multi-index: first level is variable name from y in fit,\n second level coverage fractions for which intervals were computed.\n in the same order as in input `coverage`.\n Third level is string \"lower\" or \"upper\", for lower/upper interval end.\n Row index is fh, with additional (upper) levels equal to instance levels,\n from y seen in fit, if y_inner_mtype is Panel or Hierarchical.\n Entries are forecasts of lower/upper interval end,\n for var in col index, at nominal coverage in second col index,\n lower/upper depending on third col index, for the row index.\n Upper/lower interval end forecasts are equivalent to\n quantile forecasts at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.\n \"\"\"\n implements_interval_adapter = self._has_implementation_of(\"_extract_conf_int\")\n implements_quantiles = self._has_implementation_of(\"_predict_quantiles\")\n\n if not implements_interval_adapter and implements_quantiles:\n return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)\n\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)\n # if fh > 1 steps ahead of cutoff\n fh_int = fh_int - fh_int[0]\n\n get_prediction_arguments = {\"start\": start, \"end\": end}\n\n if hasattr(self, \"random_state\"):\n get_prediction_arguments[\"random_state\"] = self.random_state\n\n if inspect.signature(self._fitted_forecaster.get_prediction).parameters.get(\n \"exog\"\n ):\n get_prediction_arguments[\"exog\"] = X\n\n prediction_results = self._fitted_forecaster.get_prediction(\n **get_prediction_arguments\n )\n\n var_names = self._get_varnames()\n var_name = var_names[0]\n columns = pd.MultiIndex.from_product([var_names, coverage, [\"lower\", \"upper\"]])\n preds_index = self._extract_conf_int(prediction_results, (1 - coverage[0]))\n preds_index = preds_index.iloc[fh_int].index\n pred_int = pd.DataFrame(index=preds_index, columns=columns)\n\n for c in coverage:\n pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))\n\n pred_int[(var_name, c, \"lower\")] = pred_statsmodels.iloc[fh_int][\"lower\"]\n pred_int[(var_name, c, \"upper\")] = pred_statsmodels.iloc[fh_int][\"upper\"]\n\n return pred_int\n\n def _get_fitted_params(self):\n \"\"\"Get fitted parameters.\n\n Returns\n -------\n fitted_params : dict\n \"\"\"\n fitted_params = {}\n for name in self._get_fitted_param_names():\n if name in [\"aic\", \"aicc\", \"bic\", \"hqic\"]:\n fitted_params[name] = getattr(self._fitted_forecaster, name, None)\n else:\n fitted_params[name] = self._fitted_forecaster.params.get(name)\n return fitted_params\n\n def _get_fitted_param_names(self):\n \"\"\"Get names of fitted parameters.\"\"\"\n return self._fitted_param_names\n\n\ndef _coerce_int_to_range_index(y, X=None):\n new_index = pd.RangeIndex(y.index[0], y.index[-1] + 1)\n try:\n np.testing.assert_array_equal(y.index, new_index)\n except AssertionError:\n raise ValueError(\n \"Coercion of integer pd.Index to pd.RangeIndex \"\n \"failed. Please provide `y_train` with a \"\n \"pd.RangeIndex.\"\n )\n y.index = new_index\n if X is not None:\n X.index = new_index\n return y, X\n", "path": "sktime/forecasting/base/adapters/_statsmodels.py"}]}
| 3,600 | 937 |
gh_patches_debug_15873
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-13917
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Review: Connected App: Difficult to see how Token Cache get_expires_in could be any more wrong
https://github.com/frappe/frappe/blob/86e512452d77f3e61405fd33ecd1bf881790ae18/frappe/integrations/doctype/token_cache/token_cache.py#L53
PR to follow
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/integrations/doctype/token_cache/token_cache.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2019, Frappe Technologies and contributors
3 # For license information, please see license.txt
4
5 from __future__ import unicode_literals
6 from datetime import datetime, timedelta
7
8 import frappe
9 from frappe import _
10 from frappe.utils import cstr, cint
11 from frappe.model.document import Document
12
13 class TokenCache(Document):
14
15 def get_auth_header(self):
16 if self.access_token:
17 headers = {'Authorization': 'Bearer ' + self.get_password('access_token')}
18 return headers
19
20 raise frappe.exceptions.DoesNotExistError
21
22 def update_data(self, data):
23 """
24 Store data returned by authorization flow.
25
26 Params:
27 data - Dict with access_token, refresh_token, expires_in and scope.
28 """
29 token_type = cstr(data.get('token_type', '')).lower()
30 if token_type not in ['bearer', 'mac']:
31 frappe.throw(_('Received an invalid token type.'))
32 # 'Bearer' or 'MAC'
33 token_type = token_type.title() if token_type == 'bearer' else token_type.upper()
34
35 self.token_type = token_type
36 self.access_token = cstr(data.get('access_token', ''))
37 self.refresh_token = cstr(data.get('refresh_token', ''))
38 self.expires_in = cint(data.get('expires_in', 0))
39
40 new_scopes = data.get('scope')
41 if new_scopes:
42 if isinstance(new_scopes, str):
43 new_scopes = new_scopes.split(' ')
44 if isinstance(new_scopes, list):
45 self.scopes = None
46 for scope in new_scopes:
47 self.append('scopes', {'scope': scope})
48
49 self.state = None
50 self.save(ignore_permissions=True)
51 frappe.db.commit()
52 return self
53
54 def get_expires_in(self):
55 expiry_time = frappe.utils.get_datetime(self.modified) + timedelta(self.expires_in)
56 return (datetime.now() - expiry_time).total_seconds()
57
58 def is_expired(self):
59 return self.get_expires_in() < 0
60
61 def get_json(self):
62 return {
63 'access_token': self.get_password('access_token', ''),
64 'refresh_token': self.get_password('refresh_token', ''),
65 'expires_in': self.get_expires_in(),
66 'token_type': self.token_type
67 }
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/frappe/integrations/doctype/token_cache/token_cache.py b/frappe/integrations/doctype/token_cache/token_cache.py
--- a/frappe/integrations/doctype/token_cache/token_cache.py
+++ b/frappe/integrations/doctype/token_cache/token_cache.py
@@ -3,7 +3,7 @@
# For license information, please see license.txt
from __future__ import unicode_literals
-from datetime import datetime, timedelta
+from datetime import timedelta
import frappe
from frappe import _
@@ -52,8 +52,8 @@
return self
def get_expires_in(self):
- expiry_time = frappe.utils.get_datetime(self.modified) + timedelta(self.expires_in)
- return (datetime.now() - expiry_time).total_seconds()
+ expiry_time = frappe.utils.get_datetime(self.modified) + timedelta(seconds=self.expires_in)
+ return (expiry_time - frappe.utils.now_datetime()).total_seconds()
def is_expired(self):
return self.get_expires_in() < 0
|
{"golden_diff": "diff --git a/frappe/integrations/doctype/token_cache/token_cache.py b/frappe/integrations/doctype/token_cache/token_cache.py\n--- a/frappe/integrations/doctype/token_cache/token_cache.py\n+++ b/frappe/integrations/doctype/token_cache/token_cache.py\n@@ -3,7 +3,7 @@\n # For license information, please see license.txt\n \n from __future__ import unicode_literals\n-from datetime import datetime, timedelta\n+from datetime import timedelta\n \n import frappe\n from frappe import _\n@@ -52,8 +52,8 @@\n \t\treturn self\n \n \tdef get_expires_in(self):\n-\t\texpiry_time = frappe.utils.get_datetime(self.modified) + timedelta(self.expires_in)\n-\t\treturn (datetime.now() - expiry_time).total_seconds()\n+\t\texpiry_time = frappe.utils.get_datetime(self.modified) + timedelta(seconds=self.expires_in)\n+\t\treturn (expiry_time - frappe.utils.now_datetime()).total_seconds()\n \n \tdef is_expired(self):\n \t\treturn self.get_expires_in() < 0\n", "issue": "Review: Connected App: Difficult to see how Token Cache get_expires_in could be any more wrong\nhttps://github.com/frappe/frappe/blob/86e512452d77f3e61405fd33ecd1bf881790ae18/frappe/integrations/doctype/token_cache/token_cache.py#L53\r\n\r\nPR to follow\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2019, Frappe Technologies and contributors\n# For license information, please see license.txt\n\nfrom __future__ import unicode_literals\nfrom datetime import datetime, timedelta\n\nimport frappe\nfrom frappe import _\nfrom frappe.utils import cstr, cint\nfrom frappe.model.document import Document\n\nclass TokenCache(Document):\n\n\tdef get_auth_header(self):\n\t\tif self.access_token:\n\t\t\theaders = {'Authorization': 'Bearer ' + self.get_password('access_token')}\n\t\t\treturn headers\n\n\t\traise frappe.exceptions.DoesNotExistError\n\n\tdef update_data(self, data):\n\t\t\"\"\"\n\t\tStore data returned by authorization flow.\n\n\t\tParams:\n\t\tdata - Dict with access_token, refresh_token, expires_in and scope.\n\t\t\"\"\"\n\t\ttoken_type = cstr(data.get('token_type', '')).lower()\n\t\tif token_type not in ['bearer', 'mac']:\n\t\t\tfrappe.throw(_('Received an invalid token type.'))\n\t\t# 'Bearer' or 'MAC'\n\t\ttoken_type = token_type.title() if token_type == 'bearer' else token_type.upper()\n\n\t\tself.token_type = token_type\n\t\tself.access_token = cstr(data.get('access_token', ''))\n\t\tself.refresh_token = cstr(data.get('refresh_token', ''))\n\t\tself.expires_in = cint(data.get('expires_in', 0))\n\n\t\tnew_scopes = data.get('scope')\n\t\tif new_scopes:\n\t\t\tif isinstance(new_scopes, str):\n\t\t\t\tnew_scopes = new_scopes.split(' ')\n\t\t\tif isinstance(new_scopes, list):\n\t\t\t\tself.scopes = None\n\t\t\t\tfor scope in new_scopes:\n\t\t\t\t\tself.append('scopes', {'scope': scope})\n\n\t\tself.state = None\n\t\tself.save(ignore_permissions=True)\n\t\tfrappe.db.commit()\n\t\treturn self\n\n\tdef get_expires_in(self):\n\t\texpiry_time = frappe.utils.get_datetime(self.modified) + timedelta(self.expires_in)\n\t\treturn (datetime.now() - expiry_time).total_seconds()\n\n\tdef is_expired(self):\n\t\treturn self.get_expires_in() < 0\n\n\tdef get_json(self):\n\t\treturn {\n\t\t\t'access_token': self.get_password('access_token', ''),\n\t\t\t'refresh_token': self.get_password('refresh_token', ''),\n\t\t\t'expires_in': self.get_expires_in(),\n\t\t\t'token_type': self.token_type\n\t\t}\n", "path": "frappe/integrations/doctype/token_cache/token_cache.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2019, Frappe Technologies and contributors\n# For license information, please see license.txt\n\nfrom __future__ import unicode_literals\nfrom datetime import timedelta\n\nimport frappe\nfrom frappe import _\nfrom frappe.utils import cstr, cint\nfrom frappe.model.document import Document\n\nclass TokenCache(Document):\n\n\tdef get_auth_header(self):\n\t\tif self.access_token:\n\t\t\theaders = {'Authorization': 'Bearer ' + self.get_password('access_token')}\n\t\t\treturn headers\n\n\t\traise frappe.exceptions.DoesNotExistError\n\n\tdef update_data(self, data):\n\t\t\"\"\"\n\t\tStore data returned by authorization flow.\n\n\t\tParams:\n\t\tdata - Dict with access_token, refresh_token, expires_in and scope.\n\t\t\"\"\"\n\t\ttoken_type = cstr(data.get('token_type', '')).lower()\n\t\tif token_type not in ['bearer', 'mac']:\n\t\t\tfrappe.throw(_('Received an invalid token type.'))\n\t\t# 'Bearer' or 'MAC'\n\t\ttoken_type = token_type.title() if token_type == 'bearer' else token_type.upper()\n\n\t\tself.token_type = token_type\n\t\tself.access_token = cstr(data.get('access_token', ''))\n\t\tself.refresh_token = cstr(data.get('refresh_token', ''))\n\t\tself.expires_in = cint(data.get('expires_in', 0))\n\n\t\tnew_scopes = data.get('scope')\n\t\tif new_scopes:\n\t\t\tif isinstance(new_scopes, str):\n\t\t\t\tnew_scopes = new_scopes.split(' ')\n\t\t\tif isinstance(new_scopes, list):\n\t\t\t\tself.scopes = None\n\t\t\t\tfor scope in new_scopes:\n\t\t\t\t\tself.append('scopes', {'scope': scope})\n\n\t\tself.state = None\n\t\tself.save(ignore_permissions=True)\n\t\tfrappe.db.commit()\n\t\treturn self\n\n\tdef get_expires_in(self):\n\t\texpiry_time = frappe.utils.get_datetime(self.modified) + timedelta(seconds=self.expires_in)\n\t\treturn (expiry_time - frappe.utils.now_datetime()).total_seconds()\n\n\tdef is_expired(self):\n\t\treturn self.get_expires_in() < 0\n\n\tdef get_json(self):\n\t\treturn {\n\t\t\t'access_token': self.get_password('access_token', ''),\n\t\t\t'refresh_token': self.get_password('refresh_token', ''),\n\t\t\t'expires_in': self.get_expires_in(),\n\t\t\t'token_type': self.token_type\n\t\t}\n", "path": "frappe/integrations/doctype/token_cache/token_cache.py"}]}
| 1,002 | 224 |
gh_patches_debug_24555
|
rasdani/github-patches
|
git_diff
|
pyg-team__pytorch_geometric-5051
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Data Batch problem in PyG
### 🐛 Describe the bug
Hi. I am a computational physics researcher and was using PyG very well.
my pyg code was working well a few weeks ago, but now that I run my code, it is not working anymore without any changes.
the problem is like below.
I have many material structures and in my "custom_dataset" class, these are preprocessed and all graph informations (node features, edge features, edge index etc) are inserted into "Data" object in PyTorch geometric.
You can see that each preprocessed sample with index $i$ was printed normal "Data" object in pyg

But When I insert my custom dataset class into pyg DataLoader and I did like below,
``` Python
sample = next(iter(train_loader)) # batch sample
```
batch sample is denoted by "DataDataBatch". I didn't see this kind of object name.
and i can't use "sample.x' or "sample.edge_index" command. Instead I need to do like this

I want to use expressions like "sample.x", "sample.edge_index" or "sample.edge_attr" as like before.
I expect your kind explanations. Thank you.
### Environment
* PyG version: `2.0.5`
* PyTorch version: `1.11.0+cu113`
* OS: `GoogleColab Pro Plus`
* Python version: `Python 3.7.13 in colab`
* CUDA/cuDNN version:
* How you installed PyTorch and PyG (`conda`, `pip`, source):
``` python
# Install required packages.
import os
import torch
os.environ['TORCH'] = torch.__version__
print(torch.__version__)
!pip install -q torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install -q torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install -q git+https://github.com/pyg-team/pytorch_geometric.git
!pip install -q pymatgen==2020.11.11
```
* Any other relevant information (*e.g.*, version of `torch-scatter`):
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch_geometric/loader/dataloader.py`
Content:
```
1 from collections.abc import Mapping, Sequence
2 from typing import List, Optional, Union
3
4 import torch.utils.data
5 from torch.utils.data.dataloader import default_collate
6
7 from torch_geometric.data import Batch, Dataset
8 from torch_geometric.data.data import BaseData
9
10
11 class Collater:
12 def __init__(self, follow_batch, exclude_keys):
13 self.follow_batch = follow_batch
14 self.exclude_keys = exclude_keys
15
16 def __call__(self, batch):
17 elem = batch[0]
18 if isinstance(elem, BaseData):
19 return Batch.from_data_list(batch, self.follow_batch,
20 self.exclude_keys)
21 elif isinstance(elem, torch.Tensor):
22 return default_collate(batch)
23 elif isinstance(elem, float):
24 return torch.tensor(batch, dtype=torch.float)
25 elif isinstance(elem, int):
26 return torch.tensor(batch)
27 elif isinstance(elem, str):
28 return batch
29 elif isinstance(elem, Mapping):
30 return {key: self([data[key] for data in batch]) for key in elem}
31 elif isinstance(elem, tuple) and hasattr(elem, '_fields'):
32 return type(elem)(*(self(s) for s in zip(*batch)))
33 elif isinstance(elem, Sequence) and not isinstance(elem, str):
34 return [self(s) for s in zip(*batch)]
35
36 raise TypeError(f'DataLoader found invalid type: {type(elem)}')
37
38 def collate(self, batch): # Deprecated...
39 return self(batch)
40
41
42 class DataLoader(torch.utils.data.DataLoader):
43 r"""A data loader which merges data objects from a
44 :class:`torch_geometric.data.Dataset` to a mini-batch.
45 Data objects can be either of type :class:`~torch_geometric.data.Data` or
46 :class:`~torch_geometric.data.HeteroData`.
47
48 Args:
49 dataset (Dataset): The dataset from which to load the data.
50 batch_size (int, optional): How many samples per batch to load.
51 (default: :obj:`1`)
52 shuffle (bool, optional): If set to :obj:`True`, the data will be
53 reshuffled at every epoch. (default: :obj:`False`)
54 follow_batch (List[str], optional): Creates assignment batch
55 vectors for each key in the list. (default: :obj:`None`)
56 exclude_keys (List[str], optional): Will exclude each key in the
57 list. (default: :obj:`None`)
58 **kwargs (optional): Additional arguments of
59 :class:`torch.utils.data.DataLoader`.
60 """
61 def __init__(
62 self,
63 dataset: Union[Dataset, List[BaseData]],
64 batch_size: int = 1,
65 shuffle: bool = False,
66 follow_batch: Optional[List[str]] = None,
67 exclude_keys: Optional[List[str]] = None,
68 **kwargs,
69 ):
70
71 if 'collate_fn' in kwargs:
72 del kwargs['collate_fn']
73
74 # Save for PyTorch Lightning < 1.6:
75 self.follow_batch = follow_batch
76 self.exclude_keys = exclude_keys
77
78 super().__init__(
79 dataset,
80 batch_size,
81 shuffle,
82 collate_fn=Collater(follow_batch, exclude_keys),
83 **kwargs,
84 )
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torch_geometric/loader/dataloader.py b/torch_geometric/loader/dataloader.py
--- a/torch_geometric/loader/dataloader.py
+++ b/torch_geometric/loader/dataloader.py
@@ -1,4 +1,5 @@
from collections.abc import Mapping, Sequence
+from inspect import signature
from typing import List, Optional, Union
import torch.utils.data
@@ -39,6 +40,28 @@
return self(batch)
+# PyG 'Data' objects are subclasses of MutableMapping, which is an
+# instance of collections.abc.Mapping. Currently, PyTorch pin_memory
+# for DataLoaders treats the returned batches as Mapping objects and
+# calls `pin_memory` on each element in `Data.__dict__`, which is not
+# desired behavior if 'Data' has a `pin_memory` function. We patch
+# this behavior here by monkeypatching `pin_memory`, but can hopefully patch
+# this in PyTorch in the future:
+__torch_pin_memory = torch.utils.data._utils.pin_memory.pin_memory
+__torch_pin_memory_params = signature(__torch_pin_memory).parameters
+
+
+def pin_memory(data, device=None):
+ if hasattr(data, "pin_memory"):
+ return data.pin_memory()
+ if len(__torch_pin_memory_params) > 1:
+ return __torch_pin_memory(data, device)
+ return __torch_pin_memory(data)
+
+
+torch.utils.data._utils.pin_memory.pin_memory = pin_memory
+
+
class DataLoader(torch.utils.data.DataLoader):
r"""A data loader which merges data objects from a
:class:`torch_geometric.data.Dataset` to a mini-batch.
|
{"golden_diff": "diff --git a/torch_geometric/loader/dataloader.py b/torch_geometric/loader/dataloader.py\n--- a/torch_geometric/loader/dataloader.py\n+++ b/torch_geometric/loader/dataloader.py\n@@ -1,4 +1,5 @@\n from collections.abc import Mapping, Sequence\n+from inspect import signature\n from typing import List, Optional, Union\n \n import torch.utils.data\n@@ -39,6 +40,28 @@\n return self(batch)\n \n \n+# PyG 'Data' objects are subclasses of MutableMapping, which is an\n+# instance of collections.abc.Mapping. Currently, PyTorch pin_memory\n+# for DataLoaders treats the returned batches as Mapping objects and\n+# calls `pin_memory` on each element in `Data.__dict__`, which is not\n+# desired behavior if 'Data' has a `pin_memory` function. We patch\n+# this behavior here by monkeypatching `pin_memory`, but can hopefully patch\n+# this in PyTorch in the future:\n+__torch_pin_memory = torch.utils.data._utils.pin_memory.pin_memory\n+__torch_pin_memory_params = signature(__torch_pin_memory).parameters\n+\n+\n+def pin_memory(data, device=None):\n+ if hasattr(data, \"pin_memory\"):\n+ return data.pin_memory()\n+ if len(__torch_pin_memory_params) > 1:\n+ return __torch_pin_memory(data, device)\n+ return __torch_pin_memory(data)\n+\n+\n+torch.utils.data._utils.pin_memory.pin_memory = pin_memory\n+\n+\n class DataLoader(torch.utils.data.DataLoader):\n r\"\"\"A data loader which merges data objects from a\n :class:`torch_geometric.data.Dataset` to a mini-batch.\n", "issue": "Data Batch problem in PyG\n### \ud83d\udc1b Describe the bug\n\nHi. I am a computational physics researcher and was using PyG very well.\r\nmy pyg code was working well a few weeks ago, but now that I run my code, it is not working anymore without any changes.\r\n\r\nthe problem is like below.\r\nI have many material structures and in my \"custom_dataset\" class, these are preprocessed and all graph informations (node features, edge features, edge index etc) are inserted into \"Data\" object in PyTorch geometric.\r\nYou can see that each preprocessed sample with index $i$ was printed normal \"Data\" object in pyg\r\n\r\n\r\n\r\nBut When I insert my custom dataset class into pyg DataLoader and I did like below,\r\n\r\n``` Python\r\nsample = next(iter(train_loader)) # batch sample\r\n```\r\n\r\nbatch sample is denoted by \"DataDataBatch\". I didn't see this kind of object name.\r\nand i can't use \"sample.x' or \"sample.edge_index\" command. Instead I need to do like this\r\n\r\n\r\n\r\nI want to use expressions like \"sample.x\", \"sample.edge_index\" or \"sample.edge_attr\" as like before. \r\nI expect your kind explanations. Thank you.\r\n\n\n### Environment\n\n* PyG version: `2.0.5`\r\n* PyTorch version: `1.11.0+cu113`\r\n* OS: `GoogleColab Pro Plus`\r\n* Python version: `Python 3.7.13 in colab`\r\n* CUDA/cuDNN version:\r\n* How you installed PyTorch and PyG (`conda`, `pip`, source): \r\n``` python\r\n# Install required packages.\r\nimport os\r\nimport torch\r\nos.environ['TORCH'] = torch.__version__\r\nprint(torch.__version__)\r\n!pip install -q torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}.html\r\n!pip install -q torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}.html\r\n!pip install -q git+https://github.com/pyg-team/pytorch_geometric.git\r\n!pip install -q pymatgen==2020.11.11 \r\n```\r\n* Any other relevant information (*e.g.*, version of `torch-scatter`):\r\n\n", "before_files": [{"content": "from collections.abc import Mapping, Sequence\nfrom typing import List, Optional, Union\n\nimport torch.utils.data\nfrom torch.utils.data.dataloader import default_collate\n\nfrom torch_geometric.data import Batch, Dataset\nfrom torch_geometric.data.data import BaseData\n\n\nclass Collater:\n def __init__(self, follow_batch, exclude_keys):\n self.follow_batch = follow_batch\n self.exclude_keys = exclude_keys\n\n def __call__(self, batch):\n elem = batch[0]\n if isinstance(elem, BaseData):\n return Batch.from_data_list(batch, self.follow_batch,\n self.exclude_keys)\n elif isinstance(elem, torch.Tensor):\n return default_collate(batch)\n elif isinstance(elem, float):\n return torch.tensor(batch, dtype=torch.float)\n elif isinstance(elem, int):\n return torch.tensor(batch)\n elif isinstance(elem, str):\n return batch\n elif isinstance(elem, Mapping):\n return {key: self([data[key] for data in batch]) for key in elem}\n elif isinstance(elem, tuple) and hasattr(elem, '_fields'):\n return type(elem)(*(self(s) for s in zip(*batch)))\n elif isinstance(elem, Sequence) and not isinstance(elem, str):\n return [self(s) for s in zip(*batch)]\n\n raise TypeError(f'DataLoader found invalid type: {type(elem)}')\n\n def collate(self, batch): # Deprecated...\n return self(batch)\n\n\nclass DataLoader(torch.utils.data.DataLoader):\n r\"\"\"A data loader which merges data objects from a\n :class:`torch_geometric.data.Dataset` to a mini-batch.\n Data objects can be either of type :class:`~torch_geometric.data.Data` or\n :class:`~torch_geometric.data.HeteroData`.\n\n Args:\n dataset (Dataset): The dataset from which to load the data.\n batch_size (int, optional): How many samples per batch to load.\n (default: :obj:`1`)\n shuffle (bool, optional): If set to :obj:`True`, the data will be\n reshuffled at every epoch. (default: :obj:`False`)\n follow_batch (List[str], optional): Creates assignment batch\n vectors for each key in the list. (default: :obj:`None`)\n exclude_keys (List[str], optional): Will exclude each key in the\n list. (default: :obj:`None`)\n **kwargs (optional): Additional arguments of\n :class:`torch.utils.data.DataLoader`.\n \"\"\"\n def __init__(\n self,\n dataset: Union[Dataset, List[BaseData]],\n batch_size: int = 1,\n shuffle: bool = False,\n follow_batch: Optional[List[str]] = None,\n exclude_keys: Optional[List[str]] = None,\n **kwargs,\n ):\n\n if 'collate_fn' in kwargs:\n del kwargs['collate_fn']\n\n # Save for PyTorch Lightning < 1.6:\n self.follow_batch = follow_batch\n self.exclude_keys = exclude_keys\n\n super().__init__(\n dataset,\n batch_size,\n shuffle,\n collate_fn=Collater(follow_batch, exclude_keys),\n **kwargs,\n )\n", "path": "torch_geometric/loader/dataloader.py"}], "after_files": [{"content": "from collections.abc import Mapping, Sequence\nfrom inspect import signature\nfrom typing import List, Optional, Union\n\nimport torch.utils.data\nfrom torch.utils.data.dataloader import default_collate\n\nfrom torch_geometric.data import Batch, Dataset\nfrom torch_geometric.data.data import BaseData\n\n\nclass Collater:\n def __init__(self, follow_batch, exclude_keys):\n self.follow_batch = follow_batch\n self.exclude_keys = exclude_keys\n\n def __call__(self, batch):\n elem = batch[0]\n if isinstance(elem, BaseData):\n return Batch.from_data_list(batch, self.follow_batch,\n self.exclude_keys)\n elif isinstance(elem, torch.Tensor):\n return default_collate(batch)\n elif isinstance(elem, float):\n return torch.tensor(batch, dtype=torch.float)\n elif isinstance(elem, int):\n return torch.tensor(batch)\n elif isinstance(elem, str):\n return batch\n elif isinstance(elem, Mapping):\n return {key: self([data[key] for data in batch]) for key in elem}\n elif isinstance(elem, tuple) and hasattr(elem, '_fields'):\n return type(elem)(*(self(s) for s in zip(*batch)))\n elif isinstance(elem, Sequence) and not isinstance(elem, str):\n return [self(s) for s in zip(*batch)]\n\n raise TypeError(f'DataLoader found invalid type: {type(elem)}')\n\n def collate(self, batch): # Deprecated...\n return self(batch)\n\n\n# PyG 'Data' objects are subclasses of MutableMapping, which is an\n# instance of collections.abc.Mapping. Currently, PyTorch pin_memory\n# for DataLoaders treats the returned batches as Mapping objects and\n# calls `pin_memory` on each element in `Data.__dict__`, which is not\n# desired behavior if 'Data' has a `pin_memory` function. We patch\n# this behavior here by monkeypatching `pin_memory`, but can hopefully patch\n# this in PyTorch in the future:\n__torch_pin_memory = torch.utils.data._utils.pin_memory.pin_memory\n__torch_pin_memory_params = signature(__torch_pin_memory).parameters\n\n\ndef pin_memory(data, device=None):\n if hasattr(data, \"pin_memory\"):\n return data.pin_memory()\n if len(__torch_pin_memory_params) > 1:\n return __torch_pin_memory(data, device)\n return __torch_pin_memory(data)\n\n\ntorch.utils.data._utils.pin_memory.pin_memory = pin_memory\n\n\nclass DataLoader(torch.utils.data.DataLoader):\n r\"\"\"A data loader which merges data objects from a\n :class:`torch_geometric.data.Dataset` to a mini-batch.\n Data objects can be either of type :class:`~torch_geometric.data.Data` or\n :class:`~torch_geometric.data.HeteroData`.\n\n Args:\n dataset (Dataset): The dataset from which to load the data.\n batch_size (int, optional): How many samples per batch to load.\n (default: :obj:`1`)\n shuffle (bool, optional): If set to :obj:`True`, the data will be\n reshuffled at every epoch. (default: :obj:`False`)\n follow_batch (List[str], optional): Creates assignment batch\n vectors for each key in the list. (default: :obj:`None`)\n exclude_keys (List[str], optional): Will exclude each key in the\n list. (default: :obj:`None`)\n **kwargs (optional): Additional arguments of\n :class:`torch.utils.data.DataLoader`.\n \"\"\"\n def __init__(\n self,\n dataset: Union[Dataset, List[BaseData]],\n batch_size: int = 1,\n shuffle: bool = False,\n follow_batch: Optional[List[str]] = None,\n exclude_keys: Optional[List[str]] = None,\n **kwargs,\n ):\n\n if 'collate_fn' in kwargs:\n del kwargs['collate_fn']\n\n # Save for PyTorch Lightning < 1.6:\n self.follow_batch = follow_batch\n self.exclude_keys = exclude_keys\n\n super().__init__(\n dataset,\n batch_size,\n shuffle,\n collate_fn=Collater(follow_batch, exclude_keys),\n **kwargs,\n )\n", "path": "torch_geometric/loader/dataloader.py"}]}
| 1,727 | 362 |
gh_patches_debug_10874
|
rasdani/github-patches
|
git_diff
|
jupyterhub__zero-to-jupyterhub-k8s-405
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
allow setting service account from config.ayml
We currently disable [service accounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) for all user pods, disallowing kubernetes API access. This is the correct default for us, but we should allow users to easily override it if they need.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `images/hub/jupyterhub_config.py`
Content:
```
1 import os
2 import sys
3 import yaml
4 from tornado.httpclient import AsyncHTTPClient
5
6 def get_config(key, default=None):
7 """
8 Find a config item of a given name & return it
9
10 Parses everything as YAML, so lists and dicts are available too
11 """
12 path = os.path.join('/etc/jupyterhub/config', key)
13 try:
14 with open(path) as f:
15 data = yaml.safe_load(f)
16 return data
17 except FileNotFoundError:
18 return default
19
20 def get_secret(key, default=None):
21 """Get a secret from /etc/jupyterhub/secret"""
22 path = os.path.join('/etc/jupyterhub/secret', key)
23 try:
24 with open(path) as f:
25 return f.read().strip()
26 except FileNotFoundError:
27 return default
28
29
30 # Configure JupyterHub to use the curl backend for making HTTP requests,
31 # rather than the pure-python implementations. The default one starts
32 # being too slow to make a large number of requests to the proxy API
33 # at the rate required.
34 AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
35
36 c.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'
37
38 # Connect to a proxy running in a different pod
39 c.ConfigurableHTTPProxy.api_url = 'http://{}:{}'.format(os.environ['PROXY_API_SERVICE_HOST'], int(os.environ['PROXY_API_SERVICE_PORT']))
40 c.ConfigurableHTTPProxy.should_start = False
41
42 # Do not shut down user pods when hub is restarted
43 c.JupyterHub.cleanup_servers = False
44
45 # Check that the proxy has routes appropriately setup
46 # This isn't the best named setting :D
47 c.JupyterHub.last_activity_interval = 60
48
49 # Max number of servers that can be spawning at any one time
50 c.JupyterHub.concurrent_spawn_limit = get_config('hub.concurrent-spawn-limit')
51
52 active_server_limit = get_config('hub.active-server-limit', None)
53
54 if active_server_limit is not None:
55 c.JupyterHub.active_server_limit = int(active_server_limit)
56
57 c.JupyterHub.ip = os.environ['PROXY_PUBLIC_SERVICE_HOST']
58 c.JupyterHub.port = int(os.environ['PROXY_PUBLIC_SERVICE_PORT'])
59
60 # the hub should listen on all interfaces, so the proxy can access it
61 c.JupyterHub.hub_ip = '0.0.0.0'
62
63 c.KubeSpawner.namespace = os.environ.get('POD_NAMESPACE', 'default')
64
65 c.KubeSpawner.start_timeout = get_config('singleuser.start-timeout')
66
67 # Use env var for this, since we want hub to restart when this changes
68 c.KubeSpawner.singleuser_image_spec = os.environ['SINGLEUSER_IMAGE']
69
70 c.KubeSpawner.singleuser_extra_labels = get_config('singleuser.extra-labels', {})
71
72 c.KubeSpawner.singleuser_uid = get_config('singleuser.uid')
73 c.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')
74
75 c.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')
76 # Configure dynamically provisioning pvc
77 storage_type = get_config('singleuser.storage.type')
78 if storage_type == 'dynamic':
79 c.KubeSpawner.pvc_name_template = 'claim-{username}{servername}'
80 c.KubeSpawner.user_storage_pvc_ensure = True
81 storage_class = get_config('singleuser.storage.dynamic.storage-class', None)
82 if storage_class:
83 c.KubeSpawner.user_storage_class = storage_class
84 c.KubeSpawner.user_storage_access_modes = ['ReadWriteOnce']
85 c.KubeSpawner.user_storage_capacity = get_config('singleuser.storage.capacity')
86
87 # Add volumes to singleuser pods
88 c.KubeSpawner.volumes = [
89 {
90 'name': 'volume-{username}{servername}',
91 'persistentVolumeClaim': {
92 'claimName': 'claim-{username}{servername}'
93 }
94 }
95 ]
96 c.KubeSpawner.volume_mounts = [
97 {
98 'mountPath': get_config('singleuser.storage.home_mount_path'),
99 'name': 'volume-{username}{servername}'
100 }
101 ]
102 elif storage_type == 'static':
103 pvc_claim_name = get_config('singleuser.storage.static.pvc-name')
104 c.KubeSpawner.volumes = [{
105 'name': 'home',
106 'persistentVolumeClaim': {
107 'claimName': pvc_claim_name
108 }
109 }]
110
111 c.KubeSpawner.volume_mounts = [{
112 'mountPath': get_config('singleuser.storage.home_mount_path'),
113 'name': 'home',
114 'subPath': get_config('singleuser.storage.static.sub-path')
115 }]
116
117 c.KubeSpawner.volumes.extend(get_config('singleuser.storage.extra-volumes', []))
118 c.KubeSpawner.volume_mounts.extend(get_config('singleuser.storage.extra-volume-mounts', []))
119
120 lifecycle_hooks = get_config('singleuser.lifecycle-hooks')
121 if lifecycle_hooks:
122 c.KubeSpawner.singleuser_lifecycle_hooks = lifecycle_hooks
123
124 init_containers = get_config('singleuser.init-containers')
125 if init_containers:
126 c.KubeSpawner.singleuser_init_containers = init_containers
127
128 # Gives spawned containers access to the API of the hub
129 c.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']
130 c.KubeSpawner.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])
131
132 c.JupyterHub.hub_connect_ip = os.environ['HUB_SERVICE_HOST']
133 c.JupyterHub.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])
134
135 c.KubeSpawner.mem_limit = get_config('singleuser.memory.limit')
136 c.KubeSpawner.mem_guarantee = get_config('singleuser.memory.guarantee')
137 c.KubeSpawner.cpu_limit = get_config('singleuser.cpu.limit')
138 c.KubeSpawner.cpu_guarantee = get_config('singleuser.cpu.guarantee')
139
140 # Allow switching authenticators easily
141 auth_type = get_config('auth.type')
142 email_domain = 'local'
143
144 if auth_type == 'google':
145 c.JupyterHub.authenticator_class = 'oauthenticator.GoogleOAuthenticator'
146 c.GoogleOAuthenticator.client_id = get_config('auth.google.client-id')
147 c.GoogleOAuthenticator.client_secret = get_config('auth.google.client-secret')
148 c.GoogleOAuthenticator.oauth_callback_url = get_config('auth.google.callback-url')
149 c.GoogleOAuthenticator.hosted_domain = get_config('auth.google.hosted-domain')
150 c.GoogleOAuthenticator.login_service = get_config('auth.google.login-service')
151 email_domain = get_config('auth.google.hosted-domain')
152 elif auth_type == 'github':
153 c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'
154 c.GitHubOAuthenticator.oauth_callback_url = get_config('auth.github.callback-url')
155 c.GitHubOAuthenticator.client_id = get_config('auth.github.client-id')
156 c.GitHubOAuthenticator.client_secret = get_config('auth.github.client-secret')
157 elif auth_type == 'cilogon':
158 c.JupyterHub.authenticator_class = 'oauthenticator.CILogonOAuthenticator'
159 c.CILogonOAuthenticator.oauth_callback_url = get_config('auth.cilogon.callback-url')
160 c.CILogonOAuthenticator.client_id = get_config('auth.cilogon.client-id')
161 c.CILogonOAuthenticator.client_secret = get_config('auth.cilogon.client-secret')
162 elif auth_type == 'gitlab':
163 c.JupyterHub.authenticator_class = 'oauthenticator.gitlab.GitLabOAuthenticator'
164 c.GitLabOAuthenticator.oauth_callback_url = get_config('auth.gitlab.callback-url')
165 c.GitLabOAuthenticator.client_id = get_config('auth.gitlab.client-id')
166 c.GitLabOAuthenticator.client_secret = get_config('auth.gitlab.client-secret')
167 elif auth_type == 'mediawiki':
168 c.JupyterHub.authenticator_class = 'oauthenticator.mediawiki.MWOAuthenticator'
169 c.MWOAuthenticator.client_id = get_config('auth.mediawiki.client-id')
170 c.MWOAuthenticator.client_secret = get_config('auth.mediawiki.client-secret')
171 c.MWOAuthenticator.index_url = get_config('auth.mediawiki.index-url')
172 elif auth_type == 'globus':
173 c.JupyterHub.authenticator_class = 'oauthenticator.globus.GlobusOAuthenticator'
174 c.GlobusOAuthenticator.oauth_callback_url = get_config('auth.globus.callback-url')
175 c.GlobusOAuthenticator.client_id = get_config('auth.globus.client-id')
176 c.GlobusOAuthenticator.client_secret = get_config('auth.globus.client-secret')
177 c.GlobusOAuthenticator.identity_provider = get_config('auth.globus.identity-provider', '')
178 elif auth_type == 'hmac':
179 c.JupyterHub.authenticator_class = 'hmacauthenticator.HMACAuthenticator'
180 c.HMACAuthenticator.secret_key = bytes.fromhex(get_config('auth.hmac.secret-key'))
181 elif auth_type == 'dummy':
182 c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
183 c.DummyAuthenticator.password = get_config('auth.dummy.password', None)
184 elif auth_type == 'tmp':
185 c.JupyterHub.authenticator_class = 'tmpauthenticator.TmpAuthenticator'
186 elif auth_type == 'lti':
187 c.JupyterHub.authenticator_class = 'ltiauthenticator.LTIAuthenticator'
188 c.LTIAuthenticator.consumers = get_config('auth.lti.consumers')
189 elif auth_type == 'custom':
190 # full_class_name looks like "myauthenticator.MyAuthenticator".
191 # To create a docker image with this class availabe, you can just have the
192 # following Dockerifle:
193 # FROM jupyterhub/k8s-hub:v0.4
194 # RUN pip3 install myauthenticator
195 full_class_name = get_config('auth.custom.class-name')
196 c.JupyterHub.authenticator_class = full_class_name
197 auth_class_name = full_class_name.rsplit('.', 1)[-1]
198 auth_config = c[auth_class_name]
199 auth_config.update(get_config('auth.custom.config') or {})
200 else:
201 raise ValueError("Unhandled auth type: %r" % auth_type)
202
203 c.Authenticator.enable_auth_state = get_config('auth.state.enabled', False)
204
205 def generate_user_email(spawner):
206 """
207 Used as the EMAIL environment variable
208 """
209 return '{username}@{domain}'.format(
210 username=spawner.user.name, domain=email_domain
211 )
212
213 def generate_user_name(spawner):
214 """
215 Used as GIT_AUTHOR_NAME and GIT_COMMITTER_NAME environment variables
216 """
217 return spawner.user.name
218
219 c.KubeSpawner.environment = {
220 'EMAIL': generate_user_email,
221 # git requires these committer attributes
222 'GIT_AUTHOR_NAME': generate_user_name,
223 'GIT_COMMITTER_NAME': generate_user_name
224 }
225
226 c.KubeSpawner.environment.update(get_config('singleuser.extra-env', {}))
227
228 # Enable admins to access user servers
229 c.JupyterHub.admin_access = get_config('auth.admin.access')
230 c.Authenticator.admin_users = get_config('auth.admin.users', [])
231 c.Authenticator.whitelist = get_config('auth.whitelist.users', [])
232
233 c.JupyterHub.base_url = get_config('hub.base_url')
234
235 c.JupyterHub.services = []
236
237 if get_config('cull.enabled', False):
238 cull_timeout = get_config('cull.timeout')
239 cull_every = get_config('cull.every')
240 cull_cmd = [
241 '/usr/local/bin/cull_idle_servers.py',
242 '--timeout=%s' % cull_timeout,
243 '--cull-every=%s' % cull_every,
244 '--url=http://127.0.0.1:8081' + c.JupyterHub.base_url + 'hub/api'
245 ]
246 if get_config('cull.users'):
247 cull_cmd.append('--cull-users')
248 c.JupyterHub.services.append({
249 'name': 'cull-idle',
250 'admin': True,
251 'command': cull_cmd,
252 })
253
254 for name, service in get_config('hub.services', {}).items():
255 api_token = get_secret('services.token.%s' % name)
256 # jupyterhub.services is a list of dicts, but
257 # in the helm chart it is a dict of dicts for easier merged-config
258 service.setdefault('name', name)
259 if api_token:
260 service['api_token'] = api_token
261 c.JupyterHub.services.append(service)
262
263
264 c.JupyterHub.db_url = get_config('hub.db_url')
265
266 cmd = get_config('singleuser.cmd', None)
267 if cmd:
268 c.Spawner.cmd = cmd
269
270
271 scheduler_strategy = get_config('singleuser.scheduler-strategy', 'spread')
272
273 if scheduler_strategy == 'pack':
274 # FIXME: Support setting affinity directly in KubeSpawner
275 c.KubeSpawner.singleuser_extra_pod_config = {
276 'affinity': {
277 'podAffinity': {
278 'preferredDuringSchedulingIgnoredDuringExecution': [{
279 'weight': 100,
280 'podAffinityTerm': {
281 'labelSelector': {
282 'matchExpressions': [{
283 'key': 'component',
284 'operator': 'In',
285 'values': ['singleuser-server']
286 }]
287 },
288 'topologyKey': 'kubernetes.io/hostname'
289 }
290 }],
291 }
292 }
293 }
294 else:
295 # Set default to {} so subconfigs can easily update it
296 c.KubeSpawner.singleuser_extra_pod_config = {}
297
298 extra_config_path = '/etc/jupyterhub/config/hub.extra-config.py'
299 if os.path.exists(extra_config_path):
300 load_subconfig(extra_config_path)
301
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/images/hub/jupyterhub_config.py b/images/hub/jupyterhub_config.py
--- a/images/hub/jupyterhub_config.py
+++ b/images/hub/jupyterhub_config.py
@@ -72,6 +72,10 @@
c.KubeSpawner.singleuser_uid = get_config('singleuser.uid')
c.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')
+service_account_name = get_config('singleuser.service-account-name', None)
+if service_account_name:
+ c.KubeSpawner.singleuser_service_account = service_account_name
+
c.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')
# Configure dynamically provisioning pvc
storage_type = get_config('singleuser.storage.type')
|
{"golden_diff": "diff --git a/images/hub/jupyterhub_config.py b/images/hub/jupyterhub_config.py\n--- a/images/hub/jupyterhub_config.py\n+++ b/images/hub/jupyterhub_config.py\n@@ -72,6 +72,10 @@\n c.KubeSpawner.singleuser_uid = get_config('singleuser.uid')\n c.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')\n \n+service_account_name = get_config('singleuser.service-account-name', None)\n+if service_account_name:\n+ c.KubeSpawner.singleuser_service_account = service_account_name\n+\n c.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')\n # Configure dynamically provisioning pvc\n storage_type = get_config('singleuser.storage.type')\n", "issue": "allow setting service account from config.ayml\nWe currently disable [service accounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) for all user pods, disallowing kubernetes API access. This is the correct default for us, but we should allow users to easily override it if they need.\n", "before_files": [{"content": "import os\nimport sys\nimport yaml\nfrom tornado.httpclient import AsyncHTTPClient\n\ndef get_config(key, default=None):\n \"\"\"\n Find a config item of a given name & return it\n\n Parses everything as YAML, so lists and dicts are available too\n \"\"\"\n path = os.path.join('/etc/jupyterhub/config', key)\n try:\n with open(path) as f:\n data = yaml.safe_load(f)\n return data\n except FileNotFoundError:\n return default\n\ndef get_secret(key, default=None):\n \"\"\"Get a secret from /etc/jupyterhub/secret\"\"\"\n path = os.path.join('/etc/jupyterhub/secret', key)\n try:\n with open(path) as f:\n return f.read().strip()\n except FileNotFoundError:\n return default\n\n\n# Configure JupyterHub to use the curl backend for making HTTP requests,\n# rather than the pure-python implementations. The default one starts\n# being too slow to make a large number of requests to the proxy API\n# at the rate required.\nAsyncHTTPClient.configure(\"tornado.curl_httpclient.CurlAsyncHTTPClient\")\n\nc.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'\n\n# Connect to a proxy running in a different pod\nc.ConfigurableHTTPProxy.api_url = 'http://{}:{}'.format(os.environ['PROXY_API_SERVICE_HOST'], int(os.environ['PROXY_API_SERVICE_PORT']))\nc.ConfigurableHTTPProxy.should_start = False\n\n# Do not shut down user pods when hub is restarted\nc.JupyterHub.cleanup_servers = False\n\n# Check that the proxy has routes appropriately setup\n# This isn't the best named setting :D\nc.JupyterHub.last_activity_interval = 60\n\n# Max number of servers that can be spawning at any one time\nc.JupyterHub.concurrent_spawn_limit = get_config('hub.concurrent-spawn-limit')\n\nactive_server_limit = get_config('hub.active-server-limit', None)\n\nif active_server_limit is not None:\n c.JupyterHub.active_server_limit = int(active_server_limit)\n\nc.JupyterHub.ip = os.environ['PROXY_PUBLIC_SERVICE_HOST']\nc.JupyterHub.port = int(os.environ['PROXY_PUBLIC_SERVICE_PORT'])\n\n# the hub should listen on all interfaces, so the proxy can access it\nc.JupyterHub.hub_ip = '0.0.0.0'\n\nc.KubeSpawner.namespace = os.environ.get('POD_NAMESPACE', 'default')\n\nc.KubeSpawner.start_timeout = get_config('singleuser.start-timeout')\n\n# Use env var for this, since we want hub to restart when this changes\nc.KubeSpawner.singleuser_image_spec = os.environ['SINGLEUSER_IMAGE']\n\nc.KubeSpawner.singleuser_extra_labels = get_config('singleuser.extra-labels', {})\n\nc.KubeSpawner.singleuser_uid = get_config('singleuser.uid')\nc.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')\n\nc.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')\n# Configure dynamically provisioning pvc\nstorage_type = get_config('singleuser.storage.type')\nif storage_type == 'dynamic':\n c.KubeSpawner.pvc_name_template = 'claim-{username}{servername}'\n c.KubeSpawner.user_storage_pvc_ensure = True\n storage_class = get_config('singleuser.storage.dynamic.storage-class', None)\n if storage_class:\n c.KubeSpawner.user_storage_class = storage_class\n c.KubeSpawner.user_storage_access_modes = ['ReadWriteOnce']\n c.KubeSpawner.user_storage_capacity = get_config('singleuser.storage.capacity')\n\n # Add volumes to singleuser pods\n c.KubeSpawner.volumes = [\n {\n 'name': 'volume-{username}{servername}',\n 'persistentVolumeClaim': {\n 'claimName': 'claim-{username}{servername}'\n }\n }\n ]\n c.KubeSpawner.volume_mounts = [\n {\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'volume-{username}{servername}'\n }\n ]\nelif storage_type == 'static':\n pvc_claim_name = get_config('singleuser.storage.static.pvc-name')\n c.KubeSpawner.volumes = [{\n 'name': 'home',\n 'persistentVolumeClaim': {\n 'claimName': pvc_claim_name\n }\n }]\n\n c.KubeSpawner.volume_mounts = [{\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'home',\n 'subPath': get_config('singleuser.storage.static.sub-path')\n }]\n\nc.KubeSpawner.volumes.extend(get_config('singleuser.storage.extra-volumes', []))\nc.KubeSpawner.volume_mounts.extend(get_config('singleuser.storage.extra-volume-mounts', []))\n\nlifecycle_hooks = get_config('singleuser.lifecycle-hooks')\nif lifecycle_hooks:\n c.KubeSpawner.singleuser_lifecycle_hooks = lifecycle_hooks\n\ninit_containers = get_config('singleuser.init-containers')\nif init_containers:\n c.KubeSpawner.singleuser_init_containers = init_containers\n\n# Gives spawned containers access to the API of the hub\nc.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.KubeSpawner.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.JupyterHub.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.JupyterHub.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.KubeSpawner.mem_limit = get_config('singleuser.memory.limit')\nc.KubeSpawner.mem_guarantee = get_config('singleuser.memory.guarantee')\nc.KubeSpawner.cpu_limit = get_config('singleuser.cpu.limit')\nc.KubeSpawner.cpu_guarantee = get_config('singleuser.cpu.guarantee')\n\n# Allow switching authenticators easily\nauth_type = get_config('auth.type')\nemail_domain = 'local'\n\nif auth_type == 'google':\n c.JupyterHub.authenticator_class = 'oauthenticator.GoogleOAuthenticator'\n c.GoogleOAuthenticator.client_id = get_config('auth.google.client-id')\n c.GoogleOAuthenticator.client_secret = get_config('auth.google.client-secret')\n c.GoogleOAuthenticator.oauth_callback_url = get_config('auth.google.callback-url')\n c.GoogleOAuthenticator.hosted_domain = get_config('auth.google.hosted-domain')\n c.GoogleOAuthenticator.login_service = get_config('auth.google.login-service')\n email_domain = get_config('auth.google.hosted-domain')\nelif auth_type == 'github':\n c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'\n c.GitHubOAuthenticator.oauth_callback_url = get_config('auth.github.callback-url')\n c.GitHubOAuthenticator.client_id = get_config('auth.github.client-id')\n c.GitHubOAuthenticator.client_secret = get_config('auth.github.client-secret')\nelif auth_type == 'cilogon':\n c.JupyterHub.authenticator_class = 'oauthenticator.CILogonOAuthenticator'\n c.CILogonOAuthenticator.oauth_callback_url = get_config('auth.cilogon.callback-url')\n c.CILogonOAuthenticator.client_id = get_config('auth.cilogon.client-id')\n c.CILogonOAuthenticator.client_secret = get_config('auth.cilogon.client-secret')\nelif auth_type == 'gitlab':\n c.JupyterHub.authenticator_class = 'oauthenticator.gitlab.GitLabOAuthenticator'\n c.GitLabOAuthenticator.oauth_callback_url = get_config('auth.gitlab.callback-url')\n c.GitLabOAuthenticator.client_id = get_config('auth.gitlab.client-id')\n c.GitLabOAuthenticator.client_secret = get_config('auth.gitlab.client-secret')\nelif auth_type == 'mediawiki':\n c.JupyterHub.authenticator_class = 'oauthenticator.mediawiki.MWOAuthenticator'\n c.MWOAuthenticator.client_id = get_config('auth.mediawiki.client-id')\n c.MWOAuthenticator.client_secret = get_config('auth.mediawiki.client-secret')\n c.MWOAuthenticator.index_url = get_config('auth.mediawiki.index-url')\nelif auth_type == 'globus':\n c.JupyterHub.authenticator_class = 'oauthenticator.globus.GlobusOAuthenticator'\n c.GlobusOAuthenticator.oauth_callback_url = get_config('auth.globus.callback-url')\n c.GlobusOAuthenticator.client_id = get_config('auth.globus.client-id')\n c.GlobusOAuthenticator.client_secret = get_config('auth.globus.client-secret')\n c.GlobusOAuthenticator.identity_provider = get_config('auth.globus.identity-provider', '')\nelif auth_type == 'hmac':\n c.JupyterHub.authenticator_class = 'hmacauthenticator.HMACAuthenticator'\n c.HMACAuthenticator.secret_key = bytes.fromhex(get_config('auth.hmac.secret-key'))\nelif auth_type == 'dummy':\n c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'\n c.DummyAuthenticator.password = get_config('auth.dummy.password', None)\nelif auth_type == 'tmp':\n c.JupyterHub.authenticator_class = 'tmpauthenticator.TmpAuthenticator'\nelif auth_type == 'lti':\n c.JupyterHub.authenticator_class = 'ltiauthenticator.LTIAuthenticator'\n c.LTIAuthenticator.consumers = get_config('auth.lti.consumers')\nelif auth_type == 'custom':\n # full_class_name looks like \"myauthenticator.MyAuthenticator\".\n # To create a docker image with this class availabe, you can just have the\n # following Dockerifle:\n # FROM jupyterhub/k8s-hub:v0.4\n # RUN pip3 install myauthenticator\n full_class_name = get_config('auth.custom.class-name')\n c.JupyterHub.authenticator_class = full_class_name\n auth_class_name = full_class_name.rsplit('.', 1)[-1]\n auth_config = c[auth_class_name]\n auth_config.update(get_config('auth.custom.config') or {})\nelse:\n raise ValueError(\"Unhandled auth type: %r\" % auth_type)\n\nc.Authenticator.enable_auth_state = get_config('auth.state.enabled', False)\n\ndef generate_user_email(spawner):\n \"\"\"\n Used as the EMAIL environment variable\n \"\"\"\n return '{username}@{domain}'.format(\n username=spawner.user.name, domain=email_domain\n )\n\ndef generate_user_name(spawner):\n \"\"\"\n Used as GIT_AUTHOR_NAME and GIT_COMMITTER_NAME environment variables\n \"\"\"\n return spawner.user.name\n\nc.KubeSpawner.environment = {\n 'EMAIL': generate_user_email,\n # git requires these committer attributes\n 'GIT_AUTHOR_NAME': generate_user_name,\n 'GIT_COMMITTER_NAME': generate_user_name\n}\n\nc.KubeSpawner.environment.update(get_config('singleuser.extra-env', {}))\n\n# Enable admins to access user servers\nc.JupyterHub.admin_access = get_config('auth.admin.access')\nc.Authenticator.admin_users = get_config('auth.admin.users', [])\nc.Authenticator.whitelist = get_config('auth.whitelist.users', [])\n\nc.JupyterHub.base_url = get_config('hub.base_url')\n\nc.JupyterHub.services = []\n\nif get_config('cull.enabled', False):\n cull_timeout = get_config('cull.timeout')\n cull_every = get_config('cull.every')\n cull_cmd = [\n '/usr/local/bin/cull_idle_servers.py',\n '--timeout=%s' % cull_timeout,\n '--cull-every=%s' % cull_every,\n '--url=http://127.0.0.1:8081' + c.JupyterHub.base_url + 'hub/api'\n ]\n if get_config('cull.users'):\n cull_cmd.append('--cull-users')\n c.JupyterHub.services.append({\n 'name': 'cull-idle',\n 'admin': True,\n 'command': cull_cmd,\n })\n\nfor name, service in get_config('hub.services', {}).items():\n api_token = get_secret('services.token.%s' % name)\n # jupyterhub.services is a list of dicts, but\n # in the helm chart it is a dict of dicts for easier merged-config\n service.setdefault('name', name)\n if api_token:\n service['api_token'] = api_token\n c.JupyterHub.services.append(service)\n\n\nc.JupyterHub.db_url = get_config('hub.db_url')\n\ncmd = get_config('singleuser.cmd', None)\nif cmd:\n c.Spawner.cmd = cmd\n\n\nscheduler_strategy = get_config('singleuser.scheduler-strategy', 'spread')\n\nif scheduler_strategy == 'pack':\n # FIXME: Support setting affinity directly in KubeSpawner\n c.KubeSpawner.singleuser_extra_pod_config = {\n 'affinity': {\n 'podAffinity': {\n 'preferredDuringSchedulingIgnoredDuringExecution': [{\n 'weight': 100,\n 'podAffinityTerm': {\n 'labelSelector': {\n 'matchExpressions': [{\n 'key': 'component',\n 'operator': 'In',\n 'values': ['singleuser-server']\n }]\n },\n 'topologyKey': 'kubernetes.io/hostname'\n }\n }],\n }\n }\n }\nelse:\n # Set default to {} so subconfigs can easily update it\n c.KubeSpawner.singleuser_extra_pod_config = {}\n\nextra_config_path = '/etc/jupyterhub/config/hub.extra-config.py'\nif os.path.exists(extra_config_path):\n load_subconfig(extra_config_path)\n", "path": "images/hub/jupyterhub_config.py"}], "after_files": [{"content": "import os\nimport sys\nimport yaml\nfrom tornado.httpclient import AsyncHTTPClient\n\ndef get_config(key, default=None):\n \"\"\"\n Find a config item of a given name & return it\n\n Parses everything as YAML, so lists and dicts are available too\n \"\"\"\n path = os.path.join('/etc/jupyterhub/config', key)\n try:\n with open(path) as f:\n data = yaml.safe_load(f)\n return data\n except FileNotFoundError:\n return default\n\ndef get_secret(key, default=None):\n \"\"\"Get a secret from /etc/jupyterhub/secret\"\"\"\n path = os.path.join('/etc/jupyterhub/secret', key)\n try:\n with open(path) as f:\n return f.read().strip()\n except FileNotFoundError:\n return default\n\n\n# Configure JupyterHub to use the curl backend for making HTTP requests,\n# rather than the pure-python implementations. The default one starts\n# being too slow to make a large number of requests to the proxy API\n# at the rate required.\nAsyncHTTPClient.configure(\"tornado.curl_httpclient.CurlAsyncHTTPClient\")\n\nc.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'\n\n# Connect to a proxy running in a different pod\nc.ConfigurableHTTPProxy.api_url = 'http://{}:{}'.format(os.environ['PROXY_API_SERVICE_HOST'], int(os.environ['PROXY_API_SERVICE_PORT']))\nc.ConfigurableHTTPProxy.should_start = False\n\n# Do not shut down user pods when hub is restarted\nc.JupyterHub.cleanup_servers = False\n\n# Check that the proxy has routes appropriately setup\n# This isn't the best named setting :D\nc.JupyterHub.last_activity_interval = 60\n\n# Max number of servers that can be spawning at any one time\nc.JupyterHub.concurrent_spawn_limit = get_config('hub.concurrent-spawn-limit')\n\nactive_server_limit = get_config('hub.active-server-limit', None)\n\nif active_server_limit is not None:\n c.JupyterHub.active_server_limit = int(active_server_limit)\n\nc.JupyterHub.ip = os.environ['PROXY_PUBLIC_SERVICE_HOST']\nc.JupyterHub.port = int(os.environ['PROXY_PUBLIC_SERVICE_PORT'])\n\n# the hub should listen on all interfaces, so the proxy can access it\nc.JupyterHub.hub_ip = '0.0.0.0'\n\nc.KubeSpawner.namespace = os.environ.get('POD_NAMESPACE', 'default')\n\nc.KubeSpawner.start_timeout = get_config('singleuser.start-timeout')\n\n# Use env var for this, since we want hub to restart when this changes\nc.KubeSpawner.singleuser_image_spec = os.environ['SINGLEUSER_IMAGE']\n\nc.KubeSpawner.singleuser_extra_labels = get_config('singleuser.extra-labels', {})\n\nc.KubeSpawner.singleuser_uid = get_config('singleuser.uid')\nc.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')\n\nservice_account_name = get_config('singleuser.service-account-name', None)\nif service_account_name:\n c.KubeSpawner.singleuser_service_account = service_account_name\n\nc.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')\n# Configure dynamically provisioning pvc\nstorage_type = get_config('singleuser.storage.type')\nif storage_type == 'dynamic':\n c.KubeSpawner.pvc_name_template = 'claim-{username}{servername}'\n c.KubeSpawner.user_storage_pvc_ensure = True\n storage_class = get_config('singleuser.storage.dynamic.storage-class', None)\n if storage_class:\n c.KubeSpawner.user_storage_class = storage_class\n c.KubeSpawner.user_storage_access_modes = ['ReadWriteOnce']\n c.KubeSpawner.user_storage_capacity = get_config('singleuser.storage.capacity')\n\n # Add volumes to singleuser pods\n c.KubeSpawner.volumes = [\n {\n 'name': 'volume-{username}{servername}',\n 'persistentVolumeClaim': {\n 'claimName': 'claim-{username}{servername}'\n }\n }\n ]\n c.KubeSpawner.volume_mounts = [\n {\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'volume-{username}{servername}'\n }\n ]\nelif storage_type == 'static':\n pvc_claim_name = get_config('singleuser.storage.static.pvc-name')\n c.KubeSpawner.volumes = [{\n 'name': 'home',\n 'persistentVolumeClaim': {\n 'claimName': pvc_claim_name\n }\n }]\n\n c.KubeSpawner.volume_mounts = [{\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'home',\n 'subPath': get_config('singleuser.storage.static.sub-path')\n }]\n\nc.KubeSpawner.volumes.extend(get_config('singleuser.storage.extra-volumes', []))\nc.KubeSpawner.volume_mounts.extend(get_config('singleuser.storage.extra-volume-mounts', []))\n\nlifecycle_hooks = get_config('singleuser.lifecycle-hooks')\nif lifecycle_hooks:\n c.KubeSpawner.singleuser_lifecycle_hooks = lifecycle_hooks\n\ninit_containers = get_config('singleuser.init-containers')\nif init_containers:\n c.KubeSpawner.singleuser_init_containers = init_containers\n\n# Gives spawned containers access to the API of the hub\nc.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.KubeSpawner.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.JupyterHub.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.JupyterHub.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.KubeSpawner.mem_limit = get_config('singleuser.memory.limit')\nc.KubeSpawner.mem_guarantee = get_config('singleuser.memory.guarantee')\nc.KubeSpawner.cpu_limit = get_config('singleuser.cpu.limit')\nc.KubeSpawner.cpu_guarantee = get_config('singleuser.cpu.guarantee')\n\n# Allow switching authenticators easily\nauth_type = get_config('auth.type')\nemail_domain = 'local'\n\nif auth_type == 'google':\n c.JupyterHub.authenticator_class = 'oauthenticator.GoogleOAuthenticator'\n c.GoogleOAuthenticator.client_id = get_config('auth.google.client-id')\n c.GoogleOAuthenticator.client_secret = get_config('auth.google.client-secret')\n c.GoogleOAuthenticator.oauth_callback_url = get_config('auth.google.callback-url')\n c.GoogleOAuthenticator.hosted_domain = get_config('auth.google.hosted-domain')\n c.GoogleOAuthenticator.login_service = get_config('auth.google.login-service')\n email_domain = get_config('auth.google.hosted-domain')\nelif auth_type == 'github':\n c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'\n c.GitHubOAuthenticator.oauth_callback_url = get_config('auth.github.callback-url')\n c.GitHubOAuthenticator.client_id = get_config('auth.github.client-id')\n c.GitHubOAuthenticator.client_secret = get_config('auth.github.client-secret')\nelif auth_type == 'cilogon':\n c.JupyterHub.authenticator_class = 'oauthenticator.CILogonOAuthenticator'\n c.CILogonOAuthenticator.oauth_callback_url = get_config('auth.cilogon.callback-url')\n c.CILogonOAuthenticator.client_id = get_config('auth.cilogon.client-id')\n c.CILogonOAuthenticator.client_secret = get_config('auth.cilogon.client-secret')\nelif auth_type == 'gitlab':\n c.JupyterHub.authenticator_class = 'oauthenticator.gitlab.GitLabOAuthenticator'\n c.GitLabOAuthenticator.oauth_callback_url = get_config('auth.gitlab.callback-url')\n c.GitLabOAuthenticator.client_id = get_config('auth.gitlab.client-id')\n c.GitLabOAuthenticator.client_secret = get_config('auth.gitlab.client-secret')\nelif auth_type == 'mediawiki':\n c.JupyterHub.authenticator_class = 'oauthenticator.mediawiki.MWOAuthenticator'\n c.MWOAuthenticator.client_id = get_config('auth.mediawiki.client-id')\n c.MWOAuthenticator.client_secret = get_config('auth.mediawiki.client-secret')\n c.MWOAuthenticator.index_url = get_config('auth.mediawiki.index-url')\nelif auth_type == 'globus':\n c.JupyterHub.authenticator_class = 'oauthenticator.globus.GlobusOAuthenticator'\n c.GlobusOAuthenticator.oauth_callback_url = get_config('auth.globus.callback-url')\n c.GlobusOAuthenticator.client_id = get_config('auth.globus.client-id')\n c.GlobusOAuthenticator.client_secret = get_config('auth.globus.client-secret')\n c.GlobusOAuthenticator.identity_provider = get_config('auth.globus.identity-provider', '')\nelif auth_type == 'hmac':\n c.JupyterHub.authenticator_class = 'hmacauthenticator.HMACAuthenticator'\n c.HMACAuthenticator.secret_key = bytes.fromhex(get_config('auth.hmac.secret-key'))\nelif auth_type == 'dummy':\n c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'\n c.DummyAuthenticator.password = get_config('auth.dummy.password', None)\nelif auth_type == 'tmp':\n c.JupyterHub.authenticator_class = 'tmpauthenticator.TmpAuthenticator'\nelif auth_type == 'lti':\n c.JupyterHub.authenticator_class = 'ltiauthenticator.LTIAuthenticator'\n c.LTIAuthenticator.consumers = get_config('auth.lti.consumers')\nelif auth_type == 'custom':\n # full_class_name looks like \"myauthenticator.MyAuthenticator\".\n # To create a docker image with this class availabe, you can just have the\n # following Dockerifle:\n # FROM jupyterhub/k8s-hub:v0.4\n # RUN pip3 install myauthenticator\n full_class_name = get_config('auth.custom.class-name')\n c.JupyterHub.authenticator_class = full_class_name\n auth_class_name = full_class_name.rsplit('.', 1)[-1]\n auth_config = c[auth_class_name]\n auth_config.update(get_config('auth.custom.config') or {})\nelse:\n raise ValueError(\"Unhandled auth type: %r\" % auth_type)\n\nc.Authenticator.enable_auth_state = get_config('auth.state.enabled', False)\n\ndef generate_user_email(spawner):\n \"\"\"\n Used as the EMAIL environment variable\n \"\"\"\n return '{username}@{domain}'.format(\n username=spawner.user.name, domain=email_domain\n )\n\ndef generate_user_name(spawner):\n \"\"\"\n Used as GIT_AUTHOR_NAME and GIT_COMMITTER_NAME environment variables\n \"\"\"\n return spawner.user.name\n\nc.KubeSpawner.environment = {\n 'EMAIL': generate_user_email,\n # git requires these committer attributes\n 'GIT_AUTHOR_NAME': generate_user_name,\n 'GIT_COMMITTER_NAME': generate_user_name\n}\n\nc.KubeSpawner.environment.update(get_config('singleuser.extra-env', {}))\n\n# Enable admins to access user servers\nc.JupyterHub.admin_access = get_config('auth.admin.access')\nc.Authenticator.admin_users = get_config('auth.admin.users', [])\nc.Authenticator.whitelist = get_config('auth.whitelist.users', [])\n\nc.JupyterHub.base_url = get_config('hub.base_url')\n\nc.JupyterHub.services = []\n\nif get_config('cull.enabled', False):\n cull_timeout = get_config('cull.timeout')\n cull_every = get_config('cull.every')\n cull_cmd = [\n '/usr/local/bin/cull_idle_servers.py',\n '--timeout=%s' % cull_timeout,\n '--cull-every=%s' % cull_every,\n '--url=http://127.0.0.1:8081' + c.JupyterHub.base_url + 'hub/api'\n ]\n if get_config('cull.users'):\n cull_cmd.append('--cull-users')\n c.JupyterHub.services.append({\n 'name': 'cull-idle',\n 'admin': True,\n 'command': cull_cmd,\n })\n\nfor name, service in get_config('hub.services', {}).items():\n api_token = get_secret('services.token.%s' % name)\n # jupyterhub.services is a list of dicts, but\n # in the helm chart it is a dict of dicts for easier merged-config\n service.setdefault('name', name)\n if api_token:\n service['api_token'] = api_token\n c.JupyterHub.services.append(service)\n\n\nc.JupyterHub.db_url = get_config('hub.db_url')\n\ncmd = get_config('singleuser.cmd', None)\nif cmd:\n c.Spawner.cmd = cmd\n\n\nscheduler_strategy = get_config('singleuser.scheduler-strategy', 'spread')\n\nif scheduler_strategy == 'pack':\n # FIXME: Support setting affinity directly in KubeSpawner\n c.KubeSpawner.singleuser_extra_pod_config = {\n 'affinity': {\n 'podAffinity': {\n 'preferredDuringSchedulingIgnoredDuringExecution': [{\n 'weight': 100,\n 'podAffinityTerm': {\n 'labelSelector': {\n 'matchExpressions': [{\n 'key': 'component',\n 'operator': 'In',\n 'values': ['singleuser-server']\n }]\n },\n 'topologyKey': 'kubernetes.io/hostname'\n }\n }],\n }\n }\n }\nelse:\n # Set default to {} so subconfigs can easily update it\n c.KubeSpawner.singleuser_extra_pod_config = {}\n\nextra_config_path = '/etc/jupyterhub/config/hub.extra-config.py'\nif os.path.exists(extra_config_path):\n load_subconfig(extra_config_path)\n", "path": "images/hub/jupyterhub_config.py"}]}
| 4,049 | 168 |
gh_patches_debug_38136
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-280
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace asserts in the code by if condition then raise
It would be better to replace the code as
https://github.com/pytorch/ignite/blob/0f1905e87b95779cf6544a5fe29b46519ad9d4e0/ignite/metrics/epoch_metric.py#L33-L34
by
```python
if y_pred.ndimension() < 1 or y_pred.ndimension() > 2:
raise TypeError("Predictions should be of shape (batch_size, n_classes)"
# etc
```
Usage of assert to change in `EpochMetric`, `EarlyStopping` and `Loss`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/epoch_metric.py`
Content:
```
1 import torch
2
3 from ignite.metrics.metric import Metric
4
5
6 class EpochMetric(Metric):
7 """Class for metrics that should be computed on the entire output history of a model.
8 Model's output and targets are restricted to be of shape `(batch_size, n_classes)`. Output
9 datatype should be `float32`. Target datatype should be `long`.
10
11 - `update` must receive output of the form `(y_pred, y)`.
12
13 If target shape is `(batch_size, n_classes)` and `n_classes > 1` than it should be binary: e.g. `[[0, 1, 0, 1], ]`
14
15 Args:
16 compute_fn (callable): a callable with the signature (`torch.tensor`, `torch.tensor`) takes as the input
17 `predictions` and `targets` and returns a scalar.
18
19 """
20
21 def __init__(self, compute_fn, output_transform=lambda x: x):
22 assert callable(compute_fn), "Argument compute_fn should be callable"
23 super(EpochMetric, self).__init__(output_transform=output_transform)
24 self.compute_fn = compute_fn
25
26 def reset(self):
27 self._predictions = torch.tensor([], dtype=torch.float32)
28 self._targets = torch.tensor([], dtype=torch.long)
29
30 def update(self, output):
31 y_pred, y = output
32
33 assert 1 <= y_pred.ndimension() <= 2, "Predictions should be of shape (batch_size, n_classes)"
34 assert 1 <= y.ndimension() <= 2, "Targets should be of shape (batch_size, n_classes)"
35
36 if y.ndimension() == 2:
37 assert torch.equal(y ** 2, y), 'Targets should be binary (0 or 1)'
38
39 if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:
40 y_pred = y_pred.squeeze(dim=-1)
41
42 if y.ndimension() == 2 and y.shape[1] == 1:
43 y = y.squeeze(dim=-1)
44
45 y_pred = y_pred.type_as(self._predictions)
46 y = y.type_as(self._targets)
47
48 self._predictions = torch.cat([self._predictions, y_pred], dim=0)
49 self._targets = torch.cat([self._targets, y], dim=0)
50
51 # Check once the signature and execution of compute_fn
52 if self._predictions.shape == y_pred.shape:
53 try:
54 self.compute_fn(self._predictions, self._targets)
55 except Exception as e:
56 raise RuntimeError("Problem with `compute_fn`:\n {}".format(e))
57
58 def compute(self):
59 return self.compute_fn(self._predictions, self._targets)
60
```
Path: `ignite/handlers/early_stopping.py`
Content:
```
1 import logging
2
3 from ignite.engine import Engine
4
5
6 class EarlyStopping(object):
7 """EarlyStopping handler can be used to stop the training if no improvement after a given number of events
8
9 Args:
10 patience (int):
11 Number of events to wait if no improvement and then stop the training
12 score_function (Callable):
13 It should be a function taking a single argument, an `ignite.engine.Engine` object,
14 and return a score `float`. An improvement is considered if the score is higher.
15 trainer (Engine):
16 trainer engine to stop the run if no improvement
17
18 Examples:
19
20 .. code-block:: python
21
22 from ignite.engine import Engine, Events
23 from ignite.handlers import EarlyStopping
24
25 def score_function(engine):
26 val_loss = engine.state.metrics['nll']
27 return -val_loss
28
29 handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)
30 # Note: the handler is attached to an *Evaluator* (runs one epoch on validation dataset)
31 evaluator.add_event_handler(Events.COMPLETED, handler)
32
33 """
34 def __init__(self, patience, score_function, trainer):
35 assert callable(score_function), "Argument score_function should be a function"
36 assert patience > 0, "Argument patience should be positive"
37 assert isinstance(trainer, Engine), "Argument trainer should be an instance of Engine"
38 self.score_function = score_function
39 self.patience = patience
40 self.trainer = trainer
41 self.counter = 0
42 self.best_score = None
43 self._logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
44 self._logger.addHandler(logging.NullHandler())
45
46 def __call__(self, engine):
47 score = self.score_function(engine)
48
49 if self.best_score is None:
50 self.best_score = score
51 elif score < self.best_score:
52 self.counter += 1
53 self._logger.debug("EarlyStopping: %i / %i" % (self.counter, self.patience))
54 if self.counter >= self.patience:
55 self._logger.info("EarlyStopping: Stop training")
56 self.trainer.terminate()
57 else:
58 self.best_score = score
59 self.counter = 0
60
```
Path: `ignite/metrics/loss.py`
Content:
```
1 from __future__ import division
2
3 from ignite.exceptions import NotComputableError
4 from ignite.metrics.metric import Metric
5
6
7 class Loss(Metric):
8 """
9 Calculates the average loss according to the passed loss_fn.
10
11 Args:
12 loss_fn (callable): a callable taking a prediction tensor, a target
13 tensor, optionally other arguments, and returns the average loss
14 over all observations in the batch.
15 output_transform (callable): a callable that is used to transform the
16 :class:`ignite.engine.Engine`'s `process_function`'s output into the
17 form expected by the metric.
18 This can be useful if, for example, you have a multi-output model and
19 you want to compute the metric with respect to one of the outputs.
20 The output is is expected to be a tuple (prediction, target) or
21 (prediction, target, kwargs) where kwargs is a dictionary of extra
22 keywords arguments.
23
24 """
25
26 def __init__(self, loss_fn, output_transform=lambda x: x):
27 super(Loss, self).__init__(output_transform)
28 self._loss_fn = loss_fn
29
30 def reset(self):
31 self._sum = 0
32 self._num_examples = 0
33
34 def update(self, output):
35 if len(output) == 2:
36 y_pred, y = output
37 kwargs = {}
38 else:
39 y_pred, y, kwargs = output
40 average_loss = self._loss_fn(y_pred, y, **kwargs)
41 assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'
42 self._sum += average_loss.item() * y.shape[0]
43 self._num_examples += y.shape[0]
44
45 def compute(self):
46 if self._num_examples == 0:
47 raise NotComputableError(
48 'Loss must have at least one example before it can be computed')
49 return self._sum / self._num_examples
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/handlers/early_stopping.py b/ignite/handlers/early_stopping.py
--- a/ignite/handlers/early_stopping.py
+++ b/ignite/handlers/early_stopping.py
@@ -32,9 +32,16 @@
"""
def __init__(self, patience, score_function, trainer):
- assert callable(score_function), "Argument score_function should be a function"
- assert patience > 0, "Argument patience should be positive"
- assert isinstance(trainer, Engine), "Argument trainer should be an instance of Engine"
+
+ if not callable(score_function):
+ raise TypeError("Argument score_function should be a function")
+
+ if patience < 1:
+ raise ValueError("Argument patience should be positive integer")
+
+ if not isinstance(trainer, Engine):
+ raise TypeError("Argument trainer should be an instance of Engine")
+
self.score_function = score_function
self.patience = patience
self.trainer = trainer
diff --git a/ignite/metrics/epoch_metric.py b/ignite/metrics/epoch_metric.py
--- a/ignite/metrics/epoch_metric.py
+++ b/ignite/metrics/epoch_metric.py
@@ -19,7 +19,10 @@
"""
def __init__(self, compute_fn, output_transform=lambda x: x):
- assert callable(compute_fn), "Argument compute_fn should be callable"
+
+ if not callable(compute_fn):
+ raise TypeError("Argument compute_fn should be callable")
+
super(EpochMetric, self).__init__(output_transform=output_transform)
self.compute_fn = compute_fn
@@ -30,11 +33,15 @@
def update(self, output):
y_pred, y = output
- assert 1 <= y_pred.ndimension() <= 2, "Predictions should be of shape (batch_size, n_classes)"
- assert 1 <= y.ndimension() <= 2, "Targets should be of shape (batch_size, n_classes)"
+ if y_pred.ndimension() not in (1, 2):
+ raise ValueError("Predictions should be of shape (batch_size, n_classes) or (batch_size, )")
+
+ if y.ndimension() not in (1, 2):
+ raise ValueError("Targets should be of shape (batch_size, n_classes) or (batch_size, )")
if y.ndimension() == 2:
- assert torch.equal(y ** 2, y), 'Targets should be binary (0 or 1)'
+ if not torch.equal(y ** 2, y):
+ raise ValueError('Targets should be binary (0 or 1)')
if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:
y_pred = y_pred.squeeze(dim=-1)
diff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py
--- a/ignite/metrics/loss.py
+++ b/ignite/metrics/loss.py
@@ -38,7 +38,10 @@
else:
y_pred, y, kwargs = output
average_loss = self._loss_fn(y_pred, y, **kwargs)
- assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'
+
+ if len(average_loss.shape) != 0:
+ raise ValueError('loss_fn did not return the average loss')
+
self._sum += average_loss.item() * y.shape[0]
self._num_examples += y.shape[0]
|
{"golden_diff": "diff --git a/ignite/handlers/early_stopping.py b/ignite/handlers/early_stopping.py\n--- a/ignite/handlers/early_stopping.py\n+++ b/ignite/handlers/early_stopping.py\n@@ -32,9 +32,16 @@\n \n \"\"\"\n def __init__(self, patience, score_function, trainer):\n- assert callable(score_function), \"Argument score_function should be a function\"\n- assert patience > 0, \"Argument patience should be positive\"\n- assert isinstance(trainer, Engine), \"Argument trainer should be an instance of Engine\"\n+\n+ if not callable(score_function):\n+ raise TypeError(\"Argument score_function should be a function\")\n+\n+ if patience < 1:\n+ raise ValueError(\"Argument patience should be positive integer\")\n+\n+ if not isinstance(trainer, Engine):\n+ raise TypeError(\"Argument trainer should be an instance of Engine\")\n+\n self.score_function = score_function\n self.patience = patience\n self.trainer = trainer\ndiff --git a/ignite/metrics/epoch_metric.py b/ignite/metrics/epoch_metric.py\n--- a/ignite/metrics/epoch_metric.py\n+++ b/ignite/metrics/epoch_metric.py\n@@ -19,7 +19,10 @@\n \"\"\"\n \n def __init__(self, compute_fn, output_transform=lambda x: x):\n- assert callable(compute_fn), \"Argument compute_fn should be callable\"\n+\n+ if not callable(compute_fn):\n+ raise TypeError(\"Argument compute_fn should be callable\")\n+\n super(EpochMetric, self).__init__(output_transform=output_transform)\n self.compute_fn = compute_fn\n \n@@ -30,11 +33,15 @@\n def update(self, output):\n y_pred, y = output\n \n- assert 1 <= y_pred.ndimension() <= 2, \"Predictions should be of shape (batch_size, n_classes)\"\n- assert 1 <= y.ndimension() <= 2, \"Targets should be of shape (batch_size, n_classes)\"\n+ if y_pred.ndimension() not in (1, 2):\n+ raise ValueError(\"Predictions should be of shape (batch_size, n_classes) or (batch_size, )\")\n+\n+ if y.ndimension() not in (1, 2):\n+ raise ValueError(\"Targets should be of shape (batch_size, n_classes) or (batch_size, )\")\n \n if y.ndimension() == 2:\n- assert torch.equal(y ** 2, y), 'Targets should be binary (0 or 1)'\n+ if not torch.equal(y ** 2, y):\n+ raise ValueError('Targets should be binary (0 or 1)')\n \n if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:\n y_pred = y_pred.squeeze(dim=-1)\ndiff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py\n--- a/ignite/metrics/loss.py\n+++ b/ignite/metrics/loss.py\n@@ -38,7 +38,10 @@\n else:\n y_pred, y, kwargs = output\n average_loss = self._loss_fn(y_pred, y, **kwargs)\n- assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'\n+\n+ if len(average_loss.shape) != 0:\n+ raise ValueError('loss_fn did not return the average loss')\n+\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n", "issue": "Replace asserts in the code by if condition then raise \nIt would be better to replace the code as \r\n\r\nhttps://github.com/pytorch/ignite/blob/0f1905e87b95779cf6544a5fe29b46519ad9d4e0/ignite/metrics/epoch_metric.py#L33-L34\r\n\r\nby \r\n```python\r\nif y_pred.ndimension() < 1 or y_pred.ndimension() > 2:\r\n raise TypeError(\"Predictions should be of shape (batch_size, n_classes)\"\r\n# etc\r\n```\r\n\r\nUsage of assert to change in `EpochMetric`, `EarlyStopping` and `Loss`\r\n\n", "before_files": [{"content": "import torch\n\nfrom ignite.metrics.metric import Metric\n\n\nclass EpochMetric(Metric):\n \"\"\"Class for metrics that should be computed on the entire output history of a model.\n Model's output and targets are restricted to be of shape `(batch_size, n_classes)`. Output\n datatype should be `float32`. Target datatype should be `long`.\n\n - `update` must receive output of the form `(y_pred, y)`.\n\n If target shape is `(batch_size, n_classes)` and `n_classes > 1` than it should be binary: e.g. `[[0, 1, 0, 1], ]`\n\n Args:\n compute_fn (callable): a callable with the signature (`torch.tensor`, `torch.tensor`) takes as the input\n `predictions` and `targets` and returns a scalar.\n\n \"\"\"\n\n def __init__(self, compute_fn, output_transform=lambda x: x):\n assert callable(compute_fn), \"Argument compute_fn should be callable\"\n super(EpochMetric, self).__init__(output_transform=output_transform)\n self.compute_fn = compute_fn\n\n def reset(self):\n self._predictions = torch.tensor([], dtype=torch.float32)\n self._targets = torch.tensor([], dtype=torch.long)\n\n def update(self, output):\n y_pred, y = output\n\n assert 1 <= y_pred.ndimension() <= 2, \"Predictions should be of shape (batch_size, n_classes)\"\n assert 1 <= y.ndimension() <= 2, \"Targets should be of shape (batch_size, n_classes)\"\n\n if y.ndimension() == 2:\n assert torch.equal(y ** 2, y), 'Targets should be binary (0 or 1)'\n\n if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:\n y_pred = y_pred.squeeze(dim=-1)\n\n if y.ndimension() == 2 and y.shape[1] == 1:\n y = y.squeeze(dim=-1)\n\n y_pred = y_pred.type_as(self._predictions)\n y = y.type_as(self._targets)\n\n self._predictions = torch.cat([self._predictions, y_pred], dim=0)\n self._targets = torch.cat([self._targets, y], dim=0)\n\n # Check once the signature and execution of compute_fn\n if self._predictions.shape == y_pred.shape:\n try:\n self.compute_fn(self._predictions, self._targets)\n except Exception as e:\n raise RuntimeError(\"Problem with `compute_fn`:\\n {}\".format(e))\n\n def compute(self):\n return self.compute_fn(self._predictions, self._targets)\n", "path": "ignite/metrics/epoch_metric.py"}, {"content": "import logging\n\nfrom ignite.engine import Engine\n\n\nclass EarlyStopping(object):\n \"\"\"EarlyStopping handler can be used to stop the training if no improvement after a given number of events\n\n Args:\n patience (int):\n Number of events to wait if no improvement and then stop the training\n score_function (Callable):\n It should be a function taking a single argument, an `ignite.engine.Engine` object,\n and return a score `float`. An improvement is considered if the score is higher.\n trainer (Engine):\n trainer engine to stop the run if no improvement\n\n Examples:\n\n .. code-block:: python\n\n from ignite.engine import Engine, Events\n from ignite.handlers import EarlyStopping\n\n def score_function(engine):\n val_loss = engine.state.metrics['nll']\n return -val_loss\n\n handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)\n # Note: the handler is attached to an *Evaluator* (runs one epoch on validation dataset)\n evaluator.add_event_handler(Events.COMPLETED, handler)\n\n \"\"\"\n def __init__(self, patience, score_function, trainer):\n assert callable(score_function), \"Argument score_function should be a function\"\n assert patience > 0, \"Argument patience should be positive\"\n assert isinstance(trainer, Engine), \"Argument trainer should be an instance of Engine\"\n self.score_function = score_function\n self.patience = patience\n self.trainer = trainer\n self.counter = 0\n self.best_score = None\n self._logger = logging.getLogger(__name__ + \".\" + self.__class__.__name__)\n self._logger.addHandler(logging.NullHandler())\n\n def __call__(self, engine):\n score = self.score_function(engine)\n\n if self.best_score is None:\n self.best_score = score\n elif score < self.best_score:\n self.counter += 1\n self._logger.debug(\"EarlyStopping: %i / %i\" % (self.counter, self.patience))\n if self.counter >= self.patience:\n self._logger.info(\"EarlyStopping: Stop training\")\n self.trainer.terminate()\n else:\n self.best_score = score\n self.counter = 0\n", "path": "ignite/handlers/early_stopping.py"}, {"content": "from __future__ import division\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric\n\n\nclass Loss(Metric):\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n\n Args:\n loss_fn (callable): a callable taking a prediction tensor, a target\n tensor, optionally other arguments, and returns the average loss\n over all observations in the batch.\n output_transform (callable): a callable that is used to transform the\n :class:`ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric.\n This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n The output is is expected to be a tuple (prediction, target) or\n (prediction, target, kwargs) where kwargs is a dictionary of extra\n keywords arguments.\n\n \"\"\"\n\n def __init__(self, loss_fn, output_transform=lambda x: x):\n super(Loss, self).__init__(output_transform)\n self._loss_fn = loss_fn\n\n def reset(self):\n self._sum = 0\n self._num_examples = 0\n\n def update(self, output):\n if len(output) == 2:\n y_pred, y = output\n kwargs = {}\n else:\n y_pred, y, kwargs = output\n average_loss = self._loss_fn(y_pred, y, **kwargs)\n assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError(\n 'Loss must have at least one example before it can be computed')\n return self._sum / self._num_examples\n", "path": "ignite/metrics/loss.py"}], "after_files": [{"content": "import torch\n\nfrom ignite.metrics.metric import Metric\n\n\nclass EpochMetric(Metric):\n \"\"\"Class for metrics that should be computed on the entire output history of a model.\n Model's output and targets are restricted to be of shape `(batch_size, n_classes)`. Output\n datatype should be `float32`. Target datatype should be `long`.\n\n - `update` must receive output of the form `(y_pred, y)`.\n\n If target shape is `(batch_size, n_classes)` and `n_classes > 1` than it should be binary: e.g. `[[0, 1, 0, 1], ]`\n\n Args:\n compute_fn (callable): a callable with the signature (`torch.tensor`, `torch.tensor`) takes as the input\n `predictions` and `targets` and returns a scalar.\n\n \"\"\"\n\n def __init__(self, compute_fn, output_transform=lambda x: x):\n\n if not callable(compute_fn):\n raise TypeError(\"Argument compute_fn should be callable\")\n\n super(EpochMetric, self).__init__(output_transform=output_transform)\n self.compute_fn = compute_fn\n\n def reset(self):\n self._predictions = torch.tensor([], dtype=torch.float32)\n self._targets = torch.tensor([], dtype=torch.long)\n\n def update(self, output):\n y_pred, y = output\n\n if y_pred.ndimension() not in (1, 2):\n raise ValueError(\"Predictions should be of shape (batch_size, n_classes) or (batch_size, )\")\n\n if y.ndimension() not in (1, 2):\n raise ValueError(\"Targets should be of shape (batch_size, n_classes) or (batch_size, )\")\n\n if y.ndimension() == 2:\n if not torch.equal(y ** 2, y):\n raise ValueError('Targets should be binary (0 or 1)')\n\n if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:\n y_pred = y_pred.squeeze(dim=-1)\n\n if y.ndimension() == 2 and y.shape[1] == 1:\n y = y.squeeze(dim=-1)\n\n y_pred = y_pred.type_as(self._predictions)\n y = y.type_as(self._targets)\n\n self._predictions = torch.cat([self._predictions, y_pred], dim=0)\n self._targets = torch.cat([self._targets, y], dim=0)\n\n # Check once the signature and execution of compute_fn\n if self._predictions.shape == y_pred.shape:\n try:\n self.compute_fn(self._predictions, self._targets)\n except Exception as e:\n raise RuntimeError(\"Problem with `compute_fn`:\\n {}\".format(e))\n\n def compute(self):\n return self.compute_fn(self._predictions, self._targets)\n", "path": "ignite/metrics/epoch_metric.py"}, {"content": "import logging\n\nfrom ignite.engine import Engine\n\n\nclass EarlyStopping(object):\n \"\"\"EarlyStopping handler can be used to stop the training if no improvement after a given number of events\n\n Args:\n patience (int):\n Number of events to wait if no improvement and then stop the training\n score_function (Callable):\n It should be a function taking a single argument, an `ignite.engine.Engine` object,\n and return a score `float`. An improvement is considered if the score is higher.\n trainer (Engine):\n trainer engine to stop the run if no improvement\n\n Examples:\n\n .. code-block:: python\n\n from ignite.engine import Engine, Events\n from ignite.handlers import EarlyStopping\n\n def score_function(engine):\n val_loss = engine.state.metrics['nll']\n return -val_loss\n\n handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)\n # Note: the handler is attached to an *Evaluator* (runs one epoch on validation dataset)\n evaluator.add_event_handler(Events.COMPLETED, handler)\n\n \"\"\"\n def __init__(self, patience, score_function, trainer):\n\n if not callable(score_function):\n raise TypeError(\"Argument score_function should be a function\")\n\n if patience < 1:\n raise ValueError(\"Argument patience should be positive integer\")\n\n if not isinstance(trainer, Engine):\n raise TypeError(\"Argument trainer should be an instance of Engine\")\n\n self.score_function = score_function\n self.patience = patience\n self.trainer = trainer\n self.counter = 0\n self.best_score = None\n self._logger = logging.getLogger(__name__ + \".\" + self.__class__.__name__)\n self._logger.addHandler(logging.NullHandler())\n\n def __call__(self, engine):\n score = self.score_function(engine)\n\n if self.best_score is None:\n self.best_score = score\n elif score < self.best_score:\n self.counter += 1\n self._logger.debug(\"EarlyStopping: %i / %i\" % (self.counter, self.patience))\n if self.counter >= self.patience:\n self._logger.info(\"EarlyStopping: Stop training\")\n self.trainer.terminate()\n else:\n self.best_score = score\n self.counter = 0\n", "path": "ignite/handlers/early_stopping.py"}, {"content": "from __future__ import division\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric\n\n\nclass Loss(Metric):\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n\n Args:\n loss_fn (callable): a callable taking a prediction tensor, a target\n tensor, optionally other arguments, and returns the average loss\n over all observations in the batch.\n output_transform (callable): a callable that is used to transform the\n :class:`ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric.\n This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n The output is is expected to be a tuple (prediction, target) or\n (prediction, target, kwargs) where kwargs is a dictionary of extra\n keywords arguments.\n\n \"\"\"\n\n def __init__(self, loss_fn, output_transform=lambda x: x):\n super(Loss, self).__init__(output_transform)\n self._loss_fn = loss_fn\n\n def reset(self):\n self._sum = 0\n self._num_examples = 0\n\n def update(self, output):\n if len(output) == 2:\n y_pred, y = output\n kwargs = {}\n else:\n y_pred, y, kwargs = output\n average_loss = self._loss_fn(y_pred, y, **kwargs)\n\n if len(average_loss.shape) != 0:\n raise ValueError('loss_fn did not return the average loss')\n\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError(\n 'Loss must have at least one example before it can be computed')\n return self._sum / self._num_examples\n", "path": "ignite/metrics/loss.py"}]}
| 2,238 | 785 |
gh_patches_debug_3074
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-1341
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot make other users admin on the website
**Describe the bug**
For the moment, there is no way to promote an user to be an admin. One has to do it in the "./bw-dev shell"
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Admin' and then the page of the user you want to promote
2. Promote the user and save
3. The "promoted user" logins in
4. Nope, not promoted
**Expected behavior**
The logged in promoted user should see the admin panel.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/user_admin.py`
Content:
```
1 """ manage user """
2 from django.contrib.auth.decorators import login_required, permission_required
3 from django.core.paginator import Paginator
4 from django.shortcuts import get_object_or_404
5 from django.template.response import TemplateResponse
6 from django.utils.decorators import method_decorator
7 from django.views import View
8
9 from bookwyrm import forms, models
10 from bookwyrm.settings import PAGE_LENGTH
11
12
13 # pylint: disable= no-self-use
14 @method_decorator(login_required, name="dispatch")
15 @method_decorator(
16 permission_required("bookwyrm.moderate_users", raise_exception=True),
17 name="dispatch",
18 )
19 class UserAdminList(View):
20 """admin view of users on this server"""
21
22 def get(self, request):
23 """list of users"""
24 filters = {}
25 server = request.GET.get("server")
26 if server:
27 server = models.FederatedServer.objects.filter(server_name=server).first()
28 filters["federated_server"] = server
29 filters["federated_server__isnull"] = False
30 username = request.GET.get("username")
31 if username:
32 filters["username__icontains"] = username
33 scope = request.GET.get("scope")
34 if scope:
35 filters["local"] = scope == "local"
36
37 users = models.User.objects.filter(**filters)
38
39 sort = request.GET.get("sort", "-created_date")
40 sort_fields = [
41 "created_date",
42 "last_active_date",
43 "username",
44 "federated_server__server_name",
45 "is_active",
46 ]
47 if sort in sort_fields + ["-{:s}".format(f) for f in sort_fields]:
48 users = users.order_by(sort)
49
50 paginated = Paginator(users, PAGE_LENGTH)
51 data = {
52 "users": paginated.get_page(request.GET.get("page")),
53 "sort": sort,
54 "server": server,
55 }
56 return TemplateResponse(request, "user_admin/user_admin.html", data)
57
58
59 @method_decorator(login_required, name="dispatch")
60 @method_decorator(
61 permission_required("bookwyrm.moderate_users", raise_exception=True),
62 name="dispatch",
63 )
64 class UserAdmin(View):
65 """moderate an individual user"""
66
67 def get(self, request, user):
68 """user view"""
69 user = get_object_or_404(models.User, id=user)
70 data = {"user": user, "group_form": forms.UserGroupForm()}
71 return TemplateResponse(request, "user_admin/user.html", data)
72
73 def post(self, request, user):
74 """update user group"""
75 user = get_object_or_404(models.User, id=user)
76 form = forms.UserGroupForm(request.POST, instance=user)
77 if form.is_valid():
78 form.save()
79 data = {"user": user, "group_form": form}
80 return TemplateResponse(request, "user_admin/user.html", data)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/views/user_admin.py b/bookwyrm/views/user_admin.py
--- a/bookwyrm/views/user_admin.py
+++ b/bookwyrm/views/user_admin.py
@@ -13,7 +13,7 @@
# pylint: disable= no-self-use
@method_decorator(login_required, name="dispatch")
@method_decorator(
- permission_required("bookwyrm.moderate_users", raise_exception=True),
+ permission_required("bookwyrm.moderate_user", raise_exception=True),
name="dispatch",
)
class UserAdminList(View):
|
{"golden_diff": "diff --git a/bookwyrm/views/user_admin.py b/bookwyrm/views/user_admin.py\n--- a/bookwyrm/views/user_admin.py\n+++ b/bookwyrm/views/user_admin.py\n@@ -13,7 +13,7 @@\n # pylint: disable= no-self-use\n @method_decorator(login_required, name=\"dispatch\")\n @method_decorator(\n- permission_required(\"bookwyrm.moderate_users\", raise_exception=True),\n+ permission_required(\"bookwyrm.moderate_user\", raise_exception=True),\n name=\"dispatch\",\n )\n class UserAdminList(View):\n", "issue": "Cannot make other users admin on the website\n**Describe the bug**\r\nFor the moment, there is no way to promote an user to be an admin. One has to do it in the \"./bw-dev shell\"\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to 'Admin' and then the page of the user you want to promote\r\n2. Promote the user and save\r\n3. The \"promoted user\" logins in\r\n4. Nope, not promoted\r\n\r\n**Expected behavior**\r\nThe logged in promoted user should see the admin panel.\r\n\n", "before_files": [{"content": "\"\"\" manage user \"\"\"\nfrom django.contrib.auth.decorators import login_required, permission_required\nfrom django.core.paginator import Paginator\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.settings import PAGE_LENGTH\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_users\", raise_exception=True),\n name=\"dispatch\",\n)\nclass UserAdminList(View):\n \"\"\"admin view of users on this server\"\"\"\n\n def get(self, request):\n \"\"\"list of users\"\"\"\n filters = {}\n server = request.GET.get(\"server\")\n if server:\n server = models.FederatedServer.objects.filter(server_name=server).first()\n filters[\"federated_server\"] = server\n filters[\"federated_server__isnull\"] = False\n username = request.GET.get(\"username\")\n if username:\n filters[\"username__icontains\"] = username\n scope = request.GET.get(\"scope\")\n if scope:\n filters[\"local\"] = scope == \"local\"\n\n users = models.User.objects.filter(**filters)\n\n sort = request.GET.get(\"sort\", \"-created_date\")\n sort_fields = [\n \"created_date\",\n \"last_active_date\",\n \"username\",\n \"federated_server__server_name\",\n \"is_active\",\n ]\n if sort in sort_fields + [\"-{:s}\".format(f) for f in sort_fields]:\n users = users.order_by(sort)\n\n paginated = Paginator(users, PAGE_LENGTH)\n data = {\n \"users\": paginated.get_page(request.GET.get(\"page\")),\n \"sort\": sort,\n \"server\": server,\n }\n return TemplateResponse(request, \"user_admin/user_admin.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_users\", raise_exception=True),\n name=\"dispatch\",\n)\nclass UserAdmin(View):\n \"\"\"moderate an individual user\"\"\"\n\n def get(self, request, user):\n \"\"\"user view\"\"\"\n user = get_object_or_404(models.User, id=user)\n data = {\"user\": user, \"group_form\": forms.UserGroupForm()}\n return TemplateResponse(request, \"user_admin/user.html\", data)\n\n def post(self, request, user):\n \"\"\"update user group\"\"\"\n user = get_object_or_404(models.User, id=user)\n form = forms.UserGroupForm(request.POST, instance=user)\n if form.is_valid():\n form.save()\n data = {\"user\": user, \"group_form\": form}\n return TemplateResponse(request, \"user_admin/user.html\", data)\n", "path": "bookwyrm/views/user_admin.py"}], "after_files": [{"content": "\"\"\" manage user \"\"\"\nfrom django.contrib.auth.decorators import login_required, permission_required\nfrom django.core.paginator import Paginator\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.settings import PAGE_LENGTH\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_user\", raise_exception=True),\n name=\"dispatch\",\n)\nclass UserAdminList(View):\n \"\"\"admin view of users on this server\"\"\"\n\n def get(self, request):\n \"\"\"list of users\"\"\"\n filters = {}\n server = request.GET.get(\"server\")\n if server:\n server = models.FederatedServer.objects.filter(server_name=server).first()\n filters[\"federated_server\"] = server\n filters[\"federated_server__isnull\"] = False\n username = request.GET.get(\"username\")\n if username:\n filters[\"username__icontains\"] = username\n scope = request.GET.get(\"scope\")\n if scope:\n filters[\"local\"] = scope == \"local\"\n\n users = models.User.objects.filter(**filters)\n\n sort = request.GET.get(\"sort\", \"-created_date\")\n sort_fields = [\n \"created_date\",\n \"last_active_date\",\n \"username\",\n \"federated_server__server_name\",\n \"is_active\",\n ]\n if sort in sort_fields + [\"-{:s}\".format(f) for f in sort_fields]:\n users = users.order_by(sort)\n\n paginated = Paginator(users, PAGE_LENGTH)\n data = {\n \"users\": paginated.get_page(request.GET.get(\"page\")),\n \"sort\": sort,\n \"server\": server,\n }\n return TemplateResponse(request, \"user_admin/user_admin.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_users\", raise_exception=True),\n name=\"dispatch\",\n)\nclass UserAdmin(View):\n \"\"\"moderate an individual user\"\"\"\n\n def get(self, request, user):\n \"\"\"user view\"\"\"\n user = get_object_or_404(models.User, id=user)\n data = {\"user\": user, \"group_form\": forms.UserGroupForm()}\n return TemplateResponse(request, \"user_admin/user.html\", data)\n\n def post(self, request, user):\n \"\"\"update user group\"\"\"\n user = get_object_or_404(models.User, id=user)\n form = forms.UserGroupForm(request.POST, instance=user)\n if form.is_valid():\n form.save()\n data = {\"user\": user, \"group_form\": form}\n return TemplateResponse(request, \"user_admin/user.html\", data)\n", "path": "bookwyrm/views/user_admin.py"}]}
| 1,136 | 121 |
gh_patches_debug_557
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-743
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 1.6.8
On the docket:
+ [x] Fixup pex re-exec during bootstrap. #741
+ [x] Pex should not re-exec when the current interpreter satifies constraints #709
+ [x] Pex should not lose PEX_PYTHON or PEX_PYTHON_PATH when re-exec-ing #710
+ [x] Fix resolution of `setup.py` project extras. #739
Deferred:
+ [ ] Remove PEX_HTTP_RETRIES and push into a flag for the pex tool #94
+ [ ] Sdist resolution is not always reproducible #735
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.6.7'
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '1.6.7'
+__version__ = '1.6.8'
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '1.6.7'\n+__version__ = '1.6.8'\n", "issue": "Release 1.6.8\nOn the docket:\r\n\r\n+ [x] Fixup pex re-exec during bootstrap. #741 \r\n + [x] Pex should not re-exec when the current interpreter satifies constraints #709\r\n + [x] Pex should not lose PEX_PYTHON or PEX_PYTHON_PATH when re-exec-ing #710\r\n+ [x] Fix resolution of `setup.py` project extras. #739\r\n\r\nDeferred:\r\n\r\n+ [ ] Remove PEX_HTTP_RETRIES and push into a flag for the pex tool #94\r\n+ [ ] Sdist resolution is not always reproducible #735\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.6.7'\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.6.8'\n", "path": "pex/version.py"}]}
| 449 | 94 |
gh_patches_debug_13557
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-3715
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows module fails on multiple resource files with same name
I have a project with multiple subfolders that contain resource scripts named 'rsrc.rc', this worked with at least 0.44.0, but fails with current master:
> meson.build:7:0: ERROR: Tried to create target "Windows resource for file 'rsrc.rc'", but a target of that name already exists.
Here is a small testcase: [rsrcbug.zip](https://github.com/mesonbuild/meson/files/2007861/rsrcbug.zip)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesonbuild/modules/windows.py`
Content:
```
1 # Copyright 2015 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 from .. import mlog
18 from .. import mesonlib, dependencies, build
19 from ..mesonlib import MesonException, extract_as_list
20 from . import get_include_args
21 from . import ModuleReturnValue
22 from . import ExtensionModule
23 from ..interpreterbase import permittedKwargs, FeatureNewKwargs
24
25 class WindowsModule(ExtensionModule):
26
27 def detect_compiler(self, compilers):
28 for l in ('c', 'cpp'):
29 if l in compilers:
30 return compilers[l]
31 raise MesonException('Resource compilation requires a C or C++ compiler.')
32
33 @FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files'])
34 @permittedKwargs({'args', 'include_directories', 'depend_files'})
35 def compile_resources(self, state, args, kwargs):
36 comp = self.detect_compiler(state.compilers)
37
38 extra_args = mesonlib.stringlistify(kwargs.get('args', []))
39 wrc_deps = extract_as_list(kwargs, 'depend_files', pop = True)
40 inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)
41 for incd in inc_dirs:
42 if not isinstance(incd.held_object, (str, build.IncludeDirs)):
43 raise MesonException('Resource include dirs should be include_directories().')
44 extra_args += get_include_args(inc_dirs)
45
46 if comp.id == 'msvc':
47 rescomp = dependencies.ExternalProgram('rc', silent=True)
48 res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']
49 suffix = 'res'
50 else:
51 m = 'Argument {!r} has a space which may not work with windres due to ' \
52 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'
53 for arg in extra_args:
54 if ' ' in arg:
55 mlog.warning(m.format(arg))
56 rescomp_name = None
57 # FIXME: Does not handle `native: true` executables, see
58 # https://github.com/mesonbuild/meson/issues/1531
59 if state.environment.is_cross_build():
60 # If cross compiling see if windres has been specified in the
61 # cross file before trying to find it another way.
62 rescomp_name = state.environment.cross_info.config['binaries'].get('windres')
63 if rescomp_name is None:
64 # Pick-up env var WINDRES if set. This is often used for
65 # specifying an arch-specific windres.
66 rescomp_name = os.environ.get('WINDRES', 'windres')
67 rescomp = dependencies.ExternalProgram(rescomp_name, silent=True)
68 res_args = extra_args + ['@INPUT@', '@OUTPUT@']
69 suffix = 'o'
70 if not rescomp.found():
71 raise MesonException('Could not find Windows resource compiler "%s".' % rescomp_name)
72
73 res_targets = []
74
75 def add_target(src):
76 if isinstance(src, list):
77 for subsrc in src:
78 add_target(subsrc)
79 return
80
81 if hasattr(src, 'held_object'):
82 src = src.held_object
83
84 res_kwargs = {
85 'output': '@BASENAME@.' + suffix,
86 'input': [src],
87 'command': [rescomp] + res_args,
88 'depend_files': wrc_deps,
89 }
90
91 if isinstance(src, (str, mesonlib.File)):
92 name = 'file {!r}'.format(str(src))
93 elif isinstance(src, build.CustomTarget):
94 if len(src.get_outputs()) > 1:
95 raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')
96
97 name = 'target {!r}'.format(src.get_id())
98 else:
99 raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))
100
101 # Path separators are not allowed in target names
102 name = name.replace('/', '_').replace('\\', '_')
103
104 # instruct binutils windres to generate a preprocessor depfile
105 if comp.id != 'msvc':
106 res_kwargs['depfile'] = res_kwargs['output'] + '.d'
107 res_kwargs['command'] += ['--preprocessor-arg=-MD', '--preprocessor-arg=-MQ@OUTPUT@', '--preprocessor-arg=-MF@DEPFILE@']
108
109 res_targets.append(build.CustomTarget('Windows resource for ' + name, state.subdir, state.subproject, res_kwargs))
110
111 add_target(args)
112
113 return ModuleReturnValue(res_targets, [res_targets])
114
115 def initialize(*args, **kwargs):
116 return WindowsModule(*args, **kwargs)
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py
--- a/mesonbuild/modules/windows.py
+++ b/mesonbuild/modules/windows.py
@@ -88,8 +88,10 @@
'depend_files': wrc_deps,
}
- if isinstance(src, (str, mesonlib.File)):
- name = 'file {!r}'.format(str(src))
+ if isinstance(src, str):
+ name = 'file {!r}'.format(os.path.join(state.subdir, src))
+ elif isinstance(src, mesonlib.File):
+ name = 'file {!r}'.format(src.relative_name())
elif isinstance(src, build.CustomTarget):
if len(src.get_outputs()) > 1:
raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')
|
{"golden_diff": "diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py\n--- a/mesonbuild/modules/windows.py\n+++ b/mesonbuild/modules/windows.py\n@@ -88,8 +88,10 @@\n 'depend_files': wrc_deps,\n }\n \n- if isinstance(src, (str, mesonlib.File)):\n- name = 'file {!r}'.format(str(src))\n+ if isinstance(src, str):\n+ name = 'file {!r}'.format(os.path.join(state.subdir, src))\n+ elif isinstance(src, mesonlib.File):\n+ name = 'file {!r}'.format(src.relative_name())\n elif isinstance(src, build.CustomTarget):\n if len(src.get_outputs()) > 1:\n raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')\n", "issue": "Windows module fails on multiple resource files with same name\nI have a project with multiple subfolders that contain resource scripts named 'rsrc.rc', this worked with at least 0.44.0, but fails with current master:\r\n\r\n> meson.build:7:0: ERROR: Tried to create target \"Windows resource for file 'rsrc.rc'\", but a target of that name already exists.\r\n\r\nHere is a small testcase: [rsrcbug.zip](https://github.com/mesonbuild/meson/files/2007861/rsrcbug.zip)\r\n\n", "before_files": [{"content": "# Copyright 2015 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom .. import mlog\nfrom .. import mesonlib, dependencies, build\nfrom ..mesonlib import MesonException, extract_as_list\nfrom . import get_include_args\nfrom . import ModuleReturnValue\nfrom . import ExtensionModule\nfrom ..interpreterbase import permittedKwargs, FeatureNewKwargs\n\nclass WindowsModule(ExtensionModule):\n\n def detect_compiler(self, compilers):\n for l in ('c', 'cpp'):\n if l in compilers:\n return compilers[l]\n raise MesonException('Resource compilation requires a C or C++ compiler.')\n\n @FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files'])\n @permittedKwargs({'args', 'include_directories', 'depend_files'})\n def compile_resources(self, state, args, kwargs):\n comp = self.detect_compiler(state.compilers)\n\n extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n wrc_deps = extract_as_list(kwargs, 'depend_files', pop = True)\n inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)\n for incd in inc_dirs:\n if not isinstance(incd.held_object, (str, build.IncludeDirs)):\n raise MesonException('Resource include dirs should be include_directories().')\n extra_args += get_include_args(inc_dirs)\n\n if comp.id == 'msvc':\n rescomp = dependencies.ExternalProgram('rc', silent=True)\n res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']\n suffix = 'res'\n else:\n m = 'Argument {!r} has a space which may not work with windres due to ' \\\n 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'\n for arg in extra_args:\n if ' ' in arg:\n mlog.warning(m.format(arg))\n rescomp_name = None\n # FIXME: Does not handle `native: true` executables, see\n # https://github.com/mesonbuild/meson/issues/1531\n if state.environment.is_cross_build():\n # If cross compiling see if windres has been specified in the\n # cross file before trying to find it another way.\n rescomp_name = state.environment.cross_info.config['binaries'].get('windres')\n if rescomp_name is None:\n # Pick-up env var WINDRES if set. This is often used for\n # specifying an arch-specific windres.\n rescomp_name = os.environ.get('WINDRES', 'windres')\n rescomp = dependencies.ExternalProgram(rescomp_name, silent=True)\n res_args = extra_args + ['@INPUT@', '@OUTPUT@']\n suffix = 'o'\n if not rescomp.found():\n raise MesonException('Could not find Windows resource compiler \"%s\".' % rescomp_name)\n\n res_targets = []\n\n def add_target(src):\n if isinstance(src, list):\n for subsrc in src:\n add_target(subsrc)\n return\n\n if hasattr(src, 'held_object'):\n src = src.held_object\n\n res_kwargs = {\n 'output': '@BASENAME@.' + suffix,\n 'input': [src],\n 'command': [rescomp] + res_args,\n 'depend_files': wrc_deps,\n }\n\n if isinstance(src, (str, mesonlib.File)):\n name = 'file {!r}'.format(str(src))\n elif isinstance(src, build.CustomTarget):\n if len(src.get_outputs()) > 1:\n raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')\n\n name = 'target {!r}'.format(src.get_id())\n else:\n raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))\n\n # Path separators are not allowed in target names\n name = name.replace('/', '_').replace('\\\\', '_')\n\n # instruct binutils windres to generate a preprocessor depfile\n if comp.id != 'msvc':\n res_kwargs['depfile'] = res_kwargs['output'] + '.d'\n res_kwargs['command'] += ['--preprocessor-arg=-MD', '--preprocessor-arg=-MQ@OUTPUT@', '--preprocessor-arg=-MF@DEPFILE@']\n\n res_targets.append(build.CustomTarget('Windows resource for ' + name, state.subdir, state.subproject, res_kwargs))\n\n add_target(args)\n\n return ModuleReturnValue(res_targets, [res_targets])\n\ndef initialize(*args, **kwargs):\n return WindowsModule(*args, **kwargs)\n", "path": "mesonbuild/modules/windows.py"}], "after_files": [{"content": "# Copyright 2015 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom .. import mlog\nfrom .. import mesonlib, dependencies, build\nfrom ..mesonlib import MesonException, extract_as_list\nfrom . import get_include_args\nfrom . import ModuleReturnValue\nfrom . import ExtensionModule\nfrom ..interpreterbase import permittedKwargs, FeatureNewKwargs\n\nclass WindowsModule(ExtensionModule):\n\n def detect_compiler(self, compilers):\n for l in ('c', 'cpp'):\n if l in compilers:\n return compilers[l]\n raise MesonException('Resource compilation requires a C or C++ compiler.')\n\n @FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files'])\n @permittedKwargs({'args', 'include_directories', 'depend_files'})\n def compile_resources(self, state, args, kwargs):\n comp = self.detect_compiler(state.compilers)\n\n extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n wrc_deps = extract_as_list(kwargs, 'depend_files', pop = True)\n inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)\n for incd in inc_dirs:\n if not isinstance(incd.held_object, (str, build.IncludeDirs)):\n raise MesonException('Resource include dirs should be include_directories().')\n extra_args += get_include_args(inc_dirs)\n\n if comp.id == 'msvc':\n rescomp = dependencies.ExternalProgram('rc', silent=True)\n res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']\n suffix = 'res'\n else:\n m = 'Argument {!r} has a space which may not work with windres due to ' \\\n 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'\n for arg in extra_args:\n if ' ' in arg:\n mlog.warning(m.format(arg))\n rescomp_name = None\n # FIXME: Does not handle `native: true` executables, see\n # https://github.com/mesonbuild/meson/issues/1531\n if state.environment.is_cross_build():\n # If cross compiling see if windres has been specified in the\n # cross file before trying to find it another way.\n rescomp_name = state.environment.cross_info.config['binaries'].get('windres')\n if rescomp_name is None:\n # Pick-up env var WINDRES if set. This is often used for\n # specifying an arch-specific windres.\n rescomp_name = os.environ.get('WINDRES', 'windres')\n rescomp = dependencies.ExternalProgram(rescomp_name, silent=True)\n res_args = extra_args + ['@INPUT@', '@OUTPUT@']\n suffix = 'o'\n if not rescomp.found():\n raise MesonException('Could not find Windows resource compiler \"%s\".' % rescomp_name)\n\n res_targets = []\n\n def add_target(src):\n if isinstance(src, list):\n for subsrc in src:\n add_target(subsrc)\n return\n\n if hasattr(src, 'held_object'):\n src = src.held_object\n\n res_kwargs = {\n 'output': '@BASENAME@.' + suffix,\n 'input': [src],\n 'command': [rescomp] + res_args,\n 'depend_files': wrc_deps,\n }\n\n if isinstance(src, str):\n name = 'file {!r}'.format(os.path.join(state.subdir, src))\n elif isinstance(src, mesonlib.File):\n name = 'file {!r}'.format(src.relative_name())\n elif isinstance(src, build.CustomTarget):\n if len(src.get_outputs()) > 1:\n raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')\n\n name = 'target {!r}'.format(src.get_id())\n else:\n raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))\n\n # Path separators are not allowed in target names\n name = name.replace('/', '_').replace('\\\\', '_')\n\n # instruct binutils windres to generate a preprocessor depfile\n if comp.id != 'msvc':\n res_kwargs['depfile'] = res_kwargs['output'] + '.d'\n res_kwargs['command'] += ['--preprocessor-arg=-MD', '--preprocessor-arg=-MQ@OUTPUT@', '--preprocessor-arg=-MF@DEPFILE@']\n\n res_targets.append(build.CustomTarget('Windows resource for ' + name, state.subdir, state.subproject, res_kwargs))\n\n add_target(args)\n\n return ModuleReturnValue(res_targets, [res_targets])\n\ndef initialize(*args, **kwargs):\n return WindowsModule(*args, **kwargs)\n", "path": "mesonbuild/modules/windows.py"}]}
| 1,776 | 181 |
gh_patches_debug_30757
|
rasdani/github-patches
|
git_diff
|
pypa__setuptools-2306
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Don't install setup_requires when run as a PEP 517 backend
PEP 517 separates the responsibilities around build requirements: the backend is responsible for saying what is required, and the frontend is responsible for ensuring they're available to the build.
In setuptools, build requirements are defined by `setup_requires`, and these get passed through to PEP 517's `get_requires_for_build_*` hooks. There is a monkeypatch to return them from these hooks and prevent setuptools trying to install them itself:
https://github.com/pypa/setuptools/blob/5e60dc50e540a942aeb558aabe7d92ab7eb13d4b/setuptools/build_meta.py#L56-L75
But something similar to that - preventing installation - should really be in place for all the PEP 517 hooks, because a PEP 517 backend isn't responsible for installing dependencies.
Why does this matter? Pip has the `--no-build-isolation` option, with which the caller can declare that they have taken care of build dependencies and pip should try to build the package in the current environment. This is useful for downstream packagers, and for experimenting with different versions of your build dependencies. But setuptools doesn't know about this, so it charges ahead and attempts to install things when that's not what you want.
The workaround I'm looking at is to only specify `setup_requires` if `'egg_info' in sys.argv`, as is the case when the `get_requires_for_build_*` hooks are called. But this is clearly not ideal.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/build_meta.py`
Content:
```
1 """A PEP 517 interface to setuptools
2
3 Previously, when a user or a command line tool (let's call it a "frontend")
4 needed to make a request of setuptools to take a certain action, for
5 example, generating a list of installation requirements, the frontend would
6 would call "setup.py egg_info" or "setup.py bdist_wheel" on the command line.
7
8 PEP 517 defines a different method of interfacing with setuptools. Rather
9 than calling "setup.py" directly, the frontend should:
10
11 1. Set the current directory to the directory with a setup.py file
12 2. Import this module into a safe python interpreter (one in which
13 setuptools can potentially set global variables or crash hard).
14 3. Call one of the functions defined in PEP 517.
15
16 What each function does is defined in PEP 517. However, here is a "casual"
17 definition of the functions (this definition should not be relied on for
18 bug reports or API stability):
19
20 - `build_wheel`: build a wheel in the folder and return the basename
21 - `get_requires_for_build_wheel`: get the `setup_requires` to build
22 - `prepare_metadata_for_build_wheel`: get the `install_requires`
23 - `build_sdist`: build an sdist in the folder and return the basename
24 - `get_requires_for_build_sdist`: get the `setup_requires` to build
25
26 Again, this is not a formal definition! Just a "taste" of the module.
27 """
28
29 import io
30 import os
31 import sys
32 import tokenize
33 import shutil
34 import contextlib
35
36 import setuptools
37 import distutils
38 from setuptools.py31compat import TemporaryDirectory
39
40 from pkg_resources import parse_requirements
41
42 __all__ = ['get_requires_for_build_sdist',
43 'get_requires_for_build_wheel',
44 'prepare_metadata_for_build_wheel',
45 'build_wheel',
46 'build_sdist',
47 '__legacy__',
48 'SetupRequirementsError']
49
50
51 class SetupRequirementsError(BaseException):
52 def __init__(self, specifiers):
53 self.specifiers = specifiers
54
55
56 class Distribution(setuptools.dist.Distribution):
57 def fetch_build_eggs(self, specifiers):
58 specifier_list = list(map(str, parse_requirements(specifiers)))
59
60 raise SetupRequirementsError(specifier_list)
61
62 @classmethod
63 @contextlib.contextmanager
64 def patch(cls):
65 """
66 Replace
67 distutils.dist.Distribution with this class
68 for the duration of this context.
69 """
70 orig = distutils.core.Distribution
71 distutils.core.Distribution = cls
72 try:
73 yield
74 finally:
75 distutils.core.Distribution = orig
76
77
78 def _to_str(s):
79 """
80 Convert a filename to a string (on Python 2, explicitly
81 a byte string, not Unicode) as distutils checks for the
82 exact type str.
83 """
84 if sys.version_info[0] == 2 and not isinstance(s, str):
85 # Assume it's Unicode, as that's what the PEP says
86 # should be provided.
87 return s.encode(sys.getfilesystemencoding())
88 return s
89
90
91 def _get_immediate_subdirectories(a_dir):
92 return [name for name in os.listdir(a_dir)
93 if os.path.isdir(os.path.join(a_dir, name))]
94
95
96 def _file_with_extension(directory, extension):
97 matching = (
98 f for f in os.listdir(directory)
99 if f.endswith(extension)
100 )
101 file, = matching
102 return file
103
104
105 def _open_setup_script(setup_script):
106 if not os.path.exists(setup_script):
107 # Supply a default setup.py
108 return io.StringIO(u"from setuptools import setup; setup()")
109
110 return getattr(tokenize, 'open', open)(setup_script)
111
112
113 class _BuildMetaBackend(object):
114
115 def _fix_config(self, config_settings):
116 config_settings = config_settings or {}
117 config_settings.setdefault('--global-option', [])
118 return config_settings
119
120 def _get_build_requires(self, config_settings, requirements):
121 config_settings = self._fix_config(config_settings)
122
123 sys.argv = sys.argv[:1] + ['egg_info'] + \
124 config_settings["--global-option"]
125 try:
126 with Distribution.patch():
127 self.run_setup()
128 except SetupRequirementsError as e:
129 requirements += e.specifiers
130
131 return requirements
132
133 def run_setup(self, setup_script='setup.py'):
134 # Note that we can reuse our build directory between calls
135 # Correctness comes first, then optimization later
136 __file__ = setup_script
137 __name__ = '__main__'
138
139 with _open_setup_script(__file__) as f:
140 code = f.read().replace(r'\r\n', r'\n')
141
142 exec(compile(code, __file__, 'exec'), locals())
143
144 def get_requires_for_build_wheel(self, config_settings=None):
145 config_settings = self._fix_config(config_settings)
146 return self._get_build_requires(
147 config_settings, requirements=['wheel'])
148
149 def get_requires_for_build_sdist(self, config_settings=None):
150 config_settings = self._fix_config(config_settings)
151 return self._get_build_requires(config_settings, requirements=[])
152
153 def prepare_metadata_for_build_wheel(self, metadata_directory,
154 config_settings=None):
155 sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',
156 _to_str(metadata_directory)]
157 self.run_setup()
158
159 dist_info_directory = metadata_directory
160 while True:
161 dist_infos = [f for f in os.listdir(dist_info_directory)
162 if f.endswith('.dist-info')]
163
164 if (
165 len(dist_infos) == 0 and
166 len(_get_immediate_subdirectories(dist_info_directory)) == 1
167 ):
168
169 dist_info_directory = os.path.join(
170 dist_info_directory, os.listdir(dist_info_directory)[0])
171 continue
172
173 assert len(dist_infos) == 1
174 break
175
176 # PEP 517 requires that the .dist-info directory be placed in the
177 # metadata_directory. To comply, we MUST copy the directory to the root
178 if dist_info_directory != metadata_directory:
179 shutil.move(
180 os.path.join(dist_info_directory, dist_infos[0]),
181 metadata_directory)
182 shutil.rmtree(dist_info_directory, ignore_errors=True)
183
184 return dist_infos[0]
185
186 def _build_with_temp_dir(self, setup_command, result_extension,
187 result_directory, config_settings):
188 config_settings = self._fix_config(config_settings)
189 result_directory = os.path.abspath(result_directory)
190
191 # Build in a temporary directory, then copy to the target.
192 os.makedirs(result_directory, exist_ok=True)
193 with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:
194 sys.argv = (sys.argv[:1] + setup_command +
195 ['--dist-dir', tmp_dist_dir] +
196 config_settings["--global-option"])
197 self.run_setup()
198
199 result_basename = _file_with_extension(
200 tmp_dist_dir, result_extension)
201 result_path = os.path.join(result_directory, result_basename)
202 if os.path.exists(result_path):
203 # os.rename will fail overwriting on non-Unix.
204 os.remove(result_path)
205 os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)
206
207 return result_basename
208
209 def build_wheel(self, wheel_directory, config_settings=None,
210 metadata_directory=None):
211 return self._build_with_temp_dir(['bdist_wheel'], '.whl',
212 wheel_directory, config_settings)
213
214 def build_sdist(self, sdist_directory, config_settings=None):
215 return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],
216 '.tar.gz', sdist_directory,
217 config_settings)
218
219
220 class _BuildMetaLegacyBackend(_BuildMetaBackend):
221 """Compatibility backend for setuptools
222
223 This is a version of setuptools.build_meta that endeavors
224 to maintain backwards
225 compatibility with pre-PEP 517 modes of invocation. It
226 exists as a temporary
227 bridge between the old packaging mechanism and the new
228 packaging mechanism,
229 and will eventually be removed.
230 """
231 def run_setup(self, setup_script='setup.py'):
232 # In order to maintain compatibility with scripts assuming that
233 # the setup.py script is in a directory on the PYTHONPATH, inject
234 # '' into sys.path. (pypa/setuptools#1642)
235 sys_path = list(sys.path) # Save the original path
236
237 script_dir = os.path.dirname(os.path.abspath(setup_script))
238 if script_dir not in sys.path:
239 sys.path.insert(0, script_dir)
240
241 # Some setup.py scripts (e.g. in pygame and numpy) use sys.argv[0] to
242 # get the directory of the source code. They expect it to refer to the
243 # setup.py script.
244 sys_argv_0 = sys.argv[0]
245 sys.argv[0] = setup_script
246
247 try:
248 super(_BuildMetaLegacyBackend,
249 self).run_setup(setup_script=setup_script)
250 finally:
251 # While PEP 517 frontends should be calling each hook in a fresh
252 # subprocess according to the standard (and thus it should not be
253 # strictly necessary to restore the old sys.path), we'll restore
254 # the original path so that the path manipulation does not persist
255 # within the hook after run_setup is called.
256 sys.path[:] = sys_path
257 sys.argv[0] = sys_argv_0
258
259
260 # The primary backend
261 _BACKEND = _BuildMetaBackend()
262
263 get_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel
264 get_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist
265 prepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel
266 build_wheel = _BACKEND.build_wheel
267 build_sdist = _BACKEND.build_sdist
268
269
270 # The legacy backend
271 __legacy__ = _BuildMetaLegacyBackend()
272
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py
--- a/setuptools/build_meta.py
+++ b/setuptools/build_meta.py
@@ -75,6 +75,22 @@
distutils.core.Distribution = orig
[email protected]
+def no_install_setup_requires():
+ """Temporarily disable installing setup_requires
+
+ Under PEP 517, the backend reports build dependencies to the frontend,
+ and the frontend is responsible for ensuring they're installed.
+ So setuptools (acting as a backend) should not try to install them.
+ """
+ orig = setuptools._install_setup_requires
+ setuptools._install_setup_requires = lambda attrs: None
+ try:
+ yield
+ finally:
+ setuptools._install_setup_requires = orig
+
+
def _to_str(s):
"""
Convert a filename to a string (on Python 2, explicitly
@@ -154,7 +170,8 @@
config_settings=None):
sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',
_to_str(metadata_directory)]
- self.run_setup()
+ with no_install_setup_requires():
+ self.run_setup()
dist_info_directory = metadata_directory
while True:
@@ -194,7 +211,8 @@
sys.argv = (sys.argv[:1] + setup_command +
['--dist-dir', tmp_dist_dir] +
config_settings["--global-option"])
- self.run_setup()
+ with no_install_setup_requires():
+ self.run_setup()
result_basename = _file_with_extension(
tmp_dist_dir, result_extension)
|
{"golden_diff": "diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py\n--- a/setuptools/build_meta.py\n+++ b/setuptools/build_meta.py\n@@ -75,6 +75,22 @@\n distutils.core.Distribution = orig\n \n \[email protected]\n+def no_install_setup_requires():\n+ \"\"\"Temporarily disable installing setup_requires\n+\n+ Under PEP 517, the backend reports build dependencies to the frontend,\n+ and the frontend is responsible for ensuring they're installed.\n+ So setuptools (acting as a backend) should not try to install them.\n+ \"\"\"\n+ orig = setuptools._install_setup_requires\n+ setuptools._install_setup_requires = lambda attrs: None\n+ try:\n+ yield\n+ finally:\n+ setuptools._install_setup_requires = orig\n+\n+\n def _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n@@ -154,7 +170,8 @@\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n- self.run_setup()\n+ with no_install_setup_requires():\n+ self.run_setup()\n \n dist_info_directory = metadata_directory\n while True:\n@@ -194,7 +211,8 @@\n sys.argv = (sys.argv[:1] + setup_command +\n ['--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n- self.run_setup()\n+ with no_install_setup_requires():\n+ self.run_setup()\n \n result_basename = _file_with_extension(\n tmp_dist_dir, result_extension)\n", "issue": "Don't install setup_requires when run as a PEP 517 backend\nPEP 517 separates the responsibilities around build requirements: the backend is responsible for saying what is required, and the frontend is responsible for ensuring they're available to the build.\r\n\r\nIn setuptools, build requirements are defined by `setup_requires`, and these get passed through to PEP 517's `get_requires_for_build_*` hooks. There is a monkeypatch to return them from these hooks and prevent setuptools trying to install them itself:\r\n\r\nhttps://github.com/pypa/setuptools/blob/5e60dc50e540a942aeb558aabe7d92ab7eb13d4b/setuptools/build_meta.py#L56-L75\r\n\r\nBut something similar to that - preventing installation - should really be in place for all the PEP 517 hooks, because a PEP 517 backend isn't responsible for installing dependencies.\r\n\r\nWhy does this matter? Pip has the `--no-build-isolation` option, with which the caller can declare that they have taken care of build dependencies and pip should try to build the package in the current environment. This is useful for downstream packagers, and for experimenting with different versions of your build dependencies. But setuptools doesn't know about this, so it charges ahead and attempts to install things when that's not what you want.\r\n\r\nThe workaround I'm looking at is to only specify `setup_requires` if `'egg_info' in sys.argv`, as is the case when the `get_requires_for_build_*` hooks are called. But this is clearly not ideal.\n", "before_files": [{"content": "\"\"\"A PEP 517 interface to setuptools\n\nPreviously, when a user or a command line tool (let's call it a \"frontend\")\nneeded to make a request of setuptools to take a certain action, for\nexample, generating a list of installation requirements, the frontend would\nwould call \"setup.py egg_info\" or \"setup.py bdist_wheel\" on the command line.\n\nPEP 517 defines a different method of interfacing with setuptools. Rather\nthan calling \"setup.py\" directly, the frontend should:\n\n 1. Set the current directory to the directory with a setup.py file\n 2. Import this module into a safe python interpreter (one in which\n setuptools can potentially set global variables or crash hard).\n 3. Call one of the functions defined in PEP 517.\n\nWhat each function does is defined in PEP 517. However, here is a \"casual\"\ndefinition of the functions (this definition should not be relied on for\nbug reports or API stability):\n\n - `build_wheel`: build a wheel in the folder and return the basename\n - `get_requires_for_build_wheel`: get the `setup_requires` to build\n - `prepare_metadata_for_build_wheel`: get the `install_requires`\n - `build_sdist`: build an sdist in the folder and return the basename\n - `get_requires_for_build_sdist`: get the `setup_requires` to build\n\nAgain, this is not a formal definition! Just a \"taste\" of the module.\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport tokenize\nimport shutil\nimport contextlib\n\nimport setuptools\nimport distutils\nfrom setuptools.py31compat import TemporaryDirectory\n\nfrom pkg_resources import parse_requirements\n\n__all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n 'build_wheel',\n 'build_sdist',\n '__legacy__',\n 'SetupRequirementsError']\n\n\nclass SetupRequirementsError(BaseException):\n def __init__(self, specifiers):\n self.specifiers = specifiers\n\n\nclass Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n specifier_list = list(map(str, parse_requirements(specifiers)))\n\n raise SetupRequirementsError(specifier_list)\n\n @classmethod\n @contextlib.contextmanager\n def patch(cls):\n \"\"\"\n Replace\n distutils.dist.Distribution with this class\n for the duration of this context.\n \"\"\"\n orig = distutils.core.Distribution\n distutils.core.Distribution = cls\n try:\n yield\n finally:\n distutils.core.Distribution = orig\n\n\ndef _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n a byte string, not Unicode) as distutils checks for the\n exact type str.\n \"\"\"\n if sys.version_info[0] == 2 and not isinstance(s, str):\n # Assume it's Unicode, as that's what the PEP says\n # should be provided.\n return s.encode(sys.getfilesystemencoding())\n return s\n\n\ndef _get_immediate_subdirectories(a_dir):\n return [name for name in os.listdir(a_dir)\n if os.path.isdir(os.path.join(a_dir, name))]\n\n\ndef _file_with_extension(directory, extension):\n matching = (\n f for f in os.listdir(directory)\n if f.endswith(extension)\n )\n file, = matching\n return file\n\n\ndef _open_setup_script(setup_script):\n if not os.path.exists(setup_script):\n # Supply a default setup.py\n return io.StringIO(u\"from setuptools import setup; setup()\")\n\n return getattr(tokenize, 'open', open)(setup_script)\n\n\nclass _BuildMetaBackend(object):\n\n def _fix_config(self, config_settings):\n config_settings = config_settings or {}\n config_settings.setdefault('--global-option', [])\n return config_settings\n\n def _get_build_requires(self, config_settings, requirements):\n config_settings = self._fix_config(config_settings)\n\n sys.argv = sys.argv[:1] + ['egg_info'] + \\\n config_settings[\"--global-option\"]\n try:\n with Distribution.patch():\n self.run_setup()\n except SetupRequirementsError as e:\n requirements += e.specifiers\n\n return requirements\n\n def run_setup(self, setup_script='setup.py'):\n # Note that we can reuse our build directory between calls\n # Correctness comes first, then optimization later\n __file__ = setup_script\n __name__ = '__main__'\n\n with _open_setup_script(__file__) as f:\n code = f.read().replace(r'\\r\\n', r'\\n')\n\n exec(compile(code, __file__, 'exec'), locals())\n\n def get_requires_for_build_wheel(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(\n config_settings, requirements=['wheel'])\n\n def get_requires_for_build_sdist(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=[])\n\n def prepare_metadata_for_build_wheel(self, metadata_directory,\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n self.run_setup()\n\n dist_info_directory = metadata_directory\n while True:\n dist_infos = [f for f in os.listdir(dist_info_directory)\n if f.endswith('.dist-info')]\n\n if (\n len(dist_infos) == 0 and\n len(_get_immediate_subdirectories(dist_info_directory)) == 1\n ):\n\n dist_info_directory = os.path.join(\n dist_info_directory, os.listdir(dist_info_directory)[0])\n continue\n\n assert len(dist_infos) == 1\n break\n\n # PEP 517 requires that the .dist-info directory be placed in the\n # metadata_directory. To comply, we MUST copy the directory to the root\n if dist_info_directory != metadata_directory:\n shutil.move(\n os.path.join(dist_info_directory, dist_infos[0]),\n metadata_directory)\n shutil.rmtree(dist_info_directory, ignore_errors=True)\n\n return dist_infos[0]\n\n def _build_with_temp_dir(self, setup_command, result_extension,\n result_directory, config_settings):\n config_settings = self._fix_config(config_settings)\n result_directory = os.path.abspath(result_directory)\n\n # Build in a temporary directory, then copy to the target.\n os.makedirs(result_directory, exist_ok=True)\n with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:\n sys.argv = (sys.argv[:1] + setup_command +\n ['--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n self.run_setup()\n\n result_basename = _file_with_extension(\n tmp_dist_dir, result_extension)\n result_path = os.path.join(result_directory, result_basename)\n if os.path.exists(result_path):\n # os.rename will fail overwriting on non-Unix.\n os.remove(result_path)\n os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)\n\n return result_basename\n\n def build_wheel(self, wheel_directory, config_settings=None,\n metadata_directory=None):\n return self._build_with_temp_dir(['bdist_wheel'], '.whl',\n wheel_directory, config_settings)\n\n def build_sdist(self, sdist_directory, config_settings=None):\n return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],\n '.tar.gz', sdist_directory,\n config_settings)\n\n\nclass _BuildMetaLegacyBackend(_BuildMetaBackend):\n \"\"\"Compatibility backend for setuptools\n\n This is a version of setuptools.build_meta that endeavors\n to maintain backwards\n compatibility with pre-PEP 517 modes of invocation. It\n exists as a temporary\n bridge between the old packaging mechanism and the new\n packaging mechanism,\n and will eventually be removed.\n \"\"\"\n def run_setup(self, setup_script='setup.py'):\n # In order to maintain compatibility with scripts assuming that\n # the setup.py script is in a directory on the PYTHONPATH, inject\n # '' into sys.path. (pypa/setuptools#1642)\n sys_path = list(sys.path) # Save the original path\n\n script_dir = os.path.dirname(os.path.abspath(setup_script))\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n\n # Some setup.py scripts (e.g. in pygame and numpy) use sys.argv[0] to\n # get the directory of the source code. They expect it to refer to the\n # setup.py script.\n sys_argv_0 = sys.argv[0]\n sys.argv[0] = setup_script\n\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n finally:\n # While PEP 517 frontends should be calling each hook in a fresh\n # subprocess according to the standard (and thus it should not be\n # strictly necessary to restore the old sys.path), we'll restore\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n sys.argv[0] = sys_argv_0\n\n\n# The primary backend\n_BACKEND = _BuildMetaBackend()\n\nget_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel\nget_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist\nprepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel\nbuild_wheel = _BACKEND.build_wheel\nbuild_sdist = _BACKEND.build_sdist\n\n\n# The legacy backend\n__legacy__ = _BuildMetaLegacyBackend()\n", "path": "setuptools/build_meta.py"}], "after_files": [{"content": "\"\"\"A PEP 517 interface to setuptools\n\nPreviously, when a user or a command line tool (let's call it a \"frontend\")\nneeded to make a request of setuptools to take a certain action, for\nexample, generating a list of installation requirements, the frontend would\nwould call \"setup.py egg_info\" or \"setup.py bdist_wheel\" on the command line.\n\nPEP 517 defines a different method of interfacing with setuptools. Rather\nthan calling \"setup.py\" directly, the frontend should:\n\n 1. Set the current directory to the directory with a setup.py file\n 2. Import this module into a safe python interpreter (one in which\n setuptools can potentially set global variables or crash hard).\n 3. Call one of the functions defined in PEP 517.\n\nWhat each function does is defined in PEP 517. However, here is a \"casual\"\ndefinition of the functions (this definition should not be relied on for\nbug reports or API stability):\n\n - `build_wheel`: build a wheel in the folder and return the basename\n - `get_requires_for_build_wheel`: get the `setup_requires` to build\n - `prepare_metadata_for_build_wheel`: get the `install_requires`\n - `build_sdist`: build an sdist in the folder and return the basename\n - `get_requires_for_build_sdist`: get the `setup_requires` to build\n\nAgain, this is not a formal definition! Just a \"taste\" of the module.\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport tokenize\nimport shutil\nimport contextlib\n\nimport setuptools\nimport distutils\nfrom setuptools.py31compat import TemporaryDirectory\n\nfrom pkg_resources import parse_requirements\n\n__all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n 'build_wheel',\n 'build_sdist',\n '__legacy__',\n 'SetupRequirementsError']\n\n\nclass SetupRequirementsError(BaseException):\n def __init__(self, specifiers):\n self.specifiers = specifiers\n\n\nclass Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n specifier_list = list(map(str, parse_requirements(specifiers)))\n\n raise SetupRequirementsError(specifier_list)\n\n @classmethod\n @contextlib.contextmanager\n def patch(cls):\n \"\"\"\n Replace\n distutils.dist.Distribution with this class\n for the duration of this context.\n \"\"\"\n orig = distutils.core.Distribution\n distutils.core.Distribution = cls\n try:\n yield\n finally:\n distutils.core.Distribution = orig\n\n\[email protected]\ndef no_install_setup_requires():\n \"\"\"Temporarily disable installing setup_requires\n\n Under PEP 517, the backend reports build dependencies to the frontend,\n and the frontend is responsible for ensuring they're installed.\n So setuptools (acting as a backend) should not try to install them.\n \"\"\"\n orig = setuptools._install_setup_requires\n setuptools._install_setup_requires = lambda attrs: None\n try:\n yield\n finally:\n setuptools._install_setup_requires = orig\n\n\ndef _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n a byte string, not Unicode) as distutils checks for the\n exact type str.\n \"\"\"\n if sys.version_info[0] == 2 and not isinstance(s, str):\n # Assume it's Unicode, as that's what the PEP says\n # should be provided.\n return s.encode(sys.getfilesystemencoding())\n return s\n\n\ndef _get_immediate_subdirectories(a_dir):\n return [name for name in os.listdir(a_dir)\n if os.path.isdir(os.path.join(a_dir, name))]\n\n\ndef _file_with_extension(directory, extension):\n matching = (\n f for f in os.listdir(directory)\n if f.endswith(extension)\n )\n file, = matching\n return file\n\n\ndef _open_setup_script(setup_script):\n if not os.path.exists(setup_script):\n # Supply a default setup.py\n return io.StringIO(u\"from setuptools import setup; setup()\")\n\n return getattr(tokenize, 'open', open)(setup_script)\n\n\nclass _BuildMetaBackend(object):\n\n def _fix_config(self, config_settings):\n config_settings = config_settings or {}\n config_settings.setdefault('--global-option', [])\n return config_settings\n\n def _get_build_requires(self, config_settings, requirements):\n config_settings = self._fix_config(config_settings)\n\n sys.argv = sys.argv[:1] + ['egg_info'] + \\\n config_settings[\"--global-option\"]\n try:\n with Distribution.patch():\n self.run_setup()\n except SetupRequirementsError as e:\n requirements += e.specifiers\n\n return requirements\n\n def run_setup(self, setup_script='setup.py'):\n # Note that we can reuse our build directory between calls\n # Correctness comes first, then optimization later\n __file__ = setup_script\n __name__ = '__main__'\n\n with _open_setup_script(__file__) as f:\n code = f.read().replace(r'\\r\\n', r'\\n')\n\n exec(compile(code, __file__, 'exec'), locals())\n\n def get_requires_for_build_wheel(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(\n config_settings, requirements=['wheel'])\n\n def get_requires_for_build_sdist(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=[])\n\n def prepare_metadata_for_build_wheel(self, metadata_directory,\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n with no_install_setup_requires():\n self.run_setup()\n\n dist_info_directory = metadata_directory\n while True:\n dist_infos = [f for f in os.listdir(dist_info_directory)\n if f.endswith('.dist-info')]\n\n if (\n len(dist_infos) == 0 and\n len(_get_immediate_subdirectories(dist_info_directory)) == 1\n ):\n\n dist_info_directory = os.path.join(\n dist_info_directory, os.listdir(dist_info_directory)[0])\n continue\n\n assert len(dist_infos) == 1\n break\n\n # PEP 517 requires that the .dist-info directory be placed in the\n # metadata_directory. To comply, we MUST copy the directory to the root\n if dist_info_directory != metadata_directory:\n shutil.move(\n os.path.join(dist_info_directory, dist_infos[0]),\n metadata_directory)\n shutil.rmtree(dist_info_directory, ignore_errors=True)\n\n return dist_infos[0]\n\n def _build_with_temp_dir(self, setup_command, result_extension,\n result_directory, config_settings):\n config_settings = self._fix_config(config_settings)\n result_directory = os.path.abspath(result_directory)\n\n # Build in a temporary directory, then copy to the target.\n os.makedirs(result_directory, exist_ok=True)\n with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:\n sys.argv = (sys.argv[:1] + setup_command +\n ['--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n with no_install_setup_requires():\n self.run_setup()\n\n result_basename = _file_with_extension(\n tmp_dist_dir, result_extension)\n result_path = os.path.join(result_directory, result_basename)\n if os.path.exists(result_path):\n # os.rename will fail overwriting on non-Unix.\n os.remove(result_path)\n os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)\n\n return result_basename\n\n def build_wheel(self, wheel_directory, config_settings=None,\n metadata_directory=None):\n return self._build_with_temp_dir(['bdist_wheel'], '.whl',\n wheel_directory, config_settings)\n\n def build_sdist(self, sdist_directory, config_settings=None):\n return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],\n '.tar.gz', sdist_directory,\n config_settings)\n\n\nclass _BuildMetaLegacyBackend(_BuildMetaBackend):\n \"\"\"Compatibility backend for setuptools\n\n This is a version of setuptools.build_meta that endeavors\n to maintain backwards\n compatibility with pre-PEP 517 modes of invocation. It\n exists as a temporary\n bridge between the old packaging mechanism and the new\n packaging mechanism,\n and will eventually be removed.\n \"\"\"\n def run_setup(self, setup_script='setup.py'):\n # In order to maintain compatibility with scripts assuming that\n # the setup.py script is in a directory on the PYTHONPATH, inject\n # '' into sys.path. (pypa/setuptools#1642)\n sys_path = list(sys.path) # Save the original path\n\n script_dir = os.path.dirname(os.path.abspath(setup_script))\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n\n # Some setup.py scripts (e.g. in pygame and numpy) use sys.argv[0] to\n # get the directory of the source code. They expect it to refer to the\n # setup.py script.\n sys_argv_0 = sys.argv[0]\n sys.argv[0] = setup_script\n\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n finally:\n # While PEP 517 frontends should be calling each hook in a fresh\n # subprocess according to the standard (and thus it should not be\n # strictly necessary to restore the old sys.path), we'll restore\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n sys.argv[0] = sys_argv_0\n\n\n# The primary backend\n_BACKEND = _BuildMetaBackend()\n\nget_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel\nget_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist\nprepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel\nbuild_wheel = _BACKEND.build_wheel\nbuild_sdist = _BACKEND.build_sdist\n\n\n# The legacy backend\n__legacy__ = _BuildMetaLegacyBackend()\n", "path": "setuptools/build_meta.py"}]}
| 3,446 | 364 |
gh_patches_debug_36499
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-380
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue with metric arithmetics
I'm trying to define my metric as
```python
from ignite.metrics import Accuracy
accuracy = Accuracy()
error_metric = 1.0 - accuracy
```
and I got the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-70-c4c69e70a6d5> in <module>()
2
3 accuracy = Accuracy()
----> 4 error_metric = 1.0 - accuracy
TypeError: unsupported operand type(s) for -: 'float' and 'Accuracy'
```
But I can define
```python
from ignite.metrics import Accuracy
accuracy = Accuracy()
error_metric = (accuracy - 1.0) * -1.0
```
cc @zasdfgbnm
Issue with metric arithmetics
I'm trying to define my metric as
```python
from ignite.metrics import Accuracy
accuracy = Accuracy()
error_metric = 1.0 - accuracy
```
and I got the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-70-c4c69e70a6d5> in <module>()
2
3 accuracy = Accuracy()
----> 4 error_metric = 1.0 - accuracy
TypeError: unsupported operand type(s) for -: 'float' and 'Accuracy'
```
But I can define
```python
from ignite.metrics import Accuracy
accuracy = Accuracy()
error_metric = (accuracy - 1.0) * -1.0
```
cc @zasdfgbnm
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/metric.py`
Content:
```
1 from abc import ABCMeta, abstractmethod
2 from ignite._six import with_metaclass
3 from ignite.engine import Events
4 import torch
5
6
7 class Metric(with_metaclass(ABCMeta, object)):
8 """
9 Base class for all Metrics.
10
11 Args:
12 output_transform (callable, optional): a callable that is used to transform the
13 :class:`ignite.engine.Engine`'s `process_function`'s output into the
14 form expected by the metric. This can be useful if, for example, you have a multi-output model and
15 you want to compute the metric with respect to one of the outputs.
16
17 """
18
19 def __init__(self, output_transform=lambda x: x):
20 self._output_transform = output_transform
21 self.reset()
22
23 @abstractmethod
24 def reset(self):
25 """
26 Resets the metric to to it's initial state.
27
28 This is called at the start of each epoch.
29 """
30 pass
31
32 @abstractmethod
33 def update(self, output):
34 """
35 Updates the metric's state using the passed batch output.
36
37 This is called once for each batch.
38
39 Args:
40 output: the is the output from the engine's process function
41 """
42 pass
43
44 @abstractmethod
45 def compute(self):
46 """
47 Computes the metric based on it's accumulated state.
48
49 This is called at the end of each epoch.
50
51 Returns:
52 Any: the actual quantity of interest
53
54 Raises:
55 NotComputableError: raised when the metric cannot be computed
56 """
57 pass
58
59 def started(self, engine):
60 self.reset()
61
62 @torch.no_grad()
63 def iteration_completed(self, engine):
64 output = self._output_transform(engine.state.output)
65 self.update(output)
66
67 def completed(self, engine, name):
68 engine.state.metrics[name] = self.compute()
69
70 def attach(self, engine, name):
71 engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)
72 if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):
73 engine.add_event_handler(Events.EPOCH_STARTED, self.started)
74 if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
75 engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
76
77 def __add__(self, other):
78 from ignite.metrics import MetricsLambda
79 return MetricsLambda(lambda x, y: x + y, self, other)
80
81 def __sub__(self, other):
82 from ignite.metrics import MetricsLambda
83 return MetricsLambda(lambda x, y: x - y, self, other)
84
85 def __mul__(self, other):
86 from ignite.metrics import MetricsLambda
87 return MetricsLambda(lambda x, y: x * y, self, other)
88
89 def __pow__(self, other):
90 from ignite.metrics import MetricsLambda
91 return MetricsLambda(lambda x, y: x ** y, self, other)
92
93 def __mod__(self, other):
94 from ignite.metrics import MetricsLambda
95 return MetricsLambda(lambda x, y: x % y, self, other)
96
97 def __div__(self, other):
98 from ignite.metrics import MetricsLambda
99 return MetricsLambda(lambda x, y: x.__div__(y), self, other)
100
101 def __truediv__(self, other):
102 from ignite.metrics import MetricsLambda
103 return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)
104
105 def __floordiv__(self, other):
106 from ignite.metrics import MetricsLambda
107 return MetricsLambda(lambda x, y: x // y, self, other)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py
--- a/ignite/metrics/metric.py
+++ b/ignite/metrics/metric.py
@@ -78,18 +78,34 @@
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x + y, self, other)
+ def __radd__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x + y, other, self)
+
def __sub__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x - y, self, other)
+ def __rsub__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x - y, other, self)
+
def __mul__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x * y, self, other)
+ def __rmul__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x * y, other, self)
+
def __pow__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x ** y, self, other)
+ def __rpow__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x ** y, other, self)
+
def __mod__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x % y, self, other)
@@ -98,10 +114,18 @@
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x.__div__(y), self, other)
+ def __rdiv__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x.__div__(y), other, self)
+
def __truediv__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)
+ def __rtruediv__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)
+
def __floordiv__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x // y, self, other)
|
{"golden_diff": "diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py\n--- a/ignite/metrics/metric.py\n+++ b/ignite/metrics/metric.py\n@@ -78,18 +78,34 @@\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n \n+ def __radd__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x + y, other, self)\n+\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, self, other)\n \n+ def __rsub__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x - y, other, self)\n+\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n \n+ def __rmul__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x * y, other, self)\n+\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n \n+ def __rpow__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x ** y, other, self)\n+\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n@@ -98,10 +114,18 @@\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n \n+ def __rdiv__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n+\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n \n+ def __rtruediv__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n+\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x // y, self, other)\n", "issue": "Issue with metric arithmetics\nI'm trying to define my metric as \r\n```python\r\nfrom ignite.metrics import Accuracy\r\n\r\naccuracy = Accuracy()\r\nerror_metric = 1.0 - accuracy\r\n```\r\nand I got the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-70-c4c69e70a6d5> in <module>()\r\n 2 \r\n 3 accuracy = Accuracy()\r\n----> 4 error_metric = 1.0 - accuracy\r\n\r\nTypeError: unsupported operand type(s) for -: 'float' and 'Accuracy'\r\n```\r\nBut I can define \r\n```python\r\nfrom ignite.metrics import Accuracy\r\n\r\naccuracy = Accuracy()\r\nerror_metric = (accuracy - 1.0) * -1.0\r\n```\r\n\r\ncc @zasdfgbnm \nIssue with metric arithmetics\nI'm trying to define my metric as \r\n```python\r\nfrom ignite.metrics import Accuracy\r\n\r\naccuracy = Accuracy()\r\nerror_metric = 1.0 - accuracy\r\n```\r\nand I got the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-70-c4c69e70a6d5> in <module>()\r\n 2 \r\n 3 accuracy = Accuracy()\r\n----> 4 error_metric = 1.0 - accuracy\r\n\r\nTypeError: unsupported operand type(s) for -: 'float' and 'Accuracy'\r\n```\r\nBut I can define \r\n```python\r\nfrom ignite.metrics import Accuracy\r\n\r\naccuracy = Accuracy()\r\nerror_metric = (accuracy - 1.0) * -1.0\r\n```\r\n\r\ncc @zasdfgbnm \n", "before_files": [{"content": "from abc import ABCMeta, abstractmethod\nfrom ignite._six import with_metaclass\nfrom ignite.engine import Events\nimport torch\n\n\nclass Metric(with_metaclass(ABCMeta, object)):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x):\n self._output_transform = output_transform\n self.reset()\n\n @abstractmethod\n def reset(self):\n \"\"\"\n Resets the metric to to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output):\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self):\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest\n\n Raises:\n NotComputableError: raised when the metric cannot be computed\n \"\"\"\n pass\n\n def started(self, engine):\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine):\n output = self._output_transform(engine.state.output)\n self.update(output)\n\n def completed(self, engine, name):\n engine.state.metrics[name] = self.compute()\n\n def attach(self, engine, name):\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, self, other)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x // y, self, other)\n", "path": "ignite/metrics/metric.py"}], "after_files": [{"content": "from abc import ABCMeta, abstractmethod\nfrom ignite._six import with_metaclass\nfrom ignite.engine import Events\nimport torch\n\n\nclass Metric(with_metaclass(ABCMeta, object)):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x):\n self._output_transform = output_transform\n self.reset()\n\n @abstractmethod\n def reset(self):\n \"\"\"\n Resets the metric to to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output):\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self):\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest\n\n Raises:\n NotComputableError: raised when the metric cannot be computed\n \"\"\"\n pass\n\n def started(self, engine):\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine):\n output = self._output_transform(engine.state.output)\n self.update(output)\n\n def completed(self, engine, name):\n engine.state.metrics[name] = self.compute()\n\n def attach(self, engine, name):\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __radd__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, other, self)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, self, other)\n\n def __rsub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, other, self)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __rmul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, other, self)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __rpow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, other, self)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __rdiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __rtruediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x // y, self, other)\n", "path": "ignite/metrics/metric.py"}]}
| 1,598 | 581 |
gh_patches_debug_30714
|
rasdani/github-patches
|
git_diff
|
mindsdb__mindsdb-1328
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mess in integrations setup
Currently we have two issues with integrations setup:
1. 'setup' is call for all integrations, regardless 'publish' flag.
2. if integration exists in db and in config file with same name, then only integration from db will be setup. That bad, because any integration from config file will be copied to db right after mindsdb start, and any changes in config file after that will have no effect.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindsdb/__main__.py`
Content:
```
1 import atexit
2 import traceback
3 import sys
4 import os
5 import time
6 import asyncio
7 import signal
8
9 import torch.multiprocessing as mp
10
11 from mindsdb.utilities.config import Config, STOP_THREADS_EVENT
12 from mindsdb.interfaces.model.model_interface import ray_based, ModelInterface
13 from mindsdb.api.http.start import start as start_http
14 from mindsdb.api.mysql.start import start as start_mysql
15 from mindsdb.api.mongo.start import start as start_mongo
16 from mindsdb.utilities.ps import is_pid_listen_port, get_child_pids
17 from mindsdb.utilities.functions import args_parse
18 from mindsdb.interfaces.database.database import DatabaseWrapper
19 from mindsdb.utilities.log import log
20
21 from mindsdb.interfaces.database.integrations import get_db_integrations
22
23 COMPANY_ID = os.environ.get('MINDSDB_COMPANY_ID', None)
24
25
26 def close_api_gracefully(apis):
27 try:
28 for api in apis.values():
29 process = api['process']
30 childs = get_child_pids(process.pid)
31 for p in childs:
32 try:
33 os.kill(p, signal.SIGTERM)
34 except Exception:
35 p.kill()
36 sys.stdout.flush()
37 process.terminate()
38 process.join()
39 sys.stdout.flush()
40 if ray_based:
41 os.system('ray stop --force')
42 except KeyboardInterrupt:
43 sys.exit(0)
44
45
46 if __name__ == '__main__':
47 mp.freeze_support()
48 args = args_parse()
49 config = Config()
50
51 if args.verbose is True:
52 # Figure this one out later
53 pass
54
55 os.environ['DEFAULT_LOG_LEVEL'] = config['log']['level']['console']
56 os.environ['LIGHTWOOD_LOG_LEVEL'] = config['log']['level']['console']
57
58 # Switch to this once the native interface has it's own thread :/
59 ctx = mp.get_context('spawn')
60
61 from mindsdb.__about__ import __version__ as mindsdb_version
62 print(f'Version {mindsdb_version}')
63
64 print(f'Configuration file:\n {config.config_path}')
65 print(f"Storage path:\n {config['paths']['root']}")
66
67 # @TODO Backwards compatibiltiy for tests, remove later
68 from mindsdb.interfaces.database.integrations import add_db_integration, get_db_integration
69 dbw = DatabaseWrapper(COMPANY_ID)
70 model_interface = ModelInterface()
71 raw_model_data_arr = model_interface.get_models()
72 model_data_arr = []
73 for model in raw_model_data_arr:
74 if model['status'] == 'complete':
75 x = model_interface.get_model_data(model['name'])
76 try:
77 model_data_arr.append(model_interface.get_model_data(model['name']))
78 except Exception:
79 pass
80
81 is_cloud = config.get('cloud', False)
82 if not is_cloud:
83 for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):
84 print(f"Setting up integration: {integration_name}")
85 dbw.setup_integration(integration_name)
86
87 for integration_name in config.get('integrations', {}):
88 print(f'Adding: {integration_name}')
89 try:
90 it = get_db_integration(integration_name, None)
91 if it is None:
92 add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud
93 if config['integrations'][integration_name].get('publish', False) and not is_cloud:
94 dbw.setup_integration(integration_name)
95 dbw.register_predictors(model_data_arr, integration_name=integration_name)
96 except Exception as e:
97 log.error(f'\n\nError: {e} adding database integration {integration_name}\n\n')
98
99 del model_interface
100 del dbw
101 # @TODO Backwards compatibiltiy for tests, remove later
102
103 if args.api is None:
104 api_arr = ['http', 'mysql']
105 else:
106 api_arr = args.api.split(',')
107
108 apis = {
109 api: {
110 'port': config['api'][api]['port'],
111 'process': None,
112 'started': False
113 } for api in api_arr
114 }
115
116 start_functions = {
117 'http': start_http,
118 'mysql': start_mysql,
119 'mongodb': start_mongo
120 }
121
122 for api_name, api_data in apis.items():
123 if api_data['started']:
124 continue
125 print(f'{api_name} API: starting...')
126 try:
127 if api_name == 'http':
128 p = ctx.Process(target=start_functions[api_name], args=(args.verbose, args.no_studio))
129 else:
130 p = ctx.Process(target=start_functions[api_name], args=(args.verbose,))
131 p.start()
132 api_data['process'] = p
133 except Exception as e:
134 log.error(f'Failed to start {api_name} API with exception {e}\n{traceback.format_exc()}')
135 close_api_gracefully(apis)
136 raise e
137
138 atexit.register(close_api_gracefully, apis=apis)
139
140 async def wait_api_start(api_name, pid, port):
141 timeout = 60
142 start_time = time.time()
143 started = is_pid_listen_port(pid, port)
144 while (time.time() - start_time) < timeout and started is False:
145 await asyncio.sleep(0.5)
146 started = is_pid_listen_port(pid, port)
147 return api_name, port, started
148
149 async def wait_apis_start():
150 futures = [
151 wait_api_start(api_name, api_data['process'].pid, api_data['port'])
152 for api_name, api_data in apis.items() if 'port' in api_data
153 ]
154 for i, future in enumerate(asyncio.as_completed(futures)):
155 api_name, port, started = await future
156 if started:
157 print(f"{api_name} API: started on {port}")
158 else:
159 log.error(f"ERROR: {api_name} API cant start on {port}")
160
161 ioloop = asyncio.get_event_loop()
162 ioloop.run_until_complete(wait_apis_start())
163 ioloop.close()
164
165 try:
166 for api_data in apis.values():
167 api_data['process'].join()
168 except KeyboardInterrupt:
169 print('Stopping stream integrations...')
170 STOP_THREADS_EVENT.set()
171 print('Closing app...')
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py
--- a/mindsdb/__main__.py
+++ b/mindsdb/__main__.py
@@ -82,17 +82,20 @@
if not is_cloud:
for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):
print(f"Setting up integration: {integration_name}")
- dbw.setup_integration(integration_name)
+ if get_db_integration(integration_name, COMPANY_ID)['publish']:
+ # do setup and register only if it is 'publish' integration
+ dbw.setup_integration(integration_name)
+ dbw.register_predictors(model_data_arr, integration_name=integration_name)
for integration_name in config.get('integrations', {}):
print(f'Adding: {integration_name}')
try:
it = get_db_integration(integration_name, None)
- if it is None:
+ if it is None: # register and setup it only if it doesn't conflict with records in db
add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud
- if config['integrations'][integration_name].get('publish', False) and not is_cloud:
- dbw.setup_integration(integration_name)
- dbw.register_predictors(model_data_arr, integration_name=integration_name)
+ if config['integrations'][integration_name].get('publish', False) and not is_cloud:
+ dbw.setup_integration(integration_name)
+ dbw.register_predictors(model_data_arr, integration_name=integration_name)
except Exception as e:
log.error(f'\n\nError: {e} adding database integration {integration_name}\n\n')
|
{"golden_diff": "diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py\n--- a/mindsdb/__main__.py\n+++ b/mindsdb/__main__.py\n@@ -82,17 +82,20 @@\n if not is_cloud:\n for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):\n print(f\"Setting up integration: {integration_name}\")\n- dbw.setup_integration(integration_name)\n+ if get_db_integration(integration_name, COMPANY_ID)['publish']:\n+ # do setup and register only if it is 'publish' integration\n+ dbw.setup_integration(integration_name)\n+ dbw.register_predictors(model_data_arr, integration_name=integration_name)\n \n for integration_name in config.get('integrations', {}):\n print(f'Adding: {integration_name}')\n try:\n it = get_db_integration(integration_name, None)\n- if it is None:\n+ if it is None: # register and setup it only if it doesn't conflict with records in db\n add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud\n- if config['integrations'][integration_name].get('publish', False) and not is_cloud:\n- dbw.setup_integration(integration_name)\n- dbw.register_predictors(model_data_arr, integration_name=integration_name)\n+ if config['integrations'][integration_name].get('publish', False) and not is_cloud:\n+ dbw.setup_integration(integration_name)\n+ dbw.register_predictors(model_data_arr, integration_name=integration_name)\n except Exception as e:\n log.error(f'\\n\\nError: {e} adding database integration {integration_name}\\n\\n')\n", "issue": "Mess in integrations setup\nCurrently we have two issues with integrations setup:\r\n1. 'setup' is call for all integrations, regardless 'publish' flag.\r\n2. if integration exists in db and in config file with same name, then only integration from db will be setup. That bad, because any integration from config file will be copied to db right after mindsdb start, and any changes in config file after that will have no effect.\n", "before_files": [{"content": "import atexit\nimport traceback\nimport sys\nimport os\nimport time\nimport asyncio\nimport signal\n\nimport torch.multiprocessing as mp\n\nfrom mindsdb.utilities.config import Config, STOP_THREADS_EVENT\nfrom mindsdb.interfaces.model.model_interface import ray_based, ModelInterface\nfrom mindsdb.api.http.start import start as start_http\nfrom mindsdb.api.mysql.start import start as start_mysql\nfrom mindsdb.api.mongo.start import start as start_mongo\nfrom mindsdb.utilities.ps import is_pid_listen_port, get_child_pids\nfrom mindsdb.utilities.functions import args_parse\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\nfrom mindsdb.utilities.log import log\n\nfrom mindsdb.interfaces.database.integrations import get_db_integrations\n\nCOMPANY_ID = os.environ.get('MINDSDB_COMPANY_ID', None)\n\n\ndef close_api_gracefully(apis):\n try:\n for api in apis.values():\n process = api['process']\n childs = get_child_pids(process.pid)\n for p in childs:\n try:\n os.kill(p, signal.SIGTERM)\n except Exception:\n p.kill()\n sys.stdout.flush()\n process.terminate()\n process.join()\n sys.stdout.flush()\n if ray_based:\n os.system('ray stop --force')\n except KeyboardInterrupt:\n sys.exit(0)\n\n\nif __name__ == '__main__':\n mp.freeze_support()\n args = args_parse()\n config = Config()\n\n if args.verbose is True:\n # Figure this one out later\n pass\n\n os.environ['DEFAULT_LOG_LEVEL'] = config['log']['level']['console']\n os.environ['LIGHTWOOD_LOG_LEVEL'] = config['log']['level']['console']\n\n # Switch to this once the native interface has it's own thread :/\n ctx = mp.get_context('spawn')\n\n from mindsdb.__about__ import __version__ as mindsdb_version\n print(f'Version {mindsdb_version}')\n\n print(f'Configuration file:\\n {config.config_path}')\n print(f\"Storage path:\\n {config['paths']['root']}\")\n\n # @TODO Backwards compatibiltiy for tests, remove later\n from mindsdb.interfaces.database.integrations import add_db_integration, get_db_integration\n dbw = DatabaseWrapper(COMPANY_ID)\n model_interface = ModelInterface()\n raw_model_data_arr = model_interface.get_models()\n model_data_arr = []\n for model in raw_model_data_arr:\n if model['status'] == 'complete':\n x = model_interface.get_model_data(model['name'])\n try:\n model_data_arr.append(model_interface.get_model_data(model['name']))\n except Exception:\n pass\n\n is_cloud = config.get('cloud', False)\n if not is_cloud:\n for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):\n print(f\"Setting up integration: {integration_name}\")\n dbw.setup_integration(integration_name)\n\n for integration_name in config.get('integrations', {}):\n print(f'Adding: {integration_name}')\n try:\n it = get_db_integration(integration_name, None)\n if it is None:\n add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud\n if config['integrations'][integration_name].get('publish', False) and not is_cloud:\n dbw.setup_integration(integration_name)\n dbw.register_predictors(model_data_arr, integration_name=integration_name)\n except Exception as e:\n log.error(f'\\n\\nError: {e} adding database integration {integration_name}\\n\\n')\n\n del model_interface\n del dbw\n # @TODO Backwards compatibiltiy for tests, remove later\n\n if args.api is None:\n api_arr = ['http', 'mysql']\n else:\n api_arr = args.api.split(',')\n\n apis = {\n api: {\n 'port': config['api'][api]['port'],\n 'process': None,\n 'started': False\n } for api in api_arr\n }\n\n start_functions = {\n 'http': start_http,\n 'mysql': start_mysql,\n 'mongodb': start_mongo\n }\n\n for api_name, api_data in apis.items():\n if api_data['started']:\n continue\n print(f'{api_name} API: starting...')\n try:\n if api_name == 'http':\n p = ctx.Process(target=start_functions[api_name], args=(args.verbose, args.no_studio))\n else:\n p = ctx.Process(target=start_functions[api_name], args=(args.verbose,))\n p.start()\n api_data['process'] = p\n except Exception as e:\n log.error(f'Failed to start {api_name} API with exception {e}\\n{traceback.format_exc()}')\n close_api_gracefully(apis)\n raise e\n\n atexit.register(close_api_gracefully, apis=apis)\n\n async def wait_api_start(api_name, pid, port):\n timeout = 60\n start_time = time.time()\n started = is_pid_listen_port(pid, port)\n while (time.time() - start_time) < timeout and started is False:\n await asyncio.sleep(0.5)\n started = is_pid_listen_port(pid, port)\n return api_name, port, started\n\n async def wait_apis_start():\n futures = [\n wait_api_start(api_name, api_data['process'].pid, api_data['port'])\n for api_name, api_data in apis.items() if 'port' in api_data\n ]\n for i, future in enumerate(asyncio.as_completed(futures)):\n api_name, port, started = await future\n if started:\n print(f\"{api_name} API: started on {port}\")\n else:\n log.error(f\"ERROR: {api_name} API cant start on {port}\")\n\n ioloop = asyncio.get_event_loop()\n ioloop.run_until_complete(wait_apis_start())\n ioloop.close()\n\n try:\n for api_data in apis.values():\n api_data['process'].join()\n except KeyboardInterrupt:\n print('Stopping stream integrations...')\n STOP_THREADS_EVENT.set()\n print('Closing app...')\n", "path": "mindsdb/__main__.py"}], "after_files": [{"content": "import atexit\nimport traceback\nimport sys\nimport os\nimport time\nimport asyncio\nimport signal\n\nimport torch.multiprocessing as mp\n\nfrom mindsdb.utilities.config import Config, STOP_THREADS_EVENT\nfrom mindsdb.interfaces.model.model_interface import ray_based, ModelInterface\nfrom mindsdb.api.http.start import start as start_http\nfrom mindsdb.api.mysql.start import start as start_mysql\nfrom mindsdb.api.mongo.start import start as start_mongo\nfrom mindsdb.utilities.ps import is_pid_listen_port, get_child_pids\nfrom mindsdb.utilities.functions import args_parse\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\nfrom mindsdb.utilities.log import log\n\nfrom mindsdb.interfaces.database.integrations import get_db_integrations\n\nCOMPANY_ID = os.environ.get('MINDSDB_COMPANY_ID', None)\n\n\ndef close_api_gracefully(apis):\n try:\n for api in apis.values():\n process = api['process']\n childs = get_child_pids(process.pid)\n for p in childs:\n try:\n os.kill(p, signal.SIGTERM)\n except Exception:\n p.kill()\n sys.stdout.flush()\n process.terminate()\n process.join()\n sys.stdout.flush()\n if ray_based:\n os.system('ray stop --force')\n except KeyboardInterrupt:\n sys.exit(0)\n\n\nif __name__ == '__main__':\n mp.freeze_support()\n args = args_parse()\n config = Config()\n\n if args.verbose is True:\n # Figure this one out later\n pass\n\n os.environ['DEFAULT_LOG_LEVEL'] = config['log']['level']['console']\n os.environ['LIGHTWOOD_LOG_LEVEL'] = config['log']['level']['console']\n\n # Switch to this once the native interface has it's own thread :/\n ctx = mp.get_context('spawn')\n\n from mindsdb.__about__ import __version__ as mindsdb_version\n print(f'Version {mindsdb_version}')\n\n print(f'Configuration file:\\n {config.config_path}')\n print(f\"Storage path:\\n {config['paths']['root']}\")\n\n # @TODO Backwards compatibiltiy for tests, remove later\n from mindsdb.interfaces.database.integrations import add_db_integration, get_db_integration\n dbw = DatabaseWrapper(COMPANY_ID)\n model_interface = ModelInterface()\n raw_model_data_arr = model_interface.get_models()\n model_data_arr = []\n for model in raw_model_data_arr:\n if model['status'] == 'complete':\n x = model_interface.get_model_data(model['name'])\n try:\n model_data_arr.append(model_interface.get_model_data(model['name']))\n except Exception:\n pass\n\n is_cloud = config.get('cloud', False)\n if not is_cloud:\n for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):\n print(f\"Setting up integration: {integration_name}\")\n if get_db_integration(integration_name, COMPANY_ID)['publish']:\n # do setup and register only if it is 'publish' integration\n dbw.setup_integration(integration_name)\n dbw.register_predictors(model_data_arr, integration_name=integration_name)\n\n for integration_name in config.get('integrations', {}):\n print(f'Adding: {integration_name}')\n try:\n it = get_db_integration(integration_name, None)\n if it is None: # register and setup it only if it doesn't conflict with records in db\n add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud\n if config['integrations'][integration_name].get('publish', False) and not is_cloud:\n dbw.setup_integration(integration_name)\n dbw.register_predictors(model_data_arr, integration_name=integration_name)\n except Exception as e:\n log.error(f'\\n\\nError: {e} adding database integration {integration_name}\\n\\n')\n\n del model_interface\n del dbw\n # @TODO Backwards compatibiltiy for tests, remove later\n\n if args.api is None:\n api_arr = ['http', 'mysql']\n else:\n api_arr = args.api.split(',')\n\n apis = {\n api: {\n 'port': config['api'][api]['port'],\n 'process': None,\n 'started': False\n } for api in api_arr\n }\n\n start_functions = {\n 'http': start_http,\n 'mysql': start_mysql,\n 'mongodb': start_mongo\n }\n\n for api_name, api_data in apis.items():\n if api_data['started']:\n continue\n print(f'{api_name} API: starting...')\n try:\n if api_name == 'http':\n p = ctx.Process(target=start_functions[api_name], args=(args.verbose, args.no_studio))\n else:\n p = ctx.Process(target=start_functions[api_name], args=(args.verbose,))\n p.start()\n api_data['process'] = p\n except Exception as e:\n log.error(f'Failed to start {api_name} API with exception {e}\\n{traceback.format_exc()}')\n close_api_gracefully(apis)\n raise e\n\n atexit.register(close_api_gracefully, apis=apis)\n\n async def wait_api_start(api_name, pid, port):\n timeout = 60\n start_time = time.time()\n started = is_pid_listen_port(pid, port)\n while (time.time() - start_time) < timeout and started is False:\n await asyncio.sleep(0.5)\n started = is_pid_listen_port(pid, port)\n return api_name, port, started\n\n async def wait_apis_start():\n futures = [\n wait_api_start(api_name, api_data['process'].pid, api_data['port'])\n for api_name, api_data in apis.items() if 'port' in api_data\n ]\n for i, future in enumerate(asyncio.as_completed(futures)):\n api_name, port, started = await future\n if started:\n print(f\"{api_name} API: started on {port}\")\n else:\n log.error(f\"ERROR: {api_name} API cant start on {port}\")\n\n ioloop = asyncio.get_event_loop()\n ioloop.run_until_complete(wait_apis_start())\n ioloop.close()\n\n try:\n for api_data in apis.values():\n api_data['process'].join()\n except KeyboardInterrupt:\n print('Stopping stream integrations...')\n STOP_THREADS_EVENT.set()\n print('Closing app...')\n", "path": "mindsdb/__main__.py"}]}
| 2,106 | 390 |
gh_patches_debug_11137
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-3035
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tuple index out of range of threading.py
After upgrading from ddtrace==0.46.0 to version ddtrace==0.55.4 my service crash with IndexError.
```
Traceback (most recent call last):
File "/my_service/services/base_service.py", line 105, in run
futures.append(executor.submit(fn=self._single_entry_point_run, entry_point=entry_point))
File "/my_service/venv/lib/python3.7/site-packages/ddtrace/contrib/futures/threading.py", line 26, in _wrap_submit
fn = args[0]
IndexError: tuple index out of range
```
I'm facing this issue even when setting futures=False.
`patch_all(celery=True, django=True, psycopg2=True, redis=True, futures=True)`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/futures/threading.py`
Content:
```
1 import ddtrace
2
3
4 def _wrap_submit(func, instance, args, kwargs):
5 """
6 Wrap `Executor` method used to submit a work executed in another
7 thread. This wrapper ensures that a new `Context` is created and
8 properly propagated using an intermediate function.
9 """
10 # If there isn't a currently active context, then do not create one
11 # DEV: Calling `.active()` when there isn't an active context will create a new context
12 # DEV: We need to do this in case they are either:
13 # - Starting nested futures
14 # - Starting futures from outside of an existing context
15 #
16 # In either of these cases we essentially will propagate the wrong context between futures
17 #
18 # The resolution is to not create/propagate a new context if one does not exist, but let the
19 # future's thread create the context instead.
20 current_ctx = None
21 if ddtrace.tracer.context_provider._has_active_context():
22 current_ctx = ddtrace.tracer.context_provider.active()
23
24 # extract the target function that must be executed in
25 # a new thread and the `target` arguments
26 fn = args[0]
27 fn_args = args[1:]
28 return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)
29
30
31 def _wrap_execution(ctx, fn, args, kwargs):
32 """
33 Intermediate target function that is executed in a new thread;
34 it receives the original function with arguments and keyword
35 arguments, including our tracing `Context`. The current context
36 provider sets the Active context in a thread local storage
37 variable because it's outside the asynchronous loop.
38 """
39 if ctx is not None:
40 ddtrace.tracer.context_provider.activate(ctx)
41 return fn(*args, **kwargs)
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ddtrace/contrib/futures/threading.py b/ddtrace/contrib/futures/threading.py
--- a/ddtrace/contrib/futures/threading.py
+++ b/ddtrace/contrib/futures/threading.py
@@ -21,10 +21,12 @@
if ddtrace.tracer.context_provider._has_active_context():
current_ctx = ddtrace.tracer.context_provider.active()
- # extract the target function that must be executed in
- # a new thread and the `target` arguments
- fn = args[0]
- fn_args = args[1:]
+ # The target function can be provided as a kwarg argument "fn" or the first positional argument
+ if "fn" in kwargs:
+ fn = kwargs.pop("fn")
+ fn_args = args
+ else:
+ fn, fn_args = args[0], args[1:]
return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)
|
{"golden_diff": "diff --git a/ddtrace/contrib/futures/threading.py b/ddtrace/contrib/futures/threading.py\n--- a/ddtrace/contrib/futures/threading.py\n+++ b/ddtrace/contrib/futures/threading.py\n@@ -21,10 +21,12 @@\n if ddtrace.tracer.context_provider._has_active_context():\n current_ctx = ddtrace.tracer.context_provider.active()\n \n- # extract the target function that must be executed in\n- # a new thread and the `target` arguments\n- fn = args[0]\n- fn_args = args[1:]\n+ # The target function can be provided as a kwarg argument \"fn\" or the first positional argument\n+ if \"fn\" in kwargs:\n+ fn = kwargs.pop(\"fn\")\n+ fn_args = args\n+ else:\n+ fn, fn_args = args[0], args[1:]\n return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)\n", "issue": "tuple index out of range of threading.py\nAfter upgrading from ddtrace==0.46.0 to version ddtrace==0.55.4 my service crash with IndexError.\r\n```\r\nTraceback (most recent call last):\r\n File \"/my_service/services/base_service.py\", line 105, in run\r\n futures.append(executor.submit(fn=self._single_entry_point_run, entry_point=entry_point))\r\n File \"/my_service/venv/lib/python3.7/site-packages/ddtrace/contrib/futures/threading.py\", line 26, in _wrap_submit\r\n fn = args[0]\r\nIndexError: tuple index out of range\r\n```\r\n\r\nI'm facing this issue even when setting futures=False.\r\n`patch_all(celery=True, django=True, psycopg2=True, redis=True, futures=True)`\r\n\n", "before_files": [{"content": "import ddtrace\n\n\ndef _wrap_submit(func, instance, args, kwargs):\n \"\"\"\n Wrap `Executor` method used to submit a work executed in another\n thread. This wrapper ensures that a new `Context` is created and\n properly propagated using an intermediate function.\n \"\"\"\n # If there isn't a currently active context, then do not create one\n # DEV: Calling `.active()` when there isn't an active context will create a new context\n # DEV: We need to do this in case they are either:\n # - Starting nested futures\n # - Starting futures from outside of an existing context\n #\n # In either of these cases we essentially will propagate the wrong context between futures\n #\n # The resolution is to not create/propagate a new context if one does not exist, but let the\n # future's thread create the context instead.\n current_ctx = None\n if ddtrace.tracer.context_provider._has_active_context():\n current_ctx = ddtrace.tracer.context_provider.active()\n\n # extract the target function that must be executed in\n # a new thread and the `target` arguments\n fn = args[0]\n fn_args = args[1:]\n return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)\n\n\ndef _wrap_execution(ctx, fn, args, kwargs):\n \"\"\"\n Intermediate target function that is executed in a new thread;\n it receives the original function with arguments and keyword\n arguments, including our tracing `Context`. The current context\n provider sets the Active context in a thread local storage\n variable because it's outside the asynchronous loop.\n \"\"\"\n if ctx is not None:\n ddtrace.tracer.context_provider.activate(ctx)\n return fn(*args, **kwargs)\n", "path": "ddtrace/contrib/futures/threading.py"}], "after_files": [{"content": "import ddtrace\n\n\ndef _wrap_submit(func, instance, args, kwargs):\n \"\"\"\n Wrap `Executor` method used to submit a work executed in another\n thread. This wrapper ensures that a new `Context` is created and\n properly propagated using an intermediate function.\n \"\"\"\n # If there isn't a currently active context, then do not create one\n # DEV: Calling `.active()` when there isn't an active context will create a new context\n # DEV: We need to do this in case they are either:\n # - Starting nested futures\n # - Starting futures from outside of an existing context\n #\n # In either of these cases we essentially will propagate the wrong context between futures\n #\n # The resolution is to not create/propagate a new context if one does not exist, but let the\n # future's thread create the context instead.\n current_ctx = None\n if ddtrace.tracer.context_provider._has_active_context():\n current_ctx = ddtrace.tracer.context_provider.active()\n\n # The target function can be provided as a kwarg argument \"fn\" or the first positional argument\n if \"fn\" in kwargs:\n fn = kwargs.pop(\"fn\")\n fn_args = args\n else:\n fn, fn_args = args[0], args[1:]\n return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)\n\n\ndef _wrap_execution(ctx, fn, args, kwargs):\n \"\"\"\n Intermediate target function that is executed in a new thread;\n it receives the original function with arguments and keyword\n arguments, including our tracing `Context`. The current context\n provider sets the Active context in a thread local storage\n variable because it's outside the asynchronous loop.\n \"\"\"\n if ctx is not None:\n ddtrace.tracer.context_provider.activate(ctx)\n return fn(*args, **kwargs)\n", "path": "ddtrace/contrib/futures/threading.py"}]}
| 895 | 215 |
gh_patches_debug_20162
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-120
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Default bucket UUID doesn't have dashes
I've seen that default record ID's have got dashes whereas default bucket id doesn't.
Does it makes sense to try to be consistent here?
```
$ http GET http://localhost:8888/v1/buckets/e93a0bb5b7d16d4f9bfd81b6d737271c -v --auth 'mary:marypassword'
{
"data": {
"id": "e93a0bb5b7d16d4f9bfd81b6d737271c",
"last_modified": 1436191171386
},
[...]
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/buckets.py`
Content:
```
1 from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
2 from pyramid.security import NO_PERMISSION_REQUIRED
3 from pyramid.view import view_config
4
5 from cliquet import resource
6 from cliquet.utils import hmac_digest, build_request
7
8 from kinto.views import NameGenerator
9
10
11 def create_bucket(request, bucket_id):
12 """Create a bucket if it doesn't exists."""
13 bucket_put = (request.method.lower() == 'put' and
14 request.path.endswith('buckets/default'))
15
16 if not bucket_put:
17 subrequest = build_request(request, {
18 'method': 'PUT',
19 'path': '/buckets/%s' % bucket_id,
20 'body': {"data": {}},
21 'headers': {'If-None-Match': '*'.encode('utf-8')}
22 })
23
24 try:
25 request.invoke_subrequest(subrequest)
26 except HTTPPreconditionFailed:
27 # The bucket already exists
28 pass
29
30
31 def create_collection(request, bucket_id):
32 subpath = request.matchdict['subpath']
33 if subpath.startswith('/collections/'):
34 collection_id = subpath.split('/')[2]
35 collection_put = (request.method.lower() == 'put' and
36 request.path.endswith(collection_id))
37 if not collection_put:
38 subrequest = build_request(request, {
39 'method': 'PUT',
40 'path': '/buckets/%s/collections/%s' % (
41 bucket_id, collection_id),
42 'body': {"data": {}},
43 'headers': {'If-None-Match': '*'.encode('utf-8')}
44 })
45 try:
46 request.invoke_subrequest(subrequest)
47 except HTTPPreconditionFailed:
48 # The collection already exists
49 pass
50
51
52 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
53 def default_bucket(request):
54 if getattr(request, 'prefixed_userid', None) is None:
55 raise HTTPForbidden # Pass through the forbidden_view_config
56
57 settings = request.registry.settings
58 hmac_secret = settings['cliquet.userid_hmac_secret']
59 # Build the user unguessable bucket_id UUID from its user_id
60 bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
61 path = request.path.replace('default', bucket_id)
62 querystring = request.url[(request.url.index(request.path) +
63 len(request.path)):]
64
65 # Make sure bucket exists
66 create_bucket(request, bucket_id)
67
68 # Make sure the collection exists
69 create_collection(request, bucket_id)
70
71 subrequest = build_request(request, {
72 'method': request.method,
73 'path': path + querystring,
74 'body': request.body
75 })
76
77 return request.invoke_subrequest(subrequest)
78
79
80 @resource.register(name='bucket',
81 collection_methods=('GET',),
82 collection_path='/buckets',
83 record_path='/buckets/{{id}}')
84 class Bucket(resource.ProtectedResource):
85 permissions = ('read', 'write', 'collection:create', 'group:create')
86
87 def __init__(self, *args, **kwargs):
88 super(Bucket, self).__init__(*args, **kwargs)
89 self.collection.id_generator = NameGenerator()
90
91 def get_parent_id(self, request):
92 # Buckets are not isolated by user, unlike Cliquet resources.
93 return ''
94
95 def delete(self):
96 result = super(Bucket, self).delete()
97
98 # Delete groups.
99 storage = self.collection.storage
100 parent_id = '/buckets/%s' % self.record_id
101 storage.delete_all(collection_id='group', parent_id=parent_id)
102
103 # Delete collections.
104 deleted = storage.delete_all(collection_id='collection',
105 parent_id=parent_id)
106
107 # Delete records.
108 id_field = self.collection.id_field
109 for collection in deleted:
110 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
111 collection[id_field])
112 storage.delete_all(collection_id='record', parent_id=parent_id)
113
114 return result
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -1,3 +1,6 @@
+from six import text_type
+from uuid import UUID
+
from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
from pyramid.security import NO_PERMISSION_REQUIRED
from pyramid.view import view_config
@@ -57,7 +60,8 @@
settings = request.registry.settings
hmac_secret = settings['cliquet.userid_hmac_secret']
# Build the user unguessable bucket_id UUID from its user_id
- bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
+ digest = hmac_digest(hmac_secret, request.prefixed_userid)
+ bucket_id = text_type(UUID(digest[:32]))
path = request.path.replace('default', bucket_id)
querystring = request.url[(request.url.index(request.path) +
len(request.path)):]
|
{"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -1,3 +1,6 @@\n+from six import text_type\n+from uuid import UUID\n+\n from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\n from pyramid.security import NO_PERMISSION_REQUIRED\n from pyramid.view import view_config\n@@ -57,7 +60,8 @@\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n- bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n+ digest = hmac_digest(hmac_secret, request.prefixed_userid)\n+ bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n", "issue": "Default bucket UUID doesn't have dashes\nI've seen that default record ID's have got dashes whereas default bucket id doesn't.\n\nDoes it makes sense to try to be consistent here?\n\n```\n$ http GET http://localhost:8888/v1/buckets/e93a0bb5b7d16d4f9bfd81b6d737271c -v --auth 'mary:marypassword'\n{\n \"data\": {\n \"id\": \"e93a0bb5b7d16d4f9bfd81b6d737271c\", \n \"last_modified\": 1436191171386\n }, \n [...]\n}\n```\n\n", "before_files": [{"content": "from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}], "after_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]}
| 1,503 | 217 |
gh_patches_debug_16866
|
rasdani/github-patches
|
git_diff
|
plone__Products.CMFPlone-1528
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CSS bundles generation breaks background images relative urls
This is a bug related to PR #1300.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/resources/browser/combine.py`
Content:
```
1 import re
2 from zExceptions import NotFound
3 from Acquisition import aq_base
4 from datetime import datetime
5 from plone.registry.interfaces import IRegistry
6 from plone.resource.file import FilesystemFile
7 from plone.resource.interfaces import IResourceDirectory
8 from Products.CMFPlone.interfaces import IBundleRegistry
9 from Products.CMFPlone.interfaces.resources import (
10 OVERRIDE_RESOURCE_DIRECTORY_NAME,
11 )
12 from StringIO import StringIO
13 from zope.component import getUtility
14 from zope.component import queryUtility
15
16 PRODUCTION_RESOURCE_DIRECTORY = "production"
17
18
19 def get_production_resource_directory():
20 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
21 if persistent_directory is None:
22 return ''
23 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
24 try:
25 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
26 except NotFound:
27 return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
28 timestamp = production_folder.readFile('timestamp.txt')
29 return "%s/++unique++%s" % (
30 PRODUCTION_RESOURCE_DIRECTORY, timestamp)
31
32
33 def get_resource(context, path):
34 if path.startswith('++plone++'):
35 # ++plone++ resources can be customized, we return their override
36 # value if any
37 overrides = get_override_directory(context)
38 filepath = path[9:]
39 if overrides.isFile(filepath):
40 return overrides.readFile(filepath)
41
42 resource = context.unrestrictedTraverse(path)
43 if isinstance(resource, FilesystemFile):
44 (directory, sep, filename) = path.rpartition('/')
45 return context.unrestrictedTraverse(directory).readFile(filename)
46 else:
47 if hasattr(aq_base(resource), 'GET'):
48 # for FileResource
49 return resource.GET()
50 else:
51 # any BrowserView
52 return resource()
53
54
55 def write_js(context, folder, meta_bundle):
56 registry = getUtility(IRegistry)
57 resources = []
58
59 # default resources
60 if meta_bundle == 'default' and registry.records.get(
61 'plone.resources/jquery.js'
62 ):
63 resources.append(get_resource(context,
64 registry.records['plone.resources/jquery.js'].value))
65 resources.append(get_resource(context,
66 registry.records['plone.resources.requirejs'].value))
67 resources.append(get_resource(context,
68 registry.records['plone.resources.configjs'].value))
69
70 # bundles
71 bundles = registry.collectionOfInterface(
72 IBundleRegistry, prefix="plone.bundles", check=False)
73 for bundle in bundles.values():
74 if bundle.merge_with == meta_bundle and bundle.jscompilation:
75 resources.append(get_resource(context, bundle.jscompilation))
76
77 fi = StringIO()
78 for script in resources:
79 fi.write(script + '\n')
80 folder.writeFile(meta_bundle + ".js", fi)
81
82
83 def write_css(context, folder, meta_bundle):
84 registry = getUtility(IRegistry)
85 resources = []
86
87 bundles = registry.collectionOfInterface(
88 IBundleRegistry, prefix="plone.bundles", check=False)
89 for bundle in bundles.values():
90 if bundle.merge_with == meta_bundle and bundle.csscompilation:
91 css = get_resource(context, bundle.csscompilation)
92 # Preserve relative urls:
93 # we prefix with '../'' any url not starting with '/'
94 # or http: or data:
95 css = re.sub(
96 r"""(url\(['"]?(?!['"]?([a-z]+:|\/)))""",
97 r'\1../',
98 css)
99 resources.append(css)
100
101 fi = StringIO()
102 for script in resources:
103 fi.write(script + '\n')
104 folder.writeFile(meta_bundle + ".css", fi)
105
106
107 def get_override_directory(context):
108 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
109 if persistent_directory is None:
110 return
111 if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
112 persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
113 return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
114
115
116 def combine_bundles(context):
117 container = get_override_directory(context)
118 if PRODUCTION_RESOURCE_DIRECTORY not in container:
119 container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
120 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
121
122 # store timestamp
123 fi = StringIO()
124 fi.write(datetime.now().isoformat())
125 production_folder.writeFile("timestamp.txt", fi)
126
127 # generate new combined bundles
128 write_js(context, production_folder, 'default')
129 write_js(context, production_folder, 'logged-in')
130 write_css(context, production_folder, 'default')
131 write_css(context, production_folder, 'logged-in')
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py
--- a/Products/CMFPlone/resources/browser/combine.py
+++ b/Products/CMFPlone/resources/browser/combine.py
@@ -89,12 +89,13 @@
for bundle in bundles.values():
if bundle.merge_with == meta_bundle and bundle.csscompilation:
css = get_resource(context, bundle.csscompilation)
- # Preserve relative urls:
- # we prefix with '../'' any url not starting with '/'
- # or http: or data:
+ (path, sep, filename) = bundle.csscompilation.rpartition('/')
+ # Process relative urls:
+ # we prefix with current resource path any url not starting with
+ # '/' or http: or data:
css = re.sub(
r"""(url\(['"]?(?!['"]?([a-z]+:|\/)))""",
- r'\1../',
+ r'\1%s/' % path,
css)
resources.append(css)
|
{"golden_diff": "diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py\n--- a/Products/CMFPlone/resources/browser/combine.py\n+++ b/Products/CMFPlone/resources/browser/combine.py\n@@ -89,12 +89,13 @@\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.csscompilation:\n css = get_resource(context, bundle.csscompilation)\n- # Preserve relative urls:\n- # we prefix with '../'' any url not starting with '/'\n- # or http: or data:\n+ (path, sep, filename) = bundle.csscompilation.rpartition('/')\n+ # Process relative urls:\n+ # we prefix with current resource path any url not starting with\n+ # '/' or http: or data:\n css = re.sub(\n r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n- r'\\1../',\n+ r'\\1%s/' % path,\n css)\n resources.append(css)\n", "issue": "CSS bundles generation breaks background images relative urls\nThis is a bug related to PR #1300.\n\n", "before_files": [{"content": "import re\nfrom zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.jscompilation:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.csscompilation:\n css = get_resource(context, bundle.csscompilation)\n # Preserve relative urls:\n # we prefix with '../'' any url not starting with '/'\n # or http: or data:\n css = re.sub(\n r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n r'\\1../',\n css)\n resources.append(css)\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}], "after_files": [{"content": "import re\nfrom zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.jscompilation:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.csscompilation:\n css = get_resource(context, bundle.csscompilation)\n (path, sep, filename) = bundle.csscompilation.rpartition('/')\n # Process relative urls:\n # we prefix with current resource path any url not starting with\n # '/' or http: or data:\n css = re.sub(\n r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n r'\\1%s/' % path,\n css)\n resources.append(css)\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}]}
| 1,523 | 239 |
gh_patches_debug_67317
|
rasdani/github-patches
|
git_diff
|
qutip__qutip-2305
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
QuTiP 4.7.4: incompatibility with the latest scipy (1.12.0)
https://github.com/qutip/qutip/blob/f5149616a4071a273e7e48a63d956836739c4569/qutip/parallel.py#L7
When the latest scipy version is used (1.12.0), QuTiP (4.7.4) cannot be imported since `from scipy import array` is no longer supported in scipy.
Code to reproduce the bug:
`import qutip`
Output:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import qutip
File ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/__init__.py:133
131 # graphics
132 from qutip.bloch import *
--> 133 from qutip.visualization import *
134 from qutip.orbital import *
135 from qutip.bloch3d import *
File ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/visualization.py:24
22 from qutip.qobj import Qobj, isket
23 from qutip.states import ket2dm
---> 24 from qutip.wigner import wigner
25 from qutip.tensor import tensor
26 from qutip.matplotlib_utilities import complex_phase_cmap
File ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/wigner.py:19
17 import qutip
18 from qutip import Qobj, ket2dm, jmat
---> 19 from qutip.parallel import parfor
20 from qutip.cy.sparse_utils import _csr_get_diag
21 from qutip.sparse import eigh
File ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/parallel.py:7
1 """
2 This function provides functions for parallel execution of loops and function
3 mappings, using the builtin Python module multiprocessing.
4 """
5 __all__ = ['parfor', 'parallel_map', 'serial_map']
----> 7 from scipy import array
8 import multiprocessing
9 from functools import partial
ImportError: cannot import name 'array' from 'scipy' (/Users/konstantin/anaconda3/envs/test/lib/python3.11/site-packages/scipy/__init__.py)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutip/parallel.py`
Content:
```
1 """
2 This function provides functions for parallel execution of loops and function
3 mappings, using the builtin Python module multiprocessing.
4 """
5 __all__ = ['parfor', 'parallel_map', 'serial_map']
6
7 from scipy import array
8 import multiprocessing
9 from functools import partial
10 import os
11 import sys
12 import signal
13 import qutip.settings as qset
14 from qutip.ui.progressbar import BaseProgressBar, TextProgressBar
15
16
17 if sys.platform == 'darwin':
18 Pool = multiprocessing.get_context('fork').Pool
19 else:
20 Pool = multiprocessing.Pool
21
22
23 def _task_wrapper(args):
24 try:
25 return args[0](*args[1])
26 except KeyboardInterrupt:
27 os.kill(args[2], signal.SIGINT)
28 sys.exit(1)
29
30
31 def _task_wrapper_with_args(args, user_args):
32 try:
33 return args[0](*args[1], **user_args)
34 except KeyboardInterrupt:
35 os.kill(args[2], signal.SIGINT)
36 sys.exit(1)
37
38
39 def parfor(func, *args, **kwargs):
40 """Executes a multi-variable function in parallel on the local machine.
41
42 Parallel execution of a for-loop over function `func` for multiple input
43 arguments and keyword arguments.
44
45 .. note::
46
47 From QuTiP 3.1, we recommend to use :func:`qutip.parallel.parallel_map`
48 instead of this function.
49
50 Parameters
51 ----------
52 func : function_type
53 A function to run in parallel on the local machine. The function 'func'
54 accepts a series of arguments that are passed to the function as
55 variables. In general, the function can have multiple input variables,
56 and these arguments must be passed in the same order as they are
57 defined in the function definition. In addition, the user can pass
58 multiple keyword arguments to the function.
59
60 The following keyword argument is reserved:
61
62 num_cpus : int
63 Number of CPU's to use. Default uses maximum number of CPU's.
64 Performance degrades if num_cpus is larger than the physical CPU
65 count of your machine.
66
67 Returns
68 -------
69 result : list
70 A ``list`` with length equal to number of input parameters
71 containing the output from `func`.
72
73 """
74 os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'
75 kw = _default_kwargs()
76 if 'num_cpus' in kwargs.keys():
77 kw['num_cpus'] = kwargs['num_cpus']
78 del kwargs['num_cpus']
79 if len(kwargs) != 0:
80 task_func = partial(_task_wrapper_with_args, user_args=kwargs)
81 else:
82 task_func = _task_wrapper
83
84 if kw['num_cpus'] > qset.num_cpus:
85 print("Requested number of CPUs (%s) " % kw['num_cpus'] +
86 "is larger than physical number (%s)." % qset.num_cpus)
87 print("Reduce 'num_cpus' for greater performance.")
88
89 pool = Pool(processes=kw['num_cpus'])
90 args = [list(arg) for arg in args]
91 var = [[args[j][i] for j in range(len(args))]
92 for i in range(len(list(args[0])))]
93 try:
94 map_args = ((func, v, os.getpid()) for v in var)
95 par_return = list(pool.map(task_func, map_args))
96
97 pool.terminate()
98 pool.join()
99 os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'
100 if isinstance(par_return[0], tuple):
101 par_return = [elem for elem in par_return]
102 num_elems = len(par_return[0])
103 dt = [type(ii) for ii in par_return[0]]
104 return [array([elem[ii] for elem in par_return], dtype=dt[ii])
105 for ii in range(num_elems)]
106 else:
107 return par_return
108
109 except KeyboardInterrupt:
110 os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'
111 pool.terminate()
112
113
114 def serial_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):
115 """
116 Serial mapping function with the same call signature as parallel_map, for
117 easy switching between serial and parallel execution. This
118 is functionally equivalent to::
119
120 result = [task(value, *task_args, **task_kwargs) for value in values]
121
122 This function work as a drop-in replacement of
123 :func:`qutip.parallel.parallel_map`.
124
125 Parameters
126 ----------
127 task : a Python function
128 The function that is to be called for each value in ``task_vec``.
129 values : array / list
130 The list or array of values for which the ``task`` function is to be
131 evaluated.
132 task_args : list / dictionary
133 The optional additional argument to the ``task`` function.
134 task_kwargs : list / dictionary
135 The optional additional keyword argument to the ``task`` function.
136 progress_bar : ProgressBar
137 Progress bar class instance for showing progress.
138
139 Returns
140 --------
141 result : list
142 The result list contains the value of
143 ``task(value, *task_args, **task_kwargs)`` for each
144 value in ``values``.
145
146 """
147 try:
148 progress_bar = kwargs['progress_bar']
149 if progress_bar is True:
150 progress_bar = TextProgressBar()
151 except:
152 progress_bar = BaseProgressBar()
153
154 progress_bar.start(len(values))
155 results = []
156 for n, value in enumerate(values):
157 progress_bar.update(n)
158 result = task(value, *task_args, **task_kwargs)
159 results.append(result)
160 progress_bar.finished()
161
162 return results
163
164
165 def parallel_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):
166 """
167 Parallel execution of a mapping of `values` to the function `task`. This
168 is functionally equivalent to::
169
170 result = [task(value, *task_args, **task_kwargs) for value in values]
171
172 Parameters
173 ----------
174 task : a Python function
175 The function that is to be called for each value in ``task_vec``.
176 values : array / list
177 The list or array of values for which the ``task`` function is to be
178 evaluated.
179 task_args : list / dictionary
180 The optional additional argument to the ``task`` function.
181 task_kwargs : list / dictionary
182 The optional additional keyword argument to the ``task`` function.
183 progress_bar : ProgressBar
184 Progress bar class instance for showing progress.
185
186 Returns
187 --------
188 result : list
189 The result list contains the value of
190 ``task(value, *task_args, **task_kwargs)`` for
191 each value in ``values``.
192
193 """
194 os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'
195 kw = _default_kwargs()
196 if 'num_cpus' in kwargs:
197 kw['num_cpus'] = kwargs['num_cpus']
198
199 try:
200 progress_bar = kwargs['progress_bar']
201 if progress_bar is True:
202 progress_bar = TextProgressBar()
203 except:
204 progress_bar = BaseProgressBar()
205
206 progress_bar.start(len(values))
207 nfinished = [0]
208
209 def _update_progress_bar(x):
210 nfinished[0] += 1
211 progress_bar.update(nfinished[0])
212
213 try:
214 pool = Pool(processes=kw['num_cpus'])
215
216 async_res = [pool.apply_async(task, (value,) + task_args, task_kwargs,
217 _update_progress_bar)
218 for value in values]
219
220 while not all([ar.ready() for ar in async_res]):
221 for ar in async_res:
222 ar.wait(timeout=0.1)
223
224 pool.terminate()
225 pool.join()
226
227 except KeyboardInterrupt as e:
228 os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'
229 pool.terminate()
230 pool.join()
231 raise e
232
233 progress_bar.finished()
234 os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'
235 return [ar.get() for ar in async_res]
236
237
238 def _default_kwargs():
239 settings = {'num_cpus': qset.num_cpus}
240 return settings
241
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qutip/parallel.py b/qutip/parallel.py
--- a/qutip/parallel.py
+++ b/qutip/parallel.py
@@ -4,7 +4,7 @@
"""
__all__ = ['parfor', 'parallel_map', 'serial_map']
-from scipy import array
+from numpy import array
import multiprocessing
from functools import partial
import os
|
{"golden_diff": "diff --git a/qutip/parallel.py b/qutip/parallel.py\n--- a/qutip/parallel.py\n+++ b/qutip/parallel.py\n@@ -4,7 +4,7 @@\n \"\"\"\n __all__ = ['parfor', 'parallel_map', 'serial_map']\n \n-from scipy import array\n+from numpy import array\n import multiprocessing\n from functools import partial\n import os\n", "issue": "QuTiP 4.7.4: incompatibility with the latest scipy (1.12.0)\nhttps://github.com/qutip/qutip/blob/f5149616a4071a273e7e48a63d956836739c4569/qutip/parallel.py#L7\r\n\r\nWhen the latest scipy version is used (1.12.0), QuTiP (4.7.4) cannot be imported since `from scipy import array` is no longer supported in scipy.\r\n\r\nCode to reproduce the bug: \r\n`import qutip`\r\n\r\nOutput:\r\n```\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\nCell In[1], line 1\r\n----> 1 import qutip\r\n\r\nFile ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/__init__.py:133\r\n 131 # graphics\r\n 132 from qutip.bloch import *\r\n--> 133 from qutip.visualization import *\r\n 134 from qutip.orbital import *\r\n 135 from qutip.bloch3d import *\r\n\r\nFile ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/visualization.py:24\r\n 22 from qutip.qobj import Qobj, isket\r\n 23 from qutip.states import ket2dm\r\n---> 24 from qutip.wigner import wigner\r\n 25 from qutip.tensor import tensor\r\n 26 from qutip.matplotlib_utilities import complex_phase_cmap\r\n\r\nFile ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/wigner.py:19\r\n 17 import qutip\r\n 18 from qutip import Qobj, ket2dm, jmat\r\n---> 19 from qutip.parallel import parfor\r\n 20 from qutip.cy.sparse_utils import _csr_get_diag\r\n 21 from qutip.sparse import eigh\r\n\r\nFile ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/parallel.py:7\r\n 1 \"\"\"\r\n 2 This function provides functions for parallel execution of loops and function\r\n 3 mappings, using the builtin Python module multiprocessing.\r\n 4 \"\"\"\r\n 5 __all__ = ['parfor', 'parallel_map', 'serial_map']\r\n----> 7 from scipy import array\r\n 8 import multiprocessing\r\n 9 from functools import partial\r\n\r\nImportError: cannot import name 'array' from 'scipy' (/Users/konstantin/anaconda3/envs/test/lib/python3.11/site-packages/scipy/__init__.py)\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nThis function provides functions for parallel execution of loops and function\nmappings, using the builtin Python module multiprocessing.\n\"\"\"\n__all__ = ['parfor', 'parallel_map', 'serial_map']\n\nfrom scipy import array\nimport multiprocessing\nfrom functools import partial\nimport os\nimport sys\nimport signal\nimport qutip.settings as qset\nfrom qutip.ui.progressbar import BaseProgressBar, TextProgressBar\n\n\nif sys.platform == 'darwin':\n Pool = multiprocessing.get_context('fork').Pool\nelse:\n Pool = multiprocessing.Pool\n\n\ndef _task_wrapper(args):\n try:\n return args[0](*args[1])\n except KeyboardInterrupt:\n os.kill(args[2], signal.SIGINT)\n sys.exit(1)\n\n\ndef _task_wrapper_with_args(args, user_args):\n try:\n return args[0](*args[1], **user_args)\n except KeyboardInterrupt:\n os.kill(args[2], signal.SIGINT)\n sys.exit(1)\n\n\ndef parfor(func, *args, **kwargs):\n \"\"\"Executes a multi-variable function in parallel on the local machine.\n\n Parallel execution of a for-loop over function `func` for multiple input\n arguments and keyword arguments.\n\n .. note::\n\n From QuTiP 3.1, we recommend to use :func:`qutip.parallel.parallel_map`\n instead of this function.\n\n Parameters\n ----------\n func : function_type\n A function to run in parallel on the local machine. The function 'func'\n accepts a series of arguments that are passed to the function as\n variables. In general, the function can have multiple input variables,\n and these arguments must be passed in the same order as they are\n defined in the function definition. In addition, the user can pass\n multiple keyword arguments to the function.\n\n The following keyword argument is reserved:\n\n num_cpus : int\n Number of CPU's to use. Default uses maximum number of CPU's.\n Performance degrades if num_cpus is larger than the physical CPU\n count of your machine.\n\n Returns\n -------\n result : list\n A ``list`` with length equal to number of input parameters\n containing the output from `func`.\n\n \"\"\"\n os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'\n kw = _default_kwargs()\n if 'num_cpus' in kwargs.keys():\n kw['num_cpus'] = kwargs['num_cpus']\n del kwargs['num_cpus']\n if len(kwargs) != 0:\n task_func = partial(_task_wrapper_with_args, user_args=kwargs)\n else:\n task_func = _task_wrapper\n\n if kw['num_cpus'] > qset.num_cpus:\n print(\"Requested number of CPUs (%s) \" % kw['num_cpus'] +\n \"is larger than physical number (%s).\" % qset.num_cpus)\n print(\"Reduce 'num_cpus' for greater performance.\")\n\n pool = Pool(processes=kw['num_cpus'])\n args = [list(arg) for arg in args]\n var = [[args[j][i] for j in range(len(args))]\n for i in range(len(list(args[0])))]\n try:\n map_args = ((func, v, os.getpid()) for v in var)\n par_return = list(pool.map(task_func, map_args))\n\n pool.terminate()\n pool.join()\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n if isinstance(par_return[0], tuple):\n par_return = [elem for elem in par_return]\n num_elems = len(par_return[0])\n dt = [type(ii) for ii in par_return[0]]\n return [array([elem[ii] for elem in par_return], dtype=dt[ii])\n for ii in range(num_elems)]\n else:\n return par_return\n\n except KeyboardInterrupt:\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n pool.terminate()\n\n\ndef serial_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):\n \"\"\"\n Serial mapping function with the same call signature as parallel_map, for\n easy switching between serial and parallel execution. This\n is functionally equivalent to::\n\n result = [task(value, *task_args, **task_kwargs) for value in values]\n\n This function work as a drop-in replacement of\n :func:`qutip.parallel.parallel_map`.\n\n Parameters\n ----------\n task : a Python function\n The function that is to be called for each value in ``task_vec``.\n values : array / list\n The list or array of values for which the ``task`` function is to be\n evaluated.\n task_args : list / dictionary\n The optional additional argument to the ``task`` function.\n task_kwargs : list / dictionary\n The optional additional keyword argument to the ``task`` function.\n progress_bar : ProgressBar\n Progress bar class instance for showing progress.\n\n Returns\n --------\n result : list\n The result list contains the value of\n ``task(value, *task_args, **task_kwargs)`` for each\n value in ``values``.\n\n \"\"\"\n try:\n progress_bar = kwargs['progress_bar']\n if progress_bar is True:\n progress_bar = TextProgressBar()\n except:\n progress_bar = BaseProgressBar()\n\n progress_bar.start(len(values))\n results = []\n for n, value in enumerate(values):\n progress_bar.update(n)\n result = task(value, *task_args, **task_kwargs)\n results.append(result)\n progress_bar.finished()\n\n return results\n\n\ndef parallel_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):\n \"\"\"\n Parallel execution of a mapping of `values` to the function `task`. This\n is functionally equivalent to::\n\n result = [task(value, *task_args, **task_kwargs) for value in values]\n\n Parameters\n ----------\n task : a Python function\n The function that is to be called for each value in ``task_vec``.\n values : array / list\n The list or array of values for which the ``task`` function is to be\n evaluated.\n task_args : list / dictionary\n The optional additional argument to the ``task`` function.\n task_kwargs : list / dictionary\n The optional additional keyword argument to the ``task`` function.\n progress_bar : ProgressBar\n Progress bar class instance for showing progress.\n\n Returns\n --------\n result : list\n The result list contains the value of\n ``task(value, *task_args, **task_kwargs)`` for\n each value in ``values``.\n\n \"\"\"\n os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'\n kw = _default_kwargs()\n if 'num_cpus' in kwargs:\n kw['num_cpus'] = kwargs['num_cpus']\n\n try:\n progress_bar = kwargs['progress_bar']\n if progress_bar is True:\n progress_bar = TextProgressBar()\n except:\n progress_bar = BaseProgressBar()\n\n progress_bar.start(len(values))\n nfinished = [0]\n\n def _update_progress_bar(x):\n nfinished[0] += 1\n progress_bar.update(nfinished[0])\n\n try:\n pool = Pool(processes=kw['num_cpus'])\n\n async_res = [pool.apply_async(task, (value,) + task_args, task_kwargs,\n _update_progress_bar)\n for value in values]\n\n while not all([ar.ready() for ar in async_res]):\n for ar in async_res:\n ar.wait(timeout=0.1)\n\n pool.terminate()\n pool.join()\n\n except KeyboardInterrupt as e:\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n pool.terminate()\n pool.join()\n raise e\n\n progress_bar.finished()\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n return [ar.get() for ar in async_res]\n\n\ndef _default_kwargs():\n settings = {'num_cpus': qset.num_cpus}\n return settings\n", "path": "qutip/parallel.py"}], "after_files": [{"content": "\"\"\"\nThis function provides functions for parallel execution of loops and function\nmappings, using the builtin Python module multiprocessing.\n\"\"\"\n__all__ = ['parfor', 'parallel_map', 'serial_map']\n\nfrom numpy import array\nimport multiprocessing\nfrom functools import partial\nimport os\nimport sys\nimport signal\nimport qutip.settings as qset\nfrom qutip.ui.progressbar import BaseProgressBar, TextProgressBar\n\n\nif sys.platform == 'darwin':\n Pool = multiprocessing.get_context('fork').Pool\nelse:\n Pool = multiprocessing.Pool\n\n\ndef _task_wrapper(args):\n try:\n return args[0](*args[1])\n except KeyboardInterrupt:\n os.kill(args[2], signal.SIGINT)\n sys.exit(1)\n\n\ndef _task_wrapper_with_args(args, user_args):\n try:\n return args[0](*args[1], **user_args)\n except KeyboardInterrupt:\n os.kill(args[2], signal.SIGINT)\n sys.exit(1)\n\n\ndef parfor(func, *args, **kwargs):\n \"\"\"Executes a multi-variable function in parallel on the local machine.\n\n Parallel execution of a for-loop over function `func` for multiple input\n arguments and keyword arguments.\n\n .. note::\n\n From QuTiP 3.1, we recommend to use :func:`qutip.parallel.parallel_map`\n instead of this function.\n\n Parameters\n ----------\n func : function_type\n A function to run in parallel on the local machine. The function 'func'\n accepts a series of arguments that are passed to the function as\n variables. In general, the function can have multiple input variables,\n and these arguments must be passed in the same order as they are\n defined in the function definition. In addition, the user can pass\n multiple keyword arguments to the function.\n\n The following keyword argument is reserved:\n\n num_cpus : int\n Number of CPU's to use. Default uses maximum number of CPU's.\n Performance degrades if num_cpus is larger than the physical CPU\n count of your machine.\n\n Returns\n -------\n result : list\n A ``list`` with length equal to number of input parameters\n containing the output from `func`.\n\n \"\"\"\n os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'\n kw = _default_kwargs()\n if 'num_cpus' in kwargs.keys():\n kw['num_cpus'] = kwargs['num_cpus']\n del kwargs['num_cpus']\n if len(kwargs) != 0:\n task_func = partial(_task_wrapper_with_args, user_args=kwargs)\n else:\n task_func = _task_wrapper\n\n if kw['num_cpus'] > qset.num_cpus:\n print(\"Requested number of CPUs (%s) \" % kw['num_cpus'] +\n \"is larger than physical number (%s).\" % qset.num_cpus)\n print(\"Reduce 'num_cpus' for greater performance.\")\n\n pool = Pool(processes=kw['num_cpus'])\n args = [list(arg) for arg in args]\n var = [[args[j][i] for j in range(len(args))]\n for i in range(len(list(args[0])))]\n try:\n map_args = ((func, v, os.getpid()) for v in var)\n par_return = list(pool.map(task_func, map_args))\n\n pool.terminate()\n pool.join()\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n if isinstance(par_return[0], tuple):\n par_return = [elem for elem in par_return]\n num_elems = len(par_return[0])\n dt = [type(ii) for ii in par_return[0]]\n return [array([elem[ii] for elem in par_return], dtype=dt[ii])\n for ii in range(num_elems)]\n else:\n return par_return\n\n except KeyboardInterrupt:\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n pool.terminate()\n\n\ndef serial_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):\n \"\"\"\n Serial mapping function with the same call signature as parallel_map, for\n easy switching between serial and parallel execution. This\n is functionally equivalent to::\n\n result = [task(value, *task_args, **task_kwargs) for value in values]\n\n This function work as a drop-in replacement of\n :func:`qutip.parallel.parallel_map`.\n\n Parameters\n ----------\n task : a Python function\n The function that is to be called for each value in ``task_vec``.\n values : array / list\n The list or array of values for which the ``task`` function is to be\n evaluated.\n task_args : list / dictionary\n The optional additional argument to the ``task`` function.\n task_kwargs : list / dictionary\n The optional additional keyword argument to the ``task`` function.\n progress_bar : ProgressBar\n Progress bar class instance for showing progress.\n\n Returns\n --------\n result : list\n The result list contains the value of\n ``task(value, *task_args, **task_kwargs)`` for each\n value in ``values``.\n\n \"\"\"\n try:\n progress_bar = kwargs['progress_bar']\n if progress_bar is True:\n progress_bar = TextProgressBar()\n except:\n progress_bar = BaseProgressBar()\n\n progress_bar.start(len(values))\n results = []\n for n, value in enumerate(values):\n progress_bar.update(n)\n result = task(value, *task_args, **task_kwargs)\n results.append(result)\n progress_bar.finished()\n\n return results\n\n\ndef parallel_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):\n \"\"\"\n Parallel execution of a mapping of `values` to the function `task`. This\n is functionally equivalent to::\n\n result = [task(value, *task_args, **task_kwargs) for value in values]\n\n Parameters\n ----------\n task : a Python function\n The function that is to be called for each value in ``task_vec``.\n values : array / list\n The list or array of values for which the ``task`` function is to be\n evaluated.\n task_args : list / dictionary\n The optional additional argument to the ``task`` function.\n task_kwargs : list / dictionary\n The optional additional keyword argument to the ``task`` function.\n progress_bar : ProgressBar\n Progress bar class instance for showing progress.\n\n Returns\n --------\n result : list\n The result list contains the value of\n ``task(value, *task_args, **task_kwargs)`` for\n each value in ``values``.\n\n \"\"\"\n os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'\n kw = _default_kwargs()\n if 'num_cpus' in kwargs:\n kw['num_cpus'] = kwargs['num_cpus']\n\n try:\n progress_bar = kwargs['progress_bar']\n if progress_bar is True:\n progress_bar = TextProgressBar()\n except:\n progress_bar = BaseProgressBar()\n\n progress_bar.start(len(values))\n nfinished = [0]\n\n def _update_progress_bar(x):\n nfinished[0] += 1\n progress_bar.update(nfinished[0])\n\n try:\n pool = Pool(processes=kw['num_cpus'])\n\n async_res = [pool.apply_async(task, (value,) + task_args, task_kwargs,\n _update_progress_bar)\n for value in values]\n\n while not all([ar.ready() for ar in async_res]):\n for ar in async_res:\n ar.wait(timeout=0.1)\n\n pool.terminate()\n pool.join()\n\n except KeyboardInterrupt as e:\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n pool.terminate()\n pool.join()\n raise e\n\n progress_bar.finished()\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n return [ar.get() for ar in async_res]\n\n\ndef _default_kwargs():\n settings = {'num_cpus': qset.num_cpus}\n return settings\n", "path": "qutip/parallel.py"}]}
| 3,218 | 86 |
gh_patches_debug_8379
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-3013
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document the LIFO order in which hooks are executed in `settings.py`
### Description
We mention that hook implementations registered in `settings.py` run in LIFO order and that auto discovered hooks run before hooks in `settings.py`.
- [ ] We need to also document what the order is in which auto-discovered hooks run. Add this to: https://kedro.readthedocs.io/en/stable/hooks/introduction.html To verify the run order, create a project and install several plugins with hooks to test.
- [ ] Add a comment in the `settings.py` template file to explain the run order of hooks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py`
Content:
```
1 """Project settings. There is no need to edit this file unless you want to change values
2 from the Kedro defaults. For further information, including these default values, see
3 https://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html."""
4
5 # Instantiated project hooks.
6 # For example, after creating a hooks.py and defining a ProjectHooks class there, do
7 # from {{cookiecutter.python_package}}.hooks import ProjectHooks
8 # HOOKS = (ProjectHooks(),)
9
10 # Installed plugins for which to disable hook auto-registration.
11 # DISABLE_HOOKS_FOR_PLUGINS = ("kedro-viz",)
12
13 # Class that manages storing KedroSession data.
14 # from kedro.framework.session.store import BaseSessionStore
15 # SESSION_STORE_CLASS = BaseSessionStore
16 # Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.
17 # SESSION_STORE_ARGS = {
18 # "path": "./sessions"
19 # }
20
21 # Directory that holds configuration.
22 # CONF_SOURCE = "conf"
23
24 # Class that manages how configuration is loaded.
25 from kedro.config import OmegaConfigLoader # noqa: import-outside-toplevel
26
27 CONFIG_LOADER_CLASS = OmegaConfigLoader
28 # Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.
29 # CONFIG_LOADER_ARGS = {
30 # "config_patterns": {
31 # "spark" : ["spark*/"],
32 # "parameters": ["parameters*", "parameters*/**", "**/parameters*"],
33 # }
34 # }
35
36 # Class that manages Kedro's library components.
37 # from kedro.framework.context import KedroContext
38 # CONTEXT_CLASS = KedroContext
39
40 # Class that manages the Data Catalog.
41 # from kedro.io import DataCatalog
42 # DATA_CATALOG_CLASS = DataCatalog
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
--- a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
+++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
@@ -5,6 +5,7 @@
# Instantiated project hooks.
# For example, after creating a hooks.py and defining a ProjectHooks class there, do
# from {{cookiecutter.python_package}}.hooks import ProjectHooks
+# Hooks are executed in a Last-In-First-Out (LIFO) order.
# HOOKS = (ProjectHooks(),)
# Installed plugins for which to disable hook auto-registration.
|
{"golden_diff": "diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\n--- a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\t\n+++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\t\n@@ -5,6 +5,7 @@\n # Instantiated project hooks.\n # For example, after creating a hooks.py and defining a ProjectHooks class there, do\n # from {{cookiecutter.python_package}}.hooks import ProjectHooks\n+# Hooks are executed in a Last-In-First-Out (LIFO) order.\n # HOOKS = (ProjectHooks(),)\n \n # Installed plugins for which to disable hook auto-registration.\n", "issue": "Document the LIFO order in which hooks are executed in `settings.py`\n### Description\r\n\r\nWe mention that hook implementations registered in `settings.py` run in LIFO order and that auto discovered hooks run before hooks in `settings.py`. \r\n\r\n- [ ] We need to also document what the order is in which auto-discovered hooks run. Add this to: https://kedro.readthedocs.io/en/stable/hooks/introduction.html To verify the run order, create a project and install several plugins with hooks to test.\r\n- [ ] Add a comment in the `settings.py` template file to explain the run order of hooks\n", "before_files": [{"content": "\"\"\"Project settings. There is no need to edit this file unless you want to change values\nfrom the Kedro defaults. For further information, including these default values, see\nhttps://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html.\"\"\"\n\n# Instantiated project hooks.\n# For example, after creating a hooks.py and defining a ProjectHooks class there, do\n# from {{cookiecutter.python_package}}.hooks import ProjectHooks\n# HOOKS = (ProjectHooks(),)\n\n# Installed plugins for which to disable hook auto-registration.\n# DISABLE_HOOKS_FOR_PLUGINS = (\"kedro-viz\",)\n\n# Class that manages storing KedroSession data.\n# from kedro.framework.session.store import BaseSessionStore\n# SESSION_STORE_CLASS = BaseSessionStore\n# Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.\n# SESSION_STORE_ARGS = {\n# \"path\": \"./sessions\"\n# }\n\n# Directory that holds configuration.\n# CONF_SOURCE = \"conf\"\n\n# Class that manages how configuration is loaded.\nfrom kedro.config import OmegaConfigLoader # noqa: import-outside-toplevel\n\nCONFIG_LOADER_CLASS = OmegaConfigLoader\n# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.\n# CONFIG_LOADER_ARGS = {\n# \"config_patterns\": {\n# \"spark\" : [\"spark*/\"],\n# \"parameters\": [\"parameters*\", \"parameters*/**\", \"**/parameters*\"],\n# }\n# }\n\n# Class that manages Kedro's library components.\n# from kedro.framework.context import KedroContext\n# CONTEXT_CLASS = KedroContext\n\n# Class that manages the Data Catalog.\n# from kedro.io import DataCatalog\n# DATA_CATALOG_CLASS = DataCatalog\n", "path": "kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py"}], "after_files": [{"content": "\"\"\"Project settings. There is no need to edit this file unless you want to change values\nfrom the Kedro defaults. For further information, including these default values, see\nhttps://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html.\"\"\"\n\n# Instantiated project hooks.\n# For example, after creating a hooks.py and defining a ProjectHooks class there, do\n# from {{cookiecutter.python_package}}.hooks import ProjectHooks\n# Hooks are executed in a Last-In-First-Out (LIFO) order.\n# HOOKS = (ProjectHooks(),)\n\n# Installed plugins for which to disable hook auto-registration.\n# DISABLE_HOOKS_FOR_PLUGINS = (\"kedro-viz\",)\n\n# Class that manages storing KedroSession data.\n# from kedro.framework.session.store import BaseSessionStore\n# SESSION_STORE_CLASS = BaseSessionStore\n# Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.\n# SESSION_STORE_ARGS = {\n# \"path\": \"./sessions\"\n# }\n\n# Directory that holds configuration.\n# CONF_SOURCE = \"conf\"\n\n# Class that manages how configuration is loaded.\nfrom kedro.config import OmegaConfigLoader # noqa: import-outside-toplevel\n\nCONFIG_LOADER_CLASS = OmegaConfigLoader\n# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.\n# CONFIG_LOADER_ARGS = {\n# \"config_patterns\": {\n# \"spark\" : [\"spark*/\"],\n# \"parameters\": [\"parameters*\", \"parameters*/**\", \"**/parameters*\"],\n# }\n# }\n\n# Class that manages Kedro's library components.\n# from kedro.framework.context import KedroContext\n# CONTEXT_CLASS = KedroContext\n\n# Class that manages the Data Catalog.\n# from kedro.io import DataCatalog\n# DATA_CATALOG_CLASS = DataCatalog\n", "path": "kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py"}]}
| 850 | 189 |
gh_patches_debug_12933
|
rasdani/github-patches
|
git_diff
|
koxudaxi__datamodel-code-generator-1186
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
` --use-default-kwarg` breaks codegen with default_factory `Field(default=default_factory=...`
**Describe the bug**
When using ` --use-default-kwarg` on models with a default_factory codegen breaks: `Field(default=default_factory=...`
```
File "src\black\parsing.py", line 127, in lib2to3_parse
black.parsing.InvalidInput: Cannot parse: 17:54: foo: Optional[Foo] = Field(default=default_factory=lambda :Foo.parse_obj({'foo': 'foo'}), title='Foo')
```
**To Reproduce**
Example schema:
```json
{
"title": "Requests",
"definitions": {
"Foo": {
"title": "Foo",
"type": "object",
"properties": {
"foo": {
"title": "Foo",
"default": "foo",
"type": "string"
}
}
},
"Bar": {
"title": "Bar",
"type": "object",
"properties": {
"foo": {
"title": "Foo",
"default": {
"foo": "foo"
},
"allOf": [
{
"$ref": "#/definitions/Foo"
}
]
}
}
}
}
}
```
Used commandline:
```
$ datamodel-codegen --input schema.json --output model.py --use-default-kwarg
```
**Expected behavior**
Such a model should be produced
``` python
class Foo(BaseModel):
foo: Optional[str] = Field(default='foo', title='Foo')
class Bar(BaseModel):
foo: Optional[Foo] = Field(default_factory=lambda: Foo.parse_obj({'foo': 'foo'}), title='Foo')
```
**Version:**
✅ works in 0.16.1
❌ breaks in 0.17.0 (probably introduced by https://github.com/koxudaxi/datamodel-code-generator/pull/1047 )
- python 3.11.1
- windows 11
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datamodel_code_generator/model/pydantic/base_model.py`
Content:
```
1 from __future__ import annotations
2
3 from pathlib import Path
4 from typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Set, Tuple, Union
5
6 from pydantic import Field
7
8 from datamodel_code_generator import cached_property
9 from datamodel_code_generator.imports import Import
10 from datamodel_code_generator.model import (
11 ConstraintsBase,
12 DataModel,
13 DataModelFieldBase,
14 )
15 from datamodel_code_generator.model.base import UNDEFINED
16 from datamodel_code_generator.model.pydantic.imports import IMPORT_EXTRA, IMPORT_FIELD
17 from datamodel_code_generator.reference import Reference
18 from datamodel_code_generator.types import chain_as_tuple
19
20
21 class Constraints(ConstraintsBase):
22 gt: Optional[Union[float, int]] = Field(None, alias='exclusiveMinimum')
23 ge: Optional[Union[float, int]] = Field(None, alias='minimum')
24 lt: Optional[Union[float, int]] = Field(None, alias='exclusiveMaximum')
25 le: Optional[Union[float, int]] = Field(None, alias='maximum')
26 multiple_of: Optional[float] = Field(None, alias='multipleOf')
27 min_items: Optional[int] = Field(None, alias='minItems')
28 max_items: Optional[int] = Field(None, alias='maxItems')
29 min_length: Optional[int] = Field(None, alias='minLength')
30 max_length: Optional[int] = Field(None, alias='maxLength')
31 regex: Optional[str] = Field(None, alias='pattern')
32 unique_items: Optional[bool] = Field(None, alias='uniqueItems')
33
34
35 class DataModelField(DataModelFieldBase):
36 _EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {
37 'alias',
38 'default',
39 'const',
40 'gt',
41 'ge',
42 'lt',
43 'le',
44 'multiple_of',
45 'min_items',
46 'max_items',
47 'min_length',
48 'max_length',
49 'regex',
50 }
51 _COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {'gt', 'ge', 'lt', 'le'}
52 constraints: Optional[Constraints] = None
53
54 @property
55 def method(self) -> Optional[str]:
56 return self.validator
57
58 @property
59 def validator(self) -> Optional[str]:
60 return None
61 # TODO refactor this method for other validation logic
62 # from datamodel_code_generator.model.pydantic import VALIDATOR_TEMPLATE
63 #
64 # return VALIDATOR_TEMPLATE.render(
65 # field_name=self.name, types=','.join([t.type_hint for t in self.data_types])
66 # )
67
68 @property
69 def field(self) -> Optional[str]:
70 """for backwards compatibility"""
71 result = str(self)
72 if self.use_default_kwarg and not result.startswith('Field(...'):
73 # Use `default=` for fields that have a default value so that type
74 # checkers using @dataclass_transform can infer the field as
75 # optional in __init__.
76 result = result.replace('Field(', 'Field(default=')
77 if result == '':
78 return None
79
80 return result
81
82 def self_reference(self) -> bool:
83 return isinstance(self.parent, BaseModel) and self.parent.reference.path in {
84 d.reference.path for d in self.data_type.all_data_types if d.reference
85 }
86
87 def _get_strict_field_constraint_value(self, constraint: str, value: Any) -> Any:
88 if value is None or constraint not in self._COMPARE_EXPRESSIONS:
89 return value
90
91 for data_type in self.data_type.all_data_types:
92 if data_type.type == 'int':
93 value = int(value)
94 else:
95 value = float(value)
96 break
97 return value
98
99 def _get_default_as_pydantic_model(self) -> Optional[str]:
100 for data_type in self.data_type.data_types or (self.data_type,):
101 # TODO: Check nested data_types
102 if data_type.is_dict or self.data_type.is_union:
103 # TODO: Parse Union and dict model for default
104 continue
105 elif data_type.is_list and len(data_type.data_types) == 1:
106 data_type = data_type.data_types[0]
107 data_type.alias
108 if (
109 data_type.reference
110 and isinstance(data_type.reference.source, BaseModel)
111 and isinstance(self.default, list)
112 ): # pragma: no cover
113 return f'lambda :[{data_type.alias or data_type.reference.source.class_name}.parse_obj(v) for v in {repr(self.default)}]'
114 elif data_type.reference and isinstance(
115 data_type.reference.source, BaseModel
116 ): # pragma: no cover
117 return f'lambda :{data_type.alias or data_type.reference.source.class_name}.parse_obj({repr(self.default)})'
118 return None
119
120 def __str__(self) -> str:
121 data: Dict[str, Any] = {
122 k: v for k, v in self.extras.items() if k not in self._EXCLUDE_FIELD_KEYS
123 }
124 if self.alias:
125 data['alias'] = self.alias
126 if (
127 self.constraints is not None
128 and not self.self_reference()
129 and not self.data_type.strict
130 ):
131 data = {
132 **data,
133 **{
134 k: self._get_strict_field_constraint_value(k, v)
135 for k, v in self.constraints.dict().items()
136 },
137 }
138
139 if self.use_field_description:
140 data.pop('description', None) # Description is part of field docstring
141
142 if self.const:
143 data['const'] = True
144
145 discriminator = data.pop('discriminator', None)
146 if discriminator:
147 if isinstance(discriminator, str):
148 data['discriminator'] = discriminator
149 elif isinstance(discriminator, dict): # pragma: no cover
150 data['discriminator'] = discriminator['propertyName']
151
152 if self.required:
153 default_factory = None
154 elif self.default and 'default_factory' not in data:
155 default_factory = self._get_default_as_pydantic_model()
156 else:
157 default_factory = data.pop('default_factory', None)
158
159 field_arguments = sorted(
160 f'{k}={repr(v)}' for k, v in data.items() if v is not None
161 )
162
163 if not field_arguments and not default_factory:
164 if self.nullable and self.required:
165 return 'Field(...)' # Field() is for mypy
166 return ''
167
168 if self.use_annotated:
169 pass
170 elif self.required:
171 field_arguments = ['...', *field_arguments]
172 elif default_factory:
173 field_arguments = [f'default_factory={default_factory}', *field_arguments]
174 else:
175 field_arguments = [f'{repr(self.default)}', *field_arguments]
176
177 return f'Field({", ".join(field_arguments)})'
178
179 @property
180 def annotated(self) -> Optional[str]:
181 if not self.use_annotated or not str(self):
182 return None
183 return f'Annotated[{self.type_hint}, {str(self)}]'
184
185
186 class BaseModel(DataModel):
187 TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic/BaseModel.jinja2'
188 BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'
189
190 def __init__(
191 self,
192 *,
193 reference: Reference,
194 fields: List[DataModelField],
195 decorators: Optional[List[str]] = None,
196 base_classes: Optional[List[Reference]] = None,
197 custom_base_class: Optional[str] = None,
198 custom_template_dir: Optional[Path] = None,
199 extra_template_data: Optional[DefaultDict[str, Any]] = None,
200 path: Optional[Path] = None,
201 description: Optional[str] = None,
202 default: Any = UNDEFINED,
203 nullable: bool = False,
204 ):
205 methods: List[str] = [field.method for field in fields if field.method]
206
207 super().__init__(
208 fields=fields, # type: ignore
209 reference=reference,
210 decorators=decorators,
211 base_classes=base_classes,
212 custom_base_class=custom_base_class,
213 custom_template_dir=custom_template_dir,
214 extra_template_data=extra_template_data,
215 methods=methods,
216 path=path,
217 description=description,
218 default=default,
219 nullable=nullable,
220 )
221
222 config_parameters: Dict[str, Any] = {}
223
224 additionalProperties = self.extra_template_data.get('additionalProperties')
225 allow_extra_fields = self.extra_template_data.get('allow_extra_fields')
226 if additionalProperties is not None or allow_extra_fields:
227 config_parameters['extra'] = (
228 'Extra.allow'
229 if additionalProperties or allow_extra_fields
230 else 'Extra.forbid'
231 )
232 self._additional_imports.append(IMPORT_EXTRA)
233
234 for config_attribute in 'allow_population_by_field_name', 'allow_mutation':
235 if config_attribute in self.extra_template_data:
236 config_parameters[config_attribute] = self.extra_template_data[
237 config_attribute
238 ]
239 for data_type in self.all_data_types:
240 if data_type.is_custom_type:
241 config_parameters['arbitrary_types_allowed'] = True
242 break
243
244 if isinstance(self.extra_template_data.get('config'), dict):
245 for key, value in self.extra_template_data['config'].items():
246 config_parameters[key] = value
247
248 if config_parameters:
249 from datamodel_code_generator.model.pydantic import Config
250
251 self.extra_template_data['config'] = Config.parse_obj(config_parameters)
252
253 @property
254 def imports(self) -> Tuple[Import, ...]:
255 if any(f for f in self.fields if f.field):
256 return chain_as_tuple(super().imports, (IMPORT_FIELD,))
257 return super().imports
258
259 @cached_property
260 def template_file_path(self) -> Path:
261 # This property is for Backward compatibility
262 # Current version supports '{custom_template_dir}/BaseModel.jinja'
263 # But, Future version will support only '{custom_template_dir}/pydantic/BaseModel.jinja'
264 if self._custom_template_dir is not None:
265 custom_template_file_path = (
266 self._custom_template_dir / Path(self.TEMPLATE_FILE_PATH).name
267 )
268 if custom_template_file_path.exists():
269 return custom_template_file_path
270 return super().template_file_path
271
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/datamodel_code_generator/model/pydantic/base_model.py b/datamodel_code_generator/model/pydantic/base_model.py
--- a/datamodel_code_generator/model/pydantic/base_model.py
+++ b/datamodel_code_generator/model/pydantic/base_model.py
@@ -69,7 +69,11 @@
def field(self) -> Optional[str]:
"""for backwards compatibility"""
result = str(self)
- if self.use_default_kwarg and not result.startswith('Field(...'):
+ if (
+ self.use_default_kwarg
+ and not result.startswith('Field(...')
+ and not result.startswith('Field(default_factory=')
+ ):
# Use `default=` for fields that have a default value so that type
# checkers using @dataclass_transform can infer the field as
# optional in __init__.
|
{"golden_diff": "diff --git a/datamodel_code_generator/model/pydantic/base_model.py b/datamodel_code_generator/model/pydantic/base_model.py\n--- a/datamodel_code_generator/model/pydantic/base_model.py\n+++ b/datamodel_code_generator/model/pydantic/base_model.py\n@@ -69,7 +69,11 @@\n def field(self) -> Optional[str]:\n \"\"\"for backwards compatibility\"\"\"\n result = str(self)\n- if self.use_default_kwarg and not result.startswith('Field(...'):\n+ if (\n+ self.use_default_kwarg\n+ and not result.startswith('Field(...')\n+ and not result.startswith('Field(default_factory=')\n+ ):\n # Use `default=` for fields that have a default value so that type\n # checkers using @dataclass_transform can infer the field as\n # optional in __init__.\n", "issue": "` --use-default-kwarg` breaks codegen with default_factory `Field(default=default_factory=...`\n**Describe the bug**\r\nWhen using ` --use-default-kwarg` on models with a default_factory codegen breaks: `Field(default=default_factory=...`\r\n\r\n\r\n```\r\nFile \"src\\black\\parsing.py\", line 127, in lib2to3_parse\r\nblack.parsing.InvalidInput: Cannot parse: 17:54: foo: Optional[Foo] = Field(default=default_factory=lambda :Foo.parse_obj({'foo': 'foo'}), title='Foo')\r\n```\r\n**To Reproduce**\r\n\r\nExample schema:\r\n```json\r\n\r\n{\r\n \"title\": \"Requests\",\r\n \"definitions\": {\r\n \"Foo\": {\r\n \"title\": \"Foo\",\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"foo\": {\r\n \"title\": \"Foo\",\r\n \"default\": \"foo\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n },\r\n \"Bar\": {\r\n \"title\": \"Bar\",\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"foo\": {\r\n \"title\": \"Foo\",\r\n \"default\": {\r\n \"foo\": \"foo\"\r\n },\r\n \"allOf\": [\r\n {\r\n \"$ref\": \"#/definitions/Foo\"\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\n```\r\n\r\nUsed commandline:\r\n```\r\n$ datamodel-codegen --input schema.json --output model.py --use-default-kwarg\r\n```\r\n\r\n**Expected behavior**\r\nSuch a model should be produced\r\n``` python\r\nclass Foo(BaseModel):\r\n foo: Optional[str] = Field(default='foo', title='Foo')\r\n\r\n\r\nclass Bar(BaseModel):\r\n foo: Optional[Foo] = Field(default_factory=lambda: Foo.parse_obj({'foo': 'foo'}), title='Foo')\r\n\r\n```\r\n\r\n**Version:**\r\n\r\n\u2705 works in 0.16.1\r\n\u274c breaks in 0.17.0 (probably introduced by https://github.com/koxudaxi/datamodel-code-generator/pull/1047 )\r\n- python 3.11.1\r\n- windows 11\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Set, Tuple, Union\n\nfrom pydantic import Field\n\nfrom datamodel_code_generator import cached_property\nfrom datamodel_code_generator.imports import Import\nfrom datamodel_code_generator.model import (\n ConstraintsBase,\n DataModel,\n DataModelFieldBase,\n)\nfrom datamodel_code_generator.model.base import UNDEFINED\nfrom datamodel_code_generator.model.pydantic.imports import IMPORT_EXTRA, IMPORT_FIELD\nfrom datamodel_code_generator.reference import Reference\nfrom datamodel_code_generator.types import chain_as_tuple\n\n\nclass Constraints(ConstraintsBase):\n gt: Optional[Union[float, int]] = Field(None, alias='exclusiveMinimum')\n ge: Optional[Union[float, int]] = Field(None, alias='minimum')\n lt: Optional[Union[float, int]] = Field(None, alias='exclusiveMaximum')\n le: Optional[Union[float, int]] = Field(None, alias='maximum')\n multiple_of: Optional[float] = Field(None, alias='multipleOf')\n min_items: Optional[int] = Field(None, alias='minItems')\n max_items: Optional[int] = Field(None, alias='maxItems')\n min_length: Optional[int] = Field(None, alias='minLength')\n max_length: Optional[int] = Field(None, alias='maxLength')\n regex: Optional[str] = Field(None, alias='pattern')\n unique_items: Optional[bool] = Field(None, alias='uniqueItems')\n\n\nclass DataModelField(DataModelFieldBase):\n _EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {\n 'alias',\n 'default',\n 'const',\n 'gt',\n 'ge',\n 'lt',\n 'le',\n 'multiple_of',\n 'min_items',\n 'max_items',\n 'min_length',\n 'max_length',\n 'regex',\n }\n _COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {'gt', 'ge', 'lt', 'le'}\n constraints: Optional[Constraints] = None\n\n @property\n def method(self) -> Optional[str]:\n return self.validator\n\n @property\n def validator(self) -> Optional[str]:\n return None\n # TODO refactor this method for other validation logic\n # from datamodel_code_generator.model.pydantic import VALIDATOR_TEMPLATE\n #\n # return VALIDATOR_TEMPLATE.render(\n # field_name=self.name, types=','.join([t.type_hint for t in self.data_types])\n # )\n\n @property\n def field(self) -> Optional[str]:\n \"\"\"for backwards compatibility\"\"\"\n result = str(self)\n if self.use_default_kwarg and not result.startswith('Field(...'):\n # Use `default=` for fields that have a default value so that type\n # checkers using @dataclass_transform can infer the field as\n # optional in __init__.\n result = result.replace('Field(', 'Field(default=')\n if result == '':\n return None\n\n return result\n\n def self_reference(self) -> bool:\n return isinstance(self.parent, BaseModel) and self.parent.reference.path in {\n d.reference.path for d in self.data_type.all_data_types if d.reference\n }\n\n def _get_strict_field_constraint_value(self, constraint: str, value: Any) -> Any:\n if value is None or constraint not in self._COMPARE_EXPRESSIONS:\n return value\n\n for data_type in self.data_type.all_data_types:\n if data_type.type == 'int':\n value = int(value)\n else:\n value = float(value)\n break\n return value\n\n def _get_default_as_pydantic_model(self) -> Optional[str]:\n for data_type in self.data_type.data_types or (self.data_type,):\n # TODO: Check nested data_types\n if data_type.is_dict or self.data_type.is_union:\n # TODO: Parse Union and dict model for default\n continue\n elif data_type.is_list and len(data_type.data_types) == 1:\n data_type = data_type.data_types[0]\n data_type.alias\n if (\n data_type.reference\n and isinstance(data_type.reference.source, BaseModel)\n and isinstance(self.default, list)\n ): # pragma: no cover\n return f'lambda :[{data_type.alias or data_type.reference.source.class_name}.parse_obj(v) for v in {repr(self.default)}]'\n elif data_type.reference and isinstance(\n data_type.reference.source, BaseModel\n ): # pragma: no cover\n return f'lambda :{data_type.alias or data_type.reference.source.class_name}.parse_obj({repr(self.default)})'\n return None\n\n def __str__(self) -> str:\n data: Dict[str, Any] = {\n k: v for k, v in self.extras.items() if k not in self._EXCLUDE_FIELD_KEYS\n }\n if self.alias:\n data['alias'] = self.alias\n if (\n self.constraints is not None\n and not self.self_reference()\n and not self.data_type.strict\n ):\n data = {\n **data,\n **{\n k: self._get_strict_field_constraint_value(k, v)\n for k, v in self.constraints.dict().items()\n },\n }\n\n if self.use_field_description:\n data.pop('description', None) # Description is part of field docstring\n\n if self.const:\n data['const'] = True\n\n discriminator = data.pop('discriminator', None)\n if discriminator:\n if isinstance(discriminator, str):\n data['discriminator'] = discriminator\n elif isinstance(discriminator, dict): # pragma: no cover\n data['discriminator'] = discriminator['propertyName']\n\n if self.required:\n default_factory = None\n elif self.default and 'default_factory' not in data:\n default_factory = self._get_default_as_pydantic_model()\n else:\n default_factory = data.pop('default_factory', None)\n\n field_arguments = sorted(\n f'{k}={repr(v)}' for k, v in data.items() if v is not None\n )\n\n if not field_arguments and not default_factory:\n if self.nullable and self.required:\n return 'Field(...)' # Field() is for mypy\n return ''\n\n if self.use_annotated:\n pass\n elif self.required:\n field_arguments = ['...', *field_arguments]\n elif default_factory:\n field_arguments = [f'default_factory={default_factory}', *field_arguments]\n else:\n field_arguments = [f'{repr(self.default)}', *field_arguments]\n\n return f'Field({\", \".join(field_arguments)})'\n\n @property\n def annotated(self) -> Optional[str]:\n if not self.use_annotated or not str(self):\n return None\n return f'Annotated[{self.type_hint}, {str(self)}]'\n\n\nclass BaseModel(DataModel):\n TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic/BaseModel.jinja2'\n BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'\n\n def __init__(\n self,\n *,\n reference: Reference,\n fields: List[DataModelField],\n decorators: Optional[List[str]] = None,\n base_classes: Optional[List[Reference]] = None,\n custom_base_class: Optional[str] = None,\n custom_template_dir: Optional[Path] = None,\n extra_template_data: Optional[DefaultDict[str, Any]] = None,\n path: Optional[Path] = None,\n description: Optional[str] = None,\n default: Any = UNDEFINED,\n nullable: bool = False,\n ):\n methods: List[str] = [field.method for field in fields if field.method]\n\n super().__init__(\n fields=fields, # type: ignore\n reference=reference,\n decorators=decorators,\n base_classes=base_classes,\n custom_base_class=custom_base_class,\n custom_template_dir=custom_template_dir,\n extra_template_data=extra_template_data,\n methods=methods,\n path=path,\n description=description,\n default=default,\n nullable=nullable,\n )\n\n config_parameters: Dict[str, Any] = {}\n\n additionalProperties = self.extra_template_data.get('additionalProperties')\n allow_extra_fields = self.extra_template_data.get('allow_extra_fields')\n if additionalProperties is not None or allow_extra_fields:\n config_parameters['extra'] = (\n 'Extra.allow'\n if additionalProperties or allow_extra_fields\n else 'Extra.forbid'\n )\n self._additional_imports.append(IMPORT_EXTRA)\n\n for config_attribute in 'allow_population_by_field_name', 'allow_mutation':\n if config_attribute in self.extra_template_data:\n config_parameters[config_attribute] = self.extra_template_data[\n config_attribute\n ]\n for data_type in self.all_data_types:\n if data_type.is_custom_type:\n config_parameters['arbitrary_types_allowed'] = True\n break\n\n if isinstance(self.extra_template_data.get('config'), dict):\n for key, value in self.extra_template_data['config'].items():\n config_parameters[key] = value\n\n if config_parameters:\n from datamodel_code_generator.model.pydantic import Config\n\n self.extra_template_data['config'] = Config.parse_obj(config_parameters)\n\n @property\n def imports(self) -> Tuple[Import, ...]:\n if any(f for f in self.fields if f.field):\n return chain_as_tuple(super().imports, (IMPORT_FIELD,))\n return super().imports\n\n @cached_property\n def template_file_path(self) -> Path:\n # This property is for Backward compatibility\n # Current version supports '{custom_template_dir}/BaseModel.jinja'\n # But, Future version will support only '{custom_template_dir}/pydantic/BaseModel.jinja'\n if self._custom_template_dir is not None:\n custom_template_file_path = (\n self._custom_template_dir / Path(self.TEMPLATE_FILE_PATH).name\n )\n if custom_template_file_path.exists():\n return custom_template_file_path\n return super().template_file_path\n", "path": "datamodel_code_generator/model/pydantic/base_model.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Set, Tuple, Union\n\nfrom pydantic import Field\n\nfrom datamodel_code_generator import cached_property\nfrom datamodel_code_generator.imports import Import\nfrom datamodel_code_generator.model import (\n ConstraintsBase,\n DataModel,\n DataModelFieldBase,\n)\nfrom datamodel_code_generator.model.base import UNDEFINED\nfrom datamodel_code_generator.model.pydantic.imports import IMPORT_EXTRA, IMPORT_FIELD\nfrom datamodel_code_generator.reference import Reference\nfrom datamodel_code_generator.types import chain_as_tuple\n\n\nclass Constraints(ConstraintsBase):\n gt: Optional[Union[float, int]] = Field(None, alias='exclusiveMinimum')\n ge: Optional[Union[float, int]] = Field(None, alias='minimum')\n lt: Optional[Union[float, int]] = Field(None, alias='exclusiveMaximum')\n le: Optional[Union[float, int]] = Field(None, alias='maximum')\n multiple_of: Optional[float] = Field(None, alias='multipleOf')\n min_items: Optional[int] = Field(None, alias='minItems')\n max_items: Optional[int] = Field(None, alias='maxItems')\n min_length: Optional[int] = Field(None, alias='minLength')\n max_length: Optional[int] = Field(None, alias='maxLength')\n regex: Optional[str] = Field(None, alias='pattern')\n unique_items: Optional[bool] = Field(None, alias='uniqueItems')\n\n\nclass DataModelField(DataModelFieldBase):\n _EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {\n 'alias',\n 'default',\n 'const',\n 'gt',\n 'ge',\n 'lt',\n 'le',\n 'multiple_of',\n 'min_items',\n 'max_items',\n 'min_length',\n 'max_length',\n 'regex',\n }\n _COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {'gt', 'ge', 'lt', 'le'}\n constraints: Optional[Constraints] = None\n\n @property\n def method(self) -> Optional[str]:\n return self.validator\n\n @property\n def validator(self) -> Optional[str]:\n return None\n # TODO refactor this method for other validation logic\n # from datamodel_code_generator.model.pydantic import VALIDATOR_TEMPLATE\n #\n # return VALIDATOR_TEMPLATE.render(\n # field_name=self.name, types=','.join([t.type_hint for t in self.data_types])\n # )\n\n @property\n def field(self) -> Optional[str]:\n \"\"\"for backwards compatibility\"\"\"\n result = str(self)\n if (\n self.use_default_kwarg\n and not result.startswith('Field(...')\n and not result.startswith('Field(default_factory=')\n ):\n # Use `default=` for fields that have a default value so that type\n # checkers using @dataclass_transform can infer the field as\n # optional in __init__.\n result = result.replace('Field(', 'Field(default=')\n if result == '':\n return None\n\n return result\n\n def self_reference(self) -> bool:\n return isinstance(self.parent, BaseModel) and self.parent.reference.path in {\n d.reference.path for d in self.data_type.all_data_types if d.reference\n }\n\n def _get_strict_field_constraint_value(self, constraint: str, value: Any) -> Any:\n if value is None or constraint not in self._COMPARE_EXPRESSIONS:\n return value\n\n for data_type in self.data_type.all_data_types:\n if data_type.type == 'int':\n value = int(value)\n else:\n value = float(value)\n break\n return value\n\n def _get_default_as_pydantic_model(self) -> Optional[str]:\n for data_type in self.data_type.data_types or (self.data_type,):\n # TODO: Check nested data_types\n if data_type.is_dict or self.data_type.is_union:\n # TODO: Parse Union and dict model for default\n continue\n elif data_type.is_list and len(data_type.data_types) == 1:\n data_type = data_type.data_types[0]\n data_type.alias\n if (\n data_type.reference\n and isinstance(data_type.reference.source, BaseModel)\n and isinstance(self.default, list)\n ): # pragma: no cover\n return f'lambda :[{data_type.alias or data_type.reference.source.class_name}.parse_obj(v) for v in {repr(self.default)}]'\n elif data_type.reference and isinstance(\n data_type.reference.source, BaseModel\n ): # pragma: no cover\n return f'lambda :{data_type.alias or data_type.reference.source.class_name}.parse_obj({repr(self.default)})'\n return None\n\n def __str__(self) -> str:\n data: Dict[str, Any] = {\n k: v for k, v in self.extras.items() if k not in self._EXCLUDE_FIELD_KEYS\n }\n if self.alias:\n data['alias'] = self.alias\n if (\n self.constraints is not None\n and not self.self_reference()\n and not self.data_type.strict\n ):\n data = {\n **data,\n **{\n k: self._get_strict_field_constraint_value(k, v)\n for k, v in self.constraints.dict().items()\n },\n }\n\n if self.use_field_description:\n data.pop('description', None) # Description is part of field docstring\n\n if self.const:\n data['const'] = True\n\n discriminator = data.pop('discriminator', None)\n if discriminator:\n if isinstance(discriminator, str):\n data['discriminator'] = discriminator\n elif isinstance(discriminator, dict): # pragma: no cover\n data['discriminator'] = discriminator['propertyName']\n\n if self.required:\n default_factory = None\n elif self.default and 'default_factory' not in data:\n default_factory = self._get_default_as_pydantic_model()\n else:\n default_factory = data.pop('default_factory', None)\n\n field_arguments = sorted(\n f'{k}={repr(v)}' for k, v in data.items() if v is not None\n )\n\n if not field_arguments and not default_factory:\n if self.nullable and self.required:\n return 'Field(...)' # Field() is for mypy\n return ''\n\n if self.use_annotated:\n pass\n elif self.required:\n field_arguments = ['...', *field_arguments]\n elif default_factory:\n field_arguments = [f'default_factory={default_factory}', *field_arguments]\n else:\n field_arguments = [f'{repr(self.default)}', *field_arguments]\n\n return f'Field({\", \".join(field_arguments)})'\n\n @property\n def annotated(self) -> Optional[str]:\n if not self.use_annotated or not str(self):\n return None\n return f'Annotated[{self.type_hint}, {str(self)}]'\n\n\nclass BaseModel(DataModel):\n TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic/BaseModel.jinja2'\n BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'\n\n def __init__(\n self,\n *,\n reference: Reference,\n fields: List[DataModelField],\n decorators: Optional[List[str]] = None,\n base_classes: Optional[List[Reference]] = None,\n custom_base_class: Optional[str] = None,\n custom_template_dir: Optional[Path] = None,\n extra_template_data: Optional[DefaultDict[str, Any]] = None,\n path: Optional[Path] = None,\n description: Optional[str] = None,\n default: Any = UNDEFINED,\n nullable: bool = False,\n ):\n methods: List[str] = [field.method for field in fields if field.method]\n\n super().__init__(\n fields=fields, # type: ignore\n reference=reference,\n decorators=decorators,\n base_classes=base_classes,\n custom_base_class=custom_base_class,\n custom_template_dir=custom_template_dir,\n extra_template_data=extra_template_data,\n methods=methods,\n path=path,\n description=description,\n default=default,\n nullable=nullable,\n )\n\n config_parameters: Dict[str, Any] = {}\n\n additionalProperties = self.extra_template_data.get('additionalProperties')\n allow_extra_fields = self.extra_template_data.get('allow_extra_fields')\n if additionalProperties is not None or allow_extra_fields:\n config_parameters['extra'] = (\n 'Extra.allow'\n if additionalProperties or allow_extra_fields\n else 'Extra.forbid'\n )\n self._additional_imports.append(IMPORT_EXTRA)\n\n for config_attribute in 'allow_population_by_field_name', 'allow_mutation':\n if config_attribute in self.extra_template_data:\n config_parameters[config_attribute] = self.extra_template_data[\n config_attribute\n ]\n for data_type in self.all_data_types:\n if data_type.is_custom_type:\n config_parameters['arbitrary_types_allowed'] = True\n break\n\n if isinstance(self.extra_template_data.get('config'), dict):\n for key, value in self.extra_template_data['config'].items():\n config_parameters[key] = value\n\n if config_parameters:\n from datamodel_code_generator.model.pydantic import Config\n\n self.extra_template_data['config'] = Config.parse_obj(config_parameters)\n\n @property\n def imports(self) -> Tuple[Import, ...]:\n if any(f for f in self.fields if f.field):\n return chain_as_tuple(super().imports, (IMPORT_FIELD,))\n return super().imports\n\n @cached_property\n def template_file_path(self) -> Path:\n # This property is for Backward compatibility\n # Current version supports '{custom_template_dir}/BaseModel.jinja'\n # But, Future version will support only '{custom_template_dir}/pydantic/BaseModel.jinja'\n if self._custom_template_dir is not None:\n custom_template_file_path = (\n self._custom_template_dir / Path(self.TEMPLATE_FILE_PATH).name\n )\n if custom_template_file_path.exists():\n return custom_template_file_path\n return super().template_file_path\n", "path": "datamodel_code_generator/model/pydantic/base_model.py"}]}
| 3,610 | 183 |
gh_patches_debug_5910
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-18673
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JVM resource jar creation is broken on Mac by variation in `touch` command.
BSD touch uses `-t` to set the timestamp whereas GNU touch uses `-d`. We use / assume the GNU binary as noted here: https://github.com/pantsbuild/pants/pull/16950#discussion_r1157196330
As discovered here: https://pantsbuild.slack.com/archives/C046T6T9U/p1680604327733559?thread_ts=1680604327.733559&cid=C046T6T9U
Where the error looks like:
```
12:24:56.74 [ERROR] 1 Exception encountered:
Engine traceback:
in select
in pants.core.goals.check.check
in pants.backend.scala.goals.check.scalac_check (scalac)
in pants.backend.scala.compile.scalac.compile_scala_source
in pants.jvm.compile.compile_classpath_entries
in pants.jvm.resources.assemble_resources_jar
in pants.engine.process.fallible_to_exec_result_or_raise
Traceback (most recent call last):
File "/Users/jbenito/.cache/pants/setup/bootstrap-Darwin-x86_64/pants.1Nnv7r/install/lib/python3.9/site-packages/pants/engine/process.py", line 275, in fallible_to_exec_result_or_raise
raise ProcessExecutionFailure(
pants.engine.process.ProcessExecutionFailure: Process 'Build resources JAR for sdk/transport-security-web-lib/src/test/resources:resources' failed with exit code 1.
stdout:
stderr:
/usr/bin/touch: illegal option -- d
usage:
touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...
```
It appears #16950 was cherry picked back to 2.13.1 and 2.14.0; so Pants has been broken for JVM resource jars since 2.13.1.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/jvm/resources.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import itertools
5 import logging
6 import shlex
7 from itertools import chain
8 from pathlib import Path
9
10 from pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet
11 from pants.core.util_rules import stripped_source_files
12 from pants.core.util_rules.source_files import SourceFilesRequest
13 from pants.core.util_rules.stripped_source_files import StrippedSourceFiles
14 from pants.core.util_rules.system_binaries import BashBinary, TouchBinary, ZipBinary
15 from pants.engine.fs import Digest, MergeDigests
16 from pants.engine.internals.selectors import MultiGet
17 from pants.engine.process import Process, ProcessResult
18 from pants.engine.rules import Get, collect_rules, rule
19 from pants.engine.target import SourcesField
20 from pants.engine.unions import UnionRule
21 from pants.jvm import compile
22 from pants.jvm.compile import (
23 ClasspathDependenciesRequest,
24 ClasspathEntry,
25 ClasspathEntryRequest,
26 ClasspathEntryRequests,
27 CompileResult,
28 FallibleClasspathEntries,
29 FallibleClasspathEntry,
30 )
31 from pants.jvm.subsystems import JvmSubsystem
32 from pants.util.logging import LogLevel
33
34 logger = logging.getLogger(__name__)
35
36
37 class JvmResourcesRequest(ClasspathEntryRequest):
38 field_sets = (
39 ResourcesFieldSet,
40 ResourcesGeneratorFieldSet,
41 )
42
43
44 @rule(desc="Assemble resources")
45 async def assemble_resources_jar(
46 zip: ZipBinary,
47 bash: BashBinary,
48 touch: TouchBinary,
49 jvm: JvmSubsystem,
50 request: JvmResourcesRequest,
51 ) -> FallibleClasspathEntry:
52 # Request the component's direct dependency classpath, and additionally any prerequisite.
53 # Filter out any dependencies that are generated by our current target so that each resource
54 # only appears in a single input JAR.
55 # NOTE: Generated dependencies will have the same dependencies as the current target, so we
56 # don't need to inspect those dependencies.
57 optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]
58 fallibles = await MultiGet(
59 Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),
60 Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),
61 )
62 direct_dependency_classpath_entries = FallibleClasspathEntries(
63 itertools.chain(*fallibles)
64 ).if_all_succeeded()
65
66 if direct_dependency_classpath_entries is None:
67 return FallibleClasspathEntry(
68 description=str(request.component),
69 result=CompileResult.DEPENDENCY_FAILED,
70 output=None,
71 exit_code=1,
72 )
73
74 source_files = await Get(
75 StrippedSourceFiles,
76 SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),
77 )
78
79 output_filename = f"{request.component.representative.address.path_safe_spec}.resources.jar"
80 output_files = [output_filename]
81
82 # #16231: Valid JAR files need the directories of each resource file as well as the files
83 # themselves.
84
85 paths = {Path(filename) for filename in source_files.snapshot.files}
86 directories = {parent for path in paths for parent in path.parents}
87 input_files = {str(path) for path in chain(paths, directories)}
88
89 resources_jar_input_digest = source_files.snapshot.digest
90
91 input_filenames = " ".join(shlex.quote(file) for file in sorted(input_files))
92
93 resources_jar_result = await Get(
94 ProcessResult,
95 Process(
96 argv=[
97 bash.path,
98 "-c",
99 " ".join(
100 [
101 touch.path,
102 "-d 1980-01-01T00:00:00Z",
103 input_filenames,
104 "&&",
105 "TZ=UTC",
106 zip.path,
107 "-oX",
108 output_filename,
109 input_filenames,
110 ]
111 ),
112 ],
113 description=f"Build resources JAR for {request.component}",
114 input_digest=resources_jar_input_digest,
115 output_files=output_files,
116 level=LogLevel.DEBUG,
117 ),
118 )
119
120 output_digest = resources_jar_result.output_digest
121 cpe = ClasspathEntry(output_digest, output_files, [])
122
123 merged_cpe_digest = await Get(
124 Digest,
125 MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),
126 )
127
128 merged_cpe = ClasspathEntry.merge(
129 digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]
130 )
131
132 return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)
133
134
135 def rules():
136 return [
137 *collect_rules(),
138 *compile.rules(),
139 *stripped_source_files.rules(),
140 UnionRule(ClasspathEntryRequest, JvmResourcesRequest),
141 ]
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/jvm/resources.py b/src/python/pants/jvm/resources.py
--- a/src/python/pants/jvm/resources.py
+++ b/src/python/pants/jvm/resources.py
@@ -98,8 +98,9 @@
"-c",
" ".join(
[
+ "TZ=UTC",
touch.path,
- "-d 1980-01-01T00:00:00Z",
+ "-t 198001010000.00",
input_filenames,
"&&",
"TZ=UTC",
|
{"golden_diff": "diff --git a/src/python/pants/jvm/resources.py b/src/python/pants/jvm/resources.py\n--- a/src/python/pants/jvm/resources.py\n+++ b/src/python/pants/jvm/resources.py\n@@ -98,8 +98,9 @@\n \"-c\",\n \" \".join(\n [\n+ \"TZ=UTC\",\n touch.path,\n- \"-d 1980-01-01T00:00:00Z\",\n+ \"-t 198001010000.00\",\n input_filenames,\n \"&&\",\n \"TZ=UTC\",\n", "issue": "JVM resource jar creation is broken on Mac by variation in `touch` command.\nBSD touch uses `-t` to set the timestamp whereas GNU touch uses `-d`. We use / assume the GNU binary as noted here: https://github.com/pantsbuild/pants/pull/16950#discussion_r1157196330\n\nAs discovered here: https://pantsbuild.slack.com/archives/C046T6T9U/p1680604327733559?thread_ts=1680604327.733559&cid=C046T6T9U\n\nWhere the error looks like:\n```\n 12:24:56.74 [ERROR] 1 Exception encountered:\n \n Engine traceback:\n in select\n in pants.core.goals.check.check\n in pants.backend.scala.goals.check.scalac_check (scalac)\n in pants.backend.scala.compile.scalac.compile_scala_source\n in pants.jvm.compile.compile_classpath_entries\n in pants.jvm.resources.assemble_resources_jar\n in pants.engine.process.fallible_to_exec_result_or_raise\n Traceback (most recent call last):\n File \"/Users/jbenito/.cache/pants/setup/bootstrap-Darwin-x86_64/pants.1Nnv7r/install/lib/python3.9/site-packages/pants/engine/process.py\", line 275, in fallible_to_exec_result_or_raise\n raise ProcessExecutionFailure(\n pants.engine.process.ProcessExecutionFailure: Process 'Build resources JAR for sdk/transport-security-web-lib/src/test/resources:resources' failed with exit code 1.\n stdout:\n \n stderr:\n /usr/bin/touch: illegal option -- d\n usage:\n touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...\n```\n\nIt appears #16950 was cherry picked back to 2.13.1 and 2.14.0; so Pants has been broken for JVM resource jars since 2.13.1.\n\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport itertools\nimport logging\nimport shlex\nfrom itertools import chain\nfrom pathlib import Path\n\nfrom pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet\nfrom pants.core.util_rules import stripped_source_files\nfrom pants.core.util_rules.source_files import SourceFilesRequest\nfrom pants.core.util_rules.stripped_source_files import StrippedSourceFiles\nfrom pants.core.util_rules.system_binaries import BashBinary, TouchBinary, ZipBinary\nfrom pants.engine.fs import Digest, MergeDigests\nfrom pants.engine.internals.selectors import MultiGet\nfrom pants.engine.process import Process, ProcessResult\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import SourcesField\nfrom pants.engine.unions import UnionRule\nfrom pants.jvm import compile\nfrom pants.jvm.compile import (\n ClasspathDependenciesRequest,\n ClasspathEntry,\n ClasspathEntryRequest,\n ClasspathEntryRequests,\n CompileResult,\n FallibleClasspathEntries,\n FallibleClasspathEntry,\n)\nfrom pants.jvm.subsystems import JvmSubsystem\nfrom pants.util.logging import LogLevel\n\nlogger = logging.getLogger(__name__)\n\n\nclass JvmResourcesRequest(ClasspathEntryRequest):\n field_sets = (\n ResourcesFieldSet,\n ResourcesGeneratorFieldSet,\n )\n\n\n@rule(desc=\"Assemble resources\")\nasync def assemble_resources_jar(\n zip: ZipBinary,\n bash: BashBinary,\n touch: TouchBinary,\n jvm: JvmSubsystem,\n request: JvmResourcesRequest,\n) -> FallibleClasspathEntry:\n # Request the component's direct dependency classpath, and additionally any prerequisite.\n # Filter out any dependencies that are generated by our current target so that each resource\n # only appears in a single input JAR.\n # NOTE: Generated dependencies will have the same dependencies as the current target, so we\n # don't need to inspect those dependencies.\n optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]\n fallibles = await MultiGet(\n Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),\n Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),\n )\n direct_dependency_classpath_entries = FallibleClasspathEntries(\n itertools.chain(*fallibles)\n ).if_all_succeeded()\n\n if direct_dependency_classpath_entries is None:\n return FallibleClasspathEntry(\n description=str(request.component),\n result=CompileResult.DEPENDENCY_FAILED,\n output=None,\n exit_code=1,\n )\n\n source_files = await Get(\n StrippedSourceFiles,\n SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),\n )\n\n output_filename = f\"{request.component.representative.address.path_safe_spec}.resources.jar\"\n output_files = [output_filename]\n\n # #16231: Valid JAR files need the directories of each resource file as well as the files\n # themselves.\n\n paths = {Path(filename) for filename in source_files.snapshot.files}\n directories = {parent for path in paths for parent in path.parents}\n input_files = {str(path) for path in chain(paths, directories)}\n\n resources_jar_input_digest = source_files.snapshot.digest\n\n input_filenames = \" \".join(shlex.quote(file) for file in sorted(input_files))\n\n resources_jar_result = await Get(\n ProcessResult,\n Process(\n argv=[\n bash.path,\n \"-c\",\n \" \".join(\n [\n touch.path,\n \"-d 1980-01-01T00:00:00Z\",\n input_filenames,\n \"&&\",\n \"TZ=UTC\",\n zip.path,\n \"-oX\",\n output_filename,\n input_filenames,\n ]\n ),\n ],\n description=f\"Build resources JAR for {request.component}\",\n input_digest=resources_jar_input_digest,\n output_files=output_files,\n level=LogLevel.DEBUG,\n ),\n )\n\n output_digest = resources_jar_result.output_digest\n cpe = ClasspathEntry(output_digest, output_files, [])\n\n merged_cpe_digest = await Get(\n Digest,\n MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),\n )\n\n merged_cpe = ClasspathEntry.merge(\n digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]\n )\n\n return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)\n\n\ndef rules():\n return [\n *collect_rules(),\n *compile.rules(),\n *stripped_source_files.rules(),\n UnionRule(ClasspathEntryRequest, JvmResourcesRequest),\n ]\n", "path": "src/python/pants/jvm/resources.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport itertools\nimport logging\nimport shlex\nfrom itertools import chain\nfrom pathlib import Path\n\nfrom pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet\nfrom pants.core.util_rules import stripped_source_files\nfrom pants.core.util_rules.source_files import SourceFilesRequest\nfrom pants.core.util_rules.stripped_source_files import StrippedSourceFiles\nfrom pants.core.util_rules.system_binaries import BashBinary, TouchBinary, ZipBinary\nfrom pants.engine.fs import Digest, MergeDigests\nfrom pants.engine.internals.selectors import MultiGet\nfrom pants.engine.process import Process, ProcessResult\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import SourcesField\nfrom pants.engine.unions import UnionRule\nfrom pants.jvm import compile\nfrom pants.jvm.compile import (\n ClasspathDependenciesRequest,\n ClasspathEntry,\n ClasspathEntryRequest,\n ClasspathEntryRequests,\n CompileResult,\n FallibleClasspathEntries,\n FallibleClasspathEntry,\n)\nfrom pants.jvm.subsystems import JvmSubsystem\nfrom pants.util.logging import LogLevel\n\nlogger = logging.getLogger(__name__)\n\n\nclass JvmResourcesRequest(ClasspathEntryRequest):\n field_sets = (\n ResourcesFieldSet,\n ResourcesGeneratorFieldSet,\n )\n\n\n@rule(desc=\"Assemble resources\")\nasync def assemble_resources_jar(\n zip: ZipBinary,\n bash: BashBinary,\n touch: TouchBinary,\n jvm: JvmSubsystem,\n request: JvmResourcesRequest,\n) -> FallibleClasspathEntry:\n # Request the component's direct dependency classpath, and additionally any prerequisite.\n # Filter out any dependencies that are generated by our current target so that each resource\n # only appears in a single input JAR.\n # NOTE: Generated dependencies will have the same dependencies as the current target, so we\n # don't need to inspect those dependencies.\n optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]\n fallibles = await MultiGet(\n Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),\n Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),\n )\n direct_dependency_classpath_entries = FallibleClasspathEntries(\n itertools.chain(*fallibles)\n ).if_all_succeeded()\n\n if direct_dependency_classpath_entries is None:\n return FallibleClasspathEntry(\n description=str(request.component),\n result=CompileResult.DEPENDENCY_FAILED,\n output=None,\n exit_code=1,\n )\n\n source_files = await Get(\n StrippedSourceFiles,\n SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),\n )\n\n output_filename = f\"{request.component.representative.address.path_safe_spec}.resources.jar\"\n output_files = [output_filename]\n\n # #16231: Valid JAR files need the directories of each resource file as well as the files\n # themselves.\n\n paths = {Path(filename) for filename in source_files.snapshot.files}\n directories = {parent for path in paths for parent in path.parents}\n input_files = {str(path) for path in chain(paths, directories)}\n\n resources_jar_input_digest = source_files.snapshot.digest\n\n input_filenames = \" \".join(shlex.quote(file) for file in sorted(input_files))\n\n resources_jar_result = await Get(\n ProcessResult,\n Process(\n argv=[\n bash.path,\n \"-c\",\n \" \".join(\n [\n \"TZ=UTC\",\n touch.path,\n \"-t 198001010000.00\",\n input_filenames,\n \"&&\",\n \"TZ=UTC\",\n zip.path,\n \"-oX\",\n output_filename,\n input_filenames,\n ]\n ),\n ],\n description=f\"Build resources JAR for {request.component}\",\n input_digest=resources_jar_input_digest,\n output_files=output_files,\n level=LogLevel.DEBUG,\n ),\n )\n\n output_digest = resources_jar_result.output_digest\n cpe = ClasspathEntry(output_digest, output_files, [])\n\n merged_cpe_digest = await Get(\n Digest,\n MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),\n )\n\n merged_cpe = ClasspathEntry.merge(\n digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]\n )\n\n return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)\n\n\ndef rules():\n return [\n *collect_rules(),\n *compile.rules(),\n *stripped_source_files.rules(),\n UnionRule(ClasspathEntryRequest, JvmResourcesRequest),\n ]\n", "path": "src/python/pants/jvm/resources.py"}]}
| 2,095 | 139 |
gh_patches_debug_3096
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-834
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump mypy from 0.720 to 0.730
Bumps [mypy](https://github.com/python/mypy) from 0.720 to 0.730.
<details>
<summary>Commits</summary>
- [`7ad7f8b`](https://github.com/python/mypy/commit/7ad7f8bbe61e5e67aa7fd6f2efe280931dd2f620) Bump version to 0.730
- [`90776b8`](https://github.com/python/mypy/commit/90776b8b49dee8b5c84a7e90c1b563f2fd88f4f1) Document error codes ([#7451](https://github-redirect.dependabot.com/python/mypy/issues/7451))
- [`99475b2`](https://github.com/python/mypy/commit/99475b21705816a48a5f2cc0380907d21a93442f) Don't serialize redefined symbol nodes ([#7499](https://github-redirect.dependabot.com/python/mypy/issues/7499))
- [`8c17dd8`](https://github.com/python/mypy/commit/8c17dd863121138b20d92184786ed3777d4c574c) Don't compile mypyc/lib-rt/setup.py ([#7497](https://github-redirect.dependabot.com/python/mypy/issues/7497))
- [`41db9a0`](https://github.com/python/mypy/commit/41db9a0c570a3e190f3749cf0b681a31823dc0f7) Pass is_classmethod to bind_self() also for superype ([#7491](https://github-redirect.dependabot.com/python/mypy/issues/7491))
- [`2bdbacf`](https://github.com/python/mypy/commit/2bdbacf32a2b5201200dc2ed8ef5c7175b8de739) Attempt to fix travis on Python 3.8 beta ([#7492](https://github-redirect.dependabot.com/python/mypy/issues/7492))
- [`09c243d`](https://github.com/python/mypy/commit/09c243dcc12935b989367f31d1d25d7fd0ec634c) Point error to incompatible argument instead of call expression ([#7470](https://github-redirect.dependabot.com/python/mypy/issues/7470))
- [`88e2b67`](https://github.com/python/mypy/commit/88e2b67c4c2e8590dbee4aec272b3727b9566f0b) Support pickling of extension classes ([#7481](https://github-redirect.dependabot.com/python/mypy/issues/7481))
- [`9f1b8e9`](https://github.com/python/mypy/commit/9f1b8e930b812385fc866b3145785f7bb59361ef) Fix missing quotes in sample python snippet ([#7487](https://github-redirect.dependabot.com/python/mypy/issues/7487))
- [`37e5be1`](https://github.com/python/mypy/commit/37e5be10c845be3c036721c9462ef9cd90469236) Add http:// in front of the docs url for strict-optional ([#7485](https://github-redirect.dependabot.com/python/mypy/issues/7485))
- Additional commits viewable in [compare view](https://github.com/python/mypy/compare/v0.720...v0.730)
</details>
<br />
[](https://dependabot.com/compatibility-score.html?dependency-name=mypy&package-manager=pip&previous-version=0.720&new-version=0.730)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
- `@dependabot badge me` will comment on this PR with code to add a "Dependabot enabled" badge to your readme
Additionally, you can set the following in your Dependabot [dashboard](https://app.dependabot.com):
- Update frequency (including time of day and day of week)
- Pull request limits (per update run and/or open at any time)
- Automerge options (never/patch/minor, and dev/runtime dependencies)
- Out-of-range updates (receive only lockfile updates, if desired)
- Security updates (receive only security updates, if desired)
Finally, you can contact us by mentioning @dependabot.
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/compat/nodes.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import ast
4
5 try: # pragma: no cover
6 from ast import Constant as Constant # type: ignore # noqa: WPS433, WPS113
7 except ImportError: # pragma: no cover
8 class Constant(ast.AST): # type: ignore # noqa: WPS440
9 """
10 Fallback for pythons that do not have ``ast.Constant``.
11
12 In this case ``Constant`` is replaced with:
13
14 - ``ast.Num``
15 - ``ast.Str`` and ``ast.Bytes``
16 - ``ast.NameConstant``
17
18 Only ``python3.8+`` has this node.
19 """
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wemake_python_styleguide/compat/nodes.py b/wemake_python_styleguide/compat/nodes.py
--- a/wemake_python_styleguide/compat/nodes.py
+++ b/wemake_python_styleguide/compat/nodes.py
@@ -3,7 +3,7 @@
import ast
try: # pragma: no cover
- from ast import Constant as Constant # type: ignore # noqa: WPS433, WPS113
+ from ast import Constant as Constant # noqa: WPS433, WPS113
except ImportError: # pragma: no cover
class Constant(ast.AST): # type: ignore # noqa: WPS440
"""
|
{"golden_diff": "diff --git a/wemake_python_styleguide/compat/nodes.py b/wemake_python_styleguide/compat/nodes.py\n--- a/wemake_python_styleguide/compat/nodes.py\n+++ b/wemake_python_styleguide/compat/nodes.py\n@@ -3,7 +3,7 @@\n import ast\n \n try: # pragma: no cover\n- from ast import Constant as Constant # type: ignore # noqa: WPS433, WPS113\n+ from ast import Constant as Constant # noqa: WPS433, WPS113\n except ImportError: # pragma: no cover\n class Constant(ast.AST): # type: ignore # noqa: WPS440\n \"\"\"\n", "issue": "Bump mypy from 0.720 to 0.730\nBumps [mypy](https://github.com/python/mypy) from 0.720 to 0.730.\n<details>\n<summary>Commits</summary>\n\n- [`7ad7f8b`](https://github.com/python/mypy/commit/7ad7f8bbe61e5e67aa7fd6f2efe280931dd2f620) Bump version to 0.730\n- [`90776b8`](https://github.com/python/mypy/commit/90776b8b49dee8b5c84a7e90c1b563f2fd88f4f1) Document error codes ([#7451](https://github-redirect.dependabot.com/python/mypy/issues/7451))\n- [`99475b2`](https://github.com/python/mypy/commit/99475b21705816a48a5f2cc0380907d21a93442f) Don't serialize redefined symbol nodes ([#7499](https://github-redirect.dependabot.com/python/mypy/issues/7499))\n- [`8c17dd8`](https://github.com/python/mypy/commit/8c17dd863121138b20d92184786ed3777d4c574c) Don't compile mypyc/lib-rt/setup.py ([#7497](https://github-redirect.dependabot.com/python/mypy/issues/7497))\n- [`41db9a0`](https://github.com/python/mypy/commit/41db9a0c570a3e190f3749cf0b681a31823dc0f7) Pass is_classmethod to bind_self() also for superype ([#7491](https://github-redirect.dependabot.com/python/mypy/issues/7491))\n- [`2bdbacf`](https://github.com/python/mypy/commit/2bdbacf32a2b5201200dc2ed8ef5c7175b8de739) Attempt to fix travis on Python 3.8 beta ([#7492](https://github-redirect.dependabot.com/python/mypy/issues/7492))\n- [`09c243d`](https://github.com/python/mypy/commit/09c243dcc12935b989367f31d1d25d7fd0ec634c) Point error to incompatible argument instead of call expression ([#7470](https://github-redirect.dependabot.com/python/mypy/issues/7470))\n- [`88e2b67`](https://github.com/python/mypy/commit/88e2b67c4c2e8590dbee4aec272b3727b9566f0b) Support pickling of extension classes ([#7481](https://github-redirect.dependabot.com/python/mypy/issues/7481))\n- [`9f1b8e9`](https://github.com/python/mypy/commit/9f1b8e930b812385fc866b3145785f7bb59361ef) Fix missing quotes in sample python snippet ([#7487](https://github-redirect.dependabot.com/python/mypy/issues/7487))\n- [`37e5be1`](https://github.com/python/mypy/commit/37e5be10c845be3c036721c9462ef9cd90469236) Add http:// in front of the docs url for strict-optional ([#7485](https://github-redirect.dependabot.com/python/mypy/issues/7485))\n- Additional commits viewable in [compare view](https://github.com/python/mypy/compare/v0.720...v0.730)\n</details>\n<br />\n\n[](https://dependabot.com/compatibility-score.html?dependency-name=mypy&package-manager=pip&previous-version=0.720&new-version=0.730)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n<details>\n<summary>Dependabot commands and options</summary>\n<br />\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language\n- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language\n- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language\n- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language\n- `@dependabot badge me` will comment on this PR with code to add a \"Dependabot enabled\" badge to your readme\n\nAdditionally, you can set the following in your Dependabot [dashboard](https://app.dependabot.com):\n- Update frequency (including time of day and day of week)\n- Pull request limits (per update run and/or open at any time)\n- Automerge options (never/patch/minor, and dev/runtime dependencies)\n- Out-of-range updates (receive only lockfile updates, if desired)\n- Security updates (receive only security updates, if desired)\n\nFinally, you can contact us by mentioning @dependabot.\n\n</details>\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport ast\n\ntry: # pragma: no cover\n from ast import Constant as Constant # type: ignore # noqa: WPS433, WPS113\nexcept ImportError: # pragma: no cover\n class Constant(ast.AST): # type: ignore # noqa: WPS440\n \"\"\"\n Fallback for pythons that do not have ``ast.Constant``.\n\n In this case ``Constant`` is replaced with:\n\n - ``ast.Num``\n - ``ast.Str`` and ``ast.Bytes``\n - ``ast.NameConstant``\n\n Only ``python3.8+`` has this node.\n \"\"\"\n", "path": "wemake_python_styleguide/compat/nodes.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport ast\n\ntry: # pragma: no cover\n from ast import Constant as Constant # noqa: WPS433, WPS113\nexcept ImportError: # pragma: no cover\n class Constant(ast.AST): # type: ignore # noqa: WPS440\n \"\"\"\n Fallback for pythons that do not have ``ast.Constant``.\n\n In this case ``Constant`` is replaced with:\n\n - ``ast.Num``\n - ``ast.Str`` and ``ast.Bytes``\n - ``ast.NameConstant``\n\n Only ``python3.8+`` has this node.\n \"\"\"\n", "path": "wemake_python_styleguide/compat/nodes.py"}]}
| 2,142 | 164 |
gh_patches_debug_35493
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-287
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reprojection Example/Documentation
I was having some trouble following the [reprojection](https://github.com/mapbox/rasterio/blob/master/examples/reproject.py) example; the Affine parameters for `dst_transform` aren't referenced anywhere before they are applied:
https://github.com/mapbox/rasterio/blob/master/examples/reproject.py#L29
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/transform.py`
Content:
```
1 import warnings
2
3 from affine import Affine
4
5 IDENTITY = Affine.identity()
6
7
8 def tastes_like_gdal(seq):
9 """Return True if `seq` matches the GDAL geotransform pattern."""
10 return seq[2] == seq[4] == 0.0 and seq[1] > 0 and seq[5] < 0
11
12
13 def guard_transform(transform):
14 """Return an Affine transformation instance"""
15 if not isinstance(transform, Affine):
16 if tastes_like_gdal(transform):
17 warnings.warn(
18 "GDAL-style transforms are deprecated and will not "
19 "be supported in Rasterio 1.0.",
20 FutureWarning,
21 stacklevel=2)
22 transform = Affine.from_gdal(*transform)
23 else:
24 transform = Affine(*transform)
25 return transform
26
```
Path: `examples/reproject.py`
Content:
```
1 import os
2 import shutil
3 import subprocess
4 import tempfile
5
6 import numpy
7 import rasterio
8 from rasterio import Affine as A
9 from rasterio.warp import reproject, RESAMPLING
10
11 tempdir = '/tmp'
12 tiffname = os.path.join(tempdir, 'example.tif')
13
14 with rasterio.drivers():
15
16 # Consider a 512 x 512 raster centered on 0 degrees E and 0 degrees N
17 # with each pixel covering 15".
18 rows, cols = src_shape = (512, 512)
19 dpp = 1.0/240 # decimal degrees per pixel
20 # The following is equivalent to
21 # A(dpp, 0, -cols*dpp/2, 0, -dpp, rows*dpp/2).
22 src_transform = A.translation(-cols*dpp/2, rows*dpp/2) * A.scale(dpp, -dpp)
23 src_crs = {'init': 'EPSG:4326'}
24 source = numpy.ones(src_shape, numpy.uint8)*255
25
26 # Prepare to reproject this rasters to a 1024 x 1024 dataset in
27 # Web Mercator (EPSG:3857) with origin at -8928592, 2999585.
28 dst_shape = (1024, 1024)
29 dst_transform = A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)
30 dst_transform = dst_transform.to_gdal()
31 dst_crs = {'init': 'EPSG:3857'}
32 destination = numpy.zeros(dst_shape, numpy.uint8)
33
34 reproject(
35 source,
36 destination,
37 src_transform=src_transform,
38 src_crs=src_crs,
39 dst_transform=dst_transform,
40 dst_crs=dst_crs,
41 resampling=RESAMPLING.nearest)
42
43 # Assert that the destination is only partly filled.
44 assert destination.any()
45 assert not destination.all()
46
47 # Write it out to a file.
48 with rasterio.open(
49 tiffname,
50 'w',
51 driver='GTiff',
52 width=dst_shape[1],
53 height=dst_shape[0],
54 count=1,
55 dtype=numpy.uint8,
56 nodata=0,
57 transform=dst_transform,
58 crs=dst_crs) as dst:
59 dst.write_band(1, destination)
60
61 info = subprocess.call(['open', tiffname])
62
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/reproject.py b/examples/reproject.py
--- a/examples/reproject.py
+++ b/examples/reproject.py
@@ -5,7 +5,7 @@
import numpy
import rasterio
-from rasterio import Affine as A
+from rasterio import transform
from rasterio.warp import reproject, RESAMPLING
tempdir = '/tmp'
@@ -17,17 +17,15 @@
# with each pixel covering 15".
rows, cols = src_shape = (512, 512)
dpp = 1.0/240 # decimal degrees per pixel
- # The following is equivalent to
- # A(dpp, 0, -cols*dpp/2, 0, -dpp, rows*dpp/2).
- src_transform = A.translation(-cols*dpp/2, rows*dpp/2) * A.scale(dpp, -dpp)
+ west, south, east, north = -cols*dpp/2, -rows*dpp/2, cols*dpp/2, rows*dpp/2
+ src_transform = transform.from_bounds(west, south, east, north, cols, rows)
src_crs = {'init': 'EPSG:4326'}
source = numpy.ones(src_shape, numpy.uint8)*255
# Prepare to reproject this rasters to a 1024 x 1024 dataset in
- # Web Mercator (EPSG:3857) with origin at -8928592, 2999585.
+ # Web Mercator (EPSG:3857) with origin at -237481.5, 237536.4.
dst_shape = (1024, 1024)
- dst_transform = A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)
- dst_transform = dst_transform.to_gdal()
+ dst_transform = transform.from_origin(-237481.5, 237536.4, 425.0, 425.0)
dst_crs = {'init': 'EPSG:3857'}
destination = numpy.zeros(dst_shape, numpy.uint8)
@@ -59,4 +57,3 @@
dst.write_band(1, destination)
info = subprocess.call(['open', tiffname])
-
diff --git a/rasterio/transform.py b/rasterio/transform.py
--- a/rasterio/transform.py
+++ b/rasterio/transform.py
@@ -23,3 +23,18 @@
else:
transform = Affine(*transform)
return transform
+
+
+def from_origin(west, north, xsize, ysize):
+ """Return an Affine transformation for a georeferenced raster given
+ the coordinates of its upper left corner `west`, `north` and pixel
+ sizes `xsize`, `ysize`."""
+ return Affine.translation(west, north) * Affine.scale(xsize, -ysize)
+
+
+def from_bounds(west, south, east, north, width, height):
+ """Return an Affine transformation for a georeferenced raster given
+ its bounds `west`, `south`, `east`, `north` and its `width` and
+ `height` in number of pixels."""
+ return Affine.translation(west, north) * Affine.scale(
+ (east - west)/width, (south - north)/height)
|
{"golden_diff": "diff --git a/examples/reproject.py b/examples/reproject.py\n--- a/examples/reproject.py\n+++ b/examples/reproject.py\n@@ -5,7 +5,7 @@\n \n import numpy\n import rasterio\n-from rasterio import Affine as A\n+from rasterio import transform\n from rasterio.warp import reproject, RESAMPLING\n \n tempdir = '/tmp'\n@@ -17,17 +17,15 @@\n # with each pixel covering 15\".\n rows, cols = src_shape = (512, 512)\n dpp = 1.0/240 # decimal degrees per pixel\n- # The following is equivalent to \n- # A(dpp, 0, -cols*dpp/2, 0, -dpp, rows*dpp/2).\n- src_transform = A.translation(-cols*dpp/2, rows*dpp/2) * A.scale(dpp, -dpp)\n+ west, south, east, north = -cols*dpp/2, -rows*dpp/2, cols*dpp/2, rows*dpp/2\n+ src_transform = transform.from_bounds(west, south, east, north, cols, rows)\n src_crs = {'init': 'EPSG:4326'}\n source = numpy.ones(src_shape, numpy.uint8)*255\n \n # Prepare to reproject this rasters to a 1024 x 1024 dataset in\n- # Web Mercator (EPSG:3857) with origin at -8928592, 2999585.\n+ # Web Mercator (EPSG:3857) with origin at -237481.5, 237536.4.\n dst_shape = (1024, 1024)\n- dst_transform = A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)\n- dst_transform = dst_transform.to_gdal()\n+ dst_transform = transform.from_origin(-237481.5, 237536.4, 425.0, 425.0)\n dst_crs = {'init': 'EPSG:3857'}\n destination = numpy.zeros(dst_shape, numpy.uint8)\n \n@@ -59,4 +57,3 @@\n dst.write_band(1, destination)\n \n info = subprocess.call(['open', tiffname])\n-\ndiff --git a/rasterio/transform.py b/rasterio/transform.py\n--- a/rasterio/transform.py\n+++ b/rasterio/transform.py\n@@ -23,3 +23,18 @@\n else:\n transform = Affine(*transform)\n return transform\n+\n+\n+def from_origin(west, north, xsize, ysize):\n+ \"\"\"Return an Affine transformation for a georeferenced raster given\n+ the coordinates of its upper left corner `west`, `north` and pixel\n+ sizes `xsize`, `ysize`.\"\"\"\n+ return Affine.translation(west, north) * Affine.scale(xsize, -ysize)\n+\n+\n+def from_bounds(west, south, east, north, width, height):\n+ \"\"\"Return an Affine transformation for a georeferenced raster given\n+ its bounds `west`, `south`, `east`, `north` and its `width` and\n+ `height` in number of pixels.\"\"\"\n+ return Affine.translation(west, north) * Affine.scale(\n+ (east - west)/width, (south - north)/height)\n", "issue": "Reprojection Example/Documentation\nI was having some trouble following the [reprojection](https://github.com/mapbox/rasterio/blob/master/examples/reproject.py) example; the Affine parameters for `dst_transform` aren't referenced anywhere before they are applied:\n\nhttps://github.com/mapbox/rasterio/blob/master/examples/reproject.py#L29\n\n", "before_files": [{"content": "import warnings\n\nfrom affine import Affine\n\nIDENTITY = Affine.identity()\n\n\ndef tastes_like_gdal(seq):\n \"\"\"Return True if `seq` matches the GDAL geotransform pattern.\"\"\"\n return seq[2] == seq[4] == 0.0 and seq[1] > 0 and seq[5] < 0\n\n\ndef guard_transform(transform):\n \"\"\"Return an Affine transformation instance\"\"\"\n if not isinstance(transform, Affine):\n if tastes_like_gdal(transform):\n warnings.warn(\n \"GDAL-style transforms are deprecated and will not \"\n \"be supported in Rasterio 1.0.\",\n FutureWarning,\n stacklevel=2)\n transform = Affine.from_gdal(*transform)\n else:\n transform = Affine(*transform)\n return transform\n", "path": "rasterio/transform.py"}, {"content": "import os\nimport shutil\nimport subprocess\nimport tempfile\n\nimport numpy\nimport rasterio\nfrom rasterio import Affine as A\nfrom rasterio.warp import reproject, RESAMPLING\n\ntempdir = '/tmp'\ntiffname = os.path.join(tempdir, 'example.tif')\n\nwith rasterio.drivers():\n\n # Consider a 512 x 512 raster centered on 0 degrees E and 0 degrees N\n # with each pixel covering 15\".\n rows, cols = src_shape = (512, 512)\n dpp = 1.0/240 # decimal degrees per pixel\n # The following is equivalent to \n # A(dpp, 0, -cols*dpp/2, 0, -dpp, rows*dpp/2).\n src_transform = A.translation(-cols*dpp/2, rows*dpp/2) * A.scale(dpp, -dpp)\n src_crs = {'init': 'EPSG:4326'}\n source = numpy.ones(src_shape, numpy.uint8)*255\n\n # Prepare to reproject this rasters to a 1024 x 1024 dataset in\n # Web Mercator (EPSG:3857) with origin at -8928592, 2999585.\n dst_shape = (1024, 1024)\n dst_transform = A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)\n dst_transform = dst_transform.to_gdal()\n dst_crs = {'init': 'EPSG:3857'}\n destination = numpy.zeros(dst_shape, numpy.uint8)\n\n reproject(\n source, \n destination, \n src_transform=src_transform,\n src_crs=src_crs,\n dst_transform=dst_transform,\n dst_crs=dst_crs,\n resampling=RESAMPLING.nearest)\n\n # Assert that the destination is only partly filled.\n assert destination.any()\n assert not destination.all()\n\n # Write it out to a file.\n with rasterio.open(\n tiffname, \n 'w',\n driver='GTiff',\n width=dst_shape[1],\n height=dst_shape[0],\n count=1,\n dtype=numpy.uint8,\n nodata=0,\n transform=dst_transform,\n crs=dst_crs) as dst:\n dst.write_band(1, destination)\n\ninfo = subprocess.call(['open', tiffname])\n\n", "path": "examples/reproject.py"}], "after_files": [{"content": "import warnings\n\nfrom affine import Affine\n\nIDENTITY = Affine.identity()\n\n\ndef tastes_like_gdal(seq):\n \"\"\"Return True if `seq` matches the GDAL geotransform pattern.\"\"\"\n return seq[2] == seq[4] == 0.0 and seq[1] > 0 and seq[5] < 0\n\n\ndef guard_transform(transform):\n \"\"\"Return an Affine transformation instance\"\"\"\n if not isinstance(transform, Affine):\n if tastes_like_gdal(transform):\n warnings.warn(\n \"GDAL-style transforms are deprecated and will not \"\n \"be supported in Rasterio 1.0.\",\n FutureWarning,\n stacklevel=2)\n transform = Affine.from_gdal(*transform)\n else:\n transform = Affine(*transform)\n return transform\n\n\ndef from_origin(west, north, xsize, ysize):\n \"\"\"Return an Affine transformation for a georeferenced raster given\n the coordinates of its upper left corner `west`, `north` and pixel\n sizes `xsize`, `ysize`.\"\"\"\n return Affine.translation(west, north) * Affine.scale(xsize, -ysize)\n\n\ndef from_bounds(west, south, east, north, width, height):\n \"\"\"Return an Affine transformation for a georeferenced raster given\n its bounds `west`, `south`, `east`, `north` and its `width` and\n `height` in number of pixels.\"\"\"\n return Affine.translation(west, north) * Affine.scale(\n (east - west)/width, (south - north)/height)\n", "path": "rasterio/transform.py"}, {"content": "import os\nimport shutil\nimport subprocess\nimport tempfile\n\nimport numpy\nimport rasterio\nfrom rasterio import transform\nfrom rasterio.warp import reproject, RESAMPLING\n\ntempdir = '/tmp'\ntiffname = os.path.join(tempdir, 'example.tif')\n\nwith rasterio.drivers():\n\n # Consider a 512 x 512 raster centered on 0 degrees E and 0 degrees N\n # with each pixel covering 15\".\n rows, cols = src_shape = (512, 512)\n dpp = 1.0/240 # decimal degrees per pixel\n west, south, east, north = -cols*dpp/2, -rows*dpp/2, cols*dpp/2, rows*dpp/2\n src_transform = transform.from_bounds(west, south, east, north, cols, rows)\n src_crs = {'init': 'EPSG:4326'}\n source = numpy.ones(src_shape, numpy.uint8)*255\n\n # Prepare to reproject this rasters to a 1024 x 1024 dataset in\n # Web Mercator (EPSG:3857) with origin at -237481.5, 237536.4.\n dst_shape = (1024, 1024)\n dst_transform = transform.from_origin(-237481.5, 237536.4, 425.0, 425.0)\n dst_crs = {'init': 'EPSG:3857'}\n destination = numpy.zeros(dst_shape, numpy.uint8)\n\n reproject(\n source, \n destination, \n src_transform=src_transform,\n src_crs=src_crs,\n dst_transform=dst_transform,\n dst_crs=dst_crs,\n resampling=RESAMPLING.nearest)\n\n # Assert that the destination is only partly filled.\n assert destination.any()\n assert not destination.all()\n\n # Write it out to a file.\n with rasterio.open(\n tiffname, \n 'w',\n driver='GTiff',\n width=dst_shape[1],\n height=dst_shape[0],\n count=1,\n dtype=numpy.uint8,\n nodata=0,\n transform=dst_transform,\n crs=dst_crs) as dst:\n dst.write_band(1, destination)\n\ninfo = subprocess.call(['open', tiffname])\n", "path": "examples/reproject.py"}]}
| 1,277 | 834 |
gh_patches_debug_1604
|
rasdani/github-patches
|
git_diff
|
swcarpentry__python-novice-inflammation-946
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Code provided for students contain python code not compatible with python 3
At least one file in the `code` directory, e.g., `gen_inflammation.py` fails when running it with python 3. The [problem is the "division" not giving an integer](https://github.com/swcarpentry/python-novice-inflammation/blob/11643f14d31726f2f60873c4ca1230fff0bbf108/code/gen_inflammation.py#L19). It needs to be changed to
```diff
- upper / 4
+ upper // 4
```
This was spotted by a student trying to check their installation and running different files.
Other files may have similar errors. I'd suggest running and testing via CI everything we provide to the students.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `code/gen_inflammation.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 Generate pseudo-random patient inflammation data for use in Python lessons.
5 """
6
7 import random
8
9 n_patients = 60
10 n_days = 40
11 n_range = 20
12
13 middle = n_days / 2
14
15 for p in range(n_patients):
16 vals = []
17 for d in range(n_days):
18 upper = max(n_range - abs(d - middle), 0)
19 vals.append(random.randint(upper/4, upper))
20 print(','.join([str(v) for v in vals]))
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/code/gen_inflammation.py b/code/gen_inflammation.py
--- a/code/gen_inflammation.py
+++ b/code/gen_inflammation.py
@@ -16,5 +16,5 @@
vals = []
for d in range(n_days):
upper = max(n_range - abs(d - middle), 0)
- vals.append(random.randint(upper/4, upper))
+ vals.append(random.randint(upper//4, upper))
print(','.join([str(v) for v in vals]))
|
{"golden_diff": "diff --git a/code/gen_inflammation.py b/code/gen_inflammation.py\n--- a/code/gen_inflammation.py\n+++ b/code/gen_inflammation.py\n@@ -16,5 +16,5 @@\n vals = []\n for d in range(n_days):\n upper = max(n_range - abs(d - middle), 0)\n- vals.append(random.randint(upper/4, upper))\n+ vals.append(random.randint(upper//4, upper))\n print(','.join([str(v) for v in vals]))\n", "issue": "Code provided for students contain python code not compatible with python 3\nAt least one file in the `code` directory, e.g., `gen_inflammation.py` fails when running it with python 3. The [problem is the \"division\" not giving an integer](https://github.com/swcarpentry/python-novice-inflammation/blob/11643f14d31726f2f60873c4ca1230fff0bbf108/code/gen_inflammation.py#L19). It needs to be changed to\r\n```diff\r\n- upper / 4\r\n+ upper // 4\r\n```\r\n\r\nThis was spotted by a student trying to check their installation and running different files.\r\nOther files may have similar errors. I'd suggest running and testing via CI everything we provide to the students.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nGenerate pseudo-random patient inflammation data for use in Python lessons.\n\"\"\"\n\nimport random\n\nn_patients = 60\nn_days = 40\nn_range = 20\n\nmiddle = n_days / 2\n\nfor p in range(n_patients):\n vals = []\n for d in range(n_days):\n upper = max(n_range - abs(d - middle), 0)\n vals.append(random.randint(upper/4, upper))\n print(','.join([str(v) for v in vals]))\n", "path": "code/gen_inflammation.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nGenerate pseudo-random patient inflammation data for use in Python lessons.\n\"\"\"\n\nimport random\n\nn_patients = 60\nn_days = 40\nn_range = 20\n\nmiddle = n_days / 2\n\nfor p in range(n_patients):\n vals = []\n for d in range(n_days):\n upper = max(n_range - abs(d - middle), 0)\n vals.append(random.randint(upper//4, upper))\n print(','.join([str(v) for v in vals]))\n", "path": "code/gen_inflammation.py"}]}
| 598 | 115 |
gh_patches_debug_39179
|
rasdani/github-patches
|
git_diff
|
GeotrekCE__Geotrek-admin-3495
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SensitiveAreas details page does not show nearby Sites/Courses (and the other way around)
To reproduce : create a sensitive area overlapping a site or course and visit the detail page of the sensitive area. The site or course does not appear in the list of nearby objects. (visit the site or course detail page, the sensitive area does not appear in the list either).

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/sensitivity/models.py`
Content:
```
1 """
2 Sensitivity models
3 """
4
5 import datetime
6 import simplekml
7
8 from django.conf import settings
9 from django.contrib.gis.db import models
10 from django.contrib.gis.geos import GEOSGeometry, Polygon
11 from django.utils.translation import pgettext_lazy, gettext_lazy as _
12
13 from mapentity.serializers import plain_text
14 from geotrek.authent.models import StructureRelated
15 from geotrek.common.mixins.models import (OptionalPictogramMixin, NoDeleteMixin, TimeStampedModelMixin,
16 AddPropertyMixin, GeotrekMapEntityMixin, get_uuid_duplication)
17 from geotrek.common.utils import intersecting, classproperty
18 from geotrek.sensitivity.managers import SensitiveAreaManager
19 from geotrek.core.models import simplify_coords
20
21
22 class Rule(TimeStampedModelMixin, OptionalPictogramMixin):
23 code = models.CharField(verbose_name=_("Code"), max_length=50, unique=True, blank=True, null=True)
24 name = models.CharField(verbose_name=_("Name"), max_length=128, unique=True)
25 description = models.TextField(verbose_name=_("Description"), blank=True)
26 url = models.URLField(blank=True, verbose_name="URL")
27
28 class Meta:
29 verbose_name = _("Rule")
30 verbose_name_plural = _("Rules")
31 ordering = ['name']
32
33 def __str__(self):
34 return self.name
35
36
37 class SportPractice(TimeStampedModelMixin, models.Model):
38 name = models.CharField(max_length=250, verbose_name=_("Name"))
39
40 class Meta:
41 ordering = ['name']
42 verbose_name = _("Sport practice")
43 verbose_name_plural = _("Sport practices")
44
45 def __str__(self):
46 return self.name
47
48
49 class Species(TimeStampedModelMixin, OptionalPictogramMixin):
50 SPECIES = 1
51 REGULATORY = 2
52
53 name = models.CharField(max_length=250, verbose_name=_("Name"))
54 # TODO: we should replace these 12 fields by a unique JSONField
55 period01 = models.BooleanField(default=False, verbose_name=_("January"))
56 period02 = models.BooleanField(default=False, verbose_name=_("February"))
57 period03 = models.BooleanField(default=False, verbose_name=_("March"))
58 period04 = models.BooleanField(default=False, verbose_name=_("April"))
59 period05 = models.BooleanField(default=False, verbose_name=_("May"))
60 period06 = models.BooleanField(default=False, verbose_name=_("June"))
61 period07 = models.BooleanField(default=False, verbose_name=_("July"))
62 period08 = models.BooleanField(default=False, verbose_name=_("August"))
63 period09 = models.BooleanField(default=False, verbose_name=_("September"))
64 period10 = models.BooleanField(default=False, verbose_name=_("October"))
65 period11 = models.BooleanField(default=False, verbose_name=_("November"))
66 period12 = models.BooleanField(default=False, verbose_name=_("Decembre"))
67 practices = models.ManyToManyField(SportPractice, verbose_name=_("Sport practices"))
68 url = models.URLField(blank=True, verbose_name="URL")
69 radius = models.IntegerField(blank=True, null=True, verbose_name=_("Bubble radius"), help_text=_("meters"))
70 category = models.IntegerField(verbose_name=_("Category"), editable=False, default=SPECIES,
71 choices=((SPECIES, pgettext_lazy("Singular", "Species")),
72 (REGULATORY, _("Regulatory"))))
73 eid = models.CharField(verbose_name=_("External id"), max_length=1024, blank=True, null=True)
74
75 class Meta:
76 ordering = ['name']
77 verbose_name = pgettext_lazy("Singular", "Species")
78 verbose_name_plural = _("Species")
79
80 def __str__(self):
81 return self.name
82
83 def pretty_period(self):
84 return ", ".join([str(self._meta.get_field('period{:02}'.format(p)).verbose_name)
85 for p in range(1, 13)
86 if getattr(self, 'period{:02}'.format(p))])
87
88 def pretty_practices(self):
89 return ", ".join([str(practice) for practice in self.practices.all()])
90
91
92 class SensitiveArea(GeotrekMapEntityMixin, StructureRelated, TimeStampedModelMixin, NoDeleteMixin,
93 AddPropertyMixin):
94 geom = models.GeometryField(srid=settings.SRID)
95 geom_buffered = models.GeometryField(srid=settings.SRID, editable=False)
96 species = models.ForeignKey(Species, verbose_name=_("Species or regulatory area"), on_delete=models.PROTECT)
97 published = models.BooleanField(verbose_name=_("Published"), default=False, help_text=_("Visible on Geotrek-rando"))
98 publication_date = models.DateField(verbose_name=_("Publication date"), null=True, blank=True, editable=False)
99 description = models.TextField(verbose_name=_("Description"), blank=True)
100 contact = models.TextField(verbose_name=_("Contact"), blank=True)
101 eid = models.CharField(verbose_name=_("External id"), max_length=1024, blank=True, null=True)
102 provider = models.CharField(verbose_name=_("Provider"), db_index=True, max_length=1024, blank=True)
103 rules = models.ManyToManyField(Rule, verbose_name=_("Rules"), blank=True)
104
105 objects = SensitiveAreaManager()
106
107 elements_duplication = {
108 "attachments": {"uuid": get_uuid_duplication}
109 }
110
111 class Meta:
112 verbose_name = _("Sensitive area")
113 verbose_name_plural = _("Sensitive areas")
114 permissions = (
115 ("import_sensitivearea", "Can import Sensitive area"),
116 )
117
118 def __str__(self):
119 return self.species.name
120
121 @property
122 def radius(self):
123 if self.species.radius is None:
124 return settings.SENSITIVITY_DEFAULT_RADIUS
125 return self.species.radius
126
127 @classproperty
128 def radius_verbose_name(cls):
129 return _("Radius")
130
131 @property
132 def category_display(self):
133 return self.species.get_category_display()
134
135 @classproperty
136 def category_verbose_name(cls):
137 return _("Category")
138
139 def reload(self):
140 """
141 Reload into instance all computed attributes in triggers.
142 """
143 if self.pk:
144 # Update computed values
145 fromdb = self.__class__.objects.get(pk=self.pk)
146 self.geom_buffered = fromdb.geom_buffered
147 return self
148
149 def save(self, *args, **kwargs):
150 if self.publication_date is None and self.published:
151 self.publication_date = datetime.date.today()
152 if self.publication_date is not None and not self.published:
153 self.publication_date = None
154 super().save(*args, **kwargs)
155 self.reload()
156
157 @property
158 def any_published(self):
159 return self.published
160
161 @property
162 def published_status(self):
163 """Returns the publication status by language.
164 """
165 status = []
166 for language in settings.MAPENTITY_CONFIG['TRANSLATED_LANGUAGES']:
167 status.append({
168 'lang': language[0],
169 'language': language[1],
170 'status': self.published
171 })
172 return status
173
174 @property
175 def published_langs(self):
176 """Returns languages in which the object is published.
177 """
178 if self.published:
179 return [language[0] for language in settings.MAPENTITY_CONFIG['TRANSLATED_LANGUAGES']]
180 else:
181 return []
182
183 @property
184 def species_display(self):
185 s = '<a data-pk="%s" href="%s" title="%s">%s</a>' % (self.pk,
186 self.get_detail_url(),
187 self.species.name,
188 self.species.name)
189 if self.published:
190 s = '<span class="badge badge-success" title="%s">☆</span> ' % _("Published") + s
191 return s
192
193 @property
194 def extent(self):
195 return self.geom.transform(settings.API_SRID, clone=True).extent if self.geom else None
196
197 def kml(self):
198 """Exports sensitivearea into KML format"""
199 kml = simplekml.Kml()
200 geom = self.geom
201 if geom.geom_type == 'Point':
202 geom = geom.buffer(self.species.radius or settings.SENSITIVITY_DEFAULT_RADIUS, 4)
203 if self.species.radius:
204 geometry = ()
205 for coords in geom.coords[0]:
206 coords += (self.species.radius, )
207 geometry += (coords, )
208 geom = GEOSGeometry(Polygon(geometry), srid=settings.SRID)
209 geom = geom.transform(4326, clone=True) # KML uses WGS84
210 line = kml.newpolygon(name=self.species.name,
211 description=plain_text(self.description),
212 altitudemode=simplekml.AltitudeMode.relativetoground,
213 outerboundaryis=simplify_coords(geom.coords[0]))
214 line.style.linestyle.color = simplekml.Color.red # Red
215 line.style.linestyle.width = 4 # pixels
216 return kml.kml()
217
218 def is_public(self):
219 return self.published
220
221 @property
222 def pretty_period(self):
223 return self.species.pretty_period()
224 pretty_period_verbose_name = _("Period")
225
226 @property
227 def pretty_practices(self):
228 return self.species.pretty_practices()
229 pretty_practices_verbose_name = _("Practices")
230
231
232 if 'geotrek.core' in settings.INSTALLED_APPS:
233 from geotrek.core.models import Topology
234
235 Topology.add_property('sensitive_areas', lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,
236 field='geom_buffered').select_related('species'),
237 _("Sensitive areas"))
238 Topology.add_property('published_sensitive_areas',
239 lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,
240 field='geom_buffered').filter(published=True),
241 _("Published sensitive areas"))
242
243 if 'geotrek.trekking' in settings.INSTALLED_APPS:
244 from geotrek.trekking import models as trekking_models
245
246 SensitiveArea.add_property('pois', lambda self: intersecting(trekking_models.POI, self, 0), _("POIs"))
247 SensitiveArea.add_property('treks', lambda self: intersecting(trekking_models.Trek, self, 0), _("Treks"))
248 SensitiveArea.add_property('services', lambda self: intersecting(trekking_models.Service, self, 0), _("Services"))
249
250 if 'geotrek.diving' in settings.INSTALLED_APPS:
251 from geotrek.diving.models import Dive
252
253 Dive.add_property('sensitive_areas',
254 lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False, field='geom_buffered'),
255 _("Sensitive areas"))
256 Dive.add_property('published_sensitive_areas',
257 lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,
258 field='geom_buffered').filter(published=True),
259 _("Published sensitive areas"))
260 SensitiveArea.add_property('dives', lambda self: intersecting(Dive, self, 0), _("Dives"))
261 SensitiveArea.add_property('published_dives',
262 lambda self: intersecting(Dive, self, 0).filter(published=True),
263 _("Published dives"))
264
265 if 'geotrek.tourism' in settings.INSTALLED_APPS:
266 from geotrek.tourism import models as tourism_models
267
268 tourism_models.TouristicContent.add_property('sensitive_areas',
269 lambda self: intersecting(SensitiveArea, self, distance=0,
270 ordering=False, field='geom_buffered'),
271 _("Sensitive areas"))
272 tourism_models.TouristicContent.add_property('published_sensitive_areas',
273 lambda self: intersecting(SensitiveArea, self, distance=0,
274 ordering=False,
275 field='geom_buffered').filter(
276 published=True), _("Published sensitive areas"))
277 tourism_models.TouristicEvent.add_property('sensitive_areas',
278 lambda self: intersecting(SensitiveArea, self, distance=0,
279 ordering=False, field='geom_buffered'),
280 _("Sensitive areas"))
281 tourism_models.TouristicEvent.add_property('published_sensitive_areas',
282 lambda self: intersecting(SensitiveArea, self, distance=0,
283 ordering=False, field='geom_buffered').filter(
284 published=True), _("Published sensitive areas"))
285
286 SensitiveArea.add_property('touristic_contents',
287 lambda self: intersecting(tourism_models.TouristicContent, self, 0),
288 _("Touristic contents"))
289 SensitiveArea.add_property('published_touristic_contents',
290 lambda self: intersecting(tourism_models.TouristicContent, self, 0).filter(
291 published=True),
292 _("Published touristic contents"))
293 SensitiveArea.add_property('touristic_events', lambda self: intersecting(tourism_models.TouristicEvent, self, 0),
294 _("Touristic events"))
295 SensitiveArea.add_property('published_touristic_events',
296 lambda self: intersecting(tourism_models.TouristicEvent, self, 0).filter(published=True),
297 _("Published touristic events"))
298
299 SensitiveArea.add_property('sensitive_areas', lambda self: intersecting(SensitiveArea, self, 0), _("Sensitive areas"))
300 SensitiveArea.add_property('published_sensitive_areas',
301 lambda self: intersecting(SensitiveArea, self, 0).filter(published=True),
302 _("Published sensitive areas"))
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/geotrek/sensitivity/models.py b/geotrek/sensitivity/models.py
--- a/geotrek/sensitivity/models.py
+++ b/geotrek/sensitivity/models.py
@@ -228,6 +228,10 @@
return self.species.pretty_practices()
pretty_practices_verbose_name = _("Practices")
+ def distance(self, to_cls):
+ """Distance to associate this site to another class"""
+ return settings.SENSITIVE_AREA_INTERSECTION_MARGIN
+
if 'geotrek.core' in settings.INSTALLED_APPS:
from geotrek.core.models import Topology
@@ -296,6 +300,41 @@
lambda self: intersecting(tourism_models.TouristicEvent, self, 0).filter(published=True),
_("Published touristic events"))
+
+if 'geotrek.outdoor' in settings.INSTALLED_APPS:
+ from geotrek.outdoor import models as outdoor_models
+
+ outdoor_models.Site.add_property('sensitive_areas',
+ lambda self: intersecting(SensitiveArea, self, distance=0,
+ ordering=False, field='geom_buffered'),
+ _("Sensitive areas"))
+ outdoor_models.Site.add_property('published_sensitive_areas',
+ lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,
+ field='geom_buffered').filter(published=True),
+ _("Published sensitive areas"))
+ outdoor_models.Course.add_property('sensitive_areas',
+ lambda self: intersecting(SensitiveArea, self, distance=0,
+ ordering=False, field='geom_buffered'),
+ _("Sensitive areas"))
+ outdoor_models.Course.add_property('published_sensitive_areas',
+ lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,
+ field='geom_buffered').filter(published=True),
+ _("Published sensitive areas"))
+
+ SensitiveArea.add_property('sites',
+ lambda self: intersecting(outdoor_models.Site, self),
+ _("Touristic contents"))
+ SensitiveArea.add_property('published_sites',
+ lambda self: intersecting(outdoor_models.Site, self).filter(published=True),
+ _("Published touristic contents"))
+ SensitiveArea.add_property('courses',
+ lambda self: intersecting(outdoor_models.Course, self),
+ _("Touristic events"))
+ SensitiveArea.add_property('published_courses',
+ lambda self: intersecting(outdoor_models.Course, self).filter(published=True),
+ _("Published touristic events"))
+
+
SensitiveArea.add_property('sensitive_areas', lambda self: intersecting(SensitiveArea, self, 0), _("Sensitive areas"))
SensitiveArea.add_property('published_sensitive_areas',
lambda self: intersecting(SensitiveArea, self, 0).filter(published=True),
|
{"golden_diff": "diff --git a/geotrek/sensitivity/models.py b/geotrek/sensitivity/models.py\n--- a/geotrek/sensitivity/models.py\n+++ b/geotrek/sensitivity/models.py\n@@ -228,6 +228,10 @@\n return self.species.pretty_practices()\n pretty_practices_verbose_name = _(\"Practices\")\n \n+ def distance(self, to_cls):\n+ \"\"\"Distance to associate this site to another class\"\"\"\n+ return settings.SENSITIVE_AREA_INTERSECTION_MARGIN\n+\n \n if 'geotrek.core' in settings.INSTALLED_APPS:\n from geotrek.core.models import Topology\n@@ -296,6 +300,41 @@\n lambda self: intersecting(tourism_models.TouristicEvent, self, 0).filter(published=True),\n _(\"Published touristic events\"))\n \n+\n+if 'geotrek.outdoor' in settings.INSTALLED_APPS:\n+ from geotrek.outdoor import models as outdoor_models\n+\n+ outdoor_models.Site.add_property('sensitive_areas',\n+ lambda self: intersecting(SensitiveArea, self, distance=0,\n+ ordering=False, field='geom_buffered'),\n+ _(\"Sensitive areas\"))\n+ outdoor_models.Site.add_property('published_sensitive_areas',\n+ lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n+ field='geom_buffered').filter(published=True),\n+ _(\"Published sensitive areas\"))\n+ outdoor_models.Course.add_property('sensitive_areas',\n+ lambda self: intersecting(SensitiveArea, self, distance=0,\n+ ordering=False, field='geom_buffered'),\n+ _(\"Sensitive areas\"))\n+ outdoor_models.Course.add_property('published_sensitive_areas',\n+ lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n+ field='geom_buffered').filter(published=True),\n+ _(\"Published sensitive areas\"))\n+\n+ SensitiveArea.add_property('sites',\n+ lambda self: intersecting(outdoor_models.Site, self),\n+ _(\"Touristic contents\"))\n+ SensitiveArea.add_property('published_sites',\n+ lambda self: intersecting(outdoor_models.Site, self).filter(published=True),\n+ _(\"Published touristic contents\"))\n+ SensitiveArea.add_property('courses',\n+ lambda self: intersecting(outdoor_models.Course, self),\n+ _(\"Touristic events\"))\n+ SensitiveArea.add_property('published_courses',\n+ lambda self: intersecting(outdoor_models.Course, self).filter(published=True),\n+ _(\"Published touristic events\"))\n+\n+\n SensitiveArea.add_property('sensitive_areas', lambda self: intersecting(SensitiveArea, self, 0), _(\"Sensitive areas\"))\n SensitiveArea.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, 0).filter(published=True),\n", "issue": "SensitiveAreas details page does not show nearby Sites/Courses (and the other way around)\nTo reproduce : create a sensitive area overlapping a site or course and visit the detail page of the sensitive area. The site or course does not appear in the list of nearby objects. (visit the site or course detail page, the sensitive area does not appear in the list either).\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n Sensitivity models\n\"\"\"\n\nimport datetime\nimport simplekml\n\nfrom django.conf import settings\nfrom django.contrib.gis.db import models\nfrom django.contrib.gis.geos import GEOSGeometry, Polygon\nfrom django.utils.translation import pgettext_lazy, gettext_lazy as _\n\nfrom mapentity.serializers import plain_text\nfrom geotrek.authent.models import StructureRelated\nfrom geotrek.common.mixins.models import (OptionalPictogramMixin, NoDeleteMixin, TimeStampedModelMixin,\n AddPropertyMixin, GeotrekMapEntityMixin, get_uuid_duplication)\nfrom geotrek.common.utils import intersecting, classproperty\nfrom geotrek.sensitivity.managers import SensitiveAreaManager\nfrom geotrek.core.models import simplify_coords\n\n\nclass Rule(TimeStampedModelMixin, OptionalPictogramMixin):\n code = models.CharField(verbose_name=_(\"Code\"), max_length=50, unique=True, blank=True, null=True)\n name = models.CharField(verbose_name=_(\"Name\"), max_length=128, unique=True)\n description = models.TextField(verbose_name=_(\"Description\"), blank=True)\n url = models.URLField(blank=True, verbose_name=\"URL\")\n\n class Meta:\n verbose_name = _(\"Rule\")\n verbose_name_plural = _(\"Rules\")\n ordering = ['name']\n\n def __str__(self):\n return self.name\n\n\nclass SportPractice(TimeStampedModelMixin, models.Model):\n name = models.CharField(max_length=250, verbose_name=_(\"Name\"))\n\n class Meta:\n ordering = ['name']\n verbose_name = _(\"Sport practice\")\n verbose_name_plural = _(\"Sport practices\")\n\n def __str__(self):\n return self.name\n\n\nclass Species(TimeStampedModelMixin, OptionalPictogramMixin):\n SPECIES = 1\n REGULATORY = 2\n\n name = models.CharField(max_length=250, verbose_name=_(\"Name\"))\n # TODO: we should replace these 12 fields by a unique JSONField\n period01 = models.BooleanField(default=False, verbose_name=_(\"January\"))\n period02 = models.BooleanField(default=False, verbose_name=_(\"February\"))\n period03 = models.BooleanField(default=False, verbose_name=_(\"March\"))\n period04 = models.BooleanField(default=False, verbose_name=_(\"April\"))\n period05 = models.BooleanField(default=False, verbose_name=_(\"May\"))\n period06 = models.BooleanField(default=False, verbose_name=_(\"June\"))\n period07 = models.BooleanField(default=False, verbose_name=_(\"July\"))\n period08 = models.BooleanField(default=False, verbose_name=_(\"August\"))\n period09 = models.BooleanField(default=False, verbose_name=_(\"September\"))\n period10 = models.BooleanField(default=False, verbose_name=_(\"October\"))\n period11 = models.BooleanField(default=False, verbose_name=_(\"November\"))\n period12 = models.BooleanField(default=False, verbose_name=_(\"Decembre\"))\n practices = models.ManyToManyField(SportPractice, verbose_name=_(\"Sport practices\"))\n url = models.URLField(blank=True, verbose_name=\"URL\")\n radius = models.IntegerField(blank=True, null=True, verbose_name=_(\"Bubble radius\"), help_text=_(\"meters\"))\n category = models.IntegerField(verbose_name=_(\"Category\"), editable=False, default=SPECIES,\n choices=((SPECIES, pgettext_lazy(\"Singular\", \"Species\")),\n (REGULATORY, _(\"Regulatory\"))))\n eid = models.CharField(verbose_name=_(\"External id\"), max_length=1024, blank=True, null=True)\n\n class Meta:\n ordering = ['name']\n verbose_name = pgettext_lazy(\"Singular\", \"Species\")\n verbose_name_plural = _(\"Species\")\n\n def __str__(self):\n return self.name\n\n def pretty_period(self):\n return \", \".join([str(self._meta.get_field('period{:02}'.format(p)).verbose_name)\n for p in range(1, 13)\n if getattr(self, 'period{:02}'.format(p))])\n\n def pretty_practices(self):\n return \", \".join([str(practice) for practice in self.practices.all()])\n\n\nclass SensitiveArea(GeotrekMapEntityMixin, StructureRelated, TimeStampedModelMixin, NoDeleteMixin,\n AddPropertyMixin):\n geom = models.GeometryField(srid=settings.SRID)\n geom_buffered = models.GeometryField(srid=settings.SRID, editable=False)\n species = models.ForeignKey(Species, verbose_name=_(\"Species or regulatory area\"), on_delete=models.PROTECT)\n published = models.BooleanField(verbose_name=_(\"Published\"), default=False, help_text=_(\"Visible on Geotrek-rando\"))\n publication_date = models.DateField(verbose_name=_(\"Publication date\"), null=True, blank=True, editable=False)\n description = models.TextField(verbose_name=_(\"Description\"), blank=True)\n contact = models.TextField(verbose_name=_(\"Contact\"), blank=True)\n eid = models.CharField(verbose_name=_(\"External id\"), max_length=1024, blank=True, null=True)\n provider = models.CharField(verbose_name=_(\"Provider\"), db_index=True, max_length=1024, blank=True)\n rules = models.ManyToManyField(Rule, verbose_name=_(\"Rules\"), blank=True)\n\n objects = SensitiveAreaManager()\n\n elements_duplication = {\n \"attachments\": {\"uuid\": get_uuid_duplication}\n }\n\n class Meta:\n verbose_name = _(\"Sensitive area\")\n verbose_name_plural = _(\"Sensitive areas\")\n permissions = (\n (\"import_sensitivearea\", \"Can import Sensitive area\"),\n )\n\n def __str__(self):\n return self.species.name\n\n @property\n def radius(self):\n if self.species.radius is None:\n return settings.SENSITIVITY_DEFAULT_RADIUS\n return self.species.radius\n\n @classproperty\n def radius_verbose_name(cls):\n return _(\"Radius\")\n\n @property\n def category_display(self):\n return self.species.get_category_display()\n\n @classproperty\n def category_verbose_name(cls):\n return _(\"Category\")\n\n def reload(self):\n \"\"\"\n Reload into instance all computed attributes in triggers.\n \"\"\"\n if self.pk:\n # Update computed values\n fromdb = self.__class__.objects.get(pk=self.pk)\n self.geom_buffered = fromdb.geom_buffered\n return self\n\n def save(self, *args, **kwargs):\n if self.publication_date is None and self.published:\n self.publication_date = datetime.date.today()\n if self.publication_date is not None and not self.published:\n self.publication_date = None\n super().save(*args, **kwargs)\n self.reload()\n\n @property\n def any_published(self):\n return self.published\n\n @property\n def published_status(self):\n \"\"\"Returns the publication status by language.\n \"\"\"\n status = []\n for language in settings.MAPENTITY_CONFIG['TRANSLATED_LANGUAGES']:\n status.append({\n 'lang': language[0],\n 'language': language[1],\n 'status': self.published\n })\n return status\n\n @property\n def published_langs(self):\n \"\"\"Returns languages in which the object is published.\n \"\"\"\n if self.published:\n return [language[0] for language in settings.MAPENTITY_CONFIG['TRANSLATED_LANGUAGES']]\n else:\n return []\n\n @property\n def species_display(self):\n s = '<a data-pk=\"%s\" href=\"%s\" title=\"%s\">%s</a>' % (self.pk,\n self.get_detail_url(),\n self.species.name,\n self.species.name)\n if self.published:\n s = '<span class=\"badge badge-success\" title=\"%s\">☆</span> ' % _(\"Published\") + s\n return s\n\n @property\n def extent(self):\n return self.geom.transform(settings.API_SRID, clone=True).extent if self.geom else None\n\n def kml(self):\n \"\"\"Exports sensitivearea into KML format\"\"\"\n kml = simplekml.Kml()\n geom = self.geom\n if geom.geom_type == 'Point':\n geom = geom.buffer(self.species.radius or settings.SENSITIVITY_DEFAULT_RADIUS, 4)\n if self.species.radius:\n geometry = ()\n for coords in geom.coords[0]:\n coords += (self.species.radius, )\n geometry += (coords, )\n geom = GEOSGeometry(Polygon(geometry), srid=settings.SRID)\n geom = geom.transform(4326, clone=True) # KML uses WGS84\n line = kml.newpolygon(name=self.species.name,\n description=plain_text(self.description),\n altitudemode=simplekml.AltitudeMode.relativetoground,\n outerboundaryis=simplify_coords(geom.coords[0]))\n line.style.linestyle.color = simplekml.Color.red # Red\n line.style.linestyle.width = 4 # pixels\n return kml.kml()\n\n def is_public(self):\n return self.published\n\n @property\n def pretty_period(self):\n return self.species.pretty_period()\n pretty_period_verbose_name = _(\"Period\")\n\n @property\n def pretty_practices(self):\n return self.species.pretty_practices()\n pretty_practices_verbose_name = _(\"Practices\")\n\n\nif 'geotrek.core' in settings.INSTALLED_APPS:\n from geotrek.core.models import Topology\n\n Topology.add_property('sensitive_areas', lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n field='geom_buffered').select_related('species'),\n _(\"Sensitive areas\"))\n Topology.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n field='geom_buffered').filter(published=True),\n _(\"Published sensitive areas\"))\n\nif 'geotrek.trekking' in settings.INSTALLED_APPS:\n from geotrek.trekking import models as trekking_models\n\n SensitiveArea.add_property('pois', lambda self: intersecting(trekking_models.POI, self, 0), _(\"POIs\"))\n SensitiveArea.add_property('treks', lambda self: intersecting(trekking_models.Trek, self, 0), _(\"Treks\"))\n SensitiveArea.add_property('services', lambda self: intersecting(trekking_models.Service, self, 0), _(\"Services\"))\n\nif 'geotrek.diving' in settings.INSTALLED_APPS:\n from geotrek.diving.models import Dive\n\n Dive.add_property('sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False, field='geom_buffered'),\n _(\"Sensitive areas\"))\n Dive.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n field='geom_buffered').filter(published=True),\n _(\"Published sensitive areas\"))\n SensitiveArea.add_property('dives', lambda self: intersecting(Dive, self, 0), _(\"Dives\"))\n SensitiveArea.add_property('published_dives',\n lambda self: intersecting(Dive, self, 0).filter(published=True),\n _(\"Published dives\"))\n\nif 'geotrek.tourism' in settings.INSTALLED_APPS:\n from geotrek.tourism import models as tourism_models\n\n tourism_models.TouristicContent.add_property('sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False, field='geom_buffered'),\n _(\"Sensitive areas\"))\n tourism_models.TouristicContent.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False,\n field='geom_buffered').filter(\n published=True), _(\"Published sensitive areas\"))\n tourism_models.TouristicEvent.add_property('sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False, field='geom_buffered'),\n _(\"Sensitive areas\"))\n tourism_models.TouristicEvent.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False, field='geom_buffered').filter(\n published=True), _(\"Published sensitive areas\"))\n\n SensitiveArea.add_property('touristic_contents',\n lambda self: intersecting(tourism_models.TouristicContent, self, 0),\n _(\"Touristic contents\"))\n SensitiveArea.add_property('published_touristic_contents',\n lambda self: intersecting(tourism_models.TouristicContent, self, 0).filter(\n published=True),\n _(\"Published touristic contents\"))\n SensitiveArea.add_property('touristic_events', lambda self: intersecting(tourism_models.TouristicEvent, self, 0),\n _(\"Touristic events\"))\n SensitiveArea.add_property('published_touristic_events',\n lambda self: intersecting(tourism_models.TouristicEvent, self, 0).filter(published=True),\n _(\"Published touristic events\"))\n\nSensitiveArea.add_property('sensitive_areas', lambda self: intersecting(SensitiveArea, self, 0), _(\"Sensitive areas\"))\nSensitiveArea.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, 0).filter(published=True),\n _(\"Published sensitive areas\"))\n", "path": "geotrek/sensitivity/models.py"}], "after_files": [{"content": "\"\"\"\n Sensitivity models\n\"\"\"\n\nimport datetime\nimport simplekml\n\nfrom django.conf import settings\nfrom django.contrib.gis.db import models\nfrom django.contrib.gis.geos import GEOSGeometry, Polygon\nfrom django.utils.translation import pgettext_lazy, gettext_lazy as _\n\nfrom mapentity.serializers import plain_text\nfrom geotrek.authent.models import StructureRelated\nfrom geotrek.common.mixins.models import (OptionalPictogramMixin, NoDeleteMixin, TimeStampedModelMixin,\n AddPropertyMixin, GeotrekMapEntityMixin, get_uuid_duplication)\nfrom geotrek.common.utils import intersecting, classproperty\nfrom geotrek.sensitivity.managers import SensitiveAreaManager\nfrom geotrek.core.models import simplify_coords\n\n\nclass Rule(TimeStampedModelMixin, OptionalPictogramMixin):\n code = models.CharField(verbose_name=_(\"Code\"), max_length=50, unique=True, blank=True, null=True)\n name = models.CharField(verbose_name=_(\"Name\"), max_length=128, unique=True)\n description = models.TextField(verbose_name=_(\"Description\"), blank=True)\n url = models.URLField(blank=True, verbose_name=\"URL\")\n\n class Meta:\n verbose_name = _(\"Rule\")\n verbose_name_plural = _(\"Rules\")\n ordering = ['name']\n\n def __str__(self):\n return self.name\n\n\nclass SportPractice(TimeStampedModelMixin, models.Model):\n name = models.CharField(max_length=250, verbose_name=_(\"Name\"))\n\n class Meta:\n ordering = ['name']\n verbose_name = _(\"Sport practice\")\n verbose_name_plural = _(\"Sport practices\")\n\n def __str__(self):\n return self.name\n\n\nclass Species(TimeStampedModelMixin, OptionalPictogramMixin):\n SPECIES = 1\n REGULATORY = 2\n\n name = models.CharField(max_length=250, verbose_name=_(\"Name\"))\n # TODO: we should replace these 12 fields by a unique JSONField\n period01 = models.BooleanField(default=False, verbose_name=_(\"January\"))\n period02 = models.BooleanField(default=False, verbose_name=_(\"February\"))\n period03 = models.BooleanField(default=False, verbose_name=_(\"March\"))\n period04 = models.BooleanField(default=False, verbose_name=_(\"April\"))\n period05 = models.BooleanField(default=False, verbose_name=_(\"May\"))\n period06 = models.BooleanField(default=False, verbose_name=_(\"June\"))\n period07 = models.BooleanField(default=False, verbose_name=_(\"July\"))\n period08 = models.BooleanField(default=False, verbose_name=_(\"August\"))\n period09 = models.BooleanField(default=False, verbose_name=_(\"September\"))\n period10 = models.BooleanField(default=False, verbose_name=_(\"October\"))\n period11 = models.BooleanField(default=False, verbose_name=_(\"November\"))\n period12 = models.BooleanField(default=False, verbose_name=_(\"Decembre\"))\n practices = models.ManyToManyField(SportPractice, verbose_name=_(\"Sport practices\"))\n url = models.URLField(blank=True, verbose_name=\"URL\")\n radius = models.IntegerField(blank=True, null=True, verbose_name=_(\"Bubble radius\"), help_text=_(\"meters\"))\n category = models.IntegerField(verbose_name=_(\"Category\"), editable=False, default=SPECIES,\n choices=((SPECIES, pgettext_lazy(\"Singular\", \"Species\")),\n (REGULATORY, _(\"Regulatory\"))))\n eid = models.CharField(verbose_name=_(\"External id\"), max_length=1024, blank=True, null=True)\n\n class Meta:\n ordering = ['name']\n verbose_name = pgettext_lazy(\"Singular\", \"Species\")\n verbose_name_plural = _(\"Species\")\n\n def __str__(self):\n return self.name\n\n def pretty_period(self):\n return \", \".join([str(self._meta.get_field('period{:02}'.format(p)).verbose_name)\n for p in range(1, 13)\n if getattr(self, 'period{:02}'.format(p))])\n\n def pretty_practices(self):\n return \", \".join([str(practice) for practice in self.practices.all()])\n\n\nclass SensitiveArea(GeotrekMapEntityMixin, StructureRelated, TimeStampedModelMixin, NoDeleteMixin,\n AddPropertyMixin):\n geom = models.GeometryField(srid=settings.SRID)\n geom_buffered = models.GeometryField(srid=settings.SRID, editable=False)\n species = models.ForeignKey(Species, verbose_name=_(\"Species or regulatory area\"), on_delete=models.PROTECT)\n published = models.BooleanField(verbose_name=_(\"Published\"), default=False, help_text=_(\"Visible on Geotrek-rando\"))\n publication_date = models.DateField(verbose_name=_(\"Publication date\"), null=True, blank=True, editable=False)\n description = models.TextField(verbose_name=_(\"Description\"), blank=True)\n contact = models.TextField(verbose_name=_(\"Contact\"), blank=True)\n eid = models.CharField(verbose_name=_(\"External id\"), max_length=1024, blank=True, null=True)\n provider = models.CharField(verbose_name=_(\"Provider\"), db_index=True, max_length=1024, blank=True)\n rules = models.ManyToManyField(Rule, verbose_name=_(\"Rules\"), blank=True)\n\n objects = SensitiveAreaManager()\n\n elements_duplication = {\n \"attachments\": {\"uuid\": get_uuid_duplication}\n }\n\n class Meta:\n verbose_name = _(\"Sensitive area\")\n verbose_name_plural = _(\"Sensitive areas\")\n permissions = (\n (\"import_sensitivearea\", \"Can import Sensitive area\"),\n )\n\n def __str__(self):\n return self.species.name\n\n @property\n def radius(self):\n if self.species.radius is None:\n return settings.SENSITIVITY_DEFAULT_RADIUS\n return self.species.radius\n\n @classproperty\n def radius_verbose_name(cls):\n return _(\"Radius\")\n\n @property\n def category_display(self):\n return self.species.get_category_display()\n\n @classproperty\n def category_verbose_name(cls):\n return _(\"Category\")\n\n def reload(self):\n \"\"\"\n Reload into instance all computed attributes in triggers.\n \"\"\"\n if self.pk:\n # Update computed values\n fromdb = self.__class__.objects.get(pk=self.pk)\n self.geom_buffered = fromdb.geom_buffered\n return self\n\n def save(self, *args, **kwargs):\n if self.publication_date is None and self.published:\n self.publication_date = datetime.date.today()\n if self.publication_date is not None and not self.published:\n self.publication_date = None\n super().save(*args, **kwargs)\n self.reload()\n\n @property\n def any_published(self):\n return self.published\n\n @property\n def published_status(self):\n \"\"\"Returns the publication status by language.\n \"\"\"\n status = []\n for language in settings.MAPENTITY_CONFIG['TRANSLATED_LANGUAGES']:\n status.append({\n 'lang': language[0],\n 'language': language[1],\n 'status': self.published\n })\n return status\n\n @property\n def published_langs(self):\n \"\"\"Returns languages in which the object is published.\n \"\"\"\n if self.published:\n return [language[0] for language in settings.MAPENTITY_CONFIG['TRANSLATED_LANGUAGES']]\n else:\n return []\n\n @property\n def species_display(self):\n s = '<a data-pk=\"%s\" href=\"%s\" title=\"%s\">%s</a>' % (self.pk,\n self.get_detail_url(),\n self.species.name,\n self.species.name)\n if self.published:\n s = '<span class=\"badge badge-success\" title=\"%s\">☆</span> ' % _(\"Published\") + s\n return s\n\n @property\n def extent(self):\n return self.geom.transform(settings.API_SRID, clone=True).extent if self.geom else None\n\n def kml(self):\n \"\"\"Exports sensitivearea into KML format\"\"\"\n kml = simplekml.Kml()\n geom = self.geom\n if geom.geom_type == 'Point':\n geom = geom.buffer(self.species.radius or settings.SENSITIVITY_DEFAULT_RADIUS, 4)\n if self.species.radius:\n geometry = ()\n for coords in geom.coords[0]:\n coords += (self.species.radius, )\n geometry += (coords, )\n geom = GEOSGeometry(Polygon(geometry), srid=settings.SRID)\n geom = geom.transform(4326, clone=True) # KML uses WGS84\n line = kml.newpolygon(name=self.species.name,\n description=plain_text(self.description),\n altitudemode=simplekml.AltitudeMode.relativetoground,\n outerboundaryis=simplify_coords(geom.coords[0]))\n line.style.linestyle.color = simplekml.Color.red # Red\n line.style.linestyle.width = 4 # pixels\n return kml.kml()\n\n def is_public(self):\n return self.published\n\n @property\n def pretty_period(self):\n return self.species.pretty_period()\n pretty_period_verbose_name = _(\"Period\")\n\n @property\n def pretty_practices(self):\n return self.species.pretty_practices()\n pretty_practices_verbose_name = _(\"Practices\")\n\n def distance(self, to_cls):\n \"\"\"Distance to associate this site to another class\"\"\"\n return settings.SENSITIVE_AREA_INTERSECTION_MARGIN\n\n\nif 'geotrek.core' in settings.INSTALLED_APPS:\n from geotrek.core.models import Topology\n\n Topology.add_property('sensitive_areas', lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n field='geom_buffered').select_related('species'),\n _(\"Sensitive areas\"))\n Topology.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n field='geom_buffered').filter(published=True),\n _(\"Published sensitive areas\"))\n\nif 'geotrek.trekking' in settings.INSTALLED_APPS:\n from geotrek.trekking import models as trekking_models\n\n SensitiveArea.add_property('pois', lambda self: intersecting(trekking_models.POI, self, 0), _(\"POIs\"))\n SensitiveArea.add_property('treks', lambda self: intersecting(trekking_models.Trek, self, 0), _(\"Treks\"))\n SensitiveArea.add_property('services', lambda self: intersecting(trekking_models.Service, self, 0), _(\"Services\"))\n\nif 'geotrek.diving' in settings.INSTALLED_APPS:\n from geotrek.diving.models import Dive\n\n Dive.add_property('sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False, field='geom_buffered'),\n _(\"Sensitive areas\"))\n Dive.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n field='geom_buffered').filter(published=True),\n _(\"Published sensitive areas\"))\n SensitiveArea.add_property('dives', lambda self: intersecting(Dive, self, 0), _(\"Dives\"))\n SensitiveArea.add_property('published_dives',\n lambda self: intersecting(Dive, self, 0).filter(published=True),\n _(\"Published dives\"))\n\nif 'geotrek.tourism' in settings.INSTALLED_APPS:\n from geotrek.tourism import models as tourism_models\n\n tourism_models.TouristicContent.add_property('sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False, field='geom_buffered'),\n _(\"Sensitive areas\"))\n tourism_models.TouristicContent.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False,\n field='geom_buffered').filter(\n published=True), _(\"Published sensitive areas\"))\n tourism_models.TouristicEvent.add_property('sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False, field='geom_buffered'),\n _(\"Sensitive areas\"))\n tourism_models.TouristicEvent.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False, field='geom_buffered').filter(\n published=True), _(\"Published sensitive areas\"))\n\n SensitiveArea.add_property('touristic_contents',\n lambda self: intersecting(tourism_models.TouristicContent, self, 0),\n _(\"Touristic contents\"))\n SensitiveArea.add_property('published_touristic_contents',\n lambda self: intersecting(tourism_models.TouristicContent, self, 0).filter(\n published=True),\n _(\"Published touristic contents\"))\n SensitiveArea.add_property('touristic_events', lambda self: intersecting(tourism_models.TouristicEvent, self, 0),\n _(\"Touristic events\"))\n SensitiveArea.add_property('published_touristic_events',\n lambda self: intersecting(tourism_models.TouristicEvent, self, 0).filter(published=True),\n _(\"Published touristic events\"))\n\n\nif 'geotrek.outdoor' in settings.INSTALLED_APPS:\n from geotrek.outdoor import models as outdoor_models\n\n outdoor_models.Site.add_property('sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False, field='geom_buffered'),\n _(\"Sensitive areas\"))\n outdoor_models.Site.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n field='geom_buffered').filter(published=True),\n _(\"Published sensitive areas\"))\n outdoor_models.Course.add_property('sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0,\n ordering=False, field='geom_buffered'),\n _(\"Sensitive areas\"))\n outdoor_models.Course.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, distance=0, ordering=False,\n field='geom_buffered').filter(published=True),\n _(\"Published sensitive areas\"))\n\n SensitiveArea.add_property('sites',\n lambda self: intersecting(outdoor_models.Site, self),\n _(\"Touristic contents\"))\n SensitiveArea.add_property('published_sites',\n lambda self: intersecting(outdoor_models.Site, self).filter(published=True),\n _(\"Published touristic contents\"))\n SensitiveArea.add_property('courses',\n lambda self: intersecting(outdoor_models.Course, self),\n _(\"Touristic events\"))\n SensitiveArea.add_property('published_courses',\n lambda self: intersecting(outdoor_models.Course, self).filter(published=True),\n _(\"Published touristic events\"))\n\n\nSensitiveArea.add_property('sensitive_areas', lambda self: intersecting(SensitiveArea, self, 0), _(\"Sensitive areas\"))\nSensitiveArea.add_property('published_sensitive_areas',\n lambda self: intersecting(SensitiveArea, self, 0).filter(published=True),\n _(\"Published sensitive areas\"))\n", "path": "geotrek/sensitivity/models.py"}]}
| 4,023 | 610 |
gh_patches_debug_19894
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-2891
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show OrganisationIndicatorLabels belonging to all partners
The organisation with indicator labels defined, may not always be the primary organisation. Currently, as an interim solution we display labels defined by all organisations.
When there are multiple organizations with lots of labels (and possibly some of them very similar), this can turn out to be a problem.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rsr/templatetags/project_editor.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the
6 Akvo RSR module. For additional details on the GNU license please
7 see < http://www.gnu.org/licenses/agpl.html >.
8 """
9
10 from django import template
11 from django.db import models
12 from django.db.models import get_model, QuerySet
13
14 from akvo.rsr.models import ProjectEditorValidation
15
16 register = template.Library()
17 VALIDATION_CACHE = {}
18
19
20 def retrieve_model(obj):
21 """
22 Retrieves the model from 'obj', which can be either a Django Object or a string.
23 """
24 return get_model('rsr', obj.split('.')[0]) if isinstance(obj, basestring) else type(obj)
25
26
27 def retrieve_id(obj):
28 """
29 Retrieves the id from 'obj', which can be either a Django Object or a string.
30 """
31 if not isinstance(obj, basestring):
32 try:
33 return obj.id
34 except AttributeError:
35 return obj.project.id
36 else:
37 return "{0}_{1}".format(obj.split('.')[1], "new-0")
38
39
40 def get_validations():
41 """ Populate the VALIDATION_CACHE and return it."""
42
43 if VALIDATION_CACHE.get('CACHE_VALID', False):
44 return VALIDATION_CACHE
45
46 fields = ('validation', 'action', 'validation_set__pk')
47 for name, action, validation_set in ProjectEditorValidation.objects.values_list(*fields):
48 if action == ProjectEditorValidation.MANDATORY_ACTION:
49 action = 'mandatory'
50 elif action == ProjectEditorValidation.HIDDEN_ACTION:
51 action = 'hidden'
52
53 names = name.split('||')
54 for name in names:
55 indication = VALIDATION_CACHE.get(name, '')
56 indication += ' {0}-{1} '.format(action, validation_set)
57
58 if action == 'mandatory' and len(names) > 1:
59 other_names = set(names) - set([name])
60 for or_name in other_names:
61 indication += 'mandatory-{0}-or-{1} '.format(
62 validation_set, or_name.split('.')[1]
63 )
64
65 VALIDATION_CACHE[name] = indication.strip()
66
67 VALIDATION_CACHE['CACHE_VALID'] = True
68
69 return VALIDATION_CACHE
70
71
72 def invalidate_validation_cache():
73 global VALIDATION_CACHE
74 VALIDATION_CACHE = {}
75
76 return VALIDATION_CACHE
77
78
79 @register.filter
80 def obj_id(obj):
81 """
82 Generates the field's ID for a given model's field.
83
84 :returns "1234" or "1234_new-0"
85 """
86 return "{0}".format(retrieve_id(obj))
87
88
89 @register.filter
90 def field_id(obj, field):
91 """
92 Generates the field's ID for a given model's field.
93
94 :returns "rsr_project.title.1234"
95 """
96 return "{0}.{1}.{2}".format(
97 retrieve_model(obj)._meta.db_table,
98 field,
99 retrieve_id(obj)
100 )
101
102
103 @register.filter
104 def field_class(obj, field):
105 """
106 Similar to field_id(), but without the ID and a - to separate the fields. This is needed to
107 identify the class of a typeahead field.
108
109 :returns "rsr_relatedproject-related_project"
110 """
111 return "{0}-{1}".format(
112 retrieve_model(obj)._meta.db_table,
113 field
114 )
115
116
117 @register.filter
118 def field_class_id(obj, field):
119 """
120 Similar to field_id(), but without the ID and a - to separate the fields. This is needed to
121 identify the class of a typeahead field.
122
123 :returns "rsr_relatedproject-related_project"
124 """
125 return "{0}-{1}-{2}".format(
126 retrieve_model(obj)._meta.db_table,
127 field,
128 retrieve_id(obj)
129 )
130
131
132 @register.filter
133 def field_name(obj, field):
134 """
135 Retrieves the field's name for a given model's field, and makes first character uppercase. Also
136 taking into account that 'IATI' should always be in uppercase.
137
138 :returns "Project title"
139 """
140 def check_iati_in_name(name):
141 """Checks whether IATI is in the field name and capitalises that part."""
142 return name.replace('iati', 'IATI').replace('Iati', 'IATI')
143
144 return check_iati_in_name(retrieve_model(obj)._meta.get_field(field).verbose_name.capitalize())
145
146
147 @register.filter
148 def field_model_name(obj):
149 """
150 Retrieves the field's model name, and makes first character uppercase.
151
152 :returns "Related project"
153 """
154 return retrieve_model(obj)._meta.verbose_name.capitalize()
155
156
157 @register.filter
158 def help_text(obj, field):
159 """
160 Retrieves the help text for a given model's field.
161
162 :returns "If you are reporting multiple levels of projects in RSR, you can specify whether
163 this is a core, sub, or lower sub activity here."
164 """
165 return retrieve_model(obj)._meta.get_field(field).help_text
166
167
168 @register.filter
169 def max_length(obj, field):
170 """
171 Retrieves the max length of a given model's field.
172
173 :returns 100
174 """
175 return retrieve_model(obj)._meta.get_field(field).max_length
176
177
178 @register.filter
179 def value(obj, field):
180 """
181 Retrieves the value of a given object's field.
182
183 In case the object is a string, the supplied model and field are retrieved, and
184 the default value of the field returned, or an empty string if no default is specified.
185
186 In case the object is a Django object, the value of that object is retrieved.
187 If the object is a related object (e.g. ForeignKey), the primary key of the related object
188 is returned.
189
190 :returns "Project title"
191 :returns 1234 (in case of related object)
192 """
193 if isinstance(obj, basestring):
194 return ''
195 else:
196 field_value = getattr(obj, field)
197 if hasattr(field_value, 'pk'):
198 return field_value.pk
199 elif hasattr(field_value, 'url'):
200 return field_value.url
201 elif field_value is True:
202 return '1'
203 elif field_value is False:
204 return '2'
205 elif field_value in [0, 0.]:
206 return '0'
207 else:
208 return field_value or ''
209
210
211 @register.filter
212 def choices(obj, field):
213 """
214 Retrieves the choices of a given object's field and the IDs of the choices
215
216 :returns [((1, "Core Activity"), (2, "Sub Activity"), (3, "Lower Sub Activity")), [1, 2, 3]]
217 """
218
219 def first_items_list(iterable):
220 return [item[0] for item in iterable]
221
222 def values_list_of(model, *fields):
223 if isinstance(model, QuerySet):
224 objects = model
225 else:
226 objects = get_model('rsr', model).objects.all()
227 return objects.values_list(*fields)
228
229 def choices_and_ids(model, *fields):
230 choices_list = values_list_of(model, *fields)
231 return [
232 choices_list,
233 first_items_list(choices_list)
234 ]
235
236 model = retrieve_model(obj)
237 model_field = model._meta.get_field(field)
238
239 if not isinstance(model_field, models.ForeignKey):
240 return [model_field.choices, first_items_list(model_field.choices)]
241
242 elif isinstance(obj, get_model('rsr', 'BudgetItem')) or \
243 (isinstance(obj, basestring) and 'BudgetItem' in obj):
244 # The ForeignKey field on budget items is the budget item labels
245 return choices_and_ids('budgetitemlabel', 'id', 'label')
246
247 elif isinstance(obj, get_model('rsr', 'ProjectLocation')) or \
248 (isinstance(obj, basestring) and 'ProjectLocation' in obj):
249 # The ForeignKey field on locations is the countries
250 return choices_and_ids('country', 'id', 'name')
251
252 elif isinstance(obj, get_model('rsr', 'IndicatorLabel')) or \
253 (isinstance(obj, basestring) and 'IndicatorLabel' in obj):
254
255 if isinstance(obj, basestring) and 'IndicatorLabel' in obj:
256 # String looking like: u'IndicatorLabel.5577_22634_19197', 5577 is the project ID
257 project_pk = obj.split('.')[1].split('_')[0]
258 project = get_model('rsr', 'Project').objects.get(pk=project_pk)
259 else:
260 project = obj.indicator.result.project
261 organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(
262 organisation=project.primary_organisation
263 )
264 return choices_and_ids(organisation_indicator_labels, 'id', 'label')
265
266
267 @register.filter
268 def manytomany_value(obj):
269 """
270 Retrieves the id of a given object's field.
271
272 :returns ((1, "Akvo/Chum"), (2, "Yep"))
273 """
274 return '' if isinstance(obj, basestring) else obj.pk
275
276
277 @register.filter
278 def manytomany_choices(obj, field):
279 """
280 Retrieves the choices of a given object's Model. All objects of that Model should be displayed.
281
282 :returns ((1, "Akvo/Chum"), (2, "Yep"))
283 """
284 return retrieve_model(obj).objects.all().values_list('id', field)
285
286
287 @register.filter
288 def mandatory_or_hidden(validations, field):
289 """
290 Retrieves the mandatory and hidden fields for project editor validations.
291
292 :returns A string of the form mandatory-{validation ID} and/or hidden-{validation_ID}
293 """
294
295 if '.' in field:
296 # Model fields like 'rsr_relatedproject.12.relation'
297 field_name_list = field.split('.')
298 new_field_name = '.'.join([field_name_list[0], field_name_list[1]])
299
300 else:
301 # Full models like 'rsr_relatedproject'
302 new_field_name = field
303
304 indication = get_validations().get(new_field_name, '')
305
306 # XXX: Not sure why exactly 'rsr_project.current_image' needs to be special
307 # cased, but this code just retains the behavior of the older code.
308 if field == 'rsr_project.current_image':
309 indication = ' '.join(
310 filter(lambda x: not x.startswith('mandatory'), indication.split())
311 )
312
313 return indication
314
```
Path: `akvo/rsr/models/organisation_indicator_label.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo Reporting is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from django.db import models
9 from django.utils.translation import ugettext_lazy as _
10
11 from ..fields import ValidXMLCharField
12
13
14 class OrganisationIndicatorLabel(models.Model):
15 from .organisation import Organisation
16
17 organisation = models.ForeignKey(Organisation, verbose_name=_(u'organisation'),
18 related_name='indicator_labels')
19 label = ValidXMLCharField(_(u'label'), max_length=100)
20
21 class Meta:
22 app_label = 'rsr'
23 verbose_name = _(u'organisation indicator label')
24 verbose_name_plural = _(u'organisation indicator labels')
25 unique_together = ('organisation', 'label')
26
27 def __unicode__(self):
28 return self.label
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/akvo/rsr/models/organisation_indicator_label.py b/akvo/rsr/models/organisation_indicator_label.py
--- a/akvo/rsr/models/organisation_indicator_label.py
+++ b/akvo/rsr/models/organisation_indicator_label.py
@@ -23,6 +23,7 @@
verbose_name = _(u'organisation indicator label')
verbose_name_plural = _(u'organisation indicator labels')
unique_together = ('organisation', 'label')
+ ordering = ('organisation', 'label')
def __unicode__(self):
return self.label
diff --git a/akvo/rsr/templatetags/project_editor.py b/akvo/rsr/templatetags/project_editor.py
--- a/akvo/rsr/templatetags/project_editor.py
+++ b/akvo/rsr/templatetags/project_editor.py
@@ -259,8 +259,8 @@
else:
project = obj.indicator.result.project
organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(
- organisation=project.primary_organisation
- )
+ organisation=project.all_partners()
+ ).distinct()
return choices_and_ids(organisation_indicator_labels, 'id', 'label')
|
{"golden_diff": "diff --git a/akvo/rsr/models/organisation_indicator_label.py b/akvo/rsr/models/organisation_indicator_label.py\n--- a/akvo/rsr/models/organisation_indicator_label.py\n+++ b/akvo/rsr/models/organisation_indicator_label.py\n@@ -23,6 +23,7 @@\n verbose_name = _(u'organisation indicator label')\n verbose_name_plural = _(u'organisation indicator labels')\n unique_together = ('organisation', 'label')\n+ ordering = ('organisation', 'label')\n \n def __unicode__(self):\n return self.label\ndiff --git a/akvo/rsr/templatetags/project_editor.py b/akvo/rsr/templatetags/project_editor.py\n--- a/akvo/rsr/templatetags/project_editor.py\n+++ b/akvo/rsr/templatetags/project_editor.py\n@@ -259,8 +259,8 @@\n else:\n project = obj.indicator.result.project\n organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(\n- organisation=project.primary_organisation\n- )\n+ organisation=project.all_partners()\n+ ).distinct()\n return choices_and_ids(organisation_indicator_labels, 'id', 'label')\n", "issue": "Show OrganisationIndicatorLabels belonging to all partners\nThe organisation with indicator labels defined, may not always be the primary organisation. Currently, as an interim solution we display labels defined by all organisations. \r\n\r\nWhen there are multiple organizations with lots of labels (and possibly some of them very similar), this can turn out to be a problem. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django import template\nfrom django.db import models\nfrom django.db.models import get_model, QuerySet\n\nfrom akvo.rsr.models import ProjectEditorValidation\n\nregister = template.Library()\nVALIDATION_CACHE = {}\n\n\ndef retrieve_model(obj):\n \"\"\"\n Retrieves the model from 'obj', which can be either a Django Object or a string.\n \"\"\"\n return get_model('rsr', obj.split('.')[0]) if isinstance(obj, basestring) else type(obj)\n\n\ndef retrieve_id(obj):\n \"\"\"\n Retrieves the id from 'obj', which can be either a Django Object or a string.\n \"\"\"\n if not isinstance(obj, basestring):\n try:\n return obj.id\n except AttributeError:\n return obj.project.id\n else:\n return \"{0}_{1}\".format(obj.split('.')[1], \"new-0\")\n\n\ndef get_validations():\n \"\"\" Populate the VALIDATION_CACHE and return it.\"\"\"\n\n if VALIDATION_CACHE.get('CACHE_VALID', False):\n return VALIDATION_CACHE\n\n fields = ('validation', 'action', 'validation_set__pk')\n for name, action, validation_set in ProjectEditorValidation.objects.values_list(*fields):\n if action == ProjectEditorValidation.MANDATORY_ACTION:\n action = 'mandatory'\n elif action == ProjectEditorValidation.HIDDEN_ACTION:\n action = 'hidden'\n\n names = name.split('||')\n for name in names:\n indication = VALIDATION_CACHE.get(name, '')\n indication += ' {0}-{1} '.format(action, validation_set)\n\n if action == 'mandatory' and len(names) > 1:\n other_names = set(names) - set([name])\n for or_name in other_names:\n indication += 'mandatory-{0}-or-{1} '.format(\n validation_set, or_name.split('.')[1]\n )\n\n VALIDATION_CACHE[name] = indication.strip()\n\n VALIDATION_CACHE['CACHE_VALID'] = True\n\n return VALIDATION_CACHE\n\n\ndef invalidate_validation_cache():\n global VALIDATION_CACHE\n VALIDATION_CACHE = {}\n\n return VALIDATION_CACHE\n\n\[email protected]\ndef obj_id(obj):\n \"\"\"\n Generates the field's ID for a given model's field.\n\n :returns \"1234\" or \"1234_new-0\"\n \"\"\"\n return \"{0}\".format(retrieve_id(obj))\n\n\[email protected]\ndef field_id(obj, field):\n \"\"\"\n Generates the field's ID for a given model's field.\n\n :returns \"rsr_project.title.1234\"\n \"\"\"\n return \"{0}.{1}.{2}\".format(\n retrieve_model(obj)._meta.db_table,\n field,\n retrieve_id(obj)\n )\n\n\[email protected]\ndef field_class(obj, field):\n \"\"\"\n Similar to field_id(), but without the ID and a - to separate the fields. This is needed to\n identify the class of a typeahead field.\n\n :returns \"rsr_relatedproject-related_project\"\n \"\"\"\n return \"{0}-{1}\".format(\n retrieve_model(obj)._meta.db_table,\n field\n )\n\n\[email protected]\ndef field_class_id(obj, field):\n \"\"\"\n Similar to field_id(), but without the ID and a - to separate the fields. This is needed to\n identify the class of a typeahead field.\n\n :returns \"rsr_relatedproject-related_project\"\n \"\"\"\n return \"{0}-{1}-{2}\".format(\n retrieve_model(obj)._meta.db_table,\n field,\n retrieve_id(obj)\n )\n\n\[email protected]\ndef field_name(obj, field):\n \"\"\"\n Retrieves the field's name for a given model's field, and makes first character uppercase. Also\n taking into account that 'IATI' should always be in uppercase.\n\n :returns \"Project title\"\n \"\"\"\n def check_iati_in_name(name):\n \"\"\"Checks whether IATI is in the field name and capitalises that part.\"\"\"\n return name.replace('iati', 'IATI').replace('Iati', 'IATI')\n\n return check_iati_in_name(retrieve_model(obj)._meta.get_field(field).verbose_name.capitalize())\n\n\[email protected]\ndef field_model_name(obj):\n \"\"\"\n Retrieves the field's model name, and makes first character uppercase.\n\n :returns \"Related project\"\n \"\"\"\n return retrieve_model(obj)._meta.verbose_name.capitalize()\n\n\[email protected]\ndef help_text(obj, field):\n \"\"\"\n Retrieves the help text for a given model's field.\n\n :returns \"If you are reporting multiple levels of projects in RSR, you can specify whether\n this is a core, sub, or lower sub activity here.\"\n \"\"\"\n return retrieve_model(obj)._meta.get_field(field).help_text\n\n\[email protected]\ndef max_length(obj, field):\n \"\"\"\n Retrieves the max length of a given model's field.\n\n :returns 100\n \"\"\"\n return retrieve_model(obj)._meta.get_field(field).max_length\n\n\[email protected]\ndef value(obj, field):\n \"\"\"\n Retrieves the value of a given object's field.\n\n In case the object is a string, the supplied model and field are retrieved, and\n the default value of the field returned, or an empty string if no default is specified.\n\n In case the object is a Django object, the value of that object is retrieved.\n If the object is a related object (e.g. ForeignKey), the primary key of the related object\n is returned.\n\n :returns \"Project title\"\n :returns 1234 (in case of related object)\n \"\"\"\n if isinstance(obj, basestring):\n return ''\n else:\n field_value = getattr(obj, field)\n if hasattr(field_value, 'pk'):\n return field_value.pk\n elif hasattr(field_value, 'url'):\n return field_value.url\n elif field_value is True:\n return '1'\n elif field_value is False:\n return '2'\n elif field_value in [0, 0.]:\n return '0'\n else:\n return field_value or ''\n\n\[email protected]\ndef choices(obj, field):\n \"\"\"\n Retrieves the choices of a given object's field and the IDs of the choices\n\n :returns [((1, \"Core Activity\"), (2, \"Sub Activity\"), (3, \"Lower Sub Activity\")), [1, 2, 3]]\n \"\"\"\n\n def first_items_list(iterable):\n return [item[0] for item in iterable]\n\n def values_list_of(model, *fields):\n if isinstance(model, QuerySet):\n objects = model\n else:\n objects = get_model('rsr', model).objects.all()\n return objects.values_list(*fields)\n\n def choices_and_ids(model, *fields):\n choices_list = values_list_of(model, *fields)\n return [\n choices_list,\n first_items_list(choices_list)\n ]\n\n model = retrieve_model(obj)\n model_field = model._meta.get_field(field)\n\n if not isinstance(model_field, models.ForeignKey):\n return [model_field.choices, first_items_list(model_field.choices)]\n\n elif isinstance(obj, get_model('rsr', 'BudgetItem')) or \\\n (isinstance(obj, basestring) and 'BudgetItem' in obj):\n # The ForeignKey field on budget items is the budget item labels\n return choices_and_ids('budgetitemlabel', 'id', 'label')\n\n elif isinstance(obj, get_model('rsr', 'ProjectLocation')) or \\\n (isinstance(obj, basestring) and 'ProjectLocation' in obj):\n # The ForeignKey field on locations is the countries\n return choices_and_ids('country', 'id', 'name')\n\n elif isinstance(obj, get_model('rsr', 'IndicatorLabel')) or \\\n (isinstance(obj, basestring) and 'IndicatorLabel' in obj):\n\n if isinstance(obj, basestring) and 'IndicatorLabel' in obj:\n # String looking like: u'IndicatorLabel.5577_22634_19197', 5577 is the project ID\n project_pk = obj.split('.')[1].split('_')[0]\n project = get_model('rsr', 'Project').objects.get(pk=project_pk)\n else:\n project = obj.indicator.result.project\n organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(\n organisation=project.primary_organisation\n )\n return choices_and_ids(organisation_indicator_labels, 'id', 'label')\n\n\[email protected]\ndef manytomany_value(obj):\n \"\"\"\n Retrieves the id of a given object's field.\n\n :returns ((1, \"Akvo/Chum\"), (2, \"Yep\"))\n \"\"\"\n return '' if isinstance(obj, basestring) else obj.pk\n\n\[email protected]\ndef manytomany_choices(obj, field):\n \"\"\"\n Retrieves the choices of a given object's Model. All objects of that Model should be displayed.\n\n :returns ((1, \"Akvo/Chum\"), (2, \"Yep\"))\n \"\"\"\n return retrieve_model(obj).objects.all().values_list('id', field)\n\n\[email protected]\ndef mandatory_or_hidden(validations, field):\n \"\"\"\n Retrieves the mandatory and hidden fields for project editor validations.\n\n :returns A string of the form mandatory-{validation ID} and/or hidden-{validation_ID}\n \"\"\"\n\n if '.' in field:\n # Model fields like 'rsr_relatedproject.12.relation'\n field_name_list = field.split('.')\n new_field_name = '.'.join([field_name_list[0], field_name_list[1]])\n\n else:\n # Full models like 'rsr_relatedproject'\n new_field_name = field\n\n indication = get_validations().get(new_field_name, '')\n\n # XXX: Not sure why exactly 'rsr_project.current_image' needs to be special\n # cased, but this code just retains the behavior of the older code.\n if field == 'rsr_project.current_image':\n indication = ' '.join(\n filter(lambda x: not x.startswith('mandatory'), indication.split())\n )\n\n return indication\n", "path": "akvo/rsr/templatetags/project_editor.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo Reporting is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\n\nclass OrganisationIndicatorLabel(models.Model):\n from .organisation import Organisation\n\n organisation = models.ForeignKey(Organisation, verbose_name=_(u'organisation'),\n related_name='indicator_labels')\n label = ValidXMLCharField(_(u'label'), max_length=100)\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'organisation indicator label')\n verbose_name_plural = _(u'organisation indicator labels')\n unique_together = ('organisation', 'label')\n\n def __unicode__(self):\n return self.label\n", "path": "akvo/rsr/models/organisation_indicator_label.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django import template\nfrom django.db import models\nfrom django.db.models import get_model, QuerySet\n\nfrom akvo.rsr.models import ProjectEditorValidation\n\nregister = template.Library()\nVALIDATION_CACHE = {}\n\n\ndef retrieve_model(obj):\n \"\"\"\n Retrieves the model from 'obj', which can be either a Django Object or a string.\n \"\"\"\n return get_model('rsr', obj.split('.')[0]) if isinstance(obj, basestring) else type(obj)\n\n\ndef retrieve_id(obj):\n \"\"\"\n Retrieves the id from 'obj', which can be either a Django Object or a string.\n \"\"\"\n if not isinstance(obj, basestring):\n try:\n return obj.id\n except AttributeError:\n return obj.project.id\n else:\n return \"{0}_{1}\".format(obj.split('.')[1], \"new-0\")\n\n\ndef get_validations():\n \"\"\" Populate the VALIDATION_CACHE and return it.\"\"\"\n\n if VALIDATION_CACHE.get('CACHE_VALID', False):\n return VALIDATION_CACHE\n\n fields = ('validation', 'action', 'validation_set__pk')\n for name, action, validation_set in ProjectEditorValidation.objects.values_list(*fields):\n if action == ProjectEditorValidation.MANDATORY_ACTION:\n action = 'mandatory'\n elif action == ProjectEditorValidation.HIDDEN_ACTION:\n action = 'hidden'\n\n names = name.split('||')\n for name in names:\n indication = VALIDATION_CACHE.get(name, '')\n indication += ' {0}-{1} '.format(action, validation_set)\n\n if action == 'mandatory' and len(names) > 1:\n other_names = set(names) - set([name])\n for or_name in other_names:\n indication += 'mandatory-{0}-or-{1} '.format(\n validation_set, or_name.split('.')[1]\n )\n\n VALIDATION_CACHE[name] = indication.strip()\n\n VALIDATION_CACHE['CACHE_VALID'] = True\n\n return VALIDATION_CACHE\n\n\ndef invalidate_validation_cache():\n global VALIDATION_CACHE\n VALIDATION_CACHE = {}\n\n return VALIDATION_CACHE\n\n\[email protected]\ndef obj_id(obj):\n \"\"\"\n Generates the field's ID for a given model's field.\n\n :returns \"1234\" or \"1234_new-0\"\n \"\"\"\n return \"{0}\".format(retrieve_id(obj))\n\n\[email protected]\ndef field_id(obj, field):\n \"\"\"\n Generates the field's ID for a given model's field.\n\n :returns \"rsr_project.title.1234\"\n \"\"\"\n return \"{0}.{1}.{2}\".format(\n retrieve_model(obj)._meta.db_table,\n field,\n retrieve_id(obj)\n )\n\n\[email protected]\ndef field_class(obj, field):\n \"\"\"\n Similar to field_id(), but without the ID and a - to separate the fields. This is needed to\n identify the class of a typeahead field.\n\n :returns \"rsr_relatedproject-related_project\"\n \"\"\"\n return \"{0}-{1}\".format(\n retrieve_model(obj)._meta.db_table,\n field\n )\n\n\[email protected]\ndef field_class_id(obj, field):\n \"\"\"\n Similar to field_id(), but without the ID and a - to separate the fields. This is needed to\n identify the class of a typeahead field.\n\n :returns \"rsr_relatedproject-related_project\"\n \"\"\"\n return \"{0}-{1}-{2}\".format(\n retrieve_model(obj)._meta.db_table,\n field,\n retrieve_id(obj)\n )\n\n\[email protected]\ndef field_name(obj, field):\n \"\"\"\n Retrieves the field's name for a given model's field, and makes first character uppercase. Also\n taking into account that 'IATI' should always be in uppercase.\n\n :returns \"Project title\"\n \"\"\"\n def check_iati_in_name(name):\n \"\"\"Checks whether IATI is in the field name and capitalises that part.\"\"\"\n return name.replace('iati', 'IATI').replace('Iati', 'IATI')\n\n return check_iati_in_name(retrieve_model(obj)._meta.get_field(field).verbose_name.capitalize())\n\n\[email protected]\ndef field_model_name(obj):\n \"\"\"\n Retrieves the field's model name, and makes first character uppercase.\n\n :returns \"Related project\"\n \"\"\"\n return retrieve_model(obj)._meta.verbose_name.capitalize()\n\n\[email protected]\ndef help_text(obj, field):\n \"\"\"\n Retrieves the help text for a given model's field.\n\n :returns \"If you are reporting multiple levels of projects in RSR, you can specify whether\n this is a core, sub, or lower sub activity here.\"\n \"\"\"\n return retrieve_model(obj)._meta.get_field(field).help_text\n\n\[email protected]\ndef max_length(obj, field):\n \"\"\"\n Retrieves the max length of a given model's field.\n\n :returns 100\n \"\"\"\n return retrieve_model(obj)._meta.get_field(field).max_length\n\n\[email protected]\ndef value(obj, field):\n \"\"\"\n Retrieves the value of a given object's field.\n\n In case the object is a string, the supplied model and field are retrieved, and\n the default value of the field returned, or an empty string if no default is specified.\n\n In case the object is a Django object, the value of that object is retrieved.\n If the object is a related object (e.g. ForeignKey), the primary key of the related object\n is returned.\n\n :returns \"Project title\"\n :returns 1234 (in case of related object)\n \"\"\"\n if isinstance(obj, basestring):\n return ''\n else:\n field_value = getattr(obj, field)\n if hasattr(field_value, 'pk'):\n return field_value.pk\n elif hasattr(field_value, 'url'):\n return field_value.url\n elif field_value is True:\n return '1'\n elif field_value is False:\n return '2'\n elif field_value in [0, 0.]:\n return '0'\n else:\n return field_value or ''\n\n\[email protected]\ndef choices(obj, field):\n \"\"\"\n Retrieves the choices of a given object's field and the IDs of the choices\n\n :returns [((1, \"Core Activity\"), (2, \"Sub Activity\"), (3, \"Lower Sub Activity\")), [1, 2, 3]]\n \"\"\"\n\n def first_items_list(iterable):\n return [item[0] for item in iterable]\n\n def values_list_of(model, *fields):\n if isinstance(model, QuerySet):\n objects = model\n else:\n objects = get_model('rsr', model).objects.all()\n return objects.values_list(*fields)\n\n def choices_and_ids(model, *fields):\n choices_list = values_list_of(model, *fields)\n return [\n choices_list,\n first_items_list(choices_list)\n ]\n\n model = retrieve_model(obj)\n model_field = model._meta.get_field(field)\n\n if not isinstance(model_field, models.ForeignKey):\n return [model_field.choices, first_items_list(model_field.choices)]\n\n elif isinstance(obj, get_model('rsr', 'BudgetItem')) or \\\n (isinstance(obj, basestring) and 'BudgetItem' in obj):\n # The ForeignKey field on budget items is the budget item labels\n return choices_and_ids('budgetitemlabel', 'id', 'label')\n\n elif isinstance(obj, get_model('rsr', 'ProjectLocation')) or \\\n (isinstance(obj, basestring) and 'ProjectLocation' in obj):\n # The ForeignKey field on locations is the countries\n return choices_and_ids('country', 'id', 'name')\n\n elif isinstance(obj, get_model('rsr', 'IndicatorLabel')) or \\\n (isinstance(obj, basestring) and 'IndicatorLabel' in obj):\n\n if isinstance(obj, basestring) and 'IndicatorLabel' in obj:\n # String looking like: u'IndicatorLabel.5577_22634_19197', 5577 is the project ID\n project_pk = obj.split('.')[1].split('_')[0]\n project = get_model('rsr', 'Project').objects.get(pk=project_pk)\n else:\n project = obj.indicator.result.project\n organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(\n organisation=project.all_partners()\n ).distinct()\n return choices_and_ids(organisation_indicator_labels, 'id', 'label')\n\n\[email protected]\ndef manytomany_value(obj):\n \"\"\"\n Retrieves the id of a given object's field.\n\n :returns ((1, \"Akvo/Chum\"), (2, \"Yep\"))\n \"\"\"\n return '' if isinstance(obj, basestring) else obj.pk\n\n\[email protected]\ndef manytomany_choices(obj, field):\n \"\"\"\n Retrieves the choices of a given object's Model. All objects of that Model should be displayed.\n\n :returns ((1, \"Akvo/Chum\"), (2, \"Yep\"))\n \"\"\"\n return retrieve_model(obj).objects.all().values_list('id', field)\n\n\[email protected]\ndef mandatory_or_hidden(validations, field):\n \"\"\"\n Retrieves the mandatory and hidden fields for project editor validations.\n\n :returns A string of the form mandatory-{validation ID} and/or hidden-{validation_ID}\n \"\"\"\n\n if '.' in field:\n # Model fields like 'rsr_relatedproject.12.relation'\n field_name_list = field.split('.')\n new_field_name = '.'.join([field_name_list[0], field_name_list[1]])\n\n else:\n # Full models like 'rsr_relatedproject'\n new_field_name = field\n\n indication = get_validations().get(new_field_name, '')\n\n # XXX: Not sure why exactly 'rsr_project.current_image' needs to be special\n # cased, but this code just retains the behavior of the older code.\n if field == 'rsr_project.current_image':\n indication = ' '.join(\n filter(lambda x: not x.startswith('mandatory'), indication.split())\n )\n\n return indication\n", "path": "akvo/rsr/templatetags/project_editor.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo Reporting is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\n\nclass OrganisationIndicatorLabel(models.Model):\n from .organisation import Organisation\n\n organisation = models.ForeignKey(Organisation, verbose_name=_(u'organisation'),\n related_name='indicator_labels')\n label = ValidXMLCharField(_(u'label'), max_length=100)\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'organisation indicator label')\n verbose_name_plural = _(u'organisation indicator labels')\n unique_together = ('organisation', 'label')\n ordering = ('organisation', 'label')\n\n def __unicode__(self):\n return self.label\n", "path": "akvo/rsr/models/organisation_indicator_label.py"}]}
| 3,712 | 277 |
gh_patches_debug_8998
|
rasdani/github-patches
|
git_diff
|
Gallopsled__pwntools-1706
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
socket.socket doesnt have unrecv. bug in udp server
https://github.com/Gallopsled/pwntools/blob/5e279e7993f1f57cba2ba128f6bd8a27c19ea25f/pwnlib/tubes/server.py#L120
as mentioned above this is a bug.
line should be
```python
self.unrecv(data)
```
if its necessary at all
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwnlib/tubes/server.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3
4 import errno
5 import socket
6 import threading
7
8 from pwnlib.context import context
9 from pwnlib.log import getLogger
10 from pwnlib.tubes.sock import sock
11 from pwnlib.tubes.remote import remote
12
13 log = getLogger(__name__)
14
15 class server(sock):
16 r"""Creates an TCP or UDP-server to listen for connections. It supports
17 both IPv4 and IPv6.
18
19 Arguments:
20 port(int): The port to connect to.
21 Defaults to a port auto-selected by the operating system.
22 bindaddr(str): The address to bind to.
23 Defaults to ``0.0.0.0`` / `::`.
24 fam: The string "any", "ipv4" or "ipv6" or an integer to pass to :func:`socket.getaddrinfo`.
25 typ: The string "tcp" or "udp" or an integer to pass to :func:`socket.getaddrinfo`.
26 callback: A function to be started on incoming connections. It should take a :class:`pwnlib.tubes.remote` as its only argument.
27
28 Examples:
29
30 >>> s = server(8888)
31 >>> client_conn = remote('localhost', s.lport)
32 >>> server_conn = s.next_connection()
33 >>> client_conn.sendline(b'Hello')
34 >>> server_conn.recvline()
35 b'Hello\n'
36 >>> def cb(r):
37 ... client_input = r.readline()
38 ... r.send(client_input[::-1])
39 ...
40 >>> t = server(8889, callback=cb)
41 >>> client_conn = remote('localhost', t.lport)
42 >>> client_conn.sendline(b'callback')
43 >>> client_conn.recv()
44 b'\nkcabllac'
45 """
46
47 #: Local port
48 lport = 0
49
50 #: Local host
51 lhost = None
52
53 #: Socket type (e.g. socket.SOCK_STREAM)
54 type = None
55
56 #: Socket family
57 family = None
58
59 #: Socket protocol
60 protocol = None
61
62 #: Canonical name of the listening interface
63 canonname = None
64
65 #: Sockaddr structure that is being listened on
66 sockaddr = None
67
68 _accepter = None
69
70 def __init__(self, port=0, bindaddr = "0.0.0.0", fam = "any", typ = "tcp",
71 callback = None, blocking = False, *args, **kwargs):
72 super(server, self).__init__(*args, **kwargs)
73
74 port = int(port)
75 fam = {socket.AF_INET: 'ipv4',
76 socket.AF_INET6: 'ipv6'}.get(fam, fam)
77
78 fam = self._get_family(fam)
79 typ = self._get_type(typ)
80
81 if fam == socket.AF_INET6 and bindaddr == '0.0.0.0':
82 bindaddr = '::'
83
84 h = self.waitfor('Trying to bind to %s on port %d' % (bindaddr, port))
85
86 for res in socket.getaddrinfo(bindaddr, port, fam, typ, 0, socket.AI_PASSIVE):
87 self.family, self.type, self.proto, self.canonname, self.sockaddr = res
88
89 if self.type not in [socket.SOCK_STREAM, socket.SOCK_DGRAM]:
90 continue
91
92 h.status("Trying %s" % self.sockaddr[0])
93 listen_sock = socket.socket(self.family, self.type, self.proto)
94 listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
95 listen_sock.bind(self.sockaddr)
96 self.lhost, self.lport = listen_sock.getsockname()[:2]
97 if self.type == socket.SOCK_STREAM:
98 listen_sock.listen(1)
99 break
100 else:
101 h.failure()
102 self.error("Could not bind to %s on port %d" % (bindaddr, port))
103
104 h.success()
105
106 self.sock = listen_sock
107 self.connections_waiting = threading.Event()
108 self.connections = []
109 def accepter():
110 while True:
111 h = self.waitfor('Waiting for connections on %s:%s' % (self.lhost, self.lport))
112 while True:
113 try:
114 if self.type == socket.SOCK_STREAM:
115 sock, rhost = listen_sock.accept()
116 else:
117 data, rhost = listen_sock.recvfrom(4096)
118 listen_sock.connect(rhost)
119 sock = listen_sock
120 sock.unrecv(data)
121 sock.settimeout(self.timeout)
122 break
123 except socket.error as e:
124 if e.errno == errno.EINTR:
125 continue
126 h.failure()
127 self.exception("Socket failure while waiting for connection")
128 sock = None
129 return
130
131 self.rhost, self.rport = rhost[:2]
132 r = remote(self.rhost, self.rport, sock = sock)
133 h.success('Got connection from %s on port %d' % (self.rhost, self.rport))
134 if callback:
135 if not blocking:
136 t = context.Thread(target = callback, args = (r,))
137 t.daemon = True
138 t.start()
139 else:
140 callback(r)
141 else:
142 self.connections.append(r)
143 if not self.connections_waiting.is_set():
144 self.connections_waiting.set()
145
146 self._accepter = context.Thread(target = accepter)
147 self._accepter.daemon = True
148 self._accepter.start()
149
150 def next_connection(self):
151 if not self.connections_waiting.is_set():
152 self.connections_waiting.wait()
153 conn = self.connections.pop(0)
154 if not self.connections:
155 self.connections_waiting.clear()
156 return conn
157
158 def close(self):
159 # since `close` is scheduled to run on exit we must check that we got
160 # a connection or the program will hang in the `join` call above
161 if self._accepter and self._accepter.is_alive():
162 return
163 super(server, self).close()
164
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwnlib/tubes/server.py b/pwnlib/tubes/server.py
--- a/pwnlib/tubes/server.py
+++ b/pwnlib/tubes/server.py
@@ -117,7 +117,7 @@
data, rhost = listen_sock.recvfrom(4096)
listen_sock.connect(rhost)
sock = listen_sock
- sock.unrecv(data)
+ self.unrecv(data)
sock.settimeout(self.timeout)
break
except socket.error as e:
|
{"golden_diff": "diff --git a/pwnlib/tubes/server.py b/pwnlib/tubes/server.py\n--- a/pwnlib/tubes/server.py\n+++ b/pwnlib/tubes/server.py\n@@ -117,7 +117,7 @@\n data, rhost = listen_sock.recvfrom(4096)\n listen_sock.connect(rhost)\n sock = listen_sock\n- sock.unrecv(data)\n+ self.unrecv(data)\n sock.settimeout(self.timeout)\n break\n except socket.error as e:\n", "issue": "socket.socket doesnt have unrecv. bug in udp server\nhttps://github.com/Gallopsled/pwntools/blob/5e279e7993f1f57cba2ba128f6bd8a27c19ea25f/pwnlib/tubes/server.py#L120\r\n\r\nas mentioned above this is a bug.\r\n\r\nline should be\r\n```python\r\nself.unrecv(data)\r\n```\r\nif its necessary at all\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport errno\nimport socket\nimport threading\n\nfrom pwnlib.context import context\nfrom pwnlib.log import getLogger\nfrom pwnlib.tubes.sock import sock\nfrom pwnlib.tubes.remote import remote\n\nlog = getLogger(__name__)\n\nclass server(sock):\n r\"\"\"Creates an TCP or UDP-server to listen for connections. It supports\n both IPv4 and IPv6.\n\n Arguments:\n port(int): The port to connect to.\n Defaults to a port auto-selected by the operating system.\n bindaddr(str): The address to bind to.\n Defaults to ``0.0.0.0`` / `::`.\n fam: The string \"any\", \"ipv4\" or \"ipv6\" or an integer to pass to :func:`socket.getaddrinfo`.\n typ: The string \"tcp\" or \"udp\" or an integer to pass to :func:`socket.getaddrinfo`.\n callback: A function to be started on incoming connections. It should take a :class:`pwnlib.tubes.remote` as its only argument.\n\n Examples:\n\n >>> s = server(8888)\n >>> client_conn = remote('localhost', s.lport)\n >>> server_conn = s.next_connection()\n >>> client_conn.sendline(b'Hello')\n >>> server_conn.recvline()\n b'Hello\\n'\n >>> def cb(r):\n ... client_input = r.readline()\n ... r.send(client_input[::-1])\n ...\n >>> t = server(8889, callback=cb)\n >>> client_conn = remote('localhost', t.lport)\n >>> client_conn.sendline(b'callback')\n >>> client_conn.recv()\n b'\\nkcabllac'\n \"\"\"\n\n #: Local port\n lport = 0\n\n #: Local host\n lhost = None\n\n #: Socket type (e.g. socket.SOCK_STREAM)\n type = None\n\n #: Socket family\n family = None\n\n #: Socket protocol\n protocol = None\n\n #: Canonical name of the listening interface\n canonname = None\n\n #: Sockaddr structure that is being listened on\n sockaddr = None\n\n _accepter = None\n\n def __init__(self, port=0, bindaddr = \"0.0.0.0\", fam = \"any\", typ = \"tcp\",\n callback = None, blocking = False, *args, **kwargs):\n super(server, self).__init__(*args, **kwargs)\n\n port = int(port)\n fam = {socket.AF_INET: 'ipv4',\n socket.AF_INET6: 'ipv6'}.get(fam, fam)\n\n fam = self._get_family(fam)\n typ = self._get_type(typ)\n\n if fam == socket.AF_INET6 and bindaddr == '0.0.0.0':\n bindaddr = '::'\n\n h = self.waitfor('Trying to bind to %s on port %d' % (bindaddr, port))\n\n for res in socket.getaddrinfo(bindaddr, port, fam, typ, 0, socket.AI_PASSIVE):\n self.family, self.type, self.proto, self.canonname, self.sockaddr = res\n\n if self.type not in [socket.SOCK_STREAM, socket.SOCK_DGRAM]:\n continue\n\n h.status(\"Trying %s\" % self.sockaddr[0])\n listen_sock = socket.socket(self.family, self.type, self.proto)\n listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n listen_sock.bind(self.sockaddr)\n self.lhost, self.lport = listen_sock.getsockname()[:2]\n if self.type == socket.SOCK_STREAM:\n listen_sock.listen(1)\n break\n else:\n h.failure()\n self.error(\"Could not bind to %s on port %d\" % (bindaddr, port))\n\n h.success()\n\n self.sock = listen_sock\n self.connections_waiting = threading.Event()\n self.connections = []\n def accepter():\n while True:\n h = self.waitfor('Waiting for connections on %s:%s' % (self.lhost, self.lport))\n while True:\n try:\n if self.type == socket.SOCK_STREAM:\n sock, rhost = listen_sock.accept()\n else:\n data, rhost = listen_sock.recvfrom(4096)\n listen_sock.connect(rhost)\n sock = listen_sock\n sock.unrecv(data)\n sock.settimeout(self.timeout)\n break\n except socket.error as e:\n if e.errno == errno.EINTR:\n continue\n h.failure()\n self.exception(\"Socket failure while waiting for connection\")\n sock = None\n return\n\n self.rhost, self.rport = rhost[:2]\n r = remote(self.rhost, self.rport, sock = sock)\n h.success('Got connection from %s on port %d' % (self.rhost, self.rport))\n if callback:\n if not blocking:\n t = context.Thread(target = callback, args = (r,))\n t.daemon = True\n t.start()\n else:\n callback(r)\n else:\n self.connections.append(r)\n if not self.connections_waiting.is_set():\n self.connections_waiting.set()\n\n self._accepter = context.Thread(target = accepter)\n self._accepter.daemon = True\n self._accepter.start()\n\n def next_connection(self):\n if not self.connections_waiting.is_set():\n self.connections_waiting.wait()\n conn = self.connections.pop(0)\n if not self.connections:\n self.connections_waiting.clear()\n return conn\n\n def close(self):\n # since `close` is scheduled to run on exit we must check that we got\n # a connection or the program will hang in the `join` call above\n if self._accepter and self._accepter.is_alive():\n return\n super(server, self).close()\n", "path": "pwnlib/tubes/server.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport errno\nimport socket\nimport threading\n\nfrom pwnlib.context import context\nfrom pwnlib.log import getLogger\nfrom pwnlib.tubes.sock import sock\nfrom pwnlib.tubes.remote import remote\n\nlog = getLogger(__name__)\n\nclass server(sock):\n r\"\"\"Creates an TCP or UDP-server to listen for connections. It supports\n both IPv4 and IPv6.\n\n Arguments:\n port(int): The port to connect to.\n Defaults to a port auto-selected by the operating system.\n bindaddr(str): The address to bind to.\n Defaults to ``0.0.0.0`` / `::`.\n fam: The string \"any\", \"ipv4\" or \"ipv6\" or an integer to pass to :func:`socket.getaddrinfo`.\n typ: The string \"tcp\" or \"udp\" or an integer to pass to :func:`socket.getaddrinfo`.\n callback: A function to be started on incoming connections. It should take a :class:`pwnlib.tubes.remote` as its only argument.\n\n Examples:\n\n >>> s = server(8888)\n >>> client_conn = remote('localhost', s.lport)\n >>> server_conn = s.next_connection()\n >>> client_conn.sendline(b'Hello')\n >>> server_conn.recvline()\n b'Hello\\n'\n >>> def cb(r):\n ... client_input = r.readline()\n ... r.send(client_input[::-1])\n ...\n >>> t = server(8889, callback=cb)\n >>> client_conn = remote('localhost', t.lport)\n >>> client_conn.sendline(b'callback')\n >>> client_conn.recv()\n b'\\nkcabllac'\n \"\"\"\n\n #: Local port\n lport = 0\n\n #: Local host\n lhost = None\n\n #: Socket type (e.g. socket.SOCK_STREAM)\n type = None\n\n #: Socket family\n family = None\n\n #: Socket protocol\n protocol = None\n\n #: Canonical name of the listening interface\n canonname = None\n\n #: Sockaddr structure that is being listened on\n sockaddr = None\n\n _accepter = None\n\n def __init__(self, port=0, bindaddr = \"0.0.0.0\", fam = \"any\", typ = \"tcp\",\n callback = None, blocking = False, *args, **kwargs):\n super(server, self).__init__(*args, **kwargs)\n\n port = int(port)\n fam = {socket.AF_INET: 'ipv4',\n socket.AF_INET6: 'ipv6'}.get(fam, fam)\n\n fam = self._get_family(fam)\n typ = self._get_type(typ)\n\n if fam == socket.AF_INET6 and bindaddr == '0.0.0.0':\n bindaddr = '::'\n\n h = self.waitfor('Trying to bind to %s on port %d' % (bindaddr, port))\n\n for res in socket.getaddrinfo(bindaddr, port, fam, typ, 0, socket.AI_PASSIVE):\n self.family, self.type, self.proto, self.canonname, self.sockaddr = res\n\n if self.type not in [socket.SOCK_STREAM, socket.SOCK_DGRAM]:\n continue\n\n h.status(\"Trying %s\" % self.sockaddr[0])\n listen_sock = socket.socket(self.family, self.type, self.proto)\n listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n listen_sock.bind(self.sockaddr)\n self.lhost, self.lport = listen_sock.getsockname()[:2]\n if self.type == socket.SOCK_STREAM:\n listen_sock.listen(1)\n break\n else:\n h.failure()\n self.error(\"Could not bind to %s on port %d\" % (bindaddr, port))\n\n h.success()\n\n self.sock = listen_sock\n self.connections_waiting = threading.Event()\n self.connections = []\n def accepter():\n while True:\n h = self.waitfor('Waiting for connections on %s:%s' % (self.lhost, self.lport))\n while True:\n try:\n if self.type == socket.SOCK_STREAM:\n sock, rhost = listen_sock.accept()\n else:\n data, rhost = listen_sock.recvfrom(4096)\n listen_sock.connect(rhost)\n sock = listen_sock\n self.unrecv(data)\n sock.settimeout(self.timeout)\n break\n except socket.error as e:\n if e.errno == errno.EINTR:\n continue\n h.failure()\n self.exception(\"Socket failure while waiting for connection\")\n sock = None\n return\n\n self.rhost, self.rport = rhost[:2]\n r = remote(self.rhost, self.rport, sock = sock)\n h.success('Got connection from %s on port %d' % (self.rhost, self.rport))\n if callback:\n if not blocking:\n t = context.Thread(target = callback, args = (r,))\n t.daemon = True\n t.start()\n else:\n callback(r)\n else:\n self.connections.append(r)\n if not self.connections_waiting.is_set():\n self.connections_waiting.set()\n\n self._accepter = context.Thread(target = accepter)\n self._accepter.daemon = True\n self._accepter.start()\n\n def next_connection(self):\n if not self.connections_waiting.is_set():\n self.connections_waiting.wait()\n conn = self.connections.pop(0)\n if not self.connections:\n self.connections_waiting.clear()\n return conn\n\n def close(self):\n # since `close` is scheduled to run on exit we must check that we got\n # a connection or the program will hang in the `join` call above\n if self._accepter and self._accepter.is_alive():\n return\n super(server, self).close()\n", "path": "pwnlib/tubes/server.py"}]}
| 2,042 | 114 |
gh_patches_debug_931
|
rasdani/github-patches
|
git_diff
|
AUTOMATIC1111__stable-diffusion-webui-7353
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: thumbnail cards are not loading the preview image
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
just getting black image, and if I try to update an image, it goes black too.
It was working before checkpoints were added, I don't know if that's related.
### Steps to reproduce the problem
1. Go to ....
2. Press ....
3. ...
### What should have happened?
should see the preview images
### Commit where the problem happens
0a8515085ef258d4b76fdc000f7ed9d55751d6b8
### What platforms do you use to access the UI ?
_No response_
### What browsers do you use to access the UI ?
_No response_
### Command Line Arguments
```Shell
--api --cors-allow-origins http://localhost:5173 --administrator --no-half-vae --no-half --disable-safe-unpickle --force-cpu --xformers
```
### List of extensions
all of them
### Console logs
```Shell
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
return self.receive_nowait()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
message = await recv_stream.receive()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "D:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
response = await self.dispatch_func(request, call_next)
File "D:\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 391, in app_encryption_middleware
res: StreamingResponse = await call_next(req)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
response = await self.dispatch_func(request, call_next)
File "D:\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
res: Response = await call_next(req)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
await responder(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 28, in fetch_file
if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 28, in <listcomp>
if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):
File "D:\Python\Python310\lib\pathlib.py", line 960, in __new__
self = cls._from_parts(args)
File "D:\Python\Python310\lib\pathlib.py", line 594, in _from_parts
drv, root, parts = self._parse_args(args)
File "D:\Python\Python310\lib\pathlib.py", line 578, in _parse_args
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modules/ui_extra_networks_checkpoints.py`
Content:
```
1 import html
2 import json
3 import os
4 import urllib.parse
5
6 from modules import shared, ui_extra_networks, sd_models
7
8
9 class ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):
10 def __init__(self):
11 super().__init__('Checkpoints')
12
13 def refresh(self):
14 shared.refresh_checkpoints()
15
16 def list_items(self):
17 for name, checkpoint in sd_models.checkpoints_list.items():
18 path, ext = os.path.splitext(checkpoint.filename)
19 previews = [path + ".png", path + ".preview.png"]
20
21 preview = None
22 for file in previews:
23 if os.path.isfile(file):
24 preview = self.link_preview(file)
25 break
26
27 yield {
28 "name": checkpoint.name_for_extra,
29 "filename": path,
30 "preview": preview,
31 "search_term": self.search_terms_from_path(checkpoint.filename),
32 "onclick": '"' + html.escape(f"""return selectCheckpoint({json.dumps(name)})""") + '"',
33 "local_preview": path + ".png",
34 }
35
36 def allowed_directories_for_previews(self):
37 return [shared.cmd_opts.ckpt_dir, sd_models.model_path]
38
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/modules/ui_extra_networks_checkpoints.py b/modules/ui_extra_networks_checkpoints.py
--- a/modules/ui_extra_networks_checkpoints.py
+++ b/modules/ui_extra_networks_checkpoints.py
@@ -34,5 +34,5 @@
}
def allowed_directories_for_previews(self):
- return [shared.cmd_opts.ckpt_dir, sd_models.model_path]
+ return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]
|
{"golden_diff": "diff --git a/modules/ui_extra_networks_checkpoints.py b/modules/ui_extra_networks_checkpoints.py\n--- a/modules/ui_extra_networks_checkpoints.py\n+++ b/modules/ui_extra_networks_checkpoints.py\n@@ -34,5 +34,5 @@\n }\r\n \r\n def allowed_directories_for_previews(self):\r\n- return [shared.cmd_opts.ckpt_dir, sd_models.model_path]\r\n+ return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]\n", "issue": "[Bug]: thumbnail cards are not loading the preview image\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\njust getting black image, and if I try to update an image, it goes black too.\r\n\r\nIt was working before checkpoints were added, I don't know if that's related.\n\n### Steps to reproduce the problem\n\n1. Go to .... \r\n2. Press ....\r\n3. ...\r\n\n\n### What should have happened?\n\nshould see the preview images\n\n### Commit where the problem happens\n\n0a8515085ef258d4b76fdc000f7ed9d55751d6b8\n\n### What platforms do you use to access the UI ?\n\n_No response_\n\n### What browsers do you use to access the UI ?\n\n_No response_\n\n### Command Line Arguments\n\n```Shell\n--api --cors-allow-origins http://localhost:5173 --administrator --no-half-vae --no-half --disable-safe-unpickle --force-cpu --xformers\n```\n\n\n### List of extensions\n\nall of them\n\n### Console logs\n\n```Shell\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\streams\\memory.py\", line 94, in receive\r\n return self.receive_nowait()\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\streams\\memory.py\", line 89, in receive_nowait\r\n raise WouldBlock\r\nanyio.WouldBlock\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 77, in call_next\r\n message = await recv_stream.receive()\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\streams\\memory.py\", line 114, in receive\r\n raise EndOfStream\r\nanyio.EndOfStream\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 407, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\uvicorn\\middleware\\proxy_headers.py\", line 78, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\applications.py\", line 270, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\applications.py\", line 124, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 184, in __call__\r\n raise exc\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 106, in __call__\r\n response = await self.dispatch_func(request, call_next)\r\n File \"D:\\stable-diffusion-webui\\extensions\\auto-sd-paint-ext\\backend\\app.py\", line 391, in app_encryption_middleware\r\n res: StreamingResponse = await call_next(req)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 80, in call_next\r\n raise app_exc\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 69, in coro\r\n await self.app(scope, receive_or_disconnect, send_no_error)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 106, in __call__\r\n response = await self.dispatch_func(request, call_next)\r\n File \"D:\\stable-diffusion-webui\\modules\\api\\api.py\", line 96, in log_and_time\r\n res: Response = await call_next(req)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 80, in call_next\r\n raise app_exc\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 69, in coro\r\n await self.app(scope, receive_or_disconnect, send_no_error)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\gzip.py\", line 24, in __call__\r\n await responder(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\gzip.py\", line 43, in __call__\r\n await self.app(scope, receive, self.send_with_gzip)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\cors.py\", line 84, in __call__\r\n await self.app(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 21, in __call__\r\n raise e\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 18, in __call__\r\n await self.app(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\routing.py\", line 706, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\routing.py\", line 276, in handle\r\n await self.app(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\routing.py\", line 66, in app\r\n response = await func(request)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\routing.py\", line 235, in app\r\n raw_response = await run_endpoint_function(\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\routing.py\", line 163, in run_endpoint_function\r\n return await run_in_threadpool(dependant.call, **values)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\concurrency.py\", line 41, in run_in_threadpool\r\n return await anyio.to_thread.run_sync(func, *args)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\to_thread.py\", line 31, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 937, in run_sync_in_worker_thread\r\n return await future\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 867, in run\r\n result = context.run(func, *args)\r\n File \"D:\\stable-diffusion-webui\\modules\\ui_extra_networks.py\", line 28, in fetch_file\r\n if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):\r\n File \"D:\\stable-diffusion-webui\\modules\\ui_extra_networks.py\", line 28, in <listcomp>\r\n if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):\r\n File \"D:\\Python\\Python310\\lib\\pathlib.py\", line 960, in __new__\r\n self = cls._from_parts(args)\r\n File \"D:\\Python\\Python310\\lib\\pathlib.py\", line 594, in _from_parts\r\n drv, root, parts = self._parse_args(args)\r\n File \"D:\\Python\\Python310\\lib\\pathlib.py\", line 578, in _parse_args\r\n a = os.fspath(a)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\n```\n\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "import html\r\nimport json\r\nimport os\r\nimport urllib.parse\r\n\r\nfrom modules import shared, ui_extra_networks, sd_models\r\n\r\n\r\nclass ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):\r\n def __init__(self):\r\n super().__init__('Checkpoints')\r\n\r\n def refresh(self):\r\n shared.refresh_checkpoints()\r\n\r\n def list_items(self):\r\n for name, checkpoint in sd_models.checkpoints_list.items():\r\n path, ext = os.path.splitext(checkpoint.filename)\r\n previews = [path + \".png\", path + \".preview.png\"]\r\n\r\n preview = None\r\n for file in previews:\r\n if os.path.isfile(file):\r\n preview = self.link_preview(file)\r\n break\r\n\r\n yield {\r\n \"name\": checkpoint.name_for_extra,\r\n \"filename\": path,\r\n \"preview\": preview,\r\n \"search_term\": self.search_terms_from_path(checkpoint.filename),\r\n \"onclick\": '\"' + html.escape(f\"\"\"return selectCheckpoint({json.dumps(name)})\"\"\") + '\"',\r\n \"local_preview\": path + \".png\",\r\n }\r\n\r\n def allowed_directories_for_previews(self):\r\n return [shared.cmd_opts.ckpt_dir, sd_models.model_path]\r\n\r\n", "path": "modules/ui_extra_networks_checkpoints.py"}], "after_files": [{"content": "import html\r\nimport json\r\nimport os\r\nimport urllib.parse\r\n\r\nfrom modules import shared, ui_extra_networks, sd_models\r\n\r\n\r\nclass ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):\r\n def __init__(self):\r\n super().__init__('Checkpoints')\r\n\r\n def refresh(self):\r\n shared.refresh_checkpoints()\r\n\r\n def list_items(self):\r\n for name, checkpoint1 in sd_models.checkpoints_list.items():\r\n checkpoint: sd_models.CheckpointInfo = checkpoint1\r\n path, ext = os.path.splitext(checkpoint.filename)\r\n previews = [path + \".png\", path + \".preview.png\"]\r\n\r\n preview = None\r\n for file in previews:\r\n if os.path.isfile(file):\r\n preview = self.link_preview(file)\r\n break\r\n\r\n yield {\r\n \"name\": checkpoint.model_name,\r\n \"filename\": path,\r\n \"preview\": preview,\r\n \"onclick\": '\"' + html.escape(f\"\"\"return selectCheckpoint({json.dumps(name)})\"\"\") + '\"',\r\n \"local_preview\": path + \".png\",\r\n }\r\n\r\n def allowed_directories_for_previews(self):\r\n return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]\r\n\r\n", "path": "modules/ui_extra_networks_checkpoints.py"}]}
| 2,735 | 112 |
gh_patches_debug_24699
|
rasdani/github-patches
|
git_diff
|
HypothesisWorks__hypothesis-285
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
django_polymorphic breaks model generation
`django_polymorphic` adds mandatory fields (named `*_ptr`) to models, but gives them values when the model is created. Hypothesis sees these as normal non-nullable fields, which trigger the relevant health check. However, explicitly providing a value for one of these fields causes an exception to be thrown in the model's constructor.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/hypothesis/extra/django/models.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
4 #
5 # Most of this work is copyright (C) 2013-2015 David R. MacIver
6 # ([email protected]), but it contains contributions by others. See
7 # https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
8 # full list of people who may hold copyright, and consult the git log if you
9 # need to determine who owns an individual contribution.
10 #
11 # This Source Code Form is subject to the terms of the Mozilla Public License,
12 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
13 # obtain one at http://mozilla.org/MPL/2.0/.
14 #
15 # END HEADER
16
17 from __future__ import division, print_function, absolute_import
18
19 import django.db.models as dm
20 from django.db import IntegrityError
21
22 import hypothesis.strategies as st
23 import hypothesis.extra.fakefactory as ff
24 from hypothesis.errors import InvalidArgument
25 from hypothesis.extra.datetime import datetimes
26 from hypothesis.searchstrategy.strategies import SearchStrategy
27
28
29 class ModelNotSupported(Exception):
30 pass
31
32
33 def referenced_models(model, seen=None):
34 if seen is None:
35 seen = set()
36 for f in model._meta.concrete_fields:
37 if isinstance(f, dm.ForeignKey):
38 t = f.rel.to
39 if t not in seen:
40 seen.add(t)
41 referenced_models(t, seen)
42 return seen
43
44
45 __default_field_mappings = None
46
47
48 def field_mappings():
49 global __default_field_mappings
50
51 if __default_field_mappings is None:
52 __default_field_mappings = {
53 dm.SmallIntegerField: st.integers(-32768, 32767),
54 dm.IntegerField: st.integers(-2147483648, 2147483647),
55 dm.BigIntegerField:
56 st.integers(-9223372036854775808, 9223372036854775807),
57 dm.PositiveIntegerField: st.integers(0, 2147483647),
58 dm.PositiveSmallIntegerField: st.integers(0, 32767),
59 dm.BinaryField: st.binary(),
60 dm.BooleanField: st.booleans(),
61 dm.CharField: st.text(),
62 dm.TextField: st.text(),
63 dm.DateTimeField: datetimes(allow_naive=False),
64 dm.EmailField: ff.fake_factory(u'email'),
65 dm.FloatField: st.floats(),
66 dm.NullBooleanField: st.one_of(st.none(), st.booleans()),
67 }
68 return __default_field_mappings
69
70
71 def add_default_field_mapping(field_type, strategy):
72 field_mappings()[field_type] = strategy
73
74
75 def models(model, **extra):
76 result = {}
77 mappings = field_mappings()
78 mandatory = set()
79 for f in model._meta.concrete_fields:
80 if isinstance(f, dm.AutoField):
81 continue
82 try:
83 mapped = mappings[type(f)]
84 except KeyError:
85 if not f.null:
86 mandatory.add(f.name)
87 continue
88 if f.null:
89 mapped = st.one_of(st.none(), mapped)
90 result[f.name] = mapped
91 missed = {x for x in mandatory if x not in extra}
92 if missed:
93 raise InvalidArgument((
94 u'Missing arguments for mandatory field%s %s for model %s' % (
95 u's' if len(missed) > 1 else u'',
96 u', '.join(missed),
97 model.__name__,
98 )))
99 for k, v in extra.items():
100 if isinstance(v, SearchStrategy):
101 result[k] = v
102 else:
103 result[k] = st.just(v)
104 result.update(extra)
105 return ModelStrategy(model, result)
106
107
108 class ModelStrategy(SearchStrategy):
109
110 def __init__(self, model, mappings):
111 super(ModelStrategy, self).__init__()
112 self.model = model
113 self.arg_strategy = st.fixed_dictionaries(mappings)
114
115 def __repr__(self):
116 return u'ModelStrategy(%s)' % (self.model.__name__,)
117
118 def do_draw(self, data):
119 try:
120 result, _ = self.model.objects.get_or_create(
121 **self.arg_strategy.do_draw(data)
122 )
123 return result
124 except IntegrityError:
125 data.mark_invalid()
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/hypothesis/extra/django/models.py b/src/hypothesis/extra/django/models.py
--- a/src/hypothesis/extra/django/models.py
+++ b/src/hypothesis/extra/django/models.py
@@ -23,6 +23,7 @@
import hypothesis.extra.fakefactory as ff
from hypothesis.errors import InvalidArgument
from hypothesis.extra.datetime import datetimes
+from hypothesis.utils.conventions import UniqueIdentifier
from hypothesis.searchstrategy.strategies import SearchStrategy
@@ -72,6 +73,9 @@
field_mappings()[field_type] = strategy
+default_value = UniqueIdentifier(u'default_value')
+
+
def models(model, **extra):
result = {}
mappings = field_mappings()
@@ -96,12 +100,9 @@
u', '.join(missed),
model.__name__,
)))
- for k, v in extra.items():
- if isinstance(v, SearchStrategy):
- result[k] = v
- else:
- result[k] = st.just(v)
result.update(extra)
+ # Remove default_values so we don't try to generate anything for those.
+ result = {k: v for k, v in result.items() if v is not default_value}
return ModelStrategy(model, result)
|
{"golden_diff": "diff --git a/src/hypothesis/extra/django/models.py b/src/hypothesis/extra/django/models.py\n--- a/src/hypothesis/extra/django/models.py\n+++ b/src/hypothesis/extra/django/models.py\n@@ -23,6 +23,7 @@\n import hypothesis.extra.fakefactory as ff\n from hypothesis.errors import InvalidArgument\n from hypothesis.extra.datetime import datetimes\n+from hypothesis.utils.conventions import UniqueIdentifier\n from hypothesis.searchstrategy.strategies import SearchStrategy\n \n \n@@ -72,6 +73,9 @@\n field_mappings()[field_type] = strategy\n \n \n+default_value = UniqueIdentifier(u'default_value')\n+\n+\n def models(model, **extra):\n result = {}\n mappings = field_mappings()\n@@ -96,12 +100,9 @@\n u', '.join(missed),\n model.__name__,\n )))\n- for k, v in extra.items():\n- if isinstance(v, SearchStrategy):\n- result[k] = v\n- else:\n- result[k] = st.just(v)\n result.update(extra)\n+ # Remove default_values so we don't try to generate anything for those.\n+ result = {k: v for k, v in result.items() if v is not default_value}\n return ModelStrategy(model, result)\n", "issue": "django_polymorphic breaks model generation\n`django_polymorphic` adds mandatory fields (named `*_ptr`) to models, but gives them values when the model is created. Hypothesis sees these as normal non-nullable fields, which trigger the relevant health check. However, explicitly providing a value for one of these fields causes an exception to be thrown in the model's constructor.\n\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)\n#\n# Most of this work is copyright (C) 2013-2015 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a\n# full list of people who may hold copyright, and consult the git log if you\n# need to determine who owns an individual contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport django.db.models as dm\nfrom django.db import IntegrityError\n\nimport hypothesis.strategies as st\nimport hypothesis.extra.fakefactory as ff\nfrom hypothesis.errors import InvalidArgument\nfrom hypothesis.extra.datetime import datetimes\nfrom hypothesis.searchstrategy.strategies import SearchStrategy\n\n\nclass ModelNotSupported(Exception):\n pass\n\n\ndef referenced_models(model, seen=None):\n if seen is None:\n seen = set()\n for f in model._meta.concrete_fields:\n if isinstance(f, dm.ForeignKey):\n t = f.rel.to\n if t not in seen:\n seen.add(t)\n referenced_models(t, seen)\n return seen\n\n\n__default_field_mappings = None\n\n\ndef field_mappings():\n global __default_field_mappings\n\n if __default_field_mappings is None:\n __default_field_mappings = {\n dm.SmallIntegerField: st.integers(-32768, 32767),\n dm.IntegerField: st.integers(-2147483648, 2147483647),\n dm.BigIntegerField:\n st.integers(-9223372036854775808, 9223372036854775807),\n dm.PositiveIntegerField: st.integers(0, 2147483647),\n dm.PositiveSmallIntegerField: st.integers(0, 32767),\n dm.BinaryField: st.binary(),\n dm.BooleanField: st.booleans(),\n dm.CharField: st.text(),\n dm.TextField: st.text(),\n dm.DateTimeField: datetimes(allow_naive=False),\n dm.EmailField: ff.fake_factory(u'email'),\n dm.FloatField: st.floats(),\n dm.NullBooleanField: st.one_of(st.none(), st.booleans()),\n }\n return __default_field_mappings\n\n\ndef add_default_field_mapping(field_type, strategy):\n field_mappings()[field_type] = strategy\n\n\ndef models(model, **extra):\n result = {}\n mappings = field_mappings()\n mandatory = set()\n for f in model._meta.concrete_fields:\n if isinstance(f, dm.AutoField):\n continue\n try:\n mapped = mappings[type(f)]\n except KeyError:\n if not f.null:\n mandatory.add(f.name)\n continue\n if f.null:\n mapped = st.one_of(st.none(), mapped)\n result[f.name] = mapped\n missed = {x for x in mandatory if x not in extra}\n if missed:\n raise InvalidArgument((\n u'Missing arguments for mandatory field%s %s for model %s' % (\n u's' if len(missed) > 1 else u'',\n u', '.join(missed),\n model.__name__,\n )))\n for k, v in extra.items():\n if isinstance(v, SearchStrategy):\n result[k] = v\n else:\n result[k] = st.just(v)\n result.update(extra)\n return ModelStrategy(model, result)\n\n\nclass ModelStrategy(SearchStrategy):\n\n def __init__(self, model, mappings):\n super(ModelStrategy, self).__init__()\n self.model = model\n self.arg_strategy = st.fixed_dictionaries(mappings)\n\n def __repr__(self):\n return u'ModelStrategy(%s)' % (self.model.__name__,)\n\n def do_draw(self, data):\n try:\n result, _ = self.model.objects.get_or_create(\n **self.arg_strategy.do_draw(data)\n )\n return result\n except IntegrityError:\n data.mark_invalid()\n", "path": "src/hypothesis/extra/django/models.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)\n#\n# Most of this work is copyright (C) 2013-2015 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a\n# full list of people who may hold copyright, and consult the git log if you\n# need to determine who owns an individual contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport django.db.models as dm\nfrom django.db import IntegrityError\n\nimport hypothesis.strategies as st\nimport hypothesis.extra.fakefactory as ff\nfrom hypothesis.errors import InvalidArgument\nfrom hypothesis.extra.datetime import datetimes\nfrom hypothesis.utils.conventions import UniqueIdentifier\nfrom hypothesis.searchstrategy.strategies import SearchStrategy\n\n\nclass ModelNotSupported(Exception):\n pass\n\n\ndef referenced_models(model, seen=None):\n if seen is None:\n seen = set()\n for f in model._meta.concrete_fields:\n if isinstance(f, dm.ForeignKey):\n t = f.rel.to\n if t not in seen:\n seen.add(t)\n referenced_models(t, seen)\n return seen\n\n\n__default_field_mappings = None\n\n\ndef field_mappings():\n global __default_field_mappings\n\n if __default_field_mappings is None:\n __default_field_mappings = {\n dm.SmallIntegerField: st.integers(-32768, 32767),\n dm.IntegerField: st.integers(-2147483648, 2147483647),\n dm.BigIntegerField:\n st.integers(-9223372036854775808, 9223372036854775807),\n dm.PositiveIntegerField: st.integers(0, 2147483647),\n dm.PositiveSmallIntegerField: st.integers(0, 32767),\n dm.BinaryField: st.binary(),\n dm.BooleanField: st.booleans(),\n dm.CharField: st.text(),\n dm.TextField: st.text(),\n dm.DateTimeField: datetimes(allow_naive=False),\n dm.EmailField: ff.fake_factory(u'email'),\n dm.FloatField: st.floats(),\n dm.NullBooleanField: st.one_of(st.none(), st.booleans()),\n }\n return __default_field_mappings\n\n\ndef add_default_field_mapping(field_type, strategy):\n field_mappings()[field_type] = strategy\n\n\ndefault_value = UniqueIdentifier(u'default_value')\n\n\ndef models(model, **extra):\n result = {}\n mappings = field_mappings()\n mandatory = set()\n for f in model._meta.concrete_fields:\n if isinstance(f, dm.AutoField):\n continue\n try:\n mapped = mappings[type(f)]\n except KeyError:\n if not f.null:\n mandatory.add(f.name)\n continue\n if f.null:\n mapped = st.one_of(st.none(), mapped)\n result[f.name] = mapped\n missed = {x for x in mandatory if x not in extra}\n if missed:\n raise InvalidArgument((\n u'Missing arguments for mandatory field%s %s for model %s' % (\n u's' if len(missed) > 1 else u'',\n u', '.join(missed),\n model.__name__,\n )))\n result.update(extra)\n # Remove default_values so we don't try to generate anything for those.\n result = {k: v for k, v in result.items() if v is not default_value}\n return ModelStrategy(model, result)\n\n\nclass ModelStrategy(SearchStrategy):\n\n def __init__(self, model, mappings):\n super(ModelStrategy, self).__init__()\n self.model = model\n self.arg_strategy = st.fixed_dictionaries(mappings)\n\n def __repr__(self):\n return u'ModelStrategy(%s)' % (self.model.__name__,)\n\n def do_draw(self, data):\n try:\n result, _ = self.model.objects.get_or_create(\n **self.arg_strategy.do_draw(data)\n )\n return result\n except IntegrityError:\n data.mark_invalid()\n", "path": "src/hypothesis/extra/django/models.py"}]}
| 1,591 | 286 |
gh_patches_debug_22783
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-3992
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Increase WebSocket message timestamp precision
#### Is your feature request related to a problem? Please describe.
Timestamps for WebSocket messages are rounded to whole seconds. For at least some protocols or analyses, a higher precision may be useful or even necessary for productive use; one example is measuring the performance impact of mitmproxy.
#### Describe the solution you'd like
Avoid converting the timestamp to an integer and use a float instead:
https://github.com/mitmproxy/mitmproxy/blob/7fdcbb09e6034ab1f76724965cfdf45f3d775129/mitmproxy/websocket.py#L28
Basic support for higher-precision timestamps seems to be as simple as changing the type of `WebSocketMessage.timestamp` from `int` to `float` and dropping the `int()` call around `time.time()`. I'm not sure if more is needed e.g. to ensure backward compatibility, but I was able to read a previous dump just fine with that modification (which makes sense since integers in the relevant range are strictly a subset of double-precision floats).
#### Describe alternatives you've considered
If keeping the `timestamp` an `int` is unavoidable, the Linux approach of storing the microseconds in a second integer (between 0 and 999999, inclusive) could be used. I don't think this is a good idea though.
#### Additional context
None
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/websocket.py`
Content:
```
1 import time
2 import queue
3 from typing import List, Optional
4
5 from wsproto.frame_protocol import CloseReason
6 from wsproto.frame_protocol import Opcode
7
8 from mitmproxy import flow
9 from mitmproxy.net import websockets
10 from mitmproxy.coretypes import serializable
11 from mitmproxy.utils import strutils, human
12
13
14 class WebSocketMessage(serializable.Serializable):
15 """
16 A WebSocket message sent from one endpoint to the other.
17 """
18
19 def __init__(
20 self, type: int, from_client: bool, content: bytes, timestamp: Optional[int]=None, killed: bool=False
21 ) -> None:
22 self.type = Opcode(type) # type: ignore
23 """indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode)."""
24 self.from_client = from_client
25 """True if this messages was sent by the client."""
26 self.content = content
27 """A byte-string representing the content of this message."""
28 self.timestamp: int = timestamp or int(time.time())
29 """Timestamp of when this message was received or created."""
30 self.killed = killed
31 """True if this messages was killed and should not be sent to the other endpoint."""
32
33 @classmethod
34 def from_state(cls, state):
35 return cls(*state)
36
37 def get_state(self):
38 return int(self.type), self.from_client, self.content, self.timestamp, self.killed
39
40 def set_state(self, state):
41 self.type, self.from_client, self.content, self.timestamp, self.killed = state
42 self.type = Opcode(self.type) # replace enum with bare int
43
44 def __repr__(self):
45 if self.type == Opcode.TEXT:
46 return "text message: {}".format(repr(self.content))
47 else:
48 return "binary message: {}".format(strutils.bytes_to_escaped_str(self.content))
49
50 def kill(self):
51 """
52 Kill this message.
53
54 It will not be sent to the other endpoint. This has no effect in streaming mode.
55 """
56 self.killed = True
57
58
59 class WebSocketFlow(flow.Flow):
60 """
61 A WebSocketFlow is a simplified representation of a Websocket connection.
62 """
63
64 def __init__(self, client_conn, server_conn, handshake_flow, live=None):
65 super().__init__("websocket", client_conn, server_conn, live)
66
67 self.messages: List[WebSocketMessage] = []
68 """A list containing all WebSocketMessage's."""
69 self.close_sender = 'client'
70 """'client' if the client initiated connection closing."""
71 self.close_code = CloseReason.NORMAL_CLOSURE
72 """WebSocket close code."""
73 self.close_message = '(message missing)'
74 """WebSocket close message."""
75 self.close_reason = 'unknown status code'
76 """WebSocket close reason."""
77 self.stream = False
78 """True of this connection is streaming directly to the other endpoint."""
79 self.handshake_flow = handshake_flow
80 """The HTTP flow containing the initial WebSocket handshake."""
81 self.ended = False
82 """True when the WebSocket connection has been closed."""
83
84 self._inject_messages_client = queue.Queue(maxsize=1)
85 self._inject_messages_server = queue.Queue(maxsize=1)
86
87 if handshake_flow:
88 self.client_key = websockets.get_client_key(handshake_flow.request.headers)
89 self.client_protocol = websockets.get_protocol(handshake_flow.request.headers)
90 self.client_extensions = websockets.get_extensions(handshake_flow.request.headers)
91 self.server_accept = websockets.get_server_accept(handshake_flow.response.headers)
92 self.server_protocol = websockets.get_protocol(handshake_flow.response.headers)
93 self.server_extensions = websockets.get_extensions(handshake_flow.response.headers)
94 else:
95 self.client_key = ''
96 self.client_protocol = ''
97 self.client_extensions = ''
98 self.server_accept = ''
99 self.server_protocol = ''
100 self.server_extensions = ''
101
102 _stateobject_attributes = flow.Flow._stateobject_attributes.copy()
103 # mypy doesn't support update with kwargs
104 _stateobject_attributes.update(dict(
105 messages=List[WebSocketMessage],
106 close_sender=str,
107 close_code=int,
108 close_message=str,
109 close_reason=str,
110 client_key=str,
111 client_protocol=str,
112 client_extensions=str,
113 server_accept=str,
114 server_protocol=str,
115 server_extensions=str,
116 # Do not include handshake_flow, to prevent recursive serialization!
117 # Since mitmproxy-console currently only displays HTTPFlows,
118 # dumping the handshake_flow will include the WebSocketFlow too.
119 ))
120
121 def get_state(self):
122 d = super().get_state()
123 d['close_code'] = int(d['close_code']) # replace enum with bare int
124 return d
125
126 @classmethod
127 def from_state(cls, state):
128 f = cls(None, None, None)
129 f.set_state(state)
130 return f
131
132 def __repr__(self):
133 return "<WebSocketFlow ({} messages)>".format(len(self.messages))
134
135 def message_info(self, message: WebSocketMessage) -> str:
136 return "{client} {direction} WebSocket {type} message {direction} {server}{endpoint}".format(
137 type=message.type,
138 client=human.format_address(self.client_conn.address),
139 server=human.format_address(self.server_conn.address),
140 direction="->" if message.from_client else "<-",
141 endpoint=self.handshake_flow.request.path,
142 )
143
144 def inject_message(self, endpoint, payload):
145 """
146 Inject and send a full WebSocket message to the remote endpoint.
147 This might corrupt your WebSocket connection! Be careful!
148
149 The endpoint needs to be either flow.client_conn or flow.server_conn.
150
151 If ``payload`` is of type ``bytes`` then the message is flagged as
152 being binary If it is of type ``str`` encoded as UTF-8 and sent as
153 text.
154
155 :param payload: The message body to send.
156 :type payload: ``bytes`` or ``str``
157 """
158
159 if endpoint == self.client_conn:
160 self._inject_messages_client.put(payload)
161 elif endpoint == self.server_conn:
162 self._inject_messages_server.put(payload)
163 else:
164 raise ValueError('Invalid endpoint')
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/websocket.py b/mitmproxy/websocket.py
--- a/mitmproxy/websocket.py
+++ b/mitmproxy/websocket.py
@@ -17,7 +17,7 @@
"""
def __init__(
- self, type: int, from_client: bool, content: bytes, timestamp: Optional[int]=None, killed: bool=False
+ self, type: int, from_client: bool, content: bytes, timestamp: Optional[float]=None, killed: bool=False
) -> None:
self.type = Opcode(type) # type: ignore
"""indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode)."""
@@ -25,7 +25,7 @@
"""True if this messages was sent by the client."""
self.content = content
"""A byte-string representing the content of this message."""
- self.timestamp: int = timestamp or int(time.time())
+ self.timestamp: float = timestamp or time.time()
"""Timestamp of when this message was received or created."""
self.killed = killed
"""True if this messages was killed and should not be sent to the other endpoint."""
|
{"golden_diff": "diff --git a/mitmproxy/websocket.py b/mitmproxy/websocket.py\n--- a/mitmproxy/websocket.py\n+++ b/mitmproxy/websocket.py\n@@ -17,7 +17,7 @@\n \"\"\"\n \n def __init__(\n- self, type: int, from_client: bool, content: bytes, timestamp: Optional[int]=None, killed: bool=False\n+ self, type: int, from_client: bool, content: bytes, timestamp: Optional[float]=None, killed: bool=False\n ) -> None:\n self.type = Opcode(type) # type: ignore\n \"\"\"indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode).\"\"\"\n@@ -25,7 +25,7 @@\n \"\"\"True if this messages was sent by the client.\"\"\"\n self.content = content\n \"\"\"A byte-string representing the content of this message.\"\"\"\n- self.timestamp: int = timestamp or int(time.time())\n+ self.timestamp: float = timestamp or time.time()\n \"\"\"Timestamp of when this message was received or created.\"\"\"\n self.killed = killed\n \"\"\"True if this messages was killed and should not be sent to the other endpoint.\"\"\"\n", "issue": "Increase WebSocket message timestamp precision\n#### Is your feature request related to a problem? Please describe.\r\nTimestamps for WebSocket messages are rounded to whole seconds. For at least some protocols or analyses, a higher precision may be useful or even necessary for productive use; one example is measuring the performance impact of mitmproxy.\r\n\r\n#### Describe the solution you'd like\r\nAvoid converting the timestamp to an integer and use a float instead:\r\n\r\nhttps://github.com/mitmproxy/mitmproxy/blob/7fdcbb09e6034ab1f76724965cfdf45f3d775129/mitmproxy/websocket.py#L28\r\n\r\nBasic support for higher-precision timestamps seems to be as simple as changing the type of `WebSocketMessage.timestamp` from `int` to `float` and dropping the `int()` call around `time.time()`. I'm not sure if more is needed e.g. to ensure backward compatibility, but I was able to read a previous dump just fine with that modification (which makes sense since integers in the relevant range are strictly a subset of double-precision floats).\r\n\r\n#### Describe alternatives you've considered\r\nIf keeping the `timestamp` an `int` is unavoidable, the Linux approach of storing the microseconds in a second integer (between 0 and 999999, inclusive) could be used. I don't think this is a good idea though.\r\n\r\n#### Additional context\r\nNone\n", "before_files": [{"content": "import time\nimport queue\nfrom typing import List, Optional\n\nfrom wsproto.frame_protocol import CloseReason\nfrom wsproto.frame_protocol import Opcode\n\nfrom mitmproxy import flow\nfrom mitmproxy.net import websockets\nfrom mitmproxy.coretypes import serializable\nfrom mitmproxy.utils import strutils, human\n\n\nclass WebSocketMessage(serializable.Serializable):\n \"\"\"\n A WebSocket message sent from one endpoint to the other.\n \"\"\"\n\n def __init__(\n self, type: int, from_client: bool, content: bytes, timestamp: Optional[int]=None, killed: bool=False\n ) -> None:\n self.type = Opcode(type) # type: ignore\n \"\"\"indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode).\"\"\"\n self.from_client = from_client\n \"\"\"True if this messages was sent by the client.\"\"\"\n self.content = content\n \"\"\"A byte-string representing the content of this message.\"\"\"\n self.timestamp: int = timestamp or int(time.time())\n \"\"\"Timestamp of when this message was received or created.\"\"\"\n self.killed = killed\n \"\"\"True if this messages was killed and should not be sent to the other endpoint.\"\"\"\n\n @classmethod\n def from_state(cls, state):\n return cls(*state)\n\n def get_state(self):\n return int(self.type), self.from_client, self.content, self.timestamp, self.killed\n\n def set_state(self, state):\n self.type, self.from_client, self.content, self.timestamp, self.killed = state\n self.type = Opcode(self.type) # replace enum with bare int\n\n def __repr__(self):\n if self.type == Opcode.TEXT:\n return \"text message: {}\".format(repr(self.content))\n else:\n return \"binary message: {}\".format(strutils.bytes_to_escaped_str(self.content))\n\n def kill(self):\n \"\"\"\n Kill this message.\n\n It will not be sent to the other endpoint. This has no effect in streaming mode.\n \"\"\"\n self.killed = True\n\n\nclass WebSocketFlow(flow.Flow):\n \"\"\"\n A WebSocketFlow is a simplified representation of a Websocket connection.\n \"\"\"\n\n def __init__(self, client_conn, server_conn, handshake_flow, live=None):\n super().__init__(\"websocket\", client_conn, server_conn, live)\n\n self.messages: List[WebSocketMessage] = []\n \"\"\"A list containing all WebSocketMessage's.\"\"\"\n self.close_sender = 'client'\n \"\"\"'client' if the client initiated connection closing.\"\"\"\n self.close_code = CloseReason.NORMAL_CLOSURE\n \"\"\"WebSocket close code.\"\"\"\n self.close_message = '(message missing)'\n \"\"\"WebSocket close message.\"\"\"\n self.close_reason = 'unknown status code'\n \"\"\"WebSocket close reason.\"\"\"\n self.stream = False\n \"\"\"True of this connection is streaming directly to the other endpoint.\"\"\"\n self.handshake_flow = handshake_flow\n \"\"\"The HTTP flow containing the initial WebSocket handshake.\"\"\"\n self.ended = False\n \"\"\"True when the WebSocket connection has been closed.\"\"\"\n\n self._inject_messages_client = queue.Queue(maxsize=1)\n self._inject_messages_server = queue.Queue(maxsize=1)\n\n if handshake_flow:\n self.client_key = websockets.get_client_key(handshake_flow.request.headers)\n self.client_protocol = websockets.get_protocol(handshake_flow.request.headers)\n self.client_extensions = websockets.get_extensions(handshake_flow.request.headers)\n self.server_accept = websockets.get_server_accept(handshake_flow.response.headers)\n self.server_protocol = websockets.get_protocol(handshake_flow.response.headers)\n self.server_extensions = websockets.get_extensions(handshake_flow.response.headers)\n else:\n self.client_key = ''\n self.client_protocol = ''\n self.client_extensions = ''\n self.server_accept = ''\n self.server_protocol = ''\n self.server_extensions = ''\n\n _stateobject_attributes = flow.Flow._stateobject_attributes.copy()\n # mypy doesn't support update with kwargs\n _stateobject_attributes.update(dict(\n messages=List[WebSocketMessage],\n close_sender=str,\n close_code=int,\n close_message=str,\n close_reason=str,\n client_key=str,\n client_protocol=str,\n client_extensions=str,\n server_accept=str,\n server_protocol=str,\n server_extensions=str,\n # Do not include handshake_flow, to prevent recursive serialization!\n # Since mitmproxy-console currently only displays HTTPFlows,\n # dumping the handshake_flow will include the WebSocketFlow too.\n ))\n\n def get_state(self):\n d = super().get_state()\n d['close_code'] = int(d['close_code']) # replace enum with bare int\n return d\n\n @classmethod\n def from_state(cls, state):\n f = cls(None, None, None)\n f.set_state(state)\n return f\n\n def __repr__(self):\n return \"<WebSocketFlow ({} messages)>\".format(len(self.messages))\n\n def message_info(self, message: WebSocketMessage) -> str:\n return \"{client} {direction} WebSocket {type} message {direction} {server}{endpoint}\".format(\n type=message.type,\n client=human.format_address(self.client_conn.address),\n server=human.format_address(self.server_conn.address),\n direction=\"->\" if message.from_client else \"<-\",\n endpoint=self.handshake_flow.request.path,\n )\n\n def inject_message(self, endpoint, payload):\n \"\"\"\n Inject and send a full WebSocket message to the remote endpoint.\n This might corrupt your WebSocket connection! Be careful!\n\n The endpoint needs to be either flow.client_conn or flow.server_conn.\n\n If ``payload`` is of type ``bytes`` then the message is flagged as\n being binary If it is of type ``str`` encoded as UTF-8 and sent as\n text.\n\n :param payload: The message body to send.\n :type payload: ``bytes`` or ``str``\n \"\"\"\n\n if endpoint == self.client_conn:\n self._inject_messages_client.put(payload)\n elif endpoint == self.server_conn:\n self._inject_messages_server.put(payload)\n else:\n raise ValueError('Invalid endpoint')\n", "path": "mitmproxy/websocket.py"}], "after_files": [{"content": "import time\nimport queue\nfrom typing import List, Optional\n\nfrom wsproto.frame_protocol import CloseReason\nfrom wsproto.frame_protocol import Opcode\n\nfrom mitmproxy import flow\nfrom mitmproxy.net import websockets\nfrom mitmproxy.coretypes import serializable\nfrom mitmproxy.utils import strutils, human\n\n\nclass WebSocketMessage(serializable.Serializable):\n \"\"\"\n A WebSocket message sent from one endpoint to the other.\n \"\"\"\n\n def __init__(\n self, type: int, from_client: bool, content: bytes, timestamp: Optional[float]=None, killed: bool=False\n ) -> None:\n self.type = Opcode(type) # type: ignore\n \"\"\"indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode).\"\"\"\n self.from_client = from_client\n \"\"\"True if this messages was sent by the client.\"\"\"\n self.content = content\n \"\"\"A byte-string representing the content of this message.\"\"\"\n self.timestamp: float = timestamp or time.time()\n \"\"\"Timestamp of when this message was received or created.\"\"\"\n self.killed = killed\n \"\"\"True if this messages was killed and should not be sent to the other endpoint.\"\"\"\n\n @classmethod\n def from_state(cls, state):\n return cls(*state)\n\n def get_state(self):\n return int(self.type), self.from_client, self.content, self.timestamp, self.killed\n\n def set_state(self, state):\n self.type, self.from_client, self.content, self.timestamp, self.killed = state\n self.type = Opcode(self.type) # replace enum with bare int\n\n def __repr__(self):\n if self.type == Opcode.TEXT:\n return \"text message: {}\".format(repr(self.content))\n else:\n return \"binary message: {}\".format(strutils.bytes_to_escaped_str(self.content))\n\n def kill(self):\n \"\"\"\n Kill this message.\n\n It will not be sent to the other endpoint. This has no effect in streaming mode.\n \"\"\"\n self.killed = True\n\n\nclass WebSocketFlow(flow.Flow):\n \"\"\"\n A WebSocketFlow is a simplified representation of a Websocket connection.\n \"\"\"\n\n def __init__(self, client_conn, server_conn, handshake_flow, live=None):\n super().__init__(\"websocket\", client_conn, server_conn, live)\n\n self.messages: List[WebSocketMessage] = []\n \"\"\"A list containing all WebSocketMessage's.\"\"\"\n self.close_sender = 'client'\n \"\"\"'client' if the client initiated connection closing.\"\"\"\n self.close_code = CloseReason.NORMAL_CLOSURE\n \"\"\"WebSocket close code.\"\"\"\n self.close_message = '(message missing)'\n \"\"\"WebSocket close message.\"\"\"\n self.close_reason = 'unknown status code'\n \"\"\"WebSocket close reason.\"\"\"\n self.stream = False\n \"\"\"True of this connection is streaming directly to the other endpoint.\"\"\"\n self.handshake_flow = handshake_flow\n \"\"\"The HTTP flow containing the initial WebSocket handshake.\"\"\"\n self.ended = False\n \"\"\"True when the WebSocket connection has been closed.\"\"\"\n\n self._inject_messages_client = queue.Queue(maxsize=1)\n self._inject_messages_server = queue.Queue(maxsize=1)\n\n if handshake_flow:\n self.client_key = websockets.get_client_key(handshake_flow.request.headers)\n self.client_protocol = websockets.get_protocol(handshake_flow.request.headers)\n self.client_extensions = websockets.get_extensions(handshake_flow.request.headers)\n self.server_accept = websockets.get_server_accept(handshake_flow.response.headers)\n self.server_protocol = websockets.get_protocol(handshake_flow.response.headers)\n self.server_extensions = websockets.get_extensions(handshake_flow.response.headers)\n else:\n self.client_key = ''\n self.client_protocol = ''\n self.client_extensions = ''\n self.server_accept = ''\n self.server_protocol = ''\n self.server_extensions = ''\n\n _stateobject_attributes = flow.Flow._stateobject_attributes.copy()\n # mypy doesn't support update with kwargs\n _stateobject_attributes.update(dict(\n messages=List[WebSocketMessage],\n close_sender=str,\n close_code=int,\n close_message=str,\n close_reason=str,\n client_key=str,\n client_protocol=str,\n client_extensions=str,\n server_accept=str,\n server_protocol=str,\n server_extensions=str,\n # Do not include handshake_flow, to prevent recursive serialization!\n # Since mitmproxy-console currently only displays HTTPFlows,\n # dumping the handshake_flow will include the WebSocketFlow too.\n ))\n\n def get_state(self):\n d = super().get_state()\n d['close_code'] = int(d['close_code']) # replace enum with bare int\n return d\n\n @classmethod\n def from_state(cls, state):\n f = cls(None, None, None)\n f.set_state(state)\n return f\n\n def __repr__(self):\n return \"<WebSocketFlow ({} messages)>\".format(len(self.messages))\n\n def message_info(self, message: WebSocketMessage) -> str:\n return \"{client} {direction} WebSocket {type} message {direction} {server}{endpoint}\".format(\n type=message.type,\n client=human.format_address(self.client_conn.address),\n server=human.format_address(self.server_conn.address),\n direction=\"->\" if message.from_client else \"<-\",\n endpoint=self.handshake_flow.request.path,\n )\n\n def inject_message(self, endpoint, payload):\n \"\"\"\n Inject and send a full WebSocket message to the remote endpoint.\n This might corrupt your WebSocket connection! Be careful!\n\n The endpoint needs to be either flow.client_conn or flow.server_conn.\n\n If ``payload`` is of type ``bytes`` then the message is flagged as\n being binary If it is of type ``str`` encoded as UTF-8 and sent as\n text.\n\n :param payload: The message body to send.\n :type payload: ``bytes`` or ``str``\n \"\"\"\n\n if endpoint == self.client_conn:\n self._inject_messages_client.put(payload)\n elif endpoint == self.server_conn:\n self._inject_messages_server.put(payload)\n else:\n raise ValueError('Invalid endpoint')\n", "path": "mitmproxy/websocket.py"}]}
| 2,245 | 255 |
gh_patches_debug_1180
|
rasdani/github-patches
|
git_diff
|
encode__httpx-1054
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Type-checking our tests
I know this is not a standard thing to do across Encode projects, but I've been wondering if it would be worth starting to type-hint our tests.
I've seen at least two instances of this recently:
- In HTTPX: https://github.com/encode/httpx/pull/648#discussion_r359862603
- In Starlette: https://github.com/encode/starlette/issues/722
My rationale is based on two aspects:
- It improves our upfront knowledge about how users will actually use HTTPX — currently their usage of type hints in the wild is not reflected anywhere.
- It helps us catch type hint inconsistencies we wouldn't see in the core package.
The main counter-argument, I suppose, is that type hinting tests is tedious. I think that's fair, but I believe the two pro's above make it compelling.
Thoughts?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/_types.py`
Content:
```
1 """
2 Type definitions for type checking purposes.
3 """
4
5 import ssl
6 from http.cookiejar import CookieJar
7 from typing import (
8 IO,
9 TYPE_CHECKING,
10 AsyncIterator,
11 Callable,
12 Dict,
13 Iterator,
14 List,
15 Mapping,
16 Optional,
17 Sequence,
18 Tuple,
19 Union,
20 )
21
22 if TYPE_CHECKING: # pragma: no cover
23 from ._auth import Auth # noqa: F401
24 from ._config import Proxy, Timeout # noqa: F401
25 from ._models import URL, Cookies, Headers, QueryParams, Request # noqa: F401
26
27
28 PrimitiveData = Optional[Union[str, int, float, bool]]
29
30 URLTypes = Union["URL", str]
31
32 QueryParamTypes = Union[
33 "QueryParams",
34 Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],
35 List[Tuple[str, PrimitiveData]],
36 str,
37 ]
38
39 HeaderTypes = Union[
40 "Headers",
41 Dict[str, str],
42 Dict[bytes, bytes],
43 Sequence[Tuple[str, str]],
44 Sequence[Tuple[bytes, bytes]],
45 ]
46
47 CookieTypes = Union["Cookies", CookieJar, Dict[str, str]]
48
49 CertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]
50 VerifyTypes = Union[str, bool, ssl.SSLContext]
51 TimeoutTypes = Union[
52 Optional[float],
53 Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],
54 "Timeout",
55 ]
56 ProxiesTypes = Union[URLTypes, "Proxy", Dict[URLTypes, Union[URLTypes, "Proxy"]]]
57
58 AuthTypes = Union[
59 Tuple[Union[str, bytes], Union[str, bytes]],
60 Callable[["Request"], "Request"],
61 "Auth",
62 ]
63
64 RequestData = Union[dict, str, bytes, Iterator[bytes], AsyncIterator[bytes]]
65
66 FileContent = Union[IO[str], IO[bytes], str, bytes]
67 FileTypes = Union[
68 # file (or text)
69 FileContent,
70 # (filename, file (or text))
71 Tuple[Optional[str], FileContent],
72 # (filename, file (or text), content_type)
73 Tuple[Optional[str], FileContent, Optional[str]],
74 ]
75 RequestFiles = Union[Mapping[str, FileTypes], List[Tuple[str, FileTypes]]]
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/httpx/_types.py b/httpx/_types.py
--- a/httpx/_types.py
+++ b/httpx/_types.py
@@ -72,4 +72,4 @@
# (filename, file (or text), content_type)
Tuple[Optional[str], FileContent, Optional[str]],
]
-RequestFiles = Union[Mapping[str, FileTypes], List[Tuple[str, FileTypes]]]
+RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]
|
{"golden_diff": "diff --git a/httpx/_types.py b/httpx/_types.py\n--- a/httpx/_types.py\n+++ b/httpx/_types.py\n@@ -72,4 +72,4 @@\n # (filename, file (or text), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n ]\n-RequestFiles = Union[Mapping[str, FileTypes], List[Tuple[str, FileTypes]]]\n+RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n", "issue": "Type-checking our tests\nI know this is not a standard thing to do across Encode projects, but I've been wondering if it would be worth starting to type-hint our tests.\r\n\r\nI've seen at least two instances of this recently:\r\n\r\n- In HTTPX: https://github.com/encode/httpx/pull/648#discussion_r359862603\r\n- In Starlette: https://github.com/encode/starlette/issues/722\r\n\r\nMy rationale is based on two aspects:\r\n\r\n- It improves our upfront knowledge about how users will actually use HTTPX \u2014 currently their usage of type hints in the wild is not reflected anywhere.\r\n- It helps us catch type hint inconsistencies we wouldn't see in the core package.\r\n\r\nThe main counter-argument, I suppose, is that type hinting tests is tedious. I think that's fair, but I believe the two pro's above make it compelling.\r\n\r\nThoughts?\n", "before_files": [{"content": "\"\"\"\nType definitions for type checking purposes.\n\"\"\"\n\nimport ssl\nfrom http.cookiejar import CookieJar\nfrom typing import (\n IO,\n TYPE_CHECKING,\n AsyncIterator,\n Callable,\n Dict,\n Iterator,\n List,\n Mapping,\n Optional,\n Sequence,\n Tuple,\n Union,\n)\n\nif TYPE_CHECKING: # pragma: no cover\n from ._auth import Auth # noqa: F401\n from ._config import Proxy, Timeout # noqa: F401\n from ._models import URL, Cookies, Headers, QueryParams, Request # noqa: F401\n\n\nPrimitiveData = Optional[Union[str, int, float, bool]]\n\nURLTypes = Union[\"URL\", str]\n\nQueryParamTypes = Union[\n \"QueryParams\",\n Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],\n List[Tuple[str, PrimitiveData]],\n str,\n]\n\nHeaderTypes = Union[\n \"Headers\",\n Dict[str, str],\n Dict[bytes, bytes],\n Sequence[Tuple[str, str]],\n Sequence[Tuple[bytes, bytes]],\n]\n\nCookieTypes = Union[\"Cookies\", CookieJar, Dict[str, str]]\n\nCertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]\nVerifyTypes = Union[str, bool, ssl.SSLContext]\nTimeoutTypes = Union[\n Optional[float],\n Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],\n \"Timeout\",\n]\nProxiesTypes = Union[URLTypes, \"Proxy\", Dict[URLTypes, Union[URLTypes, \"Proxy\"]]]\n\nAuthTypes = Union[\n Tuple[Union[str, bytes], Union[str, bytes]],\n Callable[[\"Request\"], \"Request\"],\n \"Auth\",\n]\n\nRequestData = Union[dict, str, bytes, Iterator[bytes], AsyncIterator[bytes]]\n\nFileContent = Union[IO[str], IO[bytes], str, bytes]\nFileTypes = Union[\n # file (or text)\n FileContent,\n # (filename, file (or text))\n Tuple[Optional[str], FileContent],\n # (filename, file (or text), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n]\nRequestFiles = Union[Mapping[str, FileTypes], List[Tuple[str, FileTypes]]]\n", "path": "httpx/_types.py"}], "after_files": [{"content": "\"\"\"\nType definitions for type checking purposes.\n\"\"\"\n\nimport ssl\nfrom http.cookiejar import CookieJar\nfrom typing import (\n IO,\n TYPE_CHECKING,\n AsyncIterator,\n Callable,\n Dict,\n Iterator,\n List,\n Mapping,\n Optional,\n Sequence,\n Tuple,\n Union,\n)\n\nif TYPE_CHECKING: # pragma: no cover\n from ._auth import Auth # noqa: F401\n from ._config import Proxy, Timeout # noqa: F401\n from ._models import URL, Cookies, Headers, QueryParams, Request # noqa: F401\n\n\nPrimitiveData = Optional[Union[str, int, float, bool]]\n\nURLTypes = Union[\"URL\", str]\n\nQueryParamTypes = Union[\n \"QueryParams\",\n Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],\n List[Tuple[str, PrimitiveData]],\n str,\n]\n\nHeaderTypes = Union[\n \"Headers\",\n Dict[str, str],\n Dict[bytes, bytes],\n Sequence[Tuple[str, str]],\n Sequence[Tuple[bytes, bytes]],\n]\n\nCookieTypes = Union[\"Cookies\", CookieJar, Dict[str, str]]\n\nCertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]\nVerifyTypes = Union[str, bool, ssl.SSLContext]\nTimeoutTypes = Union[\n Optional[float],\n Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],\n \"Timeout\",\n]\nProxiesTypes = Union[URLTypes, \"Proxy\", Dict[URLTypes, Union[URLTypes, \"Proxy\"]]]\n\nAuthTypes = Union[\n Tuple[Union[str, bytes], Union[str, bytes]],\n Callable[[\"Request\"], \"Request\"],\n \"Auth\",\n]\n\nRequestData = Union[dict, str, bytes, Iterator[bytes], AsyncIterator[bytes]]\n\nFileContent = Union[IO[str], IO[bytes], str, bytes]\nFileTypes = Union[\n # file (or text)\n FileContent,\n # (filename, file (or text))\n Tuple[Optional[str], FileContent],\n # (filename, file (or text), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n]\nRequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n", "path": "httpx/_types.py"}]}
| 1,091 | 112 |
gh_patches_debug_12379
|
rasdani/github-patches
|
git_diff
|
buildbot__buildbot-189
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Added FilterOut class from deleted file master/buildbot/status/web/st…
…atus_json.py
This class was removed in c3e1aaede2fc02507bccb548dd26e694bf32ba6a but still in use in StatusPush
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/libvirtbuildslave.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Portions Copyright Buildbot Team Members
15 # Portions Copyright 2010 Isotoma Limited
16
17 import os
18
19 from twisted.internet import defer, utils, reactor, threads
20 from twisted.python import log
21 from buildbot.buildslave import AbstractBuildSlave, AbstractLatentBuildSlave
22
23 import libvirt
24
25
26 class WorkQueue(object):
27 """
28 I am a class that turns parallel access into serial access.
29
30 I exist because we want to run libvirt access in threads as we don't
31 trust calls not to block, but under load libvirt doesnt seem to like
32 this kind of threaded use.
33 """
34
35 def __init__(self):
36 self.queue = []
37
38 def _process(self):
39 log.msg("Looking to start a piece of work now...")
40
41 # Is there anything to do?
42 if not self.queue:
43 log.msg("_process called when there is no work")
44 return
45
46 # Peek at the top of the stack - get a function to call and
47 # a deferred to fire when its all over
48 d, next_operation, args, kwargs = self.queue[0]
49
50 # Start doing some work - expects a deferred
51 try:
52 d2 = next_operation(*args, **kwargs)
53 except:
54 d2 = defer.fail()
55
56 # Whenever a piece of work is done, whether it worked or not
57 # call this to schedule the next piece of work
58 def _work_done(res):
59 log.msg("Completed a piece of work")
60 self.queue.pop(0)
61 if self.queue:
62 log.msg("Preparing next piece of work")
63 reactor.callLater(0, self._process)
64 return res
65 d2.addBoth(_work_done)
66
67 # When the work is done, trigger d
68 d2.chainDeferred(d)
69
70 def execute(self, cb, *args, **kwargs):
71 kickstart_processing = not self.queue
72 d = defer.Deferred()
73 self.queue.append((d, cb, args, kwargs))
74 if kickstart_processing:
75 self._process()
76 return d
77
78 def executeInThread(self, cb, *args, **kwargs):
79 return self.execute(threads.deferToThread, cb, *args, **kwargs)
80
81
82 # A module is effectively a singleton class, so this is OK
83 queue = WorkQueue()
84
85
86 class Domain(object):
87
88 """
89 I am a wrapper around a libvirt Domain object
90 """
91
92 def __init__(self, connection, domain):
93 self.connection = connection
94 self.domain = domain
95
96 def create(self):
97 return queue.executeInThread(self.domain.create)
98
99 def shutdown(self):
100 return queue.executeInThread(self.domain.shutdown)
101
102 def destroy(self):
103 return queue.executeInThread(self.domain.destroy)
104
105
106 class Connection(object):
107
108 """
109 I am a wrapper around a libvirt Connection object.
110 """
111
112 def __init__(self, uri):
113 self.uri = uri
114 self.connection = libvirt.open(uri)
115
116 def lookupByName(self, name):
117 """ I lookup an existing prefined domain """
118 d = queue.executeInThread(self.connection.lookupByName, name)
119 def _(res):
120 return Domain(self, res)
121 d.addCallback(_)
122 return d
123
124 def create(self, xml):
125 """ I take libvirt XML and start a new VM """
126 d = queue.executeInThread(self.connection.createXML, xml, 0)
127 def _(res):
128 return Domain(self, res)
129 d.addCallback(_)
130 return d
131
132
133 class LibVirtSlave(AbstractLatentBuildSlave):
134
135 def __init__(self, name, password, connection, hd_image, base_image = None, xml=None, max_builds=None, notify_on_missing=[],
136 missing_timeout=60*20, build_wait_timeout=60*10, properties={}, locks=None):
137 AbstractLatentBuildSlave.__init__(self, name, password, max_builds, notify_on_missing,
138 missing_timeout, build_wait_timeout, properties, locks)
139 self.name = name
140 self.connection = connection
141 self.image = hd_image
142 self.base_image = base_image
143 self.xml = xml
144
145 self.insubstantiate_after_build = True
146 self.cheap_copy = True
147 self.graceful_shutdown = False
148
149 self.domain = None
150
151 def _prepare_base_image(self):
152 """
153 I am a private method for creating (possibly cheap) copies of a
154 base_image for start_instance to boot.
155 """
156 if not self.base_image:
157 return defer.succeed(True)
158
159 if self.cheap_copy:
160 clone_cmd = "qemu-img"
161 clone_args = "create -b %(base)s -f qcow2 %(image)s"
162 else:
163 clone_cmd = "cp"
164 clone_args = "%(base)s %(image)s"
165
166 clone_args = clone_args % {
167 "base": self.base_image,
168 "image": self.image,
169 }
170
171 log.msg("Cloning base image: %s %s'" % (clone_cmd, clone_args))
172
173 def _log_result(res):
174 log.msg("Cloning exit code was: %d" % res)
175 return res
176
177 d = utils.getProcessValue(clone_cmd, clone_args.split())
178 d.addBoth(_log_result)
179 return d
180
181 def start_instance(self, build):
182 """
183 I start a new instance of a VM.
184
185 If a base_image is specified, I will make a clone of that otherwise i will
186 use image directly.
187
188 If i'm not given libvirt domain definition XML, I will look for my name
189 in the list of defined virtual machines and start that.
190 """
191 if self.domain is not None:
192 raise ValueError('domain active')
193
194 d = self._prepare_base_image()
195
196 def _start(res):
197 if self.xml:
198 return self.connection.create(self.xml)
199 d = self.connection.lookupByName(self.name)
200 def _really_start(res):
201 return res.create()
202 d.addCallback(_really_start)
203 return d
204 d.addCallback(_start)
205
206 def _started(res):
207 self.domain = res
208 return True
209 d.addCallback(_started)
210
211 def _start_failed(failure):
212 log.msg("Cannot start a VM (%s), failing gracefully and triggering a new build check" % self.name)
213 log.err(failure)
214 self.domain = None
215 return False
216 d.addErrback(_start_failed)
217
218 return d
219
220 def stop_instance(self, fast=False):
221 """
222 I attempt to stop a running VM.
223 I make sure any connection to the slave is removed.
224 If the VM was using a cloned image, I remove the clone
225 When everything is tidied up, I ask that bbot looks for work to do
226 """
227 log.msg("Attempting to stop '%s'" % self.name)
228 if self.domain is None:
229 log.msg("I don't think that domain is evening running, aborting")
230 return defer.succeed(None)
231
232 domain = self.domain
233 self.domain = None
234
235 if self.graceful_shutdown and not fast:
236 log.msg("Graceful shutdown chosen for %s" % self.name)
237 d = domain.shutdown()
238 else:
239 d = domain.destroy()
240
241 def _disconnect(res):
242 log.msg("VM destroyed (%s): Forcing its connection closed." % self.name)
243 return AbstractBuildSlave.disconnect(self)
244 d.addCallback(_disconnect)
245
246 def _disconnected(res):
247 log.msg("We forced disconnection (%s), cleaning up and triggering new build" % self.name)
248 if self.base_image:
249 os.remove(self.image)
250 self.botmaster.maybeStartBuildsForSlave(self.name)
251 return res
252 d.addBoth(_disconnected)
253
254 return d
255
256 def buildFinished(self, *args, **kwargs):
257 """
258 I insubstantiate a slave after it has done a build, if that is
259 desired behaviour.
260 """
261 AbstractLatentBuildSlave.buildFinished(self, *args, **kwargs)
262 if self.insubstantiate_after_build:
263 log.msg("Got buildFinished notification - attempting to insubstantiate")
264 self.insubstantiate()
265
266
267
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/master/buildbot/libvirtbuildslave.py b/master/buildbot/libvirtbuildslave.py
--- a/master/buildbot/libvirtbuildslave.py
+++ b/master/buildbot/libvirtbuildslave.py
@@ -198,13 +198,13 @@
return self.connection.create(self.xml)
d = self.connection.lookupByName(self.name)
def _really_start(res):
- return res.create()
+ self.domain = res
+ return self.domain.create()
d.addCallback(_really_start)
return d
d.addCallback(_start)
def _started(res):
- self.domain = res
return True
d.addCallback(_started)
|
{"golden_diff": "diff --git a/master/buildbot/libvirtbuildslave.py b/master/buildbot/libvirtbuildslave.py\n--- a/master/buildbot/libvirtbuildslave.py\n+++ b/master/buildbot/libvirtbuildslave.py\n@@ -198,13 +198,13 @@\n return self.connection.create(self.xml)\n d = self.connection.lookupByName(self.name)\n def _really_start(res):\n- return res.create()\n+ self.domain = res\n+ return self.domain.create()\n d.addCallback(_really_start)\n return d\n d.addCallback(_start)\n \n def _started(res):\n- self.domain = res\n return True\n d.addCallback(_started)\n", "issue": "Added FilterOut class from deleted file master/buildbot/status/web/st\u2026\n\u2026atus_json.py\n\nThis class was removed in c3e1aaede2fc02507bccb548dd26e694bf32ba6a but still in use in StatusPush\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Portions Copyright Buildbot Team Members\n# Portions Copyright 2010 Isotoma Limited\n\nimport os\n\nfrom twisted.internet import defer, utils, reactor, threads\nfrom twisted.python import log\nfrom buildbot.buildslave import AbstractBuildSlave, AbstractLatentBuildSlave\n\nimport libvirt\n\n\nclass WorkQueue(object):\n \"\"\"\n I am a class that turns parallel access into serial access.\n\n I exist because we want to run libvirt access in threads as we don't\n trust calls not to block, but under load libvirt doesnt seem to like\n this kind of threaded use.\n \"\"\"\n\n def __init__(self):\n self.queue = []\n\n def _process(self):\n log.msg(\"Looking to start a piece of work now...\")\n\n # Is there anything to do?\n if not self.queue:\n log.msg(\"_process called when there is no work\")\n return\n\n # Peek at the top of the stack - get a function to call and\n # a deferred to fire when its all over\n d, next_operation, args, kwargs = self.queue[0]\n\n # Start doing some work - expects a deferred\n try:\n d2 = next_operation(*args, **kwargs)\n except:\n d2 = defer.fail()\n\n # Whenever a piece of work is done, whether it worked or not \n # call this to schedule the next piece of work\n def _work_done(res):\n log.msg(\"Completed a piece of work\")\n self.queue.pop(0)\n if self.queue:\n log.msg(\"Preparing next piece of work\")\n reactor.callLater(0, self._process)\n return res\n d2.addBoth(_work_done)\n\n # When the work is done, trigger d\n d2.chainDeferred(d)\n\n def execute(self, cb, *args, **kwargs):\n kickstart_processing = not self.queue\n d = defer.Deferred()\n self.queue.append((d, cb, args, kwargs))\n if kickstart_processing:\n self._process()\n return d\n\n def executeInThread(self, cb, *args, **kwargs):\n return self.execute(threads.deferToThread, cb, *args, **kwargs)\n\n\n# A module is effectively a singleton class, so this is OK\nqueue = WorkQueue()\n\n\nclass Domain(object):\n\n \"\"\"\n I am a wrapper around a libvirt Domain object\n \"\"\"\n\n def __init__(self, connection, domain):\n self.connection = connection\n self.domain = domain\n\n def create(self):\n return queue.executeInThread(self.domain.create)\n\n def shutdown(self):\n return queue.executeInThread(self.domain.shutdown)\n\n def destroy(self):\n return queue.executeInThread(self.domain.destroy)\n\n\nclass Connection(object):\n\n \"\"\"\n I am a wrapper around a libvirt Connection object.\n \"\"\"\n\n def __init__(self, uri):\n self.uri = uri\n self.connection = libvirt.open(uri)\n\n def lookupByName(self, name):\n \"\"\" I lookup an existing prefined domain \"\"\"\n d = queue.executeInThread(self.connection.lookupByName, name)\n def _(res):\n return Domain(self, res)\n d.addCallback(_)\n return d\n\n def create(self, xml):\n \"\"\" I take libvirt XML and start a new VM \"\"\"\n d = queue.executeInThread(self.connection.createXML, xml, 0)\n def _(res):\n return Domain(self, res)\n d.addCallback(_)\n return d\n\n\nclass LibVirtSlave(AbstractLatentBuildSlave):\n\n def __init__(self, name, password, connection, hd_image, base_image = None, xml=None, max_builds=None, notify_on_missing=[],\n missing_timeout=60*20, build_wait_timeout=60*10, properties={}, locks=None):\n AbstractLatentBuildSlave.__init__(self, name, password, max_builds, notify_on_missing,\n missing_timeout, build_wait_timeout, properties, locks)\n self.name = name\n self.connection = connection\n self.image = hd_image\n self.base_image = base_image\n self.xml = xml\n\n self.insubstantiate_after_build = True\n self.cheap_copy = True\n self.graceful_shutdown = False\n\n self.domain = None\n\n def _prepare_base_image(self):\n \"\"\"\n I am a private method for creating (possibly cheap) copies of a\n base_image for start_instance to boot.\n \"\"\"\n if not self.base_image:\n return defer.succeed(True)\n\n if self.cheap_copy:\n clone_cmd = \"qemu-img\"\n clone_args = \"create -b %(base)s -f qcow2 %(image)s\"\n else:\n clone_cmd = \"cp\"\n clone_args = \"%(base)s %(image)s\"\n\n clone_args = clone_args % {\n \"base\": self.base_image,\n \"image\": self.image,\n }\n\n log.msg(\"Cloning base image: %s %s'\" % (clone_cmd, clone_args))\n\n def _log_result(res):\n log.msg(\"Cloning exit code was: %d\" % res)\n return res\n\n d = utils.getProcessValue(clone_cmd, clone_args.split())\n d.addBoth(_log_result)\n return d\n\n def start_instance(self, build):\n \"\"\"\n I start a new instance of a VM.\n\n If a base_image is specified, I will make a clone of that otherwise i will\n use image directly.\n\n If i'm not given libvirt domain definition XML, I will look for my name\n in the list of defined virtual machines and start that.\n \"\"\"\n if self.domain is not None:\n raise ValueError('domain active')\n\n d = self._prepare_base_image()\n\n def _start(res):\n if self.xml:\n return self.connection.create(self.xml)\n d = self.connection.lookupByName(self.name)\n def _really_start(res):\n return res.create()\n d.addCallback(_really_start)\n return d\n d.addCallback(_start)\n\n def _started(res):\n self.domain = res\n return True\n d.addCallback(_started)\n\n def _start_failed(failure):\n log.msg(\"Cannot start a VM (%s), failing gracefully and triggering a new build check\" % self.name)\n log.err(failure)\n self.domain = None\n return False\n d.addErrback(_start_failed)\n\n return d\n\n def stop_instance(self, fast=False):\n \"\"\"\n I attempt to stop a running VM.\n I make sure any connection to the slave is removed.\n If the VM was using a cloned image, I remove the clone\n When everything is tidied up, I ask that bbot looks for work to do\n \"\"\"\n log.msg(\"Attempting to stop '%s'\" % self.name)\n if self.domain is None:\n log.msg(\"I don't think that domain is evening running, aborting\")\n return defer.succeed(None)\n\n domain = self.domain\n self.domain = None\n\n if self.graceful_shutdown and not fast:\n log.msg(\"Graceful shutdown chosen for %s\" % self.name)\n d = domain.shutdown()\n else:\n d = domain.destroy()\n\n def _disconnect(res):\n log.msg(\"VM destroyed (%s): Forcing its connection closed.\" % self.name)\n return AbstractBuildSlave.disconnect(self)\n d.addCallback(_disconnect)\n\n def _disconnected(res):\n log.msg(\"We forced disconnection (%s), cleaning up and triggering new build\" % self.name)\n if self.base_image:\n os.remove(self.image)\n self.botmaster.maybeStartBuildsForSlave(self.name)\n return res\n d.addBoth(_disconnected)\n\n return d\n\n def buildFinished(self, *args, **kwargs):\n \"\"\"\n I insubstantiate a slave after it has done a build, if that is\n desired behaviour.\n \"\"\"\n AbstractLatentBuildSlave.buildFinished(self, *args, **kwargs)\n if self.insubstantiate_after_build:\n log.msg(\"Got buildFinished notification - attempting to insubstantiate\")\n self.insubstantiate()\n\n\n", "path": "master/buildbot/libvirtbuildslave.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Portions Copyright Buildbot Team Members\n# Portions Copyright 2010 Isotoma Limited\n\nimport os\n\nfrom twisted.internet import defer, utils, reactor, threads\nfrom twisted.python import log\nfrom buildbot.buildslave import AbstractBuildSlave, AbstractLatentBuildSlave\n\nimport libvirt\n\n\nclass WorkQueue(object):\n \"\"\"\n I am a class that turns parallel access into serial access.\n\n I exist because we want to run libvirt access in threads as we don't\n trust calls not to block, but under load libvirt doesnt seem to like\n this kind of threaded use.\n \"\"\"\n\n def __init__(self):\n self.queue = []\n\n def _process(self):\n log.msg(\"Looking to start a piece of work now...\")\n\n # Is there anything to do?\n if not self.queue:\n log.msg(\"_process called when there is no work\")\n return\n\n # Peek at the top of the stack - get a function to call and\n # a deferred to fire when its all over\n d, next_operation, args, kwargs = self.queue[0]\n\n # Start doing some work - expects a deferred\n try:\n d2 = next_operation(*args, **kwargs)\n except:\n d2 = defer.fail()\n\n # Whenever a piece of work is done, whether it worked or not \n # call this to schedule the next piece of work\n def _work_done(res):\n log.msg(\"Completed a piece of work\")\n self.queue.pop(0)\n if self.queue:\n log.msg(\"Preparing next piece of work\")\n reactor.callLater(0, self._process)\n return res\n d2.addBoth(_work_done)\n\n # When the work is done, trigger d\n d2.chainDeferred(d)\n\n def execute(self, cb, *args, **kwargs):\n kickstart_processing = not self.queue\n d = defer.Deferred()\n self.queue.append((d, cb, args, kwargs))\n if kickstart_processing:\n self._process()\n return d\n\n def executeInThread(self, cb, *args, **kwargs):\n return self.execute(threads.deferToThread, cb, *args, **kwargs)\n\n\n# A module is effectively a singleton class, so this is OK\nqueue = WorkQueue()\n\n\nclass Domain(object):\n\n \"\"\"\n I am a wrapper around a libvirt Domain object\n \"\"\"\n\n def __init__(self, connection, domain):\n self.connection = connection\n self.domain = domain\n\n def create(self):\n return queue.executeInThread(self.domain.create)\n\n def shutdown(self):\n return queue.executeInThread(self.domain.shutdown)\n\n def destroy(self):\n return queue.executeInThread(self.domain.destroy)\n\n\nclass Connection(object):\n\n \"\"\"\n I am a wrapper around a libvirt Connection object.\n \"\"\"\n\n def __init__(self, uri):\n self.uri = uri\n self.connection = libvirt.open(uri)\n\n def lookupByName(self, name):\n \"\"\" I lookup an existing prefined domain \"\"\"\n d = queue.executeInThread(self.connection.lookupByName, name)\n def _(res):\n return Domain(self, res)\n d.addCallback(_)\n return d\n\n def create(self, xml):\n \"\"\" I take libvirt XML and start a new VM \"\"\"\n d = queue.executeInThread(self.connection.createXML, xml, 0)\n def _(res):\n return Domain(self, res)\n d.addCallback(_)\n return d\n\n\nclass LibVirtSlave(AbstractLatentBuildSlave):\n\n def __init__(self, name, password, connection, hd_image, base_image = None, xml=None, max_builds=None, notify_on_missing=[],\n missing_timeout=60*20, build_wait_timeout=60*10, properties={}, locks=None):\n AbstractLatentBuildSlave.__init__(self, name, password, max_builds, notify_on_missing,\n missing_timeout, build_wait_timeout, properties, locks)\n self.name = name\n self.connection = connection\n self.image = hd_image\n self.base_image = base_image\n self.xml = xml\n\n self.insubstantiate_after_build = True\n self.cheap_copy = True\n self.graceful_shutdown = False\n\n self.domain = None\n\n def _prepare_base_image(self):\n \"\"\"\n I am a private method for creating (possibly cheap) copies of a\n base_image for start_instance to boot.\n \"\"\"\n if not self.base_image:\n return defer.succeed(True)\n\n if self.cheap_copy:\n clone_cmd = \"qemu-img\"\n clone_args = \"create -b %(base)s -f qcow2 %(image)s\"\n else:\n clone_cmd = \"cp\"\n clone_args = \"%(base)s %(image)s\"\n\n clone_args = clone_args % {\n \"base\": self.base_image,\n \"image\": self.image,\n }\n\n log.msg(\"Cloning base image: %s %s'\" % (clone_cmd, clone_args))\n\n def _log_result(res):\n log.msg(\"Cloning exit code was: %d\" % res)\n return res\n\n d = utils.getProcessValue(clone_cmd, clone_args.split())\n d.addBoth(_log_result)\n return d\n\n def start_instance(self, build):\n \"\"\"\n I start a new instance of a VM.\n\n If a base_image is specified, I will make a clone of that otherwise i will\n use image directly.\n\n If i'm not given libvirt domain definition XML, I will look for my name\n in the list of defined virtual machines and start that.\n \"\"\"\n if self.domain is not None:\n raise ValueError('domain active')\n\n d = self._prepare_base_image()\n\n def _start(res):\n if self.xml:\n return self.connection.create(self.xml)\n d = self.connection.lookupByName(self.name)\n def _really_start(res):\n self.domain = res\n return self.domain.create()\n d.addCallback(_really_start)\n return d\n d.addCallback(_start)\n\n def _started(res):\n return True\n d.addCallback(_started)\n\n def _start_failed(failure):\n log.msg(\"Cannot start a VM (%s), failing gracefully and triggering a new build check\" % self.name)\n log.err(failure)\n self.domain = None\n return False\n d.addErrback(_start_failed)\n\n return d\n\n def stop_instance(self, fast=False):\n \"\"\"\n I attempt to stop a running VM.\n I make sure any connection to the slave is removed.\n If the VM was using a cloned image, I remove the clone\n When everything is tidied up, I ask that bbot looks for work to do\n \"\"\"\n log.msg(\"Attempting to stop '%s'\" % self.name)\n if self.domain is None:\n log.msg(\"I don't think that domain is evening running, aborting\")\n return defer.succeed(None)\n\n domain = self.domain\n self.domain = None\n\n if self.graceful_shutdown and not fast:\n log.msg(\"Graceful shutdown chosen for %s\" % self.name)\n d = domain.shutdown()\n else:\n d = domain.destroy()\n\n def _disconnect(res):\n log.msg(\"VM destroyed (%s): Forcing its connection closed.\" % self.name)\n return AbstractBuildSlave.disconnect(self)\n d.addCallback(_disconnect)\n\n def _disconnected(res):\n log.msg(\"We forced disconnection (%s), cleaning up and triggering new build\" % self.name)\n if self.base_image:\n os.remove(self.image)\n self.botmaster.maybeStartBuildsForSlave(self.name)\n return res\n d.addBoth(_disconnected)\n\n return d\n\n def buildFinished(self, *args, **kwargs):\n \"\"\"\n I insubstantiate a slave after it has done a build, if that is\n desired behaviour.\n \"\"\"\n AbstractLatentBuildSlave.buildFinished(self, *args, **kwargs)\n if self.insubstantiate_after_build:\n log.msg(\"Got buildFinished notification - attempting to insubstantiate\")\n self.insubstantiate()\n\n\n", "path": "master/buildbot/libvirtbuildslave.py"}]}
| 2,952 | 149 |
gh_patches_debug_40075
|
rasdani/github-patches
|
git_diff
|
edgedb__edgedb-5864
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change how globals are passed in GraphQL
Currently global variables are passed as their own special `globals` field. This is a non-standard field and it gets in the way of existing frameworks and libraries.
An alternative to that would be to use the `variables` field that's part of the GraphQL standard and add `__globals__` to that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `edb/testbase/http.py`
Content:
```
1 #
2 # This source file is part of the EdgeDB open source project.
3 #
4 # Copyright 2019-present MagicStack Inc. and the EdgeDB authors.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18
19
20 from __future__ import annotations
21 from typing import *
22
23 import json
24 import urllib.parse
25 import urllib.request
26
27 import edgedb
28
29 from edb.errors import base as base_errors
30
31 from edb.common import assert_data_shape
32
33 from . import server
34
35
36 bag = assert_data_shape.bag
37
38
39 class BaseHttpExtensionTest(server.QueryTestCase):
40 EXTENSION_SETUP: List[str] = []
41
42 @classmethod
43 def get_extension_name(cls):
44 raise NotImplementedError
45
46 @classmethod
47 def get_extension_path(cls):
48 return cls.get_extension_name()
49
50 @classmethod
51 def get_api_prefix(cls):
52 extpath = cls.get_extension_path()
53 dbname = cls.get_database_name()
54 return f'/db/{dbname}/{extpath}'
55
56 @classmethod
57 def get_setup_script(cls):
58 script = super().get_setup_script()
59
60 extname = cls.get_extension_name()
61 script += f'\nCREATE EXTENSION pgcrypto;\n'
62 script += f'\nCREATE EXTENSION {extname};\n'
63 script += "\n".join(cls.EXTENSION_SETUP)
64 return script
65
66 @classmethod
67 def tearDownClass(cls):
68 extname = cls.get_extension_name()
69 cls.loop.run_until_complete(
70 cls.con.execute(f'DROP EXTENSION {extname};')
71 )
72 super().tearDownClass()
73
74
75 class ExtAuthTestCase(BaseHttpExtensionTest):
76
77 @classmethod
78 def get_extension_name(cls):
79 return 'auth'
80
81 @classmethod
82 def get_extension_path(cls):
83 return 'ext/auth'
84
85
86 class EdgeQLTestCase(BaseHttpExtensionTest):
87
88 @classmethod
89 def get_extension_name(cls):
90 return 'edgeql_http'
91
92 @classmethod
93 def get_extension_path(cls):
94 return 'edgeql'
95
96 def edgeql_query(
97 self, query, *, use_http_post=True, variables=None, globals=None):
98 req_data = {
99 'query': query
100 }
101
102 if use_http_post:
103 if variables is not None:
104 req_data['variables'] = variables
105 if globals is not None:
106 req_data['globals'] = globals
107 req = urllib.request.Request(self.http_addr, method='POST')
108 req.add_header('Content-Type', 'application/json')
109 response = urllib.request.urlopen(
110 req, json.dumps(req_data).encode(), context=self.tls_context
111 )
112 resp_data = json.loads(response.read())
113 else:
114 if variables is not None:
115 req_data['variables'] = json.dumps(variables)
116 if globals is not None:
117 req_data['globals'] = json.dumps(globals)
118 response = urllib.request.urlopen(
119 f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',
120 context=self.tls_context,
121 )
122 resp_data = json.loads(response.read())
123
124 if 'data' in resp_data:
125 return resp_data['data']
126
127 err = resp_data['error']
128
129 ex_msg = err['message'].strip()
130 ex_code = err['code']
131
132 raise edgedb.EdgeDBError._from_code(ex_code, ex_msg)
133
134 def assert_edgeql_query_result(self, query, result, *,
135 msg=None, sort=None,
136 use_http_post=True,
137 variables=None,
138 globals=None):
139 res = self.edgeql_query(
140 query,
141 use_http_post=use_http_post,
142 variables=variables,
143 globals=globals)
144
145 if sort is not None:
146 # GQL will always have a single object returned. The data is
147 # in the top-level fields, so that's what needs to be sorted.
148 for r in res.values():
149 assert_data_shape.sort_results(r, sort)
150
151 assert_data_shape.assert_data_shape(
152 res, result, self.fail, message=msg)
153 return res
154
155
156 class GraphQLTestCase(BaseHttpExtensionTest):
157
158 @classmethod
159 def get_extension_name(cls):
160 return 'graphql'
161
162 def graphql_query(self, query, *, operation_name=None,
163 use_http_post=True,
164 variables=None,
165 globals=None):
166 req_data = {
167 'query': query
168 }
169
170 if operation_name is not None:
171 req_data['operationName'] = operation_name
172
173 if use_http_post:
174 if variables is not None:
175 req_data['variables'] = variables
176 if globals is not None:
177 req_data['globals'] = globals
178 req = urllib.request.Request(self.http_addr, method='POST')
179 req.add_header('Content-Type', 'application/json')
180 response = urllib.request.urlopen(
181 req, json.dumps(req_data).encode(), context=self.tls_context
182 )
183 resp_data = json.loads(response.read())
184 else:
185 if variables is not None:
186 req_data['variables'] = json.dumps(variables)
187 if globals is not None:
188 req_data['globals'] = json.dumps(globals)
189 response = urllib.request.urlopen(
190 f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',
191 context=self.tls_context,
192 )
193 resp_data = json.loads(response.read())
194
195 if 'data' in resp_data:
196 return resp_data['data']
197
198 err = resp_data['errors'][0]
199
200 typename, msg = err['message'].split(':', 1)
201 msg = msg.strip()
202
203 try:
204 ex_type = getattr(edgedb, typename)
205 except AttributeError:
206 raise AssertionError(
207 f'server returned an invalid exception typename: {typename!r}'
208 f'\n Message: {msg}')
209
210 ex = ex_type(msg)
211
212 if 'locations' in err:
213 # XXX Fix this when LSP "location" objects are implemented
214 ex._attrs[base_errors.FIELD_LINE_START] = str(
215 err['locations'][0]['line']).encode()
216 ex._attrs[base_errors.FIELD_COLUMN_START] = str(
217 err['locations'][0]['column']).encode()
218
219 raise ex
220
221 def assert_graphql_query_result(self, query, result, *,
222 msg=None, sort=None,
223 operation_name=None,
224 use_http_post=True,
225 variables=None,
226 globals=None):
227 res = self.graphql_query(
228 query,
229 operation_name=operation_name,
230 use_http_post=use_http_post,
231 variables=variables,
232 globals=globals)
233
234 if sort is not None:
235 # GQL will always have a single object returned. The data is
236 # in the top-level fields, so that's what needs to be sorted.
237 for r in res.values():
238 assert_data_shape.sort_results(r, sort)
239
240 assert_data_shape.assert_data_shape(
241 res, result, self.fail, message=msg)
242 return res
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/edb/testbase/http.py b/edb/testbase/http.py
--- a/edb/testbase/http.py
+++ b/edb/testbase/http.py
@@ -162,7 +162,8 @@
def graphql_query(self, query, *, operation_name=None,
use_http_post=True,
variables=None,
- globals=None):
+ globals=None,
+ deprecated_globals=None):
req_data = {
'query': query
}
@@ -174,7 +175,13 @@
if variables is not None:
req_data['variables'] = variables
if globals is not None:
- req_data['globals'] = globals
+ if variables is None:
+ req_data['variables'] = dict()
+ req_data['variables']['__globals__'] = globals
+ # Support testing the old way of sending globals.
+ if deprecated_globals is not None:
+ req_data['globals'] = deprecated_globals
+
req = urllib.request.Request(self.http_addr, method='POST')
req.add_header('Content-Type', 'application/json')
response = urllib.request.urlopen(
@@ -182,10 +189,15 @@
)
resp_data = json.loads(response.read())
else:
+ if globals is not None:
+ if variables is None:
+ variables = dict()
+ variables['__globals__'] = globals
+ # Support testing the old way of sending globals.
+ if deprecated_globals is not None:
+ req_data['globals'] = json.dumps(deprecated_globals)
if variables is not None:
req_data['variables'] = json.dumps(variables)
- if globals is not None:
- req_data['globals'] = json.dumps(globals)
response = urllib.request.urlopen(
f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',
context=self.tls_context,
@@ -223,13 +235,15 @@
operation_name=None,
use_http_post=True,
variables=None,
- globals=None):
+ globals=None,
+ deprecated_globals=None):
res = self.graphql_query(
query,
operation_name=operation_name,
use_http_post=use_http_post,
variables=variables,
- globals=globals)
+ globals=globals,
+ deprecated_globals=deprecated_globals)
if sort is not None:
# GQL will always have a single object returned. The data is
|
{"golden_diff": "diff --git a/edb/testbase/http.py b/edb/testbase/http.py\n--- a/edb/testbase/http.py\n+++ b/edb/testbase/http.py\n@@ -162,7 +162,8 @@\n def graphql_query(self, query, *, operation_name=None,\n use_http_post=True,\n variables=None,\n- globals=None):\n+ globals=None,\n+ deprecated_globals=None):\n req_data = {\n 'query': query\n }\n@@ -174,7 +175,13 @@\n if variables is not None:\n req_data['variables'] = variables\n if globals is not None:\n- req_data['globals'] = globals\n+ if variables is None:\n+ req_data['variables'] = dict()\n+ req_data['variables']['__globals__'] = globals\n+ # Support testing the old way of sending globals.\n+ if deprecated_globals is not None:\n+ req_data['globals'] = deprecated_globals\n+\n req = urllib.request.Request(self.http_addr, method='POST')\n req.add_header('Content-Type', 'application/json')\n response = urllib.request.urlopen(\n@@ -182,10 +189,15 @@\n )\n resp_data = json.loads(response.read())\n else:\n+ if globals is not None:\n+ if variables is None:\n+ variables = dict()\n+ variables['__globals__'] = globals\n+ # Support testing the old way of sending globals.\n+ if deprecated_globals is not None:\n+ req_data['globals'] = json.dumps(deprecated_globals)\n if variables is not None:\n req_data['variables'] = json.dumps(variables)\n- if globals is not None:\n- req_data['globals'] = json.dumps(globals)\n response = urllib.request.urlopen(\n f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',\n context=self.tls_context,\n@@ -223,13 +235,15 @@\n operation_name=None,\n use_http_post=True,\n variables=None,\n- globals=None):\n+ globals=None,\n+ deprecated_globals=None):\n res = self.graphql_query(\n query,\n operation_name=operation_name,\n use_http_post=use_http_post,\n variables=variables,\n- globals=globals)\n+ globals=globals,\n+ deprecated_globals=deprecated_globals)\n \n if sort is not None:\n # GQL will always have a single object returned. The data is\n", "issue": "Change how globals are passed in GraphQL\nCurrently global variables are passed as their own special `globals` field. This is a non-standard field and it gets in the way of existing frameworks and libraries.\r\n\r\nAn alternative to that would be to use the `variables` field that's part of the GraphQL standard and add `__globals__` to that.\n", "before_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2019-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\nfrom __future__ import annotations\nfrom typing import *\n\nimport json\nimport urllib.parse\nimport urllib.request\n\nimport edgedb\n\nfrom edb.errors import base as base_errors\n\nfrom edb.common import assert_data_shape\n\nfrom . import server\n\n\nbag = assert_data_shape.bag\n\n\nclass BaseHttpExtensionTest(server.QueryTestCase):\n EXTENSION_SETUP: List[str] = []\n\n @classmethod\n def get_extension_name(cls):\n raise NotImplementedError\n\n @classmethod\n def get_extension_path(cls):\n return cls.get_extension_name()\n\n @classmethod\n def get_api_prefix(cls):\n extpath = cls.get_extension_path()\n dbname = cls.get_database_name()\n return f'/db/{dbname}/{extpath}'\n\n @classmethod\n def get_setup_script(cls):\n script = super().get_setup_script()\n\n extname = cls.get_extension_name()\n script += f'\\nCREATE EXTENSION pgcrypto;\\n'\n script += f'\\nCREATE EXTENSION {extname};\\n'\n script += \"\\n\".join(cls.EXTENSION_SETUP)\n return script\n\n @classmethod\n def tearDownClass(cls):\n extname = cls.get_extension_name()\n cls.loop.run_until_complete(\n cls.con.execute(f'DROP EXTENSION {extname};')\n )\n super().tearDownClass()\n\n\nclass ExtAuthTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'auth'\n\n @classmethod\n def get_extension_path(cls):\n return 'ext/auth'\n\n\nclass EdgeQLTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'edgeql_http'\n\n @classmethod\n def get_extension_path(cls):\n return 'edgeql'\n\n def edgeql_query(\n self, query, *, use_http_post=True, variables=None, globals=None):\n req_data = {\n 'query': query\n }\n\n if use_http_post:\n if variables is not None:\n req_data['variables'] = variables\n if globals is not None:\n req_data['globals'] = globals\n req = urllib.request.Request(self.http_addr, method='POST')\n req.add_header('Content-Type', 'application/json')\n response = urllib.request.urlopen(\n req, json.dumps(req_data).encode(), context=self.tls_context\n )\n resp_data = json.loads(response.read())\n else:\n if variables is not None:\n req_data['variables'] = json.dumps(variables)\n if globals is not None:\n req_data['globals'] = json.dumps(globals)\n response = urllib.request.urlopen(\n f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',\n context=self.tls_context,\n )\n resp_data = json.loads(response.read())\n\n if 'data' in resp_data:\n return resp_data['data']\n\n err = resp_data['error']\n\n ex_msg = err['message'].strip()\n ex_code = err['code']\n\n raise edgedb.EdgeDBError._from_code(ex_code, ex_msg)\n\n def assert_edgeql_query_result(self, query, result, *,\n msg=None, sort=None,\n use_http_post=True,\n variables=None,\n globals=None):\n res = self.edgeql_query(\n query,\n use_http_post=use_http_post,\n variables=variables,\n globals=globals)\n\n if sort is not None:\n # GQL will always have a single object returned. The data is\n # in the top-level fields, so that's what needs to be sorted.\n for r in res.values():\n assert_data_shape.sort_results(r, sort)\n\n assert_data_shape.assert_data_shape(\n res, result, self.fail, message=msg)\n return res\n\n\nclass GraphQLTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'graphql'\n\n def graphql_query(self, query, *, operation_name=None,\n use_http_post=True,\n variables=None,\n globals=None):\n req_data = {\n 'query': query\n }\n\n if operation_name is not None:\n req_data['operationName'] = operation_name\n\n if use_http_post:\n if variables is not None:\n req_data['variables'] = variables\n if globals is not None:\n req_data['globals'] = globals\n req = urllib.request.Request(self.http_addr, method='POST')\n req.add_header('Content-Type', 'application/json')\n response = urllib.request.urlopen(\n req, json.dumps(req_data).encode(), context=self.tls_context\n )\n resp_data = json.loads(response.read())\n else:\n if variables is not None:\n req_data['variables'] = json.dumps(variables)\n if globals is not None:\n req_data['globals'] = json.dumps(globals)\n response = urllib.request.urlopen(\n f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',\n context=self.tls_context,\n )\n resp_data = json.loads(response.read())\n\n if 'data' in resp_data:\n return resp_data['data']\n\n err = resp_data['errors'][0]\n\n typename, msg = err['message'].split(':', 1)\n msg = msg.strip()\n\n try:\n ex_type = getattr(edgedb, typename)\n except AttributeError:\n raise AssertionError(\n f'server returned an invalid exception typename: {typename!r}'\n f'\\n Message: {msg}')\n\n ex = ex_type(msg)\n\n if 'locations' in err:\n # XXX Fix this when LSP \"location\" objects are implemented\n ex._attrs[base_errors.FIELD_LINE_START] = str(\n err['locations'][0]['line']).encode()\n ex._attrs[base_errors.FIELD_COLUMN_START] = str(\n err['locations'][0]['column']).encode()\n\n raise ex\n\n def assert_graphql_query_result(self, query, result, *,\n msg=None, sort=None,\n operation_name=None,\n use_http_post=True,\n variables=None,\n globals=None):\n res = self.graphql_query(\n query,\n operation_name=operation_name,\n use_http_post=use_http_post,\n variables=variables,\n globals=globals)\n\n if sort is not None:\n # GQL will always have a single object returned. The data is\n # in the top-level fields, so that's what needs to be sorted.\n for r in res.values():\n assert_data_shape.sort_results(r, sort)\n\n assert_data_shape.assert_data_shape(\n res, result, self.fail, message=msg)\n return res\n", "path": "edb/testbase/http.py"}], "after_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2019-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\nfrom __future__ import annotations\nfrom typing import *\n\nimport json\nimport urllib.parse\nimport urllib.request\n\nimport edgedb\n\nfrom edb.errors import base as base_errors\n\nfrom edb.common import assert_data_shape\n\nfrom . import server\n\n\nbag = assert_data_shape.bag\n\n\nclass BaseHttpExtensionTest(server.QueryTestCase):\n EXTENSION_SETUP: List[str] = []\n\n @classmethod\n def get_extension_name(cls):\n raise NotImplementedError\n\n @classmethod\n def get_extension_path(cls):\n return cls.get_extension_name()\n\n @classmethod\n def get_api_prefix(cls):\n extpath = cls.get_extension_path()\n dbname = cls.get_database_name()\n return f'/db/{dbname}/{extpath}'\n\n @classmethod\n def get_setup_script(cls):\n script = super().get_setup_script()\n\n extname = cls.get_extension_name()\n script += f'\\nCREATE EXTENSION pgcrypto;\\n'\n script += f'\\nCREATE EXTENSION {extname};\\n'\n script += \"\\n\".join(cls.EXTENSION_SETUP)\n return script\n\n @classmethod\n def tearDownClass(cls):\n extname = cls.get_extension_name()\n cls.loop.run_until_complete(\n cls.con.execute(f'DROP EXTENSION {extname};')\n )\n super().tearDownClass()\n\n\nclass ExtAuthTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'auth'\n\n @classmethod\n def get_extension_path(cls):\n return 'ext/auth'\n\n\nclass EdgeQLTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'edgeql_http'\n\n @classmethod\n def get_extension_path(cls):\n return 'edgeql'\n\n def edgeql_query(\n self, query, *, use_http_post=True, variables=None, globals=None):\n req_data = {\n 'query': query\n }\n\n if use_http_post:\n if variables is not None:\n req_data['variables'] = variables\n if globals is not None:\n req_data['globals'] = globals\n req = urllib.request.Request(self.http_addr, method='POST')\n req.add_header('Content-Type', 'application/json')\n response = urllib.request.urlopen(\n req, json.dumps(req_data).encode(), context=self.tls_context\n )\n resp_data = json.loads(response.read())\n else:\n if variables is not None:\n req_data['variables'] = json.dumps(variables)\n if globals is not None:\n req_data['globals'] = json.dumps(globals)\n response = urllib.request.urlopen(\n f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',\n context=self.tls_context,\n )\n resp_data = json.loads(response.read())\n\n if 'data' in resp_data:\n return resp_data['data']\n\n err = resp_data['error']\n\n ex_msg = err['message'].strip()\n ex_code = err['code']\n\n raise edgedb.EdgeDBError._from_code(ex_code, ex_msg)\n\n def assert_edgeql_query_result(self, query, result, *,\n msg=None, sort=None,\n use_http_post=True,\n variables=None,\n globals=None):\n res = self.edgeql_query(\n query,\n use_http_post=use_http_post,\n variables=variables,\n globals=globals)\n\n if sort is not None:\n # GQL will always have a single object returned. The data is\n # in the top-level fields, so that's what needs to be sorted.\n for r in res.values():\n assert_data_shape.sort_results(r, sort)\n\n assert_data_shape.assert_data_shape(\n res, result, self.fail, message=msg)\n return res\n\n\nclass GraphQLTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'graphql'\n\n def graphql_query(self, query, *, operation_name=None,\n use_http_post=True,\n variables=None,\n globals=None,\n deprecated_globals=None):\n req_data = {\n 'query': query\n }\n\n if operation_name is not None:\n req_data['operationName'] = operation_name\n\n if use_http_post:\n if variables is not None:\n req_data['variables'] = variables\n if globals is not None:\n if variables is None:\n req_data['variables'] = dict()\n req_data['variables']['__globals__'] = globals\n # Support testing the old way of sending globals.\n if deprecated_globals is not None:\n req_data['globals'] = deprecated_globals\n\n req = urllib.request.Request(self.http_addr, method='POST')\n req.add_header('Content-Type', 'application/json')\n response = urllib.request.urlopen(\n req, json.dumps(req_data).encode(), context=self.tls_context\n )\n resp_data = json.loads(response.read())\n else:\n if globals is not None:\n if variables is None:\n variables = dict()\n variables['__globals__'] = globals\n # Support testing the old way of sending globals.\n if deprecated_globals is not None:\n req_data['globals'] = json.dumps(deprecated_globals)\n if variables is not None:\n req_data['variables'] = json.dumps(variables)\n response = urllib.request.urlopen(\n f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',\n context=self.tls_context,\n )\n resp_data = json.loads(response.read())\n\n if 'data' in resp_data:\n return resp_data['data']\n\n err = resp_data['errors'][0]\n\n typename, msg = err['message'].split(':', 1)\n msg = msg.strip()\n\n try:\n ex_type = getattr(edgedb, typename)\n except AttributeError:\n raise AssertionError(\n f'server returned an invalid exception typename: {typename!r}'\n f'\\n Message: {msg}')\n\n ex = ex_type(msg)\n\n if 'locations' in err:\n # XXX Fix this when LSP \"location\" objects are implemented\n ex._attrs[base_errors.FIELD_LINE_START] = str(\n err['locations'][0]['line']).encode()\n ex._attrs[base_errors.FIELD_COLUMN_START] = str(\n err['locations'][0]['column']).encode()\n\n raise ex\n\n def assert_graphql_query_result(self, query, result, *,\n msg=None, sort=None,\n operation_name=None,\n use_http_post=True,\n variables=None,\n globals=None,\n deprecated_globals=None):\n res = self.graphql_query(\n query,\n operation_name=operation_name,\n use_http_post=use_http_post,\n variables=variables,\n globals=globals,\n deprecated_globals=deprecated_globals)\n\n if sort is not None:\n # GQL will always have a single object returned. The data is\n # in the top-level fields, so that's what needs to be sorted.\n for r in res.values():\n assert_data_shape.sort_results(r, sort)\n\n assert_data_shape.assert_data_shape(\n res, result, self.fail, message=msg)\n return res\n", "path": "edb/testbase/http.py"}]}
| 2,535 | 538 |
gh_patches_debug_10160
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-581
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKEditor Image upload makes the GUI undismissable
The browser keeps the changed fields state after the image is uploaded, and for some reason we're unable to dismiss the gui.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/config/urls.py`
Content:
```
1 from django.conf import settings
2 from django.conf.urls import include
3 from django.contrib import admin
4 from django.template.response import TemplateResponse
5 from django.urls import re_path, path
6 from django.views.generic import TemplateView, RedirectView
7
8 from grandchallenge.core.views import comicmain
9 from grandchallenge.pages.views import FaviconView
10
11 admin.autodiscover()
12
13
14 def handler500(request):
15 context = {"request": request}
16 template_name = "500.html"
17 return TemplateResponse(request, template_name, context, status=500)
18
19
20 urlpatterns = [
21 path("", comicmain, name="home"),
22 path(
23 "robots.txt/",
24 TemplateView.as_view(
25 template_name="robots.txt", content_type="text/plain"
26 ),
27 ),
28 # Favicons
29 path(
30 "favicon.ico/",
31 FaviconView.as_view(rel="shortcut icon"),
32 name="favicon",
33 ),
34 path(
35 "apple-touch-icon.png/",
36 FaviconView.as_view(rel="apple-touch-icon"),
37 name="apple-touch-icon",
38 ),
39 path(
40 "apple-touch-icon-precomposed.png/",
41 FaviconView.as_view(rel="apple-touch-icon-precomposed"),
42 name="apple-touch-icon-precomposed",
43 ),
44 path(
45 "apple-touch-icon-<int:size>x<int>.png/",
46 FaviconView.as_view(rel="apple-touch-icon"),
47 name="apple-touch-icon-sized",
48 ),
49 path(
50 "apple-touch-icon-<int:size>x<int>-precomposed.png/",
51 FaviconView.as_view(rel="apple-touch-icon-precomposed"),
52 name="apple-touch-icon-precomposed-sized",
53 ),
54 path(settings.ADMIN_URL, admin.site.urls),
55 path(
56 "site/<slug:challenge_short_name>/",
57 include("grandchallenge.core.urls"),
58 name="site",
59 ),
60 path(
61 "stats/",
62 include("grandchallenge.statistics.urls", namespace="statistics"),
63 ),
64 # Do not change the api namespace without updating the view names in
65 # all of the serializers
66 path("api/", include("grandchallenge.api.urls", namespace="api")),
67 # Used for logging in and managing grandchallenge.profiles. This is done on
68 # the framework level because it is too hard to get this all under each
69 # project
70 path("accounts/", include("grandchallenge.profiles.urls")),
71 path("socialauth/", include("social_django.urls", namespace="social")),
72 path(
73 "challenges/",
74 include("grandchallenge.challenges.urls", namespace="challenges"),
75 ),
76 re_path(
77 r"^(?i)all_challenges/$",
78 RedirectView.as_view(pattern_name="challenges:list", permanent=False),
79 ),
80 path("cases/", include("grandchallenge.cases.urls", namespace="cases")),
81 path(
82 "algorithms/",
83 include("grandchallenge.algorithms.urls", namespace="algorithms"),
84 ),
85 # ========== catch all ====================
86 # when all other urls have been checked, try to load page from main project
87 # keep this url at the bottom of this list, because urls are checked in
88 # order
89 path("<slug:page_title>/", comicmain, name="mainproject-home"),
90 path(
91 "media/", include("grandchallenge.serving.urls", namespace="serving")
92 ),
93 ]
94 if settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:
95 import debug_toolbar
96
97 urlpatterns = [
98 path("__debug__/", include(debug_toolbar.urls))
99 ] + urlpatterns
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/config/urls.py b/app/config/urls.py
--- a/app/config/urls.py
+++ b/app/config/urls.py
@@ -73,8 +73,12 @@
"challenges/",
include("grandchallenge.challenges.urls", namespace="challenges"),
),
- re_path(
- r"^(?i)all_challenges/$",
+ path(
+ "all_challenges/",
+ RedirectView.as_view(pattern_name="challenges:list", permanent=False),
+ ),
+ path(
+ "All_Challenges/",
RedirectView.as_view(pattern_name="challenges:list", permanent=False),
),
path("cases/", include("grandchallenge.cases.urls", namespace="cases")),
|
{"golden_diff": "diff --git a/app/config/urls.py b/app/config/urls.py\n--- a/app/config/urls.py\n+++ b/app/config/urls.py\n@@ -73,8 +73,12 @@\n \"challenges/\",\n include(\"grandchallenge.challenges.urls\", namespace=\"challenges\"),\n ),\n- re_path(\n- r\"^(?i)all_challenges/$\",\n+ path(\n+ \"all_challenges/\",\n+ RedirectView.as_view(pattern_name=\"challenges:list\", permanent=False),\n+ ),\n+ path(\n+ \"All_Challenges/\",\n RedirectView.as_view(pattern_name=\"challenges:list\", permanent=False),\n ),\n path(\"cases/\", include(\"grandchallenge.cases.urls\", namespace=\"cases\")),\n", "issue": "CKEditor Image upload makes the GUI undismissable\nThe browser keeps the changed fields state after the image is uploaded, and for some reason we're unable to dismiss the gui.\n", "before_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import include\nfrom django.contrib import admin\nfrom django.template.response import TemplateResponse\nfrom django.urls import re_path, path\nfrom django.views.generic import TemplateView, RedirectView\n\nfrom grandchallenge.core.views import comicmain\nfrom grandchallenge.pages.views import FaviconView\n\nadmin.autodiscover()\n\n\ndef handler500(request):\n context = {\"request\": request}\n template_name = \"500.html\"\n return TemplateResponse(request, template_name, context, status=500)\n\n\nurlpatterns = [\n path(\"\", comicmain, name=\"home\"),\n path(\n \"robots.txt/\",\n TemplateView.as_view(\n template_name=\"robots.txt\", content_type=\"text/plain\"\n ),\n ),\n # Favicons\n path(\n \"favicon.ico/\",\n FaviconView.as_view(rel=\"shortcut icon\"),\n name=\"favicon\",\n ),\n path(\n \"apple-touch-icon.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon\"),\n name=\"apple-touch-icon\",\n ),\n path(\n \"apple-touch-icon-precomposed.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon-precomposed\"),\n name=\"apple-touch-icon-precomposed\",\n ),\n path(\n \"apple-touch-icon-<int:size>x<int>.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon\"),\n name=\"apple-touch-icon-sized\",\n ),\n path(\n \"apple-touch-icon-<int:size>x<int>-precomposed.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon-precomposed\"),\n name=\"apple-touch-icon-precomposed-sized\",\n ),\n path(settings.ADMIN_URL, admin.site.urls),\n path(\n \"site/<slug:challenge_short_name>/\",\n include(\"grandchallenge.core.urls\"),\n name=\"site\",\n ),\n path(\n \"stats/\",\n include(\"grandchallenge.statistics.urls\", namespace=\"statistics\"),\n ),\n # Do not change the api namespace without updating the view names in\n # all of the serializers\n path(\"api/\", include(\"grandchallenge.api.urls\", namespace=\"api\")),\n # Used for logging in and managing grandchallenge.profiles. This is done on\n # the framework level because it is too hard to get this all under each\n # project\n path(\"accounts/\", include(\"grandchallenge.profiles.urls\")),\n path(\"socialauth/\", include(\"social_django.urls\", namespace=\"social\")),\n path(\n \"challenges/\",\n include(\"grandchallenge.challenges.urls\", namespace=\"challenges\"),\n ),\n re_path(\n r\"^(?i)all_challenges/$\",\n RedirectView.as_view(pattern_name=\"challenges:list\", permanent=False),\n ),\n path(\"cases/\", include(\"grandchallenge.cases.urls\", namespace=\"cases\")),\n path(\n \"algorithms/\",\n include(\"grandchallenge.algorithms.urls\", namespace=\"algorithms\"),\n ),\n # ========== catch all ====================\n # when all other urls have been checked, try to load page from main project\n # keep this url at the bottom of this list, because urls are checked in\n # order\n path(\"<slug:page_title>/\", comicmain, name=\"mainproject-home\"),\n path(\n \"media/\", include(\"grandchallenge.serving.urls\", namespace=\"serving\")\n ),\n]\nif settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:\n import debug_toolbar\n\n urlpatterns = [\n path(\"__debug__/\", include(debug_toolbar.urls))\n ] + urlpatterns\n", "path": "app/config/urls.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import include\nfrom django.contrib import admin\nfrom django.template.response import TemplateResponse\nfrom django.urls import re_path, path\nfrom django.views.generic import TemplateView, RedirectView\n\nfrom grandchallenge.core.views import comicmain\nfrom grandchallenge.pages.views import FaviconView\n\nadmin.autodiscover()\n\n\ndef handler500(request):\n context = {\"request\": request}\n template_name = \"500.html\"\n return TemplateResponse(request, template_name, context, status=500)\n\n\nurlpatterns = [\n path(\"\", comicmain, name=\"home\"),\n path(\n \"robots.txt/\",\n TemplateView.as_view(\n template_name=\"robots.txt\", content_type=\"text/plain\"\n ),\n ),\n # Favicons\n path(\n \"favicon.ico/\",\n FaviconView.as_view(rel=\"shortcut icon\"),\n name=\"favicon\",\n ),\n path(\n \"apple-touch-icon.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon\"),\n name=\"apple-touch-icon\",\n ),\n path(\n \"apple-touch-icon-precomposed.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon-precomposed\"),\n name=\"apple-touch-icon-precomposed\",\n ),\n path(\n \"apple-touch-icon-<int:size>x<int>.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon\"),\n name=\"apple-touch-icon-sized\",\n ),\n path(\n \"apple-touch-icon-<int:size>x<int>-precomposed.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon-precomposed\"),\n name=\"apple-touch-icon-precomposed-sized\",\n ),\n path(settings.ADMIN_URL, admin.site.urls),\n path(\n \"site/<slug:challenge_short_name>/\",\n include(\"grandchallenge.core.urls\"),\n name=\"site\",\n ),\n path(\n \"stats/\",\n include(\"grandchallenge.statistics.urls\", namespace=\"statistics\"),\n ),\n # Do not change the api namespace without updating the view names in\n # all of the serializers\n path(\"api/\", include(\"grandchallenge.api.urls\", namespace=\"api\")),\n # Used for logging in and managing grandchallenge.profiles. This is done on\n # the framework level because it is too hard to get this all under each\n # project\n path(\"accounts/\", include(\"grandchallenge.profiles.urls\")),\n path(\"socialauth/\", include(\"social_django.urls\", namespace=\"social\")),\n path(\n \"challenges/\",\n include(\"grandchallenge.challenges.urls\", namespace=\"challenges\"),\n ),\n path(\n \"all_challenges/\",\n RedirectView.as_view(pattern_name=\"challenges:list\", permanent=False),\n ),\n path(\n \"All_Challenges/\",\n RedirectView.as_view(pattern_name=\"challenges:list\", permanent=False),\n ),\n path(\"cases/\", include(\"grandchallenge.cases.urls\", namespace=\"cases\")),\n path(\n \"algorithms/\",\n include(\"grandchallenge.algorithms.urls\", namespace=\"algorithms\"),\n ),\n # ========== catch all ====================\n # when all other urls have been checked, try to load page from main project\n # keep this url at the bottom of this list, because urls are checked in\n # order\n path(\"<slug:page_title>/\", comicmain, name=\"mainproject-home\"),\n path(\n \"media/\", include(\"grandchallenge.serving.urls\", namespace=\"serving\")\n ),\n]\nif settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:\n import debug_toolbar\n\n urlpatterns = [\n path(\"__debug__/\", include(debug_toolbar.urls))\n ] + urlpatterns\n", "path": "app/config/urls.py"}]}
| 1,224 | 159 |
gh_patches_debug_32710
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-1599
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
If local premium DB is larger and more recent than remote don't ask user for replacement
## Problem Definition
If local premium DB is larger and more recent than remote don't ask user for replacement and instead replace remote.

## Task
Do it lol
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/premium/sync.py`
Content:
```
1 import base64
2 import logging
3 import shutil
4 from enum import Enum
5 from typing import Any, Dict, NamedTuple, Optional
6
7 from typing_extensions import Literal
8
9 from rotkehlchen.data_handler import DataHandler
10 from rotkehlchen.errors import (
11 PremiumAuthenticationError,
12 RemoteError,
13 RotkehlchenPermissionError,
14 UnableToDecryptRemoteData,
15 )
16 from rotkehlchen.logging import RotkehlchenLogsAdapter
17 from rotkehlchen.premium.premium import Premium, PremiumCredentials, premium_create_and_verify
18 from rotkehlchen.utils.misc import timestamp_to_date, ts_now
19
20 logger = logging.getLogger(__name__)
21 log = RotkehlchenLogsAdapter(logger)
22
23
24 class CanSync(Enum):
25 YES = 0
26 NO = 1
27 ASK_USER = 2
28
29
30 class SyncCheckResult(NamedTuple):
31 # The result of the sync check
32 can_sync: CanSync
33 # If result is ASK_USER, what should the message be?
34 message: str
35 payload: Optional[Dict[str, Any]]
36
37
38 class PremiumSyncManager():
39
40 def __init__(self, data: DataHandler, password: str) -> None:
41 self.last_data_upload_ts = 0
42 self.data = data
43 self.password = password
44 self.premium: Optional[Premium] = None
45
46 def _can_sync_data_from_server(self, new_account: bool) -> SyncCheckResult:
47 """
48 Checks if the remote data can be pulled from the server.
49
50 Returns a SyncCheckResult denoting whether we can pull for sure,
51 whether we can't pull or whether the user should be asked. If the user
52 should be asked a message is also returned
53 """
54 log.debug('can sync data from server -- start')
55 if self.premium is None:
56 return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
57
58 b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)
59
60 try:
61 metadata = self.premium.query_last_data_metadata()
62 except RemoteError as e:
63 log.debug('can sync data from server failed', error=str(e))
64 return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
65
66 if new_account:
67 return SyncCheckResult(can_sync=CanSync.YES, message='', payload=None)
68
69 if not self.data.db.get_premium_sync():
70 # If it's not a new account and the db setting for premium syncing is off stop
71 return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
72
73 log.debug(
74 'CAN_PULL',
75 ours=our_hash,
76 theirs=metadata.data_hash,
77 )
78 if our_hash == metadata.data_hash:
79 log.debug('sync from server stopped -- same hash')
80 # same hash -- no need to get anything
81 return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
82
83 our_last_write_ts = self.data.db.get_last_write_ts()
84 data_bytes_size = len(base64.b64decode(b64_encoded_data))
85 if our_last_write_ts >= metadata.last_modify_ts:
86 message = (
87 'Detected remote database BUT with older last modification timestamp '
88 'than the local one. '
89 )
90 else:
91 if data_bytes_size > metadata.data_size:
92 message = (
93 'Detected newer remote database BUT with smaller size than the local one. '
94 )
95 else:
96 message = 'Detected newer remote database. '
97
98 return SyncCheckResult(
99 can_sync=CanSync.ASK_USER,
100 message=message,
101 payload={
102 'local_size': data_bytes_size,
103 'remote_size': metadata.data_size,
104 'local_last_modified': timestamp_to_date(our_last_write_ts),
105 'remote_last_modified': timestamp_to_date(metadata.last_modify_ts),
106 },
107 )
108
109 def _sync_data_from_server_and_replace_local(self) -> bool:
110 """
111 Performs syncing of data from server and replaces local db
112
113 Returns true for success and False for error/failure
114
115 May raise:
116 - PremiumAuthenticationError due to an UnableToDecryptRemoteData
117 coming from decompress_and_decrypt_db. This happens when the given password
118 does not match the one on the saved DB.
119 """
120 assert self.premium, 'This function has to be called with a not None premium'
121 try:
122 result = self.premium.pull_data()
123 except RemoteError as e:
124 log.debug('sync from server -- pulling failed.', error=str(e))
125 return False
126
127 if result['data'] is None:
128 log.debug('sync from server -- no data found.')
129 return False
130
131 try:
132 self.data.decompress_and_decrypt_db(self.password, result['data'])
133 except UnableToDecryptRemoteData:
134 raise PremiumAuthenticationError(
135 'The given password can not unlock the database that was retrieved from '
136 'the server. Make sure to use the same password as when the account was created.',
137 )
138
139 return True
140
141 def maybe_upload_data_to_server(self) -> None:
142 # if user has no premium do nothing
143 if self.premium is None:
144 return
145
146 # upload only once per hour
147 diff = ts_now() - self.last_data_upload_ts
148 if diff < 3600:
149 return
150
151 b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)
152 try:
153 metadata = self.premium.query_last_data_metadata()
154 except RemoteError as e:
155 log.debug(
156 'upload to server stopped -- query last metadata failed',
157 error=str(e),
158 )
159 return
160
161 log.debug(
162 'CAN_PUSH',
163 ours=our_hash,
164 theirs=metadata.data_hash,
165 )
166 if our_hash == metadata.data_hash:
167 log.debug('upload to server stopped -- same hash')
168 # same hash -- no need to upload anything
169 return
170
171 our_last_write_ts = self.data.db.get_last_write_ts()
172 if our_last_write_ts <= metadata.last_modify_ts:
173 # Server's DB was modified after our local DB
174 log.debug('upload to server stopped -- remote db more recent than local')
175 return
176
177 data_bytes_size = len(base64.b64decode(b64_encoded_data))
178 if data_bytes_size < metadata.data_size:
179 # Let's be conservative.
180 # TODO: Here perhaps prompt user in the future
181 log.debug('upload to server stopped -- remote db bigger than local')
182 return
183
184 try:
185 self.premium.upload_data(
186 data_blob=b64_encoded_data,
187 our_hash=our_hash,
188 last_modify_ts=our_last_write_ts,
189 compression_type='zlib',
190 )
191 except RemoteError as e:
192 log.debug('upload to server -- upload error', error=str(e))
193 return
194
195 # update the last data upload value
196 self.last_data_upload_ts = ts_now()
197 self.data.db.update_last_data_upload_ts(self.last_data_upload_ts)
198 log.debug('upload to server -- success')
199
200 def try_premium_at_start(
201 self,
202 given_premium_credentials: Optional[PremiumCredentials],
203 username: str,
204 create_new: bool,
205 sync_approval: Literal['yes', 'no', 'unknown'],
206 ) -> Optional[Premium]:
207 """
208 Check if new user provided api pair or we already got one in the DB
209
210 Returns the created premium if user's premium credentials were fine.
211
212 If not it will raise PremiumAuthenticationError.
213
214 If no credentials were given it returns None
215 """
216
217 if given_premium_credentials is not None:
218 assert create_new, 'We should never get here for an already existing account'
219
220 try:
221 self.premium = premium_create_and_verify(given_premium_credentials)
222 except PremiumAuthenticationError as e:
223 log.error('Given API key is invalid')
224 # At this point we are at a new user trying to create an account with
225 # premium API keys and we failed. But a directory was created. Remove it.
226 # But create a backup of it in case something went really wrong
227 # and the directory contained data we did not want to lose
228 shutil.move(
229 self.data.user_data_dir, # type: ignore
230 self.data.data_directory / f'auto_backup_{username}_{ts_now()}',
231 )
232 raise PremiumAuthenticationError(
233 'Could not verify keys for the new account. '
234 '{}'.format(str(e)),
235 )
236
237 # else, if we got premium data in the DB initialize it and try to sync with the server
238 db_credentials = self.data.db.get_rotkehlchen_premium()
239 if db_credentials:
240 assert not create_new, 'We should never get here for a new account'
241 try:
242 self.premium = premium_create_and_verify(db_credentials)
243 except PremiumAuthenticationError as e:
244 message = (
245 f'Could not authenticate with the rotkehlchen server with '
246 f'the API keys found in the Database. Error: {str(e)}'
247 )
248 log.error(message)
249 raise PremiumAuthenticationError(message)
250
251 if self.premium is None:
252 return None
253
254 result = self._can_sync_data_from_server(new_account=create_new)
255 if result.can_sync == CanSync.ASK_USER:
256 if sync_approval == 'unknown':
257 log.info('DB data at server newer than local')
258 raise RotkehlchenPermissionError(result.message, result.payload)
259 elif sync_approval == 'yes':
260 log.info('User approved data sync from server')
261 if self._sync_data_from_server_and_replace_local():
262 if create_new:
263 # if we successfully synced data from the server and this is
264 # a new account, make sure the api keys are properly stored
265 # in the DB
266 self.data.db.set_rotkehlchen_premium(self.premium.credentials)
267 else:
268 log.debug('Could sync data from server but user refused')
269 elif result.can_sync == CanSync.YES:
270 log.info('User approved data sync from server')
271 if self._sync_data_from_server_and_replace_local():
272 if create_new:
273 # if we successfully synced data from the server and this is
274 # a new account, make sure the api keys are properly stored
275 # in the DB
276 self.data.db.set_rotkehlchen_premium(self.premium.credentials)
277
278 # else result.can_sync was no, so we do nothing
279
280 # Success, return premium
281 return self.premium
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rotkehlchen/premium/sync.py b/rotkehlchen/premium/sync.py
--- a/rotkehlchen/premium/sync.py
+++ b/rotkehlchen/premium/sync.py
@@ -82,18 +82,21 @@
our_last_write_ts = self.data.db.get_last_write_ts()
data_bytes_size = len(base64.b64decode(b64_encoded_data))
- if our_last_write_ts >= metadata.last_modify_ts:
+
+ local_more_recent = our_last_write_ts >= metadata.last_modify_ts
+ local_bigger = data_bytes_size >= metadata.data_size
+
+ if local_more_recent and local_bigger:
+ log.debug('sync from server stopped -- local is both newer and bigger')
+ return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
+
+ if local_more_recent is False: # remote is more recent
message = (
- 'Detected remote database BUT with older last modification timestamp '
+ 'Detected remote database with more recent modification timestamp '
'than the local one. '
)
- else:
- if data_bytes_size > metadata.data_size:
- message = (
- 'Detected newer remote database BUT with smaller size than the local one. '
- )
- else:
- message = 'Detected newer remote database. '
+ else: # remote is bigger
+ message = 'Detected remote database with bigger size than the local one. '
return SyncCheckResult(
can_sync=CanSync.ASK_USER,
@@ -254,7 +257,7 @@
result = self._can_sync_data_from_server(new_account=create_new)
if result.can_sync == CanSync.ASK_USER:
if sync_approval == 'unknown':
- log.info('DB data at server newer than local')
+ log.info('Remote DB is possibly newer. Ask user.')
raise RotkehlchenPermissionError(result.message, result.payload)
elif sync_approval == 'yes':
log.info('User approved data sync from server')
|
{"golden_diff": "diff --git a/rotkehlchen/premium/sync.py b/rotkehlchen/premium/sync.py\n--- a/rotkehlchen/premium/sync.py\n+++ b/rotkehlchen/premium/sync.py\n@@ -82,18 +82,21 @@\n \n our_last_write_ts = self.data.db.get_last_write_ts()\n data_bytes_size = len(base64.b64decode(b64_encoded_data))\n- if our_last_write_ts >= metadata.last_modify_ts:\n+\n+ local_more_recent = our_last_write_ts >= metadata.last_modify_ts\n+ local_bigger = data_bytes_size >= metadata.data_size\n+\n+ if local_more_recent and local_bigger:\n+ log.debug('sync from server stopped -- local is both newer and bigger')\n+ return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n+\n+ if local_more_recent is False: # remote is more recent\n message = (\n- 'Detected remote database BUT with older last modification timestamp '\n+ 'Detected remote database with more recent modification timestamp '\n 'than the local one. '\n )\n- else:\n- if data_bytes_size > metadata.data_size:\n- message = (\n- 'Detected newer remote database BUT with smaller size than the local one. '\n- )\n- else:\n- message = 'Detected newer remote database. '\n+ else: # remote is bigger\n+ message = 'Detected remote database with bigger size than the local one. '\n \n return SyncCheckResult(\n can_sync=CanSync.ASK_USER,\n@@ -254,7 +257,7 @@\n result = self._can_sync_data_from_server(new_account=create_new)\n if result.can_sync == CanSync.ASK_USER:\n if sync_approval == 'unknown':\n- log.info('DB data at server newer than local')\n+ log.info('Remote DB is possibly newer. Ask user.')\n raise RotkehlchenPermissionError(result.message, result.payload)\n elif sync_approval == 'yes':\n log.info('User approved data sync from server')\n", "issue": "If local premium DB is larger and more recent than remote don't ask user for replacement\n## Problem Definition\r\n\r\nIf local premium DB is larger and more recent than remote don't ask user for replacement and instead replace remote.\r\n\r\n\r\n\r\n## Task\r\n\r\nDo it lol\n", "before_files": [{"content": "import base64\nimport logging\nimport shutil\nfrom enum import Enum\nfrom typing import Any, Dict, NamedTuple, Optional\n\nfrom typing_extensions import Literal\n\nfrom rotkehlchen.data_handler import DataHandler\nfrom rotkehlchen.errors import (\n PremiumAuthenticationError,\n RemoteError,\n RotkehlchenPermissionError,\n UnableToDecryptRemoteData,\n)\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.premium.premium import Premium, PremiumCredentials, premium_create_and_verify\nfrom rotkehlchen.utils.misc import timestamp_to_date, ts_now\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass CanSync(Enum):\n YES = 0\n NO = 1\n ASK_USER = 2\n\n\nclass SyncCheckResult(NamedTuple):\n # The result of the sync check\n can_sync: CanSync\n # If result is ASK_USER, what should the message be?\n message: str\n payload: Optional[Dict[str, Any]]\n\n\nclass PremiumSyncManager():\n\n def __init__(self, data: DataHandler, password: str) -> None:\n self.last_data_upload_ts = 0\n self.data = data\n self.password = password\n self.premium: Optional[Premium] = None\n\n def _can_sync_data_from_server(self, new_account: bool) -> SyncCheckResult:\n \"\"\"\n Checks if the remote data can be pulled from the server.\n\n Returns a SyncCheckResult denoting whether we can pull for sure,\n whether we can't pull or whether the user should be asked. If the user\n should be asked a message is also returned\n \"\"\"\n log.debug('can sync data from server -- start')\n if self.premium is None:\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)\n\n try:\n metadata = self.premium.query_last_data_metadata()\n except RemoteError as e:\n log.debug('can sync data from server failed', error=str(e))\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n if new_account:\n return SyncCheckResult(can_sync=CanSync.YES, message='', payload=None)\n\n if not self.data.db.get_premium_sync():\n # If it's not a new account and the db setting for premium syncing is off stop\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n log.debug(\n 'CAN_PULL',\n ours=our_hash,\n theirs=metadata.data_hash,\n )\n if our_hash == metadata.data_hash:\n log.debug('sync from server stopped -- same hash')\n # same hash -- no need to get anything\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n our_last_write_ts = self.data.db.get_last_write_ts()\n data_bytes_size = len(base64.b64decode(b64_encoded_data))\n if our_last_write_ts >= metadata.last_modify_ts:\n message = (\n 'Detected remote database BUT with older last modification timestamp '\n 'than the local one. '\n )\n else:\n if data_bytes_size > metadata.data_size:\n message = (\n 'Detected newer remote database BUT with smaller size than the local one. '\n )\n else:\n message = 'Detected newer remote database. '\n\n return SyncCheckResult(\n can_sync=CanSync.ASK_USER,\n message=message,\n payload={\n 'local_size': data_bytes_size,\n 'remote_size': metadata.data_size,\n 'local_last_modified': timestamp_to_date(our_last_write_ts),\n 'remote_last_modified': timestamp_to_date(metadata.last_modify_ts),\n },\n )\n\n def _sync_data_from_server_and_replace_local(self) -> bool:\n \"\"\"\n Performs syncing of data from server and replaces local db\n\n Returns true for success and False for error/failure\n\n May raise:\n - PremiumAuthenticationError due to an UnableToDecryptRemoteData\n coming from decompress_and_decrypt_db. This happens when the given password\n does not match the one on the saved DB.\n \"\"\"\n assert self.premium, 'This function has to be called with a not None premium'\n try:\n result = self.premium.pull_data()\n except RemoteError as e:\n log.debug('sync from server -- pulling failed.', error=str(e))\n return False\n\n if result['data'] is None:\n log.debug('sync from server -- no data found.')\n return False\n\n try:\n self.data.decompress_and_decrypt_db(self.password, result['data'])\n except UnableToDecryptRemoteData:\n raise PremiumAuthenticationError(\n 'The given password can not unlock the database that was retrieved from '\n 'the server. Make sure to use the same password as when the account was created.',\n )\n\n return True\n\n def maybe_upload_data_to_server(self) -> None:\n # if user has no premium do nothing\n if self.premium is None:\n return\n\n # upload only once per hour\n diff = ts_now() - self.last_data_upload_ts\n if diff < 3600:\n return\n\n b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)\n try:\n metadata = self.premium.query_last_data_metadata()\n except RemoteError as e:\n log.debug(\n 'upload to server stopped -- query last metadata failed',\n error=str(e),\n )\n return\n\n log.debug(\n 'CAN_PUSH',\n ours=our_hash,\n theirs=metadata.data_hash,\n )\n if our_hash == metadata.data_hash:\n log.debug('upload to server stopped -- same hash')\n # same hash -- no need to upload anything\n return\n\n our_last_write_ts = self.data.db.get_last_write_ts()\n if our_last_write_ts <= metadata.last_modify_ts:\n # Server's DB was modified after our local DB\n log.debug('upload to server stopped -- remote db more recent than local')\n return\n\n data_bytes_size = len(base64.b64decode(b64_encoded_data))\n if data_bytes_size < metadata.data_size:\n # Let's be conservative.\n # TODO: Here perhaps prompt user in the future\n log.debug('upload to server stopped -- remote db bigger than local')\n return\n\n try:\n self.premium.upload_data(\n data_blob=b64_encoded_data,\n our_hash=our_hash,\n last_modify_ts=our_last_write_ts,\n compression_type='zlib',\n )\n except RemoteError as e:\n log.debug('upload to server -- upload error', error=str(e))\n return\n\n # update the last data upload value\n self.last_data_upload_ts = ts_now()\n self.data.db.update_last_data_upload_ts(self.last_data_upload_ts)\n log.debug('upload to server -- success')\n\n def try_premium_at_start(\n self,\n given_premium_credentials: Optional[PremiumCredentials],\n username: str,\n create_new: bool,\n sync_approval: Literal['yes', 'no', 'unknown'],\n ) -> Optional[Premium]:\n \"\"\"\n Check if new user provided api pair or we already got one in the DB\n\n Returns the created premium if user's premium credentials were fine.\n\n If not it will raise PremiumAuthenticationError.\n\n If no credentials were given it returns None\n \"\"\"\n\n if given_premium_credentials is not None:\n assert create_new, 'We should never get here for an already existing account'\n\n try:\n self.premium = premium_create_and_verify(given_premium_credentials)\n except PremiumAuthenticationError as e:\n log.error('Given API key is invalid')\n # At this point we are at a new user trying to create an account with\n # premium API keys and we failed. But a directory was created. Remove it.\n # But create a backup of it in case something went really wrong\n # and the directory contained data we did not want to lose\n shutil.move(\n self.data.user_data_dir, # type: ignore\n self.data.data_directory / f'auto_backup_{username}_{ts_now()}',\n )\n raise PremiumAuthenticationError(\n 'Could not verify keys for the new account. '\n '{}'.format(str(e)),\n )\n\n # else, if we got premium data in the DB initialize it and try to sync with the server\n db_credentials = self.data.db.get_rotkehlchen_premium()\n if db_credentials:\n assert not create_new, 'We should never get here for a new account'\n try:\n self.premium = premium_create_and_verify(db_credentials)\n except PremiumAuthenticationError as e:\n message = (\n f'Could not authenticate with the rotkehlchen server with '\n f'the API keys found in the Database. Error: {str(e)}'\n )\n log.error(message)\n raise PremiumAuthenticationError(message)\n\n if self.premium is None:\n return None\n\n result = self._can_sync_data_from_server(new_account=create_new)\n if result.can_sync == CanSync.ASK_USER:\n if sync_approval == 'unknown':\n log.info('DB data at server newer than local')\n raise RotkehlchenPermissionError(result.message, result.payload)\n elif sync_approval == 'yes':\n log.info('User approved data sync from server')\n if self._sync_data_from_server_and_replace_local():\n if create_new:\n # if we successfully synced data from the server and this is\n # a new account, make sure the api keys are properly stored\n # in the DB\n self.data.db.set_rotkehlchen_premium(self.premium.credentials)\n else:\n log.debug('Could sync data from server but user refused')\n elif result.can_sync == CanSync.YES:\n log.info('User approved data sync from server')\n if self._sync_data_from_server_and_replace_local():\n if create_new:\n # if we successfully synced data from the server and this is\n # a new account, make sure the api keys are properly stored\n # in the DB\n self.data.db.set_rotkehlchen_premium(self.premium.credentials)\n\n # else result.can_sync was no, so we do nothing\n\n # Success, return premium\n return self.premium\n", "path": "rotkehlchen/premium/sync.py"}], "after_files": [{"content": "import base64\nimport logging\nimport shutil\nfrom enum import Enum\nfrom typing import Any, Dict, NamedTuple, Optional\n\nfrom typing_extensions import Literal\n\nfrom rotkehlchen.data_handler import DataHandler\nfrom rotkehlchen.errors import (\n PremiumAuthenticationError,\n RemoteError,\n RotkehlchenPermissionError,\n UnableToDecryptRemoteData,\n)\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.premium.premium import Premium, PremiumCredentials, premium_create_and_verify\nfrom rotkehlchen.utils.misc import timestamp_to_date, ts_now\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass CanSync(Enum):\n YES = 0\n NO = 1\n ASK_USER = 2\n\n\nclass SyncCheckResult(NamedTuple):\n # The result of the sync check\n can_sync: CanSync\n # If result is ASK_USER, what should the message be?\n message: str\n payload: Optional[Dict[str, Any]]\n\n\nclass PremiumSyncManager():\n\n def __init__(self, data: DataHandler, password: str) -> None:\n self.last_data_upload_ts = 0\n self.data = data\n self.password = password\n self.premium: Optional[Premium] = None\n\n def _can_sync_data_from_server(self, new_account: bool) -> SyncCheckResult:\n \"\"\"\n Checks if the remote data can be pulled from the server.\n\n Returns a SyncCheckResult denoting whether we can pull for sure,\n whether we can't pull or whether the user should be asked. If the user\n should be asked a message is also returned\n \"\"\"\n log.debug('can sync data from server -- start')\n if self.premium is None:\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)\n\n try:\n metadata = self.premium.query_last_data_metadata()\n except RemoteError as e:\n log.debug('can sync data from server failed', error=str(e))\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n if new_account:\n return SyncCheckResult(can_sync=CanSync.YES, message='', payload=None)\n\n if not self.data.db.get_premium_sync():\n # If it's not a new account and the db setting for premium syncing is off stop\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n log.debug(\n 'CAN_PULL',\n ours=our_hash,\n theirs=metadata.data_hash,\n )\n if our_hash == metadata.data_hash:\n log.debug('sync from server stopped -- same hash')\n # same hash -- no need to get anything\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n our_last_write_ts = self.data.db.get_last_write_ts()\n data_bytes_size = len(base64.b64decode(b64_encoded_data))\n\n local_more_recent = our_last_write_ts >= metadata.last_modify_ts\n local_bigger = data_bytes_size >= metadata.data_size\n\n if local_more_recent and local_bigger:\n log.debug('sync from server stopped -- local is both newer and bigger')\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n if local_more_recent is False: # remote is more recent\n message = (\n 'Detected remote database with more recent modification timestamp '\n 'than the local one. '\n )\n else: # remote is bigger\n message = 'Detected remote database with bigger size than the local one. '\n\n return SyncCheckResult(\n can_sync=CanSync.ASK_USER,\n message=message,\n payload={\n 'local_size': data_bytes_size,\n 'remote_size': metadata.data_size,\n 'local_last_modified': timestamp_to_date(our_last_write_ts),\n 'remote_last_modified': timestamp_to_date(metadata.last_modify_ts),\n },\n )\n\n def _sync_data_from_server_and_replace_local(self) -> bool:\n \"\"\"\n Performs syncing of data from server and replaces local db\n\n Returns true for success and False for error/failure\n\n May raise:\n - PremiumAuthenticationError due to an UnableToDecryptRemoteData\n coming from decompress_and_decrypt_db. This happens when the given password\n does not match the one on the saved DB.\n \"\"\"\n assert self.premium, 'This function has to be called with a not None premium'\n try:\n result = self.premium.pull_data()\n except RemoteError as e:\n log.debug('sync from server -- pulling failed.', error=str(e))\n return False\n\n if result['data'] is None:\n log.debug('sync from server -- no data found.')\n return False\n\n try:\n self.data.decompress_and_decrypt_db(self.password, result['data'])\n except UnableToDecryptRemoteData:\n raise PremiumAuthenticationError(\n 'The given password can not unlock the database that was retrieved from '\n 'the server. Make sure to use the same password as when the account was created.',\n )\n\n return True\n\n def maybe_upload_data_to_server(self) -> None:\n # if user has no premium do nothing\n if self.premium is None:\n return\n\n # upload only once per hour\n diff = ts_now() - self.last_data_upload_ts\n if diff < 3600:\n return\n\n b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)\n try:\n metadata = self.premium.query_last_data_metadata()\n except RemoteError as e:\n log.debug(\n 'upload to server stopped -- query last metadata failed',\n error=str(e),\n )\n return\n\n log.debug(\n 'CAN_PUSH',\n ours=our_hash,\n theirs=metadata.data_hash,\n )\n if our_hash == metadata.data_hash:\n log.debug('upload to server stopped -- same hash')\n # same hash -- no need to upload anything\n return\n\n our_last_write_ts = self.data.db.get_last_write_ts()\n if our_last_write_ts <= metadata.last_modify_ts:\n # Server's DB was modified after our local DB\n log.debug('upload to server stopped -- remote db more recent than local')\n return\n\n data_bytes_size = len(base64.b64decode(b64_encoded_data))\n if data_bytes_size < metadata.data_size:\n # Let's be conservative.\n # TODO: Here perhaps prompt user in the future\n log.debug('upload to server stopped -- remote db bigger than local')\n return\n\n try:\n self.premium.upload_data(\n data_blob=b64_encoded_data,\n our_hash=our_hash,\n last_modify_ts=our_last_write_ts,\n compression_type='zlib',\n )\n except RemoteError as e:\n log.debug('upload to server -- upload error', error=str(e))\n return\n\n # update the last data upload value\n self.last_data_upload_ts = ts_now()\n self.data.db.update_last_data_upload_ts(self.last_data_upload_ts)\n log.debug('upload to server -- success')\n\n def try_premium_at_start(\n self,\n given_premium_credentials: Optional[PremiumCredentials],\n username: str,\n create_new: bool,\n sync_approval: Literal['yes', 'no', 'unknown'],\n ) -> Optional[Premium]:\n \"\"\"\n Check if new user provided api pair or we already got one in the DB\n\n Returns the created premium if user's premium credentials were fine.\n\n If not it will raise PremiumAuthenticationError.\n\n If no credentials were given it returns None\n \"\"\"\n\n if given_premium_credentials is not None:\n assert create_new, 'We should never get here for an already existing account'\n\n try:\n self.premium = premium_create_and_verify(given_premium_credentials)\n except PremiumAuthenticationError as e:\n log.error('Given API key is invalid')\n # At this point we are at a new user trying to create an account with\n # premium API keys and we failed. But a directory was created. Remove it.\n # But create a backup of it in case something went really wrong\n # and the directory contained data we did not want to lose\n shutil.move(\n self.data.user_data_dir, # type: ignore\n self.data.data_directory / f'auto_backup_{username}_{ts_now()}',\n )\n raise PremiumAuthenticationError(\n 'Could not verify keys for the new account. '\n '{}'.format(str(e)),\n )\n\n # else, if we got premium data in the DB initialize it and try to sync with the server\n db_credentials = self.data.db.get_rotkehlchen_premium()\n if db_credentials:\n assert not create_new, 'We should never get here for a new account'\n try:\n self.premium = premium_create_and_verify(db_credentials)\n except PremiumAuthenticationError as e:\n message = (\n f'Could not authenticate with the rotkehlchen server with '\n f'the API keys found in the Database. Error: {str(e)}'\n )\n log.error(message)\n raise PremiumAuthenticationError(message)\n\n if self.premium is None:\n return None\n\n result = self._can_sync_data_from_server(new_account=create_new)\n if result.can_sync == CanSync.ASK_USER:\n if sync_approval == 'unknown':\n log.info('Remote DB is possibly newer. Ask user.')\n raise RotkehlchenPermissionError(result.message, result.payload)\n elif sync_approval == 'yes':\n log.info('User approved data sync from server')\n if self._sync_data_from_server_and_replace_local():\n if create_new:\n # if we successfully synced data from the server and this is\n # a new account, make sure the api keys are properly stored\n # in the DB\n self.data.db.set_rotkehlchen_premium(self.premium.credentials)\n else:\n log.debug('Could sync data from server but user refused')\n elif result.can_sync == CanSync.YES:\n log.info('User approved data sync from server')\n if self._sync_data_from_server_and_replace_local():\n if create_new:\n # if we successfully synced data from the server and this is\n # a new account, make sure the api keys are properly stored\n # in the DB\n self.data.db.set_rotkehlchen_premium(self.premium.credentials)\n\n # else result.can_sync was no, so we do nothing\n\n # Success, return premium\n return self.premium\n", "path": "rotkehlchen/premium/sync.py"}]}
| 3,375 | 460 |
gh_patches_debug_11530
|
rasdani/github-patches
|
git_diff
|
dmlc__dgl-3841
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
I can't run dgl/examples/pytorch/han/train_sampling.py with cuda
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. I just run dgl/examples/pytorch/han/train_sampling.py with cuda, it has an error
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
<img width="1103" alt="image" src="https://user-images.githubusercontent.com/53086386/156488968-32cde64a-1c14-4de7-93de-211ca4dd306e.png">
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
- DGL Version: 0.8.0+cu11.1
- Backend Library & Version: PyTorch0.9.1
- OS: Linux
- How you installed DGL: pip
- Build command you used (if compiling from source):
- Python version: 3.7.4
- CUDA/cuDNN version: cuda11.1
- GPU models and configuration: A100
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/dgl/utils/checks.py`
Content:
```
1 """Checking and logging utilities."""
2 # pylint: disable=invalid-name
3 from __future__ import absolute_import, division
4 from collections.abc import Mapping
5
6 from ..base import DGLError
7 from .._ffi.function import _init_api
8 from .. import backend as F
9
10 def prepare_tensor(g, data, name):
11 """Convert the data to ID tensor and check its ID type and context.
12
13 If the data is already in tensor type, raise error if its ID type
14 and context does not match the graph's.
15 Otherwise, convert it to tensor type of the graph's ID type and
16 ctx and return.
17
18 Parameters
19 ----------
20 g : DGLHeteroGraph
21 Graph.
22 data : int, iterable of int, tensor
23 Data.
24 name : str
25 Name of the data.
26
27 Returns
28 -------
29 Tensor
30 Data in tensor object.
31 """
32 if F.is_tensor(data):
33 if not g.is_pinned() and (F.dtype(data) != g.idtype or F.context(data) != g.device):
34 raise DGLError('Expect argument "{}" to have data type {} and device '
35 'context {}. But got {} and {}.'.format(
36 name, g.idtype, g.device, F.dtype(data), F.context(data)))
37 ret = data
38 else:
39 data = F.tensor(data)
40 if (not (F.ndim(data) > 0 and F.shape(data)[0] == 0) and # empty tensor
41 F.dtype(data) not in (F.int32, F.int64)):
42 raise DGLError('Expect argument "{}" to have data type int32 or int64,'
43 ' but got {}.'.format(name, F.dtype(data)))
44 ret = F.copy_to(F.astype(data, g.idtype), g.device)
45
46 if F.ndim(ret) == 0:
47 ret = F.unsqueeze(ret, 0)
48 if F.ndim(ret) > 1:
49 raise DGLError('Expect a 1-D tensor for argument "{}". But got {}.'.format(
50 name, ret))
51 return ret
52
53 def prepare_tensor_dict(g, data, name):
54 """Convert a dictionary of data to a dictionary of ID tensors.
55
56 Calls ``prepare_tensor`` on each key-value pair.
57
58 Parameters
59 ----------
60 g : DGLHeteroGraph
61 Graph.
62 data : dict[str, (int, iterable of int, tensor)]
63 Data dict.
64 name : str
65 Name of the data.
66
67 Returns
68 -------
69 dict[str, tensor]
70 """
71 return {key : prepare_tensor(g, val, '{}["{}"]'.format(name, key))
72 for key, val in data.items()}
73
74 def prepare_tensor_or_dict(g, data, name):
75 """Convert data to either a tensor or a dictionary depending on input type.
76
77 Parameters
78 ----------
79 g : DGLHeteroGraph
80 Graph.
81 data : dict[str, (int, iterable of int, tensor)]
82 Data dict.
83 name : str
84 Name of the data.
85
86 Returns
87 -------
88 tensor or dict[str, tensor]
89 """
90 return prepare_tensor_dict(g, data, name) if isinstance(data, Mapping) \
91 else prepare_tensor(g, data, name)
92
93 def parse_edges_arg_to_eid(g, edges, etid, argname='edges'):
94 """Parse the :attr:`edges` argument and return an edge ID tensor.
95
96 The resulting edge ID tensor has the same ID type and device of :attr:`g`.
97
98 Parameters
99 ----------
100 g : DGLGraph
101 Graph
102 edges : pair of Tensor, Tensor, iterable[int]
103 Argument for specifying edges.
104 etid : int
105 Edge type ID.
106 argname : str, optional
107 Argument name.
108
109 Returns
110 -------
111 Tensor
112 Edge ID tensor
113 """
114 if isinstance(edges, tuple):
115 u, v = edges
116 u = prepare_tensor(g, u, '{}[0]'.format(argname))
117 v = prepare_tensor(g, v, '{}[1]'.format(argname))
118 eid = g.edge_ids(u, v, etype=g.canonical_etypes[etid])
119 else:
120 eid = prepare_tensor(g, edges, argname)
121 return eid
122
123 def check_all_same_idtype(glist, name):
124 """Check all the graphs have the same idtype."""
125 if len(glist) == 0:
126 return
127 idtype = glist[0].idtype
128 for i, g in enumerate(glist):
129 if g.idtype != idtype:
130 raise DGLError('Expect {}[{}] to have {} type ID, but got {}.'.format(
131 name, i, idtype, g.idtype))
132
133 def check_device(data, device):
134 """Check if data is on the target device.
135
136 Parameters
137 ----------
138 data : Tensor or dict[str, Tensor]
139 device: Backend device.
140
141 Returns
142 -------
143 Bool: True if the data is on the target device.
144 """
145 if isinstance(data, dict):
146 for v in data.values():
147 if v.device != device:
148 return False
149 elif data.device != device:
150 return False
151 return True
152
153 def check_all_same_device(glist, name):
154 """Check all the graphs have the same device."""
155 if len(glist) == 0:
156 return
157 device = glist[0].device
158 for i, g in enumerate(glist):
159 if g.device != device:
160 raise DGLError('Expect {}[{}] to be on device {}, but got {}.'.format(
161 name, i, device, g.device))
162
163 def check_all_same_schema(schemas, name):
164 """Check the list of schemas are the same."""
165 if len(schemas) == 0:
166 return
167
168 for i, schema in enumerate(schemas):
169 if schema != schemas[0]:
170 raise DGLError(
171 'Expect all graphs to have the same schema on {}, '
172 'but graph {} got\n\t{}\nwhich is different from\n\t{}.'.format(
173 name, i, schema, schemas[0]))
174
175 def check_all_same_schema_for_keys(schemas, keys, name):
176 """Check the list of schemas are the same on the given keys."""
177 if len(schemas) == 0:
178 return
179
180 head = None
181 keys = set(keys)
182 for i, schema in enumerate(schemas):
183 if not keys.issubset(schema.keys()):
184 raise DGLError(
185 'Expect all graphs to have keys {} on {}, '
186 'but graph {} got keys {}.'.format(
187 keys, name, i, schema.keys()))
188
189 if head is None:
190 head = {k: schema[k] for k in keys}
191 else:
192 target = {k: schema[k] for k in keys}
193 if target != head:
194 raise DGLError(
195 'Expect all graphs to have the same schema for keys {} on {}, '
196 'but graph {} got \n\t{}\n which is different from\n\t{}.'.format(
197 keys, name, i, target, head))
198
199 def check_valid_idtype(idtype):
200 """Check whether the value of the idtype argument is valid (int32/int64)
201
202 Parameters
203 ----------
204 idtype : data type
205 The framework object of a data type.
206 """
207 if idtype not in [None, F.int32, F.int64]:
208 raise DGLError('Expect idtype to be a framework object of int32/int64, '
209 'got {}'.format(idtype))
210
211 def is_sorted_srcdst(src, dst, num_src=None, num_dst=None):
212 """Checks whether an edge list is in ascending src-major order (e.g., first
213 sorted by ``src`` and then by ``dst``).
214
215 Parameters
216 ----------
217 src : IdArray
218 The tensor of source nodes for each edge.
219 dst : IdArray
220 The tensor of destination nodes for each edge.
221 num_src : int, optional
222 The number of source nodes.
223 num_dst : int, optional
224 The number of destination nodes.
225
226 Returns
227 -------
228 bool, bool
229 Whether ``src`` is in ascending order, and whether ``dst`` is
230 in ascending order with respect to ``src``.
231 """
232 # for some versions of MXNET and TensorFlow, num_src and num_dst get
233 # incorrectly marked as floats, so force them as integers here
234 if num_src is None:
235 num_src = int(F.as_scalar(F.max(src, dim=0)+1))
236 if num_dst is None:
237 num_dst = int(F.as_scalar(F.max(dst, dim=0)+1))
238
239 src = F.zerocopy_to_dgl_ndarray(src)
240 dst = F.zerocopy_to_dgl_ndarray(dst)
241 sorted_status = _CAPI_DGLCOOIsSorted(src, dst, num_src, num_dst)
242
243 row_sorted = sorted_status > 0
244 col_sorted = sorted_status > 1
245
246 return row_sorted, col_sorted
247
248 _init_api("dgl.utils.checks")
249
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/dgl/utils/checks.py b/python/dgl/utils/checks.py
--- a/python/dgl/utils/checks.py
+++ b/python/dgl/utils/checks.py
@@ -30,7 +30,7 @@
Data in tensor object.
"""
if F.is_tensor(data):
- if not g.is_pinned() and (F.dtype(data) != g.idtype or F.context(data) != g.device):
+ if (F.dtype(data) != g.idtype or F.context(data) != g.device) and not g.is_pinned():
raise DGLError('Expect argument "{}" to have data type {} and device '
'context {}. But got {} and {}.'.format(
name, g.idtype, g.device, F.dtype(data), F.context(data)))
|
{"golden_diff": "diff --git a/python/dgl/utils/checks.py b/python/dgl/utils/checks.py\n--- a/python/dgl/utils/checks.py\n+++ b/python/dgl/utils/checks.py\n@@ -30,7 +30,7 @@\n Data in tensor object.\n \"\"\"\n if F.is_tensor(data):\n- if not g.is_pinned() and (F.dtype(data) != g.idtype or F.context(data) != g.device):\n+ if (F.dtype(data) != g.idtype or F.context(data) != g.device) and not g.is_pinned():\n raise DGLError('Expect argument \"{}\" to have data type {} and device '\n 'context {}. But got {} and {}.'.format(\n name, g.idtype, g.device, F.dtype(data), F.context(data)))\n", "issue": "I can't run dgl/examples/pytorch/han/train_sampling.py with cuda\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. I just run dgl/examples/pytorch/han/train_sampling.py with cuda, it has an error\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n<img width=\"1103\" alt=\"image\" src=\"https://user-images.githubusercontent.com/53086386/156488968-32cde64a-1c14-4de7-93de-211ca4dd306e.png\">\r\n\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\n - DGL Version: 0.8.0+cu11.1\r\n - Backend Library & Version: PyTorch0.9.1\r\n - OS: Linux\r\n - How you installed DGL: pip\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.7.4\r\n - CUDA/cuDNN version: cuda11.1\r\n - GPU models and configuration: A100\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "\"\"\"Checking and logging utilities.\"\"\"\n# pylint: disable=invalid-name\nfrom __future__ import absolute_import, division\nfrom collections.abc import Mapping\n\nfrom ..base import DGLError\nfrom .._ffi.function import _init_api\nfrom .. import backend as F\n\ndef prepare_tensor(g, data, name):\n \"\"\"Convert the data to ID tensor and check its ID type and context.\n\n If the data is already in tensor type, raise error if its ID type\n and context does not match the graph's.\n Otherwise, convert it to tensor type of the graph's ID type and\n ctx and return.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : int, iterable of int, tensor\n Data.\n name : str\n Name of the data.\n\n Returns\n -------\n Tensor\n Data in tensor object.\n \"\"\"\n if F.is_tensor(data):\n if not g.is_pinned() and (F.dtype(data) != g.idtype or F.context(data) != g.device):\n raise DGLError('Expect argument \"{}\" to have data type {} and device '\n 'context {}. But got {} and {}.'.format(\n name, g.idtype, g.device, F.dtype(data), F.context(data)))\n ret = data\n else:\n data = F.tensor(data)\n if (not (F.ndim(data) > 0 and F.shape(data)[0] == 0) and # empty tensor\n F.dtype(data) not in (F.int32, F.int64)):\n raise DGLError('Expect argument \"{}\" to have data type int32 or int64,'\n ' but got {}.'.format(name, F.dtype(data)))\n ret = F.copy_to(F.astype(data, g.idtype), g.device)\n\n if F.ndim(ret) == 0:\n ret = F.unsqueeze(ret, 0)\n if F.ndim(ret) > 1:\n raise DGLError('Expect a 1-D tensor for argument \"{}\". But got {}.'.format(\n name, ret))\n return ret\n\ndef prepare_tensor_dict(g, data, name):\n \"\"\"Convert a dictionary of data to a dictionary of ID tensors.\n\n Calls ``prepare_tensor`` on each key-value pair.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : dict[str, (int, iterable of int, tensor)]\n Data dict.\n name : str\n Name of the data.\n\n Returns\n -------\n dict[str, tensor]\n \"\"\"\n return {key : prepare_tensor(g, val, '{}[\"{}\"]'.format(name, key))\n for key, val in data.items()}\n\ndef prepare_tensor_or_dict(g, data, name):\n \"\"\"Convert data to either a tensor or a dictionary depending on input type.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : dict[str, (int, iterable of int, tensor)]\n Data dict.\n name : str\n Name of the data.\n\n Returns\n -------\n tensor or dict[str, tensor]\n \"\"\"\n return prepare_tensor_dict(g, data, name) if isinstance(data, Mapping) \\\n else prepare_tensor(g, data, name)\n\ndef parse_edges_arg_to_eid(g, edges, etid, argname='edges'):\n \"\"\"Parse the :attr:`edges` argument and return an edge ID tensor.\n\n The resulting edge ID tensor has the same ID type and device of :attr:`g`.\n\n Parameters\n ----------\n g : DGLGraph\n Graph\n edges : pair of Tensor, Tensor, iterable[int]\n Argument for specifying edges.\n etid : int\n Edge type ID.\n argname : str, optional\n Argument name.\n\n Returns\n -------\n Tensor\n Edge ID tensor\n \"\"\"\n if isinstance(edges, tuple):\n u, v = edges\n u = prepare_tensor(g, u, '{}[0]'.format(argname))\n v = prepare_tensor(g, v, '{}[1]'.format(argname))\n eid = g.edge_ids(u, v, etype=g.canonical_etypes[etid])\n else:\n eid = prepare_tensor(g, edges, argname)\n return eid\n\ndef check_all_same_idtype(glist, name):\n \"\"\"Check all the graphs have the same idtype.\"\"\"\n if len(glist) == 0:\n return\n idtype = glist[0].idtype\n for i, g in enumerate(glist):\n if g.idtype != idtype:\n raise DGLError('Expect {}[{}] to have {} type ID, but got {}.'.format(\n name, i, idtype, g.idtype))\n\ndef check_device(data, device):\n \"\"\"Check if data is on the target device.\n\n Parameters\n ----------\n data : Tensor or dict[str, Tensor]\n device: Backend device.\n\n Returns\n -------\n Bool: True if the data is on the target device.\n \"\"\"\n if isinstance(data, dict):\n for v in data.values():\n if v.device != device:\n return False\n elif data.device != device:\n return False\n return True\n\ndef check_all_same_device(glist, name):\n \"\"\"Check all the graphs have the same device.\"\"\"\n if len(glist) == 0:\n return\n device = glist[0].device\n for i, g in enumerate(glist):\n if g.device != device:\n raise DGLError('Expect {}[{}] to be on device {}, but got {}.'.format(\n name, i, device, g.device))\n\ndef check_all_same_schema(schemas, name):\n \"\"\"Check the list of schemas are the same.\"\"\"\n if len(schemas) == 0:\n return\n\n for i, schema in enumerate(schemas):\n if schema != schemas[0]:\n raise DGLError(\n 'Expect all graphs to have the same schema on {}, '\n 'but graph {} got\\n\\t{}\\nwhich is different from\\n\\t{}.'.format(\n name, i, schema, schemas[0]))\n\ndef check_all_same_schema_for_keys(schemas, keys, name):\n \"\"\"Check the list of schemas are the same on the given keys.\"\"\"\n if len(schemas) == 0:\n return\n\n head = None\n keys = set(keys)\n for i, schema in enumerate(schemas):\n if not keys.issubset(schema.keys()):\n raise DGLError(\n 'Expect all graphs to have keys {} on {}, '\n 'but graph {} got keys {}.'.format(\n keys, name, i, schema.keys()))\n\n if head is None:\n head = {k: schema[k] for k in keys}\n else:\n target = {k: schema[k] for k in keys}\n if target != head:\n raise DGLError(\n 'Expect all graphs to have the same schema for keys {} on {}, '\n 'but graph {} got \\n\\t{}\\n which is different from\\n\\t{}.'.format(\n keys, name, i, target, head))\n\ndef check_valid_idtype(idtype):\n \"\"\"Check whether the value of the idtype argument is valid (int32/int64)\n\n Parameters\n ----------\n idtype : data type\n The framework object of a data type.\n \"\"\"\n if idtype not in [None, F.int32, F.int64]:\n raise DGLError('Expect idtype to be a framework object of int32/int64, '\n 'got {}'.format(idtype))\n\ndef is_sorted_srcdst(src, dst, num_src=None, num_dst=None):\n \"\"\"Checks whether an edge list is in ascending src-major order (e.g., first\n sorted by ``src`` and then by ``dst``).\n\n Parameters\n ----------\n src : IdArray\n The tensor of source nodes for each edge.\n dst : IdArray\n The tensor of destination nodes for each edge.\n num_src : int, optional\n The number of source nodes.\n num_dst : int, optional\n The number of destination nodes.\n\n Returns\n -------\n bool, bool\n Whether ``src`` is in ascending order, and whether ``dst`` is\n in ascending order with respect to ``src``.\n \"\"\"\n # for some versions of MXNET and TensorFlow, num_src and num_dst get\n # incorrectly marked as floats, so force them as integers here\n if num_src is None:\n num_src = int(F.as_scalar(F.max(src, dim=0)+1))\n if num_dst is None:\n num_dst = int(F.as_scalar(F.max(dst, dim=0)+1))\n\n src = F.zerocopy_to_dgl_ndarray(src)\n dst = F.zerocopy_to_dgl_ndarray(dst)\n sorted_status = _CAPI_DGLCOOIsSorted(src, dst, num_src, num_dst)\n\n row_sorted = sorted_status > 0\n col_sorted = sorted_status > 1\n\n return row_sorted, col_sorted\n\n_init_api(\"dgl.utils.checks\")\n", "path": "python/dgl/utils/checks.py"}], "after_files": [{"content": "\"\"\"Checking and logging utilities.\"\"\"\n# pylint: disable=invalid-name\nfrom __future__ import absolute_import, division\nfrom collections.abc import Mapping\n\nfrom ..base import DGLError\nfrom .._ffi.function import _init_api\nfrom .. import backend as F\n\ndef prepare_tensor(g, data, name):\n \"\"\"Convert the data to ID tensor and check its ID type and context.\n\n If the data is already in tensor type, raise error if its ID type\n and context does not match the graph's.\n Otherwise, convert it to tensor type of the graph's ID type and\n ctx and return.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : int, iterable of int, tensor\n Data.\n name : str\n Name of the data.\n\n Returns\n -------\n Tensor\n Data in tensor object.\n \"\"\"\n if F.is_tensor(data):\n if (F.dtype(data) != g.idtype or F.context(data) != g.device) and not g.is_pinned():\n raise DGLError('Expect argument \"{}\" to have data type {} and device '\n 'context {}. But got {} and {}.'.format(\n name, g.idtype, g.device, F.dtype(data), F.context(data)))\n ret = data\n else:\n data = F.tensor(data)\n if (not (F.ndim(data) > 0 and F.shape(data)[0] == 0) and # empty tensor\n F.dtype(data) not in (F.int32, F.int64)):\n raise DGLError('Expect argument \"{}\" to have data type int32 or int64,'\n ' but got {}.'.format(name, F.dtype(data)))\n ret = F.copy_to(F.astype(data, g.idtype), g.device)\n\n if F.ndim(ret) == 0:\n ret = F.unsqueeze(ret, 0)\n if F.ndim(ret) > 1:\n raise DGLError('Expect a 1-D tensor for argument \"{}\". But got {}.'.format(\n name, ret))\n return ret\n\ndef prepare_tensor_dict(g, data, name):\n \"\"\"Convert a dictionary of data to a dictionary of ID tensors.\n\n Calls ``prepare_tensor`` on each key-value pair.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : dict[str, (int, iterable of int, tensor)]\n Data dict.\n name : str\n Name of the data.\n\n Returns\n -------\n dict[str, tensor]\n \"\"\"\n return {key : prepare_tensor(g, val, '{}[\"{}\"]'.format(name, key))\n for key, val in data.items()}\n\ndef prepare_tensor_or_dict(g, data, name):\n \"\"\"Convert data to either a tensor or a dictionary depending on input type.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : dict[str, (int, iterable of int, tensor)]\n Data dict.\n name : str\n Name of the data.\n\n Returns\n -------\n tensor or dict[str, tensor]\n \"\"\"\n return prepare_tensor_dict(g, data, name) if isinstance(data, Mapping) \\\n else prepare_tensor(g, data, name)\n\ndef parse_edges_arg_to_eid(g, edges, etid, argname='edges'):\n \"\"\"Parse the :attr:`edges` argument and return an edge ID tensor.\n\n The resulting edge ID tensor has the same ID type and device of :attr:`g`.\n\n Parameters\n ----------\n g : DGLGraph\n Graph\n edges : pair of Tensor, Tensor, iterable[int]\n Argument for specifying edges.\n etid : int\n Edge type ID.\n argname : str, optional\n Argument name.\n\n Returns\n -------\n Tensor\n Edge ID tensor\n \"\"\"\n if isinstance(edges, tuple):\n u, v = edges\n u = prepare_tensor(g, u, '{}[0]'.format(argname))\n v = prepare_tensor(g, v, '{}[1]'.format(argname))\n eid = g.edge_ids(u, v, etype=g.canonical_etypes[etid])\n else:\n eid = prepare_tensor(g, edges, argname)\n return eid\n\ndef check_all_same_idtype(glist, name):\n \"\"\"Check all the graphs have the same idtype.\"\"\"\n if len(glist) == 0:\n return\n idtype = glist[0].idtype\n for i, g in enumerate(glist):\n if g.idtype != idtype:\n raise DGLError('Expect {}[{}] to have {} type ID, but got {}.'.format(\n name, i, idtype, g.idtype))\n\ndef check_device(data, device):\n \"\"\"Check if data is on the target device.\n\n Parameters\n ----------\n data : Tensor or dict[str, Tensor]\n device: Backend device.\n\n Returns\n -------\n Bool: True if the data is on the target device.\n \"\"\"\n if isinstance(data, dict):\n for v in data.values():\n if v.device != device:\n return False\n elif data.device != device:\n return False\n return True\n\ndef check_all_same_device(glist, name):\n \"\"\"Check all the graphs have the same device.\"\"\"\n if len(glist) == 0:\n return\n device = glist[0].device\n for i, g in enumerate(glist):\n if g.device != device:\n raise DGLError('Expect {}[{}] to be on device {}, but got {}.'.format(\n name, i, device, g.device))\n\ndef check_all_same_schema(schemas, name):\n \"\"\"Check the list of schemas are the same.\"\"\"\n if len(schemas) == 0:\n return\n\n for i, schema in enumerate(schemas):\n if schema != schemas[0]:\n raise DGLError(\n 'Expect all graphs to have the same schema on {}, '\n 'but graph {} got\\n\\t{}\\nwhich is different from\\n\\t{}.'.format(\n name, i, schema, schemas[0]))\n\ndef check_all_same_schema_for_keys(schemas, keys, name):\n \"\"\"Check the list of schemas are the same on the given keys.\"\"\"\n if len(schemas) == 0:\n return\n\n head = None\n keys = set(keys)\n for i, schema in enumerate(schemas):\n if not keys.issubset(schema.keys()):\n raise DGLError(\n 'Expect all graphs to have keys {} on {}, '\n 'but graph {} got keys {}.'.format(\n keys, name, i, schema.keys()))\n\n if head is None:\n head = {k: schema[k] for k in keys}\n else:\n target = {k: schema[k] for k in keys}\n if target != head:\n raise DGLError(\n 'Expect all graphs to have the same schema for keys {} on {}, '\n 'but graph {} got \\n\\t{}\\n which is different from\\n\\t{}.'.format(\n keys, name, i, target, head))\n\ndef check_valid_idtype(idtype):\n \"\"\"Check whether the value of the idtype argument is valid (int32/int64)\n\n Parameters\n ----------\n idtype : data type\n The framework object of a data type.\n \"\"\"\n if idtype not in [None, F.int32, F.int64]:\n raise DGLError('Expect idtype to be a framework object of int32/int64, '\n 'got {}'.format(idtype))\n\ndef is_sorted_srcdst(src, dst, num_src=None, num_dst=None):\n \"\"\"Checks whether an edge list is in ascending src-major order (e.g., first\n sorted by ``src`` and then by ``dst``).\n\n Parameters\n ----------\n src : IdArray\n The tensor of source nodes for each edge.\n dst : IdArray\n The tensor of destination nodes for each edge.\n num_src : int, optional\n The number of source nodes.\n num_dst : int, optional\n The number of destination nodes.\n\n Returns\n -------\n bool, bool\n Whether ``src`` is in ascending order, and whether ``dst`` is\n in ascending order with respect to ``src``.\n \"\"\"\n # for some versions of MXNET and TensorFlow, num_src and num_dst get\n # incorrectly marked as floats, so force them as integers here\n if num_src is None:\n num_src = int(F.as_scalar(F.max(src, dim=0)+1))\n if num_dst is None:\n num_dst = int(F.as_scalar(F.max(dst, dim=0)+1))\n\n src = F.zerocopy_to_dgl_ndarray(src)\n dst = F.zerocopy_to_dgl_ndarray(dst)\n sorted_status = _CAPI_DGLCOOIsSorted(src, dst, num_src, num_dst)\n\n row_sorted = sorted_status > 0\n col_sorted = sorted_status > 1\n\n return row_sorted, col_sorted\n\n_init_api(\"dgl.utils.checks\")\n", "path": "python/dgl/utils/checks.py"}]}
| 3,171 | 170 |
gh_patches_debug_3171
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1743
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot enter float number as contribution during benefactor renewal
### Describe the bug
Cannot enter float number as contribution during benefactor renewal
### How to reproduce
Steps to reproduce the behaviour:
1. Go to http://localhost:8000/user/membership/
2. Add a benefactor membership renewal
3. The form will not error when trying to send
### Expected behaviour
Can enter float numbers as contribution. Like 7.69.
### Screenshots
<img width="691" alt="Screenshot 2021-06-20 at 18 48 35" src="https://user-images.githubusercontent.com/1799914/122682192-4edaf880-d1f8-11eb-99e2-26eaf5379ae8.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/registrations/forms.py`
Content:
```
1 """The forms defined by the registrations package."""
2 from django import forms
3 from django.core.exceptions import NON_FIELD_ERRORS, ValidationError
4 from django.forms import TypedChoiceField
5 from django.urls import reverse_lazy
6 from django.utils import timezone
7 from django.utils.safestring import mark_safe
8 from django.utils.text import capfirst
9 from django.utils.translation import gettext_lazy as _
10
11 from members.models import Membership
12 from payments.widgets import SignatureWidget
13 from registrations import services
14 from .models import Registration, Renewal, Reference
15
16
17 class BaseRegistrationForm(forms.ModelForm):
18 """Base form for membership registrations."""
19
20 birthday = forms.DateField(
21 widget=forms.widgets.SelectDateWidget(
22 years=range(timezone.now().year - 50, timezone.now().year - 10)
23 ),
24 label=capfirst(_("birthday")),
25 )
26
27 privacy_policy = forms.BooleanField(required=True,)
28
29 def __init__(self, *args, **kwargs):
30 super().__init__(*args, **kwargs)
31 self.fields["privacy_policy"].label = mark_safe(
32 _('I accept the <a href="{}">privacy policy</a>.').format(
33 reverse_lazy("singlepages:privacy-policy")
34 )
35 )
36
37
38 class RegistrationAdminForm(forms.ModelForm):
39 """Custom admin form for Registration model to add the widget for the signature."""
40
41 class Meta:
42 fields = "__all__"
43 model = Registration
44 widgets = {
45 "signature": SignatureWidget(),
46 }
47
48
49 class MemberRegistrationForm(BaseRegistrationForm):
50 """Form for member registrations."""
51
52 this_year = timezone.now().year
53 years = reversed(
54 [(x, "{} - {}".format(x, x + 1)) for x in range(this_year - 20, this_year + 1)]
55 )
56
57 starting_year = TypedChoiceField(
58 choices=years,
59 coerce=int,
60 empty_value=this_year,
61 required=False,
62 help_text=_("What lecture year did you start studying at Radboud University?"),
63 )
64
65 class Meta:
66 model = Registration
67 widgets = {
68 "signature": SignatureWidget(),
69 }
70 fields = (
71 "length",
72 "first_name",
73 "last_name",
74 "birthday",
75 "email",
76 "phone_number",
77 "student_number",
78 "programme",
79 "starting_year",
80 "address_street",
81 "address_street2",
82 "address_postal_code",
83 "address_city",
84 "address_country",
85 "optin_birthday",
86 "optin_mailinglist",
87 "membership_type",
88 "direct_debit",
89 "initials",
90 "iban",
91 "bic",
92 "signature",
93 )
94
95
96 class BenefactorRegistrationForm(BaseRegistrationForm):
97 """Form for benefactor registrations."""
98
99 icis_employee = forms.BooleanField(
100 required=False, label=_("I am an employee of iCIS")
101 )
102
103 class Meta:
104 model = Registration
105 widgets = {
106 "signature": SignatureWidget(),
107 }
108 fields = (
109 "length",
110 "first_name",
111 "last_name",
112 "birthday",
113 "email",
114 "phone_number",
115 "student_number",
116 "address_street",
117 "address_street2",
118 "address_postal_code",
119 "address_city",
120 "address_country",
121 "optin_birthday",
122 "optin_mailinglist",
123 "contribution",
124 "membership_type",
125 "direct_debit",
126 "initials",
127 "iban",
128 "bic",
129 "signature",
130 )
131
132
133 class RenewalForm(forms.ModelForm):
134 """Form for membership renewals."""
135
136 privacy_policy = forms.BooleanField(required=True,)
137
138 icis_employee = forms.BooleanField(
139 required=False, label=_("I am an employee of iCIS")
140 )
141
142 contribution = forms.IntegerField(required=False,)
143
144 def __init__(self, *args, **kwargs):
145 super().__init__(*args, **kwargs)
146 self.fields["privacy_policy"].label = mark_safe(
147 _('I accept the <a href="{}">privacy policy</a>.').format(
148 reverse_lazy("singlepages:privacy-policy")
149 )
150 )
151
152 class Meta:
153 model = Renewal
154 fields = (
155 "member",
156 "length",
157 "contribution",
158 "membership_type",
159 "no_references",
160 "remarks",
161 )
162
163
164 class ReferenceForm(forms.ModelForm):
165 def clean(self):
166 super().clean()
167 membership = self.cleaned_data["member"].current_membership
168 if membership and membership.type == Membership.BENEFACTOR:
169 raise ValidationError(_("Benefactors cannot give references."))
170
171 membership = self.cleaned_data["member"].latest_membership
172 if (
173 membership
174 and membership.until
175 and membership.until < services.calculate_membership_since()
176 ):
177 raise ValidationError(
178 _(
179 "It's not possible to give references for "
180 "memberships that start after your own "
181 "membership's end."
182 )
183 )
184
185 class Meta:
186 model = Reference
187 fields = "__all__"
188 error_messages = {
189 NON_FIELD_ERRORS: {
190 "unique_together": _(
191 "You've already given a reference for this person."
192 ),
193 }
194 }
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/registrations/forms.py b/website/registrations/forms.py
--- a/website/registrations/forms.py
+++ b/website/registrations/forms.py
@@ -139,7 +139,7 @@
required=False, label=_("I am an employee of iCIS")
)
- contribution = forms.IntegerField(required=False,)
+ contribution = forms.DecimalField(required=False, max_digits=5, decimal_places=2,)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
|
{"golden_diff": "diff --git a/website/registrations/forms.py b/website/registrations/forms.py\n--- a/website/registrations/forms.py\n+++ b/website/registrations/forms.py\n@@ -139,7 +139,7 @@\n required=False, label=_(\"I am an employee of iCIS\")\n )\n \n- contribution = forms.IntegerField(required=False,)\n+ contribution = forms.DecimalField(required=False, max_digits=5, decimal_places=2,)\n \n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n", "issue": "Cannot enter float number as contribution during benefactor renewal\n### Describe the bug\r\nCannot enter float number as contribution during benefactor renewal\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to http://localhost:8000/user/membership/\r\n2. Add a benefactor membership renewal \r\n3. The form will not error when trying to send\r\n\r\n### Expected behaviour\r\nCan enter float numbers as contribution. Like 7.69.\r\n\r\n### Screenshots\r\n<img width=\"691\" alt=\"Screenshot 2021-06-20 at 18 48 35\" src=\"https://user-images.githubusercontent.com/1799914/122682192-4edaf880-d1f8-11eb-99e2-26eaf5379ae8.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"The forms defined by the registrations package.\"\"\"\nfrom django import forms\nfrom django.core.exceptions import NON_FIELD_ERRORS, ValidationError\nfrom django.forms import TypedChoiceField\nfrom django.urls import reverse_lazy\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import capfirst\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members.models import Membership\nfrom payments.widgets import SignatureWidget\nfrom registrations import services\nfrom .models import Registration, Renewal, Reference\n\n\nclass BaseRegistrationForm(forms.ModelForm):\n \"\"\"Base form for membership registrations.\"\"\"\n\n birthday = forms.DateField(\n widget=forms.widgets.SelectDateWidget(\n years=range(timezone.now().year - 50, timezone.now().year - 10)\n ),\n label=capfirst(_(\"birthday\")),\n )\n\n privacy_policy = forms.BooleanField(required=True,)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n\n\nclass RegistrationAdminForm(forms.ModelForm):\n \"\"\"Custom admin form for Registration model to add the widget for the signature.\"\"\"\n\n class Meta:\n fields = \"__all__\"\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n\n\nclass MemberRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for member registrations.\"\"\"\n\n this_year = timezone.now().year\n years = reversed(\n [(x, \"{} - {}\".format(x, x + 1)) for x in range(this_year - 20, this_year + 1)]\n )\n\n starting_year = TypedChoiceField(\n choices=years,\n coerce=int,\n empty_value=this_year,\n required=False,\n help_text=_(\"What lecture year did you start studying at Radboud University?\"),\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"programme\",\n \"starting_year\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass BenefactorRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for benefactor registrations.\"\"\"\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"contribution\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass RenewalForm(forms.ModelForm):\n \"\"\"Form for membership renewals.\"\"\"\n\n privacy_policy = forms.BooleanField(required=True,)\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n contribution = forms.IntegerField(required=False,)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n\n class Meta:\n model = Renewal\n fields = (\n \"member\",\n \"length\",\n \"contribution\",\n \"membership_type\",\n \"no_references\",\n \"remarks\",\n )\n\n\nclass ReferenceForm(forms.ModelForm):\n def clean(self):\n super().clean()\n membership = self.cleaned_data[\"member\"].current_membership\n if membership and membership.type == Membership.BENEFACTOR:\n raise ValidationError(_(\"Benefactors cannot give references.\"))\n\n membership = self.cleaned_data[\"member\"].latest_membership\n if (\n membership\n and membership.until\n and membership.until < services.calculate_membership_since()\n ):\n raise ValidationError(\n _(\n \"It's not possible to give references for \"\n \"memberships that start after your own \"\n \"membership's end.\"\n )\n )\n\n class Meta:\n model = Reference\n fields = \"__all__\"\n error_messages = {\n NON_FIELD_ERRORS: {\n \"unique_together\": _(\n \"You've already given a reference for this person.\"\n ),\n }\n }\n", "path": "website/registrations/forms.py"}], "after_files": [{"content": "\"\"\"The forms defined by the registrations package.\"\"\"\nfrom django import forms\nfrom django.core.exceptions import NON_FIELD_ERRORS, ValidationError\nfrom django.forms import TypedChoiceField\nfrom django.urls import reverse_lazy\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import capfirst\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members.models import Membership\nfrom payments.widgets import SignatureWidget\nfrom registrations import services\nfrom .models import Registration, Renewal, Reference\n\n\nclass BaseRegistrationForm(forms.ModelForm):\n \"\"\"Base form for membership registrations.\"\"\"\n\n birthday = forms.DateField(\n widget=forms.widgets.SelectDateWidget(\n years=range(timezone.now().year - 50, timezone.now().year - 10)\n ),\n label=capfirst(_(\"birthday\")),\n )\n\n privacy_policy = forms.BooleanField(required=True,)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n\n\nclass RegistrationAdminForm(forms.ModelForm):\n \"\"\"Custom admin form for Registration model to add the widget for the signature.\"\"\"\n\n class Meta:\n fields = \"__all__\"\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n\n\nclass MemberRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for member registrations.\"\"\"\n\n this_year = timezone.now().year\n years = reversed(\n [(x, \"{} - {}\".format(x, x + 1)) for x in range(this_year - 20, this_year + 1)]\n )\n\n starting_year = TypedChoiceField(\n choices=years,\n coerce=int,\n empty_value=this_year,\n required=False,\n help_text=_(\"What lecture year did you start studying at Radboud University?\"),\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"programme\",\n \"starting_year\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass BenefactorRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for benefactor registrations.\"\"\"\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"contribution\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass RenewalForm(forms.ModelForm):\n \"\"\"Form for membership renewals.\"\"\"\n\n privacy_policy = forms.BooleanField(required=True,)\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n contribution = forms.DecimalField(required=False, max_digits=5, decimal_places=2,)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n\n class Meta:\n model = Renewal\n fields = (\n \"member\",\n \"length\",\n \"contribution\",\n \"membership_type\",\n \"no_references\",\n \"remarks\",\n )\n\n\nclass ReferenceForm(forms.ModelForm):\n def clean(self):\n super().clean()\n membership = self.cleaned_data[\"member\"].current_membership\n if membership and membership.type == Membership.BENEFACTOR:\n raise ValidationError(_(\"Benefactors cannot give references.\"))\n\n membership = self.cleaned_data[\"member\"].latest_membership\n if (\n membership\n and membership.until\n and membership.until < services.calculate_membership_since()\n ):\n raise ValidationError(\n _(\n \"It's not possible to give references for \"\n \"memberships that start after your own \"\n \"membership's end.\"\n )\n )\n\n class Meta:\n model = Reference\n fields = \"__all__\"\n error_messages = {\n NON_FIELD_ERRORS: {\n \"unique_together\": _(\n \"You've already given a reference for this person.\"\n ),\n }\n }\n", "path": "website/registrations/forms.py"}]}
| 2,027 | 124 |
gh_patches_debug_26779
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-5029
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
List supported languages for syntax highlighting with `st.code`
**Link to doc page in question (if any):** https://docs.streamlit.io/library/api-reference/text/st.code
**Name of the Streamlit feature whose docs need improvement:** `st.code`
**What you think the docs should say:** The docs should provide a list of all supported languages for syntax highlighting.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/elements/markdown.py`
Content:
```
1 # Copyright 2018-2022 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import cast, Optional, TYPE_CHECKING, Union
16
17 from streamlit import type_util
18 from streamlit.proto.Markdown_pb2 import Markdown as MarkdownProto
19 from .utils import clean_text
20
21 if TYPE_CHECKING:
22 import sympy
23
24 from streamlit.delta_generator import DeltaGenerator
25
26
27 class MarkdownMixin:
28 def markdown(self, body: str, unsafe_allow_html: bool = False) -> "DeltaGenerator":
29 """Display string formatted as Markdown.
30
31 Parameters
32 ----------
33 body : str
34 The string to display as Github-flavored Markdown. Syntax
35 information can be found at: https://github.github.com/gfm.
36
37 This also supports:
38
39 * Emoji shortcodes, such as `:+1:` and `:sunglasses:`.
40 For a list of all supported codes,
41 see https://share.streamlit.io/streamlit/emoji-shortcodes.
42
43 * LaTeX expressions, by wrapping them in "$" or "$$" (the "$$"
44 must be on their own lines). Supported LaTeX functions are listed
45 at https://katex.org/docs/supported.html.
46
47 unsafe_allow_html : bool
48 By default, any HTML tags found in the body will be escaped and
49 therefore treated as pure text. This behavior may be turned off by
50 setting this argument to True.
51
52 That said, we *strongly advise against it*. It is hard to write
53 secure HTML, so by using this argument you may be compromising your
54 users' security. For more information, see:
55
56 https://github.com/streamlit/streamlit/issues/152
57
58 *Also note that `unsafe_allow_html` is a temporary measure and may
59 be removed from Streamlit at any time.*
60
61 If you decide to turn on HTML anyway, we ask you to please tell us
62 your exact use case here:
63
64 https://discuss.streamlit.io/t/96
65
66 This will help us come up with safe APIs that allow you to do what
67 you want.
68
69 Example
70 -------
71 >>> st.markdown('Streamlit is **_really_ cool**.')
72
73 """
74 markdown_proto = MarkdownProto()
75
76 markdown_proto.body = clean_text(body)
77 markdown_proto.allow_html = unsafe_allow_html
78
79 return self.dg._enqueue("markdown", markdown_proto)
80
81 def header(self, body: str, anchor: Optional[str] = None) -> "DeltaGenerator":
82 """Display text in header formatting.
83
84 Parameters
85 ----------
86 body : str
87 The text to display.
88
89 anchor : str
90 The anchor name of the header that can be accessed with #anchor
91 in the URL. If omitted, it generates an anchor using the body.
92
93 Example
94 -------
95 >>> st.header('This is a header')
96
97 """
98 header_proto = MarkdownProto()
99 if anchor is None:
100 header_proto.body = f"## {clean_text(body)}"
101 else:
102 header_proto.body = f'<h2 data-anchor="{anchor}">{clean_text(body)}</h2>'
103 header_proto.allow_html = True
104 return self.dg._enqueue("markdown", header_proto)
105
106 def subheader(self, body: str, anchor: Optional[str] = None) -> "DeltaGenerator":
107 """Display text in subheader formatting.
108
109 Parameters
110 ----------
111 body : str
112 The text to display.
113
114 anchor : str
115 The anchor name of the header that can be accessed with #anchor
116 in the URL. If omitted, it generates an anchor using the body.
117
118 Example
119 -------
120 >>> st.subheader('This is a subheader')
121
122 """
123 subheader_proto = MarkdownProto()
124 if anchor is None:
125 subheader_proto.body = f"### {clean_text(body)}"
126 else:
127 subheader_proto.body = f'<h3 data-anchor="{anchor}">{clean_text(body)}</h3>'
128 subheader_proto.allow_html = True
129
130 return self.dg._enqueue("markdown", subheader_proto)
131
132 def code(self, body: str, language: Optional[str] = "python") -> "DeltaGenerator":
133 """Display a code block with optional syntax highlighting.
134
135 (This is a convenience wrapper around `st.markdown()`)
136
137 Parameters
138 ----------
139 body : str
140 The string to display as code.
141
142 language : str
143 The language that the code is written in, for syntax highlighting.
144 If omitted, the code will be unstyled.
145
146 Example
147 -------
148 >>> code = '''def hello():
149 ... print("Hello, Streamlit!")'''
150 >>> st.code(code, language='python')
151
152 """
153 code_proto = MarkdownProto()
154 markdown = "```%(language)s\n%(body)s\n```" % {
155 "language": language or "",
156 "body": body,
157 }
158 code_proto.body = clean_text(markdown)
159 return self.dg._enqueue("markdown", code_proto)
160
161 def title(self, body: str, anchor: Optional[str] = None) -> "DeltaGenerator":
162 """Display text in title formatting.
163
164 Each document should have a single `st.title()`, although this is not
165 enforced.
166
167 Parameters
168 ----------
169 body : str
170 The text to display.
171
172 anchor : str
173 The anchor name of the header that can be accessed with #anchor
174 in the URL. If omitted, it generates an anchor using the body.
175
176 Example
177 -------
178 >>> st.title('This is a title')
179
180 """
181 title_proto = MarkdownProto()
182 if anchor is None:
183 title_proto.body = f"# {clean_text(body)}"
184 else:
185 title_proto.body = f'<h1 data-anchor="{anchor}">{clean_text(body)}</h1>'
186 title_proto.allow_html = True
187 return self.dg._enqueue("markdown", title_proto)
188
189 def caption(self, body: str, unsafe_allow_html: bool = False) -> "DeltaGenerator":
190 """Display text in small font.
191
192 This should be used for captions, asides, footnotes, sidenotes, and
193 other explanatory text.
194
195 Parameters
196 ----------
197 body : str
198 The text to display.
199
200 unsafe_allow_html : bool
201 By default, any HTML tags found in strings will be escaped and
202 therefore treated as pure text. This behavior may be turned off by
203 setting this argument to True.
204
205 That said, *we strongly advise against it*. It is hard to write secure
206 HTML, so by using this argument you may be compromising your users'
207 security. For more information, see:
208
209 https://github.com/streamlit/streamlit/issues/152
210
211 **Also note that `unsafe_allow_html` is a temporary measure and may be
212 removed from Streamlit at any time.**
213
214 If you decide to turn on HTML anyway, we ask you to please tell us your
215 exact use case here:
216 https://discuss.streamlit.io/t/96 .
217
218 This will help us come up with safe APIs that allow you to do what you
219 want.
220
221 Example
222 -------
223 >>> st.caption('This is a string that explains something above.')
224
225 """
226 caption_proto = MarkdownProto()
227 caption_proto.body = clean_text(body)
228 caption_proto.allow_html = unsafe_allow_html
229 caption_proto.is_caption = True
230 return self.dg._enqueue("markdown", caption_proto)
231
232 def latex(self, body: Union[str, "sympy.Expr"]) -> "DeltaGenerator":
233 # This docstring needs to be "raw" because of the backslashes in the
234 # example below.
235 r"""Display mathematical expressions formatted as LaTeX.
236
237 Supported LaTeX functions are listed at
238 https://katex.org/docs/supported.html.
239
240 Parameters
241 ----------
242 body : str or SymPy expression
243 The string or SymPy expression to display as LaTeX. If str, it's
244 a good idea to use raw Python strings since LaTeX uses backslashes
245 a lot.
246
247
248 Example
249 -------
250 >>> st.latex(r'''
251 ... a + ar + a r^2 + a r^3 + \cdots + a r^{n-1} =
252 ... \sum_{k=0}^{n-1} ar^k =
253 ... a \left(\frac{1-r^{n}}{1-r}\right)
254 ... ''')
255
256 """
257 if type_util.is_sympy_expession(body):
258 import sympy
259
260 body = sympy.latex(body)
261
262 latex_proto = MarkdownProto()
263 latex_proto.body = "$$\n%s\n$$" % clean_text(body)
264 return self.dg._enqueue("markdown", latex_proto)
265
266 @property
267 def dg(self) -> "DeltaGenerator":
268 """Get our DeltaGenerator."""
269 return cast("DeltaGenerator", self)
270
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/streamlit/elements/markdown.py b/lib/streamlit/elements/markdown.py
--- a/lib/streamlit/elements/markdown.py
+++ b/lib/streamlit/elements/markdown.py
@@ -36,7 +36,7 @@
This also supports:
- * Emoji shortcodes, such as `:+1:` and `:sunglasses:`.
+ * Emoji shortcodes, such as ``:+1:`` and ``:sunglasses:``.
For a list of all supported codes,
see https://share.streamlit.io/streamlit/emoji-shortcodes.
@@ -55,7 +55,7 @@
https://github.com/streamlit/streamlit/issues/152
- *Also note that `unsafe_allow_html` is a temporary measure and may
+ *Also note that ``unsafe_allow_html`` is a temporary measure and may
be removed from Streamlit at any time.*
If you decide to turn on HTML anyway, we ask you to please tell us
@@ -143,6 +143,10 @@
The language that the code is written in, for syntax highlighting.
If omitted, the code will be unstyled.
+ For a list of available ``language`` imports, see:
+
+ https://github.com/react-syntax-highlighter/react-syntax-highlighter/blob/master/AVAILABLE_LANGUAGES_PRISM.MD
+
Example
-------
>>> code = '''def hello():
|
{"golden_diff": "diff --git a/lib/streamlit/elements/markdown.py b/lib/streamlit/elements/markdown.py\n--- a/lib/streamlit/elements/markdown.py\n+++ b/lib/streamlit/elements/markdown.py\n@@ -36,7 +36,7 @@\n \n This also supports:\n \n- * Emoji shortcodes, such as `:+1:` and `:sunglasses:`.\n+ * Emoji shortcodes, such as ``:+1:`` and ``:sunglasses:``.\n For a list of all supported codes,\n see https://share.streamlit.io/streamlit/emoji-shortcodes.\n \n@@ -55,7 +55,7 @@\n \n https://github.com/streamlit/streamlit/issues/152\n \n- *Also note that `unsafe_allow_html` is a temporary measure and may\n+ *Also note that ``unsafe_allow_html`` is a temporary measure and may\n be removed from Streamlit at any time.*\n \n If you decide to turn on HTML anyway, we ask you to please tell us\n@@ -143,6 +143,10 @@\n The language that the code is written in, for syntax highlighting.\n If omitted, the code will be unstyled.\n \n+ For a list of available ``language`` imports, see:\n+\n+ https://github.com/react-syntax-highlighter/react-syntax-highlighter/blob/master/AVAILABLE_LANGUAGES_PRISM.MD\n+\n Example\n -------\n >>> code = '''def hello():\n", "issue": "List supported languages for syntax highlighting with `st.code`\n**Link to doc page in question (if any):** https://docs.streamlit.io/library/api-reference/text/st.code\r\n\r\n**Name of the Streamlit feature whose docs need improvement:** `st.code`\r\n\r\n**What you think the docs should say:** The docs should provide a list of all supported languages for syntax highlighting.\r\n\n", "before_files": [{"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import cast, Optional, TYPE_CHECKING, Union\n\nfrom streamlit import type_util\nfrom streamlit.proto.Markdown_pb2 import Markdown as MarkdownProto\nfrom .utils import clean_text\n\nif TYPE_CHECKING:\n import sympy\n\n from streamlit.delta_generator import DeltaGenerator\n\n\nclass MarkdownMixin:\n def markdown(self, body: str, unsafe_allow_html: bool = False) -> \"DeltaGenerator\":\n \"\"\"Display string formatted as Markdown.\n\n Parameters\n ----------\n body : str\n The string to display as Github-flavored Markdown. Syntax\n information can be found at: https://github.github.com/gfm.\n\n This also supports:\n\n * Emoji shortcodes, such as `:+1:` and `:sunglasses:`.\n For a list of all supported codes,\n see https://share.streamlit.io/streamlit/emoji-shortcodes.\n\n * LaTeX expressions, by wrapping them in \"$\" or \"$$\" (the \"$$\"\n must be on their own lines). Supported LaTeX functions are listed\n at https://katex.org/docs/supported.html.\n\n unsafe_allow_html : bool\n By default, any HTML tags found in the body will be escaped and\n therefore treated as pure text. This behavior may be turned off by\n setting this argument to True.\n\n That said, we *strongly advise against it*. It is hard to write\n secure HTML, so by using this argument you may be compromising your\n users' security. For more information, see:\n\n https://github.com/streamlit/streamlit/issues/152\n\n *Also note that `unsafe_allow_html` is a temporary measure and may\n be removed from Streamlit at any time.*\n\n If you decide to turn on HTML anyway, we ask you to please tell us\n your exact use case here:\n\n https://discuss.streamlit.io/t/96\n\n This will help us come up with safe APIs that allow you to do what\n you want.\n\n Example\n -------\n >>> st.markdown('Streamlit is **_really_ cool**.')\n\n \"\"\"\n markdown_proto = MarkdownProto()\n\n markdown_proto.body = clean_text(body)\n markdown_proto.allow_html = unsafe_allow_html\n\n return self.dg._enqueue(\"markdown\", markdown_proto)\n\n def header(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in header formatting.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.header('This is a header')\n\n \"\"\"\n header_proto = MarkdownProto()\n if anchor is None:\n header_proto.body = f\"## {clean_text(body)}\"\n else:\n header_proto.body = f'<h2 data-anchor=\"{anchor}\">{clean_text(body)}</h2>'\n header_proto.allow_html = True\n return self.dg._enqueue(\"markdown\", header_proto)\n\n def subheader(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in subheader formatting.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.subheader('This is a subheader')\n\n \"\"\"\n subheader_proto = MarkdownProto()\n if anchor is None:\n subheader_proto.body = f\"### {clean_text(body)}\"\n else:\n subheader_proto.body = f'<h3 data-anchor=\"{anchor}\">{clean_text(body)}</h3>'\n subheader_proto.allow_html = True\n\n return self.dg._enqueue(\"markdown\", subheader_proto)\n\n def code(self, body: str, language: Optional[str] = \"python\") -> \"DeltaGenerator\":\n \"\"\"Display a code block with optional syntax highlighting.\n\n (This is a convenience wrapper around `st.markdown()`)\n\n Parameters\n ----------\n body : str\n The string to display as code.\n\n language : str\n The language that the code is written in, for syntax highlighting.\n If omitted, the code will be unstyled.\n\n Example\n -------\n >>> code = '''def hello():\n ... print(\"Hello, Streamlit!\")'''\n >>> st.code(code, language='python')\n\n \"\"\"\n code_proto = MarkdownProto()\n markdown = \"```%(language)s\\n%(body)s\\n```\" % {\n \"language\": language or \"\",\n \"body\": body,\n }\n code_proto.body = clean_text(markdown)\n return self.dg._enqueue(\"markdown\", code_proto)\n\n def title(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in title formatting.\n\n Each document should have a single `st.title()`, although this is not\n enforced.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.title('This is a title')\n\n \"\"\"\n title_proto = MarkdownProto()\n if anchor is None:\n title_proto.body = f\"# {clean_text(body)}\"\n else:\n title_proto.body = f'<h1 data-anchor=\"{anchor}\">{clean_text(body)}</h1>'\n title_proto.allow_html = True\n return self.dg._enqueue(\"markdown\", title_proto)\n\n def caption(self, body: str, unsafe_allow_html: bool = False) -> \"DeltaGenerator\":\n \"\"\"Display text in small font.\n\n This should be used for captions, asides, footnotes, sidenotes, and\n other explanatory text.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n unsafe_allow_html : bool\n By default, any HTML tags found in strings will be escaped and\n therefore treated as pure text. This behavior may be turned off by\n setting this argument to True.\n\n That said, *we strongly advise against it*. It is hard to write secure\n HTML, so by using this argument you may be compromising your users'\n security. For more information, see:\n\n https://github.com/streamlit/streamlit/issues/152\n\n **Also note that `unsafe_allow_html` is a temporary measure and may be\n removed from Streamlit at any time.**\n\n If you decide to turn on HTML anyway, we ask you to please tell us your\n exact use case here:\n https://discuss.streamlit.io/t/96 .\n\n This will help us come up with safe APIs that allow you to do what you\n want.\n\n Example\n -------\n >>> st.caption('This is a string that explains something above.')\n\n \"\"\"\n caption_proto = MarkdownProto()\n caption_proto.body = clean_text(body)\n caption_proto.allow_html = unsafe_allow_html\n caption_proto.is_caption = True\n return self.dg._enqueue(\"markdown\", caption_proto)\n\n def latex(self, body: Union[str, \"sympy.Expr\"]) -> \"DeltaGenerator\":\n # This docstring needs to be \"raw\" because of the backslashes in the\n # example below.\n r\"\"\"Display mathematical expressions formatted as LaTeX.\n\n Supported LaTeX functions are listed at\n https://katex.org/docs/supported.html.\n\n Parameters\n ----------\n body : str or SymPy expression\n The string or SymPy expression to display as LaTeX. If str, it's\n a good idea to use raw Python strings since LaTeX uses backslashes\n a lot.\n\n\n Example\n -------\n >>> st.latex(r'''\n ... a + ar + a r^2 + a r^3 + \\cdots + a r^{n-1} =\n ... \\sum_{k=0}^{n-1} ar^k =\n ... a \\left(\\frac{1-r^{n}}{1-r}\\right)\n ... ''')\n\n \"\"\"\n if type_util.is_sympy_expession(body):\n import sympy\n\n body = sympy.latex(body)\n\n latex_proto = MarkdownProto()\n latex_proto.body = \"$$\\n%s\\n$$\" % clean_text(body)\n return self.dg._enqueue(\"markdown\", latex_proto)\n\n @property\n def dg(self) -> \"DeltaGenerator\":\n \"\"\"Get our DeltaGenerator.\"\"\"\n return cast(\"DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/markdown.py"}], "after_files": [{"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import cast, Optional, TYPE_CHECKING, Union\n\nfrom streamlit import type_util\nfrom streamlit.proto.Markdown_pb2 import Markdown as MarkdownProto\nfrom .utils import clean_text\n\nif TYPE_CHECKING:\n import sympy\n\n from streamlit.delta_generator import DeltaGenerator\n\n\nclass MarkdownMixin:\n def markdown(self, body: str, unsafe_allow_html: bool = False) -> \"DeltaGenerator\":\n \"\"\"Display string formatted as Markdown.\n\n Parameters\n ----------\n body : str\n The string to display as Github-flavored Markdown. Syntax\n information can be found at: https://github.github.com/gfm.\n\n This also supports:\n\n * Emoji shortcodes, such as ``:+1:`` and ``:sunglasses:``.\n For a list of all supported codes,\n see https://share.streamlit.io/streamlit/emoji-shortcodes.\n\n * LaTeX expressions, by wrapping them in \"$\" or \"$$\" (the \"$$\"\n must be on their own lines). Supported LaTeX functions are listed\n at https://katex.org/docs/supported.html.\n\n unsafe_allow_html : bool\n By default, any HTML tags found in the body will be escaped and\n therefore treated as pure text. This behavior may be turned off by\n setting this argument to True.\n\n That said, we *strongly advise against it*. It is hard to write\n secure HTML, so by using this argument you may be compromising your\n users' security. For more information, see:\n\n https://github.com/streamlit/streamlit/issues/152\n\n *Also note that ``unsafe_allow_html`` is a temporary measure and may\n be removed from Streamlit at any time.*\n\n If you decide to turn on HTML anyway, we ask you to please tell us\n your exact use case here:\n\n https://discuss.streamlit.io/t/96\n\n This will help us come up with safe APIs that allow you to do what\n you want.\n\n Example\n -------\n >>> st.markdown('Streamlit is **_really_ cool**.')\n\n \"\"\"\n markdown_proto = MarkdownProto()\n\n markdown_proto.body = clean_text(body)\n markdown_proto.allow_html = unsafe_allow_html\n\n return self.dg._enqueue(\"markdown\", markdown_proto)\n\n def header(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in header formatting.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.header('This is a header')\n\n \"\"\"\n header_proto = MarkdownProto()\n if anchor is None:\n header_proto.body = f\"## {clean_text(body)}\"\n else:\n header_proto.body = f'<h2 data-anchor=\"{anchor}\">{clean_text(body)}</h2>'\n header_proto.allow_html = True\n return self.dg._enqueue(\"markdown\", header_proto)\n\n def subheader(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in subheader formatting.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.subheader('This is a subheader')\n\n \"\"\"\n subheader_proto = MarkdownProto()\n if anchor is None:\n subheader_proto.body = f\"### {clean_text(body)}\"\n else:\n subheader_proto.body = f'<h3 data-anchor=\"{anchor}\">{clean_text(body)}</h3>'\n subheader_proto.allow_html = True\n\n return self.dg._enqueue(\"markdown\", subheader_proto)\n\n def code(self, body: str, language: Optional[str] = \"python\") -> \"DeltaGenerator\":\n \"\"\"Display a code block with optional syntax highlighting.\n\n (This is a convenience wrapper around `st.markdown()`)\n\n Parameters\n ----------\n body : str\n The string to display as code.\n\n language : str\n The language that the code is written in, for syntax highlighting.\n If omitted, the code will be unstyled.\n\n For a list of available ``language`` imports, see:\n\n https://github.com/react-syntax-highlighter/react-syntax-highlighter/blob/master/AVAILABLE_LANGUAGES_PRISM.MD\n\n Example\n -------\n >>> code = '''def hello():\n ... print(\"Hello, Streamlit!\")'''\n >>> st.code(code, language='python')\n\n \"\"\"\n code_proto = MarkdownProto()\n markdown = \"```%(language)s\\n%(body)s\\n```\" % {\n \"language\": language or \"\",\n \"body\": body,\n }\n code_proto.body = clean_text(markdown)\n return self.dg._enqueue(\"markdown\", code_proto)\n\n def title(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in title formatting.\n\n Each document should have a single `st.title()`, although this is not\n enforced.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.title('This is a title')\n\n \"\"\"\n title_proto = MarkdownProto()\n if anchor is None:\n title_proto.body = f\"# {clean_text(body)}\"\n else:\n title_proto.body = f'<h1 data-anchor=\"{anchor}\">{clean_text(body)}</h1>'\n title_proto.allow_html = True\n return self.dg._enqueue(\"markdown\", title_proto)\n\n def caption(self, body: str, unsafe_allow_html: bool = False) -> \"DeltaGenerator\":\n \"\"\"Display text in small font.\n\n This should be used for captions, asides, footnotes, sidenotes, and\n other explanatory text.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n unsafe_allow_html : bool\n By default, any HTML tags found in strings will be escaped and\n therefore treated as pure text. This behavior may be turned off by\n setting this argument to True.\n\n That said, *we strongly advise against it*. It is hard to write secure\n HTML, so by using this argument you may be compromising your users'\n security. For more information, see:\n\n https://github.com/streamlit/streamlit/issues/152\n\n **Also note that `unsafe_allow_html` is a temporary measure and may be\n removed from Streamlit at any time.**\n\n If you decide to turn on HTML anyway, we ask you to please tell us your\n exact use case here:\n https://discuss.streamlit.io/t/96 .\n\n This will help us come up with safe APIs that allow you to do what you\n want.\n\n Example\n -------\n >>> st.caption('This is a string that explains something above.')\n\n \"\"\"\n caption_proto = MarkdownProto()\n caption_proto.body = clean_text(body)\n caption_proto.allow_html = unsafe_allow_html\n caption_proto.is_caption = True\n return self.dg._enqueue(\"markdown\", caption_proto)\n\n def latex(self, body: Union[str, \"sympy.Expr\"]) -> \"DeltaGenerator\":\n # This docstring needs to be \"raw\" because of the backslashes in the\n # example below.\n r\"\"\"Display mathematical expressions formatted as LaTeX.\n\n Supported LaTeX functions are listed at\n https://katex.org/docs/supported.html.\n\n Parameters\n ----------\n body : str or SymPy expression\n The string or SymPy expression to display as LaTeX. If str, it's\n a good idea to use raw Python strings since LaTeX uses backslashes\n a lot.\n\n\n Example\n -------\n >>> st.latex(r'''\n ... a + ar + a r^2 + a r^3 + \\cdots + a r^{n-1} =\n ... \\sum_{k=0}^{n-1} ar^k =\n ... a \\left(\\frac{1-r^{n}}{1-r}\\right)\n ... ''')\n\n \"\"\"\n if type_util.is_sympy_expession(body):\n import sympy\n\n body = sympy.latex(body)\n\n latex_proto = MarkdownProto()\n latex_proto.body = \"$$\\n%s\\n$$\" % clean_text(body)\n return self.dg._enqueue(\"markdown\", latex_proto)\n\n @property\n def dg(self) -> \"DeltaGenerator\":\n \"\"\"Get our DeltaGenerator.\"\"\"\n return cast(\"DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/markdown.py"}]}
| 3,088 | 323 |
gh_patches_debug_32481
|
rasdani/github-patches
|
git_diff
|
ephios-dev__ephios-45
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Liste der Benutzer ansehen
Als Manager möchte ich die Liste der Nutzer sehen, dafür soll es einen entsprechenden Menüpunkt geben.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `event_management/views.py`
Content:
```
1 import json
2
3 import guardian.mixins
4 from django.contrib import messages
5 from django.contrib.auth import get_user_model
6 from django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin
7 from django.contrib.auth.models import Group
8 from django.core.exceptions import ValidationError
9 from django.http import HttpResponse
10 from django.shortcuts import get_object_or_404, redirect
11 from django.urls import reverse, reverse_lazy
12 from django.utils.timezone import get_default_timezone
13 from django.views.generic import (
14 DeleteView,
15 DetailView,
16 ListView,
17 TemplateView,
18 UpdateView,
19 View,
20 CreateView,
21 RedirectView,
22 )
23 from django.views.generic.detail import SingleObjectMixin
24 from guardian.shortcuts import get_objects_for_user, get_users_with_perms
25
26 from event_management.forms import EventForm, ShiftForm
27 from event_management.models import (
28 Event,
29 Shift,
30 AbstractParticipation,
31 )
32 from django.utils.translation import gettext as _
33
34 from event_management.signup import SignupError, DeclineError
35 from jep.permissions import get_groups_with_perms
36
37
38 class HomeView(LoginRequiredMixin, TemplateView):
39 template_name = "event_management/home.html"
40
41
42 class EventListView(LoginRequiredMixin, ListView):
43 model = Event
44
45 def get_queryset(self):
46 return get_objects_for_user(self.request.user, "event_management.view_event")
47
48
49 class EventDetailView(guardian.mixins.PermissionRequiredMixin, DetailView):
50 model = Event
51 permission_required = "event_management.view_event"
52
53 def get_queryset(self):
54 if self.request.user.has_perm("event_management.add_event"):
55 return Event.all_objects
56 else:
57 return Event.objects
58
59
60 class EventUpdateView(guardian.mixins.PermissionRequiredMixin, UpdateView):
61 model = Event
62 queryset = Event.all_objects
63 permission_required = "event_management.change_event"
64 raise_exception = True
65 accept_global_perms = True
66
67 def get_form(self, form_class=None):
68 visible_queryset = get_objects_for_user(
69 self.request.user, "publish_event_for_group", klass=Group
70 )
71 initial = {
72 "visible_for": get_groups_with_perms(self.object, only_with_perms_in=["view_event"]),
73 "responsible_persons": get_users_with_perms(
74 self.object, only_with_perms_in=["change_event"], with_group_users=False
75 ),
76 "responsible_groups": get_groups_with_perms(
77 self.object, only_with_perms_in=["change_event"]
78 ),
79 }
80 event_form = EventForm(self.request.POST or None, instance=self.object, initial=initial)
81 event_form.fields["visible_for"].queryset = visible_queryset
82 return event_form
83
84
85 class EventCreateView(PermissionRequiredMixin, CreateView):
86 template_name = "event_management/event_form.html"
87 permission_required = "event_management.add_event"
88
89 def get_form(self, form_class=None):
90 visible_for_queryset = get_objects_for_user(
91 self.request.user, "publish_event_for_group", klass=Group
92 )
93 event_form = EventForm(
94 self.request.POST or None,
95 initial={
96 "responsible_persons": get_user_model().objects.filter(pk=self.request.user.pk),
97 "responsible_groups": Group.objects.none(),
98 "visible_for": visible_for_queryset,
99 },
100 )
101 event_form.fields["visible_for"].queryset = visible_for_queryset
102 return event_form
103
104 def get_context_data(self, **kwargs):
105 inactive_events = Event.all_objects.filter(active=False)
106 kwargs.setdefault("inactive_events", inactive_events)
107 return super().get_context_data(**kwargs)
108
109 def get_success_url(self):
110 return reverse("event_management:event_createshift", kwargs={"pk": self.object.pk})
111
112
113 class EventActivateView(PermissionRequiredMixin, RedirectView):
114 permission_required = "event_management.add_event"
115
116 def get_redirect_url(self, *args, **kwargs):
117 event = get_object_or_404(Event.all_objects, pk=kwargs["pk"])
118 event.active = True
119 event.save()
120 messages.success(
121 self.request, _("The event {title} has been saved.".format(title=event.title))
122 )
123 return event.get_absolute_url()
124
125
126 class EventDeleteView(PermissionRequiredMixin, DeleteView):
127 queryset = Event.all_objects
128 permission_required = "event_management.delete_event"
129 success_url = reverse_lazy("event_management:event_list")
130
131
132 class ShiftCreateView(PermissionRequiredMixin, TemplateView):
133 permission_required = "event_management.add_event"
134 template_name = "event_management/shift_form.html"
135
136 def get_event(self):
137 return get_object_or_404(Event.all_objects, pk=self.kwargs.get("pk"))
138
139 def get_shift_form(self):
140 return ShiftForm(self.request.POST or None)
141
142 def get_context_data(self, **kwargs):
143 event = self.get_event()
144 kwargs.setdefault("event", event)
145 kwargs.setdefault("form", self.get_shift_form())
146 return super().get_context_data(**kwargs)
147
148 def post(self, *args, **kwargs):
149 form = self.get_shift_form()
150 try:
151 from event_management.signup import signup_method_from_slug
152
153 signup_method = signup_method_from_slug(self.request.POST["signup_method_slug"])
154 configuration_form = signup_method.get_configuration_form(self.request.POST)
155 except ValueError as e:
156 raise ValidationError(e)
157 if form.is_valid() and configuration_form.is_valid():
158 shift = form.save(commit=False)
159 event = self.get_event()
160 shift.event = event
161 shift.signup_configuration = configuration_form.get_configuration()
162 shift.save()
163 if "addAnother" in self.request.POST:
164 return redirect(
165 reverse(
166 "event_management:event_createshift", kwargs={"pk": self.kwargs.get("pk")}
167 )
168 )
169 else:
170 event.active = True
171 event.save()
172 messages.success(
173 self.request, _("The event {title} has been saved.".format(title=event.title))
174 )
175 return redirect(event.get_absolute_url())
176 else:
177 return self.render_to_response(
178 self.get_context_data(
179 form=form,
180 configuration_form=signup_method.render_configuration_form(configuration_form),
181 )
182 )
183
184
185 class ShiftConfigurationFormView(View):
186 def get(self, request, *args, **kwargs):
187 from event_management.signup import signup_method_from_slug
188
189 signup_method = signup_method_from_slug(self.kwargs.get("slug"))
190 return HttpResponse(signup_method.render_configuration_form())
191
192
193 class ShiftUpdateView(guardian.mixins.PermissionRequiredMixin, TemplateView, SingleObjectMixin):
194 model = Shift
195 template_name = "event_management/shift_form.html"
196 permission_required = "event_management.change_event"
197 raise_exception = True
198 accept_global_perms = True
199
200 def get_permission_object(self):
201 return self.get_object().event
202
203 def get_shift_form(self):
204 return ShiftForm(
205 self.request.POST or None,
206 instance=self.object,
207 initial={
208 "date": self.object.meeting_time.date(),
209 "meeting_time": self.object.meeting_time.astimezone(get_default_timezone()).time(),
210 "start_time": self.object.start_time.astimezone(get_default_timezone()).time(),
211 "end_time": self.object.end_time.astimezone(get_default_timezone()).time(),
212 },
213 )
214
215 def get_configuration_form(self):
216 return self.object.signup_method.render_configuration_form(
217 data=self.request.POST or None, initial=json.loads(self.object.signup_configuration)
218 )
219
220 def get_context_data(self, **kwargs):
221 self.object = self.get_object()
222 kwargs.setdefault("event", self.object.event)
223 kwargs.setdefault("form", self.get_shift_form())
224 kwargs.setdefault("configuration_form", self.get_configuration_form())
225 return super().get_context_data(**kwargs)
226
227 def post(self, *args, **kwargs):
228 self.object = self.get_object()
229 form = self.get_shift_form()
230 try:
231 from event_management.signup import signup_method_from_slug
232
233 signup_method = signup_method_from_slug(self.request.POST["signup_method_slug"])
234 configuration_form = signup_method.get_configuration_form(self.request.POST)
235 except ValueError as e:
236 raise ValidationError(e)
237 if form.is_valid() and configuration_form.is_valid():
238 shift = form.save(commit=False)
239 shift.signup_configuration = configuration_form.get_configuration()
240 shift.save()
241 if "addAnother" in self.request.POST:
242 return redirect(
243 reverse("event_management:event_createshift", kwargs={"pk": shift.event.pk})
244 )
245 else:
246 messages.success(
247 self.request, _("The shift {shift} has been saved.".format(shift=shift))
248 )
249 return redirect(self.object.event.get_absolute_url())
250 else:
251 return self.render_to_response(
252 self.get_context_data(
253 form=form,
254 configuration_form=signup_method.render_configuration_form(configuration_form),
255 )
256 )
257
258
259 class ShiftDeleteView(PermissionRequiredMixin, DeleteView):
260 permission_required = "event_management.change_event"
261 model = Shift
262
263 def delete(self, request, *args, **kwargs):
264 self.object = self.get_object()
265 if self.object.event.shifts.count() == 1:
266 messages.error(self.request, _("You cannot delete the last shift!"))
267 return redirect(self.object.event.get_absolute_url())
268 else:
269 return super().delete(request, *args, **kwargs)
270
271 def get_success_url(self):
272 messages.success(self.request, _("The shift has been deleted."))
273 return self.object.event.get_absolute_url()
274
275
276 # TODO rename to signup
277 class ShiftRegisterView(LoginRequiredMixin, View):
278 def get(self, request, *args, **kwargs):
279 shift = get_object_or_404(Shift, id=self.kwargs["pk"])
280 return shift.signup_method.signup_view(request, *args, **kwargs)
281
282
283 class ShiftDeclineView(RedirectView):
284 def get_redirect_url(self, *args, **kwargs):
285 shift = get_object_or_404(Shift, id=self.kwargs["pk"])
286 return shift.signup_method.decline_view(self.request)
287
```
Path: `user_management/urls.py`
Content:
```
1 from django.urls import path
2
3 from user_management import views
4
5 app_name = "user_management"
6 urlpatterns = [
7 path("profile/", views.ProfileView.as_view(), name="profile"),
8 path("groups/", views.GroupListView.as_view(), name="group_list"),
9 path("groups/<int:pk>/edit", views.GroupUpdateView.as_view(), name="group_edit"),
10 path("groups/<int:pk>/delete", views.GroupDeleteView.as_view(), name="group_delete"),
11 path("groups/create", views.GroupCreateView.as_view(), name="group_add"),
12 ]
13
```
Path: `user_management/views.py`
Content:
```
1 from django.contrib import messages
2 from django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin
3 from django.contrib.auth.models import Group
4 from django.urls import reverse
5 from django.views.generic import DetailView, ListView, UpdateView, CreateView, DeleteView
6 from guardian.shortcuts import get_users_with_perms, get_objects_for_group
7
8 from user_management.forms import GroupForm
9 from django.utils.translation import gettext as _
10
11 from user_management.models import UserProfile
12
13
14 class ProfileView(LoginRequiredMixin, DetailView):
15 def get_object(self, queryset=None):
16 return self.request.user
17
18
19 class GroupListView(PermissionRequiredMixin, ListView):
20 model = Group
21 permission_required = "auth.view_group"
22 template_name = "user_management/group_list.html"
23
24
25 class GroupCreateView(PermissionRequiredMixin, CreateView):
26 model = Group
27 permission_required = "auth.add_group"
28 template_name = "user_management/group_form.html"
29 form_class = GroupForm
30
31 def get_form_kwargs(self):
32 kwargs = super().get_form_kwargs()
33 kwargs["initial"] = {
34 "users": UserProfile.objects.none(),
35 "can_add_event": False,
36 "publish_event_for_group": Group.objects.none(),
37 }
38 return kwargs
39
40 def get_success_url(self):
41 messages.success(self.request, _("Group succesfully created."))
42 return reverse("user_management:group_list")
43
44
45 class GroupUpdateView(PermissionRequiredMixin, UpdateView):
46 model = Group
47 permission_required = "auth.change_group"
48 template_name = "user_management/group_form.html"
49 form_class = GroupForm
50
51 def get_form_kwargs(self):
52 kwargs = super().get_form_kwargs()
53 kwargs["initial"] = {
54 "users": self.object.user_set.all(),
55 "can_add_event": self.object.permissions.filter(codename="add_event").exists(),
56 "publish_event_for_group": get_objects_for_group(
57 self.object, "publish_event_for_group", klass=Group
58 ),
59 }
60 return kwargs
61
62 def get_success_url(self):
63 return reverse("user_management:group_list")
64
65
66 class GroupDeleteView(PermissionRequiredMixin, DeleteView):
67 model = Group
68 permission_required = "auth.delete_group"
69 template_name = "user_management/group_confirm_delete.html"
70
71 def get_success_url(self):
72 return reverse("user_management:group_list")
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/event_management/views.py b/event_management/views.py
--- a/event_management/views.py
+++ b/event_management/views.py
@@ -170,7 +170,7 @@
event.active = True
event.save()
messages.success(
- self.request, _("The event {title} has been saved.".format(title=event.title))
+ self.request, _("The event {title} has been saved.").format(title=event.title)
)
return redirect(event.get_absolute_url())
else:
@@ -244,7 +244,7 @@
)
else:
messages.success(
- self.request, _("The shift {shift} has been saved.".format(shift=shift))
+ self.request, _("The shift {shift} has been saved.").format(shift=shift)
)
return redirect(self.object.event.get_absolute_url())
else:
diff --git a/user_management/urls.py b/user_management/urls.py
--- a/user_management/urls.py
+++ b/user_management/urls.py
@@ -9,4 +9,5 @@
path("groups/<int:pk>/edit", views.GroupUpdateView.as_view(), name="group_edit"),
path("groups/<int:pk>/delete", views.GroupDeleteView.as_view(), name="group_delete"),
path("groups/create", views.GroupCreateView.as_view(), name="group_add"),
+ path("users/", views.UserProfileListView.as_view(), name="user_list",),
]
diff --git a/user_management/views.py b/user_management/views.py
--- a/user_management/views.py
+++ b/user_management/views.py
@@ -16,6 +16,14 @@
return self.request.user
+class UserProfileListView(PermissionRequiredMixin, ListView):
+ model = UserProfile
+ permission_required = "user.view_user"
+
+ def get_queryset(self):
+ return UserProfile.objects.all()
+
+
class GroupListView(PermissionRequiredMixin, ListView):
model = Group
permission_required = "auth.view_group"
|
{"golden_diff": "diff --git a/event_management/views.py b/event_management/views.py\n--- a/event_management/views.py\n+++ b/event_management/views.py\n@@ -170,7 +170,7 @@\n event.active = True\n event.save()\n messages.success(\n- self.request, _(\"The event {title} has been saved.\".format(title=event.title))\n+ self.request, _(\"The event {title} has been saved.\").format(title=event.title)\n )\n return redirect(event.get_absolute_url())\n else:\n@@ -244,7 +244,7 @@\n )\n else:\n messages.success(\n- self.request, _(\"The shift {shift} has been saved.\".format(shift=shift))\n+ self.request, _(\"The shift {shift} has been saved.\").format(shift=shift)\n )\n return redirect(self.object.event.get_absolute_url())\n else:\ndiff --git a/user_management/urls.py b/user_management/urls.py\n--- a/user_management/urls.py\n+++ b/user_management/urls.py\n@@ -9,4 +9,5 @@\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n+ path(\"users/\", views.UserProfileListView.as_view(), name=\"user_list\",),\n ]\ndiff --git a/user_management/views.py b/user_management/views.py\n--- a/user_management/views.py\n+++ b/user_management/views.py\n@@ -16,6 +16,14 @@\n return self.request.user\n \n \n+class UserProfileListView(PermissionRequiredMixin, ListView):\n+ model = UserProfile\n+ permission_required = \"user.view_user\"\n+\n+ def get_queryset(self):\n+ return UserProfile.objects.all()\n+\n+\n class GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n", "issue": "Liste der Benutzer ansehen\nAls Manager m\u00f6chte ich die Liste der Nutzer sehen, daf\u00fcr soll es einen entsprechenden Men\u00fcpunkt geben.\n", "before_files": [{"content": "import json\n\nimport guardian.mixins\nfrom django.contrib import messages\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.core.exceptions import ValidationError\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.urls import reverse, reverse_lazy\nfrom django.utils.timezone import get_default_timezone\nfrom django.views.generic import (\n DeleteView,\n DetailView,\n ListView,\n TemplateView,\n UpdateView,\n View,\n CreateView,\n RedirectView,\n)\nfrom django.views.generic.detail import SingleObjectMixin\nfrom guardian.shortcuts import get_objects_for_user, get_users_with_perms\n\nfrom event_management.forms import EventForm, ShiftForm\nfrom event_management.models import (\n Event,\n Shift,\n AbstractParticipation,\n)\nfrom django.utils.translation import gettext as _\n\nfrom event_management.signup import SignupError, DeclineError\nfrom jep.permissions import get_groups_with_perms\n\n\nclass HomeView(LoginRequiredMixin, TemplateView):\n template_name = \"event_management/home.html\"\n\n\nclass EventListView(LoginRequiredMixin, ListView):\n model = Event\n\n def get_queryset(self):\n return get_objects_for_user(self.request.user, \"event_management.view_event\")\n\n\nclass EventDetailView(guardian.mixins.PermissionRequiredMixin, DetailView):\n model = Event\n permission_required = \"event_management.view_event\"\n\n def get_queryset(self):\n if self.request.user.has_perm(\"event_management.add_event\"):\n return Event.all_objects\n else:\n return Event.objects\n\n\nclass EventUpdateView(guardian.mixins.PermissionRequiredMixin, UpdateView):\n model = Event\n queryset = Event.all_objects\n permission_required = \"event_management.change_event\"\n raise_exception = True\n accept_global_perms = True\n\n def get_form(self, form_class=None):\n visible_queryset = get_objects_for_user(\n self.request.user, \"publish_event_for_group\", klass=Group\n )\n initial = {\n \"visible_for\": get_groups_with_perms(self.object, only_with_perms_in=[\"view_event\"]),\n \"responsible_persons\": get_users_with_perms(\n self.object, only_with_perms_in=[\"change_event\"], with_group_users=False\n ),\n \"responsible_groups\": get_groups_with_perms(\n self.object, only_with_perms_in=[\"change_event\"]\n ),\n }\n event_form = EventForm(self.request.POST or None, instance=self.object, initial=initial)\n event_form.fields[\"visible_for\"].queryset = visible_queryset\n return event_form\n\n\nclass EventCreateView(PermissionRequiredMixin, CreateView):\n template_name = \"event_management/event_form.html\"\n permission_required = \"event_management.add_event\"\n\n def get_form(self, form_class=None):\n visible_for_queryset = get_objects_for_user(\n self.request.user, \"publish_event_for_group\", klass=Group\n )\n event_form = EventForm(\n self.request.POST or None,\n initial={\n \"responsible_persons\": get_user_model().objects.filter(pk=self.request.user.pk),\n \"responsible_groups\": Group.objects.none(),\n \"visible_for\": visible_for_queryset,\n },\n )\n event_form.fields[\"visible_for\"].queryset = visible_for_queryset\n return event_form\n\n def get_context_data(self, **kwargs):\n inactive_events = Event.all_objects.filter(active=False)\n kwargs.setdefault(\"inactive_events\", inactive_events)\n return super().get_context_data(**kwargs)\n\n def get_success_url(self):\n return reverse(\"event_management:event_createshift\", kwargs={\"pk\": self.object.pk})\n\n\nclass EventActivateView(PermissionRequiredMixin, RedirectView):\n permission_required = \"event_management.add_event\"\n\n def get_redirect_url(self, *args, **kwargs):\n event = get_object_or_404(Event.all_objects, pk=kwargs[\"pk\"])\n event.active = True\n event.save()\n messages.success(\n self.request, _(\"The event {title} has been saved.\".format(title=event.title))\n )\n return event.get_absolute_url()\n\n\nclass EventDeleteView(PermissionRequiredMixin, DeleteView):\n queryset = Event.all_objects\n permission_required = \"event_management.delete_event\"\n success_url = reverse_lazy(\"event_management:event_list\")\n\n\nclass ShiftCreateView(PermissionRequiredMixin, TemplateView):\n permission_required = \"event_management.add_event\"\n template_name = \"event_management/shift_form.html\"\n\n def get_event(self):\n return get_object_or_404(Event.all_objects, pk=self.kwargs.get(\"pk\"))\n\n def get_shift_form(self):\n return ShiftForm(self.request.POST or None)\n\n def get_context_data(self, **kwargs):\n event = self.get_event()\n kwargs.setdefault(\"event\", event)\n kwargs.setdefault(\"form\", self.get_shift_form())\n return super().get_context_data(**kwargs)\n\n def post(self, *args, **kwargs):\n form = self.get_shift_form()\n try:\n from event_management.signup import signup_method_from_slug\n\n signup_method = signup_method_from_slug(self.request.POST[\"signup_method_slug\"])\n configuration_form = signup_method.get_configuration_form(self.request.POST)\n except ValueError as e:\n raise ValidationError(e)\n if form.is_valid() and configuration_form.is_valid():\n shift = form.save(commit=False)\n event = self.get_event()\n shift.event = event\n shift.signup_configuration = configuration_form.get_configuration()\n shift.save()\n if \"addAnother\" in self.request.POST:\n return redirect(\n reverse(\n \"event_management:event_createshift\", kwargs={\"pk\": self.kwargs.get(\"pk\")}\n )\n )\n else:\n event.active = True\n event.save()\n messages.success(\n self.request, _(\"The event {title} has been saved.\".format(title=event.title))\n )\n return redirect(event.get_absolute_url())\n else:\n return self.render_to_response(\n self.get_context_data(\n form=form,\n configuration_form=signup_method.render_configuration_form(configuration_form),\n )\n )\n\n\nclass ShiftConfigurationFormView(View):\n def get(self, request, *args, **kwargs):\n from event_management.signup import signup_method_from_slug\n\n signup_method = signup_method_from_slug(self.kwargs.get(\"slug\"))\n return HttpResponse(signup_method.render_configuration_form())\n\n\nclass ShiftUpdateView(guardian.mixins.PermissionRequiredMixin, TemplateView, SingleObjectMixin):\n model = Shift\n template_name = \"event_management/shift_form.html\"\n permission_required = \"event_management.change_event\"\n raise_exception = True\n accept_global_perms = True\n\n def get_permission_object(self):\n return self.get_object().event\n\n def get_shift_form(self):\n return ShiftForm(\n self.request.POST or None,\n instance=self.object,\n initial={\n \"date\": self.object.meeting_time.date(),\n \"meeting_time\": self.object.meeting_time.astimezone(get_default_timezone()).time(),\n \"start_time\": self.object.start_time.astimezone(get_default_timezone()).time(),\n \"end_time\": self.object.end_time.astimezone(get_default_timezone()).time(),\n },\n )\n\n def get_configuration_form(self):\n return self.object.signup_method.render_configuration_form(\n data=self.request.POST or None, initial=json.loads(self.object.signup_configuration)\n )\n\n def get_context_data(self, **kwargs):\n self.object = self.get_object()\n kwargs.setdefault(\"event\", self.object.event)\n kwargs.setdefault(\"form\", self.get_shift_form())\n kwargs.setdefault(\"configuration_form\", self.get_configuration_form())\n return super().get_context_data(**kwargs)\n\n def post(self, *args, **kwargs):\n self.object = self.get_object()\n form = self.get_shift_form()\n try:\n from event_management.signup import signup_method_from_slug\n\n signup_method = signup_method_from_slug(self.request.POST[\"signup_method_slug\"])\n configuration_form = signup_method.get_configuration_form(self.request.POST)\n except ValueError as e:\n raise ValidationError(e)\n if form.is_valid() and configuration_form.is_valid():\n shift = form.save(commit=False)\n shift.signup_configuration = configuration_form.get_configuration()\n shift.save()\n if \"addAnother\" in self.request.POST:\n return redirect(\n reverse(\"event_management:event_createshift\", kwargs={\"pk\": shift.event.pk})\n )\n else:\n messages.success(\n self.request, _(\"The shift {shift} has been saved.\".format(shift=shift))\n )\n return redirect(self.object.event.get_absolute_url())\n else:\n return self.render_to_response(\n self.get_context_data(\n form=form,\n configuration_form=signup_method.render_configuration_form(configuration_form),\n )\n )\n\n\nclass ShiftDeleteView(PermissionRequiredMixin, DeleteView):\n permission_required = \"event_management.change_event\"\n model = Shift\n\n def delete(self, request, *args, **kwargs):\n self.object = self.get_object()\n if self.object.event.shifts.count() == 1:\n messages.error(self.request, _(\"You cannot delete the last shift!\"))\n return redirect(self.object.event.get_absolute_url())\n else:\n return super().delete(request, *args, **kwargs)\n\n def get_success_url(self):\n messages.success(self.request, _(\"The shift has been deleted.\"))\n return self.object.event.get_absolute_url()\n\n\n# TODO rename to signup\nclass ShiftRegisterView(LoginRequiredMixin, View):\n def get(self, request, *args, **kwargs):\n shift = get_object_or_404(Shift, id=self.kwargs[\"pk\"])\n return shift.signup_method.signup_view(request, *args, **kwargs)\n\n\nclass ShiftDeclineView(RedirectView):\n def get_redirect_url(self, *args, **kwargs):\n shift = get_object_or_404(Shift, id=self.kwargs[\"pk\"])\n return shift.signup_method.decline_view(self.request)\n", "path": "event_management/views.py"}, {"content": "from django.urls import path\n\nfrom user_management import views\n\napp_name = \"user_management\"\nurlpatterns = [\n path(\"profile/\", views.ProfileView.as_view(), name=\"profile\"),\n path(\"groups/\", views.GroupListView.as_view(), name=\"group_list\"),\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n]\n", "path": "user_management/urls.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.urls import reverse\nfrom django.views.generic import DetailView, ListView, UpdateView, CreateView, DeleteView\nfrom guardian.shortcuts import get_users_with_perms, get_objects_for_group\n\nfrom user_management.forms import GroupForm\nfrom django.utils.translation import gettext as _\n\nfrom user_management.models import UserProfile\n\n\nclass ProfileView(LoginRequiredMixin, DetailView):\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n template_name = \"user_management/group_list.html\"\n\n\nclass GroupCreateView(PermissionRequiredMixin, CreateView):\n model = Group\n permission_required = \"auth.add_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": UserProfile.objects.none(),\n \"can_add_event\": False,\n \"publish_event_for_group\": Group.objects.none(),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(self.request, _(\"Group succesfully created.\"))\n return reverse(\"user_management:group_list\")\n\n\nclass GroupUpdateView(PermissionRequiredMixin, UpdateView):\n model = Group\n permission_required = \"auth.change_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": self.object.user_set.all(),\n \"can_add_event\": self.object.permissions.filter(codename=\"add_event\").exists(),\n \"publish_event_for_group\": get_objects_for_group(\n self.object, \"publish_event_for_group\", klass=Group\n ),\n }\n return kwargs\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n\n\nclass GroupDeleteView(PermissionRequiredMixin, DeleteView):\n model = Group\n permission_required = \"auth.delete_group\"\n template_name = \"user_management/group_confirm_delete.html\"\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n", "path": "user_management/views.py"}], "after_files": [{"content": "import json\n\nimport guardian.mixins\nfrom django.contrib import messages\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.core.exceptions import ValidationError\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.urls import reverse, reverse_lazy\nfrom django.utils.timezone import get_default_timezone\nfrom django.views.generic import (\n DeleteView,\n DetailView,\n ListView,\n TemplateView,\n UpdateView,\n View,\n CreateView,\n RedirectView,\n)\nfrom django.views.generic.detail import SingleObjectMixin\nfrom guardian.shortcuts import get_objects_for_user, get_users_with_perms\n\nfrom event_management.forms import EventForm, ShiftForm\nfrom event_management.models import (\n Event,\n Shift,\n AbstractParticipation,\n)\nfrom django.utils.translation import gettext as _\n\nfrom event_management.signup import SignupError, DeclineError\nfrom jep.permissions import get_groups_with_perms\n\n\nclass HomeView(LoginRequiredMixin, TemplateView):\n template_name = \"event_management/home.html\"\n\n\nclass EventListView(LoginRequiredMixin, ListView):\n model = Event\n\n def get_queryset(self):\n return get_objects_for_user(self.request.user, \"event_management.view_event\")\n\n\nclass EventDetailView(guardian.mixins.PermissionRequiredMixin, DetailView):\n model = Event\n permission_required = \"event_management.view_event\"\n\n def get_queryset(self):\n if self.request.user.has_perm(\"event_management.add_event\"):\n return Event.all_objects\n else:\n return Event.objects\n\n\nclass EventUpdateView(guardian.mixins.PermissionRequiredMixin, UpdateView):\n model = Event\n queryset = Event.all_objects\n permission_required = \"event_management.change_event\"\n raise_exception = True\n accept_global_perms = True\n\n def get_form(self, form_class=None):\n visible_queryset = get_objects_for_user(\n self.request.user, \"publish_event_for_group\", klass=Group\n )\n initial = {\n \"visible_for\": get_groups_with_perms(self.object, only_with_perms_in=[\"view_event\"]),\n \"responsible_persons\": get_users_with_perms(\n self.object, only_with_perms_in=[\"change_event\"], with_group_users=False\n ),\n \"responsible_groups\": get_groups_with_perms(\n self.object, only_with_perms_in=[\"change_event\"]\n ),\n }\n event_form = EventForm(self.request.POST or None, instance=self.object, initial=initial)\n event_form.fields[\"visible_for\"].queryset = visible_queryset\n return event_form\n\n\nclass EventCreateView(PermissionRequiredMixin, CreateView):\n template_name = \"event_management/event_form.html\"\n permission_required = \"event_management.add_event\"\n\n def get_form(self, form_class=None):\n visible_for_queryset = get_objects_for_user(\n self.request.user, \"publish_event_for_group\", klass=Group\n )\n event_form = EventForm(\n self.request.POST or None,\n initial={\n \"responsible_persons\": get_user_model().objects.filter(pk=self.request.user.pk),\n \"responsible_groups\": Group.objects.none(),\n \"visible_for\": visible_for_queryset,\n },\n )\n event_form.fields[\"visible_for\"].queryset = visible_for_queryset\n return event_form\n\n def get_context_data(self, **kwargs):\n inactive_events = Event.all_objects.filter(active=False)\n kwargs.setdefault(\"inactive_events\", inactive_events)\n return super().get_context_data(**kwargs)\n\n def get_success_url(self):\n return reverse(\"event_management:event_createshift\", kwargs={\"pk\": self.object.pk})\n\n\nclass EventActivateView(PermissionRequiredMixin, RedirectView):\n permission_required = \"event_management.add_event\"\n\n def get_redirect_url(self, *args, **kwargs):\n event = get_object_or_404(Event.all_objects, pk=kwargs[\"pk\"])\n event.active = True\n event.save()\n messages.success(\n self.request, _(\"The event {title} has been saved.\".format(title=event.title))\n )\n return event.get_absolute_url()\n\n\nclass EventDeleteView(PermissionRequiredMixin, DeleteView):\n queryset = Event.all_objects\n permission_required = \"event_management.delete_event\"\n success_url = reverse_lazy(\"event_management:event_list\")\n\n\nclass ShiftCreateView(PermissionRequiredMixin, TemplateView):\n permission_required = \"event_management.add_event\"\n template_name = \"event_management/shift_form.html\"\n\n def get_event(self):\n return get_object_or_404(Event.all_objects, pk=self.kwargs.get(\"pk\"))\n\n def get_shift_form(self):\n return ShiftForm(self.request.POST or None)\n\n def get_context_data(self, **kwargs):\n event = self.get_event()\n kwargs.setdefault(\"event\", event)\n kwargs.setdefault(\"form\", self.get_shift_form())\n return super().get_context_data(**kwargs)\n\n def post(self, *args, **kwargs):\n form = self.get_shift_form()\n try:\n from event_management.signup import signup_method_from_slug\n\n signup_method = signup_method_from_slug(self.request.POST[\"signup_method_slug\"])\n configuration_form = signup_method.get_configuration_form(self.request.POST)\n except ValueError as e:\n raise ValidationError(e)\n if form.is_valid() and configuration_form.is_valid():\n shift = form.save(commit=False)\n event = self.get_event()\n shift.event = event\n shift.signup_configuration = configuration_form.get_configuration()\n shift.save()\n if \"addAnother\" in self.request.POST:\n return redirect(\n reverse(\n \"event_management:event_createshift\", kwargs={\"pk\": self.kwargs.get(\"pk\")}\n )\n )\n else:\n event.active = True\n event.save()\n messages.success(\n self.request, _(\"The event {title} has been saved.\").format(title=event.title)\n )\n return redirect(event.get_absolute_url())\n else:\n return self.render_to_response(\n self.get_context_data(\n form=form,\n configuration_form=signup_method.render_configuration_form(configuration_form),\n )\n )\n\n\nclass ShiftConfigurationFormView(View):\n def get(self, request, *args, **kwargs):\n from event_management.signup import signup_method_from_slug\n\n signup_method = signup_method_from_slug(self.kwargs.get(\"slug\"))\n return HttpResponse(signup_method.render_configuration_form())\n\n\nclass ShiftUpdateView(guardian.mixins.PermissionRequiredMixin, TemplateView, SingleObjectMixin):\n model = Shift\n template_name = \"event_management/shift_form.html\"\n permission_required = \"event_management.change_event\"\n raise_exception = True\n accept_global_perms = True\n\n def get_permission_object(self):\n return self.get_object().event\n\n def get_shift_form(self):\n return ShiftForm(\n self.request.POST or None,\n instance=self.object,\n initial={\n \"date\": self.object.meeting_time.date(),\n \"meeting_time\": self.object.meeting_time.astimezone(get_default_timezone()).time(),\n \"start_time\": self.object.start_time.astimezone(get_default_timezone()).time(),\n \"end_time\": self.object.end_time.astimezone(get_default_timezone()).time(),\n },\n )\n\n def get_configuration_form(self):\n return self.object.signup_method.render_configuration_form(\n data=self.request.POST or None, initial=json.loads(self.object.signup_configuration)\n )\n\n def get_context_data(self, **kwargs):\n self.object = self.get_object()\n kwargs.setdefault(\"event\", self.object.event)\n kwargs.setdefault(\"form\", self.get_shift_form())\n kwargs.setdefault(\"configuration_form\", self.get_configuration_form())\n return super().get_context_data(**kwargs)\n\n def post(self, *args, **kwargs):\n self.object = self.get_object()\n form = self.get_shift_form()\n try:\n from event_management.signup import signup_method_from_slug\n\n signup_method = signup_method_from_slug(self.request.POST[\"signup_method_slug\"])\n configuration_form = signup_method.get_configuration_form(self.request.POST)\n except ValueError as e:\n raise ValidationError(e)\n if form.is_valid() and configuration_form.is_valid():\n shift = form.save(commit=False)\n shift.signup_configuration = configuration_form.get_configuration()\n shift.save()\n if \"addAnother\" in self.request.POST:\n return redirect(\n reverse(\"event_management:event_createshift\", kwargs={\"pk\": shift.event.pk})\n )\n else:\n messages.success(\n self.request, _(\"The shift {shift} has been saved.\").format(shift=shift)\n )\n return redirect(self.object.event.get_absolute_url())\n else:\n return self.render_to_response(\n self.get_context_data(\n form=form,\n configuration_form=signup_method.render_configuration_form(configuration_form),\n )\n )\n\n\nclass ShiftDeleteView(PermissionRequiredMixin, DeleteView):\n permission_required = \"event_management.change_event\"\n model = Shift\n\n def delete(self, request, *args, **kwargs):\n self.object = self.get_object()\n if self.object.event.shifts.count() == 1:\n messages.error(self.request, _(\"You cannot delete the last shift!\"))\n return redirect(self.object.event.get_absolute_url())\n else:\n return super().delete(request, *args, **kwargs)\n\n def get_success_url(self):\n messages.success(self.request, _(\"The shift has been deleted.\"))\n return self.object.event.get_absolute_url()\n\n\n# TODO rename to signup\nclass ShiftRegisterView(LoginRequiredMixin, View):\n def get(self, request, *args, **kwargs):\n shift = get_object_or_404(Shift, id=self.kwargs[\"pk\"])\n return shift.signup_method.signup_view(request, *args, **kwargs)\n\n\nclass ShiftDeclineView(RedirectView):\n def get_redirect_url(self, *args, **kwargs):\n shift = get_object_or_404(Shift, id=self.kwargs[\"pk\"])\n return shift.signup_method.decline_view(self.request)\n", "path": "event_management/views.py"}, {"content": "from django.urls import path\n\nfrom user_management import views\n\napp_name = \"user_management\"\nurlpatterns = [\n path(\"profile/\", views.ProfileView.as_view(), name=\"profile\"),\n path(\"groups/\", views.GroupListView.as_view(), name=\"group_list\"),\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n path(\"users/\", views.UserProfileListView.as_view(), name=\"user_list\",),\n]\n", "path": "user_management/urls.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.urls import reverse\nfrom django.views.generic import DetailView, ListView, UpdateView, CreateView, DeleteView\nfrom guardian.shortcuts import get_users_with_perms, get_objects_for_group\n\nfrom user_management.forms import GroupForm\nfrom django.utils.translation import gettext as _\n\nfrom user_management.models import UserProfile\n\n\nclass ProfileView(LoginRequiredMixin, DetailView):\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass UserProfileListView(PermissionRequiredMixin, ListView):\n model = UserProfile\n permission_required = \"user.view_user\"\n\n def get_queryset(self):\n return UserProfile.objects.all()\n\n\nclass GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n template_name = \"user_management/group_list.html\"\n\n\nclass GroupCreateView(PermissionRequiredMixin, CreateView):\n model = Group\n permission_required = \"auth.add_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": UserProfile.objects.none(),\n \"can_add_event\": False,\n \"publish_event_for_group\": Group.objects.none(),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(self.request, _(\"Group succesfully created.\"))\n return reverse(\"user_management:group_list\")\n\n\nclass GroupUpdateView(PermissionRequiredMixin, UpdateView):\n model = Group\n permission_required = \"auth.change_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": self.object.user_set.all(),\n \"can_add_event\": self.object.permissions.filter(codename=\"add_event\").exists(),\n \"publish_event_for_group\": get_objects_for_group(\n self.object, \"publish_event_for_group\", klass=Group\n ),\n }\n return kwargs\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n\n\nclass GroupDeleteView(PermissionRequiredMixin, DeleteView):\n model = Group\n permission_required = \"auth.delete_group\"\n template_name = \"user_management/group_confirm_delete.html\"\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n", "path": "user_management/views.py"}]}
| 3,926 | 425 |
gh_patches_debug_22906
|
rasdani/github-patches
|
git_diff
|
xonsh__xonsh-2016
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Troublesome command entering python mode
I'm trying to run this command which I used to use often in zsh:
```
svn diff ./test.py | vim -R -
```
but it keeps trying to evaluate in Python mode.
I can usually wrestle these types of troublesome commands into submission with `$()` and `@()` but I can't figure this one out.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xonsh/commands_cache.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Module for caching command & alias names as well as for predicting whether
3 a command will be able to be run in the background.
4
5 A background predictor is a function that accepect a single argument list
6 and returns whethere or not the process can be run in the background (returns
7 True) or must be run the foreground (returns False).
8 """
9 import os
10 import builtins
11 import argparse
12 import collections
13 import collections.abc as cabc
14
15 from xonsh.platform import ON_WINDOWS, pathbasename
16 from xonsh.tools import executables_in
17 from xonsh.lazyasd import lazyobject
18
19
20 class CommandsCache(cabc.Mapping):
21 """A lazy cache representing the commands available on the file system.
22 The keys are the command names and the values a tuple of (loc, has_alias)
23 where loc is either a str pointing to the executable on the file system or
24 None (if no executable exists) and has_alias is a boolean flag for whether
25 the command has an alias.
26 """
27
28 def __init__(self):
29 self._cmds_cache = {}
30 self._path_checksum = None
31 self._alias_checksum = None
32 self._path_mtime = -1
33 self.threadable_predictors = default_threadable_predictors()
34
35 def __contains__(self, key):
36 _ = self.all_commands
37 return self.lazyin(key)
38
39 def __iter__(self):
40 for cmd, (path, is_alias) in self.all_commands.items():
41 if ON_WINDOWS and path is not None:
42 # All comand keys are stored in uppercase on Windows.
43 # This ensures the original command name is returned.
44 cmd = pathbasename(path)
45 yield cmd
46
47 def __len__(self):
48 return len(self.all_commands)
49
50 def __getitem__(self, key):
51 _ = self.all_commands
52 return self.lazyget(key)
53
54 def is_empty(self):
55 """Returns whether the cache is populated or not."""
56 return len(self._cmds_cache) == 0
57
58 @staticmethod
59 def get_possible_names(name):
60 """Generates the possible `PATHEXT` extension variants of a given executable
61 name on Windows as a list, conserving the ordering in `PATHEXT`.
62 Returns a list as `name` being the only item in it on other platforms."""
63 if ON_WINDOWS:
64 pathext = builtins.__xonsh_env__.get('PATHEXT')
65 name = name.upper()
66 return [
67 name + ext
68 for ext in ([''] + pathext)
69 ]
70 else:
71 return [name]
72
73 @property
74 def all_commands(self):
75 paths = builtins.__xonsh_env__.get('PATH', [])
76 pathset = frozenset(x for x in paths if os.path.isdir(x))
77 # did PATH change?
78 path_hash = hash(pathset)
79 cache_valid = path_hash == self._path_checksum
80 self._path_checksum = path_hash
81 # did aliases change?
82 alss = getattr(builtins, 'aliases', set())
83 al_hash = hash(frozenset(alss))
84 cache_valid = cache_valid and al_hash == self._alias_checksum
85 self._alias_checksum = al_hash
86 # did the contents of any directory in PATH change?
87 max_mtime = 0
88 for path in pathset:
89 mtime = os.stat(path).st_mtime
90 if mtime > max_mtime:
91 max_mtime = mtime
92 cache_valid = cache_valid and (max_mtime <= self._path_mtime)
93 self._path_mtime = max_mtime
94 if cache_valid:
95 return self._cmds_cache
96 allcmds = {}
97 for path in reversed(paths):
98 # iterate backwards so that entries at the front of PATH overwrite
99 # entries at the back.
100 for cmd in executables_in(path):
101 key = cmd.upper() if ON_WINDOWS else cmd
102 allcmds[key] = (os.path.join(path, cmd), cmd in alss)
103 for cmd in alss:
104 if cmd not in allcmds:
105 key = cmd.upper() if ON_WINDOWS else cmd
106 allcmds[key] = (cmd, True)
107 self._cmds_cache = allcmds
108 return allcmds
109
110 def cached_name(self, name):
111 """Returns the name that would appear in the cache, if it exists."""
112 if name is None:
113 return None
114 cached = pathbasename(name)
115 if ON_WINDOWS:
116 keys = self.get_possible_names(cached)
117 cached = next((k for k in keys if k in self._cmds_cache), None)
118 return cached
119
120 def lazyin(self, key):
121 """Checks if the value is in the current cache without the potential to
122 update the cache. It just says whether the value is known *now*. This
123 may not reflect precisely what is on the $PATH.
124 """
125 return self.cached_name(key) in self._cmds_cache
126
127 def lazyiter(self):
128 """Returns an iterator over the current cache contents without the
129 potential to update the cache. This may not reflect what is on the
130 $PATH.
131 """
132 return iter(self._cmds_cache)
133
134 def lazylen(self):
135 """Returns the length of the current cache contents without the
136 potential to update the cache. This may not reflect precisely
137 what is on the $PATH.
138 """
139 return len(self._cmds_cache)
140
141 def lazyget(self, key, default=None):
142 """A lazy value getter."""
143 return self._cmds_cache.get(self.cached_name(key), default)
144
145 def locate_binary(self, name):
146 """Locates an executable on the file system using the cache."""
147 # make sure the cache is up to date by accessing the property
148 _ = self.all_commands
149 return self.lazy_locate_binary(name)
150
151 def lazy_locate_binary(self, name):
152 """Locates an executable in the cache, without checking its validity."""
153 possibilities = self.get_possible_names(name)
154 if ON_WINDOWS:
155 # Windows users expect to be able to execute files in the same
156 # directory without `./`
157 local_bin = next((fn for fn in possibilities if os.path.isfile(fn)),
158 None)
159 if local_bin:
160 return os.path.abspath(local_bin)
161 cached = next((cmd for cmd in possibilities if cmd in self._cmds_cache),
162 None)
163 if cached:
164 (path, is_alias) = self._cmds_cache[cached]
165 return path if not is_alias else None
166 elif os.path.isfile(name) and name != pathbasename(name):
167 return name
168
169 def predict_threadable(self, cmd):
170 """Predicts whether a command list is able to be run on a background
171 thread, rather than the main thread.
172 """
173 name = self.cached_name(cmd[0])
174 predictors = self.threadable_predictors
175 if ON_WINDOWS:
176 # On all names (keys) are stored in upper case so instead
177 # we get the original cmd or alias name
178 path, _ = self.lazyget(name, (None, None))
179 if path is None:
180 return True
181 else:
182 name = pathbasename(path)
183 if name not in predictors:
184 pre, ext = os.path.splitext(name)
185 if pre in predictors:
186 predictors[name] = predictors[pre]
187 predictor = predictors[name]
188 return predictor(cmd[1:])
189
190 #
191 # Background Predictors
192 #
193
194
195 def predict_true(args):
196 """Always say the process is threadable."""
197 return True
198
199
200 def predict_false(args):
201 """Never say the process is threadable."""
202 return False
203
204
205 @lazyobject
206 def SHELL_PREDICTOR_PARSER():
207 p = argparse.ArgumentParser('shell', add_help=False)
208 p.add_argument('-c', nargs='?', default=None)
209 p.add_argument('filename', nargs='?', default=None)
210 return p
211
212
213 def predict_shell(args):
214 """Precict the backgroundability of the normal shell interface, which
215 comes down to whether it is being run in subproc mode.
216 """
217 ns, _ = SHELL_PREDICTOR_PARSER.parse_known_args(args)
218 if ns.c is None and ns.filename is None:
219 pred = False
220 else:
221 pred = True
222 return pred
223
224
225 @lazyobject
226 def HELP_VER_PREDICTOR_PARSER():
227 p = argparse.ArgumentParser('cmd', add_help=False)
228 p.add_argument('-h', '--help', dest='help',
229 action='store_true', default=None)
230 p.add_argument('-v', '-V', '--version', dest='version',
231 action='store_true', default=None)
232 return p
233
234
235 def predict_help_ver(args):
236 """Precict the backgroundability of commands that have help & version
237 switches: -h, --help, -v, -V, --version. If either of these options is
238 present, the command is assumed to print to stdout normally and is therefore
239 threadable. Otherwise, the command is assumed to not be threadable.
240 This is useful for commands, like top, that normally enter alternate mode
241 but may not in certain circumstances.
242 """
243 ns, _ = HELP_VER_PREDICTOR_PARSER.parse_known_args(args)
244 pred = ns.help is not None or ns.version is not None
245 return pred
246
247
248 def default_threadable_predictors():
249 """Generates a new defaultdict for known threadable predictors.
250 The default is to predict true.
251 """
252 # alphabetical, for what it is worth.
253 predictors = {
254 'bash': predict_shell,
255 'csh': predict_shell,
256 'clear': predict_false,
257 'cls': predict_false,
258 'cmd': predict_shell,
259 'fish': predict_shell,
260 'htop': predict_help_ver,
261 'ksh': predict_shell,
262 'less': predict_help_ver,
263 'man': predict_help_ver,
264 'more': predict_help_ver,
265 'mvim': predict_help_ver,
266 'mutt': predict_help_ver,
267 'nano': predict_help_ver,
268 'psql': predict_false,
269 'ranger': predict_help_ver,
270 'scp': predict_false,
271 'sh': predict_shell,
272 'ssh': predict_false,
273 'startx': predict_false,
274 'sudo': predict_help_ver,
275 'tcsh': predict_shell,
276 'top': predict_help_ver,
277 'vi': predict_false,
278 'vim': predict_false,
279 'vimpager': predict_help_ver,
280 'xo': predict_help_ver,
281 'xonsh': predict_shell,
282 'zsh': predict_shell,
283 }
284 return collections.defaultdict(lambda: predict_true, predictors)
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/xonsh/commands_cache.py b/xonsh/commands_cache.py
--- a/xonsh/commands_cache.py
+++ b/xonsh/commands_cache.py
@@ -256,6 +256,7 @@
'clear': predict_false,
'cls': predict_false,
'cmd': predict_shell,
+ 'ex': predict_false,
'fish': predict_shell,
'htop': predict_help_ver,
'ksh': predict_shell,
@@ -267,6 +268,8 @@
'nano': predict_help_ver,
'psql': predict_false,
'ranger': predict_help_ver,
+ 'rview': predict_false,
+ 'rvim': predict_false,
'scp': predict_false,
'sh': predict_shell,
'ssh': predict_false,
@@ -275,6 +278,7 @@
'tcsh': predict_shell,
'top': predict_help_ver,
'vi': predict_false,
+ 'view': predict_false,
'vim': predict_false,
'vimpager': predict_help_ver,
'xo': predict_help_ver,
|
{"golden_diff": "diff --git a/xonsh/commands_cache.py b/xonsh/commands_cache.py\n--- a/xonsh/commands_cache.py\n+++ b/xonsh/commands_cache.py\n@@ -256,6 +256,7 @@\n 'clear': predict_false,\n 'cls': predict_false,\n 'cmd': predict_shell,\n+ 'ex': predict_false,\n 'fish': predict_shell,\n 'htop': predict_help_ver,\n 'ksh': predict_shell,\n@@ -267,6 +268,8 @@\n 'nano': predict_help_ver,\n 'psql': predict_false,\n 'ranger': predict_help_ver,\n+ 'rview': predict_false,\n+ 'rvim': predict_false,\n 'scp': predict_false,\n 'sh': predict_shell,\n 'ssh': predict_false,\n@@ -275,6 +278,7 @@\n 'tcsh': predict_shell,\n 'top': predict_help_ver,\n 'vi': predict_false,\n+ 'view': predict_false,\n 'vim': predict_false,\n 'vimpager': predict_help_ver,\n 'xo': predict_help_ver,\n", "issue": "Troublesome command entering python mode\nI'm trying to run this command which I used to use often in zsh:\r\n```\r\nsvn diff ./test.py | vim -R -\r\n```\r\nbut it keeps trying to evaluate in Python mode. \r\n\r\nI can usually wrestle these types of troublesome commands into submission with `$()` and `@()` but I can't figure this one out.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Module for caching command & alias names as well as for predicting whether\na command will be able to be run in the background.\n\nA background predictor is a function that accepect a single argument list\nand returns whethere or not the process can be run in the background (returns\nTrue) or must be run the foreground (returns False).\n\"\"\"\nimport os\nimport builtins\nimport argparse\nimport collections\nimport collections.abc as cabc\n\nfrom xonsh.platform import ON_WINDOWS, pathbasename\nfrom xonsh.tools import executables_in\nfrom xonsh.lazyasd import lazyobject\n\n\nclass CommandsCache(cabc.Mapping):\n \"\"\"A lazy cache representing the commands available on the file system.\n The keys are the command names and the values a tuple of (loc, has_alias)\n where loc is either a str pointing to the executable on the file system or\n None (if no executable exists) and has_alias is a boolean flag for whether\n the command has an alias.\n \"\"\"\n\n def __init__(self):\n self._cmds_cache = {}\n self._path_checksum = None\n self._alias_checksum = None\n self._path_mtime = -1\n self.threadable_predictors = default_threadable_predictors()\n\n def __contains__(self, key):\n _ = self.all_commands\n return self.lazyin(key)\n\n def __iter__(self):\n for cmd, (path, is_alias) in self.all_commands.items():\n if ON_WINDOWS and path is not None:\n # All comand keys are stored in uppercase on Windows.\n # This ensures the original command name is returned.\n cmd = pathbasename(path)\n yield cmd\n\n def __len__(self):\n return len(self.all_commands)\n\n def __getitem__(self, key):\n _ = self.all_commands\n return self.lazyget(key)\n\n def is_empty(self):\n \"\"\"Returns whether the cache is populated or not.\"\"\"\n return len(self._cmds_cache) == 0\n\n @staticmethod\n def get_possible_names(name):\n \"\"\"Generates the possible `PATHEXT` extension variants of a given executable\n name on Windows as a list, conserving the ordering in `PATHEXT`.\n Returns a list as `name` being the only item in it on other platforms.\"\"\"\n if ON_WINDOWS:\n pathext = builtins.__xonsh_env__.get('PATHEXT')\n name = name.upper()\n return [\n name + ext\n for ext in ([''] + pathext)\n ]\n else:\n return [name]\n\n @property\n def all_commands(self):\n paths = builtins.__xonsh_env__.get('PATH', [])\n pathset = frozenset(x for x in paths if os.path.isdir(x))\n # did PATH change?\n path_hash = hash(pathset)\n cache_valid = path_hash == self._path_checksum\n self._path_checksum = path_hash\n # did aliases change?\n alss = getattr(builtins, 'aliases', set())\n al_hash = hash(frozenset(alss))\n cache_valid = cache_valid and al_hash == self._alias_checksum\n self._alias_checksum = al_hash\n # did the contents of any directory in PATH change?\n max_mtime = 0\n for path in pathset:\n mtime = os.stat(path).st_mtime\n if mtime > max_mtime:\n max_mtime = mtime\n cache_valid = cache_valid and (max_mtime <= self._path_mtime)\n self._path_mtime = max_mtime\n if cache_valid:\n return self._cmds_cache\n allcmds = {}\n for path in reversed(paths):\n # iterate backwards so that entries at the front of PATH overwrite\n # entries at the back.\n for cmd in executables_in(path):\n key = cmd.upper() if ON_WINDOWS else cmd\n allcmds[key] = (os.path.join(path, cmd), cmd in alss)\n for cmd in alss:\n if cmd not in allcmds:\n key = cmd.upper() if ON_WINDOWS else cmd\n allcmds[key] = (cmd, True)\n self._cmds_cache = allcmds\n return allcmds\n\n def cached_name(self, name):\n \"\"\"Returns the name that would appear in the cache, if it exists.\"\"\"\n if name is None:\n return None\n cached = pathbasename(name)\n if ON_WINDOWS:\n keys = self.get_possible_names(cached)\n cached = next((k for k in keys if k in self._cmds_cache), None)\n return cached\n\n def lazyin(self, key):\n \"\"\"Checks if the value is in the current cache without the potential to\n update the cache. It just says whether the value is known *now*. This\n may not reflect precisely what is on the $PATH.\n \"\"\"\n return self.cached_name(key) in self._cmds_cache\n\n def lazyiter(self):\n \"\"\"Returns an iterator over the current cache contents without the\n potential to update the cache. This may not reflect what is on the\n $PATH.\n \"\"\"\n return iter(self._cmds_cache)\n\n def lazylen(self):\n \"\"\"Returns the length of the current cache contents without the\n potential to update the cache. This may not reflect precisely\n what is on the $PATH.\n \"\"\"\n return len(self._cmds_cache)\n\n def lazyget(self, key, default=None):\n \"\"\"A lazy value getter.\"\"\"\n return self._cmds_cache.get(self.cached_name(key), default)\n\n def locate_binary(self, name):\n \"\"\"Locates an executable on the file system using the cache.\"\"\"\n # make sure the cache is up to date by accessing the property\n _ = self.all_commands\n return self.lazy_locate_binary(name)\n\n def lazy_locate_binary(self, name):\n \"\"\"Locates an executable in the cache, without checking its validity.\"\"\"\n possibilities = self.get_possible_names(name)\n if ON_WINDOWS:\n # Windows users expect to be able to execute files in the same\n # directory without `./`\n local_bin = next((fn for fn in possibilities if os.path.isfile(fn)),\n None)\n if local_bin:\n return os.path.abspath(local_bin)\n cached = next((cmd for cmd in possibilities if cmd in self._cmds_cache),\n None)\n if cached:\n (path, is_alias) = self._cmds_cache[cached]\n return path if not is_alias else None\n elif os.path.isfile(name) and name != pathbasename(name):\n return name\n\n def predict_threadable(self, cmd):\n \"\"\"Predicts whether a command list is able to be run on a background\n thread, rather than the main thread.\n \"\"\"\n name = self.cached_name(cmd[0])\n predictors = self.threadable_predictors\n if ON_WINDOWS:\n # On all names (keys) are stored in upper case so instead\n # we get the original cmd or alias name\n path, _ = self.lazyget(name, (None, None))\n if path is None:\n return True\n else:\n name = pathbasename(path)\n if name not in predictors:\n pre, ext = os.path.splitext(name)\n if pre in predictors:\n predictors[name] = predictors[pre]\n predictor = predictors[name]\n return predictor(cmd[1:])\n\n#\n# Background Predictors\n#\n\n\ndef predict_true(args):\n \"\"\"Always say the process is threadable.\"\"\"\n return True\n\n\ndef predict_false(args):\n \"\"\"Never say the process is threadable.\"\"\"\n return False\n\n\n@lazyobject\ndef SHELL_PREDICTOR_PARSER():\n p = argparse.ArgumentParser('shell', add_help=False)\n p.add_argument('-c', nargs='?', default=None)\n p.add_argument('filename', nargs='?', default=None)\n return p\n\n\ndef predict_shell(args):\n \"\"\"Precict the backgroundability of the normal shell interface, which\n comes down to whether it is being run in subproc mode.\n \"\"\"\n ns, _ = SHELL_PREDICTOR_PARSER.parse_known_args(args)\n if ns.c is None and ns.filename is None:\n pred = False\n else:\n pred = True\n return pred\n\n\n@lazyobject\ndef HELP_VER_PREDICTOR_PARSER():\n p = argparse.ArgumentParser('cmd', add_help=False)\n p.add_argument('-h', '--help', dest='help',\n action='store_true', default=None)\n p.add_argument('-v', '-V', '--version', dest='version',\n action='store_true', default=None)\n return p\n\n\ndef predict_help_ver(args):\n \"\"\"Precict the backgroundability of commands that have help & version\n switches: -h, --help, -v, -V, --version. If either of these options is\n present, the command is assumed to print to stdout normally and is therefore\n threadable. Otherwise, the command is assumed to not be threadable.\n This is useful for commands, like top, that normally enter alternate mode\n but may not in certain circumstances.\n \"\"\"\n ns, _ = HELP_VER_PREDICTOR_PARSER.parse_known_args(args)\n pred = ns.help is not None or ns.version is not None\n return pred\n\n\ndef default_threadable_predictors():\n \"\"\"Generates a new defaultdict for known threadable predictors.\n The default is to predict true.\n \"\"\"\n # alphabetical, for what it is worth.\n predictors = {\n 'bash': predict_shell,\n 'csh': predict_shell,\n 'clear': predict_false,\n 'cls': predict_false,\n 'cmd': predict_shell,\n 'fish': predict_shell,\n 'htop': predict_help_ver,\n 'ksh': predict_shell,\n 'less': predict_help_ver,\n 'man': predict_help_ver,\n 'more': predict_help_ver,\n 'mvim': predict_help_ver,\n 'mutt': predict_help_ver,\n 'nano': predict_help_ver,\n 'psql': predict_false,\n 'ranger': predict_help_ver,\n 'scp': predict_false,\n 'sh': predict_shell,\n 'ssh': predict_false,\n 'startx': predict_false,\n 'sudo': predict_help_ver,\n 'tcsh': predict_shell,\n 'top': predict_help_ver,\n 'vi': predict_false,\n 'vim': predict_false,\n 'vimpager': predict_help_ver,\n 'xo': predict_help_ver,\n 'xonsh': predict_shell,\n 'zsh': predict_shell,\n }\n return collections.defaultdict(lambda: predict_true, predictors)\n", "path": "xonsh/commands_cache.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Module for caching command & alias names as well as for predicting whether\na command will be able to be run in the background.\n\nA background predictor is a function that accepect a single argument list\nand returns whethere or not the process can be run in the background (returns\nTrue) or must be run the foreground (returns False).\n\"\"\"\nimport os\nimport builtins\nimport argparse\nimport collections\nimport collections.abc as cabc\n\nfrom xonsh.platform import ON_WINDOWS, pathbasename\nfrom xonsh.tools import executables_in\nfrom xonsh.lazyasd import lazyobject\n\n\nclass CommandsCache(cabc.Mapping):\n \"\"\"A lazy cache representing the commands available on the file system.\n The keys are the command names and the values a tuple of (loc, has_alias)\n where loc is either a str pointing to the executable on the file system or\n None (if no executable exists) and has_alias is a boolean flag for whether\n the command has an alias.\n \"\"\"\n\n def __init__(self):\n self._cmds_cache = {}\n self._path_checksum = None\n self._alias_checksum = None\n self._path_mtime = -1\n self.threadable_predictors = default_threadable_predictors()\n\n def __contains__(self, key):\n _ = self.all_commands\n return self.lazyin(key)\n\n def __iter__(self):\n for cmd, (path, is_alias) in self.all_commands.items():\n if ON_WINDOWS and path is not None:\n # All comand keys are stored in uppercase on Windows.\n # This ensures the original command name is returned.\n cmd = pathbasename(path)\n yield cmd\n\n def __len__(self):\n return len(self.all_commands)\n\n def __getitem__(self, key):\n _ = self.all_commands\n return self.lazyget(key)\n\n def is_empty(self):\n \"\"\"Returns whether the cache is populated or not.\"\"\"\n return len(self._cmds_cache) == 0\n\n @staticmethod\n def get_possible_names(name):\n \"\"\"Generates the possible `PATHEXT` extension variants of a given executable\n name on Windows as a list, conserving the ordering in `PATHEXT`.\n Returns a list as `name` being the only item in it on other platforms.\"\"\"\n if ON_WINDOWS:\n pathext = builtins.__xonsh_env__.get('PATHEXT')\n name = name.upper()\n return [\n name + ext\n for ext in ([''] + pathext)\n ]\n else:\n return [name]\n\n @property\n def all_commands(self):\n paths = builtins.__xonsh_env__.get('PATH', [])\n pathset = frozenset(x for x in paths if os.path.isdir(x))\n # did PATH change?\n path_hash = hash(pathset)\n cache_valid = path_hash == self._path_checksum\n self._path_checksum = path_hash\n # did aliases change?\n alss = getattr(builtins, 'aliases', set())\n al_hash = hash(frozenset(alss))\n cache_valid = cache_valid and al_hash == self._alias_checksum\n self._alias_checksum = al_hash\n # did the contents of any directory in PATH change?\n max_mtime = 0\n for path in pathset:\n mtime = os.stat(path).st_mtime\n if mtime > max_mtime:\n max_mtime = mtime\n cache_valid = cache_valid and (max_mtime <= self._path_mtime)\n self._path_mtime = max_mtime\n if cache_valid:\n return self._cmds_cache\n allcmds = {}\n for path in reversed(paths):\n # iterate backwards so that entries at the front of PATH overwrite\n # entries at the back.\n for cmd in executables_in(path):\n key = cmd.upper() if ON_WINDOWS else cmd\n allcmds[key] = (os.path.join(path, cmd), cmd in alss)\n for cmd in alss:\n if cmd not in allcmds:\n key = cmd.upper() if ON_WINDOWS else cmd\n allcmds[key] = (cmd, True)\n self._cmds_cache = allcmds\n return allcmds\n\n def cached_name(self, name):\n \"\"\"Returns the name that would appear in the cache, if it exists.\"\"\"\n if name is None:\n return None\n cached = pathbasename(name)\n if ON_WINDOWS:\n keys = self.get_possible_names(cached)\n cached = next((k for k in keys if k in self._cmds_cache), None)\n return cached\n\n def lazyin(self, key):\n \"\"\"Checks if the value is in the current cache without the potential to\n update the cache. It just says whether the value is known *now*. This\n may not reflect precisely what is on the $PATH.\n \"\"\"\n return self.cached_name(key) in self._cmds_cache\n\n def lazyiter(self):\n \"\"\"Returns an iterator over the current cache contents without the\n potential to update the cache. This may not reflect what is on the\n $PATH.\n \"\"\"\n return iter(self._cmds_cache)\n\n def lazylen(self):\n \"\"\"Returns the length of the current cache contents without the\n potential to update the cache. This may not reflect precisely\n what is on the $PATH.\n \"\"\"\n return len(self._cmds_cache)\n\n def lazyget(self, key, default=None):\n \"\"\"A lazy value getter.\"\"\"\n return self._cmds_cache.get(self.cached_name(key), default)\n\n def locate_binary(self, name):\n \"\"\"Locates an executable on the file system using the cache.\"\"\"\n # make sure the cache is up to date by accessing the property\n _ = self.all_commands\n return self.lazy_locate_binary(name)\n\n def lazy_locate_binary(self, name):\n \"\"\"Locates an executable in the cache, without checking its validity.\"\"\"\n possibilities = self.get_possible_names(name)\n if ON_WINDOWS:\n # Windows users expect to be able to execute files in the same\n # directory without `./`\n local_bin = next((fn for fn in possibilities if os.path.isfile(fn)),\n None)\n if local_bin:\n return os.path.abspath(local_bin)\n cached = next((cmd for cmd in possibilities if cmd in self._cmds_cache),\n None)\n if cached:\n (path, is_alias) = self._cmds_cache[cached]\n return path if not is_alias else None\n elif os.path.isfile(name) and name != pathbasename(name):\n return name\n\n def predict_threadable(self, cmd):\n \"\"\"Predicts whether a command list is able to be run on a background\n thread, rather than the main thread.\n \"\"\"\n name = self.cached_name(cmd[0])\n predictors = self.threadable_predictors\n if ON_WINDOWS:\n # On all names (keys) are stored in upper case so instead\n # we get the original cmd or alias name\n path, _ = self.lazyget(name, (None, None))\n if path is None:\n return True\n else:\n name = pathbasename(path)\n if name not in predictors:\n pre, ext = os.path.splitext(name)\n if pre in predictors:\n predictors[name] = predictors[pre]\n predictor = predictors[name]\n return predictor(cmd[1:])\n\n#\n# Background Predictors\n#\n\n\ndef predict_true(args):\n \"\"\"Always say the process is threadable.\"\"\"\n return True\n\n\ndef predict_false(args):\n \"\"\"Never say the process is threadable.\"\"\"\n return False\n\n\n@lazyobject\ndef SHELL_PREDICTOR_PARSER():\n p = argparse.ArgumentParser('shell', add_help=False)\n p.add_argument('-c', nargs='?', default=None)\n p.add_argument('filename', nargs='?', default=None)\n return p\n\n\ndef predict_shell(args):\n \"\"\"Precict the backgroundability of the normal shell interface, which\n comes down to whether it is being run in subproc mode.\n \"\"\"\n ns, _ = SHELL_PREDICTOR_PARSER.parse_known_args(args)\n if ns.c is None and ns.filename is None:\n pred = False\n else:\n pred = True\n return pred\n\n\n@lazyobject\ndef HELP_VER_PREDICTOR_PARSER():\n p = argparse.ArgumentParser('cmd', add_help=False)\n p.add_argument('-h', '--help', dest='help',\n action='store_true', default=None)\n p.add_argument('-v', '-V', '--version', dest='version',\n action='store_true', default=None)\n return p\n\n\ndef predict_help_ver(args):\n \"\"\"Precict the backgroundability of commands that have help & version\n switches: -h, --help, -v, -V, --version. If either of these options is\n present, the command is assumed to print to stdout normally and is therefore\n threadable. Otherwise, the command is assumed to not be threadable.\n This is useful for commands, like top, that normally enter alternate mode\n but may not in certain circumstances.\n \"\"\"\n ns, _ = HELP_VER_PREDICTOR_PARSER.parse_known_args(args)\n pred = ns.help is not None or ns.version is not None\n return pred\n\n\ndef default_threadable_predictors():\n \"\"\"Generates a new defaultdict for known threadable predictors.\n The default is to predict true.\n \"\"\"\n # alphabetical, for what it is worth.\n predictors = {\n 'bash': predict_shell,\n 'csh': predict_shell,\n 'clear': predict_false,\n 'cls': predict_false,\n 'cmd': predict_shell,\n 'ex': predict_false,\n 'fish': predict_shell,\n 'htop': predict_help_ver,\n 'ksh': predict_shell,\n 'less': predict_help_ver,\n 'man': predict_help_ver,\n 'more': predict_help_ver,\n 'mvim': predict_help_ver,\n 'mutt': predict_help_ver,\n 'nano': predict_help_ver,\n 'psql': predict_false,\n 'ranger': predict_help_ver,\n 'rview': predict_false,\n 'rvim': predict_false,\n 'scp': predict_false,\n 'sh': predict_shell,\n 'ssh': predict_false,\n 'startx': predict_false,\n 'sudo': predict_help_ver,\n 'tcsh': predict_shell,\n 'top': predict_help_ver,\n 'vi': predict_false,\n 'view': predict_false,\n 'vim': predict_false,\n 'vimpager': predict_help_ver,\n 'xo': predict_help_ver,\n 'xonsh': predict_shell,\n 'zsh': predict_shell,\n }\n return collections.defaultdict(lambda: predict_true, predictors)\n", "path": "xonsh/commands_cache.py"}]}
| 3,366 | 255 |
gh_patches_debug_17826
|
rasdani/github-patches
|
git_diff
|
dask__distributed-1337
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ValueError / hang on LocalCluster startup
I'm loading my conda environment off NAS storage, and when usage is heavy, there can be long delays in loading things, which I believe can exceed the default timeout (five seconds) here:
https://github.com/dask/distributed/blob/master/distributed/client.py#L428
As a result I see this on a call to `LocalCluster()`:
Traceback (most recent call last):
File "lib/python3.5/site-packages/tornado/gen.py", line 910, in error_callback
future.result()
File "lib/python3.5/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "<string>", line 4, in raise_exc_info
File "lib/python3.5/site-packages/tornado/gen.py", line 1063, in run
yielded = self.gen.throw(*exc_info)
File "lib/python3.5/site-packages/distributed/nanny.py", line 300, in start
yield self._wait_until_running()
File "lib/python3.5/site-packages/tornado/gen.py", line 1055, in run
value = future.result()
File "lib/python3.5/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "<string>", line 4, in raise_exc_info
File "lib/python3.5/site-packages/tornado/gen.py", line 1069, in run
yielded = self.gen.send(value)
File "lib/python3.5/site-packages/distributed/nanny.py", line 386, in _wait_until_running
raise ValueError("Worker not started")
ValueError: Worker not started
This `ValueError` does not seem to be passed up to my application; my application seems frozen waiting for the cluster start up (not 100% sure where) - ie, it never returns from the `LocalCluster()` call. Although the multiprocessing pool actually does start up, in this case too late to matter.
How can I catch / retry this situation?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/deploy/local.py`
Content:
```
1 from __future__ import print_function, division, absolute_import
2
3 import atexit
4 import logging
5 import math
6 from threading import Thread
7 from time import sleep
8 import warnings
9 import weakref
10
11 from tornado import gen
12 from tornado.ioloop import IOLoop
13
14 from ..core import CommClosedError
15 from ..utils import sync, ignoring, All, silence_logging
16 from ..nanny import Nanny
17 from ..scheduler import Scheduler
18 from ..worker import Worker, _ncores
19
20 logger = logging.getLogger(__name__)
21
22
23 class LocalCluster(object):
24 """ Create local Scheduler and Workers
25
26 This creates a "cluster" of a scheduler and workers running on the local
27 machine.
28
29 Parameters
30 ----------
31 n_workers: int
32 Number of workers to start
33 processes: bool
34 Whether to use processes (True) or threads (False). Defaults to True
35 threads_per_worker: int
36 Number of threads per each worker
37 scheduler_port: int
38 Port of the scheduler. 8786 by default, use 0 to choose a random port
39 silence_logs: logging level
40 Level of logs to print out to stdout. ``logging.CRITICAL`` by default.
41 Use a falsey value like False or None for no change.
42 ip: string
43 IP address on which the scheduler will listen, defaults to only localhost
44 kwargs: dict
45 Extra worker arguments, will be passed to the Worker constructor.
46
47 Examples
48 --------
49 >>> c = LocalCluster() # Create a local cluster with as many workers as cores # doctest: +SKIP
50 >>> c # doctest: +SKIP
51 LocalCluster("127.0.0.1:8786", workers=8, ncores=8)
52
53 >>> c = Client(c) # connect to local cluster # doctest: +SKIP
54
55 Add a new worker to the cluster
56 >>> w = c.start_worker(ncores=2) # doctest: +SKIP
57
58 Shut down the extra worker
59 >>> c.remove_worker(w) # doctest: +SKIP
60 """
61 def __init__(self, n_workers=None, threads_per_worker=None, processes=True,
62 loop=None, start=True, ip=None, scheduler_port=0,
63 silence_logs=logging.CRITICAL, diagnostics_port=8787,
64 services={}, worker_services={}, nanny=None, **worker_kwargs):
65 if nanny is not None:
66 warnings.warning("nanny has been deprecated, used processes=")
67 processes = nanny
68 self.status = None
69 self.processes = processes
70 self.silence_logs = silence_logs
71 if silence_logs:
72 silence_logging(level=silence_logs)
73 if n_workers is None and threads_per_worker is None:
74 if processes:
75 n_workers = _ncores
76 threads_per_worker = 1
77 else:
78 n_workers = 1
79 threads_per_worker = _ncores
80 if n_workers is None and threads_per_worker is not None:
81 n_workers = max(1, _ncores // threads_per_worker)
82 if n_workers and threads_per_worker is None:
83 # Overcommit threads per worker, rather than undercommit
84 threads_per_worker = max(1, int(math.ceil(_ncores / n_workers)))
85
86 self.loop = loop or IOLoop()
87 if start and not self.loop._running:
88 self._thread = Thread(target=self.loop.start,
89 name="LocalCluster loop")
90 self._thread.daemon = True
91 self._thread.start()
92 while not self.loop._running:
93 sleep(0.001)
94
95 if diagnostics_port is not None:
96 try:
97 from distributed.bokeh.scheduler import BokehScheduler
98 from distributed.bokeh.worker import BokehWorker
99 except ImportError:
100 logger.debug("To start diagnostics web server please install Bokeh")
101 else:
102 services[('bokeh', diagnostics_port)] = BokehScheduler
103 worker_services[('bokeh', 0)] = BokehWorker
104
105 self.scheduler = Scheduler(loop=self.loop,
106 services=services)
107 self.scheduler_port = scheduler_port
108
109 self.workers = []
110 self.n_workers = n_workers
111 self.threads_per_worker = threads_per_worker
112 self.worker_services = worker_services
113 self.worker_kwargs = worker_kwargs
114
115 if start:
116 sync(self.loop, self._start, ip)
117
118 clusters_to_close.add(self)
119
120 def __str__(self):
121 return ('LocalCluster(%r, workers=%d, ncores=%d)' %
122 (self.scheduler_address, len(self.workers),
123 sum(w.ncores for w in self.workers))
124 )
125
126 __repr__ = __str__
127
128 @gen.coroutine
129 def _start(self, ip=None):
130 """
131 Start all cluster services.
132 Wait on this if you passed `start=False` to the LocalCluster
133 constructor.
134 """
135 if self.status == 'running':
136 return
137 if ip is None and not self.scheduler_port and not self.processes:
138 # Use inproc transport for optimization
139 scheduler_address = 'inproc://'
140 else:
141 if ip is None:
142 ip = '127.0.0.1'
143 scheduler_address = (ip, self.scheduler_port)
144 self.scheduler.start(scheduler_address)
145
146 yield self._start_all_workers(
147 self.n_workers, ncores=self.threads_per_worker,
148 services=self.worker_services, **self.worker_kwargs)
149
150 self.status = 'running'
151
152 @gen.coroutine
153 def _start_all_workers(self, n_workers, **kwargs):
154 yield [self._start_worker(**kwargs) for i in range(n_workers)]
155
156 @gen.coroutine
157 def _start_worker(self, port=0, processes=None, death_timeout=60, **kwargs):
158 if processes is not None:
159 raise ValueError("overriding `processes` for individual workers "
160 "in a LocalCluster is not supported anymore")
161 if port:
162 raise ValueError("overriding `port` for individual workers "
163 "in a LocalCluster is not supported anymore")
164 if self.processes:
165 W = Nanny
166 kwargs['quiet'] = True
167 else:
168 W = Worker
169
170 w = W(self.scheduler.address, loop=self.loop,
171 death_timeout=death_timeout,
172 silence_logs=self.silence_logs, **kwargs)
173 yield w._start()
174
175 self.workers.append(w)
176
177 while w.worker_address not in self.scheduler.worker_info:
178 yield gen.sleep(0.01)
179
180 raise gen.Return(w)
181
182 def start_worker(self, ncores=0, **kwargs):
183 """ Add a new worker to the running cluster
184
185 Parameters
186 ----------
187 port: int (optional)
188 Port on which to serve the worker, defaults to 0 or random
189 ncores: int (optional)
190 Number of threads to use. Defaults to number of logical cores
191
192 Examples
193 --------
194 >>> c = LocalCluster() # doctest: +SKIP
195 >>> c.start_worker(ncores=2) # doctest: +SKIP
196
197 Returns
198 -------
199 The created Worker or Nanny object. Can be discarded.
200 """
201 return sync(self.loop, self._start_worker, ncores=ncores, **kwargs)
202
203 @gen.coroutine
204 def _stop_worker(self, w):
205 yield w._close()
206 if w in self.workers:
207 self.workers.remove(w)
208
209 def stop_worker(self, w):
210 """ Stop a running worker
211
212 Examples
213 --------
214 >>> c = LocalCluster() # doctest: +SKIP
215 >>> w = c.start_worker(ncores=2) # doctest: +SKIP
216 >>> c.stop_worker(w) # doctest: +SKIP
217 """
218 sync(self.loop, self._stop_worker, w)
219
220 @gen.coroutine
221 def _close(self):
222 if self.status == 'closed':
223 return
224
225 with ignoring(gen.TimeoutError, CommClosedError, OSError):
226 yield All([w._close() for w in self.workers])
227 with ignoring(gen.TimeoutError, CommClosedError, OSError):
228 yield self.scheduler.close(fast=True)
229 del self.workers[:]
230 self.status = 'closed'
231
232 def close(self):
233 """ Close the cluster """
234 if self.status == 'closed':
235 return
236
237 for w in self.workers:
238 self.loop.add_callback(self._stop_worker, w)
239 for i in range(10):
240 if not self.workers:
241 break
242 else:
243 sleep(0.01)
244 if self.loop._running:
245 sync(self.loop, self._close)
246 if hasattr(self, '_thread'):
247 if self.loop._running:
248 self.loop.add_callback(self.loop.stop)
249 try:
250 self._thread.join(timeout=1)
251 finally:
252 self.loop.close()
253 del self._thread
254
255 @gen.coroutine
256 def scale_up(self, n, **kwargs):
257 """ Bring the total count of workers up to ``n``
258
259 This function/coroutine should bring the total number of workers up to
260 the number ``n``.
261
262 This can be implemented either as a function or as a Tornado coroutine.
263 """
264 yield [self._start_worker(**kwargs)
265 for i in range(n - len(self.workers))]
266
267 @gen.coroutine
268 def scale_down(self, workers):
269 """ Remove ``workers`` from the cluster
270
271 Given a list of worker addresses this function should remove those
272 workers from the cluster. This may require tracking which jobs are
273 associated to which worker address.
274
275 This can be implemented either as a function or as a Tornado coroutine.
276 """
277 workers = set(workers)
278 yield [self._stop_worker(w)
279 for w in self.workers
280 if w.worker_address in workers]
281 while workers & set(self.workers):
282 yield gen.sleep(0.01)
283
284 def __del__(self):
285 self.close()
286
287 def __enter__(self):
288 return self
289
290 def __exit__(self, *args):
291 self.close()
292
293 @property
294 def scheduler_address(self):
295 try:
296 return self.scheduler.address
297 except ValueError:
298 return '<unstarted>'
299
300
301 clusters_to_close = weakref.WeakSet()
302
303
304 @atexit.register
305 def close_clusters():
306 for cluster in clusters_to_close:
307 cluster.close()
308
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/distributed/deploy/local.py b/distributed/deploy/local.py
--- a/distributed/deploy/local.py
+++ b/distributed/deploy/local.py
@@ -174,9 +174,13 @@
self.workers.append(w)
- while w.worker_address not in self.scheduler.worker_info:
+ while w.status != 'closed' and w.worker_address not in self.scheduler.worker_info:
yield gen.sleep(0.01)
+ if w.status == 'closed':
+ self.workers.remove(w)
+ raise gen.TimeoutError("Worker failed to start")
+
raise gen.Return(w)
def start_worker(self, ncores=0, **kwargs):
@@ -249,7 +253,10 @@
try:
self._thread.join(timeout=1)
finally:
- self.loop.close()
+ try:
+ self.loop.close()
+ except ValueError:
+ pass
del self._thread
@gen.coroutine
|
{"golden_diff": "diff --git a/distributed/deploy/local.py b/distributed/deploy/local.py\n--- a/distributed/deploy/local.py\n+++ b/distributed/deploy/local.py\n@@ -174,9 +174,13 @@\n \n self.workers.append(w)\n \n- while w.worker_address not in self.scheduler.worker_info:\n+ while w.status != 'closed' and w.worker_address not in self.scheduler.worker_info:\n yield gen.sleep(0.01)\n \n+ if w.status == 'closed':\n+ self.workers.remove(w)\n+ raise gen.TimeoutError(\"Worker failed to start\")\n+\n raise gen.Return(w)\n \n def start_worker(self, ncores=0, **kwargs):\n@@ -249,7 +253,10 @@\n try:\n self._thread.join(timeout=1)\n finally:\n- self.loop.close()\n+ try:\n+ self.loop.close()\n+ except ValueError:\n+ pass\n del self._thread\n \n @gen.coroutine\n", "issue": "ValueError / hang on LocalCluster startup\nI'm loading my conda environment off NAS storage, and when usage is heavy, there can be long delays in loading things, which I believe can exceed the default timeout (five seconds) here:\r\n\r\nhttps://github.com/dask/distributed/blob/master/distributed/client.py#L428\r\n\r\nAs a result I see this on a call to `LocalCluster()`:\r\n\r\n\tTraceback (most recent call last): \r\n\t File \"lib/python3.5/site-packages/tornado/gen.py\", line 910, in error_callback \r\n\t\tfuture.result() \r\n\t File \"lib/python3.5/site-packages/tornado/concurrent.py\", line 238, in result \r\n\t\traise_exc_info(self._exc_info) \r\n\t File \"<string>\", line 4, in raise_exc_info \r\n\t File \"lib/python3.5/site-packages/tornado/gen.py\", line 1063, in run \r\n\t\tyielded = self.gen.throw(*exc_info) \r\n\t File \"lib/python3.5/site-packages/distributed/nanny.py\", line 300, in start\r\n\t\tyield self._wait_until_running() \r\n\t File \"lib/python3.5/site-packages/tornado/gen.py\", line 1055, in run \r\n\t\tvalue = future.result() \r\n\t File \"lib/python3.5/site-packages/tornado/concurrent.py\", line 238, in result\r\n\t\traise_exc_info(self._exc_info) \r\n\t File \"<string>\", line 4, in raise_exc_info \r\n\t File \"lib/python3.5/site-packages/tornado/gen.py\", line 1069, in run\r\n\t\tyielded = self.gen.send(value) \r\n\t File \"lib/python3.5/site-packages/distributed/nanny.py\", line 386, in _wait_until_running\r\n\t\traise ValueError(\"Worker not started\") \r\n\tValueError: Worker not started \r\n\r\nThis `ValueError` does not seem to be passed up to my application; my application seems frozen waiting for the cluster start up (not 100% sure where) - ie, it never returns from the `LocalCluster()` call. Although the multiprocessing pool actually does start up, in this case too late to matter.\r\n\r\nHow can I catch / retry this situation?\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nimport atexit\nimport logging\nimport math\nfrom threading import Thread\nfrom time import sleep\nimport warnings\nimport weakref\n\nfrom tornado import gen\nfrom tornado.ioloop import IOLoop\n\nfrom ..core import CommClosedError\nfrom ..utils import sync, ignoring, All, silence_logging\nfrom ..nanny import Nanny\nfrom ..scheduler import Scheduler\nfrom ..worker import Worker, _ncores\n\nlogger = logging.getLogger(__name__)\n\n\nclass LocalCluster(object):\n \"\"\" Create local Scheduler and Workers\n\n This creates a \"cluster\" of a scheduler and workers running on the local\n machine.\n\n Parameters\n ----------\n n_workers: int\n Number of workers to start\n processes: bool\n Whether to use processes (True) or threads (False). Defaults to True\n threads_per_worker: int\n Number of threads per each worker\n scheduler_port: int\n Port of the scheduler. 8786 by default, use 0 to choose a random port\n silence_logs: logging level\n Level of logs to print out to stdout. ``logging.CRITICAL`` by default.\n Use a falsey value like False or None for no change.\n ip: string\n IP address on which the scheduler will listen, defaults to only localhost\n kwargs: dict\n Extra worker arguments, will be passed to the Worker constructor.\n\n Examples\n --------\n >>> c = LocalCluster() # Create a local cluster with as many workers as cores # doctest: +SKIP\n >>> c # doctest: +SKIP\n LocalCluster(\"127.0.0.1:8786\", workers=8, ncores=8)\n\n >>> c = Client(c) # connect to local cluster # doctest: +SKIP\n\n Add a new worker to the cluster\n >>> w = c.start_worker(ncores=2) # doctest: +SKIP\n\n Shut down the extra worker\n >>> c.remove_worker(w) # doctest: +SKIP\n \"\"\"\n def __init__(self, n_workers=None, threads_per_worker=None, processes=True,\n loop=None, start=True, ip=None, scheduler_port=0,\n silence_logs=logging.CRITICAL, diagnostics_port=8787,\n services={}, worker_services={}, nanny=None, **worker_kwargs):\n if nanny is not None:\n warnings.warning(\"nanny has been deprecated, used processes=\")\n processes = nanny\n self.status = None\n self.processes = processes\n self.silence_logs = silence_logs\n if silence_logs:\n silence_logging(level=silence_logs)\n if n_workers is None and threads_per_worker is None:\n if processes:\n n_workers = _ncores\n threads_per_worker = 1\n else:\n n_workers = 1\n threads_per_worker = _ncores\n if n_workers is None and threads_per_worker is not None:\n n_workers = max(1, _ncores // threads_per_worker)\n if n_workers and threads_per_worker is None:\n # Overcommit threads per worker, rather than undercommit\n threads_per_worker = max(1, int(math.ceil(_ncores / n_workers)))\n\n self.loop = loop or IOLoop()\n if start and not self.loop._running:\n self._thread = Thread(target=self.loop.start,\n name=\"LocalCluster loop\")\n self._thread.daemon = True\n self._thread.start()\n while not self.loop._running:\n sleep(0.001)\n\n if diagnostics_port is not None:\n try:\n from distributed.bokeh.scheduler import BokehScheduler\n from distributed.bokeh.worker import BokehWorker\n except ImportError:\n logger.debug(\"To start diagnostics web server please install Bokeh\")\n else:\n services[('bokeh', diagnostics_port)] = BokehScheduler\n worker_services[('bokeh', 0)] = BokehWorker\n\n self.scheduler = Scheduler(loop=self.loop,\n services=services)\n self.scheduler_port = scheduler_port\n\n self.workers = []\n self.n_workers = n_workers\n self.threads_per_worker = threads_per_worker\n self.worker_services = worker_services\n self.worker_kwargs = worker_kwargs\n\n if start:\n sync(self.loop, self._start, ip)\n\n clusters_to_close.add(self)\n\n def __str__(self):\n return ('LocalCluster(%r, workers=%d, ncores=%d)' %\n (self.scheduler_address, len(self.workers),\n sum(w.ncores for w in self.workers))\n )\n\n __repr__ = __str__\n\n @gen.coroutine\n def _start(self, ip=None):\n \"\"\"\n Start all cluster services.\n Wait on this if you passed `start=False` to the LocalCluster\n constructor.\n \"\"\"\n if self.status == 'running':\n return\n if ip is None and not self.scheduler_port and not self.processes:\n # Use inproc transport for optimization\n scheduler_address = 'inproc://'\n else:\n if ip is None:\n ip = '127.0.0.1'\n scheduler_address = (ip, self.scheduler_port)\n self.scheduler.start(scheduler_address)\n\n yield self._start_all_workers(\n self.n_workers, ncores=self.threads_per_worker,\n services=self.worker_services, **self.worker_kwargs)\n\n self.status = 'running'\n\n @gen.coroutine\n def _start_all_workers(self, n_workers, **kwargs):\n yield [self._start_worker(**kwargs) for i in range(n_workers)]\n\n @gen.coroutine\n def _start_worker(self, port=0, processes=None, death_timeout=60, **kwargs):\n if processes is not None:\n raise ValueError(\"overriding `processes` for individual workers \"\n \"in a LocalCluster is not supported anymore\")\n if port:\n raise ValueError(\"overriding `port` for individual workers \"\n \"in a LocalCluster is not supported anymore\")\n if self.processes:\n W = Nanny\n kwargs['quiet'] = True\n else:\n W = Worker\n\n w = W(self.scheduler.address, loop=self.loop,\n death_timeout=death_timeout,\n silence_logs=self.silence_logs, **kwargs)\n yield w._start()\n\n self.workers.append(w)\n\n while w.worker_address not in self.scheduler.worker_info:\n yield gen.sleep(0.01)\n\n raise gen.Return(w)\n\n def start_worker(self, ncores=0, **kwargs):\n \"\"\" Add a new worker to the running cluster\n\n Parameters\n ----------\n port: int (optional)\n Port on which to serve the worker, defaults to 0 or random\n ncores: int (optional)\n Number of threads to use. Defaults to number of logical cores\n\n Examples\n --------\n >>> c = LocalCluster() # doctest: +SKIP\n >>> c.start_worker(ncores=2) # doctest: +SKIP\n\n Returns\n -------\n The created Worker or Nanny object. Can be discarded.\n \"\"\"\n return sync(self.loop, self._start_worker, ncores=ncores, **kwargs)\n\n @gen.coroutine\n def _stop_worker(self, w):\n yield w._close()\n if w in self.workers:\n self.workers.remove(w)\n\n def stop_worker(self, w):\n \"\"\" Stop a running worker\n\n Examples\n --------\n >>> c = LocalCluster() # doctest: +SKIP\n >>> w = c.start_worker(ncores=2) # doctest: +SKIP\n >>> c.stop_worker(w) # doctest: +SKIP\n \"\"\"\n sync(self.loop, self._stop_worker, w)\n\n @gen.coroutine\n def _close(self):\n if self.status == 'closed':\n return\n\n with ignoring(gen.TimeoutError, CommClosedError, OSError):\n yield All([w._close() for w in self.workers])\n with ignoring(gen.TimeoutError, CommClosedError, OSError):\n yield self.scheduler.close(fast=True)\n del self.workers[:]\n self.status = 'closed'\n\n def close(self):\n \"\"\" Close the cluster \"\"\"\n if self.status == 'closed':\n return\n\n for w in self.workers:\n self.loop.add_callback(self._stop_worker, w)\n for i in range(10):\n if not self.workers:\n break\n else:\n sleep(0.01)\n if self.loop._running:\n sync(self.loop, self._close)\n if hasattr(self, '_thread'):\n if self.loop._running:\n self.loop.add_callback(self.loop.stop)\n try:\n self._thread.join(timeout=1)\n finally:\n self.loop.close()\n del self._thread\n\n @gen.coroutine\n def scale_up(self, n, **kwargs):\n \"\"\" Bring the total count of workers up to ``n``\n\n This function/coroutine should bring the total number of workers up to\n the number ``n``.\n\n This can be implemented either as a function or as a Tornado coroutine.\n \"\"\"\n yield [self._start_worker(**kwargs)\n for i in range(n - len(self.workers))]\n\n @gen.coroutine\n def scale_down(self, workers):\n \"\"\" Remove ``workers`` from the cluster\n\n Given a list of worker addresses this function should remove those\n workers from the cluster. This may require tracking which jobs are\n associated to which worker address.\n\n This can be implemented either as a function or as a Tornado coroutine.\n \"\"\"\n workers = set(workers)\n yield [self._stop_worker(w)\n for w in self.workers\n if w.worker_address in workers]\n while workers & set(self.workers):\n yield gen.sleep(0.01)\n\n def __del__(self):\n self.close()\n\n def __enter__(self):\n return self\n\n def __exit__(self, *args):\n self.close()\n\n @property\n def scheduler_address(self):\n try:\n return self.scheduler.address\n except ValueError:\n return '<unstarted>'\n\n\nclusters_to_close = weakref.WeakSet()\n\n\[email protected]\ndef close_clusters():\n for cluster in clusters_to_close:\n cluster.close()\n", "path": "distributed/deploy/local.py"}], "after_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nimport atexit\nimport logging\nimport math\nfrom threading import Thread\nfrom time import sleep\nimport warnings\nimport weakref\n\nfrom tornado import gen\nfrom tornado.ioloop import IOLoop\n\nfrom ..core import CommClosedError\nfrom ..utils import sync, ignoring, All, silence_logging\nfrom ..nanny import Nanny\nfrom ..scheduler import Scheduler\nfrom ..worker import Worker, _ncores\n\nlogger = logging.getLogger(__name__)\n\n\nclass LocalCluster(object):\n \"\"\" Create local Scheduler and Workers\n\n This creates a \"cluster\" of a scheduler and workers running on the local\n machine.\n\n Parameters\n ----------\n n_workers: int\n Number of workers to start\n processes: bool\n Whether to use processes (True) or threads (False). Defaults to True\n threads_per_worker: int\n Number of threads per each worker\n scheduler_port: int\n Port of the scheduler. 8786 by default, use 0 to choose a random port\n silence_logs: logging level\n Level of logs to print out to stdout. ``logging.CRITICAL`` by default.\n Use a falsey value like False or None for no change.\n ip: string\n IP address on which the scheduler will listen, defaults to only localhost\n kwargs: dict\n Extra worker arguments, will be passed to the Worker constructor.\n\n Examples\n --------\n >>> c = LocalCluster() # Create a local cluster with as many workers as cores # doctest: +SKIP\n >>> c # doctest: +SKIP\n LocalCluster(\"127.0.0.1:8786\", workers=8, ncores=8)\n\n >>> c = Client(c) # connect to local cluster # doctest: +SKIP\n\n Add a new worker to the cluster\n >>> w = c.start_worker(ncores=2) # doctest: +SKIP\n\n Shut down the extra worker\n >>> c.remove_worker(w) # doctest: +SKIP\n \"\"\"\n def __init__(self, n_workers=None, threads_per_worker=None, processes=True,\n loop=None, start=True, ip=None, scheduler_port=0,\n silence_logs=logging.CRITICAL, diagnostics_port=8787,\n services={}, worker_services={}, nanny=None, **worker_kwargs):\n if nanny is not None:\n warnings.warning(\"nanny has been deprecated, used processes=\")\n processes = nanny\n self.status = None\n self.processes = processes\n self.silence_logs = silence_logs\n if silence_logs:\n silence_logging(level=silence_logs)\n if n_workers is None and threads_per_worker is None:\n if processes:\n n_workers = _ncores\n threads_per_worker = 1\n else:\n n_workers = 1\n threads_per_worker = _ncores\n if n_workers is None and threads_per_worker is not None:\n n_workers = max(1, _ncores // threads_per_worker)\n if n_workers and threads_per_worker is None:\n # Overcommit threads per worker, rather than undercommit\n threads_per_worker = max(1, int(math.ceil(_ncores / n_workers)))\n\n self.loop = loop or IOLoop()\n if start and not self.loop._running:\n self._thread = Thread(target=self.loop.start,\n name=\"LocalCluster loop\")\n self._thread.daemon = True\n self._thread.start()\n while not self.loop._running:\n sleep(0.001)\n\n if diagnostics_port is not None:\n try:\n from distributed.bokeh.scheduler import BokehScheduler\n from distributed.bokeh.worker import BokehWorker\n except ImportError:\n logger.debug(\"To start diagnostics web server please install Bokeh\")\n else:\n services[('bokeh', diagnostics_port)] = BokehScheduler\n worker_services[('bokeh', 0)] = BokehWorker\n\n self.scheduler = Scheduler(loop=self.loop,\n services=services)\n self.scheduler_port = scheduler_port\n\n self.workers = []\n self.n_workers = n_workers\n self.threads_per_worker = threads_per_worker\n self.worker_services = worker_services\n self.worker_kwargs = worker_kwargs\n\n if start:\n sync(self.loop, self._start, ip)\n\n clusters_to_close.add(self)\n\n def __str__(self):\n return ('LocalCluster(%r, workers=%d, ncores=%d)' %\n (self.scheduler_address, len(self.workers),\n sum(w.ncores for w in self.workers))\n )\n\n __repr__ = __str__\n\n @gen.coroutine\n def _start(self, ip=None):\n \"\"\"\n Start all cluster services.\n Wait on this if you passed `start=False` to the LocalCluster\n constructor.\n \"\"\"\n if self.status == 'running':\n return\n if ip is None and not self.scheduler_port and not self.processes:\n # Use inproc transport for optimization\n scheduler_address = 'inproc://'\n else:\n if ip is None:\n ip = '127.0.0.1'\n scheduler_address = (ip, self.scheduler_port)\n self.scheduler.start(scheduler_address)\n\n yield self._start_all_workers(\n self.n_workers, ncores=self.threads_per_worker,\n services=self.worker_services, **self.worker_kwargs)\n\n self.status = 'running'\n\n @gen.coroutine\n def _start_all_workers(self, n_workers, **kwargs):\n yield [self._start_worker(**kwargs) for i in range(n_workers)]\n\n @gen.coroutine\n def _start_worker(self, port=0, processes=None, death_timeout=60, **kwargs):\n if processes is not None:\n raise ValueError(\"overriding `processes` for individual workers \"\n \"in a LocalCluster is not supported anymore\")\n if port:\n raise ValueError(\"overriding `port` for individual workers \"\n \"in a LocalCluster is not supported anymore\")\n if self.processes:\n W = Nanny\n kwargs['quiet'] = True\n else:\n W = Worker\n\n w = W(self.scheduler.address, loop=self.loop,\n death_timeout=death_timeout,\n silence_logs=self.silence_logs, **kwargs)\n yield w._start()\n\n self.workers.append(w)\n\n while w.status != 'closed' and w.worker_address not in self.scheduler.worker_info:\n yield gen.sleep(0.01)\n\n if w.status == 'closed':\n self.workers.remove(w)\n raise gen.TimeoutError(\"Worker failed to start\")\n\n raise gen.Return(w)\n\n def start_worker(self, ncores=0, **kwargs):\n \"\"\" Add a new worker to the running cluster\n\n Parameters\n ----------\n port: int (optional)\n Port on which to serve the worker, defaults to 0 or random\n ncores: int (optional)\n Number of threads to use. Defaults to number of logical cores\n\n Examples\n --------\n >>> c = LocalCluster() # doctest: +SKIP\n >>> c.start_worker(ncores=2) # doctest: +SKIP\n\n Returns\n -------\n The created Worker or Nanny object. Can be discarded.\n \"\"\"\n return sync(self.loop, self._start_worker, ncores=ncores, **kwargs)\n\n @gen.coroutine\n def _stop_worker(self, w):\n yield w._close()\n if w in self.workers:\n self.workers.remove(w)\n\n def stop_worker(self, w):\n \"\"\" Stop a running worker\n\n Examples\n --------\n >>> c = LocalCluster() # doctest: +SKIP\n >>> w = c.start_worker(ncores=2) # doctest: +SKIP\n >>> c.stop_worker(w) # doctest: +SKIP\n \"\"\"\n sync(self.loop, self._stop_worker, w)\n\n @gen.coroutine\n def _close(self):\n if self.status == 'closed':\n return\n\n with ignoring(gen.TimeoutError, CommClosedError, OSError):\n yield All([w._close() for w in self.workers])\n with ignoring(gen.TimeoutError, CommClosedError, OSError):\n yield self.scheduler.close(fast=True)\n del self.workers[:]\n self.status = 'closed'\n\n def close(self):\n \"\"\" Close the cluster \"\"\"\n if self.status == 'closed':\n return\n\n for w in self.workers:\n self.loop.add_callback(self._stop_worker, w)\n for i in range(10):\n if not self.workers:\n break\n else:\n sleep(0.01)\n if self.loop._running:\n sync(self.loop, self._close)\n if hasattr(self, '_thread'):\n if self.loop._running:\n self.loop.add_callback(self.loop.stop)\n try:\n self._thread.join(timeout=1)\n finally:\n try:\n self.loop.close()\n except ValueError:\n pass\n del self._thread\n\n @gen.coroutine\n def scale_up(self, n, **kwargs):\n \"\"\" Bring the total count of workers up to ``n``\n\n This function/coroutine should bring the total number of workers up to\n the number ``n``.\n\n This can be implemented either as a function or as a Tornado coroutine.\n \"\"\"\n yield [self._start_worker(**kwargs)\n for i in range(n - len(self.workers))]\n\n @gen.coroutine\n def scale_down(self, workers):\n \"\"\" Remove ``workers`` from the cluster\n\n Given a list of worker addresses this function should remove those\n workers from the cluster. This may require tracking which jobs are\n associated to which worker address.\n\n This can be implemented either as a function or as a Tornado coroutine.\n \"\"\"\n workers = set(workers)\n yield [self._stop_worker(w)\n for w in self.workers\n if w.worker_address in workers]\n while workers & set(self.workers):\n yield gen.sleep(0.01)\n\n def __del__(self):\n self.close()\n\n def __enter__(self):\n return self\n\n def __exit__(self, *args):\n self.close()\n\n @property\n def scheduler_address(self):\n try:\n return self.scheduler.address\n except ValueError:\n return '<unstarted>'\n\n\nclusters_to_close = weakref.WeakSet()\n\n\[email protected]\ndef close_clusters():\n for cluster in clusters_to_close:\n cluster.close()\n", "path": "distributed/deploy/local.py"}]}
| 3,843 | 222 |
gh_patches_debug_28131
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-530
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for field deprecation
I was looking in the code and I couldn't find a way to mark field as deprecated. I also didn't find any issues regarding this.
I know that it is handled by `deprecation_reason` in graphql's `GraphQLField` but how I mark a field as deprecated in strawberry-graphql?
<!-- POLAR PLEDGE BADGE START -->
## Upvote & Fund
- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.
- We receive the funding once the issue is completed & confirmed by you.
- Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/strawberry-graphql/strawberry/issues/375">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/375/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/375/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/field.py`
Content:
```
1 import dataclasses
2 import inspect
3 from typing import Callable, List, Optional, Type
4
5 from .arguments import get_arguments_from_resolver
6 from .permission import BasePermission
7 from .types.types import FederationFieldParams, FieldDefinition
8 from .utils.str_converters import to_camel_case
9
10
11 class StrawberryField(dataclasses.Field):
12 _field_definition: FieldDefinition
13
14 def __init__(self, field_definition: FieldDefinition):
15 self._field_definition = field_definition
16
17 super().__init__( # type: ignore
18 default=dataclasses.MISSING,
19 default_factory=dataclasses.MISSING,
20 init=field_definition.base_resolver is None,
21 repr=True,
22 hash=None,
23 compare=True,
24 metadata=None,
25 )
26
27 def __call__(self, resolver: Callable) -> Callable:
28 """Migrate the field definition to the resolver"""
29
30 field_definition = self._field_definition
31 # note that field_definition.name is finalized in type_resolver._get_fields
32
33 field_definition.origin_name = resolver.__name__
34 field_definition.origin = resolver
35 field_definition.base_resolver = resolver
36 field_definition.arguments = get_arguments_from_resolver(resolver)
37 field_definition.type = resolver.__annotations__.get("return", None)
38
39 if not inspect.ismethod(resolver):
40 # resolver is a normal function
41 resolver._field_definition = field_definition # type: ignore
42 else:
43 # resolver is a bound method and immutable (most likely a
44 # classmethod or an instance method). We need to monkeypatch its
45 # underlying .__func__ function
46 # https://stackoverflow.com/a/7891681/8134178
47 resolver.__func__._field_definition = field_definition # type:ignore
48
49 return resolver
50
51 def __setattr__(self, name, value):
52 if name == "type":
53 self._field_definition.type = value
54
55 if value and name == "name":
56 if not self._field_definition.origin_name:
57 self._field_definition.origin_name = value
58
59 if not self._field_definition.name:
60 self._field_definition.name = to_camel_case(value)
61
62 return super().__setattr__(name, value)
63
64
65 def field(
66 resolver: Optional[Callable] = None,
67 *,
68 name: Optional[str] = None,
69 is_subscription: bool = False,
70 description: Optional[str] = None,
71 permission_classes: Optional[List[Type[BasePermission]]] = None,
72 federation: Optional[FederationFieldParams] = None
73 ):
74 """Annotates a method or property as a GraphQL field.
75
76 This is normally used inside a type declaration:
77
78 >>> @strawberry.type:
79 >>> class X:
80 >>> field_abc: str = strawberry.field(description="ABC")
81
82 >>> @strawberry.field(description="ABC")
83 >>> def field_with_resolver(self, info) -> str:
84 >>> return "abc"
85
86 it can be used both as decorator and as a normal function.
87 """
88
89 field_definition = FieldDefinition(
90 origin_name=None, # modified by resolver in __call__
91 name=name, # modified by resolver in __call__
92 type=None, # type: ignore
93 origin=resolver, # type: ignore
94 description=description,
95 base_resolver=resolver,
96 is_subscription=is_subscription,
97 permission_classes=permission_classes or [],
98 arguments=[], # modified by resolver in __call__
99 federation=federation or FederationFieldParams(),
100 )
101
102 field_ = StrawberryField(field_definition)
103
104 if resolver:
105 return field_(resolver)
106 return field_
107
```
Path: `strawberry/types/types.py`
Content:
```
1 import dataclasses
2 from typing import Any, Callable, Dict, List, Optional, Type, Union
3
4 from strawberry.permission import BasePermission
5 from strawberry.union import StrawberryUnion
6
7
8 undefined = object()
9
10
11 @dataclasses.dataclass
12 class FederationTypeParams:
13 keys: List[str] = dataclasses.field(default_factory=list)
14 extend: bool = False
15
16
17 @dataclasses.dataclass
18 class TypeDefinition:
19 name: str
20 is_input: bool
21 is_interface: bool
22 is_generic: bool
23 origin: Type
24 description: Optional[str]
25 federation: FederationTypeParams
26 interfaces: List["TypeDefinition"]
27
28 _fields: List["FieldDefinition"]
29 _type_params: Dict[str, Type] = dataclasses.field(default_factory=dict, init=False)
30
31 def get_field(self, name: str) -> Optional["FieldDefinition"]:
32 return next((field for field in self.fields if field.name == name), None)
33
34 @property
35 def fields(self) -> List["FieldDefinition"]:
36 from .type_resolver import _resolve_types
37
38 return _resolve_types(self._fields)
39
40 @property
41 def type_params(self) -> Dict[str, Type]:
42 if not self._type_params:
43 from .type_resolver import _get_type_params
44
45 self._type_params = _get_type_params(self.fields)
46
47 return self._type_params
48
49
50 @dataclasses.dataclass
51 class ArgumentDefinition:
52 name: Optional[str] = None
53 origin_name: Optional[str] = None
54 type: Optional[Type] = None
55 origin: Optional[Type] = None
56 child: Optional["ArgumentDefinition"] = None
57 is_subscription: bool = False
58 is_optional: bool = False
59 is_child_optional: bool = False
60 is_list: bool = False
61 is_union: bool = False
62 description: Optional[str] = None
63 default_value: Any = undefined
64
65
66 @dataclasses.dataclass
67 class FederationFieldParams:
68 provides: List[str] = dataclasses.field(default_factory=list)
69 requires: List[str] = dataclasses.field(default_factory=list)
70 external: bool = False
71
72
73 @dataclasses.dataclass
74 class FieldDefinition:
75 name: Optional[str]
76 origin_name: Optional[str]
77 type: Optional[Union[Type, StrawberryUnion]]
78 origin: Union[Type, Callable]
79 child: Optional["FieldDefinition"] = None
80 is_subscription: bool = False
81 is_optional: bool = False
82 is_child_optional: bool = False
83 is_list: bool = False
84 is_union: bool = False
85 federation: FederationFieldParams = dataclasses.field(
86 default_factory=FederationFieldParams
87 )
88 arguments: List[ArgumentDefinition] = dataclasses.field(default_factory=list)
89 description: Optional[str] = None
90 base_resolver: Optional[Callable] = None
91 permission_classes: List[Type[BasePermission]] = dataclasses.field(
92 default_factory=list
93 )
94 default_value: Any = undefined
95
```
Path: `strawberry/schema/types/fields.py`
Content:
```
1 import typing
2
3 from graphql import GraphQLField, GraphQLInputField
4
5 from strawberry.arguments import UNSET
6 from strawberry.field import FieldDefinition
7 from strawberry.resolvers import get_resolver
8 from strawberry.types.types import undefined
9
10 from .arguments import convert_arguments
11 from .type import get_graphql_type
12 from .types import Field, TypeMap
13
14
15 def get_field(
16 field: FieldDefinition,
17 is_input: bool,
18 type_map: TypeMap,
19 ) -> Field:
20 graphql_type = get_graphql_type(field, type_map)
21
22 TypeClass: typing.Union[
23 typing.Type[GraphQLInputField], typing.Type[GraphQLField]
24 ] = GraphQLField
25
26 kwargs: typing.Dict[str, typing.Any] = {
27 "description": field.description,
28 }
29
30 resolver = get_resolver(field)
31
32 if is_input:
33 TypeClass = GraphQLInputField
34 if field.default_value not in (undefined, UNSET):
35 kwargs["default_value"] = field.default_value
36 elif field.is_subscription:
37 kwargs["args"] = convert_arguments(field.arguments, type_map)
38 kwargs["subscribe"] = resolver
39 kwargs["resolve"] = lambda event, *args, **kwargs: event
40 else:
41 kwargs["args"] = convert_arguments(field.arguments, type_map)
42 kwargs["resolve"] = resolver
43
44 return TypeClass(graphql_type, **kwargs) # type: ignore
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/field.py b/strawberry/field.py
--- a/strawberry/field.py
+++ b/strawberry/field.py
@@ -69,7 +69,8 @@
is_subscription: bool = False,
description: Optional[str] = None,
permission_classes: Optional[List[Type[BasePermission]]] = None,
- federation: Optional[FederationFieldParams] = None
+ federation: Optional[FederationFieldParams] = None,
+ deprecation_reason: Optional[str] = None,
):
"""Annotates a method or property as a GraphQL field.
@@ -97,6 +98,7 @@
permission_classes=permission_classes or [],
arguments=[], # modified by resolver in __call__
federation=federation or FederationFieldParams(),
+ deprecation_reason=deprecation_reason,
)
field_ = StrawberryField(field_definition)
diff --git a/strawberry/schema/types/fields.py b/strawberry/schema/types/fields.py
--- a/strawberry/schema/types/fields.py
+++ b/strawberry/schema/types/fields.py
@@ -41,4 +41,7 @@
kwargs["args"] = convert_arguments(field.arguments, type_map)
kwargs["resolve"] = resolver
+ if not is_input:
+ kwargs["deprecation_reason"] = field.deprecation_reason
+
return TypeClass(graphql_type, **kwargs) # type: ignore
diff --git a/strawberry/types/types.py b/strawberry/types/types.py
--- a/strawberry/types/types.py
+++ b/strawberry/types/types.py
@@ -92,3 +92,4 @@
default_factory=list
)
default_value: Any = undefined
+ deprecation_reason: Optional[str] = None
|
{"golden_diff": "diff --git a/strawberry/field.py b/strawberry/field.py\n--- a/strawberry/field.py\n+++ b/strawberry/field.py\n@@ -69,7 +69,8 @@\n is_subscription: bool = False,\n description: Optional[str] = None,\n permission_classes: Optional[List[Type[BasePermission]]] = None,\n- federation: Optional[FederationFieldParams] = None\n+ federation: Optional[FederationFieldParams] = None,\n+ deprecation_reason: Optional[str] = None,\n ):\n \"\"\"Annotates a method or property as a GraphQL field.\n \n@@ -97,6 +98,7 @@\n permission_classes=permission_classes or [],\n arguments=[], # modified by resolver in __call__\n federation=federation or FederationFieldParams(),\n+ deprecation_reason=deprecation_reason,\n )\n \n field_ = StrawberryField(field_definition)\ndiff --git a/strawberry/schema/types/fields.py b/strawberry/schema/types/fields.py\n--- a/strawberry/schema/types/fields.py\n+++ b/strawberry/schema/types/fields.py\n@@ -41,4 +41,7 @@\n kwargs[\"args\"] = convert_arguments(field.arguments, type_map)\n kwargs[\"resolve\"] = resolver\n \n+ if not is_input:\n+ kwargs[\"deprecation_reason\"] = field.deprecation_reason\n+\n return TypeClass(graphql_type, **kwargs) # type: ignore\ndiff --git a/strawberry/types/types.py b/strawberry/types/types.py\n--- a/strawberry/types/types.py\n+++ b/strawberry/types/types.py\n@@ -92,3 +92,4 @@\n default_factory=list\n )\n default_value: Any = undefined\n+ deprecation_reason: Optional[str] = None\n", "issue": "Add support for field deprecation\nI was looking in the code and I couldn't find a way to mark field as deprecated. I also didn't find any issues regarding this.\r\nI know that it is handled by `deprecation_reason` in graphql's `GraphQLField` but how I mark a field as deprecated in strawberry-graphql?\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/375\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/375/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/375/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "import dataclasses\nimport inspect\nfrom typing import Callable, List, Optional, Type\n\nfrom .arguments import get_arguments_from_resolver\nfrom .permission import BasePermission\nfrom .types.types import FederationFieldParams, FieldDefinition\nfrom .utils.str_converters import to_camel_case\n\n\nclass StrawberryField(dataclasses.Field):\n _field_definition: FieldDefinition\n\n def __init__(self, field_definition: FieldDefinition):\n self._field_definition = field_definition\n\n super().__init__( # type: ignore\n default=dataclasses.MISSING,\n default_factory=dataclasses.MISSING,\n init=field_definition.base_resolver is None,\n repr=True,\n hash=None,\n compare=True,\n metadata=None,\n )\n\n def __call__(self, resolver: Callable) -> Callable:\n \"\"\"Migrate the field definition to the resolver\"\"\"\n\n field_definition = self._field_definition\n # note that field_definition.name is finalized in type_resolver._get_fields\n\n field_definition.origin_name = resolver.__name__\n field_definition.origin = resolver\n field_definition.base_resolver = resolver\n field_definition.arguments = get_arguments_from_resolver(resolver)\n field_definition.type = resolver.__annotations__.get(\"return\", None)\n\n if not inspect.ismethod(resolver):\n # resolver is a normal function\n resolver._field_definition = field_definition # type: ignore\n else:\n # resolver is a bound method and immutable (most likely a\n # classmethod or an instance method). We need to monkeypatch its\n # underlying .__func__ function\n # https://stackoverflow.com/a/7891681/8134178\n resolver.__func__._field_definition = field_definition # type:ignore\n\n return resolver\n\n def __setattr__(self, name, value):\n if name == \"type\":\n self._field_definition.type = value\n\n if value and name == \"name\":\n if not self._field_definition.origin_name:\n self._field_definition.origin_name = value\n\n if not self._field_definition.name:\n self._field_definition.name = to_camel_case(value)\n\n return super().__setattr__(name, value)\n\n\ndef field(\n resolver: Optional[Callable] = None,\n *,\n name: Optional[str] = None,\n is_subscription: bool = False,\n description: Optional[str] = None,\n permission_classes: Optional[List[Type[BasePermission]]] = None,\n federation: Optional[FederationFieldParams] = None\n):\n \"\"\"Annotates a method or property as a GraphQL field.\n\n This is normally used inside a type declaration:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = strawberry.field(description=\"ABC\")\n\n >>> @strawberry.field(description=\"ABC\")\n >>> def field_with_resolver(self, info) -> str:\n >>> return \"abc\"\n\n it can be used both as decorator and as a normal function.\n \"\"\"\n\n field_definition = FieldDefinition(\n origin_name=None, # modified by resolver in __call__\n name=name, # modified by resolver in __call__\n type=None, # type: ignore\n origin=resolver, # type: ignore\n description=description,\n base_resolver=resolver,\n is_subscription=is_subscription,\n permission_classes=permission_classes or [],\n arguments=[], # modified by resolver in __call__\n federation=federation or FederationFieldParams(),\n )\n\n field_ = StrawberryField(field_definition)\n\n if resolver:\n return field_(resolver)\n return field_\n", "path": "strawberry/field.py"}, {"content": "import dataclasses\nfrom typing import Any, Callable, Dict, List, Optional, Type, Union\n\nfrom strawberry.permission import BasePermission\nfrom strawberry.union import StrawberryUnion\n\n\nundefined = object()\n\n\[email protected]\nclass FederationTypeParams:\n keys: List[str] = dataclasses.field(default_factory=list)\n extend: bool = False\n\n\[email protected]\nclass TypeDefinition:\n name: str\n is_input: bool\n is_interface: bool\n is_generic: bool\n origin: Type\n description: Optional[str]\n federation: FederationTypeParams\n interfaces: List[\"TypeDefinition\"]\n\n _fields: List[\"FieldDefinition\"]\n _type_params: Dict[str, Type] = dataclasses.field(default_factory=dict, init=False)\n\n def get_field(self, name: str) -> Optional[\"FieldDefinition\"]:\n return next((field for field in self.fields if field.name == name), None)\n\n @property\n def fields(self) -> List[\"FieldDefinition\"]:\n from .type_resolver import _resolve_types\n\n return _resolve_types(self._fields)\n\n @property\n def type_params(self) -> Dict[str, Type]:\n if not self._type_params:\n from .type_resolver import _get_type_params\n\n self._type_params = _get_type_params(self.fields)\n\n return self._type_params\n\n\[email protected]\nclass ArgumentDefinition:\n name: Optional[str] = None\n origin_name: Optional[str] = None\n type: Optional[Type] = None\n origin: Optional[Type] = None\n child: Optional[\"ArgumentDefinition\"] = None\n is_subscription: bool = False\n is_optional: bool = False\n is_child_optional: bool = False\n is_list: bool = False\n is_union: bool = False\n description: Optional[str] = None\n default_value: Any = undefined\n\n\[email protected]\nclass FederationFieldParams:\n provides: List[str] = dataclasses.field(default_factory=list)\n requires: List[str] = dataclasses.field(default_factory=list)\n external: bool = False\n\n\[email protected]\nclass FieldDefinition:\n name: Optional[str]\n origin_name: Optional[str]\n type: Optional[Union[Type, StrawberryUnion]]\n origin: Union[Type, Callable]\n child: Optional[\"FieldDefinition\"] = None\n is_subscription: bool = False\n is_optional: bool = False\n is_child_optional: bool = False\n is_list: bool = False\n is_union: bool = False\n federation: FederationFieldParams = dataclasses.field(\n default_factory=FederationFieldParams\n )\n arguments: List[ArgumentDefinition] = dataclasses.field(default_factory=list)\n description: Optional[str] = None\n base_resolver: Optional[Callable] = None\n permission_classes: List[Type[BasePermission]] = dataclasses.field(\n default_factory=list\n )\n default_value: Any = undefined\n", "path": "strawberry/types/types.py"}, {"content": "import typing\n\nfrom graphql import GraphQLField, GraphQLInputField\n\nfrom strawberry.arguments import UNSET\nfrom strawberry.field import FieldDefinition\nfrom strawberry.resolvers import get_resolver\nfrom strawberry.types.types import undefined\n\nfrom .arguments import convert_arguments\nfrom .type import get_graphql_type\nfrom .types import Field, TypeMap\n\n\ndef get_field(\n field: FieldDefinition,\n is_input: bool,\n type_map: TypeMap,\n) -> Field:\n graphql_type = get_graphql_type(field, type_map)\n\n TypeClass: typing.Union[\n typing.Type[GraphQLInputField], typing.Type[GraphQLField]\n ] = GraphQLField\n\n kwargs: typing.Dict[str, typing.Any] = {\n \"description\": field.description,\n }\n\n resolver = get_resolver(field)\n\n if is_input:\n TypeClass = GraphQLInputField\n if field.default_value not in (undefined, UNSET):\n kwargs[\"default_value\"] = field.default_value\n elif field.is_subscription:\n kwargs[\"args\"] = convert_arguments(field.arguments, type_map)\n kwargs[\"subscribe\"] = resolver\n kwargs[\"resolve\"] = lambda event, *args, **kwargs: event\n else:\n kwargs[\"args\"] = convert_arguments(field.arguments, type_map)\n kwargs[\"resolve\"] = resolver\n\n return TypeClass(graphql_type, **kwargs) # type: ignore\n", "path": "strawberry/schema/types/fields.py"}], "after_files": [{"content": "import dataclasses\nimport inspect\nfrom typing import Callable, List, Optional, Type\n\nfrom .arguments import get_arguments_from_resolver\nfrom .permission import BasePermission\nfrom .types.types import FederationFieldParams, FieldDefinition\nfrom .utils.str_converters import to_camel_case\n\n\nclass StrawberryField(dataclasses.Field):\n _field_definition: FieldDefinition\n\n def __init__(self, field_definition: FieldDefinition):\n self._field_definition = field_definition\n\n super().__init__( # type: ignore\n default=dataclasses.MISSING,\n default_factory=dataclasses.MISSING,\n init=field_definition.base_resolver is None,\n repr=True,\n hash=None,\n compare=True,\n metadata=None,\n )\n\n def __call__(self, resolver: Callable) -> Callable:\n \"\"\"Migrate the field definition to the resolver\"\"\"\n\n field_definition = self._field_definition\n # note that field_definition.name is finalized in type_resolver._get_fields\n\n field_definition.origin_name = resolver.__name__\n field_definition.origin = resolver\n field_definition.base_resolver = resolver\n field_definition.arguments = get_arguments_from_resolver(resolver)\n field_definition.type = resolver.__annotations__.get(\"return\", None)\n\n if not inspect.ismethod(resolver):\n # resolver is a normal function\n resolver._field_definition = field_definition # type: ignore\n else:\n # resolver is a bound method and immutable (most likely a\n # classmethod or an instance method). We need to monkeypatch its\n # underlying .__func__ function\n # https://stackoverflow.com/a/7891681/8134178\n resolver.__func__._field_definition = field_definition # type:ignore\n\n return resolver\n\n def __setattr__(self, name, value):\n if name == \"type\":\n self._field_definition.type = value\n\n if value and name == \"name\":\n if not self._field_definition.origin_name:\n self._field_definition.origin_name = value\n\n if not self._field_definition.name:\n self._field_definition.name = to_camel_case(value)\n\n return super().__setattr__(name, value)\n\n\ndef field(\n resolver: Optional[Callable] = None,\n *,\n name: Optional[str] = None,\n is_subscription: bool = False,\n description: Optional[str] = None,\n permission_classes: Optional[List[Type[BasePermission]]] = None,\n federation: Optional[FederationFieldParams] = None,\n deprecation_reason: Optional[str] = None,\n):\n \"\"\"Annotates a method or property as a GraphQL field.\n\n This is normally used inside a type declaration:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = strawberry.field(description=\"ABC\")\n\n >>> @strawberry.field(description=\"ABC\")\n >>> def field_with_resolver(self, info) -> str:\n >>> return \"abc\"\n\n it can be used both as decorator and as a normal function.\n \"\"\"\n\n field_definition = FieldDefinition(\n origin_name=None, # modified by resolver in __call__\n name=name, # modified by resolver in __call__\n type=None, # type: ignore\n origin=resolver, # type: ignore\n description=description,\n base_resolver=resolver,\n is_subscription=is_subscription,\n permission_classes=permission_classes or [],\n arguments=[], # modified by resolver in __call__\n federation=federation or FederationFieldParams(),\n deprecation_reason=deprecation_reason,\n )\n\n field_ = StrawberryField(field_definition)\n\n if resolver:\n return field_(resolver)\n return field_\n", "path": "strawberry/field.py"}, {"content": "import dataclasses\nfrom typing import Any, Callable, Dict, List, Optional, Type, Union\n\nfrom strawberry.permission import BasePermission\nfrom strawberry.union import StrawberryUnion\n\n\nundefined = object()\n\n\[email protected]\nclass FederationTypeParams:\n keys: List[str] = dataclasses.field(default_factory=list)\n extend: bool = False\n\n\[email protected]\nclass TypeDefinition:\n name: str\n is_input: bool\n is_interface: bool\n is_generic: bool\n origin: Type\n description: Optional[str]\n federation: FederationTypeParams\n interfaces: List[\"TypeDefinition\"]\n\n _fields: List[\"FieldDefinition\"]\n _type_params: Dict[str, Type] = dataclasses.field(default_factory=dict, init=False)\n\n def get_field(self, name: str) -> Optional[\"FieldDefinition\"]:\n return next((field for field in self.fields if field.name == name), None)\n\n @property\n def fields(self) -> List[\"FieldDefinition\"]:\n from .type_resolver import _resolve_types\n\n return _resolve_types(self._fields)\n\n @property\n def type_params(self) -> Dict[str, Type]:\n if not self._type_params:\n from .type_resolver import _get_type_params\n\n self._type_params = _get_type_params(self.fields)\n\n return self._type_params\n\n\[email protected]\nclass ArgumentDefinition:\n name: Optional[str] = None\n origin_name: Optional[str] = None\n type: Optional[Type] = None\n origin: Optional[Type] = None\n child: Optional[\"ArgumentDefinition\"] = None\n is_subscription: bool = False\n is_optional: bool = False\n is_child_optional: bool = False\n is_list: bool = False\n is_union: bool = False\n description: Optional[str] = None\n default_value: Any = undefined\n\n\[email protected]\nclass FederationFieldParams:\n provides: List[str] = dataclasses.field(default_factory=list)\n requires: List[str] = dataclasses.field(default_factory=list)\n external: bool = False\n\n\[email protected]\nclass FieldDefinition:\n name: Optional[str]\n origin_name: Optional[str]\n type: Optional[Union[Type, StrawberryUnion]]\n origin: Union[Type, Callable]\n child: Optional[\"FieldDefinition\"] = None\n is_subscription: bool = False\n is_optional: bool = False\n is_child_optional: bool = False\n is_list: bool = False\n is_union: bool = False\n federation: FederationFieldParams = dataclasses.field(\n default_factory=FederationFieldParams\n )\n arguments: List[ArgumentDefinition] = dataclasses.field(default_factory=list)\n description: Optional[str] = None\n base_resolver: Optional[Callable] = None\n permission_classes: List[Type[BasePermission]] = dataclasses.field(\n default_factory=list\n )\n default_value: Any = undefined\n deprecation_reason: Optional[str] = None\n", "path": "strawberry/types/types.py"}, {"content": "import typing\n\nfrom graphql import GraphQLField, GraphQLInputField\n\nfrom strawberry.arguments import UNSET\nfrom strawberry.field import FieldDefinition\nfrom strawberry.resolvers import get_resolver\nfrom strawberry.types.types import undefined\n\nfrom .arguments import convert_arguments\nfrom .type import get_graphql_type\nfrom .types import Field, TypeMap\n\n\ndef get_field(\n field: FieldDefinition,\n is_input: bool,\n type_map: TypeMap,\n) -> Field:\n graphql_type = get_graphql_type(field, type_map)\n\n TypeClass: typing.Union[\n typing.Type[GraphQLInputField], typing.Type[GraphQLField]\n ] = GraphQLField\n\n kwargs: typing.Dict[str, typing.Any] = {\n \"description\": field.description,\n }\n\n resolver = get_resolver(field)\n\n if is_input:\n TypeClass = GraphQLInputField\n if field.default_value not in (undefined, UNSET):\n kwargs[\"default_value\"] = field.default_value\n elif field.is_subscription:\n kwargs[\"args\"] = convert_arguments(field.arguments, type_map)\n kwargs[\"subscribe\"] = resolver\n kwargs[\"resolve\"] = lambda event, *args, **kwargs: event\n else:\n kwargs[\"args\"] = convert_arguments(field.arguments, type_map)\n kwargs[\"resolve\"] = resolver\n\n if not is_input:\n kwargs[\"deprecation_reason\"] = field.deprecation_reason\n\n return TypeClass(graphql_type, **kwargs) # type: ignore\n", "path": "strawberry/schema/types/fields.py"}]}
| 2,771 | 402 |
gh_patches_debug_7660
|
rasdani/github-patches
|
git_diff
|
pyjanitor-devs__pyjanitor-447
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC] Clarify original-dataframe mutation behavior in pyjanitor function docstrings
# Brief Description of Fix
Currently, some pyjanitor functions mutate the original dataframe and others return a copy. Solutions are heavily discussed in #79 but no conclusion has been reached. At the moment, it is unclear, without experimentation from the user, which behavior applies in each function.
In the interim, I propose to explicitly clarify this behavior in each function's docstring so the user has a clear idea regarding the function's mutating behavior. Below is a sample of what this could look like for `.clean_names()`:
"""
Clean column names.
Takes all column names, converts them to lowercase, then replaces all
spaces with underscores. <b>Does not mutate original dataframe.</b>
"""
Happy to add this line somewhere else in the docstring if inappropriate here.
- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/index.html)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `janitor/finance.py`
Content:
```
1 """
2 Finance-specific data cleaning functions.
3 """
4
5 import json
6 from datetime import date, datetime
7 from functools import lru_cache
8
9 import pandas as pd
10 import pandas_flavor as pf
11 import requests
12
13 from janitor import check
14
15 from .utils import deprecated_alias
16
17 currency_set = {
18 "AUD",
19 "BGN",
20 "BRL",
21 "CAD",
22 "CHF",
23 "CNY",
24 "CZK",
25 "DKK",
26 "EUR",
27 "GBP",
28 "HKD",
29 "HRK",
30 "HUF",
31 "IDR",
32 "ILS",
33 "INR",
34 "ISK",
35 "JPY",
36 "KRW",
37 "MXN",
38 "MYR",
39 "NOK",
40 "NZD",
41 "PHP",
42 "PLN",
43 "RON",
44 "RUB",
45 "SEK",
46 "SGD",
47 "THB",
48 "TRY",
49 "USD",
50 "ZAR",
51 }
52
53
54 def _check_currency(currency: str):
55 if currency not in currency_set:
56 raise ValueError(
57 f"currency {currency} not in supported currency set, "
58 f"{currency_set}"
59 )
60
61
62 @lru_cache(maxsize=32)
63 def _convert_currency(
64 from_currency: str = None,
65 to_currency: str = None,
66 historical_date: date = None,
67 ):
68 """
69 Currency conversion for Pandas DataFrame column.
70
71 Helper function for `convert_currency` method.
72
73 The API used is: https://exchangeratesapi.io/
74 """
75
76 url = "https://api.exchangeratesapi.io"
77
78 if historical_date:
79 check("historical_date", historical_date, [datetime, date])
80 if isinstance(historical_date, datetime):
81 if historical_date < datetime(1999, 1, 4):
82 raise ValueError(
83 "historical_date:datetime must be later than 1999-01-04!"
84 )
85 string_date = str(historical_date)[:10]
86 else:
87 if historical_date < date(1999, 1, 4):
88 raise ValueError(
89 "historical_date:date must be later than 1999-01-04!"
90 )
91 string_date = str(historical_date)
92 url = url + "/%s" % string_date
93 else:
94 url = url + "/latest"
95
96 _check_currency(from_currency)
97 _check_currency(to_currency)
98
99 payload = {"base": from_currency, "symbols": to_currency}
100
101 result = requests.get(url, params=payload)
102
103 if result.status_code != 200:
104 raise ConnectionError(
105 "Exchange Rate API failed to receive a 200 "
106 "response from the server. "
107 "Please try again later."
108 )
109
110 currency_dict = json.loads(result.text)
111 rate = currency_dict["rates"][to_currency]
112 return rate
113
114
115 @pf.register_dataframe_method
116 @deprecated_alias(colname="column_name")
117 def convert_currency(
118 df: pd.DataFrame,
119 column_name: str = None,
120 from_currency: str = None,
121 to_currency: str = None,
122 historical_date: date = None,
123 make_new_column: bool = False,
124 ) -> pd.DataFrame:
125 """
126 Converts a column from one currency to another, with an option to
127 convert based on historical exchange values.
128
129 :param df: A pandas dataframe.
130 :param column_name: Name of the new column. Should be a string, in order
131 for the column name to be compatible with the Feather binary
132 format (this is a useful thing to have).
133 :param from_currency: The base currency to convert from.
134 May be any of: currency_set = {"AUD", "BGN", "BRL", "CAD", "CHF",
135 "CNY", "CZK", "DKK", "EUR", "GBP", "HKD", "HRK", "HUF", "IDR",
136 "ILS", "INR", "ISK", "JPY", "KRW", "MXN", "MYR", "NOK", "NZD",
137 "PHP", "PLN", "RON", "RUB", "SEK", "SGD", "THB", "TRY", "USD",
138 "ZAR"}
139 :param to_currency: The target currency to convert to.
140 May be any of: currency_set = {"AUD", "BGN", "BRL", "CAD", "CHF",
141 "CNY", "CZK", "DKK", "EUR", "GBP", "HKD", "HRK", "HUF", "IDR",
142 "ILS", "INR", "ISK", "JPY", "KRW", "MXN", "MYR", "NOK", "NZD",
143 "PHP", "PLN", "RON", "RUB", "SEK", "SGD", "THB", "TRY", "USD",
144 "ZAR"}
145 :param historical_date: If supplied, get exchange rate on a certain\
146 date. If not supplied, get the latest exchange rate. The exchange\
147 rates go back to Jan. 4, 1999.
148 :param make_new_column: Generates new column for converted currency if
149 True, otherwise, converts currency in place.
150
151 :Setup:
152
153 .. code-block:: python
154
155 import pandas as pd
156 import janitor
157 from datetime import date
158
159 data_dict = {
160 "a": [1.23452345, 2.456234, 3.2346125] * 3,
161 "Bell__Chart": [1/3, 2/7, 3/2] * 3,
162 "decorated-elephant": [1/234, 2/13, 3/167] * 3,
163 "animals": ["rabbit", "leopard", "lion"] * 3,
164 "cities": ["Cambridge", "Shanghai", "Basel"] * 3,
165 }
166
167 example_dataframe = pd.DataFrame(data_dict)
168
169 :Example: Converting a column from one currency to another using rates
170
171 from 01/01/2018:
172
173 .. code-block:: python
174
175 example_dataframe.convert_currency('a', from_currency='USD',
176 to_currency='EUR', historical_date=date(2018,1,1))
177
178 :Output:
179
180 .. code-block:: python
181
182 a Bell__Chart decorated-elephant animals cities
183 0 1.029370 0.333333 0.004274 rabbit Cambridge
184 1 2.048056 0.285714 0.153846 leopard Shanghai
185 2 2.697084 1.500000 0.017964 lion Basel
186 3 1.029370 0.333333 0.004274 rabbit Cambridge
187 4 2.048056 0.285714 0.153846 leopard Shanghai
188 5 2.697084 1.500000 0.017964 lion Basel
189 6 1.029370 0.333333 0.004274 rabbit Cambridge
190 7 2.048056 0.285714 0.153846 leopard Shanghai
191 8 2.697084 1.500000 0.017964 lion Basel
192
193 """
194
195 rate = _convert_currency(from_currency, to_currency, historical_date)
196
197 if make_new_column:
198 new_column_name = column_name + "_" + to_currency
199 df[new_column_name] = df[column_name] * rate
200
201 else:
202 df[column_name] = df[column_name] * rate
203
204 return df
205
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/janitor/finance.py b/janitor/finance.py
--- a/janitor/finance.py
+++ b/janitor/finance.py
@@ -126,6 +126,8 @@
Converts a column from one currency to another, with an option to
convert based on historical exchange values.
+ This method mutates the original DataFrame.
+
:param df: A pandas dataframe.
:param column_name: Name of the new column. Should be a string, in order
for the column name to be compatible with the Feather binary
|
{"golden_diff": "diff --git a/janitor/finance.py b/janitor/finance.py\n--- a/janitor/finance.py\n+++ b/janitor/finance.py\n@@ -126,6 +126,8 @@\n Converts a column from one currency to another, with an option to\n convert based on historical exchange values.\n \n+ This method mutates the original DataFrame.\n+\n :param df: A pandas dataframe.\n :param column_name: Name of the new column. Should be a string, in order\n for the column name to be compatible with the Feather binary\n", "issue": "[DOC] Clarify original-dataframe mutation behavior in pyjanitor function docstrings\n# Brief Description of Fix\r\nCurrently, some pyjanitor functions mutate the original dataframe and others return a copy. Solutions are heavily discussed in #79 but no conclusion has been reached. At the moment, it is unclear, without experimentation from the user, which behavior applies in each function. \r\n\r\nIn the interim, I propose to explicitly clarify this behavior in each function's docstring so the user has a clear idea regarding the function's mutating behavior. Below is a sample of what this could look like for `.clean_names()`:\r\n\r\n\"\"\"\r\nClean column names.\r\n Takes all column names, converts them to lowercase, then replaces all\r\n spaces with underscores. <b>Does not mutate original dataframe.</b>\r\n\"\"\"\r\n\r\nHappy to add this line somewhere else in the docstring if inappropriate here. \r\n\r\n- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/index.html)\r\n- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py)\r\n\n", "before_files": [{"content": "\"\"\"\nFinance-specific data cleaning functions.\n\"\"\"\n\nimport json\nfrom datetime import date, datetime\nfrom functools import lru_cache\n\nimport pandas as pd\nimport pandas_flavor as pf\nimport requests\n\nfrom janitor import check\n\nfrom .utils import deprecated_alias\n\ncurrency_set = {\n \"AUD\",\n \"BGN\",\n \"BRL\",\n \"CAD\",\n \"CHF\",\n \"CNY\",\n \"CZK\",\n \"DKK\",\n \"EUR\",\n \"GBP\",\n \"HKD\",\n \"HRK\",\n \"HUF\",\n \"IDR\",\n \"ILS\",\n \"INR\",\n \"ISK\",\n \"JPY\",\n \"KRW\",\n \"MXN\",\n \"MYR\",\n \"NOK\",\n \"NZD\",\n \"PHP\",\n \"PLN\",\n \"RON\",\n \"RUB\",\n \"SEK\",\n \"SGD\",\n \"THB\",\n \"TRY\",\n \"USD\",\n \"ZAR\",\n}\n\n\ndef _check_currency(currency: str):\n if currency not in currency_set:\n raise ValueError(\n f\"currency {currency} not in supported currency set, \"\n f\"{currency_set}\"\n )\n\n\n@lru_cache(maxsize=32)\ndef _convert_currency(\n from_currency: str = None,\n to_currency: str = None,\n historical_date: date = None,\n):\n \"\"\"\n Currency conversion for Pandas DataFrame column.\n\n Helper function for `convert_currency` method.\n\n The API used is: https://exchangeratesapi.io/\n \"\"\"\n\n url = \"https://api.exchangeratesapi.io\"\n\n if historical_date:\n check(\"historical_date\", historical_date, [datetime, date])\n if isinstance(historical_date, datetime):\n if historical_date < datetime(1999, 1, 4):\n raise ValueError(\n \"historical_date:datetime must be later than 1999-01-04!\"\n )\n string_date = str(historical_date)[:10]\n else:\n if historical_date < date(1999, 1, 4):\n raise ValueError(\n \"historical_date:date must be later than 1999-01-04!\"\n )\n string_date = str(historical_date)\n url = url + \"/%s\" % string_date\n else:\n url = url + \"/latest\"\n\n _check_currency(from_currency)\n _check_currency(to_currency)\n\n payload = {\"base\": from_currency, \"symbols\": to_currency}\n\n result = requests.get(url, params=payload)\n\n if result.status_code != 200:\n raise ConnectionError(\n \"Exchange Rate API failed to receive a 200 \"\n \"response from the server. \"\n \"Please try again later.\"\n )\n\n currency_dict = json.loads(result.text)\n rate = currency_dict[\"rates\"][to_currency]\n return rate\n\n\[email protected]_dataframe_method\n@deprecated_alias(colname=\"column_name\")\ndef convert_currency(\n df: pd.DataFrame,\n column_name: str = None,\n from_currency: str = None,\n to_currency: str = None,\n historical_date: date = None,\n make_new_column: bool = False,\n) -> pd.DataFrame:\n \"\"\"\n Converts a column from one currency to another, with an option to\n convert based on historical exchange values.\n\n :param df: A pandas dataframe.\n :param column_name: Name of the new column. Should be a string, in order\n for the column name to be compatible with the Feather binary\n format (this is a useful thing to have).\n :param from_currency: The base currency to convert from.\n May be any of: currency_set = {\"AUD\", \"BGN\", \"BRL\", \"CAD\", \"CHF\",\n \"CNY\", \"CZK\", \"DKK\", \"EUR\", \"GBP\", \"HKD\", \"HRK\", \"HUF\", \"IDR\",\n \"ILS\", \"INR\", \"ISK\", \"JPY\", \"KRW\", \"MXN\", \"MYR\", \"NOK\", \"NZD\",\n \"PHP\", \"PLN\", \"RON\", \"RUB\", \"SEK\", \"SGD\", \"THB\", \"TRY\", \"USD\",\n \"ZAR\"}\n :param to_currency: The target currency to convert to.\n May be any of: currency_set = {\"AUD\", \"BGN\", \"BRL\", \"CAD\", \"CHF\",\n \"CNY\", \"CZK\", \"DKK\", \"EUR\", \"GBP\", \"HKD\", \"HRK\", \"HUF\", \"IDR\",\n \"ILS\", \"INR\", \"ISK\", \"JPY\", \"KRW\", \"MXN\", \"MYR\", \"NOK\", \"NZD\",\n \"PHP\", \"PLN\", \"RON\", \"RUB\", \"SEK\", \"SGD\", \"THB\", \"TRY\", \"USD\",\n \"ZAR\"}\n :param historical_date: If supplied, get exchange rate on a certain\\\n date. If not supplied, get the latest exchange rate. The exchange\\\n rates go back to Jan. 4, 1999.\n :param make_new_column: Generates new column for converted currency if\n True, otherwise, converts currency in place.\n\n :Setup:\n\n .. code-block:: python\n\n import pandas as pd\n import janitor\n from datetime import date\n\n data_dict = {\n \"a\": [1.23452345, 2.456234, 3.2346125] * 3,\n \"Bell__Chart\": [1/3, 2/7, 3/2] * 3,\n \"decorated-elephant\": [1/234, 2/13, 3/167] * 3,\n \"animals\": [\"rabbit\", \"leopard\", \"lion\"] * 3,\n \"cities\": [\"Cambridge\", \"Shanghai\", \"Basel\"] * 3,\n }\n\n example_dataframe = pd.DataFrame(data_dict)\n\n :Example: Converting a column from one currency to another using rates\n\n from 01/01/2018:\n\n .. code-block:: python\n\n example_dataframe.convert_currency('a', from_currency='USD',\n to_currency='EUR', historical_date=date(2018,1,1))\n\n :Output:\n\n .. code-block:: python\n\n a Bell__Chart decorated-elephant animals cities\n 0 1.029370 0.333333 0.004274 rabbit Cambridge\n 1 2.048056 0.285714 0.153846 leopard Shanghai\n 2 2.697084 1.500000 0.017964 lion Basel\n 3 1.029370 0.333333 0.004274 rabbit Cambridge\n 4 2.048056 0.285714 0.153846 leopard Shanghai\n 5 2.697084 1.500000 0.017964 lion Basel\n 6 1.029370 0.333333 0.004274 rabbit Cambridge\n 7 2.048056 0.285714 0.153846 leopard Shanghai\n 8 2.697084 1.500000 0.017964 lion Basel\n\n \"\"\"\n\n rate = _convert_currency(from_currency, to_currency, historical_date)\n\n if make_new_column:\n new_column_name = column_name + \"_\" + to_currency\n df[new_column_name] = df[column_name] * rate\n\n else:\n df[column_name] = df[column_name] * rate\n\n return df\n", "path": "janitor/finance.py"}], "after_files": [{"content": "\"\"\"\nFinance-specific data cleaning functions.\n\"\"\"\n\nimport json\nfrom datetime import date, datetime\nfrom functools import lru_cache\n\nimport pandas as pd\nimport pandas_flavor as pf\nimport requests\n\nfrom janitor import check\n\nfrom .utils import deprecated_alias\n\ncurrency_set = {\n \"AUD\",\n \"BGN\",\n \"BRL\",\n \"CAD\",\n \"CHF\",\n \"CNY\",\n \"CZK\",\n \"DKK\",\n \"EUR\",\n \"GBP\",\n \"HKD\",\n \"HRK\",\n \"HUF\",\n \"IDR\",\n \"ILS\",\n \"INR\",\n \"ISK\",\n \"JPY\",\n \"KRW\",\n \"MXN\",\n \"MYR\",\n \"NOK\",\n \"NZD\",\n \"PHP\",\n \"PLN\",\n \"RON\",\n \"RUB\",\n \"SEK\",\n \"SGD\",\n \"THB\",\n \"TRY\",\n \"USD\",\n \"ZAR\",\n}\n\n\ndef _check_currency(currency: str):\n if currency not in currency_set:\n raise ValueError(\n f\"currency {currency} not in supported currency set, \"\n f\"{currency_set}\"\n )\n\n\n@lru_cache(maxsize=32)\ndef _convert_currency(\n from_currency: str = None,\n to_currency: str = None,\n historical_date: date = None,\n):\n \"\"\"\n Currency conversion for Pandas DataFrame column.\n\n Helper function for `convert_currency` method.\n\n The API used is: https://exchangeratesapi.io/\n \"\"\"\n\n url = \"https://api.exchangeratesapi.io\"\n\n if historical_date:\n check(\"historical_date\", historical_date, [datetime, date])\n if isinstance(historical_date, datetime):\n if historical_date < datetime(1999, 1, 4):\n raise ValueError(\n \"historical_date:datetime must be later than 1999-01-04!\"\n )\n string_date = str(historical_date)[:10]\n else:\n if historical_date < date(1999, 1, 4):\n raise ValueError(\n \"historical_date:date must be later than 1999-01-04!\"\n )\n string_date = str(historical_date)\n url = url + \"/%s\" % string_date\n else:\n url = url + \"/latest\"\n\n _check_currency(from_currency)\n _check_currency(to_currency)\n\n payload = {\"base\": from_currency, \"symbols\": to_currency}\n\n result = requests.get(url, params=payload)\n\n if result.status_code != 200:\n raise ConnectionError(\n \"Exchange Rate API failed to receive a 200 \"\n \"response from the server. \"\n \"Please try again later.\"\n )\n\n currency_dict = json.loads(result.text)\n rate = currency_dict[\"rates\"][to_currency]\n return rate\n\n\[email protected]_dataframe_method\n@deprecated_alias(colname=\"column_name\")\ndef convert_currency(\n df: pd.DataFrame,\n column_name: str = None,\n from_currency: str = None,\n to_currency: str = None,\n historical_date: date = None,\n make_new_column: bool = False,\n) -> pd.DataFrame:\n \"\"\"\n Converts a column from one currency to another, with an option to\n convert based on historical exchange values.\n\n This method mutates the original DataFrame.\n\n :param df: A pandas dataframe.\n :param column_name: Name of the new column. Should be a string, in order\n for the column name to be compatible with the Feather binary\n format (this is a useful thing to have).\n :param from_currency: The base currency to convert from.\n May be any of: currency_set = {\"AUD\", \"BGN\", \"BRL\", \"CAD\", \"CHF\",\n \"CNY\", \"CZK\", \"DKK\", \"EUR\", \"GBP\", \"HKD\", \"HRK\", \"HUF\", \"IDR\",\n \"ILS\", \"INR\", \"ISK\", \"JPY\", \"KRW\", \"MXN\", \"MYR\", \"NOK\", \"NZD\",\n \"PHP\", \"PLN\", \"RON\", \"RUB\", \"SEK\", \"SGD\", \"THB\", \"TRY\", \"USD\",\n \"ZAR\"}\n :param to_currency: The target currency to convert to.\n May be any of: currency_set = {\"AUD\", \"BGN\", \"BRL\", \"CAD\", \"CHF\",\n \"CNY\", \"CZK\", \"DKK\", \"EUR\", \"GBP\", \"HKD\", \"HRK\", \"HUF\", \"IDR\",\n \"ILS\", \"INR\", \"ISK\", \"JPY\", \"KRW\", \"MXN\", \"MYR\", \"NOK\", \"NZD\",\n \"PHP\", \"PLN\", \"RON\", \"RUB\", \"SEK\", \"SGD\", \"THB\", \"TRY\", \"USD\",\n \"ZAR\"}\n :param historical_date: If supplied, get exchange rate on a certain\\\n date. If not supplied, get the latest exchange rate. The exchange\\\n rates go back to Jan. 4, 1999.\n :param make_new_column: Generates new column for converted currency if\n True, otherwise, converts currency in place.\n\n :Setup:\n\n .. code-block:: python\n\n import pandas as pd\n import janitor\n from datetime import date\n\n data_dict = {\n \"a\": [1.23452345, 2.456234, 3.2346125] * 3,\n \"Bell__Chart\": [1/3, 2/7, 3/2] * 3,\n \"decorated-elephant\": [1/234, 2/13, 3/167] * 3,\n \"animals\": [\"rabbit\", \"leopard\", \"lion\"] * 3,\n \"cities\": [\"Cambridge\", \"Shanghai\", \"Basel\"] * 3,\n }\n\n example_dataframe = pd.DataFrame(data_dict)\n\n :Example: Converting a column from one currency to another using rates\n\n from 01/01/2018:\n\n .. code-block:: python\n\n example_dataframe.convert_currency('a', from_currency='USD',\n to_currency='EUR', historical_date=date(2018,1,1))\n\n :Output:\n\n .. code-block:: python\n\n a Bell__Chart decorated-elephant animals cities\n 0 1.029370 0.333333 0.004274 rabbit Cambridge\n 1 2.048056 0.285714 0.153846 leopard Shanghai\n 2 2.697084 1.500000 0.017964 lion Basel\n 3 1.029370 0.333333 0.004274 rabbit Cambridge\n 4 2.048056 0.285714 0.153846 leopard Shanghai\n 5 2.697084 1.500000 0.017964 lion Basel\n 6 1.029370 0.333333 0.004274 rabbit Cambridge\n 7 2.048056 0.285714 0.153846 leopard Shanghai\n 8 2.697084 1.500000 0.017964 lion Basel\n\n \"\"\"\n\n rate = _convert_currency(from_currency, to_currency, historical_date)\n\n if make_new_column:\n new_column_name = column_name + \"_\" + to_currency\n df[new_column_name] = df[column_name] * rate\n\n else:\n df[column_name] = df[column_name] * rate\n\n return df\n", "path": "janitor/finance.py"}]}
| 2,871 | 127 |
gh_patches_debug_57471
|
rasdani/github-patches
|
git_diff
|
d2l-ai__d2l-en-2279
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ModuleNotFoundError when running the official pytorch colab notebook

I can replicate the error at multiple official pytorch colab notebooks, e.g.
https://colab.research.google.com/github/d2l-ai/d2l-pytorch-colab/blob/master/chapter_linear-classification/image-classification-dataset.ipynb#scrollTo=ee445cce
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 import d2l
3
4 requirements = [
5 'ipython>=7.23',
6 'jupyter',
7 'numpy',
8 'matplotlib',
9 'requests',
10 'pandas',
11 'gym'
12 ]
13
14 setup(
15 name='d2l',
16 version=d2l.__version__,
17 python_requires='>=3.5',
18 author='D2L Developers',
19 author_email='[email protected]',
20 url='https://d2l.ai',
21 description='Dive into Deep Learning',
22 license='MIT-0',
23 packages=find_packages(),
24 zip_safe=True,
25 install_requires=requirements,
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -2,10 +2,10 @@
import d2l
requirements = [
- 'ipython>=7.23',
'jupyter',
'numpy',
'matplotlib',
+ 'matplotlib-inline',
'requests',
'pandas',
'gym'
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -2,10 +2,10 @@\n import d2l\n \n requirements = [\n- 'ipython>=7.23',\n 'jupyter',\n 'numpy',\n 'matplotlib',\n+ 'matplotlib-inline',\n 'requests',\n 'pandas',\n 'gym'\n", "issue": "ModuleNotFoundError when running the official pytorch colab notebook\n\r\n\r\nI can replicate the error at multiple official pytorch colab notebooks, e.g. \r\n\r\nhttps://colab.research.google.com/github/d2l-ai/d2l-pytorch-colab/blob/master/chapter_linear-classification/image-classification-dataset.ipynb#scrollTo=ee445cce\r\n\r\n\r\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport d2l\n\nrequirements = [\n 'ipython>=7.23',\n 'jupyter',\n 'numpy',\n 'matplotlib',\n 'requests',\n 'pandas',\n 'gym'\n]\n\nsetup(\n name='d2l',\n version=d2l.__version__,\n python_requires='>=3.5',\n author='D2L Developers',\n author_email='[email protected]',\n url='https://d2l.ai',\n description='Dive into Deep Learning',\n license='MIT-0',\n packages=find_packages(),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nimport d2l\n\nrequirements = [\n 'jupyter',\n 'numpy',\n 'matplotlib',\n 'matplotlib-inline',\n 'requests',\n 'pandas',\n 'gym'\n]\n\nsetup(\n name='d2l',\n version=d2l.__version__,\n python_requires='>=3.5',\n author='D2L Developers',\n author_email='[email protected]',\n url='https://d2l.ai',\n description='Dive into Deep Learning',\n license='MIT-0',\n packages=find_packages(),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}]}
| 591 | 84 |
gh_patches_debug_892
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-437
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check for "ndarray-like" instead of ndarray in _warp; other places
I want to use `rasterio.warp.reproject` on an `xray.Dataset` with `xray.Dataset.apply` (http://xray.readthedocs.org/en/stable/). xray has a feature to turn the dataset into a `np.ndarray`, but that means losing all my metadata.
At https://github.com/mapbox/rasterio/blob/master/rasterio/_warp.pyx#L249, _warp checks that the source is an `np.ndarray` (whereas the source in my case is an `xray.DataArray` - satisfying the same interfaces as `np.ndarray`), so I get an invalid source error.
It could be a good idea to check for something like
```
def is_ndarray_like(source):
return hasattr(source, '__array__')
```
instead of
```
isinstance(source, np.ndarray)
```
so other numpy-like arrays can be used.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/dtypes.py`
Content:
```
1 # Mapping of GDAL to Numpy data types.
2 #
3 # Since 0.13 we are not importing numpy here and data types are strings.
4 # Happily strings can be used throughout Numpy and so existing code will
5 # break.
6 #
7 # Within Rasterio, to test data types, we use Numpy's dtype() factory to
8 # do something like this:
9 #
10 # if np.dtype(destination.dtype) == np.dtype(rasterio.uint8): ...
11 #
12
13 bool_ = 'bool'
14 ubyte = uint8 = 'uint8'
15 uint16 = 'uint16'
16 int16 = 'int16'
17 uint32 = 'uint32'
18 int32 = 'int32'
19 float32 = 'float32'
20 float64 = 'float64'
21 complex_ = 'complex'
22 complex64 = 'complex64'
23 complex128 = 'complex128'
24
25 # Not supported:
26 # GDT_CInt16 = 8, GDT_CInt32 = 9, GDT_CFloat32 = 10, GDT_CFloat64 = 11
27
28 dtype_fwd = {
29 0: None, # GDT_Unknown
30 1: ubyte, # GDT_Byte
31 2: uint16, # GDT_UInt16
32 3: int16, # GDT_Int16
33 4: uint32, # GDT_UInt32
34 5: int32, # GDT_Int32
35 6: float32, # GDT_Float32
36 7: float64, # GDT_Float64
37 8: complex_, # GDT_CInt16
38 9: complex_, # GDT_CInt32
39 10: complex64, # GDT_CFloat32
40 11: complex128 } # GDT_CFloat64
41
42 dtype_rev = dict((v, k) for k, v in dtype_fwd.items())
43 dtype_rev['uint8'] = 1
44
45 typename_fwd = {
46 0: 'Unknown',
47 1: 'Byte',
48 2: 'UInt16',
49 3: 'Int16',
50 4: 'UInt32',
51 5: 'Int32',
52 6: 'Float32',
53 7: 'Float64',
54 8: 'CInt16',
55 9: 'CInt32',
56 10: 'CFloat32',
57 11: 'CFloat64' }
58
59 typename_rev = dict((v, k) for k, v in typename_fwd.items())
60
61 def _gdal_typename(dt):
62 try:
63 return typename_fwd[dtype_rev[dt]]
64 except KeyError:
65 return typename_fwd[dtype_rev[dt().dtype.name]]
66
67 def check_dtype(dt):
68 if dt not in dtype_rev:
69 try:
70 return dt().dtype.name in dtype_rev
71 except:
72 return False
73 return True
74
75
76 def get_minimum_int_dtype(values):
77 """
78 Uses range checking to determine the minimum integer data type required
79 to represent values.
80
81 :param values: numpy array
82 :return: named data type that can be later used to create a numpy dtype
83 """
84
85 min_value = values.min()
86 max_value = values.max()
87
88 if min_value >= 0:
89 if max_value <= 255:
90 return uint8
91 elif max_value <= 65535:
92 return uint16
93 elif max_value <= 4294967295:
94 return uint32
95 elif min_value >= -32768 and max_value <= 32767:
96 return int16
97 elif min_value >= -2147483648 and max_value <= 2147483647:
98 return int32
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rasterio/dtypes.py b/rasterio/dtypes.py
--- a/rasterio/dtypes.py
+++ b/rasterio/dtypes.py
@@ -96,3 +96,9 @@
return int16
elif min_value >= -2147483648 and max_value <= 2147483647:
return int32
+
+
+def is_ndarray(array):
+ import numpy
+
+ return isinstance(array, numpy.ndarray) or hasattr(array, '__array__')
|
{"golden_diff": "diff --git a/rasterio/dtypes.py b/rasterio/dtypes.py\n--- a/rasterio/dtypes.py\n+++ b/rasterio/dtypes.py\n@@ -96,3 +96,9 @@\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n return int32\n+\n+\n+def is_ndarray(array):\n+ import numpy\n+\n+ return isinstance(array, numpy.ndarray) or hasattr(array, '__array__')\n", "issue": "Check for \"ndarray-like\" instead of ndarray in _warp; other places\nI want to use `rasterio.warp.reproject` on an `xray.Dataset` with `xray.Dataset.apply` (http://xray.readthedocs.org/en/stable/). xray has a feature to turn the dataset into a `np.ndarray`, but that means losing all my metadata.\n\nAt https://github.com/mapbox/rasterio/blob/master/rasterio/_warp.pyx#L249, _warp checks that the source is an `np.ndarray` (whereas the source in my case is an `xray.DataArray` - satisfying the same interfaces as `np.ndarray`), so I get an invalid source error.\n\nIt could be a good idea to check for something like\n\n```\ndef is_ndarray_like(source):\n return hasattr(source, '__array__')\n```\n\ninstead of\n\n```\nisinstance(source, np.ndarray)\n```\n\nso other numpy-like arrays can be used.\n\n", "before_files": [{"content": "# Mapping of GDAL to Numpy data types.\n#\n# Since 0.13 we are not importing numpy here and data types are strings.\n# Happily strings can be used throughout Numpy and so existing code will\n# break.\n#\n# Within Rasterio, to test data types, we use Numpy's dtype() factory to \n# do something like this:\n#\n# if np.dtype(destination.dtype) == np.dtype(rasterio.uint8): ...\n#\n\nbool_ = 'bool'\nubyte = uint8 = 'uint8'\nuint16 = 'uint16'\nint16 = 'int16'\nuint32 = 'uint32'\nint32 = 'int32'\nfloat32 = 'float32'\nfloat64 = 'float64'\ncomplex_ = 'complex'\ncomplex64 = 'complex64'\ncomplex128 = 'complex128'\n\n# Not supported:\n# GDT_CInt16 = 8, GDT_CInt32 = 9, GDT_CFloat32 = 10, GDT_CFloat64 = 11\n\ndtype_fwd = {\n 0: None, # GDT_Unknown\n 1: ubyte, # GDT_Byte\n 2: uint16, # GDT_UInt16\n 3: int16, # GDT_Int16\n 4: uint32, # GDT_UInt32\n 5: int32, # GDT_Int32\n 6: float32, # GDT_Float32\n 7: float64, # GDT_Float64\n 8: complex_, # GDT_CInt16\n 9: complex_, # GDT_CInt32\n 10: complex64, # GDT_CFloat32\n 11: complex128 } # GDT_CFloat64\n\ndtype_rev = dict((v, k) for k, v in dtype_fwd.items())\ndtype_rev['uint8'] = 1\n\ntypename_fwd = {\n 0: 'Unknown',\n 1: 'Byte',\n 2: 'UInt16',\n 3: 'Int16',\n 4: 'UInt32',\n 5: 'Int32',\n 6: 'Float32',\n 7: 'Float64',\n 8: 'CInt16',\n 9: 'CInt32',\n 10: 'CFloat32',\n 11: 'CFloat64' }\n\ntypename_rev = dict((v, k) for k, v in typename_fwd.items())\n\ndef _gdal_typename(dt):\n try:\n return typename_fwd[dtype_rev[dt]]\n except KeyError:\n return typename_fwd[dtype_rev[dt().dtype.name]]\n\ndef check_dtype(dt):\n if dt not in dtype_rev:\n try:\n return dt().dtype.name in dtype_rev\n except:\n return False\n return True\n\n\ndef get_minimum_int_dtype(values):\n \"\"\"\n Uses range checking to determine the minimum integer data type required\n to represent values.\n\n :param values: numpy array\n :return: named data type that can be later used to create a numpy dtype\n \"\"\"\n\n min_value = values.min()\n max_value = values.max()\n \n if min_value >= 0:\n if max_value <= 255:\n return uint8\n elif max_value <= 65535:\n return uint16\n elif max_value <= 4294967295:\n return uint32\n elif min_value >= -32768 and max_value <= 32767:\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n return int32\n", "path": "rasterio/dtypes.py"}], "after_files": [{"content": "# Mapping of GDAL to Numpy data types.\n#\n# Since 0.13 we are not importing numpy here and data types are strings.\n# Happily strings can be used throughout Numpy and so existing code will\n# break.\n#\n# Within Rasterio, to test data types, we use Numpy's dtype() factory to \n# do something like this:\n#\n# if np.dtype(destination.dtype) == np.dtype(rasterio.uint8): ...\n#\n\nbool_ = 'bool'\nubyte = uint8 = 'uint8'\nuint16 = 'uint16'\nint16 = 'int16'\nuint32 = 'uint32'\nint32 = 'int32'\nfloat32 = 'float32'\nfloat64 = 'float64'\ncomplex_ = 'complex'\ncomplex64 = 'complex64'\ncomplex128 = 'complex128'\n\n# Not supported:\n# GDT_CInt16 = 8, GDT_CInt32 = 9, GDT_CFloat32 = 10, GDT_CFloat64 = 11\n\ndtype_fwd = {\n 0: None, # GDT_Unknown\n 1: ubyte, # GDT_Byte\n 2: uint16, # GDT_UInt16\n 3: int16, # GDT_Int16\n 4: uint32, # GDT_UInt32\n 5: int32, # GDT_Int32\n 6: float32, # GDT_Float32\n 7: float64, # GDT_Float64\n 8: complex_, # GDT_CInt16\n 9: complex_, # GDT_CInt32\n 10: complex64, # GDT_CFloat32\n 11: complex128 } # GDT_CFloat64\n\ndtype_rev = dict((v, k) for k, v in dtype_fwd.items())\ndtype_rev['uint8'] = 1\n\ntypename_fwd = {\n 0: 'Unknown',\n 1: 'Byte',\n 2: 'UInt16',\n 3: 'Int16',\n 4: 'UInt32',\n 5: 'Int32',\n 6: 'Float32',\n 7: 'Float64',\n 8: 'CInt16',\n 9: 'CInt32',\n 10: 'CFloat32',\n 11: 'CFloat64' }\n\ntypename_rev = dict((v, k) for k, v in typename_fwd.items())\n\ndef _gdal_typename(dt):\n try:\n return typename_fwd[dtype_rev[dt]]\n except KeyError:\n return typename_fwd[dtype_rev[dt().dtype.name]]\n\ndef check_dtype(dt):\n if dt not in dtype_rev:\n try:\n return dt().dtype.name in dtype_rev\n except:\n return False\n return True\n\n\ndef get_minimum_int_dtype(values):\n \"\"\"\n Uses range checking to determine the minimum integer data type required\n to represent values.\n\n :param values: numpy array\n :return: named data type that can be later used to create a numpy dtype\n \"\"\"\n\n min_value = values.min()\n max_value = values.max()\n \n if min_value >= 0:\n if max_value <= 255:\n return uint8\n elif max_value <= 65535:\n return uint16\n elif max_value <= 4294967295:\n return uint32\n elif min_value >= -32768 and max_value <= 32767:\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n return int32\n\n\ndef is_ndarray(array):\n import numpy\n\n return isinstance(array, numpy.ndarray) or hasattr(array, '__array__')\n", "path": "rasterio/dtypes.py"}]}
| 1,546 | 124 |
gh_patches_debug_38521
|
rasdani/github-patches
|
git_diff
|
pypa__setuptools-2633
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Forward compatibility][BUG] pkg_resources.extern.VendorImporter lacks find_spec() method
### setuptools version
54.1.2 and main
### Python version
3.10.0a7
### OS
All
### Additional environment information
When running tests for setuptools_scm.
### Description
We get:
```python-traceback
<frozen importlib._bootstrap>:933: in _find_spec
???
E AttributeError: 'VendorImporter' object has no attribute 'find_spec'
During handling of the above exception, another exception occurred:
/usr/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
/usr/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
/usr/lib/python3.10/site-packages/_pytest/doctest.py:129: in pytest_collect_file
elif _is_doctest(config, path, parent):
/usr/lib/python3.10/site-packages/_pytest/doctest.py:147: in _is_doctest
if path.check(fnmatch=glob):
/usr/lib/python3.10/site-packages/py/_path/local.py:387: in check
return super(LocalPath, self).check(**kw)
/usr/lib/python3.10/site-packages/py/_path/common.py:241: in check
return self.Checkers(self)._evaluate(kw)
/usr/lib/python3.10/site-packages/py/_path/common.py:108: in _evaluate
if py.code.getrawcode(meth).co_argcount > 1:
/usr/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:152: in __makeattr
result = importobj(modpath, attrname)
/usr/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:72: in importobj
module = __import__(modpath, None, None, ['__doc__'])
/usr/lib/python3.10/site-packages/py/_code/code.py:7: in <module>
reprlib = py.builtin._tryimport('repr', 'reprlib')
/usr/lib/python3.10/site-packages/py/_builtin.py:144: in _tryimport
__import__(name)
<frozen importlib._bootstrap>:1021: in _find_and_load
???
<frozen importlib._bootstrap>:996: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:935: in _find_spec
???
<frozen importlib._bootstrap>:908: in _find_spec_legacy
???
E ImportWarning: VendorImporter.find_spec() not found; falling back to find_module()
```
The traceback is not very helpful, but the VendorImporter is from pkg_resourecs.
Python added a warning: https://bugs.python.org/issue42134 but other packages treat it as error.
### Expected behavior
No ImportWarning.
### How to Reproduce
1. Get Python 3.10.0a7 and tox (e.g. on Fedora via `$ sudo dnf --enablerepo=updates-testing install python3.10 tox`)
2. git clone https://github.com/pypa/setuptools_scm.git and cd in
3. Run `tox -e py310-test`
### Output
```console
py310-test develop-inst-nodeps: .../setuptools_scm
py310-test installed: attrs==20.3.0,iniconfig==1.1.1,packaging==20.9,pluggy==0.13.1,py==1.10.0,pyparsing==2.4.7,pytest==6.2.3,-e git+https://github.com/pypa/setuptools_scm.git@abb67b15985f380d8cf4451b9f2ef3dd11cb8a91#egg=setuptools_scm,toml==0.10.2
py310-test run-test-pre: PYTHONHASHSEED='1426720794'
py310-test run-test: commands[0] | pytest
============================= test session starts ==============================
platform linux -- Python 3.10.0a7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
cachedir: .tox/py310-test/.pytest_cache
setuptools version 53.0.0 from '.../setuptools_scm/.tox/py310-test/lib/python3.10/site-packages/setuptools/__init__.py'
setuptools_scm version 6.0.1 from '.../setuptools_scm/src/setuptools_scm/__init__.py'
rootdir: .../setuptools_scm, configfile: tox.ini, testpaths: testing
collected 0 items / 1 error
==================================== ERRORS ====================================
________________________ ERROR collecting test session _________________________
<frozen importlib._bootstrap>:933: in _find_spec
???
E AttributeError: 'VendorImporter' object has no attribute 'find_spec'
During handling of the above exception, another exception occurred:
.tox/py310-test/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
.tox/py310-test/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
.tox/py310-test/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
.tox/py310-test/lib/python3.10/site-packages/_pytest/doctest.py:129: in pytest_collect_file
elif _is_doctest(config, path, parent):
.tox/py310-test/lib/python3.10/site-packages/_pytest/doctest.py:147: in _is_doctest
if path.check(fnmatch=glob):
.tox/py310-test/lib/python3.10/site-packages/py/_path/local.py:387: in check
return super(LocalPath, self).check(**kw)
.tox/py310-test/lib/python3.10/site-packages/py/_path/common.py:241: in check
return self.Checkers(self)._evaluate(kw)
.tox/py310-test/lib/python3.10/site-packages/py/_path/common.py:108: in _evaluate
if py.code.getrawcode(meth).co_argcount > 1:
.tox/py310-test/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:152: in __makeattr
result = importobj(modpath, attrname)
.tox/py310-test/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:72: in importobj
module = __import__(modpath, None, None, ['__doc__'])
.tox/py310-test/lib/python3.10/site-packages/py/_code/code.py:7: in <module>
reprlib = py.builtin._tryimport('repr', 'reprlib')
.tox/py310-test/lib/python3.10/site-packages/py/_builtin.py:144: in _tryimport
__import__(name)
<frozen importlib._bootstrap>:1021: in _find_and_load
???
<frozen importlib._bootstrap>:996: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:935: in _find_spec
???
<frozen importlib._bootstrap>:908: in _find_spec_legacy
???
E ImportWarning: VendorImporter.find_spec() not found; falling back to find_module()
=========================== short test summary info ============================
ERROR - ImportWarning: VendorImporter.find_spec() not found; falling back to...
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.16s ===============================
ERROR: InvocationError for command .../setuptools_scm/.tox/py310-test/bin/pytest (exited with code 2)
___________________________________ summary ____________________________________
ERROR: py310-test: commands failed
```
### Code of Conduct
I agree to follow the PSF Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/extern/__init__.py`
Content:
```
1 import sys
2
3
4 class VendorImporter:
5 """
6 A PEP 302 meta path importer for finding optionally-vendored
7 or otherwise naturally-installed packages from root_name.
8 """
9
10 def __init__(self, root_name, vendored_names=(), vendor_pkg=None):
11 self.root_name = root_name
12 self.vendored_names = set(vendored_names)
13 self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')
14
15 @property
16 def search_path(self):
17 """
18 Search first the vendor package then as a natural package.
19 """
20 yield self.vendor_pkg + '.'
21 yield ''
22
23 def find_module(self, fullname, path=None):
24 """
25 Return self when fullname starts with root_name and the
26 target module is one vendored through this importer.
27 """
28 root, base, target = fullname.partition(self.root_name + '.')
29 if root:
30 return
31 if not any(map(target.startswith, self.vendored_names)):
32 return
33 return self
34
35 def load_module(self, fullname):
36 """
37 Iterate over the search path to locate and load fullname.
38 """
39 root, base, target = fullname.partition(self.root_name + '.')
40 for prefix in self.search_path:
41 try:
42 extant = prefix + target
43 __import__(extant)
44 mod = sys.modules[extant]
45 sys.modules[fullname] = mod
46 return mod
47 except ImportError:
48 pass
49 else:
50 raise ImportError(
51 "The '{target}' package is required; "
52 "normally this is bundled with this package so if you get "
53 "this warning, consult the packager of your "
54 "distribution.".format(**locals())
55 )
56
57 def create_module(self, spec):
58 return self.load_module(spec.name)
59
60 def exec_module(self, module):
61 pass
62
63 def install(self):
64 """
65 Install this importer into sys.meta_path if not already present.
66 """
67 if self not in sys.meta_path:
68 sys.meta_path.append(self)
69
70
71 names = 'packaging', 'pyparsing', 'ordered_set',
72 VendorImporter(__name__, names, 'setuptools._vendor').install()
73
```
Path: `pkg_resources/extern/__init__.py`
Content:
```
1 import sys
2
3
4 class VendorImporter:
5 """
6 A PEP 302 meta path importer for finding optionally-vendored
7 or otherwise naturally-installed packages from root_name.
8 """
9
10 def __init__(self, root_name, vendored_names=(), vendor_pkg=None):
11 self.root_name = root_name
12 self.vendored_names = set(vendored_names)
13 self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')
14
15 @property
16 def search_path(self):
17 """
18 Search first the vendor package then as a natural package.
19 """
20 yield self.vendor_pkg + '.'
21 yield ''
22
23 def find_module(self, fullname, path=None):
24 """
25 Return self when fullname starts with root_name and the
26 target module is one vendored through this importer.
27 """
28 root, base, target = fullname.partition(self.root_name + '.')
29 if root:
30 return
31 if not any(map(target.startswith, self.vendored_names)):
32 return
33 return self
34
35 def load_module(self, fullname):
36 """
37 Iterate over the search path to locate and load fullname.
38 """
39 root, base, target = fullname.partition(self.root_name + '.')
40 for prefix in self.search_path:
41 try:
42 extant = prefix + target
43 __import__(extant)
44 mod = sys.modules[extant]
45 sys.modules[fullname] = mod
46 return mod
47 except ImportError:
48 pass
49 else:
50 raise ImportError(
51 "The '{target}' package is required; "
52 "normally this is bundled with this package so if you get "
53 "this warning, consult the packager of your "
54 "distribution.".format(**locals())
55 )
56
57 def create_module(self, spec):
58 return self.load_module(spec.name)
59
60 def exec_module(self, module):
61 pass
62
63 def install(self):
64 """
65 Install this importer into sys.meta_path if not already present.
66 """
67 if self not in sys.meta_path:
68 sys.meta_path.append(self)
69
70
71 names = 'packaging', 'pyparsing', 'appdirs'
72 VendorImporter(__name__, names).install()
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pkg_resources/extern/__init__.py b/pkg_resources/extern/__init__.py
--- a/pkg_resources/extern/__init__.py
+++ b/pkg_resources/extern/__init__.py
@@ -1,3 +1,4 @@
+import importlib.util
import sys
@@ -20,17 +21,10 @@
yield self.vendor_pkg + '.'
yield ''
- def find_module(self, fullname, path=None):
- """
- Return self when fullname starts with root_name and the
- target module is one vendored through this importer.
- """
+ def _module_matches_namespace(self, fullname):
+ """Figure out if the target module is vendored."""
root, base, target = fullname.partition(self.root_name + '.')
- if root:
- return
- if not any(map(target.startswith, self.vendored_names)):
- return
- return self
+ return not root and any(map(target.startswith, self.vendored_names))
def load_module(self, fullname):
"""
@@ -60,6 +54,13 @@
def exec_module(self, module):
pass
+ def find_spec(self, fullname, path=None, target=None):
+ """Return a module spec for vendored names."""
+ return (
+ importlib.util.spec_from_loader(fullname, self)
+ if self._module_matches_namespace(fullname) else None
+ )
+
def install(self):
"""
Install this importer into sys.meta_path if not already present.
diff --git a/setuptools/extern/__init__.py b/setuptools/extern/__init__.py
--- a/setuptools/extern/__init__.py
+++ b/setuptools/extern/__init__.py
@@ -1,3 +1,4 @@
+import importlib.util
import sys
@@ -20,17 +21,10 @@
yield self.vendor_pkg + '.'
yield ''
- def find_module(self, fullname, path=None):
- """
- Return self when fullname starts with root_name and the
- target module is one vendored through this importer.
- """
+ def _module_matches_namespace(self, fullname):
+ """Figure out if the target module is vendored."""
root, base, target = fullname.partition(self.root_name + '.')
- if root:
- return
- if not any(map(target.startswith, self.vendored_names)):
- return
- return self
+ return not root and any(map(target.startswith, self.vendored_names))
def load_module(self, fullname):
"""
@@ -60,6 +54,13 @@
def exec_module(self, module):
pass
+ def find_spec(self, fullname, path=None, target=None):
+ """Return a module spec for vendored names."""
+ return (
+ importlib.util.spec_from_loader(fullname, self)
+ if self._module_matches_namespace(fullname) else None
+ )
+
def install(self):
"""
Install this importer into sys.meta_path if not already present.
|
{"golden_diff": "diff --git a/pkg_resources/extern/__init__.py b/pkg_resources/extern/__init__.py\n--- a/pkg_resources/extern/__init__.py\n+++ b/pkg_resources/extern/__init__.py\n@@ -1,3 +1,4 @@\n+import importlib.util\n import sys\n \n \n@@ -20,17 +21,10 @@\n yield self.vendor_pkg + '.'\n yield ''\n \n- def find_module(self, fullname, path=None):\n- \"\"\"\n- Return self when fullname starts with root_name and the\n- target module is one vendored through this importer.\n- \"\"\"\n+ def _module_matches_namespace(self, fullname):\n+ \"\"\"Figure out if the target module is vendored.\"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n- if root:\n- return\n- if not any(map(target.startswith, self.vendored_names)):\n- return\n- return self\n+ return not root and any(map(target.startswith, self.vendored_names))\n \n def load_module(self, fullname):\n \"\"\"\n@@ -60,6 +54,13 @@\n def exec_module(self, module):\n pass\n \n+ def find_spec(self, fullname, path=None, target=None):\n+ \"\"\"Return a module spec for vendored names.\"\"\"\n+ return (\n+ importlib.util.spec_from_loader(fullname, self)\n+ if self._module_matches_namespace(fullname) else None\n+ )\n+\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\ndiff --git a/setuptools/extern/__init__.py b/setuptools/extern/__init__.py\n--- a/setuptools/extern/__init__.py\n+++ b/setuptools/extern/__init__.py\n@@ -1,3 +1,4 @@\n+import importlib.util\n import sys\n \n \n@@ -20,17 +21,10 @@\n yield self.vendor_pkg + '.'\n yield ''\n \n- def find_module(self, fullname, path=None):\n- \"\"\"\n- Return self when fullname starts with root_name and the\n- target module is one vendored through this importer.\n- \"\"\"\n+ def _module_matches_namespace(self, fullname):\n+ \"\"\"Figure out if the target module is vendored.\"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n- if root:\n- return\n- if not any(map(target.startswith, self.vendored_names)):\n- return\n- return self\n+ return not root and any(map(target.startswith, self.vendored_names))\n \n def load_module(self, fullname):\n \"\"\"\n@@ -60,6 +54,13 @@\n def exec_module(self, module):\n pass\n \n+ def find_spec(self, fullname, path=None, target=None):\n+ \"\"\"Return a module spec for vendored names.\"\"\"\n+ return (\n+ importlib.util.spec_from_loader(fullname, self)\n+ if self._module_matches_namespace(fullname) else None\n+ )\n+\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n", "issue": "[Forward compatibility][BUG] pkg_resources.extern.VendorImporter lacks find_spec() method\n### setuptools version\r\n\r\n54.1.2 and main\r\n\r\n### Python version\r\n\r\n3.10.0a7\r\n\r\n### OS\r\n\r\nAll\r\n\r\n### Additional environment information\r\n\r\nWhen running tests for setuptools_scm.\r\n\r\n### Description\r\n\r\nWe get:\r\n\r\n```python-traceback\r\n<frozen importlib._bootstrap>:933: in _find_spec\r\n ???\r\nE AttributeError: 'VendorImporter' object has no attribute 'find_spec'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n/usr/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__\r\n return self._hookexec(self, self.get_hookimpls(), kwargs)\r\n/usr/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec\r\n return self._inner_hookexec(hook, methods, kwargs)\r\n/usr/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>\r\n self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(\r\n/usr/lib/python3.10/site-packages/_pytest/doctest.py:129: in pytest_collect_file\r\n elif _is_doctest(config, path, parent):\r\n/usr/lib/python3.10/site-packages/_pytest/doctest.py:147: in _is_doctest\r\n if path.check(fnmatch=glob):\r\n/usr/lib/python3.10/site-packages/py/_path/local.py:387: in check\r\n return super(LocalPath, self).check(**kw)\r\n/usr/lib/python3.10/site-packages/py/_path/common.py:241: in check\r\n return self.Checkers(self)._evaluate(kw)\r\n/usr/lib/python3.10/site-packages/py/_path/common.py:108: in _evaluate\r\n if py.code.getrawcode(meth).co_argcount > 1:\r\n/usr/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:152: in __makeattr\r\n result = importobj(modpath, attrname)\r\n/usr/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:72: in importobj\r\n module = __import__(modpath, None, None, ['__doc__'])\r\n/usr/lib/python3.10/site-packages/py/_code/code.py:7: in <module>\r\n reprlib = py.builtin._tryimport('repr', 'reprlib')\r\n/usr/lib/python3.10/site-packages/py/_builtin.py:144: in _tryimport\r\n __import__(name)\r\n<frozen importlib._bootstrap>:1021: in _find_and_load\r\n ???\r\n<frozen importlib._bootstrap>:996: in _find_and_load_unlocked\r\n ???\r\n<frozen importlib._bootstrap>:935: in _find_spec\r\n ???\r\n<frozen importlib._bootstrap>:908: in _find_spec_legacy\r\n ???\r\nE ImportWarning: VendorImporter.find_spec() not found; falling back to find_module()\r\n```\r\n\r\nThe traceback is not very helpful, but the VendorImporter is from pkg_resourecs.\r\n\r\nPython added a warning: https://bugs.python.org/issue42134 but other packages treat it as error.\r\n\r\n### Expected behavior\r\n\r\nNo ImportWarning.\r\n\r\n### How to Reproduce\r\n\r\n1. Get Python 3.10.0a7 and tox (e.g. on Fedora via `$ sudo dnf --enablerepo=updates-testing install python3.10 tox`)\r\n2. git clone https://github.com/pypa/setuptools_scm.git and cd in\r\n3. Run `tox -e py310-test`\r\n\r\n### Output\r\n\r\n```console\r\npy310-test develop-inst-nodeps: .../setuptools_scm\r\npy310-test installed: attrs==20.3.0,iniconfig==1.1.1,packaging==20.9,pluggy==0.13.1,py==1.10.0,pyparsing==2.4.7,pytest==6.2.3,-e git+https://github.com/pypa/setuptools_scm.git@abb67b15985f380d8cf4451b9f2ef3dd11cb8a91#egg=setuptools_scm,toml==0.10.2\r\npy310-test run-test-pre: PYTHONHASHSEED='1426720794'\r\npy310-test run-test: commands[0] | pytest\r\n============================= test session starts ==============================\r\nplatform linux -- Python 3.10.0a7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1\r\ncachedir: .tox/py310-test/.pytest_cache\r\nsetuptools version 53.0.0 from '.../setuptools_scm/.tox/py310-test/lib/python3.10/site-packages/setuptools/__init__.py'\r\nsetuptools_scm version 6.0.1 from '.../setuptools_scm/src/setuptools_scm/__init__.py'\r\nrootdir: .../setuptools_scm, configfile: tox.ini, testpaths: testing\r\ncollected 0 items / 1 error\r\n\r\n==================================== ERRORS ====================================\r\n________________________ ERROR collecting test session _________________________\r\n<frozen importlib._bootstrap>:933: in _find_spec\r\n ???\r\nE AttributeError: 'VendorImporter' object has no attribute 'find_spec'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n.tox/py310-test/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__\r\n return self._hookexec(self, self.get_hookimpls(), kwargs)\r\n.tox/py310-test/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec\r\n return self._inner_hookexec(hook, methods, kwargs)\r\n.tox/py310-test/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>\r\n self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(\r\n.tox/py310-test/lib/python3.10/site-packages/_pytest/doctest.py:129: in pytest_collect_file\r\n elif _is_doctest(config, path, parent):\r\n.tox/py310-test/lib/python3.10/site-packages/_pytest/doctest.py:147: in _is_doctest\r\n if path.check(fnmatch=glob):\r\n.tox/py310-test/lib/python3.10/site-packages/py/_path/local.py:387: in check\r\n return super(LocalPath, self).check(**kw)\r\n.tox/py310-test/lib/python3.10/site-packages/py/_path/common.py:241: in check\r\n return self.Checkers(self)._evaluate(kw)\r\n.tox/py310-test/lib/python3.10/site-packages/py/_path/common.py:108: in _evaluate\r\n if py.code.getrawcode(meth).co_argcount > 1:\r\n.tox/py310-test/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:152: in __makeattr\r\n result = importobj(modpath, attrname)\r\n.tox/py310-test/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:72: in importobj\r\n module = __import__(modpath, None, None, ['__doc__'])\r\n.tox/py310-test/lib/python3.10/site-packages/py/_code/code.py:7: in <module>\r\n reprlib = py.builtin._tryimport('repr', 'reprlib')\r\n.tox/py310-test/lib/python3.10/site-packages/py/_builtin.py:144: in _tryimport\r\n __import__(name)\r\n<frozen importlib._bootstrap>:1021: in _find_and_load\r\n ???\r\n<frozen importlib._bootstrap>:996: in _find_and_load_unlocked\r\n ???\r\n<frozen importlib._bootstrap>:935: in _find_spec\r\n ???\r\n<frozen importlib._bootstrap>:908: in _find_spec_legacy\r\n ???\r\nE ImportWarning: VendorImporter.find_spec() not found; falling back to find_module()\r\n=========================== short test summary info ============================\r\nERROR - ImportWarning: VendorImporter.find_spec() not found; falling back to...\r\n!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!\r\n=============================== 1 error in 0.16s ===============================\r\nERROR: InvocationError for command .../setuptools_scm/.tox/py310-test/bin/pytest (exited with code 2)\r\n___________________________________ summary ____________________________________\r\nERROR: py310-test: commands failed\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\nI agree to follow the PSF Code of Conduct\n", "before_files": [{"content": "import sys\n\n\nclass VendorImporter:\n \"\"\"\n A PEP 302 meta path importer for finding optionally-vendored\n or otherwise naturally-installed packages from root_name.\n \"\"\"\n\n def __init__(self, root_name, vendored_names=(), vendor_pkg=None):\n self.root_name = root_name\n self.vendored_names = set(vendored_names)\n self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')\n\n @property\n def search_path(self):\n \"\"\"\n Search first the vendor package then as a natural package.\n \"\"\"\n yield self.vendor_pkg + '.'\n yield ''\n\n def find_module(self, fullname, path=None):\n \"\"\"\n Return self when fullname starts with root_name and the\n target module is one vendored through this importer.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n if root:\n return\n if not any(map(target.startswith, self.vendored_names)):\n return\n return self\n\n def load_module(self, fullname):\n \"\"\"\n Iterate over the search path to locate and load fullname.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n for prefix in self.search_path:\n try:\n extant = prefix + target\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n return mod\n except ImportError:\n pass\n else:\n raise ImportError(\n \"The '{target}' package is required; \"\n \"normally this is bundled with this package so if you get \"\n \"this warning, consult the packager of your \"\n \"distribution.\".format(**locals())\n )\n\n def create_module(self, spec):\n return self.load_module(spec.name)\n\n def exec_module(self, module):\n pass\n\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n \"\"\"\n if self not in sys.meta_path:\n sys.meta_path.append(self)\n\n\nnames = 'packaging', 'pyparsing', 'ordered_set',\nVendorImporter(__name__, names, 'setuptools._vendor').install()\n", "path": "setuptools/extern/__init__.py"}, {"content": "import sys\n\n\nclass VendorImporter:\n \"\"\"\n A PEP 302 meta path importer for finding optionally-vendored\n or otherwise naturally-installed packages from root_name.\n \"\"\"\n\n def __init__(self, root_name, vendored_names=(), vendor_pkg=None):\n self.root_name = root_name\n self.vendored_names = set(vendored_names)\n self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')\n\n @property\n def search_path(self):\n \"\"\"\n Search first the vendor package then as a natural package.\n \"\"\"\n yield self.vendor_pkg + '.'\n yield ''\n\n def find_module(self, fullname, path=None):\n \"\"\"\n Return self when fullname starts with root_name and the\n target module is one vendored through this importer.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n if root:\n return\n if not any(map(target.startswith, self.vendored_names)):\n return\n return self\n\n def load_module(self, fullname):\n \"\"\"\n Iterate over the search path to locate and load fullname.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n for prefix in self.search_path:\n try:\n extant = prefix + target\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n return mod\n except ImportError:\n pass\n else:\n raise ImportError(\n \"The '{target}' package is required; \"\n \"normally this is bundled with this package so if you get \"\n \"this warning, consult the packager of your \"\n \"distribution.\".format(**locals())\n )\n\n def create_module(self, spec):\n return self.load_module(spec.name)\n\n def exec_module(self, module):\n pass\n\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n \"\"\"\n if self not in sys.meta_path:\n sys.meta_path.append(self)\n\n\nnames = 'packaging', 'pyparsing', 'appdirs'\nVendorImporter(__name__, names).install()\n", "path": "pkg_resources/extern/__init__.py"}], "after_files": [{"content": "import importlib.util\nimport sys\n\n\nclass VendorImporter:\n \"\"\"\n A PEP 302 meta path importer for finding optionally-vendored\n or otherwise naturally-installed packages from root_name.\n \"\"\"\n\n def __init__(self, root_name, vendored_names=(), vendor_pkg=None):\n self.root_name = root_name\n self.vendored_names = set(vendored_names)\n self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')\n\n @property\n def search_path(self):\n \"\"\"\n Search first the vendor package then as a natural package.\n \"\"\"\n yield self.vendor_pkg + '.'\n yield ''\n\n def _module_matches_namespace(self, fullname):\n \"\"\"Figure out if the target module is vendored.\"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n return not root and any(map(target.startswith, self.vendored_names))\n\n def load_module(self, fullname):\n \"\"\"\n Iterate over the search path to locate and load fullname.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n for prefix in self.search_path:\n try:\n extant = prefix + target\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n return mod\n except ImportError:\n pass\n else:\n raise ImportError(\n \"The '{target}' package is required; \"\n \"normally this is bundled with this package so if you get \"\n \"this warning, consult the packager of your \"\n \"distribution.\".format(**locals())\n )\n\n def create_module(self, spec):\n return self.load_module(spec.name)\n\n def exec_module(self, module):\n pass\n\n def find_spec(self, fullname, path=None, target=None):\n \"\"\"Return a module spec for vendored names.\"\"\"\n return (\n importlib.util.spec_from_loader(fullname, self)\n if self._module_matches_namespace(fullname) else None\n )\n\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n \"\"\"\n if self not in sys.meta_path:\n sys.meta_path.append(self)\n\n\nnames = 'packaging', 'pyparsing', 'ordered_set',\nVendorImporter(__name__, names, 'setuptools._vendor').install()\n", "path": "setuptools/extern/__init__.py"}, {"content": "import importlib.util\nimport sys\n\n\nclass VendorImporter:\n \"\"\"\n A PEP 302 meta path importer for finding optionally-vendored\n or otherwise naturally-installed packages from root_name.\n \"\"\"\n\n def __init__(self, root_name, vendored_names=(), vendor_pkg=None):\n self.root_name = root_name\n self.vendored_names = set(vendored_names)\n self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')\n\n @property\n def search_path(self):\n \"\"\"\n Search first the vendor package then as a natural package.\n \"\"\"\n yield self.vendor_pkg + '.'\n yield ''\n\n def _module_matches_namespace(self, fullname):\n \"\"\"Figure out if the target module is vendored.\"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n return not root and any(map(target.startswith, self.vendored_names))\n\n def load_module(self, fullname):\n \"\"\"\n Iterate over the search path to locate and load fullname.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n for prefix in self.search_path:\n try:\n extant = prefix + target\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n return mod\n except ImportError:\n pass\n else:\n raise ImportError(\n \"The '{target}' package is required; \"\n \"normally this is bundled with this package so if you get \"\n \"this warning, consult the packager of your \"\n \"distribution.\".format(**locals())\n )\n\n def create_module(self, spec):\n return self.load_module(spec.name)\n\n def exec_module(self, module):\n pass\n\n def find_spec(self, fullname, path=None, target=None):\n \"\"\"Return a module spec for vendored names.\"\"\"\n return (\n importlib.util.spec_from_loader(fullname, self)\n if self._module_matches_namespace(fullname) else None\n )\n\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n \"\"\"\n if self not in sys.meta_path:\n sys.meta_path.append(self)\n\n\nnames = 'packaging', 'pyparsing', 'appdirs'\nVendorImporter(__name__, names).install()\n", "path": "pkg_resources/extern/__init__.py"}]}
| 3,472 | 678 |
gh_patches_debug_22553
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-10573
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stripping non-ASCII characters
**Describe the bug**
We have some Fortran files that contain non-ASCII characters in comments. They compile fine but when used with meson, I get errors in `depscan.py`:
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd6 in position 3329: invalid continuation byte
```
Instead of writing a custom script that strips out these non-ASCII characters, I wonder if there's a way to do it from within meson. Alternatively, is it possible to just bypass this check or make it skip comments?
**To Reproduce**
I'm working off of this pyOptSparse build at mdolab/pyoptsparse#300. The offending Fortran source file is unfortunately not publicly available, but any Fortran file with a non-ASCII character in a comment line should do.
**Expected behavior**
First, I think better debugging info would be helpful. It's not clear from the message which file was the issue, and I had to check with `chardetect` to figure it out.
I think the best thing would be for meson to allow these characters to exist, since the compiler has no problems with it. If that's not possible, then I was wondering if meson provides any utilities in stripping characters from files, as that seems to be fairly useful.
**system parameters**
* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? Native build
* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.) Manjaro 21.2.6
* what Python version are you using? Python 3.9
* what `meson --version` 0.63.0
* what `ninja --version` if it's a Ninja build `1.10.2.git.kitware.jobserver-1`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesonbuild/scripts/depscan.py`
Content:
```
1 # Copyright 2020 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import annotations
15
16 import json
17 import os
18 import pathlib
19 import pickle
20 import re
21 import sys
22 import typing as T
23
24 from ..backend.ninjabackend import ninja_quote
25 from ..compilers.compilers import lang_suffixes
26
27 if T.TYPE_CHECKING:
28 from ..backend.ninjabackend import TargetDependencyScannerInfo
29
30 CPP_IMPORT_RE = re.compile(r'\w*import ([a-zA-Z0-9]+);')
31 CPP_EXPORT_RE = re.compile(r'\w*export module ([a-zA-Z0-9]+);')
32
33 FORTRAN_INCLUDE_PAT = r"^\s*include\s*['\"](\w+\.\w+)['\"]"
34 FORTRAN_MODULE_PAT = r"^\s*\bmodule\b\s+(\w+)\s*(?:!+.*)*$"
35 FORTRAN_SUBMOD_PAT = r"^\s*\bsubmodule\b\s*\((\w+:?\w+)\)\s*(\w+)"
36 FORTRAN_USE_PAT = r"^\s*use,?\s*(?:non_intrinsic)?\s*(?:::)?\s*(\w+)"
37
38 FORTRAN_MODULE_RE = re.compile(FORTRAN_MODULE_PAT, re.IGNORECASE)
39 FORTRAN_SUBMOD_RE = re.compile(FORTRAN_SUBMOD_PAT, re.IGNORECASE)
40 FORTRAN_USE_RE = re.compile(FORTRAN_USE_PAT, re.IGNORECASE)
41
42 class DependencyScanner:
43 def __init__(self, pickle_file: str, outfile: str, sources: T.List[str]):
44 with open(pickle_file, 'rb') as pf:
45 self.target_data: TargetDependencyScannerInfo = pickle.load(pf)
46 self.outfile = outfile
47 self.sources = sources
48 self.provided_by: T.Dict[str, str] = {}
49 self.exports: T.Dict[str, str] = {}
50 self.needs: T.Dict[str, T.List[str]] = {}
51 self.sources_with_exports: T.List[str] = []
52
53 def scan_file(self, fname: str) -> None:
54 suffix = os.path.splitext(fname)[1][1:].lower()
55 if suffix in lang_suffixes['fortran']:
56 self.scan_fortran_file(fname)
57 elif suffix in lang_suffixes['cpp']:
58 self.scan_cpp_file(fname)
59 else:
60 sys.exit(f'Can not scan files with suffix .{suffix}.')
61
62 def scan_fortran_file(self, fname: str) -> None:
63 fpath = pathlib.Path(fname)
64 modules_in_this_file = set()
65 for line in fpath.read_text(encoding='utf-8').split('\n'):
66 import_match = FORTRAN_USE_RE.match(line)
67 export_match = FORTRAN_MODULE_RE.match(line)
68 submodule_export_match = FORTRAN_SUBMOD_RE.match(line)
69 if import_match:
70 needed = import_match.group(1).lower()
71 # In Fortran you have an using declaration also for the module
72 # you define in the same file. Prevent circular dependencies.
73 if needed not in modules_in_this_file:
74 if fname in self.needs:
75 self.needs[fname].append(needed)
76 else:
77 self.needs[fname] = [needed]
78 if export_match:
79 exported_module = export_match.group(1).lower()
80 assert exported_module not in modules_in_this_file
81 modules_in_this_file.add(exported_module)
82 if exported_module in self.provided_by:
83 raise RuntimeError(f'Multiple files provide module {exported_module}.')
84 self.sources_with_exports.append(fname)
85 self.provided_by[exported_module] = fname
86 self.exports[fname] = exported_module
87 if submodule_export_match:
88 # Store submodule "Foo" "Bar" as "foo:bar".
89 # A submodule declaration can be both an import and an export declaration:
90 #
91 # submodule (a1:a2) a3
92 # - requires [email protected]
93 # - produces [email protected]
94 parent_module_name_full = submodule_export_match.group(1).lower()
95 parent_module_name = parent_module_name_full.split(':')[0]
96 submodule_name = submodule_export_match.group(2).lower()
97 concat_name = f'{parent_module_name}:{submodule_name}'
98 self.sources_with_exports.append(fname)
99 self.provided_by[concat_name] = fname
100 self.exports[fname] = concat_name
101 # Fortran requires that the immediate parent module must be built
102 # before the current one. Thus:
103 #
104 # submodule (parent) parent <- requires parent.mod (really parent.smod, but they are created at the same time)
105 # submodule (a1:a2) a3 <- requires [email protected]
106 #
107 # a3 does not depend on the a1 parent module directly, only transitively.
108 if fname in self.needs:
109 self.needs[fname].append(parent_module_name_full)
110 else:
111 self.needs[fname] = [parent_module_name_full]
112
113 def scan_cpp_file(self, fname: str) -> None:
114 fpath = pathlib.Path(fname)
115 for line in fpath.read_text(encoding='utf-8').split('\n'):
116 import_match = CPP_IMPORT_RE.match(line)
117 export_match = CPP_EXPORT_RE.match(line)
118 if import_match:
119 needed = import_match.group(1)
120 if fname in self.needs:
121 self.needs[fname].append(needed)
122 else:
123 self.needs[fname] = [needed]
124 if export_match:
125 exported_module = export_match.group(1)
126 if exported_module in self.provided_by:
127 raise RuntimeError(f'Multiple files provide module {exported_module}.')
128 self.sources_with_exports.append(fname)
129 self.provided_by[exported_module] = fname
130 self.exports[fname] = exported_module
131
132 def objname_for(self, src: str) -> str:
133 objname = self.target_data.source2object[src]
134 assert isinstance(objname, str)
135 return objname
136
137 def module_name_for(self, src: str) -> str:
138 suffix = os.path.splitext(src)[1][1:].lower()
139 if suffix in lang_suffixes['fortran']:
140 exported = self.exports[src]
141 # Module foo:bar goes to a file name [email protected]
142 # Module Foo goes to a file name foo.mod
143 namebase = exported.replace(':', '@')
144 if ':' in exported:
145 extension = 'smod'
146 else:
147 extension = 'mod'
148 return os.path.join(self.target_data.private_dir, f'{namebase}.{extension}')
149 elif suffix in lang_suffixes['cpp']:
150 return '{}.ifc'.format(self.exports[src])
151 else:
152 raise RuntimeError('Unreachable code.')
153
154 def scan(self) -> int:
155 for s in self.sources:
156 self.scan_file(s)
157 with open(self.outfile, 'w', encoding='utf-8') as ofile:
158 ofile.write('ninja_dyndep_version = 1\n')
159 for src in self.sources:
160 objfilename = self.objname_for(src)
161 mods_and_submods_needed = []
162 module_files_generated = []
163 module_files_needed = []
164 if src in self.sources_with_exports:
165 module_files_generated.append(self.module_name_for(src))
166 if src in self.needs:
167 for modname in self.needs[src]:
168 if modname not in self.provided_by:
169 # Nothing provides this module, we assume that it
170 # comes from a dependency library somewhere and is
171 # already built by the time this compilation starts.
172 pass
173 else:
174 mods_and_submods_needed.append(modname)
175
176 for modname in mods_and_submods_needed:
177 provider_src = self.provided_by[modname]
178 provider_modfile = self.module_name_for(provider_src)
179 # Prune self-dependencies
180 if provider_src != src:
181 module_files_needed.append(provider_modfile)
182
183 quoted_objfilename = ninja_quote(objfilename, True)
184 quoted_module_files_generated = [ninja_quote(x, True) for x in module_files_generated]
185 quoted_module_files_needed = [ninja_quote(x, True) for x in module_files_needed]
186 if quoted_module_files_generated:
187 mod_gen = '| ' + ' '.join(quoted_module_files_generated)
188 else:
189 mod_gen = ''
190 if quoted_module_files_needed:
191 mod_dep = '| ' + ' '.join(quoted_module_files_needed)
192 else:
193 mod_dep = ''
194 build_line = 'build {} {}: dyndep {}'.format(quoted_objfilename,
195 mod_gen,
196 mod_dep)
197 ofile.write(build_line + '\n')
198 return 0
199
200 def run(args: T.List[str]) -> int:
201 assert len(args) == 3, 'got wrong number of arguments!'
202 pickle_file, outfile, jsonfile = args
203 with open(jsonfile, encoding='utf-8') as f:
204 sources = json.load(f)
205 scanner = DependencyScanner(pickle_file, outfile, sources)
206 return scanner.scan()
207
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mesonbuild/scripts/depscan.py b/mesonbuild/scripts/depscan.py
--- a/mesonbuild/scripts/depscan.py
+++ b/mesonbuild/scripts/depscan.py
@@ -62,7 +62,7 @@
def scan_fortran_file(self, fname: str) -> None:
fpath = pathlib.Path(fname)
modules_in_this_file = set()
- for line in fpath.read_text(encoding='utf-8').split('\n'):
+ for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\n'):
import_match = FORTRAN_USE_RE.match(line)
export_match = FORTRAN_MODULE_RE.match(line)
submodule_export_match = FORTRAN_SUBMOD_RE.match(line)
@@ -112,7 +112,7 @@
def scan_cpp_file(self, fname: str) -> None:
fpath = pathlib.Path(fname)
- for line in fpath.read_text(encoding='utf-8').split('\n'):
+ for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\n'):
import_match = CPP_IMPORT_RE.match(line)
export_match = CPP_EXPORT_RE.match(line)
if import_match:
|
{"golden_diff": "diff --git a/mesonbuild/scripts/depscan.py b/mesonbuild/scripts/depscan.py\n--- a/mesonbuild/scripts/depscan.py\n+++ b/mesonbuild/scripts/depscan.py\n@@ -62,7 +62,7 @@\n def scan_fortran_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n modules_in_this_file = set()\n- for line in fpath.read_text(encoding='utf-8').split('\\n'):\n+ for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\\n'):\n import_match = FORTRAN_USE_RE.match(line)\n export_match = FORTRAN_MODULE_RE.match(line)\n submodule_export_match = FORTRAN_SUBMOD_RE.match(line)\n@@ -112,7 +112,7 @@\n \n def scan_cpp_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n- for line in fpath.read_text(encoding='utf-8').split('\\n'):\n+ for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\\n'):\n import_match = CPP_IMPORT_RE.match(line)\n export_match = CPP_EXPORT_RE.match(line)\n if import_match:\n", "issue": "Stripping non-ASCII characters\n**Describe the bug**\r\nWe have some Fortran files that contain non-ASCII characters in comments. They compile fine but when used with meson, I get errors in `depscan.py`:\r\n```\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xd6 in position 3329: invalid continuation byte\r\n```\r\n\r\nInstead of writing a custom script that strips out these non-ASCII characters, I wonder if there's a way to do it from within meson. Alternatively, is it possible to just bypass this check or make it skip comments?\r\n\r\n\r\n\r\n**To Reproduce**\r\nI'm working off of this pyOptSparse build at mdolab/pyoptsparse#300. The offending Fortran source file is unfortunately not publicly available, but any Fortran file with a non-ASCII character in a comment line should do.\r\n\r\n**Expected behavior**\r\nFirst, I think better debugging info would be helpful. It's not clear from the message which file was the issue, and I had to check with `chardetect` to figure it out.\r\n\r\nI think the best thing would be for meson to allow these characters to exist, since the compiler has no problems with it. If that's not possible, then I was wondering if meson provides any utilities in stripping characters from files, as that seems to be fairly useful.\r\n\r\n**system parameters**\r\n* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? Native build\r\n* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.) Manjaro 21.2.6\r\n* what Python version are you using? Python 3.9\r\n* what `meson --version` 0.63.0\r\n* what `ninja --version` if it's a Ninja build `1.10.2.git.kitware.jobserver-1`\r\n\n", "before_files": [{"content": "# Copyright 2020 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import annotations\n\nimport json\nimport os\nimport pathlib\nimport pickle\nimport re\nimport sys\nimport typing as T\n\nfrom ..backend.ninjabackend import ninja_quote\nfrom ..compilers.compilers import lang_suffixes\n\nif T.TYPE_CHECKING:\n from ..backend.ninjabackend import TargetDependencyScannerInfo\n\nCPP_IMPORT_RE = re.compile(r'\\w*import ([a-zA-Z0-9]+);')\nCPP_EXPORT_RE = re.compile(r'\\w*export module ([a-zA-Z0-9]+);')\n\nFORTRAN_INCLUDE_PAT = r\"^\\s*include\\s*['\\\"](\\w+\\.\\w+)['\\\"]\"\nFORTRAN_MODULE_PAT = r\"^\\s*\\bmodule\\b\\s+(\\w+)\\s*(?:!+.*)*$\"\nFORTRAN_SUBMOD_PAT = r\"^\\s*\\bsubmodule\\b\\s*\\((\\w+:?\\w+)\\)\\s*(\\w+)\"\nFORTRAN_USE_PAT = r\"^\\s*use,?\\s*(?:non_intrinsic)?\\s*(?:::)?\\s*(\\w+)\"\n\nFORTRAN_MODULE_RE = re.compile(FORTRAN_MODULE_PAT, re.IGNORECASE)\nFORTRAN_SUBMOD_RE = re.compile(FORTRAN_SUBMOD_PAT, re.IGNORECASE)\nFORTRAN_USE_RE = re.compile(FORTRAN_USE_PAT, re.IGNORECASE)\n\nclass DependencyScanner:\n def __init__(self, pickle_file: str, outfile: str, sources: T.List[str]):\n with open(pickle_file, 'rb') as pf:\n self.target_data: TargetDependencyScannerInfo = pickle.load(pf)\n self.outfile = outfile\n self.sources = sources\n self.provided_by: T.Dict[str, str] = {}\n self.exports: T.Dict[str, str] = {}\n self.needs: T.Dict[str, T.List[str]] = {}\n self.sources_with_exports: T.List[str] = []\n\n def scan_file(self, fname: str) -> None:\n suffix = os.path.splitext(fname)[1][1:].lower()\n if suffix in lang_suffixes['fortran']:\n self.scan_fortran_file(fname)\n elif suffix in lang_suffixes['cpp']:\n self.scan_cpp_file(fname)\n else:\n sys.exit(f'Can not scan files with suffix .{suffix}.')\n\n def scan_fortran_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n modules_in_this_file = set()\n for line in fpath.read_text(encoding='utf-8').split('\\n'):\n import_match = FORTRAN_USE_RE.match(line)\n export_match = FORTRAN_MODULE_RE.match(line)\n submodule_export_match = FORTRAN_SUBMOD_RE.match(line)\n if import_match:\n needed = import_match.group(1).lower()\n # In Fortran you have an using declaration also for the module\n # you define in the same file. Prevent circular dependencies.\n if needed not in modules_in_this_file:\n if fname in self.needs:\n self.needs[fname].append(needed)\n else:\n self.needs[fname] = [needed]\n if export_match:\n exported_module = export_match.group(1).lower()\n assert exported_module not in modules_in_this_file\n modules_in_this_file.add(exported_module)\n if exported_module in self.provided_by:\n raise RuntimeError(f'Multiple files provide module {exported_module}.')\n self.sources_with_exports.append(fname)\n self.provided_by[exported_module] = fname\n self.exports[fname] = exported_module\n if submodule_export_match:\n # Store submodule \"Foo\" \"Bar\" as \"foo:bar\".\n # A submodule declaration can be both an import and an export declaration:\n #\n # submodule (a1:a2) a3\n # - requires [email protected]\n # - produces [email protected]\n parent_module_name_full = submodule_export_match.group(1).lower()\n parent_module_name = parent_module_name_full.split(':')[0]\n submodule_name = submodule_export_match.group(2).lower()\n concat_name = f'{parent_module_name}:{submodule_name}'\n self.sources_with_exports.append(fname)\n self.provided_by[concat_name] = fname\n self.exports[fname] = concat_name\n # Fortran requires that the immediate parent module must be built\n # before the current one. Thus:\n #\n # submodule (parent) parent <- requires parent.mod (really parent.smod, but they are created at the same time)\n # submodule (a1:a2) a3 <- requires [email protected]\n #\n # a3 does not depend on the a1 parent module directly, only transitively.\n if fname in self.needs:\n self.needs[fname].append(parent_module_name_full)\n else:\n self.needs[fname] = [parent_module_name_full]\n\n def scan_cpp_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n for line in fpath.read_text(encoding='utf-8').split('\\n'):\n import_match = CPP_IMPORT_RE.match(line)\n export_match = CPP_EXPORT_RE.match(line)\n if import_match:\n needed = import_match.group(1)\n if fname in self.needs:\n self.needs[fname].append(needed)\n else:\n self.needs[fname] = [needed]\n if export_match:\n exported_module = export_match.group(1)\n if exported_module in self.provided_by:\n raise RuntimeError(f'Multiple files provide module {exported_module}.')\n self.sources_with_exports.append(fname)\n self.provided_by[exported_module] = fname\n self.exports[fname] = exported_module\n\n def objname_for(self, src: str) -> str:\n objname = self.target_data.source2object[src]\n assert isinstance(objname, str)\n return objname\n\n def module_name_for(self, src: str) -> str:\n suffix = os.path.splitext(src)[1][1:].lower()\n if suffix in lang_suffixes['fortran']:\n exported = self.exports[src]\n # Module foo:bar goes to a file name [email protected]\n # Module Foo goes to a file name foo.mod\n namebase = exported.replace(':', '@')\n if ':' in exported:\n extension = 'smod'\n else:\n extension = 'mod'\n return os.path.join(self.target_data.private_dir, f'{namebase}.{extension}')\n elif suffix in lang_suffixes['cpp']:\n return '{}.ifc'.format(self.exports[src])\n else:\n raise RuntimeError('Unreachable code.')\n\n def scan(self) -> int:\n for s in self.sources:\n self.scan_file(s)\n with open(self.outfile, 'w', encoding='utf-8') as ofile:\n ofile.write('ninja_dyndep_version = 1\\n')\n for src in self.sources:\n objfilename = self.objname_for(src)\n mods_and_submods_needed = []\n module_files_generated = []\n module_files_needed = []\n if src in self.sources_with_exports:\n module_files_generated.append(self.module_name_for(src))\n if src in self.needs:\n for modname in self.needs[src]:\n if modname not in self.provided_by:\n # Nothing provides this module, we assume that it\n # comes from a dependency library somewhere and is\n # already built by the time this compilation starts.\n pass\n else:\n mods_and_submods_needed.append(modname)\n\n for modname in mods_and_submods_needed:\n provider_src = self.provided_by[modname]\n provider_modfile = self.module_name_for(provider_src)\n # Prune self-dependencies\n if provider_src != src:\n module_files_needed.append(provider_modfile)\n\n quoted_objfilename = ninja_quote(objfilename, True)\n quoted_module_files_generated = [ninja_quote(x, True) for x in module_files_generated]\n quoted_module_files_needed = [ninja_quote(x, True) for x in module_files_needed]\n if quoted_module_files_generated:\n mod_gen = '| ' + ' '.join(quoted_module_files_generated)\n else:\n mod_gen = ''\n if quoted_module_files_needed:\n mod_dep = '| ' + ' '.join(quoted_module_files_needed)\n else:\n mod_dep = ''\n build_line = 'build {} {}: dyndep {}'.format(quoted_objfilename,\n mod_gen,\n mod_dep)\n ofile.write(build_line + '\\n')\n return 0\n\ndef run(args: T.List[str]) -> int:\n assert len(args) == 3, 'got wrong number of arguments!'\n pickle_file, outfile, jsonfile = args\n with open(jsonfile, encoding='utf-8') as f:\n sources = json.load(f)\n scanner = DependencyScanner(pickle_file, outfile, sources)\n return scanner.scan()\n", "path": "mesonbuild/scripts/depscan.py"}], "after_files": [{"content": "# Copyright 2020 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import annotations\n\nimport json\nimport os\nimport pathlib\nimport pickle\nimport re\nimport sys\nimport typing as T\n\nfrom ..backend.ninjabackend import ninja_quote\nfrom ..compilers.compilers import lang_suffixes\n\nif T.TYPE_CHECKING:\n from ..backend.ninjabackend import TargetDependencyScannerInfo\n\nCPP_IMPORT_RE = re.compile(r'\\w*import ([a-zA-Z0-9]+);')\nCPP_EXPORT_RE = re.compile(r'\\w*export module ([a-zA-Z0-9]+);')\n\nFORTRAN_INCLUDE_PAT = r\"^\\s*include\\s*['\\\"](\\w+\\.\\w+)['\\\"]\"\nFORTRAN_MODULE_PAT = r\"^\\s*\\bmodule\\b\\s+(\\w+)\\s*(?:!+.*)*$\"\nFORTRAN_SUBMOD_PAT = r\"^\\s*\\bsubmodule\\b\\s*\\((\\w+:?\\w+)\\)\\s*(\\w+)\"\nFORTRAN_USE_PAT = r\"^\\s*use,?\\s*(?:non_intrinsic)?\\s*(?:::)?\\s*(\\w+)\"\n\nFORTRAN_MODULE_RE = re.compile(FORTRAN_MODULE_PAT, re.IGNORECASE)\nFORTRAN_SUBMOD_RE = re.compile(FORTRAN_SUBMOD_PAT, re.IGNORECASE)\nFORTRAN_USE_RE = re.compile(FORTRAN_USE_PAT, re.IGNORECASE)\n\nclass DependencyScanner:\n def __init__(self, pickle_file: str, outfile: str, sources: T.List[str]):\n with open(pickle_file, 'rb') as pf:\n self.target_data: TargetDependencyScannerInfo = pickle.load(pf)\n self.outfile = outfile\n self.sources = sources\n self.provided_by: T.Dict[str, str] = {}\n self.exports: T.Dict[str, str] = {}\n self.needs: T.Dict[str, T.List[str]] = {}\n self.sources_with_exports: T.List[str] = []\n\n def scan_file(self, fname: str) -> None:\n suffix = os.path.splitext(fname)[1][1:].lower()\n if suffix in lang_suffixes['fortran']:\n self.scan_fortran_file(fname)\n elif suffix in lang_suffixes['cpp']:\n self.scan_cpp_file(fname)\n else:\n sys.exit(f'Can not scan files with suffix .{suffix}.')\n\n def scan_fortran_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n modules_in_this_file = set()\n for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\\n'):\n import_match = FORTRAN_USE_RE.match(line)\n export_match = FORTRAN_MODULE_RE.match(line)\n submodule_export_match = FORTRAN_SUBMOD_RE.match(line)\n if import_match:\n needed = import_match.group(1).lower()\n # In Fortran you have an using declaration also for the module\n # you define in the same file. Prevent circular dependencies.\n if needed not in modules_in_this_file:\n if fname in self.needs:\n self.needs[fname].append(needed)\n else:\n self.needs[fname] = [needed]\n if export_match:\n exported_module = export_match.group(1).lower()\n assert exported_module not in modules_in_this_file\n modules_in_this_file.add(exported_module)\n if exported_module in self.provided_by:\n raise RuntimeError(f'Multiple files provide module {exported_module}.')\n self.sources_with_exports.append(fname)\n self.provided_by[exported_module] = fname\n self.exports[fname] = exported_module\n if submodule_export_match:\n # Store submodule \"Foo\" \"Bar\" as \"foo:bar\".\n # A submodule declaration can be both an import and an export declaration:\n #\n # submodule (a1:a2) a3\n # - requires [email protected]\n # - produces [email protected]\n parent_module_name_full = submodule_export_match.group(1).lower()\n parent_module_name = parent_module_name_full.split(':')[0]\n submodule_name = submodule_export_match.group(2).lower()\n concat_name = f'{parent_module_name}:{submodule_name}'\n self.sources_with_exports.append(fname)\n self.provided_by[concat_name] = fname\n self.exports[fname] = concat_name\n # Fortran requires that the immediate parent module must be built\n # before the current one. Thus:\n #\n # submodule (parent) parent <- requires parent.mod (really parent.smod, but they are created at the same time)\n # submodule (a1:a2) a3 <- requires [email protected]\n #\n # a3 does not depend on the a1 parent module directly, only transitively.\n if fname in self.needs:\n self.needs[fname].append(parent_module_name_full)\n else:\n self.needs[fname] = [parent_module_name_full]\n\n def scan_cpp_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\\n'):\n import_match = CPP_IMPORT_RE.match(line)\n export_match = CPP_EXPORT_RE.match(line)\n if import_match:\n needed = import_match.group(1)\n if fname in self.needs:\n self.needs[fname].append(needed)\n else:\n self.needs[fname] = [needed]\n if export_match:\n exported_module = export_match.group(1)\n if exported_module in self.provided_by:\n raise RuntimeError(f'Multiple files provide module {exported_module}.')\n self.sources_with_exports.append(fname)\n self.provided_by[exported_module] = fname\n self.exports[fname] = exported_module\n\n def objname_for(self, src: str) -> str:\n objname = self.target_data.source2object[src]\n assert isinstance(objname, str)\n return objname\n\n def module_name_for(self, src: str) -> str:\n suffix = os.path.splitext(src)[1][1:].lower()\n if suffix in lang_suffixes['fortran']:\n exported = self.exports[src]\n # Module foo:bar goes to a file name [email protected]\n # Module Foo goes to a file name foo.mod\n namebase = exported.replace(':', '@')\n if ':' in exported:\n extension = 'smod'\n else:\n extension = 'mod'\n return os.path.join(self.target_data.private_dir, f'{namebase}.{extension}')\n elif suffix in lang_suffixes['cpp']:\n return '{}.ifc'.format(self.exports[src])\n else:\n raise RuntimeError('Unreachable code.')\n\n def scan(self) -> int:\n for s in self.sources:\n self.scan_file(s)\n with open(self.outfile, 'w', encoding='utf-8') as ofile:\n ofile.write('ninja_dyndep_version = 1\\n')\n for src in self.sources:\n objfilename = self.objname_for(src)\n mods_and_submods_needed = []\n module_files_generated = []\n module_files_needed = []\n if src in self.sources_with_exports:\n module_files_generated.append(self.module_name_for(src))\n if src in self.needs:\n for modname in self.needs[src]:\n if modname not in self.provided_by:\n # Nothing provides this module, we assume that it\n # comes from a dependency library somewhere and is\n # already built by the time this compilation starts.\n pass\n else:\n mods_and_submods_needed.append(modname)\n\n for modname in mods_and_submods_needed:\n provider_src = self.provided_by[modname]\n provider_modfile = self.module_name_for(provider_src)\n # Prune self-dependencies\n if provider_src != src:\n module_files_needed.append(provider_modfile)\n\n quoted_objfilename = ninja_quote(objfilename, True)\n quoted_module_files_generated = [ninja_quote(x, True) for x in module_files_generated]\n quoted_module_files_needed = [ninja_quote(x, True) for x in module_files_needed]\n if quoted_module_files_generated:\n mod_gen = '| ' + ' '.join(quoted_module_files_generated)\n else:\n mod_gen = ''\n if quoted_module_files_needed:\n mod_dep = '| ' + ' '.join(quoted_module_files_needed)\n else:\n mod_dep = ''\n build_line = 'build {} {}: dyndep {}'.format(quoted_objfilename,\n mod_gen,\n mod_dep)\n ofile.write(build_line + '\\n')\n return 0\n\ndef run(args: T.List[str]) -> int:\n assert len(args) == 3, 'got wrong number of arguments!'\n pickle_file, outfile, jsonfile = args\n with open(jsonfile, encoding='utf-8') as f:\n sources = json.load(f)\n scanner = DependencyScanner(pickle_file, outfile, sources)\n return scanner.scan()\n", "path": "mesonbuild/scripts/depscan.py"}]}
| 3,267 | 269 |
gh_patches_debug_63370
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-130
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update requirements
While working with Markdown extensions (c.f. #74), I noticed that mkdocs' setup.py has its dependencies [pinned to specific patch versions](https://github.com/tomchristie/mkdocs/blob/master/setup.py#L18):
```
install_requires = [
'Jinja2==2.7.1',
'Markdown==2.3.1',
'PyYAML==3.10',
'watchdog==0.7.0',
'ghp-import==0.4.1'
]
```
Since these dependencies are slightly out of date (e.g., [Jinja2 is at 2.7.3](https://pypi.python.org/pypi/Jinja2) and [Markdown is at 2.4.1](https://pypi.python.org/pypi/Markdown)), it's hard to use mkdocs on a system with other software. Perhaps it's a shame that Python doesn't have npm-like dependency management, but that's the way it is—you'll get a setuptools when trying to run mkdocs error if any other package upgrades Jinja to a bugfix release.
How would the developers feel about loosening these version requirements? An idiomatic approach is to [just use `>=`](https://github.com/mitsuhiko/flask/blob/master/setup.py#L99).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10
11 name = 'mkdocs'
12 package = 'mkdocs'
13 description = 'In progress.'
14 url = 'http://www.mkdocs.org'
15 author = 'Tom Christie'
16 author_email = '[email protected]'
17 license = 'BSD'
18 install_requires = [
19 'Jinja2==2.7.1',
20 'Markdown==2.3.1',
21 'PyYAML==3.10',
22 'watchdog==0.7.0',
23 'ghp-import==0.4.1'
24 ]
25
26 long_description = """Work in progress."""
27
28
29 def get_version(package):
30 """
31 Return package version as listed in `__version__` in `init.py`.
32 """
33 init_py = open(os.path.join(package, '__init__.py')).read()
34 return re.search("^__version__ = ['\"]([^'\"]+)['\"]", init_py, re.MULTILINE).group(1)
35
36
37 def get_packages(package):
38 """
39 Return root package and all sub-packages.
40 """
41 return [dirpath
42 for dirpath, dirnames, filenames in os.walk(package)
43 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
44
45
46 def get_package_data(package):
47 """
48 Return all files under the root package, that are not in a
49 package themselves.
50 """
51 walk = [(dirpath.replace(package + os.sep, '', 1), filenames)
52 for dirpath, dirnames, filenames in os.walk(package)
53 if not os.path.exists(os.path.join(dirpath, '__init__.py'))]
54
55 filepaths = []
56 for base, filenames in walk:
57 filepaths.extend([os.path.join(base, filename)
58 for filename in filenames])
59 return {package: filepaths}
60
61
62 if sys.argv[-1] == 'publish':
63 os.system("python setup.py sdist upload")
64 args = {'version': get_version(package)}
65 print("You probably want to also tag the version now:")
66 print(" git tag -a %(version)s -m 'version %(version)s'" % args)
67 print(" git push --tags")
68 sys.exit()
69
70
71 setup(
72 name=name,
73 version=get_version(package),
74 url=url,
75 license=license,
76 description=description,
77 long_description=long_description,
78 author=author,
79 author_email=author_email,
80 packages=get_packages(package),
81 package_data=get_package_data(package),
82 install_requires=install_requires,
83 entry_points={
84 'console_scripts': [
85 'mkdocs = mkdocs.main:run_main',
86 ],
87 },
88 classifiers=[
89 'Development Status :: 5 - Production/Stable',
90 'Environment :: Console',
91 'Environment :: Web Environment',
92 'Intended Audience :: Developers',
93 'License :: OSI Approved :: BSD License',
94 'Operating System :: OS Independent',
95 'Programming Language :: Python',
96 'Programming Language :: Python :: 2',
97 'Programming Language :: Python :: 2.6',
98 'Programming Language :: Python :: 2.7',
99 'Programming Language :: Python :: 3',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Topic :: Documentation',
103 'Topic :: Text Processing',
104 ]
105 )
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,11 +16,11 @@
author_email = '[email protected]'
license = 'BSD'
install_requires = [
- 'Jinja2==2.7.1',
- 'Markdown==2.3.1',
- 'PyYAML==3.10',
- 'watchdog==0.7.0',
- 'ghp-import==0.4.1'
+ 'Jinja2>=2.7.1',
+ 'Markdown>=2.3.1,<2.5',
+ 'PyYAML>=3.10',
+ 'watchdog>=0.7.0',
+ 'ghp-import>=0.4.1'
]
long_description = """Work in progress."""
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,11 +16,11 @@\n author_email = '[email protected]'\n license = 'BSD'\n install_requires = [\n- 'Jinja2==2.7.1',\n- 'Markdown==2.3.1',\n- 'PyYAML==3.10',\n- 'watchdog==0.7.0',\n- 'ghp-import==0.4.1'\n+ 'Jinja2>=2.7.1',\n+ 'Markdown>=2.3.1,<2.5',\n+ 'PyYAML>=3.10',\n+ 'watchdog>=0.7.0',\n+ 'ghp-import>=0.4.1'\n ]\n \n long_description = \"\"\"Work in progress.\"\"\"\n", "issue": "Update requirements\nWhile working with Markdown extensions (c.f. #74), I noticed that mkdocs' setup.py has its dependencies [pinned to specific patch versions](https://github.com/tomchristie/mkdocs/blob/master/setup.py#L18):\n\n```\ninstall_requires = [\n 'Jinja2==2.7.1',\n 'Markdown==2.3.1',\n 'PyYAML==3.10',\n 'watchdog==0.7.0',\n 'ghp-import==0.4.1'\n]\n```\n\nSince these dependencies are slightly out of date (e.g., [Jinja2 is at 2.7.3](https://pypi.python.org/pypi/Jinja2) and [Markdown is at 2.4.1](https://pypi.python.org/pypi/Markdown)), it's hard to use mkdocs on a system with other software. Perhaps it's a shame that Python doesn't have npm-like dependency management, but that's the way it is\u2014you'll get a setuptools when trying to run mkdocs error if any other package upgrades Jinja to a bugfix release.\n\nHow would the developers feel about loosening these version requirements? An idiomatic approach is to [just use `>=`](https://github.com/mitsuhiko/flask/blob/master/setup.py#L99).\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'In progress.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2==2.7.1',\n 'Markdown==2.3.1',\n 'PyYAML==3.10',\n 'watchdog==0.7.0',\n 'ghp-import==0.4.1'\n]\n\nlong_description = \"\"\"Work in progress.\"\"\"\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'In progress.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2>=2.7.1',\n 'Markdown>=2.3.1,<2.5',\n 'PyYAML>=3.10',\n 'watchdog>=0.7.0',\n 'ghp-import>=0.4.1'\n]\n\nlong_description = \"\"\"Work in progress.\"\"\"\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}]}
| 1,482 | 190 |
gh_patches_debug_49
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-1712
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CI/CD: Verify .pre-commit-config.yaml use latest hooks versions
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 """cookiecutter distutils configuration."""
3 from setuptools import setup
4
5 version = "2.1.2.dev0"
6
7 with open('README.md', encoding='utf-8') as readme_file:
8 readme = readme_file.read()
9
10 requirements = [
11 'binaryornot>=0.4.4',
12 'Jinja2>=2.7,<4.0.0',
13 'click>=7.0,<9.0.0',
14 'pyyaml>=5.3.1',
15 'jinja2-time>=0.2.0',
16 'python-slugify>=4.0.0',
17 'requests>=2.23.0',
18 ]
19
20 setup(
21 name='cookiecutter',
22 version=version,
23 description=(
24 'A command-line utility that creates projects from project '
25 'templates, e.g. creating a Python package project from a '
26 'Python package project template.'
27 ),
28 long_description=readme,
29 long_description_content_type='text/markdown',
30 author='Audrey Feldroy',
31 author_email='[email protected]',
32 url='https://github.com/cookiecutter/cookiecutter',
33 packages=['cookiecutter'],
34 package_dir={'cookiecutter': 'cookiecutter'},
35 entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},
36 include_package_data=True,
37 python_requires='>=3.7',
38 install_requires=requirements,
39 license='BSD',
40 zip_safe=False,
41 classifiers=[
42 "Development Status :: 5 - Production/Stable",
43 "Environment :: Console",
44 "Intended Audience :: Developers",
45 "Natural Language :: English",
46 "License :: OSI Approved :: BSD License",
47 "Programming Language :: Python :: 3 :: Only",
48 "Programming Language :: Python :: 3",
49 "Programming Language :: Python :: 3.7",
50 "Programming Language :: Python :: 3.8",
51 "Programming Language :: Python :: 3.9",
52 "Programming Language :: Python :: 3.10",
53 "Programming Language :: Python :: Implementation :: CPython",
54 "Programming Language :: Python :: Implementation :: PyPy",
55 "Programming Language :: Python",
56 "Topic :: Software Development",
57 ],
58 keywords=[
59 "cookiecutter",
60 "Python",
61 "projects",
62 "project templates",
63 "Jinja2",
64 "skeleton",
65 "scaffolding",
66 "project directory",
67 "package",
68 "packaging",
69 ],
70 )
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
"""cookiecutter distutils configuration."""
from setuptools import setup
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,4 +1,3 @@\n-#!/usr/bin/env python\n \"\"\"cookiecutter distutils configuration.\"\"\"\n from setuptools import setup\n", "issue": "CI/CD: Verify .pre-commit-config.yaml use latest hooks versions\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"cookiecutter distutils configuration.\"\"\"\nfrom setuptools import setup\n\nversion = \"2.1.2.dev0\"\n\nwith open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n 'click>=7.0,<9.0.0',\n 'pyyaml>=5.3.1',\n 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n]\n\nsetup(\n name='cookiecutter',\n version=version,\n description=(\n 'A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'\n ),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n include_package_data=True,\n python_requires='>=3.7',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n ],\n keywords=[\n \"cookiecutter\",\n \"Python\",\n \"projects\",\n \"project templates\",\n \"Jinja2\",\n \"skeleton\",\n \"scaffolding\",\n \"project directory\",\n \"package\",\n \"packaging\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"cookiecutter distutils configuration.\"\"\"\nfrom setuptools import setup\n\nversion = \"2.1.2.dev0\"\n\nwith open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n 'click>=7.0,<9.0.0',\n 'pyyaml>=5.3.1',\n 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n]\n\nsetup(\n name='cookiecutter',\n version=version,\n description=(\n 'A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'\n ),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n include_package_data=True,\n python_requires='>=3.7',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n ],\n keywords=[\n \"cookiecutter\",\n \"Python\",\n \"projects\",\n \"project templates\",\n \"Jinja2\",\n \"skeleton\",\n \"scaffolding\",\n \"project directory\",\n \"package\",\n \"packaging\",\n ],\n)\n", "path": "setup.py"}]}
| 943 | 50 |
gh_patches_debug_35130
|
rasdani/github-patches
|
git_diff
|
mlflow__mlflow-6206
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Timeout value too small: when downloading large model files, timeout is reached
https://github.com/mlflow/mlflow/blob/d40780be361f4bd2741c2e8fcbd428c1d693edcf/mlflow/store/artifact/http_artifact_repo.py#L63
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/store/artifact/http_artifact_repo.py`
Content:
```
1 import os
2 import posixpath
3
4 from mlflow.entities import FileInfo
5 from mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path
6 from mlflow.tracking._tracking_service.utils import _get_default_host_creds
7 from mlflow.utils.file_utils import relative_path_to_artifact_path
8 from mlflow.utils.rest_utils import augmented_raise_for_status, http_request
9
10
11 class HttpArtifactRepository(ArtifactRepository):
12 """Stores artifacts in a remote artifact storage using HTTP requests"""
13
14 @property
15 def _host_creds(self):
16 return _get_default_host_creds(self.artifact_uri)
17
18 def log_artifact(self, local_file, artifact_path=None):
19 verify_artifact_path(artifact_path)
20
21 file_name = os.path.basename(local_file)
22 paths = (artifact_path, file_name) if artifact_path else (file_name,)
23 endpoint = posixpath.join("/", *paths)
24 with open(local_file, "rb") as f:
25 resp = http_request(self._host_creds, endpoint, "PUT", data=f, timeout=600)
26 augmented_raise_for_status(resp)
27
28 def log_artifacts(self, local_dir, artifact_path=None):
29 local_dir = os.path.abspath(local_dir)
30 for root, _, filenames in os.walk(local_dir):
31 if root == local_dir:
32 artifact_dir = artifact_path
33 else:
34 rel_path = os.path.relpath(root, local_dir)
35 rel_path = relative_path_to_artifact_path(rel_path)
36 artifact_dir = (
37 posixpath.join(artifact_path, rel_path) if artifact_path else rel_path
38 )
39 for f in filenames:
40 self.log_artifact(os.path.join(root, f), artifact_dir)
41
42 def list_artifacts(self, path=None):
43 endpoint = "/mlflow-artifacts/artifacts"
44 url, tail = self.artifact_uri.split(endpoint, maxsplit=1)
45 root = tail.lstrip("/")
46 params = {"path": posixpath.join(root, path) if path else root}
47 host_creds = _get_default_host_creds(url)
48 resp = http_request(host_creds, endpoint, "GET", params=params, timeout=10)
49 augmented_raise_for_status(resp)
50 file_infos = []
51 for f in resp.json().get("files", []):
52 file_info = FileInfo(
53 posixpath.join(path, f["path"]) if path else f["path"],
54 f["is_dir"],
55 int(f["file_size"]) if ("file_size" in f) else None,
56 )
57 file_infos.append(file_info)
58
59 return sorted(file_infos, key=lambda f: f.path)
60
61 def _download_file(self, remote_file_path, local_path):
62 endpoint = posixpath.join("/", remote_file_path)
63 resp = http_request(self._host_creds, endpoint, "GET", stream=True, timeout=10)
64 augmented_raise_for_status(resp)
65 with open(local_path, "wb") as f:
66 chunk_size = 1024 * 1024 # 1 MB
67 for chunk in resp.iter_content(chunk_size=chunk_size):
68 f.write(chunk)
69
70 def delete_artifacts(self, artifact_path=None):
71 endpoint = posixpath.join("/", artifact_path) if artifact_path else "/"
72 resp = http_request(self._host_creds, endpoint, "DELETE", stream=True, timeout=10)
73 augmented_raise_for_status(resp)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlflow/store/artifact/http_artifact_repo.py b/mlflow/store/artifact/http_artifact_repo.py
--- a/mlflow/store/artifact/http_artifact_repo.py
+++ b/mlflow/store/artifact/http_artifact_repo.py
@@ -22,7 +22,7 @@
paths = (artifact_path, file_name) if artifact_path else (file_name,)
endpoint = posixpath.join("/", *paths)
with open(local_file, "rb") as f:
- resp = http_request(self._host_creds, endpoint, "PUT", data=f, timeout=600)
+ resp = http_request(self._host_creds, endpoint, "PUT", data=f)
augmented_raise_for_status(resp)
def log_artifacts(self, local_dir, artifact_path=None):
@@ -45,7 +45,7 @@
root = tail.lstrip("/")
params = {"path": posixpath.join(root, path) if path else root}
host_creds = _get_default_host_creds(url)
- resp = http_request(host_creds, endpoint, "GET", params=params, timeout=10)
+ resp = http_request(host_creds, endpoint, "GET", params=params)
augmented_raise_for_status(resp)
file_infos = []
for f in resp.json().get("files", []):
@@ -60,7 +60,7 @@
def _download_file(self, remote_file_path, local_path):
endpoint = posixpath.join("/", remote_file_path)
- resp = http_request(self._host_creds, endpoint, "GET", stream=True, timeout=10)
+ resp = http_request(self._host_creds, endpoint, "GET", stream=True)
augmented_raise_for_status(resp)
with open(local_path, "wb") as f:
chunk_size = 1024 * 1024 # 1 MB
@@ -69,5 +69,5 @@
def delete_artifacts(self, artifact_path=None):
endpoint = posixpath.join("/", artifact_path) if artifact_path else "/"
- resp = http_request(self._host_creds, endpoint, "DELETE", stream=True, timeout=10)
+ resp = http_request(self._host_creds, endpoint, "DELETE", stream=True)
augmented_raise_for_status(resp)
|
{"golden_diff": "diff --git a/mlflow/store/artifact/http_artifact_repo.py b/mlflow/store/artifact/http_artifact_repo.py\n--- a/mlflow/store/artifact/http_artifact_repo.py\n+++ b/mlflow/store/artifact/http_artifact_repo.py\n@@ -22,7 +22,7 @@\n paths = (artifact_path, file_name) if artifact_path else (file_name,)\n endpoint = posixpath.join(\"/\", *paths)\n with open(local_file, \"rb\") as f:\n- resp = http_request(self._host_creds, endpoint, \"PUT\", data=f, timeout=600)\n+ resp = http_request(self._host_creds, endpoint, \"PUT\", data=f)\n augmented_raise_for_status(resp)\n \n def log_artifacts(self, local_dir, artifact_path=None):\n@@ -45,7 +45,7 @@\n root = tail.lstrip(\"/\")\n params = {\"path\": posixpath.join(root, path) if path else root}\n host_creds = _get_default_host_creds(url)\n- resp = http_request(host_creds, endpoint, \"GET\", params=params, timeout=10)\n+ resp = http_request(host_creds, endpoint, \"GET\", params=params)\n augmented_raise_for_status(resp)\n file_infos = []\n for f in resp.json().get(\"files\", []):\n@@ -60,7 +60,7 @@\n \n def _download_file(self, remote_file_path, local_path):\n endpoint = posixpath.join(\"/\", remote_file_path)\n- resp = http_request(self._host_creds, endpoint, \"GET\", stream=True, timeout=10)\n+ resp = http_request(self._host_creds, endpoint, \"GET\", stream=True)\n augmented_raise_for_status(resp)\n with open(local_path, \"wb\") as f:\n chunk_size = 1024 * 1024 # 1 MB\n@@ -69,5 +69,5 @@\n \n def delete_artifacts(self, artifact_path=None):\n endpoint = posixpath.join(\"/\", artifact_path) if artifact_path else \"/\"\n- resp = http_request(self._host_creds, endpoint, \"DELETE\", stream=True, timeout=10)\n+ resp = http_request(self._host_creds, endpoint, \"DELETE\", stream=True)\n augmented_raise_for_status(resp)\n", "issue": "Timeout value too small: when downloading large model files, timeout is reached\nhttps://github.com/mlflow/mlflow/blob/d40780be361f4bd2741c2e8fcbd428c1d693edcf/mlflow/store/artifact/http_artifact_repo.py#L63\n", "before_files": [{"content": "import os\nimport posixpath\n\nfrom mlflow.entities import FileInfo\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path\nfrom mlflow.tracking._tracking_service.utils import _get_default_host_creds\nfrom mlflow.utils.file_utils import relative_path_to_artifact_path\nfrom mlflow.utils.rest_utils import augmented_raise_for_status, http_request\n\n\nclass HttpArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts in a remote artifact storage using HTTP requests\"\"\"\n\n @property\n def _host_creds(self):\n return _get_default_host_creds(self.artifact_uri)\n\n def log_artifact(self, local_file, artifact_path=None):\n verify_artifact_path(artifact_path)\n\n file_name = os.path.basename(local_file)\n paths = (artifact_path, file_name) if artifact_path else (file_name,)\n endpoint = posixpath.join(\"/\", *paths)\n with open(local_file, \"rb\") as f:\n resp = http_request(self._host_creds, endpoint, \"PUT\", data=f, timeout=600)\n augmented_raise_for_status(resp)\n\n def log_artifacts(self, local_dir, artifact_path=None):\n local_dir = os.path.abspath(local_dir)\n for root, _, filenames in os.walk(local_dir):\n if root == local_dir:\n artifact_dir = artifact_path\n else:\n rel_path = os.path.relpath(root, local_dir)\n rel_path = relative_path_to_artifact_path(rel_path)\n artifact_dir = (\n posixpath.join(artifact_path, rel_path) if artifact_path else rel_path\n )\n for f in filenames:\n self.log_artifact(os.path.join(root, f), artifact_dir)\n\n def list_artifacts(self, path=None):\n endpoint = \"/mlflow-artifacts/artifacts\"\n url, tail = self.artifact_uri.split(endpoint, maxsplit=1)\n root = tail.lstrip(\"/\")\n params = {\"path\": posixpath.join(root, path) if path else root}\n host_creds = _get_default_host_creds(url)\n resp = http_request(host_creds, endpoint, \"GET\", params=params, timeout=10)\n augmented_raise_for_status(resp)\n file_infos = []\n for f in resp.json().get(\"files\", []):\n file_info = FileInfo(\n posixpath.join(path, f[\"path\"]) if path else f[\"path\"],\n f[\"is_dir\"],\n int(f[\"file_size\"]) if (\"file_size\" in f) else None,\n )\n file_infos.append(file_info)\n\n return sorted(file_infos, key=lambda f: f.path)\n\n def _download_file(self, remote_file_path, local_path):\n endpoint = posixpath.join(\"/\", remote_file_path)\n resp = http_request(self._host_creds, endpoint, \"GET\", stream=True, timeout=10)\n augmented_raise_for_status(resp)\n with open(local_path, \"wb\") as f:\n chunk_size = 1024 * 1024 # 1 MB\n for chunk in resp.iter_content(chunk_size=chunk_size):\n f.write(chunk)\n\n def delete_artifacts(self, artifact_path=None):\n endpoint = posixpath.join(\"/\", artifact_path) if artifact_path else \"/\"\n resp = http_request(self._host_creds, endpoint, \"DELETE\", stream=True, timeout=10)\n augmented_raise_for_status(resp)\n", "path": "mlflow/store/artifact/http_artifact_repo.py"}], "after_files": [{"content": "import os\nimport posixpath\n\nfrom mlflow.entities import FileInfo\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path\nfrom mlflow.tracking._tracking_service.utils import _get_default_host_creds\nfrom mlflow.utils.file_utils import relative_path_to_artifact_path\nfrom mlflow.utils.rest_utils import augmented_raise_for_status, http_request\n\n\nclass HttpArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts in a remote artifact storage using HTTP requests\"\"\"\n\n @property\n def _host_creds(self):\n return _get_default_host_creds(self.artifact_uri)\n\n def log_artifact(self, local_file, artifact_path=None):\n verify_artifact_path(artifact_path)\n\n file_name = os.path.basename(local_file)\n paths = (artifact_path, file_name) if artifact_path else (file_name,)\n endpoint = posixpath.join(\"/\", *paths)\n with open(local_file, \"rb\") as f:\n resp = http_request(self._host_creds, endpoint, \"PUT\", data=f)\n augmented_raise_for_status(resp)\n\n def log_artifacts(self, local_dir, artifact_path=None):\n local_dir = os.path.abspath(local_dir)\n for root, _, filenames in os.walk(local_dir):\n if root == local_dir:\n artifact_dir = artifact_path\n else:\n rel_path = os.path.relpath(root, local_dir)\n rel_path = relative_path_to_artifact_path(rel_path)\n artifact_dir = (\n posixpath.join(artifact_path, rel_path) if artifact_path else rel_path\n )\n for f in filenames:\n self.log_artifact(os.path.join(root, f), artifact_dir)\n\n def list_artifacts(self, path=None):\n endpoint = \"/mlflow-artifacts/artifacts\"\n url, tail = self.artifact_uri.split(endpoint, maxsplit=1)\n root = tail.lstrip(\"/\")\n params = {\"path\": posixpath.join(root, path) if path else root}\n host_creds = _get_default_host_creds(url)\n resp = http_request(host_creds, endpoint, \"GET\", params=params)\n augmented_raise_for_status(resp)\n file_infos = []\n for f in resp.json().get(\"files\", []):\n file_info = FileInfo(\n posixpath.join(path, f[\"path\"]) if path else f[\"path\"],\n f[\"is_dir\"],\n int(f[\"file_size\"]) if (\"file_size\" in f) else None,\n )\n file_infos.append(file_info)\n\n return sorted(file_infos, key=lambda f: f.path)\n\n def _download_file(self, remote_file_path, local_path):\n endpoint = posixpath.join(\"/\", remote_file_path)\n resp = http_request(self._host_creds, endpoint, \"GET\", stream=True)\n augmented_raise_for_status(resp)\n with open(local_path, \"wb\") as f:\n chunk_size = 1024 * 1024 # 1 MB\n for chunk in resp.iter_content(chunk_size=chunk_size):\n f.write(chunk)\n\n def delete_artifacts(self, artifact_path=None):\n endpoint = posixpath.join(\"/\", artifact_path) if artifact_path else \"/\"\n resp = http_request(self._host_creds, endpoint, \"DELETE\", stream=True)\n augmented_raise_for_status(resp)\n", "path": "mlflow/store/artifact/http_artifact_repo.py"}]}
| 1,197 | 502 |
gh_patches_debug_15814
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1977
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in pizza admin related to title_en
Sentry Issue: [CONCREXIT-9W](https://sentry.io/organizations/thalia/issues/2692657224/?referrer=github_integration)
```
FieldError: Related Field got invalid lookup: title_en
(16 additional frame(s) were not displayed)
...
File "django/db/models/sql/query.py", line 1393, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "django/db/models/sql/query.py", line 1412, in _add_q
child_clause, needed_inner = self.build_filter(
File "django/db/models/sql/query.py", line 1265, in build_filter
return self._add_q(
File "django/db/models/sql/query.py", line 1412, in _add_q
child_clause, needed_inner = self.build_filter(
File "django/db/models/sql/query.py", line 1339, in build_filter
raise FieldError('Related Field got invalid lookup: {}'.format(lookups[0]))
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/pizzas/admin.py`
Content:
```
1 """Registers admin interfaces for the pizzas module."""
2 from django.conf import settings
3 from django.contrib import admin
4 from django.core.exceptions import PermissionDenied
5 from django.forms import Field
6 from django.urls import reverse, path
7 from django.utils.html import format_html
8 from django.utils.translation import gettext_lazy as _
9
10 from events import services
11 from events.services import is_organiser
12 from payments.widgets import PaymentWidget
13 from pizzas import admin_views
14 from utils.admin import DoNextModelAdmin
15 from .models import FoodOrder, FoodEvent, Product
16
17
18 @admin.register(Product)
19 class ProductAdmin(admin.ModelAdmin):
20 """Manage the products."""
21
22 list_display = ("name", "price", "available")
23 list_filter = ("available", "restricted")
24 search_fields = ("name",)
25
26
27 @admin.register(FoodEvent)
28 class FoodEventAdmin(admin.ModelAdmin):
29 """Manage the pizza events."""
30
31 list_display = ("title", "start", "end", "notification_enabled", "orders")
32 date_hierarchy = "start"
33 exclude = ("end_reminder",)
34 search_fields = [f"event__title_{l[0]}" for l in settings.LANGUAGES]
35 autocomplete_fields = ("event",)
36
37 def notification_enabled(self, obj):
38 return obj.send_notification
39
40 notification_enabled.short_description = _("reminder")
41 notification_enabled.admin_order_field = "send_notification"
42 notification_enabled.boolean = True
43
44 def has_change_permission(self, request, obj=None):
45 """Only allow access to the change form if the user is an organiser."""
46 if obj is not None and not services.is_organiser(request.member, obj.event):
47 return False
48 return super().has_change_permission(request, obj)
49
50 def has_delete_permission(self, request, obj=None):
51 """Only allow access to delete if the user is an organiser."""
52 if obj is not None and not services.is_organiser(request.member, obj.event):
53 return False
54 return super().has_delete_permission(request, obj)
55
56 def orders(self, obj):
57 url = reverse("admin:pizzas_foodevent_details", kwargs={"pk": obj.pk})
58 return format_html('<a href="{url}">{text}</a>', url=url, text=_("Orders"))
59
60 def get_urls(self):
61 urls = super().get_urls()
62 custom_urls = [
63 path(
64 "<int:pk>/details/",
65 self.admin_site.admin_view(
66 admin_views.PizzaOrderDetails.as_view(admin=self)
67 ),
68 name="pizzas_foodevent_details",
69 ),
70 path(
71 "<int:pk>/overview/",
72 self.admin_site.admin_view(
73 admin_views.PizzaOrderSummary.as_view(admin=self)
74 ),
75 name="pizzas_foodevent_overview",
76 ),
77 ]
78 return custom_urls + urls
79
80
81 @admin.register(FoodOrder)
82 class FoodOrderAdmin(DoNextModelAdmin):
83 """Manage the orders."""
84
85 list_display = (
86 "food_event",
87 "member_first_name",
88 "member_last_name",
89 "product",
90 "payment",
91 )
92
93 fields = (
94 "food_event",
95 "member",
96 "name",
97 "product",
98 "payment",
99 )
100
101 def formfield_for_dbfield(self, db_field, request, obj=None, **kwargs):
102 """Payment field widget."""
103 field = super().formfield_for_dbfield(db_field, request, **kwargs)
104 if db_field.name == "payment":
105 return Field(
106 widget=PaymentWidget(obj=obj), initial=field.initial, required=False,
107 )
108 return field
109
110 def save_model(self, request, obj, form, change):
111 """You can only save the orders if you have permission."""
112 if not is_organiser(request.member, obj.food_event.event):
113 raise PermissionDenied
114 return super().save_model(request, obj, form, change)
115
116 def has_view_permission(self, request, order=None):
117 """Only give view permission if the user is an organiser."""
118 if order is not None and not is_organiser(
119 request.member, order.food_event.event
120 ):
121 return False
122 return super().has_view_permission(request, order)
123
124 def has_change_permission(self, request, order=None):
125 """Only give change permission if the user is an organiser."""
126 if order is not None and not is_organiser(
127 request.member, order.food_event.event
128 ):
129 return False
130 return super().has_change_permission(request, order)
131
132 def has_delete_permission(self, request, order=None):
133 """Only give delete permission if the user is an organiser."""
134 if order is not None and not is_organiser(
135 request.member, order.food_event.event
136 ):
137 return False
138 return super().has_delete_permission(request, order)
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/pizzas/admin.py b/website/pizzas/admin.py
--- a/website/pizzas/admin.py
+++ b/website/pizzas/admin.py
@@ -1,5 +1,4 @@
"""Registers admin interfaces for the pizzas module."""
-from django.conf import settings
from django.contrib import admin
from django.core.exceptions import PermissionDenied
from django.forms import Field
@@ -31,7 +30,7 @@
list_display = ("title", "start", "end", "notification_enabled", "orders")
date_hierarchy = "start"
exclude = ("end_reminder",)
- search_fields = [f"event__title_{l[0]}" for l in settings.LANGUAGES]
+ search_fields = ("event__title",)
autocomplete_fields = ("event",)
def notification_enabled(self, obj):
|
{"golden_diff": "diff --git a/website/pizzas/admin.py b/website/pizzas/admin.py\n--- a/website/pizzas/admin.py\n+++ b/website/pizzas/admin.py\n@@ -1,5 +1,4 @@\n \"\"\"Registers admin interfaces for the pizzas module.\"\"\"\n-from django.conf import settings\n from django.contrib import admin\n from django.core.exceptions import PermissionDenied\n from django.forms import Field\n@@ -31,7 +30,7 @@\n list_display = (\"title\", \"start\", \"end\", \"notification_enabled\", \"orders\")\n date_hierarchy = \"start\"\n exclude = (\"end_reminder\",)\n- search_fields = [f\"event__title_{l[0]}\" for l in settings.LANGUAGES]\n+ search_fields = (\"event__title\",)\n autocomplete_fields = (\"event\",)\n \n def notification_enabled(self, obj):\n", "issue": "Error in pizza admin related to title_en\nSentry Issue: [CONCREXIT-9W](https://sentry.io/organizations/thalia/issues/2692657224/?referrer=github_integration)\n\n```\nFieldError: Related Field got invalid lookup: title_en\n(16 additional frame(s) were not displayed)\n...\n File \"django/db/models/sql/query.py\", line 1393, in add_q\n clause, _ = self._add_q(q_object, self.used_aliases)\n File \"django/db/models/sql/query.py\", line 1412, in _add_q\n child_clause, needed_inner = self.build_filter(\n File \"django/db/models/sql/query.py\", line 1265, in build_filter\n return self._add_q(\n File \"django/db/models/sql/query.py\", line 1412, in _add_q\n child_clause, needed_inner = self.build_filter(\n File \"django/db/models/sql/query.py\", line 1339, in build_filter\n raise FieldError('Related Field got invalid lookup: {}'.format(lookups[0]))\n```\n", "before_files": [{"content": "\"\"\"Registers admin interfaces for the pizzas module.\"\"\"\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.core.exceptions import PermissionDenied\nfrom django.forms import Field\nfrom django.urls import reverse, path\nfrom django.utils.html import format_html\nfrom django.utils.translation import gettext_lazy as _\n\nfrom events import services\nfrom events.services import is_organiser\nfrom payments.widgets import PaymentWidget\nfrom pizzas import admin_views\nfrom utils.admin import DoNextModelAdmin\nfrom .models import FoodOrder, FoodEvent, Product\n\n\[email protected](Product)\nclass ProductAdmin(admin.ModelAdmin):\n \"\"\"Manage the products.\"\"\"\n\n list_display = (\"name\", \"price\", \"available\")\n list_filter = (\"available\", \"restricted\")\n search_fields = (\"name\",)\n\n\[email protected](FoodEvent)\nclass FoodEventAdmin(admin.ModelAdmin):\n \"\"\"Manage the pizza events.\"\"\"\n\n list_display = (\"title\", \"start\", \"end\", \"notification_enabled\", \"orders\")\n date_hierarchy = \"start\"\n exclude = (\"end_reminder\",)\n search_fields = [f\"event__title_{l[0]}\" for l in settings.LANGUAGES]\n autocomplete_fields = (\"event\",)\n\n def notification_enabled(self, obj):\n return obj.send_notification\n\n notification_enabled.short_description = _(\"reminder\")\n notification_enabled.admin_order_field = \"send_notification\"\n notification_enabled.boolean = True\n\n def has_change_permission(self, request, obj=None):\n \"\"\"Only allow access to the change form if the user is an organiser.\"\"\"\n if obj is not None and not services.is_organiser(request.member, obj.event):\n return False\n return super().has_change_permission(request, obj)\n\n def has_delete_permission(self, request, obj=None):\n \"\"\"Only allow access to delete if the user is an organiser.\"\"\"\n if obj is not None and not services.is_organiser(request.member, obj.event):\n return False\n return super().has_delete_permission(request, obj)\n\n def orders(self, obj):\n url = reverse(\"admin:pizzas_foodevent_details\", kwargs={\"pk\": obj.pk})\n return format_html('<a href=\"{url}\">{text}</a>', url=url, text=_(\"Orders\"))\n\n def get_urls(self):\n urls = super().get_urls()\n custom_urls = [\n path(\n \"<int:pk>/details/\",\n self.admin_site.admin_view(\n admin_views.PizzaOrderDetails.as_view(admin=self)\n ),\n name=\"pizzas_foodevent_details\",\n ),\n path(\n \"<int:pk>/overview/\",\n self.admin_site.admin_view(\n admin_views.PizzaOrderSummary.as_view(admin=self)\n ),\n name=\"pizzas_foodevent_overview\",\n ),\n ]\n return custom_urls + urls\n\n\[email protected](FoodOrder)\nclass FoodOrderAdmin(DoNextModelAdmin):\n \"\"\"Manage the orders.\"\"\"\n\n list_display = (\n \"food_event\",\n \"member_first_name\",\n \"member_last_name\",\n \"product\",\n \"payment\",\n )\n\n fields = (\n \"food_event\",\n \"member\",\n \"name\",\n \"product\",\n \"payment\",\n )\n\n def formfield_for_dbfield(self, db_field, request, obj=None, **kwargs):\n \"\"\"Payment field widget.\"\"\"\n field = super().formfield_for_dbfield(db_field, request, **kwargs)\n if db_field.name == \"payment\":\n return Field(\n widget=PaymentWidget(obj=obj), initial=field.initial, required=False,\n )\n return field\n\n def save_model(self, request, obj, form, change):\n \"\"\"You can only save the orders if you have permission.\"\"\"\n if not is_organiser(request.member, obj.food_event.event):\n raise PermissionDenied\n return super().save_model(request, obj, form, change)\n\n def has_view_permission(self, request, order=None):\n \"\"\"Only give view permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_view_permission(request, order)\n\n def has_change_permission(self, request, order=None):\n \"\"\"Only give change permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_change_permission(request, order)\n\n def has_delete_permission(self, request, order=None):\n \"\"\"Only give delete permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_delete_permission(request, order)\n", "path": "website/pizzas/admin.py"}], "after_files": [{"content": "\"\"\"Registers admin interfaces for the pizzas module.\"\"\"\nfrom django.contrib import admin\nfrom django.core.exceptions import PermissionDenied\nfrom django.forms import Field\nfrom django.urls import reverse, path\nfrom django.utils.html import format_html\nfrom django.utils.translation import gettext_lazy as _\n\nfrom events import services\nfrom events.services import is_organiser\nfrom payments.widgets import PaymentWidget\nfrom pizzas import admin_views\nfrom utils.admin import DoNextModelAdmin\nfrom .models import FoodOrder, FoodEvent, Product\n\n\[email protected](Product)\nclass ProductAdmin(admin.ModelAdmin):\n \"\"\"Manage the products.\"\"\"\n\n list_display = (\"name\", \"price\", \"available\")\n list_filter = (\"available\", \"restricted\")\n search_fields = (\"name\",)\n\n\[email protected](FoodEvent)\nclass FoodEventAdmin(admin.ModelAdmin):\n \"\"\"Manage the pizza events.\"\"\"\n\n list_display = (\"title\", \"start\", \"end\", \"notification_enabled\", \"orders\")\n date_hierarchy = \"start\"\n exclude = (\"end_reminder\",)\n search_fields = (\"event__title\",)\n autocomplete_fields = (\"event\",)\n\n def notification_enabled(self, obj):\n return obj.send_notification\n\n notification_enabled.short_description = _(\"reminder\")\n notification_enabled.admin_order_field = \"send_notification\"\n notification_enabled.boolean = True\n\n def has_change_permission(self, request, obj=None):\n \"\"\"Only allow access to the change form if the user is an organiser.\"\"\"\n if obj is not None and not services.is_organiser(request.member, obj.event):\n return False\n return super().has_change_permission(request, obj)\n\n def has_delete_permission(self, request, obj=None):\n \"\"\"Only allow access to delete if the user is an organiser.\"\"\"\n if obj is not None and not services.is_organiser(request.member, obj.event):\n return False\n return super().has_delete_permission(request, obj)\n\n def orders(self, obj):\n url = reverse(\"admin:pizzas_foodevent_details\", kwargs={\"pk\": obj.pk})\n return format_html('<a href=\"{url}\">{text}</a>', url=url, text=_(\"Orders\"))\n\n def get_urls(self):\n urls = super().get_urls()\n custom_urls = [\n path(\n \"<int:pk>/details/\",\n self.admin_site.admin_view(\n admin_views.PizzaOrderDetails.as_view(admin=self)\n ),\n name=\"pizzas_foodevent_details\",\n ),\n path(\n \"<int:pk>/overview/\",\n self.admin_site.admin_view(\n admin_views.PizzaOrderSummary.as_view(admin=self)\n ),\n name=\"pizzas_foodevent_overview\",\n ),\n ]\n return custom_urls + urls\n\n\[email protected](FoodOrder)\nclass FoodOrderAdmin(DoNextModelAdmin):\n \"\"\"Manage the orders.\"\"\"\n\n list_display = (\n \"food_event\",\n \"member_first_name\",\n \"member_last_name\",\n \"product\",\n \"payment\",\n )\n\n fields = (\n \"food_event\",\n \"member\",\n \"name\",\n \"product\",\n \"payment\",\n )\n\n def formfield_for_dbfield(self, db_field, request, obj=None, **kwargs):\n \"\"\"Payment field widget.\"\"\"\n field = super().formfield_for_dbfield(db_field, request, **kwargs)\n if db_field.name == \"payment\":\n return Field(\n widget=PaymentWidget(obj=obj), initial=field.initial, required=False,\n )\n return field\n\n def save_model(self, request, obj, form, change):\n \"\"\"You can only save the orders if you have permission.\"\"\"\n if not is_organiser(request.member, obj.food_event.event):\n raise PermissionDenied\n return super().save_model(request, obj, form, change)\n\n def has_view_permission(self, request, order=None):\n \"\"\"Only give view permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_view_permission(request, order)\n\n def has_change_permission(self, request, order=None):\n \"\"\"Only give change permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_change_permission(request, order)\n\n def has_delete_permission(self, request, order=None):\n \"\"\"Only give delete permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_delete_permission(request, order)\n", "path": "website/pizzas/admin.py"}]}
| 1,832 | 182 |
gh_patches_debug_6872
|
rasdani/github-patches
|
git_diff
|
PennyLaneAI__pennylane-3386
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] qml.matrix(op) and op.matrix() are different for Barrier and WireCut
### Expected behavior
I expected them both to pass, or both to fail.
### Actual behavior
`qml.matrix(qml.Barrier(0))` passed, `qml.Barrier(0).matrix()` failed. Same for `qml.WireCut`.
### Additional information
Attached source code is for Barrier, but the same thing happens with WireCut.
### Source code
```shell
>>> import pennylane as qml
>>> op = qml.Barrier(0)
>>> qml.matrix(op)
array([[1.]])
>>> op.matrix()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/operation.py", line 1405, in matrix
canonical_matrix = self.compute_matrix(*self.parameters, **self.hyperparameters)
File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/operation.py", line 446, in compute_matrix
raise MatrixUndefinedError
pennylane.operation.MatrixUndefinedError
```
### Tracebacks
_No response_
### System information
```shell
Name: PennyLane
Version: 0.27.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /Users/matthews/.pyenv/versions/3.9.13/envs/pennylane/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, retworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning
Platform info: macOS-12.6.1-arm64-arm-64bit
Python version: 3.9.13
Numpy version: 1.23.2
Scipy version: 1.9.0
Installed devices:
- default.gaussian (PennyLane-0.27.0)
- default.mixed (PennyLane-0.27.0)
- default.qubit (PennyLane-0.27.0)
- default.qubit.autograd (PennyLane-0.27.0)
- default.qubit.jax (PennyLane-0.27.0)
- default.qubit.tf (PennyLane-0.27.0)
- default.qubit.torch (PennyLane-0.27.0)
- default.qutrit (PennyLane-0.27.0)
- null.qubit (PennyLane-0.27.0)
- lightning.qubit (PennyLane-Lightning-0.26.1)
```
### Existing GitHub issues
- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pennylane/ops/functions/matrix.py`
Content:
```
1 # Copyright 2018-2021 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 This module contains the qml.matrix function.
16 """
17 # pylint: disable=protected-access
18 import pennylane as qml
19
20
21 @qml.op_transform
22 def matrix(op, *, wire_order=None):
23 r"""The matrix representation of an operation or quantum circuit.
24
25 Args:
26 op (.Operator, pennylane.QNode, .QuantumTape, or Callable): An operator, quantum node, tape,
27 or function that applies quantum operations.
28 wire_order (Sequence[Any], optional): Order of the wires in the quantum circuit.
29 Defaults to the order in which the wires appear in the quantum function.
30
31 Returns:
32 tensor_like or function: Function which accepts the same arguments as the QNode or quantum
33 function. When called, this function will return the unitary matrix in the appropriate
34 autodiff framework (Autograd, TensorFlow, PyTorch, JAX) given its parameters.
35
36 **Example**
37
38 Given an instantiated operator, ``qml.matrix`` returns the matrix representation:
39
40 >>> op = qml.RX(0.54, wires=0)
41 >>> qml.matrix(op)
42 [[0.9637709+0.j 0. -0.26673144j]
43 [0. -0.26673144j 0.9637709+0.j ]]
44
45 It can also be used in a functional form:
46
47 >>> x = torch.tensor(0.6, requires_grad=True)
48 >>> matrix_fn = qml.matrix(qml.RX)
49 >>> matrix_fn(x, wires=0)
50 tensor([[0.9553+0.0000j, 0.0000-0.2955j],
51 [0.0000-0.2955j, 0.9553+0.0000j]], grad_fn=<AddBackward0>)
52
53 In its functional form, it is fully differentiable with respect to gate arguments:
54
55 >>> loss = torch.real(torch.trace(matrix_fn(x, wires=0)))
56 >>> loss.backward()
57 >>> x.grad
58 tensor(-0.5910)
59
60 This operator transform can also be applied to QNodes, tapes, and quantum functions
61 that contain multiple operations; see Usage Details below for more details.
62
63 .. details::
64 :title: Usage Details
65
66 ``qml.matrix`` can also be used with QNodes, tapes, or quantum functions that
67 contain multiple operations.
68
69 Consider the following quantum function:
70
71 .. code-block:: python3
72
73 def circuit(theta):
74 qml.RX(theta, wires=1)
75 qml.PauliZ(wires=0)
76
77 We can use ``qml.matrix`` to generate a new function that returns the unitary matrix
78 corresponding to the function ``circuit``:
79
80 >>> matrix_fn = qml.matrix(circuit)
81 >>> theta = np.pi / 4
82 >>> matrix_fn(theta)
83 array([[ 0.92387953+0.j, 0.+0.j , 0.-0.38268343j, 0.+0.j],
84 [ 0.+0.j, -0.92387953+0.j, 0.+0.j, 0. +0.38268343j],
85 [ 0. -0.38268343j, 0.+0.j, 0.92387953+0.j, 0.+0.j],
86 [ 0.+0.j, 0.+0.38268343j, 0.+0.j, -0.92387953+0.j]])
87
88 Note that since ``wire_order`` was not specified, the default order ``[1, 0]`` for ``circuit``
89 was used, and the unitary matrix corresponds to the operation :math:`Z\otimes R_X(\theta)`. To
90 obtain the matrix for :math:`R_X(\theta)\otimes Z`, specify ``wire_order=[0, 1]`` in the
91 function call:
92
93 >>> matrix = qml.matrix(circuit, wire_order=[0, 1])
94
95 You can also get the unitary matrix for operations on a subspace of a larger Hilbert space. For
96 example, with the same function ``circuit`` and ``wire_order=["a", 0, "b", 1]`` you obtain the
97 :math:`16\times 16` matrix for the operation :math:`I\otimes Z\otimes I\otimes R_X(\theta)`.
98
99 This unitary matrix can also be used in differentiable calculations. For example, consider the
100 following cost function:
101
102 .. code-block:: python
103
104 def circuit(theta):
105 qml.RX(theta, wires=1) qml.PauliZ(wires=0)
106 qml.CNOT(wires=[0, 1])
107
108 def cost(theta):
109 matrix = qml.matrix(circuit)(theta)
110 return np.real(np.trace(matrix))
111
112 Since this cost function returns a real scalar as a function of ``theta``, we can differentiate
113 it:
114
115 >>> theta = np.array(0.3, requires_grad=True)
116 >>> cost(theta)
117 1.9775421558720845
118 >>> qml.grad(cost)(theta)
119 -0.14943813247359922
120 """
121 if isinstance(op, qml.operation.Tensor) and wire_order is not None:
122 op = 1.0 * op # convert to a Hamiltonian
123
124 if isinstance(op, qml.Hamiltonian):
125 return qml.utils.sparse_hamiltonian(op, wires=wire_order).toarray()
126
127 return op.matrix(wire_order=wire_order)
128
129
130 @matrix.tape_transform
131 def _matrix(tape, wire_order=None):
132 """Defines how matrix works if applied to a tape containing multiple operations."""
133 params = tape.get_parameters(trainable_only=False)
134 interface = qml.math.get_interface(*params)
135
136 wire_order = wire_order or tape.wires
137
138 # initialize the unitary matrix
139 result = qml.math.eye(2 ** len(wire_order), like=interface)
140
141 for op in tape.operations:
142 U = matrix(op, wire_order=wire_order)
143 # Coerce the matrices U and result and use matrix multiplication. Broadcasted axes
144 # are handled correctly automatically by ``matmul`` (See e.g. NumPy documentation)
145 result = qml.math.matmul(*qml.math.coerce([U, result], like=interface), like=interface)
146
147 return result
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pennylane/ops/functions/matrix.py b/pennylane/ops/functions/matrix.py
--- a/pennylane/ops/functions/matrix.py
+++ b/pennylane/ops/functions/matrix.py
@@ -130,6 +130,8 @@
@matrix.tape_transform
def _matrix(tape, wire_order=None):
"""Defines how matrix works if applied to a tape containing multiple operations."""
+ if not tape.wires:
+ raise qml.operation.MatrixUndefinedError
params = tape.get_parameters(trainable_only=False)
interface = qml.math.get_interface(*params)
|
{"golden_diff": "diff --git a/pennylane/ops/functions/matrix.py b/pennylane/ops/functions/matrix.py\n--- a/pennylane/ops/functions/matrix.py\n+++ b/pennylane/ops/functions/matrix.py\n@@ -130,6 +130,8 @@\n @matrix.tape_transform\r\n def _matrix(tape, wire_order=None):\r\n \"\"\"Defines how matrix works if applied to a tape containing multiple operations.\"\"\"\r\n+ if not tape.wires:\r\n+ raise qml.operation.MatrixUndefinedError\r\n params = tape.get_parameters(trainable_only=False)\r\n interface = qml.math.get_interface(*params)\n", "issue": "[BUG] qml.matrix(op) and op.matrix() are different for Barrier and WireCut\n### Expected behavior\n\nI expected them both to pass, or both to fail.\n\n### Actual behavior\n\n`qml.matrix(qml.Barrier(0))` passed, `qml.Barrier(0).matrix()` failed. Same for `qml.WireCut`.\n\n### Additional information\n\nAttached source code is for Barrier, but the same thing happens with WireCut.\n\n### Source code\n\n```shell\n>>> import pennylane as qml\r\n>>> op = qml.Barrier(0)\r\n>>> qml.matrix(op)\r\narray([[1.]])\r\n>>> op.matrix()\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/operation.py\", line 1405, in matrix\r\n canonical_matrix = self.compute_matrix(*self.parameters, **self.hyperparameters)\r\n File \"/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/operation.py\", line 446, in compute_matrix\r\n raise MatrixUndefinedError\r\npennylane.operation.MatrixUndefinedError\n```\n\n\n### Tracebacks\n\n_No response_\n\n### System information\n\n```shell\nName: PennyLane\r\nVersion: 0.27.0\r\nSummary: PennyLane is a Python quantum machine learning library by Xanadu Inc.\r\nHome-page: https://github.com/XanaduAI/pennylane\r\nAuthor: \r\nAuthor-email: \r\nLicense: Apache License 2.0\r\nLocation: /Users/matthews/.pyenv/versions/3.9.13/envs/pennylane/lib/python3.9/site-packages\r\nRequires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, retworkx, scipy, semantic-version, toml\r\nRequired-by: PennyLane-Lightning\r\n\r\nPlatform info: macOS-12.6.1-arm64-arm-64bit\r\nPython version: 3.9.13\r\nNumpy version: 1.23.2\r\nScipy version: 1.9.0\r\nInstalled devices:\r\n- default.gaussian (PennyLane-0.27.0)\r\n- default.mixed (PennyLane-0.27.0)\r\n- default.qubit (PennyLane-0.27.0)\r\n- default.qubit.autograd (PennyLane-0.27.0)\r\n- default.qubit.jax (PennyLane-0.27.0)\r\n- default.qubit.tf (PennyLane-0.27.0)\r\n- default.qubit.torch (PennyLane-0.27.0)\r\n- default.qutrit (PennyLane-0.27.0)\r\n- null.qubit (PennyLane-0.27.0)\r\n- lightning.qubit (PennyLane-Lightning-0.26.1)\n```\n\n\n### Existing GitHub issues\n\n- [X] I have searched existing GitHub issues to make sure the issue does not already exist.\n", "before_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\r\n\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"\r\nThis module contains the qml.matrix function.\r\n\"\"\"\r\n# pylint: disable=protected-access\r\nimport pennylane as qml\r\n\r\n\r\[email protected]_transform\r\ndef matrix(op, *, wire_order=None):\r\n r\"\"\"The matrix representation of an operation or quantum circuit.\r\n\r\n Args:\r\n op (.Operator, pennylane.QNode, .QuantumTape, or Callable): An operator, quantum node, tape,\r\n or function that applies quantum operations.\r\n wire_order (Sequence[Any], optional): Order of the wires in the quantum circuit.\r\n Defaults to the order in which the wires appear in the quantum function.\r\n\r\n Returns:\r\n tensor_like or function: Function which accepts the same arguments as the QNode or quantum\r\n function. When called, this function will return the unitary matrix in the appropriate\r\n autodiff framework (Autograd, TensorFlow, PyTorch, JAX) given its parameters.\r\n\r\n **Example**\r\n\r\n Given an instantiated operator, ``qml.matrix`` returns the matrix representation:\r\n\r\n >>> op = qml.RX(0.54, wires=0)\r\n >>> qml.matrix(op)\r\n [[0.9637709+0.j 0. -0.26673144j]\r\n [0. -0.26673144j 0.9637709+0.j ]]\r\n\r\n It can also be used in a functional form:\r\n\r\n >>> x = torch.tensor(0.6, requires_grad=True)\r\n >>> matrix_fn = qml.matrix(qml.RX)\r\n >>> matrix_fn(x, wires=0)\r\n tensor([[0.9553+0.0000j, 0.0000-0.2955j],\r\n [0.0000-0.2955j, 0.9553+0.0000j]], grad_fn=<AddBackward0>)\r\n\r\n In its functional form, it is fully differentiable with respect to gate arguments:\r\n\r\n >>> loss = torch.real(torch.trace(matrix_fn(x, wires=0)))\r\n >>> loss.backward()\r\n >>> x.grad\r\n tensor(-0.5910)\r\n\r\n This operator transform can also be applied to QNodes, tapes, and quantum functions\r\n that contain multiple operations; see Usage Details below for more details.\r\n\r\n .. details::\r\n :title: Usage Details\r\n\r\n ``qml.matrix`` can also be used with QNodes, tapes, or quantum functions that\r\n contain multiple operations.\r\n\r\n Consider the following quantum function:\r\n\r\n .. code-block:: python3\r\n\r\n def circuit(theta):\r\n qml.RX(theta, wires=1)\r\n qml.PauliZ(wires=0)\r\n\r\n We can use ``qml.matrix`` to generate a new function that returns the unitary matrix\r\n corresponding to the function ``circuit``:\r\n\r\n >>> matrix_fn = qml.matrix(circuit)\r\n >>> theta = np.pi / 4\r\n >>> matrix_fn(theta)\r\n array([[ 0.92387953+0.j, 0.+0.j , 0.-0.38268343j, 0.+0.j],\r\n [ 0.+0.j, -0.92387953+0.j, 0.+0.j, 0. +0.38268343j],\r\n [ 0. -0.38268343j, 0.+0.j, 0.92387953+0.j, 0.+0.j],\r\n [ 0.+0.j, 0.+0.38268343j, 0.+0.j, -0.92387953+0.j]])\r\n\r\n Note that since ``wire_order`` was not specified, the default order ``[1, 0]`` for ``circuit``\r\n was used, and the unitary matrix corresponds to the operation :math:`Z\\otimes R_X(\\theta)`. To\r\n obtain the matrix for :math:`R_X(\\theta)\\otimes Z`, specify ``wire_order=[0, 1]`` in the\r\n function call:\r\n\r\n >>> matrix = qml.matrix(circuit, wire_order=[0, 1])\r\n\r\n You can also get the unitary matrix for operations on a subspace of a larger Hilbert space. For\r\n example, with the same function ``circuit`` and ``wire_order=[\"a\", 0, \"b\", 1]`` you obtain the\r\n :math:`16\\times 16` matrix for the operation :math:`I\\otimes Z\\otimes I\\otimes R_X(\\theta)`.\r\n\r\n This unitary matrix can also be used in differentiable calculations. For example, consider the\r\n following cost function:\r\n\r\n .. code-block:: python\r\n\r\n def circuit(theta):\r\n qml.RX(theta, wires=1) qml.PauliZ(wires=0)\r\n qml.CNOT(wires=[0, 1])\r\n\r\n def cost(theta):\r\n matrix = qml.matrix(circuit)(theta)\r\n return np.real(np.trace(matrix))\r\n\r\n Since this cost function returns a real scalar as a function of ``theta``, we can differentiate\r\n it:\r\n\r\n >>> theta = np.array(0.3, requires_grad=True)\r\n >>> cost(theta)\r\n 1.9775421558720845\r\n >>> qml.grad(cost)(theta)\r\n -0.14943813247359922\r\n \"\"\"\r\n if isinstance(op, qml.operation.Tensor) and wire_order is not None:\r\n op = 1.0 * op # convert to a Hamiltonian\r\n\r\n if isinstance(op, qml.Hamiltonian):\r\n return qml.utils.sparse_hamiltonian(op, wires=wire_order).toarray()\r\n\r\n return op.matrix(wire_order=wire_order)\r\n\r\n\r\[email protected]_transform\r\ndef _matrix(tape, wire_order=None):\r\n \"\"\"Defines how matrix works if applied to a tape containing multiple operations.\"\"\"\r\n params = tape.get_parameters(trainable_only=False)\r\n interface = qml.math.get_interface(*params)\r\n\r\n wire_order = wire_order or tape.wires\r\n\r\n # initialize the unitary matrix\r\n result = qml.math.eye(2 ** len(wire_order), like=interface)\r\n\r\n for op in tape.operations:\r\n U = matrix(op, wire_order=wire_order)\r\n # Coerce the matrices U and result and use matrix multiplication. Broadcasted axes\r\n # are handled correctly automatically by ``matmul`` (See e.g. NumPy documentation)\r\n result = qml.math.matmul(*qml.math.coerce([U, result], like=interface), like=interface)\r\n\r\n return result\r\n", "path": "pennylane/ops/functions/matrix.py"}], "after_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\r\n\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"\r\nThis module contains the qml.matrix function.\r\n\"\"\"\r\n# pylint: disable=protected-access\r\nimport pennylane as qml\r\n\r\n\r\[email protected]_transform\r\ndef matrix(op, *, wire_order=None):\r\n r\"\"\"The matrix representation of an operation or quantum circuit.\r\n\r\n Args:\r\n op (.Operator, pennylane.QNode, .QuantumTape, or Callable): An operator, quantum node, tape,\r\n or function that applies quantum operations.\r\n wire_order (Sequence[Any], optional): Order of the wires in the quantum circuit.\r\n Defaults to the order in which the wires appear in the quantum function.\r\n\r\n Returns:\r\n tensor_like or function: Function which accepts the same arguments as the QNode or quantum\r\n function. When called, this function will return the unitary matrix in the appropriate\r\n autodiff framework (Autograd, TensorFlow, PyTorch, JAX) given its parameters.\r\n\r\n **Example**\r\n\r\n Given an instantiated operator, ``qml.matrix`` returns the matrix representation:\r\n\r\n >>> op = qml.RX(0.54, wires=0)\r\n >>> qml.matrix(op)\r\n [[0.9637709+0.j 0. -0.26673144j]\r\n [0. -0.26673144j 0.9637709+0.j ]]\r\n\r\n It can also be used in a functional form:\r\n\r\n >>> x = torch.tensor(0.6, requires_grad=True)\r\n >>> matrix_fn = qml.matrix(qml.RX)\r\n >>> matrix_fn(x, wires=0)\r\n tensor([[0.9553+0.0000j, 0.0000-0.2955j],\r\n [0.0000-0.2955j, 0.9553+0.0000j]], grad_fn=<AddBackward0>)\r\n\r\n In its functional form, it is fully differentiable with respect to gate arguments:\r\n\r\n >>> loss = torch.real(torch.trace(matrix_fn(x, wires=0)))\r\n >>> loss.backward()\r\n >>> x.grad\r\n tensor(-0.5910)\r\n\r\n This operator transform can also be applied to QNodes, tapes, and quantum functions\r\n that contain multiple operations; see Usage Details below for more details.\r\n\r\n .. details::\r\n :title: Usage Details\r\n\r\n ``qml.matrix`` can also be used with QNodes, tapes, or quantum functions that\r\n contain multiple operations.\r\n\r\n Consider the following quantum function:\r\n\r\n .. code-block:: python3\r\n\r\n def circuit(theta):\r\n qml.RX(theta, wires=1)\r\n qml.PauliZ(wires=0)\r\n\r\n We can use ``qml.matrix`` to generate a new function that returns the unitary matrix\r\n corresponding to the function ``circuit``:\r\n\r\n >>> matrix_fn = qml.matrix(circuit)\r\n >>> theta = np.pi / 4\r\n >>> matrix_fn(theta)\r\n array([[ 0.92387953+0.j, 0.+0.j , 0.-0.38268343j, 0.+0.j],\r\n [ 0.+0.j, -0.92387953+0.j, 0.+0.j, 0. +0.38268343j],\r\n [ 0. -0.38268343j, 0.+0.j, 0.92387953+0.j, 0.+0.j],\r\n [ 0.+0.j, 0.+0.38268343j, 0.+0.j, -0.92387953+0.j]])\r\n\r\n Note that since ``wire_order`` was not specified, the default order ``[1, 0]`` for ``circuit``\r\n was used, and the unitary matrix corresponds to the operation :math:`Z\\otimes R_X(\\theta)`. To\r\n obtain the matrix for :math:`R_X(\\theta)\\otimes Z`, specify ``wire_order=[0, 1]`` in the\r\n function call:\r\n\r\n >>> matrix = qml.matrix(circuit, wire_order=[0, 1])\r\n\r\n You can also get the unitary matrix for operations on a subspace of a larger Hilbert space. For\r\n example, with the same function ``circuit`` and ``wire_order=[\"a\", 0, \"b\", 1]`` you obtain the\r\n :math:`16\\times 16` matrix for the operation :math:`I\\otimes Z\\otimes I\\otimes R_X(\\theta)`.\r\n\r\n This unitary matrix can also be used in differentiable calculations. For example, consider the\r\n following cost function:\r\n\r\n .. code-block:: python\r\n\r\n def circuit(theta):\r\n qml.RX(theta, wires=1) qml.PauliZ(wires=0)\r\n qml.CNOT(wires=[0, 1])\r\n\r\n def cost(theta):\r\n matrix = qml.matrix(circuit)(theta)\r\n return np.real(np.trace(matrix))\r\n\r\n Since this cost function returns a real scalar as a function of ``theta``, we can differentiate\r\n it:\r\n\r\n >>> theta = np.array(0.3, requires_grad=True)\r\n >>> cost(theta)\r\n 1.9775421558720845\r\n >>> qml.grad(cost)(theta)\r\n -0.14943813247359922\r\n \"\"\"\r\n if isinstance(op, qml.operation.Tensor) and wire_order is not None:\r\n op = 1.0 * op # convert to a Hamiltonian\r\n\r\n if isinstance(op, qml.Hamiltonian):\r\n return qml.utils.sparse_hamiltonian(op, wires=wire_order).toarray()\r\n\r\n return op.matrix(wire_order=wire_order)\r\n\r\n\r\[email protected]_transform\r\ndef _matrix(tape, wire_order=None):\r\n \"\"\"Defines how matrix works if applied to a tape containing multiple operations.\"\"\"\r\n if not tape.wires:\r\n raise qml.operation.MatrixUndefinedError\r\n params = tape.get_parameters(trainable_only=False)\r\n interface = qml.math.get_interface(*params)\r\n\r\n wire_order = wire_order or tape.wires\r\n\r\n # initialize the unitary matrix\r\n result = qml.math.eye(2 ** len(wire_order), like=interface)\r\n\r\n for op in tape.operations:\r\n U = matrix(op, wire_order=wire_order)\r\n # Coerce the matrices U and result and use matrix multiplication. Broadcasted axes\r\n # are handled correctly automatically by ``matmul`` (See e.g. NumPy documentation)\r\n result = qml.math.matmul(*qml.math.coerce([U, result], like=interface), like=interface)\r\n\r\n return result\r\n", "path": "pennylane/ops/functions/matrix.py"}]}
| 2,946 | 138 |
gh_patches_debug_6632
|
rasdani/github-patches
|
git_diff
|
cloudtools__troposphere-1695
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
implement AWS::ServiceCatalog changes from May 14, 2020 update
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/servicecatalog.py`
Content:
```
1 # Copyright (c) 2012-2018, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, Tags
7 from .validators import boolean, integer
8
9
10 class AcceptedPortfolioShare(AWSObject):
11 resource_type = "AWS::ServiceCatalog::AcceptedPortfolioShare"
12
13 props = {
14 'AcceptLanguage': (basestring, False),
15 'PortfolioId': (basestring, True),
16 }
17
18
19 class ProvisioningArtifactProperties(AWSProperty):
20 props = {
21 'Description': (basestring, False),
22 'DisableTemplateValidation': (boolean, False),
23 'Info': (dict, True),
24 'Name': (basestring, False),
25 }
26
27
28 class CloudFormationProduct(AWSObject):
29 resource_type = "AWS::ServiceCatalog::CloudFormationProduct"
30
31 props = {
32 'AcceptLanguage': (basestring, False),
33 'Description': (basestring, False),
34 'Distributor': (basestring, False),
35 'Name': (basestring, True),
36 'Owner': (basestring, True),
37 'ProvisioningArtifactParameters':
38 ([ProvisioningArtifactProperties], True),
39 'SupportDescription': (basestring, False),
40 'SupportEmail': (basestring, False),
41 'SupportUrl': (basestring, False),
42 'Tags': (Tags, False),
43 }
44
45
46 class ProvisioningParameter(AWSProperty):
47 props = {
48 'Key': (basestring, False),
49 'Value': (basestring, False),
50 }
51
52
53 class ProvisioningPreferences(AWSProperty):
54 props = {
55 'StackSetAccounts': ([basestring], False),
56 'StackSetFailureToleranceCount': (integer, False),
57 'StackSetFailureTolerancePercentage': (integer, False),
58 'StackSetMaxConcurrencyCount': (integer, False),
59 'StackSetMaxConcurrencyPercentage': (integer, False),
60 'StackSetOperationType': (basestring, False),
61 'StackSetRegions': ([basestring], False),
62 }
63
64
65 class CloudFormationProvisionedProduct(AWSObject):
66 resource_type = "AWS::ServiceCatalog::CloudFormationProvisionedProduct"
67
68 props = {
69 'AcceptLanguage': (basestring, False),
70 'NotificationArns': ([basestring], False),
71 'PathId': (basestring, False),
72 'ProductId': (basestring, False),
73 'ProductName': (basestring, False),
74 'ProvisionedProductName': (basestring, False),
75 'ProvisioningArtifactId': (basestring, False),
76 'ProvisioningArtifactName': (basestring, False),
77 'ProvisioningParameters': ([ProvisioningParameter], False),
78 'ProvisioningPreferences': (ProvisioningPreferences, False),
79 'Tags': (Tags, False),
80 }
81
82
83 class LaunchNotificationConstraint(AWSObject):
84 resource_type = "AWS::ServiceCatalog::LaunchNotificationConstraint"
85
86 props = {
87 'AcceptLanguage': (basestring, False),
88 'Description': (basestring, False),
89 'NotificationArns': ([basestring], True),
90 'PortfolioId': (basestring, True),
91 'ProductId': (basestring, True),
92 }
93
94
95 class LaunchRoleConstraint(AWSObject):
96 resource_type = "AWS::ServiceCatalog::LaunchRoleConstraint"
97
98 props = {
99 'AcceptLanguage': (basestring, False),
100 'Description': (basestring, False),
101 'LocalRoleName': (basestring, False),
102 'PortfolioId': (basestring, True),
103 'ProductId': (basestring, True),
104 'RoleArn': (basestring, True),
105 }
106
107
108 class LaunchTemplateConstraint(AWSObject):
109 resource_type = "AWS::ServiceCatalog::LaunchTemplateConstraint"
110
111 props = {
112 'AcceptLanguage': (basestring, False),
113 'Description': (basestring, False),
114 'PortfolioId': (basestring, True),
115 'ProductId': (basestring, True),
116 'Rules': (basestring, True),
117 }
118
119
120 class Portfolio(AWSObject):
121 resource_type = "AWS::ServiceCatalog::Portfolio"
122
123 props = {
124 'AcceptLanguage': (basestring, False),
125 'Description': (basestring, False),
126 'DisplayName': (basestring, True),
127 'ProviderName': (basestring, True),
128 'Tags': (Tags, False),
129 }
130
131
132 class PortfolioPrincipalAssociation(AWSObject):
133 resource_type = "AWS::ServiceCatalog::PortfolioPrincipalAssociation"
134
135 props = {
136 'AcceptLanguage': (basestring, False),
137 'PortfolioId': (basestring, True),
138 'PrincipalARN': (basestring, True),
139 'PrincipalType': (basestring, True),
140 }
141
142
143 class PortfolioProductAssociation(AWSObject):
144 resource_type = "AWS::ServiceCatalog::PortfolioProductAssociation"
145
146 props = {
147 'AcceptLanguage': (basestring, False),
148 'PortfolioId': (basestring, True),
149 'ProductId': (basestring, True),
150 'SourcePortfolioId': (basestring, False),
151 }
152
153
154 class PortfolioShare(AWSObject):
155 resource_type = "AWS::ServiceCatalog::PortfolioShare"
156
157 props = {
158 'AcceptLanguage': (basestring, False),
159 'AccountId': (basestring, True),
160 'PortfolioId': (basestring, True),
161 }
162
163
164 def validate_tag_update(update):
165 valid_tag_update_values = [
166 "ALLOWED",
167 "NOT_ALLOWED",
168 ]
169 if update not in valid_tag_update_values:
170 raise ValueError(
171 "{} is not a valid tag update value".format(update)
172 )
173 return update
174
175
176 class ResourceUpdateConstraint(AWSObject):
177 resource_type = "AWS::ServiceCatalog::ResourceUpdateConstraint"
178
179 props = {
180 'AcceptLanguage': (basestring, False),
181 'Description': (basestring, False),
182 'PortfolioId': (basestring, True),
183 'ProductId': (basestring, True),
184 'TagUpdateOnProvisionedProduct': (validate_tag_update, True),
185 }
186
187
188 class StackSetConstraint(AWSObject):
189 resource_type = "AWS::ServiceCatalog::StackSetConstraint"
190
191 props = {
192 'AcceptLanguage': (basestring, False),
193 'AccountList': ([basestring], True),
194 'AdminRole': (basestring, True),
195 'Description': (basestring, True),
196 'ExecutionRole': (basestring, True),
197 'PortfolioId': (basestring, True),
198 'ProductId': (basestring, True),
199 'RegionList': ([basestring], True),
200 'StackInstanceControl': (basestring, True),
201 }
202
203
204 class TagOption(AWSObject):
205 resource_type = "AWS::ServiceCatalog::TagOption"
206
207 props = {
208 'Active': (boolean, False),
209 'Key': (basestring, True),
210 'Value': (basestring, True),
211 }
212
213
214 class TagOptionAssociation(AWSObject):
215 resource_type = "AWS::ServiceCatalog::TagOptionAssociation"
216
217 props = {
218 'ResourceId': (basestring, True),
219 'TagOptionId': (basestring, True),
220 }
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/troposphere/servicecatalog.py b/troposphere/servicecatalog.py
--- a/troposphere/servicecatalog.py
+++ b/troposphere/servicecatalog.py
@@ -36,6 +36,7 @@
'Owner': (basestring, True),
'ProvisioningArtifactParameters':
([ProvisioningArtifactProperties], True),
+ 'ReplaceProvisioningArtifacts': (boolean, False),
'SupportDescription': (basestring, False),
'SupportEmail': (basestring, False),
'SupportUrl': (basestring, False),
|
{"golden_diff": "diff --git a/troposphere/servicecatalog.py b/troposphere/servicecatalog.py\n--- a/troposphere/servicecatalog.py\n+++ b/troposphere/servicecatalog.py\n@@ -36,6 +36,7 @@\n 'Owner': (basestring, True),\n 'ProvisioningArtifactParameters':\n ([ProvisioningArtifactProperties], True),\n+ 'ReplaceProvisioningArtifacts': (boolean, False),\n 'SupportDescription': (basestring, False),\n 'SupportEmail': (basestring, False),\n 'SupportUrl': (basestring, False),\n", "issue": "implement AWS::ServiceCatalog changes from May 14, 2020 update\n\n", "before_files": [{"content": "# Copyright (c) 2012-2018, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, Tags\nfrom .validators import boolean, integer\n\n\nclass AcceptedPortfolioShare(AWSObject):\n resource_type = \"AWS::ServiceCatalog::AcceptedPortfolioShare\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n }\n\n\nclass ProvisioningArtifactProperties(AWSProperty):\n props = {\n 'Description': (basestring, False),\n 'DisableTemplateValidation': (boolean, False),\n 'Info': (dict, True),\n 'Name': (basestring, False),\n }\n\n\nclass CloudFormationProduct(AWSObject):\n resource_type = \"AWS::ServiceCatalog::CloudFormationProduct\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'Distributor': (basestring, False),\n 'Name': (basestring, True),\n 'Owner': (basestring, True),\n 'ProvisioningArtifactParameters':\n ([ProvisioningArtifactProperties], True),\n 'SupportDescription': (basestring, False),\n 'SupportEmail': (basestring, False),\n 'SupportUrl': (basestring, False),\n 'Tags': (Tags, False),\n }\n\n\nclass ProvisioningParameter(AWSProperty):\n props = {\n 'Key': (basestring, False),\n 'Value': (basestring, False),\n }\n\n\nclass ProvisioningPreferences(AWSProperty):\n props = {\n 'StackSetAccounts': ([basestring], False),\n 'StackSetFailureToleranceCount': (integer, False),\n 'StackSetFailureTolerancePercentage': (integer, False),\n 'StackSetMaxConcurrencyCount': (integer, False),\n 'StackSetMaxConcurrencyPercentage': (integer, False),\n 'StackSetOperationType': (basestring, False),\n 'StackSetRegions': ([basestring], False),\n }\n\n\nclass CloudFormationProvisionedProduct(AWSObject):\n resource_type = \"AWS::ServiceCatalog::CloudFormationProvisionedProduct\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'NotificationArns': ([basestring], False),\n 'PathId': (basestring, False),\n 'ProductId': (basestring, False),\n 'ProductName': (basestring, False),\n 'ProvisionedProductName': (basestring, False),\n 'ProvisioningArtifactId': (basestring, False),\n 'ProvisioningArtifactName': (basestring, False),\n 'ProvisioningParameters': ([ProvisioningParameter], False),\n 'ProvisioningPreferences': (ProvisioningPreferences, False),\n 'Tags': (Tags, False),\n }\n\n\nclass LaunchNotificationConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchNotificationConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'NotificationArns': ([basestring], True),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n }\n\n\nclass LaunchRoleConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchRoleConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'LocalRoleName': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'RoleArn': (basestring, True),\n }\n\n\nclass LaunchTemplateConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchTemplateConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'Rules': (basestring, True),\n }\n\n\nclass Portfolio(AWSObject):\n resource_type = \"AWS::ServiceCatalog::Portfolio\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'DisplayName': (basestring, True),\n 'ProviderName': (basestring, True),\n 'Tags': (Tags, False),\n }\n\n\nclass PortfolioPrincipalAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioPrincipalAssociation\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'PrincipalARN': (basestring, True),\n 'PrincipalType': (basestring, True),\n }\n\n\nclass PortfolioProductAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioProductAssociation\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'SourcePortfolioId': (basestring, False),\n }\n\n\nclass PortfolioShare(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioShare\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'AccountId': (basestring, True),\n 'PortfolioId': (basestring, True),\n }\n\n\ndef validate_tag_update(update):\n valid_tag_update_values = [\n \"ALLOWED\",\n \"NOT_ALLOWED\",\n ]\n if update not in valid_tag_update_values:\n raise ValueError(\n \"{} is not a valid tag update value\".format(update)\n )\n return update\n\n\nclass ResourceUpdateConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::ResourceUpdateConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'TagUpdateOnProvisionedProduct': (validate_tag_update, True),\n }\n\n\nclass StackSetConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::StackSetConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'AccountList': ([basestring], True),\n 'AdminRole': (basestring, True),\n 'Description': (basestring, True),\n 'ExecutionRole': (basestring, True),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'RegionList': ([basestring], True),\n 'StackInstanceControl': (basestring, True),\n }\n\n\nclass TagOption(AWSObject):\n resource_type = \"AWS::ServiceCatalog::TagOption\"\n\n props = {\n 'Active': (boolean, False),\n 'Key': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass TagOptionAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::TagOptionAssociation\"\n\n props = {\n 'ResourceId': (basestring, True),\n 'TagOptionId': (basestring, True),\n }\n", "path": "troposphere/servicecatalog.py"}], "after_files": [{"content": "# Copyright (c) 2012-2018, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, Tags\nfrom .validators import boolean, integer\n\n\nclass AcceptedPortfolioShare(AWSObject):\n resource_type = \"AWS::ServiceCatalog::AcceptedPortfolioShare\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n }\n\n\nclass ProvisioningArtifactProperties(AWSProperty):\n props = {\n 'Description': (basestring, False),\n 'DisableTemplateValidation': (boolean, False),\n 'Info': (dict, True),\n 'Name': (basestring, False),\n }\n\n\nclass CloudFormationProduct(AWSObject):\n resource_type = \"AWS::ServiceCatalog::CloudFormationProduct\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'Distributor': (basestring, False),\n 'Name': (basestring, True),\n 'Owner': (basestring, True),\n 'ProvisioningArtifactParameters':\n ([ProvisioningArtifactProperties], True),\n 'ReplaceProvisioningArtifacts': (boolean, False),\n 'SupportDescription': (basestring, False),\n 'SupportEmail': (basestring, False),\n 'SupportUrl': (basestring, False),\n 'Tags': (Tags, False),\n }\n\n\nclass ProvisioningParameter(AWSProperty):\n props = {\n 'Key': (basestring, False),\n 'Value': (basestring, False),\n }\n\n\nclass ProvisioningPreferences(AWSProperty):\n props = {\n 'StackSetAccounts': ([basestring], False),\n 'StackSetFailureToleranceCount': (integer, False),\n 'StackSetFailureTolerancePercentage': (integer, False),\n 'StackSetMaxConcurrencyCount': (integer, False),\n 'StackSetMaxConcurrencyPercentage': (integer, False),\n 'StackSetOperationType': (basestring, False),\n 'StackSetRegions': ([basestring], False),\n }\n\n\nclass CloudFormationProvisionedProduct(AWSObject):\n resource_type = \"AWS::ServiceCatalog::CloudFormationProvisionedProduct\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'NotificationArns': ([basestring], False),\n 'PathId': (basestring, False),\n 'ProductId': (basestring, False),\n 'ProductName': (basestring, False),\n 'ProvisionedProductName': (basestring, False),\n 'ProvisioningArtifactId': (basestring, False),\n 'ProvisioningArtifactName': (basestring, False),\n 'ProvisioningParameters': ([ProvisioningParameter], False),\n 'ProvisioningPreferences': (ProvisioningPreferences, False),\n 'Tags': (Tags, False),\n }\n\n\nclass LaunchNotificationConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchNotificationConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'NotificationArns': ([basestring], True),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n }\n\n\nclass LaunchRoleConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchRoleConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'LocalRoleName': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'RoleArn': (basestring, True),\n }\n\n\nclass LaunchTemplateConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchTemplateConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'Rules': (basestring, True),\n }\n\n\nclass Portfolio(AWSObject):\n resource_type = \"AWS::ServiceCatalog::Portfolio\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'DisplayName': (basestring, True),\n 'ProviderName': (basestring, True),\n 'Tags': (Tags, False),\n }\n\n\nclass PortfolioPrincipalAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioPrincipalAssociation\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'PrincipalARN': (basestring, True),\n 'PrincipalType': (basestring, True),\n }\n\n\nclass PortfolioProductAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioProductAssociation\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'SourcePortfolioId': (basestring, False),\n }\n\n\nclass PortfolioShare(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioShare\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'AccountId': (basestring, True),\n 'PortfolioId': (basestring, True),\n }\n\n\ndef validate_tag_update(update):\n valid_tag_update_values = [\n \"ALLOWED\",\n \"NOT_ALLOWED\",\n ]\n if update not in valid_tag_update_values:\n raise ValueError(\n \"{} is not a valid tag update value\".format(update)\n )\n return update\n\n\nclass ResourceUpdateConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::ResourceUpdateConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'TagUpdateOnProvisionedProduct': (validate_tag_update, True),\n }\n\n\nclass StackSetConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::StackSetConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'AccountList': ([basestring], True),\n 'AdminRole': (basestring, True),\n 'Description': (basestring, True),\n 'ExecutionRole': (basestring, True),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'RegionList': ([basestring], True),\n 'StackInstanceControl': (basestring, True),\n }\n\n\nclass TagOption(AWSObject):\n resource_type = \"AWS::ServiceCatalog::TagOption\"\n\n props = {\n 'Active': (boolean, False),\n 'Key': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass TagOptionAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::TagOptionAssociation\"\n\n props = {\n 'ResourceId': (basestring, True),\n 'TagOptionId': (basestring, True),\n }\n", "path": "troposphere/servicecatalog.py"}]}
| 2,402 | 124 |
gh_patches_debug_50237
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-914
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The example() decorator is improperly "fixing" the example text.
Version: 6.0.0
Code excerpt
``` python
@commands('abort')
@example(".abort")
```
Help output:
``` text
08:49 bgallew: .help abort
08:49 DevEgo: bgallew: Abort any/all pending power management commands.
08:49 DevEgo: bgallew: e.g. \.bort
```
If you update the prefix setting to be, say, '.|!', it's even more broken.
FTR, this affects Sopel's built-ins, too:
``` text
08:54 bgallew: .help help
08:54 DevEgo: bgallew: Shows a command's documentation, and possibly an example.
08:54 DevEgo: bgallew: e.g. \.elp tell
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/loader.py`
Content:
```
1 # coding=utf-8
2 from __future__ import unicode_literals, absolute_import
3
4 import imp
5 import os.path
6 import re
7 import sys
8
9 from sopel.tools import itervalues, get_command_regexp
10
11 if sys.version_info.major >= 3:
12 basestring = (str, bytes)
13
14
15 def get_module_description(path):
16 good_file = (os.path.isfile(path) and path.endswith('.py')
17 and not path.startswith('_'))
18 good_dir = (os.path.isdir(path) and
19 os.path.isfile(os.path.join(path, '__init__.py')))
20 if good_file:
21 name = os.path.basename(path)[:-3]
22 return (name, path, imp.PY_SOURCE)
23 elif good_dir:
24 name = os.path.basename(path)
25 return (name, path, imp.PKG_DIRECTORY)
26 else:
27 return None
28
29
30 def _update_modules_from_dir(modules, directory):
31 # Note that this modifies modules in place
32 for path in os.listdir(directory):
33 path = os.path.join(directory, path)
34 result = get_module_description(path)
35 if result:
36 modules[result[0]] = result[1:]
37
38
39 def enumerate_modules(config, show_all=False):
40 """Map the names of modules to the location of their file.
41
42 Return a dict mapping the names of modules to a tuple of the module name,
43 the pathname and either `imp.PY_SOURCE` or `imp.PKG_DIRECTORY`. This
44 searches the regular modules directory and all directories specified in the
45 `core.extra` attribute of the `config` object. If two modules have the same
46 name, the last one to be found will be returned and the rest will be
47 ignored. Modules are found starting in the regular directory, followed by
48 `~/.sopel/modules`, and then through the extra directories in the order
49 that the are specified.
50
51 If `show_all` is given as `True`, the `enable` and `exclude`
52 configuration options will be ignored, and all modules will be shown
53 (though duplicates will still be ignored as above).
54 """
55 modules = {}
56
57 # First, add modules from the regular modules directory
58 main_dir = os.path.dirname(os.path.abspath(__file__))
59 modules_dir = os.path.join(main_dir, 'modules')
60 _update_modules_from_dir(modules, modules_dir)
61 for path in os.listdir(modules_dir):
62 break
63
64 # Then, find PyPI installed modules
65 # TODO does this work with all possible install mechanisms?
66 try:
67 import sopel_modules
68 except:
69 pass
70 else:
71 for directory in sopel_modules.__path__:
72 _update_modules_from_dir(modules, directory)
73
74 # Next, look in ~/.sopel/modules
75 home_modules_dir = os.path.join(config.homedir, 'modules')
76 if not os.path.isdir(home_modules_dir):
77 os.makedirs(home_modules_dir)
78 _update_modules_from_dir(modules, home_modules_dir)
79
80 # Last, look at all the extra directories.
81 for directory in config.core.extra:
82 _update_modules_from_dir(modules, directory)
83
84 # Coretasks is special. No custom user coretasks.
85 ct_path = os.path.join(main_dir, 'coretasks.py')
86 modules['coretasks'] = (ct_path, imp.PY_SOURCE)
87
88 # If caller wants all of them, don't apply white and blacklists
89 if show_all:
90 return modules
91
92 # Apply whitelist, if present
93 enable = config.core.enable
94 if enable:
95 enabled_modules = {'coretasks': modules['coretasks']}
96 for module in enable:
97 if module in modules:
98 enabled_modules[module] = modules[module]
99 modules = enabled_modules
100
101 # Apply blacklist, if present
102 exclude = config.core.exclude
103 for module in exclude:
104 if module in modules:
105 del modules[module]
106
107 return modules
108
109
110 def compile_rule(nick, pattern):
111 pattern = pattern.replace('$nickname', nick)
112 pattern = pattern.replace('$nick', r'{}[,:]\s+'.format(nick))
113 flags = re.IGNORECASE
114 if '\n' in pattern:
115 flags |= re.VERBOSE
116 return re.compile(pattern, flags)
117
118
119 def trim_docstring(doc):
120 """Get the docstring as a series of lines that can be sent"""
121 if not doc:
122 return []
123 lines = doc.expandtabs().splitlines()
124 indent = sys.maxsize
125 for line in lines[1:]:
126 stripped = line.lstrip()
127 if stripped:
128 indent = min(indent, len(line) - len(stripped))
129 trimmed = [lines[0].strip()]
130 if indent < sys.maxsize:
131 for line in lines[1:]:
132 trimmed.append(line[:].rstrip())
133 while trimmed and not trimmed[-1]:
134 trimmed.pop()
135 while trimmed and not trimmed[0]:
136 trimmed.pop(0)
137 return trimmed
138
139
140 def clean_callable(func, config):
141 """Compiles the regexes, moves commands into func.rule, fixes up docs and
142 puts them in func._docs, and sets defaults"""
143 nick = config.core.nick
144 prefix = config.core.prefix
145 help_prefix = config.core.prefix
146 func._docs = {}
147 doc = trim_docstring(func.__doc__)
148 example = None
149
150 func.unblockable = getattr(func, 'unblockable', True)
151 func.priority = getattr(func, 'priority', 'medium')
152 func.thread = getattr(func, 'thread', True)
153 func.rate = getattr(func, 'rate', 0)
154
155 if not hasattr(func, 'event'):
156 func.event = ['PRIVMSG']
157 else:
158 if isinstance(func.event, basestring):
159 func.event = [func.event.upper()]
160 else:
161 func.event = [event.upper() for event in func.event]
162
163 if hasattr(func, 'rule'):
164 if isinstance(func.rule, basestring):
165 func.rule = [func.rule]
166 func.rule = [compile_rule(nick, rule) for rule in func.rule]
167
168 if hasattr(func, 'commands'):
169 func.rule = getattr(func, 'rule', [])
170 for command in func.commands:
171 regexp = get_command_regexp(prefix, command)
172 func.rule.append(regexp)
173 if hasattr(func, 'example'):
174 example = func.example[0]["example"]
175 example = example.replace('$nickname', nick)
176 if example[0] != help_prefix:
177 example = help_prefix + example[len(help_prefix):]
178 if doc or example:
179 for command in func.commands:
180 func._docs[command] = (doc, example)
181
182
183 def load_module(name, path, type_):
184 """Load a module, and sort out the callables and shutdowns"""
185 if type_ == imp.PY_SOURCE:
186 with open(path) as mod:
187 module = imp.load_module(name, mod, path, ('.py', 'U', type_))
188 elif type_ == imp.PKG_DIRECTORY:
189 module = imp.load_module(name, None, path, ('', '', type_))
190 else:
191 raise TypeError('Unsupported module type')
192 return module, os.path.getmtime(path)
193
194
195 def is_triggerable(obj):
196 return any(hasattr(obj, attr) for attr in ('rule', 'rule', 'intent',
197 'commands'))
198
199
200 def clean_module(module, config):
201 callables = []
202 shutdowns = []
203 jobs = []
204 for obj in itervalues(vars(module)):
205 if callable(obj):
206 if getattr(obj, '__name__', None) == 'shutdown':
207 shutdowns.append(obj)
208 elif is_triggerable(obj):
209 clean_callable(obj, config)
210 callables.append(obj)
211 elif hasattr(obj, 'interval'):
212 clean_callable(obj, config)
213 jobs.append(obj)
214 return callables, jobs, shutdowns
215
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/loader.py b/sopel/loader.py
--- a/sopel/loader.py
+++ b/sopel/loader.py
@@ -142,7 +142,7 @@
puts them in func._docs, and sets defaults"""
nick = config.core.nick
prefix = config.core.prefix
- help_prefix = config.core.prefix
+ help_prefix = config.core.help_prefix
func._docs = {}
doc = trim_docstring(func.__doc__)
example = None
|
{"golden_diff": "diff --git a/sopel/loader.py b/sopel/loader.py\n--- a/sopel/loader.py\n+++ b/sopel/loader.py\n@@ -142,7 +142,7 @@\n puts them in func._docs, and sets defaults\"\"\"\n nick = config.core.nick\n prefix = config.core.prefix\n- help_prefix = config.core.prefix\n+ help_prefix = config.core.help_prefix\n func._docs = {}\n doc = trim_docstring(func.__doc__)\n example = None\n", "issue": "The example() decorator is improperly \"fixing\" the example text.\nVersion: 6.0.0\n\nCode excerpt\n\n``` python\n@commands('abort')\n@example(\".abort\")\n```\n\nHelp output:\n\n``` text\n08:49 bgallew: .help abort\n08:49 DevEgo: bgallew: Abort any/all pending power management commands.\n08:49 DevEgo: bgallew: e.g. \\.bort\n```\n\nIf you update the prefix setting to be, say, '.|!', it's even more broken.\n\nFTR, this affects Sopel's built-ins, too:\n\n``` text\n08:54 bgallew: .help help\n08:54 DevEgo: bgallew: Shows a command's documentation, and possibly an example.\n08:54 DevEgo: bgallew: e.g. \\.elp tell\n```\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals, absolute_import\n\nimport imp\nimport os.path\nimport re\nimport sys\n\nfrom sopel.tools import itervalues, get_command_regexp\n\nif sys.version_info.major >= 3:\n basestring = (str, bytes)\n\n\ndef get_module_description(path):\n good_file = (os.path.isfile(path) and path.endswith('.py')\n and not path.startswith('_'))\n good_dir = (os.path.isdir(path) and\n os.path.isfile(os.path.join(path, '__init__.py')))\n if good_file:\n name = os.path.basename(path)[:-3]\n return (name, path, imp.PY_SOURCE)\n elif good_dir:\n name = os.path.basename(path)\n return (name, path, imp.PKG_DIRECTORY)\n else:\n return None\n\n\ndef _update_modules_from_dir(modules, directory):\n # Note that this modifies modules in place\n for path in os.listdir(directory):\n path = os.path.join(directory, path)\n result = get_module_description(path)\n if result:\n modules[result[0]] = result[1:]\n\n\ndef enumerate_modules(config, show_all=False):\n \"\"\"Map the names of modules to the location of their file.\n\n Return a dict mapping the names of modules to a tuple of the module name,\n the pathname and either `imp.PY_SOURCE` or `imp.PKG_DIRECTORY`. This\n searches the regular modules directory and all directories specified in the\n `core.extra` attribute of the `config` object. If two modules have the same\n name, the last one to be found will be returned and the rest will be\n ignored. Modules are found starting in the regular directory, followed by\n `~/.sopel/modules`, and then through the extra directories in the order\n that the are specified.\n\n If `show_all` is given as `True`, the `enable` and `exclude`\n configuration options will be ignored, and all modules will be shown\n (though duplicates will still be ignored as above).\n \"\"\"\n modules = {}\n\n # First, add modules from the regular modules directory\n main_dir = os.path.dirname(os.path.abspath(__file__))\n modules_dir = os.path.join(main_dir, 'modules')\n _update_modules_from_dir(modules, modules_dir)\n for path in os.listdir(modules_dir):\n break\n\n # Then, find PyPI installed modules\n # TODO does this work with all possible install mechanisms?\n try:\n import sopel_modules\n except:\n pass\n else:\n for directory in sopel_modules.__path__:\n _update_modules_from_dir(modules, directory)\n\n # Next, look in ~/.sopel/modules\n home_modules_dir = os.path.join(config.homedir, 'modules')\n if not os.path.isdir(home_modules_dir):\n os.makedirs(home_modules_dir)\n _update_modules_from_dir(modules, home_modules_dir)\n\n # Last, look at all the extra directories.\n for directory in config.core.extra:\n _update_modules_from_dir(modules, directory)\n\n # Coretasks is special. No custom user coretasks.\n ct_path = os.path.join(main_dir, 'coretasks.py')\n modules['coretasks'] = (ct_path, imp.PY_SOURCE)\n\n # If caller wants all of them, don't apply white and blacklists\n if show_all:\n return modules\n\n # Apply whitelist, if present\n enable = config.core.enable\n if enable:\n enabled_modules = {'coretasks': modules['coretasks']}\n for module in enable:\n if module in modules:\n enabled_modules[module] = modules[module]\n modules = enabled_modules\n\n # Apply blacklist, if present\n exclude = config.core.exclude\n for module in exclude:\n if module in modules:\n del modules[module]\n\n return modules\n\n\ndef compile_rule(nick, pattern):\n pattern = pattern.replace('$nickname', nick)\n pattern = pattern.replace('$nick', r'{}[,:]\\s+'.format(nick))\n flags = re.IGNORECASE\n if '\\n' in pattern:\n flags |= re.VERBOSE\n return re.compile(pattern, flags)\n\n\ndef trim_docstring(doc):\n \"\"\"Get the docstring as a series of lines that can be sent\"\"\"\n if not doc:\n return []\n lines = doc.expandtabs().splitlines()\n indent = sys.maxsize\n for line in lines[1:]:\n stripped = line.lstrip()\n if stripped:\n indent = min(indent, len(line) - len(stripped))\n trimmed = [lines[0].strip()]\n if indent < sys.maxsize:\n for line in lines[1:]:\n trimmed.append(line[:].rstrip())\n while trimmed and not trimmed[-1]:\n trimmed.pop()\n while trimmed and not trimmed[0]:\n trimmed.pop(0)\n return trimmed\n\n\ndef clean_callable(func, config):\n \"\"\"Compiles the regexes, moves commands into func.rule, fixes up docs and\n puts them in func._docs, and sets defaults\"\"\"\n nick = config.core.nick\n prefix = config.core.prefix\n help_prefix = config.core.prefix\n func._docs = {}\n doc = trim_docstring(func.__doc__)\n example = None\n\n func.unblockable = getattr(func, 'unblockable', True)\n func.priority = getattr(func, 'priority', 'medium')\n func.thread = getattr(func, 'thread', True)\n func.rate = getattr(func, 'rate', 0)\n\n if not hasattr(func, 'event'):\n func.event = ['PRIVMSG']\n else:\n if isinstance(func.event, basestring):\n func.event = [func.event.upper()]\n else:\n func.event = [event.upper() for event in func.event]\n\n if hasattr(func, 'rule'):\n if isinstance(func.rule, basestring):\n func.rule = [func.rule]\n func.rule = [compile_rule(nick, rule) for rule in func.rule]\n\n if hasattr(func, 'commands'):\n func.rule = getattr(func, 'rule', [])\n for command in func.commands:\n regexp = get_command_regexp(prefix, command)\n func.rule.append(regexp)\n if hasattr(func, 'example'):\n example = func.example[0][\"example\"]\n example = example.replace('$nickname', nick)\n if example[0] != help_prefix:\n example = help_prefix + example[len(help_prefix):]\n if doc or example:\n for command in func.commands:\n func._docs[command] = (doc, example)\n\n\ndef load_module(name, path, type_):\n \"\"\"Load a module, and sort out the callables and shutdowns\"\"\"\n if type_ == imp.PY_SOURCE:\n with open(path) as mod:\n module = imp.load_module(name, mod, path, ('.py', 'U', type_))\n elif type_ == imp.PKG_DIRECTORY:\n module = imp.load_module(name, None, path, ('', '', type_))\n else:\n raise TypeError('Unsupported module type')\n return module, os.path.getmtime(path)\n\n\ndef is_triggerable(obj):\n return any(hasattr(obj, attr) for attr in ('rule', 'rule', 'intent',\n 'commands'))\n\n\ndef clean_module(module, config):\n callables = []\n shutdowns = []\n jobs = []\n for obj in itervalues(vars(module)):\n if callable(obj):\n if getattr(obj, '__name__', None) == 'shutdown':\n shutdowns.append(obj)\n elif is_triggerable(obj):\n clean_callable(obj, config)\n callables.append(obj)\n elif hasattr(obj, 'interval'):\n clean_callable(obj, config)\n jobs.append(obj)\n return callables, jobs, shutdowns\n", "path": "sopel/loader.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals, absolute_import\n\nimport imp\nimport os.path\nimport re\nimport sys\n\nfrom sopel.tools import itervalues, get_command_regexp\n\nif sys.version_info.major >= 3:\n basestring = (str, bytes)\n\n\ndef get_module_description(path):\n good_file = (os.path.isfile(path) and path.endswith('.py')\n and not path.startswith('_'))\n good_dir = (os.path.isdir(path) and\n os.path.isfile(os.path.join(path, '__init__.py')))\n if good_file:\n name = os.path.basename(path)[:-3]\n return (name, path, imp.PY_SOURCE)\n elif good_dir:\n name = os.path.basename(path)\n return (name, path, imp.PKG_DIRECTORY)\n else:\n return None\n\n\ndef _update_modules_from_dir(modules, directory):\n # Note that this modifies modules in place\n for path in os.listdir(directory):\n path = os.path.join(directory, path)\n result = get_module_description(path)\n if result:\n modules[result[0]] = result[1:]\n\n\ndef enumerate_modules(config, show_all=False):\n \"\"\"Map the names of modules to the location of their file.\n\n Return a dict mapping the names of modules to a tuple of the module name,\n the pathname and either `imp.PY_SOURCE` or `imp.PKG_DIRECTORY`. This\n searches the regular modules directory and all directories specified in the\n `core.extra` attribute of the `config` object. If two modules have the same\n name, the last one to be found will be returned and the rest will be\n ignored. Modules are found starting in the regular directory, followed by\n `~/.sopel/modules`, and then through the extra directories in the order\n that the are specified.\n\n If `show_all` is given as `True`, the `enable` and `exclude`\n configuration options will be ignored, and all modules will be shown\n (though duplicates will still be ignored as above).\n \"\"\"\n modules = {}\n\n # First, add modules from the regular modules directory\n main_dir = os.path.dirname(os.path.abspath(__file__))\n modules_dir = os.path.join(main_dir, 'modules')\n _update_modules_from_dir(modules, modules_dir)\n for path in os.listdir(modules_dir):\n break\n\n # Then, find PyPI installed modules\n # TODO does this work with all possible install mechanisms?\n try:\n import sopel_modules\n except:\n pass\n else:\n for directory in sopel_modules.__path__:\n _update_modules_from_dir(modules, directory)\n\n # Next, look in ~/.sopel/modules\n home_modules_dir = os.path.join(config.homedir, 'modules')\n if not os.path.isdir(home_modules_dir):\n os.makedirs(home_modules_dir)\n _update_modules_from_dir(modules, home_modules_dir)\n\n # Last, look at all the extra directories.\n for directory in config.core.extra:\n _update_modules_from_dir(modules, directory)\n\n # Coretasks is special. No custom user coretasks.\n ct_path = os.path.join(main_dir, 'coretasks.py')\n modules['coretasks'] = (ct_path, imp.PY_SOURCE)\n\n # If caller wants all of them, don't apply white and blacklists\n if show_all:\n return modules\n\n # Apply whitelist, if present\n enable = config.core.enable\n if enable:\n enabled_modules = {'coretasks': modules['coretasks']}\n for module in enable:\n if module in modules:\n enabled_modules[module] = modules[module]\n modules = enabled_modules\n\n # Apply blacklist, if present\n exclude = config.core.exclude\n for module in exclude:\n if module in modules:\n del modules[module]\n\n return modules\n\n\ndef compile_rule(nick, pattern):\n pattern = pattern.replace('$nickname', nick)\n pattern = pattern.replace('$nick', r'{}[,:]\\s+'.format(nick))\n flags = re.IGNORECASE\n if '\\n' in pattern:\n flags |= re.VERBOSE\n return re.compile(pattern, flags)\n\n\ndef trim_docstring(doc):\n \"\"\"Get the docstring as a series of lines that can be sent\"\"\"\n if not doc:\n return []\n lines = doc.expandtabs().splitlines()\n indent = sys.maxsize\n for line in lines[1:]:\n stripped = line.lstrip()\n if stripped:\n indent = min(indent, len(line) - len(stripped))\n trimmed = [lines[0].strip()]\n if indent < sys.maxsize:\n for line in lines[1:]:\n trimmed.append(line[:].rstrip())\n while trimmed and not trimmed[-1]:\n trimmed.pop()\n while trimmed and not trimmed[0]:\n trimmed.pop(0)\n return trimmed\n\n\ndef clean_callable(func, config):\n \"\"\"Compiles the regexes, moves commands into func.rule, fixes up docs and\n puts them in func._docs, and sets defaults\"\"\"\n nick = config.core.nick\n prefix = config.core.prefix\n help_prefix = config.core.help_prefix\n func._docs = {}\n doc = trim_docstring(func.__doc__)\n example = None\n\n func.unblockable = getattr(func, 'unblockable', True)\n func.priority = getattr(func, 'priority', 'medium')\n func.thread = getattr(func, 'thread', True)\n func.rate = getattr(func, 'rate', 0)\n\n if not hasattr(func, 'event'):\n func.event = ['PRIVMSG']\n else:\n if isinstance(func.event, basestring):\n func.event = [func.event.upper()]\n else:\n func.event = [event.upper() for event in func.event]\n\n if hasattr(func, 'rule'):\n if isinstance(func.rule, basestring):\n func.rule = [func.rule]\n func.rule = [compile_rule(nick, rule) for rule in func.rule]\n\n if hasattr(func, 'commands'):\n func.rule = getattr(func, 'rule', [])\n for command in func.commands:\n regexp = get_command_regexp(prefix, command)\n func.rule.append(regexp)\n if hasattr(func, 'example'):\n example = func.example[0][\"example\"]\n example = example.replace('$nickname', nick)\n if example[0] != help_prefix:\n example = help_prefix + example[len(help_prefix):]\n if doc or example:\n for command in func.commands:\n func._docs[command] = (doc, example)\n\n\ndef load_module(name, path, type_):\n \"\"\"Load a module, and sort out the callables and shutdowns\"\"\"\n if type_ == imp.PY_SOURCE:\n with open(path) as mod:\n module = imp.load_module(name, mod, path, ('.py', 'U', type_))\n elif type_ == imp.PKG_DIRECTORY:\n module = imp.load_module(name, None, path, ('', '', type_))\n else:\n raise TypeError('Unsupported module type')\n return module, os.path.getmtime(path)\n\n\ndef is_triggerable(obj):\n return any(hasattr(obj, attr) for attr in ('rule', 'rule', 'intent',\n 'commands'))\n\n\ndef clean_module(module, config):\n callables = []\n shutdowns = []\n jobs = []\n for obj in itervalues(vars(module)):\n if callable(obj):\n if getattr(obj, '__name__', None) == 'shutdown':\n shutdowns.append(obj)\n elif is_triggerable(obj):\n clean_callable(obj, config)\n callables.append(obj)\n elif hasattr(obj, 'interval'):\n clean_callable(obj, config)\n jobs.append(obj)\n return callables, jobs, shutdowns\n", "path": "sopel/loader.py"}]}
| 2,681 | 112 |
gh_patches_debug_10213
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-3111
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Click twice in the 404 sustainability link doesn't work
## Steps to reproduce it
1. Go to https://readthedocs.org/humitos
2. You will see a 404 page with a sustainability link (https://readthedocs.org/sustainability/click/90/EdfO7Jed1YQr/)
3. Click on it
4. It goes to Sentry home page
5. Go back and click it again
## Expected Result
Go to Sentry again.
## Actual Result
You get **a new** 404 page with a new link :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/donate/views.py`
Content:
```
1 """Donation views"""
2 # We use 'hash' heavily in the API here.
3 # pylint: disable=redefined-builtin
4
5 from __future__ import absolute_import
6 import logging
7
8 from django.views.generic import TemplateView
9 from django.core.urlresolvers import reverse
10 from django.utils.translation import ugettext_lazy as _
11 from django.shortcuts import redirect, get_object_or_404, render_to_response
12 from django.template import RequestContext
13 from django.core.cache import cache
14 from django.http import Http404
15
16 from vanilla import CreateView, ListView
17
18 from readthedocs.donate.utils import offer_promo
19 from readthedocs.payments.mixins import StripeMixin
20 from readthedocs.projects.models import Project
21 from readthedocs.redirects.utils import get_redirect_response
22
23 from .models import Supporter, SupporterPromo
24 from .constants import CLICKS, VIEWS
25 from .forms import SupporterForm, EthicalAdForm
26 from .mixins import DonateProgressMixin
27
28 log = logging.getLogger(__name__)
29
30
31 class PayAdsView(StripeMixin, CreateView):
32
33 """Create a payment locally and in Stripe"""
34
35 form_class = EthicalAdForm
36 success_message = _('Your payment has been received')
37 template_name = 'donate/ethicalads.html'
38
39 def get_success_url(self):
40 return reverse('pay_success')
41
42
43 class PaySuccess(TemplateView):
44 template_name = 'donate/ethicalads-success.html'
45
46
47 class DonateCreateView(StripeMixin, CreateView):
48
49 """Create a donation locally and in Stripe"""
50
51 form_class = SupporterForm
52 success_message = _('Your contribution has been received')
53 template_name = 'donate/create.html'
54
55 def get_success_url(self):
56 return reverse('donate_success')
57
58 def get_initial(self):
59 return {'dollars': self.request.GET.get('dollars', 50)}
60
61 def get_form(self, data=None, files=None, **kwargs):
62 kwargs['user'] = self.request.user
63 return super(DonateCreateView, self).get_form(data, files, **kwargs)
64
65
66 class DonateSuccessView(TemplateView):
67 template_name = 'donate/success.html'
68
69
70 class DonateListView(DonateProgressMixin, ListView):
71
72 """Donation list and detail view"""
73
74 template_name = 'donate/list.html'
75 model = Supporter
76 context_object_name = 'supporters'
77
78 def get_queryset(self):
79 return (Supporter.objects
80 .filter(public=True)
81 .order_by('-dollars', '-pub_date'))
82
83 def get_template_names(self):
84 return [self.template_name]
85
86
87 class PromoDetailView(TemplateView):
88 template_name = 'donate/promo_detail.html'
89
90 def get_context_data(self, **kwargs):
91 promo_slug = kwargs['promo_slug']
92 days = int(self.request.GET.get('days', 90))
93
94 if promo_slug == 'live' and self.request.user.is_staff:
95 promos = SupporterPromo.objects.filter(live=True)
96 elif promo_slug[-1] == '*' and '-' in promo_slug:
97 promos = SupporterPromo.objects.filter(
98 analytics_id__contains=promo_slug.replace('*', '')
99 )
100 else:
101 slugs = promo_slug.split(',')
102 promos = SupporterPromo.objects.filter(analytics_id__in=slugs)
103
104 total_clicks = sum(promo.total_clicks() for promo in promos)
105
106 return {
107 'promos': promos,
108 'total_clicks': total_clicks,
109 'days': days,
110 'days_slice': ':%s' % days,
111 }
112
113
114 def click_proxy(request, promo_id, hash):
115 """Track a click on a promotion and redirect to the link."""
116 promo = get_object_or_404(SupporterPromo, pk=promo_id)
117 count = cache.get(promo.cache_key(type=CLICKS, hash=hash), None)
118 if count is None:
119 log.warning('Old or nonexistent hash tried on Click.')
120 elif count == 0:
121 promo.incr(CLICKS)
122 cache.incr(promo.cache_key(type=CLICKS, hash=hash))
123 project_slug = cache.get(
124 promo.cache_key(type='project', hash=hash),
125 None
126 )
127 if project_slug:
128 project = Project.objects.get(slug=project_slug)
129 promo.incr(CLICKS, project=project)
130 else:
131 agent = request.META.get('HTTP_USER_AGENT', 'Unknown')
132 log.warning(
133 'Duplicate click logged. {count} total clicks tried. User Agent: [{agent}]'.format(
134 count=count, agent=agent
135 )
136 )
137 cache.incr(promo.cache_key(type=CLICKS, hash=hash))
138 raise Http404('Invalid click. This has been logged.')
139 return redirect(promo.link)
140
141
142 def view_proxy(request, promo_id, hash):
143 """Track a view of a promotion and redirect to the image."""
144 promo = get_object_or_404(SupporterPromo, pk=promo_id)
145 if not promo.image:
146 raise Http404('No image defined for this promo.')
147 count = cache.get(promo.cache_key(type=VIEWS, hash=hash), None)
148 if count is None:
149 log.warning('Old or nonexistent hash tried on View.')
150 elif count == 0:
151 promo.incr(VIEWS)
152 cache.incr(promo.cache_key(type=VIEWS, hash=hash))
153 project_slug = cache.get(
154 promo.cache_key(type='project', hash=hash),
155 None
156 )
157 if project_slug:
158 project = Project.objects.get(slug=project_slug)
159 promo.incr(VIEWS, project=project)
160 else:
161 agent = request.META.get('HTTP_USER_AGENT', 'Unknown')
162 log.warning(
163 'Duplicate view logged. {count} total views tried. User Agent: [{agent}]'.format(
164 count=count, agent=agent
165 )
166 )
167 cache.incr(promo.cache_key(type=VIEWS, hash=hash))
168 raise Http404('Invalid click. This has been logged.')
169 return redirect(promo.image)
170
171
172 def _add_promo_data(display_type):
173 promo_queryset = SupporterPromo.objects.filter(live=True, display_type=display_type)
174 promo_obj = promo_queryset.order_by('?').first()
175 if promo_obj:
176 promo_dict = offer_promo(promo_obj=promo_obj, project=None)
177 else:
178 promo_dict = None
179 return promo_dict
180
181
182 def promo_500(request, template_name='donate/promo_500.html', **__):
183 """A simple 500 handler so we get media"""
184 promo_dict = _add_promo_data(display_type='error')
185 r = render_to_response(template_name,
186 context_instance=RequestContext(request),
187 context={
188 'promo_data': promo_dict,
189 })
190 r.status_code = 500
191 return r
192
193
194 def promo_404(request, template_name='donate/promo_404.html', **__):
195 """A simple 404 handler so we get media"""
196 promo_dict = _add_promo_data(display_type='error')
197 response = get_redirect_response(request, path=request.get_full_path())
198 if response:
199 return response
200 r = render_to_response(template_name,
201 context_instance=RequestContext(request),
202 context={
203 'promo_data': promo_dict,
204 })
205 r.status_code = 404
206 return r
207
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/readthedocs/donate/views.py b/readthedocs/donate/views.py
--- a/readthedocs/donate/views.py
+++ b/readthedocs/donate/views.py
@@ -135,7 +135,6 @@
)
)
cache.incr(promo.cache_key(type=CLICKS, hash=hash))
- raise Http404('Invalid click. This has been logged.')
return redirect(promo.link)
@@ -165,7 +164,6 @@
)
)
cache.incr(promo.cache_key(type=VIEWS, hash=hash))
- raise Http404('Invalid click. This has been logged.')
return redirect(promo.image)
|
{"golden_diff": "diff --git a/readthedocs/donate/views.py b/readthedocs/donate/views.py\n--- a/readthedocs/donate/views.py\n+++ b/readthedocs/donate/views.py\n@@ -135,7 +135,6 @@\n )\n )\n cache.incr(promo.cache_key(type=CLICKS, hash=hash))\n- raise Http404('Invalid click. This has been logged.')\n return redirect(promo.link)\n \n \n@@ -165,7 +164,6 @@\n )\n )\n cache.incr(promo.cache_key(type=VIEWS, hash=hash))\n- raise Http404('Invalid click. This has been logged.')\n return redirect(promo.image)\n", "issue": "Click twice in the 404 sustainability link doesn't work\n## Steps to reproduce it\r\n\r\n1. Go to https://readthedocs.org/humitos\r\n2. You will see a 404 page with a sustainability link (https://readthedocs.org/sustainability/click/90/EdfO7Jed1YQr/)\r\n3. Click on it\r\n4. It goes to Sentry home page\r\n5. Go back and click it again\r\n\r\n## Expected Result\r\n\r\nGo to Sentry again.\r\n\r\n## Actual Result\r\n\r\nYou get **a new** 404 page with a new link :)\r\n\n", "before_files": [{"content": "\"\"\"Donation views\"\"\"\n# We use 'hash' heavily in the API here.\n# pylint: disable=redefined-builtin\n\nfrom __future__ import absolute_import\nimport logging\n\nfrom django.views.generic import TemplateView\nfrom django.core.urlresolvers import reverse\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.shortcuts import redirect, get_object_or_404, render_to_response\nfrom django.template import RequestContext\nfrom django.core.cache import cache\nfrom django.http import Http404\n\nfrom vanilla import CreateView, ListView\n\nfrom readthedocs.donate.utils import offer_promo\nfrom readthedocs.payments.mixins import StripeMixin\nfrom readthedocs.projects.models import Project\nfrom readthedocs.redirects.utils import get_redirect_response\n\nfrom .models import Supporter, SupporterPromo\nfrom .constants import CLICKS, VIEWS\nfrom .forms import SupporterForm, EthicalAdForm\nfrom .mixins import DonateProgressMixin\n\nlog = logging.getLogger(__name__)\n\n\nclass PayAdsView(StripeMixin, CreateView):\n\n \"\"\"Create a payment locally and in Stripe\"\"\"\n\n form_class = EthicalAdForm\n success_message = _('Your payment has been received')\n template_name = 'donate/ethicalads.html'\n\n def get_success_url(self):\n return reverse('pay_success')\n\n\nclass PaySuccess(TemplateView):\n template_name = 'donate/ethicalads-success.html'\n\n\nclass DonateCreateView(StripeMixin, CreateView):\n\n \"\"\"Create a donation locally and in Stripe\"\"\"\n\n form_class = SupporterForm\n success_message = _('Your contribution has been received')\n template_name = 'donate/create.html'\n\n def get_success_url(self):\n return reverse('donate_success')\n\n def get_initial(self):\n return {'dollars': self.request.GET.get('dollars', 50)}\n\n def get_form(self, data=None, files=None, **kwargs):\n kwargs['user'] = self.request.user\n return super(DonateCreateView, self).get_form(data, files, **kwargs)\n\n\nclass DonateSuccessView(TemplateView):\n template_name = 'donate/success.html'\n\n\nclass DonateListView(DonateProgressMixin, ListView):\n\n \"\"\"Donation list and detail view\"\"\"\n\n template_name = 'donate/list.html'\n model = Supporter\n context_object_name = 'supporters'\n\n def get_queryset(self):\n return (Supporter.objects\n .filter(public=True)\n .order_by('-dollars', '-pub_date'))\n\n def get_template_names(self):\n return [self.template_name]\n\n\nclass PromoDetailView(TemplateView):\n template_name = 'donate/promo_detail.html'\n\n def get_context_data(self, **kwargs):\n promo_slug = kwargs['promo_slug']\n days = int(self.request.GET.get('days', 90))\n\n if promo_slug == 'live' and self.request.user.is_staff:\n promos = SupporterPromo.objects.filter(live=True)\n elif promo_slug[-1] == '*' and '-' in promo_slug:\n promos = SupporterPromo.objects.filter(\n analytics_id__contains=promo_slug.replace('*', '')\n )\n else:\n slugs = promo_slug.split(',')\n promos = SupporterPromo.objects.filter(analytics_id__in=slugs)\n\n total_clicks = sum(promo.total_clicks() for promo in promos)\n\n return {\n 'promos': promos,\n 'total_clicks': total_clicks,\n 'days': days,\n 'days_slice': ':%s' % days,\n }\n\n\ndef click_proxy(request, promo_id, hash):\n \"\"\"Track a click on a promotion and redirect to the link.\"\"\"\n promo = get_object_or_404(SupporterPromo, pk=promo_id)\n count = cache.get(promo.cache_key(type=CLICKS, hash=hash), None)\n if count is None:\n log.warning('Old or nonexistent hash tried on Click.')\n elif count == 0:\n promo.incr(CLICKS)\n cache.incr(promo.cache_key(type=CLICKS, hash=hash))\n project_slug = cache.get(\n promo.cache_key(type='project', hash=hash),\n None\n )\n if project_slug:\n project = Project.objects.get(slug=project_slug)\n promo.incr(CLICKS, project=project)\n else:\n agent = request.META.get('HTTP_USER_AGENT', 'Unknown')\n log.warning(\n 'Duplicate click logged. {count} total clicks tried. User Agent: [{agent}]'.format(\n count=count, agent=agent\n )\n )\n cache.incr(promo.cache_key(type=CLICKS, hash=hash))\n raise Http404('Invalid click. This has been logged.')\n return redirect(promo.link)\n\n\ndef view_proxy(request, promo_id, hash):\n \"\"\"Track a view of a promotion and redirect to the image.\"\"\"\n promo = get_object_or_404(SupporterPromo, pk=promo_id)\n if not promo.image:\n raise Http404('No image defined for this promo.')\n count = cache.get(promo.cache_key(type=VIEWS, hash=hash), None)\n if count is None:\n log.warning('Old or nonexistent hash tried on View.')\n elif count == 0:\n promo.incr(VIEWS)\n cache.incr(promo.cache_key(type=VIEWS, hash=hash))\n project_slug = cache.get(\n promo.cache_key(type='project', hash=hash),\n None\n )\n if project_slug:\n project = Project.objects.get(slug=project_slug)\n promo.incr(VIEWS, project=project)\n else:\n agent = request.META.get('HTTP_USER_AGENT', 'Unknown')\n log.warning(\n 'Duplicate view logged. {count} total views tried. User Agent: [{agent}]'.format(\n count=count, agent=agent\n )\n )\n cache.incr(promo.cache_key(type=VIEWS, hash=hash))\n raise Http404('Invalid click. This has been logged.')\n return redirect(promo.image)\n\n\ndef _add_promo_data(display_type):\n promo_queryset = SupporterPromo.objects.filter(live=True, display_type=display_type)\n promo_obj = promo_queryset.order_by('?').first()\n if promo_obj:\n promo_dict = offer_promo(promo_obj=promo_obj, project=None)\n else:\n promo_dict = None\n return promo_dict\n\n\ndef promo_500(request, template_name='donate/promo_500.html', **__):\n \"\"\"A simple 500 handler so we get media\"\"\"\n promo_dict = _add_promo_data(display_type='error')\n r = render_to_response(template_name,\n context_instance=RequestContext(request),\n context={\n 'promo_data': promo_dict,\n })\n r.status_code = 500\n return r\n\n\ndef promo_404(request, template_name='donate/promo_404.html', **__):\n \"\"\"A simple 404 handler so we get media\"\"\"\n promo_dict = _add_promo_data(display_type='error')\n response = get_redirect_response(request, path=request.get_full_path())\n if response:\n return response\n r = render_to_response(template_name,\n context_instance=RequestContext(request),\n context={\n 'promo_data': promo_dict,\n })\n r.status_code = 404\n return r\n", "path": "readthedocs/donate/views.py"}], "after_files": [{"content": "\"\"\"Donation views\"\"\"\n# We use 'hash' heavily in the API here.\n# pylint: disable=redefined-builtin\n\nfrom __future__ import absolute_import\nimport logging\n\nfrom django.views.generic import TemplateView\nfrom django.core.urlresolvers import reverse\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.shortcuts import redirect, get_object_or_404, render_to_response\nfrom django.template import RequestContext\nfrom django.core.cache import cache\nfrom django.http import Http404\n\nfrom vanilla import CreateView, ListView\n\nfrom readthedocs.donate.utils import offer_promo\nfrom readthedocs.payments.mixins import StripeMixin\nfrom readthedocs.projects.models import Project\nfrom readthedocs.redirects.utils import get_redirect_response\n\nfrom .models import Supporter, SupporterPromo\nfrom .constants import CLICKS, VIEWS\nfrom .forms import SupporterForm, EthicalAdForm\nfrom .mixins import DonateProgressMixin\n\nlog = logging.getLogger(__name__)\n\n\nclass PayAdsView(StripeMixin, CreateView):\n\n \"\"\"Create a payment locally and in Stripe\"\"\"\n\n form_class = EthicalAdForm\n success_message = _('Your payment has been received')\n template_name = 'donate/ethicalads.html'\n\n def get_success_url(self):\n return reverse('pay_success')\n\n\nclass PaySuccess(TemplateView):\n template_name = 'donate/ethicalads-success.html'\n\n\nclass DonateCreateView(StripeMixin, CreateView):\n\n \"\"\"Create a donation locally and in Stripe\"\"\"\n\n form_class = SupporterForm\n success_message = _('Your contribution has been received')\n template_name = 'donate/create.html'\n\n def get_success_url(self):\n return reverse('donate_success')\n\n def get_initial(self):\n return {'dollars': self.request.GET.get('dollars', 50)}\n\n def get_form(self, data=None, files=None, **kwargs):\n kwargs['user'] = self.request.user\n return super(DonateCreateView, self).get_form(data, files, **kwargs)\n\n\nclass DonateSuccessView(TemplateView):\n template_name = 'donate/success.html'\n\n\nclass DonateListView(DonateProgressMixin, ListView):\n\n \"\"\"Donation list and detail view\"\"\"\n\n template_name = 'donate/list.html'\n model = Supporter\n context_object_name = 'supporters'\n\n def get_queryset(self):\n return (Supporter.objects\n .filter(public=True)\n .order_by('-dollars', '-pub_date'))\n\n def get_template_names(self):\n return [self.template_name]\n\n\nclass PromoDetailView(TemplateView):\n template_name = 'donate/promo_detail.html'\n\n def get_context_data(self, **kwargs):\n promo_slug = kwargs['promo_slug']\n days = int(self.request.GET.get('days', 90))\n\n if promo_slug == 'live' and self.request.user.is_staff:\n promos = SupporterPromo.objects.filter(live=True)\n elif promo_slug[-1] == '*' and '-' in promo_slug:\n promos = SupporterPromo.objects.filter(\n analytics_id__contains=promo_slug.replace('*', '')\n )\n else:\n slugs = promo_slug.split(',')\n promos = SupporterPromo.objects.filter(analytics_id__in=slugs)\n\n total_clicks = sum(promo.total_clicks() for promo in promos)\n\n return {\n 'promos': promos,\n 'total_clicks': total_clicks,\n 'days': days,\n 'days_slice': ':%s' % days,\n }\n\n\ndef click_proxy(request, promo_id, hash):\n \"\"\"Track a click on a promotion and redirect to the link.\"\"\"\n promo = get_object_or_404(SupporterPromo, pk=promo_id)\n count = cache.get(promo.cache_key(type=CLICKS, hash=hash), None)\n if count is None:\n log.warning('Old or nonexistent hash tried on Click.')\n elif count == 0:\n promo.incr(CLICKS)\n cache.incr(promo.cache_key(type=CLICKS, hash=hash))\n project_slug = cache.get(\n promo.cache_key(type='project', hash=hash),\n None\n )\n if project_slug:\n project = Project.objects.get(slug=project_slug)\n promo.incr(CLICKS, project=project)\n else:\n agent = request.META.get('HTTP_USER_AGENT', 'Unknown')\n log.warning(\n 'Duplicate click logged. {count} total clicks tried. User Agent: [{agent}]'.format(\n count=count, agent=agent\n )\n )\n cache.incr(promo.cache_key(type=CLICKS, hash=hash))\n return redirect(promo.link)\n\n\ndef view_proxy(request, promo_id, hash):\n \"\"\"Track a view of a promotion and redirect to the image.\"\"\"\n promo = get_object_or_404(SupporterPromo, pk=promo_id)\n if not promo.image:\n raise Http404('No image defined for this promo.')\n count = cache.get(promo.cache_key(type=VIEWS, hash=hash), None)\n if count is None:\n log.warning('Old or nonexistent hash tried on View.')\n elif count == 0:\n promo.incr(VIEWS)\n cache.incr(promo.cache_key(type=VIEWS, hash=hash))\n project_slug = cache.get(\n promo.cache_key(type='project', hash=hash),\n None\n )\n if project_slug:\n project = Project.objects.get(slug=project_slug)\n promo.incr(VIEWS, project=project)\n else:\n agent = request.META.get('HTTP_USER_AGENT', 'Unknown')\n log.warning(\n 'Duplicate view logged. {count} total views tried. User Agent: [{agent}]'.format(\n count=count, agent=agent\n )\n )\n cache.incr(promo.cache_key(type=VIEWS, hash=hash))\n return redirect(promo.image)\n\n\ndef _add_promo_data(display_type):\n promo_queryset = SupporterPromo.objects.filter(live=True, display_type=display_type)\n promo_obj = promo_queryset.order_by('?').first()\n if promo_obj:\n promo_dict = offer_promo(promo_obj=promo_obj, project=None)\n else:\n promo_dict = None\n return promo_dict\n\n\ndef promo_500(request, template_name='donate/promo_500.html', **__):\n \"\"\"A simple 500 handler so we get media\"\"\"\n promo_dict = _add_promo_data(display_type='error')\n r = render_to_response(template_name,\n context_instance=RequestContext(request),\n context={\n 'promo_data': promo_dict,\n })\n r.status_code = 500\n return r\n\n\ndef promo_404(request, template_name='donate/promo_404.html', **__):\n \"\"\"A simple 404 handler so we get media\"\"\"\n promo_dict = _add_promo_data(display_type='error')\n response = get_redirect_response(request, path=request.get_full_path())\n if response:\n return response\n r = render_to_response(template_name,\n context_instance=RequestContext(request),\n context={\n 'promo_data': promo_dict,\n })\n r.status_code = 404\n return r\n", "path": "readthedocs/donate/views.py"}]}
| 2,532 | 161 |
gh_patches_debug_10832
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-309
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
unexpected 'str' during Arrow test
testing on Windows Python 3.4.4/Qt5.5.1/ PyQtgraph github of 20160102, I have thefollowing error on "Arrow" test:
```
Using PyQt5 (default graphics system)
Using PyQt5 (default graphics system)
Using PyQt5 (default graphics system)
Using PyQt5 (default graphics system)
Traceback (most recent call last):
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\examples\Arrow.py", line 50, in <module>
anim = a.makeAnimation(loop=-1)
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\graphicsItems\CurvePoint.py", line 94, in makeAnima
tion
anim = QtCore.QPropertyAnimation(self, prop)
TypeError: arguments did not match any overloaded call:
QPropertyAnimation(QObject parent=None): too many arguments
QPropertyAnimation(QObject, QByteArray, QObject parent=None): argument 2 has u
nexpected type 'str'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyqtgraph/graphicsItems/CurvePoint.py`
Content:
```
1 from ..Qt import QtGui, QtCore
2 from . import ArrowItem
3 import numpy as np
4 from ..Point import Point
5 import weakref
6 from .GraphicsObject import GraphicsObject
7
8 __all__ = ['CurvePoint', 'CurveArrow']
9 class CurvePoint(GraphicsObject):
10 """A GraphicsItem that sets its location to a point on a PlotCurveItem.
11 Also rotates to be tangent to the curve.
12 The position along the curve is a Qt property, and thus can be easily animated.
13
14 Note: This class does not display anything; see CurveArrow for an applied example
15 """
16
17 def __init__(self, curve, index=0, pos=None, rotate=True):
18 """Position can be set either as an index referring to the sample number or
19 the position 0.0 - 1.0
20 If *rotate* is True, then the item rotates to match the tangent of the curve.
21 """
22
23 GraphicsObject.__init__(self)
24 #QObjectWorkaround.__init__(self)
25 self._rotate = rotate
26 self.curve = weakref.ref(curve)
27 self.setParentItem(curve)
28 self.setProperty('position', 0.0)
29 self.setProperty('index', 0)
30
31 if hasattr(self, 'ItemHasNoContents'):
32 self.setFlags(self.flags() | self.ItemHasNoContents)
33
34 if pos is not None:
35 self.setPos(pos)
36 else:
37 self.setIndex(index)
38
39 def setPos(self, pos):
40 self.setProperty('position', float(pos))## cannot use numpy types here, MUST be python float.
41
42 def setIndex(self, index):
43 self.setProperty('index', int(index)) ## cannot use numpy types here, MUST be python int.
44
45 def event(self, ev):
46 if not isinstance(ev, QtCore.QDynamicPropertyChangeEvent) or self.curve() is None:
47 return False
48
49 if ev.propertyName() == 'index':
50 index = self.property('index')
51 if 'QVariant' in repr(index):
52 index = index.toInt()[0]
53 elif ev.propertyName() == 'position':
54 index = None
55 else:
56 return False
57
58 (x, y) = self.curve().getData()
59 if index is None:
60 #print ev.propertyName(), self.property('position').toDouble()[0], self.property('position').typeName()
61 pos = self.property('position')
62 if 'QVariant' in repr(pos): ## need to support 2 APIs :(
63 pos = pos.toDouble()[0]
64 index = (len(x)-1) * np.clip(pos, 0.0, 1.0)
65
66 if index != int(index): ## interpolate floating-point values
67 i1 = int(index)
68 i2 = np.clip(i1+1, 0, len(x)-1)
69 s2 = index-i1
70 s1 = 1.0-s2
71 newPos = (x[i1]*s1+x[i2]*s2, y[i1]*s1+y[i2]*s2)
72 else:
73 index = int(index)
74 i1 = np.clip(index-1, 0, len(x)-1)
75 i2 = np.clip(index+1, 0, len(x)-1)
76 newPos = (x[index], y[index])
77
78 p1 = self.parentItem().mapToScene(QtCore.QPointF(x[i1], y[i1]))
79 p2 = self.parentItem().mapToScene(QtCore.QPointF(x[i2], y[i2]))
80 ang = np.arctan2(p2.y()-p1.y(), p2.x()-p1.x()) ## returns radians
81 self.resetTransform()
82 if self._rotate:
83 self.rotate(180+ ang * 180 / np.pi) ## takes degrees
84 QtGui.QGraphicsItem.setPos(self, *newPos)
85 return True
86
87 def boundingRect(self):
88 return QtCore.QRectF()
89
90 def paint(self, *args):
91 pass
92
93 def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):
94 anim = QtCore.QPropertyAnimation(self, prop)
95 anim.setDuration(duration)
96 anim.setStartValue(start)
97 anim.setEndValue(end)
98 anim.setLoopCount(loop)
99 return anim
100
101
102 class CurveArrow(CurvePoint):
103 """Provides an arrow that points to any specific sample on a PlotCurveItem.
104 Provides properties that can be animated."""
105
106 def __init__(self, curve, index=0, pos=None, **opts):
107 CurvePoint.__init__(self, curve, index=index, pos=pos)
108 if opts.get('pxMode', True):
109 opts['pxMode'] = False
110 self.setFlags(self.flags() | self.ItemIgnoresTransformations)
111 opts['angle'] = 0
112 self.arrow = ArrowItem.ArrowItem(**opts)
113 self.arrow.setParentItem(self)
114
115 def setStyle(self, **opts):
116 return self.arrow.setStyle(**opts)
117
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyqtgraph/graphicsItems/CurvePoint.py b/pyqtgraph/graphicsItems/CurvePoint.py
--- a/pyqtgraph/graphicsItems/CurvePoint.py
+++ b/pyqtgraph/graphicsItems/CurvePoint.py
@@ -91,6 +91,11 @@
pass
def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):
+ # In Python 3, a bytes object needs to be used as a property name in
+ # QPropertyAnimation. PyQt stopped automatically encoding a str when a
+ # QByteArray was expected in v5.5 (see qbytearray.sip).
+ if not isinstance(prop, bytes):
+ prop = prop.encode('latin-1')
anim = QtCore.QPropertyAnimation(self, prop)
anim.setDuration(duration)
anim.setStartValue(start)
|
{"golden_diff": "diff --git a/pyqtgraph/graphicsItems/CurvePoint.py b/pyqtgraph/graphicsItems/CurvePoint.py\n--- a/pyqtgraph/graphicsItems/CurvePoint.py\n+++ b/pyqtgraph/graphicsItems/CurvePoint.py\n@@ -91,6 +91,11 @@\n pass\n \n def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):\n+ # In Python 3, a bytes object needs to be used as a property name in\n+ # QPropertyAnimation. PyQt stopped automatically encoding a str when a\n+ # QByteArray was expected in v5.5 (see qbytearray.sip).\n+ if not isinstance(prop, bytes):\n+ prop = prop.encode('latin-1')\n anim = QtCore.QPropertyAnimation(self, prop)\n anim.setDuration(duration)\n anim.setStartValue(start)\n", "issue": "unexpected 'str' during Arrow test\ntesting on Windows Python 3.4.4/Qt5.5.1/ PyQtgraph github of 20160102, I have thefollowing error on \"Arrow\" test:\n\n```\nUsing PyQt5 (default graphics system)\nUsing PyQt5 (default graphics system)\nUsing PyQt5 (default graphics system)\nUsing PyQt5 (default graphics system)\nTraceback (most recent call last):\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\examples\\Arrow.py\", line 50, in <module>\n anim = a.makeAnimation(loop=-1)\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\graphicsItems\\CurvePoint.py\", line 94, in makeAnima\ntion\n anim = QtCore.QPropertyAnimation(self, prop)\nTypeError: arguments did not match any overloaded call:\n QPropertyAnimation(QObject parent=None): too many arguments\n QPropertyAnimation(QObject, QByteArray, QObject parent=None): argument 2 has u\nnexpected type 'str'\n\n```\n\n", "before_files": [{"content": "from ..Qt import QtGui, QtCore\nfrom . import ArrowItem\nimport numpy as np\nfrom ..Point import Point\nimport weakref\nfrom .GraphicsObject import GraphicsObject\n\n__all__ = ['CurvePoint', 'CurveArrow']\nclass CurvePoint(GraphicsObject):\n \"\"\"A GraphicsItem that sets its location to a point on a PlotCurveItem.\n Also rotates to be tangent to the curve.\n The position along the curve is a Qt property, and thus can be easily animated.\n \n Note: This class does not display anything; see CurveArrow for an applied example\n \"\"\"\n \n def __init__(self, curve, index=0, pos=None, rotate=True):\n \"\"\"Position can be set either as an index referring to the sample number or\n the position 0.0 - 1.0\n If *rotate* is True, then the item rotates to match the tangent of the curve.\n \"\"\"\n \n GraphicsObject.__init__(self)\n #QObjectWorkaround.__init__(self)\n self._rotate = rotate\n self.curve = weakref.ref(curve)\n self.setParentItem(curve)\n self.setProperty('position', 0.0)\n self.setProperty('index', 0)\n \n if hasattr(self, 'ItemHasNoContents'):\n self.setFlags(self.flags() | self.ItemHasNoContents)\n \n if pos is not None:\n self.setPos(pos)\n else:\n self.setIndex(index)\n \n def setPos(self, pos):\n self.setProperty('position', float(pos))## cannot use numpy types here, MUST be python float.\n \n def setIndex(self, index):\n self.setProperty('index', int(index)) ## cannot use numpy types here, MUST be python int.\n \n def event(self, ev):\n if not isinstance(ev, QtCore.QDynamicPropertyChangeEvent) or self.curve() is None:\n return False\n \n if ev.propertyName() == 'index':\n index = self.property('index')\n if 'QVariant' in repr(index):\n index = index.toInt()[0]\n elif ev.propertyName() == 'position':\n index = None\n else:\n return False\n \n (x, y) = self.curve().getData()\n if index is None:\n #print ev.propertyName(), self.property('position').toDouble()[0], self.property('position').typeName()\n pos = self.property('position')\n if 'QVariant' in repr(pos): ## need to support 2 APIs :(\n pos = pos.toDouble()[0]\n index = (len(x)-1) * np.clip(pos, 0.0, 1.0)\n \n if index != int(index): ## interpolate floating-point values\n i1 = int(index)\n i2 = np.clip(i1+1, 0, len(x)-1)\n s2 = index-i1\n s1 = 1.0-s2\n newPos = (x[i1]*s1+x[i2]*s2, y[i1]*s1+y[i2]*s2)\n else:\n index = int(index)\n i1 = np.clip(index-1, 0, len(x)-1)\n i2 = np.clip(index+1, 0, len(x)-1)\n newPos = (x[index], y[index])\n \n p1 = self.parentItem().mapToScene(QtCore.QPointF(x[i1], y[i1]))\n p2 = self.parentItem().mapToScene(QtCore.QPointF(x[i2], y[i2]))\n ang = np.arctan2(p2.y()-p1.y(), p2.x()-p1.x()) ## returns radians\n self.resetTransform()\n if self._rotate:\n self.rotate(180+ ang * 180 / np.pi) ## takes degrees\n QtGui.QGraphicsItem.setPos(self, *newPos)\n return True\n \n def boundingRect(self):\n return QtCore.QRectF()\n \n def paint(self, *args):\n pass\n \n def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):\n anim = QtCore.QPropertyAnimation(self, prop)\n anim.setDuration(duration)\n anim.setStartValue(start)\n anim.setEndValue(end)\n anim.setLoopCount(loop)\n return anim\n\n\nclass CurveArrow(CurvePoint):\n \"\"\"Provides an arrow that points to any specific sample on a PlotCurveItem.\n Provides properties that can be animated.\"\"\"\n \n def __init__(self, curve, index=0, pos=None, **opts):\n CurvePoint.__init__(self, curve, index=index, pos=pos)\n if opts.get('pxMode', True):\n opts['pxMode'] = False\n self.setFlags(self.flags() | self.ItemIgnoresTransformations)\n opts['angle'] = 0\n self.arrow = ArrowItem.ArrowItem(**opts)\n self.arrow.setParentItem(self)\n \n def setStyle(self, **opts):\n return self.arrow.setStyle(**opts)\n \n", "path": "pyqtgraph/graphicsItems/CurvePoint.py"}], "after_files": [{"content": "from ..Qt import QtGui, QtCore\nfrom . import ArrowItem\nimport numpy as np\nfrom ..Point import Point\nimport weakref\nfrom .GraphicsObject import GraphicsObject\n\n__all__ = ['CurvePoint', 'CurveArrow']\nclass CurvePoint(GraphicsObject):\n \"\"\"A GraphicsItem that sets its location to a point on a PlotCurveItem.\n Also rotates to be tangent to the curve.\n The position along the curve is a Qt property, and thus can be easily animated.\n \n Note: This class does not display anything; see CurveArrow for an applied example\n \"\"\"\n \n def __init__(self, curve, index=0, pos=None, rotate=True):\n \"\"\"Position can be set either as an index referring to the sample number or\n the position 0.0 - 1.0\n If *rotate* is True, then the item rotates to match the tangent of the curve.\n \"\"\"\n \n GraphicsObject.__init__(self)\n #QObjectWorkaround.__init__(self)\n self._rotate = rotate\n self.curve = weakref.ref(curve)\n self.setParentItem(curve)\n self.setProperty('position', 0.0)\n self.setProperty('index', 0)\n \n if hasattr(self, 'ItemHasNoContents'):\n self.setFlags(self.flags() | self.ItemHasNoContents)\n \n if pos is not None:\n self.setPos(pos)\n else:\n self.setIndex(index)\n \n def setPos(self, pos):\n self.setProperty('position', float(pos))## cannot use numpy types here, MUST be python float.\n \n def setIndex(self, index):\n self.setProperty('index', int(index)) ## cannot use numpy types here, MUST be python int.\n \n def event(self, ev):\n if not isinstance(ev, QtCore.QDynamicPropertyChangeEvent) or self.curve() is None:\n return False\n \n if ev.propertyName() == 'index':\n index = self.property('index')\n if 'QVariant' in repr(index):\n index = index.toInt()[0]\n elif ev.propertyName() == 'position':\n index = None\n else:\n return False\n \n (x, y) = self.curve().getData()\n if index is None:\n #print ev.propertyName(), self.property('position').toDouble()[0], self.property('position').typeName()\n pos = self.property('position')\n if 'QVariant' in repr(pos): ## need to support 2 APIs :(\n pos = pos.toDouble()[0]\n index = (len(x)-1) * np.clip(pos, 0.0, 1.0)\n \n if index != int(index): ## interpolate floating-point values\n i1 = int(index)\n i2 = np.clip(i1+1, 0, len(x)-1)\n s2 = index-i1\n s1 = 1.0-s2\n newPos = (x[i1]*s1+x[i2]*s2, y[i1]*s1+y[i2]*s2)\n else:\n index = int(index)\n i1 = np.clip(index-1, 0, len(x)-1)\n i2 = np.clip(index+1, 0, len(x)-1)\n newPos = (x[index], y[index])\n \n p1 = self.parentItem().mapToScene(QtCore.QPointF(x[i1], y[i1]))\n p2 = self.parentItem().mapToScene(QtCore.QPointF(x[i2], y[i2]))\n ang = np.arctan2(p2.y()-p1.y(), p2.x()-p1.x()) ## returns radians\n self.resetTransform()\n if self._rotate:\n self.rotate(180+ ang * 180 / np.pi) ## takes degrees\n QtGui.QGraphicsItem.setPos(self, *newPos)\n return True\n \n def boundingRect(self):\n return QtCore.QRectF()\n \n def paint(self, *args):\n pass\n \n def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):\n # In Python 3, a bytes object needs to be used as a property name in\n # QPropertyAnimation. PyQt stopped automatically encoding a str when a\n # QByteArray was expected in v5.5 (see qbytearray.sip).\n if not isinstance(prop, bytes):\n prop = prop.encode('latin-1')\n anim = QtCore.QPropertyAnimation(self, prop)\n anim.setDuration(duration)\n anim.setStartValue(start)\n anim.setEndValue(end)\n anim.setLoopCount(loop)\n return anim\n\n\nclass CurveArrow(CurvePoint):\n \"\"\"Provides an arrow that points to any specific sample on a PlotCurveItem.\n Provides properties that can be animated.\"\"\"\n \n def __init__(self, curve, index=0, pos=None, **opts):\n CurvePoint.__init__(self, curve, index=index, pos=pos)\n if opts.get('pxMode', True):\n opts['pxMode'] = False\n self.setFlags(self.flags() | self.ItemIgnoresTransformations)\n opts['angle'] = 0\n self.arrow = ArrowItem.ArrowItem(**opts)\n self.arrow.setParentItem(self)\n \n def setStyle(self, **opts):\n return self.arrow.setStyle(**opts)\n \n", "path": "pyqtgraph/graphicsItems/CurvePoint.py"}]}
| 1,894 | 202 |
gh_patches_debug_36818
|
rasdani/github-patches
|
git_diff
|
feast-dev__feast-636
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No exception when connecting to Kafka fails
Hey guys,
I have installed feast(0.4.4) via Helm3 on GKE
The Basic example is working until the ingestion (online serving) part
There I get ->>
Waiting for feature set to be ready for ingestion...
0%| | 0/15 [00:00<?, ?rows/s]
but there is no progress.
When I look on GCPs bigquery interface I can see that the project "customer_project" is created with the correct columns in "customer_transactions".
But for sure no data
get_feature_set gives me
{
"spec": {
"name": "customer_transactions",
"version": 1,
"entities": [
{
"name": "customer_id",
"valueType": "INT64"
}
],
"features": [
{
"name": "daily_transactions",
"valueType": "DOUBLE"
},
{
"name": "total_transactions",
"valueType": "INT64"
}
],
"maxAge": "432000s",
"source": {
"type": "KAFKA",
"kafkaSourceConfig": {
"bootstrapServers": "feast-kafka:9092",
"topic": "feast"
}
},
"project": "customer_project_1"
},
"meta": {
"createdTimestamp": "2020-04-15T10:26:51Z",
"status": "STATUS_READY"
}
}
I had to modify some port service setup in the chart so it can be that some of feast have connection issues between kafka etc.
But There are no errors in the logs of the core and serving pod.
What can be the problem and how is a way to debug that?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/loaders/abstract_producer.py`
Content:
```
1 # Copyright 2019 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional, Union
16
17 from tqdm import tqdm
18
19
20 class AbstractProducer:
21 """
22 Abstract class for Kafka producers
23 """
24
25 def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
26 self.brokers = brokers
27 self.row_count = row_count
28 self.error_count = 0
29 self.last_exception = ""
30
31 # Progress bar will always display average rate
32 self.pbar = tqdm(
33 total=row_count, unit="rows", smoothing=0, disable=disable_progress_bar
34 )
35
36 def produce(self, topic: str, data: bytes):
37 message = "{} should implement a produce method".format(self.__class__.__name__)
38 raise NotImplementedError(message)
39
40 def flush(self, timeout: int):
41 message = "{} should implement a flush method".format(self.__class__.__name__)
42 raise NotImplementedError(message)
43
44 def _inc_pbar(self, meta):
45 self.pbar.update(1)
46
47 def _set_error(self, exception: str):
48 self.error_count += 1
49 self.last_exception = exception
50
51 def print_results(self) -> None:
52 """
53 Print ingestion statistics.
54
55 Returns:
56 None: None
57 """
58 # Refresh and close tqdm progress bar
59 self.pbar.refresh()
60
61 self.pbar.close()
62
63 print("Ingestion complete!")
64
65 failed_message = (
66 ""
67 if self.error_count == 0
68 else f"\nFail: {self.error_count / self.row_count}"
69 )
70
71 last_exception_message = (
72 ""
73 if self.last_exception == ""
74 else f"\nLast exception:\n{self.last_exception}"
75 )
76
77 print(
78 f"\nIngestion statistics:"
79 f"\nSuccess: {self.pbar.n}/{self.row_count}"
80 f"{failed_message}"
81 f"{last_exception_message}"
82 )
83 return None
84
85
86 class ConfluentProducer(AbstractProducer):
87 """
88 Concrete implementation of Confluent Kafka producer (confluent-kafka)
89 """
90
91 def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
92 from confluent_kafka import Producer
93
94 self.producer = Producer({"bootstrap.servers": brokers})
95 super().__init__(brokers, row_count, disable_progress_bar)
96
97 def produce(self, topic: str, value: bytes) -> None:
98 """
99 Generic produce that implements confluent-kafka's produce method to
100 push a byte encoded object into a Kafka topic.
101
102 Args:
103 topic (str): Kafka topic.
104 value (bytes): Byte encoded object.
105
106 Returns:
107 None: None.
108 """
109
110 try:
111 self.producer.produce(topic, value=value, callback=self._delivery_callback)
112 # Serve delivery callback queue.
113 # NOTE: Since produce() is an asynchronous API this poll() call
114 # will most likely not serve the delivery callback for the
115 # last produce()d message.
116 self.producer.poll(0)
117 except Exception as ex:
118 self._set_error(str(ex))
119
120 return None
121
122 def flush(self, timeout: Optional[int]):
123 """
124 Generic flush that implements confluent-kafka's flush method.
125
126 Args:
127 timeout (Optional[int]): Timeout in seconds to wait for completion.
128
129 Returns:
130 int: Number of messages still in queue.
131 """
132 return self.producer.flush(timeout=timeout)
133
134 def _delivery_callback(self, err: str, msg) -> None:
135 """
136 Optional per-message delivery callback (triggered by poll() or flush())
137 when a message has been successfully delivered or permanently failed
138 delivery (after retries).
139
140 Although the msg argument is not used, the current method signature is
141 required as specified in the confluent-kafka documentation.
142
143 Args:
144 err (str): Error message.
145 msg (): Kafka message.
146
147 Returns:
148 None
149 """
150 if err:
151 self._set_error(err)
152 else:
153 self._inc_pbar(None)
154
155
156 class KafkaPythonProducer(AbstractProducer):
157 """
158 Concrete implementation of Python Kafka producer (kafka-python)
159 """
160
161 def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
162 from kafka import KafkaProducer
163
164 self.producer = KafkaProducer(bootstrap_servers=[brokers])
165 super().__init__(brokers, row_count, disable_progress_bar)
166
167 def produce(self, topic: str, value: bytes):
168 """
169 Generic produce that implements kafka-python's send method to push a
170 byte encoded object into a Kafka topic.
171
172 Args:
173 topic (str): Kafka topic.
174 value (bytes): Byte encoded object.
175
176 Returns:
177 FutureRecordMetadata: resolves to RecordMetadata
178
179 Raises:
180 KafkaTimeoutError: if unable to fetch topic metadata, or unable
181 to obtain memory buffer prior to configured max_block_ms
182 """
183 return (
184 self.producer.send(topic, value=value)
185 .add_callback(self._inc_pbar)
186 .add_errback(self._set_error)
187 )
188
189 def flush(self, timeout: Optional[int]):
190 """
191 Generic flush that implements kafka-python's flush method.
192
193 Args:
194 timeout (Optional[int]): timeout in seconds to wait for completion.
195
196 Returns:
197 None
198
199 Raises:
200 KafkaTimeoutError: failure to flush buffered records within the
201 provided timeout
202 """
203 return self.producer.flush(timeout=timeout)
204
205
206 def get_producer(
207 brokers: str, row_count: int, disable_progress_bar: bool
208 ) -> Union[ConfluentProducer, KafkaPythonProducer]:
209 """
210 Simple context helper function that returns a AbstractProducer object when
211 invoked.
212
213 This helper function will try to import confluent-kafka as a producer first.
214
215 This helper function will fallback to kafka-python if it fails to import
216 confluent-kafka.
217
218 Args:
219 brokers (str): Kafka broker information with hostname and port.
220 row_count (int): Number of rows in table
221
222 Returns:
223 Union[ConfluentProducer, KafkaPythonProducer]:
224 Concrete implementation of a Kafka producer. Ig can be:
225 * confluent-kafka producer
226 * kafka-python producer
227 """
228 try:
229 return ConfluentProducer(brokers, row_count, disable_progress_bar)
230 except ImportError:
231 print("Unable to import confluent-kafka, falling back to kafka-python")
232 return KafkaPythonProducer(brokers, row_count, disable_progress_bar)
233
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sdk/python/feast/loaders/abstract_producer.py b/sdk/python/feast/loaders/abstract_producer.py
--- a/sdk/python/feast/loaders/abstract_producer.py
+++ b/sdk/python/feast/loaders/abstract_producer.py
@@ -25,8 +25,6 @@
def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
self.brokers = brokers
self.row_count = row_count
- self.error_count = 0
- self.last_exception = ""
# Progress bar will always display average rate
self.pbar = tqdm(
@@ -45,8 +43,7 @@
self.pbar.update(1)
def _set_error(self, exception: str):
- self.error_count += 1
- self.last_exception = exception
+ raise Exception(exception)
def print_results(self) -> None:
"""
@@ -62,24 +59,7 @@
print("Ingestion complete!")
- failed_message = (
- ""
- if self.error_count == 0
- else f"\nFail: {self.error_count / self.row_count}"
- )
-
- last_exception_message = (
- ""
- if self.last_exception == ""
- else f"\nLast exception:\n{self.last_exception}"
- )
-
- print(
- f"\nIngestion statistics:"
- f"\nSuccess: {self.pbar.n}/{self.row_count}"
- f"{failed_message}"
- f"{last_exception_message}"
- )
+ print(f"\nIngestion statistics:" f"\nSuccess: {self.pbar.n}/{self.row_count}")
return None
@@ -129,7 +109,10 @@
Returns:
int: Number of messages still in queue.
"""
- return self.producer.flush(timeout=timeout)
+ messages = self.producer.flush(timeout=timeout)
+ if messages:
+ raise Exception("Not all Kafka messages are successfully delivered.")
+ return messages
def _delivery_callback(self, err: str, msg) -> None:
"""
@@ -200,7 +183,10 @@
KafkaTimeoutError: failure to flush buffered records within the
provided timeout
"""
- return self.producer.flush(timeout=timeout)
+ messages = self.producer.flush(timeout=timeout)
+ if messages:
+ raise Exception("Not all Kafka messages are successfully delivered.")
+ return messages
def get_producer(
|
{"golden_diff": "diff --git a/sdk/python/feast/loaders/abstract_producer.py b/sdk/python/feast/loaders/abstract_producer.py\n--- a/sdk/python/feast/loaders/abstract_producer.py\n+++ b/sdk/python/feast/loaders/abstract_producer.py\n@@ -25,8 +25,6 @@\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n self.brokers = brokers\n self.row_count = row_count\n- self.error_count = 0\n- self.last_exception = \"\"\n \n # Progress bar will always display average rate\n self.pbar = tqdm(\n@@ -45,8 +43,7 @@\n self.pbar.update(1)\n \n def _set_error(self, exception: str):\n- self.error_count += 1\n- self.last_exception = exception\n+ raise Exception(exception)\n \n def print_results(self) -> None:\n \"\"\"\n@@ -62,24 +59,7 @@\n \n print(\"Ingestion complete!\")\n \n- failed_message = (\n- \"\"\n- if self.error_count == 0\n- else f\"\\nFail: {self.error_count / self.row_count}\"\n- )\n-\n- last_exception_message = (\n- \"\"\n- if self.last_exception == \"\"\n- else f\"\\nLast exception:\\n{self.last_exception}\"\n- )\n-\n- print(\n- f\"\\nIngestion statistics:\"\n- f\"\\nSuccess: {self.pbar.n}/{self.row_count}\"\n- f\"{failed_message}\"\n- f\"{last_exception_message}\"\n- )\n+ print(f\"\\nIngestion statistics:\" f\"\\nSuccess: {self.pbar.n}/{self.row_count}\")\n return None\n \n \n@@ -129,7 +109,10 @@\n Returns:\n int: Number of messages still in queue.\n \"\"\"\n- return self.producer.flush(timeout=timeout)\n+ messages = self.producer.flush(timeout=timeout)\n+ if messages:\n+ raise Exception(\"Not all Kafka messages are successfully delivered.\")\n+ return messages\n \n def _delivery_callback(self, err: str, msg) -> None:\n \"\"\"\n@@ -200,7 +183,10 @@\n KafkaTimeoutError: failure to flush buffered records within the\n provided timeout\n \"\"\"\n- return self.producer.flush(timeout=timeout)\n+ messages = self.producer.flush(timeout=timeout)\n+ if messages:\n+ raise Exception(\"Not all Kafka messages are successfully delivered.\")\n+ return messages\n \n \n def get_producer(\n", "issue": "No exception when connecting to Kafka fails\nHey guys,\r\n\r\nI have installed feast(0.4.4) via Helm3 on GKE \r\n\r\nThe Basic example is working until the ingestion (online serving) part\r\n\r\nThere I get ->>\r\n\r\nWaiting for feature set to be ready for ingestion...\r\n 0%| | 0/15 [00:00<?, ?rows/s]\r\n\r\nbut there is no progress.\r\n\r\n\r\n\r\nWhen I look on GCPs bigquery interface I can see that the project \"customer_project\" is created with the correct columns in \"customer_transactions\".\r\nBut for sure no data\r\n\r\n\r\n\r\nget_feature_set gives me \r\n\r\n\r\n{\r\n \"spec\": {\r\n \"name\": \"customer_transactions\",\r\n \"version\": 1,\r\n \"entities\": [\r\n {\r\n \"name\": \"customer_id\",\r\n \"valueType\": \"INT64\"\r\n }\r\n ],\r\n \"features\": [\r\n {\r\n \"name\": \"daily_transactions\",\r\n \"valueType\": \"DOUBLE\"\r\n },\r\n {\r\n \"name\": \"total_transactions\",\r\n \"valueType\": \"INT64\"\r\n }\r\n ],\r\n \"maxAge\": \"432000s\",\r\n \"source\": {\r\n \"type\": \"KAFKA\",\r\n \"kafkaSourceConfig\": {\r\n \"bootstrapServers\": \"feast-kafka:9092\",\r\n \"topic\": \"feast\"\r\n }\r\n },\r\n \"project\": \"customer_project_1\"\r\n },\r\n \"meta\": {\r\n \"createdTimestamp\": \"2020-04-15T10:26:51Z\",\r\n \"status\": \"STATUS_READY\"\r\n }\r\n}\r\n\r\n\r\nI had to modify some port service setup in the chart so it can be that some of feast have connection issues between kafka etc.\r\nBut There are no errors in the logs of the core and serving pod.\r\n\r\nWhat can be the problem and how is a way to debug that?\n", "before_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, Union\n\nfrom tqdm import tqdm\n\n\nclass AbstractProducer:\n \"\"\"\n Abstract class for Kafka producers\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n self.brokers = brokers\n self.row_count = row_count\n self.error_count = 0\n self.last_exception = \"\"\n\n # Progress bar will always display average rate\n self.pbar = tqdm(\n total=row_count, unit=\"rows\", smoothing=0, disable=disable_progress_bar\n )\n\n def produce(self, topic: str, data: bytes):\n message = \"{} should implement a produce method\".format(self.__class__.__name__)\n raise NotImplementedError(message)\n\n def flush(self, timeout: int):\n message = \"{} should implement a flush method\".format(self.__class__.__name__)\n raise NotImplementedError(message)\n\n def _inc_pbar(self, meta):\n self.pbar.update(1)\n\n def _set_error(self, exception: str):\n self.error_count += 1\n self.last_exception = exception\n\n def print_results(self) -> None:\n \"\"\"\n Print ingestion statistics.\n\n Returns:\n None: None\n \"\"\"\n # Refresh and close tqdm progress bar\n self.pbar.refresh()\n\n self.pbar.close()\n\n print(\"Ingestion complete!\")\n\n failed_message = (\n \"\"\n if self.error_count == 0\n else f\"\\nFail: {self.error_count / self.row_count}\"\n )\n\n last_exception_message = (\n \"\"\n if self.last_exception == \"\"\n else f\"\\nLast exception:\\n{self.last_exception}\"\n )\n\n print(\n f\"\\nIngestion statistics:\"\n f\"\\nSuccess: {self.pbar.n}/{self.row_count}\"\n f\"{failed_message}\"\n f\"{last_exception_message}\"\n )\n return None\n\n\nclass ConfluentProducer(AbstractProducer):\n \"\"\"\n Concrete implementation of Confluent Kafka producer (confluent-kafka)\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n from confluent_kafka import Producer\n\n self.producer = Producer({\"bootstrap.servers\": brokers})\n super().__init__(brokers, row_count, disable_progress_bar)\n\n def produce(self, topic: str, value: bytes) -> None:\n \"\"\"\n Generic produce that implements confluent-kafka's produce method to\n push a byte encoded object into a Kafka topic.\n\n Args:\n topic (str): Kafka topic.\n value (bytes): Byte encoded object.\n\n Returns:\n None: None.\n \"\"\"\n\n try:\n self.producer.produce(topic, value=value, callback=self._delivery_callback)\n # Serve delivery callback queue.\n # NOTE: Since produce() is an asynchronous API this poll() call\n # will most likely not serve the delivery callback for the\n # last produce()d message.\n self.producer.poll(0)\n except Exception as ex:\n self._set_error(str(ex))\n\n return None\n\n def flush(self, timeout: Optional[int]):\n \"\"\"\n Generic flush that implements confluent-kafka's flush method.\n\n Args:\n timeout (Optional[int]): Timeout in seconds to wait for completion.\n\n Returns:\n int: Number of messages still in queue.\n \"\"\"\n return self.producer.flush(timeout=timeout)\n\n def _delivery_callback(self, err: str, msg) -> None:\n \"\"\"\n Optional per-message delivery callback (triggered by poll() or flush())\n when a message has been successfully delivered or permanently failed\n delivery (after retries).\n\n Although the msg argument is not used, the current method signature is\n required as specified in the confluent-kafka documentation.\n\n Args:\n err (str): Error message.\n msg (): Kafka message.\n\n Returns:\n None\n \"\"\"\n if err:\n self._set_error(err)\n else:\n self._inc_pbar(None)\n\n\nclass KafkaPythonProducer(AbstractProducer):\n \"\"\"\n Concrete implementation of Python Kafka producer (kafka-python)\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n from kafka import KafkaProducer\n\n self.producer = KafkaProducer(bootstrap_servers=[brokers])\n super().__init__(brokers, row_count, disable_progress_bar)\n\n def produce(self, topic: str, value: bytes):\n \"\"\"\n Generic produce that implements kafka-python's send method to push a\n byte encoded object into a Kafka topic.\n\n Args:\n topic (str): Kafka topic.\n value (bytes): Byte encoded object.\n\n Returns:\n FutureRecordMetadata: resolves to RecordMetadata\n\n Raises:\n KafkaTimeoutError: if unable to fetch topic metadata, or unable\n to obtain memory buffer prior to configured max_block_ms\n \"\"\"\n return (\n self.producer.send(topic, value=value)\n .add_callback(self._inc_pbar)\n .add_errback(self._set_error)\n )\n\n def flush(self, timeout: Optional[int]):\n \"\"\"\n Generic flush that implements kafka-python's flush method.\n\n Args:\n timeout (Optional[int]): timeout in seconds to wait for completion.\n\n Returns:\n None\n\n Raises:\n KafkaTimeoutError: failure to flush buffered records within the\n provided timeout\n \"\"\"\n return self.producer.flush(timeout=timeout)\n\n\ndef get_producer(\n brokers: str, row_count: int, disable_progress_bar: bool\n) -> Union[ConfluentProducer, KafkaPythonProducer]:\n \"\"\"\n Simple context helper function that returns a AbstractProducer object when\n invoked.\n\n This helper function will try to import confluent-kafka as a producer first.\n\n This helper function will fallback to kafka-python if it fails to import\n confluent-kafka.\n\n Args:\n brokers (str): Kafka broker information with hostname and port.\n row_count (int): Number of rows in table\n\n Returns:\n Union[ConfluentProducer, KafkaPythonProducer]:\n Concrete implementation of a Kafka producer. Ig can be:\n * confluent-kafka producer\n * kafka-python producer\n \"\"\"\n try:\n return ConfluentProducer(brokers, row_count, disable_progress_bar)\n except ImportError:\n print(\"Unable to import confluent-kafka, falling back to kafka-python\")\n return KafkaPythonProducer(brokers, row_count, disable_progress_bar)\n", "path": "sdk/python/feast/loaders/abstract_producer.py"}], "after_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, Union\n\nfrom tqdm import tqdm\n\n\nclass AbstractProducer:\n \"\"\"\n Abstract class for Kafka producers\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n self.brokers = brokers\n self.row_count = row_count\n\n # Progress bar will always display average rate\n self.pbar = tqdm(\n total=row_count, unit=\"rows\", smoothing=0, disable=disable_progress_bar\n )\n\n def produce(self, topic: str, data: bytes):\n message = \"{} should implement a produce method\".format(self.__class__.__name__)\n raise NotImplementedError(message)\n\n def flush(self, timeout: int):\n message = \"{} should implement a flush method\".format(self.__class__.__name__)\n raise NotImplementedError(message)\n\n def _inc_pbar(self, meta):\n self.pbar.update(1)\n\n def _set_error(self, exception: str):\n raise Exception(exception)\n\n def print_results(self) -> None:\n \"\"\"\n Print ingestion statistics.\n\n Returns:\n None: None\n \"\"\"\n # Refresh and close tqdm progress bar\n self.pbar.refresh()\n\n self.pbar.close()\n\n print(\"Ingestion complete!\")\n\n print(f\"\\nIngestion statistics:\" f\"\\nSuccess: {self.pbar.n}/{self.row_count}\")\n return None\n\n\nclass ConfluentProducer(AbstractProducer):\n \"\"\"\n Concrete implementation of Confluent Kafka producer (confluent-kafka)\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n from confluent_kafka import Producer\n\n self.producer = Producer({\"bootstrap.servers\": brokers})\n super().__init__(brokers, row_count, disable_progress_bar)\n\n def produce(self, topic: str, value: bytes) -> None:\n \"\"\"\n Generic produce that implements confluent-kafka's produce method to\n push a byte encoded object into a Kafka topic.\n\n Args:\n topic (str): Kafka topic.\n value (bytes): Byte encoded object.\n\n Returns:\n None: None.\n \"\"\"\n\n try:\n self.producer.produce(topic, value=value, callback=self._delivery_callback)\n # Serve delivery callback queue.\n # NOTE: Since produce() is an asynchronous API this poll() call\n # will most likely not serve the delivery callback for the\n # last produce()d message.\n self.producer.poll(0)\n except Exception as ex:\n self._set_error(str(ex))\n\n return None\n\n def flush(self, timeout: Optional[int]):\n \"\"\"\n Generic flush that implements confluent-kafka's flush method.\n\n Args:\n timeout (Optional[int]): Timeout in seconds to wait for completion.\n\n Returns:\n int: Number of messages still in queue.\n \"\"\"\n messages = self.producer.flush(timeout=timeout)\n if messages:\n raise Exception(\"Not all Kafka messages are successfully delivered.\")\n return messages\n\n def _delivery_callback(self, err: str, msg) -> None:\n \"\"\"\n Optional per-message delivery callback (triggered by poll() or flush())\n when a message has been successfully delivered or permanently failed\n delivery (after retries).\n\n Although the msg argument is not used, the current method signature is\n required as specified in the confluent-kafka documentation.\n\n Args:\n err (str): Error message.\n msg (): Kafka message.\n\n Returns:\n None\n \"\"\"\n if err:\n self._set_error(err)\n else:\n self._inc_pbar(None)\n\n\nclass KafkaPythonProducer(AbstractProducer):\n \"\"\"\n Concrete implementation of Python Kafka producer (kafka-python)\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n from kafka import KafkaProducer\n\n self.producer = KafkaProducer(bootstrap_servers=[brokers])\n super().__init__(brokers, row_count, disable_progress_bar)\n\n def produce(self, topic: str, value: bytes):\n \"\"\"\n Generic produce that implements kafka-python's send method to push a\n byte encoded object into a Kafka topic.\n\n Args:\n topic (str): Kafka topic.\n value (bytes): Byte encoded object.\n\n Returns:\n FutureRecordMetadata: resolves to RecordMetadata\n\n Raises:\n KafkaTimeoutError: if unable to fetch topic metadata, or unable\n to obtain memory buffer prior to configured max_block_ms\n \"\"\"\n return (\n self.producer.send(topic, value=value)\n .add_callback(self._inc_pbar)\n .add_errback(self._set_error)\n )\n\n def flush(self, timeout: Optional[int]):\n \"\"\"\n Generic flush that implements kafka-python's flush method.\n\n Args:\n timeout (Optional[int]): timeout in seconds to wait for completion.\n\n Returns:\n None\n\n Raises:\n KafkaTimeoutError: failure to flush buffered records within the\n provided timeout\n \"\"\"\n messages = self.producer.flush(timeout=timeout)\n if messages:\n raise Exception(\"Not all Kafka messages are successfully delivered.\")\n return messages\n\n\ndef get_producer(\n brokers: str, row_count: int, disable_progress_bar: bool\n) -> Union[ConfluentProducer, KafkaPythonProducer]:\n \"\"\"\n Simple context helper function that returns a AbstractProducer object when\n invoked.\n\n This helper function will try to import confluent-kafka as a producer first.\n\n This helper function will fallback to kafka-python if it fails to import\n confluent-kafka.\n\n Args:\n brokers (str): Kafka broker information with hostname and port.\n row_count (int): Number of rows in table\n\n Returns:\n Union[ConfluentProducer, KafkaPythonProducer]:\n Concrete implementation of a Kafka producer. Ig can be:\n * confluent-kafka producer\n * kafka-python producer\n \"\"\"\n try:\n return ConfluentProducer(brokers, row_count, disable_progress_bar)\n except ImportError:\n print(\"Unable to import confluent-kafka, falling back to kafka-python\")\n return KafkaPythonProducer(brokers, row_count, disable_progress_bar)\n", "path": "sdk/python/feast/loaders/abstract_producer.py"}]}
| 2,797 | 571 |
gh_patches_debug_33750
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-4349
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix experimental make generator CONAN_CPPFLAGS and CONAN_INCLUDE_PATHS
Following the conversation here https://github.com/conan-io/conan/issues/4286#issuecomment-454194188
We have finally reached the conclusion of having ``cpp_info.cppflags`` converted to ``CONAN_CXXFLAGS`` in the ``make`` generator to be consistent with orhter generators such as ``cmake``.
Also the flag ``CONAN_INCLUDE_PATHS`` should be renamed to ``CONAN_INCLUDE_DIRS`` for the same reason.
In another issue we would probably introduce a ``cpp_info.cxxflags`` that would be an internal alias of ``cpp_info.cppflags`` to avoid this confusion without breaking.
cc/ @solvingj
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/generators/make.py`
Content:
```
1 from conans.model import Generator
2 from conans.paths import BUILD_INFO_MAKE
3
4
5 class MakeGenerator(Generator):
6
7 def __init__(self, conanfile):
8 Generator.__init__(self, conanfile)
9 self.makefile_newline = "\n"
10 self.makefile_line_continuation = " \\\n"
11 self.assignment_if_absent = " ?= "
12 self.assignment_append = " += "
13
14 @property
15 def filename(self):
16 return BUILD_INFO_MAKE
17
18 @property
19 def content(self):
20
21 content = [
22 "#-------------------------------------------------------------------#",
23 "# Makefile variables from Conan Dependencies #",
24 "#-------------------------------------------------------------------#",
25 "",
26 ]
27
28 for line_as_list in self.create_deps_content():
29 content.append("".join(line_as_list))
30
31 content.append("#-------------------------------------------------------------------#")
32 content.append(self.makefile_newline)
33 return self.makefile_newline.join(content)
34
35 def create_deps_content(self):
36 deps_content = self.create_content_from_deps()
37 deps_content.extend(self.create_combined_content())
38 return deps_content
39
40 def create_content_from_deps(self):
41 content = []
42 for pkg_name, cpp_info in self.deps_build_info.dependencies:
43 content.extend(self.create_content_from_dep(pkg_name, cpp_info))
44 return content
45
46 def create_content_from_dep(self, pkg_name, cpp_info):
47
48 vars_info = [("ROOT", self.assignment_if_absent, [cpp_info.rootpath]),
49 ("SYSROOT", self.assignment_if_absent, [cpp_info.sysroot]),
50 ("INCLUDE_PATHS", self.assignment_append, cpp_info.include_paths),
51 ("LIB_PATHS", self.assignment_append, cpp_info.lib_paths),
52 ("BIN_PATHS", self.assignment_append, cpp_info.bin_paths),
53 ("BUILD_PATHS", self.assignment_append, cpp_info.build_paths),
54 ("RES_PATHS", self.assignment_append, cpp_info.res_paths),
55 ("LIBS", self.assignment_append, cpp_info.libs),
56 ("DEFINES", self.assignment_append, cpp_info.defines),
57 ("CFLAGS", self.assignment_append, cpp_info.cflags),
58 ("CPPFLAGS", self.assignment_append, cpp_info.cppflags),
59 ("SHAREDLINKFLAGS", self.assignment_append, cpp_info.sharedlinkflags),
60 ("EXELINKFLAGS", self.assignment_append, cpp_info.exelinkflags)]
61
62 return [self.create_makefile_var_pkg(var_name, pkg_name, operator, info)
63 for var_name, operator, info in vars_info]
64
65 def create_combined_content(self):
66 content = []
67 for var_name in self.all_dep_vars():
68 content.append(self.create_makefile_var_global(var_name, self.assignment_append,
69 self.create_combined_var_list(var_name)))
70 return content
71
72 def create_combined_var_list(self, var_name):
73 make_vars = []
74 for pkg_name, _ in self.deps_build_info.dependencies:
75 pkg_var = self.create_makefile_var_name_pkg(var_name, pkg_name)
76 make_vars.append("$({pkg_var})".format(pkg_var=pkg_var))
77 return make_vars
78
79 def create_makefile_var_global(self, var_name, operator, values):
80 make_var = [self.create_makefile_var_name_global(var_name)]
81 make_var.extend(self.create_makefile_var_common(operator, values))
82 return make_var
83
84 def create_makefile_var_pkg(self, var_name, pkg_name, operator, values):
85 make_var = [self.create_makefile_var_name_pkg(var_name, pkg_name)]
86 make_var.extend(self.create_makefile_var_common(operator, values))
87 return make_var
88
89 def create_makefile_var_common(self, operator, values):
90 return [operator, self.makefile_line_continuation, self.create_makefile_var_value(values),
91 self.makefile_newline]
92
93 @staticmethod
94 def create_makefile_var_name_global(var_name):
95 return "CONAN_{var}".format(var=var_name).upper()
96
97 @staticmethod
98 def create_makefile_var_name_pkg(var_name, pkg_name):
99 return "CONAN_{var}_{lib}".format(var=var_name, lib=pkg_name).upper()
100
101 def create_makefile_var_value(self, values):
102 formatted_values = [value.replace("\\", "/") for value in values]
103 return self.makefile_line_continuation.join(formatted_values)
104
105 @staticmethod
106 def all_dep_vars():
107 return ["rootpath", "sysroot", "include_paths", "lib_paths", "bin_paths", "build_paths",
108 "res_paths", "libs", "defines", "cflags", "cppflags", "sharedlinkflags",
109 "exelinkflags"]
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/client/generators/make.py b/conans/client/generators/make.py
--- a/conans/client/generators/make.py
+++ b/conans/client/generators/make.py
@@ -47,15 +47,15 @@
vars_info = [("ROOT", self.assignment_if_absent, [cpp_info.rootpath]),
("SYSROOT", self.assignment_if_absent, [cpp_info.sysroot]),
- ("INCLUDE_PATHS", self.assignment_append, cpp_info.include_paths),
- ("LIB_PATHS", self.assignment_append, cpp_info.lib_paths),
- ("BIN_PATHS", self.assignment_append, cpp_info.bin_paths),
- ("BUILD_PATHS", self.assignment_append, cpp_info.build_paths),
- ("RES_PATHS", self.assignment_append, cpp_info.res_paths),
+ ("INCLUDE_DIRS", self.assignment_append, cpp_info.include_paths),
+ ("LIB_DIRS", self.assignment_append, cpp_info.lib_paths),
+ ("BIN_DIRS", self.assignment_append, cpp_info.bin_paths),
+ ("BUILD_DIRS", self.assignment_append, cpp_info.build_paths),
+ ("RES_DIRS", self.assignment_append, cpp_info.res_paths),
("LIBS", self.assignment_append, cpp_info.libs),
("DEFINES", self.assignment_append, cpp_info.defines),
("CFLAGS", self.assignment_append, cpp_info.cflags),
- ("CPPFLAGS", self.assignment_append, cpp_info.cppflags),
+ ("CXXFLAGS", self.assignment_append, cpp_info.cppflags),
("SHAREDLINKFLAGS", self.assignment_append, cpp_info.sharedlinkflags),
("EXELINKFLAGS", self.assignment_append, cpp_info.exelinkflags)]
@@ -104,6 +104,6 @@
@staticmethod
def all_dep_vars():
- return ["rootpath", "sysroot", "include_paths", "lib_paths", "bin_paths", "build_paths",
- "res_paths", "libs", "defines", "cflags", "cppflags", "sharedlinkflags",
+ return ["rootpath", "sysroot", "include_dirs", "lib_dirs", "bin_dirs", "build_dirs",
+ "res_dirs", "libs", "defines", "cflags", "cxxflags", "sharedlinkflags",
"exelinkflags"]
|
{"golden_diff": "diff --git a/conans/client/generators/make.py b/conans/client/generators/make.py\n--- a/conans/client/generators/make.py\n+++ b/conans/client/generators/make.py\n@@ -47,15 +47,15 @@\n \n vars_info = [(\"ROOT\", self.assignment_if_absent, [cpp_info.rootpath]),\n (\"SYSROOT\", self.assignment_if_absent, [cpp_info.sysroot]),\n- (\"INCLUDE_PATHS\", self.assignment_append, cpp_info.include_paths),\n- (\"LIB_PATHS\", self.assignment_append, cpp_info.lib_paths),\n- (\"BIN_PATHS\", self.assignment_append, cpp_info.bin_paths),\n- (\"BUILD_PATHS\", self.assignment_append, cpp_info.build_paths),\n- (\"RES_PATHS\", self.assignment_append, cpp_info.res_paths),\n+ (\"INCLUDE_DIRS\", self.assignment_append, cpp_info.include_paths),\n+ (\"LIB_DIRS\", self.assignment_append, cpp_info.lib_paths),\n+ (\"BIN_DIRS\", self.assignment_append, cpp_info.bin_paths),\n+ (\"BUILD_DIRS\", self.assignment_append, cpp_info.build_paths),\n+ (\"RES_DIRS\", self.assignment_append, cpp_info.res_paths),\n (\"LIBS\", self.assignment_append, cpp_info.libs),\n (\"DEFINES\", self.assignment_append, cpp_info.defines),\n (\"CFLAGS\", self.assignment_append, cpp_info.cflags),\n- (\"CPPFLAGS\", self.assignment_append, cpp_info.cppflags),\n+ (\"CXXFLAGS\", self.assignment_append, cpp_info.cppflags),\n (\"SHAREDLINKFLAGS\", self.assignment_append, cpp_info.sharedlinkflags),\n (\"EXELINKFLAGS\", self.assignment_append, cpp_info.exelinkflags)]\n \n@@ -104,6 +104,6 @@\n \n @staticmethod\n def all_dep_vars():\n- return [\"rootpath\", \"sysroot\", \"include_paths\", \"lib_paths\", \"bin_paths\", \"build_paths\",\n- \"res_paths\", \"libs\", \"defines\", \"cflags\", \"cppflags\", \"sharedlinkflags\",\n+ return [\"rootpath\", \"sysroot\", \"include_dirs\", \"lib_dirs\", \"bin_dirs\", \"build_dirs\",\n+ \"res_dirs\", \"libs\", \"defines\", \"cflags\", \"cxxflags\", \"sharedlinkflags\",\n \"exelinkflags\"]\n", "issue": "Fix experimental make generator CONAN_CPPFLAGS and CONAN_INCLUDE_PATHS\nFollowing the conversation here https://github.com/conan-io/conan/issues/4286#issuecomment-454194188\r\n\r\nWe have finally reached the conclusion of having ``cpp_info.cppflags`` converted to ``CONAN_CXXFLAGS`` in the ``make`` generator to be consistent with orhter generators such as ``cmake``.\r\n\r\nAlso the flag ``CONAN_INCLUDE_PATHS`` should be renamed to ``CONAN_INCLUDE_DIRS`` for the same reason.\r\n\r\nIn another issue we would probably introduce a ``cpp_info.cxxflags`` that would be an internal alias of ``cpp_info.cppflags`` to avoid this confusion without breaking.\r\n\r\ncc/ @solvingj \n", "before_files": [{"content": "from conans.model import Generator\nfrom conans.paths import BUILD_INFO_MAKE\n\n\nclass MakeGenerator(Generator):\n\n def __init__(self, conanfile):\n Generator.__init__(self, conanfile)\n self.makefile_newline = \"\\n\"\n self.makefile_line_continuation = \" \\\\\\n\"\n self.assignment_if_absent = \" ?= \"\n self.assignment_append = \" += \"\n\n @property\n def filename(self):\n return BUILD_INFO_MAKE\n\n @property\n def content(self):\n\n content = [\n \"#-------------------------------------------------------------------#\",\n \"# Makefile variables from Conan Dependencies #\",\n \"#-------------------------------------------------------------------#\",\n \"\",\n ]\n\n for line_as_list in self.create_deps_content():\n content.append(\"\".join(line_as_list))\n\n content.append(\"#-------------------------------------------------------------------#\")\n content.append(self.makefile_newline)\n return self.makefile_newline.join(content)\n\n def create_deps_content(self):\n deps_content = self.create_content_from_deps()\n deps_content.extend(self.create_combined_content())\n return deps_content\n\n def create_content_from_deps(self):\n content = []\n for pkg_name, cpp_info in self.deps_build_info.dependencies:\n content.extend(self.create_content_from_dep(pkg_name, cpp_info))\n return content\n\n def create_content_from_dep(self, pkg_name, cpp_info):\n\n vars_info = [(\"ROOT\", self.assignment_if_absent, [cpp_info.rootpath]),\n (\"SYSROOT\", self.assignment_if_absent, [cpp_info.sysroot]),\n (\"INCLUDE_PATHS\", self.assignment_append, cpp_info.include_paths),\n (\"LIB_PATHS\", self.assignment_append, cpp_info.lib_paths),\n (\"BIN_PATHS\", self.assignment_append, cpp_info.bin_paths),\n (\"BUILD_PATHS\", self.assignment_append, cpp_info.build_paths),\n (\"RES_PATHS\", self.assignment_append, cpp_info.res_paths),\n (\"LIBS\", self.assignment_append, cpp_info.libs),\n (\"DEFINES\", self.assignment_append, cpp_info.defines),\n (\"CFLAGS\", self.assignment_append, cpp_info.cflags),\n (\"CPPFLAGS\", self.assignment_append, cpp_info.cppflags),\n (\"SHAREDLINKFLAGS\", self.assignment_append, cpp_info.sharedlinkflags),\n (\"EXELINKFLAGS\", self.assignment_append, cpp_info.exelinkflags)]\n\n return [self.create_makefile_var_pkg(var_name, pkg_name, operator, info)\n for var_name, operator, info in vars_info]\n\n def create_combined_content(self):\n content = []\n for var_name in self.all_dep_vars():\n content.append(self.create_makefile_var_global(var_name, self.assignment_append,\n self.create_combined_var_list(var_name)))\n return content\n\n def create_combined_var_list(self, var_name):\n make_vars = []\n for pkg_name, _ in self.deps_build_info.dependencies:\n pkg_var = self.create_makefile_var_name_pkg(var_name, pkg_name)\n make_vars.append(\"$({pkg_var})\".format(pkg_var=pkg_var))\n return make_vars\n\n def create_makefile_var_global(self, var_name, operator, values):\n make_var = [self.create_makefile_var_name_global(var_name)]\n make_var.extend(self.create_makefile_var_common(operator, values))\n return make_var\n\n def create_makefile_var_pkg(self, var_name, pkg_name, operator, values):\n make_var = [self.create_makefile_var_name_pkg(var_name, pkg_name)]\n make_var.extend(self.create_makefile_var_common(operator, values))\n return make_var\n\n def create_makefile_var_common(self, operator, values):\n return [operator, self.makefile_line_continuation, self.create_makefile_var_value(values),\n self.makefile_newline]\n\n @staticmethod\n def create_makefile_var_name_global(var_name):\n return \"CONAN_{var}\".format(var=var_name).upper()\n\n @staticmethod\n def create_makefile_var_name_pkg(var_name, pkg_name):\n return \"CONAN_{var}_{lib}\".format(var=var_name, lib=pkg_name).upper()\n\n def create_makefile_var_value(self, values):\n formatted_values = [value.replace(\"\\\\\", \"/\") for value in values]\n return self.makefile_line_continuation.join(formatted_values)\n\n @staticmethod\n def all_dep_vars():\n return [\"rootpath\", \"sysroot\", \"include_paths\", \"lib_paths\", \"bin_paths\", \"build_paths\",\n \"res_paths\", \"libs\", \"defines\", \"cflags\", \"cppflags\", \"sharedlinkflags\",\n \"exelinkflags\"]\n", "path": "conans/client/generators/make.py"}], "after_files": [{"content": "from conans.model import Generator\nfrom conans.paths import BUILD_INFO_MAKE\n\n\nclass MakeGenerator(Generator):\n\n def __init__(self, conanfile):\n Generator.__init__(self, conanfile)\n self.makefile_newline = \"\\n\"\n self.makefile_line_continuation = \" \\\\\\n\"\n self.assignment_if_absent = \" ?= \"\n self.assignment_append = \" += \"\n\n @property\n def filename(self):\n return BUILD_INFO_MAKE\n\n @property\n def content(self):\n\n content = [\n \"#-------------------------------------------------------------------#\",\n \"# Makefile variables from Conan Dependencies #\",\n \"#-------------------------------------------------------------------#\",\n \"\",\n ]\n\n for line_as_list in self.create_deps_content():\n content.append(\"\".join(line_as_list))\n\n content.append(\"#-------------------------------------------------------------------#\")\n content.append(self.makefile_newline)\n return self.makefile_newline.join(content)\n\n def create_deps_content(self):\n deps_content = self.create_content_from_deps()\n deps_content.extend(self.create_combined_content())\n return deps_content\n\n def create_content_from_deps(self):\n content = []\n for pkg_name, cpp_info in self.deps_build_info.dependencies:\n content.extend(self.create_content_from_dep(pkg_name, cpp_info))\n return content\n\n def create_content_from_dep(self, pkg_name, cpp_info):\n\n vars_info = [(\"ROOT\", self.assignment_if_absent, [cpp_info.rootpath]),\n (\"SYSROOT\", self.assignment_if_absent, [cpp_info.sysroot]),\n (\"INCLUDE_DIRS\", self.assignment_append, cpp_info.include_paths),\n (\"LIB_DIRS\", self.assignment_append, cpp_info.lib_paths),\n (\"BIN_DIRS\", self.assignment_append, cpp_info.bin_paths),\n (\"BUILD_DIRS\", self.assignment_append, cpp_info.build_paths),\n (\"RES_DIRS\", self.assignment_append, cpp_info.res_paths),\n (\"LIBS\", self.assignment_append, cpp_info.libs),\n (\"DEFINES\", self.assignment_append, cpp_info.defines),\n (\"CFLAGS\", self.assignment_append, cpp_info.cflags),\n (\"CXXFLAGS\", self.assignment_append, cpp_info.cppflags),\n (\"SHAREDLINKFLAGS\", self.assignment_append, cpp_info.sharedlinkflags),\n (\"EXELINKFLAGS\", self.assignment_append, cpp_info.exelinkflags)]\n\n return [self.create_makefile_var_pkg(var_name, pkg_name, operator, info)\n for var_name, operator, info in vars_info]\n\n def create_combined_content(self):\n content = []\n for var_name in self.all_dep_vars():\n content.append(self.create_makefile_var_global(var_name, self.assignment_append,\n self.create_combined_var_list(var_name)))\n return content\n\n def create_combined_var_list(self, var_name):\n make_vars = []\n for pkg_name, _ in self.deps_build_info.dependencies:\n pkg_var = self.create_makefile_var_name_pkg(var_name, pkg_name)\n make_vars.append(\"$({pkg_var})\".format(pkg_var=pkg_var))\n return make_vars\n\n def create_makefile_var_global(self, var_name, operator, values):\n make_var = [self.create_makefile_var_name_global(var_name)]\n make_var.extend(self.create_makefile_var_common(operator, values))\n return make_var\n\n def create_makefile_var_pkg(self, var_name, pkg_name, operator, values):\n make_var = [self.create_makefile_var_name_pkg(var_name, pkg_name)]\n make_var.extend(self.create_makefile_var_common(operator, values))\n return make_var\n\n def create_makefile_var_common(self, operator, values):\n return [operator, self.makefile_line_continuation, self.create_makefile_var_value(values),\n self.makefile_newline]\n\n @staticmethod\n def create_makefile_var_name_global(var_name):\n return \"CONAN_{var}\".format(var=var_name).upper()\n\n @staticmethod\n def create_makefile_var_name_pkg(var_name, pkg_name):\n return \"CONAN_{var}_{lib}\".format(var=var_name, lib=pkg_name).upper()\n\n def create_makefile_var_value(self, values):\n formatted_values = [value.replace(\"\\\\\", \"/\") for value in values]\n return self.makefile_line_continuation.join(formatted_values)\n\n @staticmethod\n def all_dep_vars():\n return [\"rootpath\", \"sysroot\", \"include_dirs\", \"lib_dirs\", \"bin_dirs\", \"build_dirs\",\n \"res_dirs\", \"libs\", \"defines\", \"cflags\", \"cxxflags\", \"sharedlinkflags\",\n \"exelinkflags\"]\n", "path": "conans/client/generators/make.py"}]}
| 1,622 | 498 |
gh_patches_debug_13502
|
rasdani/github-patches
|
git_diff
|
mne-tools__mne-bids-111
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
prune dependencies that we do not really depend on
As became apparent in a discussion with @agramfort and @jasmainak, we probably do not need the `environment.yml` and instead should rely on minimal dependencies such as numpy, scipy, and matplotlib.
if we decide to keep the `environment.yml` for convenience during installation, we should at least prune it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mne_bids/datasets.py`
Content:
```
1 """Helper functions to fetch data to work with."""
2 # Authors: Mainak Jas <[email protected]>
3 # Alexandre Gramfort <[email protected]>
4 # Teon Brooks <[email protected]>
5 # Stefan Appelhoff <[email protected]>
6 #
7 # License: BSD (3-clause)
8
9 import os
10 import os.path as op
11 import shutil
12 import tarfile
13 import requests
14
15 from mne.utils import _fetch_file
16
17
18 def fetch_faces_data(data_path=None, repo='ds000117', subject_ids=[1]):
19 """Dataset fetcher for OpenfMRI dataset ds000117.
20
21 Parameters
22 ----------
23 data_path : str | None
24 Path to the folder where data is stored. Defaults to
25 '~/mne_data/mne_bids_examples'
26 repo : str
27 The folder name. Defaults to 'ds000117'.
28 subject_ids : list of int
29 The subjects to fetch. Defaults to [1], downloading subject 1.
30
31 Returns
32 -------
33 data_path : str
34 Path to the folder where data is stored.
35
36 """
37 if not data_path:
38 home = os.path.expanduser('~')
39 data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')
40 if not os.path.exists(data_path):
41 os.makedirs(data_path)
42
43 for subject_id in subject_ids:
44 src_url = ('http://openfmri.s3.amazonaws.com/tarballs/'
45 'ds117_R0.1.1_sub%03d_raw.tgz' % subject_id)
46 tar_fname = op.join(data_path, repo + '.tgz')
47 target_dir = op.join(data_path, repo)
48 if not op.exists(target_dir):
49 if not op.exists(tar_fname):
50 _fetch_file(url=src_url, file_name=tar_fname,
51 print_destination=True, resume=True, timeout=10.)
52 tf = tarfile.open(tar_fname)
53 print('Extracting files. This may take a while ...')
54 tf.extractall(path=data_path)
55 shutil.move(op.join(data_path, 'ds117'), target_dir)
56 os.remove(tar_fname)
57
58 return data_path
59
60
61 def fetch_brainvision_testing_data(data_path=None):
62 """Download the MNE-Python testing data for the BrainVision format.
63
64 Parameters
65 ----------
66 data_path : str | None
67 Path to the folder where data is stored. Defaults to
68 '~/mne_data/mne_bids_examples'
69
70 Returns
71 -------
72 data_path : str
73 Path to the folder where data is stored.
74
75 """
76 if not data_path:
77 home = os.path.expanduser('~')
78 data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')
79 if not os.path.exists(data_path):
80 os.makedirs(data_path)
81
82 base_url = 'https://github.com/mne-tools/mne-python/'
83 base_url += 'raw/master/mne/io/brainvision/tests/data/test'
84 file_endings = ['.vhdr', '.vmrk', '.eeg', ]
85
86 for f_ending in file_endings:
87 url = base_url + f_ending
88 response = requests.get(url)
89
90 fname = os.path.join(data_path, 'test' + f_ending)
91 with open(fname, 'wb') as fout:
92 fout.write(response.content)
93
94 return data_path
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mne_bids/datasets.py b/mne_bids/datasets.py
--- a/mne_bids/datasets.py
+++ b/mne_bids/datasets.py
@@ -10,7 +10,7 @@
import os.path as op
import shutil
import tarfile
-import requests
+from six.moves import urllib
from mne.utils import _fetch_file
@@ -85,10 +85,10 @@
for f_ending in file_endings:
url = base_url + f_ending
- response = requests.get(url)
+ response = urllib.request.urlopen(url)
fname = os.path.join(data_path, 'test' + f_ending)
with open(fname, 'wb') as fout:
- fout.write(response.content)
+ fout.write(response.read())
return data_path
|
{"golden_diff": "diff --git a/mne_bids/datasets.py b/mne_bids/datasets.py\n--- a/mne_bids/datasets.py\n+++ b/mne_bids/datasets.py\n@@ -10,7 +10,7 @@\n import os.path as op\n import shutil\n import tarfile\n-import requests\n+from six.moves import urllib\n \n from mne.utils import _fetch_file\n \n@@ -85,10 +85,10 @@\n \n for f_ending in file_endings:\n url = base_url + f_ending\n- response = requests.get(url)\n+ response = urllib.request.urlopen(url)\n \n fname = os.path.join(data_path, 'test' + f_ending)\n with open(fname, 'wb') as fout:\n- fout.write(response.content)\n+ fout.write(response.read())\n \n return data_path\n", "issue": "prune dependencies that we do not really depend on\nAs became apparent in a discussion with @agramfort and @jasmainak, we probably do not need the `environment.yml` and instead should rely on minimal dependencies such as numpy, scipy, and matplotlib.\r\n\r\nif we decide to keep the `environment.yml` for convenience during installation, we should at least prune it.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Helper functions to fetch data to work with.\"\"\"\n# Authors: Mainak Jas <[email protected]>\n# Alexandre Gramfort <[email protected]>\n# Teon Brooks <[email protected]>\n# Stefan Appelhoff <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport os\nimport os.path as op\nimport shutil\nimport tarfile\nimport requests\n\nfrom mne.utils import _fetch_file\n\n\ndef fetch_faces_data(data_path=None, repo='ds000117', subject_ids=[1]):\n \"\"\"Dataset fetcher for OpenfMRI dataset ds000117.\n\n Parameters\n ----------\n data_path : str | None\n Path to the folder where data is stored. Defaults to\n '~/mne_data/mne_bids_examples'\n repo : str\n The folder name. Defaults to 'ds000117'.\n subject_ids : list of int\n The subjects to fetch. Defaults to [1], downloading subject 1.\n\n Returns\n -------\n data_path : str\n Path to the folder where data is stored.\n\n \"\"\"\n if not data_path:\n home = os.path.expanduser('~')\n data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')\n if not os.path.exists(data_path):\n os.makedirs(data_path)\n\n for subject_id in subject_ids:\n src_url = ('http://openfmri.s3.amazonaws.com/tarballs/'\n 'ds117_R0.1.1_sub%03d_raw.tgz' % subject_id)\n tar_fname = op.join(data_path, repo + '.tgz')\n target_dir = op.join(data_path, repo)\n if not op.exists(target_dir):\n if not op.exists(tar_fname):\n _fetch_file(url=src_url, file_name=tar_fname,\n print_destination=True, resume=True, timeout=10.)\n tf = tarfile.open(tar_fname)\n print('Extracting files. This may take a while ...')\n tf.extractall(path=data_path)\n shutil.move(op.join(data_path, 'ds117'), target_dir)\n os.remove(tar_fname)\n\n return data_path\n\n\ndef fetch_brainvision_testing_data(data_path=None):\n \"\"\"Download the MNE-Python testing data for the BrainVision format.\n\n Parameters\n ----------\n data_path : str | None\n Path to the folder where data is stored. Defaults to\n '~/mne_data/mne_bids_examples'\n\n Returns\n -------\n data_path : str\n Path to the folder where data is stored.\n\n \"\"\"\n if not data_path:\n home = os.path.expanduser('~')\n data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')\n if not os.path.exists(data_path):\n os.makedirs(data_path)\n\n base_url = 'https://github.com/mne-tools/mne-python/'\n base_url += 'raw/master/mne/io/brainvision/tests/data/test'\n file_endings = ['.vhdr', '.vmrk', '.eeg', ]\n\n for f_ending in file_endings:\n url = base_url + f_ending\n response = requests.get(url)\n\n fname = os.path.join(data_path, 'test' + f_ending)\n with open(fname, 'wb') as fout:\n fout.write(response.content)\n\n return data_path\n", "path": "mne_bids/datasets.py"}], "after_files": [{"content": "\"\"\"Helper functions to fetch data to work with.\"\"\"\n# Authors: Mainak Jas <[email protected]>\n# Alexandre Gramfort <[email protected]>\n# Teon Brooks <[email protected]>\n# Stefan Appelhoff <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport os\nimport os.path as op\nimport shutil\nimport tarfile\nfrom six.moves import urllib\n\nfrom mne.utils import _fetch_file\n\n\ndef fetch_faces_data(data_path=None, repo='ds000117', subject_ids=[1]):\n \"\"\"Dataset fetcher for OpenfMRI dataset ds000117.\n\n Parameters\n ----------\n data_path : str | None\n Path to the folder where data is stored. Defaults to\n '~/mne_data/mne_bids_examples'\n repo : str\n The folder name. Defaults to 'ds000117'.\n subject_ids : list of int\n The subjects to fetch. Defaults to [1], downloading subject 1.\n\n Returns\n -------\n data_path : str\n Path to the folder where data is stored.\n\n \"\"\"\n if not data_path:\n home = os.path.expanduser('~')\n data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')\n if not os.path.exists(data_path):\n os.makedirs(data_path)\n\n for subject_id in subject_ids:\n src_url = ('http://openfmri.s3.amazonaws.com/tarballs/'\n 'ds117_R0.1.1_sub%03d_raw.tgz' % subject_id)\n tar_fname = op.join(data_path, repo + '.tgz')\n target_dir = op.join(data_path, repo)\n if not op.exists(target_dir):\n if not op.exists(tar_fname):\n _fetch_file(url=src_url, file_name=tar_fname,\n print_destination=True, resume=True, timeout=10.)\n tf = tarfile.open(tar_fname)\n print('Extracting files. This may take a while ...')\n tf.extractall(path=data_path)\n shutil.move(op.join(data_path, 'ds117'), target_dir)\n os.remove(tar_fname)\n\n return data_path\n\n\ndef fetch_brainvision_testing_data(data_path=None):\n \"\"\"Download the MNE-Python testing data for the BrainVision format.\n\n Parameters\n ----------\n data_path : str | None\n Path to the folder where data is stored. Defaults to\n '~/mne_data/mne_bids_examples'\n\n Returns\n -------\n data_path : str\n Path to the folder where data is stored.\n\n \"\"\"\n if not data_path:\n home = os.path.expanduser('~')\n data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')\n if not os.path.exists(data_path):\n os.makedirs(data_path)\n\n base_url = 'https://github.com/mne-tools/mne-python/'\n base_url += 'raw/master/mne/io/brainvision/tests/data/test'\n file_endings = ['.vhdr', '.vmrk', '.eeg', ]\n\n for f_ending in file_endings:\n url = base_url + f_ending\n response = urllib.request.urlopen(url)\n\n fname = os.path.join(data_path, 'test' + f_ending)\n with open(fname, 'wb') as fout:\n fout.write(response.read())\n\n return data_path\n", "path": "mne_bids/datasets.py"}]}
| 1,290 | 185 |
gh_patches_debug_17256
|
rasdani/github-patches
|
git_diff
|
apluslms__a-plus-1352
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Login should not take the user to the front page
Common scenario: the user is browsing a particular course module. They aren’t logged in. They decide to log in, but doing so takes them to the A+ front page, from which they have to navigate back to where they were. Inconvenient.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `course/templatetags/base.py`
Content:
```
1 from datetime import datetime
2
3 from django import template
4 from django.conf import settings
5 from django.utils.safestring import mark_safe
6 from django.utils.text import format_lazy
7 from django.utils.translation import get_language, gettext_lazy as _
8 from lib.helpers import remove_query_param_from_url, settings_text, update_url_params
9 from exercise.submission_models import PendingSubmission
10 from site_alert.models import SiteAlert
11
12
13 register = template.Library()
14
15
16 def pick_localized(message):
17 if message and isinstance(message, dict):
18 return (message.get(get_language()) or
19 message.get(settings.LANGUAGE_CODE[:2]) or
20 list(message.values())[0])
21 return message
22
23
24 def get_date(cont, key):
25 data = cont.get(key)
26 if data and not isinstance(data, datetime):
27 data = datetime.strptime(data, '%Y-%m-%d')
28 cont[key] = data
29 return data
30
31
32 @register.simple_tag
33 def brand_name():
34 return mark_safe(settings.BRAND_NAME)
35
36
37 @register.simple_tag
38 def brand_name_long():
39 return mark_safe(settings.BRAND_NAME_LONG)
40
41
42 @register.simple_tag
43 def brand_institution_name():
44 return mark_safe(settings_text('BRAND_INSTITUTION_NAME'))
45
46
47 @register.simple_tag
48 def course_alert(instance):
49 exercises = PendingSubmission.objects.get_exercise_names_if_grader_is_unstable(instance)
50 if exercises:
51 message = format_lazy(
52 _('GRADER_PROBLEMS_ALERT -- {exercises}'),
53 exercises=exercises,
54 )
55 return mark_safe(format_lazy('<div class="alert alert-danger sticky-alert">{message}</div>', message=message))
56 return ''
57
58
59 @register.simple_tag
60 def site_alert():
61 alerts = SiteAlert.objects.filter(status=SiteAlert.STATUS.ACTIVE)
62 return mark_safe(
63 ''.join(
64 '<div class="alert alert-danger">{}</div>'.format(pick_localized(alert.alert))
65 for alert in alerts
66 )
67 )
68
69
70 @register.simple_tag
71 def site_advert(): # pylint: disable=inconsistent-return-statements
72 advert = settings.SITEWIDE_ADVERT
73 if not advert or not isinstance(advert, dict):
74 return
75 not_before = get_date(advert, 'not-before')
76 not_after = get_date(advert, 'not-after')
77 if not_before or not_after:
78 now = datetime.now()
79 if not_before and not_before > now:
80 return
81 if not_after and not_after < now:
82 return
83 return {k: pick_localized(advert.get(k))
84 for k in ('title', 'text', 'href', 'image')}
85
86
87 @register.simple_tag
88 def tracking_html():
89 return mark_safe(settings.TRACKING_HTML)
90
91
92 @register.filter
93 def localized_url(path, language=None):
94 base_url = settings.BASE_URL
95 if base_url.endswith('/'):
96 base_url = base_url[:-1]
97 path = remove_query_param_from_url(path, 'hl')
98 if not language:
99 language = settings.LANGUAGE_CODE.split('-')[0]
100 path = update_url_params(path, { 'hl': language })
101 return base_url + path
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/course/templatetags/base.py b/course/templatetags/base.py
--- a/course/templatetags/base.py
+++ b/course/templatetags/base.py
@@ -2,6 +2,8 @@
from django import template
from django.conf import settings
+from django.urls import resolve
+from django.urls.exceptions import Resolver404
from django.utils.safestring import mark_safe
from django.utils.text import format_lazy
from django.utils.translation import get_language, gettext_lazy as _
@@ -28,6 +30,15 @@
cont[key] = data
return data
[email protected]_tag(takes_context=True)
+def login_next(context):
+ request = context["request"]
+ try:
+ matched_url_name = resolve(request.path).url_name
+ next_path = f"?next={request.path}" if matched_url_name != 'logout' else ""
+ return next_path
+ except Resolver404:
+ return ""
@register.simple_tag
def brand_name():
|
{"golden_diff": "diff --git a/course/templatetags/base.py b/course/templatetags/base.py\n--- a/course/templatetags/base.py\n+++ b/course/templatetags/base.py\n@@ -2,6 +2,8 @@\n \n from django import template\n from django.conf import settings\n+from django.urls import resolve\n+from django.urls.exceptions import Resolver404\n from django.utils.safestring import mark_safe\n from django.utils.text import format_lazy\n from django.utils.translation import get_language, gettext_lazy as _\n@@ -28,6 +30,15 @@\n cont[key] = data\n return data\n \[email protected]_tag(takes_context=True)\n+def login_next(context):\n+ request = context[\"request\"]\n+ try:\n+ matched_url_name = resolve(request.path).url_name\n+ next_path = f\"?next={request.path}\" if matched_url_name != 'logout' else \"\"\n+ return next_path\n+ except Resolver404:\n+ return \"\"\n \n @register.simple_tag\n def brand_name():\n", "issue": "Login should not take the user to the front page\nCommon scenario: the user is browsing a particular course module. They aren\u2019t logged in. They decide to log in, but doing so takes them to the A+ front page, from which they have to navigate back to where they were. Inconvenient.\n", "before_files": [{"content": "from datetime import datetime\n\nfrom django import template\nfrom django.conf import settings\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import get_language, gettext_lazy as _\nfrom lib.helpers import remove_query_param_from_url, settings_text, update_url_params\nfrom exercise.submission_models import PendingSubmission\nfrom site_alert.models import SiteAlert\n\n\nregister = template.Library()\n\n\ndef pick_localized(message):\n if message and isinstance(message, dict):\n return (message.get(get_language()) or\n message.get(settings.LANGUAGE_CODE[:2]) or\n list(message.values())[0])\n return message\n\n\ndef get_date(cont, key):\n data = cont.get(key)\n if data and not isinstance(data, datetime):\n data = datetime.strptime(data, '%Y-%m-%d')\n cont[key] = data\n return data\n\n\[email protected]_tag\ndef brand_name():\n return mark_safe(settings.BRAND_NAME)\n\n\[email protected]_tag\ndef brand_name_long():\n return mark_safe(settings.BRAND_NAME_LONG)\n\n\[email protected]_tag\ndef brand_institution_name():\n return mark_safe(settings_text('BRAND_INSTITUTION_NAME'))\n\n\[email protected]_tag\ndef course_alert(instance):\n exercises = PendingSubmission.objects.get_exercise_names_if_grader_is_unstable(instance)\n if exercises:\n message = format_lazy(\n _('GRADER_PROBLEMS_ALERT -- {exercises}'),\n exercises=exercises,\n )\n return mark_safe(format_lazy('<div class=\"alert alert-danger sticky-alert\">{message}</div>', message=message))\n return ''\n\n\[email protected]_tag\ndef site_alert():\n alerts = SiteAlert.objects.filter(status=SiteAlert.STATUS.ACTIVE)\n return mark_safe(\n ''.join(\n '<div class=\"alert alert-danger\">{}</div>'.format(pick_localized(alert.alert))\n for alert in alerts\n )\n )\n\n\[email protected]_tag\ndef site_advert(): # pylint: disable=inconsistent-return-statements\n advert = settings.SITEWIDE_ADVERT\n if not advert or not isinstance(advert, dict):\n return\n not_before = get_date(advert, 'not-before')\n not_after = get_date(advert, 'not-after')\n if not_before or not_after:\n now = datetime.now()\n if not_before and not_before > now:\n return\n if not_after and not_after < now:\n return\n return {k: pick_localized(advert.get(k))\n for k in ('title', 'text', 'href', 'image')}\n\n\[email protected]_tag\ndef tracking_html():\n return mark_safe(settings.TRACKING_HTML)\n\n\[email protected]\ndef localized_url(path, language=None):\n base_url = settings.BASE_URL\n if base_url.endswith('/'):\n base_url = base_url[:-1]\n path = remove_query_param_from_url(path, 'hl')\n if not language:\n language = settings.LANGUAGE_CODE.split('-')[0]\n path = update_url_params(path, { 'hl': language })\n return base_url + path\n", "path": "course/templatetags/base.py"}], "after_files": [{"content": "from datetime import datetime\n\nfrom django import template\nfrom django.conf import settings\nfrom django.urls import resolve\nfrom django.urls.exceptions import Resolver404\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import get_language, gettext_lazy as _\nfrom lib.helpers import remove_query_param_from_url, settings_text, update_url_params\nfrom exercise.submission_models import PendingSubmission\nfrom site_alert.models import SiteAlert\n\n\nregister = template.Library()\n\n\ndef pick_localized(message):\n if message and isinstance(message, dict):\n return (message.get(get_language()) or\n message.get(settings.LANGUAGE_CODE[:2]) or\n list(message.values())[0])\n return message\n\n\ndef get_date(cont, key):\n data = cont.get(key)\n if data and not isinstance(data, datetime):\n data = datetime.strptime(data, '%Y-%m-%d')\n cont[key] = data\n return data\n\[email protected]_tag(takes_context=True)\ndef login_next(context):\n request = context[\"request\"]\n try:\n matched_url_name = resolve(request.path).url_name\n next_path = f\"?next={request.path}\" if matched_url_name != 'logout' else \"\"\n return next_path\n except Resolver404:\n return \"\"\n\[email protected]_tag\ndef brand_name():\n return mark_safe(settings.BRAND_NAME)\n\n\[email protected]_tag\ndef brand_name_long():\n return mark_safe(settings.BRAND_NAME_LONG)\n\n\[email protected]_tag\ndef brand_institution_name():\n return mark_safe(settings_text('BRAND_INSTITUTION_NAME'))\n\n\[email protected]_tag\ndef course_alert(instance):\n exercises = PendingSubmission.objects.get_exercise_names_if_grader_is_unstable(instance)\n if exercises:\n message = format_lazy(\n _('GRADER_PROBLEMS_ALERT -- {exercises}'),\n exercises=exercises,\n )\n return mark_safe(format_lazy('<div class=\"alert alert-danger sticky-alert\">{message}</div>', message=message))\n return ''\n\n\[email protected]_tag\ndef site_alert():\n alerts = SiteAlert.objects.filter(status=SiteAlert.STATUS.ACTIVE)\n return mark_safe(\n ''.join(\n '<div class=\"alert alert-danger\">{}</div>'.format(pick_localized(alert.alert))\n for alert in alerts\n )\n )\n\n\[email protected]_tag\ndef site_advert(): # pylint: disable=inconsistent-return-statements\n advert = settings.SITEWIDE_ADVERT\n if not advert or not isinstance(advert, dict):\n return\n not_before = get_date(advert, 'not-before')\n not_after = get_date(advert, 'not-after')\n if not_before or not_after:\n now = datetime.now()\n if not_before and not_before > now:\n return\n if not_after and not_after < now:\n return\n return {k: pick_localized(advert.get(k))\n for k in ('title', 'text', 'href', 'image')}\n\n\[email protected]_tag\ndef tracking_html():\n return mark_safe(settings.TRACKING_HTML)\n\n\[email protected]\ndef localized_url(path, language=None):\n base_url = settings.BASE_URL\n if base_url.endswith('/'):\n base_url = base_url[:-1]\n path = remove_query_param_from_url(path, 'hl')\n if not language:\n language = settings.LANGUAGE_CODE.split('-')[0]\n path = update_url_params(path, { 'hl': language })\n return base_url + path\n", "path": "course/templatetags/base.py"}]}
| 1,187 | 229 |
gh_patches_debug_1790
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-933
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs build broken with Sphinx v3.1.1
# Description
After the new Sphinx patch release [`v3.1.1`](https://github.com/sphinx-doc/sphinx/releases/tag/v3.1.1) was released there is an error with building the docs due to `autodocumenting`:
```
WARNING: don't know which module to import for autodocumenting 'optimize.opt_jax.jax_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'optimize.opt_minuit.minuit_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'optimize.opt_pytorch.pytorch_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'optimize.opt_scipy.scipy_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'optimize.opt_tflow.tflow_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'tensor.jax_backend.jax_backend' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'tensor.numpy_backend.numpy_backend' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'tensor.pytorch_backend.pytorch_backend' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'tensor.tensorflow_backend.tensorflow_backend' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'tensorflow': [
5 'tensorflow~=2.0',
6 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11
7 ],
8 'torch': ['torch~=1.2'],
9 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
10 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes
11 'minuit': ['iminuit'],
12 }
13 extras_require['backends'] = sorted(
14 set(
15 extras_require['tensorflow']
16 + extras_require['torch']
17 + extras_require['jax']
18 + extras_require['minuit']
19 )
20 )
21 extras_require['contrib'] = sorted(set(['matplotlib']))
22 extras_require['lint'] = sorted(set(['pyflakes', 'black']))
23
24 extras_require['test'] = sorted(
25 set(
26 extras_require['backends']
27 + extras_require['xmlio']
28 + extras_require['contrib']
29 + [
30 'pytest~=3.5',
31 'pytest-cov>=2.5.1',
32 'pytest-mock',
33 'pytest-benchmark[histogram]',
34 'pytest-console-scripts',
35 'pytest-mpl',
36 'pydocstyle',
37 'coverage>=4.0', # coveralls
38 'papermill~=2.0',
39 'nteract-scrapbook~=0.2',
40 'jupyter',
41 'uproot~=3.3',
42 'graphviz',
43 'jsonpatch',
44 ]
45 )
46 )
47 extras_require['docs'] = sorted(
48 set(
49 [
50 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs
51 'sphinxcontrib-bibtex',
52 'sphinx-click',
53 'sphinx_rtd_theme',
54 'nbsphinx',
55 'ipywidgets',
56 'sphinx-issues',
57 'sphinx-copybutton>0.2.9',
58 ]
59 )
60 )
61 extras_require['develop'] = sorted(
62 set(
63 extras_require['docs']
64 + extras_require['lint']
65 + extras_require['test']
66 + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']
67 )
68 )
69 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
70
71
72 setup(
73 extras_require=extras_require,
74 use_scm_version=lambda: {'local_scheme': lambda version: ''},
75 )
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -47,7 +47,7 @@
extras_require['docs'] = sorted(
set(
[
- 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs
+ 'sphinx>=3.1.2',
'sphinxcontrib-bibtex',
'sphinx-click',
'sphinx_rtd_theme',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -47,7 +47,7 @@\n extras_require['docs'] = sorted(\n set(\n [\n- 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs\n+ 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n", "issue": "Docs build broken with Sphinx v3.1.1\n# Description\r\n\r\nAfter the new Sphinx patch release [`v3.1.1`](https://github.com/sphinx-doc/sphinx/releases/tag/v3.1.1) was released there is an error with building the docs due to `autodocumenting`:\r\n\r\n```\r\n\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_jax.jax_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_minuit.minuit_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_pytorch.pytorch_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_scipy.scipy_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_tflow.tflow_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'tensor.jax_backend.jax_backend' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'tensor.numpy_backend.numpy_backend' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'tensor.pytorch_backend.pytorch_backend' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'tensor.tensorflow_backend.tensorflow_backend' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\n```\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow~=2.0',\n 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow~=2.0',\n 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 1,449 | 105 |
gh_patches_debug_1464
|
rasdani/github-patches
|
git_diff
|
conda__conda-build-1716
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnboundLocalError with --skip-existing and --no-locking flags
Hit this today on conda-build 2.1.2. Also tried with the tip of master and I get the same result. For reproduciblity, this is the output for trying to build the conda.recipe folder inside of conda-build itself:
```
$ conda build conda.recipe --no-locking --skip-existing master :: 1h :: ⬢
Cloning into '/home/edill/miniconda/conda-bld/conda.recipe_1485803296268/work'...
done.
checkout: 'HEAD'
Your branch is up-to-date with 'origin/_conda_cache_origin_head'.
==> git log -n1 <==
commit 6922ec3ed1afc287a4cd7f3872572f2bef89d892
Merge: 837fbc8 c82ea9b
Author: Mike Sarahan <[email protected]>
Date: Mon Jan 30 11:38:01 2017 -0600
Merge pull request #1704 from jerowe/feature/fix-perl-build
adding some fixes to cpan skeleton
==> git describe --tags --dirty <==
2.1.2-20-g6922ec3
==> git status <==
On branch _conda_cache_origin_head
Your branch is up-to-date with 'origin/_conda_cache_origin_head'.
nothing to commit, working directory clean
updating index in: /home/edill/miniconda/conda-bld/linux-64
Traceback (most recent call last):
File "/home/edill/miniconda/bin/conda-build", line 11, in <module>
load_entry_point('conda-build', 'console_scripts', 'conda-build')()
File "/home/edill/dev/conda/conda-build/conda_build/cli/main_build.py", line 322, in main
execute(sys.argv[1:])
File "/home/edill/dev/conda/conda-build/conda_build/cli/main_build.py", line 313, in execute
noverify=args.no_verify)
File "/home/edill/dev/conda/conda-build/conda_build/api.py", line 97, in build
need_source_download=need_source_download, config=config)
File "/home/edill/dev/conda/conda-build/conda_build/build.py", line 1478, in build_tree
config=config)
File "/home/edill/dev/conda/conda-build/conda_build/build.py", line 928, in build
package_exists = is_package_built(m, config)
File "/home/edill/dev/conda/conda-build/conda_build/build.py", line 1633, in is_package_built
update_index(d, config, could_be_mirror=False)
File "/home/edill/dev/conda/conda-build/conda_build/index.py", line 83, in update_index
with try_acquire_locks(locks, config.timeout):
UnboundLocalError: local variable 'locks' referenced before assignment
```
And some debug info
```
$ conda info
Current conda install:
platform : linux-64
conda version : 4.2.13
conda is private : False
conda-env version : 4.2.13
conda-build version : 2.1.2+20.g6922ec3
python version : 3.5.3.final.0
requests version : 2.13.0
root environment : /home/edill/miniconda (writable)
default environment : /home/edill/miniconda
envs directories : /home/edill/miniconda/envs
package cache : /home/edill/miniconda/pkgs
channel URLs : ...
config file : /home/edill/.condarc
offline mode : False
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_build/index.py`
Content:
```
1 '''
2 Functions related to creating repodata index files.
3 '''
4
5 from __future__ import absolute_import, division, print_function
6
7 import os
8 import bz2
9 import sys
10 import json
11 import tarfile
12 from os.path import isfile, join, getmtime
13
14 from conda_build.utils import file_info, get_lock, try_acquire_locks
15 from .conda_interface import PY3, md5_file
16
17
18 def read_index_tar(tar_path, config, lock):
19 """ Returns the index.json dict inside the given package tarball. """
20 if config.locking:
21 locks = [lock]
22 with try_acquire_locks(locks, config.timeout):
23 with tarfile.open(tar_path) as t:
24 try:
25 return json.loads(t.extractfile('info/index.json').read().decode('utf-8'))
26 except EOFError:
27 raise RuntimeError("Could not extract %s. File probably corrupt."
28 % tar_path)
29 except OSError as e:
30 raise RuntimeError("Could not extract %s (%s)" % (tar_path, e))
31 except tarfile.ReadError:
32 raise RuntimeError("Could not extract metadata from %s. "
33 "File probably corrupt." % tar_path)
34
35
36 def write_repodata(repodata, dir_path, lock, config=None):
37 """ Write updated repodata.json and repodata.json.bz2 """
38 if not config:
39 import conda_build.config
40 config = conda_build.config.config
41 if config.locking:
42 locks = [lock]
43 with try_acquire_locks(locks, config.timeout):
44 data = json.dumps(repodata, indent=2, sort_keys=True)
45 # strip trailing whitespace
46 data = '\n'.join(line.rstrip() for line in data.splitlines())
47 # make sure we have newline at the end
48 if not data.endswith('\n'):
49 data += '\n'
50 with open(join(dir_path, 'repodata.json'), 'w') as fo:
51 fo.write(data)
52 with open(join(dir_path, 'repodata.json.bz2'), 'wb') as fo:
53 fo.write(bz2.compress(data.encode('utf-8')))
54
55
56 def update_index(dir_path, config, force=False, check_md5=False, remove=True, lock=None,
57 could_be_mirror=True):
58 """
59 Update all index files in dir_path with changed packages.
60
61 :param verbose: Should detailed status messages be output?
62 :type verbose: bool
63 :param force: Whether to re-index all packages (including those that
64 haven't changed) or not.
65 :type force: bool
66 :param check_md5: Whether to check MD5s instead of mtimes for determining
67 if a package changed.
68 :type check_md5: bool
69 """
70
71 if config.verbose:
72 print("updating index in:", dir_path)
73 index_path = join(dir_path, '.index.json')
74 if not os.path.isdir(dir_path):
75 os.makedirs(dir_path)
76
77 if not lock:
78 lock = get_lock(dir_path)
79
80 if config.locking:
81 locks = [lock]
82
83 with try_acquire_locks(locks, config.timeout):
84 if force:
85 index = {}
86 else:
87 try:
88 mode_dict = {'mode': 'r', 'encoding': 'utf-8'} if PY3 else {'mode': 'rb'}
89 with open(index_path, **mode_dict) as fi:
90 index = json.load(fi)
91 except (IOError, ValueError):
92 index = {}
93
94 files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))
95 if could_be_mirror and any(fn.startswith('_license-') for fn in files):
96 sys.exit("""\
97 Error:
98 Indexing a copy of the Anaconda conda package channel is neither
99 necessary nor supported. If you wish to add your own packages,
100 you can do so by adding them to a separate channel.
101 """)
102 for fn in files:
103 path = join(dir_path, fn)
104 if fn in index:
105 if check_md5:
106 if index[fn]['md5'] == md5_file(path):
107 continue
108 elif index[fn]['mtime'] == getmtime(path):
109 continue
110 if config.verbose:
111 print('updating:', fn)
112 d = read_index_tar(path, config, lock=lock)
113 d.update(file_info(path))
114 index[fn] = d
115
116 for fn in files:
117 index[fn]['sig'] = '.' if isfile(join(dir_path, fn + '.sig')) else None
118
119 if remove:
120 # remove files from the index which are not on disk
121 for fn in set(index) - files:
122 if config.verbose:
123 print("removing:", fn)
124 del index[fn]
125
126 # Deal with Python 2 and 3's different json module type reqs
127 mode_dict = {'mode': 'w', 'encoding': 'utf-8'} if PY3 else {'mode': 'wb'}
128 with open(index_path, **mode_dict) as fo:
129 json.dump(index, fo, indent=2, sort_keys=True, default=str)
130
131 # --- new repodata
132 for fn in index:
133 info = index[fn]
134 for varname in 'arch', 'platform', 'mtime', 'ucs':
135 try:
136 del info[varname]
137 except KeyError:
138 pass
139
140 if 'requires' in info and 'depends' not in info:
141 info['depends'] = info['requires']
142
143 repodata = {'packages': index, 'info': {}}
144 write_repodata(repodata, dir_path, lock=lock, config=config)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conda_build/index.py b/conda_build/index.py
--- a/conda_build/index.py
+++ b/conda_build/index.py
@@ -77,8 +77,9 @@
if not lock:
lock = get_lock(dir_path)
+ locks = []
if config.locking:
- locks = [lock]
+ locks.append(lock)
with try_acquire_locks(locks, config.timeout):
if force:
|
{"golden_diff": "diff --git a/conda_build/index.py b/conda_build/index.py\n--- a/conda_build/index.py\n+++ b/conda_build/index.py\n@@ -77,8 +77,9 @@\n if not lock:\n lock = get_lock(dir_path)\n \n+ locks = []\n if config.locking:\n- locks = [lock]\n+ locks.append(lock)\n \n with try_acquire_locks(locks, config.timeout):\n if force:\n", "issue": "UnboundLocalError with --skip-existing and --no-locking flags\nHit this today on conda-build 2.1.2. Also tried with the tip of master and I get the same result. For reproduciblity, this is the output for trying to build the conda.recipe folder inside of conda-build itself:\r\n\r\n```\r\n$ conda build conda.recipe --no-locking --skip-existing master :: 1h :: \u2b22\r\nCloning into '/home/edill/miniconda/conda-bld/conda.recipe_1485803296268/work'...\r\ndone.\r\ncheckout: 'HEAD'\r\nYour branch is up-to-date with 'origin/_conda_cache_origin_head'.\r\n==> git log -n1 <==\r\n\r\ncommit 6922ec3ed1afc287a4cd7f3872572f2bef89d892\r\nMerge: 837fbc8 c82ea9b\r\nAuthor: Mike Sarahan <[email protected]>\r\nDate: Mon Jan 30 11:38:01 2017 -0600\r\n\r\n Merge pull request #1704 from jerowe/feature/fix-perl-build\r\n\r\n adding some fixes to cpan skeleton\r\n\r\n\r\n==> git describe --tags --dirty <==\r\n\r\n2.1.2-20-g6922ec3\r\n\r\n\r\n==> git status <==\r\n\r\nOn branch _conda_cache_origin_head\r\nYour branch is up-to-date with 'origin/_conda_cache_origin_head'.\r\n\r\nnothing to commit, working directory clean\r\n\r\n\r\nupdating index in: /home/edill/miniconda/conda-bld/linux-64\r\nTraceback (most recent call last):\r\n File \"/home/edill/miniconda/bin/conda-build\", line 11, in <module>\r\n load_entry_point('conda-build', 'console_scripts', 'conda-build')()\r\n File \"/home/edill/dev/conda/conda-build/conda_build/cli/main_build.py\", line 322, in main\r\n execute(sys.argv[1:])\r\n File \"/home/edill/dev/conda/conda-build/conda_build/cli/main_build.py\", line 313, in execute\r\n noverify=args.no_verify)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/api.py\", line 97, in build\r\n need_source_download=need_source_download, config=config)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/build.py\", line 1478, in build_tree\r\n config=config)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/build.py\", line 928, in build\r\n package_exists = is_package_built(m, config)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/build.py\", line 1633, in is_package_built\r\n update_index(d, config, could_be_mirror=False)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/index.py\", line 83, in update_index\r\n with try_acquire_locks(locks, config.timeout):\r\nUnboundLocalError: local variable 'locks' referenced before assignment\r\n```\r\n\r\nAnd some debug info\r\n\r\n```\r\n$ conda info \r\nCurrent conda install:\r\n\r\n platform : linux-64\r\n conda version : 4.2.13\r\n conda is private : False\r\n conda-env version : 4.2.13\r\n conda-build version : 2.1.2+20.g6922ec3\r\n python version : 3.5.3.final.0\r\n requests version : 2.13.0\r\n root environment : /home/edill/miniconda (writable)\r\n default environment : /home/edill/miniconda\r\n envs directories : /home/edill/miniconda/envs\r\n package cache : /home/edill/miniconda/pkgs\r\n channel URLs : ...\r\n config file : /home/edill/.condarc\r\n offline mode : False\r\n```\r\n\n", "before_files": [{"content": "'''\nFunctions related to creating repodata index files.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport bz2\nimport sys\nimport json\nimport tarfile\nfrom os.path import isfile, join, getmtime\n\nfrom conda_build.utils import file_info, get_lock, try_acquire_locks\nfrom .conda_interface import PY3, md5_file\n\n\ndef read_index_tar(tar_path, config, lock):\n \"\"\" Returns the index.json dict inside the given package tarball. \"\"\"\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n with tarfile.open(tar_path) as t:\n try:\n return json.loads(t.extractfile('info/index.json').read().decode('utf-8'))\n except EOFError:\n raise RuntimeError(\"Could not extract %s. File probably corrupt.\"\n % tar_path)\n except OSError as e:\n raise RuntimeError(\"Could not extract %s (%s)\" % (tar_path, e))\n except tarfile.ReadError:\n raise RuntimeError(\"Could not extract metadata from %s. \"\n \"File probably corrupt.\" % tar_path)\n\n\ndef write_repodata(repodata, dir_path, lock, config=None):\n \"\"\" Write updated repodata.json and repodata.json.bz2 \"\"\"\n if not config:\n import conda_build.config\n config = conda_build.config.config\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n data = json.dumps(repodata, indent=2, sort_keys=True)\n # strip trailing whitespace\n data = '\\n'.join(line.rstrip() for line in data.splitlines())\n # make sure we have newline at the end\n if not data.endswith('\\n'):\n data += '\\n'\n with open(join(dir_path, 'repodata.json'), 'w') as fo:\n fo.write(data)\n with open(join(dir_path, 'repodata.json.bz2'), 'wb') as fo:\n fo.write(bz2.compress(data.encode('utf-8')))\n\n\ndef update_index(dir_path, config, force=False, check_md5=False, remove=True, lock=None,\n could_be_mirror=True):\n \"\"\"\n Update all index files in dir_path with changed packages.\n\n :param verbose: Should detailed status messages be output?\n :type verbose: bool\n :param force: Whether to re-index all packages (including those that\n haven't changed) or not.\n :type force: bool\n :param check_md5: Whether to check MD5s instead of mtimes for determining\n if a package changed.\n :type check_md5: bool\n \"\"\"\n\n if config.verbose:\n print(\"updating index in:\", dir_path)\n index_path = join(dir_path, '.index.json')\n if not os.path.isdir(dir_path):\n os.makedirs(dir_path)\n\n if not lock:\n lock = get_lock(dir_path)\n\n if config.locking:\n locks = [lock]\n\n with try_acquire_locks(locks, config.timeout):\n if force:\n index = {}\n else:\n try:\n mode_dict = {'mode': 'r', 'encoding': 'utf-8'} if PY3 else {'mode': 'rb'}\n with open(index_path, **mode_dict) as fi:\n index = json.load(fi)\n except (IOError, ValueError):\n index = {}\n\n files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))\n if could_be_mirror and any(fn.startswith('_license-') for fn in files):\n sys.exit(\"\"\"\\\n Error:\n Indexing a copy of the Anaconda conda package channel is neither\n necessary nor supported. If you wish to add your own packages,\n you can do so by adding them to a separate channel.\n \"\"\")\n for fn in files:\n path = join(dir_path, fn)\n if fn in index:\n if check_md5:\n if index[fn]['md5'] == md5_file(path):\n continue\n elif index[fn]['mtime'] == getmtime(path):\n continue\n if config.verbose:\n print('updating:', fn)\n d = read_index_tar(path, config, lock=lock)\n d.update(file_info(path))\n index[fn] = d\n\n for fn in files:\n index[fn]['sig'] = '.' if isfile(join(dir_path, fn + '.sig')) else None\n\n if remove:\n # remove files from the index which are not on disk\n for fn in set(index) - files:\n if config.verbose:\n print(\"removing:\", fn)\n del index[fn]\n\n # Deal with Python 2 and 3's different json module type reqs\n mode_dict = {'mode': 'w', 'encoding': 'utf-8'} if PY3 else {'mode': 'wb'}\n with open(index_path, **mode_dict) as fo:\n json.dump(index, fo, indent=2, sort_keys=True, default=str)\n\n # --- new repodata\n for fn in index:\n info = index[fn]\n for varname in 'arch', 'platform', 'mtime', 'ucs':\n try:\n del info[varname]\n except KeyError:\n pass\n\n if 'requires' in info and 'depends' not in info:\n info['depends'] = info['requires']\n\n repodata = {'packages': index, 'info': {}}\n write_repodata(repodata, dir_path, lock=lock, config=config)\n", "path": "conda_build/index.py"}], "after_files": [{"content": "'''\nFunctions related to creating repodata index files.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport bz2\nimport sys\nimport json\nimport tarfile\nfrom os.path import isfile, join, getmtime\n\nfrom conda_build.utils import file_info, get_lock, try_acquire_locks\nfrom .conda_interface import PY3, md5_file\n\n\ndef read_index_tar(tar_path, config, lock):\n \"\"\" Returns the index.json dict inside the given package tarball. \"\"\"\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n with tarfile.open(tar_path) as t:\n try:\n return json.loads(t.extractfile('info/index.json').read().decode('utf-8'))\n except EOFError:\n raise RuntimeError(\"Could not extract %s. File probably corrupt.\"\n % tar_path)\n except OSError as e:\n raise RuntimeError(\"Could not extract %s (%s)\" % (tar_path, e))\n except tarfile.ReadError:\n raise RuntimeError(\"Could not extract metadata from %s. \"\n \"File probably corrupt.\" % tar_path)\n\n\ndef write_repodata(repodata, dir_path, lock, config=None):\n \"\"\" Write updated repodata.json and repodata.json.bz2 \"\"\"\n if not config:\n import conda_build.config\n config = conda_build.config.config\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n data = json.dumps(repodata, indent=2, sort_keys=True)\n # strip trailing whitespace\n data = '\\n'.join(line.rstrip() for line in data.splitlines())\n # make sure we have newline at the end\n if not data.endswith('\\n'):\n data += '\\n'\n with open(join(dir_path, 'repodata.json'), 'w') as fo:\n fo.write(data)\n with open(join(dir_path, 'repodata.json.bz2'), 'wb') as fo:\n fo.write(bz2.compress(data.encode('utf-8')))\n\n\ndef update_index(dir_path, config, force=False, check_md5=False, remove=True, lock=None,\n could_be_mirror=True):\n \"\"\"\n Update all index files in dir_path with changed packages.\n\n :param verbose: Should detailed status messages be output?\n :type verbose: bool\n :param force: Whether to re-index all packages (including those that\n haven't changed) or not.\n :type force: bool\n :param check_md5: Whether to check MD5s instead of mtimes for determining\n if a package changed.\n :type check_md5: bool\n \"\"\"\n\n if config.verbose:\n print(\"updating index in:\", dir_path)\n index_path = join(dir_path, '.index.json')\n if not os.path.isdir(dir_path):\n os.makedirs(dir_path)\n\n if not lock:\n lock = get_lock(dir_path)\n\n locks = []\n if config.locking:\n locks.append(lock)\n\n with try_acquire_locks(locks, config.timeout):\n if force:\n index = {}\n else:\n try:\n mode_dict = {'mode': 'r', 'encoding': 'utf-8'} if PY3 else {'mode': 'rb'}\n with open(index_path, **mode_dict) as fi:\n index = json.load(fi)\n except (IOError, ValueError):\n index = {}\n\n files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))\n if could_be_mirror and any(fn.startswith('_license-') for fn in files):\n sys.exit(\"\"\"\\\n Error:\n Indexing a copy of the Anaconda conda package channel is neither\n necessary nor supported. If you wish to add your own packages,\n you can do so by adding them to a separate channel.\n \"\"\")\n for fn in files:\n path = join(dir_path, fn)\n if fn in index:\n if check_md5:\n if index[fn]['md5'] == md5_file(path):\n continue\n elif index[fn]['mtime'] == getmtime(path):\n continue\n if config.verbose:\n print('updating:', fn)\n d = read_index_tar(path, config, lock=lock)\n d.update(file_info(path))\n index[fn] = d\n\n for fn in files:\n index[fn]['sig'] = '.' if isfile(join(dir_path, fn + '.sig')) else None\n\n if remove:\n # remove files from the index which are not on disk\n for fn in set(index) - files:\n if config.verbose:\n print(\"removing:\", fn)\n del index[fn]\n\n # Deal with Python 2 and 3's different json module type reqs\n mode_dict = {'mode': 'w', 'encoding': 'utf-8'} if PY3 else {'mode': 'wb'}\n with open(index_path, **mode_dict) as fo:\n json.dump(index, fo, indent=2, sort_keys=True, default=str)\n\n # --- new repodata\n for fn in index:\n info = index[fn]\n for varname in 'arch', 'platform', 'mtime', 'ucs':\n try:\n del info[varname]\n except KeyError:\n pass\n\n if 'requires' in info and 'depends' not in info:\n info['depends'] = info['requires']\n\n repodata = {'packages': index, 'info': {}}\n write_repodata(repodata, dir_path, lock=lock, config=config)\n", "path": "conda_build/index.py"}]}
| 2,697 | 99 |
gh_patches_debug_4748
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-268
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Text/link displayed after calling run_pipeline should be "Run link" not "Job link"
Here: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/_client.py#L153
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/kfp/_client.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import six
17 import time
18 import logging
19 import json
20 import os
21 import tarfile
22 import yaml
23 from datetime import datetime
24
25
26 class Client(object):
27 """ API Client for KubeFlow Pipeline.
28 """
29
30 def __init__(self, host='ml-pipeline.kubeflow.svc.cluster.local:8888'):
31 """Create a new instance of kfp client.
32
33 Args:
34 host: the API host. If running inside the cluster as a Pod, default value should work.
35 """
36
37 try:
38 import kfp_experiment
39 except ImportError:
40 raise Exception('This module requires installation of kfp_experiment')
41
42 try:
43 import kfp_run
44 except ImportError:
45 raise Exception('This module requires installation of kfp_run')
46
47 config = kfp_run.configuration.Configuration()
48 config.host = host
49 api_client = kfp_run.api_client.ApiClient(config)
50 self._run_api = kfp_run.api.run_service_api.RunServiceApi(api_client)
51
52 config = kfp_experiment.configuration.Configuration()
53 config.host = host
54 api_client = kfp_experiment.api_client.ApiClient(config)
55 self._experiment_api = \
56 kfp_experiment.api.experiment_service_api.ExperimentServiceApi(api_client)
57
58 def _is_ipython(self):
59 """Returns whether we are running in notebook."""
60 try:
61 import IPython
62 except ImportError:
63 return False
64
65 return True
66
67 def create_experiment(self, name):
68 """Create a new experiment.
69 Args:
70 name: the name of the experiment.
71 Returns:
72 An Experiment object. Most important field is id.
73 """
74 import kfp_experiment
75
76 exp = kfp_experiment.models.ApiExperiment(name=name)
77 response = self._experiment_api.create_experiment(body=exp)
78
79 if self._is_ipython():
80 import IPython
81 html = \
82 ('Experiment link <a href="/pipeline/#/experiments/details/%s" target="_blank" >here</a>'
83 % response.id)
84 IPython.display.display(IPython.display.HTML(html))
85 return response
86
87 def list_experiments(self, page_token='', page_size=10, sort_by=''):
88 """List experiments.
89 Args:
90 page_token: token for starting of the page.
91 page_size: size of the page.
92 sort_by: can be '[field_name]', '[field_name] des'. For example, 'name des'.
93 Returns:
94 A response object including a list of experiments and next page token.
95 """
96 response = self._experiment_api.list_experiment(
97 page_token=page_token, page_size=page_size, sort_by=sort_by)
98 return response
99
100 def get_experiment(self, experiment_id):
101 """Get details of an experiment
102 Args:
103 id of the experiment.
104 Returns:
105 A response object including details of a experiment.
106 Throws:
107 Exception if experiment is not found.
108 """
109 return self._experiment_api.get_experiment(id=experiment_id)
110
111 def _extract_pipeline_yaml(self, tar_file):
112 with tarfile.open(tar_file, "r:gz") as tar:
113 all_yaml_files = [m for m in tar if m.isfile() and
114 (os.path.splitext(m.name)[-1] == '.yaml' or os.path.splitext(m.name)[-1] == '.yml')]
115 if len(all_yaml_files) == 0:
116 raise ValueError('Invalid package. Missing pipeline yaml file in the package.')
117
118 if len(all_yaml_files) > 1:
119 raise ValueError('Invalid package. Multiple yaml files in the package.')
120
121 with tar.extractfile(all_yaml_files[0]) as f:
122 return yaml.load(f)
123
124 def run_pipeline(self, experiment_id, job_name, pipeline_package_path, params={}):
125 """Run a specified pipeline.
126
127 Args:
128 experiment_id: The string id of an experiment.
129 job_name: name of the job.
130 pipeline_package_path: local path of the pipeline package(tar.gz file).
131 params: a dictionary with key (string) as param name and value (string) as as param value.
132
133 Returns:
134 A run object. Most important field is id.
135 """
136 import kfp_run
137
138 pipeline_obj = self._extract_pipeline_yaml(pipeline_package_path)
139 pipeline_json_string = json.dumps(pipeline_obj)
140 api_params = [kfp_run.ApiParameter(name=k, value=str(v)) for k,v in six.iteritems(params)]
141 key = kfp_run.models.ApiResourceKey(id=experiment_id,
142 type=kfp_run.models.ApiResourceType.EXPERIMENT)
143 reference = kfp_run.models.ApiResourceReference(key, kfp_run.models.ApiRelationship.OWNER)
144 spec = kfp_run.models.ApiPipelineSpec(
145 workflow_manifest=pipeline_json_string, parameters=api_params)
146 run_body = kfp_run.models.ApiRun(
147 pipeline_spec=spec, resource_references=[reference], name=job_name)
148
149 response = self._run_api.create_run(body=run_body)
150
151 if self._is_ipython():
152 import IPython
153 html = ('Job link <a href="/pipeline/#/runs/details/%s" target="_blank" >here</a>'
154 % response.run.id)
155 IPython.display.display(IPython.display.HTML(html))
156 return response.run
157
158 def list_runs(self, page_token='', page_size=10, sort_by=''):
159 """List runs.
160 Args:
161 page_token: token for starting of the page.
162 page_size: size of the page.
163 sort_by: one of 'field_name', 'field_name des'. For example, 'name des'.
164 Returns:
165 A response object including a list of experiments and next page token.
166 """
167 response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by)
168 return response
169
170 def get_run(self, run_id):
171 """Get run details.
172 Args:
173 id of the run.
174 Returns:
175 A response object including details of a run.
176 Throws:
177 Exception if run is not found.
178 """
179 return self._run_api.get_run(run_id=run_id)
180
181 def wait_for_run_completion(self, run_id, timeout):
182 """Wait for a run to complete.
183 Args:
184 run_id: run id, returned from run_pipeline.
185 timeout: timeout in seconds.
186 Returns:
187 A run detail object: Most important fields are run and pipeline_runtime
188 """
189 status = 'Running:'
190 start_time = datetime.now()
191 while status is None or status.lower() not in ['succeeded', 'failed', 'skipped', 'error']:
192 get_run_response = self._run_api.get_run(run_id=run_id)
193 status = get_run_response.run.status
194 elapsed_time = (datetime.now() - start_time).seconds
195 logging.info('Waiting for the job to complete...')
196 if elapsed_time > timeout:
197 raise TimeoutError('Run timeout')
198 time.sleep(5)
199 return get_run_response
200
201 def _get_workflow_json(self, run_id):
202 """Get the workflow json.
203 Args:
204 run_id: run id, returned from run_pipeline.
205 Returns:
206 workflow: json workflow
207 """
208 get_run_response = self._run_api.get_run(run_id=run_id)
209 workflow = get_run_response.pipeline_runtime.workflow_manifest
210 workflow_json = json.loads(workflow)
211 return workflow_json
212
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sdk/python/kfp/_client.py b/sdk/python/kfp/_client.py
--- a/sdk/python/kfp/_client.py
+++ b/sdk/python/kfp/_client.py
@@ -150,7 +150,7 @@
if self._is_ipython():
import IPython
- html = ('Job link <a href="/pipeline/#/runs/details/%s" target="_blank" >here</a>'
+ html = ('Run link <a href="/pipeline/#/runs/details/%s" target="_blank" >here</a>'
% response.run.id)
IPython.display.display(IPython.display.HTML(html))
return response.run
|
{"golden_diff": "diff --git a/sdk/python/kfp/_client.py b/sdk/python/kfp/_client.py\n--- a/sdk/python/kfp/_client.py\n+++ b/sdk/python/kfp/_client.py\n@@ -150,7 +150,7 @@\n \n if self._is_ipython():\n import IPython\n- html = ('Job link <a href=\"/pipeline/#/runs/details/%s\" target=\"_blank\" >here</a>'\n+ html = ('Run link <a href=\"/pipeline/#/runs/details/%s\" target=\"_blank\" >here</a>'\n % response.run.id)\n IPython.display.display(IPython.display.HTML(html))\n return response.run\n", "issue": "Text/link displayed after calling run_pipeline should be \"Run link\" not \"Job link\" \nHere: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/_client.py#L153\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport six\nimport time\nimport logging\nimport json\nimport os\nimport tarfile\nimport yaml\nfrom datetime import datetime\n\n\nclass Client(object):\n \"\"\" API Client for KubeFlow Pipeline.\n \"\"\"\n\n def __init__(self, host='ml-pipeline.kubeflow.svc.cluster.local:8888'):\n \"\"\"Create a new instance of kfp client.\n\n Args:\n host: the API host. If running inside the cluster as a Pod, default value should work.\n \"\"\"\n\n try:\n import kfp_experiment\n except ImportError:\n raise Exception('This module requires installation of kfp_experiment')\n\n try:\n import kfp_run\n except ImportError:\n raise Exception('This module requires installation of kfp_run')\n\n config = kfp_run.configuration.Configuration()\n config.host = host\n api_client = kfp_run.api_client.ApiClient(config)\n self._run_api = kfp_run.api.run_service_api.RunServiceApi(api_client)\n\n config = kfp_experiment.configuration.Configuration()\n config.host = host\n api_client = kfp_experiment.api_client.ApiClient(config)\n self._experiment_api = \\\n kfp_experiment.api.experiment_service_api.ExperimentServiceApi(api_client)\n\n def _is_ipython(self):\n \"\"\"Returns whether we are running in notebook.\"\"\"\n try:\n import IPython\n except ImportError:\n return False\n\n return True\n\n def create_experiment(self, name):\n \"\"\"Create a new experiment.\n Args:\n name: the name of the experiment.\n Returns:\n An Experiment object. Most important field is id.\n \"\"\"\n import kfp_experiment\n\n exp = kfp_experiment.models.ApiExperiment(name=name)\n response = self._experiment_api.create_experiment(body=exp)\n \n if self._is_ipython():\n import IPython\n html = \\\n ('Experiment link <a href=\"/pipeline/#/experiments/details/%s\" target=\"_blank\" >here</a>'\n % response.id)\n IPython.display.display(IPython.display.HTML(html))\n return response\n\n def list_experiments(self, page_token='', page_size=10, sort_by=''):\n \"\"\"List experiments.\n Args:\n page_token: token for starting of the page.\n page_size: size of the page.\n sort_by: can be '[field_name]', '[field_name] des'. For example, 'name des'.\n Returns:\n A response object including a list of experiments and next page token.\n \"\"\"\n response = self._experiment_api.list_experiment(\n page_token=page_token, page_size=page_size, sort_by=sort_by)\n return response\n\n def get_experiment(self, experiment_id):\n \"\"\"Get details of an experiment\n Args:\n id of the experiment.\n Returns:\n A response object including details of a experiment.\n Throws:\n Exception if experiment is not found. \n \"\"\"\n return self._experiment_api.get_experiment(id=experiment_id)\n\n def _extract_pipeline_yaml(self, tar_file):\n with tarfile.open(tar_file, \"r:gz\") as tar:\n all_yaml_files = [m for m in tar if m.isfile() and \n (os.path.splitext(m.name)[-1] == '.yaml' or os.path.splitext(m.name)[-1] == '.yml')]\n if len(all_yaml_files) == 0:\n raise ValueError('Invalid package. Missing pipeline yaml file in the package.')\n \n if len(all_yaml_files) > 1:\n raise ValueError('Invalid package. Multiple yaml files in the package.')\n \n with tar.extractfile(all_yaml_files[0]) as f:\n return yaml.load(f)\n\n def run_pipeline(self, experiment_id, job_name, pipeline_package_path, params={}):\n \"\"\"Run a specified pipeline.\n\n Args:\n experiment_id: The string id of an experiment.\n job_name: name of the job.\n pipeline_package_path: local path of the pipeline package(tar.gz file).\n params: a dictionary with key (string) as param name and value (string) as as param value.\n\n Returns:\n A run object. Most important field is id.\n \"\"\"\n import kfp_run\n\n pipeline_obj = self._extract_pipeline_yaml(pipeline_package_path)\n pipeline_json_string = json.dumps(pipeline_obj)\n api_params = [kfp_run.ApiParameter(name=k, value=str(v)) for k,v in six.iteritems(params)]\n key = kfp_run.models.ApiResourceKey(id=experiment_id,\n type=kfp_run.models.ApiResourceType.EXPERIMENT)\n reference = kfp_run.models.ApiResourceReference(key, kfp_run.models.ApiRelationship.OWNER)\n spec = kfp_run.models.ApiPipelineSpec(\n workflow_manifest=pipeline_json_string, parameters=api_params)\n run_body = kfp_run.models.ApiRun(\n pipeline_spec=spec, resource_references=[reference], name=job_name)\n\n response = self._run_api.create_run(body=run_body)\n \n if self._is_ipython():\n import IPython\n html = ('Job link <a href=\"/pipeline/#/runs/details/%s\" target=\"_blank\" >here</a>'\n % response.run.id)\n IPython.display.display(IPython.display.HTML(html))\n return response.run\n\n def list_runs(self, page_token='', page_size=10, sort_by=''):\n \"\"\"List runs.\n Args:\n page_token: token for starting of the page.\n page_size: size of the page.\n sort_by: one of 'field_name', 'field_name des'. For example, 'name des'.\n Returns:\n A response object including a list of experiments and next page token.\n \"\"\"\n response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by)\n return response\n\n def get_run(self, run_id):\n \"\"\"Get run details.\n Args:\n id of the run.\n Returns:\n A response object including details of a run.\n Throws:\n Exception if run is not found.\n \"\"\"\n return self._run_api.get_run(run_id=run_id)\n\n def wait_for_run_completion(self, run_id, timeout):\n \"\"\"Wait for a run to complete.\n Args:\n run_id: run id, returned from run_pipeline.\n timeout: timeout in seconds.\n Returns:\n A run detail object: Most important fields are run and pipeline_runtime\n \"\"\"\n status = 'Running:'\n start_time = datetime.now()\n while status is None or status.lower() not in ['succeeded', 'failed', 'skipped', 'error']:\n get_run_response = self._run_api.get_run(run_id=run_id)\n status = get_run_response.run.status\n elapsed_time = (datetime.now() - start_time).seconds\n logging.info('Waiting for the job to complete...')\n if elapsed_time > timeout:\n raise TimeoutError('Run timeout')\n time.sleep(5)\n return get_run_response\n\n def _get_workflow_json(self, run_id):\n \"\"\"Get the workflow json.\n Args:\n run_id: run id, returned from run_pipeline.\n Returns:\n workflow: json workflow\n \"\"\"\n get_run_response = self._run_api.get_run(run_id=run_id)\n workflow = get_run_response.pipeline_runtime.workflow_manifest\n workflow_json = json.loads(workflow)\n return workflow_json\n", "path": "sdk/python/kfp/_client.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport six\nimport time\nimport logging\nimport json\nimport os\nimport tarfile\nimport yaml\nfrom datetime import datetime\n\n\nclass Client(object):\n \"\"\" API Client for KubeFlow Pipeline.\n \"\"\"\n\n def __init__(self, host='ml-pipeline.kubeflow.svc.cluster.local:8888'):\n \"\"\"Create a new instance of kfp client.\n\n Args:\n host: the API host. If running inside the cluster as a Pod, default value should work.\n \"\"\"\n\n try:\n import kfp_experiment\n except ImportError:\n raise Exception('This module requires installation of kfp_experiment')\n\n try:\n import kfp_run\n except ImportError:\n raise Exception('This module requires installation of kfp_run')\n\n config = kfp_run.configuration.Configuration()\n config.host = host\n api_client = kfp_run.api_client.ApiClient(config)\n self._run_api = kfp_run.api.run_service_api.RunServiceApi(api_client)\n\n config = kfp_experiment.configuration.Configuration()\n config.host = host\n api_client = kfp_experiment.api_client.ApiClient(config)\n self._experiment_api = \\\n kfp_experiment.api.experiment_service_api.ExperimentServiceApi(api_client)\n\n def _is_ipython(self):\n \"\"\"Returns whether we are running in notebook.\"\"\"\n try:\n import IPython\n except ImportError:\n return False\n\n return True\n\n def create_experiment(self, name):\n \"\"\"Create a new experiment.\n Args:\n name: the name of the experiment.\n Returns:\n An Experiment object. Most important field is id.\n \"\"\"\n import kfp_experiment\n\n exp = kfp_experiment.models.ApiExperiment(name=name)\n response = self._experiment_api.create_experiment(body=exp)\n \n if self._is_ipython():\n import IPython\n html = \\\n ('Experiment link <a href=\"/pipeline/#/experiments/details/%s\" target=\"_blank\" >here</a>'\n % response.id)\n IPython.display.display(IPython.display.HTML(html))\n return response\n\n def list_experiments(self, page_token='', page_size=10, sort_by=''):\n \"\"\"List experiments.\n Args:\n page_token: token for starting of the page.\n page_size: size of the page.\n sort_by: can be '[field_name]', '[field_name] des'. For example, 'name des'.\n Returns:\n A response object including a list of experiments and next page token.\n \"\"\"\n response = self._experiment_api.list_experiment(\n page_token=page_token, page_size=page_size, sort_by=sort_by)\n return response\n\n def get_experiment(self, experiment_id):\n \"\"\"Get details of an experiment\n Args:\n id of the experiment.\n Returns:\n A response object including details of a experiment.\n Throws:\n Exception if experiment is not found. \n \"\"\"\n return self._experiment_api.get_experiment(id=experiment_id)\n\n def _extract_pipeline_yaml(self, tar_file):\n with tarfile.open(tar_file, \"r:gz\") as tar:\n all_yaml_files = [m for m in tar if m.isfile() and \n (os.path.splitext(m.name)[-1] == '.yaml' or os.path.splitext(m.name)[-1] == '.yml')]\n if len(all_yaml_files) == 0:\n raise ValueError('Invalid package. Missing pipeline yaml file in the package.')\n \n if len(all_yaml_files) > 1:\n raise ValueError('Invalid package. Multiple yaml files in the package.')\n \n with tar.extractfile(all_yaml_files[0]) as f:\n return yaml.load(f)\n\n def run_pipeline(self, experiment_id, job_name, pipeline_package_path, params={}):\n \"\"\"Run a specified pipeline.\n\n Args:\n experiment_id: The string id of an experiment.\n job_name: name of the job.\n pipeline_package_path: local path of the pipeline package(tar.gz file).\n params: a dictionary with key (string) as param name and value (string) as as param value.\n\n Returns:\n A run object. Most important field is id.\n \"\"\"\n import kfp_run\n\n pipeline_obj = self._extract_pipeline_yaml(pipeline_package_path)\n pipeline_json_string = json.dumps(pipeline_obj)\n api_params = [kfp_run.ApiParameter(name=k, value=str(v)) for k,v in six.iteritems(params)]\n key = kfp_run.models.ApiResourceKey(id=experiment_id,\n type=kfp_run.models.ApiResourceType.EXPERIMENT)\n reference = kfp_run.models.ApiResourceReference(key, kfp_run.models.ApiRelationship.OWNER)\n spec = kfp_run.models.ApiPipelineSpec(\n workflow_manifest=pipeline_json_string, parameters=api_params)\n run_body = kfp_run.models.ApiRun(\n pipeline_spec=spec, resource_references=[reference], name=job_name)\n\n response = self._run_api.create_run(body=run_body)\n \n if self._is_ipython():\n import IPython\n html = ('Run link <a href=\"/pipeline/#/runs/details/%s\" target=\"_blank\" >here</a>'\n % response.run.id)\n IPython.display.display(IPython.display.HTML(html))\n return response.run\n\n def list_runs(self, page_token='', page_size=10, sort_by=''):\n \"\"\"List runs.\n Args:\n page_token: token for starting of the page.\n page_size: size of the page.\n sort_by: one of 'field_name', 'field_name des'. For example, 'name des'.\n Returns:\n A response object including a list of experiments and next page token.\n \"\"\"\n response = self._run_api.list_runs(page_token=page_token, page_size=page_size, sort_by=sort_by)\n return response\n\n def get_run(self, run_id):\n \"\"\"Get run details.\n Args:\n id of the run.\n Returns:\n A response object including details of a run.\n Throws:\n Exception if run is not found.\n \"\"\"\n return self._run_api.get_run(run_id=run_id)\n\n def wait_for_run_completion(self, run_id, timeout):\n \"\"\"Wait for a run to complete.\n Args:\n run_id: run id, returned from run_pipeline.\n timeout: timeout in seconds.\n Returns:\n A run detail object: Most important fields are run and pipeline_runtime\n \"\"\"\n status = 'Running:'\n start_time = datetime.now()\n while status is None or status.lower() not in ['succeeded', 'failed', 'skipped', 'error']:\n get_run_response = self._run_api.get_run(run_id=run_id)\n status = get_run_response.run.status\n elapsed_time = (datetime.now() - start_time).seconds\n logging.info('Waiting for the job to complete...')\n if elapsed_time > timeout:\n raise TimeoutError('Run timeout')\n time.sleep(5)\n return get_run_response\n\n def _get_workflow_json(self, run_id):\n \"\"\"Get the workflow json.\n Args:\n run_id: run id, returned from run_pipeline.\n Returns:\n workflow: json workflow\n \"\"\"\n get_run_response = self._run_api.get_run(run_id=run_id)\n workflow = get_run_response.pipeline_runtime.workflow_manifest\n workflow_json = json.loads(workflow)\n return workflow_json\n", "path": "sdk/python/kfp/_client.py"}]}
| 2,542 | 146 |
gh_patches_debug_9087
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-2585
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RoutablePage throws a TypeError if index route is not set
RoutablePage should not raise TypeError if index route is not set.
[route method](https://github.com/torchbox/wagtail/blob/master/wagtail/contrib/wagtailroutablepage/models.py#L97) properly calls parent class method if route is not found, but `serve` method has different signature, so it fails.
There is also a discussion of similar issue on Google groups:
https://groups.google.com/forum/#!msg/wagtail/Q9FymA-MOxM/1UkQ0hRGAAAJ
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/contrib/wagtailroutablepage/models.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2
3 from django.conf.urls import url
4 from django.core.urlresolvers import RegexURLResolver
5 from django.http import Http404
6
7 from wagtail.wagtailcore.models import Page
8 from wagtail.wagtailcore.url_routing import RouteResult
9
10 _creation_counter = 0
11
12
13 def route(pattern, name=None):
14 def decorator(view_func):
15 global _creation_counter
16 _creation_counter += 1
17
18 # Make sure page has _routablepage_routes attribute
19 if not hasattr(view_func, '_routablepage_routes'):
20 view_func._routablepage_routes = []
21
22 # Add new route to view
23 view_func._routablepage_routes.append((
24 url(pattern, view_func, name=(name or view_func.__name__)),
25 _creation_counter,
26 ))
27
28 return view_func
29
30 return decorator
31
32
33 class RoutablePageMixin(object):
34 """
35 This class can be mixed in to a Page model, allowing extra routes to be
36 added to it.
37 """
38 #: Set this to a tuple of ``django.conf.urls.url`` objects.
39 subpage_urls = None
40
41 @classmethod
42 def get_subpage_urls(cls):
43 routes = []
44 for attr in dir(cls):
45 val = getattr(cls, attr)
46 if hasattr(val, '_routablepage_routes'):
47 routes.extend(val._routablepage_routes)
48
49 return tuple([
50 route[0]
51 for route in sorted(routes, key=lambda route: route[1])
52 ])
53
54 @classmethod
55 def get_resolver(cls):
56 if '_routablepage_urlresolver' not in cls.__dict__:
57 subpage_urls = cls.get_subpage_urls()
58 cls._routablepage_urlresolver = RegexURLResolver(r'^/', subpage_urls)
59
60 return cls._routablepage_urlresolver
61
62 def reverse_subpage(self, name, args=None, kwargs=None):
63 """
64 This method takes a route name/arguments and returns a URL path.
65 """
66 args = args or []
67 kwargs = kwargs or {}
68
69 return self.get_resolver().reverse(name, *args, **kwargs)
70
71 def resolve_subpage(self, path):
72 """
73 This method takes a URL path and finds the view to call.
74 """
75 view, args, kwargs = self.get_resolver().resolve(path)
76
77 # Bind the method
78 view = view.__get__(self, type(self))
79
80 return view, args, kwargs
81
82 def route(self, request, path_components):
83 """
84 This hooks the subpage URLs into Wagtail's routing.
85 """
86 if self.live:
87 try:
88 path = '/'
89 if path_components:
90 path += '/'.join(path_components) + '/'
91
92 view, args, kwargs = self.resolve_subpage(path)
93 return RouteResult(self, args=(view, args, kwargs))
94 except Http404:
95 pass
96
97 return super(RoutablePageMixin, self).route(request, path_components)
98
99 def serve(self, request, view, args, kwargs):
100 return view(request, *args, **kwargs)
101
102 def serve_preview(self, request, mode_name):
103 view, args, kwargs = self.resolve_subpage('/')
104 return view(request, *args, **kwargs)
105
106
107 class RoutablePage(RoutablePageMixin, Page):
108 """
109 This class extends Page by adding methods which allows extra routes to be
110 added to it.
111 """
112
113 class Meta:
114 abstract = True
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/contrib/wagtailroutablepage/models.py b/wagtail/contrib/wagtailroutablepage/models.py
--- a/wagtail/contrib/wagtailroutablepage/models.py
+++ b/wagtail/contrib/wagtailroutablepage/models.py
@@ -96,7 +96,13 @@
return super(RoutablePageMixin, self).route(request, path_components)
- def serve(self, request, view, args, kwargs):
+ def serve(self, request, view=None, args=None, kwargs=None):
+ if args is None:
+ args = []
+ if kwargs is None:
+ kwargs = {}
+ if view is None:
+ return super(RoutablePageMixin, self).serve(request, *args, **kwargs)
return view(request, *args, **kwargs)
def serve_preview(self, request, mode_name):
|
{"golden_diff": "diff --git a/wagtail/contrib/wagtailroutablepage/models.py b/wagtail/contrib/wagtailroutablepage/models.py\n--- a/wagtail/contrib/wagtailroutablepage/models.py\n+++ b/wagtail/contrib/wagtailroutablepage/models.py\n@@ -96,7 +96,13 @@\n \n return super(RoutablePageMixin, self).route(request, path_components)\n \n- def serve(self, request, view, args, kwargs):\n+ def serve(self, request, view=None, args=None, kwargs=None):\n+ if args is None:\n+ args = []\n+ if kwargs is None:\n+ kwargs = {}\n+ if view is None:\n+ return super(RoutablePageMixin, self).serve(request, *args, **kwargs)\n return view(request, *args, **kwargs)\n \n def serve_preview(self, request, mode_name):\n", "issue": "RoutablePage throws a TypeError if index route is not set\nRoutablePage should not raise TypeError if index route is not set. \n\n[route method](https://github.com/torchbox/wagtail/blob/master/wagtail/contrib/wagtailroutablepage/models.py#L97) properly calls parent class method if route is not found, but `serve` method has different signature, so it fails.\n\nThere is also a discussion of similar issue on Google groups:\nhttps://groups.google.com/forum/#!msg/wagtail/Q9FymA-MOxM/1UkQ0hRGAAAJ\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django.conf.urls import url\nfrom django.core.urlresolvers import RegexURLResolver\nfrom django.http import Http404\n\nfrom wagtail.wagtailcore.models import Page\nfrom wagtail.wagtailcore.url_routing import RouteResult\n\n_creation_counter = 0\n\n\ndef route(pattern, name=None):\n def decorator(view_func):\n global _creation_counter\n _creation_counter += 1\n\n # Make sure page has _routablepage_routes attribute\n if not hasattr(view_func, '_routablepage_routes'):\n view_func._routablepage_routes = []\n\n # Add new route to view\n view_func._routablepage_routes.append((\n url(pattern, view_func, name=(name or view_func.__name__)),\n _creation_counter,\n ))\n\n return view_func\n\n return decorator\n\n\nclass RoutablePageMixin(object):\n \"\"\"\n This class can be mixed in to a Page model, allowing extra routes to be\n added to it.\n \"\"\"\n #: Set this to a tuple of ``django.conf.urls.url`` objects.\n subpage_urls = None\n\n @classmethod\n def get_subpage_urls(cls):\n routes = []\n for attr in dir(cls):\n val = getattr(cls, attr)\n if hasattr(val, '_routablepage_routes'):\n routes.extend(val._routablepage_routes)\n\n return tuple([\n route[0]\n for route in sorted(routes, key=lambda route: route[1])\n ])\n\n @classmethod\n def get_resolver(cls):\n if '_routablepage_urlresolver' not in cls.__dict__:\n subpage_urls = cls.get_subpage_urls()\n cls._routablepage_urlresolver = RegexURLResolver(r'^/', subpage_urls)\n\n return cls._routablepage_urlresolver\n\n def reverse_subpage(self, name, args=None, kwargs=None):\n \"\"\"\n This method takes a route name/arguments and returns a URL path.\n \"\"\"\n args = args or []\n kwargs = kwargs or {}\n\n return self.get_resolver().reverse(name, *args, **kwargs)\n\n def resolve_subpage(self, path):\n \"\"\"\n This method takes a URL path and finds the view to call.\n \"\"\"\n view, args, kwargs = self.get_resolver().resolve(path)\n\n # Bind the method\n view = view.__get__(self, type(self))\n\n return view, args, kwargs\n\n def route(self, request, path_components):\n \"\"\"\n This hooks the subpage URLs into Wagtail's routing.\n \"\"\"\n if self.live:\n try:\n path = '/'\n if path_components:\n path += '/'.join(path_components) + '/'\n\n view, args, kwargs = self.resolve_subpage(path)\n return RouteResult(self, args=(view, args, kwargs))\n except Http404:\n pass\n\n return super(RoutablePageMixin, self).route(request, path_components)\n\n def serve(self, request, view, args, kwargs):\n return view(request, *args, **kwargs)\n\n def serve_preview(self, request, mode_name):\n view, args, kwargs = self.resolve_subpage('/')\n return view(request, *args, **kwargs)\n\n\nclass RoutablePage(RoutablePageMixin, Page):\n \"\"\"\n This class extends Page by adding methods which allows extra routes to be\n added to it.\n \"\"\"\n\n class Meta:\n abstract = True\n", "path": "wagtail/contrib/wagtailroutablepage/models.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django.conf.urls import url\nfrom django.core.urlresolvers import RegexURLResolver\nfrom django.http import Http404\n\nfrom wagtail.wagtailcore.models import Page\nfrom wagtail.wagtailcore.url_routing import RouteResult\n\n_creation_counter = 0\n\n\ndef route(pattern, name=None):\n def decorator(view_func):\n global _creation_counter\n _creation_counter += 1\n\n # Make sure page has _routablepage_routes attribute\n if not hasattr(view_func, '_routablepage_routes'):\n view_func._routablepage_routes = []\n\n # Add new route to view\n view_func._routablepage_routes.append((\n url(pattern, view_func, name=(name or view_func.__name__)),\n _creation_counter,\n ))\n\n return view_func\n\n return decorator\n\n\nclass RoutablePageMixin(object):\n \"\"\"\n This class can be mixed in to a Page model, allowing extra routes to be\n added to it.\n \"\"\"\n #: Set this to a tuple of ``django.conf.urls.url`` objects.\n subpage_urls = None\n\n @classmethod\n def get_subpage_urls(cls):\n routes = []\n for attr in dir(cls):\n val = getattr(cls, attr)\n if hasattr(val, '_routablepage_routes'):\n routes.extend(val._routablepage_routes)\n\n return tuple([\n route[0]\n for route in sorted(routes, key=lambda route: route[1])\n ])\n\n @classmethod\n def get_resolver(cls):\n if '_routablepage_urlresolver' not in cls.__dict__:\n subpage_urls = cls.get_subpage_urls()\n cls._routablepage_urlresolver = RegexURLResolver(r'^/', subpage_urls)\n\n return cls._routablepage_urlresolver\n\n def reverse_subpage(self, name, args=None, kwargs=None):\n \"\"\"\n This method takes a route name/arguments and returns a URL path.\n \"\"\"\n args = args or []\n kwargs = kwargs or {}\n\n return self.get_resolver().reverse(name, *args, **kwargs)\n\n def resolve_subpage(self, path):\n \"\"\"\n This method takes a URL path and finds the view to call.\n \"\"\"\n view, args, kwargs = self.get_resolver().resolve(path)\n\n # Bind the method\n view = view.__get__(self, type(self))\n\n return view, args, kwargs\n\n def route(self, request, path_components):\n \"\"\"\n This hooks the subpage URLs into Wagtail's routing.\n \"\"\"\n if self.live:\n try:\n path = '/'\n if path_components:\n path += '/'.join(path_components) + '/'\n\n view, args, kwargs = self.resolve_subpage(path)\n return RouteResult(self, args=(view, args, kwargs))\n except Http404:\n pass\n\n return super(RoutablePageMixin, self).route(request, path_components)\n\n def serve(self, request, view=None, args=None, kwargs=None):\n if args is None:\n args = []\n if kwargs is None:\n kwargs = {}\n if view is None:\n return super(RoutablePageMixin, self).serve(request, *args, **kwargs)\n return view(request, *args, **kwargs)\n\n def serve_preview(self, request, mode_name):\n view, args, kwargs = self.resolve_subpage('/')\n return view(request, *args, **kwargs)\n\n\nclass RoutablePage(RoutablePageMixin, Page):\n \"\"\"\n This class extends Page by adding methods which allows extra routes to be\n added to it.\n \"\"\"\n\n class Meta:\n abstract = True\n", "path": "wagtail/contrib/wagtailroutablepage/models.py"}]}
| 1,399 | 206 |
gh_patches_debug_15119
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-734
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DMM Example Model and guide dims disagree when IAF is applied
When the number of IAF is non zero, the latent dimension changes to the same value as the batch size.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/distributions/transformed_distribution.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import numbers
4 from abc import ABCMeta, abstractmethod
5
6 import torch
7 import torch.nn as nn
8 from six import add_metaclass
9 from torch.autograd import Variable
10
11 from pyro.distributions.distribution import Distribution
12 from pyro.distributions.util import copy_docs_from
13 from pyro.nn import AutoRegressiveNN
14
15
16 @copy_docs_from(Distribution)
17 class TransformedDistribution(Distribution):
18 """
19 Transforms the base distribution by applying a sequence of `Bijector`s to it.
20 This results in a scorable distribution (i.e. it has a `log_pdf()` method).
21
22 :param base_distribution: a (continuous) base distribution; samples from this distribution
23 are passed through the sequence of `Bijector`s to yield a sample from the
24 `TransformedDistribution`
25 :type base_distribution: pyro.distribution.Distribution
26 :param bijectors: either a single Bijector or a sequence of Bijectors wrapped in a nn.ModuleList
27 :returns: the transformed distribution
28 """
29
30 def __init__(self, base_distribution, bijectors, *args, **kwargs):
31 super(TransformedDistribution, self).__init__(*args, **kwargs)
32 self.reparameterized = base_distribution.reparameterized
33 self.base_dist = base_distribution
34 if isinstance(bijectors, Bijector):
35 self.bijectors = nn.ModuleList([bijectors])
36 elif isinstance(bijectors, nn.ModuleList):
37 for bijector in bijectors:
38 assert isinstance(bijector, Bijector), \
39 "bijectors must be a Bijector or a nn.ModuleList of Bijectors"
40 self.bijectors = bijectors
41
42 def sample(self, *args, **kwargs):
43 """
44 :returns: a sample y
45 :rtype: torch.autograd.Variable
46
47 Sample from base distribution and pass through bijector(s)
48 """
49 x = self.base_dist.sample(*args, **kwargs)
50 next_input = x
51 for bijector in self.bijectors:
52 y = bijector(next_input)
53 if bijector.add_inverse_to_cache:
54 bijector._add_intermediate_to_cache(next_input, y, 'x')
55 next_input = y
56 return next_input
57
58 def batch_shape(self, x=None, *args, **kwargs):
59 return self.base_dist.batch_shape(*args, **kwargs)
60
61 def event_shape(self, *args, **kwargs):
62 return self.base_dist.event_shape(*args, **kwargs)
63
64 def log_prob(self, y, *args, **kwargs):
65 """
66 :param y: a value sampled from the transformed distribution
67 :type y: torch.autograd.Variable
68
69 :returns: the score (the log pdf) of y
70 :rtype: torch.autograd.Variable
71
72 Scores the sample by inverting the bijector(s) and computing the score using the score
73 of the base distribution and the log det jacobian
74 """
75 value = y
76 log_det_jacobian = 0.0
77 for bijector in reversed(self.bijectors):
78 log_det_jacobian += bijector.batch_log_det_jacobian(value, *args, **kwargs)
79 value = bijector.inverse(value)
80 base_log_pdf = self.base_dist.log_prob(value, *args, **kwargs)
81 if not isinstance(log_det_jacobian, numbers.Number):
82 log_det_jacobian = log_det_jacobian.contiguous().view(*base_log_pdf.size())
83 assert log_det_jacobian.size() == base_log_pdf.size(), \
84 'Invalid batch_log_det_jacobian().size():\nexpected {}\nactual {}'.format(
85 base_log_pdf.size(), log_det_jacobian.size())
86 return base_log_pdf - log_det_jacobian
87
88
89 @add_metaclass(ABCMeta)
90 class Bijector(nn.Module):
91 """
92 Abstract class `Bijector`. `Bijector` are bijective transformations with computable
93 log det jacobians. They are meant for use in `TransformedDistribution`.
94 """
95
96 def __init__(self, *args, **kwargs):
97 super(Bijector, self).__init__(*args, **kwargs)
98 self.add_inverse_to_cache = False
99
100 @abstractmethod
101 def __call__(self, *args, **kwargs):
102 """
103 Virtual forward method
104
105 Invokes the bijection x=>y
106 """
107 raise NotImplementedError()
108
109 @abstractmethod
110 def inverse(self, *args, **kwargs):
111 """
112 Virtual inverse method
113
114 Inverts the bijection y => x.
115 """
116 raise NotImplementedError()
117
118 def log_det_jacobian(self, *args, **kwargs):
119 """
120 Default logdet jacobian method.
121
122 Computes the log det jacobian `|dy/dx|`
123 """
124 return self.batch_log_det_jacobian(*args, **kwargs).sum()
125
126 def batch_log_det_jacobian(self, *args, **kwargs):
127 """
128 Virtual elementwise logdet jacobian method.
129
130 Computes the log abs det jacobian `|dy/dx|`
131 """
132 raise NotImplementedError()
133
134
135 class InverseAutoregressiveFlow(Bijector):
136 """
137 An implementation of an Inverse Autoregressive Flow. Together with the `TransformedDistribution` this
138 provides a way to create richer variational approximations.
139
140 Example usage::
141
142 >>> base_dist = Normal(...)
143 >>> iaf = InverseAutoregressiveFlow(...)
144 >>> pyro.module("my_iaf", iaf)
145 >>> iaf_dist = TransformedDistribution(base_dist, iaf)
146
147 Note that this implementation is only meant to be used in settings where the inverse of the Bijector
148 is never explicitly computed (rather the result is cached from the forward call). In the context of
149 variational inference, this means that the InverseAutoregressiveFlow should only be used in the guide,
150 i.e. in the variational distribution. In other contexts the inverse could in principle be computed but
151 this would be a (potentially) costly computation that scales with the dimension of the input (and in
152 any case support for this is not included in this implementation).
153
154 :param input_dim: dimension of input
155 :type input_dim: int
156 :param hidden_dim: hidden dimension (number of hidden units)
157 :type hidden_dim: int
158 :param sigmoid_bias: bias on the hidden units fed into the sigmoid; default=`2.0`
159 :type sigmoid_bias: float
160 :param permutation: whether the order of the inputs should be permuted (by default the conditional
161 dependence structure of the autoregression follows the sequential order)
162 :type permutation: bool
163
164 References:
165
166 1. Improving Variational Inference with Inverse Autoregressive Flow [arXiv:1606.04934]
167 Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
168
169 2. Variational Inference with Normalizing Flows [arXiv:1505.05770]
170 Danilo Jimenez Rezende, Shakir Mohamed
171
172 3. MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509]
173 Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
174 """
175
176 def __init__(self, input_dim, hidden_dim, sigmoid_bias=2.0, permutation=None):
177 super(InverseAutoregressiveFlow, self).__init__()
178 self.input_dim = input_dim
179 self.hidden_dim = hidden_dim
180 self.arn = AutoRegressiveNN(input_dim, hidden_dim, output_dim_multiplier=2, permutation=permutation)
181 self.sigmoid = nn.Sigmoid()
182 self.sigmoid_bias = Variable(torch.Tensor([sigmoid_bias]))
183 self._intermediates_cache = {}
184 self.add_inverse_to_cache = True
185
186 def get_arn(self):
187 """
188 :rtype: pyro.nn.AutoRegressiveNN
189
190 Return the AutoRegressiveNN associated with the InverseAutoregressiveFlow
191 """
192 return self.arn
193
194 def __call__(self, x, *args, **kwargs):
195 """
196 :param x: the input into the bijection
197 :type x: torch.autograd.Variable
198
199 Invokes the bijection x=>y; in the prototypical context of a TransformedDistribution `x` is a
200 sample from the base distribution (or the output of a previous flow)
201 """
202 hidden = self.arn(x)
203 sigma = self.sigmoid(hidden[:, 0:self.input_dim] + self.sigmoid_bias.type_as(hidden))
204 mean = hidden[:, self.input_dim:]
205 y = sigma * x + (Variable(torch.ones(sigma.size())).type_as(sigma) - sigma) * mean
206 self._add_intermediate_to_cache(sigma, y, 'sigma')
207 return y
208
209 def inverse(self, y, *args, **kwargs):
210 """
211 :param y: the output of the bijection
212 :type y: torch.autograd.Variable
213
214 Inverts y => x. As noted above, this implementation is incapable of inverting arbitrary values
215 `y`; rather it assumes `y` is the result of a previously computed application of the bijector
216 to some `x` (which was cached on the forward call)
217 """
218 if (y, 'x') in self._intermediates_cache:
219 x = self._intermediates_cache.pop((y, 'x'))
220 return x
221 else:
222 raise KeyError("Bijector InverseAutoregressiveFlow expected to find" +
223 "key in intermediates cache but didn't")
224
225 def _add_intermediate_to_cache(self, intermediate, y, name):
226 """
227 Internal function used to cache intermediate results computed during the forward call
228 """
229 assert((y, name) not in self._intermediates_cache),\
230 "key collision in _add_intermediate_to_cache"
231 self._intermediates_cache[(y, name)] = intermediate
232
233 def batch_log_det_jacobian(self, y, *args, **kwargs):
234 """
235 Calculates the elementwise determinant of the log jacobian
236 """
237 if (y, 'sigma') in self._intermediates_cache:
238 sigma = self._intermediates_cache.pop((y, 'sigma'))
239 else:
240 raise KeyError("Bijector InverseAutoregressiveFlow expected to find" +
241 "key in intermediates cache but didn't")
242 log_sigma = torch.log(sigma)
243 if 'log_pdf_mask' in kwargs:
244 log_sigma = log_sigma * kwargs['log_pdf_mask']
245 return log_sigma.sum(-1)
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyro/distributions/transformed_distribution.py b/pyro/distributions/transformed_distribution.py
--- a/pyro/distributions/transformed_distribution.py
+++ b/pyro/distributions/transformed_distribution.py
@@ -55,7 +55,10 @@
next_input = y
return next_input
- def batch_shape(self, x=None, *args, **kwargs):
+ def shape(self, *args, **kwargs):
+ return self.base_dist.shape(*args, **kwargs)
+
+ def batch_shape(self, *args, **kwargs):
return self.base_dist.batch_shape(*args, **kwargs)
def event_shape(self, *args, **kwargs):
@@ -242,4 +245,4 @@
log_sigma = torch.log(sigma)
if 'log_pdf_mask' in kwargs:
log_sigma = log_sigma * kwargs['log_pdf_mask']
- return log_sigma.sum(-1)
+ return log_sigma
|
{"golden_diff": "diff --git a/pyro/distributions/transformed_distribution.py b/pyro/distributions/transformed_distribution.py\n--- a/pyro/distributions/transformed_distribution.py\n+++ b/pyro/distributions/transformed_distribution.py\n@@ -55,7 +55,10 @@\n next_input = y\n return next_input\n \n- def batch_shape(self, x=None, *args, **kwargs):\n+ def shape(self, *args, **kwargs):\n+ return self.base_dist.shape(*args, **kwargs)\n+\n+ def batch_shape(self, *args, **kwargs):\n return self.base_dist.batch_shape(*args, **kwargs)\n \n def event_shape(self, *args, **kwargs):\n@@ -242,4 +245,4 @@\n log_sigma = torch.log(sigma)\n if 'log_pdf_mask' in kwargs:\n log_sigma = log_sigma * kwargs['log_pdf_mask']\n- return log_sigma.sum(-1)\n+ return log_sigma\n", "issue": "DMM Example Model and guide dims disagree when IAF is applied\nWhen the number of IAF is non zero, the latent dimension changes to the same value as the batch size.\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport numbers\nfrom abc import ABCMeta, abstractmethod\n\nimport torch\nimport torch.nn as nn\nfrom six import add_metaclass\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.distribution import Distribution\nfrom pyro.distributions.util import copy_docs_from\nfrom pyro.nn import AutoRegressiveNN\n\n\n@copy_docs_from(Distribution)\nclass TransformedDistribution(Distribution):\n \"\"\"\n Transforms the base distribution by applying a sequence of `Bijector`s to it.\n This results in a scorable distribution (i.e. it has a `log_pdf()` method).\n\n :param base_distribution: a (continuous) base distribution; samples from this distribution\n are passed through the sequence of `Bijector`s to yield a sample from the\n `TransformedDistribution`\n :type base_distribution: pyro.distribution.Distribution\n :param bijectors: either a single Bijector or a sequence of Bijectors wrapped in a nn.ModuleList\n :returns: the transformed distribution\n \"\"\"\n\n def __init__(self, base_distribution, bijectors, *args, **kwargs):\n super(TransformedDistribution, self).__init__(*args, **kwargs)\n self.reparameterized = base_distribution.reparameterized\n self.base_dist = base_distribution\n if isinstance(bijectors, Bijector):\n self.bijectors = nn.ModuleList([bijectors])\n elif isinstance(bijectors, nn.ModuleList):\n for bijector in bijectors:\n assert isinstance(bijector, Bijector), \\\n \"bijectors must be a Bijector or a nn.ModuleList of Bijectors\"\n self.bijectors = bijectors\n\n def sample(self, *args, **kwargs):\n \"\"\"\n :returns: a sample y\n :rtype: torch.autograd.Variable\n\n Sample from base distribution and pass through bijector(s)\n \"\"\"\n x = self.base_dist.sample(*args, **kwargs)\n next_input = x\n for bijector in self.bijectors:\n y = bijector(next_input)\n if bijector.add_inverse_to_cache:\n bijector._add_intermediate_to_cache(next_input, y, 'x')\n next_input = y\n return next_input\n\n def batch_shape(self, x=None, *args, **kwargs):\n return self.base_dist.batch_shape(*args, **kwargs)\n\n def event_shape(self, *args, **kwargs):\n return self.base_dist.event_shape(*args, **kwargs)\n\n def log_prob(self, y, *args, **kwargs):\n \"\"\"\n :param y: a value sampled from the transformed distribution\n :type y: torch.autograd.Variable\n\n :returns: the score (the log pdf) of y\n :rtype: torch.autograd.Variable\n\n Scores the sample by inverting the bijector(s) and computing the score using the score\n of the base distribution and the log det jacobian\n \"\"\"\n value = y\n log_det_jacobian = 0.0\n for bijector in reversed(self.bijectors):\n log_det_jacobian += bijector.batch_log_det_jacobian(value, *args, **kwargs)\n value = bijector.inverse(value)\n base_log_pdf = self.base_dist.log_prob(value, *args, **kwargs)\n if not isinstance(log_det_jacobian, numbers.Number):\n log_det_jacobian = log_det_jacobian.contiguous().view(*base_log_pdf.size())\n assert log_det_jacobian.size() == base_log_pdf.size(), \\\n 'Invalid batch_log_det_jacobian().size():\\nexpected {}\\nactual {}'.format(\n base_log_pdf.size(), log_det_jacobian.size())\n return base_log_pdf - log_det_jacobian\n\n\n@add_metaclass(ABCMeta)\nclass Bijector(nn.Module):\n \"\"\"\n Abstract class `Bijector`. `Bijector` are bijective transformations with computable\n log det jacobians. They are meant for use in `TransformedDistribution`.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super(Bijector, self).__init__(*args, **kwargs)\n self.add_inverse_to_cache = False\n\n @abstractmethod\n def __call__(self, *args, **kwargs):\n \"\"\"\n Virtual forward method\n\n Invokes the bijection x=>y\n \"\"\"\n raise NotImplementedError()\n\n @abstractmethod\n def inverse(self, *args, **kwargs):\n \"\"\"\n Virtual inverse method\n\n Inverts the bijection y => x.\n \"\"\"\n raise NotImplementedError()\n\n def log_det_jacobian(self, *args, **kwargs):\n \"\"\"\n Default logdet jacobian method.\n\n Computes the log det jacobian `|dy/dx|`\n \"\"\"\n return self.batch_log_det_jacobian(*args, **kwargs).sum()\n\n def batch_log_det_jacobian(self, *args, **kwargs):\n \"\"\"\n Virtual elementwise logdet jacobian method.\n\n Computes the log abs det jacobian `|dy/dx|`\n \"\"\"\n raise NotImplementedError()\n\n\nclass InverseAutoregressiveFlow(Bijector):\n \"\"\"\n An implementation of an Inverse Autoregressive Flow. Together with the `TransformedDistribution` this\n provides a way to create richer variational approximations.\n\n Example usage::\n\n >>> base_dist = Normal(...)\n >>> iaf = InverseAutoregressiveFlow(...)\n >>> pyro.module(\"my_iaf\", iaf)\n >>> iaf_dist = TransformedDistribution(base_dist, iaf)\n\n Note that this implementation is only meant to be used in settings where the inverse of the Bijector\n is never explicitly computed (rather the result is cached from the forward call). In the context of\n variational inference, this means that the InverseAutoregressiveFlow should only be used in the guide,\n i.e. in the variational distribution. In other contexts the inverse could in principle be computed but\n this would be a (potentially) costly computation that scales with the dimension of the input (and in\n any case support for this is not included in this implementation).\n\n :param input_dim: dimension of input\n :type input_dim: int\n :param hidden_dim: hidden dimension (number of hidden units)\n :type hidden_dim: int\n :param sigmoid_bias: bias on the hidden units fed into the sigmoid; default=`2.0`\n :type sigmoid_bias: float\n :param permutation: whether the order of the inputs should be permuted (by default the conditional\n dependence structure of the autoregression follows the sequential order)\n :type permutation: bool\n\n References:\n\n 1. Improving Variational Inference with Inverse Autoregressive Flow [arXiv:1606.04934]\n Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling\n\n 2. Variational Inference with Normalizing Flows [arXiv:1505.05770]\n Danilo Jimenez Rezende, Shakir Mohamed\n\n 3. MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509]\n Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle\n \"\"\"\n\n def __init__(self, input_dim, hidden_dim, sigmoid_bias=2.0, permutation=None):\n super(InverseAutoregressiveFlow, self).__init__()\n self.input_dim = input_dim\n self.hidden_dim = hidden_dim\n self.arn = AutoRegressiveNN(input_dim, hidden_dim, output_dim_multiplier=2, permutation=permutation)\n self.sigmoid = nn.Sigmoid()\n self.sigmoid_bias = Variable(torch.Tensor([sigmoid_bias]))\n self._intermediates_cache = {}\n self.add_inverse_to_cache = True\n\n def get_arn(self):\n \"\"\"\n :rtype: pyro.nn.AutoRegressiveNN\n\n Return the AutoRegressiveNN associated with the InverseAutoregressiveFlow\n \"\"\"\n return self.arn\n\n def __call__(self, x, *args, **kwargs):\n \"\"\"\n :param x: the input into the bijection\n :type x: torch.autograd.Variable\n\n Invokes the bijection x=>y; in the prototypical context of a TransformedDistribution `x` is a\n sample from the base distribution (or the output of a previous flow)\n \"\"\"\n hidden = self.arn(x)\n sigma = self.sigmoid(hidden[:, 0:self.input_dim] + self.sigmoid_bias.type_as(hidden))\n mean = hidden[:, self.input_dim:]\n y = sigma * x + (Variable(torch.ones(sigma.size())).type_as(sigma) - sigma) * mean\n self._add_intermediate_to_cache(sigma, y, 'sigma')\n return y\n\n def inverse(self, y, *args, **kwargs):\n \"\"\"\n :param y: the output of the bijection\n :type y: torch.autograd.Variable\n\n Inverts y => x. As noted above, this implementation is incapable of inverting arbitrary values\n `y`; rather it assumes `y` is the result of a previously computed application of the bijector\n to some `x` (which was cached on the forward call)\n \"\"\"\n if (y, 'x') in self._intermediates_cache:\n x = self._intermediates_cache.pop((y, 'x'))\n return x\n else:\n raise KeyError(\"Bijector InverseAutoregressiveFlow expected to find\" +\n \"key in intermediates cache but didn't\")\n\n def _add_intermediate_to_cache(self, intermediate, y, name):\n \"\"\"\n Internal function used to cache intermediate results computed during the forward call\n \"\"\"\n assert((y, name) not in self._intermediates_cache),\\\n \"key collision in _add_intermediate_to_cache\"\n self._intermediates_cache[(y, name)] = intermediate\n\n def batch_log_det_jacobian(self, y, *args, **kwargs):\n \"\"\"\n Calculates the elementwise determinant of the log jacobian\n \"\"\"\n if (y, 'sigma') in self._intermediates_cache:\n sigma = self._intermediates_cache.pop((y, 'sigma'))\n else:\n raise KeyError(\"Bijector InverseAutoregressiveFlow expected to find\" +\n \"key in intermediates cache but didn't\")\n log_sigma = torch.log(sigma)\n if 'log_pdf_mask' in kwargs:\n log_sigma = log_sigma * kwargs['log_pdf_mask']\n return log_sigma.sum(-1)\n", "path": "pyro/distributions/transformed_distribution.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport numbers\nfrom abc import ABCMeta, abstractmethod\n\nimport torch\nimport torch.nn as nn\nfrom six import add_metaclass\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.distribution import Distribution\nfrom pyro.distributions.util import copy_docs_from\nfrom pyro.nn import AutoRegressiveNN\n\n\n@copy_docs_from(Distribution)\nclass TransformedDistribution(Distribution):\n \"\"\"\n Transforms the base distribution by applying a sequence of `Bijector`s to it.\n This results in a scorable distribution (i.e. it has a `log_pdf()` method).\n\n :param base_distribution: a (continuous) base distribution; samples from this distribution\n are passed through the sequence of `Bijector`s to yield a sample from the\n `TransformedDistribution`\n :type base_distribution: pyro.distribution.Distribution\n :param bijectors: either a single Bijector or a sequence of Bijectors wrapped in a nn.ModuleList\n :returns: the transformed distribution\n \"\"\"\n\n def __init__(self, base_distribution, bijectors, *args, **kwargs):\n super(TransformedDistribution, self).__init__(*args, **kwargs)\n self.reparameterized = base_distribution.reparameterized\n self.base_dist = base_distribution\n if isinstance(bijectors, Bijector):\n self.bijectors = nn.ModuleList([bijectors])\n elif isinstance(bijectors, nn.ModuleList):\n for bijector in bijectors:\n assert isinstance(bijector, Bijector), \\\n \"bijectors must be a Bijector or a nn.ModuleList of Bijectors\"\n self.bijectors = bijectors\n\n def sample(self, *args, **kwargs):\n \"\"\"\n :returns: a sample y\n :rtype: torch.autograd.Variable\n\n Sample from base distribution and pass through bijector(s)\n \"\"\"\n x = self.base_dist.sample(*args, **kwargs)\n next_input = x\n for bijector in self.bijectors:\n y = bijector(next_input)\n if bijector.add_inverse_to_cache:\n bijector._add_intermediate_to_cache(next_input, y, 'x')\n next_input = y\n return next_input\n\n def shape(self, *args, **kwargs):\n return self.base_dist.shape(*args, **kwargs)\n\n def batch_shape(self, *args, **kwargs):\n return self.base_dist.batch_shape(*args, **kwargs)\n\n def event_shape(self, *args, **kwargs):\n return self.base_dist.event_shape(*args, **kwargs)\n\n def log_prob(self, y, *args, **kwargs):\n \"\"\"\n :param y: a value sampled from the transformed distribution\n :type y: torch.autograd.Variable\n\n :returns: the score (the log pdf) of y\n :rtype: torch.autograd.Variable\n\n Scores the sample by inverting the bijector(s) and computing the score using the score\n of the base distribution and the log det jacobian\n \"\"\"\n value = y\n log_det_jacobian = 0.0\n for bijector in reversed(self.bijectors):\n log_det_jacobian += bijector.batch_log_det_jacobian(value, *args, **kwargs)\n value = bijector.inverse(value)\n base_log_pdf = self.base_dist.log_prob(value, *args, **kwargs)\n if not isinstance(log_det_jacobian, numbers.Number):\n log_det_jacobian = log_det_jacobian.contiguous().view(*base_log_pdf.size())\n assert log_det_jacobian.size() == base_log_pdf.size(), \\\n 'Invalid batch_log_det_jacobian().size():\\nexpected {}\\nactual {}'.format(\n base_log_pdf.size(), log_det_jacobian.size())\n return base_log_pdf - log_det_jacobian\n\n\n@add_metaclass(ABCMeta)\nclass Bijector(nn.Module):\n \"\"\"\n Abstract class `Bijector`. `Bijector` are bijective transformations with computable\n log det jacobians. They are meant for use in `TransformedDistribution`.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super(Bijector, self).__init__(*args, **kwargs)\n self.add_inverse_to_cache = False\n\n @abstractmethod\n def __call__(self, *args, **kwargs):\n \"\"\"\n Virtual forward method\n\n Invokes the bijection x=>y\n \"\"\"\n raise NotImplementedError()\n\n @abstractmethod\n def inverse(self, *args, **kwargs):\n \"\"\"\n Virtual inverse method\n\n Inverts the bijection y => x.\n \"\"\"\n raise NotImplementedError()\n\n def log_det_jacobian(self, *args, **kwargs):\n \"\"\"\n Default logdet jacobian method.\n\n Computes the log det jacobian `|dy/dx|`\n \"\"\"\n return self.batch_log_det_jacobian(*args, **kwargs).sum()\n\n def batch_log_det_jacobian(self, *args, **kwargs):\n \"\"\"\n Virtual elementwise logdet jacobian method.\n\n Computes the log abs det jacobian `|dy/dx|`\n \"\"\"\n raise NotImplementedError()\n\n\nclass InverseAutoregressiveFlow(Bijector):\n \"\"\"\n An implementation of an Inverse Autoregressive Flow. Together with the `TransformedDistribution` this\n provides a way to create richer variational approximations.\n\n Example usage::\n\n >>> base_dist = Normal(...)\n >>> iaf = InverseAutoregressiveFlow(...)\n >>> pyro.module(\"my_iaf\", iaf)\n >>> iaf_dist = TransformedDistribution(base_dist, iaf)\n\n Note that this implementation is only meant to be used in settings where the inverse of the Bijector\n is never explicitly computed (rather the result is cached from the forward call). In the context of\n variational inference, this means that the InverseAutoregressiveFlow should only be used in the guide,\n i.e. in the variational distribution. In other contexts the inverse could in principle be computed but\n this would be a (potentially) costly computation that scales with the dimension of the input (and in\n any case support for this is not included in this implementation).\n\n :param input_dim: dimension of input\n :type input_dim: int\n :param hidden_dim: hidden dimension (number of hidden units)\n :type hidden_dim: int\n :param sigmoid_bias: bias on the hidden units fed into the sigmoid; default=`2.0`\n :type sigmoid_bias: float\n :param permutation: whether the order of the inputs should be permuted (by default the conditional\n dependence structure of the autoregression follows the sequential order)\n :type permutation: bool\n\n References:\n\n 1. Improving Variational Inference with Inverse Autoregressive Flow [arXiv:1606.04934]\n Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling\n\n 2. Variational Inference with Normalizing Flows [arXiv:1505.05770]\n Danilo Jimenez Rezende, Shakir Mohamed\n\n 3. MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509]\n Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle\n \"\"\"\n\n def __init__(self, input_dim, hidden_dim, sigmoid_bias=2.0, permutation=None):\n super(InverseAutoregressiveFlow, self).__init__()\n self.input_dim = input_dim\n self.hidden_dim = hidden_dim\n self.arn = AutoRegressiveNN(input_dim, hidden_dim, output_dim_multiplier=2, permutation=permutation)\n self.sigmoid = nn.Sigmoid()\n self.sigmoid_bias = Variable(torch.Tensor([sigmoid_bias]))\n self._intermediates_cache = {}\n self.add_inverse_to_cache = True\n\n def get_arn(self):\n \"\"\"\n :rtype: pyro.nn.AutoRegressiveNN\n\n Return the AutoRegressiveNN associated with the InverseAutoregressiveFlow\n \"\"\"\n return self.arn\n\n def __call__(self, x, *args, **kwargs):\n \"\"\"\n :param x: the input into the bijection\n :type x: torch.autograd.Variable\n\n Invokes the bijection x=>y; in the prototypical context of a TransformedDistribution `x` is a\n sample from the base distribution (or the output of a previous flow)\n \"\"\"\n hidden = self.arn(x)\n sigma = self.sigmoid(hidden[:, 0:self.input_dim] + self.sigmoid_bias.type_as(hidden))\n mean = hidden[:, self.input_dim:]\n y = sigma * x + (Variable(torch.ones(sigma.size())).type_as(sigma) - sigma) * mean\n self._add_intermediate_to_cache(sigma, y, 'sigma')\n return y\n\n def inverse(self, y, *args, **kwargs):\n \"\"\"\n :param y: the output of the bijection\n :type y: torch.autograd.Variable\n\n Inverts y => x. As noted above, this implementation is incapable of inverting arbitrary values\n `y`; rather it assumes `y` is the result of a previously computed application of the bijector\n to some `x` (which was cached on the forward call)\n \"\"\"\n if (y, 'x') in self._intermediates_cache:\n x = self._intermediates_cache.pop((y, 'x'))\n return x\n else:\n raise KeyError(\"Bijector InverseAutoregressiveFlow expected to find\" +\n \"key in intermediates cache but didn't\")\n\n def _add_intermediate_to_cache(self, intermediate, y, name):\n \"\"\"\n Internal function used to cache intermediate results computed during the forward call\n \"\"\"\n assert((y, name) not in self._intermediates_cache),\\\n \"key collision in _add_intermediate_to_cache\"\n self._intermediates_cache[(y, name)] = intermediate\n\n def batch_log_det_jacobian(self, y, *args, **kwargs):\n \"\"\"\n Calculates the elementwise determinant of the log jacobian\n \"\"\"\n if (y, 'sigma') in self._intermediates_cache:\n sigma = self._intermediates_cache.pop((y, 'sigma'))\n else:\n raise KeyError(\"Bijector InverseAutoregressiveFlow expected to find\" +\n \"key in intermediates cache but didn't\")\n log_sigma = torch.log(sigma)\n if 'log_pdf_mask' in kwargs:\n log_sigma = log_sigma * kwargs['log_pdf_mask']\n return log_sigma\n", "path": "pyro/distributions/transformed_distribution.py"}]}
| 3,274 | 213 |
gh_patches_debug_9881
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1862
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Updated Error on Version File Missing
4XX errors are client errors. In that case, if the file is not on the server, the client is not guilty
Raise error instead of returning 404 on __version__ if version file is missing
https://github.com/Kinto/kinto/blame/master/kinto/core/views/version.py#L49
suggested by @peterbe
4XX errors are client errors. In that case, if the file is not on the server, the client is not guilty ;)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/core/views/version.py`
Content:
```
1 import json
2 import os
3
4 import colander
5 from pyramid import httpexceptions
6 from pyramid.security import NO_PERMISSION_REQUIRED
7 from kinto.core import Service
8
9 HERE = os.path.dirname(__file__)
10 ORIGIN = os.path.dirname(HERE)
11
12
13 class VersionResponseSchema(colander.MappingSchema):
14 body = colander.SchemaNode(colander.Mapping(unknown="preserve"))
15
16
17 version_response_schemas = {
18 "200": VersionResponseSchema(description="Return the running Instance version information.")
19 }
20
21
22 version = Service(name="version", path="/__version__", description="Version")
23
24
25 @version.get(
26 permission=NO_PERMISSION_REQUIRED,
27 tags=["Utilities"],
28 operation_id="__version__",
29 response_schemas=version_response_schemas,
30 )
31 def version_view(request):
32 try:
33 return version_view.__json__
34 except AttributeError:
35 pass
36
37 location = request.registry.settings["version_json_path"]
38 files = [
39 location, # Default is current working dir.
40 os.path.join(ORIGIN, "version.json"), # Relative to the package root.
41 os.path.join(HERE, "version.json"), # Relative to this file.
42 ]
43 for version_file in files:
44 if os.path.exists(version_file):
45 with open(version_file) as f:
46 version_view.__json__ = json.load(f)
47 return version_view.__json__ # First one wins.
48
49 raise httpexceptions.HTTPNotFound()
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/core/views/version.py b/kinto/core/views/version.py
--- a/kinto/core/views/version.py
+++ b/kinto/core/views/version.py
@@ -2,7 +2,6 @@
import os
import colander
-from pyramid import httpexceptions
from pyramid.security import NO_PERMISSION_REQUIRED
from kinto.core import Service
@@ -46,4 +45,4 @@
version_view.__json__ = json.load(f)
return version_view.__json__ # First one wins.
- raise httpexceptions.HTTPNotFound()
+ raise FileNotFoundError("Version file missing from {}".format(files.join(",")))
|
{"golden_diff": "diff --git a/kinto/core/views/version.py b/kinto/core/views/version.py\n--- a/kinto/core/views/version.py\n+++ b/kinto/core/views/version.py\n@@ -2,7 +2,6 @@\n import os\n \n import colander\n-from pyramid import httpexceptions\n from pyramid.security import NO_PERMISSION_REQUIRED\n from kinto.core import Service\n \n@@ -46,4 +45,4 @@\n version_view.__json__ = json.load(f)\n return version_view.__json__ # First one wins.\n \n- raise httpexceptions.HTTPNotFound()\n+ raise FileNotFoundError(\"Version file missing from {}\".format(files.join(\",\")))\n", "issue": "Updated Error on Version File Missing\n4XX errors are client errors. In that case, if the file is not on the server, the client is not guilty\nRaise error instead of returning 404 on __version__ if version file is missing\nhttps://github.com/Kinto/kinto/blame/master/kinto/core/views/version.py#L49\r\n\r\nsuggested by @peterbe \r\n\r\n4XX errors are client errors. In that case, if the file is not on the server, the client is not guilty ;)\n", "before_files": [{"content": "import json\nimport os\n\nimport colander\nfrom pyramid import httpexceptions\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom kinto.core import Service\n\nHERE = os.path.dirname(__file__)\nORIGIN = os.path.dirname(HERE)\n\n\nclass VersionResponseSchema(colander.MappingSchema):\n body = colander.SchemaNode(colander.Mapping(unknown=\"preserve\"))\n\n\nversion_response_schemas = {\n \"200\": VersionResponseSchema(description=\"Return the running Instance version information.\")\n}\n\n\nversion = Service(name=\"version\", path=\"/__version__\", description=\"Version\")\n\n\[email protected](\n permission=NO_PERMISSION_REQUIRED,\n tags=[\"Utilities\"],\n operation_id=\"__version__\",\n response_schemas=version_response_schemas,\n)\ndef version_view(request):\n try:\n return version_view.__json__\n except AttributeError:\n pass\n\n location = request.registry.settings[\"version_json_path\"]\n files = [\n location, # Default is current working dir.\n os.path.join(ORIGIN, \"version.json\"), # Relative to the package root.\n os.path.join(HERE, \"version.json\"), # Relative to this file.\n ]\n for version_file in files:\n if os.path.exists(version_file):\n with open(version_file) as f:\n version_view.__json__ = json.load(f)\n return version_view.__json__ # First one wins.\n\n raise httpexceptions.HTTPNotFound()\n", "path": "kinto/core/views/version.py"}], "after_files": [{"content": "import json\nimport os\n\nimport colander\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom kinto.core import Service\n\nHERE = os.path.dirname(__file__)\nORIGIN = os.path.dirname(HERE)\n\n\nclass VersionResponseSchema(colander.MappingSchema):\n body = colander.SchemaNode(colander.Mapping(unknown=\"preserve\"))\n\n\nversion_response_schemas = {\n \"200\": VersionResponseSchema(description=\"Return the running Instance version information.\")\n}\n\n\nversion = Service(name=\"version\", path=\"/__version__\", description=\"Version\")\n\n\[email protected](\n permission=NO_PERMISSION_REQUIRED,\n tags=[\"Utilities\"],\n operation_id=\"__version__\",\n response_schemas=version_response_schemas,\n)\ndef version_view(request):\n try:\n return version_view.__json__\n except AttributeError:\n pass\n\n location = request.registry.settings[\"version_json_path\"]\n files = [\n location, # Default is current working dir.\n os.path.join(ORIGIN, \"version.json\"), # Relative to the package root.\n os.path.join(HERE, \"version.json\"), # Relative to this file.\n ]\n for version_file in files:\n if os.path.exists(version_file):\n with open(version_file) as f:\n version_view.__json__ = json.load(f)\n return version_view.__json__ # First one wins.\n\n raise FileNotFoundError(\"Version file missing from {}\".format(files.join(\",\")))\n", "path": "kinto/core/views/version.py"}]}
| 762 | 134 |
gh_patches_debug_8651
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1179
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Run hooks on files in specific dir, rather than using `--all-files`
I am able to successfully use `pre-commit run --all-files` to run hooks on all my files.
Now, I'm trying to run the hooks just on files in a particular directory. I think I'm not understanding the docs and I can't find an example to work from.
Here's what the docs say:
> `--files [FILES [FILES ...]]`: specific filenames to run hooks on.
I've tried the following variations:
`pre-commit run --files web/modules/custom`
`pre-commit run --files web/modules/custom/*`
`pre-commit run --files [web/modules/custom]`
`pre-commit run --files [web/modules/custom/*]`
`pre-commit run --files [FILES [web/modules/custom]`
`pre-commit run --files [FILES [web/modules/custom/*]`
I feel really dumb having to ask, but can someone please point me in the right direction?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/commands/run.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import logging
4 import os
5 import re
6 import subprocess
7 import sys
8
9 from identify.identify import tags_from_path
10
11 from pre_commit import color
12 from pre_commit import git
13 from pre_commit import output
14 from pre_commit.clientlib import load_config
15 from pre_commit.output import get_hook_message
16 from pre_commit.repository import all_hooks
17 from pre_commit.repository import install_hook_envs
18 from pre_commit.staged_files_only import staged_files_only
19 from pre_commit.util import cmd_output_b
20 from pre_commit.util import noop_context
21
22
23 logger = logging.getLogger('pre_commit')
24
25
26 def filter_by_include_exclude(names, include, exclude):
27 include_re, exclude_re = re.compile(include), re.compile(exclude)
28 return [
29 filename for filename in names
30 if include_re.search(filename)
31 if not exclude_re.search(filename)
32 ]
33
34
35 class Classifier(object):
36 def __init__(self, filenames):
37 self.filenames = [f for f in filenames if os.path.lexists(f)]
38 self._types_cache = {}
39
40 def _types_for_file(self, filename):
41 try:
42 return self._types_cache[filename]
43 except KeyError:
44 ret = self._types_cache[filename] = tags_from_path(filename)
45 return ret
46
47 def by_types(self, names, types, exclude_types):
48 types, exclude_types = frozenset(types), frozenset(exclude_types)
49 ret = []
50 for filename in names:
51 tags = self._types_for_file(filename)
52 if tags >= types and not tags & exclude_types:
53 ret.append(filename)
54 return ret
55
56 def filenames_for_hook(self, hook):
57 names = self.filenames
58 names = filter_by_include_exclude(names, hook.files, hook.exclude)
59 names = self.by_types(names, hook.types, hook.exclude_types)
60 return names
61
62
63 def _get_skips(environ):
64 skips = environ.get('SKIP', '')
65 return {skip.strip() for skip in skips.split(',') if skip.strip()}
66
67
68 def _hook_msg_start(hook, verbose):
69 return '{}{}'.format('[{}] '.format(hook.id) if verbose else '', hook.name)
70
71
72 SKIPPED = 'Skipped'
73 NO_FILES = '(no files to check)'
74
75
76 def _run_single_hook(classifier, hook, args, skips, cols, use_color):
77 filenames = classifier.filenames_for_hook(hook)
78
79 if hook.language == 'pcre':
80 logger.warning(
81 '`{}` (from {}) uses the deprecated pcre language.\n'
82 'The pcre language is scheduled for removal in pre-commit 2.x.\n'
83 'The pygrep language is a more portable (and usually drop-in) '
84 'replacement.'.format(hook.id, hook.src),
85 )
86
87 if hook.id in skips or hook.alias in skips:
88 output.write(
89 get_hook_message(
90 _hook_msg_start(hook, args.verbose),
91 end_msg=SKIPPED,
92 end_color=color.YELLOW,
93 use_color=args.color,
94 cols=cols,
95 ),
96 )
97 return 0
98 elif not filenames and not hook.always_run:
99 output.write(
100 get_hook_message(
101 _hook_msg_start(hook, args.verbose),
102 postfix=NO_FILES,
103 end_msg=SKIPPED,
104 end_color=color.TURQUOISE,
105 use_color=args.color,
106 cols=cols,
107 ),
108 )
109 return 0
110
111 # Print the hook and the dots first in case the hook takes hella long to
112 # run.
113 output.write(
114 get_hook_message(
115 _hook_msg_start(hook, args.verbose), end_len=6, cols=cols,
116 ),
117 )
118 sys.stdout.flush()
119
120 diff_before = cmd_output_b('git', 'diff', '--no-ext-diff', retcode=None)
121 filenames = tuple(filenames) if hook.pass_filenames else ()
122 retcode, out = hook.run(filenames, use_color)
123 diff_after = cmd_output_b('git', 'diff', '--no-ext-diff', retcode=None)
124
125 file_modifications = diff_before != diff_after
126
127 # If the hook makes changes, fail the commit
128 if file_modifications:
129 retcode = 1
130
131 if retcode:
132 retcode = 1
133 print_color = color.RED
134 pass_fail = 'Failed'
135 else:
136 retcode = 0
137 print_color = color.GREEN
138 pass_fail = 'Passed'
139
140 output.write_line(color.format_color(pass_fail, print_color, args.color))
141
142 if (
143 (out or file_modifications) and
144 (retcode or args.verbose or hook.verbose)
145 ):
146 output.write_line('hookid: {}\n'.format(hook.id))
147
148 # Print a message if failing due to file modifications
149 if file_modifications:
150 output.write('Files were modified by this hook.')
151
152 if out:
153 output.write_line(' Additional output:')
154
155 output.write_line()
156
157 if out.strip():
158 output.write_line(out.strip(), logfile_name=hook.log_file)
159 output.write_line()
160
161 return retcode
162
163
164 def _compute_cols(hooks, verbose):
165 """Compute the number of columns to display hook messages. The widest
166 that will be displayed is in the no files skipped case:
167
168 Hook name...(no files to check) Skipped
169
170 or in the verbose case
171
172 Hook name [hookid]...(no files to check) Skipped
173 """
174 if hooks:
175 name_len = max(len(_hook_msg_start(hook, verbose)) for hook in hooks)
176 else:
177 name_len = 0
178
179 cols = name_len + 3 + len(NO_FILES) + 1 + len(SKIPPED)
180 return max(cols, 80)
181
182
183 def _all_filenames(args):
184 if args.origin and args.source:
185 return git.get_changed_files(args.origin, args.source)
186 elif args.hook_stage in {'prepare-commit-msg', 'commit-msg'}:
187 return (args.commit_msg_filename,)
188 elif args.files:
189 return args.files
190 elif args.all_files:
191 return git.get_all_files()
192 elif git.is_in_merge_conflict():
193 return git.get_conflicted_files()
194 else:
195 return git.get_staged_files()
196
197
198 def _run_hooks(config, hooks, args, environ):
199 """Actually run the hooks."""
200 skips = _get_skips(environ)
201 cols = _compute_cols(hooks, args.verbose)
202 filenames = _all_filenames(args)
203 filenames = filter_by_include_exclude(filenames, '', config['exclude'])
204 classifier = Classifier(filenames)
205 retval = 0
206 for hook in hooks:
207 retval |= _run_single_hook(
208 classifier, hook, args, skips, cols, args.color,
209 )
210 if retval and config['fail_fast']:
211 break
212 if retval and args.show_diff_on_failure and git.has_diff():
213 if args.all_files:
214 output.write_line(
215 'pre-commit hook(s) made changes.\n'
216 'If you are seeing this message in CI, '
217 'reproduce locally with: `pre-commit run --all-files`.\n'
218 'To run `pre-commit` as part of git workflow, use '
219 '`pre-commit install`.',
220 )
221 output.write_line('All changes made by hooks:')
222 # args.color is a boolean.
223 # See user_color function in color.py
224 subprocess.call((
225 'git', '--no-pager', 'diff', '--no-ext-diff',
226 '--color={}'.format({True: 'always', False: 'never'}[args.color]),
227 ))
228
229 return retval
230
231
232 def _has_unmerged_paths():
233 _, stdout, _ = cmd_output_b('git', 'ls-files', '--unmerged')
234 return bool(stdout.strip())
235
236
237 def _has_unstaged_config(config_file):
238 retcode, _, _ = cmd_output_b(
239 'git', 'diff', '--no-ext-diff', '--exit-code', config_file,
240 retcode=None,
241 )
242 # be explicit, other git errors don't mean it has an unstaged config.
243 return retcode == 1
244
245
246 def run(config_file, store, args, environ=os.environ):
247 no_stash = args.all_files or bool(args.files)
248
249 # Check if we have unresolved merge conflict files and fail fast.
250 if _has_unmerged_paths():
251 logger.error('Unmerged files. Resolve before committing.')
252 return 1
253 if bool(args.source) != bool(args.origin):
254 logger.error('Specify both --origin and --source.')
255 return 1
256 if _has_unstaged_config(config_file) and not no_stash:
257 logger.error(
258 'Your pre-commit configuration is unstaged.\n'
259 '`git add {}` to fix this.'.format(config_file),
260 )
261 return 1
262
263 # Expose origin / source as environment variables for hooks to consume
264 if args.origin and args.source:
265 environ['PRE_COMMIT_ORIGIN'] = args.origin
266 environ['PRE_COMMIT_SOURCE'] = args.source
267
268 if no_stash:
269 ctx = noop_context()
270 else:
271 ctx = staged_files_only(store.directory)
272
273 with ctx:
274 config = load_config(config_file)
275 hooks = [
276 hook
277 for hook in all_hooks(config, store)
278 if not args.hook or hook.id == args.hook or hook.alias == args.hook
279 if args.hook_stage in hook.stages
280 ]
281
282 if args.hook and not hooks:
283 output.write_line(
284 'No hook with id `{}` in stage `{}`'.format(
285 args.hook, args.hook_stage,
286 ),
287 )
288 return 1
289
290 install_hook_envs(hooks, store)
291
292 return _run_hooks(config, hooks, args, environ)
293
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -34,6 +34,12 @@
class Classifier(object):
def __init__(self, filenames):
+ # on windows we normalize all filenames to use forward slashes
+ # this makes it easier to filter using the `files:` regex
+ # this also makes improperly quoted shell-based hooks work better
+ # see #1173
+ if os.altsep == '/' and os.sep == '\\':
+ filenames = (f.replace(os.sep, os.altsep) for f in filenames)
self.filenames = [f for f in filenames if os.path.lexists(f)]
self._types_cache = {}
|
{"golden_diff": "diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py\n--- a/pre_commit/commands/run.py\n+++ b/pre_commit/commands/run.py\n@@ -34,6 +34,12 @@\n \n class Classifier(object):\n def __init__(self, filenames):\n+ # on windows we normalize all filenames to use forward slashes\n+ # this makes it easier to filter using the `files:` regex\n+ # this also makes improperly quoted shell-based hooks work better\n+ # see #1173\n+ if os.altsep == '/' and os.sep == '\\\\':\n+ filenames = (f.replace(os.sep, os.altsep) for f in filenames)\n self.filenames = [f for f in filenames if os.path.lexists(f)]\n self._types_cache = {}\n", "issue": "Run hooks on files in specific dir, rather than using `--all-files`\nI am able to successfully use `pre-commit run --all-files` to run hooks on all my files.\r\n\r\nNow, I'm trying to run the hooks just on files in a particular directory. I think I'm not understanding the docs and I can't find an example to work from.\r\n\r\nHere's what the docs say:\r\n> `--files [FILES [FILES ...]]`: specific filenames to run hooks on.\r\n\r\nI've tried the following variations:\r\n`pre-commit run --files web/modules/custom`\r\n`pre-commit run --files web/modules/custom/*`\r\n`pre-commit run --files [web/modules/custom]`\r\n`pre-commit run --files [web/modules/custom/*]`\r\n`pre-commit run --files [FILES [web/modules/custom]`\r\n`pre-commit run --files [FILES [web/modules/custom/*]`\r\n\r\nI feel really dumb having to ask, but can someone please point me in the right direction?\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\nimport os\nimport re\nimport subprocess\nimport sys\n\nfrom identify.identify import tags_from_path\n\nfrom pre_commit import color\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.output import get_hook_message\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef filter_by_include_exclude(names, include, exclude):\n include_re, exclude_re = re.compile(include), re.compile(exclude)\n return [\n filename for filename in names\n if include_re.search(filename)\n if not exclude_re.search(filename)\n ]\n\n\nclass Classifier(object):\n def __init__(self, filenames):\n self.filenames = [f for f in filenames if os.path.lexists(f)]\n self._types_cache = {}\n\n def _types_for_file(self, filename):\n try:\n return self._types_cache[filename]\n except KeyError:\n ret = self._types_cache[filename] = tags_from_path(filename)\n return ret\n\n def by_types(self, names, types, exclude_types):\n types, exclude_types = frozenset(types), frozenset(exclude_types)\n ret = []\n for filename in names:\n tags = self._types_for_file(filename)\n if tags >= types and not tags & exclude_types:\n ret.append(filename)\n return ret\n\n def filenames_for_hook(self, hook):\n names = self.filenames\n names = filter_by_include_exclude(names, hook.files, hook.exclude)\n names = self.by_types(names, hook.types, hook.exclude_types)\n return names\n\n\ndef _get_skips(environ):\n skips = environ.get('SKIP', '')\n return {skip.strip() for skip in skips.split(',') if skip.strip()}\n\n\ndef _hook_msg_start(hook, verbose):\n return '{}{}'.format('[{}] '.format(hook.id) if verbose else '', hook.name)\n\n\nSKIPPED = 'Skipped'\nNO_FILES = '(no files to check)'\n\n\ndef _run_single_hook(classifier, hook, args, skips, cols, use_color):\n filenames = classifier.filenames_for_hook(hook)\n\n if hook.language == 'pcre':\n logger.warning(\n '`{}` (from {}) uses the deprecated pcre language.\\n'\n 'The pcre language is scheduled for removal in pre-commit 2.x.\\n'\n 'The pygrep language is a more portable (and usually drop-in) '\n 'replacement.'.format(hook.id, hook.src),\n )\n\n if hook.id in skips or hook.alias in skips:\n output.write(\n get_hook_message(\n _hook_msg_start(hook, args.verbose),\n end_msg=SKIPPED,\n end_color=color.YELLOW,\n use_color=args.color,\n cols=cols,\n ),\n )\n return 0\n elif not filenames and not hook.always_run:\n output.write(\n get_hook_message(\n _hook_msg_start(hook, args.verbose),\n postfix=NO_FILES,\n end_msg=SKIPPED,\n end_color=color.TURQUOISE,\n use_color=args.color,\n cols=cols,\n ),\n )\n return 0\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n output.write(\n get_hook_message(\n _hook_msg_start(hook, args.verbose), end_len=6, cols=cols,\n ),\n )\n sys.stdout.flush()\n\n diff_before = cmd_output_b('git', 'diff', '--no-ext-diff', retcode=None)\n filenames = tuple(filenames) if hook.pass_filenames else ()\n retcode, out = hook.run(filenames, use_color)\n diff_after = cmd_output_b('git', 'diff', '--no-ext-diff', retcode=None)\n\n file_modifications = diff_before != diff_after\n\n # If the hook makes changes, fail the commit\n if file_modifications:\n retcode = 1\n\n if retcode:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n output.write_line(color.format_color(pass_fail, print_color, args.color))\n\n if (\n (out or file_modifications) and\n (retcode or args.verbose or hook.verbose)\n ):\n output.write_line('hookid: {}\\n'.format(hook.id))\n\n # Print a message if failing due to file modifications\n if file_modifications:\n output.write('Files were modified by this hook.')\n\n if out:\n output.write_line(' Additional output:')\n\n output.write_line()\n\n if out.strip():\n output.write_line(out.strip(), logfile_name=hook.log_file)\n output.write_line()\n\n return retcode\n\n\ndef _compute_cols(hooks, verbose):\n \"\"\"Compute the number of columns to display hook messages. The widest\n that will be displayed is in the no files skipped case:\n\n Hook name...(no files to check) Skipped\n\n or in the verbose case\n\n Hook name [hookid]...(no files to check) Skipped\n \"\"\"\n if hooks:\n name_len = max(len(_hook_msg_start(hook, verbose)) for hook in hooks)\n else:\n name_len = 0\n\n cols = name_len + 3 + len(NO_FILES) + 1 + len(SKIPPED)\n return max(cols, 80)\n\n\ndef _all_filenames(args):\n if args.origin and args.source:\n return git.get_changed_files(args.origin, args.source)\n elif args.hook_stage in {'prepare-commit-msg', 'commit-msg'}:\n return (args.commit_msg_filename,)\n elif args.files:\n return args.files\n elif args.all_files:\n return git.get_all_files()\n elif git.is_in_merge_conflict():\n return git.get_conflicted_files()\n else:\n return git.get_staged_files()\n\n\ndef _run_hooks(config, hooks, args, environ):\n \"\"\"Actually run the hooks.\"\"\"\n skips = _get_skips(environ)\n cols = _compute_cols(hooks, args.verbose)\n filenames = _all_filenames(args)\n filenames = filter_by_include_exclude(filenames, '', config['exclude'])\n classifier = Classifier(filenames)\n retval = 0\n for hook in hooks:\n retval |= _run_single_hook(\n classifier, hook, args, skips, cols, args.color,\n )\n if retval and config['fail_fast']:\n break\n if retval and args.show_diff_on_failure and git.has_diff():\n if args.all_files:\n output.write_line(\n 'pre-commit hook(s) made changes.\\n'\n 'If you are seeing this message in CI, '\n 'reproduce locally with: `pre-commit run --all-files`.\\n'\n 'To run `pre-commit` as part of git workflow, use '\n '`pre-commit install`.',\n )\n output.write_line('All changes made by hooks:')\n # args.color is a boolean.\n # See user_color function in color.py\n subprocess.call((\n 'git', '--no-pager', 'diff', '--no-ext-diff',\n '--color={}'.format({True: 'always', False: 'never'}[args.color]),\n ))\n\n return retval\n\n\ndef _has_unmerged_paths():\n _, stdout, _ = cmd_output_b('git', 'ls-files', '--unmerged')\n return bool(stdout.strip())\n\n\ndef _has_unstaged_config(config_file):\n retcode, _, _ = cmd_output_b(\n 'git', 'diff', '--no-ext-diff', '--exit-code', config_file,\n retcode=None,\n )\n # be explicit, other git errors don't mean it has an unstaged config.\n return retcode == 1\n\n\ndef run(config_file, store, args, environ=os.environ):\n no_stash = args.all_files or bool(args.files)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths():\n logger.error('Unmerged files. Resolve before committing.')\n return 1\n if bool(args.source) != bool(args.origin):\n logger.error('Specify both --origin and --source.')\n return 1\n if _has_unstaged_config(config_file) and not no_stash:\n logger.error(\n 'Your pre-commit configuration is unstaged.\\n'\n '`git add {}` to fix this.'.format(config_file),\n )\n return 1\n\n # Expose origin / source as environment variables for hooks to consume\n if args.origin and args.source:\n environ['PRE_COMMIT_ORIGIN'] = args.origin\n environ['PRE_COMMIT_SOURCE'] = args.source\n\n if no_stash:\n ctx = noop_context()\n else:\n ctx = staged_files_only(store.directory)\n\n with ctx:\n config = load_config(config_file)\n hooks = [\n hook\n for hook in all_hooks(config, store)\n if not args.hook or hook.id == args.hook or hook.alias == args.hook\n if args.hook_stage in hook.stages\n ]\n\n if args.hook and not hooks:\n output.write_line(\n 'No hook with id `{}` in stage `{}`'.format(\n args.hook, args.hook_stage,\n ),\n )\n return 1\n\n install_hook_envs(hooks, store)\n\n return _run_hooks(config, hooks, args, environ)\n", "path": "pre_commit/commands/run.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\nimport os\nimport re\nimport subprocess\nimport sys\n\nfrom identify.identify import tags_from_path\n\nfrom pre_commit import color\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.output import get_hook_message\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef filter_by_include_exclude(names, include, exclude):\n include_re, exclude_re = re.compile(include), re.compile(exclude)\n return [\n filename for filename in names\n if include_re.search(filename)\n if not exclude_re.search(filename)\n ]\n\n\nclass Classifier(object):\n def __init__(self, filenames):\n # on windows we normalize all filenames to use forward slashes\n # this makes it easier to filter using the `files:` regex\n # this also makes improperly quoted shell-based hooks work better\n # see #1173\n if os.altsep == '/' and os.sep == '\\\\':\n filenames = (f.replace(os.sep, os.altsep) for f in filenames)\n self.filenames = [f for f in filenames if os.path.lexists(f)]\n self._types_cache = {}\n\n def _types_for_file(self, filename):\n try:\n return self._types_cache[filename]\n except KeyError:\n ret = self._types_cache[filename] = tags_from_path(filename)\n return ret\n\n def by_types(self, names, types, exclude_types):\n types, exclude_types = frozenset(types), frozenset(exclude_types)\n ret = []\n for filename in names:\n tags = self._types_for_file(filename)\n if tags >= types and not tags & exclude_types:\n ret.append(filename)\n return ret\n\n def filenames_for_hook(self, hook):\n names = self.filenames\n names = filter_by_include_exclude(names, hook.files, hook.exclude)\n names = self.by_types(names, hook.types, hook.exclude_types)\n return names\n\n\ndef _get_skips(environ):\n skips = environ.get('SKIP', '')\n return {skip.strip() for skip in skips.split(',') if skip.strip()}\n\n\ndef _hook_msg_start(hook, verbose):\n return '{}{}'.format('[{}] '.format(hook.id) if verbose else '', hook.name)\n\n\nSKIPPED = 'Skipped'\nNO_FILES = '(no files to check)'\n\n\ndef _run_single_hook(classifier, hook, args, skips, cols, use_color):\n filenames = classifier.filenames_for_hook(hook)\n\n if hook.language == 'pcre':\n logger.warning(\n '`{}` (from {}) uses the deprecated pcre language.\\n'\n 'The pcre language is scheduled for removal in pre-commit 2.x.\\n'\n 'The pygrep language is a more portable (and usually drop-in) '\n 'replacement.'.format(hook.id, hook.src),\n )\n\n if hook.id in skips or hook.alias in skips:\n output.write(\n get_hook_message(\n _hook_msg_start(hook, args.verbose),\n end_msg=SKIPPED,\n end_color=color.YELLOW,\n use_color=args.color,\n cols=cols,\n ),\n )\n return 0\n elif not filenames and not hook.always_run:\n output.write(\n get_hook_message(\n _hook_msg_start(hook, args.verbose),\n postfix=NO_FILES,\n end_msg=SKIPPED,\n end_color=color.TURQUOISE,\n use_color=args.color,\n cols=cols,\n ),\n )\n return 0\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n output.write(\n get_hook_message(\n _hook_msg_start(hook, args.verbose), end_len=6, cols=cols,\n ),\n )\n sys.stdout.flush()\n\n diff_before = cmd_output_b('git', 'diff', '--no-ext-diff', retcode=None)\n filenames = tuple(filenames) if hook.pass_filenames else ()\n retcode, out = hook.run(filenames, use_color)\n diff_after = cmd_output_b('git', 'diff', '--no-ext-diff', retcode=None)\n\n file_modifications = diff_before != diff_after\n\n # If the hook makes changes, fail the commit\n if file_modifications:\n retcode = 1\n\n if retcode:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n output.write_line(color.format_color(pass_fail, print_color, args.color))\n\n if (\n (out or file_modifications) and\n (retcode or args.verbose or hook.verbose)\n ):\n output.write_line('hookid: {}\\n'.format(hook.id))\n\n # Print a message if failing due to file modifications\n if file_modifications:\n output.write('Files were modified by this hook.')\n\n if out:\n output.write_line(' Additional output:')\n\n output.write_line()\n\n if out.strip():\n output.write_line(out.strip(), logfile_name=hook.log_file)\n output.write_line()\n\n return retcode\n\n\ndef _compute_cols(hooks, verbose):\n \"\"\"Compute the number of columns to display hook messages. The widest\n that will be displayed is in the no files skipped case:\n\n Hook name...(no files to check) Skipped\n\n or in the verbose case\n\n Hook name [hookid]...(no files to check) Skipped\n \"\"\"\n if hooks:\n name_len = max(len(_hook_msg_start(hook, verbose)) for hook in hooks)\n else:\n name_len = 0\n\n cols = name_len + 3 + len(NO_FILES) + 1 + len(SKIPPED)\n return max(cols, 80)\n\n\ndef _all_filenames(args):\n if args.origin and args.source:\n return git.get_changed_files(args.origin, args.source)\n elif args.hook_stage in {'prepare-commit-msg', 'commit-msg'}:\n return (args.commit_msg_filename,)\n elif args.files:\n return args.files\n elif args.all_files:\n return git.get_all_files()\n elif git.is_in_merge_conflict():\n return git.get_conflicted_files()\n else:\n return git.get_staged_files()\n\n\ndef _run_hooks(config, hooks, args, environ):\n \"\"\"Actually run the hooks.\"\"\"\n skips = _get_skips(environ)\n cols = _compute_cols(hooks, args.verbose)\n filenames = _all_filenames(args)\n filenames = filter_by_include_exclude(filenames, '', config['exclude'])\n classifier = Classifier(filenames)\n retval = 0\n for hook in hooks:\n retval |= _run_single_hook(\n classifier, hook, args, skips, cols, args.color,\n )\n if retval and config['fail_fast']:\n break\n if retval and args.show_diff_on_failure and git.has_diff():\n if args.all_files:\n output.write_line(\n 'pre-commit hook(s) made changes.\\n'\n 'If you are seeing this message in CI, '\n 'reproduce locally with: `pre-commit run --all-files`.\\n'\n 'To run `pre-commit` as part of git workflow, use '\n '`pre-commit install`.',\n )\n output.write_line('All changes made by hooks:')\n # args.color is a boolean.\n # See user_color function in color.py\n subprocess.call((\n 'git', '--no-pager', 'diff', '--no-ext-diff',\n '--color={}'.format({True: 'always', False: 'never'}[args.color]),\n ))\n\n return retval\n\n\ndef _has_unmerged_paths():\n _, stdout, _ = cmd_output_b('git', 'ls-files', '--unmerged')\n return bool(stdout.strip())\n\n\ndef _has_unstaged_config(config_file):\n retcode, _, _ = cmd_output_b(\n 'git', 'diff', '--no-ext-diff', '--exit-code', config_file,\n retcode=None,\n )\n # be explicit, other git errors don't mean it has an unstaged config.\n return retcode == 1\n\n\ndef run(config_file, store, args, environ=os.environ):\n no_stash = args.all_files or bool(args.files)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths():\n logger.error('Unmerged files. Resolve before committing.')\n return 1\n if bool(args.source) != bool(args.origin):\n logger.error('Specify both --origin and --source.')\n return 1\n if _has_unstaged_config(config_file) and not no_stash:\n logger.error(\n 'Your pre-commit configuration is unstaged.\\n'\n '`git add {}` to fix this.'.format(config_file),\n )\n return 1\n\n # Expose origin / source as environment variables for hooks to consume\n if args.origin and args.source:\n environ['PRE_COMMIT_ORIGIN'] = args.origin\n environ['PRE_COMMIT_SOURCE'] = args.source\n\n if no_stash:\n ctx = noop_context()\n else:\n ctx = staged_files_only(store.directory)\n\n with ctx:\n config = load_config(config_file)\n hooks = [\n hook\n for hook in all_hooks(config, store)\n if not args.hook or hook.id == args.hook or hook.alias == args.hook\n if args.hook_stage in hook.stages\n ]\n\n if args.hook and not hooks:\n output.write_line(\n 'No hook with id `{}` in stage `{}`'.format(\n args.hook, args.hook_stage,\n ),\n )\n return 1\n\n install_hook_envs(hooks, store)\n\n return _run_hooks(config, hooks, args, environ)\n", "path": "pre_commit/commands/run.py"}]}
| 3,345 | 175 |
gh_patches_debug_13538
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-4418
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[sentry] KeyError/v1/users/<int:user_id>/access-codes
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server
```
KeyError: 'id'
File "flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "flask_cors/extension.py", line 161, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "flask_rest_jsonapi/decorators.py", line 32, in wrapper
return func(*args, **kwargs)
File "flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "flask_rest_jsonapi/resource.py", line 68, in dispatch_request
response = method(*args, **kwargs)
File "flask_rest_jsonapi/decorators.py", line 56, in wrapper
return func(*args, **kwargs)
File "flask_rest_jsonapi/resource.py", line 145, in get
objects_count, objects = self._data_layer.get_collection(qs, kwargs)
File "flask_rest_jsonapi/data_layers/alchemy.py", line 92, in get_collection
query = self.query(view_kwargs)
File "app/api/access_codes.py", line 56, in query
if not has_access('is_user_itself', user_id=user.id):
File "app/api/helpers/permission_manager.py", line 450, in has_access
auth = permissions[access_level](lambda *a, **b: True, (), {}, (), **kwargs)
File "app/api/helpers/permissions.py", line 43, in decorator
return fn(*args, **kwargs)
File "app/api/helpers/permission_manager.py", line 109, in is_user_itself
if not user.is_admin and not user.is_super_admin and user.id != kwargs['id']:
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/access_codes.py`
Content:
```
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from flask_rest_jsonapi.exceptions import ObjectNotFound
3 from app.api.helpers.exceptions import ForbiddenException
4
5 from app.api.bootstrap import api
6 from app.api.helpers.db import safe_query
7 from app.api.helpers.permission_manager import has_access
8 from app.api.helpers.permissions import jwt_required
9 from app.api.helpers.query import event_query
10 from app.api.helpers.utilities import require_relationship
11 from app.api.schema.access_codes import AccessCodeSchema
12 from app.models import db
13 from app.models.access_code import AccessCode
14 from app.models.ticket import Ticket
15 from app.models.user import User
16
17
18 class AccessCodeListPost(ResourceList):
19 """
20 Create AccessCodes
21 """
22 def before_post(self, args, kwargs, data):
23 """
24 before post method to check for required relationships and permissions
25 :param args:
26 :param kwargs:
27 :param data:
28 :return:
29 """
30 require_relationship(['event', 'user'], data)
31 if not has_access('is_coorganizer', event_id=data['event']):
32 raise ObjectNotFound({'parameter': 'event_id'},
33 "Event: {} not found".format(data['event']))
34
35 schema = AccessCodeSchema
36 methods = ['POST', ]
37 data_layer = {'session': db.session,
38 'model': AccessCode
39 }
40
41
42 class AccessCodeList(ResourceList):
43 """
44 List AccessCodes
45 """
46 def query(self, view_kwargs):
47 """
48 Method to get access codes list based on different view_kwargs
49 :param view_kwargs:
50 :return:
51 """
52 query_ = self.session.query(AccessCode)
53 query_ = event_query(self, query_, view_kwargs, permission='is_coorganizer')
54 if view_kwargs.get('user_id'):
55 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
56 if not has_access('is_user_itself', user_id=user.id):
57 raise ForbiddenException({'source': ''}, 'Access Forbidden')
58 query_ = query_.join(User).filter(User.id == user.id)
59 if view_kwargs.get('ticket_id'):
60 ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')
61 if not has_access('is_coorganizer', event_id=ticket.event_id):
62 raise ForbiddenException({'source': ''}, 'Access Forbidden')
63 # access_code - ticket :: many-to-many relationship
64 query_ = AccessCode.query.filter(AccessCode.tickets.any(id=ticket.id))
65 query_
66 return query_
67
68 view_kwargs = True
69 methods = ['GET', ]
70 schema = AccessCodeSchema
71 data_layer = {'session': db.session,
72 'model': AccessCode,
73 'methods': {
74 'query': query,
75 }}
76
77
78 class AccessCodeDetail(ResourceDetail):
79 """
80 AccessCode detail by id
81 """
82
83 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
84 fetch_as="event_id", model=AccessCode, methods="GET, PATCH"),
85 api.has_permission('is_coorganizer_but_not_admin', fetch='event_id',
86 fetch_as="event_id", model=AccessCode, methods="DELETE"),)
87 schema = AccessCodeSchema
88 data_layer = {'session': db.session,
89 'model': AccessCode
90 }
91
92
93 class AccessCodeRelationshipRequired(ResourceRelationship):
94 """
95 AccessCode Relationship Required
96 """
97 decorators = (jwt_required,)
98 methods = ['GET', 'PATCH']
99 schema = AccessCodeSchema
100 data_layer = {'session': db.session,
101 'model': AccessCode}
102
103
104 class AccessCodeRelationshipOptional(ResourceRelationship):
105 """
106 AccessCode Relationship Optional
107 """
108 decorators = (jwt_required,)
109 schema = AccessCodeSchema
110 data_layer = {'session': db.session,
111 'model': AccessCode}
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/api/access_codes.py b/app/api/access_codes.py
--- a/app/api/access_codes.py
+++ b/app/api/access_codes.py
@@ -53,7 +53,7 @@
query_ = event_query(self, query_, view_kwargs, permission='is_coorganizer')
if view_kwargs.get('user_id'):
user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
- if not has_access('is_user_itself', user_id=user.id):
+ if not has_access('is_user_itself', id=user.id):
raise ForbiddenException({'source': ''}, 'Access Forbidden')
query_ = query_.join(User).filter(User.id == user.id)
if view_kwargs.get('ticket_id'):
|
{"golden_diff": "diff --git a/app/api/access_codes.py b/app/api/access_codes.py\n--- a/app/api/access_codes.py\n+++ b/app/api/access_codes.py\n@@ -53,7 +53,7 @@\n query_ = event_query(self, query_, view_kwargs, permission='is_coorganizer')\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n- if not has_access('is_user_itself', user_id=user.id):\n+ if not has_access('is_user_itself', id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(User).filter(User.id == user.id)\n if view_kwargs.get('ticket_id'):\n", "issue": "[sentry] KeyError/v1/users/<int:user_id>/access-codes\n**I'm submitting a ...** (check one with \"x\")\r\n- [x] bug report\r\n- [ ] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server\r\n\r\n```\r\nKeyError: 'id'\r\n File \"flask/app.py\", line 1982, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"flask/app.py\", line 1614, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"flask_cors/extension.py\", line 161, in wrapped_function\r\n return cors_after_request(app.make_response(f(*args, **kwargs)))\r\n File \"flask/app.py\", line 1517, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"flask/app.py\", line 1612, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"flask/app.py\", line 1598, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"flask_rest_jsonapi/decorators.py\", line 32, in wrapper\r\n return func(*args, **kwargs)\r\n File \"flask/views.py\", line 84, in view\r\n return self.dispatch_request(*args, **kwargs)\r\n File \"flask_rest_jsonapi/resource.py\", line 68, in dispatch_request\r\n response = method(*args, **kwargs)\r\n File \"flask_rest_jsonapi/decorators.py\", line 56, in wrapper\r\n return func(*args, **kwargs)\r\n File \"flask_rest_jsonapi/resource.py\", line 145, in get\r\n objects_count, objects = self._data_layer.get_collection(qs, kwargs)\r\n File \"flask_rest_jsonapi/data_layers/alchemy.py\", line 92, in get_collection\r\n query = self.query(view_kwargs)\r\n File \"app/api/access_codes.py\", line 56, in query\r\n if not has_access('is_user_itself', user_id=user.id):\r\n File \"app/api/helpers/permission_manager.py\", line 450, in has_access\r\n auth = permissions[access_level](lambda *a, **b: True, (), {}, (), **kwargs)\r\n File \"app/api/helpers/permissions.py\", line 43, in decorator\r\n return fn(*args, **kwargs)\r\n File \"app/api/helpers/permission_manager.py\", line 109, in is_user_itself\r\n if not user.is_admin and not user.is_super_admin and user.id != kwargs['id']:\r\n```\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\nfrom app.api.helpers.exceptions import ForbiddenException\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.access_codes import AccessCodeSchema\nfrom app.models import db\nfrom app.models.access_code import AccessCode\nfrom app.models.ticket import Ticket\nfrom app.models.user import User\n\n\nclass AccessCodeListPost(ResourceList):\n \"\"\"\n Create AccessCodes\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationships and permissions\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'user'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event']))\n\n schema = AccessCodeSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': AccessCode\n }\n\n\nclass AccessCodeList(ResourceList):\n \"\"\"\n List AccessCodes\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n Method to get access codes list based on different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(AccessCode)\n query_ = event_query(self, query_, view_kwargs, permission='is_coorganizer')\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n if not has_access('is_user_itself', user_id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(User).filter(User.id == user.id)\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n if not has_access('is_coorganizer', event_id=ticket.event_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n # access_code - ticket :: many-to-many relationship\n query_ = AccessCode.query.filter(AccessCode.tickets.any(id=ticket.id))\n query_\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode,\n 'methods': {\n 'query': query,\n }}\n\n\nclass AccessCodeDetail(ResourceDetail):\n \"\"\"\n AccessCode detail by id\n \"\"\"\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=AccessCode, methods=\"GET, PATCH\"),\n api.has_permission('is_coorganizer_but_not_admin', fetch='event_id',\n fetch_as=\"event_id\", model=AccessCode, methods=\"DELETE\"),)\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode\n }\n\n\nclass AccessCodeRelationshipRequired(ResourceRelationship):\n \"\"\"\n AccessCode Relationship Required\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode}\n\n\nclass AccessCodeRelationshipOptional(ResourceRelationship):\n \"\"\"\n AccessCode Relationship Optional\n \"\"\"\n decorators = (jwt_required,)\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode}\n", "path": "app/api/access_codes.py"}], "after_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\nfrom app.api.helpers.exceptions import ForbiddenException\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.access_codes import AccessCodeSchema\nfrom app.models import db\nfrom app.models.access_code import AccessCode\nfrom app.models.ticket import Ticket\nfrom app.models.user import User\n\n\nclass AccessCodeListPost(ResourceList):\n \"\"\"\n Create AccessCodes\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationships and permissions\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'user'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event']))\n\n schema = AccessCodeSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': AccessCode\n }\n\n\nclass AccessCodeList(ResourceList):\n \"\"\"\n List AccessCodes\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n Method to get access codes list based on different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(AccessCode)\n query_ = event_query(self, query_, view_kwargs, permission='is_coorganizer')\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n if not has_access('is_user_itself', id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(User).filter(User.id == user.id)\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n if not has_access('is_coorganizer', event_id=ticket.event_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n # access_code - ticket :: many-to-many relationship\n query_ = AccessCode.query.filter(AccessCode.tickets.any(id=ticket.id))\n query_\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode,\n 'methods': {\n 'query': query,\n }}\n\n\nclass AccessCodeDetail(ResourceDetail):\n \"\"\"\n AccessCode detail by id\n \"\"\"\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=AccessCode, methods=\"GET, PATCH\"),\n api.has_permission('is_coorganizer_but_not_admin', fetch='event_id',\n fetch_as=\"event_id\", model=AccessCode, methods=\"DELETE\"),)\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode\n }\n\n\nclass AccessCodeRelationshipRequired(ResourceRelationship):\n \"\"\"\n AccessCode Relationship Required\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode}\n\n\nclass AccessCodeRelationshipOptional(ResourceRelationship):\n \"\"\"\n AccessCode Relationship Optional\n \"\"\"\n decorators = (jwt_required,)\n schema = AccessCodeSchema\n data_layer = {'session': db.session,\n 'model': AccessCode}\n", "path": "app/api/access_codes.py"}]}
| 1,917 | 166 |
gh_patches_debug_13827
|
rasdani/github-patches
|
git_diff
|
internetarchive__openlibrary-7931
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search results page fails to render if search string includes special characters
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
### Evidence / Screenshot (if possible)
<img width="1050" alt="Screen Shot 2023-03-08 at 2 56 35 PM" src="https://user-images.githubusercontent.com/82178/223893720-68b534ce-5bf8-4b2c-b601-92ef76a4a2d4.png">
### Relevant url?
https://openlibrary.org/search?q=Cornell+%2777%3A+The+Music%2C+the+Myth+and+the+Magnificence+of+the+Grateful+Dead+Show+at+Barton+Hall&mode=everything
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to ...
2. Do ...
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual:
* Expected:
### Details
- **Logged in (Y/N)?**
- **Browser type/version?**
- **Operating system?**
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/solr/query_utils.py`
Content:
```
1 from typing import Callable, Literal, Optional
2 from luqum.parser import parser
3 from luqum.tree import Item, SearchField, BaseOperation, Group, Word, Unary
4 import re
5
6
7 class EmptyTreeError(Exception):
8 pass
9
10
11 def luqum_remove_child(child: Item, parents: list[Item]):
12 """
13 Removes a child from a luqum parse tree. If the tree
14 ends up being empty, errors.
15
16 :param child: Node to remove
17 :param parents: Path of parent nodes leading from the root of the tree
18 """
19 parent = parents[-1] if parents else None
20 if parent is None:
21 # We cannot remove the element if it is the root of the tree
22 raise EmptyTreeError()
23 elif isinstance(parent, (BaseOperation, Group, Unary)):
24 new_children = tuple(c for c in parent.children if c != child)
25 if not new_children:
26 # If we have deleted all the children, we need to delete the parent
27 # as well. And potentially recurse up the tree.
28 luqum_remove_child(parent, parents[:-1])
29 else:
30 parent.children = new_children
31 else:
32 raise ValueError("Not supported for generic class Item")
33
34
35 def luqum_replace_child(parent: Item, old_child: Item, new_child: Item):
36 """
37 Replaces a child in a luqum parse tree.
38 """
39 if isinstance(parent, (BaseOperation, Group, Unary)):
40 new_children = tuple(
41 new_child if c == old_child else c for c in parent.children
42 )
43 parent.children = new_children
44 else:
45 raise ValueError("Not supported for generic class Item")
46
47
48 def luqum_traverse(item: Item, _parents: list[Item] | None = None):
49 """
50 Traverses every node in the parse tree in depth-first order.
51
52 Does not make any guarantees about what will happen if you
53 modify the tree while traversing it 😅 But we do it anyways.
54
55 :param item: Node to traverse
56 :param _parents: Internal parameter for tracking parents
57 """
58 parents = _parents or []
59 yield item, parents
60 new_parents = [*parents, item]
61 for child in item.children:
62 yield from luqum_traverse(child, new_parents)
63
64
65 def escape_unknown_fields(
66 query: str,
67 is_valid_field: Callable[[str], bool],
68 lower=True,
69 ) -> str:
70 """
71 Escapes the colon of any search field that is not deemed valid by the
72 predicate function `is_valid_field`.
73
74 :param query: Query to escape
75 :param is_valid_field: Predicate function that determines if a field is valid
76 :param lower: If true, the field will be lowercased before being checked
77
78 >>> escape_unknown_fields('title:foo', lambda field: False)
79 'title\\\\:foo'
80 >>> escape_unknown_fields('title:foo bar blah:bar baz:boo', lambda field: False)
81 'title\\\\:foo bar blah\\\\:bar baz\\\\:boo'
82 >>> escape_unknown_fields('title:foo bar', {'title'}.__contains__)
83 'title:foo bar'
84 >>> escape_unknown_fields('title:foo bar baz:boo', {'title'}.__contains__)
85 'title:foo bar baz\\\\:boo'
86 >>> escape_unknown_fields('title:foo bar baz:boo', {'TITLE'}.__contains__, lower=False)
87 'title\\\\:foo bar baz\\\\:boo'
88 >>> escape_unknown_fields('hi', {'title'}.__contains__)
89 'hi'
90 >>> escape_unknown_fields('(title:foo) OR (blah:bah)', {'title'}.__contains__)
91 '(title:foo) OR (blah\\\\:bah)'
92 """
93 tree = parser.parse(query)
94 # Note we use the string of the tree, because it strips spaces
95 # like: "title : foo" -> "title:foo"
96 escaped_query = str(tree)
97 offset = 0
98 for sf, _ in luqum_traverse(tree):
99 if isinstance(sf, SearchField) and not is_valid_field(
100 sf.name.lower() if lower else sf.name
101 ):
102 field = sf.name + r'\:'
103 if hasattr(sf, 'head'):
104 # head and tail are used for whitespace between fields;
105 # copy it along to the write space to avoid things smashing
106 # together
107 field = sf.head + field
108
109 # We will be moving left to right, so we need to adjust the offset
110 # to account for the characters we have already replaced
111 escaped_query = (
112 escaped_query[: sf.pos + offset]
113 + field
114 + escaped_query[sf.pos + len(field) - 1 + offset :]
115 )
116 offset += 1
117 return escaped_query
118
119
120 def fully_escape_query(query: str) -> str:
121 """
122 Try to convert a query to basically a plain lucene string.
123
124 >>> fully_escape_query('title:foo')
125 'title\\\\:foo'
126 >>> fully_escape_query('title:foo bar')
127 'title\\\\:foo bar'
128 >>> fully_escape_query('title:foo (bar baz:boo)')
129 'title\\\\:foo \\\\(bar baz\\\\:boo\\\\)'
130 >>> fully_escape_query('x:[A TO Z}')
131 'x\\\\:\\\\[A TO Z\\\\}'
132 >>> fully_escape_query('foo AND bar')
133 'foo and bar'
134 """
135 escaped = query
136 # Escape special characters
137 escaped = re.sub(r'[\[\]\(\)\{\}:"\-+?~^/\\,]', r'\\\g<0>', escaped)
138 # Remove boolean operators by making them lowercase
139 escaped = re.sub(r'AND|OR|NOT', lambda _1: _1.group(0).lower(), escaped)
140 return escaped
141
142
143 def luqum_parser(query: str) -> Item:
144 """
145 Parses a lucene-like query, with the special binding rules of Open Library.
146
147 In our queries, unlike native solr/lucene, field names are greedy, and
148 affect the rest of the query until another field is hit.
149
150 Here are some examples. The first query is the native solr/lucene
151 parsing. The second is the parsing we want.
152
153 Query : title:foo bar
154 Lucene: (title:foo) bar
155 OL : (title:foo bar)
156
157 Query : title:foo OR bar AND author:blah
158 Lucene: (title:foo) OR (bar) AND (author:blah)
159 OL : (title:foo OR bar) AND (author:blah)
160
161 This requires an annoying amount of manipulation of the default
162 Luqum parser, unfortunately.
163
164 Also, OL queries allow spaces after fields.
165 """
166 tree = parser.parse(query)
167
168 def find_next_word(item: Item) -> Optional[tuple[Word, Optional[BaseOperation]]]:
169 if isinstance(item, Word):
170 return item, None
171 elif isinstance(item, BaseOperation) and isinstance(item.children[0], Word):
172 return item.children[0], item
173 else:
174 return None
175
176 for node, parents in luqum_traverse(tree):
177 if isinstance(node, BaseOperation):
178 # if any of the children are SearchField followed by one or more words,
179 # we bundle them together
180 last_sf: SearchField = None
181 to_rem = []
182 for child in node.children:
183 if isinstance(child, SearchField) and isinstance(child.expr, Word):
184 last_sf = child
185 elif last_sf and (next_word := find_next_word(child)):
186 word, parent_op = next_word
187 # Add it over
188 if not isinstance(last_sf.expr, Group):
189 last_sf.expr = Group(type(node)(last_sf.expr, word))
190 last_sf.expr.tail = word.tail
191 word.tail = ''
192 else:
193 last_sf.expr.expr.children[-1].tail = last_sf.expr.tail
194 last_sf.expr.expr.children += (word,)
195 last_sf.expr.tail = word.tail
196 word.tail = ''
197 if parent_op:
198 # A query like: 'title:foo blah OR author:bar
199 # Lucene parses as: (title:foo) ? (blah OR author:bar)
200 # We want : (title:foo ? blah) OR (author:bar)
201 node.op = parent_op.op
202 node.children += (*parent_op.children[1:],)
203 to_rem.append(child)
204 else:
205 last_sf = None
206 if len(to_rem) == len(node.children) - 1:
207 # We only have the searchfield left!
208 if parents:
209 # Move the head to the next element
210 last_sf.head = node.head
211 parents[-1].children = tuple(
212 child if child is not node else last_sf
213 for child in parents[-1].children
214 )
215 else:
216 tree = last_sf
217 break
218 else:
219 node.children = tuple(
220 child for child in node.children if child not in to_rem
221 )
222
223 # Remove spaces before field names
224 for node, parents in luqum_traverse(tree):
225 if isinstance(node, SearchField):
226 node.expr.head = ''
227
228 return tree
229
230
231 def query_dict_to_str(
232 escaped: dict | None = None,
233 unescaped: dict | None = None,
234 op: Literal['AND', 'OR', ''] = '',
235 phrase: bool = False,
236 ) -> str:
237 """
238 Converts a query dict to a search query.
239
240 >>> query_dict_to_str({'title': 'foo'})
241 'title:(foo)'
242 >>> query_dict_to_str({'title': 'foo bar', 'author': 'bar'})
243 'title:(foo bar) author:(bar)'
244 >>> query_dict_to_str({'title': 'foo bar', 'author': 'bar'}, op='OR')
245 'title:(foo bar) OR author:(bar)'
246 >>> query_dict_to_str({'title': 'foo ? to escape'})
247 'title:(foo \\\\? to escape)'
248 >>> query_dict_to_str({'title': 'YES AND'})
249 'title:(YES and)'
250 >>> query_dict_to_str({'publisher_facet': 'Running Press'}, phrase=True)
251 'publisher_facet:"Running Press"'
252 """
253 result = ''
254 if escaped:
255 result += f' {op} '.join(
256 f'{k}:"{fully_escape_query(v)}"'
257 if phrase
258 else f'{k}:({fully_escape_query(v)})'
259 for k, v in escaped.items()
260 )
261 if unescaped:
262 if result:
263 result += f' {op} '
264 result += f' {op} '.join(f'{k}:{v}' for k, v in unescaped.items())
265 return result
266
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openlibrary/solr/query_utils.py b/openlibrary/solr/query_utils.py
--- a/openlibrary/solr/query_utils.py
+++ b/openlibrary/solr/query_utils.py
@@ -131,10 +131,12 @@
'x\\\\:\\\\[A TO Z\\\\}'
>>> fully_escape_query('foo AND bar')
'foo and bar'
+ >>> fully_escape_query("foo's bar")
+ "foo\\\\'s bar"
"""
escaped = query
# Escape special characters
- escaped = re.sub(r'[\[\]\(\)\{\}:"\-+?~^/\\,]', r'\\\g<0>', escaped)
+ escaped = re.sub(r'[\[\]\(\)\{\}:"\-+?~^/\\,\']', r'\\\g<0>', escaped)
# Remove boolean operators by making them lowercase
escaped = re.sub(r'AND|OR|NOT', lambda _1: _1.group(0).lower(), escaped)
return escaped
|
{"golden_diff": "diff --git a/openlibrary/solr/query_utils.py b/openlibrary/solr/query_utils.py\n--- a/openlibrary/solr/query_utils.py\n+++ b/openlibrary/solr/query_utils.py\n@@ -131,10 +131,12 @@\n 'x\\\\\\\\:\\\\\\\\[A TO Z\\\\\\\\}'\n >>> fully_escape_query('foo AND bar')\n 'foo and bar'\n+ >>> fully_escape_query(\"foo's bar\")\n+ \"foo\\\\\\\\'s bar\"\n \"\"\"\n escaped = query\n # Escape special characters\n- escaped = re.sub(r'[\\[\\]\\(\\)\\{\\}:\"\\-+?~^/\\\\,]', r'\\\\\\g<0>', escaped)\n+ escaped = re.sub(r'[\\[\\]\\(\\)\\{\\}:\"\\-+?~^/\\\\,\\']', r'\\\\\\g<0>', escaped)\n # Remove boolean operators by making them lowercase\n escaped = re.sub(r'AND|OR|NOT', lambda _1: _1.group(0).lower(), escaped)\n return escaped\n", "issue": "Search results page fails to render if search string includes special characters\n<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->\r\n\r\n### Evidence / Screenshot (if possible)\r\n<img width=\"1050\" alt=\"Screen Shot 2023-03-08 at 2 56 35 PM\" src=\"https://user-images.githubusercontent.com/82178/223893720-68b534ce-5bf8-4b2c-b601-92ef76a4a2d4.png\">\r\n\r\n### Relevant url?\r\n\r\nhttps://openlibrary.org/search?q=Cornell+%2777%3A+The+Music%2C+the+Myth+and+the+Magnificence+of+the+Grateful+Dead+Show+at+Barton+Hall&mode=everything\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to ...\r\n2. Do ...\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual:\r\n* Expected:\r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?**\r\n- **Browser type/version?**\r\n- **Operating system?**\r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n\n", "before_files": [{"content": "from typing import Callable, Literal, Optional\nfrom luqum.parser import parser\nfrom luqum.tree import Item, SearchField, BaseOperation, Group, Word, Unary\nimport re\n\n\nclass EmptyTreeError(Exception):\n pass\n\n\ndef luqum_remove_child(child: Item, parents: list[Item]):\n \"\"\"\n Removes a child from a luqum parse tree. If the tree\n ends up being empty, errors.\n\n :param child: Node to remove\n :param parents: Path of parent nodes leading from the root of the tree\n \"\"\"\n parent = parents[-1] if parents else None\n if parent is None:\n # We cannot remove the element if it is the root of the tree\n raise EmptyTreeError()\n elif isinstance(parent, (BaseOperation, Group, Unary)):\n new_children = tuple(c for c in parent.children if c != child)\n if not new_children:\n # If we have deleted all the children, we need to delete the parent\n # as well. And potentially recurse up the tree.\n luqum_remove_child(parent, parents[:-1])\n else:\n parent.children = new_children\n else:\n raise ValueError(\"Not supported for generic class Item\")\n\n\ndef luqum_replace_child(parent: Item, old_child: Item, new_child: Item):\n \"\"\"\n Replaces a child in a luqum parse tree.\n \"\"\"\n if isinstance(parent, (BaseOperation, Group, Unary)):\n new_children = tuple(\n new_child if c == old_child else c for c in parent.children\n )\n parent.children = new_children\n else:\n raise ValueError(\"Not supported for generic class Item\")\n\n\ndef luqum_traverse(item: Item, _parents: list[Item] | None = None):\n \"\"\"\n Traverses every node in the parse tree in depth-first order.\n\n Does not make any guarantees about what will happen if you\n modify the tree while traversing it \ud83d\ude05 But we do it anyways.\n\n :param item: Node to traverse\n :param _parents: Internal parameter for tracking parents\n \"\"\"\n parents = _parents or []\n yield item, parents\n new_parents = [*parents, item]\n for child in item.children:\n yield from luqum_traverse(child, new_parents)\n\n\ndef escape_unknown_fields(\n query: str,\n is_valid_field: Callable[[str], bool],\n lower=True,\n) -> str:\n \"\"\"\n Escapes the colon of any search field that is not deemed valid by the\n predicate function `is_valid_field`.\n\n :param query: Query to escape\n :param is_valid_field: Predicate function that determines if a field is valid\n :param lower: If true, the field will be lowercased before being checked\n\n >>> escape_unknown_fields('title:foo', lambda field: False)\n 'title\\\\\\\\:foo'\n >>> escape_unknown_fields('title:foo bar blah:bar baz:boo', lambda field: False)\n 'title\\\\\\\\:foo bar blah\\\\\\\\:bar baz\\\\\\\\:boo'\n >>> escape_unknown_fields('title:foo bar', {'title'}.__contains__)\n 'title:foo bar'\n >>> escape_unknown_fields('title:foo bar baz:boo', {'title'}.__contains__)\n 'title:foo bar baz\\\\\\\\:boo'\n >>> escape_unknown_fields('title:foo bar baz:boo', {'TITLE'}.__contains__, lower=False)\n 'title\\\\\\\\:foo bar baz\\\\\\\\:boo'\n >>> escape_unknown_fields('hi', {'title'}.__contains__)\n 'hi'\n >>> escape_unknown_fields('(title:foo) OR (blah:bah)', {'title'}.__contains__)\n '(title:foo) OR (blah\\\\\\\\:bah)'\n \"\"\"\n tree = parser.parse(query)\n # Note we use the string of the tree, because it strips spaces\n # like: \"title : foo\" -> \"title:foo\"\n escaped_query = str(tree)\n offset = 0\n for sf, _ in luqum_traverse(tree):\n if isinstance(sf, SearchField) and not is_valid_field(\n sf.name.lower() if lower else sf.name\n ):\n field = sf.name + r'\\:'\n if hasattr(sf, 'head'):\n # head and tail are used for whitespace between fields;\n # copy it along to the write space to avoid things smashing\n # together\n field = sf.head + field\n\n # We will be moving left to right, so we need to adjust the offset\n # to account for the characters we have already replaced\n escaped_query = (\n escaped_query[: sf.pos + offset]\n + field\n + escaped_query[sf.pos + len(field) - 1 + offset :]\n )\n offset += 1\n return escaped_query\n\n\ndef fully_escape_query(query: str) -> str:\n \"\"\"\n Try to convert a query to basically a plain lucene string.\n\n >>> fully_escape_query('title:foo')\n 'title\\\\\\\\:foo'\n >>> fully_escape_query('title:foo bar')\n 'title\\\\\\\\:foo bar'\n >>> fully_escape_query('title:foo (bar baz:boo)')\n 'title\\\\\\\\:foo \\\\\\\\(bar baz\\\\\\\\:boo\\\\\\\\)'\n >>> fully_escape_query('x:[A TO Z}')\n 'x\\\\\\\\:\\\\\\\\[A TO Z\\\\\\\\}'\n >>> fully_escape_query('foo AND bar')\n 'foo and bar'\n \"\"\"\n escaped = query\n # Escape special characters\n escaped = re.sub(r'[\\[\\]\\(\\)\\{\\}:\"\\-+?~^/\\\\,]', r'\\\\\\g<0>', escaped)\n # Remove boolean operators by making them lowercase\n escaped = re.sub(r'AND|OR|NOT', lambda _1: _1.group(0).lower(), escaped)\n return escaped\n\n\ndef luqum_parser(query: str) -> Item:\n \"\"\"\n Parses a lucene-like query, with the special binding rules of Open Library.\n\n In our queries, unlike native solr/lucene, field names are greedy, and\n affect the rest of the query until another field is hit.\n\n Here are some examples. The first query is the native solr/lucene\n parsing. The second is the parsing we want.\n\n Query : title:foo bar\n Lucene: (title:foo) bar\n OL : (title:foo bar)\n\n Query : title:foo OR bar AND author:blah\n Lucene: (title:foo) OR (bar) AND (author:blah)\n OL : (title:foo OR bar) AND (author:blah)\n\n This requires an annoying amount of manipulation of the default\n Luqum parser, unfortunately.\n\n Also, OL queries allow spaces after fields.\n \"\"\"\n tree = parser.parse(query)\n\n def find_next_word(item: Item) -> Optional[tuple[Word, Optional[BaseOperation]]]:\n if isinstance(item, Word):\n return item, None\n elif isinstance(item, BaseOperation) and isinstance(item.children[0], Word):\n return item.children[0], item\n else:\n return None\n\n for node, parents in luqum_traverse(tree):\n if isinstance(node, BaseOperation):\n # if any of the children are SearchField followed by one or more words,\n # we bundle them together\n last_sf: SearchField = None\n to_rem = []\n for child in node.children:\n if isinstance(child, SearchField) and isinstance(child.expr, Word):\n last_sf = child\n elif last_sf and (next_word := find_next_word(child)):\n word, parent_op = next_word\n # Add it over\n if not isinstance(last_sf.expr, Group):\n last_sf.expr = Group(type(node)(last_sf.expr, word))\n last_sf.expr.tail = word.tail\n word.tail = ''\n else:\n last_sf.expr.expr.children[-1].tail = last_sf.expr.tail\n last_sf.expr.expr.children += (word,)\n last_sf.expr.tail = word.tail\n word.tail = ''\n if parent_op:\n # A query like: 'title:foo blah OR author:bar\n # Lucene parses as: (title:foo) ? (blah OR author:bar)\n # We want : (title:foo ? blah) OR (author:bar)\n node.op = parent_op.op\n node.children += (*parent_op.children[1:],)\n to_rem.append(child)\n else:\n last_sf = None\n if len(to_rem) == len(node.children) - 1:\n # We only have the searchfield left!\n if parents:\n # Move the head to the next element\n last_sf.head = node.head\n parents[-1].children = tuple(\n child if child is not node else last_sf\n for child in parents[-1].children\n )\n else:\n tree = last_sf\n break\n else:\n node.children = tuple(\n child for child in node.children if child not in to_rem\n )\n\n # Remove spaces before field names\n for node, parents in luqum_traverse(tree):\n if isinstance(node, SearchField):\n node.expr.head = ''\n\n return tree\n\n\ndef query_dict_to_str(\n escaped: dict | None = None,\n unescaped: dict | None = None,\n op: Literal['AND', 'OR', ''] = '',\n phrase: bool = False,\n) -> str:\n \"\"\"\n Converts a query dict to a search query.\n\n >>> query_dict_to_str({'title': 'foo'})\n 'title:(foo)'\n >>> query_dict_to_str({'title': 'foo bar', 'author': 'bar'})\n 'title:(foo bar) author:(bar)'\n >>> query_dict_to_str({'title': 'foo bar', 'author': 'bar'}, op='OR')\n 'title:(foo bar) OR author:(bar)'\n >>> query_dict_to_str({'title': 'foo ? to escape'})\n 'title:(foo \\\\\\\\? to escape)'\n >>> query_dict_to_str({'title': 'YES AND'})\n 'title:(YES and)'\n >>> query_dict_to_str({'publisher_facet': 'Running Press'}, phrase=True)\n 'publisher_facet:\"Running Press\"'\n \"\"\"\n result = ''\n if escaped:\n result += f' {op} '.join(\n f'{k}:\"{fully_escape_query(v)}\"'\n if phrase\n else f'{k}:({fully_escape_query(v)})'\n for k, v in escaped.items()\n )\n if unescaped:\n if result:\n result += f' {op} '\n result += f' {op} '.join(f'{k}:{v}' for k, v in unescaped.items())\n return result\n", "path": "openlibrary/solr/query_utils.py"}], "after_files": [{"content": "from typing import Callable, Literal, Optional\nfrom luqum.parser import parser\nfrom luqum.tree import Item, SearchField, BaseOperation, Group, Word, Unary\nimport re\n\n\nclass EmptyTreeError(Exception):\n pass\n\n\ndef luqum_remove_child(child: Item, parents: list[Item]):\n \"\"\"\n Removes a child from a luqum parse tree. If the tree\n ends up being empty, errors.\n\n :param child: Node to remove\n :param parents: Path of parent nodes leading from the root of the tree\n \"\"\"\n parent = parents[-1] if parents else None\n if parent is None:\n # We cannot remove the element if it is the root of the tree\n raise EmptyTreeError()\n elif isinstance(parent, (BaseOperation, Group, Unary)):\n new_children = tuple(c for c in parent.children if c != child)\n if not new_children:\n # If we have deleted all the children, we need to delete the parent\n # as well. And potentially recurse up the tree.\n luqum_remove_child(parent, parents[:-1])\n else:\n parent.children = new_children\n else:\n raise ValueError(\"Not supported for generic class Item\")\n\n\ndef luqum_replace_child(parent: Item, old_child: Item, new_child: Item):\n \"\"\"\n Replaces a child in a luqum parse tree.\n \"\"\"\n if isinstance(parent, (BaseOperation, Group, Unary)):\n new_children = tuple(\n new_child if c == old_child else c for c in parent.children\n )\n parent.children = new_children\n else:\n raise ValueError(\"Not supported for generic class Item\")\n\n\ndef luqum_traverse(item: Item, _parents: list[Item] | None = None):\n \"\"\"\n Traverses every node in the parse tree in depth-first order.\n\n Does not make any guarantees about what will happen if you\n modify the tree while traversing it \ud83d\ude05 But we do it anyways.\n\n :param item: Node to traverse\n :param _parents: Internal parameter for tracking parents\n \"\"\"\n parents = _parents or []\n yield item, parents\n new_parents = [*parents, item]\n for child in item.children:\n yield from luqum_traverse(child, new_parents)\n\n\ndef escape_unknown_fields(\n query: str,\n is_valid_field: Callable[[str], bool],\n lower=True,\n) -> str:\n \"\"\"\n Escapes the colon of any search field that is not deemed valid by the\n predicate function `is_valid_field`.\n\n :param query: Query to escape\n :param is_valid_field: Predicate function that determines if a field is valid\n :param lower: If true, the field will be lowercased before being checked\n\n >>> escape_unknown_fields('title:foo', lambda field: False)\n 'title\\\\\\\\:foo'\n >>> escape_unknown_fields('title:foo bar blah:bar baz:boo', lambda field: False)\n 'title\\\\\\\\:foo bar blah\\\\\\\\:bar baz\\\\\\\\:boo'\n >>> escape_unknown_fields('title:foo bar', {'title'}.__contains__)\n 'title:foo bar'\n >>> escape_unknown_fields('title:foo bar baz:boo', {'title'}.__contains__)\n 'title:foo bar baz\\\\\\\\:boo'\n >>> escape_unknown_fields('title:foo bar baz:boo', {'TITLE'}.__contains__, lower=False)\n 'title\\\\\\\\:foo bar baz\\\\\\\\:boo'\n >>> escape_unknown_fields('hi', {'title'}.__contains__)\n 'hi'\n >>> escape_unknown_fields('(title:foo) OR (blah:bah)', {'title'}.__contains__)\n '(title:foo) OR (blah\\\\\\\\:bah)'\n \"\"\"\n tree = parser.parse(query)\n # Note we use the string of the tree, because it strips spaces\n # like: \"title : foo\" -> \"title:foo\"\n escaped_query = str(tree)\n offset = 0\n for sf, _ in luqum_traverse(tree):\n if isinstance(sf, SearchField) and not is_valid_field(\n sf.name.lower() if lower else sf.name\n ):\n field = sf.name + r'\\:'\n if hasattr(sf, 'head'):\n # head and tail are used for whitespace between fields;\n # copy it along to the write space to avoid things smashing\n # together\n field = sf.head + field\n\n # We will be moving left to right, so we need to adjust the offset\n # to account for the characters we have already replaced\n escaped_query = (\n escaped_query[: sf.pos + offset]\n + field\n + escaped_query[sf.pos + len(field) - 1 + offset :]\n )\n offset += 1\n return escaped_query\n\n\ndef fully_escape_query(query: str) -> str:\n \"\"\"\n Try to convert a query to basically a plain lucene string.\n\n >>> fully_escape_query('title:foo')\n 'title\\\\\\\\:foo'\n >>> fully_escape_query('title:foo bar')\n 'title\\\\\\\\:foo bar'\n >>> fully_escape_query('title:foo (bar baz:boo)')\n 'title\\\\\\\\:foo \\\\\\\\(bar baz\\\\\\\\:boo\\\\\\\\)'\n >>> fully_escape_query('x:[A TO Z}')\n 'x\\\\\\\\:\\\\\\\\[A TO Z\\\\\\\\}'\n >>> fully_escape_query('foo AND bar')\n 'foo and bar'\n >>> fully_escape_query(\"foo's bar\")\n \"foo\\\\\\\\'s bar\"\n \"\"\"\n escaped = query\n # Escape special characters\n escaped = re.sub(r'[\\[\\]\\(\\)\\{\\}:\"\\-+?~^/\\\\,\\']', r'\\\\\\g<0>', escaped)\n # Remove boolean operators by making them lowercase\n escaped = re.sub(r'AND|OR|NOT', lambda _1: _1.group(0).lower(), escaped)\n return escaped\n\n\ndef luqum_parser(query: str) -> Item:\n \"\"\"\n Parses a lucene-like query, with the special binding rules of Open Library.\n\n In our queries, unlike native solr/lucene, field names are greedy, and\n affect the rest of the query until another field is hit.\n\n Here are some examples. The first query is the native solr/lucene\n parsing. The second is the parsing we want.\n\n Query : title:foo bar\n Lucene: (title:foo) bar\n OL : (title:foo bar)\n\n Query : title:foo OR bar AND author:blah\n Lucene: (title:foo) OR (bar) AND (author:blah)\n OL : (title:foo OR bar) AND (author:blah)\n\n This requires an annoying amount of manipulation of the default\n Luqum parser, unfortunately.\n\n Also, OL queries allow spaces after fields.\n \"\"\"\n tree = parser.parse(query)\n\n def find_next_word(item: Item) -> Optional[tuple[Word, Optional[BaseOperation]]]:\n if isinstance(item, Word):\n return item, None\n elif isinstance(item, BaseOperation) and isinstance(item.children[0], Word):\n return item.children[0], item\n else:\n return None\n\n for node, parents in luqum_traverse(tree):\n if isinstance(node, BaseOperation):\n # if any of the children are SearchField followed by one or more words,\n # we bundle them together\n last_sf: SearchField = None\n to_rem = []\n for child in node.children:\n if isinstance(child, SearchField) and isinstance(child.expr, Word):\n last_sf = child\n elif last_sf and (next_word := find_next_word(child)):\n word, parent_op = next_word\n # Add it over\n if not isinstance(last_sf.expr, Group):\n last_sf.expr = Group(type(node)(last_sf.expr, word))\n last_sf.expr.tail = word.tail\n word.tail = ''\n else:\n last_sf.expr.expr.children[-1].tail = last_sf.expr.tail\n last_sf.expr.expr.children += (word,)\n last_sf.expr.tail = word.tail\n word.tail = ''\n if parent_op:\n # A query like: 'title:foo blah OR author:bar\n # Lucene parses as: (title:foo) ? (blah OR author:bar)\n # We want : (title:foo ? blah) OR (author:bar)\n node.op = parent_op.op\n node.children += (*parent_op.children[1:],)\n to_rem.append(child)\n else:\n last_sf = None\n if len(to_rem) == len(node.children) - 1:\n # We only have the searchfield left!\n if parents:\n # Move the head to the next element\n last_sf.head = node.head\n parents[-1].children = tuple(\n child if child is not node else last_sf\n for child in parents[-1].children\n )\n else:\n tree = last_sf\n break\n else:\n node.children = tuple(\n child for child in node.children if child not in to_rem\n )\n\n # Remove spaces before field names\n for node, parents in luqum_traverse(tree):\n if isinstance(node, SearchField):\n node.expr.head = ''\n\n return tree\n\n\ndef query_dict_to_str(\n escaped: dict | None = None,\n unescaped: dict | None = None,\n op: Literal['AND', 'OR', ''] = '',\n phrase: bool = False,\n) -> str:\n \"\"\"\n Converts a query dict to a search query.\n\n >>> query_dict_to_str({'title': 'foo'})\n 'title:(foo)'\n >>> query_dict_to_str({'title': 'foo bar', 'author': 'bar'})\n 'title:(foo bar) author:(bar)'\n >>> query_dict_to_str({'title': 'foo bar', 'author': 'bar'}, op='OR')\n 'title:(foo bar) OR author:(bar)'\n >>> query_dict_to_str({'title': 'foo ? to escape'})\n 'title:(foo \\\\\\\\? to escape)'\n >>> query_dict_to_str({'title': 'YES AND'})\n 'title:(YES and)'\n >>> query_dict_to_str({'publisher_facet': 'Running Press'}, phrase=True)\n 'publisher_facet:\"Running Press\"'\n \"\"\"\n result = ''\n if escaped:\n result += f' {op} '.join(\n f'{k}:\"{fully_escape_query(v)}\"'\n if phrase\n else f'{k}:({fully_escape_query(v)})'\n for k, v in escaped.items()\n )\n if unescaped:\n if result:\n result += f' {op} '\n result += f' {op} '.join(f'{k}:{v}' for k, v in unescaped.items())\n return result\n", "path": "openlibrary/solr/query_utils.py"}]}
| 3,652 | 231 |
gh_patches_debug_31495
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-2784
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[ASV] add read_csv benchmark with dtype/names/parse_dates parameters
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `asv_bench/benchmarks/io/csv.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 import modin.pandas as pd
15 import numpy as np
16
17 from ..utils import (
18 generate_dataframe,
19 RAND_LOW,
20 RAND_HIGH,
21 ASV_USE_IMPL,
22 ASV_DATASET_SIZE,
23 UNARY_OP_DATA_SIZE,
24 IMPL,
25 execute,
26 get_shape_id,
27 )
28
29 # ray init
30 if ASV_USE_IMPL == "modin":
31 pd.DataFrame([])
32
33
34 class BaseReadCsv:
35 # test data file can de created only once
36 def setup_cache(self, test_filename="io_test_file"):
37 test_filenames = {}
38 for shape in UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]:
39 shape_id = get_shape_id(shape)
40 test_filenames[shape_id] = f"{test_filename}_{shape_id}.csv"
41 df = generate_dataframe("pandas", "str_int", *shape, RAND_LOW, RAND_HIGH)
42 df.to_csv(test_filenames[shape_id], index=False)
43
44 return test_filenames
45
46 def setup(self, test_filenames, shape, *args, **kwargs):
47 self.shape_id = get_shape_id(shape)
48
49
50 class TimeReadCsvSkiprows(BaseReadCsv):
51 param_names = ["shape", "skiprows"]
52 params = [
53 UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],
54 [
55 None,
56 lambda x: x % 2,
57 np.arange(1, UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE][0][0] // 10),
58 np.arange(1, UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE][0][0], 2),
59 ],
60 ]
61
62 def time_skiprows(self, test_filenames, shape, skiprows):
63 execute(
64 IMPL[ASV_USE_IMPL].read_csv(
65 test_filenames[self.shape_id], skiprows=skiprows
66 )
67 )
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/asv_bench/benchmarks/io/csv.py b/asv_bench/benchmarks/io/csv.py
--- a/asv_bench/benchmarks/io/csv.py
+++ b/asv_bench/benchmarks/io/csv.py
@@ -65,3 +65,70 @@
test_filenames[self.shape_id], skiprows=skiprows
)
)
+
+
+class TimeReadCsvNamesDtype:
+ _dtypes_params = ["Int64", "Int64_Timestamp"]
+ _timestamp_columns = ["col1", "col2"]
+
+ param_names = ["shape", "names", "dtype"]
+ params = [
+ UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],
+ ["array-like"],
+ _dtypes_params,
+ ]
+
+ def _get_file_id(self, shape, dtype):
+ return get_shape_id(shape) + dtype
+
+ def _add_timestamp_columns(self, df):
+ df = df.copy()
+ date_column = IMPL["pandas"].date_range(
+ "2000",
+ periods=df.shape[0],
+ freq="ms",
+ )
+ for col in self._timestamp_columns:
+ df[col] = date_column
+ return df
+
+ def setup_cache(self, test_filename="io_test_file_csv_names_dtype"):
+ # filenames with a metadata of saved dataframes
+ cache = {}
+ for shape in UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]:
+ for dtype in self._dtypes_params:
+ df = generate_dataframe("pandas", "int", *shape, RAND_LOW, RAND_HIGH)
+ if dtype == "Int64_Timestamp":
+ df = self._add_timestamp_columns(df)
+
+ file_id = self._get_file_id(shape, dtype)
+ cache[file_id] = (
+ f"{test_filename}_{file_id}.csv",
+ df.columns.to_list(),
+ df.dtypes.to_dict(),
+ )
+ df.to_csv(cache[file_id][0], index=False)
+ return cache
+
+ def setup(self, cache, shape, names, dtype):
+ file_id = self._get_file_id(shape, dtype)
+ self.filename, self.names, self.dtype = cache[file_id]
+
+ self.parse_dates = None
+ if dtype == "Int64_Timestamp":
+ # cached version of dtype should not change
+ self.dtype = self.dtype.copy()
+ for col in self._timestamp_columns:
+ del self.dtype[col]
+ self.parse_dates = self._timestamp_columns
+
+ def time_read_csv_names_dtype(self, cache, shape, names, dtype):
+ execute(
+ IMPL[ASV_USE_IMPL].read_csv(
+ self.filename,
+ names=self.names,
+ header=0,
+ dtype=self.dtype,
+ parse_dates=self.parse_dates,
+ )
+ )
|
{"golden_diff": "diff --git a/asv_bench/benchmarks/io/csv.py b/asv_bench/benchmarks/io/csv.py\n--- a/asv_bench/benchmarks/io/csv.py\n+++ b/asv_bench/benchmarks/io/csv.py\n@@ -65,3 +65,70 @@\n test_filenames[self.shape_id], skiprows=skiprows\n )\n )\n+\n+\n+class TimeReadCsvNamesDtype:\n+ _dtypes_params = [\"Int64\", \"Int64_Timestamp\"]\n+ _timestamp_columns = [\"col1\", \"col2\"]\n+\n+ param_names = [\"shape\", \"names\", \"dtype\"]\n+ params = [\n+ UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],\n+ [\"array-like\"],\n+ _dtypes_params,\n+ ]\n+\n+ def _get_file_id(self, shape, dtype):\n+ return get_shape_id(shape) + dtype\n+\n+ def _add_timestamp_columns(self, df):\n+ df = df.copy()\n+ date_column = IMPL[\"pandas\"].date_range(\n+ \"2000\",\n+ periods=df.shape[0],\n+ freq=\"ms\",\n+ )\n+ for col in self._timestamp_columns:\n+ df[col] = date_column\n+ return df\n+\n+ def setup_cache(self, test_filename=\"io_test_file_csv_names_dtype\"):\n+ # filenames with a metadata of saved dataframes\n+ cache = {}\n+ for shape in UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]:\n+ for dtype in self._dtypes_params:\n+ df = generate_dataframe(\"pandas\", \"int\", *shape, RAND_LOW, RAND_HIGH)\n+ if dtype == \"Int64_Timestamp\":\n+ df = self._add_timestamp_columns(df)\n+\n+ file_id = self._get_file_id(shape, dtype)\n+ cache[file_id] = (\n+ f\"{test_filename}_{file_id}.csv\",\n+ df.columns.to_list(),\n+ df.dtypes.to_dict(),\n+ )\n+ df.to_csv(cache[file_id][0], index=False)\n+ return cache\n+\n+ def setup(self, cache, shape, names, dtype):\n+ file_id = self._get_file_id(shape, dtype)\n+ self.filename, self.names, self.dtype = cache[file_id]\n+\n+ self.parse_dates = None\n+ if dtype == \"Int64_Timestamp\":\n+ # cached version of dtype should not change\n+ self.dtype = self.dtype.copy()\n+ for col in self._timestamp_columns:\n+ del self.dtype[col]\n+ self.parse_dates = self._timestamp_columns\n+\n+ def time_read_csv_names_dtype(self, cache, shape, names, dtype):\n+ execute(\n+ IMPL[ASV_USE_IMPL].read_csv(\n+ self.filename,\n+ names=self.names,\n+ header=0,\n+ dtype=self.dtype,\n+ parse_dates=self.parse_dates,\n+ )\n+ )\n", "issue": "[ASV] add read_csv benchmark with dtype/names/parse_dates parameters\n\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nimport modin.pandas as pd\nimport numpy as np\n\nfrom ..utils import (\n generate_dataframe,\n RAND_LOW,\n RAND_HIGH,\n ASV_USE_IMPL,\n ASV_DATASET_SIZE,\n UNARY_OP_DATA_SIZE,\n IMPL,\n execute,\n get_shape_id,\n)\n\n# ray init\nif ASV_USE_IMPL == \"modin\":\n pd.DataFrame([])\n\n\nclass BaseReadCsv:\n # test data file can de created only once\n def setup_cache(self, test_filename=\"io_test_file\"):\n test_filenames = {}\n for shape in UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]:\n shape_id = get_shape_id(shape)\n test_filenames[shape_id] = f\"{test_filename}_{shape_id}.csv\"\n df = generate_dataframe(\"pandas\", \"str_int\", *shape, RAND_LOW, RAND_HIGH)\n df.to_csv(test_filenames[shape_id], index=False)\n\n return test_filenames\n\n def setup(self, test_filenames, shape, *args, **kwargs):\n self.shape_id = get_shape_id(shape)\n\n\nclass TimeReadCsvSkiprows(BaseReadCsv):\n param_names = [\"shape\", \"skiprows\"]\n params = [\n UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],\n [\n None,\n lambda x: x % 2,\n np.arange(1, UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE][0][0] // 10),\n np.arange(1, UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE][0][0], 2),\n ],\n ]\n\n def time_skiprows(self, test_filenames, shape, skiprows):\n execute(\n IMPL[ASV_USE_IMPL].read_csv(\n test_filenames[self.shape_id], skiprows=skiprows\n )\n )\n", "path": "asv_bench/benchmarks/io/csv.py"}], "after_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nimport modin.pandas as pd\nimport numpy as np\n\nfrom ..utils import (\n generate_dataframe,\n RAND_LOW,\n RAND_HIGH,\n ASV_USE_IMPL,\n ASV_DATASET_SIZE,\n UNARY_OP_DATA_SIZE,\n IMPL,\n execute,\n get_shape_id,\n)\n\n# ray init\nif ASV_USE_IMPL == \"modin\":\n pd.DataFrame([])\n\n\nclass BaseReadCsv:\n # test data file can de created only once\n def setup_cache(self, test_filename=\"io_test_file\"):\n test_filenames = {}\n for shape in UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]:\n shape_id = get_shape_id(shape)\n test_filenames[shape_id] = f\"{test_filename}_{shape_id}.csv\"\n df = generate_dataframe(\"pandas\", \"str_int\", *shape, RAND_LOW, RAND_HIGH)\n df.to_csv(test_filenames[shape_id], index=False)\n\n return test_filenames\n\n def setup(self, test_filenames, shape, *args, **kwargs):\n self.shape_id = get_shape_id(shape)\n\n\nclass TimeReadCsvSkiprows(BaseReadCsv):\n param_names = [\"shape\", \"skiprows\"]\n params = [\n UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],\n [\n None,\n lambda x: x % 2,\n np.arange(1, UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE][0][0] // 10),\n np.arange(1, UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE][0][0], 2),\n ],\n ]\n\n def time_skiprows(self, test_filenames, shape, skiprows):\n execute(\n IMPL[ASV_USE_IMPL].read_csv(\n test_filenames[self.shape_id], skiprows=skiprows\n )\n )\n\n\nclass TimeReadCsvNamesDtype:\n _dtypes_params = [\"Int64\", \"Int64_Timestamp\"]\n _timestamp_columns = [\"col1\", \"col2\"]\n\n param_names = [\"shape\", \"names\", \"dtype\"]\n params = [\n UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],\n [\"array-like\"],\n _dtypes_params,\n ]\n\n def _get_file_id(self, shape, dtype):\n return get_shape_id(shape) + dtype\n\n def _add_timestamp_columns(self, df):\n df = df.copy()\n date_column = IMPL[\"pandas\"].date_range(\n \"2000\",\n periods=df.shape[0],\n freq=\"ms\",\n )\n for col in self._timestamp_columns:\n df[col] = date_column\n return df\n\n def setup_cache(self, test_filename=\"io_test_file_csv_names_dtype\"):\n # filenames with a metadata of saved dataframes\n cache = {}\n for shape in UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]:\n for dtype in self._dtypes_params:\n df = generate_dataframe(\"pandas\", \"int\", *shape, RAND_LOW, RAND_HIGH)\n if dtype == \"Int64_Timestamp\":\n df = self._add_timestamp_columns(df)\n\n file_id = self._get_file_id(shape, dtype)\n cache[file_id] = (\n f\"{test_filename}_{file_id}.csv\",\n df.columns.to_list(),\n df.dtypes.to_dict(),\n )\n df.to_csv(cache[file_id][0], index=False)\n return cache\n\n def setup(self, cache, shape, names, dtype):\n file_id = self._get_file_id(shape, dtype)\n self.filename, self.names, self.dtype = cache[file_id]\n\n self.parse_dates = None\n if dtype == \"Int64_Timestamp\":\n # cached version of dtype should not change\n self.dtype = self.dtype.copy()\n for col in self._timestamp_columns:\n del self.dtype[col]\n self.parse_dates = self._timestamp_columns\n\n def time_read_csv_names_dtype(self, cache, shape, names, dtype):\n execute(\n IMPL[ASV_USE_IMPL].read_csv(\n self.filename,\n names=self.names,\n header=0,\n dtype=self.dtype,\n parse_dates=self.parse_dates,\n )\n )\n", "path": "asv_bench/benchmarks/io/csv.py"}]}
| 961 | 649 |
gh_patches_debug_8525
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-1091
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive implicit yield violation
# Bug report
<!--
Hi, thanks for submitting a bug. We appreciate that.
But, we will need some information about what's wrong to help you.
-->
## What's wrong
With common function to split iterable by chunks WPS raises `ImplicitYieldFromViolation`
<!-- Describe what is not working. Please, attach a traceback. -->
```
def chunks(value, chunk_size):
"""Yield successive n-sized chunks from l."""
for index in range(0, len(value), chunk_size):
yield value[index:index + chunk_size]
```
## How is that should be
`ImplicitYieldFromViolation` should be raised only if no other operations with iterable item.
<!-- Describe how it should work. -->
<!--
Love wemake-python-styleguide? ❤️
Do you want your bugs to be solved faster?
Please consider supporting our collective:
👉 https://opencollective.com/wemake-python-styleguide/donate
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/visitors/ast/loops.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import ast
4 from collections import defaultdict
5 from typing import ClassVar, DefaultDict, List, Optional, Union
6
7 from typing_extensions import final
8
9 from wemake_python_styleguide.compat.aliases import AssignNodes, ForNodes
10 from wemake_python_styleguide.compat.functions import get_assign_targets
11 from wemake_python_styleguide.logic import (
12 nodes,
13 operators,
14 slices,
15 source,
16 walk,
17 )
18 from wemake_python_styleguide.logic.variables import (
19 is_valid_block_variable_definition,
20 )
21 from wemake_python_styleguide.types import AnyFor, AnyNodes
22 from wemake_python_styleguide.violations.best_practices import (
23 LambdaInsideLoopViolation,
24 LoopVariableDefinitionViolation,
25 YieldInComprehensionViolation,
26 )
27 from wemake_python_styleguide.violations.complexity import (
28 TooManyForsInComprehensionViolation,
29 )
30 from wemake_python_styleguide.violations.consistency import (
31 MultilineLoopViolation,
32 MultipleIfsInComprehensionViolation,
33 UselessContinueViolation,
34 WrongLoopIterTypeViolation,
35 )
36 from wemake_python_styleguide.violations.refactoring import (
37 ImplicitItemsIteratorViolation,
38 ImplicitSumViolation,
39 ImplicitYieldFromViolation,
40 UselessLoopElseViolation,
41 )
42 from wemake_python_styleguide.visitors import base, decorators
43
44 _AnyLoop = Union[AnyFor, ast.While]
45 _AnyComprehension = Union[
46 ast.ListComp,
47 ast.DictComp,
48 ast.SetComp,
49 ast.GeneratorExp,
50 ]
51
52
53 @final
54 @decorators.alias('visit_any_comprehension', (
55 'visit_ListComp',
56 'visit_DictComp',
57 'visit_SetComp',
58 'visit_GeneratorExp',
59 ))
60 class WrongComprehensionVisitor(base.BaseNodeVisitor):
61 """Checks comprehensions for correctness."""
62
63 _max_ifs: ClassVar[int] = 1
64 _max_fors: ClassVar[int] = 2
65
66 def __init__(self, *args, **kwargs) -> None:
67 """Creates a counter for tracked metrics."""
68 super().__init__(*args, **kwargs)
69 self._fors: DefaultDict[ast.AST, int] = defaultdict(int)
70
71 def visit_comprehension(self, node: ast.comprehension) -> None:
72 """
73 Finds multiple ``if`` and ``for`` nodes inside the comprehension.
74
75 Raises:
76 MultipleIfsInComprehensionViolation
77 TooManyForsInComprehensionViolation
78
79 """
80 self._check_ifs(node)
81 self._check_fors(node)
82 self.generic_visit(node)
83
84 def visit_any_comprehension(self, node: _AnyComprehension) -> None:
85 """
86 Finds incorrect patterns inside comprehensions.
87
88 Raises:
89 YieldInComprehensionViolation
90
91 """
92 self._check_contains_yield(node)
93 self.generic_visit(node)
94
95 def _check_ifs(self, node: ast.comprehension) -> None:
96 if len(node.ifs) > self._max_ifs:
97 # We are trying to fix line number in the report,
98 # since `comprehension` does not have this property.
99 parent = nodes.get_parent(node) or node
100 self.add_violation(MultipleIfsInComprehensionViolation(parent))
101
102 def _check_fors(self, node: ast.comprehension) -> None:
103 parent = nodes.get_parent(node)
104 self._fors[parent] = len(parent.generators) # type: ignore
105
106 def _check_contains_yield(self, node: _AnyComprehension) -> None:
107 for sub_node in ast.walk(node):
108 if isinstance(sub_node, ast.Yield):
109 self.add_violation(YieldInComprehensionViolation(node))
110
111 def _post_visit(self) -> None:
112 for node, for_count in self._fors.items():
113 if for_count > self._max_fors:
114 self.add_violation(TooManyForsInComprehensionViolation(node))
115
116
117 @final
118 @decorators.alias('visit_any_loop', (
119 'visit_For',
120 'visit_While',
121 'visit_AsyncFor',
122 ))
123 class WrongLoopVisitor(base.BaseNodeVisitor):
124 """Responsible for examining loops."""
125
126 def visit_any_loop(self, node: _AnyLoop) -> None:
127 """
128 Checks ``for`` and ``while`` loops.
129
130 Raises:
131 UselessLoopElseViolation
132 LambdaInsideLoopViolation
133 MultilineLoopViolation
134
135 """
136 self._check_loop_needs_else(node)
137 self._check_lambda_inside_loop(node)
138 self._check_useless_continue(node)
139 self._check_multiline_loop(node)
140 self.generic_visit(node)
141
142 def _does_loop_contain_node( # TODO: move, reuse in annotations.py
143 self,
144 loop: Optional[_AnyLoop],
145 to_check: ast.Break,
146 ) -> bool:
147 if loop is None:
148 return False
149
150 for inner_node in ast.walk(loop):
151 # We are checking this specific node, not just any `break`:
152 if to_check is inner_node:
153 return True
154 return False
155
156 def _has_break(self, node: _AnyLoop) -> bool:
157 closest_loop = None
158
159 for subnode in ast.walk(node):
160 if isinstance(subnode, (*ForNodes, ast.While)):
161 if subnode is not node:
162 closest_loop = subnode
163
164 if isinstance(subnode, ast.Break):
165 is_nested_break = self._does_loop_contain_node(
166 closest_loop, subnode,
167 )
168 if not is_nested_break:
169 return True
170 return False
171
172 def _check_loop_needs_else(self, node: _AnyLoop) -> None:
173 if node.orelse and not self._has_break(node):
174 self.add_violation(UselessLoopElseViolation(node))
175
176 def _check_lambda_inside_loop(self, node: _AnyLoop) -> None:
177 for subnode in node.body:
178 if walk.is_contained(subnode, (ast.Lambda,)):
179 self.add_violation(LambdaInsideLoopViolation(node))
180
181 def _check_useless_continue(self, node: _AnyLoop) -> None:
182 nodes_at_line: DefaultDict[int, List[ast.AST]] = defaultdict(list)
183 for sub_node in ast.walk(node):
184 lineno = getattr(sub_node, 'lineno', None)
185 if lineno is not None:
186 nodes_at_line[lineno].append(sub_node)
187
188 last_line = nodes_at_line[sorted(nodes_at_line.keys())[-1]]
189 if any(isinstance(last, ast.Continue) for last in last_line):
190 self.add_violation(UselessContinueViolation(node))
191
192 def _check_multiline_loop(self, node: _AnyLoop) -> None:
193 start_lineno = getattr(node, 'lineno', None)
194
195 if isinstance(node, ast.While):
196 node_to_check = node.test
197 else:
198 node_to_check = node.iter
199
200 for sub_node in ast.walk(node_to_check):
201 sub_lineno = getattr(sub_node, 'lineno', None)
202 if sub_lineno is not None and sub_lineno > start_lineno:
203 self.add_violation(MultilineLoopViolation(node))
204 break
205
206
207 @final
208 @decorators.alias('visit_any_for', (
209 'visit_For',
210 'visit_AsyncFor',
211 ))
212 class WrongLoopDefinitionVisitor(base.BaseNodeVisitor):
213 """Responsible for ``for`` loops and comprehensions definitions."""
214
215 _forbidden_for_iters: ClassVar[AnyNodes] = (
216 ast.List,
217 ast.ListComp,
218 ast.Dict,
219 ast.DictComp,
220 ast.Set,
221 ast.SetComp,
222 ast.GeneratorExp,
223 ast.Num,
224 ast.NameConstant,
225 )
226
227 def visit_any_for(self, node: AnyFor) -> None:
228 """
229 Ensures that ``for`` loop definitions are correct.
230
231 Raises:
232 LoopVariableDefinitionViolation
233 WrongLoopIterTypeViolation
234 ImplicitSumViolation
235
236 """
237 self._check_variable_definitions(node.target)
238 self._check_explicit_iter_type(node)
239 self._check_implicit_sum(node)
240 self._check_implicit_yield_from(node)
241 self.generic_visit(node)
242
243 def visit_comprehension(self, node: ast.comprehension) -> None:
244 """
245 Ensures that comprehension definitions are correct.
246
247 Raises:
248 LoopVariableDefinitionViolation
249
250 """
251 self._check_variable_definitions(node.target)
252 self._check_explicit_iter_type(node)
253 self.generic_visit(node)
254
255 def _check_variable_definitions(self, node: ast.AST) -> None:
256 if not is_valid_block_variable_definition(node):
257 self.add_violation(LoopVariableDefinitionViolation(node))
258
259 def _check_explicit_iter_type(
260 self,
261 node: Union[AnyFor, ast.comprehension],
262 ) -> None:
263 node_iter = operators.unwrap_unary_node(node.iter)
264 is_wrong = isinstance(node_iter, self._forbidden_for_iters)
265 is_empty = isinstance(node_iter, ast.Tuple) and not node_iter.elts
266 if is_wrong or is_empty:
267 self.add_violation(WrongLoopIterTypeViolation(node_iter))
268
269 def _check_implicit_sum(self, node: AnyFor) -> None:
270 is_implicit_sum = (
271 len(node.body) == 1 and
272 isinstance(node.body[0], ast.AugAssign) and
273 isinstance(node.body[0].op, ast.Add) and
274 isinstance(node.body[0].target, ast.Name)
275 )
276 if is_implicit_sum:
277 self.add_violation(ImplicitSumViolation(node))
278
279 def _check_implicit_yield_from(self, node: AnyFor) -> None:
280 is_implicit_yield_from = (
281 len(node.body) == 1 and
282 isinstance(node.body[0], ast.Expr) and
283 isinstance(node.body[0].value, ast.Yield)
284 )
285 if is_implicit_yield_from:
286 self.add_violation(ImplicitYieldFromViolation(node))
287
288
289 @final
290 class SyncForLoopVisitor(base.BaseNodeVisitor):
291 """We use this visitor to check just sync ``for`` loops."""
292
293 def visit_For(self, node: ast.For) -> None:
294 """
295 Checks for hidden patterns in sync loops.
296
297 Raises:
298 ImplicitItemsIteratorViolation
299
300 """
301 self._check_implicit_items(node)
302 self.generic_visit(node)
303
304 def _check_implicit_items(self, node: ast.For) -> None:
305 iterable = source.node_to_string(node.iter)
306 target = source.node_to_string(node.target)
307
308 for sub in ast.walk(node):
309 if isinstance(sub, ast.Subscript):
310 if self._is_assigned_target(sub):
311 continue
312
313 if slices.is_same_slice(iterable, target, sub):
314 self.add_violation(ImplicitItemsIteratorViolation(node))
315 break
316
317 def _is_assigned_target(self, node: ast.Subscript) -> bool:
318 parent = nodes.get_parent(node)
319 if not isinstance(parent, (*AssignNodes, ast.AugAssign)):
320 return False
321 return any(node == target for target in get_assign_targets(parent))
322
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wemake_python_styleguide/visitors/ast/loops.py b/wemake_python_styleguide/visitors/ast/loops.py
--- a/wemake_python_styleguide/visitors/ast/loops.py
+++ b/wemake_python_styleguide/visitors/ast/loops.py
@@ -277,6 +277,9 @@
self.add_violation(ImplicitSumViolation(node))
def _check_implicit_yield_from(self, node: AnyFor) -> None:
+ if isinstance(nodes.get_context(node), ast.AsyncFunctionDef):
+ # Python does not support 'yield from' inside async functions
+ return
is_implicit_yield_from = (
len(node.body) == 1 and
isinstance(node.body[0], ast.Expr) and
|
{"golden_diff": "diff --git a/wemake_python_styleguide/visitors/ast/loops.py b/wemake_python_styleguide/visitors/ast/loops.py\n--- a/wemake_python_styleguide/visitors/ast/loops.py\n+++ b/wemake_python_styleguide/visitors/ast/loops.py\n@@ -277,6 +277,9 @@\n self.add_violation(ImplicitSumViolation(node))\n \n def _check_implicit_yield_from(self, node: AnyFor) -> None:\n+ if isinstance(nodes.get_context(node), ast.AsyncFunctionDef):\n+ # Python does not support 'yield from' inside async functions\n+ return\n is_implicit_yield_from = (\n len(node.body) == 1 and\n isinstance(node.body[0], ast.Expr) and\n", "issue": "False positive implicit yield violation\n# Bug report\r\n\r\n<!--\r\nHi, thanks for submitting a bug. We appreciate that.\r\n\r\nBut, we will need some information about what's wrong to help you.\r\n-->\r\n\r\n## What's wrong\r\nWith common function to split iterable by chunks WPS raises `ImplicitYieldFromViolation`\r\n<!-- Describe what is not working. Please, attach a traceback. -->\r\n```\r\ndef chunks(value, chunk_size):\r\n \"\"\"Yield successive n-sized chunks from l.\"\"\"\r\n for index in range(0, len(value), chunk_size):\r\n yield value[index:index + chunk_size]\r\n```\r\n\r\n## How is that should be\r\n`ImplicitYieldFromViolation` should be raised only if no other operations with iterable item.\r\n<!-- Describe how it should work. -->\r\n\r\n<!--\r\n\r\n Love wemake-python-styleguide? \u2764\ufe0f\r\n Do you want your bugs to be solved faster?\r\n\r\n Please consider supporting our collective:\r\n\ud83d\udc49 https://opencollective.com/wemake-python-styleguide/donate\r\n\r\n-->\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport ast\nfrom collections import defaultdict\nfrom typing import ClassVar, DefaultDict, List, Optional, Union\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.compat.aliases import AssignNodes, ForNodes\nfrom wemake_python_styleguide.compat.functions import get_assign_targets\nfrom wemake_python_styleguide.logic import (\n nodes,\n operators,\n slices,\n source,\n walk,\n)\nfrom wemake_python_styleguide.logic.variables import (\n is_valid_block_variable_definition,\n)\nfrom wemake_python_styleguide.types import AnyFor, AnyNodes\nfrom wemake_python_styleguide.violations.best_practices import (\n LambdaInsideLoopViolation,\n LoopVariableDefinitionViolation,\n YieldInComprehensionViolation,\n)\nfrom wemake_python_styleguide.violations.complexity import (\n TooManyForsInComprehensionViolation,\n)\nfrom wemake_python_styleguide.violations.consistency import (\n MultilineLoopViolation,\n MultipleIfsInComprehensionViolation,\n UselessContinueViolation,\n WrongLoopIterTypeViolation,\n)\nfrom wemake_python_styleguide.violations.refactoring import (\n ImplicitItemsIteratorViolation,\n ImplicitSumViolation,\n ImplicitYieldFromViolation,\n UselessLoopElseViolation,\n)\nfrom wemake_python_styleguide.visitors import base, decorators\n\n_AnyLoop = Union[AnyFor, ast.While]\n_AnyComprehension = Union[\n ast.ListComp,\n ast.DictComp,\n ast.SetComp,\n ast.GeneratorExp,\n]\n\n\n@final\[email protected]('visit_any_comprehension', (\n 'visit_ListComp',\n 'visit_DictComp',\n 'visit_SetComp',\n 'visit_GeneratorExp',\n))\nclass WrongComprehensionVisitor(base.BaseNodeVisitor):\n \"\"\"Checks comprehensions for correctness.\"\"\"\n\n _max_ifs: ClassVar[int] = 1\n _max_fors: ClassVar[int] = 2\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Creates a counter for tracked metrics.\"\"\"\n super().__init__(*args, **kwargs)\n self._fors: DefaultDict[ast.AST, int] = defaultdict(int)\n\n def visit_comprehension(self, node: ast.comprehension) -> None:\n \"\"\"\n Finds multiple ``if`` and ``for`` nodes inside the comprehension.\n\n Raises:\n MultipleIfsInComprehensionViolation\n TooManyForsInComprehensionViolation\n\n \"\"\"\n self._check_ifs(node)\n self._check_fors(node)\n self.generic_visit(node)\n\n def visit_any_comprehension(self, node: _AnyComprehension) -> None:\n \"\"\"\n Finds incorrect patterns inside comprehensions.\n\n Raises:\n YieldInComprehensionViolation\n\n \"\"\"\n self._check_contains_yield(node)\n self.generic_visit(node)\n\n def _check_ifs(self, node: ast.comprehension) -> None:\n if len(node.ifs) > self._max_ifs:\n # We are trying to fix line number in the report,\n # since `comprehension` does not have this property.\n parent = nodes.get_parent(node) or node\n self.add_violation(MultipleIfsInComprehensionViolation(parent))\n\n def _check_fors(self, node: ast.comprehension) -> None:\n parent = nodes.get_parent(node)\n self._fors[parent] = len(parent.generators) # type: ignore\n\n def _check_contains_yield(self, node: _AnyComprehension) -> None:\n for sub_node in ast.walk(node):\n if isinstance(sub_node, ast.Yield):\n self.add_violation(YieldInComprehensionViolation(node))\n\n def _post_visit(self) -> None:\n for node, for_count in self._fors.items():\n if for_count > self._max_fors:\n self.add_violation(TooManyForsInComprehensionViolation(node))\n\n\n@final\[email protected]('visit_any_loop', (\n 'visit_For',\n 'visit_While',\n 'visit_AsyncFor',\n))\nclass WrongLoopVisitor(base.BaseNodeVisitor):\n \"\"\"Responsible for examining loops.\"\"\"\n\n def visit_any_loop(self, node: _AnyLoop) -> None:\n \"\"\"\n Checks ``for`` and ``while`` loops.\n\n Raises:\n UselessLoopElseViolation\n LambdaInsideLoopViolation\n MultilineLoopViolation\n\n \"\"\"\n self._check_loop_needs_else(node)\n self._check_lambda_inside_loop(node)\n self._check_useless_continue(node)\n self._check_multiline_loop(node)\n self.generic_visit(node)\n\n def _does_loop_contain_node( # TODO: move, reuse in annotations.py\n self,\n loop: Optional[_AnyLoop],\n to_check: ast.Break,\n ) -> bool:\n if loop is None:\n return False\n\n for inner_node in ast.walk(loop):\n # We are checking this specific node, not just any `break`:\n if to_check is inner_node:\n return True\n return False\n\n def _has_break(self, node: _AnyLoop) -> bool:\n closest_loop = None\n\n for subnode in ast.walk(node):\n if isinstance(subnode, (*ForNodes, ast.While)):\n if subnode is not node:\n closest_loop = subnode\n\n if isinstance(subnode, ast.Break):\n is_nested_break = self._does_loop_contain_node(\n closest_loop, subnode,\n )\n if not is_nested_break:\n return True\n return False\n\n def _check_loop_needs_else(self, node: _AnyLoop) -> None:\n if node.orelse and not self._has_break(node):\n self.add_violation(UselessLoopElseViolation(node))\n\n def _check_lambda_inside_loop(self, node: _AnyLoop) -> None:\n for subnode in node.body:\n if walk.is_contained(subnode, (ast.Lambda,)):\n self.add_violation(LambdaInsideLoopViolation(node))\n\n def _check_useless_continue(self, node: _AnyLoop) -> None:\n nodes_at_line: DefaultDict[int, List[ast.AST]] = defaultdict(list)\n for sub_node in ast.walk(node):\n lineno = getattr(sub_node, 'lineno', None)\n if lineno is not None:\n nodes_at_line[lineno].append(sub_node)\n\n last_line = nodes_at_line[sorted(nodes_at_line.keys())[-1]]\n if any(isinstance(last, ast.Continue) for last in last_line):\n self.add_violation(UselessContinueViolation(node))\n\n def _check_multiline_loop(self, node: _AnyLoop) -> None:\n start_lineno = getattr(node, 'lineno', None)\n\n if isinstance(node, ast.While):\n node_to_check = node.test\n else:\n node_to_check = node.iter\n\n for sub_node in ast.walk(node_to_check):\n sub_lineno = getattr(sub_node, 'lineno', None)\n if sub_lineno is not None and sub_lineno > start_lineno:\n self.add_violation(MultilineLoopViolation(node))\n break\n\n\n@final\[email protected]('visit_any_for', (\n 'visit_For',\n 'visit_AsyncFor',\n))\nclass WrongLoopDefinitionVisitor(base.BaseNodeVisitor):\n \"\"\"Responsible for ``for`` loops and comprehensions definitions.\"\"\"\n\n _forbidden_for_iters: ClassVar[AnyNodes] = (\n ast.List,\n ast.ListComp,\n ast.Dict,\n ast.DictComp,\n ast.Set,\n ast.SetComp,\n ast.GeneratorExp,\n ast.Num,\n ast.NameConstant,\n )\n\n def visit_any_for(self, node: AnyFor) -> None:\n \"\"\"\n Ensures that ``for`` loop definitions are correct.\n\n Raises:\n LoopVariableDefinitionViolation\n WrongLoopIterTypeViolation\n ImplicitSumViolation\n\n \"\"\"\n self._check_variable_definitions(node.target)\n self._check_explicit_iter_type(node)\n self._check_implicit_sum(node)\n self._check_implicit_yield_from(node)\n self.generic_visit(node)\n\n def visit_comprehension(self, node: ast.comprehension) -> None:\n \"\"\"\n Ensures that comprehension definitions are correct.\n\n Raises:\n LoopVariableDefinitionViolation\n\n \"\"\"\n self._check_variable_definitions(node.target)\n self._check_explicit_iter_type(node)\n self.generic_visit(node)\n\n def _check_variable_definitions(self, node: ast.AST) -> None:\n if not is_valid_block_variable_definition(node):\n self.add_violation(LoopVariableDefinitionViolation(node))\n\n def _check_explicit_iter_type(\n self,\n node: Union[AnyFor, ast.comprehension],\n ) -> None:\n node_iter = operators.unwrap_unary_node(node.iter)\n is_wrong = isinstance(node_iter, self._forbidden_for_iters)\n is_empty = isinstance(node_iter, ast.Tuple) and not node_iter.elts\n if is_wrong or is_empty:\n self.add_violation(WrongLoopIterTypeViolation(node_iter))\n\n def _check_implicit_sum(self, node: AnyFor) -> None:\n is_implicit_sum = (\n len(node.body) == 1 and\n isinstance(node.body[0], ast.AugAssign) and\n isinstance(node.body[0].op, ast.Add) and\n isinstance(node.body[0].target, ast.Name)\n )\n if is_implicit_sum:\n self.add_violation(ImplicitSumViolation(node))\n\n def _check_implicit_yield_from(self, node: AnyFor) -> None:\n is_implicit_yield_from = (\n len(node.body) == 1 and\n isinstance(node.body[0], ast.Expr) and\n isinstance(node.body[0].value, ast.Yield)\n )\n if is_implicit_yield_from:\n self.add_violation(ImplicitYieldFromViolation(node))\n\n\n@final\nclass SyncForLoopVisitor(base.BaseNodeVisitor):\n \"\"\"We use this visitor to check just sync ``for`` loops.\"\"\"\n\n def visit_For(self, node: ast.For) -> None:\n \"\"\"\n Checks for hidden patterns in sync loops.\n\n Raises:\n ImplicitItemsIteratorViolation\n\n \"\"\"\n self._check_implicit_items(node)\n self.generic_visit(node)\n\n def _check_implicit_items(self, node: ast.For) -> None:\n iterable = source.node_to_string(node.iter)\n target = source.node_to_string(node.target)\n\n for sub in ast.walk(node):\n if isinstance(sub, ast.Subscript):\n if self._is_assigned_target(sub):\n continue\n\n if slices.is_same_slice(iterable, target, sub):\n self.add_violation(ImplicitItemsIteratorViolation(node))\n break\n\n def _is_assigned_target(self, node: ast.Subscript) -> bool:\n parent = nodes.get_parent(node)\n if not isinstance(parent, (*AssignNodes, ast.AugAssign)):\n return False\n return any(node == target for target in get_assign_targets(parent))\n", "path": "wemake_python_styleguide/visitors/ast/loops.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport ast\nfrom collections import defaultdict\nfrom typing import ClassVar, DefaultDict, List, Optional, Union\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.compat.aliases import AssignNodes, ForNodes\nfrom wemake_python_styleguide.compat.functions import get_assign_targets\nfrom wemake_python_styleguide.logic import (\n nodes,\n operators,\n slices,\n source,\n walk,\n)\nfrom wemake_python_styleguide.logic.variables import (\n is_valid_block_variable_definition,\n)\nfrom wemake_python_styleguide.types import AnyFor, AnyNodes\nfrom wemake_python_styleguide.violations.best_practices import (\n LambdaInsideLoopViolation,\n LoopVariableDefinitionViolation,\n YieldInComprehensionViolation,\n)\nfrom wemake_python_styleguide.violations.complexity import (\n TooManyForsInComprehensionViolation,\n)\nfrom wemake_python_styleguide.violations.consistency import (\n MultilineLoopViolation,\n MultipleIfsInComprehensionViolation,\n UselessContinueViolation,\n WrongLoopIterTypeViolation,\n)\nfrom wemake_python_styleguide.violations.refactoring import (\n ImplicitItemsIteratorViolation,\n ImplicitSumViolation,\n ImplicitYieldFromViolation,\n UselessLoopElseViolation,\n)\nfrom wemake_python_styleguide.visitors import base, decorators\n\n_AnyLoop = Union[AnyFor, ast.While]\n_AnyComprehension = Union[\n ast.ListComp,\n ast.DictComp,\n ast.SetComp,\n ast.GeneratorExp,\n]\n\n\n@final\[email protected]('visit_any_comprehension', (\n 'visit_ListComp',\n 'visit_DictComp',\n 'visit_SetComp',\n 'visit_GeneratorExp',\n))\nclass WrongComprehensionVisitor(base.BaseNodeVisitor):\n \"\"\"Checks comprehensions for correctness.\"\"\"\n\n _max_ifs: ClassVar[int] = 1\n _max_fors: ClassVar[int] = 2\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Creates a counter for tracked metrics.\"\"\"\n super().__init__(*args, **kwargs)\n self._fors: DefaultDict[ast.AST, int] = defaultdict(int)\n\n def visit_comprehension(self, node: ast.comprehension) -> None:\n \"\"\"\n Finds multiple ``if`` and ``for`` nodes inside the comprehension.\n\n Raises:\n MultipleIfsInComprehensionViolation\n TooManyForsInComprehensionViolation\n\n \"\"\"\n self._check_ifs(node)\n self._check_fors(node)\n self.generic_visit(node)\n\n def visit_any_comprehension(self, node: _AnyComprehension) -> None:\n \"\"\"\n Finds incorrect patterns inside comprehensions.\n\n Raises:\n YieldInComprehensionViolation\n\n \"\"\"\n self._check_contains_yield(node)\n self.generic_visit(node)\n\n def _check_ifs(self, node: ast.comprehension) -> None:\n if len(node.ifs) > self._max_ifs:\n # We are trying to fix line number in the report,\n # since `comprehension` does not have this property.\n parent = nodes.get_parent(node) or node\n self.add_violation(MultipleIfsInComprehensionViolation(parent))\n\n def _check_fors(self, node: ast.comprehension) -> None:\n parent = nodes.get_parent(node)\n self._fors[parent] = len(parent.generators) # type: ignore\n\n def _check_contains_yield(self, node: _AnyComprehension) -> None:\n for sub_node in ast.walk(node):\n if isinstance(sub_node, ast.Yield):\n self.add_violation(YieldInComprehensionViolation(node))\n\n def _post_visit(self) -> None:\n for node, for_count in self._fors.items():\n if for_count > self._max_fors:\n self.add_violation(TooManyForsInComprehensionViolation(node))\n\n\n@final\[email protected]('visit_any_loop', (\n 'visit_For',\n 'visit_While',\n 'visit_AsyncFor',\n))\nclass WrongLoopVisitor(base.BaseNodeVisitor):\n \"\"\"Responsible for examining loops.\"\"\"\n\n def visit_any_loop(self, node: _AnyLoop) -> None:\n \"\"\"\n Checks ``for`` and ``while`` loops.\n\n Raises:\n UselessLoopElseViolation\n LambdaInsideLoopViolation\n MultilineLoopViolation\n\n \"\"\"\n self._check_loop_needs_else(node)\n self._check_lambda_inside_loop(node)\n self._check_useless_continue(node)\n self._check_multiline_loop(node)\n self.generic_visit(node)\n\n def _does_loop_contain_node( # TODO: move, reuse in annotations.py\n self,\n loop: Optional[_AnyLoop],\n to_check: ast.Break,\n ) -> bool:\n if loop is None:\n return False\n\n for inner_node in ast.walk(loop):\n # We are checking this specific node, not just any `break`:\n if to_check is inner_node:\n return True\n return False\n\n def _has_break(self, node: _AnyLoop) -> bool:\n closest_loop = None\n\n for subnode in ast.walk(node):\n if isinstance(subnode, (*ForNodes, ast.While)):\n if subnode is not node:\n closest_loop = subnode\n\n if isinstance(subnode, ast.Break):\n is_nested_break = self._does_loop_contain_node(\n closest_loop, subnode,\n )\n if not is_nested_break:\n return True\n return False\n\n def _check_loop_needs_else(self, node: _AnyLoop) -> None:\n if node.orelse and not self._has_break(node):\n self.add_violation(UselessLoopElseViolation(node))\n\n def _check_lambda_inside_loop(self, node: _AnyLoop) -> None:\n for subnode in node.body:\n if walk.is_contained(subnode, (ast.Lambda,)):\n self.add_violation(LambdaInsideLoopViolation(node))\n\n def _check_useless_continue(self, node: _AnyLoop) -> None:\n nodes_at_line: DefaultDict[int, List[ast.AST]] = defaultdict(list)\n for sub_node in ast.walk(node):\n lineno = getattr(sub_node, 'lineno', None)\n if lineno is not None:\n nodes_at_line[lineno].append(sub_node)\n\n last_line = nodes_at_line[sorted(nodes_at_line.keys())[-1]]\n if any(isinstance(last, ast.Continue) for last in last_line):\n self.add_violation(UselessContinueViolation(node))\n\n def _check_multiline_loop(self, node: _AnyLoop) -> None:\n start_lineno = getattr(node, 'lineno', None)\n\n if isinstance(node, ast.While):\n node_to_check = node.test\n else:\n node_to_check = node.iter\n\n for sub_node in ast.walk(node_to_check):\n sub_lineno = getattr(sub_node, 'lineno', None)\n if sub_lineno is not None and sub_lineno > start_lineno:\n self.add_violation(MultilineLoopViolation(node))\n break\n\n\n@final\[email protected]('visit_any_for', (\n 'visit_For',\n 'visit_AsyncFor',\n))\nclass WrongLoopDefinitionVisitor(base.BaseNodeVisitor):\n \"\"\"Responsible for ``for`` loops and comprehensions definitions.\"\"\"\n\n _forbidden_for_iters: ClassVar[AnyNodes] = (\n ast.List,\n ast.ListComp,\n ast.Dict,\n ast.DictComp,\n ast.Set,\n ast.SetComp,\n ast.GeneratorExp,\n ast.Num,\n ast.NameConstant,\n )\n\n def visit_any_for(self, node: AnyFor) -> None:\n \"\"\"\n Ensures that ``for`` loop definitions are correct.\n\n Raises:\n LoopVariableDefinitionViolation\n WrongLoopIterTypeViolation\n ImplicitSumViolation\n\n \"\"\"\n self._check_variable_definitions(node.target)\n self._check_explicit_iter_type(node)\n self._check_implicit_sum(node)\n self._check_implicit_yield_from(node)\n self.generic_visit(node)\n\n def visit_comprehension(self, node: ast.comprehension) -> None:\n \"\"\"\n Ensures that comprehension definitions are correct.\n\n Raises:\n LoopVariableDefinitionViolation\n\n \"\"\"\n self._check_variable_definitions(node.target)\n self._check_explicit_iter_type(node)\n self.generic_visit(node)\n\n def _check_variable_definitions(self, node: ast.AST) -> None:\n if not is_valid_block_variable_definition(node):\n self.add_violation(LoopVariableDefinitionViolation(node))\n\n def _check_explicit_iter_type(\n self,\n node: Union[AnyFor, ast.comprehension],\n ) -> None:\n node_iter = operators.unwrap_unary_node(node.iter)\n is_wrong = isinstance(node_iter, self._forbidden_for_iters)\n is_empty = isinstance(node_iter, ast.Tuple) and not node_iter.elts\n if is_wrong or is_empty:\n self.add_violation(WrongLoopIterTypeViolation(node_iter))\n\n def _check_implicit_sum(self, node: AnyFor) -> None:\n is_implicit_sum = (\n len(node.body) == 1 and\n isinstance(node.body[0], ast.AugAssign) and\n isinstance(node.body[0].op, ast.Add) and\n isinstance(node.body[0].target, ast.Name)\n )\n if is_implicit_sum:\n self.add_violation(ImplicitSumViolation(node))\n\n def _check_implicit_yield_from(self, node: AnyFor) -> None:\n if isinstance(nodes.get_context(node), ast.AsyncFunctionDef):\n # Python does not support 'yield from' inside async functions\n return\n is_implicit_yield_from = (\n len(node.body) == 1 and\n isinstance(node.body[0], ast.Expr) and\n isinstance(node.body[0].value, ast.Yield)\n )\n if is_implicit_yield_from:\n self.add_violation(ImplicitYieldFromViolation(node))\n\n\n@final\nclass SyncForLoopVisitor(base.BaseNodeVisitor):\n \"\"\"We use this visitor to check just sync ``for`` loops.\"\"\"\n\n def visit_For(self, node: ast.For) -> None:\n \"\"\"\n Checks for hidden patterns in sync loops.\n\n Raises:\n ImplicitItemsIteratorViolation\n\n \"\"\"\n self._check_implicit_items(node)\n self.generic_visit(node)\n\n def _check_implicit_items(self, node: ast.For) -> None:\n iterable = source.node_to_string(node.iter)\n target = source.node_to_string(node.target)\n\n for sub in ast.walk(node):\n if isinstance(sub, ast.Subscript):\n if self._is_assigned_target(sub):\n continue\n\n if slices.is_same_slice(iterable, target, sub):\n self.add_violation(ImplicitItemsIteratorViolation(node))\n break\n\n def _is_assigned_target(self, node: ast.Subscript) -> bool:\n parent = nodes.get_parent(node)\n if not isinstance(parent, (*AssignNodes, ast.AugAssign)):\n return False\n return any(node == target for target in get_assign_targets(parent))\n", "path": "wemake_python_styleguide/visitors/ast/loops.py"}]}
| 3,745 | 177 |
gh_patches_debug_2618
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-4064
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a python_requires to our setup.py
cc: @dstufft
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 from __future__ import absolute_import, division, print_function
8
9 import os
10 import platform
11 import subprocess
12 import sys
13 from distutils.command.build import build
14
15 import pkg_resources
16
17 import setuptools
18 from setuptools import find_packages, setup
19 from setuptools.command.install import install
20 from setuptools.command.test import test
21
22
23 if (
24 pkg_resources.parse_version(setuptools.__version__) <
25 pkg_resources.parse_version("18.5")
26 ):
27 raise RuntimeError(
28 "cryptography requires setuptools 18.5 or newer, please upgrade to a "
29 "newer version of setuptools"
30 )
31
32 base_dir = os.path.dirname(__file__)
33 src_dir = os.path.join(base_dir, "src")
34
35 # When executing the setup.py, we need to be able to import ourselves, this
36 # means that we need to add the src/ directory to the sys.path.
37 sys.path.insert(0, src_dir)
38
39 about = {}
40 with open(os.path.join(src_dir, "cryptography", "__about__.py")) as f:
41 exec(f.read(), about)
42
43
44 VECTORS_DEPENDENCY = "cryptography_vectors=={0}".format(about['__version__'])
45
46 setup_requirements = []
47
48 if platform.python_implementation() == "PyPy":
49 if sys.pypy_version_info < (5, 3):
50 raise RuntimeError(
51 "cryptography 1.9 is not compatible with PyPy < 5.3. Please "
52 "upgrade PyPy to use this library."
53 )
54 else:
55 setup_requirements.append("cffi>=1.7")
56
57 test_requirements = [
58 "pytest>=3.2.1,!=3.3.0",
59 "pretend",
60 "iso8601",
61 "pytz",
62 "hypothesis>=1.11.4",
63 ]
64
65
66 # If there's no vectors locally that probably means we are in a tarball and
67 # need to go and get the matching vectors package from PyPi
68 if not os.path.exists(os.path.join(base_dir, "vectors/setup.py")):
69 test_requirements.append(VECTORS_DEPENDENCY)
70
71
72 class PyTest(test):
73 def finalize_options(self):
74 test.finalize_options(self)
75 self.test_args = []
76 self.test_suite = True
77
78 # This means there's a vectors/ folder with the package in here.
79 # cd into it, install the vectors package and then refresh sys.path
80 if VECTORS_DEPENDENCY not in test_requirements:
81 subprocess.check_call(
82 [sys.executable, "setup.py", "install"], cwd="vectors"
83 )
84 pkg_resources.get_distribution("cryptography_vectors").activate()
85
86 def run_tests(self):
87 # Import here because in module scope the eggs are not loaded.
88 import pytest
89 test_args = [os.path.join(base_dir, "tests")]
90 errno = pytest.main(test_args)
91 sys.exit(errno)
92
93
94 def keywords_with_side_effects(argv):
95 """
96 Get a dictionary with setup keywords that (can) have side effects.
97
98 :param argv: A list of strings with command line arguments.
99 :returns: A dictionary with keyword arguments for the ``setup()`` function.
100
101 This setup.py script uses the setuptools 'setup_requires' feature because
102 this is required by the cffi package to compile extension modules. The
103 purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi
104 build process as a result of setup.py invocations that don't need the cffi
105 module to be built (setup.py serves the dual purpose of exposing package
106 metadata).
107
108 All of the options listed by ``python setup.py --help`` that print
109 information should be recognized here. The commands ``clean``,
110 ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.
111 Any combination of these options and commands is also supported.
112
113 This function was originally based on the `setup.py script`_ of SciPy (see
114 also the discussion in `pip issue #25`_).
115
116 .. _pip issue #25: https://github.com/pypa/pip/issues/25
117 .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py
118 """
119 no_setup_requires_arguments = (
120 '-h', '--help',
121 '-n', '--dry-run',
122 '-q', '--quiet',
123 '-v', '--verbose',
124 '-V', '--version',
125 '--author',
126 '--author-email',
127 '--classifiers',
128 '--contact',
129 '--contact-email',
130 '--description',
131 '--egg-base',
132 '--fullname',
133 '--help-commands',
134 '--keywords',
135 '--licence',
136 '--license',
137 '--long-description',
138 '--maintainer',
139 '--maintainer-email',
140 '--name',
141 '--no-user-cfg',
142 '--obsoletes',
143 '--platforms',
144 '--provides',
145 '--requires',
146 '--url',
147 'clean',
148 'egg_info',
149 'register',
150 'sdist',
151 'upload',
152 )
153
154 def is_short_option(argument):
155 """Check whether a command line argument is a short option."""
156 return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'
157
158 def expand_short_options(argument):
159 """Expand combined short options into canonical short options."""
160 return ('-' + char for char in argument[1:])
161
162 def argument_without_setup_requirements(argv, i):
163 """Check whether a command line argument needs setup requirements."""
164 if argv[i] in no_setup_requires_arguments:
165 # Simple case: An argument which is either an option or a command
166 # which doesn't need setup requirements.
167 return True
168 elif (is_short_option(argv[i]) and
169 all(option in no_setup_requires_arguments
170 for option in expand_short_options(argv[i]))):
171 # Not so simple case: Combined short options none of which need
172 # setup requirements.
173 return True
174 elif argv[i - 1:i] == ['--egg-base']:
175 # Tricky case: --egg-info takes an argument which should not make
176 # us use setup_requires (defeating the purpose of this code).
177 return True
178 else:
179 return False
180
181 if all(argument_without_setup_requirements(argv, i)
182 for i in range(1, len(argv))):
183 return {
184 "cmdclass": {
185 "build": DummyBuild,
186 "install": DummyInstall,
187 "test": DummyPyTest,
188 }
189 }
190 else:
191 cffi_modules = [
192 "src/_cffi_src/build_openssl.py:ffi",
193 "src/_cffi_src/build_constant_time.py:ffi",
194 "src/_cffi_src/build_padding.py:ffi",
195 ]
196
197 return {
198 "setup_requires": setup_requirements,
199 "cmdclass": {
200 "test": PyTest,
201 },
202 "cffi_modules": cffi_modules
203 }
204
205
206 setup_requires_error = ("Requested setup command that needs 'setup_requires' "
207 "while command line arguments implied a side effect "
208 "free command or option.")
209
210
211 class DummyBuild(build):
212 """
213 This class makes it very obvious when ``keywords_with_side_effects()`` has
214 incorrectly interpreted the command line arguments to ``setup.py build`` as
215 one of the 'side effect free' commands or options.
216 """
217
218 def run(self):
219 raise RuntimeError(setup_requires_error)
220
221
222 class DummyInstall(install):
223 """
224 This class makes it very obvious when ``keywords_with_side_effects()`` has
225 incorrectly interpreted the command line arguments to ``setup.py install``
226 as one of the 'side effect free' commands or options.
227 """
228
229 def run(self):
230 raise RuntimeError(setup_requires_error)
231
232
233 class DummyPyTest(test):
234 """
235 This class makes it very obvious when ``keywords_with_side_effects()`` has
236 incorrectly interpreted the command line arguments to ``setup.py test`` as
237 one of the 'side effect free' commands or options.
238 """
239
240 def run_tests(self):
241 raise RuntimeError(setup_requires_error)
242
243
244 with open(os.path.join(base_dir, "README.rst")) as f:
245 long_description = f.read()
246
247
248 setup(
249 name=about["__title__"],
250 version=about["__version__"],
251
252 description=about["__summary__"],
253 long_description=long_description,
254 license=about["__license__"],
255 url=about["__uri__"],
256
257 author=about["__author__"],
258 author_email=about["__email__"],
259
260 classifiers=[
261 "Intended Audience :: Developers",
262 "License :: OSI Approved :: Apache Software License",
263 "License :: OSI Approved :: BSD License",
264 "Natural Language :: English",
265 "Operating System :: MacOS :: MacOS X",
266 "Operating System :: POSIX",
267 "Operating System :: POSIX :: BSD",
268 "Operating System :: POSIX :: Linux",
269 "Operating System :: Microsoft :: Windows",
270 "Programming Language :: Python",
271 "Programming Language :: Python :: 2",
272 "Programming Language :: Python :: 2.7",
273 "Programming Language :: Python :: 3",
274 "Programming Language :: Python :: 3.4",
275 "Programming Language :: Python :: 3.5",
276 "Programming Language :: Python :: 3.6",
277 "Programming Language :: Python :: Implementation :: CPython",
278 "Programming Language :: Python :: Implementation :: PyPy",
279 "Topic :: Security :: Cryptography",
280 ],
281
282 package_dir={"": "src"},
283 packages=find_packages(where="src", exclude=["_cffi_src", "_cffi_src.*"]),
284 include_package_data=True,
285
286 install_requires=[
287 "idna >= 2.1",
288 "asn1crypto >= 0.21.0",
289 "six >= 1.4.1",
290 ],
291 tests_require=test_requirements,
292 extras_require={
293 ":python_version < '3'": ["enum34", "ipaddress"],
294 ":platform_python_implementation != 'PyPy'": ["cffi >= 1.7"],
295
296 "test": test_requirements,
297 "docstest": [
298 "doc8",
299 "pyenchant >= 1.6.11",
300 "readme_renderer >= 16.0",
301 "sphinx >= 1.6.5",
302 "sphinx_rtd_theme",
303 "sphinxcontrib-spelling >= 4.0.1",
304 ],
305 "pep8test": [
306 "flake8",
307 "flake8-import-order",
308 "pep8-naming",
309 ],
310 },
311
312 # for cffi
313 zip_safe=False,
314 ext_package="cryptography.hazmat.bindings",
315 **keywords_with_side_effects(sys.argv)
316 )
317
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -283,6 +283,8 @@
packages=find_packages(where="src", exclude=["_cffi_src", "_cffi_src.*"]),
include_package_data=True,
+ python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
+
install_requires=[
"idna >= 2.1",
"asn1crypto >= 0.21.0",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -283,6 +283,8 @@\n packages=find_packages(where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"]),\n include_package_data=True,\n \n+ python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n+\n install_requires=[\n \"idna >= 2.1\",\n \"asn1crypto >= 0.21.0\",\n", "issue": "Add a python_requires to our setup.py\ncc: @dstufft \n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nimport setuptools\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nif (\n pkg_resources.parse_version(setuptools.__version__) <\n pkg_resources.parse_version(\"18.5\")\n):\n raise RuntimeError(\n \"cryptography requires setuptools 18.5 or newer, please upgrade to a \"\n \"newer version of setuptools\"\n )\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nsetup_requirements = []\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (5, 3):\n raise RuntimeError(\n \"cryptography 1.9 is not compatible with PyPy < 5.3. Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n setup_requirements.append(\"cffi>=1.7\")\n\ntest_requirements = [\n \"pytest>=3.2.1,!=3.3.0\",\n \"pretend\",\n \"iso8601\",\n \"pytz\",\n \"hypothesis>=1.11.4\",\n]\n\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"]),\n include_package_data=True,\n\n install_requires=[\n \"idna >= 2.1\",\n \"asn1crypto >= 0.21.0\",\n \"six >= 1.4.1\",\n ],\n tests_require=test_requirements,\n extras_require={\n \":python_version < '3'\": [\"enum34\", \"ipaddress\"],\n \":platform_python_implementation != 'PyPy'\": [\"cffi >= 1.7\"],\n\n \"test\": test_requirements,\n \"docstest\": [\n \"doc8\",\n \"pyenchant >= 1.6.11\",\n \"readme_renderer >= 16.0\",\n \"sphinx >= 1.6.5\",\n \"sphinx_rtd_theme\",\n \"sphinxcontrib-spelling >= 4.0.1\",\n ],\n \"pep8test\": [\n \"flake8\",\n \"flake8-import-order\",\n \"pep8-naming\",\n ],\n },\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nimport setuptools\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nif (\n pkg_resources.parse_version(setuptools.__version__) <\n pkg_resources.parse_version(\"18.5\")\n):\n raise RuntimeError(\n \"cryptography requires setuptools 18.5 or newer, please upgrade to a \"\n \"newer version of setuptools\"\n )\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nsetup_requirements = []\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (5, 3):\n raise RuntimeError(\n \"cryptography 1.9 is not compatible with PyPy < 5.3. Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n setup_requirements.append(\"cffi>=1.7\")\n\ntest_requirements = [\n \"pytest>=3.2.1,!=3.3.0\",\n \"pretend\",\n \"iso8601\",\n \"pytz\",\n \"hypothesis>=1.11.4\",\n]\n\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"]),\n include_package_data=True,\n\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n\n install_requires=[\n \"idna >= 2.1\",\n \"asn1crypto >= 0.21.0\",\n \"six >= 1.4.1\",\n ],\n tests_require=test_requirements,\n extras_require={\n \":python_version < '3'\": [\"enum34\", \"ipaddress\"],\n \":platform_python_implementation != 'PyPy'\": [\"cffi >= 1.7\"],\n\n \"test\": test_requirements,\n \"docstest\": [\n \"doc8\",\n \"pyenchant >= 1.6.11\",\n \"readme_renderer >= 16.0\",\n \"sphinx >= 1.6.5\",\n \"sphinx_rtd_theme\",\n \"sphinxcontrib-spelling >= 4.0.1\",\n ],\n \"pep8test\": [\n \"flake8\",\n \"flake8-import-order\",\n \"pep8-naming\",\n ],\n },\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}]}
| 3,462 | 122 |
gh_patches_debug_1317
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-1325
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bugzilla] Error calling shutdown method for module bugzilla:None
Noticed this in my logs. Bugzilla shutdown throwing none. On Sopel 6.5.3, Python 3.5.3.
```
Ping timeout reached after 120 seconds, closing connection
Calling shutdown for 2 modules.
calling reddit.shutdown
calling bugzilla.shutdown
Error calling shutdown method for module bugzilla:None
Closed!
Warning: Disconnected. Reconnecting in 20 seconds...
Welcome to Sopel. Loading modules...
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/bugzilla.py`
Content:
```
1 # coding=utf-8
2 """Bugzilla issue reporting module
3
4 Copyright 2013-2015, Embolalia, embolalia.com
5 Licensed under the Eiffel Forum License 2.
6 """
7 from __future__ import unicode_literals, absolute_import, print_function, division
8
9 import re
10
11 import xmltodict
12
13 from sopel import web, tools
14 from sopel.config.types import StaticSection, ListAttribute
15 from sopel.logger import get_logger
16 from sopel.module import rule
17
18
19 regex = None
20 LOGGER = get_logger(__name__)
21
22
23 class BugzillaSection(StaticSection):
24 domains = ListAttribute('domains')
25 """The domains of the Bugzilla instances from which to get information."""
26
27
28 def configure(config):
29 config.define_section('bugzilla', BugzillaSection)
30 config.bugzilla.configure_setting(
31 'domains',
32 'Enter the domains of the Bugzillas you want extra information '
33 'from (e.g. bugzilla.gnome.org)'
34 )
35
36
37 def setup(bot):
38 global regex
39 bot.config.define_section('bugzilla', BugzillaSection)
40
41 if not bot.config.bugzilla.domains:
42 return
43 if not bot.memory.contains('url_callbacks'):
44 bot.memory['url_callbacks'] = tools.SopelMemory()
45
46 domains = '|'.join(bot.config.bugzilla.domains)
47 regex = re.compile((r'https?://(%s)'
48 '(/show_bug.cgi\?\S*?)'
49 '(id=\d+)')
50 % domains)
51 bot.memory['url_callbacks'][regex] = show_bug
52
53
54 def shutdown(bot):
55 del bot.memory['url_callbacks'][regex]
56
57
58 @rule(r'.*https?://(\S+?)'
59 '(/show_bug.cgi\?\S*?)'
60 '(id=\d+).*')
61 def show_bug(bot, trigger, match=None):
62 """Show information about a Bugzilla bug."""
63 match = match or trigger
64 domain = match.group(1)
65 if domain not in bot.config.bugzilla.domains:
66 return
67 url = 'https://%s%sctype=xml&%s' % match.groups()
68 data = web.get(url, dont_decode=True)
69 bug = xmltodict.parse(data).get('bugzilla').get('bug')
70 error = bug.get('@error', None) # error="NotPermitted"
71
72 if error:
73 LOGGER.warning('Bugzilla error: %s' % error)
74 bot.say('[BUGZILLA] Unable to get infomation for '
75 'linked bug (%s)' % error)
76 return
77
78 message = ('[BUGZILLA] %s | Product: %s | Component: %s | Version: %s | ' +
79 'Importance: %s | Status: %s | Assigned to: %s | ' +
80 'Reported: %s | Modified: %s')
81
82 resolution = bug.get('resolution')
83 if resolution is not None:
84 status = bug.get('bug_status') + ' ' + resolution
85 else:
86 status = bug.get('bug_status')
87
88 assigned_to = bug.get('assigned_to')
89 if isinstance(assigned_to, dict):
90 assigned_to = assigned_to.get('@name')
91
92 message = message % (
93 bug.get('short_desc'), bug.get('product'),
94 bug.get('component'), bug.get('version'),
95 (bug.get('priority') + ' ' + bug.get('bug_severity')),
96 status, assigned_to, bug.get('creation_ts'),
97 bug.get('delta_ts'))
98 bot.say(message)
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/bugzilla.py b/sopel/modules/bugzilla.py
--- a/sopel/modules/bugzilla.py
+++ b/sopel/modules/bugzilla.py
@@ -52,7 +52,12 @@
def shutdown(bot):
- del bot.memory['url_callbacks'][regex]
+ try:
+ del bot.memory['url_callbacks'][regex]
+ except KeyError:
+ # bot.config.bugzilla.domains was probably just empty on startup
+ # everything's daijoubu
+ pass
@rule(r'.*https?://(\S+?)'
|
{"golden_diff": "diff --git a/sopel/modules/bugzilla.py b/sopel/modules/bugzilla.py\n--- a/sopel/modules/bugzilla.py\n+++ b/sopel/modules/bugzilla.py\n@@ -52,7 +52,12 @@\n \n \n def shutdown(bot):\n- del bot.memory['url_callbacks'][regex]\n+ try:\n+ del bot.memory['url_callbacks'][regex]\n+ except KeyError:\n+ # bot.config.bugzilla.domains was probably just empty on startup\n+ # everything's daijoubu\n+ pass\n \n \n @rule(r'.*https?://(\\S+?)'\n", "issue": "[Bugzilla] Error calling shutdown method for module bugzilla:None\nNoticed this in my logs. Bugzilla shutdown throwing none. On Sopel 6.5.3, Python 3.5.3.\r\n\r\n```\r\nPing timeout reached after 120 seconds, closing connection\r\nCalling shutdown for 2 modules.\r\ncalling reddit.shutdown\r\ncalling bugzilla.shutdown\r\nError calling shutdown method for module bugzilla:None\r\nClosed!\r\nWarning: Disconnected. Reconnecting in 20 seconds...\r\n \r\nWelcome to Sopel. Loading modules...\r\n```\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"Bugzilla issue reporting module\n\nCopyright 2013-2015, Embolalia, embolalia.com\nLicensed under the Eiffel Forum License 2.\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport re\n\nimport xmltodict\n\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ListAttribute\nfrom sopel.logger import get_logger\nfrom sopel.module import rule\n\n\nregex = None\nLOGGER = get_logger(__name__)\n\n\nclass BugzillaSection(StaticSection):\n domains = ListAttribute('domains')\n \"\"\"The domains of the Bugzilla instances from which to get information.\"\"\"\n\n\ndef configure(config):\n config.define_section('bugzilla', BugzillaSection)\n config.bugzilla.configure_setting(\n 'domains',\n 'Enter the domains of the Bugzillas you want extra information '\n 'from (e.g. bugzilla.gnome.org)'\n )\n\n\ndef setup(bot):\n global regex\n bot.config.define_section('bugzilla', BugzillaSection)\n\n if not bot.config.bugzilla.domains:\n return\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n\n domains = '|'.join(bot.config.bugzilla.domains)\n regex = re.compile((r'https?://(%s)'\n '(/show_bug.cgi\\?\\S*?)'\n '(id=\\d+)')\n % domains)\n bot.memory['url_callbacks'][regex] = show_bug\n\n\ndef shutdown(bot):\n del bot.memory['url_callbacks'][regex]\n\n\n@rule(r'.*https?://(\\S+?)'\n '(/show_bug.cgi\\?\\S*?)'\n '(id=\\d+).*')\ndef show_bug(bot, trigger, match=None):\n \"\"\"Show information about a Bugzilla bug.\"\"\"\n match = match or trigger\n domain = match.group(1)\n if domain not in bot.config.bugzilla.domains:\n return\n url = 'https://%s%sctype=xml&%s' % match.groups()\n data = web.get(url, dont_decode=True)\n bug = xmltodict.parse(data).get('bugzilla').get('bug')\n error = bug.get('@error', None) # error=\"NotPermitted\"\n\n if error:\n LOGGER.warning('Bugzilla error: %s' % error)\n bot.say('[BUGZILLA] Unable to get infomation for '\n 'linked bug (%s)' % error)\n return\n\n message = ('[BUGZILLA] %s | Product: %s | Component: %s | Version: %s | ' +\n 'Importance: %s | Status: %s | Assigned to: %s | ' +\n 'Reported: %s | Modified: %s')\n\n resolution = bug.get('resolution')\n if resolution is not None:\n status = bug.get('bug_status') + ' ' + resolution\n else:\n status = bug.get('bug_status')\n\n assigned_to = bug.get('assigned_to')\n if isinstance(assigned_to, dict):\n assigned_to = assigned_to.get('@name')\n\n message = message % (\n bug.get('short_desc'), bug.get('product'),\n bug.get('component'), bug.get('version'),\n (bug.get('priority') + ' ' + bug.get('bug_severity')),\n status, assigned_to, bug.get('creation_ts'),\n bug.get('delta_ts'))\n bot.say(message)\n", "path": "sopel/modules/bugzilla.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"Bugzilla issue reporting module\n\nCopyright 2013-2015, Embolalia, embolalia.com\nLicensed under the Eiffel Forum License 2.\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport re\n\nimport xmltodict\n\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ListAttribute\nfrom sopel.logger import get_logger\nfrom sopel.module import rule\n\n\nregex = None\nLOGGER = get_logger(__name__)\n\n\nclass BugzillaSection(StaticSection):\n domains = ListAttribute('domains')\n \"\"\"The domains of the Bugzilla instances from which to get information.\"\"\"\n\n\ndef configure(config):\n config.define_section('bugzilla', BugzillaSection)\n config.bugzilla.configure_setting(\n 'domains',\n 'Enter the domains of the Bugzillas you want extra information '\n 'from (e.g. bugzilla.gnome.org)'\n )\n\n\ndef setup(bot):\n global regex\n bot.config.define_section('bugzilla', BugzillaSection)\n\n if not bot.config.bugzilla.domains:\n return\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n\n domains = '|'.join(bot.config.bugzilla.domains)\n regex = re.compile((r'https?://(%s)'\n '(/show_bug.cgi\\?\\S*?)'\n '(id=\\d+)')\n % domains)\n bot.memory['url_callbacks'][regex] = show_bug\n\n\ndef shutdown(bot):\n try:\n del bot.memory['url_callbacks'][regex]\n except KeyError:\n # bot.config.bugzilla.domains was probably just empty on startup\n # everything's daijoubu\n pass\n\n\n@rule(r'.*https?://(\\S+?)'\n '(/show_bug.cgi\\?\\S*?)'\n '(id=\\d+).*')\ndef show_bug(bot, trigger, match=None):\n \"\"\"Show information about a Bugzilla bug.\"\"\"\n match = match or trigger\n domain = match.group(1)\n if domain not in bot.config.bugzilla.domains:\n return\n url = 'https://%s%sctype=xml&%s' % match.groups()\n data = web.get(url, dont_decode=True)\n bug = xmltodict.parse(data).get('bugzilla').get('bug')\n error = bug.get('@error', None) # error=\"NotPermitted\"\n\n if error:\n LOGGER.warning('Bugzilla error: %s' % error)\n bot.say('[BUGZILLA] Unable to get infomation for '\n 'linked bug (%s)' % error)\n return\n\n message = ('[BUGZILLA] %s | Product: %s | Component: %s | Version: %s | ' +\n 'Importance: %s | Status: %s | Assigned to: %s | ' +\n 'Reported: %s | Modified: %s')\n\n resolution = bug.get('resolution')\n if resolution is not None:\n status = bug.get('bug_status') + ' ' + resolution\n else:\n status = bug.get('bug_status')\n\n assigned_to = bug.get('assigned_to')\n if isinstance(assigned_to, dict):\n assigned_to = assigned_to.get('@name')\n\n message = message % (\n bug.get('short_desc'), bug.get('product'),\n bug.get('component'), bug.get('version'),\n (bug.get('priority') + ' ' + bug.get('bug_severity')),\n status, assigned_to, bug.get('creation_ts'),\n bug.get('delta_ts'))\n bot.say(message)\n", "path": "sopel/modules/bugzilla.py"}]}
| 1,334 | 138 |
gh_patches_debug_50359
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-5239
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gevent hook unnecessarily bundles ~17MB of "stuff"
## Description of the issue
The included `gevent` hook bundles about 17MB of unnecessary files.
These include:
- HTML documentation
- `__pycache__` folders
- tests
- Generated `.c` and `.h` files
## Context information (for bug reports)
* Output of `pyinstaller --version`: `3.6`
* Version of Python: `3.7.7`
* Platform: macOS 10.15.4 / Ubuntu 18.04
### A minimal example program which shows the error
`test.py`:
```
import gevent
```
Commands
```shell
$ pip install gevent pyinstaller
[...]
$ pyinstaller -D test.py
[...]
$ find dist/test/gevent -name "*.c" -or -name "*.h" -or -name "*.html" -or -path "*/__pycache__/*" -or -path "*/tests/*" | xargs du -shc
[...]
17M total
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-gevent.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2015-2020, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 from PyInstaller.utils.hooks import collect_all
13
14 excludedimports = ["gevent.testing", "gevent.tests"]
15
16 datas, binaries, hiddenimports = collect_all(
17 'gevent',
18 filter_submodules=lambda name: (
19 "gevent.testing" not in name or "gevent.tests" not in name))
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PyInstaller/hooks/hook-gevent.py b/PyInstaller/hooks/hook-gevent.py
--- a/PyInstaller/hooks/hook-gevent.py
+++ b/PyInstaller/hooks/hook-gevent.py
@@ -16,4 +16,6 @@
datas, binaries, hiddenimports = collect_all(
'gevent',
filter_submodules=lambda name: (
- "gevent.testing" not in name or "gevent.tests" not in name))
+ "gevent.testing" not in name or "gevent.tests" not in name),
+ include_py_files=False,
+ exclude_datas=["**/tests"])
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-gevent.py b/PyInstaller/hooks/hook-gevent.py\n--- a/PyInstaller/hooks/hook-gevent.py\n+++ b/PyInstaller/hooks/hook-gevent.py\n@@ -16,4 +16,6 @@\n datas, binaries, hiddenimports = collect_all(\n 'gevent',\n filter_submodules=lambda name: (\n- \"gevent.testing\" not in name or \"gevent.tests\" not in name))\n+ \"gevent.testing\" not in name or \"gevent.tests\" not in name),\n+ include_py_files=False,\n+ exclude_datas=[\"**/tests\"])\n", "issue": "gevent hook unnecessarily bundles ~17MB of \"stuff\"\n## Description of the issue\r\n\r\nThe included `gevent` hook bundles about 17MB of unnecessary files.\r\nThese include:\r\n- HTML documentation\r\n- `__pycache__` folders\r\n- tests\r\n- Generated `.c` and `.h` files\r\n\r\n## Context information (for bug reports)\r\n\r\n* Output of `pyinstaller --version`: `3.6`\r\n* Version of Python: `3.7.7`\r\n* Platform: macOS 10.15.4 / Ubuntu 18.04\r\n\r\n### A minimal example program which shows the error\r\n\r\n`test.py`:\r\n```\r\nimport gevent\r\n```\r\n\r\nCommands\r\n```shell\r\n$ pip install gevent pyinstaller\r\n[...]\r\n$ pyinstaller -D test.py\r\n[...]\r\n$ find dist/test/gevent -name \"*.c\" -or -name \"*.h\" -or -name \"*.html\" -or -path \"*/__pycache__/*\" -or -path \"*/tests/*\" | xargs du -shc\r\n[...]\r\n 17M\ttotal\r\n```\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2015-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks import collect_all\n\nexcludedimports = [\"gevent.testing\", \"gevent.tests\"]\n\ndatas, binaries, hiddenimports = collect_all(\n 'gevent',\n filter_submodules=lambda name: (\n \"gevent.testing\" not in name or \"gevent.tests\" not in name))\n", "path": "PyInstaller/hooks/hook-gevent.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2015-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks import collect_all\n\nexcludedimports = [\"gevent.testing\", \"gevent.tests\"]\n\ndatas, binaries, hiddenimports = collect_all(\n 'gevent',\n filter_submodules=lambda name: (\n \"gevent.testing\" not in name or \"gevent.tests\" not in name),\n include_py_files=False,\n exclude_datas=[\"**/tests\"])\n", "path": "PyInstaller/hooks/hook-gevent.py"}]}
| 692 | 143 |
gh_patches_debug_8955
|
rasdani/github-patches
|
git_diff
|
google__jax-13884
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip install failed and got UnicodeDecodeError for jax>=0.3.15 on Windows, Python 3.9.13
### Description
I got `UnicodeDecodeError` when running pip install for jax `0.3.15` on Windows 10:
```cmd
PS C:\tensorflow> pip install --no-cache-dir jax==0.3.15
Collecting jax==0.3.15
Downloading jax-0.3.15.tar.gz (1.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 3.5 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\Jason Wu\AppData\Local\Temp\pip-install-5rpko2f6\jax_4375f8ec13e046228768476f1ee3a194\setup.py", line 33, in <module>
_long_description = f.read()
UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 1304: illegal multibyte sequence
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
Also I tried `0.4.1` and `0.3.25`, both got the same error:
```cmd
PS C:\tensorflow> pip install --no-cache-dir jax==0.4.1
Collecting jax==0.4.1
Downloading jax-0.4.1.tar.gz (1.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 3.6 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\Jason Wu\AppData\Local\Temp\pip-install-trctdimm\jax_3d064f6acc4d4a0b91bcebef44bb0eb7\setup.py", line 38, in <module>
_long_description = f.read()
UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 1301: illegal multibyte sequence
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
```cmd
PS C:\tensorflow> pip install --no-cache-dir jax==0.3.25
Collecting jax==0.3.25
Downloading jax-0.3.25.tar.gz (1.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 4.2 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\Jason Wu\AppData\Local\Temp\pip-install-h_ymbbfo\jax_9f7e4365d3a24fb4bb4e74d1b86678e6\setup.py", line 38, in <module>
_long_description = f.read()
UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 1304: illegal multibyte sequence
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
It's worth noting that `0.3.14` installed without issue:
```cmd
PS C:\tensorflow> pip install --no-cache-dir jax==0.3.14
Collecting jax==0.3.14
Downloading jax-0.3.14.tar.gz (990 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 990.1/990.1 kB 2.6 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Requirement already satisfied: absl-py ...
...
Building wheels for collected packages: jax
Building wheel for jax (setup.py) ... done
Created wheel for jax: filename=jax-0.3.14-py3-none-any.whl size=1147576 sha256=ae0aadd2688c543930de9ba7e31e58b3363a8752ae25e4770941b53e1c3476d6
Stored in directory: C:\Users\Jason Wu\AppData\Local\Temp\pip-ephem-wheel-cache-q92x7zxq\wheels\3d\22\cf\75c40ec058e8d0f9b8427cd35366e1fd618475451fc8fc6fd7
Successfully built jax
Installing collected packages: jax
Successfully installed jax-0.3.14
```
Since I'm building Tensorflow from source, it requires Jax > `0.3.15`.
Please let me know if there is any other information that needs to be provided. thanks!
### What jax/jaxlib version are you using?
jax v0.3.15
### Which accelerator(s) are you using?
CPU
### Additional system info
Python 3.9.13, pip 22.3.1, Windows 10,
### NVIDIA GPU info
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2018 The JAX Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from distutils import spawn
16 import subprocess
17 import os
18 import sys
19
20 from setuptools import setup, find_packages
21
22 _current_jaxlib_version = '0.4.1'
23 # The following should be updated with each new jaxlib release.
24 _latest_jaxlib_version_on_pypi = '0.4.1'
25 _available_cuda_versions = ['11']
26 _default_cuda_version = '11'
27 _available_cudnn_versions = ['82', '86']
28 _default_cudnn_version = '86'
29 _libtpu_version = '0.1.dev20221212'
30
31 _dct = {}
32 with open('jax/version.py') as f:
33 exec(f.read(), _dct)
34 __version__ = _dct['__version__']
35 _minimum_jaxlib_version = _dct['_minimum_jaxlib_version']
36
37 with open('README.md') as f:
38 _long_description = f.read()
39
40 if 'PROTOC' in os.environ and os.path.exists(os.environ['PROTOC']):
41 protoc = os.environ['PROTOC']
42 else:
43 protoc = spawn.find_executable('protoc')
44
45 def generate_proto(source):
46 if not protoc or not os.path.exists(source):
47 return
48 protoc_command = [protoc, '-I.', '--python_out=.', source]
49 if subprocess.call(protoc_command) != 0:
50 sys.exit(-1)
51
52 generate_proto("jax/experimental/australis/executable.proto")
53 generate_proto("jax/experimental/australis/petri.proto")
54
55 setup(
56 name='jax',
57 version=__version__,
58 description='Differentiate, compile, and transform Numpy code.',
59 long_description=_long_description,
60 long_description_content_type='text/markdown',
61 author='JAX team',
62 author_email='[email protected]',
63 packages=find_packages(exclude=["examples"]),
64 package_data={'jax': ['py.typed', "*.pyi", "**/*.pyi"]},
65 python_requires='>=3.8',
66 install_requires=[
67 'numpy>=1.20',
68 'opt_einsum',
69 'scipy>=1.5',
70 ],
71 extras_require={
72 # Minimum jaxlib version; used in testing.
73 'minimum-jaxlib': [f'jaxlib=={_minimum_jaxlib_version}'],
74
75 # CPU-only jaxlib can be installed via:
76 # $ pip install jax[cpu]
77 'cpu': [f'jaxlib=={_current_jaxlib_version}'],
78
79 # Used only for CI builds that install JAX from github HEAD.
80 'ci': [f'jaxlib=={_latest_jaxlib_version_on_pypi}'],
81
82 # Cloud TPU VM jaxlib can be installed via:
83 # $ pip install jax[tpu] -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
84 'tpu': [f'jaxlib=={_current_jaxlib_version}',
85 f'libtpu-nightly=={_libtpu_version}',
86 # Required by cloud_tpu_init.py
87 'requests'],
88
89 # $ pip install jax[australis]
90 'australis': ['protobuf>=3.13,<4'],
91
92 # CUDA installations require adding jax releases URL; e.g.
93 # Cuda installation defaulting to a CUDA and Cudnn version defined above.
94 # $ pip install jax[cuda] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
95 'cuda': [f"jaxlib=={_current_jaxlib_version}+cuda{_default_cuda_version}.cudnn{_default_cudnn_version}"],
96
97 # CUDA installations require adding jax releases URL; e.g.
98 # $ pip install jax[cuda11_cudnn82] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
99 # $ pip install jax[cuda11_cudnn86] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
100 **{f'cuda{cuda_version}_cudnn{cudnn_version}': f"jaxlib=={_current_jaxlib_version}+cuda{cuda_version}.cudnn{cudnn_version}"
101 for cuda_version in _available_cuda_versions for cudnn_version in _available_cudnn_versions}
102 },
103 url='https://github.com/google/jax',
104 license='Apache-2.0',
105 classifiers=[
106 "Programming Language :: Python :: 3.8",
107 "Programming Language :: Python :: 3.9",
108 "Programming Language :: Python :: 3.10",
109 "Programming Language :: Python :: 3.11",
110 ],
111 zip_safe=False,
112 )
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,12 +29,12 @@
_libtpu_version = '0.1.dev20221212'
_dct = {}
-with open('jax/version.py') as f:
+with open('jax/version.py', encoding='utf-8') as f:
exec(f.read(), _dct)
__version__ = _dct['__version__']
_minimum_jaxlib_version = _dct['_minimum_jaxlib_version']
-with open('README.md') as f:
+with open('README.md', encoding='utf-8') as f:
_long_description = f.read()
if 'PROTOC' in os.environ and os.path.exists(os.environ['PROTOC']):
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,12 +29,12 @@\n _libtpu_version = '0.1.dev20221212'\n \n _dct = {}\n-with open('jax/version.py') as f:\n+with open('jax/version.py', encoding='utf-8') as f:\n exec(f.read(), _dct)\n __version__ = _dct['__version__']\n _minimum_jaxlib_version = _dct['_minimum_jaxlib_version']\n \n-with open('README.md') as f:\n+with open('README.md', encoding='utf-8') as f:\n _long_description = f.read()\n \n if 'PROTOC' in os.environ and os.path.exists(os.environ['PROTOC']):\n", "issue": "pip install failed and got UnicodeDecodeError for jax>=0.3.15 on Windows, Python 3.9.13\n### Description\n\nI got `UnicodeDecodeError` when running pip install for jax `0.3.15` on Windows 10:\r\n```cmd\r\nPS C:\\tensorflow> pip install --no-cache-dir jax==0.3.15\r\nCollecting jax==0.3.15\r\n Downloading jax-0.3.15.tar.gz (1.0 MB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 1.0/1.0 MB 3.5 MB/s eta 0:00:00\r\n Preparing metadata (setup.py) ... error\r\n error: subprocess-exited-with-error\r\n\r\n \u00d7 python setup.py egg_info did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> [6 lines of output]\r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"C:\\Users\\Jason Wu\\AppData\\Local\\Temp\\pip-install-5rpko2f6\\jax_4375f8ec13e046228768476f1ee3a194\\setup.py\", line 33, in <module>\r\n _long_description = f.read()\r\n UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 1304: illegal multibyte sequence\r\n [end of output]\r\n\r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\nerror: metadata-generation-failed\r\n\r\n\u00d7 Encountered error while generating package metadata.\r\n\u2570\u2500> See above for output.\r\n\r\nnote: This is an issue with the package mentioned above, not pip.\r\nhint: See above for details.\r\n```\r\n\r\nAlso I tried `0.4.1` and `0.3.25`, both got the same error:\r\n```cmd\r\nPS C:\\tensorflow> pip install --no-cache-dir jax==0.4.1\r\nCollecting jax==0.4.1\r\n Downloading jax-0.4.1.tar.gz (1.2 MB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 1.2/1.2 MB 3.6 MB/s eta 0:00:00\r\n Preparing metadata (setup.py) ... error\r\n error: subprocess-exited-with-error\r\n\r\n \u00d7 python setup.py egg_info did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> [6 lines of output]\r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"C:\\Users\\Jason Wu\\AppData\\Local\\Temp\\pip-install-trctdimm\\jax_3d064f6acc4d4a0b91bcebef44bb0eb7\\setup.py\", line 38, in <module>\r\n _long_description = f.read()\r\n UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 1301: illegal multibyte sequence\r\n [end of output]\r\n\r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\nerror: metadata-generation-failed\r\n\r\n\u00d7 Encountered error while generating package metadata.\r\n\u2570\u2500> See above for output.\r\n\r\nnote: This is an issue with the package mentioned above, not pip.\r\nhint: See above for details.\r\n```\r\n```cmd\r\nPS C:\\tensorflow> pip install --no-cache-dir jax==0.3.25\r\nCollecting jax==0.3.25\r\n Downloading jax-0.3.25.tar.gz (1.1 MB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 1.1/1.1 MB 4.2 MB/s eta 0:00:00\r\n Preparing metadata (setup.py) ... error\r\n error: subprocess-exited-with-error\r\n\r\n \u00d7 python setup.py egg_info did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> [6 lines of output]\r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"C:\\Users\\Jason Wu\\AppData\\Local\\Temp\\pip-install-h_ymbbfo\\jax_9f7e4365d3a24fb4bb4e74d1b86678e6\\setup.py\", line 38, in <module>\r\n _long_description = f.read()\r\n UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 1304: illegal multibyte sequence\r\n [end of output]\r\n\r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\nerror: metadata-generation-failed\r\n\r\n\u00d7 Encountered error while generating package metadata.\r\n\u2570\u2500> See above for output.\r\n\r\nnote: This is an issue with the package mentioned above, not pip.\r\nhint: See above for details.\r\n```\r\n\r\nIt's worth noting that `0.3.14` installed without issue:\r\n```cmd\r\nPS C:\\tensorflow> pip install --no-cache-dir jax==0.3.14\r\nCollecting jax==0.3.14\r\n Downloading jax-0.3.14.tar.gz (990 kB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 990.1/990.1 kB 2.6 MB/s eta 0:00:00\r\n Preparing metadata (setup.py) ... done\r\nRequirement already satisfied: absl-py ...\r\n...\r\nBuilding wheels for collected packages: jax\r\n Building wheel for jax (setup.py) ... done\r\n Created wheel for jax: filename=jax-0.3.14-py3-none-any.whl size=1147576 sha256=ae0aadd2688c543930de9ba7e31e58b3363a8752ae25e4770941b53e1c3476d6\r\n Stored in directory: C:\\Users\\Jason Wu\\AppData\\Local\\Temp\\pip-ephem-wheel-cache-q92x7zxq\\wheels\\3d\\22\\cf\\75c40ec058e8d0f9b8427cd35366e1fd618475451fc8fc6fd7\r\nSuccessfully built jax\r\nInstalling collected packages: jax\r\nSuccessfully installed jax-0.3.14\r\n```\r\n\r\nSince I'm building Tensorflow from source, it requires Jax > `0.3.15`.\r\n\r\nPlease let me know if there is any other information that needs to be provided. thanks!\n\n### What jax/jaxlib version are you using?\n\njax v0.3.15\n\n### Which accelerator(s) are you using?\n\nCPU\n\n### Additional system info\n\nPython 3.9.13, pip 22.3.1, Windows 10, \n\n### NVIDIA GPU info\n\n_No response_\n", "before_files": [{"content": "# Copyright 2018 The JAX Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom distutils import spawn\nimport subprocess\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\n_current_jaxlib_version = '0.4.1'\n# The following should be updated with each new jaxlib release.\n_latest_jaxlib_version_on_pypi = '0.4.1'\n_available_cuda_versions = ['11']\n_default_cuda_version = '11'\n_available_cudnn_versions = ['82', '86']\n_default_cudnn_version = '86'\n_libtpu_version = '0.1.dev20221212'\n\n_dct = {}\nwith open('jax/version.py') as f:\n exec(f.read(), _dct)\n__version__ = _dct['__version__']\n_minimum_jaxlib_version = _dct['_minimum_jaxlib_version']\n\nwith open('README.md') as f:\n _long_description = f.read()\n\nif 'PROTOC' in os.environ and os.path.exists(os.environ['PROTOC']):\n protoc = os.environ['PROTOC']\nelse:\n protoc = spawn.find_executable('protoc')\n\ndef generate_proto(source):\n if not protoc or not os.path.exists(source):\n return\n protoc_command = [protoc, '-I.', '--python_out=.', source]\n if subprocess.call(protoc_command) != 0:\n sys.exit(-1)\n\ngenerate_proto(\"jax/experimental/australis/executable.proto\")\ngenerate_proto(\"jax/experimental/australis/petri.proto\")\n\nsetup(\n name='jax',\n version=__version__,\n description='Differentiate, compile, and transform Numpy code.',\n long_description=_long_description,\n long_description_content_type='text/markdown',\n author='JAX team',\n author_email='[email protected]',\n packages=find_packages(exclude=[\"examples\"]),\n package_data={'jax': ['py.typed', \"*.pyi\", \"**/*.pyi\"]},\n python_requires='>=3.8',\n install_requires=[\n 'numpy>=1.20',\n 'opt_einsum',\n 'scipy>=1.5',\n ],\n extras_require={\n # Minimum jaxlib version; used in testing.\n 'minimum-jaxlib': [f'jaxlib=={_minimum_jaxlib_version}'],\n\n # CPU-only jaxlib can be installed via:\n # $ pip install jax[cpu]\n 'cpu': [f'jaxlib=={_current_jaxlib_version}'],\n\n # Used only for CI builds that install JAX from github HEAD.\n 'ci': [f'jaxlib=={_latest_jaxlib_version_on_pypi}'],\n\n # Cloud TPU VM jaxlib can be installed via:\n # $ pip install jax[tpu] -f https://storage.googleapis.com/jax-releases/libtpu_releases.html\n 'tpu': [f'jaxlib=={_current_jaxlib_version}',\n f'libtpu-nightly=={_libtpu_version}',\n # Required by cloud_tpu_init.py\n 'requests'],\n\n # $ pip install jax[australis]\n 'australis': ['protobuf>=3.13,<4'],\n\n # CUDA installations require adding jax releases URL; e.g.\n # Cuda installation defaulting to a CUDA and Cudnn version defined above.\n # $ pip install jax[cuda] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n 'cuda': [f\"jaxlib=={_current_jaxlib_version}+cuda{_default_cuda_version}.cudnn{_default_cudnn_version}\"],\n\n # CUDA installations require adding jax releases URL; e.g.\n # $ pip install jax[cuda11_cudnn82] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n # $ pip install jax[cuda11_cudnn86] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n **{f'cuda{cuda_version}_cudnn{cudnn_version}': f\"jaxlib=={_current_jaxlib_version}+cuda{cuda_version}.cudnn{cudnn_version}\"\n for cuda_version in _available_cuda_versions for cudnn_version in _available_cudnn_versions}\n },\n url='https://github.com/google/jax',\n license='Apache-2.0',\n classifiers=[\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 The JAX Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom distutils import spawn\nimport subprocess\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\n_current_jaxlib_version = '0.4.1'\n# The following should be updated with each new jaxlib release.\n_latest_jaxlib_version_on_pypi = '0.4.1'\n_available_cuda_versions = ['11']\n_default_cuda_version = '11'\n_available_cudnn_versions = ['82', '86']\n_default_cudnn_version = '86'\n_libtpu_version = '0.1.dev20221212'\n\n_dct = {}\nwith open('jax/version.py', encoding='utf-8') as f:\n exec(f.read(), _dct)\n__version__ = _dct['__version__']\n_minimum_jaxlib_version = _dct['_minimum_jaxlib_version']\n\nwith open('README.md', encoding='utf-8') as f:\n _long_description = f.read()\n\nif 'PROTOC' in os.environ and os.path.exists(os.environ['PROTOC']):\n protoc = os.environ['PROTOC']\nelse:\n protoc = spawn.find_executable('protoc')\n\ndef generate_proto(source):\n if not protoc or not os.path.exists(source):\n return\n protoc_command = [protoc, '-I.', '--python_out=.', source]\n if subprocess.call(protoc_command) != 0:\n sys.exit(-1)\n\ngenerate_proto(\"jax/experimental/australis/executable.proto\")\ngenerate_proto(\"jax/experimental/australis/petri.proto\")\n\nsetup(\n name='jax',\n version=__version__,\n description='Differentiate, compile, and transform Numpy code.',\n long_description=_long_description,\n long_description_content_type='text/markdown',\n author='JAX team',\n author_email='[email protected]',\n packages=find_packages(exclude=[\"examples\"]),\n package_data={'jax': ['py.typed', \"*.pyi\", \"**/*.pyi\"]},\n python_requires='>=3.8',\n install_requires=[\n 'numpy>=1.20',\n 'opt_einsum',\n 'scipy>=1.5',\n ],\n extras_require={\n # Minimum jaxlib version; used in testing.\n 'minimum-jaxlib': [f'jaxlib=={_minimum_jaxlib_version}'],\n\n # CPU-only jaxlib can be installed via:\n # $ pip install jax[cpu]\n 'cpu': [f'jaxlib=={_current_jaxlib_version}'],\n\n # Used only for CI builds that install JAX from github HEAD.\n 'ci': [f'jaxlib=={_latest_jaxlib_version_on_pypi}'],\n\n # Cloud TPU VM jaxlib can be installed via:\n # $ pip install jax[tpu] -f https://storage.googleapis.com/jax-releases/libtpu_releases.html\n 'tpu': [f'jaxlib=={_current_jaxlib_version}',\n f'libtpu-nightly=={_libtpu_version}',\n # Required by cloud_tpu_init.py\n 'requests'],\n\n # $ pip install jax[australis]\n 'australis': ['protobuf>=3.13,<4'],\n\n # CUDA installations require adding jax releases URL; e.g.\n # Cuda installation defaulting to a CUDA and Cudnn version defined above.\n # $ pip install jax[cuda] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n 'cuda': [f\"jaxlib=={_current_jaxlib_version}+cuda{_default_cuda_version}.cudnn{_default_cudnn_version}\"],\n\n # CUDA installations require adding jax releases URL; e.g.\n # $ pip install jax[cuda11_cudnn82] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n # $ pip install jax[cuda11_cudnn86] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n **{f'cuda{cuda_version}_cudnn{cudnn_version}': f\"jaxlib=={_current_jaxlib_version}+cuda{cuda_version}.cudnn{cudnn_version}\"\n for cuda_version in _available_cuda_versions for cudnn_version in _available_cudnn_versions}\n },\n url='https://github.com/google/jax',\n license='Apache-2.0',\n classifiers=[\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n zip_safe=False,\n)\n", "path": "setup.py"}]}
| 3,357 | 177 |
gh_patches_debug_17518
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__PaddleSpeech-1644
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[vec][search] update to paddlespeech model
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `demos/audio_searching/src/encode.py`
Content:
```
1 # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import numpy as np
15 from logs import LOGGER
16
17 from paddlespeech.cli import VectorExecutor
18
19 vector_executor = VectorExecutor()
20
21
22 def get_audio_embedding(path):
23 """
24 Use vpr_inference to generate embedding of audio
25 """
26 try:
27 embedding = vector_executor(audio_file=path)
28 embedding = embedding / np.linalg.norm(embedding)
29 embedding = embedding.tolist()
30 return embedding
31 except Exception as e:
32 LOGGER.error(f"Error with embedding:{e}")
33 return None
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/demos/audio_searching/src/encode.py b/demos/audio_searching/src/encode.py
--- a/demos/audio_searching/src/encode.py
+++ b/demos/audio_searching/src/encode.py
@@ -12,8 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
-from logs import LOGGER
+from logs import LOGGER
from paddlespeech.cli import VectorExecutor
vector_executor = VectorExecutor()
@@ -24,7 +24,8 @@
Use vpr_inference to generate embedding of audio
"""
try:
- embedding = vector_executor(audio_file=path)
+ embedding = vector_executor(
+ audio_file=path, model='ecapatdnn_voxceleb12')
embedding = embedding / np.linalg.norm(embedding)
embedding = embedding.tolist()
return embedding
|
{"golden_diff": "diff --git a/demos/audio_searching/src/encode.py b/demos/audio_searching/src/encode.py\n--- a/demos/audio_searching/src/encode.py\n+++ b/demos/audio_searching/src/encode.py\n@@ -12,8 +12,8 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n import numpy as np\n-from logs import LOGGER\n \n+from logs import LOGGER\n from paddlespeech.cli import VectorExecutor\n \n vector_executor = VectorExecutor()\n@@ -24,7 +24,8 @@\n Use vpr_inference to generate embedding of audio\n \"\"\"\n try:\n- embedding = vector_executor(audio_file=path)\n+ embedding = vector_executor(\n+ audio_file=path, model='ecapatdnn_voxceleb12')\n embedding = embedding / np.linalg.norm(embedding)\n embedding = embedding.tolist()\n return embedding\n", "issue": "[vec][search] update to paddlespeech model\n\n", "before_files": [{"content": "# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport numpy as np\nfrom logs import LOGGER\n\nfrom paddlespeech.cli import VectorExecutor\n\nvector_executor = VectorExecutor()\n\n\ndef get_audio_embedding(path):\n \"\"\"\n Use vpr_inference to generate embedding of audio\n \"\"\"\n try:\n embedding = vector_executor(audio_file=path)\n embedding = embedding / np.linalg.norm(embedding)\n embedding = embedding.tolist()\n return embedding\n except Exception as e:\n LOGGER.error(f\"Error with embedding:{e}\")\n return None\n", "path": "demos/audio_searching/src/encode.py"}], "after_files": [{"content": "# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport numpy as np\n\nfrom logs import LOGGER\nfrom paddlespeech.cli import VectorExecutor\n\nvector_executor = VectorExecutor()\n\n\ndef get_audio_embedding(path):\n \"\"\"\n Use vpr_inference to generate embedding of audio\n \"\"\"\n try:\n embedding = vector_executor(\n audio_file=path, model='ecapatdnn_voxceleb12')\n embedding = embedding / np.linalg.norm(embedding)\n embedding = embedding.tolist()\n return embedding\n except Exception as e:\n LOGGER.error(f\"Error with embedding:{e}\")\n return None\n", "path": "demos/audio_searching/src/encode.py"}]}
| 579 | 195 |
gh_patches_debug_21048
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-25856
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add jitter on scheduled jobs
hourly, daily long processes if all started at once can cause sudden increase in workload if you have many sites/benches.
Adding simple jitter to scheduled time can lessen the impact of such issues. Jitter is common pattern used for solving problems with "frequency" becomes a problem. E.g. gunicorn adds jitter to avoid restarting all workers at same time, profilers add jitter to avoid amplifying some pattern of repeated work. retry/backoff implementations also use to avoid creating patterns.
Possible implementation: When importing scheduled job types add some random delays in cron. E.g. daily jobs will start in the range of 12:00-12:15 AM instead of all starting at 12:00 AM.
Cons: Some jobs are required to be executed at specific times e.g. birthday reminders. So adding negative offset can introduce bugs for them, positive offset however should be fine AFAIK.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/core/doctype/scheduled_job_type/scheduled_job_type.py`
Content:
```
1 # Copyright (c) 2021, Frappe Technologies and contributors
2 # License: MIT. See LICENSE
3
4 import json
5 from datetime import datetime
6
7 import click
8 from croniter import CroniterBadCronError, croniter
9
10 import frappe
11 from frappe import _
12 from frappe.model.document import Document
13 from frappe.utils import get_datetime, now_datetime
14 from frappe.utils.background_jobs import enqueue, is_job_enqueued
15
16
17 class ScheduledJobType(Document):
18 def autoname(self):
19 self.name = ".".join(self.method.split(".")[-2:])
20
21 def validate(self):
22 if self.frequency != "All":
23 # force logging for all events other than continuous ones (ALL)
24 self.create_log = 1
25
26 if self.frequency == "Cron":
27 if not self.cron_format:
28 frappe.throw(_("Cron format is required for job types with Cron frequency."))
29 try:
30 croniter(self.cron_format)
31 except CroniterBadCronError:
32 frappe.throw(
33 _("{0} is not a valid Cron expression.").format(f"<code>{self.cron_format}</code>"),
34 title=_("Bad Cron Expression"),
35 )
36
37 def enqueue(self, force=False) -> bool:
38 # enqueue event if last execution is done
39 if self.is_event_due() or force:
40 if frappe.flags.enqueued_jobs:
41 frappe.flags.enqueued_jobs.append(self.method)
42
43 if frappe.flags.execute_job:
44 self.execute()
45 else:
46 if not self.is_job_in_queue():
47 enqueue(
48 "frappe.core.doctype.scheduled_job_type.scheduled_job_type.run_scheduled_job",
49 queue=self.get_queue_name(),
50 job_type=self.method,
51 job_id=self.rq_job_id,
52 )
53 return True
54 else:
55 frappe.logger("scheduler").error(
56 f"Skipped queueing {self.method} because it was found in queue for {frappe.local.site}"
57 )
58 return False
59
60 def is_event_due(self, current_time=None):
61 """Return true if event is due based on time lapsed since last execution"""
62 # if the next scheduled event is before NOW, then its due!
63 return self.get_next_execution() <= (current_time or now_datetime())
64
65 def is_job_in_queue(self) -> bool:
66 return is_job_enqueued(self.rq_job_id)
67
68 @property
69 def rq_job_id(self):
70 """Unique ID created to deduplicate jobs with single RQ call."""
71 return f"scheduled_job::{self.method}"
72
73 @property
74 def next_execution(self):
75 return self.get_next_execution()
76
77 def get_next_execution(self):
78 CRON_MAP = {
79 "Yearly": "0 0 1 1 *",
80 "Annual": "0 0 1 1 *",
81 "Monthly": "0 0 1 * *",
82 "Monthly Long": "0 0 1 * *",
83 "Weekly": "0 0 * * 0",
84 "Weekly Long": "0 0 * * 0",
85 "Daily": "0 0 * * *",
86 "Daily Long": "0 0 * * *",
87 "Hourly": "0 * * * *",
88 "Hourly Long": "0 * * * *",
89 "All": "0/" + str((frappe.get_conf().scheduler_interval or 240) // 60) + " * * * *",
90 }
91
92 if not self.cron_format:
93 self.cron_format = CRON_MAP[self.frequency]
94
95 # If this is a cold start then last_execution will not be set.
96 # Creation is set as fallback because if very old fallback is set job might trigger
97 # immediately, even when it's meant to be daily.
98 # A dynamic fallback like current time might miss the scheduler interval and job will never start.
99 last_execution = get_datetime(self.last_execution or self.creation)
100 return croniter(self.cron_format, last_execution).get_next(datetime)
101
102 def execute(self):
103 self.scheduler_log = None
104 try:
105 self.log_status("Start")
106 if self.server_script:
107 script_name = frappe.db.get_value("Server Script", self.server_script)
108 if script_name:
109 frappe.get_doc("Server Script", script_name).execute_scheduled_method()
110 else:
111 frappe.get_attr(self.method)()
112 frappe.db.commit()
113 self.log_status("Complete")
114 except Exception:
115 frappe.db.rollback()
116 self.log_status("Failed")
117
118 def log_status(self, status):
119 # log file
120 frappe.logger("scheduler").info(f"Scheduled Job {status}: {self.method} for {frappe.local.site}")
121 self.update_scheduler_log(status)
122
123 def update_scheduler_log(self, status):
124 if not self.create_log:
125 # self.get_next_execution will work properly iff self.last_execution is properly set
126 if self.frequency == "All" and status == "Start":
127 self.db_set("last_execution", now_datetime(), update_modified=False)
128 frappe.db.commit()
129 return
130 if not self.scheduler_log:
131 self.scheduler_log = frappe.get_doc(
132 dict(doctype="Scheduled Job Log", scheduled_job_type=self.name)
133 ).insert(ignore_permissions=True)
134 self.scheduler_log.db_set("status", status)
135 if status == "Failed":
136 self.scheduler_log.db_set("details", frappe.get_traceback())
137 if status == "Start":
138 self.db_set("last_execution", now_datetime(), update_modified=False)
139 frappe.db.commit()
140
141 def get_queue_name(self):
142 return "long" if ("Long" in self.frequency) else "default"
143
144 def on_trash(self):
145 frappe.db.delete("Scheduled Job Log", {"scheduled_job_type": self.name})
146
147
148 @frappe.whitelist()
149 def execute_event(doc: str):
150 frappe.only_for("System Manager")
151 doc = json.loads(doc)
152 frappe.get_doc("Scheduled Job Type", doc.get("name")).enqueue(force=True)
153 return doc
154
155
156 def run_scheduled_job(job_type: str):
157 """This is a wrapper function that runs a hooks.scheduler_events method"""
158 try:
159 frappe.get_doc("Scheduled Job Type", dict(method=job_type)).execute()
160 except Exception:
161 print(frappe.get_traceback())
162
163
164 def sync_jobs(hooks: dict | None = None):
165 frappe.reload_doc("core", "doctype", "scheduled_job_type")
166 scheduler_events = hooks or frappe.get_hooks("scheduler_events")
167 all_events = insert_events(scheduler_events)
168 clear_events(all_events)
169
170
171 def insert_events(scheduler_events: dict) -> list:
172 cron_jobs, event_jobs = [], []
173 for event_type in scheduler_events:
174 events = scheduler_events.get(event_type)
175 if isinstance(events, dict):
176 cron_jobs += insert_cron_jobs(events)
177 else:
178 # hourly, daily etc
179 event_jobs += insert_event_jobs(events, event_type)
180 return cron_jobs + event_jobs
181
182
183 def insert_cron_jobs(events: dict) -> list:
184 cron_jobs = []
185 for cron_format in events:
186 for event in events.get(cron_format):
187 cron_jobs.append(event)
188 insert_single_event("Cron", event, cron_format)
189 return cron_jobs
190
191
192 def insert_event_jobs(events: list, event_type: str) -> list:
193 event_jobs = []
194 for event in events:
195 event_jobs.append(event)
196 frequency = event_type.replace("_", " ").title()
197 insert_single_event(frequency, event)
198 return event_jobs
199
200
201 def insert_single_event(frequency: str, event: str, cron_format: str | None = None):
202 cron_expr = {"cron_format": cron_format} if cron_format else {}
203
204 try:
205 frappe.get_attr(event)
206 except Exception as e:
207 click.secho(f"{event} is not a valid method: {e}", fg="yellow")
208
209 doc = frappe.get_doc(
210 {
211 "doctype": "Scheduled Job Type",
212 "method": event,
213 "cron_format": cron_format,
214 "frequency": frequency,
215 }
216 )
217
218 if not frappe.db.exists("Scheduled Job Type", {"method": event, "frequency": frequency, **cron_expr}):
219 savepoint = "scheduled_job_type_creation"
220 try:
221 frappe.db.savepoint(savepoint)
222 doc.insert()
223 except frappe.DuplicateEntryError:
224 frappe.db.rollback(save_point=savepoint)
225 doc.delete()
226 doc.insert()
227
228
229 def clear_events(all_events: list):
230 for event in frappe.get_all("Scheduled Job Type", fields=["name", "method", "server_script"]):
231 is_server_script = event.server_script
232 is_defined_in_hooks = event.method in all_events
233
234 if not (is_defined_in_hooks or is_server_script):
235 frappe.delete_doc("Scheduled Job Type", event.name)
236
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py b/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py
--- a/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py
+++ b/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py
@@ -2,7 +2,8 @@
# License: MIT. See LICENSE
import json
-from datetime import datetime
+from datetime import datetime, timedelta
+from random import randint
import click
from croniter import CroniterBadCronError, croniter
@@ -97,7 +98,12 @@
# immediately, even when it's meant to be daily.
# A dynamic fallback like current time might miss the scheduler interval and job will never start.
last_execution = get_datetime(self.last_execution or self.creation)
- return croniter(self.cron_format, last_execution).get_next(datetime)
+ next_execution = croniter(self.cron_format, last_execution).get_next(datetime)
+
+ jitter = 0
+ if self.frequency in ("Hourly Long", "Daily Long"):
+ jitter = randint(1, 600)
+ return next_execution + timedelta(seconds=jitter)
def execute(self):
self.scheduler_log = None
|
{"golden_diff": "diff --git a/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py b/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py\n--- a/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py\n+++ b/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py\n@@ -2,7 +2,8 @@\n # License: MIT. See LICENSE\n \n import json\n-from datetime import datetime\n+from datetime import datetime, timedelta\n+from random import randint\n \n import click\n from croniter import CroniterBadCronError, croniter\n@@ -97,7 +98,12 @@\n \t\t# immediately, even when it's meant to be daily.\n \t\t# A dynamic fallback like current time might miss the scheduler interval and job will never start.\n \t\tlast_execution = get_datetime(self.last_execution or self.creation)\n-\t\treturn croniter(self.cron_format, last_execution).get_next(datetime)\n+\t\tnext_execution = croniter(self.cron_format, last_execution).get_next(datetime)\n+\n+\t\tjitter = 0\n+\t\tif self.frequency in (\"Hourly Long\", \"Daily Long\"):\n+\t\t\tjitter = randint(1, 600)\n+\t\treturn next_execution + timedelta(seconds=jitter)\n \n \tdef execute(self):\n \t\tself.scheduler_log = None\n", "issue": "Add jitter on scheduled jobs\nhourly, daily long processes if all started at once can cause sudden increase in workload if you have many sites/benches. \r\n\r\n\r\nAdding simple jitter to scheduled time can lessen the impact of such issues. Jitter is common pattern used for solving problems with \"frequency\" becomes a problem. E.g. gunicorn adds jitter to avoid restarting all workers at same time, profilers add jitter to avoid amplifying some pattern of repeated work. retry/backoff implementations also use to avoid creating patterns.\r\n\r\n\r\nPossible implementation: When importing scheduled job types add some random delays in cron. E.g. daily jobs will start in the range of 12:00-12:15 AM instead of all starting at 12:00 AM.\r\n\r\n\r\nCons: Some jobs are required to be executed at specific times e.g. birthday reminders. So adding negative offset can introduce bugs for them, positive offset however should be fine AFAIK. \n", "before_files": [{"content": "# Copyright (c) 2021, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport json\nfrom datetime import datetime\n\nimport click\nfrom croniter import CroniterBadCronError, croniter\n\nimport frappe\nfrom frappe import _\nfrom frappe.model.document import Document\nfrom frappe.utils import get_datetime, now_datetime\nfrom frappe.utils.background_jobs import enqueue, is_job_enqueued\n\n\nclass ScheduledJobType(Document):\n\tdef autoname(self):\n\t\tself.name = \".\".join(self.method.split(\".\")[-2:])\n\n\tdef validate(self):\n\t\tif self.frequency != \"All\":\n\t\t\t# force logging for all events other than continuous ones (ALL)\n\t\t\tself.create_log = 1\n\n\t\tif self.frequency == \"Cron\":\n\t\t\tif not self.cron_format:\n\t\t\t\tfrappe.throw(_(\"Cron format is required for job types with Cron frequency.\"))\n\t\t\ttry:\n\t\t\t\tcroniter(self.cron_format)\n\t\t\texcept CroniterBadCronError:\n\t\t\t\tfrappe.throw(\n\t\t\t\t\t_(\"{0} is not a valid Cron expression.\").format(f\"<code>{self.cron_format}</code>\"),\n\t\t\t\t\ttitle=_(\"Bad Cron Expression\"),\n\t\t\t\t)\n\n\tdef enqueue(self, force=False) -> bool:\n\t\t# enqueue event if last execution is done\n\t\tif self.is_event_due() or force:\n\t\t\tif frappe.flags.enqueued_jobs:\n\t\t\t\tfrappe.flags.enqueued_jobs.append(self.method)\n\n\t\t\tif frappe.flags.execute_job:\n\t\t\t\tself.execute()\n\t\t\telse:\n\t\t\t\tif not self.is_job_in_queue():\n\t\t\t\t\tenqueue(\n\t\t\t\t\t\t\"frappe.core.doctype.scheduled_job_type.scheduled_job_type.run_scheduled_job\",\n\t\t\t\t\t\tqueue=self.get_queue_name(),\n\t\t\t\t\t\tjob_type=self.method,\n\t\t\t\t\t\tjob_id=self.rq_job_id,\n\t\t\t\t\t)\n\t\t\t\t\treturn True\n\t\t\t\telse:\n\t\t\t\t\tfrappe.logger(\"scheduler\").error(\n\t\t\t\t\t\tf\"Skipped queueing {self.method} because it was found in queue for {frappe.local.site}\"\n\t\t\t\t\t)\n\t\treturn False\n\n\tdef is_event_due(self, current_time=None):\n\t\t\"\"\"Return true if event is due based on time lapsed since last execution\"\"\"\n\t\t# if the next scheduled event is before NOW, then its due!\n\t\treturn self.get_next_execution() <= (current_time or now_datetime())\n\n\tdef is_job_in_queue(self) -> bool:\n\t\treturn is_job_enqueued(self.rq_job_id)\n\n\t@property\n\tdef rq_job_id(self):\n\t\t\"\"\"Unique ID created to deduplicate jobs with single RQ call.\"\"\"\n\t\treturn f\"scheduled_job::{self.method}\"\n\n\t@property\n\tdef next_execution(self):\n\t\treturn self.get_next_execution()\n\n\tdef get_next_execution(self):\n\t\tCRON_MAP = {\n\t\t\t\"Yearly\": \"0 0 1 1 *\",\n\t\t\t\"Annual\": \"0 0 1 1 *\",\n\t\t\t\"Monthly\": \"0 0 1 * *\",\n\t\t\t\"Monthly Long\": \"0 0 1 * *\",\n\t\t\t\"Weekly\": \"0 0 * * 0\",\n\t\t\t\"Weekly Long\": \"0 0 * * 0\",\n\t\t\t\"Daily\": \"0 0 * * *\",\n\t\t\t\"Daily Long\": \"0 0 * * *\",\n\t\t\t\"Hourly\": \"0 * * * *\",\n\t\t\t\"Hourly Long\": \"0 * * * *\",\n\t\t\t\"All\": \"0/\" + str((frappe.get_conf().scheduler_interval or 240) // 60) + \" * * * *\",\n\t\t}\n\n\t\tif not self.cron_format:\n\t\t\tself.cron_format = CRON_MAP[self.frequency]\n\n\t\t# If this is a cold start then last_execution will not be set.\n\t\t# Creation is set as fallback because if very old fallback is set job might trigger\n\t\t# immediately, even when it's meant to be daily.\n\t\t# A dynamic fallback like current time might miss the scheduler interval and job will never start.\n\t\tlast_execution = get_datetime(self.last_execution or self.creation)\n\t\treturn croniter(self.cron_format, last_execution).get_next(datetime)\n\n\tdef execute(self):\n\t\tself.scheduler_log = None\n\t\ttry:\n\t\t\tself.log_status(\"Start\")\n\t\t\tif self.server_script:\n\t\t\t\tscript_name = frappe.db.get_value(\"Server Script\", self.server_script)\n\t\t\t\tif script_name:\n\t\t\t\t\tfrappe.get_doc(\"Server Script\", script_name).execute_scheduled_method()\n\t\t\telse:\n\t\t\t\tfrappe.get_attr(self.method)()\n\t\t\tfrappe.db.commit()\n\t\t\tself.log_status(\"Complete\")\n\t\texcept Exception:\n\t\t\tfrappe.db.rollback()\n\t\t\tself.log_status(\"Failed\")\n\n\tdef log_status(self, status):\n\t\t# log file\n\t\tfrappe.logger(\"scheduler\").info(f\"Scheduled Job {status}: {self.method} for {frappe.local.site}\")\n\t\tself.update_scheduler_log(status)\n\n\tdef update_scheduler_log(self, status):\n\t\tif not self.create_log:\n\t\t\t# self.get_next_execution will work properly iff self.last_execution is properly set\n\t\t\tif self.frequency == \"All\" and status == \"Start\":\n\t\t\t\tself.db_set(\"last_execution\", now_datetime(), update_modified=False)\n\t\t\t\tfrappe.db.commit()\n\t\t\treturn\n\t\tif not self.scheduler_log:\n\t\t\tself.scheduler_log = frappe.get_doc(\n\t\t\t\tdict(doctype=\"Scheduled Job Log\", scheduled_job_type=self.name)\n\t\t\t).insert(ignore_permissions=True)\n\t\tself.scheduler_log.db_set(\"status\", status)\n\t\tif status == \"Failed\":\n\t\t\tself.scheduler_log.db_set(\"details\", frappe.get_traceback())\n\t\tif status == \"Start\":\n\t\t\tself.db_set(\"last_execution\", now_datetime(), update_modified=False)\n\t\tfrappe.db.commit()\n\n\tdef get_queue_name(self):\n\t\treturn \"long\" if (\"Long\" in self.frequency) else \"default\"\n\n\tdef on_trash(self):\n\t\tfrappe.db.delete(\"Scheduled Job Log\", {\"scheduled_job_type\": self.name})\n\n\[email protected]()\ndef execute_event(doc: str):\n\tfrappe.only_for(\"System Manager\")\n\tdoc = json.loads(doc)\n\tfrappe.get_doc(\"Scheduled Job Type\", doc.get(\"name\")).enqueue(force=True)\n\treturn doc\n\n\ndef run_scheduled_job(job_type: str):\n\t\"\"\"This is a wrapper function that runs a hooks.scheduler_events method\"\"\"\n\ttry:\n\t\tfrappe.get_doc(\"Scheduled Job Type\", dict(method=job_type)).execute()\n\texcept Exception:\n\t\tprint(frappe.get_traceback())\n\n\ndef sync_jobs(hooks: dict | None = None):\n\tfrappe.reload_doc(\"core\", \"doctype\", \"scheduled_job_type\")\n\tscheduler_events = hooks or frappe.get_hooks(\"scheduler_events\")\n\tall_events = insert_events(scheduler_events)\n\tclear_events(all_events)\n\n\ndef insert_events(scheduler_events: dict) -> list:\n\tcron_jobs, event_jobs = [], []\n\tfor event_type in scheduler_events:\n\t\tevents = scheduler_events.get(event_type)\n\t\tif isinstance(events, dict):\n\t\t\tcron_jobs += insert_cron_jobs(events)\n\t\telse:\n\t\t\t# hourly, daily etc\n\t\t\tevent_jobs += insert_event_jobs(events, event_type)\n\treturn cron_jobs + event_jobs\n\n\ndef insert_cron_jobs(events: dict) -> list:\n\tcron_jobs = []\n\tfor cron_format in events:\n\t\tfor event in events.get(cron_format):\n\t\t\tcron_jobs.append(event)\n\t\t\tinsert_single_event(\"Cron\", event, cron_format)\n\treturn cron_jobs\n\n\ndef insert_event_jobs(events: list, event_type: str) -> list:\n\tevent_jobs = []\n\tfor event in events:\n\t\tevent_jobs.append(event)\n\t\tfrequency = event_type.replace(\"_\", \" \").title()\n\t\tinsert_single_event(frequency, event)\n\treturn event_jobs\n\n\ndef insert_single_event(frequency: str, event: str, cron_format: str | None = None):\n\tcron_expr = {\"cron_format\": cron_format} if cron_format else {}\n\n\ttry:\n\t\tfrappe.get_attr(event)\n\texcept Exception as e:\n\t\tclick.secho(f\"{event} is not a valid method: {e}\", fg=\"yellow\")\n\n\tdoc = frappe.get_doc(\n\t\t{\n\t\t\t\"doctype\": \"Scheduled Job Type\",\n\t\t\t\"method\": event,\n\t\t\t\"cron_format\": cron_format,\n\t\t\t\"frequency\": frequency,\n\t\t}\n\t)\n\n\tif not frappe.db.exists(\"Scheduled Job Type\", {\"method\": event, \"frequency\": frequency, **cron_expr}):\n\t\tsavepoint = \"scheduled_job_type_creation\"\n\t\ttry:\n\t\t\tfrappe.db.savepoint(savepoint)\n\t\t\tdoc.insert()\n\t\texcept frappe.DuplicateEntryError:\n\t\t\tfrappe.db.rollback(save_point=savepoint)\n\t\t\tdoc.delete()\n\t\t\tdoc.insert()\n\n\ndef clear_events(all_events: list):\n\tfor event in frappe.get_all(\"Scheduled Job Type\", fields=[\"name\", \"method\", \"server_script\"]):\n\t\tis_server_script = event.server_script\n\t\tis_defined_in_hooks = event.method in all_events\n\n\t\tif not (is_defined_in_hooks or is_server_script):\n\t\t\tfrappe.delete_doc(\"Scheduled Job Type\", event.name)\n", "path": "frappe/core/doctype/scheduled_job_type/scheduled_job_type.py"}], "after_files": [{"content": "# Copyright (c) 2021, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport json\nfrom datetime import datetime, timedelta\nfrom random import randint\n\nimport click\nfrom croniter import CroniterBadCronError, croniter\n\nimport frappe\nfrom frappe import _\nfrom frappe.model.document import Document\nfrom frappe.utils import get_datetime, now_datetime\nfrom frappe.utils.background_jobs import enqueue, is_job_enqueued\n\n\nclass ScheduledJobType(Document):\n\tdef autoname(self):\n\t\tself.name = \".\".join(self.method.split(\".\")[-2:])\n\n\tdef validate(self):\n\t\tif self.frequency != \"All\":\n\t\t\t# force logging for all events other than continuous ones (ALL)\n\t\t\tself.create_log = 1\n\n\t\tif self.frequency == \"Cron\":\n\t\t\tif not self.cron_format:\n\t\t\t\tfrappe.throw(_(\"Cron format is required for job types with Cron frequency.\"))\n\t\t\ttry:\n\t\t\t\tcroniter(self.cron_format)\n\t\t\texcept CroniterBadCronError:\n\t\t\t\tfrappe.throw(\n\t\t\t\t\t_(\"{0} is not a valid Cron expression.\").format(f\"<code>{self.cron_format}</code>\"),\n\t\t\t\t\ttitle=_(\"Bad Cron Expression\"),\n\t\t\t\t)\n\n\tdef enqueue(self, force=False) -> bool:\n\t\t# enqueue event if last execution is done\n\t\tif self.is_event_due() or force:\n\t\t\tif frappe.flags.enqueued_jobs:\n\t\t\t\tfrappe.flags.enqueued_jobs.append(self.method)\n\n\t\t\tif frappe.flags.execute_job:\n\t\t\t\tself.execute()\n\t\t\telse:\n\t\t\t\tif not self.is_job_in_queue():\n\t\t\t\t\tenqueue(\n\t\t\t\t\t\t\"frappe.core.doctype.scheduled_job_type.scheduled_job_type.run_scheduled_job\",\n\t\t\t\t\t\tqueue=self.get_queue_name(),\n\t\t\t\t\t\tjob_type=self.method,\n\t\t\t\t\t\tjob_id=self.rq_job_id,\n\t\t\t\t\t)\n\t\t\t\t\treturn True\n\t\t\t\telse:\n\t\t\t\t\tfrappe.logger(\"scheduler\").error(\n\t\t\t\t\t\tf\"Skipped queueing {self.method} because it was found in queue for {frappe.local.site}\"\n\t\t\t\t\t)\n\t\treturn False\n\n\tdef is_event_due(self, current_time=None):\n\t\t\"\"\"Return true if event is due based on time lapsed since last execution\"\"\"\n\t\t# if the next scheduled event is before NOW, then its due!\n\t\treturn self.get_next_execution() <= (current_time or now_datetime())\n\n\tdef is_job_in_queue(self) -> bool:\n\t\treturn is_job_enqueued(self.rq_job_id)\n\n\t@property\n\tdef rq_job_id(self):\n\t\t\"\"\"Unique ID created to deduplicate jobs with single RQ call.\"\"\"\n\t\treturn f\"scheduled_job::{self.method}\"\n\n\t@property\n\tdef next_execution(self):\n\t\treturn self.get_next_execution()\n\n\tdef get_next_execution(self):\n\t\tCRON_MAP = {\n\t\t\t\"Yearly\": \"0 0 1 1 *\",\n\t\t\t\"Annual\": \"0 0 1 1 *\",\n\t\t\t\"Monthly\": \"0 0 1 * *\",\n\t\t\t\"Monthly Long\": \"0 0 1 * *\",\n\t\t\t\"Weekly\": \"0 0 * * 0\",\n\t\t\t\"Weekly Long\": \"0 0 * * 0\",\n\t\t\t\"Daily\": \"0 0 * * *\",\n\t\t\t\"Daily Long\": \"0 0 * * *\",\n\t\t\t\"Hourly\": \"0 * * * *\",\n\t\t\t\"Hourly Long\": \"0 * * * *\",\n\t\t\t\"All\": \"0/\" + str((frappe.get_conf().scheduler_interval or 240) // 60) + \" * * * *\",\n\t\t}\n\n\t\tif not self.cron_format:\n\t\t\tself.cron_format = CRON_MAP[self.frequency]\n\n\t\t# If this is a cold start then last_execution will not be set.\n\t\t# Creation is set as fallback because if very old fallback is set job might trigger\n\t\t# immediately, even when it's meant to be daily.\n\t\t# A dynamic fallback like current time might miss the scheduler interval and job will never start.\n\t\tlast_execution = get_datetime(self.last_execution or self.creation)\n\t\tnext_execution = croniter(self.cron_format, last_execution).get_next(datetime)\n\n\t\tjitter = 0\n\t\tif self.frequency in (\"Hourly Long\", \"Daily Long\"):\n\t\t\tjitter = randint(1, 600)\n\t\treturn next_execution + timedelta(seconds=jitter)\n\n\tdef execute(self):\n\t\tself.scheduler_log = None\n\t\ttry:\n\t\t\tself.log_status(\"Start\")\n\t\t\tif self.server_script:\n\t\t\t\tscript_name = frappe.db.get_value(\"Server Script\", self.server_script)\n\t\t\t\tif script_name:\n\t\t\t\t\tfrappe.get_doc(\"Server Script\", script_name).execute_scheduled_method()\n\t\t\telse:\n\t\t\t\tfrappe.get_attr(self.method)()\n\t\t\tfrappe.db.commit()\n\t\t\tself.log_status(\"Complete\")\n\t\texcept Exception:\n\t\t\tfrappe.db.rollback()\n\t\t\tself.log_status(\"Failed\")\n\n\tdef log_status(self, status):\n\t\t# log file\n\t\tfrappe.logger(\"scheduler\").info(f\"Scheduled Job {status}: {self.method} for {frappe.local.site}\")\n\t\tself.update_scheduler_log(status)\n\n\tdef update_scheduler_log(self, status):\n\t\tif not self.create_log:\n\t\t\t# self.get_next_execution will work properly iff self.last_execution is properly set\n\t\t\tif self.frequency == \"All\" and status == \"Start\":\n\t\t\t\tself.db_set(\"last_execution\", now_datetime(), update_modified=False)\n\t\t\t\tfrappe.db.commit()\n\t\t\treturn\n\t\tif not self.scheduler_log:\n\t\t\tself.scheduler_log = frappe.get_doc(\n\t\t\t\tdict(doctype=\"Scheduled Job Log\", scheduled_job_type=self.name)\n\t\t\t).insert(ignore_permissions=True)\n\t\tself.scheduler_log.db_set(\"status\", status)\n\t\tif status == \"Failed\":\n\t\t\tself.scheduler_log.db_set(\"details\", frappe.get_traceback())\n\t\tif status == \"Start\":\n\t\t\tself.db_set(\"last_execution\", now_datetime(), update_modified=False)\n\t\tfrappe.db.commit()\n\n\tdef get_queue_name(self):\n\t\treturn \"long\" if (\"Long\" in self.frequency) else \"default\"\n\n\tdef on_trash(self):\n\t\tfrappe.db.delete(\"Scheduled Job Log\", {\"scheduled_job_type\": self.name})\n\n\[email protected]()\ndef execute_event(doc: str):\n\tfrappe.only_for(\"System Manager\")\n\tdoc = json.loads(doc)\n\tfrappe.get_doc(\"Scheduled Job Type\", doc.get(\"name\")).enqueue(force=True)\n\treturn doc\n\n\ndef run_scheduled_job(job_type: str):\n\t\"\"\"This is a wrapper function that runs a hooks.scheduler_events method\"\"\"\n\ttry:\n\t\tfrappe.get_doc(\"Scheduled Job Type\", dict(method=job_type)).execute()\n\texcept Exception:\n\t\tprint(frappe.get_traceback())\n\n\ndef sync_jobs(hooks: dict | None = None):\n\tfrappe.reload_doc(\"core\", \"doctype\", \"scheduled_job_type\")\n\tscheduler_events = hooks or frappe.get_hooks(\"scheduler_events\")\n\tall_events = insert_events(scheduler_events)\n\tclear_events(all_events)\n\n\ndef insert_events(scheduler_events: dict) -> list:\n\tcron_jobs, event_jobs = [], []\n\tfor event_type in scheduler_events:\n\t\tevents = scheduler_events.get(event_type)\n\t\tif isinstance(events, dict):\n\t\t\tcron_jobs += insert_cron_jobs(events)\n\t\telse:\n\t\t\t# hourly, daily etc\n\t\t\tevent_jobs += insert_event_jobs(events, event_type)\n\treturn cron_jobs + event_jobs\n\n\ndef insert_cron_jobs(events: dict) -> list:\n\tcron_jobs = []\n\tfor cron_format in events:\n\t\tfor event in events.get(cron_format):\n\t\t\tcron_jobs.append(event)\n\t\t\tinsert_single_event(\"Cron\", event, cron_format)\n\treturn cron_jobs\n\n\ndef insert_event_jobs(events: list, event_type: str) -> list:\n\tevent_jobs = []\n\tfor event in events:\n\t\tevent_jobs.append(event)\n\t\tfrequency = event_type.replace(\"_\", \" \").title()\n\t\tinsert_single_event(frequency, event)\n\treturn event_jobs\n\n\ndef insert_single_event(frequency: str, event: str, cron_format: str | None = None):\n\tcron_expr = {\"cron_format\": cron_format} if cron_format else {}\n\n\ttry:\n\t\tfrappe.get_attr(event)\n\texcept Exception as e:\n\t\tclick.secho(f\"{event} is not a valid method: {e}\", fg=\"yellow\")\n\n\tdoc = frappe.get_doc(\n\t\t{\n\t\t\t\"doctype\": \"Scheduled Job Type\",\n\t\t\t\"method\": event,\n\t\t\t\"cron_format\": cron_format,\n\t\t\t\"frequency\": frequency,\n\t\t}\n\t)\n\n\tif not frappe.db.exists(\"Scheduled Job Type\", {\"method\": event, \"frequency\": frequency, **cron_expr}):\n\t\tsavepoint = \"scheduled_job_type_creation\"\n\t\ttry:\n\t\t\tfrappe.db.savepoint(savepoint)\n\t\t\tdoc.insert()\n\t\texcept frappe.DuplicateEntryError:\n\t\t\tfrappe.db.rollback(save_point=savepoint)\n\t\t\tdoc.delete()\n\t\t\tdoc.insert()\n\n\ndef clear_events(all_events: list):\n\tfor event in frappe.get_all(\"Scheduled Job Type\", fields=[\"name\", \"method\", \"server_script\"]):\n\t\tis_server_script = event.server_script\n\t\tis_defined_in_hooks = event.method in all_events\n\n\t\tif not (is_defined_in_hooks or is_server_script):\n\t\t\tfrappe.delete_doc(\"Scheduled Job Type\", event.name)\n", "path": "frappe/core/doctype/scheduled_job_type/scheduled_job_type.py"}]}
| 3,051 | 283 |
gh_patches_debug_7068
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-6489
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ChainerMN's ImageNet example is broken because of update in pickle module
* Conditions
<!-- If you're using Chainer 4.0+, you can also get this information by typing `python -c 'import chainer; chainer.print_runtime_info()'. -->
- Chainer >= 5.0.0
* Code to reproduce
The ImageNet example on Python >= 3.6
```
import multiprocessing
p = multiprocessing.Process(target=lambda *x: x, args=())
p.start()
```
* Error messages, stack traces, or logs
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/context.py", line 291, in _Popen
return Popen(process_obj)
File "/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/popen_forkserver.py", line 35, in __init__
super().__init__(process_obj)
File "/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/popen_forkserver.py", line 47, in _launch
reduction.dump(process_obj, buf)
File "/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x7fb633f35e18>: attribute lookup <lambda> on __main__ failed
```
For details, see https://docs.python.org/3.6/library/pickle.html
```
The following types can be pickled:
* None, True, and False
* integers, floating point numbers, complex numbers
* strings, bytes, bytearrays
* tuples, lists, sets, and dictionaries containing only picklable objects
* functions defined at the top level of a module (using def, not lambda)
* built-in functions defined at the top level of a module
* classes that are defined at the top level of a module
* instances of such classes whose __dict__ or the result of calling __getstate__() is picklable (see section Pickling Class Instances for details).
```
Thus lambda is not pickable.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/chainermn/imagenet/train_imagenet.py`
Content:
```
1 #!/usr/bin/env python
2
3 from __future__ import print_function
4 import argparse
5 import multiprocessing
6 import random
7 import sys
8
9 import numpy as np
10
11 import chainer
12 import chainer.cuda
13 from chainer import training
14 from chainer.training import extensions
15
16 import chainermn
17
18
19 import models.alex as alex
20 import models.googlenet as googlenet
21 import models.googlenetbn as googlenetbn
22 import models.nin as nin
23 import models.resnet50 as resnet50
24
25 # Check Python version if it supports multiprocessing.set_start_method,
26 # which was introduced in Python 3.4
27 major, minor, _, _, _ = sys.version_info
28 if major <= 2 or (major == 3 and minor < 4):
29 sys.stderr.write('Error: ImageNet example uses '
30 'chainer.iterators.MultiprocessIterator, '
31 'which works only with Python >= 3.4. \n'
32 'For more details, see '
33 'http://chainermn.readthedocs.io/en/master/'
34 'tutorial/tips_faqs.html#using-multiprocessiterator\n')
35 exit(-1)
36
37
38 class PreprocessedDataset(chainer.dataset.DatasetMixin):
39
40 def __init__(self, path, root, mean, crop_size, random=True):
41 self.base = chainer.datasets.LabeledImageDataset(path, root)
42 self.mean = mean.astype(np.float32)
43 self.crop_size = crop_size
44 self.random = random
45
46 def __len__(self):
47 return len(self.base)
48
49 def get_example(self, i):
50 # It reads the i-th image/label pair and return a preprocessed image.
51 # It applies following preprocesses:
52 # - Cropping (random or center rectangular)
53 # - Random flip
54 # - Scaling to [0, 1] value
55 crop_size = self.crop_size
56
57 image, label = self.base[i]
58 _, h, w = image.shape
59
60 if self.random:
61 # Randomly crop a region and flip the image
62 top = random.randint(0, h - crop_size - 1)
63 left = random.randint(0, w - crop_size - 1)
64 if random.randint(0, 1):
65 image = image[:, :, ::-1]
66 else:
67 # Crop the center
68 top = (h - crop_size) // 2
69 left = (w - crop_size) // 2
70 bottom = top + crop_size
71 right = left + crop_size
72
73 image = image[:, top:bottom, left:right]
74 image -= self.mean[:, top:bottom, left:right]
75 image *= (1.0 / 255.0) # Scale to [0, 1]
76 return image, label
77
78
79 # chainermn.create_multi_node_evaluator can be also used with user customized
80 # evaluator classes that inherit chainer.training.extensions.Evaluator.
81 class TestModeEvaluator(extensions.Evaluator):
82
83 def evaluate(self):
84 model = self.get_target('main')
85 model.train = False
86 ret = super(TestModeEvaluator, self).evaluate()
87 model.train = True
88 return ret
89
90
91 def main():
92 # Check if GPU is available
93 # (ImageNet example does not support CPU execution)
94 if not chainer.cuda.available:
95 raise RuntimeError('ImageNet requires GPU support.')
96
97 archs = {
98 'alex': alex.Alex,
99 'googlenet': googlenet.GoogLeNet,
100 'googlenetbn': googlenetbn.GoogLeNetBN,
101 'nin': nin.NIN,
102 'resnet50': resnet50.ResNet50,
103 }
104
105 parser = argparse.ArgumentParser(
106 description='Learning convnet from ILSVRC2012 dataset')
107 parser.add_argument('train', help='Path to training image-label list file')
108 parser.add_argument('val', help='Path to validation image-label list file')
109 parser.add_argument('--arch', '-a', choices=archs.keys(), default='nin',
110 help='Convnet architecture')
111 parser.add_argument('--batchsize', '-B', type=int, default=32,
112 help='Learning minibatch size')
113 parser.add_argument('--epoch', '-E', type=int, default=10,
114 help='Number of epochs to train')
115 parser.add_argument('--initmodel',
116 help='Initialize the model from given file')
117 parser.add_argument('--loaderjob', '-j', type=int,
118 help='Number of parallel data loading processes')
119 parser.add_argument('--mean', '-m', default='mean.npy',
120 help='Mean file (computed by compute_mean.py)')
121 parser.add_argument('--resume', '-r', default='',
122 help='Initialize the trainer from given file')
123 parser.add_argument('--out', '-o', default='result',
124 help='Output directory')
125 parser.add_argument('--root', '-R', default='.',
126 help='Root directory path of image files')
127 parser.add_argument('--val_batchsize', '-b', type=int, default=250,
128 help='Validation minibatch size')
129 parser.add_argument('--test', action='store_true')
130 parser.add_argument('--communicator', default='hierarchical')
131 parser.set_defaults(test=False)
132 args = parser.parse_args()
133
134 # Start method of multiprocessing module need to be changed if we
135 # are using InfiniBand and MultiprocessIterator. This is because
136 # processes often crash when calling fork if they are using
137 # Infiniband. (c.f.,
138 # https://www.open-mpi.org/faq/?category=tuning#fork-warning )
139 # Also, just setting the start method does not seem to be
140 # sufficient to actually launch the forkserver processes, so also
141 # start a dummy process.
142 # See also our document:
143 # https://chainermn.readthedocs.io/en/stable/tutorial/tips_faqs.html#using-multiprocessiterator
144 # This must be done *before* ``chainermn.create_communicator``!!!
145 multiprocessing.set_start_method('forkserver')
146 p = multiprocessing.Process(target=lambda *x: x, args=())
147 p.start()
148 p.join()
149
150 # Prepare ChainerMN communicator.
151 comm = chainermn.create_communicator(args.communicator)
152 device = comm.intra_rank
153
154 if comm.rank == 0:
155 print('==========================================')
156 print('Num process (COMM_WORLD): {}'.format(comm.size))
157 print('Using {} communicator'.format(args.communicator))
158 print('Using {} arch'.format(args.arch))
159 print('Num Minibatch-size: {}'.format(args.batchsize))
160 print('Num epoch: {}'.format(args.epoch))
161 print('==========================================')
162
163 model = archs[args.arch]()
164 if args.initmodel:
165 print('Load model from', args.initmodel)
166 chainer.serializers.load_npz(args.initmodel, model)
167
168 chainer.cuda.get_device_from_id(device).use() # Make the GPU current
169 model.to_gpu()
170
171 # Split and distribute the dataset. Only worker 0 loads the whole dataset.
172 # Datasets of worker 0 are evenly split and distributed to all workers.
173 mean = np.load(args.mean)
174 if comm.rank == 0:
175 train = PreprocessedDataset(args.train, args.root, mean, model.insize)
176 val = PreprocessedDataset(
177 args.val, args.root, mean, model.insize, False)
178 else:
179 train = None
180 val = None
181 train = chainermn.scatter_dataset(train, comm, shuffle=True)
182 val = chainermn.scatter_dataset(val, comm)
183
184 # A workaround for processes crash should be done before making
185 # communicator above, when using fork (e.g. MultiProcessIterator)
186 # along with Infiniband.
187 train_iter = chainer.iterators.MultiprocessIterator(
188 train, args.batchsize, n_processes=args.loaderjob)
189 val_iter = chainer.iterators.MultiprocessIterator(
190 val, args.val_batchsize, repeat=False, n_processes=args.loaderjob)
191
192 # Create a multi node optimizer from a standard Chainer optimizer.
193 optimizer = chainermn.create_multi_node_optimizer(
194 chainer.optimizers.MomentumSGD(lr=0.01, momentum=0.9), comm)
195 optimizer.setup(model)
196
197 # Set up a trainer
198 updater = training.StandardUpdater(train_iter, optimizer, device=device)
199 trainer = training.Trainer(updater, (args.epoch, 'epoch'), args.out)
200
201 checkpoint_interval = (10, 'iteration') if args.test else (1, 'epoch')
202 val_interval = (10, 'iteration') if args.test else (1, 'epoch')
203 log_interval = (10, 'iteration') if args.test else (1, 'epoch')
204
205 checkpointer = chainermn.create_multi_node_checkpointer(
206 name='imagenet-example', comm=comm)
207 checkpointer.maybe_load(trainer, optimizer)
208 trainer.extend(checkpointer, trigger=checkpoint_interval)
209
210 # Create a multi node evaluator from an evaluator.
211 evaluator = TestModeEvaluator(val_iter, model, device=device)
212 evaluator = chainermn.create_multi_node_evaluator(evaluator, comm)
213 trainer.extend(evaluator, trigger=val_interval)
214
215 # Some display and output extensions are necessary only for one worker.
216 # (Otherwise, there would just be repeated outputs.)
217 if comm.rank == 0:
218 trainer.extend(extensions.DumpGraph('main/loss'))
219 trainer.extend(extensions.LogReport(trigger=log_interval))
220 trainer.extend(extensions.observe_lr(), trigger=log_interval)
221 trainer.extend(extensions.PrintReport([
222 'epoch', 'iteration', 'main/loss', 'validation/main/loss',
223 'main/accuracy', 'validation/main/accuracy', 'lr'
224 ]), trigger=log_interval)
225 trainer.extend(extensions.ProgressBar(update_interval=10))
226
227 if args.resume:
228 chainer.serializers.load_npz(args.resume, trainer)
229
230 trainer.run()
231
232
233 if __name__ == '__main__':
234 main()
235
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/chainermn/imagenet/train_imagenet.py b/examples/chainermn/imagenet/train_imagenet.py
--- a/examples/chainermn/imagenet/train_imagenet.py
+++ b/examples/chainermn/imagenet/train_imagenet.py
@@ -143,7 +143,7 @@
# https://chainermn.readthedocs.io/en/stable/tutorial/tips_faqs.html#using-multiprocessiterator
# This must be done *before* ``chainermn.create_communicator``!!!
multiprocessing.set_start_method('forkserver')
- p = multiprocessing.Process(target=lambda *x: x, args=())
+ p = multiprocessing.Process()
p.start()
p.join()
|
{"golden_diff": "diff --git a/examples/chainermn/imagenet/train_imagenet.py b/examples/chainermn/imagenet/train_imagenet.py\n--- a/examples/chainermn/imagenet/train_imagenet.py\n+++ b/examples/chainermn/imagenet/train_imagenet.py\n@@ -143,7 +143,7 @@\n # https://chainermn.readthedocs.io/en/stable/tutorial/tips_faqs.html#using-multiprocessiterator\n # This must be done *before* ``chainermn.create_communicator``!!!\n multiprocessing.set_start_method('forkserver')\n- p = multiprocessing.Process(target=lambda *x: x, args=())\n+ p = multiprocessing.Process()\n p.start()\n p.join()\n", "issue": "ChainerMN's ImageNet example is broken because of update in pickle module\n* Conditions\r\n<!-- If you're using Chainer 4.0+, you can also get this information by typing `python -c 'import chainer; chainer.print_runtime_info()'. -->\r\n - Chainer >= 5.0.0\r\n\r\n* Code to reproduce\r\nThe ImageNet example on Python >= 3.6\r\n\r\n```\r\nimport multiprocessing\r\np = multiprocessing.Process(target=lambda *x: x, args=())\r\np.start()\r\n```\r\n\r\n* Error messages, stack traces, or logs\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/process.py\", line 105, in start\r\n self._popen = self._Popen(self)\r\n File \"/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/context.py\", line 223, in _Popen\r\n return _default_context.get_context().Process._Popen(process_obj)\r\n File \"/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/context.py\", line 291, in _Popen\r\n return Popen(process_obj)\r\n File \"/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/popen_forkserver.py\", line 35, in __init__\r\n super().__init__(process_obj)\r\n File \"/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/popen_fork.py\", line 20, in __init__\r\n self._launch(process_obj)\r\n File \"/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/popen_forkserver.py\", line 47, in _launch\r\n reduction.dump(process_obj, buf)\r\n File \"/home/kfukuda/.pyenv/versions/anaconda3-4.3.1/lib/python3.6/multiprocessing/reduction.py\", line 60, in dump\r\n ForkingPickler(file, protocol).dump(obj)\r\n_pickle.PicklingError: Can't pickle <function <lambda> at 0x7fb633f35e18>: attribute lookup <lambda> on __main__ failed\r\n```\r\nFor details, see https://docs.python.org/3.6/library/pickle.html\r\n\r\n```\r\nThe following types can be pickled:\r\n\r\n* None, True, and False\r\n* integers, floating point numbers, complex numbers\r\n* strings, bytes, bytearrays\r\n* tuples, lists, sets, and dictionaries containing only picklable objects\r\n* functions defined at the top level of a module (using def, not lambda)\r\n* built-in functions defined at the top level of a module\r\n* classes that are defined at the top level of a module\r\n* instances of such classes whose __dict__ or the result of calling __getstate__() is picklable (see section Pickling Class Instances for details).\r\n```\r\n\r\nThus lambda is not pickable.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom __future__ import print_function\nimport argparse\nimport multiprocessing\nimport random\nimport sys\n\nimport numpy as np\n\nimport chainer\nimport chainer.cuda\nfrom chainer import training\nfrom chainer.training import extensions\n\nimport chainermn\n\n\nimport models.alex as alex\nimport models.googlenet as googlenet\nimport models.googlenetbn as googlenetbn\nimport models.nin as nin\nimport models.resnet50 as resnet50\n\n# Check Python version if it supports multiprocessing.set_start_method,\n# which was introduced in Python 3.4\nmajor, minor, _, _, _ = sys.version_info\nif major <= 2 or (major == 3 and minor < 4):\n sys.stderr.write('Error: ImageNet example uses '\n 'chainer.iterators.MultiprocessIterator, '\n 'which works only with Python >= 3.4. \\n'\n 'For more details, see '\n 'http://chainermn.readthedocs.io/en/master/'\n 'tutorial/tips_faqs.html#using-multiprocessiterator\\n')\n exit(-1)\n\n\nclass PreprocessedDataset(chainer.dataset.DatasetMixin):\n\n def __init__(self, path, root, mean, crop_size, random=True):\n self.base = chainer.datasets.LabeledImageDataset(path, root)\n self.mean = mean.astype(np.float32)\n self.crop_size = crop_size\n self.random = random\n\n def __len__(self):\n return len(self.base)\n\n def get_example(self, i):\n # It reads the i-th image/label pair and return a preprocessed image.\n # It applies following preprocesses:\n # - Cropping (random or center rectangular)\n # - Random flip\n # - Scaling to [0, 1] value\n crop_size = self.crop_size\n\n image, label = self.base[i]\n _, h, w = image.shape\n\n if self.random:\n # Randomly crop a region and flip the image\n top = random.randint(0, h - crop_size - 1)\n left = random.randint(0, w - crop_size - 1)\n if random.randint(0, 1):\n image = image[:, :, ::-1]\n else:\n # Crop the center\n top = (h - crop_size) // 2\n left = (w - crop_size) // 2\n bottom = top + crop_size\n right = left + crop_size\n\n image = image[:, top:bottom, left:right]\n image -= self.mean[:, top:bottom, left:right]\n image *= (1.0 / 255.0) # Scale to [0, 1]\n return image, label\n\n\n# chainermn.create_multi_node_evaluator can be also used with user customized\n# evaluator classes that inherit chainer.training.extensions.Evaluator.\nclass TestModeEvaluator(extensions.Evaluator):\n\n def evaluate(self):\n model = self.get_target('main')\n model.train = False\n ret = super(TestModeEvaluator, self).evaluate()\n model.train = True\n return ret\n\n\ndef main():\n # Check if GPU is available\n # (ImageNet example does not support CPU execution)\n if not chainer.cuda.available:\n raise RuntimeError('ImageNet requires GPU support.')\n\n archs = {\n 'alex': alex.Alex,\n 'googlenet': googlenet.GoogLeNet,\n 'googlenetbn': googlenetbn.GoogLeNetBN,\n 'nin': nin.NIN,\n 'resnet50': resnet50.ResNet50,\n }\n\n parser = argparse.ArgumentParser(\n description='Learning convnet from ILSVRC2012 dataset')\n parser.add_argument('train', help='Path to training image-label list file')\n parser.add_argument('val', help='Path to validation image-label list file')\n parser.add_argument('--arch', '-a', choices=archs.keys(), default='nin',\n help='Convnet architecture')\n parser.add_argument('--batchsize', '-B', type=int, default=32,\n help='Learning minibatch size')\n parser.add_argument('--epoch', '-E', type=int, default=10,\n help='Number of epochs to train')\n parser.add_argument('--initmodel',\n help='Initialize the model from given file')\n parser.add_argument('--loaderjob', '-j', type=int,\n help='Number of parallel data loading processes')\n parser.add_argument('--mean', '-m', default='mean.npy',\n help='Mean file (computed by compute_mean.py)')\n parser.add_argument('--resume', '-r', default='',\n help='Initialize the trainer from given file')\n parser.add_argument('--out', '-o', default='result',\n help='Output directory')\n parser.add_argument('--root', '-R', default='.',\n help='Root directory path of image files')\n parser.add_argument('--val_batchsize', '-b', type=int, default=250,\n help='Validation minibatch size')\n parser.add_argument('--test', action='store_true')\n parser.add_argument('--communicator', default='hierarchical')\n parser.set_defaults(test=False)\n args = parser.parse_args()\n\n # Start method of multiprocessing module need to be changed if we\n # are using InfiniBand and MultiprocessIterator. This is because\n # processes often crash when calling fork if they are using\n # Infiniband. (c.f.,\n # https://www.open-mpi.org/faq/?category=tuning#fork-warning )\n # Also, just setting the start method does not seem to be\n # sufficient to actually launch the forkserver processes, so also\n # start a dummy process.\n # See also our document:\n # https://chainermn.readthedocs.io/en/stable/tutorial/tips_faqs.html#using-multiprocessiterator\n # This must be done *before* ``chainermn.create_communicator``!!!\n multiprocessing.set_start_method('forkserver')\n p = multiprocessing.Process(target=lambda *x: x, args=())\n p.start()\n p.join()\n\n # Prepare ChainerMN communicator.\n comm = chainermn.create_communicator(args.communicator)\n device = comm.intra_rank\n\n if comm.rank == 0:\n print('==========================================')\n print('Num process (COMM_WORLD): {}'.format(comm.size))\n print('Using {} communicator'.format(args.communicator))\n print('Using {} arch'.format(args.arch))\n print('Num Minibatch-size: {}'.format(args.batchsize))\n print('Num epoch: {}'.format(args.epoch))\n print('==========================================')\n\n model = archs[args.arch]()\n if args.initmodel:\n print('Load model from', args.initmodel)\n chainer.serializers.load_npz(args.initmodel, model)\n\n chainer.cuda.get_device_from_id(device).use() # Make the GPU current\n model.to_gpu()\n\n # Split and distribute the dataset. Only worker 0 loads the whole dataset.\n # Datasets of worker 0 are evenly split and distributed to all workers.\n mean = np.load(args.mean)\n if comm.rank == 0:\n train = PreprocessedDataset(args.train, args.root, mean, model.insize)\n val = PreprocessedDataset(\n args.val, args.root, mean, model.insize, False)\n else:\n train = None\n val = None\n train = chainermn.scatter_dataset(train, comm, shuffle=True)\n val = chainermn.scatter_dataset(val, comm)\n\n # A workaround for processes crash should be done before making\n # communicator above, when using fork (e.g. MultiProcessIterator)\n # along with Infiniband.\n train_iter = chainer.iterators.MultiprocessIterator(\n train, args.batchsize, n_processes=args.loaderjob)\n val_iter = chainer.iterators.MultiprocessIterator(\n val, args.val_batchsize, repeat=False, n_processes=args.loaderjob)\n\n # Create a multi node optimizer from a standard Chainer optimizer.\n optimizer = chainermn.create_multi_node_optimizer(\n chainer.optimizers.MomentumSGD(lr=0.01, momentum=0.9), comm)\n optimizer.setup(model)\n\n # Set up a trainer\n updater = training.StandardUpdater(train_iter, optimizer, device=device)\n trainer = training.Trainer(updater, (args.epoch, 'epoch'), args.out)\n\n checkpoint_interval = (10, 'iteration') if args.test else (1, 'epoch')\n val_interval = (10, 'iteration') if args.test else (1, 'epoch')\n log_interval = (10, 'iteration') if args.test else (1, 'epoch')\n\n checkpointer = chainermn.create_multi_node_checkpointer(\n name='imagenet-example', comm=comm)\n checkpointer.maybe_load(trainer, optimizer)\n trainer.extend(checkpointer, trigger=checkpoint_interval)\n\n # Create a multi node evaluator from an evaluator.\n evaluator = TestModeEvaluator(val_iter, model, device=device)\n evaluator = chainermn.create_multi_node_evaluator(evaluator, comm)\n trainer.extend(evaluator, trigger=val_interval)\n\n # Some display and output extensions are necessary only for one worker.\n # (Otherwise, there would just be repeated outputs.)\n if comm.rank == 0:\n trainer.extend(extensions.DumpGraph('main/loss'))\n trainer.extend(extensions.LogReport(trigger=log_interval))\n trainer.extend(extensions.observe_lr(), trigger=log_interval)\n trainer.extend(extensions.PrintReport([\n 'epoch', 'iteration', 'main/loss', 'validation/main/loss',\n 'main/accuracy', 'validation/main/accuracy', 'lr'\n ]), trigger=log_interval)\n trainer.extend(extensions.ProgressBar(update_interval=10))\n\n if args.resume:\n chainer.serializers.load_npz(args.resume, trainer)\n\n trainer.run()\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/chainermn/imagenet/train_imagenet.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom __future__ import print_function\nimport argparse\nimport multiprocessing\nimport random\nimport sys\n\nimport numpy as np\n\nimport chainer\nimport chainer.cuda\nfrom chainer import training\nfrom chainer.training import extensions\n\nimport chainermn\n\n\nimport models.alex as alex\nimport models.googlenet as googlenet\nimport models.googlenetbn as googlenetbn\nimport models.nin as nin\nimport models.resnet50 as resnet50\n\n# Check Python version if it supports multiprocessing.set_start_method,\n# which was introduced in Python 3.4\nmajor, minor, _, _, _ = sys.version_info\nif major <= 2 or (major == 3 and minor < 4):\n sys.stderr.write('Error: ImageNet example uses '\n 'chainer.iterators.MultiprocessIterator, '\n 'which works only with Python >= 3.4. \\n'\n 'For more details, see '\n 'http://chainermn.readthedocs.io/en/master/'\n 'tutorial/tips_faqs.html#using-multiprocessiterator\\n')\n exit(-1)\n\n\nclass PreprocessedDataset(chainer.dataset.DatasetMixin):\n\n def __init__(self, path, root, mean, crop_size, random=True):\n self.base = chainer.datasets.LabeledImageDataset(path, root)\n self.mean = mean.astype(np.float32)\n self.crop_size = crop_size\n self.random = random\n\n def __len__(self):\n return len(self.base)\n\n def get_example(self, i):\n # It reads the i-th image/label pair and return a preprocessed image.\n # It applies following preprocesses:\n # - Cropping (random or center rectangular)\n # - Random flip\n # - Scaling to [0, 1] value\n crop_size = self.crop_size\n\n image, label = self.base[i]\n _, h, w = image.shape\n\n if self.random:\n # Randomly crop a region and flip the image\n top = random.randint(0, h - crop_size - 1)\n left = random.randint(0, w - crop_size - 1)\n if random.randint(0, 1):\n image = image[:, :, ::-1]\n else:\n # Crop the center\n top = (h - crop_size) // 2\n left = (w - crop_size) // 2\n bottom = top + crop_size\n right = left + crop_size\n\n image = image[:, top:bottom, left:right]\n image -= self.mean[:, top:bottom, left:right]\n image *= (1.0 / 255.0) # Scale to [0, 1]\n return image, label\n\n\n# chainermn.create_multi_node_evaluator can be also used with user customized\n# evaluator classes that inherit chainer.training.extensions.Evaluator.\nclass TestModeEvaluator(extensions.Evaluator):\n\n def evaluate(self):\n model = self.get_target('main')\n model.train = False\n ret = super(TestModeEvaluator, self).evaluate()\n model.train = True\n return ret\n\n\ndef main():\n # Check if GPU is available\n # (ImageNet example does not support CPU execution)\n if not chainer.cuda.available:\n raise RuntimeError('ImageNet requires GPU support.')\n\n archs = {\n 'alex': alex.Alex,\n 'googlenet': googlenet.GoogLeNet,\n 'googlenetbn': googlenetbn.GoogLeNetBN,\n 'nin': nin.NIN,\n 'resnet50': resnet50.ResNet50,\n }\n\n parser = argparse.ArgumentParser(\n description='Learning convnet from ILSVRC2012 dataset')\n parser.add_argument('train', help='Path to training image-label list file')\n parser.add_argument('val', help='Path to validation image-label list file')\n parser.add_argument('--arch', '-a', choices=archs.keys(), default='nin',\n help='Convnet architecture')\n parser.add_argument('--batchsize', '-B', type=int, default=32,\n help='Learning minibatch size')\n parser.add_argument('--epoch', '-E', type=int, default=10,\n help='Number of epochs to train')\n parser.add_argument('--initmodel',\n help='Initialize the model from given file')\n parser.add_argument('--loaderjob', '-j', type=int,\n help='Number of parallel data loading processes')\n parser.add_argument('--mean', '-m', default='mean.npy',\n help='Mean file (computed by compute_mean.py)')\n parser.add_argument('--resume', '-r', default='',\n help='Initialize the trainer from given file')\n parser.add_argument('--out', '-o', default='result',\n help='Output directory')\n parser.add_argument('--root', '-R', default='.',\n help='Root directory path of image files')\n parser.add_argument('--val_batchsize', '-b', type=int, default=250,\n help='Validation minibatch size')\n parser.add_argument('--test', action='store_true')\n parser.add_argument('--communicator', default='hierarchical')\n parser.set_defaults(test=False)\n args = parser.parse_args()\n\n # Start method of multiprocessing module need to be changed if we\n # are using InfiniBand and MultiprocessIterator. This is because\n # processes often crash when calling fork if they are using\n # Infiniband. (c.f.,\n # https://www.open-mpi.org/faq/?category=tuning#fork-warning )\n # Also, just setting the start method does not seem to be\n # sufficient to actually launch the forkserver processes, so also\n # start a dummy process.\n # See also our document:\n # https://chainermn.readthedocs.io/en/stable/tutorial/tips_faqs.html#using-multiprocessiterator\n # This must be done *before* ``chainermn.create_communicator``!!!\n multiprocessing.set_start_method('forkserver')\n p = multiprocessing.Process()\n p.start()\n p.join()\n\n # Prepare ChainerMN communicator.\n comm = chainermn.create_communicator(args.communicator)\n device = comm.intra_rank\n\n if comm.rank == 0:\n print('==========================================')\n print('Num process (COMM_WORLD): {}'.format(comm.size))\n print('Using {} communicator'.format(args.communicator))\n print('Using {} arch'.format(args.arch))\n print('Num Minibatch-size: {}'.format(args.batchsize))\n print('Num epoch: {}'.format(args.epoch))\n print('==========================================')\n\n model = archs[args.arch]()\n if args.initmodel:\n print('Load model from', args.initmodel)\n chainer.serializers.load_npz(args.initmodel, model)\n\n chainer.cuda.get_device_from_id(device).use() # Make the GPU current\n model.to_gpu()\n\n # Split and distribute the dataset. Only worker 0 loads the whole dataset.\n # Datasets of worker 0 are evenly split and distributed to all workers.\n mean = np.load(args.mean)\n if comm.rank == 0:\n train = PreprocessedDataset(args.train, args.root, mean, model.insize)\n val = PreprocessedDataset(\n args.val, args.root, mean, model.insize, False)\n else:\n train = None\n val = None\n train = chainermn.scatter_dataset(train, comm, shuffle=True)\n val = chainermn.scatter_dataset(val, comm)\n\n # A workaround for processes crash should be done before making\n # communicator above, when using fork (e.g. MultiProcessIterator)\n # along with Infiniband.\n train_iter = chainer.iterators.MultiprocessIterator(\n train, args.batchsize, n_processes=args.loaderjob)\n val_iter = chainer.iterators.MultiprocessIterator(\n val, args.val_batchsize, repeat=False, n_processes=args.loaderjob)\n\n # Create a multi node optimizer from a standard Chainer optimizer.\n optimizer = chainermn.create_multi_node_optimizer(\n chainer.optimizers.MomentumSGD(lr=0.01, momentum=0.9), comm)\n optimizer.setup(model)\n\n # Set up a trainer\n updater = training.StandardUpdater(train_iter, optimizer, device=device)\n trainer = training.Trainer(updater, (args.epoch, 'epoch'), args.out)\n\n checkpoint_interval = (10, 'iteration') if args.test else (1, 'epoch')\n val_interval = (10, 'iteration') if args.test else (1, 'epoch')\n log_interval = (10, 'iteration') if args.test else (1, 'epoch')\n\n checkpointer = chainermn.create_multi_node_checkpointer(\n name='imagenet-example', comm=comm)\n checkpointer.maybe_load(trainer, optimizer)\n trainer.extend(checkpointer, trigger=checkpoint_interval)\n\n # Create a multi node evaluator from an evaluator.\n evaluator = TestModeEvaluator(val_iter, model, device=device)\n evaluator = chainermn.create_multi_node_evaluator(evaluator, comm)\n trainer.extend(evaluator, trigger=val_interval)\n\n # Some display and output extensions are necessary only for one worker.\n # (Otherwise, there would just be repeated outputs.)\n if comm.rank == 0:\n trainer.extend(extensions.DumpGraph('main/loss'))\n trainer.extend(extensions.LogReport(trigger=log_interval))\n trainer.extend(extensions.observe_lr(), trigger=log_interval)\n trainer.extend(extensions.PrintReport([\n 'epoch', 'iteration', 'main/loss', 'validation/main/loss',\n 'main/accuracy', 'validation/main/accuracy', 'lr'\n ]), trigger=log_interval)\n trainer.extend(extensions.ProgressBar(update_interval=10))\n\n if args.resume:\n chainer.serializers.load_npz(args.resume, trainer)\n\n trainer.run()\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/chainermn/imagenet/train_imagenet.py"}]}
| 3,764 | 160 |
gh_patches_debug_35573
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-514
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add docstring for Correlation cost layer
1. Hide [correlation_cost](https://github.com/tensorflow/addons/blob/cffee80779e606870ebf2b7fce241eeb870053f5/tensorflow_addons/layers/optical_flow.py#L30) function, and make it private.
2. Add document for [CorrelationCost](https://github.com/tensorflow/addons/blob/cffee80779e606870ebf2b7fce241eeb870053f5/tensorflow_addons/layers/optical_flow.py#L143)
For details please check the discussion in https://github.com/tensorflow/addons/pull/207
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/layers/optical_flow.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ============================================================================
15 """Tensorflow op performing correlation cost operation."""
16
17 from __future__ import absolute_import
18 from __future__ import division
19 from __future__ import print_function
20
21 import tensorflow as tf
22 from tensorflow_addons.utils import keras_utils
23 from tensorflow_addons.utils.resource_loader import get_path_to_datafile
24
25 _correlation_cost_op_so = tf.load_op_library(
26 get_path_to_datafile("custom_ops/layers/_correlation_cost_ops.so"))
27
28
29 @tf.function
30 def correlation_cost(input_a,
31 input_b,
32 kernel_size,
33 max_displacement,
34 stride_1,
35 stride_2,
36 pad,
37 data_format='channels_last',
38 name=None):
39 """Correlation Cost Volume computation.
40
41 "FlowNet: Learning Optical Flow with Convolutional Networks"
42 Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Hausser,
43 Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt,
44 Daniel Cremers, Thomas Brox. https://arxiv.org/abs/1504.06852
45
46 Computes a cost volume using correlation for two inputs. For feature
47 maps A, B with spatial dimensions w, h, c it computes
48
49 output(a, b) = sum_{l in [-k,k]**2} < I(a+l), J(b+l) >
50
51 where the patches of size K=2d + 1 are centered in position a resp. b.
52
53 The output shape is [B, C', H', W'], where
54
55 r = max_displacement / stride_2;
56 bd = max_displacement + (kernel_size - 1) / 2
57 C' = (2 * r + 1) ** 2
58 H' = H + 2 * (pad - bd) / stride_1
59 W' = W + 2 * (pad - bd) / stride_1
60
61 Note: When the data_format requests "channels_last", an additional explicit
62 transpose operation is executed.
63
64 Args:
65 input_a: A `Tensor` of the format specified by `data_format`.
66 input_b: A `Tensor` of the format specified by `data_format`.
67 kernel_size: An integer specifying the height and width of the
68 patch used to compute the per-patch costs.
69 max_displacement: An integer specifying the maximum search radius
70 for each position.
71 stride_1: An integer specifying the stride length in the input.
72 stride_2: An integer specifying the stride length in the patch.
73 pad: An integer specifying the paddings in height and width.
74 data_format: Specifies the data format.
75 Possible values are:
76 "channels_last" float [batch, height, width, channels]
77 "channels_first" float [batch, channels, height, width]
78 Defaults to `"channels_last"`.
79 name: A name for the operation (optional).
80
81 Returns:
82 A `Tensor` of the format specified by `data_format`.
83 """
84
85 with tf.name_scope(name or "correlation_cost"):
86 op_call = _correlation_cost_op_so.addons_correlation_cost
87
88 if data_format == "channels_last":
89 op_data_format = "NHWC"
90 elif data_format == "channels_first":
91 op_data_format = "NCHW"
92 else:
93 raise ValueError("`data_format` must be either `channels_last` or"
94 "`channels_first`")
95
96 ret = op_call(
97 input_a,
98 input_b,
99 kernel_size=kernel_size,
100 max_displacement=max_displacement,
101 stride_1=stride_1,
102 stride_2=stride_2,
103 pad=pad,
104 data_format=op_data_format)
105 if data_format == 'channels_last':
106 # this is easier to maintain without
107 # specializing an additional cuda kernel
108 return tf.transpose(ret, [0, 2, 3, 1])
109 return ret
110
111
112 @tf.RegisterGradient("Addons>CorrelationCost")
113 def _correlation_cost_grad(op, grad_output):
114 kernel_size = op.get_attr("kernel_size")
115 max_displacement = op.get_attr("max_displacement")
116 stride_1 = op.get_attr("stride_1")
117 stride_2 = op.get_attr("stride_2")
118 pad = op.get_attr("pad")
119 data_format = op.get_attr("data_format")
120
121 input_a = tf.convert_to_tensor(op.inputs[0], name="input_a")
122 input_b = tf.convert_to_tensor(op.inputs[1], name="input_b")
123 grad_output_tensor = tf.convert_to_tensor(grad_output, name="grad_output")
124
125 op_call = _correlation_cost_op_so.addons_correlation_cost_grad
126 grads = op_call(
127 input_a,
128 input_b,
129 grad_output_tensor,
130 kernel_size=kernel_size,
131 max_displacement=max_displacement,
132 stride_1=stride_1,
133 stride_2=stride_2,
134 pad=pad,
135 data_format=data_format)
136
137 grad_input_a = tf.convert_to_tensor(grads[0], name="grad_input_a")
138 grad_input_b = tf.convert_to_tensor(grads[1], name="grad_input_b")
139 return [grad_input_a, grad_input_b]
140
141
142 @keras_utils.register_keras_custom_object
143 class CorrelationCost(tf.keras.layers.Layer):
144 def __init__(self, kernel_size, max_displacement, stride_1, stride_2, pad,
145 data_format, **kwargs):
146 self.kernel_size = kernel_size
147 self.max_displacement = max_displacement
148 self.stride_1 = stride_1
149 self.stride_2 = stride_2
150 self.pad = pad
151
152 if data_format != "channels_last" and data_format != "channels_first":
153 raise ValueError("`data_format` must be either `channels_last` or"
154 "`channels_first`, instead got %s" % data_format)
155
156 self.data_format = data_format
157
158 super(CorrelationCost, self).__init__(**kwargs)
159
160 def build(self, input_shape):
161 if not isinstance(input_shape, list):
162 raise ValueError("Input must be a list of two Tensors to process")
163 super(CorrelationCost, self).build(input_shape)
164
165 def call(self, inputs):
166 if not isinstance(inputs, list):
167 raise ValueError("Input must be a list of two Tensors to process")
168
169 input_a = tf.convert_to_tensor(inputs[0])
170 input_b = tf.convert_to_tensor(inputs[1])
171
172 return correlation_cost(
173 input_a,
174 input_b,
175 kernel_size=self.kernel_size,
176 max_displacement=self.max_displacement,
177 stride_1=self.stride_1,
178 stride_2=self.stride_2,
179 pad=self.pad,
180 data_format=self.data_format)
181
182 def compute_output_shape(self, input_shape):
183 assert isinstance(input_shape, list)
184 n = input_shape[0][0]
185 r = self.max_displacement / self.stride_2
186 bd = self.max_displacement + (self.kernel_size - 1) / 2
187 output_c = (2 * r + 1)**2
188
189 if self.data_format == "channels_first":
190 output_h = input_shape[0][1] + 2 * (self.pad - bd) / self.stride_1
191 output_w = input_shape[0][2] + 2 * (self.pad - bd) / self.stride_1
192 return [n, output_c, output_h, output_w]
193
194 elif self.data_format == "channels_last":
195 output_h = input_shape[0][0] + 2 * (self.pad - bd) / self.stride_1
196 output_w = input_shape[0][1] + 2 * (self.pad - bd) / self.stride_1
197 return [n, output_h, output_w, output_c]
198 else:
199 raise ValueError("`data_format` must be either `channels_last` or"
200 "`channels_first`")
201
202 def get_config(self):
203 config = {
204 'kernel_size': self.kernel_size,
205 'max_displacement': self.max_displacement,
206 'stride_1': self.stride_1,
207 'stride_2': self.stride_2,
208 'pad': self.pad,
209 'data_format': self.data_format
210 }
211
212 base_config = super(CorrelationCost, self).get_config()
213 return dict(list(base_config.items()) + list(config.items()))
214
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensorflow_addons/layers/optical_flow.py b/tensorflow_addons/layers/optical_flow.py
--- a/tensorflow_addons/layers/optical_flow.py
+++ b/tensorflow_addons/layers/optical_flow.py
@@ -27,15 +27,15 @@
@tf.function
-def correlation_cost(input_a,
- input_b,
- kernel_size,
- max_displacement,
- stride_1,
- stride_2,
- pad,
- data_format='channels_last',
- name=None):
+def _correlation_cost(input_a,
+ input_b,
+ kernel_size,
+ max_displacement,
+ stride_1,
+ stride_2,
+ pad,
+ data_format='channels_last',
+ name=None):
"""Correlation Cost Volume computation.
"FlowNet: Learning Optical Flow with Convolutional Networks"
@@ -141,6 +141,27 @@
@keras_utils.register_keras_custom_object
class CorrelationCost(tf.keras.layers.Layer):
+ """Correlation Cost Layer.
+
+ This layer implements the correlation operation from FlowNet Learning
+ Optical Flow with Convolutional Networks (Fischer et al.):
+ https://arxiv.org/abs/1504.06
+
+ Args:
+ kernel_size: An integer specifying the height and width of the
+ patch used to compute the per-patch costs.
+ max_displacement: An integer specifying the maximum search radius
+ for each position.
+ stride_1: An integer specifying the stride length in the input.
+ stride_2: An integer specifying the stride length in the patch.
+ pad: An integer specifying the paddings in height and width.
+ data_format: Specifies the data format.
+ Possible values are:
+ "channels_last" float [batch, height, width, channels]
+ "channels_first" float [batch, channels, height, width]
+ Defaults to `"channels_last"`.
+ """
+
def __init__(self, kernel_size, max_displacement, stride_1, stride_2, pad,
data_format, **kwargs):
self.kernel_size = kernel_size
@@ -169,7 +190,7 @@
input_a = tf.convert_to_tensor(inputs[0])
input_b = tf.convert_to_tensor(inputs[1])
- return correlation_cost(
+ return _correlation_cost(
input_a,
input_b,
kernel_size=self.kernel_size,
|
{"golden_diff": "diff --git a/tensorflow_addons/layers/optical_flow.py b/tensorflow_addons/layers/optical_flow.py\n--- a/tensorflow_addons/layers/optical_flow.py\n+++ b/tensorflow_addons/layers/optical_flow.py\n@@ -27,15 +27,15 @@\n \n \n @tf.function\n-def correlation_cost(input_a,\n- input_b,\n- kernel_size,\n- max_displacement,\n- stride_1,\n- stride_2,\n- pad,\n- data_format='channels_last',\n- name=None):\n+def _correlation_cost(input_a,\n+ input_b,\n+ kernel_size,\n+ max_displacement,\n+ stride_1,\n+ stride_2,\n+ pad,\n+ data_format='channels_last',\n+ name=None):\n \"\"\"Correlation Cost Volume computation.\n \n \"FlowNet: Learning Optical Flow with Convolutional Networks\"\n@@ -141,6 +141,27 @@\n \n @keras_utils.register_keras_custom_object\n class CorrelationCost(tf.keras.layers.Layer):\n+ \"\"\"Correlation Cost Layer.\n+\n+ This layer implements the correlation operation from FlowNet Learning\n+ Optical Flow with Convolutional Networks (Fischer et al.):\n+ https://arxiv.org/abs/1504.06\n+\n+ Args:\n+ kernel_size: An integer specifying the height and width of the\n+ patch used to compute the per-patch costs.\n+ max_displacement: An integer specifying the maximum search radius\n+ for each position.\n+ stride_1: An integer specifying the stride length in the input.\n+ stride_2: An integer specifying the stride length in the patch.\n+ pad: An integer specifying the paddings in height and width.\n+ data_format: Specifies the data format.\n+ Possible values are:\n+ \"channels_last\" float [batch, height, width, channels]\n+ \"channels_first\" float [batch, channels, height, width]\n+ Defaults to `\"channels_last\"`.\n+ \"\"\"\n+\n def __init__(self, kernel_size, max_displacement, stride_1, stride_2, pad,\n data_format, **kwargs):\n self.kernel_size = kernel_size\n@@ -169,7 +190,7 @@\n input_a = tf.convert_to_tensor(inputs[0])\n input_b = tf.convert_to_tensor(inputs[1])\n \n- return correlation_cost(\n+ return _correlation_cost(\n input_a,\n input_b,\n kernel_size=self.kernel_size,\n", "issue": "Add docstring for Correlation cost layer\n1. Hide [correlation_cost](https://github.com/tensorflow/addons/blob/cffee80779e606870ebf2b7fce241eeb870053f5/tensorflow_addons/layers/optical_flow.py#L30) function, and make it private.\r\n2. Add document for [CorrelationCost](https://github.com/tensorflow/addons/blob/cffee80779e606870ebf2b7fce241eeb870053f5/tensorflow_addons/layers/optical_flow.py#L143)\r\n\r\nFor details please check the discussion in https://github.com/tensorflow/addons/pull/207\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ============================================================================\n\"\"\"Tensorflow op performing correlation cost operation.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nfrom tensorflow_addons.utils import keras_utils\nfrom tensorflow_addons.utils.resource_loader import get_path_to_datafile\n\n_correlation_cost_op_so = tf.load_op_library(\n get_path_to_datafile(\"custom_ops/layers/_correlation_cost_ops.so\"))\n\n\[email protected]\ndef correlation_cost(input_a,\n input_b,\n kernel_size,\n max_displacement,\n stride_1,\n stride_2,\n pad,\n data_format='channels_last',\n name=None):\n \"\"\"Correlation Cost Volume computation.\n\n \"FlowNet: Learning Optical Flow with Convolutional Networks\"\n Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Hausser,\n Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt,\n Daniel Cremers, Thomas Brox. https://arxiv.org/abs/1504.06852\n\n Computes a cost volume using correlation for two inputs. For feature\n maps A, B with spatial dimensions w, h, c it computes\n\n output(a, b) = sum_{l in [-k,k]**2} < I(a+l), J(b+l) >\n\n where the patches of size K=2d + 1 are centered in position a resp. b.\n\n The output shape is [B, C', H', W'], where\n\n r = max_displacement / stride_2;\n bd = max_displacement + (kernel_size - 1) / 2\n C' = (2 * r + 1) ** 2\n H' = H + 2 * (pad - bd) / stride_1\n W' = W + 2 * (pad - bd) / stride_1\n\n Note: When the data_format requests \"channels_last\", an additional explicit\n transpose operation is executed.\n\n Args:\n input_a: A `Tensor` of the format specified by `data_format`.\n input_b: A `Tensor` of the format specified by `data_format`.\n kernel_size: An integer specifying the height and width of the\n patch used to compute the per-patch costs.\n max_displacement: An integer specifying the maximum search radius\n for each position.\n stride_1: An integer specifying the stride length in the input.\n stride_2: An integer specifying the stride length in the patch.\n pad: An integer specifying the paddings in height and width.\n data_format: Specifies the data format.\n Possible values are:\n \"channels_last\" float [batch, height, width, channels]\n \"channels_first\" float [batch, channels, height, width]\n Defaults to `\"channels_last\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the format specified by `data_format`.\n \"\"\"\n\n with tf.name_scope(name or \"correlation_cost\"):\n op_call = _correlation_cost_op_so.addons_correlation_cost\n\n if data_format == \"channels_last\":\n op_data_format = \"NHWC\"\n elif data_format == \"channels_first\":\n op_data_format = \"NCHW\"\n else:\n raise ValueError(\"`data_format` must be either `channels_last` or\"\n \"`channels_first`\")\n\n ret = op_call(\n input_a,\n input_b,\n kernel_size=kernel_size,\n max_displacement=max_displacement,\n stride_1=stride_1,\n stride_2=stride_2,\n pad=pad,\n data_format=op_data_format)\n if data_format == 'channels_last':\n # this is easier to maintain without\n # specializing an additional cuda kernel\n return tf.transpose(ret, [0, 2, 3, 1])\n return ret\n\n\[email protected](\"Addons>CorrelationCost\")\ndef _correlation_cost_grad(op, grad_output):\n kernel_size = op.get_attr(\"kernel_size\")\n max_displacement = op.get_attr(\"max_displacement\")\n stride_1 = op.get_attr(\"stride_1\")\n stride_2 = op.get_attr(\"stride_2\")\n pad = op.get_attr(\"pad\")\n data_format = op.get_attr(\"data_format\")\n\n input_a = tf.convert_to_tensor(op.inputs[0], name=\"input_a\")\n input_b = tf.convert_to_tensor(op.inputs[1], name=\"input_b\")\n grad_output_tensor = tf.convert_to_tensor(grad_output, name=\"grad_output\")\n\n op_call = _correlation_cost_op_so.addons_correlation_cost_grad\n grads = op_call(\n input_a,\n input_b,\n grad_output_tensor,\n kernel_size=kernel_size,\n max_displacement=max_displacement,\n stride_1=stride_1,\n stride_2=stride_2,\n pad=pad,\n data_format=data_format)\n\n grad_input_a = tf.convert_to_tensor(grads[0], name=\"grad_input_a\")\n grad_input_b = tf.convert_to_tensor(grads[1], name=\"grad_input_b\")\n return [grad_input_a, grad_input_b]\n\n\n@keras_utils.register_keras_custom_object\nclass CorrelationCost(tf.keras.layers.Layer):\n def __init__(self, kernel_size, max_displacement, stride_1, stride_2, pad,\n data_format, **kwargs):\n self.kernel_size = kernel_size\n self.max_displacement = max_displacement\n self.stride_1 = stride_1\n self.stride_2 = stride_2\n self.pad = pad\n\n if data_format != \"channels_last\" and data_format != \"channels_first\":\n raise ValueError(\"`data_format` must be either `channels_last` or\"\n \"`channels_first`, instead got %s\" % data_format)\n\n self.data_format = data_format\n\n super(CorrelationCost, self).__init__(**kwargs)\n\n def build(self, input_shape):\n if not isinstance(input_shape, list):\n raise ValueError(\"Input must be a list of two Tensors to process\")\n super(CorrelationCost, self).build(input_shape)\n\n def call(self, inputs):\n if not isinstance(inputs, list):\n raise ValueError(\"Input must be a list of two Tensors to process\")\n\n input_a = tf.convert_to_tensor(inputs[0])\n input_b = tf.convert_to_tensor(inputs[1])\n\n return correlation_cost(\n input_a,\n input_b,\n kernel_size=self.kernel_size,\n max_displacement=self.max_displacement,\n stride_1=self.stride_1,\n stride_2=self.stride_2,\n pad=self.pad,\n data_format=self.data_format)\n\n def compute_output_shape(self, input_shape):\n assert isinstance(input_shape, list)\n n = input_shape[0][0]\n r = self.max_displacement / self.stride_2\n bd = self.max_displacement + (self.kernel_size - 1) / 2\n output_c = (2 * r + 1)**2\n\n if self.data_format == \"channels_first\":\n output_h = input_shape[0][1] + 2 * (self.pad - bd) / self.stride_1\n output_w = input_shape[0][2] + 2 * (self.pad - bd) / self.stride_1\n return [n, output_c, output_h, output_w]\n\n elif self.data_format == \"channels_last\":\n output_h = input_shape[0][0] + 2 * (self.pad - bd) / self.stride_1\n output_w = input_shape[0][1] + 2 * (self.pad - bd) / self.stride_1\n return [n, output_h, output_w, output_c]\n else:\n raise ValueError(\"`data_format` must be either `channels_last` or\"\n \"`channels_first`\")\n\n def get_config(self):\n config = {\n 'kernel_size': self.kernel_size,\n 'max_displacement': self.max_displacement,\n 'stride_1': self.stride_1,\n 'stride_2': self.stride_2,\n 'pad': self.pad,\n 'data_format': self.data_format\n }\n\n base_config = super(CorrelationCost, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))\n", "path": "tensorflow_addons/layers/optical_flow.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ============================================================================\n\"\"\"Tensorflow op performing correlation cost operation.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nfrom tensorflow_addons.utils import keras_utils\nfrom tensorflow_addons.utils.resource_loader import get_path_to_datafile\n\n_correlation_cost_op_so = tf.load_op_library(\n get_path_to_datafile(\"custom_ops/layers/_correlation_cost_ops.so\"))\n\n\[email protected]\ndef _correlation_cost(input_a,\n input_b,\n kernel_size,\n max_displacement,\n stride_1,\n stride_2,\n pad,\n data_format='channels_last',\n name=None):\n \"\"\"Correlation Cost Volume computation.\n\n \"FlowNet: Learning Optical Flow with Convolutional Networks\"\n Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Hausser,\n Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt,\n Daniel Cremers, Thomas Brox. https://arxiv.org/abs/1504.06852\n\n Computes a cost volume using correlation for two inputs. For feature\n maps A, B with spatial dimensions w, h, c it computes\n\n output(a, b) = sum_{l in [-k,k]**2} < I(a+l), J(b+l) >\n\n where the patches of size K=2d + 1 are centered in position a resp. b.\n\n The output shape is [B, C', H', W'], where\n\n r = max_displacement / stride_2;\n bd = max_displacement + (kernel_size - 1) / 2\n C' = (2 * r + 1) ** 2\n H' = H + 2 * (pad - bd) / stride_1\n W' = W + 2 * (pad - bd) / stride_1\n\n Note: When the data_format requests \"channels_last\", an additional explicit\n transpose operation is executed.\n\n Args:\n input_a: A `Tensor` of the format specified by `data_format`.\n input_b: A `Tensor` of the format specified by `data_format`.\n kernel_size: An integer specifying the height and width of the\n patch used to compute the per-patch costs.\n max_displacement: An integer specifying the maximum search radius\n for each position.\n stride_1: An integer specifying the stride length in the input.\n stride_2: An integer specifying the stride length in the patch.\n pad: An integer specifying the paddings in height and width.\n data_format: Specifies the data format.\n Possible values are:\n \"channels_last\" float [batch, height, width, channels]\n \"channels_first\" float [batch, channels, height, width]\n Defaults to `\"channels_last\"`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` of the format specified by `data_format`.\n \"\"\"\n\n with tf.name_scope(name or \"correlation_cost\"):\n op_call = _correlation_cost_op_so.addons_correlation_cost\n\n if data_format == \"channels_last\":\n op_data_format = \"NHWC\"\n elif data_format == \"channels_first\":\n op_data_format = \"NCHW\"\n else:\n raise ValueError(\"`data_format` must be either `channels_last` or\"\n \"`channels_first`\")\n\n ret = op_call(\n input_a,\n input_b,\n kernel_size=kernel_size,\n max_displacement=max_displacement,\n stride_1=stride_1,\n stride_2=stride_2,\n pad=pad,\n data_format=op_data_format)\n if data_format == 'channels_last':\n # this is easier to maintain without\n # specializing an additional cuda kernel\n return tf.transpose(ret, [0, 2, 3, 1])\n return ret\n\n\[email protected](\"Addons>CorrelationCost\")\ndef _correlation_cost_grad(op, grad_output):\n kernel_size = op.get_attr(\"kernel_size\")\n max_displacement = op.get_attr(\"max_displacement\")\n stride_1 = op.get_attr(\"stride_1\")\n stride_2 = op.get_attr(\"stride_2\")\n pad = op.get_attr(\"pad\")\n data_format = op.get_attr(\"data_format\")\n\n input_a = tf.convert_to_tensor(op.inputs[0], name=\"input_a\")\n input_b = tf.convert_to_tensor(op.inputs[1], name=\"input_b\")\n grad_output_tensor = tf.convert_to_tensor(grad_output, name=\"grad_output\")\n\n op_call = _correlation_cost_op_so.addons_correlation_cost_grad\n grads = op_call(\n input_a,\n input_b,\n grad_output_tensor,\n kernel_size=kernel_size,\n max_displacement=max_displacement,\n stride_1=stride_1,\n stride_2=stride_2,\n pad=pad,\n data_format=data_format)\n\n grad_input_a = tf.convert_to_tensor(grads[0], name=\"grad_input_a\")\n grad_input_b = tf.convert_to_tensor(grads[1], name=\"grad_input_b\")\n return [grad_input_a, grad_input_b]\n\n\n@keras_utils.register_keras_custom_object\nclass CorrelationCost(tf.keras.layers.Layer):\n \"\"\"Correlation Cost Layer.\n\n This layer implements the correlation operation from FlowNet Learning\n Optical Flow with Convolutional Networks (Fischer et al.):\n https://arxiv.org/abs/1504.06\n\n Args:\n kernel_size: An integer specifying the height and width of the\n patch used to compute the per-patch costs.\n max_displacement: An integer specifying the maximum search radius\n for each position.\n stride_1: An integer specifying the stride length in the input.\n stride_2: An integer specifying the stride length in the patch.\n pad: An integer specifying the paddings in height and width.\n data_format: Specifies the data format.\n Possible values are:\n \"channels_last\" float [batch, height, width, channels]\n \"channels_first\" float [batch, channels, height, width]\n Defaults to `\"channels_last\"`.\n \"\"\"\n\n def __init__(self, kernel_size, max_displacement, stride_1, stride_2, pad,\n data_format, **kwargs):\n self.kernel_size = kernel_size\n self.max_displacement = max_displacement\n self.stride_1 = stride_1\n self.stride_2 = stride_2\n self.pad = pad\n\n if data_format != \"channels_last\" and data_format != \"channels_first\":\n raise ValueError(\"`data_format` must be either `channels_last` or\"\n \"`channels_first`, instead got %s\" % data_format)\n\n self.data_format = data_format\n\n super(CorrelationCost, self).__init__(**kwargs)\n\n def build(self, input_shape):\n if not isinstance(input_shape, list):\n raise ValueError(\"Input must be a list of two Tensors to process\")\n super(CorrelationCost, self).build(input_shape)\n\n def call(self, inputs):\n if not isinstance(inputs, list):\n raise ValueError(\"Input must be a list of two Tensors to process\")\n\n input_a = tf.convert_to_tensor(inputs[0])\n input_b = tf.convert_to_tensor(inputs[1])\n\n return _correlation_cost(\n input_a,\n input_b,\n kernel_size=self.kernel_size,\n max_displacement=self.max_displacement,\n stride_1=self.stride_1,\n stride_2=self.stride_2,\n pad=self.pad,\n data_format=self.data_format)\n\n def compute_output_shape(self, input_shape):\n assert isinstance(input_shape, list)\n n = input_shape[0][0]\n r = self.max_displacement / self.stride_2\n bd = self.max_displacement + (self.kernel_size - 1) / 2\n output_c = (2 * r + 1)**2\n\n if self.data_format == \"channels_first\":\n output_h = input_shape[0][1] + 2 * (self.pad - bd) / self.stride_1\n output_w = input_shape[0][2] + 2 * (self.pad - bd) / self.stride_1\n return [n, output_c, output_h, output_w]\n\n elif self.data_format == \"channels_last\":\n output_h = input_shape[0][0] + 2 * (self.pad - bd) / self.stride_1\n output_w = input_shape[0][1] + 2 * (self.pad - bd) / self.stride_1\n return [n, output_h, output_w, output_c]\n else:\n raise ValueError(\"`data_format` must be either `channels_last` or\"\n \"`channels_first`\")\n\n def get_config(self):\n config = {\n 'kernel_size': self.kernel_size,\n 'max_displacement': self.max_displacement,\n 'stride_1': self.stride_1,\n 'stride_2': self.stride_2,\n 'pad': self.pad,\n 'data_format': self.data_format\n }\n\n base_config = super(CorrelationCost, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))\n", "path": "tensorflow_addons/layers/optical_flow.py"}]}
| 2,932 | 561 |
gh_patches_debug_17989
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-2336
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEATURE]: Make CUDA extensions default to optional
### Describe the feature
When installing `Colossal-AI` with `pip install .` from source or `pip install colossalai` from pypi, we always build CUDA extensions by default. However, this brings many problems for users who do not have a well-built environment with CUDA and torch. As a result, they failed at the first step, which is installation. I would suggest not to build extensions by default. The users can build extensions in two ways:
1. export `CUDA_EXT=1` when running `pip install`
2. build CUDA kernels only when the kernel is used
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import re
3
4 from setuptools import find_packages, setup
5
6 from op_builder.utils import get_cuda_bare_metal_version
7
8 try:
9 import torch
10 from torch.utils.cpp_extension import CUDA_HOME, BuildExtension, CUDAExtension
11 print("\n\ntorch.__version__ = {}\n\n".format(torch.__version__))
12 TORCH_MAJOR = int(torch.__version__.split('.')[0])
13 TORCH_MINOR = int(torch.__version__.split('.')[1])
14
15 if TORCH_MAJOR < 1 or (TORCH_MAJOR == 1 and TORCH_MINOR < 10):
16 raise RuntimeError("Colossal-AI requires Pytorch 1.10 or newer.\n"
17 "The latest stable release can be obtained from https://pytorch.org/")
18 TORCH_AVAILABLE = True
19 except ImportError:
20 TORCH_AVAILABLE = False
21
22
23 # ninja build does not work unless include_dirs are abs path
24 this_dir = os.path.dirname(os.path.abspath(__file__))
25 build_cuda_ext = True
26 ext_modules = []
27
28 if int(os.environ.get('NO_CUDA_EXT', '0')) == 1 or not TORCH_AVAILABLE:
29 build_cuda_ext = False
30
31
32 def check_cuda_torch_binary_vs_bare_metal(cuda_dir):
33 raw_output, bare_metal_major, bare_metal_minor = get_cuda_bare_metal_version(cuda_dir)
34 torch_binary_major = torch.version.cuda.split(".")[0]
35 torch_binary_minor = torch.version.cuda.split(".")[1]
36
37 print("\nCompiling cuda extensions with")
38 print(raw_output + "from " + cuda_dir + "/bin\n")
39
40 if bare_metal_major != torch_binary_major:
41 print(f'The detected CUDA version ({raw_output}) mismatches the version that was used to compile PyTorch '
42 f'({torch.version.cuda}). CUDA extension will not be installed.')
43 return False
44
45 if bare_metal_minor != torch_binary_minor:
46 print("\nWarning: Cuda extensions are being compiled with a version of Cuda that does "
47 "not match the version used to compile Pytorch binaries. "
48 f"Pytorch binaries were compiled with Cuda {torch.version.cuda}.\n"
49 "In some cases, a minor-version mismatch will not cause later errors: "
50 "https://github.com/NVIDIA/apex/pull/323#discussion_r287021798. ")
51 return True
52
53
54 def check_cuda_availability(cuda_dir):
55 if not torch.cuda.is_available():
56 # https://github.com/NVIDIA/apex/issues/486
57 # Extension builds after https://github.com/pytorch/pytorch/pull/23408 attempt to query
58 # torch.cuda.get_device_capability(), which will fail if you are compiling in an environment
59 # without visible GPUs (e.g. during an nvidia-docker build command).
60 print(
61 '\nWarning: Torch did not find available GPUs on this system.\n',
62 'If your intention is to cross-compile, this is not an error.\n'
63 'By default, Colossal-AI will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2),\n'
64 'Volta (compute capability 7.0), Turing (compute capability 7.5),\n'
65 'and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0).\n'
66 'If you wish to cross-compile for a single specific architecture,\n'
67 'export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.\n')
68 if os.environ.get("TORCH_CUDA_ARCH_LIST", None) is None:
69 _, bare_metal_major, _ = get_cuda_bare_metal_version(cuda_dir)
70 if int(bare_metal_major) == 11:
71 os.environ["TORCH_CUDA_ARCH_LIST"] = "6.0;6.1;6.2;7.0;7.5;8.0"
72 else:
73 os.environ["TORCH_CUDA_ARCH_LIST"] = "6.0;6.1;6.2;7.0;7.5"
74 return False
75
76 if cuda_dir is None:
77 print("nvcc was not found. CUDA extension will not be installed. If you're installing within a container from "
78 "https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.")
79 return False
80 return True
81
82
83 def append_nvcc_threads(nvcc_extra_args):
84 _, bare_metal_major, bare_metal_minor = get_cuda_bare_metal_version(CUDA_HOME)
85 if int(bare_metal_major) >= 11 and int(bare_metal_minor) >= 2:
86 return nvcc_extra_args + ["--threads", "4"]
87 return nvcc_extra_args
88
89
90 def fetch_requirements(path):
91 with open(path, 'r') as fd:
92 return [r.strip() for r in fd.readlines()]
93
94
95 def fetch_readme():
96 with open('README.md', encoding='utf-8') as f:
97 return f.read()
98
99
100 def get_version():
101 setup_file_path = os.path.abspath(__file__)
102 project_path = os.path.dirname(setup_file_path)
103 version_txt_path = os.path.join(project_path, 'version.txt')
104 version_py_path = os.path.join(project_path, 'colossalai/version.py')
105
106 with open(version_txt_path) as f:
107 version = f.read().strip()
108 if build_cuda_ext:
109 torch_version = '.'.join(torch.__version__.split('.')[:2])
110 cuda_version = '.'.join(get_cuda_bare_metal_version(CUDA_HOME)[1:])
111 version += f'+torch{torch_version}cu{cuda_version}'
112
113 # write version into version.py
114 with open(version_py_path, 'w') as f:
115 f.write(f"__version__ = '{version}'\n")
116
117 return version
118
119
120 if build_cuda_ext:
121 build_cuda_ext = check_cuda_availability(CUDA_HOME) and check_cuda_torch_binary_vs_bare_metal(CUDA_HOME)
122
123 if build_cuda_ext:
124 # Set up macros for forward/backward compatibility hack around
125 # https://github.com/pytorch/pytorch/commit/4404762d7dd955383acee92e6f06b48144a0742e
126 # and
127 # https://github.com/NVIDIA/apex/issues/456
128 # https://github.com/pytorch/pytorch/commit/eb7b39e02f7d75c26d8a795ea8c7fd911334da7e#diff-4632522f237f1e4e728cb824300403ac
129 version_dependent_macros = ['-DVERSION_GE_1_1', '-DVERSION_GE_1_3', '-DVERSION_GE_1_5']
130
131 def cuda_ext_helper(name, sources, extra_cuda_flags, extra_cxx_flags=[]):
132 return CUDAExtension(
133 name=name,
134 sources=[os.path.join('colossalai/kernel/cuda_native/csrc', path) for path in sources],
135 include_dirs=[os.path.join(this_dir, 'colossalai/kernel/cuda_native/csrc/kernels/include')],
136 extra_compile_args={
137 'cxx': ['-O3'] + version_dependent_macros + extra_cxx_flags,
138 'nvcc': append_nvcc_threads(['-O3', '--use_fast_math'] + version_dependent_macros + extra_cuda_flags)
139 })
140
141 #### fused optim kernels ###
142 from op_builder import FusedOptimBuilder
143 ext_modules.append(FusedOptimBuilder().builder('colossalai._C.fused_optim'))
144
145 #### N-D parallel kernels ###
146 cc_flag = []
147 for arch in torch.cuda.get_arch_list():
148 res = re.search(r'sm_(\d+)', arch)
149 if res:
150 arch_cap = res[1]
151 if int(arch_cap) >= 60:
152 cc_flag.extend(['-gencode', f'arch=compute_{arch_cap},code={arch}'])
153
154 extra_cuda_flags = [
155 '-U__CUDA_NO_HALF_OPERATORS__', '-U__CUDA_NO_HALF_CONVERSIONS__', '--expt-relaxed-constexpr',
156 '--expt-extended-lambda'
157 ]
158
159 from op_builder import ScaledSoftmaxBuilder
160 ext_modules.append(ScaledSoftmaxBuilder().builder('colossalai._C.scaled_upper_triang_masked_softmax'))
161
162 ext_modules.append(
163 cuda_ext_helper('colossalai._C.scaled_masked_softmax',
164 ['scaled_masked_softmax.cpp', 'scaled_masked_softmax_cuda.cu'], extra_cuda_flags + cc_flag))
165
166 from op_builder import MOEBuilder
167 ext_modules.append(MOEBuilder().builder('colossalai._C.moe'))
168
169 extra_cuda_flags = ['-maxrregcount=50']
170
171 ext_modules.append(
172 cuda_ext_helper('colossalai._C.layer_norm', ['layer_norm_cuda.cpp', 'layer_norm_cuda_kernel.cu'],
173 extra_cuda_flags + cc_flag))
174
175 ### MultiHeadAttn Kernel ####
176 from op_builder import MultiHeadAttnBuilder
177 ext_modules.append(MultiHeadAttnBuilder().builder('colossalai._C.multihead_attention'))
178
179 ### Gemini Adam kernel ####
180 from op_builder import CPUAdamBuilder
181 ext_modules.append(CPUAdamBuilder().builder('colossalai._C.cpu_optim'))
182
183 setup(name='colossalai',
184 version=get_version(),
185 packages=find_packages(exclude=(
186 'benchmark',
187 'docker',
188 'tests',
189 'docs',
190 'examples',
191 'tests',
192 'scripts',
193 'requirements',
194 '*.egg-info',
195 )),
196 description='An integrated large-scale model training system with efficient parallelization techniques',
197 long_description=fetch_readme(),
198 long_description_content_type='text/markdown',
199 license='Apache Software License 2.0',
200 url='https://www.colossalai.org',
201 project_urls={
202 'Forum': 'https://github.com/hpcaitech/ColossalAI/discussions',
203 'Bug Tracker': 'https://github.com/hpcaitech/ColossalAI/issues',
204 'Examples': 'https://github.com/hpcaitech/ColossalAI-Examples',
205 'Documentation': 'http://colossalai.readthedocs.io',
206 'Github': 'https://github.com/hpcaitech/ColossalAI',
207 },
208 ext_modules=ext_modules,
209 cmdclass={'build_ext': BuildExtension} if ext_modules else {},
210 install_requires=fetch_requirements('requirements/requirements.txt'),
211 entry_points='''
212 [console_scripts]
213 colossalai=colossalai.cli:cli
214 ''',
215 python_requires='>=3.6',
216 classifiers=[
217 'Programming Language :: Python :: 3',
218 'License :: OSI Approved :: Apache Software License',
219 'Environment :: GPU :: NVIDIA CUDA',
220 'Topic :: Scientific/Engineering :: Artificial Intelligence',
221 'Topic :: System :: Distributed Computing',
222 ],
223 package_data={'colossalai': ['_C/*.pyi']})
224
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,15 +18,22 @@
TORCH_AVAILABLE = True
except ImportError:
TORCH_AVAILABLE = False
+ CUDA_HOME = None
# ninja build does not work unless include_dirs are abs path
this_dir = os.path.dirname(os.path.abspath(__file__))
-build_cuda_ext = True
+build_cuda_ext = False
ext_modules = []
-if int(os.environ.get('NO_CUDA_EXT', '0')) == 1 or not TORCH_AVAILABLE:
- build_cuda_ext = False
+if int(os.environ.get('CUDA_EXT', '0')) == 1:
+ if not TORCH_AVAILABLE:
+ raise ModuleNotFoundError("PyTorch is not found while CUDA_EXT=1. You need to install PyTorch first in order to build CUDA extensions")
+
+ if not CUDA_HOME:
+ raise RuntimeError("CUDA_HOME is not found while CUDA_EXT=1. You need to export CUDA_HOME environment vairable or install CUDA Toolkit first in order to build CUDA extensions")
+
+ build_cuda_ext = True
def check_cuda_torch_binary_vs_bare_metal(cuda_dir):
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,15 +18,22 @@\n TORCH_AVAILABLE = True\n except ImportError:\n TORCH_AVAILABLE = False\n+ CUDA_HOME = None\n \n \n # ninja build does not work unless include_dirs are abs path\n this_dir = os.path.dirname(os.path.abspath(__file__))\n-build_cuda_ext = True\n+build_cuda_ext = False\n ext_modules = []\n \n-if int(os.environ.get('NO_CUDA_EXT', '0')) == 1 or not TORCH_AVAILABLE:\n- build_cuda_ext = False\n+if int(os.environ.get('CUDA_EXT', '0')) == 1:\n+ if not TORCH_AVAILABLE:\n+ raise ModuleNotFoundError(\"PyTorch is not found while CUDA_EXT=1. You need to install PyTorch first in order to build CUDA extensions\")\n+\n+ if not CUDA_HOME:\n+ raise RuntimeError(\"CUDA_HOME is not found while CUDA_EXT=1. You need to export CUDA_HOME environment vairable or install CUDA Toolkit first in order to build CUDA extensions\")\n+\n+ build_cuda_ext = True\n \n \n def check_cuda_torch_binary_vs_bare_metal(cuda_dir):\n", "issue": "[FEATURE]: Make CUDA extensions default to optional\n### Describe the feature\n\nWhen installing `Colossal-AI` with `pip install .` from source or `pip install colossalai` from pypi, we always build CUDA extensions by default. However, this brings many problems for users who do not have a well-built environment with CUDA and torch. As a result, they failed at the first step, which is installation. I would suggest not to build extensions by default. The users can build extensions in two ways:\r\n1. export `CUDA_EXT=1` when running `pip install`\r\n2. build CUDA kernels only when the kernel is used\n", "before_files": [{"content": "import os\nimport re\n\nfrom setuptools import find_packages, setup\n\nfrom op_builder.utils import get_cuda_bare_metal_version\n\ntry:\n import torch\n from torch.utils.cpp_extension import CUDA_HOME, BuildExtension, CUDAExtension\n print(\"\\n\\ntorch.__version__ = {}\\n\\n\".format(torch.__version__))\n TORCH_MAJOR = int(torch.__version__.split('.')[0])\n TORCH_MINOR = int(torch.__version__.split('.')[1])\n\n if TORCH_MAJOR < 1 or (TORCH_MAJOR == 1 and TORCH_MINOR < 10):\n raise RuntimeError(\"Colossal-AI requires Pytorch 1.10 or newer.\\n\"\n \"The latest stable release can be obtained from https://pytorch.org/\")\n TORCH_AVAILABLE = True\nexcept ImportError:\n TORCH_AVAILABLE = False\n\n\n# ninja build does not work unless include_dirs are abs path\nthis_dir = os.path.dirname(os.path.abspath(__file__))\nbuild_cuda_ext = True\next_modules = []\n\nif int(os.environ.get('NO_CUDA_EXT', '0')) == 1 or not TORCH_AVAILABLE:\n build_cuda_ext = False\n\n\ndef check_cuda_torch_binary_vs_bare_metal(cuda_dir):\n raw_output, bare_metal_major, bare_metal_minor = get_cuda_bare_metal_version(cuda_dir)\n torch_binary_major = torch.version.cuda.split(\".\")[0]\n torch_binary_minor = torch.version.cuda.split(\".\")[1]\n\n print(\"\\nCompiling cuda extensions with\")\n print(raw_output + \"from \" + cuda_dir + \"/bin\\n\")\n\n if bare_metal_major != torch_binary_major:\n print(f'The detected CUDA version ({raw_output}) mismatches the version that was used to compile PyTorch '\n f'({torch.version.cuda}). CUDA extension will not be installed.')\n return False\n\n if bare_metal_minor != torch_binary_minor:\n print(\"\\nWarning: Cuda extensions are being compiled with a version of Cuda that does \"\n \"not match the version used to compile Pytorch binaries. \"\n f\"Pytorch binaries were compiled with Cuda {torch.version.cuda}.\\n\"\n \"In some cases, a minor-version mismatch will not cause later errors: \"\n \"https://github.com/NVIDIA/apex/pull/323#discussion_r287021798. \")\n return True\n\n\ndef check_cuda_availability(cuda_dir):\n if not torch.cuda.is_available():\n # https://github.com/NVIDIA/apex/issues/486\n # Extension builds after https://github.com/pytorch/pytorch/pull/23408 attempt to query\n # torch.cuda.get_device_capability(), which will fail if you are compiling in an environment\n # without visible GPUs (e.g. during an nvidia-docker build command).\n print(\n '\\nWarning: Torch did not find available GPUs on this system.\\n',\n 'If your intention is to cross-compile, this is not an error.\\n'\n 'By default, Colossal-AI will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2),\\n'\n 'Volta (compute capability 7.0), Turing (compute capability 7.5),\\n'\n 'and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0).\\n'\n 'If you wish to cross-compile for a single specific architecture,\\n'\n 'export TORCH_CUDA_ARCH_LIST=\"compute capability\" before running setup.py.\\n')\n if os.environ.get(\"TORCH_CUDA_ARCH_LIST\", None) is None:\n _, bare_metal_major, _ = get_cuda_bare_metal_version(cuda_dir)\n if int(bare_metal_major) == 11:\n os.environ[\"TORCH_CUDA_ARCH_LIST\"] = \"6.0;6.1;6.2;7.0;7.5;8.0\"\n else:\n os.environ[\"TORCH_CUDA_ARCH_LIST\"] = \"6.0;6.1;6.2;7.0;7.5\"\n return False\n\n if cuda_dir is None:\n print(\"nvcc was not found. CUDA extension will not be installed. If you're installing within a container from \"\n \"https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.\")\n return False\n return True\n\n\ndef append_nvcc_threads(nvcc_extra_args):\n _, bare_metal_major, bare_metal_minor = get_cuda_bare_metal_version(CUDA_HOME)\n if int(bare_metal_major) >= 11 and int(bare_metal_minor) >= 2:\n return nvcc_extra_args + [\"--threads\", \"4\"]\n return nvcc_extra_args\n\n\ndef fetch_requirements(path):\n with open(path, 'r') as fd:\n return [r.strip() for r in fd.readlines()]\n\n\ndef fetch_readme():\n with open('README.md', encoding='utf-8') as f:\n return f.read()\n\n\ndef get_version():\n setup_file_path = os.path.abspath(__file__)\n project_path = os.path.dirname(setup_file_path)\n version_txt_path = os.path.join(project_path, 'version.txt')\n version_py_path = os.path.join(project_path, 'colossalai/version.py')\n\n with open(version_txt_path) as f:\n version = f.read().strip()\n if build_cuda_ext:\n torch_version = '.'.join(torch.__version__.split('.')[:2])\n cuda_version = '.'.join(get_cuda_bare_metal_version(CUDA_HOME)[1:])\n version += f'+torch{torch_version}cu{cuda_version}'\n\n # write version into version.py\n with open(version_py_path, 'w') as f:\n f.write(f\"__version__ = '{version}'\\n\")\n\n return version\n\n\nif build_cuda_ext:\n build_cuda_ext = check_cuda_availability(CUDA_HOME) and check_cuda_torch_binary_vs_bare_metal(CUDA_HOME)\n\nif build_cuda_ext:\n # Set up macros for forward/backward compatibility hack around\n # https://github.com/pytorch/pytorch/commit/4404762d7dd955383acee92e6f06b48144a0742e\n # and\n # https://github.com/NVIDIA/apex/issues/456\n # https://github.com/pytorch/pytorch/commit/eb7b39e02f7d75c26d8a795ea8c7fd911334da7e#diff-4632522f237f1e4e728cb824300403ac\n version_dependent_macros = ['-DVERSION_GE_1_1', '-DVERSION_GE_1_3', '-DVERSION_GE_1_5']\n\n def cuda_ext_helper(name, sources, extra_cuda_flags, extra_cxx_flags=[]):\n return CUDAExtension(\n name=name,\n sources=[os.path.join('colossalai/kernel/cuda_native/csrc', path) for path in sources],\n include_dirs=[os.path.join(this_dir, 'colossalai/kernel/cuda_native/csrc/kernels/include')],\n extra_compile_args={\n 'cxx': ['-O3'] + version_dependent_macros + extra_cxx_flags,\n 'nvcc': append_nvcc_threads(['-O3', '--use_fast_math'] + version_dependent_macros + extra_cuda_flags)\n })\n\n #### fused optim kernels ###\n from op_builder import FusedOptimBuilder\n ext_modules.append(FusedOptimBuilder().builder('colossalai._C.fused_optim'))\n\n #### N-D parallel kernels ###\n cc_flag = []\n for arch in torch.cuda.get_arch_list():\n res = re.search(r'sm_(\\d+)', arch)\n if res:\n arch_cap = res[1]\n if int(arch_cap) >= 60:\n cc_flag.extend(['-gencode', f'arch=compute_{arch_cap},code={arch}'])\n\n extra_cuda_flags = [\n '-U__CUDA_NO_HALF_OPERATORS__', '-U__CUDA_NO_HALF_CONVERSIONS__', '--expt-relaxed-constexpr',\n '--expt-extended-lambda'\n ]\n\n from op_builder import ScaledSoftmaxBuilder\n ext_modules.append(ScaledSoftmaxBuilder().builder('colossalai._C.scaled_upper_triang_masked_softmax'))\n\n ext_modules.append(\n cuda_ext_helper('colossalai._C.scaled_masked_softmax',\n ['scaled_masked_softmax.cpp', 'scaled_masked_softmax_cuda.cu'], extra_cuda_flags + cc_flag))\n\n from op_builder import MOEBuilder\n ext_modules.append(MOEBuilder().builder('colossalai._C.moe'))\n\n extra_cuda_flags = ['-maxrregcount=50']\n\n ext_modules.append(\n cuda_ext_helper('colossalai._C.layer_norm', ['layer_norm_cuda.cpp', 'layer_norm_cuda_kernel.cu'],\n extra_cuda_flags + cc_flag))\n\n ### MultiHeadAttn Kernel ####\n from op_builder import MultiHeadAttnBuilder\n ext_modules.append(MultiHeadAttnBuilder().builder('colossalai._C.multihead_attention'))\n\n ### Gemini Adam kernel ####\n from op_builder import CPUAdamBuilder\n ext_modules.append(CPUAdamBuilder().builder('colossalai._C.cpu_optim'))\n\nsetup(name='colossalai',\n version=get_version(),\n packages=find_packages(exclude=(\n 'benchmark',\n 'docker',\n 'tests',\n 'docs',\n 'examples',\n 'tests',\n 'scripts',\n 'requirements',\n '*.egg-info',\n )),\n description='An integrated large-scale model training system with efficient parallelization techniques',\n long_description=fetch_readme(),\n long_description_content_type='text/markdown',\n license='Apache Software License 2.0',\n url='https://www.colossalai.org',\n project_urls={\n 'Forum': 'https://github.com/hpcaitech/ColossalAI/discussions',\n 'Bug Tracker': 'https://github.com/hpcaitech/ColossalAI/issues',\n 'Examples': 'https://github.com/hpcaitech/ColossalAI-Examples',\n 'Documentation': 'http://colossalai.readthedocs.io',\n 'Github': 'https://github.com/hpcaitech/ColossalAI',\n },\n ext_modules=ext_modules,\n cmdclass={'build_ext': BuildExtension} if ext_modules else {},\n install_requires=fetch_requirements('requirements/requirements.txt'),\n entry_points='''\n [console_scripts]\n colossalai=colossalai.cli:cli\n ''',\n python_requires='>=3.6',\n classifiers=[\n 'Programming Language :: Python :: 3',\n 'License :: OSI Approved :: Apache Software License',\n 'Environment :: GPU :: NVIDIA CUDA',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: System :: Distributed Computing',\n ],\n package_data={'colossalai': ['_C/*.pyi']})\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport re\n\nfrom setuptools import find_packages, setup\n\nfrom op_builder.utils import get_cuda_bare_metal_version\n\ntry:\n import torch\n from torch.utils.cpp_extension import CUDA_HOME, BuildExtension, CUDAExtension\n print(\"\\n\\ntorch.__version__ = {}\\n\\n\".format(torch.__version__))\n TORCH_MAJOR = int(torch.__version__.split('.')[0])\n TORCH_MINOR = int(torch.__version__.split('.')[1])\n\n if TORCH_MAJOR < 1 or (TORCH_MAJOR == 1 and TORCH_MINOR < 10):\n raise RuntimeError(\"Colossal-AI requires Pytorch 1.10 or newer.\\n\"\n \"The latest stable release can be obtained from https://pytorch.org/\")\n TORCH_AVAILABLE = True\nexcept ImportError:\n TORCH_AVAILABLE = False\n CUDA_HOME = None\n\n\n# ninja build does not work unless include_dirs are abs path\nthis_dir = os.path.dirname(os.path.abspath(__file__))\nbuild_cuda_ext = False\next_modules = []\n\nif int(os.environ.get('CUDA_EXT', '0')) == 1:\n if not TORCH_AVAILABLE:\n raise ModuleNotFoundError(\"PyTorch is not found while CUDA_EXT=1. You need to install PyTorch first in order to build CUDA extensions\")\n\n if not CUDA_HOME:\n raise RuntimeError(\"CUDA_HOME is not found while CUDA_EXT=1. You need to export CUDA_HOME environment vairable or install CUDA Toolkit first in order to build CUDA extensions\")\n\n build_cuda_ext = True\n\n\ndef check_cuda_torch_binary_vs_bare_metal(cuda_dir):\n raw_output, bare_metal_major, bare_metal_minor = get_cuda_bare_metal_version(cuda_dir)\n torch_binary_major = torch.version.cuda.split(\".\")[0]\n torch_binary_minor = torch.version.cuda.split(\".\")[1]\n\n print(\"\\nCompiling cuda extensions with\")\n print(raw_output + \"from \" + cuda_dir + \"/bin\\n\")\n\n if bare_metal_major != torch_binary_major:\n print(f'The detected CUDA version ({raw_output}) mismatches the version that was used to compile PyTorch '\n f'({torch.version.cuda}). CUDA extension will not be installed.')\n return False\n\n if bare_metal_minor != torch_binary_minor:\n print(\"\\nWarning: Cuda extensions are being compiled with a version of Cuda that does \"\n \"not match the version used to compile Pytorch binaries. \"\n f\"Pytorch binaries were compiled with Cuda {torch.version.cuda}.\\n\"\n \"In some cases, a minor-version mismatch will not cause later errors: \"\n \"https://github.com/NVIDIA/apex/pull/323#discussion_r287021798. \")\n return True\n\n\ndef check_cuda_availability(cuda_dir):\n if not torch.cuda.is_available():\n # https://github.com/NVIDIA/apex/issues/486\n # Extension builds after https://github.com/pytorch/pytorch/pull/23408 attempt to query\n # torch.cuda.get_device_capability(), which will fail if you are compiling in an environment\n # without visible GPUs (e.g. during an nvidia-docker build command).\n print(\n '\\nWarning: Torch did not find available GPUs on this system.\\n',\n 'If your intention is to cross-compile, this is not an error.\\n'\n 'By default, Colossal-AI will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2),\\n'\n 'Volta (compute capability 7.0), Turing (compute capability 7.5),\\n'\n 'and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0).\\n'\n 'If you wish to cross-compile for a single specific architecture,\\n'\n 'export TORCH_CUDA_ARCH_LIST=\"compute capability\" before running setup.py.\\n')\n if os.environ.get(\"TORCH_CUDA_ARCH_LIST\", None) is None:\n _, bare_metal_major, _ = get_cuda_bare_metal_version(cuda_dir)\n if int(bare_metal_major) == 11:\n os.environ[\"TORCH_CUDA_ARCH_LIST\"] = \"6.0;6.1;6.2;7.0;7.5;8.0\"\n else:\n os.environ[\"TORCH_CUDA_ARCH_LIST\"] = \"6.0;6.1;6.2;7.0;7.5\"\n return False\n\n if cuda_dir is None:\n print(\"nvcc was not found. CUDA extension will not be installed. If you're installing within a container from \"\n \"https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.\")\n return False\n return True\n\n\ndef append_nvcc_threads(nvcc_extra_args):\n _, bare_metal_major, bare_metal_minor = get_cuda_bare_metal_version(CUDA_HOME)\n if int(bare_metal_major) >= 11 and int(bare_metal_minor) >= 2:\n return nvcc_extra_args + [\"--threads\", \"4\"]\n return nvcc_extra_args\n\n\ndef fetch_requirements(path):\n with open(path, 'r') as fd:\n return [r.strip() for r in fd.readlines()]\n\n\ndef fetch_readme():\n with open('README.md', encoding='utf-8') as f:\n return f.read()\n\n\ndef get_version():\n setup_file_path = os.path.abspath(__file__)\n project_path = os.path.dirname(setup_file_path)\n version_txt_path = os.path.join(project_path, 'version.txt')\n version_py_path = os.path.join(project_path, 'colossalai/version.py')\n\n with open(version_txt_path) as f:\n version = f.read().strip()\n if build_cuda_ext:\n torch_version = '.'.join(torch.__version__.split('.')[:2])\n cuda_version = '.'.join(get_cuda_bare_metal_version(CUDA_HOME)[1:])\n version += f'+torch{torch_version}cu{cuda_version}'\n\n # write version into version.py\n with open(version_py_path, 'w') as f:\n f.write(f\"__version__ = '{version}'\\n\")\n\n return version\n\n\nif build_cuda_ext:\n build_cuda_ext = check_cuda_availability(CUDA_HOME) and check_cuda_torch_binary_vs_bare_metal(CUDA_HOME)\n\nif build_cuda_ext:\n # Set up macros for forward/backward compatibility hack around\n # https://github.com/pytorch/pytorch/commit/4404762d7dd955383acee92e6f06b48144a0742e\n # and\n # https://github.com/NVIDIA/apex/issues/456\n # https://github.com/pytorch/pytorch/commit/eb7b39e02f7d75c26d8a795ea8c7fd911334da7e#diff-4632522f237f1e4e728cb824300403ac\n version_dependent_macros = ['-DVERSION_GE_1_1', '-DVERSION_GE_1_3', '-DVERSION_GE_1_5']\n\n def cuda_ext_helper(name, sources, extra_cuda_flags, extra_cxx_flags=[]):\n return CUDAExtension(\n name=name,\n sources=[os.path.join('colossalai/kernel/cuda_native/csrc', path) for path in sources],\n include_dirs=[os.path.join(this_dir, 'colossalai/kernel/cuda_native/csrc/kernels/include')],\n extra_compile_args={\n 'cxx': ['-O3'] + version_dependent_macros + extra_cxx_flags,\n 'nvcc': append_nvcc_threads(['-O3', '--use_fast_math'] + version_dependent_macros + extra_cuda_flags)\n })\n\n #### fused optim kernels ###\n from op_builder import FusedOptimBuilder\n ext_modules.append(FusedOptimBuilder().builder('colossalai._C.fused_optim'))\n\n #### N-D parallel kernels ###\n cc_flag = []\n for arch in torch.cuda.get_arch_list():\n res = re.search(r'sm_(\\d+)', arch)\n if res:\n arch_cap = res[1]\n if int(arch_cap) >= 60:\n cc_flag.extend(['-gencode', f'arch=compute_{arch_cap},code={arch}'])\n\n extra_cuda_flags = [\n '-U__CUDA_NO_HALF_OPERATORS__', '-U__CUDA_NO_HALF_CONVERSIONS__', '--expt-relaxed-constexpr',\n '--expt-extended-lambda'\n ]\n\n from op_builder import ScaledSoftmaxBuilder\n ext_modules.append(ScaledSoftmaxBuilder().builder('colossalai._C.scaled_upper_triang_masked_softmax'))\n\n ext_modules.append(\n cuda_ext_helper('colossalai._C.scaled_masked_softmax',\n ['scaled_masked_softmax.cpp', 'scaled_masked_softmax_cuda.cu'], extra_cuda_flags + cc_flag))\n\n from op_builder import MOEBuilder\n ext_modules.append(MOEBuilder().builder('colossalai._C.moe'))\n\n extra_cuda_flags = ['-maxrregcount=50']\n\n ext_modules.append(\n cuda_ext_helper('colossalai._C.layer_norm', ['layer_norm_cuda.cpp', 'layer_norm_cuda_kernel.cu'],\n extra_cuda_flags + cc_flag))\n\n ### MultiHeadAttn Kernel ####\n from op_builder import MultiHeadAttnBuilder\n ext_modules.append(MultiHeadAttnBuilder().builder('colossalai._C.multihead_attention'))\n\n ### Gemini Adam kernel ####\n from op_builder import CPUAdamBuilder\n ext_modules.append(CPUAdamBuilder().builder('colossalai._C.cpu_optim'))\n\nsetup(name='colossalai',\n version=get_version(),\n packages=find_packages(exclude=(\n 'benchmark',\n 'docker',\n 'tests',\n 'docs',\n 'examples',\n 'tests',\n 'scripts',\n 'requirements',\n '*.egg-info',\n )),\n description='An integrated large-scale model training system with efficient parallelization techniques',\n long_description=fetch_readme(),\n long_description_content_type='text/markdown',\n license='Apache Software License 2.0',\n url='https://www.colossalai.org',\n project_urls={\n 'Forum': 'https://github.com/hpcaitech/ColossalAI/discussions',\n 'Bug Tracker': 'https://github.com/hpcaitech/ColossalAI/issues',\n 'Examples': 'https://github.com/hpcaitech/ColossalAI-Examples',\n 'Documentation': 'http://colossalai.readthedocs.io',\n 'Github': 'https://github.com/hpcaitech/ColossalAI',\n },\n ext_modules=ext_modules,\n cmdclass={'build_ext': BuildExtension} if ext_modules else {},\n install_requires=fetch_requirements('requirements/requirements.txt'),\n entry_points='''\n [console_scripts]\n colossalai=colossalai.cli:cli\n ''',\n python_requires='>=3.6',\n classifiers=[\n 'Programming Language :: Python :: 3',\n 'License :: OSI Approved :: Apache Software License',\n 'Environment :: GPU :: NVIDIA CUDA',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: System :: Distributed Computing',\n ],\n package_data={'colossalai': ['_C/*.pyi']})\n", "path": "setup.py"}]}
| 3,384 | 260 |
gh_patches_debug_192
|
rasdani/github-patches
|
git_diff
|
liberapay__liberapay.com-726
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The list of top individuals is incomplete
While looking at https://liberapay.com/explore/individuals I realized that ploum isn't listed. It's because he doesn't have a profile statement. The thinking was that without a statement there isn't much to see on a profile page, so there's little point in linking to it. However it also makes the list incomplete.
The list of top individuals is incomplete
While looking at https://liberapay.com/explore/individuals I realized that ploum isn't listed. It's because he doesn't have a profile statement. The thinking was that without a statement there isn't much to see on a profile page, so there's little point in linking to it. However it also makes the list incomplete.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `liberapay/constants.py`
Content:
```
1 # coding: utf8
2 from __future__ import print_function, unicode_literals
3
4 from collections import namedtuple, OrderedDict
5 from datetime import date, datetime, timedelta
6 from decimal import Decimal, ROUND_UP
7 import re
8
9 from jinja2 import StrictUndefined
10 from pando.utils import utc
11
12
13 class CustomUndefined(StrictUndefined):
14 __bool__ = __nonzero__ = lambda self: False
15
16 def __str__(self):
17 try:
18 self._fail_with_undefined_error()
19 except Exception as e:
20 self._tell_sentry(e, {})
21 return ''
22
23 __unicode__ = __str__
24
25
26 def check_bits(bits):
27 assert len(set(bits)) == len(bits) # no duplicates
28 assert not [b for b in bits if '{0:b}'.format(b).count('1') != 1] # single bit
29
30
31 Event = namedtuple('Event', 'name bit title')
32
33 Fees = namedtuple('Fees', ('var', 'fix'))
34
35 StandardTip = namedtuple('StandardTip', 'label weekly monthly yearly')
36
37
38 _ = lambda a: a
39
40 ASCII_ALLOWED_IN_USERNAME = set("0123456789"
41 "abcdefghijklmnopqrstuvwxyz"
42 "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
43 "-_.")
44
45 AVATAR_QUERY = '?s=160&default=retro'
46 AVATAR_SOURCES = 'libravatar bitbucket facebook github google mastodon twitter'.split()
47
48 BIRTHDAY = date(2015, 5, 22)
49
50 D_CENT = Decimal('0.01')
51 D_INF = Decimal('inf')
52 D_UNIT = Decimal('1.00')
53 D_ZERO = Decimal('0.00')
54
55 DONATION_LIMITS_WEEKLY = (Decimal('0.01'), Decimal('100.00'))
56 DONATION_LIMITS = {
57 'weekly': DONATION_LIMITS_WEEKLY,
58 'monthly': tuple((x * Decimal(52) / Decimal(12)).quantize(D_CENT, rounding=ROUND_UP)
59 for x in DONATION_LIMITS_WEEKLY),
60 'yearly': tuple((x * Decimal(52)).quantize(D_CENT)
61 for x in DONATION_LIMITS_WEEKLY),
62 }
63 DONATION_WEEKLY_MIN, DONATION_WEEKLY_MAX = DONATION_LIMITS_WEEKLY
64
65 DOMAIN_RE = re.compile(r'''
66 ^
67 ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+
68 [a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?
69 $
70 ''', re.VERBOSE)
71
72 ELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}
73
74 EMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)
75 EMAIL_RE = re.compile(r'''
76 # This is the regexp used by MangoPay (as of February 2017).
77 # It rejects some valid but exotic addresses.
78 # https://en.wikipedia.org/wiki/Email_address
79 ^
80 [a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*
81 @
82 ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?
83 $
84 ''', re.VERBOSE)
85
86 EPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)
87
88 EVENTS = [
89 Event('income', 1, _("When I receive money")),
90 Event('low_balance', 2, _("When there isn't enough money in my wallet to cover my donations")),
91 Event('withdrawal_created', 4, _("When a transfer to my bank account is initiated")),
92 Event('withdrawal_failed', 8, _("When a transfer to my bank account fails")),
93 Event('pledgee_joined', 16, _("When someone I pledge to joins Liberapay")),
94 Event('team_invite', 32, _("When someone invites me to join a team")),
95 Event('payin_bankwire_failed', 64, _("When a bank wire transfer to my Liberapay wallet fails")),
96 Event('payin_bankwire_succeeded', 128, _("When a bank wire transfer to my Liberapay wallet succeeds")),
97 Event('payin_bankwire_expired', 256, _("When a bank wire transfer to my Liberapay wallet expires")),
98 Event('payin_directdebit_failed', 512, _("When a direct debit from my bank account fails")),
99 Event('payin_directdebit_succeeded', 1024, _("When a direct debit from my bank account succeeds")),
100 ]
101 check_bits([e.bit for e in EVENTS])
102 EVENTS = OrderedDict((e.name, e) for e in EVENTS)
103 EVENTS_S = ' '.join(EVENTS.keys())
104
105 # https://www.mangopay.com/pricing/
106 FEE_PAYIN_BANK_WIRE = Fees(Decimal('0.005'), Decimal(0)) # 0.5%
107 FEE_PAYIN_CARD = Fees(Decimal('0.018'), Decimal('0.18')) # 1.8% + €0.18
108 FEE_PAYIN_DIRECT_DEBIT = Fees(Decimal(0), Decimal('0.80')) # €0.80
109 FEE_PAYOUT = Fees(Decimal(0), Decimal(0))
110 FEE_PAYOUT_OUTSIDE_SEPA = Fees(Decimal(0), Decimal('2.5'))
111 FEE_PAYOUT_WARN = Decimal('0.03') # warn user when fee exceeds 3%
112 FEE_VAT = Decimal('0.17') # 17% (Luxembourg rate)
113
114 INVOICE_DOC_MAX_SIZE = 5000000
115 INVOICE_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'png']
116 INVOICE_DOCS_LIMIT = 10
117
118 INVOICE_NATURES = {
119 'expense': _("Expense Report"),
120 }
121
122 INVOICE_STATUSES = {
123 'pre': _("Draft"),
124 'new': _("Sent (awaiting approval)"),
125 'retracted': _("Retracted"),
126 'accepted': _("Accepted (awaiting payment)"),
127 'paid': _("Paid"),
128 'rejected': _("Rejected"),
129 }
130
131 JINJA_ENV_COMMON = dict(
132 trim_blocks=True, lstrip_blocks=True,
133 line_statement_prefix='%',
134 # undefined=CustomUndefined,
135 )
136
137 # https://docs.mangopay.com/api-references/kyc-rules/
138 KYC_DOC_MAX_SIZE = 7000000
139 KYC_DOC_MAX_SIZE_MB = int(KYC_DOC_MAX_SIZE / 1000000)
140 KYC_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'gif', 'png']
141 KYC_DOCS_EXTS_STR = ', '.join(KYC_DOCS_EXTS)
142 KYC_INCOME_THRESHOLDS = (
143 (1, 18000),
144 (2, 30000),
145 (3, 50000),
146 (4, 80000),
147 (5, 120000),
148 (6, 120000),
149 )
150 KYC_PAYIN_YEARLY_THRESHOLD = Decimal('2500')
151 KYC_PAYOUT_YEARLY_THRESHOLD = Decimal('1000')
152
153 LAUNCH_TIME = datetime(2016, 2, 3, 12, 50, 0, 0, utc)
154
155 PARTICIPANT_KINDS = {
156 'individual': _("Individual"),
157 'organization': _("Organization"),
158 'group': _("Team"),
159 }
160
161 PASSWORD_MIN_SIZE = 8
162 PASSWORD_MAX_SIZE = 150
163
164 PAYIN_BANK_WIRE_MIN = Decimal('2.00') # fee ≈ 0.99%
165 PAYIN_BANK_WIRE_TARGET = Decimal('5.00') # fee ≈ 0.6%
166 PAYIN_CARD_MIN = Decimal("15.00") # fee ≈ 3.5%
167 PAYIN_CARD_TARGET = Decimal("92.00") # fee ≈ 2.33%
168 PAYIN_DIRECT_DEBIT_MIN = Decimal('25.00') # fee ≈ 3.6%
169 PAYIN_DIRECT_DEBIT_TARGET = Decimal('99.00') # fee ≈ 0.94%
170
171 PERIOD_CONVERSION_RATES = {
172 'weekly': Decimal(1),
173 'monthly': Decimal(12) / Decimal(52),
174 'yearly': Decimal(1) / Decimal(52),
175 }
176
177 POSTAL_ADDRESS_KEYS = (
178 'AddressLine1', 'AddressLine2', 'City', 'Region', 'PostalCode', 'Country'
179 )
180
181 PRIVACY_FIELDS = OrderedDict([
182 ('hide_giving', _("Hide total giving from others.")),
183 ('hide_receiving', _("Hide total receiving from others.")),
184 ('hide_from_search', _("Hide myself from search results on Liberapay.")),
185 ('profile_noindex', _("Tell web search engines not to index my profile.")),
186 ('hide_from_lists', _("Prevent my profile from being listed on Liberapay.")),
187 ])
188 PRIVACY_FIELDS_S = ' '.join(PRIVACY_FIELDS.keys())
189
190 PRIVILEGES = dict(admin=1, run_payday=2)
191 check_bits(list(PRIVILEGES.values()))
192
193 QUARANTINE = timedelta(weeks=4)
194
195 RATE_LIMITS = {
196 'add_email.source': (5, 60*60*24), # 5 per day
197 'add_email.target': (2, 60*60*24), # 2 per day
198 'change_username': (7, 60*60*24*7), # 7 per week
199 'log-in.email': (10, 60*60*24), # 10 per day
200 'log-in.email.not-verified': (2, 60*60*24), # 2 per day
201 'log-in.email.verified': (10, 60*60*24), # 10 per day
202 'log-in.password': (3, 60*60), # 3 per hour
203 'sign-up.ip-addr': (5, 60*60), # 5 per hour per IP address
204 'sign-up.ip-net': (15, 15*60), # 15 per 15 minutes per IP network
205 'sign-up.ip-version': (15, 15*60), # 15 per 15 minutes per IP version
206 }
207
208 SEPA = set("""
209 AT BE BG CH CY CZ DE DK EE ES ES FI FR GB GI GR HR HU IE IS IT LI LT LU LV
210 MC MT NL NO PL PT RO SE SI SK
211 """.split())
212
213 SESSION = str('session') # bytes in python2, unicode in python3
214 SESSION_REFRESH = timedelta(hours=1)
215 SESSION_TIMEOUT = timedelta(hours=6)
216
217
218 def make_standard_tip(label, weekly):
219 return StandardTip(
220 label,
221 weekly,
222 weekly / PERIOD_CONVERSION_RATES['monthly'],
223 weekly / PERIOD_CONVERSION_RATES['yearly'],
224 )
225
226
227 STANDARD_TIPS = (
228 make_standard_tip(_("Symbolic"), Decimal('0.01')),
229 make_standard_tip(_("Small"), Decimal('0.25')),
230 make_standard_tip(_("Medium"), Decimal('1.00')),
231 make_standard_tip(_("Large"), Decimal('5.00')),
232 make_standard_tip(_("Maximum"), DONATION_WEEKLY_MAX),
233 )
234
235 USERNAME_MAX_SIZE = 32
236
237 del _
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/liberapay/constants.py b/liberapay/constants.py
--- a/liberapay/constants.py
+++ b/liberapay/constants.py
@@ -232,6 +232,8 @@
make_standard_tip(_("Maximum"), DONATION_WEEKLY_MAX),
)
+SUMMARY_MAX_SIZE = 100
+
USERNAME_MAX_SIZE = 32
del _
|
{"golden_diff": "diff --git a/liberapay/constants.py b/liberapay/constants.py\n--- a/liberapay/constants.py\n+++ b/liberapay/constants.py\n@@ -232,6 +232,8 @@\n make_standard_tip(_(\"Maximum\"), DONATION_WEEKLY_MAX),\n )\n \n+SUMMARY_MAX_SIZE = 100\n+\n USERNAME_MAX_SIZE = 32\n \n del _\n", "issue": "The list of top individuals is incomplete\nWhile looking at https://liberapay.com/explore/individuals I realized that ploum isn't listed. It's because he doesn't have a profile statement. The thinking was that without a statement there isn't much to see on a profile page, so there's little point in linking to it. However it also makes the list incomplete.\nThe list of top individuals is incomplete\nWhile looking at https://liberapay.com/explore/individuals I realized that ploum isn't listed. It's because he doesn't have a profile statement. The thinking was that without a statement there isn't much to see on a profile page, so there's little point in linking to it. However it also makes the list incomplete.\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import print_function, unicode_literals\n\nfrom collections import namedtuple, OrderedDict\nfrom datetime import date, datetime, timedelta\nfrom decimal import Decimal, ROUND_UP\nimport re\n\nfrom jinja2 import StrictUndefined\nfrom pando.utils import utc\n\n\nclass CustomUndefined(StrictUndefined):\n __bool__ = __nonzero__ = lambda self: False\n\n def __str__(self):\n try:\n self._fail_with_undefined_error()\n except Exception as e:\n self._tell_sentry(e, {})\n return ''\n\n __unicode__ = __str__\n\n\ndef check_bits(bits):\n assert len(set(bits)) == len(bits) # no duplicates\n assert not [b for b in bits if '{0:b}'.format(b).count('1') != 1] # single bit\n\n\nEvent = namedtuple('Event', 'name bit title')\n\nFees = namedtuple('Fees', ('var', 'fix'))\n\nStandardTip = namedtuple('StandardTip', 'label weekly monthly yearly')\n\n\n_ = lambda a: a\n\nASCII_ALLOWED_IN_USERNAME = set(\"0123456789\"\n \"abcdefghijklmnopqrstuvwxyz\"\n \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n \"-_.\")\n\nAVATAR_QUERY = '?s=160&default=retro'\nAVATAR_SOURCES = 'libravatar bitbucket facebook github google mastodon twitter'.split()\n\nBIRTHDAY = date(2015, 5, 22)\n\nD_CENT = Decimal('0.01')\nD_INF = Decimal('inf')\nD_UNIT = Decimal('1.00')\nD_ZERO = Decimal('0.00')\n\nDONATION_LIMITS_WEEKLY = (Decimal('0.01'), Decimal('100.00'))\nDONATION_LIMITS = {\n 'weekly': DONATION_LIMITS_WEEKLY,\n 'monthly': tuple((x * Decimal(52) / Decimal(12)).quantize(D_CENT, rounding=ROUND_UP)\n for x in DONATION_LIMITS_WEEKLY),\n 'yearly': tuple((x * Decimal(52)).quantize(D_CENT)\n for x in DONATION_LIMITS_WEEKLY),\n}\nDONATION_WEEKLY_MIN, DONATION_WEEKLY_MAX = DONATION_LIMITS_WEEKLY\n\nDOMAIN_RE = re.compile(r'''\n ^\n ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+\n [a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n $\n''', re.VERBOSE)\n\nELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}\n\nEMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)\nEMAIL_RE = re.compile(r'''\n # This is the regexp used by MangoPay (as of February 2017).\n # It rejects some valid but exotic addresses.\n # https://en.wikipedia.org/wiki/Email_address\n ^\n [a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*\n @\n ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n $\n''', re.VERBOSE)\n\nEPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)\n\nEVENTS = [\n Event('income', 1, _(\"When I receive money\")),\n Event('low_balance', 2, _(\"When there isn't enough money in my wallet to cover my donations\")),\n Event('withdrawal_created', 4, _(\"When a transfer to my bank account is initiated\")),\n Event('withdrawal_failed', 8, _(\"When a transfer to my bank account fails\")),\n Event('pledgee_joined', 16, _(\"When someone I pledge to joins Liberapay\")),\n Event('team_invite', 32, _(\"When someone invites me to join a team\")),\n Event('payin_bankwire_failed', 64, _(\"When a bank wire transfer to my Liberapay wallet fails\")),\n Event('payin_bankwire_succeeded', 128, _(\"When a bank wire transfer to my Liberapay wallet succeeds\")),\n Event('payin_bankwire_expired', 256, _(\"When a bank wire transfer to my Liberapay wallet expires\")),\n Event('payin_directdebit_failed', 512, _(\"When a direct debit from my bank account fails\")),\n Event('payin_directdebit_succeeded', 1024, _(\"When a direct debit from my bank account succeeds\")),\n]\ncheck_bits([e.bit for e in EVENTS])\nEVENTS = OrderedDict((e.name, e) for e in EVENTS)\nEVENTS_S = ' '.join(EVENTS.keys())\n\n# https://www.mangopay.com/pricing/\nFEE_PAYIN_BANK_WIRE = Fees(Decimal('0.005'), Decimal(0)) # 0.5%\nFEE_PAYIN_CARD = Fees(Decimal('0.018'), Decimal('0.18')) # 1.8% + \u20ac0.18\nFEE_PAYIN_DIRECT_DEBIT = Fees(Decimal(0), Decimal('0.80')) # \u20ac0.80\nFEE_PAYOUT = Fees(Decimal(0), Decimal(0))\nFEE_PAYOUT_OUTSIDE_SEPA = Fees(Decimal(0), Decimal('2.5'))\nFEE_PAYOUT_WARN = Decimal('0.03') # warn user when fee exceeds 3%\nFEE_VAT = Decimal('0.17') # 17% (Luxembourg rate)\n\nINVOICE_DOC_MAX_SIZE = 5000000\nINVOICE_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'png']\nINVOICE_DOCS_LIMIT = 10\n\nINVOICE_NATURES = {\n 'expense': _(\"Expense Report\"),\n}\n\nINVOICE_STATUSES = {\n 'pre': _(\"Draft\"),\n 'new': _(\"Sent (awaiting approval)\"),\n 'retracted': _(\"Retracted\"),\n 'accepted': _(\"Accepted (awaiting payment)\"),\n 'paid': _(\"Paid\"),\n 'rejected': _(\"Rejected\"),\n}\n\nJINJA_ENV_COMMON = dict(\n trim_blocks=True, lstrip_blocks=True,\n line_statement_prefix='%',\n # undefined=CustomUndefined,\n)\n\n# https://docs.mangopay.com/api-references/kyc-rules/\nKYC_DOC_MAX_SIZE = 7000000\nKYC_DOC_MAX_SIZE_MB = int(KYC_DOC_MAX_SIZE / 1000000)\nKYC_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'gif', 'png']\nKYC_DOCS_EXTS_STR = ', '.join(KYC_DOCS_EXTS)\nKYC_INCOME_THRESHOLDS = (\n (1, 18000),\n (2, 30000),\n (3, 50000),\n (4, 80000),\n (5, 120000),\n (6, 120000),\n)\nKYC_PAYIN_YEARLY_THRESHOLD = Decimal('2500')\nKYC_PAYOUT_YEARLY_THRESHOLD = Decimal('1000')\n\nLAUNCH_TIME = datetime(2016, 2, 3, 12, 50, 0, 0, utc)\n\nPARTICIPANT_KINDS = {\n 'individual': _(\"Individual\"),\n 'organization': _(\"Organization\"),\n 'group': _(\"Team\"),\n}\n\nPASSWORD_MIN_SIZE = 8\nPASSWORD_MAX_SIZE = 150\n\nPAYIN_BANK_WIRE_MIN = Decimal('2.00') # fee \u2248 0.99%\nPAYIN_BANK_WIRE_TARGET = Decimal('5.00') # fee \u2248 0.6%\nPAYIN_CARD_MIN = Decimal(\"15.00\") # fee \u2248 3.5%\nPAYIN_CARD_TARGET = Decimal(\"92.00\") # fee \u2248 2.33%\nPAYIN_DIRECT_DEBIT_MIN = Decimal('25.00') # fee \u2248 3.6%\nPAYIN_DIRECT_DEBIT_TARGET = Decimal('99.00') # fee \u2248 0.94%\n\nPERIOD_CONVERSION_RATES = {\n 'weekly': Decimal(1),\n 'monthly': Decimal(12) / Decimal(52),\n 'yearly': Decimal(1) / Decimal(52),\n}\n\nPOSTAL_ADDRESS_KEYS = (\n 'AddressLine1', 'AddressLine2', 'City', 'Region', 'PostalCode', 'Country'\n)\n\nPRIVACY_FIELDS = OrderedDict([\n ('hide_giving', _(\"Hide total giving from others.\")),\n ('hide_receiving', _(\"Hide total receiving from others.\")),\n ('hide_from_search', _(\"Hide myself from search results on Liberapay.\")),\n ('profile_noindex', _(\"Tell web search engines not to index my profile.\")),\n ('hide_from_lists', _(\"Prevent my profile from being listed on Liberapay.\")),\n])\nPRIVACY_FIELDS_S = ' '.join(PRIVACY_FIELDS.keys())\n\nPRIVILEGES = dict(admin=1, run_payday=2)\ncheck_bits(list(PRIVILEGES.values()))\n\nQUARANTINE = timedelta(weeks=4)\n\nRATE_LIMITS = {\n 'add_email.source': (5, 60*60*24), # 5 per day\n 'add_email.target': (2, 60*60*24), # 2 per day\n 'change_username': (7, 60*60*24*7), # 7 per week\n 'log-in.email': (10, 60*60*24), # 10 per day\n 'log-in.email.not-verified': (2, 60*60*24), # 2 per day\n 'log-in.email.verified': (10, 60*60*24), # 10 per day\n 'log-in.password': (3, 60*60), # 3 per hour\n 'sign-up.ip-addr': (5, 60*60), # 5 per hour per IP address\n 'sign-up.ip-net': (15, 15*60), # 15 per 15 minutes per IP network\n 'sign-up.ip-version': (15, 15*60), # 15 per 15 minutes per IP version\n}\n\nSEPA = set(\"\"\"\n AT BE BG CH CY CZ DE DK EE ES ES FI FR GB GI GR HR HU IE IS IT LI LT LU LV\n MC MT NL NO PL PT RO SE SI SK\n\"\"\".split())\n\nSESSION = str('session') # bytes in python2, unicode in python3\nSESSION_REFRESH = timedelta(hours=1)\nSESSION_TIMEOUT = timedelta(hours=6)\n\n\ndef make_standard_tip(label, weekly):\n return StandardTip(\n label,\n weekly,\n weekly / PERIOD_CONVERSION_RATES['monthly'],\n weekly / PERIOD_CONVERSION_RATES['yearly'],\n )\n\n\nSTANDARD_TIPS = (\n make_standard_tip(_(\"Symbolic\"), Decimal('0.01')),\n make_standard_tip(_(\"Small\"), Decimal('0.25')),\n make_standard_tip(_(\"Medium\"), Decimal('1.00')),\n make_standard_tip(_(\"Large\"), Decimal('5.00')),\n make_standard_tip(_(\"Maximum\"), DONATION_WEEKLY_MAX),\n)\n\nUSERNAME_MAX_SIZE = 32\n\ndel _\n", "path": "liberapay/constants.py"}], "after_files": [{"content": "# coding: utf8\nfrom __future__ import print_function, unicode_literals\n\nfrom collections import namedtuple, OrderedDict\nfrom datetime import date, datetime, timedelta\nfrom decimal import Decimal, ROUND_UP\nimport re\n\nfrom jinja2 import StrictUndefined\nfrom pando.utils import utc\n\n\nclass CustomUndefined(StrictUndefined):\n __bool__ = __nonzero__ = lambda self: False\n\n def __str__(self):\n try:\n self._fail_with_undefined_error()\n except Exception as e:\n self._tell_sentry(e, {})\n return ''\n\n __unicode__ = __str__\n\n\ndef check_bits(bits):\n assert len(set(bits)) == len(bits) # no duplicates\n assert not [b for b in bits if '{0:b}'.format(b).count('1') != 1] # single bit\n\n\nEvent = namedtuple('Event', 'name bit title')\n\nFees = namedtuple('Fees', ('var', 'fix'))\n\nStandardTip = namedtuple('StandardTip', 'label weekly monthly yearly')\n\n\n_ = lambda a: a\n\nASCII_ALLOWED_IN_USERNAME = set(\"0123456789\"\n \"abcdefghijklmnopqrstuvwxyz\"\n \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n \"-_.\")\n\nAVATAR_QUERY = '?s=160&default=retro'\nAVATAR_SOURCES = 'libravatar bitbucket facebook github google mastodon twitter'.split()\n\nBIRTHDAY = date(2015, 5, 22)\n\nD_CENT = Decimal('0.01')\nD_INF = Decimal('inf')\nD_UNIT = Decimal('1.00')\nD_ZERO = Decimal('0.00')\n\nDONATION_LIMITS_WEEKLY = (Decimal('0.01'), Decimal('100.00'))\nDONATION_LIMITS = {\n 'weekly': DONATION_LIMITS_WEEKLY,\n 'monthly': tuple((x * Decimal(52) / Decimal(12)).quantize(D_CENT, rounding=ROUND_UP)\n for x in DONATION_LIMITS_WEEKLY),\n 'yearly': tuple((x * Decimal(52)).quantize(D_CENT)\n for x in DONATION_LIMITS_WEEKLY),\n}\nDONATION_WEEKLY_MIN, DONATION_WEEKLY_MAX = DONATION_LIMITS_WEEKLY\n\nDOMAIN_RE = re.compile(r'''\n ^\n ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+\n [a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n $\n''', re.VERBOSE)\n\nELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}\n\nEMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)\nEMAIL_RE = re.compile(r'''\n # This is the regexp used by MangoPay (as of February 2017).\n # It rejects some valid but exotic addresses.\n # https://en.wikipedia.org/wiki/Email_address\n ^\n [a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*\n @\n ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n $\n''', re.VERBOSE)\n\nEPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)\n\nEVENTS = [\n Event('income', 1, _(\"When I receive money\")),\n Event('low_balance', 2, _(\"When there isn't enough money in my wallet to cover my donations\")),\n Event('withdrawal_created', 4, _(\"When a transfer to my bank account is initiated\")),\n Event('withdrawal_failed', 8, _(\"When a transfer to my bank account fails\")),\n Event('pledgee_joined', 16, _(\"When someone I pledge to joins Liberapay\")),\n Event('team_invite', 32, _(\"When someone invites me to join a team\")),\n Event('payin_bankwire_failed', 64, _(\"When a bank wire transfer to my Liberapay wallet fails\")),\n Event('payin_bankwire_succeeded', 128, _(\"When a bank wire transfer to my Liberapay wallet succeeds\")),\n Event('payin_bankwire_expired', 256, _(\"When a bank wire transfer to my Liberapay wallet expires\")),\n Event('payin_directdebit_failed', 512, _(\"When a direct debit from my bank account fails\")),\n Event('payin_directdebit_succeeded', 1024, _(\"When a direct debit from my bank account succeeds\")),\n]\ncheck_bits([e.bit for e in EVENTS])\nEVENTS = OrderedDict((e.name, e) for e in EVENTS)\nEVENTS_S = ' '.join(EVENTS.keys())\n\n# https://www.mangopay.com/pricing/\nFEE_PAYIN_BANK_WIRE = Fees(Decimal('0.005'), Decimal(0)) # 0.5%\nFEE_PAYIN_CARD = Fees(Decimal('0.018'), Decimal('0.18')) # 1.8% + \u20ac0.18\nFEE_PAYIN_DIRECT_DEBIT = Fees(Decimal(0), Decimal('0.80')) # \u20ac0.80\nFEE_PAYOUT = Fees(Decimal(0), Decimal(0))\nFEE_PAYOUT_OUTSIDE_SEPA = Fees(Decimal(0), Decimal('2.5'))\nFEE_PAYOUT_WARN = Decimal('0.03') # warn user when fee exceeds 3%\nFEE_VAT = Decimal('0.17') # 17% (Luxembourg rate)\n\nINVOICE_DOC_MAX_SIZE = 5000000\nINVOICE_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'png']\nINVOICE_DOCS_LIMIT = 10\n\nINVOICE_NATURES = {\n 'expense': _(\"Expense Report\"),\n}\n\nINVOICE_STATUSES = {\n 'pre': _(\"Draft\"),\n 'new': _(\"Sent (awaiting approval)\"),\n 'retracted': _(\"Retracted\"),\n 'accepted': _(\"Accepted (awaiting payment)\"),\n 'paid': _(\"Paid\"),\n 'rejected': _(\"Rejected\"),\n}\n\nJINJA_ENV_COMMON = dict(\n trim_blocks=True, lstrip_blocks=True,\n line_statement_prefix='%',\n # undefined=CustomUndefined,\n)\n\n# https://docs.mangopay.com/api-references/kyc-rules/\nKYC_DOC_MAX_SIZE = 7000000\nKYC_DOC_MAX_SIZE_MB = int(KYC_DOC_MAX_SIZE / 1000000)\nKYC_DOCS_EXTS = ['pdf', 'jpeg', 'jpg', 'gif', 'png']\nKYC_DOCS_EXTS_STR = ', '.join(KYC_DOCS_EXTS)\nKYC_INCOME_THRESHOLDS = (\n (1, 18000),\n (2, 30000),\n (3, 50000),\n (4, 80000),\n (5, 120000),\n (6, 120000),\n)\nKYC_PAYIN_YEARLY_THRESHOLD = Decimal('2500')\nKYC_PAYOUT_YEARLY_THRESHOLD = Decimal('1000')\n\nLAUNCH_TIME = datetime(2016, 2, 3, 12, 50, 0, 0, utc)\n\nPARTICIPANT_KINDS = {\n 'individual': _(\"Individual\"),\n 'organization': _(\"Organization\"),\n 'group': _(\"Team\"),\n}\n\nPASSWORD_MIN_SIZE = 8\nPASSWORD_MAX_SIZE = 150\n\nPAYIN_BANK_WIRE_MIN = Decimal('2.00') # fee \u2248 0.99%\nPAYIN_BANK_WIRE_TARGET = Decimal('5.00') # fee \u2248 0.6%\nPAYIN_CARD_MIN = Decimal(\"15.00\") # fee \u2248 3.5%\nPAYIN_CARD_TARGET = Decimal(\"92.00\") # fee \u2248 2.33%\nPAYIN_DIRECT_DEBIT_MIN = Decimal('25.00') # fee \u2248 3.6%\nPAYIN_DIRECT_DEBIT_TARGET = Decimal('99.00') # fee \u2248 0.94%\n\nPERIOD_CONVERSION_RATES = {\n 'weekly': Decimal(1),\n 'monthly': Decimal(12) / Decimal(52),\n 'yearly': Decimal(1) / Decimal(52),\n}\n\nPOSTAL_ADDRESS_KEYS = (\n 'AddressLine1', 'AddressLine2', 'City', 'Region', 'PostalCode', 'Country'\n)\n\nPRIVACY_FIELDS = OrderedDict([\n ('hide_giving', _(\"Hide total giving from others.\")),\n ('hide_receiving', _(\"Hide total receiving from others.\")),\n ('hide_from_search', _(\"Hide myself from search results on Liberapay.\")),\n ('profile_noindex', _(\"Tell web search engines not to index my profile.\")),\n ('hide_from_lists', _(\"Prevent my profile from being listed on Liberapay.\")),\n])\nPRIVACY_FIELDS_S = ' '.join(PRIVACY_FIELDS.keys())\n\nPRIVILEGES = dict(admin=1, run_payday=2)\ncheck_bits(list(PRIVILEGES.values()))\n\nQUARANTINE = timedelta(weeks=4)\n\nRATE_LIMITS = {\n 'add_email.source': (5, 60*60*24), # 5 per day\n 'add_email.target': (2, 60*60*24), # 2 per day\n 'change_username': (7, 60*60*24*7), # 7 per week\n 'log-in.email': (10, 60*60*24), # 10 per day\n 'log-in.email.not-verified': (2, 60*60*24), # 2 per day\n 'log-in.email.verified': (10, 60*60*24), # 10 per day\n 'log-in.password': (3, 60*60), # 3 per hour\n 'sign-up.ip-addr': (5, 60*60), # 5 per hour per IP address\n 'sign-up.ip-net': (15, 15*60), # 15 per 15 minutes per IP network\n 'sign-up.ip-version': (15, 15*60), # 15 per 15 minutes per IP version\n}\n\nSEPA = set(\"\"\"\n AT BE BG CH CY CZ DE DK EE ES ES FI FR GB GI GR HR HU IE IS IT LI LT LU LV\n MC MT NL NO PL PT RO SE SI SK\n\"\"\".split())\n\nSESSION = str('session') # bytes in python2, unicode in python3\nSESSION_REFRESH = timedelta(hours=1)\nSESSION_TIMEOUT = timedelta(hours=6)\n\n\ndef make_standard_tip(label, weekly):\n return StandardTip(\n label,\n weekly,\n weekly / PERIOD_CONVERSION_RATES['monthly'],\n weekly / PERIOD_CONVERSION_RATES['yearly'],\n )\n\n\nSTANDARD_TIPS = (\n make_standard_tip(_(\"Symbolic\"), Decimal('0.01')),\n make_standard_tip(_(\"Small\"), Decimal('0.25')),\n make_standard_tip(_(\"Medium\"), Decimal('1.00')),\n make_standard_tip(_(\"Large\"), Decimal('5.00')),\n make_standard_tip(_(\"Maximum\"), DONATION_WEEKLY_MAX),\n)\n\nSUMMARY_MAX_SIZE = 100\n\nUSERNAME_MAX_SIZE = 32\n\ndel _\n", "path": "liberapay/constants.py"}]}
| 3,625 | 87 |
gh_patches_debug_39810
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-5999
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tune doesn't show timesteps anymore
This makes it not very useful for monitoring RL runs:
```
+--------------------------+----------+----------+-----------+--------+------------------+----------+
| Trial name | ID | status | loc | iter | total time (s) | reward |
|--------------------------+----------+----------+-----------+--------+------------------+----------|
| SAC_Pendulum-v0_79dcd0fe | 79dcd0fe | RUNNING | pid=11031 | 66 | 163.499 | -1105.85 |
+--------------------------+----------+----------+-----------+--------+------------------+----------+
```
cc @richardliaw
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/tune/result.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import os
6
7 # yapf: disable
8 # __sphinx_doc_begin__
9 # (Optional/Auto-filled) training is terminated. Filled only if not provided.
10 DONE = "done"
11
12 # (Optional) Enum for user controlled checkpoint
13 SHOULD_CHECKPOINT = "should_checkpoint"
14
15 # (Auto-filled) The hostname of the machine hosting the training process.
16 HOSTNAME = "hostname"
17
18 # (Auto-filled) The auto-assigned id of the trial.
19 TRIAL_ID = "trial_id"
20
21 # (Auto-filled) The auto-assigned id of the trial.
22 EXPERIMENT_TAG = "experiment_tag"
23
24 # (Auto-filled) The node ip of the machine hosting the training process.
25 NODE_IP = "node_ip"
26
27 # (Auto-filled) The pid of the training process.
28 PID = "pid"
29
30 # (Optional) Mean reward for current training iteration
31 EPISODE_REWARD_MEAN = "episode_reward_mean"
32
33 # (Optional) Mean loss for training iteration
34 MEAN_LOSS = "mean_loss"
35
36 # (Optional) Mean accuracy for training iteration
37 MEAN_ACCURACY = "mean_accuracy"
38
39 # Number of episodes in this iteration.
40 EPISODES_THIS_ITER = "episodes_this_iter"
41
42 # (Optional/Auto-filled) Accumulated number of episodes for this trial.
43 EPISODES_TOTAL = "episodes_total"
44
45 # Number of timesteps in this iteration.
46 TIMESTEPS_THIS_ITER = "timesteps_this_iter"
47
48 # (Auto-filled) Accumulated number of timesteps for this entire trial.
49 TIMESTEPS_TOTAL = "timesteps_total"
50
51 # (Auto-filled) Time in seconds this iteration took to run.
52 # This may be overriden to override the system-computed time difference.
53 TIME_THIS_ITER_S = "time_this_iter_s"
54
55 # (Auto-filled) Accumulated time in seconds for this entire trial.
56 TIME_TOTAL_S = "time_total_s"
57
58 # (Auto-filled) The index of this training iteration.
59 TRAINING_ITERATION = "training_iteration"
60 # __sphinx_doc_end__
61 # yapf: enable
62
63 DEFAULT_EXPERIMENT_INFO_KEYS = ("trainable_name", EXPERIMENT_TAG, TRIAL_ID)
64
65 DEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, MEAN_ACCURACY,
66 MEAN_LOSS)
67
68 # __duplicate__ is a magic keyword used internally to
69 # avoid double-logging results when using the Function API.
70 RESULT_DUPLICATE = "__duplicate__"
71
72 # Where Tune writes result files by default
73 DEFAULT_RESULTS_DIR = (os.environ.get("TUNE_RESULT_DIR")
74 or os.path.expanduser("~/ray_results"))
75
76 # Meta file about status under each experiment directory, can be
77 # parsed by automlboard if exists.
78 JOB_META_FILE = "job_status.json"
79
80 # Meta file about status under each trial directory, can be parsed
81 # by automlboard if exists.
82 EXPR_META_FILE = "trial_status.json"
83
84 # File that stores parameters of the trial.
85 EXPR_PARAM_FILE = "params.json"
86
87 # Pickle File that stores parameters of the trial.
88 EXPR_PARAM_PICKLE_FILE = "params.pkl"
89
90 # File that stores the progress of the trial.
91 EXPR_PROGRESS_FILE = "progress.csv"
92
93 # File that stores results of the trial.
94 EXPR_RESULT_FILE = "result.json"
95
96 # Config prefix when using Analysis.
97 CONFIG_PREFIX = "config/"
98
```
Path: `python/ray/tune/progress_reporter.py`
Content:
```
1 from __future__ import print_function
2
3 import os
4
5 from ray.tune.result import (DEFAULT_RESULT_KEYS, CONFIG_PREFIX, PID,
6 EPISODE_REWARD_MEAN, MEAN_ACCURACY, MEAN_LOSS,
7 HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S)
8 from ray.tune.util import flatten_dict
9
10 try:
11 from tabulate import tabulate
12 except ImportError:
13 raise ImportError("ray.tune in ray > 0.7.5 requires 'tabulate'. "
14 "Please re-run 'pip install ray[tune]' or "
15 "'pip install ray[rllib]'.")
16
17 DEFAULT_PROGRESS_KEYS = DEFAULT_RESULT_KEYS + (EPISODE_REWARD_MEAN, )
18 # Truncated representations of column names (to accommodate small screens).
19 REPORTED_REPRESENTATIONS = {
20 EPISODE_REWARD_MEAN: "reward",
21 MEAN_ACCURACY: "acc",
22 MEAN_LOSS: "loss",
23 TIME_TOTAL_S: "total time (s)",
24 TRAINING_ITERATION: "iter",
25 }
26
27
28 class ProgressReporter(object):
29 def report(self, trial_runner):
30 """Reports progress across all trials of the trial runner.
31
32 Args:
33 trial_runner: Trial runner to report on.
34 """
35 raise NotImplementedError
36
37
38 class JupyterNotebookReporter(ProgressReporter):
39 def __init__(self, overwrite):
40 """Initializes a new JupyterNotebookReporter.
41
42 Args:
43 overwrite (bool): Flag for overwriting the last reported progress.
44 """
45 self.overwrite = overwrite
46
47 def report(self, trial_runner):
48 delim = "<br>"
49 messages = [
50 "== Status ==",
51 memory_debug_str(),
52 trial_runner.debug_string(delim=delim),
53 trial_progress_str(trial_runner.get_trials(), fmt="html")
54 ]
55 from IPython.display import clear_output
56 from IPython.core.display import display, HTML
57 if self.overwrite:
58 clear_output(wait=True)
59 display(HTML(delim.join(messages) + delim))
60
61
62 class CLIReporter(ProgressReporter):
63 def report(self, trial_runner):
64 messages = [
65 "== Status ==",
66 memory_debug_str(),
67 trial_runner.debug_string(),
68 trial_progress_str(trial_runner.get_trials())
69 ]
70 print("\n".join(messages) + "\n")
71
72
73 def memory_debug_str():
74 try:
75 import psutil
76 total_gb = psutil.virtual_memory().total / (1024**3)
77 used_gb = total_gb - psutil.virtual_memory().available / (1024**3)
78 if used_gb > total_gb * 0.9:
79 warn = (": ***LOW MEMORY*** less than 10% of the memory on "
80 "this node is available for use. This can cause "
81 "unexpected crashes. Consider "
82 "reducing the memory used by your application "
83 "or reducing the Ray object store size by setting "
84 "`object_store_memory` when calling `ray.init`.")
85 else:
86 warn = ""
87 return "Memory usage on this node: {}/{} GiB{}".format(
88 round(used_gb, 1), round(total_gb, 1), warn)
89 except ImportError:
90 return ("Unknown memory usage. Please run `pip install psutil` "
91 "(or ray[debug]) to resolve)")
92
93
94 def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=100):
95 """Returns a human readable message for printing to the console.
96
97 This contains a table where each row represents a trial, its parameters
98 and the current values of its metrics.
99
100 Args:
101 trials (List[Trial]): List of trials to get progress string for.
102 metrics (List[str]): Names of metrics to include. Defaults to
103 metrics defined in DEFAULT_RESULT_KEYS.
104 fmt (str): Output format (see tablefmt in tabulate API).
105 max_rows (int): Maximum number of rows in the trial table.
106 """
107 messages = []
108 delim = "<br>" if fmt == "html" else "\n"
109 if len(trials) < 1:
110 return delim.join(messages)
111
112 num_trials = len(trials)
113 trials_per_state = {}
114 for t in trials:
115 trials_per_state[t.status] = trials_per_state.get(t.status, 0) + 1
116 messages.append("Number of trials: {} ({})".format(num_trials,
117 trials_per_state))
118 for local_dir in sorted({t.local_dir for t in trials}):
119 messages.append("Result logdir: {}".format(local_dir))
120
121 if num_trials > max_rows:
122 overflow = num_trials - max_rows
123 # TODO(ujvl): suggestion for users to view more rows.
124 messages.append("Table truncated to {} rows ({} overflow).".format(
125 max_rows, overflow))
126
127 # Pre-process trials to figure out what columns to show.
128 keys = list(metrics or DEFAULT_PROGRESS_KEYS)
129 keys = [k for k in keys if any(t.last_result.get(k) for t in trials)]
130 has_failed = any(t.error_file for t in trials)
131 # Build rows.
132 trial_table = []
133 params = list(set().union(*[t.evaluated_params for t in trials]))
134 for trial in trials[:min(num_trials, max_rows)]:
135 trial_table.append(_get_trial_info(trial, params, keys, has_failed))
136 # Parse columns.
137 parsed_columns = [REPORTED_REPRESENTATIONS.get(k, k) for k in keys]
138 columns = ["Trial name", "ID", "status", "loc"]
139 columns += ["failures", "error file"] if has_failed else []
140 columns += params + parsed_columns
141 messages.append(
142 tabulate(trial_table, headers=columns, tablefmt=fmt, showindex=False))
143 return delim.join(messages)
144
145
146 def _get_trial_info(trial, parameters, metrics, include_error_data=False):
147 """Returns the following information about a trial:
148
149 name | ID | status | loc | # failures | error_file | params... | metrics...
150
151 Args:
152 trial (Trial): Trial to get information for.
153 parameters (List[str]): Names of trial parameters to include.
154 metrics (List[str]): Names of metrics to include.
155 include_error_data (bool): Include error file and # of failures.
156 """
157 result = flatten_dict(trial.last_result)
158 trial_info = [str(trial), trial.trial_id, trial.status]
159 trial_info += [_location_str(result.get(HOSTNAME), result.get(PID))]
160 if include_error_data:
161 # TODO(ujvl): File path is too long to display in a single row.
162 trial_info += [trial.num_failures, trial.error_file]
163 trial_info += [result.get(CONFIG_PREFIX + param) for param in parameters]
164 trial_info += [result.get(metric) for metric in metrics]
165 return trial_info
166
167
168 def _location_str(hostname, pid):
169 if not pid:
170 return ""
171 elif hostname == os.uname()[1]:
172 return "pid={}".format(pid)
173 else:
174 return "{}:{}".format(hostname, pid)
175
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/ray/tune/progress_reporter.py b/python/ray/tune/progress_reporter.py
--- a/python/ray/tune/progress_reporter.py
+++ b/python/ray/tune/progress_reporter.py
@@ -4,7 +4,8 @@
from ray.tune.result import (DEFAULT_RESULT_KEYS, CONFIG_PREFIX, PID,
EPISODE_REWARD_MEAN, MEAN_ACCURACY, MEAN_LOSS,
- HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S)
+ HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S,
+ TIMESTEPS_TOTAL)
from ray.tune.util import flatten_dict
try:
@@ -21,6 +22,7 @@
MEAN_ACCURACY: "acc",
MEAN_LOSS: "loss",
TIME_TOTAL_S: "total time (s)",
+ TIMESTEPS_TOTAL: "timesteps",
TRAINING_ITERATION: "iter",
}
@@ -135,7 +137,7 @@
trial_table.append(_get_trial_info(trial, params, keys, has_failed))
# Parse columns.
parsed_columns = [REPORTED_REPRESENTATIONS.get(k, k) for k in keys]
- columns = ["Trial name", "ID", "status", "loc"]
+ columns = ["Trial name", "status", "loc"]
columns += ["failures", "error file"] if has_failed else []
columns += params + parsed_columns
messages.append(
@@ -146,7 +148,7 @@
def _get_trial_info(trial, parameters, metrics, include_error_data=False):
"""Returns the following information about a trial:
- name | ID | status | loc | # failures | error_file | params... | metrics...
+ name | status | loc | # failures | error_file | params... | metrics...
Args:
trial (Trial): Trial to get information for.
@@ -155,7 +157,7 @@
include_error_data (bool): Include error file and # of failures.
"""
result = flatten_dict(trial.last_result)
- trial_info = [str(trial), trial.trial_id, trial.status]
+ trial_info = [str(trial), trial.status]
trial_info += [_location_str(result.get(HOSTNAME), result.get(PID))]
if include_error_data:
# TODO(ujvl): File path is too long to display in a single row.
diff --git a/python/ray/tune/result.py b/python/ray/tune/result.py
--- a/python/ray/tune/result.py
+++ b/python/ray/tune/result.py
@@ -62,8 +62,8 @@
DEFAULT_EXPERIMENT_INFO_KEYS = ("trainable_name", EXPERIMENT_TAG, TRIAL_ID)
-DEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, MEAN_ACCURACY,
- MEAN_LOSS)
+DEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, TIMESTEPS_TOTAL,
+ MEAN_ACCURACY, MEAN_LOSS)
# __duplicate__ is a magic keyword used internally to
# avoid double-logging results when using the Function API.
|
{"golden_diff": "diff --git a/python/ray/tune/progress_reporter.py b/python/ray/tune/progress_reporter.py\n--- a/python/ray/tune/progress_reporter.py\n+++ b/python/ray/tune/progress_reporter.py\n@@ -4,7 +4,8 @@\n \n from ray.tune.result import (DEFAULT_RESULT_KEYS, CONFIG_PREFIX, PID,\n EPISODE_REWARD_MEAN, MEAN_ACCURACY, MEAN_LOSS,\n- HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S)\n+ HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S,\n+ TIMESTEPS_TOTAL)\n from ray.tune.util import flatten_dict\n \n try:\n@@ -21,6 +22,7 @@\n MEAN_ACCURACY: \"acc\",\n MEAN_LOSS: \"loss\",\n TIME_TOTAL_S: \"total time (s)\",\n+ TIMESTEPS_TOTAL: \"timesteps\",\n TRAINING_ITERATION: \"iter\",\n }\n \n@@ -135,7 +137,7 @@\n trial_table.append(_get_trial_info(trial, params, keys, has_failed))\n # Parse columns.\n parsed_columns = [REPORTED_REPRESENTATIONS.get(k, k) for k in keys]\n- columns = [\"Trial name\", \"ID\", \"status\", \"loc\"]\n+ columns = [\"Trial name\", \"status\", \"loc\"]\n columns += [\"failures\", \"error file\"] if has_failed else []\n columns += params + parsed_columns\n messages.append(\n@@ -146,7 +148,7 @@\n def _get_trial_info(trial, parameters, metrics, include_error_data=False):\n \"\"\"Returns the following information about a trial:\n \n- name | ID | status | loc | # failures | error_file | params... | metrics...\n+ name | status | loc | # failures | error_file | params... | metrics...\n \n Args:\n trial (Trial): Trial to get information for.\n@@ -155,7 +157,7 @@\n include_error_data (bool): Include error file and # of failures.\n \"\"\"\n result = flatten_dict(trial.last_result)\n- trial_info = [str(trial), trial.trial_id, trial.status]\n+ trial_info = [str(trial), trial.status]\n trial_info += [_location_str(result.get(HOSTNAME), result.get(PID))]\n if include_error_data:\n # TODO(ujvl): File path is too long to display in a single row.\ndiff --git a/python/ray/tune/result.py b/python/ray/tune/result.py\n--- a/python/ray/tune/result.py\n+++ b/python/ray/tune/result.py\n@@ -62,8 +62,8 @@\n \n DEFAULT_EXPERIMENT_INFO_KEYS = (\"trainable_name\", EXPERIMENT_TAG, TRIAL_ID)\n \n-DEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, MEAN_ACCURACY,\n- MEAN_LOSS)\n+DEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, TIMESTEPS_TOTAL,\n+ MEAN_ACCURACY, MEAN_LOSS)\n \n # __duplicate__ is a magic keyword used internally to\n # avoid double-logging results when using the Function API.\n", "issue": "Tune doesn't show timesteps anymore\nThis makes it not very useful for monitoring RL runs:\r\n```\r\n+--------------------------+----------+----------+-----------+--------+------------------+----------+\r\n| Trial name | ID | status | loc | iter | total time (s) | reward |\r\n|--------------------------+----------+----------+-----------+--------+------------------+----------|\r\n| SAC_Pendulum-v0_79dcd0fe | 79dcd0fe | RUNNING | pid=11031 | 66 | 163.499 | -1105.85 |\r\n+--------------------------+----------+----------+-----------+--------+------------------+----------+\r\n```\r\n\r\ncc @richardliaw \n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\n\n# yapf: disable\n# __sphinx_doc_begin__\n# (Optional/Auto-filled) training is terminated. Filled only if not provided.\nDONE = \"done\"\n\n# (Optional) Enum for user controlled checkpoint\nSHOULD_CHECKPOINT = \"should_checkpoint\"\n\n# (Auto-filled) The hostname of the machine hosting the training process.\nHOSTNAME = \"hostname\"\n\n# (Auto-filled) The auto-assigned id of the trial.\nTRIAL_ID = \"trial_id\"\n\n# (Auto-filled) The auto-assigned id of the trial.\nEXPERIMENT_TAG = \"experiment_tag\"\n\n# (Auto-filled) The node ip of the machine hosting the training process.\nNODE_IP = \"node_ip\"\n\n# (Auto-filled) The pid of the training process.\nPID = \"pid\"\n\n# (Optional) Mean reward for current training iteration\nEPISODE_REWARD_MEAN = \"episode_reward_mean\"\n\n# (Optional) Mean loss for training iteration\nMEAN_LOSS = \"mean_loss\"\n\n# (Optional) Mean accuracy for training iteration\nMEAN_ACCURACY = \"mean_accuracy\"\n\n# Number of episodes in this iteration.\nEPISODES_THIS_ITER = \"episodes_this_iter\"\n\n# (Optional/Auto-filled) Accumulated number of episodes for this trial.\nEPISODES_TOTAL = \"episodes_total\"\n\n# Number of timesteps in this iteration.\nTIMESTEPS_THIS_ITER = \"timesteps_this_iter\"\n\n# (Auto-filled) Accumulated number of timesteps for this entire trial.\nTIMESTEPS_TOTAL = \"timesteps_total\"\n\n# (Auto-filled) Time in seconds this iteration took to run.\n# This may be overriden to override the system-computed time difference.\nTIME_THIS_ITER_S = \"time_this_iter_s\"\n\n# (Auto-filled) Accumulated time in seconds for this entire trial.\nTIME_TOTAL_S = \"time_total_s\"\n\n# (Auto-filled) The index of this training iteration.\nTRAINING_ITERATION = \"training_iteration\"\n# __sphinx_doc_end__\n# yapf: enable\n\nDEFAULT_EXPERIMENT_INFO_KEYS = (\"trainable_name\", EXPERIMENT_TAG, TRIAL_ID)\n\nDEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, MEAN_ACCURACY,\n MEAN_LOSS)\n\n# __duplicate__ is a magic keyword used internally to\n# avoid double-logging results when using the Function API.\nRESULT_DUPLICATE = \"__duplicate__\"\n\n# Where Tune writes result files by default\nDEFAULT_RESULTS_DIR = (os.environ.get(\"TUNE_RESULT_DIR\")\n or os.path.expanduser(\"~/ray_results\"))\n\n# Meta file about status under each experiment directory, can be\n# parsed by automlboard if exists.\nJOB_META_FILE = \"job_status.json\"\n\n# Meta file about status under each trial directory, can be parsed\n# by automlboard if exists.\nEXPR_META_FILE = \"trial_status.json\"\n\n# File that stores parameters of the trial.\nEXPR_PARAM_FILE = \"params.json\"\n\n# Pickle File that stores parameters of the trial.\nEXPR_PARAM_PICKLE_FILE = \"params.pkl\"\n\n# File that stores the progress of the trial.\nEXPR_PROGRESS_FILE = \"progress.csv\"\n\n# File that stores results of the trial.\nEXPR_RESULT_FILE = \"result.json\"\n\n# Config prefix when using Analysis.\nCONFIG_PREFIX = \"config/\"\n", "path": "python/ray/tune/result.py"}, {"content": "from __future__ import print_function\n\nimport os\n\nfrom ray.tune.result import (DEFAULT_RESULT_KEYS, CONFIG_PREFIX, PID,\n EPISODE_REWARD_MEAN, MEAN_ACCURACY, MEAN_LOSS,\n HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S)\nfrom ray.tune.util import flatten_dict\n\ntry:\n from tabulate import tabulate\nexcept ImportError:\n raise ImportError(\"ray.tune in ray > 0.7.5 requires 'tabulate'. \"\n \"Please re-run 'pip install ray[tune]' or \"\n \"'pip install ray[rllib]'.\")\n\nDEFAULT_PROGRESS_KEYS = DEFAULT_RESULT_KEYS + (EPISODE_REWARD_MEAN, )\n# Truncated representations of column names (to accommodate small screens).\nREPORTED_REPRESENTATIONS = {\n EPISODE_REWARD_MEAN: \"reward\",\n MEAN_ACCURACY: \"acc\",\n MEAN_LOSS: \"loss\",\n TIME_TOTAL_S: \"total time (s)\",\n TRAINING_ITERATION: \"iter\",\n}\n\n\nclass ProgressReporter(object):\n def report(self, trial_runner):\n \"\"\"Reports progress across all trials of the trial runner.\n\n Args:\n trial_runner: Trial runner to report on.\n \"\"\"\n raise NotImplementedError\n\n\nclass JupyterNotebookReporter(ProgressReporter):\n def __init__(self, overwrite):\n \"\"\"Initializes a new JupyterNotebookReporter.\n\n Args:\n overwrite (bool): Flag for overwriting the last reported progress.\n \"\"\"\n self.overwrite = overwrite\n\n def report(self, trial_runner):\n delim = \"<br>\"\n messages = [\n \"== Status ==\",\n memory_debug_str(),\n trial_runner.debug_string(delim=delim),\n trial_progress_str(trial_runner.get_trials(), fmt=\"html\")\n ]\n from IPython.display import clear_output\n from IPython.core.display import display, HTML\n if self.overwrite:\n clear_output(wait=True)\n display(HTML(delim.join(messages) + delim))\n\n\nclass CLIReporter(ProgressReporter):\n def report(self, trial_runner):\n messages = [\n \"== Status ==\",\n memory_debug_str(),\n trial_runner.debug_string(),\n trial_progress_str(trial_runner.get_trials())\n ]\n print(\"\\n\".join(messages) + \"\\n\")\n\n\ndef memory_debug_str():\n try:\n import psutil\n total_gb = psutil.virtual_memory().total / (1024**3)\n used_gb = total_gb - psutil.virtual_memory().available / (1024**3)\n if used_gb > total_gb * 0.9:\n warn = (\": ***LOW MEMORY*** less than 10% of the memory on \"\n \"this node is available for use. This can cause \"\n \"unexpected crashes. Consider \"\n \"reducing the memory used by your application \"\n \"or reducing the Ray object store size by setting \"\n \"`object_store_memory` when calling `ray.init`.\")\n else:\n warn = \"\"\n return \"Memory usage on this node: {}/{} GiB{}\".format(\n round(used_gb, 1), round(total_gb, 1), warn)\n except ImportError:\n return (\"Unknown memory usage. Please run `pip install psutil` \"\n \"(or ray[debug]) to resolve)\")\n\n\ndef trial_progress_str(trials, metrics=None, fmt=\"psql\", max_rows=100):\n \"\"\"Returns a human readable message for printing to the console.\n\n This contains a table where each row represents a trial, its parameters\n and the current values of its metrics.\n\n Args:\n trials (List[Trial]): List of trials to get progress string for.\n metrics (List[str]): Names of metrics to include. Defaults to\n metrics defined in DEFAULT_RESULT_KEYS.\n fmt (str): Output format (see tablefmt in tabulate API).\n max_rows (int): Maximum number of rows in the trial table.\n \"\"\"\n messages = []\n delim = \"<br>\" if fmt == \"html\" else \"\\n\"\n if len(trials) < 1:\n return delim.join(messages)\n\n num_trials = len(trials)\n trials_per_state = {}\n for t in trials:\n trials_per_state[t.status] = trials_per_state.get(t.status, 0) + 1\n messages.append(\"Number of trials: {} ({})\".format(num_trials,\n trials_per_state))\n for local_dir in sorted({t.local_dir for t in trials}):\n messages.append(\"Result logdir: {}\".format(local_dir))\n\n if num_trials > max_rows:\n overflow = num_trials - max_rows\n # TODO(ujvl): suggestion for users to view more rows.\n messages.append(\"Table truncated to {} rows ({} overflow).\".format(\n max_rows, overflow))\n\n # Pre-process trials to figure out what columns to show.\n keys = list(metrics or DEFAULT_PROGRESS_KEYS)\n keys = [k for k in keys if any(t.last_result.get(k) for t in trials)]\n has_failed = any(t.error_file for t in trials)\n # Build rows.\n trial_table = []\n params = list(set().union(*[t.evaluated_params for t in trials]))\n for trial in trials[:min(num_trials, max_rows)]:\n trial_table.append(_get_trial_info(trial, params, keys, has_failed))\n # Parse columns.\n parsed_columns = [REPORTED_REPRESENTATIONS.get(k, k) for k in keys]\n columns = [\"Trial name\", \"ID\", \"status\", \"loc\"]\n columns += [\"failures\", \"error file\"] if has_failed else []\n columns += params + parsed_columns\n messages.append(\n tabulate(trial_table, headers=columns, tablefmt=fmt, showindex=False))\n return delim.join(messages)\n\n\ndef _get_trial_info(trial, parameters, metrics, include_error_data=False):\n \"\"\"Returns the following information about a trial:\n\n name | ID | status | loc | # failures | error_file | params... | metrics...\n\n Args:\n trial (Trial): Trial to get information for.\n parameters (List[str]): Names of trial parameters to include.\n metrics (List[str]): Names of metrics to include.\n include_error_data (bool): Include error file and # of failures.\n \"\"\"\n result = flatten_dict(trial.last_result)\n trial_info = [str(trial), trial.trial_id, trial.status]\n trial_info += [_location_str(result.get(HOSTNAME), result.get(PID))]\n if include_error_data:\n # TODO(ujvl): File path is too long to display in a single row.\n trial_info += [trial.num_failures, trial.error_file]\n trial_info += [result.get(CONFIG_PREFIX + param) for param in parameters]\n trial_info += [result.get(metric) for metric in metrics]\n return trial_info\n\n\ndef _location_str(hostname, pid):\n if not pid:\n return \"\"\n elif hostname == os.uname()[1]:\n return \"pid={}\".format(pid)\n else:\n return \"{}:{}\".format(hostname, pid)\n", "path": "python/ray/tune/progress_reporter.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\n\n# yapf: disable\n# __sphinx_doc_begin__\n# (Optional/Auto-filled) training is terminated. Filled only if not provided.\nDONE = \"done\"\n\n# (Optional) Enum for user controlled checkpoint\nSHOULD_CHECKPOINT = \"should_checkpoint\"\n\n# (Auto-filled) The hostname of the machine hosting the training process.\nHOSTNAME = \"hostname\"\n\n# (Auto-filled) The auto-assigned id of the trial.\nTRIAL_ID = \"trial_id\"\n\n# (Auto-filled) The auto-assigned id of the trial.\nEXPERIMENT_TAG = \"experiment_tag\"\n\n# (Auto-filled) The node ip of the machine hosting the training process.\nNODE_IP = \"node_ip\"\n\n# (Auto-filled) The pid of the training process.\nPID = \"pid\"\n\n# (Optional) Mean reward for current training iteration\nEPISODE_REWARD_MEAN = \"episode_reward_mean\"\n\n# (Optional) Mean loss for training iteration\nMEAN_LOSS = \"mean_loss\"\n\n# (Optional) Mean accuracy for training iteration\nMEAN_ACCURACY = \"mean_accuracy\"\n\n# Number of episodes in this iteration.\nEPISODES_THIS_ITER = \"episodes_this_iter\"\n\n# (Optional/Auto-filled) Accumulated number of episodes for this trial.\nEPISODES_TOTAL = \"episodes_total\"\n\n# Number of timesteps in this iteration.\nTIMESTEPS_THIS_ITER = \"timesteps_this_iter\"\n\n# (Auto-filled) Accumulated number of timesteps for this entire trial.\nTIMESTEPS_TOTAL = \"timesteps_total\"\n\n# (Auto-filled) Time in seconds this iteration took to run.\n# This may be overriden to override the system-computed time difference.\nTIME_THIS_ITER_S = \"time_this_iter_s\"\n\n# (Auto-filled) Accumulated time in seconds for this entire trial.\nTIME_TOTAL_S = \"time_total_s\"\n\n# (Auto-filled) The index of this training iteration.\nTRAINING_ITERATION = \"training_iteration\"\n# __sphinx_doc_end__\n# yapf: enable\n\nDEFAULT_EXPERIMENT_INFO_KEYS = (\"trainable_name\", EXPERIMENT_TAG, TRIAL_ID)\n\nDEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, TIMESTEPS_TOTAL,\n MEAN_ACCURACY, MEAN_LOSS)\n\n# __duplicate__ is a magic keyword used internally to\n# avoid double-logging results when using the Function API.\nRESULT_DUPLICATE = \"__duplicate__\"\n\n# Where Tune writes result files by default\nDEFAULT_RESULTS_DIR = (os.environ.get(\"TUNE_RESULT_DIR\")\n or os.path.expanduser(\"~/ray_results\"))\n\n# Meta file about status under each experiment directory, can be\n# parsed by automlboard if exists.\nJOB_META_FILE = \"job_status.json\"\n\n# Meta file about status under each trial directory, can be parsed\n# by automlboard if exists.\nEXPR_META_FILE = \"trial_status.json\"\n\n# File that stores parameters of the trial.\nEXPR_PARAM_FILE = \"params.json\"\n\n# Pickle File that stores parameters of the trial.\nEXPR_PARAM_PICKLE_FILE = \"params.pkl\"\n\n# File that stores the progress of the trial.\nEXPR_PROGRESS_FILE = \"progress.csv\"\n\n# File that stores results of the trial.\nEXPR_RESULT_FILE = \"result.json\"\n\n# Config prefix when using Analysis.\nCONFIG_PREFIX = \"config/\"\n", "path": "python/ray/tune/result.py"}, {"content": "from __future__ import print_function\n\nimport os\n\nfrom ray.tune.result import (DEFAULT_RESULT_KEYS, CONFIG_PREFIX, PID,\n EPISODE_REWARD_MEAN, MEAN_ACCURACY, MEAN_LOSS,\n HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S,\n TIMESTEPS_TOTAL)\nfrom ray.tune.util import flatten_dict\n\ntry:\n from tabulate import tabulate\nexcept ImportError:\n raise ImportError(\"ray.tune in ray > 0.7.5 requires 'tabulate'. \"\n \"Please re-run 'pip install ray[tune]' or \"\n \"'pip install ray[rllib]'.\")\n\nDEFAULT_PROGRESS_KEYS = DEFAULT_RESULT_KEYS + (EPISODE_REWARD_MEAN, )\n# Truncated representations of column names (to accommodate small screens).\nREPORTED_REPRESENTATIONS = {\n EPISODE_REWARD_MEAN: \"reward\",\n MEAN_ACCURACY: \"acc\",\n MEAN_LOSS: \"loss\",\n TIME_TOTAL_S: \"total time (s)\",\n TIMESTEPS_TOTAL: \"timesteps\",\n TRAINING_ITERATION: \"iter\",\n}\n\n\nclass ProgressReporter(object):\n def report(self, trial_runner):\n \"\"\"Reports progress across all trials of the trial runner.\n\n Args:\n trial_runner: Trial runner to report on.\n \"\"\"\n raise NotImplementedError\n\n\nclass JupyterNotebookReporter(ProgressReporter):\n def __init__(self, overwrite):\n \"\"\"Initializes a new JupyterNotebookReporter.\n\n Args:\n overwrite (bool): Flag for overwriting the last reported progress.\n \"\"\"\n self.overwrite = overwrite\n\n def report(self, trial_runner):\n delim = \"<br>\"\n messages = [\n \"== Status ==\",\n memory_debug_str(),\n trial_runner.debug_string(delim=delim),\n trial_progress_str(trial_runner.get_trials(), fmt=\"html\")\n ]\n from IPython.display import clear_output\n from IPython.core.display import display, HTML\n if self.overwrite:\n clear_output(wait=True)\n display(HTML(delim.join(messages) + delim))\n\n\nclass CLIReporter(ProgressReporter):\n def report(self, trial_runner):\n messages = [\n \"== Status ==\",\n memory_debug_str(),\n trial_runner.debug_string(),\n trial_progress_str(trial_runner.get_trials())\n ]\n print(\"\\n\".join(messages) + \"\\n\")\n\n\ndef memory_debug_str():\n try:\n import psutil\n total_gb = psutil.virtual_memory().total / (1024**3)\n used_gb = total_gb - psutil.virtual_memory().available / (1024**3)\n if used_gb > total_gb * 0.9:\n warn = (\": ***LOW MEMORY*** less than 10% of the memory on \"\n \"this node is available for use. This can cause \"\n \"unexpected crashes. Consider \"\n \"reducing the memory used by your application \"\n \"or reducing the Ray object store size by setting \"\n \"`object_store_memory` when calling `ray.init`.\")\n else:\n warn = \"\"\n return \"Memory usage on this node: {}/{} GiB{}\".format(\n round(used_gb, 1), round(total_gb, 1), warn)\n except ImportError:\n return (\"Unknown memory usage. Please run `pip install psutil` \"\n \"(or ray[debug]) to resolve)\")\n\n\ndef trial_progress_str(trials, metrics=None, fmt=\"psql\", max_rows=100):\n \"\"\"Returns a human readable message for printing to the console.\n\n This contains a table where each row represents a trial, its parameters\n and the current values of its metrics.\n\n Args:\n trials (List[Trial]): List of trials to get progress string for.\n metrics (List[str]): Names of metrics to include. Defaults to\n metrics defined in DEFAULT_RESULT_KEYS.\n fmt (str): Output format (see tablefmt in tabulate API).\n max_rows (int): Maximum number of rows in the trial table.\n \"\"\"\n messages = []\n delim = \"<br>\" if fmt == \"html\" else \"\\n\"\n if len(trials) < 1:\n return delim.join(messages)\n\n num_trials = len(trials)\n trials_per_state = {}\n for t in trials:\n trials_per_state[t.status] = trials_per_state.get(t.status, 0) + 1\n messages.append(\"Number of trials: {} ({})\".format(num_trials,\n trials_per_state))\n for local_dir in sorted({t.local_dir for t in trials}):\n messages.append(\"Result logdir: {}\".format(local_dir))\n\n if num_trials > max_rows:\n overflow = num_trials - max_rows\n # TODO(ujvl): suggestion for users to view more rows.\n messages.append(\"Table truncated to {} rows ({} overflow).\".format(\n max_rows, overflow))\n\n # Pre-process trials to figure out what columns to show.\n keys = list(metrics or DEFAULT_PROGRESS_KEYS)\n keys = [k for k in keys if any(t.last_result.get(k) for t in trials)]\n has_failed = any(t.error_file for t in trials)\n # Build rows.\n trial_table = []\n params = list(set().union(*[t.evaluated_params for t in trials]))\n for trial in trials[:min(num_trials, max_rows)]:\n trial_table.append(_get_trial_info(trial, params, keys, has_failed))\n # Parse columns.\n parsed_columns = [REPORTED_REPRESENTATIONS.get(k, k) for k in keys]\n columns = [\"Trial name\", \"status\", \"loc\"]\n columns += [\"failures\", \"error file\"] if has_failed else []\n columns += params + parsed_columns\n messages.append(\n tabulate(trial_table, headers=columns, tablefmt=fmt, showindex=False))\n return delim.join(messages)\n\n\ndef _get_trial_info(trial, parameters, metrics, include_error_data=False):\n \"\"\"Returns the following information about a trial:\n\n name | status | loc | # failures | error_file | params... | metrics...\n\n Args:\n trial (Trial): Trial to get information for.\n parameters (List[str]): Names of trial parameters to include.\n metrics (List[str]): Names of metrics to include.\n include_error_data (bool): Include error file and # of failures.\n \"\"\"\n result = flatten_dict(trial.last_result)\n trial_info = [str(trial), trial.status]\n trial_info += [_location_str(result.get(HOSTNAME), result.get(PID))]\n if include_error_data:\n # TODO(ujvl): File path is too long to display in a single row.\n trial_info += [trial.num_failures, trial.error_file]\n trial_info += [result.get(CONFIG_PREFIX + param) for param in parameters]\n trial_info += [result.get(metric) for metric in metrics]\n return trial_info\n\n\ndef _location_str(hostname, pid):\n if not pid:\n return \"\"\n elif hostname == os.uname()[1]:\n return \"pid={}\".format(pid)\n else:\n return \"{}:{}\".format(hostname, pid)\n", "path": "python/ray/tune/progress_reporter.py"}]}
| 3,302 | 696 |
gh_patches_debug_24637
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-2950
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `show_history` option to Spreadsheet API
Changes were made to the `modin_spreadsheet` API `show_grid`. Make corresponding changes on Spreadsheet API `from_dataframe`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modin/spreadsheet/general.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 from .. import pandas as pd
15 from modin_spreadsheet import show_grid, SpreadsheetWidget
16
17
18 def from_dataframe(
19 dataframe,
20 show_toolbar=None,
21 precision=None,
22 grid_options=None,
23 column_options=None,
24 column_definitions=None,
25 row_edit_callback=None,
26 ):
27 """
28 Renders a DataFrame or Series as an interactive spreadsheet, represented by
29 an instance of the ``SpreadsheetWidget`` class. The ``SpreadsheetWidget`` instance
30 is constructed using the options passed in to this function. The
31 ``dataframe`` argument to this function is used as the ``df`` kwarg in
32 call to the SpreadsheetWidget constructor, and the rest of the parameters
33 are passed through as is.
34
35 If the ``dataframe`` argument is a Series, it will be converted to a
36 DataFrame before being passed in to the SpreadsheetWidget constructor as the
37 ``df`` kwarg.
38
39 :rtype: SpreadsheetWidget
40
41 Parameters
42 ----------
43 dataframe : DataFrame
44 The DataFrame that will be displayed by this instance of
45 SpreadsheetWidget.
46 grid_options : dict
47 Options to use when creating the SlickGrid control (i.e. the
48 interactive grid). See the Notes section below for more information
49 on the available options, as well as the default options that this
50 widget uses.
51 precision : integer
52 The number of digits of precision to display for floating-point
53 values. If unset, we use the value of
54 `pandas.get_option('display.precision')`.
55 show_toolbar : bool
56 Whether to show a toolbar with options for adding/removing rows.
57 Adding/removing rows is an experimental feature which only works
58 with DataFrames that have an integer index.
59 column_options : dict
60 Column options that are to be applied to every column. See the
61 Notes section below for more information on the available options,
62 as well as the default options that this widget uses.
63 column_definitions : dict
64 Column options that are to be applied to individual
65 columns. The keys of the dict should be the column names, and each
66 value should be the column options for a particular column,
67 represented as a dict. The available options for each column are the
68 same options that are available to be set for all columns via the
69 ``column_options`` parameter. See the Notes section below for more
70 information on those options.
71 row_edit_callback : callable
72 A callable that is called to determine whether a particular row
73 should be editable or not. Its signature should be
74 ``callable(row)``, where ``row`` is a dictionary which contains a
75 particular row's values, keyed by column name. The callback should
76 return True if the provided row should be editable, and False
77 otherwise.
78
79
80 Notes
81 -----
82 The following dictionary is used for ``grid_options`` if none are
83 provided explicitly::
84
85 {
86 # SlickGrid options
87 'fullWidthRows': True,
88 'syncColumnCellResize': True,
89 'forceFitColumns': False,
90 'defaultColumnWidth': 150,
91 'rowHeight': 28,
92 'enableColumnReorder': False,
93 'enableTextSelectionOnCells': True,
94 'editable': True,
95 'autoEdit': False,
96 'explicitInitialization': True,
97
98 # Modin-spreadsheet options
99 'maxVisibleRows': 15,
100 'minVisibleRows': 8,
101 'sortable': True,
102 'filterable': True,
103 'highlightSelectedCell': False,
104 'highlightSelectedRow': True
105 }
106
107 The first group of options are SlickGrid "grid options" which are
108 described in the `SlickGrid documentation
109 <https://github.com/mleibman/SlickGrid/wiki/Grid-Options>`_.
110
111 The second group of option are options that were added specifically
112 for modin-spreadsheet and therefore are not documented in the SlickGrid documentation.
113 The following bullet points describe these options.
114
115 * **maxVisibleRows** The maximum number of rows that modin-spreadsheet will show.
116 * **minVisibleRows** The minimum number of rows that modin-spreadsheet will show
117 * **sortable** Whether the modin-spreadsheet instance will allow the user to sort
118 columns by clicking the column headers. When this is set to ``False``,
119 nothing will happen when users click the column headers.
120 * **filterable** Whether the modin-spreadsheet instance will allow the user to filter
121 the grid. When this is set to ``False`` the filter icons won't be shown
122 for any columns.
123 * **highlightSelectedCell** If you set this to True, the selected cell
124 will be given a light blue border.
125 * **highlightSelectedRow** If you set this to False, the light blue
126 background that's shown by default for selected rows will be hidden.
127
128 The following dictionary is used for ``column_options`` if none are
129 provided explicitly::
130
131 {
132 # SlickGrid column options
133 'defaultSortAsc': True,
134 'maxWidth': None,
135 'minWidth': 30,
136 'resizable': True,
137 'sortable': True,
138 'toolTip': "",
139 'width': None
140
141 # Modin-spreadsheet column options
142 'editable': True,
143 }
144
145 The first group of options are SlickGrid "column options" which are
146 described in the `SlickGrid documentation
147 <https://github.com/mleibman/SlickGrid/wiki/Column-Options>`_.
148
149 The ``editable`` option was added specifically for modin-spreadsheet and therefore is
150 not documented in the SlickGrid documentation. This option specifies
151 whether a column should be editable or not.
152
153 See Also
154 --------
155 set_defaults : Permanently set global defaults for the parameters
156 of ``show_grid``, with the exception of the ``dataframe``
157 and ``column_definitions`` parameters, since those
158 depend on the particular set of data being shown by an
159 instance, and therefore aren't parameters we would want
160 to set for all SpreadsheetWidget instances.
161 set_grid_option : Permanently set global defaults for individual
162 grid options. Does so by changing the defaults
163 that the ``show_grid`` method uses for the
164 ``grid_options`` parameter.
165 SpreadsheetWidget : The widget class that is instantiated and returned by this
166 method.
167
168 """
169 if not isinstance(dataframe, pd.DataFrame):
170 raise TypeError("dataframe must be modin.DataFrame, not %s" % type(dataframe))
171 return show_grid(
172 dataframe,
173 show_toolbar,
174 precision,
175 grid_options,
176 column_options,
177 column_definitions,
178 row_edit_callback,
179 )
180
181
182 def to_dataframe(spreadsheet):
183 """
184 Get a copy of the DataFrame that reflects the current state of the ``spreadsheet`` SpreadsheetWidget instance UI.
185 This includes any sorting or filtering changes, as well as edits
186 that have been made by double clicking cells.
187
188 :rtype: DataFrame
189
190 Parameters
191 ----------
192 spreadsheet : SpreadsheetWidget
193 The SpreadsheetWidget instance that DataFrame that will be displayed by this instance of
194 SpreadsheetWidget.
195 """
196 if not isinstance(spreadsheet, SpreadsheetWidget):
197 raise TypeError(
198 "spreadsheet must be modin_spreadsheet.SpreadsheetWidget, not %s"
199 % type(spreadsheet)
200 )
201 return spreadsheet.get_changed_df()
202
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/modin/spreadsheet/general.py b/modin/spreadsheet/general.py
--- a/modin/spreadsheet/general.py
+++ b/modin/spreadsheet/general.py
@@ -18,6 +18,7 @@
def from_dataframe(
dataframe,
show_toolbar=None,
+ show_history=None,
precision=None,
grid_options=None,
column_options=None,
@@ -56,6 +57,9 @@
Whether to show a toolbar with options for adding/removing rows.
Adding/removing rows is an experimental feature which only works
with DataFrames that have an integer index.
+ show_history : bool
+ Whether to show the cell containing the spreadsheet transformation
+ history.
column_options : dict
Column options that are to be applied to every column. See the
Notes section below for more information on the available options,
@@ -171,6 +175,7 @@
return show_grid(
dataframe,
show_toolbar,
+ show_history,
precision,
grid_options,
column_options,
|
{"golden_diff": "diff --git a/modin/spreadsheet/general.py b/modin/spreadsheet/general.py\n--- a/modin/spreadsheet/general.py\n+++ b/modin/spreadsheet/general.py\n@@ -18,6 +18,7 @@\n def from_dataframe(\n dataframe,\n show_toolbar=None,\n+ show_history=None,\n precision=None,\n grid_options=None,\n column_options=None,\n@@ -56,6 +57,9 @@\n Whether to show a toolbar with options for adding/removing rows.\n Adding/removing rows is an experimental feature which only works\n with DataFrames that have an integer index.\n+ show_history : bool\n+ Whether to show the cell containing the spreadsheet transformation\n+ history.\n column_options : dict\n Column options that are to be applied to every column. See the\n Notes section below for more information on the available options,\n@@ -171,6 +175,7 @@\n return show_grid(\n dataframe,\n show_toolbar,\n+ show_history,\n precision,\n grid_options,\n column_options,\n", "issue": "Add `show_history` option to Spreadsheet API\nChanges were made to the `modin_spreadsheet` API `show_grid`. Make corresponding changes on Spreadsheet API `from_dataframe`.\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nfrom .. import pandas as pd\nfrom modin_spreadsheet import show_grid, SpreadsheetWidget\n\n\ndef from_dataframe(\n dataframe,\n show_toolbar=None,\n precision=None,\n grid_options=None,\n column_options=None,\n column_definitions=None,\n row_edit_callback=None,\n):\n \"\"\"\n Renders a DataFrame or Series as an interactive spreadsheet, represented by\n an instance of the ``SpreadsheetWidget`` class. The ``SpreadsheetWidget`` instance\n is constructed using the options passed in to this function. The\n ``dataframe`` argument to this function is used as the ``df`` kwarg in\n call to the SpreadsheetWidget constructor, and the rest of the parameters\n are passed through as is.\n\n If the ``dataframe`` argument is a Series, it will be converted to a\n DataFrame before being passed in to the SpreadsheetWidget constructor as the\n ``df`` kwarg.\n\n :rtype: SpreadsheetWidget\n\n Parameters\n ----------\n dataframe : DataFrame\n The DataFrame that will be displayed by this instance of\n SpreadsheetWidget.\n grid_options : dict\n Options to use when creating the SlickGrid control (i.e. the\n interactive grid). See the Notes section below for more information\n on the available options, as well as the default options that this\n widget uses.\n precision : integer\n The number of digits of precision to display for floating-point\n values. If unset, we use the value of\n `pandas.get_option('display.precision')`.\n show_toolbar : bool\n Whether to show a toolbar with options for adding/removing rows.\n Adding/removing rows is an experimental feature which only works\n with DataFrames that have an integer index.\n column_options : dict\n Column options that are to be applied to every column. See the\n Notes section below for more information on the available options,\n as well as the default options that this widget uses.\n column_definitions : dict\n Column options that are to be applied to individual\n columns. The keys of the dict should be the column names, and each\n value should be the column options for a particular column,\n represented as a dict. The available options for each column are the\n same options that are available to be set for all columns via the\n ``column_options`` parameter. See the Notes section below for more\n information on those options.\n row_edit_callback : callable\n A callable that is called to determine whether a particular row\n should be editable or not. Its signature should be\n ``callable(row)``, where ``row`` is a dictionary which contains a\n particular row's values, keyed by column name. The callback should\n return True if the provided row should be editable, and False\n otherwise.\n\n\n Notes\n -----\n The following dictionary is used for ``grid_options`` if none are\n provided explicitly::\n\n {\n # SlickGrid options\n 'fullWidthRows': True,\n 'syncColumnCellResize': True,\n 'forceFitColumns': False,\n 'defaultColumnWidth': 150,\n 'rowHeight': 28,\n 'enableColumnReorder': False,\n 'enableTextSelectionOnCells': True,\n 'editable': True,\n 'autoEdit': False,\n 'explicitInitialization': True,\n\n # Modin-spreadsheet options\n 'maxVisibleRows': 15,\n 'minVisibleRows': 8,\n 'sortable': True,\n 'filterable': True,\n 'highlightSelectedCell': False,\n 'highlightSelectedRow': True\n }\n\n The first group of options are SlickGrid \"grid options\" which are\n described in the `SlickGrid documentation\n <https://github.com/mleibman/SlickGrid/wiki/Grid-Options>`_.\n\n The second group of option are options that were added specifically\n for modin-spreadsheet and therefore are not documented in the SlickGrid documentation.\n The following bullet points describe these options.\n\n * **maxVisibleRows** The maximum number of rows that modin-spreadsheet will show.\n * **minVisibleRows** The minimum number of rows that modin-spreadsheet will show\n * **sortable** Whether the modin-spreadsheet instance will allow the user to sort\n columns by clicking the column headers. When this is set to ``False``,\n nothing will happen when users click the column headers.\n * **filterable** Whether the modin-spreadsheet instance will allow the user to filter\n the grid. When this is set to ``False`` the filter icons won't be shown\n for any columns.\n * **highlightSelectedCell** If you set this to True, the selected cell\n will be given a light blue border.\n * **highlightSelectedRow** If you set this to False, the light blue\n background that's shown by default for selected rows will be hidden.\n\n The following dictionary is used for ``column_options`` if none are\n provided explicitly::\n\n {\n # SlickGrid column options\n 'defaultSortAsc': True,\n 'maxWidth': None,\n 'minWidth': 30,\n 'resizable': True,\n 'sortable': True,\n 'toolTip': \"\",\n 'width': None\n\n # Modin-spreadsheet column options\n 'editable': True,\n }\n\n The first group of options are SlickGrid \"column options\" which are\n described in the `SlickGrid documentation\n <https://github.com/mleibman/SlickGrid/wiki/Column-Options>`_.\n\n The ``editable`` option was added specifically for modin-spreadsheet and therefore is\n not documented in the SlickGrid documentation. This option specifies\n whether a column should be editable or not.\n\n See Also\n --------\n set_defaults : Permanently set global defaults for the parameters\n of ``show_grid``, with the exception of the ``dataframe``\n and ``column_definitions`` parameters, since those\n depend on the particular set of data being shown by an\n instance, and therefore aren't parameters we would want\n to set for all SpreadsheetWidget instances.\n set_grid_option : Permanently set global defaults for individual\n grid options. Does so by changing the defaults\n that the ``show_grid`` method uses for the\n ``grid_options`` parameter.\n SpreadsheetWidget : The widget class that is instantiated and returned by this\n method.\n\n \"\"\"\n if not isinstance(dataframe, pd.DataFrame):\n raise TypeError(\"dataframe must be modin.DataFrame, not %s\" % type(dataframe))\n return show_grid(\n dataframe,\n show_toolbar,\n precision,\n grid_options,\n column_options,\n column_definitions,\n row_edit_callback,\n )\n\n\ndef to_dataframe(spreadsheet):\n \"\"\"\n Get a copy of the DataFrame that reflects the current state of the ``spreadsheet`` SpreadsheetWidget instance UI.\n This includes any sorting or filtering changes, as well as edits\n that have been made by double clicking cells.\n\n :rtype: DataFrame\n\n Parameters\n ----------\n spreadsheet : SpreadsheetWidget\n The SpreadsheetWidget instance that DataFrame that will be displayed by this instance of\n SpreadsheetWidget.\n \"\"\"\n if not isinstance(spreadsheet, SpreadsheetWidget):\n raise TypeError(\n \"spreadsheet must be modin_spreadsheet.SpreadsheetWidget, not %s\"\n % type(spreadsheet)\n )\n return spreadsheet.get_changed_df()\n", "path": "modin/spreadsheet/general.py"}], "after_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nfrom .. import pandas as pd\nfrom modin_spreadsheet import show_grid, SpreadsheetWidget\n\n\ndef from_dataframe(\n dataframe,\n show_toolbar=None,\n show_history=None,\n precision=None,\n grid_options=None,\n column_options=None,\n column_definitions=None,\n row_edit_callback=None,\n):\n \"\"\"\n Renders a DataFrame or Series as an interactive spreadsheet, represented by\n an instance of the ``SpreadsheetWidget`` class. The ``SpreadsheetWidget`` instance\n is constructed using the options passed in to this function. The\n ``dataframe`` argument to this function is used as the ``df`` kwarg in\n call to the SpreadsheetWidget constructor, and the rest of the parameters\n are passed through as is.\n\n If the ``dataframe`` argument is a Series, it will be converted to a\n DataFrame before being passed in to the SpreadsheetWidget constructor as the\n ``df`` kwarg.\n\n :rtype: SpreadsheetWidget\n\n Parameters\n ----------\n dataframe : DataFrame\n The DataFrame that will be displayed by this instance of\n SpreadsheetWidget.\n grid_options : dict\n Options to use when creating the SlickGrid control (i.e. the\n interactive grid). See the Notes section below for more information\n on the available options, as well as the default options that this\n widget uses.\n precision : integer\n The number of digits of precision to display for floating-point\n values. If unset, we use the value of\n `pandas.get_option('display.precision')`.\n show_toolbar : bool\n Whether to show a toolbar with options for adding/removing rows.\n Adding/removing rows is an experimental feature which only works\n with DataFrames that have an integer index.\n show_history : bool\n Whether to show the cell containing the spreadsheet transformation\n history.\n column_options : dict\n Column options that are to be applied to every column. See the\n Notes section below for more information on the available options,\n as well as the default options that this widget uses.\n column_definitions : dict\n Column options that are to be applied to individual\n columns. The keys of the dict should be the column names, and each\n value should be the column options for a particular column,\n represented as a dict. The available options for each column are the\n same options that are available to be set for all columns via the\n ``column_options`` parameter. See the Notes section below for more\n information on those options.\n row_edit_callback : callable\n A callable that is called to determine whether a particular row\n should be editable or not. Its signature should be\n ``callable(row)``, where ``row`` is a dictionary which contains a\n particular row's values, keyed by column name. The callback should\n return True if the provided row should be editable, and False\n otherwise.\n\n\n Notes\n -----\n The following dictionary is used for ``grid_options`` if none are\n provided explicitly::\n\n {\n # SlickGrid options\n 'fullWidthRows': True,\n 'syncColumnCellResize': True,\n 'forceFitColumns': False,\n 'defaultColumnWidth': 150,\n 'rowHeight': 28,\n 'enableColumnReorder': False,\n 'enableTextSelectionOnCells': True,\n 'editable': True,\n 'autoEdit': False,\n 'explicitInitialization': True,\n\n # Modin-spreadsheet options\n 'maxVisibleRows': 15,\n 'minVisibleRows': 8,\n 'sortable': True,\n 'filterable': True,\n 'highlightSelectedCell': False,\n 'highlightSelectedRow': True\n }\n\n The first group of options are SlickGrid \"grid options\" which are\n described in the `SlickGrid documentation\n <https://github.com/mleibman/SlickGrid/wiki/Grid-Options>`_.\n\n The second group of option are options that were added specifically\n for modin-spreadsheet and therefore are not documented in the SlickGrid documentation.\n The following bullet points describe these options.\n\n * **maxVisibleRows** The maximum number of rows that modin-spreadsheet will show.\n * **minVisibleRows** The minimum number of rows that modin-spreadsheet will show\n * **sortable** Whether the modin-spreadsheet instance will allow the user to sort\n columns by clicking the column headers. When this is set to ``False``,\n nothing will happen when users click the column headers.\n * **filterable** Whether the modin-spreadsheet instance will allow the user to filter\n the grid. When this is set to ``False`` the filter icons won't be shown\n for any columns.\n * **highlightSelectedCell** If you set this to True, the selected cell\n will be given a light blue border.\n * **highlightSelectedRow** If you set this to False, the light blue\n background that's shown by default for selected rows will be hidden.\n\n The following dictionary is used for ``column_options`` if none are\n provided explicitly::\n\n {\n # SlickGrid column options\n 'defaultSortAsc': True,\n 'maxWidth': None,\n 'minWidth': 30,\n 'resizable': True,\n 'sortable': True,\n 'toolTip': \"\",\n 'width': None\n\n # Modin-spreadsheet column options\n 'editable': True,\n }\n\n The first group of options are SlickGrid \"column options\" which are\n described in the `SlickGrid documentation\n <https://github.com/mleibman/SlickGrid/wiki/Column-Options>`_.\n\n The ``editable`` option was added specifically for modin-spreadsheet and therefore is\n not documented in the SlickGrid documentation. This option specifies\n whether a column should be editable or not.\n\n See Also\n --------\n set_defaults : Permanently set global defaults for the parameters\n of ``show_grid``, with the exception of the ``dataframe``\n and ``column_definitions`` parameters, since those\n depend on the particular set of data being shown by an\n instance, and therefore aren't parameters we would want\n to set for all SpreadsheetWidget instances.\n set_grid_option : Permanently set global defaults for individual\n grid options. Does so by changing the defaults\n that the ``show_grid`` method uses for the\n ``grid_options`` parameter.\n SpreadsheetWidget : The widget class that is instantiated and returned by this\n method.\n\n \"\"\"\n if not isinstance(dataframe, pd.DataFrame):\n raise TypeError(\"dataframe must be modin.DataFrame, not %s\" % type(dataframe))\n return show_grid(\n dataframe,\n show_toolbar,\n show_history,\n precision,\n grid_options,\n column_options,\n column_definitions,\n row_edit_callback,\n )\n\n\ndef to_dataframe(spreadsheet):\n \"\"\"\n Get a copy of the DataFrame that reflects the current state of the ``spreadsheet`` SpreadsheetWidget instance UI.\n This includes any sorting or filtering changes, as well as edits\n that have been made by double clicking cells.\n\n :rtype: DataFrame\n\n Parameters\n ----------\n spreadsheet : SpreadsheetWidget\n The SpreadsheetWidget instance that DataFrame that will be displayed by this instance of\n SpreadsheetWidget.\n \"\"\"\n if not isinstance(spreadsheet, SpreadsheetWidget):\n raise TypeError(\n \"spreadsheet must be modin_spreadsheet.SpreadsheetWidget, not %s\"\n % type(spreadsheet)\n )\n return spreadsheet.get_changed_df()\n", "path": "modin/spreadsheet/general.py"}]}
| 2,576 | 231 |
gh_patches_debug_17425
|
rasdani/github-patches
|
git_diff
|
python-discord__site-716
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ValueError: Unknown format code 'X' for object of type 'str'
Sentry Issue: [SITE-25](https://sentry.io/organizations/python-discord/issues/3211854811/?referrer=github_integration)
```
ValueError: Unknown format code 'X' for object of type 'str'
(16 additional frame(s) were not displayed)
...
File "django/template/defaulttags.py", line 211, in render
nodelist.append(node.render_annotated(context))
File "django/template/base.py", line 905, in render_annotated
return self.render(context)
File "django/template/base.py", line 988, in render
output = self.filter_expression.resolve(context)
File "django/template/base.py", line 698, in resolve
new_obj = func(obj, *arg_vals)
File "pydis_site/apps/staff/templatetags/deletedmessage_filters.py", line 15, in hex_colour
colour = f"#{color:0>6X}"
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pydis_site/apps/staff/templatetags/deletedmessage_filters.py`
Content:
```
1 from datetime import datetime
2
3 from django import template
4
5 register = template.Library()
6
7
8 @register.filter
9 def hex_colour(color: int) -> str:
10 """
11 Converts an integer representation of a colour to the RGB hex value.
12
13 As we are using a Discord dark theme analogue, black colours are returned as white instead.
14 """
15 colour = f"#{color:0>6X}"
16 return colour if colour != "#000000" else "#FFFFFF"
17
18
19 @register.filter
20 def footer_datetime(timestamp: str) -> datetime:
21 """Takes an embed timestamp and returns a timezone-aware datetime object."""
22 return datetime.fromisoformat(timestamp)
23
24
25 @register.filter
26 def visible_newlines(text: str) -> str:
27 """Takes an embed timestamp and returns a timezone-aware datetime object."""
28 return text.replace("\n", " <span class='has-text-grey'>↵</span><br>")
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pydis_site/apps/staff/templatetags/deletedmessage_filters.py b/pydis_site/apps/staff/templatetags/deletedmessage_filters.py
--- a/pydis_site/apps/staff/templatetags/deletedmessage_filters.py
+++ b/pydis_site/apps/staff/templatetags/deletedmessage_filters.py
@@ -1,4 +1,5 @@
from datetime import datetime
+from typing import Union
from django import template
@@ -6,13 +7,16 @@
@register.filter
-def hex_colour(color: int) -> str:
+def hex_colour(colour: Union[str, int]) -> str:
"""
- Converts an integer representation of a colour to the RGB hex value.
+ Converts the given representation of a colour to its RGB hex string.
As we are using a Discord dark theme analogue, black colours are returned as white instead.
"""
- colour = f"#{color:0>6X}"
+ if isinstance(colour, str):
+ colour = colour if colour.startswith("#") else f"#{colour}"
+ else:
+ colour = f"#{colour:0>6X}"
return colour if colour != "#000000" else "#FFFFFF"
|
{"golden_diff": "diff --git a/pydis_site/apps/staff/templatetags/deletedmessage_filters.py b/pydis_site/apps/staff/templatetags/deletedmessage_filters.py\n--- a/pydis_site/apps/staff/templatetags/deletedmessage_filters.py\n+++ b/pydis_site/apps/staff/templatetags/deletedmessage_filters.py\n@@ -1,4 +1,5 @@\n from datetime import datetime\n+from typing import Union\n \n from django import template\n \n@@ -6,13 +7,16 @@\n \n \n @register.filter\n-def hex_colour(color: int) -> str:\n+def hex_colour(colour: Union[str, int]) -> str:\n \"\"\"\n- Converts an integer representation of a colour to the RGB hex value.\n+ Converts the given representation of a colour to its RGB hex string.\n \n As we are using a Discord dark theme analogue, black colours are returned as white instead.\n \"\"\"\n- colour = f\"#{color:0>6X}\"\n+ if isinstance(colour, str):\n+ colour = colour if colour.startswith(\"#\") else f\"#{colour}\"\n+ else:\n+ colour = f\"#{colour:0>6X}\"\n return colour if colour != \"#000000\" else \"#FFFFFF\"\n", "issue": "ValueError: Unknown format code 'X' for object of type 'str'\nSentry Issue: [SITE-25](https://sentry.io/organizations/python-discord/issues/3211854811/?referrer=github_integration)\n\n```\nValueError: Unknown format code 'X' for object of type 'str'\n(16 additional frame(s) were not displayed)\n...\n File \"django/template/defaulttags.py\", line 211, in render\n nodelist.append(node.render_annotated(context))\n File \"django/template/base.py\", line 905, in render_annotated\n return self.render(context)\n File \"django/template/base.py\", line 988, in render\n output = self.filter_expression.resolve(context)\n File \"django/template/base.py\", line 698, in resolve\n new_obj = func(obj, *arg_vals)\n File \"pydis_site/apps/staff/templatetags/deletedmessage_filters.py\", line 15, in hex_colour\n colour = f\"#{color:0>6X}\"\n```\n", "before_files": [{"content": "from datetime import datetime\n\nfrom django import template\n\nregister = template.Library()\n\n\[email protected]\ndef hex_colour(color: int) -> str:\n \"\"\"\n Converts an integer representation of a colour to the RGB hex value.\n\n As we are using a Discord dark theme analogue, black colours are returned as white instead.\n \"\"\"\n colour = f\"#{color:0>6X}\"\n return colour if colour != \"#000000\" else \"#FFFFFF\"\n\n\[email protected]\ndef footer_datetime(timestamp: str) -> datetime:\n \"\"\"Takes an embed timestamp and returns a timezone-aware datetime object.\"\"\"\n return datetime.fromisoformat(timestamp)\n\n\[email protected]\ndef visible_newlines(text: str) -> str:\n \"\"\"Takes an embed timestamp and returns a timezone-aware datetime object.\"\"\"\n return text.replace(\"\\n\", \" <span class='has-text-grey'>\u21b5</span><br>\")\n", "path": "pydis_site/apps/staff/templatetags/deletedmessage_filters.py"}], "after_files": [{"content": "from datetime import datetime\nfrom typing import Union\n\nfrom django import template\n\nregister = template.Library()\n\n\[email protected]\ndef hex_colour(colour: Union[str, int]) -> str:\n \"\"\"\n Converts the given representation of a colour to its RGB hex string.\n\n As we are using a Discord dark theme analogue, black colours are returned as white instead.\n \"\"\"\n if isinstance(colour, str):\n colour = colour if colour.startswith(\"#\") else f\"#{colour}\"\n else:\n colour = f\"#{colour:0>6X}\"\n return colour if colour != \"#000000\" else \"#FFFFFF\"\n\n\[email protected]\ndef footer_datetime(timestamp: str) -> datetime:\n \"\"\"Takes an embed timestamp and returns a timezone-aware datetime object.\"\"\"\n return datetime.fromisoformat(timestamp)\n\n\[email protected]\ndef visible_newlines(text: str) -> str:\n \"\"\"Takes an embed timestamp and returns a timezone-aware datetime object.\"\"\"\n return text.replace(\"\\n\", \" <span class='has-text-grey'>\u21b5</span><br>\")\n", "path": "pydis_site/apps/staff/templatetags/deletedmessage_filters.py"}]}
| 751 | 273 |
gh_patches_debug_33783
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-6144
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Some hooks should get fixed
<!--
Welcome to the PyInstaller issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to PyInstaller
- For questions and general support, use the mailing list, see
<http://www.pyinstaller.org/support.html#mailing-list>
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
3. When making a bug report, make sure you provide all required information. The easier it is for
maintainers to reproduce, the faster it'll be fixed.
-->
<!-- +++ ONLY TEXT +++ DO NOT POST IMAGES +++ -->
## Description of the issue
It returns `ModuleNotFoundError: No module named '_sysconfigdata__win32_'` at `from sklearn.cluster import MiniBatchKMeans` for some reason
### Context information (for bug reports)
* Output of `pyinstaller --version`: ```4.5.1```
* Version of Python: ```Python 3.9.6```
* Platform: ```MSYS2 MINGW64```
* How you installed Python: ```from MSYS2, the MINGW64 one```
* Did you also try this on another platform? Does it work there? ```Nope```
* follow *all* the instructions in our "If Things Go Wrong" Guide
(https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-Wrong) and
### Make sure [everything is packaged correctly](https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs#make-sure-everything-is-packaged-correctly)
* [x] start with clean installation
* [ ] use the latest development version
* [x] Run your frozen program **from a command window (shell)** — instead of double-clicking on it ```But it didn't catch the program, like it runs it in separate process. I use windowed mode, anyways```
* [ ] Package your program in **--onedir mode**
* [x] Package **without UPX**, say: use the option `--noupx` or set `upx=False` in your .spec-file
* [x] Repackage you application in **verbose/debug mode**. For this, pass the option `--debug` to `pyi-makespec` or `pyinstaller` or use `EXE(..., debug=1, ...)` in your .spec file.
### A minimal example program which shows the error
#### window.py
```
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gio
import os
import gc
import numpy as np
import pandas as pd
from sklearn.cluster import MiniBatchKMeans
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
@Gtk.Template(resource_path='/org/example/App/window.ui')
class ClusterifyWindow(Gtk.ApplicationWindow):
__gtype_name__ = 'ClusterifyWindow'
full_initialized = False
...
def __init__(self, log_error, **kwargs):
super().__init__(**kwargs)
self.log_error = log_error
@Gtk.Template.Callback()
def on_ClusterifyWindow_focus_in_event(self, cb_widget, data):
if not self.full_initialized:
import matplotlib
matplotlib.use("GTK3Cairo")
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from matplotlib.backends.backend_gtk3cairo import FigureCanvasGTK3Cairo as FigureCanvas
self.fi_clusters = plt.figure()
self.cv_clusters = FigureCanvas(self.fi_clusters)
self.cv_clusters.show_all()
...
self.full_initialized = True
```
#### clusterify.spec
```
# -*- mode: python ; coding: utf-8 -*-
typelib_path = "C:/msys64/mingw64/lib/girepository-1.0"
block_cipher = None
a = Analysis(['C:/msys64/mingw64/bin/clusterify'],
pathex=['C:/msys64/home/rivan/Clusterify'],
binaries=[(os.path.join(typelib_path, tl), 'gi_typelibs') for tl in os.listdir(typelib_path)],
datas=[],
hiddenimports=['numpy',
'pandas',
'sklearn.cluster', 'sklearn.preprocessing', 'sklearn.decomposition',
'matplotlib', 'matplotlib.backends.backend_gtk3cairo',
'seaborn'],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='clusterify',
debug=True,
bootloader_ignore_signals=False,
strip=False,
upx=False,
upx_exclude=[],
runtime_tmpdir=None,
console=False,
disable_windowed_traceback=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None )
```
### Stacktrace / full error message
```
Traceback (most recent call last):
File "clusterify", line 51, in <module>
File "C:\msys64/mingw64/share/clusterify\clusterify\main.py", line 33, in <module>
from .window import ClusterifyWindow
File "C:\msys64/mingw64/share/clusterify\clusterify\window.py", line 36, in <module>
from sklearn.cluster import MiniBatchKMeans
File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module
File "sklearn/__init__.py", line 82, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module
File "sklearn/base.py", line 20, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module
File "sklearn/utils/__init__.py", line 24, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module
File "sklearn/utils/_joblib.py", line 7, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module
File "joblib/__init__.py", line 113, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module
File "joblib/memory.py", line 16, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 546, in exec_module
File "pydoc.py", line 458, in <module>
File "pydoc.py", line 488, in Doc
File "sysconfig.py", line 554, in get_path
File "sysconfig.py", line 544, in get_paths
File "sysconfig.py", line 179, in _expand_vars
File "sysconfig.py", line 594, in get_config_vars
File "sysconfig.py", line 460, in _init_posix
ModuleNotFoundError: No module named '_sysconfigdata__win32_'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-distutils.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2021, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11 """
12 `distutils`-specific post-import hook.
13
14 This hook freezes the external `Makefile` and `pyconfig.h` files bundled with the active Python interpreter, which the
15 `distutils.sysconfig` module parses at runtime for platform-specific metadata.
16 """
17
18 # From Python 3.6 and later ``distutils.sysconfig`` takes on the same behaviour as regular ``sysconfig`` of moving the
19 # config vars to a module (see hook-sysconfig.py). It doesn't use a nice `get module name` function like ``sysconfig``
20 # does to help us locate it but the module is the same file that ``sysconfig`` uses so we can use the
21 # ``_get_sysconfigdata_name()`` from regular ``sysconfig``.
22 import sysconfig
23
24 from PyInstaller import compat
25
26 if not compat.is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):
27 hiddenimports = [sysconfig._get_sysconfigdata_name()]
28
```
Path: `PyInstaller/hooks/hook-sysconfig.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2021, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 import sysconfig
13
14 from PyInstaller.compat import is_win
15
16 if not is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):
17 # Python 3.6 uses additional modules like `_sysconfigdata_m_linux_x86_64-linux-gnu`, see
18 # https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417
19 # Note: Some versions of Anaconda backport this feature to before 3.6. See issue #3105.
20 hiddenimports = [sysconfig._get_sysconfigdata_name()]
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PyInstaller/hooks/hook-distutils.py b/PyInstaller/hooks/hook-distutils.py
--- a/PyInstaller/hooks/hook-distutils.py
+++ b/PyInstaller/hooks/hook-distutils.py
@@ -19,9 +19,10 @@
# config vars to a module (see hook-sysconfig.py). It doesn't use a nice `get module name` function like ``sysconfig``
# does to help us locate it but the module is the same file that ``sysconfig`` uses so we can use the
# ``_get_sysconfigdata_name()`` from regular ``sysconfig``.
-import sysconfig
-
-from PyInstaller import compat
-
-if not compat.is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):
+try:
+ import sysconfig
hiddenimports = [sysconfig._get_sysconfigdata_name()]
+except AttributeError:
+ # Either sysconfig has no attribute _get_sysconfigdata_name (i.e., the function does not exist), or this is Windows
+ # and the _get_sysconfigdata_name() call failed due to missing sys.abiflags attribute.
+ pass
diff --git a/PyInstaller/hooks/hook-sysconfig.py b/PyInstaller/hooks/hook-sysconfig.py
--- a/PyInstaller/hooks/hook-sysconfig.py
+++ b/PyInstaller/hooks/hook-sysconfig.py
@@ -9,12 +9,15 @@
# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
#-----------------------------------------------------------------------------
-import sysconfig
-
-from PyInstaller.compat import is_win
-
-if not is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):
- # Python 3.6 uses additional modules like `_sysconfigdata_m_linux_x86_64-linux-gnu`, see
- # https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417
- # Note: Some versions of Anaconda backport this feature to before 3.6. See issue #3105.
+# Python 3.6 uses additional modules like `_sysconfigdata_m_linux_x86_64-linux-gnu`, see
+# https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417
+# Note: Some versions of Anaconda backport this feature to before 3.6. See issue #3105.
+# Note: on Windows, python.org and Anaconda python provide _get_sysconfigdata_name, but calling it fails due to sys
+# module lacking abiflags attribute. It does work on MSYS2/MINGW python, where we need to collect corresponding file.
+try:
+ import sysconfig
hiddenimports = [sysconfig._get_sysconfigdata_name()]
+except AttributeError:
+ # Either sysconfig has no attribute _get_sysconfigdata_name (i.e., the function does not exist), or this is Windows
+ # and the _get_sysconfigdata_name() call failed due to missing sys.abiflags attribute.
+ pass
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-distutils.py b/PyInstaller/hooks/hook-distutils.py\n--- a/PyInstaller/hooks/hook-distutils.py\n+++ b/PyInstaller/hooks/hook-distutils.py\n@@ -19,9 +19,10 @@\n # config vars to a module (see hook-sysconfig.py). It doesn't use a nice `get module name` function like ``sysconfig``\n # does to help us locate it but the module is the same file that ``sysconfig`` uses so we can use the\n # ``_get_sysconfigdata_name()`` from regular ``sysconfig``.\n-import sysconfig\n-\n-from PyInstaller import compat\n-\n-if not compat.is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):\n+try:\n+ import sysconfig\n hiddenimports = [sysconfig._get_sysconfigdata_name()]\n+except AttributeError:\n+ # Either sysconfig has no attribute _get_sysconfigdata_name (i.e., the function does not exist), or this is Windows\n+ # and the _get_sysconfigdata_name() call failed due to missing sys.abiflags attribute.\n+ pass\ndiff --git a/PyInstaller/hooks/hook-sysconfig.py b/PyInstaller/hooks/hook-sysconfig.py\n--- a/PyInstaller/hooks/hook-sysconfig.py\n+++ b/PyInstaller/hooks/hook-sysconfig.py\n@@ -9,12 +9,15 @@\n # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n #-----------------------------------------------------------------------------\n \n-import sysconfig\n-\n-from PyInstaller.compat import is_win\n-\n-if not is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):\n- # Python 3.6 uses additional modules like `_sysconfigdata_m_linux_x86_64-linux-gnu`, see\n- # https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417\n- # Note: Some versions of Anaconda backport this feature to before 3.6. See issue #3105.\n+# Python 3.6 uses additional modules like `_sysconfigdata_m_linux_x86_64-linux-gnu`, see\n+# https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417\n+# Note: Some versions of Anaconda backport this feature to before 3.6. See issue #3105.\n+# Note: on Windows, python.org and Anaconda python provide _get_sysconfigdata_name, but calling it fails due to sys\n+# module lacking abiflags attribute. It does work on MSYS2/MINGW python, where we need to collect corresponding file.\n+try:\n+ import sysconfig\n hiddenimports = [sysconfig._get_sysconfigdata_name()]\n+except AttributeError:\n+ # Either sysconfig has no attribute _get_sysconfigdata_name (i.e., the function does not exist), or this is Windows\n+ # and the _get_sysconfigdata_name() call failed due to missing sys.abiflags attribute.\n+ pass\n", "issue": "Some hooks should get fixed\n<!--\r\nWelcome to the PyInstaller issue tracker! Before creating an issue, please heed the following:\r\n\r\n1. This tracker should only be used to report bugs and request features / enhancements to PyInstaller\r\n - For questions and general support, use the mailing list, see\r\n\t <http://www.pyinstaller.org/support.html#mailing-list>\r\n2. Use the search function before creating a new issue. Duplicates will be closed and directed to\r\n the original discussion.\r\n3. When making a bug report, make sure you provide all required information. The easier it is for\r\n maintainers to reproduce, the faster it'll be fixed.\r\n-->\r\n\r\n<!-- +++ ONLY TEXT +++ DO NOT POST IMAGES +++ -->\r\n\r\n## Description of the issue\r\nIt returns `ModuleNotFoundError: No module named '_sysconfigdata__win32_'` at `from sklearn.cluster import MiniBatchKMeans` for some reason\r\n\r\n\r\n### Context information (for bug reports)\r\n\r\n* Output of `pyinstaller --version`: ```4.5.1```\r\n* Version of Python: ```Python 3.9.6```\r\n* Platform: ```MSYS2 MINGW64```\r\n* How you installed Python: ```from MSYS2, the MINGW64 one```\r\n* Did you also try this on another platform? Does it work there? ```Nope```\r\n\r\n* follow *all* the instructions in our \"If Things Go Wrong\" Guide\r\n (https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-Wrong) and\r\n\r\n### Make sure [everything is packaged correctly](https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs#make-sure-everything-is-packaged-correctly)\r\n\r\n * [x] start with clean installation\r\n * [ ] use the latest development version\r\n * [x] Run your frozen program **from a command window (shell)** \u2014 instead of double-clicking on it ```But it didn't catch the program, like it runs it in separate process. I use windowed mode, anyways```\r\n * [ ] Package your program in **--onedir mode**\r\n * [x] Package **without UPX**, say: use the option `--noupx` or set `upx=False` in your .spec-file\r\n * [x] Repackage you application in **verbose/debug mode**. For this, pass the option `--debug` to `pyi-makespec` or `pyinstaller` or use `EXE(..., debug=1, ...)` in your .spec file.\r\n\r\n\r\n### A minimal example program which shows the error\r\n#### window.py\r\n```\r\nimport gi\r\ngi.require_version('Gtk', '3.0')\r\nfrom gi.repository import Gtk, Gio\r\nimport os\r\nimport gc\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom sklearn.cluster import MiniBatchKMeans\r\nfrom sklearn.preprocessing import StandardScaler\r\nfrom sklearn.decomposition import PCA\r\n\r\[email protected](resource_path='/org/example/App/window.ui')\r\nclass ClusterifyWindow(Gtk.ApplicationWindow):\r\n __gtype_name__ = 'ClusterifyWindow'\r\n full_initialized = False\r\n\r\n ...\r\n\r\n def __init__(self, log_error, **kwargs):\r\n super().__init__(**kwargs)\r\n self.log_error = log_error\r\n\r\n @Gtk.Template.Callback()\r\n def on_ClusterifyWindow_focus_in_event(self, cb_widget, data):\r\n if not self.full_initialized:\r\n import matplotlib\r\n matplotlib.use(\"GTK3Cairo\")\r\n import matplotlib.pyplot as plt\r\n import seaborn as sns\r\n sns.set()\r\n from matplotlib.backends.backend_gtk3cairo import FigureCanvasGTK3Cairo as FigureCanvas\r\n\r\n self.fi_clusters = plt.figure()\r\n self.cv_clusters = FigureCanvas(self.fi_clusters)\r\n self.cv_clusters.show_all()\r\n\r\n ...\r\n\r\n self.full_initialized = True\r\n```\r\n\r\n#### clusterify.spec\r\n```\r\n# -*- mode: python ; coding: utf-8 -*-\r\n\r\ntypelib_path = \"C:/msys64/mingw64/lib/girepository-1.0\"\r\nblock_cipher = None\r\n\r\n\r\na = Analysis(['C:/msys64/mingw64/bin/clusterify'],\r\n pathex=['C:/msys64/home/rivan/Clusterify'],\r\n binaries=[(os.path.join(typelib_path, tl), 'gi_typelibs') for tl in os.listdir(typelib_path)],\r\n datas=[],\r\n hiddenimports=['numpy',\r\n 'pandas',\r\n 'sklearn.cluster', 'sklearn.preprocessing', 'sklearn.decomposition',\r\n 'matplotlib', 'matplotlib.backends.backend_gtk3cairo',\r\n 'seaborn'],\r\n hookspath=[],\r\n hooksconfig={},\r\n runtime_hooks=[],\r\n excludes=[],\r\n win_no_prefer_redirects=False,\r\n win_private_assemblies=False,\r\n cipher=block_cipher,\r\n noarchive=False)\r\npyz = PYZ(a.pure, a.zipped_data,\r\n cipher=block_cipher)\r\n\r\nexe = EXE(pyz,\r\n a.scripts,\r\n a.binaries,\r\n a.zipfiles,\r\n a.datas, \r\n [],\r\n name='clusterify',\r\n debug=True,\r\n bootloader_ignore_signals=False,\r\n strip=False,\r\n upx=False,\r\n upx_exclude=[],\r\n runtime_tmpdir=None,\r\n console=False,\r\n disable_windowed_traceback=False,\r\n target_arch=None,\r\n codesign_identity=None,\r\n entitlements_file=None )\r\n```\r\n\r\n### Stacktrace / full error message\r\n```\r\nTraceback (most recent call last):\r\n File \"clusterify\", line 51, in <module>\r\n File \"C:\\msys64/mingw64/share/clusterify\\clusterify\\main.py\", line 33, in <module>\r\n from .window import ClusterifyWindow\r\n File \"C:\\msys64/mingw64/share/clusterify\\clusterify\\window.py\", line 36, in <module>\r\n from sklearn.cluster import MiniBatchKMeans\r\n File \"PyInstaller/loader/pyimod03_importers.py\", line 546, in exec_module\r\n File \"sklearn/__init__.py\", line 82, in <module>\r\n File \"PyInstaller/loader/pyimod03_importers.py\", line 546, in exec_module\r\n File \"sklearn/base.py\", line 20, in <module>\r\n File \"PyInstaller/loader/pyimod03_importers.py\", line 546, in exec_module\r\n File \"sklearn/utils/__init__.py\", line 24, in <module>\r\n File \"PyInstaller/loader/pyimod03_importers.py\", line 546, in exec_module\r\n File \"sklearn/utils/_joblib.py\", line 7, in <module>\r\n File \"PyInstaller/loader/pyimod03_importers.py\", line 546, in exec_module\r\n File \"joblib/__init__.py\", line 113, in <module>\r\n File \"PyInstaller/loader/pyimod03_importers.py\", line 546, in exec_module\r\n File \"joblib/memory.py\", line 16, in <module>\r\n File \"PyInstaller/loader/pyimod03_importers.py\", line 546, in exec_module\r\n File \"pydoc.py\", line 458, in <module>\r\n File \"pydoc.py\", line 488, in Doc\r\n File \"sysconfig.py\", line 554, in get_path\r\n File \"sysconfig.py\", line 544, in get_paths\r\n File \"sysconfig.py\", line 179, in _expand_vars\r\n File \"sysconfig.py\", line 594, in get_config_vars\r\n File \"sysconfig.py\", line 460, in _init_posix\r\nModuleNotFoundError: No module named '_sysconfigdata__win32_'\r\n```\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2021, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\"\"\"\n`distutils`-specific post-import hook.\n\nThis hook freezes the external `Makefile` and `pyconfig.h` files bundled with the active Python interpreter, which the\n`distutils.sysconfig` module parses at runtime for platform-specific metadata.\n\"\"\"\n\n# From Python 3.6 and later ``distutils.sysconfig`` takes on the same behaviour as regular ``sysconfig`` of moving the\n# config vars to a module (see hook-sysconfig.py). It doesn't use a nice `get module name` function like ``sysconfig``\n# does to help us locate it but the module is the same file that ``sysconfig`` uses so we can use the\n# ``_get_sysconfigdata_name()`` from regular ``sysconfig``.\nimport sysconfig\n\nfrom PyInstaller import compat\n\nif not compat.is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):\n hiddenimports = [sysconfig._get_sysconfigdata_name()]\n", "path": "PyInstaller/hooks/hook-distutils.py"}, {"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2021, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nimport sysconfig\n\nfrom PyInstaller.compat import is_win\n\nif not is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):\n # Python 3.6 uses additional modules like `_sysconfigdata_m_linux_x86_64-linux-gnu`, see\n # https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417\n # Note: Some versions of Anaconda backport this feature to before 3.6. See issue #3105.\n hiddenimports = [sysconfig._get_sysconfigdata_name()]\n", "path": "PyInstaller/hooks/hook-sysconfig.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2021, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\"\"\"\n`distutils`-specific post-import hook.\n\nThis hook freezes the external `Makefile` and `pyconfig.h` files bundled with the active Python interpreter, which the\n`distutils.sysconfig` module parses at runtime for platform-specific metadata.\n\"\"\"\n\n# From Python 3.6 and later ``distutils.sysconfig`` takes on the same behaviour as regular ``sysconfig`` of moving the\n# config vars to a module (see hook-sysconfig.py). It doesn't use a nice `get module name` function like ``sysconfig``\n# does to help us locate it but the module is the same file that ``sysconfig`` uses so we can use the\n# ``_get_sysconfigdata_name()`` from regular ``sysconfig``.\ntry:\n import sysconfig\n hiddenimports = [sysconfig._get_sysconfigdata_name()]\nexcept AttributeError:\n # Either sysconfig has no attribute _get_sysconfigdata_name (i.e., the function does not exist), or this is Windows\n # and the _get_sysconfigdata_name() call failed due to missing sys.abiflags attribute.\n pass\n", "path": "PyInstaller/hooks/hook-distutils.py"}, {"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2021, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\n# Python 3.6 uses additional modules like `_sysconfigdata_m_linux_x86_64-linux-gnu`, see\n# https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417\n# Note: Some versions of Anaconda backport this feature to before 3.6. See issue #3105.\n# Note: on Windows, python.org and Anaconda python provide _get_sysconfigdata_name, but calling it fails due to sys\n# module lacking abiflags attribute. It does work on MSYS2/MINGW python, where we need to collect corresponding file.\ntry:\n import sysconfig\n hiddenimports = [sysconfig._get_sysconfigdata_name()]\nexcept AttributeError:\n # Either sysconfig has no attribute _get_sysconfigdata_name (i.e., the function does not exist), or this is Windows\n # and the _get_sysconfigdata_name() call failed due to missing sys.abiflags attribute.\n pass\n", "path": "PyInstaller/hooks/hook-sysconfig.py"}]}
| 2,560 | 668 |
gh_patches_debug_9896
|
rasdani/github-patches
|
git_diff
|
pytorch__TensorRT-2515
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for `aten.select_scatter`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py/torch_tensorrt/dynamo/lowering/_decompositions.py`
Content:
```
1 import logging
2 from typing import Any, Callable, Dict, List, Optional
3
4 import torch
5 from torch._decomp import register_decomposition
6 from torch._ops import OpOverload
7 from torch_tensorrt.dynamo.conversion.converter_utils import get_positive_dim
8
9 from ._decomposition_groups import (
10 ENABLED_TORCH_DECOMPOSITIONS,
11 TORCH_TRT_DECOMPOSITIONS,
12 _core_aten_decompositions,
13 aten,
14 torch_disabled_decompositions,
15 torch_enabled_decompositions,
16 )
17
18 logger = logging.getLogger(__name__)
19
20
21 def register_torch_trt_decomposition(
22 aten_op: OpOverload, registry: Optional[Any] = None
23 ) -> Callable[[Any], Any]:
24 """Checks if the decomposition already exists in one of the sets
25 Registers the decomposition via the Torch utility
26
27 Alerts the user if the decomposition already exists, before registering
28 Throws an AssertionError if the user attempts to register a decomposition
29 which is present in the set of explicitly disabled decompositions
30 """
31 if aten_op in torch_enabled_decompositions:
32 logger.warning(
33 f"Detected custom decomposition for {aten_op}, which conflicts "
34 "with an existing Torch decomposition in torch_enabled_decompositions. "
35 "The custom implementation will take precedence."
36 )
37 elif aten_op in torch_disabled_decompositions:
38 logger.info(
39 f"Detected custom decomposition for {aten_op}, which is present "
40 "in torch_disabled_decompositions."
41 )
42
43 # Conflicts with _core_aten_decompositions will only occur if
44 # enable_experimental_decompositions is True in get_decompositions
45 if aten_op in _core_aten_decompositions:
46 logger.debug(
47 f"Detected custom decomposition for {aten_op}, which conflicts "
48 "with an existing Torch decomposition in core_aten_decompositions. "
49 "The custom implementation will take precedence."
50 )
51
52 def register(fn: Callable[[Any], Any]) -> Any:
53 return register_decomposition(aten_op=aten_op, registry=registry)(fn)
54
55 return register
56
57
58 def replace_inplace_op(aten_op: OpOverload, outplace_op: OpOverload) -> Any:
59 """Replace inplace operation with functional equivalent
60 Adapted from:
61 https://github.com/pytorch/pytorch/blob/3344d79e3f732dadd5c85b99a7aa1a022f187929/torch/_decomp/decompositions.py#L3355-L3361
62 """
63
64 @register_torch_trt_decomposition(aten_op, registry=TORCH_TRT_DECOMPOSITIONS)
65 def inplace_op(*args, **kwargs): # type: ignore
66 out = outplace_op(*args, **kwargs)
67 return args[0].copy_(out)
68
69 return inplace_op
70
71
72 replace_inplace_op(aten.add_, aten.add)
73 replace_inplace_op(aten.addbmm_, aten.addbmm)
74 replace_inplace_op(aten.addmm_, aten.addmm)
75 replace_inplace_op(aten.addmv_, aten.addmv)
76 replace_inplace_op(aten.baddbmm_, aten.baddbmm)
77 replace_inplace_op(aten.cumprod_, aten.cumprod)
78 replace_inplace_op(aten.index_put_, aten.index_put)
79 replace_inplace_op(aten.index_reduce_, aten.index_reduce)
80 replace_inplace_op(aten.relu_, aten.relu)
81 replace_inplace_op(aten.round_, aten.round)
82 replace_inplace_op(aten.scatter_, aten.scatter)
83 replace_inplace_op(aten.scatter_add_, aten.scatter_add)
84 replace_inplace_op(aten.scatter_reduce_, aten.scatter_reduce)
85
86
87 @register_torch_trt_decomposition(aten.rsqrt, registry=TORCH_TRT_DECOMPOSITIONS)
88 def rsqrt_replacement(*args, **kwargs) -> torch.Tensor: # type: ignore
89 return torch.reciprocal(torch.sqrt(*args, **kwargs))
90
91
92 @register_torch_trt_decomposition(aten._unsafe_view, registry=TORCH_TRT_DECOMPOSITIONS)
93 def unsafe_view_replacement(x: torch.Tensor, *args, **kwargs) -> torch.Tensor: # type: ignore
94 return torch.reshape(x, *args, **kwargs)
95
96
97 @register_torch_trt_decomposition(
98 torch.ops.aten.lift_fresh_copy, registry=TORCH_TRT_DECOMPOSITIONS
99 )
100 def lift_fresh_copy_replacement(x: torch.Tensor) -> torch.Tensor:
101 return x
102
103
104 @register_torch_trt_decomposition(aten.alias, registry=TORCH_TRT_DECOMPOSITIONS)
105 def alias_replacement(x: torch.Tensor) -> torch.Tensor:
106 return x
107
108
109 @register_torch_trt_decomposition(
110 torch.ops.aten.reciprocal.default, registry=TORCH_TRT_DECOMPOSITIONS
111 )
112 def reciprocal_replacement(
113 input_: torch.Tensor,
114 ) -> torch.Tensor:
115 return torch.div(1, input_)
116
117
118 @register_torch_trt_decomposition(
119 torch.ops.prims.var.default, registry=TORCH_TRT_DECOMPOSITIONS
120 )
121 def var_decomposition(
122 input_tensor: torch.Tensor,
123 dims: Optional[List[int]],
124 correction: int,
125 output_dtype: Optional[torch.dtype] = None,
126 ) -> torch.Tensor:
127 if dims is None:
128 dims = []
129
130 # If the dimensions are empty, variance is taken over all dimensions
131 if isinstance(dims, (tuple, list)) and len(dims) == 0:
132 N = input_tensor.numel()
133 # Otherwise, the number of samples is the product of the dimensions reduced over
134 else:
135 N = 1
136 for dim_i in dims:
137 N *= input_tensor.shape[dim_i]
138
139 # Compute the mean, difference, and correction term as per the formula:
140 # https://pytorch.org/docs/stable/generated/torch.var.html
141
142 # Additionally, prims does not support keepdim, and so we only keep dimensions
143 # on the first reduction, then remove it for the second
144 sample_mean = torch.mean(input_tensor, dims, keepdim=True)
145 diff = input_tensor - sample_mean
146 squared_diff = diff * diff
147 variance_unnormalized = torch.sum(squared_diff, dims, keepdim=False)
148
149 if correction is None:
150 correction_term = float(N - 1)
151 elif isinstance(correction, int):
152 correction_term = float(N - correction)
153 elif isinstance(correction, float):
154 correction_term = float(N) - correction
155 else:
156 raise RuntimeError("correction must be int or float")
157
158 if correction_term <= 0:
159 raise RuntimeError(f"correction term was non-positive, got: {correction_term}")
160
161 variance = variance_unnormalized / correction_term
162
163 return variance
164
165
166 @register_torch_trt_decomposition(
167 torch.ops.aten.empty_permuted.default, registry=TORCH_TRT_DECOMPOSITIONS
168 )
169 def empty_permuted_decomposition(*args, **kwargs) -> torch.Tensor:
170 empty_size = args[0]
171 empty_permute = args[1]
172 perm = [0] * len(empty_size)
173 for permute_index, permute_element in enumerate(empty_permute):
174 perm[permute_element] = permute_index
175 return torch.empty([empty_size[l] for l in empty_permute], **kwargs).permute(perm)
176
177
178 @register_torch_trt_decomposition(
179 torch.ops.aten.slice_scatter.default, registry=TORCH_TRT_DECOMPOSITIONS
180 )
181 def slice_scatter_decomposition(
182 input_tensor: torch.Tensor,
183 src_tensor: torch.Tensor,
184 dim: int,
185 start: Optional[int] = None,
186 end: Optional[int] = None,
187 step: Optional[int] = None,
188 ):
189 dim_size = input_tensor.shape[dim]
190 start = get_positive_dim(start, input_tensor.shape[dim])
191 if end is None:
192 end = dim_size
193 end = get_positive_dim(end, input_tensor.shape[dim])
194 if step is None:
195 step = 1
196
197 src_dim = src_tensor.shape
198 # step == 0 is not a valid torch case
199 # also src_dim should be equal to slice dimension
200
201 if start == 0 and end == dim_size and step == 1:
202 return src_tensor
203
204 cat_tensors = []
205 index_tensor_shape = []
206 for i, src_each_dim in enumerate(list(src_dim)):
207 if i != dim:
208 index_tensor_shape.append(src_each_dim)
209 for index in range(start, end, step):
210 cat_tensors.append(index * torch.ones(index_tensor_shape, dtype=torch.long))
211 index_tensor = torch.stack(cat_tensors, dim).cuda()
212 output_tensor = torch.scatter(input_tensor, dim, index_tensor, src_tensor)
213 return output_tensor
214
215
216 def get_decompositions(
217 enable_experimental_decompositions: bool = False,
218 ) -> Dict[OpOverload, Callable[[Any], Any]]:
219 if enable_experimental_decompositions:
220 CORE_ATEN_DECOMPOSITIONS_FILTERED: Dict[OpOverload, Callable[[Any], Any]] = {
221 decomp: _core_aten_decompositions[decomp]
222 for decomp in _core_aten_decompositions
223 if decomp not in torch_disabled_decompositions
224 }
225 return {**CORE_ATEN_DECOMPOSITIONS_FILTERED, **TORCH_TRT_DECOMPOSITIONS}
226 else:
227 return {**ENABLED_TORCH_DECOMPOSITIONS, **TORCH_TRT_DECOMPOSITIONS}
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/py/torch_tensorrt/dynamo/lowering/_decompositions.py b/py/torch_tensorrt/dynamo/lowering/_decompositions.py
--- a/py/torch_tensorrt/dynamo/lowering/_decompositions.py
+++ b/py/torch_tensorrt/dynamo/lowering/_decompositions.py
@@ -213,6 +213,19 @@
return output_tensor
+@register_torch_trt_decomposition(
+ torch.ops.aten.select_scatter.default, registry=TORCH_TRT_DECOMPOSITIONS
+)
+def select_scatter_decomposition(
+ input_tensor: torch.Tensor,
+ src_tensor: torch.Tensor,
+ dim: int,
+ index: int,
+) -> torch.Tensor:
+ src_tensor = torch.unsqueeze(src_tensor, dim)
+ return torch.slice_scatter(input_tensor, src_tensor, dim, index, index + 1, 1)
+
+
def get_decompositions(
enable_experimental_decompositions: bool = False,
) -> Dict[OpOverload, Callable[[Any], Any]]:
|
{"golden_diff": "diff --git a/py/torch_tensorrt/dynamo/lowering/_decompositions.py b/py/torch_tensorrt/dynamo/lowering/_decompositions.py\n--- a/py/torch_tensorrt/dynamo/lowering/_decompositions.py\n+++ b/py/torch_tensorrt/dynamo/lowering/_decompositions.py\n@@ -213,6 +213,19 @@\n return output_tensor\n \n \n+@register_torch_trt_decomposition(\n+ torch.ops.aten.select_scatter.default, registry=TORCH_TRT_DECOMPOSITIONS\n+)\n+def select_scatter_decomposition(\n+ input_tensor: torch.Tensor,\n+ src_tensor: torch.Tensor,\n+ dim: int,\n+ index: int,\n+) -> torch.Tensor:\n+ src_tensor = torch.unsqueeze(src_tensor, dim)\n+ return torch.slice_scatter(input_tensor, src_tensor, dim, index, index + 1, 1)\n+\n+\n def get_decompositions(\n enable_experimental_decompositions: bool = False,\n ) -> Dict[OpOverload, Callable[[Any], Any]]:\n", "issue": "Add support for `aten.select_scatter`\n\n", "before_files": [{"content": "import logging\nfrom typing import Any, Callable, Dict, List, Optional\n\nimport torch\nfrom torch._decomp import register_decomposition\nfrom torch._ops import OpOverload\nfrom torch_tensorrt.dynamo.conversion.converter_utils import get_positive_dim\n\nfrom ._decomposition_groups import (\n ENABLED_TORCH_DECOMPOSITIONS,\n TORCH_TRT_DECOMPOSITIONS,\n _core_aten_decompositions,\n aten,\n torch_disabled_decompositions,\n torch_enabled_decompositions,\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef register_torch_trt_decomposition(\n aten_op: OpOverload, registry: Optional[Any] = None\n) -> Callable[[Any], Any]:\n \"\"\"Checks if the decomposition already exists in one of the sets\n Registers the decomposition via the Torch utility\n\n Alerts the user if the decomposition already exists, before registering\n Throws an AssertionError if the user attempts to register a decomposition\n which is present in the set of explicitly disabled decompositions\n \"\"\"\n if aten_op in torch_enabled_decompositions:\n logger.warning(\n f\"Detected custom decomposition for {aten_op}, which conflicts \"\n \"with an existing Torch decomposition in torch_enabled_decompositions. \"\n \"The custom implementation will take precedence.\"\n )\n elif aten_op in torch_disabled_decompositions:\n logger.info(\n f\"Detected custom decomposition for {aten_op}, which is present \"\n \"in torch_disabled_decompositions.\"\n )\n\n # Conflicts with _core_aten_decompositions will only occur if\n # enable_experimental_decompositions is True in get_decompositions\n if aten_op in _core_aten_decompositions:\n logger.debug(\n f\"Detected custom decomposition for {aten_op}, which conflicts \"\n \"with an existing Torch decomposition in core_aten_decompositions. \"\n \"The custom implementation will take precedence.\"\n )\n\n def register(fn: Callable[[Any], Any]) -> Any:\n return register_decomposition(aten_op=aten_op, registry=registry)(fn)\n\n return register\n\n\ndef replace_inplace_op(aten_op: OpOverload, outplace_op: OpOverload) -> Any:\n \"\"\"Replace inplace operation with functional equivalent\n Adapted from:\n https://github.com/pytorch/pytorch/blob/3344d79e3f732dadd5c85b99a7aa1a022f187929/torch/_decomp/decompositions.py#L3355-L3361\n \"\"\"\n\n @register_torch_trt_decomposition(aten_op, registry=TORCH_TRT_DECOMPOSITIONS)\n def inplace_op(*args, **kwargs): # type: ignore\n out = outplace_op(*args, **kwargs)\n return args[0].copy_(out)\n\n return inplace_op\n\n\nreplace_inplace_op(aten.add_, aten.add)\nreplace_inplace_op(aten.addbmm_, aten.addbmm)\nreplace_inplace_op(aten.addmm_, aten.addmm)\nreplace_inplace_op(aten.addmv_, aten.addmv)\nreplace_inplace_op(aten.baddbmm_, aten.baddbmm)\nreplace_inplace_op(aten.cumprod_, aten.cumprod)\nreplace_inplace_op(aten.index_put_, aten.index_put)\nreplace_inplace_op(aten.index_reduce_, aten.index_reduce)\nreplace_inplace_op(aten.relu_, aten.relu)\nreplace_inplace_op(aten.round_, aten.round)\nreplace_inplace_op(aten.scatter_, aten.scatter)\nreplace_inplace_op(aten.scatter_add_, aten.scatter_add)\nreplace_inplace_op(aten.scatter_reduce_, aten.scatter_reduce)\n\n\n@register_torch_trt_decomposition(aten.rsqrt, registry=TORCH_TRT_DECOMPOSITIONS)\ndef rsqrt_replacement(*args, **kwargs) -> torch.Tensor: # type: ignore\n return torch.reciprocal(torch.sqrt(*args, **kwargs))\n\n\n@register_torch_trt_decomposition(aten._unsafe_view, registry=TORCH_TRT_DECOMPOSITIONS)\ndef unsafe_view_replacement(x: torch.Tensor, *args, **kwargs) -> torch.Tensor: # type: ignore\n return torch.reshape(x, *args, **kwargs)\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.lift_fresh_copy, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef lift_fresh_copy_replacement(x: torch.Tensor) -> torch.Tensor:\n return x\n\n\n@register_torch_trt_decomposition(aten.alias, registry=TORCH_TRT_DECOMPOSITIONS)\ndef alias_replacement(x: torch.Tensor) -> torch.Tensor:\n return x\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.reciprocal.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef reciprocal_replacement(\n input_: torch.Tensor,\n) -> torch.Tensor:\n return torch.div(1, input_)\n\n\n@register_torch_trt_decomposition(\n torch.ops.prims.var.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef var_decomposition(\n input_tensor: torch.Tensor,\n dims: Optional[List[int]],\n correction: int,\n output_dtype: Optional[torch.dtype] = None,\n) -> torch.Tensor:\n if dims is None:\n dims = []\n\n # If the dimensions are empty, variance is taken over all dimensions\n if isinstance(dims, (tuple, list)) and len(dims) == 0:\n N = input_tensor.numel()\n # Otherwise, the number of samples is the product of the dimensions reduced over\n else:\n N = 1\n for dim_i in dims:\n N *= input_tensor.shape[dim_i]\n\n # Compute the mean, difference, and correction term as per the formula:\n # https://pytorch.org/docs/stable/generated/torch.var.html\n\n # Additionally, prims does not support keepdim, and so we only keep dimensions\n # on the first reduction, then remove it for the second\n sample_mean = torch.mean(input_tensor, dims, keepdim=True)\n diff = input_tensor - sample_mean\n squared_diff = diff * diff\n variance_unnormalized = torch.sum(squared_diff, dims, keepdim=False)\n\n if correction is None:\n correction_term = float(N - 1)\n elif isinstance(correction, int):\n correction_term = float(N - correction)\n elif isinstance(correction, float):\n correction_term = float(N) - correction\n else:\n raise RuntimeError(\"correction must be int or float\")\n\n if correction_term <= 0:\n raise RuntimeError(f\"correction term was non-positive, got: {correction_term}\")\n\n variance = variance_unnormalized / correction_term\n\n return variance\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.empty_permuted.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef empty_permuted_decomposition(*args, **kwargs) -> torch.Tensor:\n empty_size = args[0]\n empty_permute = args[1]\n perm = [0] * len(empty_size)\n for permute_index, permute_element in enumerate(empty_permute):\n perm[permute_element] = permute_index\n return torch.empty([empty_size[l] for l in empty_permute], **kwargs).permute(perm)\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.slice_scatter.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef slice_scatter_decomposition(\n input_tensor: torch.Tensor,\n src_tensor: torch.Tensor,\n dim: int,\n start: Optional[int] = None,\n end: Optional[int] = None,\n step: Optional[int] = None,\n):\n dim_size = input_tensor.shape[dim]\n start = get_positive_dim(start, input_tensor.shape[dim])\n if end is None:\n end = dim_size\n end = get_positive_dim(end, input_tensor.shape[dim])\n if step is None:\n step = 1\n\n src_dim = src_tensor.shape\n # step == 0 is not a valid torch case\n # also src_dim should be equal to slice dimension\n\n if start == 0 and end == dim_size and step == 1:\n return src_tensor\n\n cat_tensors = []\n index_tensor_shape = []\n for i, src_each_dim in enumerate(list(src_dim)):\n if i != dim:\n index_tensor_shape.append(src_each_dim)\n for index in range(start, end, step):\n cat_tensors.append(index * torch.ones(index_tensor_shape, dtype=torch.long))\n index_tensor = torch.stack(cat_tensors, dim).cuda()\n output_tensor = torch.scatter(input_tensor, dim, index_tensor, src_tensor)\n return output_tensor\n\n\ndef get_decompositions(\n enable_experimental_decompositions: bool = False,\n) -> Dict[OpOverload, Callable[[Any], Any]]:\n if enable_experimental_decompositions:\n CORE_ATEN_DECOMPOSITIONS_FILTERED: Dict[OpOverload, Callable[[Any], Any]] = {\n decomp: _core_aten_decompositions[decomp]\n for decomp in _core_aten_decompositions\n if decomp not in torch_disabled_decompositions\n }\n return {**CORE_ATEN_DECOMPOSITIONS_FILTERED, **TORCH_TRT_DECOMPOSITIONS}\n else:\n return {**ENABLED_TORCH_DECOMPOSITIONS, **TORCH_TRT_DECOMPOSITIONS}\n", "path": "py/torch_tensorrt/dynamo/lowering/_decompositions.py"}], "after_files": [{"content": "import logging\nfrom typing import Any, Callable, Dict, List, Optional\n\nimport torch\nfrom torch._decomp import register_decomposition\nfrom torch._ops import OpOverload\nfrom torch_tensorrt.dynamo.conversion.converter_utils import get_positive_dim\n\nfrom ._decomposition_groups import (\n ENABLED_TORCH_DECOMPOSITIONS,\n TORCH_TRT_DECOMPOSITIONS,\n _core_aten_decompositions,\n aten,\n torch_disabled_decompositions,\n torch_enabled_decompositions,\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef register_torch_trt_decomposition(\n aten_op: OpOverload, registry: Optional[Any] = None\n) -> Callable[[Any], Any]:\n \"\"\"Checks if the decomposition already exists in one of the sets\n Registers the decomposition via the Torch utility\n\n Alerts the user if the decomposition already exists, before registering\n Throws an AssertionError if the user attempts to register a decomposition\n which is present in the set of explicitly disabled decompositions\n \"\"\"\n if aten_op in torch_enabled_decompositions:\n logger.warning(\n f\"Detected custom decomposition for {aten_op}, which conflicts \"\n \"with an existing Torch decomposition in torch_enabled_decompositions. \"\n \"The custom implementation will take precedence.\"\n )\n elif aten_op in torch_disabled_decompositions:\n logger.info(\n f\"Detected custom decomposition for {aten_op}, which is present \"\n \"in torch_disabled_decompositions.\"\n )\n\n # Conflicts with _core_aten_decompositions will only occur if\n # enable_experimental_decompositions is True in get_decompositions\n if aten_op in _core_aten_decompositions:\n logger.debug(\n f\"Detected custom decomposition for {aten_op}, which conflicts \"\n \"with an existing Torch decomposition in core_aten_decompositions. \"\n \"The custom implementation will take precedence.\"\n )\n\n def register(fn: Callable[[Any], Any]) -> Any:\n return register_decomposition(aten_op=aten_op, registry=registry)(fn)\n\n return register\n\n\ndef replace_inplace_op(aten_op: OpOverload, outplace_op: OpOverload) -> Any:\n \"\"\"Replace inplace operation with functional equivalent\n Adapted from:\n https://github.com/pytorch/pytorch/blob/3344d79e3f732dadd5c85b99a7aa1a022f187929/torch/_decomp/decompositions.py#L3355-L3361\n \"\"\"\n\n @register_torch_trt_decomposition(aten_op, registry=TORCH_TRT_DECOMPOSITIONS)\n def inplace_op(*args, **kwargs): # type: ignore\n out = outplace_op(*args, **kwargs)\n return args[0].copy_(out)\n\n return inplace_op\n\n\nreplace_inplace_op(aten.add_, aten.add)\nreplace_inplace_op(aten.addbmm_, aten.addbmm)\nreplace_inplace_op(aten.addmm_, aten.addmm)\nreplace_inplace_op(aten.addmv_, aten.addmv)\nreplace_inplace_op(aten.baddbmm_, aten.baddbmm)\nreplace_inplace_op(aten.cumprod_, aten.cumprod)\nreplace_inplace_op(aten.index_put_, aten.index_put)\nreplace_inplace_op(aten.index_reduce_, aten.index_reduce)\nreplace_inplace_op(aten.relu_, aten.relu)\nreplace_inplace_op(aten.round_, aten.round)\nreplace_inplace_op(aten.scatter_, aten.scatter)\nreplace_inplace_op(aten.scatter_add_, aten.scatter_add)\nreplace_inplace_op(aten.scatter_reduce_, aten.scatter_reduce)\n\n\n@register_torch_trt_decomposition(aten.rsqrt, registry=TORCH_TRT_DECOMPOSITIONS)\ndef rsqrt_replacement(*args, **kwargs) -> torch.Tensor: # type: ignore\n return torch.reciprocal(torch.sqrt(*args, **kwargs))\n\n\n@register_torch_trt_decomposition(aten._unsafe_view, registry=TORCH_TRT_DECOMPOSITIONS)\ndef unsafe_view_replacement(x: torch.Tensor, *args, **kwargs) -> torch.Tensor: # type: ignore\n return torch.reshape(x, *args, **kwargs)\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.lift_fresh_copy, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef lift_fresh_copy_replacement(x: torch.Tensor) -> torch.Tensor:\n return x\n\n\n@register_torch_trt_decomposition(aten.alias, registry=TORCH_TRT_DECOMPOSITIONS)\ndef alias_replacement(x: torch.Tensor) -> torch.Tensor:\n return x\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.reciprocal.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef reciprocal_replacement(\n input_: torch.Tensor,\n) -> torch.Tensor:\n return torch.div(1, input_)\n\n\n@register_torch_trt_decomposition(\n torch.ops.prims.var.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef var_decomposition(\n input_tensor: torch.Tensor,\n dims: Optional[List[int]],\n correction: int,\n output_dtype: Optional[torch.dtype] = None,\n) -> torch.Tensor:\n if dims is None:\n dims = []\n\n # If the dimensions are empty, variance is taken over all dimensions\n if isinstance(dims, (tuple, list)) and len(dims) == 0:\n N = input_tensor.numel()\n # Otherwise, the number of samples is the product of the dimensions reduced over\n else:\n N = 1\n for dim_i in dims:\n N *= input_tensor.shape[dim_i]\n\n # Compute the mean, difference, and correction term as per the formula:\n # https://pytorch.org/docs/stable/generated/torch.var.html\n\n # Additionally, prims does not support keepdim, and so we only keep dimensions\n # on the first reduction, then remove it for the second\n sample_mean = torch.mean(input_tensor, dims, keepdim=True)\n diff = input_tensor - sample_mean\n squared_diff = diff * diff\n variance_unnormalized = torch.sum(squared_diff, dims, keepdim=False)\n\n if correction is None:\n correction_term = float(N - 1)\n elif isinstance(correction, int):\n correction_term = float(N - correction)\n elif isinstance(correction, float):\n correction_term = float(N) - correction\n else:\n raise RuntimeError(\"correction must be int or float\")\n\n if correction_term <= 0:\n raise RuntimeError(f\"correction term was non-positive, got: {correction_term}\")\n\n variance = variance_unnormalized / correction_term\n\n return variance\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.empty_permuted.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef empty_permuted_decomposition(*args, **kwargs) -> torch.Tensor:\n empty_size = args[0]\n empty_permute = args[1]\n perm = [0] * len(empty_size)\n for permute_index, permute_element in enumerate(empty_permute):\n perm[permute_element] = permute_index\n return torch.empty([empty_size[l] for l in empty_permute], **kwargs).permute(perm)\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.slice_scatter.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef slice_scatter_decomposition(\n input_tensor: torch.Tensor,\n src_tensor: torch.Tensor,\n dim: int,\n start: Optional[int] = None,\n end: Optional[int] = None,\n step: Optional[int] = None,\n):\n dim_size = input_tensor.shape[dim]\n start = get_positive_dim(start, input_tensor.shape[dim])\n if end is None:\n end = dim_size\n end = get_positive_dim(end, input_tensor.shape[dim])\n if step is None:\n step = 1\n\n src_dim = src_tensor.shape\n # step == 0 is not a valid torch case\n # also src_dim should be equal to slice dimension\n\n if start == 0 and end == dim_size and step == 1:\n return src_tensor\n\n cat_tensors = []\n index_tensor_shape = []\n for i, src_each_dim in enumerate(list(src_dim)):\n if i != dim:\n index_tensor_shape.append(src_each_dim)\n for index in range(start, end, step):\n cat_tensors.append(index * torch.ones(index_tensor_shape, dtype=torch.long))\n index_tensor = torch.stack(cat_tensors, dim).cuda()\n output_tensor = torch.scatter(input_tensor, dim, index_tensor, src_tensor)\n return output_tensor\n\n\n@register_torch_trt_decomposition(\n torch.ops.aten.select_scatter.default, registry=TORCH_TRT_DECOMPOSITIONS\n)\ndef select_scatter_decomposition(\n input_tensor: torch.Tensor,\n src_tensor: torch.Tensor,\n dim: int,\n index: int,\n) -> torch.Tensor:\n src_tensor = torch.unsqueeze(src_tensor, dim)\n return torch.slice_scatter(input_tensor, src_tensor, dim, index, index + 1, 1)\n\n\ndef get_decompositions(\n enable_experimental_decompositions: bool = False,\n) -> Dict[OpOverload, Callable[[Any], Any]]:\n if enable_experimental_decompositions:\n CORE_ATEN_DECOMPOSITIONS_FILTERED: Dict[OpOverload, Callable[[Any], Any]] = {\n decomp: _core_aten_decompositions[decomp]\n for decomp in _core_aten_decompositions\n if decomp not in torch_disabled_decompositions\n }\n return {**CORE_ATEN_DECOMPOSITIONS_FILTERED, **TORCH_TRT_DECOMPOSITIONS}\n else:\n return {**ENABLED_TORCH_DECOMPOSITIONS, **TORCH_TRT_DECOMPOSITIONS}\n", "path": "py/torch_tensorrt/dynamo/lowering/_decompositions.py"}]}
| 2,936 | 235 |
gh_patches_debug_24987
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-9300
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tune][rllib] Windows: GPU not recognized
### What is the problem?
I'm getting `ray.tune.error.TuneError: Insufficient cluster resources to launch trial`.
I specified a GPU in my config but ray does not recognize my GPU (RTX 2080) and throws an error.
I can get passed this by setting `num_gpus: 0` in my config for now.
https://gist.github.com/juliusfrost/fa7ebbb8d1dfc66eea0bbc4babcbe5aa
### Reproduction (REQUIRED)
```
git clone https://github.com/juliusfrost/rllib-tune-atari.git
cd rllib-tune-atari
pip install -r requirements.txt
python train.py --algo a2c
```
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/resource_spec.py`
Content:
```
1 import math
2 from collections import namedtuple
3 import logging
4 import multiprocessing
5 import os
6
7 import ray
8 import ray.ray_constants as ray_constants
9
10 logger = logging.getLogger(__name__)
11
12 # Prefix for the node id resource that is automatically added to each node.
13 # For example, a node may have id `node:172.23.42.1`.
14 NODE_ID_PREFIX = "node:"
15
16
17 class ResourceSpec(
18 namedtuple("ResourceSpec", [
19 "num_cpus", "num_gpus", "memory", "object_store_memory",
20 "resources", "redis_max_memory"
21 ])):
22 """Represents the resource configuration passed to a raylet.
23
24 All fields can be None. Before starting services, resolve() should be
25 called to return a ResourceSpec with unknown values filled in with
26 defaults based on the local machine specifications.
27
28 Attributes:
29 num_cpus: The CPUs allocated for this raylet.
30 num_gpus: The GPUs allocated for this raylet.
31 memory: The memory allocated for this raylet.
32 object_store_memory: The object store memory allocated for this raylet.
33 Note that when calling to_resource_dict(), this will be scaled down
34 by 30% to account for the global plasma LRU reserve.
35 resources: The custom resources allocated for this raylet.
36 redis_max_memory: The max amount of memory (in bytes) to allow each
37 redis shard to use. Once the limit is exceeded, redis will start
38 LRU eviction of entries. This only applies to the sharded redis
39 tables (task, object, and profile tables). By default, this is
40 capped at 10GB but can be set higher.
41 """
42
43 def __new__(cls,
44 num_cpus=None,
45 num_gpus=None,
46 memory=None,
47 object_store_memory=None,
48 resources=None,
49 redis_max_memory=None):
50 return super(ResourceSpec, cls).__new__(cls, num_cpus, num_gpus,
51 memory, object_store_memory,
52 resources, redis_max_memory)
53
54 def resolved(self):
55 """Returns if this ResourceSpec has default values filled out."""
56 for v in self._asdict().values():
57 if v is None:
58 return False
59 return True
60
61 def to_resource_dict(self):
62 """Returns a dict suitable to pass to raylet initialization.
63
64 This renames num_cpus / num_gpus to "CPU" / "GPU", translates memory
65 from bytes into 100MB memory units, and checks types.
66 """
67 assert self.resolved()
68
69 memory_units = ray_constants.to_memory_units(
70 self.memory, round_up=False)
71 reservable_object_store_memory = (
72 self.object_store_memory *
73 ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION)
74 if (reservable_object_store_memory <
75 ray_constants.MEMORY_RESOURCE_UNIT_BYTES):
76 raise ValueError(
77 "The minimum amount of object_store_memory that can be "
78 "requested is {}, but you specified {}.".format(
79 int(
80 math.ceil(
81 ray_constants.MEMORY_RESOURCE_UNIT_BYTES /
82 ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION)),
83 self.object_store_memory))
84 object_store_memory_units = ray_constants.to_memory_units(
85 self.object_store_memory *
86 ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION,
87 round_up=False)
88
89 resources = dict(
90 self.resources,
91 CPU=self.num_cpus,
92 GPU=self.num_gpus,
93 memory=memory_units,
94 object_store_memory=object_store_memory_units)
95
96 resources = {
97 resource_label: resource_quantity
98 for resource_label, resource_quantity in resources.items()
99 if resource_quantity != 0
100 }
101
102 # Check types.
103 for resource_label, resource_quantity in resources.items():
104 assert (isinstance(resource_quantity, int)
105 or isinstance(resource_quantity, float))
106 if (isinstance(resource_quantity, float)
107 and not resource_quantity.is_integer()):
108 raise ValueError(
109 "Resource quantities must all be whole numbers. "
110 "Violated by resource '{}' in {}.".format(
111 resource_label, resources))
112 if resource_quantity < 0:
113 raise ValueError("Resource quantities must be nonnegative. "
114 "Violated by resource '{}' in {}.".format(
115 resource_label, resources))
116 if resource_quantity > ray_constants.MAX_RESOURCE_QUANTITY:
117 raise ValueError("Resource quantities must be at most {}. "
118 "Violated by resource '{}' in {}.".format(
119 ray_constants.MAX_RESOURCE_QUANTITY,
120 resource_label, resources))
121
122 return resources
123
124 def resolve(self, is_head, node_ip_address=None):
125 """Returns a copy with values filled out with system defaults.
126
127 Args:
128 is_head (bool): Whether this is the head node.
129 node_ip_address (str): The IP address of the node that we are on.
130 This is used to automatically create a node id resource.
131 """
132
133 resources = (self.resources or {}).copy()
134 assert "CPU" not in resources, resources
135 assert "GPU" not in resources, resources
136 assert "memory" not in resources, resources
137 assert "object_store_memory" not in resources, resources
138
139 if node_ip_address is None:
140 node_ip_address = ray.services.get_node_ip_address()
141
142 # Automatically create a node id resource on each node. This is
143 # queryable with ray.state.node_ids() and ray.state.current_node_id().
144 resources[NODE_ID_PREFIX + node_ip_address] = 1.0
145
146 num_cpus = self.num_cpus
147 if num_cpus is None:
148 num_cpus = multiprocessing.cpu_count()
149
150 num_gpus = self.num_gpus
151 gpu_ids = ray.utils.get_cuda_visible_devices()
152 # Check that the number of GPUs that the raylet wants doesn't
153 # excede the amount allowed by CUDA_VISIBLE_DEVICES.
154 if (num_gpus is not None and gpu_ids is not None
155 and num_gpus > len(gpu_ids)):
156 raise ValueError("Attempting to start raylet with {} GPUs, "
157 "but CUDA_VISIBLE_DEVICES contains {}.".format(
158 num_gpus, gpu_ids))
159 if num_gpus is None:
160 # Try to automatically detect the number of GPUs.
161 num_gpus = _autodetect_num_gpus()
162 # Don't use more GPUs than allowed by CUDA_VISIBLE_DEVICES.
163 if gpu_ids is not None:
164 num_gpus = min(num_gpus, len(gpu_ids))
165
166 # Choose a default object store size.
167 system_memory = ray.utils.get_system_memory()
168 avail_memory = ray.utils.estimate_available_memory()
169 object_store_memory = self.object_store_memory
170 if object_store_memory is None:
171 object_store_memory = int(avail_memory * 0.3)
172 # Cap memory to avoid memory waste and perf issues on large nodes
173 if (object_store_memory >
174 ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES):
175 logger.debug(
176 "Warning: Capping object memory store to {}GB. ".format(
177 ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES //
178 1e9) +
179 "To increase this further, specify `object_store_memory` "
180 "when calling ray.init() or ray start.")
181 object_store_memory = (
182 ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES)
183
184 redis_max_memory = self.redis_max_memory
185 if redis_max_memory is None:
186 redis_max_memory = min(
187 ray_constants.DEFAULT_REDIS_MAX_MEMORY_BYTES,
188 max(
189 int(avail_memory * 0.1),
190 ray_constants.REDIS_MINIMUM_MEMORY_BYTES))
191 if redis_max_memory < ray_constants.REDIS_MINIMUM_MEMORY_BYTES:
192 raise ValueError(
193 "Attempting to cap Redis memory usage at {} bytes, "
194 "but the minimum allowed is {} bytes.".format(
195 redis_max_memory,
196 ray_constants.REDIS_MINIMUM_MEMORY_BYTES))
197
198 memory = self.memory
199 if memory is None:
200 memory = (avail_memory - object_store_memory - (redis_max_memory
201 if is_head else 0))
202 if memory < 100e6 and memory < 0.05 * system_memory:
203 raise ValueError(
204 "After taking into account object store and redis memory "
205 "usage, the amount of memory on this node available for "
206 "tasks and actors ({} GB) is less than {}% of total. "
207 "You can adjust these settings with "
208 "ray.init(memory=<bytes>, "
209 "object_store_memory=<bytes>).".format(
210 round(memory / 1e9, 2),
211 int(100 * (memory / system_memory))))
212
213 logger.info(
214 "Starting Ray with {} GiB memory available for workers and up to "
215 "{} GiB for objects. You can adjust these settings "
216 "with ray.init(memory=<bytes>, "
217 "object_store_memory=<bytes>).".format(
218 round(
219 ray_constants.round_to_memory_units(
220 memory, round_up=False) / (1024**3), 2),
221 round(object_store_memory / (1024**3), 2)))
222
223 spec = ResourceSpec(num_cpus, num_gpus, memory, object_store_memory,
224 resources, redis_max_memory)
225 assert spec.resolved()
226 return spec
227
228
229 def _autodetect_num_gpus():
230 """Attempt to detect the number of GPUs on this machine.
231
232 TODO(rkn): This currently assumes Nvidia GPUs and Linux.
233
234 Returns:
235 The number of GPUs if any were detected, otherwise 0.
236 """
237 proc_gpus_path = "/proc/driver/nvidia/gpus"
238 if os.path.isdir(proc_gpus_path):
239 return len(os.listdir(proc_gpus_path))
240 return 0
241
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/ray/resource_spec.py b/python/ray/resource_spec.py
--- a/python/ray/resource_spec.py
+++ b/python/ray/resource_spec.py
@@ -3,6 +3,8 @@
import logging
import multiprocessing
import os
+import subprocess
+import sys
import ray
import ray.ray_constants as ray_constants
@@ -229,12 +231,23 @@
def _autodetect_num_gpus():
"""Attempt to detect the number of GPUs on this machine.
- TODO(rkn): This currently assumes Nvidia GPUs and Linux.
+ TODO(rkn): This currently assumes NVIDIA GPUs on Linux.
+ TODO(mehrdadn): This currently does not work on macOS.
+ TODO(mehrdadn): Use a better mechanism for Windows.
+
+ Possibly useful: tensorflow.config.list_physical_devices()
Returns:
The number of GPUs if any were detected, otherwise 0.
"""
- proc_gpus_path = "/proc/driver/nvidia/gpus"
- if os.path.isdir(proc_gpus_path):
- return len(os.listdir(proc_gpus_path))
- return 0
+ result = 0
+ if sys.platform.startswith("linux"):
+ proc_gpus_path = "/proc/driver/nvidia/gpus"
+ if os.path.isdir(proc_gpus_path):
+ result = len(os.listdir(proc_gpus_path))
+ elif sys.platform == "win32":
+ props = "AdapterCompatibility"
+ cmdargs = ["WMIC", "PATH", "Win32_VideoController", "GET", props]
+ lines = subprocess.check_output(cmdargs).splitlines()[1:]
+ result = len([l.rstrip() for l in lines if l.startswith(b"NVIDIA")])
+ return result
|
{"golden_diff": "diff --git a/python/ray/resource_spec.py b/python/ray/resource_spec.py\n--- a/python/ray/resource_spec.py\n+++ b/python/ray/resource_spec.py\n@@ -3,6 +3,8 @@\n import logging\n import multiprocessing\n import os\n+import subprocess\n+import sys\n \n import ray\n import ray.ray_constants as ray_constants\n@@ -229,12 +231,23 @@\n def _autodetect_num_gpus():\n \"\"\"Attempt to detect the number of GPUs on this machine.\n \n- TODO(rkn): This currently assumes Nvidia GPUs and Linux.\n+ TODO(rkn): This currently assumes NVIDIA GPUs on Linux.\n+ TODO(mehrdadn): This currently does not work on macOS.\n+ TODO(mehrdadn): Use a better mechanism for Windows.\n+\n+ Possibly useful: tensorflow.config.list_physical_devices()\n \n Returns:\n The number of GPUs if any were detected, otherwise 0.\n \"\"\"\n- proc_gpus_path = \"/proc/driver/nvidia/gpus\"\n- if os.path.isdir(proc_gpus_path):\n- return len(os.listdir(proc_gpus_path))\n- return 0\n+ result = 0\n+ if sys.platform.startswith(\"linux\"):\n+ proc_gpus_path = \"/proc/driver/nvidia/gpus\"\n+ if os.path.isdir(proc_gpus_path):\n+ result = len(os.listdir(proc_gpus_path))\n+ elif sys.platform == \"win32\":\n+ props = \"AdapterCompatibility\"\n+ cmdargs = [\"WMIC\", \"PATH\", \"Win32_VideoController\", \"GET\", props]\n+ lines = subprocess.check_output(cmdargs).splitlines()[1:]\n+ result = len([l.rstrip() for l in lines if l.startswith(b\"NVIDIA\")])\n+ return result\n", "issue": "[tune][rllib] Windows: GPU not recognized\n### What is the problem?\r\n\r\nI'm getting `ray.tune.error.TuneError: Insufficient cluster resources to launch trial`.\r\nI specified a GPU in my config but ray does not recognize my GPU (RTX 2080) and throws an error.\r\nI can get passed this by setting `num_gpus: 0` in my config for now.\r\n\r\nhttps://gist.github.com/juliusfrost/fa7ebbb8d1dfc66eea0bbc4babcbe5aa\r\n\r\n### Reproduction (REQUIRED)\r\n\r\n```\r\ngit clone https://github.com/juliusfrost/rllib-tune-atari.git\r\ncd rllib-tune-atari\r\npip install -r requirements.txt\r\npython train.py --algo a2c\r\n```\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "import math\nfrom collections import namedtuple\nimport logging\nimport multiprocessing\nimport os\n\nimport ray\nimport ray.ray_constants as ray_constants\n\nlogger = logging.getLogger(__name__)\n\n# Prefix for the node id resource that is automatically added to each node.\n# For example, a node may have id `node:172.23.42.1`.\nNODE_ID_PREFIX = \"node:\"\n\n\nclass ResourceSpec(\n namedtuple(\"ResourceSpec\", [\n \"num_cpus\", \"num_gpus\", \"memory\", \"object_store_memory\",\n \"resources\", \"redis_max_memory\"\n ])):\n \"\"\"Represents the resource configuration passed to a raylet.\n\n All fields can be None. Before starting services, resolve() should be\n called to return a ResourceSpec with unknown values filled in with\n defaults based on the local machine specifications.\n\n Attributes:\n num_cpus: The CPUs allocated for this raylet.\n num_gpus: The GPUs allocated for this raylet.\n memory: The memory allocated for this raylet.\n object_store_memory: The object store memory allocated for this raylet.\n Note that when calling to_resource_dict(), this will be scaled down\n by 30% to account for the global plasma LRU reserve.\n resources: The custom resources allocated for this raylet.\n redis_max_memory: The max amount of memory (in bytes) to allow each\n redis shard to use. Once the limit is exceeded, redis will start\n LRU eviction of entries. This only applies to the sharded redis\n tables (task, object, and profile tables). By default, this is\n capped at 10GB but can be set higher.\n \"\"\"\n\n def __new__(cls,\n num_cpus=None,\n num_gpus=None,\n memory=None,\n object_store_memory=None,\n resources=None,\n redis_max_memory=None):\n return super(ResourceSpec, cls).__new__(cls, num_cpus, num_gpus,\n memory, object_store_memory,\n resources, redis_max_memory)\n\n def resolved(self):\n \"\"\"Returns if this ResourceSpec has default values filled out.\"\"\"\n for v in self._asdict().values():\n if v is None:\n return False\n return True\n\n def to_resource_dict(self):\n \"\"\"Returns a dict suitable to pass to raylet initialization.\n\n This renames num_cpus / num_gpus to \"CPU\" / \"GPU\", translates memory\n from bytes into 100MB memory units, and checks types.\n \"\"\"\n assert self.resolved()\n\n memory_units = ray_constants.to_memory_units(\n self.memory, round_up=False)\n reservable_object_store_memory = (\n self.object_store_memory *\n ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION)\n if (reservable_object_store_memory <\n ray_constants.MEMORY_RESOURCE_UNIT_BYTES):\n raise ValueError(\n \"The minimum amount of object_store_memory that can be \"\n \"requested is {}, but you specified {}.\".format(\n int(\n math.ceil(\n ray_constants.MEMORY_RESOURCE_UNIT_BYTES /\n ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION)),\n self.object_store_memory))\n object_store_memory_units = ray_constants.to_memory_units(\n self.object_store_memory *\n ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION,\n round_up=False)\n\n resources = dict(\n self.resources,\n CPU=self.num_cpus,\n GPU=self.num_gpus,\n memory=memory_units,\n object_store_memory=object_store_memory_units)\n\n resources = {\n resource_label: resource_quantity\n for resource_label, resource_quantity in resources.items()\n if resource_quantity != 0\n }\n\n # Check types.\n for resource_label, resource_quantity in resources.items():\n assert (isinstance(resource_quantity, int)\n or isinstance(resource_quantity, float))\n if (isinstance(resource_quantity, float)\n and not resource_quantity.is_integer()):\n raise ValueError(\n \"Resource quantities must all be whole numbers. \"\n \"Violated by resource '{}' in {}.\".format(\n resource_label, resources))\n if resource_quantity < 0:\n raise ValueError(\"Resource quantities must be nonnegative. \"\n \"Violated by resource '{}' in {}.\".format(\n resource_label, resources))\n if resource_quantity > ray_constants.MAX_RESOURCE_QUANTITY:\n raise ValueError(\"Resource quantities must be at most {}. \"\n \"Violated by resource '{}' in {}.\".format(\n ray_constants.MAX_RESOURCE_QUANTITY,\n resource_label, resources))\n\n return resources\n\n def resolve(self, is_head, node_ip_address=None):\n \"\"\"Returns a copy with values filled out with system defaults.\n\n Args:\n is_head (bool): Whether this is the head node.\n node_ip_address (str): The IP address of the node that we are on.\n This is used to automatically create a node id resource.\n \"\"\"\n\n resources = (self.resources or {}).copy()\n assert \"CPU\" not in resources, resources\n assert \"GPU\" not in resources, resources\n assert \"memory\" not in resources, resources\n assert \"object_store_memory\" not in resources, resources\n\n if node_ip_address is None:\n node_ip_address = ray.services.get_node_ip_address()\n\n # Automatically create a node id resource on each node. This is\n # queryable with ray.state.node_ids() and ray.state.current_node_id().\n resources[NODE_ID_PREFIX + node_ip_address] = 1.0\n\n num_cpus = self.num_cpus\n if num_cpus is None:\n num_cpus = multiprocessing.cpu_count()\n\n num_gpus = self.num_gpus\n gpu_ids = ray.utils.get_cuda_visible_devices()\n # Check that the number of GPUs that the raylet wants doesn't\n # excede the amount allowed by CUDA_VISIBLE_DEVICES.\n if (num_gpus is not None and gpu_ids is not None\n and num_gpus > len(gpu_ids)):\n raise ValueError(\"Attempting to start raylet with {} GPUs, \"\n \"but CUDA_VISIBLE_DEVICES contains {}.\".format(\n num_gpus, gpu_ids))\n if num_gpus is None:\n # Try to automatically detect the number of GPUs.\n num_gpus = _autodetect_num_gpus()\n # Don't use more GPUs than allowed by CUDA_VISIBLE_DEVICES.\n if gpu_ids is not None:\n num_gpus = min(num_gpus, len(gpu_ids))\n\n # Choose a default object store size.\n system_memory = ray.utils.get_system_memory()\n avail_memory = ray.utils.estimate_available_memory()\n object_store_memory = self.object_store_memory\n if object_store_memory is None:\n object_store_memory = int(avail_memory * 0.3)\n # Cap memory to avoid memory waste and perf issues on large nodes\n if (object_store_memory >\n ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES):\n logger.debug(\n \"Warning: Capping object memory store to {}GB. \".format(\n ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES //\n 1e9) +\n \"To increase this further, specify `object_store_memory` \"\n \"when calling ray.init() or ray start.\")\n object_store_memory = (\n ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES)\n\n redis_max_memory = self.redis_max_memory\n if redis_max_memory is None:\n redis_max_memory = min(\n ray_constants.DEFAULT_REDIS_MAX_MEMORY_BYTES,\n max(\n int(avail_memory * 0.1),\n ray_constants.REDIS_MINIMUM_MEMORY_BYTES))\n if redis_max_memory < ray_constants.REDIS_MINIMUM_MEMORY_BYTES:\n raise ValueError(\n \"Attempting to cap Redis memory usage at {} bytes, \"\n \"but the minimum allowed is {} bytes.\".format(\n redis_max_memory,\n ray_constants.REDIS_MINIMUM_MEMORY_BYTES))\n\n memory = self.memory\n if memory is None:\n memory = (avail_memory - object_store_memory - (redis_max_memory\n if is_head else 0))\n if memory < 100e6 and memory < 0.05 * system_memory:\n raise ValueError(\n \"After taking into account object store and redis memory \"\n \"usage, the amount of memory on this node available for \"\n \"tasks and actors ({} GB) is less than {}% of total. \"\n \"You can adjust these settings with \"\n \"ray.init(memory=<bytes>, \"\n \"object_store_memory=<bytes>).\".format(\n round(memory / 1e9, 2),\n int(100 * (memory / system_memory))))\n\n logger.info(\n \"Starting Ray with {} GiB memory available for workers and up to \"\n \"{} GiB for objects. You can adjust these settings \"\n \"with ray.init(memory=<bytes>, \"\n \"object_store_memory=<bytes>).\".format(\n round(\n ray_constants.round_to_memory_units(\n memory, round_up=False) / (1024**3), 2),\n round(object_store_memory / (1024**3), 2)))\n\n spec = ResourceSpec(num_cpus, num_gpus, memory, object_store_memory,\n resources, redis_max_memory)\n assert spec.resolved()\n return spec\n\n\ndef _autodetect_num_gpus():\n \"\"\"Attempt to detect the number of GPUs on this machine.\n\n TODO(rkn): This currently assumes Nvidia GPUs and Linux.\n\n Returns:\n The number of GPUs if any were detected, otherwise 0.\n \"\"\"\n proc_gpus_path = \"/proc/driver/nvidia/gpus\"\n if os.path.isdir(proc_gpus_path):\n return len(os.listdir(proc_gpus_path))\n return 0\n", "path": "python/ray/resource_spec.py"}], "after_files": [{"content": "import math\nfrom collections import namedtuple\nimport logging\nimport multiprocessing\nimport os\nimport subprocess\nimport sys\n\nimport ray\nimport ray.ray_constants as ray_constants\n\nlogger = logging.getLogger(__name__)\n\n# Prefix for the node id resource that is automatically added to each node.\n# For example, a node may have id `node:172.23.42.1`.\nNODE_ID_PREFIX = \"node:\"\n\n\nclass ResourceSpec(\n namedtuple(\"ResourceSpec\", [\n \"num_cpus\", \"num_gpus\", \"memory\", \"object_store_memory\",\n \"resources\", \"redis_max_memory\"\n ])):\n \"\"\"Represents the resource configuration passed to a raylet.\n\n All fields can be None. Before starting services, resolve() should be\n called to return a ResourceSpec with unknown values filled in with\n defaults based on the local machine specifications.\n\n Attributes:\n num_cpus: The CPUs allocated for this raylet.\n num_gpus: The GPUs allocated for this raylet.\n memory: The memory allocated for this raylet.\n object_store_memory: The object store memory allocated for this raylet.\n Note that when calling to_resource_dict(), this will be scaled down\n by 30% to account for the global plasma LRU reserve.\n resources: The custom resources allocated for this raylet.\n redis_max_memory: The max amount of memory (in bytes) to allow each\n redis shard to use. Once the limit is exceeded, redis will start\n LRU eviction of entries. This only applies to the sharded redis\n tables (task, object, and profile tables). By default, this is\n capped at 10GB but can be set higher.\n \"\"\"\n\n def __new__(cls,\n num_cpus=None,\n num_gpus=None,\n memory=None,\n object_store_memory=None,\n resources=None,\n redis_max_memory=None):\n return super(ResourceSpec, cls).__new__(cls, num_cpus, num_gpus,\n memory, object_store_memory,\n resources, redis_max_memory)\n\n def resolved(self):\n \"\"\"Returns if this ResourceSpec has default values filled out.\"\"\"\n for v in self._asdict().values():\n if v is None:\n return False\n return True\n\n def to_resource_dict(self):\n \"\"\"Returns a dict suitable to pass to raylet initialization.\n\n This renames num_cpus / num_gpus to \"CPU\" / \"GPU\", translates memory\n from bytes into 100MB memory units, and checks types.\n \"\"\"\n assert self.resolved()\n\n memory_units = ray_constants.to_memory_units(\n self.memory, round_up=False)\n reservable_object_store_memory = (\n self.object_store_memory *\n ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION)\n if (reservable_object_store_memory <\n ray_constants.MEMORY_RESOURCE_UNIT_BYTES):\n raise ValueError(\n \"The minimum amount of object_store_memory that can be \"\n \"requested is {}, but you specified {}.\".format(\n int(\n math.ceil(\n ray_constants.MEMORY_RESOURCE_UNIT_BYTES /\n ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION)),\n self.object_store_memory))\n object_store_memory_units = ray_constants.to_memory_units(\n self.object_store_memory *\n ray_constants.PLASMA_RESERVABLE_MEMORY_FRACTION,\n round_up=False)\n\n resources = dict(\n self.resources,\n CPU=self.num_cpus,\n GPU=self.num_gpus,\n memory=memory_units,\n object_store_memory=object_store_memory_units)\n\n resources = {\n resource_label: resource_quantity\n for resource_label, resource_quantity in resources.items()\n if resource_quantity != 0\n }\n\n # Check types.\n for resource_label, resource_quantity in resources.items():\n assert (isinstance(resource_quantity, int)\n or isinstance(resource_quantity, float))\n if (isinstance(resource_quantity, float)\n and not resource_quantity.is_integer()):\n raise ValueError(\n \"Resource quantities must all be whole numbers. \"\n \"Violated by resource '{}' in {}.\".format(\n resource_label, resources))\n if resource_quantity < 0:\n raise ValueError(\"Resource quantities must be nonnegative. \"\n \"Violated by resource '{}' in {}.\".format(\n resource_label, resources))\n if resource_quantity > ray_constants.MAX_RESOURCE_QUANTITY:\n raise ValueError(\"Resource quantities must be at most {}. \"\n \"Violated by resource '{}' in {}.\".format(\n ray_constants.MAX_RESOURCE_QUANTITY,\n resource_label, resources))\n\n return resources\n\n def resolve(self, is_head, node_ip_address=None):\n \"\"\"Returns a copy with values filled out with system defaults.\n\n Args:\n is_head (bool): Whether this is the head node.\n node_ip_address (str): The IP address of the node that we are on.\n This is used to automatically create a node id resource.\n \"\"\"\n\n resources = (self.resources or {}).copy()\n assert \"CPU\" not in resources, resources\n assert \"GPU\" not in resources, resources\n assert \"memory\" not in resources, resources\n assert \"object_store_memory\" not in resources, resources\n\n if node_ip_address is None:\n node_ip_address = ray.services.get_node_ip_address()\n\n # Automatically create a node id resource on each node. This is\n # queryable with ray.state.node_ids() and ray.state.current_node_id().\n resources[NODE_ID_PREFIX + node_ip_address] = 1.0\n\n num_cpus = self.num_cpus\n if num_cpus is None:\n num_cpus = multiprocessing.cpu_count()\n\n num_gpus = self.num_gpus\n gpu_ids = ray.utils.get_cuda_visible_devices()\n # Check that the number of GPUs that the raylet wants doesn't\n # excede the amount allowed by CUDA_VISIBLE_DEVICES.\n if (num_gpus is not None and gpu_ids is not None\n and num_gpus > len(gpu_ids)):\n raise ValueError(\"Attempting to start raylet with {} GPUs, \"\n \"but CUDA_VISIBLE_DEVICES contains {}.\".format(\n num_gpus, gpu_ids))\n if num_gpus is None:\n # Try to automatically detect the number of GPUs.\n num_gpus = _autodetect_num_gpus()\n # Don't use more GPUs than allowed by CUDA_VISIBLE_DEVICES.\n if gpu_ids is not None:\n num_gpus = min(num_gpus, len(gpu_ids))\n\n # Choose a default object store size.\n system_memory = ray.utils.get_system_memory()\n avail_memory = ray.utils.estimate_available_memory()\n object_store_memory = self.object_store_memory\n if object_store_memory is None:\n object_store_memory = int(avail_memory * 0.3)\n # Cap memory to avoid memory waste and perf issues on large nodes\n if (object_store_memory >\n ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES):\n logger.debug(\n \"Warning: Capping object memory store to {}GB. \".format(\n ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES //\n 1e9) +\n \"To increase this further, specify `object_store_memory` \"\n \"when calling ray.init() or ray start.\")\n object_store_memory = (\n ray_constants.DEFAULT_OBJECT_STORE_MAX_MEMORY_BYTES)\n\n redis_max_memory = self.redis_max_memory\n if redis_max_memory is None:\n redis_max_memory = min(\n ray_constants.DEFAULT_REDIS_MAX_MEMORY_BYTES,\n max(\n int(avail_memory * 0.1),\n ray_constants.REDIS_MINIMUM_MEMORY_BYTES))\n if redis_max_memory < ray_constants.REDIS_MINIMUM_MEMORY_BYTES:\n raise ValueError(\n \"Attempting to cap Redis memory usage at {} bytes, \"\n \"but the minimum allowed is {} bytes.\".format(\n redis_max_memory,\n ray_constants.REDIS_MINIMUM_MEMORY_BYTES))\n\n memory = self.memory\n if memory is None:\n memory = (avail_memory - object_store_memory - (redis_max_memory\n if is_head else 0))\n if memory < 100e6 and memory < 0.05 * system_memory:\n raise ValueError(\n \"After taking into account object store and redis memory \"\n \"usage, the amount of memory on this node available for \"\n \"tasks and actors ({} GB) is less than {}% of total. \"\n \"You can adjust these settings with \"\n \"ray.init(memory=<bytes>, \"\n \"object_store_memory=<bytes>).\".format(\n round(memory / 1e9, 2),\n int(100 * (memory / system_memory))))\n\n logger.info(\n \"Starting Ray with {} GiB memory available for workers and up to \"\n \"{} GiB for objects. You can adjust these settings \"\n \"with ray.init(memory=<bytes>, \"\n \"object_store_memory=<bytes>).\".format(\n round(\n ray_constants.round_to_memory_units(\n memory, round_up=False) / (1024**3), 2),\n round(object_store_memory / (1024**3), 2)))\n\n spec = ResourceSpec(num_cpus, num_gpus, memory, object_store_memory,\n resources, redis_max_memory)\n assert spec.resolved()\n return spec\n\n\ndef _autodetect_num_gpus():\n \"\"\"Attempt to detect the number of GPUs on this machine.\n\n TODO(rkn): This currently assumes NVIDIA GPUs on Linux.\n TODO(mehrdadn): This currently does not work on macOS.\n TODO(mehrdadn): Use a better mechanism for Windows.\n\n Possibly useful: tensorflow.config.list_physical_devices()\n\n Returns:\n The number of GPUs if any were detected, otherwise 0.\n \"\"\"\n result = 0\n if sys.platform.startswith(\"linux\"):\n proc_gpus_path = \"/proc/driver/nvidia/gpus\"\n if os.path.isdir(proc_gpus_path):\n result = len(os.listdir(proc_gpus_path))\n elif sys.platform == \"win32\":\n props = \"AdapterCompatibility\"\n cmdargs = [\"WMIC\", \"PATH\", \"Win32_VideoController\", \"GET\", props]\n lines = subprocess.check_output(cmdargs).splitlines()[1:]\n result = len([l.rstrip() for l in lines if l.startswith(b\"NVIDIA\")])\n return result\n", "path": "python/ray/resource_spec.py"}]}
| 3,161 | 391 |
gh_patches_debug_30245
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-6569
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Setuptools 60.7.0 breaks the executable
Building with setuptools==60.7.0 I get this error.
There is no `jaraco` module (seems like this is the author of `setuptools)
```
(simulator_venv) ➜ agent git:(feature/SB-22361-improve-socket-error-handling) ✗ dist/sbsimulator/sbsimulator
Traceback (most recent call last):
File "PyInstaller/hooks/rthooks/pyi_rth_pkgres.py", line 16, in <module>
File "/Users/arossert/Shared/Bitbucket/agent/simulator_venv/lib/python3.8/site-packages/pkg_resources/__init__.py", line 74, in <module>
from pkg_resources.extern.jaraco.text import (
File "/Users/arossert/Shared/Bitbucket/agent/simulator_venv/lib/python3.8/site-packages/pkg_resources/extern/__init__.py", line 52, in create_module
return self.load_module(spec.name)
File "/Users/arossert/Shared/Bitbucket/agent/simulator_venv/lib/python3.8/site-packages/pkg_resources/extern/__init__.py", line 44, in load_module
raise ImportError(
ImportError: The 'jaraco' package is required; normally this is bundled with this package so if you get this warning, consult the packager of your distribution.
[23421] Failed to execute script 'pyi_rth_pkgres' due to unhandled exception!
```
PyInstaller: 4.8
OS: Linux/Mac (not tested on Windows)
Python: 3.8.10
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-pkg_resources.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2021, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 from PyInstaller.utils.hooks import collect_submodules
13
14 # pkg_resources keeps vendored modules in its _vendor subpackage, and does sys.meta_path based import magic to expose
15 # them as pkg_resources.extern.*
16 hiddenimports = collect_submodules('pkg_resources._vendor')
17
18 # pkg_resources v45.0 dropped support for Python 2 and added this module printing a warning. We could save some bytes if
19 # we would replace this by a fake module.
20 hiddenimports.append('pkg_resources.py2_warn')
21
22 excludedimports = ['__main__']
23
24 # Some more hidden imports. See:
25 # https://github.com/pyinstaller/pyinstaller-hooks-contrib/issues/15#issuecomment-663699288 `packaging` can either be
26 # its own package, or embedded in `pkg_resources._vendor.packaging`, or both. Assume the worst and include both if
27 # present.
28 hiddenimports += collect_submodules('packaging')
29
30 hiddenimports += ['pkg_resources.markers']
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PyInstaller/hooks/hook-pkg_resources.py b/PyInstaller/hooks/hook-pkg_resources.py
--- a/PyInstaller/hooks/hook-pkg_resources.py
+++ b/PyInstaller/hooks/hook-pkg_resources.py
@@ -9,7 +9,7 @@
# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
#-----------------------------------------------------------------------------
-from PyInstaller.utils.hooks import collect_submodules
+from PyInstaller.utils.hooks import collect_submodules, is_module_satisfies
# pkg_resources keeps vendored modules in its _vendor subpackage, and does sys.meta_path based import magic to expose
# them as pkg_resources.extern.*
@@ -28,3 +28,24 @@
hiddenimports += collect_submodules('packaging')
hiddenimports += ['pkg_resources.markers']
+
+# As of v60.7, setuptools vendored jaraco and has pkg_resources use it. Currently, the pkg_resources._vendor.jaraco
+# namespace package cannot be automatically scanned due to limited support for pure namespace packages in our hook
+# utilities.
+#
+# In setuptools 60.7.0, the vendored jaraco.text package included "Lorem Ipsum.txt" data file, which also has to be
+# collected. However, the presence of the data file (and the resulting directory hierarchy) confuses the importer's
+# redirection logic; instead of trying to work-around that, tell user to upgrade or downgrade their setuptools.
+if is_module_satisfies("setuptools == 60.7.0"):
+ raise SystemExit(
+ "ERROR: Setuptools 60.7.0 is incompatible with PyInstaller. "
+ "Downgrade to an earlier version or upgrade to a later version."
+ )
+# In setuptools 60.7.1, the "Lorem Ipsum.txt" data file was dropped from the vendored jaraco.text package, so we can
+# accommodate it with couple of hidden imports.
+elif is_module_satisfies("setuptools >= 60.7.1"):
+ hiddenimports += [
+ 'pkg_resources._vendor.jaraco.functools',
+ 'pkg_resources._vendor.jaraco.context',
+ 'pkg_resources._vendor.jaraco.text',
+ ]
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-pkg_resources.py b/PyInstaller/hooks/hook-pkg_resources.py\n--- a/PyInstaller/hooks/hook-pkg_resources.py\n+++ b/PyInstaller/hooks/hook-pkg_resources.py\n@@ -9,7 +9,7 @@\n # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n #-----------------------------------------------------------------------------\n \n-from PyInstaller.utils.hooks import collect_submodules\n+from PyInstaller.utils.hooks import collect_submodules, is_module_satisfies\n \n # pkg_resources keeps vendored modules in its _vendor subpackage, and does sys.meta_path based import magic to expose\n # them as pkg_resources.extern.*\n@@ -28,3 +28,24 @@\n hiddenimports += collect_submodules('packaging')\n \n hiddenimports += ['pkg_resources.markers']\n+\n+# As of v60.7, setuptools vendored jaraco and has pkg_resources use it. Currently, the pkg_resources._vendor.jaraco\n+# namespace package cannot be automatically scanned due to limited support for pure namespace packages in our hook\n+# utilities.\n+#\n+# In setuptools 60.7.0, the vendored jaraco.text package included \"Lorem Ipsum.txt\" data file, which also has to be\n+# collected. However, the presence of the data file (and the resulting directory hierarchy) confuses the importer's\n+# redirection logic; instead of trying to work-around that, tell user to upgrade or downgrade their setuptools.\n+if is_module_satisfies(\"setuptools == 60.7.0\"):\n+ raise SystemExit(\n+ \"ERROR: Setuptools 60.7.0 is incompatible with PyInstaller. \"\n+ \"Downgrade to an earlier version or upgrade to a later version.\"\n+ )\n+# In setuptools 60.7.1, the \"Lorem Ipsum.txt\" data file was dropped from the vendored jaraco.text package, so we can\n+# accommodate it with couple of hidden imports.\n+elif is_module_satisfies(\"setuptools >= 60.7.1\"):\n+ hiddenimports += [\n+ 'pkg_resources._vendor.jaraco.functools',\n+ 'pkg_resources._vendor.jaraco.context',\n+ 'pkg_resources._vendor.jaraco.text',\n+ ]\n", "issue": "Setuptools 60.7.0 breaks the executable\nBuilding with setuptools==60.7.0 I get this error.\r\nThere is no `jaraco` module (seems like this is the author of `setuptools)\r\n```\r\n(simulator_venv) \u279c agent git:(feature/SB-22361-improve-socket-error-handling) \u2717 dist/sbsimulator/sbsimulator\r\nTraceback (most recent call last):\r\n File \"PyInstaller/hooks/rthooks/pyi_rth_pkgres.py\", line 16, in <module>\r\n File \"/Users/arossert/Shared/Bitbucket/agent/simulator_venv/lib/python3.8/site-packages/pkg_resources/__init__.py\", line 74, in <module>\r\n from pkg_resources.extern.jaraco.text import (\r\n File \"/Users/arossert/Shared/Bitbucket/agent/simulator_venv/lib/python3.8/site-packages/pkg_resources/extern/__init__.py\", line 52, in create_module\r\n return self.load_module(spec.name)\r\n File \"/Users/arossert/Shared/Bitbucket/agent/simulator_venv/lib/python3.8/site-packages/pkg_resources/extern/__init__.py\", line 44, in load_module\r\n raise ImportError(\r\nImportError: The 'jaraco' package is required; normally this is bundled with this package so if you get this warning, consult the packager of your distribution.\r\n[23421] Failed to execute script 'pyi_rth_pkgres' due to unhandled exception!\r\n```\r\n\r\nPyInstaller: 4.8\r\nOS: Linux/Mac (not tested on Windows)\r\nPython: 3.8.10\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2021, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks import collect_submodules\n\n# pkg_resources keeps vendored modules in its _vendor subpackage, and does sys.meta_path based import magic to expose\n# them as pkg_resources.extern.*\nhiddenimports = collect_submodules('pkg_resources._vendor')\n\n# pkg_resources v45.0 dropped support for Python 2 and added this module printing a warning. We could save some bytes if\n# we would replace this by a fake module.\nhiddenimports.append('pkg_resources.py2_warn')\n\nexcludedimports = ['__main__']\n\n# Some more hidden imports. See:\n# https://github.com/pyinstaller/pyinstaller-hooks-contrib/issues/15#issuecomment-663699288 `packaging` can either be\n# its own package, or embedded in `pkg_resources._vendor.packaging`, or both. Assume the worst and include both if\n# present.\nhiddenimports += collect_submodules('packaging')\n\nhiddenimports += ['pkg_resources.markers']\n", "path": "PyInstaller/hooks/hook-pkg_resources.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2021, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks import collect_submodules, is_module_satisfies\n\n# pkg_resources keeps vendored modules in its _vendor subpackage, and does sys.meta_path based import magic to expose\n# them as pkg_resources.extern.*\nhiddenimports = collect_submodules('pkg_resources._vendor')\n\n# pkg_resources v45.0 dropped support for Python 2 and added this module printing a warning. We could save some bytes if\n# we would replace this by a fake module.\nhiddenimports.append('pkg_resources.py2_warn')\n\nexcludedimports = ['__main__']\n\n# Some more hidden imports. See:\n# https://github.com/pyinstaller/pyinstaller-hooks-contrib/issues/15#issuecomment-663699288 `packaging` can either be\n# its own package, or embedded in `pkg_resources._vendor.packaging`, or both. Assume the worst and include both if\n# present.\nhiddenimports += collect_submodules('packaging')\n\nhiddenimports += ['pkg_resources.markers']\n\n# As of v60.7, setuptools vendored jaraco and has pkg_resources use it. Currently, the pkg_resources._vendor.jaraco\n# namespace package cannot be automatically scanned due to limited support for pure namespace packages in our hook\n# utilities.\n#\n# In setuptools 60.7.0, the vendored jaraco.text package included \"Lorem Ipsum.txt\" data file, which also has to be\n# collected. However, the presence of the data file (and the resulting directory hierarchy) confuses the importer's\n# redirection logic; instead of trying to work-around that, tell user to upgrade or downgrade their setuptools.\nif is_module_satisfies(\"setuptools == 60.7.0\"):\n raise SystemExit(\n \"ERROR: Setuptools 60.7.0 is incompatible with PyInstaller. \"\n \"Downgrade to an earlier version or upgrade to a later version.\"\n )\n# In setuptools 60.7.1, the \"Lorem Ipsum.txt\" data file was dropped from the vendored jaraco.text package, so we can\n# accommodate it with couple of hidden imports.\nelif is_module_satisfies(\"setuptools >= 60.7.1\"):\n hiddenimports += [\n 'pkg_resources._vendor.jaraco.functools',\n 'pkg_resources._vendor.jaraco.context',\n 'pkg_resources._vendor.jaraco.text',\n ]\n", "path": "PyInstaller/hooks/hook-pkg_resources.py"}]}
| 986 | 485 |
gh_patches_debug_24447
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-4316
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AWS_157 False Positive on Aurora
**Describe the issue**
CKV_AWS_157 fails on Aurora instances even though the `MultiAZ` property is not applicable to Aurora
**Examples**
```
RDSinstance:
Type: AWS::RDS::DBInstance
Properties:
DBClusterIdentifier: !Ref DBCluster
DBInstanceClass: !Ref DbType
DBInstanceIdentifier: !Sub ${AppName}-${EnvironmentName}
DBParameterGroupName: !Ref DbParameterGroup
DBSubnetGroupName: !Ref DBSubnetGroup
Engine: aurora-mysql
MonitoringInterval: "60"
MonitoringRoleArn: !GetAtt RdsMonitoringRole.Arn
PubliclyAccessible: 'false'
```
**Version (please complete the following information):**
- Checkov Version 2.2.270
**Additional context**
- AWS docs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbinstance.html#cfn-rds-dbinstance-multiaz
<img width="1377" alt="Screenshot 2023-01-19 at 1 25 49 PM" src="https://user-images.githubusercontent.com/1328683/213552592-4736b84a-3926-44b2-9dad-196b68f28daa.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py`
Content:
```
1 from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck
2 from checkov.common.models.enums import CheckCategories
3
4
5 class RDSMultiAZEnabled(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure that RDS instances have Multi-AZ enabled"
8 id = "CKV_AWS_157"
9 supported_resources = ['AWS::RDS::DBInstance']
10 categories = [CheckCategories.NETWORKING]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return 'Properties/MultiAZ'
15
16
17 check = RDSMultiAZEnabled()
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py b/checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py
--- a/checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py
+++ b/checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py
@@ -1,5 +1,5 @@
+from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck
-from checkov.common.models.enums import CheckCategories
class RDSMultiAZEnabled(BaseResourceValueCheck):
@@ -10,6 +10,16 @@
categories = [CheckCategories.NETWORKING]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
+ def scan_resource_conf(self, conf):
+ # Aurora is replicated across all AZs and doesn't require MultiAZ to be set
+ # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbinstance.html#cfn-rds-dbinstance-multiaz
+ if 'Properties' in conf.keys():
+ if 'Engine' in conf['Properties'].keys():
+ if 'aurora' in conf['Properties']['Engine']:
+ return CheckResult.UNKNOWN
+ # Database is not Aurora; Use base class implementation
+ return super().scan_resource_conf(conf)
+
def get_inspected_key(self):
return 'Properties/MultiAZ'
|
{"golden_diff": "diff --git a/checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py b/checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py\n--- a/checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py\n+++ b/checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py\n@@ -1,5 +1,5 @@\n+from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n-from checkov.common.models.enums import CheckCategories\n \n \n class RDSMultiAZEnabled(BaseResourceValueCheck):\n@@ -10,6 +10,16 @@\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n+ def scan_resource_conf(self, conf):\n+ # Aurora is replicated across all AZs and doesn't require MultiAZ to be set\n+ # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbinstance.html#cfn-rds-dbinstance-multiaz\n+ if 'Properties' in conf.keys():\n+ if 'Engine' in conf['Properties'].keys():\n+ if 'aurora' in conf['Properties']['Engine']:\n+ return CheckResult.UNKNOWN\n+ # Database is not Aurora; Use base class implementation\n+ return super().scan_resource_conf(conf)\n+ \n def get_inspected_key(self):\n return 'Properties/MultiAZ'\n", "issue": "CKV_AWS_157 False Positive on Aurora\n**Describe the issue**\r\nCKV_AWS_157 fails on Aurora instances even though the `MultiAZ` property is not applicable to Aurora\r\n\r\n**Examples**\r\n```\r\nRDSinstance:\r\n Type: AWS::RDS::DBInstance\r\n Properties:\r\n DBClusterIdentifier: !Ref DBCluster\r\n DBInstanceClass: !Ref DbType\r\n DBInstanceIdentifier: !Sub ${AppName}-${EnvironmentName}\r\n DBParameterGroupName: !Ref DbParameterGroup\r\n DBSubnetGroupName: !Ref DBSubnetGroup\r\n Engine: aurora-mysql\r\n MonitoringInterval: \"60\"\r\n MonitoringRoleArn: !GetAtt RdsMonitoringRole.Arn\r\n PubliclyAccessible: 'false'\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.2.270\r\n\r\n**Additional context**\r\n- AWS docs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbinstance.html#cfn-rds-dbinstance-multiaz\r\n\r\n<img width=\"1377\" alt=\"Screenshot 2023-01-19 at 1 25 49 PM\" src=\"https://user-images.githubusercontent.com/1328683/213552592-4736b84a-3926-44b2-9dad-196b68f28daa.png\">\r\n\n", "before_files": [{"content": "from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories\n\n\nclass RDSMultiAZEnabled(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that RDS instances have Multi-AZ enabled\"\n id = \"CKV_AWS_157\"\n supported_resources = ['AWS::RDS::DBInstance']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return 'Properties/MultiAZ'\n\n\ncheck = RDSMultiAZEnabled()\n", "path": "checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass RDSMultiAZEnabled(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that RDS instances have Multi-AZ enabled\"\n id = \"CKV_AWS_157\"\n supported_resources = ['AWS::RDS::DBInstance']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n # Aurora is replicated across all AZs and doesn't require MultiAZ to be set\n # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbinstance.html#cfn-rds-dbinstance-multiaz\n if 'Properties' in conf.keys():\n if 'Engine' in conf['Properties'].keys():\n if 'aurora' in conf['Properties']['Engine']:\n return CheckResult.UNKNOWN\n # Database is not Aurora; Use base class implementation\n return super().scan_resource_conf(conf)\n \n def get_inspected_key(self):\n return 'Properties/MultiAZ'\n\n\ncheck = RDSMultiAZEnabled()\n", "path": "checkov/cloudformation/checks/resource/aws/RDSMultiAZEnabled.py"}]}
| 767 | 324 |
gh_patches_debug_32293
|
rasdani/github-patches
|
git_diff
|
translate__pootle-5675
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move session data into session
atm there is data like sidebar stuff that is stored in the actual cookie.
apart from slowing down normal users (they send all that data in *every* request), bots are acumulating cookie cruft and needlessly posting that at our sites.
if the cookie cruft gets too long it gets truncated and errors out
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/core/helpers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import json
10 from urllib import quote, unquote
11
12 from django.utils import dateformat
13
14
15 SIDEBAR_COOKIE_NAME = 'pootle-browser-sidebar'
16
17
18 def get_sidebar_announcements_context(request, objects):
19 """Return the announcements context for the browser pages sidebar.
20
21 :param request: a :cls:`django.http.HttpRequest` object.
22 :param objects: a tuple of Project, Language and TranslationProject to
23 retrieve the announcements for. Any of those can be
24 missing, but it is recommended for them to be in that exact
25 order.
26 """
27 announcements = []
28 new_cookie_data = {}
29 cookie_data = {}
30
31 if SIDEBAR_COOKIE_NAME in request.COOKIES:
32 json_str = unquote(request.COOKIES[SIDEBAR_COOKIE_NAME])
33 cookie_data = json.loads(json_str)
34
35 is_sidebar_open = cookie_data.get('isOpen', True)
36
37 for item in objects:
38 announcement = item.get_announcement(request.user)
39
40 if announcement is None:
41 continue
42
43 announcements.append(announcement)
44 # The virtual_path cannot be used as is for JSON.
45 ann_key = announcement.virtual_path.replace('/', '_')
46 ann_mtime = dateformat.format(announcement.modified_on, 'U')
47 stored_mtime = cookie_data.get(ann_key, None)
48
49 if ann_mtime != stored_mtime:
50 new_cookie_data[ann_key] = ann_mtime
51
52 if new_cookie_data:
53 # Some announcement has been changed or was never displayed before, so
54 # display sidebar and save the changed mtimes in the cookie to not
55 # display it next time unless it is necessary.
56 is_sidebar_open = True
57 cookie_data.update(new_cookie_data)
58 new_cookie_data = quote(json.dumps(cookie_data))
59
60 ctx = {
61 'announcements': announcements,
62 'is_sidebar_open': is_sidebar_open,
63 'has_sidebar': len(announcements) > 0,
64 }
65
66 return ctx, new_cookie_data
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/core/helpers.py b/pootle/core/helpers.py
--- a/pootle/core/helpers.py
+++ b/pootle/core/helpers.py
@@ -12,7 +12,7 @@
from django.utils import dateformat
-SIDEBAR_COOKIE_NAME = 'pootle-browser-sidebar'
+SIDEBAR_COOKIE_NAME = 'pootle-browser-open-sidebar'
def get_sidebar_announcements_context(request, objects):
@@ -24,6 +24,7 @@
missing, but it is recommended for them to be in that exact
order.
"""
+ must_show_announcement = False
announcements = []
new_cookie_data = {}
cookie_data = {}
@@ -41,20 +42,20 @@
continue
announcements.append(announcement)
- # The virtual_path cannot be used as is for JSON.
- ann_key = announcement.virtual_path.replace('/', '_')
+
ann_mtime = dateformat.format(announcement.modified_on, 'U')
- stored_mtime = cookie_data.get(ann_key, None)
+ stored_mtime = request.session.get(announcement.virtual_path, None)
if ann_mtime != stored_mtime:
- new_cookie_data[ann_key] = ann_mtime
+ # Some announcement has been changed or was never displayed before,
+ # so display sidebar and save the changed mtimes in the session to
+ # not display it next time unless it is necessary.
+ must_show_announcement = True
+ request.session[announcement.virtual_path] = ann_mtime
- if new_cookie_data:
- # Some announcement has been changed or was never displayed before, so
- # display sidebar and save the changed mtimes in the cookie to not
- # display it next time unless it is necessary.
+ if must_show_announcement and not is_sidebar_open:
is_sidebar_open = True
- cookie_data.update(new_cookie_data)
+ cookie_data['isOpen'] = is_sidebar_open
new_cookie_data = quote(json.dumps(cookie_data))
ctx = {
|
{"golden_diff": "diff --git a/pootle/core/helpers.py b/pootle/core/helpers.py\n--- a/pootle/core/helpers.py\n+++ b/pootle/core/helpers.py\n@@ -12,7 +12,7 @@\n from django.utils import dateformat\n \n \n-SIDEBAR_COOKIE_NAME = 'pootle-browser-sidebar'\n+SIDEBAR_COOKIE_NAME = 'pootle-browser-open-sidebar'\n \n \n def get_sidebar_announcements_context(request, objects):\n@@ -24,6 +24,7 @@\n missing, but it is recommended for them to be in that exact\n order.\n \"\"\"\n+ must_show_announcement = False\n announcements = []\n new_cookie_data = {}\n cookie_data = {}\n@@ -41,20 +42,20 @@\n continue\n \n announcements.append(announcement)\n- # The virtual_path cannot be used as is for JSON.\n- ann_key = announcement.virtual_path.replace('/', '_')\n+\n ann_mtime = dateformat.format(announcement.modified_on, 'U')\n- stored_mtime = cookie_data.get(ann_key, None)\n+ stored_mtime = request.session.get(announcement.virtual_path, None)\n \n if ann_mtime != stored_mtime:\n- new_cookie_data[ann_key] = ann_mtime\n+ # Some announcement has been changed or was never displayed before,\n+ # so display sidebar and save the changed mtimes in the session to\n+ # not display it next time unless it is necessary.\n+ must_show_announcement = True\n+ request.session[announcement.virtual_path] = ann_mtime\n \n- if new_cookie_data:\n- # Some announcement has been changed or was never displayed before, so\n- # display sidebar and save the changed mtimes in the cookie to not\n- # display it next time unless it is necessary.\n+ if must_show_announcement and not is_sidebar_open:\n is_sidebar_open = True\n- cookie_data.update(new_cookie_data)\n+ cookie_data['isOpen'] = is_sidebar_open\n new_cookie_data = quote(json.dumps(cookie_data))\n \n ctx = {\n", "issue": "Move session data into session\natm there is data like sidebar stuff that is stored in the actual cookie.\r\n\r\napart from slowing down normal users (they send all that data in *every* request), bots are acumulating cookie cruft and needlessly posting that at our sites.\r\n\r\nif the cookie cruft gets too long it gets truncated and errors out\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport json\nfrom urllib import quote, unquote\n\nfrom django.utils import dateformat\n\n\nSIDEBAR_COOKIE_NAME = 'pootle-browser-sidebar'\n\n\ndef get_sidebar_announcements_context(request, objects):\n \"\"\"Return the announcements context for the browser pages sidebar.\n\n :param request: a :cls:`django.http.HttpRequest` object.\n :param objects: a tuple of Project, Language and TranslationProject to\n retrieve the announcements for. Any of those can be\n missing, but it is recommended for them to be in that exact\n order.\n \"\"\"\n announcements = []\n new_cookie_data = {}\n cookie_data = {}\n\n if SIDEBAR_COOKIE_NAME in request.COOKIES:\n json_str = unquote(request.COOKIES[SIDEBAR_COOKIE_NAME])\n cookie_data = json.loads(json_str)\n\n is_sidebar_open = cookie_data.get('isOpen', True)\n\n for item in objects:\n announcement = item.get_announcement(request.user)\n\n if announcement is None:\n continue\n\n announcements.append(announcement)\n # The virtual_path cannot be used as is for JSON.\n ann_key = announcement.virtual_path.replace('/', '_')\n ann_mtime = dateformat.format(announcement.modified_on, 'U')\n stored_mtime = cookie_data.get(ann_key, None)\n\n if ann_mtime != stored_mtime:\n new_cookie_data[ann_key] = ann_mtime\n\n if new_cookie_data:\n # Some announcement has been changed or was never displayed before, so\n # display sidebar and save the changed mtimes in the cookie to not\n # display it next time unless it is necessary.\n is_sidebar_open = True\n cookie_data.update(new_cookie_data)\n new_cookie_data = quote(json.dumps(cookie_data))\n\n ctx = {\n 'announcements': announcements,\n 'is_sidebar_open': is_sidebar_open,\n 'has_sidebar': len(announcements) > 0,\n }\n\n return ctx, new_cookie_data\n", "path": "pootle/core/helpers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport json\nfrom urllib import quote, unquote\n\nfrom django.utils import dateformat\n\n\nSIDEBAR_COOKIE_NAME = 'pootle-browser-open-sidebar'\n\n\ndef get_sidebar_announcements_context(request, objects):\n \"\"\"Return the announcements context for the browser pages sidebar.\n\n :param request: a :cls:`django.http.HttpRequest` object.\n :param objects: a tuple of Project, Language and TranslationProject to\n retrieve the announcements for. Any of those can be\n missing, but it is recommended for them to be in that exact\n order.\n \"\"\"\n must_show_announcement = False\n announcements = []\n new_cookie_data = {}\n cookie_data = {}\n\n if SIDEBAR_COOKIE_NAME in request.COOKIES:\n json_str = unquote(request.COOKIES[SIDEBAR_COOKIE_NAME])\n cookie_data = json.loads(json_str)\n\n is_sidebar_open = cookie_data.get('isOpen', True)\n\n for item in objects:\n announcement = item.get_announcement(request.user)\n\n if announcement is None:\n continue\n\n announcements.append(announcement)\n\n ann_mtime = dateformat.format(announcement.modified_on, 'U')\n stored_mtime = request.session.get(announcement.virtual_path, None)\n\n if ann_mtime != stored_mtime:\n # Some announcement has been changed or was never displayed before,\n # so display sidebar and save the changed mtimes in the session to\n # not display it next time unless it is necessary.\n must_show_announcement = True\n request.session[announcement.virtual_path] = ann_mtime\n\n if must_show_announcement and not is_sidebar_open:\n is_sidebar_open = True\n cookie_data['isOpen'] = is_sidebar_open\n new_cookie_data = quote(json.dumps(cookie_data))\n\n ctx = {\n 'announcements': announcements,\n 'is_sidebar_open': is_sidebar_open,\n 'has_sidebar': len(announcements) > 0,\n }\n\n return ctx, new_cookie_data\n", "path": "pootle/core/helpers.py"}]}
| 948 | 443 |
gh_patches_debug_27272
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1001
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sphinx Documentation
CTFds documentation is lacking and we need a system for it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/master/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 # import os
16 # import sys
17 # sys.path.insert(0, os.path.abspath('.'))
18
19
20 # -- Project information -----------------------------------------------------
21
22 project = u'CTFd'
23 copyright = u'2019, Kevin Chung'
24 author = u'Kevin Chung'
25
26 # The short X.Y version
27 version = u''
28 # The full version, including alpha/beta/rc tags
29 release = u'2.1.0'
30
31
32 # -- General configuration ---------------------------------------------------
33
34 # If your documentation needs a minimal Sphinx version, state it here.
35 #
36 # needs_sphinx = '1.0'
37
38 # Add any Sphinx extension module names here, as strings. They can be
39 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 # ones.
41 extensions = [
42 'sphinx.ext.intersphinx',
43 'sphinx.ext.todo',
44 'sphinx.ext.imgmath',
45 'sphinx.ext.viewcode',
46 ]
47
48 # Add any paths that contain templates here, relative to this directory.
49 templates_path = ['_templates']
50
51 # The suffix(es) of source filenames.
52 # You can specify multiple suffix as a list of string:
53 #
54 # source_suffix = ['.rst', '.md']
55 source_suffix = '.rst'
56
57 # The master toctree document.
58 master_doc = 'index'
59
60 # The language for content autogenerated by Sphinx. Refer to documentation
61 # for a list of supported languages.
62 #
63 # This is also used if you do content translation via gettext catalogs.
64 # Usually you set "language" from the command line for these cases.
65 language = None
66
67 # List of patterns, relative to source directory, that match files and
68 # directories to ignore when looking for source files.
69 # This pattern also affects html_static_path and html_extra_path.
70 exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']
71
72 # The name of the Pygments (syntax highlighting) style to use.
73 pygments_style = None
74
75
76 # -- Options for HTML output -------------------------------------------------
77
78 # The theme to use for HTML and HTML Help pages. See the documentation for
79 # a list of builtin themes.
80 #
81 html_theme = 'alabaster'
82
83 # Theme options are theme-specific and customize the look and feel of a theme
84 # further. For a list of options available for each theme, see the
85 # documentation.
86 #
87 # html_theme_options = {}
88
89 # Add any paths that contain custom static files (such as style sheets) here,
90 # relative to this directory. They are copied after the builtin static files,
91 # so a file named "default.css" will overwrite the builtin "default.css".
92 html_static_path = ['_static']
93
94 # Custom sidebar templates, must be a dictionary that maps document names
95 # to template names.
96 #
97 # The default sidebars (for documents that don't match any pattern) are
98 # defined by theme itself. Builtin themes are using these templates by
99 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
100 # 'searchbox.html']``.
101 #
102 # html_sidebars = {}
103
104
105 # -- Options for HTMLHelp output ---------------------------------------------
106
107 # Output file base name for HTML help builder.
108 htmlhelp_basename = 'CTFddoc'
109
110
111 # -- Options for LaTeX output ------------------------------------------------
112
113 latex_elements = {
114 # The paper size ('letterpaper' or 'a4paper').
115 #
116 # 'papersize': 'letterpaper',
117
118 # The font size ('10pt', '11pt' or '12pt').
119 #
120 # 'pointsize': '10pt',
121
122 # Additional stuff for the LaTeX preamble.
123 #
124 # 'preamble': '',
125
126 # Latex figure (float) alignment
127 #
128 # 'figure_align': 'htbp',
129 }
130
131 # Grouping the document tree into LaTeX files. List of tuples
132 # (source start file, target name, title,
133 # author, documentclass [howto, manual, or own class]).
134 latex_documents = [
135 (master_doc, 'CTFd.tex', u'CTFd Documentation',
136 u'Kevin Chung', 'manual'),
137 ]
138
139
140 # -- Options for manual page output ------------------------------------------
141
142 # One entry per manual page. List of tuples
143 # (source start file, name, description, authors, manual section).
144 man_pages = [
145 (master_doc, 'ctfd', u'CTFd Documentation',
146 [author], 1)
147 ]
148
149
150 # -- Options for Texinfo output ----------------------------------------------
151
152 # Grouping the document tree into Texinfo files. List of tuples
153 # (source start file, target name, title, author,
154 # dir menu entry, description, category)
155 texinfo_documents = [
156 (master_doc, 'CTFd', u'CTFd Documentation',
157 author, 'CTFd', 'One line description of project.',
158 'Miscellaneous'),
159 ]
160
161
162 # -- Options for Epub output -------------------------------------------------
163
164 # Bibliographic Dublin Core info.
165 epub_title = project
166
167 # The unique identifier of the text. This can be a ISBN number
168 # or the project homepage.
169 #
170 # epub_identifier = ''
171
172 # A unique identification for the text.
173 #
174 # epub_uid = ''
175
176 # A list of files that should not be packed into the epub file.
177 epub_exclude_files = ['search.html']
178
179
180 # -- Extension configuration -------------------------------------------------
181
182 # -- Options for intersphinx extension ---------------------------------------
183
184 # Example configuration for intersphinx: refer to the Python standard library.
185 intersphinx_mapping = {'https://docs.python.org/': None}
186
187 # -- Options for todo extension ----------------------------------------------
188
189 # If true, `todo` and `todoList` produce output, else they produce nothing.
190 todo_include_todos = True
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -20,13 +20,13 @@
# -- Project information -----------------------------------------------------
project = u'CTFd'
-copyright = u'2019, Kevin Chung'
+copyright = u'2019, CTFd LLC'
author = u'Kevin Chung'
# The short X.Y version
version = u''
# The full version, including alpha/beta/rc tags
-release = u'2.1.0'
+release = u'2.1.2'
# -- General configuration ---------------------------------------------------
@@ -78,7 +78,7 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_theme = 'alabaster'
+html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
@@ -154,7 +154,7 @@
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'CTFd', u'CTFd Documentation',
- author, 'CTFd', 'One line description of project.',
+ author, 'CTFd', 'A Capture The Flag framework focusing on ease of use and customizability.',
'Miscellaneous'),
]
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -20,13 +20,13 @@\n # -- Project information -----------------------------------------------------\n \n project = u'CTFd'\n-copyright = u'2019, Kevin Chung'\n+copyright = u'2019, CTFd LLC'\n author = u'Kevin Chung'\n \n # The short X.Y version\n version = u''\n # The full version, including alpha/beta/rc tags\n-release = u'2.1.0'\n+release = u'2.1.2'\n \n \n # -- General configuration ---------------------------------------------------\n@@ -78,7 +78,7 @@\n # The theme to use for HTML and HTML Help pages. See the documentation for\n # a list of builtin themes.\n #\n-html_theme = 'alabaster'\n+html_theme = 'sphinx_rtd_theme'\n \n # Theme options are theme-specific and customize the look and feel of a theme\n # further. For a list of options available for each theme, see the\n@@ -154,7 +154,7 @@\n # dir menu entry, description, category)\n texinfo_documents = [\n (master_doc, 'CTFd', u'CTFd Documentation',\n- author, 'CTFd', 'One line description of project.',\n+ author, 'CTFd', 'A Capture The Flag framework focusing on ease of use and customizability.',\n 'Miscellaneous'),\n ]\n", "issue": "Sphinx Documentation\nCTFds documentation is lacking and we need a system for it. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = u'CTFd'\ncopyright = u'2019, Kevin Chung'\nauthor = u'Kevin Chung'\n\n# The short X.Y version\nversion = u''\n# The full version, including alpha/beta/rc tags\nrelease = u'2.1.0'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.imgmath',\n 'sphinx.ext.viewcode',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = None\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'CTFddoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'CTFd.tex', u'CTFd Documentation',\n u'Kevin Chung', 'manual'),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'ctfd', u'CTFd Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'CTFd', u'CTFd Documentation',\n author, 'CTFd', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = u'CTFd'\ncopyright = u'2019, CTFd LLC'\nauthor = u'Kevin Chung'\n\n# The short X.Y version\nversion = u''\n# The full version, including alpha/beta/rc tags\nrelease = u'2.1.2'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.imgmath',\n 'sphinx.ext.viewcode',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = None\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'CTFddoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'CTFd.tex', u'CTFd Documentation',\n u'Kevin Chung', 'manual'),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'ctfd', u'CTFd Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'CTFd', u'CTFd Documentation',\n author, 'CTFd', 'A Capture The Flag framework focusing on ease of use and customizability.',\n 'Miscellaneous'),\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/conf.py"}]}
| 2,053 | 319 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.