| Column | Type and range |
|---|---|
| problem_id | stringlengths 18-22 |
| source | stringclasses, 1 value |
| task_type | stringclasses, 1 value |
| in_source_id | stringlengths 13-58 |
| prompt | stringlengths 1.1k-25.4k |
| golden_diff | stringlengths 145-5.13k |
| verification_info | stringlengths 582-39.1k |
| num_tokens | int64, 271-4.1k |
| num_tokens_diff | int64, 47-1.02k |
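
Each record below pairs a prompt (issue text plus candidate source files) with a `golden_diff` reference patch, a `verification_info` payload, and two token counts. As a rough sketch of how a dump with this schema could be loaded and inspected, assuming it is published on the Hugging Face Hub under the id shown in the `source` column and has a `train` split (neither is confirmed by this page):

```python
from datasets import load_dataset

# Repository id and split name are assumptions taken from the `source` column above.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
# Field names follow the schema table above.
print(row["problem_id"], row["in_source_id"])
print(row["num_tokens"], row["num_tokens_diff"])
print(row["golden_diff"][:300])  # start of the reference patch
```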

problem_id: gh_patches_debug_10843
source: rasdani/github-patches
task_type: git_diff
in_source_id: common-workflow-language__cwltool-653

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Second expression in concatenated valueFrom is ignored
In the CommandLineTool:
```yaml
cwlVersion: v1.0
requirements:
- class: InlineJavascriptRequirement
inputs:
- id: parameter
inputBinding:
valueFrom: $("a ")$("string")
type: string?
outputs: []
class: CommandLineTool
baseCommand: echo
```
The expression `$("a ")` is ignored on when this is run:
```
/usr/local/bin/cwltool 1.0.20180130110340
Resolved 'test.cwl' to 'file:///Users/th10/checkouts/arvados-pipeline/arvados-pipelines/cwl/test.cwl'
[job test.cwl] /private/var/folders/5p/p6z8hhy174q_ctt3pn7mx5_c000h1t/T/tmp8fqrl6ng$ echo \
string
string
[job test.cwl] completed success
{}
Final process status is success
```
CWL version: 1.0.20180130110340
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cwltool/expression.py`
Content:
```
1 from __future__ import absolute_import
2 import copy
3 import json
4 import logging
5 import re
6 from typing import Any, AnyStr, Dict, List, Text, Union
7 from .utils import docker_windows_path_adjust
8 import six
9 from six import u
10
11 from . import sandboxjs
12 from .errors import WorkflowException
13 from .utils import bytes2str_in_dicts
14
15 _logger = logging.getLogger("cwltool")
16
17
18 def jshead(engineConfig, rootvars):
19 # type: (List[Text], Dict[Text, Any]) -> Text
20
21 # make sure all the byte strings are converted
22 # to str in `rootvars` dict.
23 # TODO: need to make sure the `rootvars dict`
24 # contains no bytes type in the first place.
25 if six.PY3:
26 rootvars = bytes2str_in_dicts(rootvars) # type: ignore
27
28 return u"\n".join(engineConfig + [u"var %s = %s;" % (k, json.dumps(v, indent=4)) for k, v in rootvars.items()])
29
30
31 # decode all raw strings to unicode
32 seg_symbol = r"""\w+"""
33 seg_single = r"""\['([^']|\\')+'\]"""
34 seg_double = r"""\["([^"]|\\")+"\]"""
35 seg_index = r"""\[[0-9]+\]"""
36 segments = r"(\.%s|%s|%s|%s)" % (seg_symbol, seg_single, seg_double, seg_index)
37 segment_re = re.compile(u(segments), flags=re.UNICODE)
38 param_str = r"\((%s)%s*\)$" % (seg_symbol, segments)
39 param_re = re.compile(u(param_str), flags=re.UNICODE)
40
41 JSON = Union[Dict[Any, Any], List[Any], Text, int, float, bool, None]
42
43
44 class SubstitutionError(Exception):
45 pass
46
47
48 def scanner(scan): # type: (Text) -> List[int]
49 DEFAULT = 0
50 DOLLAR = 1
51 PAREN = 2
52 BRACE = 3
53 SINGLE_QUOTE = 4
54 DOUBLE_QUOTE = 5
55 BACKSLASH = 6
56
57 i = 0
58 stack = [DEFAULT]
59 start = 0
60 while i < len(scan):
61 state = stack[-1]
62 c = scan[i]
63
64 if state == DEFAULT:
65 if c == '$':
66 stack.append(DOLLAR)
67 elif c == '\\':
68 stack.append(BACKSLASH)
69 elif state == BACKSLASH:
70 stack.pop()
71 if stack[-1] == DEFAULT:
72 return [i - 1, i + 1]
73 elif state == DOLLAR:
74 if c == '(':
75 start = i - 1
76 stack.append(PAREN)
77 elif c == '{':
78 start = i - 1
79 stack.append(BRACE)
80 else:
81 stack.pop()
82 elif state == PAREN:
83 if c == '(':
84 stack.append(PAREN)
85 elif c == ')':
86 stack.pop()
87 if stack[-1] == DOLLAR:
88 return [start, i + 1]
89 elif c == "'":
90 stack.append(SINGLE_QUOTE)
91 elif c == '"':
92 stack.append(DOUBLE_QUOTE)
93 elif state == BRACE:
94 if c == '{':
95 stack.append(BRACE)
96 elif c == '}':
97 stack.pop()
98 if stack[-1] == DOLLAR:
99 return [start, i + 1]
100 elif c == "'":
101 stack.append(SINGLE_QUOTE)
102 elif c == '"':
103 stack.append(DOUBLE_QUOTE)
104 elif state == SINGLE_QUOTE:
105 if c == "'":
106 stack.pop()
107 elif c == '\\':
108 stack.append(BACKSLASH)
109 elif state == DOUBLE_QUOTE:
110 if c == '"':
111 stack.pop()
112 elif c == '\\':
113 stack.append(BACKSLASH)
114 i += 1
115
116 if len(stack) > 1:
117 raise SubstitutionError(
118 "Substitution error, unfinished block starting at position {}: {}".format(start, scan[start:]))
119 else:
120 return None
121
122
123 def next_seg(remain, obj): # type: (Text, Any) -> Any
124 if remain:
125 m = segment_re.match(remain)
126 key = None # type: Union[Text, int]
127 if m.group(0)[0] == '.':
128 key = m.group(0)[1:]
129 elif m.group(0)[1] in ("'", '"'):
130 key = m.group(0)[2:-2].replace("\\'", "'").replace('\\"', '"')
131
132 if key:
133 if isinstance(obj, list) and key == "length" and not remain[m.end(0):]:
134 return len(obj)
135 if not isinstance(obj, dict):
136 raise WorkflowException(" is a %s, cannot index on string '%s'" % (type(obj).__name__, key))
137 if key not in obj:
138 raise WorkflowException(" does not contain key '%s'" % key)
139 else:
140 try:
141 key = int(m.group(0)[1:-1])
142 except ValueError as v:
143 raise WorkflowException(u(str(v)))
144 if not isinstance(obj, list):
145 raise WorkflowException(" is a %s, cannot index on int '%s'" % (type(obj).__name__, key))
146 if key >= len(obj):
147 raise WorkflowException(" list index %i out of range" % key)
148 try:
149 return next_seg(remain[m.end(0):], obj[key])
150 except WorkflowException as w:
151 raise WorkflowException("%s%s" % (m.group(0), w))
152 else:
153 return obj
154
155
156 def evaluator(ex, jslib, obj, fullJS=False, timeout=None, force_docker_pull=False, debug=False, js_console=False):
157 # type: (Text, Text, Dict[Text, Any], bool, int, bool, bool, bool) -> JSON
158 m = param_re.match(ex)
159 if m:
160 if m.end(1)+1 == len(ex) and m.group(1) == "null":
161 return None
162 try:
163 return next_seg(m.group(0)[m.end(1) - m.start(0):-1], obj[m.group(1)])
164 except Exception as w:
165 raise WorkflowException("%s%s" % (m.group(1), w))
166 elif fullJS:
167 return sandboxjs.execjs(ex, jslib, timeout=timeout, force_docker_pull=force_docker_pull, debug=debug, js_console=js_console)
168 else:
169 raise sandboxjs.JavascriptException(
170 "Syntax error in parameter reference '%s' or used Javascript code without specifying InlineJavascriptRequirement.",
171 ex)
172
173
174 def interpolate(scan, rootvars,
175 timeout=None, fullJS=None, jslib="", force_docker_pull=False,
176 debug=False, js_console=False):
177 # type: (Text, Dict[Text, Any], int, bool, Union[str, Text], bool, bool, bool) -> JSON
178 scan = scan.strip()
179 parts = []
180 w = scanner(scan)
181 while w:
182 parts.append(scan[0:w[0]])
183
184 if scan[w[0]] == '$':
185 e = evaluator(scan[w[0] + 1:w[1]], jslib, rootvars, fullJS=fullJS,
186 timeout=timeout, force_docker_pull=force_docker_pull,
187 debug=debug, js_console=js_console)
188 if w[0] == 0 and w[1] == len(scan):
189 return e
190 leaf = json.dumps(e, sort_keys=True)
191 if leaf[0] == '"':
192 leaf = leaf[1:-1]
193 parts.append(leaf)
194 elif scan[w[0]] == '\\':
195 e = scan[w[1] - 1]
196 parts.append(e)
197
198 scan = scan[w[1]:]
199 w = scanner(scan)
200 parts.append(scan)
201 return ''.join(parts)
202
203
204 def do_eval(ex, jobinput, requirements, outdir, tmpdir, resources,
205 context=None, pull_image=True, timeout=None, force_docker_pull=False, debug=False, js_console=False):
206 # type: (Union[dict, AnyStr], Dict[Text, Union[Dict, List, Text]], List[Dict[Text, Any]], Text, Text, Dict[Text, Union[int, Text]], Any, bool, int, bool, bool, bool) -> Any
207
208 runtime = copy.copy(resources)
209 runtime["tmpdir"] = docker_windows_path_adjust(tmpdir)
210 runtime["outdir"] = docker_windows_path_adjust(outdir)
211
212 rootvars = {
213 u"inputs": jobinput,
214 u"self": context,
215 u"runtime": runtime}
216
217 if isinstance(ex, (str, Text)) and ("$(" in ex or "${" in ex):
218 fullJS = False
219 jslib = u""
220 for r in reversed(requirements):
221 if r["class"] == "InlineJavascriptRequirement":
222 fullJS = True
223 jslib = jshead(r.get("expressionLib", []), rootvars)
224 break
225
226 try:
227 return interpolate(ex,
228 rootvars,
229 timeout=timeout,
230 fullJS=fullJS,
231 jslib=jslib,
232 force_docker_pull=force_docker_pull,
233 debug=debug,
234 js_console=js_console)
235
236 except Exception as e:
237 raise WorkflowException("Expression evaluation error:\n%s" % e)
238 else:
239 return ex
240
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```

golden_diff:
diff --git a/cwltool/expression.py b/cwltool/expression.py
--- a/cwltool/expression.py
+++ b/cwltool/expression.py
@@ -185,7 +185,7 @@
e = evaluator(scan[w[0] + 1:w[1]], jslib, rootvars, fullJS=fullJS,
timeout=timeout, force_docker_pull=force_docker_pull,
debug=debug, js_console=js_console)
- if w[0] == 0 and w[1] == len(scan):
+ if w[0] == 0 and w[1] == len(scan) and len(parts) <= 1:
return e
leaf = json.dumps(e, sort_keys=True)
if leaf[0] == '"':

verification_info:
{"golden_diff": "diff --git a/cwltool/expression.py b/cwltool/expression.py\n--- a/cwltool/expression.py\n+++ b/cwltool/expression.py\n@@ -185,7 +185,7 @@\n e = evaluator(scan[w[0] + 1:w[1]], jslib, rootvars, fullJS=fullJS,\n timeout=timeout, force_docker_pull=force_docker_pull,\n debug=debug, js_console=js_console)\n- if w[0] == 0 and w[1] == len(scan):\n+ if w[0] == 0 and w[1] == len(scan) and len(parts) <= 1:\n return e\n leaf = json.dumps(e, sort_keys=True)\n if leaf[0] == '\"':\n", "issue": "Second expression in concatenated valueFrom is ignored\nIn the CommandLineTool:\r\n```yaml\r\ncwlVersion: v1.0\r\nrequirements:\r\n - class: InlineJavascriptRequirement\r\ninputs:\r\n - id: parameter\r\n inputBinding:\r\n valueFrom: $(\"a \")$(\"string\")\r\n type: string?\r\noutputs: []\r\nclass: CommandLineTool\r\nbaseCommand: echo\r\n```\r\n\r\nThe expression `$(\"a \")` is ignored on when this is run:\r\n\r\n```\r\n/usr/local/bin/cwltool 1.0.20180130110340\r\nResolved 'test.cwl' to 'file:///Users/th10/checkouts/arvados-pipeline/arvados-pipelines/cwl/test.cwl'\r\n[job test.cwl] /private/var/folders/5p/p6z8hhy174q_ctt3pn7mx5_c000h1t/T/tmp8fqrl6ng$ echo \\\r\n string\r\nstring\r\n[job test.cwl] completed success\r\n{}\r\nFinal process status is success\r\n```\r\n\r\nCWL version: 1.0.20180130110340\n", "before_files": [{"content": "from __future__ import absolute_import\nimport copy\nimport json\nimport logging\nimport re\nfrom typing import Any, AnyStr, Dict, List, Text, Union\nfrom .utils import docker_windows_path_adjust\nimport six\nfrom six import u\n\nfrom . import sandboxjs\nfrom .errors import WorkflowException\nfrom .utils import bytes2str_in_dicts\n\n_logger = logging.getLogger(\"cwltool\")\n\n\ndef jshead(engineConfig, rootvars):\n # type: (List[Text], Dict[Text, Any]) -> Text\n\n # make sure all the byte strings are converted\n # to str in `rootvars` dict.\n # TODO: need to make sure the `rootvars dict`\n # contains no bytes type in the first place.\n if six.PY3:\n rootvars = bytes2str_in_dicts(rootvars) # type: ignore\n\n return u\"\\n\".join(engineConfig + [u\"var %s = %s;\" % (k, json.dumps(v, indent=4)) for k, v in rootvars.items()])\n\n\n# decode all raw strings to unicode\nseg_symbol = r\"\"\"\\w+\"\"\"\nseg_single = r\"\"\"\\['([^']|\\\\')+'\\]\"\"\"\nseg_double = r\"\"\"\\[\"([^\"]|\\\\\")+\"\\]\"\"\"\nseg_index = r\"\"\"\\[[0-9]+\\]\"\"\"\nsegments = r\"(\\.%s|%s|%s|%s)\" % (seg_symbol, seg_single, seg_double, seg_index)\nsegment_re = re.compile(u(segments), flags=re.UNICODE)\nparam_str = r\"\\((%s)%s*\\)$\" % (seg_symbol, segments)\nparam_re = re.compile(u(param_str), flags=re.UNICODE)\n\nJSON = Union[Dict[Any, Any], List[Any], Text, int, float, bool, None]\n\n\nclass SubstitutionError(Exception):\n pass\n\n\ndef scanner(scan): # type: (Text) -> List[int]\n DEFAULT = 0\n DOLLAR = 1\n PAREN = 2\n BRACE = 3\n SINGLE_QUOTE = 4\n DOUBLE_QUOTE = 5\n BACKSLASH = 6\n\n i = 0\n stack = [DEFAULT]\n start = 0\n while i < len(scan):\n state = stack[-1]\n c = scan[i]\n\n if state == DEFAULT:\n if c == '$':\n stack.append(DOLLAR)\n elif c == '\\\\':\n stack.append(BACKSLASH)\n elif state == BACKSLASH:\n stack.pop()\n if stack[-1] == DEFAULT:\n return [i - 1, i + 1]\n elif state == DOLLAR:\n if c == '(':\n start = i - 1\n stack.append(PAREN)\n elif c == '{':\n start = i - 1\n stack.append(BRACE)\n else:\n stack.pop()\n elif state == PAREN:\n if c == '(':\n stack.append(PAREN)\n elif c == ')':\n stack.pop()\n if stack[-1] == DOLLAR:\n return [start, i + 1]\n elif c == \"'\":\n 
stack.append(SINGLE_QUOTE)\n elif c == '\"':\n stack.append(DOUBLE_QUOTE)\n elif state == BRACE:\n if c == '{':\n stack.append(BRACE)\n elif c == '}':\n stack.pop()\n if stack[-1] == DOLLAR:\n return [start, i + 1]\n elif c == \"'\":\n stack.append(SINGLE_QUOTE)\n elif c == '\"':\n stack.append(DOUBLE_QUOTE)\n elif state == SINGLE_QUOTE:\n if c == \"'\":\n stack.pop()\n elif c == '\\\\':\n stack.append(BACKSLASH)\n elif state == DOUBLE_QUOTE:\n if c == '\"':\n stack.pop()\n elif c == '\\\\':\n stack.append(BACKSLASH)\n i += 1\n\n if len(stack) > 1:\n raise SubstitutionError(\n \"Substitution error, unfinished block starting at position {}: {}\".format(start, scan[start:]))\n else:\n return None\n\n\ndef next_seg(remain, obj): # type: (Text, Any) -> Any\n if remain:\n m = segment_re.match(remain)\n key = None # type: Union[Text, int]\n if m.group(0)[0] == '.':\n key = m.group(0)[1:]\n elif m.group(0)[1] in (\"'\", '\"'):\n key = m.group(0)[2:-2].replace(\"\\\\'\", \"'\").replace('\\\\\"', '\"')\n\n if key:\n if isinstance(obj, list) and key == \"length\" and not remain[m.end(0):]:\n return len(obj)\n if not isinstance(obj, dict):\n raise WorkflowException(\" is a %s, cannot index on string '%s'\" % (type(obj).__name__, key))\n if key not in obj:\n raise WorkflowException(\" does not contain key '%s'\" % key)\n else:\n try:\n key = int(m.group(0)[1:-1])\n except ValueError as v:\n raise WorkflowException(u(str(v)))\n if not isinstance(obj, list):\n raise WorkflowException(\" is a %s, cannot index on int '%s'\" % (type(obj).__name__, key))\n if key >= len(obj):\n raise WorkflowException(\" list index %i out of range\" % key)\n try:\n return next_seg(remain[m.end(0):], obj[key])\n except WorkflowException as w:\n raise WorkflowException(\"%s%s\" % (m.group(0), w))\n else:\n return obj\n\n\ndef evaluator(ex, jslib, obj, fullJS=False, timeout=None, force_docker_pull=False, debug=False, js_console=False):\n # type: (Text, Text, Dict[Text, Any], bool, int, bool, bool, bool) -> JSON\n m = param_re.match(ex)\n if m:\n if m.end(1)+1 == len(ex) and m.group(1) == \"null\":\n return None\n try:\n return next_seg(m.group(0)[m.end(1) - m.start(0):-1], obj[m.group(1)])\n except Exception as w:\n raise WorkflowException(\"%s%s\" % (m.group(1), w))\n elif fullJS:\n return sandboxjs.execjs(ex, jslib, timeout=timeout, force_docker_pull=force_docker_pull, debug=debug, js_console=js_console)\n else:\n raise sandboxjs.JavascriptException(\n \"Syntax error in parameter reference '%s' or used Javascript code without specifying InlineJavascriptRequirement.\",\n ex)\n\n\ndef interpolate(scan, rootvars,\n timeout=None, fullJS=None, jslib=\"\", force_docker_pull=False,\n debug=False, js_console=False):\n # type: (Text, Dict[Text, Any], int, bool, Union[str, Text], bool, bool, bool) -> JSON\n scan = scan.strip()\n parts = []\n w = scanner(scan)\n while w:\n parts.append(scan[0:w[0]])\n\n if scan[w[0]] == '$':\n e = evaluator(scan[w[0] + 1:w[1]], jslib, rootvars, fullJS=fullJS,\n timeout=timeout, force_docker_pull=force_docker_pull,\n debug=debug, js_console=js_console)\n if w[0] == 0 and w[1] == len(scan):\n return e\n leaf = json.dumps(e, sort_keys=True)\n if leaf[0] == '\"':\n leaf = leaf[1:-1]\n parts.append(leaf)\n elif scan[w[0]] == '\\\\':\n e = scan[w[1] - 1]\n parts.append(e)\n\n scan = scan[w[1]:]\n w = scanner(scan)\n parts.append(scan)\n return ''.join(parts)\n\n\ndef do_eval(ex, jobinput, requirements, outdir, tmpdir, resources,\n context=None, pull_image=True, timeout=None, force_docker_pull=False, 
debug=False, js_console=False):\n # type: (Union[dict, AnyStr], Dict[Text, Union[Dict, List, Text]], List[Dict[Text, Any]], Text, Text, Dict[Text, Union[int, Text]], Any, bool, int, bool, bool, bool) -> Any\n\n runtime = copy.copy(resources)\n runtime[\"tmpdir\"] = docker_windows_path_adjust(tmpdir)\n runtime[\"outdir\"] = docker_windows_path_adjust(outdir)\n\n rootvars = {\n u\"inputs\": jobinput,\n u\"self\": context,\n u\"runtime\": runtime}\n\n if isinstance(ex, (str, Text)) and (\"$(\" in ex or \"${\" in ex):\n fullJS = False\n jslib = u\"\"\n for r in reversed(requirements):\n if r[\"class\"] == \"InlineJavascriptRequirement\":\n fullJS = True\n jslib = jshead(r.get(\"expressionLib\", []), rootvars)\n break\n\n try:\n return interpolate(ex,\n rootvars,\n timeout=timeout,\n fullJS=fullJS,\n jslib=jslib,\n force_docker_pull=force_docker_pull,\n debug=debug,\n js_console=js_console)\n\n except Exception as e:\n raise WorkflowException(\"Expression evaluation error:\\n%s\" % e)\n else:\n return ex\n", "path": "cwltool/expression.py"}], "after_files": [{"content": "from __future__ import absolute_import\nimport copy\nimport json\nimport logging\nimport re\nfrom typing import Any, AnyStr, Dict, List, Text, Union\nfrom .utils import docker_windows_path_adjust\nimport six\nfrom six import u\n\nfrom . import sandboxjs\nfrom .errors import WorkflowException\nfrom .utils import bytes2str_in_dicts\n\n_logger = logging.getLogger(\"cwltool\")\n\n\ndef jshead(engineConfig, rootvars):\n # type: (List[Text], Dict[Text, Any]) -> Text\n\n # make sure all the byte strings are converted\n # to str in `rootvars` dict.\n # TODO: need to make sure the `rootvars dict`\n # contains no bytes type in the first place.\n if six.PY3:\n rootvars = bytes2str_in_dicts(rootvars) # type: ignore\n\n return u\"\\n\".join(engineConfig + [u\"var %s = %s;\" % (k, json.dumps(v, indent=4)) for k, v in rootvars.items()])\n\n\n# decode all raw strings to unicode\nseg_symbol = r\"\"\"\\w+\"\"\"\nseg_single = r\"\"\"\\['([^']|\\\\')+'\\]\"\"\"\nseg_double = r\"\"\"\\[\"([^\"]|\\\\\")+\"\\]\"\"\"\nseg_index = r\"\"\"\\[[0-9]+\\]\"\"\"\nsegments = r\"(\\.%s|%s|%s|%s)\" % (seg_symbol, seg_single, seg_double, seg_index)\nsegment_re = re.compile(u(segments), flags=re.UNICODE)\nparam_str = r\"\\((%s)%s*\\)$\" % (seg_symbol, segments)\nparam_re = re.compile(u(param_str), flags=re.UNICODE)\n\nJSON = Union[Dict[Any, Any], List[Any], Text, int, float, bool, None]\n\n\nclass SubstitutionError(Exception):\n pass\n\n\ndef scanner(scan): # type: (Text) -> List[int]\n DEFAULT = 0\n DOLLAR = 1\n PAREN = 2\n BRACE = 3\n SINGLE_QUOTE = 4\n DOUBLE_QUOTE = 5\n BACKSLASH = 6\n\n i = 0\n stack = [DEFAULT]\n start = 0\n while i < len(scan):\n state = stack[-1]\n c = scan[i]\n\n if state == DEFAULT:\n if c == '$':\n stack.append(DOLLAR)\n elif c == '\\\\':\n stack.append(BACKSLASH)\n elif state == BACKSLASH:\n stack.pop()\n if stack[-1] == DEFAULT:\n return [i - 1, i + 1]\n elif state == DOLLAR:\n if c == '(':\n start = i - 1\n stack.append(PAREN)\n elif c == '{':\n start = i - 1\n stack.append(BRACE)\n else:\n stack.pop()\n elif state == PAREN:\n if c == '(':\n stack.append(PAREN)\n elif c == ')':\n stack.pop()\n if stack[-1] == DOLLAR:\n return [start, i + 1]\n elif c == \"'\":\n stack.append(SINGLE_QUOTE)\n elif c == '\"':\n stack.append(DOUBLE_QUOTE)\n elif state == BRACE:\n if c == '{':\n stack.append(BRACE)\n elif c == '}':\n stack.pop()\n if stack[-1] == DOLLAR:\n return [start, i + 1]\n elif c == \"'\":\n stack.append(SINGLE_QUOTE)\n elif c == 
'\"':\n stack.append(DOUBLE_QUOTE)\n elif state == SINGLE_QUOTE:\n if c == \"'\":\n stack.pop()\n elif c == '\\\\':\n stack.append(BACKSLASH)\n elif state == DOUBLE_QUOTE:\n if c == '\"':\n stack.pop()\n elif c == '\\\\':\n stack.append(BACKSLASH)\n i += 1\n\n if len(stack) > 1:\n raise SubstitutionError(\n \"Substitution error, unfinished block starting at position {}: {}\".format(start, scan[start:]))\n else:\n return None\n\n\ndef next_seg(remain, obj): # type: (Text, Any) -> Any\n if remain:\n m = segment_re.match(remain)\n key = None # type: Union[Text, int]\n if m.group(0)[0] == '.':\n key = m.group(0)[1:]\n elif m.group(0)[1] in (\"'\", '\"'):\n key = m.group(0)[2:-2].replace(\"\\\\'\", \"'\").replace('\\\\\"', '\"')\n\n if key:\n if isinstance(obj, list) and key == \"length\" and not remain[m.end(0):]:\n return len(obj)\n if not isinstance(obj, dict):\n raise WorkflowException(\" is a %s, cannot index on string '%s'\" % (type(obj).__name__, key))\n if key not in obj:\n raise WorkflowException(\" does not contain key '%s'\" % key)\n else:\n try:\n key = int(m.group(0)[1:-1])\n except ValueError as v:\n raise WorkflowException(u(str(v)))\n if not isinstance(obj, list):\n raise WorkflowException(\" is a %s, cannot index on int '%s'\" % (type(obj).__name__, key))\n if key >= len(obj):\n raise WorkflowException(\" list index %i out of range\" % key)\n try:\n return next_seg(remain[m.end(0):], obj[key])\n except WorkflowException as w:\n raise WorkflowException(\"%s%s\" % (m.group(0), w))\n else:\n return obj\n\n\ndef evaluator(ex, jslib, obj, fullJS=False, timeout=None, force_docker_pull=False, debug=False, js_console=False):\n # type: (Text, Text, Dict[Text, Any], bool, int, bool, bool, bool) -> JSON\n m = param_re.match(ex)\n if m:\n if m.end(1)+1 == len(ex) and m.group(1) == \"null\":\n return None\n try:\n return next_seg(m.group(0)[m.end(1) - m.start(0):-1], obj[m.group(1)])\n except Exception as w:\n raise WorkflowException(\"%s%s\" % (m.group(1), w))\n elif fullJS:\n return sandboxjs.execjs(ex, jslib, timeout=timeout, force_docker_pull=force_docker_pull, debug=debug, js_console=js_console)\n else:\n raise sandboxjs.JavascriptException(\n \"Syntax error in parameter reference '%s' or used Javascript code without specifying InlineJavascriptRequirement.\",\n ex)\n\n\ndef interpolate(scan, rootvars,\n timeout=None, fullJS=None, jslib=\"\", force_docker_pull=False,\n debug=False, js_console=False):\n # type: (Text, Dict[Text, Any], int, bool, Union[str, Text], bool, bool, bool) -> JSON\n scan = scan.strip()\n parts = []\n w = scanner(scan)\n while w:\n parts.append(scan[0:w[0]])\n\n if scan[w[0]] == '$':\n e = evaluator(scan[w[0] + 1:w[1]], jslib, rootvars, fullJS=fullJS,\n timeout=timeout, force_docker_pull=force_docker_pull,\n debug=debug, js_console=js_console)\n if w[0] == 0 and w[1] == len(scan) and len(parts) <= 1:\n return e\n leaf = json.dumps(e, sort_keys=True)\n if leaf[0] == '\"':\n leaf = leaf[1:-1]\n parts.append(leaf)\n elif scan[w[0]] == '\\\\':\n e = scan[w[1] - 1]\n parts.append(e)\n\n scan = scan[w[1]:]\n w = scanner(scan)\n parts.append(scan)\n return ''.join(parts)\n\n\ndef do_eval(ex, jobinput, requirements, outdir, tmpdir, resources,\n context=None, pull_image=True, timeout=None, force_docker_pull=False, debug=False, js_console=False):\n # type: (Union[dict, AnyStr], Dict[Text, Union[Dict, List, Text]], List[Dict[Text, Any]], Text, Text, Dict[Text, Union[int, Text]], Any, bool, int, bool, bool, bool) -> Any\n\n runtime = copy.copy(resources)\n runtime[\"tmpdir\"] 
= docker_windows_path_adjust(tmpdir)\n runtime[\"outdir\"] = docker_windows_path_adjust(outdir)\n\n rootvars = {\n u\"inputs\": jobinput,\n u\"self\": context,\n u\"runtime\": runtime}\n\n if isinstance(ex, (str, Text)) and (\"$(\" in ex or \"${\" in ex):\n fullJS = False\n jslib = u\"\"\n for r in reversed(requirements):\n if r[\"class\"] == \"InlineJavascriptRequirement\":\n fullJS = True\n jslib = jshead(r.get(\"expressionLib\", []), rootvars)\n break\n\n try:\n return interpolate(ex,\n rootvars,\n timeout=timeout,\n fullJS=fullJS,\n jslib=jslib,\n force_docker_pull=force_docker_pull,\n debug=debug,\n js_console=js_console)\n\n except Exception as e:\n raise WorkflowException(\"Expression evaluation error:\\n%s\" % e)\n else:\n return ex\n", "path": "cwltool/expression.py"}]}

num_tokens: 3,191
num_tokens_diff: 174
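
The `verification_info` cell is a JSON string; the keys visible in the record above are `golden_diff`, `issue`, `before_files`, and `after_files`, where the file lists hold objects with `content` and `path` fields. A minimal sketch for unpacking it, assuming a record has already been loaded as `row` as in the earlier sketch:

```python
import json

info = json.loads(row["verification_info"])
print(sorted(info.keys()))  # expected: ['after_files', 'before_files', 'golden_diff', 'issue']

# Paths and sizes of the files the reference patch touches, before and after the fix.
for f in info["before_files"]:
    print("before:", f["path"], len(f["content"]), "chars")
for f in info["after_files"]:
    print("after: ", f["path"], len(f["content"]), "chars")
```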

problem_id: gh_patches_debug_31821
source: rasdani/github-patches
task_type: git_diff
in_source_id: ray-project__ray-7398

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] TorchDiagGaussian doesn’t handle multiple actions correctly.
This is not a contribution.
Ray version: 0.8.2
Python version: 3.6.8
Pytorch version: 1.4
OS: Ubuntu 18.04 Docker
TorchDiagGaussian doesn’t handle multiple actions correctly. As a result, training PPO with Pytorch will crash when the action space has more than 1 action. Here’s minimal reproduction script:
```python
import gym
from gym.spaces import Box
from ray import tune
class ContinuousEnv(gym.Env):
def __init__(self, config):
self.action_space = Box(0.0, 1.0, shape=(2,))
self.observation_space = Box(0.0, 1.0, shape=(1, ))
def reset(self):
return [0.0]
def step(self, action):
return [0.0], 1.0, False, {}
tune.run(
"PPO",
config={"env": ContinuousEnv, "use_pytorch": True, "num_workers": 1})
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rllib/models/torch/torch_action_dist.py`
Content:
```
1 import numpy as np
2
3 from ray.rllib.models.action_dist import ActionDistribution
4 from ray.rllib.utils.annotations import override
5 from ray.rllib.utils import try_import_torch
6
7 torch, nn = try_import_torch()
8
9
10 class TorchDistributionWrapper(ActionDistribution):
11 """Wrapper class for torch.distributions."""
12
13 def __init_(self, inputs):
14 super().__init__(inputs)
15 # Store the last sample here.
16 self.last_sample = None
17
18 @override(ActionDistribution)
19 def logp(self, actions):
20 return self.dist.log_prob(actions)
21
22 @override(ActionDistribution)
23 def entropy(self):
24 return self.dist.entropy()
25
26 @override(ActionDistribution)
27 def kl(self, other):
28 return torch.distributions.kl.kl_divergence(self.dist, other.dist)
29
30 @override(ActionDistribution)
31 def sample(self):
32 self.last_sample = self.dist.sample()
33 return self.last_sample
34
35 @override(ActionDistribution)
36 def sampled_action_logp(self):
37 assert self.last_sample is not None
38 return self.logp(self.last_sample)
39
40
41 class TorchCategorical(TorchDistributionWrapper):
42 """Wrapper class for PyTorch Categorical distribution."""
43
44 @override(ActionDistribution)
45 def __init__(self, inputs, model):
46 super().__init__(inputs, model)
47 self.dist = torch.distributions.categorical.Categorical(logits=inputs)
48
49 @override(ActionDistribution)
50 def deterministic_sample(self):
51 return self.dist.probs.argmax(dim=1)
52
53 @staticmethod
54 @override(ActionDistribution)
55 def required_model_output_shape(action_space, model_config):
56 return action_space.n
57
58
59 class TorchDiagGaussian(TorchDistributionWrapper):
60 """Wrapper class for PyTorch Normal distribution."""
61
62 @override(ActionDistribution)
63 def __init__(self, inputs, model):
64 super().__init__(inputs, model)
65 mean, log_std = torch.chunk(inputs, 2, dim=1)
66 self.dist = torch.distributions.normal.Normal(mean, torch.exp(log_std))
67
68 @override(ActionDistribution)
69 def deterministic_sample(self):
70 return self.dist.mean
71
72 @override(TorchDistributionWrapper)
73 def logp(self, actions):
74 return TorchDistributionWrapper.logp(self, actions).sum(-1)
75
76 @staticmethod
77 @override(ActionDistribution)
78 def required_model_output_shape(action_space, model_config):
79 return np.prod(action_space.shape) * 2
80
```
Path: `rllib/agents/ppo/ppo_torch_policy.py`
Content:
```
1 import logging
2
3 import ray
4 from ray.rllib.agents.impala.vtrace_policy import BEHAVIOUR_LOGITS
5 from ray.rllib.agents.a3c.a3c_torch_policy import apply_grad_clipping
6 from ray.rllib.agents.ppo.ppo_tf_policy import postprocess_ppo_gae, \
7 setup_config
8 from ray.rllib.evaluation.postprocessing import Postprocessing
9 from ray.rllib.policy.sample_batch import SampleBatch
10 from ray.rllib.policy.policy import ACTION_LOGP
11 from ray.rllib.policy.torch_policy import EntropyCoeffSchedule, \
12 LearningRateSchedule
13 from ray.rllib.policy.torch_policy_template import build_torch_policy
14 from ray.rllib.utils.explained_variance import explained_variance
15 from ray.rllib.utils.torch_ops import sequence_mask
16 from ray.rllib.utils import try_import_torch
17
18 torch, nn = try_import_torch()
19
20 logger = logging.getLogger(__name__)
21
22
23 class PPOLoss:
24 def __init__(self,
25 dist_class,
26 model,
27 value_targets,
28 advantages,
29 actions,
30 prev_logits,
31 prev_actions_logp,
32 vf_preds,
33 curr_action_dist,
34 value_fn,
35 cur_kl_coeff,
36 valid_mask,
37 entropy_coeff=0,
38 clip_param=0.1,
39 vf_clip_param=0.1,
40 vf_loss_coeff=1.0,
41 use_gae=True):
42 """Constructs the loss for Proximal Policy Objective.
43
44 Arguments:
45 dist_class: action distribution class for logits.
46 value_targets (Placeholder): Placeholder for target values; used
47 for GAE.
48 actions (Placeholder): Placeholder for actions taken
49 from previous model evaluation.
50 advantages (Placeholder): Placeholder for calculated advantages
51 from previous model evaluation.
52 prev_logits (Placeholder): Placeholder for logits output from
53 previous model evaluation.
54 prev_actions_logp (Placeholder): Placeholder for prob output from
55 previous model evaluation.
56 vf_preds (Placeholder): Placeholder for value function output
57 from previous model evaluation.
58 curr_action_dist (ActionDistribution): ActionDistribution
59 of the current model.
60 value_fn (Tensor): Current value function output Tensor.
61 cur_kl_coeff (Variable): Variable holding the current PPO KL
62 coefficient.
63 valid_mask (Tensor): A bool mask of valid input elements (#2992).
64 entropy_coeff (float): Coefficient of the entropy regularizer.
65 clip_param (float): Clip parameter
66 vf_clip_param (float): Clip parameter for the value function
67 vf_loss_coeff (float): Coefficient of the value function loss
68 use_gae (bool): If true, use the Generalized Advantage Estimator.
69 """
70
71 def reduce_mean_valid(t):
72 return torch.mean(t * valid_mask)
73
74 prev_dist = dist_class(prev_logits, model)
75 # Make loss functions.
76 logp_ratio = torch.exp(
77 curr_action_dist.logp(actions) - prev_actions_logp)
78 action_kl = prev_dist.kl(curr_action_dist)
79 self.mean_kl = reduce_mean_valid(action_kl)
80
81 curr_entropy = curr_action_dist.entropy()
82 self.mean_entropy = reduce_mean_valid(curr_entropy)
83
84 surrogate_loss = torch.min(
85 advantages * logp_ratio,
86 advantages * torch.clamp(logp_ratio, 1 - clip_param,
87 1 + clip_param))
88 self.mean_policy_loss = reduce_mean_valid(-surrogate_loss)
89
90 if use_gae:
91 vf_loss1 = torch.pow(value_fn - value_targets, 2.0)
92 vf_clipped = vf_preds + torch.clamp(value_fn - vf_preds,
93 -vf_clip_param, vf_clip_param)
94 vf_loss2 = torch.pow(vf_clipped - value_targets, 2.0)
95 vf_loss = torch.max(vf_loss1, vf_loss2)
96 self.mean_vf_loss = reduce_mean_valid(vf_loss)
97 loss = reduce_mean_valid(
98 -surrogate_loss + cur_kl_coeff * action_kl +
99 vf_loss_coeff * vf_loss - entropy_coeff * curr_entropy)
100 else:
101 self.mean_vf_loss = 0.0
102 loss = reduce_mean_valid(-surrogate_loss +
103 cur_kl_coeff * action_kl -
104 entropy_coeff * curr_entropy)
105 self.loss = loss
106
107
108 def ppo_surrogate_loss(policy, model, dist_class, train_batch):
109 logits, state = model.from_batch(train_batch)
110 action_dist = dist_class(logits, model)
111
112 if state:
113 max_seq_len = torch.max(train_batch["seq_lens"])
114 mask = sequence_mask(train_batch["seq_lens"], max_seq_len)
115 mask = torch.reshape(mask, [-1])
116 else:
117 mask = torch.ones_like(
118 train_batch[Postprocessing.ADVANTAGES], dtype=torch.bool)
119
120 policy.loss_obj = PPOLoss(
121 dist_class,
122 model,
123 train_batch[Postprocessing.VALUE_TARGETS],
124 train_batch[Postprocessing.ADVANTAGES],
125 train_batch[SampleBatch.ACTIONS],
126 train_batch[BEHAVIOUR_LOGITS],
127 train_batch[ACTION_LOGP],
128 train_batch[SampleBatch.VF_PREDS],
129 action_dist,
130 model.value_function(),
131 policy.kl_coeff,
132 mask,
133 entropy_coeff=policy.entropy_coeff,
134 clip_param=policy.config["clip_param"],
135 vf_clip_param=policy.config["vf_clip_param"],
136 vf_loss_coeff=policy.config["vf_loss_coeff"],
137 use_gae=policy.config["use_gae"],
138 )
139
140 return policy.loss_obj.loss
141
142
143 def kl_and_loss_stats(policy, train_batch):
144 return {
145 "cur_kl_coeff": policy.kl_coeff,
146 "cur_lr": policy.cur_lr,
147 "total_loss": policy.loss_obj.loss,
148 "policy_loss": policy.loss_obj.mean_policy_loss,
149 "vf_loss": policy.loss_obj.mean_vf_loss,
150 "vf_explained_var": explained_variance(
151 train_batch[Postprocessing.VALUE_TARGETS],
152 policy.model.value_function(),
153 framework="torch"),
154 "kl": policy.loss_obj.mean_kl,
155 "entropy": policy.loss_obj.mean_entropy,
156 "entropy_coeff": policy.entropy_coeff,
157 }
158
159
160 def vf_preds_and_logits_fetches(policy, input_dict, state_batches, model,
161 action_dist):
162 """Adds value function and logits outputs to experience train_batches."""
163 return {
164 SampleBatch.VF_PREDS: policy.model.value_function(),
165 BEHAVIOUR_LOGITS: policy.model.last_output(),
166 }
167
168
169 class KLCoeffMixin:
170 def __init__(self, config):
171 # KL Coefficient.
172 self.kl_coeff = config["kl_coeff"]
173 self.kl_target = config["kl_target"]
174
175 def update_kl(self, sampled_kl):
176 if sampled_kl > 2.0 * self.kl_target:
177 self.kl_coeff *= 1.5
178 elif sampled_kl < 0.5 * self.kl_target:
179 self.kl_coeff *= 0.5
180 return self.kl_coeff
181
182
183 class ValueNetworkMixin:
184 def __init__(self, obs_space, action_space, config):
185 if config["use_gae"]:
186
187 def value(ob, prev_action, prev_reward, *state):
188 model_out, _ = self.model({
189 SampleBatch.CUR_OBS: torch.Tensor([ob]).to(self.device),
190 SampleBatch.PREV_ACTIONS: torch.Tensor([prev_action]).to(
191 self.device),
192 SampleBatch.PREV_REWARDS: torch.Tensor([prev_reward]).to(
193 self.device),
194 "is_training": False,
195 }, [torch.Tensor([s]).to(self.device) for s in state],
196 torch.Tensor([1]).to(self.device))
197 return self.model.value_function()[0]
198
199 else:
200
201 def value(ob, prev_action, prev_reward, *state):
202 return 0.0
203
204 self._value = value
205
206
207 def setup_mixins(policy, obs_space, action_space, config):
208 ValueNetworkMixin.__init__(policy, obs_space, action_space, config)
209 KLCoeffMixin.__init__(policy, config)
210 EntropyCoeffSchedule.__init__(policy, config["entropy_coeff"],
211 config["entropy_coeff_schedule"])
212 LearningRateSchedule.__init__(policy, config["lr"], config["lr_schedule"])
213
214
215 PPOTorchPolicy = build_torch_policy(
216 name="PPOTorchPolicy",
217 get_default_config=lambda: ray.rllib.agents.ppo.ppo.DEFAULT_CONFIG,
218 loss_fn=ppo_surrogate_loss,
219 stats_fn=kl_and_loss_stats,
220 extra_action_out_fn=vf_preds_and_logits_fetches,
221 postprocess_fn=postprocess_ppo_gae,
222 extra_grad_process_fn=apply_grad_clipping,
223 before_init=setup_config,
224 after_init=setup_mixins,
225 mixins=[KLCoeffMixin, ValueNetworkMixin])
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```

golden_diff:
diff --git a/rllib/agents/ppo/ppo_torch_policy.py b/rllib/agents/ppo/ppo_torch_policy.py
--- a/rllib/agents/ppo/ppo_torch_policy.py
+++ b/rllib/agents/ppo/ppo_torch_policy.py
@@ -67,9 +67,15 @@
vf_loss_coeff (float): Coefficient of the value function loss
use_gae (bool): If true, use the Generalized Advantage Estimator.
"""
+ if valid_mask is not None:
- def reduce_mean_valid(t):
- return torch.mean(t * valid_mask)
+ def reduce_mean_valid(t):
+ return torch.mean(t * valid_mask)
+
+ else:
+
+ def reduce_mean_valid(t):
+ return torch.mean(t)
prev_dist = dist_class(prev_logits, model)
# Make loss functions.
@@ -109,13 +115,11 @@
logits, state = model.from_batch(train_batch)
action_dist = dist_class(logits, model)
+ mask = None
if state:
max_seq_len = torch.max(train_batch["seq_lens"])
mask = sequence_mask(train_batch["seq_lens"], max_seq_len)
mask = torch.reshape(mask, [-1])
- else:
- mask = torch.ones_like(
- train_batch[Postprocessing.ADVANTAGES], dtype=torch.bool)
policy.loss_obj = PPOLoss(
dist_class,
diff --git a/rllib/models/torch/torch_action_dist.py b/rllib/models/torch/torch_action_dist.py
--- a/rllib/models/torch/torch_action_dist.py
+++ b/rllib/models/torch/torch_action_dist.py
@@ -71,7 +71,15 @@
@override(TorchDistributionWrapper)
def logp(self, actions):
- return TorchDistributionWrapper.logp(self, actions).sum(-1)
+ return super().logp(actions).sum(-1)
+
+ @override(TorchDistributionWrapper)
+ def entropy(self):
+ return super().entropy().sum(-1)
+
+ @override(TorchDistributionWrapper)
+ def kl(self, other):
+ return super().kl(other).sum(-1)
@staticmethod
@override(ActionDistribution)

verification_info:
{"golden_diff": "diff --git a/rllib/agents/ppo/ppo_torch_policy.py b/rllib/agents/ppo/ppo_torch_policy.py\n--- a/rllib/agents/ppo/ppo_torch_policy.py\n+++ b/rllib/agents/ppo/ppo_torch_policy.py\n@@ -67,9 +67,15 @@\n vf_loss_coeff (float): Coefficient of the value function loss\n use_gae (bool): If true, use the Generalized Advantage Estimator.\n \"\"\"\n+ if valid_mask is not None:\n \n- def reduce_mean_valid(t):\n- return torch.mean(t * valid_mask)\n+ def reduce_mean_valid(t):\n+ return torch.mean(t * valid_mask)\n+\n+ else:\n+\n+ def reduce_mean_valid(t):\n+ return torch.mean(t)\n \n prev_dist = dist_class(prev_logits, model)\n # Make loss functions.\n@@ -109,13 +115,11 @@\n logits, state = model.from_batch(train_batch)\n action_dist = dist_class(logits, model)\n \n+ mask = None\n if state:\n max_seq_len = torch.max(train_batch[\"seq_lens\"])\n mask = sequence_mask(train_batch[\"seq_lens\"], max_seq_len)\n mask = torch.reshape(mask, [-1])\n- else:\n- mask = torch.ones_like(\n- train_batch[Postprocessing.ADVANTAGES], dtype=torch.bool)\n \n policy.loss_obj = PPOLoss(\n dist_class,\ndiff --git a/rllib/models/torch/torch_action_dist.py b/rllib/models/torch/torch_action_dist.py\n--- a/rllib/models/torch/torch_action_dist.py\n+++ b/rllib/models/torch/torch_action_dist.py\n@@ -71,7 +71,15 @@\n \n @override(TorchDistributionWrapper)\n def logp(self, actions):\n- return TorchDistributionWrapper.logp(self, actions).sum(-1)\n+ return super().logp(actions).sum(-1)\n+\n+ @override(TorchDistributionWrapper)\n+ def entropy(self):\n+ return super().entropy().sum(-1)\n+\n+ @override(TorchDistributionWrapper)\n+ def kl(self, other):\n+ return super().kl(other).sum(-1)\n \n @staticmethod\n @override(ActionDistribution)\n", "issue": "[rllib] TorchDiagGaussian doesn\u2019t handle multiple actions correctly.\nThis is not a contribution.\r\n\r\nRay version: 0.8.2\r\nPython version: 3.6.8\r\nPytorch version: 1.4\r\nOS: Ubuntu 18.04 Docker\r\n\r\nTorchDiagGaussian doesn\u2019t handle multiple actions correctly. As a result, training PPO with Pytorch will crash when the action space has more than 1 action. 
Here\u2019s minimal reproduction script:\r\n\r\n```python\r\nimport gym\r\nfrom gym.spaces import Box\r\nfrom ray import tune\r\n\r\nclass ContinuousEnv(gym.Env):\r\n def __init__(self, config):\r\n self.action_space = Box(0.0, 1.0, shape=(2,))\r\n self.observation_space = Box(0.0, 1.0, shape=(1, ))\r\n\r\n def reset(self):\r\n return [0.0]\r\n\r\n def step(self, action):\r\n return [0.0], 1.0, False, {}\r\n\r\ntune.run(\r\n \"PPO\",\r\n config={\"env\": ContinuousEnv, \"use_pytorch\": True, \"num_workers\": 1})\r\n```\r\n\n", "before_files": [{"content": "import numpy as np\n\nfrom ray.rllib.models.action_dist import ActionDistribution\nfrom ray.rllib.utils.annotations import override\nfrom ray.rllib.utils import try_import_torch\n\ntorch, nn = try_import_torch()\n\n\nclass TorchDistributionWrapper(ActionDistribution):\n \"\"\"Wrapper class for torch.distributions.\"\"\"\n\n def __init_(self, inputs):\n super().__init__(inputs)\n # Store the last sample here.\n self.last_sample = None\n\n @override(ActionDistribution)\n def logp(self, actions):\n return self.dist.log_prob(actions)\n\n @override(ActionDistribution)\n def entropy(self):\n return self.dist.entropy()\n\n @override(ActionDistribution)\n def kl(self, other):\n return torch.distributions.kl.kl_divergence(self.dist, other.dist)\n\n @override(ActionDistribution)\n def sample(self):\n self.last_sample = self.dist.sample()\n return self.last_sample\n\n @override(ActionDistribution)\n def sampled_action_logp(self):\n assert self.last_sample is not None\n return self.logp(self.last_sample)\n\n\nclass TorchCategorical(TorchDistributionWrapper):\n \"\"\"Wrapper class for PyTorch Categorical distribution.\"\"\"\n\n @override(ActionDistribution)\n def __init__(self, inputs, model):\n super().__init__(inputs, model)\n self.dist = torch.distributions.categorical.Categorical(logits=inputs)\n\n @override(ActionDistribution)\n def deterministic_sample(self):\n return self.dist.probs.argmax(dim=1)\n\n @staticmethod\n @override(ActionDistribution)\n def required_model_output_shape(action_space, model_config):\n return action_space.n\n\n\nclass TorchDiagGaussian(TorchDistributionWrapper):\n \"\"\"Wrapper class for PyTorch Normal distribution.\"\"\"\n\n @override(ActionDistribution)\n def __init__(self, inputs, model):\n super().__init__(inputs, model)\n mean, log_std = torch.chunk(inputs, 2, dim=1)\n self.dist = torch.distributions.normal.Normal(mean, torch.exp(log_std))\n\n @override(ActionDistribution)\n def deterministic_sample(self):\n return self.dist.mean\n\n @override(TorchDistributionWrapper)\n def logp(self, actions):\n return TorchDistributionWrapper.logp(self, actions).sum(-1)\n\n @staticmethod\n @override(ActionDistribution)\n def required_model_output_shape(action_space, model_config):\n return np.prod(action_space.shape) * 2\n", "path": "rllib/models/torch/torch_action_dist.py"}, {"content": "import logging\n\nimport ray\nfrom ray.rllib.agents.impala.vtrace_policy import BEHAVIOUR_LOGITS\nfrom ray.rllib.agents.a3c.a3c_torch_policy import apply_grad_clipping\nfrom ray.rllib.agents.ppo.ppo_tf_policy import postprocess_ppo_gae, \\\n setup_config\nfrom ray.rllib.evaluation.postprocessing import Postprocessing\nfrom ray.rllib.policy.sample_batch import SampleBatch\nfrom ray.rllib.policy.policy import ACTION_LOGP\nfrom ray.rllib.policy.torch_policy import EntropyCoeffSchedule, \\\n LearningRateSchedule\nfrom ray.rllib.policy.torch_policy_template import build_torch_policy\nfrom ray.rllib.utils.explained_variance import 
explained_variance\nfrom ray.rllib.utils.torch_ops import sequence_mask\nfrom ray.rllib.utils import try_import_torch\n\ntorch, nn = try_import_torch()\n\nlogger = logging.getLogger(__name__)\n\n\nclass PPOLoss:\n def __init__(self,\n dist_class,\n model,\n value_targets,\n advantages,\n actions,\n prev_logits,\n prev_actions_logp,\n vf_preds,\n curr_action_dist,\n value_fn,\n cur_kl_coeff,\n valid_mask,\n entropy_coeff=0,\n clip_param=0.1,\n vf_clip_param=0.1,\n vf_loss_coeff=1.0,\n use_gae=True):\n \"\"\"Constructs the loss for Proximal Policy Objective.\n\n Arguments:\n dist_class: action distribution class for logits.\n value_targets (Placeholder): Placeholder for target values; used\n for GAE.\n actions (Placeholder): Placeholder for actions taken\n from previous model evaluation.\n advantages (Placeholder): Placeholder for calculated advantages\n from previous model evaluation.\n prev_logits (Placeholder): Placeholder for logits output from\n previous model evaluation.\n prev_actions_logp (Placeholder): Placeholder for prob output from\n previous model evaluation.\n vf_preds (Placeholder): Placeholder for value function output\n from previous model evaluation.\n curr_action_dist (ActionDistribution): ActionDistribution\n of the current model.\n value_fn (Tensor): Current value function output Tensor.\n cur_kl_coeff (Variable): Variable holding the current PPO KL\n coefficient.\n valid_mask (Tensor): A bool mask of valid input elements (#2992).\n entropy_coeff (float): Coefficient of the entropy regularizer.\n clip_param (float): Clip parameter\n vf_clip_param (float): Clip parameter for the value function\n vf_loss_coeff (float): Coefficient of the value function loss\n use_gae (bool): If true, use the Generalized Advantage Estimator.\n \"\"\"\n\n def reduce_mean_valid(t):\n return torch.mean(t * valid_mask)\n\n prev_dist = dist_class(prev_logits, model)\n # Make loss functions.\n logp_ratio = torch.exp(\n curr_action_dist.logp(actions) - prev_actions_logp)\n action_kl = prev_dist.kl(curr_action_dist)\n self.mean_kl = reduce_mean_valid(action_kl)\n\n curr_entropy = curr_action_dist.entropy()\n self.mean_entropy = reduce_mean_valid(curr_entropy)\n\n surrogate_loss = torch.min(\n advantages * logp_ratio,\n advantages * torch.clamp(logp_ratio, 1 - clip_param,\n 1 + clip_param))\n self.mean_policy_loss = reduce_mean_valid(-surrogate_loss)\n\n if use_gae:\n vf_loss1 = torch.pow(value_fn - value_targets, 2.0)\n vf_clipped = vf_preds + torch.clamp(value_fn - vf_preds,\n -vf_clip_param, vf_clip_param)\n vf_loss2 = torch.pow(vf_clipped - value_targets, 2.0)\n vf_loss = torch.max(vf_loss1, vf_loss2)\n self.mean_vf_loss = reduce_mean_valid(vf_loss)\n loss = reduce_mean_valid(\n -surrogate_loss + cur_kl_coeff * action_kl +\n vf_loss_coeff * vf_loss - entropy_coeff * curr_entropy)\n else:\n self.mean_vf_loss = 0.0\n loss = reduce_mean_valid(-surrogate_loss +\n cur_kl_coeff * action_kl -\n entropy_coeff * curr_entropy)\n self.loss = loss\n\n\ndef ppo_surrogate_loss(policy, model, dist_class, train_batch):\n logits, state = model.from_batch(train_batch)\n action_dist = dist_class(logits, model)\n\n if state:\n max_seq_len = torch.max(train_batch[\"seq_lens\"])\n mask = sequence_mask(train_batch[\"seq_lens\"], max_seq_len)\n mask = torch.reshape(mask, [-1])\n else:\n mask = torch.ones_like(\n train_batch[Postprocessing.ADVANTAGES], dtype=torch.bool)\n\n policy.loss_obj = PPOLoss(\n dist_class,\n model,\n train_batch[Postprocessing.VALUE_TARGETS],\n train_batch[Postprocessing.ADVANTAGES],\n 
train_batch[SampleBatch.ACTIONS],\n train_batch[BEHAVIOUR_LOGITS],\n train_batch[ACTION_LOGP],\n train_batch[SampleBatch.VF_PREDS],\n action_dist,\n model.value_function(),\n policy.kl_coeff,\n mask,\n entropy_coeff=policy.entropy_coeff,\n clip_param=policy.config[\"clip_param\"],\n vf_clip_param=policy.config[\"vf_clip_param\"],\n vf_loss_coeff=policy.config[\"vf_loss_coeff\"],\n use_gae=policy.config[\"use_gae\"],\n )\n\n return policy.loss_obj.loss\n\n\ndef kl_and_loss_stats(policy, train_batch):\n return {\n \"cur_kl_coeff\": policy.kl_coeff,\n \"cur_lr\": policy.cur_lr,\n \"total_loss\": policy.loss_obj.loss,\n \"policy_loss\": policy.loss_obj.mean_policy_loss,\n \"vf_loss\": policy.loss_obj.mean_vf_loss,\n \"vf_explained_var\": explained_variance(\n train_batch[Postprocessing.VALUE_TARGETS],\n policy.model.value_function(),\n framework=\"torch\"),\n \"kl\": policy.loss_obj.mean_kl,\n \"entropy\": policy.loss_obj.mean_entropy,\n \"entropy_coeff\": policy.entropy_coeff,\n }\n\n\ndef vf_preds_and_logits_fetches(policy, input_dict, state_batches, model,\n action_dist):\n \"\"\"Adds value function and logits outputs to experience train_batches.\"\"\"\n return {\n SampleBatch.VF_PREDS: policy.model.value_function(),\n BEHAVIOUR_LOGITS: policy.model.last_output(),\n }\n\n\nclass KLCoeffMixin:\n def __init__(self, config):\n # KL Coefficient.\n self.kl_coeff = config[\"kl_coeff\"]\n self.kl_target = config[\"kl_target\"]\n\n def update_kl(self, sampled_kl):\n if sampled_kl > 2.0 * self.kl_target:\n self.kl_coeff *= 1.5\n elif sampled_kl < 0.5 * self.kl_target:\n self.kl_coeff *= 0.5\n return self.kl_coeff\n\n\nclass ValueNetworkMixin:\n def __init__(self, obs_space, action_space, config):\n if config[\"use_gae\"]:\n\n def value(ob, prev_action, prev_reward, *state):\n model_out, _ = self.model({\n SampleBatch.CUR_OBS: torch.Tensor([ob]).to(self.device),\n SampleBatch.PREV_ACTIONS: torch.Tensor([prev_action]).to(\n self.device),\n SampleBatch.PREV_REWARDS: torch.Tensor([prev_reward]).to(\n self.device),\n \"is_training\": False,\n }, [torch.Tensor([s]).to(self.device) for s in state],\n torch.Tensor([1]).to(self.device))\n return self.model.value_function()[0]\n\n else:\n\n def value(ob, prev_action, prev_reward, *state):\n return 0.0\n\n self._value = value\n\n\ndef setup_mixins(policy, obs_space, action_space, config):\n ValueNetworkMixin.__init__(policy, obs_space, action_space, config)\n KLCoeffMixin.__init__(policy, config)\n EntropyCoeffSchedule.__init__(policy, config[\"entropy_coeff\"],\n config[\"entropy_coeff_schedule\"])\n LearningRateSchedule.__init__(policy, config[\"lr\"], config[\"lr_schedule\"])\n\n\nPPOTorchPolicy = build_torch_policy(\n name=\"PPOTorchPolicy\",\n get_default_config=lambda: ray.rllib.agents.ppo.ppo.DEFAULT_CONFIG,\n loss_fn=ppo_surrogate_loss,\n stats_fn=kl_and_loss_stats,\n extra_action_out_fn=vf_preds_and_logits_fetches,\n postprocess_fn=postprocess_ppo_gae,\n extra_grad_process_fn=apply_grad_clipping,\n before_init=setup_config,\n after_init=setup_mixins,\n mixins=[KLCoeffMixin, ValueNetworkMixin])\n", "path": "rllib/agents/ppo/ppo_torch_policy.py"}], "after_files": [{"content": "import numpy as np\n\nfrom ray.rllib.models.action_dist import ActionDistribution\nfrom ray.rllib.utils.annotations import override\nfrom ray.rllib.utils import try_import_torch\n\ntorch, nn = try_import_torch()\n\n\nclass TorchDistributionWrapper(ActionDistribution):\n \"\"\"Wrapper class for torch.distributions.\"\"\"\n\n def __init_(self, inputs):\n super().__init__(inputs)\n # 
Store the last sample here.\n self.last_sample = None\n\n @override(ActionDistribution)\n def logp(self, actions):\n return self.dist.log_prob(actions)\n\n @override(ActionDistribution)\n def entropy(self):\n return self.dist.entropy()\n\n @override(ActionDistribution)\n def kl(self, other):\n return torch.distributions.kl.kl_divergence(self.dist, other.dist)\n\n @override(ActionDistribution)\n def sample(self):\n self.last_sample = self.dist.sample()\n return self.last_sample\n\n @override(ActionDistribution)\n def sampled_action_logp(self):\n assert self.last_sample is not None\n return self.logp(self.last_sample)\n\n\nclass TorchCategorical(TorchDistributionWrapper):\n \"\"\"Wrapper class for PyTorch Categorical distribution.\"\"\"\n\n @override(ActionDistribution)\n def __init__(self, inputs, model):\n super().__init__(inputs, model)\n self.dist = torch.distributions.categorical.Categorical(logits=inputs)\n\n @override(ActionDistribution)\n def deterministic_sample(self):\n return self.dist.probs.argmax(dim=1)\n\n @staticmethod\n @override(ActionDistribution)\n def required_model_output_shape(action_space, model_config):\n return action_space.n\n\n\nclass TorchDiagGaussian(TorchDistributionWrapper):\n \"\"\"Wrapper class for PyTorch Normal distribution.\"\"\"\n\n @override(ActionDistribution)\n def __init__(self, inputs, model):\n super().__init__(inputs, model)\n mean, log_std = torch.chunk(inputs, 2, dim=1)\n self.dist = torch.distributions.normal.Normal(mean, torch.exp(log_std))\n\n @override(ActionDistribution)\n def deterministic_sample(self):\n return self.dist.mean\n\n @override(TorchDistributionWrapper)\n def logp(self, actions):\n return super().logp(actions).sum(-1)\n\n @override(TorchDistributionWrapper)\n def entropy(self):\n return super().entropy().sum(-1)\n\n @override(TorchDistributionWrapper)\n def kl(self, other):\n return super().kl(other).sum(-1)\n\n @staticmethod\n @override(ActionDistribution)\n def required_model_output_shape(action_space, model_config):\n return np.prod(action_space.shape) * 2\n", "path": "rllib/models/torch/torch_action_dist.py"}, {"content": "import logging\n\nimport ray\nfrom ray.rllib.agents.impala.vtrace_policy import BEHAVIOUR_LOGITS\nfrom ray.rllib.agents.a3c.a3c_torch_policy import apply_grad_clipping\nfrom ray.rllib.agents.ppo.ppo_tf_policy import postprocess_ppo_gae, \\\n setup_config\nfrom ray.rllib.evaluation.postprocessing import Postprocessing\nfrom ray.rllib.policy.sample_batch import SampleBatch\nfrom ray.rllib.policy.policy import ACTION_LOGP\nfrom ray.rllib.policy.torch_policy import EntropyCoeffSchedule, \\\n LearningRateSchedule\nfrom ray.rllib.policy.torch_policy_template import build_torch_policy\nfrom ray.rllib.utils.explained_variance import explained_variance\nfrom ray.rllib.utils.torch_ops import sequence_mask\nfrom ray.rllib.utils import try_import_torch\n\ntorch, nn = try_import_torch()\n\nlogger = logging.getLogger(__name__)\n\n\nclass PPOLoss:\n def __init__(self,\n dist_class,\n model,\n value_targets,\n advantages,\n actions,\n prev_logits,\n prev_actions_logp,\n vf_preds,\n curr_action_dist,\n value_fn,\n cur_kl_coeff,\n valid_mask,\n entropy_coeff=0,\n clip_param=0.1,\n vf_clip_param=0.1,\n vf_loss_coeff=1.0,\n use_gae=True):\n \"\"\"Constructs the loss for Proximal Policy Objective.\n\n Arguments:\n dist_class: action distribution class for logits.\n value_targets (Placeholder): Placeholder for target values; used\n for GAE.\n actions (Placeholder): Placeholder for actions taken\n from previous model 
evaluation.\n advantages (Placeholder): Placeholder for calculated advantages\n from previous model evaluation.\n prev_logits (Placeholder): Placeholder for logits output from\n previous model evaluation.\n prev_actions_logp (Placeholder): Placeholder for prob output from\n previous model evaluation.\n vf_preds (Placeholder): Placeholder for value function output\n from previous model evaluation.\n curr_action_dist (ActionDistribution): ActionDistribution\n of the current model.\n value_fn (Tensor): Current value function output Tensor.\n cur_kl_coeff (Variable): Variable holding the current PPO KL\n coefficient.\n valid_mask (Tensor): A bool mask of valid input elements (#2992).\n entropy_coeff (float): Coefficient of the entropy regularizer.\n clip_param (float): Clip parameter\n vf_clip_param (float): Clip parameter for the value function\n vf_loss_coeff (float): Coefficient of the value function loss\n use_gae (bool): If true, use the Generalized Advantage Estimator.\n \"\"\"\n if valid_mask is not None:\n\n def reduce_mean_valid(t):\n return torch.mean(t * valid_mask)\n\n else:\n\n def reduce_mean_valid(t):\n return torch.mean(t)\n\n prev_dist = dist_class(prev_logits, model)\n # Make loss functions.\n logp_ratio = torch.exp(\n curr_action_dist.logp(actions) - prev_actions_logp)\n action_kl = prev_dist.kl(curr_action_dist)\n self.mean_kl = reduce_mean_valid(action_kl)\n\n curr_entropy = curr_action_dist.entropy()\n self.mean_entropy = reduce_mean_valid(curr_entropy)\n\n surrogate_loss = torch.min(\n advantages * logp_ratio,\n advantages * torch.clamp(logp_ratio, 1 - clip_param,\n 1 + clip_param))\n self.mean_policy_loss = reduce_mean_valid(-surrogate_loss)\n\n if use_gae:\n vf_loss1 = torch.pow(value_fn - value_targets, 2.0)\n vf_clipped = vf_preds + torch.clamp(value_fn - vf_preds,\n -vf_clip_param, vf_clip_param)\n vf_loss2 = torch.pow(vf_clipped - value_targets, 2.0)\n vf_loss = torch.max(vf_loss1, vf_loss2)\n self.mean_vf_loss = reduce_mean_valid(vf_loss)\n loss = reduce_mean_valid(\n -surrogate_loss + cur_kl_coeff * action_kl +\n vf_loss_coeff * vf_loss - entropy_coeff * curr_entropy)\n else:\n self.mean_vf_loss = 0.0\n loss = reduce_mean_valid(-surrogate_loss +\n cur_kl_coeff * action_kl -\n entropy_coeff * curr_entropy)\n self.loss = loss\n\n\ndef ppo_surrogate_loss(policy, model, dist_class, train_batch):\n logits, state = model.from_batch(train_batch)\n action_dist = dist_class(logits, model)\n\n mask = None\n if state:\n max_seq_len = torch.max(train_batch[\"seq_lens\"])\n mask = sequence_mask(train_batch[\"seq_lens\"], max_seq_len)\n mask = torch.reshape(mask, [-1])\n\n policy.loss_obj = PPOLoss(\n dist_class,\n model,\n train_batch[Postprocessing.VALUE_TARGETS],\n train_batch[Postprocessing.ADVANTAGES],\n train_batch[SampleBatch.ACTIONS],\n train_batch[BEHAVIOUR_LOGITS],\n train_batch[ACTION_LOGP],\n train_batch[SampleBatch.VF_PREDS],\n action_dist,\n model.value_function(),\n policy.kl_coeff,\n mask,\n entropy_coeff=policy.entropy_coeff,\n clip_param=policy.config[\"clip_param\"],\n vf_clip_param=policy.config[\"vf_clip_param\"],\n vf_loss_coeff=policy.config[\"vf_loss_coeff\"],\n use_gae=policy.config[\"use_gae\"],\n )\n\n return policy.loss_obj.loss\n\n\ndef kl_and_loss_stats(policy, train_batch):\n return {\n \"cur_kl_coeff\": policy.kl_coeff,\n \"cur_lr\": policy.cur_lr,\n \"total_loss\": policy.loss_obj.loss,\n \"policy_loss\": policy.loss_obj.mean_policy_loss,\n \"vf_loss\": policy.loss_obj.mean_vf_loss,\n \"vf_explained_var\": explained_variance(\n 
train_batch[Postprocessing.VALUE_TARGETS],\n policy.model.value_function(),\n framework=\"torch\"),\n \"kl\": policy.loss_obj.mean_kl,\n \"entropy\": policy.loss_obj.mean_entropy,\n \"entropy_coeff\": policy.entropy_coeff,\n }\n\n\ndef vf_preds_and_logits_fetches(policy, input_dict, state_batches, model,\n action_dist):\n \"\"\"Adds value function and logits outputs to experience train_batches.\"\"\"\n return {\n SampleBatch.VF_PREDS: policy.model.value_function(),\n BEHAVIOUR_LOGITS: policy.model.last_output(),\n }\n\n\nclass KLCoeffMixin:\n def __init__(self, config):\n # KL Coefficient.\n self.kl_coeff = config[\"kl_coeff\"]\n self.kl_target = config[\"kl_target\"]\n\n def update_kl(self, sampled_kl):\n if sampled_kl > 2.0 * self.kl_target:\n self.kl_coeff *= 1.5\n elif sampled_kl < 0.5 * self.kl_target:\n self.kl_coeff *= 0.5\n return self.kl_coeff\n\n\nclass ValueNetworkMixin:\n def __init__(self, obs_space, action_space, config):\n if config[\"use_gae\"]:\n\n def value(ob, prev_action, prev_reward, *state):\n model_out, _ = self.model({\n SampleBatch.CUR_OBS: torch.Tensor([ob]).to(self.device),\n SampleBatch.PREV_ACTIONS: torch.Tensor([prev_action]).to(\n self.device),\n SampleBatch.PREV_REWARDS: torch.Tensor([prev_reward]).to(\n self.device),\n \"is_training\": False,\n }, [torch.Tensor([s]).to(self.device) for s in state],\n torch.Tensor([1]).to(self.device))\n return self.model.value_function()[0]\n\n else:\n\n def value(ob, prev_action, prev_reward, *state):\n return 0.0\n\n self._value = value\n\n\ndef setup_mixins(policy, obs_space, action_space, config):\n ValueNetworkMixin.__init__(policy, obs_space, action_space, config)\n KLCoeffMixin.__init__(policy, config)\n EntropyCoeffSchedule.__init__(policy, config[\"entropy_coeff\"],\n config[\"entropy_coeff_schedule\"])\n LearningRateSchedule.__init__(policy, config[\"lr\"], config[\"lr_schedule\"])\n\n\nPPOTorchPolicy = build_torch_policy(\n name=\"PPOTorchPolicy\",\n get_default_config=lambda: ray.rllib.agents.ppo.ppo.DEFAULT_CONFIG,\n loss_fn=ppo_surrogate_loss,\n stats_fn=kl_and_loss_stats,\n extra_action_out_fn=vf_preds_and_logits_fetches,\n postprocess_fn=postprocess_ppo_gae,\n extra_grad_process_fn=apply_grad_clipping,\n before_init=setup_config,\n after_init=setup_mixins,\n mixins=[KLCoeffMixin, ValueNetworkMixin])\n", "path": "rllib/agents/ppo/ppo_torch_policy.py"}]}
| 3,663 | 500 |
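The escaped `verification_info` above ends with RLlib's PPO torch policy, whose loss is built around the clipped surrogate objective. As a quick reference for readers skimming the escaped source, here is a minimal NumPy-only sketch of just that formula; the function and variable names are made up for illustration and this is not RLlib's implementation.

```python
import numpy as np

def clipped_surrogate_loss(logp_new, logp_old, advantages, clip_param=0.3):
    """Sketch of PPO's clipped surrogate objective (illustrative names only)."""
    ratio = np.exp(logp_new - logp_old)          # importance-sampling ratio
    unclipped = advantages * ratio
    clipped = advantages * np.clip(ratio, 1.0 - clip_param, 1.0 + clip_param)
    # PPO maximizes the element-wise minimum, so the scalar loss is its negative mean.
    return -np.mean(np.minimum(unclipped, clipped))

# toy usage with random data
rng = np.random.default_rng(0)
adv = rng.normal(size=8)
logp_old = rng.normal(size=8)
logp_new = logp_old + 0.1 * rng.normal(size=8)
print(clipped_surrogate_loss(logp_new, logp_old, adv))
```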
gh_patches_debug_12580 | rasdani/github-patches | git_diff | pwndbg__pwndbg-384 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `desc` to ArgparsedCommand; move some commands to Argparsed~
* adds an optional keyword argument - `desc` - to `ArgparsedCommand` so we don't have to create an `argparse.ArgumentParser` if we don't want any parameters
* adds prints for cases when plt/gotplt sections don't have any symbols (so the user will get output from plt/gotplt instead of nothing)
* minor code refactorings
Potentially we could merge ArgparsedCommand and Command in the future (and remove all other Command children).
Also continuation of https://github.com/pwndbg/pwndbg/issues/244
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/__init__.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6 from __future__ import unicode_literals
7
8 import argparse
9 import functools
10
11 import gdb
12
13 import pwndbg.chain
14 import pwndbg.color
15 import pwndbg.enhance
16 import pwndbg.exception
17 import pwndbg.hexdump
18 import pwndbg.memory
19 import pwndbg.regs
20 import pwndbg.symbol
21 import pwndbg.ui
22
23 commands = []
24
25
26 class Command(gdb.Command):
27 """Generic command wrapper"""
28 command_names = set()
29 history = {}
30
31 def __init__(self, function, prefix=False):
32 command_name = function.__name__
33
34 super(Command, self).__init__(command_name, gdb.COMMAND_USER, gdb.COMPLETE_EXPRESSION, prefix=prefix)
35 self.function = function
36
37 if command_name in self.command_names:
38 raise Exception('Cannot add command %s: already exists.' % command_name)
39
40 self.command_names.add(command_name)
41 commands.append(self)
42
43 functools.update_wrapper(self, function)
44 self.__doc__ = function.__doc__
45
46 def split_args(self, argument):
47 """Split a command-line string from the user into arguments.
48
49 Returns:
50 A ``(tuple, dict)``, in the form of ``*args, **kwargs``.
51 The contents of the tuple/dict are undefined.
52 """
53 return gdb.string_to_argv(argument), {}
54
55 def invoke(self, argument, from_tty):
56 """Invoke the command with an argument string"""
57 try:
58 args, kwargs = self.split_args(argument)
59 except SystemExit:
60 # Raised when the usage is printed by an ArgparsedCommand
61 return
62 except (TypeError, gdb.error):
63 pwndbg.exception.handle(self.function.__name__)
64 return
65
66 try:
67 self.repeat = self.check_repeated(argument, from_tty)
68 return self(*args, **kwargs)
69 finally:
70 self.repeat = False
71
72 def check_repeated(self, argument, from_tty):
73 """Keep a record of all commands which come from the TTY.
74
75 Returns:
76 True if this command was executed by the user just hitting "enter".
77 """
78 # Don't care unless it's interactive use
79 if not from_tty:
80 return False
81
82 lines = gdb.execute('show commands', from_tty=False, to_string=True)
83 lines = lines.splitlines()
84
85 # No history
86 if not lines:
87 return False
88
89 last_line = lines[-1]
90 number, command = last_line.split(None, 1)
91 number = int(number)
92
93 # A new command was entered by the user
94 if number not in Command.history:
95 Command.history[number] = command
96 return False
97
98 # Somehow the command is different than we got before?
99 if not command.endswith(argument):
100 return False
101
102 return True
103
104 def __call__(self, *args, **kwargs):
105 try:
106 return self.function(*args, **kwargs)
107 except TypeError as te:
108 print('%r: %s' % (self.function.__name__.strip(),
109 self.function.__doc__.strip()))
110 pwndbg.exception.handle(self.function.__name__)
111 except Exception:
112 pwndbg.exception.handle(self.function.__name__)
113
114
115 class ParsedCommand(Command):
116 #: Whether to return the string 'arg' if parsing fails.
117 sloppy = False
118
119 #: Whether to hide errors during parsing
120 quiet = False
121
122 def split_args(self, argument):
123 # sys.stdout.write(repr(argument) + '\n')
124 argv, _ = super(ParsedCommand, self).split_args(argument)
125 # sys.stdout.write(repr(argv) + '\n')
126 return list(filter(lambda x: x is not None, map(self.fix, argv))), {}
127
128 def fix(self, arg):
129 return fix(arg, self.sloppy, self.quiet)
130
131
132 class ParsedCommandPrefix(ParsedCommand):
133 def __init__(self, function, prefix=True):
134 super(ParsedCommand, self).__init__(function, prefix)
135
136
137 def fix(arg, sloppy=False, quiet=True, reraise=False):
138 """Fix a single command-line argument coming from the GDB CLI.
139
140 Arguments:
141 arg(str): Original string representation (e.g. '0', '$rax', '$rax+44')
142 sloppy(bool): If ``arg`` cannot be evaluated, return ``arg``. (default: False)
143 quiet(bool): If an error occurs, suppress it. (default: True)
144 reraise(bool): If an error occurs, raise the exception. (default: False)
145
146 Returns:
147 Ideally ``gdb.Value`` object. May return a ``str`` if ``sloppy==True``.
148 May return ``None`` if ``sloppy == False and reraise == False``.
149 """
150 if isinstance(arg, gdb.Value):
151 return arg
152
153 try:
154 parsed = gdb.parse_and_eval(arg)
155 return parsed
156 except Exception:
157 pass
158
159 try:
160 arg = pwndbg.regs.fix(arg)
161 return gdb.parse_and_eval(arg)
162 except Exception as e:
163 if not quiet:
164 print(e)
165 if reraise:
166 raise e
167 pass
168
169 if sloppy:
170 return arg
171
172 return None
173
174
175 def fix_int(*a, **kw):
176 return int(fix(*a,**kw))
177
178 def fix_int_reraise(*a, **kw):
179 return fix(*a, reraise=True, **kw)
180
181
182 def OnlyWithFile(function):
183 @functools.wraps(function)
184 def _OnlyWithFile(*a, **kw):
185 if pwndbg.proc.exe:
186 return function(*a, **kw)
187 else:
188 print("%s: There is no file loaded." % function.__name__)
189
190 return _OnlyWithFile
191
192
193 def OnlyWhenRunning(function):
194 @functools.wraps(function)
195 def _OnlyWhenRunning(*a, **kw):
196 if pwndbg.proc.alive:
197 return function(*a, **kw)
198 else:
199 print("%s: The program is not being run." % function.__name__)
200 return _OnlyWhenRunning
201
202
203 class QuietSloppyParsedCommand(ParsedCommand):
204 def __init__(self, *a, **kw):
205 super(QuietSloppyParsedCommand, self).__init__(*a, **kw)
206 self.quiet = True
207 self.sloppy = True
208
209
210 class _ArgparsedCommand(Command):
211 def __init__(self, parser, function, *a, **kw):
212 self.parser = parser
213 self.parser.prog = function.__name__
214 self.__doc__ = function.__doc__ = self.parser.description
215 super(_ArgparsedCommand, self).__init__(function, *a, **kw)
216
217 def split_args(self, argument):
218 argv = gdb.string_to_argv(argument)
219 return tuple(), vars(self.parser.parse_args(argv))
220
221
222 class ArgparsedCommand(object):
223 """Adds documentation and offloads parsing for a Command via argparse"""
224 def __init__(self, parser_or_desc):
225 """
226 :param parser_or_desc: `argparse.ArgumentParser` instance or `str`
227 """
228 if isinstance(parser_or_desc, str):
229 self.parser = argparse.ArgumentParser(description=parser_or_desc)
230 else:
231 self.parser = parser_or_desc
232
233 # We want to run all integer and otherwise-unspecified arguments
234 # through fix() so that GDB parses it.
235 for action in self.parser._actions:
236 if action.dest == 'help':
237 continue
238 if action.type in (int, None):
239 action.type = fix_int_reraise
240 if action.default is not None:
241 action.help += ' (default: %(default)s)'
242
243 def __call__(self, function):
244 return _ArgparsedCommand(self.parser, function)
245
246
247 def sloppy_gdb_parse(s):
248 """
249 This function should be used as ``argparse.ArgumentParser`` .add_argument method's `type` helper.
250
251 This makes the type being parsed as gdb value and if that parsing fails,
252 a string is returned.
253
254 :param s: String.
255 :return: Whatever gdb.parse_and_eval returns or string.
256 """
257 try:
258 return gdb.parse_and_eval(s)
259 except (TypeError, gdb.error):
260 return s
261
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwndbg/commands/__init__.py b/pwndbg/commands/__init__.py
--- a/pwndbg/commands/__init__.py
+++ b/pwndbg/commands/__init__.py
@@ -9,6 +9,7 @@
import functools
import gdb
+import six
import pwndbg.chain
import pwndbg.color
@@ -225,7 +226,7 @@
"""
:param parser_or_desc: `argparse.ArgumentParser` instance or `str`
"""
- if isinstance(parser_or_desc, str):
+ if isinstance(parser_or_desc, six.string_types):
self.parser = argparse.ArgumentParser(description=parser_or_desc)
else:
self.parser = parser_or_desc
|
{"golden_diff": "diff --git a/pwndbg/commands/__init__.py b/pwndbg/commands/__init__.py\n--- a/pwndbg/commands/__init__.py\n+++ b/pwndbg/commands/__init__.py\n@@ -9,6 +9,7 @@\n import functools\n \n import gdb\n+import six\n \n import pwndbg.chain\n import pwndbg.color\n@@ -225,7 +226,7 @@\n \"\"\"\n :param parser_or_desc: `argparse.ArgumentParser` instance or `str`\n \"\"\"\n- if isinstance(parser_or_desc, str):\n+ if isinstance(parser_or_desc, six.string_types):\n self.parser = argparse.ArgumentParser(description=parser_or_desc)\n else:\n self.parser = parser_or_desc\n", "issue": "Add `desc` to ArgparsedCommand; move some commmands to Argparsed~\n* adds optional keyword argument - `desc` - to `ArgparsedCommand` so we don't have to create `argparse.ArgumentParser` if we don't want any parameters\r\n* adds prints for cases when plt/gotplt sections doesnt have any symbols (so user will get an output out of plt/gotplt instead of nothing)\r\n* minor code refactorings\r\n\r\nPotentially we could merge ArgparsedCommand and Command in the future (and remove all other Command children).\r\n\r\nAlso continuation of https://github.com/pwndbg/pwndbg/issues/244\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport argparse\nimport functools\n\nimport gdb\n\nimport pwndbg.chain\nimport pwndbg.color\nimport pwndbg.enhance\nimport pwndbg.exception\nimport pwndbg.hexdump\nimport pwndbg.memory\nimport pwndbg.regs\nimport pwndbg.symbol\nimport pwndbg.ui\n\ncommands = []\n\n\nclass Command(gdb.Command):\n \"\"\"Generic command wrapper\"\"\"\n command_names = set()\n history = {}\n\n def __init__(self, function, prefix=False):\n command_name = function.__name__\n\n super(Command, self).__init__(command_name, gdb.COMMAND_USER, gdb.COMPLETE_EXPRESSION, prefix=prefix)\n self.function = function\n\n if command_name in self.command_names:\n raise Exception('Cannot add command %s: already exists.' 
% command_name)\n\n self.command_names.add(command_name)\n commands.append(self)\n\n functools.update_wrapper(self, function)\n self.__doc__ = function.__doc__\n\n def split_args(self, argument):\n \"\"\"Split a command-line string from the user into arguments.\n\n Returns:\n A ``(tuple, dict)``, in the form of ``*args, **kwargs``.\n The contents of the tuple/dict are undefined.\n \"\"\"\n return gdb.string_to_argv(argument), {}\n\n def invoke(self, argument, from_tty):\n \"\"\"Invoke the command with an argument string\"\"\"\n try:\n args, kwargs = self.split_args(argument)\n except SystemExit:\n # Raised when the usage is printed by an ArgparsedCommand\n return\n except (TypeError, gdb.error):\n pwndbg.exception.handle(self.function.__name__)\n return\n\n try:\n self.repeat = self.check_repeated(argument, from_tty)\n return self(*args, **kwargs)\n finally:\n self.repeat = False\n\n def check_repeated(self, argument, from_tty):\n \"\"\"Keep a record of all commands which come from the TTY.\n\n Returns:\n True if this command was executed by the user just hitting \"enter\".\n \"\"\"\n # Don't care unless it's interactive use\n if not from_tty:\n return False\n\n lines = gdb.execute('show commands', from_tty=False, to_string=True)\n lines = lines.splitlines()\n\n # No history\n if not lines:\n return False\n\n last_line = lines[-1]\n number, command = last_line.split(None, 1)\n number = int(number)\n\n # A new command was entered by the user\n if number not in Command.history:\n Command.history[number] = command\n return False\n\n # Somehow the command is different than we got before?\n if not command.endswith(argument):\n return False\n\n return True\n\n def __call__(self, *args, **kwargs):\n try:\n return self.function(*args, **kwargs)\n except TypeError as te:\n print('%r: %s' % (self.function.__name__.strip(),\n self.function.__doc__.strip()))\n pwndbg.exception.handle(self.function.__name__)\n except Exception:\n pwndbg.exception.handle(self.function.__name__)\n\n\nclass ParsedCommand(Command):\n #: Whether to return the string 'arg' if parsing fails.\n sloppy = False\n\n #: Whether to hide errors during parsing\n quiet = False\n\n def split_args(self, argument):\n # sys.stdout.write(repr(argument) + '\\n')\n argv, _ = super(ParsedCommand, self).split_args(argument)\n # sys.stdout.write(repr(argv) + '\\n')\n return list(filter(lambda x: x is not None, map(self.fix, argv))), {}\n\n def fix(self, arg):\n return fix(arg, self.sloppy, self.quiet)\n\n\nclass ParsedCommandPrefix(ParsedCommand):\n def __init__(self, function, prefix=True):\n super(ParsedCommand, self).__init__(function, prefix)\n\n\ndef fix(arg, sloppy=False, quiet=True, reraise=False):\n \"\"\"Fix a single command-line argument coming from the GDB CLI.\n\n Arguments:\n arg(str): Original string representation (e.g. '0', '$rax', '$rax+44')\n sloppy(bool): If ``arg`` cannot be evaluated, return ``arg``. (default: False)\n quiet(bool): If an error occurs, suppress it. (default: True)\n reraise(bool): If an error occurs, raise the exception. (default: False)\n\n Returns:\n Ideally ``gdb.Value`` object. 
May return a ``str`` if ``sloppy==True``.\n May return ``None`` if ``sloppy == False and reraise == False``.\n \"\"\"\n if isinstance(arg, gdb.Value):\n return arg\n\n try:\n parsed = gdb.parse_and_eval(arg)\n return parsed\n except Exception:\n pass\n\n try:\n arg = pwndbg.regs.fix(arg)\n return gdb.parse_and_eval(arg)\n except Exception as e:\n if not quiet:\n print(e)\n if reraise:\n raise e\n pass\n\n if sloppy:\n return arg\n\n return None\n\n\ndef fix_int(*a, **kw):\n return int(fix(*a,**kw))\n\ndef fix_int_reraise(*a, **kw):\n return fix(*a, reraise=True, **kw)\n\n\ndef OnlyWithFile(function):\n @functools.wraps(function)\n def _OnlyWithFile(*a, **kw):\n if pwndbg.proc.exe:\n return function(*a, **kw)\n else:\n print(\"%s: There is no file loaded.\" % function.__name__)\n\n return _OnlyWithFile\n\n\ndef OnlyWhenRunning(function):\n @functools.wraps(function)\n def _OnlyWhenRunning(*a, **kw):\n if pwndbg.proc.alive:\n return function(*a, **kw)\n else:\n print(\"%s: The program is not being run.\" % function.__name__)\n return _OnlyWhenRunning\n\n\nclass QuietSloppyParsedCommand(ParsedCommand):\n def __init__(self, *a, **kw):\n super(QuietSloppyParsedCommand, self).__init__(*a, **kw)\n self.quiet = True\n self.sloppy = True\n\n\nclass _ArgparsedCommand(Command):\n def __init__(self, parser, function, *a, **kw):\n self.parser = parser\n self.parser.prog = function.__name__\n self.__doc__ = function.__doc__ = self.parser.description\n super(_ArgparsedCommand, self).__init__(function, *a, **kw)\n\n def split_args(self, argument):\n argv = gdb.string_to_argv(argument)\n return tuple(), vars(self.parser.parse_args(argv))\n\n\nclass ArgparsedCommand(object):\n \"\"\"Adds documentation and offloads parsing for a Command via argparse\"\"\"\n def __init__(self, parser_or_desc):\n \"\"\"\n :param parser_or_desc: `argparse.ArgumentParser` instance or `str`\n \"\"\"\n if isinstance(parser_or_desc, str):\n self.parser = argparse.ArgumentParser(description=parser_or_desc)\n else:\n self.parser = parser_or_desc\n\n # We want to run all integer and otherwise-unspecified arguments\n # through fix() so that GDB parses it.\n for action in self.parser._actions:\n if action.dest == 'help':\n continue\n if action.type in (int, None):\n action.type = fix_int_reraise\n if action.default is not None:\n action.help += ' (default: %(default)s)'\n\n def __call__(self, function):\n return _ArgparsedCommand(self.parser, function)\n\n\ndef sloppy_gdb_parse(s):\n \"\"\"\n This function should be used as ``argparse.ArgumentParser`` .add_argument method's `type` helper.\n \n This makes the type being parsed as gdb value and if that parsing fails,\n a string is returned.\n\n :param s: String.\n :return: Whatever gdb.parse_and_eval returns or string.\n \"\"\"\n try:\n return gdb.parse_and_eval(s)\n except (TypeError, gdb.error):\n return s\n", "path": "pwndbg/commands/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport argparse\nimport functools\n\nimport gdb\nimport six\n\nimport pwndbg.chain\nimport pwndbg.color\nimport pwndbg.enhance\nimport pwndbg.exception\nimport pwndbg.hexdump\nimport pwndbg.memory\nimport pwndbg.regs\nimport pwndbg.symbol\nimport pwndbg.ui\n\ncommands = []\n\n\nclass Command(gdb.Command):\n \"\"\"Generic command wrapper\"\"\"\n command_names = set()\n history = {}\n\n def __init__(self, function, 
prefix=False):\n command_name = function.__name__\n\n super(Command, self).__init__(command_name, gdb.COMMAND_USER, gdb.COMPLETE_EXPRESSION, prefix=prefix)\n self.function = function\n\n if command_name in self.command_names:\n raise Exception('Cannot add command %s: already exists.' % command_name)\n\n self.command_names.add(command_name)\n commands.append(self)\n\n functools.update_wrapper(self, function)\n self.__doc__ = function.__doc__\n\n def split_args(self, argument):\n \"\"\"Split a command-line string from the user into arguments.\n\n Returns:\n A ``(tuple, dict)``, in the form of ``*args, **kwargs``.\n The contents of the tuple/dict are undefined.\n \"\"\"\n return gdb.string_to_argv(argument), {}\n\n def invoke(self, argument, from_tty):\n \"\"\"Invoke the command with an argument string\"\"\"\n try:\n args, kwargs = self.split_args(argument)\n except SystemExit:\n # Raised when the usage is printed by an ArgparsedCommand\n return\n except (TypeError, gdb.error):\n pwndbg.exception.handle(self.function.__name__)\n return\n\n try:\n self.repeat = self.check_repeated(argument, from_tty)\n return self(*args, **kwargs)\n finally:\n self.repeat = False\n\n def check_repeated(self, argument, from_tty):\n \"\"\"Keep a record of all commands which come from the TTY.\n\n Returns:\n True if this command was executed by the user just hitting \"enter\".\n \"\"\"\n # Don't care unless it's interactive use\n if not from_tty:\n return False\n\n lines = gdb.execute('show commands', from_tty=False, to_string=True)\n lines = lines.splitlines()\n\n # No history\n if not lines:\n return False\n\n last_line = lines[-1]\n number, command = last_line.split(None, 1)\n number = int(number)\n\n # A new command was entered by the user\n if number not in Command.history:\n Command.history[number] = command\n return False\n\n # Somehow the command is different than we got before?\n if not command.endswith(argument):\n return False\n\n return True\n\n def __call__(self, *args, **kwargs):\n try:\n return self.function(*args, **kwargs)\n except TypeError as te:\n print('%r: %s' % (self.function.__name__.strip(),\n self.function.__doc__.strip()))\n pwndbg.exception.handle(self.function.__name__)\n except Exception:\n pwndbg.exception.handle(self.function.__name__)\n\n\nclass ParsedCommand(Command):\n #: Whether to return the string 'arg' if parsing fails.\n sloppy = False\n\n #: Whether to hide errors during parsing\n quiet = False\n\n def split_args(self, argument):\n # sys.stdout.write(repr(argument) + '\\n')\n argv, _ = super(ParsedCommand, self).split_args(argument)\n # sys.stdout.write(repr(argv) + '\\n')\n return list(filter(lambda x: x is not None, map(self.fix, argv))), {}\n\n def fix(self, arg):\n return fix(arg, self.sloppy, self.quiet)\n\n\nclass ParsedCommandPrefix(ParsedCommand):\n def __init__(self, function, prefix=True):\n super(ParsedCommand, self).__init__(function, prefix)\n\n\ndef fix(arg, sloppy=False, quiet=True, reraise=False):\n \"\"\"Fix a single command-line argument coming from the GDB CLI.\n\n Arguments:\n arg(str): Original string representation (e.g. '0', '$rax', '$rax+44')\n sloppy(bool): If ``arg`` cannot be evaluated, return ``arg``. (default: False)\n quiet(bool): If an error occurs, suppress it. (default: True)\n reraise(bool): If an error occurs, raise the exception. (default: False)\n\n Returns:\n Ideally ``gdb.Value`` object. 
May return a ``str`` if ``sloppy==True``.\n May return ``None`` if ``sloppy == False and reraise == False``.\n \"\"\"\n if isinstance(arg, gdb.Value):\n return arg\n\n try:\n parsed = gdb.parse_and_eval(arg)\n return parsed\n except Exception:\n pass\n\n try:\n arg = pwndbg.regs.fix(arg)\n return gdb.parse_and_eval(arg)\n except Exception as e:\n if not quiet:\n print(e)\n if reraise:\n raise e\n pass\n\n if sloppy:\n return arg\n\n return None\n\n\ndef fix_int(*a, **kw):\n return int(fix(*a,**kw))\n\ndef fix_int_reraise(*a, **kw):\n return fix(*a, reraise=True, **kw)\n\n\ndef OnlyWithFile(function):\n @functools.wraps(function)\n def _OnlyWithFile(*a, **kw):\n if pwndbg.proc.exe:\n return function(*a, **kw)\n else:\n print(\"%s: There is no file loaded.\" % function.__name__)\n\n return _OnlyWithFile\n\n\ndef OnlyWhenRunning(function):\n @functools.wraps(function)\n def _OnlyWhenRunning(*a, **kw):\n if pwndbg.proc.alive:\n return function(*a, **kw)\n else:\n print(\"%s: The program is not being run.\" % function.__name__)\n return _OnlyWhenRunning\n\n\nclass QuietSloppyParsedCommand(ParsedCommand):\n def __init__(self, *a, **kw):\n super(QuietSloppyParsedCommand, self).__init__(*a, **kw)\n self.quiet = True\n self.sloppy = True\n\n\nclass _ArgparsedCommand(Command):\n def __init__(self, parser, function, *a, **kw):\n self.parser = parser\n self.parser.prog = function.__name__\n self.__doc__ = function.__doc__ = self.parser.description\n super(_ArgparsedCommand, self).__init__(function, *a, **kw)\n\n def split_args(self, argument):\n argv = gdb.string_to_argv(argument)\n return tuple(), vars(self.parser.parse_args(argv))\n\n\nclass ArgparsedCommand(object):\n \"\"\"Adds documentation and offloads parsing for a Command via argparse\"\"\"\n def __init__(self, parser_or_desc):\n \"\"\"\n :param parser_or_desc: `argparse.ArgumentParser` instance or `str`\n \"\"\"\n if isinstance(parser_or_desc, six.string_types):\n self.parser = argparse.ArgumentParser(description=parser_or_desc)\n else:\n self.parser = parser_or_desc\n\n # We want to run all integer and otherwise-unspecified arguments\n # through fix() so that GDB parses it.\n for action in self.parser._actions:\n if action.dest == 'help':\n continue\n if action.type in (int, None):\n action.type = fix_int_reraise\n if action.default is not None:\n action.help += ' (default: %(default)s)'\n\n def __call__(self, function):\n return _ArgparsedCommand(self.parser, function)\n\n\ndef sloppy_gdb_parse(s):\n \"\"\"\n This function should be used as ``argparse.ArgumentParser`` .add_argument method's `type` helper.\n \n This makes the type being parsed as gdb value and if that parsing fails,\n a string is returned.\n\n :param s: String.\n :return: Whatever gdb.parse_and_eval returns or string.\n \"\"\"\n try:\n return gdb.parse_and_eval(s)\n except (TypeError, gdb.error):\n return s\n", "path": "pwndbg/commands/__init__.py"}]}
| 2,899 | 160 |
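The golden diff above lets `ArgparsedCommand` accept either an `argparse.ArgumentParser` or a plain description string (checked via `six.string_types`), which is the `desc` convenience the issue asks for. A minimal standalone sketch of that pattern — not pwndbg's actual code, and with hypothetical command names — could look like:

```python
import argparse

class ArgparsedCommand(object):
    """Decorator that accepts either a full parser or a plain description string."""

    def __init__(self, parser_or_desc):
        if isinstance(parser_or_desc, str):  # six.string_types in py2/py3 code
            self.parser = argparse.ArgumentParser(description=parser_or_desc)
        else:
            self.parser = parser_or_desc

    def __call__(self, function):
        self.parser.prog = function.__name__
        function.parser = self.parser  # keep the parser around for help/usage
        return function

# argument-less command: just pass a description
@ArgparsedCommand("Prints a friendly greeting.")
def greet():
    print("hello")

# command with arguments: build the parser explicitly
parser = argparse.ArgumentParser(description="Echo an address in hex.")
parser.add_argument("addr", type=int)

@ArgparsedCommand(parser)
def echo_addr(addr):
    print(hex(addr))

greet()
echo_addr(51966)
```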
gh_patches_debug_38809 | rasdani/github-patches | git_diff | OCA__stock-logistics-warehouse-541 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[11.0] stock_move_location: travis is red
@sebalix stock_move_location is going red in tests, so the 11.0 branch is in a bad state:
https://travis-ci.org/OCA/stock-logistics-warehouse/jobs/487479800
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `stock_move_location/__manifest__.py`
Content:
```
1 # Copyright (C) 2011 Julius Network Solutions SARL <[email protected]>
2 # Copyright 2018 Camptocamp SA
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl)
4
5 {
6 "name": "Move Stock Location",
7 "version": "11.0.1.0.0",
8 "author": "Julius Network Solutions, "
9 "Odoo Community Association (OCA)",
10 "summary": "This module allows to move all stock "
11 "in a stock location to an other one.",
12 "website": "https://github.com/OCA/stock-logistics-warehouse",
13 'license': 'AGPL-3',
14 "depends": [
15 "stock",
16 ],
17 "category": "Stock",
18 "data": [
19 'wizard/stock_move_location.xml',
20 ],
21 }
22
```
Path: `stock_move_location/wizard/stock_move_location.py`
Content:
```
1 # Copyright (C) 2011 Julius Network Solutions SARL <[email protected]>
2 # Copyright 2018 Camptocamp SA
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl)
4
5 from itertools import groupby
6
7 from odoo import api, fields, models
8
9
10 class StockMoveLocationWizard(models.TransientModel):
11 _name = "wiz.stock.move.location"
12
13 origin_location_id = fields.Many2one(
14 string='Origin Location',
15 comodel_name='stock.location',
16 required=True,
17 domain=lambda self: self._get_locations_domain(),
18 )
19 destination_location_id = fields.Many2one(
20 string='Destination Location',
21 comodel_name='stock.location',
22 required=True,
23 domain=lambda self: self._get_locations_domain(),
24 )
25 stock_move_location_line_ids = fields.One2many(
26 string="Move Location lines",
27 comodel_name="wiz.stock.move.location.line",
28 inverse_name="move_location_wizard_id",
29 )
30 picking_id = fields.Many2one(
31 string="Connected Picking",
32 comodel_name="stock.picking",
33 )
34
35 @api.onchange('origin_location_id', 'destination_location_id')
36 def _onchange_locations(self):
37 self._clear_lines()
38
39 def _clear_lines(self):
40 origin = self.origin_location_id
41 destination = self.destination_location_id
42 # there is `invalidate_cache` call inside the unlink
43 # which will clear the wizard - not cool.
44 # we have to keep the values somehow
45 self.stock_move_location_line_ids.unlink()
46 self.origin_location_id = origin
47 self.destination_location_id = destination
48
49 def _get_locations_domain(self):
50 return [('usage', '=', 'internal')]
51
52 def _create_picking(self):
53 return self.env['stock.picking'].create({
54 'picking_type_id': self.env.ref('stock.picking_type_internal').id,
55 'location_id': self.origin_location_id.id,
56 'location_dest_id': self.destination_location_id.id,
57 })
58
59 @api.multi
60 def group_lines(self):
61 sorted_lines = sorted(
62 self.stock_move_location_line_ids,
63 key=lambda x: x.product_id,
64 )
65 groups = groupby(sorted_lines, key=lambda x: x.product_id)
66 groups_dict = {}
67 for prod, lines in groups:
68 groups_dict[prod.id] = list(lines)
69 return groups_dict
70
71 @api.multi
72 def _create_moves(self, picking):
73 self.ensure_one()
74 groups = self.group_lines()
75 moves = self.env["stock.move"]
76 for group, lines in groups.items():
77 move = self._create_move(picking, lines)
78 moves |= move
79 return moves
80
81 def _get_move_values(self, picking, lines):
82 # locations are same for the products
83 location_from_id = lines[0].origin_location_id.id
84 location_to_id = lines[0].destination_location_id.id
85 product_id = lines[0].product_id.id
86 product_uom_id = lines[0].product_uom_id.id
87 qty = sum([x.move_quantity for x in lines])
88 return {
89 "name": "test",
90 "location_id": location_from_id,
91 "location_dest_id": location_to_id,
92 "product_id": product_id,
93 "product_uom": product_uom_id,
94 "product_uom_qty": qty,
95 "picking_id": picking.id,
96 "location_move": True,
97 }
98
99 @api.multi
100 def _create_move(self, picking, lines):
101 self.ensure_one()
102 move = self.env["stock.move"].create(
103 self._get_move_values(picking, lines),
104 )
105 for line in lines:
106 line.create_move_lines(picking, move)
107 return move
108
109 @api.multi
110 def action_move_location(self):
111 self.ensure_one()
112 picking = self._create_picking()
113 self._create_moves(picking)
114 if not self.env.context.get("planned"):
115 picking.action_confirm()
116 picking.action_assign()
117 picking.button_validate()
118 self.picking_id = picking
119 return self._get_picking_action(picking.id)
120
121 def _get_picking_action(self, pickinig_id):
122 action = self.env.ref("stock.action_picking_tree_all").read()[0]
123 form_view = self.env.ref("stock.view_picking_form").id
124 action.update({
125 "view_mode": "form",
126 "views": [(form_view, "form")],
127 "res_id": pickinig_id,
128 })
129 return action
130
131 def _get_group_quants(self):
132 location_id = self.origin_location_id.id
133 company = self.env['res.company']._company_default_get(
134 'stock.inventory',
135 )
136 # Using sql as search_group doesn't support aggregation functions
137 # leading to overhead in queries to DB
138 query = """
139 SELECT product_id, lot_id, SUM(quantity)
140 FROM stock_quant
141 WHERE location_id = %s
142 AND company_id = %s
143 GROUP BY product_id, lot_id
144 """
145 self.env.cr.execute(query, (location_id, company.id))
146 return self.env.cr.dictfetchall()
147
148 def _get_stock_move_location_lines_values(self):
149 product_obj = self.env['product.product']
150 product_data = []
151 for group in self._get_group_quants():
152 product = product_obj.browse(group.get("product_id")).exists()
153 product_data.append({
154 'product_id': product.id,
155 'move_quantity': group.get("sum"),
156 'max_quantity': group.get("sum"),
157 'origin_location_id': self.origin_location_id.id,
158 'destination_location_id': self.destination_location_id.id,
159 # cursor returns None instead of False
160 'lot_id': group.get("lot_id") or False,
161 'product_uom_id': product.uom_id.id,
162 'move_location_wizard_id': self.id,
163 'custom': False,
164 })
165 return product_data
166
167 def add_lines(self):
168 self.ensure_one()
169 line_model = self.env["wiz.stock.move.location.line"]
170 if not self.stock_move_location_line_ids:
171 for line_val in self._get_stock_move_location_lines_values():
172 if line_val.get('max_quantity') <= 0:
173 continue
174 line = line_model.create(line_val)
175 line.onchange_product_id()
176 return {
177 "type": "ir.actions.do_nothing",
178 }
179
180 def clear_lines(self):
181 self._clear_lines()
182 return {
183 "type": "ir.action.do_nothing",
184 }
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/stock_move_location/__manifest__.py b/stock_move_location/__manifest__.py
--- a/stock_move_location/__manifest__.py
+++ b/stock_move_location/__manifest__.py
@@ -4,7 +4,7 @@
{
"name": "Move Stock Location",
- "version": "11.0.1.0.0",
+ "version": "11.0.1.0.1",
"author": "Julius Network Solutions, "
"Odoo Community Association (OCA)",
"summary": "This module allows to move all stock "
diff --git a/stock_move_location/wizard/stock_move_location.py b/stock_move_location/wizard/stock_move_location.py
--- a/stock_move_location/wizard/stock_move_location.py
+++ b/stock_move_location/wizard/stock_move_location.py
@@ -2,8 +2,6 @@
# Copyright 2018 Camptocamp SA
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl)
-from itertools import groupby
-
from odoo import api, fields, models
@@ -58,22 +56,21 @@
@api.multi
def group_lines(self):
- sorted_lines = sorted(
- self.stock_move_location_line_ids,
- key=lambda x: x.product_id,
- )
- groups = groupby(sorted_lines, key=lambda x: x.product_id)
- groups_dict = {}
- for prod, lines in groups:
- groups_dict[prod.id] = list(lines)
- return groups_dict
+ lines_grouped = {}
+ for line in self.stock_move_location_line_ids:
+ lines_grouped.setdefault(
+ line.product_id.id,
+ self.env["wiz.stock.move.location.line"].browse(),
+ )
+ lines_grouped[line.product_id.id] |= line
+ return lines_grouped
@api.multi
def _create_moves(self, picking):
self.ensure_one()
groups = self.group_lines()
moves = self.env["stock.move"]
- for group, lines in groups.items():
+ for lines in groups.values():
move = self._create_move(picking, lines)
moves |= move
return moves
@@ -82,14 +79,14 @@
# locations are same for the products
location_from_id = lines[0].origin_location_id.id
location_to_id = lines[0].destination_location_id.id
- product_id = lines[0].product_id.id
+ product = lines[0].product_id
product_uom_id = lines[0].product_uom_id.id
qty = sum([x.move_quantity for x in lines])
return {
- "name": "test",
+ "name": product.display_name,
"location_id": location_from_id,
"location_dest_id": location_to_id,
- "product_id": product_id,
+ "product_id": product.id,
"product_uom": product_uom_id,
"product_uom_qty": qty,
"picking_id": picking.id,
|
{"golden_diff": "diff --git a/stock_move_location/__manifest__.py b/stock_move_location/__manifest__.py\n--- a/stock_move_location/__manifest__.py\n+++ b/stock_move_location/__manifest__.py\n@@ -4,7 +4,7 @@\n \n {\n \"name\": \"Move Stock Location\",\n- \"version\": \"11.0.1.0.0\",\n+ \"version\": \"11.0.1.0.1\",\n \"author\": \"Julius Network Solutions, \"\n \"Odoo Community Association (OCA)\",\n \"summary\": \"This module allows to move all stock \"\ndiff --git a/stock_move_location/wizard/stock_move_location.py b/stock_move_location/wizard/stock_move_location.py\n--- a/stock_move_location/wizard/stock_move_location.py\n+++ b/stock_move_location/wizard/stock_move_location.py\n@@ -2,8 +2,6 @@\n # Copyright 2018 Camptocamp SA\n # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl)\n \n-from itertools import groupby\n-\n from odoo import api, fields, models\n \n \n@@ -58,22 +56,21 @@\n \n @api.multi\n def group_lines(self):\n- sorted_lines = sorted(\n- self.stock_move_location_line_ids,\n- key=lambda x: x.product_id,\n- )\n- groups = groupby(sorted_lines, key=lambda x: x.product_id)\n- groups_dict = {}\n- for prod, lines in groups:\n- groups_dict[prod.id] = list(lines)\n- return groups_dict\n+ lines_grouped = {}\n+ for line in self.stock_move_location_line_ids:\n+ lines_grouped.setdefault(\n+ line.product_id.id,\n+ self.env[\"wiz.stock.move.location.line\"].browse(),\n+ )\n+ lines_grouped[line.product_id.id] |= line\n+ return lines_grouped\n \n @api.multi\n def _create_moves(self, picking):\n self.ensure_one()\n groups = self.group_lines()\n moves = self.env[\"stock.move\"]\n- for group, lines in groups.items():\n+ for lines in groups.values():\n move = self._create_move(picking, lines)\n moves |= move\n return moves\n@@ -82,14 +79,14 @@\n # locations are same for the products\n location_from_id = lines[0].origin_location_id.id\n location_to_id = lines[0].destination_location_id.id\n- product_id = lines[0].product_id.id\n+ product = lines[0].product_id\n product_uom_id = lines[0].product_uom_id.id\n qty = sum([x.move_quantity for x in lines])\n return {\n- \"name\": \"test\",\n+ \"name\": product.display_name,\n \"location_id\": location_from_id,\n \"location_dest_id\": location_to_id,\n- \"product_id\": product_id,\n+ \"product_id\": product.id,\n \"product_uom\": product_uom_id,\n \"product_uom_qty\": qty,\n \"picking_id\": picking.id,\n", "issue": "[11.0] stock_move_location: travis is red\n@sebalix stock_move_location is going red in tests, so 11.0 branch is in bad state:\r\n\r\nhttps://travis-ci.org/OCA/stock-logistics-warehouse/jobs/487479800\n", "before_files": [{"content": "# Copyright (C) 2011 Julius Network Solutions SARL <[email protected]>\n# Copyright 2018 Camptocamp SA\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl)\n\n{\n \"name\": \"Move Stock Location\",\n \"version\": \"11.0.1.0.0\",\n \"author\": \"Julius Network Solutions, \"\n \"Odoo Community Association (OCA)\",\n \"summary\": \"This module allows to move all stock \"\n \"in a stock location to an other one.\",\n \"website\": \"https://github.com/OCA/stock-logistics-warehouse\",\n 'license': 'AGPL-3',\n \"depends\": [\n \"stock\",\n ],\n \"category\": \"Stock\",\n \"data\": [\n 'wizard/stock_move_location.xml',\n ],\n}\n", "path": "stock_move_location/__manifest__.py"}, {"content": "# Copyright (C) 2011 Julius Network Solutions SARL <[email protected]>\n# Copyright 2018 Camptocamp SA\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl)\n\nfrom itertools import groupby\n\nfrom odoo 
import api, fields, models\n\n\nclass StockMoveLocationWizard(models.TransientModel):\n _name = \"wiz.stock.move.location\"\n\n origin_location_id = fields.Many2one(\n string='Origin Location',\n comodel_name='stock.location',\n required=True,\n domain=lambda self: self._get_locations_domain(),\n )\n destination_location_id = fields.Many2one(\n string='Destination Location',\n comodel_name='stock.location',\n required=True,\n domain=lambda self: self._get_locations_domain(),\n )\n stock_move_location_line_ids = fields.One2many(\n string=\"Move Location lines\",\n comodel_name=\"wiz.stock.move.location.line\",\n inverse_name=\"move_location_wizard_id\",\n )\n picking_id = fields.Many2one(\n string=\"Connected Picking\",\n comodel_name=\"stock.picking\",\n )\n\n @api.onchange('origin_location_id', 'destination_location_id')\n def _onchange_locations(self):\n self._clear_lines()\n\n def _clear_lines(self):\n origin = self.origin_location_id\n destination = self.destination_location_id\n # there is `invalidate_cache` call inside the unlink\n # which will clear the wizard - not cool.\n # we have to keep the values somehow\n self.stock_move_location_line_ids.unlink()\n self.origin_location_id = origin\n self.destination_location_id = destination\n\n def _get_locations_domain(self):\n return [('usage', '=', 'internal')]\n\n def _create_picking(self):\n return self.env['stock.picking'].create({\n 'picking_type_id': self.env.ref('stock.picking_type_internal').id,\n 'location_id': self.origin_location_id.id,\n 'location_dest_id': self.destination_location_id.id,\n })\n\n @api.multi\n def group_lines(self):\n sorted_lines = sorted(\n self.stock_move_location_line_ids,\n key=lambda x: x.product_id,\n )\n groups = groupby(sorted_lines, key=lambda x: x.product_id)\n groups_dict = {}\n for prod, lines in groups:\n groups_dict[prod.id] = list(lines)\n return groups_dict\n\n @api.multi\n def _create_moves(self, picking):\n self.ensure_one()\n groups = self.group_lines()\n moves = self.env[\"stock.move\"]\n for group, lines in groups.items():\n move = self._create_move(picking, lines)\n moves |= move\n return moves\n\n def _get_move_values(self, picking, lines):\n # locations are same for the products\n location_from_id = lines[0].origin_location_id.id\n location_to_id = lines[0].destination_location_id.id\n product_id = lines[0].product_id.id\n product_uom_id = lines[0].product_uom_id.id\n qty = sum([x.move_quantity for x in lines])\n return {\n \"name\": \"test\",\n \"location_id\": location_from_id,\n \"location_dest_id\": location_to_id,\n \"product_id\": product_id,\n \"product_uom\": product_uom_id,\n \"product_uom_qty\": qty,\n \"picking_id\": picking.id,\n \"location_move\": True,\n }\n\n @api.multi\n def _create_move(self, picking, lines):\n self.ensure_one()\n move = self.env[\"stock.move\"].create(\n self._get_move_values(picking, lines),\n )\n for line in lines:\n line.create_move_lines(picking, move)\n return move\n\n @api.multi\n def action_move_location(self):\n self.ensure_one()\n picking = self._create_picking()\n self._create_moves(picking)\n if not self.env.context.get(\"planned\"):\n picking.action_confirm()\n picking.action_assign()\n picking.button_validate()\n self.picking_id = picking\n return self._get_picking_action(picking.id)\n\n def _get_picking_action(self, pickinig_id):\n action = self.env.ref(\"stock.action_picking_tree_all\").read()[0]\n form_view = self.env.ref(\"stock.view_picking_form\").id\n action.update({\n \"view_mode\": \"form\",\n \"views\": [(form_view, 
\"form\")],\n \"res_id\": pickinig_id,\n })\n return action\n\n def _get_group_quants(self):\n location_id = self.origin_location_id.id\n company = self.env['res.company']._company_default_get(\n 'stock.inventory',\n )\n # Using sql as search_group doesn't support aggregation functions\n # leading to overhead in queries to DB\n query = \"\"\"\n SELECT product_id, lot_id, SUM(quantity)\n FROM stock_quant\n WHERE location_id = %s\n AND company_id = %s\n GROUP BY product_id, lot_id\n \"\"\"\n self.env.cr.execute(query, (location_id, company.id))\n return self.env.cr.dictfetchall()\n\n def _get_stock_move_location_lines_values(self):\n product_obj = self.env['product.product']\n product_data = []\n for group in self._get_group_quants():\n product = product_obj.browse(group.get(\"product_id\")).exists()\n product_data.append({\n 'product_id': product.id,\n 'move_quantity': group.get(\"sum\"),\n 'max_quantity': group.get(\"sum\"),\n 'origin_location_id': self.origin_location_id.id,\n 'destination_location_id': self.destination_location_id.id,\n # cursor returns None instead of False\n 'lot_id': group.get(\"lot_id\") or False,\n 'product_uom_id': product.uom_id.id,\n 'move_location_wizard_id': self.id,\n 'custom': False,\n })\n return product_data\n\n def add_lines(self):\n self.ensure_one()\n line_model = self.env[\"wiz.stock.move.location.line\"]\n if not self.stock_move_location_line_ids:\n for line_val in self._get_stock_move_location_lines_values():\n if line_val.get('max_quantity') <= 0:\n continue\n line = line_model.create(line_val)\n line.onchange_product_id()\n return {\n \"type\": \"ir.actions.do_nothing\",\n }\n\n def clear_lines(self):\n self._clear_lines()\n return {\n \"type\": \"ir.action.do_nothing\",\n }\n", "path": "stock_move_location/wizard/stock_move_location.py"}], "after_files": [{"content": "# Copyright (C) 2011 Julius Network Solutions SARL <[email protected]>\n# Copyright 2018 Camptocamp SA\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl)\n\n{\n \"name\": \"Move Stock Location\",\n \"version\": \"11.0.1.0.1\",\n \"author\": \"Julius Network Solutions, \"\n \"Odoo Community Association (OCA)\",\n \"summary\": \"This module allows to move all stock \"\n \"in a stock location to an other one.\",\n \"website\": \"https://github.com/OCA/stock-logistics-warehouse\",\n 'license': 'AGPL-3',\n \"depends\": [\n \"stock\",\n ],\n \"category\": \"Stock\",\n \"data\": [\n 'wizard/stock_move_location.xml',\n ],\n}\n", "path": "stock_move_location/__manifest__.py"}, {"content": "# Copyright (C) 2011 Julius Network Solutions SARL <[email protected]>\n# Copyright 2018 Camptocamp SA\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl)\n\nfrom odoo import api, fields, models\n\n\nclass StockMoveLocationWizard(models.TransientModel):\n _name = \"wiz.stock.move.location\"\n\n origin_location_id = fields.Many2one(\n string='Origin Location',\n comodel_name='stock.location',\n required=True,\n domain=lambda self: self._get_locations_domain(),\n )\n destination_location_id = fields.Many2one(\n string='Destination Location',\n comodel_name='stock.location',\n required=True,\n domain=lambda self: self._get_locations_domain(),\n )\n stock_move_location_line_ids = fields.One2many(\n string=\"Move Location lines\",\n comodel_name=\"wiz.stock.move.location.line\",\n inverse_name=\"move_location_wizard_id\",\n )\n picking_id = fields.Many2one(\n string=\"Connected Picking\",\n comodel_name=\"stock.picking\",\n )\n\n @api.onchange('origin_location_id', 
'destination_location_id')\n def _onchange_locations(self):\n self._clear_lines()\n\n def _clear_lines(self):\n origin = self.origin_location_id\n destination = self.destination_location_id\n # there is `invalidate_cache` call inside the unlink\n # which will clear the wizard - not cool.\n # we have to keep the values somehow\n self.stock_move_location_line_ids.unlink()\n self.origin_location_id = origin\n self.destination_location_id = destination\n\n def _get_locations_domain(self):\n return [('usage', '=', 'internal')]\n\n def _create_picking(self):\n return self.env['stock.picking'].create({\n 'picking_type_id': self.env.ref('stock.picking_type_internal').id,\n 'location_id': self.origin_location_id.id,\n 'location_dest_id': self.destination_location_id.id,\n })\n\n @api.multi\n def group_lines(self):\n lines_grouped = {}\n for line in self.stock_move_location_line_ids:\n lines_grouped.setdefault(\n line.product_id.id,\n self.env[\"wiz.stock.move.location.line\"].browse(),\n )\n lines_grouped[line.product_id.id] |= line\n return lines_grouped\n\n @api.multi\n def _create_moves(self, picking):\n self.ensure_one()\n groups = self.group_lines()\n moves = self.env[\"stock.move\"]\n for lines in groups.values():\n move = self._create_move(picking, lines)\n moves |= move\n return moves\n\n def _get_move_values(self, picking, lines):\n # locations are same for the products\n location_from_id = lines[0].origin_location_id.id\n location_to_id = lines[0].destination_location_id.id\n product = lines[0].product_id\n product_uom_id = lines[0].product_uom_id.id\n qty = sum([x.move_quantity for x in lines])\n return {\n \"name\": product.display_name,\n \"location_id\": location_from_id,\n \"location_dest_id\": location_to_id,\n \"product_id\": product.id,\n \"product_uom\": product_uom_id,\n \"product_uom_qty\": qty,\n \"picking_id\": picking.id,\n \"location_move\": True,\n }\n\n @api.multi\n def _create_move(self, picking, lines):\n self.ensure_one()\n move = self.env[\"stock.move\"].create(\n self._get_move_values(picking, lines),\n )\n for line in lines:\n line.create_move_lines(picking, move)\n return move\n\n @api.multi\n def action_move_location(self):\n self.ensure_one()\n picking = self._create_picking()\n self._create_moves(picking)\n if not self.env.context.get(\"planned\"):\n picking.action_confirm()\n picking.action_assign()\n picking.button_validate()\n self.picking_id = picking\n return self._get_picking_action(picking.id)\n\n def _get_picking_action(self, pickinig_id):\n action = self.env.ref(\"stock.action_picking_tree_all\").read()[0]\n form_view = self.env.ref(\"stock.view_picking_form\").id\n action.update({\n \"view_mode\": \"form\",\n \"views\": [(form_view, \"form\")],\n \"res_id\": pickinig_id,\n })\n return action\n\n def _get_group_quants(self):\n location_id = self.origin_location_id.id\n company = self.env['res.company']._company_default_get(\n 'stock.inventory',\n )\n # Using sql as search_group doesn't support aggregation functions\n # leading to overhead in queries to DB\n query = \"\"\"\n SELECT product_id, lot_id, SUM(quantity)\n FROM stock_quant\n WHERE location_id = %s\n AND company_id = %s\n GROUP BY product_id, lot_id\n \"\"\"\n self.env.cr.execute(query, (location_id, company.id))\n return self.env.cr.dictfetchall()\n\n def _get_stock_move_location_lines_values(self):\n product_obj = self.env['product.product']\n product_data = []\n for group in self._get_group_quants():\n product = product_obj.browse(group.get(\"product_id\")).exists()\n product_data.append({\n 
'product_id': product.id,\n 'move_quantity': group.get(\"sum\"),\n 'max_quantity': group.get(\"sum\"),\n 'origin_location_id': self.origin_location_id.id,\n 'destination_location_id': self.destination_location_id.id,\n # cursor returns None instead of False\n 'lot_id': group.get(\"lot_id\") or False,\n 'product_uom_id': product.uom_id.id,\n 'move_location_wizard_id': self.id,\n 'custom': False,\n })\n return product_data\n\n def add_lines(self):\n self.ensure_one()\n line_model = self.env[\"wiz.stock.move.location.line\"]\n if not self.stock_move_location_line_ids:\n for line_val in self._get_stock_move_location_lines_values():\n if line_val.get('max_quantity') <= 0:\n continue\n line = line_model.create(line_val)\n line.onchange_product_id()\n return {\n \"type\": \"ir.actions.do_nothing\",\n }\n\n def clear_lines(self):\n self._clear_lines()\n return {\n \"type\": \"ir.action.do_nothing\",\n }\n", "path": "stock_move_location/wizard/stock_move_location.py"}]}
| 2,448 | 690 |
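The golden diff above drops the `itertools.groupby`-based grouping in favour of a plain dict keyed by product id. Independent of the Odoo-specific details, one property worth keeping in mind is that `groupby` only merges *adjacent* equal keys, so it silently depends on the input being sorted by the same key, while a `setdefault` accumulation does not. A small toy illustration (plain dicts, not Odoo recordsets):

```python
from itertools import groupby

lines = [
    {"product_id": 2, "qty": 1},
    {"product_id": 1, "qty": 3},
    {"product_id": 2, "qty": 4},
]

# groupby merges only adjacent equal keys: on unsorted input, product 2 shows up
# in two separate groups and the second group overwrites the first in this dict.
by_groupby = {key: [line["qty"] for line in grp]
              for key, grp in groupby(lines, key=lambda line: line["product_id"])}
print(by_groupby)   # {2: [4], 1: [3]}  -- qty 1 for product 2 was lost

# Accumulating into a dict with setdefault groups correctly in any order.
grouped = {}
for line in lines:
    grouped.setdefault(line["product_id"], []).append(line["qty"])
print(grouped)      # {2: [1, 4], 1: [3]}
```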
gh_patches_debug_35060 | rasdani/github-patches | git_diff | pyload__pyload-1774 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MultiUpOrg crypter issue
Hi all,
I tried to download a link from the following URL: http://www.multiup.org/fr/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi
When I use debug mode, here is what I get:
```
18.08.2015 23:55:44 INFO Decrypting starts: Lulu.Vroumette.S1x03_Capitaine.Lulu.avi
18.08.2015 23:55:44 DEBUG HOOK UserAgentSwitcher: Use custom user-agent string: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0
18.08.2015 23:55:44 DEBUG CRYPTER MultiUpOrg[9]: PROCESS URL http://www.multiup.org/en/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi | PLUGIN VERSION 0.04
18.08.2015 23:55:44 INFO CRYPTER MultiUpOrg[9]: Updating file info...
18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: File info: {'status': 2, 'url': u'http://www.multiup.org/en/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi', 'size': 0, 'name': u'Lulu.Vroumette.S1x03_Capitaine.Lulu.avi', 'pattern': {'TYPE': u'download', 'N': u'Lulu.Vroumette.S1x03_Capitaine.Lulu.avi'}}
18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: Previous file info: {}
18.08.2015 23:55:45 INFO CRYPTER MultiUpOrg[9]: File name: Lulu.Vroumette.S1x03_Capitaine.Lulu.avi
18.08.2015 23:55:45 INFO CRYPTER MultiUpOrg[9]: File size: Unknown
18.08.2015 23:55:45 INFO CRYPTER MultiUpOrg[9]: File status: online
18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: Looking for direct download link...
18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: Redirect #0 to: http://www.multiup.org/en/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi
18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: LOAD URL http://www.multiup.org/en/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi | cookies=True | get={} | req=None | decode=True | multipart=False | post={} | ref=True | just_header=True
18.08.2015 23:55:45 ERROR Decrypting failed: Lulu.Vroumette.S1x03_Capitaine.Lulu.avi | 'MultiUpOrg' object has no attribute 'links'
Traceback (most recent call last):
File "/home.sav/dl/pyload/module/PluginThread.py", line 363, in run
self.active.plugin.preprocessing(self)
File "/home.sav/dl/pyload/.pyload/userplugins/internal/Hoster.py", line 185, in preprocessing
return self.process(self.pyfile)
File "/home.sav/dl/pyload/module/plugins/internal/Crypter.py", line 41, in process
self.decrypt(pyfile)
File "/home.sav/dl/pyload/module/plugins/internal/SimpleCrypter.py", line 101, in decrypt
if self.link or self.links or self.urls or self.packages:
AttributeError: 'MultiUpOrg' object has no attribute 'links'
```
I tried to modify the file "module/plugins/crypter/MultiUpOrg.py" (version "0.04") a bit, but with no success (in particular, when using French, there is no "<h5>DOWNLOAD</h5>" but "<h5>TÉLÉCHARGER</h5>").
But that is not the cause of the issue, as I also tried with the English version of the URL...
Please could you help me ;)
Regards
--- END ISSUE ---
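The traceback in the issue fails on `self.links` before anything has assigned it, i.e. an instance attribute is read before it exists. A tiny self-contained illustration of that failure mode (class and attribute names are made up; this is not pyload code):

```python
class Decrypter(object):
    def __init__(self):
        self.link = None
        # note: self.links is never initialized here, only later by get_links()

    def decrypt(self):
        # if self.link is falsy, Python evaluates self.links and raises
        if self.link or self.links:
            return "direct link"

try:
    Decrypter().decrypt()
except AttributeError as exc:
    print(exc)  # 'Decrypter' object has no attribute 'links'
```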
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `module/plugins/crypter/MultiUpOrg.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import re
4 import urlparse
5
6 from module.plugins.internal.SimpleCrypter import SimpleCrypter, create_getInfo
7
8
9 class MultiUpOrg(SimpleCrypter):
10 __name__ = "MultiUpOrg"
11 __type__ = "crypter"
12 __version__ = "0.04"
13 __status__ = "testing"
14
15 __pattern__ = r'http://(?:www\.)?multiup\.org/(en|fr)/(?P<TYPE>project|download|miror)/\w+(/\w+)?'
16 __config__ = [("use_premium" , "bool", "Use premium account if available" , True),
17 ("use_subfolder" , "bool", "Save package to subfolder" , True),
18 ("subfolder_per_pack", "bool", "Create a subfolder for each package", True)]
19
20 __description__ = """MultiUp.org crypter plugin"""
21 __license__ = "GPLv3"
22 __authors__ = [("Walter Purcaro", "[email protected]")]
23
24
25 NAME_PATTERN = r'<title>.*(?:Project|Projet|ownload|élécharger) (?P<N>.+?) (\(|- )'
26
27
28 def get_links(self):
29 m_type = re.match(self.__pattern__, self.pyfile.url).group('TYPE')
30
31 if m_type == "project":
32 pattern = r'\n(http://www\.multiup\.org/(?:en|fr)/download/.*)'
33 else:
34 pattern = r'style="width:97%;text-align:left".*\n.*href="(.*)"'
35 if m_type == "download":
36 dl_pattern = r'href="(.*)">.*\n.*<h5>DOWNLOAD</h5>'
37 miror_page = urlparse.urljoin("http://www.multiup.org/", re.search(dl_pattern, self.html).group(1))
38 self.html = self.load(miror_page)
39
40 return re.findall(pattern, self.html)
41
42
43 getInfo = create_getInfo(MultiUpOrg)
44
```
Path: `module/plugins/internal/SimpleCrypter.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import re
4
5 from module.plugins.internal.Crypter import Crypter
6 from module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo, replace_patterns, set_cookie, set_cookies
7 from module.utils import fixup, html_unescape
8
9
10 class SimpleCrypter(Crypter, SimpleHoster):
11 __name__ = "SimpleCrypter"
12 __type__ = "crypter"
13 __version__ = "0.60"
14 __status__ = "testing"
15
16 __pattern__ = r'^unmatchable$'
17 __config__ = [("use_subfolder" , "bool", "Save package to subfolder" , True), #: Overrides pyload.config['general']['folder_per_package']
18 ("subfolder_per_pack", "bool", "Create a subfolder for each package", True)]
19
20 __description__ = """Simple decrypter plugin"""
21 __license__ = "GPLv3"
22 __authors__ = [("Walter Purcaro", "[email protected]")]
23
24 """
25 Following patterns should be defined by each crypter:
26
27 LINK_PATTERN: Download link or regex to catch links in group(1)
28 example: LINK_PATTERN = r'<div class="link"><a href="(.+?)"'
29
30 NAME_PATTERN: (optional) folder name or page title
31 example: NAME_PATTERN = r'<title>Files of: (?P<N>[^<]+) folder</title>'
32
33 OFFLINE_PATTERN: (optional) Checks if the page is unreachable
34 example: OFFLINE_PATTERN = r'File (deleted|not found)'
35
36 TEMP_OFFLINE_PATTERN: (optional) Checks if the page is temporarily unreachable
37 example: TEMP_OFFLINE_PATTERN = r'Server maintainance'
38
39
40 You can override the getLinks method if you need a more sophisticated way to extract the links.
41
42
43 If the links are splitted on multiple pages you can define the PAGES_PATTERN regex:
44
45 PAGES_PATTERN: (optional) group(1) should be the number of overall pages containing the links
46 example: PAGES_PATTERN = r'Pages: (\d+)'
47
48 and its loadPage method:
49
50
51 def load_page(self, page_n):
52 return the html of the page number page_n
53 """
54
55 DIRECT_LINK = True
56 LEECH_HOSTER = False
57
58
59 #@TODO: Remove in 0.4.10
60 def _setup(self):
61 orig_name = self.__name__
62 self.__name__ = (orig_name + ".py").replace("Folder.py", "").replace(".py", "")
63
64 super(SimpleCrypter, self)._setup()
65
66 self.__name__ = orig_name
67
68
69 #@TODO: Remove in 0.4.10
70 def load_account(self):
71 orig_name = self.__name__
72 self.__name__ = (orig_name + ".py").replace("Folder.py", "").replace(".py", "")
73
74 super(SimpleCrypter, self).load_account()
75
76 self.__name__ = orig_name
77
78
79 def handle_direct(self, pyfile):
80 for i in xrange(self.get_config("maxredirs", plugin="UserAgentSwitcher")):
81 redirect = self.link or pyfile.url
82 self.log_debug("Redirect #%d to: %s" % (i, redirect))
83
84 header = self.load(redirect, just_header=True)
85 if 'location' in header and header['location']:
86 self.link = header['location']
87 else:
88 break
89 else:
90 self.log_error(_("Too many redirects"))
91
92
93 def decrypt(self, pyfile):
94 self.prepare()
95 self.check_info() #@TODO: Remove in 0.4.10
96
97 if self.direct_dl:
98 self.log_debug(_("Looking for direct download link..."))
99 self.handle_direct(pyfile)
100
101 if self.link or self.links or self.urls or self.packages:
102 self.log_info(_("Direct download link detected"))
103 else:
104 self.log_info(_("Direct download link not found"))
105
106 if not (self.link or self.links or self.urls or self.packages):
107 self.preload()
108
109 self.links = self.get_links() or list()
110
111 if hasattr(self, 'PAGES_PATTERN') and hasattr(self, 'loadPage'):
112 self.handle_pages(pyfile)
113
114 self.log_debug("Package has %d links" % len(self.links))
115
116 if self.link:
117 self.urls.append(self.link)
118
119 if self.links:
120 name = folder = pyfile.name
121 self.packages.append((name, self.links, folder))
122
123
124 def get_links(self):
125 """
126 Returns the links extracted from self.html
127 You should override this only if it's impossible to extract links using only the LINK_PATTERN.
128 """
129 return re.findall(self.LINK_PATTERN, self.html)
130
131
132 def handle_pages(self, pyfile):
133 try:
134 pages = int(re.search(self.PAGES_PATTERN, self.html).group(1))
135 except Exception:
136 pages = 1
137
138 for p in xrange(2, pages + 1):
139 self.html = self.load_page(p)
140 self.links += self.get_links()
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/module/plugins/crypter/MultiUpOrg.py b/module/plugins/crypter/MultiUpOrg.py
--- a/module/plugins/crypter/MultiUpOrg.py
+++ b/module/plugins/crypter/MultiUpOrg.py
@@ -9,10 +9,10 @@
class MultiUpOrg(SimpleCrypter):
__name__ = "MultiUpOrg"
__type__ = "crypter"
- __version__ = "0.04"
+ __version__ = "0.05"
__status__ = "testing"
- __pattern__ = r'http://(?:www\.)?multiup\.org/(en|fr)/(?P<TYPE>project|download|miror)/\w+(/\w+)?'
+ __pattern__ = r'http://(?:www\.)?multiup\.org/(en|fr)/(?P<TYPE>project|download|mirror)/\w+(/\w+)?'
__config__ = [("use_premium" , "bool", "Use premium account if available" , True),
("use_subfolder" , "bool", "Save package to subfolder" , True),
("subfolder_per_pack", "bool", "Create a subfolder for each package", True)]
@@ -34,8 +34,8 @@
pattern = r'style="width:97%;text-align:left".*\n.*href="(.*)"'
if m_type == "download":
dl_pattern = r'href="(.*)">.*\n.*<h5>DOWNLOAD</h5>'
- miror_page = urlparse.urljoin("http://www.multiup.org/", re.search(dl_pattern, self.html).group(1))
- self.html = self.load(miror_page)
+ mirror_page = urlparse.urljoin("http://www.multiup.org/", re.search(dl_pattern, self.html).group(1))
+ self.html = self.load(mirror_page)
return re.findall(pattern, self.html)
diff --git a/module/plugins/internal/SimpleCrypter.py b/module/plugins/internal/SimpleCrypter.py
--- a/module/plugins/internal/SimpleCrypter.py
+++ b/module/plugins/internal/SimpleCrypter.py
@@ -10,7 +10,7 @@
class SimpleCrypter(Crypter, SimpleHoster):
__name__ = "SimpleCrypter"
__type__ = "crypter"
- __version__ = "0.60"
+ __version__ = "0.61"
__status__ = "testing"
__pattern__ = r'^unmatchable$'
@@ -91,6 +91,7 @@
def decrypt(self, pyfile):
+ self.links = [] #@TODO: Recheck in 0.4.10
self.prepare()
self.check_info() #@TODO: Remove in 0.4.10
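
For context on the two parts of this fix: the traceback in the issue shows `SimpleCrypter.decrypt()` reading `self.links` on the direct-download path before anything has assigned it, so the patch initializes the attribute at the top of `decrypt()`; the `miror`/`mirror` rename in `MultiUpOrg` is an independent typo fix in the URL pattern and variable name. A minimal sketch of the failure mode (hypothetical code, not from pyload):

```python
# Hypothetical illustration: reading an attribute that no code path has
# assigned raises AttributeError, which is what the log above reports.
class Decrypter:
    def decrypt(self):
        return bool(self.links)      # fails if self.links was never set

d = Decrypter()
try:
    d.decrypt()
except AttributeError as exc:
    print(exc)                       # 'Decrypter' object has no attribute 'links'

d.links = []                         # the patch does the equivalent at the top of decrypt()
print(d.decrypt())                   # False
```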
|
{"golden_diff": "diff --git a/module/plugins/crypter/MultiUpOrg.py b/module/plugins/crypter/MultiUpOrg.py\n--- a/module/plugins/crypter/MultiUpOrg.py\n+++ b/module/plugins/crypter/MultiUpOrg.py\n@@ -9,10 +9,10 @@\n class MultiUpOrg(SimpleCrypter):\n __name__ = \"MultiUpOrg\"\n __type__ = \"crypter\"\n- __version__ = \"0.04\"\n+ __version__ = \"0.05\"\n __status__ = \"testing\"\n \n- __pattern__ = r'http://(?:www\\.)?multiup\\.org/(en|fr)/(?P<TYPE>project|download|miror)/\\w+(/\\w+)?'\n+ __pattern__ = r'http://(?:www\\.)?multiup\\.org/(en|fr)/(?P<TYPE>project|download|mirror)/\\w+(/\\w+)?'\n __config__ = [(\"use_premium\" , \"bool\", \"Use premium account if available\" , True),\n (\"use_subfolder\" , \"bool\", \"Save package to subfolder\" , True),\n (\"subfolder_per_pack\", \"bool\", \"Create a subfolder for each package\", True)]\n@@ -34,8 +34,8 @@\n pattern = r'style=\"width:97%;text-align:left\".*\\n.*href=\"(.*)\"'\n if m_type == \"download\":\n dl_pattern = r'href=\"(.*)\">.*\\n.*<h5>DOWNLOAD</h5>'\n- miror_page = urlparse.urljoin(\"http://www.multiup.org/\", re.search(dl_pattern, self.html).group(1))\n- self.html = self.load(miror_page)\n+ mirror_page = urlparse.urljoin(\"http://www.multiup.org/\", re.search(dl_pattern, self.html).group(1))\n+ self.html = self.load(mirror_page)\n \n return re.findall(pattern, self.html)\n \ndiff --git a/module/plugins/internal/SimpleCrypter.py b/module/plugins/internal/SimpleCrypter.py\n--- a/module/plugins/internal/SimpleCrypter.py\n+++ b/module/plugins/internal/SimpleCrypter.py\n@@ -10,7 +10,7 @@\n class SimpleCrypter(Crypter, SimpleHoster):\n __name__ = \"SimpleCrypter\"\n __type__ = \"crypter\"\n- __version__ = \"0.60\"\n+ __version__ = \"0.61\"\n __status__ = \"testing\"\n \n __pattern__ = r'^unmatchable$'\n@@ -91,6 +91,7 @@\n \n \n def decrypt(self, pyfile):\n+ self.links = [] #@TODO: Recheck in 0.4.10\n self.prepare()\n self.check_info() #@TODO: Remove in 0.4.10\n", "issue": "MultiUpOrg crypter issue\nHi all,\n\nI tried to download a link from the following url : http://www.multiup.org/fr/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi\nWhen I use the debug mode, here is what I get :\n\n```\n18.08.2015 23:55:44 INFO Decrypting starts: Lulu.Vroumette.S1x03_Capitaine.Lulu.avi\n18.08.2015 23:55:44 DEBUG HOOK UserAgentSwitcher: Use custom user-agent string: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0\n18.08.2015 23:55:44 DEBUG CRYPTER MultiUpOrg[9]: PROCESS URL http://www.multiup.org/en/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi | PLUGIN VERSION 0.04\n18.08.2015 23:55:44 INFO CRYPTER MultiUpOrg[9]: Updating file info...\n18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: File info: {'status': 2, 'url': u'http://www.multiup.org/en/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi', 'size': 0, 'name': u'Lulu.Vroumette.S1x03_Capitaine.Lulu.avi', 'pattern': {'TYPE': u'download', 'N': u'Lulu.Vroumette.S1x03_Capitaine.Lulu.avi'}}\n18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: Previous file info: {}\n18.08.2015 23:55:45 INFO CRYPTER MultiUpOrg[9]: File name: Lulu.Vroumette.S1x03_Capitaine.Lulu.avi\n18.08.2015 23:55:45 INFO CRYPTER MultiUpOrg[9]: File size: Unknown\n18.08.2015 23:55:45 INFO CRYPTER MultiUpOrg[9]: File status: online\n18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: Looking for direct download link...\n18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: Redirect #0 to: 
http://www.multiup.org/en/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi\n18.08.2015 23:55:45 DEBUG CRYPTER MultiUpOrg[9]: LOAD URL http://www.multiup.org/en/download/acc13ec3f1b1214919b55d4a70c16190/Lulu.Vroumette.S1x03_Capitaine.Lulu.avi | cookies=True | get={} | req=None | decode=True | multipart=False | post={} | ref=True | just_header=True\n18.08.2015 23:55:45 ERROR Decrypting failed: Lulu.Vroumette.S1x03_Capitaine.Lulu.avi | 'MultiUpOrg' object has no attribute 'links'\nTraceback (most recent call last):\n File \"/home.sav/dl/pyload/module/PluginThread.py\", line 363, in run\n self.active.plugin.preprocessing(self)\n File \"/home.sav/dl/pyload/.pyload/userplugins/internal/Hoster.py\", line 185, in preprocessing\n return self.process(self.pyfile)\n File \"/home.sav/dl/pyload/module/plugins/internal/Crypter.py\", line 41, in process\n self.decrypt(pyfile)\n File \"/home.sav/dl/pyload/module/plugins/internal/SimpleCrypter.py\", line 101, in decrypt\n if self.link or self.links or self.urls or self.packages:\nAttributeError: 'MultiUpOrg' object has no attribute 'links'\n```\n\nI tried to modify a bit the following file \"module/plugins/crypter/MultiUpOrg.py\" (\nversion \"0.04\") but got no success (in particular, when using french, there is no \"<h5>DOWNLOAD</h5>\" but \"<h5>T\u00c9L\u00c9CHARGER</h5>'.\nBut it's not the cause of the issue as I tried with the \"english\" version of the url...\n\nPlease could you help me ;)\n\nRegards\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\nimport urlparse\n\nfrom module.plugins.internal.SimpleCrypter import SimpleCrypter, create_getInfo\n\n\nclass MultiUpOrg(SimpleCrypter):\n __name__ = \"MultiUpOrg\"\n __type__ = \"crypter\"\n __version__ = \"0.04\"\n __status__ = \"testing\"\n\n __pattern__ = r'http://(?:www\\.)?multiup\\.org/(en|fr)/(?P<TYPE>project|download|miror)/\\w+(/\\w+)?'\n __config__ = [(\"use_premium\" , \"bool\", \"Use premium account if available\" , True),\n (\"use_subfolder\" , \"bool\", \"Save package to subfolder\" , True),\n (\"subfolder_per_pack\", \"bool\", \"Create a subfolder for each package\", True)]\n\n __description__ = \"\"\"MultiUp.org crypter plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n NAME_PATTERN = r'<title>.*(?:Project|Projet|ownload|\u00e9l\u00e9charger) (?P<N>.+?) 
(\\(|- )'\n\n\n def get_links(self):\n m_type = re.match(self.__pattern__, self.pyfile.url).group('TYPE')\n\n if m_type == \"project\":\n pattern = r'\\n(http://www\\.multiup\\.org/(?:en|fr)/download/.*)'\n else:\n pattern = r'style=\"width:97%;text-align:left\".*\\n.*href=\"(.*)\"'\n if m_type == \"download\":\n dl_pattern = r'href=\"(.*)\">.*\\n.*<h5>DOWNLOAD</h5>'\n miror_page = urlparse.urljoin(\"http://www.multiup.org/\", re.search(dl_pattern, self.html).group(1))\n self.html = self.load(miror_page)\n\n return re.findall(pattern, self.html)\n\n\ngetInfo = create_getInfo(MultiUpOrg)\n", "path": "module/plugins/crypter/MultiUpOrg.py"}, {"content": "# -*- coding: utf-8 -*-\n\nimport re\n\nfrom module.plugins.internal.Crypter import Crypter\nfrom module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo, replace_patterns, set_cookie, set_cookies\nfrom module.utils import fixup, html_unescape\n\n\nclass SimpleCrypter(Crypter, SimpleHoster):\n __name__ = \"SimpleCrypter\"\n __type__ = \"crypter\"\n __version__ = \"0.60\"\n __status__ = \"testing\"\n\n __pattern__ = r'^unmatchable$'\n __config__ = [(\"use_subfolder\" , \"bool\", \"Save package to subfolder\" , True), #: Overrides pyload.config['general']['folder_per_package']\n (\"subfolder_per_pack\", \"bool\", \"Create a subfolder for each package\", True)]\n\n __description__ = \"\"\"Simple decrypter plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n \"\"\"\n Following patterns should be defined by each crypter:\n\n LINK_PATTERN: Download link or regex to catch links in group(1)\n example: LINK_PATTERN = r'<div class=\"link\"><a href=\"(.+?)\"'\n\n NAME_PATTERN: (optional) folder name or page title\n example: NAME_PATTERN = r'<title>Files of: (?P<N>[^<]+) folder</title>'\n\n OFFLINE_PATTERN: (optional) Checks if the page is unreachable\n example: OFFLINE_PATTERN = r'File (deleted|not found)'\n\n TEMP_OFFLINE_PATTERN: (optional) Checks if the page is temporarily unreachable\n example: TEMP_OFFLINE_PATTERN = r'Server maintainance'\n\n\n You can override the getLinks method if you need a more sophisticated way to extract the links.\n\n\n If the links are splitted on multiple pages you can define the PAGES_PATTERN regex:\n\n PAGES_PATTERN: (optional) group(1) should be the number of overall pages containing the links\n example: PAGES_PATTERN = r'Pages: (\\d+)'\n\n and its loadPage method:\n\n\n def load_page(self, page_n):\n return the html of the page number page_n\n \"\"\"\n\n DIRECT_LINK = True\n LEECH_HOSTER = False\n\n\n #@TODO: Remove in 0.4.10\n def _setup(self):\n orig_name = self.__name__\n self.__name__ = (orig_name + \".py\").replace(\"Folder.py\", \"\").replace(\".py\", \"\")\n\n super(SimpleCrypter, self)._setup()\n\n self.__name__ = orig_name\n\n\n #@TODO: Remove in 0.4.10\n def load_account(self):\n orig_name = self.__name__\n self.__name__ = (orig_name + \".py\").replace(\"Folder.py\", \"\").replace(\".py\", \"\")\n\n super(SimpleCrypter, self).load_account()\n\n self.__name__ = orig_name\n\n\n def handle_direct(self, pyfile):\n for i in xrange(self.get_config(\"maxredirs\", plugin=\"UserAgentSwitcher\")):\n redirect = self.link or pyfile.url\n self.log_debug(\"Redirect #%d to: %s\" % (i, redirect))\n\n header = self.load(redirect, just_header=True)\n if 'location' in header and header['location']:\n self.link = header['location']\n else:\n break\n else:\n self.log_error(_(\"Too many redirects\"))\n\n\n def decrypt(self, pyfile):\n self.prepare()\n 
self.check_info() #@TODO: Remove in 0.4.10\n\n if self.direct_dl:\n self.log_debug(_(\"Looking for direct download link...\"))\n self.handle_direct(pyfile)\n\n if self.link or self.links or self.urls or self.packages:\n self.log_info(_(\"Direct download link detected\"))\n else:\n self.log_info(_(\"Direct download link not found\"))\n\n if not (self.link or self.links or self.urls or self.packages):\n self.preload()\n\n self.links = self.get_links() or list()\n\n if hasattr(self, 'PAGES_PATTERN') and hasattr(self, 'loadPage'):\n self.handle_pages(pyfile)\n\n self.log_debug(\"Package has %d links\" % len(self.links))\n\n if self.link:\n self.urls.append(self.link)\n\n if self.links:\n name = folder = pyfile.name\n self.packages.append((name, self.links, folder))\n\n\n def get_links(self):\n \"\"\"\n Returns the links extracted from self.html\n You should override this only if it's impossible to extract links using only the LINK_PATTERN.\n \"\"\"\n return re.findall(self.LINK_PATTERN, self.html)\n\n\n def handle_pages(self, pyfile):\n try:\n pages = int(re.search(self.PAGES_PATTERN, self.html).group(1))\n except Exception:\n pages = 1\n\n for p in xrange(2, pages + 1):\n self.html = self.load_page(p)\n self.links += self.get_links()\n", "path": "module/plugins/internal/SimpleCrypter.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\nimport urlparse\n\nfrom module.plugins.internal.SimpleCrypter import SimpleCrypter, create_getInfo\n\n\nclass MultiUpOrg(SimpleCrypter):\n __name__ = \"MultiUpOrg\"\n __type__ = \"crypter\"\n __version__ = \"0.05\"\n __status__ = \"testing\"\n\n __pattern__ = r'http://(?:www\\.)?multiup\\.org/(en|fr)/(?P<TYPE>project|download|mirror)/\\w+(/\\w+)?'\n __config__ = [(\"use_premium\" , \"bool\", \"Use premium account if available\" , True),\n (\"use_subfolder\" , \"bool\", \"Save package to subfolder\" , True),\n (\"subfolder_per_pack\", \"bool\", \"Create a subfolder for each package\", True)]\n\n __description__ = \"\"\"MultiUp.org crypter plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n NAME_PATTERN = r'<title>.*(?:Project|Projet|ownload|\u00e9l\u00e9charger) (?P<N>.+?) 
(\\(|- )'\n\n\n def get_links(self):\n m_type = re.match(self.__pattern__, self.pyfile.url).group('TYPE')\n\n if m_type == \"project\":\n pattern = r'\\n(http://www\\.multiup\\.org/(?:en|fr)/download/.*)'\n else:\n pattern = r'style=\"width:97%;text-align:left\".*\\n.*href=\"(.*)\"'\n if m_type == \"download\":\n dl_pattern = r'href=\"(.*)\">.*\\n.*<h5>DOWNLOAD</h5>'\n mirror_page = urlparse.urljoin(\"http://www.multiup.org/\", re.search(dl_pattern, self.html).group(1))\n self.html = self.load(mirror_page)\n\n return re.findall(pattern, self.html)\n\n\ngetInfo = create_getInfo(MultiUpOrg)\n", "path": "module/plugins/crypter/MultiUpOrg.py"}, {"content": "# -*- coding: utf-8 -*-\n\nimport re\n\nfrom module.plugins.internal.Crypter import Crypter\nfrom module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo, replace_patterns, set_cookie, set_cookies\nfrom module.utils import fixup, html_unescape\n\n\nclass SimpleCrypter(Crypter, SimpleHoster):\n __name__ = \"SimpleCrypter\"\n __type__ = \"crypter\"\n __version__ = \"0.61\"\n __status__ = \"testing\"\n\n __pattern__ = r'^unmatchable$'\n __config__ = [(\"use_subfolder\" , \"bool\", \"Save package to subfolder\" , True), #: Overrides pyload.config['general']['folder_per_package']\n (\"subfolder_per_pack\", \"bool\", \"Create a subfolder for each package\", True)]\n\n __description__ = \"\"\"Simple decrypter plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n \"\"\"\n Following patterns should be defined by each crypter:\n\n LINK_PATTERN: Download link or regex to catch links in group(1)\n example: LINK_PATTERN = r'<div class=\"link\"><a href=\"(.+?)\"'\n\n NAME_PATTERN: (optional) folder name or page title\n example: NAME_PATTERN = r'<title>Files of: (?P<N>[^<]+) folder</title>'\n\n OFFLINE_PATTERN: (optional) Checks if the page is unreachable\n example: OFFLINE_PATTERN = r'File (deleted|not found)'\n\n TEMP_OFFLINE_PATTERN: (optional) Checks if the page is temporarily unreachable\n example: TEMP_OFFLINE_PATTERN = r'Server maintainance'\n\n\n You can override the getLinks method if you need a more sophisticated way to extract the links.\n\n\n If the links are splitted on multiple pages you can define the PAGES_PATTERN regex:\n\n PAGES_PATTERN: (optional) group(1) should be the number of overall pages containing the links\n example: PAGES_PATTERN = r'Pages: (\\d+)'\n\n and its loadPage method:\n\n\n def load_page(self, page_n):\n return the html of the page number page_n\n \"\"\"\n\n DIRECT_LINK = True\n LEECH_HOSTER = False\n\n\n #@TODO: Remove in 0.4.10\n def _setup(self):\n orig_name = self.__name__\n self.__name__ = (orig_name + \".py\").replace(\"Folder.py\", \"\").replace(\".py\", \"\")\n\n super(SimpleCrypter, self)._setup()\n\n self.__name__ = orig_name\n\n\n #@TODO: Remove in 0.4.10\n def load_account(self):\n orig_name = self.__name__\n self.__name__ = (orig_name + \".py\").replace(\"Folder.py\", \"\").replace(\".py\", \"\")\n\n super(SimpleCrypter, self).load_account()\n\n self.__name__ = orig_name\n\n\n def handle_direct(self, pyfile):\n for i in xrange(self.get_config(\"maxredirs\", plugin=\"UserAgentSwitcher\")):\n redirect = self.link or pyfile.url\n self.log_debug(\"Redirect #%d to: %s\" % (i, redirect))\n\n header = self.load(redirect, just_header=True)\n if 'location' in header and header['location']:\n self.link = header['location']\n else:\n break\n else:\n self.log_error(_(\"Too many redirects\"))\n\n\n def decrypt(self, pyfile):\n self.links = [] #@TODO: 
Recheck in 0.4.10\n self.prepare()\n self.check_info() #@TODO: Remove in 0.4.10\n\n if self.direct_dl:\n self.log_debug(_(\"Looking for direct download link...\"))\n self.handle_direct(pyfile)\n\n if self.link or self.links or self.urls or self.packages:\n self.log_info(_(\"Direct download link detected\"))\n else:\n self.log_info(_(\"Direct download link not found\"))\n\n if not (self.link or self.links or self.urls or self.packages):\n self.preload()\n\n self.links = self.get_links() or list()\n\n if hasattr(self, 'PAGES_PATTERN') and hasattr(self, 'loadPage'):\n self.handle_pages(pyfile)\n\n self.log_debug(\"Package has %d links\" % len(self.links))\n\n if self.link:\n self.urls.append(self.link)\n\n if self.links:\n name = folder = pyfile.name\n self.packages.append((name, self.links, folder))\n\n\n def get_links(self):\n \"\"\"\n Returns the links extracted from self.html\n You should override this only if it's impossible to extract links using only the LINK_PATTERN.\n \"\"\"\n return re.findall(self.LINK_PATTERN, self.html)\n\n\n def handle_pages(self, pyfile):\n try:\n pages = int(re.search(self.PAGES_PATTERN, self.html).group(1))\n except Exception:\n pages = 1\n\n for p in xrange(2, pages + 1):\n self.html = self.load_page(p)\n self.links += self.get_links()\n", "path": "module/plugins/internal/SimpleCrypter.py"}]}
| 3,481 | 643 |
gh_patches_debug_2576
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-1641
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: StorageObject doesn't return < 0 when using expiry
### Description
When the stored value is expired, the returned interval is set to 86400, so the key will therefore not be treated as expired.
### URL to code causing the issue
https://github.com/litestar-org/litestar/blob/main/litestar/stores/base.py#L122
### MCVE
```python
from pathlib import Path
from litestar.stores.file import FileStore
store = FileStore(path=Path("test.db"))
async def setstore() -> None:
await store.set("test", "value", expires_in=5)
return None
async def getstore() -> int:
expiry = await store.expires_in("test")
return expiry
```
### Steps to reproduce
_No response_
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
`litestar==2.0.0a5`
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [X] Other (Please specify in the description above)
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/stores/base.py`
Content:
```
1 from __future__ import annotations
2
3 from abc import ABC, abstractmethod
4 from datetime import datetime, timedelta, timezone
5 from typing import TYPE_CHECKING, Optional
6
7 from msgspec import Struct
8 from msgspec.msgpack import decode as msgpack_decode
9 from msgspec.msgpack import encode as msgpack_encode
10
11 if TYPE_CHECKING:
12 from typing_extensions import Self
13
14
15 __all__ = ("Store", "NamespacedStore", "StorageObject")
16
17
18 class Store(ABC): # pragma: no cover
19 """Thread and process safe asynchronous key/value store."""
20
21 @abstractmethod
22 async def set(self, key: str, value: str | bytes, expires_in: int | timedelta | None = None) -> None:
23 """Set a value.
24
25 Args:
26 key: Key to associate the value with
27 value: Value to store
28 expires_in: Time in seconds before the key is considered expired
29
30 Returns:
31 ``None``
32 """
33 raise NotImplementedError
34
35 @abstractmethod
36 async def get(self, key: str, renew_for: int | timedelta | None = None) -> bytes | None:
37 """Get a value.
38
39 Args:
40 key: Key associated with the value
41 renew_for: If given and the value had an initial expiry time set, renew the
42 expiry time for ``renew_for`` seconds. If the value has not been set
43 with an expiry time this is a no-op
44
45 Returns:
46 The value associated with ``key`` if it exists and is not expired, else
47 ``None``
48 """
49 raise NotImplementedError
50
51 @abstractmethod
52 async def delete(self, key: str) -> None:
53 """Delete a value.
54
55 If no such key exists, this is a no-op.
56
57 Args:
58 key: Key of the value to delete
59 """
60 raise NotImplementedError
61
62 @abstractmethod
63 async def delete_all(self) -> None:
64 """Delete all stored values."""
65 raise NotImplementedError
66
67 @abstractmethod
68 async def exists(self, key: str) -> bool:
69 """Check if a given ``key`` exists."""
70 raise NotImplementedError
71
72 @abstractmethod
73 async def expires_in(self, key: str) -> int | None:
74 """Get the time in seconds ``key`` expires in. If no such ``key`` exists or no
75 expiry time was set, return ``None``.
76 """
77 raise NotImplementedError
78
79
80 class NamespacedStore(Store):
81 """A subclass of :class:`Store`, offering hierarchical namespacing.
82
83 Bulk actions on a parent namespace should affect all child namespaces, whereas other operations on all namespaces
84 should be isolated.
85 """
86
87 @abstractmethod
88 def with_namespace(self, namespace: str) -> Self:
89 """Return a new instance of :class:`NamespacedStore`, which exists in a child namespace of the current namespace.
90 Bulk actions on the parent namespace should affect all child namespaces, whereas other operations on all
91 namespaces should be isolated.
92 """
93
94
95 class StorageObject(Struct):
96 """:class:`msgspec.Struct` to store serialized data alongside with their expiry time."""
97
98 expires_at: Optional[datetime] # noqa: UP007
99 data: bytes
100
101 @classmethod
102 def new(cls, data: bytes, expires_in: int | timedelta | None) -> StorageObject:
103 """Construct a new :class:`StorageObject` instance."""
104 if expires_in is not None and not isinstance(expires_in, timedelta):
105 expires_in = timedelta(seconds=expires_in)
106 return cls(
107 data=data,
108 expires_at=(datetime.now(tz=timezone.utc) + expires_in) if expires_in else None,
109 )
110
111 @property
112 def expired(self) -> bool:
113 """Return if the :class:`StorageObject` is expired"""
114 return self.expires_at is not None and datetime.now(tz=timezone.utc) >= self.expires_at
115
116 @property
117 def expires_in(self) -> int:
118 """Return the expiry time of this ``StorageObject`` in seconds. If no expiry time
119 was set, return ``-1``.
120 """
121 if self.expires_at:
122 return (self.expires_at - datetime.now(tz=timezone.utc)).seconds
123 return -1
124
125 def to_bytes(self) -> bytes:
126 """Encode the instance to bytes"""
127 return msgpack_encode(self)
128
129 @classmethod
130 def from_bytes(cls, raw: bytes) -> StorageObject:
131 """Load a previously encoded with :meth:`StorageObject.to_bytes`"""
132 return msgpack_decode(raw, type=cls)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/litestar/stores/base.py b/litestar/stores/base.py
--- a/litestar/stores/base.py
+++ b/litestar/stores/base.py
@@ -119,7 +119,7 @@
was set, return ``-1``.
"""
if self.expires_at:
- return (self.expires_at - datetime.now(tz=timezone.utc)).seconds
+ return int(self.expires_at.timestamp() - datetime.now(tz=timezone.utc).timestamp())
return -1
def to_bytes(self) -> bytes:
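
For context on why the one-line change above is enough: `timedelta.seconds` is only the normalized seconds component of an interval (always in `0..86399`), so once `expires_at` lies in the past the old expression wraps toward 86400 instead of going negative, which is exactly the behaviour reported in the issue. Comparing POSIX timestamps instead yields a signed number of seconds. A minimal sketch (hypothetical, outside the litestar codebase):

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(tz=timezone.utc)
expires_at = now - timedelta(seconds=5)                # expired five seconds ago

# Old behaviour: the seconds *component* of a negative timedelta wraps around.
print((expires_at - now).seconds)                      # 86395, not -5

# Patched behaviour: a plain timestamp difference stays negative once expired.
print(int(expires_at.timestamp() - now.timestamp()))   # -5
```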
|
{"golden_diff": "diff --git a/litestar/stores/base.py b/litestar/stores/base.py\n--- a/litestar/stores/base.py\n+++ b/litestar/stores/base.py\n@@ -119,7 +119,7 @@\n was set, return ``-1``.\n \"\"\"\n if self.expires_at:\n- return (self.expires_at - datetime.now(tz=timezone.utc)).seconds\n+ return int(self.expires_at.timestamp() - datetime.now(tz=timezone.utc).timestamp())\n return -1\n \n def to_bytes(self) -> bytes:\n", "issue": "Bug: StorageObject doesn't return < 0 when using expiry\n### Description\n\nWhen the stored value is expired, the returned interval is set to 86400 and will therefore not expire. \n\n### URL to code causing the issue\n\nhttps://github.com/litestar-org/litestar/blob/main/litestar/stores/base.py#L122\n\n### MCVE\n\n```python\nfrom pathlib import Path\r\nfrom litestar.stores.file import FileStore\r\n\r\nstore = FileStore(path=Path(\"test.db\"))\r\n\r\nasync def setstore() -> None:\r\n await store.set(\"test\", \"value\", expires_in=5)\r\n return None\r\n\r\nasync def getstore() -> int:\r\n expiry = await store.expires_in(\"test\")\r\n return expiry\n```\n\n\n### Steps to reproduce\n\n_No response_\n\n### Screenshots\n\n```bash\n\"\"\n```\n\n\n### Logs\n\n_No response_\n\n### Litestar Version\n\n`litestar==2.0.0a5`\n\n### Platform\n\n- [ ] Linux\n- [X] Mac\n- [ ] Windows\n- [X] Other (Please specify in the description above)\nStaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom datetime import datetime, timedelta, timezone\nfrom typing import TYPE_CHECKING, Optional\n\nfrom msgspec import Struct\nfrom msgspec.msgpack import decode as msgpack_decode\nfrom msgspec.msgpack import encode as msgpack_encode\n\nif TYPE_CHECKING:\n from typing_extensions import Self\n\n\n__all__ = (\"Store\", \"NamespacedStore\", \"StorageObject\")\n\n\nclass Store(ABC): # pragma: no cover\n \"\"\"Thread and process safe asynchronous key/value store.\"\"\"\n\n @abstractmethod\n async def set(self, key: str, value: str | bytes, expires_in: int | timedelta | None = None) -> None:\n \"\"\"Set a value.\n\n Args:\n key: Key to associate the value with\n value: Value to store\n expires_in: Time in seconds before the key is considered expired\n\n Returns:\n ``None``\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def get(self, key: str, renew_for: int | timedelta | None = None) -> bytes | None:\n \"\"\"Get a value.\n\n Args:\n key: Key associated with the value\n renew_for: If given and the value had an initial expiry time set, renew the\n expiry time for ``renew_for`` seconds. 
If the value has not been set\n with an expiry time this is a no-op\n\n Returns:\n The value associated with ``key`` if it exists and is not expired, else\n ``None``\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def delete(self, key: str) -> None:\n \"\"\"Delete a value.\n\n If no such key exists, this is a no-op.\n\n Args:\n key: Key of the value to delete\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def delete_all(self) -> None:\n \"\"\"Delete all stored values.\"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def exists(self, key: str) -> bool:\n \"\"\"Check if a given ``key`` exists.\"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def expires_in(self, key: str) -> int | None:\n \"\"\"Get the time in seconds ``key`` expires in. If no such ``key`` exists or no\n expiry time was set, return ``None``.\n \"\"\"\n raise NotImplementedError\n\n\nclass NamespacedStore(Store):\n \"\"\"A subclass of :class:`Store`, offering hierarchical namespacing.\n\n Bulk actions on a parent namespace should affect all child namespaces, whereas other operations on all namespaces\n should be isolated.\n \"\"\"\n\n @abstractmethod\n def with_namespace(self, namespace: str) -> Self:\n \"\"\"Return a new instance of :class:`NamespacedStore`, which exists in a child namespace of the current namespace.\n Bulk actions on the parent namespace should affect all child namespaces, whereas other operations on all\n namespaces should be isolated.\n \"\"\"\n\n\nclass StorageObject(Struct):\n \"\"\":class:`msgspec.Struct` to store serialized data alongside with their expiry time.\"\"\"\n\n expires_at: Optional[datetime] # noqa: UP007\n data: bytes\n\n @classmethod\n def new(cls, data: bytes, expires_in: int | timedelta | None) -> StorageObject:\n \"\"\"Construct a new :class:`StorageObject` instance.\"\"\"\n if expires_in is not None and not isinstance(expires_in, timedelta):\n expires_in = timedelta(seconds=expires_in)\n return cls(\n data=data,\n expires_at=(datetime.now(tz=timezone.utc) + expires_in) if expires_in else None,\n )\n\n @property\n def expired(self) -> bool:\n \"\"\"Return if the :class:`StorageObject` is expired\"\"\"\n return self.expires_at is not None and datetime.now(tz=timezone.utc) >= self.expires_at\n\n @property\n def expires_in(self) -> int:\n \"\"\"Return the expiry time of this ``StorageObject`` in seconds. 
If no expiry time\n was set, return ``-1``.\n \"\"\"\n if self.expires_at:\n return (self.expires_at - datetime.now(tz=timezone.utc)).seconds\n return -1\n\n def to_bytes(self) -> bytes:\n \"\"\"Encode the instance to bytes\"\"\"\n return msgpack_encode(self)\n\n @classmethod\n def from_bytes(cls, raw: bytes) -> StorageObject:\n \"\"\"Load a previously encoded with :meth:`StorageObject.to_bytes`\"\"\"\n return msgpack_decode(raw, type=cls)\n", "path": "litestar/stores/base.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom datetime import datetime, timedelta, timezone\nfrom typing import TYPE_CHECKING, Optional\n\nfrom msgspec import Struct\nfrom msgspec.msgpack import decode as msgpack_decode\nfrom msgspec.msgpack import encode as msgpack_encode\n\nif TYPE_CHECKING:\n from typing_extensions import Self\n\n\n__all__ = (\"Store\", \"NamespacedStore\", \"StorageObject\")\n\n\nclass Store(ABC): # pragma: no cover\n \"\"\"Thread and process safe asynchronous key/value store.\"\"\"\n\n @abstractmethod\n async def set(self, key: str, value: str | bytes, expires_in: int | timedelta | None = None) -> None:\n \"\"\"Set a value.\n\n Args:\n key: Key to associate the value with\n value: Value to store\n expires_in: Time in seconds before the key is considered expired\n\n Returns:\n ``None``\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def get(self, key: str, renew_for: int | timedelta | None = None) -> bytes | None:\n \"\"\"Get a value.\n\n Args:\n key: Key associated with the value\n renew_for: If given and the value had an initial expiry time set, renew the\n expiry time for ``renew_for`` seconds. If the value has not been set\n with an expiry time this is a no-op\n\n Returns:\n The value associated with ``key`` if it exists and is not expired, else\n ``None``\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def delete(self, key: str) -> None:\n \"\"\"Delete a value.\n\n If no such key exists, this is a no-op.\n\n Args:\n key: Key of the value to delete\n \"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def delete_all(self) -> None:\n \"\"\"Delete all stored values.\"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def exists(self, key: str) -> bool:\n \"\"\"Check if a given ``key`` exists.\"\"\"\n raise NotImplementedError\n\n @abstractmethod\n async def expires_in(self, key: str) -> int | None:\n \"\"\"Get the time in seconds ``key`` expires in. 
If no such ``key`` exists or no\n expiry time was set, return ``None``.\n \"\"\"\n raise NotImplementedError\n\n\nclass NamespacedStore(Store):\n \"\"\"A subclass of :class:`Store`, offering hierarchical namespacing.\n\n Bulk actions on a parent namespace should affect all child namespaces, whereas other operations on all namespaces\n should be isolated.\n \"\"\"\n\n @abstractmethod\n def with_namespace(self, namespace: str) -> Self:\n \"\"\"Return a new instance of :class:`NamespacedStore`, which exists in a child namespace of the current namespace.\n Bulk actions on the parent namespace should affect all child namespaces, whereas other operations on all\n namespaces should be isolated.\n \"\"\"\n\n\nclass StorageObject(Struct):\n \"\"\":class:`msgspec.Struct` to store serialized data alongside with their expiry time.\"\"\"\n\n expires_at: Optional[datetime] # noqa: UP007\n data: bytes\n\n @classmethod\n def new(cls, data: bytes, expires_in: int | timedelta | None) -> StorageObject:\n \"\"\"Construct a new :class:`StorageObject` instance.\"\"\"\n if expires_in is not None and not isinstance(expires_in, timedelta):\n expires_in = timedelta(seconds=expires_in)\n return cls(\n data=data,\n expires_at=(datetime.now(tz=timezone.utc) + expires_in) if expires_in else None,\n )\n\n @property\n def expired(self) -> bool:\n \"\"\"Return if the :class:`StorageObject` is expired\"\"\"\n return self.expires_at is not None and datetime.now(tz=timezone.utc) >= self.expires_at\n\n @property\n def expires_in(self) -> int:\n \"\"\"Return the expiry time of this ``StorageObject`` in seconds. If no expiry time\n was set, return ``-1``.\n \"\"\"\n if self.expires_at:\n return int(self.expires_at.timestamp() - datetime.now(tz=timezone.utc).timestamp())\n return -1\n\n def to_bytes(self) -> bytes:\n \"\"\"Encode the instance to bytes\"\"\"\n return msgpack_encode(self)\n\n @classmethod\n def from_bytes(cls, raw: bytes) -> StorageObject:\n \"\"\"Load a previously encoded with :meth:`StorageObject.to_bytes`\"\"\"\n return msgpack_decode(raw, type=cls)\n", "path": "litestar/stores/base.py"}]}
| 1,992 | 124 |
gh_patches_debug_34283
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-1495
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
no type named uint16 in rust
### Description
So, I think I correctly installed pwndbg on my system. When I run simple Rust code like this:
```rust
use std::io;
fn main() {
let mut input = String::new();
io::stdin().read_line(&mut input);
}
```
and then try to debug it using gdb with pwndbg, I get this problem:
```
Traceback (most recent call last):
File "/usr/share/pwndbg/pwndbg/events.py", line 165, in caller
func()
File "/usr/share/pwndbg/pwndbg/typeinfo.py", line 51, in update
module.ushort = lookup_types('unsigned short', 'ushort', 'u16', 'uint16')
File "/usr/share/pwndbg/pwndbg/typeinfo.py", line 39, in lookup_types
raise exc
File "/usr/share/pwndbg/pwndbg/typeinfo.py", line 36, in lookup_types
return gdb.lookup_type(type_str)
gdb.error: No type named uint16.
```
### My setup
gdb
```
Name : gdb
Version : 10.1-2
Description : The GNU Debugger
Architecture : x86_64
URL : https://www.gnu.org/software/gdb/
Licenses : GPL3
Groups : None
Provides : None
Depends On : glibc ncurses libncursesw.so=6-64 gcc-libs expat xz mpfr source-highlight gdb-common=10.1 readline libreadline.so=8-64 guile python libelf
Optional Deps : None
Required By : peda pwndbg-git
Optional For : rustup
Conflicts With : None
Replaces : None
Installed Size : 9.72 MiB
Packager : Anatol Pomozov <[email protected]>
Build Date : Wed 04 Nov 2020 08:44:41 PM CET
Install Date : Sat 14 Nov 2020 11:15:29 PM CET
Install Reason : Installed as a dependency for another package
Install Script : No
Validated By : Signature
```
python
```
Name : python
Version : 3.8.6-1
Description : Next generation of the python high-level scripting language
Architecture : x86_64
URL : https://www.python.org/
Licenses : custom
Groups : None
Provides : python3
Depends On : bzip2 expat gdbm libffi libnsl libxcrypt openssl zlib
Optional Deps : python-setuptools [installed]
python-pip [installed]
sqlite [installed]
mpdecimal: for decimal
xz: for lzma [installed]
tk: for tkinter [installed]
Required By : ...
Optional For : ...
Conflicts With : None
Replaces : python3
Installed Size : 77.85 MiB
Packager : Felix Yan <[email protected]>
Build Date : Wed 30 Sep 2020 06:00:38 AM CEST
Install Date : Fri 09 Oct 2020 09:13:49 PM CEST
Install Reason : Installed as a dependency for another package
Install Script : No
Validated By : Signature
```
pwndbg
```
Name : pwndbg-git
Version : 2020.07.23.879.29f962c-1
Description : Makes debugging with GDB suck less
Architecture : any
URL : https://github.com/pwndbg/pwndbg
Licenses : MIT
Groups : None
Provides : pwndbg
Depends On : gdb python-capstone python-unicorn python-pycparser python-psutil python-ptrace python-pyelftools python-six python-future python-pygments
Optional Deps : checksec: checksec command support
ropper: ropper command support
ropgadget: ropgadget command support
radare2: radare2 command support
Required By : None
Optional For : None
Conflicts With : pwndbg
Replaces : None
Installed Size : 3.89 MiB
Packager : Unknown Packager
Build Date : Sat 14 Nov 2020 11:51:32 PM CET
Install Date : Mon 16 Nov 2020 11:00:32 PM CET
Install Reason : Explicitly installed
Install Script : No
Validated By : None
```
rustup 1.22.1 (2020-07-08)
rustc 1.47.0 (18bf6b4f0 2020-10-07)
#### Small remark
I installed pwndbg using `pacman` at first, but it was not able to pick up the correct Python version, so I had to install all the dependencies with pip myself (I used bits of information from `setup.sh`).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/gdblib/typeinfo.py`
Content:
```
1 """
2 Common types, and routines for manually loading types from file
3 via GCC.
4 """
5
6 import sys
7
8 import gdb
9
10 import pwndbg.lib.gcc
11 import pwndbg.lib.memoize
12 import pwndbg.lib.tempfile
13
14 module = sys.modules[__name__]
15 ptrsize: int
16
17
18 def lookup_types(*types):
19 for type_str in types:
20 try:
21 return gdb.lookup_type(type_str)
22 except Exception as e:
23 exc = e
24 raise exc
25
26
27 def update():
28 module.char = gdb.lookup_type("char")
29 module.ulong = lookup_types("unsigned long", "uint", "u32", "uint32")
30 module.long = lookup_types("long", "int", "i32", "int32")
31 module.uchar = lookup_types("unsigned char", "ubyte", "u8", "uint8")
32 module.ushort = lookup_types("unsigned short", "ushort", "u16", "uint16")
33 module.uint = lookup_types("unsigned int", "uint", "u32", "uint32")
34 module.void = lookup_types("void", "()")
35
36 module.uint8 = module.uchar
37 module.uint16 = module.ushort
38 module.uint32 = module.uint
39 module.uint64 = lookup_types("unsigned long long", "ulong", "u64", "uint64")
40 module.unsigned = {
41 1: module.uint8,
42 2: module.uint16,
43 4: module.uint32,
44 8: module.uint64,
45 }
46
47 module.int8 = lookup_types("char", "i8", "int8")
48 module.int16 = lookup_types("short", "i16", "int16")
49 module.int32 = lookup_types("int", "i32", "int32")
50 module.int64 = lookup_types("long long", "long", "i64", "int64")
51 module.signed = {1: module.int8, 2: module.int16, 4: module.int32, 8: module.int64}
52
53 module.pvoid = void.pointer()
54 module.ppvoid = pvoid.pointer()
55 module.pchar = char.pointer()
56
57 module.ptrsize = pvoid.sizeof
58
59 if pvoid.sizeof == 4:
60 module.ptrdiff = module.uint32
61 module.size_t = module.uint32
62 module.ssize_t = module.int32
63 elif pvoid.sizeof == 8:
64 module.ptrdiff = module.uint64
65 module.size_t = module.uint64
66 module.ssize_t = module.int64
67 else:
68 raise Exception("Pointer size not supported")
69 module.null = gdb.Value(0).cast(void)
70
71
72 # TODO: Remove this global initialization, or move it somewhere else
73 # Call it once so we load all of the types
74 update()
75
76
77 def load(name):
78 """Load a GDB symbol; note that new symbols can be added with `add-symbol-file` functionality"""
79 try:
80 return gdb.lookup_type(name)
81 except gdb.error:
82 return None
83
84
85 def read_gdbvalue(type_name, addr):
86 """Read the memory contents at addr and interpret them as a GDB value with the given type"""
87 gdb_type = pwndbg.gdblib.typeinfo.load(type_name)
88 return gdb.Value(addr).cast(gdb_type.pointer()).dereference()
89
90
91 def get_type(size):
92 return {
93 1: pwndbg.gdblib.typeinfo.uint8,
94 2: pwndbg.gdblib.typeinfo.uint16,
95 4: pwndbg.gdblib.typeinfo.uint32,
96 8: pwndbg.gdblib.typeinfo.uint64,
97 }[size]
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwndbg/gdblib/typeinfo.py b/pwndbg/gdblib/typeinfo.py
--- a/pwndbg/gdblib/typeinfo.py
+++ b/pwndbg/gdblib/typeinfo.py
@@ -25,11 +25,22 @@
def update():
+ # Workaround for Rust stuff, see https://github.com/pwndbg/pwndbg/issues/855
+ lang = gdb.execute("show language", to_string=True)
+ if "rust" not in lang:
+ restore_lang = None
+ else:
+ gdb.execute("set language c")
+ if '"auto;' in lang:
+ restore_lang = "auto"
+ else:
+ restore_lang = "rust"
+
module.char = gdb.lookup_type("char")
module.ulong = lookup_types("unsigned long", "uint", "u32", "uint32")
module.long = lookup_types("long", "int", "i32", "int32")
module.uchar = lookup_types("unsigned char", "ubyte", "u8", "uint8")
- module.ushort = lookup_types("unsigned short", "ushort", "u16", "uint16")
+ module.ushort = lookup_types("unsigned short", "ushort", "u16", "uint16", "uint16_t")
module.uint = lookup_types("unsigned int", "uint", "u32", "uint32")
module.void = lookup_types("void", "()")
@@ -45,9 +56,9 @@
}
module.int8 = lookup_types("char", "i8", "int8")
- module.int16 = lookup_types("short", "i16", "int16")
+ module.int16 = lookup_types("short", "short int", "i16", "int16")
module.int32 = lookup_types("int", "i32", "int32")
- module.int64 = lookup_types("long long", "long", "i64", "int64")
+ module.int64 = lookup_types("long long", "long long int", "long", "i64", "int64")
module.signed = {1: module.int8, 2: module.int16, 4: module.int32, 8: module.int64}
module.pvoid = void.pointer()
@@ -68,6 +79,10 @@
raise Exception("Pointer size not supported")
module.null = gdb.Value(0).cast(void)
+ # Rust workaround part 2
+ if restore_lang:
+ gdb.execute("set language %s" % restore_lang)
+
# TODO: Remove this global initialization, or move it somewhere else
# Call it once so we load all of the types
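
The reasoning behind this workaround: for a Rust binary gdb switches its expression language to Rust, and in that mode C type names such as `unsigned short` or `long long` do not resolve reliably, so every candidate passed to `lookup_types()` fails and the last error (`No type named uint16.`) is re-raised. The patch therefore switches the language to C for the duration of `update()`, adds a few extra candidate spellings, and restores the previous setting afterwards. A condensed sketch of that save/restore pattern (runs inside gdb's embedded Python; `do_lookups` is a hypothetical stand-in for the body of `update()`):

```python
import gdb  # only available inside gdb's embedded Python

def with_c_language(do_lookups):
    # gdb reports e.g.: The current source language is "auto; currently rust".
    lang = gdb.execute("show language", to_string=True)
    restore_lang = None
    if "rust" in lang:
        gdb.execute("set language c")
        restore_lang = "auto" if '"auto;' in lang else "rust"
    try:
        do_lookups()
    finally:
        if restore_lang:
            gdb.execute("set language %s" % restore_lang)
```

The patch itself simply restores the language at the end of `update()`; the `try/finally` here is only a defensive variant of the same pattern.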
|
{"golden_diff": "diff --git a/pwndbg/gdblib/typeinfo.py b/pwndbg/gdblib/typeinfo.py\n--- a/pwndbg/gdblib/typeinfo.py\n+++ b/pwndbg/gdblib/typeinfo.py\n@@ -25,11 +25,22 @@\n \n \n def update():\n+ # Workaround for Rust stuff, see https://github.com/pwndbg/pwndbg/issues/855\n+ lang = gdb.execute(\"show language\", to_string=True)\n+ if \"rust\" not in lang:\n+ restore_lang = None\n+ else:\n+ gdb.execute(\"set language c\")\n+ if '\"auto;' in lang:\n+ restore_lang = \"auto\"\n+ else:\n+ restore_lang = \"rust\"\n+\n module.char = gdb.lookup_type(\"char\")\n module.ulong = lookup_types(\"unsigned long\", \"uint\", \"u32\", \"uint32\")\n module.long = lookup_types(\"long\", \"int\", \"i32\", \"int32\")\n module.uchar = lookup_types(\"unsigned char\", \"ubyte\", \"u8\", \"uint8\")\n- module.ushort = lookup_types(\"unsigned short\", \"ushort\", \"u16\", \"uint16\")\n+ module.ushort = lookup_types(\"unsigned short\", \"ushort\", \"u16\", \"uint16\", \"uint16_t\")\n module.uint = lookup_types(\"unsigned int\", \"uint\", \"u32\", \"uint32\")\n module.void = lookup_types(\"void\", \"()\")\n \n@@ -45,9 +56,9 @@\n }\n \n module.int8 = lookup_types(\"char\", \"i8\", \"int8\")\n- module.int16 = lookup_types(\"short\", \"i16\", \"int16\")\n+ module.int16 = lookup_types(\"short\", \"short int\", \"i16\", \"int16\")\n module.int32 = lookup_types(\"int\", \"i32\", \"int32\")\n- module.int64 = lookup_types(\"long long\", \"long\", \"i64\", \"int64\")\n+ module.int64 = lookup_types(\"long long\", \"long long int\", \"long\", \"i64\", \"int64\")\n module.signed = {1: module.int8, 2: module.int16, 4: module.int32, 8: module.int64}\n \n module.pvoid = void.pointer()\n@@ -68,6 +79,10 @@\n raise Exception(\"Pointer size not supported\")\n module.null = gdb.Value(0).cast(void)\n \n+ # Rust workaround part 2\n+ if restore_lang:\n+ gdb.execute(\"set language %s\" % restore_lang)\n+\n \n # TODO: Remove this global initialization, or move it somewhere else\n # Call it once so we load all of the types\n", "issue": "no type named uint16 in rust\n### Description\r\n\r\nSo, I think I correctly installed pwndbg on my system, when I run simple rust code like this: \r\n\r\n```rust\r\nuse std::io;\r\n\r\nfn main() {\r\n let mut input = String::new();\r\n io::stdin().read_line(&mut input);\r\n}\r\n```\r\n\r\nAnd I try to debug it using gdb with pwndbg I got this problem: \r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/share/pwndbg/pwndbg/events.py\", line 165, in caller\r\n func()\r\n File \"/usr/share/pwndbg/pwndbg/typeinfo.py\", line 51, in update\r\n module.ushort = lookup_types('unsigned short', 'ushort', 'u16', 'uint16')\r\n File \"/usr/share/pwndbg/pwndbg/typeinfo.py\", line 39, in lookup_types\r\n raise exc\r\n File \"/usr/share/pwndbg/pwndbg/typeinfo.py\", line 36, in lookup_types\r\n return gdb.lookup_type(type_str)\r\ngdb.error: No type named uint16.\r\n```\r\n\r\n### My setup\r\n\r\ngdb\r\n```\r\nName : gdb\r\nVersion : 10.1-2\r\nDescription : The GNU Debugger\r\nArchitecture : x86_64\r\nURL : https://www.gnu.org/software/gdb/\r\nLicenses : GPL3\r\nGroups : None\r\nProvides : None\r\nDepends On : glibc ncurses libncursesw.so=6-64 gcc-libs expat xz mpfr source-highlight gdb-common=10.1 readline libreadline.so=8-64 guile python libelf\r\nOptional Deps : None\r\nRequired By : peda pwndbg-git\r\nOptional For : rustup\r\nConflicts With : None\r\nReplaces : None\r\nInstalled Size : 9.72 MiB\r\nPackager : Anatol Pomozov <[email protected]>\r\nBuild Date : Wed 04 Nov 2020 08:44:41 PM CET\r\nInstall Date : Sat 14 
Nov 2020 11:15:29 PM CET\r\nInstall Reason : Installed as a dependency for another package\r\nInstall Script : No\r\nValidated By : Signature\r\n```\r\npython\r\n```\r\nName : python\r\nVersion : 3.8.6-1\r\nDescription : Next generation of the python high-level scripting language\r\nArchitecture : x86_64\r\nURL : https://www.python.org/\r\nLicenses : custom\r\nGroups : None\r\nProvides : python3\r\nDepends On : bzip2 expat gdbm libffi libnsl libxcrypt openssl zlib\r\nOptional Deps : python-setuptools [installed]\r\n python-pip [installed]\r\n sqlite [installed]\r\n mpdecimal: for decimal\r\n xz: for lzma [installed]\r\n tk: for tkinter [installed]\r\nRequired By : ...\r\nOptional For : ...\r\nConflicts With : None\r\nReplaces : python3\r\nInstalled Size : 77.85 MiB\r\nPackager : Felix Yan <[email protected]>\r\nBuild Date : Wed 30 Sep 2020 06:00:38 AM CEST\r\nInstall Date : Fri 09 Oct 2020 09:13:49 PM CEST\r\nInstall Reason : Installed as a dependency for another package\r\nInstall Script : No\r\nValidated By : Signature\r\n```\r\n\r\npwndbg\r\n```\r\nName : pwndbg-git\r\nVersion : 2020.07.23.879.29f962c-1\r\nDescription : Makes debugging with GDB suck less\r\nArchitecture : any\r\nURL : https://github.com/pwndbg/pwndbg\r\nLicenses : MIT\r\nGroups : None\r\nProvides : pwndbg\r\nDepends On : gdb python-capstone python-unicorn python-pycparser python-psutil python-ptrace python-pyelftools python-six python-future python-pygments\r\nOptional Deps : checksec: checksec command support\r\n ropper: ropper command support\r\n ropgadget: ropgadget command support\r\n radare2: radare2 command support\r\nRequired By : None\r\nOptional For : None\r\nConflicts With : pwndbg\r\nReplaces : None\r\nInstalled Size : 3.89 MiB\r\nPackager : Unknown Packager\r\nBuild Date : Sat 14 Nov 2020 11:51:32 PM CET\r\nInstall Date : Mon 16 Nov 2020 11:00:32 PM CET\r\nInstall Reason : Explicitly installed\r\nInstall Script : No\r\nValidated By : None\r\n```\r\n\r\nrustup 1.22.1 (2020-07-08)\r\nrustc 1.47.0 (18bf6b4f0 2020-10-07)\r\n\r\n#### Small remark\r\nI installed pwndbg using `pacman` at first, but it was not able to get correct the correct python version, so I had to install all dependencies in pip myself (I used a bits of information from `setup.sh`).\nno type named uint16 in rust\n### Description\r\n\r\nSo, I think I correctly installed pwndbg on my system, when I run simple rust code like this: \r\n\r\n```rust\r\nuse std::io;\r\n\r\nfn main() {\r\n let mut input = String::new();\r\n io::stdin().read_line(&mut input);\r\n}\r\n```\r\n\r\nAnd I try to debug it using gdb with pwndbg I got this problem: \r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/share/pwndbg/pwndbg/events.py\", line 165, in caller\r\n func()\r\n File \"/usr/share/pwndbg/pwndbg/typeinfo.py\", line 51, in update\r\n module.ushort = lookup_types('unsigned short', 'ushort', 'u16', 'uint16')\r\n File \"/usr/share/pwndbg/pwndbg/typeinfo.py\", line 39, in lookup_types\r\n raise exc\r\n File \"/usr/share/pwndbg/pwndbg/typeinfo.py\", line 36, in lookup_types\r\n return gdb.lookup_type(type_str)\r\ngdb.error: No type named uint16.\r\n```\r\n\r\n### My setup\r\n\r\ngdb\r\n```\r\nName : gdb\r\nVersion : 10.1-2\r\nDescription : The GNU Debugger\r\nArchitecture : x86_64\r\nURL : https://www.gnu.org/software/gdb/\r\nLicenses : GPL3\r\nGroups : None\r\nProvides : None\r\nDepends On : glibc ncurses libncursesw.so=6-64 gcc-libs expat xz mpfr source-highlight gdb-common=10.1 readline libreadline.so=8-64 guile python libelf\r\nOptional Deps : 
None\r\nRequired By : peda pwndbg-git\r\nOptional For : rustup\r\nConflicts With : None\r\nReplaces : None\r\nInstalled Size : 9.72 MiB\r\nPackager : Anatol Pomozov <[email protected]>\r\nBuild Date : Wed 04 Nov 2020 08:44:41 PM CET\r\nInstall Date : Sat 14 Nov 2020 11:15:29 PM CET\r\nInstall Reason : Installed as a dependency for another package\r\nInstall Script : No\r\nValidated By : Signature\r\n```\r\npython\r\n```\r\nName : python\r\nVersion : 3.8.6-1\r\nDescription : Next generation of the python high-level scripting language\r\nArchitecture : x86_64\r\nURL : https://www.python.org/\r\nLicenses : custom\r\nGroups : None\r\nProvides : python3\r\nDepends On : bzip2 expat gdbm libffi libnsl libxcrypt openssl zlib\r\nOptional Deps : python-setuptools [installed]\r\n python-pip [installed]\r\n sqlite [installed]\r\n mpdecimal: for decimal\r\n xz: for lzma [installed]\r\n tk: for tkinter [installed]\r\nRequired By : ...\r\nOptional For : ...\r\nConflicts With : None\r\nReplaces : python3\r\nInstalled Size : 77.85 MiB\r\nPackager : Felix Yan <[email protected]>\r\nBuild Date : Wed 30 Sep 2020 06:00:38 AM CEST\r\nInstall Date : Fri 09 Oct 2020 09:13:49 PM CEST\r\nInstall Reason : Installed as a dependency for another package\r\nInstall Script : No\r\nValidated By : Signature\r\n```\r\n\r\npwndbg\r\n```\r\nName : pwndbg-git\r\nVersion : 2020.07.23.879.29f962c-1\r\nDescription : Makes debugging with GDB suck less\r\nArchitecture : any\r\nURL : https://github.com/pwndbg/pwndbg\r\nLicenses : MIT\r\nGroups : None\r\nProvides : pwndbg\r\nDepends On : gdb python-capstone python-unicorn python-pycparser python-psutil python-ptrace python-pyelftools python-six python-future python-pygments\r\nOptional Deps : checksec: checksec command support\r\n ropper: ropper command support\r\n ropgadget: ropgadget command support\r\n radare2: radare2 command support\r\nRequired By : None\r\nOptional For : None\r\nConflicts With : pwndbg\r\nReplaces : None\r\nInstalled Size : 3.89 MiB\r\nPackager : Unknown Packager\r\nBuild Date : Sat 14 Nov 2020 11:51:32 PM CET\r\nInstall Date : Mon 16 Nov 2020 11:00:32 PM CET\r\nInstall Reason : Explicitly installed\r\nInstall Script : No\r\nValidated By : None\r\n```\r\n\r\nrustup 1.22.1 (2020-07-08)\r\nrustc 1.47.0 (18bf6b4f0 2020-10-07)\r\n\r\n#### Small remark\r\nI installed pwndbg using `pacman` at first, but it was not able to get correct the correct python version, so I had to install all dependencies in pip myself (I used a bits of information from `setup.sh`).\n", "before_files": [{"content": "\"\"\"\nCommon types, and routines for manually loading types from file\nvia GCC.\n\"\"\"\n\nimport sys\n\nimport gdb\n\nimport pwndbg.lib.gcc\nimport pwndbg.lib.memoize\nimport pwndbg.lib.tempfile\n\nmodule = sys.modules[__name__]\nptrsize: int\n\n\ndef lookup_types(*types):\n for type_str in types:\n try:\n return gdb.lookup_type(type_str)\n except Exception as e:\n exc = e\n raise exc\n\n\ndef update():\n module.char = gdb.lookup_type(\"char\")\n module.ulong = lookup_types(\"unsigned long\", \"uint\", \"u32\", \"uint32\")\n module.long = lookup_types(\"long\", \"int\", \"i32\", \"int32\")\n module.uchar = lookup_types(\"unsigned char\", \"ubyte\", \"u8\", \"uint8\")\n module.ushort = lookup_types(\"unsigned short\", \"ushort\", \"u16\", \"uint16\")\n module.uint = lookup_types(\"unsigned int\", \"uint\", \"u32\", \"uint32\")\n module.void = lookup_types(\"void\", \"()\")\n\n module.uint8 = module.uchar\n module.uint16 = module.ushort\n module.uint32 = module.uint\n 
module.uint64 = lookup_types(\"unsigned long long\", \"ulong\", \"u64\", \"uint64\")\n module.unsigned = {\n 1: module.uint8,\n 2: module.uint16,\n 4: module.uint32,\n 8: module.uint64,\n }\n\n module.int8 = lookup_types(\"char\", \"i8\", \"int8\")\n module.int16 = lookup_types(\"short\", \"i16\", \"int16\")\n module.int32 = lookup_types(\"int\", \"i32\", \"int32\")\n module.int64 = lookup_types(\"long long\", \"long\", \"i64\", \"int64\")\n module.signed = {1: module.int8, 2: module.int16, 4: module.int32, 8: module.int64}\n\n module.pvoid = void.pointer()\n module.ppvoid = pvoid.pointer()\n module.pchar = char.pointer()\n\n module.ptrsize = pvoid.sizeof\n\n if pvoid.sizeof == 4:\n module.ptrdiff = module.uint32\n module.size_t = module.uint32\n module.ssize_t = module.int32\n elif pvoid.sizeof == 8:\n module.ptrdiff = module.uint64\n module.size_t = module.uint64\n module.ssize_t = module.int64\n else:\n raise Exception(\"Pointer size not supported\")\n module.null = gdb.Value(0).cast(void)\n\n\n# TODO: Remove this global initialization, or move it somewhere else\n# Call it once so we load all of the types\nupdate()\n\n\ndef load(name):\n \"\"\"Load a GDB symbol; note that new symbols can be added with `add-symbol-file` functionality\"\"\"\n try:\n return gdb.lookup_type(name)\n except gdb.error:\n return None\n\n\ndef read_gdbvalue(type_name, addr):\n \"\"\"Read the memory contents at addr and interpret them as a GDB value with the given type\"\"\"\n gdb_type = pwndbg.gdblib.typeinfo.load(type_name)\n return gdb.Value(addr).cast(gdb_type.pointer()).dereference()\n\n\ndef get_type(size):\n return {\n 1: pwndbg.gdblib.typeinfo.uint8,\n 2: pwndbg.gdblib.typeinfo.uint16,\n 4: pwndbg.gdblib.typeinfo.uint32,\n 8: pwndbg.gdblib.typeinfo.uint64,\n }[size]\n", "path": "pwndbg/gdblib/typeinfo.py"}], "after_files": [{"content": "\"\"\"\nCommon types, and routines for manually loading types from file\nvia GCC.\n\"\"\"\n\nimport sys\n\nimport gdb\n\nimport pwndbg.lib.gcc\nimport pwndbg.lib.memoize\nimport pwndbg.lib.tempfile\n\nmodule = sys.modules[__name__]\nptrsize: int\n\n\ndef lookup_types(*types):\n for type_str in types:\n try:\n return gdb.lookup_type(type_str)\n except Exception as e:\n exc = e\n raise exc\n\n\ndef update():\n # Workaround for Rust stuff, see https://github.com/pwndbg/pwndbg/issues/855\n lang = gdb.execute(\"show language\", to_string=True)\n if \"rust\" not in lang:\n restore_lang = None\n else:\n gdb.execute(\"set language c\")\n if '\"auto;' in lang:\n restore_lang = \"auto\"\n else:\n restore_lang = \"rust\"\n\n module.char = gdb.lookup_type(\"char\")\n module.ulong = lookup_types(\"unsigned long\", \"uint\", \"u32\", \"uint32\")\n module.long = lookup_types(\"long\", \"int\", \"i32\", \"int32\")\n module.uchar = lookup_types(\"unsigned char\", \"ubyte\", \"u8\", \"uint8\")\n module.ushort = lookup_types(\"unsigned short\", \"ushort\", \"u16\", \"uint16\", \"uint16_t\")\n module.uint = lookup_types(\"unsigned int\", \"uint\", \"u32\", \"uint32\")\n module.void = lookup_types(\"void\", \"()\")\n\n module.uint8 = module.uchar\n module.uint16 = module.ushort\n module.uint32 = module.uint\n module.uint64 = lookup_types(\"unsigned long long\", \"ulong\", \"u64\", \"uint64\")\n module.unsigned = {\n 1: module.uint8,\n 2: module.uint16,\n 4: module.uint32,\n 8: module.uint64,\n }\n\n module.int8 = lookup_types(\"char\", \"i8\", \"int8\")\n module.int16 = lookup_types(\"short\", \"short int\", \"i16\", \"int16\")\n module.int32 = lookup_types(\"int\", \"i32\", \"int32\")\n 
module.int64 = lookup_types(\"long long\", \"long long int\", \"long\", \"i64\", \"int64\")\n module.signed = {1: module.int8, 2: module.int16, 4: module.int32, 8: module.int64}\n\n module.pvoid = void.pointer()\n module.ppvoid = pvoid.pointer()\n module.pchar = char.pointer()\n\n module.ptrsize = pvoid.sizeof\n\n if pvoid.sizeof == 4:\n module.ptrdiff = module.uint32\n module.size_t = module.uint32\n module.ssize_t = module.int32\n elif pvoid.sizeof == 8:\n module.ptrdiff = module.uint64\n module.size_t = module.uint64\n module.ssize_t = module.int64\n else:\n raise Exception(\"Pointer size not supported\")\n module.null = gdb.Value(0).cast(void)\n\n # Rust workaround part 2\n if restore_lang:\n gdb.execute(\"set language %s\" % restore_lang)\n\n\n# TODO: Remove this global initialization, or move it somewhere else\n# Call it once so we load all of the types\nupdate()\n\n\ndef load(name):\n \"\"\"Load a GDB symbol; note that new symbols can be added with `add-symbol-file` functionality\"\"\"\n try:\n return gdb.lookup_type(name)\n except gdb.error:\n return None\n\n\ndef read_gdbvalue(type_name, addr):\n \"\"\"Read the memory contents at addr and interpret them as a GDB value with the given type\"\"\"\n gdb_type = pwndbg.gdblib.typeinfo.load(type_name)\n return gdb.Value(addr).cast(gdb_type.pointer()).dereference()\n\n\ndef get_type(size):\n return {\n 1: pwndbg.gdblib.typeinfo.uint8,\n 2: pwndbg.gdblib.typeinfo.uint16,\n 4: pwndbg.gdblib.typeinfo.uint32,\n 8: pwndbg.gdblib.typeinfo.uint64,\n }[size]\n", "path": "pwndbg/gdblib/typeinfo.py"}]}
| 3,719 | 632 |
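The golden diff in the row above works around GDB parsing type names with Rust's expression grammar (which rejects C-style names like `unsigned short`) by temporarily switching the session language to C around the lookups. Below is a minimal standalone sketch of that save/switch/restore pattern; the helper name is illustrative rather than part of pwndbg, and the snippet assumes only GDB's Python API (`gdb.execute`, `gdb.lookup_type`), so it runs only inside a GDB session:

```python
import gdb  # available only inside a GDB session with Python support


def lookup_type_with_c_parser(type_name):
    """Look up a GDB type while temporarily forcing the C language.

    Rust's expression grammar rejects C-style names such as
    'unsigned short', so switch to C, perform the lookup, then
    restore whatever language setting was active before.
    """
    lang = gdb.execute("show language", to_string=True)
    restore_lang = None
    if "rust" in lang:
        # '"auto;' appears when the language was auto-detected as Rust.
        restore_lang = "auto" if '"auto;' in lang else "rust"
        gdb.execute("set language c")
    try:
        return gdb.lookup_type(type_name)
    finally:
        if restore_lang:
            gdb.execute("set language %s" % restore_lang)
```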
gh_patches_debug_42383 | rasdani/github-patches | git_diff | Textualize__rich-1573 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tree: Flag to hide root from display
In certain situations it would be useful to hide the root of a tree, either to save on screen space or for clarity. A `hide_root` flag could allow this behavior. For example:
```
>>> tree = Tree("Dir to print")
>>> tree.add("dir1").add("file1")
>>> tree.add("file2")
>>> print(tree)
Dir to print
├── dir1
│ └── file1
└── file2
```
could instead be display as
```
>>> tree = Tree("Dir to print", hide_root=True)
>>> tree.add("dir1").add("file1")
>>> tree.add("file2")
>>> print(tree)
dir1
└── file1
file2
```
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rich/tree.py`
Content:
```
1 from typing import Iterator, List, Optional, Tuple
2
3 from ._loop import loop_first, loop_last
4 from .console import Console, ConsoleOptions, RenderableType, RenderResult
5 from .jupyter import JupyterMixin
6 from .measure import Measurement
7 from .segment import Segment
8 from .style import Style, StyleStack, StyleType
9 from .styled import Styled
10
11
12 class Tree(JupyterMixin):
13 """A renderable for a tree structure.
14
15 Args:
16 label (RenderableType): The renderable or str for the tree label.
17 style (StyleType, optional): Style of this tree. Defaults to "tree".
18 guide_style (StyleType, optional): Style of the guide lines. Defaults to "tree.line".
19 expanded (bool, optional): Also display children. Defaults to True.
20 highlight (bool, optional): Highlight renderable (if str). Defaults to False.
21 """
22
23 def __init__(
24 self,
25 label: RenderableType,
26 *,
27 style: StyleType = "tree",
28 guide_style: StyleType = "tree.line",
29 expanded: bool = True,
30 highlight: bool = False,
31 ) -> None:
32 self.label = label
33 self.style = style
34 self.guide_style = guide_style
35 self.children: List[Tree] = []
36 self.expanded = expanded
37 self.highlight = highlight
38
39 def add(
40 self,
41 label: RenderableType,
42 *,
43 style: Optional[StyleType] = None,
44 guide_style: Optional[StyleType] = None,
45 expanded: bool = True,
46 highlight: bool = False,
47 ) -> "Tree":
48 """Add a child tree.
49
50 Args:
51 label (RenderableType): The renderable or str for the tree label.
52 style (StyleType, optional): Style of this tree. Defaults to "tree".
53 guide_style (StyleType, optional): Style of the guide lines. Defaults to "tree.line".
54 expanded (bool, optional): Also display children. Defaults to True.
55 highlight (Optional[bool], optional): Highlight renderable (if str). Defaults to False.
56
57 Returns:
58 Tree: A new child Tree, which may be further modified.
59 """
60 node = Tree(
61 label,
62 style=self.style if style is None else style,
63 guide_style=self.guide_style if guide_style is None else guide_style,
64 expanded=expanded,
65 highlight=self.highlight if highlight is None else highlight,
66 )
67 self.children.append(node)
68 return node
69
70 def __rich_console__(
71 self, console: "Console", options: "ConsoleOptions"
72 ) -> "RenderResult":
73
74 stack: List[Iterator[Tuple[bool, Tree]]] = []
75 pop = stack.pop
76 push = stack.append
77 new_line = Segment.line()
78
79 get_style = console.get_style
80 null_style = Style.null()
81 guide_style = get_style(self.guide_style, default="") or null_style
82 SPACE, CONTINUE, FORK, END = range(4)
83
84 ASCII_GUIDES = (" ", "| ", "+-- ", "`-- ")
85 TREE_GUIDES = [
86 (" ", "│ ", "├── ", "└── "),
87 (" ", "┃ ", "┣━━ ", "┗━━ "),
88 (" ", "║ ", "╠══ ", "╚══ "),
89 ]
90 _Segment = Segment
91
92 def make_guide(index: int, style: Style) -> Segment:
93 """Make a Segment for a level of the guide lines."""
94 if options.ascii_only:
95 line = ASCII_GUIDES[index]
96 else:
97 guide = 1 if style.bold else (2 if style.underline2 else 0)
98 line = TREE_GUIDES[0 if options.legacy_windows else guide][index]
99 return _Segment(line, style)
100
101 levels: List[Segment] = [make_guide(CONTINUE, guide_style)]
102 push(iter(loop_last([self])))
103
104 guide_style_stack = StyleStack(get_style(self.guide_style))
105 style_stack = StyleStack(get_style(self.style))
106 remove_guide_styles = Style(bold=False, underline2=False)
107
108 while stack:
109 stack_node = pop()
110 try:
111 last, node = next(stack_node)
112 except StopIteration:
113 levels.pop()
114 if levels:
115 guide_style = levels[-1].style or null_style
116 levels[-1] = make_guide(FORK, guide_style)
117 guide_style_stack.pop()
118 style_stack.pop()
119 continue
120 push(stack_node)
121 if last:
122 levels[-1] = make_guide(END, levels[-1].style or null_style)
123
124 guide_style = guide_style_stack.current + get_style(node.guide_style)
125 style = style_stack.current + get_style(node.style)
126 prefix = levels[1:]
127 renderable_lines = console.render_lines(
128 Styled(node.label, style),
129 options.update(
130 width=options.max_width
131 - sum(level.cell_length for level in prefix),
132 highlight=self.highlight,
133 height=None,
134 ),
135 )
136 for first, line in loop_first(renderable_lines):
137 if prefix:
138 yield from _Segment.apply_style(
139 prefix,
140 style.background_style,
141 post_style=remove_guide_styles,
142 )
143 yield from line
144 yield new_line
145 if first and prefix:
146 prefix[-1] = make_guide(
147 SPACE if last else CONTINUE, prefix[-1].style or null_style
148 )
149
150 if node.expanded and node.children:
151 levels[-1] = make_guide(
152 SPACE if last else CONTINUE, levels[-1].style or null_style
153 )
154 levels.append(
155 make_guide(END if len(node.children) == 1 else FORK, guide_style)
156 )
157 style_stack.push(get_style(node.style))
158 guide_style_stack.push(get_style(node.guide_style))
159 push(iter(loop_last(node.children)))
160
161 def __rich_measure__(
162 self, console: "Console", options: "ConsoleOptions"
163 ) -> "Measurement":
164 stack: List[Iterator[Tree]] = [iter([self])]
165 pop = stack.pop
166 push = stack.append
167 minimum = 0
168 maximum = 0
169 measure = Measurement.get
170 level = 0
171 while stack:
172 iter_tree = pop()
173 try:
174 tree = next(iter_tree)
175 except StopIteration:
176 level -= 1
177 continue
178 push(iter_tree)
179 min_measure, max_measure = measure(console, options, tree.label)
180 indent = level * 4
181 minimum = max(min_measure + indent, minimum)
182 maximum = max(max_measure + indent, maximum)
183 if tree.expanded and tree.children:
184 push(iter(tree.children))
185 level += 1
186 return Measurement(minimum, maximum)
187
188
189 if __name__ == "__main__": # pragma: no cover
190
191 from rich.console import Group
192 from rich.markdown import Markdown
193 from rich.panel import Panel
194 from rich.syntax import Syntax
195 from rich.table import Table
196
197 table = Table(row_styles=["", "dim"])
198
199 table.add_column("Released", style="cyan", no_wrap=True)
200 table.add_column("Title", style="magenta")
201 table.add_column("Box Office", justify="right", style="green")
202
203 table.add_row("Dec 20, 2019", "Star Wars: The Rise of Skywalker", "$952,110,690")
204 table.add_row("May 25, 2018", "Solo: A Star Wars Story", "$393,151,347")
205 table.add_row("Dec 15, 2017", "Star Wars Ep. V111: The Last Jedi", "$1,332,539,889")
206 table.add_row("Dec 16, 2016", "Rogue One: A Star Wars Story", "$1,332,439,889")
207
208 code = """\
209 class Segment(NamedTuple):
210 text: str = ""
211 style: Optional[Style] = None
212 is_control: bool = False
213 """
214 syntax = Syntax(code, "python", theme="monokai", line_numbers=True)
215
216 markdown = Markdown(
217 """\
218 ### example.md
219 > Hello, World!
220 >
221 > Markdown _all_ the things
222 """
223 )
224
225 root = Tree("🌲 [b green]Rich Tree", highlight=True)
226
227 node = root.add(":file_folder: Renderables", guide_style="red")
228 simple_node = node.add(":file_folder: [bold yellow]Atomic", guide_style="uu green")
229 simple_node.add(Group("📄 Syntax", syntax))
230 simple_node.add(Group("📄 Markdown", Panel(markdown, border_style="green")))
231
232 containers_node = node.add(
233 ":file_folder: [bold magenta]Containers", guide_style="bold magenta"
234 )
235 containers_node.expanded = True
236 panel = Panel.fit("Just a panel", border_style="red")
237 containers_node.add(Group("📄 Panels", panel))
238
239 containers_node.add(Group("📄 [b magenta]Table", table))
240
241 console = Console()
242 console.print(root)
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rich/tree.py b/rich/tree.py
--- a/rich/tree.py
+++ b/rich/tree.py
@@ -28,6 +28,7 @@
guide_style: StyleType = "tree.line",
expanded: bool = True,
highlight: bool = False,
+ hide_root: bool = False,
) -> None:
self.label = label
self.style = style
@@ -35,6 +36,7 @@
self.children: List[Tree] = []
self.expanded = expanded
self.highlight = highlight
+ self.hide_root = hide_root
def add(
self,
@@ -105,6 +107,8 @@
style_stack = StyleStack(get_style(self.style))
remove_guide_styles = Style(bold=False, underline2=False)
+ depth = 0
+
while stack:
stack_node = pop()
try:
@@ -123,7 +127,7 @@
guide_style = guide_style_stack.current + get_style(node.guide_style)
style = style_stack.current + get_style(node.style)
- prefix = levels[1:]
+ prefix = levels[(2 if self.hide_root else 1) :]
renderable_lines = console.render_lines(
Styled(node.label, style),
options.update(
@@ -133,19 +137,21 @@
height=None,
),
)
- for first, line in loop_first(renderable_lines):
- if prefix:
- yield from _Segment.apply_style(
- prefix,
- style.background_style,
- post_style=remove_guide_styles,
- )
- yield from line
- yield new_line
- if first and prefix:
- prefix[-1] = make_guide(
- SPACE if last else CONTINUE, prefix[-1].style or null_style
- )
+
+ if not (depth == 0 and self.hide_root):
+ for first, line in loop_first(renderable_lines):
+ if prefix:
+ yield from _Segment.apply_style(
+ prefix,
+ style.background_style,
+ post_style=remove_guide_styles,
+ )
+ yield from line
+ yield new_line
+ if first and prefix:
+ prefix[-1] = make_guide(
+ SPACE if last else CONTINUE, prefix[-1].style or null_style
+ )
if node.expanded and node.children:
levels[-1] = make_guide(
@@ -157,6 +163,7 @@
style_stack.push(get_style(node.style))
guide_style_stack.push(get_style(node.guide_style))
push(iter(loop_last(node.children)))
+ depth += 1
def __rich_measure__(
self, console: "Console", options: "ConsoleOptions"
@@ -222,7 +229,7 @@
"""
)
- root = Tree("🌲 [b green]Rich Tree", highlight=True)
+ root = Tree("🌲 [b green]Rich Tree", highlight=True, hide_root=True)
node = root.add(":file_folder: Renderables", guide_style="red")
simple_node = node.add(":file_folder: [bold yellow]Atomic", guide_style="uu green")
|
{"golden_diff": "diff --git a/rich/tree.py b/rich/tree.py\n--- a/rich/tree.py\n+++ b/rich/tree.py\n@@ -28,6 +28,7 @@\n guide_style: StyleType = \"tree.line\",\n expanded: bool = True,\n highlight: bool = False,\n+ hide_root: bool = False,\n ) -> None:\n self.label = label\n self.style = style\n@@ -35,6 +36,7 @@\n self.children: List[Tree] = []\n self.expanded = expanded\n self.highlight = highlight\n+ self.hide_root = hide_root\n \n def add(\n self,\n@@ -105,6 +107,8 @@\n style_stack = StyleStack(get_style(self.style))\n remove_guide_styles = Style(bold=False, underline2=False)\n \n+ depth = 0\n+\n while stack:\n stack_node = pop()\n try:\n@@ -123,7 +127,7 @@\n \n guide_style = guide_style_stack.current + get_style(node.guide_style)\n style = style_stack.current + get_style(node.style)\n- prefix = levels[1:]\n+ prefix = levels[(2 if self.hide_root else 1) :]\n renderable_lines = console.render_lines(\n Styled(node.label, style),\n options.update(\n@@ -133,19 +137,21 @@\n height=None,\n ),\n )\n- for first, line in loop_first(renderable_lines):\n- if prefix:\n- yield from _Segment.apply_style(\n- prefix,\n- style.background_style,\n- post_style=remove_guide_styles,\n- )\n- yield from line\n- yield new_line\n- if first and prefix:\n- prefix[-1] = make_guide(\n- SPACE if last else CONTINUE, prefix[-1].style or null_style\n- )\n+\n+ if not (depth == 0 and self.hide_root):\n+ for first, line in loop_first(renderable_lines):\n+ if prefix:\n+ yield from _Segment.apply_style(\n+ prefix,\n+ style.background_style,\n+ post_style=remove_guide_styles,\n+ )\n+ yield from line\n+ yield new_line\n+ if first and prefix:\n+ prefix[-1] = make_guide(\n+ SPACE if last else CONTINUE, prefix[-1].style or null_style\n+ )\n \n if node.expanded and node.children:\n levels[-1] = make_guide(\n@@ -157,6 +163,7 @@\n style_stack.push(get_style(node.style))\n guide_style_stack.push(get_style(node.guide_style))\n push(iter(loop_last(node.children)))\n+ depth += 1\n \n def __rich_measure__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n@@ -222,7 +229,7 @@\n \"\"\"\n )\n \n- root = Tree(\"\ud83c\udf32 [b green]Rich Tree\", highlight=True)\n+ root = Tree(\"\ud83c\udf32 [b green]Rich Tree\", highlight=True, hide_root=True)\n \n node = root.add(\":file_folder: Renderables\", guide_style=\"red\")\n simple_node = node.add(\":file_folder: [bold yellow]Atomic\", guide_style=\"uu green\")\n", "issue": "Tree: Flag to hide root from display \nIn certain situations it would be useful to hide the root of a tree, either to save on screen space or for clarity. A `hide_root` flag could allow this behavior. 
For example: \r\n```\r\n>>> tree = Tree(\"Dir to print\")\r\n>>> tree.add(\"dir1\").add(\"file1\")\r\n>>> tree.add(\"file2\")\r\n>>> print(tree)\r\n\r\nDir to print\r\n\u251c\u2500\u2500 dir1 \r\n\u2502 \u2514\u2500\u2500 file1 \r\n\u2514\u2500\u2500 file2 \r\n```\r\ncould instead be display as\r\n```\r\n>>> tree = Tree(\"Dir to print\", hide_root=True)\r\n>>> tree.add(\"dir1\").add(\"file1\")\r\n>>> tree.add(\"file2\")\r\n>>> print(tree)\r\n\r\ndir1 \r\n\u2514\u2500\u2500 file1 \r\nfile2 \r\n```\r\n\r\nThanks!\n", "before_files": [{"content": "from typing import Iterator, List, Optional, Tuple\n\nfrom ._loop import loop_first, loop_last\nfrom .console import Console, ConsoleOptions, RenderableType, RenderResult\nfrom .jupyter import JupyterMixin\nfrom .measure import Measurement\nfrom .segment import Segment\nfrom .style import Style, StyleStack, StyleType\nfrom .styled import Styled\n\n\nclass Tree(JupyterMixin):\n \"\"\"A renderable for a tree structure.\n\n Args:\n label (RenderableType): The renderable or str for the tree label.\n style (StyleType, optional): Style of this tree. Defaults to \"tree\".\n guide_style (StyleType, optional): Style of the guide lines. Defaults to \"tree.line\".\n expanded (bool, optional): Also display children. Defaults to True.\n highlight (bool, optional): Highlight renderable (if str). Defaults to False.\n \"\"\"\n\n def __init__(\n self,\n label: RenderableType,\n *,\n style: StyleType = \"tree\",\n guide_style: StyleType = \"tree.line\",\n expanded: bool = True,\n highlight: bool = False,\n ) -> None:\n self.label = label\n self.style = style\n self.guide_style = guide_style\n self.children: List[Tree] = []\n self.expanded = expanded\n self.highlight = highlight\n\n def add(\n self,\n label: RenderableType,\n *,\n style: Optional[StyleType] = None,\n guide_style: Optional[StyleType] = None,\n expanded: bool = True,\n highlight: bool = False,\n ) -> \"Tree\":\n \"\"\"Add a child tree.\n\n Args:\n label (RenderableType): The renderable or str for the tree label.\n style (StyleType, optional): Style of this tree. Defaults to \"tree\".\n guide_style (StyleType, optional): Style of the guide lines. Defaults to \"tree.line\".\n expanded (bool, optional): Also display children. Defaults to True.\n highlight (Optional[bool], optional): Highlight renderable (if str). 
Defaults to False.\n\n Returns:\n Tree: A new child Tree, which may be further modified.\n \"\"\"\n node = Tree(\n label,\n style=self.style if style is None else style,\n guide_style=self.guide_style if guide_style is None else guide_style,\n expanded=expanded,\n highlight=self.highlight if highlight is None else highlight,\n )\n self.children.append(node)\n return node\n\n def __rich_console__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> \"RenderResult\":\n\n stack: List[Iterator[Tuple[bool, Tree]]] = []\n pop = stack.pop\n push = stack.append\n new_line = Segment.line()\n\n get_style = console.get_style\n null_style = Style.null()\n guide_style = get_style(self.guide_style, default=\"\") or null_style\n SPACE, CONTINUE, FORK, END = range(4)\n\n ASCII_GUIDES = (\" \", \"| \", \"+-- \", \"`-- \")\n TREE_GUIDES = [\n (\" \", \"\u2502 \", \"\u251c\u2500\u2500 \", \"\u2514\u2500\u2500 \"),\n (\" \", \"\u2503 \", \"\u2523\u2501\u2501 \", \"\u2517\u2501\u2501 \"),\n (\" \", \"\u2551 \", \"\u2560\u2550\u2550 \", \"\u255a\u2550\u2550 \"),\n ]\n _Segment = Segment\n\n def make_guide(index: int, style: Style) -> Segment:\n \"\"\"Make a Segment for a level of the guide lines.\"\"\"\n if options.ascii_only:\n line = ASCII_GUIDES[index]\n else:\n guide = 1 if style.bold else (2 if style.underline2 else 0)\n line = TREE_GUIDES[0 if options.legacy_windows else guide][index]\n return _Segment(line, style)\n\n levels: List[Segment] = [make_guide(CONTINUE, guide_style)]\n push(iter(loop_last([self])))\n\n guide_style_stack = StyleStack(get_style(self.guide_style))\n style_stack = StyleStack(get_style(self.style))\n remove_guide_styles = Style(bold=False, underline2=False)\n\n while stack:\n stack_node = pop()\n try:\n last, node = next(stack_node)\n except StopIteration:\n levels.pop()\n if levels:\n guide_style = levels[-1].style or null_style\n levels[-1] = make_guide(FORK, guide_style)\n guide_style_stack.pop()\n style_stack.pop()\n continue\n push(stack_node)\n if last:\n levels[-1] = make_guide(END, levels[-1].style or null_style)\n\n guide_style = guide_style_stack.current + get_style(node.guide_style)\n style = style_stack.current + get_style(node.style)\n prefix = levels[1:]\n renderable_lines = console.render_lines(\n Styled(node.label, style),\n options.update(\n width=options.max_width\n - sum(level.cell_length for level in prefix),\n highlight=self.highlight,\n height=None,\n ),\n )\n for first, line in loop_first(renderable_lines):\n if prefix:\n yield from _Segment.apply_style(\n prefix,\n style.background_style,\n post_style=remove_guide_styles,\n )\n yield from line\n yield new_line\n if first and prefix:\n prefix[-1] = make_guide(\n SPACE if last else CONTINUE, prefix[-1].style or null_style\n )\n\n if node.expanded and node.children:\n levels[-1] = make_guide(\n SPACE if last else CONTINUE, levels[-1].style or null_style\n )\n levels.append(\n make_guide(END if len(node.children) == 1 else FORK, guide_style)\n )\n style_stack.push(get_style(node.style))\n guide_style_stack.push(get_style(node.guide_style))\n push(iter(loop_last(node.children)))\n\n def __rich_measure__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> \"Measurement\":\n stack: List[Iterator[Tree]] = [iter([self])]\n pop = stack.pop\n push = stack.append\n minimum = 0\n maximum = 0\n measure = Measurement.get\n level = 0\n while stack:\n iter_tree = pop()\n try:\n tree = next(iter_tree)\n except StopIteration:\n level -= 1\n continue\n push(iter_tree)\n min_measure, max_measure = 
measure(console, options, tree.label)\n indent = level * 4\n minimum = max(min_measure + indent, minimum)\n maximum = max(max_measure + indent, maximum)\n if tree.expanded and tree.children:\n push(iter(tree.children))\n level += 1\n return Measurement(minimum, maximum)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n\n from rich.console import Group\n from rich.markdown import Markdown\n from rich.panel import Panel\n from rich.syntax import Syntax\n from rich.table import Table\n\n table = Table(row_styles=[\"\", \"dim\"])\n\n table.add_column(\"Released\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Title\", style=\"magenta\")\n table.add_column(\"Box Office\", justify=\"right\", style=\"green\")\n\n table.add_row(\"Dec 20, 2019\", \"Star Wars: The Rise of Skywalker\", \"$952,110,690\")\n table.add_row(\"May 25, 2018\", \"Solo: A Star Wars Story\", \"$393,151,347\")\n table.add_row(\"Dec 15, 2017\", \"Star Wars Ep. V111: The Last Jedi\", \"$1,332,539,889\")\n table.add_row(\"Dec 16, 2016\", \"Rogue One: A Star Wars Story\", \"$1,332,439,889\")\n\n code = \"\"\"\\\nclass Segment(NamedTuple):\n text: str = \"\" \n style: Optional[Style] = None \n is_control: bool = False \n\"\"\"\n syntax = Syntax(code, \"python\", theme=\"monokai\", line_numbers=True)\n\n markdown = Markdown(\n \"\"\"\\\n### example.md\n> Hello, World!\n> \n> Markdown _all_ the things\n\"\"\"\n )\n\n root = Tree(\"\ud83c\udf32 [b green]Rich Tree\", highlight=True)\n\n node = root.add(\":file_folder: Renderables\", guide_style=\"red\")\n simple_node = node.add(\":file_folder: [bold yellow]Atomic\", guide_style=\"uu green\")\n simple_node.add(Group(\"\ud83d\udcc4 Syntax\", syntax))\n simple_node.add(Group(\"\ud83d\udcc4 Markdown\", Panel(markdown, border_style=\"green\")))\n\n containers_node = node.add(\n \":file_folder: [bold magenta]Containers\", guide_style=\"bold magenta\"\n )\n containers_node.expanded = True\n panel = Panel.fit(\"Just a panel\", border_style=\"red\")\n containers_node.add(Group(\"\ud83d\udcc4 Panels\", panel))\n\n containers_node.add(Group(\"\ud83d\udcc4 [b magenta]Table\", table))\n\n console = Console()\n console.print(root)\n", "path": "rich/tree.py"}], "after_files": [{"content": "from typing import Iterator, List, Optional, Tuple\n\nfrom ._loop import loop_first, loop_last\nfrom .console import Console, ConsoleOptions, RenderableType, RenderResult\nfrom .jupyter import JupyterMixin\nfrom .measure import Measurement\nfrom .segment import Segment\nfrom .style import Style, StyleStack, StyleType\nfrom .styled import Styled\n\n\nclass Tree(JupyterMixin):\n \"\"\"A renderable for a tree structure.\n\n Args:\n label (RenderableType): The renderable or str for the tree label.\n style (StyleType, optional): Style of this tree. Defaults to \"tree\".\n guide_style (StyleType, optional): Style of the guide lines. Defaults to \"tree.line\".\n expanded (bool, optional): Also display children. Defaults to True.\n highlight (bool, optional): Highlight renderable (if str). 
Defaults to False.\n \"\"\"\n\n def __init__(\n self,\n label: RenderableType,\n *,\n style: StyleType = \"tree\",\n guide_style: StyleType = \"tree.line\",\n expanded: bool = True,\n highlight: bool = False,\n hide_root: bool = False,\n ) -> None:\n self.label = label\n self.style = style\n self.guide_style = guide_style\n self.children: List[Tree] = []\n self.expanded = expanded\n self.highlight = highlight\n self.hide_root = hide_root\n\n def add(\n self,\n label: RenderableType,\n *,\n style: Optional[StyleType] = None,\n guide_style: Optional[StyleType] = None,\n expanded: bool = True,\n highlight: bool = False,\n ) -> \"Tree\":\n \"\"\"Add a child tree.\n\n Args:\n label (RenderableType): The renderable or str for the tree label.\n style (StyleType, optional): Style of this tree. Defaults to \"tree\".\n guide_style (StyleType, optional): Style of the guide lines. Defaults to \"tree.line\".\n expanded (bool, optional): Also display children. Defaults to True.\n highlight (Optional[bool], optional): Highlight renderable (if str). Defaults to False.\n\n Returns:\n Tree: A new child Tree, which may be further modified.\n \"\"\"\n node = Tree(\n label,\n style=self.style if style is None else style,\n guide_style=self.guide_style if guide_style is None else guide_style,\n expanded=expanded,\n highlight=self.highlight if highlight is None else highlight,\n )\n self.children.append(node)\n return node\n\n def __rich_console__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> \"RenderResult\":\n\n stack: List[Iterator[Tuple[bool, Tree]]] = []\n pop = stack.pop\n push = stack.append\n new_line = Segment.line()\n\n get_style = console.get_style\n null_style = Style.null()\n guide_style = get_style(self.guide_style, default=\"\") or null_style\n SPACE, CONTINUE, FORK, END = range(4)\n\n ASCII_GUIDES = (\" \", \"| \", \"+-- \", \"`-- \")\n TREE_GUIDES = [\n (\" \", \"\u2502 \", \"\u251c\u2500\u2500 \", \"\u2514\u2500\u2500 \"),\n (\" \", \"\u2503 \", \"\u2523\u2501\u2501 \", \"\u2517\u2501\u2501 \"),\n (\" \", \"\u2551 \", \"\u2560\u2550\u2550 \", \"\u255a\u2550\u2550 \"),\n ]\n _Segment = Segment\n\n def make_guide(index: int, style: Style) -> Segment:\n \"\"\"Make a Segment for a level of the guide lines.\"\"\"\n if options.ascii_only:\n line = ASCII_GUIDES[index]\n else:\n guide = 1 if style.bold else (2 if style.underline2 else 0)\n line = TREE_GUIDES[0 if options.legacy_windows else guide][index]\n return _Segment(line, style)\n\n levels: List[Segment] = [make_guide(CONTINUE, guide_style)]\n push(iter(loop_last([self])))\n\n guide_style_stack = StyleStack(get_style(self.guide_style))\n style_stack = StyleStack(get_style(self.style))\n remove_guide_styles = Style(bold=False, underline2=False)\n\n depth = 0\n\n while stack:\n stack_node = pop()\n try:\n last, node = next(stack_node)\n except StopIteration:\n levels.pop()\n if levels:\n guide_style = levels[-1].style or null_style\n levels[-1] = make_guide(FORK, guide_style)\n guide_style_stack.pop()\n style_stack.pop()\n continue\n push(stack_node)\n if last:\n levels[-1] = make_guide(END, levels[-1].style or null_style)\n\n guide_style = guide_style_stack.current + get_style(node.guide_style)\n style = style_stack.current + get_style(node.style)\n prefix = levels[(2 if self.hide_root else 1) :]\n renderable_lines = console.render_lines(\n Styled(node.label, style),\n options.update(\n width=options.max_width\n - sum(level.cell_length for level in prefix),\n highlight=self.highlight,\n height=None,\n ),\n )\n\n if not (depth 
== 0 and self.hide_root):\n for first, line in loop_first(renderable_lines):\n if prefix:\n yield from _Segment.apply_style(\n prefix,\n style.background_style,\n post_style=remove_guide_styles,\n )\n yield from line\n yield new_line\n if first and prefix:\n prefix[-1] = make_guide(\n SPACE if last else CONTINUE, prefix[-1].style or null_style\n )\n\n if node.expanded and node.children:\n levels[-1] = make_guide(\n SPACE if last else CONTINUE, levels[-1].style or null_style\n )\n levels.append(\n make_guide(END if len(node.children) == 1 else FORK, guide_style)\n )\n style_stack.push(get_style(node.style))\n guide_style_stack.push(get_style(node.guide_style))\n push(iter(loop_last(node.children)))\n depth += 1\n\n def __rich_measure__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> \"Measurement\":\n stack: List[Iterator[Tree]] = [iter([self])]\n pop = stack.pop\n push = stack.append\n minimum = 0\n maximum = 0\n measure = Measurement.get\n level = 0\n while stack:\n iter_tree = pop()\n try:\n tree = next(iter_tree)\n except StopIteration:\n level -= 1\n continue\n push(iter_tree)\n min_measure, max_measure = measure(console, options, tree.label)\n indent = level * 4\n minimum = max(min_measure + indent, minimum)\n maximum = max(max_measure + indent, maximum)\n if tree.expanded and tree.children:\n push(iter(tree.children))\n level += 1\n return Measurement(minimum, maximum)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n\n from rich.console import Group\n from rich.markdown import Markdown\n from rich.panel import Panel\n from rich.syntax import Syntax\n from rich.table import Table\n\n table = Table(row_styles=[\"\", \"dim\"])\n\n table.add_column(\"Released\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Title\", style=\"magenta\")\n table.add_column(\"Box Office\", justify=\"right\", style=\"green\")\n\n table.add_row(\"Dec 20, 2019\", \"Star Wars: The Rise of Skywalker\", \"$952,110,690\")\n table.add_row(\"May 25, 2018\", \"Solo: A Star Wars Story\", \"$393,151,347\")\n table.add_row(\"Dec 15, 2017\", \"Star Wars Ep. V111: The Last Jedi\", \"$1,332,539,889\")\n table.add_row(\"Dec 16, 2016\", \"Rogue One: A Star Wars Story\", \"$1,332,439,889\")\n\n code = \"\"\"\\\nclass Segment(NamedTuple):\n text: str = \"\" \n style: Optional[Style] = None \n is_control: bool = False \n\"\"\"\n syntax = Syntax(code, \"python\", theme=\"monokai\", line_numbers=True)\n\n markdown = Markdown(\n \"\"\"\\\n### example.md\n> Hello, World!\n> \n> Markdown _all_ the things\n\"\"\"\n )\n\n root = Tree(\"\ud83c\udf32 [b green]Rich Tree\", highlight=True, hide_root=True)\n\n node = root.add(\":file_folder: Renderables\", guide_style=\"red\")\n simple_node = node.add(\":file_folder: [bold yellow]Atomic\", guide_style=\"uu green\")\n simple_node.add(Group(\"\ud83d\udcc4 Syntax\", syntax))\n simple_node.add(Group(\"\ud83d\udcc4 Markdown\", Panel(markdown, border_style=\"green\")))\n\n containers_node = node.add(\n \":file_folder: [bold magenta]Containers\", guide_style=\"bold magenta\"\n )\n containers_node.expanded = True\n panel = Panel.fit(\"Just a panel\", border_style=\"red\")\n containers_node.add(Group(\"\ud83d\udcc4 Panels\", panel))\n\n containers_node.add(Group(\"\ud83d\udcc4 [b magenta]Table\", table))\n\n console = Console()\n console.print(root)\n", "path": "rich/tree.py"}]}
| 3,038 | 712 |
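For quick reference, the patch in the row above adds a `hide_root` flag to `rich.tree.Tree`. A short usage sketch, assuming a Rich build that already contains this change; the expected output mirrors the example in the issue:

```python
from rich import print
from rich.tree import Tree

# With hide_root=True the root label line and its guide column are suppressed.
tree = Tree("Dir to print", hide_root=True)
dir1 = tree.add("dir1")
dir1.add("file1")
tree.add("file2")
print(tree)
# Expected output (root hidden):
# dir1
# └── file1
# file2
```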
gh_patches_debug_28540 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1792 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Don't use builtin function names as variables
There is e.g. `all` in `PicklePersistence` and I think also `type` somewhere. Also `id`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/ext/picklepersistence.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2020
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains the PicklePersistence class."""
20 import pickle
21 from collections import defaultdict
22 from copy import deepcopy
23
24 from telegram.ext import BasePersistence
25
26
27 class PicklePersistence(BasePersistence):
28 """Using python's builtin pickle for making you bot persistent.
29
30 Attributes:
31 filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`
32 is false this will be used as a prefix.
33 store_user_data (:obj:`bool`): Optional. Whether user_data should be saved by this
34 persistence class.
35 store_chat_data (:obj:`bool`): Optional. Whether user_data should be saved by this
36 persistence class.
37 store_bot_data (:obj:`bool`): Optional. Whether bot_data should be saved by this
38 persistence class.
39 single_file (:obj:`bool`): Optional. When ``False`` will store 3 sperate files of
40 `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is
41 ``True``.
42 on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`
43 is called and keep data in memory until that happens. When ``False`` will store data
44 on any transaction *and* on call fo :meth:`flush`. Default is ``False``.
45
46 Args:
47 filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`
48 is false this will be used as a prefix.
49 store_user_data (:obj:`bool`, optional): Whether user_data should be saved by this
50 persistence class. Default is ``True``.
51 store_chat_data (:obj:`bool`, optional): Whether user_data should be saved by this
52 persistence class. Default is ``True``.
53 store_bot_data (:obj:`bool`, optional): Whether bot_data should be saved by this
54 persistence class. Default is ``True`` .
55 single_file (:obj:`bool`, optional): When ``False`` will store 3 sperate files of
56 `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is
57 ``True``.
58 on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`
59 is called and keep data in memory until that happens. When ``False`` will store data
60 on any transaction *and* on call fo :meth:`flush`. Default is ``False``.
61 """
62
63 def __init__(self, filename,
64 store_user_data=True,
65 store_chat_data=True,
66 store_bot_data=True,
67 single_file=True,
68 on_flush=False):
69 super(PicklePersistence, self).__init__(store_user_data=store_user_data,
70 store_chat_data=store_chat_data,
71 store_bot_data=store_bot_data)
72 self.filename = filename
73 self.single_file = single_file
74 self.on_flush = on_flush
75 self.user_data = None
76 self.chat_data = None
77 self.bot_data = None
78 self.conversations = None
79
80 def load_singlefile(self):
81 try:
82 filename = self.filename
83 with open(self.filename, "rb") as f:
84 all = pickle.load(f)
85 self.user_data = defaultdict(dict, all['user_data'])
86 self.chat_data = defaultdict(dict, all['chat_data'])
87 # For backwards compatibility with files not containing bot data
88 self.bot_data = all.get('bot_data', {})
89 self.conversations = all['conversations']
90 except IOError:
91 self.conversations = {}
92 self.user_data = defaultdict(dict)
93 self.chat_data = defaultdict(dict)
94 self.bot_data = {}
95 except pickle.UnpicklingError:
96 raise TypeError("File {} does not contain valid pickle data".format(filename))
97 except Exception:
98 raise TypeError("Something went wrong unpickling {}".format(filename))
99
100 def load_file(self, filename):
101 try:
102 with open(filename, "rb") as f:
103 return pickle.load(f)
104 except IOError:
105 return None
106 except pickle.UnpicklingError:
107 raise TypeError("File {} does not contain valid pickle data".format(filename))
108 except Exception:
109 raise TypeError("Something went wrong unpickling {}".format(filename))
110
111 def dump_singlefile(self):
112 with open(self.filename, "wb") as f:
113 all = {'conversations': self.conversations, 'user_data': self.user_data,
114 'chat_data': self.chat_data, 'bot_data': self.bot_data}
115 pickle.dump(all, f)
116
117 def dump_file(self, filename, data):
118 with open(filename, "wb") as f:
119 pickle.dump(data, f)
120
121 def get_user_data(self):
122 """Returns the user_data from the pickle file if it exsists or an empty defaultdict.
123
124 Returns:
125 :obj:`defaultdict`: The restored user data.
126 """
127 if self.user_data:
128 pass
129 elif not self.single_file:
130 filename = "{}_user_data".format(self.filename)
131 data = self.load_file(filename)
132 if not data:
133 data = defaultdict(dict)
134 else:
135 data = defaultdict(dict, data)
136 self.user_data = data
137 else:
138 self.load_singlefile()
139 return deepcopy(self.user_data)
140
141 def get_chat_data(self):
142 """Returns the chat_data from the pickle file if it exsists or an empty defaultdict.
143
144 Returns:
145 :obj:`defaultdict`: The restored chat data.
146 """
147 if self.chat_data:
148 pass
149 elif not self.single_file:
150 filename = "{}_chat_data".format(self.filename)
151 data = self.load_file(filename)
152 if not data:
153 data = defaultdict(dict)
154 else:
155 data = defaultdict(dict, data)
156 self.chat_data = data
157 else:
158 self.load_singlefile()
159 return deepcopy(self.chat_data)
160
161 def get_bot_data(self):
162 """Returns the bot_data from the pickle file if it exsists or an empty dict.
163
164 Returns:
165 :obj:`defaultdict`: The restored bot data.
166 """
167 if self.bot_data:
168 pass
169 elif not self.single_file:
170 filename = "{}_bot_data".format(self.filename)
171 data = self.load_file(filename)
172 if not data:
173 data = {}
174 self.bot_data = data
175 else:
176 self.load_singlefile()
177 return deepcopy(self.bot_data)
178
179 def get_conversations(self, name):
180 """Returns the conversations from the pickle file if it exsists or an empty defaultdict.
181
182 Args:
183 name (:obj:`str`): The handlers name.
184
185 Returns:
186 :obj:`dict`: The restored conversations for the handler.
187 """
188 if self.conversations:
189 pass
190 elif not self.single_file:
191 filename = "{}_conversations".format(self.filename)
192 data = self.load_file(filename)
193 if not data:
194 data = {name: {}}
195 self.conversations = data
196 else:
197 self.load_singlefile()
198 return self.conversations.get(name, {}).copy()
199
200 def update_conversation(self, name, key, new_state):
201 """Will update the conversations for the given handler and depending on :attr:`on_flush`
202 save the pickle file.
203
204 Args:
205 name (:obj:`str`): The handlers name.
206 key (:obj:`tuple`): The key the state is changed for.
207 new_state (:obj:`tuple` | :obj:`any`): The new state for the given key.
208 """
209 if self.conversations.setdefault(name, {}).get(key) == new_state:
210 return
211 self.conversations[name][key] = new_state
212 if not self.on_flush:
213 if not self.single_file:
214 filename = "{}_conversations".format(self.filename)
215 self.dump_file(filename, self.conversations)
216 else:
217 self.dump_singlefile()
218
219 def update_user_data(self, user_id, data):
220 """Will update the user_data (if changed) and depending on :attr:`on_flush` save the
221 pickle file.
222
223 Args:
224 user_id (:obj:`int`): The user the data might have been changed for.
225 data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.user_data` [user_id].
226 """
227 if self.user_data.get(user_id) == data:
228 return
229 self.user_data[user_id] = data
230 if not self.on_flush:
231 if not self.single_file:
232 filename = "{}_user_data".format(self.filename)
233 self.dump_file(filename, self.user_data)
234 else:
235 self.dump_singlefile()
236
237 def update_chat_data(self, chat_id, data):
238 """Will update the chat_data (if changed) and depending on :attr:`on_flush` save the
239 pickle file.
240
241 Args:
242 chat_id (:obj:`int`): The chat the data might have been changed for.
243 data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.chat_data` [chat_id].
244 """
245 if self.chat_data.get(chat_id) == data:
246 return
247 self.chat_data[chat_id] = data
248 if not self.on_flush:
249 if not self.single_file:
250 filename = "{}_chat_data".format(self.filename)
251 self.dump_file(filename, self.chat_data)
252 else:
253 self.dump_singlefile()
254
255 def update_bot_data(self, data):
256 """Will update the bot_data (if changed) and depending on :attr:`on_flush` save the
257 pickle file.
258
259 Args:
260 data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.bot_data`.
261 """
262 if self.bot_data == data:
263 return
264 self.bot_data = data.copy()
265 if not self.on_flush:
266 if not self.single_file:
267 filename = "{}_bot_data".format(self.filename)
268 self.dump_file(filename, self.bot_data)
269 else:
270 self.dump_singlefile()
271
272 def flush(self):
273 """ Will save all data in memory to pickle file(s).
274 """
275 if self.single_file:
276 if self.user_data or self.chat_data or self.conversations:
277 self.dump_singlefile()
278 else:
279 if self.user_data:
280 self.dump_file("{}_user_data".format(self.filename), self.user_data)
281 if self.chat_data:
282 self.dump_file("{}_chat_data".format(self.filename), self.chat_data)
283 if self.bot_data:
284 self.dump_file("{}_bot_data".format(self.filename), self.bot_data)
285 if self.conversations:
286 self.dump_file("{}_conversations".format(self.filename), self.conversations)
287
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/telegram/ext/picklepersistence.py b/telegram/ext/picklepersistence.py
--- a/telegram/ext/picklepersistence.py
+++ b/telegram/ext/picklepersistence.py
@@ -81,12 +81,12 @@
try:
filename = self.filename
with open(self.filename, "rb") as f:
- all = pickle.load(f)
- self.user_data = defaultdict(dict, all['user_data'])
- self.chat_data = defaultdict(dict, all['chat_data'])
+ data = pickle.load(f)
+ self.user_data = defaultdict(dict, data['user_data'])
+ self.chat_data = defaultdict(dict, data['chat_data'])
# For backwards compatibility with files not containing bot data
- self.bot_data = all.get('bot_data', {})
- self.conversations = all['conversations']
+ self.bot_data = data.get('bot_data', {})
+ self.conversations = data['conversations']
except IOError:
self.conversations = {}
self.user_data = defaultdict(dict)
@@ -110,9 +110,9 @@
def dump_singlefile(self):
with open(self.filename, "wb") as f:
- all = {'conversations': self.conversations, 'user_data': self.user_data,
- 'chat_data': self.chat_data, 'bot_data': self.bot_data}
- pickle.dump(all, f)
+ data = {'conversations': self.conversations, 'user_data': self.user_data,
+ 'chat_data': self.chat_data, 'bot_data': self.bot_data}
+ pickle.dump(data, f)
def dump_file(self, filename, data):
with open(filename, "wb") as f:
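The renames in the diff above matter because binding `all` (or `type`, `id`) hides the corresponding Python builtin for the rest of that scope. A small illustration of the pitfall and the fix, with function names that are purely illustrative and not part of python-telegram-bot:

```python
import pickle


def load_with_shadowing(path):
    with open(path, "rb") as f:
        all = pickle.load(f)  # shadows the builtin all() in this scope
    # Any later use of the builtin now fails, e.g.:
    # all(isinstance(k, int) for k in all)  -> TypeError: 'dict' object is not callable
    return all


def load_without_shadowing(path):
    with open(path, "rb") as f:
        data = pickle.load(f)  # neutral name keeps the builtins usable
    return data
```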
|
{"golden_diff": "diff --git a/telegram/ext/picklepersistence.py b/telegram/ext/picklepersistence.py\n--- a/telegram/ext/picklepersistence.py\n+++ b/telegram/ext/picklepersistence.py\n@@ -81,12 +81,12 @@\n try:\n filename = self.filename\n with open(self.filename, \"rb\") as f:\n- all = pickle.load(f)\n- self.user_data = defaultdict(dict, all['user_data'])\n- self.chat_data = defaultdict(dict, all['chat_data'])\n+ data = pickle.load(f)\n+ self.user_data = defaultdict(dict, data['user_data'])\n+ self.chat_data = defaultdict(dict, data['chat_data'])\n # For backwards compatibility with files not containing bot data\n- self.bot_data = all.get('bot_data', {})\n- self.conversations = all['conversations']\n+ self.bot_data = data.get('bot_data', {})\n+ self.conversations = data['conversations']\n except IOError:\n self.conversations = {}\n self.user_data = defaultdict(dict)\n@@ -110,9 +110,9 @@\n \n def dump_singlefile(self):\n with open(self.filename, \"wb\") as f:\n- all = {'conversations': self.conversations, 'user_data': self.user_data,\n- 'chat_data': self.chat_data, 'bot_data': self.bot_data}\n- pickle.dump(all, f)\n+ data = {'conversations': self.conversations, 'user_data': self.user_data,\n+ 'chat_data': self.chat_data, 'bot_data': self.bot_data}\n+ pickle.dump(data, f)\n \n def dump_file(self, filename, data):\n with open(filename, \"wb\") as f:\n", "issue": "Don't use builtin function names as variables\nThere is e.g. `all` in `PicklePersistence` and I think also `type` somewhere. Also `id`\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2020\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the PicklePersistence class.\"\"\"\nimport pickle\nfrom collections import defaultdict\nfrom copy import deepcopy\n\nfrom telegram.ext import BasePersistence\n\n\nclass PicklePersistence(BasePersistence):\n \"\"\"Using python's builtin pickle for making you bot persistent.\n\n Attributes:\n filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`\n is false this will be used as a prefix.\n store_user_data (:obj:`bool`): Optional. Whether user_data should be saved by this\n persistence class.\n store_chat_data (:obj:`bool`): Optional. Whether user_data should be saved by this\n persistence class.\n store_bot_data (:obj:`bool`): Optional. Whether bot_data should be saved by this\n persistence class.\n single_file (:obj:`bool`): Optional. When ``False`` will store 3 sperate files of\n `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is\n ``True``.\n on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`\n is called and keep data in memory until that happens. When ``False`` will store data\n on any transaction *and* on call fo :meth:`flush`. 
Default is ``False``.\n\n Args:\n filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`\n is false this will be used as a prefix.\n store_user_data (:obj:`bool`, optional): Whether user_data should be saved by this\n persistence class. Default is ``True``.\n store_chat_data (:obj:`bool`, optional): Whether user_data should be saved by this\n persistence class. Default is ``True``.\n store_bot_data (:obj:`bool`, optional): Whether bot_data should be saved by this\n persistence class. Default is ``True`` .\n single_file (:obj:`bool`, optional): When ``False`` will store 3 sperate files of\n `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is\n ``True``.\n on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`\n is called and keep data in memory until that happens. When ``False`` will store data\n on any transaction *and* on call fo :meth:`flush`. Default is ``False``.\n \"\"\"\n\n def __init__(self, filename,\n store_user_data=True,\n store_chat_data=True,\n store_bot_data=True,\n single_file=True,\n on_flush=False):\n super(PicklePersistence, self).__init__(store_user_data=store_user_data,\n store_chat_data=store_chat_data,\n store_bot_data=store_bot_data)\n self.filename = filename\n self.single_file = single_file\n self.on_flush = on_flush\n self.user_data = None\n self.chat_data = None\n self.bot_data = None\n self.conversations = None\n\n def load_singlefile(self):\n try:\n filename = self.filename\n with open(self.filename, \"rb\") as f:\n all = pickle.load(f)\n self.user_data = defaultdict(dict, all['user_data'])\n self.chat_data = defaultdict(dict, all['chat_data'])\n # For backwards compatibility with files not containing bot data\n self.bot_data = all.get('bot_data', {})\n self.conversations = all['conversations']\n except IOError:\n self.conversations = {}\n self.user_data = defaultdict(dict)\n self.chat_data = defaultdict(dict)\n self.bot_data = {}\n except pickle.UnpicklingError:\n raise TypeError(\"File {} does not contain valid pickle data\".format(filename))\n except Exception:\n raise TypeError(\"Something went wrong unpickling {}\".format(filename))\n\n def load_file(self, filename):\n try:\n with open(filename, \"rb\") as f:\n return pickle.load(f)\n except IOError:\n return None\n except pickle.UnpicklingError:\n raise TypeError(\"File {} does not contain valid pickle data\".format(filename))\n except Exception:\n raise TypeError(\"Something went wrong unpickling {}\".format(filename))\n\n def dump_singlefile(self):\n with open(self.filename, \"wb\") as f:\n all = {'conversations': self.conversations, 'user_data': self.user_data,\n 'chat_data': self.chat_data, 'bot_data': self.bot_data}\n pickle.dump(all, f)\n\n def dump_file(self, filename, data):\n with open(filename, \"wb\") as f:\n pickle.dump(data, f)\n\n def get_user_data(self):\n \"\"\"Returns the user_data from the pickle file if it exsists or an empty defaultdict.\n\n Returns:\n :obj:`defaultdict`: The restored user data.\n \"\"\"\n if self.user_data:\n pass\n elif not self.single_file:\n filename = \"{}_user_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = defaultdict(dict)\n else:\n data = defaultdict(dict, data)\n self.user_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.user_data)\n\n def get_chat_data(self):\n \"\"\"Returns the chat_data from the pickle file if it exsists or an empty defaultdict.\n\n Returns:\n :obj:`defaultdict`: The restored chat 
data.\n \"\"\"\n if self.chat_data:\n pass\n elif not self.single_file:\n filename = \"{}_chat_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = defaultdict(dict)\n else:\n data = defaultdict(dict, data)\n self.chat_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.chat_data)\n\n def get_bot_data(self):\n \"\"\"Returns the bot_data from the pickle file if it exsists or an empty dict.\n\n Returns:\n :obj:`defaultdict`: The restored bot data.\n \"\"\"\n if self.bot_data:\n pass\n elif not self.single_file:\n filename = \"{}_bot_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = {}\n self.bot_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.bot_data)\n\n def get_conversations(self, name):\n \"\"\"Returns the conversations from the pickle file if it exsists or an empty defaultdict.\n\n Args:\n name (:obj:`str`): The handlers name.\n\n Returns:\n :obj:`dict`: The restored conversations for the handler.\n \"\"\"\n if self.conversations:\n pass\n elif not self.single_file:\n filename = \"{}_conversations\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = {name: {}}\n self.conversations = data\n else:\n self.load_singlefile()\n return self.conversations.get(name, {}).copy()\n\n def update_conversation(self, name, key, new_state):\n \"\"\"Will update the conversations for the given handler and depending on :attr:`on_flush`\n save the pickle file.\n\n Args:\n name (:obj:`str`): The handlers name.\n key (:obj:`tuple`): The key the state is changed for.\n new_state (:obj:`tuple` | :obj:`any`): The new state for the given key.\n \"\"\"\n if self.conversations.setdefault(name, {}).get(key) == new_state:\n return\n self.conversations[name][key] = new_state\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_conversations\".format(self.filename)\n self.dump_file(filename, self.conversations)\n else:\n self.dump_singlefile()\n\n def update_user_data(self, user_id, data):\n \"\"\"Will update the user_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n user_id (:obj:`int`): The user the data might have been changed for.\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.user_data` [user_id].\n \"\"\"\n if self.user_data.get(user_id) == data:\n return\n self.user_data[user_id] = data\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_user_data\".format(self.filename)\n self.dump_file(filename, self.user_data)\n else:\n self.dump_singlefile()\n\n def update_chat_data(self, chat_id, data):\n \"\"\"Will update the chat_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n chat_id (:obj:`int`): The chat the data might have been changed for.\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.chat_data` [chat_id].\n \"\"\"\n if self.chat_data.get(chat_id) == data:\n return\n self.chat_data[chat_id] = data\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_chat_data\".format(self.filename)\n self.dump_file(filename, self.chat_data)\n else:\n self.dump_singlefile()\n\n def update_bot_data(self, data):\n \"\"\"Will update the bot_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.bot_data`.\n \"\"\"\n if self.bot_data == data:\n return\n self.bot_data = data.copy()\n if not self.on_flush:\n if not self.single_file:\n filename = 
\"{}_bot_data\".format(self.filename)\n self.dump_file(filename, self.bot_data)\n else:\n self.dump_singlefile()\n\n def flush(self):\n \"\"\" Will save all data in memory to pickle file(s).\n \"\"\"\n if self.single_file:\n if self.user_data or self.chat_data or self.conversations:\n self.dump_singlefile()\n else:\n if self.user_data:\n self.dump_file(\"{}_user_data\".format(self.filename), self.user_data)\n if self.chat_data:\n self.dump_file(\"{}_chat_data\".format(self.filename), self.chat_data)\n if self.bot_data:\n self.dump_file(\"{}_bot_data\".format(self.filename), self.bot_data)\n if self.conversations:\n self.dump_file(\"{}_conversations\".format(self.filename), self.conversations)\n", "path": "telegram/ext/picklepersistence.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2020\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the PicklePersistence class.\"\"\"\nimport pickle\nfrom collections import defaultdict\nfrom copy import deepcopy\n\nfrom telegram.ext import BasePersistence\n\n\nclass PicklePersistence(BasePersistence):\n \"\"\"Using python's builtin pickle for making you bot persistent.\n\n Attributes:\n filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`\n is false this will be used as a prefix.\n store_user_data (:obj:`bool`): Optional. Whether user_data should be saved by this\n persistence class.\n store_chat_data (:obj:`bool`): Optional. Whether user_data should be saved by this\n persistence class.\n store_bot_data (:obj:`bool`): Optional. Whether bot_data should be saved by this\n persistence class.\n single_file (:obj:`bool`): Optional. When ``False`` will store 3 sperate files of\n `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is\n ``True``.\n on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`\n is called and keep data in memory until that happens. When ``False`` will store data\n on any transaction *and* on call fo :meth:`flush`. Default is ``False``.\n\n Args:\n filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`\n is false this will be used as a prefix.\n store_user_data (:obj:`bool`, optional): Whether user_data should be saved by this\n persistence class. Default is ``True``.\n store_chat_data (:obj:`bool`, optional): Whether user_data should be saved by this\n persistence class. Default is ``True``.\n store_bot_data (:obj:`bool`, optional): Whether bot_data should be saved by this\n persistence class. Default is ``True`` .\n single_file (:obj:`bool`, optional): When ``False`` will store 3 sperate files of\n `filename_user_data`, `filename_chat_data` and `filename_conversations`. 
Default is\n ``True``.\n on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`\n is called and keep data in memory until that happens. When ``False`` will store data\n on any transaction *and* on call fo :meth:`flush`. Default is ``False``.\n \"\"\"\n\n def __init__(self, filename,\n store_user_data=True,\n store_chat_data=True,\n store_bot_data=True,\n single_file=True,\n on_flush=False):\n super(PicklePersistence, self).__init__(store_user_data=store_user_data,\n store_chat_data=store_chat_data,\n store_bot_data=store_bot_data)\n self.filename = filename\n self.single_file = single_file\n self.on_flush = on_flush\n self.user_data = None\n self.chat_data = None\n self.bot_data = None\n self.conversations = None\n\n def load_singlefile(self):\n try:\n filename = self.filename\n with open(self.filename, \"rb\") as f:\n data = pickle.load(f)\n self.user_data = defaultdict(dict, data['user_data'])\n self.chat_data = defaultdict(dict, data['chat_data'])\n # For backwards compatibility with files not containing bot data\n self.bot_data = data.get('bot_data', {})\n self.conversations = data['conversations']\n except IOError:\n self.conversations = {}\n self.user_data = defaultdict(dict)\n self.chat_data = defaultdict(dict)\n self.bot_data = {}\n except pickle.UnpicklingError:\n raise TypeError(\"File {} does not contain valid pickle data\".format(filename))\n except Exception:\n raise TypeError(\"Something went wrong unpickling {}\".format(filename))\n\n def load_file(self, filename):\n try:\n with open(filename, \"rb\") as f:\n return pickle.load(f)\n except IOError:\n return None\n except pickle.UnpicklingError:\n raise TypeError(\"File {} does not contain valid pickle data\".format(filename))\n except Exception:\n raise TypeError(\"Something went wrong unpickling {}\".format(filename))\n\n def dump_singlefile(self):\n with open(self.filename, \"wb\") as f:\n data = {'conversations': self.conversations, 'user_data': self.user_data,\n 'chat_data': self.chat_data, 'bot_data': self.bot_data}\n pickle.dump(data, f)\n\n def dump_file(self, filename, data):\n with open(filename, \"wb\") as f:\n pickle.dump(data, f)\n\n def get_user_data(self):\n \"\"\"Returns the user_data from the pickle file if it exsists or an empty defaultdict.\n\n Returns:\n :obj:`defaultdict`: The restored user data.\n \"\"\"\n if self.user_data:\n pass\n elif not self.single_file:\n filename = \"{}_user_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = defaultdict(dict)\n else:\n data = defaultdict(dict, data)\n self.user_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.user_data)\n\n def get_chat_data(self):\n \"\"\"Returns the chat_data from the pickle file if it exsists or an empty defaultdict.\n\n Returns:\n :obj:`defaultdict`: The restored chat data.\n \"\"\"\n if self.chat_data:\n pass\n elif not self.single_file:\n filename = \"{}_chat_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = defaultdict(dict)\n else:\n data = defaultdict(dict, data)\n self.chat_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.chat_data)\n\n def get_bot_data(self):\n \"\"\"Returns the bot_data from the pickle file if it exsists or an empty dict.\n\n Returns:\n :obj:`defaultdict`: The restored bot data.\n \"\"\"\n if self.bot_data:\n pass\n elif not self.single_file:\n filename = \"{}_bot_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = {}\n self.bot_data = 
data\n else:\n self.load_singlefile()\n return deepcopy(self.bot_data)\n\n def get_conversations(self, name):\n \"\"\"Returns the conversations from the pickle file if it exsists or an empty defaultdict.\n\n Args:\n name (:obj:`str`): The handlers name.\n\n Returns:\n :obj:`dict`: The restored conversations for the handler.\n \"\"\"\n if self.conversations:\n pass\n elif not self.single_file:\n filename = \"{}_conversations\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = {name: {}}\n self.conversations = data\n else:\n self.load_singlefile()\n return self.conversations.get(name, {}).copy()\n\n def update_conversation(self, name, key, new_state):\n \"\"\"Will update the conversations for the given handler and depending on :attr:`on_flush`\n save the pickle file.\n\n Args:\n name (:obj:`str`): The handlers name.\n key (:obj:`tuple`): The key the state is changed for.\n new_state (:obj:`tuple` | :obj:`any`): The new state for the given key.\n \"\"\"\n if self.conversations.setdefault(name, {}).get(key) == new_state:\n return\n self.conversations[name][key] = new_state\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_conversations\".format(self.filename)\n self.dump_file(filename, self.conversations)\n else:\n self.dump_singlefile()\n\n def update_user_data(self, user_id, data):\n \"\"\"Will update the user_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n user_id (:obj:`int`): The user the data might have been changed for.\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.user_data` [user_id].\n \"\"\"\n if self.user_data.get(user_id) == data:\n return\n self.user_data[user_id] = data\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_user_data\".format(self.filename)\n self.dump_file(filename, self.user_data)\n else:\n self.dump_singlefile()\n\n def update_chat_data(self, chat_id, data):\n \"\"\"Will update the chat_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n chat_id (:obj:`int`): The chat the data might have been changed for.\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.chat_data` [chat_id].\n \"\"\"\n if self.chat_data.get(chat_id) == data:\n return\n self.chat_data[chat_id] = data\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_chat_data\".format(self.filename)\n self.dump_file(filename, self.chat_data)\n else:\n self.dump_singlefile()\n\n def update_bot_data(self, data):\n \"\"\"Will update the bot_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.bot_data`.\n \"\"\"\n if self.bot_data == data:\n return\n self.bot_data = data.copy()\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_bot_data\".format(self.filename)\n self.dump_file(filename, self.bot_data)\n else:\n self.dump_singlefile()\n\n def flush(self):\n \"\"\" Will save all data in memory to pickle file(s).\n \"\"\"\n if self.single_file:\n if self.user_data or self.chat_data or self.conversations:\n self.dump_singlefile()\n else:\n if self.user_data:\n self.dump_file(\"{}_user_data\".format(self.filename), self.user_data)\n if self.chat_data:\n self.dump_file(\"{}_chat_data\".format(self.filename), self.chat_data)\n if self.bot_data:\n self.dump_file(\"{}_bot_data\".format(self.filename), self.bot_data)\n if self.conversations:\n self.dump_file(\"{}_conversations\".format(self.filename), self.conversations)\n", "path": 
"telegram/ext/picklepersistence.py"}]}
| 3,500 | 377 |
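The before/after listings in the row above differ mainly in renaming the local variable `all` to `data` inside `load_singlefile()` and `dump_singlefile()`, so Python's built-in `all()` is no longer shadowed. A minimal sketch of the load side of that pattern, using only the standard library (illustrative, not the library's actual code):

```python
import pickle
from collections import defaultdict

def load_singlefile(filename):
    # Name the unpickled payload `data`, not `all`, so the built-in all() stays usable.
    with open(filename, "rb") as f:
        data = pickle.load(f)
    user_data = defaultdict(dict, data["user_data"])
    chat_data = defaultdict(dict, data["chat_data"])
    bot_data = data.get("bot_data", {})  # tolerate files written before bot_data existed
    return user_data, chat_data, bot_data, data["conversations"]
```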
gh_patches_debug_35116
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-4766
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.rtve: Live streams not working.
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
As of today, live streams from RTVE are not working.
### Debug log
```text
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.10.6
[cli][debug] Streamlink: 4.3.0
[cli][debug] Dependencies:
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.15.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.1
[cli][debug] websocket-client: 1.3.3
[cli][debug] Arguments:
[cli][debug] url=https://www.rtve.es/play/videos/directo/la-1/
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe
[cli][info] Found matching plugin rtve for URL https://www.rtve.es/play/videos/directo/la-1/
error: No playable streams found on this URL: https://www.rtve.es/play/videos/directo/la-1/
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/rtve.py`
Content:
```
1 """
2 $description Live TV channels and video on-demand service from RTVE, a Spanish public, state-owned broadcaster.
3 $url rtve.es
4 $type live, vod
5 $region Spain
6 """
7
8 import logging
9 import re
10 from base64 import b64decode
11 from io import BytesIO
12 from typing import Iterator, Sequence, Tuple
13 from urllib.parse import urlparse
14
15 from streamlink.plugin import Plugin, pluginargument, pluginmatcher
16 from streamlink.plugin.api import validate
17 from streamlink.stream.ffmpegmux import MuxedStream
18 from streamlink.stream.hls import HLSStream
19 from streamlink.stream.http import HTTPStream
20 from streamlink.utils.url import update_scheme
21
22 log = logging.getLogger(__name__)
23
24
25 class Base64Reader:
26 def __init__(self, data: str):
27 stream = BytesIO(b64decode(data))
28
29 def _iterate():
30 while True:
31 chunk = stream.read(1)
32 if len(chunk) == 0: # pragma: no cover
33 return
34 yield ord(chunk)
35
36 self._iterator: Iterator[int] = _iterate()
37
38 def read(self, num: int) -> Sequence[int]:
39 res = []
40 for _ in range(num):
41 item = next(self._iterator, None)
42 if item is None: # pragma: no cover
43 break
44 res.append(item)
45 return res
46
47 def skip(self, num: int) -> None:
48 self.read(num)
49
50 def read_chars(self, num: int) -> str:
51 return "".join(chr(item) for item in self.read(num))
52
53 def read_int(self) -> int:
54 a, b, c, d = self.read(4)
55 return a << 24 | b << 16 | c << 8 | d
56
57 def read_chunk(self) -> Tuple[str, Sequence[int]]:
58 size = self.read_int()
59 chunktype = self.read_chars(4)
60 chunkdata = self.read(size)
61 if len(chunkdata) != size: # pragma: no cover
62 raise ValueError("Invalid chunk length")
63 self.skip(4)
64 return chunktype, chunkdata
65
66
67 class ZTNR:
68 @staticmethod
69 def _get_alphabet(text: str) -> str:
70 res = []
71 j = 0
72 k = 0
73 for char in text:
74 if k > 0:
75 k -= 1
76 else:
77 res.append(char)
78 j = (j + 1) % 4
79 k = j
80 return "".join(res)
81
82 @staticmethod
83 def _get_url(text: str, alphabet: str) -> str:
84 res = []
85 j = 0
86 n = 0
87 k = 3
88 cont = 0
89 for char in text:
90 if j == 0:
91 n = int(char) * 10
92 j = 1
93 elif k > 0:
94 k -= 1
95 else:
96 res.append(alphabet[n + int(char)])
97 j = 0
98 k = cont % 4
99 cont += 1
100 return "".join(res)
101
102 @classmethod
103 def _get_source(cls, alphabet: str, data: str) -> str:
104 return cls._get_url(data, cls._get_alphabet(alphabet))
105
106 @classmethod
107 def translate(cls, data: str) -> Iterator[Tuple[str, str]]:
108 reader = Base64Reader(data.replace("\n", ""))
109 reader.skip(8)
110 chunk_type, chunk_data = reader.read_chunk()
111 while chunk_type != "IEND":
112 if chunk_type == "tEXt":
113 content = "".join(chr(item) for item in chunk_data if item > 0)
114 if "#" not in content or "%%" not in content: # pragma: no cover
115 continue
116 alphabet, content = content.split("#", 1)
117 quality, content = content.split("%%", 1)
118 yield quality, cls._get_source(alphabet, content)
119 chunk_type, chunk_data = reader.read_chunk()
120
121
122 @pluginmatcher(re.compile(
123 r"https?://(?:www\.)?rtve\.es/play/videos/.+"
124 ))
125 @pluginargument(
126 "mux-subtitles",
127 is_global=True,
128 )
129 class Rtve(Plugin):
130 URL_VIDEOS = "https://ztnr.rtve.es/ztnr/movil/thumbnail/rtveplayw/videos/{id}.png?q=v2"
131 URL_SUBTITLES = "https://www.rtve.es/api/videos/{id}/subtitulos.json"
132
133 def _get_streams(self):
134 self.id = self.session.http.get(self.url, schema=validate.Schema(
135 re.compile(r"\bdata-setup='({.+?})'", re.DOTALL),
136 validate.none_or_all(
137 validate.get(1),
138 validate.parse_json(),
139 {
140 "idAsset": validate.any(int, validate.all(str, validate.transform(int))),
141 },
142 validate.get("idAsset"),
143 ),
144 ))
145 if not self.id:
146 return
147
148 urls = self.session.http.get(
149 self.URL_VIDEOS.format(id=self.id),
150 schema=validate.Schema(
151 validate.transform(ZTNR.translate),
152 validate.transform(list),
153 [(str, validate.url())],
154 ),
155 )
156
157 url = next((url for _, url in urls if urlparse(url).path.endswith(".m3u8")), None)
158 if not url:
159 url = next((url for _, url in urls if urlparse(url).path.endswith(".mp4")), None)
160 if url:
161 yield "vod", HTTPStream(self.session, url)
162 return
163
164 streams = HLSStream.parse_variant_playlist(self.session, url).items()
165
166 if self.options.get("mux-subtitles"):
167 subs = self.session.http.get(
168 self.URL_SUBTITLES.format(id=self.id),
169 schema=validate.Schema(
170 validate.parse_json(),
171 {
172 "page": {
173 "items": [{
174 "lang": str,
175 "src": validate.url(),
176 }]
177 }
178 },
179 validate.get(("page", "items")),
180 ),
181 )
182 if subs:
183 subtitles = {
184 s["lang"]: HTTPStream(self.session, update_scheme("https://", s["src"], force=True))
185 for s in subs
186 }
187 for quality, stream in streams:
188 yield quality, MuxedStream(self.session, stream, subtitles=subtitles)
189 return
190
191 yield from streams
192
193
194 __plugin__ = Rtve
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/rtve.py b/src/streamlink/plugins/rtve.py
--- a/src/streamlink/plugins/rtve.py
+++ b/src/streamlink/plugins/rtve.py
@@ -127,6 +127,7 @@
is_global=True,
)
class Rtve(Plugin):
+ URL_M3U8 = "https://ztnr.rtve.es/ztnr/{id}.m3u8"
URL_VIDEOS = "https://ztnr.rtve.es/ztnr/movil/thumbnail/rtveplayw/videos/{id}.png?q=v2"
URL_SUBTITLES = "https://www.rtve.es/api/videos/{id}/subtitulos.json"
@@ -145,6 +146,8 @@
if not self.id:
return
+ # check obfuscated stream URLs via self.URL_VIDEOS and ZTNR.translate() first
+ # self.URL_M3U8 appears to be valid for all streams, but doesn't provide any content in same cases
urls = self.session.http.get(
self.URL_VIDEOS.format(id=self.id),
schema=validate.Schema(
@@ -154,12 +157,16 @@
),
)
- url = next((url for _, url in urls if urlparse(url).path.endswith(".m3u8")), None)
- if not url:
- url = next((url for _, url in urls if urlparse(url).path.endswith(".mp4")), None)
- if url:
- yield "vod", HTTPStream(self.session, url)
- return
+ # then fall back to self.URL_M3U8
+ if not urls:
+ url = self.URL_M3U8.format(id=self.id)
+ else:
+ url = next((url for _, url in urls if urlparse(url).path.endswith(".m3u8")), None)
+ if not url:
+ url = next((url for _, url in urls if urlparse(url).path.endswith(".mp4")), None)
+ if url:
+ yield "vod", HTTPStream(self.session, url)
+ return
streams = HLSStream.parse_variant_playlist(self.session, url).items()
@@ -173,8 +180,8 @@
"items": [{
"lang": str,
"src": validate.url(),
- }]
- }
+ }],
+ },
},
validate.get(("page", "items")),
),
|
{"golden_diff": "diff --git a/src/streamlink/plugins/rtve.py b/src/streamlink/plugins/rtve.py\n--- a/src/streamlink/plugins/rtve.py\n+++ b/src/streamlink/plugins/rtve.py\n@@ -127,6 +127,7 @@\n is_global=True,\n )\n class Rtve(Plugin):\n+ URL_M3U8 = \"https://ztnr.rtve.es/ztnr/{id}.m3u8\"\n URL_VIDEOS = \"https://ztnr.rtve.es/ztnr/movil/thumbnail/rtveplayw/videos/{id}.png?q=v2\"\n URL_SUBTITLES = \"https://www.rtve.es/api/videos/{id}/subtitulos.json\"\n \n@@ -145,6 +146,8 @@\n if not self.id:\n return\n \n+ # check obfuscated stream URLs via self.URL_VIDEOS and ZTNR.translate() first\n+ # self.URL_M3U8 appears to be valid for all streams, but doesn't provide any content in same cases\n urls = self.session.http.get(\n self.URL_VIDEOS.format(id=self.id),\n schema=validate.Schema(\n@@ -154,12 +157,16 @@\n ),\n )\n \n- url = next((url for _, url in urls if urlparse(url).path.endswith(\".m3u8\")), None)\n- if not url:\n- url = next((url for _, url in urls if urlparse(url).path.endswith(\".mp4\")), None)\n- if url:\n- yield \"vod\", HTTPStream(self.session, url)\n- return\n+ # then fall back to self.URL_M3U8\n+ if not urls:\n+ url = self.URL_M3U8.format(id=self.id)\n+ else:\n+ url = next((url for _, url in urls if urlparse(url).path.endswith(\".m3u8\")), None)\n+ if not url:\n+ url = next((url for _, url in urls if urlparse(url).path.endswith(\".mp4\")), None)\n+ if url:\n+ yield \"vod\", HTTPStream(self.session, url)\n+ return\n \n streams = HLSStream.parse_variant_playlist(self.session, url).items()\n \n@@ -173,8 +180,8 @@\n \"items\": [{\n \"lang\": str,\n \"src\": validate.url(),\n- }]\n- }\n+ }],\n+ },\n },\n validate.get((\"page\", \"items\")),\n ),\n", "issue": "plugins.rtve: Live streams not working.\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nAs for today, live streams from RTVE is not working.\n\n### Debug log\n\n```text\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] Streamlink: 4.3.0\r\n[cli][debug] Dependencies:\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.15.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.1\r\n[cli][debug] websocket-client: 1.3.3\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.rtve.es/play/videos/directo/la-1/\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --ffmpeg-ffmpeg=C:\\Program Files\\Streamlink\\ffmpeg\\ffmpeg.exe\r\n[cli][info] Found matching plugin rtve for URL https://www.rtve.es/play/videos/directo/la-1/\r\nerror: No playable streams found on this URL: https://www.rtve.es/play/videos/directo/la-1/\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Live TV channels and video on-demand service from RTVE, a Spanish public, state-owned broadcaster.\n$url rtve.es\n$type live, vod\n$region Spain\n\"\"\"\n\nimport logging\nimport re\nfrom base64 import b64decode\nfrom io import BytesIO\nfrom typing import Iterator, Sequence, Tuple\nfrom urllib.parse import 
urlparse\n\nfrom streamlink.plugin import Plugin, pluginargument, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.ffmpegmux import MuxedStream\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.stream.http import HTTPStream\nfrom streamlink.utils.url import update_scheme\n\nlog = logging.getLogger(__name__)\n\n\nclass Base64Reader:\n def __init__(self, data: str):\n stream = BytesIO(b64decode(data))\n\n def _iterate():\n while True:\n chunk = stream.read(1)\n if len(chunk) == 0: # pragma: no cover\n return\n yield ord(chunk)\n\n self._iterator: Iterator[int] = _iterate()\n\n def read(self, num: int) -> Sequence[int]:\n res = []\n for _ in range(num):\n item = next(self._iterator, None)\n if item is None: # pragma: no cover\n break\n res.append(item)\n return res\n\n def skip(self, num: int) -> None:\n self.read(num)\n\n def read_chars(self, num: int) -> str:\n return \"\".join(chr(item) for item in self.read(num))\n\n def read_int(self) -> int:\n a, b, c, d = self.read(4)\n return a << 24 | b << 16 | c << 8 | d\n\n def read_chunk(self) -> Tuple[str, Sequence[int]]:\n size = self.read_int()\n chunktype = self.read_chars(4)\n chunkdata = self.read(size)\n if len(chunkdata) != size: # pragma: no cover\n raise ValueError(\"Invalid chunk length\")\n self.skip(4)\n return chunktype, chunkdata\n\n\nclass ZTNR:\n @staticmethod\n def _get_alphabet(text: str) -> str:\n res = []\n j = 0\n k = 0\n for char in text:\n if k > 0:\n k -= 1\n else:\n res.append(char)\n j = (j + 1) % 4\n k = j\n return \"\".join(res)\n\n @staticmethod\n def _get_url(text: str, alphabet: str) -> str:\n res = []\n j = 0\n n = 0\n k = 3\n cont = 0\n for char in text:\n if j == 0:\n n = int(char) * 10\n j = 1\n elif k > 0:\n k -= 1\n else:\n res.append(alphabet[n + int(char)])\n j = 0\n k = cont % 4\n cont += 1\n return \"\".join(res)\n\n @classmethod\n def _get_source(cls, alphabet: str, data: str) -> str:\n return cls._get_url(data, cls._get_alphabet(alphabet))\n\n @classmethod\n def translate(cls, data: str) -> Iterator[Tuple[str, str]]:\n reader = Base64Reader(data.replace(\"\\n\", \"\"))\n reader.skip(8)\n chunk_type, chunk_data = reader.read_chunk()\n while chunk_type != \"IEND\":\n if chunk_type == \"tEXt\":\n content = \"\".join(chr(item) for item in chunk_data if item > 0)\n if \"#\" not in content or \"%%\" not in content: # pragma: no cover\n continue\n alphabet, content = content.split(\"#\", 1)\n quality, content = content.split(\"%%\", 1)\n yield quality, cls._get_source(alphabet, content)\n chunk_type, chunk_data = reader.read_chunk()\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?rtve\\.es/play/videos/.+\"\n))\n@pluginargument(\n \"mux-subtitles\",\n is_global=True,\n)\nclass Rtve(Plugin):\n URL_VIDEOS = \"https://ztnr.rtve.es/ztnr/movil/thumbnail/rtveplayw/videos/{id}.png?q=v2\"\n URL_SUBTITLES = \"https://www.rtve.es/api/videos/{id}/subtitulos.json\"\n\n def _get_streams(self):\n self.id = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"\\bdata-setup='({.+?})'\", re.DOTALL),\n validate.none_or_all(\n validate.get(1),\n validate.parse_json(),\n {\n \"idAsset\": validate.any(int, validate.all(str, validate.transform(int))),\n },\n validate.get(\"idAsset\"),\n ),\n ))\n if not self.id:\n return\n\n urls = self.session.http.get(\n self.URL_VIDEOS.format(id=self.id),\n schema=validate.Schema(\n validate.transform(ZTNR.translate),\n validate.transform(list),\n [(str, validate.url())],\n ),\n )\n\n url = next((url for _, url in urls if 
urlparse(url).path.endswith(\".m3u8\")), None)\n if not url:\n url = next((url for _, url in urls if urlparse(url).path.endswith(\".mp4\")), None)\n if url:\n yield \"vod\", HTTPStream(self.session, url)\n return\n\n streams = HLSStream.parse_variant_playlist(self.session, url).items()\n\n if self.options.get(\"mux-subtitles\"):\n subs = self.session.http.get(\n self.URL_SUBTITLES.format(id=self.id),\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"page\": {\n \"items\": [{\n \"lang\": str,\n \"src\": validate.url(),\n }]\n }\n },\n validate.get((\"page\", \"items\")),\n ),\n )\n if subs:\n subtitles = {\n s[\"lang\"]: HTTPStream(self.session, update_scheme(\"https://\", s[\"src\"], force=True))\n for s in subs\n }\n for quality, stream in streams:\n yield quality, MuxedStream(self.session, stream, subtitles=subtitles)\n return\n\n yield from streams\n\n\n__plugin__ = Rtve\n", "path": "src/streamlink/plugins/rtve.py"}], "after_files": [{"content": "\"\"\"\n$description Live TV channels and video on-demand service from RTVE, a Spanish public, state-owned broadcaster.\n$url rtve.es\n$type live, vod\n$region Spain\n\"\"\"\n\nimport logging\nimport re\nfrom base64 import b64decode\nfrom io import BytesIO\nfrom typing import Iterator, Sequence, Tuple\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, pluginargument, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.ffmpegmux import MuxedStream\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.stream.http import HTTPStream\nfrom streamlink.utils.url import update_scheme\n\nlog = logging.getLogger(__name__)\n\n\nclass Base64Reader:\n def __init__(self, data: str):\n stream = BytesIO(b64decode(data))\n\n def _iterate():\n while True:\n chunk = stream.read(1)\n if len(chunk) == 0: # pragma: no cover\n return\n yield ord(chunk)\n\n self._iterator: Iterator[int] = _iterate()\n\n def read(self, num: int) -> Sequence[int]:\n res = []\n for _ in range(num):\n item = next(self._iterator, None)\n if item is None: # pragma: no cover\n break\n res.append(item)\n return res\n\n def skip(self, num: int) -> None:\n self.read(num)\n\n def read_chars(self, num: int) -> str:\n return \"\".join(chr(item) for item in self.read(num))\n\n def read_int(self) -> int:\n a, b, c, d = self.read(4)\n return a << 24 | b << 16 | c << 8 | d\n\n def read_chunk(self) -> Tuple[str, Sequence[int]]:\n size = self.read_int()\n chunktype = self.read_chars(4)\n chunkdata = self.read(size)\n if len(chunkdata) != size: # pragma: no cover\n raise ValueError(\"Invalid chunk length\")\n self.skip(4)\n return chunktype, chunkdata\n\n\nclass ZTNR:\n @staticmethod\n def _get_alphabet(text: str) -> str:\n res = []\n j = 0\n k = 0\n for char in text:\n if k > 0:\n k -= 1\n else:\n res.append(char)\n j = (j + 1) % 4\n k = j\n return \"\".join(res)\n\n @staticmethod\n def _get_url(text: str, alphabet: str) -> str:\n res = []\n j = 0\n n = 0\n k = 3\n cont = 0\n for char in text:\n if j == 0:\n n = int(char) * 10\n j = 1\n elif k > 0:\n k -= 1\n else:\n res.append(alphabet[n + int(char)])\n j = 0\n k = cont % 4\n cont += 1\n return \"\".join(res)\n\n @classmethod\n def _get_source(cls, alphabet: str, data: str) -> str:\n return cls._get_url(data, cls._get_alphabet(alphabet))\n\n @classmethod\n def translate(cls, data: str) -> Iterator[Tuple[str, str]]:\n reader = Base64Reader(data.replace(\"\\n\", \"\"))\n reader.skip(8)\n chunk_type, chunk_data = reader.read_chunk()\n while chunk_type != \"IEND\":\n if chunk_type == 
\"tEXt\":\n content = \"\".join(chr(item) for item in chunk_data if item > 0)\n if \"#\" not in content or \"%%\" not in content: # pragma: no cover\n continue\n alphabet, content = content.split(\"#\", 1)\n quality, content = content.split(\"%%\", 1)\n yield quality, cls._get_source(alphabet, content)\n chunk_type, chunk_data = reader.read_chunk()\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?rtve\\.es/play/videos/.+\"\n))\n@pluginargument(\n \"mux-subtitles\",\n is_global=True,\n)\nclass Rtve(Plugin):\n URL_M3U8 = \"https://ztnr.rtve.es/ztnr/{id}.m3u8\"\n URL_VIDEOS = \"https://ztnr.rtve.es/ztnr/movil/thumbnail/rtveplayw/videos/{id}.png?q=v2\"\n URL_SUBTITLES = \"https://www.rtve.es/api/videos/{id}/subtitulos.json\"\n\n def _get_streams(self):\n self.id = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"\\bdata-setup='({.+?})'\", re.DOTALL),\n validate.none_or_all(\n validate.get(1),\n validate.parse_json(),\n {\n \"idAsset\": validate.any(int, validate.all(str, validate.transform(int))),\n },\n validate.get(\"idAsset\"),\n ),\n ))\n if not self.id:\n return\n\n # check obfuscated stream URLs via self.URL_VIDEOS and ZTNR.translate() first\n # self.URL_M3U8 appears to be valid for all streams, but doesn't provide any content in same cases\n urls = self.session.http.get(\n self.URL_VIDEOS.format(id=self.id),\n schema=validate.Schema(\n validate.transform(ZTNR.translate),\n validate.transform(list),\n [(str, validate.url())],\n ),\n )\n\n # then fall back to self.URL_M3U8\n if not urls:\n url = self.URL_M3U8.format(id=self.id)\n else:\n url = next((url for _, url in urls if urlparse(url).path.endswith(\".m3u8\")), None)\n if not url:\n url = next((url for _, url in urls if urlparse(url).path.endswith(\".mp4\")), None)\n if url:\n yield \"vod\", HTTPStream(self.session, url)\n return\n\n streams = HLSStream.parse_variant_playlist(self.session, url).items()\n\n if self.options.get(\"mux-subtitles\"):\n subs = self.session.http.get(\n self.URL_SUBTITLES.format(id=self.id),\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"page\": {\n \"items\": [{\n \"lang\": str,\n \"src\": validate.url(),\n }],\n },\n },\n validate.get((\"page\", \"items\")),\n ),\n )\n if subs:\n subtitles = {\n s[\"lang\"]: HTTPStream(self.session, update_scheme(\"https://\", s[\"src\"], force=True))\n for s in subs\n }\n for quality, stream in streams:\n yield quality, MuxedStream(self.session, stream, subtitles=subtitles)\n return\n\n yield from streams\n\n\n__plugin__ = Rtve\n", "path": "src/streamlink/plugins/rtve.py"}]}
| 2,617 | 543 |
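A note on the patch above: it keeps the obfuscated-PNG lookup (`ZTNR.translate()` over `URL_VIDEOS`) as the primary source of stream URLs and only falls back to the direct `https://ztnr.rtve.es/ztnr/{id}.m3u8` template when that lookup yields nothing. A compact sketch of that selection order, with the URL template taken from the diff itself (illustrative only):

```python
from urllib.parse import urlparse

URL_M3U8 = "https://ztnr.rtve.es/ztnr/{id}.m3u8"  # template introduced by the patch

def pick_stream_url(asset_id, urls):
    """urls: list of (quality, url) pairs from ZTNR.translate(); may be empty."""
    if not urls:
        # PNG metadata gave us nothing usable: fall back to the direct HLS playlist.
        return URL_M3U8.format(id=asset_id), "hls"
    hls = next((u for _, u in urls if urlparse(u).path.endswith(".m3u8")), None)
    if hls:
        return hls, "hls"
    mp4 = next((u for _, u in urls if urlparse(u).path.endswith(".mp4")), None)
    return mp4, "vod"  # may be (None, "vod") if neither format was found
```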
gh_patches_debug_18457
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-easyblocks-3236
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
torchvision sanity check requires a GPU
#2931 added a sanity check to torchvision, but this requires a GPU to be available when building. We should, instead, check if a GPU is available and only run the sanity check if there is one. If there is no GPU available then we should print a warning instead.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/easyblocks/t/torchvision.py`
Content:
```
1 ##
2 # Copyright 2021-2023 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for building and installing torchvision, implemented as an easyblock
27
28 @author: Alexander Grund (TU Dresden)
29 @author: Kenneth Hoste (HPC-UGent)
30 """
31 from easybuild.easyblocks.generic.pythonpackage import PythonPackage, det_pylibdir
32 from easybuild.tools.build_log import EasyBuildError
33 from easybuild.tools.config import build_option
34 from easybuild.tools.modules import get_software_version
35 import easybuild.tools.environment as env
36
37
38 class EB_torchvision(PythonPackage):
39 """Support for building/installing TorchVison."""
40
41 @staticmethod
42 def extra_options():
43 """Change some defaults for easyconfig parameters."""
44 extra_vars = PythonPackage.extra_options()
45 extra_vars['use_pip'][0] = True
46 extra_vars['download_dep_fail'][0] = True
47 extra_vars['sanity_pip_check'][0] = True
48 return extra_vars
49
50 def __init__(self, *args, **kwargs):
51 """Initialize torchvision easyblock."""
52 super(EB_torchvision, self).__init__(*args, **kwargs)
53
54 dep_names = set(dep['name'] for dep in self.cfg.dependencies())
55
56 # require that PyTorch is listed as dependency
57 if 'PyTorch' not in dep_names:
58 raise EasyBuildError('PyTorch not found as a dependency')
59
60 # enable building with GPU support if CUDA is included as dependency
61 if 'CUDA' in dep_names:
62 self.with_cuda = True
63 else:
64 self.with_cuda = False
65
66 def configure_step(self):
67 """Set up torchvision config"""
68
69 # Note: Those can be overwritten by e.g. preinstallopts
70 env.setvar('BUILD_VERSION', self.version)
71 env.setvar('PYTORCH_VERSION', get_software_version('PyTorch'))
72
73 if self.with_cuda:
74 # make sure that torchvision is installed with CUDA support by setting $FORCE_CUDA
75 env.setvar('FORCE_CUDA', '1')
76 # specify CUDA compute capabilities via $TORCH_CUDA_ARCH_LIST
77 cuda_cc = self.cfg['cuda_compute_capabilities'] or build_option('cuda_compute_capabilities')
78 if cuda_cc:
79 env.setvar('TORCH_CUDA_ARCH_LIST', ';'.join(cuda_cc))
80
81 super(EB_torchvision, self).configure_step()
82
83 def sanity_check_step(self):
84 """Custom sanity check for torchvision."""
85 custom_commands = None
86 custom_paths = None
87
88 # check whether torchvision was indeed built with CUDA support,
89 # cfr. https://discuss.pytorch.org/t/notimplementederror-could-not-run-torchvision-nms-with-arguments-from-\
90 # the-cuda-backend-this-could-be-because-the-operator-doesnt-exist-for-this-backend/132352/4
91 if self.with_cuda:
92 custom_commands = []
93 python_code = '; '.join([
94 "import torch, torchvision",
95 "boxes = torch.tensor([[0., 1., 2., 3.]]).to('cuda')",
96 "scores = torch.randn(1).to('cuda')",
97 "print(torchvision.ops.nms(boxes, scores, 0.5))",
98 ])
99 custom_commands.append('python -c "%s"' % python_code)
100 custom_paths = {
101 'files': [],
102 'dirs': [det_pylibdir()],
103 }
104
105 return super(EB_torchvision, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/easybuild/easyblocks/t/torchvision.py b/easybuild/easyblocks/t/torchvision.py
--- a/easybuild/easyblocks/t/torchvision.py
+++ b/easybuild/easyblocks/t/torchvision.py
@@ -90,11 +90,12 @@
# the-cuda-backend-this-could-be-because-the-operator-doesnt-exist-for-this-backend/132352/4
if self.with_cuda:
custom_commands = []
- python_code = '; '.join([
+ python_code = '\n'.join([
"import torch, torchvision",
- "boxes = torch.tensor([[0., 1., 2., 3.]]).to('cuda')",
- "scores = torch.randn(1).to('cuda')",
- "print(torchvision.ops.nms(boxes, scores, 0.5))",
+ "if torch.cuda.device_count():",
+ " boxes = torch.tensor([[0., 1., 2., 3.]]).to('cuda')",
+ " scores = torch.randn(1).to('cuda')",
+ " print(torchvision.ops.nms(boxes, scores, 0.5))",
])
custom_commands.append('python -c "%s"' % python_code)
custom_paths = {
|
{"golden_diff": "diff --git a/easybuild/easyblocks/t/torchvision.py b/easybuild/easyblocks/t/torchvision.py\n--- a/easybuild/easyblocks/t/torchvision.py\n+++ b/easybuild/easyblocks/t/torchvision.py\n@@ -90,11 +90,12 @@\n # the-cuda-backend-this-could-be-because-the-operator-doesnt-exist-for-this-backend/132352/4\n if self.with_cuda:\n custom_commands = []\n- python_code = '; '.join([\n+ python_code = '\\n'.join([\n \"import torch, torchvision\",\n- \"boxes = torch.tensor([[0., 1., 2., 3.]]).to('cuda')\",\n- \"scores = torch.randn(1).to('cuda')\",\n- \"print(torchvision.ops.nms(boxes, scores, 0.5))\",\n+ \"if torch.cuda.device_count():\",\n+ \" boxes = torch.tensor([[0., 1., 2., 3.]]).to('cuda')\",\n+ \" scores = torch.randn(1).to('cuda')\",\n+ \" print(torchvision.ops.nms(boxes, scores, 0.5))\",\n ])\n custom_commands.append('python -c \"%s\"' % python_code)\n custom_paths = {\n", "issue": "torchvision sanity check requires a GPU\n#2931 added a sanity check to torchvision, but this requires a GPU to be available when building. We should, instead, check if a GPU is available and only run the sanity check if there is one. If there is no GPU available then we should print a warning instead.\n", "before_files": [{"content": "##\n# Copyright 2021-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing torchvision, implemented as an easyblock\n\n@author: Alexander Grund (TU Dresden)\n@author: Kenneth Hoste (HPC-UGent)\n\"\"\"\nfrom easybuild.easyblocks.generic.pythonpackage import PythonPackage, det_pylibdir\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.config import build_option\nfrom easybuild.tools.modules import get_software_version\nimport easybuild.tools.environment as env\n\n\nclass EB_torchvision(PythonPackage):\n \"\"\"Support for building/installing TorchVison.\"\"\"\n\n @staticmethod\n def extra_options():\n \"\"\"Change some defaults for easyconfig parameters.\"\"\"\n extra_vars = PythonPackage.extra_options()\n extra_vars['use_pip'][0] = True\n extra_vars['download_dep_fail'][0] = True\n extra_vars['sanity_pip_check'][0] = True\n return extra_vars\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize torchvision easyblock.\"\"\"\n super(EB_torchvision, self).__init__(*args, **kwargs)\n\n dep_names = set(dep['name'] for dep in self.cfg.dependencies())\n\n # require that PyTorch is listed as dependency\n if 'PyTorch' not in dep_names:\n raise EasyBuildError('PyTorch not found as a dependency')\n\n # enable building with GPU support if CUDA is included as dependency\n if 'CUDA' in dep_names:\n self.with_cuda = True\n else:\n self.with_cuda = False\n\n def configure_step(self):\n \"\"\"Set up torchvision config\"\"\"\n\n # Note: Those can be overwritten by e.g. preinstallopts\n env.setvar('BUILD_VERSION', self.version)\n env.setvar('PYTORCH_VERSION', get_software_version('PyTorch'))\n\n if self.with_cuda:\n # make sure that torchvision is installed with CUDA support by setting $FORCE_CUDA\n env.setvar('FORCE_CUDA', '1')\n # specify CUDA compute capabilities via $TORCH_CUDA_ARCH_LIST\n cuda_cc = self.cfg['cuda_compute_capabilities'] or build_option('cuda_compute_capabilities')\n if cuda_cc:\n env.setvar('TORCH_CUDA_ARCH_LIST', ';'.join(cuda_cc))\n\n super(EB_torchvision, self).configure_step()\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for torchvision.\"\"\"\n custom_commands = None\n custom_paths = None\n\n # check whether torchvision was indeed built with CUDA support,\n # cfr. 
https://discuss.pytorch.org/t/notimplementederror-could-not-run-torchvision-nms-with-arguments-from-\\\n # the-cuda-backend-this-could-be-because-the-operator-doesnt-exist-for-this-backend/132352/4\n if self.with_cuda:\n custom_commands = []\n python_code = '; '.join([\n \"import torch, torchvision\",\n \"boxes = torch.tensor([[0., 1., 2., 3.]]).to('cuda')\",\n \"scores = torch.randn(1).to('cuda')\",\n \"print(torchvision.ops.nms(boxes, scores, 0.5))\",\n ])\n custom_commands.append('python -c \"%s\"' % python_code)\n custom_paths = {\n 'files': [],\n 'dirs': [det_pylibdir()],\n }\n\n return super(EB_torchvision, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)\n", "path": "easybuild/easyblocks/t/torchvision.py"}], "after_files": [{"content": "##\n# Copyright 2021-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing torchvision, implemented as an easyblock\n\n@author: Alexander Grund (TU Dresden)\n@author: Kenneth Hoste (HPC-UGent)\n\"\"\"\nfrom easybuild.easyblocks.generic.pythonpackage import PythonPackage, det_pylibdir\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.config import build_option\nfrom easybuild.tools.modules import get_software_version\nimport easybuild.tools.environment as env\n\n\nclass EB_torchvision(PythonPackage):\n \"\"\"Support for building/installing TorchVison.\"\"\"\n\n @staticmethod\n def extra_options():\n \"\"\"Change some defaults for easyconfig parameters.\"\"\"\n extra_vars = PythonPackage.extra_options()\n extra_vars['use_pip'][0] = True\n extra_vars['download_dep_fail'][0] = True\n extra_vars['sanity_pip_check'][0] = True\n return extra_vars\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize torchvision easyblock.\"\"\"\n super(EB_torchvision, self).__init__(*args, **kwargs)\n\n dep_names = set(dep['name'] for dep in self.cfg.dependencies())\n\n # require that PyTorch is listed as dependency\n if 'PyTorch' not in dep_names:\n raise EasyBuildError('PyTorch not found as a dependency')\n\n # enable building with GPU support if CUDA is included as dependency\n if 'CUDA' in dep_names:\n self.with_cuda = True\n else:\n self.with_cuda = False\n\n def configure_step(self):\n \"\"\"Set up torchvision config\"\"\"\n\n # Note: Those can be overwritten by e.g. 
preinstallopts\n env.setvar('BUILD_VERSION', self.version)\n env.setvar('PYTORCH_VERSION', get_software_version('PyTorch'))\n\n if self.with_cuda:\n # make sure that torchvision is installed with CUDA support by setting $FORCE_CUDA\n env.setvar('FORCE_CUDA', '1')\n # specify CUDA compute capabilities via $TORCH_CUDA_ARCH_LIST\n cuda_cc = self.cfg['cuda_compute_capabilities'] or build_option('cuda_compute_capabilities')\n if cuda_cc:\n env.setvar('TORCH_CUDA_ARCH_LIST', ';'.join(cuda_cc))\n\n super(EB_torchvision, self).configure_step()\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for torchvision.\"\"\"\n custom_commands = None\n custom_paths = None\n\n # check whether torchvision was indeed built with CUDA support,\n # cfr. https://discuss.pytorch.org/t/notimplementederror-could-not-run-torchvision-nms-with-arguments-from-\\\n # the-cuda-backend-this-could-be-because-the-operator-doesnt-exist-for-this-backend/132352/4\n if self.with_cuda:\n custom_commands = []\n python_code = '\\n'.join([\n \"import torch, torchvision\",\n \"if torch.cuda.device_count():\",\n \" boxes = torch.tensor([[0., 1., 2., 3.]]).to('cuda')\",\n \" scores = torch.randn(1).to('cuda')\",\n \" print(torchvision.ops.nms(boxes, scores, 0.5))\",\n ])\n custom_commands.append('python -c \"%s\"' % python_code)\n custom_paths = {\n 'files': [],\n 'dirs': [det_pylibdir()],\n }\n\n return super(EB_torchvision, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)\n", "path": "easybuild/easyblocks/t/torchvision.py"}]}
| 1,560 | 297 |
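A note on the patch above: the join separator changes from `'; '` to `'\n'` because the new `if torch.cuda.device_count():` guard is a compound statement with an indented body, which cannot be chained after other statements with semicolons on one line; the snippet therefore has to reach `python -c` as a real multi-line program. The check itself, as embedded in the golden diff, is equivalent to:

```python
# Only exercise the CUDA backend of torchvision.ops.nms when a GPU is visible;
# on a GPU-less build node the sanity command simply prints nothing and exits 0.
import torch
import torchvision

if torch.cuda.device_count():
    boxes = torch.tensor([[0., 1., 2., 3.]]).to("cuda")
    scores = torch.randn(1).to("cuda")
    print(torchvision.ops.nms(boxes, scores, 0.5))
```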
gh_patches_debug_16317
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-1913
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
List of items failed to import does not allow access to next pages
**Describe the bug**
I've recently migrated my GoodReads library to bookwyrm and I've had 109 items failing to import.
When I go to that failed list location, https://bookwyrm.social/import/1151/failed, I see that there are 8 pages of failed results.
However, the page links point to the imported list, not to the failures.
**To Reproduce**
Steps to reproduce the behavior:
1. Import a GoodReads CSV with many books missing from bookwyrm.
2. Click on 'View and troubleshoot failed items': https://bookwyrm.social/import/1151/failed
3. Hover over page two of the list
4. See link to page 2 of the whole import list: https://bookwyrm.social/import/1151?page=2
**Expected behavior**
The links to the next pages should include the failed path: https://bookwyrm.social/import/1151/failed?page=2
The navigation works correctly; it's just the link that is incorrect.
**Instance**
bookwyrm.social
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/imports/troubleshoot.py`
Content:
```
1 """ import books from another app """
2 from django.contrib.auth.decorators import login_required
3 from django.core.exceptions import PermissionDenied
4 from django.core.paginator import Paginator
5 from django.shortcuts import get_object_or_404, redirect
6 from django.template.response import TemplateResponse
7 from django.utils.decorators import method_decorator
8 from django.views import View
9
10 from bookwyrm import models
11 from bookwyrm.importers import Importer
12 from bookwyrm.settings import PAGE_LENGTH
13
14 # pylint: disable= no-self-use
15 @method_decorator(login_required, name="dispatch")
16 class ImportTroubleshoot(View):
17 """problems items in an existing import"""
18
19 def get(self, request, job_id):
20 """status of an import job"""
21 job = get_object_or_404(models.ImportJob, id=job_id)
22 if job.user != request.user:
23 raise PermissionDenied()
24
25 items = job.items.order_by("index").filter(
26 fail_reason__isnull=False, book_guess__isnull=True
27 )
28
29 paginated = Paginator(items, PAGE_LENGTH)
30 page = paginated.get_page(request.GET.get("page"))
31 data = {
32 "job": job,
33 "items": page,
34 "page_range": paginated.get_elided_page_range(
35 page.number, on_each_side=2, on_ends=1
36 ),
37 "complete": True,
38 }
39
40 return TemplateResponse(request, "import/troubleshoot.html", data)
41
42 def post(self, request, job_id):
43 """retry lines from an import"""
44 job = get_object_or_404(models.ImportJob, id=job_id)
45 items = job.items.filter(fail_reason__isnull=False)
46
47 importer = Importer()
48 job = importer.create_retry_job(
49 request.user,
50 job,
51 items,
52 )
53 importer.start_import(job)
54 return redirect(f"/import/{job.id}")
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/views/imports/troubleshoot.py b/bookwyrm/views/imports/troubleshoot.py
--- a/bookwyrm/views/imports/troubleshoot.py
+++ b/bookwyrm/views/imports/troubleshoot.py
@@ -5,6 +5,7 @@
from django.shortcuts import get_object_or_404, redirect
from django.template.response import TemplateResponse
from django.utils.decorators import method_decorator
+from django.urls import reverse
from django.views import View
from bookwyrm import models
@@ -35,6 +36,7 @@
page.number, on_each_side=2, on_ends=1
),
"complete": True,
+ "page_path": reverse("import-troubleshoot", args=[job.id]),
}
return TemplateResponse(request, "import/troubleshoot.html", data)
|
{"golden_diff": "diff --git a/bookwyrm/views/imports/troubleshoot.py b/bookwyrm/views/imports/troubleshoot.py\n--- a/bookwyrm/views/imports/troubleshoot.py\n+++ b/bookwyrm/views/imports/troubleshoot.py\n@@ -5,6 +5,7 @@\n from django.shortcuts import get_object_or_404, redirect\n from django.template.response import TemplateResponse\n from django.utils.decorators import method_decorator\n+from django.urls import reverse\n from django.views import View\n \n from bookwyrm import models\n@@ -35,6 +36,7 @@\n page.number, on_each_side=2, on_ends=1\n ),\n \"complete\": True,\n+ \"page_path\": reverse(\"import-troubleshoot\", args=[job.id]),\n }\n \n return TemplateResponse(request, \"import/troubleshoot.html\", data)\n", "issue": "List of items failed to import does not allow access to next pages\n**Describe the bug**\r\nI've recently migrated my GoodReads library to bookwyrm and I've had 109 items failing to import.\r\nWhen I go to that failed list location, https://bookwyrm.social/import/1151/failed, I see that there are 8 pages of failed results.\r\nHowever, the list point to the imported list, not to the failures.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Import a GoodReads CSV with many books missing from bookwyrm.\r\n2. Click on 'View and troubleshoot failed items': https://bookwyrm.social/import/1151/failed\r\n3. Hover over page two of the list\r\n4. See link to page 2 of the whole import list: https://bookwyrm.social/import/1151?page=2\r\n\r\n**Expected behavior**\r\nThe link of the next pages should include the failed path: https://bookwyrm.social/import/1151/failed?page=2\r\nThe navigation works correctly, it's just the link that is incorrect.\r\n\r\n\r\n**Instance**\r\nbookwyrm.social\n", "before_files": [{"content": "\"\"\" import books from another app \"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.core.paginator import Paginator\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import models\nfrom bookwyrm.importers import Importer\nfrom bookwyrm.settings import PAGE_LENGTH\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass ImportTroubleshoot(View):\n \"\"\"problems items in an existing import\"\"\"\n\n def get(self, request, job_id):\n \"\"\"status of an import job\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n if job.user != request.user:\n raise PermissionDenied()\n\n items = job.items.order_by(\"index\").filter(\n fail_reason__isnull=False, book_guess__isnull=True\n )\n\n paginated = Paginator(items, PAGE_LENGTH)\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"job\": job,\n \"items\": page,\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n \"complete\": True,\n }\n\n return TemplateResponse(request, \"import/troubleshoot.html\", data)\n\n def post(self, request, job_id):\n \"\"\"retry lines from an import\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n items = job.items.filter(fail_reason__isnull=False)\n\n importer = Importer()\n job = importer.create_retry_job(\n request.user,\n job,\n items,\n )\n importer.start_import(job)\n return redirect(f\"/import/{job.id}\")\n", "path": "bookwyrm/views/imports/troubleshoot.py"}], "after_files": [{"content": "\"\"\" import 
books from another app \"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.core.paginator import Paginator\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.urls import reverse\nfrom django.views import View\n\nfrom bookwyrm import models\nfrom bookwyrm.importers import Importer\nfrom bookwyrm.settings import PAGE_LENGTH\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass ImportTroubleshoot(View):\n \"\"\"problems items in an existing import\"\"\"\n\n def get(self, request, job_id):\n \"\"\"status of an import job\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n if job.user != request.user:\n raise PermissionDenied()\n\n items = job.items.order_by(\"index\").filter(\n fail_reason__isnull=False, book_guess__isnull=True\n )\n\n paginated = Paginator(items, PAGE_LENGTH)\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"job\": job,\n \"items\": page,\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n \"complete\": True,\n \"page_path\": reverse(\"import-troubleshoot\", args=[job.id]),\n }\n\n return TemplateResponse(request, \"import/troubleshoot.html\", data)\n\n def post(self, request, job_id):\n \"\"\"retry lines from an import\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n items = job.items.filter(fail_reason__isnull=False)\n\n importer = Importer()\n job = importer.create_retry_job(\n request.user,\n job,\n items,\n )\n importer.start_import(job)\n return redirect(f\"/import/{job.id}\")\n", "path": "bookwyrm/views/imports/troubleshoot.py"}]}
| 1,013 | 183 |
gh_patches_debug_21194
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2884
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Moving files to S3 should be idempotent
### Is your feature request related to a problem? Please describe.
Moving files to S3 will always upload the file, even if it already exists.
### Describe the solution you'd like
Check if it already exists
### Motivation
### Describe alternatives you've considered
### Additional context
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/thaliawebsite/management/commands/migratemedia.py`
Content:
```
1 import logging
2 import os
3
4 from django.conf import settings
5 from django.core.management.base import BaseCommand
6
7 import boto3
8 from botocore.exceptions import ClientError
9
10
11 class Command(BaseCommand):
12 def handle(self, *args, **options):
13 if not settings.AWS_STORAGE_BUCKET_NAME:
14 logging.error("No AWS settings found")
15 return
16
17 # create session to s3
18 session = boto3.session.Session()
19 s3_client = session.client(
20 service_name="s3",
21 aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
22 aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
23 )
24
25 # process migrate local media files to s3
26 for full_path in self._get_all_media_file():
27 try:
28 upload_path = self._split_path_to_upload(full_path)
29 s3_client.upload_file(
30 full_path, settings.AWS_STORAGE_BUCKET_NAME, upload_path
31 )
32 print(f"success upload {upload_path}")
33 logging.info(f"success upload {upload_path}")
34 except ClientError as e:
35 print(f"failed upload {upload_path}")
36 logging.error(f"{e}: {upload_path}")
37
38 def _get_all_media_file(self) -> [str]:
39 files = []
40 for r, d, f in os.walk(settings.MEDIA_ROOT):
41 for file in f:
42 files.append(os.path.join(r, file))
43 return files
44
45 def _split_path_to_upload(self, full_path: str) -> str:
46 media_root = settings.MEDIA_ROOT
47 upload_path = full_path.split(media_root)[-1][1:]
48 return upload_path
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/thaliawebsite/management/commands/migratemedia.py b/website/thaliawebsite/management/commands/migratemedia.py
--- a/website/thaliawebsite/management/commands/migratemedia.py
+++ b/website/thaliawebsite/management/commands/migratemedia.py
@@ -24,8 +24,25 @@
# process migrate local media files to s3
for full_path in self._get_all_media_file():
+ upload_path = self._split_path_to_upload(full_path)
try:
- upload_path = self._split_path_to_upload(full_path)
+ try:
+ remote_file = s3_client.head_object(
+ Bucket=settings.AWS_STORAGE_BUCKET_NAME, Key=upload_path
+ )
+ except ClientError as e:
+ if e.response["Error"]["Code"] == "404":
+ remote_file = None
+ else:
+ raise e
+
+ if remote_file:
+ # file already exists
+ # note that this will not check if the file contents are the same
+ print(f"file already exists {upload_path}")
+ logging.info(f"file already exists {upload_path}")
+ continue
+
s3_client.upload_file(
full_path, settings.AWS_STORAGE_BUCKET_NAME, upload_path
)
|
{"golden_diff": "diff --git a/website/thaliawebsite/management/commands/migratemedia.py b/website/thaliawebsite/management/commands/migratemedia.py\n--- a/website/thaliawebsite/management/commands/migratemedia.py\n+++ b/website/thaliawebsite/management/commands/migratemedia.py\n@@ -24,8 +24,25 @@\n \n # process migrate local media files to s3\n for full_path in self._get_all_media_file():\n+ upload_path = self._split_path_to_upload(full_path)\n try:\n- upload_path = self._split_path_to_upload(full_path)\n+ try:\n+ remote_file = s3_client.head_object(\n+ Bucket=settings.AWS_STORAGE_BUCKET_NAME, Key=upload_path\n+ )\n+ except ClientError as e:\n+ if e.response[\"Error\"][\"Code\"] == \"404\":\n+ remote_file = None\n+ else:\n+ raise e\n+\n+ if remote_file:\n+ # file already exists\n+ # note that this will not check if the file contents are the same\n+ print(f\"file already exists {upload_path}\")\n+ logging.info(f\"file already exists {upload_path}\")\n+ continue\n+\n s3_client.upload_file(\n full_path, settings.AWS_STORAGE_BUCKET_NAME, upload_path\n )\n", "issue": "Moving files to S3 should be idempotent\n### Is your feature request related to a problem? Please describe.\r\nMoving files to S3 will always upload the file, even if it already exists.\r\n\r\n### Describe the solution you'd like\r\nCheck if it already exists\r\n\r\n### Motivation\r\n\r\n### Describe alternatives you've considered\r\n\r\n### Additional context\r\n\n", "before_files": [{"content": "import logging\nimport os\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\n\nimport boto3\nfrom botocore.exceptions import ClientError\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **options):\n if not settings.AWS_STORAGE_BUCKET_NAME:\n logging.error(\"No AWS settings found\")\n return\n\n # create session to s3\n session = boto3.session.Session()\n s3_client = session.client(\n service_name=\"s3\",\n aws_access_key_id=settings.AWS_ACCESS_KEY_ID,\n aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,\n )\n\n # process migrate local media files to s3\n for full_path in self._get_all_media_file():\n try:\n upload_path = self._split_path_to_upload(full_path)\n s3_client.upload_file(\n full_path, settings.AWS_STORAGE_BUCKET_NAME, upload_path\n )\n print(f\"success upload {upload_path}\")\n logging.info(f\"success upload {upload_path}\")\n except ClientError as e:\n print(f\"failed upload {upload_path}\")\n logging.error(f\"{e}: {upload_path}\")\n\n def _get_all_media_file(self) -> [str]:\n files = []\n for r, d, f in os.walk(settings.MEDIA_ROOT):\n for file in f:\n files.append(os.path.join(r, file))\n return files\n\n def _split_path_to_upload(self, full_path: str) -> str:\n media_root = settings.MEDIA_ROOT\n upload_path = full_path.split(media_root)[-1][1:]\n return upload_path\n", "path": "website/thaliawebsite/management/commands/migratemedia.py"}], "after_files": [{"content": "import logging\nimport os\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\n\nimport boto3\nfrom botocore.exceptions import ClientError\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **options):\n if not settings.AWS_STORAGE_BUCKET_NAME:\n logging.error(\"No AWS settings found\")\n return\n\n # create session to s3\n session = boto3.session.Session()\n s3_client = session.client(\n service_name=\"s3\",\n aws_access_key_id=settings.AWS_ACCESS_KEY_ID,\n aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,\n )\n\n # process migrate local media files to s3\n 
for full_path in self._get_all_media_file():\n upload_path = self._split_path_to_upload(full_path)\n try:\n try:\n remote_file = s3_client.head_object(\n Bucket=settings.AWS_STORAGE_BUCKET_NAME, Key=upload_path\n )\n except ClientError as e:\n if e.response[\"Error\"][\"Code\"] == \"404\":\n remote_file = None\n else:\n raise e\n\n if remote_file:\n # file already exists\n # note that this will not check if the file contents are the same\n print(f\"file already exists {upload_path}\")\n logging.info(f\"file already exists {upload_path}\")\n continue\n\n s3_client.upload_file(\n full_path, settings.AWS_STORAGE_BUCKET_NAME, upload_path\n )\n print(f\"success upload {upload_path}\")\n logging.info(f\"success upload {upload_path}\")\n except ClientError as e:\n print(f\"failed upload {upload_path}\")\n logging.error(f\"{e}: {upload_path}\")\n\n def _get_all_media_file(self) -> [str]:\n files = []\n for r, d, f in os.walk(settings.MEDIA_ROOT):\n for file in f:\n files.append(os.path.join(r, file))\n return files\n\n def _split_path_to_upload(self, full_path: str) -> str:\n media_root = settings.MEDIA_ROOT\n upload_path = full_path.split(media_root)[-1][1:]\n return upload_path\n", "path": "website/thaliawebsite/management/commands/migratemedia.py"}]}
| 767 | 295 |
gh_patches_debug_101
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-1700
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WPS421 doesn't catch pprint.pprint()
WPS421 has pprint in the blacklist, however does not catch a call to `pprint.pprint()`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/constants.py`
Content:
```
1 """
2 This module contains list of white- and black-listed ``python`` members.
3
4 We add values here when we want to make them public.
5 Or when a value is reused in several places.
6 Then, we automatically have to add it here and document it.
7
8 Other constants that are not used across modules
9 and does not require to be documented can be defined where they are used.
10
11 All values here must be documented with ``#:`` comments.
12 """
13
14 import math
15 import re
16
17 from typing_extensions import Final
18
19 #: List of functions we forbid to use.
20 FUNCTIONS_BLACKLIST: Final = frozenset((
21 # Code generation:
22 'eval',
23 'exec',
24 'compile',
25
26 # Termination:
27 'exit',
28 'quit',
29
30 # Magic:
31 'globals',
32 'locals',
33 'vars',
34 'dir',
35
36 # IO:
37 'print',
38 'pprint',
39 'input',
40 'breakpoint',
41
42 # Attribute access:
43 'hasattr',
44 'delattr',
45
46 # Gratis:
47 'copyright',
48 'help',
49 'credits',
50
51 # Dynamic imports:
52 '__import__',
53
54 # OOP:
55 'staticmethod',
56
57 # Mypy:
58 'reveal_type',
59 ))
60
61 #: List of module metadata we forbid to use.
62 MODULE_METADATA_VARIABLES_BLACKLIST: Final = frozenset((
63 '__author__',
64 '__all__',
65 '__version__',
66 '__about__',
67 ))
68
69 #: List of variable names we forbid to use.
70 VARIABLE_NAMES_BLACKLIST: Final = frozenset((
71 # Meaningless words:
72 'data',
73 'result',
74 'results',
75 'item',
76 'items',
77 'value',
78 'values',
79 'val',
80 'vals',
81 'var',
82 'vars',
83 'variable',
84 'content',
85 'contents',
86 'info',
87 'handle',
88 'handler',
89 'file',
90 'obj',
91 'objects',
92 'objs',
93 'some',
94 'do',
95 'param',
96 'params',
97 'parameters',
98
99 # Confuseables:
100 'no',
101 'true',
102 'false',
103
104 # Names from examples:
105 'foo',
106 'bar',
107 'baz',
108 ))
109
110 #: List of characters sequences that are hard to read.
111 UNREADABLE_CHARACTER_COMBINATIONS: Final = frozenset((
112 '1l',
113 '1I',
114 '0O',
115 'O0',
116 # Not included: 'lI', 'l1', 'Il'
117 # Because these names are quite common in real words.
118 ))
119
120 #: List of special names that are used only as first argument in methods.
121 SPECIAL_ARGUMENT_NAMES_WHITELIST: Final = frozenset((
122 'self',
123 'cls',
124 'mcs',
125 ))
126
127 #: List of all magic methods from the python docs.
128 ALL_MAGIC_METHODS: Final = frozenset((
129 '__new__',
130 '__init__',
131 '__del__',
132
133 '__repr__',
134 '__str__',
135 '__bytes__',
136 '__format__',
137
138 '__lt__',
139 '__le__',
140 '__eq__',
141 '__ne__',
142 '__gt__',
143 '__ge__',
144
145 '__hash__',
146 '__bool__',
147
148 '__getattr__',
149 '__getattribute__',
150 '__setattr__',
151 '__delattr__',
152 '__dir__',
153
154 '__get__',
155 '__set__',
156 '__delete__',
157 '__set_name__',
158
159 '__init_subclass__',
160 '__instancecheck__',
161 '__subclasscheck__',
162 '__class_getitem__',
163
164 '__call__',
165 '__len__',
166 '__length_hint__',
167 '__getitem__',
168 '__setitem__',
169 '__delitem__',
170 '__missing__',
171 '__iter__',
172 '__reversed__',
173 '__contains__',
174
175 '__add__',
176 '__sub__',
177 '__mul__',
178 '__matmul__',
179 '__truediv__',
180 '__floordiv__',
181 '__mod__',
182 '__divmod__',
183 '__pow__',
184 '__lshift__',
185 '__rshift__',
186 '__and__',
187 '__xor__',
188 '__or__',
189 '__radd__',
190 '__rsub__',
191 '__rmul__',
192 '__rmatmul__',
193 '__rtruediv__',
194 '__rfloordiv__',
195 '__rmod__',
196 '__rdivmod__',
197 '__rpow__',
198 '__rlshift__',
199 '__rrshift__',
200 '__rand__',
201 '__rxor__',
202 '__ror__',
203 '__iadd__',
204 '__isub__',
205 '__imul__',
206 '__imatmul__',
207 '__itruediv__',
208 '__ifloordiv__',
209 '__imod__',
210 '__ipow__',
211 '__ilshift__',
212 '__irshift__',
213 '__iand__',
214 '__ixor__',
215 '__ior__',
216 '__neg__',
217 '__pos__',
218 '__abs__',
219 '__invert__',
220 '__complex__',
221 '__int__',
222 '__float__',
223 '__index__',
224 '__round__',
225 '__trunc__',
226 '__floor__',
227 '__ceil__',
228
229 '__enter__',
230 '__exit__',
231
232 '__await__',
233 '__aiter__',
234 '__anext__',
235 '__aenter__',
236 '__aexit__',
237 ))
238
239 #: List of magic methods that are forbidden to use.
240 MAGIC_METHODS_BLACKLIST: Final = frozenset((
241 # Since we don't use `del`:
242 '__del__',
243 '__delitem__',
244 '__delete__',
245
246 # Since we don't use `pickle`:
247 '__reduce__',
248 '__reduce_ex__',
249
250 '__dir__', # since we don't use `dir()`
251 '__delattr__', # since we don't use `delattr()`
252 ))
253
254 #: List of magic methods that are not allowed to be generators.
255 YIELD_MAGIC_METHODS_BLACKLIST: Final = ALL_MAGIC_METHODS.difference({
256 # Allowed to be used with ``yield`` keyword:
257 '__call__',
258 '__iter__',
259 })
260
261 #: List of magic methods that are not allowed to be async.
262 ASYNC_MAGIC_METHODS_BLACKLIST: Final = ALL_MAGIC_METHODS.difference({
263 # In order of appearance on
264 # https://docs.python.org/3/reference/datamodel.html#basic-customization
265 # Allowed magic methods are:
266 '__anext__',
267 '__aenter__',
268 '__aexit__',
269 '__call__',
270 })
271
272 #: List of builtin classes that are allowed to subclass.
273 ALLOWED_BUILTIN_CLASSES: Final = frozenset((
274 'type',
275 'object',
276 ))
277
278 #: List of nested functions' names we allow to use.
279 NESTED_FUNCTIONS_WHITELIST: Final = frozenset((
280 'decorator',
281 'factory',
282 'wrapper',
283 ))
284
285 #: List of allowed ``__future__`` imports.
286 FUTURE_IMPORTS_WHITELIST: Final = frozenset((
287 'annotations',
288 'generator_stop',
289 ))
290
291 #: List of blacklisted module names.
292 MODULE_NAMES_BLACKLIST: Final = frozenset((
293 'util',
294 'utils',
295 'utilities',
296 'helpers',
297 ))
298
299 #: List of allowed module magic names.
300 MAGIC_MODULE_NAMES_WHITELIST: Final = frozenset((
301 '__init__',
302 '__main__',
303 ))
304
305 #: List of bad magic module functions.
306 MAGIC_MODULE_NAMES_BLACKLIST: Final = frozenset((
307 '__getattr__',
308 '__dir__',
309 ))
310
311 #: Regex pattern to name modules.
312 MODULE_NAME_PATTERN: Final = re.compile(r'^_?_?[a-z][a-z\d_]*[a-z\d](__)?$')
313
314 #: Common numbers that are allowed to be used without being called "magic".
315 MAGIC_NUMBERS_WHITELIST: Final = frozenset((
316 0, # both int and float
317 0.1,
318 0.5,
319 1.0,
320 100,
321 1000,
322 1024, # bytes
323 24, # hours
324 60, # seconds, minutes
325
326 1j, # imaginary part of a complex number
327 ))
328
329 #: Maximum amount of ``pragma`` no-cover comments per module.
330 MAX_NO_COVER_COMMENTS: Final = 5
331
332 #: Maximum length of ``yield`` ``tuple`` expressions.
333 MAX_LEN_YIELD_TUPLE: Final = 5
334
335 #: Maximum number of compare nodes in a single expression.
336 MAX_COMPARES: Final = 2
337
338 #: Maximum number of conditions in a single ``if`` or ``while`` statement.
339 MAX_CONDITIONS: Final = 4
340
341 #: Maximum number of `elif` blocks in a single `if` condition:
342 MAX_ELIFS: Final = 3
343
344 #: Maximum number of ``except`` cases in a single ``try`` clause.
345 MAX_EXCEPT_CASES: Final = 3
346
347 #: Approximate constants which real values should be imported from math module.
348 MATH_APPROXIMATE_CONSTANTS: Final = frozenset((
349 math.pi,
350 math.e,
351 math.tau,
352 ))
353
354 #: List of vague method names that may cause confusion if imported as is:
355 VAGUE_IMPORTS_BLACKLIST: Final = frozenset((
356 'read',
357 'write',
358 'load',
359 'loads',
360 'dump',
361 'dumps',
362 'parse',
363 'safe_load',
364 'safe_dump',
365 'load_all',
366 'dump_all',
367 'safe_load_all',
368 'safe_dump_all',
369 ))
370
371 #: List of literals without arguments we forbid to use.
372 LITERALS_BLACKLIST: Final = frozenset((
373 'int',
374 'float',
375 'str',
376 'bytes',
377 'bool',
378 'complex',
379 ))
380
381 #: List of functions in which arguments must be tuples.
382 TUPLE_ARGUMENTS_METHODS: Final = frozenset((
383 'frozenset',
384 ))
385
386 #: Conditions that can appear in the ``if`` statement to allow nested imports.
387 ALLOWED_NESTED_IMPORTS_CONDITIONS: Final = frozenset((
388 'TYPE_CHECKING',
389 ))
390
391 #: List of commonly used aliases
392 ALIAS_NAMES_WHITELIST: Final = frozenset((
393 'np',
394 'pd',
395 'df',
396 'plt',
397 'sns',
398 'tf',
399 'cv',
400 ))
401
402 # Internal variables
403 # ==================
404
405 # Please, do not touch values beyond this line!
406 # ---------------------------------------------
407
408 # They are not publicly documented since they are not used by the end user.
409 # But, we still need them to be defined here.
410
411 # Used as a default filename, when it is not passed by flake8:
412 STDIN: Final = 'stdin'
413
414 # Used to specify as a placeholder for `__init__`:
415 INIT: Final = '__init__'
416
417 # Used to determine when we are running on Windows:
418 WINDOWS_OS: Final = 'nt'
419
420 # Used as a placeholder for special `_` variable:
421 UNUSED_PLACEHOLDER: Final = '_'
422
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wemake_python_styleguide/constants.py b/wemake_python_styleguide/constants.py
--- a/wemake_python_styleguide/constants.py
+++ b/wemake_python_styleguide/constants.py
@@ -36,6 +36,7 @@
# IO:
'print',
'pprint',
+ 'pprint.pprint',
'input',
'breakpoint',
|
{"golden_diff": "diff --git a/wemake_python_styleguide/constants.py b/wemake_python_styleguide/constants.py\n--- a/wemake_python_styleguide/constants.py\n+++ b/wemake_python_styleguide/constants.py\n@@ -36,6 +36,7 @@\n # IO:\n 'print',\n 'pprint',\n+ 'pprint.pprint',\n 'input',\n 'breakpoint',\n", "issue": "WPS421 doesn't catch pprint.pprint()\nWPS421 has pprint in the blacklist, however does not catch a call to `pprint.pprint()`.\nWPS421 doesn't catch pprint.pprint()\nWPS421 has pprint in the blacklist, however does not catch a call to `pprint.pprint()`.\n", "before_files": [{"content": "\"\"\"\nThis module contains list of white- and black-listed ``python`` members.\n\nWe add values here when we want to make them public.\nOr when a value is reused in several places.\nThen, we automatically have to add it here and document it.\n\nOther constants that are not used across modules\nand does not require to be documented can be defined where they are used.\n\nAll values here must be documented with ``#:`` comments.\n\"\"\"\n\nimport math\nimport re\n\nfrom typing_extensions import Final\n\n#: List of functions we forbid to use.\nFUNCTIONS_BLACKLIST: Final = frozenset((\n # Code generation:\n 'eval',\n 'exec',\n 'compile',\n\n # Termination:\n 'exit',\n 'quit',\n\n # Magic:\n 'globals',\n 'locals',\n 'vars',\n 'dir',\n\n # IO:\n 'print',\n 'pprint',\n 'input',\n 'breakpoint',\n\n # Attribute access:\n 'hasattr',\n 'delattr',\n\n # Gratis:\n 'copyright',\n 'help',\n 'credits',\n\n # Dynamic imports:\n '__import__',\n\n # OOP:\n 'staticmethod',\n\n # Mypy:\n 'reveal_type',\n))\n\n#: List of module metadata we forbid to use.\nMODULE_METADATA_VARIABLES_BLACKLIST: Final = frozenset((\n '__author__',\n '__all__',\n '__version__',\n '__about__',\n))\n\n#: List of variable names we forbid to use.\nVARIABLE_NAMES_BLACKLIST: Final = frozenset((\n # Meaningless words:\n 'data',\n 'result',\n 'results',\n 'item',\n 'items',\n 'value',\n 'values',\n 'val',\n 'vals',\n 'var',\n 'vars',\n 'variable',\n 'content',\n 'contents',\n 'info',\n 'handle',\n 'handler',\n 'file',\n 'obj',\n 'objects',\n 'objs',\n 'some',\n 'do',\n 'param',\n 'params',\n 'parameters',\n\n # Confuseables:\n 'no',\n 'true',\n 'false',\n\n # Names from examples:\n 'foo',\n 'bar',\n 'baz',\n))\n\n#: List of characters sequences that are hard to read.\nUNREADABLE_CHARACTER_COMBINATIONS: Final = frozenset((\n '1l',\n '1I',\n '0O',\n 'O0',\n # Not included: 'lI', 'l1', 'Il'\n # Because these names are quite common in real words.\n))\n\n#: List of special names that are used only as first argument in methods.\nSPECIAL_ARGUMENT_NAMES_WHITELIST: Final = frozenset((\n 'self',\n 'cls',\n 'mcs',\n))\n\n#: List of all magic methods from the python docs.\nALL_MAGIC_METHODS: Final = frozenset((\n '__new__',\n '__init__',\n '__del__',\n\n '__repr__',\n '__str__',\n '__bytes__',\n '__format__',\n\n '__lt__',\n '__le__',\n '__eq__',\n '__ne__',\n '__gt__',\n '__ge__',\n\n '__hash__',\n '__bool__',\n\n '__getattr__',\n '__getattribute__',\n '__setattr__',\n '__delattr__',\n '__dir__',\n\n '__get__',\n '__set__',\n '__delete__',\n '__set_name__',\n\n '__init_subclass__',\n '__instancecheck__',\n '__subclasscheck__',\n '__class_getitem__',\n\n '__call__',\n '__len__',\n '__length_hint__',\n '__getitem__',\n '__setitem__',\n '__delitem__',\n '__missing__',\n '__iter__',\n '__reversed__',\n '__contains__',\n\n '__add__',\n '__sub__',\n '__mul__',\n '__matmul__',\n '__truediv__',\n '__floordiv__',\n '__mod__',\n '__divmod__',\n '__pow__',\n '__lshift__',\n 
'__rshift__',\n '__and__',\n '__xor__',\n '__or__',\n '__radd__',\n '__rsub__',\n '__rmul__',\n '__rmatmul__',\n '__rtruediv__',\n '__rfloordiv__',\n '__rmod__',\n '__rdivmod__',\n '__rpow__',\n '__rlshift__',\n '__rrshift__',\n '__rand__',\n '__rxor__',\n '__ror__',\n '__iadd__',\n '__isub__',\n '__imul__',\n '__imatmul__',\n '__itruediv__',\n '__ifloordiv__',\n '__imod__',\n '__ipow__',\n '__ilshift__',\n '__irshift__',\n '__iand__',\n '__ixor__',\n '__ior__',\n '__neg__',\n '__pos__',\n '__abs__',\n '__invert__',\n '__complex__',\n '__int__',\n '__float__',\n '__index__',\n '__round__',\n '__trunc__',\n '__floor__',\n '__ceil__',\n\n '__enter__',\n '__exit__',\n\n '__await__',\n '__aiter__',\n '__anext__',\n '__aenter__',\n '__aexit__',\n))\n\n#: List of magic methods that are forbidden to use.\nMAGIC_METHODS_BLACKLIST: Final = frozenset((\n # Since we don't use `del`:\n '__del__',\n '__delitem__',\n '__delete__',\n\n # Since we don't use `pickle`:\n '__reduce__',\n '__reduce_ex__',\n\n '__dir__', # since we don't use `dir()`\n '__delattr__', # since we don't use `delattr()`\n))\n\n#: List of magic methods that are not allowed to be generators.\nYIELD_MAGIC_METHODS_BLACKLIST: Final = ALL_MAGIC_METHODS.difference({\n # Allowed to be used with ``yield`` keyword:\n '__call__',\n '__iter__',\n})\n\n#: List of magic methods that are not allowed to be async.\nASYNC_MAGIC_METHODS_BLACKLIST: Final = ALL_MAGIC_METHODS.difference({\n # In order of appearance on\n # https://docs.python.org/3/reference/datamodel.html#basic-customization\n # Allowed magic methods are:\n '__anext__',\n '__aenter__',\n '__aexit__',\n '__call__',\n})\n\n#: List of builtin classes that are allowed to subclass.\nALLOWED_BUILTIN_CLASSES: Final = frozenset((\n 'type',\n 'object',\n))\n\n#: List of nested functions' names we allow to use.\nNESTED_FUNCTIONS_WHITELIST: Final = frozenset((\n 'decorator',\n 'factory',\n 'wrapper',\n))\n\n#: List of allowed ``__future__`` imports.\nFUTURE_IMPORTS_WHITELIST: Final = frozenset((\n 'annotations',\n 'generator_stop',\n))\n\n#: List of blacklisted module names.\nMODULE_NAMES_BLACKLIST: Final = frozenset((\n 'util',\n 'utils',\n 'utilities',\n 'helpers',\n))\n\n#: List of allowed module magic names.\nMAGIC_MODULE_NAMES_WHITELIST: Final = frozenset((\n '__init__',\n '__main__',\n))\n\n#: List of bad magic module functions.\nMAGIC_MODULE_NAMES_BLACKLIST: Final = frozenset((\n '__getattr__',\n '__dir__',\n))\n\n#: Regex pattern to name modules.\nMODULE_NAME_PATTERN: Final = re.compile(r'^_?_?[a-z][a-z\\d_]*[a-z\\d](__)?$')\n\n#: Common numbers that are allowed to be used without being called \"magic\".\nMAGIC_NUMBERS_WHITELIST: Final = frozenset((\n 0, # both int and float\n 0.1,\n 0.5,\n 1.0,\n 100,\n 1000,\n 1024, # bytes\n 24, # hours\n 60, # seconds, minutes\n\n 1j, # imaginary part of a complex number\n))\n\n#: Maximum amount of ``pragma`` no-cover comments per module.\nMAX_NO_COVER_COMMENTS: Final = 5\n\n#: Maximum length of ``yield`` ``tuple`` expressions.\nMAX_LEN_YIELD_TUPLE: Final = 5\n\n#: Maximum number of compare nodes in a single expression.\nMAX_COMPARES: Final = 2\n\n#: Maximum number of conditions in a single ``if`` or ``while`` statement.\nMAX_CONDITIONS: Final = 4\n\n#: Maximum number of `elif` blocks in a single `if` condition:\nMAX_ELIFS: Final = 3\n\n#: Maximum number of ``except`` cases in a single ``try`` clause.\nMAX_EXCEPT_CASES: Final = 3\n\n#: Approximate constants which real values should be imported from math module.\nMATH_APPROXIMATE_CONSTANTS: Final = 
frozenset((\n math.pi,\n math.e,\n math.tau,\n))\n\n#: List of vague method names that may cause confusion if imported as is:\nVAGUE_IMPORTS_BLACKLIST: Final = frozenset((\n 'read',\n 'write',\n 'load',\n 'loads',\n 'dump',\n 'dumps',\n 'parse',\n 'safe_load',\n 'safe_dump',\n 'load_all',\n 'dump_all',\n 'safe_load_all',\n 'safe_dump_all',\n))\n\n#: List of literals without arguments we forbid to use.\nLITERALS_BLACKLIST: Final = frozenset((\n 'int',\n 'float',\n 'str',\n 'bytes',\n 'bool',\n 'complex',\n))\n\n#: List of functions in which arguments must be tuples.\nTUPLE_ARGUMENTS_METHODS: Final = frozenset((\n 'frozenset',\n))\n\n#: Conditions that can appear in the ``if`` statement to allow nested imports.\nALLOWED_NESTED_IMPORTS_CONDITIONS: Final = frozenset((\n 'TYPE_CHECKING',\n))\n\n#: List of commonly used aliases\nALIAS_NAMES_WHITELIST: Final = frozenset((\n 'np',\n 'pd',\n 'df',\n 'plt',\n 'sns',\n 'tf',\n 'cv',\n))\n\n# Internal variables\n# ==================\n\n# Please, do not touch values beyond this line!\n# ---------------------------------------------\n\n# They are not publicly documented since they are not used by the end user.\n# But, we still need them to be defined here.\n\n# Used as a default filename, when it is not passed by flake8:\nSTDIN: Final = 'stdin'\n\n# Used to specify as a placeholder for `__init__`:\nINIT: Final = '__init__'\n\n# Used to determine when we are running on Windows:\nWINDOWS_OS: Final = 'nt'\n\n# Used as a placeholder for special `_` variable:\nUNUSED_PLACEHOLDER: Final = '_'\n", "path": "wemake_python_styleguide/constants.py"}], "after_files": [{"content": "\"\"\"\nThis module contains list of white- and black-listed ``python`` members.\n\nWe add values here when we want to make them public.\nOr when a value is reused in several places.\nThen, we automatically have to add it here and document it.\n\nOther constants that are not used across modules\nand does not require to be documented can be defined where they are used.\n\nAll values here must be documented with ``#:`` comments.\n\"\"\"\n\nimport math\nimport re\n\nfrom typing_extensions import Final\n\n#: List of functions we forbid to use.\nFUNCTIONS_BLACKLIST: Final = frozenset((\n # Code generation:\n 'eval',\n 'exec',\n 'compile',\n\n # Termination:\n 'exit',\n 'quit',\n\n # Magic:\n 'globals',\n 'locals',\n 'vars',\n 'dir',\n\n # IO:\n 'print',\n 'pprint',\n 'pprint.pprint',\n 'input',\n 'breakpoint',\n\n # Attribute access:\n 'hasattr',\n 'delattr',\n\n # Gratis:\n 'copyright',\n 'help',\n 'credits',\n\n # Dynamic imports:\n '__import__',\n\n # OOP:\n 'staticmethod',\n\n # Mypy:\n 'reveal_type',\n))\n\n#: List of module metadata we forbid to use.\nMODULE_METADATA_VARIABLES_BLACKLIST: Final = frozenset((\n '__author__',\n '__all__',\n '__version__',\n '__about__',\n))\n\n#: List of variable names we forbid to use.\nVARIABLE_NAMES_BLACKLIST: Final = frozenset((\n # Meaningless words:\n 'data',\n 'result',\n 'results',\n 'item',\n 'items',\n 'value',\n 'values',\n 'val',\n 'vals',\n 'var',\n 'vars',\n 'variable',\n 'content',\n 'contents',\n 'info',\n 'handle',\n 'handler',\n 'file',\n 'obj',\n 'objects',\n 'objs',\n 'some',\n 'do',\n 'param',\n 'params',\n 'parameters',\n\n # Confuseables:\n 'no',\n 'true',\n 'false',\n\n # Names from examples:\n 'foo',\n 'bar',\n 'baz',\n))\n\n#: List of characters sequences that are hard to read.\nUNREADABLE_CHARACTER_COMBINATIONS: Final = frozenset((\n '1l',\n '1I',\n '0O',\n 'O0',\n # Not included: 'lI', 'l1', 'Il'\n # Because these names are quite 
common in real words.\n))\n\n#: List of special names that are used only as first argument in methods.\nSPECIAL_ARGUMENT_NAMES_WHITELIST: Final = frozenset((\n 'self',\n 'cls',\n 'mcs',\n))\n\n#: List of all magic methods from the python docs.\nALL_MAGIC_METHODS: Final = frozenset((\n '__new__',\n '__init__',\n '__del__',\n\n '__repr__',\n '__str__',\n '__bytes__',\n '__format__',\n\n '__lt__',\n '__le__',\n '__eq__',\n '__ne__',\n '__gt__',\n '__ge__',\n\n '__hash__',\n '__bool__',\n\n '__getattr__',\n '__getattribute__',\n '__setattr__',\n '__delattr__',\n '__dir__',\n\n '__get__',\n '__set__',\n '__delete__',\n '__set_name__',\n\n '__init_subclass__',\n '__instancecheck__',\n '__subclasscheck__',\n '__class_getitem__',\n\n '__call__',\n '__len__',\n '__length_hint__',\n '__getitem__',\n '__setitem__',\n '__delitem__',\n '__missing__',\n '__iter__',\n '__reversed__',\n '__contains__',\n\n '__add__',\n '__sub__',\n '__mul__',\n '__matmul__',\n '__truediv__',\n '__floordiv__',\n '__mod__',\n '__divmod__',\n '__pow__',\n '__lshift__',\n '__rshift__',\n '__and__',\n '__xor__',\n '__or__',\n '__radd__',\n '__rsub__',\n '__rmul__',\n '__rmatmul__',\n '__rtruediv__',\n '__rfloordiv__',\n '__rmod__',\n '__rdivmod__',\n '__rpow__',\n '__rlshift__',\n '__rrshift__',\n '__rand__',\n '__rxor__',\n '__ror__',\n '__iadd__',\n '__isub__',\n '__imul__',\n '__imatmul__',\n '__itruediv__',\n '__ifloordiv__',\n '__imod__',\n '__ipow__',\n '__ilshift__',\n '__irshift__',\n '__iand__',\n '__ixor__',\n '__ior__',\n '__neg__',\n '__pos__',\n '__abs__',\n '__invert__',\n '__complex__',\n '__int__',\n '__float__',\n '__index__',\n '__round__',\n '__trunc__',\n '__floor__',\n '__ceil__',\n\n '__enter__',\n '__exit__',\n\n '__await__',\n '__aiter__',\n '__anext__',\n '__aenter__',\n '__aexit__',\n))\n\n#: List of magic methods that are forbidden to use.\nMAGIC_METHODS_BLACKLIST: Final = frozenset((\n # Since we don't use `del`:\n '__del__',\n '__delitem__',\n '__delete__',\n\n # Since we don't use `pickle`:\n '__reduce__',\n '__reduce_ex__',\n\n '__dir__', # since we don't use `dir()`\n '__delattr__', # since we don't use `delattr()`\n))\n\n#: List of magic methods that are not allowed to be generators.\nYIELD_MAGIC_METHODS_BLACKLIST: Final = ALL_MAGIC_METHODS.difference({\n # Allowed to be used with ``yield`` keyword:\n '__call__',\n '__iter__',\n})\n\n#: List of magic methods that are not allowed to be async.\nASYNC_MAGIC_METHODS_BLACKLIST: Final = ALL_MAGIC_METHODS.difference({\n # In order of appearance on\n # https://docs.python.org/3/reference/datamodel.html#basic-customization\n # Allowed magic methods are:\n '__anext__',\n '__aenter__',\n '__aexit__',\n '__call__',\n})\n\n#: List of builtin classes that are allowed to subclass.\nALLOWED_BUILTIN_CLASSES: Final = frozenset((\n 'type',\n 'object',\n))\n\n#: List of nested functions' names we allow to use.\nNESTED_FUNCTIONS_WHITELIST: Final = frozenset((\n 'decorator',\n 'factory',\n 'wrapper',\n))\n\n#: List of allowed ``__future__`` imports.\nFUTURE_IMPORTS_WHITELIST: Final = frozenset((\n 'annotations',\n 'generator_stop',\n))\n\n#: List of blacklisted module names.\nMODULE_NAMES_BLACKLIST: Final = frozenset((\n 'util',\n 'utils',\n 'utilities',\n 'helpers',\n))\n\n#: List of allowed module magic names.\nMAGIC_MODULE_NAMES_WHITELIST: Final = frozenset((\n '__init__',\n '__main__',\n))\n\n#: List of bad magic module functions.\nMAGIC_MODULE_NAMES_BLACKLIST: Final = frozenset((\n '__getattr__',\n '__dir__',\n))\n\n#: Regex pattern to name 
modules.\nMODULE_NAME_PATTERN: Final = re.compile(r'^_?_?[a-z][a-z\\d_]*[a-z\\d](__)?$')\n\n#: Common numbers that are allowed to be used without being called \"magic\".\nMAGIC_NUMBERS_WHITELIST: Final = frozenset((\n 0, # both int and float\n 0.1,\n 0.5,\n 1.0,\n 100,\n 1000,\n 1024, # bytes\n 24, # hours\n 60, # seconds, minutes\n\n 1j, # imaginary part of a complex number\n))\n\n#: Maximum amount of ``pragma`` no-cover comments per module.\nMAX_NO_COVER_COMMENTS: Final = 5\n\n#: Maximum length of ``yield`` ``tuple`` expressions.\nMAX_LEN_YIELD_TUPLE: Final = 5\n\n#: Maximum number of compare nodes in a single expression.\nMAX_COMPARES: Final = 2\n\n#: Maximum number of conditions in a single ``if`` or ``while`` statement.\nMAX_CONDITIONS: Final = 4\n\n#: Maximum number of `elif` blocks in a single `if` condition:\nMAX_ELIFS: Final = 3\n\n#: Maximum number of ``except`` cases in a single ``try`` clause.\nMAX_EXCEPT_CASES: Final = 3\n\n#: Approximate constants which real values should be imported from math module.\nMATH_APPROXIMATE_CONSTANTS: Final = frozenset((\n math.pi,\n math.e,\n math.tau,\n))\n\n#: List of vague method names that may cause confusion if imported as is:\nVAGUE_IMPORTS_BLACKLIST: Final = frozenset((\n 'read',\n 'write',\n 'load',\n 'loads',\n 'dump',\n 'dumps',\n 'parse',\n 'safe_load',\n 'safe_dump',\n 'load_all',\n 'dump_all',\n 'safe_load_all',\n 'safe_dump_all',\n))\n\n#: List of literals without arguments we forbid to use.\nLITERALS_BLACKLIST: Final = frozenset((\n 'int',\n 'float',\n 'str',\n 'bytes',\n 'bool',\n 'complex',\n))\n\n#: List of functions in which arguments must be tuples.\nTUPLE_ARGUMENTS_METHODS: Final = frozenset((\n 'frozenset',\n))\n\n#: Conditions that can appear in the ``if`` statement to allow nested imports.\nALLOWED_NESTED_IMPORTS_CONDITIONS: Final = frozenset((\n 'TYPE_CHECKING',\n))\n\n#: List of commonly used aliases\nALIAS_NAMES_WHITELIST: Final = frozenset((\n 'np',\n 'pd',\n 'df',\n 'plt',\n 'sns',\n 'tf',\n 'cv',\n))\n\n# Internal variables\n# ==================\n\n# Please, do not touch values beyond this line!\n# ---------------------------------------------\n\n# They are not publicly documented since they are not used by the end user.\n# But, we still need them to be defined here.\n\n# Used as a default filename, when it is not passed by flake8:\nSTDIN: Final = 'stdin'\n\n# Used to specify as a placeholder for `__init__`:\nINIT: Final = '__init__'\n\n# Used to determine when we are running on Windows:\nWINDOWS_OS: Final = 'nt'\n\n# Used as a placeholder for special `_` variable:\nUNUSED_PLACEHOLDER: Final = '_'\n", "path": "wemake_python_styleguide/constants.py"}]}
| 3,904 | 86 |
gh_patches_debug_9100
|
rasdani/github-patches
|
git_diff
|
pyodide__pyodide-2920
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scipy build failing with Cython error
## 🐛 Bug
Scipy build is failing in [main](https://app.circleci.com/pipelines/github/ryanking13/pyodide/705/workflows/f75c57c6-95c0-4437-85ca-47889051954d/jobs/11444?invite=true#step-105-1365). I guess it is related to the new Cython release.
```
_stats.pyx:173:0: Referring to a memoryview typed argument directly in a nested closure function is not supported in Cython 0.x. Either upgrade to Cython 3, or assign the argument to a local variable and use that in the nested function.
Traceback (most recent call last):
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 317, in <module>
main()
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 313, in main
find_process_files(root_dir)
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 302, in find_process_files
for result in pool.imap_unordered(lambda args: process(*args), jobs):
File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 870, in next
raise value
File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 302, in <lambda>
for result in pool.imap_unordered(lambda args: process(*args), jobs):
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 236, in process
processor_function(fromfile, tofile, cwd=path)
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 102, in process_pyx
raise Exception('Cython failed')
Exception: Cython failed
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyodide-build/pyodide_build/pypabuild.py`
Content:
```
1 import os
2 import shutil
3 import sys
4 import traceback
5 from collections.abc import Mapping
6 from itertools import chain
7 from pathlib import Path
8
9 from build import BuildBackendException, ConfigSettingsType, ProjectBuilder
10 from build.__main__ import (
11 _STYLES,
12 _error,
13 _handle_build_error,
14 _IsolatedEnvBuilder,
15 _ProjectBuilder,
16 )
17 from build.env import IsolatedEnv
18 from packaging.requirements import Requirement
19
20 from .common import (
21 get_hostsitepackages,
22 get_pyversion,
23 get_unisolated_packages,
24 replace_env,
25 )
26
27
28 def symlink_unisolated_packages(env: IsolatedEnv) -> None:
29 pyversion = get_pyversion()
30 site_packages_path = f"lib/{pyversion}/site-packages"
31 env_site_packages = Path(env.path) / site_packages_path # type: ignore[attr-defined]
32 sysconfigdata_name = os.environ["SYSCONFIG_NAME"]
33 sysconfigdata_path = (
34 Path(os.environ["TARGETINSTALLDIR"]) / f"sysconfigdata/{sysconfigdata_name}.py"
35 )
36 shutil.copy(sysconfigdata_path, env_site_packages)
37 host_site_packages = Path(get_hostsitepackages())
38 for name in get_unisolated_packages():
39 for path in chain(
40 host_site_packages.glob(f"{name}*"), host_site_packages.glob(f"_{name}*")
41 ):
42 (env_site_packages / path.name).unlink(missing_ok=True)
43 (env_site_packages / path.name).symlink_to(path)
44
45
46 def remove_unisolated_requirements(requires: set[str]) -> set[str]:
47 for reqstr in list(requires):
48 req = Requirement(reqstr)
49 for avoid_name in get_unisolated_packages():
50 if avoid_name in req.name.lower():
51 requires.remove(reqstr)
52 return requires
53
54
55 def install_reqs(env: IsolatedEnv, reqs: set[str]) -> None:
56 env.install(remove_unisolated_requirements(reqs))
57 # Some packages (numcodecs) don't declare cython as a build dependency and
58 # only recythonize if it is present. We need them to always recythonize so
59 # we always install cython. If the reqs included some cython version already
60 # then this won't do anything.
61 env.install(["cython", "pythran"])
62
63
64 def _build_in_isolated_env(
65 build_env: Mapping[str, str],
66 builder: ProjectBuilder,
67 outdir: str,
68 distribution: str,
69 config_settings: ConfigSettingsType,
70 ) -> str:
71 # For debugging: The following line disables removal of the isolated venv.
72 # It will be left in the /tmp folder and can be inspected or entered as
73 # needed.
74 # _IsolatedEnvBuilder.__exit__ = lambda *args: None
75 with _IsolatedEnvBuilder() as env:
76 builder.python_executable = env.executable
77 builder.scripts_dir = env.scripts_dir
78 # first install the build dependencies
79 symlink_unisolated_packages(env)
80 install_reqs(env, builder.build_system_requires)
81 installed_requires_for_build = False
82 try:
83 build_reqs = builder.get_requires_for_build(distribution, config_settings)
84 except BuildBackendException:
85 pass
86 else:
87 install_reqs(env, build_reqs)
88 installed_requires_for_build = True
89
90 with replace_env(build_env):
91 if not installed_requires_for_build:
92 install_reqs(
93 env, builder.get_requires_for_build(distribution, config_settings)
94 )
95 return builder.build(distribution, outdir, config_settings)
96
97
98 def parse_backend_flags(backend_flags: str) -> ConfigSettingsType:
99 config_settings: dict[str, str | list[str]] = {}
100 for arg in backend_flags.split():
101 setting, _, value = arg.partition("=")
102 if setting not in config_settings:
103 config_settings[setting] = value
104 continue
105
106 cur_value = config_settings[setting]
107 if isinstance(cur_value, str):
108 config_settings[setting] = [cur_value, value]
109 else:
110 cur_value.append(value)
111 return config_settings
112
113
114 def build(build_env: Mapping[str, str], backend_flags: str) -> None:
115 srcdir = Path.cwd()
116 outdir = srcdir / "dist"
117 builder = _ProjectBuilder(str(srcdir))
118 distribution = "wheel"
119 config_settings = parse_backend_flags(backend_flags)
120 try:
121 with _handle_build_error():
122 built = _build_in_isolated_env(
123 build_env, builder, str(outdir), distribution, config_settings
124 )
125 print("{bold}{green}Successfully built {}{reset}".format(built, **_STYLES))
126 except Exception as e: # pragma: no cover
127 tb = traceback.format_exc().strip("\n")
128 print("\n{dim}{}{reset}\n".format(tb, **_STYLES))
129 _error(str(e))
130 sys.exit(1)
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyodide-build/pyodide_build/pypabuild.py b/pyodide-build/pyodide_build/pypabuild.py
--- a/pyodide-build/pyodide_build/pypabuild.py
+++ b/pyodide-build/pyodide_build/pypabuild.py
@@ -58,7 +58,12 @@
# only recythonize if it is present. We need them to always recythonize so
# we always install cython. If the reqs included some cython version already
# then this won't do anything.
- env.install(["cython", "pythran"])
+ env.install(
+ [
+ "cython<0.29.31", # cython 0.29.31 is incompatible with scipy 1.8.1. TODO: remove this after the scipy update.
+ "pythran",
+ ]
+ )
def _build_in_isolated_env(
|
{"golden_diff": "diff --git a/pyodide-build/pyodide_build/pypabuild.py b/pyodide-build/pyodide_build/pypabuild.py\n--- a/pyodide-build/pyodide_build/pypabuild.py\n+++ b/pyodide-build/pyodide_build/pypabuild.py\n@@ -58,7 +58,12 @@\n # only recythonize if it is present. We need them to always recythonize so\n # we always install cython. If the reqs included some cython version already\n # then this won't do anything.\n- env.install([\"cython\", \"pythran\"])\n+ env.install(\n+ [\n+ \"cython<0.29.31\", # cython 0.29.31 is incompatible with scipy 1.8.1. TODO: remove this after the scipy update.\n+ \"pythran\",\n+ ]\n+ )\n \n \n def _build_in_isolated_env(\n", "issue": "Scipy build failing with Cython error\n## \ud83d\udc1b Bug\r\n\r\nScipy build is failing in [main](https://app.circleci.com/pipelines/github/ryanking13/pyodide/705/workflows/f75c57c6-95c0-4437-85ca-47889051954d/jobs/11444?invite=true#step-105-1365). I guess it is related to the new Cython release.\r\n\r\n```\r\n_stats.pyx:173:0: Referring to a memoryview typed argument directly in a nested closure function is not supported in Cython 0.x. Either upgrade to Cython 3, or assign the argument to a local variable and use that in the nested function.\r\nTraceback (most recent call last):\r\n File \"/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py\", line 317, in <module>\r\n main()\r\n File \"/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py\", line 313, in main\r\n find_process_files(root_dir)\r\n File \"/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py\", line 302, in find_process_files\r\n for result in pool.imap_unordered(lambda args: process(*args), jobs):\r\n File \"/usr/local/lib/python3.10/multiprocessing/pool.py\", line 870, in next\r\n raise value\r\n File \"/usr/local/lib/python3.10/multiprocessing/pool.py\", line 125, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py\", line 302, in <lambda>\r\n for result in pool.imap_unordered(lambda args: process(*args), jobs):\r\n File \"/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py\", line 236, in process\r\n processor_function(fromfile, tofile, cwd=path)\r\n File \"/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py\", line 102, in process_pyx\r\n raise Exception('Cython failed')\r\nException: Cython failed\r\n```\r\n\n", "before_files": [{"content": "import os\nimport shutil\nimport sys\nimport traceback\nfrom collections.abc import Mapping\nfrom itertools import chain\nfrom pathlib import Path\n\nfrom build import BuildBackendException, ConfigSettingsType, ProjectBuilder\nfrom build.__main__ import (\n _STYLES,\n _error,\n _handle_build_error,\n _IsolatedEnvBuilder,\n _ProjectBuilder,\n)\nfrom build.env import IsolatedEnv\nfrom packaging.requirements import Requirement\n\nfrom .common import (\n get_hostsitepackages,\n get_pyversion,\n get_unisolated_packages,\n replace_env,\n)\n\n\ndef symlink_unisolated_packages(env: IsolatedEnv) -> None:\n pyversion = get_pyversion()\n site_packages_path = f\"lib/{pyversion}/site-packages\"\n env_site_packages = Path(env.path) / site_packages_path # type: ignore[attr-defined]\n sysconfigdata_name = os.environ[\"SYSCONFIG_NAME\"]\n sysconfigdata_path = (\n Path(os.environ[\"TARGETINSTALLDIR\"]) / f\"sysconfigdata/{sysconfigdata_name}.py\"\n )\n shutil.copy(sysconfigdata_path, env_site_packages)\n host_site_packages = Path(get_hostsitepackages())\n for name in get_unisolated_packages():\n for path in 
chain(\n host_site_packages.glob(f\"{name}*\"), host_site_packages.glob(f\"_{name}*\")\n ):\n (env_site_packages / path.name).unlink(missing_ok=True)\n (env_site_packages / path.name).symlink_to(path)\n\n\ndef remove_unisolated_requirements(requires: set[str]) -> set[str]:\n for reqstr in list(requires):\n req = Requirement(reqstr)\n for avoid_name in get_unisolated_packages():\n if avoid_name in req.name.lower():\n requires.remove(reqstr)\n return requires\n\n\ndef install_reqs(env: IsolatedEnv, reqs: set[str]) -> None:\n env.install(remove_unisolated_requirements(reqs))\n # Some packages (numcodecs) don't declare cython as a build dependency and\n # only recythonize if it is present. We need them to always recythonize so\n # we always install cython. If the reqs included some cython version already\n # then this won't do anything.\n env.install([\"cython\", \"pythran\"])\n\n\ndef _build_in_isolated_env(\n build_env: Mapping[str, str],\n builder: ProjectBuilder,\n outdir: str,\n distribution: str,\n config_settings: ConfigSettingsType,\n) -> str:\n # For debugging: The following line disables removal of the isolated venv.\n # It will be left in the /tmp folder and can be inspected or entered as\n # needed.\n # _IsolatedEnvBuilder.__exit__ = lambda *args: None\n with _IsolatedEnvBuilder() as env:\n builder.python_executable = env.executable\n builder.scripts_dir = env.scripts_dir\n # first install the build dependencies\n symlink_unisolated_packages(env)\n install_reqs(env, builder.build_system_requires)\n installed_requires_for_build = False\n try:\n build_reqs = builder.get_requires_for_build(distribution, config_settings)\n except BuildBackendException:\n pass\n else:\n install_reqs(env, build_reqs)\n installed_requires_for_build = True\n\n with replace_env(build_env):\n if not installed_requires_for_build:\n install_reqs(\n env, builder.get_requires_for_build(distribution, config_settings)\n )\n return builder.build(distribution, outdir, config_settings)\n\n\ndef parse_backend_flags(backend_flags: str) -> ConfigSettingsType:\n config_settings: dict[str, str | list[str]] = {}\n for arg in backend_flags.split():\n setting, _, value = arg.partition(\"=\")\n if setting not in config_settings:\n config_settings[setting] = value\n continue\n\n cur_value = config_settings[setting]\n if isinstance(cur_value, str):\n config_settings[setting] = [cur_value, value]\n else:\n cur_value.append(value)\n return config_settings\n\n\ndef build(build_env: Mapping[str, str], backend_flags: str) -> None:\n srcdir = Path.cwd()\n outdir = srcdir / \"dist\"\n builder = _ProjectBuilder(str(srcdir))\n distribution = \"wheel\"\n config_settings = parse_backend_flags(backend_flags)\n try:\n with _handle_build_error():\n built = _build_in_isolated_env(\n build_env, builder, str(outdir), distribution, config_settings\n )\n print(\"{bold}{green}Successfully built {}{reset}\".format(built, **_STYLES))\n except Exception as e: # pragma: no cover\n tb = traceback.format_exc().strip(\"\\n\")\n print(\"\\n{dim}{}{reset}\\n\".format(tb, **_STYLES))\n _error(str(e))\n sys.exit(1)\n", "path": "pyodide-build/pyodide_build/pypabuild.py"}], "after_files": [{"content": "import os\nimport shutil\nimport sys\nimport traceback\nfrom collections.abc import Mapping\nfrom itertools import chain\nfrom pathlib import Path\n\nfrom build import BuildBackendException, ConfigSettingsType, ProjectBuilder\nfrom build.__main__ import (\n _STYLES,\n _error,\n _handle_build_error,\n _IsolatedEnvBuilder,\n _ProjectBuilder,\n)\nfrom build.env 
import IsolatedEnv\nfrom packaging.requirements import Requirement\n\nfrom .common import (\n get_hostsitepackages,\n get_pyversion,\n get_unisolated_packages,\n replace_env,\n)\n\n\ndef symlink_unisolated_packages(env: IsolatedEnv) -> None:\n pyversion = get_pyversion()\n site_packages_path = f\"lib/{pyversion}/site-packages\"\n env_site_packages = Path(env.path) / site_packages_path # type: ignore[attr-defined]\n sysconfigdata_name = os.environ[\"SYSCONFIG_NAME\"]\n sysconfigdata_path = (\n Path(os.environ[\"TARGETINSTALLDIR\"]) / f\"sysconfigdata/{sysconfigdata_name}.py\"\n )\n shutil.copy(sysconfigdata_path, env_site_packages)\n host_site_packages = Path(get_hostsitepackages())\n for name in get_unisolated_packages():\n for path in chain(\n host_site_packages.glob(f\"{name}*\"), host_site_packages.glob(f\"_{name}*\")\n ):\n (env_site_packages / path.name).unlink(missing_ok=True)\n (env_site_packages / path.name).symlink_to(path)\n\n\ndef remove_unisolated_requirements(requires: set[str]) -> set[str]:\n for reqstr in list(requires):\n req = Requirement(reqstr)\n for avoid_name in get_unisolated_packages():\n if avoid_name in req.name.lower():\n requires.remove(reqstr)\n return requires\n\n\ndef install_reqs(env: IsolatedEnv, reqs: set[str]) -> None:\n env.install(remove_unisolated_requirements(reqs))\n # Some packages (numcodecs) don't declare cython as a build dependency and\n # only recythonize if it is present. We need them to always recythonize so\n # we always install cython. If the reqs included some cython version already\n # then this won't do anything.\n env.install(\n [\n \"cython<0.29.31\", # cython 0.29.31 is incompatible with scipy 1.8.1. TODO: remove this after the scipy update.\n \"pythran\",\n ]\n )\n\n\ndef _build_in_isolated_env(\n build_env: Mapping[str, str],\n builder: ProjectBuilder,\n outdir: str,\n distribution: str,\n config_settings: ConfigSettingsType,\n) -> str:\n # For debugging: The following line disables removal of the isolated venv.\n # It will be left in the /tmp folder and can be inspected or entered as\n # needed.\n # _IsolatedEnvBuilder.__exit__ = lambda *args: None\n with _IsolatedEnvBuilder() as env:\n builder.python_executable = env.executable\n builder.scripts_dir = env.scripts_dir\n # first install the build dependencies\n symlink_unisolated_packages(env)\n install_reqs(env, builder.build_system_requires)\n installed_requires_for_build = False\n try:\n build_reqs = builder.get_requires_for_build(distribution, config_settings)\n except BuildBackendException:\n pass\n else:\n install_reqs(env, build_reqs)\n installed_requires_for_build = True\n\n with replace_env(build_env):\n if not installed_requires_for_build:\n install_reqs(\n env, builder.get_requires_for_build(distribution, config_settings)\n )\n return builder.build(distribution, outdir, config_settings)\n\n\ndef parse_backend_flags(backend_flags: str) -> ConfigSettingsType:\n config_settings: dict[str, str | list[str]] = {}\n for arg in backend_flags.split():\n setting, _, value = arg.partition(\"=\")\n if setting not in config_settings:\n config_settings[setting] = value\n continue\n\n cur_value = config_settings[setting]\n if isinstance(cur_value, str):\n config_settings[setting] = [cur_value, value]\n else:\n cur_value.append(value)\n return config_settings\n\n\ndef build(build_env: Mapping[str, str], backend_flags: str) -> None:\n srcdir = Path.cwd()\n outdir = srcdir / \"dist\"\n builder = _ProjectBuilder(str(srcdir))\n distribution = \"wheel\"\n config_settings = 
parse_backend_flags(backend_flags)\n try:\n with _handle_build_error():\n built = _build_in_isolated_env(\n build_env, builder, str(outdir), distribution, config_settings\n )\n print(\"{bold}{green}Successfully built {}{reset}\".format(built, **_STYLES))\n except Exception as e: # pragma: no cover\n tb = traceback.format_exc().strip(\"\\n\")\n print(\"\\n{dim}{}{reset}\\n\".format(tb, **_STYLES))\n _error(str(e))\n sys.exit(1)\n", "path": "pyodide-build/pyodide_build/pypabuild.py"}]}
| 2,150 | 215 |
gh_patches_debug_32150
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-flash-339
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installing from master fails due to ModuleNotFoundError
## 🐛 Bug
See the title and below error msg.
I think this is caused by `setup.py`, which depends on `flash` even though it shouldn't, because `setup.py` gets run before the required packages have been installed.
### To Reproduce
In, for example, Google Colab, you will get the following results:
```console
$ pip install git+https://github.com/PytorchLightning/lightning-flash.git@master
Collecting git+https://github.com/PytorchLightning/lightning-flash.git@master
Cloning https://github.com/PytorchLightning/lightning-flash.git (to revision master) to /tmp/pip-req-build-ist5jx4a
Running command git clone -q https://github.com/PytorchLightning/lightning-flash.git /tmp/pip-req-build-ist5jx4a
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
$ python setup.py egg_info
Traceback (most recent call last):
File "setup.py", line 7, in <module>
import flash
File "/home/nitta/work/github.com/PyTorchLightning/lightning-flash/flash/__init__.py", line 22, in <module>
from flash.core.data.callback import FlashCallback
File "/home/nitta/work/github.com/PyTorchLightning/lightning-flash/flash/core/data/callback.py", line 4, in <module>
from pytorch_lightning.callbacks import Callback
ModuleNotFoundError: No module named 'pytorch_lightning.callbacks'
```
### Expected behavior
```console
$ pip install git+https://github.com/PytorchLightning/lightning-flash.git@master
```
will successfully install flash from the master branch.
### Additional context
Found while working on #335.
--- END ISSUE ---
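For orientation: the traceback above occurs because `setup.py` imports the `flash` package at module level, which in turn imports `pytorch_lightning` before pip has had a chance to install it. The usual workaround is to execute the metadata file directly instead of importing the package; a minimal sketch reusing the `_load_py_module` helper that already exists in the file shown below:

```python
import os
from importlib.util import module_from_spec, spec_from_file_location

_PATH_ROOT = os.path.dirname(__file__)


def _load_py_module(fname, pkg="flash"):
    # Executes flash/<fname> as a stand-alone module, so nothing from the
    # flash package (or its heavy dependencies) is imported at setup time.
    spec = spec_from_file_location(
        os.path.join(pkg, fname), os.path.join(_PATH_ROOT, pkg, fname)
    )
    py = module_from_spec(spec)
    spec.loader.exec_module(py)
    return py


about = _load_py_module("__about__.py")  # exposes __version__, __author__, ...
```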
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 from importlib.util import module_from_spec, spec_from_file_location
4
5 from setuptools import find_packages, setup
6
7 import flash
8 import flash.__about__ as about
9
10 # https://packaging.python.org/guides/single-sourcing-package-version/
11 # http://blog.ionelmc.ro/2014/05/25/python-packaging/
12 _PATH_ROOT = os.path.dirname(os.path.dirname(flash.__file__))
13
14
15 def _load_py_module(fname, pkg="flash"):
16 spec = spec_from_file_location(os.path.join(pkg, fname), os.path.join(_PATH_ROOT, pkg, fname))
17 py = module_from_spec(spec)
18 spec.loader.exec_module(py)
19 return py
20
21
22 setup_tools = _load_py_module('setup_tools.py')
23
24 long_description = setup_tools._load_readme_description(_PATH_ROOT, homepage=about.__homepage__, ver=about.__version__)
25
26 _PATH_REQUIRE = os.path.join(_PATH_ROOT, "requirements")
27
28 extras = {
29 "docs": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name="docs.txt"),
30 "notebooks": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name="notebooks.txt"),
31 "test": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name="test.txt"),
32 "text": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name="datatype_text.txt"),
33 "tabular": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name="datatype_tabular.txt"),
34 "image": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name="datatype_image.txt"),
35 "image_style_transfer": setup_tools._load_requirements(
36 path_dir=_PATH_REQUIRE, file_name="datatype_image_style_transfer.txt"
37 ),
38 "video": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name="datatype_video.txt"),
39 }
40
41 # remove possible duplicate.
42 extras["vision"] = list(set(extras["image"] + extras["video"] + extras["image_style_transfer"]))
43 extras["dev"] = list(set(extras["vision"] + extras["tabular"] + extras["text"] + extras["image"]))
44 extras["dev-test"] = list(set(extras["test"] + extras["dev"]))
45 extras["all"] = list(set(extras["dev"] + extras["docs"]))
46
47 print(extras)
48
49 # https://packaging.python.org/discussions/install-requires-vs-requirements /
50 # keep the meta-data here for simplicity in reading this file... it's not obvious
51 # what happens and to non-engineers they won't know to look in init ...
52 # the goal of the project is simplicity for researchers, don't want to add too much
53 # engineer specific practices
54 setup(
55 name="lightning-flash",
56 version=about.__version__,
57 description=about.__docs__,
58 author=about.__author__,
59 author_email=about.__author_email__,
60 url=about.__homepage__,
61 download_url="https://github.com/PyTorchLightning/lightning-flash",
62 license=about.__license__,
63 packages=find_packages(exclude=["tests", "tests.*"]),
64 long_description=long_description,
65 long_description_content_type="text/markdown",
66 include_package_data=True,
67 extras_require=extras,
68 zip_safe=False,
69 keywords=["deep learning", "pytorch", "AI"],
70 python_requires=">=3.6",
71 install_requires=setup_tools._load_requirements(_PATH_ROOT, file_name='requirements.txt'),
72 project_urls={
73 "Bug Tracker": "https://github.com/PyTorchLightning/lightning-flash/issues",
74 "Documentation": "https://lightning-flash.rtfd.io/en/latest/",
75 "Source Code": "https://github.com/PyTorchLightning/lightning-flash",
76 },
77 classifiers=[
78 "Environment :: Console",
79 "Natural Language :: English",
80 # How mature is this project? Common values are
81 # 3 - Alpha, 4 - Beta, 5 - Production/Stable
82 "Development Status :: 3 - Alpha",
83 # Indicate who your project is intended for
84 "Intended Audience :: Developers",
85 "Topic :: Scientific/Engineering :: Artificial Intelligence",
86 "Topic :: Scientific/Engineering :: Image Recognition",
87 "Topic :: Scientific/Engineering :: Information Analysis",
88 # Pick your license as you wish
89 # 'License :: OSI Approved :: BSD License',
90 "Operating System :: OS Independent",
91 # Specify the Python versions you support here. In particular, ensure
92 # that you indicate whether you support Python 2, Python 3 or both.
93 "Programming Language :: Python :: 3",
94 "Programming Language :: Python :: 3.6",
95 "Programming Language :: Python :: 3.7",
96 "Programming Language :: Python :: 3.8",
97 ],
98 )
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,29 +1,46 @@
#!/usr/bin/env python
+# Copyright The PyTorch Lightning team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
import os
from importlib.util import module_from_spec, spec_from_file_location
from setuptools import find_packages, setup
-import flash
-import flash.__about__ as about
-
# https://packaging.python.org/guides/single-sourcing-package-version/
# http://blog.ionelmc.ro/2014/05/25/python-packaging/
-_PATH_ROOT = os.path.dirname(os.path.dirname(flash.__file__))
+_PATH_ROOT = os.path.dirname(__file__)
+_PATH_REQUIRE = os.path.join(_PATH_ROOT, "requirements")
def _load_py_module(fname, pkg="flash"):
- spec = spec_from_file_location(os.path.join(pkg, fname), os.path.join(_PATH_ROOT, pkg, fname))
+ spec = spec_from_file_location(
+ os.path.join(pkg, fname),
+ os.path.join(_PATH_ROOT, pkg, fname),
+ )
py = module_from_spec(spec)
spec.loader.exec_module(py)
return py
+about = _load_py_module('__about__.py')
setup_tools = _load_py_module('setup_tools.py')
-long_description = setup_tools._load_readme_description(_PATH_ROOT, homepage=about.__homepage__, ver=about.__version__)
-
-_PATH_REQUIRE = os.path.join(_PATH_ROOT, "requirements")
+long_description = setup_tools._load_readme_description(
+ _PATH_ROOT,
+ homepage=about.__homepage__,
+ ver=about.__version__,
+)
extras = {
"docs": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name="docs.txt"),
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,29 +1,46 @@\n #!/usr/bin/env python\n+# Copyright The PyTorch Lightning team.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n import os\n from importlib.util import module_from_spec, spec_from_file_location\n \n from setuptools import find_packages, setup\n \n-import flash\n-import flash.__about__ as about\n-\n # https://packaging.python.org/guides/single-sourcing-package-version/\n # http://blog.ionelmc.ro/2014/05/25/python-packaging/\n-_PATH_ROOT = os.path.dirname(os.path.dirname(flash.__file__))\n+_PATH_ROOT = os.path.dirname(__file__)\n+_PATH_REQUIRE = os.path.join(_PATH_ROOT, \"requirements\")\n \n \n def _load_py_module(fname, pkg=\"flash\"):\n- spec = spec_from_file_location(os.path.join(pkg, fname), os.path.join(_PATH_ROOT, pkg, fname))\n+ spec = spec_from_file_location(\n+ os.path.join(pkg, fname),\n+ os.path.join(_PATH_ROOT, pkg, fname),\n+ )\n py = module_from_spec(spec)\n spec.loader.exec_module(py)\n return py\n \n \n+about = _load_py_module('__about__.py')\n setup_tools = _load_py_module('setup_tools.py')\n \n-long_description = setup_tools._load_readme_description(_PATH_ROOT, homepage=about.__homepage__, ver=about.__version__)\n-\n-_PATH_REQUIRE = os.path.join(_PATH_ROOT, \"requirements\")\n+long_description = setup_tools._load_readme_description(\n+ _PATH_ROOT,\n+ homepage=about.__homepage__,\n+ ver=about.__version__,\n+)\n \n extras = {\n \"docs\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"docs.txt\"),\n", "issue": "Installing from master fails due to ModuleNotFoundError\n## \ud83d\udc1b Bug\r\nSee the title and below error msg.\r\n\r\nI think this is caused by `setup.py` which depends on `flash` where it shouldn't because `setup.py` gets run when any required packages are not installed yet.\r\n\r\n### To Reproduce\r\nIn, for example, Google Colab, you will get the following results:\r\n```console\r\n$ pip install git+https://github.com/PytorchLightning/lightning-flash.git@master\r\nCollecting git+https://github.com/PytorchLightning/lightning-flash.git@master\r\n Cloning https://github.com/PytorchLightning/lightning-flash.git (to revision master) to /tmp/pip-req-build-ist5jx4a\r\n Running command git clone -q https://github.com/PytorchLightning/lightning-flash.git /tmp/pip-req-build-ist5jx4a\r\nERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.\r\n\r\n$ python setup.py egg_info\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 7, in <module>\r\n import flash\r\n File \"/home/nitta/work/github.com/PyTorchLightning/lightning-flash/flash/__init__.py\", line 22, in <module>\r\n from flash.core.data.callback import FlashCallback\r\n File \"/home/nitta/work/github.com/PyTorchLightning/lightning-flash/flash/core/data/callback.py\", line 4, in <module>\r\n from pytorch_lightning.callbacks import Callback\r\nModuleNotFoundError: No module named 
'pytorch_lightning.callbacks'\r\n```\r\n\r\n### Expected behavior\r\n```console\r\n$ pip install git+https://github.com/PytorchLightning/lightning-flash.git@master\r\n```\r\nwill successfully install flash from master branch.\r\n\r\n### Additional context\r\nFound while working on #335.\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nfrom importlib.util import module_from_spec, spec_from_file_location\n\nfrom setuptools import find_packages, setup\n\nimport flash\nimport flash.__about__ as about\n\n# https://packaging.python.org/guides/single-sourcing-package-version/\n# http://blog.ionelmc.ro/2014/05/25/python-packaging/\n_PATH_ROOT = os.path.dirname(os.path.dirname(flash.__file__))\n\n\ndef _load_py_module(fname, pkg=\"flash\"):\n spec = spec_from_file_location(os.path.join(pkg, fname), os.path.join(_PATH_ROOT, pkg, fname))\n py = module_from_spec(spec)\n spec.loader.exec_module(py)\n return py\n\n\nsetup_tools = _load_py_module('setup_tools.py')\n\nlong_description = setup_tools._load_readme_description(_PATH_ROOT, homepage=about.__homepage__, ver=about.__version__)\n\n_PATH_REQUIRE = os.path.join(_PATH_ROOT, \"requirements\")\n\nextras = {\n \"docs\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"docs.txt\"),\n \"notebooks\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"notebooks.txt\"),\n \"test\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"test.txt\"),\n \"text\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"datatype_text.txt\"),\n \"tabular\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"datatype_tabular.txt\"),\n \"image\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"datatype_image.txt\"),\n \"image_style_transfer\": setup_tools._load_requirements(\n path_dir=_PATH_REQUIRE, file_name=\"datatype_image_style_transfer.txt\"\n ),\n \"video\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"datatype_video.txt\"),\n}\n\n# remove possible duplicate.\nextras[\"vision\"] = list(set(extras[\"image\"] + extras[\"video\"] + extras[\"image_style_transfer\"]))\nextras[\"dev\"] = list(set(extras[\"vision\"] + extras[\"tabular\"] + extras[\"text\"] + extras[\"image\"]))\nextras[\"dev-test\"] = list(set(extras[\"test\"] + extras[\"dev\"]))\nextras[\"all\"] = list(set(extras[\"dev\"] + extras[\"docs\"]))\n\nprint(extras)\n\n# https://packaging.python.org/discussions/install-requires-vs-requirements /\n# keep the meta-data here for simplicity in reading this file... 
it's not obvious\n# what happens and to non-engineers they won't know to look in init ...\n# the goal of the project is simplicity for researchers, don't want to add too much\n# engineer specific practices\nsetup(\n name=\"lightning-flash\",\n version=about.__version__,\n description=about.__docs__,\n author=about.__author__,\n author_email=about.__author_email__,\n url=about.__homepage__,\n download_url=\"https://github.com/PyTorchLightning/lightning-flash\",\n license=about.__license__,\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n extras_require=extras,\n zip_safe=False,\n keywords=[\"deep learning\", \"pytorch\", \"AI\"],\n python_requires=\">=3.6\",\n install_requires=setup_tools._load_requirements(_PATH_ROOT, file_name='requirements.txt'),\n project_urls={\n \"Bug Tracker\": \"https://github.com/PyTorchLightning/lightning-flash/issues\",\n \"Documentation\": \"https://lightning-flash.rtfd.io/en/latest/\",\n \"Source Code\": \"https://github.com/PyTorchLightning/lightning-flash\",\n },\n classifiers=[\n \"Environment :: Console\",\n \"Natural Language :: English\",\n # How mature is this project? Common values are\n # 3 - Alpha, 4 - Beta, 5 - Production/Stable\n \"Development Status :: 3 - Alpha\",\n # Indicate who your project is intended for\n \"Intended Audience :: Developers\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Image Recognition\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n # Pick your license as you wish\n # 'License :: OSI Approved :: BSD License',\n \"Operating System :: OS Independent\",\n # Specify the Python versions you support here. 
In particular, ensure\n # that you indicate whether you support Python 2, Python 3 or both.\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nfrom importlib.util import module_from_spec, spec_from_file_location\n\nfrom setuptools import find_packages, setup\n\n# https://packaging.python.org/guides/single-sourcing-package-version/\n# http://blog.ionelmc.ro/2014/05/25/python-packaging/\n_PATH_ROOT = os.path.dirname(__file__)\n_PATH_REQUIRE = os.path.join(_PATH_ROOT, \"requirements\")\n\n\ndef _load_py_module(fname, pkg=\"flash\"):\n spec = spec_from_file_location(\n os.path.join(pkg, fname),\n os.path.join(_PATH_ROOT, pkg, fname),\n )\n py = module_from_spec(spec)\n spec.loader.exec_module(py)\n return py\n\n\nabout = _load_py_module('__about__.py')\nsetup_tools = _load_py_module('setup_tools.py')\n\nlong_description = setup_tools._load_readme_description(\n _PATH_ROOT,\n homepage=about.__homepage__,\n ver=about.__version__,\n)\n\nextras = {\n \"docs\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"docs.txt\"),\n \"notebooks\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"notebooks.txt\"),\n \"test\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"test.txt\"),\n \"text\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"datatype_text.txt\"),\n \"tabular\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"datatype_tabular.txt\"),\n \"image\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"datatype_image.txt\"),\n \"image_style_transfer\": setup_tools._load_requirements(\n path_dir=_PATH_REQUIRE, file_name=\"datatype_image_style_transfer.txt\"\n ),\n \"video\": setup_tools._load_requirements(path_dir=_PATH_REQUIRE, file_name=\"datatype_video.txt\"),\n}\n\n# remove possible duplicate.\nextras[\"vision\"] = list(set(extras[\"image\"] + extras[\"video\"] + extras[\"image_style_transfer\"]))\nextras[\"dev\"] = list(set(extras[\"vision\"] + extras[\"tabular\"] + extras[\"text\"] + extras[\"image\"]))\nextras[\"dev-test\"] = list(set(extras[\"test\"] + extras[\"dev\"]))\nextras[\"all\"] = list(set(extras[\"dev\"] + extras[\"docs\"]))\n\nprint(extras)\n\n# https://packaging.python.org/discussions/install-requires-vs-requirements /\n# keep the meta-data here for simplicity in reading this file... 
it's not obvious\n# what happens and to non-engineers they won't know to look in init ...\n# the goal of the project is simplicity for researchers, don't want to add too much\n# engineer specific practices\nsetup(\n name=\"lightning-flash\",\n version=about.__version__,\n description=about.__docs__,\n author=about.__author__,\n author_email=about.__author_email__,\n url=about.__homepage__,\n download_url=\"https://github.com/PyTorchLightning/lightning-flash\",\n license=about.__license__,\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n extras_require=extras,\n zip_safe=False,\n keywords=[\"deep learning\", \"pytorch\", \"AI\"],\n python_requires=\">=3.6\",\n install_requires=setup_tools._load_requirements(_PATH_ROOT, file_name='requirements.txt'),\n project_urls={\n \"Bug Tracker\": \"https://github.com/PyTorchLightning/lightning-flash/issues\",\n \"Documentation\": \"https://lightning-flash.rtfd.io/en/latest/\",\n \"Source Code\": \"https://github.com/PyTorchLightning/lightning-flash\",\n },\n classifiers=[\n \"Environment :: Console\",\n \"Natural Language :: English\",\n # How mature is this project? Common values are\n # 3 - Alpha, 4 - Beta, 5 - Production/Stable\n \"Development Status :: 3 - Alpha\",\n # Indicate who your project is intended for\n \"Intended Audience :: Developers\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Image Recognition\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n # Pick your license as you wish\n # 'License :: OSI Approved :: BSD License',\n \"Operating System :: OS Independent\",\n # Specify the Python versions you support here. In particular, ensure\n # that you indicate whether you support Python 2, Python 3 or both.\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}]}
| 1,872 | 506 |
gh_patches_debug_40875
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-1691
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve the user search and selection form
In #1684 the user selection for groups was consolidated, so now it would be good to improve it. The form uses a ModelSelect2 widget (https://github.com/comic/grand-challenge.org/blob/master/app/grandchallenge/groups/forms.py#L19) that calls an autocomplete view that only allows filtering by username (https://github.com/comic/grand-challenge.org/blob/master/app/grandchallenge/groups/views.py#L67).
Top priority is to extend the filter:
- Switch to filtering with `icontains` rather than `istartswith`
- Allow filtering on the user's username, email address, full name, OR verification email address. To do this, use Django `Q` objects (https://docs.djangoproject.com/en/3.1/topics/db/queries/#complex-lookups-with-q-objects).
Next is to improve the look of the widget; at the moment you just get a string of the username:

On GitHub it's much nicer:

Extend this to
- Include the user's full name in the string
- Show whether the user is verified or not in the string and, if so, the domain of their verification address (see https://github.com/comic/grand-challenge.org/blob/master/app/grandchallenge/profiles/templatetags/profiles.py#L16)
- Stretch goal: Show the user's mugshot
--- END ISSUE ---
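A minimal sketch of the requested combined filter, assuming the user model has a `verification` relation with an `email` field (this mirrors the accepted patch shown further down in this row); the `full_name` annotation is one way to make a single `icontains` lookup span both first and last name:

```python
from django.contrib.auth import get_user_model
from django.db.models import CharField, Q, Value
from django.db.models.functions import Concat


def filter_users(q):
    qs = get_user_model().objects.annotate(
        # "first last" as one searchable string
        full_name=Concat(
            "first_name", Value(" "), "last_name", output_field=CharField()
        )
    )
    return qs.filter(
        Q(username__icontains=q)
        | Q(email__icontains=q)
        | Q(full_name__icontains=q)
        | Q(verification__email__icontains=q)
    )
```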
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/groups/forms.py`
Content:
```
1 from dal import autocomplete
2 from django.conf import settings
3 from django.contrib.auth import get_user_model
4 from django.core.exceptions import ValidationError
5 from django.forms import ChoiceField, Form, HiddenInput, ModelChoiceField
6 from guardian.utils import get_anonymous_user
7
8 from grandchallenge.core.forms import SaveFormInitMixin
9
10
11 class UserGroupForm(SaveFormInitMixin, Form):
12 ADD = "ADD"
13 REMOVE = "REMOVE"
14 CHOICES = ((ADD, "Add"), (REMOVE, "Remove"))
15 user = ModelChoiceField(
16 queryset=get_user_model().objects.all().order_by("username"),
17 help_text="Select a user that will be added to the group",
18 required=True,
19 widget=autocomplete.ModelSelect2(
20 url="users-autocomplete",
21 attrs={
22 "data-placeholder": "Search for a user ...",
23 "data-minimum-input-length": 3,
24 "data-theme": settings.CRISPY_TEMPLATE_PACK,
25 },
26 ),
27 )
28 action = ChoiceField(
29 choices=CHOICES, required=True, widget=HiddenInput(), initial=ADD
30 )
31
32 def clean_user(self):
33 user = self.cleaned_data["user"]
34 if user == get_anonymous_user():
35 raise ValidationError("You cannot add this user!")
36 return user
37
38 def add_or_remove_user(self, *, obj):
39 if self.cleaned_data["action"] == self.ADD:
40 getattr(obj, f"add_{self.role}")(self.cleaned_data["user"])
41 elif self.cleaned_data["action"] == self.REMOVE:
42 getattr(obj, f"remove_{self.role}")(self.cleaned_data["user"])
43
44
45 class EditorsForm(UserGroupForm):
46 role = "editor"
47
48
49 class MembersForm(UserGroupForm):
50 role = "member"
51
52
53 class UsersForm(UserGroupForm):
54 role = "user"
55
```
Path: `app/grandchallenge/groups/views.py`
Content:
```
1 from dal import autocomplete
2 from django.conf import settings
3 from django.contrib.auth import get_user_model
4 from django.contrib.auth.mixins import UserPassesTestMixin
5 from django.contrib.messages.views import SuccessMessageMixin
6 from django.views.generic import FormView
7 from guardian.mixins import (
8 LoginRequiredMixin,
9 PermissionRequiredMixin as ObjectPermissionRequiredMixin,
10 )
11 from guardian.shortcuts import get_objects_for_user
12
13
14 class UserGroupUpdateMixin(
15 LoginRequiredMixin,
16 ObjectPermissionRequiredMixin,
17 SuccessMessageMixin,
18 FormView,
19 ):
20 raise_exception = True
21
22 def get_permission_object(self):
23 return self.obj
24
25 @property
26 def obj(self):
27 raise NotImplementedError
28
29 def get_context_data(self, **kwargs):
30 context = super().get_context_data(**kwargs)
31 context.update({"object": self.obj, "role": self.get_form().role})
32 return context
33
34 def get_success_url(self):
35 return self.obj.get_absolute_url()
36
37 def form_valid(self, form):
38 form.add_or_remove_user(obj=self.obj)
39 return super().form_valid(form)
40
41
42 class UserAutocomplete(
43 LoginRequiredMixin, UserPassesTestMixin, autocomplete.Select2QuerySetView
44 ):
45 def test_func(self):
46 allowed_perms = [
47 "algorithms.change_algorithm",
48 "organizations.change_organization",
49 "archives.change_archive",
50 "reader_studies.change_readerstudy",
51 "workstations.change_workstation",
52 "algorithms.change_job",
53 ]
54 return any(
55 get_objects_for_user(user=self.request.user, perms=perm,).exists()
56 for perm in allowed_perms
57 )
58
59 def get_queryset(self):
60 qs = (
61 get_user_model()
62 .objects.order_by("username")
63 .exclude(username=settings.ANONYMOUS_USER_NAME)
64 )
65
66 if self.q:
67 qs = qs.filter(username__istartswith=self.q)
68
69 return qs
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/grandchallenge/groups/forms.py b/app/grandchallenge/groups/forms.py
--- a/app/grandchallenge/groups/forms.py
+++ b/app/grandchallenge/groups/forms.py
@@ -22,6 +22,7 @@
"data-placeholder": "Search for a user ...",
"data-minimum-input-length": 3,
"data-theme": settings.CRISPY_TEMPLATE_PACK,
+ "data-html": True,
},
),
)
diff --git a/app/grandchallenge/groups/views.py b/app/grandchallenge/groups/views.py
--- a/app/grandchallenge/groups/views.py
+++ b/app/grandchallenge/groups/views.py
@@ -3,6 +3,9 @@
from django.contrib.auth import get_user_model
from django.contrib.auth.mixins import UserPassesTestMixin
from django.contrib.messages.views import SuccessMessageMixin
+from django.db.models import CharField, Q, Value
+from django.db.models.functions import Concat
+from django.utils.html import format_html
from django.views.generic import FormView
from guardian.mixins import (
LoginRequiredMixin,
@@ -10,6 +13,8 @@
)
from guardian.shortcuts import get_objects_for_user
+from grandchallenge.verifications.models import Verification
+
class UserGroupUpdateMixin(
LoginRequiredMixin,
@@ -51,6 +56,7 @@
"workstations.change_workstation",
"algorithms.change_job",
]
+ # TODO reduce number of queries
return any(
get_objects_for_user(user=self.request.user, perms=perm,).exists()
for perm in allowed_perms
@@ -61,9 +67,50 @@
get_user_model()
.objects.order_by("username")
.exclude(username=settings.ANONYMOUS_USER_NAME)
+ .annotate(
+ full_name=Concat(
+ "first_name",
+ Value(" "),
+ "last_name",
+ output_field=CharField(),
+ )
+ )
+ .select_related("verification", "user_profile")
)
if self.q:
- qs = qs.filter(username__istartswith=self.q)
+ qs = qs.filter(
+ Q(username__icontains=self.q)
+ | Q(email__icontains=self.q)
+ | Q(full_name__icontains=self.q)
+ | Q(verification__email__icontains=self.q)
+ )
return qs
+
+ def get_result_label(self, result):
+
+ try:
+ is_verified = result.verification.is_verified
+ except Verification.DoesNotExist:
+ is_verified = False
+
+ if is_verified:
+ return format_html(
+ '<img src="{}" width ="20" height ="20" style="vertical-align:top"> '
+ " <b>{}</b> {} "
+ '<i class="fas fa-user-check text-success" '
+ 'title="Verified email address at {}">',
+ result.user_profile.get_mugshot_url(),
+ result.get_username(),
+ result.get_full_name().title(),
+ result.verification.email.split("@")[1],
+ )
+ else:
+ return format_html(
+ '<img src="{}" width ="20" height ="20" style="vertical-align:top"> '
+ " <b>{}</b> {}",
+ result.user_profile.get_mugshot_url(),
+ result.get_username(),
+ result.get_full_name().title(),
+ )
|
{"golden_diff": "diff --git a/app/grandchallenge/groups/forms.py b/app/grandchallenge/groups/forms.py\n--- a/app/grandchallenge/groups/forms.py\n+++ b/app/grandchallenge/groups/forms.py\n@@ -22,6 +22,7 @@\n \"data-placeholder\": \"Search for a user ...\",\n \"data-minimum-input-length\": 3,\n \"data-theme\": settings.CRISPY_TEMPLATE_PACK,\n+ \"data-html\": True,\n },\n ),\n )\ndiff --git a/app/grandchallenge/groups/views.py b/app/grandchallenge/groups/views.py\n--- a/app/grandchallenge/groups/views.py\n+++ b/app/grandchallenge/groups/views.py\n@@ -3,6 +3,9 @@\n from django.contrib.auth import get_user_model\n from django.contrib.auth.mixins import UserPassesTestMixin\n from django.contrib.messages.views import SuccessMessageMixin\n+from django.db.models import CharField, Q, Value\n+from django.db.models.functions import Concat\n+from django.utils.html import format_html\n from django.views.generic import FormView\n from guardian.mixins import (\n LoginRequiredMixin,\n@@ -10,6 +13,8 @@\n )\n from guardian.shortcuts import get_objects_for_user\n \n+from grandchallenge.verifications.models import Verification\n+\n \n class UserGroupUpdateMixin(\n LoginRequiredMixin,\n@@ -51,6 +56,7 @@\n \"workstations.change_workstation\",\n \"algorithms.change_job\",\n ]\n+ # TODO reduce number of queries\n return any(\n get_objects_for_user(user=self.request.user, perms=perm,).exists()\n for perm in allowed_perms\n@@ -61,9 +67,50 @@\n get_user_model()\n .objects.order_by(\"username\")\n .exclude(username=settings.ANONYMOUS_USER_NAME)\n+ .annotate(\n+ full_name=Concat(\n+ \"first_name\",\n+ Value(\" \"),\n+ \"last_name\",\n+ output_field=CharField(),\n+ )\n+ )\n+ .select_related(\"verification\", \"user_profile\")\n )\n \n if self.q:\n- qs = qs.filter(username__istartswith=self.q)\n+ qs = qs.filter(\n+ Q(username__icontains=self.q)\n+ | Q(email__icontains=self.q)\n+ | Q(full_name__icontains=self.q)\n+ | Q(verification__email__icontains=self.q)\n+ )\n \n return qs\n+\n+ def get_result_label(self, result):\n+\n+ try:\n+ is_verified = result.verification.is_verified\n+ except Verification.DoesNotExist:\n+ is_verified = False\n+\n+ if is_verified:\n+ return format_html(\n+ '<img src=\"{}\" width =\"20\" height =\"20\" style=\"vertical-align:top\"> '\n+ \" <b>{}</b> {} \"\n+ '<i class=\"fas fa-user-check text-success\" '\n+ 'title=\"Verified email address at {}\">',\n+ result.user_profile.get_mugshot_url(),\n+ result.get_username(),\n+ result.get_full_name().title(),\n+ result.verification.email.split(\"@\")[1],\n+ )\n+ else:\n+ return format_html(\n+ '<img src=\"{}\" width =\"20\" height =\"20\" style=\"vertical-align:top\"> '\n+ \" <b>{}</b> {}\",\n+ result.user_profile.get_mugshot_url(),\n+ result.get_username(),\n+ result.get_full_name().title(),\n+ )\n", "issue": "Improve the user search and selection form\nIn #1684 the user selection for groups was consolidated, so now it would be good to improve it. The form uses a ModelSelect2 widget (https://github.com/comic/grand-challenge.org/blob/master/app/grandchallenge/groups/forms.py#L19) that calls an autocomplete view that only allows filtering by username (https://github.com/comic/grand-challenge.org/blob/master/app/grandchallenge/groups/views.py#L67). \r\n\r\nTop priority is to extend the filter:\r\n- Switch to filtering with `icontains` rather than `istartswith`\r\n- Allow filtering on the users username, email address, full name, OR verification email address. 
To do this use Django `Q` objects (https://docs.djangoproject.com/en/3.1/topics/db/queries/#complex-lookups-with-q-objects).\r\n\r\nNext, is to improve the look of the widget, at the moment you just get a string of the username:\r\n\r\n\r\nOn github it's much nicer\r\n\r\n\r\nExtend this to\r\n- Include the users full name in the string\r\n- Show whether the user is verified or not in the string, and if so, what is the domain of their verification address (see https://github.com/comic/grand-challenge.org/blob/master/app/grandchallenge/profiles/templatetags/profiles.py#L16)\r\n- Stretch goal: Show the users mugshot\r\n\n", "before_files": [{"content": "from dal import autocomplete\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import ValidationError\nfrom django.forms import ChoiceField, Form, HiddenInput, ModelChoiceField\nfrom guardian.utils import get_anonymous_user\n\nfrom grandchallenge.core.forms import SaveFormInitMixin\n\n\nclass UserGroupForm(SaveFormInitMixin, Form):\n ADD = \"ADD\"\n REMOVE = \"REMOVE\"\n CHOICES = ((ADD, \"Add\"), (REMOVE, \"Remove\"))\n user = ModelChoiceField(\n queryset=get_user_model().objects.all().order_by(\"username\"),\n help_text=\"Select a user that will be added to the group\",\n required=True,\n widget=autocomplete.ModelSelect2(\n url=\"users-autocomplete\",\n attrs={\n \"data-placeholder\": \"Search for a user ...\",\n \"data-minimum-input-length\": 3,\n \"data-theme\": settings.CRISPY_TEMPLATE_PACK,\n },\n ),\n )\n action = ChoiceField(\n choices=CHOICES, required=True, widget=HiddenInput(), initial=ADD\n )\n\n def clean_user(self):\n user = self.cleaned_data[\"user\"]\n if user == get_anonymous_user():\n raise ValidationError(\"You cannot add this user!\")\n return user\n\n def add_or_remove_user(self, *, obj):\n if self.cleaned_data[\"action\"] == self.ADD:\n getattr(obj, f\"add_{self.role}\")(self.cleaned_data[\"user\"])\n elif self.cleaned_data[\"action\"] == self.REMOVE:\n getattr(obj, f\"remove_{self.role}\")(self.cleaned_data[\"user\"])\n\n\nclass EditorsForm(UserGroupForm):\n role = \"editor\"\n\n\nclass MembersForm(UserGroupForm):\n role = \"member\"\n\n\nclass UsersForm(UserGroupForm):\n role = \"user\"\n", "path": "app/grandchallenge/groups/forms.py"}, {"content": "from dal import autocomplete\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.views.generic import FormView\nfrom guardian.mixins import (\n LoginRequiredMixin,\n PermissionRequiredMixin as ObjectPermissionRequiredMixin,\n)\nfrom guardian.shortcuts import get_objects_for_user\n\n\nclass UserGroupUpdateMixin(\n LoginRequiredMixin,\n ObjectPermissionRequiredMixin,\n SuccessMessageMixin,\n FormView,\n):\n raise_exception = True\n\n def get_permission_object(self):\n return self.obj\n\n @property\n def obj(self):\n raise NotImplementedError\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context.update({\"object\": self.obj, \"role\": self.get_form().role})\n return context\n\n def get_success_url(self):\n return self.obj.get_absolute_url()\n\n def form_valid(self, form):\n form.add_or_remove_user(obj=self.obj)\n return super().form_valid(form)\n\n\nclass UserAutocomplete(\n LoginRequiredMixin, UserPassesTestMixin, autocomplete.Select2QuerySetView\n):\n def test_func(self):\n allowed_perms = [\n 
\"algorithms.change_algorithm\",\n \"organizations.change_organization\",\n \"archives.change_archive\",\n \"reader_studies.change_readerstudy\",\n \"workstations.change_workstation\",\n \"algorithms.change_job\",\n ]\n return any(\n get_objects_for_user(user=self.request.user, perms=perm,).exists()\n for perm in allowed_perms\n )\n\n def get_queryset(self):\n qs = (\n get_user_model()\n .objects.order_by(\"username\")\n .exclude(username=settings.ANONYMOUS_USER_NAME)\n )\n\n if self.q:\n qs = qs.filter(username__istartswith=self.q)\n\n return qs\n", "path": "app/grandchallenge/groups/views.py"}], "after_files": [{"content": "from dal import autocomplete\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import ValidationError\nfrom django.forms import ChoiceField, Form, HiddenInput, ModelChoiceField\nfrom guardian.utils import get_anonymous_user\n\nfrom grandchallenge.core.forms import SaveFormInitMixin\n\n\nclass UserGroupForm(SaveFormInitMixin, Form):\n ADD = \"ADD\"\n REMOVE = \"REMOVE\"\n CHOICES = ((ADD, \"Add\"), (REMOVE, \"Remove\"))\n user = ModelChoiceField(\n queryset=get_user_model().objects.all().order_by(\"username\"),\n help_text=\"Select a user that will be added to the group\",\n required=True,\n widget=autocomplete.ModelSelect2(\n url=\"users-autocomplete\",\n attrs={\n \"data-placeholder\": \"Search for a user ...\",\n \"data-minimum-input-length\": 3,\n \"data-theme\": settings.CRISPY_TEMPLATE_PACK,\n \"data-html\": True,\n },\n ),\n )\n action = ChoiceField(\n choices=CHOICES, required=True, widget=HiddenInput(), initial=ADD\n )\n\n def clean_user(self):\n user = self.cleaned_data[\"user\"]\n if user == get_anonymous_user():\n raise ValidationError(\"You cannot add this user!\")\n return user\n\n def add_or_remove_user(self, *, obj):\n if self.cleaned_data[\"action\"] == self.ADD:\n getattr(obj, f\"add_{self.role}\")(self.cleaned_data[\"user\"])\n elif self.cleaned_data[\"action\"] == self.REMOVE:\n getattr(obj, f\"remove_{self.role}\")(self.cleaned_data[\"user\"])\n\n\nclass EditorsForm(UserGroupForm):\n role = \"editor\"\n\n\nclass MembersForm(UserGroupForm):\n role = \"member\"\n\n\nclass UsersForm(UserGroupForm):\n role = \"user\"\n", "path": "app/grandchallenge/groups/forms.py"}, {"content": "from dal import autocomplete\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.db.models import CharField, Q, Value\nfrom django.db.models.functions import Concat\nfrom django.utils.html import format_html\nfrom django.views.generic import FormView\nfrom guardian.mixins import (\n LoginRequiredMixin,\n PermissionRequiredMixin as ObjectPermissionRequiredMixin,\n)\nfrom guardian.shortcuts import get_objects_for_user\n\nfrom grandchallenge.verifications.models import Verification\n\n\nclass UserGroupUpdateMixin(\n LoginRequiredMixin,\n ObjectPermissionRequiredMixin,\n SuccessMessageMixin,\n FormView,\n):\n raise_exception = True\n\n def get_permission_object(self):\n return self.obj\n\n @property\n def obj(self):\n raise NotImplementedError\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context.update({\"object\": self.obj, \"role\": self.get_form().role})\n return context\n\n def get_success_url(self):\n return self.obj.get_absolute_url()\n\n def form_valid(self, form):\n form.add_or_remove_user(obj=self.obj)\n 
return super().form_valid(form)\n\n\nclass UserAutocomplete(\n LoginRequiredMixin, UserPassesTestMixin, autocomplete.Select2QuerySetView\n):\n def test_func(self):\n allowed_perms = [\n \"algorithms.change_algorithm\",\n \"organizations.change_organization\",\n \"archives.change_archive\",\n \"reader_studies.change_readerstudy\",\n \"workstations.change_workstation\",\n \"algorithms.change_job\",\n ]\n # TODO reduce number of queries\n return any(\n get_objects_for_user(user=self.request.user, perms=perm,).exists()\n for perm in allowed_perms\n )\n\n def get_queryset(self):\n qs = (\n get_user_model()\n .objects.order_by(\"username\")\n .exclude(username=settings.ANONYMOUS_USER_NAME)\n .annotate(\n full_name=Concat(\n \"first_name\",\n Value(\" \"),\n \"last_name\",\n output_field=CharField(),\n )\n )\n .select_related(\"verification\", \"user_profile\")\n )\n\n if self.q:\n qs = qs.filter(\n Q(username__icontains=self.q)\n | Q(email__icontains=self.q)\n | Q(full_name__icontains=self.q)\n | Q(verification__email__icontains=self.q)\n )\n\n return qs\n\n def get_result_label(self, result):\n\n try:\n is_verified = result.verification.is_verified\n except Verification.DoesNotExist:\n is_verified = False\n\n if is_verified:\n return format_html(\n '<img src=\"{}\" width =\"20\" height =\"20\" style=\"vertical-align:top\"> '\n \" <b>{}</b> {} \"\n '<i class=\"fas fa-user-check text-success\" '\n 'title=\"Verified email address at {}\">',\n result.user_profile.get_mugshot_url(),\n result.get_username(),\n result.get_full_name().title(),\n result.verification.email.split(\"@\")[1],\n )\n else:\n return format_html(\n '<img src=\"{}\" width =\"20\" height =\"20\" style=\"vertical-align:top\"> '\n \" <b>{}</b> {}\",\n result.user_profile.get_mugshot_url(),\n result.get_username(),\n result.get_full_name().title(),\n )\n", "path": "app/grandchallenge/groups/views.py"}]}
| 1,721 | 759 |
gh_patches_debug_1726
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-3627
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong class number for LVIS 1.0 dataset
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**
Wrong class number for LVIS 1.0 dataset
**Reproduction**
1. What command or script did you run?
N.A.
2. Did you make any modifications on the code or config? Did you understand what you have modified?
Nope
3. What dataset did you use?
LVIS1.0
**Environment**
N.A.
**Error traceback**
N.A.
**Bug fix**
For the config file for the LVIS 1.0 dataset, the class number should be 1203 instead of 1230, as indicated on the official website:
https://www.lvisdataset.org/dataset
https://github.com/open-mmlab/mmdetection/blob/9596b9a4c916ae601f9a8a641c3a0ea47265abec/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py#L8
--- END ISSUE ---
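The fix amounts to changing the two class counts in the ROI head configuration; for reference, a sketch of the corrected fragment (matching the patch below):

```python
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=1203),  # LVIS v1.0 defines 1203 categories
        mask_head=dict(num_classes=1203)))
```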
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py`
Content:
```
1 _base_ = [
2 '../_base_/models/mask_rcnn_r50_fpn.py',
3 '../_base_/datasets/lvis_v1_instance.py',
4 '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
5 ]
6 model = dict(
7 roi_head=dict(
8 bbox_head=dict(num_classes=1230), mask_head=dict(num_classes=1230)))
9 test_cfg = dict(
10 rcnn=dict(
11 score_thr=0.0001,
12 # LVIS allows up to 300
13 max_per_img=300))
14 img_norm_cfg = dict(
15 mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
16 train_pipeline = [
17 dict(type='LoadImageFromFile'),
18 dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
19 dict(
20 type='Resize',
21 img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
22 (1333, 768), (1333, 800)],
23 multiscale_mode='value',
24 keep_ratio=True),
25 dict(type='RandomFlip', flip_ratio=0.5),
26 dict(type='Normalize', **img_norm_cfg),
27 dict(type='Pad', size_divisor=32),
28 dict(type='DefaultFormatBundle'),
29 dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
30 ]
31 data = dict(train=dict(dataset=dict(pipeline=train_pipeline)))
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py b/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py
--- a/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py
+++ b/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py
@@ -5,7 +5,7 @@
]
model = dict(
roi_head=dict(
- bbox_head=dict(num_classes=1230), mask_head=dict(num_classes=1230)))
+ bbox_head=dict(num_classes=1203), mask_head=dict(num_classes=1203)))
test_cfg = dict(
rcnn=dict(
score_thr=0.0001,
|
{"golden_diff": "diff --git a/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py b/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py\n--- a/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py\n+++ b/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py\n@@ -5,7 +5,7 @@\n ]\n model = dict(\n roi_head=dict(\n- bbox_head=dict(num_classes=1230), mask_head=dict(num_classes=1230)))\n+ bbox_head=dict(num_classes=1203), mask_head=dict(num_classes=1203)))\n test_cfg = dict(\n rcnn=dict(\n score_thr=0.0001,\n", "issue": "Wrong class number for LVIS 1.0 dataset\n**Checklist**\r\n1. I have searched related issues but cannot get the expected help.\r\n2. The bug has not been fixed in the latest version.\r\n\r\n**Describe the bug**\r\nWrong class number for LVIS 1.0 dataset\r\n\r\n**Reproduction**\r\n1. What command or script did you run?\r\nN.A.\r\n2. Did you make any modifications on the code or config? Did you understand what you have modified?\r\nNope\r\n3. What dataset did you use?\r\nLVIS1.0\r\n\r\n**Environment**\r\nN.A.\r\n\r\n**Error traceback**\r\nN.A.\r\n\r\n**Bug fix**\r\nFor the config file for LVIS1.0 dataset, the class number should be 1203 instead of 1230 as indicated on the official website:\r\nhttps://www.lvisdataset.org/dataset\r\n\r\nhttps://github.com/open-mmlab/mmdetection/blob/9596b9a4c916ae601f9a8a641c3a0ea47265abec/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py#L8\r\n\n", "before_files": [{"content": "_base_ = [\n '../_base_/models/mask_rcnn_r50_fpn.py',\n '../_base_/datasets/lvis_v1_instance.py',\n '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'\n]\nmodel = dict(\n roi_head=dict(\n bbox_head=dict(num_classes=1230), mask_head=dict(num_classes=1230)))\ntest_cfg = dict(\n rcnn=dict(\n score_thr=0.0001,\n # LVIS allows up to 300\n max_per_img=300))\nimg_norm_cfg = dict(\n mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True, with_mask=True),\n dict(\n type='Resize',\n img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),\n (1333, 768), (1333, 800)],\n multiscale_mode='value',\n keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),\n]\ndata = dict(train=dict(dataset=dict(pipeline=train_pipeline)))\n", "path": "configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py"}], "after_files": [{"content": "_base_ = [\n '../_base_/models/mask_rcnn_r50_fpn.py',\n '../_base_/datasets/lvis_v1_instance.py',\n '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'\n]\nmodel = dict(\n roi_head=dict(\n bbox_head=dict(num_classes=1203), mask_head=dict(num_classes=1203)))\ntest_cfg = dict(\n rcnn=dict(\n score_thr=0.0001,\n # LVIS allows up to 300\n max_per_img=300))\nimg_norm_cfg = dict(\n mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True, with_mask=True),\n dict(\n type='Resize',\n img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),\n (1333, 768), (1333, 800)],\n multiscale_mode='value',\n keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n 
dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),\n]\ndata = dict(train=dict(dataset=dict(pipeline=train_pipeline)))\n", "path": "configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py"}]}
| 989 | 213 |
gh_patches_debug_25302
|
rasdani/github-patches
|
git_diff
|
plotly__plotly.py-199
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installing simplejson kills pip on anaconda
@theengineear, from: http://stackoverflow.com/questions/28751216/plotly-killed-my-python-install?nah=1#28754036
Python 3.4 on Windows via Anaconda 2.1.0: installing simplejson (which is pulled in by Plotly's requirements.txt) kills pip:
```
Traceback (most recent call last):
File "C:\Anaconda3\Scripts\pip-script.py", line 3, in <module>
from pip import main
File "C:\Anaconda3\lib\site-packages\pip\__init__.py", line 11, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File "C:\Anaconda3\lib\site-packages\pip\vcs\mercurial.py", line 9, in <module>
from pip.download import path_to_url
File "C:\Anaconda3\lib\site-packages\pip\download.py", line 22, in <module>
from pip._vendor import requests, six
File "C:\Anaconda3\lib\site-packages\pip\_vendor\requests\__init__.py", line 58, in <module>
from . import utils
File "C:\Anaconda3\lib\site-packages\pip\_vendor\requests\utils.py", line 25, in <module>
from .compat import parse_http_list as _parse_list_header
File "C:\Anaconda3\lib\site-packages\pip\_vendor\requests\compat.py", line 77, in <module>
import simplejson as json
File "C:\Anaconda3\lib\site-packages\simplejson\__init__.py", line 112, in <module>
from .scanner import JSONDecodeError
File "C:\Anaconda3\lib\site-packages\simplejson\scanner.py", line 10, in <module>
c_make_scanner = _import_c_make_scanner()
File "C:\Anaconda3\lib\site-packages\simplejson\scanner.py", line 6, in _import_c_make_scanner
from simplejson._speedups import make_scanner
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 4: invalid start byte
```
Can we _not_ install simplejson unless it's really needed (Python 2.6 systems)? Or is there another way we can work around this Anaconda issue? (Logged for Anaconda as: https://github.com/ContinuumIO/anaconda-issues/issues/238 )
--- END ISSUE ---
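One common way to avoid pulling in `simplejson` on modern interpreters is to compute `install_requires` conditionally at setup time. This is only a sketch of that general pattern — the base requirement list is illustrative and this is not necessarily the fix the project adopted:

```python
import sys

from setuptools import setup

# Illustrative base requirements; the project's real list lives in its setup.py.
install_requires = ["requests", "six", "pytz"]
if sys.version_info[:2] == (2, 6):
    # Only Python 2.6 needs the json / OrderedDict backports.
    install_requires += ["simplejson", "ordereddict"]

setup(
    name="example-package",  # placeholder name for the sketch
    version="0.0.0",
    install_requires=install_requires,
)
```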
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plotly/graph_objs/graph_objs_tools.py`
Content:
```
1 from __future__ import absolute_import
2 from plotly import utils
3 import textwrap
4 import os
5 import sys
6 if sys.version[:3] == '2.6':
7 from ordereddict import OrderedDict
8 import simplejson as json
9 else:
10 from collections import OrderedDict
11 import json
12 import six
13
14 from pkg_resources import resource_string
15
16
17 # Define graph reference loader
18 def _load_graph_ref():
19 graph_reference_dir = 'graph_reference'
20 json_files = [
21 'graph_objs_meta.json',
22 'OBJ_MAP.json',
23 'NAME_TO_KEY.json',
24 'KEY_TO_NAME.json'
25 ]
26 out = []
27 for json_file in json_files:
28 relative_path = os.path.join(graph_reference_dir, json_file)
29 s = resource_string('plotly', relative_path).decode('utf-8')
30 tmp = json.loads(s, object_pairs_hook=OrderedDict)
31 tmp = utils.decode_unicode(tmp)
32 out += [tmp]
33 return tuple(out)
34
35 # Load graph reference
36 INFO, OBJ_MAP, NAME_TO_KEY, KEY_TO_NAME = _load_graph_ref()
37
38 # Add mentions to Python-specific graph obj
39 # to NAME_TO_KEY, KEY_TO_NAME, INFO
40 NAME_TO_KEY['PlotlyList'] = 'plotlylist'
41 NAME_TO_KEY['PlotlyDict'] = 'plotlydict'
42 NAME_TO_KEY['PlotlyTrace'] = 'plotlytrace'
43 NAME_TO_KEY['Trace'] = 'trace'
44 KEY_TO_NAME['plotlylist'] = 'PlotlyList'
45 KEY_TO_NAME['plotlydict'] = 'PlotlyDict'
46 KEY_TO_NAME['plotlytrace'] = 'PlotlyTrace'
47 KEY_TO_NAME['trace'] = 'Trace'
48 INFO['plotlylist'] = dict(keymeta=dict())
49 INFO['plotlydict'] = dict(keymeta=dict())
50 INFO['plotlytrace'] = dict(keymeta=dict())
51 INFO['trace'] = dict(keymeta=dict())
52
53 # Define line and tab size for help text!
54 LINE_SIZE = 76
55 TAB_SIZE = 4
56
57
58 # Doc make function for list-like objects
59 def make_list_doc(name):
60 # get info for this graph obj
61 info = INFO[NAME_TO_KEY[name]]
62 # add docstring to doc
63 doc = info['docstring']
64 doc = "\t" + "\n\t".join(textwrap.wrap(doc, width=LINE_SIZE)) + "\n"
65 # Add examples to doc
66 examples = info['examples']
67 if len(examples):
68 doc += "\nExample:\n\n >>> " + "\n >>> ".join(examples) + "\n"
69 # Add links to online examples to doc
70 links = info['links']
71 if len(links) == 1:
72 doc += "\nOnline example:\n\n " + "\n ".join(links) + "\n"
73 elif len(links) > 1:
74 doc += "\nOnline examples:\n\n " + "\n ".join(links) + "\n"
75 # Add parents keys to doc
76 parent_keys = info['parent_keys']
77 if len(parent_keys) == 1:
78 doc += "\nParent key:\n\n " + "\n ".join(parent_keys) + "\n"
79 elif len(parent_keys) > 1:
80 doc += "\nParent keys:\n\n " + "\n ".join(parent_keys) + "\n"
81 # Add method list to doc
82 doc += "\nQuick method reference:\n\n"
83 doc += "\t{0}.".format(name) + "\n\t{0}.".format(name).join(
84 ["update(changes)", "strip_style()", "get_data()",
85 "to_graph_objs()", "validate()", "to_string()",
86 "force_clean()"]) + "\n\n"
87 return doc.expandtabs(TAB_SIZE)
88
89
90 # Doc make function for dict-like objects
91 def make_dict_doc(name):
92 # get info for this graph obj
93 info = INFO[NAME_TO_KEY[name]]
94 # add docstring to doc
95 doc = info['docstring']
96 doc = "\t" + "\n\t".join(textwrap.wrap(doc, width=LINE_SIZE)) + "\n"
97 # Add examples to doc
98 examples = info['examples']
99 if len(examples):
100 doc += "\nExample:\n\n >>> " + "\n >>> ".join(examples) + "\n"
101 # Add links to online examples to doc
102 links = info['links']
103 if len(links) == 1:
104 doc += "\nOnline example:\n\n " + "\n ".join(links) + "\n"
105 elif len(links) > 1:
106 doc += "\nOnline examples:\n\n " + "\n ".join(links) + "\n"
107 # Add parents keys to doc
108 parent_keys = info['parent_keys']
109 if len(parent_keys) == 1:
110 doc += "\nParent key:\n\n " + "\n ".join(parent_keys) + "\n"
111 elif len(parent_keys) > 1:
112 doc += "\nParent keys:\n\n " + "\n ".join(parent_keys) + "\n"
113 # Add method list to doc
114 doc += "\nQuick method reference:\n\n"
115 doc += "\t{0}.".format(name) + "\n\t{0}.".format(name).join(
116 ["update(changes)", "strip_style()", "get_data()",
117 "to_graph_objs()", "validate()", "to_string()",
118 "force_clean()"]) + "\n\n"
119 # Add key meta to doc
120 keymeta = info['keymeta']
121 if len(keymeta):
122 doc += "Valid keys:\n\n"
123 # Add each key one-by-one and format
124 width1 = LINE_SIZE-TAB_SIZE
125 width2 = LINE_SIZE-2*TAB_SIZE
126 width3 = LINE_SIZE-3*TAB_SIZE
127 undocumented = "Aw, snap! Undocumented!"
128 for key in keymeta:
129 # main portion of documentation
130 try:
131 required = str(keymeta[key]['required'])
132 except KeyError:
133 required = undocumented
134 try:
135 typ = str(keymeta[key]['key_type'])
136 except KeyError:
137 typ = undocumented
138 try:
139 val_types = str(keymeta[key]['val_types'])
140 if typ == 'object':
141 val_types = ("{0} object | ".format(KEY_TO_NAME[key]) +
142 val_types)
143 except KeyError:
144 val_types = undocumented
145 try:
146 descr = str(keymeta[key]['description'])
147 except KeyError:
148 descr = undocumented
149 str_1 = "{0} [required={1}] (value={2})".format(
150 key, required, val_types)
151 if "streamable" in keymeta[key] and keymeta[key]["streamable"]:
152 str_1 += " (streamable)"
153 str_1 += ":\n"
154 str_1 = "\t" + "\n\t".join(textwrap.wrap(str_1,
155 width=width1)) + "\n"
156 str_2 = "\t\t" + "\n\t\t".join(textwrap.wrap(descr,
157 width=width2)) + "\n"
158 doc += str_1 + str_2
159 # if a user can run help on this value, tell them!
160 if typ == "object":
161 doc += "\n\t\tFor more, run `help(plotly.graph_objs.{0" \
162 "})`\n".format(KEY_TO_NAME[key])
163 # if example usage exists, tell them!
164 try:
165 if len(keymeta[key]['examples']):
166 ex = "\n\t\tExamples:\n" + "\t\t\t"
167 ex += "\n\t\t\t".join(
168 textwrap.wrap(' | '.join(keymeta[key]['examples']),
169 width=width3)) + "\n"
170 doc += ex
171 except:
172 pass
173 doc += '\n'
174 return doc.expandtabs(TAB_SIZE)
175
176
177 def update_keys(keys):
178 """Change keys we used to support to their new equivalent."""
179 updated_keys = list()
180 for key in keys:
181 if key in translations:
182 updated_keys += [translations[key]]
183 else:
184 updated_keys += [key]
185 return updated_keys
186
187 translations = dict(
188 scl="colorscale",
189 reversescl="reversescale"
190 )
191
192
193 def curtail_val_repr(val, max_chars, add_delim=False):
194 delim = ", "
195 end = ".."
196 if isinstance(val, six.string_types):
197 if max_chars <= len("'" + end + "'"):
198 return ' ' * max_chars
199 elif add_delim and max_chars <= len("'" + end + "'") + len(delim):
200 return "'" + end + "'" + ' ' * (max_chars - len("'" + end + "'"))
201 else:
202 if max_chars <= len(end):
203 return ' ' * max_chars
204 elif add_delim and max_chars <= len(end) + len(delim):
205 return end + ' ' * (max_chars - len(end))
206 if add_delim:
207 max_chars -= len(delim)
208 r = repr(val)
209 if len(r) > max_chars:
210 if isinstance(val, six.string_types):
211 # TODO: can we assume this ends in "'"
212 r = r[:max_chars - len(end + "'")] + end + "'"
213 elif (isinstance(val, list) and
214 max_chars >= len("[{end}]".format(end=end))):
215 r = r[:max_chars - len(end + ']')] + end + ']'
216 else:
217 r = r[:max_chars - len(end)] + end
218 if add_delim:
219 r += delim
220 return r
221
```
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 exec (open('plotly/version.py').read())
4
5
6 def readme():
7 with open('README.rst') as f:
8 return f.read()
9
10
11 setup(name='plotly',
12 version=__version__,
13 use_2to3=False,
14 author='Chris P',
15 author_email='[email protected]',
16 maintainer='Chris P',
17 maintainer_email='[email protected]',
18 url='https://plot.ly/api/python',
19 description="Python plotting library for collaborative, "
20 "interactive, publication-quality graphs.",
21 long_description=readme(),
22 classifiers=[
23 'Development Status :: 4 - Beta',
24 'Programming Language :: Python :: 2',
25 'Programming Language :: Python :: 2.6',
26 'Programming Language :: Python :: 2.7',
27 'Programming Language :: Python :: 3',
28 'Programming Language :: Python :: 3.2',
29 'Programming Language :: Python :: 3.3',
30 'Programming Language :: Python :: 3.4',
31 'Topic :: Scientific/Engineering :: Visualization',
32 ],
33 license='MIT',
34 packages=['plotly',
35 'plotly/plotly',
36 'plotly/plotly/chunked_requests',
37 'plotly/graph_objs',
38 'plotly/grid_objs',
39 'plotly/widgets',
40 'plotly/matplotlylib',
41 'plotly/matplotlylib/mplexporter',
42 'plotly/matplotlylib/mplexporter/renderers'],
43 package_data={'plotly': ['graph_reference/*.json', 'widgets/*.js']},
44 install_requires=['requests',
45 'six',
46 'pytz',
47 'ordereddict',
48 'simplejson'],
49 zip_safe=False)
50
```
Path: `plotly/version.py`
Content:
```
1 __version__ = '1.6.9'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plotly/graph_objs/graph_objs_tools.py b/plotly/graph_objs/graph_objs_tools.py
--- a/plotly/graph_objs/graph_objs_tools.py
+++ b/plotly/graph_objs/graph_objs_tools.py
@@ -4,8 +4,16 @@
import os
import sys
if sys.version[:3] == '2.6':
- from ordereddict import OrderedDict
- import simplejson as json
+ try:
+ from ordereddict import OrderedDict
+ import simplejson as json
+ except ImportError:
+ raise ImportError(
+ "Looks like you're running Python 2.6. Plotly expects newer "
+ "standard library versions of ordereddict and json. You can "
+ "simply upgrade with these 'extras' with the following terminal "
+ "command:\npip install 'plotly[PY2.6]'"
+ )
else:
from collections import OrderedDict
import json
diff --git a/plotly/version.py b/plotly/version.py
--- a/plotly/version.py
+++ b/plotly/version.py
@@ -1 +1 @@
-__version__ = '1.6.9'
+__version__ = '1.6.10'
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,9 +41,6 @@
'plotly/matplotlylib/mplexporter',
'plotly/matplotlylib/mplexporter/renderers'],
package_data={'plotly': ['graph_reference/*.json', 'widgets/*.js']},
- install_requires=['requests',
- 'six',
- 'pytz',
- 'ordereddict',
- 'simplejson'],
+ install_requires=['requests', 'six', 'pytz'],
+ extras_require={"PY2.6": ['simplejson', 'ordereddict']},
zip_safe=False)
|
{"golden_diff": "diff --git a/plotly/graph_objs/graph_objs_tools.py b/plotly/graph_objs/graph_objs_tools.py\n--- a/plotly/graph_objs/graph_objs_tools.py\n+++ b/plotly/graph_objs/graph_objs_tools.py\n@@ -4,8 +4,16 @@\n import os\n import sys\n if sys.version[:3] == '2.6':\n- from ordereddict import OrderedDict\n- import simplejson as json\n+ try:\n+ from ordereddict import OrderedDict\n+ import simplejson as json\n+ except ImportError:\n+ raise ImportError(\n+ \"Looks like you're running Python 2.6. Plotly expects newer \"\n+ \"standard library versions of ordereddict and json. You can \"\n+ \"simply upgrade with these 'extras' with the following terminal \"\n+ \"command:\\npip install 'plotly[PY2.6]'\"\n+ )\n else:\n from collections import OrderedDict\n import json\ndiff --git a/plotly/version.py b/plotly/version.py\n--- a/plotly/version.py\n+++ b/plotly/version.py\n@@ -1 +1 @@\n-__version__ = '1.6.9'\n+__version__ = '1.6.10'\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,9 +41,6 @@\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['graph_reference/*.json', 'widgets/*.js']},\n- install_requires=['requests',\n- 'six',\n- 'pytz',\n- 'ordereddict',\n- 'simplejson'],\n+ install_requires=['requests', 'six', 'pytz'],\n+ extras_require={\"PY2.6\": ['simplejson', 'ordereddict']},\n zip_safe=False)\n", "issue": "Installing simplejson kills pip on anaconda\n@theengineear, from: http://stackoverflow.com/questions/28751216/plotly-killed-my-python-install?nah=1#28754036\n\nPython 3.4 on Windows via Anaconda 2.1.0: installing simplejson (which is pulled in by Plotly's requirements.txt) kills pip:\n\n```\nTraceback (most recent call last):\n File \"C:\\Anaconda3\\Scripts\\pip-script.py\", line 3, in <module>\n from pip import main\n File \"C:\\Anaconda3\\lib\\site-packages\\pip\\__init__.py\", line 11, in <module>\n from pip.vcs import git, mercurial, subversion, bazaar # noqa\n File \"C:\\Anaconda3\\lib\\site-packages\\pip\\vcs\\mercurial.py\", line 9, in <module>\n from pip.download import path_to_url\n File \"C:\\Anaconda3\\lib\\site-packages\\pip\\download.py\", line 22, in <module>\n from pip._vendor import requests, six\n File \"C:\\Anaconda3\\lib\\site-packages\\pip\\_vendor\\requests\\__init__.py\", line 58, in <module>\n from . import utils\n File \"C:\\Anaconda3\\lib\\site-packages\\pip\\_vendor\\requests\\utils.py\", line 25, in <module>\n from .compat import parse_http_list as _parse_list_header\n File \"C:\\Anaconda3\\lib\\site-packages\\pip\\_vendor\\requests\\compat.py\", line 77, in <module>\n import simplejson as json\n File \"C:\\Anaconda3\\lib\\site-packages\\simplejson\\__init__.py\", line 112, in <module>\n from .scanner import JSONDecodeError\n File \"C:\\Anaconda3\\lib\\site-packages\\simplejson\\scanner.py\", line 10, in <module>\n c_make_scanner = _import_c_make_scanner()\n File \"C:\\Anaconda3\\lib\\site-packages\\simplejson\\scanner.py\", line 6, in _import_c_make_scanner\n from simplejson._speedups import make_scanner\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 4: invalid start byte\n```\n\nCan we _not_ install simplejson unless it's really needed (Python 2.6 systems)? Or is there another way we can work around this Anaconda issue? 
(Logged for Anaconda as: https://github.com/ContinuumIO/anaconda-issues/issues/238 )\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom plotly import utils\nimport textwrap\nimport os\nimport sys\nif sys.version[:3] == '2.6':\n from ordereddict import OrderedDict\n import simplejson as json\nelse:\n from collections import OrderedDict\n import json\nimport six\n\nfrom pkg_resources import resource_string\n\n\n# Define graph reference loader\ndef _load_graph_ref():\n graph_reference_dir = 'graph_reference'\n json_files = [\n 'graph_objs_meta.json',\n 'OBJ_MAP.json',\n 'NAME_TO_KEY.json',\n 'KEY_TO_NAME.json'\n ]\n out = []\n for json_file in json_files:\n relative_path = os.path.join(graph_reference_dir, json_file)\n s = resource_string('plotly', relative_path).decode('utf-8')\n tmp = json.loads(s, object_pairs_hook=OrderedDict)\n tmp = utils.decode_unicode(tmp)\n out += [tmp]\n return tuple(out)\n\n# Load graph reference\nINFO, OBJ_MAP, NAME_TO_KEY, KEY_TO_NAME = _load_graph_ref()\n\n# Add mentions to Python-specific graph obj\n# to NAME_TO_KEY, KEY_TO_NAME, INFO\nNAME_TO_KEY['PlotlyList'] = 'plotlylist'\nNAME_TO_KEY['PlotlyDict'] = 'plotlydict'\nNAME_TO_KEY['PlotlyTrace'] = 'plotlytrace'\nNAME_TO_KEY['Trace'] = 'trace'\nKEY_TO_NAME['plotlylist'] = 'PlotlyList'\nKEY_TO_NAME['plotlydict'] = 'PlotlyDict'\nKEY_TO_NAME['plotlytrace'] = 'PlotlyTrace'\nKEY_TO_NAME['trace'] = 'Trace'\nINFO['plotlylist'] = dict(keymeta=dict())\nINFO['plotlydict'] = dict(keymeta=dict())\nINFO['plotlytrace'] = dict(keymeta=dict())\nINFO['trace'] = dict(keymeta=dict())\n\n# Define line and tab size for help text!\nLINE_SIZE = 76\nTAB_SIZE = 4\n\n\n# Doc make function for list-like objects\ndef make_list_doc(name):\n # get info for this graph obj\n info = INFO[NAME_TO_KEY[name]]\n # add docstring to doc\n doc = info['docstring']\n doc = \"\\t\" + \"\\n\\t\".join(textwrap.wrap(doc, width=LINE_SIZE)) + \"\\n\"\n # Add examples to doc\n examples = info['examples']\n if len(examples):\n doc += \"\\nExample:\\n\\n >>> \" + \"\\n >>> \".join(examples) + \"\\n\"\n # Add links to online examples to doc\n links = info['links']\n if len(links) == 1:\n doc += \"\\nOnline example:\\n\\n \" + \"\\n \".join(links) + \"\\n\"\n elif len(links) > 1:\n doc += \"\\nOnline examples:\\n\\n \" + \"\\n \".join(links) + \"\\n\"\n # Add parents keys to doc\n parent_keys = info['parent_keys']\n if len(parent_keys) == 1:\n doc += \"\\nParent key:\\n\\n \" + \"\\n \".join(parent_keys) + \"\\n\"\n elif len(parent_keys) > 1:\n doc += \"\\nParent keys:\\n\\n \" + \"\\n \".join(parent_keys) + \"\\n\"\n # Add method list to doc\n doc += \"\\nQuick method reference:\\n\\n\"\n doc += \"\\t{0}.\".format(name) + \"\\n\\t{0}.\".format(name).join(\n [\"update(changes)\", \"strip_style()\", \"get_data()\",\n \"to_graph_objs()\", \"validate()\", \"to_string()\",\n \"force_clean()\"]) + \"\\n\\n\"\n return doc.expandtabs(TAB_SIZE)\n\n\n# Doc make function for dict-like objects\ndef make_dict_doc(name):\n # get info for this graph obj\n info = INFO[NAME_TO_KEY[name]]\n # add docstring to doc\n doc = info['docstring']\n doc = \"\\t\" + \"\\n\\t\".join(textwrap.wrap(doc, width=LINE_SIZE)) + \"\\n\"\n # Add examples to doc\n examples = info['examples']\n if len(examples):\n doc += \"\\nExample:\\n\\n >>> \" + \"\\n >>> \".join(examples) + \"\\n\"\n # Add links to online examples to doc\n links = info['links']\n if len(links) == 1:\n doc += \"\\nOnline example:\\n\\n \" + \"\\n \".join(links) + \"\\n\"\n elif len(links) > 1:\n doc += 
\"\\nOnline examples:\\n\\n \" + \"\\n \".join(links) + \"\\n\"\n # Add parents keys to doc\n parent_keys = info['parent_keys']\n if len(parent_keys) == 1:\n doc += \"\\nParent key:\\n\\n \" + \"\\n \".join(parent_keys) + \"\\n\"\n elif len(parent_keys) > 1:\n doc += \"\\nParent keys:\\n\\n \" + \"\\n \".join(parent_keys) + \"\\n\"\n # Add method list to doc\n doc += \"\\nQuick method reference:\\n\\n\"\n doc += \"\\t{0}.\".format(name) + \"\\n\\t{0}.\".format(name).join(\n [\"update(changes)\", \"strip_style()\", \"get_data()\",\n \"to_graph_objs()\", \"validate()\", \"to_string()\",\n \"force_clean()\"]) + \"\\n\\n\"\n # Add key meta to doc\n keymeta = info['keymeta']\n if len(keymeta):\n doc += \"Valid keys:\\n\\n\"\n # Add each key one-by-one and format\n width1 = LINE_SIZE-TAB_SIZE\n width2 = LINE_SIZE-2*TAB_SIZE\n width3 = LINE_SIZE-3*TAB_SIZE\n undocumented = \"Aw, snap! Undocumented!\"\n for key in keymeta:\n # main portion of documentation\n try:\n required = str(keymeta[key]['required'])\n except KeyError:\n required = undocumented\n try:\n typ = str(keymeta[key]['key_type'])\n except KeyError:\n typ = undocumented\n try:\n val_types = str(keymeta[key]['val_types'])\n if typ == 'object':\n val_types = (\"{0} object | \".format(KEY_TO_NAME[key]) +\n val_types)\n except KeyError:\n val_types = undocumented\n try:\n descr = str(keymeta[key]['description'])\n except KeyError:\n descr = undocumented\n str_1 = \"{0} [required={1}] (value={2})\".format(\n key, required, val_types)\n if \"streamable\" in keymeta[key] and keymeta[key][\"streamable\"]:\n str_1 += \" (streamable)\"\n str_1 += \":\\n\"\n str_1 = \"\\t\" + \"\\n\\t\".join(textwrap.wrap(str_1,\n width=width1)) + \"\\n\"\n str_2 = \"\\t\\t\" + \"\\n\\t\\t\".join(textwrap.wrap(descr,\n width=width2)) + \"\\n\"\n doc += str_1 + str_2\n # if a user can run help on this value, tell them!\n if typ == \"object\":\n doc += \"\\n\\t\\tFor more, run `help(plotly.graph_objs.{0\" \\\n \"})`\\n\".format(KEY_TO_NAME[key])\n # if example usage exists, tell them!\n try:\n if len(keymeta[key]['examples']):\n ex = \"\\n\\t\\tExamples:\\n\" + \"\\t\\t\\t\"\n ex += \"\\n\\t\\t\\t\".join(\n textwrap.wrap(' | '.join(keymeta[key]['examples']),\n width=width3)) + \"\\n\"\n doc += ex\n except:\n pass\n doc += '\\n'\n return doc.expandtabs(TAB_SIZE)\n\n\ndef update_keys(keys):\n \"\"\"Change keys we used to support to their new equivalent.\"\"\"\n updated_keys = list()\n for key in keys:\n if key in translations:\n updated_keys += [translations[key]]\n else:\n updated_keys += [key]\n return updated_keys\n\ntranslations = dict(\n scl=\"colorscale\",\n reversescl=\"reversescale\"\n)\n\n\ndef curtail_val_repr(val, max_chars, add_delim=False):\n delim = \", \"\n end = \"..\"\n if isinstance(val, six.string_types):\n if max_chars <= len(\"'\" + end + \"'\"):\n return ' ' * max_chars\n elif add_delim and max_chars <= len(\"'\" + end + \"'\") + len(delim):\n return \"'\" + end + \"'\" + ' ' * (max_chars - len(\"'\" + end + \"'\"))\n else:\n if max_chars <= len(end):\n return ' ' * max_chars\n elif add_delim and max_chars <= len(end) + len(delim):\n return end + ' ' * (max_chars - len(end))\n if add_delim:\n max_chars -= len(delim)\n r = repr(val)\n if len(r) > max_chars:\n if isinstance(val, six.string_types):\n # TODO: can we assume this ends in \"'\"\n r = r[:max_chars - len(end + \"'\")] + end + \"'\"\n elif (isinstance(val, list) and\n max_chars >= len(\"[{end}]\".format(end=end))):\n r = r[:max_chars - len(end + ']')] + end + ']'\n else:\n r = 
r[:max_chars - len(end)] + end\n if add_delim:\n r += delim\n return r\n", "path": "plotly/graph_objs/graph_objs_tools.py"}, {"content": "from setuptools import setup\n\nexec (open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n url='https://plot.ly/api/python',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.2',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['graph_reference/*.json', 'widgets/*.js']},\n install_requires=['requests',\n 'six',\n 'pytz',\n 'ordereddict',\n 'simplejson'],\n zip_safe=False)\n", "path": "setup.py"}, {"content": "__version__ = '1.6.9'\n", "path": "plotly/version.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom plotly import utils\nimport textwrap\nimport os\nimport sys\nif sys.version[:3] == '2.6':\n try:\n from ordereddict import OrderedDict\n import simplejson as json\n except ImportError:\n raise ImportError(\n \"Looks like you're running Python 2.6. Plotly expects newer \"\n \"standard library versions of ordereddict and json. 
You can \"\n \"simply upgrade with these 'extras' with the following terminal \"\n \"command:\\npip install 'plotly[PY2.6]'\"\n )\nelse:\n from collections import OrderedDict\n import json\nimport six\n\nfrom pkg_resources import resource_string\n\n\n# Define graph reference loader\ndef _load_graph_ref():\n graph_reference_dir = 'graph_reference'\n json_files = [\n 'graph_objs_meta.json',\n 'OBJ_MAP.json',\n 'NAME_TO_KEY.json',\n 'KEY_TO_NAME.json'\n ]\n out = []\n for json_file in json_files:\n relative_path = os.path.join(graph_reference_dir, json_file)\n s = resource_string('plotly', relative_path).decode('utf-8')\n tmp = json.loads(s, object_pairs_hook=OrderedDict)\n tmp = utils.decode_unicode(tmp)\n out += [tmp]\n return tuple(out)\n\n# Load graph reference\nINFO, OBJ_MAP, NAME_TO_KEY, KEY_TO_NAME = _load_graph_ref()\n\n# Add mentions to Python-specific graph obj\n# to NAME_TO_KEY, KEY_TO_NAME, INFO\nNAME_TO_KEY['PlotlyList'] = 'plotlylist'\nNAME_TO_KEY['PlotlyDict'] = 'plotlydict'\nNAME_TO_KEY['PlotlyTrace'] = 'plotlytrace'\nNAME_TO_KEY['Trace'] = 'trace'\nKEY_TO_NAME['plotlylist'] = 'PlotlyList'\nKEY_TO_NAME['plotlydict'] = 'PlotlyDict'\nKEY_TO_NAME['plotlytrace'] = 'PlotlyTrace'\nKEY_TO_NAME['trace'] = 'Trace'\nINFO['plotlylist'] = dict(keymeta=dict())\nINFO['plotlydict'] = dict(keymeta=dict())\nINFO['plotlytrace'] = dict(keymeta=dict())\nINFO['trace'] = dict(keymeta=dict())\n\n# Define line and tab size for help text!\nLINE_SIZE = 76\nTAB_SIZE = 4\n\n\n# Doc make function for list-like objects\ndef make_list_doc(name):\n # get info for this graph obj\n info = INFO[NAME_TO_KEY[name]]\n # add docstring to doc\n doc = info['docstring']\n doc = \"\\t\" + \"\\n\\t\".join(textwrap.wrap(doc, width=LINE_SIZE)) + \"\\n\"\n # Add examples to doc\n examples = info['examples']\n if len(examples):\n doc += \"\\nExample:\\n\\n >>> \" + \"\\n >>> \".join(examples) + \"\\n\"\n # Add links to online examples to doc\n links = info['links']\n if len(links) == 1:\n doc += \"\\nOnline example:\\n\\n \" + \"\\n \".join(links) + \"\\n\"\n elif len(links) > 1:\n doc += \"\\nOnline examples:\\n\\n \" + \"\\n \".join(links) + \"\\n\"\n # Add parents keys to doc\n parent_keys = info['parent_keys']\n if len(parent_keys) == 1:\n doc += \"\\nParent key:\\n\\n \" + \"\\n \".join(parent_keys) + \"\\n\"\n elif len(parent_keys) > 1:\n doc += \"\\nParent keys:\\n\\n \" + \"\\n \".join(parent_keys) + \"\\n\"\n # Add method list to doc\n doc += \"\\nQuick method reference:\\n\\n\"\n doc += \"\\t{0}.\".format(name) + \"\\n\\t{0}.\".format(name).join(\n [\"update(changes)\", \"strip_style()\", \"get_data()\",\n \"to_graph_objs()\", \"validate()\", \"to_string()\",\n \"force_clean()\"]) + \"\\n\\n\"\n return doc.expandtabs(TAB_SIZE)\n\n\n# Doc make function for dict-like objects\ndef make_dict_doc(name):\n # get info for this graph obj\n info = INFO[NAME_TO_KEY[name]]\n # add docstring to doc\n doc = info['docstring']\n doc = \"\\t\" + \"\\n\\t\".join(textwrap.wrap(doc, width=LINE_SIZE)) + \"\\n\"\n # Add examples to doc\n examples = info['examples']\n if len(examples):\n doc += \"\\nExample:\\n\\n >>> \" + \"\\n >>> \".join(examples) + \"\\n\"\n # Add links to online examples to doc\n links = info['links']\n if len(links) == 1:\n doc += \"\\nOnline example:\\n\\n \" + \"\\n \".join(links) + \"\\n\"\n elif len(links) > 1:\n doc += \"\\nOnline examples:\\n\\n \" + \"\\n \".join(links) + \"\\n\"\n # Add parents keys to doc\n parent_keys = info['parent_keys']\n if len(parent_keys) == 1:\n doc += \"\\nParent key:\\n\\n \" + 
\"\\n \".join(parent_keys) + \"\\n\"\n elif len(parent_keys) > 1:\n doc += \"\\nParent keys:\\n\\n \" + \"\\n \".join(parent_keys) + \"\\n\"\n # Add method list to doc\n doc += \"\\nQuick method reference:\\n\\n\"\n doc += \"\\t{0}.\".format(name) + \"\\n\\t{0}.\".format(name).join(\n [\"update(changes)\", \"strip_style()\", \"get_data()\",\n \"to_graph_objs()\", \"validate()\", \"to_string()\",\n \"force_clean()\"]) + \"\\n\\n\"\n # Add key meta to doc\n keymeta = info['keymeta']\n if len(keymeta):\n doc += \"Valid keys:\\n\\n\"\n # Add each key one-by-one and format\n width1 = LINE_SIZE-TAB_SIZE\n width2 = LINE_SIZE-2*TAB_SIZE\n width3 = LINE_SIZE-3*TAB_SIZE\n undocumented = \"Aw, snap! Undocumented!\"\n for key in keymeta:\n # main portion of documentation\n try:\n required = str(keymeta[key]['required'])\n except KeyError:\n required = undocumented\n try:\n typ = str(keymeta[key]['key_type'])\n except KeyError:\n typ = undocumented\n try:\n val_types = str(keymeta[key]['val_types'])\n if typ == 'object':\n val_types = (\"{0} object | \".format(KEY_TO_NAME[key]) +\n val_types)\n except KeyError:\n val_types = undocumented\n try:\n descr = str(keymeta[key]['description'])\n except KeyError:\n descr = undocumented\n str_1 = \"{0} [required={1}] (value={2})\".format(\n key, required, val_types)\n if \"streamable\" in keymeta[key] and keymeta[key][\"streamable\"]:\n str_1 += \" (streamable)\"\n str_1 += \":\\n\"\n str_1 = \"\\t\" + \"\\n\\t\".join(textwrap.wrap(str_1,\n width=width1)) + \"\\n\"\n str_2 = \"\\t\\t\" + \"\\n\\t\\t\".join(textwrap.wrap(descr,\n width=width2)) + \"\\n\"\n doc += str_1 + str_2\n # if a user can run help on this value, tell them!\n if typ == \"object\":\n doc += \"\\n\\t\\tFor more, run `help(plotly.graph_objs.{0\" \\\n \"})`\\n\".format(KEY_TO_NAME[key])\n # if example usage exists, tell them!\n try:\n if len(keymeta[key]['examples']):\n ex = \"\\n\\t\\tExamples:\\n\" + \"\\t\\t\\t\"\n ex += \"\\n\\t\\t\\t\".join(\n textwrap.wrap(' | '.join(keymeta[key]['examples']),\n width=width3)) + \"\\n\"\n doc += ex\n except:\n pass\n doc += '\\n'\n return doc.expandtabs(TAB_SIZE)\n\n\ndef update_keys(keys):\n \"\"\"Change keys we used to support to their new equivalent.\"\"\"\n updated_keys = list()\n for key in keys:\n if key in translations:\n updated_keys += [translations[key]]\n else:\n updated_keys += [key]\n return updated_keys\n\ntranslations = dict(\n scl=\"colorscale\",\n reversescl=\"reversescale\"\n)\n\n\ndef curtail_val_repr(val, max_chars, add_delim=False):\n delim = \", \"\n end = \"..\"\n if isinstance(val, six.string_types):\n if max_chars <= len(\"'\" + end + \"'\"):\n return ' ' * max_chars\n elif add_delim and max_chars <= len(\"'\" + end + \"'\") + len(delim):\n return \"'\" + end + \"'\" + ' ' * (max_chars - len(\"'\" + end + \"'\"))\n else:\n if max_chars <= len(end):\n return ' ' * max_chars\n elif add_delim and max_chars <= len(end) + len(delim):\n return end + ' ' * (max_chars - len(end))\n if add_delim:\n max_chars -= len(delim)\n r = repr(val)\n if len(r) > max_chars:\n if isinstance(val, six.string_types):\n # TODO: can we assume this ends in \"'\"\n r = r[:max_chars - len(end + \"'\")] + end + \"'\"\n elif (isinstance(val, list) and\n max_chars >= len(\"[{end}]\".format(end=end))):\n r = r[:max_chars - len(end + ']')] + end + ']'\n else:\n r = r[:max_chars - len(end)] + end\n if add_delim:\n r += delim\n return r\n", "path": "plotly/graph_objs/graph_objs_tools.py"}, {"content": "from setuptools import setup\n\nexec 
(open('plotly/version.py').read())\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\n\nsetup(name='plotly',\n version=__version__,\n use_2to3=False,\n author='Chris P',\n author_email='[email protected]',\n maintainer='Chris P',\n maintainer_email='[email protected]',\n url='https://plot.ly/api/python',\n description=\"Python plotting library for collaborative, \"\n \"interactive, publication-quality graphs.\",\n long_description=readme(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.2',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n license='MIT',\n packages=['plotly',\n 'plotly/plotly',\n 'plotly/plotly/chunked_requests',\n 'plotly/graph_objs',\n 'plotly/grid_objs',\n 'plotly/widgets',\n 'plotly/matplotlylib',\n 'plotly/matplotlylib/mplexporter',\n 'plotly/matplotlylib/mplexporter/renderers'],\n package_data={'plotly': ['graph_reference/*.json', 'widgets/*.js']},\n install_requires=['requests', 'six', 'pytz'],\n extras_require={\"PY2.6\": ['simplejson', 'ordereddict']},\n zip_safe=False)\n", "path": "setup.py"}, {"content": "__version__ = '1.6.10'\n", "path": "plotly/version.py"}]}
| 4,002 | 412 |
gh_patches_debug_31169
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-3264
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pyramid.scripting.get_root() triggers AttributeError
In Pyramid 1.9.1, get_root() raises an AttributeError because the Router object has no threadlocal_manager attribute:
```
# bin/pshell mything/development.ini
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help" for more information.
Environment:
app The WSGI application.
registry Active Pyramid registry.
request Active request object.
root Root of the default resource tree.
root_factory Default root factory used to create `root`.
>>> from pyramid.scripting import get_root
>>> x = get_root(app)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/shane/src/pyramidtest/eggs/pyramid-1.9.1-py3.5.egg/pyramid/scripting.py", line 30, in get_root
app.threadlocal_manager.push(threadlocals)
AttributeError: 'Router' object has no attribute 'threadlocal_manager'
>>>
```
--- END ISSUE ---
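For scripts that hit this before a fix is in place, a minimal workaround sketch (not the project's official change) is to go through `pyramid.scripting.prepare()`, which manages the threadlocal stack itself instead of reaching for `app.threadlocal_manager`; `app` below is assumed to be a Pyramid router exposing a `.registry` attribute:

```python
# Sketch only: sidestep the missing Router.threadlocal_manager by letting
# prepare() push and pop the threadlocals itself. `app` is assumed to be a
# Pyramid router instance with a `.registry` attribute.
from pyramid.scripting import prepare

def get_root_workaround(app):
    env = prepare(registry=app.registry)  # pushes request/registry threadlocals
    return env['root'], env['closer']     # call the closer when done with the root
```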
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/scripting.py`
Content:
```
1 from pyramid.config import global_registries
2 from pyramid.exceptions import ConfigurationError
3
4 from pyramid.interfaces import (
5 IRequestFactory,
6 IRootFactory,
7 )
8 from pyramid.request import Request
9 from pyramid.request import apply_request_extensions
10
11 from pyramid.threadlocal import manager as threadlocal_manager
12 from pyramid.traversal import DefaultRootFactory
13
14 def get_root(app, request=None):
15 """ Return a tuple composed of ``(root, closer)`` when provided a
16 :term:`router` instance as the ``app`` argument. The ``root``
17 returned is the application root object. The ``closer`` returned
18 is a callable (accepting no arguments) that should be called when
19 your scripting application is finished using the root.
20
21 ``request`` is passed to the :app:`Pyramid` application root
22 factory to compute the root. If ``request`` is None, a default
23 will be constructed using the registry's :term:`Request Factory`
24 via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.
25 """
26 registry = app.registry
27 if request is None:
28 request = _make_request('/', registry)
29 threadlocals = {'registry':registry, 'request':request}
30 app.threadlocal_manager.push(threadlocals)
31 def closer(request=request): # keep request alive via this function default
32 app.threadlocal_manager.pop()
33 root = app.root_factory(request)
34 return root, closer
35
36 def prepare(request=None, registry=None):
37 """ This function pushes data onto the Pyramid threadlocal stack
38 (request and registry), making those objects 'current'. It
39 returns a dictionary useful for bootstrapping a Pyramid
40 application in a scripting environment.
41
42 ``request`` is passed to the :app:`Pyramid` application root
43 factory to compute the root. If ``request`` is None, a default
44 will be constructed using the registry's :term:`Request Factory`
45 via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.
46
47 If ``registry`` is not supplied, the last registry loaded from
48 :attr:`pyramid.config.global_registries` will be used. If you
49 have loaded more than one :app:`Pyramid` application in the
50 current process, you may not want to use the last registry
51 loaded, thus you can search the ``global_registries`` and supply
52 the appropriate one based on your own criteria.
53
54 The function returns a dictionary composed of ``root``,
55 ``closer``, ``registry``, ``request`` and ``root_factory``. The
56 ``root`` returned is the application's root resource object. The
57 ``closer`` returned is a callable (accepting no arguments) that
58 should be called when your scripting application is finished
59 using the root. ``registry`` is the resolved registry object.
60 ``request`` is the request object passed or the constructed request
61 if no request is passed. ``root_factory`` is the root factory used
62 to construct the root.
63
64 This function may be used as a context manager to call the ``closer``
65 automatically:
66
67 .. code-block:: python
68
69 registry = config.registry
70 with prepare(registry) as env:
71 request = env['request']
72 # ...
73
74 .. versionchanged:: 1.8
75
76 Added the ability to use the return value as a context manager.
77
78 """
79 if registry is None:
80 registry = getattr(request, 'registry', global_registries.last)
81 if registry is None:
82 raise ConfigurationError('No valid Pyramid applications could be '
83 'found, make sure one has been created '
84 'before trying to activate it.')
85 if request is None:
86 request = _make_request('/', registry)
87 # NB: even though _make_request might have already set registry on
88 # request, we reset it in case someone has passed in their own
89 # request.
90 request.registry = registry
91 threadlocals = {'registry':registry, 'request':request}
92 threadlocal_manager.push(threadlocals)
93 apply_request_extensions(request)
94 def closer():
95 threadlocal_manager.pop()
96 root_factory = registry.queryUtility(IRootFactory,
97 default=DefaultRootFactory)
98 root = root_factory(request)
99 if getattr(request, 'context', None) is None:
100 request.context = root
101 return AppEnvironment(
102 root=root,
103 closer=closer,
104 registry=registry,
105 request=request,
106 root_factory=root_factory,
107 )
108
109 class AppEnvironment(dict):
110 def __enter__(self):
111 return self
112
113 def __exit__(self, type, value, traceback):
114 self['closer']()
115
116 def _make_request(path, registry=None):
117 """ Return a :meth:`pyramid.request.Request` object anchored at a
118 given path. The object returned will be generated from the supplied
119 registry's :term:`Request Factory` using the
120 :meth:`pyramid.interfaces.IRequestFactory.blank` method.
121
122 This request object can be passed to :meth:`pyramid.scripting.get_root`
123 or :meth:`pyramid.scripting.prepare` to initialize an application in
124 preparation for executing a script with a proper environment setup.
125 URLs can then be generated with the object, as well as rendering
126 templates.
127
128 If ``registry`` is not supplied, the last registry loaded from
129 :attr:`pyramid.config.global_registries` will be used. If you have
130 loaded more than one :app:`Pyramid` application in the current
131 process, you may not want to use the last registry loaded, thus
132 you can search the ``global_registries`` and supply the appropriate
133 one based on your own criteria.
134 """
135 if registry is None:
136 registry = global_registries.last
137 request_factory = registry.queryUtility(IRequestFactory, default=Request)
138 request = request_factory.blank(path)
139 request.registry = registry
140 return request
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyramid/scripting.py b/pyramid/scripting.py
--- a/pyramid/scripting.py
+++ b/pyramid/scripting.py
@@ -8,7 +8,7 @@
from pyramid.request import Request
from pyramid.request import apply_request_extensions
-from pyramid.threadlocal import manager as threadlocal_manager
+from pyramid.threadlocal import RequestContext
from pyramid.traversal import DefaultRootFactory
def get_root(app, request=None):
@@ -26,10 +26,11 @@
registry = app.registry
if request is None:
request = _make_request('/', registry)
- threadlocals = {'registry':registry, 'request':request}
- app.threadlocal_manager.push(threadlocals)
- def closer(request=request): # keep request alive via this function default
- app.threadlocal_manager.pop()
+ request.registry = registry
+ ctx = RequestContext(request)
+ ctx.begin()
+ def closer():
+ ctx.end()
root = app.root_factory(request)
return root, closer
@@ -87,12 +88,12 @@
# NB: even though _make_request might have already set registry on
# request, we reset it in case someone has passed in their own
# request.
- request.registry = registry
- threadlocals = {'registry':registry, 'request':request}
- threadlocal_manager.push(threadlocals)
+ request.registry = registry
+ ctx = RequestContext(request)
+ ctx.begin()
apply_request_extensions(request)
def closer():
- threadlocal_manager.pop()
+ ctx.end()
root_factory = registry.queryUtility(IRootFactory,
default=DefaultRootFactory)
root = root_factory(request)
|
{"golden_diff": "diff --git a/pyramid/scripting.py b/pyramid/scripting.py\n--- a/pyramid/scripting.py\n+++ b/pyramid/scripting.py\n@@ -8,7 +8,7 @@\n from pyramid.request import Request\n from pyramid.request import apply_request_extensions\n \n-from pyramid.threadlocal import manager as threadlocal_manager\n+from pyramid.threadlocal import RequestContext\n from pyramid.traversal import DefaultRootFactory\n \n def get_root(app, request=None):\n@@ -26,10 +26,11 @@\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n- threadlocals = {'registry':registry, 'request':request}\n- app.threadlocal_manager.push(threadlocals)\n- def closer(request=request): # keep request alive via this function default\n- app.threadlocal_manager.pop()\n+ request.registry = registry\n+ ctx = RequestContext(request)\n+ ctx.begin()\n+ def closer():\n+ ctx.end()\n root = app.root_factory(request)\n return root, closer\n \n@@ -87,12 +88,12 @@\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n- request.registry = registry \n- threadlocals = {'registry':registry, 'request':request}\n- threadlocal_manager.push(threadlocals)\n+ request.registry = registry\n+ ctx = RequestContext(request)\n+ ctx.begin()\n apply_request_extensions(request)\n def closer():\n- threadlocal_manager.pop()\n+ ctx.end()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n", "issue": "pyramid.scripting.get_root() triggers AttributeError \nIn Pyramid 1.9.1, get_root() triggers an AttributeError on threadlocal_manager:\r\n\r\n```\r\n# bin/pshell mything/development.ini \r\nPython 3.5.2 (default, Nov 23 2017, 16:37:01) \r\n[GCC 5.4.0 20160609] on linux\r\nType \"help\" for more information.\r\n\r\nEnvironment:\r\n app The WSGI application.\r\n registry Active Pyramid registry.\r\n request Active request object.\r\n root Root of the default resource tree.\r\n root_factory Default root factory used to create `root`.\r\n\r\n>>> from pyramid.scripting import get_root\r\n>>> x = get_root(app)\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/home/shane/src/pyramidtest/eggs/pyramid-1.9.1-py3.5.egg/pyramid/scripting.py\", line 30, in get_root\r\n app.threadlocal_manager.push(threadlocals)\r\nAttributeError: 'Router' object has no attribute 'threadlocal_manager'\r\n>>>\r\n```\n", "before_files": [{"content": "from pyramid.config import global_registries\nfrom pyramid.exceptions import ConfigurationError\n\nfrom pyramid.interfaces import (\n IRequestFactory,\n IRootFactory,\n )\nfrom pyramid.request import Request\nfrom pyramid.request import apply_request_extensions\n\nfrom pyramid.threadlocal import manager as threadlocal_manager\nfrom pyramid.traversal import DefaultRootFactory\n\ndef get_root(app, request=None):\n \"\"\" Return a tuple composed of ``(root, closer)`` when provided a\n :term:`router` instance as the ``app`` argument. The ``root``\n returned is the application root object. The ``closer`` returned\n is a callable (accepting no arguments) that should be called when\n your scripting application is finished using the root.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. 
If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n \"\"\"\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n threadlocals = {'registry':registry, 'request':request}\n app.threadlocal_manager.push(threadlocals)\n def closer(request=request): # keep request alive via this function default\n app.threadlocal_manager.pop()\n root = app.root_factory(request)\n return root, closer\n\ndef prepare(request=None, registry=None):\n \"\"\" This function pushes data onto the Pyramid threadlocal stack\n (request and registry), making those objects 'current'. It\n returns a dictionary useful for bootstrapping a Pyramid\n application in a scripting environment.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you\n have loaded more than one :app:`Pyramid` application in the\n current process, you may not want to use the last registry\n loaded, thus you can search the ``global_registries`` and supply\n the appropriate one based on your own criteria.\n\n The function returns a dictionary composed of ``root``,\n ``closer``, ``registry``, ``request`` and ``root_factory``. The\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n using the root. ``registry`` is the resolved registry object.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n\n This function may be used as a context manager to call the ``closer``\n automatically:\n\n .. code-block:: python\n\n registry = config.registry\n with prepare(registry) as env:\n request = env['request']\n # ...\n\n .. 
versionchanged:: 1.8\n\n Added the ability to use the return value as a context manager.\n\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n if registry is None:\n raise ConfigurationError('No valid Pyramid applications could be '\n 'found, make sure one has been created '\n 'before trying to activate it.')\n if request is None:\n request = _make_request('/', registry)\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n request.registry = registry \n threadlocals = {'registry':registry, 'request':request}\n threadlocal_manager.push(threadlocals)\n apply_request_extensions(request)\n def closer():\n threadlocal_manager.pop()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n return AppEnvironment(\n root=root,\n closer=closer,\n registry=registry,\n request=request,\n root_factory=root_factory,\n )\n\nclass AppEnvironment(dict):\n def __enter__(self):\n return self\n\n def __exit__(self, type, value, traceback):\n self['closer']()\n\ndef _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n given path. The object returned will be generated from the supplied\n registry's :term:`Request Factory` using the\n :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n This request object can be passed to :meth:`pyramid.scripting.get_root`\n or :meth:`pyramid.scripting.prepare` to initialize an application in\n preparation for executing a script with a proper environment setup.\n URLs can then be generated with the object, as well as rendering\n templates.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you have\n loaded more than one :app:`Pyramid` application in the current\n process, you may not want to use the last registry loaded, thus\n you can search the ``global_registries`` and supply the appropriate\n one based on your own criteria.\n \"\"\"\n if registry is None:\n registry = global_registries.last\n request_factory = registry.queryUtility(IRequestFactory, default=Request)\n request = request_factory.blank(path)\n request.registry = registry\n return request\n", "path": "pyramid/scripting.py"}], "after_files": [{"content": "from pyramid.config import global_registries\nfrom pyramid.exceptions import ConfigurationError\n\nfrom pyramid.interfaces import (\n IRequestFactory,\n IRootFactory,\n )\nfrom pyramid.request import Request\nfrom pyramid.request import apply_request_extensions\n\nfrom pyramid.threadlocal import RequestContext\nfrom pyramid.traversal import DefaultRootFactory\n\ndef get_root(app, request=None):\n \"\"\" Return a tuple composed of ``(root, closer)`` when provided a\n :term:`router` instance as the ``app`` argument. The ``root``\n returned is the application root object. The ``closer`` returned\n is a callable (accepting no arguments) that should be called when\n your scripting application is finished using the root.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. 
If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n \"\"\"\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n request.registry = registry\n ctx = RequestContext(request)\n ctx.begin()\n def closer():\n ctx.end()\n root = app.root_factory(request)\n return root, closer\n\ndef prepare(request=None, registry=None):\n \"\"\" This function pushes data onto the Pyramid threadlocal stack\n (request and registry), making those objects 'current'. It\n returns a dictionary useful for bootstrapping a Pyramid\n application in a scripting environment.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you\n have loaded more than one :app:`Pyramid` application in the\n current process, you may not want to use the last registry\n loaded, thus you can search the ``global_registries`` and supply\n the appropriate one based on your own criteria.\n\n The function returns a dictionary composed of ``root``,\n ``closer``, ``registry``, ``request`` and ``root_factory``. The\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n using the root. ``registry`` is the resolved registry object.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n\n This function may be used as a context manager to call the ``closer``\n automatically:\n\n .. code-block:: python\n\n registry = config.registry\n with prepare(registry) as env:\n request = env['request']\n # ...\n\n .. versionchanged:: 1.8\n\n Added the ability to use the return value as a context manager.\n\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n if registry is None:\n raise ConfigurationError('No valid Pyramid applications could be '\n 'found, make sure one has been created '\n 'before trying to activate it.')\n if request is None:\n request = _make_request('/', registry)\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n request.registry = registry\n ctx = RequestContext(request)\n ctx.begin()\n apply_request_extensions(request)\n def closer():\n ctx.end()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n return AppEnvironment(\n root=root,\n closer=closer,\n registry=registry,\n request=request,\n root_factory=root_factory,\n )\n\nclass AppEnvironment(dict):\n def __enter__(self):\n return self\n\n def __exit__(self, type, value, traceback):\n self['closer']()\n\ndef _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n given path. 
The object returned will be generated from the supplied\n registry's :term:`Request Factory` using the\n :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n This request object can be passed to :meth:`pyramid.scripting.get_root`\n or :meth:`pyramid.scripting.prepare` to initialize an application in\n preparation for executing a script with a proper environment setup.\n URLs can then be generated with the object, as well as rendering\n templates.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you have\n loaded more than one :app:`Pyramid` application in the current\n process, you may not want to use the last registry loaded, thus\n you can search the ``global_registries`` and supply the appropriate\n one based on your own criteria.\n \"\"\"\n if registry is None:\n registry = global_registries.last\n request_factory = registry.queryUtility(IRequestFactory, default=Request)\n request = request_factory.blank(path)\n request.registry = registry\n return request\n", "path": "pyramid/scripting.py"}]}
| 2,102 | 366 |
gh_patches_debug_59208
|
rasdani/github-patches
|
git_diff
|
Nitrate__Nitrate-1096
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Social Auth Provider requires URLs
According to the [python-social-auth documentation](https://python-social-auth.readthedocs.io/en/latest/configuration/django.html#urls-entries), the url patterns have to be extended to support social auth properly.
```python
urlpatterns = patterns('',
...
url('', include('social_django.urls', namespace='social'))
...
)
```
Hence Nitrate's `urls.py` should be extended like this:
```python
if "SOCIAL" in settings.ENABLED_AUTH_BACKENDS:
urlpatterns += [ path("", include('social_django.urls', namespace='social')) ]
```
Otherwise, the `social` namespace cannot be resolved when rendering the login page (refer to [this](https://github.com/Nitrate/Nitrate/blob/b89b37309f00469428e0372166d8c4c804bd9d0d/src/templates/registration/login.html#L53) for details).
Would you accept a pull request?
--- END ISSUE ---
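For concreteness, a self-contained sketch of how the conditional include could sit in Nitrate's `urls.py`; `ENABLED_AUTH_BACKENDS` is taken from the setting referenced above, and the existing patterns are elided:

```python
# Sketch: register python-social-auth's URLs only when the social backend is enabled.
from django.conf import settings
from django.urls import include, path

urlpatterns = [
    # ... existing Nitrate URL patterns ...
]

if "SOCIAL" in settings.ENABLED_AUTH_BACKENDS:
    urlpatterns += [path("", include("social_django.urls", namespace="social"))]
```

This keeps the `social` namespace resolvable for the login template without registering any extra URLs when social auth is disabled.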
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/tcms/urls.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from django.conf import settings
4 from django.contrib import admin
5 from django.urls import include, path
6 from django.views.i18n import JavaScriptCatalog
7
8 # XML RPC handler
9 from kobo.django.xmlrpc.views import XMLRPCHandlerFactory
10
11 from tcms.core import ajax as tcms_core_ajax
12 from tcms.testruns import views as testruns_views
13
14 xmlrpc_handler = XMLRPCHandlerFactory("TCMS_XML_RPC")
15
16 urlpatterns = [
17 path("admin/", admin.site.urls),
18 path("admin/doc/", include("django.contrib.admindocs.urls")),
19 path("", include("tcms.core.urls")),
20 path("", include("tcms.management.urls")),
21 # Testplans zone
22 path("plan/", include("tcms.testplans.urls.plan_urls")),
23 path("plans/", include("tcms.testplans.urls.plans_urls")),
24 # Testcases zone
25 path("case/", include("tcms.testcases.urls.case_urls")),
26 path("cases/", include("tcms.testcases.urls.cases_urls")),
27 # Testruns zone
28 path("run/", include("tcms.testruns.urls.run_urls")),
29 path("runs/", include("tcms.testruns.urls.runs_urls")),
30 path("accounts/", include("tcms.profiles.urls")),
31 path("linkref/", include("tcms.linkreference.urls")),
32 path("comments/", include("tcms.comments.urls")),
33 path("advance-search/", include("tcms.search.urls")),
34 path("report/", include("tcms.report.urls")),
35 path("xmlrpc/", xmlrpc_handler),
36 path("tinymce/", include("tinymce.urls")),
37 # Using admin js without admin permission
38 # refer: https://docs.djangoproject.com/en/1.6/topics/i18n/translation/#module-django.views.i18n
39 path("jsi18n/", JavaScriptCatalog.as_view(), name="javascript-catalog"),
40 ]
41
42 # Debug zone
43
44 if settings.DEBUG:
45 import debug_toolbar
46
47 urlpatterns += [
48 path("__debug__/", include(debug_toolbar.urls)),
49 ]
50
51 # Overwrite default 500 handler
52 # More details could see django.core.urlresolvers._resolve_special()
53 handler500 = "tcms.core.views.error.server_error"
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/tcms/urls.py b/src/tcms/urls.py
--- a/src/tcms/urls.py
+++ b/src/tcms/urls.py
@@ -39,6 +39,10 @@
path("jsi18n/", JavaScriptCatalog.as_view(), name="javascript-catalog"),
]
+# Python Social Core / Django Social Auth
+if "SOCIAL" in settings.ENABLED_AUTH_BACKENDS:
+ urlpatterns += [path("", include("social_django.urls", namespace="social"))]
+
# Debug zone
if settings.DEBUG:
|
{"golden_diff": "diff --git a/src/tcms/urls.py b/src/tcms/urls.py\n--- a/src/tcms/urls.py\n+++ b/src/tcms/urls.py\n@@ -39,6 +39,10 @@\n path(\"jsi18n/\", JavaScriptCatalog.as_view(), name=\"javascript-catalog\"),\n ]\n \n+# Python Social Core / Django Social Auth\n+if \"SOCIAL\" in settings.ENABLED_AUTH_BACKENDS:\n+ urlpatterns += [path(\"\", include(\"social_django.urls\", namespace=\"social\"))]\n+\n # Debug zone\n \n if settings.DEBUG:\n", "issue": "Social Auth Provider requires URLS\nAccording to the [python-social-auth documentation](https://python-social-auth.readthedocs.io/en/latest/configuration/django.html#urls-entries), the url patterns have to be extended to support social auth properly.\r\n\r\n```python\r\nurlpatterns = patterns('',\r\n ...\r\n url('', include('social_django.urls', namespace='social'))\r\n ...\r\n)\r\n```\r\n\r\nHence the `urls.py` of the nitrate should be extended like this:\r\n\r\n```python\r\nif \"SOCIAL\" in settings.ENABLED_AUTH_BACKENDS:\r\n urlpatterns += [ path(\"\", include('social_django.urls', namespace='social')) ]\r\n```\r\n\r\nOtherwise, the namespace social cannot be resolved when compiling the login page (refer to [this](https://github.com/Nitrate/Nitrate/blob/b89b37309f00469428e0372166d8c4c804bd9d0d/src/templates/registration/login.html#L53) for details).\r\n\r\nWould you accept a pull request?\nSocial Auth Provider requires URLS\nAccording to the [python-social-auth documentation](https://python-social-auth.readthedocs.io/en/latest/configuration/django.html#urls-entries), the url patterns have to be extended to support social auth properly.\r\n\r\n```python\r\nurlpatterns = patterns('',\r\n ...\r\n url('', include('social_django.urls', namespace='social'))\r\n ...\r\n)\r\n```\r\n\r\nHence the `urls.py` of the nitrate should be extended like this:\r\n\r\n```python\r\nif \"SOCIAL\" in settings.ENABLED_AUTH_BACKENDS:\r\n urlpatterns += [ path(\"\", include('social_django.urls', namespace='social')) ]\r\n```\r\n\r\nOtherwise, the namespace social cannot be resolved when compiling the login page (refer to [this](https://github.com/Nitrate/Nitrate/blob/b89b37309f00469428e0372166d8c4c804bd9d0d/src/templates/registration/login.html#L53) for details).\r\n\r\nWould you accept a pull request?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.urls import include, path\nfrom django.views.i18n import JavaScriptCatalog\n\n# XML RPC handler\nfrom kobo.django.xmlrpc.views import XMLRPCHandlerFactory\n\nfrom tcms.core import ajax as tcms_core_ajax\nfrom tcms.testruns import views as testruns_views\n\nxmlrpc_handler = XMLRPCHandlerFactory(\"TCMS_XML_RPC\")\n\nurlpatterns = [\n path(\"admin/\", admin.site.urls),\n path(\"admin/doc/\", include(\"django.contrib.admindocs.urls\")),\n path(\"\", include(\"tcms.core.urls\")),\n path(\"\", include(\"tcms.management.urls\")),\n # Testplans zone\n path(\"plan/\", include(\"tcms.testplans.urls.plan_urls\")),\n path(\"plans/\", include(\"tcms.testplans.urls.plans_urls\")),\n # Testcases zone\n path(\"case/\", include(\"tcms.testcases.urls.case_urls\")),\n path(\"cases/\", include(\"tcms.testcases.urls.cases_urls\")),\n # Testruns zone\n path(\"run/\", include(\"tcms.testruns.urls.run_urls\")),\n path(\"runs/\", include(\"tcms.testruns.urls.runs_urls\")),\n path(\"accounts/\", include(\"tcms.profiles.urls\")),\n path(\"linkref/\", include(\"tcms.linkreference.urls\")),\n path(\"comments/\", include(\"tcms.comments.urls\")),\n 
path(\"advance-search/\", include(\"tcms.search.urls\")),\n path(\"report/\", include(\"tcms.report.urls\")),\n path(\"xmlrpc/\", xmlrpc_handler),\n path(\"tinymce/\", include(\"tinymce.urls\")),\n # Using admin js without admin permission\n # refer: https://docs.djangoproject.com/en/1.6/topics/i18n/translation/#module-django.views.i18n\n path(\"jsi18n/\", JavaScriptCatalog.as_view(), name=\"javascript-catalog\"),\n]\n\n# Debug zone\n\nif settings.DEBUG:\n import debug_toolbar\n\n urlpatterns += [\n path(\"__debug__/\", include(debug_toolbar.urls)),\n ]\n\n# Overwrite default 500 handler\n# More details could see django.core.urlresolvers._resolve_special()\nhandler500 = \"tcms.core.views.error.server_error\"\n", "path": "src/tcms/urls.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.urls import include, path\nfrom django.views.i18n import JavaScriptCatalog\n\n# XML RPC handler\nfrom kobo.django.xmlrpc.views import XMLRPCHandlerFactory\n\nfrom tcms.core import ajax as tcms_core_ajax\nfrom tcms.testruns import views as testruns_views\n\nxmlrpc_handler = XMLRPCHandlerFactory(\"TCMS_XML_RPC\")\n\nurlpatterns = [\n path(\"admin/\", admin.site.urls),\n path(\"admin/doc/\", include(\"django.contrib.admindocs.urls\")),\n path(\"\", include(\"tcms.core.urls\")),\n path(\"\", include(\"tcms.management.urls\")),\n # Testplans zone\n path(\"plan/\", include(\"tcms.testplans.urls.plan_urls\")),\n path(\"plans/\", include(\"tcms.testplans.urls.plans_urls\")),\n # Testcases zone\n path(\"case/\", include(\"tcms.testcases.urls.case_urls\")),\n path(\"cases/\", include(\"tcms.testcases.urls.cases_urls\")),\n # Testruns zone\n path(\"run/\", include(\"tcms.testruns.urls.run_urls\")),\n path(\"runs/\", include(\"tcms.testruns.urls.runs_urls\")),\n path(\"accounts/\", include(\"tcms.profiles.urls\")),\n path(\"linkref/\", include(\"tcms.linkreference.urls\")),\n path(\"comments/\", include(\"tcms.comments.urls\")),\n path(\"advance-search/\", include(\"tcms.search.urls\")),\n path(\"report/\", include(\"tcms.report.urls\")),\n path(\"xmlrpc/\", xmlrpc_handler),\n path(\"tinymce/\", include(\"tinymce.urls\")),\n # Using admin js without admin permission\n # refer: https://docs.djangoproject.com/en/1.6/topics/i18n/translation/#module-django.views.i18n\n path(\"jsi18n/\", JavaScriptCatalog.as_view(), name=\"javascript-catalog\"),\n]\n\n# Python Social Core / Django Social Auth\nif \"SOCIAL\" in settings.ENABLED_AUTH_BACKENDS:\n urlpatterns += [path(\"\", include(\"social_django.urls\", namespace=\"social\"))]\n\n# Debug zone\n\nif settings.DEBUG:\n import debug_toolbar\n\n urlpatterns += [\n path(\"__debug__/\", include(debug_toolbar.urls)),\n ]\n\n# Overwrite default 500 handler\n# More details could see django.core.urlresolvers._resolve_special()\nhandler500 = \"tcms.core.views.error.server_error\"\n", "path": "src/tcms/urls.py"}]}
| 1,285 | 123 |
gh_patches_debug_33316 | rasdani/github-patches | git_diff | Parsl__parsl-2877 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing `ignore_for_cache` docstring in `python_app` and `join_app`
https://github.com/Parsl/parsl/blob/40b969f7a9e16b3fcab2db127753bf85cbcb5375/parsl/app/app.py#L72-L93
The `ignore_for_cache` docstring is missing. Same thing for the `join_app` by the way. Super minor report.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/app/app.py`
Content:
```
1 """Definitions for the @App decorator and the App classes.
2
3 The App class encapsulates a generic leaf task that can be executed asynchronously.
4 """
5 import logging
6 import typeguard
7 from abc import ABCMeta, abstractmethod
8 from inspect import signature
9 from typing import List, Optional, Union
10 from typing_extensions import Literal
11
12 from parsl.dataflow.dflow import DataFlowKernel
13
14 logger = logging.getLogger(__name__)
15
16
17 class AppBase(metaclass=ABCMeta):
18 """This is the base class that defines the two external facing functions that an App must define.
19
20 The __init__ () which is called when the interpreter sees the definition of the decorated
21 function, and the __call__ () which is invoked when a decorated function is called by the user.
22
23 """
24
25 def __init__(self, func, data_flow_kernel=None, executors='all', cache=False, ignore_for_cache=None):
26 """Construct the App object.
27
28 Args:
29 - func (function): Takes the function to be made into an App
30
31 Kwargs:
32 - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for
33 managing this app. This can be omitted only
34 after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.
35 - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.
36 - cache (Bool) : Enable caching of this app ?
37
38 Returns:
39 - App object.
40
41 """
42 self.__name__ = func.__name__
43 self.func = func
44 self.data_flow_kernel = data_flow_kernel
45 self.executors = executors
46 self.cache = cache
47 self.ignore_for_cache = ignore_for_cache
48 if not (isinstance(executors, list) or isinstance(executors, str)):
49 logger.error("App {} specifies invalid executor option, expects string or list".format(
50 func.__name__))
51
52 params = signature(func).parameters
53
54 self.kwargs = {}
55 if 'stdout' in params:
56 self.kwargs['stdout'] = params['stdout'].default
57 if 'stderr' in params:
58 self.kwargs['stderr'] = params['stderr'].default
59 if 'walltime' in params:
60 self.kwargs['walltime'] = params['walltime'].default
61 if 'parsl_resource_specification' in params:
62 self.kwargs['parsl_resource_specification'] = params['parsl_resource_specification'].default
63 self.outputs = params['outputs'].default if 'outputs' in params else []
64 self.inputs = params['inputs'].default if 'inputs' in params else []
65
66 @abstractmethod
67 def __call__(self, *args, **kwargs):
68 pass
69
70
71 @typeguard.typechecked
72 def python_app(function=None,
73 data_flow_kernel: Optional[DataFlowKernel] = None,
74 cache: bool = False,
75 executors: Union[List[str], Literal['all']] = 'all',
76 ignore_for_cache: Optional[List[str]] = None):
77 """Decorator function for making python apps.
78
79 Parameters
80 ----------
81 function : function
82 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,
83 for example, ``@python_app`` if using all defaults or ``@python_app(walltime=120)``. If the
84 decorator is used alone, function will be the actual function being decorated, whereas if it
85 is called with arguments, function will be None. Default is None.
86 data_flow_kernel : DataFlowKernel
87 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can
88 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.
89 executors : string or list
90 Labels of the executors that this app can execute over. Default is 'all'.
91 cache : bool
92 Enable caching of the app call. Default is False.
93 """
94 from parsl.app.python import PythonApp
95
96 def decorator(func):
97 def wrapper(f):
98 return PythonApp(f,
99 data_flow_kernel=data_flow_kernel,
100 cache=cache,
101 executors=executors,
102 ignore_for_cache=ignore_for_cache,
103 join=False)
104 return wrapper(func)
105 if function is not None:
106 return decorator(function)
107 return decorator
108
109
110 @typeguard.typechecked
111 def join_app(function=None,
112 data_flow_kernel: Optional[DataFlowKernel] = None,
113 cache: bool = False,
114 ignore_for_cache: Optional[List[str]] = None):
115 """Decorator function for making join apps
116
117 Parameters
118 ----------
119 function : function
120 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,
121 for example, ``@python_app`` if using all defaults or ``@python_app(walltime=120)``. If the
122 decorator is used alone, function will be the actual function being decorated, whereas if it
123 is called with arguments, function will be None. Default is None.
124 data_flow_kernel : DataFlowKernel
125 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can
126 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.
127 cache : bool
128 Enable caching of the app call. Default is False.
129 """
130 from parsl.app.python import PythonApp
131
132 def decorator(func):
133 def wrapper(f):
134 return PythonApp(f,
135 data_flow_kernel=data_flow_kernel,
136 cache=cache,
137 executors=["_parsl_internal"],
138 ignore_for_cache=ignore_for_cache,
139 join=True)
140 return wrapper(func)
141 if function is not None:
142 return decorator(function)
143 return decorator
144
145
146 @typeguard.typechecked
147 def bash_app(function=None,
148 data_flow_kernel: Optional[DataFlowKernel] = None,
149 cache: bool = False,
150 executors: Union[List[str], Literal['all']] = 'all',
151 ignore_for_cache: Optional[List[str]] = None):
152 """Decorator function for making bash apps.
153
154 Parameters
155 ----------
156 function : function
157 Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,
158 for example, ``@bash_app`` if using all defaults or ``@bash_app(walltime=120)``. If the
159 decorator is used alone, function will be the actual function being decorated, whereas if it
160 is called with arguments, function will be None. Default is None.
161 data_flow_kernel : DataFlowKernel
162 The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can
163 be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.
164 walltime : int
165 Walltime for app in seconds. Default is 60.
166 executors : string or list
167 Labels of the executors that this app can execute over. Default is 'all'.
168 cache : bool
169 Enable caching of the app call. Default is False.
170 """
171 from parsl.app.bash import BashApp
172
173 def decorator(func):
174 def wrapper(f):
175 return BashApp(f,
176 data_flow_kernel=data_flow_kernel,
177 cache=cache,
178 executors=executors,
179 ignore_for_cache=ignore_for_cache)
180 return wrapper(func)
181 if function is not None:
182 return decorator(function)
183 return decorator
184
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsl/app/app.py b/parsl/app/app.py
--- a/parsl/app/app.py
+++ b/parsl/app/app.py
@@ -34,6 +34,7 @@
after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.
- executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.
- cache (Bool) : Enable caching of this app ?
+ - ignore_for_cache (list|None): Names of arguments which will be ignored by the caching mechanism.
Returns:
- App object.
@@ -90,6 +91,8 @@
Labels of the executors that this app can execute over. Default is 'all'.
cache : bool
Enable caching of the app call. Default is False.
+ ignore_for_cache : (list|None)
+ Names of arguments which will be ignored by the caching mechanism.
"""
from parsl.app.python import PythonApp
@@ -126,6 +129,8 @@
be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.
cache : bool
Enable caching of the app call. Default is False.
+ ignore_for_cache : (list|None)
+ Names of arguments which will be ignored by the caching mechanism.
"""
from parsl.app.python import PythonApp
@@ -167,6 +172,8 @@
Labels of the executors that this app can execute over. Default is 'all'.
cache : bool
Enable caching of the app call. Default is False.
+ ignore_for_cache : (list|None)
+ Names of arguments which will be ignored by the caching mechanism.
"""
from parsl.app.bash import BashApp
|
{"golden_diff": "diff --git a/parsl/app/app.py b/parsl/app/app.py\n--- a/parsl/app/app.py\n+++ b/parsl/app/app.py\n@@ -34,6 +34,7 @@\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of this app ?\n+ - ignore_for_cache (list|None): Names of arguments which will be ignored by the caching mechanism.\n \n Returns:\n - App object.\n@@ -90,6 +91,8 @@\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n+ ignore_for_cache : (list|None)\n+ Names of arguments which will be ignored by the caching mechanism.\n \"\"\"\n from parsl.app.python import PythonApp\n \n@@ -126,6 +129,8 @@\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n cache : bool\n Enable caching of the app call. Default is False.\n+ ignore_for_cache : (list|None)\n+ Names of arguments which will be ignored by the caching mechanism.\n \"\"\"\n from parsl.app.python import PythonApp\n \n@@ -167,6 +172,8 @@\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n+ ignore_for_cache : (list|None)\n+ Names of arguments which will be ignored by the caching mechanism.\n \"\"\"\n from parsl.app.bash import BashApp\n", "issue": "Missing `ignore_for_cache` docstring in `python_app` and `join_app`\nhttps://github.com/Parsl/parsl/blob/40b969f7a9e16b3fcab2db127753bf85cbcb5375/parsl/app/app.py#L72-L93\r\n\r\nThe `ignore_for_cache` docstring is missing. Same thing for the `join_app` by the way. Super minor report.\n", "before_files": [{"content": "\"\"\"Definitions for the @App decorator and the App classes.\n\nThe App class encapsulates a generic leaf task that can be executed asynchronously.\n\"\"\"\nimport logging\nimport typeguard\nfrom abc import ABCMeta, abstractmethod\nfrom inspect import signature\nfrom typing import List, Optional, Union\nfrom typing_extensions import Literal\n\nfrom parsl.dataflow.dflow import DataFlowKernel\n\nlogger = logging.getLogger(__name__)\n\n\nclass AppBase(metaclass=ABCMeta):\n \"\"\"This is the base class that defines the two external facing functions that an App must define.\n\n The __init__ () which is called when the interpreter sees the definition of the decorated\n function, and the __call__ () which is invoked when a decorated function is called by the user.\n\n \"\"\"\n\n def __init__(self, func, data_flow_kernel=None, executors='all', cache=False, ignore_for_cache=None):\n \"\"\"Construct the App object.\n\n Args:\n - func (function): Takes the function to be made into an App\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - executors (str|list) : Labels of the executors that this app can execute over. 
Default is 'all'.\n - cache (Bool) : Enable caching of this app ?\n\n Returns:\n - App object.\n\n \"\"\"\n self.__name__ = func.__name__\n self.func = func\n self.data_flow_kernel = data_flow_kernel\n self.executors = executors\n self.cache = cache\n self.ignore_for_cache = ignore_for_cache\n if not (isinstance(executors, list) or isinstance(executors, str)):\n logger.error(\"App {} specifies invalid executor option, expects string or list\".format(\n func.__name__))\n\n params = signature(func).parameters\n\n self.kwargs = {}\n if 'stdout' in params:\n self.kwargs['stdout'] = params['stdout'].default\n if 'stderr' in params:\n self.kwargs['stderr'] = params['stderr'].default\n if 'walltime' in params:\n self.kwargs['walltime'] = params['walltime'].default\n if 'parsl_resource_specification' in params:\n self.kwargs['parsl_resource_specification'] = params['parsl_resource_specification'].default\n self.outputs = params['outputs'].default if 'outputs' in params else []\n self.inputs = params['inputs'].default if 'inputs' in params else []\n\n @abstractmethod\n def __call__(self, *args, **kwargs):\n pass\n\n\[email protected]\ndef python_app(function=None,\n data_flow_kernel: Optional[DataFlowKernel] = None,\n cache: bool = False,\n executors: Union[List[str], Literal['all']] = 'all',\n ignore_for_cache: Optional[List[str]] = None):\n \"\"\"Decorator function for making python apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, ``@python_app`` if using all defaults or ``@python_app(walltime=120)``. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n \"\"\"\n from parsl.app.python import PythonApp\n\n def decorator(func):\n def wrapper(f):\n return PythonApp(f,\n data_flow_kernel=data_flow_kernel,\n cache=cache,\n executors=executors,\n ignore_for_cache=ignore_for_cache,\n join=False)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n\n\[email protected]\ndef join_app(function=None,\n data_flow_kernel: Optional[DataFlowKernel] = None,\n cache: bool = False,\n ignore_for_cache: Optional[List[str]] = None):\n \"\"\"Decorator function for making join apps\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, ``@python_app`` if using all defaults or ``@python_app(walltime=120)``. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n cache : bool\n Enable caching of the app call. 
Default is False.\n \"\"\"\n from parsl.app.python import PythonApp\n\n def decorator(func):\n def wrapper(f):\n return PythonApp(f,\n data_flow_kernel=data_flow_kernel,\n cache=cache,\n executors=[\"_parsl_internal\"],\n ignore_for_cache=ignore_for_cache,\n join=True)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n\n\[email protected]\ndef bash_app(function=None,\n data_flow_kernel: Optional[DataFlowKernel] = None,\n cache: bool = False,\n executors: Union[List[str], Literal['all']] = 'all',\n ignore_for_cache: Optional[List[str]] = None):\n \"\"\"Decorator function for making bash apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, ``@bash_app`` if using all defaults or ``@bash_app(walltime=120)``. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n \"\"\"\n from parsl.app.bash import BashApp\n\n def decorator(func):\n def wrapper(f):\n return BashApp(f,\n data_flow_kernel=data_flow_kernel,\n cache=cache,\n executors=executors,\n ignore_for_cache=ignore_for_cache)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n", "path": "parsl/app/app.py"}], "after_files": [{"content": "\"\"\"Definitions for the @App decorator and the App classes.\n\nThe App class encapsulates a generic leaf task that can be executed asynchronously.\n\"\"\"\nimport logging\nimport typeguard\nfrom abc import ABCMeta, abstractmethod\nfrom inspect import signature\nfrom typing import List, Optional, Union\nfrom typing_extensions import Literal\n\nfrom parsl.dataflow.dflow import DataFlowKernel\n\nlogger = logging.getLogger(__name__)\n\n\nclass AppBase(metaclass=ABCMeta):\n \"\"\"This is the base class that defines the two external facing functions that an App must define.\n\n The __init__ () which is called when the interpreter sees the definition of the decorated\n function, and the __call__ () which is invoked when a decorated function is called by the user.\n\n \"\"\"\n\n def __init__(self, func, data_flow_kernel=None, executors='all', cache=False, ignore_for_cache=None):\n \"\"\"Construct the App object.\n\n Args:\n - func (function): Takes the function to be made into an App\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - executors (str|list) : Labels of the executors that this app can execute over. 
Default is 'all'.\n - cache (Bool) : Enable caching of this app ?\n - ignore_for_cache (list|None): Names of arguments which will be ignored by the caching mechanism.\n\n Returns:\n - App object.\n\n \"\"\"\n self.__name__ = func.__name__\n self.func = func\n self.data_flow_kernel = data_flow_kernel\n self.executors = executors\n self.cache = cache\n self.ignore_for_cache = ignore_for_cache\n if not (isinstance(executors, list) or isinstance(executors, str)):\n logger.error(\"App {} specifies invalid executor option, expects string or list\".format(\n func.__name__))\n\n params = signature(func).parameters\n\n self.kwargs = {}\n if 'stdout' in params:\n self.kwargs['stdout'] = params['stdout'].default\n if 'stderr' in params:\n self.kwargs['stderr'] = params['stderr'].default\n if 'walltime' in params:\n self.kwargs['walltime'] = params['walltime'].default\n if 'parsl_resource_specification' in params:\n self.kwargs['parsl_resource_specification'] = params['parsl_resource_specification'].default\n self.outputs = params['outputs'].default if 'outputs' in params else []\n self.inputs = params['inputs'].default if 'inputs' in params else []\n\n @abstractmethod\n def __call__(self, *args, **kwargs):\n pass\n\n\[email protected]\ndef python_app(function=None,\n data_flow_kernel: Optional[DataFlowKernel] = None,\n cache: bool = False,\n executors: Union[List[str], Literal['all']] = 'all',\n ignore_for_cache: Optional[List[str]] = None):\n \"\"\"Decorator function for making python apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, ``@python_app`` if using all defaults or ``@python_app(walltime=120)``. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n ignore_for_cache : (list|None)\n Names of arguments which will be ignored by the caching mechanism.\n \"\"\"\n from parsl.app.python import PythonApp\n\n def decorator(func):\n def wrapper(f):\n return PythonApp(f,\n data_flow_kernel=data_flow_kernel,\n cache=cache,\n executors=executors,\n ignore_for_cache=ignore_for_cache,\n join=False)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n\n\[email protected]\ndef join_app(function=None,\n data_flow_kernel: Optional[DataFlowKernel] = None,\n cache: bool = False,\n ignore_for_cache: Optional[List[str]] = None):\n \"\"\"Decorator function for making join apps\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, ``@python_app`` if using all defaults or ``@python_app(walltime=120)``. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. 
This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n cache : bool\n Enable caching of the app call. Default is False.\n ignore_for_cache : (list|None)\n Names of arguments which will be ignored by the caching mechanism.\n \"\"\"\n from parsl.app.python import PythonApp\n\n def decorator(func):\n def wrapper(f):\n return PythonApp(f,\n data_flow_kernel=data_flow_kernel,\n cache=cache,\n executors=[\"_parsl_internal\"],\n ignore_for_cache=ignore_for_cache,\n join=True)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n\n\[email protected]\ndef bash_app(function=None,\n data_flow_kernel: Optional[DataFlowKernel] = None,\n cache: bool = False,\n executors: Union[List[str], Literal['all']] = 'all',\n ignore_for_cache: Optional[List[str]] = None):\n \"\"\"Decorator function for making bash apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, ``@bash_app`` if using all defaults or ``@bash_app(walltime=120)``. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False.\n ignore_for_cache : (list|None)\n Names of arguments which will be ignored by the caching mechanism.\n \"\"\"\n from parsl.app.bash import BashApp\n\n def decorator(func):\n def wrapper(f):\n return BashApp(f,\n data_flow_kernel=data_flow_kernel,\n cache=cache,\n executors=executors,\n ignore_for_cache=ignore_for_cache)\n return wrapper(func)\n if function is not None:\n return decorator(function)\n return decorator\n", "path": "parsl/app/app.py"}]}
| 2,434 | 402 |
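As an aside on the record above: the added docstrings describe `ignore_for_cache` as the list of argument names excluded from the caching key. A minimal sketch of that idea, with hypothetical helper names rather than Parsl's actual caching code:

```python
import json

_cache = {}

def cached_call(func, kwargs, ignore_for_cache=None):
    # Drop ignored argument names from the memoization key, so calls that
    # differ only in those arguments (e.g. log file paths) share one result.
    ignored = set(ignore_for_cache or [])
    key = (func.__name__,
           json.dumps({k: v for k, v in sorted(kwargs.items()) if k not in ignored}))
    if key not in _cache:
        _cache[key] = func(**kwargs)
    return _cache[key]

def square(x, stdout=None):
    return x * x

print(cached_call(square, {"x": 3, "stdout": "run1.log"}, ignore_for_cache=["stdout"]))  # computed: 9
print(cached_call(square, {"x": 3, "stdout": "run2.log"}, ignore_for_cache=["stdout"]))  # cache hit: 9
```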
gh_patches_debug_11063 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1333 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search Page (Amok): search controller - datasets displayed
1. After merging the last changes, there are displayed a lower number of datasets (<10). This is basically due to the fact that search module searches in extras after "indicator" value 0. The database contains right now indicator value 1 for indicators and only for some datasets value 0.
What will be the resolution for this bug?
2. I noticed in search controller that we are basically performed 3 searches (in search_controller.py) line 280 and in count_types (line 283). Is it possible to optimize this?
Assigning it to @mbellotti
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_search/ckanext/hdx_search/plugin.py`
Content:
```
1 import logging
2 import ckan.plugins as plugins
3 import ckan.plugins.toolkit as tk
4 import ckan.lib.plugins as lib_plugins
5
6 class HDXSearchPlugin(plugins.SingletonPlugin):
7 plugins.implements(plugins.IConfigurer, inherit=False)
8 plugins.implements(plugins.IRoutes, inherit=True)
9 plugins.implements(plugins.ITemplateHelpers, inherit=False)
10 plugins.implements(plugins.IPackageController, inherit=True)
11
12 def update_config(self, config):
13 tk.add_template_directory(config, 'templates')
14
15 def get_helpers(self):
16 return {}
17
18 def before_map(self, map):
19 map.connect('search','/search', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
20 map.connect('/dataset', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='package_search')
21 return map
22
23 def after_map(self, map):
24 map.connect('search','/search', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
25 map.connect('/dataset', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='package_search')
26 return map
27
28 def before_search(self, search_params):
29 #If indicator flag is set, search only that type
30 if 'ext_indicator' in search_params['extras']:
31 if int(search_params['extras']['ext_indicator']) == 1:
32 search_params['q'] = search_params['q']+'+extras_indicator:1'
33 elif int(search_params['extras']['ext_indicator']) == 0:
34 search_params['q'] = search_params['q']+'+extras_indicator:0'
35 return search_params
36
37 def after_search(self, search_results, search_params):
38 return search_results
39
40 def before_view(self, pkg_dict):
41 return pkg_dict
42
43
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py
+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
@@ -26,7 +26,7 @@
if int(search_params['extras']['ext_indicator']) == 1:
search_params['q'] = search_params['q']+'+extras_indicator:1'
elif int(search_params['extras']['ext_indicator']) == 0:
- search_params['q'] = search_params['q']+'+extras_indicator:0'
+ search_params['q'] = search_params['q']+'+!extras_indicator:1'
return search_params
def after_search(self, search_results, search_params):
|
{"golden_diff": "diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n@@ -26,7 +26,7 @@\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['q'] = search_params['q']+'+extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n- search_params['q'] = search_params['q']+'+extras_indicator:0'\n+ search_params['q'] = search_params['q']+'+!extras_indicator:1'\n return search_params\n \n def after_search(self, search_results, search_params):\n", "issue": "Search Page (Amok): search controller - datasets displayed\n1. After merging the last changes, there are displayed a lower number of datasets (<10). This is basically due to the fact that search module searches in extras after \"indicator\" value 0. The database contains right now indicator value 1 for indicators and only for some datasets value 0.\n What will be the resolution for this bug?\n2. I noticed in search controller that we are basically performed 3 searches (in search_controller.py) line 280 and in count_types (line 283). Is it possible to optimize this?\n Assigning it to @mbellotti \n\n", "before_files": [{"content": "import logging\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search','/search', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('/dataset', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='package_search')\n return map\n \n def after_map(self, map):\n map.connect('search','/search', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('/dataset', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n #If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['q'] = search_params['q']+'+extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['q'] = search_params['q']+'+extras_indicator:0'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n\n\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}], "after_files": [{"content": "import logging\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n 
tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('/search', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('/dataset', controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n #If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['q'] = search_params['q']+'+extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['q'] = search_params['q']+'+!extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n\n\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}]}
| 884 | 195 |
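A short aside on why the small query change above matters: requiring `extras_indicator:0` only matches datasets whose `indicator` extra is explicitly `0`, while excluding `extras_indicator:1` also keeps datasets that have no `indicator` extra at all. A rough Python analogy with invented dataset names, not taken from the record:

```python
datasets = [
    {"name": "dataset-a", "extras": {"indicator": 1}},   # an indicator
    {"name": "dataset-b", "extras": {"indicator": 0}},   # explicitly flagged as non-indicator
    {"name": "dataset-c", "extras": {}},                  # no `indicator` extra set at all
]

# Analogue of `+extras_indicator:0`: only explicit zeros match.
only_zero = [d["name"] for d in datasets if d["extras"].get("indicator") == 0]

# Analogue of `+!extras_indicator:1`: anything not marked as an indicator matches.
not_one = [d["name"] for d in datasets if d["extras"].get("indicator") != 1]

print(only_zero)  # ['dataset-b']
print(not_one)    # ['dataset-b', 'dataset-c']
```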
gh_patches_debug_38686 | rasdani/github-patches | git_diff | Cog-Creators__Red-DiscordBot-5394 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
playlist list causing UnboundLocalError
### What Red version are you using?
3.4.14
### Cog name
Audio
### Command name
playlist list
### What did you expect to happen?
I first created an empty playlist, and then called `playlist list`, where I expected Red to show me the empty playlist.
### What actually happened?
Red confirmed that it created the empty playlist, but reported that no playlists were found.
### How can we reproduce this error?
1. `[p] playlist create playlistName`
2. `[p] playlist list`
### Anything else?
I found an error in my log files, an `UnboundLocalError` as referenced in the title.

Additionally, here is the output of `[p] debuginfo`.
<img width="1112" alt="image" src="https://user-images.githubusercontent.com/92138376/136501321-a604d42d-e837-4bab-8bcc-4c89ba5b7cb5.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redbot/cogs/audio/apis/playlist_wrapper.py`
Content:
```
1 import concurrent
2 import json
3 import logging
4 from pathlib import Path
5
6 from types import SimpleNamespace
7 from typing import List, MutableMapping, Optional
8
9 from redbot.core import Config
10 from redbot.core.bot import Red
11 from redbot.core.i18n import Translator
12 from redbot.core.utils import AsyncIter
13 from redbot.core.utils.dbtools import APSWConnectionWrapper
14
15 from ..audio_logging import debug_exc_log
16 from ..sql_statements import (
17 HANDLE_DISCORD_DATA_DELETION_QUERY,
18 PLAYLIST_CREATE_INDEX,
19 PLAYLIST_CREATE_TABLE,
20 PLAYLIST_DELETE,
21 PLAYLIST_DELETE_SCHEDULED,
22 PLAYLIST_DELETE_SCOPE,
23 PLAYLIST_FETCH,
24 PLAYLIST_FETCH_ALL,
25 PLAYLIST_FETCH_ALL_CONVERTER,
26 PLAYLIST_FETCH_ALL_WITH_FILTER,
27 PLAYLIST_UPSERT,
28 PRAGMA_FETCH_user_version,
29 PRAGMA_SET_journal_mode,
30 PRAGMA_SET_read_uncommitted,
31 PRAGMA_SET_temp_store,
32 PRAGMA_SET_user_version,
33 )
34 from ..utils import PlaylistScope
35 from .api_utils import PlaylistFetchResult
36
37 log = logging.getLogger("red.cogs.Audio.api.Playlists")
38 _ = Translator("Audio", Path(__file__))
39
40
41 class PlaylistWrapper:
42 def __init__(self, bot: Red, config: Config, conn: APSWConnectionWrapper):
43 self.bot = bot
44 self.database = conn
45 self.config = config
46 self.statement = SimpleNamespace()
47 self.statement.pragma_temp_store = PRAGMA_SET_temp_store
48 self.statement.pragma_journal_mode = PRAGMA_SET_journal_mode
49 self.statement.pragma_read_uncommitted = PRAGMA_SET_read_uncommitted
50 self.statement.set_user_version = PRAGMA_SET_user_version
51 self.statement.get_user_version = PRAGMA_FETCH_user_version
52 self.statement.create_table = PLAYLIST_CREATE_TABLE
53 self.statement.create_index = PLAYLIST_CREATE_INDEX
54
55 self.statement.upsert = PLAYLIST_UPSERT
56 self.statement.delete = PLAYLIST_DELETE
57 self.statement.delete_scope = PLAYLIST_DELETE_SCOPE
58 self.statement.delete_scheduled = PLAYLIST_DELETE_SCHEDULED
59
60 self.statement.get_one = PLAYLIST_FETCH
61 self.statement.get_all = PLAYLIST_FETCH_ALL
62 self.statement.get_all_with_filter = PLAYLIST_FETCH_ALL_WITH_FILTER
63 self.statement.get_all_converter = PLAYLIST_FETCH_ALL_CONVERTER
64
65 self.statement.drop_user_playlists = HANDLE_DISCORD_DATA_DELETION_QUERY
66
67 async def init(self) -> None:
68 """Initialize the Playlist table."""
69 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
70 executor.submit(self.database.cursor().execute, self.statement.pragma_temp_store)
71 executor.submit(self.database.cursor().execute, self.statement.pragma_journal_mode)
72 executor.submit(self.database.cursor().execute, self.statement.pragma_read_uncommitted)
73 executor.submit(self.database.cursor().execute, self.statement.create_table)
74 executor.submit(self.database.cursor().execute, self.statement.create_index)
75
76 @staticmethod
77 def get_scope_type(scope: str) -> int:
78 """Convert a scope to a numerical identifier."""
79 if scope == PlaylistScope.GLOBAL.value:
80 table = 1
81 elif scope == PlaylistScope.USER.value:
82 table = 3
83 else:
84 table = 2
85 return table
86
87 async def fetch(self, scope: str, playlist_id: int, scope_id: int) -> PlaylistFetchResult:
88 """Fetch a single playlist."""
89 scope_type = self.get_scope_type(scope)
90
91 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
92 for future in concurrent.futures.as_completed(
93 [
94 executor.submit(
95 self.database.cursor().execute,
96 self.statement.get_one,
97 (
98 {
99 "playlist_id": playlist_id,
100 "scope_id": scope_id,
101 "scope_type": scope_type,
102 }
103 ),
104 )
105 ]
106 ):
107 try:
108 row_result = future.result()
109 except Exception as exc:
110 debug_exc_log(log, exc, "Failed to completed playlist fetch from database")
111 row = row_result.fetchone()
112 if row:
113 row = PlaylistFetchResult(*row)
114 return row
115
116 async def fetch_all(
117 self, scope: str, scope_id: int, author_id=None
118 ) -> List[PlaylistFetchResult]:
119 """Fetch all playlists."""
120 scope_type = self.get_scope_type(scope)
121 output = []
122 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
123 if author_id is not None:
124 for future in concurrent.futures.as_completed(
125 [
126 executor.submit(
127 self.database.cursor().execute,
128 self.statement.get_all_with_filter,
129 (
130 {
131 "scope_type": scope_type,
132 "scope_id": scope_id,
133 "author_id": author_id,
134 }
135 ),
136 )
137 ]
138 ):
139 try:
140 row_result = future.result()
141 except Exception as exc:
142 debug_exc_log(log, exc, "Failed to completed playlist fetch from database")
143 return []
144 else:
145 for future in concurrent.futures.as_completed(
146 [
147 executor.submit(
148 self.database.cursor().execute,
149 self.statement.get_all,
150 ({"scope_type": scope_type, "scope_id": scope_id}),
151 )
152 ]
153 ):
154 try:
155 row_result = future.result()
156 except Exception as exc:
157 debug_exc_log(log, exc, "Failed to completed playlist fetch from database")
158 return []
159 async for row in AsyncIter(row_result):
160 output.append(PlaylistFetchResult(*row))
161 return output
162
163 async def fetch_all_converter(
164 self, scope: str, playlist_name, playlist_id
165 ) -> List[PlaylistFetchResult]:
166 """Fetch all playlists with the specified filter."""
167 scope_type = self.get_scope_type(scope)
168 try:
169 playlist_id = int(playlist_id)
170 except Exception as exc:
171 debug_exc_log(log, exc, "Failed converting playlist_id to int")
172 playlist_id = -1
173
174 output = []
175 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
176 for future in concurrent.futures.as_completed(
177 [
178 executor.submit(
179 self.database.cursor().execute,
180 self.statement.get_all_converter,
181 (
182 {
183 "scope_type": scope_type,
184 "playlist_name": playlist_name,
185 "playlist_id": playlist_id,
186 }
187 ),
188 )
189 ]
190 ):
191 try:
192 row_result = future.result()
193 except Exception as exc:
194 debug_exc_log(log, exc, "Failed to completed fetch from database")
195
196 async for row in AsyncIter(row_result):
197 output.append(PlaylistFetchResult(*row))
198 return output
199
200 async def delete(self, scope: str, playlist_id: int, scope_id: int):
201 """Deletes a single playlists."""
202 scope_type = self.get_scope_type(scope)
203 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
204 executor.submit(
205 self.database.cursor().execute,
206 self.statement.delete,
207 ({"playlist_id": playlist_id, "scope_id": scope_id, "scope_type": scope_type}),
208 )
209
210 async def delete_scheduled(self):
211 """Clean up database from all deleted playlists."""
212 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
213 executor.submit(self.database.cursor().execute, self.statement.delete_scheduled)
214
215 async def drop(self, scope: str):
216 """Delete all playlists in a scope."""
217 scope_type = self.get_scope_type(scope)
218 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
219 executor.submit(
220 self.database.cursor().execute,
221 self.statement.delete_scope,
222 ({"scope_type": scope_type}),
223 )
224
225 async def create_table(self):
226 """Create the playlist table."""
227 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
228 executor.submit(self.database.cursor().execute, PLAYLIST_CREATE_TABLE)
229
230 async def upsert(
231 self,
232 scope: str,
233 playlist_id: int,
234 playlist_name: str,
235 scope_id: int,
236 author_id: int,
237 playlist_url: Optional[str],
238 tracks: List[MutableMapping],
239 ):
240 """Insert or update a playlist into the database."""
241 scope_type = self.get_scope_type(scope)
242 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
243 executor.submit(
244 self.database.cursor().execute,
245 self.statement.upsert,
246 {
247 "scope_type": str(scope_type),
248 "playlist_id": int(playlist_id),
249 "playlist_name": str(playlist_name),
250 "scope_id": int(scope_id),
251 "author_id": int(author_id),
252 "playlist_url": playlist_url,
253 "tracks": json.dumps(tracks),
254 },
255 )
256
257 async def handle_playlist_user_id_deletion(self, user_id: int):
258 with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
259 executor.submit(
260 self.database.cursor().execute,
261 self.statement.drop_user_playlists,
262 {"user_id": user_id},
263 )
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redbot/cogs/audio/apis/playlist_wrapper.py b/redbot/cogs/audio/apis/playlist_wrapper.py
--- a/redbot/cogs/audio/apis/playlist_wrapper.py
+++ b/redbot/cogs/audio/apis/playlist_wrapper.py
@@ -84,7 +84,9 @@
table = 2
return table
- async def fetch(self, scope: str, playlist_id: int, scope_id: int) -> PlaylistFetchResult:
+ async def fetch(
+ self, scope: str, playlist_id: int, scope_id: int
+ ) -> Optional[PlaylistFetchResult]:
"""Fetch a single playlist."""
scope_type = self.get_scope_type(scope)
@@ -107,7 +109,8 @@
try:
row_result = future.result()
except Exception as exc:
- debug_exc_log(log, exc, "Failed to completed playlist fetch from database")
+ debug_exc_log(log, exc, "Failed to complete playlist fetch from database")
+ return None
row = row_result.fetchone()
if row:
row = PlaylistFetchResult(*row)
@@ -139,7 +142,7 @@
try:
row_result = future.result()
except Exception as exc:
- debug_exc_log(log, exc, "Failed to completed playlist fetch from database")
+ debug_exc_log(log, exc, "Failed to complete playlist fetch from database")
return []
else:
for future in concurrent.futures.as_completed(
@@ -154,7 +157,7 @@
try:
row_result = future.result()
except Exception as exc:
- debug_exc_log(log, exc, "Failed to completed playlist fetch from database")
+ debug_exc_log(log, exc, "Failed to complete playlist fetch from database")
return []
async for row in AsyncIter(row_result):
output.append(PlaylistFetchResult(*row))
@@ -191,7 +194,8 @@
try:
row_result = future.result()
except Exception as exc:
- debug_exc_log(log, exc, "Failed to completed fetch from database")
+ debug_exc_log(log, exc, "Failed to complete fetch from database")
+ return []
async for row in AsyncIter(row_result):
output.append(PlaylistFetchResult(*row))
|
{"golden_diff": "diff --git a/redbot/cogs/audio/apis/playlist_wrapper.py b/redbot/cogs/audio/apis/playlist_wrapper.py\n--- a/redbot/cogs/audio/apis/playlist_wrapper.py\n+++ b/redbot/cogs/audio/apis/playlist_wrapper.py\n@@ -84,7 +84,9 @@\n table = 2\n return table\n \n- async def fetch(self, scope: str, playlist_id: int, scope_id: int) -> PlaylistFetchResult:\n+ async def fetch(\n+ self, scope: str, playlist_id: int, scope_id: int\n+ ) -> Optional[PlaylistFetchResult]:\n \"\"\"Fetch a single playlist.\"\"\"\n scope_type = self.get_scope_type(scope)\n \n@@ -107,7 +109,8 @@\n try:\n row_result = future.result()\n except Exception as exc:\n- debug_exc_log(log, exc, \"Failed to completed playlist fetch from database\")\n+ debug_exc_log(log, exc, \"Failed to complete playlist fetch from database\")\n+ return None\n row = row_result.fetchone()\n if row:\n row = PlaylistFetchResult(*row)\n@@ -139,7 +142,7 @@\n try:\n row_result = future.result()\n except Exception as exc:\n- debug_exc_log(log, exc, \"Failed to completed playlist fetch from database\")\n+ debug_exc_log(log, exc, \"Failed to complete playlist fetch from database\")\n return []\n else:\n for future in concurrent.futures.as_completed(\n@@ -154,7 +157,7 @@\n try:\n row_result = future.result()\n except Exception as exc:\n- debug_exc_log(log, exc, \"Failed to completed playlist fetch from database\")\n+ debug_exc_log(log, exc, \"Failed to complete playlist fetch from database\")\n return []\n async for row in AsyncIter(row_result):\n output.append(PlaylistFetchResult(*row))\n@@ -191,7 +194,8 @@\n try:\n row_result = future.result()\n except Exception as exc:\n- debug_exc_log(log, exc, \"Failed to completed fetch from database\")\n+ debug_exc_log(log, exc, \"Failed to complete fetch from database\")\n+ return []\n \n async for row in AsyncIter(row_result):\n output.append(PlaylistFetchResult(*row))\n", "issue": "playlist list causing UnboundLocalError\n### What Red version are you using?\n\n3.4.14\n\n### Cog name\n\nAudio\n\n### Command name\n\nplaylist list\n\n### What did you expect to happen?\n\nI first created an empty playlist, and then called `playlist list`, where I expected Red to show me the empty playlist.\n\n### What actually happened?\n\nRed confirmed that it created the empty playlist, but reported that no playlists were found.\n\n### How can we reproduce this error?\n\n1. `[p] playlist create playlistName`\r\n2. 
`[p] playlist list`\r\n\n\n### Anything else?\n\nI found an error in my log files, an `UnboundLocalError` as referenced in the title.\r\n\r\nAdditionally, here is the output of `[p] debuginfo`.\r\n<img width=\"1112\" alt=\"image\" src=\"https://user-images.githubusercontent.com/92138376/136501321-a604d42d-e837-4bab-8bcc-4c89ba5b7cb5.png\">\r\n\n", "before_files": [{"content": "import concurrent\nimport json\nimport logging\nfrom pathlib import Path\n\nfrom types import SimpleNamespace\nfrom typing import List, MutableMapping, Optional\n\nfrom redbot.core import Config\nfrom redbot.core.bot import Red\nfrom redbot.core.i18n import Translator\nfrom redbot.core.utils import AsyncIter\nfrom redbot.core.utils.dbtools import APSWConnectionWrapper\n\nfrom ..audio_logging import debug_exc_log\nfrom ..sql_statements import (\n HANDLE_DISCORD_DATA_DELETION_QUERY,\n PLAYLIST_CREATE_INDEX,\n PLAYLIST_CREATE_TABLE,\n PLAYLIST_DELETE,\n PLAYLIST_DELETE_SCHEDULED,\n PLAYLIST_DELETE_SCOPE,\n PLAYLIST_FETCH,\n PLAYLIST_FETCH_ALL,\n PLAYLIST_FETCH_ALL_CONVERTER,\n PLAYLIST_FETCH_ALL_WITH_FILTER,\n PLAYLIST_UPSERT,\n PRAGMA_FETCH_user_version,\n PRAGMA_SET_journal_mode,\n PRAGMA_SET_read_uncommitted,\n PRAGMA_SET_temp_store,\n PRAGMA_SET_user_version,\n)\nfrom ..utils import PlaylistScope\nfrom .api_utils import PlaylistFetchResult\n\nlog = logging.getLogger(\"red.cogs.Audio.api.Playlists\")\n_ = Translator(\"Audio\", Path(__file__))\n\n\nclass PlaylistWrapper:\n def __init__(self, bot: Red, config: Config, conn: APSWConnectionWrapper):\n self.bot = bot\n self.database = conn\n self.config = config\n self.statement = SimpleNamespace()\n self.statement.pragma_temp_store = PRAGMA_SET_temp_store\n self.statement.pragma_journal_mode = PRAGMA_SET_journal_mode\n self.statement.pragma_read_uncommitted = PRAGMA_SET_read_uncommitted\n self.statement.set_user_version = PRAGMA_SET_user_version\n self.statement.get_user_version = PRAGMA_FETCH_user_version\n self.statement.create_table = PLAYLIST_CREATE_TABLE\n self.statement.create_index = PLAYLIST_CREATE_INDEX\n\n self.statement.upsert = PLAYLIST_UPSERT\n self.statement.delete = PLAYLIST_DELETE\n self.statement.delete_scope = PLAYLIST_DELETE_SCOPE\n self.statement.delete_scheduled = PLAYLIST_DELETE_SCHEDULED\n\n self.statement.get_one = PLAYLIST_FETCH\n self.statement.get_all = PLAYLIST_FETCH_ALL\n self.statement.get_all_with_filter = PLAYLIST_FETCH_ALL_WITH_FILTER\n self.statement.get_all_converter = PLAYLIST_FETCH_ALL_CONVERTER\n\n self.statement.drop_user_playlists = HANDLE_DISCORD_DATA_DELETION_QUERY\n\n async def init(self) -> None:\n \"\"\"Initialize the Playlist table.\"\"\"\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(self.database.cursor().execute, self.statement.pragma_temp_store)\n executor.submit(self.database.cursor().execute, self.statement.pragma_journal_mode)\n executor.submit(self.database.cursor().execute, self.statement.pragma_read_uncommitted)\n executor.submit(self.database.cursor().execute, self.statement.create_table)\n executor.submit(self.database.cursor().execute, self.statement.create_index)\n\n @staticmethod\n def get_scope_type(scope: str) -> int:\n \"\"\"Convert a scope to a numerical identifier.\"\"\"\n if scope == PlaylistScope.GLOBAL.value:\n table = 1\n elif scope == PlaylistScope.USER.value:\n table = 3\n else:\n table = 2\n return table\n\n async def fetch(self, scope: str, playlist_id: int, scope_id: int) -> PlaylistFetchResult:\n \"\"\"Fetch a single playlist.\"\"\"\n scope_type = 
self.get_scope_type(scope)\n\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n for future in concurrent.futures.as_completed(\n [\n executor.submit(\n self.database.cursor().execute,\n self.statement.get_one,\n (\n {\n \"playlist_id\": playlist_id,\n \"scope_id\": scope_id,\n \"scope_type\": scope_type,\n }\n ),\n )\n ]\n ):\n try:\n row_result = future.result()\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed to completed playlist fetch from database\")\n row = row_result.fetchone()\n if row:\n row = PlaylistFetchResult(*row)\n return row\n\n async def fetch_all(\n self, scope: str, scope_id: int, author_id=None\n ) -> List[PlaylistFetchResult]:\n \"\"\"Fetch all playlists.\"\"\"\n scope_type = self.get_scope_type(scope)\n output = []\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n if author_id is not None:\n for future in concurrent.futures.as_completed(\n [\n executor.submit(\n self.database.cursor().execute,\n self.statement.get_all_with_filter,\n (\n {\n \"scope_type\": scope_type,\n \"scope_id\": scope_id,\n \"author_id\": author_id,\n }\n ),\n )\n ]\n ):\n try:\n row_result = future.result()\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed to completed playlist fetch from database\")\n return []\n else:\n for future in concurrent.futures.as_completed(\n [\n executor.submit(\n self.database.cursor().execute,\n self.statement.get_all,\n ({\"scope_type\": scope_type, \"scope_id\": scope_id}),\n )\n ]\n ):\n try:\n row_result = future.result()\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed to completed playlist fetch from database\")\n return []\n async for row in AsyncIter(row_result):\n output.append(PlaylistFetchResult(*row))\n return output\n\n async def fetch_all_converter(\n self, scope: str, playlist_name, playlist_id\n ) -> List[PlaylistFetchResult]:\n \"\"\"Fetch all playlists with the specified filter.\"\"\"\n scope_type = self.get_scope_type(scope)\n try:\n playlist_id = int(playlist_id)\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed converting playlist_id to int\")\n playlist_id = -1\n\n output = []\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n for future in concurrent.futures.as_completed(\n [\n executor.submit(\n self.database.cursor().execute,\n self.statement.get_all_converter,\n (\n {\n \"scope_type\": scope_type,\n \"playlist_name\": playlist_name,\n \"playlist_id\": playlist_id,\n }\n ),\n )\n ]\n ):\n try:\n row_result = future.result()\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed to completed fetch from database\")\n\n async for row in AsyncIter(row_result):\n output.append(PlaylistFetchResult(*row))\n return output\n\n async def delete(self, scope: str, playlist_id: int, scope_id: int):\n \"\"\"Deletes a single playlists.\"\"\"\n scope_type = self.get_scope_type(scope)\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(\n self.database.cursor().execute,\n self.statement.delete,\n ({\"playlist_id\": playlist_id, \"scope_id\": scope_id, \"scope_type\": scope_type}),\n )\n\n async def delete_scheduled(self):\n \"\"\"Clean up database from all deleted playlists.\"\"\"\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(self.database.cursor().execute, self.statement.delete_scheduled)\n\n async def drop(self, scope: str):\n \"\"\"Delete all playlists in a scope.\"\"\"\n scope_type = self.get_scope_type(scope)\n with 
concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(\n self.database.cursor().execute,\n self.statement.delete_scope,\n ({\"scope_type\": scope_type}),\n )\n\n async def create_table(self):\n \"\"\"Create the playlist table.\"\"\"\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(self.database.cursor().execute, PLAYLIST_CREATE_TABLE)\n\n async def upsert(\n self,\n scope: str,\n playlist_id: int,\n playlist_name: str,\n scope_id: int,\n author_id: int,\n playlist_url: Optional[str],\n tracks: List[MutableMapping],\n ):\n \"\"\"Insert or update a playlist into the database.\"\"\"\n scope_type = self.get_scope_type(scope)\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(\n self.database.cursor().execute,\n self.statement.upsert,\n {\n \"scope_type\": str(scope_type),\n \"playlist_id\": int(playlist_id),\n \"playlist_name\": str(playlist_name),\n \"scope_id\": int(scope_id),\n \"author_id\": int(author_id),\n \"playlist_url\": playlist_url,\n \"tracks\": json.dumps(tracks),\n },\n )\n\n async def handle_playlist_user_id_deletion(self, user_id: int):\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(\n self.database.cursor().execute,\n self.statement.drop_user_playlists,\n {\"user_id\": user_id},\n )\n", "path": "redbot/cogs/audio/apis/playlist_wrapper.py"}], "after_files": [{"content": "import concurrent\nimport json\nimport logging\nfrom pathlib import Path\n\nfrom types import SimpleNamespace\nfrom typing import List, MutableMapping, Optional\n\nfrom redbot.core import Config\nfrom redbot.core.bot import Red\nfrom redbot.core.i18n import Translator\nfrom redbot.core.utils import AsyncIter\nfrom redbot.core.utils.dbtools import APSWConnectionWrapper\n\nfrom ..audio_logging import debug_exc_log\nfrom ..sql_statements import (\n HANDLE_DISCORD_DATA_DELETION_QUERY,\n PLAYLIST_CREATE_INDEX,\n PLAYLIST_CREATE_TABLE,\n PLAYLIST_DELETE,\n PLAYLIST_DELETE_SCHEDULED,\n PLAYLIST_DELETE_SCOPE,\n PLAYLIST_FETCH,\n PLAYLIST_FETCH_ALL,\n PLAYLIST_FETCH_ALL_CONVERTER,\n PLAYLIST_FETCH_ALL_WITH_FILTER,\n PLAYLIST_UPSERT,\n PRAGMA_FETCH_user_version,\n PRAGMA_SET_journal_mode,\n PRAGMA_SET_read_uncommitted,\n PRAGMA_SET_temp_store,\n PRAGMA_SET_user_version,\n)\nfrom ..utils import PlaylistScope\nfrom .api_utils import PlaylistFetchResult\n\nlog = logging.getLogger(\"red.cogs.Audio.api.Playlists\")\n_ = Translator(\"Audio\", Path(__file__))\n\n\nclass PlaylistWrapper:\n def __init__(self, bot: Red, config: Config, conn: APSWConnectionWrapper):\n self.bot = bot\n self.database = conn\n self.config = config\n self.statement = SimpleNamespace()\n self.statement.pragma_temp_store = PRAGMA_SET_temp_store\n self.statement.pragma_journal_mode = PRAGMA_SET_journal_mode\n self.statement.pragma_read_uncommitted = PRAGMA_SET_read_uncommitted\n self.statement.set_user_version = PRAGMA_SET_user_version\n self.statement.get_user_version = PRAGMA_FETCH_user_version\n self.statement.create_table = PLAYLIST_CREATE_TABLE\n self.statement.create_index = PLAYLIST_CREATE_INDEX\n\n self.statement.upsert = PLAYLIST_UPSERT\n self.statement.delete = PLAYLIST_DELETE\n self.statement.delete_scope = PLAYLIST_DELETE_SCOPE\n self.statement.delete_scheduled = PLAYLIST_DELETE_SCHEDULED\n\n self.statement.get_one = PLAYLIST_FETCH\n self.statement.get_all = PLAYLIST_FETCH_ALL\n self.statement.get_all_with_filter = PLAYLIST_FETCH_ALL_WITH_FILTER\n self.statement.get_all_converter = 
PLAYLIST_FETCH_ALL_CONVERTER\n\n self.statement.drop_user_playlists = HANDLE_DISCORD_DATA_DELETION_QUERY\n\n async def init(self) -> None:\n \"\"\"Initialize the Playlist table.\"\"\"\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(self.database.cursor().execute, self.statement.pragma_temp_store)\n executor.submit(self.database.cursor().execute, self.statement.pragma_journal_mode)\n executor.submit(self.database.cursor().execute, self.statement.pragma_read_uncommitted)\n executor.submit(self.database.cursor().execute, self.statement.create_table)\n executor.submit(self.database.cursor().execute, self.statement.create_index)\n\n @staticmethod\n def get_scope_type(scope: str) -> int:\n \"\"\"Convert a scope to a numerical identifier.\"\"\"\n if scope == PlaylistScope.GLOBAL.value:\n table = 1\n elif scope == PlaylistScope.USER.value:\n table = 3\n else:\n table = 2\n return table\n\n async def fetch(\n self, scope: str, playlist_id: int, scope_id: int\n ) -> Optional[PlaylistFetchResult]:\n \"\"\"Fetch a single playlist.\"\"\"\n scope_type = self.get_scope_type(scope)\n\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n for future in concurrent.futures.as_completed(\n [\n executor.submit(\n self.database.cursor().execute,\n self.statement.get_one,\n (\n {\n \"playlist_id\": playlist_id,\n \"scope_id\": scope_id,\n \"scope_type\": scope_type,\n }\n ),\n )\n ]\n ):\n try:\n row_result = future.result()\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed to complete playlist fetch from database\")\n return None\n row = row_result.fetchone()\n if row:\n row = PlaylistFetchResult(*row)\n return row\n\n async def fetch_all(\n self, scope: str, scope_id: int, author_id=None\n ) -> List[PlaylistFetchResult]:\n \"\"\"Fetch all playlists.\"\"\"\n scope_type = self.get_scope_type(scope)\n output = []\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n if author_id is not None:\n for future in concurrent.futures.as_completed(\n [\n executor.submit(\n self.database.cursor().execute,\n self.statement.get_all_with_filter,\n (\n {\n \"scope_type\": scope_type,\n \"scope_id\": scope_id,\n \"author_id\": author_id,\n }\n ),\n )\n ]\n ):\n try:\n row_result = future.result()\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed to complete playlist fetch from database\")\n return []\n else:\n for future in concurrent.futures.as_completed(\n [\n executor.submit(\n self.database.cursor().execute,\n self.statement.get_all,\n ({\"scope_type\": scope_type, \"scope_id\": scope_id}),\n )\n ]\n ):\n try:\n row_result = future.result()\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed to complete playlist fetch from database\")\n return []\n async for row in AsyncIter(row_result):\n output.append(PlaylistFetchResult(*row))\n return output\n\n async def fetch_all_converter(\n self, scope: str, playlist_name, playlist_id\n ) -> List[PlaylistFetchResult]:\n \"\"\"Fetch all playlists with the specified filter.\"\"\"\n scope_type = self.get_scope_type(scope)\n try:\n playlist_id = int(playlist_id)\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed converting playlist_id to int\")\n playlist_id = -1\n\n output = []\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n for future in concurrent.futures.as_completed(\n [\n executor.submit(\n self.database.cursor().execute,\n self.statement.get_all_converter,\n (\n {\n \"scope_type\": scope_type,\n \"playlist_name\": 
playlist_name,\n \"playlist_id\": playlist_id,\n }\n ),\n )\n ]\n ):\n try:\n row_result = future.result()\n except Exception as exc:\n debug_exc_log(log, exc, \"Failed to complete fetch from database\")\n return []\n\n async for row in AsyncIter(row_result):\n output.append(PlaylistFetchResult(*row))\n return output\n\n async def delete(self, scope: str, playlist_id: int, scope_id: int):\n \"\"\"Deletes a single playlists.\"\"\"\n scope_type = self.get_scope_type(scope)\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(\n self.database.cursor().execute,\n self.statement.delete,\n ({\"playlist_id\": playlist_id, \"scope_id\": scope_id, \"scope_type\": scope_type}),\n )\n\n async def delete_scheduled(self):\n \"\"\"Clean up database from all deleted playlists.\"\"\"\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(self.database.cursor().execute, self.statement.delete_scheduled)\n\n async def drop(self, scope: str):\n \"\"\"Delete all playlists in a scope.\"\"\"\n scope_type = self.get_scope_type(scope)\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(\n self.database.cursor().execute,\n self.statement.delete_scope,\n ({\"scope_type\": scope_type}),\n )\n\n async def create_table(self):\n \"\"\"Create the playlist table.\"\"\"\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(self.database.cursor().execute, PLAYLIST_CREATE_TABLE)\n\n async def upsert(\n self,\n scope: str,\n playlist_id: int,\n playlist_name: str,\n scope_id: int,\n author_id: int,\n playlist_url: Optional[str],\n tracks: List[MutableMapping],\n ):\n \"\"\"Insert or update a playlist into the database.\"\"\"\n scope_type = self.get_scope_type(scope)\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(\n self.database.cursor().execute,\n self.statement.upsert,\n {\n \"scope_type\": str(scope_type),\n \"playlist_id\": int(playlist_id),\n \"playlist_name\": str(playlist_name),\n \"scope_id\": int(scope_id),\n \"author_id\": int(author_id),\n \"playlist_url\": playlist_url,\n \"tracks\": json.dumps(tracks),\n },\n )\n\n async def handle_playlist_user_id_deletion(self, user_id: int):\n with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:\n executor.submit(\n self.database.cursor().execute,\n self.statement.drop_user_playlists,\n {\"user_id\": user_id},\n )\n", "path": "redbot/cogs/audio/apis/playlist_wrapper.py"}]}
| 3,195 | 503 |
gh_patches_debug_5682
|
rasdani/github-patches
|
git_diff
|
pwr-Solaar__Solaar-2159
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Distribution of 42-logitech-unify-permissions.rules
I see `/usr/share/solaar/udev-rules.d/42-logitech-unify-permissions.rules` is installed by the setuptools packaging by default, but it has to be installed/copied by the user to e.g. `/usr/lib/udev/rules.d/42-logitech-unify-permissions.rules` for it to actually be useful and provide access to the hardware as a non-root user.
Could this be installed in the expected (useful) location by default, or is there some reason/limitation why it's not?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2 import subprocess
3
4 from glob import glob as _glob
5
6 try:
7 from setuptools import setup
8 except ImportError:
9 from distutils.core import setup
10
11 NAME = 'Solaar'
12
13 with open('lib/solaar/version', 'r') as vfile:
14 version = vfile.read().strip()
15
16 try: # get commit from git describe
17 commit = subprocess.check_output(['git', 'describe', '--always'], stderr=subprocess.DEVNULL).strip().decode()
18 with open('lib/solaar/commit', 'w') as vfile:
19 vfile.write(f'{commit}\n')
20 except Exception: # get commit from Ubuntu dpkg-parsechangelog
21 try:
22 commit = subprocess.check_output(['dpkg-parsechangelog', '--show-field', 'Version'],
23 stderr=subprocess.DEVNULL).strip().decode()
24 commit = commit.split('~')
25 with open('lib/solaar/commit', 'w') as vfile:
26 vfile.write(f'{commit[0]}\n')
27 except Exception as e:
28 print('Exception using dpkg-parsechangelog', e)
29
30
31 def _data_files():
32 from os.path import dirname as _dirname
33
34 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
35 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
36 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
37
38 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
39 yield _dirname(mo), [mo]
40
41 yield 'share/applications', ['share/applications/solaar.desktop']
42 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
43 yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']
44
45 del _dirname
46
47
48 setup(
49 name=NAME.lower(),
50 version=version,
51 description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',
52 long_description='''
53 Solaar is a Linux device manager for many Logitech peripherals that connect through
54 Unifying and other receivers or via USB or Bluetooth.
55 Solaar is able to pair/unpair devices with receivers and show and modify some of the
56 modifiable features of devices.
57 For instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),
58 author='Daniel Pavel',
59 license='GPLv2',
60 url='http://pwr-solaar.github.io/Solaar/',
61 classifiers=[
62 'Development Status :: 4 - Beta',
63 'Environment :: X11 Applications :: GTK',
64 'Environment :: Console',
65 'Intended Audience :: End Users/Desktop',
66 'License :: DFSG approved',
67 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
68 'Natural Language :: English',
69 'Programming Language :: Python :: 3 :: Only',
70 'Operating System :: POSIX :: Linux',
71 'Topic :: Utilities',
72 ],
73 platforms=['linux'],
74
75 # sudo apt install python-gi python3-gi \
76 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
77 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
78 python_requires='>=3.7',
79 install_requires=[
80 'evdev (>= 1.1.2)',
81 'pyudev (>= 0.13)',
82 'PyYAML (>= 3.12)',
83 'python-xlib (>= 0.27)',
84 'psutil (>= 5.4.3)',
85 'dbus-python (>=1.3.2)',
86 ],
87 extras_require={
88 'report-descriptor': ['hid-parser'],
89 'desktop-notifications': ['Notify (>= 0.7)'],
90 'git-commit': ['python-git-info'],
91 },
92 package_dir={'': 'lib'},
93 packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
94 data_files=list(_data_files()),
95 include_package_data=True,
96 scripts=_glob('bin/*'),
97 )
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -39,7 +39,7 @@
yield _dirname(mo), [mo]
yield 'share/applications', ['share/applications/solaar.desktop']
- yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
+ yield 'lib/udev/rules.d', ['rules.d/42-logitech-unify-permissions.rules']
yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']
del _dirname
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -39,7 +39,7 @@\n yield _dirname(mo), [mo]\n \n yield 'share/applications', ['share/applications/solaar.desktop']\n- yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n+ yield 'lib/udev/rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n \n del _dirname\n", "issue": "Distribution of 42-logitech-unify-permissions.rules\nI see `/usr/share/solaar/udev-rules.d/42-logitech-unify-permissions.rules` is installed by the setuptools packaging by default but it has to be installed/copied by user to e.g. `/usr/lib/udev/rules.d/42-logitech-unify-permissions.rules` for it to actually be useful and providing access to the hardware as non-root user.\r\n\r\nCould this be installed in the expected (useful) location by default or is there some reason/limitation it's not?\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport subprocess\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nNAME = 'Solaar'\n\nwith open('lib/solaar/version', 'r') as vfile:\n version = vfile.read().strip()\n\ntry: # get commit from git describe\n commit = subprocess.check_output(['git', 'describe', '--always'], stderr=subprocess.DEVNULL).strip().decode()\n with open('lib/solaar/commit', 'w') as vfile:\n vfile.write(f'{commit}\\n')\nexcept Exception: # get commit from Ubuntu dpkg-parsechangelog\n try:\n commit = subprocess.check_output(['dpkg-parsechangelog', '--show-field', 'Version'],\n stderr=subprocess.DEVNULL).strip().decode()\n commit = commit.split('~')\n with open('lib/solaar/commit', 'w') as vfile:\n vfile.write(f'{commit[0]}\\n')\n except Exception as e:\n print('Exception using dpkg-parsechangelog', e)\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=version,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: 
Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n 'dbus-python (>=1.3.2)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n 'git-commit': ['python-git-info'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n include_package_data=True,\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\nimport subprocess\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nNAME = 'Solaar'\n\nwith open('lib/solaar/version', 'r') as vfile:\n version = vfile.read().strip()\n\ntry: # get commit from git describe\n commit = subprocess.check_output(['git', 'describe', '--always'], stderr=subprocess.DEVNULL).strip().decode()\n with open('lib/solaar/commit', 'w') as vfile:\n vfile.write(f'{commit}\\n')\nexcept Exception: # get commit from Ubuntu dpkg-parsechangelog\n try:\n commit = subprocess.check_output(['dpkg-parsechangelog', '--show-field', 'Version'],\n stderr=subprocess.DEVNULL).strip().decode()\n commit = commit.split('~')\n with open('lib/solaar/commit', 'w') as vfile:\n vfile.write(f'{commit[0]}\\n')\n except Exception as e:\n print('Exception using dpkg-parsechangelog', e)\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'lib/udev/rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=version,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # 
os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n 'dbus-python (>=1.3.2)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n 'git-commit': ['python-git-info'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n include_package_data=True,\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}]}
| 1,529 | 145 |
gh_patches_debug_25603
|
rasdani/github-patches
|
git_diff
|
coala__coala-bears-1590
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyUnusedCodeBear: Implement remove_unused_variables
The `remove_unused_variables` option is implicitly passed as `True`:
https://github.com/coala/coala-bears/blob/master/bears/python/PyUnusedCodeBear.py#L33
It should be implemented as a setting in `run()`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bears/python/PyUnusedCodeBear.py`
Content:
```
1 import autoflake
2
3 from coalib.bears.LocalBear import LocalBear
4 from dependency_management.requirements.PipRequirement import PipRequirement
5 from coalib.results.Diff import Diff
6 from coalib.results.Result import Result
7
8
9 class PyUnusedCodeBear(LocalBear):
10 LANGUAGES = {'Python', 'Python 2', 'Python 3'}
11 REQUIREMENTS = {PipRequirement('autoflake', '0.6.6')}
12 AUTHORS = {'The coala developers'}
13 AUTHORS_EMAILS = {'[email protected]'}
14 LICENSE = 'AGPL-3.0'
15 CAN_DETECT = {'Unused Code'}
16
17 def run(self, filename, file,
18 remove_all_unused_imports: bool=False):
19 """
20 Detects unused code. By default this functionality is limited to:
21
22 - Unneeded pass statements.
23 - Unneeded builtin imports.
24
25 :param remove_all_unused_imports:
26 True removes all unused imports - might have side effects
27 """
28
29 corrected = autoflake.fix_code(
30 ''.join(file),
31 additional_imports=None,
32 remove_all_unused_imports=remove_all_unused_imports,
33 remove_unused_variables=True
34 ).splitlines(True)
35
36 for diff in Diff.from_string_arrays(file, corrected).split_diff():
37 yield Result(self,
38 'This file contains unused source code.',
39 affected_code=(diff.range(filename),),
40 diffs={filename: diff})
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bears/python/PyUnusedCodeBear.py b/bears/python/PyUnusedCodeBear.py
--- a/bears/python/PyUnusedCodeBear.py
+++ b/bears/python/PyUnusedCodeBear.py
@@ -15,7 +15,8 @@
CAN_DETECT = {'Unused Code'}
def run(self, filename, file,
- remove_all_unused_imports: bool=False):
+ remove_all_unused_imports: bool=False,
+ remove_unused_variables: bool=True):
"""
Detects unused code. By default this functionality is limited to:
@@ -24,13 +25,15 @@
:param remove_all_unused_imports:
True removes all unused imports - might have side effects
+ :param remove_unused_variables:
+ True removes unused variables - might have side effects
"""
corrected = autoflake.fix_code(
''.join(file),
additional_imports=None,
remove_all_unused_imports=remove_all_unused_imports,
- remove_unused_variables=True
+ remove_unused_variables=remove_unused_variables
).splitlines(True)
for diff in Diff.from_string_arrays(file, corrected).split_diff():
|
{"golden_diff": "diff --git a/bears/python/PyUnusedCodeBear.py b/bears/python/PyUnusedCodeBear.py\n--- a/bears/python/PyUnusedCodeBear.py\n+++ b/bears/python/PyUnusedCodeBear.py\n@@ -15,7 +15,8 @@\n CAN_DETECT = {'Unused Code'}\n \n def run(self, filename, file,\n- remove_all_unused_imports: bool=False):\n+ remove_all_unused_imports: bool=False,\n+ remove_unused_variables: bool=True):\n \"\"\"\n Detects unused code. By default this functionality is limited to:\n \n@@ -24,13 +25,15 @@\n \n :param remove_all_unused_imports:\n True removes all unused imports - might have side effects\n+ :param remove_unused_variables:\n+ True removes unused variables - might have side effects\n \"\"\"\n \n corrected = autoflake.fix_code(\n ''.join(file),\n additional_imports=None,\n remove_all_unused_imports=remove_all_unused_imports,\n- remove_unused_variables=True\n+ remove_unused_variables=remove_unused_variables\n ).splitlines(True)\n \n for diff in Diff.from_string_arrays(file, corrected).split_diff():\n", "issue": "PyUnusedCodeBear: Implement remove_unused_variables\nThe remove_unused_variables is implicitly passed on:\r\nhttps://github.com/coala/coala-bears/blob/master/bears/python/PyUnusedCodeBear.py#L33\r\n\r\nshould be implemented as a setting in `run()`\n", "before_files": [{"content": "import autoflake\n\nfrom coalib.bears.LocalBear import LocalBear\nfrom dependency_management.requirements.PipRequirement import PipRequirement\nfrom coalib.results.Diff import Diff\nfrom coalib.results.Result import Result\n\n\nclass PyUnusedCodeBear(LocalBear):\n LANGUAGES = {'Python', 'Python 2', 'Python 3'}\n REQUIREMENTS = {PipRequirement('autoflake', '0.6.6')}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Unused Code'}\n\n def run(self, filename, file,\n remove_all_unused_imports: bool=False):\n \"\"\"\n Detects unused code. By default this functionality is limited to:\n\n - Unneeded pass statements.\n - Unneeded builtin imports.\n\n :param remove_all_unused_imports:\n True removes all unused imports - might have side effects\n \"\"\"\n\n corrected = autoflake.fix_code(\n ''.join(file),\n additional_imports=None,\n remove_all_unused_imports=remove_all_unused_imports,\n remove_unused_variables=True\n ).splitlines(True)\n\n for diff in Diff.from_string_arrays(file, corrected).split_diff():\n yield Result(self,\n 'This file contains unused source code.',\n affected_code=(diff.range(filename),),\n diffs={filename: diff})\n", "path": "bears/python/PyUnusedCodeBear.py"}], "after_files": [{"content": "import autoflake\n\nfrom coalib.bears.LocalBear import LocalBear\nfrom dependency_management.requirements.PipRequirement import PipRequirement\nfrom coalib.results.Diff import Diff\nfrom coalib.results.Result import Result\n\n\nclass PyUnusedCodeBear(LocalBear):\n LANGUAGES = {'Python', 'Python 2', 'Python 3'}\n REQUIREMENTS = {PipRequirement('autoflake', '0.6.6')}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Unused Code'}\n\n def run(self, filename, file,\n remove_all_unused_imports: bool=False,\n remove_unused_variables: bool=True):\n \"\"\"\n Detects unused code. 
By default this functionality is limited to:\n\n - Unneeded pass statements.\n - Unneeded builtin imports.\n\n :param remove_all_unused_imports:\n True removes all unused imports - might have side effects\n :param remove_unused_variables:\n True removes unused variables - might have side effects\n \"\"\"\n\n corrected = autoflake.fix_code(\n ''.join(file),\n additional_imports=None,\n remove_all_unused_imports=remove_all_unused_imports,\n remove_unused_variables=remove_unused_variables\n ).splitlines(True)\n\n for diff in Diff.from_string_arrays(file, corrected).split_diff():\n yield Result(self,\n 'This file contains unused source code.',\n affected_code=(diff.range(filename),),\n diffs={filename: diff})\n", "path": "bears/python/PyUnusedCodeBear.py"}]}
| 693 | 256 |
gh_patches_debug_26928
|
rasdani/github-patches
|
git_diff
|
nvaccess__nvda-8746
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NVDA doesn't read anything on iTunes
Today I woke up and NVDA was no longer reading anything on iTunes. After investigating I found that iTunes had been updated to version 12.9.0.167 from the Microsoft Store overnight, so, suspecting it might be an issue with iTunes, I tried Narrator, which worked just fine.
I'm running Microsoft Windows 10 Home version 10.0.17134 Build 17134 with NVDA 2018.2.1, and after analyzing the [log](https://github.com/nvaccess/nvda/files/2379550/nvda.log) I found that there were exceptions every time I tried to interact with iTunes.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/appModules/itunes.py`
Content:
```
1 #appModules/itunes.py
2 #A part of NonVisual Desktop Access (NVDA)
3 #This file is covered by the GNU General Public License.
4 #See the file COPYING for more details.
5 #Copyright (C) 2009-2015 NV Access Limited
6
7 """App module for iTunes
8 """
9
10 import appModuleHandler
11 from comtypes import COMError
12 import controlTypes
13 import oleacc
14 import winUser
15 import speech
16 import treeInterceptorHandler
17 import api
18 import eventHandler
19 import NVDAObjects.IAccessible
20 from NVDAObjects.IAccessible import webKit
21
22 class AppModule(appModuleHandler.AppModule):
23
24 def event_NVDAObject_init(self,obj):
25 if isinstance(obj,NVDAObjects.IAccessible.IAccessible):
26 if obj.windowClassName=="WebViewWindowClass":
27 if obj.IAccessibleRole==oleacc.ROLE_SYSTEM_WINDOW:
28 #Disable a safety mechonism in our IAccessible support as in iTunes it causes an infinit ancestry.
29 obj.parentUsesSuperOnWindowRootIAccessible=False
30 else:
31 obj.hasEncodedAccDescription=True
32 elif obj.role==controlTypes.ROLE_BUTTON:
33 # iTunes seems to put some controls inside a button.
34 # Don't report this weirdness to the user.
35 obj.isPresentableFocusAncestor=False
36 elif obj.windowClassName=="iTunesWebViewControl" and obj.role==controlTypes.ROLE_DOCUMENT:
37 # This wrapper should never be seen by the user.
38 obj.shouldAllowIAccessibleFocusEvent = False
39 obj.presentationType = obj.presType_layout
40
41 def chooseNVDAObjectOverlayClasses(self, obj, clsList):
42 windowClassName=obj.windowClassName
43 role=obj.role
44 if windowClassName in ('iTunesList','iTunesSources','iTunesTrackList') and role in (controlTypes.ROLE_LISTITEM,controlTypes.ROLE_TREEVIEWITEM):
45 clsList.insert(0, ITunesItem)
46 elif webKit.Document in clsList:
47 clsList.insert(0, WebKitDocument)
48 elif windowClassName=="iTunes" and obj.IAccessibleRole==oleacc.ROLE_SYSTEM_CLIENT:
49 clsList.insert(0, TopLevelClient)
50
51 class ITunesItem(NVDAObjects.IAccessible.IAccessible):
52 """Retreaves position information encoded in the accDescription"""
53
54 hasEncodedAccDescription=True
55 value = None
56
57 def _get_next(self):
58 next=super(ITunesItem,self).next
59 if next:
60 return next
61 try:
62 parentChildCount=self.IAccessibleObject.accChildCount
63 except COMError:
64 parentChildCount=0
65 if self.IAccessibleChildID>0 and self.IAccessibleChildID<parentChildCount:
66 return NVDAObjects.IAccessible.IAccessible(windowHandle=self.windowHandle,IAccessibleObject=self.IAccessibleObject,IAccessibleChildID=self.IAccessibleChildID+1)
67 return None
68
69 def _get_previous(self):
70 previous=super(ITunesItem,self).previous
71 if not previous and self.IAccessibleChildID>1:
72 previous=NVDAObjects.IAccessible.IAccessible(windowHandle=self.windowHandle,IAccessibleObject=self.IAccessibleObject,IAccessibleChildID=self.IAccessibleChildID-1)
73 return previous
74
75 def _get_shouldAllowIAccessibleFocusEvent(self):
76 # These items can fire spurious focus events; e.g. when tabbing out of the Music list.
77 # The list reports that it's focused even when it isn't.
78 # Thankfully, the list items don't.
79 return self.hasFocus
80
81 class WebKitDocument(webKit.Document):
82
83 def event_stateChange(self):
84 # iTunes has indicated that a page has died and been replaced by a new one.
85 # #5191: This is actually fired on the "iTunesWebViewControl" parent,
86 # but AccessibleObjectFromEvent on this window returns the WebKit document as of iTunes 12.
87 focus = api.getFocusObject()
88 if self.windowHandle != focus.windowHandle:
89 return
90 # The new page has the same event params, so we must bypass NVDA's IAccessible caching.
91 obj = NVDAObjects.IAccessible.getNVDAObjectFromEvent(focus.windowHandle, winUser.OBJID_CLIENT, 0)
92 if not obj:
93 return
94 if focus.treeInterceptor:
95 speech.cancelSpeech()
96 treeInterceptorHandler.killTreeInterceptor(focus.treeInterceptor)
97 eventHandler.queueEvent("gainFocus",obj)
98
99 class TopLevelClient(NVDAObjects.IAccessible.IAccessible):
100
101 def _isEqual(self, other):
102 # The location seems to be reported differently depending on how you get to this object.
103 # This causes the focus ancestry to change when it really hasn't,
104 # which in turn causes spurious reporting.
105 if self.IAccessibleIdentity == other.IAccessibleIdentity:
106 return True
107 return super(TopLevelClient, self)._isEqual(other)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/source/appModules/itunes.py b/source/appModules/itunes.py
--- a/source/appModules/itunes.py
+++ b/source/appModules/itunes.py
@@ -2,7 +2,7 @@
#A part of NonVisual Desktop Access (NVDA)
#This file is covered by the GNU General Public License.
#See the file COPYING for more details.
-#Copyright (C) 2009-2015 NV Access Limited
+#Copyright (C) 2009-2018 NV Access Limited, Leonard de Ruijter
"""App module for iTunes
"""
@@ -17,6 +17,7 @@
import api
import eventHandler
import NVDAObjects.IAccessible
+import NVDAObjects.UIA
from NVDAObjects.IAccessible import webKit
class AppModule(appModuleHandler.AppModule):
@@ -39,6 +40,10 @@
obj.presentationType = obj.presType_layout
def chooseNVDAObjectOverlayClasses(self, obj, clsList):
+ if isinstance(obj,NVDAObjects.UIA.UIA):
+ # iTunes 12.9 implements UIA for many controls.
+ # Just leave them untouched for now.
+ return
windowClassName=obj.windowClassName
role=obj.role
if windowClassName in ('iTunesList','iTunesSources','iTunesTrackList') and role in (controlTypes.ROLE_LISTITEM,controlTypes.ROLE_TREEVIEWITEM):
|
{"golden_diff": "diff --git a/source/appModules/itunes.py b/source/appModules/itunes.py\n--- a/source/appModules/itunes.py\n+++ b/source/appModules/itunes.py\n@@ -2,7 +2,7 @@\n #A part of NonVisual Desktop Access (NVDA)\r\n #This file is covered by the GNU General Public License.\r\n #See the file COPYING for more details.\r\n-#Copyright (C) 2009-2015 NV Access Limited\r\n+#Copyright (C) 2009-2018 NV Access Limited, Leonard de Ruijter\r\n \r\n \"\"\"App module for iTunes\r\n \"\"\"\r\n@@ -17,6 +17,7 @@\n import api\r\n import eventHandler\r\n import NVDAObjects.IAccessible\r\n+import NVDAObjects.UIA\r\n from NVDAObjects.IAccessible import webKit\r\n \r\n class AppModule(appModuleHandler.AppModule):\r\n@@ -39,6 +40,10 @@\n \t\t\t\tobj.presentationType = obj.presType_layout\r\n \r\n \tdef chooseNVDAObjectOverlayClasses(self, obj, clsList):\r\n+\t\tif isinstance(obj,NVDAObjects.UIA.UIA):\r\n+\t\t\t# iTunes 12.9 implements UIA for many controls.\r\n+\t\t\t# Just leave them untouched for now.\r\n+\t\t\treturn\r\n \t\twindowClassName=obj.windowClassName\r\n \t\trole=obj.role\r\n \t\tif windowClassName in ('iTunesList','iTunesSources','iTunesTrackList') and role in (controlTypes.ROLE_LISTITEM,controlTypes.ROLE_TREEVIEWITEM):\n", "issue": "NVDA doesn't read anything on iTunes\nToday I woke up and NVDA was no longer reading anything on iTunes. After investigating I figured that iTunes was updated to version 12.9.0.167 from the Microsoft Store overnight, so suspecting it might be an issue with iTunes I tried Narrator which worked just fine.\r\n\r\nI'm running Microsoft Windows 10 Home version 10.0.17134 Build 17134 with NVDA 2018.2.1, and after analyzing the [log](https://github.com/nvaccess/nvda/files/2379550/nvda.log) I found that there were exceptions every time I tried to interact with iTunes.\n", "before_files": [{"content": "#appModules/itunes.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n#Copyright (C) 2009-2015 NV Access Limited\r\n\r\n\"\"\"App module for iTunes\r\n\"\"\"\r\n\r\nimport appModuleHandler\r\nfrom comtypes import COMError\r\nimport controlTypes\r\nimport oleacc\r\nimport winUser\r\nimport speech\r\nimport treeInterceptorHandler\r\nimport api\r\nimport eventHandler\r\nimport NVDAObjects.IAccessible\r\nfrom NVDAObjects.IAccessible import webKit\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef event_NVDAObject_init(self,obj):\r\n\t\tif isinstance(obj,NVDAObjects.IAccessible.IAccessible):\r\n\t\t\tif obj.windowClassName==\"WebViewWindowClass\":\r\n\t\t\t\tif obj.IAccessibleRole==oleacc.ROLE_SYSTEM_WINDOW:\r\n\t\t\t\t\t#Disable a safety mechonism in our IAccessible support as in iTunes it causes an infinit ancestry.\r\n\t\t\t\t\tobj.parentUsesSuperOnWindowRootIAccessible=False\r\n\t\t\t\telse:\r\n\t\t\t\t\tobj.hasEncodedAccDescription=True\r\n\t\t\telif obj.role==controlTypes.ROLE_BUTTON:\r\n\t\t\t\t# iTunes seems to put some controls inside a button.\r\n\t\t\t\t# Don't report this weirdness to the user.\r\n\t\t\t\tobj.isPresentableFocusAncestor=False\r\n\t\t\telif obj.windowClassName==\"iTunesWebViewControl\" and obj.role==controlTypes.ROLE_DOCUMENT:\r\n\t\t\t\t# This wrapper should never be seen by the user.\r\n\t\t\t\tobj.shouldAllowIAccessibleFocusEvent = False\r\n\t\t\t\tobj.presentationType = obj.presType_layout\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self, obj, clsList):\r\n\t\twindowClassName=obj.windowClassName\r\n\t\trole=obj.role\r\n\t\tif 
windowClassName in ('iTunesList','iTunesSources','iTunesTrackList') and role in (controlTypes.ROLE_LISTITEM,controlTypes.ROLE_TREEVIEWITEM):\r\n\t\t\tclsList.insert(0, ITunesItem)\r\n\t\telif webKit.Document in clsList:\r\n\t\t\tclsList.insert(0, WebKitDocument)\r\n\t\telif windowClassName==\"iTunes\" and obj.IAccessibleRole==oleacc.ROLE_SYSTEM_CLIENT:\r\n\t\t\tclsList.insert(0, TopLevelClient)\r\n\r\nclass ITunesItem(NVDAObjects.IAccessible.IAccessible):\r\n\t\"\"\"Retreaves position information encoded in the accDescription\"\"\"\r\n\r\n\thasEncodedAccDescription=True\r\n\tvalue = None\r\n\r\n\tdef _get_next(self):\r\n\t\tnext=super(ITunesItem,self).next\r\n\t\tif next:\r\n\t\t\treturn next\r\n\t\ttry:\r\n\t\t\tparentChildCount=self.IAccessibleObject.accChildCount\r\n\t\texcept COMError:\r\n\t\t\tparentChildCount=0\r\n\t\tif self.IAccessibleChildID>0 and self.IAccessibleChildID<parentChildCount:\r\n\t\t\treturn NVDAObjects.IAccessible.IAccessible(windowHandle=self.windowHandle,IAccessibleObject=self.IAccessibleObject,IAccessibleChildID=self.IAccessibleChildID+1)\r\n\t\treturn None\r\n\r\n\tdef _get_previous(self):\r\n\t\tprevious=super(ITunesItem,self).previous\r\n\t\tif not previous and self.IAccessibleChildID>1:\r\n\t\t\tprevious=NVDAObjects.IAccessible.IAccessible(windowHandle=self.windowHandle,IAccessibleObject=self.IAccessibleObject,IAccessibleChildID=self.IAccessibleChildID-1)\r\n\t\treturn previous\r\n\r\n\tdef _get_shouldAllowIAccessibleFocusEvent(self):\r\n\t\t# These items can fire spurious focus events; e.g. when tabbing out of the Music list.\r\n\t\t# The list reports that it's focused even when it isn't.\r\n\t\t# Thankfully, the list items don't.\r\n\t\treturn self.hasFocus\r\n\r\nclass WebKitDocument(webKit.Document):\r\n\r\n\tdef event_stateChange(self):\r\n\t\t# iTunes has indicated that a page has died and been replaced by a new one.\r\n\t\t# #5191: This is actually fired on the \"iTunesWebViewControl\" parent,\r\n\t\t# but AccessibleObjectFromEvent on this window returns the WebKit document as of iTunes 12.\r\n\t\tfocus = api.getFocusObject()\r\n\t\tif self.windowHandle != focus.windowHandle:\r\n\t\t\treturn\r\n\t\t# The new page has the same event params, so we must bypass NVDA's IAccessible caching.\r\n\t\tobj = NVDAObjects.IAccessible.getNVDAObjectFromEvent(focus.windowHandle, winUser.OBJID_CLIENT, 0)\r\n\t\tif not obj:\r\n\t\t\treturn\r\n\t\tif focus.treeInterceptor:\r\n\t\t\tspeech.cancelSpeech()\r\n\t\t\ttreeInterceptorHandler.killTreeInterceptor(focus.treeInterceptor)\r\n\t\teventHandler.queueEvent(\"gainFocus\",obj)\r\n\r\nclass TopLevelClient(NVDAObjects.IAccessible.IAccessible):\r\n\r\n\tdef _isEqual(self, other):\r\n\t\t# The location seems to be reported differently depending on how you get to this object.\r\n\t\t# This causes the focus ancestry to change when it really hasn't,\r\n\t\t# which in turn causes spurious reporting.\r\n\t\tif self.IAccessibleIdentity == other.IAccessibleIdentity:\r\n\t\t\treturn True\r\n\t\treturn super(TopLevelClient, self)._isEqual(other)\r\n", "path": "source/appModules/itunes.py"}], "after_files": [{"content": "#appModules/itunes.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n#Copyright (C) 2009-2018 NV Access Limited, Leonard de Ruijter\r\n\r\n\"\"\"App module for iTunes\r\n\"\"\"\r\n\r\nimport appModuleHandler\r\nfrom comtypes import COMError\r\nimport controlTypes\r\nimport oleacc\r\nimport winUser\r\nimport 
speech\r\nimport treeInterceptorHandler\r\nimport api\r\nimport eventHandler\r\nimport NVDAObjects.IAccessible\r\nimport NVDAObjects.UIA\r\nfrom NVDAObjects.IAccessible import webKit\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef event_NVDAObject_init(self,obj):\r\n\t\tif isinstance(obj,NVDAObjects.IAccessible.IAccessible):\r\n\t\t\tif obj.windowClassName==\"WebViewWindowClass\":\r\n\t\t\t\tif obj.IAccessibleRole==oleacc.ROLE_SYSTEM_WINDOW:\r\n\t\t\t\t\t#Disable a safety mechonism in our IAccessible support as in iTunes it causes an infinit ancestry.\r\n\t\t\t\t\tobj.parentUsesSuperOnWindowRootIAccessible=False\r\n\t\t\t\telse:\r\n\t\t\t\t\tobj.hasEncodedAccDescription=True\r\n\t\t\telif obj.role==controlTypes.ROLE_BUTTON:\r\n\t\t\t\t# iTunes seems to put some controls inside a button.\r\n\t\t\t\t# Don't report this weirdness to the user.\r\n\t\t\t\tobj.isPresentableFocusAncestor=False\r\n\t\t\telif obj.windowClassName==\"iTunesWebViewControl\" and obj.role==controlTypes.ROLE_DOCUMENT:\r\n\t\t\t\t# This wrapper should never be seen by the user.\r\n\t\t\t\tobj.shouldAllowIAccessibleFocusEvent = False\r\n\t\t\t\tobj.presentationType = obj.presType_layout\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self, obj, clsList):\r\n\t\tif isinstance(obj,NVDAObjects.UIA.UIA):\r\n\t\t\t# iTunes 12.9 implements UIA for many controls.\r\n\t\t\t# Just leave them untouched for now.\r\n\t\t\treturn\r\n\t\twindowClassName=obj.windowClassName\r\n\t\trole=obj.role\r\n\t\tif windowClassName in ('iTunesList','iTunesSources','iTunesTrackList') and role in (controlTypes.ROLE_LISTITEM,controlTypes.ROLE_TREEVIEWITEM):\r\n\t\t\tclsList.insert(0, ITunesItem)\r\n\t\telif webKit.Document in clsList:\r\n\t\t\tclsList.insert(0, WebKitDocument)\r\n\t\telif windowClassName==\"iTunes\" and obj.IAccessibleRole==oleacc.ROLE_SYSTEM_CLIENT:\r\n\t\t\tclsList.insert(0, TopLevelClient)\r\n\r\nclass ITunesItem(NVDAObjects.IAccessible.IAccessible):\r\n\t\"\"\"Retreaves position information encoded in the accDescription\"\"\"\r\n\r\n\thasEncodedAccDescription=True\r\n\tvalue = None\r\n\r\n\tdef _get_next(self):\r\n\t\tnext=super(ITunesItem,self).next\r\n\t\tif next:\r\n\t\t\treturn next\r\n\t\ttry:\r\n\t\t\tparentChildCount=self.IAccessibleObject.accChildCount\r\n\t\texcept COMError:\r\n\t\t\tparentChildCount=0\r\n\t\tif self.IAccessibleChildID>0 and self.IAccessibleChildID<parentChildCount:\r\n\t\t\treturn NVDAObjects.IAccessible.IAccessible(windowHandle=self.windowHandle,IAccessibleObject=self.IAccessibleObject,IAccessibleChildID=self.IAccessibleChildID+1)\r\n\t\treturn None\r\n\r\n\tdef _get_previous(self):\r\n\t\tprevious=super(ITunesItem,self).previous\r\n\t\tif not previous and self.IAccessibleChildID>1:\r\n\t\t\tprevious=NVDAObjects.IAccessible.IAccessible(windowHandle=self.windowHandle,IAccessibleObject=self.IAccessibleObject,IAccessibleChildID=self.IAccessibleChildID-1)\r\n\t\treturn previous\r\n\r\n\tdef _get_shouldAllowIAccessibleFocusEvent(self):\r\n\t\t# These items can fire spurious focus events; e.g. 
when tabbing out of the Music list.\r\n\t\t# The list reports that it's focused even when it isn't.\r\n\t\t# Thankfully, the list items don't.\r\n\t\treturn self.hasFocus\r\n\r\nclass WebKitDocument(webKit.Document):\r\n\r\n\tdef event_stateChange(self):\r\n\t\t# iTunes has indicated that a page has died and been replaced by a new one.\r\n\t\t# #5191: This is actually fired on the \"iTunesWebViewControl\" parent,\r\n\t\t# but AccessibleObjectFromEvent on this window returns the WebKit document as of iTunes 12.\r\n\t\tfocus = api.getFocusObject()\r\n\t\tif self.windowHandle != focus.windowHandle:\r\n\t\t\treturn\r\n\t\t# The new page has the same event params, so we must bypass NVDA's IAccessible caching.\r\n\t\tobj = NVDAObjects.IAccessible.getNVDAObjectFromEvent(focus.windowHandle, winUser.OBJID_CLIENT, 0)\r\n\t\tif not obj:\r\n\t\t\treturn\r\n\t\tif focus.treeInterceptor:\r\n\t\t\tspeech.cancelSpeech()\r\n\t\t\ttreeInterceptorHandler.killTreeInterceptor(focus.treeInterceptor)\r\n\t\teventHandler.queueEvent(\"gainFocus\",obj)\r\n\r\nclass TopLevelClient(NVDAObjects.IAccessible.IAccessible):\r\n\r\n\tdef _isEqual(self, other):\r\n\t\t# The location seems to be reported differently depending on how you get to this object.\r\n\t\t# This causes the focus ancestry to change when it really hasn't,\r\n\t\t# which in turn causes spurious reporting.\r\n\t\tif self.IAccessibleIdentity == other.IAccessibleIdentity:\r\n\t\t\treturn True\r\n\t\treturn super(TopLevelClient, self)._isEqual(other)\r\n", "path": "source/appModules/itunes.py"}]}
| 1,698 | 322 |
gh_patches_debug_34257
|
rasdani/github-patches
|
git_diff
|
mampfes__hacs_waste_collection_schedule-205
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unique ID for entity/entities
Frankly, this is a minor / cosmetic structural issue, but it's nice to have regarding the 2022.4 update, because it enables unique entities to be hidden (instead of just on/off) - and it would be nice to have this on top of this already fantastic extension.
**Context:** Setting up hacs_waste_collection_schedule creates an entry in the default Dashboard (formerly Lovelace) called Calendar/Kalender with a single entry, in my case "AWB Köln". It shows only the on/off status. It's not showing the different countdowns until the pick-up dates for each type of trash.
**Expected Behaviour:** Upon clicking the entry "AWB Köln" and then clicking "settings", I would be able to switch the entry to "hidden", thus making it invisible to default Dashboards.
**Current Behaviour:** Clicking "settings" reveals that the entity has no unique ID and can't be set to hidden.
**Tested solutions:** Manually adding a "unique_id" to the entry via configuration.yaml throws an error and does not work. Neither
unique_id: awbkoelnsource
nor
unique_id: calendar.awbkoelnsource
worked.
Attached: Screenshots (in German) for explanation:
1: The entry shown in the dashboard (formerly Lovelace)
2: Error when trying to access the entry's settings
3: Snippet of the configuration.yaml and the attempted position for a unique_id



--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import datetime
4 import importlib
5 import itertools
6 import logging
7 import traceback
8 from typing import Dict, List, Optional
9
10 from .collection import Collection, CollectionGroup
11
12 _LOGGER = logging.getLogger(__name__)
13
14
15 class Customize:
16 """Customize one waste collection type."""
17
18 def __init__(self, waste_type, alias=None, show=True, icon=None, picture=None):
19 self._waste_type = waste_type
20 self._alias = alias
21 self._show = show
22 self._icon = icon
23 self._picture = picture
24
25 @property
26 def waste_type(self):
27 return self._waste_type
28
29 @property
30 def alias(self):
31 return self._alias
32
33 @property
34 def show(self):
35 return self._show
36
37 @property
38 def icon(self):
39 return self._icon
40
41 @property
42 def picture(self):
43 return self._picture
44
45 def __repr__(self):
46 return f"Customize{{waste_type={self._waste_type}, alias={self._alias}, show={self._show}, icon={self._icon}, picture={self._picture}}}"
47
48
49 def filter_function(entry: Collection, customize: Dict[str, Customize]):
50 c = customize.get(entry.type)
51 if c is None:
52 return True
53 else:
54 return c.show
55
56
57 def customize_function(entry: Collection, customize: Dict[str, Customize]):
58 c = customize.get(entry.type)
59 if c is not None:
60 if c.alias is not None:
61 entry.set_type(c.alias)
62 if c.icon is not None:
63 entry.set_icon(c.icon)
64 if c.picture is not None:
65 entry.set_picture(c.picture)
66 return entry
67
68
69 class Scraper:
70 def __init__(
71 self,
72 source,
73 customize: Dict[str, Customize],
74 title: str,
75 description: str,
76 url: Optional[str],
77 calendar_title: Optional[str],
78 ):
79 self._source = source
80 self._customize = customize
81 self._title = title
82 self._description = description
83 self._url = url
84 self._calendar_title = calendar_title
85 self._refreshtime = None
86 self._entries: List[Collection] = []
87
88 @property
89 def source(self):
90 return self._source
91
92 @property
93 def refreshtime(self):
94 return self._refreshtime
95
96 @property
97 def title(self):
98 return self._title
99
100 @property
101 def description(self):
102 return self._description
103
104 @property
105 def url(self):
106 return self._url
107
108 @property
109 def calendar_title(self):
110 return self._calendar_title or self._title
111
112 def fetch(self):
113 """Fetch data from source."""
114 try:
115 # fetch returns a list of Collection's
116 entries = self._source.fetch()
117 except Exception:
118 _LOGGER.error(
119 f"fetch failed for source {self._title}:\n{traceback.format_exc()}"
120 )
121 return
122 self._refreshtime = datetime.datetime.now()
123
124 # strip whitespaces
125 for e in entries:
126 e.set_type(e.type.strip())
127
128 # filter hidden entries
129 entries = filter(lambda x: filter_function(x, self._customize), entries)
130
131 # customize fetched entries
132 entries = map(lambda x: customize_function(x, self._customize), entries)
133
134 self._entries = list(entries)
135
136 def get_types(self):
137 """Return set() of all collection types."""
138 types = set()
139 for e in self._entries:
140 types.add(e.type)
141 return types
142
143 def get_upcoming(self, count=None, leadtime=None, types=None, include_today=False):
144 """Return list of all entries, limited by count and/or leadtime.
145
146 Keyword arguments:
147 count -- limits the number of returned entries (default=10)
148 leadtime -- limits the timespan in days of returned entries (default=7, 0 = today)
149 """
150 return self._filter(
151 self._entries,
152 count=count,
153 leadtime=leadtime,
154 types=types,
155 include_today=include_today,
156 )
157
158 def get_upcoming_group_by_day(
159 self, count=None, leadtime=None, types=None, include_today=False
160 ):
161 """Return list of all entries, grouped by day, limited by count and/or leadtime."""
162 entries = []
163
164 iterator = itertools.groupby(
165 self._filter(
166 self._entries,
167 leadtime=leadtime,
168 types=types,
169 include_today=include_today,
170 ),
171 lambda e: e.date,
172 )
173
174 for key, group in iterator:
175 entries.append(CollectionGroup.create(list(group)))
176 if count is not None:
177 entries = entries[:count]
178
179 return entries
180
181 def _filter(
182 self, entries, count=None, leadtime=None, types=None, include_today=False
183 ):
184 # remove unwanted waste types
185 if types is not None:
186 # generate set
187 types_set = {t for t in types}
188 entries = list(filter(lambda e: e.type in types_set, self._entries))
189
190 # remove expired entries
191 now = datetime.datetime.now().date()
192 if include_today:
193 entries = list(filter(lambda e: e.date >= now, entries))
194 else:
195 entries = list(filter(lambda e: e.date > now, entries))
196
197 # remove entries which are too far in the future (0 = today)
198 if leadtime is not None:
199 x = now + datetime.timedelta(days=leadtime)
200 entries = list(filter(lambda e: e.date <= x, entries))
201
202 # ensure that entries are sorted by date
203 entries.sort(key=lambda e: e.date)
204
205 # remove surplus entries
206 if count is not None:
207 entries = entries[:count]
208
209 return entries
210
211 @staticmethod
212 def create(
213 source_name: str,
214 customize: Dict[str, Customize],
215 source_args,
216 calendar_title: Optional[str] = None,
217 ):
218 # load source module
219
220 # for home-assistant, use the last 3 folders, e.g. custom_component/wave_collection_schedule/waste_collection_schedule
221 # otherwise, only use waste_collection_schedule
222 try:
223 source_module = importlib.import_module(
224 f"waste_collection_schedule.source.{source_name}"
225 )
226 except ImportError:
227 _LOGGER.error(f"source not found: {source_name}")
228 return
229
230 # create source
231 source = source_module.Source(**source_args) # type: ignore
232
233 # create scraper
234 g = Scraper(
235 source=source,
236 customize=customize,
237 title=source_module.TITLE, # type: ignore[attr-defined]
238 description=source_module.DESCRIPTION, # type: ignore[attr-defined]
239 url=source_module.URL, # type: ignore[attr-defined]
240 calendar_title=calendar_title,
241 )
242
243 return g
244
```
Path: `custom_components/waste_collection_schedule/calendar.py`
Content:
```
1 """Calendar platform support for Waste Collection Schedule."""
2
3 import logging
4 from datetime import timedelta
5
6 from homeassistant.components.calendar import CalendarEventDevice
7
8 _LOGGER = logging.getLogger(__name__)
9
10
11 async def async_setup_platform(hass, config, async_add_entities, discovery_info=None):
12 """Set up calendar platform."""
13 # We only want this platform to be set up via discovery.
14 if discovery_info is None:
15 return
16
17 entities = []
18
19 api = discovery_info["api"]
20
21 for scraper in api.scrapers:
22 entities.append(WasteCollectionCalendar(api, scraper))
23
24 async_add_entities(entities)
25
26
27 class WasteCollectionCalendar(CalendarEventDevice):
28 """Calendar entity class."""
29
30 def __init__(self, api, scraper):
31 self._api = api
32 self._scraper = scraper
33
34 @property
35 def name(self):
36 """Return entity name."""
37 return self._scraper.calendar_title
38
39 @property
40 def event(self):
41 """Return next collection event."""
42 collections = self._scraper.get_upcoming(count=1, include_today=True)
43 if len(collections) == 0:
44 return None
45 else:
46 return self._convert(collections[0])
47
48 async def async_get_events(self, hass, start_date, end_date):
49 """Return all events within specified time span."""
50 collections = []
51 for a in self._scraper.get_upcoming(include_today=True):
52 if a.date >= start_date.date() and a.date <= end_date.date():
53 collections.append(self._convert(a))
54 return collections
55
56 def _convert(self, collection):
57 """Convert an collection into a Home Assistant calendar event."""
58 return {
59 "uid": f"{self._scraper.calendar_title}-{collection.date.isoformat()}-{collection.type}",
60 "summary": collection.type,
61 "start": {"date": collection.date.isoformat()},
62 "end": {"date": (collection.date + timedelta(days=1)).isoformat()},
63 "allDay": True,
64 }
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/custom_components/waste_collection_schedule/calendar.py b/custom_components/waste_collection_schedule/calendar.py
--- a/custom_components/waste_collection_schedule/calendar.py
+++ b/custom_components/waste_collection_schedule/calendar.py
@@ -30,6 +30,7 @@
def __init__(self, api, scraper):
self._api = api
self._scraper = scraper
+ self._attr_unique_id = scraper.unique_id + "_calendar"
@property
def name(self):
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py b/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py
@@ -75,6 +75,7 @@
description: str,
url: Optional[str],
calendar_title: Optional[str],
+ unique_id: str,
):
self._source = source
self._customize = customize
@@ -82,6 +83,7 @@
self._description = description
self._url = url
self._calendar_title = calendar_title
+ self._unique_id = unique_id
self._refreshtime = None
self._entries: List[Collection] = []
@@ -109,6 +111,10 @@
def calendar_title(self):
return self._calendar_title or self._title
+ @property
+ def unique_id(self):
+ return self._unique_id
+
def fetch(self):
"""Fetch data from source."""
try:
@@ -238,6 +244,11 @@
description=source_module.DESCRIPTION, # type: ignore[attr-defined]
url=source_module.URL, # type: ignore[attr-defined]
calendar_title=calendar_title,
+ unique_id=calc_unique_scraper_id(source_name, source_args),
)
return g
+
+
+def calc_unique_scraper_id(source_name, source_args):
+ return source_name + str(sorted(source_args.items()))
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/calendar.py b/custom_components/waste_collection_schedule/calendar.py\n--- a/custom_components/waste_collection_schedule/calendar.py\n+++ b/custom_components/waste_collection_schedule/calendar.py\n@@ -30,6 +30,7 @@\n def __init__(self, api, scraper):\n self._api = api\n self._scraper = scraper\n+ self._attr_unique_id = scraper.unique_id + \"_calendar\"\n \n @property\n def name(self):\ndiff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py b/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py\n@@ -75,6 +75,7 @@\n description: str,\n url: Optional[str],\n calendar_title: Optional[str],\n+ unique_id: str,\n ):\n self._source = source\n self._customize = customize\n@@ -82,6 +83,7 @@\n self._description = description\n self._url = url\n self._calendar_title = calendar_title\n+ self._unique_id = unique_id\n self._refreshtime = None\n self._entries: List[Collection] = []\n \n@@ -109,6 +111,10 @@\n def calendar_title(self):\n return self._calendar_title or self._title\n \n+ @property\n+ def unique_id(self):\n+ return self._unique_id\n+\n def fetch(self):\n \"\"\"Fetch data from source.\"\"\"\n try:\n@@ -238,6 +244,11 @@\n description=source_module.DESCRIPTION, # type: ignore[attr-defined]\n url=source_module.URL, # type: ignore[attr-defined]\n calendar_title=calendar_title,\n+ unique_id=calc_unique_scraper_id(source_name, source_args),\n )\n \n return g\n+\n+\n+def calc_unique_scraper_id(source_name, source_args):\n+ return source_name + str(sorted(source_args.items()))\n", "issue": "Unique ID for entity/entities\nFrankly, this is a minor / cosmetic structural issue, but its nice to have regarding the 2022.4 update, because it enables unique entites to be hidden (instead of on/off) - and it would be nice to have this on top of this already fantastic extension. \r\n\r\n**Context:** Setting up hacs_waste_collection_schedule creates an Entry in the default Dashboard (former Lovelace) called Calendar/Kalender with a single entry, in my case \"AWB K\u00f6ln\". It shows only the on/off status. Its not showing the different countdowns till the pick-up dates for each type of trash.\r\n\r\n**Expected Behaviour:** Upon Clicking the entry \"AWB K\u00f6ln\" and further clicking \"settings\", I would be able to switch the entry to \"hidden\", thus making it invisible to default Dashboards.\r\n\r\n**Current Behaviour:** Clicking \"settings\" reveals that the entity has no unique id and can't be set to hidden. \r\n\r\n**Tested solutions:** Manually adding a \"unique_id\" the entry via configuration.yaml - throws an error and does not work. 
Either \r\n unique_id: awbkoelnsource\r\n unique_id: calendar.awbkoelnsource\r\ndid not work.\r\n\r\nAttached: Screenshots (in german) for explanation:\r\n1: Shown entry in dashboard (former lovelace)\r\n2: Error when trying to access the entrys settings\r\n3: Snippet of the configuration.yaml and the tried postiton for a unique_id\r\n\r\n\r\n\r\n\r\n \n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport datetime\nimport importlib\nimport itertools\nimport logging\nimport traceback\nfrom typing import Dict, List, Optional\n\nfrom .collection import Collection, CollectionGroup\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Customize:\n \"\"\"Customize one waste collection type.\"\"\"\n\n def __init__(self, waste_type, alias=None, show=True, icon=None, picture=None):\n self._waste_type = waste_type\n self._alias = alias\n self._show = show\n self._icon = icon\n self._picture = picture\n\n @property\n def waste_type(self):\n return self._waste_type\n\n @property\n def alias(self):\n return self._alias\n\n @property\n def show(self):\n return self._show\n\n @property\n def icon(self):\n return self._icon\n\n @property\n def picture(self):\n return self._picture\n\n def __repr__(self):\n return f\"Customize{{waste_type={self._waste_type}, alias={self._alias}, show={self._show}, icon={self._icon}, picture={self._picture}}}\"\n\n\ndef filter_function(entry: Collection, customize: Dict[str, Customize]):\n c = customize.get(entry.type)\n if c is None:\n return True\n else:\n return c.show\n\n\ndef customize_function(entry: Collection, customize: Dict[str, Customize]):\n c = customize.get(entry.type)\n if c is not None:\n if c.alias is not None:\n entry.set_type(c.alias)\n if c.icon is not None:\n entry.set_icon(c.icon)\n if c.picture is not None:\n entry.set_picture(c.picture)\n return entry\n\n\nclass Scraper:\n def __init__(\n self,\n source,\n customize: Dict[str, Customize],\n title: str,\n description: str,\n url: Optional[str],\n calendar_title: Optional[str],\n ):\n self._source = source\n self._customize = customize\n self._title = title\n self._description = description\n self._url = url\n self._calendar_title = calendar_title\n self._refreshtime = None\n self._entries: List[Collection] = []\n\n @property\n def source(self):\n return self._source\n\n @property\n def refreshtime(self):\n return self._refreshtime\n\n @property\n def title(self):\n return self._title\n\n @property\n def description(self):\n return self._description\n\n @property\n def url(self):\n return self._url\n\n @property\n def calendar_title(self):\n return self._calendar_title or self._title\n\n def fetch(self):\n \"\"\"Fetch data from source.\"\"\"\n try:\n # fetch returns a list of Collection's\n entries = self._source.fetch()\n except Exception:\n _LOGGER.error(\n f\"fetch failed for source {self._title}:\\n{traceback.format_exc()}\"\n )\n return\n self._refreshtime = datetime.datetime.now()\n\n # strip whitespaces\n for e in entries:\n e.set_type(e.type.strip())\n\n # filter hidden entries\n entries = filter(lambda x: filter_function(x, self._customize), entries)\n\n # customize fetched entries\n entries = map(lambda x: customize_function(x, self._customize), entries)\n\n self._entries = list(entries)\n\n def get_types(self):\n \"\"\"Return set() of all collection types.\"\"\"\n types = set()\n for e in self._entries:\n types.add(e.type)\n return types\n\n def get_upcoming(self, count=None, leadtime=None, types=None, include_today=False):\n \"\"\"Return list of all entries, limited by count 
and/or leadtime.\n\n Keyword arguments:\n count -- limits the number of returned entries (default=10)\n leadtime -- limits the timespan in days of returned entries (default=7, 0 = today)\n \"\"\"\n return self._filter(\n self._entries,\n count=count,\n leadtime=leadtime,\n types=types,\n include_today=include_today,\n )\n\n def get_upcoming_group_by_day(\n self, count=None, leadtime=None, types=None, include_today=False\n ):\n \"\"\"Return list of all entries, grouped by day, limited by count and/or leadtime.\"\"\"\n entries = []\n\n iterator = itertools.groupby(\n self._filter(\n self._entries,\n leadtime=leadtime,\n types=types,\n include_today=include_today,\n ),\n lambda e: e.date,\n )\n\n for key, group in iterator:\n entries.append(CollectionGroup.create(list(group)))\n if count is not None:\n entries = entries[:count]\n\n return entries\n\n def _filter(\n self, entries, count=None, leadtime=None, types=None, include_today=False\n ):\n # remove unwanted waste types\n if types is not None:\n # generate set\n types_set = {t for t in types}\n entries = list(filter(lambda e: e.type in types_set, self._entries))\n\n # remove expired entries\n now = datetime.datetime.now().date()\n if include_today:\n entries = list(filter(lambda e: e.date >= now, entries))\n else:\n entries = list(filter(lambda e: e.date > now, entries))\n\n # remove entries which are too far in the future (0 = today)\n if leadtime is not None:\n x = now + datetime.timedelta(days=leadtime)\n entries = list(filter(lambda e: e.date <= x, entries))\n\n # ensure that entries are sorted by date\n entries.sort(key=lambda e: e.date)\n\n # remove surplus entries\n if count is not None:\n entries = entries[:count]\n\n return entries\n\n @staticmethod\n def create(\n source_name: str,\n customize: Dict[str, Customize],\n source_args,\n calendar_title: Optional[str] = None,\n ):\n # load source module\n\n # for home-assistant, use the last 3 folders, e.g. 
custom_component/wave_collection_schedule/waste_collection_schedule\n # otherwise, only use waste_collection_schedule\n try:\n source_module = importlib.import_module(\n f\"waste_collection_schedule.source.{source_name}\"\n )\n except ImportError:\n _LOGGER.error(f\"source not found: {source_name}\")\n return\n\n # create source\n source = source_module.Source(**source_args) # type: ignore\n\n # create scraper\n g = Scraper(\n source=source,\n customize=customize,\n title=source_module.TITLE, # type: ignore[attr-defined]\n description=source_module.DESCRIPTION, # type: ignore[attr-defined]\n url=source_module.URL, # type: ignore[attr-defined]\n calendar_title=calendar_title,\n )\n\n return g\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py"}, {"content": "\"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n\nimport logging\nfrom datetime import timedelta\n\nfrom homeassistant.components.calendar import CalendarEventDevice\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def async_setup_platform(hass, config, async_add_entities, discovery_info=None):\n \"\"\"Set up calendar platform.\"\"\"\n # We only want this platform to be set up via discovery.\n if discovery_info is None:\n return\n\n entities = []\n\n api = discovery_info[\"api\"]\n\n for scraper in api.scrapers:\n entities.append(WasteCollectionCalendar(api, scraper))\n\n async_add_entities(entities)\n\n\nclass WasteCollectionCalendar(CalendarEventDevice):\n \"\"\"Calendar entity class.\"\"\"\n\n def __init__(self, api, scraper):\n self._api = api\n self._scraper = scraper\n\n @property\n def name(self):\n \"\"\"Return entity name.\"\"\"\n return self._scraper.calendar_title\n\n @property\n def event(self):\n \"\"\"Return next collection event.\"\"\"\n collections = self._scraper.get_upcoming(count=1, include_today=True)\n if len(collections) == 0:\n return None\n else:\n return self._convert(collections[0])\n\n async def async_get_events(self, hass, start_date, end_date):\n \"\"\"Return all events within specified time span.\"\"\"\n collections = []\n for a in self._scraper.get_upcoming(include_today=True):\n if a.date >= start_date.date() and a.date <= end_date.date():\n collections.append(self._convert(a))\n return collections\n\n def _convert(self, collection):\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\n return {\n \"uid\": f\"{self._scraper.calendar_title}-{collection.date.isoformat()}-{collection.type}\",\n \"summary\": collection.type,\n \"start\": {\"date\": collection.date.isoformat()},\n \"end\": {\"date\": (collection.date + timedelta(days=1)).isoformat()},\n \"allDay\": True,\n }\n", "path": "custom_components/waste_collection_schedule/calendar.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport datetime\nimport importlib\nimport itertools\nimport logging\nimport traceback\nfrom typing import Dict, List, Optional\n\nfrom .collection import Collection, CollectionGroup\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Customize:\n \"\"\"Customize one waste collection type.\"\"\"\n\n def __init__(self, waste_type, alias=None, show=True, icon=None, picture=None):\n self._waste_type = waste_type\n self._alias = alias\n self._show = show\n self._icon = icon\n self._picture = picture\n\n @property\n def waste_type(self):\n return self._waste_type\n\n @property\n def alias(self):\n return self._alias\n\n @property\n def show(self):\n return self._show\n\n @property\n def icon(self):\n return self._icon\n\n 
@property\n def picture(self):\n return self._picture\n\n def __repr__(self):\n return f\"Customize{{waste_type={self._waste_type}, alias={self._alias}, show={self._show}, icon={self._icon}, picture={self._picture}}}\"\n\n\ndef filter_function(entry: Collection, customize: Dict[str, Customize]):\n c = customize.get(entry.type)\n if c is None:\n return True\n else:\n return c.show\n\n\ndef customize_function(entry: Collection, customize: Dict[str, Customize]):\n c = customize.get(entry.type)\n if c is not None:\n if c.alias is not None:\n entry.set_type(c.alias)\n if c.icon is not None:\n entry.set_icon(c.icon)\n if c.picture is not None:\n entry.set_picture(c.picture)\n return entry\n\n\nclass Scraper:\n def __init__(\n self,\n source,\n customize: Dict[str, Customize],\n title: str,\n description: str,\n url: Optional[str],\n calendar_title: Optional[str],\n unique_id: str,\n ):\n self._source = source\n self._customize = customize\n self._title = title\n self._description = description\n self._url = url\n self._calendar_title = calendar_title\n self._unique_id = unique_id\n self._refreshtime = None\n self._entries: List[Collection] = []\n\n @property\n def source(self):\n return self._source\n\n @property\n def refreshtime(self):\n return self._refreshtime\n\n @property\n def title(self):\n return self._title\n\n @property\n def description(self):\n return self._description\n\n @property\n def url(self):\n return self._url\n\n @property\n def calendar_title(self):\n return self._calendar_title or self._title\n\n @property\n def unique_id(self):\n return self._unique_id\n\n def fetch(self):\n \"\"\"Fetch data from source.\"\"\"\n try:\n # fetch returns a list of Collection's\n entries = self._source.fetch()\n except Exception:\n _LOGGER.error(\n f\"fetch failed for source {self._title}:\\n{traceback.format_exc()}\"\n )\n return\n self._refreshtime = datetime.datetime.now()\n\n # strip whitespaces\n for e in entries:\n e.set_type(e.type.strip())\n\n # filter hidden entries\n entries = filter(lambda x: filter_function(x, self._customize), entries)\n\n # customize fetched entries\n entries = map(lambda x: customize_function(x, self._customize), entries)\n\n self._entries = list(entries)\n\n def get_types(self):\n \"\"\"Return set() of all collection types.\"\"\"\n types = set()\n for e in self._entries:\n types.add(e.type)\n return types\n\n def get_upcoming(self, count=None, leadtime=None, types=None, include_today=False):\n \"\"\"Return list of all entries, limited by count and/or leadtime.\n\n Keyword arguments:\n count -- limits the number of returned entries (default=10)\n leadtime -- limits the timespan in days of returned entries (default=7, 0 = today)\n \"\"\"\n return self._filter(\n self._entries,\n count=count,\n leadtime=leadtime,\n types=types,\n include_today=include_today,\n )\n\n def get_upcoming_group_by_day(\n self, count=None, leadtime=None, types=None, include_today=False\n ):\n \"\"\"Return list of all entries, grouped by day, limited by count and/or leadtime.\"\"\"\n entries = []\n\n iterator = itertools.groupby(\n self._filter(\n self._entries,\n leadtime=leadtime,\n types=types,\n include_today=include_today,\n ),\n lambda e: e.date,\n )\n\n for key, group in iterator:\n entries.append(CollectionGroup.create(list(group)))\n if count is not None:\n entries = entries[:count]\n\n return entries\n\n def _filter(\n self, entries, count=None, leadtime=None, types=None, include_today=False\n ):\n # remove unwanted waste types\n if types is not None:\n # generate set\n 
types_set = {t for t in types}\n entries = list(filter(lambda e: e.type in types_set, self._entries))\n\n # remove expired entries\n now = datetime.datetime.now().date()\n if include_today:\n entries = list(filter(lambda e: e.date >= now, entries))\n else:\n entries = list(filter(lambda e: e.date > now, entries))\n\n # remove entries which are too far in the future (0 = today)\n if leadtime is not None:\n x = now + datetime.timedelta(days=leadtime)\n entries = list(filter(lambda e: e.date <= x, entries))\n\n # ensure that entries are sorted by date\n entries.sort(key=lambda e: e.date)\n\n # remove surplus entries\n if count is not None:\n entries = entries[:count]\n\n return entries\n\n @staticmethod\n def create(\n source_name: str,\n customize: Dict[str, Customize],\n source_args,\n calendar_title: Optional[str] = None,\n ):\n # load source module\n\n # for home-assistant, use the last 3 folders, e.g. custom_component/wave_collection_schedule/waste_collection_schedule\n # otherwise, only use waste_collection_schedule\n try:\n source_module = importlib.import_module(\n f\"waste_collection_schedule.source.{source_name}\"\n )\n except ImportError:\n _LOGGER.error(f\"source not found: {source_name}\")\n return\n\n # create source\n source = source_module.Source(**source_args) # type: ignore\n\n # create scraper\n g = Scraper(\n source=source,\n customize=customize,\n title=source_module.TITLE, # type: ignore[attr-defined]\n description=source_module.DESCRIPTION, # type: ignore[attr-defined]\n url=source_module.URL, # type: ignore[attr-defined]\n calendar_title=calendar_title,\n unique_id=calc_unique_scraper_id(source_name, source_args),\n )\n\n return g\n\n\ndef calc_unique_scraper_id(source_name, source_args):\n return source_name + str(sorted(source_args.items()))\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py"}, {"content": "\"\"\"Calendar platform support for Waste Collection Schedule.\"\"\"\n\nimport logging\nfrom datetime import timedelta\n\nfrom homeassistant.components.calendar import CalendarEventDevice\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def async_setup_platform(hass, config, async_add_entities, discovery_info=None):\n \"\"\"Set up calendar platform.\"\"\"\n # We only want this platform to be set up via discovery.\n if discovery_info is None:\n return\n\n entities = []\n\n api = discovery_info[\"api\"]\n\n for scraper in api.scrapers:\n entities.append(WasteCollectionCalendar(api, scraper))\n\n async_add_entities(entities)\n\n\nclass WasteCollectionCalendar(CalendarEventDevice):\n \"\"\"Calendar entity class.\"\"\"\n\n def __init__(self, api, scraper):\n self._api = api\n self._scraper = scraper\n self._attr_unique_id = scraper.unique_id + \"_calendar\"\n\n @property\n def name(self):\n \"\"\"Return entity name.\"\"\"\n return self._scraper.calendar_title\n\n @property\n def event(self):\n \"\"\"Return next collection event.\"\"\"\n collections = self._scraper.get_upcoming(count=1, include_today=True)\n if len(collections) == 0:\n return None\n else:\n return self._convert(collections[0])\n\n async def async_get_events(self, hass, start_date, end_date):\n \"\"\"Return all events within specified time span.\"\"\"\n collections = []\n for a in self._scraper.get_upcoming(include_today=True):\n if a.date >= start_date.date() and a.date <= end_date.date():\n collections.append(self._convert(a))\n return collections\n\n def _convert(self, collection):\n \"\"\"Convert an collection into a Home Assistant calendar event.\"\"\"\n 
return {\n \"uid\": f\"{self._scraper.calendar_title}-{collection.date.isoformat()}-{collection.type}\",\n \"summary\": collection.type,\n \"start\": {\"date\": collection.date.isoformat()},\n \"end\": {\"date\": (collection.date + timedelta(days=1)).isoformat()},\n \"allDay\": True,\n }\n", "path": "custom_components/waste_collection_schedule/calendar.py"}]}
| 3,558 | 466 |
gh_patches_debug_57045
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmcv-474
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug feedback.
Thanks for your code.
In [mmcv/runner/base_runner](https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/base_runner.py#L385),
` def register_logger_hooks(self, log_config):
log_interval = log_config['interval']
for info in log_config['hooks']:
logger_hook = mmcv.build_from_cfg(
info, HOOKS, default_args=dict(interval=log_interval))
self.register_hook(logger_hook, priority='VERY_LOW')
`
.
Only the 'interval' argument from log_config is fed into each logger hook. The 'by_epoch' argument is ignored and always stays at its default of true, so when an iter-based runner is used the logger behaves unexpectedly.
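For illustration, a minimal sketch of the kind of adjustment being asked for, assuming each entry in `log_config['hooks']` is a plain, mutable config dict (the helper name `apply_iter_defaults` is only illustrative and is not part of mmcv):

```python
# Sketch only: make every logger hook default to iteration-based logging
# before the hooks are built via mmcv.build_from_cfg().
def apply_iter_defaults(log_config):
    if log_config is not None:
        for info in log_config['hooks']:
            # setdefault keeps any 'by_epoch' value the user set explicitly
            info.setdefault('by_epoch', False)
    return log_config


# Example usage with a typical config dict:
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
apply_iter_defaults(log_config)
assert log_config['hooks'][0]['by_epoch'] is False
```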
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmcv/runner/iter_based_runner.py`
Content:
```
1 # Copyright (c) Open-MMLab. All rights reserved.
2 import os.path as osp
3 import time
4
5 import torch
6 from torch.optim import Optimizer
7
8 import mmcv
9 from .base_runner import BaseRunner
10 from .checkpoint import save_checkpoint
11 from .hooks import IterTimerHook
12 from .utils import get_host_info
13
14
15 class IterLoader:
16
17 def __init__(self, dataloader):
18 self._dataloader = dataloader
19 self.iter_loader = iter(self._dataloader)
20 self._epoch = 0
21
22 @property
23 def epoch(self):
24 return self._epoch
25
26 def __next__(self):
27 try:
28 data = next(self.iter_loader)
29 except StopIteration:
30 self._epoch += 1
31 if hasattr(self._dataloader.sampler, 'set_epoch'):
32 self._dataloader.sampler.set_epoch(self._epoch)
33 self.iter_loader = iter(self._dataloader)
34 data = next(self.iter_loader)
35
36 return data
37
38 def __len__(self):
39 return len(self._dataloader)
40
41
42 class IterBasedRunner(BaseRunner):
43 """Iteration-based Runner.
44
45 This runner train models iteration by iteration.
46 """
47
48 def train(self, data_loader, **kwargs):
49 self.model.train()
50 self.mode = 'train'
51 self.data_loader = data_loader
52 self._epoch = data_loader.epoch
53 self.call_hook('before_train_iter')
54 data_batch = next(data_loader)
55 outputs = self.model.train_step(data_batch, self.optimizer, **kwargs)
56 if not isinstance(outputs, dict):
57 raise TypeError('model.train_step() must return a dict')
58 if 'log_vars' in outputs:
59 self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
60 self.outputs = outputs
61 self.call_hook('after_train_iter')
62 self._inner_iter += 1
63 self._iter += 1
64
65 def val(self, data_loader, **kwargs):
66 self.model.eval()
67 self.mode = 'val'
68 self.data_loader = data_loader
69 self.call_hook('before_val_iter')
70 data_batch = next(data_loader)
71 outputs = self.model.val_step(data_batch, **kwargs)
72 if not isinstance(outputs, dict):
73 raise TypeError('model.val_step() must return a dict')
74 if 'log_vars' in outputs:
75 self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
76 self.outputs = outputs
77 self.call_hook('after_val_iter')
78 self._inner_iter += 1
79
80 def run(self, data_loaders, workflow, max_iters, **kwargs):
81 """Start running.
82
83 Args:
84 data_loaders (list[:obj:`DataLoader`]): Dataloaders for training
85 and validation.
86 workflow (list[tuple]): A list of (phase, iters) to specify the
87 running order and iterations. E.g, [('train', 10000),
88 ('val', 1000)] means running 10000 iterations for training and
89 1000 iterations for validation, iteratively.
90 max_iters (int): Total training iterations.
91 """
92 assert isinstance(data_loaders, list)
93 assert mmcv.is_list_of(workflow, tuple)
94 assert len(data_loaders) == len(workflow)
95
96 self._max_iters = max_iters
97 work_dir = self.work_dir if self.work_dir is not None else 'NONE'
98 self.logger.info('Start running, host: %s, work_dir: %s',
99 get_host_info(), work_dir)
100 self.logger.info('workflow: %s, max: %d iters', workflow, max_iters)
101 self.call_hook('before_run')
102
103 iter_loaders = [IterLoader(x) for x in data_loaders]
104
105 self.call_hook('before_epoch')
106
107 while self.iter < max_iters:
108 for i, flow in enumerate(workflow):
109 self._inner_iter = 0
110 mode, iters = flow
111 if not isinstance(mode, str) or not hasattr(self, mode):
112 raise ValueError(
113 'runner has no method named "{}" to run a workflow'.
114 format(mode))
115 iter_runner = getattr(self, mode)
116 for _ in range(iters):
117 if mode == 'train' and self.iter >= max_iters:
118 break
119 iter_runner(iter_loaders[i], **kwargs)
120
121 time.sleep(1) # wait for some hooks like loggers to finish
122 self.call_hook('after_epoch')
123 self.call_hook('after_run')
124
125 def resume(self,
126 checkpoint,
127 resume_optimizer=True,
128 map_location='default'):
129 """Resume model from checkpoint.
130
131 Args:
132 checkpoint (str): Checkpoint to resume from.
133 resume_optimizer (bool, optional): Whether resume the optimizer(s)
134 if the checkpoint file includes optimizer(s). Default to True.
135 map_location (str, optional): Same as :func:`torch.load`.
136 Default to 'default'.
137 """
138 if map_location == 'default':
139 device_id = torch.cuda.current_device()
140 checkpoint = self.load_checkpoint(
141 checkpoint,
142 map_location=lambda storage, loc: storage.cuda(device_id))
143 else:
144 checkpoint = self.load_checkpoint(
145 checkpoint, map_location=map_location)
146
147 self._epoch = checkpoint['meta']['epoch']
148 self._iter = checkpoint['meta']['iter']
149 self._inner_iter = checkpoint['meta']['iter']
150 if 'optimizer' in checkpoint and resume_optimizer:
151 if isinstance(self.optimizer, Optimizer):
152 self.optimizer.load_state_dict(checkpoint['optimizer'])
153 elif isinstance(self.optimizer, dict):
154 for k in self.optimizer.keys():
155 self.optimizer[k].load_state_dict(
156 checkpoint['optimizer'][k])
157
158 self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}')
159
160 def save_checkpoint(self,
161 out_dir,
162 filename_tmpl='iter_{}.pth',
163 meta=None,
164 save_optimizer=True,
165 create_symlink=True):
166 """Save checkpoint to file.
167
168 Args:
169 out_dir (str): Directory to save checkpoint files.
170 filename_tmpl (str, optional): Checkpoint file template.
171 Defaults to 'iter_{}.pth'.
172 meta (dict, optional): Metadata to be saved in checkpoint.
173 Defaults to None.
174 save_optimizer (bool, optional): Whether save optimizer.
175 Defaults to True.
176 create_symlink (bool, optional): Whether create symlink to the
177 latest checkpoint file. Defaults to True.
178 """
179 if meta is None:
180 meta = dict(iter=self.iter + 1, epoch=self.epoch + 1)
181 elif isinstance(meta, dict):
182 meta.update(iter=self.iter + 1, epoch=self.epoch + 1)
183 else:
184 raise TypeError(
185 f'meta should be a dict or None, but got {type(meta)}')
186 if self.meta is not None:
187 meta.update(self.meta)
188
189 filename = filename_tmpl.format(self.iter + 1)
190 filepath = osp.join(out_dir, filename)
191 optimizer = self.optimizer if save_optimizer else None
192 save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta)
193 # in some environments, `os.symlink` is not supported, you may need to
194 # set `create_symlink` to False
195 if create_symlink:
196 mmcv.symlink(filename, osp.join(out_dir, 'latest.pth'))
197
198 def register_training_hooks(self,
199 lr_config,
200 optimizer_config=None,
201 checkpoint_config=None,
202 log_config=None,
203 momentum_config=None):
204 """Register default hooks for iter-based training.
205
206 Default hooks include:
207
208 - LrUpdaterHook
209 - MomentumUpdaterHook
210 - OptimizerStepperHook
211 - CheckpointSaverHook
212 - IterTimerHook
213 - LoggerHook(s)
214 """
215 if checkpoint_config is not None:
216 checkpoint_config.setdefault('by_epoch', False)
217 if lr_config is not None:
218 lr_config.setdefault('by_epoch', False)
219 self.register_lr_hook(lr_config)
220 self.register_momentum_hook(momentum_config)
221 self.register_optimizer_hook(optimizer_config)
222 self.register_checkpoint_hook(checkpoint_config)
223 self.register_hook(IterTimerHook())
224 if log_config is not None:
225 log_config.setdefault('by_epoch', False)
226 self.register_logger_hooks(log_config)
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmcv/runner/iter_based_runner.py b/mmcv/runner/iter_based_runner.py
--- a/mmcv/runner/iter_based_runner.py
+++ b/mmcv/runner/iter_based_runner.py
@@ -222,5 +222,6 @@
self.register_checkpoint_hook(checkpoint_config)
self.register_hook(IterTimerHook())
if log_config is not None:
- log_config.setdefault('by_epoch', False)
+ for info in log_config['hooks']:
+ info.setdefault('by_epoch', False)
self.register_logger_hooks(log_config)
|
{"golden_diff": "diff --git a/mmcv/runner/iter_based_runner.py b/mmcv/runner/iter_based_runner.py\n--- a/mmcv/runner/iter_based_runner.py\n+++ b/mmcv/runner/iter_based_runner.py\n@@ -222,5 +222,6 @@\n self.register_checkpoint_hook(checkpoint_config)\n self.register_hook(IterTimerHook())\n if log_config is not None:\n- log_config.setdefault('by_epoch', False)\n+ for info in log_config['hooks']:\n+ info.setdefault('by_epoch', False)\n self.register_logger_hooks(log_config)\n", "issue": "Bug feedback.\nThanks for your codes. \r\nIn [mmcv/runner/base_runner](https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/base_runner.py#L385), \r\n` def register_logger_hooks(self, log_config):\r\n log_interval = log_config['interval']\r\n for info in log_config['hooks']:\r\n logger_hook = mmcv.build_from_cfg(\r\n info, HOOKS, default_args=dict(interval=log_interval))\r\n self.register_hook(logger_hook, priority='VERY_LOW')\r\n`\r\n.\r\nOnly the argument 'interval' in log_config is fed into logger_hook. However, the argument 'by_epoch' is ignored, always true, so that if iter_based_runner is used, the logger will work unexpectedly. \n", "before_files": [{"content": "# Copyright (c) Open-MMLab. All rights reserved.\nimport os.path as osp\nimport time\n\nimport torch\nfrom torch.optim import Optimizer\n\nimport mmcv\nfrom .base_runner import BaseRunner\nfrom .checkpoint import save_checkpoint\nfrom .hooks import IterTimerHook\nfrom .utils import get_host_info\n\n\nclass IterLoader:\n\n def __init__(self, dataloader):\n self._dataloader = dataloader\n self.iter_loader = iter(self._dataloader)\n self._epoch = 0\n\n @property\n def epoch(self):\n return self._epoch\n\n def __next__(self):\n try:\n data = next(self.iter_loader)\n except StopIteration:\n self._epoch += 1\n if hasattr(self._dataloader.sampler, 'set_epoch'):\n self._dataloader.sampler.set_epoch(self._epoch)\n self.iter_loader = iter(self._dataloader)\n data = next(self.iter_loader)\n\n return data\n\n def __len__(self):\n return len(self._dataloader)\n\n\nclass IterBasedRunner(BaseRunner):\n \"\"\"Iteration-based Runner.\n\n This runner train models iteration by iteration.\n \"\"\"\n\n def train(self, data_loader, **kwargs):\n self.model.train()\n self.mode = 'train'\n self.data_loader = data_loader\n self._epoch = data_loader.epoch\n self.call_hook('before_train_iter')\n data_batch = next(data_loader)\n outputs = self.model.train_step(data_batch, self.optimizer, **kwargs)\n if not isinstance(outputs, dict):\n raise TypeError('model.train_step() must return a dict')\n if 'log_vars' in outputs:\n self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])\n self.outputs = outputs\n self.call_hook('after_train_iter')\n self._inner_iter += 1\n self._iter += 1\n\n def val(self, data_loader, **kwargs):\n self.model.eval()\n self.mode = 'val'\n self.data_loader = data_loader\n self.call_hook('before_val_iter')\n data_batch = next(data_loader)\n outputs = self.model.val_step(data_batch, **kwargs)\n if not isinstance(outputs, dict):\n raise TypeError('model.val_step() must return a dict')\n if 'log_vars' in outputs:\n self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])\n self.outputs = outputs\n self.call_hook('after_val_iter')\n self._inner_iter += 1\n\n def run(self, data_loaders, workflow, max_iters, **kwargs):\n \"\"\"Start running.\n\n Args:\n data_loaders (list[:obj:`DataLoader`]): Dataloaders for training\n and validation.\n workflow (list[tuple]): A list of (phase, iters) to specify the\n running order and iterations. 
E.g, [('train', 10000),\n ('val', 1000)] means running 10000 iterations for training and\n 1000 iterations for validation, iteratively.\n max_iters (int): Total training iterations.\n \"\"\"\n assert isinstance(data_loaders, list)\n assert mmcv.is_list_of(workflow, tuple)\n assert len(data_loaders) == len(workflow)\n\n self._max_iters = max_iters\n work_dir = self.work_dir if self.work_dir is not None else 'NONE'\n self.logger.info('Start running, host: %s, work_dir: %s',\n get_host_info(), work_dir)\n self.logger.info('workflow: %s, max: %d iters', workflow, max_iters)\n self.call_hook('before_run')\n\n iter_loaders = [IterLoader(x) for x in data_loaders]\n\n self.call_hook('before_epoch')\n\n while self.iter < max_iters:\n for i, flow in enumerate(workflow):\n self._inner_iter = 0\n mode, iters = flow\n if not isinstance(mode, str) or not hasattr(self, mode):\n raise ValueError(\n 'runner has no method named \"{}\" to run a workflow'.\n format(mode))\n iter_runner = getattr(self, mode)\n for _ in range(iters):\n if mode == 'train' and self.iter >= max_iters:\n break\n iter_runner(iter_loaders[i], **kwargs)\n\n time.sleep(1) # wait for some hooks like loggers to finish\n self.call_hook('after_epoch')\n self.call_hook('after_run')\n\n def resume(self,\n checkpoint,\n resume_optimizer=True,\n map_location='default'):\n \"\"\"Resume model from checkpoint.\n\n Args:\n checkpoint (str): Checkpoint to resume from.\n resume_optimizer (bool, optional): Whether resume the optimizer(s)\n if the checkpoint file includes optimizer(s). Default to True.\n map_location (str, optional): Same as :func:`torch.load`.\n Default to 'default'.\n \"\"\"\n if map_location == 'default':\n device_id = torch.cuda.current_device()\n checkpoint = self.load_checkpoint(\n checkpoint,\n map_location=lambda storage, loc: storage.cuda(device_id))\n else:\n checkpoint = self.load_checkpoint(\n checkpoint, map_location=map_location)\n\n self._epoch = checkpoint['meta']['epoch']\n self._iter = checkpoint['meta']['iter']\n self._inner_iter = checkpoint['meta']['iter']\n if 'optimizer' in checkpoint and resume_optimizer:\n if isinstance(self.optimizer, Optimizer):\n self.optimizer.load_state_dict(checkpoint['optimizer'])\n elif isinstance(self.optimizer, dict):\n for k in self.optimizer.keys():\n self.optimizer[k].load_state_dict(\n checkpoint['optimizer'][k])\n\n self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}')\n\n def save_checkpoint(self,\n out_dir,\n filename_tmpl='iter_{}.pth',\n meta=None,\n save_optimizer=True,\n create_symlink=True):\n \"\"\"Save checkpoint to file.\n\n Args:\n out_dir (str): Directory to save checkpoint files.\n filename_tmpl (str, optional): Checkpoint file template.\n Defaults to 'iter_{}.pth'.\n meta (dict, optional): Metadata to be saved in checkpoint.\n Defaults to None.\n save_optimizer (bool, optional): Whether save optimizer.\n Defaults to True.\n create_symlink (bool, optional): Whether create symlink to the\n latest checkpoint file. 
Defaults to True.\n \"\"\"\n if meta is None:\n meta = dict(iter=self.iter + 1, epoch=self.epoch + 1)\n elif isinstance(meta, dict):\n meta.update(iter=self.iter + 1, epoch=self.epoch + 1)\n else:\n raise TypeError(\n f'meta should be a dict or None, but got {type(meta)}')\n if self.meta is not None:\n meta.update(self.meta)\n\n filename = filename_tmpl.format(self.iter + 1)\n filepath = osp.join(out_dir, filename)\n optimizer = self.optimizer if save_optimizer else None\n save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta)\n # in some environments, `os.symlink` is not supported, you may need to\n # set `create_symlink` to False\n if create_symlink:\n mmcv.symlink(filename, osp.join(out_dir, 'latest.pth'))\n\n def register_training_hooks(self,\n lr_config,\n optimizer_config=None,\n checkpoint_config=None,\n log_config=None,\n momentum_config=None):\n \"\"\"Register default hooks for iter-based training.\n\n Default hooks include:\n\n - LrUpdaterHook\n - MomentumUpdaterHook\n - OptimizerStepperHook\n - CheckpointSaverHook\n - IterTimerHook\n - LoggerHook(s)\n \"\"\"\n if checkpoint_config is not None:\n checkpoint_config.setdefault('by_epoch', False)\n if lr_config is not None:\n lr_config.setdefault('by_epoch', False)\n self.register_lr_hook(lr_config)\n self.register_momentum_hook(momentum_config)\n self.register_optimizer_hook(optimizer_config)\n self.register_checkpoint_hook(checkpoint_config)\n self.register_hook(IterTimerHook())\n if log_config is not None:\n log_config.setdefault('by_epoch', False)\n self.register_logger_hooks(log_config)\n", "path": "mmcv/runner/iter_based_runner.py"}], "after_files": [{"content": "# Copyright (c) Open-MMLab. All rights reserved.\nimport os.path as osp\nimport time\n\nimport torch\nfrom torch.optim import Optimizer\n\nimport mmcv\nfrom .base_runner import BaseRunner\nfrom .checkpoint import save_checkpoint\nfrom .hooks import IterTimerHook\nfrom .utils import get_host_info\n\n\nclass IterLoader:\n\n def __init__(self, dataloader):\n self._dataloader = dataloader\n self.iter_loader = iter(self._dataloader)\n self._epoch = 0\n\n @property\n def epoch(self):\n return self._epoch\n\n def __next__(self):\n try:\n data = next(self.iter_loader)\n except StopIteration:\n self._epoch += 1\n if hasattr(self._dataloader.sampler, 'set_epoch'):\n self._dataloader.sampler.set_epoch(self._epoch)\n self.iter_loader = iter(self._dataloader)\n data = next(self.iter_loader)\n\n return data\n\n def __len__(self):\n return len(self._dataloader)\n\n\nclass IterBasedRunner(BaseRunner):\n \"\"\"Iteration-based Runner.\n\n This runner train models iteration by iteration.\n \"\"\"\n\n def train(self, data_loader, **kwargs):\n self.model.train()\n self.mode = 'train'\n self.data_loader = data_loader\n self._epoch = data_loader.epoch\n self.call_hook('before_train_iter')\n data_batch = next(data_loader)\n outputs = self.model.train_step(data_batch, self.optimizer, **kwargs)\n if not isinstance(outputs, dict):\n raise TypeError('model.train_step() must return a dict')\n if 'log_vars' in outputs:\n self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])\n self.outputs = outputs\n self.call_hook('after_train_iter')\n self._inner_iter += 1\n self._iter += 1\n\n def val(self, data_loader, **kwargs):\n self.model.eval()\n self.mode = 'val'\n self.data_loader = data_loader\n self.call_hook('before_val_iter')\n data_batch = next(data_loader)\n outputs = self.model.val_step(data_batch, **kwargs)\n if not isinstance(outputs, dict):\n raise 
TypeError('model.val_step() must return a dict')\n if 'log_vars' in outputs:\n self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])\n self.outputs = outputs\n self.call_hook('after_val_iter')\n self._inner_iter += 1\n\n def run(self, data_loaders, workflow, max_iters, **kwargs):\n \"\"\"Start running.\n\n Args:\n data_loaders (list[:obj:`DataLoader`]): Dataloaders for training\n and validation.\n workflow (list[tuple]): A list of (phase, iters) to specify the\n running order and iterations. E.g, [('train', 10000),\n ('val', 1000)] means running 10000 iterations for training and\n 1000 iterations for validation, iteratively.\n max_iters (int): Total training iterations.\n \"\"\"\n assert isinstance(data_loaders, list)\n assert mmcv.is_list_of(workflow, tuple)\n assert len(data_loaders) == len(workflow)\n\n self._max_iters = max_iters\n work_dir = self.work_dir if self.work_dir is not None else 'NONE'\n self.logger.info('Start running, host: %s, work_dir: %s',\n get_host_info(), work_dir)\n self.logger.info('workflow: %s, max: %d iters', workflow, max_iters)\n self.call_hook('before_run')\n\n iter_loaders = [IterLoader(x) for x in data_loaders]\n\n self.call_hook('before_epoch')\n\n while self.iter < max_iters:\n for i, flow in enumerate(workflow):\n self._inner_iter = 0\n mode, iters = flow\n if not isinstance(mode, str) or not hasattr(self, mode):\n raise ValueError(\n 'runner has no method named \"{}\" to run a workflow'.\n format(mode))\n iter_runner = getattr(self, mode)\n for _ in range(iters):\n if mode == 'train' and self.iter >= max_iters:\n break\n iter_runner(iter_loaders[i], **kwargs)\n\n time.sleep(1) # wait for some hooks like loggers to finish\n self.call_hook('after_epoch')\n self.call_hook('after_run')\n\n def resume(self,\n checkpoint,\n resume_optimizer=True,\n map_location='default'):\n \"\"\"Resume model from checkpoint.\n\n Args:\n checkpoint (str): Checkpoint to resume from.\n resume_optimizer (bool, optional): Whether resume the optimizer(s)\n if the checkpoint file includes optimizer(s). Default to True.\n map_location (str, optional): Same as :func:`torch.load`.\n Default to 'default'.\n \"\"\"\n if map_location == 'default':\n device_id = torch.cuda.current_device()\n checkpoint = self.load_checkpoint(\n checkpoint,\n map_location=lambda storage, loc: storage.cuda(device_id))\n else:\n checkpoint = self.load_checkpoint(\n checkpoint, map_location=map_location)\n\n self._epoch = checkpoint['meta']['epoch']\n self._iter = checkpoint['meta']['iter']\n self._inner_iter = checkpoint['meta']['iter']\n if 'optimizer' in checkpoint and resume_optimizer:\n if isinstance(self.optimizer, Optimizer):\n self.optimizer.load_state_dict(checkpoint['optimizer'])\n elif isinstance(self.optimizer, dict):\n for k in self.optimizer.keys():\n self.optimizer[k].load_state_dict(\n checkpoint['optimizer'][k])\n\n self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}')\n\n def save_checkpoint(self,\n out_dir,\n filename_tmpl='iter_{}.pth',\n meta=None,\n save_optimizer=True,\n create_symlink=True):\n \"\"\"Save checkpoint to file.\n\n Args:\n out_dir (str): Directory to save checkpoint files.\n filename_tmpl (str, optional): Checkpoint file template.\n Defaults to 'iter_{}.pth'.\n meta (dict, optional): Metadata to be saved in checkpoint.\n Defaults to None.\n save_optimizer (bool, optional): Whether save optimizer.\n Defaults to True.\n create_symlink (bool, optional): Whether create symlink to the\n latest checkpoint file. 
Defaults to True.\n \"\"\"\n if meta is None:\n meta = dict(iter=self.iter + 1, epoch=self.epoch + 1)\n elif isinstance(meta, dict):\n meta.update(iter=self.iter + 1, epoch=self.epoch + 1)\n else:\n raise TypeError(\n f'meta should be a dict or None, but got {type(meta)}')\n if self.meta is not None:\n meta.update(self.meta)\n\n filename = filename_tmpl.format(self.iter + 1)\n filepath = osp.join(out_dir, filename)\n optimizer = self.optimizer if save_optimizer else None\n save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta)\n # in some environments, `os.symlink` is not supported, you may need to\n # set `create_symlink` to False\n if create_symlink:\n mmcv.symlink(filename, osp.join(out_dir, 'latest.pth'))\n\n def register_training_hooks(self,\n lr_config,\n optimizer_config=None,\n checkpoint_config=None,\n log_config=None,\n momentum_config=None):\n \"\"\"Register default hooks for iter-based training.\n\n Default hooks include:\n\n - LrUpdaterHook\n - MomentumUpdaterHook\n - OptimizerStepperHook\n - CheckpointSaverHook\n - IterTimerHook\n - LoggerHook(s)\n \"\"\"\n if checkpoint_config is not None:\n checkpoint_config.setdefault('by_epoch', False)\n if lr_config is not None:\n lr_config.setdefault('by_epoch', False)\n self.register_lr_hook(lr_config)\n self.register_momentum_hook(momentum_config)\n self.register_optimizer_hook(optimizer_config)\n self.register_checkpoint_hook(checkpoint_config)\n self.register_hook(IterTimerHook())\n if log_config is not None:\n for info in log_config['hooks']:\n info.setdefault('by_epoch', False)\n self.register_logger_hooks(log_config)\n", "path": "mmcv/runner/iter_based_runner.py"}]}
| 2,776 | 131 |
gh_patches_debug_9319
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-2975
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Overriding rspamd forbidden_file_extension.map doesn't work as documented
<!--
Thank you for opening an issue with Mailu. Please understand that issues are meant for bugs and enhancement-requests.
For **user-support questions**, reach out to us on [matrix](https://matrix.to/#/#mailu:tedomum.net).
To be able to help you best, we need some more information.
Before you open your issue
- Check if no issue or pull-request for this already exists.
- Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.
Please put your text outside of the comment blocks to be visible. You can use the button "Preview" above to check.
-->
## Environment & Version
The host is Ubuntu 22.04
### Environment
- [x] docker compose
- [ ] kubernetes
- [ ] docker swarm
### Version
- Version: `2.0`
<!--
To find your version, get the image name of a mailu container and read the version from the tag (example for version 1.7).
$> docker ps -a | grep mailu
140b09d4b09c mailu/roundcube:1.7 "docker-php-entrypoi…" 2 weeks ago Up 2 days (healthy) 80/tcp
$> grep MAILU_VERSION docker-compose.yml mailu.env
-->
## Description
<!--
Further explain the bug in a few words. It should be clear what the unexpected behaviour is. Share it in an easy-to-understand language.
-->
My forbidden_file_extension.map placed in `overrides/rspamd/` is ignored and the default file is used instead.
## Replication Steps
<!--
Steps for replicating your issue
-->
With a Mailu 2.0 instance, follow the [instructions](https://mailu.io/2.0/antispam.html#can-i-change-the-list-of-authorized-file-attachments) for changing allowed attachments. Change `overrides/rspamd/forbidden_file_extension.map` and add/remove some lines, then run `docker compose restart antispam`. Re-run the `docker compose exec antispam cat /etc/rspamd/local.d/forbidden_file_extension.map` command to see the file used by rspamd.
## Observed behaviour
<!--
Explain or paste the result you received.
-->
I get the contents of `core/rspamd/conf/forbidden_file_extension.map` as shipped with the Mailu distribution. Accordingly, the allowed attachments still follow this default file rather than the newly changed one.
## Expected behaviour
<!--
Explain what results you expected - be as specific as possible.
Just saying "it doesn’t work as expected" is not useful. It's also helpful to describe what you actually experienced.
-->
I'd expect the modified forbidden_file_extension.map in the `overrides/rspamd` directory is taken into account. For instance, if I remove js from the list, I should be able to send an attachment with the js extension.
<!--
Often it is very useful to include log fragments of the involved component.
You can get the logs via `docker logs <container name> --tail 1000`.
For example for the admin container: `docker logs mailu_admin_1 --tail 1000`
or using docker compose `docker compose -f /mailu/docker-compose.yml logs --tail 1000 admin`
If you can find the relevant section, please share only the parts that seem relevant. If you have any logs, please enclose them in code tags, like so:
```
Your logs here!
```
-->
## My diagnosis
The overridden file is available at `/overrides/forbidden_file_extension.map` in the antispam container, as expected. However, `core/rspamd/start.py` adds `forbidden_file_extension.map` to the `config_files` list, so the override file at `/overrides/forbidden_file_extension.map` is ignored and never copied over. The start script runs Jinja on `/conf/forbidden_file_extension.map`, but since that file contains no directives it is just copied verbatim to `/etc/rspamd/local.d/`, and this default file is what rspamd reads. The `multimap.conf` file references the map file, but it has no setting for overriding the map file's path, so overriding `multimap.conf` doesn't help either.
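To make the precedence problem concrete, here is a stripped-down sketch of the copy logic described above (the paths follow the quoted `core/rspamd/start.py`; the condensed form and comments are mine):

```python
# Sketch of the current behaviour: every basename under /conf/ ends up in
# config_files, so the override loop skips any file with the same name,
# including a forbidden_file_extension.map placed in /overrides/.
import glob
import os
import shutil

config_files = [os.path.basename(f) for f in glob.glob("/conf/*")]

for override_file in glob.glob("/overrides/*"):
    if os.path.basename(override_file) in config_files:
        continue  # the override is silently ignored here
    shutil.copyfile(override_file,
                    os.path.join("/etc/rspamd/local.d",
                                 os.path.basename(override_file)))
```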
## Possible solution
To get the expected behaviour (my override file copied to `/etc/rspamd/local.d/` instead of the default file), I suspect the start script needs to be altered. I'm unsure how this was originally intended to work, but I'd probably modify the start script so an administrator can specify whether certain override configuration files should be copied even though they appear on the `config_files` list. A pleasant way to keep Jinja flexibility is to process the copied override configuration files as templates too: for instance, I could copy the `multimap.conf` template into the overrides directory, change it, and let the start script render the new template. Kolla Ansible uses this approach nicely, for instance. If the fix is something relatively straightforward like this, I'd be happy to make a PR for it, but it might take some time since it's my first issue/PR.
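A minimal sketch of the idea described above, assuming overrides simply take precedence and are rendered with the same Jinja helper the quoted `start.py` already uses; this is an illustration of the proposal, not Mailu's actual implementation (the function name `render_all` is made up):

```python
# Sketch only: render the /conf/ defaults first, then render /overrides/
# files as Jinja templates too so they overwrite the defaults in
# /etc/rspamd/local.d/.
import glob
import os

from socrate import conf  # same helper used by the quoted start.py


def render_all(env):
    for default_file in glob.glob("/conf/*"):
        conf.jinja(default_file, env,
                   os.path.join("/etc/rspamd/local.d",
                                os.path.basename(default_file)))
    for override_file in glob.glob("/overrides/*"):
        # Overrides win and may use the same Jinja variables as the defaults.
        conf.jinja(override_file, env,
                   os.path.join("/etc/rspamd/local.d",
                                os.path.basename(override_file)))
```

Rendering overrides last would keep the default templates intact while letting an administrator replace any of them, which is the behaviour the override documentation seems to promise.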
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/rspamd/start.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import os
4 import glob
5 import logging as log
6 import requests
7 import shutil
8 import sys
9 import time
10 from socrate import system,conf
11
12 env = system.set_env()
13
14 # Actual startup script
15
16 config_files = []
17 for rspamd_file in glob.glob("/conf/*"):
18 conf.jinja(rspamd_file, env, os.path.join("/etc/rspamd/local.d", os.path.basename(rspamd_file)))
19 config_files.append(os.path.basename(rspamd_file))
20
21 for override_file in glob.glob("/overrides/*"):
22 if os.path.basename(override_file) not in config_files:
23 shutil.copyfile(override_file, os.path.join("/etc/rspamd/local.d", os.path.basename(override_file)))
24
25 # Admin may not be up just yet
26 healthcheck = f'http://{env["ADMIN_ADDRESS"]}:8080/internal/rspamd/local_domains'
27 while True:
28 time.sleep(1)
29 try:
30 if requests.get(healthcheck,timeout=2).ok:
31 break
32 except:
33 pass
34 log.warning("Admin is not up just yet, retrying in 1 second")
35
36 # Run rspamd
37 os.system("mkdir -m 755 -p /run/rspamd")
38 os.system("chown rspamd:rspamd /run/rspamd")
39 os.system("find /var/lib/rspamd | grep -v /filter | xargs -n1 chown rspamd:rspamd")
40 os.execv("/usr/bin/rspamd", ["rspamd", "-f", "-u", "rspamd", "-g", "rspamd"])
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/rspamd/start.py b/core/rspamd/start.py
--- a/core/rspamd/start.py
+++ b/core/rspamd/start.py
@@ -16,7 +16,8 @@
config_files = []
for rspamd_file in glob.glob("/conf/*"):
conf.jinja(rspamd_file, env, os.path.join("/etc/rspamd/local.d", os.path.basename(rspamd_file)))
- config_files.append(os.path.basename(rspamd_file))
+ if rspamd_file != '/conf/forbidden_file_extension.map':
+ config_files.append(os.path.basename(rspamd_file))
for override_file in glob.glob("/overrides/*"):
if os.path.basename(override_file) not in config_files:
|
{"golden_diff": "diff --git a/core/rspamd/start.py b/core/rspamd/start.py\n--- a/core/rspamd/start.py\n+++ b/core/rspamd/start.py\n@@ -16,7 +16,8 @@\n config_files = []\n for rspamd_file in glob.glob(\"/conf/*\"):\n conf.jinja(rspamd_file, env, os.path.join(\"/etc/rspamd/local.d\", os.path.basename(rspamd_file)))\n- config_files.append(os.path.basename(rspamd_file))\n+ if rspamd_file != '/conf/forbidden_file_extension.map':\n+ config_files.append(os.path.basename(rspamd_file))\n \n for override_file in glob.glob(\"/overrides/*\"):\n if os.path.basename(override_file) not in config_files:\n", "issue": "Overriding rspamd forbidden_file_extension.map doesn't work as documented\n<!--\r\n\r\nThank you for opening an issue with Mailu. Please understand that issues are meant for bugs and enhancement-requests.\r\nFor **user-support questions**, reach out to us on [matrix](https://matrix.to/#/#mailu:tedomum.net).\r\n\r\nTo be able to help you best, we need some more information.\r\n\r\nBefore you open your issue\r\n- Check if no issue or pull-request for this already exists.\r\n- Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- You understand `Mailu` is made by volunteers in their **free time** \u2014 be concise, civil and accept that delays can occur.\r\n- The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\nPlease put your text outside of the comment blocks to be visible. You can use the button \"Preview\" above to check.\r\n\r\n-->\r\n\r\n## Environment & Version\r\nThe host is Ubuntu 22.04\r\n### Environment\r\n\r\n- [x] docker compose\r\n- [ ] kubernetes\r\n- [ ] docker swarm\r\n\r\n### Version\r\n\r\n- Version: `2.0`\r\n\r\n<!--\r\nTo find your version, get the image name of a mailu container and read the version from the tag (example for version 1.7).\r\n\r\n$> docker ps -a | grep mailu\r\n140b09d4b09c mailu/roundcube:1.7 \"docker-php-entrypoi\u2026\" 2 weeks ago Up 2 days (healthy) 80/tcp\r\n$> grep MAILU_VERSION docker-compose.yml mailu.env\r\n-->\r\n\r\n## Description\r\n<!--\r\nFurther explain the bug in a few words. It should be clear what the unexpected behaviour is. Share it in an easy-to-understand language.\r\n-->\r\nMy forbidden_file_extension.map placed in `overrides/rspamd/` is ignored and the default file is used instead.\r\n\r\n## Replication Steps\r\n<!--\r\nSteps for replicating your issue\r\n-->\r\n\r\nWith a Mailu 2.0 instance, follow the [instructions](https://mailu.io/2.0/antispam.html#can-i-change-the-list-of-authorized-file-attachments) for changing allowed attachments. Change `overrides/rspamd/forbidden_file_extension.map` and add/remove some lines, then run `docker compose restart antispam`. Re-run the `docker compose exec antispam cat /etc/rspamd/local.d/forbidden_file_extension.map` command to see the file used by rspamd.\r\n\r\n## Observed behaviour\r\n<!--\r\nExplain or paste the result you received.\r\n-->\r\nI get the output of `core/rspamd/conf/forbidden_file_extension.map` from the Mailu distribution. Indeed, attachments allowed are still in line with this file, rather than the new changed file.\r\n\r\n## Expected behaviour\r\n<!--\r\nExplain what results you expected - be as specific as possible.\r\nJust saying \"it doesn\u2019t work as expected\" is not useful. 
It's also helpful to describe what you actually experienced.\r\n-->\r\nI'd expect the modified forbidden_file_extension.map in the `overrides/rspamd` directory is taken into account. For instance, if I remove js from the list, I should be able to send an attachment with the js extension.\r\n\r\n<!--\r\nOften it is very useful to include log fragments of the involved component.\r\nYou can get the logs via `docker logs <container name> --tail 1000`.\r\nFor example for the admin container: `docker logs mailu_admin_1 --tail 1000`\r\nor using docker compose `docker compose -f /mailu/docker-compose.yml logs --tail 1000 admin`\r\n\r\nIf you can find the relevant section, please share only the parts that seem relevant. If you have any logs, please enclose them in code tags, like so:\r\n\r\n```\r\nYour logs here!\r\n```\r\n-->\r\n## My diagnosis\r\nThe overridden file is available at `/overrides/forbidden_file_extension.map` in the antispam container, as expected. However, `core/rspamd/start.py` adds `forbidden_file_extension.map` to the `config_files` variable so the override file at `/overrides/forbidden_file_extension.map` is ignored and not copied over. The start script runs Jinja on `/conf/forbidden_file_extension.map` but since there's no directives it's just copied verbatim to `/etc/rspamd/local.d/`. This default file is what rspamd reads. The `multimap.conf` file references the map file, but that has no directive for overriding the path of the map file so overriding `multimap.conf` doesn't help.\r\n\r\n## Possible solution\r\nTo get the expected behaviour (my override file is copied to `/etc/rspamd/local.d/` instead of the default file), I suspect the start script needs to be altered. I'm unsure how this was intended to be done originally, but I'd probably modify the start script so an administrator can specify whether certain override configuration files should be copied despite being on the `config_files` list. A pleasant way to still keep Jinja flexibility is for the copied override configuration files to be processed as templates too. For instance I could copy the `multimap.conf` template to a file in the overrides directory, change it and let the start script process the new template. This is nicely used in Kolla Ansible for instance. 
If it's something relatively straightforward like this I'd be happy to make a PR for it, but might take some time since it's my first issue/PR.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport glob\nimport logging as log\nimport requests\nimport shutil\nimport sys\nimport time\nfrom socrate import system,conf\n\nenv = system.set_env()\n\n# Actual startup script\n\nconfig_files = []\nfor rspamd_file in glob.glob(\"/conf/*\"):\n conf.jinja(rspamd_file, env, os.path.join(\"/etc/rspamd/local.d\", os.path.basename(rspamd_file)))\n config_files.append(os.path.basename(rspamd_file))\n\nfor override_file in glob.glob(\"/overrides/*\"):\n if os.path.basename(override_file) not in config_files:\n shutil.copyfile(override_file, os.path.join(\"/etc/rspamd/local.d\", os.path.basename(override_file)))\n\n# Admin may not be up just yet\nhealthcheck = f'http://{env[\"ADMIN_ADDRESS\"]}:8080/internal/rspamd/local_domains'\nwhile True:\n time.sleep(1)\n try:\n if requests.get(healthcheck,timeout=2).ok:\n break\n except:\n pass\n log.warning(\"Admin is not up just yet, retrying in 1 second\")\n\n# Run rspamd\nos.system(\"mkdir -m 755 -p /run/rspamd\")\nos.system(\"chown rspamd:rspamd /run/rspamd\")\nos.system(\"find /var/lib/rspamd | grep -v /filter | xargs -n1 chown rspamd:rspamd\")\nos.execv(\"/usr/bin/rspamd\", [\"rspamd\", \"-f\", \"-u\", \"rspamd\", \"-g\", \"rspamd\"])\n", "path": "core/rspamd/start.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport glob\nimport logging as log\nimport requests\nimport shutil\nimport sys\nimport time\nfrom socrate import system,conf\n\nenv = system.set_env()\n\n# Actual startup script\n\nconfig_files = []\nfor rspamd_file in glob.glob(\"/conf/*\"):\n conf.jinja(rspamd_file, env, os.path.join(\"/etc/rspamd/local.d\", os.path.basename(rspamd_file)))\n if rspamd_file != '/conf/forbidden_file_extension.map':\n config_files.append(os.path.basename(rspamd_file))\n\nfor override_file in glob.glob(\"/overrides/*\"):\n if os.path.basename(override_file) not in config_files:\n shutil.copyfile(override_file, os.path.join(\"/etc/rspamd/local.d\", os.path.basename(override_file)))\n\n# Admin may not be up just yet\nhealthcheck = f'http://{env[\"ADMIN_ADDRESS\"]}:8080/internal/rspamd/local_domains'\nwhile True:\n time.sleep(1)\n try:\n if requests.get(healthcheck,timeout=2).ok:\n break\n except:\n pass\n log.warning(\"Admin is not up just yet, retrying in 1 second\")\n\n# Run rspamd\nos.system(\"mkdir -m 755 -p /run/rspamd\")\nos.system(\"chown rspamd:rspamd /run/rspamd\")\nos.system(\"find /var/lib/rspamd | grep -v /filter | xargs -n1 chown rspamd:rspamd\")\nos.execv(\"/usr/bin/rspamd\", [\"rspamd\", \"-f\", \"-u\", \"rspamd\", \"-g\", \"rspamd\"])\n", "path": "core/rspamd/start.py"}]}
| 1,891 | 160 |
gh_patches_debug_11196
|
rasdani/github-patches
|
git_diff
|
beeware__toga-1927
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DetailedList showing empty rows
### Describe the bug
Using Toga for the first time, I'm trying to add a `DetailedList` to my window. *Something* shows up in the box that contains it, but it just looks like a single empty list cell. `on_select` confirms that it's a list cell.
### Steps to reproduce
Run this code:
```python
import toga
from toga.style.pack import Pack
history_data = [
{
'icon': toga.Icon.DEFAULT_ICON,
'title': 'Foo',
'subtitle': 'Bar',
'pk': 100,
},
]
def build(app):
history = toga.DetailedList(
id='foo',
data=history_data,
on_select=selection_handler,
)
history_box = toga.Box()
history_box.add(history)
box = toga.Box(style=Pack(padding=(5, 5)))
box.add(history_box)
return box
def main():
return toga.App('DetailedList test', 'foo.bar', startup=build)
def selection_handler(widget, row):
print('Row {} of widget {} was selected'.format(row, widget))
if __name__ == '__main__':
main().main_loop()
```
### Expected behavior
A populated listbox shows up
### Screenshots

### Environment
- Operating System: Manjaro Linux
- Python version: 3.10
- Software versions:
- Briefcase:
- Toga: 0.3.1
- ...
### Logs
```
```
### Additional context
- If I add other stuff to `box`, that stuff renders fine
- No matter how many rows I add to the list, it always seems to render just one empty cell
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gtk/src/toga_gtk/widgets/detailedlist.py`
Content:
```
1 from ..libs import Gdk, Gio, GLib, Gtk
2 from .base import Widget
3 from .internal.buttons.refresh import RefreshButton
4 from .internal.buttons.scroll import ScrollButton
5 from .internal.rows.texticon import TextIconRow
6
7
8 # TODO: Verify if right clicking a row currently works with touch screens, if not,
9 # use Gtk.GestureLongPress
10 class DetailedList(Widget):
11 """Gtk DetailedList implementation.
12
13 Gtk.ListBox inside a Gtk.ScrolledWindow.
14 """
15
16 def create(self):
17 # Not the same as selected row. _active_row is the one with its buttons exposed.
18 self._active_row = None
19
20 self.gtk_on_select_signal_handler = None
21
22 self.list_box = Gtk.ListBox()
23
24 self.list_box.set_selection_mode(Gtk.SelectionMode.SINGLE)
25
26 self.store = Gio.ListStore()
27 # We need to provide a function that transforms whatever is in the store into
28 # a `Gtk.ListBoxRow`, but the items in the store already are `Gtk.ListBoxRow` thus
29 # the identity function.
30 self.list_box.bind_model(self.store, lambda a: a)
31
32 self.scrolled_window = Gtk.ScrolledWindow()
33
34 self.scrolled_window.set_policy(Gtk.PolicyType.NEVER, Gtk.PolicyType.AUTOMATIC)
35 self.scrolled_window.set_min_content_width(self.interface._MIN_WIDTH)
36 self.scrolled_window.set_min_content_height(self.interface._MIN_HEIGHT)
37
38 self.scrolled_window.add(self.list_box)
39
40 self.refresh_button = RefreshButton(self.scrolled_window.get_vadjustment())
41
42 self.scroll_button = ScrollButton(self.scrolled_window.get_vadjustment())
43 self.scroll_button.set_scroll(lambda: self.scroll_to_row(-1))
44
45 self.native = Gtk.Overlay()
46 self.native.add_overlay(self.scrolled_window)
47
48 self.refresh_button.overlay_over(self.native)
49 self.scroll_button.overlay_over(self.native)
50
51 self.gtk_on_select_signal_handler = self.list_box.connect(
52 "row-selected", self.gtk_on_row_selected
53 )
54
55 self.right_click_gesture = Gtk.GestureMultiPress.new(self.list_box)
56 self.right_click_gesture.set_button(3)
57 self.right_click_gesture.set_propagation_phase(Gtk.PropagationPhase.BUBBLE)
58 self.right_click_gesture.connect("pressed", self.gtk_on_right_click)
59
60 def row_factory(self, item):
61 """
62 Args:
63 item (:obj:`Row`)
64 Returns:
65 Returns a (:obj:`TextIconRow`)
66 """
67 return TextIconRow(self.interface.factory, self, item)
68
69 def destroy(self):
70 self.disconnect(self.gtk_on_select_signal_handler)
71 super().destroy()
72
73 def change_source(self, source):
74 """
75 Args:
76 source (:obj:`ListSource`)
77 """
78 self.store.remove_all()
79 for item in source:
80 self.store.append(self.row_factory(item))
81
82 # We can't know the dimensions of each row (and thus of the list) until gtk allocates
83 # space for it. Gtk does emit `size-allocate` after allocation, but I couldn't find any
84 # guarantees that the rows have their sizes allocated in the order they are inserted
85 # in the `ListStore` and in my opinion that's unlikely to be the case.
86
87 # Therefore we would need to wait for `size-allocate` on all rows and either update
88 # the visibility of the buttons on all `size-allocates` or have a counter and only do
89 # it on the last `size-allocate`. Obviously none of those options are desirable.
90
91 # Fortunately functions added with `idle_add` are run when gtk is idle and thus after
92 # any size allocation. This solves our problem and from the perspective of the user
93 # happens immediately.
94
95 # Even though we are adding the callback to the global loop, it only runs once.
96 # This is what the lambda is for. If a callback returns `False` then it's not ran again.
97 # I used a lambda because returning `False` from `self._list_items_changed()` would mean
98 # returning `False` on success.
99 GLib.idle_add(lambda: not self._list_items_changed())
100
101 def insert(self, index, item):
102 """
103 Args:
104 index (int)
105 item (:obj:`Row`)
106 """
107 item_impl = self.row_factory(item)
108 self.store.insert(index, item_impl)
109 self.list_box.show_all()
110 self._list_items_changed()
111
112 def change(self, item):
113 """
114 Args:
115 item (:obj:`Row`)
116 """
117 index = item._impl.get_index()
118 self.remove(item, index)
119 item_impl = self.row_factory(item)
120 self.store.insert(index, item_impl)
121
122 def remove(self, item, index):
123 """Removes a row from the store. Doesn't remove the row from the interface.
124
125 Args:
126 item (:obj:`Row`)
127 index (int)
128 """
129 if index is None:
130 index = item._impl.get_index()
131
132 if self._active_row == item._impl:
133 self._active_row = None
134
135 self.store.remove(index)
136
137 if self.interface.on_delete is not None:
138 self.interface.on_delete(self.interface, item._impl.interface)
139
140 item._impl.destroy()
141 self._list_items_changed()
142
143 def clear(self):
144 self.store.remove_all()
145 self._list_items_changed()
146
147 def get_selection(self):
148 item_impl = self.list_box.get_selected_row()
149 if item_impl is None:
150 return None
151 else:
152 return item_impl.interface
153
154 def scroll_to_row(self, row: int):
155 item = self.store[row]
156 item.scroll_to_center()
157
158 def set_on_refresh(self, handler: callable):
159 if handler is not None:
160 self.refresh_button.set_on_refresh(self.gtk_on_refresh_clicked)
161
162 def set_on_select(self, handler: callable):
163 pass
164
165 def set_on_delete(self, handler: callable):
166 pass
167
168 def after_on_refresh(self, widget, result):
169 # No special handling required
170 pass
171
172 def gtk_on_refresh_clicked(self):
173 if self.interface.on_refresh is not None:
174 self.interface.on_refresh(self.interface)
175
176 def gtk_on_row_selected(self, w: Gtk.ListBox, item_impl: Gtk.ListBoxRow):
177 if self.interface.on_select is not None:
178 if item_impl is not None:
179 self.interface.on_select(self.interface, item_impl.interface)
180 else:
181 self.interface.on_select(self.interface, None)
182
183 if self._active_row is not None and self._active_row != item_impl:
184 self._active_row.hide_buttons()
185 self._active_row = None
186
187 def gtk_on_right_click(self, gesture, n_press, x, y):
188 item_impl = self.list_box.get_row_at_y(y)
189
190 if item_impl is None:
191 return
192
193 rect = Gdk.Rectangle()
194 rect.x, rect.y = item_impl.translate_coordinates(self.list_box, x, y)
195
196 if self._active_row is not None and self._active_row != item_impl:
197 self._active_row.hide_buttons()
198
199 self._active_row = item_impl
200 item_impl.on_right_click(rect)
201
202 if self.interface.on_select is not None:
203 self.list_box.select_row(item_impl)
204
205 def _list_items_changed(self):
206 """Some components such as the refresh button and scroll button change their
207 appearance based on how many items there are on the list or the size of the
208 items.
209
210 If either of those things changes the buttons need to be notified to recalculate
211 their positions.
212 """
213 self.refresh_button.list_changed()
214 self.scroll_button.list_changed()
215 return True
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gtk/src/toga_gtk/widgets/detailedlist.py b/gtk/src/toga_gtk/widgets/detailedlist.py
--- a/gtk/src/toga_gtk/widgets/detailedlist.py
+++ b/gtk/src/toga_gtk/widgets/detailedlist.py
@@ -1,3 +1,5 @@
+from travertino.size import at_least
+
from ..libs import Gdk, Gio, GLib, Gtk
from .base import Widget
from .internal.buttons.refresh import RefreshButton
@@ -213,3 +215,7 @@
self.refresh_button.list_changed()
self.scroll_button.list_changed()
return True
+
+ def rehint(self):
+ self.interface.intrinsic.width = at_least(self.interface._MIN_WIDTH)
+ self.interface.intrinsic.height = at_least(self.interface._MIN_HEIGHT)
|
{"golden_diff": "diff --git a/gtk/src/toga_gtk/widgets/detailedlist.py b/gtk/src/toga_gtk/widgets/detailedlist.py\n--- a/gtk/src/toga_gtk/widgets/detailedlist.py\n+++ b/gtk/src/toga_gtk/widgets/detailedlist.py\n@@ -1,3 +1,5 @@\n+from travertino.size import at_least\n+\n from ..libs import Gdk, Gio, GLib, Gtk\n from .base import Widget\n from .internal.buttons.refresh import RefreshButton\n@@ -213,3 +215,7 @@\n self.refresh_button.list_changed()\n self.scroll_button.list_changed()\n return True\n+\n+ def rehint(self):\n+ self.interface.intrinsic.width = at_least(self.interface._MIN_WIDTH)\n+ self.interface.intrinsic.height = at_least(self.interface._MIN_HEIGHT)\n", "issue": "DetailedList showing empty rows\n### Describe the bug\n\nUsing Toga for the first time, trying to add `DetailedListBox` to my window. *Something* shows up in the box that contains the `DetailedListBox`, but it just looks like a single empty list cell. `on_select` confirms that it's a list cell.\n\n### Steps to reproduce\n\nRun this code:\r\n\r\n```python\r\nimport toga\r\nfrom toga.style.pack import Pack\r\n\r\nhistory_data = [\r\n {\r\n 'icon': toga.Icon.DEFAULT_ICON,\r\n 'title': 'Foo',\r\n 'subtitle': 'Bar',\r\n 'pk': 100,\r\n },\r\n]\r\n\r\n\r\ndef build(app):\r\n history = toga.DetailedList(\r\n id='foo',\r\n data=history_data,\r\n on_select=selection_handler,\r\n )\r\n\r\n history_box = toga.Box()\r\n history_box.add(history)\r\n\r\n box = toga.Box(style=Pack(padding=(5, 5)))\r\n\r\n box.add(history_box)\r\n\r\n return box\r\n\r\n\r\ndef main():\r\n return toga.App('DetailedList test', 'foo.bar', startup=build)\r\n\r\n\r\ndef selection_handler(widget, row):\r\n print('Row {} of widget {} was selected'.format(row, widget))\r\n\r\n\r\nif __name__ == '__main__':\r\n main().main_loop()\r\n```\n\n### Expected behavior\n\nA populated listbox shows up\n\n### Screenshots\n\n\r\n\n\n### Environment\n\n- Operating System: Manjaro Linux\r\n- Python version: 3.10\r\n- Software versions:\r\n - Briefcase:\r\n - Toga: 0.3.1\r\n - ...\r\n\n\n### Logs\n\n```\r\n\r\n```\r\n\n\n### Additional context\n\n- If I add other stuff to `box`, that stuff renders fine\r\n- No matter how many rows I add in list, always seems to render just one empty cell\n", "before_files": [{"content": "from ..libs import Gdk, Gio, GLib, Gtk\nfrom .base import Widget\nfrom .internal.buttons.refresh import RefreshButton\nfrom .internal.buttons.scroll import ScrollButton\nfrom .internal.rows.texticon import TextIconRow\n\n\n# TODO: Verify if right clicking a row currently works with touch screens, if not,\n# use Gtk.GestureLongPress\nclass DetailedList(Widget):\n \"\"\"Gtk DetailedList implementation.\n\n Gtk.ListBox inside a Gtk.ScrolledWindow.\n \"\"\"\n\n def create(self):\n # Not the same as selected row. 
_active_row is the one with its buttons exposed.\n self._active_row = None\n\n self.gtk_on_select_signal_handler = None\n\n self.list_box = Gtk.ListBox()\n\n self.list_box.set_selection_mode(Gtk.SelectionMode.SINGLE)\n\n self.store = Gio.ListStore()\n # We need to provide a function that transforms whatever is in the store into\n # a `Gtk.ListBoxRow`, but the items in the store already are `Gtk.ListBoxRow` thus\n # the identity function.\n self.list_box.bind_model(self.store, lambda a: a)\n\n self.scrolled_window = Gtk.ScrolledWindow()\n\n self.scrolled_window.set_policy(Gtk.PolicyType.NEVER, Gtk.PolicyType.AUTOMATIC)\n self.scrolled_window.set_min_content_width(self.interface._MIN_WIDTH)\n self.scrolled_window.set_min_content_height(self.interface._MIN_HEIGHT)\n\n self.scrolled_window.add(self.list_box)\n\n self.refresh_button = RefreshButton(self.scrolled_window.get_vadjustment())\n\n self.scroll_button = ScrollButton(self.scrolled_window.get_vadjustment())\n self.scroll_button.set_scroll(lambda: self.scroll_to_row(-1))\n\n self.native = Gtk.Overlay()\n self.native.add_overlay(self.scrolled_window)\n\n self.refresh_button.overlay_over(self.native)\n self.scroll_button.overlay_over(self.native)\n\n self.gtk_on_select_signal_handler = self.list_box.connect(\n \"row-selected\", self.gtk_on_row_selected\n )\n\n self.right_click_gesture = Gtk.GestureMultiPress.new(self.list_box)\n self.right_click_gesture.set_button(3)\n self.right_click_gesture.set_propagation_phase(Gtk.PropagationPhase.BUBBLE)\n self.right_click_gesture.connect(\"pressed\", self.gtk_on_right_click)\n\n def row_factory(self, item):\n \"\"\"\n Args:\n item (:obj:`Row`)\n Returns:\n Returns a (:obj:`TextIconRow`)\n \"\"\"\n return TextIconRow(self.interface.factory, self, item)\n\n def destroy(self):\n self.disconnect(self.gtk_on_select_signal_handler)\n super().destroy()\n\n def change_source(self, source):\n \"\"\"\n Args:\n source (:obj:`ListSource`)\n \"\"\"\n self.store.remove_all()\n for item in source:\n self.store.append(self.row_factory(item))\n\n # We can't know the dimensions of each row (and thus of the list) until gtk allocates\n # space for it. Gtk does emit `size-allocate` after allocation, but I couldn't find any\n # guarantees that the rows have their sizes allocated in the order they are inserted\n # in the `ListStore` and in my opinion that's unlikely to be the case.\n\n # Therefore we would need to wait for `size-allocate` on all rows and either update\n # the visibility of the buttons on all `size-allocates` or have a counter and only do\n # it on the last `size-allocate`. Obviously none of those options are desirable.\n\n # Fortunately functions added with `idle_add` are run when gtk is idle and thus after\n # any size allocation. This solves our problem and from the perspective of the user\n # happens immediately.\n\n # Even though we are adding the callback to the global loop, it only runs once.\n # This is what the lambda is for. 
If a callback returns `False` then it's not ran again.\n # I used a lambda because returning `False` from `self._list_items_changed()` would mean\n # returning `False` on success.\n GLib.idle_add(lambda: not self._list_items_changed())\n\n def insert(self, index, item):\n \"\"\"\n Args:\n index (int)\n item (:obj:`Row`)\n \"\"\"\n item_impl = self.row_factory(item)\n self.store.insert(index, item_impl)\n self.list_box.show_all()\n self._list_items_changed()\n\n def change(self, item):\n \"\"\"\n Args:\n item (:obj:`Row`)\n \"\"\"\n index = item._impl.get_index()\n self.remove(item, index)\n item_impl = self.row_factory(item)\n self.store.insert(index, item_impl)\n\n def remove(self, item, index):\n \"\"\"Removes a row from the store. Doesn't remove the row from the interface.\n\n Args:\n item (:obj:`Row`)\n index (int)\n \"\"\"\n if index is None:\n index = item._impl.get_index()\n\n if self._active_row == item._impl:\n self._active_row = None\n\n self.store.remove(index)\n\n if self.interface.on_delete is not None:\n self.interface.on_delete(self.interface, item._impl.interface)\n\n item._impl.destroy()\n self._list_items_changed()\n\n def clear(self):\n self.store.remove_all()\n self._list_items_changed()\n\n def get_selection(self):\n item_impl = self.list_box.get_selected_row()\n if item_impl is None:\n return None\n else:\n return item_impl.interface\n\n def scroll_to_row(self, row: int):\n item = self.store[row]\n item.scroll_to_center()\n\n def set_on_refresh(self, handler: callable):\n if handler is not None:\n self.refresh_button.set_on_refresh(self.gtk_on_refresh_clicked)\n\n def set_on_select(self, handler: callable):\n pass\n\n def set_on_delete(self, handler: callable):\n pass\n\n def after_on_refresh(self, widget, result):\n # No special handling required\n pass\n\n def gtk_on_refresh_clicked(self):\n if self.interface.on_refresh is not None:\n self.interface.on_refresh(self.interface)\n\n def gtk_on_row_selected(self, w: Gtk.ListBox, item_impl: Gtk.ListBoxRow):\n if self.interface.on_select is not None:\n if item_impl is not None:\n self.interface.on_select(self.interface, item_impl.interface)\n else:\n self.interface.on_select(self.interface, None)\n\n if self._active_row is not None and self._active_row != item_impl:\n self._active_row.hide_buttons()\n self._active_row = None\n\n def gtk_on_right_click(self, gesture, n_press, x, y):\n item_impl = self.list_box.get_row_at_y(y)\n\n if item_impl is None:\n return\n\n rect = Gdk.Rectangle()\n rect.x, rect.y = item_impl.translate_coordinates(self.list_box, x, y)\n\n if self._active_row is not None and self._active_row != item_impl:\n self._active_row.hide_buttons()\n\n self._active_row = item_impl\n item_impl.on_right_click(rect)\n\n if self.interface.on_select is not None:\n self.list_box.select_row(item_impl)\n\n def _list_items_changed(self):\n \"\"\"Some components such as the refresh button and scroll button change their\n appearance based on how many items there are on the list or the size of the\n items.\n\n If either of those things changes the buttons need to be notified to recalculate\n their positions.\n \"\"\"\n self.refresh_button.list_changed()\n self.scroll_button.list_changed()\n return True\n", "path": "gtk/src/toga_gtk/widgets/detailedlist.py"}], "after_files": [{"content": "from travertino.size import at_least\n\nfrom ..libs import Gdk, Gio, GLib, Gtk\nfrom .base import Widget\nfrom .internal.buttons.refresh import RefreshButton\nfrom .internal.buttons.scroll import ScrollButton\nfrom .internal.rows.texticon 
import TextIconRow\n\n\n# TODO: Verify if right clicking a row currently works with touch screens, if not,\n# use Gtk.GestureLongPress\nclass DetailedList(Widget):\n \"\"\"Gtk DetailedList implementation.\n\n Gtk.ListBox inside a Gtk.ScrolledWindow.\n \"\"\"\n\n def create(self):\n # Not the same as selected row. _active_row is the one with its buttons exposed.\n self._active_row = None\n\n self.gtk_on_select_signal_handler = None\n\n self.list_box = Gtk.ListBox()\n\n self.list_box.set_selection_mode(Gtk.SelectionMode.SINGLE)\n\n self.store = Gio.ListStore()\n # We need to provide a function that transforms whatever is in the store into\n # a `Gtk.ListBoxRow`, but the items in the store already are `Gtk.ListBoxRow` thus\n # the identity function.\n self.list_box.bind_model(self.store, lambda a: a)\n\n self.scrolled_window = Gtk.ScrolledWindow()\n\n self.scrolled_window.set_policy(Gtk.PolicyType.NEVER, Gtk.PolicyType.AUTOMATIC)\n self.scrolled_window.set_min_content_width(self.interface._MIN_WIDTH)\n self.scrolled_window.set_min_content_height(self.interface._MIN_HEIGHT)\n\n self.scrolled_window.add(self.list_box)\n\n self.refresh_button = RefreshButton(self.scrolled_window.get_vadjustment())\n\n self.scroll_button = ScrollButton(self.scrolled_window.get_vadjustment())\n self.scroll_button.set_scroll(lambda: self.scroll_to_row(-1))\n\n self.native = Gtk.Overlay()\n self.native.add_overlay(self.scrolled_window)\n\n self.refresh_button.overlay_over(self.native)\n self.scroll_button.overlay_over(self.native)\n\n self.gtk_on_select_signal_handler = self.list_box.connect(\n \"row-selected\", self.gtk_on_row_selected\n )\n\n self.right_click_gesture = Gtk.GestureMultiPress.new(self.list_box)\n self.right_click_gesture.set_button(3)\n self.right_click_gesture.set_propagation_phase(Gtk.PropagationPhase.BUBBLE)\n self.right_click_gesture.connect(\"pressed\", self.gtk_on_right_click)\n\n def row_factory(self, item):\n \"\"\"\n Args:\n item (:obj:`Row`)\n Returns:\n Returns a (:obj:`TextIconRow`)\n \"\"\"\n return TextIconRow(self.interface.factory, self, item)\n\n def destroy(self):\n self.disconnect(self.gtk_on_select_signal_handler)\n super().destroy()\n\n def change_source(self, source):\n \"\"\"\n Args:\n source (:obj:`ListSource`)\n \"\"\"\n self.store.remove_all()\n for item in source:\n self.store.append(self.row_factory(item))\n\n # We can't know the dimensions of each row (and thus of the list) until gtk allocates\n # space for it. Gtk does emit `size-allocate` after allocation, but I couldn't find any\n # guarantees that the rows have their sizes allocated in the order they are inserted\n # in the `ListStore` and in my opinion that's unlikely to be the case.\n\n # Therefore we would need to wait for `size-allocate` on all rows and either update\n # the visibility of the buttons on all `size-allocates` or have a counter and only do\n # it on the last `size-allocate`. Obviously none of those options are desirable.\n\n # Fortunately functions added with `idle_add` are run when gtk is idle and thus after\n # any size allocation. This solves our problem and from the perspective of the user\n # happens immediately.\n\n # Even though we are adding the callback to the global loop, it only runs once.\n # This is what the lambda is for. 
If a callback returns `False` then it's not ran again.\n # I used a lambda because returning `False` from `self._list_items_changed()` would mean\n # returning `False` on success.\n GLib.idle_add(lambda: not self._list_items_changed())\n\n def insert(self, index, item):\n \"\"\"\n Args:\n index (int)\n item (:obj:`Row`)\n \"\"\"\n item_impl = self.row_factory(item)\n self.store.insert(index, item_impl)\n self.list_box.show_all()\n self._list_items_changed()\n\n def change(self, item):\n \"\"\"\n Args:\n item (:obj:`Row`)\n \"\"\"\n index = item._impl.get_index()\n self.remove(item, index)\n item_impl = self.row_factory(item)\n self.store.insert(index, item_impl)\n\n def remove(self, item, index):\n \"\"\"Removes a row from the store. Doesn't remove the row from the interface.\n\n Args:\n item (:obj:`Row`)\n index (int)\n \"\"\"\n if index is None:\n index = item._impl.get_index()\n\n if self._active_row == item._impl:\n self._active_row = None\n\n self.store.remove(index)\n\n if self.interface.on_delete is not None:\n self.interface.on_delete(self.interface, item._impl.interface)\n\n item._impl.destroy()\n self._list_items_changed()\n\n def clear(self):\n self.store.remove_all()\n self._list_items_changed()\n\n def get_selection(self):\n item_impl = self.list_box.get_selected_row()\n if item_impl is None:\n return None\n else:\n return item_impl.interface\n\n def scroll_to_row(self, row: int):\n item = self.store[row]\n item.scroll_to_center()\n\n def set_on_refresh(self, handler: callable):\n if handler is not None:\n self.refresh_button.set_on_refresh(self.gtk_on_refresh_clicked)\n\n def set_on_select(self, handler: callable):\n pass\n\n def set_on_delete(self, handler: callable):\n pass\n\n def after_on_refresh(self, widget, result):\n # No special handling required\n pass\n\n def gtk_on_refresh_clicked(self):\n if self.interface.on_refresh is not None:\n self.interface.on_refresh(self.interface)\n\n def gtk_on_row_selected(self, w: Gtk.ListBox, item_impl: Gtk.ListBoxRow):\n if self.interface.on_select is not None:\n if item_impl is not None:\n self.interface.on_select(self.interface, item_impl.interface)\n else:\n self.interface.on_select(self.interface, None)\n\n if self._active_row is not None and self._active_row != item_impl:\n self._active_row.hide_buttons()\n self._active_row = None\n\n def gtk_on_right_click(self, gesture, n_press, x, y):\n item_impl = self.list_box.get_row_at_y(y)\n\n if item_impl is None:\n return\n\n rect = Gdk.Rectangle()\n rect.x, rect.y = item_impl.translate_coordinates(self.list_box, x, y)\n\n if self._active_row is not None and self._active_row != item_impl:\n self._active_row.hide_buttons()\n\n self._active_row = item_impl\n item_impl.on_right_click(rect)\n\n if self.interface.on_select is not None:\n self.list_box.select_row(item_impl)\n\n def _list_items_changed(self):\n \"\"\"Some components such as the refresh button and scroll button change their\n appearance based on how many items there are on the list or the size of the\n items.\n\n If either of those things changes the buttons need to be notified to recalculate\n their positions.\n \"\"\"\n self.refresh_button.list_changed()\n self.scroll_button.list_changed()\n return True\n\n def rehint(self):\n self.interface.intrinsic.width = at_least(self.interface._MIN_WIDTH)\n self.interface.intrinsic.height = at_least(self.interface._MIN_HEIGHT)\n", "path": "gtk/src/toga_gtk/widgets/detailedlist.py"}]}
| 2,902 | 180 |
gh_patches_debug_40288
|
rasdani/github-patches
|
git_diff
|
CMSgov__bluebutton-web-server-9
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add URL pattern matching support to Protected Capability
Add URL pattern matching support to the Protected Capability model. For example, something like this could work:
[
["POST", "/fhir/v3/Practitioner"],
["PUT", "/fhir/v3/Practitioner/[id]"],
["POST", "/fhir/v3/Organization"],
["PUT", "/fhir/v3/Organization/[id]"],
["POST", "/npi/v1/update-all/[npi]"],
["POST", "/npi/v1/update-partial/[npi]"],
["GET", "/fhir/v3/metadata"]
]
Perhaps how the pattern format is expressed in the form should simply match how Django URLs work. I leave it to you how best to address that.
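For illustration, a matcher along these lines would cover the examples above. This is only a rough sketch of the idea; the `match` helper name and the `[...]` placeholder convention are assumptions, not anything that exists in the model today:
```python
import re

# a path segment like "[id]" or "[npi]" acts as a wildcard
PLACEHOLDER = re.compile(r"^\[.*\]$")


def match(request_path, allowed_path):
    """Return True when request_path fits allowed_path, segment by segment."""
    request_parts = request_path.rstrip("/").split("/")
    allowed_parts = allowed_path.rstrip("/").split("/")
    if len(request_parts) != len(allowed_parts):
        return False
    return all(
        PLACEHOLDER.match(allowed) is not None or requested == allowed
        for requested, allowed in zip(request_parts, allowed_parts)
    )


# match("/fhir/v3/Practitioner/1234", "/fhir/v3/Practitioner/[id]")  -> True
# match("/fhir/v3/Organization", "/fhir/v3/Practitioner/[id]")       -> False
```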
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/dot_ext/models.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import logging
5
6 from django.core.urlresolvers import reverse
7 from django.db import models
8 from django.utils.encoding import python_2_unicode_compatible
9
10 from oauth2_provider.models import AbstractApplication
11
12 from apps.capabilities.models import ProtectedCapability
13
14
15 logger = logging.getLogger('hhs_server.%s' % __name__)
16
17
18 @python_2_unicode_compatible
19 class Endorsement(models.Model):
20 title = models.TextField(max_length=256, default="")
21 iss = models.TextField(max_length=512, default="", verbose_name="Issuer",
22 help_text= "Must contain a QDN")
23 jws = models.TextField(max_length=10240, default="")
24
25 def __str__(self):
26 return self.title
27
28
29 class Application(AbstractApplication):
30 scope = models.ManyToManyField(ProtectedCapability)
31 endorsements = models.ManyToManyField(Endorsement, blank=True)
32 agree = models.BooleanField(default=False)
33
34 _messages = {
35 "resource_allowed": "application '%s': access to resource '%s %s' allowed",
36 "resource_forbidden": "application '%s': access to resource '%s %s' forbidden",
37 }
38
39 def get_absolute_url(self):
40 return reverse('dote_detail', args=[str(self.id)])
41
42 def allow_resource(self, method, path):
43 """
44 Return True when this applications has capability to allow
45 request to `path` with method `method`.
46 """
47 logger.debug("application '%s': checking access to resource '%s %s' is allowed",
48 self.name, method, path)
49
50 for scope in self.scope.all():
51 resources = scope.resources_as_dict()
52 logger.debug("application '%s': checking access with scope '%s'", self.name, resources)
53 # both paths are normalized removing trailing slash
54 for allowed_path in resources.get(method, []):
55 if path.rstrip('/') == allowed_path.rstrip('/'):
56 logger.info(self._messages['resource_allowed'], self.name, method, path)
57 return True
58
59 logger.info(self._messages['resource_forbidden'], self.name, method, path)
60 return False
61
```
Path: `apps/capabilities/models.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import json
5
6 from django.db import models
7 from django.utils.encoding import python_2_unicode_compatible
8 from django.contrib.auth.models import Group
9
10
11 @python_2_unicode_compatible
12 class ProtectedCapability(models.Model):
13 title = models.CharField(max_length=256, default="", unique=True)
14 slug = models.SlugField(verbose_name="Scope", max_length=100, default="", unique=True)
15 group = models.ForeignKey(Group)
16 protected_resources = models.TextField(max_length=10240,
17 help_text="""A JSON list of pairs containing HTTP method and URL.
18 Example: [["GET","/api/task1"], ["POST","/api/task2"]]
19 """, default="""[["GET", "/some-url"]]""")
20 description = models.TextField(max_length=10240, blank=True, default="")
21
22 def __str__(self):
23 return self.title
24
25 def resources_as_dict(self):
26 """
27 Return protected_resources mapped into a dictionary.
28 e.g. {"GET": ["/api/example1", "/api/example2"], "POST": ... }
29 """
30 protected_resources = {}
31 for method, path in json.loads(self.protected_resources):
32 if method not in protected_resources:
33 protected_resources[method] = [path]
34 else:
35 protected_resources[method].append(path)
36 return protected_resources
37
38 def scope(self):
39 return self.slug
40
41 class Meta:
42 verbose_name_plural = "Protected Capabilities"
43 verbose_name = "Protected Capability"
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/capabilities/models.py b/apps/capabilities/models.py
--- a/apps/capabilities/models.py
+++ b/apps/capabilities/models.py
@@ -2,10 +2,18 @@
from __future__ import unicode_literals
import json
+import re
+
+try:
+ # this import should work on python2 only
+ from itertools import izip as zip
+except ImportError:
+ pass
from django.db import models
-from django.utils.encoding import python_2_unicode_compatible
from django.contrib.auth.models import Group
+from django.utils.encoding import python_2_unicode_compatible
+from django.utils.lru_cache import lru_cache
@python_2_unicode_compatible
@@ -35,9 +43,63 @@
protected_resources[method].append(path)
return protected_resources
+ def allow(self, method, path):
+ """
+ Check if the capability allow access for `method` and `path`.
+ """
+ resources = self.resources_as_dict()
+ for allowed_path in resources.get(method, []):
+ if _match(path, allowed_path):
+ return True
+ return False
+
def scope(self):
return self.slug
class Meta:
verbose_name_plural = "Protected Capabilities"
verbose_name = "Protected Capability"
+
+
+@lru_cache()
+def _tokenize_path(path):
+ """
+ Helper function that removes trailing slash
+ and split the path into bits.
+
+ e.g.: "/api/foo/" -> ["", "api", "foo"]
+ """
+ return path.rstrip("/").split("/")
+
+
+URL_BIT_PATTERN = re.compile(r"\[.*\]")
+
+
+@lru_cache()
+def _match(request_path, allowed_path):
+ """
+ Helper function that check if request_path matches with allowed_path
+ """
+ # normalize and tokenize both paths
+ # from '/api/foo' to ['', 'api', 'foo']
+ request_tokens = _tokenize_path(request_path)
+ allowed_tokens = _tokenize_path(allowed_path)
+
+ # if the number of tokens is different we know
+ # that the request path does not match the allowed path
+ if len(request_tokens) != len(allowed_tokens):
+ return False
+
+ # otherwise we start the comparison token by token
+ for request_token, allowed_token in zip(request_tokens, allowed_tokens):
+ # if the allowed token matches a pattern like "[id]"
+ # the match is valid and move to the next token
+ if URL_BIT_PATTERN.match(allowed_token):
+ continue
+
+ # we can break the loop and return False
+ # when we encounter the first non-matching token
+ if request_token != allowed_token:
+ return False
+
+ return True
diff --git a/apps/dot_ext/models.py b/apps/dot_ext/models.py
--- a/apps/dot_ext/models.py
+++ b/apps/dot_ext/models.py
@@ -48,13 +48,11 @@
self.name, method, path)
for scope in self.scope.all():
- resources = scope.resources_as_dict()
- logger.debug("application '%s': checking access with scope '%s'", self.name, resources)
- # both paths are normalized removing trailing slash
- for allowed_path in resources.get(method, []):
- if path.rstrip('/') == allowed_path.rstrip('/'):
- logger.info(self._messages['resource_allowed'], self.name, method, path)
- return True
+ logger.debug("application '%s': checking access with scope '%s'",
+ self.name, scope.protected_resources)
+ if scope.allow(method, path):
+ logger.info(self._messages['resource_allowed'], self.name, method, path)
+ return True
logger.info(self._messages['resource_forbidden'], self.name, method, path)
return False
|
{"golden_diff": "diff --git a/apps/capabilities/models.py b/apps/capabilities/models.py\n--- a/apps/capabilities/models.py\n+++ b/apps/capabilities/models.py\n@@ -2,10 +2,18 @@\n from __future__ import unicode_literals\n \n import json\n+import re\n+\n+try:\n+ # this import should work on python2 only\n+ from itertools import izip as zip\n+except ImportError:\n+ pass\n \n from django.db import models\n-from django.utils.encoding import python_2_unicode_compatible\n from django.contrib.auth.models import Group\n+from django.utils.encoding import python_2_unicode_compatible\n+from django.utils.lru_cache import lru_cache\n \n \n @python_2_unicode_compatible\n@@ -35,9 +43,63 @@\n protected_resources[method].append(path)\n return protected_resources\n \n+ def allow(self, method, path):\n+ \"\"\"\n+ Check if the capability allow access for `method` and `path`.\n+ \"\"\"\n+ resources = self.resources_as_dict()\n+ for allowed_path in resources.get(method, []):\n+ if _match(path, allowed_path):\n+ return True\n+ return False\n+\n def scope(self):\n return self.slug\n \n class Meta:\n verbose_name_plural = \"Protected Capabilities\"\n verbose_name = \"Protected Capability\"\n+\n+\n+@lru_cache()\n+def _tokenize_path(path):\n+ \"\"\"\n+ Helper function that removes trailing slash\n+ and split the path into bits.\n+\n+ e.g.: \"/api/foo/\" -> [\"\", \"api\", \"foo\"]\n+ \"\"\"\n+ return path.rstrip(\"/\").split(\"/\")\n+\n+\n+URL_BIT_PATTERN = re.compile(r\"\\[.*\\]\")\n+\n+\n+@lru_cache()\n+def _match(request_path, allowed_path):\n+ \"\"\"\n+ Helper function that check if request_path matches with allowed_path\n+ \"\"\"\n+ # normalize and tokenize both paths\n+ # from '/api/foo' to ['', 'api', 'foo']\n+ request_tokens = _tokenize_path(request_path)\n+ allowed_tokens = _tokenize_path(allowed_path)\n+\n+ # if the number of tokens is different we know\n+ # that the request path does not match the allowed path\n+ if len(request_tokens) != len(allowed_tokens):\n+ return False\n+\n+ # otherwise we start the comparison token by token\n+ for request_token, allowed_token in zip(request_tokens, allowed_tokens):\n+ # if the allowed token matches a pattern like \"[id]\"\n+ # the match is valid and move to the next token\n+ if URL_BIT_PATTERN.match(allowed_token):\n+ continue\n+\n+ # we can break the loop and return False\n+ # when we encounter the first non-matching token\n+ if request_token != allowed_token:\n+ return False\n+\n+ return True\ndiff --git a/apps/dot_ext/models.py b/apps/dot_ext/models.py\n--- a/apps/dot_ext/models.py\n+++ b/apps/dot_ext/models.py\n@@ -48,13 +48,11 @@\n self.name, method, path)\n \n for scope in self.scope.all():\n- resources = scope.resources_as_dict()\n- logger.debug(\"application '%s': checking access with scope '%s'\", self.name, resources)\n- # both paths are normalized removing trailing slash\n- for allowed_path in resources.get(method, []):\n- if path.rstrip('/') == allowed_path.rstrip('/'):\n- logger.info(self._messages['resource_allowed'], self.name, method, path)\n- return True\n+ logger.debug(\"application '%s': checking access with scope '%s'\",\n+ self.name, scope.protected_resources)\n+ if scope.allow(method, path):\n+ logger.info(self._messages['resource_allowed'], self.name, method, path)\n+ return True\n \n logger.info(self._messages['resource_forbidden'], self.name, method, path)\n return False\n", "issue": "Add URL pattern mating support to Protected Capability\nAdd URL pattern mating support to Protected Capability model. 
For example, something like this could work:\n\n[\n [\"POST\", \"/fhir/v3/Practitioner\"],\n [\"PUT\", \"/fhir/v3/Practitioner/[id]\"],\n [\"POST\", \"/fhir/v3/Organization\"],\n [\"PUT\", \"/fhir/v3/Organization/[id]\"],\n [\"POST\", \"/npi/v1/update-all/[npi]\"],\n [\"POST\", \"/npi/v1/update-partial/[npi]\"],\n [\"GET\", \"/fhir/v3/metadata\"]\n]\n\nPerhaps how the pattern format is expressed in the form should simply match how django URLs work. I leave that to you how best to address.\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport logging\n\nfrom django.core.urlresolvers import reverse\nfrom django.db import models\nfrom django.utils.encoding import python_2_unicode_compatible\n\nfrom oauth2_provider.models import AbstractApplication\n\nfrom apps.capabilities.models import ProtectedCapability\n\n\nlogger = logging.getLogger('hhs_server.%s' % __name__)\n\n\n@python_2_unicode_compatible\nclass Endorsement(models.Model):\n title = models.TextField(max_length=256, default=\"\")\n iss = models.TextField(max_length=512, default=\"\", verbose_name=\"Issuer\",\n help_text= \"Must contain a QDN\")\n jws = models.TextField(max_length=10240, default=\"\")\n\n def __str__(self):\n return self.title\n\n\nclass Application(AbstractApplication):\n scope = models.ManyToManyField(ProtectedCapability)\n endorsements = models.ManyToManyField(Endorsement, blank=True)\n agree = models.BooleanField(default=False)\n\n _messages = {\n \"resource_allowed\": \"application '%s': access to resource '%s %s' allowed\",\n \"resource_forbidden\": \"application '%s': access to resource '%s %s' forbidden\",\n }\n\n def get_absolute_url(self):\n return reverse('dote_detail', args=[str(self.id)])\n\n def allow_resource(self, method, path):\n \"\"\"\n Return True when this applications has capability to allow\n request to `path` with method `method`.\n \"\"\"\n logger.debug(\"application '%s': checking access to resource '%s %s' is allowed\",\n self.name, method, path)\n\n for scope in self.scope.all():\n resources = scope.resources_as_dict()\n logger.debug(\"application '%s': checking access with scope '%s'\", self.name, resources)\n # both paths are normalized removing trailing slash\n for allowed_path in resources.get(method, []):\n if path.rstrip('/') == allowed_path.rstrip('/'):\n logger.info(self._messages['resource_allowed'], self.name, method, path)\n return True\n\n logger.info(self._messages['resource_forbidden'], self.name, method, path)\n return False\n", "path": "apps/dot_ext/models.py"}, {"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport json\n\nfrom django.db import models\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.contrib.auth.models import Group\n\n\n@python_2_unicode_compatible\nclass ProtectedCapability(models.Model):\n title = models.CharField(max_length=256, default=\"\", unique=True)\n slug = models.SlugField(verbose_name=\"Scope\", max_length=100, default=\"\", unique=True)\n group = models.ForeignKey(Group)\n protected_resources = models.TextField(max_length=10240,\n help_text=\"\"\"A JSON list of pairs containing HTTP method and URL.\n Example: [[\"GET\",\"/api/task1\"], [\"POST\",\"/api/task2\"]]\n \"\"\", default=\"\"\"[[\"GET\", \"/some-url\"]]\"\"\")\n description = models.TextField(max_length=10240, blank=True, default=\"\")\n\n def __str__(self):\n return self.title\n\n def resources_as_dict(self):\n \"\"\"\n Return 
protected_resources mapped into a dictionary.\n e.g. {\"GET\": [\"/api/example1\", \"/api/example2\"], \"POST\": ... }\n \"\"\"\n protected_resources = {}\n for method, path in json.loads(self.protected_resources):\n if method not in protected_resources:\n protected_resources[method] = [path]\n else:\n protected_resources[method].append(path)\n return protected_resources\n\n def scope(self):\n return self.slug\n\n class Meta:\n verbose_name_plural = \"Protected Capabilities\"\n verbose_name = \"Protected Capability\"\n", "path": "apps/capabilities/models.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport logging\n\nfrom django.core.urlresolvers import reverse\nfrom django.db import models\nfrom django.utils.encoding import python_2_unicode_compatible\n\nfrom oauth2_provider.models import AbstractApplication\n\nfrom apps.capabilities.models import ProtectedCapability\n\n\nlogger = logging.getLogger('hhs_server.%s' % __name__)\n\n\n@python_2_unicode_compatible\nclass Endorsement(models.Model):\n title = models.TextField(max_length=256, default=\"\")\n iss = models.TextField(max_length=512, default=\"\", verbose_name=\"Issuer\",\n help_text= \"Must contain a QDN\")\n jws = models.TextField(max_length=10240, default=\"\")\n\n def __str__(self):\n return self.title\n\n\nclass Application(AbstractApplication):\n scope = models.ManyToManyField(ProtectedCapability)\n endorsements = models.ManyToManyField(Endorsement, blank=True)\n agree = models.BooleanField(default=False)\n\n _messages = {\n \"resource_allowed\": \"application '%s': access to resource '%s %s' allowed\",\n \"resource_forbidden\": \"application '%s': access to resource '%s %s' forbidden\",\n }\n\n def get_absolute_url(self):\n return reverse('dote_detail', args=[str(self.id)])\n\n def allow_resource(self, method, path):\n \"\"\"\n Return True when this applications has capability to allow\n request to `path` with method `method`.\n \"\"\"\n logger.debug(\"application '%s': checking access to resource '%s %s' is allowed\",\n self.name, method, path)\n\n for scope in self.scope.all():\n logger.debug(\"application '%s': checking access with scope '%s'\",\n self.name, scope.protected_resources)\n if scope.allow(method, path):\n logger.info(self._messages['resource_allowed'], self.name, method, path)\n return True\n\n logger.info(self._messages['resource_forbidden'], self.name, method, path)\n return False\n", "path": "apps/dot_ext/models.py"}, {"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport json\nimport re\n\ntry:\n # this import should work on python2 only\n from itertools import izip as zip\nexcept ImportError:\n pass\n\nfrom django.db import models\nfrom django.contrib.auth.models import Group\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.utils.lru_cache import lru_cache\n\n\n@python_2_unicode_compatible\nclass ProtectedCapability(models.Model):\n title = models.CharField(max_length=256, default=\"\", unique=True)\n slug = models.SlugField(verbose_name=\"Scope\", max_length=100, default=\"\", unique=True)\n group = models.ForeignKey(Group)\n protected_resources = models.TextField(max_length=10240,\n help_text=\"\"\"A JSON list of pairs containing HTTP method and URL.\n Example: [[\"GET\",\"/api/task1\"], [\"POST\",\"/api/task2\"]]\n \"\"\", default=\"\"\"[[\"GET\", \"/some-url\"]]\"\"\")\n description = models.TextField(max_length=10240, blank=True, default=\"\")\n\n def 
__str__(self):\n return self.title\n\n def resources_as_dict(self):\n \"\"\"\n Return protected_resources mapped into a dictionary.\n e.g. {\"GET\": [\"/api/example1\", \"/api/example2\"], \"POST\": ... }\n \"\"\"\n protected_resources = {}\n for method, path in json.loads(self.protected_resources):\n if method not in protected_resources:\n protected_resources[method] = [path]\n else:\n protected_resources[method].append(path)\n return protected_resources\n\n def allow(self, method, path):\n \"\"\"\n Check if the capability allow access for `method` and `path`.\n \"\"\"\n resources = self.resources_as_dict()\n for allowed_path in resources.get(method, []):\n if _match(path, allowed_path):\n return True\n return False\n\n def scope(self):\n return self.slug\n\n class Meta:\n verbose_name_plural = \"Protected Capabilities\"\n verbose_name = \"Protected Capability\"\n\n\n@lru_cache()\ndef _tokenize_path(path):\n \"\"\"\n Helper function that removes trailing slash\n and split the path into bits.\n\n e.g.: \"/api/foo/\" -> [\"\", \"api\", \"foo\"]\n \"\"\"\n return path.rstrip(\"/\").split(\"/\")\n\n\nURL_BIT_PATTERN = re.compile(r\"\\[.*\\]\")\n\n\n@lru_cache()\ndef _match(request_path, allowed_path):\n \"\"\"\n Helper function that check if request_path matches with allowed_path\n \"\"\"\n # normalize and tokenize both paths\n # from '/api/foo' to ['', 'api', 'foo']\n request_tokens = _tokenize_path(request_path)\n allowed_tokens = _tokenize_path(allowed_path)\n\n # if the number of tokens is different we know\n # that the request path does not match the allowed path\n if len(request_tokens) != len(allowed_tokens):\n return False\n\n # otherwise we start the comparison token by token\n for request_token, allowed_token in zip(request_tokens, allowed_tokens):\n # if the allowed token matches a pattern like \"[id]\"\n # the match is valid and move to the next token\n if URL_BIT_PATTERN.match(allowed_token):\n continue\n\n # we can break the loop and return False\n # when we encounter the first non-matching token\n if request_token != allowed_token:\n return False\n\n return True\n", "path": "apps/capabilities/models.py"}]}
| 1,439 | 860 |
gh_patches_debug_62080
|
rasdani/github-patches
|
git_diff
|
plone__Products.CMFPlone-2822
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Plone 5.1.4 to 5.1.5 update: resource registry meta bundle generator comments out the first CSS construct of individual bundles
In one of our projects, after upgrading from Plone 5.1.4 to Plone 5.1.5, a very small part of the CSS became broken in plone.app.mosaic layouts. Images inside a tile no longer had a "height: auto" on them. This is normally included in mosaic-styles.css, and the mosaic styles were included in default.css.
We quickly patched the missing statement into our theme file and did a patch release, but the underlying problem was vague. The problem would only appear in production; running the site locally did not show it, so my attention was pulled to the meta bundle generation. This was modified between 5.1.4 and 5.1.5 in https://github.com/plone/Products.CMFPlone/commit/397918cd39ba0be4e2e150df5f5f2220e6ecc828 by @vangheem
The problematic code is in this part:
https://github.com/plone/Products.CMFPlone/blob/2195c4a43ba100fb2b7973dccb4299dad2de42fe/Products/CMFPlone/resources/browser/combine.py#L123-L132
The individual bundles are separated by comment lines with // Start Bundle and // End Bundle, but // actually comments out the first following CSS construct; more info at https://www.xanthir.com/b4U10
And the individual mosaic-styles.css bundle starts with:
```
// Start Bundle: mosaic-styles
/* Images will never be bigger then a tile */
.mosaic-tile img {
max-width: 100%;
height: auto;
}
```
It even skips over the /* */ comment on the next line and comments out the first {} it sees.
So that is how only our height: auto; got disabled in production.
This is at the moment only broken in Plone 5.1. In Plone 5.2 the whole bundler got rewritten again, partly because of Python 3 support, and it doesn't seem to insert these comments. I have swapped the // comments for /* */ comments and this also solves the problem. I'll create a pull request shortly.
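For reference, what I have locally boils down to swapping the markers in `_write_out` for block comments, roughly like the sketch below (not the final pull request; `StringIO` is already imported in combine.py):
```python
    def _write_out(self, resources, postfix):
        # drop-in replacement for MetaBundleWriter._write_out
        fi = StringIO()
        for bname, script in resources.items():
            # /* ... */ is a valid comment in both CSS and JS, so the bundle
            # markers can no longer swallow the first rule that follows them
            fi.write(
                (
                    '\n/* Start Bundle: {0} */\n'
                    '{1}\n'
                    '/* End Bundle: {0} */\n'
                ).format(bname, script)
            )
        self.folder.writeFile(self.name + postfix, fi)
        resources.clear()
```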
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/resources/browser/combine.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from Acquisition import aq_base
3 from datetime import datetime
4 from plone.registry.interfaces import IRegistry
5 from plone.resource.file import FilesystemFile
6 from plone.resource.interfaces import IResourceDirectory
7 from Products.CMFPlone.interfaces import IBundleRegistry
8 from Products.CMFPlone.interfaces.resources import OVERRIDE_RESOURCE_DIRECTORY_NAME # noqa
9 from StringIO import StringIO
10 from zExceptions import NotFound
11 from zope.component import getUtility
12 from zope.component import queryUtility
13
14 from collections import OrderedDict
15 import logging
16 import re
17
18
19 PRODUCTION_RESOURCE_DIRECTORY = 'production'
20 logger = logging.getLogger(__name__)
21
22
23 def get_production_resource_directory():
24 persistent_directory = queryUtility(IResourceDirectory, name='persistent')
25 if persistent_directory is None:
26 return ''
27 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
28 try:
29 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
30 except NotFound:
31 return '%s/++unique++1' % PRODUCTION_RESOURCE_DIRECTORY
32 if 'timestamp.txt' not in production_folder:
33 return '%s/++unique++1' % PRODUCTION_RESOURCE_DIRECTORY
34 timestamp = production_folder.readFile('timestamp.txt')
35 return '%s/++unique++%s' % (
36 PRODUCTION_RESOURCE_DIRECTORY, timestamp)
37
38
39 def get_resource(context, path):
40 if path.startswith('++plone++'):
41 # ++plone++ resources can be customized, we return their override
42 # value if any
43 overrides = get_override_directory(context)
44 filepath = path[9:]
45 if overrides.isFile(filepath):
46 return overrides.readFile(filepath)
47
48 try:
49 resource = context.unrestrictedTraverse(path)
50 except NotFound:
51 logger.warn(u'Could not find resource {0}. You may have to create it first.'.format(path)) # noqa
52 return
53
54 if isinstance(resource, FilesystemFile):
55 (directory, sep, filename) = path.rpartition('/')
56 return context.unrestrictedTraverse(directory).readFile(filename)
57
58 # calling the resource may modify the header, i.e. the content-type.
59 # we do not want this, so keep the original header intact.
60 response_before = context.REQUEST.response
61 context.REQUEST.response = response_before.__class__()
62 if hasattr(aq_base(resource), 'GET'):
63 # for FileResource
64 result = resource.GET()
65 else:
66 # any BrowserView
67 result = resource()
68 context.REQUEST.response = response_before
69 return result
70
71
72 class MetaBundleWriter(object):
73
74 def __init__(self, context, folder, name):
75 self.context = context
76 self.folder = folder
77 self.name = name
78 self.js_resources = OrderedDict()
79 self.css_resources = OrderedDict()
80 self.registry = getUtility(IRegistry)
81 self.bundles = self.registry.collectionOfInterface(
82 IBundleRegistry, prefix='plone.bundles', check=False)
83
84 def write_js(self):
85
86 # default resources
87 if self.name == 'default' and self.registry.records.get(
88 'plone.resources/jquery.js'
89 ):
90 self.js_resources['_jquery'] = get_resource(
91 self.context,
92 self.registry.records['plone.resources/jquery.js'].value)
93 self.js_resources['_requirejs'] = get_resource(
94 self.context,
95 self.registry.records['plone.resources.requirejs'].value)
96 self.js_resources['_configjs'] = get_resource(
97 self.context,
98 self.registry.records['plone.resources.configjs'].value)
99
100 # bundles
101 for name, bundle in self.bundles.items():
102 self.load_js_bundle(name, bundle)
103
104 self._write_out(self.js_resources, '.js')
105
106 def load_js_bundle(self, name, bundle, depth=0):
107 if depth > 10:
108 # recursion detection
109 return
110 if bundle.merge_with != self.name:
111 return
112 if bundle.jscompilation:
113 if bundle.depends and bundle.depends in self.bundles:
114 self.load_js_bundle(
115 bundle.depends, self.bundles[bundle.depends], depth + 1)
116 if name in self.js_resources:
117 return
118 resource = get_resource(self.context, bundle.jscompilation)
119 if not resource:
120 return
121 self.js_resources[name] = resource
122
123 def _write_out(self, resources, postfix):
124 fi = StringIO()
125 for bname, script in resources.items():
126 fi.write('''
127 // Start Bundle: {0}
128 {1}
129 // End Bundle: {2}
130 '''.format(bname, script, bname))
131 self.folder.writeFile(self.name + postfix, fi)
132 resources.clear()
133
134 def load_css_bundle(self, name, bundle, depth=0):
135 if depth > 10:
136 # recursion detection
137 return
138
139 if bundle.merge_with != self.name:
140 return
141
142 if bundle.csscompilation:
143 if bundle.depends and bundle.depends in self.bundles:
144 self.load_css_bundle(
145 bundle.depends, self.bundles[bundle.depends], depth + 1)
146 if name in self.css_resources:
147 return
148
149 css = get_resource(self.context, bundle.csscompilation)
150 if not css:
151 return
152 (path, sep, filename) = bundle.csscompilation.rpartition('/')
153 # Process relative urls:
154 # we prefix with current resource path any url not starting with
155 # '/' or http: or data:
156 css = re.sub(
157 r'''(url\(['"]?(?!['"]?([a-z]+:|\/)))''',
158 r'\1%s/' % path,
159 css)
160 self.css_resources[name] = css
161
162 def write_css(self):
163 for name, bundle in self.bundles.items():
164 self.load_css_bundle(name, bundle)
165
166 self._write_out(self.css_resources, '.css')
167
168
169 def get_override_directory(context):
170 persistent_directory = queryUtility(IResourceDirectory, name='persistent')
171 if persistent_directory is None:
172 return
173 if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
174 persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
175 return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
176
177
178 def combine_bundles(context):
179 container = get_override_directory(context)
180 if PRODUCTION_RESOURCE_DIRECTORY not in container:
181 container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
182 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
183
184 # store timestamp
185 fi = StringIO()
186 fi.write(datetime.now().isoformat())
187 production_folder.writeFile('timestamp.txt', fi)
188
189 # generate new combined bundles
190 default_writer = MetaBundleWriter(
191 context, production_folder, 'default')
192 default_writer.write_js()
193 logged_in_writer = MetaBundleWriter(
194 context, production_folder, 'logged-in')
195 logged_in_writer.write_js()
196 default_writer.write_css()
197 logged_in_writer.write_css()
198
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py
--- a/Products/CMFPlone/resources/browser/combine.py
+++ b/Products/CMFPlone/resources/browser/combine.py
@@ -124,9 +124,9 @@
fi = StringIO()
for bname, script in resources.items():
fi.write('''
-// Start Bundle: {0}
+/* Start Bundle: {0} */
{1}
-// End Bundle: {2}
+/* End Bundle: {2} */
'''.format(bname, script, bname))
self.folder.writeFile(self.name + postfix, fi)
resources.clear()
|
{"golden_diff": "diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py\n--- a/Products/CMFPlone/resources/browser/combine.py\n+++ b/Products/CMFPlone/resources/browser/combine.py\n@@ -124,9 +124,9 @@\n fi = StringIO()\n for bname, script in resources.items():\n fi.write('''\n-// Start Bundle: {0}\n+/* Start Bundle: {0} */\n {1}\n-// End Bundle: {2}\n+/* End Bundle: {2} */\n '''.format(bname, script, bname))\n self.folder.writeFile(self.name + postfix, fi)\n resources.clear()\n", "issue": "Plone 5.1.4 to 5.1.5 update: resource registry meta bundle generator comments first css construct of individual bundles\nIn one of our projects, after upgrading from Plone 5.1.4 to Plone 5.1.5 A very small part of the css became broken in plone.app.mosaic layouts . Images inside a tile no longer had a \"height: auto\" on them. This is normally included in mosaic-styles.css , and the mosaic styles were included in default.css. \r\n\r\nWe quickly patched the missing statement into our theme file and did a patch release, but the underlying problem was vague. The problem would only appear on production, running the site locally did not show the problem, so my attention was pulled to the metabundle generation. This was modified between 5.1.4 and 5.1.5 in https://github.com/plone/Products.CMFPlone/commit/397918cd39ba0be4e2e150df5f5f2220e6ecc828 by @vangheem \r\n\r\nThe problematic code is in this part:\r\n\r\nhttps://github.com/plone/Products.CMFPlone/blob/2195c4a43ba100fb2b7973dccb4299dad2de42fe/Products/CMFPlone/resources/browser/combine.py#L123-L132\r\n\r\nThe individual bundles are separated by comment lines with // Start bundle and // End Bundle, but // actually comments out the first following css construct , more info at https://www.xanthir.com/b4U10\r\n\r\nAnd the mosaic-styles.css individual bundle start with:\r\n\r\n```\r\n// Start Bundle: mosaic-styles\r\n/* Images will never be bigger then a tile */\r\n.mosaic-tile img {\r\n max-width: 100%;\r\n height: auto;\r\n}\r\n```\r\n\r\nIf even skips over the /* */ comment on the next line and comments the first {} it sees\r\nSo that is how only our height: auto; got disabled in production . \r\n\r\nThis is at the moment only broken in Plone 5.1 , In Plone 5.2 the whole bundler got rewritten again, partly because of Python3 support and doesn't seem to insert these comments. I have swapped the // comment for /* */ comments and this also solves the problem. I'll create a pull request shortly. 
\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import OVERRIDE_RESOURCE_DIRECTORY_NAME # noqa\nfrom StringIO import StringIO\nfrom zExceptions import NotFound\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nfrom collections import OrderedDict\nimport logging\nimport re\n\n\nPRODUCTION_RESOURCE_DIRECTORY = 'production'\nlogger = logging.getLogger(__name__)\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name='persistent')\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return '%s/++unique++1' % PRODUCTION_RESOURCE_DIRECTORY\n if 'timestamp.txt' not in production_folder:\n return '%s/++unique++1' % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return '%s/++unique++%s' % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n try:\n resource = context.unrestrictedTraverse(path)\n except NotFound:\n logger.warn(u'Could not find resource {0}. You may have to create it first.'.format(path)) # noqa\n return\n\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n\n # calling the resource may modify the header, i.e. 
the content-type.\n # we do not want this, so keep the original header intact.\n response_before = context.REQUEST.response\n context.REQUEST.response = response_before.__class__()\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n result = resource.GET()\n else:\n # any BrowserView\n result = resource()\n context.REQUEST.response = response_before\n return result\n\n\nclass MetaBundleWriter(object):\n\n def __init__(self, context, folder, name):\n self.context = context\n self.folder = folder\n self.name = name\n self.js_resources = OrderedDict()\n self.css_resources = OrderedDict()\n self.registry = getUtility(IRegistry)\n self.bundles = self.registry.collectionOfInterface(\n IBundleRegistry, prefix='plone.bundles', check=False)\n\n def write_js(self):\n\n # default resources\n if self.name == 'default' and self.registry.records.get(\n 'plone.resources/jquery.js'\n ):\n self.js_resources['_jquery'] = get_resource(\n self.context,\n self.registry.records['plone.resources/jquery.js'].value)\n self.js_resources['_requirejs'] = get_resource(\n self.context,\n self.registry.records['plone.resources.requirejs'].value)\n self.js_resources['_configjs'] = get_resource(\n self.context,\n self.registry.records['plone.resources.configjs'].value)\n\n # bundles\n for name, bundle in self.bundles.items():\n self.load_js_bundle(name, bundle)\n\n self._write_out(self.js_resources, '.js')\n\n def load_js_bundle(self, name, bundle, depth=0):\n if depth > 10:\n # recursion detection\n return\n if bundle.merge_with != self.name:\n return\n if bundle.jscompilation:\n if bundle.depends and bundle.depends in self.bundles:\n self.load_js_bundle(\n bundle.depends, self.bundles[bundle.depends], depth + 1)\n if name in self.js_resources:\n return\n resource = get_resource(self.context, bundle.jscompilation)\n if not resource:\n return\n self.js_resources[name] = resource\n\n def _write_out(self, resources, postfix):\n fi = StringIO()\n for bname, script in resources.items():\n fi.write('''\n// Start Bundle: {0}\n{1}\n// End Bundle: {2}\n'''.format(bname, script, bname))\n self.folder.writeFile(self.name + postfix, fi)\n resources.clear()\n\n def load_css_bundle(self, name, bundle, depth=0):\n if depth > 10:\n # recursion detection\n return\n\n if bundle.merge_with != self.name:\n return\n\n if bundle.csscompilation:\n if bundle.depends and bundle.depends in self.bundles:\n self.load_css_bundle(\n bundle.depends, self.bundles[bundle.depends], depth + 1)\n if name in self.css_resources:\n return\n\n css = get_resource(self.context, bundle.csscompilation)\n if not css:\n return\n (path, sep, filename) = bundle.csscompilation.rpartition('/')\n # Process relative urls:\n # we prefix with current resource path any url not starting with\n # '/' or http: or data:\n css = re.sub(\n r'''(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))''',\n r'\\1%s/' % path,\n css)\n self.css_resources[name] = css\n\n def write_css(self):\n for name, bundle in self.bundles.items():\n self.load_css_bundle(name, bundle)\n\n self._write_out(self.css_resources, '.css')\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name='persistent')\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not 
in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile('timestamp.txt', fi)\n\n # generate new combined bundles\n default_writer = MetaBundleWriter(\n context, production_folder, 'default')\n default_writer.write_js()\n logged_in_writer = MetaBundleWriter(\n context, production_folder, 'logged-in')\n logged_in_writer.write_js()\n default_writer.write_css()\n logged_in_writer.write_css()\n", "path": "Products/CMFPlone/resources/browser/combine.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import OVERRIDE_RESOURCE_DIRECTORY_NAME # noqa\nfrom StringIO import StringIO\nfrom zExceptions import NotFound\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nfrom collections import OrderedDict\nimport logging\nimport re\n\n\nPRODUCTION_RESOURCE_DIRECTORY = 'production'\nlogger = logging.getLogger(__name__)\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name='persistent')\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return '%s/++unique++1' % PRODUCTION_RESOURCE_DIRECTORY\n if 'timestamp.txt' not in production_folder:\n return '%s/++unique++1' % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return '%s/++unique++%s' % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n try:\n resource = context.unrestrictedTraverse(path)\n except NotFound:\n logger.warn(u'Could not find resource {0}. You may have to create it first.'.format(path)) # noqa\n return\n\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n\n # calling the resource may modify the header, i.e. 
the content-type.\n # we do not want this, so keep the original header intact.\n response_before = context.REQUEST.response\n context.REQUEST.response = response_before.__class__()\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n result = resource.GET()\n else:\n # any BrowserView\n result = resource()\n context.REQUEST.response = response_before\n return result\n\n\nclass MetaBundleWriter(object):\n\n def __init__(self, context, folder, name):\n self.context = context\n self.folder = folder\n self.name = name\n self.js_resources = OrderedDict()\n self.css_resources = OrderedDict()\n self.registry = getUtility(IRegistry)\n self.bundles = self.registry.collectionOfInterface(\n IBundleRegistry, prefix='plone.bundles', check=False)\n\n def write_js(self):\n\n # default resources\n if self.name == 'default' and self.registry.records.get(\n 'plone.resources/jquery.js'\n ):\n self.js_resources['_jquery'] = get_resource(\n self.context,\n self.registry.records['plone.resources/jquery.js'].value)\n self.js_resources['_requirejs'] = get_resource(\n self.context,\n self.registry.records['plone.resources.requirejs'].value)\n self.js_resources['_configjs'] = get_resource(\n self.context,\n self.registry.records['plone.resources.configjs'].value)\n\n # bundles\n for name, bundle in self.bundles.items():\n self.load_js_bundle(name, bundle)\n\n self._write_out(self.js_resources, '.js')\n\n def load_js_bundle(self, name, bundle, depth=0):\n if depth > 10:\n # recursion detection\n return\n if bundle.merge_with != self.name:\n return\n if bundle.jscompilation:\n if bundle.depends and bundle.depends in self.bundles:\n self.load_js_bundle(\n bundle.depends, self.bundles[bundle.depends], depth + 1)\n if name in self.js_resources:\n return\n resource = get_resource(self.context, bundle.jscompilation)\n if not resource:\n return\n self.js_resources[name] = resource\n\n def _write_out(self, resources, postfix):\n fi = StringIO()\n for bname, script in resources.items():\n fi.write('''\n/* Start Bundle: {0} */\n{1}\n/* End Bundle: {2} */\n'''.format(bname, script, bname))\n self.folder.writeFile(self.name + postfix, fi)\n resources.clear()\n\n def load_css_bundle(self, name, bundle, depth=0):\n if depth > 10:\n # recursion detection\n return\n\n if bundle.merge_with != self.name:\n return\n\n if bundle.csscompilation:\n if bundle.depends and bundle.depends in self.bundles:\n self.load_css_bundle(\n bundle.depends, self.bundles[bundle.depends], depth + 1)\n if name in self.css_resources:\n return\n\n css = get_resource(self.context, bundle.csscompilation)\n if not css:\n return\n (path, sep, filename) = bundle.csscompilation.rpartition('/')\n # Process relative urls:\n # we prefix with current resource path any url not starting with\n # '/' or http: or data:\n css = re.sub(\n r'''(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))''',\n r'\\1%s/' % path,\n css)\n self.css_resources[name] = css\n\n def write_css(self):\n for name, bundle in self.bundles.items():\n self.load_css_bundle(name, bundle)\n\n self._write_out(self.css_resources, '.css')\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name='persistent')\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if 
PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile('timestamp.txt', fi)\n\n # generate new combined bundles\n default_writer = MetaBundleWriter(\n context, production_folder, 'default')\n default_writer.write_js()\n logged_in_writer = MetaBundleWriter(\n context, production_folder, 'logged-in')\n logged_in_writer.write_js()\n default_writer.write_css()\n logged_in_writer.write_css()\n", "path": "Products/CMFPlone/resources/browser/combine.py"}]}
| 2,720 | 157 |
gh_patches_debug_17562
|
rasdani/github-patches
|
git_diff
|
huggingface__transformers-5122
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
data_collator.py does not allow NoneType labels for test set predictions on Glue
# 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Distilbert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Open the example Colab for text-classification from here https://huggingface.co/transformers/examples.html
2. Try to run the prediction function found in run_glue.py to predict on the official Glue test set.
3. The error is as shown below.
Earlier, this worked with the exact same program; ever since a recent update, this error shows up.
`TypeError Traceback (most recent call last)
<ipython-input-16-9eecdd4d48b1> in <module>()
2 output_mode = "classification"
3
----> 4 predictions = trainer.predict(test_dataset=test_dataset).predictions
5 if output_mode == "classification":
6 predictions = np.argmax(predictions, axis=1)
7 frames
/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in default_data_collator(features)
45 if "label" in first:
46 dtype = torch.long if type(first["label"]) is int else torch.float
---> 47 batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
48 elif "label_ids" in first:
49 if isinstance(first["label_ids"], torch.Tensor):
TypeError: must be real number, not NoneType`
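For reference, the failure reproduces in isolation; the feature dicts below are hypothetical stand-ins for what a GLUE test split yields (its examples carry `label=None`):

```python
import torch

# Hypothetical features for a test split: the gold label is absent, i.e. None
features = [
    {"input_ids": [101, 2023, 102], "label": None},
    {"input_ids": [101, 2003, 102], "label": None},
]

first = features[0]
dtype = torch.long if type(first["label"]) is int else torch.float  # -> torch.float
# Mirrors line 47 of data_collator.py and raises:
# TypeError: must be real number, not NoneType
batch_labels = torch.tensor([f["label"] for f in features], dtype=dtype)
```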
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The error can be seen in the Colab notebook here https://colab.research.google.com/drive/1H_92qdsOOql2hS210qNrfMEEMRcAoHD_?usp=sharing
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Colab
- Python version: NA
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/transformers/data/data_collator.py`
Content:
```
1 from dataclasses import dataclass
2 from typing import Any, Callable, Dict, List, NewType, Tuple
3
4 import torch
5 from torch.nn.utils.rnn import pad_sequence
6
7 from ..tokenization_utils import PreTrainedTokenizer
8
9
10 InputDataClass = NewType("InputDataClass", Any)
11
12 """
13 A DataCollator is a function that takes a list of samples from a Dataset
14 and collate them into a batch, as a dictionary of Tensors.
15 """
16 DataCollator = NewType("DataCollator", Callable[[List[InputDataClass]], Dict[str, torch.Tensor]])
17
18
19 def default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Tensor]:
20 """
21 Very simple data collator that:
22 - simply collates batches of dict-like objects
23 - Performs special handling for potential keys named:
24 - `label`: handles a single value (int or float) per object
25 - `label_ids`: handles a list of values per object
26 - does not do any additional preprocessing
27
28 i.e., Property names of the input object will be used as corresponding inputs to the model.
29 See glue and ner for example of how it's useful.
30 """
31
32 # In this function we'll make the assumption that all `features` in the batch
33 # have the same attributes.
34 # So we will look at the first element as a proxy for what attributes exist
35 # on the whole batch.
36 if not isinstance(features[0], dict):
37 features = [vars(f) for f in features]
38
39 first = features[0]
40 batch = {}
41
42 # Special handling for labels.
43 # Ensure that tensor is created with the correct type
44 # (it should be automatically the case, but let's make sure of it.)
45 if "label" in first:
46 dtype = torch.long if type(first["label"]) is int else torch.float
47 batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
48 elif "label_ids" in first:
49 if isinstance(first["label_ids"], torch.Tensor):
50 batch["labels"] = torch.stack([f["label_ids"] for f in features])
51 else:
52 dtype = torch.long if type(first["label_ids"][0]) is int else torch.float
53 batch["labels"] = torch.tensor([f["label_ids"] for f in features], dtype=dtype)
54
55 # Handling of all other possible keys.
56 # Again, we will use the first element to figure out which key/values are not None for this model.
57 for k, v in first.items():
58 if k not in ("label", "label_ids") and v is not None and not isinstance(v, str):
59 if isinstance(v, torch.Tensor):
60 batch[k] = torch.stack([f[k] for f in features])
61 else:
62 batch[k] = torch.tensor([f[k] for f in features], dtype=torch.long)
63
64 return batch
65
66
67 @dataclass
68 class DataCollatorForLanguageModeling:
69 """
70 Data collator used for language modeling.
71 - collates batches of tensors, honoring their tokenizer's pad_token
72 - preprocesses batches for masked language modeling
73 """
74
75 tokenizer: PreTrainedTokenizer
76 mlm: bool = True
77 mlm_probability: float = 0.15
78
79 def __call__(self, examples: List[torch.Tensor]) -> Dict[str, torch.Tensor]:
80 batch = self._tensorize_batch(examples)
81 if self.mlm:
82 inputs, labels = self.mask_tokens(batch)
83 return {"input_ids": inputs, "labels": labels}
84 else:
85 return {"input_ids": batch, "labels": batch}
86
87 def _tensorize_batch(self, examples: List[torch.Tensor]) -> torch.Tensor:
88 length_of_first = examples[0].size(0)
89 are_tensors_same_length = all(x.size(0) == length_of_first for x in examples)
90 if are_tensors_same_length:
91 return torch.stack(examples, dim=0)
92 else:
93 if self.tokenizer._pad_token is None:
94 raise ValueError(
95 "You are attempting to pad samples but the tokenizer you are using"
96 f" ({self.tokenizer.__class__.__name__}) does not have one."
97 )
98 return pad_sequence(examples, batch_first=True, padding_value=self.tokenizer.pad_token_id)
99
100 def mask_tokens(self, inputs: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
101 """
102 Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
103 """
104
105 if self.tokenizer.mask_token is None:
106 raise ValueError(
107 "This tokenizer does not have a mask token which is necessary for masked language modeling. Remove the --mlm flag if you want to use this tokenizer."
108 )
109
110 labels = inputs.clone()
111 # We sample a few tokens in each sequence for masked-LM training (with probability args.mlm_probability defaults to 0.15 in Bert/RoBERTa)
112 probability_matrix = torch.full(labels.shape, self.mlm_probability)
113 special_tokens_mask = [
114 self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
115 ]
116 probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)
117 if self.tokenizer._pad_token is not None:
118 padding_mask = labels.eq(self.tokenizer.pad_token_id)
119 probability_matrix.masked_fill_(padding_mask, value=0.0)
120 masked_indices = torch.bernoulli(probability_matrix).bool()
121 labels[~masked_indices] = -100 # We only compute loss on masked tokens
122
123 # 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
124 indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
125 inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
126
127 # 10% of the time, we replace masked input tokens with random word
128 indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
129 random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
130 inputs[indices_random] = random_words[indices_random]
131
132 # The rest of the time (10% of the time) we keep the masked input tokens unchanged
133 return inputs, labels
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/transformers/data/data_collator.py b/src/transformers/data/data_collator.py
--- a/src/transformers/data/data_collator.py
+++ b/src/transformers/data/data_collator.py
@@ -42,10 +42,10 @@
# Special handling for labels.
# Ensure that tensor is created with the correct type
# (it should be automatically the case, but let's make sure of it.)
- if "label" in first:
+ if "label" in first and first["label"] is not None:
dtype = torch.long if type(first["label"]) is int else torch.float
batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
- elif "label_ids" in first:
+ elif "label_ids" in first and first["label_ids"] is not None:
if isinstance(first["label_ids"], torch.Tensor):
batch["labels"] = torch.stack([f["label_ids"] for f in features])
else:
|
{"golden_diff": "diff --git a/src/transformers/data/data_collator.py b/src/transformers/data/data_collator.py\n--- a/src/transformers/data/data_collator.py\n+++ b/src/transformers/data/data_collator.py\n@@ -42,10 +42,10 @@\n # Special handling for labels.\n # Ensure that tensor is created with the correct type\n # (it should be automatically the case, but let's make sure of it.)\n- if \"label\" in first:\n+ if \"label\" in first and first[\"label\"] is not None:\n dtype = torch.long if type(first[\"label\"]) is int else torch.float\n batch[\"labels\"] = torch.tensor([f[\"label\"] for f in features], dtype=dtype)\n- elif \"label_ids\" in first:\n+ elif \"label_ids\" in first and first[\"label_ids\"] is not None:\n if isinstance(first[\"label_ids\"], torch.Tensor):\n batch[\"labels\"] = torch.stack([f[\"label_ids\"] for f in features])\n else:\n", "issue": "data_collator.py does not allow NoneType labels for test set predictions on Glue\n# \ud83d\udc1b Bug\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...): Distilbert\r\n\r\nLanguage I am using the model on (English, Chinese ...): English\r\n\r\nThe problem arises when using:\r\n* [x] the official example scripts: (give details below) \r\n* [ ] my own modified scripts: (give details below)\r\n\r\nThe tasks I am working on is:\r\n* [x] an official GLUE/SQUaD task: (give the name)\r\n* [x] my own task or dataset: (give details below)\r\n\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Open the example Colab for text-classification from here https://huggingface.co/transformers/examples.html\r\n2. Try to run the prediction function found in run_glue.py to predict on the official Glue test set. \r\n3. The error is as shown below. \r\n\r\nEarlier, this has worked with the exact same program, ever since an update recently, this error shows up. \r\n\r\n`TypeError Traceback (most recent call last)\r\n<ipython-input-16-9eecdd4d48b1> in <module>()\r\n 2 output_mode = \"classification\"\r\n 3 \r\n----> 4 predictions = trainer.predict(test_dataset=test_dataset).predictions\r\n 5 if output_mode == \"classification\":\r\n 6 predictions = np.argmax(predictions, axis=1)\r\n\r\n7 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in default_data_collator(features)\r\n 45 if \"label\" in first:\r\n 46 dtype = torch.long if type(first[\"label\"]) is int else torch.float\r\n---> 47 batch[\"labels\"] = torch.tensor([f[\"label\"] for f in features], dtype=dtype)\r\n 48 elif \"label_ids\" in first:\r\n 49 if isinstance(first[\"label_ids\"], torch.Tensor):\r\n\r\nTypeError: must be real number, not NoneType`\r\n\r\n<!-- If you have code snippets, error messages, stack traces please provide them here as well.\r\n Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting\r\n Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->\r\n\r\n## Expected behavior\r\nThe error can be seen in the Colab notebook here https://colab.research.google.com/drive/1H_92qdsOOql2hS210qNrfMEEMRcAoHD_?usp=sharing\r\n\r\n<!-- A clear and concise description of what you would expect to happen. -->\r\n\r\n## Environment info\r\n<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.\r\n Don't forget to fill out the missing fields in that output! 
-->\r\n \r\n- `transformers` version:\r\n- Platform: Colab\r\n- Python version: NA\r\n- PyTorch version (GPU?):\r\n- Tensorflow version (GPU?):\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?:\r\n\ndata_collator.py does not allow NoneType labels for test set predictions on Glue\n# \ud83d\udc1b Bug\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...): Distilbert\r\n\r\nLanguage I am using the model on (English, Chinese ...): English\r\n\r\nThe problem arises when using:\r\n* [x] the official example scripts: (give details below) \r\n* [ ] my own modified scripts: (give details below)\r\n\r\nThe tasks I am working on is:\r\n* [x] an official GLUE/SQUaD task: (give the name)\r\n* [x] my own task or dataset: (give details below)\r\n\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Open the example Colab for text-classification from here https://huggingface.co/transformers/examples.html\r\n2. Try to run the prediction function found in run_glue.py to predict on the official Glue test set. \r\n3. The error is as shown below. \r\n\r\nEarlier, this has worked with the exact same program, ever since an update recently, this error shows up. \r\n\r\n`TypeError Traceback (most recent call last)\r\n<ipython-input-16-9eecdd4d48b1> in <module>()\r\n 2 output_mode = \"classification\"\r\n 3 \r\n----> 4 predictions = trainer.predict(test_dataset=test_dataset).predictions\r\n 5 if output_mode == \"classification\":\r\n 6 predictions = np.argmax(predictions, axis=1)\r\n\r\n7 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in default_data_collator(features)\r\n 45 if \"label\" in first:\r\n 46 dtype = torch.long if type(first[\"label\"]) is int else torch.float\r\n---> 47 batch[\"labels\"] = torch.tensor([f[\"label\"] for f in features], dtype=dtype)\r\n 48 elif \"label_ids\" in first:\r\n 49 if isinstance(first[\"label_ids\"], torch.Tensor):\r\n\r\nTypeError: must be real number, not NoneType`\r\n\r\n<!-- If you have code snippets, error messages, stack traces please provide them here as well.\r\n Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting\r\n Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->\r\n\r\n## Expected behavior\r\nThe error can be seen in the Colab notebook here https://colab.research.google.com/drive/1H_92qdsOOql2hS210qNrfMEEMRcAoHD_?usp=sharing\r\n\r\n<!-- A clear and concise description of what you would expect to happen. -->\r\n\r\n## Environment info\r\n<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.\r\n Don't forget to fill out the missing fields in that output! 
-->\r\n \r\n- `transformers` version:\r\n- Platform: Colab\r\n- Python version: NA\r\n- PyTorch version (GPU?):\r\n- Tensorflow version (GPU?):\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?:\r\n\n", "before_files": [{"content": "from dataclasses import dataclass\nfrom typing import Any, Callable, Dict, List, NewType, Tuple\n\nimport torch\nfrom torch.nn.utils.rnn import pad_sequence\n\nfrom ..tokenization_utils import PreTrainedTokenizer\n\n\nInputDataClass = NewType(\"InputDataClass\", Any)\n\n\"\"\"\nA DataCollator is a function that takes a list of samples from a Dataset\nand collate them into a batch, as a dictionary of Tensors.\n\"\"\"\nDataCollator = NewType(\"DataCollator\", Callable[[List[InputDataClass]], Dict[str, torch.Tensor]])\n\n\ndef default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Tensor]:\n \"\"\"\n Very simple data collator that:\n - simply collates batches of dict-like objects\n - Performs special handling for potential keys named:\n - `label`: handles a single value (int or float) per object\n - `label_ids`: handles a list of values per object\n - does not do any additional preprocessing\n\n i.e., Property names of the input object will be used as corresponding inputs to the model.\n See glue and ner for example of how it's useful.\n \"\"\"\n\n # In this function we'll make the assumption that all `features` in the batch\n # have the same attributes.\n # So we will look at the first element as a proxy for what attributes exist\n # on the whole batch.\n if not isinstance(features[0], dict):\n features = [vars(f) for f in features]\n\n first = features[0]\n batch = {}\n\n # Special handling for labels.\n # Ensure that tensor is created with the correct type\n # (it should be automatically the case, but let's make sure of it.)\n if \"label\" in first:\n dtype = torch.long if type(first[\"label\"]) is int else torch.float\n batch[\"labels\"] = torch.tensor([f[\"label\"] for f in features], dtype=dtype)\n elif \"label_ids\" in first:\n if isinstance(first[\"label_ids\"], torch.Tensor):\n batch[\"labels\"] = torch.stack([f[\"label_ids\"] for f in features])\n else:\n dtype = torch.long if type(first[\"label_ids\"][0]) is int else torch.float\n batch[\"labels\"] = torch.tensor([f[\"label_ids\"] for f in features], dtype=dtype)\n\n # Handling of all other possible keys.\n # Again, we will use the first element to figure out which key/values are not None for this model.\n for k, v in first.items():\n if k not in (\"label\", \"label_ids\") and v is not None and not isinstance(v, str):\n if isinstance(v, torch.Tensor):\n batch[k] = torch.stack([f[k] for f in features])\n else:\n batch[k] = torch.tensor([f[k] for f in features], dtype=torch.long)\n\n return batch\n\n\n@dataclass\nclass DataCollatorForLanguageModeling:\n \"\"\"\n Data collator used for language modeling.\n - collates batches of tensors, honoring their tokenizer's pad_token\n - preprocesses batches for masked language modeling\n \"\"\"\n\n tokenizer: PreTrainedTokenizer\n mlm: bool = True\n mlm_probability: float = 0.15\n\n def __call__(self, examples: List[torch.Tensor]) -> Dict[str, torch.Tensor]:\n batch = self._tensorize_batch(examples)\n if self.mlm:\n inputs, labels = self.mask_tokens(batch)\n return {\"input_ids\": inputs, \"labels\": labels}\n else:\n return {\"input_ids\": batch, \"labels\": batch}\n\n def _tensorize_batch(self, examples: List[torch.Tensor]) -> torch.Tensor:\n length_of_first = examples[0].size(0)\n are_tensors_same_length = 
all(x.size(0) == length_of_first for x in examples)\n if are_tensors_same_length:\n return torch.stack(examples, dim=0)\n else:\n if self.tokenizer._pad_token is None:\n raise ValueError(\n \"You are attempting to pad samples but the tokenizer you are using\"\n f\" ({self.tokenizer.__class__.__name__}) does not have one.\"\n )\n return pad_sequence(examples, batch_first=True, padding_value=self.tokenizer.pad_token_id)\n\n def mask_tokens(self, inputs: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:\n \"\"\"\n Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.\n \"\"\"\n\n if self.tokenizer.mask_token is None:\n raise ValueError(\n \"This tokenizer does not have a mask token which is necessary for masked language modeling. Remove the --mlm flag if you want to use this tokenizer.\"\n )\n\n labels = inputs.clone()\n # We sample a few tokens in each sequence for masked-LM training (with probability args.mlm_probability defaults to 0.15 in Bert/RoBERTa)\n probability_matrix = torch.full(labels.shape, self.mlm_probability)\n special_tokens_mask = [\n self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()\n ]\n probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)\n if self.tokenizer._pad_token is not None:\n padding_mask = labels.eq(self.tokenizer.pad_token_id)\n probability_matrix.masked_fill_(padding_mask, value=0.0)\n masked_indices = torch.bernoulli(probability_matrix).bool()\n labels[~masked_indices] = -100 # We only compute loss on masked tokens\n\n # 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])\n indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices\n inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)\n\n # 10% of the time, we replace masked input tokens with random word\n indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced\n random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)\n inputs[indices_random] = random_words[indices_random]\n\n # The rest of the time (10% of the time) we keep the masked input tokens unchanged\n return inputs, labels\n", "path": "src/transformers/data/data_collator.py"}], "after_files": [{"content": "from dataclasses import dataclass\nfrom typing import Any, Callable, Dict, List, NewType, Tuple\n\nimport torch\nfrom torch.nn.utils.rnn import pad_sequence\n\nfrom ..tokenization_utils import PreTrainedTokenizer\n\n\nInputDataClass = NewType(\"InputDataClass\", Any)\n\n\"\"\"\nA DataCollator is a function that takes a list of samples from a Dataset\nand collate them into a batch, as a dictionary of Tensors.\n\"\"\"\nDataCollator = NewType(\"DataCollator\", Callable[[List[InputDataClass]], Dict[str, torch.Tensor]])\n\n\ndef default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Tensor]:\n \"\"\"\n Very simple data collator that:\n - simply collates batches of dict-like objects\n - Performs special handling for potential keys named:\n - `label`: handles a single value (int or float) per object\n - `label_ids`: handles a list of values per object\n - does not do any additional preprocessing\n\n i.e., Property names of the input object will be used as corresponding inputs to the model.\n See glue and ner for example of how it's useful.\n \"\"\"\n\n # In this function we'll make the assumption that all `features` in 
the batch\n # have the same attributes.\n # So we will look at the first element as a proxy for what attributes exist\n # on the whole batch.\n if not isinstance(features[0], dict):\n features = [vars(f) for f in features]\n\n first = features[0]\n batch = {}\n\n # Special handling for labels.\n # Ensure that tensor is created with the correct type\n # (it should be automatically the case, but let's make sure of it.)\n if \"label\" in first and first[\"label\"] is not None:\n dtype = torch.long if type(first[\"label\"]) is int else torch.float\n batch[\"labels\"] = torch.tensor([f[\"label\"] for f in features], dtype=dtype)\n elif \"label_ids\" in first and first[\"label_ids\"] is not None:\n if isinstance(first[\"label_ids\"], torch.Tensor):\n batch[\"labels\"] = torch.stack([f[\"label_ids\"] for f in features])\n else:\n dtype = torch.long if type(first[\"label_ids\"][0]) is int else torch.float\n batch[\"labels\"] = torch.tensor([f[\"label_ids\"] for f in features], dtype=dtype)\n\n # Handling of all other possible keys.\n # Again, we will use the first element to figure out which key/values are not None for this model.\n for k, v in first.items():\n if k not in (\"label\", \"label_ids\") and v is not None and not isinstance(v, str):\n if isinstance(v, torch.Tensor):\n batch[k] = torch.stack([f[k] for f in features])\n else:\n batch[k] = torch.tensor([f[k] for f in features], dtype=torch.long)\n\n return batch\n\n\n@dataclass\nclass DataCollatorForLanguageModeling:\n \"\"\"\n Data collator used for language modeling.\n - collates batches of tensors, honoring their tokenizer's pad_token\n - preprocesses batches for masked language modeling\n \"\"\"\n\n tokenizer: PreTrainedTokenizer\n mlm: bool = True\n mlm_probability: float = 0.15\n\n def __call__(self, examples: List[torch.Tensor]) -> Dict[str, torch.Tensor]:\n batch = self._tensorize_batch(examples)\n if self.mlm:\n inputs, labels = self.mask_tokens(batch)\n return {\"input_ids\": inputs, \"labels\": labels}\n else:\n return {\"input_ids\": batch, \"labels\": batch}\n\n def _tensorize_batch(self, examples: List[torch.Tensor]) -> torch.Tensor:\n length_of_first = examples[0].size(0)\n are_tensors_same_length = all(x.size(0) == length_of_first for x in examples)\n if are_tensors_same_length:\n return torch.stack(examples, dim=0)\n else:\n if self.tokenizer._pad_token is None:\n raise ValueError(\n \"You are attempting to pad samples but the tokenizer you are using\"\n f\" ({self.tokenizer.__class__.__name__}) does not have one.\"\n )\n return pad_sequence(examples, batch_first=True, padding_value=self.tokenizer.pad_token_id)\n\n def mask_tokens(self, inputs: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:\n \"\"\"\n Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.\n \"\"\"\n\n if self.tokenizer.mask_token is None:\n raise ValueError(\n \"This tokenizer does not have a mask token which is necessary for masked language modeling. 
Remove the --mlm flag if you want to use this tokenizer.\"\n )\n\n labels = inputs.clone()\n # We sample a few tokens in each sequence for masked-LM training (with probability args.mlm_probability defaults to 0.15 in Bert/RoBERTa)\n probability_matrix = torch.full(labels.shape, self.mlm_probability)\n special_tokens_mask = [\n self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()\n ]\n probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)\n if self.tokenizer._pad_token is not None:\n padding_mask = labels.eq(self.tokenizer.pad_token_id)\n probability_matrix.masked_fill_(padding_mask, value=0.0)\n masked_indices = torch.bernoulli(probability_matrix).bool()\n labels[~masked_indices] = -100 # We only compute loss on masked tokens\n\n # 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])\n indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices\n inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)\n\n # 10% of the time, we replace masked input tokens with random word\n indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced\n random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)\n inputs[indices_random] = random_words[indices_random]\n\n # The rest of the time (10% of the time) we keep the masked input tokens unchanged\n return inputs, labels\n", "path": "src/transformers/data/data_collator.py"}]}
| 3,354 | 226 |
gh_patches_debug_19387
|
rasdani/github-patches
|
git_diff
|
pytorch__audio-3615
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Optimization Suggestion for 'torchaudio.functional.merge_tokens' Function
### 🚀 The feature
Optimization of the merge_tokens function in 'torchaudio.functional.merge_tokens' to leverage PyTorch's tensor operations for improved efficiency and reduced runtime.
Proposed Implementation:
```python
from typing import List

import torch

from torchaudio.functional import TokenSpan


def merge_tokens_optimized(tokens: torch.Tensor, scores: torch.Tensor, blank: int = 0) -> List[TokenSpan]:
"""Removes repeated tokens and blank tokens from the given CTC token sequence
Args:
tokens (torch.Tensor): Alignment tokens (unbatched)
scores (torch.Tensor): Alignment scores (unbatched)
blank (int, optional): Blank token. Defaults to 0.
Returns:
List[TokenSpan]: list of TokenSpan
"""
# Compute the difference between consecutive tokens. prepend and append a -1 to make sure the first and last
# tokens are not removed
diff = torch.diff(
tokens, prepend=torch.tensor([-1], device=tokens.device), append=torch.tensor([-1], device=tokens.device)
)
# Compute the change points and mask out the points where the new value is blank
changes_wo_blank = torch.nonzero((diff != 0)).squeeze().tolist()
tokens = tokens.tolist()
# Create a TokenSpan for each change point.
spans = [
TokenSpan(token=token, start=start, end=end, score=scores[start:end].mean().item())
for start, end in zip(changes_wo_blank[:-1], changes_wo_blank[1:])
if (token := tokens[start]) != blank
]
return spans
```
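For intuition, here is a toy input (values chosen arbitrarily, not from a real alignment) together with the spans that both the existing `merge_tokens` and the proposed `merge_tokens_optimized` should produce:

```python
import torch

tokens = torch.tensor([0, 3, 3, 0, 0, 5, 5, 5, 0])  # 0 is the blank token
scores = torch.tensor([0.9, 0.8, 0.6, 0.9, 0.9, 0.7, 0.5, 0.6, 0.9])

spans = merge_tokens_optimized(tokens, scores)
# Repeats are merged and blanks dropped, giving two spans:
#   TokenSpan(token=3, start=1, end=3, score ≈ 0.70)  # mean of scores[1:3]
#   TokenSpan(token=5, start=5, end=8, score ≈ 0.60)  # mean of scores[5:8]
```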
### Motivation, pitch
While working on some CTC token sequence operations, I noticed that the current implementation of the merge_tokens function could benefit from tensor operations, potentially offering significant runtime improvements. Specifically, the current loop-based approach might be streamlined by taking advantage of PyTorch's capabilities. My tests indicate that my proposed optimization runs approximately 7 times faster over 1000 runs while producing equivalent outputs.
### Alternatives
Proposed tensor-based optimization: this approach leverages PyTorch's tensor operations, eliminating explicit loops and potentially reducing execution time considerably.
### Additional context
To validate the accuracy of the optimized function, a method was crafted to verify the outputs of both the original and proposed functions:
```python
from typing import List

import torchaudio.functional as F
from torchaudio.functional import TokenSpan
def token_span_outputs_equal(span_list_1: List[TokenSpan], span_list_2: List[TokenSpan]) -> bool:
"""Compares two lists of TokenSpan objects for equality."""
if len(span_list_1) != len(span_list_2):
return False
for span_1, span_2 in zip(span_list_1, span_list_2):
if (
span_1.token != span_2.token or
span_1.start != span_2.start or
span_1.end != span_2.end or
abs(span_1.score - span_2.score) > 1e-6 # Allowing a small tolerance for floating-point comparisons
):
return False
return True
output_original = F.merge_tokens(aligned_tokens, alignment_scores)
output_optimized = merge_tokens_optimized(aligned_tokens, alignment_scores)
assert token_span_outputs_equal(output_original, output_optimized), "Outputs are not equivalent!"
```
Benchmark code:
```python
import timeit
pytorch_time = timeit.timeit(lambda: F.merge_tokens(aligned_tokens, alignment_scores), number=1000)
new_method_time = timeit.timeit(lambda: merge_tokens_optimized(aligned_tokens, alignment_scores), number=1000)
print(f"PyTorch method: {pytorch_time}")
print(f"New method: {new_method_time}")
```
Output:
```
Pytorch method: 7.622203521430492
New method: 1.0753349959850311
```
Integrating this optimization could bring about significant improvements in both performance and maintainability, benefiting a broad spectrum of users.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchaudio/functional/_alignment.py`
Content:
```
1 from dataclasses import dataclass
2 from typing import List, Optional, Tuple
3
4 import torch
5 from torch import Tensor
6 from torchaudio._extension import fail_if_no_align
7
8 __all__ = []
9
10
11 @fail_if_no_align
12 def forced_align(
13 log_probs: Tensor,
14 targets: Tensor,
15 input_lengths: Optional[Tensor] = None,
16 target_lengths: Optional[Tensor] = None,
17 blank: int = 0,
18 ) -> Tuple[Tensor, Tensor]:
19 r"""Align a CTC label sequence to an emission.
20
21 .. devices:: CPU CUDA
22
23 .. properties:: TorchScript
24
25 Args:
26 log_probs (Tensor): log probability of CTC emission output.
27 Tensor of shape `(B, T, C)`. where `B` is the batch size, `T` is the input length,
28 `C` is the number of characters in alphabet including blank.
29 targets (Tensor): Target sequence. Tensor of shape `(B, L)`,
30 where `L` is the target length.
31 input_lengths (Tensor or None, optional):
32 Lengths of the inputs (max value must each be <= `T`). 1-D Tensor of shape `(B,)`.
33 target_lengths (Tensor or None, optional):
34 Lengths of the targets. 1-D Tensor of shape `(B,)`.
35 blank_id (int, optional): The index of blank symbol in CTC emission. (Default: 0)
36
37 Returns:
38 Tuple(Tensor, Tensor):
39 Tensor: Label for each time step in the alignment path computed using forced alignment.
40
41 Tensor: Log probability scores of the labels for each time step.
42
43 Note:
44 The sequence length of `log_probs` must satisfy:
45
46
47 .. math::
48 L_{\text{log\_probs}} \ge L_{\text{label}} + N_{\text{repeat}}
49
50 where :math:`N_{\text{repeat}}` is the number of consecutively repeated tokens.
51 For example, in str `"aabbc"`, the number of repeats are `2`.
52
53 Note:
54 The current version only supports ``batch_size==1``.
55 """
56 if blank in targets:
57 raise ValueError(f"targets Tensor shouldn't contain blank index. Found {targets}.")
58 if torch.max(targets) >= log_probs.shape[-1]:
59 raise ValueError("targets values must be less than the CTC dimension")
60
61 if input_lengths is None:
62 batch_size, length = log_probs.size(0), log_probs.size(1)
63 input_lengths = torch.full((batch_size,), length, dtype=torch.int64, device=log_probs.device)
64 if target_lengths is None:
65 batch_size, length = targets.size(0), targets.size(1)
66 target_lengths = torch.full((batch_size,), length, dtype=torch.int64, device=targets.device)
67
68 # For TorchScript compatibility
69 assert input_lengths is not None
70 assert target_lengths is not None
71
72 paths, scores = torch.ops.torchaudio.forced_align(log_probs, targets, input_lengths, target_lengths, blank)
73 return paths, scores
74
75
76 @dataclass
77 class TokenSpan:
78 """TokenSpan()
79 Token with time stamps and score. Returned by :py:func:`merge_tokens`.
80 """
81
82 token: int
83 """The token"""
84 start: int
85 """The start time (inclusive) in emission time axis."""
86 end: int
87 """The end time (exclusive) in emission time axis."""
88 score: float
89 """The score of the this token."""
90
91 def __len__(self) -> int:
92 """Returns the time span"""
93 return self.end - self.start
94
95
96 def merge_tokens(tokens: Tensor, scores: Tensor, blank: int = 0) -> List[TokenSpan]:
97 """Removes repeated tokens and blank tokens from the given CTC token sequence.
98
99 Args:
100 tokens (Tensor): Alignment tokens (unbatched) returned from :py:func:`forced_align`.
101 Shape: `(time, )`.
102 scores (Tensor): Alignment scores (unbatched) returned from :py:func:`forced_align`.
103 Shape: `(time, )`. When computing the token-size score, the given score is averaged
104 across the corresponding time span.
105
106 Returns:
107 list of TokenSpan
108
109 Example:
110 >>> aligned_tokens, scores = forced_align(emission, targets, input_lengths, target_lengths)
111 >>> token_spans = merge_tokens(aligned_tokens[0], scores[0])
112 """
113 if tokens.ndim != 1 or scores.ndim != 1:
114 raise ValueError("`tokens` and `scores` must be 1D Tensor.")
115 if len(tokens) != len(scores):
116 raise ValueError("`tokens` and `scores` must be the same length.")
117
118 t_prev = blank
119 i = start = -1
120 spans = []
121 for t, token in enumerate(tokens):
122 if token != t_prev:
123 if t_prev != blank:
124 spans.append(TokenSpan(t_prev.item(), start, t, scores[start:t].mean().item()))
125 if token != blank:
126 i += 1
127 start = t
128 t_prev = token
129 if t_prev != blank:
130 spans.append(TokenSpan(t_prev.item(), start, len(tokens), scores[start:].mean().item()))
131 return spans
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchaudio/functional/_alignment.py b/torchaudio/functional/_alignment.py
--- a/torchaudio/functional/_alignment.py
+++ b/torchaudio/functional/_alignment.py
@@ -115,17 +115,14 @@
if len(tokens) != len(scores):
raise ValueError("`tokens` and `scores` must be the same length.")
- t_prev = blank
- i = start = -1
- spans = []
- for t, token in enumerate(tokens):
- if token != t_prev:
- if t_prev != blank:
- spans.append(TokenSpan(t_prev.item(), start, t, scores[start:t].mean().item()))
- if token != blank:
- i += 1
- start = t
- t_prev = token
- if t_prev != blank:
- spans.append(TokenSpan(t_prev.item(), start, len(tokens), scores[start:].mean().item()))
+ diff = torch.diff(
+ tokens, prepend=torch.tensor([-1], device=tokens.device), append=torch.tensor([-1], device=tokens.device)
+ )
+ changes_wo_blank = torch.nonzero((diff != 0)).squeeze().tolist()
+ tokens = tokens.tolist()
+ spans = [
+ TokenSpan(token=token, start=start, end=end, score=scores[start:end].mean().item())
+ for start, end in zip(changes_wo_blank[:-1], changes_wo_blank[1:])
+ if (token := tokens[start]) != blank
+ ]
return spans
|
{"golden_diff": "diff --git a/torchaudio/functional/_alignment.py b/torchaudio/functional/_alignment.py\n--- a/torchaudio/functional/_alignment.py\n+++ b/torchaudio/functional/_alignment.py\n@@ -115,17 +115,14 @@\n if len(tokens) != len(scores):\n raise ValueError(\"`tokens` and `scores` must be the same length.\")\n \n- t_prev = blank\n- i = start = -1\n- spans = []\n- for t, token in enumerate(tokens):\n- if token != t_prev:\n- if t_prev != blank:\n- spans.append(TokenSpan(t_prev.item(), start, t, scores[start:t].mean().item()))\n- if token != blank:\n- i += 1\n- start = t\n- t_prev = token\n- if t_prev != blank:\n- spans.append(TokenSpan(t_prev.item(), start, len(tokens), scores[start:].mean().item()))\n+ diff = torch.diff(\n+ tokens, prepend=torch.tensor([-1], device=tokens.device), append=torch.tensor([-1], device=tokens.device)\n+ )\n+ changes_wo_blank = torch.nonzero((diff != 0)).squeeze().tolist()\n+ tokens = tokens.tolist()\n+ spans = [\n+ TokenSpan(token=token, start=start, end=end, score=scores[start:end].mean().item())\n+ for start, end in zip(changes_wo_blank[:-1], changes_wo_blank[1:])\n+ if (token := tokens[start]) != blank\n+ ]\n return spans\n", "issue": "Optimization Suggestion for 'torchaudio.functional.merge_tokens' Function\n### \ud83d\ude80 The feature\r\n\r\nOptimization of the merge_tokens function in 'torchaudio.functional.merge_tokens' to leverage PyTorch's tensor operations for improved efficiency and reduced runtime.\r\n\r\nProposed Implementation:\r\n```python\r\ndef merge_tokens_optimized(tokens: torch.Tensor, scores: torch.Tensor, blank: int = 0) -> List[TokenSpan]:\r\n \"\"\"Removes repeated tokens and blank tokens from the given CTC token sequence\r\n\r\n Args:\r\n tokens (torch.Tensor): Alignment tokens (unbatched)\r\n scores (torch.Tensor): Alignment scores (unbatched)\r\n blank (int, optional): Blank token. Defaults to 0.\r\n\r\n Returns:\r\n List[TokenSpan]: list of TokenSpan\r\n \"\"\"\r\n\r\n # Compute the difference between consecutive tokens. prepend and append a -1 to make sure the first and last\r\n # tokens are not removed\r\n diff = torch.diff(\r\n tokens, prepend=torch.tensor([-1], device=tokens.device), append=torch.tensor([-1], device=tokens.device)\r\n )\r\n # Compute the change points and mask out the points where the new value is blank\r\n changes_wo_blank = torch.nonzero((diff != 0)).squeeze().tolist()\r\n\r\n tokens = tokens.tolist()\r\n # Create a TokenSpan for each change point.\r\n spans = [\r\n TokenSpan(token=token, start=start, end=end, score=scores[start:end].mean().item())\r\n for start, end in zip(changes_wo_blank[:-1], changes_wo_blank[1:])\r\n if (token := tokens[start]) != blank\r\n ]\r\n return spans\r\n```\r\n\r\n### Motivation, pitch\r\n\r\nWhile working on some CTC token sequence operations, I noticed that the current implementation of the merge_tokens function could benefit from tensor operations, potentially offering significant runtime improvements. Specifically, the current loop-based approach might be streamlined by taking advantage of PyTorch's capabilities. 
My tests indicate that my proposed optimization runs approximately 7 times faster over 1000 runs while producing equivalent outputs.\r\n\r\n### Alternatives\r\n\r\nProposed tensor-based optimization: This approach leverages PyTorch's tensor operations, eliminating explicit loops and potentially considerably reducing execution time.\r\n\r\n### Additional context\r\n\r\nTo validate the accuracy of the optimized function, a method was crafted to verify the outputs of both the original and proposed functions:\r\n\r\n```python\r\nimport torchaudio.functional as F\r\nfrom torchaudio.functional import TokenSpan\r\n\r\ndef token_span_outputs_equal(span_list_1: List[TokenSpan], span_list_2: List[TokenSpan]) -> bool:\r\n \"\"\"Compares two lists of TokenSpan objects for equality.\"\"\"\r\n \r\n if len(span_list_1) != len(span_list_2):\r\n return False\r\n\r\n for span_1, span_2 in zip(span_list_1, span_list_2):\r\n if (\r\n span_1.token != span_2.token or \r\n span_1.start != span_2.start or \r\n span_1.end != span_2.end or \r\n abs(span_1.score - span_2.score) > 1e-6 # Allowing a small tolerance for floating-point comparisons\r\n ):\r\n return False\r\n\r\n return True\r\n \r\n \r\noutput_original = F.merge_tokens(aligned_tokens, alignment_scores)\r\noutput_optimized = merge_tokens_optimized(aligned_tokens, alignment_scores)\r\n\r\nassert token_span_outputs_equal(output_original, output_optimized), \"Outputs are not equivalent!\"\r\n```\r\n\r\nBenchmark code:\r\n```python\r\nimport timeit\r\n\r\npytorch_time = timeit.timeit(lambda: F.merge_tokens(aligned_tokens, alignment_scores), number=1000)\r\nnew_method_time = timeit.timeit(lambda: merge_tokens_optimized(aligned_tokens, alignment_scores), number=1000)\r\n\r\nprint(f\"PyTorch method: {pytorch_time}\")\r\nprint(f\"New method: {new_method_time}\")\r\n```\r\n\r\nOutput:\r\n```\r\nPytorch method: 7.622203521430492\r\nNew method: 1.0753349959850311\r\n```\r\n\r\nIntegrating this optimization could bring about significant improvements in both performance and maintainability, benefiting a broad spectrum of users.\n", "before_files": [{"content": "from dataclasses import dataclass\nfrom typing import List, Optional, Tuple\n\nimport torch\nfrom torch import Tensor\nfrom torchaudio._extension import fail_if_no_align\n\n__all__ = []\n\n\n@fail_if_no_align\ndef forced_align(\n log_probs: Tensor,\n targets: Tensor,\n input_lengths: Optional[Tensor] = None,\n target_lengths: Optional[Tensor] = None,\n blank: int = 0,\n) -> Tuple[Tensor, Tensor]:\n r\"\"\"Align a CTC label sequence to an emission.\n\n .. devices:: CPU CUDA\n\n .. properties:: TorchScript\n\n Args:\n log_probs (Tensor): log probability of CTC emission output.\n Tensor of shape `(B, T, C)`. where `B` is the batch size, `T` is the input length,\n `C` is the number of characters in alphabet including blank.\n targets (Tensor): Target sequence. Tensor of shape `(B, L)`,\n where `L` is the target length.\n input_lengths (Tensor or None, optional):\n Lengths of the inputs (max value must each be <= `T`). 1-D Tensor of shape `(B,)`.\n target_lengths (Tensor or None, optional):\n Lengths of the targets. 1-D Tensor of shape `(B,)`.\n blank_id (int, optional): The index of blank symbol in CTC emission. (Default: 0)\n\n Returns:\n Tuple(Tensor, Tensor):\n Tensor: Label for each time step in the alignment path computed using forced alignment.\n\n Tensor: Log probability scores of the labels for each time step.\n\n Note:\n The sequence length of `log_probs` must satisfy:\n\n\n .. 
math::\n L_{\\text{log\\_probs}} \\ge L_{\\text{label}} + N_{\\text{repeat}}\n\n where :math:`N_{\\text{repeat}}` is the number of consecutively repeated tokens.\n For example, in str `\"aabbc\"`, the number of repeats are `2`.\n\n Note:\n The current version only supports ``batch_size==1``.\n \"\"\"\n if blank in targets:\n raise ValueError(f\"targets Tensor shouldn't contain blank index. Found {targets}.\")\n if torch.max(targets) >= log_probs.shape[-1]:\n raise ValueError(\"targets values must be less than the CTC dimension\")\n\n if input_lengths is None:\n batch_size, length = log_probs.size(0), log_probs.size(1)\n input_lengths = torch.full((batch_size,), length, dtype=torch.int64, device=log_probs.device)\n if target_lengths is None:\n batch_size, length = targets.size(0), targets.size(1)\n target_lengths = torch.full((batch_size,), length, dtype=torch.int64, device=targets.device)\n\n # For TorchScript compatibility\n assert input_lengths is not None\n assert target_lengths is not None\n\n paths, scores = torch.ops.torchaudio.forced_align(log_probs, targets, input_lengths, target_lengths, blank)\n return paths, scores\n\n\n@dataclass\nclass TokenSpan:\n \"\"\"TokenSpan()\n Token with time stamps and score. Returned by :py:func:`merge_tokens`.\n \"\"\"\n\n token: int\n \"\"\"The token\"\"\"\n start: int\n \"\"\"The start time (inclusive) in emission time axis.\"\"\"\n end: int\n \"\"\"The end time (exclusive) in emission time axis.\"\"\"\n score: float\n \"\"\"The score of the this token.\"\"\"\n\n def __len__(self) -> int:\n \"\"\"Returns the time span\"\"\"\n return self.end - self.start\n\n\ndef merge_tokens(tokens: Tensor, scores: Tensor, blank: int = 0) -> List[TokenSpan]:\n \"\"\"Removes repeated tokens and blank tokens from the given CTC token sequence.\n\n Args:\n tokens (Tensor): Alignment tokens (unbatched) returned from :py:func:`forced_align`.\n Shape: `(time, )`.\n scores (Tensor): Alignment scores (unbatched) returned from :py:func:`forced_align`.\n Shape: `(time, )`. When computing the token-size score, the given score is averaged\n across the corresponding time span.\n\n Returns:\n list of TokenSpan\n\n Example:\n >>> aligned_tokens, scores = forced_align(emission, targets, input_lengths, target_lengths)\n >>> token_spans = merge_tokens(aligned_tokens[0], scores[0])\n \"\"\"\n if tokens.ndim != 1 or scores.ndim != 1:\n raise ValueError(\"`tokens` and `scores` must be 1D Tensor.\")\n if len(tokens) != len(scores):\n raise ValueError(\"`tokens` and `scores` must be the same length.\")\n\n t_prev = blank\n i = start = -1\n spans = []\n for t, token in enumerate(tokens):\n if token != t_prev:\n if t_prev != blank:\n spans.append(TokenSpan(t_prev.item(), start, t, scores[start:t].mean().item()))\n if token != blank:\n i += 1\n start = t\n t_prev = token\n if t_prev != blank:\n spans.append(TokenSpan(t_prev.item(), start, len(tokens), scores[start:].mean().item()))\n return spans\n", "path": "torchaudio/functional/_alignment.py"}], "after_files": [{"content": "from dataclasses import dataclass\nfrom typing import List, Optional, Tuple\n\nimport torch\nfrom torch import Tensor\nfrom torchaudio._extension import fail_if_no_align\n\n__all__ = []\n\n\n@fail_if_no_align\ndef forced_align(\n log_probs: Tensor,\n targets: Tensor,\n input_lengths: Optional[Tensor] = None,\n target_lengths: Optional[Tensor] = None,\n blank: int = 0,\n) -> Tuple[Tensor, Tensor]:\n r\"\"\"Align a CTC label sequence to an emission.\n\n .. devices:: CPU CUDA\n\n .. 
properties:: TorchScript\n\n Args:\n log_probs (Tensor): log probability of CTC emission output.\n Tensor of shape `(B, T, C)`. where `B` is the batch size, `T` is the input length,\n `C` is the number of characters in alphabet including blank.\n targets (Tensor): Target sequence. Tensor of shape `(B, L)`,\n where `L` is the target length.\n input_lengths (Tensor or None, optional):\n Lengths of the inputs (max value must each be <= `T`). 1-D Tensor of shape `(B,)`.\n target_lengths (Tensor or None, optional):\n Lengths of the targets. 1-D Tensor of shape `(B,)`.\n blank_id (int, optional): The index of blank symbol in CTC emission. (Default: 0)\n\n Returns:\n Tuple(Tensor, Tensor):\n Tensor: Label for each time step in the alignment path computed using forced alignment.\n\n Tensor: Log probability scores of the labels for each time step.\n\n Note:\n The sequence length of `log_probs` must satisfy:\n\n\n .. math::\n L_{\\text{log\\_probs}} \\ge L_{\\text{label}} + N_{\\text{repeat}}\n\n where :math:`N_{\\text{repeat}}` is the number of consecutively repeated tokens.\n For example, in str `\"aabbc\"`, the number of repeats are `2`.\n\n Note:\n The current version only supports ``batch_size==1``.\n \"\"\"\n if blank in targets:\n raise ValueError(f\"targets Tensor shouldn't contain blank index. Found {targets}.\")\n if torch.max(targets) >= log_probs.shape[-1]:\n raise ValueError(\"targets values must be less than the CTC dimension\")\n\n if input_lengths is None:\n batch_size, length = log_probs.size(0), log_probs.size(1)\n input_lengths = torch.full((batch_size,), length, dtype=torch.int64, device=log_probs.device)\n if target_lengths is None:\n batch_size, length = targets.size(0), targets.size(1)\n target_lengths = torch.full((batch_size,), length, dtype=torch.int64, device=targets.device)\n\n # For TorchScript compatibility\n assert input_lengths is not None\n assert target_lengths is not None\n\n paths, scores = torch.ops.torchaudio.forced_align(log_probs, targets, input_lengths, target_lengths, blank)\n return paths, scores\n\n\n@dataclass\nclass TokenSpan:\n \"\"\"TokenSpan()\n Token with time stamps and score. Returned by :py:func:`merge_tokens`.\n \"\"\"\n\n token: int\n \"\"\"The token\"\"\"\n start: int\n \"\"\"The start time (inclusive) in emission time axis.\"\"\"\n end: int\n \"\"\"The end time (exclusive) in emission time axis.\"\"\"\n score: float\n \"\"\"The score of the this token.\"\"\"\n\n def __len__(self) -> int:\n \"\"\"Returns the time span\"\"\"\n return self.end - self.start\n\n\ndef merge_tokens(tokens: Tensor, scores: Tensor, blank: int = 0) -> List[TokenSpan]:\n \"\"\"Removes repeated tokens and blank tokens from the given CTC token sequence.\n\n Args:\n tokens (Tensor): Alignment tokens (unbatched) returned from :py:func:`forced_align`.\n Shape: `(time, )`.\n scores (Tensor): Alignment scores (unbatched) returned from :py:func:`forced_align`.\n Shape: `(time, )`. 
When computing the token-size score, the given score is averaged\n across the corresponding time span.\n\n Returns:\n list of TokenSpan\n\n Example:\n >>> aligned_tokens, scores = forced_align(emission, targets, input_lengths, target_lengths)\n >>> token_spans = merge_tokens(aligned_tokens[0], scores[0])\n \"\"\"\n if tokens.ndim != 1 or scores.ndim != 1:\n raise ValueError(\"`tokens` and `scores` must be 1D Tensor.\")\n if len(tokens) != len(scores):\n raise ValueError(\"`tokens` and `scores` must be the same length.\")\n\n diff = torch.diff(\n tokens, prepend=torch.tensor([-1], device=tokens.device), append=torch.tensor([-1], device=tokens.device)\n )\n changes_wo_blank = torch.nonzero((diff != 0)).squeeze().tolist()\n tokens = tokens.tolist()\n spans = [\n TokenSpan(token=token, start=start, end=end, score=scores[start:end].mean().item())\n for start, end in zip(changes_wo_blank[:-1], changes_wo_blank[1:])\n if (token := tokens[start]) != blank\n ]\n return spans\n", "path": "torchaudio/functional/_alignment.py"}]}
| 2,586 | 346 |
gh_patches_debug_6416
|
rasdani/github-patches
|
git_diff
|
SciTools__cartopy-515
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Increase tolerance of test_pcolormesh_mercator_wrap.
This is the only test that is affected by matplotlib/matplotlib#15970. The new image is correct, but the difference is small enough to just change the tolerance.
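The tolerance referred to here is typically an RMS-difference threshold used when a rendered image is compared against a stored baseline. A minimal illustration using matplotlib's own comparison helper (the file paths and the tolerance value are placeholders, not taken from the cartopy test suite):
```python
from matplotlib.testing.compare import compare_images

# Returns None when the RMS difference between the two PNGs is within `tol`,
# otherwise a message describing the mismatch. Raising `tol` slightly lets a
# visually identical but not bit-identical rendering keep passing.
result = compare_images('baseline/pcolormesh_mercator_wrap.png',
                        'result_images/pcolormesh_mercator_wrap.png',
                        tol=1.0)
print('match' if result is None else result)
```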
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/cartopy/io/srtm.py`
Content:
```
1 # (C) British Crown Copyright 2011 - 2014, Met Office
2 #
3 # This file is part of cartopy.
4 #
5 # cartopy is free software: you can redistribute it and/or modify it under
6 # the terms of the GNU Lesser General Public License as published by the
7 # Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # cartopy is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU Lesser General Public License for more details.
14 #
15 # You should have received a copy of the GNU Lesser General Public License
16 # along with cartopy. If not, see <http://www.gnu.org/licenses/>.
17
18
19 """
20 The Shuttle Radar Topography Mission (SRTM) is an international research
21 effort that obtained digital elevation models on a near-global scale from
22 56S to 60N, to generate the most complete high-resolution digital topographic
23 database of Earth prior to the release of the ASTER GDEM in 2009.
24
25 - Wikipedia (August 2012)
26
27 """
28
29 from __future__ import (absolute_import, division, print_function)
30
31 import json
32 import os
33
34 import numpy as np
35 import six
36
37 from cartopy import config
38 import cartopy.crs as ccrs
39 from cartopy.io import fh_getter, Downloader
40
41
42 def srtm(lon, lat):
43 """
44 Return (elevation, crs, extent) for the given longitude latitude.
45 Elevation is in meters.
46 """
47 fname = SRTM3_retrieve(lon, lat)
48 if fname is None:
49 raise ValueError('No srtm tile found for those coordinates.')
50 return read_SRTM3(fname)
51
52
53 def add_shading(elevation, azimuth, altitude):
54 """Adds shading to SRTM elevation data, using azimuth and altitude
55 of the sun.
56
57 :type elevation: numpy.ndarray
58 :param elevation: SRTM elevation data (in meters)
59 :type azimuth: float
60 :param azimuth: azimuth of the Sun (in degrees)
61 :type altitude: float
62 :param altitude: altitude of the Sun (in degrees)
63
64 :rtype: numpy.ndarray
65 :return: shaded SRTM relief map.
66 """
67 azimuth = np.deg2rad(azimuth)
68 altitude = np.deg2rad(altitude)
69 x, y = np.gradient(elevation)
70 slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))
71 # -x here because of pixel orders in the SRTM tile
72 aspect = np.arctan2(-x, y)
73 shaded = np.sin(altitude) * np.sin(slope)\
74 + np.cos(altitude) * np.cos(slope)\
75 * np.cos((azimuth - np.pi/2.) - aspect)
76 return shaded
77
78
79 def fill_gaps(elevation, max_distance=10):
80 """Fills gaps in SRTM elevation data for which the distance from
81 missing pixel to nearest existing one is smaller than `max_distance`.
82
83 This function requires osgeo/gdal to work.
84
85 :type elevation: numpy.ndarray
86 :param elevation: SRTM elevation data (in meters)
87 :type max_distance: int
88 :param max_distance: maximal distance (in pixels) between a missing point
89 and the nearest valid one.
90
91 :rtype: numpy.ndarray
92 :return: SRTM elevation data with filled gaps..
93 """
94 # Lazily import osgeo - it is only an optional dependency for cartopy.
95 from osgeo import gdal
96 from osgeo import gdal_array
97
98 src_ds = gdal_array.OpenArray(elevation)
99 srcband = src_ds.GetRasterBand(1)
100 dstband = srcband
101 maskband = srcband
102 smoothing_iterations = 0
103 options = []
104 gdal.FillNodata(dstband, maskband,
105 max_distance, smoothing_iterations, options,
106 callback=None)
107 elevation = dstband.ReadAsArray()
108 return elevation
109
110
111 def srtm_composite(lon_min, lat_min, nx, ny):
112
113 # XXX nx and ny have got confused in the code (numpy array ordering?).
114 # However, the interface works well.
115
116 bottom_left_ll = (lon_min, lat_min)
117 shape = np.array([1201, 1201])
118 img = np.empty(shape * (nx, ny))
119
120 for i in range(nx):
121 for j in range(ny):
122 x_img_slice = slice(i * shape[0], (i + 1) * shape[0])
123 y_img_slice = slice(j * shape[1], (j + 1) * shape[1])
124
125 try:
126 tile_img, _, _ = srtm(bottom_left_ll[0] + j,
127 bottom_left_ll[1] + i)
128 except ValueError:
129 img[x_img_slice, y_img_slice] = 0
130 else:
131 img[x_img_slice, y_img_slice] = tile_img
132
133 extent = (bottom_left_ll[0], bottom_left_ll[0] + ny,
134 bottom_left_ll[1], bottom_left_ll[1] + nx)
135
136 return img, ccrs.PlateCarree(), extent
137
138
139 def read_SRTM3(fh):
140 fh, fname = fh_getter(fh, needs_filename=True)
141 if fname.endswith('.zip'):
142 from zipfile import ZipFile
143 zfh = ZipFile(fh, 'rb')
144 fh = zfh.open(os.path.basename(fname[:-4]), 'r')
145
146 elev = np.fromfile(fh, dtype=np.dtype('>i2'))
147 elev.shape = (1201, 1201)
148
149 fname = os.path.basename(fname)
150 y_dir, y, x_dir, x = fname[0], int(fname[1:3]), fname[3], int(fname[4:7])
151
152 if y_dir == 'S':
153 y *= -1
154
155 if x_dir == 'W':
156 x *= -1
157
158 # xxx extent may need to be wider by half a pixel
159 return elev[::-1, ...], ccrs.PlateCarree(), [x, x + 1, y, y + 1]
160
161
162 def SRTM3_retrieve(lon, lat):
163 """
164 Return the path of a .hgt file for the given SRTM location.
165
166 If no such .hgt file exists (because it is over the ocean)
167 None will be returned.
168
169 """
170 x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))
171 y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))
172
173 srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))
174 params = {'config': config, 'x': x, 'y': y}
175 if srtm_downloader.url(params) is None:
176 return None
177 else:
178 return srtm_downloader.path({'config': config, 'x': x, 'y': y})
179
180
181 class SRTM3Downloader(Downloader):
182 """
183 Provides a SRTM3 download mechanism.
184
185 """
186 FORMAT_KEYS = ('config', 'x', 'y')
187
188 _JSON_SRTM3_LOOKUP = os.path.join(os.path.dirname(__file__),
189 'srtm.json')
190 _SRTM3_LOOKUP_URL = json.load(open(_JSON_SRTM3_LOOKUP, 'r'))
191 """
192 The SRTM3 url lookup dictionary maps keys such as 'N43E043' to the url
193 of the file to download.
194
195 """
196
197 def __init__(self,
198 target_path_template,
199 pre_downloaded_path_template='',
200 ):
201 # adds some SRTM3 defaults to the __init__ of a Downloader
202 # namely, the URl is determined on the fly using the
203 # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary
204 Downloader.__init__(self, None,
205 target_path_template,
206 pre_downloaded_path_template)
207
208 def url(self, format_dict):
209 # override the url method, looking up the url from the
210 # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary
211 key = u'{y}{x}'.format(**format_dict)
212 url = SRTM3Downloader._SRTM3_LOOKUP_URL.get(key, None)
213 return url
214
215 def acquire_resource(self, target_path, format_dict):
216 from zipfile import ZipFile
217
218 target_dir = os.path.dirname(target_path)
219 if not os.path.isdir(target_dir):
220 os.makedirs(target_dir)
221
222 url = self.url(format_dict)
223
224 srtm_online = self._urlopen(url)
225 zfh = ZipFile(six.BytesIO(srtm_online.read()), 'r')
226
227 zip_member_path = u'{y}{x}.hgt'.format(**format_dict)
228 member = zfh.getinfo(zip_member_path)
229 with open(target_path, 'wb') as fh:
230 fh.write(zfh.open(member).read())
231
232 srtm_online.close()
233 zfh.close()
234
235 return target_path
236
237 @staticmethod
238 def _create_srtm3_dict():
239 """
240 Returns a dictionary mapping srtm filename to the URL of the file.
241
242 This is slow as it must query the SRTM server to identify the
243 continent from which the tile comes. Hence a json file with this
244 content exists in ``SRTM3Downloader._JSON_SRTM3_LOOKUP``.
245
246 The json file was created with::
247
248 import cartopy.io.srtm as srtm
249 import json
250 fh = open(srtm.SRTM3Downloader._JSON_SRTM3_LOOKUP, 'w')
251 json.dump(srtm.SRTM3Downloader._create_srtm3_dict(), fh)
252
253 """
254 # lazy imports. In most situations, these are not
255 # dependencies of cartopy.
256 if six.PY3:
257 from urllib.request import urlopen
258 else:
259 from urllib2 import urlopen
260 from BeautifulSoup import BeautifulSoup
261
262 files = {}
263
264 for continent in ['Australia', 'Africa', 'Eurasia', 'Islands',
265 'North_America', 'South_America']:
266
267 url = "http://dds.cr.usgs.gov/srtm/version2_1/SRTM3/%s" % continent
268 f = urlopen(url)
269 html = f.read()
270 soup = BeautifulSoup(html)
271
272 for link in soup('li'):
273 name = str(link.text)
274 if name != ' Parent Directory':
275 # remove the '.hgt.zip'
276 files[name[:-8]] = url + '/' + name
277 f.close()
278 return files
279
280 @classmethod
281 def default_downloader(cls):
282 """
283 Returns a typical downloader for this class. In general, this static
284 method is used to create the default configuration in cartopy.config
285
286 """
287 default_spec = ('SRTM', 'SRTM3', '{y}{x}.hgt')
288 target_path_template = os.path.join('{config[data_dir]}',
289 *default_spec)
290 pre_path_template = os.path.join('{config[pre_existing_data_dir]}',
291 *default_spec)
292 return cls(target_path_template=target_path_template,
293 pre_downloaded_path_template=pre_path_template)
294
295
296 # add a generic SRTM downloader to the config 'downloaders' section.
297 config['downloaders'].setdefault(('SRTM', 'SRTM3'),
298 SRTM3Downloader.default_downloader())
299
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/cartopy/io/srtm.py b/lib/cartopy/io/srtm.py
--- a/lib/cartopy/io/srtm.py
+++ b/lib/cartopy/io/srtm.py
@@ -167,8 +167,8 @@
None will be returned.
"""
- x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))
- y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))
+ x = '%s%03d' % ('E' if lon >= 0 else 'W', abs(int(lon)))
+ y = '%s%02d' % ('N' if lat >= 0 else 'S', abs(int(lat)))
srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))
params = {'config': config, 'x': x, 'y': y}
|
{"golden_diff": "diff --git a/lib/cartopy/io/srtm.py b/lib/cartopy/io/srtm.py\n--- a/lib/cartopy/io/srtm.py\n+++ b/lib/cartopy/io/srtm.py\n@@ -167,8 +167,8 @@\n None will be returned.\n \n \"\"\"\n- x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))\n- y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))\n+ x = '%s%03d' % ('E' if lon >= 0 else 'W', abs(int(lon)))\n+ y = '%s%02d' % ('N' if lat >= 0 else 'S', abs(int(lat)))\n \n srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))\n params = {'config': config, 'x': x, 'y': y}\n", "issue": "Increase tolerance of test_pcolormesh_mercator_wrap.\nThis is the only test that is affected by matplotlib/matplotlib#15970. The new image is correct, but the difference is small enough to just change the tolerance.\n", "before_files": [{"content": "# (C) British Crown Copyright 2011 - 2014, Met Office\n#\n# This file is part of cartopy.\n#\n# cartopy is free software: you can redistribute it and/or modify it under\n# the terms of the GNU Lesser General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cartopy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public License\n# along with cartopy. If not, see <http://www.gnu.org/licenses/>.\n\n\n\"\"\"\nThe Shuttle Radar Topography Mission (SRTM) is an international research\neffort that obtained digital elevation models on a near-global scale from\n56S to 60N, to generate the most complete high-resolution digital topographic\ndatabase of Earth prior to the release of the ASTER GDEM in 2009.\n\n - Wikipedia (August 2012)\n\n\"\"\"\n\nfrom __future__ import (absolute_import, division, print_function)\n\nimport json\nimport os\n\nimport numpy as np\nimport six\n\nfrom cartopy import config\nimport cartopy.crs as ccrs\nfrom cartopy.io import fh_getter, Downloader\n\n\ndef srtm(lon, lat):\n \"\"\"\n Return (elevation, crs, extent) for the given longitude latitude.\n Elevation is in meters.\n \"\"\"\n fname = SRTM3_retrieve(lon, lat)\n if fname is None:\n raise ValueError('No srtm tile found for those coordinates.')\n return read_SRTM3(fname)\n\n\ndef add_shading(elevation, azimuth, altitude):\n \"\"\"Adds shading to SRTM elevation data, using azimuth and altitude\n of the sun.\n\n :type elevation: numpy.ndarray\n :param elevation: SRTM elevation data (in meters)\n :type azimuth: float\n :param azimuth: azimuth of the Sun (in degrees)\n :type altitude: float\n :param altitude: altitude of the Sun (in degrees)\n\n :rtype: numpy.ndarray\n :return: shaded SRTM relief map.\n \"\"\"\n azimuth = np.deg2rad(azimuth)\n altitude = np.deg2rad(altitude)\n x, y = np.gradient(elevation)\n slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))\n # -x here because of pixel orders in the SRTM tile\n aspect = np.arctan2(-x, y)\n shaded = np.sin(altitude) * np.sin(slope)\\\n + np.cos(altitude) * np.cos(slope)\\\n * np.cos((azimuth - np.pi/2.) 
- aspect)\n return shaded\n\n\ndef fill_gaps(elevation, max_distance=10):\n \"\"\"Fills gaps in SRTM elevation data for which the distance from\n missing pixel to nearest existing one is smaller than `max_distance`.\n\n This function requires osgeo/gdal to work.\n\n :type elevation: numpy.ndarray\n :param elevation: SRTM elevation data (in meters)\n :type max_distance: int\n :param max_distance: maximal distance (in pixels) between a missing point\n and the nearest valid one.\n\n :rtype: numpy.ndarray\n :return: SRTM elevation data with filled gaps..\n \"\"\"\n # Lazily import osgeo - it is only an optional dependency for cartopy.\n from osgeo import gdal\n from osgeo import gdal_array\n\n src_ds = gdal_array.OpenArray(elevation)\n srcband = src_ds.GetRasterBand(1)\n dstband = srcband\n maskband = srcband\n smoothing_iterations = 0\n options = []\n gdal.FillNodata(dstband, maskband,\n max_distance, smoothing_iterations, options,\n callback=None)\n elevation = dstband.ReadAsArray()\n return elevation\n\n\ndef srtm_composite(lon_min, lat_min, nx, ny):\n\n # XXX nx and ny have got confused in the code (numpy array ordering?).\n # However, the interface works well.\n\n bottom_left_ll = (lon_min, lat_min)\n shape = np.array([1201, 1201])\n img = np.empty(shape * (nx, ny))\n\n for i in range(nx):\n for j in range(ny):\n x_img_slice = slice(i * shape[0], (i + 1) * shape[0])\n y_img_slice = slice(j * shape[1], (j + 1) * shape[1])\n\n try:\n tile_img, _, _ = srtm(bottom_left_ll[0] + j,\n bottom_left_ll[1] + i)\n except ValueError:\n img[x_img_slice, y_img_slice] = 0\n else:\n img[x_img_slice, y_img_slice] = tile_img\n\n extent = (bottom_left_ll[0], bottom_left_ll[0] + ny,\n bottom_left_ll[1], bottom_left_ll[1] + nx)\n\n return img, ccrs.PlateCarree(), extent\n\n\ndef read_SRTM3(fh):\n fh, fname = fh_getter(fh, needs_filename=True)\n if fname.endswith('.zip'):\n from zipfile import ZipFile\n zfh = ZipFile(fh, 'rb')\n fh = zfh.open(os.path.basename(fname[:-4]), 'r')\n\n elev = np.fromfile(fh, dtype=np.dtype('>i2'))\n elev.shape = (1201, 1201)\n\n fname = os.path.basename(fname)\n y_dir, y, x_dir, x = fname[0], int(fname[1:3]), fname[3], int(fname[4:7])\n\n if y_dir == 'S':\n y *= -1\n\n if x_dir == 'W':\n x *= -1\n\n # xxx extent may need to be wider by half a pixel\n return elev[::-1, ...], ccrs.PlateCarree(), [x, x + 1, y, y + 1]\n\n\ndef SRTM3_retrieve(lon, lat):\n \"\"\"\n Return the path of a .hgt file for the given SRTM location.\n\n If no such .hgt file exists (because it is over the ocean)\n None will be returned.\n\n \"\"\"\n x = '%s%03d' % ('E' if lon > 0 else 'W', abs(int(lon)))\n y = '%s%02d' % ('N' if lat > 0 else 'S', abs(int(lat)))\n\n srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))\n params = {'config': config, 'x': x, 'y': y}\n if srtm_downloader.url(params) is None:\n return None\n else:\n return srtm_downloader.path({'config': config, 'x': x, 'y': y})\n\n\nclass SRTM3Downloader(Downloader):\n \"\"\"\n Provides a SRTM3 download mechanism.\n\n \"\"\"\n FORMAT_KEYS = ('config', 'x', 'y')\n\n _JSON_SRTM3_LOOKUP = os.path.join(os.path.dirname(__file__),\n 'srtm.json')\n _SRTM3_LOOKUP_URL = json.load(open(_JSON_SRTM3_LOOKUP, 'r'))\n \"\"\"\n The SRTM3 url lookup dictionary maps keys such as 'N43E043' to the url\n of the file to download.\n\n \"\"\"\n\n def __init__(self,\n target_path_template,\n pre_downloaded_path_template='',\n ):\n # adds some SRTM3 defaults to the __init__ of a Downloader\n # namely, the URl is determined on the fly using the\n # 
``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary\n Downloader.__init__(self, None,\n target_path_template,\n pre_downloaded_path_template)\n\n def url(self, format_dict):\n # override the url method, looking up the url from the\n # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary\n key = u'{y}{x}'.format(**format_dict)\n url = SRTM3Downloader._SRTM3_LOOKUP_URL.get(key, None)\n return url\n\n def acquire_resource(self, target_path, format_dict):\n from zipfile import ZipFile\n\n target_dir = os.path.dirname(target_path)\n if not os.path.isdir(target_dir):\n os.makedirs(target_dir)\n\n url = self.url(format_dict)\n\n srtm_online = self._urlopen(url)\n zfh = ZipFile(six.BytesIO(srtm_online.read()), 'r')\n\n zip_member_path = u'{y}{x}.hgt'.format(**format_dict)\n member = zfh.getinfo(zip_member_path)\n with open(target_path, 'wb') as fh:\n fh.write(zfh.open(member).read())\n\n srtm_online.close()\n zfh.close()\n\n return target_path\n\n @staticmethod\n def _create_srtm3_dict():\n \"\"\"\n Returns a dictionary mapping srtm filename to the URL of the file.\n\n This is slow as it must query the SRTM server to identify the\n continent from which the tile comes. Hence a json file with this\n content exists in ``SRTM3Downloader._JSON_SRTM3_LOOKUP``.\n\n The json file was created with::\n\n import cartopy.io.srtm as srtm\n import json\n fh = open(srtm.SRTM3Downloader._JSON_SRTM3_LOOKUP, 'w')\n json.dump(srtm.SRTM3Downloader._create_srtm3_dict(), fh)\n\n \"\"\"\n # lazy imports. In most situations, these are not\n # dependencies of cartopy.\n if six.PY3:\n from urllib.request import urlopen\n else:\n from urllib2 import urlopen\n from BeautifulSoup import BeautifulSoup\n\n files = {}\n\n for continent in ['Australia', 'Africa', 'Eurasia', 'Islands',\n 'North_America', 'South_America']:\n\n url = \"http://dds.cr.usgs.gov/srtm/version2_1/SRTM3/%s\" % continent\n f = urlopen(url)\n html = f.read()\n soup = BeautifulSoup(html)\n\n for link in soup('li'):\n name = str(link.text)\n if name != ' Parent Directory':\n # remove the '.hgt.zip'\n files[name[:-8]] = url + '/' + name\n f.close()\n return files\n\n @classmethod\n def default_downloader(cls):\n \"\"\"\n Returns a typical downloader for this class. In general, this static\n method is used to create the default configuration in cartopy.config\n\n \"\"\"\n default_spec = ('SRTM', 'SRTM3', '{y}{x}.hgt')\n target_path_template = os.path.join('{config[data_dir]}',\n *default_spec)\n pre_path_template = os.path.join('{config[pre_existing_data_dir]}',\n *default_spec)\n return cls(target_path_template=target_path_template,\n pre_downloaded_path_template=pre_path_template)\n\n\n# add a generic SRTM downloader to the config 'downloaders' section.\nconfig['downloaders'].setdefault(('SRTM', 'SRTM3'),\n SRTM3Downloader.default_downloader())\n", "path": "lib/cartopy/io/srtm.py"}], "after_files": [{"content": "# (C) British Crown Copyright 2011 - 2014, Met Office\n#\n# This file is part of cartopy.\n#\n# cartopy is free software: you can redistribute it and/or modify it under\n# the terms of the GNU Lesser General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cartopy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public License\n# along with cartopy. If not, see <http://www.gnu.org/licenses/>.\n\n\n\"\"\"\nThe Shuttle Radar Topography Mission (SRTM) is an international research\neffort that obtained digital elevation models on a near-global scale from\n56S to 60N, to generate the most complete high-resolution digital topographic\ndatabase of Earth prior to the release of the ASTER GDEM in 2009.\n\n - Wikipedia (August 2012)\n\n\"\"\"\n\nfrom __future__ import (absolute_import, division, print_function)\n\nimport json\nimport os\n\nimport numpy as np\nimport six\n\nfrom cartopy import config\nimport cartopy.crs as ccrs\nfrom cartopy.io import fh_getter, Downloader\n\n\ndef srtm(lon, lat):\n \"\"\"\n Return (elevation, crs, extent) for the given longitude latitude.\n Elevation is in meters.\n \"\"\"\n fname = SRTM3_retrieve(lon, lat)\n if fname is None:\n raise ValueError('No srtm tile found for those coordinates.')\n return read_SRTM3(fname)\n\n\ndef add_shading(elevation, azimuth, altitude):\n \"\"\"Adds shading to SRTM elevation data, using azimuth and altitude\n of the sun.\n\n :type elevation: numpy.ndarray\n :param elevation: SRTM elevation data (in meters)\n :type azimuth: float\n :param azimuth: azimuth of the Sun (in degrees)\n :type altitude: float\n :param altitude: altitude of the Sun (in degrees)\n\n :rtype: numpy.ndarray\n :return: shaded SRTM relief map.\n \"\"\"\n azimuth = np.deg2rad(azimuth)\n altitude = np.deg2rad(altitude)\n x, y = np.gradient(elevation)\n slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))\n # -x here because of pixel orders in the SRTM tile\n aspect = np.arctan2(-x, y)\n shaded = np.sin(altitude) * np.sin(slope)\\\n + np.cos(altitude) * np.cos(slope)\\\n * np.cos((azimuth - np.pi/2.) 
- aspect)\n return shaded\n\n\ndef fill_gaps(elevation, max_distance=10):\n \"\"\"Fills gaps in SRTM elevation data for which the distance from\n missing pixel to nearest existing one is smaller than `max_distance`.\n\n This function requires osgeo/gdal to work.\n\n :type elevation: numpy.ndarray\n :param elevation: SRTM elevation data (in meters)\n :type max_distance: int\n :param max_distance: maximal distance (in pixels) between a missing point\n and the nearest valid one.\n\n :rtype: numpy.ndarray\n :return: SRTM elevation data with filled gaps..\n \"\"\"\n # Lazily import osgeo - it is only an optional dependency for cartopy.\n from osgeo import gdal\n from osgeo import gdal_array\n\n src_ds = gdal_array.OpenArray(elevation)\n srcband = src_ds.GetRasterBand(1)\n dstband = srcband\n maskband = srcband\n smoothing_iterations = 0\n options = []\n gdal.FillNodata(dstband, maskband,\n max_distance, smoothing_iterations, options,\n callback=None)\n elevation = dstband.ReadAsArray()\n return elevation\n\n\ndef srtm_composite(lon_min, lat_min, nx, ny):\n\n # XXX nx and ny have got confused in the code (numpy array ordering?).\n # However, the interface works well.\n\n bottom_left_ll = (lon_min, lat_min)\n shape = np.array([1201, 1201])\n img = np.empty(shape * (nx, ny))\n\n for i in range(nx):\n for j in range(ny):\n x_img_slice = slice(i * shape[0], (i + 1) * shape[0])\n y_img_slice = slice(j * shape[1], (j + 1) * shape[1])\n\n try:\n tile_img, _, _ = srtm(bottom_left_ll[0] + j,\n bottom_left_ll[1] + i)\n except ValueError:\n img[x_img_slice, y_img_slice] = 0\n else:\n img[x_img_slice, y_img_slice] = tile_img\n\n extent = (bottom_left_ll[0], bottom_left_ll[0] + ny,\n bottom_left_ll[1], bottom_left_ll[1] + nx)\n\n return img, ccrs.PlateCarree(), extent\n\n\ndef read_SRTM3(fh):\n fh, fname = fh_getter(fh, needs_filename=True)\n if fname.endswith('.zip'):\n from zipfile import ZipFile\n zfh = ZipFile(fh, 'rb')\n fh = zfh.open(os.path.basename(fname[:-4]), 'r')\n\n elev = np.fromfile(fh, dtype=np.dtype('>i2'))\n elev.shape = (1201, 1201)\n\n fname = os.path.basename(fname)\n y_dir, y, x_dir, x = fname[0], int(fname[1:3]), fname[3], int(fname[4:7])\n\n if y_dir == 'S':\n y *= -1\n\n if x_dir == 'W':\n x *= -1\n\n # xxx extent may need to be wider by half a pixel\n return elev[::-1, ...], ccrs.PlateCarree(), [x, x + 1, y, y + 1]\n\n\ndef SRTM3_retrieve(lon, lat):\n \"\"\"\n Return the path of a .hgt file for the given SRTM location.\n\n If no such .hgt file exists (because it is over the ocean)\n None will be returned.\n\n \"\"\"\n x = '%s%03d' % ('E' if lon >= 0 else 'W', abs(int(lon)))\n y = '%s%02d' % ('N' if lat >= 0 else 'S', abs(int(lat)))\n\n srtm_downloader = Downloader.from_config(('SRTM', 'SRTM3'))\n params = {'config': config, 'x': x, 'y': y}\n if srtm_downloader.url(params) is None:\n return None\n else:\n return srtm_downloader.path({'config': config, 'x': x, 'y': y})\n\n\nclass SRTM3Downloader(Downloader):\n \"\"\"\n Provides a SRTM3 download mechanism.\n\n \"\"\"\n FORMAT_KEYS = ('config', 'x', 'y')\n\n _JSON_SRTM3_LOOKUP = os.path.join(os.path.dirname(__file__),\n 'srtm.json')\n _SRTM3_LOOKUP_URL = json.load(open(_JSON_SRTM3_LOOKUP, 'r'))\n \"\"\"\n The SRTM3 url lookup dictionary maps keys such as 'N43E043' to the url\n of the file to download.\n\n \"\"\"\n\n def __init__(self,\n target_path_template,\n pre_downloaded_path_template='',\n ):\n # adds some SRTM3 defaults to the __init__ of a Downloader\n # namely, the URl is determined on the fly using the\n # 
``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary\n Downloader.__init__(self, None,\n target_path_template,\n pre_downloaded_path_template)\n\n def url(self, format_dict):\n # override the url method, looking up the url from the\n # ``SRTM3Downloader._SRTM3_LOOKUP_URL`` dictionary\n key = u'{y}{x}'.format(**format_dict)\n url = SRTM3Downloader._SRTM3_LOOKUP_URL.get(key, None)\n return url\n\n def acquire_resource(self, target_path, format_dict):\n from zipfile import ZipFile\n\n target_dir = os.path.dirname(target_path)\n if not os.path.isdir(target_dir):\n os.makedirs(target_dir)\n\n url = self.url(format_dict)\n\n srtm_online = self._urlopen(url)\n zfh = ZipFile(six.BytesIO(srtm_online.read()), 'r')\n\n zip_member_path = u'{y}{x}.hgt'.format(**format_dict)\n member = zfh.getinfo(zip_member_path)\n with open(target_path, 'wb') as fh:\n fh.write(zfh.open(member).read())\n\n srtm_online.close()\n zfh.close()\n\n return target_path\n\n @staticmethod\n def _create_srtm3_dict():\n \"\"\"\n Returns a dictionary mapping srtm filename to the URL of the file.\n\n This is slow as it must query the SRTM server to identify the\n continent from which the tile comes. Hence a json file with this\n content exists in ``SRTM3Downloader._JSON_SRTM3_LOOKUP``.\n\n The json file was created with::\n\n import cartopy.io.srtm as srtm\n import json\n fh = open(srtm.SRTM3Downloader._JSON_SRTM3_LOOKUP, 'w')\n json.dump(srtm.SRTM3Downloader._create_srtm3_dict(), fh)\n\n \"\"\"\n # lazy imports. In most situations, these are not\n # dependencies of cartopy.\n if six.PY3:\n from urllib.request import urlopen\n else:\n from urllib2 import urlopen\n from BeautifulSoup import BeautifulSoup\n\n files = {}\n\n for continent in ['Australia', 'Africa', 'Eurasia', 'Islands',\n 'North_America', 'South_America']:\n\n url = \"http://dds.cr.usgs.gov/srtm/version2_1/SRTM3/%s\" % continent\n f = urlopen(url)\n html = f.read()\n soup = BeautifulSoup(html)\n\n for link in soup('li'):\n name = str(link.text)\n if name != ' Parent Directory':\n # remove the '.hgt.zip'\n files[name[:-8]] = url + '/' + name\n f.close()\n return files\n\n @classmethod\n def default_downloader(cls):\n \"\"\"\n Returns a typical downloader for this class. In general, this static\n method is used to create the default configuration in cartopy.config\n\n \"\"\"\n default_spec = ('SRTM', 'SRTM3', '{y}{x}.hgt')\n target_path_template = os.path.join('{config[data_dir]}',\n *default_spec)\n pre_path_template = os.path.join('{config[pre_existing_data_dir]}',\n *default_spec)\n return cls(target_path_template=target_path_template,\n pre_downloaded_path_template=pre_path_template)\n\n\n# add a generic SRTM downloader to the config 'downloaders' section.\nconfig['downloaders'].setdefault(('SRTM', 'SRTM3'),\n SRTM3Downloader.default_downloader())\n", "path": "lib/cartopy/io/srtm.py"}]}
| 3,681 | 220 |
gh_patches_debug_64249
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-285
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Why is argon2i used?
Referring to: https://github.com/archlinux/archinstall/blob/master/archinstall/lib/luks.py#L73
I read that argon2id gives better security, so is there any reason against it?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/luks.py`
Content:
```
1 import os
2 import shlex
3 import time
4 import pathlib
5 from .exceptions import *
6 from .general import *
7 from .disk import Partition
8 from .output import log, LOG_LEVELS
9 from .storage import storage
10
11 class luks2():
12 def __init__(self, partition, mountpoint, password, key_file=None, auto_unmount=False, *args, **kwargs):
13 self.password = password
14 self.partition = partition
15 self.mountpoint = mountpoint
16 self.args = args
17 self.kwargs = kwargs
18 self.key_file = key_file
19 self.auto_unmount = auto_unmount
20 self.filesystem = 'crypto_LUKS'
21 self.mapdev = None
22
23 def __enter__(self):
24 #if self.partition.allow_formatting:
25 # self.key_file = self.encrypt(self.partition, *self.args, **self.kwargs)
26 #else:
27 if not self.key_file:
28 self.key_file = f"/tmp/{os.path.basename(self.partition.path)}.disk_pw" # TODO: Make disk-pw-file randomly unique?
29
30 if type(self.password) != bytes:
31 self.password = bytes(self.password, 'UTF-8')
32
33 with open(self.key_file, 'wb') as fh:
34 fh.write(self.password)
35
36 return self.unlock(self.partition, self.mountpoint, self.key_file)
37
38 def __exit__(self, *args, **kwargs):
39 # TODO: https://stackoverflow.com/questions/28157929/how-to-safely-handle-an-exception-inside-a-context-manager
40 if self.auto_unmount:
41 self.close()
42
43 if len(args) >= 2 and args[1]:
44 raise args[1]
45 return True
46
47 def encrypt(self, partition, password=None, key_size=512, hash_type='sha512', iter_time=10000, key_file=None):
48 if not self.partition.allow_formatting:
49 raise DiskError(f'Could not encrypt volume {self.partition} due to it having a formatting lock.')
50
51 log(f'Encrypting {partition} (This might take a while)', level=LOG_LEVELS.Info)
52
53 if not key_file:
54 if self.key_file:
55 key_file = self.key_file
56 else:
57 key_file = f"/tmp/{os.path.basename(self.partition.path)}.disk_pw" # TODO: Make disk-pw-file randomly unique?
58
59 if not password:
60 password = self.password
61
62 if type(password) != bytes:
63 password = bytes(password, 'UTF-8')
64
65 with open(key_file, 'wb') as fh:
66 fh.write(password)
67
68 cryptsetup_args = shlex.join([
69 '/usr/bin/cryptsetup',
70 '--batch-mode',
71 '--verbose',
72 '--type', 'luks2',
73 '--pbkdf', 'argon2i',
74 '--hash', hash_type,
75 '--key-size', str(key_size),
76 '--iter-time', str(iter_time),
77 '--key-file', os.path.abspath(key_file),
78 '--use-urandom',
79 'luksFormat', partition.path,
80 ])
81
82 try:
83 # Try to setup the crypt-device
84 cmd_handle = sys_command(cryptsetup_args)
85 except SysCallError as err:
86 if err.exit_code == 256:
87 log(f'{partition} is being used, trying to unmount and crypt-close the device and running one more attempt at encrypting the device.', level=LOG_LEVELS.Debug)
88 # Partition was in use, unmount it and try again
89 partition.unmount()
90
91 # Get crypt-information about the device by doing a reverse lookup starting with the partition path
92 # For instance: /dev/sda
93 devinfo = json.loads(b''.join(sys_command(f"lsblk --fs -J {partition.path}")).decode('UTF-8'))['blockdevices'][0]
94
95 # For each child (sub-partition/sub-device)
96 if len(children := devinfo.get('children', [])):
97 for child in children:
98 # Unmount the child location
99 if child_mountpoint := child.get('mountpoint', None):
100 log(f'Unmounting {child_mountpoint}', level=LOG_LEVELS.Debug)
101 sys_command(f"umount -R {child_mountpoint}")
102
103 # And close it if possible.
104 log(f"Closing crypt device {child['name']}", level=LOG_LEVELS.Debug)
105 sys_command(f"cryptsetup close {child['name']}")
106
107 # Then try again to set up the crypt-device
108 cmd_handle = sys_command(cryptsetup_args)
109 else:
110 raise err
111
112 if cmd_handle.exit_code != 0:
113 raise DiskError(f'Could not encrypt volume "{partition.path}": {cmd_output}')
114
115 return key_file
116
117 def unlock(self, partition, mountpoint, key_file):
118 """
119 Mounts a luks2 compatible partition to a certain mountpoint.
120 Keyfile must be specified as there's no way to interact with the pw-prompt atm.
121
122 :param mountpoint: The name without absolute path, for instance "luksdev" will point to /dev/mapper/luksdev
123 :type mountpoint: str
124 """
125 from .disk import get_filesystem_type
126 if '/' in mountpoint:
127 os.path.basename(mountpoint) # TODO: Raise exception instead?
128
129 wait_timer = time.time()
130 while pathlib.Path(partition.path).exists() is False and time.time() - wait_timer < 10:
131 time.sleep(0.025)
132
133 sys_command(f'/usr/bin/cryptsetup open {partition.path} {mountpoint} --key-file {os.path.abspath(key_file)} --type luks2')
134 if os.path.islink(f'/dev/mapper/{mountpoint}'):
135 self.mapdev = f'/dev/mapper/{mountpoint}'
136 unlocked_partition = Partition(self.mapdev, None, encrypted=True, filesystem=get_filesystem_type(self.mapdev), autodetect_filesystem=False)
137 unlocked_partition.allow_formatting = self.partition.allow_formatting
138 return unlocked_partition
139
140 def close(self, mountpoint=None):
141 if not mountpoint:
142 mountpoint = self.mapdev
143
144 sys_command(f'/usr/bin/cryptsetup close {self.mapdev}')
145 return os.path.islink(self.mapdev) is False
146
147 def format(self, path):
148 if (handle := sys_command(f"/usr/bin/cryptsetup -q -v luksErase {path}")).exit_code != 0:
149 raise DiskError(f'Could not format {path} with {self.filesystem} because: {b"".join(handle)}')
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/archinstall/lib/luks.py b/archinstall/lib/luks.py
--- a/archinstall/lib/luks.py
+++ b/archinstall/lib/luks.py
@@ -70,7 +70,7 @@
'--batch-mode',
'--verbose',
'--type', 'luks2',
- '--pbkdf', 'argon2i',
+ '--pbkdf', 'argon2id',
'--hash', hash_type,
'--key-size', str(key_size),
'--iter-time', str(iter_time),
|
{"golden_diff": "diff --git a/archinstall/lib/luks.py b/archinstall/lib/luks.py\n--- a/archinstall/lib/luks.py\n+++ b/archinstall/lib/luks.py\n@@ -70,7 +70,7 @@\n \t\t\t'--batch-mode',\n \t\t\t'--verbose',\n \t\t\t'--type', 'luks2',\n-\t\t\t'--pbkdf', 'argon2i',\n+\t\t\t'--pbkdf', 'argon2id',\n \t\t\t'--hash', hash_type,\n \t\t\t'--key-size', str(key_size),\n \t\t\t'--iter-time', str(iter_time),\n", "issue": "Why is argon2i used?\nReferring to: https://github.com/archlinux/archinstall/blob/master/archinstall/lib/luks.py#L73\r\n\r\nI read that argon2id gives better security, so is there any reason against it?\n", "before_files": [{"content": "import os\nimport shlex\nimport time\nimport pathlib\nfrom .exceptions import *\nfrom .general import *\nfrom .disk import Partition\nfrom .output import log, LOG_LEVELS\nfrom .storage import storage\n\nclass luks2():\n\tdef __init__(self, partition, mountpoint, password, key_file=None, auto_unmount=False, *args, **kwargs):\n\t\tself.password = password\n\t\tself.partition = partition\n\t\tself.mountpoint = mountpoint\n\t\tself.args = args\n\t\tself.kwargs = kwargs\n\t\tself.key_file = key_file\n\t\tself.auto_unmount = auto_unmount\n\t\tself.filesystem = 'crypto_LUKS'\n\t\tself.mapdev = None\n\n\tdef __enter__(self):\n\t\t#if self.partition.allow_formatting:\n\t\t#\tself.key_file = self.encrypt(self.partition, *self.args, **self.kwargs)\n\t\t#else:\n\t\tif not self.key_file:\n\t\t\tself.key_file = f\"/tmp/{os.path.basename(self.partition.path)}.disk_pw\" # TODO: Make disk-pw-file randomly unique?\n\t\t\n\t\tif type(self.password) != bytes:\n\t\t\tself.password = bytes(self.password, 'UTF-8')\n\n\t\twith open(self.key_file, 'wb') as fh:\n\t\t\tfh.write(self.password)\n\n\t\treturn self.unlock(self.partition, self.mountpoint, self.key_file)\n\n\tdef __exit__(self, *args, **kwargs):\n\t\t# TODO: https://stackoverflow.com/questions/28157929/how-to-safely-handle-an-exception-inside-a-context-manager\n\t\tif self.auto_unmount:\n\t\t\tself.close()\n\n\t\tif len(args) >= 2 and args[1]:\n\t\t\traise args[1]\n\t\treturn True\n\n\tdef encrypt(self, partition, password=None, key_size=512, hash_type='sha512', iter_time=10000, key_file=None):\n\t\tif not self.partition.allow_formatting:\n\t\t\traise DiskError(f'Could not encrypt volume {self.partition} due to it having a formatting lock.')\n\n\t\tlog(f'Encrypting {partition} (This might take a while)', level=LOG_LEVELS.Info)\n\n\t\tif not key_file:\n\t\t\tif self.key_file:\n\t\t\t\tkey_file = self.key_file\n\t\t\telse:\n\t\t\t\tkey_file = f\"/tmp/{os.path.basename(self.partition.path)}.disk_pw\" # TODO: Make disk-pw-file randomly unique?\n\n\t\tif not password:\n\t\t\tpassword = self.password\n\n\t\tif type(password) != bytes:\n\t\t\tpassword = bytes(password, 'UTF-8')\n\n\t\twith open(key_file, 'wb') as fh:\n\t\t\tfh.write(password)\n\n\t\tcryptsetup_args = shlex.join([\n\t\t\t'/usr/bin/cryptsetup',\n\t\t\t'--batch-mode',\n\t\t\t'--verbose',\n\t\t\t'--type', 'luks2',\n\t\t\t'--pbkdf', 'argon2i',\n\t\t\t'--hash', hash_type,\n\t\t\t'--key-size', str(key_size),\n\t\t\t'--iter-time', str(iter_time),\n\t\t\t'--key-file', os.path.abspath(key_file),\n\t\t\t'--use-urandom',\n\t\t\t'luksFormat', partition.path,\n\t\t])\n\n\t\ttry:\n\t\t\t# Try to setup the crypt-device\n\t\t\tcmd_handle = sys_command(cryptsetup_args)\n\t\texcept SysCallError as err:\n\t\t\tif err.exit_code == 256:\n\t\t\t\tlog(f'{partition} is being used, trying to unmount and crypt-close the device and running one more attempt at encrypting the device.', 
level=LOG_LEVELS.Debug)\n\t\t\t\t# Partition was in use, unmount it and try again\n\t\t\t\tpartition.unmount()\n\n\t\t\t\t# Get crypt-information about the device by doing a reverse lookup starting with the partition path\n\t\t\t\t# For instance: /dev/sda\n\t\t\t\tdevinfo = json.loads(b''.join(sys_command(f\"lsblk --fs -J {partition.path}\")).decode('UTF-8'))['blockdevices'][0]\n\n\t\t\t\t# For each child (sub-partition/sub-device)\n\t\t\t\tif len(children := devinfo.get('children', [])):\n\t\t\t\t\tfor child in children:\n\t\t\t\t\t\t# Unmount the child location\n\t\t\t\t\t\tif child_mountpoint := child.get('mountpoint', None):\n\t\t\t\t\t\t\tlog(f'Unmounting {child_mountpoint}', level=LOG_LEVELS.Debug)\n\t\t\t\t\t\t\tsys_command(f\"umount -R {child_mountpoint}\")\n\n\t\t\t\t\t\t# And close it if possible.\n\t\t\t\t\t\tlog(f\"Closing crypt device {child['name']}\", level=LOG_LEVELS.Debug)\n\t\t\t\t\t\tsys_command(f\"cryptsetup close {child['name']}\")\n\n\t\t\t\t# Then try again to set up the crypt-device\n\t\t\t\tcmd_handle = sys_command(cryptsetup_args)\n\t\t\telse:\n\t\t\t\traise err\n\n\t\tif cmd_handle.exit_code != 0:\n\t\t\traise DiskError(f'Could not encrypt volume \"{partition.path}\": {cmd_output}')\n\t\n\t\treturn key_file\n\n\tdef unlock(self, partition, mountpoint, key_file):\n\t\t\"\"\"\n\t\tMounts a luks2 compatible partition to a certain mountpoint.\n\t\tKeyfile must be specified as there's no way to interact with the pw-prompt atm.\n\n\t\t:param mountpoint: The name without absolute path, for instance \"luksdev\" will point to /dev/mapper/luksdev\n\t\t:type mountpoint: str\n\t\t\"\"\"\n\t\tfrom .disk import get_filesystem_type\n\t\tif '/' in mountpoint:\n\t\t\tos.path.basename(mountpoint) # TODO: Raise exception instead?\n\n\t\twait_timer = time.time()\n\t\twhile pathlib.Path(partition.path).exists() is False and time.time() - wait_timer < 10:\n\t\t\ttime.sleep(0.025)\n\n\t\tsys_command(f'/usr/bin/cryptsetup open {partition.path} {mountpoint} --key-file {os.path.abspath(key_file)} --type luks2')\n\t\tif os.path.islink(f'/dev/mapper/{mountpoint}'):\n\t\t\tself.mapdev = f'/dev/mapper/{mountpoint}'\n\t\t\tunlocked_partition = Partition(self.mapdev, None, encrypted=True, filesystem=get_filesystem_type(self.mapdev), autodetect_filesystem=False)\n\t\t\tunlocked_partition.allow_formatting = self.partition.allow_formatting\n\t\t\treturn unlocked_partition\n\n\tdef close(self, mountpoint=None):\n\t\tif not mountpoint:\n\t\t\tmountpoint = self.mapdev\n\n\t\tsys_command(f'/usr/bin/cryptsetup close {self.mapdev}')\n\t\treturn os.path.islink(self.mapdev) is False\n\n\tdef format(self, path):\n\t\tif (handle := sys_command(f\"/usr/bin/cryptsetup -q -v luksErase {path}\")).exit_code != 0:\n\t\t\traise DiskError(f'Could not format {path} with {self.filesystem} because: {b\"\".join(handle)}')\n", "path": "archinstall/lib/luks.py"}], "after_files": [{"content": "import os\nimport shlex\nimport time\nimport pathlib\nfrom .exceptions import *\nfrom .general import *\nfrom .disk import Partition\nfrom .output import log, LOG_LEVELS\nfrom .storage import storage\n\nclass luks2():\n\tdef __init__(self, partition, mountpoint, password, key_file=None, auto_unmount=False, *args, **kwargs):\n\t\tself.password = password\n\t\tself.partition = partition\n\t\tself.mountpoint = mountpoint\n\t\tself.args = args\n\t\tself.kwargs = kwargs\n\t\tself.key_file = key_file\n\t\tself.auto_unmount = auto_unmount\n\t\tself.filesystem = 'crypto_LUKS'\n\t\tself.mapdev = None\n\n\tdef __enter__(self):\n\t\t#if 
self.partition.allow_formatting:\n\t\t#\tself.key_file = self.encrypt(self.partition, *self.args, **self.kwargs)\n\t\t#else:\n\t\tif not self.key_file:\n\t\t\tself.key_file = f\"/tmp/{os.path.basename(self.partition.path)}.disk_pw\" # TODO: Make disk-pw-file randomly unique?\n\t\t\n\t\tif type(self.password) != bytes:\n\t\t\tself.password = bytes(self.password, 'UTF-8')\n\n\t\twith open(self.key_file, 'wb') as fh:\n\t\t\tfh.write(self.password)\n\n\t\treturn self.unlock(self.partition, self.mountpoint, self.key_file)\n\n\tdef __exit__(self, *args, **kwargs):\n\t\t# TODO: https://stackoverflow.com/questions/28157929/how-to-safely-handle-an-exception-inside-a-context-manager\n\t\tif self.auto_unmount:\n\t\t\tself.close()\n\n\t\tif len(args) >= 2 and args[1]:\n\t\t\traise args[1]\n\t\treturn True\n\n\tdef encrypt(self, partition, password=None, key_size=512, hash_type='sha512', iter_time=10000, key_file=None):\n\t\tif not self.partition.allow_formatting:\n\t\t\traise DiskError(f'Could not encrypt volume {self.partition} due to it having a formatting lock.')\n\n\t\tlog(f'Encrypting {partition} (This might take a while)', level=LOG_LEVELS.Info)\n\n\t\tif not key_file:\n\t\t\tif self.key_file:\n\t\t\t\tkey_file = self.key_file\n\t\t\telse:\n\t\t\t\tkey_file = f\"/tmp/{os.path.basename(self.partition.path)}.disk_pw\" # TODO: Make disk-pw-file randomly unique?\n\n\t\tif not password:\n\t\t\tpassword = self.password\n\n\t\tif type(password) != bytes:\n\t\t\tpassword = bytes(password, 'UTF-8')\n\n\t\twith open(key_file, 'wb') as fh:\n\t\t\tfh.write(password)\n\n\t\tcryptsetup_args = shlex.join([\n\t\t\t'/usr/bin/cryptsetup',\n\t\t\t'--batch-mode',\n\t\t\t'--verbose',\n\t\t\t'--type', 'luks2',\n\t\t\t'--pbkdf', 'argon2id',\n\t\t\t'--hash', hash_type,\n\t\t\t'--key-size', str(key_size),\n\t\t\t'--iter-time', str(iter_time),\n\t\t\t'--key-file', os.path.abspath(key_file),\n\t\t\t'--use-urandom',\n\t\t\t'luksFormat', partition.path,\n\t\t])\n\n\t\ttry:\n\t\t\t# Try to setup the crypt-device\n\t\t\tcmd_handle = sys_command(cryptsetup_args)\n\t\texcept SysCallError as err:\n\t\t\tif err.exit_code == 256:\n\t\t\t\tlog(f'{partition} is being used, trying to unmount and crypt-close the device and running one more attempt at encrypting the device.', level=LOG_LEVELS.Debug)\n\t\t\t\t# Partition was in use, unmount it and try again\n\t\t\t\tpartition.unmount()\n\n\t\t\t\t# Get crypt-information about the device by doing a reverse lookup starting with the partition path\n\t\t\t\t# For instance: /dev/sda\n\t\t\t\tdevinfo = json.loads(b''.join(sys_command(f\"lsblk --fs -J {partition.path}\")).decode('UTF-8'))['blockdevices'][0]\n\n\t\t\t\t# For each child (sub-partition/sub-device)\n\t\t\t\tif len(children := devinfo.get('children', [])):\n\t\t\t\t\tfor child in children:\n\t\t\t\t\t\t# Unmount the child location\n\t\t\t\t\t\tif child_mountpoint := child.get('mountpoint', None):\n\t\t\t\t\t\t\tlog(f'Unmounting {child_mountpoint}', level=LOG_LEVELS.Debug)\n\t\t\t\t\t\t\tsys_command(f\"umount -R {child_mountpoint}\")\n\n\t\t\t\t\t\t# And close it if possible.\n\t\t\t\t\t\tlog(f\"Closing crypt device {child['name']}\", level=LOG_LEVELS.Debug)\n\t\t\t\t\t\tsys_command(f\"cryptsetup close {child['name']}\")\n\n\t\t\t\t# Then try again to set up the crypt-device\n\t\t\t\tcmd_handle = sys_command(cryptsetup_args)\n\t\t\telse:\n\t\t\t\traise err\n\n\t\tif cmd_handle.exit_code != 0:\n\t\t\traise DiskError(f'Could not encrypt volume \"{partition.path}\": {cmd_output}')\n\t\n\t\treturn key_file\n\n\tdef unlock(self, 
partition, mountpoint, key_file):\n\t\t\"\"\"\n\t\tMounts a luks2 compatible partition to a certain mountpoint.\n\t\tKeyfile must be specified as there's no way to interact with the pw-prompt atm.\n\n\t\t:param mountpoint: The name without absolute path, for instance \"luksdev\" will point to /dev/mapper/luksdev\n\t\t:type mountpoint: str\n\t\t\"\"\"\n\t\tfrom .disk import get_filesystem_type\n\t\tif '/' in mountpoint:\n\t\t\tos.path.basename(mountpoint) # TODO: Raise exception instead?\n\n\t\twait_timer = time.time()\n\t\twhile pathlib.Path(partition.path).exists() is False and time.time() - wait_timer < 10:\n\t\t\ttime.sleep(0.025)\n\n\t\tsys_command(f'/usr/bin/cryptsetup open {partition.path} {mountpoint} --key-file {os.path.abspath(key_file)} --type luks2')\n\t\tif os.path.islink(f'/dev/mapper/{mountpoint}'):\n\t\t\tself.mapdev = f'/dev/mapper/{mountpoint}'\n\t\t\tunlocked_partition = Partition(self.mapdev, None, encrypted=True, filesystem=get_filesystem_type(self.mapdev), autodetect_filesystem=False)\n\t\t\tunlocked_partition.allow_formatting = self.partition.allow_formatting\n\t\t\treturn unlocked_partition\n\n\tdef close(self, mountpoint=None):\n\t\tif not mountpoint:\n\t\t\tmountpoint = self.mapdev\n\n\t\tsys_command(f'/usr/bin/cryptsetup close {self.mapdev}')\n\t\treturn os.path.islink(self.mapdev) is False\n\n\tdef format(self, path):\n\t\tif (handle := sys_command(f\"/usr/bin/cryptsetup -q -v luksErase {path}\")).exit_code != 0:\n\t\t\traise DiskError(f'Could not format {path} with {self.filesystem} because: {b\"\".join(handle)}')\n", "path": "archinstall/lib/luks.py"}]}
| 2,127 | 135 |
gh_patches_debug_39574
|
rasdani/github-patches
|
git_diff
|
searxng__searxng-586
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: yggtorrent engine ( Suspended: access denied )
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/tiekoetter/searxng
Branch: master
Version: 1.0.0-1246-10968a7f
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/yggtorrent.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """
3 Yggtorrent (Videos, Music, Files)
4 """
5
6 from lxml import html
7 from operator import itemgetter
8 from datetime import datetime
9 from urllib.parse import quote
10 from searx.utils import extract_text, get_torrent_size
11 from searx.network import get as http_get
12
13 # about
14 about = {
15 "website": 'https://www4.yggtorrent.li/',
16 "wikidata_id": None,
17 "official_api_documentation": None,
18 "use_official_api": False,
19 "require_api_key": False,
20 "results": 'HTML',
21 }
22
23 # engine dependent config
24 categories = ['files']
25 paging = True
26
27 # search-url
28 url = 'https://www4.yggtorrent.li/'
29 search_url = url + 'engine/search?name={search_term}&do=search&page={pageno}&category={search_type}'
30
31 # yggtorrent specific type-definitions
32 search_types = {'files': 'all',
33 'music': '2139',
34 'videos': '2145'}
35
36 cookies = dict()
37
38
39 def init(engine_settings=None):
40 # initial cookies
41 resp = http_get(url, allow_redirects=False)
42 if resp.ok:
43 for r in resp.history:
44 cookies.update(r.cookies)
45 cookies.update(resp.cookies)
46
47
48 # do search-request
49 def request(query, params):
50 search_type = search_types.get(params['category'], 'all')
51 pageno = (params['pageno'] - 1) * 50
52
53 params['url'] = search_url.format(search_term=quote(query),
54 search_type=search_type,
55 pageno=pageno)
56
57 params['cookies'] = cookies
58
59 return params
60
61
62 # get response from search-request
63 def response(resp):
64 results = []
65 dom = html.fromstring(resp.text)
66
67 search_res = dom.xpath('//section[@id="#torrents"]/div/table/tbody/tr')
68
69 # return empty array if nothing is found
70 if not search_res:
71 return []
72
73 # parse results
74 for result in search_res:
75 link = result.xpath('.//a[@id="torrent_name"]')[0]
76 href = link.attrib.get('href')
77 title = extract_text(link)
78 seed = result.xpath('.//td[8]/text()')[0]
79 leech = result.xpath('.//td[9]/text()')[0]
80
81 # convert seed to int if possible
82 if seed.isdigit():
83 seed = int(seed)
84 else:
85 seed = 0
86
87 # convert leech to int if possible
88 if leech.isdigit():
89 leech = int(leech)
90 else:
91 leech = 0
92
93 params = {'url': href,
94 'title': title,
95 'seed': seed,
96 'leech': leech,
97 'template': 'torrent.html'}
98
99 # let's try to calculate the torrent size
100 try:
101 filesize_info = result.xpath('.//td[6]/text()')[0]
102 filesize = filesize_info[:-2]
103 filesize_multiplier = filesize_info[-2:].lower()
104 multiplier_french_to_english = {
105 'to': 'TiB',
106 'go': 'GiB',
107 'mo': 'MiB',
108 'ko': 'KiB'
109 }
110 filesize = get_torrent_size(filesize, multiplier_french_to_english[filesize_multiplier])
111 params['filesize'] = filesize
112 except:
113 pass
114
115 # extract and convert creation date
116 try:
117 date_ts = result.xpath('.//td[5]/div/text()')[0]
118 date = datetime.fromtimestamp(float(date_ts))
119 params['publishedDate'] = date
120 except:
121 pass
122
123 # append result
124 results.append(params)
125
126 # return results sorted by seeder
127 return sorted(results, key=itemgetter('seed'), reverse=True)
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/engines/yggtorrent.py b/searx/engines/yggtorrent.py
deleted file mode 100644
--- a/searx/engines/yggtorrent.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# SPDX-License-Identifier: AGPL-3.0-or-later
-"""
- Yggtorrent (Videos, Music, Files)
-"""
-
-from lxml import html
-from operator import itemgetter
-from datetime import datetime
-from urllib.parse import quote
-from searx.utils import extract_text, get_torrent_size
-from searx.network import get as http_get
-
-# about
-about = {
- "website": 'https://www4.yggtorrent.li/',
- "wikidata_id": None,
- "official_api_documentation": None,
- "use_official_api": False,
- "require_api_key": False,
- "results": 'HTML',
-}
-
-# engine dependent config
-categories = ['files']
-paging = True
-
-# search-url
-url = 'https://www4.yggtorrent.li/'
-search_url = url + 'engine/search?name={search_term}&do=search&page={pageno}&category={search_type}'
-
-# yggtorrent specific type-definitions
-search_types = {'files': 'all',
- 'music': '2139',
- 'videos': '2145'}
-
-cookies = dict()
-
-
-def init(engine_settings=None):
- # initial cookies
- resp = http_get(url, allow_redirects=False)
- if resp.ok:
- for r in resp.history:
- cookies.update(r.cookies)
- cookies.update(resp.cookies)
-
-
-# do search-request
-def request(query, params):
- search_type = search_types.get(params['category'], 'all')
- pageno = (params['pageno'] - 1) * 50
-
- params['url'] = search_url.format(search_term=quote(query),
- search_type=search_type,
- pageno=pageno)
-
- params['cookies'] = cookies
-
- return params
-
-
-# get response from search-request
-def response(resp):
- results = []
- dom = html.fromstring(resp.text)
-
- search_res = dom.xpath('//section[@id="#torrents"]/div/table/tbody/tr')
-
- # return empty array if nothing is found
- if not search_res:
- return []
-
- # parse results
- for result in search_res:
- link = result.xpath('.//a[@id="torrent_name"]')[0]
- href = link.attrib.get('href')
- title = extract_text(link)
- seed = result.xpath('.//td[8]/text()')[0]
- leech = result.xpath('.//td[9]/text()')[0]
-
- # convert seed to int if possible
- if seed.isdigit():
- seed = int(seed)
- else:
- seed = 0
-
- # convert leech to int if possible
- if leech.isdigit():
- leech = int(leech)
- else:
- leech = 0
-
- params = {'url': href,
- 'title': title,
- 'seed': seed,
- 'leech': leech,
- 'template': 'torrent.html'}
-
- # let's try to calculate the torrent size
- try:
- filesize_info = result.xpath('.//td[6]/text()')[0]
- filesize = filesize_info[:-2]
- filesize_multiplier = filesize_info[-2:].lower()
- multiplier_french_to_english = {
- 'to': 'TiB',
- 'go': 'GiB',
- 'mo': 'MiB',
- 'ko': 'KiB'
- }
- filesize = get_torrent_size(filesize, multiplier_french_to_english[filesize_multiplier])
- params['filesize'] = filesize
- except:
- pass
-
- # extract and convert creation date
- try:
- date_ts = result.xpath('.//td[5]/div/text()')[0]
- date = datetime.fromtimestamp(float(date_ts))
- params['publishedDate'] = date
- except:
- pass
-
- # append result
- results.append(params)
-
- # return results sorted by seeder
- return sorted(results, key=itemgetter('seed'), reverse=True)
|
{"golden_diff": "diff --git a/searx/engines/yggtorrent.py b/searx/engines/yggtorrent.py\ndeleted file mode 100644\n--- a/searx/engines/yggtorrent.py\n+++ /dev/null\n@@ -1,127 +0,0 @@\n-# SPDX-License-Identifier: AGPL-3.0-or-later\n-\"\"\"\n- Yggtorrent (Videos, Music, Files)\n-\"\"\"\n-\n-from lxml import html\n-from operator import itemgetter\n-from datetime import datetime\n-from urllib.parse import quote\n-from searx.utils import extract_text, get_torrent_size\n-from searx.network import get as http_get\n-\n-# about\n-about = {\n- \"website\": 'https://www4.yggtorrent.li/',\n- \"wikidata_id\": None,\n- \"official_api_documentation\": None,\n- \"use_official_api\": False,\n- \"require_api_key\": False,\n- \"results\": 'HTML',\n-}\n-\n-# engine dependent config\n-categories = ['files']\n-paging = True\n-\n-# search-url\n-url = 'https://www4.yggtorrent.li/'\n-search_url = url + 'engine/search?name={search_term}&do=search&page={pageno}&category={search_type}'\n-\n-# yggtorrent specific type-definitions\n-search_types = {'files': 'all',\n- 'music': '2139',\n- 'videos': '2145'}\n-\n-cookies = dict()\n-\n-\n-def init(engine_settings=None):\n- # initial cookies\n- resp = http_get(url, allow_redirects=False)\n- if resp.ok:\n- for r in resp.history:\n- cookies.update(r.cookies)\n- cookies.update(resp.cookies)\n-\n-\n-# do search-request\n-def request(query, params):\n- search_type = search_types.get(params['category'], 'all')\n- pageno = (params['pageno'] - 1) * 50\n-\n- params['url'] = search_url.format(search_term=quote(query),\n- search_type=search_type,\n- pageno=pageno)\n-\n- params['cookies'] = cookies\n-\n- return params\n-\n-\n-# get response from search-request\n-def response(resp):\n- results = []\n- dom = html.fromstring(resp.text)\n-\n- search_res = dom.xpath('//section[@id=\"#torrents\"]/div/table/tbody/tr')\n-\n- # return empty array if nothing is found\n- if not search_res:\n- return []\n-\n- # parse results\n- for result in search_res:\n- link = result.xpath('.//a[@id=\"torrent_name\"]')[0]\n- href = link.attrib.get('href')\n- title = extract_text(link)\n- seed = result.xpath('.//td[8]/text()')[0]\n- leech = result.xpath('.//td[9]/text()')[0]\n-\n- # convert seed to int if possible\n- if seed.isdigit():\n- seed = int(seed)\n- else:\n- seed = 0\n-\n- # convert leech to int if possible\n- if leech.isdigit():\n- leech = int(leech)\n- else:\n- leech = 0\n-\n- params = {'url': href,\n- 'title': title,\n- 'seed': seed,\n- 'leech': leech,\n- 'template': 'torrent.html'}\n-\n- # let's try to calculate the torrent size\n- try:\n- filesize_info = result.xpath('.//td[6]/text()')[0]\n- filesize = filesize_info[:-2]\n- filesize_multiplier = filesize_info[-2:].lower()\n- multiplier_french_to_english = {\n- 'to': 'TiB',\n- 'go': 'GiB',\n- 'mo': 'MiB',\n- 'ko': 'KiB'\n- }\n- filesize = get_torrent_size(filesize, multiplier_french_to_english[filesize_multiplier])\n- params['filesize'] = filesize\n- except:\n- pass\n-\n- # extract and convert creation date\n- try:\n- date_ts = result.xpath('.//td[5]/div/text()')[0]\n- date = datetime.fromtimestamp(float(date_ts))\n- params['publishedDate'] = date\n- except:\n- pass\n-\n- # append result\n- results.append(params)\n-\n- # return results sorted by seeder\n- return sorted(results, key=itemgetter('seed'), reverse=True)\n", "issue": "Bug: yggtorrent engine ( Suspended: access denied )\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nRepository: https://github.com/tiekoetter/searxng\r\nBranch: 
master\r\nVersion: 1.0.0-1246-10968a7f\r\n\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Yggtorrent (Videos, Music, Files)\n\"\"\"\n\nfrom lxml import html\nfrom operator import itemgetter\nfrom datetime import datetime\nfrom urllib.parse import quote\nfrom searx.utils import extract_text, get_torrent_size\nfrom searx.network import get as http_get\n\n# about\nabout = {\n \"website\": 'https://www4.yggtorrent.li/',\n \"wikidata_id\": None,\n \"official_api_documentation\": None,\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['files']\npaging = True\n\n# search-url\nurl = 'https://www4.yggtorrent.li/'\nsearch_url = url + 'engine/search?name={search_term}&do=search&page={pageno}&category={search_type}'\n\n# yggtorrent specific type-definitions\nsearch_types = {'files': 'all',\n 'music': '2139',\n 'videos': '2145'}\n\ncookies = dict()\n\n\ndef init(engine_settings=None):\n # initial cookies\n resp = http_get(url, allow_redirects=False)\n if resp.ok:\n for r in resp.history:\n cookies.update(r.cookies)\n cookies.update(resp.cookies)\n\n\n# do search-request\ndef request(query, params):\n search_type = search_types.get(params['category'], 'all')\n pageno = (params['pageno'] - 1) * 50\n\n params['url'] = search_url.format(search_term=quote(query),\n search_type=search_type,\n pageno=pageno)\n\n params['cookies'] = cookies\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n dom = html.fromstring(resp.text)\n\n search_res = dom.xpath('//section[@id=\"#torrents\"]/div/table/tbody/tr')\n\n # return empty array if nothing is found\n if not search_res:\n return []\n\n # parse results\n for result in search_res:\n link = result.xpath('.//a[@id=\"torrent_name\"]')[0]\n href = link.attrib.get('href')\n title = extract_text(link)\n seed = result.xpath('.//td[8]/text()')[0]\n leech = result.xpath('.//td[9]/text()')[0]\n\n # convert seed to int if possible\n if seed.isdigit():\n seed = int(seed)\n else:\n seed = 0\n\n # convert leech to int if possible\n if leech.isdigit():\n leech = int(leech)\n else:\n leech = 0\n\n params = {'url': href,\n 'title': title,\n 'seed': seed,\n 'leech': leech,\n 'template': 'torrent.html'}\n\n # let's try to calculate the torrent size\n try:\n filesize_info = result.xpath('.//td[6]/text()')[0]\n filesize = filesize_info[:-2]\n filesize_multiplier = filesize_info[-2:].lower()\n multiplier_french_to_english = {\n 'to': 'TiB',\n 'go': 'GiB',\n 'mo': 'MiB',\n 'ko': 'KiB'\n }\n filesize = get_torrent_size(filesize, multiplier_french_to_english[filesize_multiplier])\n params['filesize'] = filesize\n except:\n pass\n\n # extract and convert creation date\n try:\n date_ts = result.xpath('.//td[5]/div/text()')[0]\n date = datetime.fromtimestamp(float(date_ts))\n params['publishedDate'] = date\n except:\n pass\n\n # append result\n results.append(params)\n\n # return results sorted by seeder\n return sorted(results, key=itemgetter('seed'), reverse=True)\n", "path": "searx/engines/yggtorrent.py"}], "after_files": [{"content": null, "path": "searx/engines/yggtorrent.py"}]}
| 1,477 | 1,007 |
gh_patches_debug_3058
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-432
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug - Redirect loop
There seems to be a redirect-loop bug when a user (created without any admin/editor roles) attempts to log in. I was trying out the demo and apparently the admin interface does not cater for users without any roles.
It would be nice to know if there is a workaround for this issue. Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailadmin/views/account.py`
Content:
```
1 from django.conf import settings
2 from django.shortcuts import render, redirect
3 from django.contrib import messages
4 from django.contrib.auth.forms import SetPasswordForm
5 from django.contrib.auth.decorators import permission_required
6 from django.contrib.auth.views import logout as auth_logout, login as auth_login
7 from django.utils.translation import ugettext as _
8 from django.views.decorators.debug import sensitive_post_parameters
9 from django.views.decorators.cache import never_cache
10
11 from wagtail.wagtailadmin import forms
12 from wagtail.wagtailusers.forms import NotificationPreferencesForm
13 from wagtail.wagtailusers.models import UserProfile
14 from wagtail.wagtailcore.models import UserPagePermissionsProxy
15
16
17 @permission_required('wagtailadmin.access_admin')
18 def account(request):
19 user_perms = UserPagePermissionsProxy(request.user)
20 show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()
21
22 return render(request, 'wagtailadmin/account/account.html', {
23 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),
24 'show_notification_preferences': show_notification_preferences
25 })
26
27
28 @permission_required('wagtailadmin.access_admin')
29 def change_password(request):
30 can_change_password = request.user.has_usable_password()
31
32 if can_change_password:
33 if request.POST:
34 form = SetPasswordForm(request.user, request.POST)
35
36 if form.is_valid():
37 form.save()
38
39 messages.success(request, _("Your password has been changed successfully!"))
40 return redirect('wagtailadmin_account')
41 else:
42 form = SetPasswordForm(request.user)
43 else:
44 form = None
45
46 return render(request, 'wagtailadmin/account/change_password.html', {
47 'form': form,
48 'can_change_password': can_change_password,
49 })
50
51
52 @permission_required('wagtailadmin.access_admin')
53 def notification_preferences(request):
54
55 if request.POST:
56 form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))
57
58 if form.is_valid():
59 form.save()
60 messages.success(request, _("Your preferences have been updated successfully!"))
61 return redirect('wagtailadmin_account')
62 else:
63 form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))
64
65 # quick-and-dirty catch-all in case the form has been rendered with no
66 # fields, as the user has no customisable permissions
67 if not form.fields:
68 return redirect('wagtailadmin_account')
69
70 return render(request, 'wagtailadmin/account/notification_preferences.html', {
71 'form': form,
72 })
73
74
75 @sensitive_post_parameters()
76 @never_cache
77 def login(request):
78 if request.user.is_authenticated():
79 return redirect('wagtailadmin_home')
80 else:
81 return auth_login(request,
82 template_name='wagtailadmin/login.html',
83 authentication_form=forms.LoginForm,
84 extra_context={
85 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),
86 },
87 )
88
89
90 def logout(request):
91 response = auth_logout(request, next_page = 'wagtailadmin_login')
92
93 # By default, logging out will generate a fresh sessionid cookie. We want to use the
94 # absence of sessionid as an indication that front-end pages are being viewed by a
95 # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.
96 response.delete_cookie(settings.SESSION_COOKIE_NAME,
97 domain=settings.SESSION_COOKIE_DOMAIN,
98 path=settings.SESSION_COOKIE_PATH)
99
100 # HACK: pretend that the session hasn't been modified, so that SessionMiddleware
101 # won't override the above and write a new cookie.
102 request.session.modified = False
103
104 return response
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py
--- a/wagtail/wagtailadmin/views/account.py
+++ b/wagtail/wagtailadmin/views/account.py
@@ -75,7 +75,7 @@
@sensitive_post_parameters()
@never_cache
def login(request):
- if request.user.is_authenticated():
+ if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):
return redirect('wagtailadmin_home')
else:
return auth_login(request,
|
{"golden_diff": "diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py\n--- a/wagtail/wagtailadmin/views/account.py\n+++ b/wagtail/wagtailadmin/views/account.py\n@@ -75,7 +75,7 @@\n @sensitive_post_parameters()\n @never_cache\n def login(request):\n- if request.user.is_authenticated():\n+ if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):\n return redirect('wagtailadmin_home')\n else:\n return auth_login(request,\n", "issue": "Bug - Redirect loop\nThere seems to be a bug of redirection loop when a user (created without any admin/editor roles) attempts to log in. I was trying out the demo and apparently the admin interface does not cater for users without any roles.\n\nIt would be nice to know if there is a workaround this issue. Thanks!\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.decorators import permission_required\nfrom django.contrib.auth.views import logout as auth_logout, login as auth_login\nfrom django.utils.translation import ugettext as _ \nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.decorators.cache import never_cache\n\nfrom wagtail.wagtailadmin import forms\nfrom wagtail.wagtailusers.forms import NotificationPreferencesForm\nfrom wagtail.wagtailusers.models import UserProfile\nfrom wagtail.wagtailcore.models import UserPagePermissionsProxy\n\n\n@permission_required('wagtailadmin.access_admin')\ndef account(request):\n user_perms = UserPagePermissionsProxy(request.user)\n show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()\n\n return render(request, 'wagtailadmin/account/account.html', {\n 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),\n 'show_notification_preferences': show_notification_preferences\n })\n\n\n@permission_required('wagtailadmin.access_admin')\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n\n\n@permission_required('wagtailadmin.access_admin')\ndef notification_preferences(request):\n\n if request.POST:\n form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Your preferences have been updated successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))\n\n # quick-and-dirty catch-all in case the form has been rendered with no\n # fields, as the user has no customisable permissions\n if not form.fields:\n return redirect('wagtailadmin_account')\n\n return render(request, 'wagtailadmin/account/notification_preferences.html', {\n 'form': form,\n })\n\n\n@sensitive_post_parameters()\n@never_cache\ndef login(request):\n if 
request.user.is_authenticated():\n return redirect('wagtailadmin_home')\n else:\n return auth_login(request,\n template_name='wagtailadmin/login.html',\n authentication_form=forms.LoginForm,\n extra_context={\n 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),\n },\n )\n\n\ndef logout(request):\n response = auth_logout(request, next_page = 'wagtailadmin_login')\n\n # By default, logging out will generate a fresh sessionid cookie. We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH)\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n request.session.modified = False\n\n return response\n", "path": "wagtail/wagtailadmin/views/account.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.decorators import permission_required\nfrom django.contrib.auth.views import logout as auth_logout, login as auth_login\nfrom django.utils.translation import ugettext as _ \nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.decorators.cache import never_cache\n\nfrom wagtail.wagtailadmin import forms\nfrom wagtail.wagtailusers.forms import NotificationPreferencesForm\nfrom wagtail.wagtailusers.models import UserProfile\nfrom wagtail.wagtailcore.models import UserPagePermissionsProxy\n\n\n@permission_required('wagtailadmin.access_admin')\ndef account(request):\n user_perms = UserPagePermissionsProxy(request.user)\n show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()\n\n return render(request, 'wagtailadmin/account/account.html', {\n 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),\n 'show_notification_preferences': show_notification_preferences\n })\n\n\n@permission_required('wagtailadmin.access_admin')\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n\n\n@permission_required('wagtailadmin.access_admin')\ndef notification_preferences(request):\n\n if request.POST:\n form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Your preferences have been updated successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))\n\n # quick-and-dirty catch-all in case the form has been rendered with no\n # fields, as the user has no customisable permissions\n if not form.fields:\n return 
redirect('wagtailadmin_account')\n\n return render(request, 'wagtailadmin/account/notification_preferences.html', {\n 'form': form,\n })\n\n\n@sensitive_post_parameters()\n@never_cache\ndef login(request):\n if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):\n return redirect('wagtailadmin_home')\n else:\n return auth_login(request,\n template_name='wagtailadmin/login.html',\n authentication_form=forms.LoginForm,\n extra_context={\n 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),\n },\n )\n\n\ndef logout(request):\n response = auth_logout(request, next_page = 'wagtailadmin_login')\n\n # By default, logging out will generate a fresh sessionid cookie. We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH)\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n request.session.modified = False\n\n return response\n", "path": "wagtail/wagtailadmin/views/account.py"}]}
| 1,328 | 126 |
gh_patches_debug_1957
|
rasdani/github-patches
|
git_diff
|
SeldonIO__MLServer-192
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support other common names for SKLearn runtime
Add support for models named `model.pickle` and `model.pkl`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `runtimes/sklearn/mlserver_sklearn/sklearn.py`
Content:
```
1 import joblib
2
3 from typing import List
4
5 from mlserver import types
6 from mlserver.model import MLModel
7 from mlserver.errors import InferenceError
8 from mlserver.utils import get_model_uri, to_ndarray
9
10
11 PREDICT_OUTPUT = "predict"
12 PREDICT_PROBA_OUTPUT = "predict_proba"
13 VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]
14
15 WELLKNOWN_MODEL_FILENAMES = ["model.joblib"]
16
17
18 class SKLearnModel(MLModel):
19 """
20 Implementation of the MLModel interface to load and serve `scikit-learn`
21 models persisted with `joblib`.
22 """
23
24 async def load(self) -> bool:
25 # TODO: Log info message
26 model_uri = await get_model_uri(
27 self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES
28 )
29 self._model = joblib.load(model_uri)
30
31 self.ready = True
32 return self.ready
33
34 async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:
35 payload = self._check_request(payload)
36
37 return types.InferenceResponse(
38 model_name=self.name,
39 model_version=self.version,
40 outputs=self._predict_outputs(payload),
41 )
42
43 def _check_request(self, payload: types.InferenceRequest) -> types.InferenceRequest:
44 if len(payload.inputs) != 1:
45 raise InferenceError(
46 "SKLearnModel only supports a single input tensor "
47 f"({len(payload.inputs)} were received)"
48 )
49
50 if not payload.outputs:
51 # By default, only return the result of `predict()`
52 payload.outputs = [types.RequestOutput(name=PREDICT_OUTPUT)]
53 else:
54 for request_output in payload.outputs:
55 if request_output.name not in VALID_OUTPUTS:
56 raise InferenceError(
57 f"SKLearnModel only supports '{PREDICT_OUTPUT}' and "
58 f"'{PREDICT_PROBA_OUTPUT}' as outputs "
59 f"({request_output.name} was received)"
60 )
61
62 return payload
63
64 def _predict_outputs(
65 self, payload: types.InferenceRequest
66 ) -> List[types.ResponseOutput]:
67 model_input = payload.inputs[0]
68 input_data = to_ndarray(model_input)
69
70 outputs = []
71 for request_output in payload.outputs: # type: ignore
72 predict_fn = getattr(self._model, request_output.name)
73 y = predict_fn(input_data)
74
75 # TODO: Set datatype (cast from numpy?)
76 outputs.append(
77 types.ResponseOutput(
78 name=request_output.name,
79 shape=y.shape,
80 datatype="FP32",
81 data=y.tolist(),
82 )
83 )
84
85 return outputs
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/runtimes/sklearn/mlserver_sklearn/sklearn.py b/runtimes/sklearn/mlserver_sklearn/sklearn.py
--- a/runtimes/sklearn/mlserver_sklearn/sklearn.py
+++ b/runtimes/sklearn/mlserver_sklearn/sklearn.py
@@ -12,7 +12,7 @@
PREDICT_PROBA_OUTPUT = "predict_proba"
VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]
-WELLKNOWN_MODEL_FILENAMES = ["model.joblib"]
+WELLKNOWN_MODEL_FILENAMES = ["model.joblib", "model.pickle", "model.pkl"]
class SKLearnModel(MLModel):
|
{"golden_diff": "diff --git a/runtimes/sklearn/mlserver_sklearn/sklearn.py b/runtimes/sklearn/mlserver_sklearn/sklearn.py\n--- a/runtimes/sklearn/mlserver_sklearn/sklearn.py\n+++ b/runtimes/sklearn/mlserver_sklearn/sklearn.py\n@@ -12,7 +12,7 @@\n PREDICT_PROBA_OUTPUT = \"predict_proba\"\n VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n \n-WELLKNOWN_MODEL_FILENAMES = [\"model.joblib\"]\n+WELLKNOWN_MODEL_FILENAMES = [\"model.joblib\", \"model.pickle\", \"model.pkl\"]\n \n \n class SKLearnModel(MLModel):\n", "issue": "Support other common names for SKLearn runtime\nAdd support for models named `model.pickle` and `model.pkl`\n", "before_files": [{"content": "import joblib\n\nfrom typing import List\n\nfrom mlserver import types\nfrom mlserver.model import MLModel\nfrom mlserver.errors import InferenceError\nfrom mlserver.utils import get_model_uri, to_ndarray\n\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.joblib\"]\n\n\nclass SKLearnModel(MLModel):\n \"\"\"\n Implementation of the MLModel interface to load and serve `scikit-learn`\n models persisted with `joblib`.\n \"\"\"\n\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n self._model = joblib.load(model_uri)\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:\n payload = self._check_request(payload)\n\n return types.InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=self._predict_outputs(payload),\n )\n\n def _check_request(self, payload: types.InferenceRequest) -> types.InferenceRequest:\n if len(payload.inputs) != 1:\n raise InferenceError(\n \"SKLearnModel only supports a single input tensor \"\n f\"({len(payload.inputs)} were received)\"\n )\n\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [types.RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"SKLearnModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n return payload\n\n def _predict_outputs(\n self, payload: types.InferenceRequest\n ) -> List[types.ResponseOutput]:\n model_input = payload.inputs[0]\n input_data = to_ndarray(model_input)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(input_data)\n\n # TODO: Set datatype (cast from numpy?)\n outputs.append(\n types.ResponseOutput(\n name=request_output.name,\n shape=y.shape,\n datatype=\"FP32\",\n data=y.tolist(),\n )\n )\n\n return outputs\n", "path": "runtimes/sklearn/mlserver_sklearn/sklearn.py"}], "after_files": [{"content": "import joblib\n\nfrom typing import List\n\nfrom mlserver import types\nfrom mlserver.model import MLModel\nfrom mlserver.errors import InferenceError\nfrom mlserver.utils import get_model_uri, to_ndarray\n\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.joblib\", \"model.pickle\", \"model.pkl\"]\n\n\nclass SKLearnModel(MLModel):\n \"\"\"\n Implementation of the MLModel interface to load and 
serve `scikit-learn`\n models persisted with `joblib`.\n \"\"\"\n\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n self._model = joblib.load(model_uri)\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:\n payload = self._check_request(payload)\n\n return types.InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=self._predict_outputs(payload),\n )\n\n def _check_request(self, payload: types.InferenceRequest) -> types.InferenceRequest:\n if len(payload.inputs) != 1:\n raise InferenceError(\n \"SKLearnModel only supports a single input tensor \"\n f\"({len(payload.inputs)} were received)\"\n )\n\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [types.RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"SKLearnModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n return payload\n\n def _predict_outputs(\n self, payload: types.InferenceRequest\n ) -> List[types.ResponseOutput]:\n model_input = payload.inputs[0]\n input_data = to_ndarray(model_input)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(input_data)\n\n # TODO: Set datatype (cast from numpy?)\n outputs.append(\n types.ResponseOutput(\n name=request_output.name,\n shape=y.shape,\n datatype=\"FP32\",\n data=y.tolist(),\n )\n )\n\n return outputs\n", "path": "runtimes/sklearn/mlserver_sklearn/sklearn.py"}]}
| 1,029 | 152 |
gh_patches_debug_1960
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-2952
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update cirq.google.Bristlecone/Foxtail with accurate duration numbers
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/google/devices/known_devices.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import (Any, Collection, Dict, Optional, Iterable, List, Set, Tuple,
16 TYPE_CHECKING)
17
18 from cirq._doc import document
19 from cirq.devices import GridQubit
20 from cirq.google import gate_sets, serializable_gate_set
21 from cirq.google.api import v2
22 from cirq.google.api.v2 import device_pb2
23 from cirq.google.devices.serializable_device import SerializableDevice
24 from cirq.google.devices.xmon_device import XmonDevice
25 from cirq.ops import MeasurementGate, SingleQubitGate, WaitGate
26 from cirq.value import Duration
27
28 if TYPE_CHECKING:
29 import cirq
30
31 _2_QUBIT_TARGET_SET = "2_qubit_targets"
32 _MEAS_TARGET_SET = "meas_targets"
33
34
35 def _parse_device(s: str) -> Tuple[List[GridQubit], Dict[str, Set[GridQubit]]]:
36 """Parse ASCIIart device layout into info about qubits and connectivity.
37
38 Args:
39 s: String representing the qubit layout. Each line represents a row,
40 and each character in the row is a qubit, or a blank site if the
41 character is a hyphen '-'. Different letters for the qubit specify
42 which measurement line that qubit is connected to, e.g. all 'A'
43 qubits share a measurement line. Leading and trailing spaces on
44 each line are ignored.
45
46 Returns:
47 A list of qubits and a dict mapping measurement line name to the qubits
48 on that measurement line.
49 """
50 lines = s.strip().split('\n')
51 qubits = [] # type: List[GridQubit]
52 measurement_lines = {} # type: Dict[str, Set[GridQubit]]
53 for row, line in enumerate(lines):
54 for col, c in enumerate(line.strip()):
55 if c != '-':
56 qubit = GridQubit(row, col)
57 qubits.append(qubit)
58 measurement_line = measurement_lines.setdefault(c, set())
59 measurement_line.add(qubit)
60 return qubits, measurement_lines
61
62
63 def create_device_proto_from_diagram(
64 ascii_grid: str,
65 gate_sets: Optional[Iterable[
66 serializable_gate_set.SerializableGateSet]] = None,
67 durations_picos: Optional[Dict[str, int]] = None,
68 out: Optional[device_pb2.DeviceSpecification] = None,
69 ) -> device_pb2.DeviceSpecification:
70 """Parse ASCIIart device layout into DeviceSpecification proto.
71
72 This function assumes that all pairs of adjacent qubits are valid targets
73 for two-qubit gates.
74
75 Args:
76 ascii_grid: ASCII version of the grid (see _parse_device for details).
77 gate_sets: Gate sets that define the translation between gate ids and
78 cirq Gate objects.
79 durations_picos: A map from gate ids to gate durations in picoseconds.
80 out: If given, populate this proto, otherwise create a new proto.
81 """
82 qubits, _ = _parse_device(ascii_grid)
83
84 # Create a list of all adjacent pairs on the grid for two-qubit gates.
85 qubit_set = frozenset(qubits)
86 pairs: List[Tuple['cirq.Qid', 'cirq.Qid']] = []
87 for qubit in qubits:
88 for neighbor in sorted(qubit.neighbors()):
89 if neighbor > qubit and neighbor in qubit_set:
90 pairs.append((qubit, neighbor))
91
92 return create_device_proto_for_qubits(qubits, pairs, gate_sets,
93 durations_picos, out)
94
95
96 def create_device_proto_for_qubits(
97 qubits: Collection['cirq.Qid'],
98 pairs: Collection[Tuple['cirq.Qid', 'cirq.Qid']],
99 gate_sets: Optional[Iterable[
100 serializable_gate_set.SerializableGateSet]] = None,
101 durations_picos: Optional[Dict[str, int]] = None,
102 out: Optional[device_pb2.DeviceSpecification] = None,
103 ) -> device_pb2.DeviceSpecification:
104 """Create device spec for the given qubits and coupled pairs.
105
106 Args:
107 qubits: Qubits that can perform single-qubit gates.
108 pairs: Pairs of coupled qubits that can perform two-qubit gates.
109 gate_sets: Gate sets that define the translation between gate ids and
110 cirq Gate objects.
111 durations_picos: A map from gate ids to gate durations in picoseconds.
112 out: If given, populate this proto, otherwise create a new proto.
113 """
114 if out is None:
115 out = device_pb2.DeviceSpecification()
116
117 # Create valid qubit list
118 out.valid_qubits.extend(v2.qubit_to_proto_id(q) for q in qubits)
119
120 # Set up a target set for measurement (any qubit permutation)
121 meas_targets = out.valid_targets.add()
122 meas_targets.name = _MEAS_TARGET_SET
123 meas_targets.target_ordering = device_pb2.TargetSet.SUBSET_PERMUTATION
124
125 # Set up a target set for 2 qubit gates (specified qubit pairs)
126 grid_targets = out.valid_targets.add()
127 grid_targets.name = _2_QUBIT_TARGET_SET
128 grid_targets.target_ordering = device_pb2.TargetSet.SYMMETRIC
129 for pair in pairs:
130 new_target = grid_targets.targets.add()
131 new_target.ids.extend(v2.qubit_to_proto_id(q) for q in pair)
132
133 # Create gate sets
134 arg_def = device_pb2.ArgDefinition
135 for gate_set in gate_sets or []:
136 gs_proto = out.valid_gate_sets.add()
137 gs_proto.name = gate_set.gate_set_name
138 gate_ids: Set[str] = set()
139 for gate_type in gate_set.serializers:
140 for serializer in gate_set.serializers[gate_type]:
141 gate_id = serializer.serialized_gate_id
142 if gate_id in gate_ids:
143 # Only add each type once
144 continue
145
146 gate_ids.add(gate_id)
147 gate = gs_proto.valid_gates.add()
148 gate.id = gate_id
149
150 # Choose target set and number of qubits based on gate type.
151
152 # Note: if it is not a measurement gate and doesn't inherit
153 # from SingleQubitGate, it's assumed to be a two qubit gate.
154 if gate_type == MeasurementGate:
155 gate.valid_targets.append(_MEAS_TARGET_SET)
156 elif gate_type == WaitGate:
157 # TODO(#2537): Refactor gate-sets / device to eliminate
158 # The need for checking type here.
159 gate.number_of_qubits = 1
160 elif issubclass(gate_type, SingleQubitGate):
161 gate.number_of_qubits = 1
162 else:
163 # This must be a two-qubit gate
164 gate.valid_targets.append(_2_QUBIT_TARGET_SET)
165 gate.number_of_qubits = 2
166
167 # Add gate duration
168 if (durations_picos is not None and gate.id in durations_picos):
169 gate.gate_duration_picos = durations_picos[gate.id]
170
171 # Add argument names and types for each gate.
172 for arg in serializer.args:
173 new_arg = gate.valid_args.add()
174 if arg.serialized_type == str:
175 new_arg.type = arg_def.STRING
176 if arg.serialized_type == float:
177 new_arg.type = arg_def.FLOAT
178 if arg.serialized_type == List[bool]:
179 new_arg.type = arg_def.REPEATED_BOOLEAN
180 new_arg.name = arg.serialized_name
181 # Note: this does not yet support adding allowed_ranges
182
183 return out
184
185
186 _FOXTAIL_GRID = """
187 AAAAABBBBBB
188 CCCCCCDDDDD
189 """
190
191
192 class _NamedConstantXmonDevice(XmonDevice):
193
194 def __init__(self, constant: str, **kwargs) -> None:
195 super().__init__(**kwargs)
196 self._repr = constant
197
198 def __repr__(self) -> str:
199 return self._repr
200
201 @classmethod
202 def _from_json_dict_(cls, constant: str, **kwargs):
203 if constant == Foxtail._repr:
204 return Foxtail
205 if constant == Bristlecone._repr:
206 return Bristlecone
207 raise ValueError(f'Unrecognized xmon device name: {constant!r}')
208
209 def _json_dict_(self) -> Dict[str, Any]:
210 return {
211 'cirq_type': self.__class__.__name__,
212 'constant': self._repr,
213 }
214
215
216 Foxtail = _NamedConstantXmonDevice('cirq.google.Foxtail',
217 measurement_duration=Duration(nanos=4000),
218 exp_w_duration=Duration(nanos=20),
219 exp_11_duration=Duration(nanos=50),
220 qubits=_parse_device(_FOXTAIL_GRID)[0])
221 document(Foxtail, f"""72 xmon qubit device.
222
223 **Qubit grid**:
224 ```
225 {str(Foxtail)}
226 ```
227 """)
228
229 # Duration dict in picoseconds
230 _DURATIONS_FOR_XMON = {
231 'cz': 50_000,
232 'xy': 20_000,
233 'z': 0,
234 'meas': 4_000_000, # 1000ns for readout, 3000ns for "ring down"
235 }
236
237 FOXTAIL_PROTO = create_device_proto_from_diagram(_FOXTAIL_GRID,
238 [gate_sets.XMON],
239 _DURATIONS_FOR_XMON)
240
241 _BRISTLECONE_GRID = """
242 -----AB-----
243 ----ABCD----
244 ---ABCDEF---
245 --ABCDEFGH--
246 -ABCDEFGHIJ-
247 ABCDEFGHIJKL
248 -CDEFGHIJKL-
249 --EFGHIJKL--
250 ---GHIJKL---
251 ----IJKL----
252 -----KL-----
253 """
254
255 Bristlecone = _NamedConstantXmonDevice(
256 'cirq.google.Bristlecone',
257 measurement_duration=Duration(nanos=4000),
258 exp_w_duration=Duration(nanos=20),
259 exp_11_duration=Duration(nanos=50),
260 qubits=_parse_device(_BRISTLECONE_GRID)[0])
261 document(
262 Bristlecone, f"""72 xmon qubit device.
263
264 **Qubit grid**:
265 ```
266 {str(Bristlecone)}
267 ```
268 """)
269
270 BRISTLECONE_PROTO = create_device_proto_from_diagram(_BRISTLECONE_GRID,
271 [gate_sets.XMON],
272 _DURATIONS_FOR_XMON)
273
274 _SYCAMORE_GRID = """
275 -----AB---
276 ----ABCD--
277 ---ABCDEF-
278 --ABCDEFGH
279 -ABCDEFGHI
280 ABCDEFGHI-
281 -CDEFGHI--
282 --EFGHI---
283 ---GHI----
284 ----I-----
285 """
286
287 _SYCAMORE_DURATIONS_PICOS = {
288 'xy': 25_000,
289 'xy_half_pi': 25_000,
290 'xy_pi': 25_000,
291 'xyz': 25_000,
292 'fsim_pi_4': 32_000,
293 'inv_fsim_pi_4': 32_000,
294 'syc': 12_000,
295 'z': 0,
296 'meas': 4_000_000, # 1000 ns for readout, 3000ns for ring_down
297 }
298
299 SYCAMORE_PROTO = create_device_proto_from_diagram(
300 _SYCAMORE_GRID,
301 [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],
302 _SYCAMORE_DURATIONS_PICOS,
303 )
304
305 Sycamore = SerializableDevice.from_proto(
306 proto=SYCAMORE_PROTO,
307 gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])
308
309 # Subset of the Sycamore grid with a reduced layout.
310 _SYCAMORE23_GRID = """
311 ----------
312 ----------
313 ----------
314 --A-------
315 -ABC------
316 ABCDE-----
317 -CDEFG----
318 --EFGHI---
319 ---GHI----
320 ----I-----
321 """
322
323 SYCAMORE23_PROTO = create_device_proto_from_diagram(
324 _SYCAMORE23_GRID,
325 [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],
326 _SYCAMORE_DURATIONS_PICOS,
327 )
328
329 Sycamore23 = SerializableDevice.from_proto(
330 proto=SYCAMORE23_PROTO,
331 gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])
332
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cirq/google/devices/known_devices.py b/cirq/google/devices/known_devices.py
--- a/cirq/google/devices/known_devices.py
+++ b/cirq/google/devices/known_devices.py
@@ -228,8 +228,8 @@
# Duration dict in picoseconds
_DURATIONS_FOR_XMON = {
- 'cz': 50_000,
- 'xy': 20_000,
+ 'cz': 45_000,
+ 'xy': 15_000,
'z': 0,
'meas': 4_000_000, # 1000ns for readout, 3000ns for "ring down"
}
|
{"golden_diff": "diff --git a/cirq/google/devices/known_devices.py b/cirq/google/devices/known_devices.py\n--- a/cirq/google/devices/known_devices.py\n+++ b/cirq/google/devices/known_devices.py\n@@ -228,8 +228,8 @@\n \n # Duration dict in picoseconds\n _DURATIONS_FOR_XMON = {\n- 'cz': 50_000,\n- 'xy': 20_000,\n+ 'cz': 45_000,\n+ 'xy': 15_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000ns for readout, 3000ns for \"ring down\"\n }\n", "issue": "Update cirq.google.Bristlecone/Foxtail with accurate duration numbers\n\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import (Any, Collection, Dict, Optional, Iterable, List, Set, Tuple,\n TYPE_CHECKING)\n\nfrom cirq._doc import document\nfrom cirq.devices import GridQubit\nfrom cirq.google import gate_sets, serializable_gate_set\nfrom cirq.google.api import v2\nfrom cirq.google.api.v2 import device_pb2\nfrom cirq.google.devices.serializable_device import SerializableDevice\nfrom cirq.google.devices.xmon_device import XmonDevice\nfrom cirq.ops import MeasurementGate, SingleQubitGate, WaitGate\nfrom cirq.value import Duration\n\nif TYPE_CHECKING:\n import cirq\n\n_2_QUBIT_TARGET_SET = \"2_qubit_targets\"\n_MEAS_TARGET_SET = \"meas_targets\"\n\n\ndef _parse_device(s: str) -> Tuple[List[GridQubit], Dict[str, Set[GridQubit]]]:\n \"\"\"Parse ASCIIart device layout into info about qubits and connectivity.\n\n Args:\n s: String representing the qubit layout. Each line represents a row,\n and each character in the row is a qubit, or a blank site if the\n character is a hyphen '-'. Different letters for the qubit specify\n which measurement line that qubit is connected to, e.g. all 'A'\n qubits share a measurement line. 
Leading and trailing spaces on\n each line are ignored.\n\n Returns:\n A list of qubits and a dict mapping measurement line name to the qubits\n on that measurement line.\n \"\"\"\n lines = s.strip().split('\\n')\n qubits = [] # type: List[GridQubit]\n measurement_lines = {} # type: Dict[str, Set[GridQubit]]\n for row, line in enumerate(lines):\n for col, c in enumerate(line.strip()):\n if c != '-':\n qubit = GridQubit(row, col)\n qubits.append(qubit)\n measurement_line = measurement_lines.setdefault(c, set())\n measurement_line.add(qubit)\n return qubits, measurement_lines\n\n\ndef create_device_proto_from_diagram(\n ascii_grid: str,\n gate_sets: Optional[Iterable[\n serializable_gate_set.SerializableGateSet]] = None,\n durations_picos: Optional[Dict[str, int]] = None,\n out: Optional[device_pb2.DeviceSpecification] = None,\n) -> device_pb2.DeviceSpecification:\n \"\"\"Parse ASCIIart device layout into DeviceSpecification proto.\n\n This function assumes that all pairs of adjacent qubits are valid targets\n for two-qubit gates.\n\n Args:\n ascii_grid: ASCII version of the grid (see _parse_device for details).\n gate_sets: Gate sets that define the translation between gate ids and\n cirq Gate objects.\n durations_picos: A map from gate ids to gate durations in picoseconds.\n out: If given, populate this proto, otherwise create a new proto.\n \"\"\"\n qubits, _ = _parse_device(ascii_grid)\n\n # Create a list of all adjacent pairs on the grid for two-qubit gates.\n qubit_set = frozenset(qubits)\n pairs: List[Tuple['cirq.Qid', 'cirq.Qid']] = []\n for qubit in qubits:\n for neighbor in sorted(qubit.neighbors()):\n if neighbor > qubit and neighbor in qubit_set:\n pairs.append((qubit, neighbor))\n\n return create_device_proto_for_qubits(qubits, pairs, gate_sets,\n durations_picos, out)\n\n\ndef create_device_proto_for_qubits(\n qubits: Collection['cirq.Qid'],\n pairs: Collection[Tuple['cirq.Qid', 'cirq.Qid']],\n gate_sets: Optional[Iterable[\n serializable_gate_set.SerializableGateSet]] = None,\n durations_picos: Optional[Dict[str, int]] = None,\n out: Optional[device_pb2.DeviceSpecification] = None,\n) -> device_pb2.DeviceSpecification:\n \"\"\"Create device spec for the given qubits and coupled pairs.\n\n Args:\n qubits: Qubits that can perform single-qubit gates.\n pairs: Pairs of coupled qubits that can perform two-qubit gates.\n gate_sets: Gate sets that define the translation between gate ids and\n cirq Gate objects.\n durations_picos: A map from gate ids to gate durations in picoseconds.\n out: If given, populate this proto, otherwise create a new proto.\n \"\"\"\n if out is None:\n out = device_pb2.DeviceSpecification()\n\n # Create valid qubit list\n out.valid_qubits.extend(v2.qubit_to_proto_id(q) for q in qubits)\n\n # Set up a target set for measurement (any qubit permutation)\n meas_targets = out.valid_targets.add()\n meas_targets.name = _MEAS_TARGET_SET\n meas_targets.target_ordering = device_pb2.TargetSet.SUBSET_PERMUTATION\n\n # Set up a target set for 2 qubit gates (specified qubit pairs)\n grid_targets = out.valid_targets.add()\n grid_targets.name = _2_QUBIT_TARGET_SET\n grid_targets.target_ordering = device_pb2.TargetSet.SYMMETRIC\n for pair in pairs:\n new_target = grid_targets.targets.add()\n new_target.ids.extend(v2.qubit_to_proto_id(q) for q in pair)\n\n # Create gate sets\n arg_def = device_pb2.ArgDefinition\n for gate_set in gate_sets or []:\n gs_proto = out.valid_gate_sets.add()\n gs_proto.name = gate_set.gate_set_name\n gate_ids: Set[str] = set()\n for gate_type in 
gate_set.serializers:\n for serializer in gate_set.serializers[gate_type]:\n gate_id = serializer.serialized_gate_id\n if gate_id in gate_ids:\n # Only add each type once\n continue\n\n gate_ids.add(gate_id)\n gate = gs_proto.valid_gates.add()\n gate.id = gate_id\n\n # Choose target set and number of qubits based on gate type.\n\n # Note: if it is not a measurement gate and doesn't inherit\n # from SingleQubitGate, it's assumed to be a two qubit gate.\n if gate_type == MeasurementGate:\n gate.valid_targets.append(_MEAS_TARGET_SET)\n elif gate_type == WaitGate:\n # TODO(#2537): Refactor gate-sets / device to eliminate\n # The need for checking type here.\n gate.number_of_qubits = 1\n elif issubclass(gate_type, SingleQubitGate):\n gate.number_of_qubits = 1\n else:\n # This must be a two-qubit gate\n gate.valid_targets.append(_2_QUBIT_TARGET_SET)\n gate.number_of_qubits = 2\n\n # Add gate duration\n if (durations_picos is not None and gate.id in durations_picos):\n gate.gate_duration_picos = durations_picos[gate.id]\n\n # Add argument names and types for each gate.\n for arg in serializer.args:\n new_arg = gate.valid_args.add()\n if arg.serialized_type == str:\n new_arg.type = arg_def.STRING\n if arg.serialized_type == float:\n new_arg.type = arg_def.FLOAT\n if arg.serialized_type == List[bool]:\n new_arg.type = arg_def.REPEATED_BOOLEAN\n new_arg.name = arg.serialized_name\n # Note: this does not yet support adding allowed_ranges\n\n return out\n\n\n_FOXTAIL_GRID = \"\"\"\nAAAAABBBBBB\nCCCCCCDDDDD\n\"\"\"\n\n\nclass _NamedConstantXmonDevice(XmonDevice):\n\n def __init__(self, constant: str, **kwargs) -> None:\n super().__init__(**kwargs)\n self._repr = constant\n\n def __repr__(self) -> str:\n return self._repr\n\n @classmethod\n def _from_json_dict_(cls, constant: str, **kwargs):\n if constant == Foxtail._repr:\n return Foxtail\n if constant == Bristlecone._repr:\n return Bristlecone\n raise ValueError(f'Unrecognized xmon device name: {constant!r}')\n\n def _json_dict_(self) -> Dict[str, Any]:\n return {\n 'cirq_type': self.__class__.__name__,\n 'constant': self._repr,\n }\n\n\nFoxtail = _NamedConstantXmonDevice('cirq.google.Foxtail',\n measurement_duration=Duration(nanos=4000),\n exp_w_duration=Duration(nanos=20),\n exp_11_duration=Duration(nanos=50),\n qubits=_parse_device(_FOXTAIL_GRID)[0])\ndocument(Foxtail, f\"\"\"72 xmon qubit device.\n\n**Qubit grid**:\n```\n{str(Foxtail)}\n```\n\"\"\")\n\n# Duration dict in picoseconds\n_DURATIONS_FOR_XMON = {\n 'cz': 50_000,\n 'xy': 20_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000ns for readout, 3000ns for \"ring down\"\n}\n\nFOXTAIL_PROTO = create_device_proto_from_diagram(_FOXTAIL_GRID,\n [gate_sets.XMON],\n _DURATIONS_FOR_XMON)\n\n_BRISTLECONE_GRID = \"\"\"\n-----AB-----\n----ABCD----\n---ABCDEF---\n--ABCDEFGH--\n-ABCDEFGHIJ-\nABCDEFGHIJKL\n-CDEFGHIJKL-\n--EFGHIJKL--\n---GHIJKL---\n----IJKL----\n-----KL-----\n\"\"\"\n\nBristlecone = _NamedConstantXmonDevice(\n 'cirq.google.Bristlecone',\n measurement_duration=Duration(nanos=4000),\n exp_w_duration=Duration(nanos=20),\n exp_11_duration=Duration(nanos=50),\n qubits=_parse_device(_BRISTLECONE_GRID)[0])\ndocument(\n Bristlecone, f\"\"\"72 xmon qubit device.\n\n**Qubit grid**:\n```\n{str(Bristlecone)}\n```\n\"\"\")\n\nBRISTLECONE_PROTO = create_device_proto_from_diagram(_BRISTLECONE_GRID,\n [gate_sets.XMON],\n _DURATIONS_FOR_XMON)\n\n_SYCAMORE_GRID = 
\"\"\"\n-----AB---\n----ABCD--\n---ABCDEF-\n--ABCDEFGH\n-ABCDEFGHI\nABCDEFGHI-\n-CDEFGHI--\n--EFGHI---\n---GHI----\n----I-----\n\"\"\"\n\n_SYCAMORE_DURATIONS_PICOS = {\n 'xy': 25_000,\n 'xy_half_pi': 25_000,\n 'xy_pi': 25_000,\n 'xyz': 25_000,\n 'fsim_pi_4': 32_000,\n 'inv_fsim_pi_4': 32_000,\n 'syc': 12_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000 ns for readout, 3000ns for ring_down\n}\n\nSYCAMORE_PROTO = create_device_proto_from_diagram(\n _SYCAMORE_GRID,\n [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],\n _SYCAMORE_DURATIONS_PICOS,\n)\n\nSycamore = SerializableDevice.from_proto(\n proto=SYCAMORE_PROTO,\n gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])\n\n# Subset of the Sycamore grid with a reduced layout.\n_SYCAMORE23_GRID = \"\"\"\n----------\n----------\n----------\n--A-------\n-ABC------\nABCDE-----\n-CDEFG----\n--EFGHI---\n---GHI----\n----I-----\n\"\"\"\n\nSYCAMORE23_PROTO = create_device_proto_from_diagram(\n _SYCAMORE23_GRID,\n [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],\n _SYCAMORE_DURATIONS_PICOS,\n)\n\nSycamore23 = SerializableDevice.from_proto(\n proto=SYCAMORE23_PROTO,\n gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])\n", "path": "cirq/google/devices/known_devices.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import (Any, Collection, Dict, Optional, Iterable, List, Set, Tuple,\n TYPE_CHECKING)\n\nfrom cirq._doc import document\nfrom cirq.devices import GridQubit\nfrom cirq.google import gate_sets, serializable_gate_set\nfrom cirq.google.api import v2\nfrom cirq.google.api.v2 import device_pb2\nfrom cirq.google.devices.serializable_device import SerializableDevice\nfrom cirq.google.devices.xmon_device import XmonDevice\nfrom cirq.ops import MeasurementGate, SingleQubitGate, WaitGate\nfrom cirq.value import Duration\n\nif TYPE_CHECKING:\n import cirq\n\n_2_QUBIT_TARGET_SET = \"2_qubit_targets\"\n_MEAS_TARGET_SET = \"meas_targets\"\n\n\ndef _parse_device(s: str) -> Tuple[List[GridQubit], Dict[str, Set[GridQubit]]]:\n \"\"\"Parse ASCIIart device layout into info about qubits and connectivity.\n\n Args:\n s: String representing the qubit layout. Each line represents a row,\n and each character in the row is a qubit, or a blank site if the\n character is a hyphen '-'. Different letters for the qubit specify\n which measurement line that qubit is connected to, e.g. all 'A'\n qubits share a measurement line. 
Leading and trailing spaces on\n each line are ignored.\n\n Returns:\n A list of qubits and a dict mapping measurement line name to the qubits\n on that measurement line.\n \"\"\"\n lines = s.strip().split('\\n')\n qubits = [] # type: List[GridQubit]\n measurement_lines = {} # type: Dict[str, Set[GridQubit]]\n for row, line in enumerate(lines):\n for col, c in enumerate(line.strip()):\n if c != '-':\n qubit = GridQubit(row, col)\n qubits.append(qubit)\n measurement_line = measurement_lines.setdefault(c, set())\n measurement_line.add(qubit)\n return qubits, measurement_lines\n\n\ndef create_device_proto_from_diagram(\n ascii_grid: str,\n gate_sets: Optional[Iterable[\n serializable_gate_set.SerializableGateSet]] = None,\n durations_picos: Optional[Dict[str, int]] = None,\n out: Optional[device_pb2.DeviceSpecification] = None,\n) -> device_pb2.DeviceSpecification:\n \"\"\"Parse ASCIIart device layout into DeviceSpecification proto.\n\n This function assumes that all pairs of adjacent qubits are valid targets\n for two-qubit gates.\n\n Args:\n ascii_grid: ASCII version of the grid (see _parse_device for details).\n gate_sets: Gate sets that define the translation between gate ids and\n cirq Gate objects.\n durations_picos: A map from gate ids to gate durations in picoseconds.\n out: If given, populate this proto, otherwise create a new proto.\n \"\"\"\n qubits, _ = _parse_device(ascii_grid)\n\n # Create a list of all adjacent pairs on the grid for two-qubit gates.\n qubit_set = frozenset(qubits)\n pairs: List[Tuple['cirq.Qid', 'cirq.Qid']] = []\n for qubit in qubits:\n for neighbor in sorted(qubit.neighbors()):\n if neighbor > qubit and neighbor in qubit_set:\n pairs.append((qubit, neighbor))\n\n return create_device_proto_for_qubits(qubits, pairs, gate_sets,\n durations_picos, out)\n\n\ndef create_device_proto_for_qubits(\n qubits: Collection['cirq.Qid'],\n pairs: Collection[Tuple['cirq.Qid', 'cirq.Qid']],\n gate_sets: Optional[Iterable[\n serializable_gate_set.SerializableGateSet]] = None,\n durations_picos: Optional[Dict[str, int]] = None,\n out: Optional[device_pb2.DeviceSpecification] = None,\n) -> device_pb2.DeviceSpecification:\n \"\"\"Create device spec for the given qubits and coupled pairs.\n\n Args:\n qubits: Qubits that can perform single-qubit gates.\n pairs: Pairs of coupled qubits that can perform two-qubit gates.\n gate_sets: Gate sets that define the translation between gate ids and\n cirq Gate objects.\n durations_picos: A map from gate ids to gate durations in picoseconds.\n out: If given, populate this proto, otherwise create a new proto.\n \"\"\"\n if out is None:\n out = device_pb2.DeviceSpecification()\n\n # Create valid qubit list\n out.valid_qubits.extend(v2.qubit_to_proto_id(q) for q in qubits)\n\n # Set up a target set for measurement (any qubit permutation)\n meas_targets = out.valid_targets.add()\n meas_targets.name = _MEAS_TARGET_SET\n meas_targets.target_ordering = device_pb2.TargetSet.SUBSET_PERMUTATION\n\n # Set up a target set for 2 qubit gates (specified qubit pairs)\n grid_targets = out.valid_targets.add()\n grid_targets.name = _2_QUBIT_TARGET_SET\n grid_targets.target_ordering = device_pb2.TargetSet.SYMMETRIC\n for pair in pairs:\n new_target = grid_targets.targets.add()\n new_target.ids.extend(v2.qubit_to_proto_id(q) for q in pair)\n\n # Create gate sets\n arg_def = device_pb2.ArgDefinition\n for gate_set in gate_sets or []:\n gs_proto = out.valid_gate_sets.add()\n gs_proto.name = gate_set.gate_set_name\n gate_ids: Set[str] = set()\n for gate_type in 
gate_set.serializers:\n for serializer in gate_set.serializers[gate_type]:\n gate_id = serializer.serialized_gate_id\n if gate_id in gate_ids:\n # Only add each type once\n continue\n\n gate_ids.add(gate_id)\n gate = gs_proto.valid_gates.add()\n gate.id = gate_id\n\n # Choose target set and number of qubits based on gate type.\n\n # Note: if it is not a measurement gate and doesn't inherit\n # from SingleQubitGate, it's assumed to be a two qubit gate.\n if gate_type == MeasurementGate:\n gate.valid_targets.append(_MEAS_TARGET_SET)\n elif gate_type == WaitGate:\n # TODO(#2537): Refactor gate-sets / device to eliminate\n # The need for checking type here.\n gate.number_of_qubits = 1\n elif issubclass(gate_type, SingleQubitGate):\n gate.number_of_qubits = 1\n else:\n # This must be a two-qubit gate\n gate.valid_targets.append(_2_QUBIT_TARGET_SET)\n gate.number_of_qubits = 2\n\n # Add gate duration\n if (durations_picos is not None and gate.id in durations_picos):\n gate.gate_duration_picos = durations_picos[gate.id]\n\n # Add argument names and types for each gate.\n for arg in serializer.args:\n new_arg = gate.valid_args.add()\n if arg.serialized_type == str:\n new_arg.type = arg_def.STRING\n if arg.serialized_type == float:\n new_arg.type = arg_def.FLOAT\n if arg.serialized_type == List[bool]:\n new_arg.type = arg_def.REPEATED_BOOLEAN\n new_arg.name = arg.serialized_name\n # Note: this does not yet support adding allowed_ranges\n\n return out\n\n\n_FOXTAIL_GRID = \"\"\"\nAAAAABBBBBB\nCCCCCCDDDDD\n\"\"\"\n\n\nclass _NamedConstantXmonDevice(XmonDevice):\n\n def __init__(self, constant: str, **kwargs) -> None:\n super().__init__(**kwargs)\n self._repr = constant\n\n def __repr__(self) -> str:\n return self._repr\n\n @classmethod\n def _from_json_dict_(cls, constant: str, **kwargs):\n if constant == Foxtail._repr:\n return Foxtail\n if constant == Bristlecone._repr:\n return Bristlecone\n raise ValueError(f'Unrecognized xmon device name: {constant!r}')\n\n def _json_dict_(self) -> Dict[str, Any]:\n return {\n 'cirq_type': self.__class__.__name__,\n 'constant': self._repr,\n }\n\n\nFoxtail = _NamedConstantXmonDevice('cirq.google.Foxtail',\n measurement_duration=Duration(nanos=4000),\n exp_w_duration=Duration(nanos=20),\n exp_11_duration=Duration(nanos=50),\n qubits=_parse_device(_FOXTAIL_GRID)[0])\ndocument(Foxtail, f\"\"\"72 xmon qubit device.\n\n**Qubit grid**:\n```\n{str(Foxtail)}\n```\n\"\"\")\n\n# Duration dict in picoseconds\n_DURATIONS_FOR_XMON = {\n 'cz': 45_000,\n 'xy': 15_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000ns for readout, 3000ns for \"ring down\"\n}\n\nFOXTAIL_PROTO = create_device_proto_from_diagram(_FOXTAIL_GRID,\n [gate_sets.XMON],\n _DURATIONS_FOR_XMON)\n\n_BRISTLECONE_GRID = \"\"\"\n-----AB-----\n----ABCD----\n---ABCDEF---\n--ABCDEFGH--\n-ABCDEFGHIJ-\nABCDEFGHIJKL\n-CDEFGHIJKL-\n--EFGHIJKL--\n---GHIJKL---\n----IJKL----\n-----KL-----\n\"\"\"\n\nBristlecone = _NamedConstantXmonDevice(\n 'cirq.google.Bristlecone',\n measurement_duration=Duration(nanos=4000),\n exp_w_duration=Duration(nanos=20),\n exp_11_duration=Duration(nanos=50),\n qubits=_parse_device(_BRISTLECONE_GRID)[0])\ndocument(\n Bristlecone, f\"\"\"72 xmon qubit device.\n\n**Qubit grid**:\n```\n{str(Bristlecone)}\n```\n\"\"\")\n\nBRISTLECONE_PROTO = create_device_proto_from_diagram(_BRISTLECONE_GRID,\n [gate_sets.XMON],\n _DURATIONS_FOR_XMON)\n\n_SYCAMORE_GRID = 
\"\"\"\n-----AB---\n----ABCD--\n---ABCDEF-\n--ABCDEFGH\n-ABCDEFGHI\nABCDEFGHI-\n-CDEFGHI--\n--EFGHI---\n---GHI----\n----I-----\n\"\"\"\n\n_SYCAMORE_DURATIONS_PICOS = {\n 'xy': 25_000,\n 'xy_half_pi': 25_000,\n 'xy_pi': 25_000,\n 'xyz': 25_000,\n 'fsim_pi_4': 32_000,\n 'inv_fsim_pi_4': 32_000,\n 'syc': 12_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000 ns for readout, 3000ns for ring_down\n}\n\nSYCAMORE_PROTO = create_device_proto_from_diagram(\n _SYCAMORE_GRID,\n [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],\n _SYCAMORE_DURATIONS_PICOS,\n)\n\nSycamore = SerializableDevice.from_proto(\n proto=SYCAMORE_PROTO,\n gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])\n\n# Subset of the Sycamore grid with a reduced layout.\n_SYCAMORE23_GRID = \"\"\"\n----------\n----------\n----------\n--A-------\n-ABC------\nABCDE-----\n-CDEFG----\n--EFGHI---\n---GHI----\n----I-----\n\"\"\"\n\nSYCAMORE23_PROTO = create_device_proto_from_diagram(\n _SYCAMORE23_GRID,\n [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],\n _SYCAMORE_DURATIONS_PICOS,\n)\n\nSycamore23 = SerializableDevice.from_proto(\n proto=SYCAMORE23_PROTO,\n gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])\n", "path": "cirq/google/devices/known_devices.py"}]}
| 4,010 | 174 |
gh_patches_debug_37330
|
rasdani/github-patches
|
git_diff
|
keras-team__autokeras-787
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add evaluate function to AutoModel
Would it be relevant to have an `evaluate` method as well? e.g. a standard practice I would expect is to have 2 validation sets, one used for search during training, and one used at the end as the actual validation set (to prevent validation overfitting).
_Originally posted by @fchollet in https://github.com/keras-team/autokeras/pull/691_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `autokeras/auto_model.py`
Content:
```
1 import numpy as np
2 import pandas as pd
3 import tensorflow as tf
4 from tensorflow.python.util import nest
5
6 import kerastuner
7 from autokeras import meta_model
8 from autokeras import tuner
9 from autokeras import utils
10 from autokeras.hypermodel import graph
11 from autokeras.hypermodel import head
12 from autokeras.hypermodel import node
13
14
15 class AutoModel(object):
16 """ A Model defined by inputs and outputs.
17
18 AutoModel combines a HyperModel and a Tuner to tune the HyperModel.
19 The user can use it in a similar way to a Keras model since it
20 also has `fit()` and `predict()` methods.
21
22 The user can specify the inputs and outputs of the AutoModel. It will infer
23 the rest of the high-level neural architecture.
24
25 # Arguments
26 inputs: A list of or a HyperNode instance.
27 The input node(s) of the AutoModel.
28 outputs: A list of or a HyperHead instance.
29 The output head(s) of the AutoModel.
30 name: String. The name of the AutoModel. Defaults to 'auto_model'.
31 max_trials: Int. The maximum number of different Keras Models to try.
32 The search may finish before reaching the max_trials. Defaults to 100.
33 directory: String. The path to a directory for storing the search outputs.
34 Defaults to None, which would create a folder with the name of the
35 AutoModel in the current directory.
36 seed: Int. Random seed.
37 """
38
39 def __init__(self,
40 inputs,
41 outputs,
42 name='auto_model',
43 max_trials=100,
44 directory=None,
45 seed=None):
46 self.inputs = nest.flatten(inputs)
47 self.outputs = nest.flatten(outputs)
48 self.name = name
49 self.tuner = None
50 self.max_trials = max_trials
51 self.directory = directory
52 self.seed = seed
53 self.hypermodel = None
54 if all([isinstance(output_node, head.Head)
55 for output_node in self.outputs]):
56 self.heads = self.outputs
57 else:
58 self.heads = [output_node.in_blocks[0] for output_node in self.outputs]
59
60 def _meta_build(self, dataset):
61 self.hypermodel = meta_model.assemble(inputs=self.inputs,
62 outputs=self.outputs,
63 dataset=dataset,
64 seed=self.seed)
65 self.outputs = self.hypermodel.outputs
66
67 def fit(self,
68 x=None,
69 y=None,
70 validation_split=0,
71 validation_data=None,
72 **kwargs):
73 """Search for the best model and hyperparameters for the AutoModel.
74
75 It will search for the best model based on the performances on
76 validation data.
77
78 # Arguments
79 x: numpy.ndarray or tensorflow.Dataset. Training data x.
80 y: numpy.ndarray or tensorflow.Dataset. Training data y.
81 validation_split: Float between 0 and 1.
82 Fraction of the training data to be used as validation data.
83 The model will set apart this fraction of the training data,
84 will not train on it, and will evaluate
85 the loss and any model metrics
86 on this data at the end of each epoch.
87 The validation data is selected from the last samples
88 in the `x` and `y` data provided, before shuffling. This argument is
89 not supported when `x` is a dataset.
90 validation_data: Data on which to evaluate
91 the loss and any model metrics at the end of each epoch.
92 The model will not be trained on this data.
93 `validation_data` will override `validation_split`.
94 `validation_data` could be:
95 - tuple `(x_val, y_val)` of Numpy arrays or tensors
96 - tuple `(x_val, y_val, val_sample_weights)` of Numpy arrays
97 - dataset or a dataset iterator
98 For the first two cases, `batch_size` must be provided.
99 For the last case, `validation_steps` must be provided.
100 **kwargs: Any arguments supported by keras.Model.fit.
101 """
102 dataset, validation_data = self._prepare_data(
103 x=x,
104 y=y,
105 validation_data=validation_data,
106 validation_split=validation_split)
107
108 # Initialize the hypermodel.
109 self._meta_build(dataset)
110 self.hypermodel.set_io_shapes(dataset)
111
112 # Build the hypermodel in tuner init.
113 hp = kerastuner.HyperParameters()
114 self.hypermodel.hyper_build(hp)
115 self.hypermodel.preprocess(
116 hp=kerastuner.HyperParameters(),
117 dataset=dataset,
118 validation_data=validation_data,
119 fit=True)
120 self.tuner = tuner.RandomSearch(
121 hypermodel=self.hypermodel,
122 objective='val_loss',
123 max_trials=self.max_trials,
124 directory=self.directory,
125 seed=self.seed,
126 project_name=self.name)
127 self.hypermodel.clear_preprocessors()
128
129 # TODO: allow early stop if epochs is not specified.
130 self.tuner.search(x=dataset,
131 validation_data=validation_data,
132 **kwargs)
133
134 def _process_xy(self, x, y=None, fit=False, predict=False):
135 """Convert x, y to tf.data.Dataset.
136
137 # Arguments
138 x: Any type allowed by the corresponding input node.
139 y: Any type allowed by the corresponding head.
140 fit: Boolean. Whether to fit the type converter with the provided data.
141 predict: Boolean. If it is called by the predict function of AutoModel.
142
143 # Returns
144 A tf.data.Dataset containing both x and y.
145 """
146 if isinstance(x, tf.data.Dataset):
147 if y is None and not predict:
148 return x
149 if isinstance(y, tf.data.Dataset):
150 return tf.data.Dataset.zip((x, y))
151
152 x = nest.flatten(x)
153 new_x = []
154 for data, input_node in zip(x, self.inputs):
155 if fit:
156 input_node.fit(data)
157 data = input_node.transform(data)
158 new_x.append(data)
159 x = tf.data.Dataset.zip(tuple(new_x))
160
161 if predict:
162 return tf.data.Dataset.zip((x, x))
163
164 if not isinstance(y, tf.data.Dataset):
165 y = nest.flatten(y)
166 new_y = []
167 for data, head_block in zip(y, self.heads):
168 if fit:
169 head_block.fit(data)
170 data = head_block.transform(data)
171 new_y.append(data)
172 y = tf.data.Dataset.zip(tuple(new_y))
173
174 return tf.data.Dataset.zip((x, y))
175
176 def _prepare_data(self, x, y, validation_data, validation_split):
177 """Convert the data to tf.data.Dataset."""
178 # Check validation information.
179 if not validation_data and not validation_split:
180 raise ValueError('Either validation_data or validation_split'
181 'should be provided.')
182 # TODO: Handle other types of input, zip dataset, tensor, dict.
183 # Prepare the dataset.
184 dataset = self._process_xy(x, y, fit=True)
185 if validation_data:
186 val_x, val_y = validation_data
187 validation_data = self._process_xy(val_x, val_y)
188 # Split the data with validation_split.
189 if validation_data is None and validation_split:
190 dataset, validation_data = utils.split_dataset(dataset, validation_split)
191 return dataset, validation_data
192
193 def predict(self, x, batch_size=32, **kwargs):
194 """Predict the output for a given testing data.
195
196 # Arguments
197 x: tf.data.Dataset or numpy.ndarray. Testing data.
198 batch_size: Int. Defaults to 32.
199 **kwargs: Any arguments supported by keras.Model.predict.
200
201 # Returns
202 A list of numpy.ndarray objects or a single numpy.ndarray.
203 The predicted results.
204 """
205 best_model = self.tuner.get_best_models(1)[0]
206 best_trial = self.tuner.get_best_trials(1)[0]
207 best_hp = best_trial.hyperparameters
208
209 self.tuner.load_trial(best_trial)
210 x = self._process_xy(x, predict=True)
211 x = self.hypermodel.preprocess(best_hp, x)
212 x = x.batch(batch_size)
213 y = best_model.predict(x, **kwargs)
214 y = self._postprocess(y)
215 if isinstance(y, list) and len(y) == 1:
216 y = y[0]
217 return y
218
219 def _postprocess(self, y):
220 y = nest.flatten(y)
221 new_y = []
222 for temp_y, head_block in zip(y, self.heads):
223 if isinstance(head_block, head.Head):
224 temp_y = head_block.postprocess(temp_y)
225 new_y.append(temp_y)
226 return new_y
227
228
229 class GraphAutoModel(AutoModel):
230 """A HyperModel defined by a graph of HyperBlocks.
231
232 GraphAutoModel is a subclass of HyperModel. Besides the HyperModel properties,
233 it also has a tuner to tune the HyperModel. The user can use it in a similar
234 way to a Keras model since it also has `fit()` and `predict()` methods.
235
236 The user can specify the high-level neural architecture by connecting the
237 HyperBlocks with the functional API, which is the same as
238 the Keras functional API.
239
240 # Arguments
241 inputs: A list of or a HyperNode instances.
242 The input node(s) of the GraphAutoModel.
243 outputs: A list of or a HyperNode instances.
244 The output node(s) of the GraphAutoModel.
245 name: String. The name of the AutoModel. Defaults to 'graph_auto_model'.
246 max_trials: Int. The maximum number of different Keras Models to try.
247 The search may finish before reaching the max_trials. Defaults to 100.
248 directory: String. The path to a directory for storing the search outputs.
249 Defaults to None, which would create a folder with the name of the
250 AutoModel in the current directory.
251 seed: Int. Random seed.
252 """
253
254 def __init__(self,
255 inputs,
256 outputs,
257 name='graph_auto_model',
258 max_trials=100,
259 directory=None,
260 seed=None):
261 super().__init__(
262 inputs=inputs,
263 outputs=outputs,
264 name=name,
265 max_trials=max_trials,
266 directory=directory,
267 seed=seed
268 )
269 self.hypermodel = graph.GraphHyperModel(self.inputs, self.outputs)
270
271 def _meta_build(self, dataset):
272 pass
273
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/autokeras/auto_model.py b/autokeras/auto_model.py
--- a/autokeras/auto_model.py
+++ b/autokeras/auto_model.py
@@ -194,7 +194,7 @@
"""Predict the output for a given testing data.
# Arguments
- x: tf.data.Dataset or numpy.ndarray. Testing data.
+ x: Any allowed types according to the input node. Testing data.
batch_size: Int. Defaults to 32.
**kwargs: Any arguments supported by keras.Model.predict.
@@ -202,14 +202,11 @@
A list of numpy.ndarray objects or a single numpy.ndarray.
The predicted results.
"""
- best_model = self.tuner.get_best_models(1)[0]
- best_trial = self.tuner.get_best_trials(1)[0]
- best_hp = best_trial.hyperparameters
-
- self.tuner.load_trial(best_trial)
- x = self._process_xy(x, predict=True)
- x = self.hypermodel.preprocess(best_hp, x)
- x = x.batch(batch_size)
+ best_model, x = self._prepare_best_model_and_data(
+ x=x,
+ y=None,
+ batch_size=batch_size,
+ predict=True)
y = best_model.predict(x, **kwargs)
y = self._postprocess(y)
if isinstance(y, list) and len(y) == 1:
@@ -225,6 +222,39 @@
new_y.append(temp_y)
return new_y
+ def evaluate(self, x, y=None, batch_size=32, **kwargs):
+ """Evaluate the best model for the given data.
+
+ # Arguments
+ x: Any allowed types according to the input node. Testing data.
+ y: Any allowed types according to the head. Testing targets.
+ Defaults to None.
+ batch_size: Int. Defaults to 32.
+ **kwargs: Any arguments supported by keras.Model.evaluate.
+
+ # Returns
+ Scalar test loss (if the model has a single output and no metrics) or
+ list of scalars (if the model has multiple outputs and/or metrics).
+ The attribute model.metrics_names will give you the display labels for
+ the scalar outputs.
+ """
+ best_model, data = self._prepare_best_model_and_data(
+ x=x,
+ y=y,
+ batch_size=batch_size)
+ return best_model.evaluate(data, **kwargs)
+
+ def _prepare_best_model_and_data(self, x, y, batch_size, predict=False):
+ best_model = self.tuner.get_best_models(1)[0]
+ best_trial = self.tuner.get_best_trials(1)[0]
+ best_hp = best_trial.hyperparameters
+
+ self.tuner.load_trial(best_trial)
+ x = self._process_xy(x, y, predict=predict)
+ x = self.hypermodel.preprocess(best_hp, x)
+ x = x.batch(batch_size)
+ return best_model, x
+
class GraphAutoModel(AutoModel):
"""A HyperModel defined by a graph of HyperBlocks.
|
{"golden_diff": "diff --git a/autokeras/auto_model.py b/autokeras/auto_model.py\n--- a/autokeras/auto_model.py\n+++ b/autokeras/auto_model.py\n@@ -194,7 +194,7 @@\n \"\"\"Predict the output for a given testing data.\n \n # Arguments\n- x: tf.data.Dataset or numpy.ndarray. Testing data.\n+ x: Any allowed types according to the input node. Testing data.\n batch_size: Int. Defaults to 32.\n **kwargs: Any arguments supported by keras.Model.predict.\n \n@@ -202,14 +202,11 @@\n A list of numpy.ndarray objects or a single numpy.ndarray.\n The predicted results.\n \"\"\"\n- best_model = self.tuner.get_best_models(1)[0]\n- best_trial = self.tuner.get_best_trials(1)[0]\n- best_hp = best_trial.hyperparameters\n-\n- self.tuner.load_trial(best_trial)\n- x = self._process_xy(x, predict=True)\n- x = self.hypermodel.preprocess(best_hp, x)\n- x = x.batch(batch_size)\n+ best_model, x = self._prepare_best_model_and_data(\n+ x=x,\n+ y=None,\n+ batch_size=batch_size,\n+ predict=True)\n y = best_model.predict(x, **kwargs)\n y = self._postprocess(y)\n if isinstance(y, list) and len(y) == 1:\n@@ -225,6 +222,39 @@\n new_y.append(temp_y)\n return new_y\n \n+ def evaluate(self, x, y=None, batch_size=32, **kwargs):\n+ \"\"\"Evaluate the best model for the given data.\n+\n+ # Arguments\n+ x: Any allowed types according to the input node. Testing data.\n+ y: Any allowed types according to the head. Testing targets.\n+ Defaults to None.\n+ batch_size: Int. Defaults to 32.\n+ **kwargs: Any arguments supported by keras.Model.evaluate.\n+\n+ # Returns\n+ Scalar test loss (if the model has a single output and no metrics) or\n+ list of scalars (if the model has multiple outputs and/or metrics).\n+ The attribute model.metrics_names will give you the display labels for\n+ the scalar outputs.\n+ \"\"\"\n+ best_model, data = self._prepare_best_model_and_data(\n+ x=x,\n+ y=y,\n+ batch_size=batch_size)\n+ return best_model.evaluate(data, **kwargs)\n+\n+ def _prepare_best_model_and_data(self, x, y, batch_size, predict=False):\n+ best_model = self.tuner.get_best_models(1)[0]\n+ best_trial = self.tuner.get_best_trials(1)[0]\n+ best_hp = best_trial.hyperparameters\n+\n+ self.tuner.load_trial(best_trial)\n+ x = self._process_xy(x, y, predict=predict)\n+ x = self.hypermodel.preprocess(best_hp, x)\n+ x = x.batch(batch_size)\n+ return best_model, x\n+\n \n class GraphAutoModel(AutoModel):\n \"\"\"A HyperModel defined by a graph of HyperBlocks.\n", "issue": "Add evaluate function to AutoModel\nWould it be relevant to have an `evaluate` method as well? e.g. a standard practice I would expect is to have 2 validation sets, one used for search during training, and one used at the end as the actual validation set (to prevent validation overfitting).\r\n\r\n_Originally posted by @fchollet in https://github.com/keras-team/autokeras/pull/691_\n", "before_files": [{"content": "import numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom tensorflow.python.util import nest\n\nimport kerastuner\nfrom autokeras import meta_model\nfrom autokeras import tuner\nfrom autokeras import utils\nfrom autokeras.hypermodel import graph\nfrom autokeras.hypermodel import head\nfrom autokeras.hypermodel import node\n\n\nclass AutoModel(object):\n \"\"\" A Model defined by inputs and outputs.\n\n AutoModel combines a HyperModel and a Tuner to tune the HyperModel.\n The user can use it in a similar way to a Keras model since it\n also has `fit()` and `predict()` methods.\n\n The user can specify the inputs and outputs of the AutoModel. 
It will infer\n the rest of the high-level neural architecture.\n\n # Arguments\n inputs: A list of or a HyperNode instance.\n The input node(s) of the AutoModel.\n outputs: A list of or a HyperHead instance.\n The output head(s) of the AutoModel.\n name: String. The name of the AutoModel. Defaults to 'auto_model'.\n max_trials: Int. The maximum number of different Keras Models to try.\n The search may finish before reaching the max_trials. Defaults to 100.\n directory: String. The path to a directory for storing the search outputs.\n Defaults to None, which would create a folder with the name of the\n AutoModel in the current directory.\n seed: Int. Random seed.\n \"\"\"\n\n def __init__(self,\n inputs,\n outputs,\n name='auto_model',\n max_trials=100,\n directory=None,\n seed=None):\n self.inputs = nest.flatten(inputs)\n self.outputs = nest.flatten(outputs)\n self.name = name\n self.tuner = None\n self.max_trials = max_trials\n self.directory = directory\n self.seed = seed\n self.hypermodel = None\n if all([isinstance(output_node, head.Head)\n for output_node in self.outputs]):\n self.heads = self.outputs\n else:\n self.heads = [output_node.in_blocks[0] for output_node in self.outputs]\n\n def _meta_build(self, dataset):\n self.hypermodel = meta_model.assemble(inputs=self.inputs,\n outputs=self.outputs,\n dataset=dataset,\n seed=self.seed)\n self.outputs = self.hypermodel.outputs\n\n def fit(self,\n x=None,\n y=None,\n validation_split=0,\n validation_data=None,\n **kwargs):\n \"\"\"Search for the best model and hyperparameters for the AutoModel.\n\n It will search for the best model based on the performances on\n validation data.\n\n # Arguments\n x: numpy.ndarray or tensorflow.Dataset. Training data x.\n y: numpy.ndarray or tensorflow.Dataset. Training data y.\n validation_split: Float between 0 and 1.\n Fraction of the training data to be used as validation data.\n The model will set apart this fraction of the training data,\n will not train on it, and will evaluate\n the loss and any model metrics\n on this data at the end of each epoch.\n The validation data is selected from the last samples\n in the `x` and `y` data provided, before shuffling. 
This argument is\n not supported when `x` is a dataset.\n validation_data: Data on which to evaluate\n the loss and any model metrics at the end of each epoch.\n The model will not be trained on this data.\n `validation_data` will override `validation_split`.\n `validation_data` could be:\n - tuple `(x_val, y_val)` of Numpy arrays or tensors\n - tuple `(x_val, y_val, val_sample_weights)` of Numpy arrays\n - dataset or a dataset iterator\n For the first two cases, `batch_size` must be provided.\n For the last case, `validation_steps` must be provided.\n **kwargs: Any arguments supported by keras.Model.fit.\n \"\"\"\n dataset, validation_data = self._prepare_data(\n x=x,\n y=y,\n validation_data=validation_data,\n validation_split=validation_split)\n\n # Initialize the hypermodel.\n self._meta_build(dataset)\n self.hypermodel.set_io_shapes(dataset)\n\n # Build the hypermodel in tuner init.\n hp = kerastuner.HyperParameters()\n self.hypermodel.hyper_build(hp)\n self.hypermodel.preprocess(\n hp=kerastuner.HyperParameters(),\n dataset=dataset,\n validation_data=validation_data,\n fit=True)\n self.tuner = tuner.RandomSearch(\n hypermodel=self.hypermodel,\n objective='val_loss',\n max_trials=self.max_trials,\n directory=self.directory,\n seed=self.seed,\n project_name=self.name)\n self.hypermodel.clear_preprocessors()\n\n # TODO: allow early stop if epochs is not specified.\n self.tuner.search(x=dataset,\n validation_data=validation_data,\n **kwargs)\n\n def _process_xy(self, x, y=None, fit=False, predict=False):\n \"\"\"Convert x, y to tf.data.Dataset.\n\n # Arguments\n x: Any type allowed by the corresponding input node.\n y: Any type allowed by the corresponding head.\n fit: Boolean. Whether to fit the type converter with the provided data.\n predict: Boolean. If it is called by the predict function of AutoModel.\n\n # Returns\n A tf.data.Dataset containing both x and y.\n \"\"\"\n if isinstance(x, tf.data.Dataset):\n if y is None and not predict:\n return x\n if isinstance(y, tf.data.Dataset):\n return tf.data.Dataset.zip((x, y))\n\n x = nest.flatten(x)\n new_x = []\n for data, input_node in zip(x, self.inputs):\n if fit:\n input_node.fit(data)\n data = input_node.transform(data)\n new_x.append(data)\n x = tf.data.Dataset.zip(tuple(new_x))\n\n if predict:\n return tf.data.Dataset.zip((x, x))\n\n if not isinstance(y, tf.data.Dataset):\n y = nest.flatten(y)\n new_y = []\n for data, head_block in zip(y, self.heads):\n if fit:\n head_block.fit(data)\n data = head_block.transform(data)\n new_y.append(data)\n y = tf.data.Dataset.zip(tuple(new_y))\n\n return tf.data.Dataset.zip((x, y))\n\n def _prepare_data(self, x, y, validation_data, validation_split):\n \"\"\"Convert the data to tf.data.Dataset.\"\"\"\n # Check validation information.\n if not validation_data and not validation_split:\n raise ValueError('Either validation_data or validation_split'\n 'should be provided.')\n # TODO: Handle other types of input, zip dataset, tensor, dict.\n # Prepare the dataset.\n dataset = self._process_xy(x, y, fit=True)\n if validation_data:\n val_x, val_y = validation_data\n validation_data = self._process_xy(val_x, val_y)\n # Split the data with validation_split.\n if validation_data is None and validation_split:\n dataset, validation_data = utils.split_dataset(dataset, validation_split)\n return dataset, validation_data\n\n def predict(self, x, batch_size=32, **kwargs):\n \"\"\"Predict the output for a given testing data.\n\n # Arguments\n x: tf.data.Dataset or numpy.ndarray. Testing data.\n batch_size: Int. 
Defaults to 32.\n **kwargs: Any arguments supported by keras.Model.predict.\n\n # Returns\n A list of numpy.ndarray objects or a single numpy.ndarray.\n The predicted results.\n \"\"\"\n best_model = self.tuner.get_best_models(1)[0]\n best_trial = self.tuner.get_best_trials(1)[0]\n best_hp = best_trial.hyperparameters\n\n self.tuner.load_trial(best_trial)\n x = self._process_xy(x, predict=True)\n x = self.hypermodel.preprocess(best_hp, x)\n x = x.batch(batch_size)\n y = best_model.predict(x, **kwargs)\n y = self._postprocess(y)\n if isinstance(y, list) and len(y) == 1:\n y = y[0]\n return y\n\n def _postprocess(self, y):\n y = nest.flatten(y)\n new_y = []\n for temp_y, head_block in zip(y, self.heads):\n if isinstance(head_block, head.Head):\n temp_y = head_block.postprocess(temp_y)\n new_y.append(temp_y)\n return new_y\n\n\nclass GraphAutoModel(AutoModel):\n \"\"\"A HyperModel defined by a graph of HyperBlocks.\n\n GraphAutoModel is a subclass of HyperModel. Besides the HyperModel properties,\n it also has a tuner to tune the HyperModel. The user can use it in a similar\n way to a Keras model since it also has `fit()` and `predict()` methods.\n\n The user can specify the high-level neural architecture by connecting the\n HyperBlocks with the functional API, which is the same as\n the Keras functional API.\n\n # Arguments\n inputs: A list of or a HyperNode instances.\n The input node(s) of the GraphAutoModel.\n outputs: A list of or a HyperNode instances.\n The output node(s) of the GraphAutoModel.\n name: String. The name of the AutoModel. Defaults to 'graph_auto_model'.\n max_trials: Int. The maximum number of different Keras Models to try.\n The search may finish before reaching the max_trials. Defaults to 100.\n directory: String. The path to a directory for storing the search outputs.\n Defaults to None, which would create a folder with the name of the\n AutoModel in the current directory.\n seed: Int. Random seed.\n \"\"\"\n\n def __init__(self,\n inputs,\n outputs,\n name='graph_auto_model',\n max_trials=100,\n directory=None,\n seed=None):\n super().__init__(\n inputs=inputs,\n outputs=outputs,\n name=name,\n max_trials=max_trials,\n directory=directory,\n seed=seed\n )\n self.hypermodel = graph.GraphHyperModel(self.inputs, self.outputs)\n\n def _meta_build(self, dataset):\n pass\n", "path": "autokeras/auto_model.py"}], "after_files": [{"content": "import numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom tensorflow.python.util import nest\n\nimport kerastuner\nfrom autokeras import meta_model\nfrom autokeras import tuner\nfrom autokeras import utils\nfrom autokeras.hypermodel import graph\nfrom autokeras.hypermodel import head\nfrom autokeras.hypermodel import node\n\n\nclass AutoModel(object):\n \"\"\" A Model defined by inputs and outputs.\n\n AutoModel combines a HyperModel and a Tuner to tune the HyperModel.\n The user can use it in a similar way to a Keras model since it\n also has `fit()` and `predict()` methods.\n\n The user can specify the inputs and outputs of the AutoModel. It will infer\n the rest of the high-level neural architecture.\n\n # Arguments\n inputs: A list of or a HyperNode instance.\n The input node(s) of the AutoModel.\n outputs: A list of or a HyperHead instance.\n The output head(s) of the AutoModel.\n name: String. The name of the AutoModel. Defaults to 'auto_model'.\n max_trials: Int. The maximum number of different Keras Models to try.\n The search may finish before reaching the max_trials. Defaults to 100.\n directory: String. 
The path to a directory for storing the search outputs.\n Defaults to None, which would create a folder with the name of the\n AutoModel in the current directory.\n seed: Int. Random seed.\n \"\"\"\n\n def __init__(self,\n inputs,\n outputs,\n name='auto_model',\n max_trials=100,\n directory=None,\n seed=None):\n self.inputs = nest.flatten(inputs)\n self.outputs = nest.flatten(outputs)\n self.name = name\n self.tuner = None\n self.max_trials = max_trials\n self.directory = directory\n self.seed = seed\n self.hypermodel = None\n if all([isinstance(output_node, head.Head)\n for output_node in self.outputs]):\n self.heads = self.outputs\n else:\n self.heads = [output_node.in_blocks[0] for output_node in self.outputs]\n\n def _meta_build(self, dataset):\n self.hypermodel = meta_model.assemble(inputs=self.inputs,\n outputs=self.outputs,\n dataset=dataset,\n seed=self.seed)\n self.outputs = self.hypermodel.outputs\n\n def fit(self,\n x=None,\n y=None,\n validation_split=0,\n validation_data=None,\n **kwargs):\n \"\"\"Search for the best model and hyperparameters for the AutoModel.\n\n It will search for the best model based on the performances on\n validation data.\n\n # Arguments\n x: numpy.ndarray or tensorflow.Dataset. Training data x.\n y: numpy.ndarray or tensorflow.Dataset. Training data y.\n validation_split: Float between 0 and 1.\n Fraction of the training data to be used as validation data.\n The model will set apart this fraction of the training data,\n will not train on it, and will evaluate\n the loss and any model metrics\n on this data at the end of each epoch.\n The validation data is selected from the last samples\n in the `x` and `y` data provided, before shuffling. This argument is\n not supported when `x` is a dataset.\n validation_data: Data on which to evaluate\n the loss and any model metrics at the end of each epoch.\n The model will not be trained on this data.\n `validation_data` will override `validation_split`.\n `validation_data` could be:\n - tuple `(x_val, y_val)` of Numpy arrays or tensors\n - tuple `(x_val, y_val, val_sample_weights)` of Numpy arrays\n - dataset or a dataset iterator\n For the first two cases, `batch_size` must be provided.\n For the last case, `validation_steps` must be provided.\n **kwargs: Any arguments supported by keras.Model.fit.\n \"\"\"\n dataset, validation_data = self._prepare_data(\n x=x,\n y=y,\n validation_data=validation_data,\n validation_split=validation_split)\n\n # Initialize the hypermodel.\n self._meta_build(dataset)\n self.hypermodel.set_io_shapes(dataset)\n\n # Build the hypermodel in tuner init.\n hp = kerastuner.HyperParameters()\n self.hypermodel.hyper_build(hp)\n self.hypermodel.preprocess(\n hp=kerastuner.HyperParameters(),\n dataset=dataset,\n validation_data=validation_data,\n fit=True)\n self.tuner = tuner.RandomSearch(\n hypermodel=self.hypermodel,\n objective='val_loss',\n max_trials=self.max_trials,\n directory=self.directory,\n seed=self.seed,\n project_name=self.name)\n self.hypermodel.clear_preprocessors()\n\n # TODO: allow early stop if epochs is not specified.\n self.tuner.search(x=dataset,\n validation_data=validation_data,\n **kwargs)\n\n def _process_xy(self, x, y=None, fit=False, predict=False):\n \"\"\"Convert x, y to tf.data.Dataset.\n\n # Arguments\n x: Any type allowed by the corresponding input node.\n y: Any type allowed by the corresponding head.\n fit: Boolean. Whether to fit the type converter with the provided data.\n predict: Boolean. 
If it is called by the predict function of AutoModel.\n\n # Returns\n A tf.data.Dataset containing both x and y.\n \"\"\"\n if isinstance(x, tf.data.Dataset):\n if y is None and not predict:\n return x\n if isinstance(y, tf.data.Dataset):\n return tf.data.Dataset.zip((x, y))\n\n x = nest.flatten(x)\n new_x = []\n for data, input_node in zip(x, self.inputs):\n if fit:\n input_node.fit(data)\n data = input_node.transform(data)\n new_x.append(data)\n x = tf.data.Dataset.zip(tuple(new_x))\n\n if predict:\n return tf.data.Dataset.zip((x, x))\n\n if not isinstance(y, tf.data.Dataset):\n y = nest.flatten(y)\n new_y = []\n for data, head_block in zip(y, self.heads):\n if fit:\n head_block.fit(data)\n data = head_block.transform(data)\n new_y.append(data)\n y = tf.data.Dataset.zip(tuple(new_y))\n\n return tf.data.Dataset.zip((x, y))\n\n def _prepare_data(self, x, y, validation_data, validation_split):\n \"\"\"Convert the data to tf.data.Dataset.\"\"\"\n # Check validation information.\n if not validation_data and not validation_split:\n raise ValueError('Either validation_data or validation_split'\n 'should be provided.')\n # TODO: Handle other types of input, zip dataset, tensor, dict.\n # Prepare the dataset.\n dataset = self._process_xy(x, y, fit=True)\n if validation_data:\n val_x, val_y = validation_data\n validation_data = self._process_xy(val_x, val_y)\n # Split the data with validation_split.\n if validation_data is None and validation_split:\n dataset, validation_data = utils.split_dataset(dataset, validation_split)\n return dataset, validation_data\n\n def predict(self, x, batch_size=32, **kwargs):\n \"\"\"Predict the output for a given testing data.\n\n # Arguments\n x: Any allowed types according to the input node. Testing data.\n batch_size: Int. Defaults to 32.\n **kwargs: Any arguments supported by keras.Model.predict.\n\n # Returns\n A list of numpy.ndarray objects or a single numpy.ndarray.\n The predicted results.\n \"\"\"\n best_model, x = self._prepare_best_model_and_data(\n x=x,\n y=None,\n batch_size=batch_size,\n predict=True)\n y = best_model.predict(x, **kwargs)\n y = self._postprocess(y)\n if isinstance(y, list) and len(y) == 1:\n y = y[0]\n return y\n\n def _postprocess(self, y):\n y = nest.flatten(y)\n new_y = []\n for temp_y, head_block in zip(y, self.heads):\n if isinstance(head_block, head.Head):\n temp_y = head_block.postprocess(temp_y)\n new_y.append(temp_y)\n return new_y\n\n def evaluate(self, x, y=None, batch_size=32, **kwargs):\n \"\"\"Evaluate the best model for the given data.\n\n # Arguments\n x: Any allowed types according to the input node. Testing data.\n y: Any allowed types according to the head. Testing targets.\n Defaults to None.\n batch_size: Int. 
Defaults to 32.\n **kwargs: Any arguments supported by keras.Model.evaluate.\n\n # Returns\n Scalar test loss (if the model has a single output and no metrics) or\n list of scalars (if the model has multiple outputs and/or metrics).\n The attribute model.metrics_names will give you the display labels for\n the scalar outputs.\n \"\"\"\n best_model, data = self._prepare_best_model_and_data(\n x=x,\n y=y,\n batch_size=batch_size)\n return best_model.evaluate(data, **kwargs)\n\n def _prepare_best_model_and_data(self, x, y, batch_size, predict=False):\n best_model = self.tuner.get_best_models(1)[0]\n best_trial = self.tuner.get_best_trials(1)[0]\n best_hp = best_trial.hyperparameters\n\n self.tuner.load_trial(best_trial)\n x = self._process_xy(x, y, predict=predict)\n x = self.hypermodel.preprocess(best_hp, x)\n x = x.batch(batch_size)\n return best_model, x\n\n\nclass GraphAutoModel(AutoModel):\n \"\"\"A HyperModel defined by a graph of HyperBlocks.\n\n GraphAutoModel is a subclass of HyperModel. Besides the HyperModel properties,\n it also has a tuner to tune the HyperModel. The user can use it in a similar\n way to a Keras model since it also has `fit()` and `predict()` methods.\n\n The user can specify the high-level neural architecture by connecting the\n HyperBlocks with the functional API, which is the same as\n the Keras functional API.\n\n # Arguments\n inputs: A list of or a HyperNode instances.\n The input node(s) of the GraphAutoModel.\n outputs: A list of or a HyperNode instances.\n The output node(s) of the GraphAutoModel.\n name: String. The name of the AutoModel. Defaults to 'graph_auto_model'.\n max_trials: Int. The maximum number of different Keras Models to try.\n The search may finish before reaching the max_trials. Defaults to 100.\n directory: String. The path to a directory for storing the search outputs.\n Defaults to None, which would create a folder with the name of the\n AutoModel in the current directory.\n seed: Int. Random seed.\n \"\"\"\n\n def __init__(self,\n inputs,\n outputs,\n name='graph_auto_model',\n max_trials=100,\n directory=None,\n seed=None):\n super().__init__(\n inputs=inputs,\n outputs=outputs,\n name=name,\n max_trials=max_trials,\n directory=directory,\n seed=seed\n )\n self.hypermodel = graph.GraphHyperModel(self.inputs, self.outputs)\n\n def _meta_build(self, dataset):\n pass\n", "path": "autokeras/auto_model.py"}]}
| 3,298 | 708 |
gh_patches_debug_10341
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-464
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Regression when building docs
I got this when building the Django REST Framework docs with 0.12.1
```
$ mkdocs build
Building documentation to directory: site
Directory site contains stale files. Use --clean to remove them.
Traceback (most recent call last):
File "/home/dougalmatthews/.virtualenvs/mkdocs/bin/mkdocs", line 9, in <module>
load_entry_point('mkdocs==0.12.1', 'console_scripts', 'mkdocs')()
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/main.py", line 77, in run_main
main(cmd, args=sys.argv[2:], options=dict(opts))
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/main.py", line 52, in main
build(config, clean_site_dir=clean_site_dir)
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/build.py", line 252, in build
build_pages(config)
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/build.py", line 209, in build_pages
site_navigation = nav.SiteNavigation(config['pages'], config['use_directory_urls'])
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/nav.py", line 37, in __init__
_generate_site_navigation(pages_config, self.url_context, use_directory_urls)
File "/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/nav.py", line 243, in _generate_site_navigation
header.children.append(page)
AttributeError: 'Page' object has no attribute 'children'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/nav.py`
Content:
```
1 # coding: utf-8
2
3 """
4 Deals with generating the site-wide navigation.
5
6 This consists of building a set of interlinked page and header objects.
7 """
8
9 import logging
10 import os
11
12 from mkdocs import utils, exceptions
13
14 log = logging.getLogger(__name__)
15
16
17 def filename_to_title(filename):
18 """
19 Automatically generate a default title, given a filename.
20 """
21 if utils.is_homepage(filename):
22 return 'Home'
23
24 title = os.path.splitext(filename)[0]
25 title = title.replace('-', ' ').replace('_', ' ')
26 # Captialize if the filename was all lowercase, otherwise leave it as-is.
27 if title.lower() == title:
28 title = title.capitalize()
29 return title
30
31
32 class SiteNavigation(object):
33 def __init__(self, pages_config, use_directory_urls=True):
34 self.url_context = URLContext()
35 self.file_context = FileContext()
36 self.nav_items, self.pages = \
37 _generate_site_navigation(pages_config, self.url_context, use_directory_urls)
38 self.homepage = self.pages[0] if self.pages else None
39 self.use_directory_urls = use_directory_urls
40
41 def __str__(self):
42 return ''.join([str(item) for item in self])
43
44 def __iter__(self):
45 return iter(self.nav_items)
46
47 def walk_pages(self):
48 """
49 Returns each page in the site in turn.
50
51 Additionally this sets the active status of the pages and headers,
52 in the site navigation, so that the rendered navbar can correctly
53 highlight the currently active page and/or header item.
54 """
55 page = self.homepage
56 page.set_active()
57 self.url_context.set_current_url(page.abs_url)
58 self.file_context.set_current_path(page.input_path)
59 yield page
60 while page.next_page:
61 page.set_active(False)
62 page = page.next_page
63 page.set_active()
64 self.url_context.set_current_url(page.abs_url)
65 self.file_context.set_current_path(page.input_path)
66 yield page
67 page.set_active(False)
68
69 @property
70 def source_files(self):
71 if not hasattr(self, '_source_files'):
72 self._source_files = set([page.input_path for page in self.pages])
73 return self._source_files
74
75
76 class URLContext(object):
77 """
78 The URLContext is used to ensure that we can generate the appropriate
79 relative URLs to other pages from any given page in the site.
80
81 We use relative URLs so that static sites can be deployed to any location
82 without having to specify what the path component on the host will be
83 if the documentation is not hosted at the root path.
84 """
85
86 def __init__(self):
87 self.base_path = '/'
88
89 def set_current_url(self, current_url):
90 self.base_path = os.path.dirname(current_url)
91
92 def make_relative(self, url):
93 """
94 Given a URL path return it as a relative URL,
95 given the context of the current page.
96 """
97 suffix = '/' if (url.endswith('/') and len(url) > 1) else ''
98 # Workaround for bug on `os.path.relpath()` in Python 2.6
99 if self.base_path == '/':
100 if url == '/':
101 # Workaround for static assets
102 return '.'
103 return url.lstrip('/')
104 # Under Python 2.6, relative_path adds an extra '/' at the end.
105 relative_path = os.path.relpath(url, start=self.base_path).rstrip('/') + suffix
106
107 return utils.path_to_url(relative_path)
108
109
110 class FileContext(object):
111 """
112 The FileContext is used to ensure that we can generate the appropriate
113 full path for other pages given their relative path from a particular page.
114
115 This is used when we have relative hyperlinks in the documentation, so that
116 we can ensure that they point to markdown documents that actually exist
117 in the `pages` config.
118 """
119 def __init__(self):
120 self.current_file = None
121 self.base_path = ''
122
123 def set_current_path(self, current_path):
124 self.current_file = current_path
125 self.base_path = os.path.dirname(current_path)
126
127 def make_absolute(self, path):
128 """
129 Given a relative file path return it as a POSIX-style
130 absolute filepath, given the context of the current page.
131 """
132 return os.path.normpath(os.path.join(self.base_path, path))
133
134
135 class Page(object):
136 def __init__(self, title, url, path, url_context):
137 self.title = title
138 self.abs_url = url
139 self.active = False
140 self.url_context = url_context
141
142 # Relative paths to the input markdown file and output html file.
143 self.input_path = path
144 self.output_path = utils.get_html_path(path)
145
146 # Links to related pages
147 self.previous_page = None
148 self.next_page = None
149 self.ancestors = []
150
151 @property
152 def url(self):
153 return self.url_context.make_relative(self.abs_url)
154
155 @property
156 def is_homepage(self):
157 return utils.is_homepage(self.input_path)
158
159 def __str__(self):
160 return self._indent_print()
161
162 def _indent_print(self, depth=0):
163 indent = ' ' * depth
164 active_marker = ' [*]' if self.active else ''
165 title = self.title if (self.title is not None) else '[blank]'
166 return '%s%s - %s%s\n' % (indent, title, self.abs_url, active_marker)
167
168 def set_active(self, active=True):
169 self.active = active
170 for ancestor in self.ancestors:
171 ancestor.active = active
172
173
174 class Header(object):
175 def __init__(self, title, children):
176 self.title, self.children = title, children
177 self.active = False
178
179 def __str__(self):
180 return self._indent_print()
181
182 def _indent_print(self, depth=0):
183 indent = ' ' * depth
184 active_marker = ' [*]' if self.active else ''
185 ret = '%s%s%s\n' % (indent, self.title, active_marker)
186 for item in self.children:
187 ret += item._indent_print(depth + 1)
188 return ret
189
190
191 def _generate_site_navigation(pages_config, url_context, use_directory_urls=True):
192 """
193 Returns a list of Page and Header instances that represent the
194 top level site navigation.
195 """
196 nav_items = []
197 pages = []
198 previous = None
199
200 for config_line in pages_config:
201 if isinstance(config_line, str):
202 path = os.path.normpath(config_line)
203 title, child_title = None, None
204 elif len(config_line) in (1, 2, 3):
205 # Pad any items that don't exist with 'None'
206 padded_config = (list(config_line) + [None, None])[:3]
207 path, title, child_title = padded_config
208 path = os.path.normpath(path)
209 else:
210 msg = (
211 "Line in 'page' config contained %d items. "
212 "Expected 1, 2 or 3 strings." % len(config_line)
213 )
214 raise exceptions.ConfigurationError(msg)
215
216 # If both the title and child_title are None, then we
217 # have just been given a path. If that path contains a /
218 # then lets automatically nest it.
219 if title is None and child_title is None and os.path.sep in path:
220 filename = path.split(os.path.sep)[-1]
221 child_title = filename_to_title(filename)
222
223 if title is None:
224 filename = path.split(os.path.sep)[0]
225 title = filename_to_title(filename)
226
227 url = utils.get_url_path(path, use_directory_urls)
228
229 if not child_title:
230 # New top level page.
231 page = Page(title=title, url=url, path=path, url_context=url_context)
232 nav_items.append(page)
233 elif not nav_items or (nav_items[-1].title != title):
234 # New second level page.
235 page = Page(title=child_title, url=url, path=path, url_context=url_context)
236 header = Header(title=title, children=[page])
237 nav_items.append(header)
238 page.ancestors = [header]
239 else:
240 # Additional second level page.
241 page = Page(title=child_title, url=url, path=path, url_context=url_context)
242 header = nav_items[-1]
243 header.children.append(page)
244 page.ancestors = [header]
245
246 # Add in previous and next information.
247 if previous:
248 page.previous_page = previous
249 previous.next_page = page
250 previous = page
251
252 pages.append(page)
253
254 return (nav_items, pages)
255
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mkdocs/nav.py b/mkdocs/nav.py
--- a/mkdocs/nav.py
+++ b/mkdocs/nav.py
@@ -224,6 +224,13 @@
filename = path.split(os.path.sep)[0]
title = filename_to_title(filename)
+ # If we don't have a child title but the other title is the same, we
+ # should be within a section and the child title needs to be inferred
+ # from the filename.
+ if len(nav_items) and title == nav_items[-1].title == title and child_title is None:
+ filename = path.split(os.path.sep)[-1]
+ child_title = filename_to_title(filename)
+
url = utils.get_url_path(path, use_directory_urls)
if not child_title:
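Annotation on the fix above: assuming a `pages` config of the following shape (paths and titles are illustrative, not copied from the Django REST Framework docs), the old code turns the middle entry — same section title, no child title — into a stray top-level `Page`; the next entry then matches that `Page`'s title and calls `nav_items[-1].children.append(...)`, which is exactly the `AttributeError: 'Page' object has no attribute 'children'` in the traceback. With the patch, the child title is inferred from the filename, so that entry joins the existing `Header` instead.

```python
# Hypothetical pages config of the shape that triggers the regression
# (illustrative paths and titles only).
pages_config = [
    ['api-guide/requests.md', 'API Guide', 'Requests'],   # creates Header('API Guide')
    ['api-guide/responses.md', 'API Guide'],              # old code: appended as a stray top-level Page
    ['api-guide/views.md', 'API Guide', 'Views'],         # old code: nav_items[-1] is that Page -> .children fails
]
# Patched behaviour: the second entry infers child_title 'Responses' from
# 'responses.md' and is appended to the existing Header, so the third entry
# also lands on a Header as intended.
```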
|
{"golden_diff": "diff --git a/mkdocs/nav.py b/mkdocs/nav.py\n--- a/mkdocs/nav.py\n+++ b/mkdocs/nav.py\n@@ -224,6 +224,13 @@\n filename = path.split(os.path.sep)[0]\n title = filename_to_title(filename)\n \n+ # If we don't have a child title but the other title is the same, we\n+ # should be within a section and the child title needs to be inferred\n+ # from the filename.\n+ if len(nav_items) and title == nav_items[-1].title == title and child_title is None:\n+ filename = path.split(os.path.sep)[-1]\n+ child_title = filename_to_title(filename)\n+\n url = utils.get_url_path(path, use_directory_urls)\n \n if not child_title:\n", "issue": "Regression when building docs\nI got this when building the Django REST Framework docs with 0.12.1\n\n```\n$ mkdocs build \nBuilding documentation to directory: site\nDirectory site contains stale files. Use --clean to remove them.\nTraceback (most recent call last):\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/bin/mkdocs\", line 9, in <module>\n load_entry_point('mkdocs==0.12.1', 'console_scripts', 'mkdocs')()\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/main.py\", line 77, in run_main\n main(cmd, args=sys.argv[2:], options=dict(opts))\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/main.py\", line 52, in main\n build(config, clean_site_dir=clean_site_dir)\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/build.py\", line 252, in build\n build_pages(config)\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/build.py\", line 209, in build_pages\n site_navigation = nav.SiteNavigation(config['pages'], config['use_directory_urls'])\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/nav.py\", line 37, in __init__\n _generate_site_navigation(pages_config, self.url_context, use_directory_urls)\n File \"/home/dougalmatthews/.virtualenvs/mkdocs/lib/python2.7/site-packages/mkdocs/nav.py\", line 243, in _generate_site_navigation\n header.children.append(page)\nAttributeError: 'Page' object has no attribute 'children'\n```\n\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"\nDeals with generating the site-wide navigation.\n\nThis consists of building a set of interlinked page and header objects.\n\"\"\"\n\nimport logging\nimport os\n\nfrom mkdocs import utils, exceptions\n\nlog = logging.getLogger(__name__)\n\n\ndef filename_to_title(filename):\n \"\"\"\n Automatically generate a default title, given a filename.\n \"\"\"\n if utils.is_homepage(filename):\n return 'Home'\n\n title = os.path.splitext(filename)[0]\n title = title.replace('-', ' ').replace('_', ' ')\n # Captialize if the filename was all lowercase, otherwise leave it as-is.\n if title.lower() == title:\n title = title.capitalize()\n return title\n\n\nclass SiteNavigation(object):\n def __init__(self, pages_config, use_directory_urls=True):\n self.url_context = URLContext()\n self.file_context = FileContext()\n self.nav_items, self.pages = \\\n _generate_site_navigation(pages_config, self.url_context, use_directory_urls)\n self.homepage = self.pages[0] if self.pages else None\n self.use_directory_urls = use_directory_urls\n\n def __str__(self):\n return ''.join([str(item) for item in self])\n\n def __iter__(self):\n return iter(self.nav_items)\n\n def walk_pages(self):\n \"\"\"\n Returns each page in the site in turn.\n\n Additionally this sets the active status of the pages and headers,\n in the site navigation, so 
that the rendered navbar can correctly\n highlight the currently active page and/or header item.\n \"\"\"\n page = self.homepage\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n while page.next_page:\n page.set_active(False)\n page = page.next_page\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n page.set_active(False)\n\n @property\n def source_files(self):\n if not hasattr(self, '_source_files'):\n self._source_files = set([page.input_path for page in self.pages])\n return self._source_files\n\n\nclass URLContext(object):\n \"\"\"\n The URLContext is used to ensure that we can generate the appropriate\n relative URLs to other pages from any given page in the site.\n\n We use relative URLs so that static sites can be deployed to any location\n without having to specify what the path component on the host will be\n if the documentation is not hosted at the root path.\n \"\"\"\n\n def __init__(self):\n self.base_path = '/'\n\n def set_current_url(self, current_url):\n self.base_path = os.path.dirname(current_url)\n\n def make_relative(self, url):\n \"\"\"\n Given a URL path return it as a relative URL,\n given the context of the current page.\n \"\"\"\n suffix = '/' if (url.endswith('/') and len(url) > 1) else ''\n # Workaround for bug on `os.path.relpath()` in Python 2.6\n if self.base_path == '/':\n if url == '/':\n # Workaround for static assets\n return '.'\n return url.lstrip('/')\n # Under Python 2.6, relative_path adds an extra '/' at the end.\n relative_path = os.path.relpath(url, start=self.base_path).rstrip('/') + suffix\n\n return utils.path_to_url(relative_path)\n\n\nclass FileContext(object):\n \"\"\"\n The FileContext is used to ensure that we can generate the appropriate\n full path for other pages given their relative path from a particular page.\n\n This is used when we have relative hyperlinks in the documentation, so that\n we can ensure that they point to markdown documents that actually exist\n in the `pages` config.\n \"\"\"\n def __init__(self):\n self.current_file = None\n self.base_path = ''\n\n def set_current_path(self, current_path):\n self.current_file = current_path\n self.base_path = os.path.dirname(current_path)\n\n def make_absolute(self, path):\n \"\"\"\n Given a relative file path return it as a POSIX-style\n absolute filepath, given the context of the current page.\n \"\"\"\n return os.path.normpath(os.path.join(self.base_path, path))\n\n\nclass Page(object):\n def __init__(self, title, url, path, url_context):\n self.title = title\n self.abs_url = url\n self.active = False\n self.url_context = url_context\n\n # Relative paths to the input markdown file and output html file.\n self.input_path = path\n self.output_path = utils.get_html_path(path)\n\n # Links to related pages\n self.previous_page = None\n self.next_page = None\n self.ancestors = []\n\n @property\n def url(self):\n return self.url_context.make_relative(self.abs_url)\n\n @property\n def is_homepage(self):\n return utils.is_homepage(self.input_path)\n\n def __str__(self):\n return self._indent_print()\n\n def _indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n title = self.title if (self.title is not None) else '[blank]'\n return '%s%s - %s%s\\n' % (indent, title, self.abs_url, active_marker)\n\n def set_active(self, active=True):\n self.active = active\n for ancestor 
in self.ancestors:\n ancestor.active = active\n\n\nclass Header(object):\n def __init__(self, title, children):\n self.title, self.children = title, children\n self.active = False\n\n def __str__(self):\n return self._indent_print()\n\n def _indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n ret = '%s%s%s\\n' % (indent, self.title, active_marker)\n for item in self.children:\n ret += item._indent_print(depth + 1)\n return ret\n\n\ndef _generate_site_navigation(pages_config, url_context, use_directory_urls=True):\n \"\"\"\n Returns a list of Page and Header instances that represent the\n top level site navigation.\n \"\"\"\n nav_items = []\n pages = []\n previous = None\n\n for config_line in pages_config:\n if isinstance(config_line, str):\n path = os.path.normpath(config_line)\n title, child_title = None, None\n elif len(config_line) in (1, 2, 3):\n # Pad any items that don't exist with 'None'\n padded_config = (list(config_line) + [None, None])[:3]\n path, title, child_title = padded_config\n path = os.path.normpath(path)\n else:\n msg = (\n \"Line in 'page' config contained %d items. \"\n \"Expected 1, 2 or 3 strings.\" % len(config_line)\n )\n raise exceptions.ConfigurationError(msg)\n\n # If both the title and child_title are None, then we\n # have just been given a path. If that path contains a /\n # then lets automatically nest it.\n if title is None and child_title is None and os.path.sep in path:\n filename = path.split(os.path.sep)[-1]\n child_title = filename_to_title(filename)\n\n if title is None:\n filename = path.split(os.path.sep)[0]\n title = filename_to_title(filename)\n\n url = utils.get_url_path(path, use_directory_urls)\n\n if not child_title:\n # New top level page.\n page = Page(title=title, url=url, path=path, url_context=url_context)\n nav_items.append(page)\n elif not nav_items or (nav_items[-1].title != title):\n # New second level page.\n page = Page(title=child_title, url=url, path=path, url_context=url_context)\n header = Header(title=title, children=[page])\n nav_items.append(header)\n page.ancestors = [header]\n else:\n # Additional second level page.\n page = Page(title=child_title, url=url, path=path, url_context=url_context)\n header = nav_items[-1]\n header.children.append(page)\n page.ancestors = [header]\n\n # Add in previous and next information.\n if previous:\n page.previous_page = previous\n previous.next_page = page\n previous = page\n\n pages.append(page)\n\n return (nav_items, pages)\n", "path": "mkdocs/nav.py"}], "after_files": [{"content": "# coding: utf-8\n\n\"\"\"\nDeals with generating the site-wide navigation.\n\nThis consists of building a set of interlinked page and header objects.\n\"\"\"\n\nimport logging\nimport os\n\nfrom mkdocs import utils, exceptions\n\nlog = logging.getLogger(__name__)\n\n\ndef filename_to_title(filename):\n \"\"\"\n Automatically generate a default title, given a filename.\n \"\"\"\n if utils.is_homepage(filename):\n return 'Home'\n\n title = os.path.splitext(filename)[0]\n title = title.replace('-', ' ').replace('_', ' ')\n # Captialize if the filename was all lowercase, otherwise leave it as-is.\n if title.lower() == title:\n title = title.capitalize()\n return title\n\n\nclass SiteNavigation(object):\n def __init__(self, pages_config, use_directory_urls=True):\n self.url_context = URLContext()\n self.file_context = FileContext()\n self.nav_items, self.pages = \\\n _generate_site_navigation(pages_config, self.url_context, use_directory_urls)\n self.homepage = 
self.pages[0] if self.pages else None\n self.use_directory_urls = use_directory_urls\n\n def __str__(self):\n return ''.join([str(item) for item in self])\n\n def __iter__(self):\n return iter(self.nav_items)\n\n def walk_pages(self):\n \"\"\"\n Returns each page in the site in turn.\n\n Additionally this sets the active status of the pages and headers,\n in the site navigation, so that the rendered navbar can correctly\n highlight the currently active page and/or header item.\n \"\"\"\n page = self.homepage\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n while page.next_page:\n page.set_active(False)\n page = page.next_page\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n page.set_active(False)\n\n @property\n def source_files(self):\n if not hasattr(self, '_source_files'):\n self._source_files = set([page.input_path for page in self.pages])\n return self._source_files\n\n\nclass URLContext(object):\n \"\"\"\n The URLContext is used to ensure that we can generate the appropriate\n relative URLs to other pages from any given page in the site.\n\n We use relative URLs so that static sites can be deployed to any location\n without having to specify what the path component on the host will be\n if the documentation is not hosted at the root path.\n \"\"\"\n\n def __init__(self):\n self.base_path = '/'\n\n def set_current_url(self, current_url):\n self.base_path = os.path.dirname(current_url)\n\n def make_relative(self, url):\n \"\"\"\n Given a URL path return it as a relative URL,\n given the context of the current page.\n \"\"\"\n suffix = '/' if (url.endswith('/') and len(url) > 1) else ''\n # Workaround for bug on `os.path.relpath()` in Python 2.6\n if self.base_path == '/':\n if url == '/':\n # Workaround for static assets\n return '.'\n return url.lstrip('/')\n # Under Python 2.6, relative_path adds an extra '/' at the end.\n relative_path = os.path.relpath(url, start=self.base_path).rstrip('/') + suffix\n\n return utils.path_to_url(relative_path)\n\n\nclass FileContext(object):\n \"\"\"\n The FileContext is used to ensure that we can generate the appropriate\n full path for other pages given their relative path from a particular page.\n\n This is used when we have relative hyperlinks in the documentation, so that\n we can ensure that they point to markdown documents that actually exist\n in the `pages` config.\n \"\"\"\n def __init__(self):\n self.current_file = None\n self.base_path = ''\n\n def set_current_path(self, current_path):\n self.current_file = current_path\n self.base_path = os.path.dirname(current_path)\n\n def make_absolute(self, path):\n \"\"\"\n Given a relative file path return it as a POSIX-style\n absolute filepath, given the context of the current page.\n \"\"\"\n return os.path.normpath(os.path.join(self.base_path, path))\n\n\nclass Page(object):\n def __init__(self, title, url, path, url_context):\n self.title = title\n self.abs_url = url\n self.active = False\n self.url_context = url_context\n\n # Relative paths to the input markdown file and output html file.\n self.input_path = path\n self.output_path = utils.get_html_path(path)\n\n # Links to related pages\n self.previous_page = None\n self.next_page = None\n self.ancestors = []\n\n @property\n def url(self):\n return self.url_context.make_relative(self.abs_url)\n\n @property\n def is_homepage(self):\n return 
utils.is_homepage(self.input_path)\n\n def __str__(self):\n return self._indent_print()\n\n def _indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n title = self.title if (self.title is not None) else '[blank]'\n return '%s%s - %s%s\\n' % (indent, title, self.abs_url, active_marker)\n\n def set_active(self, active=True):\n self.active = active\n for ancestor in self.ancestors:\n ancestor.active = active\n\n\nclass Header(object):\n def __init__(self, title, children):\n self.title, self.children = title, children\n self.active = False\n\n def __str__(self):\n return self._indent_print()\n\n def _indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n ret = '%s%s%s\\n' % (indent, self.title, active_marker)\n for item in self.children:\n ret += item._indent_print(depth + 1)\n return ret\n\n\ndef _generate_site_navigation(pages_config, url_context, use_directory_urls=True):\n \"\"\"\n Returns a list of Page and Header instances that represent the\n top level site navigation.\n \"\"\"\n nav_items = []\n pages = []\n previous = None\n\n for config_line in pages_config:\n if isinstance(config_line, str):\n path = os.path.normpath(config_line)\n title, child_title = None, None\n elif len(config_line) in (1, 2, 3):\n # Pad any items that don't exist with 'None'\n padded_config = (list(config_line) + [None, None])[:3]\n path, title, child_title = padded_config\n path = os.path.normpath(path)\n else:\n msg = (\n \"Line in 'page' config contained %d items. \"\n \"Expected 1, 2 or 3 strings.\" % len(config_line)\n )\n raise exceptions.ConfigurationError(msg)\n\n # If both the title and child_title are None, then we\n # have just been given a path. If that path contains a /\n # then lets automatically nest it.\n if title is None and child_title is None and os.path.sep in path:\n filename = path.split(os.path.sep)[-1]\n child_title = filename_to_title(filename)\n\n if title is None:\n filename = path.split(os.path.sep)[0]\n title = filename_to_title(filename)\n\n # If we don't have a child title but the other title is the same, we\n # should be within a section and the child title needs to be inferred\n # from the filename.\n if len(nav_items) and title == nav_items[-1].title == title and child_title is None:\n filename = path.split(os.path.sep)[-1]\n child_title = filename_to_title(filename)\n\n url = utils.get_url_path(path, use_directory_urls)\n\n if not child_title:\n # New top level page.\n page = Page(title=title, url=url, path=path, url_context=url_context)\n nav_items.append(page)\n elif not nav_items or (nav_items[-1].title != title):\n # New second level page.\n page = Page(title=child_title, url=url, path=path, url_context=url_context)\n header = Header(title=title, children=[page])\n nav_items.append(header)\n page.ancestors = [header]\n else:\n # Additional second level page.\n page = Page(title=child_title, url=url, path=path, url_context=url_context)\n header = nav_items[-1]\n header.children.append(page)\n page.ancestors = [header]\n\n # Add in previous and next information.\n if previous:\n page.previous_page = previous\n previous.next_page = page\n previous = page\n\n pages.append(page)\n\n return (nav_items, pages)\n", "path": "mkdocs/nav.py"}]}
| 3,218 | 179 |
gh_patches_debug_16615
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-8555
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bokeh cannot serve notebooks on Python 2
Using a notebook containing a single code cell (simply `1+1`) in a Python 2 environment:
`bokeh serve --show Test.ipynb`
Results in the following error:
```
SyntaxError: encoding declaration in Unicode string
```
Note that this is why the unit tests added in #8546 skip for Python 2.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/application/handlers/notebook.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' Provide a Bokeh Application Handler to build up documents by running
8 the code from Jupyter notebook (``.ipynb``) files.
9
10 This handler is configured with the filename of a Jupyter notebook. When a
11 Bokeh application calls ``modify_doc``, the code from all the notebook cells
12 is collected and executed to process a new Document for a session. When the
13 notebook code is executed, the Document being modified will be available as
14 ``curdoc``, and any optionally provided ``args`` will be available as
15 ``sys.argv``.
16
17 '''
18
19 #-----------------------------------------------------------------------------
20 # Boilerplate
21 #-----------------------------------------------------------------------------
22 from __future__ import absolute_import, division, print_function, unicode_literals
23
24 import re
25 import logging
26 log = logging.getLogger(__name__)
27
28 #-----------------------------------------------------------------------------
29 # Imports
30 #-----------------------------------------------------------------------------
31
32 # Standard library imports
33
34 # External imports
35
36 # Bokeh imports
37 from ...util.dependencies import import_required
38 from .code import CodeHandler
39
40 #-----------------------------------------------------------------------------
41 # Globals and constants
42 #-----------------------------------------------------------------------------
43
44 __all__ = (
45 'NotebookHandler',
46 )
47
48 #-----------------------------------------------------------------------------
49 # General API
50 #-----------------------------------------------------------------------------
51
52 #-----------------------------------------------------------------------------
53 # Dev API
54 #-----------------------------------------------------------------------------
55
56 class NotebookHandler(CodeHandler):
57 ''' A Handler that uses code in a Jupyter notebook for modifying Bokeh
58 Documents.
59
60 '''
61
62 _logger_text = "%s: call to %s() ignored when running notebooks with the 'bokeh' command."
63
64 _origin = "Notebook"
65
66 def __init__(self, *args, **kwargs):
67 '''
68
69 Keywords:
70 filename (str) : a path to a Jupyter notebook (".ipynb") file
71
72 '''
73 nbformat = import_required('nbformat', 'The Bokeh notebook application handler requires Jupyter Notebook to be installed.')
74 nbconvert = import_required('nbconvert', 'The Bokeh notebook application handler requires Jupyter Notebook to be installed.')
75
76 if 'filename' not in kwargs:
77 raise ValueError('Must pass a filename to NotebookHandler')
78
79
80 class StripMagicsProcessor(nbconvert.preprocessors.Preprocessor):
81 """
82 Preprocessor to convert notebooks to Python source while stripping
83 out all magics (i.e IPython specific syntax).
84 """
85
86 _magic_pattern = re.compile('^\s*(?P<magic>%%\w\w+)($|(\s+))')
87
88 def strip_magics(self, source):
89 """
90 Given the source of a cell, filter out all cell and line magics.
91 """
92 filtered=[]
93 for line in source.splitlines():
94 match = self._magic_pattern.match(line)
95 if match is None:
96 filtered.append(line)
97 else:
98 msg = 'Stripping out IPython magic {magic} in code cell {cell}'
99 message = msg.format(cell=self._cell_counter, magic=match.group('magic'))
100 log.warn(message)
101 return '\n'.join(filtered)
102
103 def preprocess_cell(self, cell, resources, index):
104 if cell['cell_type'] == 'code':
105 self._cell_counter += 1
106 cell['source'] = self.strip_magics(cell['source'])
107 return cell, resources
108
109 def __call__(self, nb, resources):
110 self._cell_counter = 0
111 return self.preprocess(nb,resources)
112
113 preprocessors=[StripMagicsProcessor()]
114 filename = kwargs['filename']
115
116 with open(filename) as f:
117 nb = nbformat.read(f, nbformat.NO_CONVERT)
118 exporter = nbconvert.PythonExporter()
119
120 for preprocessor in preprocessors:
121 exporter.register_preprocessor(preprocessor)
122
123 source, _ = exporter.from_notebook_node(nb)
124 source = source.replace('get_ipython().run_line_magic', '')
125 kwargs['source'] = source
126
127 super(NotebookHandler, self).__init__(*args, **kwargs)
128
129 #-----------------------------------------------------------------------------
130 # Private API
131 #-----------------------------------------------------------------------------
132
133 #-----------------------------------------------------------------------------
134 # Code
135 #-----------------------------------------------------------------------------
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bokeh/application/handlers/notebook.py b/bokeh/application/handlers/notebook.py
--- a/bokeh/application/handlers/notebook.py
+++ b/bokeh/application/handlers/notebook.py
@@ -22,6 +22,7 @@
from __future__ import absolute_import, division, print_function, unicode_literals
import re
+import sys
import logging
log = logging.getLogger(__name__)
@@ -122,6 +123,9 @@
source, _ = exporter.from_notebook_node(nb)
source = source.replace('get_ipython().run_line_magic', '')
+ source = source.replace('get_ipython().magic', '')
+ if sys.version_info.major == 2:
+ source = source.replace('# coding: utf-8','')
kwargs['source'] = source
super(NotebookHandler, self).__init__(*args, **kwargs)
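A note on the root cause, to read alongside the patch: nbconvert's Python exporter emits a `# coding: utf-8` line (the exact string the patch strips), and the code handler ultimately compiles that unicode source; on Python 2, `compile()` refuses a unicode string that still carries an encoding declaration. A minimal sketch of that failure, independent of Bokeh:

```python
# Python 2 only: minimal reproduction of the underlying failure (not Bokeh code).
source = u'# coding: utf-8\n1 + 1\n'     # roughly what the exported notebook source starts with
compile(source, '<Test.ipynb>', 'exec')  # SyntaxError: encoding declaration in Unicode string
```

The extra `get_ipython().magic` replacement in the same hunk strips the older spelling of line-magic calls, mirroring the existing `run_line_magic` handling.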
|
{"golden_diff": "diff --git a/bokeh/application/handlers/notebook.py b/bokeh/application/handlers/notebook.py\n--- a/bokeh/application/handlers/notebook.py\n+++ b/bokeh/application/handlers/notebook.py\n@@ -22,6 +22,7 @@\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n import re\n+import sys\n import logging\n log = logging.getLogger(__name__)\n \n@@ -122,6 +123,9 @@\n \n source, _ = exporter.from_notebook_node(nb)\n source = source.replace('get_ipython().run_line_magic', '')\n+ source = source.replace('get_ipython().magic', '')\n+ if sys.version_info.major == 2:\n+ source = source.replace('# coding: utf-8','')\n kwargs['source'] = source\n \n super(NotebookHandler, self).__init__(*args, **kwargs)\n", "issue": "Bokeh cannot serve notebooks on Python 2\nUsing a notebook containing a single code cell (simply `1+1`) in a Python 2 environment: \r\n\r\n`bokeh serve --show Test.ipynb`\r\n\r\nResults in the following error:\r\n\r\n```\r\nSyntaxError: encoding declaration in Unicode string\r\n```\r\n\r\nNote that this is why the unit tests added in #8546 skip for Python 2.\r\n\r\n\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Provide a Bokeh Application Handler to build up documents by running\nthe code from Jupyter notebook (``.ipynb``) files.\n\nThis handler is configured with the filename of a Jupyter notebook. When a\nBokeh application calls ``modify_doc``, the code from all the notebook cells\nis collected and executed to process a new Document for a session. 
When the\nnotebook code is executed, the Document being modified will be available as\n``curdoc``, and any optionally provided ``args`` will be available as\n``sys.argv``.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport re\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\n\n# External imports\n\n# Bokeh imports\nfrom ...util.dependencies import import_required\nfrom .code import CodeHandler\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'NotebookHandler',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass NotebookHandler(CodeHandler):\n ''' A Handler that uses code in a Jupyter notebook for modifying Bokeh\n Documents.\n\n '''\n\n _logger_text = \"%s: call to %s() ignored when running notebooks with the 'bokeh' command.\"\n\n _origin = \"Notebook\"\n\n def __init__(self, *args, **kwargs):\n '''\n\n Keywords:\n filename (str) : a path to a Jupyter notebook (\".ipynb\") file\n\n '''\n nbformat = import_required('nbformat', 'The Bokeh notebook application handler requires Jupyter Notebook to be installed.')\n nbconvert = import_required('nbconvert', 'The Bokeh notebook application handler requires Jupyter Notebook to be installed.')\n\n if 'filename' not in kwargs:\n raise ValueError('Must pass a filename to NotebookHandler')\n\n\n class StripMagicsProcessor(nbconvert.preprocessors.Preprocessor):\n \"\"\"\n Preprocessor to convert notebooks to Python source while stripping\n out all magics (i.e IPython specific syntax).\n \"\"\"\n\n _magic_pattern = re.compile('^\\s*(?P<magic>%%\\w\\w+)($|(\\s+))')\n\n def strip_magics(self, source):\n \"\"\"\n Given the source of a cell, filter out all cell and line magics.\n \"\"\"\n filtered=[]\n for line in source.splitlines():\n match = self._magic_pattern.match(line)\n if match is None:\n filtered.append(line)\n else:\n msg = 'Stripping out IPython magic {magic} in code cell {cell}'\n message = msg.format(cell=self._cell_counter, magic=match.group('magic'))\n log.warn(message)\n return '\\n'.join(filtered)\n\n def preprocess_cell(self, cell, resources, index):\n if cell['cell_type'] == 'code':\n self._cell_counter += 1\n cell['source'] = self.strip_magics(cell['source'])\n return cell, resources\n\n def __call__(self, nb, resources):\n self._cell_counter = 0\n return self.preprocess(nb,resources)\n\n preprocessors=[StripMagicsProcessor()]\n filename = kwargs['filename']\n\n with open(filename) as f:\n nb = nbformat.read(f, nbformat.NO_CONVERT)\n exporter = nbconvert.PythonExporter()\n\n for preprocessor in preprocessors:\n exporter.register_preprocessor(preprocessor)\n\n source, _ = exporter.from_notebook_node(nb)\n source = 
source.replace('get_ipython().run_line_magic', '')\n kwargs['source'] = source\n\n super(NotebookHandler, self).__init__(*args, **kwargs)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/application/handlers/notebook.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Provide a Bokeh Application Handler to build up documents by running\nthe code from Jupyter notebook (``.ipynb``) files.\n\nThis handler is configured with the filename of a Jupyter notebook. When a\nBokeh application calls ``modify_doc``, the code from all the notebook cells\nis collected and executed to process a new Document for a session. When the\nnotebook code is executed, the Document being modified will be available as\n``curdoc``, and any optionally provided ``args`` will be available as\n``sys.argv``.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport re\nimport sys\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\n\n# External imports\n\n# Bokeh imports\nfrom ...util.dependencies import import_required\nfrom .code import CodeHandler\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'NotebookHandler',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass NotebookHandler(CodeHandler):\n ''' A Handler that uses code in a Jupyter notebook for modifying Bokeh\n Documents.\n\n '''\n\n _logger_text = \"%s: call to %s() ignored when running notebooks with the 'bokeh' command.\"\n\n _origin = \"Notebook\"\n\n def __init__(self, *args, **kwargs):\n '''\n\n Keywords:\n filename (str) : a path to a Jupyter notebook (\".ipynb\") file\n\n '''\n nbformat = import_required('nbformat', 'The Bokeh notebook application handler requires Jupyter Notebook to be installed.')\n nbconvert = import_required('nbconvert', 'The Bokeh notebook application handler requires Jupyter Notebook to be installed.')\n\n if 'filename' not in kwargs:\n raise ValueError('Must pass a filename to NotebookHandler')\n\n\n class StripMagicsProcessor(nbconvert.preprocessors.Preprocessor):\n \"\"\"\n Preprocessor to convert notebooks to Python source while 
stripping\n out all magics (i.e IPython specific syntax).\n \"\"\"\n\n _magic_pattern = re.compile('^\\s*(?P<magic>%%\\w\\w+)($|(\\s+))')\n\n def strip_magics(self, source):\n \"\"\"\n Given the source of a cell, filter out all cell and line magics.\n \"\"\"\n filtered=[]\n for line in source.splitlines():\n match = self._magic_pattern.match(line)\n if match is None:\n filtered.append(line)\n else:\n msg = 'Stripping out IPython magic {magic} in code cell {cell}'\n message = msg.format(cell=self._cell_counter, magic=match.group('magic'))\n log.warn(message)\n return '\\n'.join(filtered)\n\n def preprocess_cell(self, cell, resources, index):\n if cell['cell_type'] == 'code':\n self._cell_counter += 1\n cell['source'] = self.strip_magics(cell['source'])\n return cell, resources\n\n def __call__(self, nb, resources):\n self._cell_counter = 0\n return self.preprocess(nb,resources)\n\n preprocessors=[StripMagicsProcessor()]\n filename = kwargs['filename']\n\n with open(filename) as f:\n nb = nbformat.read(f, nbformat.NO_CONVERT)\n exporter = nbconvert.PythonExporter()\n\n for preprocessor in preprocessors:\n exporter.register_preprocessor(preprocessor)\n\n source, _ = exporter.from_notebook_node(nb)\n source = source.replace('get_ipython().run_line_magic', '')\n source = source.replace('get_ipython().magic', '')\n if sys.version_info.major == 2:\n source = source.replace('# coding: utf-8','')\n kwargs['source'] = source\n\n super(NotebookHandler, self).__init__(*args, **kwargs)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/application/handlers/notebook.py"}]}
| 1,546 | 206 |
gh_patches_debug_40325
|
rasdani/github-patches
|
git_diff
|
lk-geimfari__mimesis-780
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve IPv4/IPv6 generators for Internet()
We can improve how `.ip_v4()` and `.ip_v6()` work.
I suggest using the `ipaddress` module from the standard library, just like this:
```python
def ip_v4(self, with_port: bool = False, stringify: bool = True) -> Union[str, IPv4Address]:
"""Generate a random IPv4 address.
:param stringify: Return IPv4 as string (True by default).
:param with_port: Add port to IP.
:return: Random IPv4 address.
:Example:
19.121.223.58
"""
ipv4 = IPv4Address(self.random.getrandbits(32))
if not stringify:
return ipv4
if with_port:
ipv4 += ':{}'.format(self.port())
return ipv4
```
The idea is to support IP addresses as objects to make them more useful than just strings.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mimesis/providers/internet.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Provides data related to internet."""
4
5 import urllib.error
6 import urllib.request
7 from ipaddress import IPv6Address
8 from typing import List, Optional, Union
9
10 from mimesis.data import (
11 EMOJI,
12 HASHTAGS,
13 HTTP_METHODS,
14 HTTP_STATUS_CODES,
15 HTTP_STATUS_MSGS,
16 NETWORK_PROTOCOLS,
17 TLD,
18 USER_AGENTS,
19 USERNAMES,
20 )
21 from mimesis.enums import Layer, MimeType, PortRange, TLDType
22 from mimesis.exceptions import NonEnumerableError
23 from mimesis.providers.base import BaseProvider
24 from mimesis.providers.file import File
25
26 __all__ = ['Internet']
27
28
29 class Internet(BaseProvider):
30 """Class for generating data related to the internet."""
31
32 def __init__(self, *args, **kwargs):
33 """Initialize attributes.
34
35 :param args: Arguments.
36 :param kwargs: Keyword arguments.
37 """
38 super().__init__(*args, **kwargs)
39 self.__file = File(seed=self.seed)
40
41 class Meta:
42 """Class for metadata."""
43
44 name = 'internet'
45
46 def content_type(self, mime_type: Optional[MimeType] = None) -> str:
47 """Get a random HTTP content type.
48
49 :return: Content type.
50
51 :Example:
52 Content-Type: application/json
53 """
54 fmt = self.__file.mime_type(type_=mime_type)
55 return 'Content-Type: {}'.format(fmt)
56
57 def http_status_message(self) -> str:
58 """Get a random HTTP status message.
59
60 :return: HTTP status message.
61
62 :Example:
63 200 OK
64 """
65 return self.random.choice(HTTP_STATUS_MSGS)
66
67 def http_status_code(self) -> int:
68 """Get a random HTTP status code.
69
70 :return: HTTP status.
71
72 :Example:
73 200
74 """
75 return self.random.choice(HTTP_STATUS_CODES)
76
77 def http_method(self) -> str:
78 """Get a random HTTP method.
79
80 :return: HTTP method.
81
82 :Example:
83 POST
84 """
85 return self.random.choice(HTTP_METHODS)
86
87 def ip_v4(self, with_port: bool = False) -> str:
88 """Generate a random IPv4 address.
89
90 :param with_port: Add port to IP.
91 :return: Random IPv4 address.
92
93 :Example:
94 19.121.223.58
95 """
96 ip = '.'.join(str(self.random.randint(0, 255)) for _ in range(4))
97
98 if with_port:
99 ip += ':{}'.format(self.port())
100
101 return ip
102
103 def ip_v6(self) -> str:
104 """Generate a random IPv6 address.
105
106 :return: Random IPv6 address.
107
108 :Example:
109 2001:c244:cf9d:1fb1:c56d:f52c:8a04:94f3
110 """
111 ipv6 = IPv6Address(
112 self.random.randint(
113 0, 2 ** 128 - 1,
114 ),
115 )
116 return str(ipv6)
117
118 def mac_address(self) -> str:
119 """Generate a random MAC address.
120
121 :return: Random MAC address.
122
123 :Example:
124 00:16:3e:25:e7:b1
125 """
126 mac_hex = [
127 0x00, 0x16, 0x3e,
128 self.random.randint(0x00, 0x7f),
129 self.random.randint(0x00, 0xff),
130 self.random.randint(0x00, 0xff),
131 ]
132 mac = map(lambda x: '%02x' % x, mac_hex)
133 return ':'.join(mac)
134
135 def emoji(self) -> str:
136 """Get a random emoji shortcut code.
137
138 :return: Emoji code.
139
140 :Example:
141 :kissing:
142 """
143 return self.random.choice(EMOJI)
144
145 @staticmethod
146 def image_placeholder(width: Union[int, str] = 1920,
147 height: Union[int, str] = 1080) -> str:
148 """Generate a link to the image placeholder.
149
150 :param width: Width of image.
151 :param height: Height of image.
152 :return: URL to image placeholder.
153 """
154 url = 'http://placehold.it/{width}x{height}'
155 return url.format(width=width, height=height)
156
157 @staticmethod
158 def stock_image(width: Union[int, str] = 1920,
159 height: Union[int, str] = 1080,
160 keywords: Optional[List[str]] = None,
161 writable: bool = False) -> Union[str, bytes]:
162 """Generate random stock image (JPEG) hosted on Unsplash.
163
164 .. note:: This method required an active HTTP connection.
165
166 :param width: Width of the image.
167 :param height: Height of the image.
168 :param keywords: List of search keywords.
169 :param writable: Return image as sequence ob bytes.
170 :return: Link to the image.
171 """
172 api = 'https://source.unsplash.com/{}x{}?{}'
173
174 if keywords is not None:
175 keywords_str = ','.join(keywords)
176 else:
177 keywords_str = ''
178
179 url = api.format(width, height, keywords_str)
180
181 try:
182 response = urllib.request.urlopen(url)
183 if writable:
184 return response.read()
185 url = response.geturl()
186 return url
187 except urllib.error.URLError:
188 raise urllib.error.URLError(
189 'Required an active HTTP connection')
190
191 def hashtags(self, quantity: int = 4) -> Union[str, list]:
192 """Generate a list of hashtags.
193
194 :param quantity: The quantity of hashtags.
195 :return: The list of hashtags.
196 :raises NonEnumerableError: if category is not in Hashtag.
197
198 :Example:
199 ['#love', '#sky', '#nice']
200 """
201 tags = ['#' + self.random.choice(HASHTAGS)
202 for _ in range(quantity)]
203
204 if int(quantity) == 1:
205 return tags[0]
206
207 return tags
208
209 def home_page(self, tld_type: Optional[TLDType] = None) -> str:
210 """Generate a random home page.
211
212 :param tld_type: TLD type.
213 :return: Random home page.
214
215 :Example:
216 http://www.fontir.info
217 """
218 resource = self.random.choice(USERNAMES)
219 domain = self.top_level_domain(
220 tld_type=tld_type,
221 )
222
223 return 'http://www.{}{}'.format(
224 resource, domain)
225
226 def top_level_domain(self, tld_type: Optional[TLDType] = None) -> str:
227 """Return random top level domain.
228
229 :param tld_type: Enum object DomainType
230 :return: Top level domain.
231 :raises NonEnumerableError: if tld_type not in DomainType.
232 """
233 key = self._validate_enum(item=tld_type, enum=TLDType)
234 return self.random.choice(TLD[key])
235
236 def user_agent(self) -> str:
237 """Get a random user agent.
238
239 :return: User agent.
240
241 :Example:
242 Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0)
243 Gecko/20100101 Firefox/15.0.1
244 """
245 return self.random.choice(USER_AGENTS)
246
247 def network_protocol(self, layer: Optional[Layer] = None) -> str:
248 """Get a random network protocol form OSI model.
249
250 :param layer: Enum object Layer.
251 :return: Protocol name.
252
253 :Example:
254 AMQP
255 """
256 key = self._validate_enum(item=layer, enum=Layer)
257 protocols = NETWORK_PROTOCOLS[key]
258 return self.random.choice(protocols)
259
260 def port(self, port_range: PortRange = PortRange.ALL) -> int:
261 """Generate random port.
262
263 :param port_range: Range enum object.
264 :return: Port number.
265 :raises NonEnumerableError: if port_range is not in PortRange.
266
267 :Example:
268 8080
269 """
270 if isinstance(port_range, PortRange):
271 return self.random.randint(*port_range.value)
272 else:
273 raise NonEnumerableError(PortRange)
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mimesis/providers/internet.py b/mimesis/providers/internet.py
--- a/mimesis/providers/internet.py
+++ b/mimesis/providers/internet.py
@@ -4,7 +4,7 @@
import urllib.error
import urllib.request
-from ipaddress import IPv6Address
+from ipaddress import IPv4Address, IPv6Address
from typing import List, Optional, Union
from mimesis.data import (
@@ -37,6 +37,8 @@
"""
super().__init__(*args, **kwargs)
self.__file = File(seed=self.seed)
+ self._MAX_IPV4 = (2 ** 32) - 1
+ self._MAX_IPV6 = (2 ** 128) - 1
class Meta:
"""Class for metadata."""
@@ -84,36 +86,60 @@
"""
return self.random.choice(HTTP_METHODS)
- def ip_v4(self, with_port: bool = False) -> str:
- """Generate a random IPv4 address.
+ def ip_v4_object(self) -> IPv4Address:
+ """Generate random IPv4Address object.
- :param with_port: Add port to IP.
- :return: Random IPv4 address.
+ See documentation for module ipaddress:
+ https://docs.python.org/3.7/library/ipaddress.html
+
+ :return: IPv4Address object.
+ """
+ return IPv4Address(
+ self.random.randint(0, self._MAX_IPV4),
+ )
+
+ def ip_v6_object(self) -> IPv6Address:
+ """Generate random IPv6Address object.
+
+ See documentation for module ipaddress:
+ https://docs.python.org/3.7/library/ipaddress.html
+
+ :return: IPv6Address object.
+ """
+ return IPv6Address(
+ self.random.randint(
+ 0, self._MAX_IPV6,
+ ),
+ )
+
+ def ip_v4(self, with_port: bool = False,
+ port_range: PortRange = PortRange.ALL) -> str:
+ """Generate a random IPv4 address as string.
+
+ :param port_range: PortRange enum object.
+ :param with_port: Add port from PortRange to IP.
+ :return: IPv4 address as string.
:Example:
- 19.121.223.58
+ 19.121.223.58 or 19.121.223.58:8000
"""
- ip = '.'.join(str(self.random.randint(0, 255)) for _ in range(4))
+ ip = str(self.ip_v4_object())
if with_port:
- ip += ':{}'.format(self.port())
+ port = self.port(port_range=port_range)
+ return '{}:{}'.format(ip, port)
return ip
def ip_v6(self) -> str:
- """Generate a random IPv6 address.
+ """Generate a random IPv6 address as string.
- :return: Random IPv6 address.
+ :return: IPv6 address string.
:Example:
2001:c244:cf9d:1fb1:c56d:f52c:8a04:94f3
"""
- ipv6 = IPv6Address(
- self.random.randint(
- 0, 2 ** 128 - 1,
- ),
- )
- return str(ipv6)
+ return str(self.ip_v6_object())
def mac_address(self) -> str:
"""Generate a random MAC address.
@@ -260,7 +286,7 @@
def port(self, port_range: PortRange = PortRange.ALL) -> int:
"""Generate random port.
- :param port_range: Range enum object.
+ :param port_range: PortRange enum object.
:return: Port number.
:raises NonEnumerableError: if port_range is not in PortRange.
|
{"golden_diff": "diff --git a/mimesis/providers/internet.py b/mimesis/providers/internet.py\n--- a/mimesis/providers/internet.py\n+++ b/mimesis/providers/internet.py\n@@ -4,7 +4,7 @@\n \n import urllib.error\n import urllib.request\n-from ipaddress import IPv6Address\n+from ipaddress import IPv4Address, IPv6Address\n from typing import List, Optional, Union\n \n from mimesis.data import (\n@@ -37,6 +37,8 @@\n \"\"\"\n super().__init__(*args, **kwargs)\n self.__file = File(seed=self.seed)\n+ self._MAX_IPV4 = (2 ** 32) - 1\n+ self._MAX_IPV6 = (2 ** 128) - 1\n \n class Meta:\n \"\"\"Class for metadata.\"\"\"\n@@ -84,36 +86,60 @@\n \"\"\"\n return self.random.choice(HTTP_METHODS)\n \n- def ip_v4(self, with_port: bool = False) -> str:\n- \"\"\"Generate a random IPv4 address.\n+ def ip_v4_object(self) -> IPv4Address:\n+ \"\"\"Generate random IPv4Address object.\n \n- :param with_port: Add port to IP.\n- :return: Random IPv4 address.\n+ See documentation for module ipaddress:\n+ https://docs.python.org/3.7/library/ipaddress.html\n+\n+ :return: IPv4Address object.\n+ \"\"\"\n+ return IPv4Address(\n+ self.random.randint(0, self._MAX_IPV4),\n+ )\n+\n+ def ip_v6_object(self) -> IPv6Address:\n+ \"\"\"Generate random IPv6Address object.\n+\n+ See documentation for module ipaddress:\n+ https://docs.python.org/3.7/library/ipaddress.html\n+\n+ :return: IPv6Address object.\n+ \"\"\"\n+ return IPv6Address(\n+ self.random.randint(\n+ 0, self._MAX_IPV6,\n+ ),\n+ )\n+\n+ def ip_v4(self, with_port: bool = False,\n+ port_range: PortRange = PortRange.ALL) -> str:\n+ \"\"\"Generate a random IPv4 address as string.\n+\n+ :param port_range: PortRange enum object.\n+ :param with_port: Add port from PortRange to IP.\n+ :return: IPv4 address as string.\n \n :Example:\n- 19.121.223.58\n+ 19.121.223.58 or 19.121.223.58:8000\n \"\"\"\n- ip = '.'.join(str(self.random.randint(0, 255)) for _ in range(4))\n+ ip = str(self.ip_v4_object())\n \n if with_port:\n- ip += ':{}'.format(self.port())\n+ port = self.port(port_range=port_range)\n+ return '{}:{}'.format(ip, port)\n \n return ip\n \n def ip_v6(self) -> str:\n- \"\"\"Generate a random IPv6 address.\n+ \"\"\"Generate a random IPv6 address as string.\n \n- :return: Random IPv6 address.\n+ :return: IPv6 address string.\n \n :Example:\n 2001:c244:cf9d:1fb1:c56d:f52c:8a04:94f3\n \"\"\"\n- ipv6 = IPv6Address(\n- self.random.randint(\n- 0, 2 ** 128 - 1,\n- ),\n- )\n- return str(ipv6)\n+ return str(self.ip_v6_object())\n \n def mac_address(self) -> str:\n \"\"\"Generate a random MAC address.\n@@ -260,7 +286,7 @@\n def port(self, port_range: PortRange = PortRange.ALL) -> int:\n \"\"\"Generate random port.\n \n- :param port_range: Range enum object.\n+ :param port_range: PortRange enum object.\n :return: Port number.\n :raises NonEnumerableError: if port_range is not in PortRange.\n", "issue": "Improve IPv4/IPv6 generators for Internet()\nWe can improve how `.ip_v4()` and `.ip_v6()` works.\r\n\r\nI suggest using `ipaddress` module from the standard library, just like this:\r\n\r\n```python\r\n def ip_v4(self, with_port: bool = False, stringify: bool = True) -> Union[str, IPv4Address]:\r\n \"\"\"Generate a random IPv4 address.\r\n\r\n :param stringify: Return IPv4 as string (True by default).\r\n :param with_port: Add port to IP.\r\n :return: Random IPv4 address.\r\n\r\n :Example:\r\n 19.121.223.58\r\n \"\"\"\r\n ipv4 = IPv4Address(self.random.getrandbits(32))\r\n\r\n if not stringify:\r\n return ipv4\r\n\r\n if with_port:\r\n ipv4 += ':{}'.format(self.port())\r\n\r\n return 
ipv4\r\n```\r\n\r\nThe idea is to support IP addresses as objects to make them more useful than just strings.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Provides data related to internet.\"\"\"\n\nimport urllib.error\nimport urllib.request\nfrom ipaddress import IPv6Address\nfrom typing import List, Optional, Union\n\nfrom mimesis.data import (\n EMOJI,\n HASHTAGS,\n HTTP_METHODS,\n HTTP_STATUS_CODES,\n HTTP_STATUS_MSGS,\n NETWORK_PROTOCOLS,\n TLD,\n USER_AGENTS,\n USERNAMES,\n)\nfrom mimesis.enums import Layer, MimeType, PortRange, TLDType\nfrom mimesis.exceptions import NonEnumerableError\nfrom mimesis.providers.base import BaseProvider\nfrom mimesis.providers.file import File\n\n__all__ = ['Internet']\n\n\nclass Internet(BaseProvider):\n \"\"\"Class for generating data related to the internet.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize attributes.\n\n :param args: Arguments.\n :param kwargs: Keyword arguments.\n \"\"\"\n super().__init__(*args, **kwargs)\n self.__file = File(seed=self.seed)\n\n class Meta:\n \"\"\"Class for metadata.\"\"\"\n\n name = 'internet'\n\n def content_type(self, mime_type: Optional[MimeType] = None) -> str:\n \"\"\"Get a random HTTP content type.\n\n :return: Content type.\n\n :Example:\n Content-Type: application/json\n \"\"\"\n fmt = self.__file.mime_type(type_=mime_type)\n return 'Content-Type: {}'.format(fmt)\n\n def http_status_message(self) -> str:\n \"\"\"Get a random HTTP status message.\n\n :return: HTTP status message.\n\n :Example:\n 200 OK\n \"\"\"\n return self.random.choice(HTTP_STATUS_MSGS)\n\n def http_status_code(self) -> int:\n \"\"\"Get a random HTTP status code.\n\n :return: HTTP status.\n\n :Example:\n 200\n \"\"\"\n return self.random.choice(HTTP_STATUS_CODES)\n\n def http_method(self) -> str:\n \"\"\"Get a random HTTP method.\n\n :return: HTTP method.\n\n :Example:\n POST\n \"\"\"\n return self.random.choice(HTTP_METHODS)\n\n def ip_v4(self, with_port: bool = False) -> str:\n \"\"\"Generate a random IPv4 address.\n\n :param with_port: Add port to IP.\n :return: Random IPv4 address.\n\n :Example:\n 19.121.223.58\n \"\"\"\n ip = '.'.join(str(self.random.randint(0, 255)) for _ in range(4))\n\n if with_port:\n ip += ':{}'.format(self.port())\n\n return ip\n\n def ip_v6(self) -> str:\n \"\"\"Generate a random IPv6 address.\n\n :return: Random IPv6 address.\n\n :Example:\n 2001:c244:cf9d:1fb1:c56d:f52c:8a04:94f3\n \"\"\"\n ipv6 = IPv6Address(\n self.random.randint(\n 0, 2 ** 128 - 1,\n ),\n )\n return str(ipv6)\n\n def mac_address(self) -> str:\n \"\"\"Generate a random MAC address.\n\n :return: Random MAC address.\n\n :Example:\n 00:16:3e:25:e7:b1\n \"\"\"\n mac_hex = [\n 0x00, 0x16, 0x3e,\n self.random.randint(0x00, 0x7f),\n self.random.randint(0x00, 0xff),\n self.random.randint(0x00, 0xff),\n ]\n mac = map(lambda x: '%02x' % x, mac_hex)\n return ':'.join(mac)\n\n def emoji(self) -> str:\n \"\"\"Get a random emoji shortcut code.\n\n :return: Emoji code.\n\n :Example:\n :kissing:\n \"\"\"\n return self.random.choice(EMOJI)\n\n @staticmethod\n def image_placeholder(width: Union[int, str] = 1920,\n height: Union[int, str] = 1080) -> str:\n \"\"\"Generate a link to the image placeholder.\n\n :param width: Width of image.\n :param height: Height of image.\n :return: URL to image placeholder.\n \"\"\"\n url = 'http://placehold.it/{width}x{height}'\n return url.format(width=width, height=height)\n\n @staticmethod\n def stock_image(width: Union[int, str] = 1920,\n height: Union[int, str] = 1080,\n 
keywords: Optional[List[str]] = None,\n writable: bool = False) -> Union[str, bytes]:\n \"\"\"Generate random stock image (JPEG) hosted on Unsplash.\n\n .. note:: This method required an active HTTP connection.\n\n :param width: Width of the image.\n :param height: Height of the image.\n :param keywords: List of search keywords.\n :param writable: Return image as sequence ob bytes.\n :return: Link to the image.\n \"\"\"\n api = 'https://source.unsplash.com/{}x{}?{}'\n\n if keywords is not None:\n keywords_str = ','.join(keywords)\n else:\n keywords_str = ''\n\n url = api.format(width, height, keywords_str)\n\n try:\n response = urllib.request.urlopen(url)\n if writable:\n return response.read()\n url = response.geturl()\n return url\n except urllib.error.URLError:\n raise urllib.error.URLError(\n 'Required an active HTTP connection')\n\n def hashtags(self, quantity: int = 4) -> Union[str, list]:\n \"\"\"Generate a list of hashtags.\n\n :param quantity: The quantity of hashtags.\n :return: The list of hashtags.\n :raises NonEnumerableError: if category is not in Hashtag.\n\n :Example:\n ['#love', '#sky', '#nice']\n \"\"\"\n tags = ['#' + self.random.choice(HASHTAGS)\n for _ in range(quantity)]\n\n if int(quantity) == 1:\n return tags[0]\n\n return tags\n\n def home_page(self, tld_type: Optional[TLDType] = None) -> str:\n \"\"\"Generate a random home page.\n\n :param tld_type: TLD type.\n :return: Random home page.\n\n :Example:\n http://www.fontir.info\n \"\"\"\n resource = self.random.choice(USERNAMES)\n domain = self.top_level_domain(\n tld_type=tld_type,\n )\n\n return 'http://www.{}{}'.format(\n resource, domain)\n\n def top_level_domain(self, tld_type: Optional[TLDType] = None) -> str:\n \"\"\"Return random top level domain.\n\n :param tld_type: Enum object DomainType\n :return: Top level domain.\n :raises NonEnumerableError: if tld_type not in DomainType.\n \"\"\"\n key = self._validate_enum(item=tld_type, enum=TLDType)\n return self.random.choice(TLD[key])\n\n def user_agent(self) -> str:\n \"\"\"Get a random user agent.\n\n :return: User agent.\n\n :Example:\n Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0)\n Gecko/20100101 Firefox/15.0.1\n \"\"\"\n return self.random.choice(USER_AGENTS)\n\n def network_protocol(self, layer: Optional[Layer] = None) -> str:\n \"\"\"Get a random network protocol form OSI model.\n\n :param layer: Enum object Layer.\n :return: Protocol name.\n\n :Example:\n AMQP\n \"\"\"\n key = self._validate_enum(item=layer, enum=Layer)\n protocols = NETWORK_PROTOCOLS[key]\n return self.random.choice(protocols)\n\n def port(self, port_range: PortRange = PortRange.ALL) -> int:\n \"\"\"Generate random port.\n\n :param port_range: Range enum object.\n :return: Port number.\n :raises NonEnumerableError: if port_range is not in PortRange.\n\n :Example:\n 8080\n \"\"\"\n if isinstance(port_range, PortRange):\n return self.random.randint(*port_range.value)\n else:\n raise NonEnumerableError(PortRange)\n", "path": "mimesis/providers/internet.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Provides data related to internet.\"\"\"\n\nimport urllib.error\nimport urllib.request\nfrom ipaddress import IPv4Address, IPv6Address\nfrom typing import List, Optional, Union\n\nfrom mimesis.data import (\n EMOJI,\n HASHTAGS,\n HTTP_METHODS,\n HTTP_STATUS_CODES,\n HTTP_STATUS_MSGS,\n NETWORK_PROTOCOLS,\n TLD,\n USER_AGENTS,\n USERNAMES,\n)\nfrom mimesis.enums import Layer, MimeType, PortRange, TLDType\nfrom mimesis.exceptions import NonEnumerableError\nfrom 
mimesis.providers.base import BaseProvider\nfrom mimesis.providers.file import File\n\n__all__ = ['Internet']\n\n\nclass Internet(BaseProvider):\n \"\"\"Class for generating data related to the internet.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize attributes.\n\n :param args: Arguments.\n :param kwargs: Keyword arguments.\n \"\"\"\n super().__init__(*args, **kwargs)\n self.__file = File(seed=self.seed)\n self._MAX_IPV4 = (2 ** 32) - 1\n self._MAX_IPV6 = (2 ** 128) - 1\n\n class Meta:\n \"\"\"Class for metadata.\"\"\"\n\n name = 'internet'\n\n def content_type(self, mime_type: Optional[MimeType] = None) -> str:\n \"\"\"Get a random HTTP content type.\n\n :return: Content type.\n\n :Example:\n Content-Type: application/json\n \"\"\"\n fmt = self.__file.mime_type(type_=mime_type)\n return 'Content-Type: {}'.format(fmt)\n\n def http_status_message(self) -> str:\n \"\"\"Get a random HTTP status message.\n\n :return: HTTP status message.\n\n :Example:\n 200 OK\n \"\"\"\n return self.random.choice(HTTP_STATUS_MSGS)\n\n def http_status_code(self) -> int:\n \"\"\"Get a random HTTP status code.\n\n :return: HTTP status.\n\n :Example:\n 200\n \"\"\"\n return self.random.choice(HTTP_STATUS_CODES)\n\n def http_method(self) -> str:\n \"\"\"Get a random HTTP method.\n\n :return: HTTP method.\n\n :Example:\n POST\n \"\"\"\n return self.random.choice(HTTP_METHODS)\n\n def ip_v4_object(self) -> IPv4Address:\n \"\"\"Generate random IPv4Address object.\n\n See documentation for module ipaddress:\n https://docs.python.org/3.7/library/ipaddress.html\n\n :return: IPv4Address object.\n \"\"\"\n return IPv4Address(\n self.random.randint(0, self._MAX_IPV4),\n )\n\n def ip_v6_object(self) -> IPv6Address:\n \"\"\"Generate random IPv6Address object.\n\n See documentation for module ipaddress:\n https://docs.python.org/3.7/library/ipaddress.html\n\n :return: IPv6Address object.\n \"\"\"\n return IPv6Address(\n self.random.randint(\n 0, self._MAX_IPV6,\n ),\n )\n\n def ip_v4(self, with_port: bool = False,\n port_range: PortRange = PortRange.ALL) -> str:\n \"\"\"Generate a random IPv4 address as string.\n\n :param port_range: PortRange enum object.\n :param with_port: Add port from PortRange to IP.\n :return: IPv4 address as string.\n\n :Example:\n 19.121.223.58 or 19.121.223.58:8000\n \"\"\"\n ip = str(self.ip_v4_object())\n\n if with_port:\n port = self.port(port_range=port_range)\n return '{}:{}'.format(ip, port)\n\n return ip\n\n def ip_v6(self) -> str:\n \"\"\"Generate a random IPv6 address as string.\n\n :return: IPv6 address string.\n\n :Example:\n 2001:c244:cf9d:1fb1:c56d:f52c:8a04:94f3\n \"\"\"\n return str(self.ip_v6_object())\n\n def mac_address(self) -> str:\n \"\"\"Generate a random MAC address.\n\n :return: Random MAC address.\n\n :Example:\n 00:16:3e:25:e7:b1\n \"\"\"\n mac_hex = [\n 0x00, 0x16, 0x3e,\n self.random.randint(0x00, 0x7f),\n self.random.randint(0x00, 0xff),\n self.random.randint(0x00, 0xff),\n ]\n mac = map(lambda x: '%02x' % x, mac_hex)\n return ':'.join(mac)\n\n def emoji(self) -> str:\n \"\"\"Get a random emoji shortcut code.\n\n :return: Emoji code.\n\n :Example:\n :kissing:\n \"\"\"\n return self.random.choice(EMOJI)\n\n @staticmethod\n def image_placeholder(width: Union[int, str] = 1920,\n height: Union[int, str] = 1080) -> str:\n \"\"\"Generate a link to the image placeholder.\n\n :param width: Width of image.\n :param height: Height of image.\n :return: URL to image placeholder.\n \"\"\"\n url = 'http://placehold.it/{width}x{height}'\n return 
url.format(width=width, height=height)\n\n @staticmethod\n def stock_image(width: Union[int, str] = 1920,\n height: Union[int, str] = 1080,\n keywords: Optional[List[str]] = None,\n writable: bool = False) -> Union[str, bytes]:\n \"\"\"Generate random stock image (JPEG) hosted on Unsplash.\n\n .. note:: This method required an active HTTP connection.\n\n :param width: Width of the image.\n :param height: Height of the image.\n :param keywords: List of search keywords.\n :param writable: Return image as sequence ob bytes.\n :return: Link to the image.\n \"\"\"\n api = 'https://source.unsplash.com/{}x{}?{}'\n\n if keywords is not None:\n keywords_str = ','.join(keywords)\n else:\n keywords_str = ''\n\n url = api.format(width, height, keywords_str)\n\n try:\n response = urllib.request.urlopen(url)\n if writable:\n return response.read()\n url = response.geturl()\n return url\n except urllib.error.URLError:\n raise urllib.error.URLError(\n 'Required an active HTTP connection')\n\n def hashtags(self, quantity: int = 4) -> Union[str, list]:\n \"\"\"Generate a list of hashtags.\n\n :param quantity: The quantity of hashtags.\n :return: The list of hashtags.\n :raises NonEnumerableError: if category is not in Hashtag.\n\n :Example:\n ['#love', '#sky', '#nice']\n \"\"\"\n tags = ['#' + self.random.choice(HASHTAGS)\n for _ in range(quantity)]\n\n if int(quantity) == 1:\n return tags[0]\n\n return tags\n\n def home_page(self, tld_type: Optional[TLDType] = None) -> str:\n \"\"\"Generate a random home page.\n\n :param tld_type: TLD type.\n :return: Random home page.\n\n :Example:\n http://www.fontir.info\n \"\"\"\n resource = self.random.choice(USERNAMES)\n domain = self.top_level_domain(\n tld_type=tld_type,\n )\n\n return 'http://www.{}{}'.format(\n resource, domain)\n\n def top_level_domain(self, tld_type: Optional[TLDType] = None) -> str:\n \"\"\"Return random top level domain.\n\n :param tld_type: Enum object DomainType\n :return: Top level domain.\n :raises NonEnumerableError: if tld_type not in DomainType.\n \"\"\"\n key = self._validate_enum(item=tld_type, enum=TLDType)\n return self.random.choice(TLD[key])\n\n def user_agent(self) -> str:\n \"\"\"Get a random user agent.\n\n :return: User agent.\n\n :Example:\n Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0)\n Gecko/20100101 Firefox/15.0.1\n \"\"\"\n return self.random.choice(USER_AGENTS)\n\n def network_protocol(self, layer: Optional[Layer] = None) -> str:\n \"\"\"Get a random network protocol form OSI model.\n\n :param layer: Enum object Layer.\n :return: Protocol name.\n\n :Example:\n AMQP\n \"\"\"\n key = self._validate_enum(item=layer, enum=Layer)\n protocols = NETWORK_PROTOCOLS[key]\n return self.random.choice(protocols)\n\n def port(self, port_range: PortRange = PortRange.ALL) -> int:\n \"\"\"Generate random port.\n\n :param port_range: PortRange enum object.\n :return: Port number.\n :raises NonEnumerableError: if port_range is not in PortRange.\n\n :Example:\n 8080\n \"\"\"\n if isinstance(port_range, PortRange):\n return self.random.randint(*port_range.value)\n else:\n raise NonEnumerableError(PortRange)\n", "path": "mimesis/providers/internet.py"}]}
| 3,045 | 917 |
gh_patches_debug_284
|
rasdani/github-patches
|
git_diff
|
voxel51__fiftyone-1652
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] `ImportError: cannot import name 'soft_unicode' from 'markupsafe'`
When `Jinja2<3` is installed a user will encounter this error.
```py
>>> import fiftyone as fo
Uncaught exception
Traceback (most recent call last):
File "/home/user/.local/lib/python3.8/site-packages/fiftyone/service/main.py", line 43, in <module>
from fiftyone.core.service import Service
File "/home/user/.local/lib/python3.8/site-packages/fiftyone/__init__.py", line 25, in <module>
from fiftyone.__public__ import *
File "/home/user/.local/lib/python3.8/site-packages/fiftyone/__public__.py", line 172, in <module>
from .core.session import (
File "/home/user/.local/lib/python3.8/site-packages/fiftyone/core/session.py", line 16, in <module>
from jinja2 import Template
File "/home/user/.local/lib/python3.8/site-packages/jinja2/__init__.py", line 12, in <module>
from .environment import Environment
File "/home/user/.local/lib/python3.8/site-packages/jinja2/environment.py", line 25, in <module>
from .defaults import BLOCK_END_STRING
File "/home/user/.local/lib/python3.8/site-packages/jinja2/defaults.py", line 3, in <module>
from .filters import FILTERS as DEFAULT_FILTERS # noqa: F401
File "/home/user/.local/lib/python3.8/site-packages/jinja2/filters.py", line 13, in <module>
from markupsafe import soft_unicode
ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/home/user/.local/lib/python3.8/site-packages/markupsafe/__init__.py)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 """
3 Installs FiftyOne.
4
5 | Copyright 2017-2022, Voxel51, Inc.
6 | `voxel51.com <https://voxel51.com/>`_
7 |
8 """
9 import os
10 from pkg_resources import DistributionNotFound, get_distribution
11 import re
12 from setuptools import setup, find_packages
13
14
15 VERSION = "0.15.0" # updated by https://github.com/voxel51/fiftyone/pull/1615
16
17
18 def get_version():
19 if "RELEASE_VERSION" in os.environ:
20 version = os.environ["RELEASE_VERSION"]
21 if not version.startswith(VERSION):
22 raise ValueError(
23 "Release version does not match version: %s and %s"
24 % (version, VERSION)
25 )
26 return version
27
28 return VERSION
29
30
31 INSTALL_REQUIRES = [
32 # third-party packages
33 "aiofiles",
34 "argcomplete",
35 "boto3",
36 "Deprecated",
37 "eventlet",
38 "future",
39 "Jinja2",
40 "kaleido",
41 "matplotlib",
42 "mongoengine==0.20.0",
43 "motor>=2.3,<3",
44 "ndjson",
45 "numpy",
46 "packaging",
47 "pandas",
48 "Pillow>=6.2",
49 "plotly>=4.14,<5",
50 "pprintpp",
51 "psutil",
52 "pymongo>=3.11,<4",
53 "pytz",
54 "PyYAML",
55 "retrying",
56 "scikit-learn",
57 "scikit-image",
58 "setuptools",
59 "tabulate",
60 "tornado>=5.1.1,<7",
61 "xmltodict",
62 "universal-analytics-python3>=1.0.1,<2",
63 # internal packages
64 "fiftyone-brain>=0.8,<0.9",
65 "fiftyone-db>=0.3,<0.4",
66 "voxel51-eta>=0.6.3,<0.7",
67 ]
68
69
70 CHOOSE_INSTALL_REQUIRES = [
71 (
72 (
73 "opencv-python",
74 "opencv-contrib-python",
75 "opencv-contrib-python-headless",
76 ),
77 "opencv-python-headless",
78 )
79 ]
80
81
82 def choose_requirement(mains, secondary):
83 chosen = secondary
84 for main in mains:
85 try:
86 name = re.split(r"[!<>=]", main)[0]
87 get_distribution(name)
88 chosen = main
89 break
90 except DistributionNotFound:
91 pass
92
93 return str(chosen)
94
95
96 def get_install_requirements(install_requires, choose_install_requires):
97 for mains, secondary in choose_install_requires:
98 install_requires.append(choose_requirement(mains, secondary))
99
100 return install_requires
101
102
103 EXTRAS_REQUIREMENTS = {"desktop": ["fiftyone-desktop>=0.19.2,<0.20"]}
104
105
106 with open("README.md", "r") as fh:
107 long_description = fh.read()
108
109
110 setup(
111 name="fiftyone",
112 version=get_version(),
113 description=(
114 "FiftyOne: the open-source tool for building high-quality datasets "
115 "and computer vision models"
116 ),
117 author="Voxel51, Inc.",
118 author_email="[email protected]",
119 url="https://github.com/voxel51/fiftyone",
120 extras_require=EXTRAS_REQUIREMENTS,
121 license="Apache",
122 long_description=long_description,
123 long_description_content_type="text/markdown",
124 packages=find_packages(
125 exclude=["app", "eta", "package", "requirements", "tests", "tools"]
126 )
127 + ["fiftyone.recipes", "fiftyone.tutorials"],
128 package_dir={
129 "fiftyone.recipes": "docs/source/recipes",
130 "fiftyone.tutorials": "docs/source/tutorials",
131 },
132 install_requires=get_install_requirements(
133 INSTALL_REQUIRES, CHOOSE_INSTALL_REQUIRES
134 ),
135 include_package_data=True,
136 classifiers=[
137 "Development Status :: 4 - Beta",
138 "Intended Audience :: Developers",
139 "Intended Audience :: Science/Research",
140 "License :: OSI Approved :: Apache Software License",
141 "Topic :: Scientific/Engineering :: Artificial Intelligence",
142 "Topic :: Scientific/Engineering :: Image Processing",
143 "Topic :: Scientific/Engineering :: Image Recognition",
144 "Topic :: Scientific/Engineering :: Information Analysis",
145 "Topic :: Scientific/Engineering :: Visualization",
146 "Operating System :: MacOS :: MacOS X",
147 "Operating System :: POSIX :: Linux",
148 "Operating System :: Microsoft :: Windows",
149 "Programming Language :: Python :: 3",
150 "Programming Language :: Python :: 3.6",
151 "Programming Language :: Python :: 3.7",
152 "Programming Language :: Python :: 3.8",
153 "Programming Language :: Python :: 3.9",
154 ],
155 entry_points={"console_scripts": ["fiftyone=fiftyone.core.cli:main"]},
156 python_requires=">=3.6",
157 )
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -36,7 +36,7 @@
"Deprecated",
"eventlet",
"future",
- "Jinja2",
+ "Jinja2>=3",
"kaleido",
"matplotlib",
"mongoengine==0.20.0",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -36,7 +36,7 @@\n \"Deprecated\",\n \"eventlet\",\n \"future\",\n- \"Jinja2\",\n+ \"Jinja2>=3\",\n \"kaleido\",\n \"matplotlib\",\n \"mongoengine==0.20.0\",\n", "issue": "[BUG] `ImportError: cannot import name 'soft_unicode' from 'markupsafe'`\nWhen `Jinja2<3` is installed a user will encounter this error.\r\n\r\n```py\r\n>>> import fiftyone as fo\r\nUncaught exception\r\nTraceback (most recent call last):\r\n File \"/home/user/.local/lib/python3.8/site-packages/fiftyone/service/main.py\", line 43, in <module>\r\n from fiftyone.core.service import Service\r\n File \"/home/user/.local/lib/python3.8/site-packages/fiftyone/__init__.py\", line 25, in <module>\r\n from fiftyone.__public__ import *\r\n File \"/home/user/.local/lib/python3.8/site-packages/fiftyone/__public__.py\", line 172, in <module>\r\n from .core.session import (\r\n File \"/home/user/.local/lib/python3.8/site-packages/fiftyone/core/session.py\", line 16, in <module>\r\n from jinja2 import Template\r\n File \"/home/user/.local/lib/python3.8/site-packages/jinja2/__init__.py\", line 12, in <module>\r\n from .environment import Environment\r\n File \"/home/user/.local/lib/python3.8/site-packages/jinja2/environment.py\", line 25, in <module>\r\n from .defaults import BLOCK_END_STRING\r\n File \"/home/user/.local/lib/python3.8/site-packages/jinja2/defaults.py\", line 3, in <module>\r\n from .filters import FILTERS as DEFAULT_FILTERS # noqa: F401\r\n File \"/home/user/.local/lib/python3.8/site-packages/jinja2/filters.py\", line 13, in <module>\r\n from markupsafe import soft_unicode\r\nImportError: cannot import name 'soft_unicode' from 'markupsafe' (/home/user/.local/lib/python3.8/site-packages/markupsafe/__init__.py)\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nInstalls FiftyOne.\n\n| Copyright 2017-2022, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nimport os\nfrom pkg_resources import DistributionNotFound, get_distribution\nimport re\nfrom setuptools import setup, find_packages\n\n\nVERSION = \"0.15.0\" # updated by https://github.com/voxel51/fiftyone/pull/1615\n\n\ndef get_version():\n if \"RELEASE_VERSION\" in os.environ:\n version = os.environ[\"RELEASE_VERSION\"]\n if not version.startswith(VERSION):\n raise ValueError(\n \"Release version does not match version: %s and %s\"\n % (version, VERSION)\n )\n return version\n\n return VERSION\n\n\nINSTALL_REQUIRES = [\n # third-party packages\n \"aiofiles\",\n \"argcomplete\",\n \"boto3\",\n \"Deprecated\",\n \"eventlet\",\n \"future\",\n \"Jinja2\",\n \"kaleido\",\n \"matplotlib\",\n \"mongoengine==0.20.0\",\n \"motor>=2.3,<3\",\n \"ndjson\",\n \"numpy\",\n \"packaging\",\n \"pandas\",\n \"Pillow>=6.2\",\n \"plotly>=4.14,<5\",\n \"pprintpp\",\n \"psutil\",\n \"pymongo>=3.11,<4\",\n \"pytz\",\n \"PyYAML\",\n \"retrying\",\n \"scikit-learn\",\n \"scikit-image\",\n \"setuptools\",\n \"tabulate\",\n \"tornado>=5.1.1,<7\",\n \"xmltodict\",\n \"universal-analytics-python3>=1.0.1,<2\",\n # internal packages\n \"fiftyone-brain>=0.8,<0.9\",\n \"fiftyone-db>=0.3,<0.4\",\n \"voxel51-eta>=0.6.3,<0.7\",\n]\n\n\nCHOOSE_INSTALL_REQUIRES = [\n (\n (\n \"opencv-python\",\n \"opencv-contrib-python\",\n \"opencv-contrib-python-headless\",\n ),\n \"opencv-python-headless\",\n )\n]\n\n\ndef choose_requirement(mains, secondary):\n chosen = secondary\n for main in mains:\n try:\n name = re.split(r\"[!<>=]\", main)[0]\n get_distribution(name)\n chosen = main\n break\n 
except DistributionNotFound:\n pass\n\n return str(chosen)\n\n\ndef get_install_requirements(install_requires, choose_install_requires):\n for mains, secondary in choose_install_requires:\n install_requires.append(choose_requirement(mains, secondary))\n\n return install_requires\n\n\nEXTRAS_REQUIREMENTS = {\"desktop\": [\"fiftyone-desktop>=0.19.2,<0.20\"]}\n\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\n\nsetup(\n name=\"fiftyone\",\n version=get_version(),\n description=(\n \"FiftyOne: the open-source tool for building high-quality datasets \"\n \"and computer vision models\"\n ),\n author=\"Voxel51, Inc.\",\n author_email=\"[email protected]\",\n url=\"https://github.com/voxel51/fiftyone\",\n extras_require=EXTRAS_REQUIREMENTS,\n license=\"Apache\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n packages=find_packages(\n exclude=[\"app\", \"eta\", \"package\", \"requirements\", \"tests\", \"tools\"]\n )\n + [\"fiftyone.recipes\", \"fiftyone.tutorials\"],\n package_dir={\n \"fiftyone.recipes\": \"docs/source/recipes\",\n \"fiftyone.tutorials\": \"docs/source/tutorials\",\n },\n install_requires=get_install_requirements(\n INSTALL_REQUIRES, CHOOSE_INSTALL_REQUIRES\n ),\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Image Processing\",\n \"Topic :: Scientific/Engineering :: Image Recognition\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n entry_points={\"console_scripts\": [\"fiftyone=fiftyone.core.cli:main\"]},\n python_requires=\">=3.6\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nInstalls FiftyOne.\n\n| Copyright 2017-2022, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nimport os\nfrom pkg_resources import DistributionNotFound, get_distribution\nimport re\nfrom setuptools import setup, find_packages\n\n\nVERSION = \"0.15.0\" # updated by https://github.com/voxel51/fiftyone/pull/1615\n\n\ndef get_version():\n if \"RELEASE_VERSION\" in os.environ:\n version = os.environ[\"RELEASE_VERSION\"]\n if not version.startswith(VERSION):\n raise ValueError(\n \"Release version does not match version: %s and %s\"\n % (version, VERSION)\n )\n return version\n\n return VERSION\n\n\nINSTALL_REQUIRES = [\n # third-party packages\n \"aiofiles\",\n \"argcomplete\",\n \"boto3\",\n \"Deprecated\",\n \"eventlet\",\n \"future\",\n \"Jinja2>=3\",\n \"kaleido\",\n \"matplotlib\",\n \"mongoengine==0.20.0\",\n \"motor>=2.3,<3\",\n \"ndjson\",\n \"numpy\",\n \"packaging\",\n \"pandas\",\n \"Pillow>=6.2\",\n \"plotly>=4.14,<5\",\n \"pprintpp\",\n \"psutil\",\n \"pymongo>=3.11,<4\",\n \"pytz\",\n \"PyYAML\",\n \"retrying\",\n \"scikit-learn\",\n \"scikit-image\",\n \"setuptools\",\n \"tabulate\",\n \"tornado>=5.1.1,<7\",\n \"xmltodict\",\n 
\"universal-analytics-python3>=1.0.1,<2\",\n # internal packages\n \"fiftyone-brain>=0.8,<0.9\",\n \"fiftyone-db>=0.3,<0.4\",\n \"voxel51-eta>=0.6.3,<0.7\",\n]\n\n\nCHOOSE_INSTALL_REQUIRES = [\n (\n (\n \"opencv-python\",\n \"opencv-contrib-python\",\n \"opencv-contrib-python-headless\",\n ),\n \"opencv-python-headless\",\n )\n]\n\n\ndef choose_requirement(mains, secondary):\n chosen = secondary\n for main in mains:\n try:\n name = re.split(r\"[!<>=]\", main)[0]\n get_distribution(name)\n chosen = main\n break\n except DistributionNotFound:\n pass\n\n return str(chosen)\n\n\ndef get_install_requirements(install_requires, choose_install_requires):\n for mains, secondary in choose_install_requires:\n install_requires.append(choose_requirement(mains, secondary))\n\n return install_requires\n\n\nEXTRAS_REQUIREMENTS = {\"desktop\": [\"fiftyone-desktop>=0.19.2,<0.20\"]}\n\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\n\nsetup(\n name=\"fiftyone\",\n version=get_version(),\n description=(\n \"FiftyOne: the open-source tool for building high-quality datasets \"\n \"and computer vision models\"\n ),\n author=\"Voxel51, Inc.\",\n author_email=\"[email protected]\",\n url=\"https://github.com/voxel51/fiftyone\",\n extras_require=EXTRAS_REQUIREMENTS,\n license=\"Apache\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n packages=find_packages(\n exclude=[\"app\", \"eta\", \"package\", \"requirements\", \"tests\", \"tools\"]\n )\n + [\"fiftyone.recipes\", \"fiftyone.tutorials\"],\n package_dir={\n \"fiftyone.recipes\": \"docs/source/recipes\",\n \"fiftyone.tutorials\": \"docs/source/tutorials\",\n },\n install_requires=get_install_requirements(\n INSTALL_REQUIRES, CHOOSE_INSTALL_REQUIRES\n ),\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Image Processing\",\n \"Topic :: Scientific/Engineering :: Image Recognition\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n entry_points={\"console_scripts\": [\"fiftyone=fiftyone.core.cli:main\"]},\n python_requires=\">=3.6\",\n)\n", "path": "setup.py"}]}
| 2,133 | 83 |
gh_patches_debug_31058
|
rasdani/github-patches
|
git_diff
|
openfun__richie-160
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pin dependencies in package.json to get auto-updates from Greenkeeper
## Bug Report
**Expected behavior/code**
Greenkeeper should update every possible version for every dependency
**Actual Behavior**
It only updates major versions (semver)
**Steps to Reproduce**
N/A (see Greenkeeper bot Pull Requests)
**Environment**
N/A
**Possible Solution**
https://greenkeeper.io/faq.html#pinning-versions
Pinning the versions (removing ^ will cause greenkeeper to upgrade all dependencies)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/core/helpers.py`
Content:
```
1 """
2 Helpers that can be useful throughout the whole project
3 """
4 from django.utils.text import slugify
5
6 from cms.api import add_plugin, create_page, create_title
7
8
9 def create_i18n_page(content, is_homepage=False, **kwargs):
10 """
11 Creating a multilingual page is not straightforward so we thought we should have a helper
12
13 This content argument should be a dictionary with the title of the page in each language:
14
15 {
16 'en': 'About',
17 'fr': 'A propos',
18 'de': 'Impressum',
19 }
20
21 """
22 # Create the page with a first language from what is given to us
23 languages = list(content.keys()) # convert `dict_keys` to list so it can be poped
24 first_language = languages.pop(0)
25 slug = slugify(content[first_language])
26 page = create_page(
27 language=first_language,
28 menu_title=content[first_language],
29 title=content[first_language],
30 slug=slug,
31 **kwargs
32 )
33
34 if is_homepage is True:
35 page.set_as_homepage()
36
37 # Add a title for each additional language
38 for language in languages:
39 create_title(
40 language=language,
41 menu_title=content[language],
42 title=content[language],
43 slug=slugify(content[language]),
44 page=page,
45 )
46 # Publish page in each additional language
47 if kwargs.get("published") is True:
48 page.publish(language)
49
50 # Add a plugin for each language (including the first language this time...)
51 placeholder = page.placeholders.get(slot="maincontent")
52 for language in content.keys():
53 add_plugin(
54 body="[{:s}] Lorem ipsum...".format(language),
55 language=language,
56 placeholder=placeholder,
57 plugin_type="TextPlugin",
58 )
59
60 return page
61
```
Path: `apps/core/management/commands/create_demo_site.py`
Content:
```
1 """
2 create_demo_site management command
3 """
4 import logging
5
6 from django.core.management.base import BaseCommand, CommandError
7 from django.conf import settings
8 from django.contrib.sites.models import Site
9
10 from cms import models as cms_models
11
12 from apps.courses.models import Course, Organization, Subject
13 from apps.persons.models import Person
14 from apps.courses.factories import OrganizationFactory
15
16 from ...helpers import create_i18n_page
17
18 logger = logging.getLogger("richie.commands.core.create_demo_site")
19
20 NB_ORGANIZATIONS = 8
21 PAGE_INFOS = {
22 "home": {
23 "content": {"en": "Home", "fr": "Accueil"},
24 "kwargs": {"template": "richie/fullwidth.html"},
25 },
26 "news": {
27 "content": {"en": "News", "fr": "Actualités"},
28 "kwargs": {"template": "richie/fullwidth.html"},
29 },
30 "courses": {
31 "content": {"en": "Courses", "fr": "Cours"},
32 "kwargs": {
33 "reverse_id": Course.ROOT_REVERSE_ID,
34 "template": "richie/fullwidth.html",
35 },
36 },
37 "subjects": {
38 "content": {"en": "Subjects", "fr": "Sujets"},
39 "kwargs": {
40 "reverse_id": Subject.ROOT_REVERSE_ID,
41 "template": "richie/fullwidth.html",
42 },
43 },
44 "organizations": {
45 "content": {"en": "Organizations", "fr": "Etablissements"},
46 "kwargs": {
47 "reverse_id": Organization.ROOT_REVERSE_ID,
48 "template": "richie/fullwidth.html",
49 },
50 },
51 "persons": {
52 "content": {"en": "Persons", "fr": "Personnes"},
53 "kwargs": {
54 "reverse_id": Person.ROOT_REVERSE_ID,
55 "template": "richie/fullwidth.html",
56 },
57 },
58 "dashboard": {
59 "content": {"en": "Dashboard", "fr": "Tableau de bord"},
60 "cms": False,
61 "kwargs": {"template": "richie/fullwidth.html"},
62 },
63 "about": {
64 "content": {"en": "About", "fr": "A propos"},
65 "kwargs": {"template": "richie/fullwidth.html"},
66 },
67 }
68
69
70 # pylint: disable=no-member
71 #
72 # Looks like pylint is not relevant at guessing object types when cascading
73 # methods over querysets: Instance of 'list' has no 'delete' member (no-member).
74 # We choose to ignore this false positive warning.
75 def clear_cms_data():
76 """Clear all CMS data (CMS models + organizations)"""
77
78 cms_models.Page.objects.all().delete()
79 cms_models.Title.objects.all().delete()
80 cms_models.CMSPlugin.objects.all().delete()
81 cms_models.Placeholder.objects.all().delete()
82 Course.objects.all().delete()
83 Organization.objects.all().delete()
84 Subject.objects.all().delete()
85
86
87 def create_demo_site():
88 """
89 Create a simple site tree structure for developpers to work in realistic environment.
90
91 We create multilingual pages, add organizations under the related page and add
92 plugins to each page.
93 """
94 site = Site.objects.get(id=1)
95
96 # Create pages as described in PAGES_INFOS
97 pages_created = {}
98 for name, info in PAGE_INFOS.items():
99 page = create_i18n_page(
100 info["content"],
101 is_homepage=(name == "home"),
102 in_navigation=True,
103 published=True,
104 site=site,
105 **info["kwargs"]
106 )
107
108 pages_created[name] = page
109
110 # Create organizations under the `organizations` page
111 for i, _ in enumerate(range(NB_ORGANIZATIONS)):
112 page = create_i18n_page(
113 {
114 "en": "Organization #{:d}".format(i),
115 "fr": "Organisation #{:d}".format(i),
116 },
117 parent=pages_created["organizations"],
118 published=True,
119 site=site,
120 template="courses/cms/organization_detail.html",
121 )
122 OrganizationFactory(extended_object=page)
123
124
125 class Command(BaseCommand):
126 """Create default pages for FUN frontend"""
127
128 help = __doc__
129
130 def add_arguments(self, parser):
131
132 parser.add_argument(
133 "-f",
134 "--force",
135 action="store_true",
136 default=False,
137 help="Force command execution despite DEBUG is set to False",
138 )
139
140 def handle(self, *args, **options):
141
142 if not settings.DEBUG and not options["force"]:
143 raise CommandError(
144 (
145 "This command is not meant to be used in production environment "
146 "except you know what you are doing, if so use --force parameter"
147 )
148 )
149
150 clear_cms_data()
151 create_demo_site()
152
153 logger.info("done")
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/core/helpers.py b/apps/core/helpers.py
--- a/apps/core/helpers.py
+++ b/apps/core/helpers.py
@@ -6,7 +6,7 @@
from cms.api import add_plugin, create_page, create_title
-def create_i18n_page(content, is_homepage=False, **kwargs):
+def create_i18n_page(content, has_maincontent=True, is_homepage=False, **kwargs):
"""
Creating a multilingual page is not straightforward so we thought we should have a helper
@@ -47,14 +47,17 @@
if kwargs.get("published") is True:
page.publish(language)
- # Add a plugin for each language (including the first language this time...)
- placeholder = page.placeholders.get(slot="maincontent")
- for language in content.keys():
- add_plugin(
- body="[{:s}] Lorem ipsum...".format(language),
- language=language,
- placeholder=placeholder,
- plugin_type="TextPlugin",
- )
+ # Some pages have specific purpose templates that do not have a user-editable maincontent area
+ # One such page is the "Courses" page which uses the Search template
+ if has_maincontent:
+ # Add a plugin for each language
+ placeholder = page.placeholders.get(slot="maincontent")
+ for language in content.keys():
+ add_plugin(
+ body="[{:s}] Lorem ipsum...".format(language),
+ language=language,
+ placeholder=placeholder,
+ plugin_type="TextPlugin",
+ )
return page
diff --git a/apps/core/management/commands/create_demo_site.py b/apps/core/management/commands/create_demo_site.py
--- a/apps/core/management/commands/create_demo_site.py
+++ b/apps/core/management/commands/create_demo_site.py
@@ -30,8 +30,9 @@
"courses": {
"content": {"en": "Courses", "fr": "Cours"},
"kwargs": {
+ "has_maincontent": False,
"reverse_id": Course.ROOT_REVERSE_ID,
- "template": "richie/fullwidth.html",
+ "template": "search/search.html",
},
},
"subjects": {
|
{"golden_diff": "diff --git a/apps/core/helpers.py b/apps/core/helpers.py\n--- a/apps/core/helpers.py\n+++ b/apps/core/helpers.py\n@@ -6,7 +6,7 @@\n from cms.api import add_plugin, create_page, create_title\n \n \n-def create_i18n_page(content, is_homepage=False, **kwargs):\n+def create_i18n_page(content, has_maincontent=True, is_homepage=False, **kwargs):\n \"\"\"\n Creating a multilingual page is not straightforward so we thought we should have a helper\n \n@@ -47,14 +47,17 @@\n if kwargs.get(\"published\") is True:\n page.publish(language)\n \n- # Add a plugin for each language (including the first language this time...)\n- placeholder = page.placeholders.get(slot=\"maincontent\")\n- for language in content.keys():\n- add_plugin(\n- body=\"[{:s}] Lorem ipsum...\".format(language),\n- language=language,\n- placeholder=placeholder,\n- plugin_type=\"TextPlugin\",\n- )\n+ # Some pages have specific purpose templates that do not have a user-editable maincontent area\n+ # One such page is the \"Courses\" page which uses the Search template\n+ if has_maincontent:\n+ # Add a plugin for each language\n+ placeholder = page.placeholders.get(slot=\"maincontent\")\n+ for language in content.keys():\n+ add_plugin(\n+ body=\"[{:s}] Lorem ipsum...\".format(language),\n+ language=language,\n+ placeholder=placeholder,\n+ plugin_type=\"TextPlugin\",\n+ )\n \n return page\ndiff --git a/apps/core/management/commands/create_demo_site.py b/apps/core/management/commands/create_demo_site.py\n--- a/apps/core/management/commands/create_demo_site.py\n+++ b/apps/core/management/commands/create_demo_site.py\n@@ -30,8 +30,9 @@\n \"courses\": {\n \"content\": {\"en\": \"Courses\", \"fr\": \"Cours\"},\n \"kwargs\": {\n+ \"has_maincontent\": False,\n \"reverse_id\": Course.ROOT_REVERSE_ID,\n- \"template\": \"richie/fullwidth.html\",\n+ \"template\": \"search/search.html\",\n },\n },\n \"subjects\": {\n", "issue": "Pin dependencies in package.json to get auto-updates from Greenkeeper\n## Bug Report\r\n\r\n**Expected behavior/code**\r\nGreenkeeper should update every possible version for every dependency\r\n\r\n**Actual Behavior**\r\nIt only updates major versions (semver)\r\n\r\n**Steps to Reproduce**\r\nN/A (see Greenkeeper bot Pull Requests)\r\n\r\n**Environment**\r\nN/A\r\n\r\n**Possible Solution**\r\nhttps://greenkeeper.io/faq.html#pinning-versions\r\nPinning the versions (removing ^ will cause greenkeeper to upgrade all dependencies)\r\n\n", "before_files": [{"content": "\"\"\"\nHelpers that can be useful throughout the whole project\n\"\"\"\nfrom django.utils.text import slugify\n\nfrom cms.api import add_plugin, create_page, create_title\n\n\ndef create_i18n_page(content, is_homepage=False, **kwargs):\n \"\"\"\n Creating a multilingual page is not straightforward so we thought we should have a helper\n\n This content argument should be a dictionary with the title of the page in each language:\n\n {\n 'en': 'About',\n 'fr': 'A propos',\n 'de': 'Impressum',\n }\n\n \"\"\"\n # Create the page with a first language from what is given to us\n languages = list(content.keys()) # convert `dict_keys` to list so it can be poped\n first_language = languages.pop(0)\n slug = slugify(content[first_language])\n page = create_page(\n language=first_language,\n menu_title=content[first_language],\n title=content[first_language],\n slug=slug,\n **kwargs\n )\n\n if is_homepage is True:\n page.set_as_homepage()\n\n # Add a title for each additional language\n for language in languages:\n create_title(\n language=language,\n 
menu_title=content[language],\n title=content[language],\n slug=slugify(content[language]),\n page=page,\n )\n # Publish page in each additional language\n if kwargs.get(\"published\") is True:\n page.publish(language)\n\n # Add a plugin for each language (including the first language this time...)\n placeholder = page.placeholders.get(slot=\"maincontent\")\n for language in content.keys():\n add_plugin(\n body=\"[{:s}] Lorem ipsum...\".format(language),\n language=language,\n placeholder=placeholder,\n plugin_type=\"TextPlugin\",\n )\n\n return page\n", "path": "apps/core/helpers.py"}, {"content": "\"\"\"\ncreate_demo_site management command\n\"\"\"\nimport logging\n\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.conf import settings\nfrom django.contrib.sites.models import Site\n\nfrom cms import models as cms_models\n\nfrom apps.courses.models import Course, Organization, Subject\nfrom apps.persons.models import Person\nfrom apps.courses.factories import OrganizationFactory\n\nfrom ...helpers import create_i18n_page\n\nlogger = logging.getLogger(\"richie.commands.core.create_demo_site\")\n\nNB_ORGANIZATIONS = 8\nPAGE_INFOS = {\n \"home\": {\n \"content\": {\"en\": \"Home\", \"fr\": \"Accueil\"},\n \"kwargs\": {\"template\": \"richie/fullwidth.html\"},\n },\n \"news\": {\n \"content\": {\"en\": \"News\", \"fr\": \"Actualit\u00e9s\"},\n \"kwargs\": {\"template\": \"richie/fullwidth.html\"},\n },\n \"courses\": {\n \"content\": {\"en\": \"Courses\", \"fr\": \"Cours\"},\n \"kwargs\": {\n \"reverse_id\": Course.ROOT_REVERSE_ID,\n \"template\": \"richie/fullwidth.html\",\n },\n },\n \"subjects\": {\n \"content\": {\"en\": \"Subjects\", \"fr\": \"Sujets\"},\n \"kwargs\": {\n \"reverse_id\": Subject.ROOT_REVERSE_ID,\n \"template\": \"richie/fullwidth.html\",\n },\n },\n \"organizations\": {\n \"content\": {\"en\": \"Organizations\", \"fr\": \"Etablissements\"},\n \"kwargs\": {\n \"reverse_id\": Organization.ROOT_REVERSE_ID,\n \"template\": \"richie/fullwidth.html\",\n },\n },\n \"persons\": {\n \"content\": {\"en\": \"Persons\", \"fr\": \"Personnes\"},\n \"kwargs\": {\n \"reverse_id\": Person.ROOT_REVERSE_ID,\n \"template\": \"richie/fullwidth.html\",\n },\n },\n \"dashboard\": {\n \"content\": {\"en\": \"Dashboard\", \"fr\": \"Tableau de bord\"},\n \"cms\": False,\n \"kwargs\": {\"template\": \"richie/fullwidth.html\"},\n },\n \"about\": {\n \"content\": {\"en\": \"About\", \"fr\": \"A propos\"},\n \"kwargs\": {\"template\": \"richie/fullwidth.html\"},\n },\n}\n\n\n# pylint: disable=no-member\n#\n# Looks like pylint is not relevant at guessing object types when cascading\n# methods over querysets: Instance of 'list' has no 'delete' member (no-member).\n# We choose to ignore this false positive warning.\ndef clear_cms_data():\n \"\"\"Clear all CMS data (CMS models + organizations)\"\"\"\n\n cms_models.Page.objects.all().delete()\n cms_models.Title.objects.all().delete()\n cms_models.CMSPlugin.objects.all().delete()\n cms_models.Placeholder.objects.all().delete()\n Course.objects.all().delete()\n Organization.objects.all().delete()\n Subject.objects.all().delete()\n\n\ndef create_demo_site():\n \"\"\"\n Create a simple site tree structure for developpers to work in realistic environment.\n\n We create multilingual pages, add organizations under the related page and add\n plugins to each page.\n \"\"\"\n site = Site.objects.get(id=1)\n\n # Create pages as described in PAGES_INFOS\n pages_created = {}\n for name, info in PAGE_INFOS.items():\n page = 
create_i18n_page(\n info[\"content\"],\n is_homepage=(name == \"home\"),\n in_navigation=True,\n published=True,\n site=site,\n **info[\"kwargs\"]\n )\n\n pages_created[name] = page\n\n # Create organizations under the `organizations` page\n for i, _ in enumerate(range(NB_ORGANIZATIONS)):\n page = create_i18n_page(\n {\n \"en\": \"Organization #{:d}\".format(i),\n \"fr\": \"Organisation #{:d}\".format(i),\n },\n parent=pages_created[\"organizations\"],\n published=True,\n site=site,\n template=\"courses/cms/organization_detail.html\",\n )\n OrganizationFactory(extended_object=page)\n\n\nclass Command(BaseCommand):\n \"\"\"Create default pages for FUN frontend\"\"\"\n\n help = __doc__\n\n def add_arguments(self, parser):\n\n parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Force command execution despite DEBUG is set to False\",\n )\n\n def handle(self, *args, **options):\n\n if not settings.DEBUG and not options[\"force\"]:\n raise CommandError(\n (\n \"This command is not meant to be used in production environment \"\n \"except you know what you are doing, if so use --force parameter\"\n )\n )\n\n clear_cms_data()\n create_demo_site()\n\n logger.info(\"done\")\n", "path": "apps/core/management/commands/create_demo_site.py"}], "after_files": [{"content": "\"\"\"\nHelpers that can be useful throughout the whole project\n\"\"\"\nfrom django.utils.text import slugify\n\nfrom cms.api import add_plugin, create_page, create_title\n\n\ndef create_i18n_page(content, has_maincontent=True, is_homepage=False, **kwargs):\n \"\"\"\n Creating a multilingual page is not straightforward so we thought we should have a helper\n\n This content argument should be a dictionary with the title of the page in each language:\n\n {\n 'en': 'About',\n 'fr': 'A propos',\n 'de': 'Impressum',\n }\n\n \"\"\"\n # Create the page with a first language from what is given to us\n languages = list(content.keys()) # convert `dict_keys` to list so it can be poped\n first_language = languages.pop(0)\n slug = slugify(content[first_language])\n page = create_page(\n language=first_language,\n menu_title=content[first_language],\n title=content[first_language],\n slug=slug,\n **kwargs\n )\n\n if is_homepage is True:\n page.set_as_homepage()\n\n # Add a title for each additional language\n for language in languages:\n create_title(\n language=language,\n menu_title=content[language],\n title=content[language],\n slug=slugify(content[language]),\n page=page,\n )\n # Publish page in each additional language\n if kwargs.get(\"published\") is True:\n page.publish(language)\n\n # Some pages have specific purpose templates that do not have a user-editable maincontent area\n # One such page is the \"Courses\" page which uses the Search template\n if has_maincontent:\n # Add a plugin for each language\n placeholder = page.placeholders.get(slot=\"maincontent\")\n for language in content.keys():\n add_plugin(\n body=\"[{:s}] Lorem ipsum...\".format(language),\n language=language,\n placeholder=placeholder,\n plugin_type=\"TextPlugin\",\n )\n\n return page\n", "path": "apps/core/helpers.py"}, {"content": "\"\"\"\ncreate_demo_site management command\n\"\"\"\nimport logging\n\nfrom django.core.management.base import BaseCommand, CommandError\nfrom django.conf import settings\nfrom django.contrib.sites.models import Site\n\nfrom cms import models as cms_models\n\nfrom apps.courses.models import Course, Organization, Subject\nfrom apps.persons.models import Person\nfrom apps.courses.factories import 
OrganizationFactory\n\nfrom ...helpers import create_i18n_page\n\nlogger = logging.getLogger(\"richie.commands.core.create_demo_site\")\n\nNB_ORGANIZATIONS = 8\nPAGE_INFOS = {\n \"home\": {\n \"content\": {\"en\": \"Home\", \"fr\": \"Accueil\"},\n \"kwargs\": {\"template\": \"richie/fullwidth.html\"},\n },\n \"news\": {\n \"content\": {\"en\": \"News\", \"fr\": \"Actualit\u00e9s\"},\n \"kwargs\": {\"template\": \"richie/fullwidth.html\"},\n },\n \"courses\": {\n \"content\": {\"en\": \"Courses\", \"fr\": \"Cours\"},\n \"kwargs\": {\n \"has_maincontent\": False,\n \"reverse_id\": Course.ROOT_REVERSE_ID,\n \"template\": \"search/search.html\",\n },\n },\n \"subjects\": {\n \"content\": {\"en\": \"Subjects\", \"fr\": \"Sujets\"},\n \"kwargs\": {\n \"reverse_id\": Subject.ROOT_REVERSE_ID,\n \"template\": \"richie/fullwidth.html\",\n },\n },\n \"organizations\": {\n \"content\": {\"en\": \"Organizations\", \"fr\": \"Etablissements\"},\n \"kwargs\": {\n \"reverse_id\": Organization.ROOT_REVERSE_ID,\n \"template\": \"richie/fullwidth.html\",\n },\n },\n \"persons\": {\n \"content\": {\"en\": \"Persons\", \"fr\": \"Personnes\"},\n \"kwargs\": {\n \"reverse_id\": Person.ROOT_REVERSE_ID,\n \"template\": \"richie/fullwidth.html\",\n },\n },\n \"dashboard\": {\n \"content\": {\"en\": \"Dashboard\", \"fr\": \"Tableau de bord\"},\n \"cms\": False,\n \"kwargs\": {\"template\": \"richie/fullwidth.html\"},\n },\n \"about\": {\n \"content\": {\"en\": \"About\", \"fr\": \"A propos\"},\n \"kwargs\": {\"template\": \"richie/fullwidth.html\"},\n },\n}\n\n\n# pylint: disable=no-member\n#\n# Looks like pylint is not relevant at guessing object types when cascading\n# methods over querysets: Instance of 'list' has no 'delete' member (no-member).\n# We choose to ignore this false positive warning.\ndef clear_cms_data():\n \"\"\"Clear all CMS data (CMS models + organizations)\"\"\"\n\n cms_models.Page.objects.all().delete()\n cms_models.Title.objects.all().delete()\n cms_models.CMSPlugin.objects.all().delete()\n cms_models.Placeholder.objects.all().delete()\n Course.objects.all().delete()\n Organization.objects.all().delete()\n Subject.objects.all().delete()\n\n\ndef create_demo_site():\n \"\"\"\n Create a simple site tree structure for developpers to work in realistic environment.\n\n We create multilingual pages, add organizations under the related page and add\n plugins to each page.\n \"\"\"\n site = Site.objects.get(id=1)\n\n # Create pages as described in PAGES_INFOS\n pages_created = {}\n for name, info in PAGE_INFOS.items():\n page = create_i18n_page(\n info[\"content\"],\n is_homepage=(name == \"home\"),\n in_navigation=True,\n published=True,\n site=site,\n **info[\"kwargs\"]\n )\n\n pages_created[name] = page\n\n # Create organizations under the `organizations` page\n for i, _ in enumerate(range(NB_ORGANIZATIONS)):\n page = create_i18n_page(\n {\n \"en\": \"Organization #{:d}\".format(i),\n \"fr\": \"Organisation #{:d}\".format(i),\n },\n parent=pages_created[\"organizations\"],\n published=True,\n site=site,\n template=\"courses/cms/organization_detail.html\",\n )\n OrganizationFactory(extended_object=page)\n\n\nclass Command(BaseCommand):\n \"\"\"Create default pages for FUN frontend\"\"\"\n\n help = __doc__\n\n def add_arguments(self, parser):\n\n parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Force command execution despite DEBUG is set to False\",\n )\n\n def handle(self, *args, **options):\n\n if not settings.DEBUG and not options[\"force\"]:\n raise 
CommandError(\n (\n \"This command is not meant to be used in production environment \"\n \"except you know what you are doing, if so use --force parameter\"\n )\n )\n\n clear_cms_data()\n create_demo_site()\n\n logger.info(\"done\")\n", "path": "apps/core/management/commands/create_demo_site.py"}]}
| 2,289 | 489 |
gh_patches_debug_8467
|
rasdani/github-patches
|
git_diff
|
python-trio__trio-1639
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
to_thread.run_sync() foils meta-class destructors
Invoking `to_thread.run_sync()` seems to prevent meta-class destructors of completely unrelated classes from being called on shutdown.
```python
import trio
class deltype(type):
def __del__(self):
print(f"{self.__name__} class deleted")
Test = deltype("Test", (object,), {})
async def async_main():
await trio.to_thread.run_sync(lambda: print('hi'))
await trio.sleep(0)
trio.run(async_main)
```
expected output:
```
hi
Test class deleted
```
actual output:
```
hi
```
If the `run_sync()` call is commented out, the meta-class destructor is called as expected:
```
Test class deleted
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `trio/_core/_thread_cache.py`
Content:
```
1 from threading import Thread, Lock
2 import outcome
3 from itertools import count
4
5 # The "thread cache" is a simple unbounded thread pool, i.e., it automatically
6 # spawns as many threads as needed to handle all the requests its given. Its
7 # only purpose is to cache worker threads so that they don't have to be
8 # started from scratch every time we want to delegate some work to a thread.
9 # It's expected that some higher-level code will track how many threads are in
10 # use to avoid overwhelming the system (e.g. the limiter= argument to
11 # trio.to_thread.run_sync).
12 #
13 # To maximize sharing, there's only one thread cache per process, even if you
14 # have multiple calls to trio.run.
15 #
16 # Guarantees:
17 #
18 # It's safe to call start_thread_soon simultaneously from
19 # multiple threads.
20 #
21 # Idle threads are chosen in LIFO order, i.e. we *don't* spread work evenly
22 # over all threads. Instead we try to let some threads do most of the work
23 # while others sit idle as much as possible. Compared to FIFO, this has better
24 # memory cache behavior, and it makes it easier to detect when we have too
25 # many threads, so idle ones can exit.
26 #
27 # This code assumes that 'dict' has the following properties:
28 #
29 # - __setitem__, __delitem__, and popitem are all thread-safe and atomic with
30 # respect to each other. This is guaranteed by the GIL.
31 #
32 # - popitem returns the most-recently-added item (i.e., __setitem__ + popitem
33 # give you a LIFO queue). This relies on dicts being insertion-ordered, like
34 # they are in py36+.
35
36 # How long a thread will idle waiting for new work before gives up and exits.
37 # This value is pretty arbitrary; I don't think it matters too much.
38 IDLE_TIMEOUT = 10 # seconds
39
40 name_counter = count()
41
42
43 class WorkerThread:
44 def __init__(self, thread_cache):
45 self._job = None
46 self._thread_cache = thread_cache
47 # This Lock is used in an unconventional way.
48 #
49 # "Unlocked" means we have a pending job that's been assigned to us;
50 # "locked" means that we don't.
51 #
52 # Initially we have no job, so it starts out in locked state.
53 self._worker_lock = Lock()
54 self._worker_lock.acquire()
55 thread = Thread(target=self._work, daemon=True)
56 thread.name = f"Trio worker thread {next(name_counter)}"
57 thread.start()
58
59 def _work(self):
60 while True:
61 if self._worker_lock.acquire(timeout=IDLE_TIMEOUT):
62 # We got a job
63 fn, deliver = self._job
64 self._job = None
65 result = outcome.capture(fn)
66 # Tell the cache that we're available to be assigned a new
67 # job. We do this *before* calling 'deliver', so that if
68 # 'deliver' triggers a new job, it can be assigned to us
69 # instead of spawning a new thread.
70 self._thread_cache._idle_workers[self] = None
71 deliver(result)
72 else:
73 # Timeout acquiring lock, so we can probably exit. But,
74 # there's a race condition: we might be assigned a job *just*
75 # as we're about to exit. So we have to check.
76 try:
77 del self._thread_cache._idle_workers[self]
78 except KeyError:
79 # Someone else removed us from the idle worker queue, so
80 # they must be in the process of assigning us a job - loop
81 # around and wait for it.
82 continue
83 else:
84 # We successfully removed ourselves from the idle
85 # worker queue, so no more jobs are incoming; it's safe to
86 # exit.
87 return
88
89
90 class ThreadCache:
91 def __init__(self):
92 self._idle_workers = {}
93
94 def start_thread_soon(self, fn, deliver):
95 try:
96 worker, _ = self._idle_workers.popitem()
97 except KeyError:
98 worker = WorkerThread(self)
99 worker._job = (fn, deliver)
100 worker._worker_lock.release()
101
102
103 THREAD_CACHE = ThreadCache()
104
105
106 def start_thread_soon(fn, deliver):
107 """Runs ``deliver(outcome.capture(fn))`` in a worker thread.
108
109 Generally ``fn`` does some blocking work, and ``deliver`` delivers the
110 result back to whoever is interested.
111
112 This is a low-level, no-frills interface, very similar to using
113 `threading.Thread` to spawn a thread directly. The main difference is
114 that this function tries to re-use threads when possible, so it can be
115 a bit faster than `threading.Thread`.
116
117 Worker threads have the `~threading.Thread.daemon` flag set, which means
118 that if your main thread exits, worker threads will automatically be
119 killed. If you want to make sure that your ``fn`` runs to completion, then
120 you should make sure that the main thread remains alive until ``deliver``
121 is called.
122
123 It is safe to call this function simultaneously from multiple threads.
124
125 Args:
126
127 fn (sync function): Performs arbitrary blocking work.
128
129 deliver (sync function): Takes the `outcome.Outcome` of ``fn``, and
130 delivers it. *Must not block.*
131
132 Because worker threads are cached and reused for multiple calls, neither
133 function should mutate thread-level state, like `threading.local` objects
134 – or if they do, they should be careful to revert their changes before
135 returning.
136
137 Note:
138
139 The split between ``fn`` and ``deliver`` serves two purposes. First,
140 it's convenient, since most callers need something like this anyway.
141
142 Second, it avoids a small race condition that could cause too many
143 threads to be spawned. Consider a program that wants to run several
144 jobs sequentially on a thread, so the main thread submits a job, waits
145 for it to finish, submits another job, etc. In theory, this program
146 should only need one worker thread. But what could happen is:
147
148 1. Worker thread: First job finishes, and calls ``deliver``.
149
150 2. Main thread: receives notification that the job finished, and calls
151 ``start_thread_soon``.
152
153 3. Main thread: sees that no worker threads are marked idle, so spawns
154 a second worker thread.
155
156 4. Original worker thread: marks itself as idle.
157
158 To avoid this, threads mark themselves as idle *before* calling
159 ``deliver``.
160
161 Is this potential extra thread a major problem? Maybe not, but it's
162 easy enough to avoid, and we figure that if the user is trying to
163 limit how many threads they're using then it's polite to respect that.
164
165 """
166 THREAD_CACHE.start_thread_soon(fn, deliver)
167
```
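Aside from the record itself: the `dict`-as-LIFO assumption documented in the comments of the listing above can be checked directly with a few lines (CPython 3.7+ guarantees insertion order; the worker names are illustrative only):

```python
# __setitem__ + popitem() on a plain dict behaves as a LIFO stack,
# which is exactly what the idle-worker cache above relies on.
idle = {}
idle["worker-a"] = None
idle["worker-b"] = None
idle["worker-c"] = None
assert idle.popitem() == ("worker-c", None)  # most recently added comes out first
assert idle.popitem() == ("worker-b", None)
```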
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/trio/_core/_thread_cache.py b/trio/_core/_thread_cache.py
--- a/trio/_core/_thread_cache.py
+++ b/trio/_core/_thread_cache.py
@@ -69,6 +69,8 @@
# instead of spawning a new thread.
self._thread_cache._idle_workers[self] = None
deliver(result)
+ del fn
+ del deliver
else:
# Timeout acquiring lock, so we can probably exit. But,
# there's a race condition: we might be assigned a job *just*
|
{"golden_diff": "diff --git a/trio/_core/_thread_cache.py b/trio/_core/_thread_cache.py\n--- a/trio/_core/_thread_cache.py\n+++ b/trio/_core/_thread_cache.py\n@@ -69,6 +69,8 @@\n # instead of spawning a new thread.\n self._thread_cache._idle_workers[self] = None\n deliver(result)\n+ del fn\n+ del deliver\n else:\n # Timeout acquiring lock, so we can probably exit. But,\n # there's a race condition: we might be assigned a job *just*\n", "issue": "to_thread.run_sync() foils meta-class destructors \nInvoking `to_thread.run_sync()` seems to prevent meta-class destructors of completely unrelated classes from being called on shutdown.\r\n\r\n```python\r\nimport trio\r\n\r\nclass deltype(type):\r\n def __del__(self):\r\n print(f\"{self.__name__} class deleted\")\r\n\r\nTest = deltype(\"Test\", (object,), {})\r\n\r\nasync def async_main():\r\n await trio.to_thread.run_sync(lambda: print('hi'))\r\n await trio.sleep(0)\r\n\r\ntrio.run(async_main)\r\n```\r\n\r\nexpected output:\r\n```\r\nhi\r\nTest class deleted\r\n```\r\n\r\nactual output:\r\n```\r\nhi\r\n```\r\n\r\nif the `run_sync()` is commented out, the meta-class destructor is called as expected:\r\n```\r\nTest class deleted\r\n```\r\n\n", "before_files": [{"content": "from threading import Thread, Lock\nimport outcome\nfrom itertools import count\n\n# The \"thread cache\" is a simple unbounded thread pool, i.e., it automatically\n# spawns as many threads as needed to handle all the requests its given. Its\n# only purpose is to cache worker threads so that they don't have to be\n# started from scratch every time we want to delegate some work to a thread.\n# It's expected that some higher-level code will track how many threads are in\n# use to avoid overwhelming the system (e.g. the limiter= argument to\n# trio.to_thread.run_sync).\n#\n# To maximize sharing, there's only one thread cache per process, even if you\n# have multiple calls to trio.run.\n#\n# Guarantees:\n#\n# It's safe to call start_thread_soon simultaneously from\n# multiple threads.\n#\n# Idle threads are chosen in LIFO order, i.e. we *don't* spread work evenly\n# over all threads. Instead we try to let some threads do most of the work\n# while others sit idle as much as possible. Compared to FIFO, this has better\n# memory cache behavior, and it makes it easier to detect when we have too\n# many threads, so idle ones can exit.\n#\n# This code assumes that 'dict' has the following properties:\n#\n# - __setitem__, __delitem__, and popitem are all thread-safe and atomic with\n# respect to each other. This is guaranteed by the GIL.\n#\n# - popitem returns the most-recently-added item (i.e., __setitem__ + popitem\n# give you a LIFO queue). 
This relies on dicts being insertion-ordered, like\n# they are in py36+.\n\n# How long a thread will idle waiting for new work before gives up and exits.\n# This value is pretty arbitrary; I don't think it matters too much.\nIDLE_TIMEOUT = 10 # seconds\n\nname_counter = count()\n\n\nclass WorkerThread:\n def __init__(self, thread_cache):\n self._job = None\n self._thread_cache = thread_cache\n # This Lock is used in an unconventional way.\n #\n # \"Unlocked\" means we have a pending job that's been assigned to us;\n # \"locked\" means that we don't.\n #\n # Initially we have no job, so it starts out in locked state.\n self._worker_lock = Lock()\n self._worker_lock.acquire()\n thread = Thread(target=self._work, daemon=True)\n thread.name = f\"Trio worker thread {next(name_counter)}\"\n thread.start()\n\n def _work(self):\n while True:\n if self._worker_lock.acquire(timeout=IDLE_TIMEOUT):\n # We got a job\n fn, deliver = self._job\n self._job = None\n result = outcome.capture(fn)\n # Tell the cache that we're available to be assigned a new\n # job. We do this *before* calling 'deliver', so that if\n # 'deliver' triggers a new job, it can be assigned to us\n # instead of spawning a new thread.\n self._thread_cache._idle_workers[self] = None\n deliver(result)\n else:\n # Timeout acquiring lock, so we can probably exit. But,\n # there's a race condition: we might be assigned a job *just*\n # as we're about to exit. So we have to check.\n try:\n del self._thread_cache._idle_workers[self]\n except KeyError:\n # Someone else removed us from the idle worker queue, so\n # they must be in the process of assigning us a job - loop\n # around and wait for it.\n continue\n else:\n # We successfully removed ourselves from the idle\n # worker queue, so no more jobs are incoming; it's safe to\n # exit.\n return\n\n\nclass ThreadCache:\n def __init__(self):\n self._idle_workers = {}\n\n def start_thread_soon(self, fn, deliver):\n try:\n worker, _ = self._idle_workers.popitem()\n except KeyError:\n worker = WorkerThread(self)\n worker._job = (fn, deliver)\n worker._worker_lock.release()\n\n\nTHREAD_CACHE = ThreadCache()\n\n\ndef start_thread_soon(fn, deliver):\n \"\"\"Runs ``deliver(outcome.capture(fn))`` in a worker thread.\n\n Generally ``fn`` does some blocking work, and ``deliver`` delivers the\n result back to whoever is interested.\n\n This is a low-level, no-frills interface, very similar to using\n `threading.Thread` to spawn a thread directly. The main difference is\n that this function tries to re-use threads when possible, so it can be\n a bit faster than `threading.Thread`.\n\n Worker threads have the `~threading.Thread.daemon` flag set, which means\n that if your main thread exits, worker threads will automatically be\n killed. If you want to make sure that your ``fn`` runs to completion, then\n you should make sure that the main thread remains alive until ``deliver``\n is called.\n\n It is safe to call this function simultaneously from multiple threads.\n\n Args:\n\n fn (sync function): Performs arbitrary blocking work.\n\n deliver (sync function): Takes the `outcome.Outcome` of ``fn``, and\n delivers it. *Must not block.*\n\n Because worker threads are cached and reused for multiple calls, neither\n function should mutate thread-level state, like `threading.local` objects\n \u2013 or if they do, they should be careful to revert their changes before\n returning.\n\n Note:\n\n The split between ``fn`` and ``deliver`` serves two purposes. 
First,\n it's convenient, since most callers need something like this anyway.\n\n Second, it avoids a small race condition that could cause too many\n threads to be spawned. Consider a program that wants to run several\n jobs sequentially on a thread, so the main thread submits a job, waits\n for it to finish, submits another job, etc. In theory, this program\n should only need one worker thread. But what could happen is:\n\n 1. Worker thread: First job finishes, and calls ``deliver``.\n\n 2. Main thread: receives notification that the job finished, and calls\n ``start_thread_soon``.\n\n 3. Main thread: sees that no worker threads are marked idle, so spawns\n a second worker thread.\n\n 4. Original worker thread: marks itself as idle.\n\n To avoid this, threads mark themselves as idle *before* calling\n ``deliver``.\n\n Is this potential extra thread a major problem? Maybe not, but it's\n easy enough to avoid, and we figure that if the user is trying to\n limit how many threads they're using then it's polite to respect that.\n\n \"\"\"\n THREAD_CACHE.start_thread_soon(fn, deliver)\n", "path": "trio/_core/_thread_cache.py"}], "after_files": [{"content": "from threading import Thread, Lock\nimport outcome\nfrom itertools import count\n\n# The \"thread cache\" is a simple unbounded thread pool, i.e., it automatically\n# spawns as many threads as needed to handle all the requests its given. Its\n# only purpose is to cache worker threads so that they don't have to be\n# started from scratch every time we want to delegate some work to a thread.\n# It's expected that some higher-level code will track how many threads are in\n# use to avoid overwhelming the system (e.g. the limiter= argument to\n# trio.to_thread.run_sync).\n#\n# To maximize sharing, there's only one thread cache per process, even if you\n# have multiple calls to trio.run.\n#\n# Guarantees:\n#\n# It's safe to call start_thread_soon simultaneously from\n# multiple threads.\n#\n# Idle threads are chosen in LIFO order, i.e. we *don't* spread work evenly\n# over all threads. Instead we try to let some threads do most of the work\n# while others sit idle as much as possible. Compared to FIFO, this has better\n# memory cache behavior, and it makes it easier to detect when we have too\n# many threads, so idle ones can exit.\n#\n# This code assumes that 'dict' has the following properties:\n#\n# - __setitem__, __delitem__, and popitem are all thread-safe and atomic with\n# respect to each other. This is guaranteed by the GIL.\n#\n# - popitem returns the most-recently-added item (i.e., __setitem__ + popitem\n# give you a LIFO queue). 
This relies on dicts being insertion-ordered, like\n# they are in py36+.\n\n# How long a thread will idle waiting for new work before gives up and exits.\n# This value is pretty arbitrary; I don't think it matters too much.\nIDLE_TIMEOUT = 10 # seconds\n\nname_counter = count()\n\n\nclass WorkerThread:\n def __init__(self, thread_cache):\n self._job = None\n self._thread_cache = thread_cache\n # This Lock is used in an unconventional way.\n #\n # \"Unlocked\" means we have a pending job that's been assigned to us;\n # \"locked\" means that we don't.\n #\n # Initially we have no job, so it starts out in locked state.\n self._worker_lock = Lock()\n self._worker_lock.acquire()\n thread = Thread(target=self._work, daemon=True)\n thread.name = f\"Trio worker thread {next(name_counter)}\"\n thread.start()\n\n def _work(self):\n while True:\n if self._worker_lock.acquire(timeout=IDLE_TIMEOUT):\n # We got a job\n fn, deliver = self._job\n self._job = None\n result = outcome.capture(fn)\n # Tell the cache that we're available to be assigned a new\n # job. We do this *before* calling 'deliver', so that if\n # 'deliver' triggers a new job, it can be assigned to us\n # instead of spawning a new thread.\n self._thread_cache._idle_workers[self] = None\n deliver(result)\n del fn\n del deliver\n else:\n # Timeout acquiring lock, so we can probably exit. But,\n # there's a race condition: we might be assigned a job *just*\n # as we're about to exit. So we have to check.\n try:\n del self._thread_cache._idle_workers[self]\n except KeyError:\n # Someone else removed us from the idle worker queue, so\n # they must be in the process of assigning us a job - loop\n # around and wait for it.\n continue\n else:\n # We successfully removed ourselves from the idle\n # worker queue, so no more jobs are incoming; it's safe to\n # exit.\n return\n\n\nclass ThreadCache:\n def __init__(self):\n self._idle_workers = {}\n\n def start_thread_soon(self, fn, deliver):\n try:\n worker, _ = self._idle_workers.popitem()\n except KeyError:\n worker = WorkerThread(self)\n worker._job = (fn, deliver)\n worker._worker_lock.release()\n\n\nTHREAD_CACHE = ThreadCache()\n\n\ndef start_thread_soon(fn, deliver):\n \"\"\"Runs ``deliver(outcome.capture(fn))`` in a worker thread.\n\n Generally ``fn`` does some blocking work, and ``deliver`` delivers the\n result back to whoever is interested.\n\n This is a low-level, no-frills interface, very similar to using\n `threading.Thread` to spawn a thread directly. The main difference is\n that this function tries to re-use threads when possible, so it can be\n a bit faster than `threading.Thread`.\n\n Worker threads have the `~threading.Thread.daemon` flag set, which means\n that if your main thread exits, worker threads will automatically be\n killed. If you want to make sure that your ``fn`` runs to completion, then\n you should make sure that the main thread remains alive until ``deliver``\n is called.\n\n It is safe to call this function simultaneously from multiple threads.\n\n Args:\n\n fn (sync function): Performs arbitrary blocking work.\n\n deliver (sync function): Takes the `outcome.Outcome` of ``fn``, and\n delivers it. *Must not block.*\n\n Because worker threads are cached and reused for multiple calls, neither\n function should mutate thread-level state, like `threading.local` objects\n \u2013 or if they do, they should be careful to revert their changes before\n returning.\n\n Note:\n\n The split between ``fn`` and ``deliver`` serves two purposes. 
First,\n it's convenient, since most callers need something like this anyway.\n\n Second, it avoids a small race condition that could cause too many\n threads to be spawned. Consider a program that wants to run several\n jobs sequentially on a thread, so the main thread submits a job, waits\n for it to finish, submits another job, etc. In theory, this program\n should only need one worker thread. But what could happen is:\n\n 1. Worker thread: First job finishes, and calls ``deliver``.\n\n 2. Main thread: receives notification that the job finished, and calls\n ``start_thread_soon``.\n\n 3. Main thread: sees that no worker threads are marked idle, so spawns\n a second worker thread.\n\n 4. Original worker thread: marks itself as idle.\n\n To avoid this, threads mark themselves as idle *before* calling\n ``deliver``.\n\n Is this potential extra thread a major problem? Maybe not, but it's\n easy enough to avoid, and we figure that if the user is trying to\n limit how many threads they're using then it's polite to respect that.\n\n \"\"\"\n THREAD_CACHE.start_thread_soon(fn, deliver)\n", "path": "trio/_core/_thread_cache.py"}]}
| 2,320 | 126 |
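The `del fn` / `del deliver` lines in the golden diff above matter because the cached worker thread is long-lived: its local variables keep the last job reachable while it idles. Below is a minimal, self-contained sketch of that failure mode (hypothetical names, not trio code): a daemon thread whose local still references its last job keeps everything reachable from that job alive, so refcount-driven cleanup such as `__del__` may never run.

```python
import threading
import time


class Tracked:
    def __del__(self):
        print("Tracked instance deleted")


def worker(jobs):
    fn = jobs.pop()      # the local `fn` now owns a reference to the job
    fn()
    del fn               # mirror of the patch: without this line, the idling
    time.sleep(60)       # daemon thread keeps the job (and Tracked) alive


obj = Tracked()
jobs = [lambda o=obj: print("job ran for", type(o).__name__)]
threading.Thread(target=worker, args=(jobs,), daemon=True).start()
del obj                  # the main thread gives up its own reference
time.sleep(1)            # with `del fn` above, the destructor prints during this wait
```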
gh_patches_debug_32220
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-58875
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change Slack Linked Team Message
Right now, when we link a team to a Slack channel, we get the message "<Team> will now receive issue alert notifications ...".
Change it to "<Team> will now receive issue alert and workflow notifications ..." (behind the feature flag).
<img width="746" alt="Screenshot 2023-10-25 at 8 55 42 AM" src="https://github.com/getsentry/sentry/assets/116035587/5f81ab16-b889-43aa-ab82-f74f00cc157f">
<img width="606" alt="Screenshot 2023-10-26 at 10 02 18 AM" src="https://github.com/getsentry/sentry/assets/116035587/c8819e04-062f-443a-b4f8-5c597240b7b4">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/integrations/slack/views/link_team.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any, Sequence
4
5 from django import forms
6 from django.core.signing import BadSignature, SignatureExpired
7 from django.http import Http404, HttpResponse
8 from django.utils.decorators import method_decorator
9 from rest_framework.request import Request
10
11 from sentry import analytics
12 from sentry.models.integrations.external_actor import ExternalActor
13 from sentry.models.integrations.integration import Integration
14 from sentry.models.organizationmember import OrganizationMember
15 from sentry.models.team import Team
16 from sentry.notifications.types import NotificationSettingOptionValues, NotificationSettingTypes
17 from sentry.services.hybrid_cloud.actor import ActorType, RpcActor
18 from sentry.services.hybrid_cloud.identity import identity_service
19 from sentry.services.hybrid_cloud.integration import RpcIntegration, integration_service
20 from sentry.services.hybrid_cloud.notifications import notifications_service
21 from sentry.types.integrations import ExternalProviders
22 from sentry.utils.signing import unsign
23 from sentry.web.decorators import transaction_start
24 from sentry.web.frontend.base import BaseView, region_silo_view
25 from sentry.web.helpers import render_to_response
26
27 from ..utils import is_valid_role, logger
28 from . import build_linking_url as base_build_linking_url
29 from . import never_cache, render_error_page
30
31 ALLOWED_METHODS = ["GET", "POST"]
32
33 ALREADY_LINKED_TITLE = "Already linked"
34 ALREADY_LINKED_MESSAGE = "The {slug} team has already been linked to a Slack channel."
35 SUCCESS_LINKED_TITLE = "Team linked"
36 SUCCESS_LINKED_MESSAGE = (
37 "The {slug} team will now receive issue alert notifications in the {channel_name} channel."
38 )
39
40
41 def build_team_linking_url(
42 integration: Integration | RpcIntegration,
43 slack_id: str,
44 channel_id: str,
45 channel_name: str,
46 response_url: str,
47 ) -> str:
48 return base_build_linking_url(
49 "sentry-integration-slack-link-team",
50 integration_id=integration.id,
51 slack_id=slack_id,
52 channel_id=channel_id,
53 channel_name=channel_name,
54 response_url=response_url,
55 )
56
57
58 class SelectTeamForm(forms.Form):
59 team = forms.ChoiceField(label="Team")
60
61 def __init__(self, teams: Sequence[Team], *args: Any, **kwargs: Any):
62 super().__init__(*args, **kwargs)
63
64 self.fields["team"].choices = [(team.id, team.slug) for team in teams]
65 self.fields["team"].widget.choices = self.fields["team"].choices
66
67
68 @region_silo_view
69 class SlackLinkTeamView(BaseView):
70 """
71 Django view for linking team to slack channel. Creates an entry on ExternalActor table.
72 """
73
74 @transaction_start("SlackLinkTeamView")
75 @method_decorator(never_cache)
76 def handle(self, request: Request, signed_params: str) -> HttpResponse:
77 if request.method not in ALLOWED_METHODS:
78 return render_error_page(request, status=405, body_text="HTTP 405: Method not allowed")
79
80 try:
81 params = unsign(signed_params)
82 except (SignatureExpired, BadSignature):
83 return render_to_response(
84 "sentry/integrations/slack/expired-link.html",
85 status=400,
86 request=request,
87 )
88
89 integration = integration_service.get_integration(integration_id=params["integration_id"])
90 if integration is None:
91 raise Http404
92
93 organization_memberships = OrganizationMember.objects.get_for_integration(
94 integration, request.user
95 )
96 # Filter to teams where we have write access to, either through having a sufficient
97 # organization role (owner/manager/admin) or by being a team admin on at least one team.
98 teams_by_id = {}
99 for org_membership in organization_memberships:
100 for team in Team.objects.get_for_user(
101 org_membership.organization,
102 request.user,
103 # Setting is_team_admin to True only returns teams that member is team admin on.
104 # We only want to filter for this when the user does not have a sufficient
105 # role in the org, which is checked using is_valid_role.
106 is_team_admin=not is_valid_role(org_membership),
107 ):
108 teams_by_id[team.id] = team
109
110 if not teams_by_id:
111 return render_error_page(
112 request,
113 status=404,
114 body_text="HTTP 404: No teams found in your organizations to link. You must be a Sentry organization admin/manager/owner or a team admin to link a team in your respective organization.",
115 )
116
117 channel_name = params["channel_name"]
118 channel_id = params["channel_id"]
119 form = SelectTeamForm(list(teams_by_id.values()), request.POST or None)
120
121 if request.method == "GET":
122 return self.respond(
123 "sentry/integrations/slack/link-team.html",
124 {
125 "form": form,
126 "teams": teams_by_id.values(),
127 "channel_name": channel_name,
128 "provider": integration.get_provider(),
129 },
130 )
131
132 if not form.is_valid():
133 return render_error_page(request, status=400, body_text="HTTP 400: Bad request")
134
135 team_id = int(form.cleaned_data["team"])
136 team = teams_by_id.get(team_id)
137 if not team:
138 return render_error_page(
139 request,
140 status=404,
141 body_text="HTTP 404: Team does not exist or you do not have sufficient permission to link a team",
142 )
143
144 idp = identity_service.get_provider(
145 provider_type="slack", provider_ext_id=integration.external_id
146 )
147 if idp is None:
148 logger.info("slack.action.invalid-team-id", extra={"slack_id": integration.external_id})
149 return render_error_page(request, status=403, body_text="HTTP 403: Invalid team ID")
150
151 ident = identity_service.get_identity(
152 filter={"provider_id": idp.id, "identity_ext_id": params["slack_id"]}
153 )
154 if not ident:
155 return render_error_page(
156 request, status=403, body_text="HTTP 403: User identity does not exist"
157 )
158
159 external_team, created = ExternalActor.objects.get_or_create(
160 team_id=team.id,
161 organization=team.organization,
162 integration_id=integration.id,
163 provider=ExternalProviders.SLACK.value,
164 defaults=dict(
165 external_name=channel_name,
166 external_id=channel_id,
167 ),
168 )
169
170 analytics.record(
171 "integrations.identity_linked",
172 provider="slack",
173 actor_id=team.actor_id,
174 actor_type="team",
175 )
176
177 if not created:
178 message = ALREADY_LINKED_MESSAGE.format(slug=team.slug)
179
180 integration_service.send_message(
181 integration_id=integration.id,
182 organization_id=team.organization_id,
183 channel=channel_id,
184 message=message,
185 )
186 return render_to_response(
187 "sentry/integrations/slack/post-linked-team.html",
188 request=request,
189 context={
190 "heading_text": ALREADY_LINKED_TITLE,
191 "body_text": message,
192 "channel_id": channel_id,
193 "team_id": integration.external_id,
194 },
195 )
196
197 # Turn on notifications for all of a team's projects.
198 notifications_service.update_settings(
199 external_provider=ExternalProviders.SLACK,
200 notification_type=NotificationSettingTypes.ISSUE_ALERTS,
201 setting_option=NotificationSettingOptionValues.ALWAYS,
202 actor=RpcActor(id=team.id, actor_type=ActorType.TEAM),
203 organization_id_for_team=team.organization_id,
204 )
205 message = SUCCESS_LINKED_MESSAGE.format(slug=team.slug, channel_name=channel_name)
206 integration_service.send_message(
207 integration_id=integration.id,
208 organization_id=team.organization_id,
209 channel=channel_id,
210 message=message,
211 )
212 return render_to_response(
213 "sentry/integrations/slack/post-linked-team.html",
214 request=request,
215 context={
216 "heading_text": SUCCESS_LINKED_TITLE,
217 "body_text": message,
218 "channel_id": channel_id,
219 "team_id": integration.external_id,
220 },
221 )
222
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/integrations/slack/views/link_team.py b/src/sentry/integrations/slack/views/link_team.py
--- a/src/sentry/integrations/slack/views/link_team.py
+++ b/src/sentry/integrations/slack/views/link_team.py
@@ -8,7 +8,7 @@
from django.utils.decorators import method_decorator
from rest_framework.request import Request
-from sentry import analytics
+from sentry import analytics, features
from sentry.models.integrations.external_actor import ExternalActor
from sentry.models.integrations.integration import Integration
from sentry.models.organizationmember import OrganizationMember
@@ -33,9 +33,7 @@
ALREADY_LINKED_TITLE = "Already linked"
ALREADY_LINKED_MESSAGE = "The {slug} team has already been linked to a Slack channel."
SUCCESS_LINKED_TITLE = "Team linked"
-SUCCESS_LINKED_MESSAGE = (
- "The {slug} team will now receive issue alert notifications in the {channel_name} channel."
-)
+SUCCESS_LINKED_MESSAGE = "The {slug} team will now receive issue alert{workflow_addon} notifications in the {channel_name} channel."
def build_team_linking_url(
@@ -202,7 +200,13 @@
actor=RpcActor(id=team.id, actor_type=ActorType.TEAM),
organization_id_for_team=team.organization_id,
)
- message = SUCCESS_LINKED_MESSAGE.format(slug=team.slug, channel_name=channel_name)
+ message = SUCCESS_LINKED_MESSAGE.format(
+ slug=team.slug,
+ workflow_addon=" and workflow"
+ if features.has("organizations:team-workflow-notifications", team.organization)
+ else "",
+ channel_name=channel_name,
+ )
integration_service.send_message(
integration_id=integration.id,
organization_id=team.organization_id,
|
{"golden_diff": "diff --git a/src/sentry/integrations/slack/views/link_team.py b/src/sentry/integrations/slack/views/link_team.py\n--- a/src/sentry/integrations/slack/views/link_team.py\n+++ b/src/sentry/integrations/slack/views/link_team.py\n@@ -8,7 +8,7 @@\n from django.utils.decorators import method_decorator\n from rest_framework.request import Request\n \n-from sentry import analytics\n+from sentry import analytics, features\n from sentry.models.integrations.external_actor import ExternalActor\n from sentry.models.integrations.integration import Integration\n from sentry.models.organizationmember import OrganizationMember\n@@ -33,9 +33,7 @@\n ALREADY_LINKED_TITLE = \"Already linked\"\n ALREADY_LINKED_MESSAGE = \"The {slug} team has already been linked to a Slack channel.\"\n SUCCESS_LINKED_TITLE = \"Team linked\"\n-SUCCESS_LINKED_MESSAGE = (\n- \"The {slug} team will now receive issue alert notifications in the {channel_name} channel.\"\n-)\n+SUCCESS_LINKED_MESSAGE = \"The {slug} team will now receive issue alert{workflow_addon} notifications in the {channel_name} channel.\"\n \n \n def build_team_linking_url(\n@@ -202,7 +200,13 @@\n actor=RpcActor(id=team.id, actor_type=ActorType.TEAM),\n organization_id_for_team=team.organization_id,\n )\n- message = SUCCESS_LINKED_MESSAGE.format(slug=team.slug, channel_name=channel_name)\n+ message = SUCCESS_LINKED_MESSAGE.format(\n+ slug=team.slug,\n+ workflow_addon=\" and workflow\"\n+ if features.has(\"organizations:team-workflow-notifications\", team.organization)\n+ else \"\",\n+ channel_name=channel_name,\n+ )\n integration_service.send_message(\n integration_id=integration.id,\n organization_id=team.organization_id,\n", "issue": "Change Slack Linked Team Message\nRight now when we link a team to a Slack channel we get the message \"<Team> will now receive issue alert notifications ...\".\r\nChange it to \"<Team> will now receive issue alert and workflow notifications ...\" (behind the feature flag)\r\n\r\n<img width=\"746\" alt=\"Screenshot 2023-10-25 at 8 55 42 AM\" src=\"https://github.com/getsentry/sentry/assets/116035587/5f81ab16-b889-43aa-ab82-f74f00cc157f\">\r\n<img width=\"606\" alt=\"Screenshot 2023-10-26 at 10 02 18 AM\" src=\"https://github.com/getsentry/sentry/assets/116035587/c8819e04-062f-443a-b4f8-5c597240b7b4\">\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any, Sequence\n\nfrom django import forms\nfrom django.core.signing import BadSignature, SignatureExpired\nfrom django.http import Http404, HttpResponse\nfrom django.utils.decorators import method_decorator\nfrom rest_framework.request import Request\n\nfrom sentry import analytics\nfrom sentry.models.integrations.external_actor import ExternalActor\nfrom sentry.models.integrations.integration import Integration\nfrom sentry.models.organizationmember import OrganizationMember\nfrom sentry.models.team import Team\nfrom sentry.notifications.types import NotificationSettingOptionValues, NotificationSettingTypes\nfrom sentry.services.hybrid_cloud.actor import ActorType, RpcActor\nfrom sentry.services.hybrid_cloud.identity import identity_service\nfrom sentry.services.hybrid_cloud.integration import RpcIntegration, integration_service\nfrom sentry.services.hybrid_cloud.notifications import notifications_service\nfrom sentry.types.integrations import ExternalProviders\nfrom sentry.utils.signing import unsign\nfrom sentry.web.decorators import transaction_start\nfrom sentry.web.frontend.base import BaseView, region_silo_view\nfrom 
sentry.web.helpers import render_to_response\n\nfrom ..utils import is_valid_role, logger\nfrom . import build_linking_url as base_build_linking_url\nfrom . import never_cache, render_error_page\n\nALLOWED_METHODS = [\"GET\", \"POST\"]\n\nALREADY_LINKED_TITLE = \"Already linked\"\nALREADY_LINKED_MESSAGE = \"The {slug} team has already been linked to a Slack channel.\"\nSUCCESS_LINKED_TITLE = \"Team linked\"\nSUCCESS_LINKED_MESSAGE = (\n \"The {slug} team will now receive issue alert notifications in the {channel_name} channel.\"\n)\n\n\ndef build_team_linking_url(\n integration: Integration | RpcIntegration,\n slack_id: str,\n channel_id: str,\n channel_name: str,\n response_url: str,\n) -> str:\n return base_build_linking_url(\n \"sentry-integration-slack-link-team\",\n integration_id=integration.id,\n slack_id=slack_id,\n channel_id=channel_id,\n channel_name=channel_name,\n response_url=response_url,\n )\n\n\nclass SelectTeamForm(forms.Form):\n team = forms.ChoiceField(label=\"Team\")\n\n def __init__(self, teams: Sequence[Team], *args: Any, **kwargs: Any):\n super().__init__(*args, **kwargs)\n\n self.fields[\"team\"].choices = [(team.id, team.slug) for team in teams]\n self.fields[\"team\"].widget.choices = self.fields[\"team\"].choices\n\n\n@region_silo_view\nclass SlackLinkTeamView(BaseView):\n \"\"\"\n Django view for linking team to slack channel. Creates an entry on ExternalActor table.\n \"\"\"\n\n @transaction_start(\"SlackLinkTeamView\")\n @method_decorator(never_cache)\n def handle(self, request: Request, signed_params: str) -> HttpResponse:\n if request.method not in ALLOWED_METHODS:\n return render_error_page(request, status=405, body_text=\"HTTP 405: Method not allowed\")\n\n try:\n params = unsign(signed_params)\n except (SignatureExpired, BadSignature):\n return render_to_response(\n \"sentry/integrations/slack/expired-link.html\",\n status=400,\n request=request,\n )\n\n integration = integration_service.get_integration(integration_id=params[\"integration_id\"])\n if integration is None:\n raise Http404\n\n organization_memberships = OrganizationMember.objects.get_for_integration(\n integration, request.user\n )\n # Filter to teams where we have write access to, either through having a sufficient\n # organization role (owner/manager/admin) or by being a team admin on at least one team.\n teams_by_id = {}\n for org_membership in organization_memberships:\n for team in Team.objects.get_for_user(\n org_membership.organization,\n request.user,\n # Setting is_team_admin to True only returns teams that member is team admin on.\n # We only want to filter for this when the user does not have a sufficient\n # role in the org, which is checked using is_valid_role.\n is_team_admin=not is_valid_role(org_membership),\n ):\n teams_by_id[team.id] = team\n\n if not teams_by_id:\n return render_error_page(\n request,\n status=404,\n body_text=\"HTTP 404: No teams found in your organizations to link. 
You must be a Sentry organization admin/manager/owner or a team admin to link a team in your respective organization.\",\n )\n\n channel_name = params[\"channel_name\"]\n channel_id = params[\"channel_id\"]\n form = SelectTeamForm(list(teams_by_id.values()), request.POST or None)\n\n if request.method == \"GET\":\n return self.respond(\n \"sentry/integrations/slack/link-team.html\",\n {\n \"form\": form,\n \"teams\": teams_by_id.values(),\n \"channel_name\": channel_name,\n \"provider\": integration.get_provider(),\n },\n )\n\n if not form.is_valid():\n return render_error_page(request, status=400, body_text=\"HTTP 400: Bad request\")\n\n team_id = int(form.cleaned_data[\"team\"])\n team = teams_by_id.get(team_id)\n if not team:\n return render_error_page(\n request,\n status=404,\n body_text=\"HTTP 404: Team does not exist or you do not have sufficient permission to link a team\",\n )\n\n idp = identity_service.get_provider(\n provider_type=\"slack\", provider_ext_id=integration.external_id\n )\n if idp is None:\n logger.info(\"slack.action.invalid-team-id\", extra={\"slack_id\": integration.external_id})\n return render_error_page(request, status=403, body_text=\"HTTP 403: Invalid team ID\")\n\n ident = identity_service.get_identity(\n filter={\"provider_id\": idp.id, \"identity_ext_id\": params[\"slack_id\"]}\n )\n if not ident:\n return render_error_page(\n request, status=403, body_text=\"HTTP 403: User identity does not exist\"\n )\n\n external_team, created = ExternalActor.objects.get_or_create(\n team_id=team.id,\n organization=team.organization,\n integration_id=integration.id,\n provider=ExternalProviders.SLACK.value,\n defaults=dict(\n external_name=channel_name,\n external_id=channel_id,\n ),\n )\n\n analytics.record(\n \"integrations.identity_linked\",\n provider=\"slack\",\n actor_id=team.actor_id,\n actor_type=\"team\",\n )\n\n if not created:\n message = ALREADY_LINKED_MESSAGE.format(slug=team.slug)\n\n integration_service.send_message(\n integration_id=integration.id,\n organization_id=team.organization_id,\n channel=channel_id,\n message=message,\n )\n return render_to_response(\n \"sentry/integrations/slack/post-linked-team.html\",\n request=request,\n context={\n \"heading_text\": ALREADY_LINKED_TITLE,\n \"body_text\": message,\n \"channel_id\": channel_id,\n \"team_id\": integration.external_id,\n },\n )\n\n # Turn on notifications for all of a team's projects.\n notifications_service.update_settings(\n external_provider=ExternalProviders.SLACK,\n notification_type=NotificationSettingTypes.ISSUE_ALERTS,\n setting_option=NotificationSettingOptionValues.ALWAYS,\n actor=RpcActor(id=team.id, actor_type=ActorType.TEAM),\n organization_id_for_team=team.organization_id,\n )\n message = SUCCESS_LINKED_MESSAGE.format(slug=team.slug, channel_name=channel_name)\n integration_service.send_message(\n integration_id=integration.id,\n organization_id=team.organization_id,\n channel=channel_id,\n message=message,\n )\n return render_to_response(\n \"sentry/integrations/slack/post-linked-team.html\",\n request=request,\n context={\n \"heading_text\": SUCCESS_LINKED_TITLE,\n \"body_text\": message,\n \"channel_id\": channel_id,\n \"team_id\": integration.external_id,\n },\n )\n", "path": "src/sentry/integrations/slack/views/link_team.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any, Sequence\n\nfrom django import forms\nfrom django.core.signing import BadSignature, SignatureExpired\nfrom django.http import Http404, HttpResponse\nfrom 
django.utils.decorators import method_decorator\nfrom rest_framework.request import Request\n\nfrom sentry import analytics, features\nfrom sentry.models.integrations.external_actor import ExternalActor\nfrom sentry.models.integrations.integration import Integration\nfrom sentry.models.organizationmember import OrganizationMember\nfrom sentry.models.team import Team\nfrom sentry.notifications.types import NotificationSettingOptionValues, NotificationSettingTypes\nfrom sentry.services.hybrid_cloud.actor import ActorType, RpcActor\nfrom sentry.services.hybrid_cloud.identity import identity_service\nfrom sentry.services.hybrid_cloud.integration import RpcIntegration, integration_service\nfrom sentry.services.hybrid_cloud.notifications import notifications_service\nfrom sentry.types.integrations import ExternalProviders\nfrom sentry.utils.signing import unsign\nfrom sentry.web.decorators import transaction_start\nfrom sentry.web.frontend.base import BaseView, region_silo_view\nfrom sentry.web.helpers import render_to_response\n\nfrom ..utils import is_valid_role, logger\nfrom . import build_linking_url as base_build_linking_url\nfrom . import never_cache, render_error_page\n\nALLOWED_METHODS = [\"GET\", \"POST\"]\n\nALREADY_LINKED_TITLE = \"Already linked\"\nALREADY_LINKED_MESSAGE = \"The {slug} team has already been linked to a Slack channel.\"\nSUCCESS_LINKED_TITLE = \"Team linked\"\nSUCCESS_LINKED_MESSAGE = \"The {slug} team will now receive issue alert{workflow_addon} notifications in the {channel_name} channel.\"\n\n\ndef build_team_linking_url(\n integration: Integration | RpcIntegration,\n slack_id: str,\n channel_id: str,\n channel_name: str,\n response_url: str,\n) -> str:\n return base_build_linking_url(\n \"sentry-integration-slack-link-team\",\n integration_id=integration.id,\n slack_id=slack_id,\n channel_id=channel_id,\n channel_name=channel_name,\n response_url=response_url,\n )\n\n\nclass SelectTeamForm(forms.Form):\n team = forms.ChoiceField(label=\"Team\")\n\n def __init__(self, teams: Sequence[Team], *args: Any, **kwargs: Any):\n super().__init__(*args, **kwargs)\n\n self.fields[\"team\"].choices = [(team.id, team.slug) for team in teams]\n self.fields[\"team\"].widget.choices = self.fields[\"team\"].choices\n\n\n@region_silo_view\nclass SlackLinkTeamView(BaseView):\n \"\"\"\n Django view for linking team to slack channel. 
Creates an entry on ExternalActor table.\n \"\"\"\n\n @transaction_start(\"SlackLinkTeamView\")\n @method_decorator(never_cache)\n def handle(self, request: Request, signed_params: str) -> HttpResponse:\n if request.method not in ALLOWED_METHODS:\n return render_error_page(request, status=405, body_text=\"HTTP 405: Method not allowed\")\n\n try:\n params = unsign(signed_params)\n except (SignatureExpired, BadSignature):\n return render_to_response(\n \"sentry/integrations/slack/expired-link.html\",\n status=400,\n request=request,\n )\n\n integration = integration_service.get_integration(integration_id=params[\"integration_id\"])\n if integration is None:\n raise Http404\n\n organization_memberships = OrganizationMember.objects.get_for_integration(\n integration, request.user\n )\n # Filter to teams where we have write access to, either through having a sufficient\n # organization role (owner/manager/admin) or by being a team admin on at least one team.\n teams_by_id = {}\n for org_membership in organization_memberships:\n for team in Team.objects.get_for_user(\n org_membership.organization,\n request.user,\n # Setting is_team_admin to True only returns teams that member is team admin on.\n # We only want to filter for this when the user does not have a sufficient\n # role in the org, which is checked using is_valid_role.\n is_team_admin=not is_valid_role(org_membership),\n ):\n teams_by_id[team.id] = team\n\n if not teams_by_id:\n return render_error_page(\n request,\n status=404,\n body_text=\"HTTP 404: No teams found in your organizations to link. You must be a Sentry organization admin/manager/owner or a team admin to link a team in your respective organization.\",\n )\n\n channel_name = params[\"channel_name\"]\n channel_id = params[\"channel_id\"]\n form = SelectTeamForm(list(teams_by_id.values()), request.POST or None)\n\n if request.method == \"GET\":\n return self.respond(\n \"sentry/integrations/slack/link-team.html\",\n {\n \"form\": form,\n \"teams\": teams_by_id.values(),\n \"channel_name\": channel_name,\n \"provider\": integration.get_provider(),\n },\n )\n\n if not form.is_valid():\n return render_error_page(request, status=400, body_text=\"HTTP 400: Bad request\")\n\n team_id = int(form.cleaned_data[\"team\"])\n team = teams_by_id.get(team_id)\n if not team:\n return render_error_page(\n request,\n status=404,\n body_text=\"HTTP 404: Team does not exist or you do not have sufficient permission to link a team\",\n )\n\n idp = identity_service.get_provider(\n provider_type=\"slack\", provider_ext_id=integration.external_id\n )\n if idp is None:\n logger.info(\"slack.action.invalid-team-id\", extra={\"slack_id\": integration.external_id})\n return render_error_page(request, status=403, body_text=\"HTTP 403: Invalid team ID\")\n\n ident = identity_service.get_identity(\n filter={\"provider_id\": idp.id, \"identity_ext_id\": params[\"slack_id\"]}\n )\n if not ident:\n return render_error_page(\n request, status=403, body_text=\"HTTP 403: User identity does not exist\"\n )\n\n external_team, created = ExternalActor.objects.get_or_create(\n team_id=team.id,\n organization=team.organization,\n integration_id=integration.id,\n provider=ExternalProviders.SLACK.value,\n defaults=dict(\n external_name=channel_name,\n external_id=channel_id,\n ),\n )\n\n analytics.record(\n \"integrations.identity_linked\",\n provider=\"slack\",\n actor_id=team.actor_id,\n actor_type=\"team\",\n )\n\n if not created:\n message = ALREADY_LINKED_MESSAGE.format(slug=team.slug)\n\n 
integration_service.send_message(\n integration_id=integration.id,\n organization_id=team.organization_id,\n channel=channel_id,\n message=message,\n )\n return render_to_response(\n \"sentry/integrations/slack/post-linked-team.html\",\n request=request,\n context={\n \"heading_text\": ALREADY_LINKED_TITLE,\n \"body_text\": message,\n \"channel_id\": channel_id,\n \"team_id\": integration.external_id,\n },\n )\n\n # Turn on notifications for all of a team's projects.\n notifications_service.update_settings(\n external_provider=ExternalProviders.SLACK,\n notification_type=NotificationSettingTypes.ISSUE_ALERTS,\n setting_option=NotificationSettingOptionValues.ALWAYS,\n actor=RpcActor(id=team.id, actor_type=ActorType.TEAM),\n organization_id_for_team=team.organization_id,\n )\n message = SUCCESS_LINKED_MESSAGE.format(\n slug=team.slug,\n workflow_addon=\" and workflow\"\n if features.has(\"organizations:team-workflow-notifications\", team.organization)\n else \"\",\n channel_name=channel_name,\n )\n integration_service.send_message(\n integration_id=integration.id,\n organization_id=team.organization_id,\n channel=channel_id,\n message=message,\n )\n return render_to_response(\n \"sentry/integrations/slack/post-linked-team.html\",\n request=request,\n context={\n \"heading_text\": SUCCESS_LINKED_TITLE,\n \"body_text\": message,\n \"channel_id\": channel_id,\n \"team_id\": integration.external_id,\n },\n )\n", "path": "src/sentry/integrations/slack/views/link_team.py"}]}
| 2,804 | 401 |
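The golden diff above gates the extra wording on the `organizations:team-workflow-notifications` feature flag. A minimal sketch of that pattern follows; `has_feature` and the dataclasses are stand-ins for Sentry's `features.has` helper and ORM models, while the flag name and message template are taken from the diff itself.

```python
from dataclasses import dataclass, field


@dataclass
class Organization:
    flags: set = field(default_factory=set)


@dataclass
class Team:
    slug: str
    organization: Organization


SUCCESS_LINKED_MESSAGE = (
    "The {slug} team will now receive issue alert{workflow_addon} "
    "notifications in the {channel_name} channel."
)


def has_feature(flag: str, organization: Organization) -> bool:
    # Stand-in for sentry's features.has(flag, organization)
    return flag in organization.flags


def build_success_message(team: Team, channel_name: str) -> str:
    workflow_addon = (
        " and workflow"
        if has_feature("organizations:team-workflow-notifications", team.organization)
        else ""
    )
    return SUCCESS_LINKED_MESSAGE.format(
        slug=team.slug, workflow_addon=workflow_addon, channel_name=channel_name
    )


org = Organization(flags={"organizations:team-workflow-notifications"})
print(build_success_message(Team("backend", org), "#backend-alerts"))
# -> The backend team will now receive issue alert and workflow notifications
#    in the #backend-alerts channel.
```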
gh_patches_debug_24706
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-1181
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show title in figure for contour plot.
I noticed `optuna.visualization.plot_contour` shows a plot without its title.
<img width="913" alt="Screen Shot 2020-04-28 at 10 49 53" src="https://user-images.githubusercontent.com/5164000/80437965-01d5fc00-893e-11ea-896a-343b2e51bc90.png">
I think it's better to show a title to be consistent with other plot features such as `optuna.visualization.plot_optimization_history` and `optuna.visualization.plot_slice`.
We can reproduce this behavior in the [official quickstart](https://colab.research.google.com/github/optuna/optuna/blob/master/examples/quickstart.ipynb)
## Expected behavior
Show the title of the figure in the contour plot.
## Environment
- Optuna version: 1.3.0
- Python version: 3.6.9
- OS: Ubuntu 18.04.3 LTS
## Steps to reproduce
Run notebook on [Google Colab](https://colab.research.google.com/github/optuna/optuna/blob/master/examples/quickstart.ipynb).
--- END ISSUE ---
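As a generic illustration of the behavior the issue asks for (independent of Optuna's own patch, and assuming plotly is installed), a figure title in plotly is a layout attribute, so it only appears if it is attached to the figure that is actually returned; the sample contour data below is made up.

```python
import plotly.graph_objs as go

# Title supplied at construction time via the layout...
figure = go.Figure(
    data=[go.Contour(x=[0, 1, 2], y=[0, 1, 2], z=[[1, 2, 3], [2, 3, 4], [3, 4, 5]])],
    layout=go.Layout(title="Contour Plot"),
)
# ...or added afterwards to an already-built figure.
figure.update_layout(title="Contour Plot")
```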
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/visualization/contour.py`
Content:
```
1 import math
2
3 from optuna.logging import get_logger
4 from optuna.study import StudyDirection
5 from optuna.trial import TrialState
6 from optuna import type_checking
7 from optuna.visualization.utils import _check_plotly_availability
8 from optuna.visualization.utils import _is_log_scale
9 from optuna.visualization.utils import is_available
10
11 if type_checking.TYPE_CHECKING:
12 from typing import List # NOQA
13 from typing import Optional # NOQA
14 from typing import Tuple # NOQA
15
16 from optuna.study import Study # NOQA
17 from optuna.trial import FrozenTrial # NOQA
18 from optuna.visualization.plotly_imports import Contour # NOQA
19 from optuna.visualization.plotly_imports import Scatter # NOQA
20
21 if is_available():
22 from optuna.visualization.plotly_imports import go
23 from optuna.visualization.plotly_imports import make_subplots
24 from optuna.visualization.plotly_imports import plotly
25
26 logger = get_logger(__name__)
27
28
29 def plot_contour(study, params=None):
30 # type: (Study, Optional[List[str]]) -> go.Figure
31 """Plot the parameter relationship as contour plot in a study.
32
33 Note that, If a parameter contains missing values, a trial with missing values is not plotted.
34
35 Example:
36
37 The following code snippet shows how to plot the parameter relationship as contour plot.
38
39 .. testcode::
40
41 import optuna
42
43 def objective(trial):
44 x = trial.suggest_uniform('x', -100, 100)
45 y = trial.suggest_categorical('y', [-1, 0, 1])
46 return x ** 2 + y
47
48 study = optuna.create_study()
49 study.optimize(objective, n_trials=10)
50
51 optuna.visualization.plot_contour(study, params=['x', 'y'])
52
53 .. raw:: html
54
55 <iframe src="../_static/plot_contour.html" width="100%" height="500px" frameborder="0">
56 </iframe>
57
58 Args:
59 study:
60 A :class:`~optuna.study.Study` object whose trials are plotted for their objective
61 values.
62 params:
63 Parameter list to visualize. The default is all parameters.
64
65 Returns:
66 A :class:`plotly.graph_objs.Figure` object.
67 """
68
69 _check_plotly_availability()
70 return _get_contour_plot(study, params)
71
72
73 def _get_contour_plot(study, params=None):
74 # type: (Study, Optional[List[str]]) -> go.Figure
75
76 layout = go.Layout(title="Contour Plot",)
77
78 trials = [trial for trial in study.trials if trial.state == TrialState.COMPLETE]
79
80 if len(trials) == 0:
81 logger.warning("Your study does not have any completed trials.")
82 return go.Figure(data=[], layout=layout)
83
84 all_params = {p_name for t in trials for p_name in t.params.keys()}
85 if params is None:
86 sorted_params = sorted(list(all_params))
87 elif len(params) <= 1:
88 logger.warning("The length of params must be greater than 1.")
89 return go.Figure(data=[], layout=layout)
90 else:
91 for input_p_name in params:
92 if input_p_name not in all_params:
93 raise ValueError("Parameter {} does not exist in your study.".format(input_p_name))
94 sorted_params = sorted(list(set(params)))
95
96 param_values_range = {}
97 for p_name in sorted_params:
98 values = [t.params[p_name] for t in trials if p_name in t.params]
99 param_values_range[p_name] = (min(values), max(values))
100
101 if len(sorted_params) == 2:
102 x_param = sorted_params[0]
103 y_param = sorted_params[1]
104 sub_plots = _generate_contour_subplot(trials, x_param, y_param, study.direction)
105 figure = go.Figure(data=sub_plots)
106 figure.update_xaxes(title_text=x_param, range=param_values_range[x_param])
107 figure.update_yaxes(title_text=y_param, range=param_values_range[y_param])
108 if _is_log_scale(trials, x_param):
109 log_range = [math.log10(p) for p in param_values_range[x_param]]
110 figure.update_xaxes(range=log_range, type="log")
111 if _is_log_scale(trials, y_param):
112 log_range = [math.log10(p) for p in param_values_range[y_param]]
113 figure.update_yaxes(range=log_range, type="log")
114 else:
115 figure = make_subplots(
116 rows=len(sorted_params), cols=len(sorted_params), shared_xaxes=True, shared_yaxes=True
117 )
118 showscale = True # showscale option only needs to be specified once
119 for x_i, x_param in enumerate(sorted_params):
120 for y_i, y_param in enumerate(sorted_params):
121 if x_param == y_param:
122 figure.add_trace(go.Scatter(), row=y_i + 1, col=x_i + 1)
123 else:
124 sub_plots = _generate_contour_subplot(
125 trials, x_param, y_param, study.direction
126 )
127 contour = sub_plots[0]
128 scatter = sub_plots[1]
129 contour.update(showscale=showscale) # showscale's default is True
130 if showscale:
131 showscale = False
132 figure.add_trace(contour, row=y_i + 1, col=x_i + 1)
133 figure.add_trace(scatter, row=y_i + 1, col=x_i + 1)
134 figure.update_xaxes(range=param_values_range[x_param], row=y_i + 1, col=x_i + 1)
135 figure.update_yaxes(range=param_values_range[y_param], row=y_i + 1, col=x_i + 1)
136 if _is_log_scale(trials, x_param):
137 log_range = [math.log10(p) for p in param_values_range[x_param]]
138 figure.update_xaxes(range=log_range, type="log", row=y_i + 1, col=x_i + 1)
139 if _is_log_scale(trials, y_param):
140 log_range = [math.log10(p) for p in param_values_range[y_param]]
141 figure.update_yaxes(range=log_range, type="log", row=y_i + 1, col=x_i + 1)
142 if x_i == 0:
143 figure.update_yaxes(title_text=y_param, row=y_i + 1, col=x_i + 1)
144 if y_i == len(sorted_params) - 1:
145 figure.update_xaxes(title_text=x_param, row=y_i + 1, col=x_i + 1)
146
147 return figure
148
149
150 def _generate_contour_subplot(trials, x_param, y_param, direction):
151 # type: (List[FrozenTrial], str, str, StudyDirection) -> Tuple[Contour, Scatter]
152
153 x_indices = sorted(list({t.params[x_param] for t in trials if x_param in t.params}))
154 y_indices = sorted(list({t.params[y_param] for t in trials if y_param in t.params}))
155 if len(x_indices) < 2:
156 logger.warning("Param {} unique value length is less than 2.".format(x_param))
157 return go.Contour(), go.Scatter()
158 if len(y_indices) < 2:
159 logger.warning("Param {} unique value length is less than 2.".format(y_param))
160 return go.Contour(), go.Scatter()
161 z = [[float("nan") for _ in range(len(x_indices))] for _ in range(len(y_indices))]
162
163 x_values = []
164 y_values = []
165 for trial in trials:
166 if x_param not in trial.params or y_param not in trial.params:
167 continue
168 x_values.append(trial.params[x_param])
169 y_values.append(trial.params[y_param])
170 x_i = x_indices.index(trial.params[x_param])
171 y_i = y_indices.index(trial.params[y_param])
172 if isinstance(trial.value, int):
173 value = float(trial.value)
174 elif isinstance(trial.value, float):
175 value = trial.value
176 else:
177 raise ValueError(
178 "Trial{} has COMPLETE state, but its value is non-numeric.".format(trial.number)
179 )
180 z[y_i][x_i] = value
181
182 # TODO(Yanase): Use reversescale argument to reverse colorscale if Plotly's bug is fixed.
183 # If contours_coloring='heatmap' is specified, reversescale argument of go.Contour does not
184 # work correctly. See https://github.com/pfnet/optuna/issues/606.
185 colorscale = plotly.colors.PLOTLY_SCALES["Blues"]
186 if direction == StudyDirection.MINIMIZE:
187 colorscale = [[1 - t[0], t[1]] for t in colorscale]
188 colorscale.reverse()
189
190 contour = go.Contour(
191 x=x_indices,
192 y=y_indices,
193 z=z,
194 colorbar={"title": "Objective Value"},
195 colorscale=colorscale,
196 connectgaps=True,
197 contours_coloring="heatmap",
198 hoverinfo="none",
199 line_smoothing=1.3,
200 )
201
202 scatter = go.Scatter(
203 x=x_values, y=y_values, marker={"color": "black"}, mode="markers", showlegend=False
204 )
205
206 return (contour, scatter)
207
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/optuna/visualization/contour.py b/optuna/visualization/contour.py
--- a/optuna/visualization/contour.py
+++ b/optuna/visualization/contour.py
@@ -102,7 +102,7 @@
x_param = sorted_params[0]
y_param = sorted_params[1]
sub_plots = _generate_contour_subplot(trials, x_param, y_param, study.direction)
- figure = go.Figure(data=sub_plots)
+ figure = go.Figure(data=sub_plots, layout=layout)
figure.update_xaxes(title_text=x_param, range=param_values_range[x_param])
figure.update_yaxes(title_text=y_param, range=param_values_range[y_param])
if _is_log_scale(trials, x_param):
@@ -115,6 +115,7 @@
figure = make_subplots(
rows=len(sorted_params), cols=len(sorted_params), shared_xaxes=True, shared_yaxes=True
)
+ figure.update_layout(layout)
showscale = True # showscale option only needs to be specified once
for x_i, x_param in enumerate(sorted_params):
for y_i, y_param in enumerate(sorted_params):
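The diff above attaches the already-built `layout` (which carries the title) to both figure-building paths. A minimal standalone sketch of that plotly pattern, assuming only that plotly is installed and using toy contour data rather than Optuna trials:

```python
# Minimal sketch of the pattern in the diff above: the title lives in a
# go.Layout object, so it must be handed to the figure on both code paths.
import plotly.graph_objs as go
from plotly.subplots import make_subplots

layout = go.Layout(title="Contour Plot")

# Two-parameter path: pass the layout straight to the Figure constructor.
single = go.Figure(data=[go.Contour(z=[[1, 2], [3, 4]])], layout=layout)

# Multi-parameter path: make_subplots builds its own layout, so merge ours in.
grid = make_subplots(rows=2, cols=2, shared_xaxes=True, shared_yaxes=True)
grid.update_layout(layout)

print(single.layout.title.text)  # Contour Plot
print(grid.layout.title.text)    # Contour Plot
```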
|
{"golden_diff": "diff --git a/optuna/visualization/contour.py b/optuna/visualization/contour.py\n--- a/optuna/visualization/contour.py\n+++ b/optuna/visualization/contour.py\n@@ -102,7 +102,7 @@\n x_param = sorted_params[0]\n y_param = sorted_params[1]\n sub_plots = _generate_contour_subplot(trials, x_param, y_param, study.direction)\n- figure = go.Figure(data=sub_plots)\n+ figure = go.Figure(data=sub_plots, layout=layout)\n figure.update_xaxes(title_text=x_param, range=param_values_range[x_param])\n figure.update_yaxes(title_text=y_param, range=param_values_range[y_param])\n if _is_log_scale(trials, x_param):\n@@ -115,6 +115,7 @@\n figure = make_subplots(\n rows=len(sorted_params), cols=len(sorted_params), shared_xaxes=True, shared_yaxes=True\n )\n+ figure.update_layout(layout)\n showscale = True # showscale option only needs to be specified once\n for x_i, x_param in enumerate(sorted_params):\n for y_i, y_param in enumerate(sorted_params):\n", "issue": "Show title in figure for contour plot.\nI noticed `optuna.visualization.plot_contour` show a plot without its title.\r\n\r\n<img width=\"913\" alt=\"Screen Shot 2020-04-28 at 10 49 53\" src=\"https://user-images.githubusercontent.com/5164000/80437965-01d5fc00-893e-11ea-896a-343b2e51bc90.png\">\r\n\r\nI think it's better to show a title to be consistent with other plot features such as `optuna.visualization.plot_optimization_history` and `optuna.visualization.plot_slice`.\r\n\r\nWe can reproduce this behavior in the [official quickstart](https://colab.research.google.com/github/optuna/optuna/blob/master/examples/quickstart.ipynb)\r\n\r\n## Expected behavior\r\n\r\nShow title of a figure in the contour plot.\r\n\r\n## Environment\r\n\r\n- Optuna version: 1.3.0\r\n- Python version: 3.6.9\r\n- OS: Ubuntu 18.04.3 LTS\r\n\r\n## Steps to reproduce\r\n\r\nRun notebook on [Google Colab](https://colab.research.google.com/github/optuna/optuna/blob/master/examples/quickstart.ipynb).\n", "before_files": [{"content": "import math\n\nfrom optuna.logging import get_logger\nfrom optuna.study import StudyDirection\nfrom optuna.trial import TrialState\nfrom optuna import type_checking\nfrom optuna.visualization.utils import _check_plotly_availability\nfrom optuna.visualization.utils import _is_log_scale\nfrom optuna.visualization.utils import is_available\n\nif type_checking.TYPE_CHECKING:\n from typing import List # NOQA\n from typing import Optional # NOQA\n from typing import Tuple # NOQA\n\n from optuna.study import Study # NOQA\n from optuna.trial import FrozenTrial # NOQA\n from optuna.visualization.plotly_imports import Contour # NOQA\n from optuna.visualization.plotly_imports import Scatter # NOQA\n\nif is_available():\n from optuna.visualization.plotly_imports import go\n from optuna.visualization.plotly_imports import make_subplots\n from optuna.visualization.plotly_imports import plotly\n\nlogger = get_logger(__name__)\n\n\ndef plot_contour(study, params=None):\n # type: (Study, Optional[List[str]]) -> go.Figure\n \"\"\"Plot the parameter relationship as contour plot in a study.\n\n Note that, If a parameter contains missing values, a trial with missing values is not plotted.\n\n Example:\n\n The following code snippet shows how to plot the parameter relationship as contour plot.\n\n .. 
testcode::\n\n import optuna\n\n def objective(trial):\n x = trial.suggest_uniform('x', -100, 100)\n y = trial.suggest_categorical('y', [-1, 0, 1])\n return x ** 2 + y\n\n study = optuna.create_study()\n study.optimize(objective, n_trials=10)\n\n optuna.visualization.plot_contour(study, params=['x', 'y'])\n\n .. raw:: html\n\n <iframe src=\"../_static/plot_contour.html\" width=\"100%\" height=\"500px\" frameborder=\"0\">\n </iframe>\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their objective\n values.\n params:\n Parameter list to visualize. The default is all parameters.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _check_plotly_availability()\n return _get_contour_plot(study, params)\n\n\ndef _get_contour_plot(study, params=None):\n # type: (Study, Optional[List[str]]) -> go.Figure\n\n layout = go.Layout(title=\"Contour Plot\",)\n\n trials = [trial for trial in study.trials if trial.state == TrialState.COMPLETE]\n\n if len(trials) == 0:\n logger.warning(\"Your study does not have any completed trials.\")\n return go.Figure(data=[], layout=layout)\n\n all_params = {p_name for t in trials for p_name in t.params.keys()}\n if params is None:\n sorted_params = sorted(list(all_params))\n elif len(params) <= 1:\n logger.warning(\"The length of params must be greater than 1.\")\n return go.Figure(data=[], layout=layout)\n else:\n for input_p_name in params:\n if input_p_name not in all_params:\n raise ValueError(\"Parameter {} does not exist in your study.\".format(input_p_name))\n sorted_params = sorted(list(set(params)))\n\n param_values_range = {}\n for p_name in sorted_params:\n values = [t.params[p_name] for t in trials if p_name in t.params]\n param_values_range[p_name] = (min(values), max(values))\n\n if len(sorted_params) == 2:\n x_param = sorted_params[0]\n y_param = sorted_params[1]\n sub_plots = _generate_contour_subplot(trials, x_param, y_param, study.direction)\n figure = go.Figure(data=sub_plots)\n figure.update_xaxes(title_text=x_param, range=param_values_range[x_param])\n figure.update_yaxes(title_text=y_param, range=param_values_range[y_param])\n if _is_log_scale(trials, x_param):\n log_range = [math.log10(p) for p in param_values_range[x_param]]\n figure.update_xaxes(range=log_range, type=\"log\")\n if _is_log_scale(trials, y_param):\n log_range = [math.log10(p) for p in param_values_range[y_param]]\n figure.update_yaxes(range=log_range, type=\"log\")\n else:\n figure = make_subplots(\n rows=len(sorted_params), cols=len(sorted_params), shared_xaxes=True, shared_yaxes=True\n )\n showscale = True # showscale option only needs to be specified once\n for x_i, x_param in enumerate(sorted_params):\n for y_i, y_param in enumerate(sorted_params):\n if x_param == y_param:\n figure.add_trace(go.Scatter(), row=y_i + 1, col=x_i + 1)\n else:\n sub_plots = _generate_contour_subplot(\n trials, x_param, y_param, study.direction\n )\n contour = sub_plots[0]\n scatter = sub_plots[1]\n contour.update(showscale=showscale) # showscale's default is True\n if showscale:\n showscale = False\n figure.add_trace(contour, row=y_i + 1, col=x_i + 1)\n figure.add_trace(scatter, row=y_i + 1, col=x_i + 1)\n figure.update_xaxes(range=param_values_range[x_param], row=y_i + 1, col=x_i + 1)\n figure.update_yaxes(range=param_values_range[y_param], row=y_i + 1, col=x_i + 1)\n if _is_log_scale(trials, x_param):\n log_range = [math.log10(p) for p in param_values_range[x_param]]\n figure.update_xaxes(range=log_range, type=\"log\", row=y_i + 1, 
col=x_i + 1)\n if _is_log_scale(trials, y_param):\n log_range = [math.log10(p) for p in param_values_range[y_param]]\n figure.update_yaxes(range=log_range, type=\"log\", row=y_i + 1, col=x_i + 1)\n if x_i == 0:\n figure.update_yaxes(title_text=y_param, row=y_i + 1, col=x_i + 1)\n if y_i == len(sorted_params) - 1:\n figure.update_xaxes(title_text=x_param, row=y_i + 1, col=x_i + 1)\n\n return figure\n\n\ndef _generate_contour_subplot(trials, x_param, y_param, direction):\n # type: (List[FrozenTrial], str, str, StudyDirection) -> Tuple[Contour, Scatter]\n\n x_indices = sorted(list({t.params[x_param] for t in trials if x_param in t.params}))\n y_indices = sorted(list({t.params[y_param] for t in trials if y_param in t.params}))\n if len(x_indices) < 2:\n logger.warning(\"Param {} unique value length is less than 2.\".format(x_param))\n return go.Contour(), go.Scatter()\n if len(y_indices) < 2:\n logger.warning(\"Param {} unique value length is less than 2.\".format(y_param))\n return go.Contour(), go.Scatter()\n z = [[float(\"nan\") for _ in range(len(x_indices))] for _ in range(len(y_indices))]\n\n x_values = []\n y_values = []\n for trial in trials:\n if x_param not in trial.params or y_param not in trial.params:\n continue\n x_values.append(trial.params[x_param])\n y_values.append(trial.params[y_param])\n x_i = x_indices.index(trial.params[x_param])\n y_i = y_indices.index(trial.params[y_param])\n if isinstance(trial.value, int):\n value = float(trial.value)\n elif isinstance(trial.value, float):\n value = trial.value\n else:\n raise ValueError(\n \"Trial{} has COMPLETE state, but its value is non-numeric.\".format(trial.number)\n )\n z[y_i][x_i] = value\n\n # TODO(Yanase): Use reversescale argument to reverse colorscale if Plotly's bug is fixed.\n # If contours_coloring='heatmap' is specified, reversesecale argument of go.Contour does not\n # work correctly. 
See https://github.com/pfnet/optuna/issues/606.\n colorscale = plotly.colors.PLOTLY_SCALES[\"Blues\"]\n if direction == StudyDirection.MINIMIZE:\n colorscale = [[1 - t[0], t[1]] for t in colorscale]\n colorscale.reverse()\n\n contour = go.Contour(\n x=x_indices,\n y=y_indices,\n z=z,\n colorbar={\"title\": \"Objective Value\"},\n colorscale=colorscale,\n connectgaps=True,\n contours_coloring=\"heatmap\",\n hoverinfo=\"none\",\n line_smoothing=1.3,\n )\n\n scatter = go.Scatter(\n x=x_values, y=y_values, marker={\"color\": \"black\"}, mode=\"markers\", showlegend=False\n )\n\n return (contour, scatter)\n", "path": "optuna/visualization/contour.py"}], "after_files": [{"content": "import math\n\nfrom optuna.logging import get_logger\nfrom optuna.study import StudyDirection\nfrom optuna.trial import TrialState\nfrom optuna import type_checking\nfrom optuna.visualization.utils import _check_plotly_availability\nfrom optuna.visualization.utils import _is_log_scale\nfrom optuna.visualization.utils import is_available\n\nif type_checking.TYPE_CHECKING:\n from typing import List # NOQA\n from typing import Optional # NOQA\n from typing import Tuple # NOQA\n\n from optuna.study import Study # NOQA\n from optuna.trial import FrozenTrial # NOQA\n from optuna.visualization.plotly_imports import Contour # NOQA\n from optuna.visualization.plotly_imports import Scatter # NOQA\n\nif is_available():\n from optuna.visualization.plotly_imports import go\n from optuna.visualization.plotly_imports import make_subplots\n from optuna.visualization.plotly_imports import plotly\n\nlogger = get_logger(__name__)\n\n\ndef plot_contour(study, params=None):\n # type: (Study, Optional[List[str]]) -> go.Figure\n \"\"\"Plot the parameter relationship as contour plot in a study.\n\n Note that, If a parameter contains missing values, a trial with missing values is not plotted.\n\n Example:\n\n The following code snippet shows how to plot the parameter relationship as contour plot.\n\n .. testcode::\n\n import optuna\n\n def objective(trial):\n x = trial.suggest_uniform('x', -100, 100)\n y = trial.suggest_categorical('y', [-1, 0, 1])\n return x ** 2 + y\n\n study = optuna.create_study()\n study.optimize(objective, n_trials=10)\n\n optuna.visualization.plot_contour(study, params=['x', 'y'])\n\n .. raw:: html\n\n <iframe src=\"../_static/plot_contour.html\" width=\"100%\" height=\"500px\" frameborder=\"0\">\n </iframe>\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their objective\n values.\n params:\n Parameter list to visualize. 
The default is all parameters.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _check_plotly_availability()\n return _get_contour_plot(study, params)\n\n\ndef _get_contour_plot(study, params=None):\n # type: (Study, Optional[List[str]]) -> go.Figure\n\n layout = go.Layout(title=\"Contour Plot\",)\n\n trials = [trial for trial in study.trials if trial.state == TrialState.COMPLETE]\n\n if len(trials) == 0:\n logger.warning(\"Your study does not have any completed trials.\")\n return go.Figure(data=[], layout=layout)\n\n all_params = {p_name for t in trials for p_name in t.params.keys()}\n if params is None:\n sorted_params = sorted(list(all_params))\n elif len(params) <= 1:\n logger.warning(\"The length of params must be greater than 1.\")\n return go.Figure(data=[], layout=layout)\n else:\n for input_p_name in params:\n if input_p_name not in all_params:\n raise ValueError(\"Parameter {} does not exist in your study.\".format(input_p_name))\n sorted_params = sorted(list(set(params)))\n\n param_values_range = {}\n for p_name in sorted_params:\n values = [t.params[p_name] for t in trials if p_name in t.params]\n param_values_range[p_name] = (min(values), max(values))\n\n if len(sorted_params) == 2:\n x_param = sorted_params[0]\n y_param = sorted_params[1]\n sub_plots = _generate_contour_subplot(trials, x_param, y_param, study.direction)\n figure = go.Figure(data=sub_plots, layout=layout)\n figure.update_xaxes(title_text=x_param, range=param_values_range[x_param])\n figure.update_yaxes(title_text=y_param, range=param_values_range[y_param])\n if _is_log_scale(trials, x_param):\n log_range = [math.log10(p) for p in param_values_range[x_param]]\n figure.update_xaxes(range=log_range, type=\"log\")\n if _is_log_scale(trials, y_param):\n log_range = [math.log10(p) for p in param_values_range[y_param]]\n figure.update_yaxes(range=log_range, type=\"log\")\n else:\n figure = make_subplots(\n rows=len(sorted_params), cols=len(sorted_params), shared_xaxes=True, shared_yaxes=True\n )\n figure.update_layout(layout)\n showscale = True # showscale option only needs to be specified once\n for x_i, x_param in enumerate(sorted_params):\n for y_i, y_param in enumerate(sorted_params):\n if x_param == y_param:\n figure.add_trace(go.Scatter(), row=y_i + 1, col=x_i + 1)\n else:\n sub_plots = _generate_contour_subplot(\n trials, x_param, y_param, study.direction\n )\n contour = sub_plots[0]\n scatter = sub_plots[1]\n contour.update(showscale=showscale) # showscale's default is True\n if showscale:\n showscale = False\n figure.add_trace(contour, row=y_i + 1, col=x_i + 1)\n figure.add_trace(scatter, row=y_i + 1, col=x_i + 1)\n figure.update_xaxes(range=param_values_range[x_param], row=y_i + 1, col=x_i + 1)\n figure.update_yaxes(range=param_values_range[y_param], row=y_i + 1, col=x_i + 1)\n if _is_log_scale(trials, x_param):\n log_range = [math.log10(p) for p in param_values_range[x_param]]\n figure.update_xaxes(range=log_range, type=\"log\", row=y_i + 1, col=x_i + 1)\n if _is_log_scale(trials, y_param):\n log_range = [math.log10(p) for p in param_values_range[y_param]]\n figure.update_yaxes(range=log_range, type=\"log\", row=y_i + 1, col=x_i + 1)\n if x_i == 0:\n figure.update_yaxes(title_text=y_param, row=y_i + 1, col=x_i + 1)\n if y_i == len(sorted_params) - 1:\n figure.update_xaxes(title_text=x_param, row=y_i + 1, col=x_i + 1)\n\n return figure\n\n\ndef _generate_contour_subplot(trials, x_param, y_param, direction):\n # type: (List[FrozenTrial], str, str, StudyDirection) -> Tuple[Contour, 
Scatter]\n\n x_indices = sorted(list({t.params[x_param] for t in trials if x_param in t.params}))\n y_indices = sorted(list({t.params[y_param] for t in trials if y_param in t.params}))\n if len(x_indices) < 2:\n logger.warning(\"Param {} unique value length is less than 2.\".format(x_param))\n return go.Contour(), go.Scatter()\n if len(y_indices) < 2:\n logger.warning(\"Param {} unique value length is less than 2.\".format(y_param))\n return go.Contour(), go.Scatter()\n z = [[float(\"nan\") for _ in range(len(x_indices))] for _ in range(len(y_indices))]\n\n x_values = []\n y_values = []\n for trial in trials:\n if x_param not in trial.params or y_param not in trial.params:\n continue\n x_values.append(trial.params[x_param])\n y_values.append(trial.params[y_param])\n x_i = x_indices.index(trial.params[x_param])\n y_i = y_indices.index(trial.params[y_param])\n if isinstance(trial.value, int):\n value = float(trial.value)\n elif isinstance(trial.value, float):\n value = trial.value\n else:\n raise ValueError(\n \"Trial{} has COMPLETE state, but its value is non-numeric.\".format(trial.number)\n )\n z[y_i][x_i] = value\n\n # TODO(Yanase): Use reversescale argument to reverse colorscale if Plotly's bug is fixed.\n # If contours_coloring='heatmap' is specified, reversesecale argument of go.Contour does not\n # work correctly. See https://github.com/pfnet/optuna/issues/606.\n colorscale = plotly.colors.PLOTLY_SCALES[\"Blues\"]\n if direction == StudyDirection.MINIMIZE:\n colorscale = [[1 - t[0], t[1]] for t in colorscale]\n colorscale.reverse()\n\n contour = go.Contour(\n x=x_indices,\n y=y_indices,\n z=z,\n colorbar={\"title\": \"Objective Value\"},\n colorscale=colorscale,\n connectgaps=True,\n contours_coloring=\"heatmap\",\n hoverinfo=\"none\",\n line_smoothing=1.3,\n )\n\n scatter = go.Scatter(\n x=x_values, y=y_values, marker={\"color\": \"black\"}, mode=\"markers\", showlegend=False\n )\n\n return (contour, scatter)\n", "path": "optuna/visualization/contour.py"}]}
| 3,091 | 263 |
gh_patches_debug_3461
|
rasdani/github-patches
|
git_diff
|
fonttools__fonttools-2689
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[varLib.merger] <exception str() failed>
I'm in the process of figuring out what is happening here, but in the meantime I thought I'd post this as it appears there is a bug in the exception handling: see the `<exception str() failed>` messages below.
```
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/font_project.py", line 1065, in run_from_designspace
self._run_from_designspace_interpolatable(
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/font_project.py", line 1157, in _run_from_designspace_interpolatable
self.build_variable_fonts(
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/font_project.py", line 384, in build_variable_fonts
fonts = ufo2ft.compileVariableTTFs(
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/ufo2ft/__init__.py", line 605, in compileVariableTTFs
vfNameToTTFont = varLib.build_many(
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/__init__.py", line 911, in build_many
vf = build(
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/__init__.py", line 986, in build
_merge_OTL(vf, model, master_fonts, axisTags)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/__init__.py", line 624, in _merge_OTL
merger.mergeTables(font, master_fonts, ['GSUB', 'GDEF', 'GPOS'])
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 150, in mergeTables
self.mergeThings(font[tag], [m.get(tag) for m in master_ttfs])
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 1067, in mergeThings
super(VariationMerger, self).mergeThings(out, lst)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 138, in mergeThings
self.mergeObjects(out, lst)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 108, in mergeObjects
mergerFunc(self, value, values)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 1067, in mergeThings
super(VariationMerger, self).mergeThings(out, lst)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 138, in mergeThings
self.mergeObjects(out, lst)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 108, in mergeObjects
mergerFunc(self, value, values)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 1067, in mergeThings
super(VariationMerger, self).mergeThings(out, lst)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 138, in mergeThings
self.mergeObjects(out, lst)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 108, in mergeObjects
mergerFunc(self, value, values)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 1067, in mergeThings
super(VariationMerger, self).mergeThings(out, lst)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py", line 143, in mergeThings
raise ShouldBeConstant(self, expected=out, got=lst)
fontTools.varLib.errors.ShouldBeConstant: <exception str() failed>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/__main__.py", line 639, in main
project.run_from_designspace(inputs.designspace_path, **args)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/font_project.py", line 1080, in run_from_designspace
raise FontmakeError(
fontmake.errors.FontmakeError: <exception str() failed>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.8.13/x64/bin/fontmake", line 8, in <module>
sys.exit(main())
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/__main__.py", line 664, in main
sys.exit(f"fontmake: Error: {str(e)}")
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/errors.py", line 33, in __str__
cause = str(self.__cause__) if self.__cause__ is not None else None
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/errors.py", line 70, in __str__
return "\n\n" + basic + location + self.details
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/errors.py", line 86, in details
good_ttf = next(
StopIteration
```
--- END ISSUE ---
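The `<exception str() failed>` markers are what Python's traceback machinery prints when calling `str()` on an exception object itself raises; the trailing `StopIteration` shows that is exactly what happens inside `VarLibMergeError.__str__`. A minimal sketch of that behaviour, independent of the fontTools classes:

```python
# Minimal sketch: when an exception's __str__ raises (here StopIteration from
# next() on an empty generator), formatting it falls back to the placeholder
# "<exception str() failed>" instead of a real message.
import traceback


class BrokenError(Exception):
    def __str__(self):
        return next(item for item in [] if item)  # always raises StopIteration


try:
    raise BrokenError()
except BrokenError as exc:
    print("".join(traceback.format_exception_only(type(exc), exc)))
    # prints: BrokenError: <exception str() failed>
```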
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/varLib/errors.py`
Content:
```
1 import textwrap
2
3
4 class VarLibError(Exception):
5 """Base exception for the varLib module."""
6
7
8 class VarLibValidationError(VarLibError):
9 """Raised when input data is invalid from varLib's point of view."""
10
11
12 class VarLibMergeError(VarLibError):
13 """Raised when input data cannot be merged into a variable font."""
14
15 def __init__(self, merger=None, **kwargs):
16 self.merger = merger
17 if not kwargs:
18 kwargs = {}
19 if "stack" in kwargs:
20 self.stack = kwargs["stack"]
21 del kwargs["stack"]
22 else:
23 self.stack = []
24 self.cause = kwargs
25
26 @property
27 def reason(self):
28 return self.__doc__
29
30 def _master_name(self, ix):
31 if self.merger is not None:
32 ttf = self.merger.ttfs[ix]
33 if "name" in ttf and ttf["name"].getBestFullName():
34 return ttf["name"].getBestFullName()
35 elif hasattr(ttf.reader, "file") and hasattr(ttf.reader.file, "name"):
36 return ttf.reader.file.name
37 return f"master number {ix}"
38
39 @property
40 def offender(self):
41 if "expected" in self.cause and "got" in self.cause:
42 index = [x == self.cause["expected"] for x in self.cause["got"]].index(
43 False
44 )
45 master_name = self._master_name(index)
46 if "location" in self.cause:
47 master_name = f"{master_name} ({self.cause['location']})"
48 return index, master_name
49 return None, None
50
51 @property
52 def details(self):
53 if "expected" in self.cause and "got" in self.cause:
54 offender_index, offender = self.offender
55 got = self.cause["got"][offender_index]
56 return f"Expected to see {self.stack[0]}=={self.cause['expected']!r}, instead saw {got!r}\n"
57 return ""
58
59 def __str__(self):
60 offender_index, offender = self.offender
61 location = ""
62 if offender:
63 location = f"\n\nThe problem is likely to be in {offender}:\n"
64 context = "".join(reversed(self.stack))
65 basic = textwrap.fill(
66 f"Couldn't merge the fonts, because {self.reason}. "
67 f"This happened while performing the following operation: {context}",
68 width=78,
69 )
70 return "\n\n" + basic + location + self.details
71
72
73 class ShouldBeConstant(VarLibMergeError):
74 """some values were different, but should have been the same"""
75
76 @property
77 def details(self):
78 basic_message = super().details
79
80 if self.stack[0] != ".FeatureCount" or self.merger is None:
81 return basic_message
82
83 assert self.stack[0] == ".FeatureCount"
84 offender_index, _ = self.offender
85 bad_ttf = self.merger.ttfs[offender_index]
86 good_ttf = next(
87 ttf
88 for ttf in self.merger.ttfs
89 if ttf["GPOS"].table.FeatureList.FeatureCount == self.cause["expected"]
90 )
91
92 good_features = [
93 x.FeatureTag
94 for x in good_ttf[self.stack[-1]].table.FeatureList.FeatureRecord
95 ]
96 bad_features = [
97 x.FeatureTag
98 for x in bad_ttf[self.stack[-1]].table.FeatureList.FeatureRecord
99 ]
100 return basic_message + (
101 "\nIncompatible features between masters.\n"
102 f"Expected: {', '.join(good_features)}.\n"
103 f"Got: {', '.join(bad_features)}.\n"
104 )
105
106
107 class FoundANone(VarLibMergeError):
108 """one of the values in a list was empty when it shouldn't have been"""
109
110 @property
111 def offender(self):
112 index = [x is None for x in self.cause["got"]].index(True)
113 return index, self._master_name(index)
114
115 @property
116 def details(self):
117 cause, stack = self.cause, self.stack
118 return f"{stack[0]}=={cause['got']}\n"
119
120
121 class MismatchedTypes(VarLibMergeError):
122 """data had inconsistent types"""
123
124
125 class LengthsDiffer(VarLibMergeError):
126 """a list of objects had inconsistent lengths"""
127
128
129 class KeysDiffer(VarLibMergeError):
130 """a list of objects had different keys"""
131
132
133 class InconsistentGlyphOrder(VarLibMergeError):
134 """the glyph order was inconsistent between masters"""
135
136
137 class InconsistentExtensions(VarLibMergeError):
138 """the masters use extension lookups in inconsistent ways"""
139
140
141 class UnsupportedFormat(VarLibMergeError):
142 """an OpenType subtable (%s) had a format I didn't expect"""
143
144 def __init__(self, merger=None, **kwargs):
145 super().__init__(merger, **kwargs)
146 if not self.stack:
147 self.stack = [".Format"]
148
149 @property
150 def reason(self):
151 s = self.__doc__ % self.cause["subtable"]
152 if "value" in self.cause:
153 s += f" ({self.cause['value']!r})"
154 return s
155
156
157 class InconsistentFormats(UnsupportedFormat):
158 """an OpenType subtable (%s) had inconsistent formats between masters"""
159
160
161 class VarLibCFFMergeError(VarLibError):
162 pass
163
164
165 class VarLibCFFDictMergeError(VarLibCFFMergeError):
166 """Raised when a CFF PrivateDict cannot be merged."""
167
168 def __init__(self, key, value, values):
169 error_msg = (
170 f"For the Private Dict key '{key}', the default font value list:"
171 f"\n\t{value}\nhad a different number of values than a region font:"
172 )
173 for region_value in values:
174 error_msg += f"\n\t{region_value}"
175 self.args = (error_msg,)
176
177
178 class VarLibCFFPointTypeMergeError(VarLibCFFMergeError):
179 """Raised when a CFF glyph cannot be merged because of point type differences."""
180
181 def __init__(self, point_type, pt_index, m_index, default_type, glyph_name):
182 error_msg = (
183 f"Glyph '{glyph_name}': '{point_type}' at point index {pt_index} in "
184 f"master index {m_index} differs from the default font point type "
185 f"'{default_type}'"
186 )
187 self.args = (error_msg,)
188
189
190 class VarLibCFFHintTypeMergeError(VarLibCFFMergeError):
191 """Raised when a CFF glyph cannot be merged because of hint type differences."""
192
193 def __init__(self, hint_type, cmd_index, m_index, default_type, glyph_name):
194 error_msg = (
195 f"Glyph '{glyph_name}': '{hint_type}' at index {cmd_index} in "
196 f"master index {m_index} differs from the default font hint type "
197 f"'{default_type}'"
198 )
199 self.args = (error_msg,)
200
201
202 class VariationModelError(VarLibError):
203 """Raised when a variation model is faulty."""
204
```
--- END FILES ---
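Two details of `errors.py` above are worth isolating. `VarLibMergeError.offender` finds the first master whose value differs from the expected one, and `ShouldBeConstant.details` then assumes some other master exposes a `GPOS` table with the expected `FeatureCount`; when none does, the `next()` call raises the `StopIteration` seen in the issue. The offender lookup is a small idiom, sketched here with made-up values rather than real master fonts:

```python
# Sketch of the "offender" idiom from VarLibMergeError.offender: find the
# index of the first master whose value disagrees with the expected value.
expected = 4
got = [4, 4, 7, 4]  # hypothetical per-master FeatureCount values

offender_index = [value == expected for value in got].index(False)
print(offender_index)  # 2 -- the third master is the one that disagrees
```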
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/Lib/fontTools/varLib/errors.py b/Lib/fontTools/varLib/errors.py
--- a/Lib/fontTools/varLib/errors.py
+++ b/Lib/fontTools/varLib/errors.py
@@ -86,7 +86,9 @@
good_ttf = next(
ttf
for ttf in self.merger.ttfs
- if ttf["GPOS"].table.FeatureList.FeatureCount == self.cause["expected"]
+ if self.stack[-1] in ttf
+ and ttf[self.stack[-1]].table.FeatureList.FeatureCount
+ == self.cause["expected"]
)
good_features = [
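The accepted fix filters the generator so only masters that actually carry the table under inspection are considered. A more defensive alternative (not what the project chose, shown with plain dicts standing in for master fonts) would give `next()` a default so an empty search can never leak `StopIteration` out of `details`:

```python
# Hypothetical alternative guard: next() with a default returns None when no
# master matches, letting the caller fall back to the basic message instead
# of raising StopIteration from inside __str__.
masters = [{"GSUB": 3}, {"GSUB": 5}]  # made-up masters without a "GPOS" table
expected = 3

good = next((m for m in masters if m.get("GPOS") == expected), None)
if good is None:
    print("no matching master; keep the basic error message")
```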
|
{"golden_diff": "diff --git a/Lib/fontTools/varLib/errors.py b/Lib/fontTools/varLib/errors.py\n--- a/Lib/fontTools/varLib/errors.py\n+++ b/Lib/fontTools/varLib/errors.py\n@@ -86,7 +86,9 @@\n good_ttf = next(\n ttf\n for ttf in self.merger.ttfs\n- if ttf[\"GPOS\"].table.FeatureList.FeatureCount == self.cause[\"expected\"]\n+ if self.stack[-1] in ttf\n+ and ttf[self.stack[-1]].table.FeatureList.FeatureCount\n+ == self.cause[\"expected\"]\n )\n \n good_features = [\n", "issue": "[varLib.merger] <exception str() failed>\nI'm in the process of figuring out what is happening here, but in the meantime I thought I'd post this as it apears there is a bug in the exception handling: see the `<exception str() failed>` messages below.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/font_project.py\", line 1065, in run_from_designspace\r\n self._run_from_designspace_interpolatable(\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/font_project.py\", line 1157, in _run_from_designspace_interpolatable\r\n self.build_variable_fonts(\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/font_project.py\", line 384, in build_variable_fonts\r\n fonts = ufo2ft.compileVariableTTFs(\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/ufo2ft/__init__.py\", line 605, in compileVariableTTFs\r\n vfNameToTTFont = varLib.build_many(\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/__init__.py\", line 911, in build_many\r\n vf = build(\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/__init__.py\", line 986, in build\r\n _merge_OTL(vf, model, master_fonts, axisTags)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/__init__.py\", line 624, in _merge_OTL\r\n merger.mergeTables(font, master_fonts, ['GSUB', 'GDEF', 'GPOS'])\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 150, in mergeTables\r\n self.mergeThings(font[tag], [m.get(tag) for m in master_ttfs])\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 1067, in mergeThings\r\n super(VariationMerger, self).mergeThings(out, lst)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 138, in mergeThings\r\n self.mergeObjects(out, lst)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 108, in mergeObjects\r\n mergerFunc(self, value, values)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 1067, in mergeThings\r\n super(VariationMerger, self).mergeThings(out, lst)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 138, in mergeThings\r\n self.mergeObjects(out, lst)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 108, in mergeObjects\r\n mergerFunc(self, value, values)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 1067, in mergeThings\r\n super(VariationMerger, self).mergeThings(out, lst)\r\n File 
\"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 138, in mergeThings\r\n self.mergeObjects(out, lst)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 108, in mergeObjects\r\n mergerFunc(self, value, values)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 1067, in mergeThings\r\n super(VariationMerger, self).mergeThings(out, lst)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/merger.py\", line 143, in mergeThings\r\n raise ShouldBeConstant(self, expected=out, got=lst)\r\nfontTools.varLib.errors.ShouldBeConstant: <exception str() failed>\r\nThe above exception was the direct cause of the following exception:\r\nTraceback (most recent call last):\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/__main__.py\", line 639, in main\r\n project.run_from_designspace(inputs.designspace_path, **args)\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/font_project.py\", line 1080, in run_from_designspace\r\n raise FontmakeError(\r\nfontmake.errors.FontmakeError: <exception str() failed>\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/bin/fontmake\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/__main__.py\", line 664, in main\r\n sys.exit(f\"fontmake: Error: {str(e)}\")\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontmake/errors.py\", line 33, in __str__\r\n cause = str(self.__cause__) if self.__cause__ is not None else None\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/errors.py\", line 70, in __str__\r\n return \"\\n\\n\" + basic + location + self.details\r\n File \"/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/site-packages/fontTools/varLib/errors.py\", line 86, in details\r\n good_ttf = next(\r\nStopIteration\r\n```\n", "before_files": [{"content": "import textwrap\n\n\nclass VarLibError(Exception):\n \"\"\"Base exception for the varLib module.\"\"\"\n\n\nclass VarLibValidationError(VarLibError):\n \"\"\"Raised when input data is invalid from varLib's point of view.\"\"\"\n\n\nclass VarLibMergeError(VarLibError):\n \"\"\"Raised when input data cannot be merged into a variable font.\"\"\"\n\n def __init__(self, merger=None, **kwargs):\n self.merger = merger\n if not kwargs:\n kwargs = {}\n if \"stack\" in kwargs:\n self.stack = kwargs[\"stack\"]\n del kwargs[\"stack\"]\n else:\n self.stack = []\n self.cause = kwargs\n\n @property\n def reason(self):\n return self.__doc__\n\n def _master_name(self, ix):\n if self.merger is not None:\n ttf = self.merger.ttfs[ix]\n if \"name\" in ttf and ttf[\"name\"].getBestFullName():\n return ttf[\"name\"].getBestFullName()\n elif hasattr(ttf.reader, \"file\") and hasattr(ttf.reader.file, \"name\"):\n return ttf.reader.file.name\n return f\"master number {ix}\"\n\n @property\n def offender(self):\n if \"expected\" in self.cause and \"got\" in self.cause:\n index = [x == self.cause[\"expected\"] for x in self.cause[\"got\"]].index(\n False\n )\n master_name = self._master_name(index)\n if \"location\" in self.cause:\n master_name = f\"{master_name} ({self.cause['location']})\"\n return 
index, master_name\n return None, None\n\n @property\n def details(self):\n if \"expected\" in self.cause and \"got\" in self.cause:\n offender_index, offender = self.offender\n got = self.cause[\"got\"][offender_index]\n return f\"Expected to see {self.stack[0]}=={self.cause['expected']!r}, instead saw {got!r}\\n\"\n return \"\"\n\n def __str__(self):\n offender_index, offender = self.offender\n location = \"\"\n if offender:\n location = f\"\\n\\nThe problem is likely to be in {offender}:\\n\"\n context = \"\".join(reversed(self.stack))\n basic = textwrap.fill(\n f\"Couldn't merge the fonts, because {self.reason}. \"\n f\"This happened while performing the following operation: {context}\",\n width=78,\n )\n return \"\\n\\n\" + basic + location + self.details\n\n\nclass ShouldBeConstant(VarLibMergeError):\n \"\"\"some values were different, but should have been the same\"\"\"\n\n @property\n def details(self):\n basic_message = super().details\n\n if self.stack[0] != \".FeatureCount\" or self.merger is None:\n return basic_message\n\n assert self.stack[0] == \".FeatureCount\"\n offender_index, _ = self.offender\n bad_ttf = self.merger.ttfs[offender_index]\n good_ttf = next(\n ttf\n for ttf in self.merger.ttfs\n if ttf[\"GPOS\"].table.FeatureList.FeatureCount == self.cause[\"expected\"]\n )\n\n good_features = [\n x.FeatureTag\n for x in good_ttf[self.stack[-1]].table.FeatureList.FeatureRecord\n ]\n bad_features = [\n x.FeatureTag\n for x in bad_ttf[self.stack[-1]].table.FeatureList.FeatureRecord\n ]\n return basic_message + (\n \"\\nIncompatible features between masters.\\n\"\n f\"Expected: {', '.join(good_features)}.\\n\"\n f\"Got: {', '.join(bad_features)}.\\n\"\n )\n\n\nclass FoundANone(VarLibMergeError):\n \"\"\"one of the values in a list was empty when it shouldn't have been\"\"\"\n\n @property\n def offender(self):\n index = [x is None for x in self.cause[\"got\"]].index(True)\n return index, self._master_name(index)\n\n @property\n def details(self):\n cause, stack = self.cause, self.stack\n return f\"{stack[0]}=={cause['got']}\\n\"\n\n\nclass MismatchedTypes(VarLibMergeError):\n \"\"\"data had inconsistent types\"\"\"\n\n\nclass LengthsDiffer(VarLibMergeError):\n \"\"\"a list of objects had inconsistent lengths\"\"\"\n\n\nclass KeysDiffer(VarLibMergeError):\n \"\"\"a list of objects had different keys\"\"\"\n\n\nclass InconsistentGlyphOrder(VarLibMergeError):\n \"\"\"the glyph order was inconsistent between masters\"\"\"\n\n\nclass InconsistentExtensions(VarLibMergeError):\n \"\"\"the masters use extension lookups in inconsistent ways\"\"\"\n\n\nclass UnsupportedFormat(VarLibMergeError):\n \"\"\"an OpenType subtable (%s) had a format I didn't expect\"\"\"\n\n def __init__(self, merger=None, **kwargs):\n super().__init__(merger, **kwargs)\n if not self.stack:\n self.stack = [\".Format\"]\n\n @property\n def reason(self):\n s = self.__doc__ % self.cause[\"subtable\"]\n if \"value\" in self.cause:\n s += f\" ({self.cause['value']!r})\"\n return s\n\n\nclass InconsistentFormats(UnsupportedFormat):\n \"\"\"an OpenType subtable (%s) had inconsistent formats between masters\"\"\"\n\n\nclass VarLibCFFMergeError(VarLibError):\n pass\n\n\nclass VarLibCFFDictMergeError(VarLibCFFMergeError):\n \"\"\"Raised when a CFF PrivateDict cannot be merged.\"\"\"\n\n def __init__(self, key, value, values):\n error_msg = (\n f\"For the Private Dict key '{key}', the default font value list:\"\n f\"\\n\\t{value}\\nhad a different number of values than a region font:\"\n )\n for region_value in values:\n 
error_msg += f\"\\n\\t{region_value}\"\n self.args = (error_msg,)\n\n\nclass VarLibCFFPointTypeMergeError(VarLibCFFMergeError):\n \"\"\"Raised when a CFF glyph cannot be merged because of point type differences.\"\"\"\n\n def __init__(self, point_type, pt_index, m_index, default_type, glyph_name):\n error_msg = (\n f\"Glyph '{glyph_name}': '{point_type}' at point index {pt_index} in \"\n f\"master index {m_index} differs from the default font point type \"\n f\"'{default_type}'\"\n )\n self.args = (error_msg,)\n\n\nclass VarLibCFFHintTypeMergeError(VarLibCFFMergeError):\n \"\"\"Raised when a CFF glyph cannot be merged because of hint type differences.\"\"\"\n\n def __init__(self, hint_type, cmd_index, m_index, default_type, glyph_name):\n error_msg = (\n f\"Glyph '{glyph_name}': '{hint_type}' at index {cmd_index} in \"\n f\"master index {m_index} differs from the default font hint type \"\n f\"'{default_type}'\"\n )\n self.args = (error_msg,)\n\n\nclass VariationModelError(VarLibError):\n \"\"\"Raised when a variation model is faulty.\"\"\"\n", "path": "Lib/fontTools/varLib/errors.py"}], "after_files": [{"content": "import textwrap\n\n\nclass VarLibError(Exception):\n \"\"\"Base exception for the varLib module.\"\"\"\n\n\nclass VarLibValidationError(VarLibError):\n \"\"\"Raised when input data is invalid from varLib's point of view.\"\"\"\n\n\nclass VarLibMergeError(VarLibError):\n \"\"\"Raised when input data cannot be merged into a variable font.\"\"\"\n\n def __init__(self, merger=None, **kwargs):\n self.merger = merger\n if not kwargs:\n kwargs = {}\n if \"stack\" in kwargs:\n self.stack = kwargs[\"stack\"]\n del kwargs[\"stack\"]\n else:\n self.stack = []\n self.cause = kwargs\n\n @property\n def reason(self):\n return self.__doc__\n\n def _master_name(self, ix):\n if self.merger is not None:\n ttf = self.merger.ttfs[ix]\n if \"name\" in ttf and ttf[\"name\"].getBestFullName():\n return ttf[\"name\"].getBestFullName()\n elif hasattr(ttf.reader, \"file\") and hasattr(ttf.reader.file, \"name\"):\n return ttf.reader.file.name\n return f\"master number {ix}\"\n\n @property\n def offender(self):\n if \"expected\" in self.cause and \"got\" in self.cause:\n index = [x == self.cause[\"expected\"] for x in self.cause[\"got\"]].index(\n False\n )\n master_name = self._master_name(index)\n if \"location\" in self.cause:\n master_name = f\"{master_name} ({self.cause['location']})\"\n return index, master_name\n return None, None\n\n @property\n def details(self):\n if \"expected\" in self.cause and \"got\" in self.cause:\n offender_index, offender = self.offender\n got = self.cause[\"got\"][offender_index]\n return f\"Expected to see {self.stack[0]}=={self.cause['expected']!r}, instead saw {got!r}\\n\"\n return \"\"\n\n def __str__(self):\n offender_index, offender = self.offender\n location = \"\"\n if offender:\n location = f\"\\n\\nThe problem is likely to be in {offender}:\\n\"\n context = \"\".join(reversed(self.stack))\n basic = textwrap.fill(\n f\"Couldn't merge the fonts, because {self.reason}. 
\"\n f\"This happened while performing the following operation: {context}\",\n width=78,\n )\n return \"\\n\\n\" + basic + location + self.details\n\n\nclass ShouldBeConstant(VarLibMergeError):\n \"\"\"some values were different, but should have been the same\"\"\"\n\n @property\n def details(self):\n basic_message = super().details\n\n if self.stack[0] != \".FeatureCount\" or self.merger is None:\n return basic_message\n\n assert self.stack[0] == \".FeatureCount\"\n offender_index, _ = self.offender\n bad_ttf = self.merger.ttfs[offender_index]\n good_ttf = next(\n ttf\n for ttf in self.merger.ttfs\n if self.stack[-1] in ttf\n and ttf[self.stack[-1]].table.FeatureList.FeatureCount\n == self.cause[\"expected\"]\n )\n\n good_features = [\n x.FeatureTag\n for x in good_ttf[self.stack[-1]].table.FeatureList.FeatureRecord\n ]\n bad_features = [\n x.FeatureTag\n for x in bad_ttf[self.stack[-1]].table.FeatureList.FeatureRecord\n ]\n return basic_message + (\n \"\\nIncompatible features between masters.\\n\"\n f\"Expected: {', '.join(good_features)}.\\n\"\n f\"Got: {', '.join(bad_features)}.\\n\"\n )\n\n\nclass FoundANone(VarLibMergeError):\n \"\"\"one of the values in a list was empty when it shouldn't have been\"\"\"\n\n @property\n def offender(self):\n index = [x is None for x in self.cause[\"got\"]].index(True)\n return index, self._master_name(index)\n\n @property\n def details(self):\n cause, stack = self.cause, self.stack\n return f\"{stack[0]}=={cause['got']}\\n\"\n\n\nclass MismatchedTypes(VarLibMergeError):\n \"\"\"data had inconsistent types\"\"\"\n\n\nclass LengthsDiffer(VarLibMergeError):\n \"\"\"a list of objects had inconsistent lengths\"\"\"\n\n\nclass KeysDiffer(VarLibMergeError):\n \"\"\"a list of objects had different keys\"\"\"\n\n\nclass InconsistentGlyphOrder(VarLibMergeError):\n \"\"\"the glyph order was inconsistent between masters\"\"\"\n\n\nclass InconsistentExtensions(VarLibMergeError):\n \"\"\"the masters use extension lookups in inconsistent ways\"\"\"\n\n\nclass UnsupportedFormat(VarLibMergeError):\n \"\"\"an OpenType subtable (%s) had a format I didn't expect\"\"\"\n\n def __init__(self, merger=None, **kwargs):\n super().__init__(merger, **kwargs)\n if not self.stack:\n self.stack = [\".Format\"]\n\n @property\n def reason(self):\n s = self.__doc__ % self.cause[\"subtable\"]\n if \"value\" in self.cause:\n s += f\" ({self.cause['value']!r})\"\n return s\n\n\nclass InconsistentFormats(UnsupportedFormat):\n \"\"\"an OpenType subtable (%s) had inconsistent formats between masters\"\"\"\n\n\nclass VarLibCFFMergeError(VarLibError):\n pass\n\n\nclass VarLibCFFDictMergeError(VarLibCFFMergeError):\n \"\"\"Raised when a CFF PrivateDict cannot be merged.\"\"\"\n\n def __init__(self, key, value, values):\n error_msg = (\n f\"For the Private Dict key '{key}', the default font value list:\"\n f\"\\n\\t{value}\\nhad a different number of values than a region font:\"\n )\n for region_value in values:\n error_msg += f\"\\n\\t{region_value}\"\n self.args = (error_msg,)\n\n\nclass VarLibCFFPointTypeMergeError(VarLibCFFMergeError):\n \"\"\"Raised when a CFF glyph cannot be merged because of point type differences.\"\"\"\n\n def __init__(self, point_type, pt_index, m_index, default_type, glyph_name):\n error_msg = (\n f\"Glyph '{glyph_name}': '{point_type}' at point index {pt_index} in \"\n f\"master index {m_index} differs from the default font point type \"\n f\"'{default_type}'\"\n )\n self.args = (error_msg,)\n\n\nclass VarLibCFFHintTypeMergeError(VarLibCFFMergeError):\n 
\"\"\"Raised when a CFF glyph cannot be merged because of hint type differences.\"\"\"\n\n def __init__(self, hint_type, cmd_index, m_index, default_type, glyph_name):\n error_msg = (\n f\"Glyph '{glyph_name}': '{hint_type}' at index {cmd_index} in \"\n f\"master index {m_index} differs from the default font hint type \"\n f\"'{default_type}'\"\n )\n self.args = (error_msg,)\n\n\nclass VariationModelError(VarLibError):\n \"\"\"Raised when a variation model is faulty.\"\"\"\n", "path": "Lib/fontTools/varLib/errors.py"}]}
| 3,984 | 145 |
gh_patches_debug_37632
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-2767
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hashtags are catching parts of urls
Links with anchors (https://www.example.com/test#three) are getting parsed as containing hashtags
(This was reported to me via email)
--- END ISSUE ---
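The hashtag pattern itself lives in `bookwyrm/utils/regex.py` (imported as `regex.HASHTAG` below) and is not shown in these segments, so the following is only an illustrative sketch of the usual remedy: require the `#` to appear at the start of the text or right after whitespace, so a fragment glued to a URL path is skipped:

```python
# Illustrative sketch (not the pattern BookWyrm ships): treat "#word" as a
# hashtag only when it follows start-of-string or whitespace, so "#three"
# inside a URL is not picked up.
import re

HASHTAG = re.compile(r"(?:^|(?<=\s))#\w+")

text = "Loved #BookWyrm, notes at https://www.example.com/test#three"
print(HASHTAG.findall(text))  # ['#BookWyrm']
```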
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/status.py`
Content:
```
1 """ what are we here for if not for posting """
2 import re
3 import logging
4 from urllib.parse import urlparse
5
6 from django.contrib.auth.decorators import login_required
7 from django.core.validators import URLValidator
8 from django.core.exceptions import ValidationError
9 from django.db.models import Q
10 from django.http import HttpResponse, HttpResponseBadRequest, Http404
11 from django.shortcuts import get_object_or_404, redirect
12 from django.template.response import TemplateResponse
13 from django.utils import timezone
14 from django.utils.decorators import method_decorator
15 from django.views import View
16 from django.views.decorators.http import require_POST
17
18 from markdown import markdown
19 from bookwyrm import forms, models
20 from bookwyrm.utils import regex, sanitizer
21 from .helpers import handle_remote_webfinger, is_api_request
22 from .helpers import load_date_in_user_tz_as_utc, redirect_to_referer
23
24 logger = logging.getLogger(__name__)
25
26
27 # pylint: disable= no-self-use
28 @method_decorator(login_required, name="dispatch")
29 class EditStatus(View):
30 """the view for *posting*"""
31
32 def get(self, request, status_id): # pylint: disable=unused-argument
33 """load the edit panel"""
34 status = get_object_or_404(
35 models.Status.objects.select_subclasses(), id=status_id
36 )
37
38 status_type = "reply" if status.reply_parent else status.status_type.lower()
39 data = {
40 "type": status_type,
41 "book": getattr(status, "book", None),
42 "draft": status,
43 }
44 return TemplateResponse(request, "compose.html", data)
45
46
47 # pylint: disable= no-self-use
48 @method_decorator(login_required, name="dispatch")
49 class CreateStatus(View):
50 """the view for *posting*"""
51
52 def get(self, request, status_type): # pylint: disable=unused-argument
53 """compose view (...not used?)"""
54 book = get_object_or_404(models.Edition, id=request.GET.get("book"))
55 data = {"book": book}
56 return TemplateResponse(request, "compose.html", data)
57
58 # pylint: disable=too-many-branches
59 def post(self, request, status_type, existing_status_id=None):
60 """create status of whatever type"""
61 created = not existing_status_id
62 existing_status = None
63 if existing_status_id:
64 existing_status = get_object_or_404(
65 models.Status.objects.select_subclasses(), id=existing_status_id
66 )
67 existing_status.edited_date = timezone.now()
68
69 status_type = status_type[0].upper() + status_type[1:]
70
71 try:
72 form = getattr(forms, f"{status_type}Form")(
73 request.POST, instance=existing_status
74 )
75 except AttributeError as err:
76 logger.exception(err)
77 return HttpResponseBadRequest()
78
79 if not form.is_valid():
80 if is_api_request(request):
81 logger.exception(form.errors)
82 return HttpResponseBadRequest()
83 return redirect_to_referer(request)
84
85 status = form.save(request, commit=False)
86 status.ready = False
87 # save the plain, unformatted version of the status for future editing
88 status.raw_content = status.content
89 if hasattr(status, "quote"):
90 status.raw_quote = status.quote
91
92 status.sensitive = status.content_warning not in [None, ""]
93 # the status has to be saved now before we can add many to many fields
94 # like mentions
95 status.save(broadcast=False)
96
97 # inspect the text for user tags
98 content = status.content
99 for (mention_text, mention_user) in find_mentions(
100 request.user, content
101 ).items():
102 # add them to status mentions fk
103 status.mention_users.add(mention_user)
104
105 # turn the mention into a link
106 content = re.sub(
107 rf"{mention_text}\b(?!@)",
108 rf'<a href="{mention_user.remote_id}">{mention_text}</a>',
109 content,
110 )
111 # add reply parent to mentions
112 if status.reply_parent:
113 status.mention_users.add(status.reply_parent.user)
114
115 # inspect the text for hashtags
116 for (mention_text, mention_hashtag) in find_or_create_hashtags(content).items():
117 # add them to status mentions fk
118 status.mention_hashtags.add(mention_hashtag)
119
120 # turn the mention into a link
121 content = re.sub(
122 rf"{mention_text}\b(?!@)",
123 rf'<a href="{mention_hashtag.remote_id}" data-mention="hashtag">'
124 + rf"{mention_text}</a>",
125 content,
126 )
127
128 # deduplicate mentions
129 status.mention_users.set(set(status.mention_users.all()))
130
131 # don't apply formatting to generated notes
132 if not isinstance(status, models.GeneratedNote) and content:
133 status.content = to_markdown(content)
134 # do apply formatting to quotes
135 if hasattr(status, "quote"):
136 status.quote = to_markdown(status.quote)
137
138 status.ready = True
139 status.save(created=created)
140
141 # update a readthrough, if needed
142 if bool(request.POST.get("id")):
143 try:
144 edit_readthrough(request)
145 except Http404:
146 pass
147
148 if is_api_request(request):
149 return HttpResponse()
150 return redirect_to_referer(request)
151
152
153 @method_decorator(login_required, name="dispatch")
154 class DeleteStatus(View):
155 """tombstone that bad boy"""
156
157 def post(self, request, status_id):
158 """delete and tombstone a status"""
159 status = get_object_or_404(models.Status, id=status_id)
160
161 # don't let people delete other people's statuses
162 status.raise_not_deletable(request.user)
163
164 # perform deletion
165 status.delete()
166 return redirect("/")
167
168
169 @login_required
170 @require_POST
171 def update_progress(request, book_id): # pylint: disable=unused-argument
172 """Either it's just a progress update, or it's a comment with a progress update"""
173 if request.POST.get("post-status"):
174 return CreateStatus.as_view()(request, "comment")
175 return edit_readthrough(request)
176
177
178 @login_required
179 @require_POST
180 def edit_readthrough(request):
181 """can't use the form because the dates are too finnicky"""
182 # TODO: remove this, it duplicates the code in the ReadThrough view
183 readthrough = get_object_or_404(models.ReadThrough, id=request.POST.get("id"))
184
185 readthrough.start_date = load_date_in_user_tz_as_utc(
186 request.POST.get("start_date"), request.user
187 )
188 readthrough.finish_date = load_date_in_user_tz_as_utc(
189 request.POST.get("finish_date"), request.user
190 )
191
192 progress = request.POST.get("progress")
193 try:
194 progress = int(progress)
195 readthrough.progress = progress
196 except (ValueError, TypeError):
197 pass
198
199 progress_mode = request.POST.get("progress_mode")
200 try:
201 progress_mode = models.ProgressMode(progress_mode)
202 readthrough.progress_mode = progress_mode
203 except ValueError:
204 pass
205
206 readthrough.save()
207
208 # record the progress update individually
209 # use default now for date field
210 readthrough.create_update()
211
212 if is_api_request(request):
213 return HttpResponse()
214 return redirect_to_referer(request)
215
216
217 def find_mentions(user, content):
218 """detect @mentions in raw status content"""
219 if not content:
220 return {}
221 # The regex has nested match groups, so the 0th entry has the full (outer) match
222 # And because the strict username starts with @, the username is 1st char onward
223 usernames = [m[0][1:] for m in re.findall(regex.STRICT_USERNAME, content)]
224
225 known_users = (
226 models.User.viewer_aware_objects(user)
227 .filter(Q(username__in=usernames) | Q(localname__in=usernames))
228 .distinct()
229 )
230 # Prepare a lookup based on both username and localname
231 username_dict = {
232 **{f"@{u.username}": u for u in known_users},
233 **{f"@{u.localname}": u for u in known_users.filter(local=True)},
234 }
235
236 # Users not captured here could be blocked or not yet loaded on the server
237 not_found = set(usernames) - set(username_dict.keys())
238 for username in not_found:
239 mention_user = handle_remote_webfinger(username, unknown_only=True)
240 if not mention_user:
241 # this user is blocked or can't be found
242 continue
243 username_dict[f"@{mention_user.username}"] = mention_user
244 username_dict[f"@{mention_user.localname}"] = mention_user
245 return username_dict
246
247
248 def find_or_create_hashtags(content):
249 """detect #hashtags in raw status content
250
251 it stores hashtags case-sensitive, but ensures that an existing
252 hashtag with different case are found and re-used. for example,
253 an existing #BookWyrm hashtag will be found and used even if the
254 status content is using #bookwyrm.
255 """
256 if not content:
257 return {}
258
259 found_hashtags = {t.lower(): t for t in re.findall(regex.HASHTAG, content)}
260 if len(found_hashtags) == 0:
261 return {}
262
263 known_hashtags = {
264 t.name.lower(): t
265 for t in models.Hashtag.objects.filter(
266 Q(name__in=found_hashtags.keys())
267 ).distinct()
268 }
269
270 not_found = found_hashtags.keys() - known_hashtags.keys()
271 for lower_name in not_found:
272 tag_name = found_hashtags[lower_name]
273 mention_hashtag = models.Hashtag(name=tag_name)
274 mention_hashtag.save()
275 known_hashtags[lower_name] = mention_hashtag
276
277 return {found_hashtags[k]: v for k, v in known_hashtags.items()}
278
279
280 def format_links(content):
281 """detect and format links"""
282 validator = URLValidator()
283 formatted_content = ""
284 split_content = re.split(r"(\s+)", content)
285
286 for potential_link in split_content:
287 if not potential_link:
288 continue
289 wrapped = _wrapped(potential_link)
290 if wrapped:
291 wrapper_close = potential_link[-1]
292 formatted_content += potential_link[0]
293 potential_link = potential_link[1:-1]
294
295 try:
296 # raises an error on anything that's not a valid link
297 validator(potential_link)
298
299 # use everything but the scheme in the presentation of the link
300 url = urlparse(potential_link)
301 link = url.netloc + url.path + url.params
302 if url.query != "":
303 link += "?" + url.query
304 if url.fragment != "":
305 link += "#" + url.fragment
306
307 formatted_content += f'<a href="{potential_link}">{link}</a>'
308 except (ValidationError, UnicodeError):
309 formatted_content += potential_link
310
311 if wrapped:
312 formatted_content += wrapper_close
313
314 return formatted_content
315
316
317 def _wrapped(text):
318 """check if a line of text is wrapped"""
319 wrappers = [("(", ")"), ("[", "]"), ("{", "}")]
320 for wrapper in wrappers:
321 if text[0] == wrapper[0] and text[-1] == wrapper[-1]:
322 return True
323 return False
324
325
326 def to_markdown(content):
327 """catch links and convert to markdown"""
328 content = format_links(content)
329 content = markdown(content)
330 # sanitize resulting html
331 return sanitizer.clean(content)
332
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py
--- a/bookwyrm/views/status.py
+++ b/bookwyrm/views/status.py
@@ -96,34 +96,22 @@
# inspect the text for user tags
content = status.content
- for (mention_text, mention_user) in find_mentions(
- request.user, content
- ).items():
+ mentions = find_mentions(request.user, content)
+ for (_, mention_user) in mentions.items():
# add them to status mentions fk
status.mention_users.add(mention_user)
+ content = format_mentions(content, mentions)
- # turn the mention into a link
- content = re.sub(
- rf"{mention_text}\b(?!@)",
- rf'<a href="{mention_user.remote_id}">{mention_text}</a>',
- content,
- )
# add reply parent to mentions
if status.reply_parent:
status.mention_users.add(status.reply_parent.user)
# inspect the text for hashtags
- for (mention_text, mention_hashtag) in find_or_create_hashtags(content).items():
+ hashtags = find_or_create_hashtags(content)
+ for (_, mention_hashtag) in hashtags.items():
# add them to status mentions fk
status.mention_hashtags.add(mention_hashtag)
-
- # turn the mention into a link
- content = re.sub(
- rf"{mention_text}\b(?!@)",
- rf'<a href="{mention_hashtag.remote_id}" data-mention="hashtag">'
- + rf"{mention_text}</a>",
- content,
- )
+ content = format_hashtags(content, hashtags)
# deduplicate mentions
status.mention_users.set(set(status.mention_users.all()))
@@ -150,6 +138,31 @@
return redirect_to_referer(request)
+def format_mentions(content, mentions):
+ """Detect @mentions and make them links"""
+ for (mention_text, mention_user) in mentions.items():
+ # turn the mention into a link
+ content = re.sub(
+ rf"(?<!/)\B{mention_text}\b(?!@)",
+ rf'<a href="{mention_user.remote_id}">{mention_text}</a>',
+ content,
+ )
+ return content
+
+
+def format_hashtags(content, hashtags):
+ """Detect #hashtags and make them links"""
+ for (mention_text, mention_hashtag) in hashtags.items():
+ # turn the mention into a link
+ content = re.sub(
+ rf"(?<!/)\B{mention_text}\b(?!@)",
+ rf'<a href="{mention_hashtag.remote_id}" data-mention="hashtag">'
+ + rf"{mention_text}</a>",
+ content,
+ )
+ return content
+
+
@method_decorator(login_required, name="dispatch")
class DeleteStatus(View):
"""tombstone that bad boy"""
|
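The key change in the diff above is the `(?<!/)\B` guard added in front of the mention and hashtag patterns. A minimal standalone sketch of its effect (the sample content and `remote_id` below are invented for illustration, not taken from BookWyrm):

```python
import re

# Hypothetical inputs; in the view, remote_id comes from the Hashtag model.
content = "read https://example.com/post#bookwyrm and then tag it #bookwyrm"
mention_text = "#bookwyrm"
remote_id = "https://bookwyrm.example/hashtag/1"

# The old pattern rf"{mention_text}\b(?!@)" would also rewrite the '#bookwyrm'
# inside the URL. With (?<!/)\B the character before '#' must be a non-word
# character (or the start of the text) and not '/', so URL paths and fragments
# like 'post#bookwyrm' are skipped.
linked = re.sub(
    rf"(?<!/)\B{mention_text}\b(?!@)",
    rf'<a href="{remote_id}" data-mention="hashtag">{mention_text}</a>',
    content,
)
print(linked)  # only the standalone trailing '#bookwyrm' becomes a link
```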
{"golden_diff": "diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py\n--- a/bookwyrm/views/status.py\n+++ b/bookwyrm/views/status.py\n@@ -96,34 +96,22 @@\n \n # inspect the text for user tags\n content = status.content\n- for (mention_text, mention_user) in find_mentions(\n- request.user, content\n- ).items():\n+ mentions = find_mentions(request.user, content)\n+ for (_, mention_user) in mentions.items():\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n+ content = format_mentions(content, mentions)\n \n- # turn the mention into a link\n- content = re.sub(\n- rf\"{mention_text}\\b(?!@)\",\n- rf'<a href=\"{mention_user.remote_id}\">{mention_text}</a>',\n- content,\n- )\n # add reply parent to mentions\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n \n # inspect the text for hashtags\n- for (mention_text, mention_hashtag) in find_or_create_hashtags(content).items():\n+ hashtags = find_or_create_hashtags(content)\n+ for (_, mention_hashtag) in hashtags.items():\n # add them to status mentions fk\n status.mention_hashtags.add(mention_hashtag)\n-\n- # turn the mention into a link\n- content = re.sub(\n- rf\"{mention_text}\\b(?!@)\",\n- rf'<a href=\"{mention_hashtag.remote_id}\" data-mention=\"hashtag\">'\n- + rf\"{mention_text}</a>\",\n- content,\n- )\n+ content = format_hashtags(content, hashtags)\n \n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n@@ -150,6 +138,31 @@\n return redirect_to_referer(request)\n \n \n+def format_mentions(content, mentions):\n+ \"\"\"Detect @mentions and make them links\"\"\"\n+ for (mention_text, mention_user) in mentions.items():\n+ # turn the mention into a link\n+ content = re.sub(\n+ rf\"(?<!/)\\B{mention_text}\\b(?!@)\",\n+ rf'<a href=\"{mention_user.remote_id}\">{mention_text}</a>',\n+ content,\n+ )\n+ return content\n+\n+\n+def format_hashtags(content, hashtags):\n+ \"\"\"Detect #hashtags and make them links\"\"\"\n+ for (mention_text, mention_hashtag) in hashtags.items():\n+ # turn the mention into a link\n+ content = re.sub(\n+ rf\"(?<!/)\\B{mention_text}\\b(?!@)\",\n+ rf'<a href=\"{mention_hashtag.remote_id}\" data-mention=\"hashtag\">'\n+ + rf\"{mention_text}</a>\",\n+ content,\n+ )\n+ return content\n+\n+\n @method_decorator(login_required, name=\"dispatch\")\n class DeleteStatus(View):\n \"\"\"tombstone that bad boy\"\"\"\n", "issue": "Hashtags are catching parts of urls\nLinks with anchors (https://www.example.com/test#three) are getting parsed as containing hashtags\r\n\r\n(This was reported to me via email)\n", "before_files": [{"content": "\"\"\" what are we here for if not for posting \"\"\"\nimport re\nimport logging\nfrom urllib.parse import urlparse\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.validators import URLValidator\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Q\nfrom django.http import HttpResponse, HttpResponseBadRequest, Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\nfrom django.views.decorators.http import require_POST\n\nfrom markdown import markdown\nfrom bookwyrm import forms, models\nfrom bookwyrm.utils import regex, sanitizer\nfrom .helpers import handle_remote_webfinger, is_api_request\nfrom .helpers import load_date_in_user_tz_as_utc, 
redirect_to_referer\n\nlogger = logging.getLogger(__name__)\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass EditStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_id): # pylint: disable=unused-argument\n \"\"\"load the edit panel\"\"\"\n status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=status_id\n )\n\n status_type = \"reply\" if status.reply_parent else status.status_type.lower()\n data = {\n \"type\": status_type,\n \"book\": getattr(status, \"book\", None),\n \"draft\": status,\n }\n return TemplateResponse(request, \"compose.html\", data)\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass CreateStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_type): # pylint: disable=unused-argument\n \"\"\"compose view (...not used?)\"\"\"\n book = get_object_or_404(models.Edition, id=request.GET.get(\"book\"))\n data = {\"book\": book}\n return TemplateResponse(request, \"compose.html\", data)\n\n # pylint: disable=too-many-branches\n def post(self, request, status_type, existing_status_id=None):\n \"\"\"create status of whatever type\"\"\"\n created = not existing_status_id\n existing_status = None\n if existing_status_id:\n existing_status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=existing_status_id\n )\n existing_status.edited_date = timezone.now()\n\n status_type = status_type[0].upper() + status_type[1:]\n\n try:\n form = getattr(forms, f\"{status_type}Form\")(\n request.POST, instance=existing_status\n )\n except AttributeError as err:\n logger.exception(err)\n return HttpResponseBadRequest()\n\n if not form.is_valid():\n if is_api_request(request):\n logger.exception(form.errors)\n return HttpResponseBadRequest()\n return redirect_to_referer(request)\n\n status = form.save(request, commit=False)\n status.ready = False\n # save the plain, unformatted version of the status for future editing\n status.raw_content = status.content\n if hasattr(status, \"quote\"):\n status.raw_quote = status.quote\n\n status.sensitive = status.content_warning not in [None, \"\"]\n # the status has to be saved now before we can add many to many fields\n # like mentions\n status.save(broadcast=False)\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(\n request.user, content\n ).items():\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n rf\"{mention_text}\\b(?!@)\",\n rf'<a href=\"{mention_user.remote_id}\">{mention_text}</a>',\n content,\n )\n # add reply parent to mentions\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n\n # inspect the text for hashtags\n for (mention_text, mention_hashtag) in find_or_create_hashtags(content).items():\n # add them to status mentions fk\n status.mention_hashtags.add(mention_hashtag)\n\n # turn the mention into a link\n content = re.sub(\n rf\"{mention_text}\\b(?!@)\",\n rf'<a href=\"{mention_hashtag.remote_id}\" data-mention=\"hashtag\">'\n + rf\"{mention_text}</a>\",\n content,\n )\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n\n # don't apply formatting to generated notes\n if not isinstance(status, models.GeneratedNote) and content:\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, \"quote\"):\n 
status.quote = to_markdown(status.quote)\n\n status.ready = True\n status.save(created=created)\n\n # update a readthrough, if needed\n if bool(request.POST.get(\"id\")):\n try:\n edit_readthrough(request)\n except Http404:\n pass\n\n if is_api_request(request):\n return HttpResponse()\n return redirect_to_referer(request)\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteStatus(View):\n \"\"\"tombstone that bad boy\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(models.Status, id=status_id)\n\n # don't let people delete other people's statuses\n status.raise_not_deletable(request.user)\n\n # perform deletion\n status.delete()\n return redirect(\"/\")\n\n\n@login_required\n@require_POST\ndef update_progress(request, book_id): # pylint: disable=unused-argument\n \"\"\"Either it's just a progress update, or it's a comment with a progress update\"\"\"\n if request.POST.get(\"post-status\"):\n return CreateStatus.as_view()(request, \"comment\")\n return edit_readthrough(request)\n\n\n@login_required\n@require_POST\ndef edit_readthrough(request):\n \"\"\"can't use the form because the dates are too finnicky\"\"\"\n # TODO: remove this, it duplicates the code in the ReadThrough view\n readthrough = get_object_or_404(models.ReadThrough, id=request.POST.get(\"id\"))\n\n readthrough.start_date = load_date_in_user_tz_as_utc(\n request.POST.get(\"start_date\"), request.user\n )\n readthrough.finish_date = load_date_in_user_tz_as_utc(\n request.POST.get(\"finish_date\"), request.user\n )\n\n progress = request.POST.get(\"progress\")\n try:\n progress = int(progress)\n readthrough.progress = progress\n except (ValueError, TypeError):\n pass\n\n progress_mode = request.POST.get(\"progress_mode\")\n try:\n progress_mode = models.ProgressMode(progress_mode)\n readthrough.progress_mode = progress_mode\n except ValueError:\n pass\n\n readthrough.save()\n\n # record the progress update individually\n # use default now for date field\n readthrough.create_update()\n\n if is_api_request(request):\n return HttpResponse()\n return redirect_to_referer(request)\n\n\ndef find_mentions(user, content):\n \"\"\"detect @mentions in raw status content\"\"\"\n if not content:\n return {}\n # The regex has nested match groups, so the 0th entry has the full (outer) match\n # And beacuse the strict username starts with @, the username is 1st char onward\n usernames = [m[0][1:] for m in re.findall(regex.STRICT_USERNAME, content)]\n\n known_users = (\n models.User.viewer_aware_objects(user)\n .filter(Q(username__in=usernames) | Q(localname__in=usernames))\n .distinct()\n )\n # Prepare a lookup based on both username and localname\n username_dict = {\n **{f\"@{u.username}\": u for u in known_users},\n **{f\"@{u.localname}\": u for u in known_users.filter(local=True)},\n }\n\n # Users not captured here could be blocked or not yet loaded on the server\n not_found = set(usernames) - set(username_dict.keys())\n for username in not_found:\n mention_user = handle_remote_webfinger(username, unknown_only=True)\n if not mention_user:\n # this user is blocked or can't be found\n continue\n username_dict[f\"@{mention_user.username}\"] = mention_user\n username_dict[f\"@{mention_user.localname}\"] = mention_user\n return username_dict\n\n\ndef find_or_create_hashtags(content):\n \"\"\"detect #hashtags in raw status content\n\n it stores hashtags case-sensitive, but ensures that an existing\n hashtag with different case are found and re-used. 
for example,\n an existing #BookWyrm hashtag will be found and used even if the\n status content is using #bookwyrm.\n \"\"\"\n if not content:\n return {}\n\n found_hashtags = {t.lower(): t for t in re.findall(regex.HASHTAG, content)}\n if len(found_hashtags) == 0:\n return {}\n\n known_hashtags = {\n t.name.lower(): t\n for t in models.Hashtag.objects.filter(\n Q(name__in=found_hashtags.keys())\n ).distinct()\n }\n\n not_found = found_hashtags.keys() - known_hashtags.keys()\n for lower_name in not_found:\n tag_name = found_hashtags[lower_name]\n mention_hashtag = models.Hashtag(name=tag_name)\n mention_hashtag.save()\n known_hashtags[lower_name] = mention_hashtag\n\n return {found_hashtags[k]: v for k, v in known_hashtags.items()}\n\n\ndef format_links(content):\n \"\"\"detect and format links\"\"\"\n validator = URLValidator()\n formatted_content = \"\"\n split_content = re.split(r\"(\\s+)\", content)\n\n for potential_link in split_content:\n if not potential_link:\n continue\n wrapped = _wrapped(potential_link)\n if wrapped:\n wrapper_close = potential_link[-1]\n formatted_content += potential_link[0]\n potential_link = potential_link[1:-1]\n\n try:\n # raises an error on anything that's not a valid link\n validator(potential_link)\n\n # use everything but the scheme in the presentation of the link\n url = urlparse(potential_link)\n link = url.netloc + url.path + url.params\n if url.query != \"\":\n link += \"?\" + url.query\n if url.fragment != \"\":\n link += \"#\" + url.fragment\n\n formatted_content += f'<a href=\"{potential_link}\">{link}</a>'\n except (ValidationError, UnicodeError):\n formatted_content += potential_link\n\n if wrapped:\n formatted_content += wrapper_close\n\n return formatted_content\n\n\ndef _wrapped(text):\n \"\"\"check if a line of text is wrapped\"\"\"\n wrappers = [(\"(\", \")\"), (\"[\", \"]\"), (\"{\", \"}\")]\n for wrapper in wrappers:\n if text[0] == wrapper[0] and text[-1] == wrapper[-1]:\n return True\n return False\n\n\ndef to_markdown(content):\n \"\"\"catch links and convert to markdown\"\"\"\n content = format_links(content)\n content = markdown(content)\n # sanitize resulting html\n return sanitizer.clean(content)\n", "path": "bookwyrm/views/status.py"}], "after_files": [{"content": "\"\"\" what are we here for if not for posting \"\"\"\nimport re\nimport logging\nfrom urllib.parse import urlparse\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.validators import URLValidator\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Q\nfrom django.http import HttpResponse, HttpResponseBadRequest, Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\nfrom django.views.decorators.http import require_POST\n\nfrom markdown import markdown\nfrom bookwyrm import forms, models\nfrom bookwyrm.utils import regex, sanitizer\nfrom .helpers import handle_remote_webfinger, is_api_request\nfrom .helpers import load_date_in_user_tz_as_utc, redirect_to_referer\n\nlogger = logging.getLogger(__name__)\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass EditStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_id): # pylint: disable=unused-argument\n \"\"\"load the edit panel\"\"\"\n status = get_object_or_404(\n 
models.Status.objects.select_subclasses(), id=status_id\n )\n\n status_type = \"reply\" if status.reply_parent else status.status_type.lower()\n data = {\n \"type\": status_type,\n \"book\": getattr(status, \"book\", None),\n \"draft\": status,\n }\n return TemplateResponse(request, \"compose.html\", data)\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass CreateStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_type): # pylint: disable=unused-argument\n \"\"\"compose view (...not used?)\"\"\"\n book = get_object_or_404(models.Edition, id=request.GET.get(\"book\"))\n data = {\"book\": book}\n return TemplateResponse(request, \"compose.html\", data)\n\n # pylint: disable=too-many-branches\n def post(self, request, status_type, existing_status_id=None):\n \"\"\"create status of whatever type\"\"\"\n created = not existing_status_id\n existing_status = None\n if existing_status_id:\n existing_status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=existing_status_id\n )\n existing_status.edited_date = timezone.now()\n\n status_type = status_type[0].upper() + status_type[1:]\n\n try:\n form = getattr(forms, f\"{status_type}Form\")(\n request.POST, instance=existing_status\n )\n except AttributeError as err:\n logger.exception(err)\n return HttpResponseBadRequest()\n\n if not form.is_valid():\n if is_api_request(request):\n logger.exception(form.errors)\n return HttpResponseBadRequest()\n return redirect_to_referer(request)\n\n status = form.save(request, commit=False)\n status.ready = False\n # save the plain, unformatted version of the status for future editing\n status.raw_content = status.content\n if hasattr(status, \"quote\"):\n status.raw_quote = status.quote\n\n status.sensitive = status.content_warning not in [None, \"\"]\n # the status has to be saved now before we can add many to many fields\n # like mentions\n status.save(broadcast=False)\n\n # inspect the text for user tags\n content = status.content\n mentions = find_mentions(request.user, content)\n for (_, mention_user) in mentions.items():\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n content = format_mentions(content, mentions)\n\n # add reply parent to mentions\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n\n # inspect the text for hashtags\n hashtags = find_or_create_hashtags(content)\n for (_, mention_hashtag) in hashtags.items():\n # add them to status mentions fk\n status.mention_hashtags.add(mention_hashtag)\n content = format_hashtags(content, hashtags)\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n\n # don't apply formatting to generated notes\n if not isinstance(status, models.GeneratedNote) and content:\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, \"quote\"):\n status.quote = to_markdown(status.quote)\n\n status.ready = True\n status.save(created=created)\n\n # update a readthrough, if needed\n if bool(request.POST.get(\"id\")):\n try:\n edit_readthrough(request)\n except Http404:\n pass\n\n if is_api_request(request):\n return HttpResponse()\n return redirect_to_referer(request)\n\n\ndef format_mentions(content, mentions):\n \"\"\"Detect @mentions and make them links\"\"\"\n for (mention_text, mention_user) in mentions.items():\n # turn the mention into a link\n content = re.sub(\n rf\"(?<!/)\\B{mention_text}\\b(?!@)\",\n rf'<a 
href=\"{mention_user.remote_id}\">{mention_text}</a>',\n content,\n )\n return content\n\n\ndef format_hashtags(content, hashtags):\n \"\"\"Detect #hashtags and make them links\"\"\"\n for (mention_text, mention_hashtag) in hashtags.items():\n # turn the mention into a link\n content = re.sub(\n rf\"(?<!/)\\B{mention_text}\\b(?!@)\",\n rf'<a href=\"{mention_hashtag.remote_id}\" data-mention=\"hashtag\">'\n + rf\"{mention_text}</a>\",\n content,\n )\n return content\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteStatus(View):\n \"\"\"tombstone that bad boy\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(models.Status, id=status_id)\n\n # don't let people delete other people's statuses\n status.raise_not_deletable(request.user)\n\n # perform deletion\n status.delete()\n return redirect(\"/\")\n\n\n@login_required\n@require_POST\ndef update_progress(request, book_id): # pylint: disable=unused-argument\n \"\"\"Either it's just a progress update, or it's a comment with a progress update\"\"\"\n if request.POST.get(\"post-status\"):\n return CreateStatus.as_view()(request, \"comment\")\n return edit_readthrough(request)\n\n\n@login_required\n@require_POST\ndef edit_readthrough(request):\n \"\"\"can't use the form because the dates are too finnicky\"\"\"\n # TODO: remove this, it duplicates the code in the ReadThrough view\n readthrough = get_object_or_404(models.ReadThrough, id=request.POST.get(\"id\"))\n\n readthrough.start_date = load_date_in_user_tz_as_utc(\n request.POST.get(\"start_date\"), request.user\n )\n readthrough.finish_date = load_date_in_user_tz_as_utc(\n request.POST.get(\"finish_date\"), request.user\n )\n\n progress = request.POST.get(\"progress\")\n try:\n progress = int(progress)\n readthrough.progress = progress\n except (ValueError, TypeError):\n pass\n\n progress_mode = request.POST.get(\"progress_mode\")\n try:\n progress_mode = models.ProgressMode(progress_mode)\n readthrough.progress_mode = progress_mode\n except ValueError:\n pass\n\n readthrough.save()\n\n # record the progress update individually\n # use default now for date field\n readthrough.create_update()\n\n if is_api_request(request):\n return HttpResponse()\n return redirect_to_referer(request)\n\n\ndef find_mentions(user, content):\n \"\"\"detect @mentions in raw status content\"\"\"\n if not content:\n return {}\n # The regex has nested match groups, so the 0th entry has the full (outer) match\n # And beacuse the strict username starts with @, the username is 1st char onward\n usernames = [m[0][1:] for m in re.findall(regex.STRICT_USERNAME, content)]\n\n known_users = (\n models.User.viewer_aware_objects(user)\n .filter(Q(username__in=usernames) | Q(localname__in=usernames))\n .distinct()\n )\n # Prepare a lookup based on both username and localname\n username_dict = {\n **{f\"@{u.username}\": u for u in known_users},\n **{f\"@{u.localname}\": u for u in known_users.filter(local=True)},\n }\n\n # Users not captured here could be blocked or not yet loaded on the server\n not_found = set(usernames) - set(username_dict.keys())\n for username in not_found:\n mention_user = handle_remote_webfinger(username, unknown_only=True)\n if not mention_user:\n # this user is blocked or can't be found\n continue\n username_dict[f\"@{mention_user.username}\"] = mention_user\n username_dict[f\"@{mention_user.localname}\"] = mention_user\n return username_dict\n\n\ndef find_or_create_hashtags(content):\n \"\"\"detect #hashtags in 
raw status content\n\n it stores hashtags case-sensitive, but ensures that an existing\n hashtag with different case are found and re-used. for example,\n an existing #BookWyrm hashtag will be found and used even if the\n status content is using #bookwyrm.\n \"\"\"\n if not content:\n return {}\n\n found_hashtags = {t.lower(): t for t in re.findall(regex.HASHTAG, content)}\n if len(found_hashtags) == 0:\n return {}\n\n known_hashtags = {\n t.name.lower(): t\n for t in models.Hashtag.objects.filter(\n Q(name__in=found_hashtags.keys())\n ).distinct()\n }\n\n not_found = found_hashtags.keys() - known_hashtags.keys()\n for lower_name in not_found:\n tag_name = found_hashtags[lower_name]\n mention_hashtag = models.Hashtag(name=tag_name)\n mention_hashtag.save()\n known_hashtags[lower_name] = mention_hashtag\n\n return {found_hashtags[k]: v for k, v in known_hashtags.items()}\n\n\ndef format_links(content):\n \"\"\"detect and format links\"\"\"\n validator = URLValidator()\n formatted_content = \"\"\n split_content = re.split(r\"(\\s+)\", content)\n\n for potential_link in split_content:\n if not potential_link:\n continue\n wrapped = _wrapped(potential_link)\n if wrapped:\n wrapper_close = potential_link[-1]\n formatted_content += potential_link[0]\n potential_link = potential_link[1:-1]\n\n try:\n # raises an error on anything that's not a valid link\n validator(potential_link)\n\n # use everything but the scheme in the presentation of the link\n url = urlparse(potential_link)\n link = url.netloc + url.path + url.params\n if url.query != \"\":\n link += \"?\" + url.query\n if url.fragment != \"\":\n link += \"#\" + url.fragment\n\n formatted_content += f'<a href=\"{potential_link}\">{link}</a>'\n except (ValidationError, UnicodeError):\n formatted_content += potential_link\n\n if wrapped:\n formatted_content += wrapper_close\n\n return formatted_content\n\n\ndef _wrapped(text):\n \"\"\"check if a line of text is wrapped\"\"\"\n wrappers = [(\"(\", \")\"), (\"[\", \"]\"), (\"{\", \"}\")]\n for wrapper in wrappers:\n if text[0] == wrapper[0] and text[-1] == wrapper[-1]:\n return True\n return False\n\n\ndef to_markdown(content):\n \"\"\"catch links and convert to markdown\"\"\"\n content = format_links(content)\n content = markdown(content)\n # sanitize resulting html\n return sanitizer.clean(content)\n", "path": "bookwyrm/views/status.py"}]}
| 3,649 | 657 |
gh_patches_debug_14854 | rasdani/github-patches | git_diff | getsentry__sentry-python-143 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Package not classified as Python 3 compatible
* `sentry-sdk` is classified as not Python 3 compatible by [pyup](https://pyup.io) when it actually is - e.g. [https://pyup.io/repos/github/pushresume/backend](https://pyup.io/repos/github/pushresume/backend)

* `setup.py` classifiers should be updated with `Programming Language :: Python :: 3`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 Sentry-Python - Sentry SDK for Python
5 =====================================
6
7 **Sentry-Python is an experimental SDK for Sentry.** Check out `GitHub
8 <https://github.com/getsentry/sentry-python>`_ to find out more.
9 """
10
11 from setuptools import setup, find_packages
12
13 setup(
14 name="sentry-sdk",
15 version="0.5.1",
16 author="Sentry Team and Contributors",
17 author_email="[email protected]",
18 url="https://github.com/getsentry/sentry-python",
19 description="Python client for Sentry (https://getsentry.com)",
20 long_description=__doc__,
21 packages=find_packages(exclude=("tests", "tests.*")),
22 zip_safe=False,
23 license="BSD",
24 install_requires=["urllib3", "certifi"],
25 extras_require={"flask": ["flask>=0.8", "blinker>=1.1"]},
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,4 +23,20 @@
license="BSD",
install_requires=["urllib3", "certifi"],
extras_require={"flask": ["flask>=0.8", "blinker>=1.1"]},
+ classifiers=[
+ 'Development Status :: 5 - Production/Stable',
+ 'Environment :: Web Environment',
+ 'Intended Audience :: Developers',
+ 'License :: OSI Approved :: BSD License',
+ 'Operating System :: OS Independent',
+ 'Programming Language :: Python',
+ 'Programming Language :: Python :: 2',
+ 'Programming Language :: Python :: 2.7',
+ 'Programming Language :: Python :: 3',
+ 'Programming Language :: Python :: 3.4',
+ 'Programming Language :: Python :: 3.5',
+ 'Programming Language :: Python :: 3.6',
+ 'Programming Language :: Python :: 3.7',
+ 'Topic :: Software Development :: Libraries :: Python Modules',
+ ],
)
|
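A quick way to confirm that the new classifiers actually land in the installed package metadata (a verification sketch, not part of the patch; it assumes Python 3.8+ and that the patched `sentry-sdk` build is installed locally):

```python
from importlib.metadata import metadata

# Assumes the patched sentry-sdk wheel is installed in the active environment.
classifiers = metadata("sentry-sdk").get_all("Classifier") or []
for c in classifiers:
    print(c)

# Tools like pyup key off the Python 3 trove classifiers:
assert any(c.startswith("Programming Language :: Python :: 3") for c in classifiers)
```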
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,4 +23,20 @@\n license=\"BSD\",\n install_requires=[\"urllib3\", \"certifi\"],\n extras_require={\"flask\": [\"flask>=0.8\", \"blinker>=1.1\"]},\n+ classifiers=[\n+ 'Development Status :: 5 - Production/Stable',\n+ 'Environment :: Web Environment',\n+ 'Intended Audience :: Developers',\n+ 'License :: OSI Approved :: BSD License',\n+ 'Operating System :: OS Independent',\n+ 'Programming Language :: Python',\n+ 'Programming Language :: Python :: 2',\n+ 'Programming Language :: Python :: 2.7',\n+ 'Programming Language :: Python :: 3',\n+ 'Programming Language :: Python :: 3.4',\n+ 'Programming Language :: Python :: 3.5',\n+ 'Programming Language :: Python :: 3.6',\n+ 'Programming Language :: Python :: 3.7',\n+ 'Topic :: Software Development :: Libraries :: Python Modules',\n+ ],\n )\n", "issue": "Package not classified as Python 3 compatible \n* `sentry-sdk` is classified as not Python 3 compatible by [pyup](https://pyup.io) when it actually is - e.g. [https://pyup.io/repos/github/pushresume/backend](https://pyup.io/repos/github/pushresume/backend)\r\n\r\n\r\n\r\n* `setup.py` classifiers should be updated with `Programming Language :: Python :: 3`\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nSentry-Python - Sentry SDK for Python\n=====================================\n\n**Sentry-Python is an experimental SDK for Sentry.** Check out `GitHub\n<https://github.com/getsentry/sentry-python>`_ to find out more.\n\"\"\"\n\nfrom setuptools import setup, find_packages\n\nsetup(\n name=\"sentry-sdk\",\n version=\"0.5.1\",\n author=\"Sentry Team and Contributors\",\n author_email=\"[email protected]\",\n url=\"https://github.com/getsentry/sentry-python\",\n description=\"Python client for Sentry (https://getsentry.com)\",\n long_description=__doc__,\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n zip_safe=False,\n license=\"BSD\",\n install_requires=[\"urllib3\", \"certifi\"],\n extras_require={\"flask\": [\"flask>=0.8\", \"blinker>=1.1\"]},\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nSentry-Python - Sentry SDK for Python\n=====================================\n\n**Sentry-Python is an experimental SDK for Sentry.** Check out `GitHub\n<https://github.com/getsentry/sentry-python>`_ to find out more.\n\"\"\"\n\nfrom setuptools import setup, find_packages\n\nsetup(\n name=\"sentry-sdk\",\n version=\"0.5.1\",\n author=\"Sentry Team and Contributors\",\n author_email=\"[email protected]\",\n url=\"https://github.com/getsentry/sentry-python\",\n description=\"Python client for Sentry (https://getsentry.com)\",\n long_description=__doc__,\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n zip_safe=False,\n license=\"BSD\",\n install_requires=[\"urllib3\", \"certifi\"],\n extras_require={\"flask\": [\"flask>=0.8\", \"blinker>=1.1\"]},\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Software Development :: Libraries :: Python 
Modules',\n ],\n)\n", "path": "setup.py"}]}
| 648 | 240 |
gh_patches_debug_36319 | rasdani/github-patches | git_diff | CTPUG__wafer-163 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pentabarf export calculates duration incorrectly for items spanning multiple slots
The pentabarf.xml export does the wrong thing if an item spans multiple slots. It calculates the duration from just the first slot, rather than the entire item.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wafer/schedule/admin.py`
Content:
```
1 import datetime
2
3 from django.conf.urls import url
4 from django.contrib import admin
5 from django.contrib import messages
6 from django.utils.encoding import force_text
7 from django.utils.translation import ugettext as _
8 from django import forms
9
10 from wafer.schedule.models import Day, Venue, Slot, ScheduleItem
11 from wafer.talks.models import Talk, ACCEPTED
12 from wafer.pages.models import Page
13 from wafer.utils import cache_result
14
15
16 # These are functions to simplify testing
17 def find_overlapping_slots():
18 """Find any slots that overlap"""
19 overlaps = set([])
20 all_slots = list(Slot.objects.all())
21 for slot in all_slots:
22 # Because slots are ordered, we can be more efficient than this
23 # N^2 loop, but this is simple and, since the number of slots
24 # should be low, this should be "fast enough"
25 start = slot.get_start_time()
26 end = slot.end_time
27 for other_slot in all_slots:
28 if other_slot.pk == slot.pk:
29 continue
30 if other_slot.get_day() != slot.get_day():
31 # different days, can't overlap
32 continue
33 # Overlap if the start_time or end_time is bounded by our times
34 # start_time <= other.start_time < end_time
35 # or
36 # start_time < other.end_time <= end_time
37 other_start = other_slot.get_start_time()
38 other_end = other_slot.end_time
39 if start <= other_start and other_start < end:
40 overlaps.add(slot)
41 overlaps.add(other_slot)
42 elif start < other_end and other_end <= end:
43 overlaps.add(slot)
44 overlaps.add(other_slot)
45 return overlaps
46
47
48 def validate_items(all_items=None):
49 """Find errors in the schedule. Check for:
50 - pending / rejected talks in the schedule
51 - items with both talks and pages assigned
52 - items with neither talks nor pages assigned
53 """
54 if all_items is None:
55 all_items = prefetch_schedule_items()
56 validation = []
57 for item in all_items:
58 if item.talk is not None and item.page is not None:
59 validation.append(item)
60 elif item.talk is None and item.page is None:
61 validation.append(item)
62 elif item.talk and item.talk.status != ACCEPTED:
63 validation.append(item)
64 return validation
65
66
67 def find_duplicate_schedule_items(all_items=None):
68 """Find talks / pages assigned to multiple schedule items"""
69 if all_items is None:
70 all_items = prefetch_schedule_items()
71 duplicates = []
72 seen_talks = {}
73 for item in all_items:
74 if item.talk and item.talk in seen_talks:
75 duplicates.append(item)
76 if seen_talks[item.talk] not in duplicates:
77 duplicates.append(seen_talks[item.talk])
78 else:
79 seen_talks[item.talk] = item
80 # We currently allow duplicate pages for cases where we need disjoint
81 # schedule items, like multiple open space sessions on different
82 # days and similar cases. This may be revisited later
83 return duplicates
84
85
86 def find_clashes(all_items=None):
87 """Find schedule items which clash (common slot and venue)"""
88 if all_items is None:
89 all_items = prefetch_schedule_items()
90 clashes = {}
91 seen_venue_slots = {}
92 for item in all_items:
93 for slot in item.slots.all():
94 pos = (item.venue, slot)
95 if pos in seen_venue_slots:
96 if seen_venue_slots[pos] not in clashes:
97 clashes[pos] = [seen_venue_slots[pos]]
98 clashes[pos].append(item)
99 else:
100 seen_venue_slots[pos] = item
101 return clashes
102
103
104 def find_invalid_venues(all_items=None):
105 """Find venues assigned slots that aren't on the allowed list
106 of days."""
107 if all_items is None:
108 all_items = prefetch_schedule_items()
109 venues = {}
110 for item in all_items:
111 valid = False
112 item_days = list(item.venue.days.all())
113 for slot in item.slots.all():
114 for day in item_days:
115 if day == slot.get_day():
116 valid = True
117 break
118 if not valid:
119 venues.setdefault(item.venue, [])
120 venues[item.venue].append(item)
121 return venues
122
123
124 def prefetch_schedule_items():
125 """Prefetch all schedule items and related objects."""
126 return list(ScheduleItem.objects
127 .select_related(
128 'talk', 'page', 'venue')
129 .prefetch_related(
130 'slots', 'slots__previous_slot', 'slots__day')
131 .all())
132
133
134 @cache_result('wafer_schedule_check_schedule', 60*60)
135 def check_schedule():
136 """Helper routine to easily test if the schedule is valid"""
137 all_items = prefetch_schedule_items()
138 if find_clashes(all_items):
139 return False
140 if find_duplicate_schedule_items(all_items):
141 return False
142 if validate_items(all_items):
143 return False
144 if find_overlapping_slots():
145 return False
146 if find_invalid_venues(all_items):
147 return False
148 return True
149
150
151 class ScheduleItemAdminForm(forms.ModelForm):
152 class Meta:
153 model = ScheduleItem
154 fields = ('slots', 'venue', 'talk', 'page', 'details', 'notes',
155 'css_class')
156
157 def __init__(self, *args, **kwargs):
158 super(ScheduleItemAdminForm, self).__init__(*args, **kwargs)
159 self.fields['talk'].queryset = Talk.objects.filter(status=ACCEPTED)
160 # Present all pages as possible entries in the schedule
161 self.fields['page'].queryset = Page.objects.all()
162
163
164 class ScheduleItemAdmin(admin.ModelAdmin):
165 form = ScheduleItemAdminForm
166
167 change_list_template = 'admin/scheduleitem_list.html'
168 list_display = ['get_start_time', 'venue', 'get_title']
169
170 # We stuff these validation results into the view, rather than
171 # enforcing conditions on the actual model, since it can be hard
172 # to edit the schedule and keep it entirely consistent at every
173 # step (think exchanging talks and so forth)
174 def changelist_view(self, request, extra_context=None):
175 extra_context = extra_context or {}
176 # Find issues in the schedule
177 clashes = find_clashes()
178 validation = validate_items()
179 venues = find_invalid_venues()
180 duplicates = find_duplicate_schedule_items()
181 errors = {}
182 if clashes:
183 errors['clashes'] = clashes
184 if duplicates:
185 errors['duplicates'] = duplicates
186 if validation:
187 errors['validation'] = validation
188 if venues:
189 errors['venues'] = venues
190 extra_context['errors'] = errors
191 return super(ScheduleItemAdmin, self).changelist_view(request,
192 extra_context)
193
194 def get_urls(self):
195 from wafer.schedule.views import ScheduleEditView
196
197 urls = super(ScheduleItemAdmin, self).get_urls()
198 admin_schedule_edit_view = self.admin_site.admin_view(
199 ScheduleEditView.as_view())
200 my_urls = [
201 url(r'^edit/$', admin_schedule_edit_view, name='schedule_editor'),
202 url(r'^edit/(?P<day_id>[0-9]+)$', admin_schedule_edit_view,
203 name='schedule_editor'),
204 ]
205 return my_urls + urls
206
207
208 class SlotAdminForm(forms.ModelForm):
209
210 class Meta:
211 model = Slot
212 fields = ('name', 'previous_slot', 'day', 'start_time', 'end_time')
213
214 class Media:
215 js = ('js/scheduledatetime.js',)
216
217
218 class SlotAdminAddForm(SlotAdminForm):
219
220 # Additional field added for creating multiple slots at once
221 additional = forms.IntegerField(min_value=0, max_value=30, required=False,
222 label=_("Additional slots"),
223 help_text=_("Create this number of "
224 "additional slots following"
225 "this one"))
226
227
228 class SlotAdmin(admin.ModelAdmin):
229 form = SlotAdminForm
230
231 list_display = ('__str__', 'day', 'end_time')
232 list_editable = ('end_time',)
233
234 change_list_template = 'admin/slot_list.html'
235
236 def changelist_view(self, request, extra_context=None):
237 extra_context = extra_context or {}
238 # Find issues with the slots
239 errors = {}
240 overlaps = find_overlapping_slots()
241 if overlaps:
242 errors['overlaps'] = overlaps
243 extra_context['errors'] = errors
244 return super(SlotAdmin, self).changelist_view(request,
245 extra_context)
246
247 def get_form(self, request, obj=None, **kwargs):
248 """Change the form depending on whether we're adding or
249 editing the slot."""
250 if obj is None:
251 # Adding a new Slot
252 kwargs['form'] = SlotAdminAddForm
253 return super(SlotAdmin, self).get_form(request, obj, **kwargs)
254
255 def save_model(self, request, obj, form, change):
256 super(SlotAdmin, self).save_model(request, obj, form, change)
257 if not change and form.cleaned_data['additional'] > 0:
258 # We add the requested additional slots
259 # All created slots will have the same length as the slot just
260 # created, and we specify them as a sequence using
261 # "previous_slot" so tweaking start times is simple.
262 prev = obj
263 end = datetime.datetime.combine(prev.day.date, prev.end_time)
264 start = datetime.datetime.combine(prev.day.date,
265 prev.get_start_time())
266 slot_len = end - start
267 for loop in range(form.cleaned_data['additional']):
268 end = end + slot_len
269 new_slot = Slot(day=prev.day, previous_slot=prev,
270 end_time=end.time())
271 new_slot.save()
272 msgdict = {'obj': force_text(new_slot)}
273 msg = _("Additional slot %(obj)s added successfully") % msgdict
274 if hasattr(request, '_messages'):
275 # Don't add messages unless we have a suitable request
276 # Needed during testing, and possibly in other cases
277 self.message_user(request, msg, messages.SUCCESS)
278 prev = new_slot
279
280
281 admin.site.register(Day)
282 admin.site.register(Slot, SlotAdmin)
283 admin.site.register(Venue)
284 admin.site.register(ScheduleItem, ScheduleItemAdmin)
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wafer/schedule/admin.py b/wafer/schedule/admin.py
--- a/wafer/schedule/admin.py
+++ b/wafer/schedule/admin.py
@@ -45,6 +45,25 @@
return overlaps
+def find_non_contiguous(all_items=None):
+ """Find any items that have slots that aren't contiguous"""
+ if all_items is None:
+ all_items = prefetch_schedule_items()
+ non_contiguous = []
+ for item in all_items:
+ if item.slots.count() < 2:
+ # No point in checking
+ continue
+ last_slot = None
+ for slot in item.slots.all().order_by('end_time'):
+ if last_slot:
+ if last_slot.end_time != slot.get_start_time():
+ non_contiguous.append(item)
+ break
+ last_slot = slot
+ return non_contiguous
+
+
def validate_items(all_items=None):
"""Find errors in the schedule. Check for:
- pending / rejected talks in the schedule
@@ -143,6 +162,8 @@
return False
if find_overlapping_slots():
return False
+ if find_non_contiguous(all_items):
+ return False
if find_invalid_venues(all_items):
return False
return True
@@ -174,10 +195,12 @@
def changelist_view(self, request, extra_context=None):
extra_context = extra_context or {}
# Find issues in the schedule
+ all_items = None
clashes = find_clashes()
- validation = validate_items()
+ validation = validate_items(all_items)
venues = find_invalid_venues()
- duplicates = find_duplicate_schedule_items()
+ duplicates = find_duplicate_schedule_items(all_items)
+ non_contiguous = find_non_contiguous(all_items)
errors = {}
if clashes:
errors['clashes'] = clashes
@@ -187,6 +210,8 @@
errors['validation'] = validation
if venues:
errors['venues'] = venues
+ if non_contiguous:
+ errors['non_contiguous'] = non_contiguous
extra_context['errors'] = errors
return super(ScheduleItemAdmin, self).changelist_view(request,
extra_context)
|
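Note that the diff above adds a slot-contiguity check to the admin validation rather than touching the pentabarf export itself. For the duration problem described in the issue, a sketch of the intended calculation could look like the following (it assumes the Slot API shown in admin.py: `get_start_time()`, `end_time`, and a `Day` with a `date` field; `item_duration` is a hypothetical helper, not wafer code):

```python
import datetime

def item_duration(item):
    """Length of a ScheduleItem that may span several slots (hypothetical helper)."""
    # Relies on the slots being contiguous and on the same day, which is what the
    # find_non_contiguous check above is meant to guarantee.
    slots = sorted(item.slots.all(), key=lambda s: s.get_start_time())
    first, last = slots[0], slots[-1]
    day = first.get_day().date
    start = datetime.datetime.combine(day, first.get_start_time())
    end = datetime.datetime.combine(day, last.end_time)
    return end - start  # a timedelta covering every slot, not just the first one
```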
{"golden_diff": "diff --git a/wafer/schedule/admin.py b/wafer/schedule/admin.py\n--- a/wafer/schedule/admin.py\n+++ b/wafer/schedule/admin.py\n@@ -45,6 +45,25 @@\n return overlaps\n \n \n+def find_non_contiguous(all_items=None):\n+ \"\"\"Find any items that have slots that aren't contiguous\"\"\"\n+ if all_items is None:\n+ all_items = prefetch_schedule_items()\n+ non_contiguous = []\n+ for item in all_items:\n+ if item.slots.count() < 2:\n+ # No point in checking\n+ continue\n+ last_slot = None\n+ for slot in item.slots.all().order_by('end_time'):\n+ if last_slot:\n+ if last_slot.end_time != slot.get_start_time():\n+ non_contiguous.append(item)\n+ break\n+ last_slot = slot\n+ return non_contiguous\n+\n+\n def validate_items(all_items=None):\n \"\"\"Find errors in the schedule. Check for:\n - pending / rejected talks in the schedule\n@@ -143,6 +162,8 @@\n return False\n if find_overlapping_slots():\n return False\n+ if find_non_contiguous(all_items):\n+ return False\n if find_invalid_venues(all_items):\n return False\n return True\n@@ -174,10 +195,12 @@\n def changelist_view(self, request, extra_context=None):\n extra_context = extra_context or {}\n # Find issues in the schedule\n+ all_items = None\n clashes = find_clashes()\n- validation = validate_items()\n+ validation = validate_items(all_items)\n venues = find_invalid_venues()\n- duplicates = find_duplicate_schedule_items()\n+ duplicates = find_duplicate_schedule_items(all_items)\n+ non_contiguous = find_non_contiguous(all_items)\n errors = {}\n if clashes:\n errors['clashes'] = clashes\n@@ -187,6 +210,8 @@\n errors['validation'] = validation\n if venues:\n errors['venues'] = venues\n+ if non_contiguous:\n+ errors['non_contiguous'] = non_contiguous\n extra_context['errors'] = errors\n return super(ScheduleItemAdmin, self).changelist_view(request,\n extra_context)\n", "issue": "Pentabarf export calculates duration incorrectly for items spanning multiple slots\nThe pentabarf,xml export does the wrong thing if an item spans multiple slots. 
It calculates the duration from just the first slot, rather than the entire item.\n\n", "before_files": [{"content": "import datetime\n\nfrom django.conf.urls import url\nfrom django.contrib import admin\nfrom django.contrib import messages\nfrom django.utils.encoding import force_text\nfrom django.utils.translation import ugettext as _\nfrom django import forms\n\nfrom wafer.schedule.models import Day, Venue, Slot, ScheduleItem\nfrom wafer.talks.models import Talk, ACCEPTED\nfrom wafer.pages.models import Page\nfrom wafer.utils import cache_result\n\n\n# These are functions to simplify testing\ndef find_overlapping_slots():\n \"\"\"Find any slots that overlap\"\"\"\n overlaps = set([])\n all_slots = list(Slot.objects.all())\n for slot in all_slots:\n # Because slots are ordered, we can be more efficient than this\n # N^2 loop, but this is simple and, since the number of slots\n # should be low, this should be \"fast enough\"\n start = slot.get_start_time()\n end = slot.end_time\n for other_slot in all_slots:\n if other_slot.pk == slot.pk:\n continue\n if other_slot.get_day() != slot.get_day():\n # different days, can't overlap\n continue\n # Overlap if the start_time or end_time is bounded by our times\n # start_time <= other.start_time < end_time\n # or\n # start_time < other.end_time <= end_time\n other_start = other_slot.get_start_time()\n other_end = other_slot.end_time\n if start <= other_start and other_start < end:\n overlaps.add(slot)\n overlaps.add(other_slot)\n elif start < other_end and other_end <= end:\n overlaps.add(slot)\n overlaps.add(other_slot)\n return overlaps\n\n\ndef validate_items(all_items=None):\n \"\"\"Find errors in the schedule. Check for:\n - pending / rejected talks in the schedule\n - items with both talks and pages assigned\n - items with neither talks nor pages assigned\n \"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n validation = []\n for item in all_items:\n if item.talk is not None and item.page is not None:\n validation.append(item)\n elif item.talk is None and item.page is None:\n validation.append(item)\n elif item.talk and item.talk.status != ACCEPTED:\n validation.append(item)\n return validation\n\n\ndef find_duplicate_schedule_items(all_items=None):\n \"\"\"Find talks / pages assigned to mulitple schedule items\"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n duplicates = []\n seen_talks = {}\n for item in all_items:\n if item.talk and item.talk in seen_talks:\n duplicates.append(item)\n if seen_talks[item.talk] not in duplicates:\n duplicates.append(seen_talks[item.talk])\n else:\n seen_talks[item.talk] = item\n # We currently allow duplicate pages for cases were we need disjoint\n # schedule items, like multiple open space sessions on different\n # days and similar cases. 
This may be revisited later\n return duplicates\n\n\ndef find_clashes(all_items=None):\n \"\"\"Find schedule items which clash (common slot and venue)\"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n clashes = {}\n seen_venue_slots = {}\n for item in all_items:\n for slot in item.slots.all():\n pos = (item.venue, slot)\n if pos in seen_venue_slots:\n if seen_venue_slots[pos] not in clashes:\n clashes[pos] = [seen_venue_slots[pos]]\n clashes[pos].append(item)\n else:\n seen_venue_slots[pos] = item\n return clashes\n\n\ndef find_invalid_venues(all_items=None):\n \"\"\"Find venues assigned slots that aren't on the allowed list\n of days.\"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n venues = {}\n for item in all_items:\n valid = False\n item_days = list(item.venue.days.all())\n for slot in item.slots.all():\n for day in item_days:\n if day == slot.get_day():\n valid = True\n break\n if not valid:\n venues.setdefault(item.venue, [])\n venues[item.venue].append(item)\n return venues\n\n\ndef prefetch_schedule_items():\n \"\"\"Prefetch all schedule items and related objects.\"\"\"\n return list(ScheduleItem.objects\n .select_related(\n 'talk', 'page', 'venue')\n .prefetch_related(\n 'slots', 'slots__previous_slot', 'slots__day')\n .all())\n\n\n@cache_result('wafer_schedule_check_schedule', 60*60)\ndef check_schedule():\n \"\"\"Helper routine to eaily test if the schedule is valid\"\"\"\n all_items = prefetch_schedule_items()\n if find_clashes(all_items):\n return False\n if find_duplicate_schedule_items(all_items):\n return False\n if validate_items(all_items):\n return False\n if find_overlapping_slots():\n return False\n if find_invalid_venues(all_items):\n return False\n return True\n\n\nclass ScheduleItemAdminForm(forms.ModelForm):\n class Meta:\n model = ScheduleItem\n fields = ('slots', 'venue', 'talk', 'page', 'details', 'notes',\n 'css_class')\n\n def __init__(self, *args, **kwargs):\n super(ScheduleItemAdminForm, self).__init__(*args, **kwargs)\n self.fields['talk'].queryset = Talk.objects.filter(status=ACCEPTED)\n # Present all pages as possible entries in the schedule\n self.fields['page'].queryset = Page.objects.all()\n\n\nclass ScheduleItemAdmin(admin.ModelAdmin):\n form = ScheduleItemAdminForm\n\n change_list_template = 'admin/scheduleitem_list.html'\n list_display = ['get_start_time', 'venue', 'get_title']\n\n # We stuff these validation results into the view, rather than\n # enforcing conditions on the actual model, since it can be hard\n # to edit the schedule and keep it entirely consistent at every\n # step (think exchanging talks and so forth)\n def changelist_view(self, request, extra_context=None):\n extra_context = extra_context or {}\n # Find issues in the schedule\n clashes = find_clashes()\n validation = validate_items()\n venues = find_invalid_venues()\n duplicates = find_duplicate_schedule_items()\n errors = {}\n if clashes:\n errors['clashes'] = clashes\n if duplicates:\n errors['duplicates'] = duplicates\n if validation:\n errors['validation'] = validation\n if venues:\n errors['venues'] = venues\n extra_context['errors'] = errors\n return super(ScheduleItemAdmin, self).changelist_view(request,\n extra_context)\n\n def get_urls(self):\n from wafer.schedule.views import ScheduleEditView\n\n urls = super(ScheduleItemAdmin, self).get_urls()\n admin_schedule_edit_view = self.admin_site.admin_view(\n ScheduleEditView.as_view())\n my_urls = [\n url(r'^edit/$', admin_schedule_edit_view, name='schedule_editor'),\n 
url(r'^edit/(?P<day_id>[0-9]+)$', admin_schedule_edit_view,\n name='schedule_editor'),\n ]\n return my_urls + urls\n\n\nclass SlotAdminForm(forms.ModelForm):\n\n class Meta:\n model = Slot\n fields = ('name', 'previous_slot', 'day', 'start_time', 'end_time')\n\n class Media:\n js = ('js/scheduledatetime.js',)\n\n\nclass SlotAdminAddForm(SlotAdminForm):\n\n # Additional field added for creating multiple slots at once\n additional = forms.IntegerField(min_value=0, max_value=30, required=False,\n label=_(\"Additional slots\"),\n help_text=_(\"Create this number of \"\n \"additional slots following\"\n \"this one\"))\n\n\nclass SlotAdmin(admin.ModelAdmin):\n form = SlotAdminForm\n\n list_display = ('__str__', 'day', 'end_time')\n list_editable = ('end_time',)\n\n change_list_template = 'admin/slot_list.html'\n\n def changelist_view(self, request, extra_context=None):\n extra_context = extra_context or {}\n # Find issues with the slots\n errors = {}\n overlaps = find_overlapping_slots()\n if overlaps:\n errors['overlaps'] = overlaps\n extra_context['errors'] = errors\n return super(SlotAdmin, self).changelist_view(request,\n extra_context)\n\n def get_form(self, request, obj=None, **kwargs):\n \"\"\"Change the form depending on whether we're adding or\n editing the slot.\"\"\"\n if obj is None:\n # Adding a new Slot\n kwargs['form'] = SlotAdminAddForm\n return super(SlotAdmin, self).get_form(request, obj, **kwargs)\n\n def save_model(self, request, obj, form, change):\n super(SlotAdmin, self).save_model(request, obj, form, change)\n if not change and form.cleaned_data['additional'] > 0:\n # We add the requested additional slots\n # All created slot will have the same length as the slot just\n # created , and we specify them as a sequence using\n # \"previous_slot\" so tweaking start times is simple.\n prev = obj\n end = datetime.datetime.combine(prev.day.date, prev.end_time)\n start = datetime.datetime.combine(prev.day.date,\n prev.get_start_time())\n slot_len = end - start\n for loop in range(form.cleaned_data['additional']):\n end = end + slot_len\n new_slot = Slot(day=prev.day, previous_slot=prev,\n end_time=end.time())\n new_slot.save()\n msgdict = {'obj': force_text(new_slot)}\n msg = _(\"Additional slot %(obj)s added sucessfully\") % msgdict\n if hasattr(request, '_messages'):\n # Don't add messages unless we have a suitable request\n # Needed during testing, and possibly in other cases\n self.message_user(request, msg, messages.SUCCESS)\n prev = new_slot\n\n\nadmin.site.register(Day)\nadmin.site.register(Slot, SlotAdmin)\nadmin.site.register(Venue)\nadmin.site.register(ScheduleItem, ScheduleItemAdmin)\n", "path": "wafer/schedule/admin.py"}], "after_files": [{"content": "import datetime\n\nfrom django.conf.urls import url\nfrom django.contrib import admin\nfrom django.contrib import messages\nfrom django.utils.encoding import force_text\nfrom django.utils.translation import ugettext as _\nfrom django import forms\n\nfrom wafer.schedule.models import Day, Venue, Slot, ScheduleItem\nfrom wafer.talks.models import Talk, ACCEPTED\nfrom wafer.pages.models import Page\nfrom wafer.utils import cache_result\n\n\n# These are functions to simplify testing\ndef find_overlapping_slots():\n \"\"\"Find any slots that overlap\"\"\"\n overlaps = set([])\n all_slots = list(Slot.objects.all())\n for slot in all_slots:\n # Because slots are ordered, we can be more efficient than this\n # N^2 loop, but this is simple and, since the number of slots\n # should be low, this should be \"fast enough\"\n start = 
slot.get_start_time()\n end = slot.end_time\n for other_slot in all_slots:\n if other_slot.pk == slot.pk:\n continue\n if other_slot.get_day() != slot.get_day():\n # different days, can't overlap\n continue\n # Overlap if the start_time or end_time is bounded by our times\n # start_time <= other.start_time < end_time\n # or\n # start_time < other.end_time <= end_time\n other_start = other_slot.get_start_time()\n other_end = other_slot.end_time\n if start <= other_start and other_start < end:\n overlaps.add(slot)\n overlaps.add(other_slot)\n elif start < other_end and other_end <= end:\n overlaps.add(slot)\n overlaps.add(other_slot)\n return overlaps\n\n\ndef find_non_contiguous(all_items=None):\n \"\"\"Find any items that have slots that aren't contiguous\"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n non_contiguous = []\n for item in all_items:\n if item.slots.count() < 2:\n # No point in checking\n continue\n last_slot = None\n for slot in item.slots.all().order_by('end_time'):\n if last_slot:\n if last_slot.end_time != slot.get_start_time():\n non_contiguous.append(item)\n break\n last_slot = slot\n return non_contiguous\n\n\ndef validate_items(all_items=None):\n \"\"\"Find errors in the schedule. Check for:\n - pending / rejected talks in the schedule\n - items with both talks and pages assigned\n - items with neither talks nor pages assigned\n \"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n validation = []\n for item in all_items:\n if item.talk is not None and item.page is not None:\n validation.append(item)\n elif item.talk is None and item.page is None:\n validation.append(item)\n elif item.talk and item.talk.status != ACCEPTED:\n validation.append(item)\n return validation\n\n\ndef find_duplicate_schedule_items(all_items=None):\n \"\"\"Find talks / pages assigned to mulitple schedule items\"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n duplicates = []\n seen_talks = {}\n for item in all_items:\n if item.talk and item.talk in seen_talks:\n duplicates.append(item)\n if seen_talks[item.talk] not in duplicates:\n duplicates.append(seen_talks[item.talk])\n else:\n seen_talks[item.talk] = item\n # We currently allow duplicate pages for cases were we need disjoint\n # schedule items, like multiple open space sessions on different\n # days and similar cases. 
This may be revisited later\n return duplicates\n\n\ndef find_clashes(all_items=None):\n \"\"\"Find schedule items which clash (common slot and venue)\"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n clashes = {}\n seen_venue_slots = {}\n for item in all_items:\n for slot in item.slots.all():\n pos = (item.venue, slot)\n if pos in seen_venue_slots:\n if seen_venue_slots[pos] not in clashes:\n clashes[pos] = [seen_venue_slots[pos]]\n clashes[pos].append(item)\n else:\n seen_venue_slots[pos] = item\n return clashes\n\n\ndef find_invalid_venues(all_items=None):\n \"\"\"Find venues assigned slots that aren't on the allowed list\n of days.\"\"\"\n if all_items is None:\n all_items = prefetch_schedule_items()\n venues = {}\n for item in all_items:\n valid = False\n item_days = list(item.venue.days.all())\n for slot in item.slots.all():\n for day in item_days:\n if day == slot.get_day():\n valid = True\n break\n if not valid:\n venues.setdefault(item.venue, [])\n venues[item.venue].append(item)\n return venues\n\n\ndef prefetch_schedule_items():\n \"\"\"Prefetch all schedule items and related objects.\"\"\"\n return list(ScheduleItem.objects\n .select_related(\n 'talk', 'page', 'venue')\n .prefetch_related(\n 'slots', 'slots__previous_slot', 'slots__day')\n .all())\n\n\n@cache_result('wafer_schedule_check_schedule', 60*60)\ndef check_schedule():\n \"\"\"Helper routine to eaily test if the schedule is valid\"\"\"\n all_items = prefetch_schedule_items()\n if find_clashes(all_items):\n return False\n if find_duplicate_schedule_items(all_items):\n return False\n if validate_items(all_items):\n return False\n if find_overlapping_slots():\n return False\n if find_non_contiguous(all_items):\n return False\n if find_invalid_venues(all_items):\n return False\n return True\n\n\nclass ScheduleItemAdminForm(forms.ModelForm):\n class Meta:\n model = ScheduleItem\n fields = ('slots', 'venue', 'talk', 'page', 'details', 'notes',\n 'css_class')\n\n def __init__(self, *args, **kwargs):\n super(ScheduleItemAdminForm, self).__init__(*args, **kwargs)\n self.fields['talk'].queryset = Talk.objects.filter(status=ACCEPTED)\n # Present all pages as possible entries in the schedule\n self.fields['page'].queryset = Page.objects.all()\n\n\nclass ScheduleItemAdmin(admin.ModelAdmin):\n form = ScheduleItemAdminForm\n\n change_list_template = 'admin/scheduleitem_list.html'\n list_display = ['get_start_time', 'venue', 'get_title']\n\n # We stuff these validation results into the view, rather than\n # enforcing conditions on the actual model, since it can be hard\n # to edit the schedule and keep it entirely consistent at every\n # step (think exchanging talks and so forth)\n def changelist_view(self, request, extra_context=None):\n extra_context = extra_context or {}\n # Find issues in the schedule\n all_items = None\n clashes = find_clashes()\n validation = validate_items(all_items)\n venues = find_invalid_venues()\n duplicates = find_duplicate_schedule_items(all_items)\n non_contiguous = find_non_contiguous(all_items)\n errors = {}\n if clashes:\n errors['clashes'] = clashes\n if duplicates:\n errors['duplicates'] = duplicates\n if validation:\n errors['validation'] = validation\n if venues:\n errors['venues'] = venues\n if non_contiguous:\n errors['non_contiguous'] = non_contiguous\n extra_context['errors'] = errors\n return super(ScheduleItemAdmin, self).changelist_view(request,\n extra_context)\n\n def get_urls(self):\n from wafer.schedule.views import ScheduleEditView\n\n urls = 
super(ScheduleItemAdmin, self).get_urls()\n admin_schedule_edit_view = self.admin_site.admin_view(\n ScheduleEditView.as_view())\n my_urls = [\n url(r'^edit/$', admin_schedule_edit_view, name='schedule_editor'),\n url(r'^edit/(?P<day_id>[0-9]+)$', admin_schedule_edit_view,\n name='schedule_editor'),\n ]\n return my_urls + urls\n\n\nclass SlotAdminForm(forms.ModelForm):\n\n class Meta:\n model = Slot\n fields = ('name', 'previous_slot', 'day', 'start_time', 'end_time')\n\n class Media:\n js = ('js/scheduledatetime.js',)\n\n\nclass SlotAdminAddForm(SlotAdminForm):\n\n # Additional field added for creating multiple slots at once\n additional = forms.IntegerField(min_value=0, max_value=30, required=False,\n label=_(\"Additional slots\"),\n help_text=_(\"Create this number of \"\n \"additional slots following\"\n \"this one\"))\n\n\nclass SlotAdmin(admin.ModelAdmin):\n form = SlotAdminForm\n\n list_display = ('__str__', 'day', 'end_time')\n list_editable = ('end_time',)\n\n change_list_template = 'admin/slot_list.html'\n\n def changelist_view(self, request, extra_context=None):\n extra_context = extra_context or {}\n # Find issues with the slots\n errors = {}\n overlaps = find_overlapping_slots()\n if overlaps:\n errors['overlaps'] = overlaps\n extra_context['errors'] = errors\n return super(SlotAdmin, self).changelist_view(request,\n extra_context)\n\n def get_form(self, request, obj=None, **kwargs):\n \"\"\"Change the form depending on whether we're adding or\n editing the slot.\"\"\"\n if obj is None:\n # Adding a new Slot\n kwargs['form'] = SlotAdminAddForm\n return super(SlotAdmin, self).get_form(request, obj, **kwargs)\n\n def save_model(self, request, obj, form, change):\n super(SlotAdmin, self).save_model(request, obj, form, change)\n if not change and form.cleaned_data['additional'] > 0:\n # We add the requested additional slots\n # All created slot will have the same length as the slot just\n # created , and we specify them as a sequence using\n # \"previous_slot\" so tweaking start times is simple.\n prev = obj\n end = datetime.datetime.combine(prev.day.date, prev.end_time)\n start = datetime.datetime.combine(prev.day.date,\n prev.get_start_time())\n slot_len = end - start\n for loop in range(form.cleaned_data['additional']):\n end = end + slot_len\n new_slot = Slot(day=prev.day, previous_slot=prev,\n end_time=end.time())\n new_slot.save()\n msgdict = {'obj': force_text(new_slot)}\n msg = _(\"Additional slot %(obj)s added sucessfully\") % msgdict\n if hasattr(request, '_messages'):\n # Don't add messages unless we have a suitable request\n # Needed during testing, and possibly in other cases\n self.message_user(request, msg, messages.SUCCESS)\n prev = new_slot\n\n\nadmin.site.register(Day)\nadmin.site.register(Slot, SlotAdmin)\nadmin.site.register(Venue)\nadmin.site.register(ScheduleItem, ScheduleItemAdmin)\n", "path": "wafer/schedule/admin.py"}]}
| 3,259 | 513 |
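For the wafer/schedule record above, whose issue text notes that the duration is taken from only the first slot of an item, a minimal sketch of computing a duration across all of an item's slots. This is not the project's actual fix (which is not shown readably here); `item_duration` is a hypothetical helper, and the Slot/ScheduleItem attributes are assumed from the admin code quoted in the record.

```python
import datetime


def item_duration(item):
    # Hypothetical helper, not part of wafer: span the duration over *all*
    # slots of a ScheduleItem instead of just the first one. Assumes each
    # slot exposes day.date, get_start_time() and end_time as in the admin
    # code quoted above.
    slots = list(item.slots.all())
    if not slots:
        return datetime.timedelta(0)
    starts = [datetime.datetime.combine(s.day.date, s.get_start_time())
              for s in slots]
    ends = [datetime.datetime.combine(s.day.date, s.end_time)
            for s in slots]
    return max(ends) - min(starts)
```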
| gh_patches_debug_15869 | rasdani/github-patches | git_diff | Pylons__pyramid-1033 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
prequest does not support PUT
Title says it all. Give me the green light and I can whip up a patch (no pun intended).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/scripts/prequest.py`
Content:
```
1 import optparse
2 import sys
3 import textwrap
4
5 from pyramid.compat import url_unquote
6 from pyramid.request import Request
7 from pyramid.paster import get_app
8 from pyramid.scripts.common import parse_vars
9
10 def main(argv=sys.argv, quiet=False):
11 command = PRequestCommand(argv, quiet)
12 return command.run()
13
14 class PRequestCommand(object):
15 description = """\
16 Run a request for the described application.
17
18 This command makes an artifical request to a web application that uses a
19 PasteDeploy (.ini) configuration file for the server and application.
20
21 Use "prequest config.ini /path" to request "/path". Use "prequest
22 --method=POST config.ini /path < data" to do a POST with the given
23 request body.
24
25 If the path is relative (doesn't begin with "/") it is interpreted as
26 relative to "/". The path passed to this script should be URL-quoted.
27 The path can be succeeded with a query string (e.g. `/path?a=1&=b2').
28
29 The variable "environ['paste.command_request']" will be set to "True" in
30 the request's WSGI environment, so your application can distinguish these
31 calls from normal requests.
32 """
33 usage = "usage: %prog config_uri path_info [args/options]"
34 parser = optparse.OptionParser(
35 usage=usage,
36 description=textwrap.dedent(description)
37 )
38 parser.add_option(
39 '-n', '--app-name',
40 dest='app_name',
41 metavar= 'NAME',
42 help="Load the named application from the config file (default 'main')",
43 type="string",
44 )
45 parser.add_option(
46 '--header',
47 dest='headers',
48 metavar='NAME:VALUE',
49 type='string',
50 action='append',
51 help="Header to add to request (you can use this option multiple times)"
52 )
53 parser.add_option(
54 '-d', '--display-headers',
55 dest='display_headers',
56 action='store_true',
57 help='Display status and headers before the response body'
58 )
59 parser.add_option(
60 '-m', '--method',
61 dest='method',
62 choices=['GET', 'HEAD', 'POST', 'DELETE'],
63 type='choice',
64 help='Request method type (GET, POST, DELETE)',
65 )
66
67 get_app = staticmethod(get_app)
68 stdin = sys.stdin
69
70 def __init__(self, argv, quiet=False):
71 self.quiet = quiet
72 self.options, self.args = self.parser.parse_args(argv[1:])
73
74 def out(self, msg): # pragma: no cover
75 if not self.quiet:
76 print(msg)
77
78 def run(self):
79 if not len(self.args) >= 2:
80 self.out('You must provide at least two arguments')
81 return 2
82 app_spec = self.args[0]
83 path = self.args[1]
84 if not path.startswith('/'):
85 path = '/' + path
86
87 try:
88 path, qs = path.split('?', 1)
89 except ValueError:
90 qs = ''
91
92 path = url_unquote(path)
93
94 headers = {}
95 if self.options.headers:
96 for item in self.options.headers:
97 if ':' not in item:
98 self.out(
99 "Bad --header=%s option, value must be in the form "
100 "'name:value'" % item)
101 return 2
102 name, value = item.split(':', 1)
103 headers[name] = value.strip()
104
105 app = self.get_app(app_spec, self.options.app_name,
106 options=parse_vars(self.args[2:]))
107
108 request_method = (self.options.method or 'GET').upper()
109
110 environ = {
111 'REQUEST_METHOD': request_method,
112 'SCRIPT_NAME': '', # may be empty if app is at the root
113 'PATH_INFO': path,
114 'SERVER_NAME': 'localhost', # always mandatory
115 'SERVER_PORT': '80', # always mandatory
116 'SERVER_PROTOCOL': 'HTTP/1.0',
117 'CONTENT_TYPE': 'text/plain',
118 'REMOTE_ADDR':'127.0.0.1',
119 'wsgi.run_once': True,
120 'wsgi.multithread': False,
121 'wsgi.multiprocess': False,
122 'wsgi.errors': sys.stderr,
123 'wsgi.url_scheme': 'http',
124 'wsgi.version': (1, 0),
125 'QUERY_STRING': qs,
126 'HTTP_ACCEPT': 'text/plain;q=1.0, */*;q=0.1',
127 'paste.command_request': True,
128 }
129
130 if request_method == 'POST':
131 environ['wsgi.input'] = self.stdin
132 environ['CONTENT_LENGTH'] = '-1'
133
134 for name, value in headers.items():
135 if name.lower() == 'content-type':
136 name = 'CONTENT_TYPE'
137 else:
138 name = 'HTTP_'+name.upper().replace('-', '_')
139 environ[name] = value
140
141 request = Request.blank(path, environ=environ)
142 response = request.get_response(app)
143 if self.options.display_headers:
144 self.out(response.status)
145 for name, value in response.headerlist:
146 self.out('%s: %s' % (name, value))
147 if response.charset:
148 self.out(response.ubody)
149 else:
150 self.out(response.body)
151 return 0
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyramid/scripts/prequest.py b/pyramid/scripts/prequest.py
--- a/pyramid/scripts/prequest.py
+++ b/pyramid/scripts/prequest.py
@@ -59,9 +59,9 @@
parser.add_option(
'-m', '--method',
dest='method',
- choices=['GET', 'HEAD', 'POST', 'DELETE'],
+ choices=['GET', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE'],
type='choice',
- help='Request method type (GET, POST, DELETE)',
+ help='Request method type',
)
get_app = staticmethod(get_app)
@@ -127,7 +127,7 @@
'paste.command_request': True,
}
- if request_method == 'POST':
+ if request_method in ('POST', 'PUT', 'PATCH'):
environ['wsgi.input'] = self.stdin
environ['CONTENT_LENGTH'] = '-1'
|
{"golden_diff": "diff --git a/pyramid/scripts/prequest.py b/pyramid/scripts/prequest.py\n--- a/pyramid/scripts/prequest.py\n+++ b/pyramid/scripts/prequest.py\n@@ -59,9 +59,9 @@\n parser.add_option(\n '-m', '--method',\n dest='method',\n- choices=['GET', 'HEAD', 'POST', 'DELETE'],\n+ choices=['GET', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE'],\n type='choice',\n- help='Request method type (GET, POST, DELETE)',\n+ help='Request method type',\n )\n \n get_app = staticmethod(get_app)\n@@ -127,7 +127,7 @@\n 'paste.command_request': True,\n }\n \n- if request_method == 'POST':\n+ if request_method in ('POST', 'PUT', 'PATCH'):\n environ['wsgi.input'] = self.stdin\n environ['CONTENT_LENGTH'] = '-1'\n", "issue": "prequest does not support PUT\nTitle says it all. Give me the green light and I can whip up a patch (no pun intended).\n\n", "before_files": [{"content": "import optparse\nimport sys\nimport textwrap\n\nfrom pyramid.compat import url_unquote\nfrom pyramid.request import Request\nfrom pyramid.paster import get_app\nfrom pyramid.scripts.common import parse_vars\n\ndef main(argv=sys.argv, quiet=False):\n command = PRequestCommand(argv, quiet)\n return command.run()\n\nclass PRequestCommand(object):\n description = \"\"\"\\\n Run a request for the described application.\n\n This command makes an artifical request to a web application that uses a\n PasteDeploy (.ini) configuration file for the server and application.\n\n Use \"prequest config.ini /path\" to request \"/path\". Use \"prequest\n --method=POST config.ini /path < data\" to do a POST with the given\n request body.\n\n If the path is relative (doesn't begin with \"/\") it is interpreted as\n relative to \"/\". The path passed to this script should be URL-quoted.\n The path can be succeeded with a query string (e.g. 
`/path?a=1&=b2').\n\n The variable \"environ['paste.command_request']\" will be set to \"True\" in\n the request's WSGI environment, so your application can distinguish these\n calls from normal requests.\n \"\"\"\n usage = \"usage: %prog config_uri path_info [args/options]\"\n parser = optparse.OptionParser(\n usage=usage,\n description=textwrap.dedent(description)\n )\n parser.add_option(\n '-n', '--app-name',\n dest='app_name',\n metavar= 'NAME',\n help=\"Load the named application from the config file (default 'main')\",\n type=\"string\",\n )\n parser.add_option(\n '--header',\n dest='headers',\n metavar='NAME:VALUE',\n type='string',\n action='append',\n help=\"Header to add to request (you can use this option multiple times)\"\n )\n parser.add_option(\n '-d', '--display-headers',\n dest='display_headers',\n action='store_true',\n help='Display status and headers before the response body'\n )\n parser.add_option(\n '-m', '--method',\n dest='method',\n choices=['GET', 'HEAD', 'POST', 'DELETE'],\n type='choice',\n help='Request method type (GET, POST, DELETE)',\n )\n\n get_app = staticmethod(get_app)\n stdin = sys.stdin\n\n def __init__(self, argv, quiet=False):\n self.quiet = quiet\n self.options, self.args = self.parser.parse_args(argv[1:])\n\n def out(self, msg): # pragma: no cover\n if not self.quiet:\n print(msg)\n\n def run(self):\n if not len(self.args) >= 2:\n self.out('You must provide at least two arguments')\n return 2\n app_spec = self.args[0]\n path = self.args[1]\n if not path.startswith('/'):\n path = '/' + path\n\n try:\n path, qs = path.split('?', 1)\n except ValueError:\n qs = ''\n\n path = url_unquote(path)\n\n headers = {}\n if self.options.headers:\n for item in self.options.headers:\n if ':' not in item:\n self.out(\n \"Bad --header=%s option, value must be in the form \"\n \"'name:value'\" % item)\n return 2\n name, value = item.split(':', 1)\n headers[name] = value.strip()\n\n app = self.get_app(app_spec, self.options.app_name,\n options=parse_vars(self.args[2:]))\n\n request_method = (self.options.method or 'GET').upper()\n\n environ = {\n 'REQUEST_METHOD': request_method,\n 'SCRIPT_NAME': '', # may be empty if app is at the root\n 'PATH_INFO': path, \n 'SERVER_NAME': 'localhost', # always mandatory\n 'SERVER_PORT': '80', # always mandatory \n 'SERVER_PROTOCOL': 'HTTP/1.0',\n 'CONTENT_TYPE': 'text/plain',\n 'REMOTE_ADDR':'127.0.0.1',\n 'wsgi.run_once': True,\n 'wsgi.multithread': False,\n 'wsgi.multiprocess': False,\n 'wsgi.errors': sys.stderr,\n 'wsgi.url_scheme': 'http',\n 'wsgi.version': (1, 0),\n 'QUERY_STRING': qs,\n 'HTTP_ACCEPT': 'text/plain;q=1.0, */*;q=0.1',\n 'paste.command_request': True,\n }\n\n if request_method == 'POST':\n environ['wsgi.input'] = self.stdin\n environ['CONTENT_LENGTH'] = '-1'\n\n for name, value in headers.items():\n if name.lower() == 'content-type':\n name = 'CONTENT_TYPE'\n else:\n name = 'HTTP_'+name.upper().replace('-', '_')\n environ[name] = value\n\n request = Request.blank(path, environ=environ)\n response = request.get_response(app)\n if self.options.display_headers:\n self.out(response.status)\n for name, value in response.headerlist:\n self.out('%s: %s' % (name, value))\n if response.charset:\n self.out(response.ubody)\n else:\n self.out(response.body)\n return 0\n", "path": "pyramid/scripts/prequest.py"}], "after_files": [{"content": "import optparse\nimport sys\nimport textwrap\n\nfrom pyramid.compat import url_unquote\nfrom pyramid.request import Request\nfrom pyramid.paster import get_app\nfrom 
pyramid.scripts.common import parse_vars\n\ndef main(argv=sys.argv, quiet=False):\n command = PRequestCommand(argv, quiet)\n return command.run()\n\nclass PRequestCommand(object):\n description = \"\"\"\\\n Run a request for the described application.\n\n This command makes an artifical request to a web application that uses a\n PasteDeploy (.ini) configuration file for the server and application.\n\n Use \"prequest config.ini /path\" to request \"/path\". Use \"prequest\n --method=POST config.ini /path < data\" to do a POST with the given\n request body.\n\n If the path is relative (doesn't begin with \"/\") it is interpreted as\n relative to \"/\". The path passed to this script should be URL-quoted.\n The path can be succeeded with a query string (e.g. `/path?a=1&=b2').\n\n The variable \"environ['paste.command_request']\" will be set to \"True\" in\n the request's WSGI environment, so your application can distinguish these\n calls from normal requests.\n \"\"\"\n usage = \"usage: %prog config_uri path_info [args/options]\"\n parser = optparse.OptionParser(\n usage=usage,\n description=textwrap.dedent(description)\n )\n parser.add_option(\n '-n', '--app-name',\n dest='app_name',\n metavar= 'NAME',\n help=\"Load the named application from the config file (default 'main')\",\n type=\"string\",\n )\n parser.add_option(\n '--header',\n dest='headers',\n metavar='NAME:VALUE',\n type='string',\n action='append',\n help=\"Header to add to request (you can use this option multiple times)\"\n )\n parser.add_option(\n '-d', '--display-headers',\n dest='display_headers',\n action='store_true',\n help='Display status and headers before the response body'\n )\n parser.add_option(\n '-m', '--method',\n dest='method',\n choices=['GET', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE'],\n type='choice',\n help='Request method type',\n )\n\n get_app = staticmethod(get_app)\n stdin = sys.stdin\n\n def __init__(self, argv, quiet=False):\n self.quiet = quiet\n self.options, self.args = self.parser.parse_args(argv[1:])\n\n def out(self, msg): # pragma: no cover\n if not self.quiet:\n print(msg)\n\n def run(self):\n if not len(self.args) >= 2:\n self.out('You must provide at least two arguments')\n return 2\n app_spec = self.args[0]\n path = self.args[1]\n if not path.startswith('/'):\n path = '/' + path\n\n try:\n path, qs = path.split('?', 1)\n except ValueError:\n qs = ''\n\n path = url_unquote(path)\n\n headers = {}\n if self.options.headers:\n for item in self.options.headers:\n if ':' not in item:\n self.out(\n \"Bad --header=%s option, value must be in the form \"\n \"'name:value'\" % item)\n return 2\n name, value = item.split(':', 1)\n headers[name] = value.strip()\n\n app = self.get_app(app_spec, self.options.app_name,\n options=parse_vars(self.args[2:]))\n\n request_method = (self.options.method or 'GET').upper()\n\n environ = {\n 'REQUEST_METHOD': request_method,\n 'SCRIPT_NAME': '', # may be empty if app is at the root\n 'PATH_INFO': path, \n 'SERVER_NAME': 'localhost', # always mandatory\n 'SERVER_PORT': '80', # always mandatory \n 'SERVER_PROTOCOL': 'HTTP/1.0',\n 'CONTENT_TYPE': 'text/plain',\n 'REMOTE_ADDR':'127.0.0.1',\n 'wsgi.run_once': True,\n 'wsgi.multithread': False,\n 'wsgi.multiprocess': False,\n 'wsgi.errors': sys.stderr,\n 'wsgi.url_scheme': 'http',\n 'wsgi.version': (1, 0),\n 'QUERY_STRING': qs,\n 'HTTP_ACCEPT': 'text/plain;q=1.0, */*;q=0.1',\n 'paste.command_request': True,\n }\n\n if request_method in ('POST', 'PUT', 'PATCH'):\n environ['wsgi.input'] = self.stdin\n 
environ['CONTENT_LENGTH'] = '-1'\n\n for name, value in headers.items():\n if name.lower() == 'content-type':\n name = 'CONTENT_TYPE'\n else:\n name = 'HTTP_'+name.upper().replace('-', '_')\n environ[name] = value\n\n request = Request.blank(path, environ=environ)\n response = request.get_response(app)\n if self.options.display_headers:\n self.out(response.status)\n for name, value in response.headerlist:\n self.out('%s: %s' % (name, value))\n if response.charset:\n self.out(response.ubody)\n else:\n self.out(response.body)\n return 0\n", "path": "pyramid/scripts/prequest.py"}]}
| 1,802 | 210 |
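For the prequest record above, a rough sketch of the behaviour the golden diff enables: body-carrying methods other than POST get a request body wired into the WSGI environment. The `build_request` helper is illustrative only and mirrors the environ handling of the quoted script; it is not part of Pyramid itself.

```python
from io import BytesIO

from pyramid.request import Request


def build_request(path, method, body=b''):
    # Illustrative only: mimics the patched prequest logic for methods
    # that carry a body. In the real script the body comes from stdin.
    environ = {
        'REQUEST_METHOD': method,
        'paste.command_request': True,
    }
    if method in ('POST', 'PUT', 'PATCH'):   # the condition the patch relaxes
        environ['wsgi.input'] = BytesIO(body)
        environ['CONTENT_LENGTH'] = '-1'
    return Request.blank(path, environ=environ)


req = build_request('/path', 'PUT', b'some data')
```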
| gh_patches_debug_9258 | rasdani/github-patches | git_diff | pytorch__ignite-2173 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve the example of Timer's usage
## 📚 Documentation
The example in the `Timer's documentation suggest to estimate the processing time for a batch as follows:
```python
timer.attach(
trainer,
start=Events.EPOCH_STARTED,
resume=Events.ITERATION_STARTED,
pause=Events.ITERATION_COMPLETED,
step=Events.ITERATION_COMPLETED)
```
It is to note that this timer will be reset at the start of each epoch. When we use multiple workers in the data loader, the first couple of iterations at each epoch often take longer than the later ones. The reset behavior will incur an inaccurate estimation of the remaining training time (ETA), even when the `average` flag is set to `True`. Specifically, the ETA is computed as `(engine.state.max_iters - engine.state.iteration) * time_per_iter`. So the small fluctuation in `time_per_iter` will be magnified by remaining number of iterations. To address this problem, we can let the timer only start once in the whole training process:
```python
timer.attach(
trainer,
start=Events.EPOCH_STARTED(once=1),
resume=Events.ITERATION_STARTED,
pause=Events.ITERATION_COMPLETED,
step=Events.ITERATION_COMPLETED)
```
I have empirically verified the effectiveness of this modification.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/handlers/timing.py`
Content:
```
1 from time import perf_counter
2 from typing import Any, Optional
3
4 from ignite.engine import Engine, Events
5
6 __all__ = ["Timer"]
7
8
9 class Timer:
10 """ Timer object can be used to measure (average) time between events.
11
12 Args:
13 average: if True, then when ``.value()`` method is called, the returned value
14 will be equal to total time measured, divided by the value of internal counter.
15
16 Attributes:
17 total (float): total time elapsed when the Timer was running (in seconds).
18 step_count (int): internal counter, useful to measure average time, e.g. of processing a single batch.
19 Incremented with the ``.step()`` method.
20 running (bool): flag indicating if timer is measuring time.
21
22 Note:
23 When using ``Timer(average=True)`` do not forget to call ``timer.step()`` every time an event occurs. See
24 the examples below.
25
26 Examples:
27
28 Measuring total time of the epoch:
29
30 >>> from ignite.handlers import Timer
31 >>> import time
32 >>> work = lambda : time.sleep(0.1)
33 >>> idle = lambda : time.sleep(0.1)
34 >>> t = Timer(average=False)
35 >>> for _ in range(10):
36 ... work()
37 ... idle()
38 ...
39 >>> t.value()
40 2.003073937026784
41
42 Measuring average time of the epoch:
43
44 >>> t = Timer(average=True)
45 >>> for _ in range(10):
46 ... work()
47 ... idle()
48 ... t.step()
49 ...
50 >>> t.value()
51 0.2003182829997968
52
53 Measuring average time it takes to execute a single ``work()`` call:
54
55 >>> t = Timer(average=True)
56 >>> for _ in range(10):
57 ... t.resume()
58 ... work()
59 ... t.pause()
60 ... idle()
61 ... t.step()
62 ...
63 >>> t.value()
64 0.10016545779653825
65
66 Using the Timer to measure average time it takes to process a single batch of examples:
67
68 >>> from ignite.engine import Engine, Events
69 >>> from ignite.handlers import Timer
70 >>> trainer = Engine(training_update_function)
71 >>> timer = Timer(average=True)
72 >>> timer.attach(trainer,
73 ... start=Events.EPOCH_STARTED,
74 ... resume=Events.ITERATION_STARTED,
75 ... pause=Events.ITERATION_COMPLETED,
76 ... step=Events.ITERATION_COMPLETED)
77 """
78
79 def __init__(self, average: bool = False):
80 self._average = average
81
82 self.reset()
83
84 def attach(
85 self,
86 engine: Engine,
87 start: Events = Events.STARTED,
88 pause: Events = Events.COMPLETED,
89 resume: Optional[Events] = None,
90 step: Optional[Events] = None,
91 ) -> "Timer":
92 """ Register callbacks to control the timer.
93
94 Args:
95 engine: Engine that this timer will be attached to.
96 start: Event which should start (reset) the timer.
97 pause: Event which should pause the timer.
98 resume: Event which should resume the timer.
99 step: Event which should call the `step` method of the counter.
100
101 Returns:
102 this timer
103 """
104
105 engine.add_event_handler(start, self.reset)
106 engine.add_event_handler(pause, self.pause)
107
108 if resume is not None:
109 engine.add_event_handler(resume, self.resume)
110
111 if step is not None:
112 engine.add_event_handler(step, self.step)
113
114 return self
115
116 def reset(self, *args: Any) -> "Timer":
117 """Reset the timer to zero."""
118 self._t0 = perf_counter()
119 self.total = 0.0
120 self.step_count = 0.0
121 self.running = True
122
123 return self
124
125 def pause(self, *args: Any) -> None:
126 """Pause the current running timer."""
127 if self.running:
128 self.total += self._elapsed()
129 self.running = False
130
131 def resume(self, *args: Any) -> None:
132 """Resume the current running timer."""
133 if not self.running:
134 self.running = True
135 self._t0 = perf_counter()
136
137 def value(self) -> float:
138 """Return the average timer value."""
139 total = self.total
140 if self.running:
141 total += self._elapsed()
142
143 if self._average:
144 denominator = max(self.step_count, 1.0)
145 else:
146 denominator = 1.0
147
148 return total / denominator
149
150 def step(self, *args: Any) -> None:
151 """Increment the timer."""
152 self.step_count += 1.0
153
154 def _elapsed(self) -> float:
155 return perf_counter() - self._t0
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/handlers/timing.py b/ignite/handlers/timing.py
--- a/ignite/handlers/timing.py
+++ b/ignite/handlers/timing.py
@@ -70,7 +70,7 @@
>>> trainer = Engine(training_update_function)
>>> timer = Timer(average=True)
>>> timer.attach(trainer,
- ... start=Events.EPOCH_STARTED,
+ ... start=Events.STARTED,
... resume=Events.ITERATION_STARTED,
... pause=Events.ITERATION_COMPLETED,
... step=Events.ITERATION_COMPLETED)
|
{"golden_diff": "diff --git a/ignite/handlers/timing.py b/ignite/handlers/timing.py\n--- a/ignite/handlers/timing.py\n+++ b/ignite/handlers/timing.py\n@@ -70,7 +70,7 @@\n >>> trainer = Engine(training_update_function)\n >>> timer = Timer(average=True)\n >>> timer.attach(trainer,\n- ... start=Events.EPOCH_STARTED,\n+ ... start=Events.STARTED,\n ... resume=Events.ITERATION_STARTED,\n ... pause=Events.ITERATION_COMPLETED,\n ... step=Events.ITERATION_COMPLETED)\n", "issue": "Improve the example of Timer's usage\n## \ud83d\udcda Documentation\r\n\r\nThe example in the `Timer's documentation suggest to estimate the processing time for a batch as follows:\r\n```python\r\ntimer.attach(\r\n trainer, \r\n start=Events.EPOCH_STARTED, \r\n resume=Events.ITERATION_STARTED, \r\n pause=Events.ITERATION_COMPLETED, \r\n step=Events.ITERATION_COMPLETED)\r\n```\r\nIt is to note that this timer will be reset at the start of each epoch. When we use multiple workers in the data loader, the first couple of iterations at each epoch often take longer than the later ones. The reset behavior will incur an inaccurate estimation of the remaining training time (ETA), even when the `average` flag is set to `True`. Specifically, the ETA is computed as `(engine.state.max_iters - engine.state.iteration) * time_per_iter`. So the small fluctuation in `time_per_iter` will be magnified by remaining number of iterations. To address this problem, we can let the timer only start once in the whole training process:\r\n```python\r\n timer.attach(\r\n trainer, \r\n start=Events.EPOCH_STARTED(once=1), \r\n resume=Events.ITERATION_STARTED, \r\n pause=Events.ITERATION_COMPLETED, \r\n step=Events.ITERATION_COMPLETED)\r\n```\r\nI have empirically verified the effectiveness of this modification.\n", "before_files": [{"content": "from time import perf_counter\nfrom typing import Any, Optional\n\nfrom ignite.engine import Engine, Events\n\n__all__ = [\"Timer\"]\n\n\nclass Timer:\n \"\"\" Timer object can be used to measure (average) time between events.\n\n Args:\n average: if True, then when ``.value()`` method is called, the returned value\n will be equal to total time measured, divided by the value of internal counter.\n\n Attributes:\n total (float): total time elapsed when the Timer was running (in seconds).\n step_count (int): internal counter, useful to measure average time, e.g. of processing a single batch.\n Incremented with the ``.step()`` method.\n running (bool): flag indicating if timer is measuring time.\n\n Note:\n When using ``Timer(average=True)`` do not forget to call ``timer.step()`` every time an event occurs. See\n the examples below.\n\n Examples:\n\n Measuring total time of the epoch:\n\n >>> from ignite.handlers import Timer\n >>> import time\n >>> work = lambda : time.sleep(0.1)\n >>> idle = lambda : time.sleep(0.1)\n >>> t = Timer(average=False)\n >>> for _ in range(10):\n ... work()\n ... idle()\n ...\n >>> t.value()\n 2.003073937026784\n\n Measuring average time of the epoch:\n\n >>> t = Timer(average=True)\n >>> for _ in range(10):\n ... work()\n ... idle()\n ... t.step()\n ...\n >>> t.value()\n 0.2003182829997968\n\n Measuring average time it takes to execute a single ``work()`` call:\n\n >>> t = Timer(average=True)\n >>> for _ in range(10):\n ... t.resume()\n ... work()\n ... t.pause()\n ... idle()\n ... 
t.step()\n ...\n >>> t.value()\n 0.10016545779653825\n\n Using the Timer to measure average time it takes to process a single batch of examples:\n\n >>> from ignite.engine import Engine, Events\n >>> from ignite.handlers import Timer\n >>> trainer = Engine(training_update_function)\n >>> timer = Timer(average=True)\n >>> timer.attach(trainer,\n ... start=Events.EPOCH_STARTED,\n ... resume=Events.ITERATION_STARTED,\n ... pause=Events.ITERATION_COMPLETED,\n ... step=Events.ITERATION_COMPLETED)\n \"\"\"\n\n def __init__(self, average: bool = False):\n self._average = average\n\n self.reset()\n\n def attach(\n self,\n engine: Engine,\n start: Events = Events.STARTED,\n pause: Events = Events.COMPLETED,\n resume: Optional[Events] = None,\n step: Optional[Events] = None,\n ) -> \"Timer\":\n \"\"\" Register callbacks to control the timer.\n\n Args:\n engine: Engine that this timer will be attached to.\n start: Event which should start (reset) the timer.\n pause: Event which should pause the timer.\n resume: Event which should resume the timer.\n step: Event which should call the `step` method of the counter.\n\n Returns:\n this timer\n \"\"\"\n\n engine.add_event_handler(start, self.reset)\n engine.add_event_handler(pause, self.pause)\n\n if resume is not None:\n engine.add_event_handler(resume, self.resume)\n\n if step is not None:\n engine.add_event_handler(step, self.step)\n\n return self\n\n def reset(self, *args: Any) -> \"Timer\":\n \"\"\"Reset the timer to zero.\"\"\"\n self._t0 = perf_counter()\n self.total = 0.0\n self.step_count = 0.0\n self.running = True\n\n return self\n\n def pause(self, *args: Any) -> None:\n \"\"\"Pause the current running timer.\"\"\"\n if self.running:\n self.total += self._elapsed()\n self.running = False\n\n def resume(self, *args: Any) -> None:\n \"\"\"Resume the current running timer.\"\"\"\n if not self.running:\n self.running = True\n self._t0 = perf_counter()\n\n def value(self) -> float:\n \"\"\"Return the average timer value.\"\"\"\n total = self.total\n if self.running:\n total += self._elapsed()\n\n if self._average:\n denominator = max(self.step_count, 1.0)\n else:\n denominator = 1.0\n\n return total / denominator\n\n def step(self, *args: Any) -> None:\n \"\"\"Increment the timer.\"\"\"\n self.step_count += 1.0\n\n def _elapsed(self) -> float:\n return perf_counter() - self._t0\n", "path": "ignite/handlers/timing.py"}], "after_files": [{"content": "from time import perf_counter\nfrom typing import Any, Optional\n\nfrom ignite.engine import Engine, Events\n\n__all__ = [\"Timer\"]\n\n\nclass Timer:\n \"\"\" Timer object can be used to measure (average) time between events.\n\n Args:\n average: if True, then when ``.value()`` method is called, the returned value\n will be equal to total time measured, divided by the value of internal counter.\n\n Attributes:\n total (float): total time elapsed when the Timer was running (in seconds).\n step_count (int): internal counter, useful to measure average time, e.g. of processing a single batch.\n Incremented with the ``.step()`` method.\n running (bool): flag indicating if timer is measuring time.\n\n Note:\n When using ``Timer(average=True)`` do not forget to call ``timer.step()`` every time an event occurs. See\n the examples below.\n\n Examples:\n\n Measuring total time of the epoch:\n\n >>> from ignite.handlers import Timer\n >>> import time\n >>> work = lambda : time.sleep(0.1)\n >>> idle = lambda : time.sleep(0.1)\n >>> t = Timer(average=False)\n >>> for _ in range(10):\n ... work()\n ... 
idle()\n ...\n >>> t.value()\n 2.003073937026784\n\n Measuring average time of the epoch:\n\n >>> t = Timer(average=True)\n >>> for _ in range(10):\n ... work()\n ... idle()\n ... t.step()\n ...\n >>> t.value()\n 0.2003182829997968\n\n Measuring average time it takes to execute a single ``work()`` call:\n\n >>> t = Timer(average=True)\n >>> for _ in range(10):\n ... t.resume()\n ... work()\n ... t.pause()\n ... idle()\n ... t.step()\n ...\n >>> t.value()\n 0.10016545779653825\n\n Using the Timer to measure average time it takes to process a single batch of examples:\n\n >>> from ignite.engine import Engine, Events\n >>> from ignite.handlers import Timer\n >>> trainer = Engine(training_update_function)\n >>> timer = Timer(average=True)\n >>> timer.attach(trainer,\n ... start=Events.STARTED,\n ... resume=Events.ITERATION_STARTED,\n ... pause=Events.ITERATION_COMPLETED,\n ... step=Events.ITERATION_COMPLETED)\n \"\"\"\n\n def __init__(self, average: bool = False):\n self._average = average\n\n self.reset()\n\n def attach(\n self,\n engine: Engine,\n start: Events = Events.STARTED,\n pause: Events = Events.COMPLETED,\n resume: Optional[Events] = None,\n step: Optional[Events] = None,\n ) -> \"Timer\":\n \"\"\" Register callbacks to control the timer.\n\n Args:\n engine: Engine that this timer will be attached to.\n start: Event which should start (reset) the timer.\n pause: Event which should pause the timer.\n resume: Event which should resume the timer.\n step: Event which should call the `step` method of the counter.\n\n Returns:\n this timer\n \"\"\"\n\n engine.add_event_handler(start, self.reset)\n engine.add_event_handler(pause, self.pause)\n\n if resume is not None:\n engine.add_event_handler(resume, self.resume)\n\n if step is not None:\n engine.add_event_handler(step, self.step)\n\n return self\n\n def reset(self, *args: Any) -> \"Timer\":\n \"\"\"Reset the timer to zero.\"\"\"\n self._t0 = perf_counter()\n self.total = 0.0\n self.step_count = 0.0\n self.running = True\n\n return self\n\n def pause(self, *args: Any) -> None:\n \"\"\"Pause the current running timer.\"\"\"\n if self.running:\n self.total += self._elapsed()\n self.running = False\n\n def resume(self, *args: Any) -> None:\n \"\"\"Resume the current running timer.\"\"\"\n if not self.running:\n self.running = True\n self._t0 = perf_counter()\n\n def value(self) -> float:\n \"\"\"Return the average timer value.\"\"\"\n total = self.total\n if self.running:\n total += self._elapsed()\n\n if self._average:\n denominator = max(self.step_count, 1.0)\n else:\n denominator = 1.0\n\n return total / denominator\n\n def step(self, *args: Any) -> None:\n \"\"\"Increment the timer.\"\"\"\n self.step_count += 1.0\n\n def _elapsed(self) -> float:\n return perf_counter() - self._t0\n", "path": "ignite/handlers/timing.py"}]}
| 1,994 | 135 |
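For the ignite Timer record above, a hedged sketch that combines the documented change (attach the timer once via `Events.STARTED`, so it is never reset at epoch boundaries) with the ETA formula quoted in the issue. `attach_eta_logger` is an illustrative helper, and `engine.state.max_iters` is taken from the issue text rather than guaranteed by every ignite version.

```python
from ignite.engine import Engine, Events
from ignite.handlers import Timer


def attach_eta_logger(trainer: Engine) -> Timer:
    timer = Timer(average=True)
    timer.attach(
        trainer,
        start=Events.STARTED,            # not EPOCH_STARTED, so no per-epoch reset
        resume=Events.ITERATION_STARTED,
        pause=Events.ITERATION_COMPLETED,
        step=Events.ITERATION_COMPLETED,
    )

    @trainer.on(Events.ITERATION_COMPLETED(every=100))
    def log_eta(engine):
        # ETA as described in the issue: remaining iterations * avg time/iter.
        remaining = engine.state.max_iters - engine.state.iteration
        print(f"ETA: {remaining * timer.value():.0f}s")

    return timer
```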
| gh_patches_debug_25230 | rasdani/github-patches | git_diff | scikit-image__scikit-image-4840 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG: name of bg_color is not supported in skimage.color.label2rgb
## Description
Docstring of `label2rgb` says
```
bg_color : str or array, optional
Background color. Must be a name in `color_dict` or RGB float values
between [0, 1].
```
but only array of `bg_color` is supported in the `kind` of overlay.
The reason is lack of processing for string at here
https://github.com/scikit-image/scikit-image/blob/7227346b262659103392b5701b057b776664f9aa/skimage/color/colorlabel.py#L46-L48
<!--
(Note: for guidance on how to use `scikit-image`, please post instead on https://forum.image.sc/tag/scikit-image)
-->
## Way to reproduce
we can simply reproduce the bug by using the example `doc/examples/segmentation/plot_label.py`
```python
# Place the full code we need to recreate your issue here
# upload all necessary images to github too!
from skimage import data
from skimage.filters import threshold_otsu
from skimage.segmentation import clear_border
from skimage.measure import label
from skimage.morphology import closing, square
from skimage.color import label2rgb
image = data.coins()[50:-50, 50:-50]
thresh = threshold_otsu(image)
bw = closing(image > thresh, square(3))
cleared = clear_border(bw)
label_image = label(cleared)
image_label_overlay = label2rgb(label_image, image=image, bg_label=0, bg_color='red')
```
## Version information
```python
# Paste the output of the following python commands
from __future__ import print_function
import sys; print(sys.version)
import platform; print(platform.platform())
import skimage; print("scikit-image version: {}".format(skimage.__version__))
import numpy; print("numpy version: {}".format(numpy.__version__))
```
```python
# your output here
3.6.10 | packaged by conda-forge | (default, Mar 5 2020, 09:56:10)
[GCC Clang 9.0.1 ]
Darwin-18.7.0-x86_64-i386-64bit
scikit-image version: 0.17.dev0
numpy version: 1.18.1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/color/colorlabel.py`
Content:
```
1 import itertools
2
3 import numpy as np
4
5 from .._shared.utils import warn, change_default_value
6 from ..util import img_as_float
7 from . import rgb_colors
8 from .colorconv import rgb2gray, gray2rgb
9
10
11 __all__ = ['color_dict', 'label2rgb', 'DEFAULT_COLORS']
12
13
14 DEFAULT_COLORS = ('red', 'blue', 'yellow', 'magenta', 'green',
15 'indigo', 'darkorange', 'cyan', 'pink', 'yellowgreen')
16
17
18 color_dict = {k: v for k, v in rgb_colors.__dict__.items()
19 if isinstance(v, tuple)}
20
21
22 def _rgb_vector(color):
23 """Return RGB color as (1, 3) array.
24
25 This RGB array gets multiplied by masked regions of an RGB image, which are
26 partially flattened by masking (i.e. dimensions 2D + RGB -> 1D + RGB).
27
28 Parameters
29 ----------
30 color : str or array
31 Color name in `color_dict` or RGB float values between [0, 1].
32 """
33 if isinstance(color, str):
34 color = color_dict[color]
35 # Slice to handle RGBA colors.
36 return np.array(color[:3])
37
38
39 def _match_label_with_color(label, colors, bg_label, bg_color):
40 """Return `unique_labels` and `color_cycle` for label array and color list.
41
42 Colors are cycled for normal labels, but the background color should only
43 be used for the background.
44 """
45 # Temporarily set background color; it will be removed later.
46 if bg_color is None:
47 bg_color = (0, 0, 0)
48 bg_color = _rgb_vector([bg_color])
49
50 # map labels to their ranks among all labels from small to large
51 unique_labels, mapped_labels = np.unique(label, return_inverse=True)
52
53 # get rank of bg_label
54 bg_label_rank_list = mapped_labels[label.flat == bg_label]
55
56 # The rank of each label is the index of the color it is matched to in
57 # color cycle. bg_label should always be mapped to the first color, so
58 # its rank must be 0. Other labels should be ranked from small to large
59 # from 1.
60 if len(bg_label_rank_list) > 0:
61 bg_label_rank = bg_label_rank_list[0]
62 mapped_labels[mapped_labels < bg_label_rank] += 1
63 mapped_labels[label.flat == bg_label] = 0
64 else:
65 mapped_labels += 1
66
67 # Modify labels and color cycle so background color is used only once.
68 color_cycle = itertools.cycle(colors)
69 color_cycle = itertools.chain(bg_color, color_cycle)
70
71 return mapped_labels, color_cycle
72
73
74 @change_default_value("bg_label", new_value=0, changed_version="0.19")
75 def label2rgb(label, image=None, colors=None, alpha=0.3,
76 bg_label=-1, bg_color=(0, 0, 0), image_alpha=1, kind='overlay'):
77 """Return an RGB image where color-coded labels are painted over the image.
78
79 Parameters
80 ----------
81 label : array, shape (M, N)
82 Integer array of labels with the same shape as `image`.
83 image : array, shape (M, N, 3), optional
84 Image used as underlay for labels. If the input is an RGB image, it's
85 converted to grayscale before coloring.
86 colors : list, optional
87 List of colors. If the number of labels exceeds the number of colors,
88 then the colors are cycled.
89 alpha : float [0, 1], optional
90 Opacity of colorized labels. Ignored if image is `None`.
91 bg_label : int, optional
92 Label that's treated as the background. If `bg_label` is specified,
93 `bg_color` is `None`, and `kind` is `overlay`,
94 background is not painted by any colors.
95 bg_color : str or array, optional
96 Background color. Must be a name in `color_dict` or RGB float values
97 between [0, 1].
98 image_alpha : float [0, 1], optional
99 Opacity of the image.
100 kind : string, one of {'overlay', 'avg'}
101 The kind of color image desired. 'overlay' cycles over defined colors
102 and overlays the colored labels over the original image. 'avg' replaces
103 each labeled segment with its average color, for a stained-class or
104 pastel painting appearance.
105
106 Returns
107 -------
108 result : array of float, shape (M, N, 3)
109 The result of blending a cycling colormap (`colors`) for each distinct
110 value in `label` with the image, at a certain alpha value.
111 """
112 if kind == 'overlay':
113 return _label2rgb_overlay(label, image, colors, alpha, bg_label,
114 bg_color, image_alpha)
115 elif kind == 'avg':
116 return _label2rgb_avg(label, image, bg_label, bg_color)
117 else:
118 raise ValueError("`kind` must be either 'overlay' or 'avg'.")
119
120
121 def _label2rgb_overlay(label, image=None, colors=None, alpha=0.3,
122 bg_label=-1, bg_color=None, image_alpha=1):
123 """Return an RGB image where color-coded labels are painted over the image.
124
125 Parameters
126 ----------
127 label : array, shape (M, N)
128 Integer array of labels with the same shape as `image`.
129 image : array, shape (M, N, 3), optional
130 Image used as underlay for labels. If the input is an RGB image, it's
131 converted to grayscale before coloring.
132 colors : list, optional
133 List of colors. If the number of labels exceeds the number of colors,
134 then the colors are cycled.
135 alpha : float [0, 1], optional
136 Opacity of colorized labels. Ignored if image is `None`.
137 bg_label : int, optional
138 Label that's treated as the background. If `bg_label` is specified and
139 `bg_color` is `None`, background is not painted by any colors.
140 bg_color : str or array, optional
141 Background color. Must be a name in `color_dict` or RGB float values
142 between [0, 1].
143 image_alpha : float [0, 1], optional
144 Opacity of the image.
145
146 Returns
147 -------
148 result : array of float, shape (M, N, 3)
149 The result of blending a cycling colormap (`colors`) for each distinct
150 value in `label` with the image, at a certain alpha value.
151 """
152 if colors is None:
153 colors = DEFAULT_COLORS
154 colors = [_rgb_vector(c) for c in colors]
155
156 if image is None:
157 image = np.zeros(label.shape + (3,), dtype=np.float64)
158 # Opacity doesn't make sense if no image exists.
159 alpha = 1
160 else:
161 if not image.shape[:2] == label.shape:
162 raise ValueError("`image` and `label` must be the same shape")
163
164 if image.min() < 0:
165 warn("Negative intensities in `image` are not supported")
166
167 if image.ndim > label.ndim:
168 image = img_as_float(rgb2gray(image))
169 else:
170 image = img_as_float(image)
171 image = gray2rgb(image) * image_alpha + (1 - image_alpha)
172
173 # Ensure that all labels are non-negative so we can index into
174 # `label_to_color` correctly.
175 offset = min(label.min(), bg_label)
176 if offset != 0:
177 label = label - offset # Make sure you don't modify the input array.
178 bg_label -= offset
179
180 new_type = np.min_scalar_type(int(label.max()))
181 if new_type == np.bool:
182 new_type = np.uint8
183 label = label.astype(new_type)
184
185 mapped_labels_flat, color_cycle = _match_label_with_color(label, colors,
186 bg_label, bg_color)
187
188 if len(mapped_labels_flat) == 0:
189 return image
190
191 dense_labels = range(max(mapped_labels_flat) + 1)
192
193 label_to_color = np.stack([c for i, c in zip(dense_labels, color_cycle)])
194
195 mapped_labels = label
196 mapped_labels.flat = mapped_labels_flat
197 result = label_to_color[mapped_labels] * alpha + image * (1 - alpha)
198
199 # Remove background label if its color was not specified.
200 remove_background = 0 in mapped_labels_flat and bg_color is None
201 if remove_background:
202 result[label == bg_label] = image[label == bg_label]
203
204 return result
205
206
207 def _label2rgb_avg(label_field, image, bg_label=0, bg_color=(0, 0, 0)):
208 """Visualise each segment in `label_field` with its mean color in `image`.
209
210 Parameters
211 ----------
212 label_field : array of int
213 A segmentation of an image.
214 image : array, shape ``label_field.shape + (3,)``
215 A color image of the same spatial shape as `label_field`.
216 bg_label : int, optional
217 A value in `label_field` to be treated as background.
218 bg_color : 3-tuple of int, optional
219 The color for the background label
220
221 Returns
222 -------
223 out : array, same shape and type as `image`
224 The output visualization.
225 """
226 out = np.zeros_like(image)
227 labels = np.unique(label_field)
228 bg = (labels == bg_label)
229 if bg.any():
230 labels = labels[labels != bg_label]
231 mask = (label_field == bg_label).nonzero()
232 out[mask] = bg_color
233 for label in labels:
234 mask = (label_field == label).nonzero()
235 color = image[mask].mean(axis=0)
236 out[mask] = color
237 return out
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/skimage/color/colorlabel.py b/skimage/color/colorlabel.py
--- a/skimage/color/colorlabel.py
+++ b/skimage/color/colorlabel.py
@@ -45,7 +45,7 @@
# Temporarily set background color; it will be removed later.
if bg_color is None:
bg_color = (0, 0, 0)
- bg_color = _rgb_vector([bg_color])
+ bg_color = _rgb_vector(bg_color)
# map labels to their ranks among all labels from small to large
unique_labels, mapped_labels = np.unique(label, return_inverse=True)
@@ -66,7 +66,7 @@
# Modify labels and color cycle so background color is used only once.
color_cycle = itertools.cycle(colors)
- color_cycle = itertools.chain(bg_color, color_cycle)
+ color_cycle = itertools.chain([bg_color], color_cycle)
return mapped_labels, color_cycle
@@ -223,7 +223,7 @@
out : array, same shape and type as `image`
The output visualization.
"""
- out = np.zeros_like(image)
+ out = np.zeros(label_field.shape + (3,))
labels = np.unique(label_field)
bg = (labels == bg_label)
if bg.any():
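As a quick illustration, the first two hunks above are what allow the reproduction from the issue to run with a named background colour (a sketch with arbitrary values, not a test from the repository):
```python
import numpy as np
from skimage.color import label2rgb

label = np.array([[0, 0, 1],
                  [0, 2, 2]])
image = np.random.random(label.shape)  # grayscale float image in [0, 1]

# With the patch, the string 'red' is looked up in color_dict by _rgb_vector.
overlay = label2rgb(label, image=image, bg_label=0, bg_color='red')
```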
|
{"golden_diff": "diff --git a/skimage/color/colorlabel.py b/skimage/color/colorlabel.py\n--- a/skimage/color/colorlabel.py\n+++ b/skimage/color/colorlabel.py\n@@ -45,7 +45,7 @@\n # Temporarily set background color; it will be removed later.\n if bg_color is None:\n bg_color = (0, 0, 0)\n- bg_color = _rgb_vector([bg_color])\n+ bg_color = _rgb_vector(bg_color)\n \n # map labels to their ranks among all labels from small to large\n unique_labels, mapped_labels = np.unique(label, return_inverse=True)\n@@ -66,7 +66,7 @@\n \n # Modify labels and color cycle so background color is used only once.\n color_cycle = itertools.cycle(colors)\n- color_cycle = itertools.chain(bg_color, color_cycle)\n+ color_cycle = itertools.chain([bg_color], color_cycle)\n \n return mapped_labels, color_cycle\n \n@@ -223,7 +223,7 @@\n out : array, same shape and type as `image`\n The output visualization.\n \"\"\"\n- out = np.zeros_like(image)\n+ out = np.zeros(label_field.shape + (3,))\n labels = np.unique(label_field)\n bg = (labels == bg_label)\n if bg.any():\n", "issue": "BUG: name of bg_color is not supported in skimage.color.label2rgb\n## Description\r\nDocstring of `label2rgb` says\r\n```\r\n bg_color : str or array, optional\r\n Background color. Must be a name in `color_dict` or RGB float values\r\n between [0, 1].\r\n```\r\nbut only array of `bg_color` is supported in the `kind` of overlay.\r\nThe reason is lack of processing for string at here\r\nhttps://github.com/scikit-image/scikit-image/blob/7227346b262659103392b5701b057b776664f9aa/skimage/color/colorlabel.py#L46-L48\r\n\r\n<!--\r\n(Note: for guidance on how to use `scikit-image`, please post instead on https://forum.image.sc/tag/scikit-image)\r\n-->\r\n\r\n## Way to reproduce\r\nwe can simply reproduce the bug by using the example `doc/examples/segmentation/plot_label.py`\r\n```python\r\n# Place the full code we need to recreate your issue here\r\n# upload all necessary images to github too!\r\nfrom skimage import data\r\nfrom skimage.filters import threshold_otsu\r\nfrom skimage.segmentation import clear_border\r\nfrom skimage.measure import label\r\nfrom skimage.morphology import closing, square\r\nfrom skimage.color import label2rgb\r\n\r\n\r\nimage = data.coins()[50:-50, 50:-50]\r\n\r\nthresh = threshold_otsu(image)\r\nbw = closing(image > thresh, square(3))\r\n\r\ncleared = clear_border(bw)\r\n\r\nlabel_image = label(cleared)\r\nimage_label_overlay = label2rgb(label_image, image=image, bg_label=0, bg_color='red')\r\n```\r\n\r\n\r\n## Version information\r\n```python\r\n# Paste the output of the following python commands\r\nfrom __future__ import print_function\r\nimport sys; print(sys.version)\r\nimport platform; print(platform.platform())\r\nimport skimage; print(\"scikit-image version: {}\".format(skimage.__version__))\r\nimport numpy; print(\"numpy version: {}\".format(numpy.__version__))\r\n```\r\n\r\n```python\r\n# your output here\r\n3.6.10 | packaged by conda-forge | (default, Mar 5 2020, 09:56:10) \r\n[GCC Clang 9.0.1 ]\r\nDarwin-18.7.0-x86_64-i386-64bit\r\nscikit-image version: 0.17.dev0\r\nnumpy version: 1.18.1\r\n```\r\n\n", "before_files": [{"content": "import itertools\n\nimport numpy as np\n\nfrom .._shared.utils import warn, change_default_value\nfrom ..util import img_as_float\nfrom . 
import rgb_colors\nfrom .colorconv import rgb2gray, gray2rgb\n\n\n__all__ = ['color_dict', 'label2rgb', 'DEFAULT_COLORS']\n\n\nDEFAULT_COLORS = ('red', 'blue', 'yellow', 'magenta', 'green',\n 'indigo', 'darkorange', 'cyan', 'pink', 'yellowgreen')\n\n\ncolor_dict = {k: v for k, v in rgb_colors.__dict__.items()\n if isinstance(v, tuple)}\n\n\ndef _rgb_vector(color):\n \"\"\"Return RGB color as (1, 3) array.\n\n This RGB array gets multiplied by masked regions of an RGB image, which are\n partially flattened by masking (i.e. dimensions 2D + RGB -> 1D + RGB).\n\n Parameters\n ----------\n color : str or array\n Color name in `color_dict` or RGB float values between [0, 1].\n \"\"\"\n if isinstance(color, str):\n color = color_dict[color]\n # Slice to handle RGBA colors.\n return np.array(color[:3])\n\n\ndef _match_label_with_color(label, colors, bg_label, bg_color):\n \"\"\"Return `unique_labels` and `color_cycle` for label array and color list.\n\n Colors are cycled for normal labels, but the background color should only\n be used for the background.\n \"\"\"\n # Temporarily set background color; it will be removed later.\n if bg_color is None:\n bg_color = (0, 0, 0)\n bg_color = _rgb_vector([bg_color])\n\n # map labels to their ranks among all labels from small to large\n unique_labels, mapped_labels = np.unique(label, return_inverse=True)\n\n # get rank of bg_label\n bg_label_rank_list = mapped_labels[label.flat == bg_label]\n\n # The rank of each label is the index of the color it is matched to in\n # color cycle. bg_label should always be mapped to the first color, so\n # its rank must be 0. Other labels should be ranked from small to large\n # from 1.\n if len(bg_label_rank_list) > 0:\n bg_label_rank = bg_label_rank_list[0]\n mapped_labels[mapped_labels < bg_label_rank] += 1\n mapped_labels[label.flat == bg_label] = 0\n else:\n mapped_labels += 1\n\n # Modify labels and color cycle so background color is used only once.\n color_cycle = itertools.cycle(colors)\n color_cycle = itertools.chain(bg_color, color_cycle)\n\n return mapped_labels, color_cycle\n\n\n@change_default_value(\"bg_label\", new_value=0, changed_version=\"0.19\")\ndef label2rgb(label, image=None, colors=None, alpha=0.3,\n bg_label=-1, bg_color=(0, 0, 0), image_alpha=1, kind='overlay'):\n \"\"\"Return an RGB image where color-coded labels are painted over the image.\n\n Parameters\n ----------\n label : array, shape (M, N)\n Integer array of labels with the same shape as `image`.\n image : array, shape (M, N, 3), optional\n Image used as underlay for labels. If the input is an RGB image, it's\n converted to grayscale before coloring.\n colors : list, optional\n List of colors. If the number of labels exceeds the number of colors,\n then the colors are cycled.\n alpha : float [0, 1], optional\n Opacity of colorized labels. Ignored if image is `None`.\n bg_label : int, optional\n Label that's treated as the background. If `bg_label` is specified,\n `bg_color` is `None`, and `kind` is `overlay`,\n background is not painted by any colors.\n bg_color : str or array, optional\n Background color. Must be a name in `color_dict` or RGB float values\n between [0, 1].\n image_alpha : float [0, 1], optional\n Opacity of the image.\n kind : string, one of {'overlay', 'avg'}\n The kind of color image desired. 'overlay' cycles over defined colors\n and overlays the colored labels over the original image. 
'avg' replaces\n each labeled segment with its average color, for a stained-class or\n pastel painting appearance.\n\n Returns\n -------\n result : array of float, shape (M, N, 3)\n The result of blending a cycling colormap (`colors`) for each distinct\n value in `label` with the image, at a certain alpha value.\n \"\"\"\n if kind == 'overlay':\n return _label2rgb_overlay(label, image, colors, alpha, bg_label,\n bg_color, image_alpha)\n elif kind == 'avg':\n return _label2rgb_avg(label, image, bg_label, bg_color)\n else:\n raise ValueError(\"`kind` must be either 'overlay' or 'avg'.\")\n\n\ndef _label2rgb_overlay(label, image=None, colors=None, alpha=0.3,\n bg_label=-1, bg_color=None, image_alpha=1):\n \"\"\"Return an RGB image where color-coded labels are painted over the image.\n\n Parameters\n ----------\n label : array, shape (M, N)\n Integer array of labels with the same shape as `image`.\n image : array, shape (M, N, 3), optional\n Image used as underlay for labels. If the input is an RGB image, it's\n converted to grayscale before coloring.\n colors : list, optional\n List of colors. If the number of labels exceeds the number of colors,\n then the colors are cycled.\n alpha : float [0, 1], optional\n Opacity of colorized labels. Ignored if image is `None`.\n bg_label : int, optional\n Label that's treated as the background. If `bg_label` is specified and\n `bg_color` is `None`, background is not painted by any colors.\n bg_color : str or array, optional\n Background color. Must be a name in `color_dict` or RGB float values\n between [0, 1].\n image_alpha : float [0, 1], optional\n Opacity of the image.\n\n Returns\n -------\n result : array of float, shape (M, N, 3)\n The result of blending a cycling colormap (`colors`) for each distinct\n value in `label` with the image, at a certain alpha value.\n \"\"\"\n if colors is None:\n colors = DEFAULT_COLORS\n colors = [_rgb_vector(c) for c in colors]\n\n if image is None:\n image = np.zeros(label.shape + (3,), dtype=np.float64)\n # Opacity doesn't make sense if no image exists.\n alpha = 1\n else:\n if not image.shape[:2] == label.shape:\n raise ValueError(\"`image` and `label` must be the same shape\")\n\n if image.min() < 0:\n warn(\"Negative intensities in `image` are not supported\")\n\n if image.ndim > label.ndim:\n image = img_as_float(rgb2gray(image))\n else:\n image = img_as_float(image)\n image = gray2rgb(image) * image_alpha + (1 - image_alpha)\n\n # Ensure that all labels are non-negative so we can index into\n # `label_to_color` correctly.\n offset = min(label.min(), bg_label)\n if offset != 0:\n label = label - offset # Make sure you don't modify the input array.\n bg_label -= offset\n\n new_type = np.min_scalar_type(int(label.max()))\n if new_type == np.bool:\n new_type = np.uint8\n label = label.astype(new_type)\n\n mapped_labels_flat, color_cycle = _match_label_with_color(label, colors,\n bg_label, bg_color)\n\n if len(mapped_labels_flat) == 0:\n return image\n\n dense_labels = range(max(mapped_labels_flat) + 1)\n\n label_to_color = np.stack([c for i, c in zip(dense_labels, color_cycle)])\n\n mapped_labels = label\n mapped_labels.flat = mapped_labels_flat\n result = label_to_color[mapped_labels] * alpha + image * (1 - alpha)\n\n # Remove background label if its color was not specified.\n remove_background = 0 in mapped_labels_flat and bg_color is None\n if remove_background:\n result[label == bg_label] = image[label == bg_label]\n\n return result\n\n\ndef _label2rgb_avg(label_field, image, bg_label=0, bg_color=(0, 0, 
0)):\n \"\"\"Visualise each segment in `label_field` with its mean color in `image`.\n\n Parameters\n ----------\n label_field : array of int\n A segmentation of an image.\n image : array, shape ``label_field.shape + (3,)``\n A color image of the same spatial shape as `label_field`.\n bg_label : int, optional\n A value in `label_field` to be treated as background.\n bg_color : 3-tuple of int, optional\n The color for the background label\n\n Returns\n -------\n out : array, same shape and type as `image`\n The output visualization.\n \"\"\"\n out = np.zeros_like(image)\n labels = np.unique(label_field)\n bg = (labels == bg_label)\n if bg.any():\n labels = labels[labels != bg_label]\n mask = (label_field == bg_label).nonzero()\n out[mask] = bg_color\n for label in labels:\n mask = (label_field == label).nonzero()\n color = image[mask].mean(axis=0)\n out[mask] = color\n return out\n", "path": "skimage/color/colorlabel.py"}], "after_files": [{"content": "import itertools\n\nimport numpy as np\n\nfrom .._shared.utils import warn, change_default_value\nfrom ..util import img_as_float\nfrom . import rgb_colors\nfrom .colorconv import rgb2gray, gray2rgb\n\n\n__all__ = ['color_dict', 'label2rgb', 'DEFAULT_COLORS']\n\n\nDEFAULT_COLORS = ('red', 'blue', 'yellow', 'magenta', 'green',\n 'indigo', 'darkorange', 'cyan', 'pink', 'yellowgreen')\n\n\ncolor_dict = {k: v for k, v in rgb_colors.__dict__.items()\n if isinstance(v, tuple)}\n\n\ndef _rgb_vector(color):\n \"\"\"Return RGB color as (1, 3) array.\n\n This RGB array gets multiplied by masked regions of an RGB image, which are\n partially flattened by masking (i.e. dimensions 2D + RGB -> 1D + RGB).\n\n Parameters\n ----------\n color : str or array\n Color name in `color_dict` or RGB float values between [0, 1].\n \"\"\"\n if isinstance(color, str):\n color = color_dict[color]\n # Slice to handle RGBA colors.\n return np.array(color[:3])\n\n\ndef _match_label_with_color(label, colors, bg_label, bg_color):\n \"\"\"Return `unique_labels` and `color_cycle` for label array and color list.\n\n Colors are cycled for normal labels, but the background color should only\n be used for the background.\n \"\"\"\n # Temporarily set background color; it will be removed later.\n if bg_color is None:\n bg_color = (0, 0, 0)\n bg_color = _rgb_vector(bg_color)\n\n # map labels to their ranks among all labels from small to large\n unique_labels, mapped_labels = np.unique(label, return_inverse=True)\n\n # get rank of bg_label\n bg_label_rank_list = mapped_labels[label.flat == bg_label]\n\n # The rank of each label is the index of the color it is matched to in\n # color cycle. bg_label should always be mapped to the first color, so\n # its rank must be 0. 
Other labels should be ranked from small to large\n # from 1.\n if len(bg_label_rank_list) > 0:\n bg_label_rank = bg_label_rank_list[0]\n mapped_labels[mapped_labels < bg_label_rank] += 1\n mapped_labels[label.flat == bg_label] = 0\n else:\n mapped_labels += 1\n\n # Modify labels and color cycle so background color is used only once.\n color_cycle = itertools.cycle(colors)\n color_cycle = itertools.chain([bg_color], color_cycle)\n\n return mapped_labels, color_cycle\n\n\n@change_default_value(\"bg_label\", new_value=0, changed_version=\"0.19\")\ndef label2rgb(label, image=None, colors=None, alpha=0.3,\n bg_label=-1, bg_color=(0, 0, 0), image_alpha=1, kind='overlay'):\n \"\"\"Return an RGB image where color-coded labels are painted over the image.\n\n Parameters\n ----------\n label : array, shape (M, N)\n Integer array of labels with the same shape as `image`.\n image : array, shape (M, N, 3), optional\n Image used as underlay for labels. If the input is an RGB image, it's\n converted to grayscale before coloring.\n colors : list, optional\n List of colors. If the number of labels exceeds the number of colors,\n then the colors are cycled.\n alpha : float [0, 1], optional\n Opacity of colorized labels. Ignored if image is `None`.\n bg_label : int, optional\n Label that's treated as the background. If `bg_label` is specified,\n `bg_color` is `None`, and `kind` is `overlay`,\n background is not painted by any colors.\n bg_color : str or array, optional\n Background color. Must be a name in `color_dict` or RGB float values\n between [0, 1].\n image_alpha : float [0, 1], optional\n Opacity of the image.\n kind : string, one of {'overlay', 'avg'}\n The kind of color image desired. 'overlay' cycles over defined colors\n and overlays the colored labels over the original image. 'avg' replaces\n each labeled segment with its average color, for a stained-class or\n pastel painting appearance.\n\n Returns\n -------\n result : array of float, shape (M, N, 3)\n The result of blending a cycling colormap (`colors`) for each distinct\n value in `label` with the image, at a certain alpha value.\n \"\"\"\n if kind == 'overlay':\n return _label2rgb_overlay(label, image, colors, alpha, bg_label,\n bg_color, image_alpha)\n elif kind == 'avg':\n return _label2rgb_avg(label, image, bg_label, bg_color)\n else:\n raise ValueError(\"`kind` must be either 'overlay' or 'avg'.\")\n\n\ndef _label2rgb_overlay(label, image=None, colors=None, alpha=0.3,\n bg_label=-1, bg_color=None, image_alpha=1):\n \"\"\"Return an RGB image where color-coded labels are painted over the image.\n\n Parameters\n ----------\n label : array, shape (M, N)\n Integer array of labels with the same shape as `image`.\n image : array, shape (M, N, 3), optional\n Image used as underlay for labels. If the input is an RGB image, it's\n converted to grayscale before coloring.\n colors : list, optional\n List of colors. If the number of labels exceeds the number of colors,\n then the colors are cycled.\n alpha : float [0, 1], optional\n Opacity of colorized labels. Ignored if image is `None`.\n bg_label : int, optional\n Label that's treated as the background. If `bg_label` is specified and\n `bg_color` is `None`, background is not painted by any colors.\n bg_color : str or array, optional\n Background color. 
Must be a name in `color_dict` or RGB float values\n between [0, 1].\n image_alpha : float [0, 1], optional\n Opacity of the image.\n\n Returns\n -------\n result : array of float, shape (M, N, 3)\n The result of blending a cycling colormap (`colors`) for each distinct\n value in `label` with the image, at a certain alpha value.\n \"\"\"\n if colors is None:\n colors = DEFAULT_COLORS\n colors = [_rgb_vector(c) for c in colors]\n\n if image is None:\n image = np.zeros(label.shape + (3,), dtype=np.float64)\n # Opacity doesn't make sense if no image exists.\n alpha = 1\n else:\n if not image.shape[:2] == label.shape:\n raise ValueError(\"`image` and `label` must be the same shape\")\n\n if image.min() < 0:\n warn(\"Negative intensities in `image` are not supported\")\n\n if image.ndim > label.ndim:\n image = img_as_float(rgb2gray(image))\n else:\n image = img_as_float(image)\n image = gray2rgb(image) * image_alpha + (1 - image_alpha)\n\n # Ensure that all labels are non-negative so we can index into\n # `label_to_color` correctly.\n offset = min(label.min(), bg_label)\n if offset != 0:\n label = label - offset # Make sure you don't modify the input array.\n bg_label -= offset\n\n new_type = np.min_scalar_type(int(label.max()))\n if new_type == np.bool:\n new_type = np.uint8\n label = label.astype(new_type)\n\n mapped_labels_flat, color_cycle = _match_label_with_color(label, colors,\n bg_label, bg_color)\n\n if len(mapped_labels_flat) == 0:\n return image\n\n dense_labels = range(max(mapped_labels_flat) + 1)\n\n label_to_color = np.array([c for i, c in zip(dense_labels, color_cycle)])\n\n mapped_labels = label\n mapped_labels.flat = mapped_labels_flat\n result = label_to_color[mapped_labels] * alpha + image * (1 - alpha)\n\n # Remove background label if its color was not specified.\n remove_background = 0 in mapped_labels_flat and bg_color is None\n if remove_background:\n result[label == bg_label] = image[label == bg_label]\n\n return result\n\n\ndef _label2rgb_avg(label_field, image, bg_label=0, bg_color=(0, 0, 0)):\n \"\"\"Visualise each segment in `label_field` with its mean color in `image`.\n\n Parameters\n ----------\n label_field : array of int\n A segmentation of an image.\n image : array, shape ``label_field.shape + (3,)``\n A color image of the same spatial shape as `label_field`.\n bg_label : int, optional\n A value in `label_field` to be treated as background.\n bg_color : 3-tuple of int, optional\n The color for the background label\n\n Returns\n -------\n out : array, same shape and type as `image`\n The output visualization.\n \"\"\"\n out = np.zeros(label_field.shape + (3,))\n labels = np.unique(label_field)\n bg = (labels == bg_label)\n if bg.any():\n labels = labels[labels != bg_label]\n mask = (label_field == bg_label).nonzero()\n out[mask] = bg_color\n for label in labels:\n mask = (label_field == label).nonzero()\n color = image[mask].mean(axis=0)\n out[mask] = color\n return out\n", "path": "skimage/color/colorlabel.py"}]}
| 3,582 | 290 |
gh_patches_debug_15127
|
rasdani/github-patches
|
git_diff
|
gammapy__gammapy-3335
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cash statistics: Inconsistency between doc/code or p-value/significance
Gammapy 0.17, official
Hello
In the Cash count statistics description in the "Stats-Statistics" docs
https://docs.gammapy.org/0.17/stats/index.html#cash-counts-statistic
The text says that *the significance is the square root of TS ... which is equivalent to the Li&Ma significance for Li&Ma...*
The statement is true, but this is not what ```stat.significance``` actually returns.
As you know, the usual Li&Ma formula gives a significance which is "single-sided", i.e. one that corresponds to a ```1-p_value``` obtained by integrating the normal distribution from ```-Inf``` up to the significance value (which can be negative).
I have checked on the example that, starting from the p_value, the significance should be:
```
from scipy.stats import norm, chi2
p_value = 1 - chi2.cdf(stat.delta_ts, df=1)  # same value found: 7.365232895800901
sig_LiMa = norm.ppf(1 - p_value)
```
that gives ```2.475649754526797```,
whereas the result computed in the notebook is ```2.71389625737626```.
I checked that this corresponds to a ```p_value``` shared between both tails of the normal distribution:
```norm.ppf(1-p_value/2)```
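To make the two conventions concrete, here is a small self-contained check of the numbers above (a sketch that only reuses the quoted TS value; no gammapy object is needed):
```python
from scipy.stats import chi2, norm

ts = 7.365232895800901        # TS value quoted above

p = chi2.sf(ts, df=1)         # chi2 tail probability of TS with 1 degree of freedom
print(norm.isf(p))            # one-sided (Li&Ma-style) significance -> ~2.4756
print(norm.isf(p / 2))        # two-sided significance -> ~2.7139, i.e. sqrt(TS)
```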
As a consequence, if I am correct:
- the text is misleading because it mentions Li&Ma, and the situation should be explicit;
- the significance attribute should be documented as the "double-sided" significance.
However, I have checked that ```CashCountsStatistic``` can return negative significances, which in principle is not possible with a "double-sided" integration. So there might be another inconsistency somewhere...
Thanks!
P.S.: one should never rely on significances, only on p-values.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gammapy/stats/counts_statistic.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 import abc
3 import numpy as np
4 from scipy.optimize import brentq, newton
5 from scipy.stats import chi2
6 from .fit_statistics import cash, get_wstat_mu_bkg, wstat
7
8 __all__ = ["WStatCountsStatistic", "CashCountsStatistic"]
9
10
11 class CountsStatistic(abc.ABC):
12 @property
13 def ts(self):
14 """Return stat difference (TS) of measured excess versus no excess."""
15 # Remove (small) negative TS due to error in root finding
16 ts = np.clip(self.stat_null - self.stat_max, 0, None)
17 return ts
18
19 @property
20 def sqrt_ts(self):
21 """Return statistical significance of measured excess."""
22 return np.sign(self.n_sig) * np.sqrt(self.ts)
23
24 @property
25 def p_value(self):
26 """Return p_value of measured excess."""
27 return chi2.sf(self.ts, 1)
28
29 def compute_errn(self, n_sigma=1.0):
30 """Compute downward excess uncertainties.
31
32 Searches the signal value for which the test statistics is n_sigma**2 away from the maximum.
33
34 Parameters
35 ----------
36 n_sigma : float
37 Confidence level of the uncertainty expressed in number of sigma. Default is 1.
38 """
39 errn = np.zeros_like(self.n_on, dtype="float")
40 min_range = self.n_sig - 2 * n_sigma * (self.error + 1)
41
42 it = np.nditer(errn, flags=["multi_index"])
43 while not it.finished:
44 try:
45 res = brentq(
46 self._stat_fcn,
47 min_range[it.multi_index],
48 self.n_sig[it.multi_index],
49 args=(self.stat_max[it.multi_index] + n_sigma ** 2, it.multi_index),
50 )
51 errn[it.multi_index] = res - self.n_sig[it.multi_index]
52 except ValueError:
53 errn[it.multi_index] = -self.n_on[it.multi_index]
54 it.iternext()
55
56 return errn
57
58 def compute_errp(self, n_sigma=1):
59 """Compute upward excess uncertainties.
60
61 Searches the signal value for which the test statistics is n_sigma**2 away from the maximum.
62
63 Parameters
64 ----------
65 n_sigma : float
66 Confidence level of the uncertainty expressed in number of sigma. Default is 1.
67 """
68 errp = np.zeros_like(self.n_on, dtype="float")
69 max_range = self.n_sig + 2 * n_sigma * (self.error + 1)
70
71 it = np.nditer(errp, flags=["multi_index"])
72 while not it.finished:
73 errp[it.multi_index] = brentq(
74 self._stat_fcn,
75 self.n_sig[it.multi_index],
76 max_range[it.multi_index],
77 args=(self.stat_max[it.multi_index] + n_sigma ** 2, it.multi_index),
78 )
79 it.iternext()
80
81 return errp - self.n_sig
82
83 def compute_upper_limit(self, n_sigma=3):
84 """Compute upper limit on the signal.
85
86 Searches the signal value for which the test statistics is n_sigma**2 away from the maximum
87 or from 0 if the measured excess is negative.
88
89 Parameters
90 ----------
91 n_sigma : float
92 Confidence level of the upper limit expressed in number of sigma. Default is 3.
93 """
94 ul = np.zeros_like(self.n_on, dtype="float")
95
96 min_range = np.maximum(0, self.n_sig)
97 max_range = min_range + 2 * n_sigma * (self.error + 1)
98 it = np.nditer(ul, flags=["multi_index"])
99
100 while not it.finished:
101 TS_ref = self._stat_fcn(min_range[it.multi_index], 0.0, it.multi_index)
102
103 ul[it.multi_index] = brentq(
104 self._stat_fcn,
105 min_range[it.multi_index],
106 max_range[it.multi_index],
107 args=(TS_ref + n_sigma ** 2, it.multi_index),
108 )
109 it.iternext()
110
111 return ul
112
113 def n_sig_matching_significance(self, significance):
114 """Compute excess matching a given significance.
115
116 This function is the inverse of `significance`.
117
118 Parameters
119 ----------
120 significance : float
121 Significance
122
123 Returns
124 -------
125 n_sig : `numpy.ndarray`
126 Excess
127 """
128 n_sig = np.zeros_like(self.n_bkg, dtype="float")
129 it = np.nditer(n_sig, flags=["multi_index"])
130
131 while not it.finished:
132 try:
133 n_sig[it.multi_index] = newton(
134 self._n_sig_matching_significance_fcn,
135 np.sqrt(self.n_bkg[it.multi_index]) * significance,
136 args=(significance, it.multi_index),
137 )
138 except:
139 n_sig[it.multi_index] = np.nan
140
141 it.iternext()
142 return n_sig
143
144
145 class CashCountsStatistic(CountsStatistic):
146 """Class to compute statistics (significance, asymmetric errors , ul) for Poisson distributed variable
147 with known background.
148
149 Parameters
150 ----------
151 n_on : int
152 Measured counts
153 mu_bkg : float
154 Known level of background
155 """
156
157 def __init__(self, n_on, mu_bkg):
158 self.n_on = np.asanyarray(n_on)
159 self.mu_bkg = np.asanyarray(mu_bkg)
160
161 @property
162 def n_bkg(self):
163 """Expected background counts"""
164 return self.mu_bkg
165
166 @property
167 def n_sig(self):
168 """Excess"""
169 return self.n_on - self.n_bkg
170
171 @property
172 def error(self):
173 """Approximate error from the covariance matrix."""
174 return np.sqrt(self.n_on)
175
176 @property
177 def stat_null(self):
178 """Stat value for null hypothesis, i.e. 0 expected signal counts"""
179 return cash(self.n_on, self.mu_bkg + 0)
180
181 @property
182 def stat_max(self):
183 """Stat value for best fit hypothesis, i.e. expected signal mu = n_on - mu_bkg"""
184 return cash(self.n_on, self.n_on)
185
186 def _stat_fcn(self, mu, delta=0, index=None):
187 return cash(self.n_on[index], self.mu_bkg[index] + mu) - delta
188
189 def _n_sig_matching_significance_fcn(self, n_sig, significance, index):
190 TS0 = cash(n_sig + self.mu_bkg[index], self.mu_bkg[index])
191 TS1 = cash(n_sig + self.mu_bkg[index], self.mu_bkg[index] + n_sig)
192 return np.sign(n_sig) * np.sqrt(np.clip(TS0 - TS1, 0, None)) - significance
193
194
195 class WStatCountsStatistic(CountsStatistic):
196 """Class to compute statistics (significance, asymmetric errors , ul) for Poisson distributed variable
197 with unknown background.
198
199 Parameters
200 ----------
201 n_on : int
202 Measured counts in on region
203 n_off : int
204 Measured counts in off region
205 alpha : float
206 Acceptance ratio of on and off measurements
207 mu_sig : float
208 Expected signal counts in on region
209 """
210
211 def __init__(self, n_on, n_off, alpha, mu_sig=None):
212 self.n_on = np.asanyarray(n_on)
213 self.n_off = np.asanyarray(n_off)
214 self.alpha = np.asanyarray(alpha)
215 if mu_sig is None:
216 self.mu_sig = np.zeros_like(self.n_on)
217 else:
218 self.mu_sig = np.asanyarray(mu_sig)
219
220 @property
221 def n_bkg(self):
222 """Known background computed alpha * n_off"""
223 return self.alpha * self.n_off
224
225 @property
226 def n_sig(self):
227 """Excess """
228 return self.n_on - self.n_bkg - self.mu_sig
229
230 @property
231 def error(self):
232 """Approximate error from the covariance matrix."""
233 return np.sqrt(self.n_on + self.alpha ** 2 * self.n_off)
234
235 @property
236 def stat_null(self):
237 """Stat value for null hypothesis, i.e. mu_sig expected signal counts"""
238 return wstat(self.n_on, self.n_off, self.alpha, self.mu_sig)
239
240 @property
241 def stat_max(self):
242 """Stat value for best fit hypothesis, i.e. expected signal mu = n_on - alpha * n_off - mu_sig"""
243 return wstat(self.n_on, self.n_off, self.alpha, self.n_sig + self.mu_sig)
244
245 def _stat_fcn(self, mu, delta=0, index=None):
246 return (
247 wstat(
248 self.n_on[index],
249 self.n_off[index],
250 self.alpha[index],
251 (mu + self.mu_sig[index]),
252 )
253 - delta
254 )
255
256 def _n_sig_matching_significance_fcn(self, n_sig, significance, index):
257 stat0 = wstat(
258 n_sig + self.n_bkg[index], self.n_off[index], self.alpha[index], 0
259 )
260 stat1 = wstat(
261 n_sig + self.n_bkg[index], self.n_off[index], self.alpha[index], n_sig,
262 )
263 return np.sign(n_sig) * np.sqrt(np.clip(stat0 - stat1, 0, None)) - significance
264
```
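For orientation, a minimal usage sketch of the class defined above (illustrative counts; it assumes `CashCountsStatistic` is re-exported from `gammapy.stats` as declared in `__all__`):
```python
from gammapy.stats import CashCountsStatistic

stat = CashCountsStatistic(n_on=13, mu_bkg=5.5)
print(stat.ts)       # stat difference between the null and best-fit hypotheses
print(stat.sqrt_ts)  # signed square root of TS
print(stat.p_value)  # the chi2 tail probability discussed in the issue
```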
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gammapy/stats/counts_statistic.py b/gammapy/stats/counts_statistic.py
--- a/gammapy/stats/counts_statistic.py
+++ b/gammapy/stats/counts_statistic.py
@@ -18,13 +18,17 @@
@property
def sqrt_ts(self):
- """Return statistical significance of measured excess."""
+ """Return statistical significance of measured excess.
+ The sign of the excess is applied to distinguish positive and negative fluctuations.
+ """
return np.sign(self.n_sig) * np.sqrt(self.ts)
@property
def p_value(self):
- """Return p_value of measured excess."""
- return chi2.sf(self.ts, 1)
+ """Return p_value of measured excess.
+ Here the value accounts only for the positive excess significance (i.e. one-sided).
+ """
+ return 0.5*chi2.sf(self.ts, 1)
def compute_errn(self, n_sigma=1.0):
"""Compute downward excess uncertainties.
|
{"golden_diff": "diff --git a/gammapy/stats/counts_statistic.py b/gammapy/stats/counts_statistic.py\n--- a/gammapy/stats/counts_statistic.py\n+++ b/gammapy/stats/counts_statistic.py\n@@ -18,13 +18,17 @@\n \n @property\n def sqrt_ts(self):\n- \"\"\"Return statistical significance of measured excess.\"\"\"\n+ \"\"\"Return statistical significance of measured excess.\n+ The sign of the excess is applied to distinguish positive and negative fluctuations.\n+ \"\"\"\n return np.sign(self.n_sig) * np.sqrt(self.ts)\n \n @property\n def p_value(self):\n- \"\"\"Return p_value of measured excess.\"\"\"\n- return chi2.sf(self.ts, 1)\n+ \"\"\"Return p_value of measured excess.\n+ Here the value accounts only for the positive excess significance (i.e. one-sided).\n+ \"\"\"\n+ return 0.5*chi2.sf(self.ts, 1)\n \n def compute_errn(self, n_sigma=1.0):\n \"\"\"Compute downward excess uncertainties.\n", "issue": "Cash statistics : Inconsistency between doc/code or pvalue/siginificance\nGammpay 0.17, official\r\n\r\nHello\r\nIn the Cash count statistics description in the \"Stats-Statistics\" docs\r\nhttps://docs.gammapy.org/0.17/stats/index.html#cash-counts-statistic\r\n\r\nThe text says that *the significance is the square root of TS ... which is equivalent to the Li&Ma significance for Li&Ma...*\r\n\r\nThe statement is true but this is not what is in ```stat.signifcance```.\r\n\r\nAs you know the Li&Ma usual formula gives a significance which is \"single-sided\", i.e. that corresponds to a ```1-p_value``` obtained by the integration of the normal distribution from ```-Inf``` to the significance value (that can be negative). \r\n\r\nI have checked on the example that from the p_value the significance should be:\r\n```\r\nfrom scipy.stats import norm, Chi2\r\np_value = 1 - chi2.cdf(stat.delta_ts) # same value found : 7.365232895800901\r\nsig_LiMa = norm.ppf(1-p_value)\r\n```\r\nthat gives ``` 2.475649754526797```\r\n\r\nwhereas the results computed in the notebook is :```2.71389625737626``` \r\nI checked that this corresponds to a ``` p_value``` shared on both sides of the normal function:\r\n```norm.ppf(1-p_value/2)```\r\n\r\nAs a consequence, if I am correct:\r\n- the text is misleading because it mentions Li&Ma, and the situation should be explicit;\r\n- the significance attribute should be documented as the \"double-sided\" significance.\r\n\r\nHowever I have checked that ```CashCountsStatitics``` can return negative significances which in principle is not possible with a \"double-sided\" integration. 
So there might be another inconsistency somewhere....\r\n\r\nThanks!\r\n\r\nP.S.: one should never rely on siginiicances, only on p values.\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nimport abc\nimport numpy as np\nfrom scipy.optimize import brentq, newton\nfrom scipy.stats import chi2\nfrom .fit_statistics import cash, get_wstat_mu_bkg, wstat\n\n__all__ = [\"WStatCountsStatistic\", \"CashCountsStatistic\"]\n\n\nclass CountsStatistic(abc.ABC):\n @property\n def ts(self):\n \"\"\"Return stat difference (TS) of measured excess versus no excess.\"\"\"\n # Remove (small) negative TS due to error in root finding\n ts = np.clip(self.stat_null - self.stat_max, 0, None)\n return ts\n\n @property\n def sqrt_ts(self):\n \"\"\"Return statistical significance of measured excess.\"\"\"\n return np.sign(self.n_sig) * np.sqrt(self.ts)\n\n @property\n def p_value(self):\n \"\"\"Return p_value of measured excess.\"\"\"\n return chi2.sf(self.ts, 1)\n\n def compute_errn(self, n_sigma=1.0):\n \"\"\"Compute downward excess uncertainties.\n\n Searches the signal value for which the test statistics is n_sigma**2 away from the maximum.\n\n Parameters\n ----------\n n_sigma : float\n Confidence level of the uncertainty expressed in number of sigma. Default is 1.\n \"\"\"\n errn = np.zeros_like(self.n_on, dtype=\"float\")\n min_range = self.n_sig - 2 * n_sigma * (self.error + 1)\n\n it = np.nditer(errn, flags=[\"multi_index\"])\n while not it.finished:\n try:\n res = brentq(\n self._stat_fcn,\n min_range[it.multi_index],\n self.n_sig[it.multi_index],\n args=(self.stat_max[it.multi_index] + n_sigma ** 2, it.multi_index),\n )\n errn[it.multi_index] = res - self.n_sig[it.multi_index]\n except ValueError:\n errn[it.multi_index] = -self.n_on[it.multi_index]\n it.iternext()\n\n return errn\n\n def compute_errp(self, n_sigma=1):\n \"\"\"Compute upward excess uncertainties.\n\n Searches the signal value for which the test statistics is n_sigma**2 away from the maximum.\n\n Parameters\n ----------\n n_sigma : float\n Confidence level of the uncertainty expressed in number of sigma. Default is 1.\n \"\"\"\n errp = np.zeros_like(self.n_on, dtype=\"float\")\n max_range = self.n_sig + 2 * n_sigma * (self.error + 1)\n\n it = np.nditer(errp, flags=[\"multi_index\"])\n while not it.finished:\n errp[it.multi_index] = brentq(\n self._stat_fcn,\n self.n_sig[it.multi_index],\n max_range[it.multi_index],\n args=(self.stat_max[it.multi_index] + n_sigma ** 2, it.multi_index),\n )\n it.iternext()\n\n return errp - self.n_sig\n\n def compute_upper_limit(self, n_sigma=3):\n \"\"\"Compute upper limit on the signal.\n\n Searches the signal value for which the test statistics is n_sigma**2 away from the maximum\n or from 0 if the measured excess is negative.\n\n Parameters\n ----------\n n_sigma : float\n Confidence level of the upper limit expressed in number of sigma. 
Default is 3.\n \"\"\"\n ul = np.zeros_like(self.n_on, dtype=\"float\")\n\n min_range = np.maximum(0, self.n_sig)\n max_range = min_range + 2 * n_sigma * (self.error + 1)\n it = np.nditer(ul, flags=[\"multi_index\"])\n\n while not it.finished:\n TS_ref = self._stat_fcn(min_range[it.multi_index], 0.0, it.multi_index)\n\n ul[it.multi_index] = brentq(\n self._stat_fcn,\n min_range[it.multi_index],\n max_range[it.multi_index],\n args=(TS_ref + n_sigma ** 2, it.multi_index),\n )\n it.iternext()\n\n return ul\n\n def n_sig_matching_significance(self, significance):\n \"\"\"Compute excess matching a given significance.\n\n This function is the inverse of `significance`.\n\n Parameters\n ----------\n significance : float\n Significance\n\n Returns\n -------\n n_sig : `numpy.ndarray`\n Excess\n \"\"\"\n n_sig = np.zeros_like(self.n_bkg, dtype=\"float\")\n it = np.nditer(n_sig, flags=[\"multi_index\"])\n\n while not it.finished:\n try:\n n_sig[it.multi_index] = newton(\n self._n_sig_matching_significance_fcn,\n np.sqrt(self.n_bkg[it.multi_index]) * significance,\n args=(significance, it.multi_index),\n )\n except:\n n_sig[it.multi_index] = np.nan\n\n it.iternext()\n return n_sig\n\n\nclass CashCountsStatistic(CountsStatistic):\n \"\"\"Class to compute statistics (significance, asymmetric errors , ul) for Poisson distributed variable\n with known background.\n\n Parameters\n ----------\n n_on : int\n Measured counts\n mu_bkg : float\n Known level of background\n \"\"\"\n\n def __init__(self, n_on, mu_bkg):\n self.n_on = np.asanyarray(n_on)\n self.mu_bkg = np.asanyarray(mu_bkg)\n\n @property\n def n_bkg(self):\n \"\"\"Expected background counts\"\"\"\n return self.mu_bkg\n\n @property\n def n_sig(self):\n \"\"\"Excess\"\"\"\n return self.n_on - self.n_bkg\n\n @property\n def error(self):\n \"\"\"Approximate error from the covariance matrix.\"\"\"\n return np.sqrt(self.n_on)\n\n @property\n def stat_null(self):\n \"\"\"Stat value for null hypothesis, i.e. 0 expected signal counts\"\"\"\n return cash(self.n_on, self.mu_bkg + 0)\n\n @property\n def stat_max(self):\n \"\"\"Stat value for best fit hypothesis, i.e. 
expected signal mu = n_on - mu_bkg\"\"\"\n return cash(self.n_on, self.n_on)\n\n def _stat_fcn(self, mu, delta=0, index=None):\n return cash(self.n_on[index], self.mu_bkg[index] + mu) - delta\n\n def _n_sig_matching_significance_fcn(self, n_sig, significance, index):\n TS0 = cash(n_sig + self.mu_bkg[index], self.mu_bkg[index])\n TS1 = cash(n_sig + self.mu_bkg[index], self.mu_bkg[index] + n_sig)\n return np.sign(n_sig) * np.sqrt(np.clip(TS0 - TS1, 0, None)) - significance\n\n\nclass WStatCountsStatistic(CountsStatistic):\n \"\"\"Class to compute statistics (significance, asymmetric errors , ul) for Poisson distributed variable\n with unknown background.\n\n Parameters\n ----------\n n_on : int\n Measured counts in on region\n n_off : int\n Measured counts in off region\n alpha : float\n Acceptance ratio of on and off measurements\n mu_sig : float\n Expected signal counts in on region\n \"\"\"\n\n def __init__(self, n_on, n_off, alpha, mu_sig=None):\n self.n_on = np.asanyarray(n_on)\n self.n_off = np.asanyarray(n_off)\n self.alpha = np.asanyarray(alpha)\n if mu_sig is None:\n self.mu_sig = np.zeros_like(self.n_on)\n else:\n self.mu_sig = np.asanyarray(mu_sig)\n\n @property\n def n_bkg(self):\n \"\"\"Known background computed alpha * n_off\"\"\"\n return self.alpha * self.n_off\n\n @property\n def n_sig(self):\n \"\"\"Excess \"\"\"\n return self.n_on - self.n_bkg - self.mu_sig\n\n @property\n def error(self):\n \"\"\"Approximate error from the covariance matrix.\"\"\"\n return np.sqrt(self.n_on + self.alpha ** 2 * self.n_off)\n\n @property\n def stat_null(self):\n \"\"\"Stat value for null hypothesis, i.e. mu_sig expected signal counts\"\"\"\n return wstat(self.n_on, self.n_off, self.alpha, self.mu_sig)\n\n @property\n def stat_max(self):\n \"\"\"Stat value for best fit hypothesis, i.e. expected signal mu = n_on - alpha * n_off - mu_sig\"\"\"\n return wstat(self.n_on, self.n_off, self.alpha, self.n_sig + self.mu_sig)\n\n def _stat_fcn(self, mu, delta=0, index=None):\n return (\n wstat(\n self.n_on[index],\n self.n_off[index],\n self.alpha[index],\n (mu + self.mu_sig[index]),\n )\n - delta\n )\n\n def _n_sig_matching_significance_fcn(self, n_sig, significance, index):\n stat0 = wstat(\n n_sig + self.n_bkg[index], self.n_off[index], self.alpha[index], 0\n )\n stat1 = wstat(\n n_sig + self.n_bkg[index], self.n_off[index], self.alpha[index], n_sig,\n )\n return np.sign(n_sig) * np.sqrt(np.clip(stat0 - stat1, 0, None)) - significance\n", "path": "gammapy/stats/counts_statistic.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nimport abc\nimport numpy as np\nfrom scipy.optimize import brentq, newton\nfrom scipy.stats import chi2\nfrom .fit_statistics import cash, get_wstat_mu_bkg, wstat\n\n__all__ = [\"WStatCountsStatistic\", \"CashCountsStatistic\"]\n\n\nclass CountsStatistic(abc.ABC):\n @property\n def ts(self):\n \"\"\"Return stat difference (TS) of measured excess versus no excess.\"\"\"\n # Remove (small) negative TS due to error in root finding\n ts = np.clip(self.stat_null - self.stat_max, 0, None)\n return ts\n\n @property\n def sqrt_ts(self):\n \"\"\"Return statistical significance of measured excess.\n The sign of the excess is applied to distinguish positive and negative fluctuations.\n \"\"\"\n return np.sign(self.n_sig) * np.sqrt(self.ts)\n\n @property\n def p_value(self):\n \"\"\"Return p_value of measured excess.\n Here the value accounts only for the positive excess significance (i.e. 
one-sided).\n \"\"\"\n return 0.5*chi2.sf(self.ts, 1)\n\n def compute_errn(self, n_sigma=1.0):\n \"\"\"Compute downward excess uncertainties.\n\n Searches the signal value for which the test statistics is n_sigma**2 away from the maximum.\n\n Parameters\n ----------\n n_sigma : float\n Confidence level of the uncertainty expressed in number of sigma. Default is 1.\n \"\"\"\n errn = np.zeros_like(self.n_on, dtype=\"float\")\n min_range = self.n_sig - 2 * n_sigma * (self.error + 1)\n\n it = np.nditer(errn, flags=[\"multi_index\"])\n while not it.finished:\n try:\n res = brentq(\n self._stat_fcn,\n min_range[it.multi_index],\n self.n_sig[it.multi_index],\n args=(self.stat_max[it.multi_index] + n_sigma ** 2, it.multi_index),\n )\n errn[it.multi_index] = res - self.n_sig[it.multi_index]\n except ValueError:\n errn[it.multi_index] = -self.n_on[it.multi_index]\n it.iternext()\n\n return errn\n\n def compute_errp(self, n_sigma=1):\n \"\"\"Compute upward excess uncertainties.\n\n Searches the signal value for which the test statistics is n_sigma**2 away from the maximum.\n\n Parameters\n ----------\n n_sigma : float\n Confidence level of the uncertainty expressed in number of sigma. Default is 1.\n \"\"\"\n errp = np.zeros_like(self.n_on, dtype=\"float\")\n max_range = self.n_sig + 2 * n_sigma * (self.error + 1)\n\n it = np.nditer(errp, flags=[\"multi_index\"])\n while not it.finished:\n errp[it.multi_index] = brentq(\n self._stat_fcn,\n self.n_sig[it.multi_index],\n max_range[it.multi_index],\n args=(self.stat_max[it.multi_index] + n_sigma ** 2, it.multi_index),\n )\n it.iternext()\n\n return errp - self.n_sig\n\n def compute_upper_limit(self, n_sigma=3):\n \"\"\"Compute upper limit on the signal.\n\n Searches the signal value for which the test statistics is n_sigma**2 away from the maximum\n or from 0 if the measured excess is negative.\n\n Parameters\n ----------\n n_sigma : float\n Confidence level of the upper limit expressed in number of sigma. 
Default is 3.\n \"\"\"\n ul = np.zeros_like(self.n_on, dtype=\"float\")\n\n min_range = np.maximum(0, self.n_sig)\n max_range = min_range + 2 * n_sigma * (self.error + 1)\n it = np.nditer(ul, flags=[\"multi_index\"])\n\n while not it.finished:\n TS_ref = self._stat_fcn(min_range[it.multi_index], 0.0, it.multi_index)\n\n ul[it.multi_index] = brentq(\n self._stat_fcn,\n min_range[it.multi_index],\n max_range[it.multi_index],\n args=(TS_ref + n_sigma ** 2, it.multi_index),\n )\n it.iternext()\n\n return ul\n\n def n_sig_matching_significance(self, significance):\n \"\"\"Compute excess matching a given significance.\n\n This function is the inverse of `significance`.\n\n Parameters\n ----------\n significance : float\n Significance\n\n Returns\n -------\n n_sig : `numpy.ndarray`\n Excess\n \"\"\"\n n_sig = np.zeros_like(self.n_bkg, dtype=\"float\")\n it = np.nditer(n_sig, flags=[\"multi_index\"])\n\n while not it.finished:\n try:\n n_sig[it.multi_index] = newton(\n self._n_sig_matching_significance_fcn,\n np.sqrt(self.n_bkg[it.multi_index]) * significance,\n args=(significance, it.multi_index),\n )\n except:\n n_sig[it.multi_index] = np.nan\n\n it.iternext()\n return n_sig\n\n\nclass CashCountsStatistic(CountsStatistic):\n \"\"\"Class to compute statistics (significance, asymmetric errors , ul) for Poisson distributed variable\n with known background.\n\n Parameters\n ----------\n n_on : int\n Measured counts\n mu_bkg : float\n Known level of background\n \"\"\"\n\n def __init__(self, n_on, mu_bkg):\n self.n_on = np.asanyarray(n_on)\n self.mu_bkg = np.asanyarray(mu_bkg)\n\n @property\n def n_bkg(self):\n \"\"\"Expected background counts\"\"\"\n return self.mu_bkg\n\n @property\n def n_sig(self):\n \"\"\"Excess\"\"\"\n return self.n_on - self.n_bkg\n\n @property\n def error(self):\n \"\"\"Approximate error from the covariance matrix.\"\"\"\n return np.sqrt(self.n_on)\n\n @property\n def stat_null(self):\n \"\"\"Stat value for null hypothesis, i.e. 0 expected signal counts\"\"\"\n return cash(self.n_on, self.mu_bkg + 0)\n\n @property\n def stat_max(self):\n \"\"\"Stat value for best fit hypothesis, i.e. 
expected signal mu = n_on - mu_bkg\"\"\"\n return cash(self.n_on, self.n_on)\n\n def _stat_fcn(self, mu, delta=0, index=None):\n return cash(self.n_on[index], self.mu_bkg[index] + mu) - delta\n\n def _n_sig_matching_significance_fcn(self, n_sig, significance, index):\n TS0 = cash(n_sig + self.mu_bkg[index], self.mu_bkg[index])\n TS1 = cash(n_sig + self.mu_bkg[index], self.mu_bkg[index] + n_sig)\n return np.sign(n_sig) * np.sqrt(np.clip(TS0 - TS1, 0, None)) - significance\n\n\nclass WStatCountsStatistic(CountsStatistic):\n \"\"\"Class to compute statistics (significance, asymmetric errors , ul) for Poisson distributed variable\n with unknown background.\n\n Parameters\n ----------\n n_on : int\n Measured counts in on region\n n_off : int\n Measured counts in off region\n alpha : float\n Acceptance ratio of on and off measurements\n mu_sig : float\n Expected signal counts in on region\n \"\"\"\n\n def __init__(self, n_on, n_off, alpha, mu_sig=None):\n self.n_on = np.asanyarray(n_on)\n self.n_off = np.asanyarray(n_off)\n self.alpha = np.asanyarray(alpha)\n if mu_sig is None:\n self.mu_sig = np.zeros_like(self.n_on)\n else:\n self.mu_sig = np.asanyarray(mu_sig)\n\n @property\n def n_bkg(self):\n \"\"\"Known background computed alpha * n_off\"\"\"\n return self.alpha * self.n_off\n\n @property\n def n_sig(self):\n \"\"\"Excess \"\"\"\n return self.n_on - self.n_bkg - self.mu_sig\n\n @property\n def error(self):\n \"\"\"Approximate error from the covariance matrix.\"\"\"\n return np.sqrt(self.n_on + self.alpha ** 2 * self.n_off)\n\n @property\n def stat_null(self):\n \"\"\"Stat value for null hypothesis, i.e. mu_sig expected signal counts\"\"\"\n return wstat(self.n_on, self.n_off, self.alpha, self.mu_sig)\n\n @property\n def stat_max(self):\n \"\"\"Stat value for best fit hypothesis, i.e. expected signal mu = n_on - alpha * n_off - mu_sig\"\"\"\n return wstat(self.n_on, self.n_off, self.alpha, self.n_sig + self.mu_sig)\n\n def _stat_fcn(self, mu, delta=0, index=None):\n return (\n wstat(\n self.n_on[index],\n self.n_off[index],\n self.alpha[index],\n (mu + self.mu_sig[index]),\n )\n - delta\n )\n\n def _n_sig_matching_significance_fcn(self, n_sig, significance, index):\n stat0 = wstat(\n n_sig + self.n_bkg[index], self.n_off[index], self.alpha[index], 0\n )\n stat1 = wstat(\n n_sig + self.n_bkg[index], self.n_off[index], self.alpha[index], n_sig,\n )\n return np.sign(n_sig) * np.sqrt(np.clip(stat0 - stat1, 0, None)) - significance\n", "path": "gammapy/stats/counts_statistic.py"}]}
| 3,431 | 234 |
gh_patches_debug_9835
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-1968
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pre_prompt bug when generating from current folder
* Cookiecutter version: 2.4.0
* Template project url: N/A
* Python version: 3.10
* Operating System: Linux
### Description:
I just learned about the new pre_prompt in 2.4.0. It's exactly what I wanted; thanks for the new feature. When developing templates I sometimes run cookiecutter from the template directory itself: `cookiecutter .`. With a pre_prompt hook defined, this crashes with an error saying the temporary directory already exists:
```
Traceback (most recent call last):
File "/envs/cookiecutter/bin/cookiecutter", line 10, in <module>
sys.exit(main())
File "/envs/cookiecutter/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/envs/cookiecutter/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/envs/cookiecutter/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/envs/cookiecutter/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/envs/cookiecutter/lib/python3.10/site-packages/cookiecutter/cli.py", line 202, in main
cookiecutter(
File "/envs/cookiecutter/lib/python3.10/site-packages/cookiecutter/main.py", line 91, in cookiecutter
repo_dir = run_pre_prompt_hook(base_repo_dir) if accept_hooks else repo_dir
File "/envs/cookiecutter/lib/python3.10/site-packages/cookiecutter/hooks.py", line 179, in run_pre_prompt_hook
repo_dir = create_tmp_repo_dir(repo_dir)
File "/envs/cookiecutter/lib/python3.10/site-packages/cookiecutter/utils.py", line 128, in create_tmp_repo_dir
shutil.copytree(repo_dir, new_dir)
File "/envs/cookiecutter/lib/python3.10/shutil.py", line 559, in copytree
return _copytree(entries=entries, src=src, dst=dst, symlinks=symlinks,
File "/envs/cookiecutter/lib/python3.10/shutil.py", line 457, in _copytree
os.makedirs(dst, exist_ok=dirs_exist_ok)
File "/envs/cookiecutter/lib/python3.10/os.py", line 225, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/tmp/cookiecutter7yw9w4f5/'
```
It seems the temporary sub-directory name depends on the name of the repository directory: for "." the final path component is empty, so the copy target collapses to the temporary base directory itself, which already exists. If I use the absolute path of the current directory it works: `cookiecutter $(pwd)`. It feels like a possible fix is just to resolve the input path to an absolute path first. Thanks.
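A two-line check of that explanation (a sketch; the resolved name printed is simply whatever the current directory happens to be called):
```python
from pathlib import Path

print(repr(Path(".").name))            # '' -> new_dir becomes f"{base_dir}/", which already exists
print(repr(Path(".").resolve().name))  # e.g. 'my-template' -> a fresh sub-directory can be created
```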
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/utils.py`
Content:
```
1 """Helper functions used throughout Cookiecutter."""
2 import contextlib
3 import logging
4 import os
5 import shutil
6 import stat
7 import sys
8 import tempfile
9 from pathlib import Path
10
11 from jinja2.ext import Extension
12
13 from cookiecutter.prompt import read_user_yes_no
14
15 logger = logging.getLogger(__name__)
16
17
18 def force_delete(func, path, exc_info):
19 """Error handler for `shutil.rmtree()` equivalent to `rm -rf`.
20
21 Usage: `shutil.rmtree(path, onerror=force_delete)`
22 From https://docs.python.org/3/library/shutil.html#rmtree-example
23 """
24 os.chmod(path, stat.S_IWRITE)
25 func(path)
26
27
28 def rmtree(path):
29 """Remove a directory and all its contents. Like rm -rf on Unix.
30
31 :param path: A directory path.
32 """
33 shutil.rmtree(path, onerror=force_delete)
34
35
36 def make_sure_path_exists(path: "os.PathLike[str]") -> None:
37 """Ensure that a directory exists.
38
39 :param path: A directory tree path for creation.
40 """
41 logger.debug('Making sure path exists (creates tree if not exist): %s', path)
42 try:
43 Path(path).mkdir(parents=True, exist_ok=True)
44 except OSError as error:
45 raise OSError(f'Unable to create directory at {path}') from error
46
47
48 @contextlib.contextmanager
49 def work_in(dirname=None):
50 """Context manager version of os.chdir.
51
52 When exited, returns to the working directory prior to entering.
53 """
54 curdir = os.getcwd()
55 try:
56 if dirname is not None:
57 os.chdir(dirname)
58 yield
59 finally:
60 os.chdir(curdir)
61
62
63 def make_executable(script_path):
64 """Make `script_path` executable.
65
66 :param script_path: The file to change
67 """
68 status = os.stat(script_path)
69 os.chmod(script_path, status.st_mode | stat.S_IEXEC)
70
71
72 def prompt_and_delete(path, no_input=False):
73 """
74 Ask user if it's okay to delete the previously-downloaded file/directory.
75
76 If yes, delete it. If no, checks to see if the old version should be
77 reused. If yes, it's reused; otherwise, Cookiecutter exits.
78
79 :param path: Previously downloaded zipfile.
80 :param no_input: Suppress prompt to delete repo and just delete it.
81 :return: True if the content was deleted
82 """
83 # Suppress prompt if called via API
84 if no_input:
85 ok_to_delete = True
86 else:
87 question = (
88 f"You've downloaded {path} before. Is it okay to delete and re-download it?"
89 )
90
91 ok_to_delete = read_user_yes_no(question, 'yes')
92
93 if ok_to_delete:
94 if os.path.isdir(path):
95 rmtree(path)
96 else:
97 os.remove(path)
98 return True
99 else:
100 ok_to_reuse = read_user_yes_no(
101 "Do you want to re-use the existing version?", 'yes'
102 )
103
104 if ok_to_reuse:
105 return False
106
107 sys.exit()
108
109
110 def simple_filter(filter_function):
111 """Decorate a function to wrap it in a simplified jinja2 extension."""
112
113 class SimpleFilterExtension(Extension):
114 def __init__(self, environment):
115 super().__init__(environment)
116 environment.filters[filter_function.__name__] = filter_function
117
118 SimpleFilterExtension.__name__ = filter_function.__name__
119 return SimpleFilterExtension
120
121
122 def create_tmp_repo_dir(repo_dir: "os.PathLike[str]") -> Path:
123 """Create a temporary dir with a copy of the contents of repo_dir."""
124 repo_dir = Path(repo_dir) if isinstance(repo_dir, str) else repo_dir
125 base_dir = tempfile.mkdtemp(prefix='cookiecutter')
126 new_dir = f"{base_dir}/{repo_dir.name}"
127 logger.debug(f'Copying repo_dir from {repo_dir} to {new_dir}')
128 shutil.copytree(repo_dir, new_dir)
129 return Path(new_dir)
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cookiecutter/utils.py b/cookiecutter/utils.py
--- a/cookiecutter/utils.py
+++ b/cookiecutter/utils.py
@@ -121,7 +121,7 @@
def create_tmp_repo_dir(repo_dir: "os.PathLike[str]") -> Path:
"""Create a temporary dir with a copy of the contents of repo_dir."""
- repo_dir = Path(repo_dir) if isinstance(repo_dir, str) else repo_dir
+ repo_dir = Path(repo_dir).resolve()
base_dir = tempfile.mkdtemp(prefix='cookiecutter')
new_dir = f"{base_dir}/{repo_dir.name}"
logger.debug(f'Copying repo_dir from {repo_dir} to {new_dir}')
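With the patch applied, running from inside the template directory should work, since the relative path is resolved before its final component is used; a rough sketch of the call (note it copies the whole current directory into a fresh temporary folder):
```python
from pathlib import Path
from cookiecutter.utils import create_tmp_repo_dir

tmp_repo = create_tmp_repo_dir(Path("."))
print(tmp_repo)  # e.g. /tmp/cookiecutterXXXX/<current-directory-name>
```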
|
{"golden_diff": "diff --git a/cookiecutter/utils.py b/cookiecutter/utils.py\n--- a/cookiecutter/utils.py\n+++ b/cookiecutter/utils.py\n@@ -121,7 +121,7 @@\n \n def create_tmp_repo_dir(repo_dir: \"os.PathLike[str]\") -> Path:\n \"\"\"Create a temporary dir with a copy of the contents of repo_dir.\"\"\"\n- repo_dir = Path(repo_dir) if isinstance(repo_dir, str) else repo_dir\n+ repo_dir = Path(repo_dir).resolve()\n base_dir = tempfile.mkdtemp(prefix='cookiecutter')\n new_dir = f\"{base_dir}/{repo_dir.name}\"\n logger.debug(f'Copying repo_dir from {repo_dir} to {new_dir}')\n", "issue": "pre_prompt bug when generating from current folder\n* Cookiecutter version: 2.4.0\r\n* Template project url: N/A\r\n* Python version: 3.10\r\n* Operating System: Linux\r\n\r\n### Description:\r\n\r\nI just learned about the new pre_prompt in 2.4.0. It's exactly what I wanted. Thanks for the new feature. When I develop the templates sometimes I will just run the cookiecutter in the same directory: `cookiecutter .`. With pre_prompt defined it will crash with an error saying the temporary directory already exists:\r\n```\r\nTraceback (most recent call last):\r\n File \"/envs/cookiecutter/bin/cookiecutter\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/envs/cookiecutter/lib/python3.10/site-packages/click/core.py\", line 1157, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/envs/cookiecutter/lib/python3.10/site-packages/click/core.py\", line 1078, in main\r\n rv = self.invoke(ctx)\r\n File \"/envs/cookiecutter/lib/python3.10/site-packages/click/core.py\", line 1434, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/envs/cookiecutter/lib/python3.10/site-packages/click/core.py\", line 783, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/envs/cookiecutter/lib/python3.10/site-packages/cookiecutter/cli.py\", line 202, in main\r\n cookiecutter(\r\n File \"/envs/cookiecutter/lib/python3.10/site-packages/cookiecutter/main.py\", line 91, in cookiecutter\r\n repo_dir = run_pre_prompt_hook(base_repo_dir) if accept_hooks else repo_dir\r\n File \"/envs/cookiecutter/lib/python3.10/site-packages/cookiecutter/hooks.py\", line 179, in run_pre_prompt_hook\r\n repo_dir = create_tmp_repo_dir(repo_dir)\r\n File \"/envs/cookiecutter/lib/python3.10/site-packages/cookiecutter/utils.py\", line 128, in create_tmp_repo_dir\r\n shutil.copytree(repo_dir, new_dir)\r\n File \"/envs/cookiecutter/lib/python3.10/shutil.py\", line 559, in copytree\r\n return _copytree(entries=entries, src=src, dst=dst, symlinks=symlinks,\r\n File \"/envs/cookiecutter/lib/python3.10/shutil.py\", line 457, in _copytree\r\n os.makedirs(dst, exist_ok=dirs_exist_ok)\r\n File \"/envs/cookiecutter/lib/python3.10/os.py\", line 225, in makedirs\r\n mkdir(name, mode)\r\nFileExistsError: [Errno 17] File exists: '/tmp/cookiecutter7yw9w4f5/'\r\n```\r\n\r\nIt seems the temporary sub-directory name depends on the directory name of the repository, \".\" probably resolves to nothing and it will try to create the parent directory instead, so if I use the absolute path of the current directory it works: `cookiecutter $(pwd)`. It feels like a possible fix is just to compute the absolute path of the input path first. 
Thanks.\r\n\n", "before_files": [{"content": "\"\"\"Helper functions used throughout Cookiecutter.\"\"\"\nimport contextlib\nimport logging\nimport os\nimport shutil\nimport stat\nimport sys\nimport tempfile\nfrom pathlib import Path\n\nfrom jinja2.ext import Extension\n\nfrom cookiecutter.prompt import read_user_yes_no\n\nlogger = logging.getLogger(__name__)\n\n\ndef force_delete(func, path, exc_info):\n \"\"\"Error handler for `shutil.rmtree()` equivalent to `rm -rf`.\n\n Usage: `shutil.rmtree(path, onerror=force_delete)`\n From https://docs.python.org/3/library/shutil.html#rmtree-example\n \"\"\"\n os.chmod(path, stat.S_IWRITE)\n func(path)\n\n\ndef rmtree(path):\n \"\"\"Remove a directory and all its contents. Like rm -rf on Unix.\n\n :param path: A directory path.\n \"\"\"\n shutil.rmtree(path, onerror=force_delete)\n\n\ndef make_sure_path_exists(path: \"os.PathLike[str]\") -> None:\n \"\"\"Ensure that a directory exists.\n\n :param path: A directory tree path for creation.\n \"\"\"\n logger.debug('Making sure path exists (creates tree if not exist): %s', path)\n try:\n Path(path).mkdir(parents=True, exist_ok=True)\n except OSError as error:\n raise OSError(f'Unable to create directory at {path}') from error\n\n\[email protected]\ndef work_in(dirname=None):\n \"\"\"Context manager version of os.chdir.\n\n When exited, returns to the working directory prior to entering.\n \"\"\"\n curdir = os.getcwd()\n try:\n if dirname is not None:\n os.chdir(dirname)\n yield\n finally:\n os.chdir(curdir)\n\n\ndef make_executable(script_path):\n \"\"\"Make `script_path` executable.\n\n :param script_path: The file to change\n \"\"\"\n status = os.stat(script_path)\n os.chmod(script_path, status.st_mode | stat.S_IEXEC)\n\n\ndef prompt_and_delete(path, no_input=False):\n \"\"\"\n Ask user if it's okay to delete the previously-downloaded file/directory.\n\n If yes, delete it. If no, checks to see if the old version should be\n reused. If yes, it's reused; otherwise, Cookiecutter exits.\n\n :param path: Previously downloaded zipfile.\n :param no_input: Suppress prompt to delete repo and just delete it.\n :return: True if the content was deleted\n \"\"\"\n # Suppress prompt if called via API\n if no_input:\n ok_to_delete = True\n else:\n question = (\n f\"You've downloaded {path} before. 
Is it okay to delete and re-download it?\"\n )\n\n ok_to_delete = read_user_yes_no(question, 'yes')\n\n if ok_to_delete:\n if os.path.isdir(path):\n rmtree(path)\n else:\n os.remove(path)\n return True\n else:\n ok_to_reuse = read_user_yes_no(\n \"Do you want to re-use the existing version?\", 'yes'\n )\n\n if ok_to_reuse:\n return False\n\n sys.exit()\n\n\ndef simple_filter(filter_function):\n \"\"\"Decorate a function to wrap it in a simplified jinja2 extension.\"\"\"\n\n class SimpleFilterExtension(Extension):\n def __init__(self, environment):\n super().__init__(environment)\n environment.filters[filter_function.__name__] = filter_function\n\n SimpleFilterExtension.__name__ = filter_function.__name__\n return SimpleFilterExtension\n\n\ndef create_tmp_repo_dir(repo_dir: \"os.PathLike[str]\") -> Path:\n \"\"\"Create a temporary dir with a copy of the contents of repo_dir.\"\"\"\n repo_dir = Path(repo_dir) if isinstance(repo_dir, str) else repo_dir\n base_dir = tempfile.mkdtemp(prefix='cookiecutter')\n new_dir = f\"{base_dir}/{repo_dir.name}\"\n logger.debug(f'Copying repo_dir from {repo_dir} to {new_dir}')\n shutil.copytree(repo_dir, new_dir)\n return Path(new_dir)\n", "path": "cookiecutter/utils.py"}], "after_files": [{"content": "\"\"\"Helper functions used throughout Cookiecutter.\"\"\"\nimport contextlib\nimport logging\nimport os\nimport shutil\nimport stat\nimport sys\nimport tempfile\nfrom pathlib import Path\n\nfrom jinja2.ext import Extension\n\nfrom cookiecutter.prompt import read_user_yes_no\n\nlogger = logging.getLogger(__name__)\n\n\ndef force_delete(func, path, exc_info):\n \"\"\"Error handler for `shutil.rmtree()` equivalent to `rm -rf`.\n\n Usage: `shutil.rmtree(path, onerror=force_delete)`\n From https://docs.python.org/3/library/shutil.html#rmtree-example\n \"\"\"\n os.chmod(path, stat.S_IWRITE)\n func(path)\n\n\ndef rmtree(path):\n \"\"\"Remove a directory and all its contents. Like rm -rf on Unix.\n\n :param path: A directory path.\n \"\"\"\n shutil.rmtree(path, onerror=force_delete)\n\n\ndef make_sure_path_exists(path: \"os.PathLike[str]\") -> None:\n \"\"\"Ensure that a directory exists.\n\n :param path: A directory tree path for creation.\n \"\"\"\n logger.debug('Making sure path exists (creates tree if not exist): %s', path)\n try:\n Path(path).mkdir(parents=True, exist_ok=True)\n except OSError as error:\n raise OSError(f'Unable to create directory at {path}') from error\n\n\[email protected]\ndef work_in(dirname=None):\n \"\"\"Context manager version of os.chdir.\n\n When exited, returns to the working directory prior to entering.\n \"\"\"\n curdir = os.getcwd()\n try:\n if dirname is not None:\n os.chdir(dirname)\n yield\n finally:\n os.chdir(curdir)\n\n\ndef make_executable(script_path):\n \"\"\"Make `script_path` executable.\n\n :param script_path: The file to change\n \"\"\"\n status = os.stat(script_path)\n os.chmod(script_path, status.st_mode | stat.S_IEXEC)\n\n\ndef prompt_and_delete(path, no_input=False):\n \"\"\"\n Ask user if it's okay to delete the previously-downloaded file/directory.\n\n If yes, delete it. If no, checks to see if the old version should be\n reused. If yes, it's reused; otherwise, Cookiecutter exits.\n\n :param path: Previously downloaded zipfile.\n :param no_input: Suppress prompt to delete repo and just delete it.\n :return: True if the content was deleted\n \"\"\"\n # Suppress prompt if called via API\n if no_input:\n ok_to_delete = True\n else:\n question = (\n f\"You've downloaded {path} before. 
Is it okay to delete and re-download it?\"\n )\n\n ok_to_delete = read_user_yes_no(question, 'yes')\n\n if ok_to_delete:\n if os.path.isdir(path):\n rmtree(path)\n else:\n os.remove(path)\n return True\n else:\n ok_to_reuse = read_user_yes_no(\n \"Do you want to re-use the existing version?\", 'yes'\n )\n\n if ok_to_reuse:\n return False\n\n sys.exit()\n\n\ndef simple_filter(filter_function):\n \"\"\"Decorate a function to wrap it in a simplified jinja2 extension.\"\"\"\n\n class SimpleFilterExtension(Extension):\n def __init__(self, environment):\n super().__init__(environment)\n environment.filters[filter_function.__name__] = filter_function\n\n SimpleFilterExtension.__name__ = filter_function.__name__\n return SimpleFilterExtension\n\n\ndef create_tmp_repo_dir(repo_dir: \"os.PathLike[str]\") -> Path:\n \"\"\"Create a temporary dir with a copy of the contents of repo_dir.\"\"\"\n repo_dir = Path(repo_dir).resolve()\n base_dir = tempfile.mkdtemp(prefix='cookiecutter')\n new_dir = f\"{base_dir}/{repo_dir.name}\"\n logger.debug(f'Copying repo_dir from {repo_dir} to {new_dir}')\n shutil.copytree(repo_dir, new_dir)\n return Path(new_dir)\n", "path": "cookiecutter/utils.py"}]}
| 2,160 | 160 |
gh_patches_debug_545 | rasdani/github-patches | git_diff | pallets__click-1591 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
click.echo() raises UnsupportedOperation on Windows when using pytest capsys
### Actual behavior
When using `click.echo()` in a test function that uses the pytest `capsys` fixture, it raises `UnsupportedOperation` on Windows. It runs fine on Linux and macOS.
Here is a complete test module `tests/unit/test_error_click.py` that reproduces the issue:
```
import click
import pytest
def myfunc():
"""Function to be tested"""
click.echo('bla')
def test_myfunc(capsys):
myfunc()
stdout, stderr = capsys.readouterr()
assert stdout == 'bla\n'
```
Here is the failure on Windows (using Python 2.7):
```
$ pytest tests
. . .
_________________________________ test_myfunc _________________________________
capsys = <_pytest.capture.CaptureFixture object at 0x04865830>
def test_myfunc(capsys):
> myfunc()
tests\unit\test_error_click.py:15:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests\unit\test_error_click.py:11: in myfunc
click.echo('bla')
.tox\win64_py27_32\lib\site-packages\click\utils.py:230: in echo
file = _default_text_stdout()
.tox\win64_py27_32\lib\site-packages\click\_compat.py:760: in func
rv = wrapper_func()
.tox\win64_py27_32\lib\site-packages\click\_compat.py:256: in get_text_stdout
rv = _get_windows_console_stream(sys.stdout, encoding, errors)
.tox\win64_py27_32\lib\site-packages\click\_winconsole.py:356: in _get_windows_console_stream
and _is_console(f)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
f = <_pytest.compat.CaptureIO object at 0x049156F0>
def _is_console(f):
if not hasattr(f, "fileno"):
return False
try:
> fileno = f.fileno()
E UnsupportedOperation: fileno
.tox\win64_py27_32\lib\site-packages\click\_winconsole.py:343: UnsupportedOperation
```
This is from this Appveyor run on Python 2.7: https://ci.appveyor.com/project/KSchopmeyer/pywbemtools/builds/33566529/job/j22nqi2rq10f7uk2#L1463
It also happens on Python 3.8: https://ci.appveyor.com/project/KSchopmeyer/pywbemtools/builds/33566529/job/0mo646j7ck1yidhs#L1473
### Expected behavior
This should succeed on Windows as it does on Linux and macOS.
### Possible solutions
I think `UnsupportedOperation` should be tolerated in the call to `f.fileno()` in `_winconsole._is_console()`. This seems to be done in other people's code calling fileno(), too: https://www.programcreek.com/python/example/17474/io.UnsupportedOperation
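For illustration only (this sketch is not part of the original report), a more tolerant version of the check could look roughly like the following; it mirrors the `_is_console` helper shown in the files below and simply widens the `except` clause:

```python
# Sketch of a tolerant console check (Windows-only, modeled on click._winconsole).
import io
import msvcrt
from ctypes import byref, windll
from ctypes.wintypes import DWORD


def _is_console(f):
    if not hasattr(f, "fileno"):
        return False
    try:
        fileno = f.fileno()
    except (OSError, io.UnsupportedOperation):
        # pytest's CaptureIO defines fileno() but raises io.UnsupportedOperation.
        return False
    handle = msvcrt.get_osfhandle(fileno)
    return bool(windll.kernel32.GetConsoleMode(handle, byref(DWORD())))
```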
### Environment
* Python version: 2.7, 3.8
* Click version: 7.1.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/click/_winconsole.py`
Content:
```
1 # This module is based on the excellent work by Adam Bartoš who
2 # provided a lot of what went into the implementation here in
3 # the discussion to issue1602 in the Python bug tracker.
4 #
5 # There are some general differences in regards to how this works
6 # compared to the original patches as we do not need to patch
7 # the entire interpreter but just work in our little world of
8 # echo and prompt.
9 import ctypes
10 import io
11 import time
12 from ctypes import byref
13 from ctypes import c_char
14 from ctypes import c_char_p
15 from ctypes import c_int
16 from ctypes import c_ssize_t
17 from ctypes import c_ulong
18 from ctypes import c_void_p
19 from ctypes import POINTER
20 from ctypes import py_object
21 from ctypes import windll
22 from ctypes import WINFUNCTYPE
23 from ctypes.wintypes import DWORD
24 from ctypes.wintypes import HANDLE
25 from ctypes.wintypes import LPCWSTR
26 from ctypes.wintypes import LPWSTR
27
28 import msvcrt
29
30 from ._compat import _NonClosingTextIOWrapper
31
32 try:
33 from ctypes import pythonapi
34 except ImportError:
35 pythonapi = None
36 else:
37 PyObject_GetBuffer = pythonapi.PyObject_GetBuffer
38 PyBuffer_Release = pythonapi.PyBuffer_Release
39
40
41 c_ssize_p = POINTER(c_ssize_t)
42
43 kernel32 = windll.kernel32
44 GetStdHandle = kernel32.GetStdHandle
45 ReadConsoleW = kernel32.ReadConsoleW
46 WriteConsoleW = kernel32.WriteConsoleW
47 GetConsoleMode = kernel32.GetConsoleMode
48 GetLastError = kernel32.GetLastError
49 GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32))
50 CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))(
51 ("CommandLineToArgvW", windll.shell32)
52 )
53 LocalFree = WINFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p)(
54 ("LocalFree", windll.kernel32)
55 )
56
57
58 STDIN_HANDLE = GetStdHandle(-10)
59 STDOUT_HANDLE = GetStdHandle(-11)
60 STDERR_HANDLE = GetStdHandle(-12)
61
62
63 PyBUF_SIMPLE = 0
64 PyBUF_WRITABLE = 1
65
66 ERROR_SUCCESS = 0
67 ERROR_NOT_ENOUGH_MEMORY = 8
68 ERROR_OPERATION_ABORTED = 995
69
70 STDIN_FILENO = 0
71 STDOUT_FILENO = 1
72 STDERR_FILENO = 2
73
74 EOF = b"\x1a"
75 MAX_BYTES_WRITTEN = 32767
76
77
78 class Py_buffer(ctypes.Structure):
79 _fields_ = [
80 ("buf", c_void_p),
81 ("obj", py_object),
82 ("len", c_ssize_t),
83 ("itemsize", c_ssize_t),
84 ("readonly", c_int),
85 ("ndim", c_int),
86 ("format", c_char_p),
87 ("shape", c_ssize_p),
88 ("strides", c_ssize_p),
89 ("suboffsets", c_ssize_p),
90 ("internal", c_void_p),
91 ]
92
93
94 # On PyPy we cannot get buffers so our ability to operate here is
95 # severely limited.
96 if pythonapi is None:
97 get_buffer = None
98 else:
99
100 def get_buffer(obj, writable=False):
101 buf = Py_buffer()
102 flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE
103 PyObject_GetBuffer(py_object(obj), byref(buf), flags)
104 try:
105 buffer_type = c_char * buf.len
106 return buffer_type.from_address(buf.buf)
107 finally:
108 PyBuffer_Release(byref(buf))
109
110
111 class _WindowsConsoleRawIOBase(io.RawIOBase):
112 def __init__(self, handle):
113 self.handle = handle
114
115 def isatty(self):
116 io.RawIOBase.isatty(self)
117 return True
118
119
120 class _WindowsConsoleReader(_WindowsConsoleRawIOBase):
121 def readable(self):
122 return True
123
124 def readinto(self, b):
125 bytes_to_be_read = len(b)
126 if not bytes_to_be_read:
127 return 0
128 elif bytes_to_be_read % 2:
129 raise ValueError(
130 "cannot read odd number of bytes from UTF-16-LE encoded console"
131 )
132
133 buffer = get_buffer(b, writable=True)
134 code_units_to_be_read = bytes_to_be_read // 2
135 code_units_read = c_ulong()
136
137 rv = ReadConsoleW(
138 HANDLE(self.handle),
139 buffer,
140 code_units_to_be_read,
141 byref(code_units_read),
142 None,
143 )
144 if GetLastError() == ERROR_OPERATION_ABORTED:
145 # wait for KeyboardInterrupt
146 time.sleep(0.1)
147 if not rv:
148 raise OSError(f"Windows error: {GetLastError()}")
149
150 if buffer[0] == EOF:
151 return 0
152 return 2 * code_units_read.value
153
154
155 class _WindowsConsoleWriter(_WindowsConsoleRawIOBase):
156 def writable(self):
157 return True
158
159 @staticmethod
160 def _get_error_message(errno):
161 if errno == ERROR_SUCCESS:
162 return "ERROR_SUCCESS"
163 elif errno == ERROR_NOT_ENOUGH_MEMORY:
164 return "ERROR_NOT_ENOUGH_MEMORY"
165 return f"Windows error {errno}"
166
167 def write(self, b):
168 bytes_to_be_written = len(b)
169 buf = get_buffer(b)
170 code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2
171 code_units_written = c_ulong()
172
173 WriteConsoleW(
174 HANDLE(self.handle),
175 buf,
176 code_units_to_be_written,
177 byref(code_units_written),
178 None,
179 )
180 bytes_written = 2 * code_units_written.value
181
182 if bytes_written == 0 and bytes_to_be_written > 0:
183 raise OSError(self._get_error_message(GetLastError()))
184 return bytes_written
185
186
187 class ConsoleStream:
188 def __init__(self, text_stream, byte_stream):
189 self._text_stream = text_stream
190 self.buffer = byte_stream
191
192 @property
193 def name(self):
194 return self.buffer.name
195
196 def write(self, x):
197 if isinstance(x, str):
198 return self._text_stream.write(x)
199 try:
200 self.flush()
201 except Exception:
202 pass
203 return self.buffer.write(x)
204
205 def writelines(self, lines):
206 for line in lines:
207 self.write(line)
208
209 def __getattr__(self, name):
210 return getattr(self._text_stream, name)
211
212 def isatty(self):
213 return self.buffer.isatty()
214
215 def __repr__(self):
216 return f"<ConsoleStream name={self.name!r} encoding={self.encoding!r}>"
217
218
219 class WindowsChunkedWriter:
220 """
221 Wraps a stream (such as stdout), acting as a transparent proxy for all
222 attribute access apart from method 'write()' which we wrap to write in
223 limited chunks due to a Windows limitation on binary console streams.
224 """
225
226 def __init__(self, wrapped):
227 # double-underscore everything to prevent clashes with names of
228 # attributes on the wrapped stream object.
229 self.__wrapped = wrapped
230
231 def __getattr__(self, name):
232 return getattr(self.__wrapped, name)
233
234 def write(self, text):
235 total_to_write = len(text)
236 written = 0
237
238 while written < total_to_write:
239 to_write = min(total_to_write - written, MAX_BYTES_WRITTEN)
240 self.__wrapped.write(text[written : written + to_write])
241 written += to_write
242
243
244 def _get_text_stdin(buffer_stream):
245 text_stream = _NonClosingTextIOWrapper(
246 io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),
247 "utf-16-le",
248 "strict",
249 line_buffering=True,
250 )
251 return ConsoleStream(text_stream, buffer_stream)
252
253
254 def _get_text_stdout(buffer_stream):
255 text_stream = _NonClosingTextIOWrapper(
256 io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),
257 "utf-16-le",
258 "strict",
259 line_buffering=True,
260 )
261 return ConsoleStream(text_stream, buffer_stream)
262
263
264 def _get_text_stderr(buffer_stream):
265 text_stream = _NonClosingTextIOWrapper(
266 io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),
267 "utf-16-le",
268 "strict",
269 line_buffering=True,
270 )
271 return ConsoleStream(text_stream, buffer_stream)
272
273
274 _stream_factories = {
275 0: _get_text_stdin,
276 1: _get_text_stdout,
277 2: _get_text_stderr,
278 }
279
280
281 def _is_console(f):
282 if not hasattr(f, "fileno"):
283 return False
284
285 try:
286 fileno = f.fileno()
287 except OSError:
288 return False
289
290 handle = msvcrt.get_osfhandle(fileno)
291 return bool(GetConsoleMode(handle, byref(DWORD())))
292
293
294 def _get_windows_console_stream(f, encoding, errors):
295 if (
296 get_buffer is not None
297 and encoding in {"utf-16-le", None}
298 and errors in {"strict", None}
299 and _is_console(f)
300 ):
301 func = _stream_factories.get(f.fileno())
302 if func is not None:
303 f = getattr(f, "buffer", None)
304
305 if f is None:
306 return None
307
308 return func(f)
309
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/click/_winconsole.py b/src/click/_winconsole.py
--- a/src/click/_winconsole.py
+++ b/src/click/_winconsole.py
@@ -284,7 +284,7 @@
try:
fileno = f.fileno()
- except OSError:
+ except (OSError, io.UnsupportedOperation):
return False
handle = msvcrt.get_osfhandle(fileno)
|
{"golden_diff": "diff --git a/src/click/_winconsole.py b/src/click/_winconsole.py\n--- a/src/click/_winconsole.py\n+++ b/src/click/_winconsole.py\n@@ -284,7 +284,7 @@\n \n try:\n fileno = f.fileno()\n- except OSError:\n+ except (OSError, io.UnsupportedOperation):\n return False\n \n handle = msvcrt.get_osfhandle(fileno)\n", "issue": "click.echo() raises UnsupportedOperation on Windows when using pytest capsys\n### Actual behavior\r\n\r\nWhen using `click.echo()` in a test function that uses the pytest `capsys` fixture, it raises `UnsupportedOperation` on Windows. It runs fine on Linux and macOS.\r\n\r\nHere is a complete test module `tests/unit/test_error_click.py` that reproduces the issue:\r\n```\r\nimport click\r\nimport pytest\r\n\r\ndef myfunc():\r\n \"\"\"Function to be tested\"\"\"\r\n click.echo('bla')\r\n\r\ndef test_myfunc(capsys):\r\n myfunc()\r\n stdout, stderr = capsys.readouterr()\r\n assert stdout == 'bla\\n'\r\n```\r\n\r\nHere is the failure on Windows (using Python 2.7):\r\n```\r\n$ pytest tests\r\n. . .\r\n_________________________________ test_myfunc _________________________________\r\ncapsys = <_pytest.capture.CaptureFixture object at 0x04865830>\r\n def test_myfunc(capsys):\r\n> myfunc()\r\ntests\\unit\\test_error_click.py:15: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests\\unit\\test_error_click.py:11: in myfunc\r\n click.echo('bla')\r\n.tox\\win64_py27_32\\lib\\site-packages\\click\\utils.py:230: in echo\r\n file = _default_text_stdout()\r\n.tox\\win64_py27_32\\lib\\site-packages\\click\\_compat.py:760: in func\r\n rv = wrapper_func()\r\n.tox\\win64_py27_32\\lib\\site-packages\\click\\_compat.py:256: in get_text_stdout\r\n rv = _get_windows_console_stream(sys.stdout, encoding, errors)\r\n.tox\\win64_py27_32\\lib\\site-packages\\click\\_winconsole.py:356: in _get_windows_console_stream\r\n and _is_console(f)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\nf = <_pytest.compat.CaptureIO object at 0x049156F0>\r\n def _is_console(f):\r\n if not hasattr(f, \"fileno\"):\r\n return False\r\n \r\n try:\r\n> fileno = f.fileno()\r\nE UnsupportedOperation: fileno\r\n.tox\\win64_py27_32\\lib\\site-packages\\click\\_winconsole.py:343: UnsupportedOperation\r\n```\r\n\r\nThis is from this Appveyor run on Python 2.7: https://ci.appveyor.com/project/KSchopmeyer/pywbemtools/builds/33566529/job/j22nqi2rq10f7uk2#L1463\r\n\r\nIt also happens on Python 3.8: https://ci.appveyor.com/project/KSchopmeyer/pywbemtools/builds/33566529/job/0mo646j7ck1yidhs#L1473\r\n\r\n### Expected behavior\r\n\r\nThis should succeed on Windows as it does on Linux and macOS.\r\n\r\n### Possible solutions\r\n\r\nI think `UnsupportedOperation` should be tolerated in the call to `f.fileno()` in `_winconsole._is_console()`. 
This seems to be done in other people's code calling fileno(), too: https://www.programcreek.com/python/example/17474/io.UnsupportedOperation\r\n\r\n### Environment\r\n\r\n* Python version: 2.7, 3.8\r\n* Click version: 7.1.2\n", "before_files": [{"content": "# This module is based on the excellent work by Adam Barto\u0161 who\n# provided a lot of what went into the implementation here in\n# the discussion to issue1602 in the Python bug tracker.\n#\n# There are some general differences in regards to how this works\n# compared to the original patches as we do not need to patch\n# the entire interpreter but just work in our little world of\n# echo and prompt.\nimport ctypes\nimport io\nimport time\nfrom ctypes import byref\nfrom ctypes import c_char\nfrom ctypes import c_char_p\nfrom ctypes import c_int\nfrom ctypes import c_ssize_t\nfrom ctypes import c_ulong\nfrom ctypes import c_void_p\nfrom ctypes import POINTER\nfrom ctypes import py_object\nfrom ctypes import windll\nfrom ctypes import WINFUNCTYPE\nfrom ctypes.wintypes import DWORD\nfrom ctypes.wintypes import HANDLE\nfrom ctypes.wintypes import LPCWSTR\nfrom ctypes.wintypes import LPWSTR\n\nimport msvcrt\n\nfrom ._compat import _NonClosingTextIOWrapper\n\ntry:\n from ctypes import pythonapi\nexcept ImportError:\n pythonapi = None\nelse:\n PyObject_GetBuffer = pythonapi.PyObject_GetBuffer\n PyBuffer_Release = pythonapi.PyBuffer_Release\n\n\nc_ssize_p = POINTER(c_ssize_t)\n\nkernel32 = windll.kernel32\nGetStdHandle = kernel32.GetStdHandle\nReadConsoleW = kernel32.ReadConsoleW\nWriteConsoleW = kernel32.WriteConsoleW\nGetConsoleMode = kernel32.GetConsoleMode\nGetLastError = kernel32.GetLastError\nGetCommandLineW = WINFUNCTYPE(LPWSTR)((\"GetCommandLineW\", windll.kernel32))\nCommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))(\n (\"CommandLineToArgvW\", windll.shell32)\n)\nLocalFree = WINFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p)(\n (\"LocalFree\", windll.kernel32)\n)\n\n\nSTDIN_HANDLE = GetStdHandle(-10)\nSTDOUT_HANDLE = GetStdHandle(-11)\nSTDERR_HANDLE = GetStdHandle(-12)\n\n\nPyBUF_SIMPLE = 0\nPyBUF_WRITABLE = 1\n\nERROR_SUCCESS = 0\nERROR_NOT_ENOUGH_MEMORY = 8\nERROR_OPERATION_ABORTED = 995\n\nSTDIN_FILENO = 0\nSTDOUT_FILENO = 1\nSTDERR_FILENO = 2\n\nEOF = b\"\\x1a\"\nMAX_BYTES_WRITTEN = 32767\n\n\nclass Py_buffer(ctypes.Structure):\n _fields_ = [\n (\"buf\", c_void_p),\n (\"obj\", py_object),\n (\"len\", c_ssize_t),\n (\"itemsize\", c_ssize_t),\n (\"readonly\", c_int),\n (\"ndim\", c_int),\n (\"format\", c_char_p),\n (\"shape\", c_ssize_p),\n (\"strides\", c_ssize_p),\n (\"suboffsets\", c_ssize_p),\n (\"internal\", c_void_p),\n ]\n\n\n# On PyPy we cannot get buffers so our ability to operate here is\n# severely limited.\nif pythonapi is None:\n get_buffer = None\nelse:\n\n def get_buffer(obj, writable=False):\n buf = Py_buffer()\n flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE\n PyObject_GetBuffer(py_object(obj), byref(buf), flags)\n try:\n buffer_type = c_char * buf.len\n return buffer_type.from_address(buf.buf)\n finally:\n PyBuffer_Release(byref(buf))\n\n\nclass _WindowsConsoleRawIOBase(io.RawIOBase):\n def __init__(self, handle):\n self.handle = handle\n\n def isatty(self):\n io.RawIOBase.isatty(self)\n return True\n\n\nclass _WindowsConsoleReader(_WindowsConsoleRawIOBase):\n def readable(self):\n return True\n\n def readinto(self, b):\n bytes_to_be_read = len(b)\n if not bytes_to_be_read:\n return 0\n elif bytes_to_be_read % 2:\n raise ValueError(\n \"cannot read odd number of bytes from UTF-16-LE 
encoded console\"\n )\n\n buffer = get_buffer(b, writable=True)\n code_units_to_be_read = bytes_to_be_read // 2\n code_units_read = c_ulong()\n\n rv = ReadConsoleW(\n HANDLE(self.handle),\n buffer,\n code_units_to_be_read,\n byref(code_units_read),\n None,\n )\n if GetLastError() == ERROR_OPERATION_ABORTED:\n # wait for KeyboardInterrupt\n time.sleep(0.1)\n if not rv:\n raise OSError(f\"Windows error: {GetLastError()}\")\n\n if buffer[0] == EOF:\n return 0\n return 2 * code_units_read.value\n\n\nclass _WindowsConsoleWriter(_WindowsConsoleRawIOBase):\n def writable(self):\n return True\n\n @staticmethod\n def _get_error_message(errno):\n if errno == ERROR_SUCCESS:\n return \"ERROR_SUCCESS\"\n elif errno == ERROR_NOT_ENOUGH_MEMORY:\n return \"ERROR_NOT_ENOUGH_MEMORY\"\n return f\"Windows error {errno}\"\n\n def write(self, b):\n bytes_to_be_written = len(b)\n buf = get_buffer(b)\n code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2\n code_units_written = c_ulong()\n\n WriteConsoleW(\n HANDLE(self.handle),\n buf,\n code_units_to_be_written,\n byref(code_units_written),\n None,\n )\n bytes_written = 2 * code_units_written.value\n\n if bytes_written == 0 and bytes_to_be_written > 0:\n raise OSError(self._get_error_message(GetLastError()))\n return bytes_written\n\n\nclass ConsoleStream:\n def __init__(self, text_stream, byte_stream):\n self._text_stream = text_stream\n self.buffer = byte_stream\n\n @property\n def name(self):\n return self.buffer.name\n\n def write(self, x):\n if isinstance(x, str):\n return self._text_stream.write(x)\n try:\n self.flush()\n except Exception:\n pass\n return self.buffer.write(x)\n\n def writelines(self, lines):\n for line in lines:\n self.write(line)\n\n def __getattr__(self, name):\n return getattr(self._text_stream, name)\n\n def isatty(self):\n return self.buffer.isatty()\n\n def __repr__(self):\n return f\"<ConsoleStream name={self.name!r} encoding={self.encoding!r}>\"\n\n\nclass WindowsChunkedWriter:\n \"\"\"\n Wraps a stream (such as stdout), acting as a transparent proxy for all\n attribute access apart from method 'write()' which we wrap to write in\n limited chunks due to a Windows limitation on binary console streams.\n \"\"\"\n\n def __init__(self, wrapped):\n # double-underscore everything to prevent clashes with names of\n # attributes on the wrapped stream object.\n self.__wrapped = wrapped\n\n def __getattr__(self, name):\n return getattr(self.__wrapped, name)\n\n def write(self, text):\n total_to_write = len(text)\n written = 0\n\n while written < total_to_write:\n to_write = min(total_to_write - written, MAX_BYTES_WRITTEN)\n self.__wrapped.write(text[written : written + to_write])\n written += to_write\n\n\ndef _get_text_stdin(buffer_stream):\n text_stream = _NonClosingTextIOWrapper(\n io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),\n \"utf-16-le\",\n \"strict\",\n line_buffering=True,\n )\n return ConsoleStream(text_stream, buffer_stream)\n\n\ndef _get_text_stdout(buffer_stream):\n text_stream = _NonClosingTextIOWrapper(\n io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),\n \"utf-16-le\",\n \"strict\",\n line_buffering=True,\n )\n return ConsoleStream(text_stream, buffer_stream)\n\n\ndef _get_text_stderr(buffer_stream):\n text_stream = _NonClosingTextIOWrapper(\n io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),\n \"utf-16-le\",\n \"strict\",\n line_buffering=True,\n )\n return ConsoleStream(text_stream, buffer_stream)\n\n\n_stream_factories = {\n 0: _get_text_stdin,\n 1: _get_text_stdout,\n 2: 
_get_text_stderr,\n}\n\n\ndef _is_console(f):\n if not hasattr(f, \"fileno\"):\n return False\n\n try:\n fileno = f.fileno()\n except OSError:\n return False\n\n handle = msvcrt.get_osfhandle(fileno)\n return bool(GetConsoleMode(handle, byref(DWORD())))\n\n\ndef _get_windows_console_stream(f, encoding, errors):\n if (\n get_buffer is not None\n and encoding in {\"utf-16-le\", None}\n and errors in {\"strict\", None}\n and _is_console(f)\n ):\n func = _stream_factories.get(f.fileno())\n if func is not None:\n f = getattr(f, \"buffer\", None)\n\n if f is None:\n return None\n\n return func(f)\n", "path": "src/click/_winconsole.py"}], "after_files": [{"content": "# This module is based on the excellent work by Adam Barto\u0161 who\n# provided a lot of what went into the implementation here in\n# the discussion to issue1602 in the Python bug tracker.\n#\n# There are some general differences in regards to how this works\n# compared to the original patches as we do not need to patch\n# the entire interpreter but just work in our little world of\n# echo and prompt.\nimport ctypes\nimport io\nimport time\nfrom ctypes import byref\nfrom ctypes import c_char\nfrom ctypes import c_char_p\nfrom ctypes import c_int\nfrom ctypes import c_ssize_t\nfrom ctypes import c_ulong\nfrom ctypes import c_void_p\nfrom ctypes import POINTER\nfrom ctypes import py_object\nfrom ctypes import windll\nfrom ctypes import WINFUNCTYPE\nfrom ctypes.wintypes import DWORD\nfrom ctypes.wintypes import HANDLE\nfrom ctypes.wintypes import LPCWSTR\nfrom ctypes.wintypes import LPWSTR\n\nimport msvcrt\n\nfrom ._compat import _NonClosingTextIOWrapper\n\ntry:\n from ctypes import pythonapi\nexcept ImportError:\n pythonapi = None\nelse:\n PyObject_GetBuffer = pythonapi.PyObject_GetBuffer\n PyBuffer_Release = pythonapi.PyBuffer_Release\n\n\nc_ssize_p = POINTER(c_ssize_t)\n\nkernel32 = windll.kernel32\nGetStdHandle = kernel32.GetStdHandle\nReadConsoleW = kernel32.ReadConsoleW\nWriteConsoleW = kernel32.WriteConsoleW\nGetConsoleMode = kernel32.GetConsoleMode\nGetLastError = kernel32.GetLastError\nGetCommandLineW = WINFUNCTYPE(LPWSTR)((\"GetCommandLineW\", windll.kernel32))\nCommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))(\n (\"CommandLineToArgvW\", windll.shell32)\n)\nLocalFree = WINFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p)(\n (\"LocalFree\", windll.kernel32)\n)\n\n\nSTDIN_HANDLE = GetStdHandle(-10)\nSTDOUT_HANDLE = GetStdHandle(-11)\nSTDERR_HANDLE = GetStdHandle(-12)\n\n\nPyBUF_SIMPLE = 0\nPyBUF_WRITABLE = 1\n\nERROR_SUCCESS = 0\nERROR_NOT_ENOUGH_MEMORY = 8\nERROR_OPERATION_ABORTED = 995\n\nSTDIN_FILENO = 0\nSTDOUT_FILENO = 1\nSTDERR_FILENO = 2\n\nEOF = b\"\\x1a\"\nMAX_BYTES_WRITTEN = 32767\n\n\nclass Py_buffer(ctypes.Structure):\n _fields_ = [\n (\"buf\", c_void_p),\n (\"obj\", py_object),\n (\"len\", c_ssize_t),\n (\"itemsize\", c_ssize_t),\n (\"readonly\", c_int),\n (\"ndim\", c_int),\n (\"format\", c_char_p),\n (\"shape\", c_ssize_p),\n (\"strides\", c_ssize_p),\n (\"suboffsets\", c_ssize_p),\n (\"internal\", c_void_p),\n ]\n\n\n# On PyPy we cannot get buffers so our ability to operate here is\n# severely limited.\nif pythonapi is None:\n get_buffer = None\nelse:\n\n def get_buffer(obj, writable=False):\n buf = Py_buffer()\n flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE\n PyObject_GetBuffer(py_object(obj), byref(buf), flags)\n try:\n buffer_type = c_char * buf.len\n return buffer_type.from_address(buf.buf)\n finally:\n PyBuffer_Release(byref(buf))\n\n\nclass 
_WindowsConsoleRawIOBase(io.RawIOBase):\n def __init__(self, handle):\n self.handle = handle\n\n def isatty(self):\n io.RawIOBase.isatty(self)\n return True\n\n\nclass _WindowsConsoleReader(_WindowsConsoleRawIOBase):\n def readable(self):\n return True\n\n def readinto(self, b):\n bytes_to_be_read = len(b)\n if not bytes_to_be_read:\n return 0\n elif bytes_to_be_read % 2:\n raise ValueError(\n \"cannot read odd number of bytes from UTF-16-LE encoded console\"\n )\n\n buffer = get_buffer(b, writable=True)\n code_units_to_be_read = bytes_to_be_read // 2\n code_units_read = c_ulong()\n\n rv = ReadConsoleW(\n HANDLE(self.handle),\n buffer,\n code_units_to_be_read,\n byref(code_units_read),\n None,\n )\n if GetLastError() == ERROR_OPERATION_ABORTED:\n # wait for KeyboardInterrupt\n time.sleep(0.1)\n if not rv:\n raise OSError(f\"Windows error: {GetLastError()}\")\n\n if buffer[0] == EOF:\n return 0\n return 2 * code_units_read.value\n\n\nclass _WindowsConsoleWriter(_WindowsConsoleRawIOBase):\n def writable(self):\n return True\n\n @staticmethod\n def _get_error_message(errno):\n if errno == ERROR_SUCCESS:\n return \"ERROR_SUCCESS\"\n elif errno == ERROR_NOT_ENOUGH_MEMORY:\n return \"ERROR_NOT_ENOUGH_MEMORY\"\n return f\"Windows error {errno}\"\n\n def write(self, b):\n bytes_to_be_written = len(b)\n buf = get_buffer(b)\n code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2\n code_units_written = c_ulong()\n\n WriteConsoleW(\n HANDLE(self.handle),\n buf,\n code_units_to_be_written,\n byref(code_units_written),\n None,\n )\n bytes_written = 2 * code_units_written.value\n\n if bytes_written == 0 and bytes_to_be_written > 0:\n raise OSError(self._get_error_message(GetLastError()))\n return bytes_written\n\n\nclass ConsoleStream:\n def __init__(self, text_stream, byte_stream):\n self._text_stream = text_stream\n self.buffer = byte_stream\n\n @property\n def name(self):\n return self.buffer.name\n\n def write(self, x):\n if isinstance(x, str):\n return self._text_stream.write(x)\n try:\n self.flush()\n except Exception:\n pass\n return self.buffer.write(x)\n\n def writelines(self, lines):\n for line in lines:\n self.write(line)\n\n def __getattr__(self, name):\n return getattr(self._text_stream, name)\n\n def isatty(self):\n return self.buffer.isatty()\n\n def __repr__(self):\n return f\"<ConsoleStream name={self.name!r} encoding={self.encoding!r}>\"\n\n\nclass WindowsChunkedWriter:\n \"\"\"\n Wraps a stream (such as stdout), acting as a transparent proxy for all\n attribute access apart from method 'write()' which we wrap to write in\n limited chunks due to a Windows limitation on binary console streams.\n \"\"\"\n\n def __init__(self, wrapped):\n # double-underscore everything to prevent clashes with names of\n # attributes on the wrapped stream object.\n self.__wrapped = wrapped\n\n def __getattr__(self, name):\n return getattr(self.__wrapped, name)\n\n def write(self, text):\n total_to_write = len(text)\n written = 0\n\n while written < total_to_write:\n to_write = min(total_to_write - written, MAX_BYTES_WRITTEN)\n self.__wrapped.write(text[written : written + to_write])\n written += to_write\n\n\ndef _get_text_stdin(buffer_stream):\n text_stream = _NonClosingTextIOWrapper(\n io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),\n \"utf-16-le\",\n \"strict\",\n line_buffering=True,\n )\n return ConsoleStream(text_stream, buffer_stream)\n\n\ndef _get_text_stdout(buffer_stream):\n text_stream = _NonClosingTextIOWrapper(\n 
io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),\n \"utf-16-le\",\n \"strict\",\n line_buffering=True,\n )\n return ConsoleStream(text_stream, buffer_stream)\n\n\ndef _get_text_stderr(buffer_stream):\n text_stream = _NonClosingTextIOWrapper(\n io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),\n \"utf-16-le\",\n \"strict\",\n line_buffering=True,\n )\n return ConsoleStream(text_stream, buffer_stream)\n\n\n_stream_factories = {\n 0: _get_text_stdin,\n 1: _get_text_stdout,\n 2: _get_text_stderr,\n}\n\n\ndef _is_console(f):\n if not hasattr(f, \"fileno\"):\n return False\n\n try:\n fileno = f.fileno()\n except (OSError, io.UnsupportedOperation):\n return False\n\n handle = msvcrt.get_osfhandle(fileno)\n return bool(GetConsoleMode(handle, byref(DWORD())))\n\n\ndef _get_windows_console_stream(f, encoding, errors):\n if (\n get_buffer is not None\n and encoding in {\"utf-16-le\", None}\n and errors in {\"strict\", None}\n and _is_console(f)\n ):\n func = _stream_factories.get(f.fileno())\n if func is not None:\n f = getattr(f, \"buffer\", None)\n\n if f is None:\n return None\n\n return func(f)\n", "path": "src/click/_winconsole.py"}]}
| 3,914 | 99 |
gh_patches_debug_59130 | rasdani/github-patches | git_diff | keras-team__keras-core-348 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
keras.layers.CenterCrop raises AttributeError when passed a list of images
With tf.keras, the following code works, but not in Keras-Core:
```python
import keras_core as keras
import numpy as np
images = [
np.random.rand(100, 100, 3),
np.random.rand(100, 100, 3),
]
keras.layers.CenterCrop(height=60, width=50)(images) #!!! AttributeError
```
Full stacktrace below:
<details>
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[21], line 8
2 import numpy as np
4 images = [
5 np.random.rand(100, 100, 3),
6 np.random.rand(100, 100, 3),
7 ]
----> 8 keras.layers.CenterCrop(height=60, width=50)(images)
File ~/opt/miniconda3/envs/kerascore/lib/python3.8/site-packages/keras_core/src/layers/preprocessing/tf_data_layer.py:36, in TFDataLayer.__call__(self, inputs, **kwargs)
34 self._convert_input_args = True
35 return outputs
---> 36 return super().__call__(inputs, **kwargs)
File ~/opt/miniconda3/envs/kerascore/lib/python3.8/site-packages/keras_core/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras_core.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File ~/opt/miniconda3/envs/kerascore/lib/python3.8/site-packages/keras_core/src/layers/preprocessing/center_crop.py:59, in CenterCrop.call(self, inputs)
57 init_width = inputs.shape[-1]
58 else:
---> 59 init_height = inputs.shape[-3]
60 init_width = inputs.shape[-2]
62 if init_height is None or init_width is None:
63 # Dynamic size case. TODO.
AttributeError: Exception encountered when calling CenterCrop.call().
'list' object has no attribute 'shape'
Arguments received by CenterCrop.call():
• inputs=['jnp.ndarray(shape=(100, 100, 3), dtype=float32)', 'jnp.ndarray(shape=(100, 100, 3), dtype=float32)']
```
</details>
A simple workaround is to stack the images:
```python
keras.layers.CenterCrop(height=60, width=50)(np.stack(images))
```
Not sure this can be considered a bug, but it's one of those little differences that may make porting code from Keras 2.x to 3.0 a bit harder.
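For illustration only (not part of the original report; NumPy stands in for the Keras backend here): the error arises because a plain Python list has no `.shape`, so anything that first turns the input into a single array or tensor — stacking it, or casting it to the layer's compute dtype — gives `CenterCrop.call()` the attribute it reads:

```python
# Hypothetical sketch: a list of images has no .shape; one stacked array does.
import numpy as np

images = [np.random.rand(100, 100, 3), np.random.rand(100, 100, 3)]

batch = np.asarray(images, dtype="float32")  # shape (2, 100, 100, 3)
print(hasattr(images, "shape"), batch.shape)  # False (2, 100, 100, 3)
```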
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras_core/layers/preprocessing/center_crop.py`
Content:
```
1 from keras_core import backend
2 from keras_core.api_export import keras_core_export
3 from keras_core.layers.preprocessing.tf_data_layer import TFDataLayer
4 from keras_core.utils import image_utils
5
6
7 @keras_core_export("keras_core.layers.CenterCrop")
8 class CenterCrop(TFDataLayer):
9 """A preprocessing layer which crops images.
10
11 This layers crops the central portion of the images to a target size. If an
12 image is smaller than the target size, it will be resized and cropped
13 so as to return the largest possible window in the image that matches
14 the target aspect ratio.
15
16 Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`).
17
18 Input shape:
19 3D (unbatched) or 4D (batched) tensor with shape:
20 `(..., height, width, channels)`, in `"channels_last"` format,
21 or `(..., channels, height, width)`, in `"channels_first"` format.
22
23 Output shape:
24 3D (unbatched) or 4D (batched) tensor with shape:
25 `(..., target_height, target_width, channels)`,
26 or `(..., channels, target_height, target_width)`,
27 in `"channels_first"` format.
28
29 If the input height/width is even and the target height/width is odd (or
30 inversely), the input image is left-padded by 1 pixel.
31
32 **Note:** This layer is safe to use inside a `tf.data` pipeline
33 (independently of which backend you're using).
34
35 Args:
36 height: Integer, the height of the output shape.
37 width: Integer, the width of the output shape.
38 data_format: string, either `"channels_last"` or `"channels_first"`.
39 The ordering of the dimensions in the inputs. `"channels_last"`
40 corresponds to inputs with shape `(batch, height, width, channels)`
41 while `"channels_first"` corresponds to inputs with shape
42 `(batch, channels, height, width)`. It defaults to the
43 `image_data_format` value found in your Keras config file at
44 `~/.keras/keras.json`. If you never set it, then it will be
45 `"channels_last"`.
46 """
47
48 def __init__(self, height, width, data_format=None, **kwargs):
49 super().__init__(**kwargs)
50 self.height = height
51 self.width = width
52 self.data_format = backend.standardize_data_format(data_format)
53
54 def call(self, inputs):
55 if self.data_format == "channels_first":
56 init_height = inputs.shape[-2]
57 init_width = inputs.shape[-1]
58 else:
59 init_height = inputs.shape[-3]
60 init_width = inputs.shape[-2]
61
62 if init_height is None or init_width is None:
63 # Dynamic size case. TODO.
64 raise ValueError(
65 "At this time, CenterCrop can only "
66 "process images with a static spatial "
67 f"shape. Received: inputs.shape={inputs.shape}"
68 )
69
70 h_diff = init_height - self.height
71 w_diff = init_width - self.width
72
73 h_start = int(h_diff / 2)
74 w_start = int(w_diff / 2)
75
76 if h_diff >= 0 and w_diff >= 0:
77 if len(inputs.shape) == 4:
78 if self.data_format == "channels_first":
79 return inputs[
80 :,
81 :,
82 h_start : h_start + self.height,
83 w_start : w_start + self.width,
84 ]
85 return inputs[
86 :,
87 h_start : h_start + self.height,
88 w_start : w_start + self.width,
89 :,
90 ]
91 elif len(inputs.shape) == 3:
92 if self.data_format == "channels_first":
93 return inputs[
94 :,
95 h_start : h_start + self.height,
96 w_start : w_start + self.width,
97 ]
98 return inputs[
99 h_start : h_start + self.height,
100 w_start : w_start + self.width,
101 :,
102 ]
103
104 return image_utils.smart_resize(
105 inputs,
106 [self.height, self.width],
107 data_format=self.data_format,
108 backend_module=self.backend,
109 )
110
111 def compute_output_shape(self, input_shape):
112 input_shape = list(input_shape)
113 if len(input_shape) == 4:
114 if self.data_format == "channels_last":
115 input_shape[1] = self.height
116 input_shape[2] = self.width
117 else:
118 input_shape[2] = self.height
119 input_shape[3] = self.width
120 else:
121 if self.data_format == "channels_last":
122 input_shape[0] = self.height
123 input_shape[1] = self.width
124 else:
125 input_shape[1] = self.height
126 input_shape[2] = self.width
127 return tuple(input_shape)
128
129 def get_config(self):
130 base_config = super().get_config()
131 config = {
132 "height": self.height,
133 "width": self.width,
134 "data_format": self.data_format,
135 }
136 return {**base_config, **config}
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/keras_core/layers/preprocessing/center_crop.py b/keras_core/layers/preprocessing/center_crop.py
--- a/keras_core/layers/preprocessing/center_crop.py
+++ b/keras_core/layers/preprocessing/center_crop.py
@@ -52,6 +52,7 @@
self.data_format = backend.standardize_data_format(data_format)
def call(self, inputs):
+ inputs = self.backend.cast(inputs, self.compute_dtype)
if self.data_format == "channels_first":
init_height = inputs.shape[-2]
init_width = inputs.shape[-1]
|
{"golden_diff": "diff --git a/keras_core/layers/preprocessing/center_crop.py b/keras_core/layers/preprocessing/center_crop.py\n--- a/keras_core/layers/preprocessing/center_crop.py\n+++ b/keras_core/layers/preprocessing/center_crop.py\n@@ -52,6 +52,7 @@\n self.data_format = backend.standardize_data_format(data_format)\n \n def call(self, inputs):\n+ inputs = self.backend.cast(inputs, self.compute_dtype)\n if self.data_format == \"channels_first\":\n init_height = inputs.shape[-2]\n init_width = inputs.shape[-1]\n", "issue": "keras.layers.CenterCrop raises AttributeError when passed a list of images\nWith tf.keras, the following code works, but not in Keras-Core:\r\n\r\n```python\r\nimport keras_core as keras\r\nimport numpy as np\r\n\r\nimages = [\r\n np.random.rand(100, 100, 3),\r\n np.random.rand(100, 100, 3),\r\n]\r\nkeras.layers.CenterCrop(height=60, width=50)(images) #!!! AttributeError\r\n```\r\n\r\nFull stacktrace below:\r\n\r\n<details>\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In[21], line 8\r\n 2 import numpy as np\r\n 4 images = [\r\n 5 np.random.rand(100, 100, 3),\r\n 6 np.random.rand(100, 100, 3),\r\n 7 ]\r\n----> 8 keras.layers.CenterCrop(height=60, width=50)(images)\r\n\r\nFile ~/opt/miniconda3/envs/kerascore/lib/python3.8/site-packages/keras_core/src/layers/preprocessing/tf_data_layer.py:36, in TFDataLayer.__call__(self, inputs, **kwargs)\r\n 34 self._convert_input_args = True\r\n 35 return outputs\r\n---> 36 return super().__call__(inputs, **kwargs)\r\n\r\nFile ~/opt/miniconda3/envs/kerascore/lib/python3.8/site-packages/keras_core/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)\r\n 119 filtered_tb = _process_traceback_frames(e.__traceback__)\r\n 120 # To get the full stack trace, call:\r\n 121 # `keras_core.config.disable_traceback_filtering()`\r\n--> 122 raise e.with_traceback(filtered_tb) from None\r\n 123 finally:\r\n 124 del filtered_tb\r\n\r\nFile ~/opt/miniconda3/envs/kerascore/lib/python3.8/site-packages/keras_core/src/layers/preprocessing/center_crop.py:59, in CenterCrop.call(self, inputs)\r\n 57 init_width = inputs.shape[-1]\r\n 58 else:\r\n---> 59 init_height = inputs.shape[-3]\r\n 60 init_width = inputs.shape[-2]\r\n 62 if init_height is None or init_width is None:\r\n 63 # Dynamic size case. TODO.\r\n\r\nAttributeError: Exception encountered when calling CenterCrop.call().\r\n\r\n'list' object has no attribute 'shape'\r\n\r\nArguments received by CenterCrop.call():\r\n \u2022 inputs=['jnp.ndarray(shape=(100, 100, 3), dtype=float32)', 'jnp.ndarray(shape=(100, 100, 3), dtype=float32)']\r\n```\r\n\r\n</details>\r\n\r\nA simple workaround is to stack the images:\r\n\r\n```python\r\nkeras.layers.CenterCrop(height=60, width=50)(np.stack(images))\r\n```\r\n\r\nNot sure this can be considered a bug, but it's one of those little differences that may porting code from Keras 2.x to 3.0 a bit harder.\n", "before_files": [{"content": "from keras_core import backend\nfrom keras_core.api_export import keras_core_export\nfrom keras_core.layers.preprocessing.tf_data_layer import TFDataLayer\nfrom keras_core.utils import image_utils\n\n\n@keras_core_export(\"keras_core.layers.CenterCrop\")\nclass CenterCrop(TFDataLayer):\n \"\"\"A preprocessing layer which crops images.\n\n This layers crops the central portion of the images to a target size. 
If an\n image is smaller than the target size, it will be resized and cropped\n so as to return the largest possible window in the image that matches\n the target aspect ratio.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`).\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format,\n or `(..., channels, height, width)`, in `\"channels_first\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., target_height, target_width, channels)`,\n or `(..., channels, target_height, target_width)`,\n in `\"channels_first\"` format.\n\n If the input height/width is even and the target height/width is odd (or\n inversely), the input image is left-padded by 1 pixel.\n\n **Note:** This layer is safe to use inside a `tf.data` pipeline\n (independently of which backend you're using).\n\n Args:\n height: Integer, the height of the output shape.\n width: Integer, the width of the output shape.\n data_format: string, either `\"channels_last\"` or `\"channels_first\"`.\n The ordering of the dimensions in the inputs. `\"channels_last\"`\n corresponds to inputs with shape `(batch, height, width, channels)`\n while `\"channels_first\"` corresponds to inputs with shape\n `(batch, channels, height, width)`. It defaults to the\n `image_data_format` value found in your Keras config file at\n `~/.keras/keras.json`. If you never set it, then it will be\n `\"channels_last\"`.\n \"\"\"\n\n def __init__(self, height, width, data_format=None, **kwargs):\n super().__init__(**kwargs)\n self.height = height\n self.width = width\n self.data_format = backend.standardize_data_format(data_format)\n\n def call(self, inputs):\n if self.data_format == \"channels_first\":\n init_height = inputs.shape[-2]\n init_width = inputs.shape[-1]\n else:\n init_height = inputs.shape[-3]\n init_width = inputs.shape[-2]\n\n if init_height is None or init_width is None:\n # Dynamic size case. TODO.\n raise ValueError(\n \"At this time, CenterCrop can only \"\n \"process images with a static spatial \"\n f\"shape. 
Received: inputs.shape={inputs.shape}\"\n )\n\n h_diff = init_height - self.height\n w_diff = init_width - self.width\n\n h_start = int(h_diff / 2)\n w_start = int(w_diff / 2)\n\n if h_diff >= 0 and w_diff >= 0:\n if len(inputs.shape) == 4:\n if self.data_format == \"channels_first\":\n return inputs[\n :,\n :,\n h_start : h_start + self.height,\n w_start : w_start + self.width,\n ]\n return inputs[\n :,\n h_start : h_start + self.height,\n w_start : w_start + self.width,\n :,\n ]\n elif len(inputs.shape) == 3:\n if self.data_format == \"channels_first\":\n return inputs[\n :,\n h_start : h_start + self.height,\n w_start : w_start + self.width,\n ]\n return inputs[\n h_start : h_start + self.height,\n w_start : w_start + self.width,\n :,\n ]\n\n return image_utils.smart_resize(\n inputs,\n [self.height, self.width],\n data_format=self.data_format,\n backend_module=self.backend,\n )\n\n def compute_output_shape(self, input_shape):\n input_shape = list(input_shape)\n if len(input_shape) == 4:\n if self.data_format == \"channels_last\":\n input_shape[1] = self.height\n input_shape[2] = self.width\n else:\n input_shape[2] = self.height\n input_shape[3] = self.width\n else:\n if self.data_format == \"channels_last\":\n input_shape[0] = self.height\n input_shape[1] = self.width\n else:\n input_shape[1] = self.height\n input_shape[2] = self.width\n return tuple(input_shape)\n\n def get_config(self):\n base_config = super().get_config()\n config = {\n \"height\": self.height,\n \"width\": self.width,\n \"data_format\": self.data_format,\n }\n return {**base_config, **config}\n", "path": "keras_core/layers/preprocessing/center_crop.py"}], "after_files": [{"content": "from keras_core import backend\nfrom keras_core.api_export import keras_core_export\nfrom keras_core.layers.preprocessing.tf_data_layer import TFDataLayer\nfrom keras_core.utils import image_utils\n\n\n@keras_core_export(\"keras_core.layers.CenterCrop\")\nclass CenterCrop(TFDataLayer):\n \"\"\"A preprocessing layer which crops images.\n\n This layers crops the central portion of the images to a target size. If an\n image is smaller than the target size, it will be resized and cropped\n so as to return the largest possible window in the image that matches\n the target aspect ratio.\n\n Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`).\n\n Input shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., height, width, channels)`, in `\"channels_last\"` format,\n or `(..., channels, height, width)`, in `\"channels_first\"` format.\n\n Output shape:\n 3D (unbatched) or 4D (batched) tensor with shape:\n `(..., target_height, target_width, channels)`,\n or `(..., channels, target_height, target_width)`,\n in `\"channels_first\"` format.\n\n If the input height/width is even and the target height/width is odd (or\n inversely), the input image is left-padded by 1 pixel.\n\n **Note:** This layer is safe to use inside a `tf.data` pipeline\n (independently of which backend you're using).\n\n Args:\n height: Integer, the height of the output shape.\n width: Integer, the width of the output shape.\n data_format: string, either `\"channels_last\"` or `\"channels_first\"`.\n The ordering of the dimensions in the inputs. `\"channels_last\"`\n corresponds to inputs with shape `(batch, height, width, channels)`\n while `\"channels_first\"` corresponds to inputs with shape\n `(batch, channels, height, width)`. It defaults to the\n `image_data_format` value found in your Keras config file at\n `~/.keras/keras.json`. 
If you never set it, then it will be\n `\"channels_last\"`.\n \"\"\"\n\n def __init__(self, height, width, data_format=None, **kwargs):\n super().__init__(**kwargs)\n self.height = height\n self.width = width\n self.data_format = backend.standardize_data_format(data_format)\n\n def call(self, inputs):\n inputs = self.backend.cast(inputs, self.compute_dtype)\n if self.data_format == \"channels_first\":\n init_height = inputs.shape[-2]\n init_width = inputs.shape[-1]\n else:\n init_height = inputs.shape[-3]\n init_width = inputs.shape[-2]\n\n if init_height is None or init_width is None:\n # Dynamic size case. TODO.\n raise ValueError(\n \"At this time, CenterCrop can only \"\n \"process images with a static spatial \"\n f\"shape. Received: inputs.shape={inputs.shape}\"\n )\n\n h_diff = init_height - self.height\n w_diff = init_width - self.width\n\n h_start = int(h_diff / 2)\n w_start = int(w_diff / 2)\n\n if h_diff >= 0 and w_diff >= 0:\n if len(inputs.shape) == 4:\n if self.data_format == \"channels_first\":\n return inputs[\n :,\n :,\n h_start : h_start + self.height,\n w_start : w_start + self.width,\n ]\n return inputs[\n :,\n h_start : h_start + self.height,\n w_start : w_start + self.width,\n :,\n ]\n elif len(inputs.shape) == 3:\n if self.data_format == \"channels_first\":\n return inputs[\n :,\n h_start : h_start + self.height,\n w_start : w_start + self.width,\n ]\n return inputs[\n h_start : h_start + self.height,\n w_start : w_start + self.width,\n :,\n ]\n\n return image_utils.smart_resize(\n inputs,\n [self.height, self.width],\n data_format=self.data_format,\n backend_module=self.backend,\n )\n\n def compute_output_shape(self, input_shape):\n input_shape = list(input_shape)\n if len(input_shape) == 4:\n if self.data_format == \"channels_last\":\n input_shape[1] = self.height\n input_shape[2] = self.width\n else:\n input_shape[2] = self.height\n input_shape[3] = self.width\n else:\n if self.data_format == \"channels_last\":\n input_shape[0] = self.height\n input_shape[1] = self.width\n else:\n input_shape[1] = self.height\n input_shape[2] = self.width\n return tuple(input_shape)\n\n def get_config(self):\n base_config = super().get_config()\n config = {\n \"height\": self.height,\n \"width\": self.width,\n \"data_format\": self.data_format,\n }\n return {**base_config, **config}\n", "path": "keras_core/layers/preprocessing/center_crop.py"}]}
| 2,405 | 134 |
gh_patches_debug_17398 | rasdani/github-patches | git_diff | pyro-ppl__numpyro-1469 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Leak when running examples with JAX_CHECK_TRACER_LEAKS=1
Hi
when I run ar2.py from your examples (or bnn.py; I did not try the others) with the environment variable JAX_CHECK_TRACER_LEAKS=1, they fail. (I had to use that variable to try to find an issue with a function I had written.)
the exception raised is
Exception: Leaked level MainTrace(1,DynamicJaxprTrace). Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].
here is the complete log:
Traceback (most recent call last):
File "/home/ffranco/Downloads/ar2.py", line 138, in <module>
main(args)
File "/home/ffranco/Downloads/ar2.py", line 117, in main
run_inference(model, args, rng_key, y)
File "/home/ffranco/Downloads/ar2.py", line 96, in run_inference
mcmc.run(rng_key, y=y)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py", line 593, in run
states_flat, last_state = partial_map_fn(map_args)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py", line 386, in _single_chain_mcmc
model_kwargs=kwargs,
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py", line 707, in init
rng_key_init_model, model_args, model_kwargs, init_params
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py", line 659, in _init_state
forward_mode_differentiation=self._forward_mode_differentiation,
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py", line 606, in initialize_model
) = _get_model_transforms(substituted_model, model_args, model_kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py", line 404, in _get_model_transforms
model_trace = trace(model).get_trace(*model_args, **model_kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/Downloads/ar2.py", line 67, in ar2_scan
scan(transition, init, timesteps)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py", line 438, in scan
msg = apply_stack(initial_msg)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 53, in apply_stack
default_process_message(msg)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 28, in default_process_message
msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py", line 306, in scan_wrapper
body_fn, wrapped_carry, xs, length=length, reverse=reverse
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/traceback_util.py", line 162, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py", line 1345, in scan
init_flat, carry_avals, carry_avals_out, init_tree, *rest = _create_jaxpr(init)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py", line 1332, in _create_jaxpr
f, in_tree, carry_avals + x_avals, "scan")
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/util.py", line 185, in wrapper
return f(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py", line 78, in _initial_style_jaxpr
fun, in_tree, in_avals, primitive_name)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/util.py", line 185, in wrapper
return f(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py", line 71, in _initial_style_open_jaxpr
jaxpr, _, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals, debug)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/interpreters/partial_eval.py", line 1511, in trace_to_jaxpr_dynamic
del main, fun
File "/home/ffranco/anaconda3/lib/python3.7/contextlib.py", line 119, in __exit__
next(self.gen)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/core.py", line 810, in new_main
raise Exception(f'Leaked level {t()}. Leaked tracer(s): {leaked_tracers}.')
jax._src.traceback_util.UnfilteredStackTrace: Exception: Leaked level MainTrace(1,DynamicJaxprTrace). Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ffranco/Downloads/ar2.py", line 138, in <module>
main(args)
File "/home/ffranco/Downloads/ar2.py", line 117, in main
run_inference(model, args, rng_key, y)
File "/home/ffranco/Downloads/ar2.py", line 96, in run_inference
mcmc.run(rng_key, y=y)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py", line 593, in run
states_flat, last_state = partial_map_fn(map_args)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py", line 386, in _single_chain_mcmc
model_kwargs=kwargs,
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py", line 707, in init
rng_key_init_model, model_args, model_kwargs, init_params
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py", line 659, in _init_state
forward_mode_differentiation=self._forward_mode_differentiation,
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py", line 606, in initialize_model
) = _get_model_transforms(substituted_model, model_args, model_kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py", line 404, in _get_model_transforms
model_trace = trace(model).get_trace(*model_args, **model_kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/ffranco/Downloads/ar2.py", line 67, in ar2_scan
scan(transition, init, timesteps)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py", line 438, in scan
msg = apply_stack(initial_msg)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 53, in apply_stack
default_process_message(msg)
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py", line 28, in default_process_message
msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
File "/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py", line 306, in scan_wrapper
body_fn, wrapped_carry, xs, length=length, reverse=reverse
File "/home/ffranco/anaconda3/lib/python3.7/contextlib.py", line 119, in __exit__
next(self.gen)
Exception: Leaked level MainTrace(1,DynamicJaxprTrace). Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpyro/ops/pytree.py`
Content:
```
1 # Copyright Contributors to the Pyro project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from jax.tree_util import register_pytree_node_class
5
6
7 @register_pytree_node_class
8 class PytreeTrace:
9 def __init__(self, trace):
10 self.trace = trace
11
12 def tree_flatten(self):
13 trace, aux_trace = {}, {}
14 for name, site in self.trace.items():
15 if site["type"] in ["sample", "deterministic", "plate", "param"]:
16 trace[name], aux_trace[name] = {}, {"_control_flow_done": True}
17 for key in site:
18 if key == "fn":
19 if site["type"] == "sample":
20 trace[name][key] = site[key]
21 elif site["type"] == "plate":
22 aux_trace[name][key] = site[key]
23 elif key in ["args", "value", "intermediates"]:
24 trace[name][key] = site[key]
25 # scanned sites have stop field because we trace them inside a block handler
26 elif key != "stop":
27 if key == "kwargs":
28 kwargs = site["kwargs"].copy()
29 if "rng_key" in kwargs:
30 # rng_key is not traced else it is collected by the
31 # scan primitive which doesn't make sense
32 # set to None to avoid leaks during tracing by JAX
33 kwargs["rng_key"] = None
34 aux_trace[name][key] = kwargs
35 else:
36 aux_trace[name][key] = site[key]
37 # keep the site order information because in JAX, flatten and unflatten do not preserve
38 # the order of keys in a dict
39 site_names = list(trace.keys())
40 return (trace,), (aux_trace, site_names)
41
42 @classmethod
43 def tree_unflatten(cls, aux_data, children):
44 aux_trace, site_names = aux_data
45 (trace,) = children
46 trace_with_aux = {}
47 for name in site_names:
48 trace[name].update(aux_trace[name])
49 trace_with_aux[name] = trace[name]
50 return cls(trace_with_aux)
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/numpyro/ops/pytree.py b/numpyro/ops/pytree.py
--- a/numpyro/ops/pytree.py
+++ b/numpyro/ops/pytree.py
@@ -32,6 +32,12 @@
# set to None to avoid leaks during tracing by JAX
kwargs["rng_key"] = None
aux_trace[name][key] = kwargs
+ elif key == "infer":
+ kwargs = site["infer"].copy()
+ if "_scan_current_index" in kwargs:
+ # set to None to avoid leaks during tracing by JAX
+ kwargs["_scan_current_index"] = None
+ aux_trace[name][key] = kwargs
else:
aux_trace[name][key] = site[key]
# keep the site order information because in JAX, flatten and unflatten do not preserve
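
A minimal, self-contained sketch of the mechanism the patch guards against (this is not NumPyro code; the `stash` dict and the toy scan bodies are purely illustrative): anything that keeps a reference to a tracer after tracing finishes — here a Python-level dict, in the patch the pytree aux_data holding `_scan_current_index` — is exactly what `JAX_CHECK_TRACER_LEAKS=1` reports, and storing `None` instead of the traced value avoids it.

```python
import jax
import jax.numpy as jnp

# Same effect as running the process with JAX_CHECK_TRACER_LEAKS=1.
jax.config.update("jax_check_tracer_leaks", True)

stash = {}  # Python-side state that outlives the trace, much like static pytree aux_data


def leaky_body(carry, x):
    stash["current"] = x  # x is a tracer here; keeping a reference to it leaks it
    return carry + x, carry


try:
    jax.lax.scan(leaky_body, 0.0, jnp.arange(3.0))
except Exception as e:  # "Leaked level MainTrace(...)" from the leak checker
    print(type(e).__name__, e)

stash.clear()


def clean_body(carry, x):
    stash["current"] = None  # mirrors the patch: store None instead of the traced value
    return carry + x, carry


jax.lax.scan(clean_body, 0.0, jnp.arange(3.0))  # traces without a leak report
```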
|
{"golden_diff": "diff --git a/numpyro/ops/pytree.py b/numpyro/ops/pytree.py\n--- a/numpyro/ops/pytree.py\n+++ b/numpyro/ops/pytree.py\n@@ -32,6 +32,12 @@\n # set to None to avoid leaks during tracing by JAX\n kwargs[\"rng_key\"] = None\n aux_trace[name][key] = kwargs\n+ elif key == \"infer\":\n+ kwargs = site[\"infer\"].copy()\n+ if \"_scan_current_index\" in kwargs:\n+ # set to None to avoid leaks during tracing by JAX\n+ kwargs[\"_scan_current_index\"] = None\n+ aux_trace[name][key] = kwargs\n else:\n aux_trace[name][key] = site[key]\n # keep the site order information because in JAX, flatten and unflatten do not preserve\n", "issue": "Leak when running examples with JAX_CHECK_TRACER_LEAKS=1\nHi\r\nwhen I run ar2.py from yours examples (or bnn.py did not try with others) with the environment variable JAX_CHECK_TRACER_LEAKS=1 they fail. (I had to use it to try to find an issue with a function I had written)\r\n\r\nthe exception raised is \r\n\r\nException: Leaked level MainTrace(1,DynamicJaxprTrace). Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].\r\n\r\nhere the complete log:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/ffranco/Downloads/ar2.py\", line 138, in <module>\r\n main(args)\r\n File \"/home/ffranco/Downloads/ar2.py\", line 117, in main\r\n run_inference(model, args, rng_key, y)\r\n File \"/home/ffranco/Downloads/ar2.py\", line 96, in run_inference\r\n mcmc.run(rng_key, y=y)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py\", line 593, in run\r\n states_flat, last_state = partial_map_fn(map_args)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py\", line 386, in _single_chain_mcmc\r\n model_kwargs=kwargs,\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py\", line 707, in init\r\n rng_key_init_model, model_args, model_kwargs, init_params\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py\", line 659, in _init_state\r\n forward_mode_differentiation=self._forward_mode_differentiation,\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py\", line 606, in initialize_model\r\n ) = _get_model_transforms(substituted_model, model_args, model_kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py\", line 404, in _get_model_transforms\r\n model_trace = trace(model).get_trace(*model_args, **model_kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/handlers.py\", line 171, in get_trace\r\n self(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 105, in __call__\r\n return self.fn(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 105, in __call__\r\n return self.fn(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 105, in __call__\r\n return self.fn(*args, **kwargs)\r\n File \"/home/ffranco/Downloads/ar2.py\", line 67, in ar2_scan\r\n scan(transition, init, timesteps)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py\", line 438, in scan\r\n msg = apply_stack(initial_msg)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 53, in apply_stack\r\n default_process_message(msg)\r\n File 
\"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 28, in default_process_message\r\n msg[\"value\"] = msg[\"fn\"](*msg[\"args\"], **msg[\"kwargs\"])\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py\", line 306, in scan_wrapper\r\n body_fn, wrapped_carry, xs, length=length, reverse=reverse\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/traceback_util.py\", line 162, in reraise_with_filtered_traceback\r\n return fun(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py\", line 1345, in scan\r\n init_flat, carry_avals, carry_avals_out, init_tree, *rest = _create_jaxpr(init)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py\", line 1332, in _create_jaxpr\r\n f, in_tree, carry_avals + x_avals, \"scan\")\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/util.py\", line 185, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py\", line 78, in _initial_style_jaxpr\r\n fun, in_tree, in_avals, primitive_name)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/util.py\", line 185, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/_src/lax/control_flow.py\", line 71, in _initial_style_open_jaxpr\r\n jaxpr, _, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals, debug)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/interpreters/partial_eval.py\", line 1511, in trace_to_jaxpr_dynamic\r\n del main, fun\r\n File \"/home/ffranco/anaconda3/lib/python3.7/contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/jax/core.py\", line 810, in new_main\r\n raise Exception(f'Leaked level {t()}. Leaked tracer(s): {leaked_tracers}.')\r\njax._src.traceback_util.UnfilteredStackTrace: Exception: Leaked level MainTrace(1,DynamicJaxprTrace). 
Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].\r\n\r\nThe stack trace below excludes JAX-internal frames.\r\nThe preceding is the original exception that occurred, unmodified.\r\n\r\n--------------------\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/ffranco/Downloads/ar2.py\", line 138, in <module>\r\n main(args)\r\n File \"/home/ffranco/Downloads/ar2.py\", line 117, in main\r\n run_inference(model, args, rng_key, y)\r\n File \"/home/ffranco/Downloads/ar2.py\", line 96, in run_inference\r\n mcmc.run(rng_key, y=y)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py\", line 593, in run\r\n states_flat, last_state = partial_map_fn(map_args)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/mcmc.py\", line 386, in _single_chain_mcmc\r\n model_kwargs=kwargs,\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py\", line 707, in init\r\n rng_key_init_model, model_args, model_kwargs, init_params\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/hmc.py\", line 659, in _init_state\r\n forward_mode_differentiation=self._forward_mode_differentiation,\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py\", line 606, in initialize_model\r\n ) = _get_model_transforms(substituted_model, model_args, model_kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/infer/util.py\", line 404, in _get_model_transforms\r\n model_trace = trace(model).get_trace(*model_args, **model_kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/handlers.py\", line 171, in get_trace\r\n self(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 105, in __call__\r\n return self.fn(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 105, in __call__\r\n return self.fn(*args, **kwargs)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 105, in __call__\r\n return self.fn(*args, **kwargs)\r\n File \"/home/ffranco/Downloads/ar2.py\", line 67, in ar2_scan\r\n scan(transition, init, timesteps)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py\", line 438, in scan\r\n msg = apply_stack(initial_msg)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 53, in apply_stack\r\n default_process_message(msg)\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/primitives.py\", line 28, in default_process_message\r\n msg[\"value\"] = msg[\"fn\"](*msg[\"args\"], **msg[\"kwargs\"])\r\n File \"/home/ffranco/numpyro08/lib/python3.7/site-packages/numpyro/contrib/control_flow/scan.py\", line 306, in scan_wrapper\r\n body_fn, wrapped_carry, xs, length=length, reverse=reverse\r\n File \"/home/ffranco/anaconda3/lib/python3.7/contextlib.py\", line 119, in __exit__\r\n next(self.gen)\r\nException: Leaked level MainTrace(1,DynamicJaxprTrace). 
Leaked tracer(s): [Traced<ShapedArray(int32[])>with<DynamicJaxprTrace(level=1/0)>].\n", "before_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom jax.tree_util import register_pytree_node_class\n\n\n@register_pytree_node_class\nclass PytreeTrace:\n def __init__(self, trace):\n self.trace = trace\n\n def tree_flatten(self):\n trace, aux_trace = {}, {}\n for name, site in self.trace.items():\n if site[\"type\"] in [\"sample\", \"deterministic\", \"plate\", \"param\"]:\n trace[name], aux_trace[name] = {}, {\"_control_flow_done\": True}\n for key in site:\n if key == \"fn\":\n if site[\"type\"] == \"sample\":\n trace[name][key] = site[key]\n elif site[\"type\"] == \"plate\":\n aux_trace[name][key] = site[key]\n elif key in [\"args\", \"value\", \"intermediates\"]:\n trace[name][key] = site[key]\n # scanned sites have stop field because we trace them inside a block handler\n elif key != \"stop\":\n if key == \"kwargs\":\n kwargs = site[\"kwargs\"].copy()\n if \"rng_key\" in kwargs:\n # rng_key is not traced else it is collected by the\n # scan primitive which doesn't make sense\n # set to None to avoid leaks during tracing by JAX\n kwargs[\"rng_key\"] = None\n aux_trace[name][key] = kwargs\n else:\n aux_trace[name][key] = site[key]\n # keep the site order information because in JAX, flatten and unflatten do not preserve\n # the order of keys in a dict\n site_names = list(trace.keys())\n return (trace,), (aux_trace, site_names)\n\n @classmethod\n def tree_unflatten(cls, aux_data, children):\n aux_trace, site_names = aux_data\n (trace,) = children\n trace_with_aux = {}\n for name in site_names:\n trace[name].update(aux_trace[name])\n trace_with_aux[name] = trace[name]\n return cls(trace_with_aux)\n", "path": "numpyro/ops/pytree.py"}], "after_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom jax.tree_util import register_pytree_node_class\n\n\n@register_pytree_node_class\nclass PytreeTrace:\n def __init__(self, trace):\n self.trace = trace\n\n def tree_flatten(self):\n trace, aux_trace = {}, {}\n for name, site in self.trace.items():\n if site[\"type\"] in [\"sample\", \"deterministic\", \"plate\", \"param\"]:\n trace[name], aux_trace[name] = {}, {\"_control_flow_done\": True}\n for key in site:\n if key == \"fn\":\n if site[\"type\"] == \"sample\":\n trace[name][key] = site[key]\n elif site[\"type\"] == \"plate\":\n aux_trace[name][key] = site[key]\n elif key in [\"args\", \"value\", \"intermediates\"]:\n trace[name][key] = site[key]\n # scanned sites have stop field because we trace them inside a block handler\n elif key != \"stop\":\n if key == \"kwargs\":\n kwargs = site[\"kwargs\"].copy()\n if \"rng_key\" in kwargs:\n # rng_key is not traced else it is collected by the\n # scan primitive which doesn't make sense\n # set to None to avoid leaks during tracing by JAX\n kwargs[\"rng_key\"] = None\n aux_trace[name][key] = kwargs\n elif key == \"infer\":\n kwargs = site[\"infer\"].copy()\n if \"_scan_current_index\" in kwargs:\n # set to None to avoid leaks during tracing by JAX\n kwargs[\"_scan_current_index\"] = None\n aux_trace[name][key] = kwargs\n else:\n aux_trace[name][key] = site[key]\n # keep the site order information because in JAX, flatten and unflatten do not preserve\n # the order of keys in a dict\n site_names = list(trace.keys())\n return (trace,), (aux_trace, site_names)\n\n @classmethod\n def tree_unflatten(cls, aux_data, children):\n aux_trace, 
site_names = aux_data\n (trace,) = children\n trace_with_aux = {}\n for name in site_names:\n trace[name].update(aux_trace[name])\n trace_with_aux[name] = trace[name]\n return cls(trace_with_aux)\n", "path": "numpyro/ops/pytree.py"}]}
| 3,359 | 190 |
gh_patches_debug_33887
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-1065
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make `scripts/setup/generate-secrets -d` use existing setting values
Currently, `generate-secrets -d` regenerates all the secret keys each time it is run, modifying zproject/settings.py, which means that if you use the same Zulip checkout for both a development VM on the host and a Vagrant guest, every time you provision on the Vagrant guest, you need to rebuild the database (to match the new keys). I think this would also affect someone using both Docker and Vagrant with the same checkout (as I imagine happens with some frequency when testing).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/setup/generate_secrets.py`
Content:
```
1 #!/usr/bin/env python
2 # This tools generates local_settings_generated.py using the template
3
4 from __future__ import print_function
5 import sys, os, os.path
6
7 sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
8 os.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'
9
10 from django.utils.crypto import get_random_string
11 from zerver.lib.utils import generate_random_token
12
13 os.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))
14
15 CAMO_CONFIG_FILENAME = '/etc/default/camo'
16
17 AUTOGENERATED_SETTINGS = ['shared_secret', 'avatar_salt', 'rabbitmq_password', 'local_database_password',
18 'initial_password_salt']
19
20 def generate_camo_config_file(camo_key):
21 camo_config = """ENABLED=yes
22 PORT=9292
23 CAMO_KEY=%s
24 """ % (camo_key,)
25 with open(CAMO_CONFIG_FILENAME, 'w') as camo_file:
26 camo_file.write(camo_config)
27 print("Generated Camo config file %s" % (CAMO_CONFIG_FILENAME,))
28
29 def generate_django_secretkey():
30 # Secret key generation taken from Django's startproject.py
31 chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
32 return get_random_string(50, chars)
33
34 def generate_secrets(development=False):
35 if development:
36 OUTPUT_SETTINGS_FILENAME = "zproject/dev-secrets.conf"
37 else:
38 OUTPUT_SETTINGS_FILENAME = "/etc/zulip/zulip-secrets.conf"
39
40 lines = ['[secrets]\n']
41
42 def config_line(var, value):
43 return "%s = %s\n" % (var, value)
44
45 for name in AUTOGENERATED_SETTINGS:
46 lines.append(config_line(name, generate_random_token(64)))
47
48 lines.append(config_line('secret_key', generate_django_secretkey()))
49 camo_key = get_random_string(64)
50 lines.append(config_line('camo_key', camo_key))
51 if not development:
52 # Write the Camo config file directly
53 generate_camo_config_file(camo_key)
54
55 out = open(OUTPUT_SETTINGS_FILENAME, 'w')
56 out.write("".join(lines))
57 out.close()
58
59 print("Generated %s with auto-generated secrets!" % (OUTPUT_SETTINGS_FILENAME,))
60
61 if __name__ == '__main__':
62
63 development = False
64 extra_args = sys.argv[1:]
65
66 if len(extra_args) and extra_args[0] in ('-d', '--development'):
67 development = True
68
69 generate_secrets(development)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/setup/generate_secrets.py b/scripts/setup/generate_secrets.py
--- a/scripts/setup/generate_secrets.py
+++ b/scripts/setup/generate_secrets.py
@@ -8,6 +8,8 @@
os.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'
from django.utils.crypto import get_random_string
+import six
+
from zerver.lib.utils import generate_random_token
os.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))
@@ -31,6 +33,21 @@
chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
return get_random_string(50, chars)
+def get_old_conf(output_filename):
+ if not os.path.exists(output_filename):
+ return {}
+
+ secrets_file = six.moves.configparser.RawConfigParser()
+ secrets_file.read(output_filename)
+
+ def get_secret(key):
+ if secrets_file.has_option('secrets', key):
+ return secrets_file.get('secrets', key)
+ return None
+
+ fields = AUTOGENERATED_SETTINGS + ['secret_key', 'camo_key']
+ return {name: get_secret(name) for name in fields}
+
def generate_secrets(development=False):
if development:
OUTPUT_SETTINGS_FILENAME = "zproject/dev-secrets.conf"
@@ -42,12 +59,16 @@
def config_line(var, value):
return "%s = %s\n" % (var, value)
+ old_conf = get_old_conf(OUTPUT_SETTINGS_FILENAME)
for name in AUTOGENERATED_SETTINGS:
- lines.append(config_line(name, generate_random_token(64)))
+ lines.append(config_line(name, old_conf.get(name, generate_random_token(64))))
+
+ secret_key = old_conf.get('secret_key', generate_django_secretkey())
+ lines.append(config_line('secret_key', secret_key))
- lines.append(config_line('secret_key', generate_django_secretkey()))
- camo_key = get_random_string(64)
+ camo_key = old_conf.get('camo_key', get_random_string(64))
lines.append(config_line('camo_key', camo_key))
+
if not development:
# Write the Camo config file directly
generate_camo_config_file(camo_key)
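
The core of the change is a "reuse what exists, generate only what is missing" step before the secrets file is rewritten, so re-provisioning no longer invalidates databases keyed on the old secrets. A hedged sketch of that pattern using the standard-library `configparser` directly (the path, key names, and `secrets.token_hex` generator here are illustrative, not the exact Zulip helpers):

```python
import secrets
from configparser import RawConfigParser


def load_existing(path, keys):
    """Return the subset of `keys` already present in [secrets] of an existing conf file."""
    parser = RawConfigParser()
    parser.read(path)  # a missing file is silently ignored, yielding an empty parser
    return {
        key: parser.get("secrets", key)
        for key in keys
        if parser.has_option("secrets", key)
    }


existing = load_existing("zproject/dev-secrets.conf", ["secret_key", "camo_key"])

# Keep values from a previous run so data keyed on them stays valid;
# only mint fresh secrets for keys that were never written before.
secret_key = existing.get("secret_key", secrets.token_hex(32))
camo_key = existing.get("camo_key", secrets.token_hex(32))
```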
|
{"golden_diff": "diff --git a/scripts/setup/generate_secrets.py b/scripts/setup/generate_secrets.py\n--- a/scripts/setup/generate_secrets.py\n+++ b/scripts/setup/generate_secrets.py\n@@ -8,6 +8,8 @@\n os.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'\n \n from django.utils.crypto import get_random_string\n+import six\n+\n from zerver.lib.utils import generate_random_token\n \n os.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))\n@@ -31,6 +33,21 @@\n chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'\n return get_random_string(50, chars)\n \n+def get_old_conf(output_filename):\n+ if not os.path.exists(output_filename):\n+ return {}\n+\n+ secrets_file = six.moves.configparser.RawConfigParser()\n+ secrets_file.read(output_filename)\n+\n+ def get_secret(key):\n+ if secrets_file.has_option('secrets', key):\n+ return secrets_file.get('secrets', key)\n+ return None\n+\n+ fields = AUTOGENERATED_SETTINGS + ['secret_key', 'camo_key']\n+ return {name: get_secret(name) for name in fields}\n+\n def generate_secrets(development=False):\n if development:\n OUTPUT_SETTINGS_FILENAME = \"zproject/dev-secrets.conf\"\n@@ -42,12 +59,16 @@\n def config_line(var, value):\n return \"%s = %s\\n\" % (var, value)\n \n+ old_conf = get_old_conf(OUTPUT_SETTINGS_FILENAME)\n for name in AUTOGENERATED_SETTINGS:\n- lines.append(config_line(name, generate_random_token(64)))\n+ lines.append(config_line(name, old_conf.get(name, generate_random_token(64))))\n+\n+ secret_key = old_conf.get('secret_key', generate_django_secretkey())\n+ lines.append(config_line('secret_key', secret_key))\n \n- lines.append(config_line('secret_key', generate_django_secretkey()))\n- camo_key = get_random_string(64)\n+ camo_key = old_conf.get('camo_key', get_random_string(64))\n lines.append(config_line('camo_key', camo_key))\n+\n if not development:\n # Write the Camo config file directly\n generate_camo_config_file(camo_key)\n", "issue": "Make `scripts/setup/generate-secrets -d` use existing setting values\nCurrently, `generate-secrets -d` regenerates all the secret keys each time it is run, modifying zproject/settings.py, which means that if you use the same Zulip checkout for both a development VM on the host and a Vagrant guest, every time you provision on the Vagrant guest, you need to rebuild the database (to match the new keys). 
I think this would also affect someone using both Docker and Vagrant with the same checkout (as I imagine happens with some frequency when testing).\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# This tools generates local_settings_generated.py using the template\n\nfrom __future__ import print_function\nimport sys, os, os.path\n\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))\nos.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'\n\nfrom django.utils.crypto import get_random_string\nfrom zerver.lib.utils import generate_random_token\n\nos.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))\n\nCAMO_CONFIG_FILENAME = '/etc/default/camo'\n\nAUTOGENERATED_SETTINGS = ['shared_secret', 'avatar_salt', 'rabbitmq_password', 'local_database_password',\n 'initial_password_salt']\n\ndef generate_camo_config_file(camo_key):\n camo_config = \"\"\"ENABLED=yes\nPORT=9292\nCAMO_KEY=%s\n\"\"\" % (camo_key,)\n with open(CAMO_CONFIG_FILENAME, 'w') as camo_file:\n camo_file.write(camo_config)\n print(\"Generated Camo config file %s\" % (CAMO_CONFIG_FILENAME,))\n\ndef generate_django_secretkey():\n # Secret key generation taken from Django's startproject.py\n chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'\n return get_random_string(50, chars)\n\ndef generate_secrets(development=False):\n if development:\n OUTPUT_SETTINGS_FILENAME = \"zproject/dev-secrets.conf\"\n else:\n OUTPUT_SETTINGS_FILENAME = \"/etc/zulip/zulip-secrets.conf\"\n\n lines = ['[secrets]\\n']\n\n def config_line(var, value):\n return \"%s = %s\\n\" % (var, value)\n\n for name in AUTOGENERATED_SETTINGS:\n lines.append(config_line(name, generate_random_token(64)))\n\n lines.append(config_line('secret_key', generate_django_secretkey()))\n camo_key = get_random_string(64)\n lines.append(config_line('camo_key', camo_key))\n if not development:\n # Write the Camo config file directly\n generate_camo_config_file(camo_key)\n\n out = open(OUTPUT_SETTINGS_FILENAME, 'w')\n out.write(\"\".join(lines))\n out.close()\n\n print(\"Generated %s with auto-generated secrets!\" % (OUTPUT_SETTINGS_FILENAME,))\n\nif __name__ == '__main__':\n\n development = False\n extra_args = sys.argv[1:]\n\n if len(extra_args) and extra_args[0] in ('-d', '--development'):\n development = True\n\n generate_secrets(development)\n", "path": "scripts/setup/generate_secrets.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# This tools generates local_settings_generated.py using the template\n\nfrom __future__ import print_function\nimport sys, os, os.path\n\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))\nos.environ['DJANGO_SETTINGS_MODULE'] = 'zproject.settings'\n\nfrom django.utils.crypto import get_random_string\nimport six\n\nfrom zerver.lib.utils import generate_random_token\n\nos.chdir(os.path.join(os.path.dirname(__file__), '..', '..'))\n\nCAMO_CONFIG_FILENAME = '/etc/default/camo'\n\nAUTOGENERATED_SETTINGS = ['shared_secret', 'avatar_salt', 'rabbitmq_password', 'local_database_password',\n 'initial_password_salt']\n\ndef generate_camo_config_file(camo_key):\n camo_config = \"\"\"ENABLED=yes\nPORT=9292\nCAMO_KEY=%s\n\"\"\" % (camo_key,)\n with open(CAMO_CONFIG_FILENAME, 'w') as camo_file:\n camo_file.write(camo_config)\n print(\"Generated Camo config file %s\" % (CAMO_CONFIG_FILENAME,))\n\ndef generate_django_secretkey():\n # Secret key generation taken from Django's startproject.py\n chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'\n return get_random_string(50, 
chars)\n\ndef get_old_conf(output_filename):\n if not os.path.exists(output_filename):\n return {}\n\n secrets_file = six.moves.configparser.RawConfigParser()\n secrets_file.read(output_filename)\n\n def get_secret(key):\n if secrets_file.has_option('secrets', key):\n return secrets_file.get('secrets', key)\n return None\n\n fields = AUTOGENERATED_SETTINGS + ['secret_key', 'camo_key']\n return {name: get_secret(name) for name in fields}\n\ndef generate_secrets(development=False):\n if development:\n OUTPUT_SETTINGS_FILENAME = \"zproject/dev-secrets.conf\"\n else:\n OUTPUT_SETTINGS_FILENAME = \"/etc/zulip/zulip-secrets.conf\"\n\n lines = ['[secrets]\\n']\n\n def config_line(var, value):\n return \"%s = %s\\n\" % (var, value)\n\n old_conf = get_old_conf(OUTPUT_SETTINGS_FILENAME)\n for name in AUTOGENERATED_SETTINGS:\n lines.append(config_line(name, old_conf.get(name, generate_random_token(64))))\n\n secret_key = old_conf.get('secret_key', generate_django_secretkey())\n lines.append(config_line('secret_key', secret_key))\n\n camo_key = old_conf.get('camo_key', get_random_string(64))\n lines.append(config_line('camo_key', camo_key))\n\n if not development:\n # Write the Camo config file directly\n generate_camo_config_file(camo_key)\n\n out = open(OUTPUT_SETTINGS_FILENAME, 'w')\n out.write(\"\".join(lines))\n out.close()\n\n print(\"Generated %s with auto-generated secrets!\" % (OUTPUT_SETTINGS_FILENAME,))\n\nif __name__ == '__main__':\n\n development = False\n extra_args = sys.argv[1:]\n\n if len(extra_args) and extra_args[0] in ('-d', '--development'):\n development = True\n\n generate_secrets(development)\n", "path": "scripts/setup/generate_secrets.py"}]}
| 1,074 | 512 |
gh_patches_debug_19425
|
rasdani/github-patches
|
git_diff
|
pyodide__pyodide-4502
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Numpy v1.26.4 build fails: Module "features" does not exist
@ryanking13 @mattip @seberg
v1.26.3 build succeeds
Failure for v1.26.4:
<details><summary>Details</summary>
<p>
```
The Meson build system
Version: 1.3.1
Source dir: /home/hood/Documents/programming/pyodide/packages/numpy/build/numpy-1.26.4
Build dir: /home/hood/Documents/programming/pyodide/packages/numpy/build/numpy-1.26.4/.mesonpy-c6ekdglk
Build type: cross build
Project name: NumPy
Project version: 1.26.4
Cross compiler sanity tests disabled via the cross file.
C compiler for the host machine: /tmp/tmpqfmd_k4r/cc (emscripten 3.1.52 "emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.1.52 (fa478400df3351f7153c0279bc638784d3d90334)")
C linker for the host machine: /tmp/tmpqfmd_k4r/cc ld.wasm 18.0.0
C++ compiler for the host machine: /tmp/tmpqfmd_k4r/c++ (emscripten 3.1.52 "emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.1.52 (fa478400df3351f7153c0279bc638784d3d90334)")
C++ linker for the host machine: /tmp/tmpqfmd_k4r/c++ ld.wasm 18.0.0
Cython compiler for the host machine: cython (cython 3.0.8)
C compiler for the build machine: ccache cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0")
C linker for the build machine: cc ld.bfd 2.34
C++ compiler for the build machine: ccache c++ (gcc 9.4.0 "c++ (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0")
C++ linker for the build machine: c++ ld.bfd 2.34
Cython compiler for the build machine: cython (cython 3.0.8)
Build machine cpu family: x86_64
Build machine cpu: x86_64
Host machine cpu family: wasm32
Host machine cpu: wasm
Target machine cpu family: wasm32
Target machine cpu: wasm
Program python3 found: YES (/tmp/build-env-mr2vjazb/bin/python)
Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1
Run-time dependency python found: YES 3.12
Has header "Python.h" with dependency python: YES
Compiler for C supports arguments -fno-strict-aliasing: YES
../meson_cpu/x86/meson.build:2:15: ERROR: Module "features" does not exist
```
</p>
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyodide-build/pyodide_build/pypabuild.py`
Content:
```
1 import json
2 import os
3 import shutil
4 import subprocess as sp
5 import sys
6 import traceback
7 from collections.abc import Callable, Iterator, Mapping, Sequence
8 from contextlib import contextmanager
9 from itertools import chain
10 from pathlib import Path
11 from tempfile import TemporaryDirectory
12
13 from build import BuildBackendException, ConfigSettingsType
14 from build.env import DefaultIsolatedEnv
15 from packaging.requirements import Requirement
16
17 from . import common, pywasmcross
18 from .build_env import (
19 get_build_flag,
20 get_hostsitepackages,
21 get_pyversion,
22 get_unisolated_packages,
23 platform,
24 )
25 from .io import _BuildSpecExports
26 from .vendor._pypabuild import (
27 _STYLES,
28 _DefaultIsolatedEnv,
29 _error,
30 _handle_build_error,
31 _ProjectBuilder,
32 )
33
34 AVOIDED_REQUIREMENTS = [
35 # We don't want to install cmake Python package inside the isolated env as it will shadow
36 # the pywasmcross cmake wrapper.
37 # TODO: Find a way to make scikit-build use the pywasmcross cmake wrapper.
38 "cmake",
39 # mesonpy installs patchelf in linux platform but we don't want it.
40 "patchelf",
41 ]
42
43
44 def _gen_runner(
45 cross_build_env: Mapping[str, str],
46 isolated_build_env: DefaultIsolatedEnv,
47 ) -> Callable[[Sequence[str], str | None, Mapping[str, str] | None], None]:
48 """
49 This returns a slightly modified version of default subprocess runner that pypa/build uses.
50 pypa/build prepends the virtual environment's bin directory to the PATH environment variable.
51 This is problematic because it shadows the pywasmcross compiler wrappers for cmake, meson, etc.
52
53 This function prepends the compiler wrapper directory to the PATH again so that our compiler wrappers
54 are searched first.
55
56 Parameters
57 ----------
58 cross_build_env
59 The cross build environment for pywasmcross.
60 isolated_build_env
61 The isolated build environment created by pypa/build.
62 """
63
64 def _runner(cmd, cwd=None, extra_environ=None):
65 env = os.environ.copy()
66 if extra_environ:
67 env.update(extra_environ)
68
69 # Some build dependencies like cmake, meson installs binaries to this directory
70 # and we should add it to the PATH so that they can be found.
71 env["BUILD_ENV_SCRIPTS_DIR"] = isolated_build_env._scripts_dir
72 env["PATH"] = f"{cross_build_env['COMPILER_WRAPPER_DIR']}:{env['PATH']}"
73 # For debugging: Uncomment the following line to print the build command
74 # print("Build backend call:", " ".join(str(x) for x in cmd), file=sys.stderr)
75 sp.check_call(cmd, cwd=cwd, env=env)
76
77 return _runner
78
79
80 def symlink_unisolated_packages(env: DefaultIsolatedEnv) -> None:
81 pyversion = get_pyversion()
82 site_packages_path = f"lib/{pyversion}/site-packages"
83 env_site_packages = Path(env.path) / site_packages_path
84 sysconfigdata_name = get_build_flag("SYSCONFIG_NAME")
85 sysconfigdata_path = (
86 Path(get_build_flag("TARGETINSTALLDIR"))
87 / f"sysconfigdata/{sysconfigdata_name}.py"
88 )
89
90 env_site_packages.mkdir(parents=True, exist_ok=True)
91 shutil.copy(sysconfigdata_path, env_site_packages)
92 host_site_packages = Path(get_hostsitepackages())
93 for name in get_unisolated_packages():
94 for path in chain(
95 host_site_packages.glob(f"{name}*"), host_site_packages.glob(f"_{name}*")
96 ):
97 (env_site_packages / path.name).unlink(missing_ok=True)
98 (env_site_packages / path.name).symlink_to(path)
99
100
101 def remove_avoided_requirements(
102 requires: set[str], avoided_requirements: set[str] | list[str]
103 ) -> set[str]:
104 for reqstr in list(requires):
105 req = Requirement(reqstr)
106 for avoid_name in set(avoided_requirements):
107 if avoid_name in req.name.lower():
108 requires.remove(reqstr)
109 return requires
110
111
112 def install_reqs(env: DefaultIsolatedEnv, reqs: set[str]) -> None:
113 env.install(
114 remove_avoided_requirements(
115 reqs,
116 get_unisolated_packages() + AVOIDED_REQUIREMENTS,
117 )
118 )
119
120
121 def _build_in_isolated_env(
122 build_env: Mapping[str, str],
123 srcdir: Path,
124 outdir: str,
125 distribution: str,
126 config_settings: ConfigSettingsType,
127 ) -> str:
128 # For debugging: The following line disables removal of the isolated venv.
129 # It will be left in the /tmp folder and can be inspected or entered as
130 # needed.
131 # _DefaultIsolatedEnv.__exit__ = lambda *args: None
132 with _DefaultIsolatedEnv() as env:
133 builder = _ProjectBuilder.from_isolated_env(
134 env,
135 srcdir,
136 runner=_gen_runner(build_env, env),
137 )
138
139 # first install the build dependencies
140 symlink_unisolated_packages(env)
141 install_reqs(env, builder.build_system_requires)
142 installed_requires_for_build = False
143 try:
144 build_reqs = builder.get_requires_for_build(
145 distribution,
146 )
147 except BuildBackendException:
148 pass
149 else:
150 install_reqs(env, build_reqs)
151 installed_requires_for_build = True
152
153 with common.replace_env(build_env):
154 if not installed_requires_for_build:
155 build_reqs = builder.get_requires_for_build(
156 distribution,
157 config_settings,
158 )
159 install_reqs(env, build_reqs)
160
161 return builder.build(distribution, outdir, config_settings)
162
163
164 def parse_backend_flags(backend_flags: str | list[str]) -> ConfigSettingsType:
165 config_settings: dict[str, str | list[str]] = {}
166
167 if isinstance(backend_flags, str):
168 backend_flags = backend_flags.split()
169
170 for arg in backend_flags:
171 setting, _, value = arg.partition("=")
172 if setting not in config_settings:
173 config_settings[setting] = value
174 continue
175
176 cur_value = config_settings[setting]
177 if isinstance(cur_value, str):
178 config_settings[setting] = [cur_value, value]
179 else:
180 cur_value.append(value)
181 return config_settings
182
183
184 def make_command_wrapper_symlinks(symlink_dir: Path) -> dict[str, str]:
185 """
186 Create symlinks that make pywasmcross look like a compiler.
187
188 Parameters
189 ----------
190 symlink_dir
191 The directory where the symlinks will be created.
192
193 Returns
194 -------
195 The dictionary of compiler environment variables that points to the symlinks.
196 """
197
198 pywasmcross_exe = symlink_dir / "pywasmcross.py"
199 shutil.copy2(pywasmcross.__file__, pywasmcross_exe)
200 pywasmcross_exe.chmod(0o755)
201
202 env = {}
203 for symlink in pywasmcross.SYMLINKS:
204 symlink_path = symlink_dir / symlink
205 if os.path.lexists(symlink_path) and not symlink_path.exists():
206 # remove broken symlink so it can be re-created
207 symlink_path.unlink()
208
209 symlink_path.symlink_to(pywasmcross_exe)
210 if symlink == "c++":
211 var = "CXX"
212 elif symlink == "gfortran":
213 var = "FC" # https://mesonbuild.com/Reference-tables.html#compiler-and-linker-selection-variables
214 else:
215 var = symlink.upper()
216 env[var] = str(symlink_path)
217
218 return env
219
220
221 @contextmanager
222 def get_build_env(
223 env: dict[str, str],
224 *,
225 pkgname: str,
226 cflags: str,
227 cxxflags: str,
228 ldflags: str,
229 target_install_dir: str,
230 exports: _BuildSpecExports,
231 ) -> Iterator[dict[str, str]]:
232 """
233 Returns a dict of environment variables that should be used when building
234 a package with pypa/build.
235 """
236
237 kwargs = dict(
238 pkgname=pkgname,
239 cflags=cflags,
240 cxxflags=cxxflags,
241 ldflags=ldflags,
242 target_install_dir=target_install_dir,
243 )
244
245 args = common.environment_substitute_args(kwargs, env)
246 args["builddir"] = str(Path(".").absolute())
247 args["exports"] = exports
248 env = env.copy()
249
250 with TemporaryDirectory() as symlink_dir_str:
251 symlink_dir = Path(symlink_dir_str)
252 env.update(make_command_wrapper_symlinks(symlink_dir))
253
254 sysconfig_dir = Path(get_build_flag("TARGETINSTALLDIR")) / "sysconfigdata"
255 args["PYTHONPATH"] = sys.path + [str(sysconfig_dir)]
256 args["orig__name__"] = __name__
257 args["pythoninclude"] = get_build_flag("PYTHONINCLUDE")
258 args["PATH"] = env["PATH"]
259
260 pywasmcross_env = json.dumps(args)
261 # Store into environment variable and to disk. In most cases we will
262 # load from the environment variable but if some other tool filters
263 # environment variables we will load from disk instead.
264 env["PYWASMCROSS_ARGS"] = pywasmcross_env
265 (symlink_dir / "pywasmcross_env.json").write_text(pywasmcross_env)
266
267 env["_PYTHON_HOST_PLATFORM"] = platform()
268 env["_PYTHON_SYSCONFIGDATA_NAME"] = get_build_flag("SYSCONFIG_NAME")
269 env["PYTHONPATH"] = str(sysconfig_dir)
270 env["COMPILER_WRAPPER_DIR"] = str(symlink_dir)
271
272 yield env
273
274
275 def build(
276 srcdir: Path,
277 outdir: Path,
278 build_env: Mapping[str, str],
279 config_settings: ConfigSettingsType,
280 ) -> str:
281 distribution = "wheel"
282 try:
283 with _handle_build_error():
284 built = _build_in_isolated_env(
285 build_env, srcdir, str(outdir), distribution, config_settings
286 )
287 print("{bold}{green}Successfully built {}{reset}".format(built, **_STYLES))
288 return built
289 except Exception as e: # pragma: no cover
290 tb = traceback.format_exc().strip("\n")
291 print("\n{dim}{}{reset}\n".format(tb, **_STYLES))
292 _error(str(e))
293 sys.exit(1)
294
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyodide-build/pyodide_build/pypabuild.py b/pyodide-build/pyodide_build/pypabuild.py
--- a/pyodide-build/pyodide_build/pypabuild.py
+++ b/pyodide-build/pyodide_build/pypabuild.py
@@ -40,6 +40,19 @@
"patchelf",
]
+# corresponding env variables for symlinks
+SYMLINK_ENV_VARS = {
+ "cc": "CC",
+ "c++": "CXX",
+ "ld": "LD",
+ "lld": "LLD",
+ "ar": "AR",
+ "gcc": "GCC",
+ "ranlib": "RANLIB",
+ "strip": "STRIP",
+ "gfortran": "FC", # https://mesonbuild.com/Reference-tables.html#compiler-and-linker-selection-variables
+}
+
def _gen_runner(
cross_build_env: Mapping[str, str],
@@ -207,13 +220,8 @@
symlink_path.unlink()
symlink_path.symlink_to(pywasmcross_exe)
- if symlink == "c++":
- var = "CXX"
- elif symlink == "gfortran":
- var = "FC" # https://mesonbuild.com/Reference-tables.html#compiler-and-linker-selection-variables
- else:
- var = symlink.upper()
- env[var] = str(symlink_path)
+ if symlink in SYMLINK_ENV_VARS:
+ env[SYMLINK_ENV_VARS[symlink]] = str(symlink_path)
return env
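
The behavioural difference introduced by the allow-list can be seen in a small sketch (the symlink set below is hypothetical — `pywasmcross.SYMLINKS` is not shown in the excerpt — and the extra `CMAKE`/`MESON` variables only illustrate the kind of unexpected exports the old uppercase heuristic could produce):

```python
# Old heuristic: every wrapper symlink becomes an environment variable.
def env_vars_uppercase(symlink_dir, symlinks):
    env = {}
    for name in symlinks:
        var = "CXX" if name == "c++" else "FC" if name == "gfortran" else name.upper()
        env[var] = f"{symlink_dir}/{name}"
    return env


# New behaviour: only documented compiler/linker selection variables are exported.
ALLOW_LIST = {"cc": "CC", "c++": "CXX", "gfortran": "FC"}


def env_vars_allow_list(symlink_dir, symlinks):
    return {
        ALLOW_LIST[name]: f"{symlink_dir}/{name}"
        for name in symlinks
        if name in ALLOW_LIST
    }


symlinks = ["cc", "c++", "gfortran", "cmake", "meson"]  # hypothetical wrapper set
print(env_vars_uppercase("/tmp/wrappers", symlinks))   # also exports CMAKE=..., MESON=...
print(env_vars_allow_list("/tmp/wrappers", symlinks))  # only CC, CXX, FC
```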
|
{"golden_diff": "diff --git a/pyodide-build/pyodide_build/pypabuild.py b/pyodide-build/pyodide_build/pypabuild.py\n--- a/pyodide-build/pyodide_build/pypabuild.py\n+++ b/pyodide-build/pyodide_build/pypabuild.py\n@@ -40,6 +40,19 @@\n \"patchelf\",\n ]\n \n+# corresponding env variables for symlinks\n+SYMLINK_ENV_VARS = {\n+ \"cc\": \"CC\",\n+ \"c++\": \"CXX\",\n+ \"ld\": \"LD\",\n+ \"lld\": \"LLD\",\n+ \"ar\": \"AR\",\n+ \"gcc\": \"GCC\",\n+ \"ranlib\": \"RANLIB\",\n+ \"strip\": \"STRIP\",\n+ \"gfortran\": \"FC\", # https://mesonbuild.com/Reference-tables.html#compiler-and-linker-selection-variables\n+}\n+\n \n def _gen_runner(\n cross_build_env: Mapping[str, str],\n@@ -207,13 +220,8 @@\n symlink_path.unlink()\n \n symlink_path.symlink_to(pywasmcross_exe)\n- if symlink == \"c++\":\n- var = \"CXX\"\n- elif symlink == \"gfortran\":\n- var = \"FC\" # https://mesonbuild.com/Reference-tables.html#compiler-and-linker-selection-variables\n- else:\n- var = symlink.upper()\n- env[var] = str(symlink_path)\n+ if symlink in SYMLINK_ENV_VARS:\n+ env[SYMLINK_ENV_VARS[symlink]] = str(symlink_path)\n \n return env\n", "issue": "Numpy v1.26.4 build fails: Module \"features\" does not exist\n@ryanking13 @mattip @seberg\r\n\r\nv1.26.3 build succeeds\r\n\r\nFailure for v1.26.4:\r\n\r\n<details><summary>Details</summary>\r\n<p>\r\n\r\n```\r\nThe Meson build system\r\nVersion: 1.3.1\r\nSource dir: /home/hood/Documents/programming/pyodide/packages/numpy/build/numpy-1.26.4\r\nBuild dir: /home/hood/Documents/programming/pyodide/packages/numpy/build/numpy-1.26.4/.mesonpy-c6ekdglk\r\nBuild type: cross build\r\nProject name: NumPy\r\nProject version: 1.26.4\r\nCross compiler sanity tests disabled via the cross file.\r\nC compiler for the host machine: /tmp/tmpqfmd_k4r/cc (emscripten 3.1.52 \"emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.1.52 (fa478400df3351f7153c0279bc638784d3d90334)\")\r\nC linker for the host machine: /tmp/tmpqfmd_k4r/cc ld.wasm 18.0.0\r\nC++ compiler for the host machine: /tmp/tmpqfmd_k4r/c++ (emscripten 3.1.52 \"emcc (Emscripten gcc/clang-like replacement + linker emulating GNU ld) 3.1.52 (fa478400df3351f7153c0279bc638784d3d90334)\")\r\nC++ linker for the host machine: /tmp/tmpqfmd_k4r/c++ ld.wasm 18.0.0\r\nCython compiler for the host machine: cython (cython 3.0.8)\r\nC compiler for the build machine: ccache cc (gcc 9.4.0 \"cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\")\r\nC linker for the build machine: cc ld.bfd 2.34\r\nC++ compiler for the build machine: ccache c++ (gcc 9.4.0 \"c++ (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\")\r\nC++ linker for the build machine: c++ ld.bfd 2.34\r\nCython compiler for the build machine: cython (cython 3.0.8)\r\nBuild machine cpu family: x86_64\r\nBuild machine cpu: x86_64\r\nHost machine cpu family: wasm32\r\nHost machine cpu: wasm\r\nTarget machine cpu family: wasm32\r\nTarget machine cpu: wasm\r\nProgram python3 found: YES (/tmp/build-env-mr2vjazb/bin/python)\r\nFound pkg-config: YES (/usr/bin/pkg-config) 0.29.1\r\nRun-time dependency python found: YES 3.12\r\nHas header \"Python.h\" with dependency python: YES \r\nCompiler for C supports arguments -fno-strict-aliasing: YES \r\n\r\n../meson_cpu/x86/meson.build:2:15: ERROR: Module \"features\" does not exist\r\n```\r\n\r\n</p>\r\n</details> \r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import json\nimport os\nimport shutil\nimport subprocess as sp\nimport sys\nimport traceback\nfrom collections.abc import Callable, Iterator, Mapping, Sequence\nfrom contextlib import contextmanager\nfrom 
itertools import chain\nfrom pathlib import Path\nfrom tempfile import TemporaryDirectory\n\nfrom build import BuildBackendException, ConfigSettingsType\nfrom build.env import DefaultIsolatedEnv\nfrom packaging.requirements import Requirement\n\nfrom . import common, pywasmcross\nfrom .build_env import (\n get_build_flag,\n get_hostsitepackages,\n get_pyversion,\n get_unisolated_packages,\n platform,\n)\nfrom .io import _BuildSpecExports\nfrom .vendor._pypabuild import (\n _STYLES,\n _DefaultIsolatedEnv,\n _error,\n _handle_build_error,\n _ProjectBuilder,\n)\n\nAVOIDED_REQUIREMENTS = [\n # We don't want to install cmake Python package inside the isolated env as it will shadow\n # the pywasmcross cmake wrapper.\n # TODO: Find a way to make scikit-build use the pywasmcross cmake wrapper.\n \"cmake\",\n # mesonpy installs patchelf in linux platform but we don't want it.\n \"patchelf\",\n]\n\n\ndef _gen_runner(\n cross_build_env: Mapping[str, str],\n isolated_build_env: DefaultIsolatedEnv,\n) -> Callable[[Sequence[str], str | None, Mapping[str, str] | None], None]:\n \"\"\"\n This returns a slightly modified version of default subprocess runner that pypa/build uses.\n pypa/build prepends the virtual environment's bin directory to the PATH environment variable.\n This is problematic because it shadows the pywasmcross compiler wrappers for cmake, meson, etc.\n\n This function prepends the compiler wrapper directory to the PATH again so that our compiler wrappers\n are searched first.\n\n Parameters\n ----------\n cross_build_env\n The cross build environment for pywasmcross.\n isolated_build_env\n The isolated build environment created by pypa/build.\n \"\"\"\n\n def _runner(cmd, cwd=None, extra_environ=None):\n env = os.environ.copy()\n if extra_environ:\n env.update(extra_environ)\n\n # Some build dependencies like cmake, meson installs binaries to this directory\n # and we should add it to the PATH so that they can be found.\n env[\"BUILD_ENV_SCRIPTS_DIR\"] = isolated_build_env._scripts_dir\n env[\"PATH\"] = f\"{cross_build_env['COMPILER_WRAPPER_DIR']}:{env['PATH']}\"\n # For debugging: Uncomment the following line to print the build command\n # print(\"Build backend call:\", \" \".join(str(x) for x in cmd), file=sys.stderr)\n sp.check_call(cmd, cwd=cwd, env=env)\n\n return _runner\n\n\ndef symlink_unisolated_packages(env: DefaultIsolatedEnv) -> None:\n pyversion = get_pyversion()\n site_packages_path = f\"lib/{pyversion}/site-packages\"\n env_site_packages = Path(env.path) / site_packages_path\n sysconfigdata_name = get_build_flag(\"SYSCONFIG_NAME\")\n sysconfigdata_path = (\n Path(get_build_flag(\"TARGETINSTALLDIR\"))\n / f\"sysconfigdata/{sysconfigdata_name}.py\"\n )\n\n env_site_packages.mkdir(parents=True, exist_ok=True)\n shutil.copy(sysconfigdata_path, env_site_packages)\n host_site_packages = Path(get_hostsitepackages())\n for name in get_unisolated_packages():\n for path in chain(\n host_site_packages.glob(f\"{name}*\"), host_site_packages.glob(f\"_{name}*\")\n ):\n (env_site_packages / path.name).unlink(missing_ok=True)\n (env_site_packages / path.name).symlink_to(path)\n\n\ndef remove_avoided_requirements(\n requires: set[str], avoided_requirements: set[str] | list[str]\n) -> set[str]:\n for reqstr in list(requires):\n req = Requirement(reqstr)\n for avoid_name in set(avoided_requirements):\n if avoid_name in req.name.lower():\n requires.remove(reqstr)\n return requires\n\n\ndef install_reqs(env: DefaultIsolatedEnv, reqs: set[str]) -> None:\n env.install(\n 
remove_avoided_requirements(\n reqs,\n get_unisolated_packages() + AVOIDED_REQUIREMENTS,\n )\n )\n\n\ndef _build_in_isolated_env(\n build_env: Mapping[str, str],\n srcdir: Path,\n outdir: str,\n distribution: str,\n config_settings: ConfigSettingsType,\n) -> str:\n # For debugging: The following line disables removal of the isolated venv.\n # It will be left in the /tmp folder and can be inspected or entered as\n # needed.\n # _DefaultIsolatedEnv.__exit__ = lambda *args: None\n with _DefaultIsolatedEnv() as env:\n builder = _ProjectBuilder.from_isolated_env(\n env,\n srcdir,\n runner=_gen_runner(build_env, env),\n )\n\n # first install the build dependencies\n symlink_unisolated_packages(env)\n install_reqs(env, builder.build_system_requires)\n installed_requires_for_build = False\n try:\n build_reqs = builder.get_requires_for_build(\n distribution,\n )\n except BuildBackendException:\n pass\n else:\n install_reqs(env, build_reqs)\n installed_requires_for_build = True\n\n with common.replace_env(build_env):\n if not installed_requires_for_build:\n build_reqs = builder.get_requires_for_build(\n distribution,\n config_settings,\n )\n install_reqs(env, build_reqs)\n\n return builder.build(distribution, outdir, config_settings)\n\n\ndef parse_backend_flags(backend_flags: str | list[str]) -> ConfigSettingsType:\n config_settings: dict[str, str | list[str]] = {}\n\n if isinstance(backend_flags, str):\n backend_flags = backend_flags.split()\n\n for arg in backend_flags:\n setting, _, value = arg.partition(\"=\")\n if setting not in config_settings:\n config_settings[setting] = value\n continue\n\n cur_value = config_settings[setting]\n if isinstance(cur_value, str):\n config_settings[setting] = [cur_value, value]\n else:\n cur_value.append(value)\n return config_settings\n\n\ndef make_command_wrapper_symlinks(symlink_dir: Path) -> dict[str, str]:\n \"\"\"\n Create symlinks that make pywasmcross look like a compiler.\n\n Parameters\n ----------\n symlink_dir\n The directory where the symlinks will be created.\n\n Returns\n -------\n The dictionary of compiler environment variables that points to the symlinks.\n \"\"\"\n\n pywasmcross_exe = symlink_dir / \"pywasmcross.py\"\n shutil.copy2(pywasmcross.__file__, pywasmcross_exe)\n pywasmcross_exe.chmod(0o755)\n\n env = {}\n for symlink in pywasmcross.SYMLINKS:\n symlink_path = symlink_dir / symlink\n if os.path.lexists(symlink_path) and not symlink_path.exists():\n # remove broken symlink so it can be re-created\n symlink_path.unlink()\n\n symlink_path.symlink_to(pywasmcross_exe)\n if symlink == \"c++\":\n var = \"CXX\"\n elif symlink == \"gfortran\":\n var = \"FC\" # https://mesonbuild.com/Reference-tables.html#compiler-and-linker-selection-variables\n else:\n var = symlink.upper()\n env[var] = str(symlink_path)\n\n return env\n\n\n@contextmanager\ndef get_build_env(\n env: dict[str, str],\n *,\n pkgname: str,\n cflags: str,\n cxxflags: str,\n ldflags: str,\n target_install_dir: str,\n exports: _BuildSpecExports,\n) -> Iterator[dict[str, str]]:\n \"\"\"\n Returns a dict of environment variables that should be used when building\n a package with pypa/build.\n \"\"\"\n\n kwargs = dict(\n pkgname=pkgname,\n cflags=cflags,\n cxxflags=cxxflags,\n ldflags=ldflags,\n target_install_dir=target_install_dir,\n )\n\n args = common.environment_substitute_args(kwargs, env)\n args[\"builddir\"] = str(Path(\".\").absolute())\n args[\"exports\"] = exports\n env = env.copy()\n\n with TemporaryDirectory() as symlink_dir_str:\n symlink_dir = Path(symlink_dir_str)\n 
env.update(make_command_wrapper_symlinks(symlink_dir))\n\n sysconfig_dir = Path(get_build_flag(\"TARGETINSTALLDIR\")) / \"sysconfigdata\"\n args[\"PYTHONPATH\"] = sys.path + [str(sysconfig_dir)]\n args[\"orig__name__\"] = __name__\n args[\"pythoninclude\"] = get_build_flag(\"PYTHONINCLUDE\")\n args[\"PATH\"] = env[\"PATH\"]\n\n pywasmcross_env = json.dumps(args)\n # Store into environment variable and to disk. In most cases we will\n # load from the environment variable but if some other tool filters\n # environment variables we will load from disk instead.\n env[\"PYWASMCROSS_ARGS\"] = pywasmcross_env\n (symlink_dir / \"pywasmcross_env.json\").write_text(pywasmcross_env)\n\n env[\"_PYTHON_HOST_PLATFORM\"] = platform()\n env[\"_PYTHON_SYSCONFIGDATA_NAME\"] = get_build_flag(\"SYSCONFIG_NAME\")\n env[\"PYTHONPATH\"] = str(sysconfig_dir)\n env[\"COMPILER_WRAPPER_DIR\"] = str(symlink_dir)\n\n yield env\n\n\ndef build(\n srcdir: Path,\n outdir: Path,\n build_env: Mapping[str, str],\n config_settings: ConfigSettingsType,\n) -> str:\n distribution = \"wheel\"\n try:\n with _handle_build_error():\n built = _build_in_isolated_env(\n build_env, srcdir, str(outdir), distribution, config_settings\n )\n print(\"{bold}{green}Successfully built {}{reset}\".format(built, **_STYLES))\n return built\n except Exception as e: # pragma: no cover\n tb = traceback.format_exc().strip(\"\\n\")\n print(\"\\n{dim}{}{reset}\\n\".format(tb, **_STYLES))\n _error(str(e))\n sys.exit(1)\n", "path": "pyodide-build/pyodide_build/pypabuild.py"}], "after_files": [{"content": "import json\nimport os\nimport shutil\nimport subprocess as sp\nimport sys\nimport traceback\nfrom collections.abc import Callable, Iterator, Mapping, Sequence\nfrom contextlib import contextmanager\nfrom itertools import chain\nfrom pathlib import Path\nfrom tempfile import TemporaryDirectory\n\nfrom build import BuildBackendException, ConfigSettingsType\nfrom build.env import DefaultIsolatedEnv\nfrom packaging.requirements import Requirement\n\nfrom . 
import common, pywasmcross\nfrom .build_env import (\n get_build_flag,\n get_hostsitepackages,\n get_pyversion,\n get_unisolated_packages,\n platform,\n)\nfrom .io import _BuildSpecExports\nfrom .vendor._pypabuild import (\n _STYLES,\n _DefaultIsolatedEnv,\n _error,\n _handle_build_error,\n _ProjectBuilder,\n)\n\nAVOIDED_REQUIREMENTS = [\n # We don't want to install cmake Python package inside the isolated env as it will shadow\n # the pywasmcross cmake wrapper.\n # TODO: Find a way to make scikit-build use the pywasmcross cmake wrapper.\n \"cmake\",\n # mesonpy installs patchelf in linux platform but we don't want it.\n \"patchelf\",\n]\n\n# corresponding env variables for symlinks\nSYMLINK_ENV_VARS = {\n \"cc\": \"CC\",\n \"c++\": \"CXX\",\n \"ld\": \"LD\",\n \"lld\": \"LLD\",\n \"ar\": \"AR\",\n \"gcc\": \"GCC\",\n \"ranlib\": \"RANLIB\",\n \"strip\": \"STRIP\",\n \"gfortran\": \"FC\", # https://mesonbuild.com/Reference-tables.html#compiler-and-linker-selection-variables\n}\n\n\ndef _gen_runner(\n cross_build_env: Mapping[str, str],\n isolated_build_env: DefaultIsolatedEnv,\n) -> Callable[[Sequence[str], str | None, Mapping[str, str] | None], None]:\n \"\"\"\n This returns a slightly modified version of default subprocess runner that pypa/build uses.\n pypa/build prepends the virtual environment's bin directory to the PATH environment variable.\n This is problematic because it shadows the pywasmcross compiler wrappers for cmake, meson, etc.\n\n This function prepends the compiler wrapper directory to the PATH again so that our compiler wrappers\n are searched first.\n\n Parameters\n ----------\n cross_build_env\n The cross build environment for pywasmcross.\n isolated_build_env\n The isolated build environment created by pypa/build.\n \"\"\"\n\n def _runner(cmd, cwd=None, extra_environ=None):\n env = os.environ.copy()\n if extra_environ:\n env.update(extra_environ)\n\n # Some build dependencies like cmake, meson installs binaries to this directory\n # and we should add it to the PATH so that they can be found.\n env[\"BUILD_ENV_SCRIPTS_DIR\"] = isolated_build_env._scripts_dir\n env[\"PATH\"] = f\"{cross_build_env['COMPILER_WRAPPER_DIR']}:{env['PATH']}\"\n # For debugging: Uncomment the following line to print the build command\n # print(\"Build backend call:\", \" \".join(str(x) for x in cmd), file=sys.stderr)\n sp.check_call(cmd, cwd=cwd, env=env)\n\n return _runner\n\n\ndef symlink_unisolated_packages(env: DefaultIsolatedEnv) -> None:\n pyversion = get_pyversion()\n site_packages_path = f\"lib/{pyversion}/site-packages\"\n env_site_packages = Path(env.path) / site_packages_path\n sysconfigdata_name = get_build_flag(\"SYSCONFIG_NAME\")\n sysconfigdata_path = (\n Path(get_build_flag(\"TARGETINSTALLDIR\"))\n / f\"sysconfigdata/{sysconfigdata_name}.py\"\n )\n\n env_site_packages.mkdir(parents=True, exist_ok=True)\n shutil.copy(sysconfigdata_path, env_site_packages)\n host_site_packages = Path(get_hostsitepackages())\n for name in get_unisolated_packages():\n for path in chain(\n host_site_packages.glob(f\"{name}*\"), host_site_packages.glob(f\"_{name}*\")\n ):\n (env_site_packages / path.name).unlink(missing_ok=True)\n (env_site_packages / path.name).symlink_to(path)\n\n\ndef remove_avoided_requirements(\n requires: set[str], avoided_requirements: set[str] | list[str]\n) -> set[str]:\n for reqstr in list(requires):\n req = Requirement(reqstr)\n for avoid_name in set(avoided_requirements):\n if avoid_name in req.name.lower():\n requires.remove(reqstr)\n return requires\n\n\ndef 
install_reqs(env: DefaultIsolatedEnv, reqs: set[str]) -> None:\n env.install(\n remove_avoided_requirements(\n reqs,\n get_unisolated_packages() + AVOIDED_REQUIREMENTS,\n )\n )\n\n\ndef _build_in_isolated_env(\n build_env: Mapping[str, str],\n srcdir: Path,\n outdir: str,\n distribution: str,\n config_settings: ConfigSettingsType,\n) -> str:\n # For debugging: The following line disables removal of the isolated venv.\n # It will be left in the /tmp folder and can be inspected or entered as\n # needed.\n # _DefaultIsolatedEnv.__exit__ = lambda *args: None\n with _DefaultIsolatedEnv() as env:\n builder = _ProjectBuilder.from_isolated_env(\n env,\n srcdir,\n runner=_gen_runner(build_env, env),\n )\n\n # first install the build dependencies\n symlink_unisolated_packages(env)\n install_reqs(env, builder.build_system_requires)\n installed_requires_for_build = False\n try:\n build_reqs = builder.get_requires_for_build(\n distribution,\n )\n except BuildBackendException:\n pass\n else:\n install_reqs(env, build_reqs)\n installed_requires_for_build = True\n\n with common.replace_env(build_env):\n if not installed_requires_for_build:\n build_reqs = builder.get_requires_for_build(\n distribution,\n config_settings,\n )\n install_reqs(env, build_reqs)\n\n return builder.build(distribution, outdir, config_settings)\n\n\ndef parse_backend_flags(backend_flags: str | list[str]) -> ConfigSettingsType:\n config_settings: dict[str, str | list[str]] = {}\n\n if isinstance(backend_flags, str):\n backend_flags = backend_flags.split()\n\n for arg in backend_flags:\n setting, _, value = arg.partition(\"=\")\n if setting not in config_settings:\n config_settings[setting] = value\n continue\n\n cur_value = config_settings[setting]\n if isinstance(cur_value, str):\n config_settings[setting] = [cur_value, value]\n else:\n cur_value.append(value)\n return config_settings\n\n\ndef make_command_wrapper_symlinks(symlink_dir: Path) -> dict[str, str]:\n \"\"\"\n Create symlinks that make pywasmcross look like a compiler.\n\n Parameters\n ----------\n symlink_dir\n The directory where the symlinks will be created.\n\n Returns\n -------\n The dictionary of compiler environment variables that points to the symlinks.\n \"\"\"\n\n pywasmcross_exe = symlink_dir / \"pywasmcross.py\"\n shutil.copy2(pywasmcross.__file__, pywasmcross_exe)\n pywasmcross_exe.chmod(0o755)\n\n env = {}\n for symlink in pywasmcross.SYMLINKS:\n symlink_path = symlink_dir / symlink\n if os.path.lexists(symlink_path) and not symlink_path.exists():\n # remove broken symlink so it can be re-created\n symlink_path.unlink()\n\n symlink_path.symlink_to(pywasmcross_exe)\n if symlink in SYMLINK_ENV_VARS:\n env[SYMLINK_ENV_VARS[symlink]] = str(symlink_path)\n\n return env\n\n\n@contextmanager\ndef get_build_env(\n env: dict[str, str],\n *,\n pkgname: str,\n cflags: str,\n cxxflags: str,\n ldflags: str,\n target_install_dir: str,\n exports: _BuildSpecExports,\n) -> Iterator[dict[str, str]]:\n \"\"\"\n Returns a dict of environment variables that should be used when building\n a package with pypa/build.\n \"\"\"\n\n kwargs = dict(\n pkgname=pkgname,\n cflags=cflags,\n cxxflags=cxxflags,\n ldflags=ldflags,\n target_install_dir=target_install_dir,\n )\n\n args = common.environment_substitute_args(kwargs, env)\n args[\"builddir\"] = str(Path(\".\").absolute())\n args[\"exports\"] = exports\n env = env.copy()\n\n with TemporaryDirectory() as symlink_dir_str:\n symlink_dir = Path(symlink_dir_str)\n env.update(make_command_wrapper_symlinks(symlink_dir))\n\n sysconfig_dir = 
Path(get_build_flag(\"TARGETINSTALLDIR\")) / \"sysconfigdata\"\n args[\"PYTHONPATH\"] = sys.path + [str(sysconfig_dir)]\n args[\"orig__name__\"] = __name__\n args[\"pythoninclude\"] = get_build_flag(\"PYTHONINCLUDE\")\n args[\"PATH\"] = env[\"PATH\"]\n\n pywasmcross_env = json.dumps(args)\n # Store into environment variable and to disk. In most cases we will\n # load from the environment variable but if some other tool filters\n # environment variables we will load from disk instead.\n env[\"PYWASMCROSS_ARGS\"] = pywasmcross_env\n (symlink_dir / \"pywasmcross_env.json\").write_text(pywasmcross_env)\n\n env[\"_PYTHON_HOST_PLATFORM\"] = platform()\n env[\"_PYTHON_SYSCONFIGDATA_NAME\"] = get_build_flag(\"SYSCONFIG_NAME\")\n env[\"PYTHONPATH\"] = str(sysconfig_dir)\n env[\"COMPILER_WRAPPER_DIR\"] = str(symlink_dir)\n\n yield env\n\n\ndef build(\n srcdir: Path,\n outdir: Path,\n build_env: Mapping[str, str],\n config_settings: ConfigSettingsType,\n) -> str:\n distribution = \"wheel\"\n try:\n with _handle_build_error():\n built = _build_in_isolated_env(\n build_env, srcdir, str(outdir), distribution, config_settings\n )\n print(\"{bold}{green}Successfully built {}{reset}\".format(built, **_STYLES))\n return built\n except Exception as e: # pragma: no cover\n tb = traceback.format_exc().strip(\"\\n\")\n print(\"\\n{dim}{}{reset}\\n\".format(tb, **_STYLES))\n _error(str(e))\n sys.exit(1)\n", "path": "pyodide-build/pyodide_build/pypabuild.py"}]}
| 4,086 | 371 |
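The diff in the record above swaps an `if`/`elif` chain for a dictionary lookup when exporting compiler variables (`CC`, `CXX`, `FC`, ...) for the pywasmcross symlinks. A condensed sketch of that lookup pattern is below; the mapping mirrors the diff, but the helper function and its name are illustrative and not part of pyodide-build:

```python
from __future__ import annotations

# Sketch of the lookup-table pattern from the diff above (illustrative only).
SYMLINK_ENV_VARS = {
    "cc": "CC",
    "c++": "CXX",
    "ld": "LD",
    "lld": "LLD",
    "ar": "AR",
    "gcc": "GCC",
    "ranlib": "RANLIB",
    "strip": "STRIP",
    "gfortran": "FC",  # meson reads the Fortran compiler from FC
}


def env_for_symlink(symlink: str, symlink_path: str) -> dict[str, str]:
    """Return the environment variable assignment for one compiler symlink."""
    var = SYMLINK_ENV_VARS.get(symlink)
    # Symlinks without a known variable are simply skipped, instead of
    # falling back to symlink.upper() as the removed if/elif chain did.
    return {var: symlink_path} if var else {}


print(env_for_symlink("gfortran", "/tmp/wrappers/gfortran"))  # {'FC': '/tmp/wrappers/gfortran'}
```

Keeping the mapping in a single table also records, in one place, which symlinks are expected to shadow a compiler variable at all.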
gh_patches_debug_64666 | rasdani/github-patches | git_diff | aws__aws-cli-416 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
S3 - RequestTimeout during large files
I'm trying to upload a large file (9 GB) and getting a RequestTimeout error using `aws s3 mv ...`
I haven't fully tested it yet, but it seems like if I run the command over and over it will eventually work.
Here's the debug log from a failed attempt: https://s3.amazonaws.com/nimbus-public/s3_backup.log
I'll post back if I determine that retrying the command several times works or not.
aws version: aws-cli/1.1.2 Python/2.7.3 Windows/2008ServerR2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/s3/utils.py`
Content:
```
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 from datetime import datetime
14 import mimetypes
15 import hashlib
16 import math
17 import os
18 import sys
19 from functools import partial
20
21 from six import PY3
22 from six.moves import queue as Queue
23 from dateutil.tz import tzlocal
24
25 from awscli.customizations.s3.constants import QUEUE_TIMEOUT_WAIT, \
26 MAX_PARTS, MAX_SINGLE_UPLOAD_SIZE
27
28
29 class MD5Error(Exception):
30 """
31 Exception for md5's that do not match.
32 """
33 pass
34
35
36 class NoBlockQueue(Queue.Queue):
37 """
38 This queue ensures that joining does not block interrupt signals.
39 It also contains a threading event ``interrupt`` that breaks the
40 while loop if signaled. The ``interrupt`` signal is optional.
41 If left out, this should act like a normal queue.
42 """
43 def __init__(self, interrupt=None, maxsize=0):
44 Queue.Queue.__init__(self, maxsize=maxsize)
45 self.interrupt = interrupt
46
47 def join(self):
48 self.all_tasks_done.acquire()
49 try:
50 while self.unfinished_tasks:
51 if self.interrupt and self.interrupt.isSet():
52 break
53 self.all_tasks_done.wait(QUEUE_TIMEOUT_WAIT)
54 finally:
55 self.all_tasks_done.release()
56
57
58 def find_bucket_key(s3_path):
59 """
60 This is a helper function that given an s3 path such that the path is of
61 the form: bucket/key
62 It will return the bucket and the key represented by the s3 path
63 """
64 s3_components = s3_path.split('/')
65 bucket = s3_components[0]
66 s3_key = ""
67 if len(s3_components) > 1:
68 s3_key = '/'.join(s3_components[1:])
69 return bucket, s3_key
70
71
72 def get_file_stat(path):
73 """
74 This is a helper function that given a local path return the size of
75 the file in bytes and time of last modification.
76 """
77 stats = os.stat(path)
78 update_time = datetime.fromtimestamp(stats.st_mtime, tzlocal())
79 return stats.st_size, update_time
80
81
82 def check_etag(etag, fileobj):
83 """
84 This fucntion checks the etag and the md5 checksum to ensure no
85 data was corrupted upon transfer.
86 """
87 get_chunk = partial(fileobj.read, 1024 * 1024)
88 m = hashlib.md5()
89 for chunk in iter(get_chunk, b''):
90 m.update(chunk)
91 if '-' not in etag:
92 if etag != m.hexdigest():
93 raise MD5Error
94
95
96 def check_error(response_data):
97 """
98 A helper function that prints out the error message recieved in the
99 response_data and raises an error when there is an error.
100 """
101 if response_data:
102 if 'Errors' in response_data:
103 errors = response_data['Errors']
104 for error in errors:
105 raise Exception("Error: %s\n" % error['Message'])
106
107
108 def operate(service, cmd, kwargs):
109 """
110 A helper function that universally calls any command by taking in the
111 service, name of the command, and any additional parameters required in
112 the call.
113 """
114 operation = service.get_operation(cmd)
115 http_response, response_data = operation.call(**kwargs)
116 check_error(response_data)
117 return response_data, http_response
118
119
120 def find_chunksize(size, current_chunksize):
121 """
122 The purpose of this function is determine a chunksize so that
123 the number of parts in a multipart upload is not greater than
124 the ``MAX_PARTS``. If the ``chunksize`` is greater than
125 ``MAX_SINGLE_UPLOAD_SIZE`` it returns ``MAX_SINGLE_UPLOAD_SIZE``.
126 """
127 chunksize = current_chunksize
128 num_parts = int(math.ceil(size / float(chunksize)))
129 while num_parts > MAX_PARTS:
130 chunksize *= 2
131 num_parts = int(math.ceil(size / float(chunksize)))
132 if chunksize > MAX_SINGLE_UPLOAD_SIZE:
133 return MAX_SINGLE_UPLOAD_SIZE
134 else:
135 return chunksize
136
137
138 class MultiCounter(object):
139 """
140 This class is used as a way to keep track of how many multipart
141 operations are in progress. It also is used to track how many
142 part operations are occuring.
143 """
144 def __init__(self):
145 self.count = 0
146
147
148 def uni_print(statement):
149 """
150 This function is used to properly write unicode to stdout. It
151 ensures that the proper encoding is used if the statement is
152 not in a version type of string. The initial check is to
153 allow if ``sys.stdout`` does not use an encoding
154 """
155 encoding = getattr(sys.stdout, 'encoding', None)
156 if encoding is not None and not PY3:
157 sys.stdout.write(statement.encode(sys.stdout.encoding))
158 else:
159 try:
160 sys.stdout.write(statement)
161 except UnicodeEncodeError:
162 # Some file like objects like cStringIO will
163 # try to decode as ascii. Interestingly enough
164 # this works with a normal StringIO.
165 sys.stdout.write(statement.encode('utf-8'))
166
167
168 def guess_content_type(filename):
169 """Given a filename, guess it's content type.
170
171 If the type cannot be guessed, a value of None is returned.
172 """
173 return mimetypes.guess_type(filename)[0]
174
175
176 def relative_path(filename, start=os.path.curdir):
177 """Cross platform relative path of a filename.
178
179 If no relative path can be calculated (i.e different
180 drives on Windows), then instead of raising a ValueError,
181 the absolute path is returned.
182
183 """
184 try:
185 dirname, basename = os.path.split(filename)
186 relative_dir = os.path.relpath(dirname, start)
187 return os.path.join(relative_dir, basename)
188 except ValueError:
189 return os.path.abspath(filename)
190
191
192 class ReadFileChunk(object):
193 def __init__(self, filename, start_byte, size):
194 self._filename = filename
195 self._start_byte = start_byte
196 self._fileobj = open(self._filename, 'rb')
197 self._size = self._calculate_file_size(self._fileobj, requested_size=size,
198 start_byte=start_byte)
199 self._fileobj.seek(self._start_byte)
200 self._amount_read = 0
201
202 def _calculate_file_size(self, fileobj, requested_size, start_byte):
203 actual_file_size = os.fstat(fileobj.fileno()).st_size
204 max_chunk_size = actual_file_size - start_byte
205 return min(max_chunk_size, requested_size)
206
207 def read(self, amount=None):
208 if amount is None:
209 remaining = self._size - self._amount_read
210 data = self._fileobj.read(remaining)
211 self._amount_read += remaining
212 return data
213 else:
214 actual_amount = min(self._size - self._amount_read, amount)
215 data = self._fileobj.read(actual_amount)
216 self._amount_read += actual_amount
217 return data
218
219 def close(self):
220 self._fileobj.close()
221
222 def __len__(self):
223 # __len__ is defined because requests will try to determine the length
224 # of the stream to set a content length. In the normal case
225 # of the file it will just stat the file, but we need to change that
226 # behavior. By providing a __len__, requests will use that instead
227 # of stat'ing the file.
228 return self._size
229
230 def __enter__(self):
231 return self
232
233 def __exit__(self, *args, **kwargs):
234 self._fileobj.close()
235
236 def __iter__(self):
237 # This is a workaround for http://bugs.python.org/issue17575
238 # Basically httplib will try to iterate over the contents, even
239 # if its a file like object. This wasn't noticed because we've
240 # already exhausted the stream so iterating over the file immediately
241 # steps, which is what we're simulating here.
242 return iter([])
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awscli/customizations/s3/utils.py b/awscli/customizations/s3/utils.py
--- a/awscli/customizations/s3/utils.py
+++ b/awscli/customizations/s3/utils.py
@@ -216,6 +216,10 @@
self._amount_read += actual_amount
return data
+ def seek(self, where):
+ self._fileobj.seek(self._start_byte + where)
+ self._amount_read = where
+
def close(self):
self._fileobj.close()
|
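The `seek` method added in the patch above is what makes re-sending a part possible: after a `RequestTimeout`, the chunk can be rewound to its beginning before the body is transmitted again, whereas without it a retry would resume reading wherever the failed attempt stopped and send a truncated part. A minimal sketch of that retry pattern follows; the `send_part` callable and the loop itself are illustrative, not awscli or botocore code:

```python
# Illustrative retry loop: rewind the chunk before every re-send.
def send_with_retry(send_part, chunk, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return send_part(body=chunk)
        except Exception:
            if attempt == max_attempts:
                raise
            # seek(0) moves back to this part's start_byte and resets the
            # internal _amount_read counter, so read() yields the full part again.
            chunk.seek(0)
```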
{"golden_diff": "diff --git a/awscli/customizations/s3/utils.py b/awscli/customizations/s3/utils.py\n--- a/awscli/customizations/s3/utils.py\n+++ b/awscli/customizations/s3/utils.py\n@@ -216,6 +216,10 @@\n self._amount_read += actual_amount\n return data\n \n+ def seek(self, where):\n+ self._fileobj.seek(self._start_byte + where)\n+ self._amount_read = where\n+\n def close(self):\n self._fileobj.close()\n", "issue": "S3 - RequestTimeout during large files\nI'm trying to upload a large file (9 GB) and getting a RequestTimeout error using `aws s3 mv ...`\n\nI haven't fully tested it yet, but it seems like if I run the command over and over it will eventually work.\n\nHere's the debug log from a failed attempt: https://s3.amazonaws.com/nimbus-public/s3_backup.log\n\nI'll post back if I determine that retrying the command several times works or not.\n\naws version: aws-cli/1.1.2 Python/2.7.3 Windows/2008ServerR2\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nfrom datetime import datetime\nimport mimetypes\nimport hashlib\nimport math\nimport os\nimport sys\nfrom functools import partial\n\nfrom six import PY3\nfrom six.moves import queue as Queue\nfrom dateutil.tz import tzlocal\n\nfrom awscli.customizations.s3.constants import QUEUE_TIMEOUT_WAIT, \\\n MAX_PARTS, MAX_SINGLE_UPLOAD_SIZE\n\n\nclass MD5Error(Exception):\n \"\"\"\n Exception for md5's that do not match.\n \"\"\"\n pass\n\n\nclass NoBlockQueue(Queue.Queue):\n \"\"\"\n This queue ensures that joining does not block interrupt signals.\n It also contains a threading event ``interrupt`` that breaks the\n while loop if signaled. 
The ``interrupt`` signal is optional.\n If left out, this should act like a normal queue.\n \"\"\"\n def __init__(self, interrupt=None, maxsize=0):\n Queue.Queue.__init__(self, maxsize=maxsize)\n self.interrupt = interrupt\n\n def join(self):\n self.all_tasks_done.acquire()\n try:\n while self.unfinished_tasks:\n if self.interrupt and self.interrupt.isSet():\n break\n self.all_tasks_done.wait(QUEUE_TIMEOUT_WAIT)\n finally:\n self.all_tasks_done.release()\n\n\ndef find_bucket_key(s3_path):\n \"\"\"\n This is a helper function that given an s3 path such that the path is of\n the form: bucket/key\n It will return the bucket and the key represented by the s3 path\n \"\"\"\n s3_components = s3_path.split('/')\n bucket = s3_components[0]\n s3_key = \"\"\n if len(s3_components) > 1:\n s3_key = '/'.join(s3_components[1:])\n return bucket, s3_key\n\n\ndef get_file_stat(path):\n \"\"\"\n This is a helper function that given a local path return the size of\n the file in bytes and time of last modification.\n \"\"\"\n stats = os.stat(path)\n update_time = datetime.fromtimestamp(stats.st_mtime, tzlocal())\n return stats.st_size, update_time\n\n\ndef check_etag(etag, fileobj):\n \"\"\"\n This fucntion checks the etag and the md5 checksum to ensure no\n data was corrupted upon transfer.\n \"\"\"\n get_chunk = partial(fileobj.read, 1024 * 1024)\n m = hashlib.md5()\n for chunk in iter(get_chunk, b''):\n m.update(chunk)\n if '-' not in etag:\n if etag != m.hexdigest():\n raise MD5Error\n\n\ndef check_error(response_data):\n \"\"\"\n A helper function that prints out the error message recieved in the\n response_data and raises an error when there is an error.\n \"\"\"\n if response_data:\n if 'Errors' in response_data:\n errors = response_data['Errors']\n for error in errors:\n raise Exception(\"Error: %s\\n\" % error['Message'])\n\n\ndef operate(service, cmd, kwargs):\n \"\"\"\n A helper function that universally calls any command by taking in the\n service, name of the command, and any additional parameters required in\n the call.\n \"\"\"\n operation = service.get_operation(cmd)\n http_response, response_data = operation.call(**kwargs)\n check_error(response_data)\n return response_data, http_response\n\n\ndef find_chunksize(size, current_chunksize):\n \"\"\"\n The purpose of this function is determine a chunksize so that\n the number of parts in a multipart upload is not greater than\n the ``MAX_PARTS``. If the ``chunksize`` is greater than\n ``MAX_SINGLE_UPLOAD_SIZE`` it returns ``MAX_SINGLE_UPLOAD_SIZE``.\n \"\"\"\n chunksize = current_chunksize\n num_parts = int(math.ceil(size / float(chunksize)))\n while num_parts > MAX_PARTS:\n chunksize *= 2\n num_parts = int(math.ceil(size / float(chunksize)))\n if chunksize > MAX_SINGLE_UPLOAD_SIZE:\n return MAX_SINGLE_UPLOAD_SIZE\n else:\n return chunksize\n\n\nclass MultiCounter(object):\n \"\"\"\n This class is used as a way to keep track of how many multipart\n operations are in progress. It also is used to track how many\n part operations are occuring.\n \"\"\"\n def __init__(self):\n self.count = 0\n\n\ndef uni_print(statement):\n \"\"\"\n This function is used to properly write unicode to stdout. It\n ensures that the proper encoding is used if the statement is\n not in a version type of string. 
The initial check is to\n allow if ``sys.stdout`` does not use an encoding\n \"\"\"\n encoding = getattr(sys.stdout, 'encoding', None)\n if encoding is not None and not PY3:\n sys.stdout.write(statement.encode(sys.stdout.encoding))\n else:\n try:\n sys.stdout.write(statement)\n except UnicodeEncodeError:\n # Some file like objects like cStringIO will\n # try to decode as ascii. Interestingly enough\n # this works with a normal StringIO.\n sys.stdout.write(statement.encode('utf-8'))\n\n\ndef guess_content_type(filename):\n \"\"\"Given a filename, guess it's content type.\n\n If the type cannot be guessed, a value of None is returned.\n \"\"\"\n return mimetypes.guess_type(filename)[0]\n\n\ndef relative_path(filename, start=os.path.curdir):\n \"\"\"Cross platform relative path of a filename.\n\n If no relative path can be calculated (i.e different\n drives on Windows), then instead of raising a ValueError,\n the absolute path is returned.\n\n \"\"\"\n try:\n dirname, basename = os.path.split(filename)\n relative_dir = os.path.relpath(dirname, start)\n return os.path.join(relative_dir, basename)\n except ValueError:\n return os.path.abspath(filename)\n\n\nclass ReadFileChunk(object):\n def __init__(self, filename, start_byte, size):\n self._filename = filename\n self._start_byte = start_byte\n self._fileobj = open(self._filename, 'rb')\n self._size = self._calculate_file_size(self._fileobj, requested_size=size,\n start_byte=start_byte)\n self._fileobj.seek(self._start_byte)\n self._amount_read = 0\n\n def _calculate_file_size(self, fileobj, requested_size, start_byte):\n actual_file_size = os.fstat(fileobj.fileno()).st_size\n max_chunk_size = actual_file_size - start_byte\n return min(max_chunk_size, requested_size)\n\n def read(self, amount=None):\n if amount is None:\n remaining = self._size - self._amount_read\n data = self._fileobj.read(remaining)\n self._amount_read += remaining\n return data\n else:\n actual_amount = min(self._size - self._amount_read, amount)\n data = self._fileobj.read(actual_amount)\n self._amount_read += actual_amount\n return data\n\n def close(self):\n self._fileobj.close()\n\n def __len__(self):\n # __len__ is defined because requests will try to determine the length\n # of the stream to set a content length. In the normal case\n # of the file it will just stat the file, but we need to change that\n # behavior. By providing a __len__, requests will use that instead\n # of stat'ing the file.\n return self._size\n\n def __enter__(self):\n return self\n\n def __exit__(self, *args, **kwargs):\n self._fileobj.close()\n\n def __iter__(self):\n # This is a workaround for http://bugs.python.org/issue17575\n # Basically httplib will try to iterate over the contents, even\n # if its a file like object. This wasn't noticed because we've\n # already exhausted the stream so iterating over the file immediately\n # steps, which is what we're simulating here.\n return iter([])\n", "path": "awscli/customizations/s3/utils.py"}], "after_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\nfrom datetime import datetime\nimport mimetypes\nimport hashlib\nimport math\nimport os\nimport sys\nfrom functools import partial\n\nfrom six import PY3\nfrom six.moves import queue as Queue\nfrom dateutil.tz import tzlocal\n\nfrom awscli.customizations.s3.constants import QUEUE_TIMEOUT_WAIT, \\\n MAX_PARTS, MAX_SINGLE_UPLOAD_SIZE\n\n\nclass MD5Error(Exception):\n \"\"\"\n Exception for md5's that do not match.\n \"\"\"\n pass\n\n\nclass NoBlockQueue(Queue.Queue):\n \"\"\"\n This queue ensures that joining does not block interrupt signals.\n It also contains a threading event ``interrupt`` that breaks the\n while loop if signaled. The ``interrupt`` signal is optional.\n If left out, this should act like a normal queue.\n \"\"\"\n def __init__(self, interrupt=None, maxsize=0):\n Queue.Queue.__init__(self, maxsize=maxsize)\n self.interrupt = interrupt\n\n def join(self):\n self.all_tasks_done.acquire()\n try:\n while self.unfinished_tasks:\n if self.interrupt and self.interrupt.isSet():\n break\n self.all_tasks_done.wait(QUEUE_TIMEOUT_WAIT)\n finally:\n self.all_tasks_done.release()\n\n\ndef find_bucket_key(s3_path):\n \"\"\"\n This is a helper function that given an s3 path such that the path is of\n the form: bucket/key\n It will return the bucket and the key represented by the s3 path\n \"\"\"\n s3_components = s3_path.split('/')\n bucket = s3_components[0]\n s3_key = \"\"\n if len(s3_components) > 1:\n s3_key = '/'.join(s3_components[1:])\n return bucket, s3_key\n\n\ndef get_file_stat(path):\n \"\"\"\n This is a helper function that given a local path return the size of\n the file in bytes and time of last modification.\n \"\"\"\n stats = os.stat(path)\n update_time = datetime.fromtimestamp(stats.st_mtime, tzlocal())\n return stats.st_size, update_time\n\n\ndef check_etag(etag, fileobj):\n \"\"\"\n This fucntion checks the etag and the md5 checksum to ensure no\n data was corrupted upon transfer.\n \"\"\"\n get_chunk = partial(fileobj.read, 1024 * 1024)\n m = hashlib.md5()\n for chunk in iter(get_chunk, b''):\n m.update(chunk)\n if '-' not in etag:\n if etag != m.hexdigest():\n raise MD5Error\n\n\ndef check_error(response_data):\n \"\"\"\n A helper function that prints out the error message recieved in the\n response_data and raises an error when there is an error.\n \"\"\"\n if response_data:\n if 'Errors' in response_data:\n errors = response_data['Errors']\n for error in errors:\n raise Exception(\"Error: %s\\n\" % error['Message'])\n\n\ndef operate(service, cmd, kwargs):\n \"\"\"\n A helper function that universally calls any command by taking in the\n service, name of the command, and any additional parameters required in\n the call.\n \"\"\"\n operation = service.get_operation(cmd)\n http_response, response_data = operation.call(**kwargs)\n check_error(response_data)\n return response_data, http_response\n\n\ndef find_chunksize(size, current_chunksize):\n \"\"\"\n The purpose of this function is determine a chunksize so that\n the number of parts in a multipart upload is not greater than\n the ``MAX_PARTS``. 
If the ``chunksize`` is greater than\n ``MAX_SINGLE_UPLOAD_SIZE`` it returns ``MAX_SINGLE_UPLOAD_SIZE``.\n \"\"\"\n chunksize = current_chunksize\n num_parts = int(math.ceil(size / float(chunksize)))\n while num_parts > MAX_PARTS:\n chunksize *= 2\n num_parts = int(math.ceil(size / float(chunksize)))\n if chunksize > MAX_SINGLE_UPLOAD_SIZE:\n return MAX_SINGLE_UPLOAD_SIZE\n else:\n return chunksize\n\n\nclass MultiCounter(object):\n \"\"\"\n This class is used as a way to keep track of how many multipart\n operations are in progress. It also is used to track how many\n part operations are occuring.\n \"\"\"\n def __init__(self):\n self.count = 0\n\n\ndef uni_print(statement):\n \"\"\"\n This function is used to properly write unicode to stdout. It\n ensures that the proper encoding is used if the statement is\n not in a version type of string. The initial check is to\n allow if ``sys.stdout`` does not use an encoding\n \"\"\"\n encoding = getattr(sys.stdout, 'encoding', None)\n if encoding is not None and not PY3:\n sys.stdout.write(statement.encode(sys.stdout.encoding))\n else:\n try:\n sys.stdout.write(statement)\n except UnicodeEncodeError:\n # Some file like objects like cStringIO will\n # try to decode as ascii. Interestingly enough\n # this works with a normal StringIO.\n sys.stdout.write(statement.encode('utf-8'))\n\n\ndef guess_content_type(filename):\n \"\"\"Given a filename, guess it's content type.\n\n If the type cannot be guessed, a value of None is returned.\n \"\"\"\n return mimetypes.guess_type(filename)[0]\n\n\ndef relative_path(filename, start=os.path.curdir):\n \"\"\"Cross platform relative path of a filename.\n\n If no relative path can be calculated (i.e different\n drives on Windows), then instead of raising a ValueError,\n the absolute path is returned.\n\n \"\"\"\n try:\n dirname, basename = os.path.split(filename)\n relative_dir = os.path.relpath(dirname, start)\n return os.path.join(relative_dir, basename)\n except ValueError:\n return os.path.abspath(filename)\n\n\nclass ReadFileChunk(object):\n def __init__(self, filename, start_byte, size):\n self._filename = filename\n self._start_byte = start_byte\n self._fileobj = open(self._filename, 'rb')\n self._size = self._calculate_file_size(self._fileobj, requested_size=size,\n start_byte=start_byte)\n self._fileobj.seek(self._start_byte)\n self._amount_read = 0\n\n def _calculate_file_size(self, fileobj, requested_size, start_byte):\n actual_file_size = os.fstat(fileobj.fileno()).st_size\n max_chunk_size = actual_file_size - start_byte\n return min(max_chunk_size, requested_size)\n\n def read(self, amount=None):\n if amount is None:\n remaining = self._size - self._amount_read\n data = self._fileobj.read(remaining)\n self._amount_read += remaining\n return data\n else:\n actual_amount = min(self._size - self._amount_read, amount)\n data = self._fileobj.read(actual_amount)\n self._amount_read += actual_amount\n return data\n\n def seek(self, where):\n self._fileobj.seek(self._start_byte + where)\n self._amount_read = where\n\n def close(self):\n self._fileobj.close()\n\n def __len__(self):\n # __len__ is defined because requests will try to determine the length\n # of the stream to set a content length. In the normal case\n # of the file it will just stat the file, but we need to change that\n # behavior. 
By providing a __len__, requests will use that instead\n # of stat'ing the file.\n return self._size\n\n def __enter__(self):\n return self\n\n def __exit__(self, *args, **kwargs):\n self._fileobj.close()\n\n def __iter__(self):\n # This is a workaround for http://bugs.python.org/issue17575\n # Basically httplib will try to iterate over the contents, even\n # if its a file like object. This wasn't noticed because we've\n # already exhausted the stream so iterating over the file immediately\n # steps, which is what we're simulating here.\n return iter([])\n", "path": "awscli/customizations/s3/utils.py"}]}
| 2,877 | 116 |
gh_patches_debug_606 | rasdani/github-patches | git_diff | pex-tool__pex-1559 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.61
On the docket:
+ [x] Merge packages for --venv-site-packages-copies. #1557
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.60"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.60"
+__version__ = "2.1.61"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.60\"\n+__version__ = \"2.1.61\"\n", "issue": "Release 2.1.61\nOn the docket:\r\n+ [x] Merge packages for --venv-site-packages-copies. #1557 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.60\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.61\"\n", "path": "pex/version.py"}]}
| 343 | 96 |
gh_patches_debug_16741 | rasdani/github-patches | git_diff | litestar-org__litestar-1938 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Prometheus client causes immediate websocket disconnect
### Description
When using the Prometheus integration, websockets always disconnect immediately after opening the connection.
I've done a bit of digging, and the error arises in `PrometheusMiddleware._get_default_labels` [here](https://github.com/litestar-org/litestar/blob/main/litestar/contrib/prometheus/middleware.py#L107-L122), where `request.method` raises an attribute error for websockets. One possible fix that I've tested locally is to change it to:
```python
"method": (
request.method
if request.scope["type"] == "http"
else request.scope["type"]
),
```
I'd be happy to open a PR for this change or some other fix if someone has a better idea.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import Litestar, websocket_listener
from litestar.testing import TestClient
from litestar.contrib.prometheus import PrometheusConfig, PrometheusController
config = PrometheusConfig()
@websocket_listener("/")
def websocket_handler(data: str):
return f"Hello, {data}!"
app_with_prometheus = Litestar(
route_handlers=[websocket_handler, PrometheusController],
middleware=[config.middleware]
)
app_without_prometheus = Litestar(route_handlers=[websocket_handler])
def test_app(app_to_test):
with TestClient(app=app_to_test) as test_client:
with test_client.websocket_connect("/") as socket:
socket.send("World")
response = socket.receive()
print(response)
test_app(app_without_prometheus)
test_app(app_with_prometheus)
```
### Steps to reproduce
```bash
1. Run the MCVE
2. Observe the traceback
```
### Screenshots
_No response_
### Logs
```bash
Traceback (most recent call last):
File "/Users/erik/test.py", line 27, in <module>
test_app(app_with_prometheus)
File "/Users/erik/test.py", line 20, in test_app
with test_client.websocket_connect("/") as socket:
File "/Users/erik/Library/Caches/pypoetry/virtualenvs/*****-ocilal_E-py3.11/lib/python3.11/site-packages/litestar/testing/websocket_test_session.py", line 52, in __enter__
message = self.receive(timeout=self.client.timeout.read)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/erik/Library/Caches/pypoetry/virtualenvs/*****-ocilal_E-py3.11/lib/python3.11/site-packages/litestar/testing/websocket_test_session.py", line 195, in receive
raise WebSocketDisconnect(
litestar.exceptions.websocket_exceptions.WebSocketDisconnect: KeyError('method')
```
### Litestar Version
2.0.0b2
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
## Funding
* You can sponsor this specific effort via a [Polar.sh](https://polar.sh) pledge below
* We receive the pledge once the issue is completed & verified
<a href="https://polar.sh/litestar-org/litestar/issues/1934">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/1934/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/1934/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
Bug: Prometheus client causes immediate websocket disconnect
### Description
When using the Prometheus integration, websockets always disconnect immediately after opening the connection.
I've done a bit of digging, and the error arises in `PrometheusMiddleware._get_default_labels` [here](https://github.com/litestar-org/litestar/blob/main/litestar/contrib/prometheus/middleware.py#L107-L122), where `request.method` raises an attribute error for websockets. One possible fix that I've tested locally is to change it to:
```python
"method": (
request.method
if request.scope["type"] == "http"
else request.scope["type"]
),
```
I'd be happy to open a PR for this change or some other fix if someone has a better idea.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import Litestar, websocket_listener
from litestar.testing import TestClient
from litestar.contrib.prometheus import PrometheusConfig, PrometheusController
config = PrometheusConfig()
@websocket_listener("/")
def websocket_handler(data: str):
return f"Hello, {data}!"
app_with_prometheus = Litestar(
route_handlers=[websocket_handler, PrometheusController],
middleware=[config.middleware]
)
app_without_prometheus = Litestar(route_handlers=[websocket_handler])
def test_app(app_to_test):
with TestClient(app=app_to_test) as test_client:
with test_client.websocket_connect("/") as socket:
socket.send("World")
response = socket.receive()
print(response)
test_app(app_without_prometheus)
test_app(app_with_prometheus)
```
### Steps to reproduce
```bash
1. Run the MCVE
2. Observe the traceback
```
### Screenshots
_No response_
### Logs
```bash
Traceback (most recent call last):
File "/Users/erik/test.py", line 27, in <module>
test_app(app_with_prometheus)
File "/Users/erik/test.py", line 20, in test_app
with test_client.websocket_connect("/") as socket:
File "/Users/erik/Library/Caches/pypoetry/virtualenvs/*****-ocilal_E-py3.11/lib/python3.11/site-packages/litestar/testing/websocket_test_session.py", line 52, in __enter__
message = self.receive(timeout=self.client.timeout.read)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/erik/Library/Caches/pypoetry/virtualenvs/*****-ocilal_E-py3.11/lib/python3.11/site-packages/litestar/testing/websocket_test_session.py", line 195, in receive
raise WebSocketDisconnect(
litestar.exceptions.websocket_exceptions.WebSocketDisconnect: KeyError('method')
```
### Litestar Version
2.0.0b2
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
## Funding
* You can sponsor this specific effort via a [Polar.sh](https://polar.sh) pledge below
* We receive the pledge once the issue is completed & verified
<a href="https://polar.sh/litestar-org/litestar/issues/1934">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/1934/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/1934/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/contrib/prometheus/middleware.py`
Content:
```
1 from __future__ import annotations
2
3 import time
4 from functools import wraps
5 from typing import TYPE_CHECKING, Any, Callable, ClassVar, cast
6
7 from litestar.connection.request import Request
8 from litestar.exceptions import MissingDependencyException
9 from litestar.middleware.base import AbstractMiddleware
10
11 __all__ = ("PrometheusMiddleware",)
12
13 from litestar.status_codes import HTTP_500_INTERNAL_SERVER_ERROR
14
15 try:
16 import prometheus_client # noqa: F401
17 except ImportError as e:
18 raise MissingDependencyException("prometheus_client") from e
19
20 from prometheus_client import Counter, Gauge, Histogram
21
22 if TYPE_CHECKING:
23 from prometheus_client.metrics import MetricWrapperBase
24
25 from litestar.contrib.prometheus import PrometheusConfig
26 from litestar.types import ASGIApp, Message, Receive, Scope, Send
27
28
29 class PrometheusMiddleware(AbstractMiddleware):
30 """Prometheus Middleware."""
31
32 _metrics: ClassVar[dict[str, MetricWrapperBase]] = {}
33
34 def __init__(self, app: ASGIApp, config: PrometheusConfig) -> None:
35 """Middleware that adds Prometheus instrumentation to the application.
36
37 Args:
38 app: The ``next`` ASGI app to call.
39 config: An instance of :class:`PrometheusConfig <.contrib.prometheus.PrometheusConfig>`
40 """
41 super().__init__(app=app, scopes=config.scopes, exclude=config.exclude, exclude_opt_key=config.exclude_opt_key)
42 self._config = config
43 self._kwargs: dict[str, Any] = {}
44
45 if self._config.buckets is not None:
46 self._kwargs["buckets"] = self._config.buckets
47
48 def request_count(self, labels: dict[str, str | int | float]) -> Counter:
49 metric_name = f"{self._config.prefix}_requests_total"
50
51 if metric_name not in PrometheusMiddleware._metrics:
52 PrometheusMiddleware._metrics[metric_name] = Counter(
53 name=metric_name,
54 documentation="Total requests",
55 labelnames=[*labels.keys()],
56 )
57
58 return cast("Counter", PrometheusMiddleware._metrics[metric_name])
59
60 def request_time(self, labels: dict[str, str | int | float]) -> Histogram:
61 metric_name = f"{self._config.prefix}_request_duration_seconds"
62
63 if metric_name not in PrometheusMiddleware._metrics:
64 PrometheusMiddleware._metrics[metric_name] = Histogram(
65 name=metric_name,
66 documentation="Request duration, in seconds",
67 labelnames=[*labels.keys()],
68 **self._kwargs,
69 )
70 return cast("Histogram", PrometheusMiddleware._metrics[metric_name])
71
72 def requests_in_progress(self, labels: dict[str, str | int | float]) -> Gauge:
73 metric_name = f"{self._config.prefix}_requests_in_progress"
74
75 if metric_name not in PrometheusMiddleware._metrics:
76 PrometheusMiddleware._metrics[metric_name] = Gauge(
77 name=metric_name,
78 documentation="Total requests currently in progress",
79 labelnames=[*labels.keys()],
80 multiprocess_mode="livesum",
81 )
82 return cast("Gauge", PrometheusMiddleware._metrics[metric_name])
83
84 def requests_error_count(self, labels: dict[str, str | int | float]) -> Counter:
85 metric_name = f"{self._config.prefix}_requests_error_total"
86
87 if metric_name not in PrometheusMiddleware._metrics:
88 PrometheusMiddleware._metrics[metric_name] = Counter(
89 name=metric_name,
90 documentation="Total errors in requests",
91 labelnames=[*labels.keys()],
92 )
93 return cast("Counter", PrometheusMiddleware._metrics[metric_name])
94
95 def _get_extra_labels(self, request: Request[Any, Any, Any]) -> dict[str, str]:
96 """Get extra labels provided by the config and if they are callable, parse them.
97
98 Args:
99 request: The request object.
100
101 Returns:
102 A dictionary of extra labels.
103 """
104
105 return {k: str(v(request) if callable(v) else v) for k, v in (self._config.labels or {}).items()}
106
107 def _get_default_labels(self, request: Request[Any, Any, Any]) -> dict[str, str | int | float]:
108 """Get default label values from the request.
109
110 Args:
111 request: The request object.
112
113 Returns:
114 A dictionary of default labels.
115 """
116
117 return {
118 "method": request.method,
119 "path": request.url.path,
120 "status_code": 200,
121 "app_name": self._config.app_name,
122 }
123
124 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
125 """ASGI callable.
126
127 Args:
128 scope: The ASGI connection scope.
129 receive: The ASGI receive function.
130 send: The ASGI send function.
131
132 Returns:
133 None
134 """
135
136 request = Request[Any, Any, Any](scope, receive)
137
138 if self._config.excluded_http_methods and request.method in self._config.excluded_http_methods:
139 await self.app(scope, receive, send)
140 return
141
142 labels = {**self._get_default_labels(request), **self._get_extra_labels(request)}
143
144 request_span = {"start_time": time.perf_counter(), "end_time": 0, "duration": 0, "status_code": 200}
145
146 wrapped_send = self._get_wrapped_send(send, request_span)
147
148 self.requests_in_progress(labels).labels(*labels.values()).inc()
149
150 try:
151 await self.app(scope, receive, wrapped_send)
152 finally:
153 extra: dict[str, Any] = {}
154 if self._config.exemplars:
155 extra["exemplar"] = self._config.exemplars(request)
156
157 self.requests_in_progress(labels).labels(*labels.values()).dec()
158
159 labels["status_code"] = request_span["status_code"]
160 label_values = [*labels.values()]
161
162 if request_span["status_code"] >= HTTP_500_INTERNAL_SERVER_ERROR:
163 self.requests_error_count(labels).labels(*label_values).inc(**extra)
164
165 self.request_count(labels).labels(*label_values).inc(**extra)
166 self.request_time(labels).labels(*label_values).observe(request_span["duration"], **extra)
167
168 def _get_wrapped_send(self, send: Send, request_span: dict[str, float]) -> Callable:
169 @wraps(send)
170 async def wrapped_send(message: Message) -> None:
171 if message["type"] == "http.response.start":
172 request_span["status_code"] = message["status"]
173
174 if message["type"] == "http.response.body":
175 end = time.perf_counter()
176 request_span["duration"] = end - request_span["start_time"]
177 request_span["end_time"] = end
178 await send(message)
179
180 return wrapped_send
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/litestar/contrib/prometheus/middleware.py b/litestar/contrib/prometheus/middleware.py
--- a/litestar/contrib/prometheus/middleware.py
+++ b/litestar/contrib/prometheus/middleware.py
@@ -5,6 +5,7 @@
from typing import TYPE_CHECKING, Any, Callable, ClassVar, cast
from litestar.connection.request import Request
+from litestar.enums import ScopeType
from litestar.exceptions import MissingDependencyException
from litestar.middleware.base import AbstractMiddleware
@@ -115,7 +116,7 @@
"""
return {
- "method": request.method,
+ "method": request.method if request.scope["type"] == ScopeType.HTTP else request.scope["type"],
"path": request.url.path,
"status_code": 200,
"app_name": self._config.app_name,
|
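The fix above keeps the default Prometheus labels usable for non-HTTP connections by falling back to the ASGI scope type whenever `request.method` does not exist. A small stand-alone sketch of that rule; the helper name and the example values are illustrative only:

```python
from __future__ import annotations


def method_label(scope: dict, method: str | None = None) -> str:
    # HTTP scopes carry a real method; websocket (and other) scopes fall
    # back to the scope type itself, e.g. "websocket".
    return method if scope["type"] == "http" and method else scope["type"]


print(method_label({"type": "http"}, "GET"))  # GET
print(method_label({"type": "websocket"}))    # websocket
```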
{"golden_diff": "diff --git a/litestar/contrib/prometheus/middleware.py b/litestar/contrib/prometheus/middleware.py\n--- a/litestar/contrib/prometheus/middleware.py\n+++ b/litestar/contrib/prometheus/middleware.py\n@@ -5,6 +5,7 @@\n from typing import TYPE_CHECKING, Any, Callable, ClassVar, cast\n \n from litestar.connection.request import Request\n+from litestar.enums import ScopeType\n from litestar.exceptions import MissingDependencyException\n from litestar.middleware.base import AbstractMiddleware\n \n@@ -115,7 +116,7 @@\n \"\"\"\n \n return {\n- \"method\": request.method,\n+ \"method\": request.method if request.scope[\"type\"] == ScopeType.HTTP else request.scope[\"type\"],\n \"path\": request.url.path,\n \"status_code\": 200,\n \"app_name\": self._config.app_name,\n", "issue": "Bug: Prometheus client causes immediate websocket disconnect \n### Description\n\nWhen using the Prometheus integration, websockets always disconnect immediately after opening the connection.\r\n\r\nI've done a bit of digging, and the error arises in `PrometheusMiddleware._get_default_labels` [here](https://github.com/litestar-org/litestar/blob/main/litestar/contrib/prometheus/middleware.py#L107-L122), where `request.method` raises an attribute error for websockets. One possible fix that I've tested locally is to change it to:\r\n```python\r\n\"method\": (\r\n request.method\r\n if request.scope[\"type\"] == \"http\"\r\n else request.scope[\"type\"]\r\n),\r\n```\r\n\r\nI'd be happy to open a PR for this change or some other fix if someone has a better idea.\n\n### URL to code causing the issue\n\n_No response_\n\n### MCVE\n\n```python\nfrom litestar import Litestar, websocket_listener\r\nfrom litestar.testing import TestClient\r\nfrom litestar.contrib.prometheus import PrometheusConfig, PrometheusController\r\n\r\nconfig = PrometheusConfig()\r\n\r\n@websocket_listener(\"/\")\r\ndef websocket_handler(data: str):\r\n return f\"Hello, {data}!\"\r\n\r\napp_with_prometheus = Litestar(\r\n route_handlers=[websocket_handler, PrometheusController],\r\n middleware=[config.middleware]\r\n)\r\n\r\napp_without_prometheus = Litestar(route_handlers=[websocket_handler])\r\n\r\ndef test_app(app_to_test):\r\n with TestClient(app=app_to_test) as test_client:\r\n with test_client.websocket_connect(\"/\") as socket:\r\n socket.send(\"World\")\r\n response = socket.receive()\r\n print(response)\r\n\r\ntest_app(app_without_prometheus)\r\ntest_app(app_with_prometheus)\n```\n\n\n### Steps to reproduce\n\n```bash\n1. Run the MCVE\r\n2. 
Observe the traceback\n```\n\n\n### Screenshots\n\n_No response_\n\n### Logs\n\n```bash\nTraceback (most recent call last):\r\n File \"/Users/erik/test.py\", line 27, in <module>\r\n test_app(app_with_prometheus)\r\n File \"/Users/erik/test.py\", line 20, in test_app\r\n with test_client.websocket_connect(\"/\") as socket:\r\n File \"/Users/erik/Library/Caches/pypoetry/virtualenvs/*****-ocilal_E-py3.11/lib/python3.11/site-packages/litestar/testing/websocket_test_session.py\", line 52, in __enter__\r\n message = self.receive(timeout=self.client.timeout.read)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/erik/Library/Caches/pypoetry/virtualenvs/*****-ocilal_E-py3.11/lib/python3.11/site-packages/litestar/testing/websocket_test_session.py\", line 195, in receive\r\n raise WebSocketDisconnect(\r\nlitestar.exceptions.websocket_exceptions.WebSocketDisconnect: KeyError('method')\n```\n\n\n### Litestar Version\n\n2.0.0b2\n\n### Platform\n\n- [ ] Linux\n- [X] Mac\n- [ ] Windows\n- [ ] Other (Please specify in the description above)\n\n<!-- POLAR PLEDGE BADGE START -->\n## Funding\n* You can sponsor this specific effort via a [Polar.sh](https://polar.sh) pledge below\n* We receive the pledge once the issue is completed & verified\n\n<a href=\"https://polar.sh/litestar-org/litestar/issues/1934\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/1934/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/1934/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\nBug: Prometheus client causes immediate websocket disconnect \n### Description\n\nWhen using the Prometheus integration, websockets always disconnect immediately after opening the connection.\r\n\r\nI've done a bit of digging, and the error arises in `PrometheusMiddleware._get_default_labels` [here](https://github.com/litestar-org/litestar/blob/main/litestar/contrib/prometheus/middleware.py#L107-L122), where `request.method` raises an attribute error for websockets. One possible fix that I've tested locally is to change it to:\r\n```python\r\n\"method\": (\r\n request.method\r\n if request.scope[\"type\"] == \"http\"\r\n else request.scope[\"type\"]\r\n),\r\n```\r\n\r\nI'd be happy to open a PR for this change or some other fix if someone has a better idea.\n\n### URL to code causing the issue\n\n_No response_\n\n### MCVE\n\n```python\nfrom litestar import Litestar, websocket_listener\r\nfrom litestar.testing import TestClient\r\nfrom litestar.contrib.prometheus import PrometheusConfig, PrometheusController\r\n\r\nconfig = PrometheusConfig()\r\n\r\n@websocket_listener(\"/\")\r\ndef websocket_handler(data: str):\r\n return f\"Hello, {data}!\"\r\n\r\napp_with_prometheus = Litestar(\r\n route_handlers=[websocket_handler, PrometheusController],\r\n middleware=[config.middleware]\r\n)\r\n\r\napp_without_prometheus = Litestar(route_handlers=[websocket_handler])\r\n\r\ndef test_app(app_to_test):\r\n with TestClient(app=app_to_test) as test_client:\r\n with test_client.websocket_connect(\"/\") as socket:\r\n socket.send(\"World\")\r\n response = socket.receive()\r\n print(response)\r\n\r\ntest_app(app_without_prometheus)\r\ntest_app(app_with_prometheus)\n```\n\n\n### Steps to reproduce\n\n```bash\n1. Run the MCVE\r\n2. 
Observe the traceback\n```\n\n\n### Screenshots\n\n_No response_\n\n### Logs\n\n```bash\nTraceback (most recent call last):\r\n File \"/Users/erik/test.py\", line 27, in <module>\r\n test_app(app_with_prometheus)\r\n File \"/Users/erik/test.py\", line 20, in test_app\r\n with test_client.websocket_connect(\"/\") as socket:\r\n File \"/Users/erik/Library/Caches/pypoetry/virtualenvs/*****-ocilal_E-py3.11/lib/python3.11/site-packages/litestar/testing/websocket_test_session.py\", line 52, in __enter__\r\n message = self.receive(timeout=self.client.timeout.read)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/erik/Library/Caches/pypoetry/virtualenvs/*****-ocilal_E-py3.11/lib/python3.11/site-packages/litestar/testing/websocket_test_session.py\", line 195, in receive\r\n raise WebSocketDisconnect(\r\nlitestar.exceptions.websocket_exceptions.WebSocketDisconnect: KeyError('method')\n```\n\n\n### Litestar Version\n\n2.0.0b2\n\n### Platform\n\n- [ ] Linux\n- [X] Mac\n- [ ] Windows\n- [ ] Other (Please specify in the description above)\n\n<!-- POLAR PLEDGE BADGE START -->\n## Funding\n* You can sponsor this specific effort via a [Polar.sh](https://polar.sh) pledge below\n* We receive the pledge once the issue is completed & verified\n\n<a href=\"https://polar.sh/litestar-org/litestar/issues/1934\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/1934/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/1934/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport time\nfrom functools import wraps\nfrom typing import TYPE_CHECKING, Any, Callable, ClassVar, cast\n\nfrom litestar.connection.request import Request\nfrom litestar.exceptions import MissingDependencyException\nfrom litestar.middleware.base import AbstractMiddleware\n\n__all__ = (\"PrometheusMiddleware\",)\n\nfrom litestar.status_codes import HTTP_500_INTERNAL_SERVER_ERROR\n\ntry:\n import prometheus_client # noqa: F401\nexcept ImportError as e:\n raise MissingDependencyException(\"prometheus_client\") from e\n\nfrom prometheus_client import Counter, Gauge, Histogram\n\nif TYPE_CHECKING:\n from prometheus_client.metrics import MetricWrapperBase\n\n from litestar.contrib.prometheus import PrometheusConfig\n from litestar.types import ASGIApp, Message, Receive, Scope, Send\n\n\nclass PrometheusMiddleware(AbstractMiddleware):\n \"\"\"Prometheus Middleware.\"\"\"\n\n _metrics: ClassVar[dict[str, MetricWrapperBase]] = {}\n\n def __init__(self, app: ASGIApp, config: PrometheusConfig) -> None:\n \"\"\"Middleware that adds Prometheus instrumentation to the application.\n\n Args:\n app: The ``next`` ASGI app to call.\n config: An instance of :class:`PrometheusConfig <.contrib.prometheus.PrometheusConfig>`\n \"\"\"\n super().__init__(app=app, scopes=config.scopes, exclude=config.exclude, exclude_opt_key=config.exclude_opt_key)\n self._config = config\n self._kwargs: dict[str, Any] = {}\n\n if self._config.buckets is not None:\n self._kwargs[\"buckets\"] = self._config.buckets\n\n def request_count(self, labels: dict[str, str | int | float]) -> Counter:\n metric_name = f\"{self._config.prefix}_requests_total\"\n\n if metric_name not in PrometheusMiddleware._metrics:\n PrometheusMiddleware._metrics[metric_name] = Counter(\n name=metric_name,\n documentation=\"Total requests\",\n 
labelnames=[*labels.keys()],\n )\n\n return cast(\"Counter\", PrometheusMiddleware._metrics[metric_name])\n\n def request_time(self, labels: dict[str, str | int | float]) -> Histogram:\n metric_name = f\"{self._config.prefix}_request_duration_seconds\"\n\n if metric_name not in PrometheusMiddleware._metrics:\n PrometheusMiddleware._metrics[metric_name] = Histogram(\n name=metric_name,\n documentation=\"Request duration, in seconds\",\n labelnames=[*labels.keys()],\n **self._kwargs,\n )\n return cast(\"Histogram\", PrometheusMiddleware._metrics[metric_name])\n\n def requests_in_progress(self, labels: dict[str, str | int | float]) -> Gauge:\n metric_name = f\"{self._config.prefix}_requests_in_progress\"\n\n if metric_name not in PrometheusMiddleware._metrics:\n PrometheusMiddleware._metrics[metric_name] = Gauge(\n name=metric_name,\n documentation=\"Total requests currently in progress\",\n labelnames=[*labels.keys()],\n multiprocess_mode=\"livesum\",\n )\n return cast(\"Gauge\", PrometheusMiddleware._metrics[metric_name])\n\n def requests_error_count(self, labels: dict[str, str | int | float]) -> Counter:\n metric_name = f\"{self._config.prefix}_requests_error_total\"\n\n if metric_name not in PrometheusMiddleware._metrics:\n PrometheusMiddleware._metrics[metric_name] = Counter(\n name=metric_name,\n documentation=\"Total errors in requests\",\n labelnames=[*labels.keys()],\n )\n return cast(\"Counter\", PrometheusMiddleware._metrics[metric_name])\n\n def _get_extra_labels(self, request: Request[Any, Any, Any]) -> dict[str, str]:\n \"\"\"Get extra labels provided by the config and if they are callable, parse them.\n\n Args:\n request: The request object.\n\n Returns:\n A dictionary of extra labels.\n \"\"\"\n\n return {k: str(v(request) if callable(v) else v) for k, v in (self._config.labels or {}).items()}\n\n def _get_default_labels(self, request: Request[Any, Any, Any]) -> dict[str, str | int | float]:\n \"\"\"Get default label values from the request.\n\n Args:\n request: The request object.\n\n Returns:\n A dictionary of default labels.\n \"\"\"\n\n return {\n \"method\": request.method,\n \"path\": request.url.path,\n \"status_code\": 200,\n \"app_name\": self._config.app_name,\n }\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"ASGI callable.\n\n Args:\n scope: The ASGI connection scope.\n receive: The ASGI receive function.\n send: The ASGI send function.\n\n Returns:\n None\n \"\"\"\n\n request = Request[Any, Any, Any](scope, receive)\n\n if self._config.excluded_http_methods and request.method in self._config.excluded_http_methods:\n await self.app(scope, receive, send)\n return\n\n labels = {**self._get_default_labels(request), **self._get_extra_labels(request)}\n\n request_span = {\"start_time\": time.perf_counter(), \"end_time\": 0, \"duration\": 0, \"status_code\": 200}\n\n wrapped_send = self._get_wrapped_send(send, request_span)\n\n self.requests_in_progress(labels).labels(*labels.values()).inc()\n\n try:\n await self.app(scope, receive, wrapped_send)\n finally:\n extra: dict[str, Any] = {}\n if self._config.exemplars:\n extra[\"exemplar\"] = self._config.exemplars(request)\n\n self.requests_in_progress(labels).labels(*labels.values()).dec()\n\n labels[\"status_code\"] = request_span[\"status_code\"]\n label_values = [*labels.values()]\n\n if request_span[\"status_code\"] >= HTTP_500_INTERNAL_SERVER_ERROR:\n self.requests_error_count(labels).labels(*label_values).inc(**extra)\n\n 
self.request_count(labels).labels(*label_values).inc(**extra)\n self.request_time(labels).labels(*label_values).observe(request_span[\"duration\"], **extra)\n\n def _get_wrapped_send(self, send: Send, request_span: dict[str, float]) -> Callable:\n @wraps(send)\n async def wrapped_send(message: Message) -> None:\n if message[\"type\"] == \"http.response.start\":\n request_span[\"status_code\"] = message[\"status\"]\n\n if message[\"type\"] == \"http.response.body\":\n end = time.perf_counter()\n request_span[\"duration\"] = end - request_span[\"start_time\"]\n request_span[\"end_time\"] = end\n await send(message)\n\n return wrapped_send\n", "path": "litestar/contrib/prometheus/middleware.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport time\nfrom functools import wraps\nfrom typing import TYPE_CHECKING, Any, Callable, ClassVar, cast\n\nfrom litestar.connection.request import Request\nfrom litestar.enums import ScopeType\nfrom litestar.exceptions import MissingDependencyException\nfrom litestar.middleware.base import AbstractMiddleware\n\n__all__ = (\"PrometheusMiddleware\",)\n\nfrom litestar.status_codes import HTTP_500_INTERNAL_SERVER_ERROR\n\ntry:\n import prometheus_client # noqa: F401\nexcept ImportError as e:\n raise MissingDependencyException(\"prometheus_client\") from e\n\nfrom prometheus_client import Counter, Gauge, Histogram\n\nif TYPE_CHECKING:\n from prometheus_client.metrics import MetricWrapperBase\n\n from litestar.contrib.prometheus import PrometheusConfig\n from litestar.types import ASGIApp, Message, Receive, Scope, Send\n\n\nclass PrometheusMiddleware(AbstractMiddleware):\n \"\"\"Prometheus Middleware.\"\"\"\n\n _metrics: ClassVar[dict[str, MetricWrapperBase]] = {}\n\n def __init__(self, app: ASGIApp, config: PrometheusConfig) -> None:\n \"\"\"Middleware that adds Prometheus instrumentation to the application.\n\n Args:\n app: The ``next`` ASGI app to call.\n config: An instance of :class:`PrometheusConfig <.contrib.prometheus.PrometheusConfig>`\n \"\"\"\n super().__init__(app=app, scopes=config.scopes, exclude=config.exclude, exclude_opt_key=config.exclude_opt_key)\n self._config = config\n self._kwargs: dict[str, Any] = {}\n\n if self._config.buckets is not None:\n self._kwargs[\"buckets\"] = self._config.buckets\n\n def request_count(self, labels: dict[str, str | int | float]) -> Counter:\n metric_name = f\"{self._config.prefix}_requests_total\"\n\n if metric_name not in PrometheusMiddleware._metrics:\n PrometheusMiddleware._metrics[metric_name] = Counter(\n name=metric_name,\n documentation=\"Total requests\",\n labelnames=[*labels.keys()],\n )\n\n return cast(\"Counter\", PrometheusMiddleware._metrics[metric_name])\n\n def request_time(self, labels: dict[str, str | int | float]) -> Histogram:\n metric_name = f\"{self._config.prefix}_request_duration_seconds\"\n\n if metric_name not in PrometheusMiddleware._metrics:\n PrometheusMiddleware._metrics[metric_name] = Histogram(\n name=metric_name,\n documentation=\"Request duration, in seconds\",\n labelnames=[*labels.keys()],\n **self._kwargs,\n )\n return cast(\"Histogram\", PrometheusMiddleware._metrics[metric_name])\n\n def requests_in_progress(self, labels: dict[str, str | int | float]) -> Gauge:\n metric_name = f\"{self._config.prefix}_requests_in_progress\"\n\n if metric_name not in PrometheusMiddleware._metrics:\n PrometheusMiddleware._metrics[metric_name] = Gauge(\n name=metric_name,\n documentation=\"Total requests currently in progress\",\n labelnames=[*labels.keys()],\n 
multiprocess_mode=\"livesum\",\n )\n return cast(\"Gauge\", PrometheusMiddleware._metrics[metric_name])\n\n def requests_error_count(self, labels: dict[str, str | int | float]) -> Counter:\n metric_name = f\"{self._config.prefix}_requests_error_total\"\n\n if metric_name not in PrometheusMiddleware._metrics:\n PrometheusMiddleware._metrics[metric_name] = Counter(\n name=metric_name,\n documentation=\"Total errors in requests\",\n labelnames=[*labels.keys()],\n )\n return cast(\"Counter\", PrometheusMiddleware._metrics[metric_name])\n\n def _get_extra_labels(self, request: Request[Any, Any, Any]) -> dict[str, str]:\n \"\"\"Get extra labels provided by the config and if they are callable, parse them.\n\n Args:\n request: The request object.\n\n Returns:\n A dictionary of extra labels.\n \"\"\"\n\n return {k: str(v(request) if callable(v) else v) for k, v in (self._config.labels or {}).items()}\n\n def _get_default_labels(self, request: Request[Any, Any, Any]) -> dict[str, str | int | float]:\n \"\"\"Get default label values from the request.\n\n Args:\n request: The request object.\n\n Returns:\n A dictionary of default labels.\n \"\"\"\n\n return {\n \"method\": request.method if request.scope[\"type\"] == ScopeType.HTTP else request.scope[\"type\"],\n \"path\": request.url.path,\n \"status_code\": 200,\n \"app_name\": self._config.app_name,\n }\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"ASGI callable.\n\n Args:\n scope: The ASGI connection scope.\n receive: The ASGI receive function.\n send: The ASGI send function.\n\n Returns:\n None\n \"\"\"\n\n request = Request[Any, Any, Any](scope, receive)\n\n if self._config.excluded_http_methods and request.method in self._config.excluded_http_methods:\n await self.app(scope, receive, send)\n return\n\n labels = {**self._get_default_labels(request), **self._get_extra_labels(request)}\n\n request_span = {\"start_time\": time.perf_counter(), \"end_time\": 0, \"duration\": 0, \"status_code\": 200}\n\n wrapped_send = self._get_wrapped_send(send, request_span)\n\n self.requests_in_progress(labels).labels(*labels.values()).inc()\n\n try:\n await self.app(scope, receive, wrapped_send)\n finally:\n extra: dict[str, Any] = {}\n if self._config.exemplars:\n extra[\"exemplar\"] = self._config.exemplars(request)\n\n self.requests_in_progress(labels).labels(*labels.values()).dec()\n\n labels[\"status_code\"] = request_span[\"status_code\"]\n label_values = [*labels.values()]\n\n if request_span[\"status_code\"] >= HTTP_500_INTERNAL_SERVER_ERROR:\n self.requests_error_count(labels).labels(*label_values).inc(**extra)\n\n self.request_count(labels).labels(*label_values).inc(**extra)\n self.request_time(labels).labels(*label_values).observe(request_span[\"duration\"], **extra)\n\n def _get_wrapped_send(self, send: Send, request_span: dict[str, float]) -> Callable:\n @wraps(send)\n async def wrapped_send(message: Message) -> None:\n if message[\"type\"] == \"http.response.start\":\n request_span[\"status_code\"] = message[\"status\"]\n\n if message[\"type\"] == \"http.response.body\":\n end = time.perf_counter()\n request_span[\"duration\"] = end - request_span[\"start_time\"]\n request_span[\"end_time\"] = end\n await send(message)\n\n return wrapped_send\n", "path": "litestar/contrib/prometheus/middleware.py"}]}
| 3,802 | 193 |
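For context on the prometheus patch above: the default labels previously assumed an HTTP scope, so `request.method` raised for websocket connections. A minimal sketch of the corrected label lookup, using only the `Request`, `ScopeType`, and `scope["type"]` pieces visible in the diff; the helper name `method_label` is made up for illustration:

```python
from litestar.connection.request import Request
from litestar.enums import ScopeType


def method_label(request: Request) -> str:
    # Websocket scopes carry no HTTP method, so fall back to the scope type
    # ("websocket") instead of raising on request.method.
    return request.method if request.scope["type"] == ScopeType.HTTP else request.scope["type"]
```

For non-HTTP scopes this reports the scope type string as the metric's method label, which is exactly what the patched `_get_default_labels` does.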
gh_patches_debug_29836
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-482
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add ability to load a config file from a URL and use that for installation
This would compliment the feature to use a configuration file
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/__init__.py`
Content:
```
1 """Arch Linux installer - guided, templates etc."""
2 from argparse import ArgumentParser, FileType
3
4 from .lib.disk import *
5 from .lib.exceptions import *
6 from .lib.general import *
7 from .lib.hardware import *
8 from .lib.installer import __packages__, Installer
9 from .lib.locale_helpers import *
10 from .lib.luks import *
11 from .lib.mirrors import *
12 from .lib.networking import *
13 from .lib.output import *
14 from .lib.packages import *
15 from .lib.profiles import *
16 from .lib.services import *
17 from .lib.storage import *
18 from .lib.systemd import *
19 from .lib.user_interaction import *
20
21 parser = ArgumentParser()
22
23 __version__ = "2.2.0.dev1"
24
25
26 def initialize_arguments():
27 config = {}
28 parser.add_argument("--config", nargs="?", help="json config file", type=FileType("r", encoding="UTF-8"))
29 parser.add_argument("--silent", action="store_true",
30 help="Warning!!! No prompts, ignored if config is not passed")
31 parser.add_argument("--script", default="guided", nargs="?", help="Script to run for installation", type=str)
32 parser.add_argument("--vars",
33 metavar="KEY=VALUE",
34 nargs='?',
35 help="Set a number of key-value pairs "
36 "(do not put spaces before or after the = sign). "
37 "If a value contains spaces, you should define "
38 "it with double quotes: "
39 'foo="this is a sentence". Note that '
40 "values are always treated as strings.")
41 args = parser.parse_args()
42 if args.config is not None:
43 try:
44 config = json.load(args.config)
45 except Exception as e:
46 print(e)
47 # Installation can't be silent if config is not passed
48 config["silent"] = args.silent
49 if args.vars is not None:
50 try:
51 for var in args.vars.split(' '):
52 key, val = var.split("=")
53 config[key] = val
54 except Exception as e:
55 print(e)
56 config["script"] = args.script
57 return config
58
59
60 arguments = initialize_arguments()
61
62
63 # TODO: Learn the dark arts of argparse... (I summon thee dark spawn of cPython)
64
65
66 def run_as_a_module():
67 """
68 Since we're running this as a 'python -m archinstall' module OR
69 a nuitka3 compiled version of the project.
70 This function and the file __main__ acts as a entry point.
71 """
72
73 # Add another path for finding profiles, so that list_profiles() in Script() can find guided.py, unattended.py etc.
74 storage['PROFILE_PATH'].append(os.path.abspath(f'{os.path.dirname(__file__)}/examples'))
75 try:
76 script = Script(arguments.get('script', None))
77 except ProfileNotFound as err:
78 print(f"Couldn't find file: {err}")
79 sys.exit(1)
80
81 os.chdir(os.path.abspath(os.path.dirname(__file__)))
82
83 # Remove the example directory from the PROFILE_PATH, to avoid guided.py etc shows up in user input questions.
84 storage['PROFILE_PATH'].pop()
85 script.execute()
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/archinstall/__init__.py b/archinstall/__init__.py
--- a/archinstall/__init__.py
+++ b/archinstall/__init__.py
@@ -1,5 +1,8 @@
"""Arch Linux installer - guided, templates etc."""
-from argparse import ArgumentParser, FileType
+import urllib.error
+import urllib.parse
+import urllib.request
+from argparse import ArgumentParser
from .lib.disk import *
from .lib.exceptions import *
@@ -25,7 +28,7 @@
def initialize_arguments():
config = {}
- parser.add_argument("--config", nargs="?", help="json config file", type=FileType("r", encoding="UTF-8"))
+ parser.add_argument("--config", nargs="?", help="JSON configuration file or URL")
parser.add_argument("--silent", action="store_true",
help="Warning!!! No prompts, ignored if config is not passed")
parser.add_argument("--script", default="guided", nargs="?", help="Script to run for installation", type=str)
@@ -41,7 +44,15 @@
args = parser.parse_args()
if args.config is not None:
try:
- config = json.load(args.config)
+ # First, let's check if this is a URL scheme instead of a filename
+ parsed_url = urllib.parse.urlparse(args.config)
+
+ if not parsed_url.scheme: # The Profile was not a direct match on a remote URL, it must be a local file.
+ with open(args.config) as file:
+ config = json.load(file)
+ else: # Attempt to load the configuration from the URL.
+ with urllib.request.urlopen(args.config) as response:
+ config = json.loads(response.read())
except Exception as e:
print(e)
# Installation can't be silent if config is not passed
|
{"golden_diff": "diff --git a/archinstall/__init__.py b/archinstall/__init__.py\n--- a/archinstall/__init__.py\n+++ b/archinstall/__init__.py\n@@ -1,5 +1,8 @@\n \"\"\"Arch Linux installer - guided, templates etc.\"\"\"\n-from argparse import ArgumentParser, FileType\n+import urllib.error\n+import urllib.parse\n+import urllib.request\n+from argparse import ArgumentParser\n \n from .lib.disk import *\n from .lib.exceptions import *\n@@ -25,7 +28,7 @@\n \n def initialize_arguments():\n \tconfig = {}\n-\tparser.add_argument(\"--config\", nargs=\"?\", help=\"json config file\", type=FileType(\"r\", encoding=\"UTF-8\"))\n+\tparser.add_argument(\"--config\", nargs=\"?\", help=\"JSON configuration file or URL\")\n \tparser.add_argument(\"--silent\", action=\"store_true\",\n \t\t\t\t\t\thelp=\"Warning!!! No prompts, ignored if config is not passed\")\n \tparser.add_argument(\"--script\", default=\"guided\", nargs=\"?\", help=\"Script to run for installation\", type=str)\n@@ -41,7 +44,15 @@\n \targs = parser.parse_args()\n \tif args.config is not None:\n \t\ttry:\n-\t\t\tconfig = json.load(args.config)\n+\t\t\t# First, let's check if this is a URL scheme instead of a filename\n+\t\t\tparsed_url = urllib.parse.urlparse(args.config)\n+\n+\t\t\tif not parsed_url.scheme: # The Profile was not a direct match on a remote URL, it must be a local file.\n+\t\t\t\twith open(args.config) as file:\n+\t\t\t\t\tconfig = json.load(file)\n+\t\t\telse: # Attempt to load the configuration from the URL.\n+\t\t\t\twith urllib.request.urlopen(args.config) as response:\n+\t\t\t\t\tconfig = json.loads(response.read())\n \t\texcept Exception as e:\n \t\t\tprint(e)\n \t\t# Installation can't be silent if config is not passed\n", "issue": "Add ability to load a config file from a URL and use that for installation\nThis would compliment the feature to use a configuration file\n", "before_files": [{"content": "\"\"\"Arch Linux installer - guided, templates etc.\"\"\"\nfrom argparse import ArgumentParser, FileType\n\nfrom .lib.disk import *\nfrom .lib.exceptions import *\nfrom .lib.general import *\nfrom .lib.hardware import *\nfrom .lib.installer import __packages__, Installer\nfrom .lib.locale_helpers import *\nfrom .lib.luks import *\nfrom .lib.mirrors import *\nfrom .lib.networking import *\nfrom .lib.output import *\nfrom .lib.packages import *\nfrom .lib.profiles import *\nfrom .lib.services import *\nfrom .lib.storage import *\nfrom .lib.systemd import *\nfrom .lib.user_interaction import *\n\nparser = ArgumentParser()\n\n__version__ = \"2.2.0.dev1\"\n\n\ndef initialize_arguments():\n\tconfig = {}\n\tparser.add_argument(\"--config\", nargs=\"?\", help=\"json config file\", type=FileType(\"r\", encoding=\"UTF-8\"))\n\tparser.add_argument(\"--silent\", action=\"store_true\",\n\t\t\t\t\t\thelp=\"Warning!!! No prompts, ignored if config is not passed\")\n\tparser.add_argument(\"--script\", default=\"guided\", nargs=\"?\", help=\"Script to run for installation\", type=str)\n\tparser.add_argument(\"--vars\",\n\t\t\t\t\t\tmetavar=\"KEY=VALUE\",\n\t\t\t\t\t\tnargs='?',\n\t\t\t\t\t\thelp=\"Set a number of key-value pairs \"\n\t\t\t\t\t\t\t \"(do not put spaces before or after the = sign). \"\n\t\t\t\t\t\t\t \"If a value contains spaces, you should define \"\n\t\t\t\t\t\t\t \"it with double quotes: \"\n\t\t\t\t\t\t\t 'foo=\"this is a sentence\". 
Note that '\n\t\t\t\t\t\t\t \"values are always treated as strings.\")\n\targs = parser.parse_args()\n\tif args.config is not None:\n\t\ttry:\n\t\t\tconfig = json.load(args.config)\n\t\texcept Exception as e:\n\t\t\tprint(e)\n\t\t# Installation can't be silent if config is not passed\n\t\tconfig[\"silent\"] = args.silent\n\tif args.vars is not None:\n\t\ttry:\n\t\t\tfor var in args.vars.split(' '):\n\t\t\t\tkey, val = var.split(\"=\")\n\t\t\t\tconfig[key] = val\n\t\texcept Exception as e:\n\t\t\tprint(e)\n\tconfig[\"script\"] = args.script\n\treturn config\n\n\narguments = initialize_arguments()\n\n\n# TODO: Learn the dark arts of argparse... (I summon thee dark spawn of cPython)\n\n\ndef run_as_a_module():\n\t\"\"\"\n\tSince we're running this as a 'python -m archinstall' module OR\n\ta nuitka3 compiled version of the project.\n\tThis function and the file __main__ acts as a entry point.\n\t\"\"\"\n\n\t# Add another path for finding profiles, so that list_profiles() in Script() can find guided.py, unattended.py etc.\n\tstorage['PROFILE_PATH'].append(os.path.abspath(f'{os.path.dirname(__file__)}/examples'))\n\ttry:\n\t\tscript = Script(arguments.get('script', None))\n\texcept ProfileNotFound as err:\n\t\tprint(f\"Couldn't find file: {err}\")\n\t\tsys.exit(1)\n\n\tos.chdir(os.path.abspath(os.path.dirname(__file__)))\n\n\t# Remove the example directory from the PROFILE_PATH, to avoid guided.py etc shows up in user input questions.\n\tstorage['PROFILE_PATH'].pop()\n\tscript.execute()\n", "path": "archinstall/__init__.py"}], "after_files": [{"content": "\"\"\"Arch Linux installer - guided, templates etc.\"\"\"\nimport urllib.error\nimport urllib.parse\nimport urllib.request\nfrom argparse import ArgumentParser\n\nfrom .lib.disk import *\nfrom .lib.exceptions import *\nfrom .lib.general import *\nfrom .lib.hardware import *\nfrom .lib.installer import __packages__, Installer\nfrom .lib.locale_helpers import *\nfrom .lib.luks import *\nfrom .lib.mirrors import *\nfrom .lib.networking import *\nfrom .lib.output import *\nfrom .lib.packages import *\nfrom .lib.profiles import *\nfrom .lib.services import *\nfrom .lib.storage import *\nfrom .lib.systemd import *\nfrom .lib.user_interaction import *\n\nparser = ArgumentParser()\n\n__version__ = \"2.2.0.dev1\"\n\n\ndef initialize_arguments():\n\tconfig = {}\n\tparser.add_argument(\"--config\", nargs=\"?\", help=\"JSON configuration file or URL\")\n\tparser.add_argument(\"--silent\", action=\"store_true\",\n\t\t\t\t\t\thelp=\"Warning!!! No prompts, ignored if config is not passed\")\n\tparser.add_argument(\"--script\", default=\"guided\", nargs=\"?\", help=\"Script to run for installation\", type=str)\n\tparser.add_argument(\"--vars\",\n\t\t\t\t\t\tmetavar=\"KEY=VALUE\",\n\t\t\t\t\t\tnargs='?',\n\t\t\t\t\t\thelp=\"Set a number of key-value pairs \"\n\t\t\t\t\t\t\t \"(do not put spaces before or after the = sign). \"\n\t\t\t\t\t\t\t \"If a value contains spaces, you should define \"\n\t\t\t\t\t\t\t \"it with double quotes: \"\n\t\t\t\t\t\t\t 'foo=\"this is a sentence\". 
Note that '\n\t\t\t\t\t\t\t \"values are always treated as strings.\")\n\targs = parser.parse_args()\n\tif args.config is not None:\n\t\ttry:\n\t\t\t# First, let's check if this is a URL scheme instead of a filename\n\t\t\tparsed_url = urllib.parse.urlparse(args.config)\n\n\t\t\tif not parsed_url.scheme: # The Profile was not a direct match on a remote URL, it must be a local file.\n\t\t\t\twith open(args.config) as file:\n\t\t\t\t\tconfig = json.load(file)\n\t\t\telse: # Attempt to load the configuration from the URL.\n\t\t\t\twith urllib.request.urlopen(args.config) as response:\n\t\t\t\t\tconfig = json.loads(response.read())\n\t\texcept Exception as e:\n\t\t\tprint(e)\n\t\t# Installation can't be silent if config is not passed\n\t\tconfig[\"silent\"] = args.silent\n\tif args.vars is not None:\n\t\ttry:\n\t\t\tfor var in args.vars.split(' '):\n\t\t\t\tkey, val = var.split(\"=\")\n\t\t\t\tconfig[key] = val\n\t\texcept Exception as e:\n\t\t\tprint(e)\n\tconfig[\"script\"] = args.script\n\treturn config\n\n\narguments = initialize_arguments()\n\n\n# TODO: Learn the dark arts of argparse... (I summon thee dark spawn of cPython)\n\n\ndef run_as_a_module():\n\t\"\"\"\n\tSince we're running this as a 'python -m archinstall' module OR\n\ta nuitka3 compiled version of the project.\n\tThis function and the file __main__ acts as a entry point.\n\t\"\"\"\n\n\t# Add another path for finding profiles, so that list_profiles() in Script() can find guided.py, unattended.py etc.\n\tstorage['PROFILE_PATH'].append(os.path.abspath(f'{os.path.dirname(__file__)}/examples'))\n\ttry:\n\t\tscript = Script(arguments.get('script', None))\n\texcept ProfileNotFound as err:\n\t\tprint(f\"Couldn't find file: {err}\")\n\t\tsys.exit(1)\n\n\tos.chdir(os.path.abspath(os.path.dirname(__file__)))\n\n\t# Remove the example directory from the PROFILE_PATH, to avoid guided.py etc shows up in user input questions.\n\tstorage['PROFILE_PATH'].pop()\n\tscript.execute()\n", "path": "archinstall/__init__.py"}]}
| 1,131 | 398 |
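The core of the archinstall patch above is the scheme check on `--config`. A standalone sketch of that branch, shown separately here for readability; the function name `load_config` is invented, while the `urllib.parse` / `urllib.request` / `json` calls are the same ones the diff adds:

```python
import json
import urllib.parse
import urllib.request


def load_config(config_arg: str) -> dict:
    """Load a JSON config from a local path or, if a URL scheme is present, over the network."""
    parsed_url = urllib.parse.urlparse(config_arg)
    if not parsed_url.scheme:  # no scheme -> treat the argument as a plain filename
        with open(config_arg) as file:
            return json.load(file)
    with urllib.request.urlopen(config_arg) as response:  # e.g. an https:// URL
        return json.loads(response.read())
```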
gh_patches_debug_30071
|
rasdani/github-patches
|
git_diff
|
coala__coala-bears-917
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CheckstyleBear should error when use_spaces is False or indent_size is not 2
If `checkstyle_configs = Google`, indentation is checked using 2 spaces, according to https://google.github.io/styleguide/javaguide.html#s4.2-block-indentation
`use_spaces=False` must emit an error
`indent_size` must be set to 2, otherwise emit an error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bears/java/CheckstyleBear.py`
Content:
```
1 from coalib.bearlib.abstractions.Linter import linter
2 from coalib.settings.Setting import path
3
4
5 known_checkstyles = {
6 "google": "https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/google_checks.xml",
7 "sun": 'https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/sun_checks.xml',
8 "android-check-easy": "https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-easy.xml",
9 "android-check-hard": "https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-hard.xml",
10 "geosoft": "http://geosoft.no/development/geosoft_checks.xml"}
11
12
13 def known_checkstyle_or_path(setting):
14 if str(setting) in known_checkstyles.keys():
15 return str(setting)
16 else:
17 return path(setting)
18
19
20 @linter(executable='java',
21 output_format='regex',
22 output_regex=r'\[(?P<severity>WARN|INFO)\].*?'
23 r'(?P<line>\d+):?(?P<column>\d+)?. '
24 r'(?P<message>.*?) *\[(?P<origin>[a-zA-Z]+?)\]')
25 class CheckstyleBear:
26 """
27 Check Java code for possible style, semantic and design issues.
28
29 For more information, consult
30 <http://checkstyle.sourceforge.net/checks.html>.
31 """
32
33 LANGUAGES = {"Java"}
34 AUTHORS = {'The coala developers'}
35 AUTHORS_EMAILS = {'[email protected]'}
36 LICENSE = 'AGPL-3.0'
37 CAN_DETECT = {'Formatting', 'Smell'}
38
39 def setup_dependencies(self):
40 type(self).checkstyle_jar_file = self.download_cached_file(
41 'http://sourceforge.net/projects/checkstyle/files/checkstyle/6.15'
42 '/checkstyle-6.15-all.jar',
43 "checkstyle.jar")
44
45 def create_arguments(
46 self, filename, file, config_file,
47 checkstyle_configs: known_checkstyle_or_path="google"):
48 """
49 :param checkstyle_configs:
50 A file containing configs to use in ``checkstyle``. It can also
51 have the special values:
52
53 - google - Google's Java style. More info at
54 <http://checkstyle.sourceforge.net/style_configs.html>.
55 - sun - Sun's Java style. These are the same
56 as the default Eclipse checks. More info at
57 <http://checkstyle.sourceforge.net/style_configs.html>.
58 - android-check-easy - The easy Android configs provided by the
59 android-check eclipse plugin. More info at
60 <https://github.com/noveogroup/android-check>.
61 - android-check-hard - The hard Android confis provided by the
62 android-check eclipse plugin. More info at
63 <https://github.com/noveogroup/android-check>.
64 - geosoft - The Java style followed by GeoSoft. More info at
65 <http://geosoft.no/development/javastyle.html>
66 """
67 if checkstyle_configs in known_checkstyles:
68 checkstyle_configs = self.download_cached_file(
69 known_checkstyles[checkstyle_configs],
70 checkstyle_configs + ".xml")
71
72 return ('-jar', self.checkstyle_jar_file, '-c',
73 checkstyle_configs, filename)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bears/java/CheckstyleBear.py b/bears/java/CheckstyleBear.py
--- a/bears/java/CheckstyleBear.py
+++ b/bears/java/CheckstyleBear.py
@@ -10,6 +10,13 @@
"geosoft": "http://geosoft.no/development/geosoft_checks.xml"}
+def check_invalid_configuration(checkstyle_configs, use_spaces, indent_size):
+ if (checkstyle_configs is 'google' and
+ (not use_spaces or indent_size != 2)):
+ raise ValueError('Google checkstyle config does not support '
+ 'use_spaces=False or indent_size != 2')
+
+
def known_checkstyle_or_path(setting):
if str(setting) in known_checkstyles.keys():
return str(setting)
@@ -44,7 +51,8 @@
def create_arguments(
self, filename, file, config_file,
- checkstyle_configs: known_checkstyle_or_path="google"):
+ checkstyle_configs: known_checkstyle_or_path="google",
+ use_spaces: bool=True, indent_size: int=2):
"""
:param checkstyle_configs:
A file containing configs to use in ``checkstyle``. It can also
@@ -64,6 +72,9 @@
- geosoft - The Java style followed by GeoSoft. More info at
<http://geosoft.no/development/javastyle.html>
"""
+ check_invalid_configuration(
+ checkstyle_configs, use_spaces, indent_size)
+
if checkstyle_configs in known_checkstyles:
checkstyle_configs = self.download_cached_file(
known_checkstyles[checkstyle_configs],
|
{"golden_diff": "diff --git a/bears/java/CheckstyleBear.py b/bears/java/CheckstyleBear.py\n--- a/bears/java/CheckstyleBear.py\n+++ b/bears/java/CheckstyleBear.py\n@@ -10,6 +10,13 @@\n \"geosoft\": \"http://geosoft.no/development/geosoft_checks.xml\"}\n \n \n+def check_invalid_configuration(checkstyle_configs, use_spaces, indent_size):\n+ if (checkstyle_configs is 'google' and\n+ (not use_spaces or indent_size != 2)):\n+ raise ValueError('Google checkstyle config does not support '\n+ 'use_spaces=False or indent_size != 2')\n+\n+\n def known_checkstyle_or_path(setting):\n if str(setting) in known_checkstyles.keys():\n return str(setting)\n@@ -44,7 +51,8 @@\n \n def create_arguments(\n self, filename, file, config_file,\n- checkstyle_configs: known_checkstyle_or_path=\"google\"):\n+ checkstyle_configs: known_checkstyle_or_path=\"google\",\n+ use_spaces: bool=True, indent_size: int=2):\n \"\"\"\n :param checkstyle_configs:\n A file containing configs to use in ``checkstyle``. It can also\n@@ -64,6 +72,9 @@\n - geosoft - The Java style followed by GeoSoft. More info at\n <http://geosoft.no/development/javastyle.html>\n \"\"\"\n+ check_invalid_configuration(\n+ checkstyle_configs, use_spaces, indent_size)\n+\n if checkstyle_configs in known_checkstyles:\n checkstyle_configs = self.download_cached_file(\n known_checkstyles[checkstyle_configs],\n", "issue": "CheckstyleBear should error when use_spaces is False or indent_size is not 2\nIf `checkstyle_configs = Google`, indentation checking using 2 spaces, according to https://google.github.io/styleguide/javaguide.html#s4.2-block-indentation\n\n`use_spaces=False` must emit an error\n`indent_size` must be set to 2, otherwise emit an error.\n\n", "before_files": [{"content": "from coalib.bearlib.abstractions.Linter import linter\nfrom coalib.settings.Setting import path\n\n\nknown_checkstyles = {\n \"google\": \"https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/google_checks.xml\",\n \"sun\": 'https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/sun_checks.xml',\n \"android-check-easy\": \"https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-easy.xml\",\n \"android-check-hard\": \"https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-hard.xml\",\n \"geosoft\": \"http://geosoft.no/development/geosoft_checks.xml\"}\n\n\ndef known_checkstyle_or_path(setting):\n if str(setting) in known_checkstyles.keys():\n return str(setting)\n else:\n return path(setting)\n\n\n@linter(executable='java',\n output_format='regex',\n output_regex=r'\\[(?P<severity>WARN|INFO)\\].*?'\n r'(?P<line>\\d+):?(?P<column>\\d+)?. '\n r'(?P<message>.*?) 
*\\[(?P<origin>[a-zA-Z]+?)\\]')\nclass CheckstyleBear:\n \"\"\"\n Check Java code for possible style, semantic and design issues.\n\n For more information, consult\n <http://checkstyle.sourceforge.net/checks.html>.\n \"\"\"\n\n LANGUAGES = {\"Java\"}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Formatting', 'Smell'}\n\n def setup_dependencies(self):\n type(self).checkstyle_jar_file = self.download_cached_file(\n 'http://sourceforge.net/projects/checkstyle/files/checkstyle/6.15'\n '/checkstyle-6.15-all.jar',\n \"checkstyle.jar\")\n\n def create_arguments(\n self, filename, file, config_file,\n checkstyle_configs: known_checkstyle_or_path=\"google\"):\n \"\"\"\n :param checkstyle_configs:\n A file containing configs to use in ``checkstyle``. It can also\n have the special values:\n\n - google - Google's Java style. More info at\n <http://checkstyle.sourceforge.net/style_configs.html>.\n - sun - Sun's Java style. These are the same\n as the default Eclipse checks. More info at\n <http://checkstyle.sourceforge.net/style_configs.html>.\n - android-check-easy - The easy Android configs provided by the\n android-check eclipse plugin. More info at\n <https://github.com/noveogroup/android-check>.\n - android-check-hard - The hard Android confis provided by the\n android-check eclipse plugin. More info at\n <https://github.com/noveogroup/android-check>.\n - geosoft - The Java style followed by GeoSoft. More info at\n <http://geosoft.no/development/javastyle.html>\n \"\"\"\n if checkstyle_configs in known_checkstyles:\n checkstyle_configs = self.download_cached_file(\n known_checkstyles[checkstyle_configs],\n checkstyle_configs + \".xml\")\n\n return ('-jar', self.checkstyle_jar_file, '-c',\n checkstyle_configs, filename)\n", "path": "bears/java/CheckstyleBear.py"}], "after_files": [{"content": "from coalib.bearlib.abstractions.Linter import linter\nfrom coalib.settings.Setting import path\n\n\nknown_checkstyles = {\n \"google\": \"https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/google_checks.xml\",\n \"sun\": 'https://raw.githubusercontent.com/checkstyle/checkstyle/master/src/main/resources/sun_checks.xml',\n \"android-check-easy\": \"https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-easy.xml\",\n \"android-check-hard\": \"https://raw.githubusercontent.com/noveogroup/android-check/master/android-check-plugin/src/main/resources/checkstyle/checkstyle-hard.xml\",\n \"geosoft\": \"http://geosoft.no/development/geosoft_checks.xml\"}\n\n\ndef check_invalid_configuration(checkstyle_configs, use_spaces, indent_size):\n if (checkstyle_configs is 'google' and\n (not use_spaces or indent_size != 2)):\n raise ValueError('Google checkstyle config does not support '\n 'use_spaces=False or indent_size != 2')\n\n\ndef known_checkstyle_or_path(setting):\n if str(setting) in known_checkstyles.keys():\n return str(setting)\n else:\n return path(setting)\n\n\n@linter(executable='java',\n output_format='regex',\n output_regex=r'\\[(?P<severity>WARN|INFO)\\].*?'\n r'(?P<line>\\d+):?(?P<column>\\d+)?. '\n r'(?P<message>.*?) 
*\\[(?P<origin>[a-zA-Z]+?)\\]')\nclass CheckstyleBear:\n \"\"\"\n Check Java code for possible style, semantic and design issues.\n\n For more information, consult\n <http://checkstyle.sourceforge.net/checks.html>.\n \"\"\"\n\n LANGUAGES = {\"Java\"}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Formatting', 'Smell'}\n\n def setup_dependencies(self):\n type(self).checkstyle_jar_file = self.download_cached_file(\n 'http://sourceforge.net/projects/checkstyle/files/checkstyle/6.15'\n '/checkstyle-6.15-all.jar',\n \"checkstyle.jar\")\n\n def create_arguments(\n self, filename, file, config_file,\n checkstyle_configs: known_checkstyle_or_path=\"google\",\n use_spaces: bool=True, indent_size: int=2):\n \"\"\"\n :param checkstyle_configs:\n A file containing configs to use in ``checkstyle``. It can also\n have the special values:\n\n - google - Google's Java style. More info at\n <http://checkstyle.sourceforge.net/style_configs.html>.\n - sun - Sun's Java style. These are the same\n as the default Eclipse checks. More info at\n <http://checkstyle.sourceforge.net/style_configs.html>.\n - android-check-easy - The easy Android configs provided by the\n android-check eclipse plugin. More info at\n <https://github.com/noveogroup/android-check>.\n - android-check-hard - The hard Android confis provided by the\n android-check eclipse plugin. More info at\n <https://github.com/noveogroup/android-check>.\n - geosoft - The Java style followed by GeoSoft. More info at\n <http://geosoft.no/development/javastyle.html>\n \"\"\"\n check_invalid_configuration(\n checkstyle_configs, use_spaces, indent_size)\n\n if checkstyle_configs in known_checkstyles:\n checkstyle_configs = self.download_cached_file(\n known_checkstyles[checkstyle_configs],\n checkstyle_configs + \".xml\")\n\n return ('-jar', self.checkstyle_jar_file, '-c',\n checkstyle_configs, filename)\n", "path": "bears/java/CheckstyleBear.py"}]}
| 1,208 | 364 |
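The CheckstyleBear patch above reduces to a single guard plus two new keyword arguments (`use_spaces: bool=True`, `indent_size: int=2`). A sketch of that guard, written with an equality comparison rather than the `is` check used in the diff, and reusing the diff's error message:

```python
def check_invalid_configuration(checkstyle_configs, use_spaces, indent_size):
    # Google's Java style mandates 2-space block indentation, so tabs or any
    # other indent width cannot be checked against the google config.
    if checkstyle_configs == 'google' and (not use_spaces or indent_size != 2):
        raise ValueError('Google checkstyle config does not support '
                         'use_spaces=False or indent_size != 2')
```

Because the new defaults already match Google's style, the guard only fires when a user explicitly asks for tabs or a different indent width.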
gh_patches_debug_28221
|
rasdani/github-patches
|
git_diff
|
cocotb__cocotb-947
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make random seed accessible by cocotb tests
I use the numpy package in my cocotb tests. I want to seed the numpy random generator to make numpy's randomization predictable.
Cocotb does this for python's `random ` package in the [`__init__.py`](https://github.com/potentialventures/cocotb/blob/master/cocotb/__init__.py#L138) using `seed` variable. By making `seed` `global` type it can be accessed in `cocotb.test` as `cocotb.seed`.
This allows the following: `numpy.random.seed(cocotb.seed)` which solves the problem.
I know that this is a one-liner but I don't know if this is the correct approach. Please review my proposal, thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cocotb/__init__.py`
Content:
```
1 ''' Copyright (c) 2013 Potential Ventures Ltd
2 Copyright (c) 2013 SolarFlare Communications Inc
3 All rights reserved.
4
5 Redistribution and use in source and binary forms, with or without
6 modification, are permitted provided that the following conditions are met:
7 * Redistributions of source code must retain the above copyright
8 notice, this list of conditions and the following disclaimer.
9 * Redistributions in binary form must reproduce the above copyright
10 notice, this list of conditions and the following disclaimer in the
11 documentation and/or other materials provided with the distribution.
12 * Neither the name of Potential Ventures Ltd,
13 SolarFlare Communications Inc nor the
14 names of its contributors may be used to endorse or promote products
15 derived from this software without specific prior written permission.
16
17 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. '''
27
28 """
29 Cocotb is a coroutine, cosimulation framework for writing testbenches in Python.
30
31 See http://cocotb.readthedocs.org for full documentation
32 """
33 import os
34 import sys
35 import logging
36 import threading
37 import random
38 import time
39
40 import cocotb.handle
41 from cocotb.scheduler import Scheduler
42 from cocotb.log import SimLogFormatter, SimBaseLog, SimLog
43 from cocotb.regression import RegressionManager
44
45
46 # Things we want in the cocotb namespace
47 from cocotb.decorators import test, coroutine, hook, function, external
48
49 # Singleton scheduler instance
50 # NB this cheekily ensures a singleton since we're replacing the reference
51 # so that cocotb.scheduler gives you the singleton instance and not the
52 # scheduler package
53
54 # GPI logging instance
55 if "COCOTB_SIM" in os.environ:
56 import simulator
57 logging.basicConfig()
58 logging.setLoggerClass(SimBaseLog)
59 log = SimLog('cocotb')
60 level = os.getenv("COCOTB_LOG_LEVEL", "INFO")
61 try:
62 _default_log = getattr(logging, level)
63 except AttributeError as e:
64 log.error("Unable to set loging level to %s" % level)
65 _default_log = logging.INFO
66 log.setLevel(_default_log)
67 loggpi = SimLog('cocotb.gpi')
68 # Notify GPI of log level
69 simulator.log_level(_default_log)
70
71 # If stdout/stderr are not TTYs, Python may not have opened them with line
72 # buffering. In that case, try to reopen them with line buffering
73 # explicitly enabled. This ensures that prints such as stack traces always
74 # appear. Continue silently if this fails.
75 try:
76 if not sys.stdout.isatty():
77 sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 1)
78 log.debug("Reopened stdout with line buffering")
79 if not sys.stderr.isatty():
80 sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 1)
81 log.debug("Reopened stderr with line buffering")
82 except Exception as e:
83 log.warning("Failed to ensure that stdout/stderr are line buffered: %s", e)
84 log.warning("Some stack traces may not appear because of this.")
85
86
87 scheduler = Scheduler()
88 regression_manager = None
89
90 plusargs = {}
91
92 # To save typing provide an alias to scheduler.add
93 fork = scheduler.add
94
95 # FIXME is this really required?
96 _rlock = threading.RLock()
97
98
99 def mem_debug(port):
100 import cocotb.memdebug
101 cocotb.memdebug.start(port)
102
103
104 def _initialise_testbench(root_name):
105 """
106 This function is called after the simulator has elaborated all
107 entities and is ready to run the test.
108
109 The test must be defined by the environment variables
110 MODULE
111 TESTCASE
112
113 The environment variable COCOTB_HOOKS contains a comma-separated list of
114 modules that should be executed before the first test.
115 """
116 _rlock.acquire()
117
118 memcheck_port = os.getenv('MEMCHECK')
119 if memcheck_port is not None:
120 mem_debug(int(memcheck_port))
121
122 exec_path = os.getenv('COCOTB_PY_DIR')
123 if exec_path is None:
124 exec_path = 'Unknown'
125
126 version = os.getenv('VERSION')
127 if version is None:
128 log.info("Unable to determine Cocotb version from %s" % exec_path)
129 else:
130 log.info("Running tests with Cocotb v%s from %s" %
131 (version, exec_path))
132
133 # Create the base handle type
134
135 process_plusargs()
136
137 # Seed the Python random number generator to make this repeatable
138 seed = os.getenv('RANDOM_SEED')
139
140 if seed is None:
141 if 'ntb_random_seed' in plusargs:
142 seed = eval(plusargs['ntb_random_seed'])
143 elif 'seed' in plusargs:
144 seed = eval(plusargs['seed'])
145 else:
146 seed = int(time.time())
147 log.info("Seeding Python random module with %d" % (seed))
148 else:
149 seed = int(seed)
150 log.info("Seeding Python random module with supplied seed %d" % (seed))
151 random.seed(seed)
152
153 module_str = os.getenv('MODULE')
154 test_str = os.getenv('TESTCASE')
155 hooks_str = os.getenv('COCOTB_HOOKS', '')
156
157 if not module_str:
158 raise ImportError("Environment variables defining the module(s) to " +
159 "execute not defined. MODULE=\"%s\"" % (module_str))
160
161 modules = module_str.split(',')
162 hooks = hooks_str.split(',') if hooks_str else []
163
164 global regression_manager
165
166 regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=seed, hooks=hooks)
167 regression_manager.initialise()
168 regression_manager.execute()
169
170 _rlock.release()
171 return True
172
173
174 def _sim_event(level, message):
175 """Function that can be called externally to signal an event"""
176 SIM_INFO = 0
177 SIM_TEST_FAIL = 1
178 SIM_FAIL = 2
179 from cocotb.result import TestFailure, SimFailure
180
181 if level is SIM_TEST_FAIL:
182 scheduler.log.error("Failing test at simulator request")
183 scheduler.finish_test(TestFailure("Failure from external source: %s" %
184 message))
185 elif level is SIM_FAIL:
186 # We simply return here as the simulator will exit
187 # so no cleanup is needed
188 msg = ("Failing test at simulator request before test run completion: "
189 "%s" % message)
190 scheduler.log.error(msg)
191 scheduler.finish_scheduler(SimFailure(msg))
192 else:
193 scheduler.log.error("Unsupported sim event")
194
195 return True
196
197
198 def process_plusargs():
199
200 global plusargs
201
202 plusargs = {}
203
204 for option in cocotb.argv:
205 if option.startswith('+'):
206 if option.find('=') != -1:
207 (name, value) = option[1:].split('=')
208 plusargs[name] = value
209 else:
210 plusargs[option[1:]] = True
211
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cocotb/__init__.py b/cocotb/__init__.py
--- a/cocotb/__init__.py
+++ b/cocotb/__init__.py
@@ -135,20 +135,21 @@
process_plusargs()
# Seed the Python random number generator to make this repeatable
- seed = os.getenv('RANDOM_SEED')
+ global RANDOM_SEED
+ RANDOM_SEED = os.getenv('RANDOM_SEED')
- if seed is None:
+ if RANDOM_SEED is None:
if 'ntb_random_seed' in plusargs:
- seed = eval(plusargs['ntb_random_seed'])
+ RANDOM_SEED = eval(plusargs['ntb_random_seed'])
elif 'seed' in plusargs:
- seed = eval(plusargs['seed'])
+ RANDOM_SEED = eval(plusargs['seed'])
else:
- seed = int(time.time())
- log.info("Seeding Python random module with %d" % (seed))
+ RANDOM_SEED = int(time.time())
+ log.info("Seeding Python random module with %d" % (RANDOM_SEED))
else:
- seed = int(seed)
- log.info("Seeding Python random module with supplied seed %d" % (seed))
- random.seed(seed)
+ RANDOM_SEED = int(RANDOM_SEED)
+ log.info("Seeding Python random module with supplied seed %d" % (RANDOM_SEED))
+ random.seed(RANDOM_SEED)
module_str = os.getenv('MODULE')
test_str = os.getenv('TESTCASE')
@@ -163,7 +164,7 @@
global regression_manager
- regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=seed, hooks=hooks)
+ regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=RANDOM_SEED, hooks=hooks)
regression_manager.initialise()
regression_manager.execute()
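With the seed exposed as the module-level `cocotb.RANDOM_SEED`, the numpy use case from the issue becomes a one-liner inside a test. A sketch, assuming a generator-style cocotb test of that era; numpy is the issue author's dependency rather than cocotb's, and the test name and 1-unit timer are arbitrary:

```python
import cocotb
import numpy
from cocotb.triggers import Timer


@cocotb.test()
def test_reproducible_randomization(dut):
    """Seed numpy from the same value cocotb used for Python's random module."""
    numpy.random.seed(cocotb.RANDOM_SEED)
    yield Timer(1)
```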
|
{"golden_diff": "diff --git a/cocotb/__init__.py b/cocotb/__init__.py\n--- a/cocotb/__init__.py\n+++ b/cocotb/__init__.py\n@@ -135,20 +135,21 @@\n process_plusargs()\n \n # Seed the Python random number generator to make this repeatable\n- seed = os.getenv('RANDOM_SEED')\n+ global RANDOM_SEED\n+ RANDOM_SEED = os.getenv('RANDOM_SEED')\n \n- if seed is None:\n+ if RANDOM_SEED is None:\n if 'ntb_random_seed' in plusargs:\n- seed = eval(plusargs['ntb_random_seed'])\n+ RANDOM_SEED = eval(plusargs['ntb_random_seed'])\n elif 'seed' in plusargs:\n- seed = eval(plusargs['seed'])\n+ RANDOM_SEED = eval(plusargs['seed'])\n else:\n- seed = int(time.time())\n- log.info(\"Seeding Python random module with %d\" % (seed))\n+ RANDOM_SEED = int(time.time())\n+ log.info(\"Seeding Python random module with %d\" % (RANDOM_SEED))\n else:\n- seed = int(seed)\n- log.info(\"Seeding Python random module with supplied seed %d\" % (seed))\n- random.seed(seed)\n+ RANDOM_SEED = int(RANDOM_SEED)\n+ log.info(\"Seeding Python random module with supplied seed %d\" % (RANDOM_SEED))\n+ random.seed(RANDOM_SEED)\n \n module_str = os.getenv('MODULE')\n test_str = os.getenv('TESTCASE')\n@@ -163,7 +164,7 @@\n \n global regression_manager\n \n- regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=seed, hooks=hooks)\n+ regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=RANDOM_SEED, hooks=hooks)\n regression_manager.initialise()\n regression_manager.execute()\n", "issue": "Make random seed accessible by cocotb tests\nI use the numpy package in my cocotb tests. I want to seed the numpy random generator to make numpy's randomization predictable.\r\nCocotb does this for python's `random ` package in the [`__init__.py`](https://github.com/potentialventures/cocotb/blob/master/cocotb/__init__.py#L138) using `seed` variable. By making `seed` `global` type it can be accessed in `cocotb.test` as `cocotb.seed`. \r\nThis allows the following: `numpy.random.seed(cocotb.seed)` which solves the problem.\r\n\r\nI know that this is a one-liner but I don't know if this is the correct approach. Please review my proposal, thanks.\nMake random seed accessible by cocotb tests\nI use the numpy package in my cocotb tests. I want to seed the numpy random generator to make numpy's randomization predictable.\r\nCocotb does this for python's `random ` package in the [`__init__.py`](https://github.com/potentialventures/cocotb/blob/master/cocotb/__init__.py#L138) using `seed` variable. By making `seed` `global` type it can be accessed in `cocotb.test` as `cocotb.seed`. \r\nThis allows the following: `numpy.random.seed(cocotb.seed)` which solves the problem.\r\n\r\nI know that this is a one-liner but I don't know if this is the correct approach. 
Please review my proposal, thanks.\n", "before_files": [{"content": "''' Copyright (c) 2013 Potential Ventures Ltd\nCopyright (c) 2013 SolarFlare Communications Inc\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n * Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n * Neither the name of Potential Ventures Ltd,\n SolarFlare Communications Inc nor the\n names of its contributors may be used to endorse or promote products\n derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\nDIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. '''\n\n\"\"\"\nCocotb is a coroutine, cosimulation framework for writing testbenches in Python.\n\nSee http://cocotb.readthedocs.org for full documentation\n\"\"\"\nimport os\nimport sys\nimport logging\nimport threading\nimport random\nimport time\n\nimport cocotb.handle\nfrom cocotb.scheduler import Scheduler\nfrom cocotb.log import SimLogFormatter, SimBaseLog, SimLog\nfrom cocotb.regression import RegressionManager\n\n\n# Things we want in the cocotb namespace\nfrom cocotb.decorators import test, coroutine, hook, function, external\n\n# Singleton scheduler instance\n# NB this cheekily ensures a singleton since we're replacing the reference\n# so that cocotb.scheduler gives you the singleton instance and not the\n# scheduler package\n\n# GPI logging instance\nif \"COCOTB_SIM\" in os.environ:\n import simulator\n logging.basicConfig()\n logging.setLoggerClass(SimBaseLog)\n log = SimLog('cocotb')\n level = os.getenv(\"COCOTB_LOG_LEVEL\", \"INFO\")\n try:\n _default_log = getattr(logging, level)\n except AttributeError as e:\n log.error(\"Unable to set loging level to %s\" % level)\n _default_log = logging.INFO\n log.setLevel(_default_log)\n loggpi = SimLog('cocotb.gpi')\n # Notify GPI of log level\n simulator.log_level(_default_log)\n\n # If stdout/stderr are not TTYs, Python may not have opened them with line\n # buffering. In that case, try to reopen them with line buffering\n # explicitly enabled. This ensures that prints such as stack traces always\n # appear. 
Continue silently if this fails.\n try:\n if not sys.stdout.isatty():\n sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 1)\n log.debug(\"Reopened stdout with line buffering\")\n if not sys.stderr.isatty():\n sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 1)\n log.debug(\"Reopened stderr with line buffering\")\n except Exception as e:\n log.warning(\"Failed to ensure that stdout/stderr are line buffered: %s\", e)\n log.warning(\"Some stack traces may not appear because of this.\")\n\n\nscheduler = Scheduler()\nregression_manager = None\n\nplusargs = {}\n\n# To save typing provide an alias to scheduler.add\nfork = scheduler.add\n\n# FIXME is this really required?\n_rlock = threading.RLock()\n\n\ndef mem_debug(port):\n import cocotb.memdebug\n cocotb.memdebug.start(port)\n\n\ndef _initialise_testbench(root_name):\n \"\"\"\n This function is called after the simulator has elaborated all\n entities and is ready to run the test.\n\n The test must be defined by the environment variables\n MODULE\n TESTCASE\n\n The environment variable COCOTB_HOOKS contains a comma-separated list of\n modules that should be executed before the first test.\n \"\"\"\n _rlock.acquire()\n\n memcheck_port = os.getenv('MEMCHECK')\n if memcheck_port is not None:\n mem_debug(int(memcheck_port))\n\n exec_path = os.getenv('COCOTB_PY_DIR')\n if exec_path is None:\n exec_path = 'Unknown'\n\n version = os.getenv('VERSION')\n if version is None:\n log.info(\"Unable to determine Cocotb version from %s\" % exec_path)\n else:\n log.info(\"Running tests with Cocotb v%s from %s\" %\n (version, exec_path))\n\n # Create the base handle type\n\n process_plusargs()\n\n # Seed the Python random number generator to make this repeatable\n seed = os.getenv('RANDOM_SEED')\n\n if seed is None:\n if 'ntb_random_seed' in plusargs:\n seed = eval(plusargs['ntb_random_seed'])\n elif 'seed' in plusargs:\n seed = eval(plusargs['seed'])\n else:\n seed = int(time.time())\n log.info(\"Seeding Python random module with %d\" % (seed))\n else:\n seed = int(seed)\n log.info(\"Seeding Python random module with supplied seed %d\" % (seed))\n random.seed(seed)\n\n module_str = os.getenv('MODULE')\n test_str = os.getenv('TESTCASE')\n hooks_str = os.getenv('COCOTB_HOOKS', '')\n\n if not module_str:\n raise ImportError(\"Environment variables defining the module(s) to \" +\n \"execute not defined. 
MODULE=\\\"%s\\\"\" % (module_str))\n\n modules = module_str.split(',')\n hooks = hooks_str.split(',') if hooks_str else []\n\n global regression_manager\n\n regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=seed, hooks=hooks)\n regression_manager.initialise()\n regression_manager.execute()\n\n _rlock.release()\n return True\n\n\ndef _sim_event(level, message):\n \"\"\"Function that can be called externally to signal an event\"\"\"\n SIM_INFO = 0\n SIM_TEST_FAIL = 1\n SIM_FAIL = 2\n from cocotb.result import TestFailure, SimFailure\n\n if level is SIM_TEST_FAIL:\n scheduler.log.error(\"Failing test at simulator request\")\n scheduler.finish_test(TestFailure(\"Failure from external source: %s\" %\n message))\n elif level is SIM_FAIL:\n # We simply return here as the simulator will exit\n # so no cleanup is needed\n msg = (\"Failing test at simulator request before test run completion: \"\n \"%s\" % message)\n scheduler.log.error(msg)\n scheduler.finish_scheduler(SimFailure(msg))\n else:\n scheduler.log.error(\"Unsupported sim event\")\n\n return True\n\n\ndef process_plusargs():\n\n global plusargs\n\n plusargs = {}\n\n for option in cocotb.argv:\n if option.startswith('+'):\n if option.find('=') != -1:\n (name, value) = option[1:].split('=')\n plusargs[name] = value\n else:\n plusargs[option[1:]] = True\n", "path": "cocotb/__init__.py"}], "after_files": [{"content": "''' Copyright (c) 2013 Potential Ventures Ltd\nCopyright (c) 2013 SolarFlare Communications Inc\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n * Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n * Neither the name of Potential Ventures Ltd,\n SolarFlare Communications Inc nor the\n names of its contributors may be used to endorse or promote products\n derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\nDIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
'''\n\n\"\"\"\nCocotb is a coroutine, cosimulation framework for writing testbenches in Python.\n\nSee http://cocotb.readthedocs.org for full documentation\n\"\"\"\nimport os\nimport sys\nimport logging\nimport threading\nimport random\nimport time\n\nimport cocotb.handle\nfrom cocotb.scheduler import Scheduler\nfrom cocotb.log import SimLogFormatter, SimBaseLog, SimLog\nfrom cocotb.regression import RegressionManager\n\n\n# Things we want in the cocotb namespace\nfrom cocotb.decorators import test, coroutine, hook, function, external\n\n# Singleton scheduler instance\n# NB this cheekily ensures a singleton since we're replacing the reference\n# so that cocotb.scheduler gives you the singleton instance and not the\n# scheduler package\n\n# GPI logging instance\nif \"COCOTB_SIM\" in os.environ:\n import simulator\n logging.basicConfig()\n logging.setLoggerClass(SimBaseLog)\n log = SimLog('cocotb')\n level = os.getenv(\"COCOTB_LOG_LEVEL\", \"INFO\")\n try:\n _default_log = getattr(logging, level)\n except AttributeError as e:\n log.error(\"Unable to set loging level to %s\" % level)\n _default_log = logging.INFO\n log.setLevel(_default_log)\n loggpi = SimLog('cocotb.gpi')\n # Notify GPI of log level\n simulator.log_level(_default_log)\n\n # If stdout/stderr are not TTYs, Python may not have opened them with line\n # buffering. In that case, try to reopen them with line buffering\n # explicitly enabled. This ensures that prints such as stack traces always\n # appear. Continue silently if this fails.\n try:\n if not sys.stdout.isatty():\n sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 1)\n log.debug(\"Reopened stdout with line buffering\")\n if not sys.stderr.isatty():\n sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 1)\n log.debug(\"Reopened stderr with line buffering\")\n except Exception as e:\n log.warning(\"Failed to ensure that stdout/stderr are line buffered: %s\", e)\n log.warning(\"Some stack traces may not appear because of this.\")\n\n\nscheduler = Scheduler()\nregression_manager = None\n\nplusargs = {}\n\n# To save typing provide an alias to scheduler.add\nfork = scheduler.add\n\n# FIXME is this really required?\n_rlock = threading.RLock()\n\n\ndef mem_debug(port):\n import cocotb.memdebug\n cocotb.memdebug.start(port)\n\n\ndef _initialise_testbench(root_name):\n \"\"\"\n This function is called after the simulator has elaborated all\n entities and is ready to run the test.\n\n The test must be defined by the environment variables\n MODULE\n TESTCASE\n\n The environment variable COCOTB_HOOKS contains a comma-separated list of\n modules that should be executed before the first test.\n \"\"\"\n _rlock.acquire()\n\n memcheck_port = os.getenv('MEMCHECK')\n if memcheck_port is not None:\n mem_debug(int(memcheck_port))\n\n exec_path = os.getenv('COCOTB_PY_DIR')\n if exec_path is None:\n exec_path = 'Unknown'\n\n version = os.getenv('VERSION')\n if version is None:\n log.info(\"Unable to determine Cocotb version from %s\" % exec_path)\n else:\n log.info(\"Running tests with Cocotb v%s from %s\" %\n (version, exec_path))\n\n # Create the base handle type\n\n process_plusargs()\n\n # Seed the Python random number generator to make this repeatable\n global RANDOM_SEED\n RANDOM_SEED = os.getenv('RANDOM_SEED')\n\n if RANDOM_SEED is None:\n if 'ntb_random_seed' in plusargs:\n RANDOM_SEED = eval(plusargs['ntb_random_seed'])\n elif 'seed' in plusargs:\n RANDOM_SEED = eval(plusargs['seed'])\n else:\n RANDOM_SEED = int(time.time())\n log.info(\"Seeding Python random module with %d\" % 
(RANDOM_SEED))\n else:\n RANDOM_SEED = int(RANDOM_SEED)\n log.info(\"Seeding Python random module with supplied seed %d\" % (RANDOM_SEED))\n random.seed(RANDOM_SEED)\n\n module_str = os.getenv('MODULE')\n test_str = os.getenv('TESTCASE')\n hooks_str = os.getenv('COCOTB_HOOKS', '')\n\n if not module_str:\n raise ImportError(\"Environment variables defining the module(s) to \" +\n \"execute not defined. MODULE=\\\"%s\\\"\" % (module_str))\n\n modules = module_str.split(',')\n hooks = hooks_str.split(',') if hooks_str else []\n\n global regression_manager\n\n regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=RANDOM_SEED, hooks=hooks)\n regression_manager.initialise()\n regression_manager.execute()\n\n _rlock.release()\n return True\n\n\ndef _sim_event(level, message):\n \"\"\"Function that can be called externally to signal an event\"\"\"\n SIM_INFO = 0\n SIM_TEST_FAIL = 1\n SIM_FAIL = 2\n from cocotb.result import TestFailure, SimFailure\n\n if level is SIM_TEST_FAIL:\n scheduler.log.error(\"Failing test at simulator request\")\n scheduler.finish_test(TestFailure(\"Failure from external source: %s\" %\n message))\n elif level is SIM_FAIL:\n # We simply return here as the simulator will exit\n # so no cleanup is needed\n msg = (\"Failing test at simulator request before test run completion: \"\n \"%s\" % message)\n scheduler.log.error(msg)\n scheduler.finish_scheduler(SimFailure(msg))\n else:\n scheduler.log.error(\"Unsupported sim event\")\n\n return True\n\n\ndef process_plusargs():\n\n global plusargs\n\n plusargs = {}\n\n for option in cocotb.argv:\n if option.startswith('+'):\n if option.find('=') != -1:\n (name, value) = option[1:].split('=')\n plusargs[name] = value\n else:\n plusargs[option[1:]] = True\n", "path": "cocotb/__init__.py"}]}
| 2,773 | 442 |
gh_patches_debug_38665
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-1105
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`APIDataSet` ease of use
## Description
Howdy team!
I was working with the `APIDataSet` recently and had two issues out of the box.
#### 1. Specifying the `auth` keyword argument in yaml
The `requests` library expects the `auth` parameter of a request to be either an `HTTPBasicAuth` or a `tuple` (lists are not allowed, see [here](https://github.com/ianwhale/kedro-kaggle-starter/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/%7B%7B%20cookiecutter.python_package%20%7D%7D/api.py) in requests). At the moment, neither is possible to specify in my `catalog.yml`.
From what I hear, you're already working on this (#1011). So maybe this point is moot.
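For concreteness, a hypothetical `catalog.yml` entry that illustrates the limitation; the dataset name, URL and secrets are invented, and YAML has no tuple literal, so the `auth` value below reaches the dataset as a Python list:

```yaml
weather_api:
  type: api.APIDataSet
  url: https://example.com/v1/observations
  auth:               # loads as a list, which requests rejects
    - my_username
    - my_password
```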
#### 2. The `auth` keyword argument and `credentials.yml`
I would like to specify my `(username, password)` tuple inside `credentials.yml`. However, the `APIDataSet`'s `auth` keyword wouldn't get filled in by the config loader.
To get this working, you'd have to extend `APIDataSet` to have a `credentials` keyword that is filled in for `auth` in an upcall.
It would be great to either have this by default, or even have the loader fill `auth` keywords in addition to `credentials`. Although that might have unintended consequences.
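A sketch of the flow being asked for, assuming the dataset gains a `credentials` argument (which the patch further down adds); file paths, entry names and secrets are illustrative:

```yaml
# conf/local/credentials.yml
weather_api_creds:
  - my_username
  - my_password

# conf/base/catalog.yml: the config loader swaps the string below for the
# matching credentials entry because the key is named `credentials`.
weather_api:
  type: api.APIDataSet
  url: https://example.com/v1/observations
  credentials: weather_api_creds
```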
## Context
Hopefully this would unify the experience a bit. Right now, the `credentials` keyword in a dataset and `credentials.yml` are the main points of access to secrets. Which is probably good.
## Possible Implementation
I whipped up [my own `APIDataSet`](https://github.com/ianwhale/kedro-kaggle-starter/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/%7B%7B%20cookiecutter.python_package%20%7D%7D/api.py) to solve both the problems above.
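The linked implementation is not reproduced here; the sketch below captures the same idea under stated assumptions (the class name, the kedro import path, and the tuple cast are illustrative rather than the author's actual code): wrap `APIDataSet`, accept `credentials`, and forward it to `auth`.

```python
from typing import Any, Iterable, Union

from requests.auth import AuthBase

from kedro.extras.datasets.api import APIDataSet


class CredentialsAPIDataSet(APIDataSet):
    """APIDataSet variant whose secrets can come from credentials.yml."""

    def __init__(
        self,
        url: str,
        credentials: Union[Iterable[str], AuthBase] = None,
        **kwargs: Any,
    ) -> None:
        auth = credentials
        if auth is not None and not isinstance(auth, AuthBase):
            auth = tuple(auth)  # requests wants a tuple, not a list
        super().__init__(url=url, auth=auth, **kwargs)
```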
## Possible Alternatives
To get this working with no changes to `APIDataSet`, we'd have to implement the changes in #1011 so we can specify tuples in `credentials.yml` and have the config loader fill in `auth` as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/extras/datasets/api/api_dataset.py`
Content:
```
1 """``APIDataSet`` loads the data from HTTP(S) APIs.
2 It uses the python requests library: https://requests.readthedocs.io/en/master/
3 """
4 from typing import Any, Dict, List, Tuple, Union
5
6 import requests
7 from requests.auth import AuthBase
8
9 from kedro.io.core import AbstractDataSet, DataSetError
10
11
12 class APIDataSet(AbstractDataSet):
13 """``APIDataSet`` loads the data from HTTP(S) APIs.
14 It uses the python requests library: https://requests.readthedocs.io/en/master/
15
16 Example:
17 ::
18
19 >>> from kedro.extras.datasets.api import APIDataSet
20 >>>
21 >>>
22 >>> data_set = APIDataSet(
23 >>> url="https://quickstats.nass.usda.gov",
24 >>> params={
25 >>> "key": "SOME_TOKEN",
26 >>> "format": "JSON",
27 >>> "commodity_desc": "CORN",
28 >>> "statisticcat_des": "YIELD",
29 >>> "agg_level_desc": "STATE",
30 >>> "year": 2000
31 >>> }
32 >>> )
33 >>> data = data_set.load()
34 """
35
36 # pylint: disable=too-many-arguments
37 def __init__(
38 self,
39 url: str,
40 method: str = "GET",
41 data: Any = None,
42 params: Dict[str, Any] = None,
43 headers: Dict[str, Any] = None,
44 auth: Union[Tuple[str], AuthBase] = None,
45 json: Union[List, Dict[str, Any]] = None,
46 timeout: int = 60,
47 ) -> None:
48 """Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.
49
50 Args:
51 url: The API URL endpoint.
52 method: The Method of the request, GET, POST, PUT, DELETE, HEAD, etc...
53 data: The request payload, used for POST, PUT, etc requests
54 https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests
55 params: The url parameters of the API.
56 https://requests.readthedocs.io/en/master/user/quickstart/#passing-parameters-in-urls
57 headers: The HTTP headers.
58 https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers
59 auth: Anything ``requests`` accepts. Normally it's either ``('login', 'password')``,
60 or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases.
61 json: The request payload, used for POST, PUT, etc requests, passed in
62 to the json kwarg in the requests object.
63 https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests
64 timeout: The wait time in seconds for a response, defaults to 1 minute.
65 https://requests.readthedocs.io/en/master/user/quickstart/#timeouts
66
67 """
68 super().__init__()
69 self._request_args: Dict[str, Any] = {
70 "url": url,
71 "method": method,
72 "data": data,
73 "params": params,
74 "headers": headers,
75 "auth": auth,
76 "json": json,
77 "timeout": timeout,
78 }
79
80 def _describe(self) -> Dict[str, Any]:
81 return dict(**self._request_args)
82
83 def _execute_request(self) -> requests.Response:
84 try:
85 response = requests.request(**self._request_args)
86 response.raise_for_status()
87 except requests.exceptions.HTTPError as exc:
88 raise DataSetError("Failed to fetch data", exc) from exc
89 except OSError as exc:
90 raise DataSetError("Failed to connect to the remote server") from exc
91
92 return response
93
94 def _load(self) -> requests.Response:
95 return self._execute_request()
96
97 def _save(self, data: Any) -> None:
98 raise DataSetError(f"{self.__class__.__name__} is a read only data set type")
99
100 def _exists(self) -> bool:
101 response = self._execute_request()
102
103 return response.ok
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kedro/extras/datasets/api/api_dataset.py b/kedro/extras/datasets/api/api_dataset.py
--- a/kedro/extras/datasets/api/api_dataset.py
+++ b/kedro/extras/datasets/api/api_dataset.py
@@ -1,7 +1,7 @@
"""``APIDataSet`` loads the data from HTTP(S) APIs.
It uses the python requests library: https://requests.readthedocs.io/en/master/
"""
-from typing import Any, Dict, List, Tuple, Union
+from typing import Any, Dict, Iterable, List, Union
import requests
from requests.auth import AuthBase
@@ -41,9 +41,10 @@
data: Any = None,
params: Dict[str, Any] = None,
headers: Dict[str, Any] = None,
- auth: Union[Tuple[str], AuthBase] = None,
+ auth: Union[Iterable[str], AuthBase] = None,
json: Union[List, Dict[str, Any]] = None,
timeout: int = 60,
+ credentials: Union[Iterable[str], AuthBase] = None,
) -> None:
"""Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.
@@ -57,15 +58,29 @@
headers: The HTTP headers.
https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers
auth: Anything ``requests`` accepts. Normally it's either ``('login', 'password')``,
- or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases.
+ or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases. Any
+ iterable will be cast to a tuple.
json: The request payload, used for POST, PUT, etc requests, passed in
to the json kwarg in the requests object.
https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests
timeout: The wait time in seconds for a response, defaults to 1 minute.
https://requests.readthedocs.io/en/master/user/quickstart/#timeouts
+ credentials: same as ``auth``. Allows specifying ``auth`` secrets in
+ credentials.yml.
+ Raises:
+ ValueError: if both ``credentials`` and ``auth`` are specified.
"""
super().__init__()
+
+ if credentials is not None and auth is not None:
+ raise ValueError("Cannot specify both auth and credentials.")
+
+ auth = credentials or auth
+
+ if isinstance(auth, Iterable):
+ auth = tuple(auth)
+
self._request_args: Dict[str, Any] = {
"url": url,
"method": method,
|
{"golden_diff": "diff --git a/kedro/extras/datasets/api/api_dataset.py b/kedro/extras/datasets/api/api_dataset.py\n--- a/kedro/extras/datasets/api/api_dataset.py\n+++ b/kedro/extras/datasets/api/api_dataset.py\n@@ -1,7 +1,7 @@\n \"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\n It uses the python requests library: https://requests.readthedocs.io/en/master/\n \"\"\"\n-from typing import Any, Dict, List, Tuple, Union\n+from typing import Any, Dict, Iterable, List, Union\n \n import requests\n from requests.auth import AuthBase\n@@ -41,9 +41,10 @@\n data: Any = None,\n params: Dict[str, Any] = None,\n headers: Dict[str, Any] = None,\n- auth: Union[Tuple[str], AuthBase] = None,\n+ auth: Union[Iterable[str], AuthBase] = None,\n json: Union[List, Dict[str, Any]] = None,\n timeout: int = 60,\n+ credentials: Union[Iterable[str], AuthBase] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.\n \n@@ -57,15 +58,29 @@\n headers: The HTTP headers.\n https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers\n auth: Anything ``requests`` accepts. Normally it's either ``('login', 'password')``,\n- or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases.\n+ or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases. Any\n+ iterable will be cast to a tuple.\n json: The request payload, used for POST, PUT, etc requests, passed in\n to the json kwarg in the requests object.\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n timeout: The wait time in seconds for a response, defaults to 1 minute.\n https://requests.readthedocs.io/en/master/user/quickstart/#timeouts\n+ credentials: same as ``auth``. Allows specifying ``auth`` secrets in\n+ credentials.yml.\n \n+ Raises:\n+ ValueError: if both ``credentials`` and ``auth`` are specified.\n \"\"\"\n super().__init__()\n+\n+ if credentials is not None and auth is not None:\n+ raise ValueError(\"Cannot specify both auth and credentials.\")\n+\n+ auth = credentials or auth\n+\n+ if isinstance(auth, Iterable):\n+ auth = tuple(auth)\n+\n self._request_args: Dict[str, Any] = {\n \"url\": url,\n \"method\": method,\n", "issue": "`APIDataSet` ease of use\n## Description\r\n\r\nHowdy team!\r\n\r\nI was working with the `APIDataSet` recently and had two issues out of the box.\r\n\r\n#### 1. Specifying the `auth` keyword argument in yaml\r\n\r\nThe `requests` library expects the `auth` parameter of a request to be either a `HTTPBasicAuth` or a `tuple` (lists are not allowed, see [here](https://github.com/ianwhale/kedro-kaggle-starter/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/%7B%7B%20cookiecutter.python_package%20%7D%7D/api.py) in requests). At the moment, neither are possible to specify in my `catalog.yml`. \r\n\r\nFrom what I hear, you're already working on this (#1011). So maybe this point is moot.\r\n\r\n#### 2. The `auth` keyword argument and `credentials.yml`\r\n\r\nI would like to specify my `(username, password)` tuple inside `credentials.yml`. However, the `APIDataSet`'s `auth` keyword wouldn't get filled in by the config loader. \r\n\r\nTo get this working, you'd have to extend `APIDataSet` to have a `credentials` keyword that is filled in for `auth` in an upcall.\r\n\r\nIt would be great to either have this by default, or even have the loader fill `auth` keywords in addition to `credentials`. Although that might have unintended consequences. \r\n\r\n## Context\r\n\r\nHopefully this would unify the experience a bit. 
Right now, the `credentials` keyword in a dataset and `credentials.yml` are the main points of access to secrets. Which is probably good.\r\n\r\n## Possible Implementation\r\n\r\nI whipped up [my own `APIDataSet`](https://github.com/ianwhale/kedro-kaggle-starter/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/%7B%7B%20cookiecutter.python_package%20%7D%7D/api.py) to solve both the problems above.\r\n\r\n## Possible Alternatives\r\n\r\nTo get this working with no changes to `APIDataSet`, we'd have to implement the changes in #1011 so we can specify tuples in `credentials.yml` and have the config loader fill in `auth` as well.\n", "before_files": [{"content": "\"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\nIt uses the python requests library: https://requests.readthedocs.io/en/master/\n\"\"\"\nfrom typing import Any, Dict, List, Tuple, Union\n\nimport requests\nfrom requests.auth import AuthBase\n\nfrom kedro.io.core import AbstractDataSet, DataSetError\n\n\nclass APIDataSet(AbstractDataSet):\n \"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\n It uses the python requests library: https://requests.readthedocs.io/en/master/\n\n Example:\n ::\n\n >>> from kedro.extras.datasets.api import APIDataSet\n >>>\n >>>\n >>> data_set = APIDataSet(\n >>> url=\"https://quickstats.nass.usda.gov\",\n >>> params={\n >>> \"key\": \"SOME_TOKEN\",\n >>> \"format\": \"JSON\",\n >>> \"commodity_desc\": \"CORN\",\n >>> \"statisticcat_des\": \"YIELD\",\n >>> \"agg_level_desc\": \"STATE\",\n >>> \"year\": 2000\n >>> }\n >>> )\n >>> data = data_set.load()\n \"\"\"\n\n # pylint: disable=too-many-arguments\n def __init__(\n self,\n url: str,\n method: str = \"GET\",\n data: Any = None,\n params: Dict[str, Any] = None,\n headers: Dict[str, Any] = None,\n auth: Union[Tuple[str], AuthBase] = None,\n json: Union[List, Dict[str, Any]] = None,\n timeout: int = 60,\n ) -> None:\n \"\"\"Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.\n\n Args:\n url: The API URL endpoint.\n method: The Method of the request, GET, POST, PUT, DELETE, HEAD, etc...\n data: The request payload, used for POST, PUT, etc requests\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n params: The url parameters of the API.\n https://requests.readthedocs.io/en/master/user/quickstart/#passing-parameters-in-urls\n headers: The HTTP headers.\n https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers\n auth: Anything ``requests`` accepts. 
Normally it's either ``('login', 'password')``,\n or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases.\n json: The request payload, used for POST, PUT, etc requests, passed in\n to the json kwarg in the requests object.\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n timeout: The wait time in seconds for a response, defaults to 1 minute.\n https://requests.readthedocs.io/en/master/user/quickstart/#timeouts\n\n \"\"\"\n super().__init__()\n self._request_args: Dict[str, Any] = {\n \"url\": url,\n \"method\": method,\n \"data\": data,\n \"params\": params,\n \"headers\": headers,\n \"auth\": auth,\n \"json\": json,\n \"timeout\": timeout,\n }\n\n def _describe(self) -> Dict[str, Any]:\n return dict(**self._request_args)\n\n def _execute_request(self) -> requests.Response:\n try:\n response = requests.request(**self._request_args)\n response.raise_for_status()\n except requests.exceptions.HTTPError as exc:\n raise DataSetError(\"Failed to fetch data\", exc) from exc\n except OSError as exc:\n raise DataSetError(\"Failed to connect to the remote server\") from exc\n\n return response\n\n def _load(self) -> requests.Response:\n return self._execute_request()\n\n def _save(self, data: Any) -> None:\n raise DataSetError(f\"{self.__class__.__name__} is a read only data set type\")\n\n def _exists(self) -> bool:\n response = self._execute_request()\n\n return response.ok\n", "path": "kedro/extras/datasets/api/api_dataset.py"}], "after_files": [{"content": "\"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\nIt uses the python requests library: https://requests.readthedocs.io/en/master/\n\"\"\"\nfrom typing import Any, Dict, Iterable, List, Union\n\nimport requests\nfrom requests.auth import AuthBase\n\nfrom kedro.io.core import AbstractDataSet, DataSetError\n\n\nclass APIDataSet(AbstractDataSet):\n \"\"\"``APIDataSet`` loads the data from HTTP(S) APIs.\n It uses the python requests library: https://requests.readthedocs.io/en/master/\n\n Example:\n ::\n\n >>> from kedro.extras.datasets.api import APIDataSet\n >>>\n >>>\n >>> data_set = APIDataSet(\n >>> url=\"https://quickstats.nass.usda.gov\",\n >>> params={\n >>> \"key\": \"SOME_TOKEN\",\n >>> \"format\": \"JSON\",\n >>> \"commodity_desc\": \"CORN\",\n >>> \"statisticcat_des\": \"YIELD\",\n >>> \"agg_level_desc\": \"STATE\",\n >>> \"year\": 2000\n >>> }\n >>> )\n >>> data = data_set.load()\n \"\"\"\n\n # pylint: disable=too-many-arguments\n def __init__(\n self,\n url: str,\n method: str = \"GET\",\n data: Any = None,\n params: Dict[str, Any] = None,\n headers: Dict[str, Any] = None,\n auth: Union[Iterable[str], AuthBase] = None,\n json: Union[List, Dict[str, Any]] = None,\n timeout: int = 60,\n credentials: Union[Iterable[str], AuthBase] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``APIDataSet`` to fetch data from an API endpoint.\n\n Args:\n url: The API URL endpoint.\n method: The Method of the request, GET, POST, PUT, DELETE, HEAD, etc...\n data: The request payload, used for POST, PUT, etc requests\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n params: The url parameters of the API.\n https://requests.readthedocs.io/en/master/user/quickstart/#passing-parameters-in-urls\n headers: The HTTP headers.\n https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers\n auth: Anything ``requests`` accepts. 
Normally it's either ``('login', 'password')``,\n or ``AuthBase``, ``HTTPBasicAuth`` instance for more complex cases. Any\n iterable will be cast to a tuple.\n json: The request payload, used for POST, PUT, etc requests, passed in\n to the json kwarg in the requests object.\n https://requests.readthedocs.io/en/master/user/quickstart/#more-complicated-post-requests\n timeout: The wait time in seconds for a response, defaults to 1 minute.\n https://requests.readthedocs.io/en/master/user/quickstart/#timeouts\n credentials: same as ``auth``. Allows specifying ``auth`` secrets in\n credentials.yml.\n\n Raises:\n ValueError: if both ``credentials`` and ``auth`` are specified.\n \"\"\"\n super().__init__()\n\n if credentials is not None and auth is not None:\n raise ValueError(\"Cannot specify both auth and credentials.\")\n\n auth = credentials or auth\n\n if isinstance(auth, Iterable):\n auth = tuple(auth)\n\n self._request_args: Dict[str, Any] = {\n \"url\": url,\n \"method\": method,\n \"data\": data,\n \"params\": params,\n \"headers\": headers,\n \"auth\": auth,\n \"json\": json,\n \"timeout\": timeout,\n }\n\n def _describe(self) -> Dict[str, Any]:\n return dict(**self._request_args)\n\n def _execute_request(self) -> requests.Response:\n try:\n response = requests.request(**self._request_args)\n response.raise_for_status()\n except requests.exceptions.HTTPError as exc:\n raise DataSetError(\"Failed to fetch data\", exc) from exc\n except OSError as exc:\n raise DataSetError(\"Failed to connect to the remote server\") from exc\n\n return response\n\n def _load(self) -> requests.Response:\n return self._execute_request()\n\n def _save(self, data: Any) -> None:\n raise DataSetError(f\"{self.__class__.__name__} is a read only data set type\")\n\n def _exists(self) -> bool:\n response = self._execute_request()\n\n return response.ok\n", "path": "kedro/extras/datasets/api/api_dataset.py"}]}
| 1,871 | 598 |
gh_patches_debug_3630
|
rasdani/github-patches
|
git_diff
|
GoogleCloudPlatform__PerfKitBenchmarker-1320
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Default GCP config for gpu_pcie_bandwidth is broken
When running the gpu_pcie_bandwidth benchmark under the default GCP config, the following error is raised:
```
ERROR: (gcloud.compute.instances.create) Some requests did not succeed:
- Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'https://www.googleapis.com/compute/v1/projects/bionic-baton-343/global/images//ubuntu-os-cloud/ubuntu-1604-xenial-v20161115'. The URL is malformed.
```
Until a patch is released, this error can be avoided by running the benchmark and passing in the desired image and image project on the CLI, like so:
`./pkb.py --benchmarks=gpu_pcie_bandwidth --cloud=GCP --image=ubuntu-1610-yakkety-v20170307 --image_project=ubuntu-os-cloud`
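An equivalent workaround without CLI flags is a user config that overrides only the image fields, which is essentially what the patch further down bakes into the defaults. A hypothetical override (e.g. supplied via `--benchmark_config_file`), assuming partial configs merge over the benchmark defaults:

```yaml
gpu_pcie_bandwidth:
  vm_groups:
    default:
      vm_spec:
        GCP:
          image: ubuntu-1604-xenial-v20161115
          image_project: ubuntu-os-cloud
```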
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py`
Content:
```
1 # Copyright 2017 PerfKitBenchmarker Authors. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 """Runs NVIDIA's CUDA PCI-E bandwidth test
17 (https://developer.nvidia.com/cuda-code-samples)
18 """
19
20 import numpy
21 import re
22 from perfkitbenchmarker import configs
23 from perfkitbenchmarker import flags
24 from perfkitbenchmarker import sample
25 from perfkitbenchmarker import regex_util
26 from perfkitbenchmarker.linux_packages import cuda_toolkit_8
27
28
29 flags.DEFINE_integer('gpu_pcie_bandwidth_iterations', 30,
30 'number of iterations to run',
31 lower_bound=1)
32
33
34 FLAGS = flags.FLAGS
35
36 BENCHMARK_NAME = 'gpu_pcie_bandwidth'
37 # Note on the config: gce_migrate_on_maintenance must be false,
38 # because GCE does not support migrating the user's GPU state.
39 BENCHMARK_CONFIG = """
40 gpu_pcie_bandwidth:
41 description: Runs NVIDIA's CUDA bandwidth test.
42 flags:
43 gce_migrate_on_maintenance: False
44 vm_groups:
45 default:
46 vm_spec:
47 GCP:
48 image: /ubuntu-os-cloud/ubuntu-1604-xenial-v20161115
49 machine_type: n1-standard-4-k80x1
50 zone: us-east1-d
51 boot_disk_size: 200
52 AWS:
53 image: ami-a9d276c9
54 machine_type: p2.xlarge
55 zone: us-west-2b
56 boot_disk_size: 200
57 Azure:
58 image: Canonical:UbuntuServer:16.04.0-LTS:latest
59 machine_type: Standard_NC6
60 zone: eastus
61 """
62 BENCHMARK_METRICS = ['Host to device bandwidth',
63 'Device to host bandwidth',
64 'Device to device bandwidth']
65
66 EXTRACT_BANDWIDTH_TEST_RESULTS_REGEX = r'\d+\s+(\d+\.?\d*)'
67 EXTRACT_DEVICE_INFO_REGEX = r'Device\s*(\d):\s*(.*$)'
68
69
70 def GetConfig(user_config):
71 config = configs.LoadConfig(BENCHMARK_CONFIG, user_config, BENCHMARK_NAME)
72 return config
73
74
75 def CheckPrerequisites(benchmark_config):
76 """Verifies that the required resources are present.
77
78 Raises:
79 perfkitbenchmarker.data.ResourceNotFound: On missing resource.
80 """
81 cuda_toolkit_8.CheckPrerequisites()
82
83
84 def Prepare(benchmark_spec):
85 """Install CUDA toolkit 8.
86
87 Args:
88 benchmark_spec: The benchmark specification. Contains all data that is
89 required to run the benchmark.
90 """
91 vm = benchmark_spec.vms[0]
92 vm.Install('cuda_toolkit_8')
93
94
95 def _ParseDeviceInfo(test_output):
96 """Parses the GPU device info from the CUDA device bandwidth test output.
97
98 Args:
99 test_output: The resulting output string from the bandwidth
100 test application.
101
102 Returns:
103 A dictionary mapping the device number to its name, for every
104 device available on the system.
105 """
106 matches = regex_util.ExtractAllMatches(EXTRACT_DEVICE_INFO_REGEX,
107 test_output, re.MULTILINE)
108 devices = {str(i[0]): str(i[1]) for i in matches}
109 return devices
110
111
112 def _ParseOutputFromSingleIteration(test_output):
113 """Parses the output of the CUDA device bandwidth test.
114
115 Args:
116 test_output: The resulting output string from the bandwidth
117 test application.
118
119 Returns:
120 A dictionary containing the following values as floats:
121 * the device to host bandwidth
122 * the host to device bandwidth
123 * the device to device bandwidth
124 All units are in MB/s, as these are the units guaranteed to be output
125 by the test.
126 """
127 matches = regex_util.ExtractAllMatches(EXTRACT_BANDWIDTH_TEST_RESULTS_REGEX,
128 test_output)
129 results = {}
130 for i, metric in enumerate(BENCHMARK_METRICS):
131 results[metric] = float(matches[i])
132 return results
133
134
135 def _CalculateMetricsOverAllIterations(result_dicts, metadata={}):
136 """Calculates stats given list of result dictionaries.
137
138 Each item in the list represends the results from a single
139 iteration.
140
141 Args:
142 result_dicts: a list of result dictionaries. Each result dictionary
143 represents a single run of the CUDA device bandwidth test,
144 parsed by _ParseOutputFromSingleIteration().
145
146 metadata: metadata dict to be added to each Sample.
147
148 Returns:
149 a list of sample.Samples containing the device to host bandwidth,
150 host to device bandwidth, and device to device bandwidth for each
151 iteration, along with the following stats for each bandwidth type:
152 * mean
153 * min
154 * max
155 * stddev
156 """
157 samples = []
158 for metric in BENCHMARK_METRICS:
159 sequence = [x[metric] for x in result_dicts]
160 # Add a Sample for each iteration, and include the iteration number
161 # in the metadata.
162 for idx, measurement in enumerate(sequence):
163 metadata_copy = metadata.copy()
164 metadata_copy['iteration'] = idx
165 samples.append(sample.Sample(
166 metric, measurement, 'MB/s', metadata_copy))
167
168 samples.append(sample.Sample(
169 metric + ', min', min(sequence), 'MB/s', metadata))
170 samples.append(sample.Sample(
171 metric + ', max', max(sequence), 'MB/s', metadata))
172 samples.append(sample.Sample(
173 metric + ', mean', numpy.mean(sequence), 'MB/s', metadata))
174 samples.append(sample.Sample(
175 metric + ', stddev', numpy.std(sequence), 'MB/s', metadata))
176 return samples
177
178
179 def Run(benchmark_spec):
180 """Sets the GPU clock speed and runs the CUDA PCIe benchmark.
181
182 Args:
183 benchmark_spec: The benchmark specification. Contains all data that is
184 required to run the benchmark.
185
186 Returns:
187 A list of sample.Sample objects.
188 """
189 vm = benchmark_spec.vms[0]
190 # Note: The clock speed is set in this function rather than Prepare()
191 # so that the user can perform multiple runs with a specified
192 # clock speed without having to re-prepare the VM.
193 cuda_toolkit_8.SetAndConfirmGpuClocks(vm)
194 num_iterations = FLAGS.gpu_pcie_bandwidth_iterations
195 raw_results = []
196 metadata = {}
197 metadata['num_iterations'] = num_iterations
198 metadata['num_gpus'] = cuda_toolkit_8.QueryNumberOfGpus(vm)
199 metadata['memory_clock_MHz'] = FLAGS.gpu_clock_speeds[0]
200 metadata['graphics_clock_MHz'] = FLAGS.gpu_clock_speeds[1]
201 run_command = ('%s/extras/demo_suite/bandwidthTest --device=all'
202 % cuda_toolkit_8.CUDA_TOOLKIT_INSTALL_DIR)
203 for i in range(num_iterations):
204 stdout, _ = vm.RemoteCommand(run_command, should_log=True)
205 raw_results.append(_ParseOutputFromSingleIteration(stdout))
206 if 'device_info' not in metadata:
207 metadata['device_info'] = _ParseDeviceInfo(stdout)
208 return _CalculateMetricsOverAllIterations(raw_results, metadata)
209
210
211 def Cleanup(benchmark_spec):
212 """Uninstalls CUDA toolkit 8
213
214 Args:
215 benchmark_spec: The benchmark specification. Contains all data that is
216 required to run the benchmark.
217 """
218 vm = benchmark_spec.vms[0]
219 vm.Uninstall('cuda_toolkit_8')
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py b/perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py
--- a/perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py
+++ b/perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py
@@ -45,7 +45,8 @@
default:
vm_spec:
GCP:
- image: /ubuntu-os-cloud/ubuntu-1604-xenial-v20161115
+ image: ubuntu-1604-xenial-v20161115
+ image_project: ubuntu-os-cloud
machine_type: n1-standard-4-k80x1
zone: us-east1-d
boot_disk_size: 200
|
{"golden_diff": "diff --git a/perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py b/perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py\n--- a/perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py\n+++ b/perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py\n@@ -45,7 +45,8 @@\n default:\n vm_spec:\n GCP:\n- image: /ubuntu-os-cloud/ubuntu-1604-xenial-v20161115\n+ image: ubuntu-1604-xenial-v20161115\n+ image_project: ubuntu-os-cloud\n machine_type: n1-standard-4-k80x1\n zone: us-east1-d\n boot_disk_size: 200\n", "issue": "Default GCP config for gpu_pcie_bandwidth is broken\nWhen running the gpu_pcie_bandwidth benchmark under the default GCP config, the following error is raised: \r\n```\r\nERROR: (gcloud.compute.instances.create) Some requests did not succeed:\r\n- Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'https://www.googleapis.com/compute/v1/projects/bionic-baton-343/global/images//ubuntu-os-cloud/ubuntu-1604-xenial-v20161115'. The URL is malformed.\r\n```\r\nUntil a patch is released, this error can be avoided by running the benchmark and passing in the desired image and image project on the CLI, like so:\r\n\r\n`./pkb.py --benchmarks=gpu_pcie_bandwidth --cloud=GCP --image=ubuntu-1610-yakkety-v20170307 --image_project=ubuntu-os-cloud`\n", "before_files": [{"content": "# Copyright 2017 PerfKitBenchmarker Authors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"Runs NVIDIA's CUDA PCI-E bandwidth test\n (https://developer.nvidia.com/cuda-code-samples)\n\"\"\"\n\nimport numpy\nimport re\nfrom perfkitbenchmarker import configs\nfrom perfkitbenchmarker import flags\nfrom perfkitbenchmarker import sample\nfrom perfkitbenchmarker import regex_util\nfrom perfkitbenchmarker.linux_packages import cuda_toolkit_8\n\n\nflags.DEFINE_integer('gpu_pcie_bandwidth_iterations', 30,\n 'number of iterations to run',\n lower_bound=1)\n\n\nFLAGS = flags.FLAGS\n\nBENCHMARK_NAME = 'gpu_pcie_bandwidth'\n# Note on the config: gce_migrate_on_maintenance must be false,\n# because GCE does not support migrating the user's GPU state.\nBENCHMARK_CONFIG = \"\"\"\ngpu_pcie_bandwidth:\n description: Runs NVIDIA's CUDA bandwidth test.\n flags:\n gce_migrate_on_maintenance: False\n vm_groups:\n default:\n vm_spec:\n GCP:\n image: /ubuntu-os-cloud/ubuntu-1604-xenial-v20161115\n machine_type: n1-standard-4-k80x1\n zone: us-east1-d\n boot_disk_size: 200\n AWS:\n image: ami-a9d276c9\n machine_type: p2.xlarge\n zone: us-west-2b\n boot_disk_size: 200\n Azure:\n image: Canonical:UbuntuServer:16.04.0-LTS:latest\n machine_type: Standard_NC6\n zone: eastus\n\"\"\"\nBENCHMARK_METRICS = ['Host to device bandwidth',\n 'Device to host bandwidth',\n 'Device to device bandwidth']\n\nEXTRACT_BANDWIDTH_TEST_RESULTS_REGEX = r'\\d+\\s+(\\d+\\.?\\d*)'\nEXTRACT_DEVICE_INFO_REGEX = r'Device\\s*(\\d):\\s*(.*$)'\n\n\ndef GetConfig(user_config):\n config = configs.LoadConfig(BENCHMARK_CONFIG, user_config, BENCHMARK_NAME)\n 
return config\n\n\ndef CheckPrerequisites(benchmark_config):\n \"\"\"Verifies that the required resources are present.\n\n Raises:\n perfkitbenchmarker.data.ResourceNotFound: On missing resource.\n \"\"\"\n cuda_toolkit_8.CheckPrerequisites()\n\n\ndef Prepare(benchmark_spec):\n \"\"\"Install CUDA toolkit 8.\n\n Args:\n benchmark_spec: The benchmark specification. Contains all data that is\n required to run the benchmark.\n \"\"\"\n vm = benchmark_spec.vms[0]\n vm.Install('cuda_toolkit_8')\n\n\ndef _ParseDeviceInfo(test_output):\n \"\"\"Parses the GPU device info from the CUDA device bandwidth test output.\n\n Args:\n test_output: The resulting output string from the bandwidth\n test application.\n\n Returns:\n A dictionary mapping the device number to its name, for every\n device available on the system.\n \"\"\"\n matches = regex_util.ExtractAllMatches(EXTRACT_DEVICE_INFO_REGEX,\n test_output, re.MULTILINE)\n devices = {str(i[0]): str(i[1]) for i in matches}\n return devices\n\n\ndef _ParseOutputFromSingleIteration(test_output):\n \"\"\"Parses the output of the CUDA device bandwidth test.\n\n Args:\n test_output: The resulting output string from the bandwidth\n test application.\n\n Returns:\n A dictionary containing the following values as floats:\n * the device to host bandwidth\n * the host to device bandwidth\n * the device to device bandwidth\n All units are in MB/s, as these are the units guaranteed to be output\n by the test.\n \"\"\"\n matches = regex_util.ExtractAllMatches(EXTRACT_BANDWIDTH_TEST_RESULTS_REGEX,\n test_output)\n results = {}\n for i, metric in enumerate(BENCHMARK_METRICS):\n results[metric] = float(matches[i])\n return results\n\n\ndef _CalculateMetricsOverAllIterations(result_dicts, metadata={}):\n \"\"\"Calculates stats given list of result dictionaries.\n\n Each item in the list represends the results from a single\n iteration.\n\n Args:\n result_dicts: a list of result dictionaries. Each result dictionary\n represents a single run of the CUDA device bandwidth test,\n parsed by _ParseOutputFromSingleIteration().\n\n metadata: metadata dict to be added to each Sample.\n\n Returns:\n a list of sample.Samples containing the device to host bandwidth,\n host to device bandwidth, and device to device bandwidth for each\n iteration, along with the following stats for each bandwidth type:\n * mean\n * min\n * max\n * stddev\n \"\"\"\n samples = []\n for metric in BENCHMARK_METRICS:\n sequence = [x[metric] for x in result_dicts]\n # Add a Sample for each iteration, and include the iteration number\n # in the metadata.\n for idx, measurement in enumerate(sequence):\n metadata_copy = metadata.copy()\n metadata_copy['iteration'] = idx\n samples.append(sample.Sample(\n metric, measurement, 'MB/s', metadata_copy))\n\n samples.append(sample.Sample(\n metric + ', min', min(sequence), 'MB/s', metadata))\n samples.append(sample.Sample(\n metric + ', max', max(sequence), 'MB/s', metadata))\n samples.append(sample.Sample(\n metric + ', mean', numpy.mean(sequence), 'MB/s', metadata))\n samples.append(sample.Sample(\n metric + ', stddev', numpy.std(sequence), 'MB/s', metadata))\n return samples\n\n\ndef Run(benchmark_spec):\n \"\"\"Sets the GPU clock speed and runs the CUDA PCIe benchmark.\n\n Args:\n benchmark_spec: The benchmark specification. 
Contains all data that is\n required to run the benchmark.\n\n Returns:\n A list of sample.Sample objects.\n \"\"\"\n vm = benchmark_spec.vms[0]\n # Note: The clock speed is set in this function rather than Prepare()\n # so that the user can perform multiple runs with a specified\n # clock speed without having to re-prepare the VM.\n cuda_toolkit_8.SetAndConfirmGpuClocks(vm)\n num_iterations = FLAGS.gpu_pcie_bandwidth_iterations\n raw_results = []\n metadata = {}\n metadata['num_iterations'] = num_iterations\n metadata['num_gpus'] = cuda_toolkit_8.QueryNumberOfGpus(vm)\n metadata['memory_clock_MHz'] = FLAGS.gpu_clock_speeds[0]\n metadata['graphics_clock_MHz'] = FLAGS.gpu_clock_speeds[1]\n run_command = ('%s/extras/demo_suite/bandwidthTest --device=all'\n % cuda_toolkit_8.CUDA_TOOLKIT_INSTALL_DIR)\n for i in range(num_iterations):\n stdout, _ = vm.RemoteCommand(run_command, should_log=True)\n raw_results.append(_ParseOutputFromSingleIteration(stdout))\n if 'device_info' not in metadata:\n metadata['device_info'] = _ParseDeviceInfo(stdout)\n return _CalculateMetricsOverAllIterations(raw_results, metadata)\n\n\ndef Cleanup(benchmark_spec):\n \"\"\"Uninstalls CUDA toolkit 8\n\n Args:\n benchmark_spec: The benchmark specification. Contains all data that is\n required to run the benchmark.\n \"\"\"\n vm = benchmark_spec.vms[0]\n vm.Uninstall('cuda_toolkit_8')\n", "path": "perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py"}], "after_files": [{"content": "# Copyright 2017 PerfKitBenchmarker Authors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"Runs NVIDIA's CUDA PCI-E bandwidth test\n (https://developer.nvidia.com/cuda-code-samples)\n\"\"\"\n\nimport numpy\nimport re\nfrom perfkitbenchmarker import configs\nfrom perfkitbenchmarker import flags\nfrom perfkitbenchmarker import sample\nfrom perfkitbenchmarker import regex_util\nfrom perfkitbenchmarker.linux_packages import cuda_toolkit_8\n\n\nflags.DEFINE_integer('gpu_pcie_bandwidth_iterations', 30,\n 'number of iterations to run',\n lower_bound=1)\n\n\nFLAGS = flags.FLAGS\n\nBENCHMARK_NAME = 'gpu_pcie_bandwidth'\n# Note on the config: gce_migrate_on_maintenance must be false,\n# because GCE does not support migrating the user's GPU state.\nBENCHMARK_CONFIG = \"\"\"\ngpu_pcie_bandwidth:\n description: Runs NVIDIA's CUDA bandwidth test.\n flags:\n gce_migrate_on_maintenance: False\n vm_groups:\n default:\n vm_spec:\n GCP:\n image: ubuntu-1604-xenial-v20161115\n image_project: ubuntu-os-cloud\n machine_type: n1-standard-4-k80x1\n zone: us-east1-d\n boot_disk_size: 200\n AWS:\n image: ami-a9d276c9\n machine_type: p2.xlarge\n zone: us-west-2b\n boot_disk_size: 200\n Azure:\n image: Canonical:UbuntuServer:16.04.0-LTS:latest\n machine_type: Standard_NC6\n zone: eastus\n\"\"\"\nBENCHMARK_METRICS = ['Host to device bandwidth',\n 'Device to host bandwidth',\n 'Device to device bandwidth']\n\nEXTRACT_BANDWIDTH_TEST_RESULTS_REGEX = r'\\d+\\s+(\\d+\\.?\\d*)'\nEXTRACT_DEVICE_INFO_REGEX = 
r'Device\\s*(\\d):\\s*(.*$)'\n\n\ndef GetConfig(user_config):\n config = configs.LoadConfig(BENCHMARK_CONFIG, user_config, BENCHMARK_NAME)\n return config\n\n\ndef CheckPrerequisites(benchmark_config):\n \"\"\"Verifies that the required resources are present.\n\n Raises:\n perfkitbenchmarker.data.ResourceNotFound: On missing resource.\n \"\"\"\n cuda_toolkit_8.CheckPrerequisites()\n\n\ndef Prepare(benchmark_spec):\n \"\"\"Install CUDA toolkit 8.\n\n Args:\n benchmark_spec: The benchmark specification. Contains all data that is\n required to run the benchmark.\n \"\"\"\n vm = benchmark_spec.vms[0]\n vm.Install('cuda_toolkit_8')\n\n\ndef _ParseDeviceInfo(test_output):\n \"\"\"Parses the GPU device info from the CUDA device bandwidth test output.\n\n Args:\n test_output: The resulting output string from the bandwidth\n test application.\n\n Returns:\n A dictionary mapping the device number to its name, for every\n device available on the system.\n \"\"\"\n matches = regex_util.ExtractAllMatches(EXTRACT_DEVICE_INFO_REGEX,\n test_output, re.MULTILINE)\n devices = {str(i[0]): str(i[1]) for i in matches}\n return devices\n\n\ndef _ParseOutputFromSingleIteration(test_output):\n \"\"\"Parses the output of the CUDA device bandwidth test.\n\n Args:\n test_output: The resulting output string from the bandwidth\n test application.\n\n Returns:\n A dictionary containing the following values as floats:\n * the device to host bandwidth\n * the host to device bandwidth\n * the device to device bandwidth\n All units are in MB/s, as these are the units guaranteed to be output\n by the test.\n \"\"\"\n matches = regex_util.ExtractAllMatches(EXTRACT_BANDWIDTH_TEST_RESULTS_REGEX,\n test_output)\n results = {}\n for i, metric in enumerate(BENCHMARK_METRICS):\n results[metric] = float(matches[i])\n return results\n\n\ndef _CalculateMetricsOverAllIterations(result_dicts, metadata={}):\n \"\"\"Calculates stats given list of result dictionaries.\n\n Each item in the list represends the results from a single\n iteration.\n\n Args:\n result_dicts: a list of result dictionaries. Each result dictionary\n represents a single run of the CUDA device bandwidth test,\n parsed by _ParseOutputFromSingleIteration().\n\n metadata: metadata dict to be added to each Sample.\n\n Returns:\n a list of sample.Samples containing the device to host bandwidth,\n host to device bandwidth, and device to device bandwidth for each\n iteration, along with the following stats for each bandwidth type:\n * mean\n * min\n * max\n * stddev\n \"\"\"\n samples = []\n for metric in BENCHMARK_METRICS:\n sequence = [x[metric] for x in result_dicts]\n # Add a Sample for each iteration, and include the iteration number\n # in the metadata.\n for idx, measurement in enumerate(sequence):\n metadata_copy = metadata.copy()\n metadata_copy['iteration'] = idx\n samples.append(sample.Sample(\n metric, measurement, 'MB/s', metadata_copy))\n\n samples.append(sample.Sample(\n metric + ', min', min(sequence), 'MB/s', metadata))\n samples.append(sample.Sample(\n metric + ', max', max(sequence), 'MB/s', metadata))\n samples.append(sample.Sample(\n metric + ', mean', numpy.mean(sequence), 'MB/s', metadata))\n samples.append(sample.Sample(\n metric + ', stddev', numpy.std(sequence), 'MB/s', metadata))\n return samples\n\n\ndef Run(benchmark_spec):\n \"\"\"Sets the GPU clock speed and runs the CUDA PCIe benchmark.\n\n Args:\n benchmark_spec: The benchmark specification. 
Contains all data that is\n required to run the benchmark.\n\n Returns:\n A list of sample.Sample objects.\n \"\"\"\n vm = benchmark_spec.vms[0]\n # Note: The clock speed is set in this function rather than Prepare()\n # so that the user can perform multiple runs with a specified\n # clock speed without having to re-prepare the VM.\n cuda_toolkit_8.SetAndConfirmGpuClocks(vm)\n num_iterations = FLAGS.gpu_pcie_bandwidth_iterations\n raw_results = []\n metadata = {}\n metadata['num_iterations'] = num_iterations\n metadata['num_gpus'] = cuda_toolkit_8.QueryNumberOfGpus(vm)\n metadata['memory_clock_MHz'] = FLAGS.gpu_clock_speeds[0]\n metadata['graphics_clock_MHz'] = FLAGS.gpu_clock_speeds[1]\n run_command = ('%s/extras/demo_suite/bandwidthTest --device=all'\n % cuda_toolkit_8.CUDA_TOOLKIT_INSTALL_DIR)\n for i in range(num_iterations):\n stdout, _ = vm.RemoteCommand(run_command, should_log=True)\n raw_results.append(_ParseOutputFromSingleIteration(stdout))\n if 'device_info' not in metadata:\n metadata['device_info'] = _ParseDeviceInfo(stdout)\n return _CalculateMetricsOverAllIterations(raw_results, metadata)\n\n\ndef Cleanup(benchmark_spec):\n \"\"\"Uninstalls CUDA toolkit 8\n\n Args:\n benchmark_spec: The benchmark specification. Contains all data that is\n required to run the benchmark.\n \"\"\"\n vm = benchmark_spec.vms[0]\n vm.Uninstall('cuda_toolkit_8')\n", "path": "perfkitbenchmarker/linux_benchmarks/gpu_pcie_bandwidth_benchmark.py"}]}
| 2,729 | 199 |
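For quick reference, the record above boils down to a regex-extract-then-aggregate pattern: pull the bandwidth figures out of each `bandwidthTest` run, then summarize them across iterations. A minimal dependency-free sketch is below; the sample output string is a made-up, heavily trimmed stand-in for real CUDA `bandwidthTest` output, and stdlib `statistics` stands in for numpy.

```python
# Illustrative sketch only -- not PerfKitBenchmarker code.
import re
import statistics

EXTRACT_RESULTS_REGEX = r'\d+\s+(\d+\.?\d*)'  # same pattern as the benchmark above
METRICS = ['Host to device bandwidth',
           'Device to host bandwidth',
           'Device to device bandwidth']

# Hypothetical, trimmed output of one bandwidthTest run (transfer size, then MB/s).
SAMPLE_OUTPUT = """
   33554432      11678.1
   33554432      12809.2
   33554432      155000.3
"""

def parse_single_iteration(test_output):
    # One float per metric, in the order the tool prints them.
    values = re.findall(EXTRACT_RESULTS_REGEX, test_output)
    return {metric: float(v) for metric, v in zip(METRICS, values)}

def aggregate(result_dicts):
    # min/max/mean/stddev per metric over all iterations, in MB/s.
    summary = {}
    for metric in METRICS:
        series = [r[metric] for r in result_dicts]
        summary[metric] = {
            'min': min(series),
            'max': max(series),
            'mean': statistics.mean(series),
            'stddev': statistics.pstdev(series),  # population stddev, like numpy.std
        }
    return summary

if __name__ == '__main__':
    runs = [parse_single_iteration(SAMPLE_OUTPUT) for _ in range(3)]
    print(aggregate(runs))
```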
gh_patches_debug_11797
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-509
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
All logs aren't making it through
It seems that the QueueHandler for our logs doesn't block correctly, so some logs don't make it to our logger service.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/utilities/logging.py`
Content:
```
1 # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
2 import logging
3 import os
4 import queue
5 from logging.handlers import QueueHandler, QueueListener
6
7 import prefect
8 from prefect.configuration import config
9
10
11 class RemoteHandler(logging.StreamHandler):
12 def __init__(self) -> None:
13 super().__init__()
14 self.logger_server = config.cloud.log
15 self.client = None
16
17 def emit(self, record):
18 if self.client is None:
19 from prefect.client import Client
20
21 self.client = Client()
22 r = self.client.post(path="", server=self.logger_server, **record.__dict__)
23
24
25 old_factory = logging.getLogRecordFactory()
26
27
28 def cloud_record_factory(*args, **kwargs):
29 record = old_factory(*args, **kwargs)
30 record.flowrunid = prefect.context.get("flow_run_id", "")
31 record.taskrunid = prefect.context.get("task_run_id", "")
32 return record
33
34
35 def configure_logging() -> logging.Logger:
36 """
37 Creates a "prefect" root logger with a `StreamHandler` that has level and formatting
38 set from `prefect.config`.
39
40 Returns:
41 logging.Logger
42 """
43 logger = logging.getLogger("prefect")
44 handler = logging.StreamHandler()
45 formatter = logging.Formatter(config.logging.format)
46 handler.setFormatter(formatter)
47 logger.addHandler(handler)
48 logger.setLevel(config.logging.level)
49
50 # send logs to server
51 if config.logging.log_to_cloud:
52 logging.setLogRecordFactory(cloud_record_factory)
53 log_queue = queue.Queue(-1) # unlimited size queue
54 queue_handler = QueueHandler(log_queue)
55 remote_handler = RemoteHandler()
56 remote_listener = QueueListener(log_queue, remote_handler)
57 logger.addHandler(queue_handler)
58 remote_listener.start()
59
60 return logger
61
62
63 prefect_logger = configure_logging()
64
65
66 def get_logger(name: str = None) -> logging.Logger:
67 """
68 Returns a "prefect" logger.
69
70 Args:
71 - name (str): if `None`, the root Prefect logger is returned. If provided, a child
72 logger of the name `"prefect.{name}"` is returned. The child logger inherits
73 the root logger's settings.
74
75 Returns:
76 logging.Logger
77 """
78 if name is None:
79 return prefect_logger
80 else:
81 return prefect_logger.getChild(name)
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/utilities/logging.py b/src/prefect/utilities/logging.py
--- a/src/prefect/utilities/logging.py
+++ b/src/prefect/utilities/logging.py
@@ -1,4 +1,5 @@
# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
+import atexit
import logging
import os
import queue
@@ -56,6 +57,8 @@
remote_listener = QueueListener(log_queue, remote_handler)
logger.addHandler(queue_handler)
remote_listener.start()
+ stopper = lambda listener: listener.stop()
+ atexit.register(stopper, remote_listener)
return logger
|
{"golden_diff": "diff --git a/src/prefect/utilities/logging.py b/src/prefect/utilities/logging.py\n--- a/src/prefect/utilities/logging.py\n+++ b/src/prefect/utilities/logging.py\n@@ -1,4 +1,5 @@\n # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\n+import atexit\n import logging\n import os\n import queue\n@@ -56,6 +57,8 @@\n remote_listener = QueueListener(log_queue, remote_handler)\n logger.addHandler(queue_handler)\n remote_listener.start()\n+ stopper = lambda listener: listener.stop()\n+ atexit.register(stopper, remote_listener)\n \n return logger\n", "issue": "All logs aren't making it through\nIt seems that the QueueHandler for our logs doesn't block correctly, so some logs don't make it to our logger service.\n", "before_files": [{"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\nimport logging\nimport os\nimport queue\nfrom logging.handlers import QueueHandler, QueueListener\n\nimport prefect\nfrom prefect.configuration import config\n\n\nclass RemoteHandler(logging.StreamHandler):\n def __init__(self) -> None:\n super().__init__()\n self.logger_server = config.cloud.log\n self.client = None\n\n def emit(self, record):\n if self.client is None:\n from prefect.client import Client\n\n self.client = Client()\n r = self.client.post(path=\"\", server=self.logger_server, **record.__dict__)\n\n\nold_factory = logging.getLogRecordFactory()\n\n\ndef cloud_record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n record.flowrunid = prefect.context.get(\"flow_run_id\", \"\")\n record.taskrunid = prefect.context.get(\"task_run_id\", \"\")\n return record\n\n\ndef configure_logging() -> logging.Logger:\n \"\"\"\n Creates a \"prefect\" root logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Returns:\n logging.Logger\n \"\"\"\n logger = logging.getLogger(\"prefect\")\n handler = logging.StreamHandler()\n formatter = logging.Formatter(config.logging.format)\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(config.logging.level)\n\n # send logs to server\n if config.logging.log_to_cloud:\n logging.setLogRecordFactory(cloud_record_factory)\n log_queue = queue.Queue(-1) # unlimited size queue\n queue_handler = QueueHandler(log_queue)\n remote_handler = RemoteHandler()\n remote_listener = QueueListener(log_queue, remote_handler)\n logger.addHandler(queue_handler)\n remote_listener.start()\n\n return logger\n\n\nprefect_logger = configure_logging()\n\n\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Returns a \"prefect\" logger.\n\n Args:\n - name (str): if `None`, the root Prefect logger is returned. If provided, a child\n logger of the name `\"prefect.{name}\"` is returned. 
The child logger inherits\n the root logger's settings.\n\n Returns:\n logging.Logger\n \"\"\"\n if name is None:\n return prefect_logger\n else:\n return prefect_logger.getChild(name)\n", "path": "src/prefect/utilities/logging.py"}], "after_files": [{"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\nimport atexit\nimport logging\nimport os\nimport queue\nfrom logging.handlers import QueueHandler, QueueListener\n\nimport prefect\nfrom prefect.configuration import config\n\n\nclass RemoteHandler(logging.StreamHandler):\n def __init__(self) -> None:\n super().__init__()\n self.logger_server = config.cloud.log\n self.client = None\n\n def emit(self, record):\n if self.client is None:\n from prefect.client import Client\n\n self.client = Client()\n r = self.client.post(path=\"\", server=self.logger_server, **record.__dict__)\n\n\nold_factory = logging.getLogRecordFactory()\n\n\ndef cloud_record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n record.flowrunid = prefect.context.get(\"flow_run_id\", \"\")\n record.taskrunid = prefect.context.get(\"task_run_id\", \"\")\n return record\n\n\ndef configure_logging() -> logging.Logger:\n \"\"\"\n Creates a \"prefect\" root logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Returns:\n logging.Logger\n \"\"\"\n logger = logging.getLogger(\"prefect\")\n handler = logging.StreamHandler()\n formatter = logging.Formatter(config.logging.format)\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(config.logging.level)\n\n # send logs to server\n if config.logging.log_to_cloud:\n logging.setLogRecordFactory(cloud_record_factory)\n log_queue = queue.Queue(-1) # unlimited size queue\n queue_handler = QueueHandler(log_queue)\n remote_handler = RemoteHandler()\n remote_listener = QueueListener(log_queue, remote_handler)\n logger.addHandler(queue_handler)\n remote_listener.start()\n stopper = lambda listener: listener.stop()\n atexit.register(stopper, remote_listener)\n\n return logger\n\n\nprefect_logger = configure_logging()\n\n\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Returns a \"prefect\" logger.\n\n Args:\n - name (str): if `None`, the root Prefect logger is returned. If provided, a child\n logger of the name `\"prefect.{name}\"` is returned. The child logger inherits\n the root logger's settings.\n\n Returns:\n logging.Logger\n \"\"\"\n if name is None:\n return prefect_logger\n else:\n return prefect_logger.getChild(name)\n", "path": "src/prefect/utilities/logging.py"}]}
| 954 | 151 |
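The fix in the record above hinges on one standard-library pattern: log records go through a `QueueHandler` into a queue, a `QueueListener` drains that queue on a background thread, and `atexit` guarantees the listener is stopped (and therefore flushed) before the interpreter exits. A minimal self-contained sketch, with a plain `StreamHandler` standing in for Prefect's remote handler:

```python
# Minimal sketch of QueueHandler + QueueListener with an atexit-registered stop().
# Records still sitting in the queue are processed before the process exits.
import atexit
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue(-1)                    # unlimited size queue
target_handler = logging.StreamHandler()       # stand-in for a remote/HTTP handler
listener = QueueListener(log_queue, target_handler)

logger = logging.getLogger("queued-demo")
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_queue))

listener.start()
atexit.register(listener.stop)                 # stop() drains the queue, then joins

logger.info("delivered even if the program exits right after this line")
```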
gh_patches_debug_16717
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-474
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scanning IAM Role Public Access
Describe the bug
It seems when specifying more than one json, the policy does not scan all principals, rather it looks at the first one.
To Reproduce
Steps to reproduce the behavior:
Create policy with more than one SID
`resource "aws_iam_role" "lambdaRole" {
name = "test-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal" : {"Service": "lambda.amazonaws.com"},
"Effect": "Allow"
},
{
"Action": "sts:AssumeRole",
"Principal" : {"AWS": "*"},
"Effect": "Allow"
},
{
"Action": "sts:AssumeRole",
"Principal" : {"Service": "events.amazonaws.com"},
"Effect": "Allow"
},
]
}
EOF
}`
Run Checkov against policy
Expected behavior
I would expect the scan to check each json within the policy rather than the first one
Desktop (please complete the following information):
OS: Mac
Checkov Version: 1.0.459
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3 import json
4
5
6 class IAMRoleAllowsPublicAssume(BaseResourceCheck):
7
8 def __init__(self):
9 name = "Ensure IAM role allows only specific services or principals to assume it"
10 id = "CKV_AWS_60"
11 supported_resources = ['aws_iam_role']
12 categories = [CheckCategories.IAM]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf):
16 if isinstance(conf['assume_role_policy'][0], str):
17 try:
18 assume_role_block = json.loads(conf['assume_role_policy'][0])
19 if 'Statement' in assume_role_block.keys():
20 if 'Principal' in assume_role_block['Statement'][0]:
21 if 'AWS' in assume_role_block['Statement'][0]['Principal']:
22 if assume_role_block['Statement'][0]['Principal']['AWS'] == '*':
23 return CheckResult.FAILED
24 except: # nosec
25 pass
26 return CheckResult.PASSED
27
28
29 check = IAMRoleAllowsPublicAssume()
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py b/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py
--- a/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py
+++ b/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py
@@ -17,10 +17,10 @@
try:
assume_role_block = json.loads(conf['assume_role_policy'][0])
if 'Statement' in assume_role_block.keys():
- if 'Principal' in assume_role_block['Statement'][0]:
- if 'AWS' in assume_role_block['Statement'][0]['Principal']:
- if assume_role_block['Statement'][0]['Principal']['AWS'] == '*':
- return CheckResult.FAILED
+ for statement in assume_role_block['Statement']:
+ if 'AWS' in statement['Principal']:
+ if statement['Principal']['AWS'] == '*':
+ return CheckResult.FAILED
except: # nosec
pass
return CheckResult.PASSED
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py b/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py\n--- a/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py\n+++ b/checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py\n@@ -17,10 +17,10 @@\n try:\n assume_role_block = json.loads(conf['assume_role_policy'][0])\n if 'Statement' in assume_role_block.keys():\n- if 'Principal' in assume_role_block['Statement'][0]:\n- if 'AWS' in assume_role_block['Statement'][0]['Principal']:\n- if assume_role_block['Statement'][0]['Principal']['AWS'] == '*':\n- return CheckResult.FAILED\n+ for statement in assume_role_block['Statement']:\n+ if 'AWS' in statement['Principal']:\n+ if statement['Principal']['AWS'] == '*':\n+ return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n", "issue": "Scanning IAM Role Public Access\nDescribe the bug\r\nIt seems when specifying more than one json, the policy does not scan all principals, rather it looks at the first one. \r\n\r\nTo Reproduce\r\nSteps to reproduce the behavior:\r\n\r\nCreate policy with more than one SID\r\n`resource \"aws_iam_role\" \"lambdaRole\" {\r\n name = \"test-role\"\r\n assume_role_policy = <<EOF\r\n{\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Action\": \"sts:AssumeRole\",\r\n \"Principal\" : {\"Service\": \"lambda.amazonaws.com\"},\r\n \"Effect\": \"Allow\"\r\n },\r\n {\r\n \"Action\": \"sts:AssumeRole\",\r\n \"Principal\" : {\"AWS\": \"*\"},\r\n \"Effect\": \"Allow\"\r\n },\r\n {\r\n \"Action\": \"sts:AssumeRole\",\r\n \"Principal\" : {\"Service\": \"events.amazonaws.com\"},\r\n \"Effect\": \"Allow\"\r\n },\r\n ]\r\n}\r\n\r\nEOF\r\n}`\r\nRun Checkov against policy\r\nExpected behavior\r\nI would expect the scan to check each json within the policy rather than the first one\r\n\r\nDesktop (please complete the following information):\r\n\r\nOS: Mac\r\nCheckov Version: 1.0.459\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMRoleAllowsPublicAssume(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure IAM role allows only specific services or principals to assume it\"\n id = \"CKV_AWS_60\"\n supported_resources = ['aws_iam_role']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if isinstance(conf['assume_role_policy'][0], str):\n try:\n assume_role_block = json.loads(conf['assume_role_policy'][0])\n if 'Statement' in assume_role_block.keys():\n if 'Principal' in assume_role_block['Statement'][0]:\n if 'AWS' in assume_role_block['Statement'][0]['Principal']:\n if assume_role_block['Statement'][0]['Principal']['AWS'] == '*':\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMRoleAllowsPublicAssume()\n", "path": "checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMRoleAllowsPublicAssume(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure IAM role allows only specific services or principals to assume it\"\n id = \"CKV_AWS_60\"\n 
supported_resources = ['aws_iam_role']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if isinstance(conf['assume_role_policy'][0], str):\n try:\n assume_role_block = json.loads(conf['assume_role_policy'][0])\n if 'Statement' in assume_role_block.keys():\n for statement in assume_role_block['Statement']:\n if 'AWS' in statement['Principal']:\n if statement['Principal']['AWS'] == '*':\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMRoleAllowsPublicAssume()\n", "path": "checkov/terraform/checks/resource/aws/IAMRoleAllowsPublicAssume.py"}]}
| 848 | 239 |
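The essence of the patch above — iterate over every `Statement` in the assume-role policy rather than only index 0 — can be tried in isolation with the policy from the issue. A rough sketch (the function name and return strings are illustrative, not Checkov's API):

```python
# Standalone version of the check: fail if any statement lets Principal AWS "*" assume the role.
import json

ASSUME_ROLE_POLICY = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Action": "sts:AssumeRole", "Principal": {"Service": "lambda.amazonaws.com"}, "Effect": "Allow"},
        {"Action": "sts:AssumeRole", "Principal": {"AWS": "*"}, "Effect": "Allow"},
        {"Action": "sts:AssumeRole", "Principal": {"Service": "events.amazonaws.com"}, "Effect": "Allow"},
    ],
})

def scan_assume_role_policy(policy_document: str) -> str:
    block = json.loads(policy_document)
    for statement in block.get("Statement", []):        # every statement, not just [0]
        if statement.get("Principal", {}).get("AWS") == "*":
            return "FAILED"
    return "PASSED"

print(scan_assume_role_policy(ASSUME_ROLE_POLICY))      # -> FAILED (second statement)
```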
gh_patches_debug_11463
|
rasdani/github-patches
|
git_diff
|
mindsdb__mindsdb-1652
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add new method to return the columns for PostgreSQL datasources :electric_plug: :1234:
When MindsDB creates a new PostgreSQL datasource we get information for columns by fetching all datasources. The problem here is that if datasource is big it takes a lot of time. We need a new get_columns method to return the columns name per datasource. The PR should include this method inside the PostgreSQL class .
## Steps :male_detective: :female_detective:
- Implement in https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/postgres/postgres.py#L37
- Push to staging branch
## Additional rewards :1st_place_medal:
Each code PR brings :three: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindsdb/integrations/postgres/postgres.py`
Content:
```
1 from contextlib import closing
2 import pg8000
3
4 from lightwood.api import dtype
5 from mindsdb.integrations.base import Integration
6 from mindsdb.utilities.log import log
7
8
9 class PostgreSQLConnectionChecker:
10 def __init__(self, **kwargs):
11 self.host = kwargs.get('host')
12 self.port = kwargs.get('port')
13 self.user = kwargs.get('user')
14 self.password = kwargs.get('password')
15 self.database = kwargs.get('database', 'postgres')
16
17 def _get_connection(self):
18 return pg8000.connect(
19 database=self.database,
20 user=self.user,
21 password=self.password,
22 host=self.host,
23 port=self.port
24 )
25
26 def check_connection(self):
27 try:
28 con = self._get_connection()
29 with closing(con) as con:
30 con.run('select 1;')
31 connected = True
32 except Exception:
33 connected = False
34 return connected
35
36
37 class PostgreSQL(Integration, PostgreSQLConnectionChecker):
38 def __init__(self, config, name, db_info):
39 super().__init__(config, name)
40 self.user = db_info.get('user')
41 self.password = db_info.get('password')
42 self.host = db_info.get('host')
43 self.port = db_info.get('port')
44 self.database = db_info.get('database', 'postgres')
45
46 def _to_postgres_table(self, dtype_dict, predicted_cols, columns):
47 subtype_map = {
48 dtype.integer: ' int8',
49 dtype.float: 'float8',
50 dtype.binary: 'bool',
51 dtype.date: 'date',
52 dtype.datetime: 'timestamp',
53 dtype.binary: 'text',
54 dtype.categorical: 'text',
55 dtype.tags: 'text',
56 dtype.image: 'text',
57 dtype.video: 'text',
58 dtype.audio: 'text',
59 dtype.short_text: 'text',
60 dtype.rich_text: 'text',
61 dtype.array: 'text'
62 }
63
64 column_declaration = []
65 for name in columns:
66 try:
67 col_subtype = dtype_dict[name]
68 new_type = subtype_map[col_subtype]
69 column_declaration.append(f' "{name}" {new_type} ')
70 if name in predicted_cols:
71 column_declaration.append(f' "{name}_original" {new_type} ')
72 except Exception as e:
73 log.error(f'Error: can not determine postgres data type for column {name}: {e}')
74
75 return column_declaration
76
77 def _escape_table_name(self, name):
78 return '"' + name.replace('"', '""') + '"'
79
80 def _query(self, query):
81 con = self._get_connection()
82 with closing(con) as con:
83
84 cur = con.cursor()
85 res = True
86 cur.execute(query)
87
88 try:
89 rows = cur.fetchall()
90 keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]
91 res = [dict(zip(keys, row)) for row in rows]
92 except Exception:
93 pass
94
95 con.commit()
96
97 return res
98
99 def setup(self):
100 user = f"{self.config['api']['mysql']['user']}_{self.name}"
101 password = self.config['api']['mysql']['password']
102 host = self.config['api']['mysql']['host']
103 port = self.config['api']['mysql']['port']
104
105 try:
106 self._query('''
107 DO $$
108 begin
109 if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then
110 CREATE EXTENSION mysql_fdw;
111 end if;
112 END
113 $$;
114 ''')
115 except Exception:
116 print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')
117
118 self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')
119
120 self._query(f"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}")
121
122 self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')
123
124 self._query(f'''
125 CREATE SERVER server_{self.mindsdb_database}
126 FOREIGN DATA WRAPPER mysql_fdw
127 OPTIONS (host '{host}', port '{port}');
128 ''')
129
130 self._query(f'''
131 CREATE USER MAPPING FOR {self.user}
132 SERVER server_{self.mindsdb_database}
133 OPTIONS (username '{user}', password '{password}');
134 ''')
135
136 self._query(f'CREATE SCHEMA {self.mindsdb_database}')
137
138 q = f"""
139 CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (
140 name text,
141 status text,
142 accuracy text,
143 predict text,
144 select_data_query text,
145 external_datasource text,
146 training_options text
147 )
148 SERVER server_{self.mindsdb_database}
149 OPTIONS (dbname 'mindsdb', table_name 'predictors');
150 """
151 self._query(q)
152
153 q = f"""
154 CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (
155 command text
156 ) SERVER server_{self.mindsdb_database}
157 OPTIONS (dbname 'mindsdb', table_name 'commands');
158 """
159 self._query(q)
160
161 def register_predictors(self, model_data_arr):
162 for model_meta in model_data_arr:
163 name = model_meta['name']
164 predict = model_meta['predict']
165 if not isinstance(predict, list):
166 predict = [predict]
167 columns_sql = ','.join(self._to_postgres_table(
168 model_meta['dtype_dict'],
169 predict,
170 list(model_meta['dtype_dict'].keys())
171 ))
172 columns_sql += ',"select_data_query" text'
173 columns_sql += ',"external_datasource" text'
174 for col in predict:
175 columns_sql += f',"{col}_confidence" float8'
176 if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):
177 columns_sql += f',"{col}_min" float8'
178 columns_sql += f',"{col}_max" float8'
179 columns_sql += f',"{col}_explain" text'
180
181 self.unregister_predictor(name)
182 q = f"""
183 CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (
184 {columns_sql}
185 ) SERVER server_{self.mindsdb_database}
186 OPTIONS (dbname 'mindsdb', table_name '{name}');
187 """
188 self._query(q)
189
190 def unregister_predictor(self, name):
191 q = f"""
192 DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};
193 """
194 self._query(q)
195
196 def get_row_count(self, query):
197 q = f"""
198 SELECT COUNT(*) as count
199 FROM ({query}) as query;"""
200 result = self._query(q)
201 return result[0]['count']
202
203 def get_tables_list(self):
204 q = f""" SELECT table_schema, table_name
205 FROM information_schema.tables
206 WHERE table_schema != 'pg_catalog'
207 AND table_schema != 'information_schema'
208 ORDER BY table_schema, table_name"""
209 tables_list = self._query(q)
210 tables= [f"{table['table_schema']}.{table['table_name']}" for table in tables_list]
211 return tables
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mindsdb/integrations/postgres/postgres.py b/mindsdb/integrations/postgres/postgres.py
--- a/mindsdb/integrations/postgres/postgres.py
+++ b/mindsdb/integrations/postgres/postgres.py
@@ -208,4 +208,13 @@
ORDER BY table_schema, table_name"""
tables_list = self._query(q)
tables= [f"{table['table_schema']}.{table['table_name']}" for table in tables_list]
- return tables
\ No newline at end of file
+ return tables
+
+ def get_columns(self):
+ q = f"""SELECT column_name, table_name
+ FROM information_schema.columns
+ WHERE table_schema NOT IN ('information_schema', 'pg_catalog')
+ ORDER BY column_name, table_name;"""
+ columns_list = self._query(q)
+ columns = [f"{columns[0]}.{columns[1]}" for columns in columns_list]
+ return columns
\ No newline at end of file
|
{"golden_diff": "diff --git a/mindsdb/integrations/postgres/postgres.py b/mindsdb/integrations/postgres/postgres.py\n--- a/mindsdb/integrations/postgres/postgres.py\n+++ b/mindsdb/integrations/postgres/postgres.py\n@@ -208,4 +208,13 @@\n ORDER BY table_schema, table_name\"\"\"\n tables_list = self._query(q)\n tables= [f\"{table['table_schema']}.{table['table_name']}\" for table in tables_list]\n- return tables\n\\ No newline at end of file\n+ return tables\n+\n+ def get_columns(self):\n+ q = f\"\"\"SELECT column_name, table_name\n+\t\tFROM information_schema.columns\n+\t\tWHERE table_schema NOT IN ('information_schema', 'pg_catalog')\n+\t\tORDER BY column_name, table_name;\"\"\"\n+ columns_list = self._query(q)\n+ columns = [f\"{columns[0]}.{columns[1]}\" for columns in columns_list]\n+ return columns\n\\ No newline at end of file\n", "issue": "Add new method to return the columns for PostgreSQL datasources :electric_plug: :1234: \nWhen MindsDB creates a new PostgreSQL datasource we get information for columns by fetching all datasources. The problem here is that if datasource is big it takes a lot of time. We need a new get_columns method to return the columns name per datasource. The PR should include this method inside the PostgreSQL class .\r\n\r\n## Steps :male_detective: :female_detective: \r\n\r\n- Implement in https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/postgres/postgres.py#L37\r\n- Push to staging branch\r\n\r\n## Additional rewards :1st_place_medal: \r\n\r\nEach code PR brings :three: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/\r\n \r\n\r\n\n", "before_files": [{"content": "from contextlib import closing\nimport pg8000\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass PostgreSQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.database = kwargs.get('database', 'postgres')\n\n def _get_connection(self):\n return pg8000.connect(\n database=self.database,\n user=self.user,\n password=self.password,\n host=self.host,\n port=self.port\n )\n\n def check_connection(self):\n try:\n con = self._get_connection()\n with closing(con) as con:\n con.run('select 1;')\n connected = True\n except Exception:\n connected = False\n return connected\n\n\nclass PostgreSQL(Integration, PostgreSQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.database = db_info.get('database', 'postgres')\n\n def _to_postgres_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: ' int8',\n dtype.float: 'float8',\n dtype.binary: 'bool',\n dtype.date: 'date',\n dtype.datetime: 'timestamp',\n dtype.binary: 'text',\n dtype.categorical: 'text',\n dtype.tags: 'text',\n dtype.image: 'text',\n dtype.video: 'text',\n dtype.audio: 'text',\n dtype.short_text: 'text',\n dtype.rich_text: 'text',\n dtype.array: 'text'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map[col_subtype]\n column_declaration.append(f' \"{name}\" 
{new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' \"{name}_original\" {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine postgres data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '\"' + name.replace('\"', '\"\"') + '\"'\n\n def _query(self, query):\n con = self._get_connection()\n with closing(con) as con:\n\n cur = con.cursor()\n res = True\n cur.execute(query)\n\n try:\n rows = cur.fetchall()\n keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]\n res = [dict(zip(keys, row)) for row in rows]\n except Exception:\n pass\n\n con.commit()\n\n return res\n\n def setup(self):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n try:\n self._query('''\n DO $$\n begin\n if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then\n CREATE EXTENSION mysql_fdw;\n end if;\n END\n $$;\n ''')\n except Exception:\n print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')\n\n self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')\n\n self._query(f\"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}\")\n\n self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')\n\n self._query(f'''\n CREATE SERVER server_{self.mindsdb_database}\n FOREIGN DATA WRAPPER mysql_fdw\n OPTIONS (host '{host}', port '{port}');\n ''')\n\n self._query(f'''\n CREATE USER MAPPING FOR {self.user}\n SERVER server_{self.mindsdb_database}\n OPTIONS (username '{user}', password '{password}');\n ''')\n\n self._query(f'CREATE SCHEMA {self.mindsdb_database}')\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name text,\n status text,\n accuracy text,\n predict text,\n select_data_query text,\n external_datasource text,\n training_options text\n )\n SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'predictors');\n \"\"\"\n self._query(q)\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command text\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'commands');\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_postgres_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',\"select_data_query\" text'\n columns_sql += ',\"external_datasource\" text'\n for col in predict:\n columns_sql += f',\"{col}_confidence\" float8'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',\"{col}_min\" float8'\n columns_sql += f',\"{col}_max\" float8'\n columns_sql += f',\"{col}_explain\" text'\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql}\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name '{name}');\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n 
self._query(q)\n\n def get_row_count(self, query):\n q = f\"\"\" \n SELECT COUNT(*) as count\n FROM ({query}) as query;\"\"\"\n result = self._query(q)\n return result[0]['count']\n \n def get_tables_list(self):\n q = f\"\"\" SELECT table_schema, table_name\n FROM information_schema.tables\n WHERE table_schema != 'pg_catalog'\n AND table_schema != 'information_schema'\n ORDER BY table_schema, table_name\"\"\"\n tables_list = self._query(q)\n tables= [f\"{table['table_schema']}.{table['table_name']}\" for table in tables_list]\n return tables", "path": "mindsdb/integrations/postgres/postgres.py"}], "after_files": [{"content": "from contextlib import closing\nimport pg8000\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass PostgreSQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.database = kwargs.get('database', 'postgres')\n\n def _get_connection(self):\n return pg8000.connect(\n database=self.database,\n user=self.user,\n password=self.password,\n host=self.host,\n port=self.port\n )\n\n def check_connection(self):\n try:\n con = self._get_connection()\n with closing(con) as con:\n con.run('select 1;')\n connected = True\n except Exception:\n connected = False\n return connected\n\n\nclass PostgreSQL(Integration, PostgreSQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.database = db_info.get('database', 'postgres')\n\n def _to_postgres_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: ' int8',\n dtype.float: 'float8',\n dtype.binary: 'bool',\n dtype.date: 'date',\n dtype.datetime: 'timestamp',\n dtype.binary: 'text',\n dtype.categorical: 'text',\n dtype.tags: 'text',\n dtype.image: 'text',\n dtype.video: 'text',\n dtype.audio: 'text',\n dtype.short_text: 'text',\n dtype.rich_text: 'text',\n dtype.array: 'text'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map[col_subtype]\n column_declaration.append(f' \"{name}\" {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' \"{name}_original\" {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine postgres data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '\"' + name.replace('\"', '\"\"') + '\"'\n\n def _query(self, query):\n con = self._get_connection()\n with closing(con) as con:\n\n cur = con.cursor()\n res = True\n cur.execute(query)\n\n try:\n rows = cur.fetchall()\n keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]\n res = [dict(zip(keys, row)) for row in rows]\n except Exception:\n pass\n\n con.commit()\n\n return res\n\n def setup(self):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n try:\n self._query('''\n DO $$\n begin\n if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then\n CREATE EXTENSION mysql_fdw;\n end if;\n END\n $$;\n ''')\n except Exception:\n print('Error: cant find 
or activate mysql_fdw extension for PostgreSQL.')\n\n self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')\n\n self._query(f\"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}\")\n\n self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')\n\n self._query(f'''\n CREATE SERVER server_{self.mindsdb_database}\n FOREIGN DATA WRAPPER mysql_fdw\n OPTIONS (host '{host}', port '{port}');\n ''')\n\n self._query(f'''\n CREATE USER MAPPING FOR {self.user}\n SERVER server_{self.mindsdb_database}\n OPTIONS (username '{user}', password '{password}');\n ''')\n\n self._query(f'CREATE SCHEMA {self.mindsdb_database}')\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name text,\n status text,\n accuracy text,\n predict text,\n select_data_query text,\n external_datasource text,\n training_options text\n )\n SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'predictors');\n \"\"\"\n self._query(q)\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command text\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'commands');\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_postgres_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',\"select_data_query\" text'\n columns_sql += ',\"external_datasource\" text'\n for col in predict:\n columns_sql += f',\"{col}_confidence\" float8'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',\"{col}_min\" float8'\n columns_sql += f',\"{col}_max\" float8'\n columns_sql += f',\"{col}_explain\" text'\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql}\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name '{name}');\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n\n def get_row_count(self, query):\n q = f\"\"\" \n SELECT COUNT(*) as count\n FROM ({query}) as query;\"\"\"\n result = self._query(q)\n return result[0]['count']\n \n def get_tables_list(self):\n q = f\"\"\" SELECT table_schema, table_name\n FROM information_schema.tables\n WHERE table_schema != 'pg_catalog'\n AND table_schema != 'information_schema'\n ORDER BY table_schema, table_name\"\"\"\n tables_list = self._query(q)\n tables= [f\"{table['table_schema']}.{table['table_name']}\" for table in tables_list]\n return tables\n\n def get_columns(self):\n q = f\"\"\"SELECT column_name, table_name\n\t\tFROM information_schema.columns\n\t\tWHERE table_schema NOT IN ('information_schema', 'pg_catalog')\n\t\tORDER BY column_name, table_name;\"\"\"\n columns_list = self._query(q)\n columns = [f\"{columns[0]}.{columns[1]}\" for columns in columns_list]\n return columns", "path": "mindsdb/integrations/postgres/postgres.py"}]}
| 2,562 | 230 |
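For context, the `get_columns` method added above is just an `information_schema` query; it can be exercised against any PostgreSQL server with a bare pg8000 connection, outside MindsDB entirely. A sketch with placeholder connection parameters:

```python
# Sketch: list "column.table" strings straight from information_schema, mirroring the
# query in the patch above. Requires the pg8000 package and a reachable PostgreSQL server.
from contextlib import closing

import pg8000

def get_columns(host, port, user, password, database="postgres"):
    con = pg8000.connect(database=database, user=user, password=password,
                         host=host, port=port)
    with closing(con) as con:
        cur = con.cursor()
        cur.execute("""
            SELECT column_name, table_name
            FROM information_schema.columns
            WHERE table_schema NOT IN ('information_schema', 'pg_catalog')
            ORDER BY column_name, table_name;
        """)
        return [f"{column_name}.{table_name}" for column_name, table_name in cur.fetchall()]

# Example call (placeholder credentials):
# print(get_columns("localhost", 5432, "postgres", "secret"))
```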
gh_patches_debug_22246
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-38980
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't add a new column to Organization
### Environment
self-hosted (https://develop.sentry.dev/self-hosted/)
### Version
_No response_
### Steps to Reproduce
1. Add a new column to organization
2. Try to make a new migration using `sentry django makemigration`
### Expected Result
Makemigration should generate a new migration file.
### Actual Result
Generation fails due to makemigrations loading the SDK. This in turn hits this line of code for feature checking:
https://github.com/getsentry/sentry/blob/ad99c1fe7d7378c3761bad757aacf27352c693a6/src/sentry/models/projectkey.py#L249-L259
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/models/projectkey.py`
Content:
```
1 import re
2 from urllib.parse import urlparse
3 from uuid import uuid4
4
5 import petname
6 from django.conf import settings
7 from django.db import models, transaction
8 from django.urls import reverse
9 from django.utils import timezone
10 from django.utils.translation import ugettext_lazy as _
11
12 from bitfield import BitField
13 from sentry import features, options
14 from sentry.db.models import (
15 BaseManager,
16 BoundedPositiveIntegerField,
17 FlexibleForeignKey,
18 JSONField,
19 Model,
20 region_silo_model,
21 sane_repr,
22 )
23 from sentry.tasks.relay import schedule_invalidate_project_config
24
25 _uuid4_re = re.compile(r"^[a-f0-9]{32}$")
26
27 # TODO(dcramer): pull in enum library
28
29
30 class ProjectKeyStatus:
31 ACTIVE = 0
32 INACTIVE = 1
33
34
35 class ProjectKeyManager(BaseManager):
36 def post_save(self, instance, **kwargs):
37 # this hook may be called from model hooks during an
38 # open transaction. In that case, wait until the current transaction has
39 # been committed or rolled back to ensure we don't read stale data in the
40 # task.
41 #
42 # If there is no transaction open, on_commit should run immediately.
43 transaction.on_commit(
44 lambda: schedule_invalidate_project_config(
45 public_key=instance.public_key, trigger="projectkey.post_save"
46 )
47 )
48
49 def post_delete(self, instance, **kwargs):
50 # this hook may be called from model hooks during an
51 # open transaction. In that case, wait until the current transaction has
52 # been committed or rolled back to ensure we don't read stale data in the
53 # task.
54 #
55 # If there is no transaction open, on_commit should run immediately.
56 transaction.on_commit(
57 lambda: schedule_invalidate_project_config(
58 public_key=instance.public_key, trigger="projectkey.post_delete"
59 )
60 )
61
62
63 @region_silo_model
64 class ProjectKey(Model):
65 __include_in_export__ = True
66
67 project = FlexibleForeignKey("sentry.Project", related_name="key_set")
68 label = models.CharField(max_length=64, blank=True, null=True)
69 public_key = models.CharField(max_length=32, unique=True, null=True)
70 secret_key = models.CharField(max_length=32, unique=True, null=True)
71 roles = BitField(
72 flags=(
73 # access to post events to the store endpoint
74 ("store", "Event API access"),
75 # read/write access to rest API
76 ("api", "Web API access"),
77 ),
78 default=["store"],
79 )
80 status = BoundedPositiveIntegerField(
81 default=0,
82 choices=(
83 (ProjectKeyStatus.ACTIVE, _("Active")),
84 (ProjectKeyStatus.INACTIVE, _("Inactive")),
85 ),
86 db_index=True,
87 )
88 date_added = models.DateTimeField(default=timezone.now, null=True)
89
90 rate_limit_count = BoundedPositiveIntegerField(null=True)
91 rate_limit_window = BoundedPositiveIntegerField(null=True)
92
93 objects = ProjectKeyManager(
94 cache_fields=("public_key", "secret_key"),
95 # store projectkeys in memcached for longer than other models,
96 # specifically to make the relay_projectconfig endpoint faster.
97 cache_ttl=60 * 30,
98 )
99
100 data = JSONField()
101
102 # support legacy project keys in API
103 scopes = (
104 "project:read",
105 "project:write",
106 "project:admin",
107 "project:releases",
108 "event:read",
109 "event:write",
110 "event:admin",
111 )
112
113 class Meta:
114 app_label = "sentry"
115 db_table = "sentry_projectkey"
116
117 __repr__ = sane_repr("project_id", "public_key")
118
119 def __str__(self):
120 return str(self.public_key)
121
122 @classmethod
123 def generate_api_key(cls):
124 return uuid4().hex
125
126 @classmethod
127 def looks_like_api_key(cls, key):
128 return bool(_uuid4_re.match(key))
129
130 @classmethod
131 def from_dsn(cls, dsn):
132 urlparts = urlparse(dsn)
133
134 public_key = urlparts.username
135 project_id = urlparts.path.rsplit("/", 1)[-1]
136
137 try:
138 return ProjectKey.objects.get(public_key=public_key, project=project_id)
139 except ValueError:
140 # ValueError would come from a non-integer project_id,
141 # which is obviously a DoesNotExist. We catch and rethrow this
142 # so anything downstream expecting DoesNotExist works fine
143 raise ProjectKey.DoesNotExist("ProjectKey matching query does not exist.")
144
145 @classmethod
146 def get_default(cls, project):
147 return cls.objects.filter(
148 project=project,
149 roles=models.F("roles").bitor(cls.roles.store),
150 status=ProjectKeyStatus.ACTIVE,
151 ).first()
152
153 @property
154 def is_active(self):
155 return self.status == ProjectKeyStatus.ACTIVE
156
157 @property
158 def rate_limit(self):
159 if self.rate_limit_count and self.rate_limit_window:
160 return (self.rate_limit_count, self.rate_limit_window)
161 return (0, 0)
162
163 def save(self, *args, **kwargs):
164 if not self.public_key:
165 self.public_key = ProjectKey.generate_api_key()
166 if not self.secret_key:
167 self.secret_key = ProjectKey.generate_api_key()
168 if not self.label:
169 self.label = petname.Generate(2, " ", letters=10).title()
170 super().save(*args, **kwargs)
171
172 def get_dsn(self, domain=None, secure=True, public=False):
173 urlparts = urlparse(self.get_endpoint(public=public))
174
175 if not public:
176 key = f"{self.public_key}:{self.secret_key}"
177 else:
178 key = self.public_key
179
180 # If we do not have a scheme or domain/hostname, dsn is never valid
181 if not urlparts.netloc or not urlparts.scheme:
182 return ""
183
184 return "{}://{}@{}/{}".format(
185 urlparts.scheme,
186 key,
187 urlparts.netloc + urlparts.path,
188 self.project_id,
189 )
190
191 @property
192 def organization_id(self):
193 return self.project.organization_id
194
195 @property
196 def organization(self):
197 return self.project.organization
198
199 @property
200 def dsn_private(self):
201 return self.get_dsn(public=False)
202
203 @property
204 def dsn_public(self):
205 return self.get_dsn(public=True)
206
207 @property
208 def csp_endpoint(self):
209 endpoint = self.get_endpoint()
210
211 return f"{endpoint}/api/{self.project_id}/csp-report/?sentry_key={self.public_key}"
212
213 @property
214 def security_endpoint(self):
215 endpoint = self.get_endpoint()
216
217 return f"{endpoint}/api/{self.project_id}/security/?sentry_key={self.public_key}"
218
219 @property
220 def minidump_endpoint(self):
221 endpoint = self.get_endpoint()
222
223 return f"{endpoint}/api/{self.project_id}/minidump/?sentry_key={self.public_key}"
224
225 @property
226 def unreal_endpoint(self):
227 return f"{self.get_endpoint()}/api/{self.project_id}/unreal/{self.public_key}/"
228
229 @property
230 def js_sdk_loader_cdn_url(self):
231 if settings.JS_SDK_LOADER_CDN_URL:
232 return f"{settings.JS_SDK_LOADER_CDN_URL}{self.public_key}.min.js"
233 else:
234 endpoint = self.get_endpoint()
235 return "{}{}".format(
236 endpoint,
237 reverse("sentry-js-sdk-loader", args=[self.public_key, ".min"]),
238 )
239
240 def get_endpoint(self, public=True):
241 if public:
242 endpoint = settings.SENTRY_PUBLIC_ENDPOINT or settings.SENTRY_ENDPOINT
243 else:
244 endpoint = settings.SENTRY_ENDPOINT
245
246 if not endpoint:
247 endpoint = options.get("system.url-prefix")
248
249 if features.has("organizations:org-subdomains", self.project.organization):
250 urlparts = urlparse(endpoint)
251 if urlparts.scheme and urlparts.netloc:
252 endpoint = "{}://{}.{}{}".format(
253 urlparts.scheme,
254 settings.SENTRY_ORG_SUBDOMAIN_TEMPLATE.format(
255 organization_id=self.project.organization_id
256 ),
257 urlparts.netloc,
258 urlparts.path,
259 )
260
261 return endpoint
262
263 def get_allowed_origins(self):
264 from sentry.utils.http import get_origins
265
266 return get_origins(self.project)
267
268 def get_audit_log_data(self):
269 return {
270 "label": self.label,
271 "public_key": self.public_key,
272 "secret_key": self.secret_key,
273 "roles": int(self.roles),
274 "status": self.status,
275 "rate_limit_count": self.rate_limit_count,
276 "rate_limit_window": self.rate_limit_window,
277 }
278
279 def get_scopes(self):
280 return self.scopes
281
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/models/projectkey.py b/src/sentry/models/projectkey.py
--- a/src/sentry/models/projectkey.py
+++ b/src/sentry/models/projectkey.py
@@ -4,7 +4,7 @@
import petname
from django.conf import settings
-from django.db import models, transaction
+from django.db import ProgrammingError, models, transaction
from django.urls import reverse
from django.utils import timezone
from django.utils.translation import ugettext_lazy as _
@@ -246,7 +246,16 @@
if not endpoint:
endpoint = options.get("system.url-prefix")
- if features.has("organizations:org-subdomains", self.project.organization):
+ has_org_subdomain = False
+ try:
+ has_org_subdomain = features.has(
+ "organizations:org-subdomains", self.project.organization
+ )
+ except ProgrammingError:
+ # This happens during migration generation for the organization model.
+ pass
+
+ if has_org_subdomain:
urlparts = urlparse(endpoint)
if urlparts.scheme and urlparts.netloc:
endpoint = "{}://{}.{}{}".format(
|
{"golden_diff": "diff --git a/src/sentry/models/projectkey.py b/src/sentry/models/projectkey.py\n--- a/src/sentry/models/projectkey.py\n+++ b/src/sentry/models/projectkey.py\n@@ -4,7 +4,7 @@\n \n import petname\n from django.conf import settings\n-from django.db import models, transaction\n+from django.db import ProgrammingError, models, transaction\n from django.urls import reverse\n from django.utils import timezone\n from django.utils.translation import ugettext_lazy as _\n@@ -246,7 +246,16 @@\n if not endpoint:\n endpoint = options.get(\"system.url-prefix\")\n \n- if features.has(\"organizations:org-subdomains\", self.project.organization):\n+ has_org_subdomain = False\n+ try:\n+ has_org_subdomain = features.has(\n+ \"organizations:org-subdomains\", self.project.organization\n+ )\n+ except ProgrammingError:\n+ # This happens during migration generation for the organization model.\n+ pass\n+\n+ if has_org_subdomain:\n urlparts = urlparse(endpoint)\n if urlparts.scheme and urlparts.netloc:\n endpoint = \"{}://{}.{}{}\".format(\n", "issue": "Can't add a new column to Organization\n### Environment\n\nself-hosted (https://develop.sentry.dev/self-hosted/)\n\n### Version\n\n_No response_\n\n### Steps to Reproduce\n\n1. Add a new column to organization\r\n2. Try to make a new migration using `sentry django makemigration`\n\n### Expected Result\n\nMakemigration should generate a new migration file.\n\n### Actual Result\n\nGeneration fails due to makemigrations loading the SDK. This in turn hits this line of code for feature checking:\r\nhttps://github.com/getsentry/sentry/blob/ad99c1fe7d7378c3761bad757aacf27352c693a6/src/sentry/models/projectkey.py#L249-L259\r\n\r\n\n", "before_files": [{"content": "import re\nfrom urllib.parse import urlparse\nfrom uuid import uuid4\n\nimport petname\nfrom django.conf import settings\nfrom django.db import models, transaction\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom bitfield import BitField\nfrom sentry import features, options\nfrom sentry.db.models import (\n BaseManager,\n BoundedPositiveIntegerField,\n FlexibleForeignKey,\n JSONField,\n Model,\n region_silo_model,\n sane_repr,\n)\nfrom sentry.tasks.relay import schedule_invalidate_project_config\n\n_uuid4_re = re.compile(r\"^[a-f0-9]{32}$\")\n\n# TODO(dcramer): pull in enum library\n\n\nclass ProjectKeyStatus:\n ACTIVE = 0\n INACTIVE = 1\n\n\nclass ProjectKeyManager(BaseManager):\n def post_save(self, instance, **kwargs):\n # this hook may be called from model hooks during an\n # open transaction. In that case, wait until the current transaction has\n # been committed or rolled back to ensure we don't read stale data in the\n # task.\n #\n # If there is no transaction open, on_commit should run immediately.\n transaction.on_commit(\n lambda: schedule_invalidate_project_config(\n public_key=instance.public_key, trigger=\"projectkey.post_save\"\n )\n )\n\n def post_delete(self, instance, **kwargs):\n # this hook may be called from model hooks during an\n # open transaction. 
In that case, wait until the current transaction has\n # been committed or rolled back to ensure we don't read stale data in the\n # task.\n #\n # If there is no transaction open, on_commit should run immediately.\n transaction.on_commit(\n lambda: schedule_invalidate_project_config(\n public_key=instance.public_key, trigger=\"projectkey.post_delete\"\n )\n )\n\n\n@region_silo_model\nclass ProjectKey(Model):\n __include_in_export__ = True\n\n project = FlexibleForeignKey(\"sentry.Project\", related_name=\"key_set\")\n label = models.CharField(max_length=64, blank=True, null=True)\n public_key = models.CharField(max_length=32, unique=True, null=True)\n secret_key = models.CharField(max_length=32, unique=True, null=True)\n roles = BitField(\n flags=(\n # access to post events to the store endpoint\n (\"store\", \"Event API access\"),\n # read/write access to rest API\n (\"api\", \"Web API access\"),\n ),\n default=[\"store\"],\n )\n status = BoundedPositiveIntegerField(\n default=0,\n choices=(\n (ProjectKeyStatus.ACTIVE, _(\"Active\")),\n (ProjectKeyStatus.INACTIVE, _(\"Inactive\")),\n ),\n db_index=True,\n )\n date_added = models.DateTimeField(default=timezone.now, null=True)\n\n rate_limit_count = BoundedPositiveIntegerField(null=True)\n rate_limit_window = BoundedPositiveIntegerField(null=True)\n\n objects = ProjectKeyManager(\n cache_fields=(\"public_key\", \"secret_key\"),\n # store projectkeys in memcached for longer than other models,\n # specifically to make the relay_projectconfig endpoint faster.\n cache_ttl=60 * 30,\n )\n\n data = JSONField()\n\n # support legacy project keys in API\n scopes = (\n \"project:read\",\n \"project:write\",\n \"project:admin\",\n \"project:releases\",\n \"event:read\",\n \"event:write\",\n \"event:admin\",\n )\n\n class Meta:\n app_label = \"sentry\"\n db_table = \"sentry_projectkey\"\n\n __repr__ = sane_repr(\"project_id\", \"public_key\")\n\n def __str__(self):\n return str(self.public_key)\n\n @classmethod\n def generate_api_key(cls):\n return uuid4().hex\n\n @classmethod\n def looks_like_api_key(cls, key):\n return bool(_uuid4_re.match(key))\n\n @classmethod\n def from_dsn(cls, dsn):\n urlparts = urlparse(dsn)\n\n public_key = urlparts.username\n project_id = urlparts.path.rsplit(\"/\", 1)[-1]\n\n try:\n return ProjectKey.objects.get(public_key=public_key, project=project_id)\n except ValueError:\n # ValueError would come from a non-integer project_id,\n # which is obviously a DoesNotExist. 
We catch and rethrow this\n # so anything downstream expecting DoesNotExist works fine\n raise ProjectKey.DoesNotExist(\"ProjectKey matching query does not exist.\")\n\n @classmethod\n def get_default(cls, project):\n return cls.objects.filter(\n project=project,\n roles=models.F(\"roles\").bitor(cls.roles.store),\n status=ProjectKeyStatus.ACTIVE,\n ).first()\n\n @property\n def is_active(self):\n return self.status == ProjectKeyStatus.ACTIVE\n\n @property\n def rate_limit(self):\n if self.rate_limit_count and self.rate_limit_window:\n return (self.rate_limit_count, self.rate_limit_window)\n return (0, 0)\n\n def save(self, *args, **kwargs):\n if not self.public_key:\n self.public_key = ProjectKey.generate_api_key()\n if not self.secret_key:\n self.secret_key = ProjectKey.generate_api_key()\n if not self.label:\n self.label = petname.Generate(2, \" \", letters=10).title()\n super().save(*args, **kwargs)\n\n def get_dsn(self, domain=None, secure=True, public=False):\n urlparts = urlparse(self.get_endpoint(public=public))\n\n if not public:\n key = f\"{self.public_key}:{self.secret_key}\"\n else:\n key = self.public_key\n\n # If we do not have a scheme or domain/hostname, dsn is never valid\n if not urlparts.netloc or not urlparts.scheme:\n return \"\"\n\n return \"{}://{}@{}/{}\".format(\n urlparts.scheme,\n key,\n urlparts.netloc + urlparts.path,\n self.project_id,\n )\n\n @property\n def organization_id(self):\n return self.project.organization_id\n\n @property\n def organization(self):\n return self.project.organization\n\n @property\n def dsn_private(self):\n return self.get_dsn(public=False)\n\n @property\n def dsn_public(self):\n return self.get_dsn(public=True)\n\n @property\n def csp_endpoint(self):\n endpoint = self.get_endpoint()\n\n return f\"{endpoint}/api/{self.project_id}/csp-report/?sentry_key={self.public_key}\"\n\n @property\n def security_endpoint(self):\n endpoint = self.get_endpoint()\n\n return f\"{endpoint}/api/{self.project_id}/security/?sentry_key={self.public_key}\"\n\n @property\n def minidump_endpoint(self):\n endpoint = self.get_endpoint()\n\n return f\"{endpoint}/api/{self.project_id}/minidump/?sentry_key={self.public_key}\"\n\n @property\n def unreal_endpoint(self):\n return f\"{self.get_endpoint()}/api/{self.project_id}/unreal/{self.public_key}/\"\n\n @property\n def js_sdk_loader_cdn_url(self):\n if settings.JS_SDK_LOADER_CDN_URL:\n return f\"{settings.JS_SDK_LOADER_CDN_URL}{self.public_key}.min.js\"\n else:\n endpoint = self.get_endpoint()\n return \"{}{}\".format(\n endpoint,\n reverse(\"sentry-js-sdk-loader\", args=[self.public_key, \".min\"]),\n )\n\n def get_endpoint(self, public=True):\n if public:\n endpoint = settings.SENTRY_PUBLIC_ENDPOINT or settings.SENTRY_ENDPOINT\n else:\n endpoint = settings.SENTRY_ENDPOINT\n\n if not endpoint:\n endpoint = options.get(\"system.url-prefix\")\n\n if features.has(\"organizations:org-subdomains\", self.project.organization):\n urlparts = urlparse(endpoint)\n if urlparts.scheme and urlparts.netloc:\n endpoint = \"{}://{}.{}{}\".format(\n urlparts.scheme,\n settings.SENTRY_ORG_SUBDOMAIN_TEMPLATE.format(\n organization_id=self.project.organization_id\n ),\n urlparts.netloc,\n urlparts.path,\n )\n\n return endpoint\n\n def get_allowed_origins(self):\n from sentry.utils.http import get_origins\n\n return get_origins(self.project)\n\n def get_audit_log_data(self):\n return {\n \"label\": self.label,\n \"public_key\": self.public_key,\n \"secret_key\": self.secret_key,\n \"roles\": int(self.roles),\n \"status\": 
self.status,\n \"rate_limit_count\": self.rate_limit_count,\n \"rate_limit_window\": self.rate_limit_window,\n }\n\n def get_scopes(self):\n return self.scopes\n", "path": "src/sentry/models/projectkey.py"}], "after_files": [{"content": "import re\nfrom urllib.parse import urlparse\nfrom uuid import uuid4\n\nimport petname\nfrom django.conf import settings\nfrom django.db import ProgrammingError, models, transaction\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom bitfield import BitField\nfrom sentry import features, options\nfrom sentry.db.models import (\n BaseManager,\n BoundedPositiveIntegerField,\n FlexibleForeignKey,\n JSONField,\n Model,\n region_silo_model,\n sane_repr,\n)\nfrom sentry.tasks.relay import schedule_invalidate_project_config\n\n_uuid4_re = re.compile(r\"^[a-f0-9]{32}$\")\n\n# TODO(dcramer): pull in enum library\n\n\nclass ProjectKeyStatus:\n ACTIVE = 0\n INACTIVE = 1\n\n\nclass ProjectKeyManager(BaseManager):\n def post_save(self, instance, **kwargs):\n # this hook may be called from model hooks during an\n # open transaction. In that case, wait until the current transaction has\n # been committed or rolled back to ensure we don't read stale data in the\n # task.\n #\n # If there is no transaction open, on_commit should run immediately.\n transaction.on_commit(\n lambda: schedule_invalidate_project_config(\n public_key=instance.public_key, trigger=\"projectkey.post_save\"\n )\n )\n\n def post_delete(self, instance, **kwargs):\n # this hook may be called from model hooks during an\n # open transaction. In that case, wait until the current transaction has\n # been committed or rolled back to ensure we don't read stale data in the\n # task.\n #\n # If there is no transaction open, on_commit should run immediately.\n transaction.on_commit(\n lambda: schedule_invalidate_project_config(\n public_key=instance.public_key, trigger=\"projectkey.post_delete\"\n )\n )\n\n\n@region_silo_model\nclass ProjectKey(Model):\n __include_in_export__ = True\n\n project = FlexibleForeignKey(\"sentry.Project\", related_name=\"key_set\")\n label = models.CharField(max_length=64, blank=True, null=True)\n public_key = models.CharField(max_length=32, unique=True, null=True)\n secret_key = models.CharField(max_length=32, unique=True, null=True)\n roles = BitField(\n flags=(\n # access to post events to the store endpoint\n (\"store\", \"Event API access\"),\n # read/write access to rest API\n (\"api\", \"Web API access\"),\n ),\n default=[\"store\"],\n )\n status = BoundedPositiveIntegerField(\n default=0,\n choices=(\n (ProjectKeyStatus.ACTIVE, _(\"Active\")),\n (ProjectKeyStatus.INACTIVE, _(\"Inactive\")),\n ),\n db_index=True,\n )\n date_added = models.DateTimeField(default=timezone.now, null=True)\n\n rate_limit_count = BoundedPositiveIntegerField(null=True)\n rate_limit_window = BoundedPositiveIntegerField(null=True)\n\n objects = ProjectKeyManager(\n cache_fields=(\"public_key\", \"secret_key\"),\n # store projectkeys in memcached for longer than other models,\n # specifically to make the relay_projectconfig endpoint faster.\n cache_ttl=60 * 30,\n )\n\n data = JSONField()\n\n # support legacy project keys in API\n scopes = (\n \"project:read\",\n \"project:write\",\n \"project:admin\",\n \"project:releases\",\n \"event:read\",\n \"event:write\",\n \"event:admin\",\n )\n\n class Meta:\n app_label = \"sentry\"\n db_table = \"sentry_projectkey\"\n\n __repr__ = sane_repr(\"project_id\", \"public_key\")\n\n def 
__str__(self):\n return str(self.public_key)\n\n @classmethod\n def generate_api_key(cls):\n return uuid4().hex\n\n @classmethod\n def looks_like_api_key(cls, key):\n return bool(_uuid4_re.match(key))\n\n @classmethod\n def from_dsn(cls, dsn):\n urlparts = urlparse(dsn)\n\n public_key = urlparts.username\n project_id = urlparts.path.rsplit(\"/\", 1)[-1]\n\n try:\n return ProjectKey.objects.get(public_key=public_key, project=project_id)\n except ValueError:\n # ValueError would come from a non-integer project_id,\n # which is obviously a DoesNotExist. We catch and rethrow this\n # so anything downstream expecting DoesNotExist works fine\n raise ProjectKey.DoesNotExist(\"ProjectKey matching query does not exist.\")\n\n @classmethod\n def get_default(cls, project):\n return cls.objects.filter(\n project=project,\n roles=models.F(\"roles\").bitor(cls.roles.store),\n status=ProjectKeyStatus.ACTIVE,\n ).first()\n\n @property\n def is_active(self):\n return self.status == ProjectKeyStatus.ACTIVE\n\n @property\n def rate_limit(self):\n if self.rate_limit_count and self.rate_limit_window:\n return (self.rate_limit_count, self.rate_limit_window)\n return (0, 0)\n\n def save(self, *args, **kwargs):\n if not self.public_key:\n self.public_key = ProjectKey.generate_api_key()\n if not self.secret_key:\n self.secret_key = ProjectKey.generate_api_key()\n if not self.label:\n self.label = petname.Generate(2, \" \", letters=10).title()\n super().save(*args, **kwargs)\n\n def get_dsn(self, domain=None, secure=True, public=False):\n urlparts = urlparse(self.get_endpoint(public=public))\n\n if not public:\n key = f\"{self.public_key}:{self.secret_key}\"\n else:\n key = self.public_key\n\n # If we do not have a scheme or domain/hostname, dsn is never valid\n if not urlparts.netloc or not urlparts.scheme:\n return \"\"\n\n return \"{}://{}@{}/{}\".format(\n urlparts.scheme,\n key,\n urlparts.netloc + urlparts.path,\n self.project_id,\n )\n\n @property\n def organization_id(self):\n return self.project.organization_id\n\n @property\n def organization(self):\n return self.project.organization\n\n @property\n def dsn_private(self):\n return self.get_dsn(public=False)\n\n @property\n def dsn_public(self):\n return self.get_dsn(public=True)\n\n @property\n def csp_endpoint(self):\n endpoint = self.get_endpoint()\n\n return f\"{endpoint}/api/{self.project_id}/csp-report/?sentry_key={self.public_key}\"\n\n @property\n def security_endpoint(self):\n endpoint = self.get_endpoint()\n\n return f\"{endpoint}/api/{self.project_id}/security/?sentry_key={self.public_key}\"\n\n @property\n def minidump_endpoint(self):\n endpoint = self.get_endpoint()\n\n return f\"{endpoint}/api/{self.project_id}/minidump/?sentry_key={self.public_key}\"\n\n @property\n def unreal_endpoint(self):\n return f\"{self.get_endpoint()}/api/{self.project_id}/unreal/{self.public_key}/\"\n\n @property\n def js_sdk_loader_cdn_url(self):\n if settings.JS_SDK_LOADER_CDN_URL:\n return f\"{settings.JS_SDK_LOADER_CDN_URL}{self.public_key}.min.js\"\n else:\n endpoint = self.get_endpoint()\n return \"{}{}\".format(\n endpoint,\n reverse(\"sentry-js-sdk-loader\", args=[self.public_key, \".min\"]),\n )\n\n def get_endpoint(self, public=True):\n if public:\n endpoint = settings.SENTRY_PUBLIC_ENDPOINT or settings.SENTRY_ENDPOINT\n else:\n endpoint = settings.SENTRY_ENDPOINT\n\n if not endpoint:\n endpoint = options.get(\"system.url-prefix\")\n\n has_org_subdomain = False\n try:\n has_org_subdomain = features.has(\n \"organizations:org-subdomains\", 
self.project.organization\n )\n except ProgrammingError:\n # This happens during migration generation for the organization model.\n pass\n\n if has_org_subdomain:\n urlparts = urlparse(endpoint)\n if urlparts.scheme and urlparts.netloc:\n endpoint = \"{}://{}.{}{}\".format(\n urlparts.scheme,\n settings.SENTRY_ORG_SUBDOMAIN_TEMPLATE.format(\n organization_id=self.project.organization_id\n ),\n urlparts.netloc,\n urlparts.path,\n )\n\n return endpoint\n\n def get_allowed_origins(self):\n from sentry.utils.http import get_origins\n\n return get_origins(self.project)\n\n def get_audit_log_data(self):\n return {\n \"label\": self.label,\n \"public_key\": self.public_key,\n \"secret_key\": self.secret_key,\n \"roles\": int(self.roles),\n \"status\": self.status,\n \"rate_limit_count\": self.rate_limit_count,\n \"rate_limit_window\": self.rate_limit_window,\n }\n\n def get_scopes(self):\n return self.scopes\n", "path": "src/sentry/models/projectkey.py"}]}
| 3,079 | 249 |
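The Sentry patch serialized in the row above guards a feature-flag lookup so that `makemigrations` no longer crashes when the DSN code runs before the organization table exists. A minimal sketch of that guard pattern, assuming Django plus a Sentry-style `features` helper passed in by the caller rather than imported:

```python
# Sketch of the guard used in the patch above (assumption: the `features`
# object mirrors sentry.features and may issue a database query).
from django.db import ProgrammingError


def org_uses_subdomain(project, features) -> bool:
    try:
        return features.has("organizations:org-subdomains", project.organization)
    except ProgrammingError:
        # Raised while management commands such as `makemigrations` import
        # models before the organization table exists; fall back to the
        # plain endpoint in that case.
        return False
```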
gh_patches_debug_43197
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-2204
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add ability to clone a project
Feature description
---------
Add a button on the Projects page that allows the user to clone an existing project and specify which sections to duplicate (Guidelines, Labels, Data, etc.) into a new project.
Use Case
----------
Often, I want to do a test run of annotations to check the efficacy of my class labels and instructions before annotating a larger dataset.
There are times when I need to perform multiple test runs after changing instructions, class labels, or both. This is a very manual workflow of creating multiple, near duplicate projects. Having the ability to duplicate or clone an existing project, and then make edits manually would be an improvement. Currently, the only option in Doccano is to create a new project from scratch and manually copy/paste the Guidelines and re-upload a labels file (or manually enter in the labels a second time).
Env: MacOS
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/projects/models.py`
Content:
```
1 import abc
2
3 from django.conf import settings
4 from django.contrib.auth.models import User
5 from django.core.exceptions import ValidationError
6 from django.db import models
7 from django.db.models import Manager
8 from polymorphic.models import PolymorphicModel
9
10 from roles.models import Role
11
12 DOCUMENT_CLASSIFICATION = "DocumentClassification"
13 SEQUENCE_LABELING = "SequenceLabeling"
14 SEQ2SEQ = "Seq2seq"
15 SPEECH2TEXT = "Speech2text"
16 IMAGE_CLASSIFICATION = "ImageClassification"
17 BOUNDING_BOX = "BoundingBox"
18 SEGMENTATION = "Segmentation"
19 IMAGE_CAPTIONING = "ImageCaptioning"
20 INTENT_DETECTION_AND_SLOT_FILLING = "IntentDetectionAndSlotFilling"
21 PROJECT_CHOICES = (
22 (DOCUMENT_CLASSIFICATION, "document classification"),
23 (SEQUENCE_LABELING, "sequence labeling"),
24 (SEQ2SEQ, "sequence to sequence"),
25 (INTENT_DETECTION_AND_SLOT_FILLING, "intent detection and slot filling"),
26 (SPEECH2TEXT, "speech to text"),
27 (IMAGE_CLASSIFICATION, "image classification"),
28 (BOUNDING_BOX, "bounding box"),
29 (SEGMENTATION, "segmentation"),
30 (IMAGE_CAPTIONING, "image captioning"),
31 )
32
33
34 class Project(PolymorphicModel):
35 name = models.CharField(max_length=100)
36 description = models.TextField(default="")
37 guideline = models.TextField(default="", blank=True)
38 created_at = models.DateTimeField(auto_now_add=True)
39 updated_at = models.DateTimeField(auto_now=True)
40 created_by = models.ForeignKey(
41 User,
42 on_delete=models.SET_NULL,
43 null=True,
44 )
45 project_type = models.CharField(max_length=30, choices=PROJECT_CHOICES)
46 random_order = models.BooleanField(default=False)
47 collaborative_annotation = models.BooleanField(default=False)
48 single_class_classification = models.BooleanField(default=False)
49
50 def add_admin(self):
51 admin_role = Role.objects.get(name=settings.ROLE_PROJECT_ADMIN)
52 Member.objects.create(
53 project=self,
54 user=self.created_by,
55 role=admin_role,
56 )
57
58 @property
59 @abc.abstractmethod
60 def is_text_project(self) -> bool:
61 return False
62
63 def __str__(self):
64 return self.name
65
66
67 class TextClassificationProject(Project):
68 @property
69 def is_text_project(self) -> bool:
70 return True
71
72
73 class SequenceLabelingProject(Project):
74 allow_overlapping = models.BooleanField(default=False)
75 grapheme_mode = models.BooleanField(default=False)
76 use_relation = models.BooleanField(default=False)
77
78 @property
79 def is_text_project(self) -> bool:
80 return True
81
82
83 class Seq2seqProject(Project):
84 @property
85 def is_text_project(self) -> bool:
86 return True
87
88
89 class IntentDetectionAndSlotFillingProject(Project):
90 @property
91 def is_text_project(self) -> bool:
92 return True
93
94
95 class Speech2textProject(Project):
96 @property
97 def is_text_project(self) -> bool:
98 return False
99
100
101 class ImageClassificationProject(Project):
102 @property
103 def is_text_project(self) -> bool:
104 return False
105
106
107 class BoundingBoxProject(Project):
108 @property
109 def is_text_project(self) -> bool:
110 return False
111
112
113 class SegmentationProject(Project):
114 @property
115 def is_text_project(self) -> bool:
116 return False
117
118
119 class ImageCaptioningProject(Project):
120 @property
121 def is_text_project(self) -> bool:
122 return False
123
124
125 class Tag(models.Model):
126 text = models.TextField()
127 project = models.ForeignKey(to=Project, on_delete=models.CASCADE, related_name="tags")
128
129 def __str__(self):
130 return self.text
131
132
133 class MemberManager(Manager):
134 def can_update(self, project: int, member_id: int, new_role: str) -> bool:
135 """The project needs at least 1 admin.
136
137 Args:
138 project: The project id.
139 member_id: The member id.
140 new_role: The new role name.
141
142 Returns:
143 Whether the mapping can be updated or not.
144 """
145 queryset = self.filter(project=project, role__name=settings.ROLE_PROJECT_ADMIN)
146 if queryset.count() > 1:
147 return True
148 else:
149 admin = queryset.first()
150 # we can change the role except for the only admin.
151 return admin.id != member_id or new_role == settings.ROLE_PROJECT_ADMIN
152
153 def has_role(self, project_id: int, user: User, role_name: str):
154 return self.filter(project=project_id, user=user, role__name=role_name).exists()
155
156
157 class Member(models.Model):
158 user = models.ForeignKey(to=User, on_delete=models.CASCADE, related_name="role_mappings")
159 project = models.ForeignKey(to=Project, on_delete=models.CASCADE, related_name="role_mappings")
160 role = models.ForeignKey(to=Role, on_delete=models.CASCADE)
161 created_at = models.DateTimeField(auto_now_add=True)
162 updated_at = models.DateTimeField(auto_now=True)
163 objects = MemberManager()
164
165 def clean(self):
166 members = self.__class__.objects.exclude(id=self.id)
167 if members.filter(user=self.user, project=self.project).exists():
168 message = "This user is already assigned to a role in this project."
169 raise ValidationError(message)
170
171 @property
172 def username(self):
173 return self.user.username
174
175 class Meta:
176 unique_together = ("user", "project")
177
```
Path: `backend/projects/urls.py`
Content:
```
1 from django.urls import path
2
3 from .views.member import MemberDetail, MemberList, MyRole
4 from .views.project import ProjectDetail, ProjectList
5 from .views.tag import TagDetail, TagList
6
7 urlpatterns = [
8 path(route="projects", view=ProjectList.as_view(), name="project_list"),
9 path(route="projects/<int:project_id>", view=ProjectDetail.as_view(), name="project_detail"),
10 path(route="projects/<int:project_id>/my-role", view=MyRole.as_view(), name="my_role"),
11 path(route="projects/<int:project_id>/tags", view=TagList.as_view(), name="tag_list"),
12 path(route="projects/<int:project_id>/tags/<int:tag_id>", view=TagDetail.as_view(), name="tag_detail"),
13 path(route="projects/<int:project_id>/members", view=MemberList.as_view(), name="member_list"),
14 path(route="projects/<int:project_id>/members/<int:member_id>", view=MemberDetail.as_view(), name="member_detail"),
15 ]
16
```
Path: `backend/projects/views/project.py`
Content:
```
1 from django.conf import settings
2 from django_filters.rest_framework import DjangoFilterBackend
3 from rest_framework import filters, generics, status
4 from rest_framework.permissions import IsAdminUser, IsAuthenticated
5 from rest_framework.response import Response
6
7 from projects.models import Project
8 from projects.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly
9 from projects.serializers import ProjectPolymorphicSerializer
10
11
12 class ProjectList(generics.ListCreateAPIView):
13 serializer_class = ProjectPolymorphicSerializer
14 filter_backends = (DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter)
15 search_fields = ("name", "description")
16 ordering_fields = ["name", "created_at", "created_by", "project_type"]
17 ordering = ["-created_at"]
18
19 def get_permissions(self):
20 if self.request.method == "GET":
21 self.permission_classes = [
22 IsAuthenticated,
23 ]
24 else:
25 self.permission_classes = [IsAuthenticated & IsAdminUser]
26 return super().get_permissions()
27
28 def get_queryset(self):
29 return Project.objects.filter(role_mappings__user=self.request.user)
30
31 def perform_create(self, serializer):
32 project = serializer.save(created_by=self.request.user)
33 project.add_admin()
34
35 def delete(self, request, *args, **kwargs):
36 delete_ids = request.data["ids"]
37 projects = Project.objects.filter(
38 role_mappings__user=self.request.user,
39 role_mappings__role__name=settings.ROLE_PROJECT_ADMIN,
40 pk__in=delete_ids,
41 )
42 # Todo: I want to use bulk delete.
43 # But it causes the constraint error.
44 # See https://github.com/django-polymorphic/django-polymorphic/issues/229
45 for project in projects:
46 project.delete()
47 return Response(status=status.HTTP_204_NO_CONTENT)
48
49
50 class ProjectDetail(generics.RetrieveUpdateDestroyAPIView):
51 queryset = Project.objects.all()
52 serializer_class = ProjectPolymorphicSerializer
53 lookup_url_kwarg = "project_id"
54 permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/backend/projects/models.py b/backend/projects/models.py
--- a/backend/projects/models.py
+++ b/backend/projects/models.py
@@ -1,4 +1,5 @@
import abc
+import uuid
from django.conf import settings
from django.contrib.auth.models import User
@@ -60,6 +61,54 @@
def is_text_project(self) -> bool:
return False
+ def clone(self) -> "Project":
+ """Clone the project.
+ See https://docs.djangoproject.com/en/4.2/topics/db/queries/#copying-model-instances
+
+ Returns:
+ The cloned project.
+ """
+ project = Project.objects.get(pk=self.pk)
+ project.pk = None
+ project.id = None
+ project._state.adding = True
+ project.save()
+
+ def bulk_clone(queryset: models.QuerySet, field_initializers: dict = None):
+ """Clone the queryset.
+
+ Args:
+ queryset: The queryset to clone.
+ field_initializers: The field initializers.
+ """
+ if field_initializers is None:
+ field_initializers = {}
+ items = []
+ for item in queryset:
+ item.id = None
+ item.pk = None
+ for field, value_or_callable in field_initializers.items():
+ if callable(value_or_callable):
+ value_or_callable = value_or_callable()
+ setattr(item, field, value_or_callable)
+ item.project = project
+ item._state.adding = True
+ items.append(item)
+ queryset.model.objects.bulk_create(items)
+
+ bulk_clone(self.role_mappings.all())
+ bulk_clone(self.tags.all())
+
+ # clone examples
+ bulk_clone(self.examples.all(), field_initializers={"uuid": uuid.uuid4})
+
+ # clone label types
+ bulk_clone(self.categorytype_set.all())
+ bulk_clone(self.spantype_set.all())
+ bulk_clone(self.relationtype_set.all())
+
+ return project
+
def __str__(self):
return self.name
diff --git a/backend/projects/urls.py b/backend/projects/urls.py
--- a/backend/projects/urls.py
+++ b/backend/projects/urls.py
@@ -1,7 +1,7 @@
from django.urls import path
from .views.member import MemberDetail, MemberList, MyRole
-from .views.project import ProjectDetail, ProjectList
+from .views.project import CloneProject, ProjectDetail, ProjectList
from .views.tag import TagDetail, TagList
urlpatterns = [
@@ -11,5 +11,6 @@
path(route="projects/<int:project_id>/tags", view=TagList.as_view(), name="tag_list"),
path(route="projects/<int:project_id>/tags/<int:tag_id>", view=TagDetail.as_view(), name="tag_detail"),
path(route="projects/<int:project_id>/members", view=MemberList.as_view(), name="member_list"),
+ path(route="projects/<int:project_id>/clone", view=CloneProject.as_view(), name="clone_project"),
path(route="projects/<int:project_id>/members/<int:member_id>", view=MemberDetail.as_view(), name="member_detail"),
]
diff --git a/backend/projects/views/project.py b/backend/projects/views/project.py
--- a/backend/projects/views/project.py
+++ b/backend/projects/views/project.py
@@ -1,6 +1,8 @@
from django.conf import settings
+from django.db import transaction
+from django.shortcuts import get_object_or_404
from django_filters.rest_framework import DjangoFilterBackend
-from rest_framework import filters, generics, status
+from rest_framework import filters, generics, status, views
from rest_framework.permissions import IsAdminUser, IsAuthenticated
from rest_framework.response import Response
@@ -52,3 +54,14 @@
serializer_class = ProjectPolymorphicSerializer
lookup_url_kwarg = "project_id"
permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]
+
+
+class CloneProject(views.APIView):
+ permission_classes = [IsAuthenticated & IsProjectAdmin]
+
+ @transaction.atomic
+ def post(self, request, *args, **kwargs):
+ project = get_object_or_404(Project, pk=self.kwargs["project_id"])
+ cloned_project = project.clone()
+ serializer = ProjectPolymorphicSerializer(cloned_project)
+ return Response(serializer.data, status=status.HTTP_201_CREATED)
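A minimal sketch of how the new `clone_project` endpoint could be exercised from a test, using the models shown above. It assumes the role fixtures (`ROLE_PROJECT_ADMIN`) are already loaded and that the URL name resolves without a namespace, which may differ in doccano's actual test setup:

```python
from django.contrib.auth.models import User
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase

from projects.models import TextClassificationProject


class TestCloneProject(APITestCase):
    def test_project_admin_can_clone(self):
        user = User.objects.create_user(username="admin", password="pass")
        project = TextClassificationProject.objects.create(
            name="original", project_type="DocumentClassification", created_by=user
        )
        project.add_admin()  # the creator becomes the project admin (see models.py above)
        self.client.force_login(user)

        url = reverse("clone_project", kwargs={"project_id": project.id})
        response = self.client.post(url)

        self.assertEqual(response.status_code, status.HTTP_201_CREATED)
        self.assertNotEqual(response.data["id"], project.id)
        self.assertEqual(response.data["name"], project.name)
```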
|
{"golden_diff": "diff --git a/backend/projects/models.py b/backend/projects/models.py\n--- a/backend/projects/models.py\n+++ b/backend/projects/models.py\n@@ -1,4 +1,5 @@\n import abc\n+import uuid\n \n from django.conf import settings\n from django.contrib.auth.models import User\n@@ -60,6 +61,54 @@\n def is_text_project(self) -> bool:\n return False\n \n+ def clone(self) -> \"Project\":\n+ \"\"\"Clone the project.\n+ See https://docs.djangoproject.com/en/4.2/topics/db/queries/#copying-model-instances\n+\n+ Returns:\n+ The cloned project.\n+ \"\"\"\n+ project = Project.objects.get(pk=self.pk)\n+ project.pk = None\n+ project.id = None\n+ project._state.adding = True\n+ project.save()\n+\n+ def bulk_clone(queryset: models.QuerySet, field_initializers: dict = None):\n+ \"\"\"Clone the queryset.\n+\n+ Args:\n+ queryset: The queryset to clone.\n+ field_initializers: The field initializers.\n+ \"\"\"\n+ if field_initializers is None:\n+ field_initializers = {}\n+ items = []\n+ for item in queryset:\n+ item.id = None\n+ item.pk = None\n+ for field, value_or_callable in field_initializers.items():\n+ if callable(value_or_callable):\n+ value_or_callable = value_or_callable()\n+ setattr(item, field, value_or_callable)\n+ item.project = project\n+ item._state.adding = True\n+ items.append(item)\n+ queryset.model.objects.bulk_create(items)\n+\n+ bulk_clone(self.role_mappings.all())\n+ bulk_clone(self.tags.all())\n+\n+ # clone examples\n+ bulk_clone(self.examples.all(), field_initializers={\"uuid\": uuid.uuid4})\n+\n+ # clone label types\n+ bulk_clone(self.categorytype_set.all())\n+ bulk_clone(self.spantype_set.all())\n+ bulk_clone(self.relationtype_set.all())\n+\n+ return project\n+\n def __str__(self):\n return self.name\n \ndiff --git a/backend/projects/urls.py b/backend/projects/urls.py\n--- a/backend/projects/urls.py\n+++ b/backend/projects/urls.py\n@@ -1,7 +1,7 @@\n from django.urls import path\n \n from .views.member import MemberDetail, MemberList, MyRole\n-from .views.project import ProjectDetail, ProjectList\n+from .views.project import CloneProject, ProjectDetail, ProjectList\n from .views.tag import TagDetail, TagList\n \n urlpatterns = [\n@@ -11,5 +11,6 @@\n path(route=\"projects/<int:project_id>/tags\", view=TagList.as_view(), name=\"tag_list\"),\n path(route=\"projects/<int:project_id>/tags/<int:tag_id>\", view=TagDetail.as_view(), name=\"tag_detail\"),\n path(route=\"projects/<int:project_id>/members\", view=MemberList.as_view(), name=\"member_list\"),\n+ path(route=\"projects/<int:project_id>/clone\", view=CloneProject.as_view(), name=\"clone_project\"),\n path(route=\"projects/<int:project_id>/members/<int:member_id>\", view=MemberDetail.as_view(), name=\"member_detail\"),\n ]\ndiff --git a/backend/projects/views/project.py b/backend/projects/views/project.py\n--- a/backend/projects/views/project.py\n+++ b/backend/projects/views/project.py\n@@ -1,6 +1,8 @@\n from django.conf import settings\n+from django.db import transaction\n+from django.shortcuts import get_object_or_404\n from django_filters.rest_framework import DjangoFilterBackend\n-from rest_framework import filters, generics, status\n+from rest_framework import filters, generics, status, views\n from rest_framework.permissions import IsAdminUser, IsAuthenticated\n from rest_framework.response import Response\n \n@@ -52,3 +54,14 @@\n serializer_class = ProjectPolymorphicSerializer\n lookup_url_kwarg = \"project_id\"\n permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]\n+\n+\n+class 
CloneProject(views.APIView):\n+ permission_classes = [IsAuthenticated & IsProjectAdmin]\n+\n+ @transaction.atomic\n+ def post(self, request, *args, **kwargs):\n+ project = get_object_or_404(Project, pk=self.kwargs[\"project_id\"])\n+ cloned_project = project.clone()\n+ serializer = ProjectPolymorphicSerializer(cloned_project)\n+ return Response(serializer.data, status=status.HTTP_201_CREATED)\n", "issue": "Add ability to clone a project\nFeature description\r\n---------\r\nAdd a button on the Projects page that allows the user to clone an existing project and specify which sections to duplicate (Guidelines, Labels, Data, etc) into a new project.\r\n\r\nUse Case\r\n----------\r\nOften, I want to do a test run of annotations to check the efficacy of my class labels and instructions before annotating a larger dataset. \r\nThere are times when I need to perform multiple test runs after changing instructions, class labels, or both. This is a very manual workflow of creating multiple, near duplicate projects. Having the ability to duplicate or clone an existing project, and then make edits manually would be an improvement. Currently, the only option in Doccano is to create a new project from scratch and manually copy/paste the Guidelines and re-upload a labels file (or manually enter in the labels a second time).\r\n\r\n\r\nEnv: MacOS\n", "before_files": [{"content": "import abc\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Manager\nfrom polymorphic.models import PolymorphicModel\n\nfrom roles.models import Role\n\nDOCUMENT_CLASSIFICATION = \"DocumentClassification\"\nSEQUENCE_LABELING = \"SequenceLabeling\"\nSEQ2SEQ = \"Seq2seq\"\nSPEECH2TEXT = \"Speech2text\"\nIMAGE_CLASSIFICATION = \"ImageClassification\"\nBOUNDING_BOX = \"BoundingBox\"\nSEGMENTATION = \"Segmentation\"\nIMAGE_CAPTIONING = \"ImageCaptioning\"\nINTENT_DETECTION_AND_SLOT_FILLING = \"IntentDetectionAndSlotFilling\"\nPROJECT_CHOICES = (\n (DOCUMENT_CLASSIFICATION, \"document classification\"),\n (SEQUENCE_LABELING, \"sequence labeling\"),\n (SEQ2SEQ, \"sequence to sequence\"),\n (INTENT_DETECTION_AND_SLOT_FILLING, \"intent detection and slot filling\"),\n (SPEECH2TEXT, \"speech to text\"),\n (IMAGE_CLASSIFICATION, \"image classification\"),\n (BOUNDING_BOX, \"bounding box\"),\n (SEGMENTATION, \"segmentation\"),\n (IMAGE_CAPTIONING, \"image captioning\"),\n)\n\n\nclass Project(PolymorphicModel):\n name = models.CharField(max_length=100)\n description = models.TextField(default=\"\")\n guideline = models.TextField(default=\"\", blank=True)\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n created_by = models.ForeignKey(\n User,\n on_delete=models.SET_NULL,\n null=True,\n )\n project_type = models.CharField(max_length=30, choices=PROJECT_CHOICES)\n random_order = models.BooleanField(default=False)\n collaborative_annotation = models.BooleanField(default=False)\n single_class_classification = models.BooleanField(default=False)\n\n def add_admin(self):\n admin_role = Role.objects.get(name=settings.ROLE_PROJECT_ADMIN)\n Member.objects.create(\n project=self,\n user=self.created_by,\n role=admin_role,\n )\n\n @property\n @abc.abstractmethod\n def is_text_project(self) -> bool:\n return False\n\n def __str__(self):\n return self.name\n\n\nclass TextClassificationProject(Project):\n @property\n def is_text_project(self) -> bool:\n return 
True\n\n\nclass SequenceLabelingProject(Project):\n allow_overlapping = models.BooleanField(default=False)\n grapheme_mode = models.BooleanField(default=False)\n use_relation = models.BooleanField(default=False)\n\n @property\n def is_text_project(self) -> bool:\n return True\n\n\nclass Seq2seqProject(Project):\n @property\n def is_text_project(self) -> bool:\n return True\n\n\nclass IntentDetectionAndSlotFillingProject(Project):\n @property\n def is_text_project(self) -> bool:\n return True\n\n\nclass Speech2textProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass ImageClassificationProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass BoundingBoxProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass SegmentationProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass ImageCaptioningProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass Tag(models.Model):\n text = models.TextField()\n project = models.ForeignKey(to=Project, on_delete=models.CASCADE, related_name=\"tags\")\n\n def __str__(self):\n return self.text\n\n\nclass MemberManager(Manager):\n def can_update(self, project: int, member_id: int, new_role: str) -> bool:\n \"\"\"The project needs at least 1 admin.\n\n Args:\n project: The project id.\n member_id: The member id.\n new_role: The new role name.\n\n Returns:\n Whether the mapping can be updated or not.\n \"\"\"\n queryset = self.filter(project=project, role__name=settings.ROLE_PROJECT_ADMIN)\n if queryset.count() > 1:\n return True\n else:\n admin = queryset.first()\n # we can change the role except for the only admin.\n return admin.id != member_id or new_role == settings.ROLE_PROJECT_ADMIN\n\n def has_role(self, project_id: int, user: User, role_name: str):\n return self.filter(project=project_id, user=user, role__name=role_name).exists()\n\n\nclass Member(models.Model):\n user = models.ForeignKey(to=User, on_delete=models.CASCADE, related_name=\"role_mappings\")\n project = models.ForeignKey(to=Project, on_delete=models.CASCADE, related_name=\"role_mappings\")\n role = models.ForeignKey(to=Role, on_delete=models.CASCADE)\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n objects = MemberManager()\n\n def clean(self):\n members = self.__class__.objects.exclude(id=self.id)\n if members.filter(user=self.user, project=self.project).exists():\n message = \"This user is already assigned to a role in this project.\"\n raise ValidationError(message)\n\n @property\n def username(self):\n return self.user.username\n\n class Meta:\n unique_together = (\"user\", \"project\")\n", "path": "backend/projects/models.py"}, {"content": "from django.urls import path\n\nfrom .views.member import MemberDetail, MemberList, MyRole\nfrom .views.project import ProjectDetail, ProjectList\nfrom .views.tag import TagDetail, TagList\n\nurlpatterns = [\n path(route=\"projects\", view=ProjectList.as_view(), name=\"project_list\"),\n path(route=\"projects/<int:project_id>\", view=ProjectDetail.as_view(), name=\"project_detail\"),\n path(route=\"projects/<int:project_id>/my-role\", view=MyRole.as_view(), name=\"my_role\"),\n path(route=\"projects/<int:project_id>/tags\", view=TagList.as_view(), name=\"tag_list\"),\n path(route=\"projects/<int:project_id>/tags/<int:tag_id>\", view=TagDetail.as_view(), name=\"tag_detail\"),\n 
path(route=\"projects/<int:project_id>/members\", view=MemberList.as_view(), name=\"member_list\"),\n path(route=\"projects/<int:project_id>/members/<int:member_id>\", view=MemberDetail.as_view(), name=\"member_detail\"),\n]\n", "path": "backend/projects/urls.py"}, {"content": "from django.conf import settings\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom rest_framework import filters, generics, status\nfrom rest_framework.permissions import IsAdminUser, IsAuthenticated\nfrom rest_framework.response import Response\n\nfrom projects.models import Project\nfrom projects.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly\nfrom projects.serializers import ProjectPolymorphicSerializer\n\n\nclass ProjectList(generics.ListCreateAPIView):\n serializer_class = ProjectPolymorphicSerializer\n filter_backends = (DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter)\n search_fields = (\"name\", \"description\")\n ordering_fields = [\"name\", \"created_at\", \"created_by\", \"project_type\"]\n ordering = [\"-created_at\"]\n\n def get_permissions(self):\n if self.request.method == \"GET\":\n self.permission_classes = [\n IsAuthenticated,\n ]\n else:\n self.permission_classes = [IsAuthenticated & IsAdminUser]\n return super().get_permissions()\n\n def get_queryset(self):\n return Project.objects.filter(role_mappings__user=self.request.user)\n\n def perform_create(self, serializer):\n project = serializer.save(created_by=self.request.user)\n project.add_admin()\n\n def delete(self, request, *args, **kwargs):\n delete_ids = request.data[\"ids\"]\n projects = Project.objects.filter(\n role_mappings__user=self.request.user,\n role_mappings__role__name=settings.ROLE_PROJECT_ADMIN,\n pk__in=delete_ids,\n )\n # Todo: I want to use bulk delete.\n # But it causes the constraint error.\n # See https://github.com/django-polymorphic/django-polymorphic/issues/229\n for project in projects:\n project.delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass ProjectDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Project.objects.all()\n serializer_class = ProjectPolymorphicSerializer\n lookup_url_kwarg = \"project_id\"\n permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]\n", "path": "backend/projects/views/project.py"}], "after_files": [{"content": "import abc\nimport uuid\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Manager\nfrom polymorphic.models import PolymorphicModel\n\nfrom roles.models import Role\n\nDOCUMENT_CLASSIFICATION = \"DocumentClassification\"\nSEQUENCE_LABELING = \"SequenceLabeling\"\nSEQ2SEQ = \"Seq2seq\"\nSPEECH2TEXT = \"Speech2text\"\nIMAGE_CLASSIFICATION = \"ImageClassification\"\nBOUNDING_BOX = \"BoundingBox\"\nSEGMENTATION = \"Segmentation\"\nIMAGE_CAPTIONING = \"ImageCaptioning\"\nINTENT_DETECTION_AND_SLOT_FILLING = \"IntentDetectionAndSlotFilling\"\nPROJECT_CHOICES = (\n (DOCUMENT_CLASSIFICATION, \"document classification\"),\n (SEQUENCE_LABELING, \"sequence labeling\"),\n (SEQ2SEQ, \"sequence to sequence\"),\n (INTENT_DETECTION_AND_SLOT_FILLING, \"intent detection and slot filling\"),\n (SPEECH2TEXT, \"speech to text\"),\n (IMAGE_CLASSIFICATION, \"image classification\"),\n (BOUNDING_BOX, \"bounding box\"),\n (SEGMENTATION, \"segmentation\"),\n (IMAGE_CAPTIONING, \"image captioning\"),\n)\n\n\nclass Project(PolymorphicModel):\n name = 
models.CharField(max_length=100)\n description = models.TextField(default=\"\")\n guideline = models.TextField(default=\"\", blank=True)\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n created_by = models.ForeignKey(\n User,\n on_delete=models.SET_NULL,\n null=True,\n )\n project_type = models.CharField(max_length=30, choices=PROJECT_CHOICES)\n random_order = models.BooleanField(default=False)\n collaborative_annotation = models.BooleanField(default=False)\n single_class_classification = models.BooleanField(default=False)\n\n def add_admin(self):\n admin_role = Role.objects.get(name=settings.ROLE_PROJECT_ADMIN)\n Member.objects.create(\n project=self,\n user=self.created_by,\n role=admin_role,\n )\n\n @property\n @abc.abstractmethod\n def is_text_project(self) -> bool:\n return False\n\n def clone(self) -> \"Project\":\n \"\"\"Clone the project.\n See https://docs.djangoproject.com/en/4.2/topics/db/queries/#copying-model-instances\n\n Returns:\n The cloned project.\n \"\"\"\n project = Project.objects.get(pk=self.pk)\n project.pk = None\n project.id = None\n project._state.adding = True\n project.save()\n\n def bulk_clone(queryset: models.QuerySet, field_initializers: dict = None):\n \"\"\"Clone the queryset.\n\n Args:\n queryset: The queryset to clone.\n field_initializers: The field initializers.\n \"\"\"\n if field_initializers is None:\n field_initializers = {}\n items = []\n for item in queryset:\n item.id = None\n item.pk = None\n for field, value_or_callable in field_initializers.items():\n if callable(value_or_callable):\n value_or_callable = value_or_callable()\n setattr(item, field, value_or_callable)\n item.project = project\n item._state.adding = True\n items.append(item)\n queryset.model.objects.bulk_create(items)\n\n bulk_clone(self.role_mappings.all())\n bulk_clone(self.tags.all())\n\n # clone examples\n bulk_clone(self.examples.all(), field_initializers={\"uuid\": uuid.uuid4})\n\n # clone label types\n bulk_clone(self.categorytype_set.all())\n bulk_clone(self.spantype_set.all())\n bulk_clone(self.relationtype_set.all())\n\n return project\n\n def __str__(self):\n return self.name\n\n\nclass TextClassificationProject(Project):\n @property\n def is_text_project(self) -> bool:\n return True\n\n\nclass SequenceLabelingProject(Project):\n allow_overlapping = models.BooleanField(default=False)\n grapheme_mode = models.BooleanField(default=False)\n use_relation = models.BooleanField(default=False)\n\n @property\n def is_text_project(self) -> bool:\n return True\n\n\nclass Seq2seqProject(Project):\n @property\n def is_text_project(self) -> bool:\n return True\n\n\nclass IntentDetectionAndSlotFillingProject(Project):\n @property\n def is_text_project(self) -> bool:\n return True\n\n\nclass Speech2textProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass ImageClassificationProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass BoundingBoxProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass SegmentationProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass ImageCaptioningProject(Project):\n @property\n def is_text_project(self) -> bool:\n return False\n\n\nclass Tag(models.Model):\n text = models.TextField()\n project = models.ForeignKey(to=Project, on_delete=models.CASCADE, related_name=\"tags\")\n\n def __str__(self):\n return self.text\n\n\nclass 
MemberManager(Manager):\n def can_update(self, project: int, member_id: int, new_role: str) -> bool:\n \"\"\"The project needs at least 1 admin.\n\n Args:\n project: The project id.\n member_id: The member id.\n new_role: The new role name.\n\n Returns:\n Whether the mapping can be updated or not.\n \"\"\"\n queryset = self.filter(project=project, role__name=settings.ROLE_PROJECT_ADMIN)\n if queryset.count() > 1:\n return True\n else:\n admin = queryset.first()\n # we can change the role except for the only admin.\n return admin.id != member_id or new_role == settings.ROLE_PROJECT_ADMIN\n\n def has_role(self, project_id: int, user: User, role_name: str):\n return self.filter(project=project_id, user=user, role__name=role_name).exists()\n\n\nclass Member(models.Model):\n user = models.ForeignKey(to=User, on_delete=models.CASCADE, related_name=\"role_mappings\")\n project = models.ForeignKey(to=Project, on_delete=models.CASCADE, related_name=\"role_mappings\")\n role = models.ForeignKey(to=Role, on_delete=models.CASCADE)\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n objects = MemberManager()\n\n def clean(self):\n members = self.__class__.objects.exclude(id=self.id)\n if members.filter(user=self.user, project=self.project).exists():\n message = \"This user is already assigned to a role in this project.\"\n raise ValidationError(message)\n\n @property\n def username(self):\n return self.user.username\n\n class Meta:\n unique_together = (\"user\", \"project\")\n", "path": "backend/projects/models.py"}, {"content": "from django.urls import path\n\nfrom .views.member import MemberDetail, MemberList, MyRole\nfrom .views.project import CloneProject, ProjectDetail, ProjectList\nfrom .views.tag import TagDetail, TagList\n\nurlpatterns = [\n path(route=\"projects\", view=ProjectList.as_view(), name=\"project_list\"),\n path(route=\"projects/<int:project_id>\", view=ProjectDetail.as_view(), name=\"project_detail\"),\n path(route=\"projects/<int:project_id>/my-role\", view=MyRole.as_view(), name=\"my_role\"),\n path(route=\"projects/<int:project_id>/tags\", view=TagList.as_view(), name=\"tag_list\"),\n path(route=\"projects/<int:project_id>/tags/<int:tag_id>\", view=TagDetail.as_view(), name=\"tag_detail\"),\n path(route=\"projects/<int:project_id>/members\", view=MemberList.as_view(), name=\"member_list\"),\n path(route=\"projects/<int:project_id>/clone\", view=CloneProject.as_view(), name=\"clone_project\"),\n path(route=\"projects/<int:project_id>/members/<int:member_id>\", view=MemberDetail.as_view(), name=\"member_detail\"),\n]\n", "path": "backend/projects/urls.py"}, {"content": "from django.conf import settings\nfrom django.db import transaction\nfrom django.shortcuts import get_object_or_404\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom rest_framework import filters, generics, status, views\nfrom rest_framework.permissions import IsAdminUser, IsAuthenticated\nfrom rest_framework.response import Response\n\nfrom projects.models import Project\nfrom projects.permissions import IsProjectAdmin, IsProjectStaffAndReadOnly\nfrom projects.serializers import ProjectPolymorphicSerializer\n\n\nclass ProjectList(generics.ListCreateAPIView):\n serializer_class = ProjectPolymorphicSerializer\n filter_backends = (DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter)\n search_fields = (\"name\", \"description\")\n ordering_fields = [\"name\", \"created_at\", \"created_by\", \"project_type\"]\n ordering = 
[\"-created_at\"]\n\n def get_permissions(self):\n if self.request.method == \"GET\":\n self.permission_classes = [\n IsAuthenticated,\n ]\n else:\n self.permission_classes = [IsAuthenticated & IsAdminUser]\n return super().get_permissions()\n\n def get_queryset(self):\n return Project.objects.filter(role_mappings__user=self.request.user)\n\n def perform_create(self, serializer):\n project = serializer.save(created_by=self.request.user)\n project.add_admin()\n\n def delete(self, request, *args, **kwargs):\n delete_ids = request.data[\"ids\"]\n projects = Project.objects.filter(\n role_mappings__user=self.request.user,\n role_mappings__role__name=settings.ROLE_PROJECT_ADMIN,\n pk__in=delete_ids,\n )\n # Todo: I want to use bulk delete.\n # But it causes the constraint error.\n # See https://github.com/django-polymorphic/django-polymorphic/issues/229\n for project in projects:\n project.delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass ProjectDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Project.objects.all()\n serializer_class = ProjectPolymorphicSerializer\n lookup_url_kwarg = \"project_id\"\n permission_classes = [IsAuthenticated & (IsProjectAdmin | IsProjectStaffAndReadOnly)]\n\n\nclass CloneProject(views.APIView):\n permission_classes = [IsAuthenticated & IsProjectAdmin]\n\n @transaction.atomic\n def post(self, request, *args, **kwargs):\n project = get_object_or_404(Project, pk=self.kwargs[\"project_id\"])\n cloned_project = project.clone()\n serializer = ProjectPolymorphicSerializer(cloned_project)\n return Response(serializer.data, status=status.HTTP_201_CREATED)\n", "path": "backend/projects/views/project.py"}]}
| 2,836 | 981 |
gh_patches_debug_20622
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-573
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG-General] Terminal output not formatted and uses ASCII on Windows
**Describe the bug**
I am on Windows, using Command Prompt; I `cd` into the example_scenes directory and run
```sh
manim basic.py
```
and I get an output like the one below.

I should get it in green colour, though.
**To Reproduce**
Just running the one in example_scenes is enough.
**Expected behavior**
The ill-formatted output should be in green colour.
**Logs**
<details><summary>Terminal output (Screenshots acceptable)</summary>

<!-- Paste screenshot here -->
</details>
**System Specifications**
<details><summary>System Details</summary>
- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)): Windows 7
- Python version (`python/py/python3 --version`): 3.8
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/config/logger.py`
Content:
```
1 """
2 logger.py
3 ---------
4 This is the logging library for manim.
5 This library uses rich for coloured log outputs.
6
7 """
8
9
10 __all__ = ["logger", "console"]
11
12
13 import configparser
14 import logging
15
16 from rich.console import Console
17 from rich.logging import RichHandler
18 from rich.theme import Theme
19 from rich import print as printf
20 from rich import errors, color
21 import json
22 import copy
23
24
25 class JSONFormatter(logging.Formatter):
26 """Subclass of `:class:`logging.Formatter`, to build our own format of the logs (JSON)."""
27
28 def format(self, record):
29 record_c = copy.deepcopy(record)
30 if record_c.args:
31 for arg in record_c.args:
32 record_c.args[arg] = "<>"
33 return json.dumps(
34 {
35 "levelname": record_c.levelname,
36 "module": record_c.module,
37 "message": super().format(record_c),
38 }
39 )
40
41
42 def _parse_theme(config_logger):
43 theme = dict(
44 zip(
45 [key.replace("_", ".") for key in config_logger.keys()],
46 list(config_logger.values()),
47 )
48 )
49
50 theme["log.width"] = None if theme["log.width"] == "-1" else int(theme["log.width"])
51
52 theme["log.height"] = (
53 None if theme["log.height"] == "-1" else int(theme["log.height"])
54 )
55 theme["log.timestamps"] = False
56 try:
57 customTheme = Theme(
58 {
59 k: v
60 for k, v in theme.items()
61 if k not in ["log.width", "log.height", "log.timestamps"]
62 }
63 )
64 except (color.ColorParseError, errors.StyleSyntaxError):
65 customTheme = None
66 printf(
67 "[logging.level.error]It seems your colour configuration couldn't be parsed. Loading the default color configuration...[/logging.level.error]"
68 )
69 return customTheme
70
71
72 def set_rich_logger(config_logger, verbosity):
73 """Will set the RichHandler of the logger.
74
75 Parameter
76 ----------
77 config_logger :class:
78 Config object of the logger.
79 """
80 theme = _parse_theme(config_logger)
81 global console
82 console = Console(theme=theme)
83 # These keywords Are Highlighted specially.
84 RichHandler.KEYWORDS = [
85 "Played",
86 "animations",
87 "scene",
88 "Reading",
89 "Writing",
90 "script",
91 "arguments",
92 "Invalid",
93 "Aborting",
94 "module",
95 "File",
96 "Rendering",
97 "Rendered",
98 ]
99 rich_handler = RichHandler(
100 console=console, show_time=config_logger.getboolean("log_timestamps")
101 )
102 global logger
103 rich_handler.setLevel(verbosity)
104 logger.addHandler(rich_handler)
105
106
107 def set_file_logger(log_file_path):
108 file_handler = logging.FileHandler(log_file_path, mode="w")
109 file_handler.setFormatter(JSONFormatter())
110 global logger
111 logger.addHandler(file_handler)
112
113
114 logger = logging.getLogger("manim")
115 # The console is set to None as it will be changed by set_rich_logger.
116 console = None
117
118 # TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots
119 logging.getLogger("PIL").setLevel(logging.INFO)
120 logging.getLogger("matplotlib").setLevel(logging.INFO)
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/manim/config/logger.py b/manim/config/logger.py
--- a/manim/config/logger.py
+++ b/manim/config/logger.py
@@ -10,12 +10,12 @@
__all__ = ["logger", "console"]
-import configparser
import logging
from rich.console import Console
from rich.logging import RichHandler
from rich.theme import Theme
+from rich.traceback import install
from rich import print as printf
from rich import errors, color
import json
@@ -114,7 +114,7 @@
logger = logging.getLogger("manim")
# The console is set to None as it will be changed by set_rich_logger.
console = None
-
+install()
# TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots
logging.getLogger("PIL").setLevel(logging.INFO)
logging.getLogger("matplotlib").setLevel(logging.INFO)
|
{"golden_diff": "diff --git a/manim/config/logger.py b/manim/config/logger.py\n--- a/manim/config/logger.py\n+++ b/manim/config/logger.py\n@@ -10,12 +10,12 @@\n __all__ = [\"logger\", \"console\"]\n \n \n-import configparser\n import logging\n \n from rich.console import Console\n from rich.logging import RichHandler\n from rich.theme import Theme\n+from rich.traceback import install\n from rich import print as printf\n from rich import errors, color\n import json\n@@ -114,7 +114,7 @@\n logger = logging.getLogger(\"manim\")\n # The console is set to None as it will be changed by set_rich_logger.\n console = None\n-\n+install()\n # TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots\n logging.getLogger(\"PIL\").setLevel(logging.INFO)\n logging.getLogger(\"matplotlib\").setLevel(logging.INFO)\n", "issue": " [BUG-General] Terminal output not formatted and uses ASCII in windows\n**Describe the bug**\r\nI am on Windows and command prompt and I `cd` into example_scenes directory and run \r\n```sh\r\nmanim basic.py\r\n``` \r\nand I get a output like below.\r\n\r\nI should get in green colour though.\r\n\r\n**To Reproduce**\r\nJust running the one in example_scene in enough.\r\n\r\n**Expected behavior**\r\nThe ill formatted thing should be in green colour.\r\n\r\n**Logs**\r\n<details><summary>Terminal output (Screenshots acceptable)</summary>\r\n\r\n\r\n\r\n<!-- Paste screenshot here -->\r\n\r\n</details>\r\n\r\n**System Specifications**\r\n\r\n<details><summary>System Details</summary>\r\n\r\n- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)): Windows 7\r\n- Python version (`python/py/python3 --version`): 3.8\r\n\n", "before_files": [{"content": "\"\"\"\nlogger.py\n---------\nThis is the logging library for manim.\nThis library uses rich for coloured log outputs.\n\n\"\"\"\n\n\n__all__ = [\"logger\", \"console\"]\n\n\nimport configparser\nimport logging\n\nfrom rich.console import Console\nfrom rich.logging import RichHandler\nfrom rich.theme import Theme\nfrom rich import print as printf\nfrom rich import errors, color\nimport json\nimport copy\n\n\nclass JSONFormatter(logging.Formatter):\n \"\"\"Subclass of `:class:`logging.Formatter`, to build our own format of the logs (JSON).\"\"\"\n\n def format(self, record):\n record_c = copy.deepcopy(record)\n if record_c.args:\n for arg in record_c.args:\n record_c.args[arg] = \"<>\"\n return json.dumps(\n {\n \"levelname\": record_c.levelname,\n \"module\": record_c.module,\n \"message\": super().format(record_c),\n }\n )\n\n\ndef _parse_theme(config_logger):\n theme = dict(\n zip(\n [key.replace(\"_\", \".\") for key in config_logger.keys()],\n list(config_logger.values()),\n )\n )\n\n theme[\"log.width\"] = None if theme[\"log.width\"] == \"-1\" else int(theme[\"log.width\"])\n\n theme[\"log.height\"] = (\n None if theme[\"log.height\"] == \"-1\" else int(theme[\"log.height\"])\n )\n theme[\"log.timestamps\"] = False\n try:\n customTheme = Theme(\n {\n k: v\n for k, v in theme.items()\n if k not in [\"log.width\", \"log.height\", \"log.timestamps\"]\n }\n )\n except (color.ColorParseError, errors.StyleSyntaxError):\n customTheme = None\n printf(\n \"[logging.level.error]It seems your colour configuration couldn't be parsed. 
Loading the default color configuration...[/logging.level.error]\"\n )\n return customTheme\n\n\ndef set_rich_logger(config_logger, verbosity):\n \"\"\"Will set the RichHandler of the logger.\n\n Parameter\n ----------\n config_logger :class:\n Config object of the logger.\n \"\"\"\n theme = _parse_theme(config_logger)\n global console\n console = Console(theme=theme)\n # These keywords Are Highlighted specially.\n RichHandler.KEYWORDS = [\n \"Played\",\n \"animations\",\n \"scene\",\n \"Reading\",\n \"Writing\",\n \"script\",\n \"arguments\",\n \"Invalid\",\n \"Aborting\",\n \"module\",\n \"File\",\n \"Rendering\",\n \"Rendered\",\n ]\n rich_handler = RichHandler(\n console=console, show_time=config_logger.getboolean(\"log_timestamps\")\n )\n global logger\n rich_handler.setLevel(verbosity)\n logger.addHandler(rich_handler)\n\n\ndef set_file_logger(log_file_path):\n file_handler = logging.FileHandler(log_file_path, mode=\"w\")\n file_handler.setFormatter(JSONFormatter())\n global logger\n logger.addHandler(file_handler)\n\n\nlogger = logging.getLogger(\"manim\")\n# The console is set to None as it will be changed by set_rich_logger.\nconsole = None\n\n# TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots\nlogging.getLogger(\"PIL\").setLevel(logging.INFO)\nlogging.getLogger(\"matplotlib\").setLevel(logging.INFO)\n", "path": "manim/config/logger.py"}], "after_files": [{"content": "\"\"\"\nlogger.py\n---------\nThis is the logging library for manim.\nThis library uses rich for coloured log outputs.\n\n\"\"\"\n\n\n__all__ = [\"logger\", \"console\"]\n\n\nimport logging\n\nfrom rich.console import Console\nfrom rich.logging import RichHandler\nfrom rich.theme import Theme\nfrom rich.traceback import install\nfrom rich import print as printf\nfrom rich import errors, color\nimport json\nimport copy\n\n\nclass JSONFormatter(logging.Formatter):\n \"\"\"Subclass of `:class:`logging.Formatter`, to build our own format of the logs (JSON).\"\"\"\n\n def format(self, record):\n record_c = copy.deepcopy(record)\n if record_c.args:\n for arg in record_c.args:\n record_c.args[arg] = \"<>\"\n return json.dumps(\n {\n \"levelname\": record_c.levelname,\n \"module\": record_c.module,\n \"message\": super().format(record_c),\n }\n )\n\n\ndef _parse_theme(config_logger):\n theme = dict(\n zip(\n [key.replace(\"_\", \".\") for key in config_logger.keys()],\n list(config_logger.values()),\n )\n )\n\n theme[\"log.width\"] = None if theme[\"log.width\"] == \"-1\" else int(theme[\"log.width\"])\n\n theme[\"log.height\"] = (\n None if theme[\"log.height\"] == \"-1\" else int(theme[\"log.height\"])\n )\n theme[\"log.timestamps\"] = False\n try:\n customTheme = Theme(\n {\n k: v\n for k, v in theme.items()\n if k not in [\"log.width\", \"log.height\", \"log.timestamps\"]\n }\n )\n except (color.ColorParseError, errors.StyleSyntaxError):\n customTheme = None\n printf(\n \"[logging.level.error]It seems your colour configuration couldn't be parsed. 
Loading the default color configuration...[/logging.level.error]\"\n )\n return customTheme\n\n\ndef set_rich_logger(config_logger, verbosity):\n \"\"\"Will set the RichHandler of the logger.\n\n Parameter\n ----------\n config_logger :class:\n Config object of the logger.\n \"\"\"\n theme = _parse_theme(config_logger)\n global console\n console = Console(theme=theme)\n # These keywords Are Highlighted specially.\n RichHandler.KEYWORDS = [\n \"Played\",\n \"animations\",\n \"scene\",\n \"Reading\",\n \"Writing\",\n \"script\",\n \"arguments\",\n \"Invalid\",\n \"Aborting\",\n \"module\",\n \"File\",\n \"Rendering\",\n \"Rendered\",\n ]\n rich_handler = RichHandler(\n console=console, show_time=config_logger.getboolean(\"log_timestamps\")\n )\n global logger\n rich_handler.setLevel(verbosity)\n logger.addHandler(rich_handler)\n\n\ndef set_file_logger(log_file_path):\n file_handler = logging.FileHandler(log_file_path, mode=\"w\")\n file_handler.setFormatter(JSONFormatter())\n global logger\n logger.addHandler(file_handler)\n\n\nlogger = logging.getLogger(\"manim\")\n# The console is set to None as it will be changed by set_rich_logger.\nconsole = None\ninstall()\n# TODO : This is only temporary to keep the terminal output clean when working with ImageMobject and matplotlib plots\nlogging.getLogger(\"PIL\").setLevel(logging.INFO)\nlogging.getLogger(\"matplotlib\").setLevel(logging.INFO)\n", "path": "manim/config/logger.py"}]}
| 1,516 | 199 |
gh_patches_debug_22444
|
rasdani/github-patches
|
git_diff
|
ietf-tools__datatracker-7083
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Confusing draft submission error when docName is missing
### Describe the issue
On draft submit, when the `rfc` element's `docName` attribute is not set, datatracker produces the error: `Error parsing XML Internet-Draft: expected string or bytes-like object`.
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
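As a minimal sketch of the guard this report calls for (assuming an lxml-style root element whose attributes live in `.attrib`; the record's diff below takes the same approach), the missing attribute can be reported explicitly instead of being handed to `re.match` as `None`:

```python
def parse_docname(xmlroot):
    # Standalone version of the check; mirrors the patched helper below.
    docname = xmlroot.attrib.get("docName")
    if docname is None:
        raise ValueError("Missing docName attribute in the XML root element")
    return docname
```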
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/utils/xmldraft.py`
Content:
```
1 # Copyright The IETF Trust 2022, All Rights Reserved
2 # -*- coding: utf-8 -*-
3 import datetime
4 import io
5 import re
6 import xml2rfc
7
8 import debug # pyflakes: ignore
9
10 from contextlib import ExitStack
11 from lxml.etree import XMLSyntaxError
12 from xml2rfc.util.date import augment_date, extract_date
13 from ietf.utils.timezone import date_today
14
15 from .draft import Draft
16
17
18 class XMLDraft(Draft):
19 """Draft from XML source
20
21 Not all methods from the superclass are implemented yet.
22 """
23 def __init__(self, xml_file):
24 """Initialize XMLDraft instance
25
26 :parameter xml_file: path to file containing XML source
27 """
28 super().__init__()
29 # cast xml_file to str so, e.g., this will work with a Path
30 self.xmltree, self.xml_version = self.parse_xml(str(xml_file))
31 self.xmlroot = self.xmltree.getroot()
32 self.filename, self.revision = self._parse_docname()
33
34 @staticmethod
35 def parse_xml(filename):
36 """Parse XML draft
37
38 Converts to xml2rfc v3 schema, then returns the root of the v3 tree and the original
39 xml version.
40 """
41 orig_write_out = xml2rfc.log.write_out
42 orig_write_err = xml2rfc.log.write_err
43 parser_out = io.StringIO()
44 parser_err = io.StringIO()
45
46 with ExitStack() as stack:
47 @stack.callback
48 def cleanup(): # called when context exited, even if there's an exception
49 xml2rfc.log.write_out = orig_write_out
50 xml2rfc.log.write_err = orig_write_err
51
52 xml2rfc.log.write_out = parser_out
53 xml2rfc.log.write_err = parser_err
54
55 parser = xml2rfc.XmlRfcParser(filename, quiet=True)
56 try:
57 tree = parser.parse()
58 except XMLSyntaxError:
59 raise InvalidXMLError()
60 except Exception as e:
61 raise XMLParseError(parser_out.getvalue(), parser_err.getvalue()) from e
62
63 xml_version = tree.getroot().get('version', '2')
64 if xml_version == '2':
65 v2v3 = xml2rfc.V2v3XmlWriter(tree)
66 tree.tree = v2v3.convert2to3()
67 return tree, xml_version
68
69 def _document_name(self, ref):
70 """Get document name from reference."""
71 series = ["rfc", "bcp", "fyi", "std"]
72 # handle xinclude first
73 # FIXME: this assumes the xinclude is a bibxml href; if it isn't, there can
74 # still be false negatives. it would be better to expand the xinclude and parse
75 # its seriesInfo.
76 if ref.tag.endswith("}include"):
77 name = re.search(
78 rf"reference\.({'|'.join(series).upper()})\.(\d{{4}})\.xml",
79 ref.attrib["href"],
80 )
81 if name:
82 return f"{name.group(1)}{int(name.group(2))}".lower()
83 name = re.search(
84 r"reference\.I-D\.(?:draft-)?(.*)\.xml", ref.attrib["href"]
85 )
86 if name:
87 return f"draft-{name.group(1)}"
88 # can't extract the name, give up
89 return ""
90
91 # check the anchor next
92 anchor = ref.get("anchor").lower() # always give back lowercase
93 label = anchor.rstrip("0123456789") # remove trailing digits
94 maybe_number = anchor[len(label) :]
95 if label in series and maybe_number.isdigit():
96 number = int(maybe_number)
97 return f"{label}{number}"
98
99 # if we couldn't find a match so far, try the seriesInfo
100 series_query = " or ".join(f"@name='{x.upper()}'" for x in series)
101 for info in ref.xpath(
102 f"./seriesInfo[{series_query} or @name='Internet-Draft']"
103 ):
104 if not info.attrib["value"]:
105 continue
106 if info.attrib["name"] == "Internet-Draft":
107 return info.attrib["value"]
108 else:
109 return f'{info.attrib["name"].lower()}{info.attrib["value"]}'
110 return ""
111
112 def _reference_section_type(self, section_name):
113 """Determine reference type from name of references section"""
114 if section_name:
115 section_name = section_name.lower()
116 if 'normative' in section_name:
117 return self.REF_TYPE_NORMATIVE
118 elif 'informative' in section_name:
119 return self.REF_TYPE_INFORMATIVE
120 return self.REF_TYPE_UNKNOWN
121
122 def _reference_section_name(self, section_elt):
123 section_name = section_elt.findtext('name')
124 if section_name is None and 'title' in section_elt.keys():
125 section_name = section_elt.get('title') # fall back to title if we have it
126 return section_name
127
128 def _parse_docname(self):
129 docname = self.xmlroot.attrib.get('docName')
130 revmatch = re.match(
131 r'^(?P<filename>.+?)(?:-(?P<rev>[0-9][0-9]))?$',
132 docname,
133
134 )
135 if revmatch is None:
136 raise ValueError('Unable to parse docName')
137 # If a group had no match it is None
138 return revmatch.group('filename'), revmatch.group('rev')
139
140 def get_title(self):
141 return self.xmlroot.findtext('front/title').strip()
142
143 @staticmethod
144 def parse_creation_date(date_elt):
145 if date_elt is None:
146 return None
147 today = date_today()
148 # ths mimics handling of date elements in the xml2rfc text/html writers
149 year, month, day = extract_date(date_elt, today)
150 year, month, day = augment_date(year, month, day, today)
151 if not day:
152 # Must choose a day for a datetime.date. Per RFC 7991 sect 2.17, we use
153 # today's date if it is consistent with the rest of the date. Otherwise,
154 # arbitrariy (and consistent with the text parser) assume the 15th.
155 if year == today.year and month == today.month:
156 day = today.day
157 else:
158 day = 15
159 return datetime.date(year, month, day)
160
161 def get_creation_date(self):
162 return self.parse_creation_date(self.xmlroot.find("front/date"))
163
164 # todo fix the implementation of XMLDraft.get_abstract()
165 #
166 # This code was pulled from ietf.submit.forms where it existed for some time.
167 # It does not work, at least with modern xml2rfc. This assumes that the abstract
168 # is simply text in the front/abstract node, but the XML schema wraps the actual
169 # abstract text in <t> elements (and allows <dl>, <ol>, and <ul> as well). As a
170 # result, this method normally returns an empty string, which is later replaced by
171 # the abstract parsed from the rendered text. For now, I a commenting this out
172 # and making it explicit that the abstract always comes from the text format.
173 #
174 # def get_abstract(self):
175 # """Extract the abstract"""
176 # abstract = self.xmlroot.findtext('front/abstract')
177 # return abstract.strip() if abstract else ''
178
179 def get_author_list(self):
180 """Get detailed author list
181
182 Returns a list of dicts with the following keys:
183 name, first_name, middle_initial, last_name,
184 name_suffix, email, country, affiliation
185 Values will be None if not available
186 """
187 result = []
188 empty_author = {
189 k: None for k in [
190 'name', 'first_name', 'middle_initial', 'last_name',
191 'name_suffix', 'email', 'country', 'affiliation',
192 ]
193 }
194
195 for author in self.xmlroot.findall('front/author'):
196 info = {
197 'name': author.attrib.get('fullname'),
198 'email': author.findtext('address/email'),
199 'affiliation': author.findtext('organization'),
200 }
201 elem = author.find('address/postal/country')
202 if elem is not None:
203 ascii_country = elem.get('ascii', None)
204 info['country'] = ascii_country if ascii_country else elem.text
205 for item in info:
206 if info[item]:
207 info[item] = info[item].strip()
208 result.append(empty_author | info) # merge, preferring info
209 return result
210
211 def get_refs(self):
212 """Extract references from the draft"""
213 refs = {}
214 # accept nested <references> sections
215 for section in self.xmlroot.findall("back//references"):
216 ref_type = self._reference_section_type(
217 self._reference_section_name(section)
218 )
219 for ref in (
220 section.findall("./reference")
221 + section.findall("./referencegroup")
222 + section.findall(
223 "./xi:include", {"xi": "http://www.w3.org/2001/XInclude"}
224 )
225 ):
226 name = self._document_name(ref)
227 if name:
228 refs[name] = ref_type
229 return refs
230
231
232 class XMLParseError(Exception):
233 """An error occurred while parsing"""
234 def __init__(self, out: str, err: str, *args):
235 super().__init__(*args)
236 self._out = out
237 self._err = err
238
239 def parser_msgs(self):
240 return self._out.splitlines() + self._err.splitlines()
241
242
243 class InvalidXMLError(Exception):
244 """File is not valid XML"""
245 pass
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ietf/utils/xmldraft.py b/ietf/utils/xmldraft.py
--- a/ietf/utils/xmldraft.py
+++ b/ietf/utils/xmldraft.py
@@ -29,7 +29,7 @@
# cast xml_file to str so, e.g., this will work with a Path
self.xmltree, self.xml_version = self.parse_xml(str(xml_file))
self.xmlroot = self.xmltree.getroot()
- self.filename, self.revision = self._parse_docname()
+ self.filename, self.revision = self.parse_docname(self.xmlroot)
@staticmethod
def parse_xml(filename):
@@ -125,8 +125,11 @@
section_name = section_elt.get('title') # fall back to title if we have it
return section_name
- def _parse_docname(self):
- docname = self.xmlroot.attrib.get('docName')
+ @staticmethod
+ def parse_docname(xmlroot):
+ docname = xmlroot.attrib.get('docName')
+ if docname is None:
+ raise ValueError("Missing docName attribute in the XML root element")
revmatch = re.match(
r'^(?P<filename>.+?)(?:-(?P<rev>[0-9][0-9]))?$',
docname,
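A short usage sketch for the reworked helper (the caller code is hypothetical; only the `ValueError` message comes from the diff above):

```python
try:
    filename, revision = XMLDraft.parse_docname(xmlroot)
except ValueError as err:
    # err: "Missing docName attribute in the XML root element"
    notify_submitter(str(err))  # hypothetical reporting hook, not part of the patch
```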
|
{"golden_diff": "diff --git a/ietf/utils/xmldraft.py b/ietf/utils/xmldraft.py\n--- a/ietf/utils/xmldraft.py\n+++ b/ietf/utils/xmldraft.py\n@@ -29,7 +29,7 @@\n # cast xml_file to str so, e.g., this will work with a Path\n self.xmltree, self.xml_version = self.parse_xml(str(xml_file))\n self.xmlroot = self.xmltree.getroot()\n- self.filename, self.revision = self._parse_docname()\n+ self.filename, self.revision = self.parse_docname(self.xmlroot)\n \n @staticmethod\n def parse_xml(filename):\n@@ -125,8 +125,11 @@\n section_name = section_elt.get('title') # fall back to title if we have it\n return section_name\n \n- def _parse_docname(self):\n- docname = self.xmlroot.attrib.get('docName')\n+ @staticmethod\n+ def parse_docname(xmlroot):\n+ docname = xmlroot.attrib.get('docName')\n+ if docname is None:\n+ raise ValueError(\"Missing docName attribute in the XML root element\")\n revmatch = re.match(\n r'^(?P<filename>.+?)(?:-(?P<rev>[0-9][0-9]))?$',\n docname,\n", "issue": "Confusing draft submission error when docName is missing\n### Describe the issue\r\n\r\nOn draft submit, when `rfc` element's `docName` attribute is not set datatracker produces an error: `Error parsing XML Internet-Draft: expected string or bytes-like object`.\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2022, All Rights Reserved\n# -*- coding: utf-8 -*-\nimport datetime\nimport io\nimport re\nimport xml2rfc\n\nimport debug # pyflakes: ignore\n\nfrom contextlib import ExitStack\nfrom lxml.etree import XMLSyntaxError\nfrom xml2rfc.util.date import augment_date, extract_date\nfrom ietf.utils.timezone import date_today\n\nfrom .draft import Draft\n\n\nclass XMLDraft(Draft):\n \"\"\"Draft from XML source\n\n Not all methods from the superclass are implemented yet.\n \"\"\"\n def __init__(self, xml_file):\n \"\"\"Initialize XMLDraft instance\n\n :parameter xml_file: path to file containing XML source\n \"\"\"\n super().__init__()\n # cast xml_file to str so, e.g., this will work with a Path\n self.xmltree, self.xml_version = self.parse_xml(str(xml_file))\n self.xmlroot = self.xmltree.getroot()\n self.filename, self.revision = self._parse_docname()\n\n @staticmethod\n def parse_xml(filename):\n \"\"\"Parse XML draft\n\n Converts to xml2rfc v3 schema, then returns the root of the v3 tree and the original\n xml version.\n \"\"\"\n orig_write_out = xml2rfc.log.write_out\n orig_write_err = xml2rfc.log.write_err\n parser_out = io.StringIO()\n parser_err = io.StringIO()\n\n with ExitStack() as stack:\n @stack.callback\n def cleanup(): # called when context exited, even if there's an exception\n xml2rfc.log.write_out = orig_write_out\n xml2rfc.log.write_err = orig_write_err\n\n xml2rfc.log.write_out = parser_out\n xml2rfc.log.write_err = parser_err\n\n parser = xml2rfc.XmlRfcParser(filename, quiet=True)\n try:\n tree = parser.parse()\n except XMLSyntaxError:\n raise InvalidXMLError()\n except Exception as e:\n raise XMLParseError(parser_out.getvalue(), parser_err.getvalue()) from e\n\n xml_version = tree.getroot().get('version', '2')\n if xml_version == '2':\n v2v3 = xml2rfc.V2v3XmlWriter(tree)\n tree.tree = v2v3.convert2to3()\n return tree, xml_version\n\n def _document_name(self, ref):\n \"\"\"Get document name from reference.\"\"\"\n series = [\"rfc\", \"bcp\", \"fyi\", \"std\"]\n # handle xinclude first\n # FIXME: this assumes the xinclude is a bibxml href; if it 
isn't, there can\n # still be false negatives. it would be better to expand the xinclude and parse\n # its seriesInfo.\n if ref.tag.endswith(\"}include\"):\n name = re.search(\n rf\"reference\\.({'|'.join(series).upper()})\\.(\\d{{4}})\\.xml\",\n ref.attrib[\"href\"],\n )\n if name:\n return f\"{name.group(1)}{int(name.group(2))}\".lower()\n name = re.search(\n r\"reference\\.I-D\\.(?:draft-)?(.*)\\.xml\", ref.attrib[\"href\"]\n )\n if name:\n return f\"draft-{name.group(1)}\"\n # can't extract the name, give up\n return \"\"\n\n # check the anchor next\n anchor = ref.get(\"anchor\").lower() # always give back lowercase\n label = anchor.rstrip(\"0123456789\") # remove trailing digits\n maybe_number = anchor[len(label) :]\n if label in series and maybe_number.isdigit():\n number = int(maybe_number)\n return f\"{label}{number}\"\n\n # if we couldn't find a match so far, try the seriesInfo\n series_query = \" or \".join(f\"@name='{x.upper()}'\" for x in series)\n for info in ref.xpath(\n f\"./seriesInfo[{series_query} or @name='Internet-Draft']\"\n ):\n if not info.attrib[\"value\"]:\n continue\n if info.attrib[\"name\"] == \"Internet-Draft\":\n return info.attrib[\"value\"]\n else:\n return f'{info.attrib[\"name\"].lower()}{info.attrib[\"value\"]}'\n return \"\"\n\n def _reference_section_type(self, section_name):\n \"\"\"Determine reference type from name of references section\"\"\"\n if section_name:\n section_name = section_name.lower()\n if 'normative' in section_name:\n return self.REF_TYPE_NORMATIVE\n elif 'informative' in section_name:\n return self.REF_TYPE_INFORMATIVE\n return self.REF_TYPE_UNKNOWN\n\n def _reference_section_name(self, section_elt):\n section_name = section_elt.findtext('name')\n if section_name is None and 'title' in section_elt.keys():\n section_name = section_elt.get('title') # fall back to title if we have it\n return section_name\n\n def _parse_docname(self):\n docname = self.xmlroot.attrib.get('docName')\n revmatch = re.match(\n r'^(?P<filename>.+?)(?:-(?P<rev>[0-9][0-9]))?$',\n docname,\n\n )\n if revmatch is None:\n raise ValueError('Unable to parse docName')\n # If a group had no match it is None\n return revmatch.group('filename'), revmatch.group('rev')\n\n def get_title(self):\n return self.xmlroot.findtext('front/title').strip()\n\n @staticmethod\n def parse_creation_date(date_elt):\n if date_elt is None:\n return None\n today = date_today()\n # ths mimics handling of date elements in the xml2rfc text/html writers\n year, month, day = extract_date(date_elt, today)\n year, month, day = augment_date(year, month, day, today)\n if not day:\n # Must choose a day for a datetime.date. Per RFC 7991 sect 2.17, we use\n # today's date if it is consistent with the rest of the date. Otherwise,\n # arbitrariy (and consistent with the text parser) assume the 15th.\n if year == today.year and month == today.month:\n day = today.day\n else:\n day = 15\n return datetime.date(year, month, day)\n\n def get_creation_date(self):\n return self.parse_creation_date(self.xmlroot.find(\"front/date\"))\n\n # todo fix the implementation of XMLDraft.get_abstract()\n #\n # This code was pulled from ietf.submit.forms where it existed for some time.\n # It does not work, at least with modern xml2rfc. This assumes that the abstract\n # is simply text in the front/abstract node, but the XML schema wraps the actual\n # abstract text in <t> elements (and allows <dl>, <ol>, and <ul> as well). 
As a\n # result, this method normally returns an empty string, which is later replaced by\n # the abstract parsed from the rendered text. For now, I a commenting this out\n # and making it explicit that the abstract always comes from the text format.\n #\n # def get_abstract(self):\n # \"\"\"Extract the abstract\"\"\"\n # abstract = self.xmlroot.findtext('front/abstract')\n # return abstract.strip() if abstract else ''\n\n def get_author_list(self):\n \"\"\"Get detailed author list\n\n Returns a list of dicts with the following keys:\n name, first_name, middle_initial, last_name,\n name_suffix, email, country, affiliation\n Values will be None if not available\n \"\"\"\n result = []\n empty_author = {\n k: None for k in [\n 'name', 'first_name', 'middle_initial', 'last_name',\n 'name_suffix', 'email', 'country', 'affiliation',\n ]\n }\n\n for author in self.xmlroot.findall('front/author'):\n info = {\n 'name': author.attrib.get('fullname'),\n 'email': author.findtext('address/email'),\n 'affiliation': author.findtext('organization'),\n }\n elem = author.find('address/postal/country')\n if elem is not None:\n ascii_country = elem.get('ascii', None)\n info['country'] = ascii_country if ascii_country else elem.text\n for item in info:\n if info[item]:\n info[item] = info[item].strip()\n result.append(empty_author | info) # merge, preferring info\n return result\n\n def get_refs(self):\n \"\"\"Extract references from the draft\"\"\"\n refs = {}\n # accept nested <references> sections\n for section in self.xmlroot.findall(\"back//references\"):\n ref_type = self._reference_section_type(\n self._reference_section_name(section)\n )\n for ref in (\n section.findall(\"./reference\")\n + section.findall(\"./referencegroup\")\n + section.findall(\n \"./xi:include\", {\"xi\": \"http://www.w3.org/2001/XInclude\"}\n )\n ):\n name = self._document_name(ref)\n if name:\n refs[name] = ref_type\n return refs\n\n\nclass XMLParseError(Exception):\n \"\"\"An error occurred while parsing\"\"\"\n def __init__(self, out: str, err: str, *args):\n super().__init__(*args)\n self._out = out\n self._err = err\n\n def parser_msgs(self):\n return self._out.splitlines() + self._err.splitlines()\n\n\nclass InvalidXMLError(Exception):\n \"\"\"File is not valid XML\"\"\"\n pass\n", "path": "ietf/utils/xmldraft.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2022, All Rights Reserved\n# -*- coding: utf-8 -*-\nimport datetime\nimport io\nimport re\nimport xml2rfc\n\nimport debug # pyflakes: ignore\n\nfrom contextlib import ExitStack\nfrom lxml.etree import XMLSyntaxError\nfrom xml2rfc.util.date import augment_date, extract_date\nfrom ietf.utils.timezone import date_today\n\nfrom .draft import Draft\n\n\nclass XMLDraft(Draft):\n \"\"\"Draft from XML source\n\n Not all methods from the superclass are implemented yet.\n \"\"\"\n def __init__(self, xml_file):\n \"\"\"Initialize XMLDraft instance\n\n :parameter xml_file: path to file containing XML source\n \"\"\"\n super().__init__()\n # cast xml_file to str so, e.g., this will work with a Path\n self.xmltree, self.xml_version = self.parse_xml(str(xml_file))\n self.xmlroot = self.xmltree.getroot()\n self.filename, self.revision = self.parse_docname(self.xmlroot)\n\n @staticmethod\n def parse_xml(filename):\n \"\"\"Parse XML draft\n\n Converts to xml2rfc v3 schema, then returns the root of the v3 tree and the original\n xml version.\n \"\"\"\n orig_write_out = xml2rfc.log.write_out\n orig_write_err = xml2rfc.log.write_err\n parser_out = io.StringIO()\n parser_err = 
io.StringIO()\n\n with ExitStack() as stack:\n @stack.callback\n def cleanup(): # called when context exited, even if there's an exception\n xml2rfc.log.write_out = orig_write_out\n xml2rfc.log.write_err = orig_write_err\n\n xml2rfc.log.write_out = parser_out\n xml2rfc.log.write_err = parser_err\n\n parser = xml2rfc.XmlRfcParser(filename, quiet=True)\n try:\n tree = parser.parse()\n except XMLSyntaxError:\n raise InvalidXMLError()\n except Exception as e:\n raise XMLParseError(parser_out.getvalue(), parser_err.getvalue()) from e\n\n xml_version = tree.getroot().get('version', '2')\n if xml_version == '2':\n v2v3 = xml2rfc.V2v3XmlWriter(tree)\n tree.tree = v2v3.convert2to3()\n return tree, xml_version\n\n def _document_name(self, ref):\n \"\"\"Get document name from reference.\"\"\"\n series = [\"rfc\", \"bcp\", \"fyi\", \"std\"]\n # handle xinclude first\n # FIXME: this assumes the xinclude is a bibxml href; if it isn't, there can\n # still be false negatives. it would be better to expand the xinclude and parse\n # its seriesInfo.\n if ref.tag.endswith(\"}include\"):\n name = re.search(\n rf\"reference\\.({'|'.join(series).upper()})\\.(\\d{{4}})\\.xml\",\n ref.attrib[\"href\"],\n )\n if name:\n return f\"{name.group(1)}{int(name.group(2))}\".lower()\n name = re.search(\n r\"reference\\.I-D\\.(?:draft-)?(.*)\\.xml\", ref.attrib[\"href\"]\n )\n if name:\n return f\"draft-{name.group(1)}\"\n # can't extract the name, give up\n return \"\"\n\n # check the anchor next\n anchor = ref.get(\"anchor\").lower() # always give back lowercase\n label = anchor.rstrip(\"0123456789\") # remove trailing digits\n maybe_number = anchor[len(label) :]\n if label in series and maybe_number.isdigit():\n number = int(maybe_number)\n return f\"{label}{number}\"\n\n # if we couldn't find a match so far, try the seriesInfo\n series_query = \" or \".join(f\"@name='{x.upper()}'\" for x in series)\n for info in ref.xpath(\n f\"./seriesInfo[{series_query} or @name='Internet-Draft']\"\n ):\n if not info.attrib[\"value\"]:\n continue\n if info.attrib[\"name\"] == \"Internet-Draft\":\n return info.attrib[\"value\"]\n else:\n return f'{info.attrib[\"name\"].lower()}{info.attrib[\"value\"]}'\n return \"\"\n\n def _reference_section_type(self, section_name):\n \"\"\"Determine reference type from name of references section\"\"\"\n if section_name:\n section_name = section_name.lower()\n if 'normative' in section_name:\n return self.REF_TYPE_NORMATIVE\n elif 'informative' in section_name:\n return self.REF_TYPE_INFORMATIVE\n return self.REF_TYPE_UNKNOWN\n\n def _reference_section_name(self, section_elt):\n section_name = section_elt.findtext('name')\n if section_name is None and 'title' in section_elt.keys():\n section_name = section_elt.get('title') # fall back to title if we have it\n return section_name\n\n @staticmethod\n def parse_docname(xmlroot):\n docname = xmlroot.attrib.get('docName')\n if docname is None:\n raise ValueError(\"Missing docName attribute in the XML root element\")\n revmatch = re.match(\n r'^(?P<filename>.+?)(?:-(?P<rev>[0-9][0-9]))?$',\n docname,\n\n )\n if revmatch is None:\n raise ValueError('Unable to parse docName')\n # If a group had no match it is None\n return revmatch.group('filename'), revmatch.group('rev')\n\n def get_title(self):\n return self.xmlroot.findtext('front/title').strip()\n\n @staticmethod\n def parse_creation_date(date_elt):\n if date_elt is None:\n return None\n today = date_today()\n # ths mimics handling of date elements in the xml2rfc text/html writers\n year, month, day = 
extract_date(date_elt, today)\n year, month, day = augment_date(year, month, day, today)\n if not day:\n # Must choose a day for a datetime.date. Per RFC 7991 sect 2.17, we use\n # today's date if it is consistent with the rest of the date. Otherwise,\n # arbitrariy (and consistent with the text parser) assume the 15th.\n if year == today.year and month == today.month:\n day = today.day\n else:\n day = 15\n return datetime.date(year, month, day)\n\n def get_creation_date(self):\n return self.parse_creation_date(self.xmlroot.find(\"front/date\"))\n\n # todo fix the implementation of XMLDraft.get_abstract()\n #\n # This code was pulled from ietf.submit.forms where it existed for some time.\n # It does not work, at least with modern xml2rfc. This assumes that the abstract\n # is simply text in the front/abstract node, but the XML schema wraps the actual\n # abstract text in <t> elements (and allows <dl>, <ol>, and <ul> as well). As a\n # result, this method normally returns an empty string, which is later replaced by\n # the abstract parsed from the rendered text. For now, I a commenting this out\n # and making it explicit that the abstract always comes from the text format.\n #\n # def get_abstract(self):\n # \"\"\"Extract the abstract\"\"\"\n # abstract = self.xmlroot.findtext('front/abstract')\n # return abstract.strip() if abstract else ''\n\n def get_author_list(self):\n \"\"\"Get detailed author list\n\n Returns a list of dicts with the following keys:\n name, first_name, middle_initial, last_name,\n name_suffix, email, country, affiliation\n Values will be None if not available\n \"\"\"\n result = []\n empty_author = {\n k: None for k in [\n 'name', 'first_name', 'middle_initial', 'last_name',\n 'name_suffix', 'email', 'country', 'affiliation',\n ]\n }\n\n for author in self.xmlroot.findall('front/author'):\n info = {\n 'name': author.attrib.get('fullname'),\n 'email': author.findtext('address/email'),\n 'affiliation': author.findtext('organization'),\n }\n elem = author.find('address/postal/country')\n if elem is not None:\n ascii_country = elem.get('ascii', None)\n info['country'] = ascii_country if ascii_country else elem.text\n for item in info:\n if info[item]:\n info[item] = info[item].strip()\n result.append(empty_author | info) # merge, preferring info\n return result\n\n def get_refs(self):\n \"\"\"Extract references from the draft\"\"\"\n refs = {}\n # accept nested <references> sections\n for section in self.xmlroot.findall(\"back//references\"):\n ref_type = self._reference_section_type(\n self._reference_section_name(section)\n )\n for ref in (\n section.findall(\"./reference\")\n + section.findall(\"./referencegroup\")\n + section.findall(\n \"./xi:include\", {\"xi\": \"http://www.w3.org/2001/XInclude\"}\n )\n ):\n name = self._document_name(ref)\n if name:\n refs[name] = ref_type\n return refs\n\n\nclass XMLParseError(Exception):\n \"\"\"An error occurred while parsing\"\"\"\n def __init__(self, out: str, err: str, *args):\n super().__init__(*args)\n self._out = out\n self._err = err\n\n def parser_msgs(self):\n return self._out.splitlines() + self._err.splitlines()\n\n\nclass InvalidXMLError(Exception):\n \"\"\"File is not valid XML\"\"\"\n pass\n", "path": "ietf/utils/xmldraft.py"}]}
| 3,109 | 299 |
gh_patches_debug_24310
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-5002
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Non-integer timeout for signal.receive causes malformed redis query
<!--
General questions should be asked on the mailing list [email protected].
Questions about how to use Ray should be asked on
[StackOverflow](https://stackoverflow.com/questions/tagged/ray).
Before submitting an issue, please fill out the following form.
-->
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04
- **Ray installed from (source or binary)**: source
- **Ray version**: 0.7.0.dev2
- **Python version**: 3.7.2
### Describe the problem
The timeout argument in `signal.receive()` does not work with non-integer values. For instance, calling `signal.receive(sources, timeout=0.01)` causes a `redis.exceptions.ResponseError: timeout is not an integer or out of range`.
This is because of L131 in `signal.py`, where the timeout is converted to ms and then converted to a string. Redis expects an int, but if you pass a double to the method (like 0.1), the result of `str(1000 * timeout)` is `100.0`; the correct string would have been `100`. A fix would be to change `str(1000 * timeout)` to `str(int(1000 * timeout))`, with a check that the timeout is at least 0.001 s so the integer millisecond value does not become 0.
https://github.com/ray-project/ray/blob/d951eb740ffe22c385d75df62aa18da790706804/python/ray/experimental/signal.py#L129-L133
### Source code / logs
Traceback:
```
Traceback (most recent call last):
File "driver.py", line 21, in <module>
signals = signal.receive(self.clients, timeout=0.01)
File "/home/romilb/ray/python/ray/experimental/signal.py", line 141, in receive
answers = ray.worker.global_worker.redis_client.execute_command(query)
File "/home/romilb/anaconda3/lib/python3.7/site-packages/redis/client.py", line 668, in execute_command
return self.parse_response(connection, command_name, **options)
File "/home/romilb/anaconda3/lib/python3.7/site-packages/redis/client.py", line 680, in parse_response
response = connection.read_response()
File "/home/romilb/anaconda3/lib/python3.7/site-packages/redis/connection.py", line 629, in read_response
raise response
redis.exceptions.ResponseError: timeout is not an integer or out of range
```
I also printed the corresponding redis query:
```
(pid=31330) XREAD BLOCK 10.0 STREAMS fc96f993802bb5a31b5868e69179fe76b62b7c3b 0
```
--- END ISSUE ---
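The formatting point above is easy to check in isolation (plain Python, no Redis connection needed); the millisecond value only becomes a valid `XREAD BLOCK` argument once it is cast to an integer:

```python
timeout = 0.01                    # seconds, as passed to signal.receive()
print(str(1000 * timeout))        # '10.0'  -> rejected: "timeout is not an integer"
print(str(int(1000 * timeout)))   # '10'    -> accepted by XREAD BLOCK
```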
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/experimental/signal.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 from collections import defaultdict
6
7 import ray
8 import ray.cloudpickle as cloudpickle
9
10 # This string should be identical to the name of the signal sent upon
11 # detecting that an actor died.
12 # This constant is also used in NodeManager::PublishActorStateTransition()
13 # in node_manager.cc
14 ACTOR_DIED_STR = "ACTOR_DIED_SIGNAL"
15
16
17 class Signal(object):
18 """Base class for Ray signals."""
19 pass
20
21
22 class ErrorSignal(Signal):
23 """Signal raised if an exception happens in a task or actor method."""
24
25 def __init__(self, error):
26 self.error = error
27
28
29 class ActorDiedSignal(Signal):
30 """Signal raised if an exception happens in a task or actor method."""
31
32 def __init__(self):
33 pass
34
35
36 def _get_task_id(source):
37 """Return the task id associated to the generic source of the signal.
38
39 Args:
40 source: source of the signal, it can be either an object id returned
41 by a task, a task id, or an actor handle.
42
43 Returns:
44 - If source is an object id, return id of task which creted object.
45 - If source is an actor handle, return id of actor's task creator.
46 - If source is a task id, return same task id.
47 """
48 if type(source) is ray.actor.ActorHandle:
49 return source._ray_actor_id
50 else:
51 if type(source) is ray.TaskID:
52 return source
53 else:
54 return ray._raylet.compute_task_id(source)
55
56
57 def send(signal):
58 """Send signal.
59
60 The signal has a unique identifier that is computed from (1) the id
61 of the actor or task sending this signal (i.e., the actor or task calling
62 this function), and (2) an index that is incremented every time this
63 source sends a signal. This index starts from 1.
64
65 Args:
66 signal: Signal to be sent.
67 """
68 if hasattr(ray.worker.global_worker, "actor_creation_task_id"):
69 source_key = ray.worker.global_worker.actor_id.hex()
70 else:
71 # No actors; this function must have been called from a task
72 source_key = ray.worker.global_worker.current_task_id.hex()
73
74 encoded_signal = ray.utils.binary_to_hex(cloudpickle.dumps(signal))
75 ray.worker.global_worker.redis_client.execute_command(
76 "XADD " + source_key + " * signal " + encoded_signal)
77
78
79 def receive(sources, timeout=None):
80 """Get all outstanding signals from sources.
81
82 A source can be either (1) an object ID returned by the task (we want
83 to receive signals from), or (2) an actor handle.
84
85 When invoked by the same entity E (where E can be an actor, task or
86 driver), for each source S in sources, this function returns all signals
87 generated by S since the last receive() was invoked by E on S. If this is
88 the first call on S, this function returns all past signals generated by S
89 so far. Note that different actors, tasks or drivers that call receive()
90 on the same source S will get independent copies of the signals generated
91 by S.
92
93 Args:
94 sources: List of sources from which the caller waits for signals.
95 A source is either an object ID returned by a task (in this case
96 the object ID is used to identify that task), or an actor handle.
97 If the user passes the IDs of multiple objects returned by the
98 same task, this function returns a copy of the signals generated
99 by that task for each object ID.
100 timeout: Maximum time (in seconds) this function waits to get a signal
101 from a source in sources. If None, the timeout is infinite.
102
103 Returns:
104 A list of pairs (S, sig), where S is a source in the sources argument,
105 and sig is a signal generated by S since the last time receive()
106 was called on S. Thus, for each S in sources, the return list can
107 contain zero or multiple entries.
108 """
109
110 # If None, initialize the timeout to a huge value (i.e., over 30,000 years
111 # in this case) to "approximate" infinity.
112 if timeout is None:
113 timeout = 10**12
114
115 if timeout < 0:
116 raise ValueError("The 'timeout' argument cannot be less than 0.")
117
118 if not hasattr(ray.worker.global_worker, "signal_counters"):
119 ray.worker.global_worker.signal_counters = defaultdict(lambda: b"0")
120
121 signal_counters = ray.worker.global_worker.signal_counters
122
123 # Map the ID of each source task to the source itself.
124 task_id_to_sources = defaultdict(lambda: [])
125 for s in sources:
126 task_id_to_sources[_get_task_id(s).hex()].append(s)
127
128 # Construct the redis query.
129 query = "XREAD BLOCK "
130 # Multiply by 1000x since timeout is in sec and redis expects ms.
131 query += str(1000 * timeout)
132 query += " STREAMS "
133 query += " ".join([task_id for task_id in task_id_to_sources])
134 query += " "
135 query += " ".join([
136 ray.utils.decode(signal_counters[ray.utils.hex_to_binary(task_id)])
137 for task_id in task_id_to_sources
138 ])
139
140 answers = ray.worker.global_worker.redis_client.execute_command(query)
141 if not answers:
142 return []
143
144 results = []
145 # Decoding is a little bit involved. Iterate through all the answers:
146 for i, answer in enumerate(answers):
147 # Make sure the answer corresponds to a source, s, in sources.
148 task_id = ray.utils.decode(answer[0])
149 task_source_list = task_id_to_sources[task_id]
150 # The list of results for source s is stored in answer[1]
151 for r in answer[1]:
152 for s in task_source_list:
153 if r[1][1].decode("ascii") == ACTOR_DIED_STR:
154 results.append((s, ActorDiedSignal()))
155 else:
156 # Now it gets tricky: r[0] is the redis internal sequence
157 # id
158 signal_counters[ray.utils.hex_to_binary(task_id)] = r[0]
159 # r[1] contains a list with elements (key, value), in our
160 # case we only have one key "signal" and the value is the
161 # signal.
162 signal = cloudpickle.loads(
163 ray.utils.hex_to_binary(r[1][1]))
164 results.append((s, signal))
165
166 return results
167
168
169 def forget(sources):
170 """Ignore all previous signals associated with each source S in sources.
171
172 The index of the next expected signal from S is set to the index of
173 the last signal that S sent plus 1. This means that the next receive()
174 on S will only get the signals generated after this function was invoked.
175
176 Args:
177 sources: list of sources whose past signals are forgotten.
178 """
179 # Just read all signals sent by all sources so far.
180 # This will results in ignoring these signals.
181 receive(sources, timeout=0)
182
183
184 def reset():
185 """
186 Reset the worker state associated with any signals that this worker
187 has received so far.
188
189 If the worker calls receive() on a source next, it will get all the
190 signals generated by that source starting with index = 1.
191 """
192 if hasattr(ray.worker.global_worker, "signal_counters"):
193 ray.worker.global_worker.signal_counters = defaultdict(lambda: b"0")
194
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/ray/experimental/signal.py b/python/ray/experimental/signal.py
--- a/python/ray/experimental/signal.py
+++ b/python/ray/experimental/signal.py
@@ -2,6 +2,8 @@
from __future__ import division
from __future__ import print_function
+import logging
+
from collections import defaultdict
import ray
@@ -13,6 +15,8 @@
# in node_manager.cc
ACTOR_DIED_STR = "ACTOR_DIED_SIGNAL"
+logger = logging.getLogger(__name__)
+
class Signal(object):
"""Base class for Ray signals."""
@@ -125,10 +129,16 @@
for s in sources:
task_id_to_sources[_get_task_id(s).hex()].append(s)
+ if timeout < 1e-3:
+ logger.warning("Timeout too small. Using 1ms minimum")
+ timeout = 1e-3
+
+ timeout_ms = int(1000 * timeout)
+
# Construct the redis query.
query = "XREAD BLOCK "
- # Multiply by 1000x since timeout is in sec and redis expects ms.
- query += str(1000 * timeout)
+ # redis expects ms.
+ query += str(timeout_ms)
query += " STREAMS "
query += " ".join([task_id for task_id in task_id_to_sources])
query += " "
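A small check of the clamping behaviour the patch introduces (values below one millisecond are bumped to the 1 ms minimum before the integer conversion):

```python
for timeout in (0.0001, 0.01, 2.5):
    clamped = max(timeout, 1e-3)   # sub-millisecond values take the warning path
    print(int(1000 * clamped))     # 1, 10, 2500 -> always an integer for XREAD BLOCK
```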
|
{"golden_diff": "diff --git a/python/ray/experimental/signal.py b/python/ray/experimental/signal.py\n--- a/python/ray/experimental/signal.py\n+++ b/python/ray/experimental/signal.py\n@@ -2,6 +2,8 @@\n from __future__ import division\n from __future__ import print_function\n \n+import logging\n+\n from collections import defaultdict\n \n import ray\n@@ -13,6 +15,8 @@\n # in node_manager.cc\n ACTOR_DIED_STR = \"ACTOR_DIED_SIGNAL\"\n \n+logger = logging.getLogger(__name__)\n+\n \n class Signal(object):\n \"\"\"Base class for Ray signals.\"\"\"\n@@ -125,10 +129,16 @@\n for s in sources:\n task_id_to_sources[_get_task_id(s).hex()].append(s)\n \n+ if timeout < 1e-3:\n+ logger.warning(\"Timeout too small. Using 1ms minimum\")\n+ timeout = 1e-3\n+\n+ timeout_ms = int(1000 * timeout)\n+\n # Construct the redis query.\n query = \"XREAD BLOCK \"\n- # Multiply by 1000x since timeout is in sec and redis expects ms.\n- query += str(1000 * timeout)\n+ # redis expects ms.\n+ query += str(timeout_ms)\n query += \" STREAMS \"\n query += \" \".join([task_id for task_id in task_id_to_sources])\n query += \" \"\n", "issue": "Non-integer timeout for signal.receive causes malformed redis query\n<!--\r\nGeneral questions should be asked on the mailing list [email protected].\r\nQuestions about how to use Ray should be asked on\r\n[StackOverflow](https://stackoverflow.com/questions/tagged/ray).\r\n\r\nBefore submitting an issue, please fill out the following form.\r\n-->\r\n\r\n### System information\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04\r\n- **Ray installed from (source or binary)**: source\r\n- **Ray version**: 0.7.0.dev2\r\n- **Python version**: 3.7.2\r\n\r\n### Describe the problem\r\nThe timeout argument in `signal.receive()` does not work with non-integer values. For instance, calling `signal.receive(sources, timeout=0.01)` causes a `redis.exceptions.ResponseError: timeout is not an integer or out of range`.\r\n\r\nThis is because of L131 in `signal.py`, where the timeout is converted to ms and then converted to a string - redis expects an int, but if you pass a double to the method (like 0.1), the result of `str(1000 * timeout)` is `100.0`. The correct string would have been `100`. 
A fix would be to change `str(1000 * timeout)` to `str(int(1000 * timeout))` and have checks to ensure timeout is not < 1000.\r\n\r\nhttps://github.com/ray-project/ray/blob/d951eb740ffe22c385d75df62aa18da790706804/python/ray/experimental/signal.py#L129-L133\r\n\r\n### Source code / logs\r\nTraceback: \r\n```\r\nTraceback (most recent call last):\r\n File \"driver.py\", line 21, in <module>\r\n signals = signal.receive(self.clients, timeout=0.01)\r\n File \"/home/romilb/ray/python/ray/experimental/signal.py\", line 141, in receive\r\n answers = ray.worker.global_worker.redis_client.execute_command(query)\r\n File \"/home/romilb/anaconda3/lib/python3.7/site-packages/redis/client.py\", line 668, in execute_command\r\n return self.parse_response(connection, command_name, **options)\r\n File \"/home/romilb/anaconda3/lib/python3.7/site-packages/redis/client.py\", line 680, in parse_response\r\n response = connection.read_response()\r\n File \"/home/romilb/anaconda3/lib/python3.7/site-packages/redis/connection.py\", line 629, in read_response\r\n raise response\r\nredis.exceptions.ResponseError: timeout is not an integer or out of range\r\n```\r\nI also printed the corresponding redis query:\r\n\r\n```\r\n(pid=31330) XREAD BLOCK 10.0 STREAMS fc96f993802bb5a31b5868e69179fe76b62b7c3b 0\r\n```\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom collections import defaultdict\n\nimport ray\nimport ray.cloudpickle as cloudpickle\n\n# This string should be identical to the name of the signal sent upon\n# detecting that an actor died.\n# This constant is also used in NodeManager::PublishActorStateTransition()\n# in node_manager.cc\nACTOR_DIED_STR = \"ACTOR_DIED_SIGNAL\"\n\n\nclass Signal(object):\n \"\"\"Base class for Ray signals.\"\"\"\n pass\n\n\nclass ErrorSignal(Signal):\n \"\"\"Signal raised if an exception happens in a task or actor method.\"\"\"\n\n def __init__(self, error):\n self.error = error\n\n\nclass ActorDiedSignal(Signal):\n \"\"\"Signal raised if an exception happens in a task or actor method.\"\"\"\n\n def __init__(self):\n pass\n\n\ndef _get_task_id(source):\n \"\"\"Return the task id associated to the generic source of the signal.\n\n Args:\n source: source of the signal, it can be either an object id returned\n by a task, a task id, or an actor handle.\n\n Returns:\n - If source is an object id, return id of task which creted object.\n - If source is an actor handle, return id of actor's task creator.\n - If source is a task id, return same task id.\n \"\"\"\n if type(source) is ray.actor.ActorHandle:\n return source._ray_actor_id\n else:\n if type(source) is ray.TaskID:\n return source\n else:\n return ray._raylet.compute_task_id(source)\n\n\ndef send(signal):\n \"\"\"Send signal.\n\n The signal has a unique identifier that is computed from (1) the id\n of the actor or task sending this signal (i.e., the actor or task calling\n this function), and (2) an index that is incremented every time this\n source sends a signal. 
This index starts from 1.\n\n Args:\n signal: Signal to be sent.\n \"\"\"\n if hasattr(ray.worker.global_worker, \"actor_creation_task_id\"):\n source_key = ray.worker.global_worker.actor_id.hex()\n else:\n # No actors; this function must have been called from a task\n source_key = ray.worker.global_worker.current_task_id.hex()\n\n encoded_signal = ray.utils.binary_to_hex(cloudpickle.dumps(signal))\n ray.worker.global_worker.redis_client.execute_command(\n \"XADD \" + source_key + \" * signal \" + encoded_signal)\n\n\ndef receive(sources, timeout=None):\n \"\"\"Get all outstanding signals from sources.\n\n A source can be either (1) an object ID returned by the task (we want\n to receive signals from), or (2) an actor handle.\n\n When invoked by the same entity E (where E can be an actor, task or\n driver), for each source S in sources, this function returns all signals\n generated by S since the last receive() was invoked by E on S. If this is\n the first call on S, this function returns all past signals generated by S\n so far. Note that different actors, tasks or drivers that call receive()\n on the same source S will get independent copies of the signals generated\n by S.\n\n Args:\n sources: List of sources from which the caller waits for signals.\n A source is either an object ID returned by a task (in this case\n the object ID is used to identify that task), or an actor handle.\n If the user passes the IDs of multiple objects returned by the\n same task, this function returns a copy of the signals generated\n by that task for each object ID.\n timeout: Maximum time (in seconds) this function waits to get a signal\n from a source in sources. If None, the timeout is infinite.\n\n Returns:\n A list of pairs (S, sig), where S is a source in the sources argument,\n and sig is a signal generated by S since the last time receive()\n was called on S. Thus, for each S in sources, the return list can\n contain zero or multiple entries.\n \"\"\"\n\n # If None, initialize the timeout to a huge value (i.e., over 30,000 years\n # in this case) to \"approximate\" infinity.\n if timeout is None:\n timeout = 10**12\n\n if timeout < 0:\n raise ValueError(\"The 'timeout' argument cannot be less than 0.\")\n\n if not hasattr(ray.worker.global_worker, \"signal_counters\"):\n ray.worker.global_worker.signal_counters = defaultdict(lambda: b\"0\")\n\n signal_counters = ray.worker.global_worker.signal_counters\n\n # Map the ID of each source task to the source itself.\n task_id_to_sources = defaultdict(lambda: [])\n for s in sources:\n task_id_to_sources[_get_task_id(s).hex()].append(s)\n\n # Construct the redis query.\n query = \"XREAD BLOCK \"\n # Multiply by 1000x since timeout is in sec and redis expects ms.\n query += str(1000 * timeout)\n query += \" STREAMS \"\n query += \" \".join([task_id for task_id in task_id_to_sources])\n query += \" \"\n query += \" \".join([\n ray.utils.decode(signal_counters[ray.utils.hex_to_binary(task_id)])\n for task_id in task_id_to_sources\n ])\n\n answers = ray.worker.global_worker.redis_client.execute_command(query)\n if not answers:\n return []\n\n results = []\n # Decoding is a little bit involved. 
Iterate through all the answers:\n for i, answer in enumerate(answers):\n # Make sure the answer corresponds to a source, s, in sources.\n task_id = ray.utils.decode(answer[0])\n task_source_list = task_id_to_sources[task_id]\n # The list of results for source s is stored in answer[1]\n for r in answer[1]:\n for s in task_source_list:\n if r[1][1].decode(\"ascii\") == ACTOR_DIED_STR:\n results.append((s, ActorDiedSignal()))\n else:\n # Now it gets tricky: r[0] is the redis internal sequence\n # id\n signal_counters[ray.utils.hex_to_binary(task_id)] = r[0]\n # r[1] contains a list with elements (key, value), in our\n # case we only have one key \"signal\" and the value is the\n # signal.\n signal = cloudpickle.loads(\n ray.utils.hex_to_binary(r[1][1]))\n results.append((s, signal))\n\n return results\n\n\ndef forget(sources):\n \"\"\"Ignore all previous signals associated with each source S in sources.\n\n The index of the next expected signal from S is set to the index of\n the last signal that S sent plus 1. This means that the next receive()\n on S will only get the signals generated after this function was invoked.\n\n Args:\n sources: list of sources whose past signals are forgotten.\n \"\"\"\n # Just read all signals sent by all sources so far.\n # This will results in ignoring these signals.\n receive(sources, timeout=0)\n\n\ndef reset():\n \"\"\"\n Reset the worker state associated with any signals that this worker\n has received so far.\n\n If the worker calls receive() on a source next, it will get all the\n signals generated by that source starting with index = 1.\n \"\"\"\n if hasattr(ray.worker.global_worker, \"signal_counters\"):\n ray.worker.global_worker.signal_counters = defaultdict(lambda: b\"0\")\n", "path": "python/ray/experimental/signal.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport logging\n\nfrom collections import defaultdict\n\nimport ray\nimport ray.cloudpickle as cloudpickle\n\n# This string should be identical to the name of the signal sent upon\n# detecting that an actor died.\n# This constant is also used in NodeManager::PublishActorStateTransition()\n# in node_manager.cc\nACTOR_DIED_STR = \"ACTOR_DIED_SIGNAL\"\n\nlogger = logging.getLogger(__name__)\n\n\nclass Signal(object):\n \"\"\"Base class for Ray signals.\"\"\"\n pass\n\n\nclass ErrorSignal(Signal):\n \"\"\"Signal raised if an exception happens in a task or actor method.\"\"\"\n\n def __init__(self, error):\n self.error = error\n\n\nclass ActorDiedSignal(Signal):\n \"\"\"Signal raised if an exception happens in a task or actor method.\"\"\"\n\n def __init__(self):\n pass\n\n\ndef _get_task_id(source):\n \"\"\"Return the task id associated to the generic source of the signal.\n\n Args:\n source: source of the signal, it can be either an object id returned\n by a task, a task id, or an actor handle.\n\n Returns:\n - If source is an object id, return id of task which creted object.\n - If source is an actor handle, return id of actor's task creator.\n - If source is a task id, return same task id.\n \"\"\"\n if type(source) is ray.actor.ActorHandle:\n return source._ray_actor_id\n else:\n if type(source) is ray.TaskID:\n return source\n else:\n return ray._raylet.compute_task_id(source)\n\n\ndef send(signal):\n \"\"\"Send signal.\n\n The signal has a unique identifier that is computed from (1) the id\n of the actor or task sending this signal (i.e., the actor or task calling\n this function), and 
(2) an index that is incremented every time this\n source sends a signal. This index starts from 1.\n\n Args:\n signal: Signal to be sent.\n \"\"\"\n if hasattr(ray.worker.global_worker, \"actor_creation_task_id\"):\n source_key = ray.worker.global_worker.actor_id.hex()\n else:\n # No actors; this function must have been called from a task\n source_key = ray.worker.global_worker.current_task_id.hex()\n\n encoded_signal = ray.utils.binary_to_hex(cloudpickle.dumps(signal))\n ray.worker.global_worker.redis_client.execute_command(\n \"XADD \" + source_key + \" * signal \" + encoded_signal)\n\n\ndef receive(sources, timeout=None):\n \"\"\"Get all outstanding signals from sources.\n\n A source can be either (1) an object ID returned by the task (we want\n to receive signals from), or (2) an actor handle.\n\n When invoked by the same entity E (where E can be an actor, task or\n driver), for each source S in sources, this function returns all signals\n generated by S since the last receive() was invoked by E on S. If this is\n the first call on S, this function returns all past signals generated by S\n so far. Note that different actors, tasks or drivers that call receive()\n on the same source S will get independent copies of the signals generated\n by S.\n\n Args:\n sources: List of sources from which the caller waits for signals.\n A source is either an object ID returned by a task (in this case\n the object ID is used to identify that task), or an actor handle.\n If the user passes the IDs of multiple objects returned by the\n same task, this function returns a copy of the signals generated\n by that task for each object ID.\n timeout: Maximum time (in seconds) this function waits to get a signal\n from a source in sources. If None, the timeout is infinite.\n\n Returns:\n A list of pairs (S, sig), where S is a source in the sources argument,\n and sig is a signal generated by S since the last time receive()\n was called on S. Thus, for each S in sources, the return list can\n contain zero or multiple entries.\n \"\"\"\n\n # If None, initialize the timeout to a huge value (i.e., over 30,000 years\n # in this case) to \"approximate\" infinity.\n if timeout is None:\n timeout = 10**12\n\n if timeout < 0:\n raise ValueError(\"The 'timeout' argument cannot be less than 0.\")\n\n if not hasattr(ray.worker.global_worker, \"signal_counters\"):\n ray.worker.global_worker.signal_counters = defaultdict(lambda: b\"0\")\n\n signal_counters = ray.worker.global_worker.signal_counters\n\n # Map the ID of each source task to the source itself.\n task_id_to_sources = defaultdict(lambda: [])\n for s in sources:\n task_id_to_sources[_get_task_id(s).hex()].append(s)\n\n if timeout < 1e-3:\n logger.warning(\"Timeout too small. Using 1ms minimum\")\n timeout = 1e-3\n\n timeout_ms = int(1000 * timeout)\n\n # Construct the redis query.\n query = \"XREAD BLOCK \"\n # redis expects ms.\n query += str(timeout_ms)\n query += \" STREAMS \"\n query += \" \".join([task_id for task_id in task_id_to_sources])\n query += \" \"\n query += \" \".join([\n ray.utils.decode(signal_counters[ray.utils.hex_to_binary(task_id)])\n for task_id in task_id_to_sources\n ])\n\n answers = ray.worker.global_worker.redis_client.execute_command(query)\n if not answers:\n return []\n\n results = []\n # Decoding is a little bit involved. 
Iterate through all the answers:\n for i, answer in enumerate(answers):\n # Make sure the answer corresponds to a source, s, in sources.\n task_id = ray.utils.decode(answer[0])\n task_source_list = task_id_to_sources[task_id]\n # The list of results for source s is stored in answer[1]\n for r in answer[1]:\n for s in task_source_list:\n if r[1][1].decode(\"ascii\") == ACTOR_DIED_STR:\n results.append((s, ActorDiedSignal()))\n else:\n # Now it gets tricky: r[0] is the redis internal sequence\n # id\n signal_counters[ray.utils.hex_to_binary(task_id)] = r[0]\n # r[1] contains a list with elements (key, value), in our\n # case we only have one key \"signal\" and the value is the\n # signal.\n signal = cloudpickle.loads(\n ray.utils.hex_to_binary(r[1][1]))\n results.append((s, signal))\n\n return results\n\n\ndef forget(sources):\n \"\"\"Ignore all previous signals associated with each source S in sources.\n\n The index of the next expected signal from S is set to the index of\n the last signal that S sent plus 1. This means that the next receive()\n on S will only get the signals generated after this function was invoked.\n\n Args:\n sources: list of sources whose past signals are forgotten.\n \"\"\"\n # Just read all signals sent by all sources so far.\n # This will results in ignoring these signals.\n receive(sources, timeout=0)\n\n\ndef reset():\n \"\"\"\n Reset the worker state associated with any signals that this worker\n has received so far.\n\n If the worker calls receive() on a source next, it will get all the\n signals generated by that source starting with index = 1.\n \"\"\"\n if hasattr(ray.worker.global_worker, \"signal_counters\"):\n ray.worker.global_worker.signal_counters = defaultdict(lambda: b\"0\")\n", "path": "python/ray/experimental/signal.py"}]}
| 3,080 | 321 |
gh_patches_debug_27175
|
rasdani/github-patches
|
git_diff
|
xorbitsai__inference-143
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ENH: auto find available port for API
### Describe the bug
```
~ ❯ xinference    6s  base 18:24:18
Traceback (most recent call last):
File "/Users/hekaisheng/miniconda3/bin/xinference", line 33, in <module>
sys.exit(load_entry_point('xinference', 'console_scripts', 'xinference')())
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1637, in invoke
super().invoke(ctx)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/cmdline.py", line 51, in cli
main(
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py", line 50, in main
loop.run_until_complete(task)
File "/Users/hekaisheng/miniconda3/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py", line 36, in _start_local_cluster
url = await start_supervisor_components(address=address, host=host, port=port)
File "/Users/hekaisheng/Documents/projects/inference/xinference/deploy/supervisor.py", line 35, in start_supervisor_components
sock.bind((host, port))
OSError: [Errno 48] Address already in use
```
Use an available port if the user does not specify one.
--- END ISSUE ---
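As a point of reference, the behaviour the issue asks for can be sketched with nothing but the standard library: try the requested port first and fall back to an OS-assigned one. The helper name below is illustrative and is not part of xinference; the actual patch further down reuses the project's own port utilities.
```python
import socket


def bind_api_socket(host: str, port: int) -> socket.socket:
    """Bind to the requested port, or fall back to a free one (sketch only)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))
    except OSError:
        # Port 0 lets the kernel pick any free ephemeral port.
        sock.bind((host, 0))
    return sock


if __name__ == "__main__":
    s = bind_api_socket("127.0.0.1", 9997)
    print("REST API socket bound to", s.getsockname())
    s.close()
```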
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xinference/deploy/supervisor.py`
Content:
```
1 # Copyright 2022-2023 XProbe Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 import logging
17 import socket
18 from typing import Dict, Optional
19
20 import xoscar as xo
21
22 from ..core.gradio import GradioApp
23 from ..core.restful_api import RESTfulAPIActor
24 from ..core.service import SupervisorActor
25
26 logger = logging.getLogger("xinference")
27
28
29 async def start_supervisor_components(address: str, host: str, port: int):
30 await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())
31 gradio_block = GradioApp(address).build()
32 # create a socket for RESTful API
33 sockets = []
34 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
35 sock.bind((host, port))
36 sockets.append(sock)
37 restful_actor = await xo.create_actor(
38 RESTfulAPIActor,
39 address=address,
40 uid=RESTfulAPIActor.uid(),
41 sockets=sockets,
42 gradio_block=gradio_block,
43 )
44 await restful_actor.serve()
45 url = f"http://{host}:{port}"
46 logger.info(f"Server address: {url}")
47 return url
48
49
50 async def _start_supervisor(
51 address: str, host: str, port: int, logging_conf: Optional[Dict] = None
52 ):
53 pool = None
54 try:
55 pool = await xo.create_actor_pool(
56 address=address, n_process=0, logging_conf=logging_conf
57 )
58 await start_supervisor_components(address=address, host=host, port=port)
59 await pool.join()
60 except asyncio.exceptions.CancelledError:
61 if pool is not None:
62 await pool.stop()
63
64
65 def main(*args, **kwargs):
66 loop = asyncio.get_event_loop()
67 task = loop.create_task(_start_supervisor(*args, **kwargs))
68
69 try:
70 loop.run_until_complete(task)
71 except KeyboardInterrupt:
72 task.cancel()
73 loop.run_until_complete(task)
74 # avoid displaying exception-unhandled warnings
75 task.exception()
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/xinference/deploy/supervisor.py b/xinference/deploy/supervisor.py
--- a/xinference/deploy/supervisor.py
+++ b/xinference/deploy/supervisor.py
@@ -18,7 +18,9 @@
from typing import Dict, Optional
import xoscar as xo
+from xoscar.utils import get_next_port
+from ..constants import XINFERENCE_DEFAULT_ENDPOINT_PORT
from ..core.gradio import GradioApp
from ..core.restful_api import RESTfulAPIActor
from ..core.service import SupervisorActor
@@ -30,10 +32,26 @@
await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())
gradio_block = GradioApp(address).build()
# create a socket for RESTful API
- sockets = []
- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- sock.bind((host, port))
- sockets.append(sock)
+ try:
+ sockets = []
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ sock.bind((host, port))
+ sockets.append(sock)
+ except OSError:
+ if port is XINFERENCE_DEFAULT_ENDPOINT_PORT:
+ while True:
+ try:
+ sockets = []
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ port = get_next_port()
+ sock.bind((host, port))
+ sockets.append(sock)
+ break
+ except OSError:
+ pass
+ else:
+ raise OSError
+
restful_actor = await xo.create_actor(
RESTfulAPIActor,
address=address,
|
{"golden_diff": "diff --git a/xinference/deploy/supervisor.py b/xinference/deploy/supervisor.py\n--- a/xinference/deploy/supervisor.py\n+++ b/xinference/deploy/supervisor.py\n@@ -18,7 +18,9 @@\n from typing import Dict, Optional\n \n import xoscar as xo\n+from xoscar.utils import get_next_port\n \n+from ..constants import XINFERENCE_DEFAULT_ENDPOINT_PORT\n from ..core.gradio import GradioApp\n from ..core.restful_api import RESTfulAPIActor\n from ..core.service import SupervisorActor\n@@ -30,10 +32,26 @@\n await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())\n gradio_block = GradioApp(address).build()\n # create a socket for RESTful API\n- sockets = []\n- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n- sock.bind((host, port))\n- sockets.append(sock)\n+ try:\n+ sockets = []\n+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n+ sock.bind((host, port))\n+ sockets.append(sock)\n+ except OSError:\n+ if port is XINFERENCE_DEFAULT_ENDPOINT_PORT:\n+ while True:\n+ try:\n+ sockets = []\n+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n+ port = get_next_port()\n+ sock.bind((host, port))\n+ sockets.append(sock)\n+ break\n+ except OSError:\n+ pass\n+ else:\n+ raise OSError\n+\n restful_actor = await xo.create_actor(\n RESTfulAPIActor,\n address=address,\n", "issue": "ENH: auto find available port for API\n### Describe the bug\r\n```\r\n~ \u276f xinference 6s \ue73c base 18:24:18\r\nTraceback (most recent call last):\r\n File \"/Users/hekaisheng/miniconda3/bin/xinference\", line 33, in <module>\r\n sys.exit(load_entry_point('xinference', 'console_scripts', 'xinference')())\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1637, in invoke\r\n super().invoke(ctx)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/site-packages/click/decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/cmdline.py\", line 51, in cli\r\n main(\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py\", line 50, in main\r\n loop.run_until_complete(task)\r\n File \"/Users/hekaisheng/miniconda3/lib/python3.9/asyncio/base_events.py\", line 647, in run_until_complete\r\n return future.result()\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/local.py\", line 36, in _start_local_cluster\r\n url = await start_supervisor_components(address=address, host=host, port=port)\r\n File \"/Users/hekaisheng/Documents/projects/inference/xinference/deploy/supervisor.py\", line 35, in start_supervisor_components\r\n sock.bind((host, port))\r\nOSError: [Errno 48] Address already in use\r\n```\r\n\r\nUse available port if users not specify.\n", "before_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file 
except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport logging\nimport socket\nfrom typing import Dict, Optional\n\nimport xoscar as xo\n\nfrom ..core.gradio import GradioApp\nfrom ..core.restful_api import RESTfulAPIActor\nfrom ..core.service import SupervisorActor\n\nlogger = logging.getLogger(\"xinference\")\n\n\nasync def start_supervisor_components(address: str, host: str, port: int):\n await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())\n gradio_block = GradioApp(address).build()\n # create a socket for RESTful API\n sockets = []\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock.bind((host, port))\n sockets.append(sock)\n restful_actor = await xo.create_actor(\n RESTfulAPIActor,\n address=address,\n uid=RESTfulAPIActor.uid(),\n sockets=sockets,\n gradio_block=gradio_block,\n )\n await restful_actor.serve()\n url = f\"http://{host}:{port}\"\n logger.info(f\"Server address: {url}\")\n return url\n\n\nasync def _start_supervisor(\n address: str, host: str, port: int, logging_conf: Optional[Dict] = None\n):\n pool = None\n try:\n pool = await xo.create_actor_pool(\n address=address, n_process=0, logging_conf=logging_conf\n )\n await start_supervisor_components(address=address, host=host, port=port)\n await pool.join()\n except asyncio.exceptions.CancelledError:\n if pool is not None:\n await pool.stop()\n\n\ndef main(*args, **kwargs):\n loop = asyncio.get_event_loop()\n task = loop.create_task(_start_supervisor(*args, **kwargs))\n\n try:\n loop.run_until_complete(task)\n except KeyboardInterrupt:\n task.cancel()\n loop.run_until_complete(task)\n # avoid displaying exception-unhandled warnings\n task.exception()\n", "path": "xinference/deploy/supervisor.py"}], "after_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport logging\nimport socket\nfrom typing import Dict, Optional\n\nimport xoscar as xo\nfrom xoscar.utils import get_next_port\n\nfrom ..constants import XINFERENCE_DEFAULT_ENDPOINT_PORT\nfrom ..core.gradio import GradioApp\nfrom ..core.restful_api import RESTfulAPIActor\nfrom ..core.service import SupervisorActor\n\nlogger = logging.getLogger(\"xinference\")\n\n\nasync def start_supervisor_components(address: str, host: str, port: int):\n await xo.create_actor(SupervisorActor, address=address, uid=SupervisorActor.uid())\n gradio_block = GradioApp(address).build()\n # create a socket for RESTful API\n try:\n sockets = []\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock.bind((host, port))\n sockets.append(sock)\n except 
OSError:\n if port is XINFERENCE_DEFAULT_ENDPOINT_PORT:\n while True:\n try:\n sockets = []\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n port = get_next_port()\n sock.bind((host, port))\n sockets.append(sock)\n break\n except OSError:\n pass\n else:\n raise OSError\n\n restful_actor = await xo.create_actor(\n RESTfulAPIActor,\n address=address,\n uid=RESTfulAPIActor.uid(),\n sockets=sockets,\n gradio_block=gradio_block,\n )\n await restful_actor.serve()\n url = f\"http://{host}:{port}\"\n logger.info(f\"Server address: {url}\")\n return url\n\n\nasync def _start_supervisor(\n address: str, host: str, port: int, logging_conf: Optional[Dict] = None\n):\n pool = None\n try:\n pool = await xo.create_actor_pool(\n address=address, n_process=0, logging_conf=logging_conf\n )\n await start_supervisor_components(address=address, host=host, port=port)\n await pool.join()\n except asyncio.exceptions.CancelledError:\n if pool is not None:\n await pool.stop()\n\n\ndef main(*args, **kwargs):\n loop = asyncio.get_event_loop()\n task = loop.create_task(_start_supervisor(*args, **kwargs))\n\n try:\n loop.run_until_complete(task)\n except KeyboardInterrupt:\n task.cancel()\n loop.run_until_complete(task)\n # avoid displaying exception-unhandled warnings\n task.exception()\n", "path": "xinference/deploy/supervisor.py"}]}
| 1,543 | 368 |
gh_patches_debug_1931
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2710
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot create new shifts: ValueError: 'Shift' instance needs to have a primary key value before this relationship can be used.
Sentry Issue: [CONCREXIT-KK](https://sentry.io/organizations/thalia/issues/3788518453/?referrer=github_integration)
```
ValueError: 'Shift' instance needs to have a primary key value before this relationship can be used.
(14 additional frame(s) were not displayed)
...
File "django/forms/models.py", line 492, in _post_clean
self.instance.full_clean(exclude=exclude, validate_unique=False)
File "django/db/models/base.py", line 1452, in full_clean
self.clean()
File "sales/models/shift.py", line 69, in clean
if self.orders.filter(created_at__lt=self.start):
File "django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "django/db/models/fields/related_descriptors.py", line 687, in get_queryset
raise ValueError(
```
--- END ISSUE ---
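The root cause is that Django refuses to build a reverse-relation queryset on an instance that has no primary key yet, which is exactly the state a brand-new Shift is in while the admin form runs clean(). A minimal sketch of the guard pattern, with made-up model names and fields standing in for concrexit's real ones (it assumes an already configured Django project, so it is illustrative rather than directly runnable):
```python
from django.core.exceptions import ValidationError
from django.db import models


class Shift(models.Model):
    start = models.DateTimeField()

    def clean(self):
        super().clean()
        # An unsaved instance has pk None, so touching the reverse
        # "orders" manager here would raise the ValueError from the issue.
        if self.pk is not None and self.orders.filter(created_at__lt=self.start).exists():
            raise ValidationError({"start": "Orders exist before this start time."})


class Order(models.Model):
    shift = models.ForeignKey(Shift, related_name="orders", on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)
```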
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/sales/models/shift.py`
Content:
```
1 from django.core.exceptions import ValidationError
2 from django.db import models
3 from django.db.models import Count, Q, Sum
4 from django.db.models.expressions import Value
5 from django.db.models.functions import Coalesce
6 from django.utils import timezone
7 from django.utils.translation import gettext_lazy as _
8
9 from queryable_properties.managers import QueryablePropertiesManager
10 from queryable_properties.properties import AggregateProperty, RangeCheckProperty
11
12 from activemembers.models import MemberGroup
13 from payments.models import PaymentAmountField
14 from sales.models.product import ProductList
15
16
17 class Shift(models.Model):
18 class Meta:
19 permissions = [
20 ("override_manager", _("Can access all shifts as manager")),
21 ]
22
23 objects = QueryablePropertiesManager()
24
25 start = models.DateTimeField(
26 verbose_name=_("start"),
27 blank=False,
28 null=False,
29 )
30 end = models.DateTimeField(
31 verbose_name=_("end"),
32 blank=False,
33 null=False,
34 help_text=_(
35 "The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift."
36 ),
37 )
38
39 title = models.CharField(
40 verbose_name=_("title"), blank=True, null=True, max_length=100
41 )
42
43 product_list = models.ForeignKey(
44 ProductList,
45 verbose_name=_("product list"),
46 blank=False,
47 null=False,
48 on_delete=models.PROTECT,
49 )
50
51 managers = models.ManyToManyField(
52 MemberGroup, verbose_name=_("managers"), related_name="manager_shifts"
53 )
54
55 locked = models.BooleanField(
56 verbose_name=_("locked"),
57 blank=False,
58 null=False,
59 default=False,
60 help_text=_(
61 "Prevent orders being changed or created for this shift. This will also clean up all unpaid orders in this shift."
62 ),
63 )
64
65 def clean(self):
66 super().clean()
67 errors = {}
68
69 if self.orders.filter(created_at__lt=self.start):
70 errors.update(
71 {
72 "start": _(
73 "There are already orders created in this shift before this start time."
74 )
75 }
76 )
77
78 if self.end and self.start and self.end <= self.start:
79 errors.update({"end": _("End cannot be before start.")})
80
81 if errors:
82 raise ValidationError(errors)
83
84 def save(
85 self, force_insert=False, force_update=False, using=None, update_fields=None
86 ):
87 if self.locked:
88 self.orders.filter(
89 (Q(payment__isnull=True) & Q(total_amount__gt=0))
90 | Q(order_items__isnull=True)
91 ).delete()
92
93 return super().save(force_insert, force_update, using, update_fields)
94
95 active = RangeCheckProperty("start", "end", timezone.now)
96
97 total_revenue = AggregateProperty(
98 Sum(
99 Coalesce("orders___total_amount", Value(0.00)),
100 output_field=PaymentAmountField(allow_zero=True),
101 )
102 )
103
104 total_revenue_paid = AggregateProperty(
105 Sum(
106 Coalesce("orders__payment__amount", Value(0.00)),
107 output_field=PaymentAmountField(allow_zero=True),
108 )
109 )
110
111 num_orders = AggregateProperty(
112 Count(
113 "orders",
114 )
115 )
116
117 num_orders_paid = AggregateProperty(
118 Count(
119 "orders",
120 filter=Q(orders___is_free=True)
121 | Q(
122 orders__payment__isnull=False, # or the order is free
123 ),
124 )
125 )
126
127 @property
128 def product_sales(self):
129 qs = (
130 self.orders.exclude(order_items__isnull=True)
131 .values("order_items__product")
132 .annotate(sold=Sum("order_items__amount"))
133 .order_by()
134 )
135 return {
136 item[0]: item[1]
137 for item in qs.values_list("order_items__product__product__name", "sold")
138 }
139
140 @property
141 def payment_method_sales(self):
142 qs = (
143 self.orders.values("payment__type")
144 .annotate(sold=Sum("order_items__total"))
145 .order_by()
146 )
147 return {item[0]: item[1] for item in qs.values_list("payment__type", "sold")}
148
149 @property
150 def user_orders_allowed(self):
151 return self.selforderperiod_set.filter(
152 start__lte=timezone.now(), end__gt=timezone.now()
153 ).exists()
154
155 @property
156 def user_order_period(self):
157 qs = self.selforderperiod_set.filter(
158 start__lte=timezone.now(), end__gt=timezone.now()
159 )
160 if qs.exists():
161 return qs.first()
162 return None
163
164 def __str__(self):
165 if self.title and self.title != "":
166 return f"Shift {self.pk} - {self.title}"
167 return f"Shift {self.pk}"
168
169
170 class SelfOrderPeriod(models.Model):
171 class Meta:
172 verbose_name = _("self-order period")
173 verbose_name_plural = _("self-order periods")
174 ordering = ["start"]
175
176 shift = models.ForeignKey(Shift, blank=False, null=False, on_delete=models.CASCADE)
177 start = models.DateTimeField(
178 verbose_name=_("start"),
179 blank=False,
180 null=False,
181 )
182 end = models.DateTimeField(
183 verbose_name=_("end"),
184 blank=False,
185 null=False,
186 help_text=_(
187 "After this moment, users cannot place orders themselves anymore in this shift."
188 ),
189 )
190
191 def __str__(self):
192 return f"Self-order period for shift {self.shift.pk}"
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/sales/models/shift.py b/website/sales/models/shift.py
--- a/website/sales/models/shift.py
+++ b/website/sales/models/shift.py
@@ -66,7 +66,7 @@
super().clean()
errors = {}
- if self.orders.filter(created_at__lt=self.start):
+ if self.pk is not None and self.orders.filter(created_at__lt=self.start):
errors.update(
{
"start": _(
|
{"golden_diff": "diff --git a/website/sales/models/shift.py b/website/sales/models/shift.py\n--- a/website/sales/models/shift.py\n+++ b/website/sales/models/shift.py\n@@ -66,7 +66,7 @@\n super().clean()\n errors = {}\n \n- if self.orders.filter(created_at__lt=self.start):\n+ if self.pk is not None and self.orders.filter(created_at__lt=self.start):\n errors.update(\n {\n \"start\": _(\n", "issue": "Cannot create new shifts: ValueError: 'Shift' instance needs to have a primary key value before this relationship can be used.\nSentry Issue: [CONCREXIT-KK](https://sentry.io/organizations/thalia/issues/3788518453/?referrer=github_integration)\n\n```\nValueError: 'Shift' instance needs to have a primary key value before this relationship can be used.\n(14 additional frame(s) were not displayed)\n...\n File \"django/forms/models.py\", line 492, in _post_clean\n self.instance.full_clean(exclude=exclude, validate_unique=False)\n File \"django/db/models/base.py\", line 1452, in full_clean\n self.clean()\n File \"sales/models/shift.py\", line 69, in clean\n if self.orders.filter(created_at__lt=self.start):\n File \"django/db/models/manager.py\", line 85, in manager_method\n return getattr(self.get_queryset(), name)(*args, **kwargs)\n File \"django/db/models/fields/related_descriptors.py\", line 687, in get_queryset\n raise ValueError(\n```\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Count, Q, Sum\nfrom django.db.models.expressions import Value\nfrom django.db.models.functions import Coalesce\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AggregateProperty, RangeCheckProperty\n\nfrom activemembers.models import MemberGroup\nfrom payments.models import PaymentAmountField\nfrom sales.models.product import ProductList\n\n\nclass Shift(models.Model):\n class Meta:\n permissions = [\n (\"override_manager\", _(\"Can access all shifts as manager\")),\n ]\n\n objects = QueryablePropertiesManager()\n\n start = models.DateTimeField(\n verbose_name=_(\"start\"),\n blank=False,\n null=False,\n )\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift.\"\n ),\n )\n\n title = models.CharField(\n verbose_name=_(\"title\"), blank=True, null=True, max_length=100\n )\n\n product_list = models.ForeignKey(\n ProductList,\n verbose_name=_(\"product list\"),\n blank=False,\n null=False,\n on_delete=models.PROTECT,\n )\n\n managers = models.ManyToManyField(\n MemberGroup, verbose_name=_(\"managers\"), related_name=\"manager_shifts\"\n )\n\n locked = models.BooleanField(\n verbose_name=_(\"locked\"),\n blank=False,\n null=False,\n default=False,\n help_text=_(\n \"Prevent orders being changed or created for this shift. 
This will also clean up all unpaid orders in this shift.\"\n ),\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.orders.filter(created_at__lt=self.start):\n errors.update(\n {\n \"start\": _(\n \"There are already orders created in this shift before this start time.\"\n )\n }\n )\n\n if self.end and self.start and self.end <= self.start:\n errors.update({\"end\": _(\"End cannot be before start.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.locked:\n self.orders.filter(\n (Q(payment__isnull=True) & Q(total_amount__gt=0))\n | Q(order_items__isnull=True)\n ).delete()\n\n return super().save(force_insert, force_update, using, update_fields)\n\n active = RangeCheckProperty(\"start\", \"end\", timezone.now)\n\n total_revenue = AggregateProperty(\n Sum(\n Coalesce(\"orders___total_amount\", Value(0.00)),\n output_field=PaymentAmountField(allow_zero=True),\n )\n )\n\n total_revenue_paid = AggregateProperty(\n Sum(\n Coalesce(\"orders__payment__amount\", Value(0.00)),\n output_field=PaymentAmountField(allow_zero=True),\n )\n )\n\n num_orders = AggregateProperty(\n Count(\n \"orders\",\n )\n )\n\n num_orders_paid = AggregateProperty(\n Count(\n \"orders\",\n filter=Q(orders___is_free=True)\n | Q(\n orders__payment__isnull=False, # or the order is free\n ),\n )\n )\n\n @property\n def product_sales(self):\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n .annotate(sold=Sum(\"order_items__amount\"))\n .order_by()\n )\n return {\n item[0]: item[1]\n for item in qs.values_list(\"order_items__product__product__name\", \"sold\")\n }\n\n @property\n def payment_method_sales(self):\n qs = (\n self.orders.values(\"payment__type\")\n .annotate(sold=Sum(\"order_items__total\"))\n .order_by()\n )\n return {item[0]: item[1] for item in qs.values_list(\"payment__type\", \"sold\")}\n\n @property\n def user_orders_allowed(self):\n return self.selforderperiod_set.filter(\n start__lte=timezone.now(), end__gt=timezone.now()\n ).exists()\n\n @property\n def user_order_period(self):\n qs = self.selforderperiod_set.filter(\n start__lte=timezone.now(), end__gt=timezone.now()\n )\n if qs.exists():\n return qs.first()\n return None\n\n def __str__(self):\n if self.title and self.title != \"\":\n return f\"Shift {self.pk} - {self.title}\"\n return f\"Shift {self.pk}\"\n\n\nclass SelfOrderPeriod(models.Model):\n class Meta:\n verbose_name = _(\"self-order period\")\n verbose_name_plural = _(\"self-order periods\")\n ordering = [\"start\"]\n\n shift = models.ForeignKey(Shift, blank=False, null=False, on_delete=models.CASCADE)\n start = models.DateTimeField(\n verbose_name=_(\"start\"),\n blank=False,\n null=False,\n )\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"After this moment, users cannot place orders themselves anymore in this shift.\"\n ),\n )\n\n def __str__(self):\n return f\"Self-order period for shift {self.shift.pk}\"\n", "path": "website/sales/models/shift.py"}], "after_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Count, Q, Sum\nfrom django.db.models.expressions import Value\nfrom django.db.models.functions import Coalesce\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom 
queryable_properties.properties import AggregateProperty, RangeCheckProperty\n\nfrom activemembers.models import MemberGroup\nfrom payments.models import PaymentAmountField\nfrom sales.models.product import ProductList\n\n\nclass Shift(models.Model):\n class Meta:\n permissions = [\n (\"override_manager\", _(\"Can access all shifts as manager\")),\n ]\n\n objects = QueryablePropertiesManager()\n\n start = models.DateTimeField(\n verbose_name=_(\"start\"),\n blank=False,\n null=False,\n )\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift.\"\n ),\n )\n\n title = models.CharField(\n verbose_name=_(\"title\"), blank=True, null=True, max_length=100\n )\n\n product_list = models.ForeignKey(\n ProductList,\n verbose_name=_(\"product list\"),\n blank=False,\n null=False,\n on_delete=models.PROTECT,\n )\n\n managers = models.ManyToManyField(\n MemberGroup, verbose_name=_(\"managers\"), related_name=\"manager_shifts\"\n )\n\n locked = models.BooleanField(\n verbose_name=_(\"locked\"),\n blank=False,\n null=False,\n default=False,\n help_text=_(\n \"Prevent orders being changed or created for this shift. This will also clean up all unpaid orders in this shift.\"\n ),\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.pk is not None and self.orders.filter(created_at__lt=self.start):\n errors.update(\n {\n \"start\": _(\n \"There are already orders created in this shift before this start time.\"\n )\n }\n )\n\n if self.end and self.start and self.end <= self.start:\n errors.update({\"end\": _(\"End cannot be before start.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.locked:\n self.orders.filter(\n (Q(payment__isnull=True) & Q(total_amount__gt=0))\n | Q(order_items__isnull=True)\n ).delete()\n\n return super().save(force_insert, force_update, using, update_fields)\n\n active = RangeCheckProperty(\"start\", \"end\", timezone.now)\n\n total_revenue = AggregateProperty(\n Sum(\n Coalesce(\"orders___total_amount\", Value(0.00)),\n output_field=PaymentAmountField(allow_zero=True),\n )\n )\n\n total_revenue_paid = AggregateProperty(\n Sum(\n Coalesce(\"orders__payment__amount\", Value(0.00)),\n output_field=PaymentAmountField(allow_zero=True),\n )\n )\n\n num_orders = AggregateProperty(\n Count(\n \"orders\",\n )\n )\n\n num_orders_paid = AggregateProperty(\n Count(\n \"orders\",\n filter=Q(orders___is_free=True)\n | Q(\n orders__payment__isnull=False, # or the order is free\n ),\n )\n )\n\n @property\n def product_sales(self):\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n .annotate(sold=Sum(\"order_items__amount\"))\n .order_by()\n )\n return {\n item[0]: item[1]\n for item in qs.values_list(\"order_items__product__product__name\", \"sold\")\n }\n\n @property\n def payment_method_sales(self):\n qs = (\n self.orders.values(\"payment__type\")\n .annotate(sold=Sum(\"order_items__total\"))\n .order_by()\n )\n return {item[0]: item[1] for item in qs.values_list(\"payment__type\", \"sold\")}\n\n @property\n def user_orders_allowed(self):\n return self.selforderperiod_set.filter(\n start__lte=timezone.now(), end__gt=timezone.now()\n ).exists()\n\n @property\n def user_order_period(self):\n qs = self.selforderperiod_set.filter(\n 
start__lte=timezone.now(), end__gt=timezone.now()\n )\n if qs.exists():\n return qs.first()\n return None\n\n def __str__(self):\n if self.title and self.title != \"\":\n return f\"Shift {self.pk} - {self.title}\"\n return f\"Shift {self.pk}\"\n\n\nclass SelfOrderPeriod(models.Model):\n class Meta:\n verbose_name = _(\"self-order period\")\n verbose_name_plural = _(\"self-order periods\")\n ordering = [\"start\"]\n\n shift = models.ForeignKey(Shift, blank=False, null=False, on_delete=models.CASCADE)\n start = models.DateTimeField(\n verbose_name=_(\"start\"),\n blank=False,\n null=False,\n )\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"After this moment, users cannot place orders themselves anymore in this shift.\"\n ),\n )\n\n def __str__(self):\n return f\"Self-order period for shift {self.shift.pk}\"\n", "path": "website/sales/models/shift.py"}]}
| 2,161 | 110 |
gh_patches_debug_50508
|
rasdani/github-patches
|
git_diff
|
mars-project__mars-426
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
How to execute Mars tensors wrapped in an np.array
Example:
```
import mars.tensor as mt
import numpy as np
def test_x():
    X1 = mt.tensor([0, 1])
    X2 = mt.tensor([2, 3])
    X = np.array([X1, X2])
    print(X.execute())
```
How to calculate X?
--- END ISSUE ---
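The short answer to the question above is to keep the combination inside Mars instead of handing the tensors to np.array, for example by stacking them; the patch below additionally makes mt.tensor accept a plain list of tensors through the same stacking path. A minimal sketch, assuming a Mars release of that era where tensors expose .execute() directly as in the snippet:
```python
import mars.tensor as mt

X1 = mt.tensor([0, 1])
X2 = mt.tensor([2, 3])

# Stack the Mars tensors instead of wrapping them in a NumPy object array,
# so the result is itself a Mars tensor that can be executed.
X = mt.stack([X1, X2])
print(X.execute())  # [[0 1]
                    #  [2 3]]
```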
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/tensor/expressions/datasource/array.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import numpy as np
18
19 from .... import opcodes as OperandDef
20 from ....lib.sparse.core import issparse, get_array_module, cp, cps, sps
21 from ....utils import on_serialize_shape, on_deserialize_shape
22 from ....serialize import ValueType, NDArrayField, TupleField
23 from ...core import TENSOR_TYPE, Tensor
24 from ..utils import get_chunk_slices
25 from .core import TensorNoInput
26 from .scalar import scalar
27
28
29 class ArrayDataSource(TensorNoInput):
30 """
31 Represents data from numpy or cupy array
32 """
33
34 _op_type_ = OperandDef.TENSOR_DATA_SOURCE
35
36 _data = NDArrayField('data')
37
38 def __init__(self, data=None, dtype=None, gpu=None, **kw):
39 if dtype is not None:
40 dtype = np.dtype(dtype)
41 elif data is not None:
42 dtype = np.dtype(data.dtype)
43 super(ArrayDataSource, self).__init__(_data=data, _dtype=dtype, _gpu=gpu, **kw)
44
45 @property
46 def data(self):
47 return self._data
48
49 def to_chunk_op(self, *args):
50 _, idx, chunk_size = args
51 chunk_op = self.copy().reset_key()
52 chunk_op._data = self.data[get_chunk_slices(chunk_size, idx)]
53
54 return chunk_op
55
56
57 class CSRMatrixDataSource(TensorNoInput):
58 """
59 Represents data from sparse array include scipy sparse or cupy sparse matrix.
60 """
61
62 _op_type_ = OperandDef.SPARSE_MATRIX_DATA_SOURCE
63
64 _indices = NDArrayField('indices')
65 _indptr = NDArrayField('indptr')
66 _data = NDArrayField('data')
67 _shape = TupleField('shape', ValueType.int64,
68 on_serialize=on_serialize_shape, on_deserialize=on_deserialize_shape)
69
70 def __init__(self, indices=None, indptr=None, data=None, shape=None,
71 dtype=None, gpu=None, **kw):
72 super(CSRMatrixDataSource, self).__init__(_indices=indices, _indptr=indptr,
73 _data=data, _shape=shape, _dtype=dtype,
74 _gpu=gpu, _sparse=True, **kw)
75
76 def to_chunk_op(self, *args):
77 _, idx, chunk_size = args
78
79 xps = cps if self._gpu else sps
80 if len(self._shape) == 1:
81 shape = (1, self._shape[0])
82 else:
83 shape = self._shape
84 data = xps.csr_matrix(
85 (self._data, self._indices, self._indptr), shape)
86 chunk_data = data[get_chunk_slices(chunk_size, idx)]
87
88 chunk_op = self.copy().reset_key()
89 chunk_op._data = chunk_data.data
90 chunk_op._indices = chunk_data.indices
91 chunk_op._indptr = chunk_data.indptr
92 chunk_shape = chunk_data.shape[1:] \
93 if len(self._shape) == 1 else chunk_data.shape
94 chunk_op._shape = chunk_shape
95
96 return chunk_op
97
98 @property
99 def indices(self):
100 return self._indices
101
102 @property
103 def indptr(self):
104 return self._indptr
105
106 @property
107 def data(self):
108 return self._data
109
110 @property
111 def shape(self):
112 return self._shape
113
114
115 def _from_spmatrix(spmatrix, dtype=None, chunk_size=None, gpu=None):
116 if gpu is None and cp is not None and get_array_module(spmatrix) is cp:
117 gpu = True
118 if dtype and spmatrix.dtype != dtype:
119 spmatrix = spmatrix.astype(dtype)
120 spmatrix = spmatrix.tocsr()
121 op = CSRMatrixDataSource(indices=spmatrix.indices, indptr=spmatrix.indptr,
122 data=spmatrix.data, shape=spmatrix.shape,
123 dtype=spmatrix.dtype, gpu=gpu)
124 return op(spmatrix.shape, chunk_size=chunk_size)
125
126
127 def tensor(data, dtype=None, chunk_size=None, gpu=None, sparse=False):
128 if isinstance(data, TENSOR_TYPE):
129 if dtype is not None and data.dtype != dtype:
130 return data.astype(dtype)
131 return data
132 elif isinstance(data, tuple) and all(isinstance(d, TENSOR_TYPE) for d in data):
133 from ..merge import stack
134
135 data = stack(data)
136 if dtype is not None:
137 data = data.astype(dtype)
138 return data
139 elif np.isscalar(data):
140 return scalar(data, dtype=dtype)
141 elif issparse(data):
142 return _from_spmatrix(data, dtype=dtype, chunk_size=chunk_size, gpu=gpu)
143 else:
144 m = get_array_module(data)
145 data = m.asarray(data, dtype=dtype)
146 if gpu is None and cp is not None and m is cp:
147 gpu = True
148
149 if isinstance(data, np.ndarray):
150 if data.ndim == 0:
151 return scalar(data.item(), dtype=dtype)
152 op = ArrayDataSource(data, dtype=dtype, gpu=gpu)
153 t = op(data.shape, chunk_size=chunk_size)
154 if sparse and not t.issparse():
155 return t.tosparse()
156 return t
157 else:
158 raise ValueError('Cannot create tensor by given data: {0}'.format(data))
159
160
161 def array(x, dtype=None, copy=True, ndmin=None, chunk_size=None):
162 """
163 Create a tensor.
164
165 Parameters
166 ----------
167 object : array_like
168 An array, any object exposing the array interface, an object whose
169 __array__ method returns an array, or any (nested) sequence.
170 dtype : data-type, optional
171 The desired data-type for the array. If not given, then the type will
172 be determined as the minimum type required to hold the objects in the
173 sequence. This argument can only be used to 'upcast' the array. For
174 downcasting, use the .astype(t) method.
175 copy : bool, optional
176 If true (default), then the object is copied. Otherwise, a copy will
177 only be made if __array__ returns a copy, if obj is a nested sequence,
178 or if a copy is needed to satisfy any of the other requirements
179 (`dtype`, `order`, etc.).
180 ndmin : int, optional
181 Specifies the minimum number of dimensions that the resulting
182 array should have. Ones will be pre-pended to the shape as
183 needed to meet this requirement.
184 chunk_size: int, tuple, optional
185 Specifies chunk size for each dimension.
186
187 Returns
188 -------
189 out : Tensor
190 An tensor object satisfying the specified requirements.
191
192 See Also
193 --------
194 empty, empty_like, zeros, zeros_like, ones, ones_like, full, full_like
195
196 Examples
197 --------
198 >>> import mars.tensor as mt
199
200 >>> mt.array([1, 2, 3]).execute()
201 array([1, 2, 3])
202
203 Upcasting:
204
205 >>> mt.array([1, 2, 3.0]).execute()
206 array([ 1., 2., 3.])
207
208 More than one dimension:
209
210 >>> mt.array([[1, 2], [3, 4]]).execute()
211 array([[1, 2],
212 [3, 4]])
213
214 Minimum dimensions 2:
215
216 >>> mt.array([1, 2, 3], ndmin=2).execute()
217 array([[1, 2, 3]])
218
219 Type provided:
220
221 >>> mt.array([1, 2, 3], dtype=complex).execute()
222 array([ 1.+0.j, 2.+0.j, 3.+0.j])
223
224 """
225 raw_x = x
226 x = tensor(x, chunk_size=chunk_size)
227 if copy and x is raw_x:
228 x = Tensor(x.data)
229 while ndmin is not None and x.ndim < ndmin:
230 x = x[np.newaxis, :]
231 if dtype is not None and x.dtype != dtype:
232 x = x.astype(dtype)
233 return x
234
235
236 def asarray(x, dtype=None):
237 """Convert the input to an array.
238
239 Parameters
240 ----------
241 a : array_like
242 Input data, in any form that can be converted to a tensor. This
243 includes lists, lists of tuples, tuples, tuples of tuples, tuples
244 of lists and tensors.
245 dtype : data-type, optional
246 By default, the data-type is inferred from the input data.
247
248 Returns
249 -------
250 out : Tensor
251 Tensor interpretation of `a`. No copy is performed if the input
252 is already an ndarray with matching dtype and order. If `a` is a
253 subclass of ndarray, a base class ndarray is returned.
254
255 Examples
256 --------
257 Convert a list into an array:
258
259 >>> import mars.tensor as mt
260
261 >>> a = [1, 2]
262 >>> mt.asarray(a).execute()
263 array([1, 2])
264
265 Existing arrays are not copied:
266
267 >>> a = mt.array([1, 2])
268 >>> mt.asarray(a) is a
269 True
270
271 If `dtype` is set, array is copied only if dtype does not match:
272
273 >>> a = mt.array([1, 2], dtype=mt.float32)
274 >>> mt.asarray(a, dtype=mt.float32) is a
275 True
276 >>> mt.asarray(a, dtype=mt.float64) is a
277 False
278 """
279 return array(x, dtype=dtype, copy=False)
280
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mars/tensor/expressions/datasource/array.py b/mars/tensor/expressions/datasource/array.py
--- a/mars/tensor/expressions/datasource/array.py
+++ b/mars/tensor/expressions/datasource/array.py
@@ -129,7 +129,7 @@
if dtype is not None and data.dtype != dtype:
return data.astype(dtype)
return data
- elif isinstance(data, tuple) and all(isinstance(d, TENSOR_TYPE) for d in data):
+ elif isinstance(data, (tuple, list)) and all(isinstance(d, TENSOR_TYPE) for d in data):
from ..merge import stack
data = stack(data)
|
{"golden_diff": "diff --git a/mars/tensor/expressions/datasource/array.py b/mars/tensor/expressions/datasource/array.py\n--- a/mars/tensor/expressions/datasource/array.py\n+++ b/mars/tensor/expressions/datasource/array.py\n@@ -129,7 +129,7 @@\n if dtype is not None and data.dtype != dtype:\n return data.astype(dtype)\n return data\n- elif isinstance(data, tuple) and all(isinstance(d, TENSOR_TYPE) for d in data):\n+ elif isinstance(data, (tuple, list)) and all(isinstance(d, TENSOR_TYPE) for d in data):\n from ..merge import stack\n \n data = stack(data)\n", "issue": "How to perform all tensor on np.array packages\nExample\uff1a\r\n```\r\nimport mars.tensor as mt\r\nimport numpy as np\r\n\r\ndef test_x():\r\n X1 = mt.tensor([0, 1])\r\n X2 = mt.tensor([2, 3])\r\n X = np.array([X1,X2])\r\n print(X.execute())\r\n```\r\nHow to calculate X\uff1f\nHow to perform all tensor on np.array packages\nExample\uff1a\r\n```\r\nimport mars.tensor as mt\r\nimport numpy as np\r\n\r\ndef test_x():\r\n X1 = mt.tensor([0, 1])\r\n X2 = mt.tensor([2, 3])\r\n X = np.array([X1,X2])\r\n print(X.execute())\r\n```\r\nHow to calculate X\uff1f\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\n\nfrom .... 
import opcodes as OperandDef\nfrom ....lib.sparse.core import issparse, get_array_module, cp, cps, sps\nfrom ....utils import on_serialize_shape, on_deserialize_shape\nfrom ....serialize import ValueType, NDArrayField, TupleField\nfrom ...core import TENSOR_TYPE, Tensor\nfrom ..utils import get_chunk_slices\nfrom .core import TensorNoInput\nfrom .scalar import scalar\n\n\nclass ArrayDataSource(TensorNoInput):\n \"\"\"\n Represents data from numpy or cupy array\n \"\"\"\n\n _op_type_ = OperandDef.TENSOR_DATA_SOURCE\n\n _data = NDArrayField('data')\n\n def __init__(self, data=None, dtype=None, gpu=None, **kw):\n if dtype is not None:\n dtype = np.dtype(dtype)\n elif data is not None:\n dtype = np.dtype(data.dtype)\n super(ArrayDataSource, self).__init__(_data=data, _dtype=dtype, _gpu=gpu, **kw)\n\n @property\n def data(self):\n return self._data\n\n def to_chunk_op(self, *args):\n _, idx, chunk_size = args\n chunk_op = self.copy().reset_key()\n chunk_op._data = self.data[get_chunk_slices(chunk_size, idx)]\n\n return chunk_op\n\n\nclass CSRMatrixDataSource(TensorNoInput):\n \"\"\"\n Represents data from sparse array include scipy sparse or cupy sparse matrix.\n \"\"\"\n\n _op_type_ = OperandDef.SPARSE_MATRIX_DATA_SOURCE\n\n _indices = NDArrayField('indices')\n _indptr = NDArrayField('indptr')\n _data = NDArrayField('data')\n _shape = TupleField('shape', ValueType.int64,\n on_serialize=on_serialize_shape, on_deserialize=on_deserialize_shape)\n\n def __init__(self, indices=None, indptr=None, data=None, shape=None,\n dtype=None, gpu=None, **kw):\n super(CSRMatrixDataSource, self).__init__(_indices=indices, _indptr=indptr,\n _data=data, _shape=shape, _dtype=dtype,\n _gpu=gpu, _sparse=True, **kw)\n\n def to_chunk_op(self, *args):\n _, idx, chunk_size = args\n\n xps = cps if self._gpu else sps\n if len(self._shape) == 1:\n shape = (1, self._shape[0])\n else:\n shape = self._shape\n data = xps.csr_matrix(\n (self._data, self._indices, self._indptr), shape)\n chunk_data = data[get_chunk_slices(chunk_size, idx)]\n\n chunk_op = self.copy().reset_key()\n chunk_op._data = chunk_data.data\n chunk_op._indices = chunk_data.indices\n chunk_op._indptr = chunk_data.indptr\n chunk_shape = chunk_data.shape[1:] \\\n if len(self._shape) == 1 else chunk_data.shape\n chunk_op._shape = chunk_shape\n\n return chunk_op\n\n @property\n def indices(self):\n return self._indices\n\n @property\n def indptr(self):\n return self._indptr\n\n @property\n def data(self):\n return self._data\n\n @property\n def shape(self):\n return self._shape\n\n\ndef _from_spmatrix(spmatrix, dtype=None, chunk_size=None, gpu=None):\n if gpu is None and cp is not None and get_array_module(spmatrix) is cp:\n gpu = True\n if dtype and spmatrix.dtype != dtype:\n spmatrix = spmatrix.astype(dtype)\n spmatrix = spmatrix.tocsr()\n op = CSRMatrixDataSource(indices=spmatrix.indices, indptr=spmatrix.indptr,\n data=spmatrix.data, shape=spmatrix.shape,\n dtype=spmatrix.dtype, gpu=gpu)\n return op(spmatrix.shape, chunk_size=chunk_size)\n\n\ndef tensor(data, dtype=None, chunk_size=None, gpu=None, sparse=False):\n if isinstance(data, TENSOR_TYPE):\n if dtype is not None and data.dtype != dtype:\n return data.astype(dtype)\n return data\n elif isinstance(data, tuple) and all(isinstance(d, TENSOR_TYPE) for d in data):\n from ..merge import stack\n\n data = stack(data)\n if dtype is not None:\n data = data.astype(dtype)\n return data\n elif np.isscalar(data):\n return scalar(data, dtype=dtype)\n elif issparse(data):\n return _from_spmatrix(data, dtype=dtype, 
chunk_size=chunk_size, gpu=gpu)\n else:\n m = get_array_module(data)\n data = m.asarray(data, dtype=dtype)\n if gpu is None and cp is not None and m is cp:\n gpu = True\n\n if isinstance(data, np.ndarray):\n if data.ndim == 0:\n return scalar(data.item(), dtype=dtype)\n op = ArrayDataSource(data, dtype=dtype, gpu=gpu)\n t = op(data.shape, chunk_size=chunk_size)\n if sparse and not t.issparse():\n return t.tosparse()\n return t\n else:\n raise ValueError('Cannot create tensor by given data: {0}'.format(data))\n\n\ndef array(x, dtype=None, copy=True, ndmin=None, chunk_size=None):\n \"\"\"\n Create a tensor.\n\n Parameters\n ----------\n object : array_like\n An array, any object exposing the array interface, an object whose\n __array__ method returns an array, or any (nested) sequence.\n dtype : data-type, optional\n The desired data-type for the array. If not given, then the type will\n be determined as the minimum type required to hold the objects in the\n sequence. This argument can only be used to 'upcast' the array. For\n downcasting, use the .astype(t) method.\n copy : bool, optional\n If true (default), then the object is copied. Otherwise, a copy will\n only be made if __array__ returns a copy, if obj is a nested sequence,\n or if a copy is needed to satisfy any of the other requirements\n (`dtype`, `order`, etc.).\n ndmin : int, optional\n Specifies the minimum number of dimensions that the resulting\n array should have. Ones will be pre-pended to the shape as\n needed to meet this requirement.\n chunk_size: int, tuple, optional\n Specifies chunk size for each dimension.\n\n Returns\n -------\n out : Tensor\n An tensor object satisfying the specified requirements.\n\n See Also\n --------\n empty, empty_like, zeros, zeros_like, ones, ones_like, full, full_like\n\n Examples\n --------\n >>> import mars.tensor as mt\n\n >>> mt.array([1, 2, 3]).execute()\n array([1, 2, 3])\n\n Upcasting:\n\n >>> mt.array([1, 2, 3.0]).execute()\n array([ 1., 2., 3.])\n\n More than one dimension:\n\n >>> mt.array([[1, 2], [3, 4]]).execute()\n array([[1, 2],\n [3, 4]])\n\n Minimum dimensions 2:\n\n >>> mt.array([1, 2, 3], ndmin=2).execute()\n array([[1, 2, 3]])\n\n Type provided:\n\n >>> mt.array([1, 2, 3], dtype=complex).execute()\n array([ 1.+0.j, 2.+0.j, 3.+0.j])\n\n \"\"\"\n raw_x = x\n x = tensor(x, chunk_size=chunk_size)\n if copy and x is raw_x:\n x = Tensor(x.data)\n while ndmin is not None and x.ndim < ndmin:\n x = x[np.newaxis, :]\n if dtype is not None and x.dtype != dtype:\n x = x.astype(dtype)\n return x\n\n\ndef asarray(x, dtype=None):\n \"\"\"Convert the input to an array.\n\n Parameters\n ----------\n a : array_like\n Input data, in any form that can be converted to a tensor. This\n includes lists, lists of tuples, tuples, tuples of tuples, tuples\n of lists and tensors.\n dtype : data-type, optional\n By default, the data-type is inferred from the input data.\n\n Returns\n -------\n out : Tensor\n Tensor interpretation of `a`. No copy is performed if the input\n is already an ndarray with matching dtype and order. 
If `a` is a\n subclass of ndarray, a base class ndarray is returned.\n\n Examples\n --------\n Convert a list into an array:\n\n >>> import mars.tensor as mt\n\n >>> a = [1, 2]\n >>> mt.asarray(a).execute()\n array([1, 2])\n\n Existing arrays are not copied:\n\n >>> a = mt.array([1, 2])\n >>> mt.asarray(a) is a\n True\n\n If `dtype` is set, array is copied only if dtype does not match:\n\n >>> a = mt.array([1, 2], dtype=mt.float32)\n >>> mt.asarray(a, dtype=mt.float32) is a\n True\n >>> mt.asarray(a, dtype=mt.float64) is a\n False\n \"\"\"\n return array(x, dtype=dtype, copy=False)\n", "path": "mars/tensor/expressions/datasource/array.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\n\nfrom .... import opcodes as OperandDef\nfrom ....lib.sparse.core import issparse, get_array_module, cp, cps, sps\nfrom ....utils import on_serialize_shape, on_deserialize_shape\nfrom ....serialize import ValueType, NDArrayField, TupleField\nfrom ...core import TENSOR_TYPE, Tensor\nfrom ..utils import get_chunk_slices\nfrom .core import TensorNoInput\nfrom .scalar import scalar\n\n\nclass ArrayDataSource(TensorNoInput):\n \"\"\"\n Represents data from numpy or cupy array\n \"\"\"\n\n _op_type_ = OperandDef.TENSOR_DATA_SOURCE\n\n _data = NDArrayField('data')\n\n def __init__(self, data=None, dtype=None, gpu=None, **kw):\n if dtype is not None:\n dtype = np.dtype(dtype)\n elif data is not None:\n dtype = np.dtype(data.dtype)\n super(ArrayDataSource, self).__init__(_data=data, _dtype=dtype, _gpu=gpu, **kw)\n\n @property\n def data(self):\n return self._data\n\n def to_chunk_op(self, *args):\n _, idx, chunk_size = args\n chunk_op = self.copy().reset_key()\n chunk_op._data = self.data[get_chunk_slices(chunk_size, idx)]\n\n return chunk_op\n\n\nclass CSRMatrixDataSource(TensorNoInput):\n \"\"\"\n Represents data from sparse array include scipy sparse or cupy sparse matrix.\n \"\"\"\n\n _op_type_ = OperandDef.SPARSE_MATRIX_DATA_SOURCE\n\n _indices = NDArrayField('indices')\n _indptr = NDArrayField('indptr')\n _data = NDArrayField('data')\n _shape = TupleField('shape', ValueType.int64,\n on_serialize=on_serialize_shape, on_deserialize=on_deserialize_shape)\n\n def __init__(self, indices=None, indptr=None, data=None, shape=None,\n dtype=None, gpu=None, **kw):\n super(CSRMatrixDataSource, self).__init__(_indices=indices, _indptr=indptr,\n _data=data, _shape=shape, _dtype=dtype,\n _gpu=gpu, _sparse=True, **kw)\n\n def to_chunk_op(self, *args):\n _, idx, chunk_size = args\n\n xps = cps if self._gpu else sps\n if len(self._shape) == 1:\n shape = (1, self._shape[0])\n else:\n shape = self._shape\n data = xps.csr_matrix(\n (self._data, self._indices, self._indptr), shape)\n chunk_data = data[get_chunk_slices(chunk_size, idx)]\n\n chunk_op = self.copy().reset_key()\n chunk_op._data = chunk_data.data\n chunk_op._indices = chunk_data.indices\n chunk_op._indptr = 
chunk_data.indptr\n chunk_shape = chunk_data.shape[1:] \\\n if len(self._shape) == 1 else chunk_data.shape\n chunk_op._shape = chunk_shape\n\n return chunk_op\n\n @property\n def indices(self):\n return self._indices\n\n @property\n def indptr(self):\n return self._indptr\n\n @property\n def data(self):\n return self._data\n\n @property\n def shape(self):\n return self._shape\n\n\ndef _from_spmatrix(spmatrix, dtype=None, chunk_size=None, gpu=None):\n if gpu is None and cp is not None and get_array_module(spmatrix) is cp:\n gpu = True\n if dtype and spmatrix.dtype != dtype:\n spmatrix = spmatrix.astype(dtype)\n spmatrix = spmatrix.tocsr()\n op = CSRMatrixDataSource(indices=spmatrix.indices, indptr=spmatrix.indptr,\n data=spmatrix.data, shape=spmatrix.shape,\n dtype=spmatrix.dtype, gpu=gpu)\n return op(spmatrix.shape, chunk_size=chunk_size)\n\n\ndef tensor(data, dtype=None, chunk_size=None, gpu=None, sparse=False):\n if isinstance(data, TENSOR_TYPE):\n if dtype is not None and data.dtype != dtype:\n return data.astype(dtype)\n return data\n elif isinstance(data, (tuple, list)) and all(isinstance(d, TENSOR_TYPE) for d in data):\n from ..merge import stack\n\n data = stack(data)\n if dtype is not None:\n data = data.astype(dtype)\n return data\n elif np.isscalar(data):\n return scalar(data, dtype=dtype)\n elif issparse(data):\n return _from_spmatrix(data, dtype=dtype, chunk_size=chunk_size, gpu=gpu)\n else:\n m = get_array_module(data)\n data = m.asarray(data, dtype=dtype)\n if gpu is None and cp is not None and m is cp:\n gpu = True\n\n if isinstance(data, np.ndarray):\n if data.ndim == 0:\n return scalar(data.item(), dtype=dtype)\n op = ArrayDataSource(data, dtype=dtype, gpu=gpu)\n t = op(data.shape, chunk_size=chunk_size)\n if sparse and not t.issparse():\n return t.tosparse()\n return t\n else:\n raise ValueError('Cannot create tensor by given data: {0}'.format(data))\n\n\ndef array(x, dtype=None, copy=True, ndmin=None, chunk_size=None):\n \"\"\"\n Create a tensor.\n\n Parameters\n ----------\n object : array_like\n An array, any object exposing the array interface, an object whose\n __array__ method returns an array, or any (nested) sequence.\n dtype : data-type, optional\n The desired data-type for the array. If not given, then the type will\n be determined as the minimum type required to hold the objects in the\n sequence. This argument can only be used to 'upcast' the array. For\n downcasting, use the .astype(t) method.\n copy : bool, optional\n If true (default), then the object is copied. Otherwise, a copy will\n only be made if __array__ returns a copy, if obj is a nested sequence,\n or if a copy is needed to satisfy any of the other requirements\n (`dtype`, `order`, etc.).\n ndmin : int, optional\n Specifies the minimum number of dimensions that the resulting\n array should have. 
Ones will be pre-pended to the shape as\n needed to meet this requirement.\n chunk_size: int, tuple, optional\n Specifies chunk size for each dimension.\n\n Returns\n -------\n out : Tensor\n An tensor object satisfying the specified requirements.\n\n See Also\n --------\n empty, empty_like, zeros, zeros_like, ones, ones_like, full, full_like\n\n Examples\n --------\n >>> import mars.tensor as mt\n\n >>> mt.array([1, 2, 3]).execute()\n array([1, 2, 3])\n\n Upcasting:\n\n >>> mt.array([1, 2, 3.0]).execute()\n array([ 1., 2., 3.])\n\n More than one dimension:\n\n >>> mt.array([[1, 2], [3, 4]]).execute()\n array([[1, 2],\n [3, 4]])\n\n Minimum dimensions 2:\n\n >>> mt.array([1, 2, 3], ndmin=2).execute()\n array([[1, 2, 3]])\n\n Type provided:\n\n >>> mt.array([1, 2, 3], dtype=complex).execute()\n array([ 1.+0.j, 2.+0.j, 3.+0.j])\n\n \"\"\"\n raw_x = x\n x = tensor(x, chunk_size=chunk_size)\n if copy and x is raw_x:\n x = Tensor(x.data)\n while ndmin is not None and x.ndim < ndmin:\n x = x[np.newaxis, :]\n if dtype is not None and x.dtype != dtype:\n x = x.astype(dtype)\n return x\n\n\ndef asarray(x, dtype=None):\n \"\"\"Convert the input to an array.\n\n Parameters\n ----------\n a : array_like\n Input data, in any form that can be converted to a tensor. This\n includes lists, lists of tuples, tuples, tuples of tuples, tuples\n of lists and tensors.\n dtype : data-type, optional\n By default, the data-type is inferred from the input data.\n\n Returns\n -------\n out : Tensor\n Tensor interpretation of `a`. No copy is performed if the input\n is already an ndarray with matching dtype and order. If `a` is a\n subclass of ndarray, a base class ndarray is returned.\n\n Examples\n --------\n Convert a list into an array:\n\n >>> import mars.tensor as mt\n\n >>> a = [1, 2]\n >>> mt.asarray(a).execute()\n array([1, 2])\n\n Existing arrays are not copied:\n\n >>> a = mt.array([1, 2])\n >>> mt.asarray(a) is a\n True\n\n If `dtype` is set, array is copied only if dtype does not match:\n\n >>> a = mt.array([1, 2], dtype=mt.float32)\n >>> mt.asarray(a, dtype=mt.float32) is a\n True\n >>> mt.asarray(a, dtype=mt.float64) is a\n False\n \"\"\"\n return array(x, dtype=dtype, copy=False)\n", "path": "mars/tensor/expressions/datasource/array.py"}]}
| 3,399 | 156 |
gh_patches_debug_52467
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-1859
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Repro: logger doesn't work correctly on exception
DVC version: 0.35.5+d80137,
Platform: Linux
Method of installation: pip install from git
https://github.com/iterative/dvc/blob/54072d70b542115a78a374fa702129b6959a1d02/dvc/command/repro.py#L50-L51
These lines should be:
```
except DvcException, msg:
logger.exception(msg)
```
--- END ISSUE ---
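For context on the failure mode: `logging.Logger.exception(msg, *args, **kwargs)` takes a required message argument, so the bare `logger.exception()` call in the handler above raises a `TypeError` of its own instead of logging the caught `DvcException`. The `except DvcException, msg:` form proposed in the issue is Python 2 syntax and does not parse on Python 3; the fix applied further down keeps the Python 3 handler and simply passes an empty message. A minimal standard-library sketch of the behaviour (names here are illustrative, not DVC code):

```python
import logging

logging.basicConfig()
logger = logging.getLogger("repro-demo")

try:
    raise RuntimeError("stage failed")
except RuntimeError:
    # logger.exception() with no arguments fails with
    # "TypeError: exception() missing 1 required positional argument: 'msg'".
    # Any message works, even an empty string, and the active traceback is
    # still appended because exc_info is implied by Logger.exception().
    logger.exception("")
```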
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/command/repro.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import argparse
4 import os
5 import logging
6
7 from dvc.command.base import CmdBase, append_doc_link
8 from dvc.command.metrics import show_metrics
9 from dvc.command.status import CmdDataStatus
10 from dvc.exceptions import DvcException
11
12
13 logger = logging.getLogger(__name__)
14
15
16 class CmdRepro(CmdBase):
17 def run(self):
18 recursive = not self.args.single_item
19 saved_dir = os.path.realpath(os.curdir)
20 if self.args.cwd:
21 os.chdir(self.args.cwd)
22
23 # Dirty hack so the for loop below can at least enter once
24 if self.args.all_pipelines:
25 self.args.targets = [None]
26 elif not self.args.targets:
27 self.args.targets = self.default_targets
28
29 ret = 0
30 for target in self.args.targets:
31 try:
32 stages = self.repo.reproduce(
33 target,
34 recursive=recursive,
35 force=self.args.force,
36 dry=self.args.dry,
37 interactive=self.args.interactive,
38 pipeline=self.args.pipeline,
39 all_pipelines=self.args.all_pipelines,
40 ignore_build_cache=self.args.ignore_build_cache,
41 no_commit=self.args.no_commit,
42 )
43
44 if len(stages) == 0:
45 logger.info(CmdDataStatus.UP_TO_DATE_MSG)
46
47 if self.args.metrics:
48 metrics = self.repo.metrics.show()
49 show_metrics(metrics)
50 except DvcException:
51 logger.exception()
52 ret = 1
53 break
54
55 os.chdir(saved_dir)
56 return ret
57
58
59 def add_parser(subparsers, parent_parser):
60 REPRO_HELP = "Check for changes and reproduce DVC file and dependencies."
61 repro_parser = subparsers.add_parser(
62 "repro",
63 parents=[parent_parser],
64 description=append_doc_link(REPRO_HELP, "repro"),
65 help=REPRO_HELP,
66 formatter_class=argparse.RawDescriptionHelpFormatter,
67 )
68 repro_parser.add_argument(
69 "targets",
70 nargs="*",
71 help="DVC file to reproduce (default - 'Dvcfile').",
72 )
73 repro_parser.add_argument(
74 "-f",
75 "--force",
76 action="store_true",
77 default=False,
78 help="Reproduce even if dependencies were not changed.",
79 )
80 repro_parser.add_argument(
81 "-s",
82 "--single-item",
83 action="store_true",
84 default=False,
85 help="Reproduce only single data item without recursive dependencies "
86 "check.",
87 )
88 repro_parser.add_argument(
89 "-c",
90 "--cwd",
91 default=os.path.curdir,
92 help="Directory within your repo to reproduce from.",
93 )
94 repro_parser.add_argument(
95 "-m",
96 "--metrics",
97 action="store_true",
98 default=False,
99 help="Show metrics after reproduction.",
100 )
101 repro_parser.add_argument(
102 "--dry",
103 action="store_true",
104 default=False,
105 help="Only print the commands that would be executed without "
106 "actually executing.",
107 )
108 repro_parser.add_argument(
109 "-i",
110 "--interactive",
111 action="store_true",
112 default=False,
113 help="Ask for confirmation before reproducing each stage.",
114 )
115 repro_parser.add_argument(
116 "-p",
117 "--pipeline",
118 action="store_true",
119 default=False,
120 help="Reproduce the whole pipeline that the specified stage file "
121 "belongs to.",
122 )
123 repro_parser.add_argument(
124 "-P",
125 "--all-pipelines",
126 action="store_true",
127 default=False,
128 help="Reproduce all pipelines in the repo.",
129 )
130 repro_parser.add_argument(
131 "--ignore-build-cache",
132 action="store_true",
133 default=False,
134 help="Reproduce all descendants of a changed stage even if their "
135 "direct dependencies didn't change.",
136 )
137 repro_parser.add_argument(
138 "--no-commit",
139 action="store_true",
140 default=False,
141 help="Don't put files/directories into cache.",
142 )
143 repro_parser.set_defaults(func=CmdRepro)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dvc/command/repro.py b/dvc/command/repro.py
--- a/dvc/command/repro.py
+++ b/dvc/command/repro.py
@@ -48,7 +48,7 @@
metrics = self.repo.metrics.show()
show_metrics(metrics)
except DvcException:
- logger.exception()
+ logger.exception("")
ret = 1
break
|
{"golden_diff": "diff --git a/dvc/command/repro.py b/dvc/command/repro.py\n--- a/dvc/command/repro.py\n+++ b/dvc/command/repro.py\n@@ -48,7 +48,7 @@\n metrics = self.repo.metrics.show()\n show_metrics(metrics)\n except DvcException:\n- logger.exception()\n+ logger.exception(\"\")\n ret = 1\n break\n", "issue": "Repro: logger doesn't work correctly on exception\nDVC version: 0.35.5+d80137,\r\nPlatform: Linux\r\nMethod of installation: pip install from git\r\n\r\nhttps://github.com/iterative/dvc/blob/54072d70b542115a78a374fa702129b6959a1d02/dvc/command/repro.py#L50-L51 \r\n\r\nThis lines should be:\r\n```\r\nexcept DvcException, msg:\r\n logger.exception(msg)\r\n```\r\n\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport os\nimport logging\n\nfrom dvc.command.base import CmdBase, append_doc_link\nfrom dvc.command.metrics import show_metrics\nfrom dvc.command.status import CmdDataStatus\nfrom dvc.exceptions import DvcException\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRepro(CmdBase):\n def run(self):\n recursive = not self.args.single_item\n saved_dir = os.path.realpath(os.curdir)\n if self.args.cwd:\n os.chdir(self.args.cwd)\n\n # Dirty hack so the for loop below can at least enter once\n if self.args.all_pipelines:\n self.args.targets = [None]\n elif not self.args.targets:\n self.args.targets = self.default_targets\n\n ret = 0\n for target in self.args.targets:\n try:\n stages = self.repo.reproduce(\n target,\n recursive=recursive,\n force=self.args.force,\n dry=self.args.dry,\n interactive=self.args.interactive,\n pipeline=self.args.pipeline,\n all_pipelines=self.args.all_pipelines,\n ignore_build_cache=self.args.ignore_build_cache,\n no_commit=self.args.no_commit,\n )\n\n if len(stages) == 0:\n logger.info(CmdDataStatus.UP_TO_DATE_MSG)\n\n if self.args.metrics:\n metrics = self.repo.metrics.show()\n show_metrics(metrics)\n except DvcException:\n logger.exception()\n ret = 1\n break\n\n os.chdir(saved_dir)\n return ret\n\n\ndef add_parser(subparsers, parent_parser):\n REPRO_HELP = \"Check for changes and reproduce DVC file and dependencies.\"\n repro_parser = subparsers.add_parser(\n \"repro\",\n parents=[parent_parser],\n description=append_doc_link(REPRO_HELP, \"repro\"),\n help=REPRO_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n repro_parser.add_argument(\n \"targets\",\n nargs=\"*\",\n help=\"DVC file to reproduce (default - 'Dvcfile').\",\n )\n repro_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce even if dependencies were not changed.\",\n )\n repro_parser.add_argument(\n \"-s\",\n \"--single-item\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce only single data item without recursive dependencies \"\n \"check.\",\n )\n repro_parser.add_argument(\n \"-c\",\n \"--cwd\",\n default=os.path.curdir,\n help=\"Directory within your repo to reproduce from.\",\n )\n repro_parser.add_argument(\n \"-m\",\n \"--metrics\",\n action=\"store_true\",\n default=False,\n help=\"Show metrics after reproduction.\",\n )\n repro_parser.add_argument(\n \"--dry\",\n action=\"store_true\",\n default=False,\n help=\"Only print the commands that would be executed without \"\n \"actually executing.\",\n )\n repro_parser.add_argument(\n \"-i\",\n \"--interactive\",\n action=\"store_true\",\n default=False,\n help=\"Ask for confirmation before reproducing each stage.\",\n )\n repro_parser.add_argument(\n \"-p\",\n \"--pipeline\",\n 
action=\"store_true\",\n default=False,\n help=\"Reproduce the whole pipeline that the specified stage file \"\n \"belongs to.\",\n )\n repro_parser.add_argument(\n \"-P\",\n \"--all-pipelines\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all pipelines in the repo.\",\n )\n repro_parser.add_argument(\n \"--ignore-build-cache\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all descendants of a changed stage even if their \"\n \"direct dependencies didn't change.\",\n )\n repro_parser.add_argument(\n \"--no-commit\",\n action=\"store_true\",\n default=False,\n help=\"Don't put files/directories into cache.\",\n )\n repro_parser.set_defaults(func=CmdRepro)\n", "path": "dvc/command/repro.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport os\nimport logging\n\nfrom dvc.command.base import CmdBase, append_doc_link\nfrom dvc.command.metrics import show_metrics\nfrom dvc.command.status import CmdDataStatus\nfrom dvc.exceptions import DvcException\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdRepro(CmdBase):\n def run(self):\n recursive = not self.args.single_item\n saved_dir = os.path.realpath(os.curdir)\n if self.args.cwd:\n os.chdir(self.args.cwd)\n\n # Dirty hack so the for loop below can at least enter once\n if self.args.all_pipelines:\n self.args.targets = [None]\n elif not self.args.targets:\n self.args.targets = self.default_targets\n\n ret = 0\n for target in self.args.targets:\n try:\n stages = self.repo.reproduce(\n target,\n recursive=recursive,\n force=self.args.force,\n dry=self.args.dry,\n interactive=self.args.interactive,\n pipeline=self.args.pipeline,\n all_pipelines=self.args.all_pipelines,\n ignore_build_cache=self.args.ignore_build_cache,\n no_commit=self.args.no_commit,\n )\n\n if len(stages) == 0:\n logger.info(CmdDataStatus.UP_TO_DATE_MSG)\n\n if self.args.metrics:\n metrics = self.repo.metrics.show()\n show_metrics(metrics)\n except DvcException:\n logger.exception(\"\")\n ret = 1\n break\n\n os.chdir(saved_dir)\n return ret\n\n\ndef add_parser(subparsers, parent_parser):\n REPRO_HELP = \"Check for changes and reproduce DVC file and dependencies.\"\n repro_parser = subparsers.add_parser(\n \"repro\",\n parents=[parent_parser],\n description=append_doc_link(REPRO_HELP, \"repro\"),\n help=REPRO_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n repro_parser.add_argument(\n \"targets\",\n nargs=\"*\",\n help=\"DVC file to reproduce (default - 'Dvcfile').\",\n )\n repro_parser.add_argument(\n \"-f\",\n \"--force\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce even if dependencies were not changed.\",\n )\n repro_parser.add_argument(\n \"-s\",\n \"--single-item\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce only single data item without recursive dependencies \"\n \"check.\",\n )\n repro_parser.add_argument(\n \"-c\",\n \"--cwd\",\n default=os.path.curdir,\n help=\"Directory within your repo to reproduce from.\",\n )\n repro_parser.add_argument(\n \"-m\",\n \"--metrics\",\n action=\"store_true\",\n default=False,\n help=\"Show metrics after reproduction.\",\n )\n repro_parser.add_argument(\n \"--dry\",\n action=\"store_true\",\n default=False,\n help=\"Only print the commands that would be executed without \"\n \"actually executing.\",\n )\n repro_parser.add_argument(\n \"-i\",\n \"--interactive\",\n action=\"store_true\",\n default=False,\n help=\"Ask for confirmation before reproducing each stage.\",\n )\n 
repro_parser.add_argument(\n \"-p\",\n \"--pipeline\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce the whole pipeline that the specified stage file \"\n \"belongs to.\",\n )\n repro_parser.add_argument(\n \"-P\",\n \"--all-pipelines\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all pipelines in the repo.\",\n )\n repro_parser.add_argument(\n \"--ignore-build-cache\",\n action=\"store_true\",\n default=False,\n help=\"Reproduce all descendants of a changed stage even if their \"\n \"direct dependencies didn't change.\",\n )\n repro_parser.add_argument(\n \"--no-commit\",\n action=\"store_true\",\n default=False,\n help=\"Don't put files/directories into cache.\",\n )\n repro_parser.set_defaults(func=CmdRepro)\n", "path": "dvc/command/repro.py"}]}
| 1,563 | 86 |
gh_patches_debug_16566
|
rasdani/github-patches
|
git_diff
|
vacanza__python-holidays-372
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Revisit #211 to include FR for France
This is an appeal to reconsider the request in #211
Started using this library and I really like it, but this one exception to the [iso3166 standard](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes) is most unexpected. I'll skip a lengthy opinion on the value of standards and go directly to my question: if there was a pull request to adhere to iso3166, would it be considered? Or would that be wasted work because of reasons (backwards compatibility or anything else)?
Thanks,
rfm.
--- END ISSUE ---
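For reference, the behaviour the issue is asking for — ISO 3166-1 alpha-2 codes resolving just like the existing alpha-3 aliases — would look roughly like this; `holidays.FR` only resolves once the alias is added by the patch further down, so this is a sketch of the intended usage rather than something that runs on the released version:

```python
from datetime import date

import holidays

fra = holidays.FRA()  # alpha-3 alias, already available
fr = holidays.FR()    # alpha-2 alias requested here; added by the patch below

assert date(2020, 7, 14) in fra  # Fête nationale
assert date(2020, 7, 14) in fr
```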
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `holidays/countries/__init__.py`
Content:
```
1 # python-holidays
2 # ---------------
3 # A fast, efficient Python library for generating country, province and state
4 # specific sets of holidays on the fly. It aims to make determining whether a
5 # specific date is a holiday as fast and flexible as possible.
6 #
7 # Author: ryanss <[email protected]> (c) 2014-2017
8 # dr-prodigy <[email protected]> (c) 2017-2020
9 # Website: https://github.com/dr-prodigy/python-holidays
10 # License: MIT (see LICENSE file)
11
12 from .argentina import Argentina, AR, ARG
13 from .aruba import Aruba, AW, ABW
14 from .australia import Australia, AU, AUS
15 from .austria import Austria, AT, AUT
16 from .belarus import Belarus, BY, BLR
17 from .belgium import Belgium, BE, BEL
18 from .brazil import Brazil, BR, BRA
19 from .bulgaria import Bulgaria, BG, BLG
20 from .burundi import Burundi, BI, BDI
21 from .canada import Canada, CA, CAN
22 from .chile import Chile, CL, CHL
23 from .colombia import Colombia, CO, COL
24 from .croatia import Croatia, HR, HRV
25 from .czechia import Czechia, Czech, CZ, CZE
26 from .denmark import Denmark, DK, DNK
27 from .djibouti import Djibouti, DJ, DJI
28 from .dominican_republic import DominicanRepublic, DO, DOM
29 from .egypt import Egypt, EG, EGY
30 from .estonia import Estonia, EE, EST
31 from .european_central_bank import EuropeanCentralBank, ECB, TAR
32 from .finland import Finland, FI, FIN
33 from .france import France, FRA
34 from .germany import Germany, DE, DEU
35 from .greece import Greece, GR, GRC
36 from .honduras import Honduras, HN, HND
37 from .hongkong import HongKong, HK, HKG
38 from .hungary import Hungary, HU, HUN
39 from .iceland import Iceland, IS, ISL
40 from .india import India, IN, IND
41 from .ireland import Ireland, IE, IRL
42 from .italy import Italy, IT, ITA
43 from .israel import Israel, IL, ISR
44 from .japan import Japan, JP, JPN
45 from .kenya import Kenya, KE, KEN
46 from .korea import Korea, KR, KOR
47 from .latvia import Latvia, LV, LVA
48 from .lithuania import Lithuania, LT, LTU
49 from .luxembourg import Luxembourg, LU, LUX
50 from .mexico import Mexico, MX, MEX
51 from .morocco import Morocco, MA, MOR
52 from .netherlands import Netherlands, NL, NLD
53 from .new_zealand import NewZealand, NZ, NZL
54 from .nicaragua import Nicaragua, NI, NIC
55 from .nigeria import Nigeria, NG, NGA
56 from .norway import Norway, NO, NOR
57 from .paraguay import Paraguay, PY, PRY
58 from .peru import Peru, PE, PER
59 from .poland import Poland, Polish, PL, POL
60 from .portugal import Portugal, PT, PRT, PortugalExt, PTE
61 from .romania import Romania, RO, ROU
62 from .russia import Russia, RU, RUS
63 from .serbia import Serbia, RS, SRB
64 from .singapore import Singapore, SG, SGP
65 from .slovakia import Slovakia, Slovak, SK, SVK
66 from .slovenia import Slovenia, SI, SVN
67 from .south_africa import SouthAfrica, ZA, ZAF
68 from .spain import Spain, ES, ESP
69 from .sweden import Sweden, SE, SWE
70 from .switzerland import Switzerland, CH, CHE
71 from .turkey import Turkey, TR, TUR
72 from .ukraine import Ukraine, UA, UKR
73 from .united_arab_emirates import UnitedArabEmirates, AE, ARE
74 from .united_kingdom import UnitedKingdom, UK, GB, England, Wales, Scotland,\
75 IsleOfMan, NorthernIreland, GBR
76 from .united_states import UnitedStates, US, USA
77 from .vietnam import Vietnam, VN, VNM
78
```
Path: `holidays/countries/france.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # python-holidays
4 # ---------------
5 # A fast, efficient Python library for generating country, province and state
6 # specific sets of holidays on the fly. It aims to make determining whether a
7 # specific date is a holiday as fast and flexible as possible.
8 #
9 # Author: ryanss <[email protected]> (c) 2014-2017
10 # dr-prodigy <[email protected]> (c) 2017-2020
11 # Website: https://github.com/dr-prodigy/python-holidays
12 # License: MIT (see LICENSE file)
13
14 from datetime import date
15
16 from dateutil.easter import easter
17 from dateutil.relativedelta import relativedelta as rd
18
19 from holidays.constants import JAN, APR, MAY, JUN, JUL, AUG, SEP, OCT, \
20 NOV, DEC
21 from holidays.holiday_base import HolidayBase
22
23
24 class France(HolidayBase):
25 """Official French holidays.
26
27 Some provinces have specific holidays, only those are included in the
28 PROVINCES, because these provinces have different administrative status,
29 which makes it difficult to enumerate.
30
31 For religious holidays usually happening on Sundays (Easter, Pentecost),
32 only the following Monday is considered a holiday.
33
34 Primary sources:
35 https://fr.wikipedia.org/wiki/Fêtes_et_jours_fériés_en_France
36 https://www.service-public.fr/particuliers/vosdroits/F2405
37 """
38
39 PROVINCES = ['Métropole', 'Alsace-Moselle', 'Guadeloupe', 'Guyane',
40 'Martinique', 'Mayotte', 'Nouvelle-Calédonie', 'La Réunion',
41 'Polynésie Française', 'Saint-Barthélémy', 'Saint-Martin',
42 'Wallis-et-Futuna']
43
44 def __init__(self, **kwargs):
45 self.country = 'FR'
46 self.prov = kwargs.pop('prov', 'Métropole')
47 HolidayBase.__init__(self, **kwargs)
48
49 def _populate(self, year):
50 # Civil holidays
51 if year > 1810:
52 self[date(year, JAN, 1)] = "Jour de l'an"
53
54 if year > 1919:
55 name = 'Fête du Travail'
56 if year <= 1948:
57 name += ' et de la Concorde sociale'
58 self[date(year, MAY, 1)] = name
59
60 if (1953 <= year <= 1959) or year > 1981:
61 self[date(year, MAY, 8)] = 'Armistice 1945'
62
63 if year >= 1880:
64 self[date(year, JUL, 14)] = 'Fête nationale'
65
66 if year >= 1918:
67 self[date(year, NOV, 11)] = 'Armistice 1918'
68
69 # Religious holidays
70 if self.prov in ['Alsace-Moselle', 'Guadeloupe', 'Guyane',
71 'Martinique', 'Polynésie Française']:
72 self[easter(year) - rd(days=2)] = 'Vendredi saint'
73
74 if self.prov == 'Alsace-Moselle':
75 self[date(year, DEC, 26)] = 'Deuxième jour de Noël'
76
77 if year >= 1886:
78 self[easter(year) + rd(days=1)] = 'Lundi de Pâques'
79 self[easter(year) + rd(days=50)] = 'Lundi de Pentecôte'
80
81 if year >= 1802:
82 self[easter(year) + rd(days=39)] = 'Ascension'
83 self[date(year, AUG, 15)] = 'Assomption'
84 self[date(year, NOV, 1)] = 'Toussaint'
85
86 name = 'Noël'
87 if self.prov == 'Alsace-Moselle':
88 name = 'Premier jour de ' + name
89 self[date(year, DEC, 25)] = name
90
91 # Non-metropolitan holidays (starting dates missing)
92 if self.prov == 'Mayotte':
93 self[date(year, APR, 27)] = "Abolition de l'esclavage"
94
95 if self.prov == 'Wallis-et-Futuna':
96 self[date(year, APR, 28)] = 'Saint Pierre Chanel'
97
98 if self.prov == 'Martinique':
99 self[date(year, MAY, 22)] = "Abolition de l'esclavage"
100
101 if self.prov in ['Guadeloupe', 'Saint-Martin']:
102 self[date(year, MAY, 27)] = "Abolition de l'esclavage"
103
104 if self.prov == 'Guyane':
105 self[date(year, JUN, 10)] = "Abolition de l'esclavage"
106
107 if self.prov == 'Polynésie Française':
108 self[date(year, JUN, 29)] = "Fête de l'autonomie"
109
110 if self.prov in ['Guadeloupe', 'Martinique']:
111 self[date(year, JUL, 21)] = 'Fête Victor Schoelcher'
112
113 if self.prov == 'Wallis-et-Futuna':
114 self[date(year, JUL, 29)] = 'Fête du Territoire'
115
116 if self.prov == 'Nouvelle-Calédonie':
117 self[date(year, SEP, 24)] = 'Fête de la Citoyenneté'
118
119 if self.prov == 'Saint-Barthélémy':
120 self[date(year, OCT, 9)] = "Abolition de l'esclavage"
121
122 if self.prov == 'La Réunion' and year >= 1981:
123 self[date(year, DEC, 20)] = "Abolition de l'esclavage"
124
125
126 # FR already exists (Friday), we don't want to mess it up
127 class FRA(France):
128 pass
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/holidays/countries/__init__.py b/holidays/countries/__init__.py
--- a/holidays/countries/__init__.py
+++ b/holidays/countries/__init__.py
@@ -30,7 +30,7 @@
from .estonia import Estonia, EE, EST
from .european_central_bank import EuropeanCentralBank, ECB, TAR
from .finland import Finland, FI, FIN
-from .france import France, FRA
+from .france import France, FR, FRA
from .germany import Germany, DE, DEU
from .greece import Greece, GR, GRC
from .honduras import Honduras, HN, HND
diff --git a/holidays/countries/france.py b/holidays/countries/france.py
--- a/holidays/countries/france.py
+++ b/holidays/countries/france.py
@@ -123,6 +123,10 @@
self[date(year, DEC, 20)] = "Abolition de l'esclavage"
-# FR already exists (Friday), we don't want to mess it up
+# *Warning* FR is also used by dateutlis (Friday), so be careful with its use
+class FR(France):
+ pass
+
+
class FRA(France):
pass
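One nuance behind the warning comment kept in the patch: `dateutil` also exports a name `FR` (its Friday weekday constant), so the two names should be imported explicitly rather than star-imported together. A small sketch of keeping them apart, assuming the patch above is applied:

```python
from dateutil.relativedelta import FR as FRIDAY  # dateutil's Friday constant
import holidays

next_friday_rule = FRIDAY(+1)    # dateutil weekday usage is unaffected
french_holidays = holidays.FR()  # new ISO 3166-1 alpha-2 alias for France
```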
|
{"golden_diff": "diff --git a/holidays/countries/__init__.py b/holidays/countries/__init__.py\n--- a/holidays/countries/__init__.py\n+++ b/holidays/countries/__init__.py\n@@ -30,7 +30,7 @@\n from .estonia import Estonia, EE, EST\n from .european_central_bank import EuropeanCentralBank, ECB, TAR\n from .finland import Finland, FI, FIN\n-from .france import France, FRA\n+from .france import France, FR, FRA\n from .germany import Germany, DE, DEU\n from .greece import Greece, GR, GRC\n from .honduras import Honduras, HN, HND\ndiff --git a/holidays/countries/france.py b/holidays/countries/france.py\n--- a/holidays/countries/france.py\n+++ b/holidays/countries/france.py\n@@ -123,6 +123,10 @@\n self[date(year, DEC, 20)] = \"Abolition de l'esclavage\"\n \n \n-# FR already exists (Friday), we don't want to mess it up\n+# *Warning* FR is also used by dateutlis (Friday), so be careful with its use\n+class FR(France):\n+ pass\n+\n+\n class FRA(France):\n pass\n", "issue": "Revisit #211 to include FR for France\nThis is an appeal to reconsider the request in #211 \r\n\r\nStarted using this library and I really like it, but this one exception to the [iso3166 standard](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes) is most unexpected. I'll skip a lengthy opinion on the value of standards and go directly to my question: if there was a pull request to adhere to iso3166, would it be considered? Or would that be wasted work because of reasons (backwards compatibility or anything else)?\r\n\r\nThanks,\r\nrfm.\n", "before_files": [{"content": "# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Author: ryanss <[email protected]> (c) 2014-2017\n# dr-prodigy <[email protected]> (c) 2017-2020\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom .argentina import Argentina, AR, ARG\nfrom .aruba import Aruba, AW, ABW\nfrom .australia import Australia, AU, AUS\nfrom .austria import Austria, AT, AUT\nfrom .belarus import Belarus, BY, BLR\nfrom .belgium import Belgium, BE, BEL\nfrom .brazil import Brazil, BR, BRA\nfrom .bulgaria import Bulgaria, BG, BLG\nfrom .burundi import Burundi, BI, BDI\nfrom .canada import Canada, CA, CAN\nfrom .chile import Chile, CL, CHL\nfrom .colombia import Colombia, CO, COL\nfrom .croatia import Croatia, HR, HRV\nfrom .czechia import Czechia, Czech, CZ, CZE\nfrom .denmark import Denmark, DK, DNK\nfrom .djibouti import Djibouti, DJ, DJI\nfrom .dominican_republic import DominicanRepublic, DO, DOM\nfrom .egypt import Egypt, EG, EGY\nfrom .estonia import Estonia, EE, EST\nfrom .european_central_bank import EuropeanCentralBank, ECB, TAR\nfrom .finland import Finland, FI, FIN\nfrom .france import France, FRA\nfrom .germany import Germany, DE, DEU\nfrom .greece import Greece, GR, GRC\nfrom .honduras import Honduras, HN, HND\nfrom .hongkong import HongKong, HK, HKG\nfrom .hungary import Hungary, HU, HUN\nfrom .iceland import Iceland, IS, ISL\nfrom .india import India, IN, IND\nfrom .ireland import Ireland, IE, IRL\nfrom .italy import Italy, IT, ITA\nfrom .israel import Israel, IL, ISR\nfrom .japan import Japan, JP, JPN\nfrom .kenya import Kenya, KE, KEN\nfrom .korea import Korea, KR, KOR\nfrom .latvia import Latvia, LV, LVA\nfrom .lithuania import Lithuania, LT, LTU\nfrom .luxembourg import Luxembourg, LU, LUX\nfrom .mexico 
import Mexico, MX, MEX\nfrom .morocco import Morocco, MA, MOR\nfrom .netherlands import Netherlands, NL, NLD\nfrom .new_zealand import NewZealand, NZ, NZL\nfrom .nicaragua import Nicaragua, NI, NIC\nfrom .nigeria import Nigeria, NG, NGA\nfrom .norway import Norway, NO, NOR\nfrom .paraguay import Paraguay, PY, PRY\nfrom .peru import Peru, PE, PER\nfrom .poland import Poland, Polish, PL, POL\nfrom .portugal import Portugal, PT, PRT, PortugalExt, PTE\nfrom .romania import Romania, RO, ROU\nfrom .russia import Russia, RU, RUS\nfrom .serbia import Serbia, RS, SRB\nfrom .singapore import Singapore, SG, SGP\nfrom .slovakia import Slovakia, Slovak, SK, SVK\nfrom .slovenia import Slovenia, SI, SVN\nfrom .south_africa import SouthAfrica, ZA, ZAF\nfrom .spain import Spain, ES, ESP\nfrom .sweden import Sweden, SE, SWE\nfrom .switzerland import Switzerland, CH, CHE\nfrom .turkey import Turkey, TR, TUR\nfrom .ukraine import Ukraine, UA, UKR\nfrom .united_arab_emirates import UnitedArabEmirates, AE, ARE\nfrom .united_kingdom import UnitedKingdom, UK, GB, England, Wales, Scotland,\\\n IsleOfMan, NorthernIreland, GBR\nfrom .united_states import UnitedStates, US, USA\nfrom .vietnam import Vietnam, VN, VNM\n", "path": "holidays/countries/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Author: ryanss <[email protected]> (c) 2014-2017\n# dr-prodigy <[email protected]> (c) 2017-2020\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom datetime import date\n\nfrom dateutil.easter import easter\nfrom dateutil.relativedelta import relativedelta as rd\n\nfrom holidays.constants import JAN, APR, MAY, JUN, JUL, AUG, SEP, OCT, \\\n NOV, DEC\nfrom holidays.holiday_base import HolidayBase\n\n\nclass France(HolidayBase):\n \"\"\"Official French holidays.\n\n Some provinces have specific holidays, only those are included in the\n PROVINCES, because these provinces have different administrative status,\n which makes it difficult to enumerate.\n\n For religious holidays usually happening on Sundays (Easter, Pentecost),\n only the following Monday is considered a holiday.\n\n Primary sources:\n https://fr.wikipedia.org/wiki/F\u00eates_et_jours_f\u00e9ri\u00e9s_en_France\n https://www.service-public.fr/particuliers/vosdroits/F2405\n \"\"\"\n\n PROVINCES = ['M\u00e9tropole', 'Alsace-Moselle', 'Guadeloupe', 'Guyane',\n 'Martinique', 'Mayotte', 'Nouvelle-Cal\u00e9donie', 'La R\u00e9union',\n 'Polyn\u00e9sie Fran\u00e7aise', 'Saint-Barth\u00e9l\u00e9my', 'Saint-Martin',\n 'Wallis-et-Futuna']\n\n def __init__(self, **kwargs):\n self.country = 'FR'\n self.prov = kwargs.pop('prov', 'M\u00e9tropole')\n HolidayBase.__init__(self, **kwargs)\n\n def _populate(self, year):\n # Civil holidays\n if year > 1810:\n self[date(year, JAN, 1)] = \"Jour de l'an\"\n\n if year > 1919:\n name = 'F\u00eate du Travail'\n if year <= 1948:\n name += ' et de la Concorde sociale'\n self[date(year, MAY, 1)] = name\n\n if (1953 <= year <= 1959) or year > 1981:\n self[date(year, MAY, 8)] = 'Armistice 1945'\n\n if year >= 1880:\n self[date(year, JUL, 14)] = 'F\u00eate nationale'\n\n if year >= 1918:\n self[date(year, NOV, 11)] = 'Armistice 1918'\n\n # Religious holidays\n if self.prov in ['Alsace-Moselle', 'Guadeloupe', 'Guyane',\n 'Martinique', 
'Polyn\u00e9sie Fran\u00e7aise']:\n self[easter(year) - rd(days=2)] = 'Vendredi saint'\n\n if self.prov == 'Alsace-Moselle':\n self[date(year, DEC, 26)] = 'Deuxi\u00e8me jour de No\u00ebl'\n\n if year >= 1886:\n self[easter(year) + rd(days=1)] = 'Lundi de P\u00e2ques'\n self[easter(year) + rd(days=50)] = 'Lundi de Pentec\u00f4te'\n\n if year >= 1802:\n self[easter(year) + rd(days=39)] = 'Ascension'\n self[date(year, AUG, 15)] = 'Assomption'\n self[date(year, NOV, 1)] = 'Toussaint'\n\n name = 'No\u00ebl'\n if self.prov == 'Alsace-Moselle':\n name = 'Premier jour de ' + name\n self[date(year, DEC, 25)] = name\n\n # Non-metropolitan holidays (starting dates missing)\n if self.prov == 'Mayotte':\n self[date(year, APR, 27)] = \"Abolition de l'esclavage\"\n\n if self.prov == 'Wallis-et-Futuna':\n self[date(year, APR, 28)] = 'Saint Pierre Chanel'\n\n if self.prov == 'Martinique':\n self[date(year, MAY, 22)] = \"Abolition de l'esclavage\"\n\n if self.prov in ['Guadeloupe', 'Saint-Martin']:\n self[date(year, MAY, 27)] = \"Abolition de l'esclavage\"\n\n if self.prov == 'Guyane':\n self[date(year, JUN, 10)] = \"Abolition de l'esclavage\"\n\n if self.prov == 'Polyn\u00e9sie Fran\u00e7aise':\n self[date(year, JUN, 29)] = \"F\u00eate de l'autonomie\"\n\n if self.prov in ['Guadeloupe', 'Martinique']:\n self[date(year, JUL, 21)] = 'F\u00eate Victor Schoelcher'\n\n if self.prov == 'Wallis-et-Futuna':\n self[date(year, JUL, 29)] = 'F\u00eate du Territoire'\n\n if self.prov == 'Nouvelle-Cal\u00e9donie':\n self[date(year, SEP, 24)] = 'F\u00eate de la Citoyennet\u00e9'\n\n if self.prov == 'Saint-Barth\u00e9l\u00e9my':\n self[date(year, OCT, 9)] = \"Abolition de l'esclavage\"\n\n if self.prov == 'La R\u00e9union' and year >= 1981:\n self[date(year, DEC, 20)] = \"Abolition de l'esclavage\"\n\n\n# FR already exists (Friday), we don't want to mess it up\nclass FRA(France):\n pass\n", "path": "holidays/countries/france.py"}], "after_files": [{"content": "# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. 
It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Author: ryanss <[email protected]> (c) 2014-2017\n# dr-prodigy <[email protected]> (c) 2017-2020\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom .argentina import Argentina, AR, ARG\nfrom .aruba import Aruba, AW, ABW\nfrom .australia import Australia, AU, AUS\nfrom .austria import Austria, AT, AUT\nfrom .belarus import Belarus, BY, BLR\nfrom .belgium import Belgium, BE, BEL\nfrom .brazil import Brazil, BR, BRA\nfrom .bulgaria import Bulgaria, BG, BLG\nfrom .burundi import Burundi, BI, BDI\nfrom .canada import Canada, CA, CAN\nfrom .chile import Chile, CL, CHL\nfrom .colombia import Colombia, CO, COL\nfrom .croatia import Croatia, HR, HRV\nfrom .czechia import Czechia, Czech, CZ, CZE\nfrom .denmark import Denmark, DK, DNK\nfrom .djibouti import Djibouti, DJ, DJI\nfrom .dominican_republic import DominicanRepublic, DO, DOM\nfrom .egypt import Egypt, EG, EGY\nfrom .estonia import Estonia, EE, EST\nfrom .european_central_bank import EuropeanCentralBank, ECB, TAR\nfrom .finland import Finland, FI, FIN\nfrom .france import France, FR, FRA\nfrom .germany import Germany, DE, DEU\nfrom .greece import Greece, GR, GRC\nfrom .honduras import Honduras, HN, HND\nfrom .hongkong import HongKong, HK, HKG\nfrom .hungary import Hungary, HU, HUN\nfrom .iceland import Iceland, IS, ISL\nfrom .india import India, IN, IND\nfrom .ireland import Ireland, IE, IRL\nfrom .italy import Italy, IT, ITA\nfrom .israel import Israel, IL, ISR\nfrom .japan import Japan, JP, JPN\nfrom .kenya import Kenya, KE, KEN\nfrom .korea import Korea, KR, KOR\nfrom .latvia import Latvia, LV, LVA\nfrom .lithuania import Lithuania, LT, LTU\nfrom .luxembourg import Luxembourg, LU, LUX\nfrom .mexico import Mexico, MX, MEX\nfrom .morocco import Morocco, MA, MOR\nfrom .netherlands import Netherlands, NL, NLD\nfrom .new_zealand import NewZealand, NZ, NZL\nfrom .nicaragua import Nicaragua, NI, NIC\nfrom .nigeria import Nigeria, NG, NGA\nfrom .norway import Norway, NO, NOR\nfrom .paraguay import Paraguay, PY, PRY\nfrom .peru import Peru, PE, PER\nfrom .poland import Poland, Polish, PL, POL\nfrom .portugal import Portugal, PT, PRT, PortugalExt, PTE\nfrom .romania import Romania, RO, ROU\nfrom .russia import Russia, RU, RUS\nfrom .serbia import Serbia, RS, SRB\nfrom .singapore import Singapore, SG, SGP\nfrom .slovakia import Slovakia, Slovak, SK, SVK\nfrom .slovenia import Slovenia, SI, SVN\nfrom .south_africa import SouthAfrica, ZA, ZAF\nfrom .spain import Spain, ES, ESP\nfrom .sweden import Sweden, SE, SWE\nfrom .switzerland import Switzerland, CH, CHE\nfrom .turkey import Turkey, TR, TUR\nfrom .ukraine import Ukraine, UA, UKR\nfrom .united_arab_emirates import UnitedArabEmirates, AE, ARE\nfrom .united_kingdom import UnitedKingdom, UK, GB, England, Wales, Scotland,\\\n IsleOfMan, NorthernIreland, GBR\nfrom .united_states import UnitedStates, US, USA\nfrom .vietnam import Vietnam, VN, VNM\n", "path": "holidays/countries/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. 
It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Author: ryanss <[email protected]> (c) 2014-2017\n# dr-prodigy <[email protected]> (c) 2017-2020\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom datetime import date\n\nfrom dateutil.easter import easter\nfrom dateutil.relativedelta import relativedelta as rd\n\nfrom holidays.constants import JAN, APR, MAY, JUN, JUL, AUG, SEP, OCT, \\\n NOV, DEC\nfrom holidays.holiday_base import HolidayBase\n\n\nclass France(HolidayBase):\n \"\"\"Official French holidays.\n\n Some provinces have specific holidays, only those are included in the\n PROVINCES, because these provinces have different administrative status,\n which makes it difficult to enumerate.\n\n For religious holidays usually happening on Sundays (Easter, Pentecost),\n only the following Monday is considered a holiday.\n\n Primary sources:\n https://fr.wikipedia.org/wiki/F\u00eates_et_jours_f\u00e9ri\u00e9s_en_France\n https://www.service-public.fr/particuliers/vosdroits/F2405\n \"\"\"\n\n PROVINCES = ['M\u00e9tropole', 'Alsace-Moselle', 'Guadeloupe', 'Guyane',\n 'Martinique', 'Mayotte', 'Nouvelle-Cal\u00e9donie', 'La R\u00e9union',\n 'Polyn\u00e9sie Fran\u00e7aise', 'Saint-Barth\u00e9l\u00e9my', 'Saint-Martin',\n 'Wallis-et-Futuna']\n\n def __init__(self, **kwargs):\n self.country = 'FR'\n self.prov = kwargs.pop('prov', 'M\u00e9tropole')\n HolidayBase.__init__(self, **kwargs)\n\n def _populate(self, year):\n # Civil holidays\n if year > 1810:\n self[date(year, JAN, 1)] = \"Jour de l'an\"\n\n if year > 1919:\n name = 'F\u00eate du Travail'\n if year <= 1948:\n name += ' et de la Concorde sociale'\n self[date(year, MAY, 1)] = name\n\n if (1953 <= year <= 1959) or year > 1981:\n self[date(year, MAY, 8)] = 'Armistice 1945'\n\n if year >= 1880:\n self[date(year, JUL, 14)] = 'F\u00eate nationale'\n\n if year >= 1918:\n self[date(year, NOV, 11)] = 'Armistice 1918'\n\n # Religious holidays\n if self.prov in ['Alsace-Moselle', 'Guadeloupe', 'Guyane',\n 'Martinique', 'Polyn\u00e9sie Fran\u00e7aise']:\n self[easter(year) - rd(days=2)] = 'Vendredi saint'\n\n if self.prov == 'Alsace-Moselle':\n self[date(year, DEC, 26)] = 'Deuxi\u00e8me jour de No\u00ebl'\n\n if year >= 1886:\n self[easter(year) + rd(days=1)] = 'Lundi de P\u00e2ques'\n self[easter(year) + rd(days=50)] = 'Lundi de Pentec\u00f4te'\n\n if year >= 1802:\n self[easter(year) + rd(days=39)] = 'Ascension'\n self[date(year, AUG, 15)] = 'Assomption'\n self[date(year, NOV, 1)] = 'Toussaint'\n\n name = 'No\u00ebl'\n if self.prov == 'Alsace-Moselle':\n name = 'Premier jour de ' + name\n self[date(year, DEC, 25)] = name\n\n # Non-metropolitan holidays (starting dates missing)\n if self.prov == 'Mayotte':\n self[date(year, APR, 27)] = \"Abolition de l'esclavage\"\n\n if self.prov == 'Wallis-et-Futuna':\n self[date(year, APR, 28)] = 'Saint Pierre Chanel'\n\n if self.prov == 'Martinique':\n self[date(year, MAY, 22)] = \"Abolition de l'esclavage\"\n\n if self.prov in ['Guadeloupe', 'Saint-Martin']:\n self[date(year, MAY, 27)] = \"Abolition de l'esclavage\"\n\n if self.prov == 'Guyane':\n self[date(year, JUN, 10)] = \"Abolition de l'esclavage\"\n\n if self.prov == 'Polyn\u00e9sie Fran\u00e7aise':\n self[date(year, JUN, 29)] = \"F\u00eate de l'autonomie\"\n\n if self.prov in ['Guadeloupe', 'Martinique']:\n self[date(year, JUL, 21)] = 'F\u00eate Victor Schoelcher'\n\n if self.prov == 'Wallis-et-Futuna':\n self[date(year, JUL, 29)] = 'F\u00eate du 
Territoire'\n\n if self.prov == 'Nouvelle-Cal\u00e9donie':\n self[date(year, SEP, 24)] = 'F\u00eate de la Citoyennet\u00e9'\n\n if self.prov == 'Saint-Barth\u00e9l\u00e9my':\n self[date(year, OCT, 9)] = \"Abolition de l'esclavage\"\n\n if self.prov == 'La R\u00e9union' and year >= 1981:\n self[date(year, DEC, 20)] = \"Abolition de l'esclavage\"\n\n\n# *Warning* FR is also used by dateutlis (Friday), so be careful with its use\nclass FR(France):\n pass\n\n\nclass FRA(France):\n pass\n", "path": "holidays/countries/france.py"}]}
| 3,213 | 293 |
gh_patches_debug_36285
|
rasdani/github-patches
|
git_diff
|
dynaconf__dynaconf-231
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug]Clean Python pip install problem
**Describe the bug**
If starting a new Python installation, DynaConf 2.1.0 will not work successfully. It appears to be choking while reading a YAML file. The only workaround I have found is to roll back to 2.0.4.
**To Reproduce**
Steps to reproduce the behavior:
I have put together a very simple project that will demonstrate the problem, see the attached zip file below. Just make sure you are using a brand new installation of Python - System, VirtualEnv or Conda Environment - problem occurs in all cases. Just for clarity, I use Conda.
pip install -r requirements.txt
python dc-error.py
**Expected behavior**
The program should run and print out the setting from the error.yaml file.
**Debug output**
C:\ProgramData\Anaconda3\envs\DynaConfError\python.exe C:/Development/DynaConfError/dc-error.py
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\DynaConfError\lib\site-packages\dynaconf\loaders\base.py", line 110, in _read
data = source_data[env.lower()]
KeyError: 'dynaconf'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Development/DynaConfError/dc-error.py", line 11, in <module>
settings.load_file(path='error.yaml', silent=False)
File "C:\ProgramData\Anaconda3\envs\DynaConfError\lib\site-packages\dynaconf\base.py", line 908, in load_file
filename=path,
File "C:\ProgramData\Anaconda3\envs\DynaConfError\lib\site-packages\dynaconf\loaders\__init__.py", line 116, in settings_loader
obj, filename=mod_file, env=env, silent=silent, key=key
File "C:\ProgramData\Anaconda3\envs\DynaConfError\lib\site-packages\dynaconf\loaders\yaml_loader.py", line 50, in load
loader.load(filename=filename, key=key, silent=silent)
File "C:\ProgramData\Anaconda3\envs\DynaConfError\lib\site-packages\dynaconf\loaders\base.py", line 71, in load
self._read(files, env_list, silent, key)
File "C:\ProgramData\Anaconda3\envs\DynaConfError\lib\site-packages\dynaconf\loaders\base.py", line 125, in _read
raise KeyError(message)
KeyError: 'yaml_loader: dynaconf env not defined in C:\\Development\\DynaConfError\\.\\error.yaml'
**Environment (please complete the following information):**
- OS: Windows 10 Pro, Ubuntu 18.04, CentOS 7
- Dynaconf Version 2.1.0
- Frameworks in use: None
**Additional context**
Interestingly, if you already had an older version of DynaConf installed in a python environment and update it, this code will execute and work as expected.
[DynaConfError.zip](https://github.com/rochacbruno/dynaconf/files/3598272/DynaConfError.zip)
--- END ISSUE ---
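A minimal reproduction consistent with the traceback above — the reporter's actual `dc-error.py` and `error.yaml` are only in the attached zip, so the contents below are an assumption about their shape rather than the exact files:

```python
# dc-error.py (reconstructed sketch)
from dynaconf import LazySettings

# error.yaml is assumed to hold a single top-level env section, e.g.:
#   default:
#     greeting: "hello from yaml"
settings = LazySettings()
settings.load_file(path="error.yaml", silent=False)  # KeyError on 2.1.0, works on 2.0.4
print(settings.GREETING)
```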
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dynaconf/loaders/base.py`
Content:
```
1 import io
2 import os
3
4 from dynaconf.utils import build_env_list
5 from dynaconf.utils import ensure_a_list
6 from dynaconf.utils import raw_logger
7
8
9 logger = raw_logger()
10
11
12 class BaseLoader(object):
13 """Base loader for dynaconf source files.
14
15 :param obj: {[LazySettings]} -- [Dynaconf settings]
16 :param env: {[string]} -- [the current env to be loaded defaults to
17 [development]]
18 :param identifier: {[string]} -- [identifier ini, yaml, json, py, toml]
19 :param extensions: {[list]} -- [List of extensions with dots ['.a', '.b']]
20 :param file_reader: {[callable]} -- [reads file return dict]
21 :param string_reader: {[callable]} -- [reads string return dict]
22 """
23
24 def __init__(
25 self, obj, env, identifier, extensions, file_reader, string_reader
26 ):
27 """Instantiates a loader for different sources"""
28 self.obj = obj
29 self.env = env or obj.current_env
30 self.identifier = identifier
31 self.extensions = extensions
32 self.file_reader = file_reader
33 self.string_reader = string_reader
34
35 @staticmethod
36 def warn_not_installed(obj, identifier): # pragma: no cover
37 if identifier not in obj._not_installed_warnings:
38 logger.warning(
39 "%(ident)s support is not installed in your environment. "
40 "`pip install dynaconf[%(ident)s]`",
41 {"ident": identifier},
42 )
43 obj._not_installed_warnings.append(identifier)
44
45 def load(self, filename=None, key=None, silent=True):
46 """
47 Reads and loads in to `self.obj` a single key or all keys from source
48
49 :param filename: Optional filename to load
50 :param key: if provided load a single key
51 :param silent: if load erros should be silenced
52 """
53 filename = filename or self.obj.get(self.identifier.upper())
54 if not filename:
55 return
56
57 if not isinstance(filename, (list, tuple)):
58 split_files = ensure_a_list(filename)
59 if all([f.endswith(self.extensions) for f in split_files]): # noqa
60 files = split_files # it is a ['file.ext', ...]
61 else: # it is a single config as string
62 files = [filename]
63 else: # it is already a list/tuple
64 files = filename
65
66 self.obj._loaded_files.extend(files)
67
68 env_list = build_env_list(self.obj, self.env)
69
70 # load all envs
71 self._read(files, env_list, silent, key)
72
73 def _read(self, files, envs, silent=True, key=None):
74 for source_file in files:
75 if source_file.endswith(self.extensions):
76 try:
77 with io.open(
78 source_file,
79 encoding=self.obj.get(
80 "ENCODING_FOR_DYNACONF", "utf-8"
81 ),
82 ) as open_file:
83 source_data = self.file_reader(open_file)
84 self.obj.logger.debug(
85 "{}_loader: {}".format(self.identifier, source_file)
86 )
87 except IOError:
88 self.obj.logger.debug(
89 "{}_loader: {} (Ignored, file not Found)".format(
90 self.identifier, source_file
91 )
92 )
93 source_data = None
94 else:
95 # for tests it is possible to pass string
96 source_data = self.string_reader(source_file)
97
98 if not source_data:
99 continue
100
101 # env name is checked in lower
102 source_data = {
103 k.lower(): value for k, value in source_data.items()
104 }
105
106 for env in envs:
107
108 data = {}
109 try:
110 data = source_data[env.lower()]
111 except KeyError:
112 if env not in (
113 self.obj.get("ENVVAR_PREFIX_FOR_DYNACONF")
114 or "DYNACONF",
115 "GLOBAL",
116 ):
117 message = "%s_loader: %s env not defined in %s" % (
118 self.identifier,
119 env,
120 source_file,
121 )
122 if silent:
123 self.obj.logger.warning(message)
124 else:
125 raise KeyError(message)
126 continue
127
128 if (
129 env.lower()
130 != self.obj.get("DEFAULT_ENV_FOR_DYNACONF").lower()
131 ):
132 identifier = "{0}_{1}".format(self.identifier, env.lower())
133 else:
134 identifier = self.identifier
135
136 # data 1st level keys should be transformed to upper case.
137 data = {k.upper(): v for k, v in data.items()}
138 if key:
139 key = key.upper()
140
141 is_secret = "secret" in source_file
142
143 self.obj.logger.debug(
144 "{}_loader: {}[{}]{}".format(
145 self.identifier,
146 os.path.split(source_file)[-1],
147 env.lower(),
148 list(data.keys()) if is_secret else data,
149 )
150 )
151
152 if not key:
153 self.obj.update(
154 data, loader_identifier=identifier, is_secret=is_secret
155 )
156 elif key in data:
157 self.obj.set(
158 key,
159 data.get(key),
160 loader_identifier=identifier,
161 is_secret=is_secret,
162 )
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dynaconf/loaders/base.py b/dynaconf/loaders/base.py
--- a/dynaconf/loaders/base.py
+++ b/dynaconf/loaders/base.py
@@ -103,17 +103,27 @@
k.lower(): value for k, value in source_data.items()
}
- for env in envs:
+ # all lower case for comparison
+ base_envs = [
+ # DYNACONF or MYPROGRAM
+ (self.obj.get("ENVVAR_PREFIX_FOR_DYNACONF") or "").lower(),
+ # DEFAULT
+ self.obj.get("DEFAULT_ENV_FOR_DYNACONF").lower(),
+ # default active env unless ENV_FOR_DYNACONF is changed
+ "development",
+ # backwards compatibility for global
+ "dynaconf",
+ # global that rules all
+ "global",
+ ]
+ for env in envs:
+ env = env.lower() # lower for better comparison
data = {}
try:
- data = source_data[env.lower()]
+ data = source_data[env]
except KeyError:
- if env not in (
- self.obj.get("ENVVAR_PREFIX_FOR_DYNACONF")
- or "DYNACONF",
- "GLOBAL",
- ):
+ if env not in base_envs:
message = "%s_loader: %s env not defined in %s" % (
self.identifier,
env,
@@ -125,11 +135,8 @@
raise KeyError(message)
continue
- if (
- env.lower()
- != self.obj.get("DEFAULT_ENV_FOR_DYNACONF").lower()
- ):
- identifier = "{0}_{1}".format(self.identifier, env.lower())
+ if env != self.obj.get("DEFAULT_ENV_FOR_DYNACONF").lower():
+ identifier = "{0}_{1}".format(self.identifier, env)
else:
identifier = self.identifier
@@ -144,7 +151,7 @@
"{}_loader: {}[{}]{}".format(
self.identifier,
os.path.split(source_file)[-1],
- env.lower(),
+ env,
list(data.keys()) if is_secret else data,
)
)
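The core of the change is that the known "base" environments are collected once, lower-cased, and the current env is lower-cased before both the data lookup and the membership test, so a prefix env such as `DYNACONF`/`dynaconf` can no longer slip past the check through a case mismatch. A standalone sketch of the patched predicate (illustrative only; `settings` stands in for the loader's `self.obj`):

```python
def missing_env_should_raise(env, settings):
    """Return True when a missing env section is a real error (mirrors the patch)."""
    base_envs = {
        (settings.get("ENVVAR_PREFIX_FOR_DYNACONF") or "").lower(),  # e.g. "dynaconf"
        settings.get("DEFAULT_ENV_FOR_DYNACONF").lower(),            # usually "default"
        "development",  # default active env
        "dynaconf",     # backwards compatibility
        "global",       # global section applied everywhere
    }
    return env.lower() not in base_envs
```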
|
{"golden_diff": "diff --git a/dynaconf/loaders/base.py b/dynaconf/loaders/base.py\n--- a/dynaconf/loaders/base.py\n+++ b/dynaconf/loaders/base.py\n@@ -103,17 +103,27 @@\n k.lower(): value for k, value in source_data.items()\n }\n \n- for env in envs:\n+ # all lower case for comparison\n+ base_envs = [\n+ # DYNACONF or MYPROGRAM\n+ (self.obj.get(\"ENVVAR_PREFIX_FOR_DYNACONF\") or \"\").lower(),\n+ # DEFAULT\n+ self.obj.get(\"DEFAULT_ENV_FOR_DYNACONF\").lower(),\n+ # default active env unless ENV_FOR_DYNACONF is changed\n+ \"development\",\n+ # backwards compatibility for global\n+ \"dynaconf\",\n+ # global that rules all\n+ \"global\",\n+ ]\n \n+ for env in envs:\n+ env = env.lower() # lower for better comparison\n data = {}\n try:\n- data = source_data[env.lower()]\n+ data = source_data[env]\n except KeyError:\n- if env not in (\n- self.obj.get(\"ENVVAR_PREFIX_FOR_DYNACONF\")\n- or \"DYNACONF\",\n- \"GLOBAL\",\n- ):\n+ if env not in base_envs:\n message = \"%s_loader: %s env not defined in %s\" % (\n self.identifier,\n env,\n@@ -125,11 +135,8 @@\n raise KeyError(message)\n continue\n \n- if (\n- env.lower()\n- != self.obj.get(\"DEFAULT_ENV_FOR_DYNACONF\").lower()\n- ):\n- identifier = \"{0}_{1}\".format(self.identifier, env.lower())\n+ if env != self.obj.get(\"DEFAULT_ENV_FOR_DYNACONF\").lower():\n+ identifier = \"{0}_{1}\".format(self.identifier, env)\n else:\n identifier = self.identifier\n \n@@ -144,7 +151,7 @@\n \"{}_loader: {}[{}]{}\".format(\n self.identifier,\n os.path.split(source_file)[-1],\n- env.lower(),\n+ env,\n list(data.keys()) if is_secret else data,\n )\n )\n", "issue": "[bug]Clean Python pip install problem\n**Describe the bug**\r\nIf starting a new Python installation, DynaConf 2.1.0 will not work successfully. It appears to be choking while reading a YAML file. The only workaround I have found is to roll back to 2.0.4.\r\n\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\nI have put together a very simple project that will demonstrate the problem, see the attached zip file below. Just make sure you are using a brand new installation of Python - System, VirtualEnv or Conda Environment - problem occurs in all cases. 
Just for clarity, I use Conda.\r\n\r\npip install -r requirements.txt\r\npython dc-error.py\r\n\r\n\r\n**Expected behavior**\r\nThe program should run and print out the setting from the error.yaml file.\r\n\r\n\r\n**Debug output**\r\nC:\\ProgramData\\Anaconda3\\envs\\DynaConfError\\python.exe C:/Development/DynaConfError/dc-error.py\r\nTraceback (most recent call last):\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\DynaConfError\\lib\\site-packages\\dynaconf\\loaders\\base.py\", line 110, in _read\r\n data = source_data[env.lower()]\r\nKeyError: 'dynaconf'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Development/DynaConfError/dc-error.py\", line 11, in <module>\r\n settings.load_file(path='error.yaml', silent=False)\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\DynaConfError\\lib\\site-packages\\dynaconf\\base.py\", line 908, in load_file\r\n filename=path,\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\DynaConfError\\lib\\site-packages\\dynaconf\\loaders\\__init__.py\", line 116, in settings_loader\r\n obj, filename=mod_file, env=env, silent=silent, key=key\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\DynaConfError\\lib\\site-packages\\dynaconf\\loaders\\yaml_loader.py\", line 50, in load\r\n loader.load(filename=filename, key=key, silent=silent)\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\DynaConfError\\lib\\site-packages\\dynaconf\\loaders\\base.py\", line 71, in load\r\n self._read(files, env_list, silent, key)\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\DynaConfError\\lib\\site-packages\\dynaconf\\loaders\\base.py\", line 125, in _read\r\n raise KeyError(message)\r\nKeyError: 'yaml_loader: dynaconf env not defined in C:\\\\Development\\\\DynaConfError\\\\.\\\\error.yaml'\r\n\r\n\r\n**Environment (please complete the following information):**\r\n - OS: Windows 10 Pro, Ubuntu 18.04, CentOS 7\r\n - Dynaconf Version 2.1.0\r\n - Frameworks in use: None\r\n\r\n\r\n**Additional context**\r\nInterestingly, if you already had an older version of DynaConf installed in a python environment and update it, this code will execute and work as expected.\r\n\r\n\r\n[DynaConfError.zip](https://github.com/rochacbruno/dynaconf/files/3598272/DynaConfError.zip)\r\n\n", "before_files": [{"content": "import io\nimport os\n\nfrom dynaconf.utils import build_env_list\nfrom dynaconf.utils import ensure_a_list\nfrom dynaconf.utils import raw_logger\n\n\nlogger = raw_logger()\n\n\nclass BaseLoader(object):\n \"\"\"Base loader for dynaconf source files.\n\n :param obj: {[LazySettings]} -- [Dynaconf settings]\n :param env: {[string]} -- [the current env to be loaded defaults to\n [development]]\n :param identifier: {[string]} -- [identifier ini, yaml, json, py, toml]\n :param extensions: {[list]} -- [List of extensions with dots ['.a', '.b']]\n :param file_reader: {[callable]} -- [reads file return dict]\n :param string_reader: {[callable]} -- [reads string return dict]\n \"\"\"\n\n def __init__(\n self, obj, env, identifier, extensions, file_reader, string_reader\n ):\n \"\"\"Instantiates a loader for different sources\"\"\"\n self.obj = obj\n self.env = env or obj.current_env\n self.identifier = identifier\n self.extensions = extensions\n self.file_reader = file_reader\n self.string_reader = string_reader\n\n @staticmethod\n def warn_not_installed(obj, identifier): # pragma: no cover\n if identifier not in obj._not_installed_warnings:\n logger.warning(\n \"%(ident)s support is not installed in your environment. 
\"\n \"`pip install dynaconf[%(ident)s]`\",\n {\"ident\": identifier},\n )\n obj._not_installed_warnings.append(identifier)\n\n def load(self, filename=None, key=None, silent=True):\n \"\"\"\n Reads and loads in to `self.obj` a single key or all keys from source\n\n :param filename: Optional filename to load\n :param key: if provided load a single key\n :param silent: if load erros should be silenced\n \"\"\"\n filename = filename or self.obj.get(self.identifier.upper())\n if not filename:\n return\n\n if not isinstance(filename, (list, tuple)):\n split_files = ensure_a_list(filename)\n if all([f.endswith(self.extensions) for f in split_files]): # noqa\n files = split_files # it is a ['file.ext', ...]\n else: # it is a single config as string\n files = [filename]\n else: # it is already a list/tuple\n files = filename\n\n self.obj._loaded_files.extend(files)\n\n env_list = build_env_list(self.obj, self.env)\n\n # load all envs\n self._read(files, env_list, silent, key)\n\n def _read(self, files, envs, silent=True, key=None):\n for source_file in files:\n if source_file.endswith(self.extensions):\n try:\n with io.open(\n source_file,\n encoding=self.obj.get(\n \"ENCODING_FOR_DYNACONF\", \"utf-8\"\n ),\n ) as open_file:\n source_data = self.file_reader(open_file)\n self.obj.logger.debug(\n \"{}_loader: {}\".format(self.identifier, source_file)\n )\n except IOError:\n self.obj.logger.debug(\n \"{}_loader: {} (Ignored, file not Found)\".format(\n self.identifier, source_file\n )\n )\n source_data = None\n else:\n # for tests it is possible to pass string\n source_data = self.string_reader(source_file)\n\n if not source_data:\n continue\n\n # env name is checked in lower\n source_data = {\n k.lower(): value for k, value in source_data.items()\n }\n\n for env in envs:\n\n data = {}\n try:\n data = source_data[env.lower()]\n except KeyError:\n if env not in (\n self.obj.get(\"ENVVAR_PREFIX_FOR_DYNACONF\")\n or \"DYNACONF\",\n \"GLOBAL\",\n ):\n message = \"%s_loader: %s env not defined in %s\" % (\n self.identifier,\n env,\n source_file,\n )\n if silent:\n self.obj.logger.warning(message)\n else:\n raise KeyError(message)\n continue\n\n if (\n env.lower()\n != self.obj.get(\"DEFAULT_ENV_FOR_DYNACONF\").lower()\n ):\n identifier = \"{0}_{1}\".format(self.identifier, env.lower())\n else:\n identifier = self.identifier\n\n # data 1st level keys should be transformed to upper case.\n data = {k.upper(): v for k, v in data.items()}\n if key:\n key = key.upper()\n\n is_secret = \"secret\" in source_file\n\n self.obj.logger.debug(\n \"{}_loader: {}[{}]{}\".format(\n self.identifier,\n os.path.split(source_file)[-1],\n env.lower(),\n list(data.keys()) if is_secret else data,\n )\n )\n\n if not key:\n self.obj.update(\n data, loader_identifier=identifier, is_secret=is_secret\n )\n elif key in data:\n self.obj.set(\n key,\n data.get(key),\n loader_identifier=identifier,\n is_secret=is_secret,\n )\n", "path": "dynaconf/loaders/base.py"}], "after_files": [{"content": "import io\nimport os\n\nfrom dynaconf.utils import build_env_list\nfrom dynaconf.utils import ensure_a_list\nfrom dynaconf.utils import raw_logger\n\n\nlogger = raw_logger()\n\n\nclass BaseLoader(object):\n \"\"\"Base loader for dynaconf source files.\n\n :param obj: {[LazySettings]} -- [Dynaconf settings]\n :param env: {[string]} -- [the current env to be loaded defaults to\n [development]]\n :param identifier: {[string]} -- [identifier ini, yaml, json, py, toml]\n :param extensions: {[list]} -- [List of extensions with dots ['.a', '.b']]\n 
:param file_reader: {[callable]} -- [reads file return dict]\n :param string_reader: {[callable]} -- [reads string return dict]\n \"\"\"\n\n def __init__(\n self, obj, env, identifier, extensions, file_reader, string_reader\n ):\n \"\"\"Instantiates a loader for different sources\"\"\"\n self.obj = obj\n self.env = env or obj.current_env\n self.identifier = identifier\n self.extensions = extensions\n self.file_reader = file_reader\n self.string_reader = string_reader\n\n @staticmethod\n def warn_not_installed(obj, identifier): # pragma: no cover\n if identifier not in obj._not_installed_warnings:\n logger.warning(\n \"%(ident)s support is not installed in your environment. \"\n \"`pip install dynaconf[%(ident)s]`\",\n {\"ident\": identifier},\n )\n obj._not_installed_warnings.append(identifier)\n\n def load(self, filename=None, key=None, silent=True):\n \"\"\"\n Reads and loads in to `self.obj` a single key or all keys from source\n\n :param filename: Optional filename to load\n :param key: if provided load a single key\n :param silent: if load erros should be silenced\n \"\"\"\n filename = filename or self.obj.get(self.identifier.upper())\n if not filename:\n return\n\n if not isinstance(filename, (list, tuple)):\n split_files = ensure_a_list(filename)\n if all([f.endswith(self.extensions) for f in split_files]): # noqa\n files = split_files # it is a ['file.ext', ...]\n else: # it is a single config as string\n files = [filename]\n else: # it is already a list/tuple\n files = filename\n\n self.obj._loaded_files.extend(files)\n\n env_list = build_env_list(self.obj, self.env)\n\n # load all envs\n self._read(files, env_list, silent, key)\n\n def _read(self, files, envs, silent=True, key=None):\n for source_file in files:\n if source_file.endswith(self.extensions):\n try:\n with io.open(\n source_file,\n encoding=self.obj.get(\n \"ENCODING_FOR_DYNACONF\", \"utf-8\"\n ),\n ) as open_file:\n source_data = self.file_reader(open_file)\n self.obj.logger.debug(\n \"{}_loader: {}\".format(self.identifier, source_file)\n )\n except IOError:\n self.obj.logger.debug(\n \"{}_loader: {} (Ignored, file not Found)\".format(\n self.identifier, source_file\n )\n )\n source_data = None\n else:\n # for tests it is possible to pass string\n source_data = self.string_reader(source_file)\n\n if not source_data:\n continue\n\n # env name is checked in lower\n source_data = {\n k.lower(): value for k, value in source_data.items()\n }\n\n # all lower case for comparison\n base_envs = [\n # DYNACONF or MYPROGRAM\n (self.obj.get(\"ENVVAR_PREFIX_FOR_DYNACONF\") or \"\").lower(),\n # DEFAULT\n self.obj.get(\"DEFAULT_ENV_FOR_DYNACONF\").lower(),\n # default active env unless ENV_FOR_DYNACONF is changed\n \"development\",\n # backwards compatibility for global\n \"dynaconf\",\n # global that rules all\n \"global\",\n ]\n\n for env in envs:\n env = env.lower() # lower for better comparison\n data = {}\n try:\n data = source_data[env]\n except KeyError:\n if env not in base_envs:\n message = \"%s_loader: %s env not defined in %s\" % (\n self.identifier,\n env,\n source_file,\n )\n if silent:\n self.obj.logger.warning(message)\n else:\n raise KeyError(message)\n continue\n\n if env != self.obj.get(\"DEFAULT_ENV_FOR_DYNACONF\").lower():\n identifier = \"{0}_{1}\".format(self.identifier, env)\n else:\n identifier = self.identifier\n\n # data 1st level keys should be transformed to upper case.\n data = {k.upper(): v for k, v in data.items()}\n if key:\n key = key.upper()\n\n is_secret = \"secret\" in source_file\n\n 
self.obj.logger.debug(\n \"{}_loader: {}[{}]{}\".format(\n self.identifier,\n os.path.split(source_file)[-1],\n env,\n list(data.keys()) if is_secret else data,\n )\n )\n\n if not key:\n self.obj.update(\n data, loader_identifier=identifier, is_secret=is_secret\n )\n elif key in data:\n self.obj.set(\n key,\n data.get(key),\n loader_identifier=identifier,\n is_secret=is_secret,\n )\n", "path": "dynaconf/loaders/base.py"}]}
| 2,499 | 509 |
gh_patches_debug_1606
|
rasdani/github-patches
|
git_diff
|
Cloud-CV__EvalAI-3370
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Frontend V2] Fix the media assets endpoint
### Description
We recently moved to the `https://evalai.s3.amazonaws.com/` endpoint for our media assets. Frontend v2 is still using the `https://staging-evalai.s3.amazonaws.com/` endpoint. We should switch to the new endpoint in frontend v2.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `settings/staging.py`
Content:
```
1 from .prod import * # noqa: ignore=F405
2
3 ALLOWED_HOSTS = ["staging.eval.ai"]
4
5 CORS_ORIGIN_ALLOW_ALL = False
6
7 CORS_ORIGIN_WHITELIST = (
8 "https://staging-evalai.s3.amazonaws.com",
9 "https://staging.eval.ai",
10 "https://beta-staging.eval.ai",
11 )
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/settings/staging.py b/settings/staging.py
--- a/settings/staging.py
+++ b/settings/staging.py
@@ -5,6 +5,7 @@
CORS_ORIGIN_ALLOW_ALL = False
CORS_ORIGIN_WHITELIST = (
+ "https://evalai.s3.amazonaws.com",
"https://staging-evalai.s3.amazonaws.com",
"https://staging.eval.ai",
"https://beta-staging.eval.ai",
|
{"golden_diff": "diff --git a/settings/staging.py b/settings/staging.py\n--- a/settings/staging.py\n+++ b/settings/staging.py\n@@ -5,6 +5,7 @@\n CORS_ORIGIN_ALLOW_ALL = False\n \n CORS_ORIGIN_WHITELIST = (\n+ \"https://evalai.s3.amazonaws.com\",\n \"https://staging-evalai.s3.amazonaws.com\",\n \"https://staging.eval.ai\",\n \"https://beta-staging.eval.ai\",\n", "issue": "[Frontend V2] Fix the media assets endpoint\n### Description\r\n\r\nWe recently moved to `https://evalai.s3.amazonaws.com/` endpoint for our media assets. Frontend v2 is still using `https://staging-evalai.s3.amazonaws.com/` endpoint. We should switch to new enpdoint in frontend v2.\n", "before_files": [{"content": "from .prod import * # noqa: ignore=F405\n\nALLOWED_HOSTS = [\"staging.eval.ai\"]\n\nCORS_ORIGIN_ALLOW_ALL = False\n\nCORS_ORIGIN_WHITELIST = (\n \"https://staging-evalai.s3.amazonaws.com\",\n \"https://staging.eval.ai\",\n \"https://beta-staging.eval.ai\",\n)\n", "path": "settings/staging.py"}], "after_files": [{"content": "from .prod import * # noqa: ignore=F405\n\nALLOWED_HOSTS = [\"staging.eval.ai\"]\n\nCORS_ORIGIN_ALLOW_ALL = False\n\nCORS_ORIGIN_WHITELIST = (\n \"https://evalai.s3.amazonaws.com\",\n \"https://staging-evalai.s3.amazonaws.com\",\n \"https://staging.eval.ai\",\n \"https://beta-staging.eval.ai\",\n)\n", "path": "settings/staging.py"}]}
| 423 | 98 |
gh_patches_debug_33086
|
rasdani/github-patches
|
git_diff
|
encode__httpx-521
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HTTPX claims Python 3.6+ support when asyncio backend TCPStream.start_tls explicitly does not support Python 3.6
As stated, HTTPX explicitly does not support Python 3.6 [here](https://github.com/encode/httpx/blob/master/httpx/concurrency/asyncio.py#L61) but declares Python 3.6 support [here](https://github.com/encode/httpx/blob/master/setup.py#L40). This has caused problems with users of a package of mine that is supposed to support Python 3.6.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/concurrency/asyncio.py`
Content:
```
1 import asyncio
2 import functools
3 import ssl
4 import sys
5 import typing
6 from types import TracebackType
7
8 from ..config import PoolLimits, TimeoutConfig
9 from ..exceptions import ConnectTimeout, PoolTimeout, ReadTimeout, WriteTimeout
10 from .base import (
11 BaseBackgroundManager,
12 BaseEvent,
13 BasePoolSemaphore,
14 BaseQueue,
15 BaseSocketStream,
16 ConcurrencyBackend,
17 TimeoutFlag,
18 )
19
20 SSL_MONKEY_PATCH_APPLIED = False
21
22
23 def ssl_monkey_patch() -> None:
24 """
25 Monkey-patch for https://bugs.python.org/issue36709
26
27 This prevents console errors when outstanding HTTPS connections
28 still exist at the point of exiting.
29
30 Clients which have been opened using a `with` block, or which have
31 had `close()` closed, will not exhibit this issue in the first place.
32 """
33 MonkeyPatch = asyncio.selector_events._SelectorSocketTransport # type: ignore
34
35 _write = MonkeyPatch.write
36
37 def _fixed_write(self, data: bytes) -> None: # type: ignore
38 if self._loop and not self._loop.is_closed():
39 _write(self, data)
40
41 MonkeyPatch.write = _fixed_write
42
43
44 class SocketStream(BaseSocketStream):
45 def __init__(
46 self,
47 stream_reader: asyncio.StreamReader,
48 stream_writer: asyncio.StreamWriter,
49 timeout: TimeoutConfig,
50 ):
51 self.stream_reader = stream_reader
52 self.stream_writer = stream_writer
53 self.timeout = timeout
54
55 self._inner: typing.Optional[SocketStream] = None
56
57 async def start_tls(
58 self, hostname: str, ssl_context: ssl.SSLContext, timeout: TimeoutConfig
59 ) -> "SocketStream":
60 loop = asyncio.get_event_loop()
61 if not hasattr(loop, "start_tls"): # pragma: no cover
62 raise NotImplementedError(
63 "asyncio.AbstractEventLoop.start_tls() is only available in Python 3.7+"
64 )
65
66 stream_reader = asyncio.StreamReader()
67 protocol = asyncio.StreamReaderProtocol(stream_reader)
68 transport = self.stream_writer.transport
69
70 loop_start_tls = loop.start_tls # type: ignore
71 transport = await asyncio.wait_for(
72 loop_start_tls(
73 transport=transport,
74 protocol=protocol,
75 sslcontext=ssl_context,
76 server_hostname=hostname,
77 ),
78 timeout=timeout.connect_timeout,
79 )
80
81 stream_reader.set_transport(transport)
82 stream_writer = asyncio.StreamWriter(
83 transport=transport, protocol=protocol, reader=stream_reader, loop=loop
84 )
85
86 ssl_stream = SocketStream(stream_reader, stream_writer, self.timeout)
87 # When we return a new SocketStream with new StreamReader/StreamWriter instances
88 # we need to keep references to the old StreamReader/StreamWriter so that they
89 # are not garbage collected and closed while we're still using them.
90 ssl_stream._inner = self
91 return ssl_stream
92
93 def get_http_version(self) -> str:
94 ssl_object = self.stream_writer.get_extra_info("ssl_object")
95
96 if ssl_object is None:
97 return "HTTP/1.1"
98
99 ident = ssl_object.selected_alpn_protocol()
100 return "HTTP/2" if ident == "h2" else "HTTP/1.1"
101
102 async def read(
103 self, n: int, timeout: TimeoutConfig = None, flag: TimeoutFlag = None
104 ) -> bytes:
105 if timeout is None:
106 timeout = self.timeout
107
108 while True:
109 # Check our flag at the first possible moment, and use a fine
110 # grained retry loop if we're not yet in read-timeout mode.
111 should_raise = flag is None or flag.raise_on_read_timeout
112 read_timeout = timeout.read_timeout if should_raise else 0.01
113 try:
114 data = await asyncio.wait_for(self.stream_reader.read(n), read_timeout)
115 break
116 except asyncio.TimeoutError:
117 if should_raise:
118 raise ReadTimeout() from None
119 # FIX(py3.6): yield control back to the event loop to give it a chance
120 # to cancel `.read(n)` before we retry.
121 # This prevents concurrent `.read()` calls, which asyncio
122 # doesn't seem to allow on 3.6.
123 # See: https://github.com/encode/httpx/issues/382
124 await asyncio.sleep(0)
125
126 return data
127
128 def write_no_block(self, data: bytes) -> None:
129 self.stream_writer.write(data) # pragma: nocover
130
131 async def write(
132 self, data: bytes, timeout: TimeoutConfig = None, flag: TimeoutFlag = None
133 ) -> None:
134 if not data:
135 return
136
137 if timeout is None:
138 timeout = self.timeout
139
140 self.stream_writer.write(data)
141 while True:
142 try:
143 await asyncio.wait_for( # type: ignore
144 self.stream_writer.drain(), timeout.write_timeout
145 )
146 break
147 except asyncio.TimeoutError:
148 # We check our flag at the first possible moment, in order to
149 # allow us to suppress write timeouts, if we've since
150 # switched over to read-timeout mode.
151 should_raise = flag is None or flag.raise_on_write_timeout
152 if should_raise:
153 raise WriteTimeout() from None
154
155 def is_connection_dropped(self) -> bool:
156 # Counter-intuitively, what we really want to know here is whether the socket is
157 # *readable*, i.e. whether it would return immediately with empty bytes if we
158 # called `.recv()` on it, indicating that the other end has closed the socket.
159 # See: https://github.com/encode/httpx/pull/143#issuecomment-515181778
160 #
161 # As it turns out, asyncio checks for readability in the background
162 # (see: https://github.com/encode/httpx/pull/276#discussion_r322000402),
163 # so checking for EOF or readability here would yield the same result.
164 #
165 # At the cost of rigour, we check for EOF instead of readability because asyncio
166 # does not expose any public API to check for readability.
167 # (For a solution that uses private asyncio APIs, see:
168 # https://github.com/encode/httpx/pull/143#issuecomment-515202982)
169
170 return self.stream_reader.at_eof()
171
172 async def close(self) -> None:
173 self.stream_writer.close()
174 if sys.version_info >= (3, 7):
175 await self.stream_writer.wait_closed()
176
177
178 class PoolSemaphore(BasePoolSemaphore):
179 def __init__(self, pool_limits: PoolLimits):
180 self.pool_limits = pool_limits
181
182 @property
183 def semaphore(self) -> typing.Optional[asyncio.BoundedSemaphore]:
184 if not hasattr(self, "_semaphore"):
185 max_connections = self.pool_limits.hard_limit
186 if max_connections is None:
187 self._semaphore = None
188 else:
189 self._semaphore = asyncio.BoundedSemaphore(value=max_connections)
190 return self._semaphore
191
192 async def acquire(self) -> None:
193 if self.semaphore is None:
194 return
195
196 timeout = self.pool_limits.pool_timeout
197 try:
198 await asyncio.wait_for(self.semaphore.acquire(), timeout)
199 except asyncio.TimeoutError:
200 raise PoolTimeout()
201
202 def release(self) -> None:
203 if self.semaphore is None:
204 return
205
206 self.semaphore.release()
207
208
209 class AsyncioBackend(ConcurrencyBackend):
210 def __init__(self) -> None:
211 global SSL_MONKEY_PATCH_APPLIED
212
213 if not SSL_MONKEY_PATCH_APPLIED:
214 ssl_monkey_patch()
215 SSL_MONKEY_PATCH_APPLIED = True
216
217 @property
218 def loop(self) -> asyncio.AbstractEventLoop:
219 if not hasattr(self, "_loop"):
220 try:
221 self._loop = asyncio.get_event_loop()
222 except RuntimeError:
223 self._loop = asyncio.new_event_loop()
224 return self._loop
225
226 async def open_tcp_stream(
227 self,
228 hostname: str,
229 port: int,
230 ssl_context: typing.Optional[ssl.SSLContext],
231 timeout: TimeoutConfig,
232 ) -> SocketStream:
233 try:
234 stream_reader, stream_writer = await asyncio.wait_for( # type: ignore
235 asyncio.open_connection(hostname, port, ssl=ssl_context),
236 timeout.connect_timeout,
237 )
238 except asyncio.TimeoutError:
239 raise ConnectTimeout()
240
241 return SocketStream(
242 stream_reader=stream_reader, stream_writer=stream_writer, timeout=timeout
243 )
244
245 async def run_in_threadpool(
246 self, func: typing.Callable, *args: typing.Any, **kwargs: typing.Any
247 ) -> typing.Any:
248 if kwargs:
249 # loop.run_in_executor doesn't accept 'kwargs', so bind them in here
250 func = functools.partial(func, **kwargs)
251 return await self.loop.run_in_executor(None, func, *args)
252
253 def run(
254 self, coroutine: typing.Callable, *args: typing.Any, **kwargs: typing.Any
255 ) -> typing.Any:
256 loop = self.loop
257 if loop.is_running():
258 self._loop = asyncio.new_event_loop()
259 try:
260 return self.loop.run_until_complete(coroutine(*args, **kwargs))
261 finally:
262 self._loop = loop
263
264 def get_semaphore(self, limits: PoolLimits) -> BasePoolSemaphore:
265 return PoolSemaphore(limits)
266
267 def create_queue(self, max_size: int) -> BaseQueue:
268 return typing.cast(BaseQueue, asyncio.Queue(maxsize=max_size))
269
270 def create_event(self) -> BaseEvent:
271 return typing.cast(BaseEvent, asyncio.Event())
272
273 def background_manager(
274 self, coroutine: typing.Callable, *args: typing.Any
275 ) -> "BackgroundManager":
276 return BackgroundManager(coroutine, args)
277
278
279 class BackgroundManager(BaseBackgroundManager):
280 def __init__(self, coroutine: typing.Callable, args: typing.Any) -> None:
281 self.coroutine = coroutine
282 self.args = args
283
284 async def __aenter__(self) -> "BackgroundManager":
285 loop = asyncio.get_event_loop()
286 self.task = loop.create_task(self.coroutine(*self.args))
287 return self
288
289 async def __aexit__(
290 self,
291 exc_type: typing.Type[BaseException] = None,
292 exc_value: BaseException = None,
293 traceback: TracebackType = None,
294 ) -> None:
295 await self.task
296 if exc_type is None:
297 self.task.result()
298
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/httpx/concurrency/asyncio.py b/httpx/concurrency/asyncio.py
--- a/httpx/concurrency/asyncio.py
+++ b/httpx/concurrency/asyncio.py
@@ -58,16 +58,49 @@
self, hostname: str, ssl_context: ssl.SSLContext, timeout: TimeoutConfig
) -> "SocketStream":
loop = asyncio.get_event_loop()
- if not hasattr(loop, "start_tls"): # pragma: no cover
- raise NotImplementedError(
- "asyncio.AbstractEventLoop.start_tls() is only available in Python 3.7+"
- )
stream_reader = asyncio.StreamReader()
protocol = asyncio.StreamReaderProtocol(stream_reader)
transport = self.stream_writer.transport
- loop_start_tls = loop.start_tls # type: ignore
+ if hasattr(loop, "start_tls"):
+ loop_start_tls = loop.start_tls # type: ignore
+ else:
+
+ async def loop_start_tls(
+ transport: asyncio.BaseTransport,
+ protocol: asyncio.BaseProtocol,
+ sslcontext: ssl.SSLContext = None,
+ *,
+ server_side: bool = False,
+ server_hostname: str = None,
+ ssl_handshake_timeout: float = None,
+ ) -> asyncio.Transport:
+ """Python 3.6 asyncio doesn't have a start_tls() method on the loop
+ so we use this function in place of the loop's start_tls() method.
+ Adapted from this comment:
+ https://github.com/urllib3/urllib3/issues/1323#issuecomment-362494839
+ """
+ import asyncio.sslproto
+
+ waiter = loop.create_future()
+ ssl_protocol = asyncio.sslproto.SSLProtocol(
+ loop,
+ protocol,
+ sslcontext,
+ waiter,
+ server_side=False,
+ server_hostname=server_hostname,
+ call_connection_made=False,
+ )
+
+ transport.set_protocol(ssl_protocol)
+ loop.call_soon(ssl_protocol.connection_made, transport)
+ loop.call_soon(transport.resume_reading) # type: ignore
+
+ await waiter
+ return ssl_protocol._app_transport
+
transport = await asyncio.wait_for(
loop_start_tls(
transport=transport,
|
{"golden_diff": "diff --git a/httpx/concurrency/asyncio.py b/httpx/concurrency/asyncio.py\n--- a/httpx/concurrency/asyncio.py\n+++ b/httpx/concurrency/asyncio.py\n@@ -58,16 +58,49 @@\n self, hostname: str, ssl_context: ssl.SSLContext, timeout: TimeoutConfig\n ) -> \"SocketStream\":\n loop = asyncio.get_event_loop()\n- if not hasattr(loop, \"start_tls\"): # pragma: no cover\n- raise NotImplementedError(\n- \"asyncio.AbstractEventLoop.start_tls() is only available in Python 3.7+\"\n- )\n \n stream_reader = asyncio.StreamReader()\n protocol = asyncio.StreamReaderProtocol(stream_reader)\n transport = self.stream_writer.transport\n \n- loop_start_tls = loop.start_tls # type: ignore\n+ if hasattr(loop, \"start_tls\"):\n+ loop_start_tls = loop.start_tls # type: ignore\n+ else:\n+\n+ async def loop_start_tls(\n+ transport: asyncio.BaseTransport,\n+ protocol: asyncio.BaseProtocol,\n+ sslcontext: ssl.SSLContext = None,\n+ *,\n+ server_side: bool = False,\n+ server_hostname: str = None,\n+ ssl_handshake_timeout: float = None,\n+ ) -> asyncio.Transport:\n+ \"\"\"Python 3.6 asyncio doesn't have a start_tls() method on the loop\n+ so we use this function in place of the loop's start_tls() method.\n+ Adapted from this comment:\n+ https://github.com/urllib3/urllib3/issues/1323#issuecomment-362494839\n+ \"\"\"\n+ import asyncio.sslproto\n+\n+ waiter = loop.create_future()\n+ ssl_protocol = asyncio.sslproto.SSLProtocol(\n+ loop,\n+ protocol,\n+ sslcontext,\n+ waiter,\n+ server_side=False,\n+ server_hostname=server_hostname,\n+ call_connection_made=False,\n+ )\n+\n+ transport.set_protocol(ssl_protocol)\n+ loop.call_soon(ssl_protocol.connection_made, transport)\n+ loop.call_soon(transport.resume_reading) # type: ignore\n+\n+ await waiter\n+ return ssl_protocol._app_transport\n+\n transport = await asyncio.wait_for(\n loop_start_tls(\n transport=transport,\n", "issue": "HTTPX claims Python 3.6+ support when asyncio backend TCPStream.start_tls explicitly does not support Python 3.6\nAs stated, HTTPX explicitly does not support Python 3.6 [here](https://github.com/encode/httpx/blob/master/httpx/concurrency/asyncio.py#L61) but declares Python 3.6 support [here](https://github.com/encode/httpx/blob/master/setup.py#L40). 
This has caused problems with users of a package of mine that is supposed to support Python 3.6.\n", "before_files": [{"content": "import asyncio\nimport functools\nimport ssl\nimport sys\nimport typing\nfrom types import TracebackType\n\nfrom ..config import PoolLimits, TimeoutConfig\nfrom ..exceptions import ConnectTimeout, PoolTimeout, ReadTimeout, WriteTimeout\nfrom .base import (\n BaseBackgroundManager,\n BaseEvent,\n BasePoolSemaphore,\n BaseQueue,\n BaseSocketStream,\n ConcurrencyBackend,\n TimeoutFlag,\n)\n\nSSL_MONKEY_PATCH_APPLIED = False\n\n\ndef ssl_monkey_patch() -> None:\n \"\"\"\n Monkey-patch for https://bugs.python.org/issue36709\n\n This prevents console errors when outstanding HTTPS connections\n still exist at the point of exiting.\n\n Clients which have been opened using a `with` block, or which have\n had `close()` closed, will not exhibit this issue in the first place.\n \"\"\"\n MonkeyPatch = asyncio.selector_events._SelectorSocketTransport # type: ignore\n\n _write = MonkeyPatch.write\n\n def _fixed_write(self, data: bytes) -> None: # type: ignore\n if self._loop and not self._loop.is_closed():\n _write(self, data)\n\n MonkeyPatch.write = _fixed_write\n\n\nclass SocketStream(BaseSocketStream):\n def __init__(\n self,\n stream_reader: asyncio.StreamReader,\n stream_writer: asyncio.StreamWriter,\n timeout: TimeoutConfig,\n ):\n self.stream_reader = stream_reader\n self.stream_writer = stream_writer\n self.timeout = timeout\n\n self._inner: typing.Optional[SocketStream] = None\n\n async def start_tls(\n self, hostname: str, ssl_context: ssl.SSLContext, timeout: TimeoutConfig\n ) -> \"SocketStream\":\n loop = asyncio.get_event_loop()\n if not hasattr(loop, \"start_tls\"): # pragma: no cover\n raise NotImplementedError(\n \"asyncio.AbstractEventLoop.start_tls() is only available in Python 3.7+\"\n )\n\n stream_reader = asyncio.StreamReader()\n protocol = asyncio.StreamReaderProtocol(stream_reader)\n transport = self.stream_writer.transport\n\n loop_start_tls = loop.start_tls # type: ignore\n transport = await asyncio.wait_for(\n loop_start_tls(\n transport=transport,\n protocol=protocol,\n sslcontext=ssl_context,\n server_hostname=hostname,\n ),\n timeout=timeout.connect_timeout,\n )\n\n stream_reader.set_transport(transport)\n stream_writer = asyncio.StreamWriter(\n transport=transport, protocol=protocol, reader=stream_reader, loop=loop\n )\n\n ssl_stream = SocketStream(stream_reader, stream_writer, self.timeout)\n # When we return a new SocketStream with new StreamReader/StreamWriter instances\n # we need to keep references to the old StreamReader/StreamWriter so that they\n # are not garbage collected and closed while we're still using them.\n ssl_stream._inner = self\n return ssl_stream\n\n def get_http_version(self) -> str:\n ssl_object = self.stream_writer.get_extra_info(\"ssl_object\")\n\n if ssl_object is None:\n return \"HTTP/1.1\"\n\n ident = ssl_object.selected_alpn_protocol()\n return \"HTTP/2\" if ident == \"h2\" else \"HTTP/1.1\"\n\n async def read(\n self, n: int, timeout: TimeoutConfig = None, flag: TimeoutFlag = None\n ) -> bytes:\n if timeout is None:\n timeout = self.timeout\n\n while True:\n # Check our flag at the first possible moment, and use a fine\n # grained retry loop if we're not yet in read-timeout mode.\n should_raise = flag is None or flag.raise_on_read_timeout\n read_timeout = timeout.read_timeout if should_raise else 0.01\n try:\n data = await asyncio.wait_for(self.stream_reader.read(n), read_timeout)\n break\n except 
asyncio.TimeoutError:\n if should_raise:\n raise ReadTimeout() from None\n # FIX(py3.6): yield control back to the event loop to give it a chance\n # to cancel `.read(n)` before we retry.\n # This prevents concurrent `.read()` calls, which asyncio\n # doesn't seem to allow on 3.6.\n # See: https://github.com/encode/httpx/issues/382\n await asyncio.sleep(0)\n\n return data\n\n def write_no_block(self, data: bytes) -> None:\n self.stream_writer.write(data) # pragma: nocover\n\n async def write(\n self, data: bytes, timeout: TimeoutConfig = None, flag: TimeoutFlag = None\n ) -> None:\n if not data:\n return\n\n if timeout is None:\n timeout = self.timeout\n\n self.stream_writer.write(data)\n while True:\n try:\n await asyncio.wait_for( # type: ignore\n self.stream_writer.drain(), timeout.write_timeout\n )\n break\n except asyncio.TimeoutError:\n # We check our flag at the first possible moment, in order to\n # allow us to suppress write timeouts, if we've since\n # switched over to read-timeout mode.\n should_raise = flag is None or flag.raise_on_write_timeout\n if should_raise:\n raise WriteTimeout() from None\n\n def is_connection_dropped(self) -> bool:\n # Counter-intuitively, what we really want to know here is whether the socket is\n # *readable*, i.e. whether it would return immediately with empty bytes if we\n # called `.recv()` on it, indicating that the other end has closed the socket.\n # See: https://github.com/encode/httpx/pull/143#issuecomment-515181778\n #\n # As it turns out, asyncio checks for readability in the background\n # (see: https://github.com/encode/httpx/pull/276#discussion_r322000402),\n # so checking for EOF or readability here would yield the same result.\n #\n # At the cost of rigour, we check for EOF instead of readability because asyncio\n # does not expose any public API to check for readability.\n # (For a solution that uses private asyncio APIs, see:\n # https://github.com/encode/httpx/pull/143#issuecomment-515202982)\n\n return self.stream_reader.at_eof()\n\n async def close(self) -> None:\n self.stream_writer.close()\n if sys.version_info >= (3, 7):\n await self.stream_writer.wait_closed()\n\n\nclass PoolSemaphore(BasePoolSemaphore):\n def __init__(self, pool_limits: PoolLimits):\n self.pool_limits = pool_limits\n\n @property\n def semaphore(self) -> typing.Optional[asyncio.BoundedSemaphore]:\n if not hasattr(self, \"_semaphore\"):\n max_connections = self.pool_limits.hard_limit\n if max_connections is None:\n self._semaphore = None\n else:\n self._semaphore = asyncio.BoundedSemaphore(value=max_connections)\n return self._semaphore\n\n async def acquire(self) -> None:\n if self.semaphore is None:\n return\n\n timeout = self.pool_limits.pool_timeout\n try:\n await asyncio.wait_for(self.semaphore.acquire(), timeout)\n except asyncio.TimeoutError:\n raise PoolTimeout()\n\n def release(self) -> None:\n if self.semaphore is None:\n return\n\n self.semaphore.release()\n\n\nclass AsyncioBackend(ConcurrencyBackend):\n def __init__(self) -> None:\n global SSL_MONKEY_PATCH_APPLIED\n\n if not SSL_MONKEY_PATCH_APPLIED:\n ssl_monkey_patch()\n SSL_MONKEY_PATCH_APPLIED = True\n\n @property\n def loop(self) -> asyncio.AbstractEventLoop:\n if not hasattr(self, \"_loop\"):\n try:\n self._loop = asyncio.get_event_loop()\n except RuntimeError:\n self._loop = asyncio.new_event_loop()\n return self._loop\n\n async def open_tcp_stream(\n self,\n hostname: str,\n port: int,\n ssl_context: typing.Optional[ssl.SSLContext],\n timeout: TimeoutConfig,\n ) -> SocketStream:\n try:\n 
stream_reader, stream_writer = await asyncio.wait_for( # type: ignore\n asyncio.open_connection(hostname, port, ssl=ssl_context),\n timeout.connect_timeout,\n )\n except asyncio.TimeoutError:\n raise ConnectTimeout()\n\n return SocketStream(\n stream_reader=stream_reader, stream_writer=stream_writer, timeout=timeout\n )\n\n async def run_in_threadpool(\n self, func: typing.Callable, *args: typing.Any, **kwargs: typing.Any\n ) -> typing.Any:\n if kwargs:\n # loop.run_in_executor doesn't accept 'kwargs', so bind them in here\n func = functools.partial(func, **kwargs)\n return await self.loop.run_in_executor(None, func, *args)\n\n def run(\n self, coroutine: typing.Callable, *args: typing.Any, **kwargs: typing.Any\n ) -> typing.Any:\n loop = self.loop\n if loop.is_running():\n self._loop = asyncio.new_event_loop()\n try:\n return self.loop.run_until_complete(coroutine(*args, **kwargs))\n finally:\n self._loop = loop\n\n def get_semaphore(self, limits: PoolLimits) -> BasePoolSemaphore:\n return PoolSemaphore(limits)\n\n def create_queue(self, max_size: int) -> BaseQueue:\n return typing.cast(BaseQueue, asyncio.Queue(maxsize=max_size))\n\n def create_event(self) -> BaseEvent:\n return typing.cast(BaseEvent, asyncio.Event())\n\n def background_manager(\n self, coroutine: typing.Callable, *args: typing.Any\n ) -> \"BackgroundManager\":\n return BackgroundManager(coroutine, args)\n\n\nclass BackgroundManager(BaseBackgroundManager):\n def __init__(self, coroutine: typing.Callable, args: typing.Any) -> None:\n self.coroutine = coroutine\n self.args = args\n\n async def __aenter__(self) -> \"BackgroundManager\":\n loop = asyncio.get_event_loop()\n self.task = loop.create_task(self.coroutine(*self.args))\n return self\n\n async def __aexit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n await self.task\n if exc_type is None:\n self.task.result()\n", "path": "httpx/concurrency/asyncio.py"}], "after_files": [{"content": "import asyncio\nimport functools\nimport ssl\nimport sys\nimport typing\nfrom types import TracebackType\n\nfrom ..config import PoolLimits, TimeoutConfig\nfrom ..exceptions import ConnectTimeout, PoolTimeout, ReadTimeout, WriteTimeout\nfrom .base import (\n BaseBackgroundManager,\n BaseEvent,\n BasePoolSemaphore,\n BaseQueue,\n BaseSocketStream,\n ConcurrencyBackend,\n TimeoutFlag,\n)\n\nSSL_MONKEY_PATCH_APPLIED = False\n\n\ndef ssl_monkey_patch() -> None:\n \"\"\"\n Monkey-patch for https://bugs.python.org/issue36709\n\n This prevents console errors when outstanding HTTPS connections\n still exist at the point of exiting.\n\n Clients which have been opened using a `with` block, or which have\n had `close()` closed, will not exhibit this issue in the first place.\n \"\"\"\n MonkeyPatch = asyncio.selector_events._SelectorSocketTransport # type: ignore\n\n _write = MonkeyPatch.write\n\n def _fixed_write(self, data: bytes) -> None: # type: ignore\n if self._loop and not self._loop.is_closed():\n _write(self, data)\n\n MonkeyPatch.write = _fixed_write\n\n\nclass SocketStream(BaseSocketStream):\n def __init__(\n self,\n stream_reader: asyncio.StreamReader,\n stream_writer: asyncio.StreamWriter,\n timeout: TimeoutConfig,\n ):\n self.stream_reader = stream_reader\n self.stream_writer = stream_writer\n self.timeout = timeout\n\n self._inner: typing.Optional[SocketStream] = None\n\n async def start_tls(\n self, hostname: str, ssl_context: ssl.SSLContext, timeout: TimeoutConfig\n ) -> 
\"SocketStream\":\n loop = asyncio.get_event_loop()\n\n stream_reader = asyncio.StreamReader()\n protocol = asyncio.StreamReaderProtocol(stream_reader)\n transport = self.stream_writer.transport\n\n if hasattr(loop, \"start_tls\"):\n loop_start_tls = loop.start_tls # type: ignore\n else:\n\n async def loop_start_tls(\n transport: asyncio.BaseTransport,\n protocol: asyncio.BaseProtocol,\n sslcontext: ssl.SSLContext = None,\n *,\n server_side: bool = False,\n server_hostname: str = None,\n ssl_handshake_timeout: float = None,\n ) -> asyncio.Transport:\n \"\"\"Python 3.6 asyncio doesn't have a start_tls() method on the loop\n so we use this function in place of the loop's start_tls() method.\n Adapted from this comment:\n https://github.com/urllib3/urllib3/issues/1323#issuecomment-362494839\n \"\"\"\n import asyncio.sslproto\n\n waiter = loop.create_future()\n ssl_protocol = asyncio.sslproto.SSLProtocol(\n loop,\n protocol,\n sslcontext,\n waiter,\n server_side=False,\n server_hostname=server_hostname,\n call_connection_made=False,\n )\n\n transport.set_protocol(ssl_protocol)\n loop.call_soon(ssl_protocol.connection_made, transport)\n loop.call_soon(transport.resume_reading) # type: ignore\n\n await waiter\n return ssl_protocol._app_transport\n\n transport = await asyncio.wait_for(\n loop_start_tls(\n transport=transport,\n protocol=protocol,\n sslcontext=ssl_context,\n server_hostname=hostname,\n ),\n timeout=timeout.connect_timeout,\n )\n\n stream_reader.set_transport(transport)\n stream_writer = asyncio.StreamWriter(\n transport=transport, protocol=protocol, reader=stream_reader, loop=loop\n )\n\n ssl_stream = SocketStream(stream_reader, stream_writer, self.timeout)\n # When we return a new SocketStream with new StreamReader/StreamWriter instances\n # we need to keep references to the old StreamReader/StreamWriter so that they\n # are not garbage collected and closed while we're still using them.\n ssl_stream._inner = self\n return ssl_stream\n\n def get_http_version(self) -> str:\n ssl_object = self.stream_writer.get_extra_info(\"ssl_object\")\n\n if ssl_object is None:\n return \"HTTP/1.1\"\n\n ident = ssl_object.selected_alpn_protocol()\n return \"HTTP/2\" if ident == \"h2\" else \"HTTP/1.1\"\n\n async def read(\n self, n: int, timeout: TimeoutConfig = None, flag: TimeoutFlag = None\n ) -> bytes:\n if timeout is None:\n timeout = self.timeout\n\n while True:\n # Check our flag at the first possible moment, and use a fine\n # grained retry loop if we're not yet in read-timeout mode.\n should_raise = flag is None or flag.raise_on_read_timeout\n read_timeout = timeout.read_timeout if should_raise else 0.01\n try:\n data = await asyncio.wait_for(self.stream_reader.read(n), read_timeout)\n break\n except asyncio.TimeoutError:\n if should_raise:\n raise ReadTimeout() from None\n # FIX(py3.6): yield control back to the event loop to give it a chance\n # to cancel `.read(n)` before we retry.\n # This prevents concurrent `.read()` calls, which asyncio\n # doesn't seem to allow on 3.6.\n # See: https://github.com/encode/httpx/issues/382\n await asyncio.sleep(0)\n\n return data\n\n def write_no_block(self, data: bytes) -> None:\n self.stream_writer.write(data) # pragma: nocover\n\n async def write(\n self, data: bytes, timeout: TimeoutConfig = None, flag: TimeoutFlag = None\n ) -> None:\n if not data:\n return\n\n if timeout is None:\n timeout = self.timeout\n\n self.stream_writer.write(data)\n while True:\n try:\n await asyncio.wait_for( # type: ignore\n self.stream_writer.drain(), 
timeout.write_timeout\n )\n break\n except asyncio.TimeoutError:\n # We check our flag at the first possible moment, in order to\n # allow us to suppress write timeouts, if we've since\n # switched over to read-timeout mode.\n should_raise = flag is None or flag.raise_on_write_timeout\n if should_raise:\n raise WriteTimeout() from None\n\n def is_connection_dropped(self) -> bool:\n # Counter-intuitively, what we really want to know here is whether the socket is\n # *readable*, i.e. whether it would return immediately with empty bytes if we\n # called `.recv()` on it, indicating that the other end has closed the socket.\n # See: https://github.com/encode/httpx/pull/143#issuecomment-515181778\n #\n # As it turns out, asyncio checks for readability in the background\n # (see: https://github.com/encode/httpx/pull/276#discussion_r322000402),\n # so checking for EOF or readability here would yield the same result.\n #\n # At the cost of rigour, we check for EOF instead of readability because asyncio\n # does not expose any public API to check for readability.\n # (For a solution that uses private asyncio APIs, see:\n # https://github.com/encode/httpx/pull/143#issuecomment-515202982)\n\n return self.stream_reader.at_eof()\n\n async def close(self) -> None:\n self.stream_writer.close()\n if sys.version_info >= (3, 7):\n await self.stream_writer.wait_closed()\n\n\nclass PoolSemaphore(BasePoolSemaphore):\n def __init__(self, pool_limits: PoolLimits):\n self.pool_limits = pool_limits\n\n @property\n def semaphore(self) -> typing.Optional[asyncio.BoundedSemaphore]:\n if not hasattr(self, \"_semaphore\"):\n max_connections = self.pool_limits.hard_limit\n if max_connections is None:\n self._semaphore = None\n else:\n self._semaphore = asyncio.BoundedSemaphore(value=max_connections)\n return self._semaphore\n\n async def acquire(self) -> None:\n if self.semaphore is None:\n return\n\n timeout = self.pool_limits.pool_timeout\n try:\n await asyncio.wait_for(self.semaphore.acquire(), timeout)\n except asyncio.TimeoutError:\n raise PoolTimeout()\n\n def release(self) -> None:\n if self.semaphore is None:\n return\n\n self.semaphore.release()\n\n\nclass AsyncioBackend(ConcurrencyBackend):\n def __init__(self) -> None:\n global SSL_MONKEY_PATCH_APPLIED\n\n if not SSL_MONKEY_PATCH_APPLIED:\n ssl_monkey_patch()\n SSL_MONKEY_PATCH_APPLIED = True\n\n @property\n def loop(self) -> asyncio.AbstractEventLoop:\n if not hasattr(self, \"_loop\"):\n try:\n self._loop = asyncio.get_event_loop()\n except RuntimeError:\n self._loop = asyncio.new_event_loop()\n return self._loop\n\n async def open_tcp_stream(\n self,\n hostname: str,\n port: int,\n ssl_context: typing.Optional[ssl.SSLContext],\n timeout: TimeoutConfig,\n ) -> SocketStream:\n try:\n stream_reader, stream_writer = await asyncio.wait_for( # type: ignore\n asyncio.open_connection(hostname, port, ssl=ssl_context),\n timeout.connect_timeout,\n )\n except asyncio.TimeoutError:\n raise ConnectTimeout()\n\n return SocketStream(\n stream_reader=stream_reader, stream_writer=stream_writer, timeout=timeout\n )\n\n async def run_in_threadpool(\n self, func: typing.Callable, *args: typing.Any, **kwargs: typing.Any\n ) -> typing.Any:\n if kwargs:\n # loop.run_in_executor doesn't accept 'kwargs', so bind them in here\n func = functools.partial(func, **kwargs)\n return await self.loop.run_in_executor(None, func, *args)\n\n def run(\n self, coroutine: typing.Callable, *args: typing.Any, **kwargs: typing.Any\n ) -> typing.Any:\n loop = self.loop\n if loop.is_running():\n 
self._loop = asyncio.new_event_loop()\n try:\n return self.loop.run_until_complete(coroutine(*args, **kwargs))\n finally:\n self._loop = loop\n\n def get_semaphore(self, limits: PoolLimits) -> BasePoolSemaphore:\n return PoolSemaphore(limits)\n\n def create_queue(self, max_size: int) -> BaseQueue:\n return typing.cast(BaseQueue, asyncio.Queue(maxsize=max_size))\n\n def create_event(self) -> BaseEvent:\n return typing.cast(BaseEvent, asyncio.Event())\n\n def background_manager(\n self, coroutine: typing.Callable, *args: typing.Any\n ) -> \"BackgroundManager\":\n return BackgroundManager(coroutine, args)\n\n\nclass BackgroundManager(BaseBackgroundManager):\n def __init__(self, coroutine: typing.Callable, args: typing.Any) -> None:\n self.coroutine = coroutine\n self.args = args\n\n async def __aenter__(self) -> \"BackgroundManager\":\n loop = asyncio.get_event_loop()\n self.task = loop.create_task(self.coroutine(*self.args))\n return self\n\n async def __aexit__(\n self,\n exc_type: typing.Type[BaseException] = None,\n exc_value: BaseException = None,\n traceback: TracebackType = None,\n ) -> None:\n await self.task\n if exc_type is None:\n self.task.result()\n", "path": "httpx/concurrency/asyncio.py"}]}
| 3,455 | 518 |
gh_patches_debug_1467
|
rasdani/github-patches
|
git_diff
|
ckan__ckan-7881
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid session timeout value on CKAN 2.10 (logged out users unexpectedly)
## CKAN version
2.10
## Describe the bug
According to our config declaration for [`beaker.session.timeout`](https://github.com/ckan/ckan/blob/656a39de2e7ed0ce47e15080f0f5d42b66b4929b/ckan/config/config_declaration.yaml#L306):
> Defaults to never expiring.
But the defined default value is 600 :upside_down_face:
Apart from the inconsistency, this is problematic because now that the logged-in user id is stored in the session by Flask-login, this means that users are logged out every 10 minutes.
The fix is to default it to never expire as described in the docs (which is also the [Beaker default](https://beaker.readthedocs.io/en/latest/configuration.html#session-options)), but the problem is that I can't set it to `None`, because then Beaker complains that the value is not an int:
```
File "/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py", line 290, in verify_rules
params[key] = verify_options(params[key], types, message)
File "/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py", line 281, in verify_options
raise Exception(error)
Exception: Session timeout must be an integer.
```
This is because our config parsing does not support "int or None", and leaves the string "None" as the value. I guess the alternative is to put a really big number, but it would be good to handle it better.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckan/cli/shell.py`
Content:
```
1 # encoding: utf-8
2 import click
3 import logging
4
5 import ckan.model as model
6
7 from typing import Any, Mapping
8
9 from ckan.plugins import toolkit
10
11
12 log = logging.getLogger(__name__)
13
14
15 _banner = """
16 ****** Welcome to the CKAN shell ******
17
18 This session has some variables pre-populated:
19 - app (CKAN Application object)
20 - config (CKAN config dictionary)
21 - model (CKAN model module to access the Database)
22 - toolkit (CKAN toolkit module)
23 """
24
25
26 def ipython(namespace: Mapping[str, Any], banner: str) -> None:
27 import IPython
28 from traitlets.config.loader import Config
29
30 c = Config()
31 c.TerminalInteractiveShell.banner2 = banner # type: ignore
32
33 IPython.start_ipython([], user_ns=namespace, config=c)
34
35
36 def python(namespace: Mapping[str, Any], banner: str) -> None:
37 import code
38 code.interact(banner=banner, local=namespace)
39
40
41 @click.command()
42 @click.help_option("-h", "--help")
43 @click.pass_context
44 def shell(ctx: click.Context):
45 """Run an interactive IPython shell with the context of the
46 CKAN instance.
47
48 It will try to use IPython, if not installed it will callback
49 to the default Python's shell.
50 """
51
52 namespace = {
53 "app": ctx.obj.app._wsgi_app,
54 "model": model,
55 "config": ctx.obj.config,
56 "toolkit": toolkit,
57 }
58
59 try:
60 ipython(namespace, _banner)
61 except ImportError:
62 log.debug("`ipython` library is missing. Using default python shell.")
63 python(namespace, _banner)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckan/cli/shell.py b/ckan/cli/shell.py
--- a/ckan/cli/shell.py
+++ b/ckan/cli/shell.py
@@ -28,7 +28,7 @@
from traitlets.config.loader import Config
c = Config()
- c.TerminalInteractiveShell.banner2 = banner # type: ignore
+ c.TerminalInteractiveShell.banner2 = banner
IPython.start_ipython([], user_ns=namespace, config=c)
|
{"golden_diff": "diff --git a/ckan/cli/shell.py b/ckan/cli/shell.py\n--- a/ckan/cli/shell.py\n+++ b/ckan/cli/shell.py\n@@ -28,7 +28,7 @@\n from traitlets.config.loader import Config\n \n c = Config()\n- c.TerminalInteractiveShell.banner2 = banner # type: ignore\n+ c.TerminalInteractiveShell.banner2 = banner\n \n IPython.start_ipython([], user_ns=namespace, config=c)\n", "issue": "Invalid session timeout value on CKAN 2.10 (logged out users unexpectedly)\n## CKAN version\r\n2.10\r\n\r\n## Describe the bug\r\n\r\nAccording to our config declaration for [`beaker.session.timeout`](https://github.com/ckan/ckan/blob/656a39de2e7ed0ce47e15080f0f5d42b66b4929b/ckan/config/config_declaration.yaml#L306):\r\n\r\n> Defaults to never expiring.\r\n\r\nBut the defined default value is 600 :upside_down_face: \r\nApart from the inconsistency, this is problematic because now that the logged-in user id is stored in the session by Flask-login, this means that users are logged out every 10 minutes.\r\n\r\nThe fix is to default it to never expire as described on the docs (which is also the [Beaker default](https://beaker.readthedocs.io/en/latest/configuration.html#session-options)), but the problem is that I can set it to `None` because then Beaker complains that the value is not an int:\r\n\r\n```\r\n File \"/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py\", line 290, in verify_rules\r\n params[key] = verify_options(params[key], types, message)\r\n File \"/home/adria/dev/pyenvs/gates/lib/python3.8/site-packages/beaker/util.py\", line 281, in verify_options\r\n raise Exception(error)\r\nException: Session timeout must be an integer.\r\n```\r\nThis is because our config parsing does not support \"int or None\", and leaves the string \"None\" as the value. I guess the alternative is to put a really big number but would be good to handle it better.\r\n\n", "before_files": [{"content": "# encoding: utf-8\nimport click\nimport logging\n\nimport ckan.model as model\n\nfrom typing import Any, Mapping\n\nfrom ckan.plugins import toolkit\n\n\nlog = logging.getLogger(__name__)\n\n\n_banner = \"\"\"\n****** Welcome to the CKAN shell ******\n\nThis session has some variables pre-populated:\n - app (CKAN Application object)\n - config (CKAN config dictionary)\n - model (CKAN model module to access the Database)\n - toolkit (CKAN toolkit module)\n \"\"\"\n\n\ndef ipython(namespace: Mapping[str, Any], banner: str) -> None:\n import IPython\n from traitlets.config.loader import Config\n\n c = Config()\n c.TerminalInteractiveShell.banner2 = banner # type: ignore\n\n IPython.start_ipython([], user_ns=namespace, config=c)\n\n\ndef python(namespace: Mapping[str, Any], banner: str) -> None:\n import code\n code.interact(banner=banner, local=namespace)\n\n\[email protected]()\[email protected]_option(\"-h\", \"--help\")\[email protected]_context\ndef shell(ctx: click.Context):\n \"\"\"Run an interactive IPython shell with the context of the\n CKAN instance.\n\n It will try to use IPython, if not installed it will callback\n to the default Python's shell.\n \"\"\"\n\n namespace = {\n \"app\": ctx.obj.app._wsgi_app,\n \"model\": model,\n \"config\": ctx.obj.config,\n \"toolkit\": toolkit,\n }\n\n try:\n ipython(namespace, _banner)\n except ImportError:\n log.debug(\"`ipython` library is missing. 
Using default python shell.\")\n python(namespace, _banner)\n", "path": "ckan/cli/shell.py"}], "after_files": [{"content": "# encoding: utf-8\nimport click\nimport logging\n\nimport ckan.model as model\n\nfrom typing import Any, Mapping\n\nfrom ckan.plugins import toolkit\n\n\nlog = logging.getLogger(__name__)\n\n\n_banner = \"\"\"\n****** Welcome to the CKAN shell ******\n\nThis session has some variables pre-populated:\n - app (CKAN Application object)\n - config (CKAN config dictionary)\n - model (CKAN model module to access the Database)\n - toolkit (CKAN toolkit module)\n \"\"\"\n\n\ndef ipython(namespace: Mapping[str, Any], banner: str) -> None:\n import IPython\n from traitlets.config.loader import Config\n\n c = Config()\n c.TerminalInteractiveShell.banner2 = banner\n\n IPython.start_ipython([], user_ns=namespace, config=c)\n\n\ndef python(namespace: Mapping[str, Any], banner: str) -> None:\n import code\n code.interact(banner=banner, local=namespace)\n\n\[email protected]()\[email protected]_option(\"-h\", \"--help\")\[email protected]_context\ndef shell(ctx: click.Context):\n \"\"\"Run an interactive IPython shell with the context of the\n CKAN instance.\n\n It will try to use IPython, if not installed it will callback\n to the default Python's shell.\n \"\"\"\n\n namespace = {\n \"app\": ctx.obj.app._wsgi_app,\n \"model\": model,\n \"config\": ctx.obj.config,\n \"toolkit\": toolkit,\n }\n\n try:\n ipython(namespace, _banner)\n except ImportError:\n log.debug(\"`ipython` library is missing. Using default python shell.\")\n python(namespace, _banner)\n", "path": "ckan/cli/shell.py"}]}
| 1,135 | 111 |
gh_patches_debug_23095
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-1660
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ParallelGateOperation has wrong unitary
The unitary returned by `ParallelGateOperation._unitary_()` is incorrect. For example, for XX it returns a 4x4 singular (hence non-unitary) matrix and for XXX it returns a rectangular 16x4 matrix.
Unit tests don't check internal consistency of the operation.
```
In [1]: import cirq
In [2]: q0, q1, q2 = cirq.LineQubit.range(3)
In [3]: xx = cirq.ParallelGateOperation(cirq.X, (q0, q1))
In [4]: xxx = cirq.ParallelGateOperation(cirq.X, (q0, q1, q2))
In [5]: cirq.unitary(cirq.Circuit.from_ops(cirq.decompose(xx)))
Out[5]:
array([[0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],
[0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],
[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]])
In [6]: cirq.unitary(xx)
Out[6]:
array([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]])
In [7]: cirq.unitary(cirq.Circuit.from_ops(cirq.decompose(xxx)))
Out[7]:
array([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]])
In [8]: cirq.unitary(xxx)
Out[8]:
array([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]])
In [9]: cirq.unitary(cirq.Circuit.from_ops(cirq.decompose(xxx))).shape
Out[9]: (8, 8)
In [10]: cirq.unitary(xxx).shape
Out[10]: (16, 4)
```
Found when trying to use the operation in a new unit test for #1612.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/ops/parallel_gate_operation.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 from typing import Sequence, Tuple, Union, TYPE_CHECKING, Any
17
18 import numpy as np
19
20 from cirq import protocols, value
21 from cirq.ops import raw_types, op_tree
22 from cirq.type_workarounds import NotImplementedType
23
24 if TYPE_CHECKING:
25 # pylint: disable=unused-import
26 from typing import Dict, List
27
28
29 @value.value_equality
30 class ParallelGateOperation(raw_types.Operation):
31 """An application of several copies of a gate to a group of qubits."""
32
33 def __init__(self,
34 gate: raw_types.Gate,
35 qubits: Sequence[raw_types.Qid]) -> None:
36 """
37 Args:
38 gate: the gate to apply
39 qubits: lists of lists of qubits to apply the gate to.
40 """
41 if gate.num_qubits() != 1:
42 raise ValueError("gate must be a single qubit gate")
43 if len(set(qubits)) != len(qubits):
44 raise ValueError("repeated qubits are not allowed")
45 for qubit in qubits:
46 gate.validate_args([qubit])
47 self._gate = gate
48 self._qubits = tuple(qubits)
49
50 @property
51 def gate(self) -> raw_types.Gate:
52 """The single qubit gate applied by the operation."""
53 return self._gate
54
55 @property
56 def qubits(self) -> Tuple[raw_types.Qid, ...]:
57 """The qubits targeted by the operation."""
58 return self._qubits
59
60 def with_qubits(self,
61 *new_qubits: raw_types.Qid) -> 'ParallelGateOperation':
62 """ParallelGateOperation with same the gate but new qubits"""
63 return ParallelGateOperation(self.gate, new_qubits)
64
65 def with_gate(self, new_gate: raw_types.Gate) -> 'ParallelGateOperation':
66 """ParallelGateOperation with same qubits but a new gate"""
67 return ParallelGateOperation(new_gate, self.qubits)
68
69 def __repr__(self):
70 return 'cirq.ParallelGateOperation(gate={!r}, qubits={!r})'.format(
71 self.gate,
72 list(self.qubits))
73
74 def __str__(self):
75 return '{}({})'.format(self.gate,
76 ', '.join(str(e) for e in self.qubits))
77
78 def _value_equality_values_(self):
79 return self.gate, frozenset(self.qubits)
80
81 def _decompose_(self) -> op_tree.OP_TREE:
82 """List of gate operations that correspond to applying the single qubit
83 gate to each of the target qubits individually
84 """
85 return [self.gate.on(qubit) for qubit in self.qubits]
86
87 def _apply_unitary_(self, args: protocols.ApplyUnitaryArgs
88 ) -> Union[np.ndarray, None, NotImplementedType]:
89 """Replicates the logic the simulators use to apply the equivalent
90 sequence of GateOperations
91 """
92 state = args.target_tensor
93 buffer = args.available_buffer
94 for axis in args.axes:
95 result = protocols.apply_unitary(self.gate,
96 protocols.ApplyUnitaryArgs(
97 state,
98 buffer,
99 (axis,)),
100 default=NotImplemented)
101
102 if result is buffer:
103 buffer = state
104
105 state = result
106
107 return state
108
109 def _has_unitary_(self) -> bool:
110 return protocols.has_unitary(self.gate)
111
112 def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:
113 # Obtain the unitary for the single qubit gate
114 single_unitary = protocols.unitary(self.gate, NotImplemented)
115
116 # Make sure we actually have a matrix
117 if single_unitary is NotImplemented:
118 return single_unitary
119
120 # Create a unitary which corresponds to applying the single qubit
121 # unitary to each qubit. This will blow up memory fast.
122 unitary = single_unitary
123 for _ in range(len(self.qubits) - 1):
124 unitary = np.outer(unitary, single_unitary)
125
126 return unitary
127
128 def _is_parameterized_(self) -> bool:
129 return protocols.is_parameterized(self.gate)
130
131 def _resolve_parameters_(self, resolver):
132 resolved_gate = protocols.resolve_parameters(self.gate, resolver)
133 return self.with_gate(resolved_gate)
134
135 def _circuit_diagram_info_(self,
136 args: protocols.CircuitDiagramInfoArgs
137 ) -> protocols.CircuitDiagramInfo:
138 diagram_info = protocols.circuit_diagram_info(self.gate,
139 args,
140 NotImplemented)
141 if diagram_info == NotImplemented:
142 return diagram_info
143
144 # Include symbols for every qubit instead of just one
145 symbol = diagram_info.wire_symbols[0]
146 wire_symbols = (symbol,) * len(self.qubits)
147
148 return protocols.CircuitDiagramInfo(wire_symbols=wire_symbols,
149 exponent=diagram_info.exponent,
150 connected=False)
151
152 def _phase_by_(self, phase_turns: float,
153 qubit_index: int) -> 'ParallelGateOperation':
154 phased_gate = protocols.phase_by(self._gate, phase_turns, qubit_index,
155 default=None)
156 if phased_gate is None:
157 return NotImplemented
158 return self.with_gate(phased_gate)
159
160 def __pow__(self, exponent: Any) -> 'ParallelGateOperation':
161 """Raise gate to a power, then reapply to the same qubits.
162
163 Only works if the gate implements cirq.ExtrapolatableEffect.
164
165 For extrapolatable gate G this means the following two are equivalent:
166
167 (G ** 1.5)(qubit) or G(qubit) ** 1.5
168
169 Args:
170 exponent: The amount to scale the gate's effect by.
171
172 Returns:
173 A new operation on the same qubits with the scaled gate.
174 """
175 new_gate = protocols.pow(self.gate,
176 exponent,
177 NotImplemented)
178 if new_gate is NotImplemented:
179 return NotImplemented
180 return self.with_gate(new_gate)
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cirq/ops/parallel_gate_operation.py b/cirq/ops/parallel_gate_operation.py
--- a/cirq/ops/parallel_gate_operation.py
+++ b/cirq/ops/parallel_gate_operation.py
@@ -121,7 +121,7 @@
# unitary to each qubit. This will blow up memory fast.
unitary = single_unitary
for _ in range(len(self.qubits) - 1):
- unitary = np.outer(unitary, single_unitary)
+ unitary = np.kron(unitary, single_unitary)
return unitary
@@ -149,14 +149,6 @@
exponent=diagram_info.exponent,
connected=False)
- def _phase_by_(self, phase_turns: float,
- qubit_index: int) -> 'ParallelGateOperation':
- phased_gate = protocols.phase_by(self._gate, phase_turns, qubit_index,
- default=None)
- if phased_gate is None:
- return NotImplemented
- return self.with_gate(phased_gate)
-
def __pow__(self, exponent: Any) -> 'ParallelGateOperation':
"""Raise gate to a power, then reapply to the same qubits.
|
{"golden_diff": "diff --git a/cirq/ops/parallel_gate_operation.py b/cirq/ops/parallel_gate_operation.py\n--- a/cirq/ops/parallel_gate_operation.py\n+++ b/cirq/ops/parallel_gate_operation.py\n@@ -121,7 +121,7 @@\n # unitary to each qubit. This will blow up memory fast.\n unitary = single_unitary\n for _ in range(len(self.qubits) - 1):\n- unitary = np.outer(unitary, single_unitary)\n+ unitary = np.kron(unitary, single_unitary)\n \n return unitary\n \n@@ -149,14 +149,6 @@\n exponent=diagram_info.exponent,\n connected=False)\n \n- def _phase_by_(self, phase_turns: float,\n- qubit_index: int) -> 'ParallelGateOperation':\n- phased_gate = protocols.phase_by(self._gate, phase_turns, qubit_index,\n- default=None)\n- if phased_gate is None:\n- return NotImplemented\n- return self.with_gate(phased_gate)\n-\n def __pow__(self, exponent: Any) -> 'ParallelGateOperation':\n \"\"\"Raise gate to a power, then reapply to the same qubits.\n", "issue": "ParallelGateOperation has wrong unitary\nThe unitary returned by `ParallelGateOperation._unitary_()` is incorrect. For example, for XX it returns a 4x4 singular (hence non-unitary) matrix and for XXX it returns a rectangular 16x4 matrix.\r\n\r\nUnit tests don't check internal consistency of the operation.\r\n\r\n```\r\nIn [1]: import cirq\r\n\r\nIn [2]: q0, q1, q2 = cirq.LineQubit.range(3)\r\n\r\nIn [3]: xx = cirq.ParallelGateOperation(cirq.X, (q0, q1))\r\n\r\nIn [4]: xxx = cirq.ParallelGateOperation(cirq.X, (q0, q1, q2))\r\n\r\nIn [5]: cirq.unitary(cirq.Circuit.from_ops(cirq.decompose(xx)))\r\nOut[5]: \r\narray([[0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],\r\n [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],\r\n [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],\r\n [1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]])\r\n\r\nIn [6]: cirq.unitary(xx)\r\nOut[6]: \r\narray([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],\r\n [0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]])\r\n\r\nIn [7]: cirq.unitary(cirq.Circuit.from_ops(cirq.decompose(xxx)))\r\nOut[7]: \r\narray([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]])\r\n\r\nIn [8]: cirq.unitary(xxx)\r\nOut[8]: \r\narray([[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],\r\n [0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],\r\n [0.+0.j, 1.+0.j, 1.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\r\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]])\r\n\r\nIn [9]: cirq.unitary(cirq.Circuit.from_ops(cirq.decompose(xxx))).shape\r\nOut[9]: (8, 8)\r\n\r\nIn [10]: cirq.unitary(xxx).shape\r\nOut[10]: (16, 4)\r\n```\r\n\r\nFound when trying to use the operation in a new unit test for #1612.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# 
Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom typing import Sequence, Tuple, Union, TYPE_CHECKING, Any\n\nimport numpy as np\n\nfrom cirq import protocols, value\nfrom cirq.ops import raw_types, op_tree\nfrom cirq.type_workarounds import NotImplementedType\n\nif TYPE_CHECKING:\n # pylint: disable=unused-import\n from typing import Dict, List\n\n\[email protected]_equality\nclass ParallelGateOperation(raw_types.Operation):\n \"\"\"An application of several copies of a gate to a group of qubits.\"\"\"\n\n def __init__(self,\n gate: raw_types.Gate,\n qubits: Sequence[raw_types.Qid]) -> None:\n \"\"\"\n Args:\n gate: the gate to apply\n qubits: lists of lists of qubits to apply the gate to.\n \"\"\"\n if gate.num_qubits() != 1:\n raise ValueError(\"gate must be a single qubit gate\")\n if len(set(qubits)) != len(qubits):\n raise ValueError(\"repeated qubits are not allowed\")\n for qubit in qubits:\n gate.validate_args([qubit])\n self._gate = gate\n self._qubits = tuple(qubits)\n\n @property\n def gate(self) -> raw_types.Gate:\n \"\"\"The single qubit gate applied by the operation.\"\"\"\n return self._gate\n\n @property\n def qubits(self) -> Tuple[raw_types.Qid, ...]:\n \"\"\"The qubits targeted by the operation.\"\"\"\n return self._qubits\n\n def with_qubits(self,\n *new_qubits: raw_types.Qid) -> 'ParallelGateOperation':\n \"\"\"ParallelGateOperation with same the gate but new qubits\"\"\"\n return ParallelGateOperation(self.gate, new_qubits)\n\n def with_gate(self, new_gate: raw_types.Gate) -> 'ParallelGateOperation':\n \"\"\"ParallelGateOperation with same qubits but a new gate\"\"\"\n return ParallelGateOperation(new_gate, self.qubits)\n\n def __repr__(self):\n return 'cirq.ParallelGateOperation(gate={!r}, qubits={!r})'.format(\n self.gate,\n list(self.qubits))\n\n def __str__(self):\n return '{}({})'.format(self.gate,\n ', '.join(str(e) for e in self.qubits))\n\n def _value_equality_values_(self):\n return self.gate, frozenset(self.qubits)\n\n def _decompose_(self) -> op_tree.OP_TREE:\n \"\"\"List of gate operations that correspond to applying the single qubit\n gate to each of the target qubits individually\n \"\"\"\n return [self.gate.on(qubit) for qubit in self.qubits]\n\n def _apply_unitary_(self, args: protocols.ApplyUnitaryArgs\n ) -> Union[np.ndarray, None, NotImplementedType]:\n \"\"\"Replicates the logic the simulators use to apply the equivalent\n sequence of GateOperations\n \"\"\"\n state = args.target_tensor\n buffer = args.available_buffer\n for axis in args.axes:\n result = protocols.apply_unitary(self.gate,\n protocols.ApplyUnitaryArgs(\n state,\n buffer,\n (axis,)),\n default=NotImplemented)\n\n if result is buffer:\n buffer = state\n\n state = result\n\n return state\n\n def _has_unitary_(self) -> bool:\n return protocols.has_unitary(self.gate)\n\n def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:\n # Obtain the unitary for the single qubit gate\n single_unitary = protocols.unitary(self.gate, NotImplemented)\n\n # Make sure we actually have a 
matrix\n if single_unitary is NotImplemented:\n return single_unitary\n\n # Create a unitary which corresponds to applying the single qubit\n # unitary to each qubit. This will blow up memory fast.\n unitary = single_unitary\n for _ in range(len(self.qubits) - 1):\n unitary = np.outer(unitary, single_unitary)\n\n return unitary\n\n def _is_parameterized_(self) -> bool:\n return protocols.is_parameterized(self.gate)\n\n def _resolve_parameters_(self, resolver):\n resolved_gate = protocols.resolve_parameters(self.gate, resolver)\n return self.with_gate(resolved_gate)\n\n def _circuit_diagram_info_(self,\n args: protocols.CircuitDiagramInfoArgs\n ) -> protocols.CircuitDiagramInfo:\n diagram_info = protocols.circuit_diagram_info(self.gate,\n args,\n NotImplemented)\n if diagram_info == NotImplemented:\n return diagram_info\n\n # Include symbols for every qubit instead of just one\n symbol = diagram_info.wire_symbols[0]\n wire_symbols = (symbol,) * len(self.qubits)\n\n return protocols.CircuitDiagramInfo(wire_symbols=wire_symbols,\n exponent=diagram_info.exponent,\n connected=False)\n\n def _phase_by_(self, phase_turns: float,\n qubit_index: int) -> 'ParallelGateOperation':\n phased_gate = protocols.phase_by(self._gate, phase_turns, qubit_index,\n default=None)\n if phased_gate is None:\n return NotImplemented\n return self.with_gate(phased_gate)\n\n def __pow__(self, exponent: Any) -> 'ParallelGateOperation':\n \"\"\"Raise gate to a power, then reapply to the same qubits.\n\n Only works if the gate implements cirq.ExtrapolatableEffect.\n\n For extrapolatable gate G this means the following two are equivalent:\n\n (G ** 1.5)(qubit) or G(qubit) ** 1.5\n\n Args:\n exponent: The amount to scale the gate's effect by.\n\n Returns:\n A new operation on the same qubits with the scaled gate.\n \"\"\"\n new_gate = protocols.pow(self.gate,\n exponent,\n NotImplemented)\n if new_gate is NotImplemented:\n return NotImplemented\n return self.with_gate(new_gate)\n", "path": "cirq/ops/parallel_gate_operation.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom typing import Sequence, Tuple, Union, TYPE_CHECKING, Any\n\nimport numpy as np\n\nfrom cirq import protocols, value\nfrom cirq.ops import raw_types, op_tree\nfrom cirq.type_workarounds import NotImplementedType\n\nif TYPE_CHECKING:\n # pylint: disable=unused-import\n from typing import Dict, List\n\n\[email protected]_equality\nclass ParallelGateOperation(raw_types.Operation):\n \"\"\"An application of several copies of a gate to a group of qubits.\"\"\"\n\n def __init__(self,\n gate: raw_types.Gate,\n qubits: Sequence[raw_types.Qid]) -> None:\n \"\"\"\n Args:\n gate: the gate to apply\n qubits: lists of lists of qubits to apply the gate to.\n \"\"\"\n if gate.num_qubits() != 1:\n raise ValueError(\"gate must be a single qubit gate\")\n if len(set(qubits)) != len(qubits):\n raise ValueError(\"repeated qubits are not allowed\")\n for qubit in qubits:\n 
gate.validate_args([qubit])\n self._gate = gate\n self._qubits = tuple(qubits)\n\n @property\n def gate(self) -> raw_types.Gate:\n \"\"\"The single qubit gate applied by the operation.\"\"\"\n return self._gate\n\n @property\n def qubits(self) -> Tuple[raw_types.Qid, ...]:\n \"\"\"The qubits targeted by the operation.\"\"\"\n return self._qubits\n\n def with_qubits(self,\n *new_qubits: raw_types.Qid) -> 'ParallelGateOperation':\n \"\"\"ParallelGateOperation with same the gate but new qubits\"\"\"\n return ParallelGateOperation(self.gate, new_qubits)\n\n def with_gate(self, new_gate: raw_types.Gate) -> 'ParallelGateOperation':\n \"\"\"ParallelGateOperation with same qubits but a new gate\"\"\"\n return ParallelGateOperation(new_gate, self.qubits)\n\n def __repr__(self):\n return 'cirq.ParallelGateOperation(gate={!r}, qubits={!r})'.format(\n self.gate,\n list(self.qubits))\n\n def __str__(self):\n return '{}({})'.format(self.gate,\n ', '.join(str(e) for e in self.qubits))\n\n def _value_equality_values_(self):\n return self.gate, frozenset(self.qubits)\n\n def _decompose_(self) -> op_tree.OP_TREE:\n \"\"\"List of gate operations that correspond to applying the single qubit\n gate to each of the target qubits individually\n \"\"\"\n return [self.gate.on(qubit) for qubit in self.qubits]\n\n def _apply_unitary_(self, args: protocols.ApplyUnitaryArgs\n ) -> Union[np.ndarray, None, NotImplementedType]:\n \"\"\"Replicates the logic the simulators use to apply the equivalent\n sequence of GateOperations\n \"\"\"\n state = args.target_tensor\n buffer = args.available_buffer\n for axis in args.axes:\n result = protocols.apply_unitary(self.gate,\n protocols.ApplyUnitaryArgs(\n state,\n buffer,\n (axis,)),\n default=NotImplemented)\n\n if result is buffer:\n buffer = state\n\n state = result\n\n return state\n\n def _has_unitary_(self) -> bool:\n return protocols.has_unitary(self.gate)\n\n def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:\n # Obtain the unitary for the single qubit gate\n single_unitary = protocols.unitary(self.gate, NotImplemented)\n\n # Make sure we actually have a matrix\n if single_unitary is NotImplemented:\n return single_unitary\n\n # Create a unitary which corresponds to applying the single qubit\n # unitary to each qubit. 
This will blow up memory fast.\n unitary = single_unitary\n for _ in range(len(self.qubits) - 1):\n unitary = np.kron(unitary, single_unitary)\n\n return unitary\n\n def _is_parameterized_(self) -> bool:\n return protocols.is_parameterized(self.gate)\n\n def _resolve_parameters_(self, resolver):\n resolved_gate = protocols.resolve_parameters(self.gate, resolver)\n return self.with_gate(resolved_gate)\n\n def _circuit_diagram_info_(self,\n args: protocols.CircuitDiagramInfoArgs\n ) -> protocols.CircuitDiagramInfo:\n diagram_info = protocols.circuit_diagram_info(self.gate,\n args,\n NotImplemented)\n if diagram_info == NotImplemented:\n return diagram_info\n\n # Include symbols for every qubit instead of just one\n symbol = diagram_info.wire_symbols[0]\n wire_symbols = (symbol,) * len(self.qubits)\n\n return protocols.CircuitDiagramInfo(wire_symbols=wire_symbols,\n exponent=diagram_info.exponent,\n connected=False)\n\n def __pow__(self, exponent: Any) -> 'ParallelGateOperation':\n \"\"\"Raise gate to a power, then reapply to the same qubits.\n\n Only works if the gate implements cirq.ExtrapolatableEffect.\n\n For extrapolatable gate G this means the following two are equivalent:\n\n (G ** 1.5)(qubit) or G(qubit) ** 1.5\n\n Args:\n exponent: The amount to scale the gate's effect by.\n\n Returns:\n A new operation on the same qubits with the scaled gate.\n \"\"\"\n new_gate = protocols.pow(self.gate,\n exponent,\n NotImplemented)\n if new_gate is NotImplemented:\n return NotImplemented\n return self.with_gate(new_gate)\n", "path": "cirq/ops/parallel_gate_operation.py"}]}
| 3,440 | 276 |
gh_patches_debug_2699
|
rasdani/github-patches
|
git_diff
|
pwr-Solaar__Solaar-1003
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please create an AppData file for Solaar
Please consider writing and installing an AppData file with the application description and some screenshots, else Solaar looks really bad in the GNOME and KDE Software Centers. We'd love to showcase more applications, but without the extra data file we can't. See http://people.freedesktop.org/~hughsient/appdata/ for details; thanks!
Richard
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4
5 try:
6 from setuptools import setup
7 except ImportError:
8 from distutils.core import setup
9
10 # from solaar import NAME, __version__
11 __version__ = '1.0.4'
12 NAME = 'Solaar'
13
14
15 def _data_files():
16 from os.path import dirname as _dirname
17
18 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
19 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
20 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
21
22 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
23 yield _dirname(mo), [mo]
24
25 yield 'share/applications', ['share/applications/solaar.desktop']
26 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
27
28 del _dirname
29
30
31 setup(
32 name=NAME.lower(),
33 version=__version__,
34 description='Linux devices manager for the Logitech Unifying Receiver.',
35 long_description='''
36 Solaar is a Linux device manager for Logitech's Unifying Receiver peripherals.
37 It is able to pair/unpair devices with the receiver, for many devices show
38 battery status, and show and modify some of the modifiable features of devices.
39 '''.strip(),
40 author='Daniel Pavel',
41 license='GPLv2',
42 url='http://pwr-solaar.github.io/Solaar/',
43 classifiers=[
44 'Development Status :: 4 - Beta',
45 'Environment :: X11 Applications :: GTK',
46 'Environment :: Console',
47 'Intended Audience :: End Users/Desktop',
48 'License :: DFSG approved',
49 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
50 'Natural Language :: English',
51 'Programming Language :: Python :: 3 :: Only',
52 'Operating System :: POSIX :: Linux',
53 'Topic :: Utilities',
54 ],
55 platforms=['linux'],
56
57 # sudo apt install python-gi python3-gi \
58 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
59 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
60 python_requires='>=3.6',
61 install_requires=[
62 'pyudev (>= 0.13)',
63 'PyYAML (>= 5.1)',
64 'python-xlib (>= 0.27)',
65 'pynput (>= 1.7.0)',
66 'psutil (>= 5.7.3)',
67 ],
68 package_dir={'': 'lib'},
69 packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
70 data_files=list(_data_files()),
71 scripts=_glob('bin/*'),
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,6 +24,7 @@
yield 'share/applications', ['share/applications/solaar.desktop']
yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
+ yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']
del _dirname
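
For context, setuptools `data_files` entries are `(install_directory, [source_files])` pairs; a minimal, illustrative sketch of shipping a desktop file plus an AppStream metainfo file this way (package and file names are placeholders, not Solaar's actual layout) could look like:

```python
from setuptools import setup

setup(
    name="example-app",
    version="0.1",
    data_files=[
        # (directory the files are installed into, [files to copy there])
        ("share/applications", ["share/applications/example-app.desktop"]),
        ("share/metainfo", ["share/metainfo/io.example.App.metainfo.xml"]),
    ],
)
```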
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,6 +24,7 @@\n \n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n+ yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']\n \n del _dirname\n", "issue": "Please create an AppData file for Solaar\nPlease consider writing and installing an AppData file with the application description and some screenshots, else Solaar looks really bad in the GNOME and KDE Software Centers. We'd love to showcase more applications, but without the extra data file we can't. See http://people.freedesktop.org/~hughsient/appdata/ for details; thanks!\n\nRichard\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.0.4'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux devices manager for the Logitech Unifying Receiver.',\n long_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices with the receiver, for many devices show\nbattery status, and show and modify some of the modifiable features of devices.\n'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.6',\n install_requires=[\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 5.1)',\n 'python-xlib (>= 0.27)',\n 'pynput (>= 1.7.0)',\n 'psutil (>= 5.7.3)',\n ],\n package_dir={'': 'lib'},\n packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.0.4'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', 
_glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo/io.github.pwr_solaar.solaar.metainfo.xml', ['share/solaar/metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux devices manager for the Logitech Unifying Receiver.',\n long_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices with the receiver, for many devices show\nbattery status, and show and modify some of the modifiable features of devices.\n'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.6',\n install_requires=[\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 5.1)',\n 'python-xlib (>= 0.27)',\n 'pynput (>= 1.7.0)',\n 'psutil (>= 5.7.3)',\n ],\n package_dir={'': 'lib'},\n packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}]}
| 1,143 | 115 |
gh_patches_debug_18272
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2297
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Order.DoesNotExist: Order matching query does not exist.
This error should be caught (I think it happens in very specific cases when orders are being deleted while you're loading the admin)
Sentry Issue: [CONCREXIT-86](https://sentry.io/organizations/thalia/issues/2542195680/?referrer=github_integration)
```
Order.DoesNotExist: Order matching query does not exist.
(6 additional frame(s) were not displayed)
...
File "sales/admin/order_admin.py", line 342, in change_view
return super().change_view(request, object_id, form_url, extra_context)
File "django/contrib/admin/options.py", line 1660, in change_view
return self.changeform_view(request, object_id, form_url, extra_context)
File "sales/admin/order_admin.py", line 227, in changeform_view
obj = self.model.objects.get(pk=object_id)
File "django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/sales/admin/order_admin.py`
Content:
```
1 from functools import partial
2
3 from admin_auto_filters.filters import AutocompleteFilter
4 from django.contrib import admin, messages
5 from django.contrib.admin import register, SimpleListFilter
6 from django.forms import Field
7 from django.http import HttpRequest
8 from django.urls import resolve
9 from django.utils import timezone
10
11 from django.utils.translation import gettext_lazy as _
12
13 from payments.widgets import PaymentWidget
14 from sales import services
15 from sales.models.order import Order, OrderItem
16 from sales.models.shift import Shift
17 from sales.services import is_manager
18
19
20 class OrderItemInline(admin.TabularInline):
21 model = OrderItem
22 extra = 0
23
24 fields = ("product", "product_name", "amount", "total")
25
26 def get_readonly_fields(self, request: HttpRequest, obj: Order = None):
27 default_fields = self.readonly_fields
28
29 if not (request.member and request.member.has_perm("sales.custom_prices")):
30 default_fields += ("total",)
31
32 return default_fields
33
34 def get_queryset(self, request):
35 queryset = super().get_queryset(request)
36 queryset = queryset.prefetch_related("product", "product__product")
37 return queryset
38
39 def has_add_permission(self, request, obj):
40 if obj and obj.shift.locked:
41 return False
42
43 if obj and obj.payment:
44 return False
45
46 parent = self.get_parent_object_from_request(request)
47 if not parent:
48 return False
49
50 return super().has_add_permission(request, obj)
51
52 def has_change_permission(self, request, obj=None):
53 if obj and obj.payment:
54 return False
55 if obj and obj.shift.locked:
56 return False
57 if obj and not is_manager(request.member, obj.shift):
58 return False
59 return True
60
61 def has_delete_permission(self, request, obj=None):
62 if obj and obj.payment:
63 return False
64 if obj and obj.shift.locked:
65 return False
66 if obj and not is_manager(request.member, obj.shift):
67 return False
68 return True
69
70 def get_parent_object_from_request(self, request):
71 """Get parent object to determine product list."""
72 resolved = resolve(request.path_info)
73 if resolved.kwargs:
74 parent = self.parent_model.objects.get(pk=resolved.kwargs["object_id"])
75 return parent
76 return None
77
78 def formfield_for_foreignkey(self, db_field, request=None, **kwargs):
79 """Limit product list items to items of order's shift."""
80 field = super().formfield_for_foreignkey(db_field, request, **kwargs)
81
82 if db_field.name == "product":
83 if request is not None:
84 parent = self.get_parent_object_from_request(request)
85 if parent:
86 field.queryset = parent.shift.product_list.product_items
87 else:
88 field.queryset = field.queryset.none()
89
90 return field
91
92
93 class OrderShiftFilter(AutocompleteFilter):
94 title = _("shift")
95 field_name = "shift"
96 rel_model = Order
97
98 def queryset(self, request, queryset):
99 if self.value():
100 return queryset.filter(shift=self.value())
101 return queryset
102
103
104 class OrderMemberFilter(AutocompleteFilter):
105 title = _("member")
106 field_name = "payer"
107 rel_model = Order
108
109 def queryset(self, request, queryset):
110 if self.value():
111 return queryset.filter(payer=self.value())
112 return queryset
113
114
115 class OrderPaymentFilter(SimpleListFilter):
116 title = _("payment")
117 parameter_name = "payment"
118
119 def lookups(self, request, model_admin):
120 return (
121 ("not_required", _("No payment required")),
122 ("paid", _("Paid")),
123 ("unpaid", _("Unpaid")),
124 )
125
126 def queryset(self, request, queryset):
127 if self.value() is None:
128 return queryset
129 if self.value() == "paid":
130 return queryset.filter(payment__isnull=False)
131 if self.value() == "unpaid":
132 return queryset.filter(payment__isnull=True, total_amount__gt=0)
133 return queryset.filter(total_amount__exact=0)
134
135
136 class OrderProductFilter(SimpleListFilter):
137 title = _("product")
138 parameter_name = "product"
139
140 def lookups(self, request, model_admin):
141 qs = model_admin.get_queryset(request)
142 types = qs.filter(order_items__product__product__isnull=False).values_list(
143 "order_items__product__product__id", "order_items__product__product__name"
144 )
145 return list(types.order_by("order_items__product__product__id").distinct())
146
147 def queryset(self, request, queryset):
148 if self.value() is None:
149 return queryset
150 return queryset.filter(order_items__product__product__id__contains=self.value())
151
152
153 @register(Order)
154 class OrderAdmin(admin.ModelAdmin):
155 class Media:
156 pass
157
158 inlines = [
159 OrderItemInline,
160 ]
161 ordering = ("-created_at",)
162 date_hierarchy = "created_at"
163 search_fields = (
164 "id",
165 "payer__username",
166 "payer__first_name",
167 "payer__last_name",
168 "payer__profile__nickname",
169 )
170
171 list_display = (
172 "id",
173 "shift",
174 "created_at",
175 "order_description",
176 "num_items",
177 "discount",
178 "total_amount",
179 "paid",
180 "payer",
181 )
182 list_filter = [
183 OrderShiftFilter,
184 OrderMemberFilter,
185 OrderPaymentFilter,
186 OrderProductFilter,
187 ]
188
189 fields = (
190 "id",
191 "shift",
192 "created_at",
193 "created_by",
194 "order_description",
195 "num_items",
196 "age_restricted",
197 "subtotal",
198 "discount",
199 "total_amount",
200 "payer",
201 "payment",
202 "payment_url",
203 )
204
205 readonly_fields = (
206 "id",
207 "created_at",
208 "created_by",
209 "order_description",
210 "num_items",
211 "subtotal",
212 "total_amount",
213 "payment",
214 "age_restricted",
215 "payment_url",
216 )
217
218 def get_readonly_fields(self, request: HttpRequest, obj: Order = None):
219 """Disallow changing shift when selected."""
220 default_fields = self.readonly_fields
221
222 if not (request.member and request.member.has_perm("sales.custom_prices")):
223 default_fields += ("discount",)
224
225 if obj and obj.shift:
226 default_fields += ("shift",)
227
228 return default_fields
229
230 def save_model(self, request, obj, form, change):
231 obj.created_by = request.user
232 obj.save()
233
234 def changeform_view(self, request, object_id=None, form_url="", extra_context=None):
235 if object_id:
236 obj = self.model.objects.get(pk=object_id)
237 if obj.age_restricted and obj.payer and not services.is_adult(obj.payer):
238 self.message_user(
239 request,
240 _(
241 "The payer for this order is under-age while the order is age restricted!"
242 ),
243 messages.WARNING,
244 )
245 return super().changeform_view(request, object_id, form_url, extra_context)
246
247 def get_queryset(self, request):
248 queryset = super().get_queryset(request)
249
250 if not request.member:
251 queryset = queryset.none()
252 elif not request.member.has_perm("sales.override_manager"):
253 queryset = queryset.filter(
254 shift__managers__in=request.member.get_member_groups()
255 ).distinct()
256
257 queryset = queryset.select_properties(
258 "total_amount", "subtotal", "num_items", "age_restricted"
259 )
260 queryset = queryset.prefetch_related(
261 "shift", "shift__event", "shift__product_list"
262 )
263 queryset = queryset.prefetch_related(
264 "order_items", "order_items__product", "order_items__product__product"
265 )
266 queryset = queryset.prefetch_related("payment")
267 queryset = queryset.prefetch_related("payer")
268 return queryset
269
270 def has_add_permission(self, request):
271 if not request.member:
272 return False
273 elif not request.member.has_perm("sales.override_manager"):
274 if (
275 Shift.objects.filter(
276 start__lte=timezone.now(),
277 locked=False,
278 managers__in=request.member.get_member_groups(),
279 ).count()
280 == 0
281 ):
282 return False
283 return super().has_view_permission(request)
284
285 def has_view_permission(self, request, obj=None):
286 if obj and not is_manager(request.member, obj.shift):
287 return False
288 return super().has_view_permission(request, obj)
289
290 def has_change_permission(self, request, obj=None):
291 if obj and obj.shift.locked:
292 return False
293 if obj and obj.payment:
294 return False
295
296 if obj and not is_manager(request.member, obj.shift):
297 return False
298
299 return super().has_change_permission(request, obj)
300
301 def has_delete_permission(self, request, obj=None):
302 if obj and obj.shift.locked:
303 return False
304 if obj and obj.payment:
305 return False
306
307 if obj and not is_manager(request.member, obj.shift):
308 return False
309
310 return super().has_delete_permission(request, obj)
311
312 def get_form(self, request, obj=None, **kwargs):
313 """Override get form to use payment widget."""
314 return super().get_form(
315 request,
316 obj,
317 formfield_callback=partial(
318 self.formfield_for_dbfield, request=request, obj=obj
319 ),
320 **kwargs,
321 )
322
323 def formfield_for_dbfield(self, db_field, request, obj=None, **kwargs):
324 """Use payment widget for payments."""
325 field = super().formfield_for_dbfield(db_field, request, **kwargs)
326 if db_field.name == "payment":
327 return Field(
328 widget=PaymentWidget(obj=obj), initial=field.initial, required=False
329 )
330 if db_field.name == "shift":
331 field.queryset = Shift.objects.filter(locked=False)
332 if not request.member:
333 field.queryset = field.queryset.none()
334 elif not request.member.has_perm("sales.override_manager"):
335 field.queryset = field.queryset.filter(
336 managers__in=request.member.get_member_groups()
337 )
338 return field
339
340 def changelist_view(self, request, extra_context=None):
341 if not (request.member and request.member.has_perm("sales.override_manager")):
342 self.message_user(
343 request,
344 _("You are only seeing orders that are relevant to you."),
345 messages.WARNING,
346 )
347 return super().changelist_view(request, extra_context)
348
349 def change_view(self, request, object_id, form_url="", extra_context=None):
350 object_id
351 return super().change_view(request, object_id, form_url, extra_context)
352
353 def order_description(self, obj):
354 if obj.order_description:
355 return obj.order_description
356 return "-"
357
358 def num_items(self, obj):
359 return obj.num_items
360
361 def subtotal(self, obj):
362 if obj.subtotal:
363 return f"€{obj.subtotal:.2f}"
364 return "-"
365
366 def discount(self, obj):
367 if obj.discount:
368 return f"€{obj.discount:.2f}"
369 return "-"
370
371 def total_amount(self, obj):
372 if obj.total_amount:
373 return f"€{obj.total_amount:.2f}"
374 return "-"
375
376 def paid(self, obj):
377 if obj.total_amount is None or obj.total_amount == 0:
378 return None
379 return obj.payment is not None
380
381 paid.boolean = True
382
383 def age_restricted(self, obj):
384 return bool(obj.age_restricted) if obj else None
385
386 age_restricted.boolean = True
387
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/sales/admin/order_admin.py b/website/sales/admin/order_admin.py
--- a/website/sales/admin/order_admin.py
+++ b/website/sales/admin/order_admin.py
@@ -347,7 +347,23 @@
return super().changelist_view(request, extra_context)
def change_view(self, request, object_id, form_url="", extra_context=None):
- object_id
+ if object_id:
+ try:
+ obj = self.model.objects.get(pk=object_id)
+ if (
+ obj.age_restricted
+ and obj.payer
+ and not services.is_adult(obj.payer)
+ ):
+ self.message_user(
+ request,
+ _(
+ "The payer for this order is under-age while the order is age restricted!"
+ ),
+ messages.WARNING,
+ )
+ except self.model.DoesNotExist:
+ pass
return super().change_view(request, object_id, form_url, extra_context)
def order_description(self, obj):
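
As a minimal sketch of the pattern applied in the patch (hypothetical admin class, not Concrexit's actual code), the lookup is wrapped in `try/except self.model.DoesNotExist` so that a row deleted while the page is loading no longer raises:

```python
from django.contrib import admin


class SafeChangeViewAdmin(admin.ModelAdmin):
    def change_view(self, request, object_id, form_url="", extra_context=None):
        if object_id:
            try:
                obj = self.model.objects.get(pk=object_id)
                # ... emit any object-specific admin warnings here ...
            except self.model.DoesNotExist:
                # The row was deleted between loading the changelist and
                # opening this page; let the parent view handle "not found".
                pass
        return super().change_view(request, object_id, form_url, extra_context)
```

Django's `ModelAdmin.get_object(request, object_id)` is an alternative that returns `None` instead of raising when the row is gone.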
|
{"golden_diff": "diff --git a/website/sales/admin/order_admin.py b/website/sales/admin/order_admin.py\n--- a/website/sales/admin/order_admin.py\n+++ b/website/sales/admin/order_admin.py\n@@ -347,7 +347,23 @@\n return super().changelist_view(request, extra_context)\n \n def change_view(self, request, object_id, form_url=\"\", extra_context=None):\n- object_id\n+ if object_id:\n+ try:\n+ obj = self.model.objects.get(pk=object_id)\n+ if (\n+ obj.age_restricted\n+ and obj.payer\n+ and not services.is_adult(obj.payer)\n+ ):\n+ self.message_user(\n+ request,\n+ _(\n+ \"The payer for this order is under-age while the order is age restricted!\"\n+ ),\n+ messages.WARNING,\n+ )\n+ except self.model.DoesNotExist:\n+ pass\n return super().change_view(request, object_id, form_url, extra_context)\n \n def order_description(self, obj):\n", "issue": "Order.DoesNotExist: Order matching query does not exist.\nThis error should be caught (I think it happens in very specific cases when orders are being deleted while you're loading the admin)\n\n\nSentry Issue: [CONCREXIT-86](https://sentry.io/organizations/thalia/issues/2542195680/?referrer=github_integration)\n\n```\nOrder.DoesNotExist: Order matching query does not exist.\n(6 additional frame(s) were not displayed)\n...\n File \"sales/admin/order_admin.py\", line 342, in change_view\n return super().change_view(request, object_id, form_url, extra_context)\n File \"django/contrib/admin/options.py\", line 1660, in change_view\n return self.changeform_view(request, object_id, form_url, extra_context)\n File \"sales/admin/order_admin.py\", line 227, in changeform_view\n obj = self.model.objects.get(pk=object_id)\n File \"django/db/models/manager.py\", line 85, in manager_method\n return getattr(self.get_queryset(), name)(*args, **kwargs)\n File \"django/db/models/query.py\", line 435, in get\n raise self.model.DoesNotExist(\n```\n", "before_files": [{"content": "from functools import partial\n\nfrom admin_auto_filters.filters import AutocompleteFilter\nfrom django.contrib import admin, messages\nfrom django.contrib.admin import register, SimpleListFilter\nfrom django.forms import Field\nfrom django.http import HttpRequest\nfrom django.urls import resolve\nfrom django.utils import timezone\n\nfrom django.utils.translation import gettext_lazy as _\n\nfrom payments.widgets import PaymentWidget\nfrom sales import services\nfrom sales.models.order import Order, OrderItem\nfrom sales.models.shift import Shift\nfrom sales.services import is_manager\n\n\nclass OrderItemInline(admin.TabularInline):\n model = OrderItem\n extra = 0\n\n fields = (\"product\", \"product_name\", \"amount\", \"total\")\n\n def get_readonly_fields(self, request: HttpRequest, obj: Order = None):\n default_fields = self.readonly_fields\n\n if not (request.member and request.member.has_perm(\"sales.custom_prices\")):\n default_fields += (\"total\",)\n\n return default_fields\n\n def get_queryset(self, request):\n queryset = super().get_queryset(request)\n queryset = queryset.prefetch_related(\"product\", \"product__product\")\n return queryset\n\n def has_add_permission(self, request, obj):\n if obj and obj.shift.locked:\n return False\n\n if obj and obj.payment:\n return False\n\n parent = self.get_parent_object_from_request(request)\n if not parent:\n return False\n\n return super().has_add_permission(request, obj)\n\n def has_change_permission(self, request, obj=None):\n if obj and obj.payment:\n return False\n if obj and obj.shift.locked:\n return False\n if obj and not 
is_manager(request.member, obj.shift):\n return False\n return True\n\n def has_delete_permission(self, request, obj=None):\n if obj and obj.payment:\n return False\n if obj and obj.shift.locked:\n return False\n if obj and not is_manager(request.member, obj.shift):\n return False\n return True\n\n def get_parent_object_from_request(self, request):\n \"\"\"Get parent object to determine product list.\"\"\"\n resolved = resolve(request.path_info)\n if resolved.kwargs:\n parent = self.parent_model.objects.get(pk=resolved.kwargs[\"object_id\"])\n return parent\n return None\n\n def formfield_for_foreignkey(self, db_field, request=None, **kwargs):\n \"\"\"Limit product list items to items of order's shift.\"\"\"\n field = super().formfield_for_foreignkey(db_field, request, **kwargs)\n\n if db_field.name == \"product\":\n if request is not None:\n parent = self.get_parent_object_from_request(request)\n if parent:\n field.queryset = parent.shift.product_list.product_items\n else:\n field.queryset = field.queryset.none()\n\n return field\n\n\nclass OrderShiftFilter(AutocompleteFilter):\n title = _(\"shift\")\n field_name = \"shift\"\n rel_model = Order\n\n def queryset(self, request, queryset):\n if self.value():\n return queryset.filter(shift=self.value())\n return queryset\n\n\nclass OrderMemberFilter(AutocompleteFilter):\n title = _(\"member\")\n field_name = \"payer\"\n rel_model = Order\n\n def queryset(self, request, queryset):\n if self.value():\n return queryset.filter(payer=self.value())\n return queryset\n\n\nclass OrderPaymentFilter(SimpleListFilter):\n title = _(\"payment\")\n parameter_name = \"payment\"\n\n def lookups(self, request, model_admin):\n return (\n (\"not_required\", _(\"No payment required\")),\n (\"paid\", _(\"Paid\")),\n (\"unpaid\", _(\"Unpaid\")),\n )\n\n def queryset(self, request, queryset):\n if self.value() is None:\n return queryset\n if self.value() == \"paid\":\n return queryset.filter(payment__isnull=False)\n if self.value() == \"unpaid\":\n return queryset.filter(payment__isnull=True, total_amount__gt=0)\n return queryset.filter(total_amount__exact=0)\n\n\nclass OrderProductFilter(SimpleListFilter):\n title = _(\"product\")\n parameter_name = \"product\"\n\n def lookups(self, request, model_admin):\n qs = model_admin.get_queryset(request)\n types = qs.filter(order_items__product__product__isnull=False).values_list(\n \"order_items__product__product__id\", \"order_items__product__product__name\"\n )\n return list(types.order_by(\"order_items__product__product__id\").distinct())\n\n def queryset(self, request, queryset):\n if self.value() is None:\n return queryset\n return queryset.filter(order_items__product__product__id__contains=self.value())\n\n\n@register(Order)\nclass OrderAdmin(admin.ModelAdmin):\n class Media:\n pass\n\n inlines = [\n OrderItemInline,\n ]\n ordering = (\"-created_at\",)\n date_hierarchy = \"created_at\"\n search_fields = (\n \"id\",\n \"payer__username\",\n \"payer__first_name\",\n \"payer__last_name\",\n \"payer__profile__nickname\",\n )\n\n list_display = (\n \"id\",\n \"shift\",\n \"created_at\",\n \"order_description\",\n \"num_items\",\n \"discount\",\n \"total_amount\",\n \"paid\",\n \"payer\",\n )\n list_filter = [\n OrderShiftFilter,\n OrderMemberFilter,\n OrderPaymentFilter,\n OrderProductFilter,\n ]\n\n fields = (\n \"id\",\n \"shift\",\n \"created_at\",\n \"created_by\",\n \"order_description\",\n \"num_items\",\n \"age_restricted\",\n \"subtotal\",\n \"discount\",\n \"total_amount\",\n \"payer\",\n \"payment\",\n 
\"payment_url\",\n )\n\n readonly_fields = (\n \"id\",\n \"created_at\",\n \"created_by\",\n \"order_description\",\n \"num_items\",\n \"subtotal\",\n \"total_amount\",\n \"payment\",\n \"age_restricted\",\n \"payment_url\",\n )\n\n def get_readonly_fields(self, request: HttpRequest, obj: Order = None):\n \"\"\"Disallow changing shift when selected.\"\"\"\n default_fields = self.readonly_fields\n\n if not (request.member and request.member.has_perm(\"sales.custom_prices\")):\n default_fields += (\"discount\",)\n\n if obj and obj.shift:\n default_fields += (\"shift\",)\n\n return default_fields\n\n def save_model(self, request, obj, form, change):\n obj.created_by = request.user\n obj.save()\n\n def changeform_view(self, request, object_id=None, form_url=\"\", extra_context=None):\n if object_id:\n obj = self.model.objects.get(pk=object_id)\n if obj.age_restricted and obj.payer and not services.is_adult(obj.payer):\n self.message_user(\n request,\n _(\n \"The payer for this order is under-age while the order is age restricted!\"\n ),\n messages.WARNING,\n )\n return super().changeform_view(request, object_id, form_url, extra_context)\n\n def get_queryset(self, request):\n queryset = super().get_queryset(request)\n\n if not request.member:\n queryset = queryset.none()\n elif not request.member.has_perm(\"sales.override_manager\"):\n queryset = queryset.filter(\n shift__managers__in=request.member.get_member_groups()\n ).distinct()\n\n queryset = queryset.select_properties(\n \"total_amount\", \"subtotal\", \"num_items\", \"age_restricted\"\n )\n queryset = queryset.prefetch_related(\n \"shift\", \"shift__event\", \"shift__product_list\"\n )\n queryset = queryset.prefetch_related(\n \"order_items\", \"order_items__product\", \"order_items__product__product\"\n )\n queryset = queryset.prefetch_related(\"payment\")\n queryset = queryset.prefetch_related(\"payer\")\n return queryset\n\n def has_add_permission(self, request):\n if not request.member:\n return False\n elif not request.member.has_perm(\"sales.override_manager\"):\n if (\n Shift.objects.filter(\n start__lte=timezone.now(),\n locked=False,\n managers__in=request.member.get_member_groups(),\n ).count()\n == 0\n ):\n return False\n return super().has_view_permission(request)\n\n def has_view_permission(self, request, obj=None):\n if obj and not is_manager(request.member, obj.shift):\n return False\n return super().has_view_permission(request, obj)\n\n def has_change_permission(self, request, obj=None):\n if obj and obj.shift.locked:\n return False\n if obj and obj.payment:\n return False\n\n if obj and not is_manager(request.member, obj.shift):\n return False\n\n return super().has_change_permission(request, obj)\n\n def has_delete_permission(self, request, obj=None):\n if obj and obj.shift.locked:\n return False\n if obj and obj.payment:\n return False\n\n if obj and not is_manager(request.member, obj.shift):\n return False\n\n return super().has_delete_permission(request, obj)\n\n def get_form(self, request, obj=None, **kwargs):\n \"\"\"Override get form to use payment widget.\"\"\"\n return super().get_form(\n request,\n obj,\n formfield_callback=partial(\n self.formfield_for_dbfield, request=request, obj=obj\n ),\n **kwargs,\n )\n\n def formfield_for_dbfield(self, db_field, request, obj=None, **kwargs):\n \"\"\"Use payment widget for payments.\"\"\"\n field = super().formfield_for_dbfield(db_field, request, **kwargs)\n if db_field.name == \"payment\":\n return Field(\n widget=PaymentWidget(obj=obj), initial=field.initial, 
required=False\n )\n if db_field.name == \"shift\":\n field.queryset = Shift.objects.filter(locked=False)\n if not request.member:\n field.queryset = field.queryset.none()\n elif not request.member.has_perm(\"sales.override_manager\"):\n field.queryset = field.queryset.filter(\n managers__in=request.member.get_member_groups()\n )\n return field\n\n def changelist_view(self, request, extra_context=None):\n if not (request.member and request.member.has_perm(\"sales.override_manager\")):\n self.message_user(\n request,\n _(\"You are only seeing orders that are relevant to you.\"),\n messages.WARNING,\n )\n return super().changelist_view(request, extra_context)\n\n def change_view(self, request, object_id, form_url=\"\", extra_context=None):\n object_id\n return super().change_view(request, object_id, form_url, extra_context)\n\n def order_description(self, obj):\n if obj.order_description:\n return obj.order_description\n return \"-\"\n\n def num_items(self, obj):\n return obj.num_items\n\n def subtotal(self, obj):\n if obj.subtotal:\n return f\"\u20ac{obj.subtotal:.2f}\"\n return \"-\"\n\n def discount(self, obj):\n if obj.discount:\n return f\"\u20ac{obj.discount:.2f}\"\n return \"-\"\n\n def total_amount(self, obj):\n if obj.total_amount:\n return f\"\u20ac{obj.total_amount:.2f}\"\n return \"-\"\n\n def paid(self, obj):\n if obj.total_amount is None or obj.total_amount == 0:\n return None\n return obj.payment is not None\n\n paid.boolean = True\n\n def age_restricted(self, obj):\n return bool(obj.age_restricted) if obj else None\n\n age_restricted.boolean = True\n", "path": "website/sales/admin/order_admin.py"}], "after_files": [{"content": "from functools import partial\n\nfrom admin_auto_filters.filters import AutocompleteFilter\nfrom django.contrib import admin, messages\nfrom django.contrib.admin import register, SimpleListFilter\nfrom django.forms import Field\nfrom django.http import HttpRequest\nfrom django.urls import resolve\nfrom django.utils import timezone\n\nfrom django.utils.translation import gettext_lazy as _\n\nfrom payments.widgets import PaymentWidget\nfrom sales import services\nfrom sales.models.order import Order, OrderItem\nfrom sales.models.shift import Shift\nfrom sales.services import is_manager\n\n\nclass OrderItemInline(admin.TabularInline):\n model = OrderItem\n extra = 0\n\n fields = (\"product\", \"product_name\", \"amount\", \"total\")\n\n def get_readonly_fields(self, request: HttpRequest, obj: Order = None):\n default_fields = self.readonly_fields\n\n if not (request.member and request.member.has_perm(\"sales.custom_prices\")):\n default_fields += (\"total\",)\n\n return default_fields\n\n def get_queryset(self, request):\n queryset = super().get_queryset(request)\n queryset = queryset.prefetch_related(\"product\", \"product__product\")\n return queryset\n\n def has_add_permission(self, request, obj):\n if obj and obj.shift.locked:\n return False\n\n if obj and obj.payment:\n return False\n\n parent = self.get_parent_object_from_request(request)\n if not parent:\n return False\n\n return super().has_add_permission(request, obj)\n\n def has_change_permission(self, request, obj=None):\n if obj and obj.payment:\n return False\n if obj and obj.shift.locked:\n return False\n if obj and not is_manager(request.member, obj.shift):\n return False\n return True\n\n def has_delete_permission(self, request, obj=None):\n if obj and obj.payment:\n return False\n if obj and obj.shift.locked:\n return False\n if obj and not is_manager(request.member, obj.shift):\n return 
False\n return True\n\n def get_parent_object_from_request(self, request):\n \"\"\"Get parent object to determine product list.\"\"\"\n resolved = resolve(request.path_info)\n if resolved.kwargs:\n parent = self.parent_model.objects.get(pk=resolved.kwargs[\"object_id\"])\n return parent\n return None\n\n def formfield_for_foreignkey(self, db_field, request=None, **kwargs):\n \"\"\"Limit product list items to items of order's shift.\"\"\"\n field = super().formfield_for_foreignkey(db_field, request, **kwargs)\n\n if db_field.name == \"product\":\n if request is not None:\n parent = self.get_parent_object_from_request(request)\n if parent:\n field.queryset = parent.shift.product_list.product_items\n else:\n field.queryset = field.queryset.none()\n\n return field\n\n\nclass OrderShiftFilter(AutocompleteFilter):\n title = _(\"shift\")\n field_name = \"shift\"\n rel_model = Order\n\n def queryset(self, request, queryset):\n if self.value():\n return queryset.filter(shift=self.value())\n return queryset\n\n\nclass OrderMemberFilter(AutocompleteFilter):\n title = _(\"member\")\n field_name = \"payer\"\n rel_model = Order\n\n def queryset(self, request, queryset):\n if self.value():\n return queryset.filter(payer=self.value())\n return queryset\n\n\nclass OrderPaymentFilter(SimpleListFilter):\n title = _(\"payment\")\n parameter_name = \"payment\"\n\n def lookups(self, request, model_admin):\n return (\n (\"not_required\", _(\"No payment required\")),\n (\"paid\", _(\"Paid\")),\n (\"unpaid\", _(\"Unpaid\")),\n )\n\n def queryset(self, request, queryset):\n if self.value() is None:\n return queryset\n if self.value() == \"paid\":\n return queryset.filter(payment__isnull=False)\n if self.value() == \"unpaid\":\n return queryset.filter(payment__isnull=True, total_amount__gt=0)\n return queryset.filter(total_amount__exact=0)\n\n\nclass OrderProductFilter(SimpleListFilter):\n title = _(\"product\")\n parameter_name = \"product\"\n\n def lookups(self, request, model_admin):\n qs = model_admin.get_queryset(request)\n types = qs.filter(order_items__product__product__isnull=False).values_list(\n \"order_items__product__product__id\", \"order_items__product__product__name\"\n )\n return list(types.order_by(\"order_items__product__product__id\").distinct())\n\n def queryset(self, request, queryset):\n if self.value() is None:\n return queryset\n return queryset.filter(order_items__product__product__id__contains=self.value())\n\n\n@register(Order)\nclass OrderAdmin(admin.ModelAdmin):\n class Media:\n pass\n\n inlines = [\n OrderItemInline,\n ]\n ordering = (\"-created_at\",)\n date_hierarchy = \"created_at\"\n search_fields = (\n \"id\",\n \"payer__username\",\n \"payer__first_name\",\n \"payer__last_name\",\n \"payer__profile__nickname\",\n )\n\n list_display = (\n \"id\",\n \"shift\",\n \"created_at\",\n \"order_description\",\n \"num_items\",\n \"discount\",\n \"total_amount\",\n \"paid\",\n \"payer\",\n )\n list_filter = [\n OrderShiftFilter,\n OrderMemberFilter,\n OrderPaymentFilter,\n OrderProductFilter,\n ]\n\n fields = (\n \"id\",\n \"shift\",\n \"created_at\",\n \"created_by\",\n \"order_description\",\n \"num_items\",\n \"age_restricted\",\n \"subtotal\",\n \"discount\",\n \"total_amount\",\n \"payer\",\n \"payment\",\n \"payment_url\",\n )\n\n readonly_fields = (\n \"id\",\n \"created_at\",\n \"created_by\",\n \"order_description\",\n \"num_items\",\n \"subtotal\",\n \"total_amount\",\n \"payment\",\n \"age_restricted\",\n \"payment_url\",\n )\n\n def get_readonly_fields(self, request: HttpRequest, 
obj: Order = None):\n \"\"\"Disallow changing shift when selected.\"\"\"\n default_fields = self.readonly_fields\n\n if not (request.member and request.member.has_perm(\"sales.custom_prices\")):\n default_fields += (\"discount\",)\n\n if obj and obj.shift:\n default_fields += (\"shift\",)\n\n return default_fields\n\n def save_model(self, request, obj, form, change):\n obj.created_by = request.user\n obj.save()\n\n def changeform_view(self, request, object_id=None, form_url=\"\", extra_context=None):\n if object_id:\n obj = self.model.objects.get(pk=object_id)\n if obj.age_restricted and obj.payer and not services.is_adult(obj.payer):\n self.message_user(\n request,\n _(\n \"The payer for this order is under-age while the order is age restricted!\"\n ),\n messages.WARNING,\n )\n return super().changeform_view(request, object_id, form_url, extra_context)\n\n def get_queryset(self, request):\n queryset = super().get_queryset(request)\n\n if not request.member:\n queryset = queryset.none()\n elif not request.member.has_perm(\"sales.override_manager\"):\n queryset = queryset.filter(\n shift__managers__in=request.member.get_member_groups()\n ).distinct()\n\n queryset = queryset.select_properties(\n \"total_amount\", \"subtotal\", \"num_items\", \"age_restricted\"\n )\n queryset = queryset.prefetch_related(\n \"shift\", \"shift__event\", \"shift__product_list\"\n )\n queryset = queryset.prefetch_related(\n \"order_items\", \"order_items__product\", \"order_items__product__product\"\n )\n queryset = queryset.prefetch_related(\"payment\")\n queryset = queryset.prefetch_related(\"payer\")\n return queryset\n\n def has_add_permission(self, request):\n if not request.member:\n return False\n elif not request.member.has_perm(\"sales.override_manager\"):\n if (\n Shift.objects.filter(\n start__lte=timezone.now(),\n locked=False,\n managers__in=request.member.get_member_groups(),\n ).count()\n == 0\n ):\n return False\n return super().has_view_permission(request)\n\n def has_view_permission(self, request, obj=None):\n if obj and not is_manager(request.member, obj.shift):\n return False\n return super().has_view_permission(request, obj)\n\n def has_change_permission(self, request, obj=None):\n if obj and obj.shift.locked:\n return False\n if obj and obj.payment:\n return False\n\n if obj and not is_manager(request.member, obj.shift):\n return False\n\n return super().has_change_permission(request, obj)\n\n def has_delete_permission(self, request, obj=None):\n if obj and obj.shift.locked:\n return False\n if obj and obj.payment:\n return False\n\n if obj and not is_manager(request.member, obj.shift):\n return False\n\n return super().has_delete_permission(request, obj)\n\n def get_form(self, request, obj=None, **kwargs):\n \"\"\"Override get form to use payment widget.\"\"\"\n return super().get_form(\n request,\n obj,\n formfield_callback=partial(\n self.formfield_for_dbfield, request=request, obj=obj\n ),\n **kwargs,\n )\n\n def formfield_for_dbfield(self, db_field, request, obj=None, **kwargs):\n \"\"\"Use payment widget for payments.\"\"\"\n field = super().formfield_for_dbfield(db_field, request, **kwargs)\n if db_field.name == \"payment\":\n return Field(\n widget=PaymentWidget(obj=obj), initial=field.initial, required=False\n )\n if db_field.name == \"shift\":\n field.queryset = Shift.objects.filter(locked=False)\n if not request.member:\n field.queryset = field.queryset.none()\n elif not request.member.has_perm(\"sales.override_manager\"):\n field.queryset = field.queryset.filter(\n 
managers__in=request.member.get_member_groups()\n )\n return field\n\n def changelist_view(self, request, extra_context=None):\n if not (request.member and request.member.has_perm(\"sales.override_manager\")):\n self.message_user(\n request,\n _(\"You are only seeing orders that are relevant to you.\"),\n messages.WARNING,\n )\n return super().changelist_view(request, extra_context)\n\n def change_view(self, request, object_id, form_url=\"\", extra_context=None):\n if object_id:\n try:\n obj = self.model.objects.get(pk=object_id)\n if (\n obj.age_restricted\n and obj.payer\n and not services.is_adult(obj.payer)\n ):\n self.message_user(\n request,\n _(\n \"The payer for this order is under-age while the order is age restricted!\"\n ),\n messages.WARNING,\n )\n except self.model.DoesNotExist:\n pass\n return super().change_view(request, object_id, form_url, extra_context)\n\n def order_description(self, obj):\n if obj.order_description:\n return obj.order_description\n return \"-\"\n\n def num_items(self, obj):\n return obj.num_items\n\n def subtotal(self, obj):\n if obj.subtotal:\n return f\"\u20ac{obj.subtotal:.2f}\"\n return \"-\"\n\n def discount(self, obj):\n if obj.discount:\n return f\"\u20ac{obj.discount:.2f}\"\n return \"-\"\n\n def total_amount(self, obj):\n if obj.total_amount:\n return f\"\u20ac{obj.total_amount:.2f}\"\n return \"-\"\n\n def paid(self, obj):\n if obj.total_amount is None or obj.total_amount == 0:\n return None\n return obj.payment is not None\n\n paid.boolean = True\n\n def age_restricted(self, obj):\n return bool(obj.age_restricted) if obj else None\n\n age_restricted.boolean = True\n", "path": "website/sales/admin/order_admin.py"}]}
| 4,056 | 232 |
gh_patches_debug_1447
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-945
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't get DataLoader to work
Hello! I'm trying examples from this page https://strawberry.rocks/docs/guides/dataloaders.
Running the following code on Python 3.8:
```python
import strawberry
from strawberry.dataloader import DataLoader
from typing import List
@strawberry.type
class User:
id: strawberry.ID
async def load_users(keys) -> List[User]:
return [User(id=key) for key in keys]
loader = DataLoader(load_fn=load_users)
@strawberry.type
class Query:
@strawberry.field
async def get_user(self, id: strawberry.ID) -> User:
return await loader.load(id)
schema = strawberry.Schema(query=Query)
```
I get the following error message:
```
Task <Task pending name='Task-8' coro=<ExecutionContext.resolve_field.<locals>.await_result()
running at /Users/-/Documents/src/dataservice-poc/virtualenv/lib/python3.8/site-packages/graphql/execution/execute.py:625>
cb=[gather.<locals>._done_callback() at /usr/local/Cellar/[email protected]/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/tasks.py:758]>
got Future <Future pending> attached to a different loop
```
When I try my own code (which is pretty much the same, but the loader is real - it reads data from the db) I get this: "RuntimeError: await wasn't used with future".
I'm stuck, don't really know where to look. I thought Strawberry is supposed to manage async processing, but looks like it doesn't work that way. Any help would be greatly appreciated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/cli/commands/server.py`
Content:
```
1 import importlib
2 import sys
3
4 import click
5 import hupper
6 import uvicorn
7 from starlette.applications import Starlette
8 from starlette.middleware.cors import CORSMiddleware
9
10 from strawberry import Schema
11 from strawberry.asgi import GraphQL
12 from strawberry.utils.importer import import_module_symbol
13
14
15 @click.command("server", short_help="Starts debug server")
16 @click.argument("schema", type=str)
17 @click.option("-h", "--host", default="0.0.0.0", type=str)
18 @click.option("-p", "--port", default=8000, type=int)
19 @click.option(
20 "--app-dir",
21 default=".",
22 type=str,
23 show_default=True,
24 help=(
25 "Look for the module in the specified directory, by adding this to the "
26 "PYTHONPATH. Defaults to the current working directory. "
27 "Works the same as `--app-dir` in uvicorn."
28 ),
29 )
30 def server(schema, host, port, app_dir):
31 sys.path.insert(0, app_dir)
32
33 try:
34 schema_symbol = import_module_symbol(schema, default_symbol_name="schema")
35 except (ImportError, AttributeError) as exc:
36 message = str(exc)
37 raise click.BadArgumentUsage(message)
38
39 if not isinstance(schema_symbol, Schema):
40 message = "The `schema` must be an instance of strawberry.Schema"
41 raise click.BadArgumentUsage(message)
42
43 reloader = hupper.start_reloader("strawberry.cli.run", verbose=False)
44 schema_module = importlib.import_module(schema_symbol.__module__)
45 reloader.watch_files([schema_module.__file__])
46
47 app = Starlette(debug=True)
48 app.add_middleware(
49 CORSMiddleware, allow_headers=["*"], allow_origins=["*"], allow_methods=["*"]
50 )
51
52 graphql_app = GraphQL(schema_symbol, debug=True)
53
54 paths = ["/", "/graphql"]
55 for path in paths:
56 app.add_route(path, graphql_app)
57 app.add_websocket_route(path, graphql_app)
58
59 print(f"Running strawberry on http://{host}:{port}/ 🍓")
60 uvicorn.run(app, host=host, port=port, log_level="error")
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/cli/commands/server.py b/strawberry/cli/commands/server.py
--- a/strawberry/cli/commands/server.py
+++ b/strawberry/cli/commands/server.py
@@ -57,4 +57,4 @@
app.add_websocket_route(path, graphql_app)
print(f"Running strawberry on http://{host}:{port}/ 🍓")
- uvicorn.run(app, host=host, port=port, log_level="error")
+ uvicorn.run(app, loop="none", host=host, port=port, log_level="error")
|
{"golden_diff": "diff --git a/strawberry/cli/commands/server.py b/strawberry/cli/commands/server.py\n--- a/strawberry/cli/commands/server.py\n+++ b/strawberry/cli/commands/server.py\n@@ -57,4 +57,4 @@\n app.add_websocket_route(path, graphql_app)\n \n print(f\"Running strawberry on http://{host}:{port}/ \ud83c\udf53\")\n- uvicorn.run(app, host=host, port=port, log_level=\"error\")\n+ uvicorn.run(app, loop=\"none\", host=host, port=port, log_level=\"error\")\n", "issue": "Can't get DataLoader to work\nHello! I'm trying examples from this page https://strawberry.rocks/docs/guides/dataloaders.\r\nRunning the following code on Python 3.8:\r\n```python\r\nimport strawberry\r\nfrom strawberry.dataloader import DataLoader\r\nfrom typing import List\r\n\r\n\r\[email protected]\r\nclass User:\r\n id: strawberry.ID\r\n\r\n\r\nasync def load_users(keys) -> List[User]:\r\n return [User(id=key) for key in keys]\r\n\r\nloader = DataLoader(load_fn=load_users)\r\n\r\n\r\[email protected]\r\nclass Query:\r\n @strawberry.field\r\n async def get_user(self, id: strawberry.ID) -> User:\r\n return await loader.load(id)\r\n\r\n\r\nschema = strawberry.Schema(query=Query)\r\n```\r\nI get the following error message:\r\n```\r\nTask <Task pending name='Task-8' coro=<ExecutionContext.resolve_field.<locals>.await_result() \r\nrunning at /Users/-/Documents/src/dataservice-poc/virtualenv/lib/python3.8/site-packages/graphql/execution/execute.py:625> \r\ncb=[gather.<locals>._done_callback() at /usr/local/Cellar/[email protected]/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/tasks.py:758]> \r\ngot Future <Future pending> attached to a different loop\r\n```\r\n\r\nWhen I try my own code (which is pretty much the same, but the loader is real - it reads data from the db) I get this: \"RuntimeError: await wasn't used with future\".\r\n\r\nI'm stuck, don't really know where to look. I thought Strawberry is supposed to manage async processing, but looks like it doesn't work that way. Any help would be greatly appreciated.\n", "before_files": [{"content": "import importlib\nimport sys\n\nimport click\nimport hupper\nimport uvicorn\nfrom starlette.applications import Starlette\nfrom starlette.middleware.cors import CORSMiddleware\n\nfrom strawberry import Schema\nfrom strawberry.asgi import GraphQL\nfrom strawberry.utils.importer import import_module_symbol\n\n\[email protected](\"server\", short_help=\"Starts debug server\")\[email protected](\"schema\", type=str)\[email protected](\"-h\", \"--host\", default=\"0.0.0.0\", type=str)\[email protected](\"-p\", \"--port\", default=8000, type=int)\[email protected](\n \"--app-dir\",\n default=\".\",\n type=str,\n show_default=True,\n help=(\n \"Look for the module in the specified directory, by adding this to the \"\n \"PYTHONPATH. Defaults to the current working directory. 
\"\n \"Works the same as `--app-dir` in uvicorn.\"\n ),\n)\ndef server(schema, host, port, app_dir):\n sys.path.insert(0, app_dir)\n\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n message = str(exc)\n raise click.BadArgumentUsage(message)\n\n if not isinstance(schema_symbol, Schema):\n message = \"The `schema` must be an instance of strawberry.Schema\"\n raise click.BadArgumentUsage(message)\n\n reloader = hupper.start_reloader(\"strawberry.cli.run\", verbose=False)\n schema_module = importlib.import_module(schema_symbol.__module__)\n reloader.watch_files([schema_module.__file__])\n\n app = Starlette(debug=True)\n app.add_middleware(\n CORSMiddleware, allow_headers=[\"*\"], allow_origins=[\"*\"], allow_methods=[\"*\"]\n )\n\n graphql_app = GraphQL(schema_symbol, debug=True)\n\n paths = [\"/\", \"/graphql\"]\n for path in paths:\n app.add_route(path, graphql_app)\n app.add_websocket_route(path, graphql_app)\n\n print(f\"Running strawberry on http://{host}:{port}/ \ud83c\udf53\")\n uvicorn.run(app, host=host, port=port, log_level=\"error\")\n", "path": "strawberry/cli/commands/server.py"}], "after_files": [{"content": "import importlib\nimport sys\n\nimport click\nimport hupper\nimport uvicorn\nfrom starlette.applications import Starlette\nfrom starlette.middleware.cors import CORSMiddleware\n\nfrom strawberry import Schema\nfrom strawberry.asgi import GraphQL\nfrom strawberry.utils.importer import import_module_symbol\n\n\[email protected](\"server\", short_help=\"Starts debug server\")\[email protected](\"schema\", type=str)\[email protected](\"-h\", \"--host\", default=\"0.0.0.0\", type=str)\[email protected](\"-p\", \"--port\", default=8000, type=int)\[email protected](\n \"--app-dir\",\n default=\".\",\n type=str,\n show_default=True,\n help=(\n \"Look for the module in the specified directory, by adding this to the \"\n \"PYTHONPATH. Defaults to the current working directory. \"\n \"Works the same as `--app-dir` in uvicorn.\"\n ),\n)\ndef server(schema, host, port, app_dir):\n sys.path.insert(0, app_dir)\n\n try:\n schema_symbol = import_module_symbol(schema, default_symbol_name=\"schema\")\n except (ImportError, AttributeError) as exc:\n message = str(exc)\n raise click.BadArgumentUsage(message)\n\n if not isinstance(schema_symbol, Schema):\n message = \"The `schema` must be an instance of strawberry.Schema\"\n raise click.BadArgumentUsage(message)\n\n reloader = hupper.start_reloader(\"strawberry.cli.run\", verbose=False)\n schema_module = importlib.import_module(schema_symbol.__module__)\n reloader.watch_files([schema_module.__file__])\n\n app = Starlette(debug=True)\n app.add_middleware(\n CORSMiddleware, allow_headers=[\"*\"], allow_origins=[\"*\"], allow_methods=[\"*\"]\n )\n\n graphql_app = GraphQL(schema_symbol, debug=True)\n\n paths = [\"/\", \"/graphql\"]\n for path in paths:\n app.add_route(path, graphql_app)\n app.add_websocket_route(path, graphql_app)\n\n print(f\"Running strawberry on http://{host}:{port}/ \ud83c\udf53\")\n uvicorn.run(app, loop=\"none\", host=host, port=port, log_level=\"error\")\n", "path": "strawberry/cli/commands/server.py"}]}
| 1,217 | 133 |
gh_patches_debug_17772
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-2796
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dovecot: support for zstd/lz4 compression is not compiled in
## Environment & Versions
### Environment
- [x] docker-compose
- [ ] kubernetes
- [ ] docker swarm
### Versions
1.9
## Description
When `COMPRESSION` in `mailu.env` is set to `zstd` or `lz4` and a new mail arrives, the imap container logs something like this:
```
Jan 04 23:32:46 lmtp([email protected])<32616><kRbzEw7L1GFofwAADGH2HQ>: Error: zlib_save: Support not compiled in for handler: zstd
Jan 04 23:32:46 indexer-worker([email protected])<32619><D5ZaIcXUkKDAqMsD:Z6zPGQ7L1GFrfwAADGH2HQ>: Error: zlib_save: Support not compiled in for handler: zstd
```
```
Jan 04 23:43:03 imap([email protected])<178><2CwwXMnUwMXAqMsE>: Error: zlib_save: Support not compiled in for handler: lz4
Jan 04 23:43:03 indexer-worker([email protected])<180><B4qhJXfN1GGvAAAAflj66Q:/8frM3fN1GG0AAAAflj66Q>: Error: zlib_save: Support not compiled in for handler: lz4
```
As far as I can tell the Dovecot package from Alpine is probably not compiled with zstd/lz4 support, but I'm not sure since the build log of the last Alpine package returns 404 :roll_eyes:
This is __not__ breaking anything, mails will be received, but they won't be compressed.
## Replication Steps
1. in `mailu.env` set `COMPRESSION=zstd` or `COMPRESSION=lz4`
2. `docker-compose down && docker-compose up`
3. send yourself a mail
4. watch the logs of the imap container
## Expected behaviour
With `COMPRESSION=zstd`/`COMPRESSION=lz4` set, mails should be compressed with zstd/lz4 and then stored.
## Workaround
Set `COMPRESSION=gz` if you need compressed mail storage.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/postfix/start.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import os
4 import glob
5 import shutil
6 import multiprocessing
7 import sys
8 import re
9
10 from podop import run_server
11 from socrate import system, conf
12
13 system.set_env(log_filters=[
14 r'the Postfix mail system is running\: \d+$',
15 r'(dis)?connect from localhost\[(\:\:1|127\.0\.0\.1)\]( quit=1 commands=1)?$',
16 r'haproxy read\: short protocol header\: QUIT$',
17 r'discarding EHLO keywords\: PIPELINING$',
18 ], log_file=os.environ.get('POSTFIX_LOG_FILE'))
19
20 os.system("flock -n /queue/pid/master.pid rm /queue/pid/master.pid")
21
22 def start_podop():
23 system.drop_privs_to('postfix')
24 os.makedirs('/dev/shm/postfix',mode=0o700, exist_ok=True)
25 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/postfix/"
26 # TODO: Remove verbosity setting from Podop?
27 run_server(0, "postfix", "/tmp/podop.socket", [
28 ("transport", "url", url + "transport/§"),
29 ("alias", "url", url + "alias/§"),
30 ("dane", "url", url + "dane/§"),
31 ("domain", "url", url + "domain/§"),
32 ("mailbox", "url", url + "mailbox/§"),
33 ("recipientmap", "url", url + "recipient/map/§"),
34 ("sendermap", "url", url + "sender/map/§"),
35 ("senderlogin", "url", url + "sender/login/§"),
36 ("senderrate", "url", url + "sender/rate/§")
37 ])
38
39 def start_mta_sts_daemon():
40 os.chmod("/root/", 0o755) # read access to /root/.netrc required
41 system.drop_privs_to('postfix')
42 from postfix_mta_sts_resolver import daemon
43 daemon.main()
44
45 def is_valid_postconf_line(line):
46 return not line.startswith("#") \
47 and not line == ''
48
49 # Actual startup script
50 os.environ['DEFER_ON_TLS_ERROR'] = os.environ['DEFER_ON_TLS_ERROR'] if 'DEFER_ON_TLS_ERROR' in os.environ else 'True'
51
52 # Postfix requires IPv6 addresses to be wrapped in square brackets
53 if 'RELAYNETS' in os.environ:
54 os.environ["RELAYNETS"] = re.sub(r'([0-9a-fA-F]+:[0-9a-fA-F:]+)/', '[\\1]/', os.environ["RELAYNETS"])
55
56 for postfix_file in glob.glob("/conf/*.cf"):
57 conf.jinja(postfix_file, os.environ, os.path.join("/etc/postfix", os.path.basename(postfix_file)))
58
59 if os.path.exists("/overrides/postfix.cf"):
60 for line in open("/overrides/postfix.cf").read().strip().split("\n"):
61 if is_valid_postconf_line(line):
62 os.system('postconf -e "{}"'.format(line))
63
64 if os.path.exists("/overrides/postfix.master"):
65 for line in open("/overrides/postfix.master").read().strip().split("\n"):
66 if is_valid_postconf_line(line):
67 os.system('postconf -Me "{}"'.format(line))
68
69 for map_file in glob.glob("/overrides/*.map"):
70 destination = os.path.join("/etc/postfix", os.path.basename(map_file))
71 shutil.copyfile(map_file, destination)
72 os.system("postmap {}".format(destination))
73 os.remove(destination)
74
75 if os.path.exists("/overrides/mta-sts-daemon.yml"):
76 shutil.copyfile("/overrides/mta-sts-daemon.yml", "/etc/mta-sts-daemon.yml")
77 else:
78 conf.jinja("/conf/mta-sts-daemon.yml", os.environ, "/etc/mta-sts-daemon.yml")
79
80 for policy in ['tls_policy', 'transport']:
81 if not os.path.exists(f'/etc/postfix/{policy}.map.lmdb'):
82 open(f'/etc/postfix/{policy}.map', 'a').close()
83 os.system(f'postmap /etc/postfix/{policy}.map')
84
85 if "RELAYUSER" in os.environ:
86 path = "/etc/postfix/sasl_passwd"
87 conf.jinja("/conf/sasl_passwd", os.environ, path)
88 os.system("postmap {}".format(path))
89
90 # Configure logrotate and start crond
91 if os.environ.get('POSTFIX_LOG_FILE'):
92 conf.jinja("/conf/logrotate.conf", os.environ, "/etc/logrotate.d/postfix.conf")
93 os.system("/usr/sbin/crond")
94 if os.path.exists("/overrides/logrotate.conf"):
95 shutil.copyfile("/overrides/logrotate.conf", "/etc/logrotate.d/postfix.conf")
96
97 # Run Podop and Postfix
98 multiprocessing.Process(target=start_podop).start()
99 multiprocessing.Process(target=start_mta_sts_daemon).start()
100 os.system("/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing")
101 # Before starting postfix, we need to check permissions on /queue
102 # in the event that postfix,postdrop id have changed
103 os.system("postfix set-permissions")
104 os.system("postfix start-fg")
105
```
Path: `core/dovecot/start.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import os
4 import glob
5 import multiprocessing
6 import logging as log
7 import sys
8
9 from podop import run_server
10 from socrate import system, conf
11
12 system.set_env(log_filters=r'waitpid\(\) returned unknown PID \d+$')
13
14 def start_podop():
15 system.drop_privs_to('mail')
16 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/dovecot/§"
17 run_server(0, "dovecot", "/tmp/podop.socket", [
18 ("quota", "url", url ),
19 ("auth", "url", url),
20 ("sieve", "url", url),
21 ])
22
23 # Actual startup script
24 for dovecot_file in glob.glob("/conf/*.conf"):
25 conf.jinja(dovecot_file, os.environ, os.path.join("/etc/dovecot", os.path.basename(dovecot_file)))
26
27 os.makedirs("/conf/bin", exist_ok=True)
28 for script_file in glob.glob("/conf/*.script"):
29 out_file = os.path.join("/conf/bin/", os.path.basename(script_file).replace('.script',''))
30 conf.jinja(script_file, os.environ, out_file)
31 os.chmod(out_file, 0o555)
32
33 # Run Podop, then postfix
34 os.system("chown mail:mail /mail")
35 os.system("chown -R mail:mail /var/lib/dovecot /conf")
36
37 multiprocessing.Process(target=start_podop).start()
38 os.system("dovecot -c /etc/dovecot/dovecot.conf -F")
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/dovecot/start.py b/core/dovecot/start.py
--- a/core/dovecot/start.py
+++ b/core/dovecot/start.py
@@ -9,7 +9,7 @@
from podop import run_server
from socrate import system, conf
-system.set_env(log_filters=r'waitpid\(\) returned unknown PID \d+$')
+system.set_env(log_filters=r'Error\: SSL context initialization failed, disabling SSL\: Can\'t load SSL certificate \(ssl_cert setting\)\: The certificate is empty$')
def start_podop():
system.drop_privs_to('mail')
diff --git a/core/postfix/start.py b/core/postfix/start.py
--- a/core/postfix/start.py
+++ b/core/postfix/start.py
@@ -11,7 +11,6 @@
from socrate import system, conf
system.set_env(log_filters=[
- r'the Postfix mail system is running\: \d+$',
r'(dis)?connect from localhost\[(\:\:1|127\.0\.0\.1)\]( quit=1 commands=1)?$',
r'haproxy read\: short protocol header\: QUIT$',
r'discarding EHLO keywords\: PIPELINING$',
|
{"golden_diff": "diff --git a/core/dovecot/start.py b/core/dovecot/start.py\n--- a/core/dovecot/start.py\n+++ b/core/dovecot/start.py\n@@ -9,7 +9,7 @@\n from podop import run_server\n from socrate import system, conf\n \n-system.set_env(log_filters=r'waitpid\\(\\) returned unknown PID \\d+$')\n+system.set_env(log_filters=r'Error\\: SSL context initialization failed, disabling SSL\\: Can\\'t load SSL certificate \\(ssl_cert setting\\)\\: The certificate is empty$')\n \n def start_podop():\n system.drop_privs_to('mail')\ndiff --git a/core/postfix/start.py b/core/postfix/start.py\n--- a/core/postfix/start.py\n+++ b/core/postfix/start.py\n@@ -11,7 +11,6 @@\n from socrate import system, conf\n \n system.set_env(log_filters=[\n- r'the Postfix mail system is running\\: \\d+$',\n r'(dis)?connect from localhost\\[(\\:\\:1|127\\.0\\.0\\.1)\\]( quit=1 commands=1)?$',\n r'haproxy read\\: short protocol header\\: QUIT$',\n r'discarding EHLO keywords\\: PIPELINING$',\n", "issue": "dovecot: support for zstd/lz4 compression is not compiled in\n## Environment & Versions\r\n### Environment\r\n - [x] docker-compose\r\n - [ ] kubernetes\r\n - [ ] docker swarm\r\n\r\n### Versions\r\n1.9\r\n\r\n## Description\r\nWhen `COMPRESSION` in `mailu.env` is set to `zstd` or `lz4` and a new mail arrives, the imap container logs something like this:\r\n```\r\nJan 04 23:32:46 lmtp([email protected])<32616><kRbzEw7L1GFofwAADGH2HQ>: Error: zlib_save: Support not compiled in for handler: zstd\r\nJan 04 23:32:46 indexer-worker([email protected])<32619><D5ZaIcXUkKDAqMsD:Z6zPGQ7L1GFrfwAADGH2HQ>: Error: zlib_save: Support not compiled in for handler: zstd\r\n```\r\n\r\n```\r\nJan 04 23:43:03 imap([email protected])<178><2CwwXMnUwMXAqMsE>: Error: zlib_save: Support not compiled in for handler: lz4\r\nJan 04 23:43:03 indexer-worker([email protected])<180><B4qhJXfN1GGvAAAAflj66Q:/8frM3fN1GG0AAAAflj66Q>: Error: zlib_save: Support not compiled in for handler: lz4\r\n```\r\n\r\nAs far as I can tell the Dovecot package from Alpine is probably not compiled with zstd/lz4 support, but I'm not sure since the build log of the last Alpine package returns 404 :roll_eyes:\r\n\r\nThis is __not__ breaking anything, mails will be received, but they won't be compressed.\r\n\r\n## Replication Steps\r\n1. in `mailu.env` set `COMPRESSION=zstd` or `COMPRESSION=lz4`\r\n2. `docker-compose down && docker-compose up`\r\n3. send yourself a mail\r\n4. 
watch the logs of the imap container\r\n\r\n## Expected behaviour\r\nWith `COMPRESSION=zstd`/`COMPRESSION=lz4` set, mails should be compressed with zstd/lz4 and then stored.\r\n\r\n## Workaround\r\nSet `COMPRESSION=gz` if you need compressed mail storage.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport sys\nimport re\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nsystem.set_env(log_filters=[\n r'the Postfix mail system is running\\: \\d+$',\n r'(dis)?connect from localhost\\[(\\:\\:1|127\\.0\\.0\\.1)\\]( quit=1 commands=1)?$',\n r'haproxy read\\: short protocol header\\: QUIT$',\n r'discarding EHLO keywords\\: PIPELINING$',\n ], log_file=os.environ.get('POSTFIX_LOG_FILE'))\n\nos.system(\"flock -n /queue/pid/master.pid rm /queue/pid/master.pid\")\n\ndef start_podop():\n system.drop_privs_to('postfix')\n os.makedirs('/dev/shm/postfix',mode=0o700, exist_ok=True)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n (\"transport\", \"url\", url + \"transport/\u00a7\"),\n (\"alias\", \"url\", url + \"alias/\u00a7\"),\n (\"dane\", \"url\", url + \"dane/\u00a7\"),\n (\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"recipientmap\", \"url\", url + \"recipient/map/\u00a7\"),\n (\"sendermap\", \"url\", url + \"sender/map/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\"),\n (\"senderrate\", \"url\", url + \"sender/rate/\u00a7\")\n ])\n\ndef start_mta_sts_daemon():\n os.chmod(\"/root/\", 0o755) # read access to /root/.netrc required\n system.drop_privs_to('postfix')\n from postfix_mta_sts_resolver import daemon\n daemon.main()\n\ndef is_valid_postconf_line(line):\n return not line.startswith(\"#\") \\\n and not line == ''\n\n# Actual startup script\nos.environ['DEFER_ON_TLS_ERROR'] = os.environ['DEFER_ON_TLS_ERROR'] if 'DEFER_ON_TLS_ERROR' in os.environ else 'True'\n\n# Postfix requires IPv6 addresses to be wrapped in square brackets\nif 'RELAYNETS' in os.environ:\n os.environ[\"RELAYNETS\"] = re.sub(r'([0-9a-fA-F]+:[0-9a-fA-F:]+)/', '[\\\\1]/', os.environ[\"RELAYNETS\"])\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n conf.jinja(postfix_file, os.environ, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif os.path.exists(\"/overrides/mta-sts-daemon.yml\"):\n shutil.copyfile(\"/overrides/mta-sts-daemon.yml\", \"/etc/mta-sts-daemon.yml\")\nelse:\n conf.jinja(\"/conf/mta-sts-daemon.yml\", os.environ, \"/etc/mta-sts-daemon.yml\")\n\nfor policy in ['tls_policy', 'transport']:\n if not os.path.exists(f'/etc/postfix/{policy}.map.lmdb'):\n open(f'/etc/postfix/{policy}.map', 'a').close()\n os.system(f'postmap 
/etc/postfix/{policy}.map')\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n conf.jinja(\"/conf/sasl_passwd\", os.environ, path)\n os.system(\"postmap {}\".format(path))\n\n# Configure logrotate and start crond\nif os.environ.get('POSTFIX_LOG_FILE'):\n conf.jinja(\"/conf/logrotate.conf\", os.environ, \"/etc/logrotate.d/postfix.conf\")\n os.system(\"/usr/sbin/crond\")\n if os.path.exists(\"/overrides/logrotate.conf\"):\n shutil.copyfile(\"/overrides/logrotate.conf\", \"/etc/logrotate.d/postfix.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nmultiprocessing.Process(target=start_mta_sts_daemon).start()\nos.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\n# Before starting postfix, we need to check permissions on /queue\n# in the event that postfix,postdrop id have changed\nos.system(\"postfix set-permissions\")\nos.system(\"postfix start-fg\")\n", "path": "core/postfix/start.py"}, {"content": "#!/usr/bin/env python3\n\nimport os\nimport glob\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nsystem.set_env(log_filters=r'waitpid\\(\\) returned unknown PID \\d+$')\n\ndef start_podop():\n system.drop_privs_to('mail')\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/dovecot/\u00a7\"\n run_server(0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", url ),\n\t\t(\"auth\", \"url\", url),\n\t\t(\"sieve\", \"url\", url),\n ])\n\n# Actual startup script\nfor dovecot_file in glob.glob(\"/conf/*.conf\"):\n conf.jinja(dovecot_file, os.environ, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\nos.makedirs(\"/conf/bin\", exist_ok=True)\nfor script_file in glob.glob(\"/conf/*.script\"):\n out_file = os.path.join(\"/conf/bin/\", os.path.basename(script_file).replace('.script',''))\n conf.jinja(script_file, os.environ, out_file)\n os.chmod(out_file, 0o555)\n\n# Run Podop, then postfix\nos.system(\"chown mail:mail /mail\")\nos.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\n\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"dovecot -c /etc/dovecot/dovecot.conf -F\")\n", "path": "core/dovecot/start.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport sys\nimport re\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nsystem.set_env(log_filters=[\n r'(dis)?connect from localhost\\[(\\:\\:1|127\\.0\\.0\\.1)\\]( quit=1 commands=1)?$',\n r'haproxy read\\: short protocol header\\: QUIT$',\n r'discarding EHLO keywords\\: PIPELINING$',\n ], log_file=os.environ.get('POSTFIX_LOG_FILE'))\n\nos.system(\"flock -n /queue/pid/master.pid rm /queue/pid/master.pid\")\n\ndef start_podop():\n system.drop_privs_to('postfix')\n os.makedirs('/dev/shm/postfix',mode=0o700, exist_ok=True)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n (\"transport\", \"url\", url + \"transport/\u00a7\"),\n (\"alias\", \"url\", url + \"alias/\u00a7\"),\n (\"dane\", \"url\", url + \"dane/\u00a7\"),\n (\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"recipientmap\", \"url\", url + \"recipient/map/\u00a7\"),\n (\"sendermap\", \"url\", url + \"sender/map/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\"),\n 
(\"senderrate\", \"url\", url + \"sender/rate/\u00a7\")\n ])\n\ndef start_mta_sts_daemon():\n os.chmod(\"/root/\", 0o755) # read access to /root/.netrc required\n system.drop_privs_to('postfix')\n from postfix_mta_sts_resolver import daemon\n daemon.main()\n\ndef is_valid_postconf_line(line):\n return not line.startswith(\"#\") \\\n and not line == ''\n\n# Actual startup script\nos.environ['DEFER_ON_TLS_ERROR'] = os.environ['DEFER_ON_TLS_ERROR'] if 'DEFER_ON_TLS_ERROR' in os.environ else 'True'\n\n# Postfix requires IPv6 addresses to be wrapped in square brackets\nif 'RELAYNETS' in os.environ:\n os.environ[\"RELAYNETS\"] = re.sub(r'([0-9a-fA-F]+:[0-9a-fA-F:]+)/', '[\\\\1]/', os.environ[\"RELAYNETS\"])\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n conf.jinja(postfix_file, os.environ, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif os.path.exists(\"/overrides/mta-sts-daemon.yml\"):\n shutil.copyfile(\"/overrides/mta-sts-daemon.yml\", \"/etc/mta-sts-daemon.yml\")\nelse:\n conf.jinja(\"/conf/mta-sts-daemon.yml\", os.environ, \"/etc/mta-sts-daemon.yml\")\n\nfor policy in ['tls_policy', 'transport']:\n if not os.path.exists(f'/etc/postfix/{policy}.map.lmdb'):\n open(f'/etc/postfix/{policy}.map', 'a').close()\n os.system(f'postmap /etc/postfix/{policy}.map')\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n conf.jinja(\"/conf/sasl_passwd\", os.environ, path)\n os.system(\"postmap {}\".format(path))\n\n# Configure logrotate and start crond\nif os.environ.get('POSTFIX_LOG_FILE'):\n conf.jinja(\"/conf/logrotate.conf\", os.environ, \"/etc/logrotate.d/postfix.conf\")\n os.system(\"/usr/sbin/crond\")\n if os.path.exists(\"/overrides/logrotate.conf\"):\n shutil.copyfile(\"/overrides/logrotate.conf\", \"/etc/logrotate.d/postfix.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nmultiprocessing.Process(target=start_mta_sts_daemon).start()\nos.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\n# Before starting postfix, we need to check permissions on /queue\n# in the event that postfix,postdrop id have changed\nos.system(\"postfix set-permissions\")\nos.system(\"postfix start-fg\")\n", "path": "core/postfix/start.py"}, {"content": "#!/usr/bin/env python3\n\nimport os\nimport glob\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nsystem.set_env(log_filters=r'Error\\: SSL context initialization failed, disabling SSL\\: Can\\'t load SSL certificate \\(ssl_cert setting\\)\\: The certificate is empty$')\n\ndef start_podop():\n system.drop_privs_to('mail')\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/dovecot/\u00a7\"\n run_server(0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", url ),\n\t\t(\"auth\", \"url\", 
url),\n\t\t(\"sieve\", \"url\", url),\n ])\n\n# Actual startup script\nfor dovecot_file in glob.glob(\"/conf/*.conf\"):\n conf.jinja(dovecot_file, os.environ, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\nos.makedirs(\"/conf/bin\", exist_ok=True)\nfor script_file in glob.glob(\"/conf/*.script\"):\n out_file = os.path.join(\"/conf/bin/\", os.path.basename(script_file).replace('.script',''))\n conf.jinja(script_file, os.environ, out_file)\n os.chmod(out_file, 0o555)\n\n# Run Podop, then postfix\nos.system(\"chown mail:mail /mail\")\nos.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\n\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"dovecot -c /etc/dovecot/dovecot.conf -F\")\n", "path": "core/dovecot/start.py"}]}
| 2,579 | 272 |
gh_patches_debug_26929
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-20221
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pants.util.frozendict.FrozenDict has error prone order-sensitive comparisons
**Describe the bug**
The `FrozenDict` type is order sensitive, which differs to a normal `dict`. It also gives `False` when comparing to a normal dict. This makes it easy to get unexpected behaviour, and likely leads to unnecessary cache misses.
https://github.com/pantsbuild/pants/blob/57d1801c80f8273ca386fba198495a05a45f60ae/src/python/pants/util/frozendict.py#L15-L21
```python
d1 = {'a': 0, 'b': 1}
d2 = {'b': 1, 'a': 0}
print(d1 == d2) # True
print(FrozenDict(d1) == FrozenDict(d2)) # False
print(FrozenDict(d1) == d1) # False
```
NB. this seems to apply to all methods that call `tuple(self.items())` or similar: `__eq__`, `__lt__`, `_calculate_hash`/`__hash__`.
**Pants version**
`main`
**OS**
N/A
**Additional info**
This caused half of #20210, see #20220.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/util/frozendict.py`
Content:
```
1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 from typing import Any, Callable, Iterable, Iterator, Mapping, TypeVar, cast, overload
7
8 from pants.util.memo import memoized_method
9 from pants.util.strutil import softwrap
10
11 K = TypeVar("K")
12 V = TypeVar("V")
13
14
15 class FrozenDict(Mapping[K, V]):
16 """A wrapper around a normal `dict` that removes all methods to mutate the instance and that
17 implements __hash__.
18
19 This should be used instead of normal dicts when working with the engine because normal dicts
20 are not safe to use.
21 """
22
23 @overload
24 def __init__(self, __items: Iterable[tuple[K, V]], **kwargs: V) -> None:
25 ...
26
27 @overload
28 def __init__(self, __other: Mapping[K, V], **kwargs: V) -> None:
29 ...
30
31 @overload
32 def __init__(self, **kwargs: V) -> None:
33 ...
34
35 def __init__(self, *item: Mapping[K, V] | Iterable[tuple[K, V]], **kwargs: V) -> None:
36 """Creates a `FrozenDict` with arguments accepted by `dict` that also must be hashable."""
37 if len(item) > 1:
38 raise ValueError(
39 f"{type(self).__name__} was called with {len(item)} positional arguments but it expects one."
40 )
41
42 # NB: Keep the variable name `_data` in sync with `externs/mod.rs`.
43 self._data = dict(item[0]) if item else dict()
44 self._data.update(**kwargs)
45
46 # NB: We eagerly compute the hash to validate that the values are hashable and to avoid
47 # performing the calculation multiple times. This can be revisited if it's found to be a
48 # performance bottleneck.
49 self._hash = self._calculate_hash()
50
51 @classmethod
52 def deep_freeze(cls, data: Mapping[K, V]) -> FrozenDict[K, V]:
53 """Convert mutable values to their frozen counter parts.
54
55 Sets and lists are turned into tuples and dicts into FrozenDicts.
56 """
57
58 def _freeze(obj):
59 if isinstance(obj, dict):
60 return cls.deep_freeze(obj)
61 if isinstance(obj, (list, set)):
62 return tuple(map(_freeze, obj))
63 return obj
64
65 return cls({k: _freeze(v) for k, v in data.items()})
66
67 @staticmethod
68 def frozen(to_freeze: Mapping[K, V]) -> FrozenDict[K, V]:
69 """Returns a `FrozenDict` containing the keys and values of `to_freeze`.
70
71 If `to_freeze` is already a `FrozenDict`, returns the same object.
72 """
73
74 return to_freeze if isinstance(to_freeze, FrozenDict) else FrozenDict(to_freeze)
75
76 def __getitem__(self, k: K) -> V:
77 return self._data[k]
78
79 def __len__(self) -> int:
80 return len(self._data)
81
82 def __iter__(self) -> Iterator[K]:
83 return iter(self._data)
84
85 def __reversed__(self) -> Iterator[K]:
86 return reversed(tuple(self._data))
87
88 def __eq__(self, other: Any) -> bool:
89 if not isinstance(other, FrozenDict):
90 return NotImplemented
91 return tuple(self.items()) == tuple(other.items())
92
93 def __lt__(self, other: Any) -> bool:
94 if not isinstance(other, FrozenDict):
95 return NotImplemented
96 return tuple(self._data.items()) < tuple(other._data.items())
97
98 def _calculate_hash(self) -> int:
99 try:
100 return hash(tuple(self._data.items()))
101 except TypeError as e:
102 raise TypeError(
103 softwrap(
104 f"""
105 Even though you are using a `{type(self).__name__}`, the underlying values are
106 not hashable. Please use hashable (and preferably immutable) types for the
107 underlying values, e.g. use tuples instead of lists and use FrozenOrderedSet
108 instead of set().
109
110 Original error message: {e}
111
112 Value: {self}
113 """
114 )
115 )
116
117 def __hash__(self) -> int:
118 return self._hash
119
120 def __repr__(self) -> str:
121 return f"{type(self).__name__}({self._data!r})"
122
123
124 class LazyFrozenDict(FrozenDict[K, V]):
125 """A lazy version of `FrozenDict` where the values are not loaded until referenced."""
126
127 @overload
128 def __init__(
129 self, __items: Iterable[tuple[K, Callable[[], V]]], **kwargs: Callable[[], V]
130 ) -> None:
131 ...
132
133 @overload
134 def __init__(self, __other: Mapping[K, Callable[[], V]], **kwargs: Callable[[], V]) -> None:
135 ...
136
137 @overload
138 def __init__(self, **kwargs: Callable[[], V]) -> None:
139 ...
140
141 def __init__(
142 self,
143 *item: Mapping[K, Callable[[], V]] | Iterable[tuple[K, Callable[[], V]]],
144 **kwargs: Callable[[], V],
145 ) -> None:
146 super().__init__(*item, **kwargs) # type: ignore[arg-type]
147
148 def __getitem__(self, k: K) -> V:
149 return self._get_value(k)
150
151 @memoized_method
152 def _get_value(self, k: K) -> V:
153 return cast("Callable[[], V]", self._data[k])()
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/util/frozendict.py b/src/python/pants/util/frozendict.py
--- a/src/python/pants/util/frozendict.py
+++ b/src/python/pants/util/frozendict.py
@@ -85,19 +85,27 @@
def __reversed__(self) -> Iterator[K]:
return reversed(tuple(self._data))
- def __eq__(self, other: Any) -> bool:
- if not isinstance(other, FrozenDict):
- return NotImplemented
- return tuple(self.items()) == tuple(other.items())
+ def __eq__(self, other: Any) -> Any:
+ # defer to dict's __eq__
+ return self._data == other
def __lt__(self, other: Any) -> bool:
if not isinstance(other, FrozenDict):
return NotImplemented
- return tuple(self._data.items()) < tuple(other._data.items())
+ # If sorting each of these on every __lt__ call ends up being a problem we could consider
+ # optimising this, by, for instance, sorting on construction.
+ return sorted(self._data.items()) < sorted(other._data.items())
def _calculate_hash(self) -> int:
try:
- return hash(tuple(self._data.items()))
+ h = 0
+ for pair in self._data.items():
+ # xor is commutative, i.e. we get the same hash no matter the order of items. This
+ # "relies" on "hash" of the individual elements being unpredictable enough that such
+ # a naive aggregation is okay. In addition, the Python hash isn't / shouldn't be
+ # used for cryptographically sensitive purposes.
+ h ^= hash(pair)
+ return h
except TypeError as e:
raise TypeError(
softwrap(
|
{"golden_diff": "diff --git a/src/python/pants/util/frozendict.py b/src/python/pants/util/frozendict.py\n--- a/src/python/pants/util/frozendict.py\n+++ b/src/python/pants/util/frozendict.py\n@@ -85,19 +85,27 @@\n def __reversed__(self) -> Iterator[K]:\n return reversed(tuple(self._data))\n \n- def __eq__(self, other: Any) -> bool:\n- if not isinstance(other, FrozenDict):\n- return NotImplemented\n- return tuple(self.items()) == tuple(other.items())\n+ def __eq__(self, other: Any) -> Any:\n+ # defer to dict's __eq__\n+ return self._data == other\n \n def __lt__(self, other: Any) -> bool:\n if not isinstance(other, FrozenDict):\n return NotImplemented\n- return tuple(self._data.items()) < tuple(other._data.items())\n+ # If sorting each of these on every __lt__ call ends up being a problem we could consider\n+ # optimising this, by, for instance, sorting on construction.\n+ return sorted(self._data.items()) < sorted(other._data.items())\n \n def _calculate_hash(self) -> int:\n try:\n- return hash(tuple(self._data.items()))\n+ h = 0\n+ for pair in self._data.items():\n+ # xor is commutative, i.e. we get the same hash no matter the order of items. This\n+ # \"relies\" on \"hash\" of the individual elements being unpredictable enough that such\n+ # a naive aggregation is okay. In addition, the Python hash isn't / shouldn't be\n+ # used for cryptographically sensitive purposes.\n+ h ^= hash(pair)\n+ return h\n except TypeError as e:\n raise TypeError(\n softwrap(\n", "issue": "pants.util.frozendict.FrozenDict has error prone order-sensitive comparisons\n**Describe the bug**\r\n\r\nThe `FrozenDict` type is order sensitive, which differs to a normal `dict`. It also gives `False` when comparing to a normal dict. This makes it easy to get unexpected behaviour, and likely leads to unnecessary cache misses.\r\n\r\nhttps://github.com/pantsbuild/pants/blob/57d1801c80f8273ca386fba198495a05a45f60ae/src/python/pants/util/frozendict.py#L15-L21\r\n\r\n```python\r\nd1 = {'a': 0, 'b': 1}\r\nd2 = {'b': 1, 'a': 0}\r\nprint(d1 == d2) # True\r\nprint(FrozenDict(d1) == FrozenDict(d2)) # False\r\n\r\nprint(FrozenDict(d1) == d1) # False\r\n```\r\n\r\nNB. 
this seems to apply to all methods that call `tuple(self.items())` or similar: `__eq__`, `__lt__`, `_calculate_hash`/`__hash__`.\r\n\r\n**Pants version**\r\n\r\n`main`\r\n\r\n**OS**\r\n\r\nN/A\r\n\r\n**Additional info**\r\n\r\nThis caused half of #20210, see #20220.\n", "before_files": [{"content": "# Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom typing import Any, Callable, Iterable, Iterator, Mapping, TypeVar, cast, overload\n\nfrom pants.util.memo import memoized_method\nfrom pants.util.strutil import softwrap\n\nK = TypeVar(\"K\")\nV = TypeVar(\"V\")\n\n\nclass FrozenDict(Mapping[K, V]):\n \"\"\"A wrapper around a normal `dict` that removes all methods to mutate the instance and that\n implements __hash__.\n\n This should be used instead of normal dicts when working with the engine because normal dicts\n are not safe to use.\n \"\"\"\n\n @overload\n def __init__(self, __items: Iterable[tuple[K, V]], **kwargs: V) -> None:\n ...\n\n @overload\n def __init__(self, __other: Mapping[K, V], **kwargs: V) -> None:\n ...\n\n @overload\n def __init__(self, **kwargs: V) -> None:\n ...\n\n def __init__(self, *item: Mapping[K, V] | Iterable[tuple[K, V]], **kwargs: V) -> None:\n \"\"\"Creates a `FrozenDict` with arguments accepted by `dict` that also must be hashable.\"\"\"\n if len(item) > 1:\n raise ValueError(\n f\"{type(self).__name__} was called with {len(item)} positional arguments but it expects one.\"\n )\n\n # NB: Keep the variable name `_data` in sync with `externs/mod.rs`.\n self._data = dict(item[0]) if item else dict()\n self._data.update(**kwargs)\n\n # NB: We eagerly compute the hash to validate that the values are hashable and to avoid\n # performing the calculation multiple times. 
This can be revisited if it's found to be a\n # performance bottleneck.\n self._hash = self._calculate_hash()\n\n @classmethod\n def deep_freeze(cls, data: Mapping[K, V]) -> FrozenDict[K, V]:\n \"\"\"Convert mutable values to their frozen counter parts.\n\n Sets and lists are turned into tuples and dicts into FrozenDicts.\n \"\"\"\n\n def _freeze(obj):\n if isinstance(obj, dict):\n return cls.deep_freeze(obj)\n if isinstance(obj, (list, set)):\n return tuple(map(_freeze, obj))\n return obj\n\n return cls({k: _freeze(v) for k, v in data.items()})\n\n @staticmethod\n def frozen(to_freeze: Mapping[K, V]) -> FrozenDict[K, V]:\n \"\"\"Returns a `FrozenDict` containing the keys and values of `to_freeze`.\n\n If `to_freeze` is already a `FrozenDict`, returns the same object.\n \"\"\"\n\n return to_freeze if isinstance(to_freeze, FrozenDict) else FrozenDict(to_freeze)\n\n def __getitem__(self, k: K) -> V:\n return self._data[k]\n\n def __len__(self) -> int:\n return len(self._data)\n\n def __iter__(self) -> Iterator[K]:\n return iter(self._data)\n\n def __reversed__(self) -> Iterator[K]:\n return reversed(tuple(self._data))\n\n def __eq__(self, other: Any) -> bool:\n if not isinstance(other, FrozenDict):\n return NotImplemented\n return tuple(self.items()) == tuple(other.items())\n\n def __lt__(self, other: Any) -> bool:\n if not isinstance(other, FrozenDict):\n return NotImplemented\n return tuple(self._data.items()) < tuple(other._data.items())\n\n def _calculate_hash(self) -> int:\n try:\n return hash(tuple(self._data.items()))\n except TypeError as e:\n raise TypeError(\n softwrap(\n f\"\"\"\n Even though you are using a `{type(self).__name__}`, the underlying values are\n not hashable. Please use hashable (and preferably immutable) types for the\n underlying values, e.g. 
use tuples instead of lists and use FrozenOrderedSet\n instead of set().\n\n Original error message: {e}\n\n Value: {self}\n \"\"\"\n )\n )\n\n def __hash__(self) -> int:\n return self._hash\n\n def __repr__(self) -> str:\n return f\"{type(self).__name__}({self._data!r})\"\n\n\nclass LazyFrozenDict(FrozenDict[K, V]):\n \"\"\"A lazy version of `FrozenDict` where the values are not loaded until referenced.\"\"\"\n\n @overload\n def __init__(\n self, __items: Iterable[tuple[K, Callable[[], V]]], **kwargs: Callable[[], V]\n ) -> None:\n ...\n\n @overload\n def __init__(self, __other: Mapping[K, Callable[[], V]], **kwargs: Callable[[], V]) -> None:\n ...\n\n @overload\n def __init__(self, **kwargs: Callable[[], V]) -> None:\n ...\n\n def __init__(\n self,\n *item: Mapping[K, Callable[[], V]] | Iterable[tuple[K, Callable[[], V]]],\n **kwargs: Callable[[], V],\n ) -> None:\n super().__init__(*item, **kwargs) # type: ignore[arg-type]\n\n def __getitem__(self, k: K) -> V:\n return self._get_value(k)\n\n @memoized_method\n def _get_value(self, k: K) -> V:\n return cast(\"Callable[[], V]\", self._data[k])()\n", "path": "src/python/pants/util/frozendict.py"}], "after_files": [{"content": "# Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom typing import Any, Callable, Iterable, Iterator, Mapping, TypeVar, cast, overload\n\nfrom pants.util.memo import memoized_method\nfrom pants.util.strutil import softwrap\n\nK = TypeVar(\"K\")\nV = TypeVar(\"V\")\n\n\nclass FrozenDict(Mapping[K, V]):\n \"\"\"A wrapper around a normal `dict` that removes all methods to mutate the instance and that\n implements __hash__.\n\n This should be used instead of normal dicts when working with the engine because normal dicts\n are not safe to use.\n \"\"\"\n\n @overload\n def __init__(self, __items: Iterable[tuple[K, V]], **kwargs: V) -> None:\n ...\n\n @overload\n def __init__(self, __other: Mapping[K, V], **kwargs: V) -> None:\n ...\n\n @overload\n def __init__(self, **kwargs: V) -> None:\n ...\n\n def __init__(self, *item: Mapping[K, V] | Iterable[tuple[K, V]], **kwargs: V) -> None:\n \"\"\"Creates a `FrozenDict` with arguments accepted by `dict` that also must be hashable.\"\"\"\n if len(item) > 1:\n raise ValueError(\n f\"{type(self).__name__} was called with {len(item)} positional arguments but it expects one.\"\n )\n\n # NB: Keep the variable name `_data` in sync with `externs/mod.rs`.\n self._data = dict(item[0]) if item else dict()\n self._data.update(**kwargs)\n\n # NB: We eagerly compute the hash to validate that the values are hashable and to avoid\n # performing the calculation multiple times. 
This can be revisited if it's found to be a\n # performance bottleneck.\n self._hash = self._calculate_hash()\n\n @classmethod\n def deep_freeze(cls, data: Mapping[K, V]) -> FrozenDict[K, V]:\n \"\"\"Convert mutable values to their frozen counter parts.\n\n Sets and lists are turned into tuples and dicts into FrozenDicts.\n \"\"\"\n\n def _freeze(obj):\n if isinstance(obj, dict):\n return cls.deep_freeze(obj)\n if isinstance(obj, (list, set)):\n return tuple(map(_freeze, obj))\n return obj\n\n return cls({k: _freeze(v) for k, v in data.items()})\n\n @staticmethod\n def frozen(to_freeze: Mapping[K, V]) -> FrozenDict[K, V]:\n \"\"\"Returns a `FrozenDict` containing the keys and values of `to_freeze`.\n\n If `to_freeze` is already a `FrozenDict`, returns the same object.\n \"\"\"\n\n return to_freeze if isinstance(to_freeze, FrozenDict) else FrozenDict(to_freeze)\n\n def __getitem__(self, k: K) -> V:\n return self._data[k]\n\n def __len__(self) -> int:\n return len(self._data)\n\n def __iter__(self) -> Iterator[K]:\n return iter(self._data)\n\n def __reversed__(self) -> Iterator[K]:\n return reversed(tuple(self._data))\n\n def __eq__(self, other: Any) -> Any:\n # defer to dict's __eq__\n return self._data == other\n\n def __lt__(self, other: Any) -> bool:\n if not isinstance(other, FrozenDict):\n return NotImplemented\n # If sorting each of these on every __lt__ call ends up being a problem we could consider\n # optimising this, by, for instance, sorting on construction.\n return sorted(self._data.items()) < sorted(other._data.items())\n\n def _calculate_hash(self) -> int:\n try:\n h = 0\n for pair in self._data.items():\n # xor is commutative, i.e. we get the same hash no matter the order of items. This\n # \"relies\" on \"hash\" of the individual elements being unpredictable enough that such\n # a naive aggregation is okay. In addition, the Python hash isn't / shouldn't be\n # used for cryptographically sensitive purposes.\n h ^= hash(pair)\n return h\n except TypeError as e:\n raise TypeError(\n softwrap(\n f\"\"\"\n Even though you are using a `{type(self).__name__}`, the underlying values are\n not hashable. Please use hashable (and preferably immutable) types for the\n underlying values, e.g. use tuples instead of lists and use FrozenOrderedSet\n instead of set().\n\n Original error message: {e}\n\n Value: {self}\n \"\"\"\n )\n )\n\n def __hash__(self) -> int:\n return self._hash\n\n def __repr__(self) -> str:\n return f\"{type(self).__name__}({self._data!r})\"\n\n\nclass LazyFrozenDict(FrozenDict[K, V]):\n \"\"\"A lazy version of `FrozenDict` where the values are not loaded until referenced.\"\"\"\n\n @overload\n def __init__(\n self, __items: Iterable[tuple[K, Callable[[], V]]], **kwargs: Callable[[], V]\n ) -> None:\n ...\n\n @overload\n def __init__(self, __other: Mapping[K, Callable[[], V]], **kwargs: Callable[[], V]) -> None:\n ...\n\n @overload\n def __init__(self, **kwargs: Callable[[], V]) -> None:\n ...\n\n def __init__(\n self,\n *item: Mapping[K, Callable[[], V]] | Iterable[tuple[K, Callable[[], V]]],\n **kwargs: Callable[[], V],\n ) -> None:\n super().__init__(*item, **kwargs) # type: ignore[arg-type]\n\n def __getitem__(self, k: K) -> V:\n return self._get_value(k)\n\n @memoized_method\n def _get_value(self, k: K) -> V:\n return cast(\"Callable[[], V]\", self._data[k])()\n", "path": "src/python/pants/util/frozendict.py"}]}
| 2,149 | 407 |
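
The tail of the record above ends with a `FrozenDict` whose hash is an XOR over the hashes of its items, so two mappings with the same entries hash identically regardless of insertion order. A minimal stand-alone sketch of that idea (the class below is illustrative only, not the Pants implementation):

```python
# Illustrative immutable mapping with an order-insensitive hash.
# Not the Pants FrozenDict; it only demonstrates the XOR aggregation idea.
from typing import Mapping


class TinyFrozenMap:
    def __init__(self, data: Mapping) -> None:
        self._data = dict(data)
        # XOR is commutative and associative, so visiting the items in any
        # order produces the same aggregate hash.
        h = 0
        for pair in self._data.items():
            h ^= hash(pair)
        self._hash = h

    def __hash__(self) -> int:
        return self._hash

    def __eq__(self, other: object) -> bool:
        return isinstance(other, TinyFrozenMap) and self._data == other._data


# Same items, different insertion order: equal objects, equal hashes.
a = TinyFrozenMap({"x": 1, "y": 2})
b = TinyFrozenMap({"y": 2, "x": 1})
assert a == b and hash(a) == hash(b)
```
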
gh_patches_debug_24265
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-469
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add py3.8 to the test matrix
Adds py3.8 to the test matrix
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6 import sys
7 import platform
8
9 from setuptools import setup
10
11
12 def get_version(package):
13 """
14 Return package version as listed in `__version__` in `init.py`.
15 """
16 path = os.path.join(package, '__init__.py')
17 init_py = open(path, 'r', encoding='utf8').read()
18 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
19
20
21 def get_long_description():
22 """
23 Return the README.
24 """
25 return open('README.md', 'r', encoding='utf8').read()
26
27
28 def get_packages(package):
29 """
30 Return root package and all sub-packages.
31 """
32 return [dirpath
33 for dirpath, dirnames, filenames in os.walk(package)
34 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
35
36
37 env_marker = (
38 "sys_platform != 'win32'"
39 " and sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'pypy'"
41 )
42
43 requirements = [
44 "click==7.*",
45 "h11==0.8.*",
46 "websockets==8.*",
47 "httptools==0.0.13 ;" + env_marker,
48 "uvloop==0.* ;" + env_marker,
49 ]
50
51
52 setup(
53 name='uvicorn',
54 version=get_version('uvicorn'),
55 url='https://github.com/encode/uvicorn',
56 license='BSD',
57 description='The lightning-fast ASGI server.',
58 long_description=get_long_description(),
59 long_description_content_type='text/markdown',
60 author='Tom Christie',
61 author_email='[email protected]',
62 packages=get_packages('uvicorn'),
63 install_requires=requirements,
64 data_files = [("", ["LICENSE.md"])],
65 classifiers=[
66 'Development Status :: 3 - Alpha',
67 'Environment :: Web Environment',
68 'Intended Audience :: Developers',
69 'License :: OSI Approved :: BSD License',
70 'Operating System :: OS Independent',
71 'Topic :: Internet :: WWW/HTTP',
72 'Programming Language :: Python :: 3',
73 'Programming Language :: Python :: 3.6',
74 'Programming Language :: Python :: 3.7',
75 'Programming Language :: Python :: Implementation :: CPython',
76 'Programming Language :: Python :: Implementation :: PyPy',
77 ],
78 entry_points="""
79 [console_scripts]
80 uvicorn=uvicorn.main:main
81 """
82 )
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,7 @@
"h11==0.8.*",
"websockets==8.*",
"httptools==0.0.13 ;" + env_marker,
- "uvloop==0.* ;" + env_marker,
+ "uvloop==0.14.0rc2 ;" + env_marker,
]
@@ -63,7 +63,7 @@
install_requires=requirements,
data_files = [("", ["LICENSE.md"])],
classifiers=[
- 'Development Status :: 3 - Alpha',
+ 'Development Status :: 4 - Beta',
'Environment :: Web Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
@@ -72,6 +72,7 @@
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -45,7 +45,7 @@\n \"h11==0.8.*\",\n \"websockets==8.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n- \"uvloop==0.* ;\" + env_marker,\n+ \"uvloop==0.14.0rc2 ;\" + env_marker,\n ]\n \n \n@@ -63,7 +63,7 @@\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n- 'Development Status :: 3 - Alpha',\n+ 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n@@ -72,6 +72,7 @@\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n", "issue": "Add py3.8 to the test matrix\nAdds py3.8 to the test matrix\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport sys\nimport platform\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, '__init__.py')\n init_py = open(path, 'r', encoding='utf8').read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open('README.md', 'r', encoding='utf8').read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'pypy'\"\n)\n\nrequirements = [\n \"click==7.*\",\n \"h11==0.8.*\",\n \"websockets==8.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n \"uvloop==0.* ;\" + env_marker,\n]\n\n\nsetup(\n name='uvicorn',\n version=get_version('uvicorn'),\n url='https://github.com/encode/uvicorn',\n license='BSD',\n description='The lightning-fast ASGI server.',\n long_description=get_long_description(),\n long_description_content_type='text/markdown',\n author='Tom Christie',\n author_email='[email protected]',\n packages=get_packages('uvicorn'),\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\"\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport sys\nimport platform\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, '__init__.py')\n init_py = open(path, 'r', encoding='utf8').read()\n return re.search(\"__version__ = 
['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open('README.md', 'r', encoding='utf8').read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'pypy'\"\n)\n\nrequirements = [\n \"click==7.*\",\n \"h11==0.8.*\",\n \"websockets==8.*\",\n \"httptools==0.0.13 ;\" + env_marker,\n \"uvloop==0.14.0rc2 ;\" + env_marker,\n]\n\n\nsetup(\n name='uvicorn',\n version=get_version('uvicorn'),\n url='https://github.com/encode/uvicorn',\n license='BSD',\n description='The lightning-fast ASGI server.',\n long_description=get_long_description(),\n long_description_content_type='text/markdown',\n author='Tom Christie',\n author_email='[email protected]',\n packages=get_packages('uvicorn'),\n install_requires=requirements,\n data_files = [(\"\", [\"LICENSE.md\"])],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\"\n)\n", "path": "setup.py"}]}
| 985 | 269 |
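
The requirement strings touched by the fix above are built with PEP 508 environment markers concatenated onto version pins. A small self-contained sketch of that pattern (the package names and pins here are examples, not uvicorn's actual requirements):

```python
# Sketch of conditional install_requires using PEP 508 environment markers,
# mirroring the string-concatenation pattern in the record above.
from setuptools import setup

env_marker = (
    "sys_platform != 'win32'"
    " and sys_platform != 'cygwin'"
    " and platform_python_implementation != 'PyPy'"
)

requirements = [
    "click==7.*",
    # Only installed on platforms matching the marker above.
    "uvloop==0.* ; " + env_marker,
]

setup(
    name="example-package",  # placeholder project metadata
    version="0.0.1",
    install_requires=requirements,
)
```
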
gh_patches_debug_56470
|
rasdani/github-patches
|
git_diff
|
inventree__InvenTree-1544
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] invoke: too many arguments
When running any invoke command in a WSL environment, I get the following error:
`/bin/bash: line 0: cd: too many arguments`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tasks.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from shutil import copyfile
4 import os
5 import json
6 import sys
7
8 try:
9 from invoke import ctask as task
10 except:
11 from invoke import task
12
13
14 def apps():
15 """
16 Returns a list of installed apps
17 """
18
19 return [
20 'barcode',
21 'build',
22 'common',
23 'company',
24 'label',
25 'order',
26 'part',
27 'report',
28 'stock',
29 'InvenTree',
30 'users',
31 ]
32
33
34 def localDir():
35 """
36 Returns the directory of *THIS* file.
37 Used to ensure that the various scripts always run
38 in the correct directory.
39 """
40 return os.path.dirname(os.path.abspath(__file__))
41
42
43 def managePyDir():
44 """
45 Returns the directory of the manage.py file
46 """
47
48 return os.path.join(localDir(), 'InvenTree')
49
50
51 def managePyPath():
52 """
53 Return the path of the manage.py file
54 """
55
56 return os.path.join(managePyDir(), 'manage.py')
57
58
59 def manage(c, cmd, pty=False):
60 """
61 Runs a given command against django's "manage.py" script.
62
63 Args:
64 c - Command line context
65 cmd - django command to run
66 """
67
68 c.run('cd {path} && python3 manage.py {cmd}'.format(
69 path=managePyDir(),
70 cmd=cmd
71 ), pty=pty)
72
73
74 @task
75 def install(c):
76 """
77 Installs required python packages
78 """
79
80 # Install required Python packages with PIP
81 c.run('pip3 install -U -r requirements.txt')
82
83 # If a config.yaml file does not exist, copy from the template!
84 CONFIG_FILE = os.path.join(localDir(), 'InvenTree', 'config.yaml')
85 CONFIG_TEMPLATE_FILE = os.path.join(localDir(), 'InvenTree', 'config_template.yaml')
86
87 if not os.path.exists(CONFIG_FILE):
88 print("Config file 'config.yaml' does not exist - copying from template.")
89 copyfile(CONFIG_TEMPLATE_FILE, CONFIG_FILE)
90
91
92 @task
93 def shell(c):
94 """
95 Open a python shell with access to the InvenTree database models.
96 """
97
98 manage(c, 'shell', pty=True)
99
100 @task
101 def worker(c):
102 """
103 Run the InvenTree background worker process
104 """
105
106 manage(c, 'qcluster', pty=True)
107
108 @task
109 def superuser(c):
110 """
111 Create a superuser (admin) account for the database.
112 """
113
114 manage(c, 'createsuperuser', pty=True)
115
116 @task
117 def check(c):
118 """
119 Check validity of django codebase
120 """
121
122 manage(c, "check")
123
124 @task
125 def wait(c):
126 """
127 Wait until the database connection is ready
128 """
129
130 manage(c, "wait_for_db")
131
132 @task
133 def migrate(c):
134 """
135 Performs database migrations.
136 This is a critical step if the database schema have been altered!
137 """
138
139 print("Running InvenTree database migrations...")
140 print("========================================")
141
142 manage(c, "makemigrations")
143 manage(c, "migrate")
144 manage(c, "migrate --run-syncdb")
145 manage(c, "check")
146
147 print("========================================")
148 print("InvenTree database migrations completed!")
149
150
151 @task
152 def static(c):
153 """
154 Copies required static files to the STATIC_ROOT directory,
155 as per Django requirements.
156 """
157
158 manage(c, "prerender")
159 manage(c, "collectstatic --no-input")
160
161
162 @task(pre=[install, migrate, static])
163 def update(c):
164 """
165 Update InvenTree installation.
166
167 This command should be invoked after source code has been updated,
168 e.g. downloading new code from GitHub.
169
170 The following tasks are performed, in order:
171
172 - install
173 - migrate
174 - static
175 """
176 pass
177
178 @task(post=[static])
179 def translate(c):
180 """
181 Regenerate translation files.
182
183 Run this command after added new translatable strings,
184 or after adding translations for existing strings.
185 """
186
187 # Translate applicable .py / .html / .js files
188 manage(c, "makemessages --all -e py,html,js")
189 manage(c, "compilemessages")
190
191 path = os.path.join('InvenTree', 'script', 'translation_stats.py')
192
193 c.run(f'python {path}')
194
195 @task
196 def style(c):
197 """
198 Run PEP style checks against InvenTree sourcecode
199 """
200
201 print("Running PEP style checks...")
202 c.run('flake8 InvenTree')
203
204 @task
205 def test(c, database=None):
206 """
207 Run unit-tests for InvenTree codebase.
208 """
209 # Run sanity check on the django install
210 manage(c, 'check')
211
212 # Run coverage tests
213 manage(c, 'test', pty=True)
214
215 @task
216 def coverage(c):
217 """
218 Run code-coverage of the InvenTree codebase,
219 using the 'coverage' code-analysis tools.
220
221 Generates a code coverage report (available in the htmlcov directory)
222 """
223
224 # Run sanity check on the django install
225 manage(c, 'check')
226
227 # Run coverage tests
228 c.run('coverage run {manage} test {apps}'.format(
229 manage=managePyPath(),
230 apps=' '.join(apps())
231 ))
232
233 # Generate coverage report
234 c.run('coverage html')
235
236
237 def content_excludes():
238 """
239 Returns a list of content types to exclude from import/export
240 """
241
242 excludes = [
243 "contenttypes",
244 "sessions.session",
245 "auth.permission",
246 "error_report.error",
247 "admin.logentry",
248 "django_q.schedule",
249 "django_q.task",
250 "django_q.ormq",
251 "users.owner",
252 ]
253
254 output = ""
255
256 for e in excludes:
257 output += f"--exclude {e} "
258
259 return output
260
261
262 @task(help={'filename': "Output filename (default = 'data.json')"})
263 def export_records(c, filename='data.json'):
264 """
265 Export all database records to a file
266 """
267
268 # Get an absolute path to the file
269 if not os.path.isabs(filename):
270 filename = os.path.join(localDir(), filename)
271 filename = os.path.abspath(filename)
272
273 print(f"Exporting database records to file '{filename}'")
274
275 if os.path.exists(filename):
276 response = input("Warning: file already exists. Do you want to overwrite? [y/N]: ")
277 response = str(response).strip().lower()
278
279 if response not in ['y', 'yes']:
280 print("Cancelled export operation")
281 sys.exit(1)
282
283 tmpfile = f"{filename}.tmp"
284
285 cmd = f"dumpdata --indent 2 --output {tmpfile} {content_excludes()}"
286
287 # Dump data to temporary file
288 manage(c, cmd, pty=True)
289
290 print("Running data post-processing step...")
291
292 # Post-process the file, to remove any "permissions" specified for a user or group
293 with open(tmpfile, "r") as f_in:
294 data = json.loads(f_in.read())
295
296 for entry in data:
297 if "model" in entry:
298
299 # Clear out any permissions specified for a group
300 if entry["model"] == "auth.group":
301 entry["fields"]["permissions"] = []
302
303 # Clear out any permissions specified for a user
304 if entry["model"] == "auth.user":
305 entry["fields"]["user_permissions"] = []
306
307 # Write the processed data to file
308 with open(filename, "w") as f_out:
309 f_out.write(json.dumps(data, indent=2))
310
311 print("Data export completed")
312
313
314 @task(help={'filename': 'Input filename'})
315 def import_records(c, filename='data.json'):
316 """
317 Import database records from a file
318 """
319
320 # Get an absolute path to the supplied filename
321 if not os.path.isabs(filename):
322 filename = os.path.join(localDir(), filename)
323
324 if not os.path.exists(filename):
325 print(f"Error: File '{filename}' does not exist")
326 sys.exit(1)
327
328 print(f"Importing database records from '{filename}'")
329
330 # Pre-process the data, to remove any "permissions" specified for a user or group
331 tmpfile = f"{filename}.tmp.json"
332
333 with open(filename, "r") as f_in:
334 data = json.loads(f_in.read())
335
336 for entry in data:
337 if "model" in entry:
338
339 # Clear out any permissions specified for a group
340 if entry["model"] == "auth.group":
341 entry["fields"]["permissions"] = []
342
343 # Clear out any permissions specified for a user
344 if entry["model"] == "auth.user":
345 entry["fields"]["user_permissions"] = []
346
347 # Write the processed data to the tmp file
348 with open(tmpfile, "w") as f_out:
349 f_out.write(json.dumps(data, indent=2))
350
351 cmd = f"loaddata {tmpfile} -i {content_excludes()}"
352
353 manage(c, cmd, pty=True)
354
355 print("Data import completed")
356
357 @task
358 def import_fixtures(c):
359 """
360 Import fixture data into the database.
361
362 This command imports all existing test fixture data into the database.
363
364 Warning:
365 - Intended for testing / development only!
366 - Running this command may overwrite existing database data!!
367 - Don't say you were not warned...
368 """
369
370 fixtures = [
371 # Build model
372 'build',
373
374 # Common models
375 'settings',
376
377 # Company model
378 'company',
379 'price_breaks',
380 'supplier_part',
381
382 # Order model
383 'order',
384
385 # Part model
386 'bom',
387 'category',
388 'params',
389 'part',
390 'test_templates',
391
392 # Stock model
393 'location',
394 'stock_tests',
395 'stock',
396
397 # Users
398 'users'
399 ]
400
401 command = 'loaddata ' + ' '.join(fixtures)
402
403 manage(c, command, pty=True)
404
405
406 @task(help={'address': 'Server address:port (default=127.0.0.1:8000)'})
407 def server(c, address="127.0.0.1:8000"):
408 """
409 Launch a (deveopment) server using Django's in-built webserver.
410
411 Note: This is *not* sufficient for a production installation.
412 """
413
414 manage(c, "runserver {address}".format(address=address), pty=True)
415
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tasks.py b/tasks.py
--- a/tasks.py
+++ b/tasks.py
@@ -65,7 +65,7 @@
cmd - django command to run
"""
- c.run('cd {path} && python3 manage.py {cmd}'.format(
+ c.run('cd "{path}" && python3 manage.py {cmd}'.format(
path=managePyDir(),
cmd=cmd
), pty=pty)
|
{"golden_diff": "diff --git a/tasks.py b/tasks.py\n--- a/tasks.py\n+++ b/tasks.py\n@@ -65,7 +65,7 @@\n cmd - django command to run\n \"\"\"\n \n- c.run('cd {path} && python3 manage.py {cmd}'.format(\n+ c.run('cd \"{path}\" && python3 manage.py {cmd}'.format(\n path=managePyDir(),\n cmd=cmd\n ), pty=pty)\n", "issue": "[BUG] invoke: too many arguments\nWhen running any invoke-command in an WSL-Enviroment I get the following error:\r\n`/bin/bash: line 0: cd: too many arguments`\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom shutil import copyfile\nimport os\nimport json\nimport sys\n\ntry:\n from invoke import ctask as task\nexcept:\n from invoke import task\n\n\ndef apps():\n \"\"\"\n Returns a list of installed apps\n \"\"\"\n\n return [\n 'barcode',\n 'build',\n 'common',\n 'company',\n 'label',\n 'order',\n 'part',\n 'report',\n 'stock',\n 'InvenTree',\n 'users',\n ]\n\n\ndef localDir():\n \"\"\"\n Returns the directory of *THIS* file.\n Used to ensure that the various scripts always run\n in the correct directory.\n \"\"\"\n return os.path.dirname(os.path.abspath(__file__))\n\n\ndef managePyDir():\n \"\"\"\n Returns the directory of the manage.py file\n \"\"\"\n\n return os.path.join(localDir(), 'InvenTree')\n\n\ndef managePyPath():\n \"\"\"\n Return the path of the manage.py file\n \"\"\"\n\n return os.path.join(managePyDir(), 'manage.py')\n\n\ndef manage(c, cmd, pty=False):\n \"\"\"\n Runs a given command against django's \"manage.py\" script.\n\n Args:\n c - Command line context\n cmd - django command to run\n \"\"\"\n\n c.run('cd {path} && python3 manage.py {cmd}'.format(\n path=managePyDir(),\n cmd=cmd\n ), pty=pty)\n\n\n@task\ndef install(c):\n \"\"\"\n Installs required python packages\n \"\"\"\n\n # Install required Python packages with PIP\n c.run('pip3 install -U -r requirements.txt')\n\n # If a config.yaml file does not exist, copy from the template!\n CONFIG_FILE = os.path.join(localDir(), 'InvenTree', 'config.yaml')\n CONFIG_TEMPLATE_FILE = os.path.join(localDir(), 'InvenTree', 'config_template.yaml')\n\n if not os.path.exists(CONFIG_FILE):\n print(\"Config file 'config.yaml' does not exist - copying from template.\")\n copyfile(CONFIG_TEMPLATE_FILE, CONFIG_FILE)\n\n\n@task\ndef shell(c):\n \"\"\"\n Open a python shell with access to the InvenTree database models.\n \"\"\"\n\n manage(c, 'shell', pty=True)\n\n@task\ndef worker(c):\n \"\"\"\n Run the InvenTree background worker process\n \"\"\"\n\n manage(c, 'qcluster', pty=True)\n\n@task\ndef superuser(c):\n \"\"\"\n Create a superuser (admin) account for the database.\n \"\"\"\n\n manage(c, 'createsuperuser', pty=True)\n\n@task\ndef check(c):\n \"\"\"\n Check validity of django codebase\n \"\"\"\n\n manage(c, \"check\")\n\n@task\ndef wait(c):\n \"\"\"\n Wait until the database connection is ready\n \"\"\"\n\n manage(c, \"wait_for_db\")\n\n@task\ndef migrate(c):\n \"\"\"\n Performs database migrations.\n This is a critical step if the database schema have been altered!\n \"\"\"\n\n print(\"Running InvenTree database migrations...\")\n print(\"========================================\")\n\n manage(c, \"makemigrations\")\n manage(c, \"migrate\")\n manage(c, \"migrate --run-syncdb\")\n manage(c, \"check\")\n\n print(\"========================================\")\n print(\"InvenTree database migrations completed!\")\n\n\n@task\ndef static(c):\n \"\"\"\n Copies required static files to the STATIC_ROOT directory,\n as per Django requirements.\n \"\"\"\n\n manage(c, \"prerender\")\n manage(c, \"collectstatic 
--no-input\")\n\n\n@task(pre=[install, migrate, static])\ndef update(c):\n \"\"\"\n Update InvenTree installation.\n\n This command should be invoked after source code has been updated,\n e.g. downloading new code from GitHub.\n\n The following tasks are performed, in order:\n\n - install\n - migrate\n - static\n \"\"\"\n pass\n\n@task(post=[static])\ndef translate(c):\n \"\"\"\n Regenerate translation files.\n\n Run this command after added new translatable strings,\n or after adding translations for existing strings.\n \"\"\"\n\n # Translate applicable .py / .html / .js files\n manage(c, \"makemessages --all -e py,html,js\")\n manage(c, \"compilemessages\")\n\n path = os.path.join('InvenTree', 'script', 'translation_stats.py')\n\n c.run(f'python {path}')\n\n@task\ndef style(c):\n \"\"\"\n Run PEP style checks against InvenTree sourcecode\n \"\"\"\n\n print(\"Running PEP style checks...\")\n c.run('flake8 InvenTree')\n\n@task\ndef test(c, database=None):\n \"\"\"\n Run unit-tests for InvenTree codebase.\n \"\"\"\n # Run sanity check on the django install\n manage(c, 'check')\n\n # Run coverage tests\n manage(c, 'test', pty=True)\n\n@task\ndef coverage(c):\n \"\"\"\n Run code-coverage of the InvenTree codebase,\n using the 'coverage' code-analysis tools.\n\n Generates a code coverage report (available in the htmlcov directory)\n \"\"\"\n\n # Run sanity check on the django install\n manage(c, 'check')\n\n # Run coverage tests\n c.run('coverage run {manage} test {apps}'.format(\n manage=managePyPath(),\n apps=' '.join(apps())\n ))\n\n # Generate coverage report\n c.run('coverage html')\n\n\ndef content_excludes():\n \"\"\"\n Returns a list of content types to exclude from import/export\n \"\"\"\n\n excludes = [\n \"contenttypes\",\n \"sessions.session\",\n \"auth.permission\",\n \"error_report.error\",\n \"admin.logentry\",\n \"django_q.schedule\",\n \"django_q.task\",\n \"django_q.ormq\",\n \"users.owner\",\n ]\n\n output = \"\"\n\n for e in excludes:\n output += f\"--exclude {e} \"\n\n return output\n\n\n@task(help={'filename': \"Output filename (default = 'data.json')\"})\ndef export_records(c, filename='data.json'):\n \"\"\"\n Export all database records to a file\n \"\"\"\n\n # Get an absolute path to the file\n if not os.path.isabs(filename):\n filename = os.path.join(localDir(), filename)\n filename = os.path.abspath(filename) \n\n print(f\"Exporting database records to file '{filename}'\")\n\n if os.path.exists(filename):\n response = input(\"Warning: file already exists. Do you want to overwrite? 
[y/N]: \")\n response = str(response).strip().lower()\n\n if response not in ['y', 'yes']:\n print(\"Cancelled export operation\")\n sys.exit(1)\n\n tmpfile = f\"{filename}.tmp\"\n\n cmd = f\"dumpdata --indent 2 --output {tmpfile} {content_excludes()}\"\n\n # Dump data to temporary file\n manage(c, cmd, pty=True)\n\n print(\"Running data post-processing step...\")\n\n # Post-process the file, to remove any \"permissions\" specified for a user or group\n with open(tmpfile, \"r\") as f_in:\n data = json.loads(f_in.read())\n\n for entry in data:\n if \"model\" in entry:\n \n # Clear out any permissions specified for a group\n if entry[\"model\"] == \"auth.group\":\n entry[\"fields\"][\"permissions\"] = []\n\n # Clear out any permissions specified for a user\n if entry[\"model\"] == \"auth.user\":\n entry[\"fields\"][\"user_permissions\"] = []\n\n # Write the processed data to file\n with open(filename, \"w\") as f_out:\n f_out.write(json.dumps(data, indent=2))\n\n print(\"Data export completed\")\n\n\n@task(help={'filename': 'Input filename'})\ndef import_records(c, filename='data.json'):\n \"\"\"\n Import database records from a file\n \"\"\"\n\n # Get an absolute path to the supplied filename\n if not os.path.isabs(filename):\n filename = os.path.join(localDir(), filename)\n\n if not os.path.exists(filename):\n print(f\"Error: File '{filename}' does not exist\")\n sys.exit(1)\n\n print(f\"Importing database records from '{filename}'\")\n\n # Pre-process the data, to remove any \"permissions\" specified for a user or group\n tmpfile = f\"{filename}.tmp.json\"\n\n with open(filename, \"r\") as f_in:\n data = json.loads(f_in.read())\n\n for entry in data:\n if \"model\" in entry:\n \n # Clear out any permissions specified for a group\n if entry[\"model\"] == \"auth.group\":\n entry[\"fields\"][\"permissions\"] = []\n\n # Clear out any permissions specified for a user\n if entry[\"model\"] == \"auth.user\":\n entry[\"fields\"][\"user_permissions\"] = []\n\n # Write the processed data to the tmp file\n with open(tmpfile, \"w\") as f_out:\n f_out.write(json.dumps(data, indent=2))\n\n cmd = f\"loaddata {tmpfile} -i {content_excludes()}\"\n\n manage(c, cmd, pty=True)\n\n print(\"Data import completed\")\n\n@task\ndef import_fixtures(c):\n \"\"\"\n Import fixture data into the database.\n\n This command imports all existing test fixture data into the database.\n\n Warning:\n - Intended for testing / development only!\n - Running this command may overwrite existing database data!!\n - Don't say you were not warned...\n \"\"\"\n\n fixtures = [\n # Build model\n 'build',\n \n # Common models\n 'settings',\n\n # Company model\n 'company',\n 'price_breaks',\n 'supplier_part',\n\n # Order model\n 'order',\n\n # Part model\n 'bom',\n 'category',\n 'params',\n 'part',\n 'test_templates',\n\n # Stock model\n 'location',\n 'stock_tests',\n 'stock',\n\n # Users\n 'users'\n ]\n\n command = 'loaddata ' + ' '.join(fixtures)\n\n manage(c, command, pty=True)\n\n\n@task(help={'address': 'Server address:port (default=127.0.0.1:8000)'})\ndef server(c, address=\"127.0.0.1:8000\"):\n \"\"\"\n Launch a (deveopment) server using Django's in-built webserver.\n\n Note: This is *not* sufficient for a production installation.\n \"\"\"\n\n manage(c, \"runserver {address}\".format(address=address), pty=True)\n", "path": "tasks.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom shutil import copyfile\nimport os\nimport json\nimport sys\n\ntry:\n from invoke import ctask as task\nexcept:\n from invoke import 
task\n\n\ndef apps():\n \"\"\"\n Returns a list of installed apps\n \"\"\"\n\n return [\n 'barcode',\n 'build',\n 'common',\n 'company',\n 'label',\n 'order',\n 'part',\n 'report',\n 'stock',\n 'InvenTree',\n 'users',\n ]\n\n\ndef localDir():\n \"\"\"\n Returns the directory of *THIS* file.\n Used to ensure that the various scripts always run\n in the correct directory.\n \"\"\"\n return os.path.dirname(os.path.abspath(__file__))\n\n\ndef managePyDir():\n \"\"\"\n Returns the directory of the manage.py file\n \"\"\"\n\n return os.path.join(localDir(), 'InvenTree')\n\n\ndef managePyPath():\n \"\"\"\n Return the path of the manage.py file\n \"\"\"\n\n return os.path.join(managePyDir(), 'manage.py')\n\n\ndef manage(c, cmd, pty=False):\n \"\"\"\n Runs a given command against django's \"manage.py\" script.\n\n Args:\n c - Command line context\n cmd - django command to run\n \"\"\"\n\n c.run('cd \"{path}\" && python3 manage.py {cmd}'.format(\n path=managePyDir(),\n cmd=cmd\n ), pty=pty)\n\n\n@task\ndef install(c):\n \"\"\"\n Installs required python packages\n \"\"\"\n\n # Install required Python packages with PIP\n c.run('pip3 install -U -r requirements.txt')\n\n # If a config.yaml file does not exist, copy from the template!\n CONFIG_FILE = os.path.join(localDir(), 'InvenTree', 'config.yaml')\n CONFIG_TEMPLATE_FILE = os.path.join(localDir(), 'InvenTree', 'config_template.yaml')\n\n if not os.path.exists(CONFIG_FILE):\n print(\"Config file 'config.yaml' does not exist - copying from template.\")\n copyfile(CONFIG_TEMPLATE_FILE, CONFIG_FILE)\n\n\n@task\ndef shell(c):\n \"\"\"\n Open a python shell with access to the InvenTree database models.\n \"\"\"\n\n manage(c, 'shell', pty=True)\n\n@task\ndef worker(c):\n \"\"\"\n Run the InvenTree background worker process\n \"\"\"\n\n manage(c, 'qcluster', pty=True)\n\n@task\ndef superuser(c):\n \"\"\"\n Create a superuser (admin) account for the database.\n \"\"\"\n\n manage(c, 'createsuperuser', pty=True)\n\n@task\ndef check(c):\n \"\"\"\n Check validity of django codebase\n \"\"\"\n\n manage(c, \"check\")\n\n@task\ndef wait(c):\n \"\"\"\n Wait until the database connection is ready\n \"\"\"\n\n manage(c, \"wait_for_db\")\n\n@task\ndef migrate(c):\n \"\"\"\n Performs database migrations.\n This is a critical step if the database schema have been altered!\n \"\"\"\n\n print(\"Running InvenTree database migrations...\")\n print(\"========================================\")\n\n manage(c, \"makemigrations\")\n manage(c, \"migrate\")\n manage(c, \"migrate --run-syncdb\")\n manage(c, \"check\")\n\n print(\"========================================\")\n print(\"InvenTree database migrations completed!\")\n\n\n@task\ndef static(c):\n \"\"\"\n Copies required static files to the STATIC_ROOT directory,\n as per Django requirements.\n \"\"\"\n\n manage(c, \"prerender\")\n manage(c, \"collectstatic --no-input\")\n\n\n@task(pre=[install, migrate, static])\ndef update(c):\n \"\"\"\n Update InvenTree installation.\n\n This command should be invoked after source code has been updated,\n e.g. 
downloading new code from GitHub.\n\n The following tasks are performed, in order:\n\n - install\n - migrate\n - static\n \"\"\"\n pass\n\n@task(post=[static])\ndef translate(c):\n \"\"\"\n Regenerate translation files.\n\n Run this command after added new translatable strings,\n or after adding translations for existing strings.\n \"\"\"\n\n # Translate applicable .py / .html / .js files\n manage(c, \"makemessages --all -e py,html,js\")\n manage(c, \"compilemessages\")\n\n path = os.path.join('InvenTree', 'script', 'translation_stats.py')\n\n c.run(f'python {path}')\n\n@task\ndef style(c):\n \"\"\"\n Run PEP style checks against InvenTree sourcecode\n \"\"\"\n\n print(\"Running PEP style checks...\")\n c.run('flake8 InvenTree')\n\n@task\ndef test(c, database=None):\n \"\"\"\n Run unit-tests for InvenTree codebase.\n \"\"\"\n # Run sanity check on the django install\n manage(c, 'check')\n\n # Run coverage tests\n manage(c, 'test', pty=True)\n\n@task\ndef coverage(c):\n \"\"\"\n Run code-coverage of the InvenTree codebase,\n using the 'coverage' code-analysis tools.\n\n Generates a code coverage report (available in the htmlcov directory)\n \"\"\"\n\n # Run sanity check on the django install\n manage(c, 'check')\n\n # Run coverage tests\n c.run('coverage run {manage} test {apps}'.format(\n manage=managePyPath(),\n apps=' '.join(apps())\n ))\n\n # Generate coverage report\n c.run('coverage html')\n\n\ndef content_excludes():\n \"\"\"\n Returns a list of content types to exclude from import/export\n \"\"\"\n\n excludes = [\n \"contenttypes\",\n \"sessions.session\",\n \"auth.permission\",\n \"error_report.error\",\n \"admin.logentry\",\n \"django_q.schedule\",\n \"django_q.task\",\n \"django_q.ormq\",\n \"users.owner\",\n ]\n\n output = \"\"\n\n for e in excludes:\n output += f\"--exclude {e} \"\n\n return output\n\n\n@task(help={'filename': \"Output filename (default = 'data.json')\"})\ndef export_records(c, filename='data.json'):\n \"\"\"\n Export all database records to a file\n \"\"\"\n\n # Get an absolute path to the file\n if not os.path.isabs(filename):\n filename = os.path.join(localDir(), filename)\n filename = os.path.abspath(filename) \n\n print(f\"Exporting database records to file '{filename}'\")\n\n if os.path.exists(filename):\n response = input(\"Warning: file already exists. Do you want to overwrite? 
[y/N]: \")\n response = str(response).strip().lower()\n\n if response not in ['y', 'yes']:\n print(\"Cancelled export operation\")\n sys.exit(1)\n\n tmpfile = f\"{filename}.tmp\"\n\n cmd = f\"dumpdata --indent 2 --output {tmpfile} {content_excludes()}\"\n\n # Dump data to temporary file\n manage(c, cmd, pty=True)\n\n print(\"Running data post-processing step...\")\n\n # Post-process the file, to remove any \"permissions\" specified for a user or group\n with open(tmpfile, \"r\") as f_in:\n data = json.loads(f_in.read())\n\n for entry in data:\n if \"model\" in entry:\n \n # Clear out any permissions specified for a group\n if entry[\"model\"] == \"auth.group\":\n entry[\"fields\"][\"permissions\"] = []\n\n # Clear out any permissions specified for a user\n if entry[\"model\"] == \"auth.user\":\n entry[\"fields\"][\"user_permissions\"] = []\n\n # Write the processed data to file\n with open(filename, \"w\") as f_out:\n f_out.write(json.dumps(data, indent=2))\n\n print(\"Data export completed\")\n\n\n@task(help={'filename': 'Input filename'})\ndef import_records(c, filename='data.json'):\n \"\"\"\n Import database records from a file\n \"\"\"\n\n # Get an absolute path to the supplied filename\n if not os.path.isabs(filename):\n filename = os.path.join(localDir(), filename)\n\n if not os.path.exists(filename):\n print(f\"Error: File '{filename}' does not exist\")\n sys.exit(1)\n\n print(f\"Importing database records from '{filename}'\")\n\n # Pre-process the data, to remove any \"permissions\" specified for a user or group\n tmpfile = f\"{filename}.tmp.json\"\n\n with open(filename, \"r\") as f_in:\n data = json.loads(f_in.read())\n\n for entry in data:\n if \"model\" in entry:\n \n # Clear out any permissions specified for a group\n if entry[\"model\"] == \"auth.group\":\n entry[\"fields\"][\"permissions\"] = []\n\n # Clear out any permissions specified for a user\n if entry[\"model\"] == \"auth.user\":\n entry[\"fields\"][\"user_permissions\"] = []\n\n # Write the processed data to the tmp file\n with open(tmpfile, \"w\") as f_out:\n f_out.write(json.dumps(data, indent=2))\n\n cmd = f\"loaddata {tmpfile} -i {content_excludes()}\"\n\n manage(c, cmd, pty=True)\n\n print(\"Data import completed\")\n\n@task\ndef import_fixtures(c):\n \"\"\"\n Import fixture data into the database.\n\n This command imports all existing test fixture data into the database.\n\n Warning:\n - Intended for testing / development only!\n - Running this command may overwrite existing database data!!\n - Don't say you were not warned...\n \"\"\"\n\n fixtures = [\n # Build model\n 'build',\n \n # Common models\n 'settings',\n\n # Company model\n 'company',\n 'price_breaks',\n 'supplier_part',\n\n # Order model\n 'order',\n\n # Part model\n 'bom',\n 'category',\n 'params',\n 'part',\n 'test_templates',\n\n # Stock model\n 'location',\n 'stock_tests',\n 'stock',\n\n # Users\n 'users'\n ]\n\n command = 'loaddata ' + ' '.join(fixtures)\n\n manage(c, command, pty=True)\n\n\n@task(help={'address': 'Server address:port (default=127.0.0.1:8000)'})\ndef server(c, address=\"127.0.0.1:8000\"):\n \"\"\"\n Launch a (deveopment) server using Django's in-built webserver.\n\n Note: This is *not* sufficient for a production installation.\n \"\"\"\n\n manage(c, \"runserver {address}\".format(address=address), pty=True)\n", "path": "tasks.py"}]}
| 3,780 | 100 |
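
The one-line fix above wraps the working directory in double quotes. The failure in the issue comes from word splitting: when the path contains spaces (common for WSL mounts), the unquoted `cd {path}` expands to several arguments. A minimal reproduction outside invoke, with a made-up directory name:

```python
# Demonstrates why `cd {path}` breaks when the path contains spaces and why
# quoting fixes it. The directory name is illustrative.
import subprocess

path = "/tmp/dir with spaces"
subprocess.run(["mkdir", "-p", path], check=True)

# Unquoted: bash splits the path into three words -> "cd: too many arguments".
bad = subprocess.run(
    'cd {path} && pwd'.format(path=path),
    shell=True, executable="/bin/bash", capture_output=True, text=True,
)
print(bad.returncode, bad.stderr.strip())  # non-zero exit, cd error message

# Quoted: the whole path is passed to cd as a single argument.
good = subprocess.run(
    'cd "{path}" && pwd'.format(path=path),
    shell=True, executable="/bin/bash", capture_output=True, text=True,
)
print(good.returncode, good.stdout.strip())  # 0, /tmp/dir with spaces
```
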
gh_patches_debug_12259
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-flash-1404
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs for using `output` with `ObjectDetector`
## 🐛 Bug
I'm trying to use the FiftyOneDetectionLabelsOutput output for object detection, but I got the following error:
model = model_type.model(backbone=backbone, num_classes=num_classes, **kwargs)
TypeError: model() got an unexpected keyword argument 'output'
How can I correctly set up the output?
### To Reproduce
from flash.image.detection.output import FiftyOneDetectionLabelsOutput
from flash.image import ObjectDetector
out= FiftyOneDetectionLabelsOutput(threshold=0.7)
objDetc=ObjectDetector(num_classes=81,backbone="medium",head="yolov5",output=out)
### Expected behavior
The class should initialize correctly, as described in the documentation:
https://lightning-flash.readthedocs.io/en/latest/api/generated/flash.image.detection.model.ObjectDetector.html#flash.image.detection.model.ObjectDetector
There is an `output` parameter in the description. Maybe it is an old value.
### Environment
- OS (e.g., Linux): Linux
- Python version: 3.8
- PyTorch/Lightning/Flash Version (e.g., 1.10/1.5/0.7): 1.10 / 1.5.8 / 0.7.4
- GPU models and configuration: cuda 11.3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flash/image/detection/model.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, Dict, List, Optional, Type, Union
15
16 from flash.core.adapter import AdapterTask
17 from flash.core.data.io.input import ServeInput
18 from flash.core.data.io.output import Output
19 from flash.core.integrations.icevision.transforms import IceVisionInputTransform
20 from flash.core.model import Task
21 from flash.core.registry import FlashRegistry
22 from flash.core.serve import Composition
23 from flash.core.utilities.imports import requires
24 from flash.core.utilities.types import INPUT_TRANSFORM_TYPE, LR_SCHEDULER_TYPE, OPTIMIZER_TYPE
25 from flash.image.data import ImageDeserializer
26 from flash.image.detection.backbones import OBJECT_DETECTION_HEADS
27 from flash.image.detection.output import OBJECT_DETECTION_OUTPUTS
28
29
30 class ObjectDetector(AdapterTask):
31 """The ``ObjectDetector`` is a :class:`~flash.Task` for detecting objects in images. For more details, see
32 :ref:`object_detection`.
33
34 Args:
35 num_classes: The number of object classes.
36 backbone: String indicating the backbone CNN architecture to use.
37 head: String indicating the head module to use ontop of the backbone.
38 pretrained: Whether the model should be loaded with it's pretrained weights.
39 optimizer: Optimizer to use for training.
40 lr_scheduler: The LR scheduler to use during training.
41 learning_rate: The learning rate to use for training.
42 output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.
43 predict_kwargs: dictionary containing parameters that will be used during the prediction phase.
44 kwargs: additional kwargs nessesary for initializing the backbone task
45 """
46
47 heads: FlashRegistry = OBJECT_DETECTION_HEADS
48 outputs = Task.outputs + OBJECT_DETECTION_OUTPUTS
49
50 required_extras: List[str] = ["image", "icevision", "effdet"]
51
52 def __init__(
53 self,
54 num_classes: Optional[int] = None,
55 labels: Optional[List[str]] = None,
56 backbone: Optional[str] = "resnet18_fpn",
57 head: Optional[str] = "retinanet",
58 pretrained: bool = True,
59 optimizer: OPTIMIZER_TYPE = "Adam",
60 lr_scheduler: LR_SCHEDULER_TYPE = None,
61 learning_rate: Optional[float] = None,
62 predict_kwargs: Dict = None,
63 **kwargs: Any,
64 ):
65 self.save_hyperparameters()
66
67 if labels is not None and num_classes is None:
68 num_classes = len(labels)
69
70 self.labels = labels
71 self.num_classes = num_classes
72
73 predict_kwargs = predict_kwargs if predict_kwargs else {}
74 metadata = self.heads.get(head, with_metadata=True)
75 adapter = metadata["metadata"]["adapter"].from_task(
76 self,
77 num_classes=num_classes,
78 backbone=backbone,
79 head=head,
80 pretrained=pretrained,
81 predict_kwargs=predict_kwargs,
82 **kwargs,
83 )
84
85 super().__init__(
86 adapter,
87 learning_rate=learning_rate,
88 optimizer=optimizer,
89 lr_scheduler=lr_scheduler,
90 )
91
92 def _ci_benchmark_fn(self, history: List[Dict[str, Any]]) -> None:
93 """This function is used only for debugging usage with CI."""
94 # todo
95
96 @property
97 def predict_kwargs(self) -> Dict[str, Any]:
98 """The kwargs used for the prediction step."""
99 return self.adapter.predict_kwargs
100
101 @predict_kwargs.setter
102 def predict_kwargs(self, predict_kwargs: Dict[str, Any]):
103 self.adapter.predict_kwargs = predict_kwargs
104
105 @requires("serve")
106 def serve(
107 self,
108 host: str = "127.0.0.1",
109 port: int = 8000,
110 sanity_check: bool = True,
111 input_cls: Optional[Type[ServeInput]] = ImageDeserializer,
112 transform: INPUT_TRANSFORM_TYPE = IceVisionInputTransform,
113 transform_kwargs: Optional[Dict] = None,
114 output: Optional[Union[str, Output]] = None,
115 ) -> Composition:
116 return super().serve(host, port, sanity_check, input_cls, transform, transform_kwargs, output)
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flash/image/detection/model.py b/flash/image/detection/model.py
--- a/flash/image/detection/model.py
+++ b/flash/image/detection/model.py
@@ -39,7 +39,6 @@
optimizer: Optimizer to use for training.
lr_scheduler: The LR scheduler to use during training.
learning_rate: The learning rate to use for training.
- output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.
predict_kwargs: dictionary containing parameters that will be used during the prediction phase.
kwargs: additional kwargs nessesary for initializing the backbone task
"""
|
{"golden_diff": "diff --git a/flash/image/detection/model.py b/flash/image/detection/model.py\n--- a/flash/image/detection/model.py\n+++ b/flash/image/detection/model.py\n@@ -39,7 +39,6 @@\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n learning_rate: The learning rate to use for training.\n- output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.\n predict_kwargs: dictionary containing parameters that will be used during the prediction phase.\n kwargs: additional kwargs nessesary for initializing the backbone task\n \"\"\"\n", "issue": "Docs for using `output` with `ObjectDetector`\n## \ud83d\udc1b Bug\r\n\r\nI'am trying to use FiftyOneDetectionLabelsOutput output for Object detection but i got the following error\r\n\r\nmodel = model_type.model(backbone=backbone, num_classes=num_classes, **kwargs)\r\nTypeError: model() got an unexpected keyword argument 'output'\r\n\r\nHow can i correctly setup the output?\r\n\r\n### To Reproduce\r\nfrom flash.image.detection.output import FiftyOneDetectionLabelsOutput\r\nfrom flash.image import ObjectDetector\r\n\r\nout= FiftyOneDetectionLabelsOutput(threshold=0.7)\r\nobjDetc=ObjectDetector(num_classes=81,backbone=\"medium\",head=\"yolov5\",output=out)\r\n\r\n\r\n### Expected behavior\r\nClass init correctly as described in documentation\r\n\r\nhttps://lightning-flash.readthedocs.io/en/latest/api/generated/flash.image.detection.model.ObjectDetector.html#flash.image.detection.model.ObjectDetector\r\nThere is a parameter output in the description. Maybe is an old value\r\n\r\n\r\n### Environment\r\n\r\n - OS (e.g., Linux): Linux\r\n - Python version: 3.8\r\n - PyTorch/Lightning/Flash Version (e.g., 1.10/1.5/0.7): 1.10 / 1.5.8 / 0.7.4\r\n - GPU models and configuration: cuda 11.3\r\n \n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, List, Optional, Type, Union\n\nfrom flash.core.adapter import AdapterTask\nfrom flash.core.data.io.input import ServeInput\nfrom flash.core.data.io.output import Output\nfrom flash.core.integrations.icevision.transforms import IceVisionInputTransform\nfrom flash.core.model import Task\nfrom flash.core.registry import FlashRegistry\nfrom flash.core.serve import Composition\nfrom flash.core.utilities.imports import requires\nfrom flash.core.utilities.types import INPUT_TRANSFORM_TYPE, LR_SCHEDULER_TYPE, OPTIMIZER_TYPE\nfrom flash.image.data import ImageDeserializer\nfrom flash.image.detection.backbones import OBJECT_DETECTION_HEADS\nfrom flash.image.detection.output import OBJECT_DETECTION_OUTPUTS\n\n\nclass ObjectDetector(AdapterTask):\n \"\"\"The ``ObjectDetector`` is a :class:`~flash.Task` for detecting objects in images. 
For more details, see\n :ref:`object_detection`.\n\n Args:\n num_classes: The number of object classes.\n backbone: String indicating the backbone CNN architecture to use.\n head: String indicating the head module to use ontop of the backbone.\n pretrained: Whether the model should be loaded with it's pretrained weights.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n learning_rate: The learning rate to use for training.\n output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.\n predict_kwargs: dictionary containing parameters that will be used during the prediction phase.\n kwargs: additional kwargs nessesary for initializing the backbone task\n \"\"\"\n\n heads: FlashRegistry = OBJECT_DETECTION_HEADS\n outputs = Task.outputs + OBJECT_DETECTION_OUTPUTS\n\n required_extras: List[str] = [\"image\", \"icevision\", \"effdet\"]\n\n def __init__(\n self,\n num_classes: Optional[int] = None,\n labels: Optional[List[str]] = None,\n backbone: Optional[str] = \"resnet18_fpn\",\n head: Optional[str] = \"retinanet\",\n pretrained: bool = True,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n learning_rate: Optional[float] = None,\n predict_kwargs: Dict = None,\n **kwargs: Any,\n ):\n self.save_hyperparameters()\n\n if labels is not None and num_classes is None:\n num_classes = len(labels)\n\n self.labels = labels\n self.num_classes = num_classes\n\n predict_kwargs = predict_kwargs if predict_kwargs else {}\n metadata = self.heads.get(head, with_metadata=True)\n adapter = metadata[\"metadata\"][\"adapter\"].from_task(\n self,\n num_classes=num_classes,\n backbone=backbone,\n head=head,\n pretrained=pretrained,\n predict_kwargs=predict_kwargs,\n **kwargs,\n )\n\n super().__init__(\n adapter,\n learning_rate=learning_rate,\n optimizer=optimizer,\n lr_scheduler=lr_scheduler,\n )\n\n def _ci_benchmark_fn(self, history: List[Dict[str, Any]]) -> None:\n \"\"\"This function is used only for debugging usage with CI.\"\"\"\n # todo\n\n @property\n def predict_kwargs(self) -> Dict[str, Any]:\n \"\"\"The kwargs used for the prediction step.\"\"\"\n return self.adapter.predict_kwargs\n\n @predict_kwargs.setter\n def predict_kwargs(self, predict_kwargs: Dict[str, Any]):\n self.adapter.predict_kwargs = predict_kwargs\n\n @requires(\"serve\")\n def serve(\n self,\n host: str = \"127.0.0.1\",\n port: int = 8000,\n sanity_check: bool = True,\n input_cls: Optional[Type[ServeInput]] = ImageDeserializer,\n transform: INPUT_TRANSFORM_TYPE = IceVisionInputTransform,\n transform_kwargs: Optional[Dict] = None,\n output: Optional[Union[str, Output]] = None,\n ) -> Composition:\n return super().serve(host, port, sanity_check, input_cls, transform, transform_kwargs, output)\n", "path": "flash/image/detection/model.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, List, Optional, Type, 
Union\n\nfrom flash.core.adapter import AdapterTask\nfrom flash.core.data.io.input import ServeInput\nfrom flash.core.data.io.output import Output\nfrom flash.core.integrations.icevision.transforms import IceVisionInputTransform\nfrom flash.core.model import Task\nfrom flash.core.registry import FlashRegistry\nfrom flash.core.serve import Composition\nfrom flash.core.utilities.imports import requires\nfrom flash.core.utilities.types import INPUT_TRANSFORM_TYPE, LR_SCHEDULER_TYPE, OPTIMIZER_TYPE\nfrom flash.image.data import ImageDeserializer\nfrom flash.image.detection.backbones import OBJECT_DETECTION_HEADS\nfrom flash.image.detection.output import OBJECT_DETECTION_OUTPUTS\n\n\nclass ObjectDetector(AdapterTask):\n \"\"\"The ``ObjectDetector`` is a :class:`~flash.Task` for detecting objects in images. For more details, see\n :ref:`object_detection`.\n\n Args:\n num_classes: The number of object classes.\n backbone: String indicating the backbone CNN architecture to use.\n head: String indicating the head module to use ontop of the backbone.\n pretrained: Whether the model should be loaded with it's pretrained weights.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n learning_rate: The learning rate to use for training.\n predict_kwargs: dictionary containing parameters that will be used during the prediction phase.\n kwargs: additional kwargs nessesary for initializing the backbone task\n \"\"\"\n\n heads: FlashRegistry = OBJECT_DETECTION_HEADS\n outputs = Task.outputs + OBJECT_DETECTION_OUTPUTS\n\n required_extras: List[str] = [\"image\", \"icevision\", \"effdet\"]\n\n def __init__(\n self,\n num_classes: Optional[int] = None,\n labels: Optional[List[str]] = None,\n backbone: Optional[str] = \"resnet18_fpn\",\n head: Optional[str] = \"retinanet\",\n pretrained: bool = True,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n learning_rate: Optional[float] = None,\n predict_kwargs: Dict = None,\n **kwargs: Any,\n ):\n self.save_hyperparameters()\n\n if labels is not None and num_classes is None:\n num_classes = len(labels)\n\n self.labels = labels\n self.num_classes = num_classes\n\n predict_kwargs = predict_kwargs if predict_kwargs else {}\n metadata = self.heads.get(head, with_metadata=True)\n adapter = metadata[\"metadata\"][\"adapter\"].from_task(\n self,\n num_classes=num_classes,\n backbone=backbone,\n head=head,\n pretrained=pretrained,\n predict_kwargs=predict_kwargs,\n **kwargs,\n )\n\n super().__init__(\n adapter,\n learning_rate=learning_rate,\n optimizer=optimizer,\n lr_scheduler=lr_scheduler,\n )\n\n def _ci_benchmark_fn(self, history: List[Dict[str, Any]]) -> None:\n \"\"\"This function is used only for debugging usage with CI.\"\"\"\n # todo\n\n @property\n def predict_kwargs(self) -> Dict[str, Any]:\n \"\"\"The kwargs used for the prediction step.\"\"\"\n return self.adapter.predict_kwargs\n\n @predict_kwargs.setter\n def predict_kwargs(self, predict_kwargs: Dict[str, Any]):\n self.adapter.predict_kwargs = predict_kwargs\n\n @requires(\"serve\")\n def serve(\n self,\n host: str = \"127.0.0.1\",\n port: int = 8000,\n sanity_check: bool = True,\n input_cls: Optional[Type[ServeInput]] = ImageDeserializer,\n transform: INPUT_TRANSFORM_TYPE = IceVisionInputTransform,\n transform_kwargs: Optional[Dict] = None,\n output: Optional[Union[str, Output]] = None,\n ) -> Composition:\n return super().serve(host, port, sanity_check, input_cls, transform, transform_kwargs, output)\n", "path": 
"flash/image/detection/model.py"}]}
| 1,781 | 140 |
gh_patches_debug_47732 | rasdani/github-patches | git_diff | ray-project__ray-8617 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] PyTorch and SampleAsync validation
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
PyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2
It might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).
Ray Version 0.9.0dev (but this applies to any ray version actually)
### Reproduction (REQUIRED)
Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):
If we cannot run your script, we cannot fix your issue.
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
[rllib] PyTorch and SampleAsync validation
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
PyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2
It might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).
Ray Version 0.9.0dev (but this applies to any ray version actually)
### Reproduction (REQUIRED)
Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):
If we cannot run your script, we cannot fix your issue.
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rllib/agents/a3c/a3c.py`
Content:
```
1 import logging
2
3 from ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy
4 from ray.rllib.agents.trainer import with_common_config
5 from ray.rllib.agents.trainer_template import build_trainer
6 from ray.rllib.execution.rollout_ops import AsyncGradients
7 from ray.rllib.execution.train_ops import ApplyGradients
8 from ray.rllib.execution.metric_ops import StandardMetricsReporting
9
10 logger = logging.getLogger(__name__)
11
12 # yapf: disable
13 # __sphinx_doc_begin__
14 DEFAULT_CONFIG = with_common_config({
15 # Should use a critic as a baseline (otherwise don't use value baseline;
16 # required for using GAE).
17 "use_critic": True,
18 # If true, use the Generalized Advantage Estimator (GAE)
19 # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.
20 "use_gae": True,
21 # Size of rollout batch
22 "rollout_fragment_length": 10,
23 # GAE(gamma) parameter
24 "lambda": 1.0,
25 # Max global norm for each gradient calculated by worker
26 "grad_clip": 40.0,
27 # Learning rate
28 "lr": 0.0001,
29 # Learning rate schedule
30 "lr_schedule": None,
31 # Value Function Loss coefficient
32 "vf_loss_coeff": 0.5,
33 # Entropy coefficient
34 "entropy_coeff": 0.01,
35 # Min time per iteration
36 "min_iter_time_s": 5,
37 # Workers sample async. Note that this increases the effective
38 # rollout_fragment_length by up to 5x due to async buffering of batches.
39 "sample_async": True,
40 })
41 # __sphinx_doc_end__
42 # yapf: enable
43
44
45 def get_policy_class(config):
46 if config["use_pytorch"]:
47 from ray.rllib.agents.a3c.a3c_torch_policy import \
48 A3CTorchPolicy
49 return A3CTorchPolicy
50 else:
51 return A3CTFPolicy
52
53
54 def validate_config(config):
55 if config["entropy_coeff"] < 0:
56 raise DeprecationWarning("entropy_coeff must be >= 0")
57 if config["sample_async"] and config["use_pytorch"]:
58 config["sample_async"] = False
59 logger.warning(
60 "The sample_async option is not supported with use_pytorch: "
61 "Multithreading can be lead to crashes if used with pytorch.")
62
63
64 def execution_plan(workers, config):
65 # For A3C, compute policy gradients remotely on the rollout workers.
66 grads = AsyncGradients(workers)
67
68 # Apply the gradients as they arrive. We set update_all to False so that
69 # only the worker sending the gradient is updated with new weights.
70 train_op = grads.for_each(ApplyGradients(workers, update_all=False))
71
72 return StandardMetricsReporting(train_op, workers, config)
73
74
75 A3CTrainer = build_trainer(
76 name="A3C",
77 default_config=DEFAULT_CONFIG,
78 default_policy=A3CTFPolicy,
79 get_policy_class=get_policy_class,
80 validate_config=validate_config,
81 execution_plan=execution_plan)
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rllib/agents/a3c/a3c.py b/rllib/agents/a3c/a3c.py
--- a/rllib/agents/a3c/a3c.py
+++ b/rllib/agents/a3c/a3c.py
@@ -54,11 +54,6 @@
def validate_config(config):
if config["entropy_coeff"] < 0:
raise DeprecationWarning("entropy_coeff must be >= 0")
- if config["sample_async"] and config["use_pytorch"]:
- config["sample_async"] = False
- logger.warning(
- "The sample_async option is not supported with use_pytorch: "
- "Multithreading can be lead to crashes if used with pytorch.")
def execution_plan(workers, config):
|
{"golden_diff": "diff --git a/rllib/agents/a3c/a3c.py b/rllib/agents/a3c/a3c.py\n--- a/rllib/agents/a3c/a3c.py\n+++ b/rllib/agents/a3c/a3c.py\n@@ -54,11 +54,6 @@\n def validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n- if config[\"sample_async\"] and config[\"use_pytorch\"]:\n- config[\"sample_async\"] = False\n- logger.warning(\n- \"The sample_async option is not supported with use_pytorch: \"\n- \"Multithreading can be lead to crashes if used with pytorch.\")\n \n \n def execution_plan(workers, config):\n", "issue": "[rllib] PyTorch and SampleAsync validation\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nPyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2 \r\n\r\nIt might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).\r\n\r\nRay Version 0.9.0dev (but this applies to any ray version actually)\r\n\r\n### Reproduction (REQUIRED)\r\nPlease provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):\r\n\r\nIf we cannot run your script, we cannot fix your issue.\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n[rllib] PyTorch and SampleAsync validation\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nPyTorch is supposed to be thread-safe, as long as you don't write a tensor using multiple threads. Please see https://discuss.pytorch.org/t/is-pytorch-supposed-to-be-thread-safe/36540/2 \r\n\r\nIt might be worth removing the validation of sample_async and use_pytorch for A3C (and maybe others?).\r\n\r\nRay Version 0.9.0dev (but this applies to any ray version actually)\r\n\r\n### Reproduction (REQUIRED)\r\nPlease provide a script that can be run to reproduce the issue. 
The script should have **no external library dependencies** (i.e., use fake or mock data / environments):\r\n\r\nIf we cannot run your script, we cannot fix your issue.\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "import logging\n\nfrom ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy\nfrom ray.rllib.agents.trainer import with_common_config\nfrom ray.rllib.agents.trainer_template import build_trainer\nfrom ray.rllib.execution.rollout_ops import AsyncGradients\nfrom ray.rllib.execution.train_ops import ApplyGradients\nfrom ray.rllib.execution.metric_ops import StandardMetricsReporting\n\nlogger = logging.getLogger(__name__)\n\n# yapf: disable\n# __sphinx_doc_begin__\nDEFAULT_CONFIG = with_common_config({\n # Should use a critic as a baseline (otherwise don't use value baseline;\n # required for using GAE).\n \"use_critic\": True,\n # If true, use the Generalized Advantage Estimator (GAE)\n # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.\n \"use_gae\": True,\n # Size of rollout batch\n \"rollout_fragment_length\": 10,\n # GAE(gamma) parameter\n \"lambda\": 1.0,\n # Max global norm for each gradient calculated by worker\n \"grad_clip\": 40.0,\n # Learning rate\n \"lr\": 0.0001,\n # Learning rate schedule\n \"lr_schedule\": None,\n # Value Function Loss coefficient\n \"vf_loss_coeff\": 0.5,\n # Entropy coefficient\n \"entropy_coeff\": 0.01,\n # Min time per iteration\n \"min_iter_time_s\": 5,\n # Workers sample async. Note that this increases the effective\n # rollout_fragment_length by up to 5x due to async buffering of batches.\n \"sample_async\": True,\n})\n# __sphinx_doc_end__\n# yapf: enable\n\n\ndef get_policy_class(config):\n if config[\"use_pytorch\"]:\n from ray.rllib.agents.a3c.a3c_torch_policy import \\\n A3CTorchPolicy\n return A3CTorchPolicy\n else:\n return A3CTFPolicy\n\n\ndef validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n if config[\"sample_async\"] and config[\"use_pytorch\"]:\n config[\"sample_async\"] = False\n logger.warning(\n \"The sample_async option is not supported with use_pytorch: \"\n \"Multithreading can be lead to crashes if used with pytorch.\")\n\n\ndef execution_plan(workers, config):\n # For A3C, compute policy gradients remotely on the rollout workers.\n grads = AsyncGradients(workers)\n\n # Apply the gradients as they arrive. 
We set update_all to False so that\n # only the worker sending the gradient is updated with new weights.\n train_op = grads.for_each(ApplyGradients(workers, update_all=False))\n\n return StandardMetricsReporting(train_op, workers, config)\n\n\nA3CTrainer = build_trainer(\n name=\"A3C\",\n default_config=DEFAULT_CONFIG,\n default_policy=A3CTFPolicy,\n get_policy_class=get_policy_class,\n validate_config=validate_config,\n execution_plan=execution_plan)\n", "path": "rllib/agents/a3c/a3c.py"}], "after_files": [{"content": "import logging\n\nfrom ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy\nfrom ray.rllib.agents.trainer import with_common_config\nfrom ray.rllib.agents.trainer_template import build_trainer\nfrom ray.rllib.execution.rollout_ops import AsyncGradients\nfrom ray.rllib.execution.train_ops import ApplyGradients\nfrom ray.rllib.execution.metric_ops import StandardMetricsReporting\n\nlogger = logging.getLogger(__name__)\n\n# yapf: disable\n# __sphinx_doc_begin__\nDEFAULT_CONFIG = with_common_config({\n # Should use a critic as a baseline (otherwise don't use value baseline;\n # required for using GAE).\n \"use_critic\": True,\n # If true, use the Generalized Advantage Estimator (GAE)\n # with a value function, see https://arxiv.org/pdf/1506.02438.pdf.\n \"use_gae\": True,\n # Size of rollout batch\n \"rollout_fragment_length\": 10,\n # GAE(gamma) parameter\n \"lambda\": 1.0,\n # Max global norm for each gradient calculated by worker\n \"grad_clip\": 40.0,\n # Learning rate\n \"lr\": 0.0001,\n # Learning rate schedule\n \"lr_schedule\": None,\n # Value Function Loss coefficient\n \"vf_loss_coeff\": 0.5,\n # Entropy coefficient\n \"entropy_coeff\": 0.01,\n # Min time per iteration\n \"min_iter_time_s\": 5,\n # Workers sample async. Note that this increases the effective\n # rollout_fragment_length by up to 5x due to async buffering of batches.\n \"sample_async\": True,\n})\n# __sphinx_doc_end__\n# yapf: enable\n\n\ndef get_policy_class(config):\n if config[\"use_pytorch\"]:\n from ray.rllib.agents.a3c.a3c_torch_policy import \\\n A3CTorchPolicy\n return A3CTorchPolicy\n else:\n return A3CTFPolicy\n\n\ndef validate_config(config):\n if config[\"entropy_coeff\"] < 0:\n raise DeprecationWarning(\"entropy_coeff must be >= 0\")\n\n\ndef execution_plan(workers, config):\n # For A3C, compute policy gradients remotely on the rollout workers.\n grads = AsyncGradients(workers)\n\n # Apply the gradients as they arrive. We set update_all to False so that\n # only the worker sending the gradient is updated with new weights.\n train_op = grads.for_each(ApplyGradients(workers, update_all=False))\n\n return StandardMetricsReporting(train_op, workers, config)\n\n\nA3CTrainer = build_trainer(\n name=\"A3C\",\n default_config=DEFAULT_CONFIG,\n default_policy=A3CTFPolicy,\n get_policy_class=get_policy_class,\n validate_config=validate_config,\n execution_plan=execution_plan)\n", "path": "rllib/agents/a3c/a3c.py"}]}
| 1,607 | 172 |
gh_patches_debug_24113 | rasdani/github-patches | git_diff | deepset-ai__haystack-1286 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
change naming of vars in tutorials
Tutorial 2
Change name of “train_data” variable to “data_dir”
Tutorial 5 in python and colab
Change parameter top_k_per_candidate into top_k
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tutorials/Tutorial5_Evaluation.py`
Content:
```
1 from haystack.document_store.elasticsearch import ElasticsearchDocumentStore
2 from haystack.preprocessor.utils import fetch_archive_from_http
3 from haystack.retriever.sparse import ElasticsearchRetriever
4 from haystack.retriever.dense import DensePassageRetriever
5 from haystack.eval import EvalAnswers, EvalDocuments
6 from haystack.reader.farm import FARMReader
7 from haystack.preprocessor import PreProcessor
8 from haystack.utils import launch_es
9 from haystack import Pipeline
10
11 from farm.utils import initialize_device_settings
12
13 import logging
14
15 logger = logging.getLogger(__name__)
16
17
18 def tutorial5_evaluation():
19
20 ##############################################
21 # Settings
22 ##############################################
23 # Choose from Evaluation style from ['retriever_closed', 'reader_closed', 'retriever_reader_open']
24 # 'retriever_closed' - evaluates only the retriever, based on whether the gold_label document is retrieved.
25 # 'reader_closed' - evaluates only the reader in a closed domain fashion i.e. the reader is given one query
26 # and one document and metrics are calculated on whether the right position in this text is selected by
27 # the model as the answer span (i.e. SQuAD style)
28 # 'retriever_reader_open' - evaluates retriever and reader in open domain fashion i.e. a document is considered
29 # correctly retrieved if it contains the answer string within it. The reader is evaluated based purely on the
30 # predicted string, regardless of which document this came from and the position of the extracted span.
31 style = "retriever_reader_open"
32
33 # make sure these indices do not collide with existing ones, the indices will be wiped clean before data is inserted
34 doc_index = "tutorial5_docs"
35 label_index = "tutorial5_labels"
36
37 ##############################################
38 # Code
39 ##############################################
40 launch_es()
41 device, n_gpu = initialize_device_settings(use_cuda=True)
42
43 # Download evaluation data, which is a subset of Natural Questions development set containing 50 documents
44 doc_dir = "../data/nq"
45 s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/nq_dev_subset_v2.json.zip"
46 fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
47
48 # Connect to Elasticsearch
49 document_store = ElasticsearchDocumentStore(
50 host="localhost", username="", password="", index="document",
51 create_index=False, embedding_field="emb",
52 embedding_dim=768, excluded_meta_data=["emb"]
53 )
54
55 # Add evaluation data to Elasticsearch document store
56 # We first delete the custom tutorial indices to not have duplicate elements
57 # and also split our documents into shorter passages using the PreProcessor
58 preprocessor = PreProcessor(
59 split_by="word",
60 split_length=500,
61 split_overlap=0,
62 split_respect_sentence_boundary=False,
63 clean_empty_lines=False,
64 clean_whitespace=False
65 )
66 document_store.delete_all_documents(index=doc_index)
67 document_store.delete_all_documents(index=label_index)
68 document_store.add_eval_data(
69 filename="../data/nq/nq_dev_subset_v2.json",
70 doc_index=doc_index,
71 label_index=label_index,
72 preprocessor=preprocessor
73 )
74
75 # Let's prepare the labels that we need for the retriever and the reader
76 labels = document_store.get_all_labels_aggregated(index=label_index)
77
78 # Initialize Retriever
79 retriever = ElasticsearchRetriever(document_store=document_store)
80
81 # Alternative: Evaluate DensePassageRetriever
82 # Note, that DPR works best when you index short passages < 512 tokens as only those tokens will be used for the embedding.
83 # Here, for nq_dev_subset_v2.json we have avg. num of tokens = 5220(!).
84 # DPR still outperforms Elastic's BM25 by a small margin here.
85 # retriever = DensePassageRetriever(document_store=document_store,
86 # query_embedding_model="facebook/dpr-question_encoder-single-nq-base",
87 # passage_embedding_model="facebook/dpr-ctx_encoder-single-nq-base",
88 # use_gpu=True,
89 # embed_title=True,
90 # remove_sep_tok_from_untitled_passages=True)
91 # document_store.update_embeddings(retriever, index=doc_index)
92
93 # Initialize Reader
94 reader = FARMReader(
95 model_name_or_path="deepset/roberta-base-squad2",
96 top_k_per_candidate=4,
97 return_no_answer=True
98 )
99
100 # Here we initialize the nodes that perform evaluation
101 eval_retriever = EvalDocuments()
102 eval_reader = EvalAnswers()
103
104
105 ## Evaluate Retriever on its own in closed domain fashion
106 if style == "retriever_closed":
107 retriever_eval_results = retriever.eval(top_k=10, label_index=label_index, doc_index=doc_index)
108 ## Retriever Recall is the proportion of questions for which the correct document containing the answer is
109 ## among the correct documents
110 print("Retriever Recall:", retriever_eval_results["recall"])
111 ## Retriever Mean Avg Precision rewards retrievers that give relevant documents a higher rank
112 print("Retriever Mean Avg Precision:", retriever_eval_results["map"])
113
114 # Evaluate Reader on its own in closed domain fashion (i.e. SQuAD style)
115 elif style == "reader_closed":
116 reader_eval_results = reader.eval(document_store=document_store, device=device, label_index=label_index, doc_index=doc_index)
117 # Evaluation of Reader can also be done directly on a SQuAD-formatted file without passing the data to Elasticsearch
118 #reader_eval_results = reader.eval_on_file("../data/nq", "nq_dev_subset_v2.json", device=device)
119
120 ## Reader Top-N-Accuracy is the proportion of predicted answers that match with their corresponding correct answer
121 print("Reader Top-N-Accuracy:", reader_eval_results["top_n_accuracy"])
122 ## Reader Exact Match is the proportion of questions where the predicted answer is exactly the same as the correct answer
123 print("Reader Exact Match:", reader_eval_results["EM"])
124 ## Reader F1-Score is the average overlap between the predicted answers and the correct answers
125 print("Reader F1-Score:", reader_eval_results["f1"])
126
127
128 # Evaluate combination of Reader and Retriever in open domain fashion
129 elif style == "retriever_reader_open":
130
131 # Here is the pipeline definition
132 p = Pipeline()
133 p.add_node(component=retriever, name="ESRetriever", inputs=["Query"])
134 p.add_node(component=eval_retriever, name="EvalDocuments", inputs=["ESRetriever"])
135 p.add_node(component=reader, name="QAReader", inputs=["EvalDocuments"])
136 p.add_node(component=eval_reader, name="EvalAnswers", inputs=["QAReader"])
137 results = []
138
139 for l in labels:
140 res = p.run(
141 query=l.question,
142 top_k_retriever=10,
143 labels=l,
144 top_k_reader=10,
145 index=doc_index,
146 )
147 results.append(res)
148
149 eval_retriever.print()
150 print()
151 retriever.print_time()
152 print()
153 eval_reader.print(mode="reader")
154 print()
155 reader.print_time()
156 print()
157 eval_reader.print(mode="pipeline")
158 else:
159 raise ValueError(f'style={style} is not a valid option. Choose from retriever_closed, reader_closed, retriever_reader_open')
160
161
162 if __name__ == "__main__":
163 tutorial5_evaluation()
164
165 # This Haystack script was made with love by deepset in Berlin, Germany
166 # Haystack: https://github.com/deepset-ai/haystack
167 # deepset: https://deepset.ai/
```
Path: `tutorials/Tutorial2_Finetune_a_model_on_your_data.py`
Content:
```
1 # # Fine-tuning a model on your own data
2 #
3 # For many use cases it is sufficient to just use one of the existing public models that were trained on SQuAD or
4 # other public QA datasets (e.g. Natural Questions).
5 # However, if you have domain-specific questions, fine-tuning your model on custom examples will very likely boost
6 # your performance. While this varies by domain, we saw that ~ 2000 examples can easily increase performance by +5-20%.
7 #
8 # This tutorial shows you how to fine-tune a pretrained model on your own dataset.
9
10 from haystack.reader.farm import FARMReader
11
12
13 def tutorial2_finetune_a_model_on_your_data():
14 # ## Create Training Data
15 #
16 # There are two ways to generate training data
17 #
18 # 1. **Annotation**: You can use the annotation tool(https://github.com/deepset-ai/haystack#labeling-tool) to label
19 # your data, i.e. highlighting answers to your questions in a document. The tool supports structuring
20 # your workflow with organizations, projects, and users. The labels can be exported in SQuAD format
21 # that is compatible for training with Haystack.
22 #
23 # 2. **Feedback**: For production systems, you can collect training data from direct user feedback via Haystack's
24 # REST API interface. This includes a customizable user feedback API for providing feedback on the
25 # answer returned by the API. The API provides a feedback export endpoint to obtain the feedback data
26 # for fine-tuning your model further.
27 #
28 #
29 # ## Fine-tune your model
30 #
31 # Once you have collected training data, you can fine-tune your base models.
32 # We initialize a reader as a base model and fine-tune it on our own custom dataset (should be in SQuAD-like format).
33 # We recommend using a base model that was trained on SQuAD or a similar QA dataset before to benefit from Transfer
34 # Learning effects.
35
36 #**Recommendation: Run training on a GPU. To do so change the `use_gpu` arguments below to `True`
37
38 reader = FARMReader(model_name_or_path="distilbert-base-uncased-distilled-squad", use_gpu=True)
39 train_data = "data/squad20"
40 # train_data = "PATH/TO_YOUR/TRAIN_DATA"
41 reader.train(data_dir=train_data, train_filename="dev-v2.0.json", use_gpu=True, n_epochs=1, save_dir="my_model")
42
43 # Saving the model happens automatically at the end of training into the `save_dir` you specified
44 # However, you could also save a reader manually again via:
45 reader.save(directory="my_model")
46
47 # If you want to load it at a later point, just do:
48 new_reader = FARMReader(model_name_or_path="my_model")
49
50
51 if __name__ == "__main__":
52 tutorial2_finetune_a_model_on_your_data()
53
54 # This Haystack script was made with love by deepset in Berlin, Germany
55 # Haystack: https://github.com/deepset-ai/haystack
56 # deepset: https://deepset.ai/
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tutorials/Tutorial2_Finetune_a_model_on_your_data.py b/tutorials/Tutorial2_Finetune_a_model_on_your_data.py
--- a/tutorials/Tutorial2_Finetune_a_model_on_your_data.py
+++ b/tutorials/Tutorial2_Finetune_a_model_on_your_data.py
@@ -36,9 +36,9 @@
#**Recommendation: Run training on a GPU. To do so change the `use_gpu` arguments below to `True`
reader = FARMReader(model_name_or_path="distilbert-base-uncased-distilled-squad", use_gpu=True)
- train_data = "data/squad20"
- # train_data = "PATH/TO_YOUR/TRAIN_DATA"
- reader.train(data_dir=train_data, train_filename="dev-v2.0.json", use_gpu=True, n_epochs=1, save_dir="my_model")
+ data_dir = "data/squad20"
+ # data_dir = "PATH/TO_YOUR/TRAIN_DATA"
+ reader.train(data_dir=data_dir, train_filename="dev-v2.0.json", use_gpu=True, n_epochs=1, save_dir="my_model")
# Saving the model happens automatically at the end of training into the `save_dir` you specified
# However, you could also save a reader manually again via:
diff --git a/tutorials/Tutorial5_Evaluation.py b/tutorials/Tutorial5_Evaluation.py
--- a/tutorials/Tutorial5_Evaluation.py
+++ b/tutorials/Tutorial5_Evaluation.py
@@ -93,7 +93,7 @@
# Initialize Reader
reader = FARMReader(
model_name_or_path="deepset/roberta-base-squad2",
- top_k_per_candidate=4,
+ top_k=4,
return_no_answer=True
)
|
{"golden_diff": "diff --git a/tutorials/Tutorial2_Finetune_a_model_on_your_data.py b/tutorials/Tutorial2_Finetune_a_model_on_your_data.py\n--- a/tutorials/Tutorial2_Finetune_a_model_on_your_data.py\n+++ b/tutorials/Tutorial2_Finetune_a_model_on_your_data.py\n@@ -36,9 +36,9 @@\n #**Recommendation: Run training on a GPU. To do so change the `use_gpu` arguments below to `True`\n \n reader = FARMReader(model_name_or_path=\"distilbert-base-uncased-distilled-squad\", use_gpu=True)\n- train_data = \"data/squad20\"\n- # train_data = \"PATH/TO_YOUR/TRAIN_DATA\"\n- reader.train(data_dir=train_data, train_filename=\"dev-v2.0.json\", use_gpu=True, n_epochs=1, save_dir=\"my_model\")\n+ data_dir = \"data/squad20\"\n+ # data_dir = \"PATH/TO_YOUR/TRAIN_DATA\"\n+ reader.train(data_dir=data_dir, train_filename=\"dev-v2.0.json\", use_gpu=True, n_epochs=1, save_dir=\"my_model\")\n \n # Saving the model happens automatically at the end of training into the `save_dir` you specified\n # However, you could also save a reader manually again via:\ndiff --git a/tutorials/Tutorial5_Evaluation.py b/tutorials/Tutorial5_Evaluation.py\n--- a/tutorials/Tutorial5_Evaluation.py\n+++ b/tutorials/Tutorial5_Evaluation.py\n@@ -93,7 +93,7 @@\n # Initialize Reader\n reader = FARMReader(\n model_name_or_path=\"deepset/roberta-base-squad2\",\n- top_k_per_candidate=4,\n+ top_k=4,\n return_no_answer=True\n )\n", "issue": "change naming of vars in tutorials\nTutorial 2\nChange name of \u201ctrain_data\u201d variable to \u201cdata_dir\u201d\n\nTutorial 5 in python and colab\nChange parameter top_k_per_candidate into top_k\n\n", "before_files": [{"content": "from haystack.document_store.elasticsearch import ElasticsearchDocumentStore\nfrom haystack.preprocessor.utils import fetch_archive_from_http\nfrom haystack.retriever.sparse import ElasticsearchRetriever\nfrom haystack.retriever.dense import DensePassageRetriever\nfrom haystack.eval import EvalAnswers, EvalDocuments\nfrom haystack.reader.farm import FARMReader\nfrom haystack.preprocessor import PreProcessor\nfrom haystack.utils import launch_es\nfrom haystack import Pipeline\n\nfrom farm.utils import initialize_device_settings\n\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\ndef tutorial5_evaluation():\n\n ##############################################\n # Settings\n ##############################################\n # Choose from Evaluation style from ['retriever_closed', 'reader_closed', 'retriever_reader_open']\n # 'retriever_closed' - evaluates only the retriever, based on whether the gold_label document is retrieved.\n # 'reader_closed' - evaluates only the reader in a closed domain fashion i.e. the reader is given one query\n # and one document and metrics are calculated on whether the right position in this text is selected by\n # the model as the answer span (i.e. SQuAD style)\n # 'retriever_reader_open' - evaluates retriever and reader in open domain fashion i.e. a document is considered\n # correctly retrieved if it contains the answer string within it. 
The reader is evaluated based purely on the\n # predicted string, regardless of which document this came from and the position of the extracted span.\n style = \"retriever_reader_open\"\n\n # make sure these indices do not collide with existing ones, the indices will be wiped clean before data is inserted\n doc_index = \"tutorial5_docs\"\n label_index = \"tutorial5_labels\"\n\n ##############################################\n # Code\n ##############################################\n launch_es()\n device, n_gpu = initialize_device_settings(use_cuda=True)\n\n # Download evaluation data, which is a subset of Natural Questions development set containing 50 documents\n doc_dir = \"../data/nq\"\n s3_url = \"https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/nq_dev_subset_v2.json.zip\"\n fetch_archive_from_http(url=s3_url, output_dir=doc_dir)\n\n # Connect to Elasticsearch\n document_store = ElasticsearchDocumentStore(\n host=\"localhost\", username=\"\", password=\"\", index=\"document\",\n create_index=False, embedding_field=\"emb\",\n embedding_dim=768, excluded_meta_data=[\"emb\"]\n )\n\n # Add evaluation data to Elasticsearch document store\n # We first delete the custom tutorial indices to not have duplicate elements\n # and also split our documents into shorter passages using the PreProcessor\n preprocessor = PreProcessor(\n split_by=\"word\",\n split_length=500,\n split_overlap=0,\n split_respect_sentence_boundary=False,\n clean_empty_lines=False,\n clean_whitespace=False\n )\n document_store.delete_all_documents(index=doc_index)\n document_store.delete_all_documents(index=label_index)\n document_store.add_eval_data(\n filename=\"../data/nq/nq_dev_subset_v2.json\",\n doc_index=doc_index,\n label_index=label_index,\n preprocessor=preprocessor\n )\n\n # Let's prepare the labels that we need for the retriever and the reader\n labels = document_store.get_all_labels_aggregated(index=label_index)\n\n # Initialize Retriever\n retriever = ElasticsearchRetriever(document_store=document_store)\n\n # Alternative: Evaluate DensePassageRetriever\n # Note, that DPR works best when you index short passages < 512 tokens as only those tokens will be used for the embedding.\n # Here, for nq_dev_subset_v2.json we have avg. 
num of tokens = 5220(!).\n # DPR still outperforms Elastic's BM25 by a small margin here.\n # retriever = DensePassageRetriever(document_store=document_store,\n # query_embedding_model=\"facebook/dpr-question_encoder-single-nq-base\",\n # passage_embedding_model=\"facebook/dpr-ctx_encoder-single-nq-base\",\n # use_gpu=True,\n # embed_title=True,\n # remove_sep_tok_from_untitled_passages=True)\n # document_store.update_embeddings(retriever, index=doc_index)\n\n # Initialize Reader\n reader = FARMReader(\n model_name_or_path=\"deepset/roberta-base-squad2\",\n top_k_per_candidate=4,\n return_no_answer=True\n )\n\n # Here we initialize the nodes that perform evaluation\n eval_retriever = EvalDocuments()\n eval_reader = EvalAnswers()\n\n\n ## Evaluate Retriever on its own in closed domain fashion\n if style == \"retriever_closed\":\n retriever_eval_results = retriever.eval(top_k=10, label_index=label_index, doc_index=doc_index)\n ## Retriever Recall is the proportion of questions for which the correct document containing the answer is\n ## among the correct documents\n print(\"Retriever Recall:\", retriever_eval_results[\"recall\"])\n ## Retriever Mean Avg Precision rewards retrievers that give relevant documents a higher rank\n print(\"Retriever Mean Avg Precision:\", retriever_eval_results[\"map\"])\n\n # Evaluate Reader on its own in closed domain fashion (i.e. SQuAD style)\n elif style == \"reader_closed\":\n reader_eval_results = reader.eval(document_store=document_store, device=device, label_index=label_index, doc_index=doc_index)\n # Evaluation of Reader can also be done directly on a SQuAD-formatted file without passing the data to Elasticsearch\n #reader_eval_results = reader.eval_on_file(\"../data/nq\", \"nq_dev_subset_v2.json\", device=device)\n\n ## Reader Top-N-Accuracy is the proportion of predicted answers that match with their corresponding correct answer\n print(\"Reader Top-N-Accuracy:\", reader_eval_results[\"top_n_accuracy\"])\n ## Reader Exact Match is the proportion of questions where the predicted answer is exactly the same as the correct answer\n print(\"Reader Exact Match:\", reader_eval_results[\"EM\"])\n ## Reader F1-Score is the average overlap between the predicted answers and the correct answers\n print(\"Reader F1-Score:\", reader_eval_results[\"f1\"])\n\n\n # Evaluate combination of Reader and Retriever in open domain fashion\n elif style == \"retriever_reader_open\":\n\n # Here is the pipeline definition\n p = Pipeline()\n p.add_node(component=retriever, name=\"ESRetriever\", inputs=[\"Query\"])\n p.add_node(component=eval_retriever, name=\"EvalDocuments\", inputs=[\"ESRetriever\"])\n p.add_node(component=reader, name=\"QAReader\", inputs=[\"EvalDocuments\"])\n p.add_node(component=eval_reader, name=\"EvalAnswers\", inputs=[\"QAReader\"])\n results = []\n\n for l in labels:\n res = p.run(\n query=l.question,\n top_k_retriever=10,\n labels=l,\n top_k_reader=10,\n index=doc_index,\n )\n results.append(res)\n\n eval_retriever.print()\n print()\n retriever.print_time()\n print()\n eval_reader.print(mode=\"reader\")\n print()\n reader.print_time()\n print()\n eval_reader.print(mode=\"pipeline\")\n else:\n raise ValueError(f'style={style} is not a valid option. 
Choose from retriever_closed, reader_closed, retriever_reader_open')\n\n\nif __name__ == \"__main__\":\n tutorial5_evaluation()\n\n# This Haystack script was made with love by deepset in Berlin, Germany\n# Haystack: https://github.com/deepset-ai/haystack\n# deepset: https://deepset.ai/", "path": "tutorials/Tutorial5_Evaluation.py"}, {"content": "# # Fine-tuning a model on your own data\n# \n# For many use cases it is sufficient to just use one of the existing public models that were trained on SQuAD or\n# other public QA datasets (e.g. Natural Questions).\n# However, if you have domain-specific questions, fine-tuning your model on custom examples will very likely boost\n# your performance. While this varies by domain, we saw that ~ 2000 examples can easily increase performance by +5-20%.\n# \n# This tutorial shows you how to fine-tune a pretrained model on your own dataset.\n\nfrom haystack.reader.farm import FARMReader\n\n\ndef tutorial2_finetune_a_model_on_your_data():\n # ## Create Training Data\n #\n # There are two ways to generate training data\n #\n # 1. **Annotation**: You can use the annotation tool(https://github.com/deepset-ai/haystack#labeling-tool) to label\n # your data, i.e. highlighting answers to your questions in a document. The tool supports structuring\n # your workflow with organizations, projects, and users. The labels can be exported in SQuAD format\n # that is compatible for training with Haystack.\n #\n # 2. **Feedback**: For production systems, you can collect training data from direct user feedback via Haystack's\n # REST API interface. This includes a customizable user feedback API for providing feedback on the\n # answer returned by the API. The API provides a feedback export endpoint to obtain the feedback data\n # for fine-tuning your model further.\n #\n #\n # ## Fine-tune your model\n #\n # Once you have collected training data, you can fine-tune your base models.\n # We initialize a reader as a base model and fine-tune it on our own custom dataset (should be in SQuAD-like format).\n # We recommend using a base model that was trained on SQuAD or a similar QA dataset before to benefit from Transfer\n # Learning effects.\n\n #**Recommendation: Run training on a GPU. 
To do so change the `use_gpu` arguments below to `True`\n\n reader = FARMReader(model_name_or_path=\"distilbert-base-uncased-distilled-squad\", use_gpu=True)\n train_data = \"data/squad20\"\n # train_data = \"PATH/TO_YOUR/TRAIN_DATA\"\n reader.train(data_dir=train_data, train_filename=\"dev-v2.0.json\", use_gpu=True, n_epochs=1, save_dir=\"my_model\")\n\n # Saving the model happens automatically at the end of training into the `save_dir` you specified\n # However, you could also save a reader manually again via:\n reader.save(directory=\"my_model\")\n\n # If you want to load it at a later point, just do:\n new_reader = FARMReader(model_name_or_path=\"my_model\")\n\n\nif __name__ == \"__main__\":\n tutorial2_finetune_a_model_on_your_data()\n\n# This Haystack script was made with love by deepset in Berlin, Germany\n# Haystack: https://github.com/deepset-ai/haystack\n# deepset: https://deepset.ai/", "path": "tutorials/Tutorial2_Finetune_a_model_on_your_data.py"}], "after_files": [{"content": "from haystack.document_store.elasticsearch import ElasticsearchDocumentStore\nfrom haystack.preprocessor.utils import fetch_archive_from_http\nfrom haystack.retriever.sparse import ElasticsearchRetriever\nfrom haystack.retriever.dense import DensePassageRetriever\nfrom haystack.eval import EvalAnswers, EvalDocuments\nfrom haystack.reader.farm import FARMReader\nfrom haystack.preprocessor import PreProcessor\nfrom haystack.utils import launch_es\nfrom haystack import Pipeline\n\nfrom farm.utils import initialize_device_settings\n\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\ndef tutorial5_evaluation():\n\n ##############################################\n # Settings\n ##############################################\n # Choose from Evaluation style from ['retriever_closed', 'reader_closed', 'retriever_reader_open']\n # 'retriever_closed' - evaluates only the retriever, based on whether the gold_label document is retrieved.\n # 'reader_closed' - evaluates only the reader in a closed domain fashion i.e. the reader is given one query\n # and one document and metrics are calculated on whether the right position in this text is selected by\n # the model as the answer span (i.e. SQuAD style)\n # 'retriever_reader_open' - evaluates retriever and reader in open domain fashion i.e. a document is considered\n # correctly retrieved if it contains the answer string within it. 
The reader is evaluated based purely on the\n # predicted string, regardless of which document this came from and the position of the extracted span.\n style = \"retriever_reader_open\"\n\n # make sure these indices do not collide with existing ones, the indices will be wiped clean before data is inserted\n doc_index = \"tutorial5_docs\"\n label_index = \"tutorial5_labels\"\n\n ##############################################\n # Code\n ##############################################\n launch_es()\n device, n_gpu = initialize_device_settings(use_cuda=True)\n\n # Download evaluation data, which is a subset of Natural Questions development set containing 50 documents\n doc_dir = \"../data/nq\"\n s3_url = \"https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/nq_dev_subset_v2.json.zip\"\n fetch_archive_from_http(url=s3_url, output_dir=doc_dir)\n\n # Connect to Elasticsearch\n document_store = ElasticsearchDocumentStore(\n host=\"localhost\", username=\"\", password=\"\", index=\"document\",\n create_index=False, embedding_field=\"emb\",\n embedding_dim=768, excluded_meta_data=[\"emb\"]\n )\n\n # Add evaluation data to Elasticsearch document store\n # We first delete the custom tutorial indices to not have duplicate elements\n # and also split our documents into shorter passages using the PreProcessor\n preprocessor = PreProcessor(\n split_by=\"word\",\n split_length=500,\n split_overlap=0,\n split_respect_sentence_boundary=False,\n clean_empty_lines=False,\n clean_whitespace=False\n )\n document_store.delete_all_documents(index=doc_index)\n document_store.delete_all_documents(index=label_index)\n document_store.add_eval_data(\n filename=\"../data/nq/nq_dev_subset_v2.json\",\n doc_index=doc_index,\n label_index=label_index,\n preprocessor=preprocessor\n )\n\n # Let's prepare the labels that we need for the retriever and the reader\n labels = document_store.get_all_labels_aggregated(index=label_index)\n\n # Initialize Retriever\n retriever = ElasticsearchRetriever(document_store=document_store)\n\n # Alternative: Evaluate DensePassageRetriever\n # Note, that DPR works best when you index short passages < 512 tokens as only those tokens will be used for the embedding.\n # Here, for nq_dev_subset_v2.json we have avg. 
num of tokens = 5220(!).\n # DPR still outperforms Elastic's BM25 by a small margin here.\n # retriever = DensePassageRetriever(document_store=document_store,\n # query_embedding_model=\"facebook/dpr-question_encoder-single-nq-base\",\n # passage_embedding_model=\"facebook/dpr-ctx_encoder-single-nq-base\",\n # use_gpu=True,\n # embed_title=True,\n # remove_sep_tok_from_untitled_passages=True)\n # document_store.update_embeddings(retriever, index=doc_index)\n\n # Initialize Reader\n reader = FARMReader(\n model_name_or_path=\"deepset/roberta-base-squad2\",\n top_k=4,\n return_no_answer=True\n )\n\n # Here we initialize the nodes that perform evaluation\n eval_retriever = EvalDocuments()\n eval_reader = EvalAnswers()\n\n\n ## Evaluate Retriever on its own in closed domain fashion\n if style == \"retriever_closed\":\n retriever_eval_results = retriever.eval(top_k=10, label_index=label_index, doc_index=doc_index)\n ## Retriever Recall is the proportion of questions for which the correct document containing the answer is\n ## among the correct documents\n print(\"Retriever Recall:\", retriever_eval_results[\"recall\"])\n ## Retriever Mean Avg Precision rewards retrievers that give relevant documents a higher rank\n print(\"Retriever Mean Avg Precision:\", retriever_eval_results[\"map\"])\n\n # Evaluate Reader on its own in closed domain fashion (i.e. SQuAD style)\n elif style == \"reader_closed\":\n reader_eval_results = reader.eval(document_store=document_store, device=device, label_index=label_index, doc_index=doc_index)\n # Evaluation of Reader can also be done directly on a SQuAD-formatted file without passing the data to Elasticsearch\n #reader_eval_results = reader.eval_on_file(\"../data/nq\", \"nq_dev_subset_v2.json\", device=device)\n\n ## Reader Top-N-Accuracy is the proportion of predicted answers that match with their corresponding correct answer\n print(\"Reader Top-N-Accuracy:\", reader_eval_results[\"top_n_accuracy\"])\n ## Reader Exact Match is the proportion of questions where the predicted answer is exactly the same as the correct answer\n print(\"Reader Exact Match:\", reader_eval_results[\"EM\"])\n ## Reader F1-Score is the average overlap between the predicted answers and the correct answers\n print(\"Reader F1-Score:\", reader_eval_results[\"f1\"])\n\n\n # Evaluate combination of Reader and Retriever in open domain fashion\n elif style == \"retriever_reader_open\":\n\n # Here is the pipeline definition\n p = Pipeline()\n p.add_node(component=retriever, name=\"ESRetriever\", inputs=[\"Query\"])\n p.add_node(component=eval_retriever, name=\"EvalDocuments\", inputs=[\"ESRetriever\"])\n p.add_node(component=reader, name=\"QAReader\", inputs=[\"EvalDocuments\"])\n p.add_node(component=eval_reader, name=\"EvalAnswers\", inputs=[\"QAReader\"])\n results = []\n\n for l in labels:\n res = p.run(\n query=l.question,\n top_k_retriever=10,\n labels=l,\n top_k_reader=10,\n index=doc_index,\n )\n results.append(res)\n\n eval_retriever.print()\n print()\n retriever.print_time()\n print()\n eval_reader.print(mode=\"reader\")\n print()\n reader.print_time()\n print()\n eval_reader.print(mode=\"pipeline\")\n else:\n raise ValueError(f'style={style} is not a valid option. 
Choose from retriever_closed, reader_closed, retriever_reader_open')\n\n\nif __name__ == \"__main__\":\n tutorial5_evaluation()\n\n# This Haystack script was made with love by deepset in Berlin, Germany\n# Haystack: https://github.com/deepset-ai/haystack\n# deepset: https://deepset.ai/", "path": "tutorials/Tutorial5_Evaluation.py"}, {"content": "# # Fine-tuning a model on your own data\n# \n# For many use cases it is sufficient to just use one of the existing public models that were trained on SQuAD or\n# other public QA datasets (e.g. Natural Questions).\n# However, if you have domain-specific questions, fine-tuning your model on custom examples will very likely boost\n# your performance. While this varies by domain, we saw that ~ 2000 examples can easily increase performance by +5-20%.\n# \n# This tutorial shows you how to fine-tune a pretrained model on your own dataset.\n\nfrom haystack.reader.farm import FARMReader\n\n\ndef tutorial2_finetune_a_model_on_your_data():\n # ## Create Training Data\n #\n # There are two ways to generate training data\n #\n # 1. **Annotation**: You can use the annotation tool(https://github.com/deepset-ai/haystack#labeling-tool) to label\n # your data, i.e. highlighting answers to your questions in a document. The tool supports structuring\n # your workflow with organizations, projects, and users. The labels can be exported in SQuAD format\n # that is compatible for training with Haystack.\n #\n # 2. **Feedback**: For production systems, you can collect training data from direct user feedback via Haystack's\n # REST API interface. This includes a customizable user feedback API for providing feedback on the\n # answer returned by the API. The API provides a feedback export endpoint to obtain the feedback data\n # for fine-tuning your model further.\n #\n #\n # ## Fine-tune your model\n #\n # Once you have collected training data, you can fine-tune your base models.\n # We initialize a reader as a base model and fine-tune it on our own custom dataset (should be in SQuAD-like format).\n # We recommend using a base model that was trained on SQuAD or a similar QA dataset before to benefit from Transfer\n # Learning effects.\n\n #**Recommendation: Run training on a GPU. To do so change the `use_gpu` arguments below to `True`\n\n reader = FARMReader(model_name_or_path=\"distilbert-base-uncased-distilled-squad\", use_gpu=True)\n data_dir = \"data/squad20\"\n # data_dir = \"PATH/TO_YOUR/TRAIN_DATA\"\n reader.train(data_dir=data_dir, train_filename=\"dev-v2.0.json\", use_gpu=True, n_epochs=1, save_dir=\"my_model\")\n\n # Saving the model happens automatically at the end of training into the `save_dir` you specified\n # However, you could also save a reader manually again via:\n reader.save(directory=\"my_model\")\n\n # If you want to load it at a later point, just do:\n new_reader = FARMReader(model_name_or_path=\"my_model\")\n\n\nif __name__ == \"__main__\":\n tutorial2_finetune_a_model_on_your_data()\n\n# This Haystack script was made with love by deepset in Berlin, Germany\n# Haystack: https://github.com/deepset-ai/haystack\n# deepset: https://deepset.ai/", "path": "tutorials/Tutorial2_Finetune_a_model_on_your_data.py"}]}
| 3,196 | 408 |