problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_12365 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-5193 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't enable per-domain settings for https://example.com./
When running `qutebrowser --temp-basedir -s content.javascript.enabled false 'https://travis-ci.com./'` and pressing `tsh`, JavaScript is still not allowed for Travis CI.
This was introduced in 8b822e40e3243f9679244cfcdf0e7abd1de0289f / #4707 - cc @jgkamat
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutebrowser/config/configutils.py`
Content:
```
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2018-2020 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20
21 """Utilities and data structures used by various config code."""
22
23
24 import typing
25 import collections
26 import itertools
27 import operator
28
29 from PyQt5.QtCore import QUrl
30
31 from qutebrowser.utils import utils, urlmatch, usertypes
32 from qutebrowser.config import configexc
33
34 if typing.TYPE_CHECKING:
35 from qutebrowser.config import configdata
36
37
38 def _widened_hostnames(hostname: str) -> typing.Iterable[str]:
39 """A generator for widening string hostnames.
40
41 Ex: a.c.foo -> [a.c.foo, c.foo, foo]"""
42 while hostname:
43 yield hostname
44 hostname = hostname.partition(".")[-1]
45
46
47 class ScopedValue:
48
49 """A configuration value which is valid for a UrlPattern.
50
51 Attributes:
52 value: The value itself.
53 pattern: The UrlPattern for the value, or None for global values.
54 hide_userconfig: Hide this customization from config.dump_userconfig().
55 """
56
57 id_gen = itertools.count(0)
58
59 def __init__(self, value: typing.Any,
60 pattern: typing.Optional[urlmatch.UrlPattern],
61 hide_userconfig: bool = False) -> None:
62 self.value = value
63 self.pattern = pattern
64 self.hide_userconfig = hide_userconfig
65 self.pattern_id = next(ScopedValue.id_gen)
66
67 def __repr__(self) -> str:
68 return utils.get_repr(self, value=self.value, pattern=self.pattern,
69 hide_userconfig=self.hide_userconfig,
70 pattern_id=self.pattern_id)
71
72
73 class Values:
74
75 """A collection of values for a single setting.
76
77 Currently, we store patterns in two dictionaries for different types of
78 lookups. A ordered, pattern keyed map, and an unordered, domain keyed map.
79
80 This means that finding a value based on a pattern is fast, and matching
81 url patterns is fast if all domains are unique.
82
83 If there are many patterns under the domain (or subdomain) that is being
84 evaluated, or any patterns that cannot have a concrete domain found, this
85 will become slow again.
86
87 Attributes:
88 opt: The Option being customized.
89 _vmap: A mapping of all pattern objects to ScopedValues.
90 _domain_map: A mapping from hostnames to all associated ScopedValues.
91 """
92
93 _VmapKeyType = typing.Optional[urlmatch.UrlPattern]
94
95 def __init__(self,
96 opt: 'configdata.Option',
97 values: typing.Sequence[ScopedValue] = ()) -> None:
98 self.opt = opt
99 self._vmap = collections.OrderedDict() \
100 # type: collections.OrderedDict[Values._VmapKeyType, ScopedValue]
101 # A map from domain parts to rules that fall under them.
102 self._domain_map = collections.defaultdict(set) \
103 # type: typing.Dict[typing.Optional[str], typing.Set[ScopedValue]]
104
105 for scoped in values:
106 self._add_scoped(scoped)
107
108 def __repr__(self) -> str:
109 return utils.get_repr(self, opt=self.opt,
110 values=list(self._vmap.values()),
111 constructor=True)
112
113 def __str__(self) -> str:
114 """Get the values as human-readable string."""
115 lines = self.dump(include_hidden=True)
116 if lines:
117 return '\n'.join(lines)
118 return '{}: <unchanged>'.format(self.opt.name)
119
120 def dump(self, include_hidden: bool = False) -> typing.Sequence[str]:
121 """Dump all customizations for this value.
122
123 Arguments:
124 include_hidden: Also show values with hide_userconfig=True.
125 """
126 lines = []
127
128 for scoped in self._vmap.values():
129 if scoped.hide_userconfig and not include_hidden:
130 continue
131
132 str_value = self.opt.typ.to_str(scoped.value)
133 if scoped.pattern is None:
134 lines.append('{} = {}'.format(self.opt.name, str_value))
135 else:
136 lines.append('{}: {} = {}'.format(
137 scoped.pattern, self.opt.name, str_value))
138
139 return lines
140
141 def __iter__(self) -> typing.Iterator['ScopedValue']:
142 """Yield ScopedValue elements.
143
144 This yields in "normal" order, i.e. global and then first-set settings
145 first.
146 """
147 yield from self._vmap.values()
148
149 def __bool__(self) -> bool:
150 """Check whether this value is customized."""
151 return bool(self._vmap)
152
153 def _check_pattern_support(
154 self, arg: typing.Optional[urlmatch.UrlPattern]) -> None:
155 """Make sure patterns are supported if one was given."""
156 if arg is not None and not self.opt.supports_pattern:
157 raise configexc.NoPatternError(self.opt.name)
158
159 def add(self, value: typing.Any,
160 pattern: urlmatch.UrlPattern = None, *,
161 hide_userconfig: bool = False) -> None:
162 """Add a value with the given pattern to the list of values.
163
164 If hide_userconfig is given, the value is hidden from
165 config.dump_userconfig() and thus qute://configdiff.
166 """
167 scoped = ScopedValue(value, pattern, hide_userconfig=hide_userconfig)
168 self._add_scoped(scoped)
169
170 def _add_scoped(self, scoped: ScopedValue) -> None:
171 """Add an existing ScopedValue object."""
172 self._check_pattern_support(scoped.pattern)
173 self.remove(scoped.pattern)
174
175 self._vmap[scoped.pattern] = scoped
176
177 host = scoped.pattern.host if scoped.pattern else None
178 self._domain_map[host].add(scoped)
179
180 def remove(self, pattern: urlmatch.UrlPattern = None) -> bool:
181 """Remove the value with the given pattern.
182
183 If a matching pattern was removed, True is returned.
184 If no matching pattern was found, False is returned.
185 """
186 self._check_pattern_support(pattern)
187 if pattern not in self._vmap:
188 return False
189
190 host = pattern.host if pattern else None
191 scoped_value = self._vmap[pattern]
192 # If we error here, that means domain_map and vmap are out of sync,
193 # report a bug!
194 assert host in self._domain_map
195 self._domain_map[host].remove(scoped_value)
196 del self._vmap[pattern]
197 return True
198
199 def clear(self) -> None:
200 """Clear all customization for this value."""
201 self._vmap.clear()
202 self._domain_map.clear()
203
204 def _get_fallback(self, fallback: bool) -> typing.Any:
205 """Get the fallback global/default value."""
206 if None in self._vmap:
207 return self._vmap[None].value
208
209 if fallback:
210 return self.opt.default
211 else:
212 return usertypes.UNSET
213
214 def get_for_url(self, url: QUrl = None, *,
215 fallback: bool = True) -> typing.Any:
216 """Get a config value, falling back when needed.
217
218 This first tries to find a value matching the URL (if given).
219 If there's no match:
220 With fallback=True, the global/default setting is returned.
221 With fallback=False, usertypes.UNSET is returned.
222 """
223 self._check_pattern_support(url)
224 if url is None:
225 return self._get_fallback(fallback)
226
227 candidates = [] # type: typing.List[ScopedValue]
228 widened_hosts = _widened_hostnames(url.host())
229 # We must check the 'None' key as well, in case any patterns that
230 # did not have a domain match.
231 for host in itertools.chain(widened_hosts, [None]):
232 host_set = self._domain_map.get(host, ())
233 for scoped in host_set:
234 if scoped.pattern is not None and scoped.pattern.matches(url):
235 candidates.append(scoped)
236
237 if candidates:
238 scoped = max(candidates, key=operator.attrgetter('pattern_id'))
239 return scoped.value
240
241 if not fallback:
242 return usertypes.UNSET
243
244 return self._get_fallback(fallback)
245
246 def get_for_pattern(self,
247 pattern: typing.Optional[urlmatch.UrlPattern], *,
248 fallback: bool = True) -> typing.Any:
249 """Get a value only if it's been overridden for the given pattern.
250
251 This is useful when showing values to the user.
252
253 If there's no match:
254 With fallback=True, the global/default setting is returned.
255 With fallback=False, usertypes.UNSET is returned.
256 """
257 self._check_pattern_support(pattern)
258 if pattern is not None:
259 if pattern in self._vmap:
260 return self._vmap[pattern].value
261
262 if not fallback:
263 return usertypes.UNSET
264
265 return self._get_fallback(fallback)
266
267
268 class FontFamilies:
269
270 """A list of font family names."""
271
272 def __init__(self, families: typing.Sequence[str]) -> None:
273 self._families = families
274 self.family = families[0] if families else None
275
276 def __iter__(self) -> typing.Iterator[str]:
277 yield from self._families
278
279 def __repr__(self) -> str:
280 return utils.get_repr(self, families=self._families, constructor=True)
281
282 def __str__(self) -> str:
283 return self.to_str()
284
285 def _quoted_families(self) -> typing.Iterator[str]:
286 for f in self._families:
287 needs_quoting = any(c in f for c in ', ')
288 yield '"{}"'.format(f) if needs_quoting else f
289
290 def to_str(self, *, quote: bool = True) -> str:
291 families = self._quoted_families() if quote else self._families
292 return ', '.join(families)
293
294 @classmethod
295 def from_str(cls, family_str: str) -> 'FontFamilies':
296 """Parse a CSS-like string of font families."""
297 families = []
298
299 for part in family_str.split(','):
300 part = part.strip()
301
302 # The Qt CSS parser handles " and ' before passing the string to
303 # QFont.setFamily.
304 if ((part.startswith("'") and part.endswith("'")) or
305 (part.startswith('"') and part.endswith('"'))):
306 part = part[1:-1]
307
308 if not part:
309 continue
310
311 families.append(part)
312
313 return cls(families)
314
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qutebrowser/config/configutils.py b/qutebrowser/config/configutils.py
--- a/qutebrowser/config/configutils.py
+++ b/qutebrowser/config/configutils.py
@@ -225,7 +225,9 @@
return self._get_fallback(fallback)
candidates = [] # type: typing.List[ScopedValue]
- widened_hosts = _widened_hostnames(url.host())
+ # Urls trailing with '.' are equivalent to non-trailing types.
+ # urlutils strips them, so in order to match we will need to as well.
+ widened_hosts = _widened_hostnames(url.host().rstrip('.'))
# We must check the 'None' key as well, in case any patterns that
# did not have a domain match.
for host in itertools.chain(widened_hosts, [None]):
|
{"golden_diff": "diff --git a/qutebrowser/config/configutils.py b/qutebrowser/config/configutils.py\n--- a/qutebrowser/config/configutils.py\n+++ b/qutebrowser/config/configutils.py\n@@ -225,7 +225,9 @@\n return self._get_fallback(fallback)\n \n candidates = [] # type: typing.List[ScopedValue]\n- widened_hosts = _widened_hostnames(url.host())\n+ # Urls trailing with '.' are equivalent to non-trailing types.\n+ # urlutils strips them, so in order to match we will need to as well.\n+ widened_hosts = _widened_hostnames(url.host().rstrip('.'))\n # We must check the 'None' key as well, in case any patterns that\n # did not have a domain match.\n for host in itertools.chain(widened_hosts, [None]):\n", "issue": "Can't enable per-domain settings for https://example.com./\nWhen running `qutebrowser --temp-basedir -s content.javascript.enabled false 'https://travis-ci.com./'` and pressing `tsh`, JavaScript is still not allowed for Travis CI.\r\n\r\nThis was introduced in 8b822e40e3243f9679244cfcdf0e7abd1de0289f / #4707 - cc @jgkamat \n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2018-2020 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\n\"\"\"Utilities and data structures used by various config code.\"\"\"\n\n\nimport typing\nimport collections\nimport itertools\nimport operator\n\nfrom PyQt5.QtCore import QUrl\n\nfrom qutebrowser.utils import utils, urlmatch, usertypes\nfrom qutebrowser.config import configexc\n\nif typing.TYPE_CHECKING:\n from qutebrowser.config import configdata\n\n\ndef _widened_hostnames(hostname: str) -> typing.Iterable[str]:\n \"\"\"A generator for widening string hostnames.\n\n Ex: a.c.foo -> [a.c.foo, c.foo, foo]\"\"\"\n while hostname:\n yield hostname\n hostname = hostname.partition(\".\")[-1]\n\n\nclass ScopedValue:\n\n \"\"\"A configuration value which is valid for a UrlPattern.\n\n Attributes:\n value: The value itself.\n pattern: The UrlPattern for the value, or None for global values.\n hide_userconfig: Hide this customization from config.dump_userconfig().\n \"\"\"\n\n id_gen = itertools.count(0)\n\n def __init__(self, value: typing.Any,\n pattern: typing.Optional[urlmatch.UrlPattern],\n hide_userconfig: bool = False) -> None:\n self.value = value\n self.pattern = pattern\n self.hide_userconfig = hide_userconfig\n self.pattern_id = next(ScopedValue.id_gen)\n\n def __repr__(self) -> str:\n return utils.get_repr(self, value=self.value, pattern=self.pattern,\n hide_userconfig=self.hide_userconfig,\n pattern_id=self.pattern_id)\n\n\nclass Values:\n\n \"\"\"A collection of values for a single setting.\n\n Currently, we store patterns in two dictionaries for different types of\n lookups. 
A ordered, pattern keyed map, and an unordered, domain keyed map.\n\n This means that finding a value based on a pattern is fast, and matching\n url patterns is fast if all domains are unique.\n\n If there are many patterns under the domain (or subdomain) that is being\n evaluated, or any patterns that cannot have a concrete domain found, this\n will become slow again.\n\n Attributes:\n opt: The Option being customized.\n _vmap: A mapping of all pattern objects to ScopedValues.\n _domain_map: A mapping from hostnames to all associated ScopedValues.\n \"\"\"\n\n _VmapKeyType = typing.Optional[urlmatch.UrlPattern]\n\n def __init__(self,\n opt: 'configdata.Option',\n values: typing.Sequence[ScopedValue] = ()) -> None:\n self.opt = opt\n self._vmap = collections.OrderedDict() \\\n # type: collections.OrderedDict[Values._VmapKeyType, ScopedValue]\n # A map from domain parts to rules that fall under them.\n self._domain_map = collections.defaultdict(set) \\\n # type: typing.Dict[typing.Optional[str], typing.Set[ScopedValue]]\n\n for scoped in values:\n self._add_scoped(scoped)\n\n def __repr__(self) -> str:\n return utils.get_repr(self, opt=self.opt,\n values=list(self._vmap.values()),\n constructor=True)\n\n def __str__(self) -> str:\n \"\"\"Get the values as human-readable string.\"\"\"\n lines = self.dump(include_hidden=True)\n if lines:\n return '\\n'.join(lines)\n return '{}: <unchanged>'.format(self.opt.name)\n\n def dump(self, include_hidden: bool = False) -> typing.Sequence[str]:\n \"\"\"Dump all customizations for this value.\n\n Arguments:\n include_hidden: Also show values with hide_userconfig=True.\n \"\"\"\n lines = []\n\n for scoped in self._vmap.values():\n if scoped.hide_userconfig and not include_hidden:\n continue\n\n str_value = self.opt.typ.to_str(scoped.value)\n if scoped.pattern is None:\n lines.append('{} = {}'.format(self.opt.name, str_value))\n else:\n lines.append('{}: {} = {}'.format(\n scoped.pattern, self.opt.name, str_value))\n\n return lines\n\n def __iter__(self) -> typing.Iterator['ScopedValue']:\n \"\"\"Yield ScopedValue elements.\n\n This yields in \"normal\" order, i.e. 
global and then first-set settings\n first.\n \"\"\"\n yield from self._vmap.values()\n\n def __bool__(self) -> bool:\n \"\"\"Check whether this value is customized.\"\"\"\n return bool(self._vmap)\n\n def _check_pattern_support(\n self, arg: typing.Optional[urlmatch.UrlPattern]) -> None:\n \"\"\"Make sure patterns are supported if one was given.\"\"\"\n if arg is not None and not self.opt.supports_pattern:\n raise configexc.NoPatternError(self.opt.name)\n\n def add(self, value: typing.Any,\n pattern: urlmatch.UrlPattern = None, *,\n hide_userconfig: bool = False) -> None:\n \"\"\"Add a value with the given pattern to the list of values.\n\n If hide_userconfig is given, the value is hidden from\n config.dump_userconfig() and thus qute://configdiff.\n \"\"\"\n scoped = ScopedValue(value, pattern, hide_userconfig=hide_userconfig)\n self._add_scoped(scoped)\n\n def _add_scoped(self, scoped: ScopedValue) -> None:\n \"\"\"Add an existing ScopedValue object.\"\"\"\n self._check_pattern_support(scoped.pattern)\n self.remove(scoped.pattern)\n\n self._vmap[scoped.pattern] = scoped\n\n host = scoped.pattern.host if scoped.pattern else None\n self._domain_map[host].add(scoped)\n\n def remove(self, pattern: urlmatch.UrlPattern = None) -> bool:\n \"\"\"Remove the value with the given pattern.\n\n If a matching pattern was removed, True is returned.\n If no matching pattern was found, False is returned.\n \"\"\"\n self._check_pattern_support(pattern)\n if pattern not in self._vmap:\n return False\n\n host = pattern.host if pattern else None\n scoped_value = self._vmap[pattern]\n # If we error here, that means domain_map and vmap are out of sync,\n # report a bug!\n assert host in self._domain_map\n self._domain_map[host].remove(scoped_value)\n del self._vmap[pattern]\n return True\n\n def clear(self) -> None:\n \"\"\"Clear all customization for this value.\"\"\"\n self._vmap.clear()\n self._domain_map.clear()\n\n def _get_fallback(self, fallback: bool) -> typing.Any:\n \"\"\"Get the fallback global/default value.\"\"\"\n if None in self._vmap:\n return self._vmap[None].value\n\n if fallback:\n return self.opt.default\n else:\n return usertypes.UNSET\n\n def get_for_url(self, url: QUrl = None, *,\n fallback: bool = True) -> typing.Any:\n \"\"\"Get a config value, falling back when needed.\n\n This first tries to find a value matching the URL (if given).\n If there's no match:\n With fallback=True, the global/default setting is returned.\n With fallback=False, usertypes.UNSET is returned.\n \"\"\"\n self._check_pattern_support(url)\n if url is None:\n return self._get_fallback(fallback)\n\n candidates = [] # type: typing.List[ScopedValue]\n widened_hosts = _widened_hostnames(url.host())\n # We must check the 'None' key as well, in case any patterns that\n # did not have a domain match.\n for host in itertools.chain(widened_hosts, [None]):\n host_set = self._domain_map.get(host, ())\n for scoped in host_set:\n if scoped.pattern is not None and scoped.pattern.matches(url):\n candidates.append(scoped)\n\n if candidates:\n scoped = max(candidates, key=operator.attrgetter('pattern_id'))\n return scoped.value\n\n if not fallback:\n return usertypes.UNSET\n\n return self._get_fallback(fallback)\n\n def get_for_pattern(self,\n pattern: typing.Optional[urlmatch.UrlPattern], *,\n fallback: bool = True) -> typing.Any:\n \"\"\"Get a value only if it's been overridden for the given pattern.\n\n This is useful when showing values to the user.\n\n If there's no match:\n With fallback=True, the global/default setting is 
returned.\n With fallback=False, usertypes.UNSET is returned.\n \"\"\"\n self._check_pattern_support(pattern)\n if pattern is not None:\n if pattern in self._vmap:\n return self._vmap[pattern].value\n\n if not fallback:\n return usertypes.UNSET\n\n return self._get_fallback(fallback)\n\n\nclass FontFamilies:\n\n \"\"\"A list of font family names.\"\"\"\n\n def __init__(self, families: typing.Sequence[str]) -> None:\n self._families = families\n self.family = families[0] if families else None\n\n def __iter__(self) -> typing.Iterator[str]:\n yield from self._families\n\n def __repr__(self) -> str:\n return utils.get_repr(self, families=self._families, constructor=True)\n\n def __str__(self) -> str:\n return self.to_str()\n\n def _quoted_families(self) -> typing.Iterator[str]:\n for f in self._families:\n needs_quoting = any(c in f for c in ', ')\n yield '\"{}\"'.format(f) if needs_quoting else f\n\n def to_str(self, *, quote: bool = True) -> str:\n families = self._quoted_families() if quote else self._families\n return ', '.join(families)\n\n @classmethod\n def from_str(cls, family_str: str) -> 'FontFamilies':\n \"\"\"Parse a CSS-like string of font families.\"\"\"\n families = []\n\n for part in family_str.split(','):\n part = part.strip()\n\n # The Qt CSS parser handles \" and ' before passing the string to\n # QFont.setFamily.\n if ((part.startswith(\"'\") and part.endswith(\"'\")) or\n (part.startswith('\"') and part.endswith('\"'))):\n part = part[1:-1]\n\n if not part:\n continue\n\n families.append(part)\n\n return cls(families)\n", "path": "qutebrowser/config/configutils.py"}], "after_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2018-2020 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\n\"\"\"Utilities and data structures used by various config code.\"\"\"\n\n\nimport typing\nimport collections\nimport itertools\nimport operator\n\nfrom PyQt5.QtCore import QUrl\n\nfrom qutebrowser.utils import utils, urlmatch, usertypes\nfrom qutebrowser.config import configexc\n\nif typing.TYPE_CHECKING:\n from qutebrowser.config import configdata\n\n\ndef _widened_hostnames(hostname: str) -> typing.Iterable[str]:\n \"\"\"A generator for widening string hostnames.\n\n Ex: a.c.foo -> [a.c.foo, c.foo, foo]\"\"\"\n while hostname:\n yield hostname\n hostname = hostname.partition(\".\")[-1]\n\n\nclass ScopedValue:\n\n \"\"\"A configuration value which is valid for a UrlPattern.\n\n Attributes:\n value: The value itself.\n pattern: The UrlPattern for the value, or None for global values.\n hide_userconfig: Hide this customization from config.dump_userconfig().\n \"\"\"\n\n id_gen = itertools.count(0)\n\n def __init__(self, value: typing.Any,\n pattern: typing.Optional[urlmatch.UrlPattern],\n hide_userconfig: bool = False) -> None:\n self.value = value\n self.pattern = pattern\n self.hide_userconfig = hide_userconfig\n self.pattern_id = next(ScopedValue.id_gen)\n\n def __repr__(self) -> str:\n return utils.get_repr(self, value=self.value, pattern=self.pattern,\n hide_userconfig=self.hide_userconfig,\n pattern_id=self.pattern_id)\n\n\nclass Values:\n\n \"\"\"A collection of values for a single setting.\n\n Currently, we store patterns in two dictionaries for different types of\n lookups. A ordered, pattern keyed map, and an unordered, domain keyed map.\n\n This means that finding a value based on a pattern is fast, and matching\n url patterns is fast if all domains are unique.\n\n If there are many patterns under the domain (or subdomain) that is being\n evaluated, or any patterns that cannot have a concrete domain found, this\n will become slow again.\n\n Attributes:\n opt: The Option being customized.\n _vmap: A mapping of all pattern objects to ScopedValues.\n _domain_map: A mapping from hostnames to all associated ScopedValues.\n \"\"\"\n\n _VmapKeyType = typing.Optional[urlmatch.UrlPattern]\n\n def __init__(self,\n opt: 'configdata.Option',\n values: typing.Sequence[ScopedValue] = ()) -> None:\n self.opt = opt\n self._vmap = collections.OrderedDict() \\\n # type: collections.OrderedDict[Values._VmapKeyType, ScopedValue]\n # A map from domain parts to rules that fall under them.\n self._domain_map = collections.defaultdict(set) \\\n # type: typing.Dict[typing.Optional[str], typing.Set[ScopedValue]]\n\n for scoped in values:\n self._add_scoped(scoped)\n\n def __repr__(self) -> str:\n return utils.get_repr(self, opt=self.opt,\n values=list(self._vmap.values()),\n constructor=True)\n\n def __str__(self) -> str:\n \"\"\"Get the values as human-readable string.\"\"\"\n lines = self.dump(include_hidden=True)\n if lines:\n return '\\n'.join(lines)\n return '{}: <unchanged>'.format(self.opt.name)\n\n def dump(self, include_hidden: bool = False) -> typing.Sequence[str]:\n \"\"\"Dump all customizations for this value.\n\n Arguments:\n include_hidden: Also show values with hide_userconfig=True.\n \"\"\"\n lines = []\n\n for scoped in self._vmap.values():\n if scoped.hide_userconfig and not include_hidden:\n continue\n\n str_value = self.opt.typ.to_str(scoped.value)\n if scoped.pattern is None:\n lines.append('{} = {}'.format(self.opt.name, str_value))\n else:\n lines.append('{}: {} = {}'.format(\n scoped.pattern, self.opt.name, str_value))\n\n return 
lines\n\n def __iter__(self) -> typing.Iterator['ScopedValue']:\n \"\"\"Yield ScopedValue elements.\n\n This yields in \"normal\" order, i.e. global and then first-set settings\n first.\n \"\"\"\n yield from self._vmap.values()\n\n def __bool__(self) -> bool:\n \"\"\"Check whether this value is customized.\"\"\"\n return bool(self._vmap)\n\n def _check_pattern_support(\n self, arg: typing.Optional[urlmatch.UrlPattern]) -> None:\n \"\"\"Make sure patterns are supported if one was given.\"\"\"\n if arg is not None and not self.opt.supports_pattern:\n raise configexc.NoPatternError(self.opt.name)\n\n def add(self, value: typing.Any,\n pattern: urlmatch.UrlPattern = None, *,\n hide_userconfig: bool = False) -> None:\n \"\"\"Add a value with the given pattern to the list of values.\n\n If hide_userconfig is given, the value is hidden from\n config.dump_userconfig() and thus qute://configdiff.\n \"\"\"\n scoped = ScopedValue(value, pattern, hide_userconfig=hide_userconfig)\n self._add_scoped(scoped)\n\n def _add_scoped(self, scoped: ScopedValue) -> None:\n \"\"\"Add an existing ScopedValue object.\"\"\"\n self._check_pattern_support(scoped.pattern)\n self.remove(scoped.pattern)\n\n self._vmap[scoped.pattern] = scoped\n\n host = scoped.pattern.host if scoped.pattern else None\n self._domain_map[host].add(scoped)\n\n def remove(self, pattern: urlmatch.UrlPattern = None) -> bool:\n \"\"\"Remove the value with the given pattern.\n\n If a matching pattern was removed, True is returned.\n If no matching pattern was found, False is returned.\n \"\"\"\n self._check_pattern_support(pattern)\n if pattern not in self._vmap:\n return False\n\n host = pattern.host if pattern else None\n scoped_value = self._vmap[pattern]\n # If we error here, that means domain_map and vmap are out of sync,\n # report a bug!\n assert host in self._domain_map\n self._domain_map[host].remove(scoped_value)\n del self._vmap[pattern]\n return True\n\n def clear(self) -> None:\n \"\"\"Clear all customization for this value.\"\"\"\n self._vmap.clear()\n self._domain_map.clear()\n\n def _get_fallback(self, fallback: bool) -> typing.Any:\n \"\"\"Get the fallback global/default value.\"\"\"\n if None in self._vmap:\n return self._vmap[None].value\n\n if fallback:\n return self.opt.default\n else:\n return usertypes.UNSET\n\n def get_for_url(self, url: QUrl = None, *,\n fallback: bool = True) -> typing.Any:\n \"\"\"Get a config value, falling back when needed.\n\n This first tries to find a value matching the URL (if given).\n If there's no match:\n With fallback=True, the global/default setting is returned.\n With fallback=False, usertypes.UNSET is returned.\n \"\"\"\n self._check_pattern_support(url)\n if url is None:\n return self._get_fallback(fallback)\n\n candidates = [] # type: typing.List[ScopedValue]\n # Urls trailing with '.' 
are equivalent to non-trailing types.\n # urlutils strips them, so in order to match we will need to as well.\n widened_hosts = _widened_hostnames(url.host().rstrip('.'))\n # We must check the 'None' key as well, in case any patterns that\n # did not have a domain match.\n for host in itertools.chain(widened_hosts, [None]):\n host_set = self._domain_map.get(host, ())\n for scoped in host_set:\n if scoped.pattern is not None and scoped.pattern.matches(url):\n candidates.append(scoped)\n\n if candidates:\n scoped = max(candidates, key=operator.attrgetter('pattern_id'))\n return scoped.value\n\n if not fallback:\n return usertypes.UNSET\n\n return self._get_fallback(fallback)\n\n def get_for_pattern(self,\n pattern: typing.Optional[urlmatch.UrlPattern], *,\n fallback: bool = True) -> typing.Any:\n \"\"\"Get a value only if it's been overridden for the given pattern.\n\n This is useful when showing values to the user.\n\n If there's no match:\n With fallback=True, the global/default setting is returned.\n With fallback=False, usertypes.UNSET is returned.\n \"\"\"\n self._check_pattern_support(pattern)\n if pattern is not None:\n if pattern in self._vmap:\n return self._vmap[pattern].value\n\n if not fallback:\n return usertypes.UNSET\n\n return self._get_fallback(fallback)\n\n\nclass FontFamilies:\n\n \"\"\"A list of font family names.\"\"\"\n\n def __init__(self, families: typing.Sequence[str]) -> None:\n self._families = families\n self.family = families[0] if families else None\n\n def __iter__(self) -> typing.Iterator[str]:\n yield from self._families\n\n def __repr__(self) -> str:\n return utils.get_repr(self, families=self._families, constructor=True)\n\n def __str__(self) -> str:\n return self.to_str()\n\n def _quoted_families(self) -> typing.Iterator[str]:\n for f in self._families:\n needs_quoting = any(c in f for c in ', ')\n yield '\"{}\"'.format(f) if needs_quoting else f\n\n def to_str(self, *, quote: bool = True) -> str:\n families = self._quoted_families() if quote else self._families\n return ', '.join(families)\n\n @classmethod\n def from_str(cls, family_str: str) -> 'FontFamilies':\n \"\"\"Parse a CSS-like string of font families.\"\"\"\n families = []\n\n for part in family_str.split(','):\n part = part.strip()\n\n # The Qt CSS parser handles \" and ' before passing the string to\n # QFont.setFamily.\n if ((part.startswith(\"'\") and part.endswith(\"'\")) or\n (part.startswith('\"') and part.endswith('\"'))):\n part = part[1:-1]\n\n if not part:\n continue\n\n families.append(part)\n\n return cls(families)\n", "path": "qutebrowser/config/configutils.py"}]}
| 3,633 | 186 |
gh_patches_debug_1003 | rasdani/github-patches | git_diff | ipython__ipython-3556 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
_margv for macros seems to be missing
At one point in time, arguments to macro's could be obtained from _margv , but this seems to be missing now ( https://github.com/ipython/ipython/wiki/Cookbook:-Macro-arguments ).
I searched the entire ipython folder and only found _margv in the documentation in the macro.py file.
Just wondering if this is still supported.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/core/macro.py`
Content:
```
1 """Support for interactive macros in IPython"""
2
3 #*****************************************************************************
4 # Copyright (C) 2001-2005 Fernando Perez <[email protected]>
5 #
6 # Distributed under the terms of the BSD License. The full license is in
7 # the file COPYING, distributed as part of this software.
8 #*****************************************************************************
9
10 import re
11
12 from IPython.utils import py3compat
13 from IPython.utils.encoding import DEFAULT_ENCODING
14
15 coding_declaration = re.compile(r"#\s*coding[:=]\s*([-\w.]+)")
16
17 class Macro(object):
18 """Simple class to store the value of macros as strings.
19
20 Macro is just a callable that executes a string of IPython
21 input when called.
22
23 Args to macro are available in _margv list if you need them.
24 """
25
26 def __init__(self,code):
27 """store the macro value, as a single string which can be executed"""
28 lines = []
29 enc = None
30 for line in code.splitlines():
31 coding_match = coding_declaration.match(line)
32 if coding_match:
33 enc = coding_match.group(1)
34 else:
35 lines.append(line)
36 code = "\n".join(lines)
37 if isinstance(code, bytes):
38 code = code.decode(enc or DEFAULT_ENCODING)
39 self.value = code + '\n'
40
41 def __str__(self):
42 return py3compat.unicode_to_str(self.value)
43
44 def __unicode__(self):
45 return self.value
46
47 def __repr__(self):
48 return 'IPython.macro.Macro(%s)' % repr(self.value)
49
50 def __getstate__(self):
51 """ needed for safe pickling via %store """
52 return {'value': self.value}
53
54 def __add__(self, other):
55 if isinstance(other, Macro):
56 return Macro(self.value + other.value)
57 elif isinstance(other, basestring):
58 return Macro(self.value + other)
59 raise TypeError
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/core/macro.py b/IPython/core/macro.py
--- a/IPython/core/macro.py
+++ b/IPython/core/macro.py
@@ -19,8 +19,6 @@
Macro is just a callable that executes a string of IPython
input when called.
-
- Args to macro are available in _margv list if you need them.
"""
def __init__(self,code):
|
{"golden_diff": "diff --git a/IPython/core/macro.py b/IPython/core/macro.py\n--- a/IPython/core/macro.py\n+++ b/IPython/core/macro.py\n@@ -19,8 +19,6 @@\n \n Macro is just a callable that executes a string of IPython\n input when called.\n- \n- Args to macro are available in _margv list if you need them.\n \"\"\"\n \n def __init__(self,code):\n", "issue": " _margv for macros seems to be missing\nAt one point in time, arguments to macro's could be obtained from _margv , but this seems to be missing now ( https://github.com/ipython/ipython/wiki/Cookbook:-Macro-arguments ). \n\nI searched the entire ipython folder and only found _margv in the documentation in the macro.py file. \n\nJust wondering if this is still supported. \n\n", "before_files": [{"content": "\"\"\"Support for interactive macros in IPython\"\"\"\n\n#*****************************************************************************\n# Copyright (C) 2001-2005 Fernando Perez <[email protected]>\n#\n# Distributed under the terms of the BSD License. The full license is in\n# the file COPYING, distributed as part of this software.\n#*****************************************************************************\n\nimport re\n\nfrom IPython.utils import py3compat\nfrom IPython.utils.encoding import DEFAULT_ENCODING\n\ncoding_declaration = re.compile(r\"#\\s*coding[:=]\\s*([-\\w.]+)\")\n\nclass Macro(object):\n \"\"\"Simple class to store the value of macros as strings.\n\n Macro is just a callable that executes a string of IPython\n input when called.\n \n Args to macro are available in _margv list if you need them.\n \"\"\"\n\n def __init__(self,code):\n \"\"\"store the macro value, as a single string which can be executed\"\"\"\n lines = []\n enc = None\n for line in code.splitlines():\n coding_match = coding_declaration.match(line)\n if coding_match:\n enc = coding_match.group(1)\n else:\n lines.append(line)\n code = \"\\n\".join(lines)\n if isinstance(code, bytes):\n code = code.decode(enc or DEFAULT_ENCODING)\n self.value = code + '\\n'\n \n def __str__(self):\n return py3compat.unicode_to_str(self.value)\n \n def __unicode__(self):\n return self.value\n\n def __repr__(self):\n return 'IPython.macro.Macro(%s)' % repr(self.value)\n \n def __getstate__(self):\n \"\"\" needed for safe pickling via %store \"\"\"\n return {'value': self.value}\n \n def __add__(self, other):\n if isinstance(other, Macro):\n return Macro(self.value + other.value)\n elif isinstance(other, basestring):\n return Macro(self.value + other)\n raise TypeError\n", "path": "IPython/core/macro.py"}], "after_files": [{"content": "\"\"\"Support for interactive macros in IPython\"\"\"\n\n#*****************************************************************************\n# Copyright (C) 2001-2005 Fernando Perez <[email protected]>\n#\n# Distributed under the terms of the BSD License. 
The full license is in\n# the file COPYING, distributed as part of this software.\n#*****************************************************************************\n\nimport re\n\nfrom IPython.utils import py3compat\nfrom IPython.utils.encoding import DEFAULT_ENCODING\n\ncoding_declaration = re.compile(r\"#\\s*coding[:=]\\s*([-\\w.]+)\")\n\nclass Macro(object):\n \"\"\"Simple class to store the value of macros as strings.\n\n Macro is just a callable that executes a string of IPython\n input when called.\n \"\"\"\n\n def __init__(self,code):\n \"\"\"store the macro value, as a single string which can be executed\"\"\"\n lines = []\n enc = None\n for line in code.splitlines():\n coding_match = coding_declaration.match(line)\n if coding_match:\n enc = coding_match.group(1)\n else:\n lines.append(line)\n code = \"\\n\".join(lines)\n if isinstance(code, bytes):\n code = code.decode(enc or DEFAULT_ENCODING)\n self.value = code + '\\n'\n \n def __str__(self):\n return py3compat.unicode_to_str(self.value)\n \n def __unicode__(self):\n return self.value\n\n def __repr__(self):\n return 'IPython.macro.Macro(%s)' % repr(self.value)\n \n def __getstate__(self):\n \"\"\" needed for safe pickling via %store \"\"\"\n return {'value': self.value}\n \n def __add__(self, other):\n if isinstance(other, Macro):\n return Macro(self.value + other.value)\n elif isinstance(other, basestring):\n return Macro(self.value + other)\n raise TypeError\n", "path": "IPython/core/macro.py"}]}
| 878 | 99 |
gh_patches_debug_10846 | rasdani/github-patches | git_diff | pytorch__vision-2081 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need to put `norm_layer` as a parameter.
https://github.com/pytorch/vision/blob/684f48db4e6f619389da3a6957b3edebf794ae79/torchvision/models/detection/backbone_utils.py#L47
It works fine with resnet50_fpn, but when I try to use another backbone, for example `resnext101_32x8d`
`norm_layer=misc_nn_ops.FrozenBatchNorm2d` may trouble with Imagenet pretrained weights, which use BatchNorm
```
Traceback (most recent call last):
File "tmp.py", line 4, in <module>
m = maskrcnn_resnext101_32x8d_rpn(pretrained=True)
File "/mnt/data/luan/maskrcnn/models.py", line 218, in maskrcnn_resnext101_32x8d_rpn
"resnext101_32x8d", pretrained=pretrained)
File "/mnt/data/luan/anaconda3/envs/mask/lib/python3.6/site-packages/torchvision/models/detection/backbone_utils.py", line 47, in resnet_fpn_backbone
norm_layer=misc_nn_ops.FrozenBatchNorm2d)
File "/mnt/data/luan/anaconda3/envs/mask/lib/python3.6/site-packages/torchvision/models/resnet.py", line 313, in resnext101_32x8d
pretrained, progress, **kwargs)
File "/mnt/data/luan/anaconda3/envs/mask/lib/python3.6/site-packages/torchvision/models/resnet.py", line 224, in _resnet
model.load_state_dict(state_dict)
File "/mnt/data/luan/anaconda3/envs/mask/lib/python3.6/site-packages/torch/nn/modules/module.py", line 830, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNet:
Unexpected key(s) in state_dict: "bn1.num_batches_tracked", "layer1.0.bn1.num_batches_tracked", "layer1.0.bn2.num_batches_tracked", "layer1.0.bn3.num_batches_tracked", "layer1.0.downsample.1.num_batches_tracked", "layer1.1.bn1.num_batches_tracked", "layer1.1.bn2.num_batches_tracked", "layer1.1.bn3.num_batches_tracked", "layer1.2.bn1.num_batches_tracked", "layer1.2.bn2.num_batches_tracked", "layer1.2.bn3.num_batches_tracked", "layer2.0.bn1.num_batches_tracked", "layer2.0.bn2.num_batches_tracked", "layer2.0.bn3.num_batches_tracked", "layer2.0.downsample.1.num_batches_tracked", "layer2.1.bn1.num_batches_tracked", "layer2.1.bn2.num_batches_tracked", "layer2.1.bn3.num_batches_tracked", "layer2.2.bn1.num_batches_tracked", "layer2.2.bn2.num_batches_tracked", "layer2.2.bn3.num_batches_tracked", "layer2.3.bn1.num_batches_tracked", "layer2.3.bn2.num_batches_tracked", "layer2.3.bn3.num_batches_tracked", "layer3.0.bn1.num_batches_tracked", "layer3.0.bn2.num_batches_tracked", "layer3.0.bn3.num_batches_tracked", "layer3.0.downsample.1.num_batches_tracked", "layer3.1.bn1.num_batches_tracked", "layer3.1.bn2.num_batches_tracked", "layer3.1.bn3.num_batches_tracked", "layer3.2.bn1.num_batches_tracked",
"layer3.2.bn2.num_batches_tracked", "layer3.2.bn3.num_batches_tracked", "layer3.3.bn1.num_batches_tracked", "layer3.3.bn2.num_batches_tracked", "layer3.3.bn3.num_batches_tracked", "layer3.4.bn1.num_batches_tracked", "layer3.4.bn2.num_batches_tracked", "layer3.4.bn3.num_batches_tracked", "layer3.5.bn1.num_batches_tracked", "layer3.5.bn2.num_batches_tracked", "layer3.5.bn3.num_batches_tracked", "layer3.6.bn1.num_batches_tracked", "layer3.6.bn2.num_batches_tracked", "layer3.6.bn3.num_batches_tracked", "layer3.7.bn1.num_batches_tracked", "layer3.7.bn2.num_batches_tracked", "layer3.7.bn3.num_batches_tracked", "layer3.8.bn1.num_batches_tracked", "layer3.8.bn2.num_batches_tracked", "layer3.8.bn3.num_batches_tracked", "layer3.9.bn1.num_batches_tracked", "layer3.9.bn2.num_batches_tracked", "layer3.9.bn3.num_batches_tracked", "layer3.10.bn1.num_batches_tracked",
"layer3.10.bn2.num_batches_tracked", "layer3.10.bn3.num_batches_tracked", "layer3.11.bn1.num_batches_tracked", "layer3.11.bn2.num_batches_tracked", "layer3.11.bn3.num_batches_tracked", "layer3.12.bn1.num_batches_tracked", "layer3.12.bn2.num_batches_tracked", "layer3.12.bn3.num_batches_tracked", "layer3.13.bn1.num_batches_tracked", "layer3.13.bn2.num_batches_tracked", "layer3.13.bn3.num_batches_tracked", "layer3.14.bn1.num_batches_tracked", "layer3.14.bn2.num_batches_tracked", "layer3.14.bn3.num_batches_tracked", "layer3.15.bn1.num_batches_tracked", "layer3.15.bn2.num_batches_tracked", "layer3.15.bn3.num_batches_tracked", "layer3.16.bn1.num_batches_tracked", "layer3.16.bn2.num_batches_tracked", "layer3.16.bn3.num_batches_tracked", "layer3.17.bn1.num_batches_tracked", "layer3.17.bn2.num_batches_tracked", "layer3.17.bn3.num_batches_tracked", "layer3.18.bn1.num_batches_tracked", "layer3.18.bn2.num_batches_tracked", "layer3.18.bn3.num_batches_tracked", "layer3.19.bn1.num_batches_tracked", "layer3.19.bn2.num_batches_tracked", "layer3.19.bn3.num_batches_tracked", "layer3.20.bn1.num_batches_tracked", "layer3.20.bn2.num_batches_tracked", "layer3.20.bn3.num_batches_tracked", "layer3.21.bn1.num_batches_tracked", "layer3.21.bn2.num_batches_tracked", "layer3.21.bn3.num_batches_tracked", "layer3.22.bn1.num_batches_tracked", "layer3.22.bn2.num_batches_tracked", "layer3.22.bn3.num_batches_tracked", "layer4.0.bn1.num_batches_tracked", "layer4.0.bn2.num_batches_tracked", "layer4.0.bn3.num_batches_tracked", "layer4.0.downsample.1.num_batches_tracked", "layer4.1.bn1.num_batches_tracked", "layer4.1.bn2.num_batches_tracked", "layer4.1.bn3.num_batches_tracked", "layer4.2.bn1.num_batches_tracked", "layer4.2.bn2.num_batches_tracked", "layer4.2.bn3.num_batches_tracked".
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/models/detection/backbone_utils.py`
Content:
```
1 from collections import OrderedDict
2 from torch import nn
3 from torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork, LastLevelMaxPool
4
5 from torchvision.ops import misc as misc_nn_ops
6 from .._utils import IntermediateLayerGetter
7 from .. import resnet
8
9
10 class BackboneWithFPN(nn.Module):
11 """
12 Adds a FPN on top of a model.
13 Internally, it uses torchvision.models._utils.IntermediateLayerGetter to
14 extract a submodel that returns the feature maps specified in return_layers.
15 The same limitations of IntermediatLayerGetter apply here.
16 Arguments:
17 backbone (nn.Module)
18 return_layers (Dict[name, new_name]): a dict containing the names
19 of the modules for which the activations will be returned as
20 the key of the dict, and the value of the dict is the name
21 of the returned activation (which the user can specify).
22 in_channels_list (List[int]): number of channels for each feature map
23 that is returned, in the order they are present in the OrderedDict
24 out_channels (int): number of channels in the FPN.
25 Attributes:
26 out_channels (int): the number of channels in the FPN
27 """
28 def __init__(self, backbone, return_layers, in_channels_list, out_channels):
29 super(BackboneWithFPN, self).__init__()
30 self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)
31 self.fpn = FeaturePyramidNetwork(
32 in_channels_list=in_channels_list,
33 out_channels=out_channels,
34 extra_blocks=LastLevelMaxPool(),
35 )
36 self.out_channels = out_channels
37
38 def forward(self, x):
39 x = self.body(x)
40 x = self.fpn(x)
41 return x
42
43
44 def resnet_fpn_backbone(backbone_name, pretrained):
45 backbone = resnet.__dict__[backbone_name](
46 pretrained=pretrained,
47 norm_layer=misc_nn_ops.FrozenBatchNorm2d)
48 # freeze layers
49 for name, parameter in backbone.named_parameters():
50 if 'layer2' not in name and 'layer3' not in name and 'layer4' not in name:
51 parameter.requires_grad_(False)
52
53 return_layers = {'layer1': '0', 'layer2': '1', 'layer3': '2', 'layer4': '3'}
54
55 in_channels_stage2 = backbone.inplanes // 8
56 in_channels_list = [
57 in_channels_stage2,
58 in_channels_stage2 * 2,
59 in_channels_stage2 * 4,
60 in_channels_stage2 * 8,
61 ]
62 out_channels = 256
63 return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchvision/models/detection/backbone_utils.py b/torchvision/models/detection/backbone_utils.py
--- a/torchvision/models/detection/backbone_utils.py
+++ b/torchvision/models/detection/backbone_utils.py
@@ -41,10 +41,10 @@
return x
-def resnet_fpn_backbone(backbone_name, pretrained):
+def resnet_fpn_backbone(backbone_name, pretrained, norm_layer=misc_nn_ops.FrozenBatchNorm2d):
backbone = resnet.__dict__[backbone_name](
pretrained=pretrained,
- norm_layer=misc_nn_ops.FrozenBatchNorm2d)
+ norm_layer=norm_layer)
# freeze layers
for name, parameter in backbone.named_parameters():
if 'layer2' not in name and 'layer3' not in name and 'layer4' not in name:
|
{"golden_diff": "diff --git a/torchvision/models/detection/backbone_utils.py b/torchvision/models/detection/backbone_utils.py\n--- a/torchvision/models/detection/backbone_utils.py\n+++ b/torchvision/models/detection/backbone_utils.py\n@@ -41,10 +41,10 @@\n return x\n \n \n-def resnet_fpn_backbone(backbone_name, pretrained):\n+def resnet_fpn_backbone(backbone_name, pretrained, norm_layer=misc_nn_ops.FrozenBatchNorm2d):\n backbone = resnet.__dict__[backbone_name](\n pretrained=pretrained,\n- norm_layer=misc_nn_ops.FrozenBatchNorm2d)\n+ norm_layer=norm_layer)\n # freeze layers\n for name, parameter in backbone.named_parameters():\n if 'layer2' not in name and 'layer3' not in name and 'layer4' not in name:\n", "issue": "Need to put `norm_layer` as a parameter.\nhttps://github.com/pytorch/vision/blob/684f48db4e6f619389da3a6957b3edebf794ae79/torchvision/models/detection/backbone_utils.py#L47\r\n\r\nIt works fine with resnet50_fpn, but when I try to use another backbone, for example `resnext101_32x8d`\r\n\r\n`norm_layer=misc_nn_ops.FrozenBatchNorm2d` may trouble with Imagenet pretrained weights, which use BatchNorm\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"tmp.py\", line 4, in <module>\r\n m = maskrcnn_resnext101_32x8d_rpn(pretrained=True)\r\n File \"/mnt/data/luan/maskrcnn/models.py\", line 218, in maskrcnn_resnext101_32x8d_rpn \r\n \"resnext101_32x8d\", pretrained=pretrained)\r\n File \"/mnt/data/luan/anaconda3/envs/mask/lib/python3.6/site-packages/torchvision/models/detection/backbone_utils.py\", line 47, in resnet_fpn_backbone \r\n norm_layer=misc_nn_ops.FrozenBatchNorm2d)\r\n File \"/mnt/data/luan/anaconda3/envs/mask/lib/python3.6/site-packages/torchvision/models/resnet.py\", line 313, in resnext101_32x8d \r\n pretrained, progress, **kwargs)\r\n File \"/mnt/data/luan/anaconda3/envs/mask/lib/python3.6/site-packages/torchvision/models/resnet.py\", line 224, in _resnet \r\n model.load_state_dict(state_dict)\r\n File \"/mnt/data/luan/anaconda3/envs/mask/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 830, in load_state_dict \r\n self.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for ResNet:\r\n Unexpected key(s) in state_dict: \"bn1.num_batches_tracked\", \"layer1.0.bn1.num_batches_tracked\", \"layer1.0.bn2.num_batches_tracked\", \"layer1.0.bn3.num_batches_tracked\", \"layer1.0.downsample.1.num_batches_tracked\", \"layer1.1.bn1.num_batches_tracked\", \"layer1.1.bn2.num_batches_tracked\", \"layer1.1.bn3.num_batches_tracked\", \"layer1.2.bn1.num_batches_tracked\", \"layer1.2.bn2.num_batches_tracked\", \"layer1.2.bn3.num_batches_tracked\", \"layer2.0.bn1.num_batches_tracked\", \"layer2.0.bn2.num_batches_tracked\", \"layer2.0.bn3.num_batches_tracked\", \"layer2.0.downsample.1.num_batches_tracked\", \"layer2.1.bn1.num_batches_tracked\", \"layer2.1.bn2.num_batches_tracked\", \"layer2.1.bn3.num_batches_tracked\", \"layer2.2.bn1.num_batches_tracked\", \"layer2.2.bn2.num_batches_tracked\", \"layer2.2.bn3.num_batches_tracked\", \"layer2.3.bn1.num_batches_tracked\", \"layer2.3.bn2.num_batches_tracked\", \"layer2.3.bn3.num_batches_tracked\", \"layer3.0.bn1.num_batches_tracked\", \"layer3.0.bn2.num_batches_tracked\", \"layer3.0.bn3.num_batches_tracked\", \"layer3.0.downsample.1.num_batches_tracked\", \"layer3.1.bn1.num_batches_tracked\", \"layer3.1.bn2.num_batches_tracked\", \"layer3.1.bn3.num_batches_tracked\", \"layer3.2.bn1.num_batches_tracked\",\r\n\"layer3.2.bn2.num_batches_tracked\", 
\"layer3.2.bn3.num_batches_tracked\", \"layer3.3.bn1.num_batches_tracked\", \"layer3.3.bn2.num_batches_tracked\", \"layer3.3.bn3.num_batches_tracked\", \"layer3.4.bn1.num_batches_tracked\", \"layer3.4.bn2.num_batches_tracked\", \"layer3.4.bn3.num_batches_tracked\", \"layer3.5.bn1.num_batches_tracked\", \"layer3.5.bn2.num_batches_tracked\", \"layer3.5.bn3.num_batches_tracked\", \"layer3.6.bn1.num_batches_tracked\", \"layer3.6.bn2.num_batches_tracked\", \"layer3.6.bn3.num_batches_tracked\", \"layer3.7.bn1.num_batches_tracked\", \"layer3.7.bn2.num_batches_tracked\", \"layer3.7.bn3.num_batches_tracked\", \"layer3.8.bn1.num_batches_tracked\", \"layer3.8.bn2.num_batches_tracked\", \"layer3.8.bn3.num_batches_tracked\", \"layer3.9.bn1.num_batches_tracked\", \"layer3.9.bn2.num_batches_tracked\", \"layer3.9.bn3.num_batches_tracked\", \"layer3.10.bn1.num_batches_tracked\",\r\n\"layer3.10.bn2.num_batches_tracked\", \"layer3.10.bn3.num_batches_tracked\", \"layer3.11.bn1.num_batches_tracked\", \"layer3.11.bn2.num_batches_tracked\", \"layer3.11.bn3.num_batches_tracked\", \"layer3.12.bn1.num_batches_tracked\", \"layer3.12.bn2.num_batches_tracked\", \"layer3.12.bn3.num_batches_tracked\", \"layer3.13.bn1.num_batches_tracked\", \"layer3.13.bn2.num_batches_tracked\", \"layer3.13.bn3.num_batches_tracked\", \"layer3.14.bn1.num_batches_tracked\", \"layer3.14.bn2.num_batches_tracked\", \"layer3.14.bn3.num_batches_tracked\", \"layer3.15.bn1.num_batches_tracked\", \"layer3.15.bn2.num_batches_tracked\", \"layer3.15.bn3.num_batches_tracked\", \"layer3.16.bn1.num_batches_tracked\", \"layer3.16.bn2.num_batches_tracked\", \"layer3.16.bn3.num_batches_tracked\", \"layer3.17.bn1.num_batches_tracked\", \"layer3.17.bn2.num_batches_tracked\", \"layer3.17.bn3.num_batches_tracked\", \"layer3.18.bn1.num_batches_tracked\", \"layer3.18.bn2.num_batches_tracked\", \"layer3.18.bn3.num_batches_tracked\", \"layer3.19.bn1.num_batches_tracked\", \"layer3.19.bn2.num_batches_tracked\", \"layer3.19.bn3.num_batches_tracked\", \"layer3.20.bn1.num_batches_tracked\", \"layer3.20.bn2.num_batches_tracked\", \"layer3.20.bn3.num_batches_tracked\", \"layer3.21.bn1.num_batches_tracked\", \"layer3.21.bn2.num_batches_tracked\", \"layer3.21.bn3.num_batches_tracked\", \"layer3.22.bn1.num_batches_tracked\", \"layer3.22.bn2.num_batches_tracked\", \"layer3.22.bn3.num_batches_tracked\", \"layer4.0.bn1.num_batches_tracked\", \"layer4.0.bn2.num_batches_tracked\", \"layer4.0.bn3.num_batches_tracked\", \"layer4.0.downsample.1.num_batches_tracked\", \"layer4.1.bn1.num_batches_tracked\", \"layer4.1.bn2.num_batches_tracked\", \"layer4.1.bn3.num_batches_tracked\", \"layer4.2.bn1.num_batches_tracked\", \"layer4.2.bn2.num_batches_tracked\", \"layer4.2.bn3.num_batches_tracked\".\r\n\r\n```\n", "before_files": [{"content": "from collections import OrderedDict\nfrom torch import nn\nfrom torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork, LastLevelMaxPool\n\nfrom torchvision.ops import misc as misc_nn_ops\nfrom .._utils import IntermediateLayerGetter\nfrom .. 
import resnet\n\n\nclass BackboneWithFPN(nn.Module):\n \"\"\"\n Adds a FPN on top of a model.\n Internally, it uses torchvision.models._utils.IntermediateLayerGetter to\n extract a submodel that returns the feature maps specified in return_layers.\n The same limitations of IntermediatLayerGetter apply here.\n Arguments:\n backbone (nn.Module)\n return_layers (Dict[name, new_name]): a dict containing the names\n of the modules for which the activations will be returned as\n the key of the dict, and the value of the dict is the name\n of the returned activation (which the user can specify).\n in_channels_list (List[int]): number of channels for each feature map\n that is returned, in the order they are present in the OrderedDict\n out_channels (int): number of channels in the FPN.\n Attributes:\n out_channels (int): the number of channels in the FPN\n \"\"\"\n def __init__(self, backbone, return_layers, in_channels_list, out_channels):\n super(BackboneWithFPN, self).__init__()\n self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)\n self.fpn = FeaturePyramidNetwork(\n in_channels_list=in_channels_list,\n out_channels=out_channels,\n extra_blocks=LastLevelMaxPool(),\n )\n self.out_channels = out_channels\n\n def forward(self, x):\n x = self.body(x)\n x = self.fpn(x)\n return x\n\n\ndef resnet_fpn_backbone(backbone_name, pretrained):\n backbone = resnet.__dict__[backbone_name](\n pretrained=pretrained,\n norm_layer=misc_nn_ops.FrozenBatchNorm2d)\n # freeze layers\n for name, parameter in backbone.named_parameters():\n if 'layer2' not in name and 'layer3' not in name and 'layer4' not in name:\n parameter.requires_grad_(False)\n\n return_layers = {'layer1': '0', 'layer2': '1', 'layer3': '2', 'layer4': '3'}\n\n in_channels_stage2 = backbone.inplanes // 8\n in_channels_list = [\n in_channels_stage2,\n in_channels_stage2 * 2,\n in_channels_stage2 * 4,\n in_channels_stage2 * 8,\n ]\n out_channels = 256\n return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels)\n", "path": "torchvision/models/detection/backbone_utils.py"}], "after_files": [{"content": "from collections import OrderedDict\nfrom torch import nn\nfrom torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork, LastLevelMaxPool\n\nfrom torchvision.ops import misc as misc_nn_ops\nfrom .._utils import IntermediateLayerGetter\nfrom .. 
import resnet\n\n\nclass BackboneWithFPN(nn.Module):\n \"\"\"\n Adds a FPN on top of a model.\n Internally, it uses torchvision.models._utils.IntermediateLayerGetter to\n extract a submodel that returns the feature maps specified in return_layers.\n The same limitations of IntermediatLayerGetter apply here.\n Arguments:\n backbone (nn.Module)\n return_layers (Dict[name, new_name]): a dict containing the names\n of the modules for which the activations will be returned as\n the key of the dict, and the value of the dict is the name\n of the returned activation (which the user can specify).\n in_channels_list (List[int]): number of channels for each feature map\n that is returned, in the order they are present in the OrderedDict\n out_channels (int): number of channels in the FPN.\n Attributes:\n out_channels (int): the number of channels in the FPN\n \"\"\"\n def __init__(self, backbone, return_layers, in_channels_list, out_channels):\n super(BackboneWithFPN, self).__init__()\n self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)\n self.fpn = FeaturePyramidNetwork(\n in_channels_list=in_channels_list,\n out_channels=out_channels,\n extra_blocks=LastLevelMaxPool(),\n )\n self.out_channels = out_channels\n\n def forward(self, x):\n x = self.body(x)\n x = self.fpn(x)\n return x\n\n\ndef resnet_fpn_backbone(backbone_name, pretrained, norm_layer=misc_nn_ops.FrozenBatchNorm2d):\n backbone = resnet.__dict__[backbone_name](\n pretrained=pretrained,\n norm_layer=norm_layer)\n # freeze layers\n for name, parameter in backbone.named_parameters():\n if 'layer2' not in name and 'layer3' not in name and 'layer4' not in name:\n parameter.requires_grad_(False)\n\n return_layers = {'layer1': '0', 'layer2': '1', 'layer3': '2', 'layer4': '3'}\n\n in_channels_stage2 = backbone.inplanes // 8\n in_channels_list = [\n in_channels_stage2,\n in_channels_stage2 * 2,\n in_channels_stage2 * 4,\n in_channels_stage2 * 8,\n ]\n out_channels = 256\n return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels)\n", "path": "torchvision/models/detection/backbone_utils.py"}]}
| 2,754 | 193 |
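The patch recorded above exposes a `norm_layer` argument on `resnet_fpn_backbone`, so callers are no longer locked into `FrozenBatchNorm2d`. A minimal usage sketch, assuming a torchvision build that already contains this change (the backbone name and flags are illustrative):

```python
import torch.nn as nn
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Default call keeps the previous behaviour: FrozenBatchNorm2d throughout.
frozen_backbone = resnet_fpn_backbone("resnet50", pretrained=False)

# With the patch, any norm constructor can be injected, e.g. a regular BatchNorm2d,
# which avoids the num_batches_tracked state-dict mismatch quoted in the report above.
bn_backbone = resnet_fpn_backbone("resnet50", pretrained=False, norm_layer=nn.BatchNorm2d)
```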
gh_patches_debug_20079
|
rasdani/github-patches
|
git_diff
|
huggingface__transformers-6437
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in run_tf_squad.py script
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
documentation: @sgugger
--> @sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
I'm simply trying to train a new question answering model using the TF trainer script, and I get the following error:
```python
Traceback (most recent call last):
File "run_tf_squad.py", line 244, in <module>
main()
File "run_tf_squad.py", line 123, in main
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))
File "/usr/local/lib/python3.6/dist-packages/transformers/hf_argparser.py", line 40, in __init__
self._add_dataclass_arguments(dtype)
File "/usr/local/lib/python3.6/dist-packages/transformers/hf_argparser.py", line 72, in _add_dataclass_arguments
elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
File "/usr/lib/python3.6/typing.py", line 1154, in __subclasscheck__
return super().__subclasscheck__(cls)
File "/usr/lib/python3.6/abc.py", line 209, in __subclasscheck__
ok = cls.__subclasshook__(subclass)
File "/usr/lib/python3.6/typing.py", line 890, in __extrahook__
if cls.__extra__ and issubclass(subclass, cls.__extra__):
TypeError: issubclass() arg 1 must be a class
```
## To reproduce
Steps to reproduce the behavior:
1. install transformers from the master branch
2. run the example script in question-answering:
```
python run_tf_squad.py \
--model_name_or_path bert-base-uncased \
--output_dir model \
--max_seq_length 384 \
--num_train_epochs 2 \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 16 \
--do_train \
--logging_dir logs \
--logging_steps 10 \
--learning_rate 3e-5 \
--doc_stride 128
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The script should run normally and train the model
<!-- A clear and concise description of what you would expect to happen. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/transformers/hf_argparser.py`
Content:
```
1 import dataclasses
2 import json
3 import sys
4 from argparse import ArgumentParser
5 from enum import Enum
6 from pathlib import Path
7 from typing import Any, Iterable, List, NewType, Tuple, Union
8
9
10 DataClass = NewType("DataClass", Any)
11 DataClassType = NewType("DataClassType", Any)
12
13
14 class HfArgumentParser(ArgumentParser):
15 """
16 This subclass of `argparse.ArgumentParser` uses type hints on dataclasses
17 to generate arguments.
18
19 The class is designed to play well with the native argparse. In particular,
20 you can add more (non-dataclass backed) arguments to the parser after initialization
21 and you'll get the output back after parsing as an additional namespace.
22 """
23
24 dataclass_types: Iterable[DataClassType]
25
26 def __init__(self, dataclass_types: Union[DataClassType, Iterable[DataClassType]], **kwargs):
27 """
28 Args:
29 dataclass_types:
30 Dataclass type, or list of dataclass types for which we will "fill" instances
31 with the parsed args.
32 kwargs:
33 (Optional) Passed to `argparse.ArgumentParser()` in the regular way.
34 """
35 super().__init__(**kwargs)
36 if dataclasses.is_dataclass(dataclass_types):
37 dataclass_types = [dataclass_types]
38 self.dataclass_types = dataclass_types
39 for dtype in self.dataclass_types:
40 self._add_dataclass_arguments(dtype)
41
42 def _add_dataclass_arguments(self, dtype: DataClassType):
43 for field in dataclasses.fields(dtype):
44 field_name = f"--{field.name}"
45 kwargs = field.metadata.copy()
46 # field.metadata is not used at all by Data Classes,
47 # it is provided as a third-party extension mechanism.
48 if isinstance(field.type, str):
49 raise ImportError(
50 "This implementation is not compatible with Postponed Evaluation of Annotations (PEP 563),"
51 "which can be opted in from Python 3.7 with `from __future__ import annotations`."
52 "We will add compatibility when Python 3.9 is released."
53 )
54 typestring = str(field.type)
55 for prim_type in (int, float, str):
56 for collection in (List,):
57 if typestring == f"typing.Union[{collection[prim_type]}, NoneType]":
58 field.type = collection[prim_type]
59 if typestring == f"typing.Union[{prim_type.__name__}, NoneType]":
60 field.type = prim_type
61
62 if isinstance(field.type, type) and issubclass(field.type, Enum):
63 kwargs["choices"] = list(field.type)
64 kwargs["type"] = field.type
65 if field.default is not dataclasses.MISSING:
66 kwargs["default"] = field.default
67 elif field.type is bool:
68 kwargs["action"] = "store_false" if field.default is True else "store_true"
69 if field.default is True:
70 field_name = f"--no-{field.name}"
71 kwargs["dest"] = field.name
72 elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
73 kwargs["nargs"] = "+"
74 kwargs["type"] = field.type.__args__[0]
75 assert all(
76 x == kwargs["type"] for x in field.type.__args__
77 ), "{} cannot be a List of mixed types".format(field.name)
78 if field.default_factory is not dataclasses.MISSING:
79 kwargs["default"] = field.default_factory()
80 else:
81 kwargs["type"] = field.type
82 if field.default is not dataclasses.MISSING:
83 kwargs["default"] = field.default
84 elif field.default_factory is not dataclasses.MISSING:
85 kwargs["default"] = field.default_factory()
86 else:
87 kwargs["required"] = True
88 self.add_argument(field_name, **kwargs)
89
90 def parse_args_into_dataclasses(
91 self, args=None, return_remaining_strings=False, look_for_args_file=True
92 ) -> Tuple[DataClass, ...]:
93 """
94 Parse command-line args into instances of the specified dataclass types.
95
96 This relies on argparse's `ArgumentParser.parse_known_args`.
97 See the doc at:
98 docs.python.org/3.7/library/argparse.html#argparse.ArgumentParser.parse_args
99
100 Args:
101 args:
102 List of strings to parse. The default is taken from sys.argv.
103 (same as argparse.ArgumentParser)
104 return_remaining_strings:
105 If true, also return a list of remaining argument strings.
106 look_for_args_file:
107 If true, will look for a ".args" file with the same base name
108 as the entry point script for this process, and will append its
109 potential content to the command line args.
110
111 Returns:
112 Tuple consisting of:
113 - the dataclass instances in the same order as they
114 were passed to the initializer.abspath
115 - if applicable, an additional namespace for more
116 (non-dataclass backed) arguments added to the parser
117 after initialization.
118 - The potential list of remaining argument strings.
119 (same as argparse.ArgumentParser.parse_known_args)
120 """
121 if look_for_args_file and len(sys.argv):
122 args_file = Path(sys.argv[0]).with_suffix(".args")
123 if args_file.exists():
124 fargs = args_file.read_text().split()
125 args = fargs + args if args is not None else fargs + sys.argv[1:]
126 # in case of duplicate arguments the first one has precedence
127 # so we append rather than prepend.
128 namespace, remaining_args = self.parse_known_args(args=args)
129 outputs = []
130 for dtype in self.dataclass_types:
131 keys = {f.name for f in dataclasses.fields(dtype)}
132 inputs = {k: v for k, v in vars(namespace).items() if k in keys}
133 for k in keys:
134 delattr(namespace, k)
135 obj = dtype(**inputs)
136 outputs.append(obj)
137 if len(namespace.__dict__) > 0:
138 # additional namespace.
139 outputs.append(namespace)
140 if return_remaining_strings:
141 return (*outputs, remaining_args)
142 else:
143 if remaining_args:
144 raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
145
146 return (*outputs,)
147
148 def parse_json_file(self, json_file: str) -> Tuple[DataClass, ...]:
149 """
150 Alternative helper method that does not use `argparse` at all,
151 instead loading a json file and populating the dataclass types.
152 """
153 data = json.loads(Path(json_file).read_text())
154 outputs = []
155 for dtype in self.dataclass_types:
156 keys = {f.name for f in dataclasses.fields(dtype)}
157 inputs = {k: v for k, v in data.items() if k in keys}
158 obj = dtype(**inputs)
159 outputs.append(obj)
160 return (*outputs,)
161
162 def parse_dict(self, args: dict) -> Tuple[DataClass, ...]:
163 """
164 Alternative helper method that does not use `argparse` at all,
165 instead uses a dict and populating the dataclass types.
166 """
167 outputs = []
168 for dtype in self.dataclass_types:
169 keys = {f.name for f in dataclasses.fields(dtype)}
170 inputs = {k: v for k, v in args.items() if k in keys}
171 obj = dtype(**inputs)
172 outputs.append(obj)
173 return (*outputs,)
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py
--- a/src/transformers/hf_argparser.py
+++ b/src/transformers/hf_argparser.py
@@ -4,7 +4,7 @@
from argparse import ArgumentParser
from enum import Enum
from pathlib import Path
-from typing import Any, Iterable, List, NewType, Tuple, Union
+from typing import Any, Iterable, List, NewType, Optional, Tuple, Union
DataClass = NewType("DataClass", Any)
@@ -64,7 +64,7 @@
kwargs["type"] = field.type
if field.default is not dataclasses.MISSING:
kwargs["default"] = field.default
- elif field.type is bool:
+ elif field.type is bool or field.type is Optional[bool]:
kwargs["action"] = "store_false" if field.default is True else "store_true"
if field.default is True:
field_name = f"--no-{field.name}"
|
{"golden_diff": "diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py\n--- a/src/transformers/hf_argparser.py\n+++ b/src/transformers/hf_argparser.py\n@@ -4,7 +4,7 @@\n from argparse import ArgumentParser\n from enum import Enum\n from pathlib import Path\n-from typing import Any, Iterable, List, NewType, Tuple, Union\n+from typing import Any, Iterable, List, NewType, Optional, Tuple, Union\n \n \n DataClass = NewType(\"DataClass\", Any)\n@@ -64,7 +64,7 @@\n kwargs[\"type\"] = field.type\n if field.default is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default\n- elif field.type is bool:\n+ elif field.type is bool or field.type is Optional[bool]:\n kwargs[\"action\"] = \"store_false\" if field.default is True else \"store_true\"\n if field.default is True:\n field_name = f\"--no-{field.name}\"\n", "issue": "Error in run_tf_squad.py script\n## Environment info\r\n<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.\r\n Don't forget to fill out the missing fields in that output! -->\r\n \r\n- `transformers` version: 3.0.2\r\n- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.9\r\n- PyTorch version (GPU?): 1.6.0+cu101 (True)\r\n- Tensorflow version (GPU?): 2.3.0 (True)\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n\r\n### Who can help\r\n<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @\r\n If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.\r\n Please tag fewer than 3 people.\r\n \r\n albert, bert, GPT2, XLM: @LysandreJik \r\n tokenizers: @mfuntowicz\r\n Trainer: @sgugger\r\n Speed and Memory Benchmarks: @patrickvonplaten\r\n Model Cards: @julien-c\r\n Translation: @sshleifer\r\n Summarization: @sshleifer\r\n TextGeneration: @TevenLeScao \r\n examples/distillation: @VictorSanh\r\n nlp datasets: [different repo](https://github.com/huggingface/nlp)\r\n rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)\r\n Text Generation: @TevenLeScao\r\n blenderbot: @mariamabarham\r\n Bart: @sshleifer\r\n Marian: @sshleifer\r\n T5: @patrickvonplaten\r\n Longformer/Reformer: @patrickvonplaten\r\n TransfoXL/XLNet: @TevenLeScao \r\n examples/seq2seq: @sshleifer\r\n tensorflow: @jplu \r\ndocumentation: @sgugger\r\n --> @sgugger\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...):\r\n\r\nThe problem arises when using:\r\n* [x] the official example scripts: (give details below)\r\n* [ ] my own modified scripts: (give details below)\r\n\r\nThe tasks I am working on is:\r\n* [x] an official GLUE/SQUaD task: SQUaD\r\n* [ ] my own task or dataset: (give details below)\r\n\r\nI'm simply trying to train a new question answering model using the TF trainer script, and I get the following error:\r\n```python\r\nTraceback (most recent call last):\r\n File \"run_tf_squad.py\", line 244, in <module>\r\n main()\r\n File \"run_tf_squad.py\", line 123, in main\r\n parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/hf_argparser.py\", line 40, in __init__\r\n self._add_dataclass_arguments(dtype)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/hf_argparser.py\", line 72, in _add_dataclass_arguments\r\n elif hasattr(field.type, \"__origin__\") and issubclass(field.type.__origin__, List):\r\n File 
\"/usr/lib/python3.6/typing.py\", line 1154, in __subclasscheck__\r\n return super().__subclasscheck__(cls)\r\n File \"/usr/lib/python3.6/abc.py\", line 209, in __subclasscheck__\r\n ok = cls.__subclasshook__(subclass)\r\n File \"/usr/lib/python3.6/typing.py\", line 890, in __extrahook__\r\n if cls.__extra__ and issubclass(subclass, cls.__extra__):\r\nTypeError: issubclass() arg 1 must be a class\r\n```\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.install transformers from the master branch\r\n2.run the example script in question-answering:\r\n```\r\npython run_tf_squad.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --output_dir model \\\r\n --max_seq_length 384 \\\r\n --num_train_epochs 2 \\\r\n --per_gpu_train_batch_size 8 \\\r\n --per_gpu_eval_batch_size 16 \\\r\n --do_train \\\r\n --logging_dir logs \\ \r\n --logging_steps 10 \\\r\n --learning_rate 3e-5 \\\r\n --doc_stride 128 \r\n```\r\n\r\n<!-- If you have code snippets, error messages, stack traces please provide them here as well.\r\n Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting\r\n Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->\r\n\r\n## Expected behavior\r\nThe script should run normally and train the model\r\n<!-- A clear and concise description of what you would expect to happen. -->\r\n\n", "before_files": [{"content": "import dataclasses\nimport json\nimport sys\nfrom argparse import ArgumentParser\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import Any, Iterable, List, NewType, Tuple, Union\n\n\nDataClass = NewType(\"DataClass\", Any)\nDataClassType = NewType(\"DataClassType\", Any)\n\n\nclass HfArgumentParser(ArgumentParser):\n \"\"\"\n This subclass of `argparse.ArgumentParser` uses type hints on dataclasses\n to generate arguments.\n\n The class is designed to play well with the native argparse. 
In particular,\n you can add more (non-dataclass backed) arguments to the parser after initialization\n and you'll get the output back after parsing as an additional namespace.\n \"\"\"\n\n dataclass_types: Iterable[DataClassType]\n\n def __init__(self, dataclass_types: Union[DataClassType, Iterable[DataClassType]], **kwargs):\n \"\"\"\n Args:\n dataclass_types:\n Dataclass type, or list of dataclass types for which we will \"fill\" instances\n with the parsed args.\n kwargs:\n (Optional) Passed to `argparse.ArgumentParser()` in the regular way.\n \"\"\"\n super().__init__(**kwargs)\n if dataclasses.is_dataclass(dataclass_types):\n dataclass_types = [dataclass_types]\n self.dataclass_types = dataclass_types\n for dtype in self.dataclass_types:\n self._add_dataclass_arguments(dtype)\n\n def _add_dataclass_arguments(self, dtype: DataClassType):\n for field in dataclasses.fields(dtype):\n field_name = f\"--{field.name}\"\n kwargs = field.metadata.copy()\n # field.metadata is not used at all by Data Classes,\n # it is provided as a third-party extension mechanism.\n if isinstance(field.type, str):\n raise ImportError(\n \"This implementation is not compatible with Postponed Evaluation of Annotations (PEP 563),\"\n \"which can be opted in from Python 3.7 with `from __future__ import annotations`.\"\n \"We will add compatibility when Python 3.9 is released.\"\n )\n typestring = str(field.type)\n for prim_type in (int, float, str):\n for collection in (List,):\n if typestring == f\"typing.Union[{collection[prim_type]}, NoneType]\":\n field.type = collection[prim_type]\n if typestring == f\"typing.Union[{prim_type.__name__}, NoneType]\":\n field.type = prim_type\n\n if isinstance(field.type, type) and issubclass(field.type, Enum):\n kwargs[\"choices\"] = list(field.type)\n kwargs[\"type\"] = field.type\n if field.default is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default\n elif field.type is bool:\n kwargs[\"action\"] = \"store_false\" if field.default is True else \"store_true\"\n if field.default is True:\n field_name = f\"--no-{field.name}\"\n kwargs[\"dest\"] = field.name\n elif hasattr(field.type, \"__origin__\") and issubclass(field.type.__origin__, List):\n kwargs[\"nargs\"] = \"+\"\n kwargs[\"type\"] = field.type.__args__[0]\n assert all(\n x == kwargs[\"type\"] for x in field.type.__args__\n ), \"{} cannot be a List of mixed types\".format(field.name)\n if field.default_factory is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default_factory()\n else:\n kwargs[\"type\"] = field.type\n if field.default is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default\n elif field.default_factory is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default_factory()\n else:\n kwargs[\"required\"] = True\n self.add_argument(field_name, **kwargs)\n\n def parse_args_into_dataclasses(\n self, args=None, return_remaining_strings=False, look_for_args_file=True\n ) -> Tuple[DataClass, ...]:\n \"\"\"\n Parse command-line args into instances of the specified dataclass types.\n\n This relies on argparse's `ArgumentParser.parse_known_args`.\n See the doc at:\n docs.python.org/3.7/library/argparse.html#argparse.ArgumentParser.parse_args\n\n Args:\n args:\n List of strings to parse. 
The default is taken from sys.argv.\n (same as argparse.ArgumentParser)\n return_remaining_strings:\n If true, also return a list of remaining argument strings.\n look_for_args_file:\n If true, will look for a \".args\" file with the same base name\n as the entry point script for this process, and will append its\n potential content to the command line args.\n\n Returns:\n Tuple consisting of:\n - the dataclass instances in the same order as they\n were passed to the initializer.abspath\n - if applicable, an additional namespace for more\n (non-dataclass backed) arguments added to the parser\n after initialization.\n - The potential list of remaining argument strings.\n (same as argparse.ArgumentParser.parse_known_args)\n \"\"\"\n if look_for_args_file and len(sys.argv):\n args_file = Path(sys.argv[0]).with_suffix(\".args\")\n if args_file.exists():\n fargs = args_file.read_text().split()\n args = fargs + args if args is not None else fargs + sys.argv[1:]\n # in case of duplicate arguments the first one has precedence\n # so we append rather than prepend.\n namespace, remaining_args = self.parse_known_args(args=args)\n outputs = []\n for dtype in self.dataclass_types:\n keys = {f.name for f in dataclasses.fields(dtype)}\n inputs = {k: v for k, v in vars(namespace).items() if k in keys}\n for k in keys:\n delattr(namespace, k)\n obj = dtype(**inputs)\n outputs.append(obj)\n if len(namespace.__dict__) > 0:\n # additional namespace.\n outputs.append(namespace)\n if return_remaining_strings:\n return (*outputs, remaining_args)\n else:\n if remaining_args:\n raise ValueError(f\"Some specified arguments are not used by the HfArgumentParser: {remaining_args}\")\n\n return (*outputs,)\n\n def parse_json_file(self, json_file: str) -> Tuple[DataClass, ...]:\n \"\"\"\n Alternative helper method that does not use `argparse` at all,\n instead loading a json file and populating the dataclass types.\n \"\"\"\n data = json.loads(Path(json_file).read_text())\n outputs = []\n for dtype in self.dataclass_types:\n keys = {f.name for f in dataclasses.fields(dtype)}\n inputs = {k: v for k, v in data.items() if k in keys}\n obj = dtype(**inputs)\n outputs.append(obj)\n return (*outputs,)\n\n def parse_dict(self, args: dict) -> Tuple[DataClass, ...]:\n \"\"\"\n Alternative helper method that does not use `argparse` at all,\n instead uses a dict and populating the dataclass types.\n \"\"\"\n outputs = []\n for dtype in self.dataclass_types:\n keys = {f.name for f in dataclasses.fields(dtype)}\n inputs = {k: v for k, v in args.items() if k in keys}\n obj = dtype(**inputs)\n outputs.append(obj)\n return (*outputs,)\n", "path": "src/transformers/hf_argparser.py"}], "after_files": [{"content": "import dataclasses\nimport json\nimport sys\nfrom argparse import ArgumentParser\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import Any, Iterable, List, NewType, Optional, Tuple, Union\n\n\nDataClass = NewType(\"DataClass\", Any)\nDataClassType = NewType(\"DataClassType\", Any)\n\n\nclass HfArgumentParser(ArgumentParser):\n \"\"\"\n This subclass of `argparse.ArgumentParser` uses type hints on dataclasses\n to generate arguments.\n\n The class is designed to play well with the native argparse. 
In particular,\n you can add more (non-dataclass backed) arguments to the parser after initialization\n and you'll get the output back after parsing as an additional namespace.\n \"\"\"\n\n dataclass_types: Iterable[DataClassType]\n\n def __init__(self, dataclass_types: Union[DataClassType, Iterable[DataClassType]], **kwargs):\n \"\"\"\n Args:\n dataclass_types:\n Dataclass type, or list of dataclass types for which we will \"fill\" instances\n with the parsed args.\n kwargs:\n (Optional) Passed to `argparse.ArgumentParser()` in the regular way.\n \"\"\"\n super().__init__(**kwargs)\n if dataclasses.is_dataclass(dataclass_types):\n dataclass_types = [dataclass_types]\n self.dataclass_types = dataclass_types\n for dtype in self.dataclass_types:\n self._add_dataclass_arguments(dtype)\n\n def _add_dataclass_arguments(self, dtype: DataClassType):\n for field in dataclasses.fields(dtype):\n field_name = f\"--{field.name}\"\n kwargs = field.metadata.copy()\n # field.metadata is not used at all by Data Classes,\n # it is provided as a third-party extension mechanism.\n if isinstance(field.type, str):\n raise ImportError(\n \"This implementation is not compatible with Postponed Evaluation of Annotations (PEP 563),\"\n \"which can be opted in from Python 3.7 with `from __future__ import annotations`.\"\n \"We will add compatibility when Python 3.9 is released.\"\n )\n typestring = str(field.type)\n for prim_type in (int, float, str):\n for collection in (List,):\n if typestring == f\"typing.Union[{collection[prim_type]}, NoneType]\":\n field.type = collection[prim_type]\n if typestring == f\"typing.Union[{prim_type.__name__}, NoneType]\":\n field.type = prim_type\n\n if isinstance(field.type, type) and issubclass(field.type, Enum):\n kwargs[\"choices\"] = list(field.type)\n kwargs[\"type\"] = field.type\n if field.default is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default\n elif field.type is bool or field.type is Optional[bool]:\n kwargs[\"action\"] = \"store_false\" if field.default is True else \"store_true\"\n if field.default is True:\n field_name = f\"--no-{field.name}\"\n kwargs[\"dest\"] = field.name\n elif hasattr(field.type, \"__origin__\") and issubclass(field.type.__origin__, List):\n kwargs[\"nargs\"] = \"+\"\n kwargs[\"type\"] = field.type.__args__[0]\n assert all(\n x == kwargs[\"type\"] for x in field.type.__args__\n ), \"{} cannot be a List of mixed types\".format(field.name)\n if field.default_factory is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default_factory()\n else:\n kwargs[\"type\"] = field.type\n if field.default is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default\n elif field.default_factory is not dataclasses.MISSING:\n kwargs[\"default\"] = field.default_factory()\n else:\n kwargs[\"required\"] = True\n self.add_argument(field_name, **kwargs)\n\n def parse_args_into_dataclasses(\n self, args=None, return_remaining_strings=False, look_for_args_file=True\n ) -> Tuple[DataClass, ...]:\n \"\"\"\n Parse command-line args into instances of the specified dataclass types.\n\n This relies on argparse's `ArgumentParser.parse_known_args`.\n See the doc at:\n docs.python.org/3.7/library/argparse.html#argparse.ArgumentParser.parse_args\n\n Args:\n args:\n List of strings to parse. 
The default is taken from sys.argv.\n (same as argparse.ArgumentParser)\n return_remaining_strings:\n If true, also return a list of remaining argument strings.\n look_for_args_file:\n If true, will look for a \".args\" file with the same base name\n as the entry point script for this process, and will append its\n potential content to the command line args.\n\n Returns:\n Tuple consisting of:\n - the dataclass instances in the same order as they\n were passed to the initializer.abspath\n - if applicable, an additional namespace for more\n (non-dataclass backed) arguments added to the parser\n after initialization.\n - The potential list of remaining argument strings.\n (same as argparse.ArgumentParser.parse_known_args)\n \"\"\"\n if look_for_args_file and len(sys.argv):\n args_file = Path(sys.argv[0]).with_suffix(\".args\")\n if args_file.exists():\n fargs = args_file.read_text().split()\n args = fargs + args if args is not None else fargs + sys.argv[1:]\n # in case of duplicate arguments the first one has precedence\n # so we append rather than prepend.\n namespace, remaining_args = self.parse_known_args(args=args)\n outputs = []\n for dtype in self.dataclass_types:\n keys = {f.name for f in dataclasses.fields(dtype)}\n inputs = {k: v for k, v in vars(namespace).items() if k in keys}\n for k in keys:\n delattr(namespace, k)\n obj = dtype(**inputs)\n outputs.append(obj)\n if len(namespace.__dict__) > 0:\n # additional namespace.\n outputs.append(namespace)\n if return_remaining_strings:\n return (*outputs, remaining_args)\n else:\n if remaining_args:\n raise ValueError(f\"Some specified arguments are not used by the HfArgumentParser: {remaining_args}\")\n\n return (*outputs,)\n\n def parse_json_file(self, json_file: str) -> Tuple[DataClass, ...]:\n \"\"\"\n Alternative helper method that does not use `argparse` at all,\n instead loading a json file and populating the dataclass types.\n \"\"\"\n data = json.loads(Path(json_file).read_text())\n outputs = []\n for dtype in self.dataclass_types:\n keys = {f.name for f in dataclasses.fields(dtype)}\n inputs = {k: v for k, v in data.items() if k in keys}\n obj = dtype(**inputs)\n outputs.append(obj)\n return (*outputs,)\n\n def parse_dict(self, args: dict) -> Tuple[DataClass, ...]:\n \"\"\"\n Alternative helper method that does not use `argparse` at all,\n instead uses a dict and populating the dataclass types.\n \"\"\"\n outputs = []\n for dtype in self.dataclass_types:\n keys = {f.name for f in dataclasses.fields(dtype)}\n inputs = {k: v for k, v in args.items() if k in keys}\n obj = dtype(**inputs)\n outputs.append(obj)\n return (*outputs,)\n", "path": "src/transformers/hf_argparser.py"}]}
| 3,367 | 224 |
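The one-line change above makes `Optional[bool]` fields take the same `store_true`/`store_false` path as plain `bool`, instead of falling through to the `issubclass(field.type.__origin__, List)` check that raised the `TypeError` in the traceback. A minimal sketch of the resulting behaviour, assuming a transformers build with this patch applied; `DemoArguments` is a made-up dataclass for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser


@dataclass
class DemoArguments:
    # An Optional[bool] field like the ones that triggered the crash; with the
    # patch it is registered as a simple store_true flag.
    use_xla: Optional[bool] = field(default=None, metadata={"help": "illustrative flag"})


parser = HfArgumentParser(DemoArguments)
(args,) = parser.parse_args_into_dataclasses(args=["--use_xla"])
print(args.use_xla)  # True, because passing the flag fires the store_true action
```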
gh_patches_debug_2751
|
rasdani/github-patches
|
git_diff
|
abey79__vpype-607
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Default to QT_QPA_PLATFORM=xcb on Linux/Wayland
If we detect a Linux box running on Wayland, we should force Qt to use the xcb platform, as the Wayland backend doesn't work properly with moderngl.
This may be a good way to detect Wayland:
```
XDG_SESSION_TYPE=wayland
```
Relevant discussions:
- https://github.com/abey79/vsketch/issues/353
- https://discord.com/channels/550302843777712148/696045774970028062/1072436292798926868
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vpype_viewer/qtviewer/__init__.py`
Content:
```
1 from .viewer import *
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vpype_viewer/qtviewer/__init__.py b/vpype_viewer/qtviewer/__init__.py
--- a/vpype_viewer/qtviewer/__init__.py
+++ b/vpype_viewer/qtviewer/__init__.py
@@ -1 +1,18 @@
+def _check_wayland():
+ """Fix QT env variable on Wayland-based systems.
+
+ See https://github.com/abey79/vpype/issues/596
+ """
+ import os
+ import sys
+
+ if sys.platform.startswith("linux"):
+ if os.environ.get("XDG_SESSION_TYPE", "") == "wayland":
+ if "QT_QPA_PLATFORM" not in os.environ:
+ os.environ["QT_QPA_PLATFORM"] = "xcb"
+
+
+_check_wayland()
+
+
from .viewer import *
|
{"golden_diff": "diff --git a/vpype_viewer/qtviewer/__init__.py b/vpype_viewer/qtviewer/__init__.py\n--- a/vpype_viewer/qtviewer/__init__.py\n+++ b/vpype_viewer/qtviewer/__init__.py\n@@ -1 +1,18 @@\n+def _check_wayland():\n+ \"\"\"Fix QT env variable on Wayland-based systems.\n+\n+ See https://github.com/abey79/vpype/issues/596\n+ \"\"\"\n+ import os\n+ import sys\n+\n+ if sys.platform.startswith(\"linux\"):\n+ if os.environ.get(\"XDG_SESSION_TYPE\", \"\") == \"wayland\":\n+ if \"QT_QPA_PLATFORM\" not in os.environ:\n+ os.environ[\"QT_QPA_PLATFORM\"] = \"xcb\"\n+\n+\n+_check_wayland()\n+\n+\n from .viewer import *\n", "issue": "Default to QT_QPA_PLATFORM=xcb on Linux/Wayland\nIf we detect a linux box running on wayland, we should force Qt to use the xcb platform as the wayland backend doesn't work properly with moderngl.\r\n\r\nThis maybe a good way to detect wayland:\r\n```\r\nXDG_SESSION_TYPE=wayland\r\n```\r\n\r\nRelevant discussions:\r\n- https://github.com/abey79/vsketch/issues/353\r\n- https://discord.com/channels/550302843777712148/696045774970028062/1072436292798926868\n", "before_files": [{"content": "from .viewer import *\n", "path": "vpype_viewer/qtviewer/__init__.py"}], "after_files": [{"content": "def _check_wayland():\n \"\"\"Fix QT env variable on Wayland-based systems.\n\n See https://github.com/abey79/vpype/issues/596\n \"\"\"\n import os\n import sys\n\n if sys.platform.startswith(\"linux\"):\n if os.environ.get(\"XDG_SESSION_TYPE\", \"\") == \"wayland\":\n if \"QT_QPA_PLATFORM\" not in os.environ:\n os.environ[\"QT_QPA_PLATFORM\"] = \"xcb\"\n\n\n_check_wayland()\n\n\nfrom .viewer import *\n", "path": "vpype_viewer/qtviewer/__init__.py"}]}
| 428 | 186 |
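The guard added above is small but order-sensitive: the environment variable has to be set before Qt initializes its platform plugin. A standalone sketch of the same workaround, usable from any Qt/moderngl entry point (the function name is illustrative):

```python
import os
import sys


def force_xcb_on_wayland() -> None:
    """Prefer the xcb platform plugin when running under a Wayland session."""
    if sys.platform.startswith("linux") and os.environ.get("XDG_SESSION_TYPE", "") == "wayland":
        # Respect an explicit user choice; only fill in a default.
        os.environ.setdefault("QT_QPA_PLATFORM", "xcb")


force_xcb_on_wayland()  # must run before any Qt application object is created
```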
gh_patches_debug_16213
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-925
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] asio/1.13.0: broken on Android
Commit: https://github.com/conan-io/conan-center-index/commit/804be2ad15b2139960fe10efcd6667d1f2dd2e98 breaks Android because there is no `-lpthread`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/asio/all/conanfile.py`
Content:
```
1 import os
2 from conans import ConanFile, tools
3
4
5 class Asio(ConanFile):
6 name = "asio"
7 url = "https://github.com/conan-io/conan-center-index"
8 homepage = "http://think-async.com/Asio"
9 description = "Asio is a cross-platform C++ library for network and low-level I/O"
10 topics = ("conan", "asio", "network", "io", "low-level")
11 license = "BSL-1.0"
12
13 no_copy_source = True
14 _source_subfolder = "source_subfolder"
15
16 def source(self):
17 tools.get(**self.conan_data["sources"][self.version])
18 archive_name = "asio-" + self.version.replace(".", "-")
19 extracted_name = "asio-" + archive_name
20 os.rename(extracted_name, self._source_subfolder)
21
22 def package(self):
23 root_dir = os.path.join(self._source_subfolder, self.name)
24 include_dir = os.path.join(root_dir, "include")
25 self.copy(pattern="LICENSE_1_0.txt", dst="licenses", src=root_dir)
26 self.copy(pattern="*.hpp", dst="include", src=include_dir)
27 self.copy(pattern="*.ipp", dst="include", src=include_dir)
28
29 def package_info(self):
30 self.cpp_info.defines.append('ASIO_STANDALONE')
31 if tools.os_info.is_linux:
32 self.cpp_info.libs.append('pthread')
33
34 def package_id(self):
35 self.info.header_only()
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/asio/all/conanfile.py b/recipes/asio/all/conanfile.py
--- a/recipes/asio/all/conanfile.py
+++ b/recipes/asio/all/conanfile.py
@@ -8,6 +8,7 @@
homepage = "http://think-async.com/Asio"
description = "Asio is a cross-platform C++ library for network and low-level I/O"
topics = ("conan", "asio", "network", "io", "low-level")
+ settings = "os"
license = "BSL-1.0"
no_copy_source = True
@@ -28,7 +29,7 @@
def package_info(self):
self.cpp_info.defines.append('ASIO_STANDALONE')
- if tools.os_info.is_linux:
+ if str(self.settings.os) in ["Linux", "Android"]:
self.cpp_info.libs.append('pthread')
def package_id(self):
|
{"golden_diff": "diff --git a/recipes/asio/all/conanfile.py b/recipes/asio/all/conanfile.py\n--- a/recipes/asio/all/conanfile.py\n+++ b/recipes/asio/all/conanfile.py\n@@ -8,6 +8,7 @@\n homepage = \"http://think-async.com/Asio\"\n description = \"Asio is a cross-platform C++ library for network and low-level I/O\"\n topics = (\"conan\", \"asio\", \"network\", \"io\", \"low-level\")\n+ settings = \"os\"\n license = \"BSL-1.0\"\n \n no_copy_source = True\n@@ -28,7 +29,7 @@\n \n def package_info(self):\n self.cpp_info.defines.append('ASIO_STANDALONE')\n- if tools.os_info.is_linux:\n+ if str(self.settings.os) in [\"Linux\", \"Android\"]:\n self.cpp_info.libs.append('pthread')\n \n def package_id(self):\n", "issue": "[package] asio/1.13.0: broken on Andriod\nCommit: https://github.com/conan-io/conan-center-index/commit/804be2ad15b2139960fe10efcd6667d1f2dd2e98 breaks Android because there is no `-lpthread` \n", "before_files": [{"content": "import os\nfrom conans import ConanFile, tools\n\n\nclass Asio(ConanFile):\n name = \"asio\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"http://think-async.com/Asio\"\n description = \"Asio is a cross-platform C++ library for network and low-level I/O\"\n topics = (\"conan\", \"asio\", \"network\", \"io\", \"low-level\")\n license = \"BSL-1.0\"\n\n no_copy_source = True\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n archive_name = \"asio-\" + self.version.replace(\".\", \"-\")\n extracted_name = \"asio-\" + archive_name\n os.rename(extracted_name, self._source_subfolder)\n\n def package(self):\n root_dir = os.path.join(self._source_subfolder, self.name)\n include_dir = os.path.join(root_dir, \"include\")\n self.copy(pattern=\"LICENSE_1_0.txt\", dst=\"licenses\", src=root_dir)\n self.copy(pattern=\"*.hpp\", dst=\"include\", src=include_dir)\n self.copy(pattern=\"*.ipp\", dst=\"include\", src=include_dir)\n\n def package_info(self):\n self.cpp_info.defines.append('ASIO_STANDALONE')\n if tools.os_info.is_linux:\n self.cpp_info.libs.append('pthread')\n\n def package_id(self):\n self.info.header_only()\n", "path": "recipes/asio/all/conanfile.py"}], "after_files": [{"content": "import os\nfrom conans import ConanFile, tools\n\n\nclass Asio(ConanFile):\n name = \"asio\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"http://think-async.com/Asio\"\n description = \"Asio is a cross-platform C++ library for network and low-level I/O\"\n topics = (\"conan\", \"asio\", \"network\", \"io\", \"low-level\")\n settings = \"os\"\n license = \"BSL-1.0\"\n\n no_copy_source = True\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n archive_name = \"asio-\" + self.version.replace(\".\", \"-\")\n extracted_name = \"asio-\" + archive_name\n os.rename(extracted_name, self._source_subfolder)\n\n def package(self):\n root_dir = os.path.join(self._source_subfolder, self.name)\n include_dir = os.path.join(root_dir, \"include\")\n self.copy(pattern=\"LICENSE_1_0.txt\", dst=\"licenses\", src=root_dir)\n self.copy(pattern=\"*.hpp\", dst=\"include\", src=include_dir)\n self.copy(pattern=\"*.ipp\", dst=\"include\", src=include_dir)\n\n def package_info(self):\n self.cpp_info.defines.append('ASIO_STANDALONE')\n if str(self.settings.os) in [\"Linux\", \"Android\"]:\n self.cpp_info.libs.append('pthread')\n\n def package_id(self):\n self.info.header_only()\n", "path": 
"recipes/asio/all/conanfile.py"}]}
| 733 | 213 |
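The substantive change above is consulting `self.settings.os` (after declaring `settings = "os"`) instead of `tools.os_info`: the latter describes the machine running Conan, while the former is the target platform the package is built for, which is what a recipe should key its link requirements on. A hedged sketch of the distinction; `DemoConan` is a made-up recipe for illustration:

```python
from conans import ConanFile, tools


class DemoConan(ConanFile):
    name = "demo"
    version = "0.1"
    settings = "os"  # without this declaration, self.settings.os is not populated

    def package_info(self):
        # Build-host detection: True whenever Conan itself runs on Linux,
        # regardless of which OS the profile targets.
        host_is_linux = tools.os_info.is_linux
        # Target detection: follows the profile, e.g. "Android" when cross-building.
        target_os = str(self.settings.os)
        self.output.info("host is linux: %s, target os: %s" % (host_is_linux, target_os))
```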
gh_patches_debug_19513
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2358
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expose `sfqid` attribute in snowflake exception messages
When contacting Snowflake support, they always start the conversation with `can you provide a query id`. Exposing this id in all cases would be useful when contacting Snowflake support.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/snowflake/dbt/adapters/snowflake/connections.py`
Content:
```
1 import base64
2 import datetime
3 import pytz
4 import re
5 from contextlib import contextmanager
6 from dataclasses import dataclass
7 from io import StringIO
8 from typing import Optional
9
10 from cryptography.hazmat.backends import default_backend
11 from cryptography.hazmat.primitives import serialization
12 import requests
13 import snowflake.connector
14 import snowflake.connector.errors
15
16 from dbt.exceptions import (
17 InternalException, RuntimeException, FailedToConnectException,
18 DatabaseException, warn_or_error
19 )
20 from dbt.adapters.base import Credentials
21 from dbt.adapters.sql import SQLConnectionManager
22 from dbt.logger import GLOBAL_LOGGER as logger
23
24
25 _TOKEN_REQUEST_URL = 'https://{}.snowflakecomputing.com/oauth/token-request'
26
27
28 @dataclass
29 class SnowflakeCredentials(Credentials):
30 account: str
31 user: str
32 warehouse: Optional[str]
33 role: Optional[str]
34 password: Optional[str]
35 authenticator: Optional[str]
36 private_key_path: Optional[str]
37 private_key_passphrase: Optional[str]
38 token: Optional[str]
39 oauth_client_id: Optional[str]
40 oauth_client_secret: Optional[str]
41 client_session_keep_alive: bool = False
42
43 def __post_init__(self):
44 if (
45 self.authenticator != 'oauth' and
46 (self.oauth_client_secret or self.oauth_client_id or self.token)
47 ):
48 # the user probably forgot to set 'authenticator' like I keep doing
49 warn_or_error(
50 'Authenticator is not set to oauth, but an oauth-only '
51 'parameter is set! Did you mean to set authenticator: oauth?'
52 )
53
54 @property
55 def type(self):
56 return 'snowflake'
57
58 def _connection_keys(self):
59 return (
60 'account', 'user', 'database', 'schema', 'warehouse', 'role',
61 'client_session_keep_alive'
62 )
63
64 def auth_args(self):
65 # Pull all of the optional authentication args for the connector,
66 # let connector handle the actual arg validation
67 result = {}
68 if self.password:
69 result['password'] = self.password
70 if self.authenticator:
71 result['authenticator'] = self.authenticator
72 if self.authenticator == 'oauth':
73 token = self.token
74 # if we have a client ID/client secret, the token is a refresh
75 # token, not an access token
76 if self.oauth_client_id and self.oauth_client_secret:
77 token = self._get_access_token()
78 elif self.oauth_client_id:
79 warn_or_error(
80 'Invalid profile: got an oauth_client_id, but not an '
81 'oauth_client_secret!'
82 )
83 elif self.oauth_client_secret:
84 warn_or_error(
85 'Invalid profile: got an oauth_client_secret, but not '
86 'an oauth_client_id!'
87 )
88
89 result['token'] = token
90 result['private_key'] = self._get_private_key()
91 return result
92
93 def _get_access_token(self) -> str:
94 if self.authenticator != 'oauth':
95 raise InternalException('Can only get access tokens for oauth')
96 missing = any(
97 x is None for x in
98 (self.oauth_client_id, self.oauth_client_secret, self.token)
99 )
100 if missing:
101 raise InternalException(
102 'need a client ID a client secret, and a refresh token to get '
103 'an access token'
104 )
105 # should the full url be a config item?
106 token_url = _TOKEN_REQUEST_URL.format(self.account)
107 # I think this is only used to redirect on success, which we ignore
108 # (it does not have to match the integration's settings in snowflake)
109 redirect_uri = 'http://localhost:9999'
110 data = {
111 'grant_type': 'refresh_token',
112 'refresh_token': self.token,
113 'redirect_uri': redirect_uri
114 }
115
116 auth = base64.b64encode(
117 f'{self.oauth_client_id}:{self.oauth_client_secret}'
118 .encode('ascii')
119 ).decode('ascii')
120 headers = {
121 'Authorization': f'Basic {auth}',
122 'Content-type': 'application/x-www-form-urlencoded;charset=utf-8'
123 }
124 result = requests.post(token_url, headers=headers, data=data)
125 result_json = result.json()
126 if 'access_token' not in result_json:
127 raise DatabaseException(f'Did not get a token: {result_json}')
128 return result_json['access_token']
129
130 def _get_private_key(self):
131 """Get Snowflake private key by path or None."""
132 if not self.private_key_path:
133 return None
134
135 if self.private_key_passphrase:
136 encoded_passphrase = self.private_key_passphrase.encode()
137 else:
138 encoded_passphrase = None
139
140 with open(self.private_key_path, 'rb') as key:
141 p_key = serialization.load_pem_private_key(
142 key.read(),
143 password=encoded_passphrase,
144 backend=default_backend())
145
146 return p_key.private_bytes(
147 encoding=serialization.Encoding.DER,
148 format=serialization.PrivateFormat.PKCS8,
149 encryption_algorithm=serialization.NoEncryption())
150
151
152 class SnowflakeConnectionManager(SQLConnectionManager):
153 TYPE = 'snowflake'
154
155 @contextmanager
156 def exception_handler(self, sql):
157 try:
158 yield
159 except snowflake.connector.errors.ProgrammingError as e:
160 msg = str(e)
161
162 logger.debug('Snowflake error: {}'.format(msg))
163
164 if 'Empty SQL statement' in msg:
165 logger.debug("got empty sql statement, moving on")
166 elif 'This session does not have a current database' in msg:
167 self.release()
168 raise FailedToConnectException(
169 ('{}\n\nThis error sometimes occurs when invalid '
170 'credentials are provided, or when your default role '
171 'does not have access to use the specified database. '
172 'Please double check your profile and try again.')
173 .format(msg))
174 else:
175 self.release()
176 raise DatabaseException(msg)
177 except Exception as e:
178 logger.debug("Error running SQL: {}", sql)
179 logger.debug("Rolling back transaction.")
180 self.release()
181 if isinstance(e, RuntimeException):
182 # during a sql query, an internal to dbt exception was raised.
183 # this sounds a lot like a signal handler and probably has
184 # useful information, so raise it without modification.
185 raise
186 raise RuntimeException(str(e)) from e
187
188 @classmethod
189 def open(cls, connection):
190 if connection.state == 'open':
191 logger.debug('Connection is already open, skipping open.')
192 return connection
193
194 try:
195 creds = connection.credentials
196
197 handle = snowflake.connector.connect(
198 account=creds.account,
199 user=creds.user,
200 database=creds.database,
201 schema=creds.schema,
202 warehouse=creds.warehouse,
203 role=creds.role,
204 autocommit=False,
205 client_session_keep_alive=creds.client_session_keep_alive,
206 application='dbt',
207 **creds.auth_args()
208 )
209
210 connection.handle = handle
211 connection.state = 'open'
212 except snowflake.connector.errors.Error as e:
213 logger.debug("Got an error when attempting to open a snowflake "
214 "connection: '{}'"
215 .format(e))
216
217 connection.handle = None
218 connection.state = 'fail'
219
220 raise FailedToConnectException(str(e))
221
222 def cancel(self, connection):
223 handle = connection.handle
224 sid = handle.session_id
225
226 connection_name = connection.name
227
228 sql = 'select system$abort_session({})'.format(sid)
229
230 logger.debug("Cancelling query '{}' ({})".format(connection_name, sid))
231
232 _, cursor = self.add_query(sql)
233 res = cursor.fetchone()
234
235 logger.debug("Cancel query '{}': {}".format(connection_name, res))
236
237 @classmethod
238 def get_status(cls, cursor):
239 state = cursor.sqlstate
240
241 if state is None:
242 state = 'SUCCESS'
243
244 return "{} {}".format(state, cursor.rowcount)
245
246 @classmethod
247 def _split_queries(cls, sql):
248 "Splits sql statements at semicolons into discrete queries"
249
250 sql_s = str(sql)
251 sql_buf = StringIO(sql_s)
252 split_query = snowflake.connector.util_text.split_statements(sql_buf)
253 return [part[0] for part in split_query]
254
255 @classmethod
256 def process_results(cls, column_names, rows):
257 # Override for Snowflake. The datetime objects returned by
258 # snowflake-connector-python are not pickleable, so we need
259 # to replace them with sane timezones
260 fixed = []
261 for row in rows:
262 fixed_row = []
263 for col in row:
264 if isinstance(col, datetime.datetime) and col.tzinfo:
265 offset = col.utcoffset()
266 offset_seconds = offset.total_seconds()
267 new_timezone = pytz.FixedOffset(offset_seconds // 60)
268 col = col.astimezone(tz=new_timezone)
269 fixed_row.append(col)
270
271 fixed.append(fixed_row)
272
273 return super().process_results(column_names, fixed)
274
275 def add_query(self, sql, auto_begin=True,
276 bindings=None, abridge_sql_log=False):
277
278 connection = None
279 cursor = None
280
281 if bindings:
282 # The snowflake connector is more strict than, eg., psycopg2 -
283 # which allows any iterable thing to be passed as a binding.
284 bindings = tuple(bindings)
285
286 queries = self._split_queries(sql)
287
288 for individual_query in queries:
289 # hack -- after the last ';', remove comments and don't run
290 # empty queries. this avoids using exceptions as flow control,
291 # and also allows us to return the status of the last cursor
292 without_comments = re.sub(
293 re.compile('^.*(--.*)$', re.MULTILINE),
294 '', individual_query).strip()
295
296 if without_comments == "":
297 continue
298
299 connection, cursor = super().add_query(
300 individual_query, auto_begin,
301 bindings=bindings,
302 abridge_sql_log=abridge_sql_log
303 )
304
305 if cursor is None:
306 conn = self.get_thread_connection()
307 if conn is None or conn.name is None:
308 conn_name = '<None>'
309 else:
310 conn_name = conn.name
311
312 raise RuntimeException(
313 "Tried to run an empty query on model '{}'. If you are "
314 "conditionally running\nsql, eg. in a model hook, make "
315 "sure your `else` clause contains valid sql!\n\n"
316 "Provided SQL:\n{}"
317 .format(conn_name, sql)
318 )
319
320 return connection, cursor
321
322 @classmethod
323 def _rollback_handle(cls, connection):
324 """On snowflake, rolling back the handle of an aborted session raises
325 an exception.
326 """
327 logger.debug('initiating rollback')
328 try:
329 connection.handle.rollback()
330 except snowflake.connector.errors.ProgrammingError as e:
331 msg = str(e)
332 if 'Session no longer exists' not in msg:
333 raise
334
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/snowflake/dbt/adapters/snowflake/connections.py b/plugins/snowflake/dbt/adapters/snowflake/connections.py
--- a/plugins/snowflake/dbt/adapters/snowflake/connections.py
+++ b/plugins/snowflake/dbt/adapters/snowflake/connections.py
@@ -159,6 +159,7 @@
except snowflake.connector.errors.ProgrammingError as e:
msg = str(e)
+ logger.debug('Snowflake query id: {}'.format(e.sfqid))
logger.debug('Snowflake error: {}'.format(msg))
if 'Empty SQL statement' in msg:
@@ -175,6 +176,9 @@
self.release()
raise DatabaseException(msg)
except Exception as e:
+ if isinstance(e, snowflake.connector.errors.Error):
+ logger.debug('Snowflake query id: {}'.format(e.sfqid))
+
logger.debug("Error running SQL: {}", sql)
logger.debug("Rolling back transaction.")
self.release()
|
{"golden_diff": "diff --git a/plugins/snowflake/dbt/adapters/snowflake/connections.py b/plugins/snowflake/dbt/adapters/snowflake/connections.py\n--- a/plugins/snowflake/dbt/adapters/snowflake/connections.py\n+++ b/plugins/snowflake/dbt/adapters/snowflake/connections.py\n@@ -159,6 +159,7 @@\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n \n+ logger.debug('Snowflake query id: {}'.format(e.sfqid))\n logger.debug('Snowflake error: {}'.format(msg))\n \n if 'Empty SQL statement' in msg:\n@@ -175,6 +176,9 @@\n self.release()\n raise DatabaseException(msg)\n except Exception as e:\n+ if isinstance(e, snowflake.connector.errors.Error):\n+ logger.debug('Snowflake query id: {}'.format(e.sfqid))\n+\n logger.debug(\"Error running SQL: {}\", sql)\n logger.debug(\"Rolling back transaction.\")\n self.release()\n", "issue": "Expose `sfqid` attribute in snowflake exception messages\nWhen contacting snowflake support they always start the conversation with `can you provide a query id`. Exposing this id in all cases would be useful when contacting snowflake support.\n", "before_files": [{"content": "import base64\nimport datetime\nimport pytz\nimport re\nfrom contextlib import contextmanager\nfrom dataclasses import dataclass\nfrom io import StringIO\nfrom typing import Optional\n\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives import serialization\nimport requests\nimport snowflake.connector\nimport snowflake.connector.errors\n\nfrom dbt.exceptions import (\n InternalException, RuntimeException, FailedToConnectException,\n DatabaseException, warn_or_error\n)\nfrom dbt.adapters.base import Credentials\nfrom dbt.adapters.sql import SQLConnectionManager\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\n\n_TOKEN_REQUEST_URL = 'https://{}.snowflakecomputing.com/oauth/token-request'\n\n\n@dataclass\nclass SnowflakeCredentials(Credentials):\n account: str\n user: str\n warehouse: Optional[str]\n role: Optional[str]\n password: Optional[str]\n authenticator: Optional[str]\n private_key_path: Optional[str]\n private_key_passphrase: Optional[str]\n token: Optional[str]\n oauth_client_id: Optional[str]\n oauth_client_secret: Optional[str]\n client_session_keep_alive: bool = False\n\n def __post_init__(self):\n if (\n self.authenticator != 'oauth' and\n (self.oauth_client_secret or self.oauth_client_id or self.token)\n ):\n # the user probably forgot to set 'authenticator' like I keep doing\n warn_or_error(\n 'Authenticator is not set to oauth, but an oauth-only '\n 'parameter is set! 
Did you mean to set authenticator: oauth?'\n )\n\n @property\n def type(self):\n return 'snowflake'\n\n def _connection_keys(self):\n return (\n 'account', 'user', 'database', 'schema', 'warehouse', 'role',\n 'client_session_keep_alive'\n )\n\n def auth_args(self):\n # Pull all of the optional authentication args for the connector,\n # let connector handle the actual arg validation\n result = {}\n if self.password:\n result['password'] = self.password\n if self.authenticator:\n result['authenticator'] = self.authenticator\n if self.authenticator == 'oauth':\n token = self.token\n # if we have a client ID/client secret, the token is a refresh\n # token, not an access token\n if self.oauth_client_id and self.oauth_client_secret:\n token = self._get_access_token()\n elif self.oauth_client_id:\n warn_or_error(\n 'Invalid profile: got an oauth_client_id, but not an '\n 'oauth_client_secret!'\n )\n elif self.oauth_client_secret:\n warn_or_error(\n 'Invalid profile: got an oauth_client_secret, but not '\n 'an oauth_client_id!'\n )\n\n result['token'] = token\n result['private_key'] = self._get_private_key()\n return result\n\n def _get_access_token(self) -> str:\n if self.authenticator != 'oauth':\n raise InternalException('Can only get access tokens for oauth')\n missing = any(\n x is None for x in\n (self.oauth_client_id, self.oauth_client_secret, self.token)\n )\n if missing:\n raise InternalException(\n 'need a client ID a client secret, and a refresh token to get '\n 'an access token'\n )\n # should the full url be a config item?\n token_url = _TOKEN_REQUEST_URL.format(self.account)\n # I think this is only used to redirect on success, which we ignore\n # (it does not have to match the integration's settings in snowflake)\n redirect_uri = 'http://localhost:9999'\n data = {\n 'grant_type': 'refresh_token',\n 'refresh_token': self.token,\n 'redirect_uri': redirect_uri\n }\n\n auth = base64.b64encode(\n f'{self.oauth_client_id}:{self.oauth_client_secret}'\n .encode('ascii')\n ).decode('ascii')\n headers = {\n 'Authorization': f'Basic {auth}',\n 'Content-type': 'application/x-www-form-urlencoded;charset=utf-8'\n }\n result = requests.post(token_url, headers=headers, data=data)\n result_json = result.json()\n if 'access_token' not in result_json:\n raise DatabaseException(f'Did not get a token: {result_json}')\n return result_json['access_token']\n\n def _get_private_key(self):\n \"\"\"Get Snowflake private key by path or None.\"\"\"\n if not self.private_key_path:\n return None\n\n if self.private_key_passphrase:\n encoded_passphrase = self.private_key_passphrase.encode()\n else:\n encoded_passphrase = None\n\n with open(self.private_key_path, 'rb') as key:\n p_key = serialization.load_pem_private_key(\n key.read(),\n password=encoded_passphrase,\n backend=default_backend())\n\n return p_key.private_bytes(\n encoding=serialization.Encoding.DER,\n format=serialization.PrivateFormat.PKCS8,\n encryption_algorithm=serialization.NoEncryption())\n\n\nclass SnowflakeConnectionManager(SQLConnectionManager):\n TYPE = 'snowflake'\n\n @contextmanager\n def exception_handler(self, sql):\n try:\n yield\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n\n logger.debug('Snowflake error: {}'.format(msg))\n\n if 'Empty SQL statement' in msg:\n logger.debug(\"got empty sql statement, moving on\")\n elif 'This session does not have a current database' in msg:\n self.release()\n raise FailedToConnectException(\n ('{}\\n\\nThis error sometimes occurs when invalid '\n 'credentials are 
provided, or when your default role '\n 'does not have access to use the specified database. '\n 'Please double check your profile and try again.')\n .format(msg))\n else:\n self.release()\n raise DatabaseException(msg)\n except Exception as e:\n logger.debug(\"Error running SQL: {}\", sql)\n logger.debug(\"Rolling back transaction.\")\n self.release()\n if isinstance(e, RuntimeException):\n # during a sql query, an internal to dbt exception was raised.\n # this sounds a lot like a signal handler and probably has\n # useful information, so raise it without modification.\n raise\n raise RuntimeException(str(e)) from e\n\n @classmethod\n def open(cls, connection):\n if connection.state == 'open':\n logger.debug('Connection is already open, skipping open.')\n return connection\n\n try:\n creds = connection.credentials\n\n handle = snowflake.connector.connect(\n account=creds.account,\n user=creds.user,\n database=creds.database,\n schema=creds.schema,\n warehouse=creds.warehouse,\n role=creds.role,\n autocommit=False,\n client_session_keep_alive=creds.client_session_keep_alive,\n application='dbt',\n **creds.auth_args()\n )\n\n connection.handle = handle\n connection.state = 'open'\n except snowflake.connector.errors.Error as e:\n logger.debug(\"Got an error when attempting to open a snowflake \"\n \"connection: '{}'\"\n .format(e))\n\n connection.handle = None\n connection.state = 'fail'\n\n raise FailedToConnectException(str(e))\n\n def cancel(self, connection):\n handle = connection.handle\n sid = handle.session_id\n\n connection_name = connection.name\n\n sql = 'select system$abort_session({})'.format(sid)\n\n logger.debug(\"Cancelling query '{}' ({})\".format(connection_name, sid))\n\n _, cursor = self.add_query(sql)\n res = cursor.fetchone()\n\n logger.debug(\"Cancel query '{}': {}\".format(connection_name, res))\n\n @classmethod\n def get_status(cls, cursor):\n state = cursor.sqlstate\n\n if state is None:\n state = 'SUCCESS'\n\n return \"{} {}\".format(state, cursor.rowcount)\n\n @classmethod\n def _split_queries(cls, sql):\n \"Splits sql statements at semicolons into discrete queries\"\n\n sql_s = str(sql)\n sql_buf = StringIO(sql_s)\n split_query = snowflake.connector.util_text.split_statements(sql_buf)\n return [part[0] for part in split_query]\n\n @classmethod\n def process_results(cls, column_names, rows):\n # Override for Snowflake. The datetime objects returned by\n # snowflake-connector-python are not pickleable, so we need\n # to replace them with sane timezones\n fixed = []\n for row in rows:\n fixed_row = []\n for col in row:\n if isinstance(col, datetime.datetime) and col.tzinfo:\n offset = col.utcoffset()\n offset_seconds = offset.total_seconds()\n new_timezone = pytz.FixedOffset(offset_seconds // 60)\n col = col.astimezone(tz=new_timezone)\n fixed_row.append(col)\n\n fixed.append(fixed_row)\n\n return super().process_results(column_names, fixed)\n\n def add_query(self, sql, auto_begin=True,\n bindings=None, abridge_sql_log=False):\n\n connection = None\n cursor = None\n\n if bindings:\n # The snowflake connector is more strict than, eg., psycopg2 -\n # which allows any iterable thing to be passed as a binding.\n bindings = tuple(bindings)\n\n queries = self._split_queries(sql)\n\n for individual_query in queries:\n # hack -- after the last ';', remove comments and don't run\n # empty queries. 
this avoids using exceptions as flow control,\n # and also allows us to return the status of the last cursor\n without_comments = re.sub(\n re.compile('^.*(--.*)$', re.MULTILINE),\n '', individual_query).strip()\n\n if without_comments == \"\":\n continue\n\n connection, cursor = super().add_query(\n individual_query, auto_begin,\n bindings=bindings,\n abridge_sql_log=abridge_sql_log\n )\n\n if cursor is None:\n conn = self.get_thread_connection()\n if conn is None or conn.name is None:\n conn_name = '<None>'\n else:\n conn_name = conn.name\n\n raise RuntimeException(\n \"Tried to run an empty query on model '{}'. If you are \"\n \"conditionally running\\nsql, eg. in a model hook, make \"\n \"sure your `else` clause contains valid sql!\\n\\n\"\n \"Provided SQL:\\n{}\"\n .format(conn_name, sql)\n )\n\n return connection, cursor\n\n @classmethod\n def _rollback_handle(cls, connection):\n \"\"\"On snowflake, rolling back the handle of an aborted session raises\n an exception.\n \"\"\"\n logger.debug('initiating rollback')\n try:\n connection.handle.rollback()\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n if 'Session no longer exists' not in msg:\n raise\n", "path": "plugins/snowflake/dbt/adapters/snowflake/connections.py"}], "after_files": [{"content": "import base64\nimport datetime\nimport pytz\nimport re\nfrom contextlib import contextmanager\nfrom dataclasses import dataclass\nfrom io import StringIO\nfrom typing import Optional\n\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives import serialization\nimport requests\nimport snowflake.connector\nimport snowflake.connector.errors\n\nfrom dbt.exceptions import (\n InternalException, RuntimeException, FailedToConnectException,\n DatabaseException, warn_or_error\n)\nfrom dbt.adapters.base import Credentials\nfrom dbt.adapters.sql import SQLConnectionManager\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\n\n_TOKEN_REQUEST_URL = 'https://{}.snowflakecomputing.com/oauth/token-request'\n\n\n@dataclass\nclass SnowflakeCredentials(Credentials):\n account: str\n user: str\n warehouse: Optional[str]\n role: Optional[str]\n password: Optional[str]\n authenticator: Optional[str]\n private_key_path: Optional[str]\n private_key_passphrase: Optional[str]\n token: Optional[str]\n oauth_client_id: Optional[str]\n oauth_client_secret: Optional[str]\n client_session_keep_alive: bool = False\n\n def __post_init__(self):\n if (\n self.authenticator != 'oauth' and\n (self.oauth_client_secret or self.oauth_client_id or self.token)\n ):\n # the user probably forgot to set 'authenticator' like I keep doing\n warn_or_error(\n 'Authenticator is not set to oauth, but an oauth-only '\n 'parameter is set! 
Did you mean to set authenticator: oauth?'\n )\n\n @property\n def type(self):\n return 'snowflake'\n\n def _connection_keys(self):\n return (\n 'account', 'user', 'database', 'schema', 'warehouse', 'role',\n 'client_session_keep_alive'\n )\n\n def auth_args(self):\n # Pull all of the optional authentication args for the connector,\n # let connector handle the actual arg validation\n result = {}\n if self.password:\n result['password'] = self.password\n if self.authenticator:\n result['authenticator'] = self.authenticator\n if self.authenticator == 'oauth':\n token = self.token\n # if we have a client ID/client secret, the token is a refresh\n # token, not an access token\n if self.oauth_client_id and self.oauth_client_secret:\n token = self._get_access_token()\n elif self.oauth_client_id:\n warn_or_error(\n 'Invalid profile: got an oauth_client_id, but not an '\n 'oauth_client_secret!'\n )\n elif self.oauth_client_secret:\n warn_or_error(\n 'Invalid profile: got an oauth_client_secret, but not '\n 'an oauth_client_id!'\n )\n\n result['token'] = token\n result['private_key'] = self._get_private_key()\n return result\n\n def _get_access_token(self) -> str:\n if self.authenticator != 'oauth':\n raise InternalException('Can only get access tokens for oauth')\n missing = any(\n x is None for x in\n (self.oauth_client_id, self.oauth_client_secret, self.token)\n )\n if missing:\n raise InternalException(\n 'need a client ID a client secret, and a refresh token to get '\n 'an access token'\n )\n # should the full url be a config item?\n token_url = _TOKEN_REQUEST_URL.format(self.account)\n # I think this is only used to redirect on success, which we ignore\n # (it does not have to match the integration's settings in snowflake)\n redirect_uri = 'http://localhost:9999'\n data = {\n 'grant_type': 'refresh_token',\n 'refresh_token': self.token,\n 'redirect_uri': redirect_uri\n }\n\n auth = base64.b64encode(\n f'{self.oauth_client_id}:{self.oauth_client_secret}'\n .encode('ascii')\n ).decode('ascii')\n headers = {\n 'Authorization': f'Basic {auth}',\n 'Content-type': 'application/x-www-form-urlencoded;charset=utf-8'\n }\n result = requests.post(token_url, headers=headers, data=data)\n result_json = result.json()\n if 'access_token' not in result_json:\n raise DatabaseException(f'Did not get a token: {result_json}')\n return result_json['access_token']\n\n def _get_private_key(self):\n \"\"\"Get Snowflake private key by path or None.\"\"\"\n if not self.private_key_path:\n return None\n\n if self.private_key_passphrase:\n encoded_passphrase = self.private_key_passphrase.encode()\n else:\n encoded_passphrase = None\n\n with open(self.private_key_path, 'rb') as key:\n p_key = serialization.load_pem_private_key(\n key.read(),\n password=encoded_passphrase,\n backend=default_backend())\n\n return p_key.private_bytes(\n encoding=serialization.Encoding.DER,\n format=serialization.PrivateFormat.PKCS8,\n encryption_algorithm=serialization.NoEncryption())\n\n\nclass SnowflakeConnectionManager(SQLConnectionManager):\n TYPE = 'snowflake'\n\n @contextmanager\n def exception_handler(self, sql):\n try:\n yield\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n\n logger.debug('Snowflake query id: {}'.format(e.sfqid))\n logger.debug('Snowflake error: {}'.format(msg))\n\n if 'Empty SQL statement' in msg:\n logger.debug(\"got empty sql statement, moving on\")\n elif 'This session does not have a current database' in msg:\n self.release()\n raise FailedToConnectException(\n ('{}\\n\\nThis error 
sometimes occurs when invalid '\n 'credentials are provided, or when your default role '\n 'does not have access to use the specified database. '\n 'Please double check your profile and try again.')\n .format(msg))\n else:\n self.release()\n raise DatabaseException(msg)\n except Exception as e:\n if isinstance(e, snowflake.connector.errors.Error):\n logger.debug('Snowflake query id: {}'.format(e.sfqid))\n\n logger.debug(\"Error running SQL: {}\", sql)\n logger.debug(\"Rolling back transaction.\")\n self.release()\n if isinstance(e, RuntimeException):\n # during a sql query, an internal to dbt exception was raised.\n # this sounds a lot like a signal handler and probably has\n # useful information, so raise it without modification.\n raise\n raise RuntimeException(str(e)) from e\n\n @classmethod\n def open(cls, connection):\n if connection.state == 'open':\n logger.debug('Connection is already open, skipping open.')\n return connection\n\n try:\n creds = connection.credentials\n\n handle = snowflake.connector.connect(\n account=creds.account,\n user=creds.user,\n database=creds.database,\n schema=creds.schema,\n warehouse=creds.warehouse,\n role=creds.role,\n autocommit=False,\n client_session_keep_alive=creds.client_session_keep_alive,\n application='dbt',\n **creds.auth_args()\n )\n\n connection.handle = handle\n connection.state = 'open'\n except snowflake.connector.errors.Error as e:\n logger.debug(\"Got an error when attempting to open a snowflake \"\n \"connection: '{}'\"\n .format(e))\n\n connection.handle = None\n connection.state = 'fail'\n\n raise FailedToConnectException(str(e))\n\n def cancel(self, connection):\n handle = connection.handle\n sid = handle.session_id\n\n connection_name = connection.name\n\n sql = 'select system$abort_session({})'.format(sid)\n\n logger.debug(\"Cancelling query '{}' ({})\".format(connection_name, sid))\n\n _, cursor = self.add_query(sql)\n res = cursor.fetchone()\n\n logger.debug(\"Cancel query '{}': {}\".format(connection_name, res))\n\n @classmethod\n def get_status(cls, cursor):\n state = cursor.sqlstate\n\n if state is None:\n state = 'SUCCESS'\n\n return \"{} {}\".format(state, cursor.rowcount)\n\n @classmethod\n def _split_queries(cls, sql):\n \"Splits sql statements at semicolons into discrete queries\"\n\n sql_s = str(sql)\n sql_buf = StringIO(sql_s)\n split_query = snowflake.connector.util_text.split_statements(sql_buf)\n return [part[0] for part in split_query]\n\n @classmethod\n def process_results(cls, column_names, rows):\n # Override for Snowflake. The datetime objects returned by\n # snowflake-connector-python are not pickleable, so we need\n # to replace them with sane timezones\n fixed = []\n for row in rows:\n fixed_row = []\n for col in row:\n if isinstance(col, datetime.datetime) and col.tzinfo:\n offset = col.utcoffset()\n offset_seconds = offset.total_seconds()\n new_timezone = pytz.FixedOffset(offset_seconds // 60)\n col = col.astimezone(tz=new_timezone)\n fixed_row.append(col)\n\n fixed.append(fixed_row)\n\n return super().process_results(column_names, fixed)\n\n def add_query(self, sql, auto_begin=True,\n bindings=None, abridge_sql_log=False):\n\n connection = None\n cursor = None\n\n if bindings:\n # The snowflake connector is more strict than, eg., psycopg2 -\n # which allows any iterable thing to be passed as a binding.\n bindings = tuple(bindings)\n\n queries = self._split_queries(sql)\n\n for individual_query in queries:\n # hack -- after the last ';', remove comments and don't run\n # empty queries. 
this avoids using exceptions as flow control,\n # and also allows us to return the status of the last cursor\n without_comments = re.sub(\n re.compile('^.*(--.*)$', re.MULTILINE),\n '', individual_query).strip()\n\n if without_comments == \"\":\n continue\n\n connection, cursor = super().add_query(\n individual_query, auto_begin,\n bindings=bindings,\n abridge_sql_log=abridge_sql_log\n )\n\n if cursor is None:\n conn = self.get_thread_connection()\n if conn is None or conn.name is None:\n conn_name = '<None>'\n else:\n conn_name = conn.name\n\n raise RuntimeException(\n \"Tried to run an empty query on model '{}'. If you are \"\n \"conditionally running\\nsql, eg. in a model hook, make \"\n \"sure your `else` clause contains valid sql!\\n\\n\"\n \"Provided SQL:\\n{}\"\n .format(conn_name, sql)\n )\n\n return connection, cursor\n\n @classmethod\n def _rollback_handle(cls, connection):\n \"\"\"On snowflake, rolling back the handle of an aborted session raises\n an exception.\n \"\"\"\n logger.debug('initiating rollback')\n try:\n connection.handle.rollback()\n except snowflake.connector.errors.ProgrammingError as e:\n msg = str(e)\n if 'Session no longer exists' not in msg:\n raise\n", "path": "plugins/snowflake/dbt/adapters/snowflake/connections.py"}]}
| 3,556 | 223 |
gh_patches_debug_18946
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-6730
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v2] `aws sso login` should not require a fully-configured profile
Currently, `aws sso login` operates on a particular profile, even [requiring that `sso_account_id` and `sso_role_name` be present in the profile](https://github.com/aws/aws-cli/blob/f2788558422dac42a5ebe37c7e5a3d24b19dee9f/awscli/customizations/sso/login.py#L32) even though it does not use them, [only fetching the token](https://github.com/aws/aws-cli/blob/f2788558422dac42a5ebe37c7e5a3d24b19dee9f/awscli/customizations/sso/utils.py#L45) (as it should, because AWS SSO-capable SDKs can use the token to get credentials for the appropriate account and role).
At the very least, `sso_account_id` and `sso_role_name` should be removed from the list of required config variables, which would allow a profile like:
```ini
[profile login]
sso_start_url = https://d-2e69cb2b10.awsapps.com/start
sso_region = us-east-2
```
and then `aws sso login --profile login` would just work without requiring a specific account and role that won't be used anyway.
This matters because not all users in an organization have the same permissions, so there's not a good way to provide them all with a single working config file to start from.
A better alternative would be to have AWS SSO configuration be explicit in the config file, perhaps with a new section type:
```ini
[sso default]
sso_start_url = https://d-2e69cb2b10.awsapps.com/start
sso_region = us-east-2
```
Or, `aws sso login` should check the configured profiles and if there's only one AWS SSO configuration (i.e., they all use the same start URL and region), it should just use that.
I've implemented the latter in [`aws-sso-util login`](https://github.com/benkehoe/aws-sso-util#logging-in-and-out).
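For illustration only, here is a minimal sketch of that profile-scanning idea, assuming the standard `~/.aws/config` layout and Python's `configparser`; the helper name is hypothetical and this is not the actual aws-sso-util or AWS CLI implementation:

```python
import configparser
import os


def find_unique_sso_config(config_path="~/.aws/config"):
    """Return (start_url, region) if all SSO-enabled profiles agree, else None."""
    parser = configparser.ConfigParser()
    parser.read(os.path.expanduser(config_path))

    sso_configs = set()
    for section in parser.sections():
        # AWS config sections are either "default" or "profile <name>".
        if section != "default" and not section.startswith("profile "):
            continue
        start_url = parser.get(section, "sso_start_url", fallback=None)
        region = parser.get(section, "sso_region", fallback=None)
        if start_url and region:
            sso_configs.add((start_url, region))

    return sso_configs.pop() if len(sso_configs) == 1 else None
```

If the result is unambiguous, a login command could fetch the token without requiring `sso_account_id` or `sso_role_name` in any particular profile.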
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/sso/login.py`
Content:
```
1 # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 from awscli.customizations.commands import BasicCommand
14 from awscli.customizations.sso.utils import do_sso_login
15 from awscli.customizations.utils import uni_print
16 from awscli.customizations.exceptions import ConfigurationError
17
18
19 class InvalidSSOConfigError(ConfigurationError):
20 pass
21
22
23 class LoginCommand(BasicCommand):
24 NAME = 'login'
25 DESCRIPTION = (
26 'Retrieves and caches an AWS SSO access token to exchange for AWS '
27 'credentials. To login, the requested profile must have first been '
28 'setup using ``aws configure sso``. Each time the ``login`` command '
29 'is called, a new SSO access token will be retrieved.'
30 )
31 ARG_TABLE = []
32 _REQUIRED_SSO_CONFIG_VARS = [
33 'sso_start_url',
34 'sso_region',
35 'sso_role_name',
36 'sso_account_id',
37 ]
38
39 def _run_main(self, parsed_args, parsed_globals):
40 sso_config = self._get_sso_config()
41 do_sso_login(
42 session=self._session,
43 sso_region=sso_config['sso_region'],
44 start_url=sso_config['sso_start_url'],
45 force_refresh=True
46 )
47 success_msg = 'Successully logged into Start URL: %s\n'
48 uni_print(success_msg % sso_config['sso_start_url'])
49 return 0
50
51 def _get_sso_config(self):
52 scoped_config = self._session.get_scoped_config()
53 sso_config = {}
54 missing_vars = []
55 for config_var in self._REQUIRED_SSO_CONFIG_VARS:
56 if config_var not in scoped_config:
57 missing_vars.append(config_var)
58 else:
59 sso_config[config_var] = scoped_config[config_var]
60 if missing_vars:
61 raise InvalidSSOConfigError(
62 'Missing the following required SSO configuration values: %s. '
63 'To make sure this profile is properly configured to use SSO, '
64 'please run: aws configure sso' % ', '.join(missing_vars)
65 )
66 return sso_config
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awscli/customizations/sso/login.py b/awscli/customizations/sso/login.py
--- a/awscli/customizations/sso/login.py
+++ b/awscli/customizations/sso/login.py
@@ -26,14 +26,15 @@
'Retrieves and caches an AWS SSO access token to exchange for AWS '
'credentials. To login, the requested profile must have first been '
'setup using ``aws configure sso``. Each time the ``login`` command '
- 'is called, a new SSO access token will be retrieved.'
+ 'is called, a new SSO access token will be retrieved. Please note '
+ 'that only one login session can be active for a given SSO Start URL '
+ 'and creating multiple profiles does not allow for multiple users to '
+ 'be authenticated against the same SSO Start URL.'
)
ARG_TABLE = []
_REQUIRED_SSO_CONFIG_VARS = [
'sso_start_url',
'sso_region',
- 'sso_role_name',
- 'sso_account_id',
]
def _run_main(self, parsed_args, parsed_globals):
|
{"golden_diff": "diff --git a/awscli/customizations/sso/login.py b/awscli/customizations/sso/login.py\n--- a/awscli/customizations/sso/login.py\n+++ b/awscli/customizations/sso/login.py\n@@ -26,14 +26,15 @@\n 'Retrieves and caches an AWS SSO access token to exchange for AWS '\n 'credentials. To login, the requested profile must have first been '\n 'setup using ``aws configure sso``. Each time the ``login`` command '\n- 'is called, a new SSO access token will be retrieved.'\n+ 'is called, a new SSO access token will be retrieved. Please note '\n+ 'that only one login session can be active for a given SSO Start URL '\n+ 'and creating multiple profiles does not allow for multiple users to '\n+ 'be authenticated against the same SSO Start URL.'\n )\n ARG_TABLE = []\n _REQUIRED_SSO_CONFIG_VARS = [\n 'sso_start_url',\n 'sso_region',\n- 'sso_role_name',\n- 'sso_account_id',\n ]\n \n def _run_main(self, parsed_args, parsed_globals):\n", "issue": "[v2] `aws sso login` should not require a fully-configured profile\nCurrently, `aws sso login` operates on a particular profile, even [requiring that `sso_account_id` and `sso_role_name` be present in the profile](https://github.com/aws/aws-cli/blob/f2788558422dac42a5ebe37c7e5a3d24b19dee9f/awscli/customizations/sso/login.py#L32) even though it does not use them, [only fetching the token](https://github.com/aws/aws-cli/blob/f2788558422dac42a5ebe37c7e5a3d24b19dee9f/awscli/customizations/sso/utils.py#L45) (as it should, because AWS SSO-capable SDKs can use the token to get credentials for the appropriate account and role).\r\n\r\nAt the very least, `sso_account_id` and `sso_role_name` should be removed from the list of required config variables, which would allow a profile like:\r\n```ini\r\n[profile login]\r\nsso_start_url = https://d-2e69cb2b10.awsapps.com/start\r\nsso_region = us-east-2\r\n```\r\nand then `aws sso login --profile login` would just work without requiring a specific account and role that won't be used anyway.\r\n\r\nThis matters because not all users in an organization have the same permissions, so there's not a good way to provide them all with a single working config file to start from.\r\n\r\nA better alternative would be to have AWS SSO configuration be explicit in the config file, perhaps with a new section type:\r\n```ini\r\n[sso default]\r\nsso_start_url = https://d-2e69cb2b10.awsapps.com/start\r\nsso_region = us-east-2\r\n```\r\n\r\nOr, `aws sso login` should check the configured profiles and if there's only one AWS SSO configuration (i.e., they all use the same start URL and region), it should just use that.\r\n\r\nI've implemented the latter in [`aws-sso-util login`](https://github.com/benkehoe/aws-sso-util#logging-in-and-out).\n", "before_files": [{"content": "# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\nfrom awscli.customizations.commands import BasicCommand\nfrom awscli.customizations.sso.utils import do_sso_login\nfrom awscli.customizations.utils import uni_print\nfrom awscli.customizations.exceptions import ConfigurationError\n\n\nclass InvalidSSOConfigError(ConfigurationError):\n pass\n\n\nclass LoginCommand(BasicCommand):\n NAME = 'login'\n DESCRIPTION = (\n 'Retrieves and caches an AWS SSO access token to exchange for AWS '\n 'credentials. To login, the requested profile must have first been '\n 'setup using ``aws configure sso``. Each time the ``login`` command '\n 'is called, a new SSO access token will be retrieved.'\n )\n ARG_TABLE = []\n _REQUIRED_SSO_CONFIG_VARS = [\n 'sso_start_url',\n 'sso_region',\n 'sso_role_name',\n 'sso_account_id',\n ]\n\n def _run_main(self, parsed_args, parsed_globals):\n sso_config = self._get_sso_config()\n do_sso_login(\n session=self._session,\n sso_region=sso_config['sso_region'],\n start_url=sso_config['sso_start_url'],\n force_refresh=True\n )\n success_msg = 'Successully logged into Start URL: %s\\n'\n uni_print(success_msg % sso_config['sso_start_url'])\n return 0\n\n def _get_sso_config(self):\n scoped_config = self._session.get_scoped_config()\n sso_config = {}\n missing_vars = []\n for config_var in self._REQUIRED_SSO_CONFIG_VARS:\n if config_var not in scoped_config:\n missing_vars.append(config_var)\n else:\n sso_config[config_var] = scoped_config[config_var]\n if missing_vars:\n raise InvalidSSOConfigError(\n 'Missing the following required SSO configuration values: %s. '\n 'To make sure this profile is properly configured to use SSO, '\n 'please run: aws configure sso' % ', '.join(missing_vars)\n )\n return sso_config\n", "path": "awscli/customizations/sso/login.py"}], "after_files": [{"content": "# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nfrom awscli.customizations.commands import BasicCommand\nfrom awscli.customizations.sso.utils import do_sso_login\nfrom awscli.customizations.utils import uni_print\nfrom awscli.customizations.exceptions import ConfigurationError\n\n\nclass InvalidSSOConfigError(ConfigurationError):\n pass\n\n\nclass LoginCommand(BasicCommand):\n NAME = 'login'\n DESCRIPTION = (\n 'Retrieves and caches an AWS SSO access token to exchange for AWS '\n 'credentials. To login, the requested profile must have first been '\n 'setup using ``aws configure sso``. Each time the ``login`` command '\n 'is called, a new SSO access token will be retrieved. 
Please note '\n 'that only one login session can be active for a given SSO Start URL '\n 'and creating multiple profiles does not allow for multiple users to '\n 'be authenticated against the same SSO Start URL.'\n )\n ARG_TABLE = []\n _REQUIRED_SSO_CONFIG_VARS = [\n 'sso_start_url',\n 'sso_region',\n ]\n\n def _run_main(self, parsed_args, parsed_globals):\n sso_config = self._get_sso_config()\n do_sso_login(\n session=self._session,\n sso_region=sso_config['sso_region'],\n start_url=sso_config['sso_start_url'],\n force_refresh=True\n )\n success_msg = 'Successully logged into Start URL: %s\\n'\n uni_print(success_msg % sso_config['sso_start_url'])\n return 0\n\n def _get_sso_config(self):\n scoped_config = self._session.get_scoped_config()\n sso_config = {}\n missing_vars = []\n for config_var in self._REQUIRED_SSO_CONFIG_VARS:\n if config_var not in scoped_config:\n missing_vars.append(config_var)\n else:\n sso_config[config_var] = scoped_config[config_var]\n if missing_vars:\n raise InvalidSSOConfigError(\n 'Missing the following required SSO configuration values: %s. '\n 'To make sure this profile is properly configured to use SSO, '\n 'please run: aws configure sso' % ', '.join(missing_vars)\n )\n return sso_config\n", "path": "awscli/customizations/sso/login.py"}]}
| 1,456 | 253 |
gh_patches_debug_32767
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__models-1586
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
deeplabv3+ reports an error under python3.6
deeplabv3+ throws an error on ubuntu14 cuda8 cudnn7 python3.6; the error output is as follows:
Traceback (most recent call last):
File "./train.py", line 148, in <module>
load_model()
File "./train.py", line 54, in load_model
exe, dirname=args.init_weights_path, main_program=tp)
File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/io.py", line 487, in load_params
filename=filename)
File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/io.py", line 395, in load_vars
filename=filename)
File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/io.py", line 436, in load_vars
executor.run(load_prog)
File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/executor.py", line 472, in run
self.executor.run(program.desc, scope, 0, True, True)
paddle.fluid.core.EnforceNotMet: Cannot open file deeplabv3plus_xception65_initialize.params/xception_65/entry_flow/conv1/weights for load op at [/home/Paddle/paddle/fluid/operators/load_op.cc:39]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fluid/PaddleCV/deeplabv3+/train.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4 import os
5 os.environ['FLAGS_fraction_of_gpu_memory_to_use'] = '0.98'
6
7 import paddle
8 import paddle.fluid as fluid
9 import numpy as np
10 import argparse
11 from reader import CityscapeDataset
12 import reader
13 import models
14 import time
15
16
17 def add_argument(name, type, default, help):
18 parser.add_argument('--' + name, default=default, type=type, help=help)
19
20
21 def add_arguments():
22 add_argument('batch_size', int, 2,
23 "The number of images in each batch during training.")
24 add_argument('train_crop_size', int, 769,
25 "'Image crop size during training.")
26 add_argument('base_lr', float, 0.0001,
27 "The base learning rate for model training.")
28 add_argument('total_step', int, 90000, "Number of the training step.")
29 add_argument('init_weights_path', str, None,
30 "Path of the initial weights in paddlepaddle format.")
31 add_argument('save_weights_path', str, None,
32 "Path of the saved weights during training.")
33 add_argument('dataset_path', str, None, "Cityscape dataset path.")
34 add_argument('parallel', bool, False, "using ParallelExecutor.")
35 add_argument('use_gpu', bool, True, "Whether use GPU or CPU.")
36 add_argument('num_classes', int, 19, "Number of classes.")
37 parser.add_argument('--enable_ce', action='store_true', help='If set, run the task with continuous evaluation logs.')
38
39
40 def load_model():
41 myvars = [
42 x for x in tp.list_vars()
43 if isinstance(x, fluid.framework.Parameter) and x.name.find('logit') ==
44 -1
45 ]
46 if args.init_weights_path.endswith('/'):
47 if args.num_classes == 19:
48 fluid.io.load_params(
49 exe, dirname=args.init_weights_path, main_program=tp)
50 else:
51 fluid.io.load_vars(exe, dirname=args.init_weights_path, vars=myvars)
52 else:
53 if args.num_classes == 19:
54 fluid.io.load_params(
55 exe, dirname=args.init_weights_path, main_program=tp)
56 else:
57 fluid.io.load_vars(
58 exe, dirname="", filename=args.init_weights_path, vars=myvars)
59
60
61 def save_model():
62 if args.save_weights_path.endswith('/'):
63 fluid.io.save_params(
64 exe, dirname=args.save_weights_path, main_program=tp)
65 else:
66 fluid.io.save_params(
67 exe, dirname="", filename=args.save_weights_path, main_program=tp)
68
69
70 def loss(logit, label):
71 label_nignore = (label < num_classes).astype('float32')
72 label = fluid.layers.elementwise_min(
73 label,
74 fluid.layers.assign(np.array(
75 [num_classes - 1], dtype=np.int32)))
76 logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
77 logit = fluid.layers.reshape(logit, [-1, num_classes])
78 label = fluid.layers.reshape(label, [-1, 1])
79 label = fluid.layers.cast(label, 'int64')
80 label_nignore = fluid.layers.reshape(label_nignore, [-1, 1])
81 loss = fluid.layers.softmax_with_cross_entropy(logit, label)
82 loss = loss * label_nignore
83 no_grad_set.add(label_nignore.name)
84 no_grad_set.add(label.name)
85 return loss, label_nignore
86
87
88 def get_cards(args):
89 if args.enable_ce:
90 cards = os.environ.get('CUDA_VISIBLE_DEVICES')
91 num = len(cards.split(","))
92 return num
93 else:
94 return args.num_devices
95
96 CityscapeDataset = reader.CityscapeDataset
97 parser = argparse.ArgumentParser()
98
99 add_arguments()
100
101 args = parser.parse_args()
102
103 models.clean()
104 models.bn_momentum = 0.9997
105 models.dropout_keep_prop = 0.9
106 models.label_number = args.num_classes
107 deeplabv3p = models.deeplabv3p
108
109 sp = fluid.Program()
110 tp = fluid.Program()
111
112 # only for ce
113 if args.enable_ce:
114 SEED = 102
115 sp.random_seed = SEED
116 tp.random_seed = SEED
117
118 crop_size = args.train_crop_size
119 batch_size = args.batch_size
120 image_shape = [crop_size, crop_size]
121 reader.default_config['crop_size'] = crop_size
122 reader.default_config['shuffle'] = True
123 num_classes = args.num_classes
124 weight_decay = 0.00004
125
126 base_lr = args.base_lr
127 total_step = args.total_step
128
129 no_grad_set = set()
130
131 with fluid.program_guard(tp, sp):
132 img = fluid.layers.data(
133 name='img', shape=[3] + image_shape, dtype='float32')
134 label = fluid.layers.data(name='label', shape=image_shape, dtype='int32')
135 logit = deeplabv3p(img)
136 pred = fluid.layers.argmax(logit, axis=1).astype('int32')
137 loss, mask = loss(logit, label)
138 lr = fluid.layers.polynomial_decay(
139 base_lr, total_step, end_learning_rate=0, power=0.9)
140 area = fluid.layers.elementwise_max(
141 fluid.layers.reduce_mean(mask),
142 fluid.layers.assign(np.array(
143 [0.1], dtype=np.float32)))
144 loss_mean = fluid.layers.reduce_mean(loss) / area
145
146 opt = fluid.optimizer.Momentum(
147 lr,
148 momentum=0.9,
149 regularization=fluid.regularizer.L2DecayRegularizer(
150 regularization_coeff=weight_decay), )
151 retv = opt.minimize(loss_mean, startup_program=sp, no_grad_set=no_grad_set)
152
153 fluid.memory_optimize(
154 tp, print_log=False, skip_opt_set=set([pred.name, loss_mean.name]), level=1)
155
156 place = fluid.CPUPlace()
157 if args.use_gpu:
158 place = fluid.CUDAPlace(0)
159 exe = fluid.Executor(place)
160 exe.run(sp)
161
162 if args.init_weights_path:
163 print("load from:", args.init_weights_path)
164 load_model()
165
166 dataset = CityscapeDataset(args.dataset_path, 'train')
167
168 if args.parallel:
169 exe_p = fluid.ParallelExecutor(
170 use_cuda=True, loss_name=loss_mean.name, main_program=tp)
171
172 batches = dataset.get_batch_generator(batch_size, total_step)
173
174 total_time = 0.0
175 epoch_idx = 0
176 train_loss = 0
177
178 for i, imgs, labels, names in batches:
179 epoch_idx += 1
180 begin_time = time.time()
181 prev_start_time = time.time()
182 if args.parallel:
183 retv = exe_p.run(fetch_list=[pred.name, loss_mean.name],
184 feed={'img': imgs,
185 'label': labels})
186 else:
187 retv = exe.run(tp,
188 feed={'img': imgs,
189 'label': labels},
190 fetch_list=[pred, loss_mean])
191 end_time = time.time()
192 total_time += end_time - begin_time
193 if i % 100 == 0:
194 print("Model is saved to", args.save_weights_path)
195 save_model()
196 print("step {:d}, loss: {:.6f}, step_time_cost: {:.3f}".format(
197 i, np.mean(retv[1]), end_time - prev_start_time))
198
199 # only for ce
200 train_loss = np.mean(retv[1])
201
202 if args.enable_ce:
203 gpu_num = get_cards(args)
204 print("kpis\teach_pass_duration_card%s\t%s" %
205 (gpu_num, total_time / epoch_idx))
206 print("kpis\ttrain_loss_card%s\t%s" %
207 (gpu_num, train_loss))
208
209 print("Training done. Model is saved to", args.save_weights_path)
210 save_model()
211
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/fluid/PaddleCV/deeplabv3+/train.py b/fluid/PaddleCV/deeplabv3+/train.py
--- a/fluid/PaddleCV/deeplabv3+/train.py
+++ b/fluid/PaddleCV/deeplabv3+/train.py
@@ -34,7 +34,10 @@
add_argument('parallel', bool, False, "using ParallelExecutor.")
add_argument('use_gpu', bool, True, "Whether use GPU or CPU.")
add_argument('num_classes', int, 19, "Number of classes.")
- parser.add_argument('--enable_ce', action='store_true', help='If set, run the task with continuous evaluation logs.')
+ parser.add_argument(
+ '--enable_ce',
+ action='store_true',
+ help='If set, run the task with continuous evaluation logs.')
def load_model():
@@ -52,7 +55,10 @@
else:
if args.num_classes == 19:
fluid.io.load_params(
- exe, dirname=args.init_weights_path, main_program=tp)
+ exe,
+ dirname="",
+ filename=args.init_weights_path,
+ main_program=tp)
else:
fluid.io.load_vars(
exe, dirname="", filename=args.init_weights_path, vars=myvars)
@@ -93,6 +99,7 @@
else:
return args.num_devices
+
CityscapeDataset = reader.CityscapeDataset
parser = argparse.ArgumentParser()
@@ -202,9 +209,8 @@
if args.enable_ce:
gpu_num = get_cards(args)
print("kpis\teach_pass_duration_card%s\t%s" %
- (gpu_num, total_time / epoch_idx))
- print("kpis\ttrain_loss_card%s\t%s" %
- (gpu_num, train_loss))
+ (gpu_num, total_time / epoch_idx))
+ print("kpis\ttrain_loss_card%s\t%s" % (gpu_num, train_loss))
print("Training done. Model is saved to", args.save_weights_path)
save_model()
|
{"golden_diff": "diff --git a/fluid/PaddleCV/deeplabv3+/train.py b/fluid/PaddleCV/deeplabv3+/train.py\n--- a/fluid/PaddleCV/deeplabv3+/train.py\n+++ b/fluid/PaddleCV/deeplabv3+/train.py\n@@ -34,7 +34,10 @@\n add_argument('parallel', bool, False, \"using ParallelExecutor.\")\n add_argument('use_gpu', bool, True, \"Whether use GPU or CPU.\")\n add_argument('num_classes', int, 19, \"Number of classes.\")\n- parser.add_argument('--enable_ce', action='store_true', help='If set, run the task with continuous evaluation logs.')\n+ parser.add_argument(\n+ '--enable_ce',\n+ action='store_true',\n+ help='If set, run the task with continuous evaluation logs.')\n \n \n def load_model():\n@@ -52,7 +55,10 @@\n else:\n if args.num_classes == 19:\n fluid.io.load_params(\n- exe, dirname=args.init_weights_path, main_program=tp)\n+ exe,\n+ dirname=\"\",\n+ filename=args.init_weights_path,\n+ main_program=tp)\n else:\n fluid.io.load_vars(\n exe, dirname=\"\", filename=args.init_weights_path, vars=myvars)\n@@ -93,6 +99,7 @@\n else:\n return args.num_devices\n \n+\n CityscapeDataset = reader.CityscapeDataset\n parser = argparse.ArgumentParser()\n \n@@ -202,9 +209,8 @@\n if args.enable_ce:\n gpu_num = get_cards(args)\n print(\"kpis\\teach_pass_duration_card%s\\t%s\" %\n- (gpu_num, total_time / epoch_idx))\n- print(\"kpis\\ttrain_loss_card%s\\t%s\" %\n- (gpu_num, train_loss))\n+ (gpu_num, total_time / epoch_idx))\n+ print(\"kpis\\ttrain_loss_card%s\\t%s\" % (gpu_num, train_loss))\n \n print(\"Training done. Model is saved to\", args.save_weights_path)\n save_model()\n", "issue": "deeplabv3+\u5728python3.6\u4e0b\u62a5\u9519\ndeeplabv3+\u5728ubuntu14 cuda8 cudnn7 python3.6\u4e0b\u6709\u4e2a\u62a5\u9519\uff0c\u62a5\u9519\u5185\u5bb9\u5982\u4e0b\uff1a\r\nTraceback (most recent call last):\r\n File \"./train.py\", line 148, in <module>\r\n load_model()\r\n File \"./train.py\", line 54, in load_model\r\n exe, dirname=args.init_weights_path, main_program=tp)\r\n File \"/usr/local/lib/python3.6/dist-packages/paddle/fluid/io.py\", line 487, in load_params\r\n filename=filename)\r\n File \"/usr/local/lib/python3.6/dist-packages/paddle/fluid/io.py\", line 395, in load_vars\r\n filename=filename)\r\n File \"/usr/local/lib/python3.6/dist-packages/paddle/fluid/io.py\", line 436, in load_vars\r\n executor.run(load_prog)\r\n File \"/usr/local/lib/python3.6/dist-packages/paddle/fluid/executor.py\", line 472, in run\r\n self.executor.run(program.desc, scope, 0, True, True)\r\npaddle.fluid.core.EnforceNotMet: Cannot open file deeplabv3plus_xception65_initialize.params/xception_65/entry_flow/conv1/weights for load op at [/home/Paddle/paddle/fluid/operators/load_op.cc:39]\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport os\nos.environ['FLAGS_fraction_of_gpu_memory_to_use'] = '0.98'\n\nimport paddle\nimport paddle.fluid as fluid\nimport numpy as np\nimport argparse\nfrom reader import CityscapeDataset\nimport reader\nimport models\nimport time\n\n\ndef add_argument(name, type, default, help):\n parser.add_argument('--' + name, default=default, type=type, help=help)\n\n\ndef add_arguments():\n add_argument('batch_size', int, 2,\n \"The number of images in each batch during training.\")\n add_argument('train_crop_size', int, 769,\n \"'Image crop size during training.\")\n add_argument('base_lr', float, 0.0001,\n \"The base learning rate for model training.\")\n add_argument('total_step', int, 90000, \"Number of the training 
step.\")\n add_argument('init_weights_path', str, None,\n \"Path of the initial weights in paddlepaddle format.\")\n add_argument('save_weights_path', str, None,\n \"Path of the saved weights during training.\")\n add_argument('dataset_path', str, None, \"Cityscape dataset path.\")\n add_argument('parallel', bool, False, \"using ParallelExecutor.\")\n add_argument('use_gpu', bool, True, \"Whether use GPU or CPU.\")\n add_argument('num_classes', int, 19, \"Number of classes.\")\n parser.add_argument('--enable_ce', action='store_true', help='If set, run the task with continuous evaluation logs.')\n\n\ndef load_model():\n myvars = [\n x for x in tp.list_vars()\n if isinstance(x, fluid.framework.Parameter) and x.name.find('logit') ==\n -1\n ]\n if args.init_weights_path.endswith('/'):\n if args.num_classes == 19:\n fluid.io.load_params(\n exe, dirname=args.init_weights_path, main_program=tp)\n else:\n fluid.io.load_vars(exe, dirname=args.init_weights_path, vars=myvars)\n else:\n if args.num_classes == 19:\n fluid.io.load_params(\n exe, dirname=args.init_weights_path, main_program=tp)\n else:\n fluid.io.load_vars(\n exe, dirname=\"\", filename=args.init_weights_path, vars=myvars)\n\n\ndef save_model():\n if args.save_weights_path.endswith('/'):\n fluid.io.save_params(\n exe, dirname=args.save_weights_path, main_program=tp)\n else:\n fluid.io.save_params(\n exe, dirname=\"\", filename=args.save_weights_path, main_program=tp)\n\n\ndef loss(logit, label):\n label_nignore = (label < num_classes).astype('float32')\n label = fluid.layers.elementwise_min(\n label,\n fluid.layers.assign(np.array(\n [num_classes - 1], dtype=np.int32)))\n logit = fluid.layers.transpose(logit, [0, 2, 3, 1])\n logit = fluid.layers.reshape(logit, [-1, num_classes])\n label = fluid.layers.reshape(label, [-1, 1])\n label = fluid.layers.cast(label, 'int64')\n label_nignore = fluid.layers.reshape(label_nignore, [-1, 1])\n loss = fluid.layers.softmax_with_cross_entropy(logit, label)\n loss = loss * label_nignore\n no_grad_set.add(label_nignore.name)\n no_grad_set.add(label.name)\n return loss, label_nignore\n\n\ndef get_cards(args):\n if args.enable_ce:\n cards = os.environ.get('CUDA_VISIBLE_DEVICES')\n num = len(cards.split(\",\"))\n return num\n else:\n return args.num_devices\n\nCityscapeDataset = reader.CityscapeDataset\nparser = argparse.ArgumentParser()\n\nadd_arguments()\n\nargs = parser.parse_args()\n\nmodels.clean()\nmodels.bn_momentum = 0.9997\nmodels.dropout_keep_prop = 0.9\nmodels.label_number = args.num_classes\ndeeplabv3p = models.deeplabv3p\n\nsp = fluid.Program()\ntp = fluid.Program()\n\n# only for ce\nif args.enable_ce:\n SEED = 102\n sp.random_seed = SEED\n tp.random_seed = SEED\n\ncrop_size = args.train_crop_size\nbatch_size = args.batch_size\nimage_shape = [crop_size, crop_size]\nreader.default_config['crop_size'] = crop_size\nreader.default_config['shuffle'] = True\nnum_classes = args.num_classes\nweight_decay = 0.00004\n\nbase_lr = args.base_lr\ntotal_step = args.total_step\n\nno_grad_set = set()\n\nwith fluid.program_guard(tp, sp):\n img = fluid.layers.data(\n name='img', shape=[3] + image_shape, dtype='float32')\n label = fluid.layers.data(name='label', shape=image_shape, dtype='int32')\n logit = deeplabv3p(img)\n pred = fluid.layers.argmax(logit, axis=1).astype('int32')\n loss, mask = loss(logit, label)\n lr = fluid.layers.polynomial_decay(\n base_lr, total_step, end_learning_rate=0, power=0.9)\n area = fluid.layers.elementwise_max(\n fluid.layers.reduce_mean(mask),\n fluid.layers.assign(np.array(\n 
[0.1], dtype=np.float32)))\n loss_mean = fluid.layers.reduce_mean(loss) / area\n\n opt = fluid.optimizer.Momentum(\n lr,\n momentum=0.9,\n regularization=fluid.regularizer.L2DecayRegularizer(\n regularization_coeff=weight_decay), )\n retv = opt.minimize(loss_mean, startup_program=sp, no_grad_set=no_grad_set)\n\nfluid.memory_optimize(\n tp, print_log=False, skip_opt_set=set([pred.name, loss_mean.name]), level=1)\n\nplace = fluid.CPUPlace()\nif args.use_gpu:\n place = fluid.CUDAPlace(0)\nexe = fluid.Executor(place)\nexe.run(sp)\n\nif args.init_weights_path:\n print(\"load from:\", args.init_weights_path)\n load_model()\n\ndataset = CityscapeDataset(args.dataset_path, 'train')\n\nif args.parallel:\n exe_p = fluid.ParallelExecutor(\n use_cuda=True, loss_name=loss_mean.name, main_program=tp)\n\nbatches = dataset.get_batch_generator(batch_size, total_step)\n\ntotal_time = 0.0\nepoch_idx = 0\ntrain_loss = 0\n\nfor i, imgs, labels, names in batches:\n epoch_idx += 1\n begin_time = time.time()\n prev_start_time = time.time()\n if args.parallel:\n retv = exe_p.run(fetch_list=[pred.name, loss_mean.name],\n feed={'img': imgs,\n 'label': labels})\n else:\n retv = exe.run(tp,\n feed={'img': imgs,\n 'label': labels},\n fetch_list=[pred, loss_mean])\n end_time = time.time()\n total_time += end_time - begin_time\n if i % 100 == 0:\n print(\"Model is saved to\", args.save_weights_path)\n save_model()\n print(\"step {:d}, loss: {:.6f}, step_time_cost: {:.3f}\".format(\n i, np.mean(retv[1]), end_time - prev_start_time))\n\n # only for ce\n train_loss = np.mean(retv[1])\n\nif args.enable_ce:\n gpu_num = get_cards(args)\n print(\"kpis\\teach_pass_duration_card%s\\t%s\" %\n (gpu_num, total_time / epoch_idx))\n print(\"kpis\\ttrain_loss_card%s\\t%s\" %\n (gpu_num, train_loss))\n\nprint(\"Training done. 
Model is saved to\", args.save_weights_path)\nsave_model()\n", "path": "fluid/PaddleCV/deeplabv3+/train.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport os\nos.environ['FLAGS_fraction_of_gpu_memory_to_use'] = '0.98'\n\nimport paddle\nimport paddle.fluid as fluid\nimport numpy as np\nimport argparse\nfrom reader import CityscapeDataset\nimport reader\nimport models\nimport time\n\n\ndef add_argument(name, type, default, help):\n parser.add_argument('--' + name, default=default, type=type, help=help)\n\n\ndef add_arguments():\n add_argument('batch_size', int, 2,\n \"The number of images in each batch during training.\")\n add_argument('train_crop_size', int, 769,\n \"'Image crop size during training.\")\n add_argument('base_lr', float, 0.0001,\n \"The base learning rate for model training.\")\n add_argument('total_step', int, 90000, \"Number of the training step.\")\n add_argument('init_weights_path', str, None,\n \"Path of the initial weights in paddlepaddle format.\")\n add_argument('save_weights_path', str, None,\n \"Path of the saved weights during training.\")\n add_argument('dataset_path', str, None, \"Cityscape dataset path.\")\n add_argument('parallel', bool, False, \"using ParallelExecutor.\")\n add_argument('use_gpu', bool, True, \"Whether use GPU or CPU.\")\n add_argument('num_classes', int, 19, \"Number of classes.\")\n parser.add_argument(\n '--enable_ce',\n action='store_true',\n help='If set, run the task with continuous evaluation logs.')\n\n\ndef load_model():\n myvars = [\n x for x in tp.list_vars()\n if isinstance(x, fluid.framework.Parameter) and x.name.find('logit') ==\n -1\n ]\n if args.init_weights_path.endswith('/'):\n if args.num_classes == 19:\n fluid.io.load_params(\n exe, dirname=args.init_weights_path, main_program=tp)\n else:\n fluid.io.load_vars(exe, dirname=args.init_weights_path, vars=myvars)\n else:\n if args.num_classes == 19:\n fluid.io.load_params(\n exe,\n dirname=\"\",\n filename=args.init_weights_path,\n main_program=tp)\n else:\n fluid.io.load_vars(\n exe, dirname=\"\", filename=args.init_weights_path, vars=myvars)\n\n\ndef save_model():\n if args.save_weights_path.endswith('/'):\n fluid.io.save_params(\n exe, dirname=args.save_weights_path, main_program=tp)\n else:\n fluid.io.save_params(\n exe, dirname=\"\", filename=args.save_weights_path, main_program=tp)\n\n\ndef loss(logit, label):\n label_nignore = (label < num_classes).astype('float32')\n label = fluid.layers.elementwise_min(\n label,\n fluid.layers.assign(np.array(\n [num_classes - 1], dtype=np.int32)))\n logit = fluid.layers.transpose(logit, [0, 2, 3, 1])\n logit = fluid.layers.reshape(logit, [-1, num_classes])\n label = fluid.layers.reshape(label, [-1, 1])\n label = fluid.layers.cast(label, 'int64')\n label_nignore = fluid.layers.reshape(label_nignore, [-1, 1])\n loss = fluid.layers.softmax_with_cross_entropy(logit, label)\n loss = loss * label_nignore\n no_grad_set.add(label_nignore.name)\n no_grad_set.add(label.name)\n return loss, label_nignore\n\n\ndef get_cards(args):\n if args.enable_ce:\n cards = os.environ.get('CUDA_VISIBLE_DEVICES')\n num = len(cards.split(\",\"))\n return num\n else:\n return args.num_devices\n\n\nCityscapeDataset = reader.CityscapeDataset\nparser = argparse.ArgumentParser()\n\nadd_arguments()\n\nargs = parser.parse_args()\n\nmodels.clean()\nmodels.bn_momentum = 0.9997\nmodels.dropout_keep_prop = 0.9\nmodels.label_number = args.num_classes\ndeeplabv3p = 
models.deeplabv3p\n\nsp = fluid.Program()\ntp = fluid.Program()\n\n# only for ce\nif args.enable_ce:\n SEED = 102\n sp.random_seed = SEED\n tp.random_seed = SEED\n\ncrop_size = args.train_crop_size\nbatch_size = args.batch_size\nimage_shape = [crop_size, crop_size]\nreader.default_config['crop_size'] = crop_size\nreader.default_config['shuffle'] = True\nnum_classes = args.num_classes\nweight_decay = 0.00004\n\nbase_lr = args.base_lr\ntotal_step = args.total_step\n\nno_grad_set = set()\n\nwith fluid.program_guard(tp, sp):\n img = fluid.layers.data(\n name='img', shape=[3] + image_shape, dtype='float32')\n label = fluid.layers.data(name='label', shape=image_shape, dtype='int32')\n logit = deeplabv3p(img)\n pred = fluid.layers.argmax(logit, axis=1).astype('int32')\n loss, mask = loss(logit, label)\n lr = fluid.layers.polynomial_decay(\n base_lr, total_step, end_learning_rate=0, power=0.9)\n area = fluid.layers.elementwise_max(\n fluid.layers.reduce_mean(mask),\n fluid.layers.assign(np.array(\n [0.1], dtype=np.float32)))\n loss_mean = fluid.layers.reduce_mean(loss) / area\n\n opt = fluid.optimizer.Momentum(\n lr,\n momentum=0.9,\n regularization=fluid.regularizer.L2DecayRegularizer(\n regularization_coeff=weight_decay), )\n retv = opt.minimize(loss_mean, startup_program=sp, no_grad_set=no_grad_set)\n\nfluid.memory_optimize(\n tp, print_log=False, skip_opt_set=set([pred.name, loss_mean.name]), level=1)\n\nplace = fluid.CPUPlace()\nif args.use_gpu:\n place = fluid.CUDAPlace(0)\nexe = fluid.Executor(place)\nexe.run(sp)\n\nif args.init_weights_path:\n print(\"load from:\", args.init_weights_path)\n load_model()\n\ndataset = CityscapeDataset(args.dataset_path, 'train')\n\nif args.parallel:\n exe_p = fluid.ParallelExecutor(\n use_cuda=True, loss_name=loss_mean.name, main_program=tp)\n\nbatches = dataset.get_batch_generator(batch_size, total_step)\n\ntotal_time = 0.0\nepoch_idx = 0\ntrain_loss = 0\n\nfor i, imgs, labels, names in batches:\n epoch_idx += 1\n begin_time = time.time()\n prev_start_time = time.time()\n if args.parallel:\n retv = exe_p.run(fetch_list=[pred.name, loss_mean.name],\n feed={'img': imgs,\n 'label': labels})\n else:\n retv = exe.run(tp,\n feed={'img': imgs,\n 'label': labels},\n fetch_list=[pred, loss_mean])\n end_time = time.time()\n total_time += end_time - begin_time\n if i % 100 == 0:\n print(\"Model is saved to\", args.save_weights_path)\n save_model()\n print(\"step {:d}, loss: {:.6f}, step_time_cost: {:.3f}\".format(\n i, np.mean(retv[1]), end_time - prev_start_time))\n\n # only for ce\n train_loss = np.mean(retv[1])\n\nif args.enable_ce:\n gpu_num = get_cards(args)\n print(\"kpis\\teach_pass_duration_card%s\\t%s\" %\n (gpu_num, total_time / epoch_idx))\n print(\"kpis\\ttrain_loss_card%s\\t%s\" % (gpu_num, train_loss))\n\nprint(\"Training done. Model is saved to\", args.save_weights_path)\nsave_model()\n", "path": "fluid/PaddleCV/deeplabv3+/train.py"}]}
| 2,812 | 462 |
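The `train.py` shown in the record above masks out ignore-labels before averaging the cross-entropy loss and then normalises by the fraction of valid pixels, clamped to at least 0.1. A minimal NumPy sketch of that masking logic, with made-up labels and losses standing in for the fluid tensors (using 255 as the ignore id is an assumption borrowed from common Cityscapes conventions):

```python
import numpy as np

num_classes = 19
# Fake per-pixel labels; anything >= num_classes (e.g. 255) is treated as "ignore".
label = np.array([0, 5, 255, 18, 255], dtype=np.int32)
# Fake per-pixel cross-entropy losses.
loss = np.array([0.7, 1.2, 3.0, 0.4, 2.5], dtype=np.float32)

mask = (label < num_classes).astype(np.float32)  # 1 for valid pixels, 0 for ignored ones
area = max(mask.mean(), 0.1)                     # avoid dividing by ~0 when almost everything is ignored
loss_mean = (loss * mask).mean() / area          # ignored pixels contribute nothing to the loss

print(mask, area, loss_mean)
```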
gh_patches_debug_19864
|
rasdani/github-patches
|
git_diff
|
watchdogpolska__feder-433
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
how attachments sent to us are displayed
I don't know why it is that when I open the correspondence with a given municipality in a given monitoring, I can see attachments next to the e-mails:

But once I open a specific message, they are not there:

Can this be changed so that the attachments are also visible after opening a specific message?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `feder/letters/factories.py`
Content:
```
1 from email.mime.text import MIMEText
2
3 import factory
4 import factory.fuzzy
5 from django.core.mail import EmailMessage
6 from factory.django import FileField
7
8 from feder.cases.factories import CaseFactory
9 from feder.institutions.factories import InstitutionFactory
10 from feder.records.factories import RecordFactory
11 from feder.users.factories import UserFactory
12 from .models import Letter
13
14
15 class MailField(FileField):
16 DEFAULT_FILENAME = 'data.eml'
17
18 def _make_data(self, params):
19 msg = MIMEText("Lorem ipsum")
20 msg['Subject'] = "Example message"
21 msg['From'] = "[email protected]"
22 msg['To'] = "[email protected]"
23
24 return params.get('data', msg.as_string().encode('utf-8'))
25
26
27 class LetterFactory(factory.django.DjangoModelFactory):
28 record = factory.SubFactory(RecordFactory)
29 title = factory.Sequence('title-letter-{0}'.format)
30 body = factory.Sequence('body-{0}'.format)
31 quote = factory.Sequence('quote-{0}'.format)
32
33 class Meta:
34 model = Letter
35
36
37 class IncomingLetterFactory(LetterFactory):
38 author_institution = factory.SubFactory(InstitutionFactory)
39 email = factory.Sequence('xxx-{0}@example.com'.format)
40 note = factory.fuzzy.FuzzyText()
41 eml = MailField()
42
43
44 class OutgoingLetterFactory(LetterFactory):
45 author_user = factory.SubFactory(UserFactory)
46 is_draft = False
47 eml = MailField()
48
49
50 class DraftLetterFactory(OutgoingLetterFactory):
51 is_draft = True
52
53
54 class SendOutgoingLetterFactory(LetterFactory):
55 author_user = factory.SubFactory(UserFactory)
56
57 is_send_yes = factory.PostGenerationMethodCall('send')
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/feder/letters/factories.py b/feder/letters/factories.py
--- a/feder/letters/factories.py
+++ b/feder/letters/factories.py
@@ -1,15 +1,12 @@
from email.mime.text import MIMEText
-import factory
import factory.fuzzy
-from django.core.mail import EmailMessage
from factory.django import FileField
-from feder.cases.factories import CaseFactory
from feder.institutions.factories import InstitutionFactory
from feder.records.factories import RecordFactory
from feder.users.factories import UserFactory
-from .models import Letter
+from .models import Letter, Attachment
class MailField(FileField):
@@ -55,3 +52,11 @@
author_user = factory.SubFactory(UserFactory)
is_send_yes = factory.PostGenerationMethodCall('send')
+
+
+class AttachmentFactory(factory.django.DjangoModelFactory):
+ letter = factory.SubFactory(InstitutionFactory)
+ attachment = factory.django.FileField()
+
+ class Meta:
+ model = Attachment
|
{"golden_diff": "diff --git a/feder/letters/factories.py b/feder/letters/factories.py\n--- a/feder/letters/factories.py\n+++ b/feder/letters/factories.py\n@@ -1,15 +1,12 @@\n from email.mime.text import MIMEText\n \n-import factory\n import factory.fuzzy\n-from django.core.mail import EmailMessage\n from factory.django import FileField\n \n-from feder.cases.factories import CaseFactory\n from feder.institutions.factories import InstitutionFactory\n from feder.records.factories import RecordFactory\n from feder.users.factories import UserFactory\n-from .models import Letter\n+from .models import Letter, Attachment\n \n \n class MailField(FileField):\n@@ -55,3 +52,11 @@\n author_user = factory.SubFactory(UserFactory)\n \n is_send_yes = factory.PostGenerationMethodCall('send')\n+\n+\n+class AttachmentFactory(factory.django.DjangoModelFactory):\n+ letter = factory.SubFactory(InstitutionFactory)\n+ attachment = factory.django.FileField()\n+\n+ class Meta:\n+ model = Attachment\n", "issue": "spos\u00f3b widzenia za\u0142\u0105cznik\u00f3w do nas wys\u0142anych\nNie wiem czemu jest tak, \u017ce ja wchodz\u0119 w korespondencj\u0119 z dan\u0105 gmin\u0105 w danym monitoringu, to przy mailach widz\u0119 za\u0142\u0105czniki:\r\n\r\n\r\n\r\nA jak ju\u017c wejd\u0119 z konkretn\u0105 wiadomo\u015b\u0107, to ich nie ma:\r\n\r\n\r\n\r\nCzy to si\u0119 da zmieni\u0107, \u017ceby po wej\u015bciu z konkretn\u0105 wiadomo\u015b\u0107 te\u017c by\u0142o wida\u0107 te za\u0142\u0105czniki?\n", "before_files": [{"content": "from email.mime.text import MIMEText\n\nimport factory\nimport factory.fuzzy\nfrom django.core.mail import EmailMessage\nfrom factory.django import FileField\n\nfrom feder.cases.factories import CaseFactory\nfrom feder.institutions.factories import InstitutionFactory\nfrom feder.records.factories import RecordFactory\nfrom feder.users.factories import UserFactory\nfrom .models import Letter\n\n\nclass MailField(FileField):\n DEFAULT_FILENAME = 'data.eml'\n\n def _make_data(self, params):\n msg = MIMEText(\"Lorem ipsum\")\n msg['Subject'] = \"Example message\"\n msg['From'] = \"[email protected]\"\n msg['To'] = \"[email protected]\"\n\n return params.get('data', msg.as_string().encode('utf-8'))\n\n\nclass LetterFactory(factory.django.DjangoModelFactory):\n record = factory.SubFactory(RecordFactory)\n title = factory.Sequence('title-letter-{0}'.format)\n body = factory.Sequence('body-{0}'.format)\n quote = factory.Sequence('quote-{0}'.format)\n\n class Meta:\n model = Letter\n\n\nclass IncomingLetterFactory(LetterFactory):\n author_institution = factory.SubFactory(InstitutionFactory)\n email = factory.Sequence('xxx-{0}@example.com'.format)\n note = factory.fuzzy.FuzzyText()\n eml = MailField()\n\n\nclass OutgoingLetterFactory(LetterFactory):\n author_user = factory.SubFactory(UserFactory)\n is_draft = False\n eml = MailField()\n\n\nclass DraftLetterFactory(OutgoingLetterFactory):\n is_draft = True\n\n\nclass SendOutgoingLetterFactory(LetterFactory):\n author_user = factory.SubFactory(UserFactory)\n\n is_send_yes = factory.PostGenerationMethodCall('send')\n", "path": "feder/letters/factories.py"}], "after_files": [{"content": "from email.mime.text import MIMEText\n\nimport factory.fuzzy\nfrom factory.django import FileField\n\nfrom feder.institutions.factories import InstitutionFactory\nfrom feder.records.factories import RecordFactory\nfrom feder.users.factories import UserFactory\nfrom .models import Letter, Attachment\n\n\nclass MailField(FileField):\n DEFAULT_FILENAME = 'data.eml'\n\n def 
_make_data(self, params):\n msg = MIMEText(\"Lorem ipsum\")\n msg['Subject'] = \"Example message\"\n msg['From'] = \"[email protected]\"\n msg['To'] = \"[email protected]\"\n\n return params.get('data', msg.as_string().encode('utf-8'))\n\n\nclass LetterFactory(factory.django.DjangoModelFactory):\n record = factory.SubFactory(RecordFactory)\n title = factory.Sequence('title-letter-{0}'.format)\n body = factory.Sequence('body-{0}'.format)\n quote = factory.Sequence('quote-{0}'.format)\n\n class Meta:\n model = Letter\n\n\nclass IncomingLetterFactory(LetterFactory):\n author_institution = factory.SubFactory(InstitutionFactory)\n email = factory.Sequence('xxx-{0}@example.com'.format)\n note = factory.fuzzy.FuzzyText()\n eml = MailField()\n\n\nclass OutgoingLetterFactory(LetterFactory):\n author_user = factory.SubFactory(UserFactory)\n is_draft = False\n eml = MailField()\n\n\nclass DraftLetterFactory(OutgoingLetterFactory):\n is_draft = True\n\n\nclass SendOutgoingLetterFactory(LetterFactory):\n author_user = factory.SubFactory(UserFactory)\n\n is_send_yes = factory.PostGenerationMethodCall('send')\n\n\nclass AttachmentFactory(factory.django.DjangoModelFactory):\n letter = factory.SubFactory(InstitutionFactory)\n attachment = factory.django.FileField()\n\n class Meta:\n model = Attachment\n", "path": "feder/letters/factories.py"}]}
| 979 | 228 |
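The golden diff above adds an `AttachmentFactory` so the test suite can create letters that actually carry attachments and assert that they show up on the single-letter view. A rough sketch of how such a factory is typically exercised in a factory_boy/Django test; the use of `get_absolute_url()` and the assertion against the attachment's file URL are illustrative assumptions, not code from the feder repository:

```python
from django.test import TestCase

from feder.letters.factories import AttachmentFactory, IncomingLetterFactory


class LetterDetailAttachmentTestCase(TestCase):
    def test_attachment_listed_on_letter_detail(self):
        letter = IncomingLetterFactory()
        # Override the factory's default `letter` so the attachment hangs off our letter.
        attachment = AttachmentFactory(letter=letter)
        response = self.client.get(letter.get_absolute_url())  # assumed detail-view URL helper
        # The attachment's file URL should appear somewhere in the rendered detail page.
        self.assertContains(response, attachment.attachment.url)
```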
gh_patches_debug_19086
|
rasdani/github-patches
|
git_diff
|
qutip__qutip-2038
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix the handling of file suffixes when saving and loading Qobjs
### Bug Description
This bug was introduced in #1813 and reported by @nleehone in a post-release review of https://github.com/qutip/qutip/pull/1813#pullrequestreview-950335153
### Code to Reproduce the Bug
_No response_
### Code Output
_No response_
### Expected Behaviour
The file suffix should be added if it is not present.
### Your Environment
```shell
QuTiP version: 4.7.0
```
### Additional Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutip/fileio.py`
Content:
```
1 __all__ = ['file_data_store', 'file_data_read', 'qsave', 'qload']
2
3 import pickle
4 import numpy as np
5 import sys
6 from pathlib import Path
7
8
9 # -----------------------------------------------------------------------------
10 # Write matrix data to a file
11 #
12 def file_data_store(filename, data, numtype="complex", numformat="decimal",
13 sep=","):
14 """Stores a matrix of data to a file to be read by an external program.
15
16 Parameters
17 ----------
18 filename : str or pathlib.Path
19 Name of data file to be stored, including extension.
20 data: array_like
21 Data to be written to file.
22 numtype : str {'complex, 'real'}
23 Type of numerical data.
24 numformat : str {'decimal','exp'}
25 Format for written data.
26 sep : str
27 Single-character field seperator. Usually a tab, space, comma,
28 or semicolon.
29
30 """
31 if filename is None or data is None:
32 raise ValueError("filename or data is unspecified")
33
34 M, N = np.shape(data)
35
36 f = open(filename, "w")
37
38 f.write("# Generated by QuTiP: %dx%d %s matrix " % (M, N, numtype) +
39 "in %s format ['%s' separated values].\n" % (numformat, sep))
40
41 if numtype == "complex":
42
43 if numformat == "exp":
44
45 for m in range(M):
46 for n in range(N):
47 if np.imag(data[m, n]) >= 0.0:
48 f.write("%.10e+%.10ej" % (np.real(data[m, n]),
49 np.imag(data[m, n])))
50 else:
51 f.write("%.10e%.10ej" % (np.real(data[m, n]),
52 np.imag(data[m, n])))
53 if n != N - 1:
54 f.write(sep)
55 f.write("\n")
56
57 elif numformat == "decimal":
58
59 for m in range(M):
60 for n in range(N):
61 if np.imag(data[m, n]) >= 0.0:
62 f.write("%.10f+%.10fj" % (np.real(data[m, n]),
63 np.imag(data[m, n])))
64 else:
65 f.write("%.10f%.10fj" % (np.real(data[m, n]),
66 np.imag(data[m, n])))
67 if n != N - 1:
68 f.write(sep)
69 f.write("\n")
70
71 else:
72 raise ValueError("Illegal numformat value (should be " +
73 "'exp' or 'decimal')")
74
75 elif numtype == "real":
76
77 if numformat == "exp":
78
79 for m in range(M):
80 for n in range(N):
81 f.write("%.10e" % (np.real(data[m, n])))
82 if n != N - 1:
83 f.write(sep)
84 f.write("\n")
85
86 elif numformat == "decimal":
87
88 for m in range(M):
89 for n in range(N):
90 f.write("%.10f" % (np.real(data[m, n])))
91 if n != N - 1:
92 f.write(sep)
93 f.write("\n")
94
95 else:
96 raise ValueError("Illegal numformat value (should be " +
97 "'exp' or 'decimal')")
98
99 else:
100 raise ValueError("Illegal numtype value (should be " +
101 "'complex' or 'real')")
102
103 f.close()
104
105
106 # -----------------------------------------------------------------------------
107 # Read matrix data from a file
108 #
109 def file_data_read(filename, sep=None):
110 """Retrieves an array of data from the requested file.
111
112 Parameters
113 ----------
114 filename : str or pathlib.Path
115 Name of file containing reqested data.
116 sep : str
117 Seperator used to store data.
118
119 Returns
120 -------
121 data : array_like
122 Data from selected file.
123
124 """
125 if filename is None:
126 raise ValueError("filename is unspecified")
127
128 f = open(filename, "r")
129
130 #
131 # first count lines and numbers of
132 #
133 M = N = 0
134 for line in f:
135 # skip comment lines
136 if line[0] == '#' or line[0] == '%':
137 continue
138 # find delim
139 if N == 0 and sep is None:
140 if len(line.rstrip().split(",")) > 1:
141 sep = ","
142 elif len(line.rstrip().split(";")) > 1:
143 sep = ";"
144 elif len(line.rstrip().split(":")) > 1:
145 sep = ":"
146 elif len(line.rstrip().split("|")) > 1:
147 sep = "|"
148 elif len(line.rstrip().split()) > 1:
149 # sepical case for a mix of white space deliminators
150 sep = None
151 else:
152 raise ValueError("Unrecognized column deliminator")
153 # split the line
154 line_vec = line.split(sep)
155 n = len(line_vec)
156 if N == 0 and n > 0:
157 N = n
158 # check type
159 if ("j" in line_vec[0]) or ("i" in line_vec[0]):
160 numtype = "complex"
161 else:
162 numtype = "np.real"
163
164 # check format
165 if ("e" in line_vec[0]) or ("E" in line_vec[0]):
166 numformat = "exp"
167 else:
168 numformat = "decimal"
169
170 elif N != n:
171 raise ValueError("Badly formatted data file: " +
172 "unequal number of columns")
173 M += 1
174
175 #
176 # read data and store in a matrix
177 #
178 f.seek(0)
179
180 if numtype == "complex":
181 data = np.zeros((M, N), dtype="complex")
182 m = n = 0
183 for line in f:
184 # skip comment lines
185 if line[0] == '#' or line[0] == '%':
186 continue
187 n = 0
188 for item in line.rstrip().split(sep):
189 data[m, n] = complex(item)
190 n += 1
191 m += 1
192
193 else:
194 data = np.zeros((M, N), dtype="float")
195 m = n = 0
196 for line in f:
197 # skip comment lines
198 if line[0] == '#' or line[0] == '%':
199 continue
200 n = 0
201 for item in line.rstrip().split(sep):
202 data[m, n] = float(item)
203 n += 1
204 m += 1
205
206 f.close()
207
208 return data
209
210
211 def qsave(data, name='qutip_data'):
212 """
213 Saves given data to file named 'filename.qu' in current directory.
214
215 Parameters
216 ----------
217 data : instance/array_like
218 Input Python object to be stored.
219 filename : str or pathlib.Path
220 Name of output data file.
221
222 """
223 # open the file for writing
224 file = Path(name)
225 file = file.with_suffix(file.suffix + ".qu")
226
227 with open(name, "wb") as fileObject:
228 # this writes the object a to the file named 'filename.qu'
229 pickle.dump(data, fileObject)
230
231
232 def qload(name):
233 """
234 Loads data file from file named 'filename.qu' in current directory.
235
236 Parameters
237 ----------
238 name : str or pathlib.Path
239 Name of data file to be loaded.
240
241 Returns
242 -------
243 qobject : instance / array_like
244 Object retrieved from requested file.
245
246 """
247 file = Path(name)
248 file = file.with_suffix(file.suffix + ".qu")
249
250 with open(name, "rb") as fileObject:
251 if sys.version_info >= (3, 0):
252 out = pickle.load(fileObject, encoding='latin1')
253 else:
254 out = pickle.load(fileObject)
255
256 return out
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qutip/fileio.py b/qutip/fileio.py
--- a/qutip/fileio.py
+++ b/qutip/fileio.py
@@ -221,10 +221,10 @@
"""
# open the file for writing
- file = Path(name)
- file = file.with_suffix(file.suffix + ".qu")
+ path = Path(name)
+ path = path.with_suffix(path.suffix + ".qu")
- with open(name, "wb") as fileObject:
+ with open(path, "wb") as fileObject:
# this writes the object a to the file named 'filename.qu'
pickle.dump(data, fileObject)
@@ -244,10 +244,10 @@
Object retrieved from requested file.
"""
- file = Path(name)
- file = file.with_suffix(file.suffix + ".qu")
+ path = Path(name)
+ path = path.with_suffix(path.suffix + ".qu")
- with open(name, "rb") as fileObject:
+ with open(path, "rb") as fileObject:
if sys.version_info >= (3, 0):
out = pickle.load(fileObject, encoding='latin1')
else:
|
{"golden_diff": "diff --git a/qutip/fileio.py b/qutip/fileio.py\n--- a/qutip/fileio.py\n+++ b/qutip/fileio.py\n@@ -221,10 +221,10 @@\n \n \"\"\"\n # open the file for writing\n- file = Path(name)\n- file = file.with_suffix(file.suffix + \".qu\")\n+ path = Path(name)\n+ path = path.with_suffix(path.suffix + \".qu\")\n \n- with open(name, \"wb\") as fileObject:\n+ with open(path, \"wb\") as fileObject:\n # this writes the object a to the file named 'filename.qu'\n pickle.dump(data, fileObject)\n \n@@ -244,10 +244,10 @@\n Object retrieved from requested file.\n \n \"\"\"\n- file = Path(name)\n- file = file.with_suffix(file.suffix + \".qu\")\n+ path = Path(name)\n+ path = path.with_suffix(path.suffix + \".qu\")\n \n- with open(name, \"rb\") as fileObject:\n+ with open(path, \"rb\") as fileObject:\n if sys.version_info >= (3, 0):\n out = pickle.load(fileObject, encoding='latin1')\n else:\n", "issue": "Fix the handling of file suffixes when saving and loading Qobjs\n### Bug Description\n\nThis bug was introduced n #1813 and reported by @nleehone in a post-release review of https://github.com/qutip/qutip/pull/1813#pullrequestreview-950335153\n\n### Code to Reproduce the Bug\n\n_No response_\n\n### Code Output\n\n_No response_\n\n### Expected Behaviour\n\nThe file suffix should be added if it is not present.\n\n### Your Environment\n\n```shell\nQuTiP version: 4.7.0\n```\n\n\n### Additional Context\n\n_No response_\n", "before_files": [{"content": "__all__ = ['file_data_store', 'file_data_read', 'qsave', 'qload']\n\nimport pickle\nimport numpy as np\nimport sys\nfrom pathlib import Path\n\n\n# -----------------------------------------------------------------------------\n# Write matrix data to a file\n#\ndef file_data_store(filename, data, numtype=\"complex\", numformat=\"decimal\",\n sep=\",\"):\n \"\"\"Stores a matrix of data to a file to be read by an external program.\n\n Parameters\n ----------\n filename : str or pathlib.Path\n Name of data file to be stored, including extension.\n data: array_like\n Data to be written to file.\n numtype : str {'complex, 'real'}\n Type of numerical data.\n numformat : str {'decimal','exp'}\n Format for written data.\n sep : str\n Single-character field seperator. 
Usually a tab, space, comma,\n or semicolon.\n\n \"\"\"\n if filename is None or data is None:\n raise ValueError(\"filename or data is unspecified\")\n\n M, N = np.shape(data)\n\n f = open(filename, \"w\")\n\n f.write(\"# Generated by QuTiP: %dx%d %s matrix \" % (M, N, numtype) +\n \"in %s format ['%s' separated values].\\n\" % (numformat, sep))\n\n if numtype == \"complex\":\n\n if numformat == \"exp\":\n\n for m in range(M):\n for n in range(N):\n if np.imag(data[m, n]) >= 0.0:\n f.write(\"%.10e+%.10ej\" % (np.real(data[m, n]),\n np.imag(data[m, n])))\n else:\n f.write(\"%.10e%.10ej\" % (np.real(data[m, n]),\n np.imag(data[m, n])))\n if n != N - 1:\n f.write(sep)\n f.write(\"\\n\")\n\n elif numformat == \"decimal\":\n\n for m in range(M):\n for n in range(N):\n if np.imag(data[m, n]) >= 0.0:\n f.write(\"%.10f+%.10fj\" % (np.real(data[m, n]),\n np.imag(data[m, n])))\n else:\n f.write(\"%.10f%.10fj\" % (np.real(data[m, n]),\n np.imag(data[m, n])))\n if n != N - 1:\n f.write(sep)\n f.write(\"\\n\")\n\n else:\n raise ValueError(\"Illegal numformat value (should be \" +\n \"'exp' or 'decimal')\")\n\n elif numtype == \"real\":\n\n if numformat == \"exp\":\n\n for m in range(M):\n for n in range(N):\n f.write(\"%.10e\" % (np.real(data[m, n])))\n if n != N - 1:\n f.write(sep)\n f.write(\"\\n\")\n\n elif numformat == \"decimal\":\n\n for m in range(M):\n for n in range(N):\n f.write(\"%.10f\" % (np.real(data[m, n])))\n if n != N - 1:\n f.write(sep)\n f.write(\"\\n\")\n\n else:\n raise ValueError(\"Illegal numformat value (should be \" +\n \"'exp' or 'decimal')\")\n\n else:\n raise ValueError(\"Illegal numtype value (should be \" +\n \"'complex' or 'real')\")\n\n f.close()\n\n\n# -----------------------------------------------------------------------------\n# Read matrix data from a file\n#\ndef file_data_read(filename, sep=None):\n \"\"\"Retrieves an array of data from the requested file.\n\n Parameters\n ----------\n filename : str or pathlib.Path\n Name of file containing reqested data.\n sep : str\n Seperator used to store data.\n\n Returns\n -------\n data : array_like\n Data from selected file.\n\n \"\"\"\n if filename is None:\n raise ValueError(\"filename is unspecified\")\n\n f = open(filename, \"r\")\n\n #\n # first count lines and numbers of\n #\n M = N = 0\n for line in f:\n # skip comment lines\n if line[0] == '#' or line[0] == '%':\n continue\n # find delim\n if N == 0 and sep is None:\n if len(line.rstrip().split(\",\")) > 1:\n sep = \",\"\n elif len(line.rstrip().split(\";\")) > 1:\n sep = \";\"\n elif len(line.rstrip().split(\":\")) > 1:\n sep = \":\"\n elif len(line.rstrip().split(\"|\")) > 1:\n sep = \"|\"\n elif len(line.rstrip().split()) > 1:\n # sepical case for a mix of white space deliminators\n sep = None\n else:\n raise ValueError(\"Unrecognized column deliminator\")\n # split the line\n line_vec = line.split(sep)\n n = len(line_vec)\n if N == 0 and n > 0:\n N = n\n # check type\n if (\"j\" in line_vec[0]) or (\"i\" in line_vec[0]):\n numtype = \"complex\"\n else:\n numtype = \"np.real\"\n\n # check format\n if (\"e\" in line_vec[0]) or (\"E\" in line_vec[0]):\n numformat = \"exp\"\n else:\n numformat = \"decimal\"\n\n elif N != n:\n raise ValueError(\"Badly formatted data file: \" +\n \"unequal number of columns\")\n M += 1\n\n #\n # read data and store in a matrix\n #\n f.seek(0)\n\n if numtype == \"complex\":\n data = np.zeros((M, N), dtype=\"complex\")\n m = n = 0\n for line in f:\n # skip comment lines\n if line[0] == '#' or line[0] == '%':\n continue\n n = 0\n for 
item in line.rstrip().split(sep):\n data[m, n] = complex(item)\n n += 1\n m += 1\n\n else:\n data = np.zeros((M, N), dtype=\"float\")\n m = n = 0\n for line in f:\n # skip comment lines\n if line[0] == '#' or line[0] == '%':\n continue\n n = 0\n for item in line.rstrip().split(sep):\n data[m, n] = float(item)\n n += 1\n m += 1\n\n f.close()\n\n return data\n\n\ndef qsave(data, name='qutip_data'):\n \"\"\"\n Saves given data to file named 'filename.qu' in current directory.\n\n Parameters\n ----------\n data : instance/array_like\n Input Python object to be stored.\n filename : str or pathlib.Path\n Name of output data file.\n\n \"\"\"\n # open the file for writing\n file = Path(name)\n file = file.with_suffix(file.suffix + \".qu\")\n\n with open(name, \"wb\") as fileObject:\n # this writes the object a to the file named 'filename.qu'\n pickle.dump(data, fileObject)\n\n\ndef qload(name):\n \"\"\"\n Loads data file from file named 'filename.qu' in current directory.\n\n Parameters\n ----------\n name : str or pathlib.Path\n Name of data file to be loaded.\n\n Returns\n -------\n qobject : instance / array_like\n Object retrieved from requested file.\n\n \"\"\"\n file = Path(name)\n file = file.with_suffix(file.suffix + \".qu\")\n\n with open(name, \"rb\") as fileObject:\n if sys.version_info >= (3, 0):\n out = pickle.load(fileObject, encoding='latin1')\n else:\n out = pickle.load(fileObject)\n\n return out\n", "path": "qutip/fileio.py"}], "after_files": [{"content": "__all__ = ['file_data_store', 'file_data_read', 'qsave', 'qload']\n\nimport pickle\nimport numpy as np\nimport sys\nfrom pathlib import Path\n\n\n# -----------------------------------------------------------------------------\n# Write matrix data to a file\n#\ndef file_data_store(filename, data, numtype=\"complex\", numformat=\"decimal\",\n sep=\",\"):\n \"\"\"Stores a matrix of data to a file to be read by an external program.\n\n Parameters\n ----------\n filename : str or pathlib.Path\n Name of data file to be stored, including extension.\n data: array_like\n Data to be written to file.\n numtype : str {'complex, 'real'}\n Type of numerical data.\n numformat : str {'decimal','exp'}\n Format for written data.\n sep : str\n Single-character field seperator. 
Usually a tab, space, comma,\n or semicolon.\n\n \"\"\"\n if filename is None or data is None:\n raise ValueError(\"filename or data is unspecified\")\n\n M, N = np.shape(data)\n\n f = open(filename, \"w\")\n\n f.write(\"# Generated by QuTiP: %dx%d %s matrix \" % (M, N, numtype) +\n \"in %s format ['%s' separated values].\\n\" % (numformat, sep))\n\n if numtype == \"complex\":\n\n if numformat == \"exp\":\n\n for m in range(M):\n for n in range(N):\n if np.imag(data[m, n]) >= 0.0:\n f.write(\"%.10e+%.10ej\" % (np.real(data[m, n]),\n np.imag(data[m, n])))\n else:\n f.write(\"%.10e%.10ej\" % (np.real(data[m, n]),\n np.imag(data[m, n])))\n if n != N - 1:\n f.write(sep)\n f.write(\"\\n\")\n\n elif numformat == \"decimal\":\n\n for m in range(M):\n for n in range(N):\n if np.imag(data[m, n]) >= 0.0:\n f.write(\"%.10f+%.10fj\" % (np.real(data[m, n]),\n np.imag(data[m, n])))\n else:\n f.write(\"%.10f%.10fj\" % (np.real(data[m, n]),\n np.imag(data[m, n])))\n if n != N - 1:\n f.write(sep)\n f.write(\"\\n\")\n\n else:\n raise ValueError(\"Illegal numformat value (should be \" +\n \"'exp' or 'decimal')\")\n\n elif numtype == \"real\":\n\n if numformat == \"exp\":\n\n for m in range(M):\n for n in range(N):\n f.write(\"%.10e\" % (np.real(data[m, n])))\n if n != N - 1:\n f.write(sep)\n f.write(\"\\n\")\n\n elif numformat == \"decimal\":\n\n for m in range(M):\n for n in range(N):\n f.write(\"%.10f\" % (np.real(data[m, n])))\n if n != N - 1:\n f.write(sep)\n f.write(\"\\n\")\n\n else:\n raise ValueError(\"Illegal numformat value (should be \" +\n \"'exp' or 'decimal')\")\n\n else:\n raise ValueError(\"Illegal numtype value (should be \" +\n \"'complex' or 'real')\")\n\n f.close()\n\n\n# -----------------------------------------------------------------------------\n# Read matrix data from a file\n#\ndef file_data_read(filename, sep=None):\n \"\"\"Retrieves an array of data from the requested file.\n\n Parameters\n ----------\n filename : str or pathlib.Path\n Name of file containing reqested data.\n sep : str\n Seperator used to store data.\n\n Returns\n -------\n data : array_like\n Data from selected file.\n\n \"\"\"\n if filename is None:\n raise ValueError(\"filename is unspecified\")\n\n f = open(filename, \"r\")\n\n #\n # first count lines and numbers of\n #\n M = N = 0\n for line in f:\n # skip comment lines\n if line[0] == '#' or line[0] == '%':\n continue\n # find delim\n if N == 0 and sep is None:\n if len(line.rstrip().split(\",\")) > 1:\n sep = \",\"\n elif len(line.rstrip().split(\";\")) > 1:\n sep = \";\"\n elif len(line.rstrip().split(\":\")) > 1:\n sep = \":\"\n elif len(line.rstrip().split(\"|\")) > 1:\n sep = \"|\"\n elif len(line.rstrip().split()) > 1:\n # sepical case for a mix of white space deliminators\n sep = None\n else:\n raise ValueError(\"Unrecognized column deliminator\")\n # split the line\n line_vec = line.split(sep)\n n = len(line_vec)\n if N == 0 and n > 0:\n N = n\n # check type\n if (\"j\" in line_vec[0]) or (\"i\" in line_vec[0]):\n numtype = \"complex\"\n else:\n numtype = \"np.real\"\n\n # check format\n if (\"e\" in line_vec[0]) or (\"E\" in line_vec[0]):\n numformat = \"exp\"\n else:\n numformat = \"decimal\"\n\n elif N != n:\n raise ValueError(\"Badly formatted data file: \" +\n \"unequal number of columns\")\n M += 1\n\n #\n # read data and store in a matrix\n #\n f.seek(0)\n\n if numtype == \"complex\":\n data = np.zeros((M, N), dtype=\"complex\")\n m = n = 0\n for line in f:\n # skip comment lines\n if line[0] == '#' or line[0] == '%':\n continue\n n = 0\n for 
item in line.rstrip().split(sep):\n data[m, n] = complex(item)\n n += 1\n m += 1\n\n else:\n data = np.zeros((M, N), dtype=\"float\")\n m = n = 0\n for line in f:\n # skip comment lines\n if line[0] == '#' or line[0] == '%':\n continue\n n = 0\n for item in line.rstrip().split(sep):\n data[m, n] = float(item)\n n += 1\n m += 1\n\n f.close()\n\n return data\n\n\ndef qsave(data, name='qutip_data'):\n \"\"\"\n Saves given data to file named 'filename.qu' in current directory.\n\n Parameters\n ----------\n data : instance/array_like\n Input Python object to be stored.\n filename : str or pathlib.Path\n Name of output data file.\n\n \"\"\"\n # open the file for writing\n path = Path(name)\n path = path.with_suffix(path.suffix + \".qu\")\n\n with open(path, \"wb\") as fileObject:\n # this writes the object a to the file named 'filename.qu'\n pickle.dump(data, fileObject)\n\n\ndef qload(name):\n \"\"\"\n Loads data file from file named 'filename.qu' in current directory.\n\n Parameters\n ----------\n name : str or pathlib.Path\n Name of data file to be loaded.\n\n Returns\n -------\n qobject : instance / array_like\n Object retrieved from requested file.\n\n \"\"\"\n path = Path(name)\n path = path.with_suffix(path.suffix + \".qu\")\n\n with open(path, \"rb\") as fileObject:\n if sys.version_info >= (3, 0):\n out = pickle.load(fileObject, encoding='latin1')\n else:\n out = pickle.load(fileObject)\n\n return out\n", "path": "qutip/fileio.py"}]}
| 2,780 | 279 |
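The heart of the qutip fix above is that the suffixed `Path` was computed but never used: `open()` was still called with the original `name`, so files were written to and read from the bare `name` instead of `name.qu`. A few lines of plain Python make the pre-/post-fix behaviour visible (the file name is arbitrary):

```python
from pathlib import Path

name = "my_state"                     # caller passes a name without an extension
path = Path(name)
path = path.with_suffix(path.suffix + ".qu")

print(path)   # my_state.qu  <- the path qsave/qload are documented to use
print(name)   # my_state     <- what the pre-fix code actually handed to open()

# After the patch both functions call open(path, ...), so qsave(data, "my_state")
# and a later qload("my_state") consistently resolve to "my_state.qu".
```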
gh_patches_debug_43440
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-13721
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docker repository from build args
**Is your feature request related to a problem? Please describe.**
As we build images for our internal registry, the last part of the image name is usually derived from the git branch. Currently there is no easy way to incorporate this value into the built image name, besides the version tag.
**Describe the solution you'd like**
By allowing build args to be interpolated into the repository field value, in the same manner as for image tags, we solve this problem.
**Describe alternatives you've considered**
Discarded the idea of having more predefined values based on environmental facts from things like git etc., in favour of a general solution where you can provide the required information as environment variable values.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/docker/goals/package_image.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3 from __future__ import annotations
4
5 import logging
6 from dataclasses import dataclass
7 from os import path
8
9 from pants.backend.docker.registries import DockerRegistries
10 from pants.backend.docker.subsystems.docker_options import DockerOptions
11 from pants.backend.docker.target_types import (
12 DockerImageSourceField,
13 DockerImageTagsField,
14 DockerRegistriesField,
15 DockerRepositoryField,
16 )
17 from pants.backend.docker.util_rules.docker_binary import DockerBinary
18 from pants.backend.docker.util_rules.docker_build_context import (
19 DockerBuildContext,
20 DockerBuildContextRequest,
21 DockerVersionContext,
22 )
23 from pants.core.goals.package import BuiltPackage, BuiltPackageArtifact, PackageFieldSet
24 from pants.core.goals.run import RunFieldSet
25 from pants.engine.process import Process, ProcessResult
26 from pants.engine.rules import Get, collect_rules, rule
27 from pants.engine.unions import UnionRule
28 from pants.util.strutil import bullet_list, pluralize
29
30 logger = logging.getLogger(__name__)
31
32
33 class DockerImageTagValueError(ValueError):
34 pass
35
36
37 class DockerRepositoryNameError(ValueError):
38 pass
39
40
41 @dataclass(frozen=True)
42 class BuiltDockerImage(BuiltPackageArtifact):
43 tags: tuple[str, ...] = ()
44
45 @classmethod
46 def create(cls, tags: tuple[str, ...]) -> BuiltDockerImage:
47 tags_string = tags[0] if len(tags) == 1 else (f"\n{bullet_list(tags)}")
48 return cls(
49 tags=tags,
50 relpath=None,
51 extra_log_lines=(
52 f"Built docker {pluralize(len(tags), 'image', False)}: {tags_string}",
53 ),
54 )
55
56
57 @dataclass(frozen=True)
58 class DockerFieldSet(PackageFieldSet, RunFieldSet):
59 required_fields = (DockerImageSourceField,)
60
61 registries: DockerRegistriesField
62 repository: DockerRepositoryField
63 tags: DockerImageTagsField
64
65 def format_tag(self, tag: str, version_context: DockerVersionContext) -> str:
66 try:
67 return tag.format(**version_context)
68 except (KeyError, ValueError) as e:
69 msg = (
70 "Invalid tag value for the `image_tags` field of the `docker_image` target at "
71 f"{self.address}: {tag!r}.\n\n"
72 )
73 if isinstance(e, KeyError):
74 msg += f"The placeholder {e} is unknown."
75 if version_context:
76 msg += f' Try with one of: {", ".join(version_context.keys())}.'
77 else:
78 msg += (
79 " There are currently no known placeholders to use. These placeholders "
80 "can come from `[docker].build_args` or parsed FROM instructions of "
81 "your `Dockerfile`."
82 )
83 else:
84 msg += str(e)
85 raise DockerImageTagValueError(msg) from e
86
87 def format_repository(self, default_repository: str) -> str:
88 directory = path.basename(self.address.spec_path)
89 parent_directory = path.basename(path.dirname(self.address.spec_path))
90 repository_fmt = self.repository.value or default_repository
91 try:
92 return repository_fmt.format(
93 name=self.address.target_name,
94 directory=directory,
95 parent_directory=parent_directory,
96 )
97 except KeyError as e:
98 if self.repository.value:
99 source = "`repository` field of the `docker_image` target " f"at {self.address}"
100 else:
101 source = "`[docker].default_repository` configuration option"
102
103 raise DockerRepositoryNameError(
104 f"Invalid value for the {source}: {repository_fmt!r}. Unknown placeholder: {e}.\n\n"
105 f"You may only reference any of `name`, `directory` or `parent_directory`."
106 ) from e
107
108 def image_refs(
109 self,
110 default_repository: str,
111 registries: DockerRegistries,
112 version_context: DockerVersionContext,
113 ) -> tuple[str, ...]:
114 """The image refs are the full image name, including any registry and version tag.
115
116 In the Docker world, the term `tag` is used both for what we here prefer to call the image
117 `ref`, as well as for the image version, or tag, that is at the end of the image name
118 separated with a colon. By introducing the image `ref` we can retain the use of `tag` for
119 the version part of the image name.
120
121 Returns all image refs to apply to the Docker image, on the form:
122
123 [<registry>/]<repository-name>[:<tag>]
124
125 Where the `<repository-name>` may have contain any number of separating slashes `/`,
126 depending on the `default_repository` from configuration or the `repository` field
127 on the target `docker_image`.
128
129 This method will always return a non-empty tuple.
130 """
131 repository = self.format_repository(default_repository)
132 image_names = tuple(
133 ":".join(s for s in [repository, self.format_tag(tag, version_context)] if s)
134 for tag in self.tags.value or ()
135 )
136
137 registries_options = tuple(registries.get(*(self.registries.value or [])))
138 if not registries_options:
139 # The image name is also valid as image ref without registry.
140 return image_names
141
142 return tuple(
143 "/".join([registry.address, image_name])
144 for image_name in image_names
145 for registry in registries_options
146 )
147
148
149 @rule
150 async def build_docker_image(
151 field_set: DockerFieldSet,
152 options: DockerOptions,
153 docker: DockerBinary,
154 ) -> BuiltPackage:
155 context = await Get(
156 DockerBuildContext,
157 DockerBuildContextRequest(
158 address=field_set.address,
159 build_upstream_images=True,
160 ),
161 )
162
163 tags = field_set.image_refs(
164 default_repository=options.default_repository,
165 registries=options.registries(),
166 version_context=context.version_context,
167 )
168
169 result = await Get(
170 ProcessResult,
171 Process,
172 docker.build_image(
173 build_args=context.build_args,
174 digest=context.digest,
175 dockerfile=context.dockerfile,
176 env=context.env,
177 tags=tags,
178 ),
179 )
180
181 logger.debug(
182 f"Docker build output for {tags[0]}:\n"
183 f"{result.stdout.decode()}\n"
184 f"{result.stderr.decode()}"
185 )
186
187 return BuiltPackage(
188 result.output_digest,
189 (BuiltDockerImage.create(tags),),
190 )
191
192
193 def rules():
194 return [
195 *collect_rules(),
196 UnionRule(PackageFieldSet, DockerFieldSet),
197 UnionRule(RunFieldSet, DockerFieldSet),
198 ]
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/backend/docker/goals/package_image.py b/src/python/pants/backend/docker/goals/package_image.py
--- a/src/python/pants/backend/docker/goals/package_image.py
+++ b/src/python/pants/backend/docker/goals/package_image.py
@@ -5,6 +5,7 @@
import logging
from dataclasses import dataclass
from os import path
+from typing import Any, Mapping
from pants.backend.docker.registries import DockerRegistries
from pants.backend.docker.subsystems.docker_options import DockerOptions
@@ -73,7 +74,7 @@
if isinstance(e, KeyError):
msg += f"The placeholder {e} is unknown."
if version_context:
- msg += f' Try with one of: {", ".join(version_context.keys())}.'
+ msg += f' Try with one of: {", ".join(sorted(version_context.keys()))}.'
else:
msg += (
" There are currently no known placeholders to use. These placeholders "
@@ -84,26 +85,35 @@
msg += str(e)
raise DockerImageTagValueError(msg) from e
- def format_repository(self, default_repository: str) -> str:
- directory = path.basename(self.address.spec_path)
- parent_directory = path.basename(path.dirname(self.address.spec_path))
+ def format_repository(
+ self, default_repository: str, repository_context: Mapping[str, Any]
+ ) -> str:
+ fmt_context = dict(
+ directory=path.basename(self.address.spec_path),
+ name=self.address.target_name,
+ parent_directory=path.basename(path.dirname(self.address.spec_path)),
+ **repository_context,
+ )
repository_fmt = self.repository.value or default_repository
+
try:
- return repository_fmt.format(
- name=self.address.target_name,
- directory=directory,
- parent_directory=parent_directory,
- )
- except KeyError as e:
+ return repository_fmt.format(**fmt_context)
+ except (KeyError, ValueError) as e:
if self.repository.value:
- source = "`repository` field of the `docker_image` target " f"at {self.address}"
+ source = f"`repository` field of the `docker_image` target at {self.address}"
else:
source = "`[docker].default_repository` configuration option"
- raise DockerRepositoryNameError(
- f"Invalid value for the {source}: {repository_fmt!r}. Unknown placeholder: {e}.\n\n"
- f"You may only reference any of `name`, `directory` or `parent_directory`."
- ) from e
+ msg = f"Invalid value for the {source}: {repository_fmt!r}.\n\n"
+
+ if isinstance(e, KeyError):
+ msg += (
+ f"The placeholder {e} is unknown. "
+ f'Try with one of: {", ".join(sorted(fmt_context.keys()))}.'
+ )
+ else:
+ msg += str(e)
+ raise DockerRepositoryNameError(msg) from e
def image_refs(
self,
@@ -122,13 +132,17 @@
[<registry>/]<repository-name>[:<tag>]
- Where the `<repository-name>` may have contain any number of separating slashes `/`,
- depending on the `default_repository` from configuration or the `repository` field
- on the target `docker_image`.
+ Where the `<repository-name>` may contain any number of separating slashes `/`, depending on
+ the `default_repository` from configuration or the `repository` field on the target
+ `docker_image`.
This method will always return a non-empty tuple.
"""
- repository = self.format_repository(default_repository)
+ repository_context = {}
+ if "build_args" in version_context:
+ repository_context["build_args"] = version_context["build_args"]
+
+ repository = self.format_repository(default_repository, repository_context)
image_names = tuple(
":".join(s for s in [repository, self.format_tag(tag, version_context)] if s)
for tag in self.tags.value or ()
|
{"golden_diff": "diff --git a/src/python/pants/backend/docker/goals/package_image.py b/src/python/pants/backend/docker/goals/package_image.py\n--- a/src/python/pants/backend/docker/goals/package_image.py\n+++ b/src/python/pants/backend/docker/goals/package_image.py\n@@ -5,6 +5,7 @@\n import logging\n from dataclasses import dataclass\n from os import path\n+from typing import Any, Mapping\n \n from pants.backend.docker.registries import DockerRegistries\n from pants.backend.docker.subsystems.docker_options import DockerOptions\n@@ -73,7 +74,7 @@\n if isinstance(e, KeyError):\n msg += f\"The placeholder {e} is unknown.\"\n if version_context:\n- msg += f' Try with one of: {\", \".join(version_context.keys())}.'\n+ msg += f' Try with one of: {\", \".join(sorted(version_context.keys()))}.'\n else:\n msg += (\n \" There are currently no known placeholders to use. These placeholders \"\n@@ -84,26 +85,35 @@\n msg += str(e)\n raise DockerImageTagValueError(msg) from e\n \n- def format_repository(self, default_repository: str) -> str:\n- directory = path.basename(self.address.spec_path)\n- parent_directory = path.basename(path.dirname(self.address.spec_path))\n+ def format_repository(\n+ self, default_repository: str, repository_context: Mapping[str, Any]\n+ ) -> str:\n+ fmt_context = dict(\n+ directory=path.basename(self.address.spec_path),\n+ name=self.address.target_name,\n+ parent_directory=path.basename(path.dirname(self.address.spec_path)),\n+ **repository_context,\n+ )\n repository_fmt = self.repository.value or default_repository\n+\n try:\n- return repository_fmt.format(\n- name=self.address.target_name,\n- directory=directory,\n- parent_directory=parent_directory,\n- )\n- except KeyError as e:\n+ return repository_fmt.format(**fmt_context)\n+ except (KeyError, ValueError) as e:\n if self.repository.value:\n- source = \"`repository` field of the `docker_image` target \" f\"at {self.address}\"\n+ source = f\"`repository` field of the `docker_image` target at {self.address}\"\n else:\n source = \"`[docker].default_repository` configuration option\"\n \n- raise DockerRepositoryNameError(\n- f\"Invalid value for the {source}: {repository_fmt!r}. Unknown placeholder: {e}.\\n\\n\"\n- f\"You may only reference any of `name`, `directory` or `parent_directory`.\"\n- ) from e\n+ msg = f\"Invalid value for the {source}: {repository_fmt!r}.\\n\\n\"\n+\n+ if isinstance(e, KeyError):\n+ msg += (\n+ f\"The placeholder {e} is unknown. 
\"\n+ f'Try with one of: {\", \".join(sorted(fmt_context.keys()))}.'\n+ )\n+ else:\n+ msg += str(e)\n+ raise DockerRepositoryNameError(msg) from e\n \n def image_refs(\n self,\n@@ -122,13 +132,17 @@\n \n [<registry>/]<repository-name>[:<tag>]\n \n- Where the `<repository-name>` may have contain any number of separating slashes `/`,\n- depending on the `default_repository` from configuration or the `repository` field\n- on the target `docker_image`.\n+ Where the `<repository-name>` may contain any number of separating slashes `/`, depending on\n+ the `default_repository` from configuration or the `repository` field on the target\n+ `docker_image`.\n \n This method will always return a non-empty tuple.\n \"\"\"\n- repository = self.format_repository(default_repository)\n+ repository_context = {}\n+ if \"build_args\" in version_context:\n+ repository_context[\"build_args\"] = version_context[\"build_args\"]\n+\n+ repository = self.format_repository(default_repository, repository_context)\n image_names = tuple(\n \":\".join(s for s in [repository, self.format_tag(tag, version_context)] if s)\n for tag in self.tags.value or ()\n", "issue": "Docker repository from build args\n**Is your feature request related to a problem? Please describe.**\r\n\r\nAs we build images for our internal registry, the last part of the image name is usually derived from the git branch. Currently there is no easy way to incorporate this value into the built image name, besides the version tag.\r\n\r\n**Describe the solution you'd like**\r\n\r\nBy allowing to interpolate build args into the repository field value, in the same manner as for image tags, we solve this problem.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nDiscarded the idea to have more predefined values based on environmental facts from things like git etc, in favour of a general solution where you can provide the required information as environment variable values.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\nfrom __future__ import annotations\n\nimport logging\nfrom dataclasses import dataclass\nfrom os import path\n\nfrom pants.backend.docker.registries import DockerRegistries\nfrom pants.backend.docker.subsystems.docker_options import DockerOptions\nfrom pants.backend.docker.target_types import (\n DockerImageSourceField,\n DockerImageTagsField,\n DockerRegistriesField,\n DockerRepositoryField,\n)\nfrom pants.backend.docker.util_rules.docker_binary import DockerBinary\nfrom pants.backend.docker.util_rules.docker_build_context import (\n DockerBuildContext,\n DockerBuildContextRequest,\n DockerVersionContext,\n)\nfrom pants.core.goals.package import BuiltPackage, BuiltPackageArtifact, PackageFieldSet\nfrom pants.core.goals.run import RunFieldSet\nfrom pants.engine.process import Process, ProcessResult\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.unions import UnionRule\nfrom pants.util.strutil import bullet_list, pluralize\n\nlogger = logging.getLogger(__name__)\n\n\nclass DockerImageTagValueError(ValueError):\n pass\n\n\nclass DockerRepositoryNameError(ValueError):\n pass\n\n\n@dataclass(frozen=True)\nclass BuiltDockerImage(BuiltPackageArtifact):\n tags: tuple[str, ...] 
= ()\n\n @classmethod\n def create(cls, tags: tuple[str, ...]) -> BuiltDockerImage:\n tags_string = tags[0] if len(tags) == 1 else (f\"\\n{bullet_list(tags)}\")\n return cls(\n tags=tags,\n relpath=None,\n extra_log_lines=(\n f\"Built docker {pluralize(len(tags), 'image', False)}: {tags_string}\",\n ),\n )\n\n\n@dataclass(frozen=True)\nclass DockerFieldSet(PackageFieldSet, RunFieldSet):\n required_fields = (DockerImageSourceField,)\n\n registries: DockerRegistriesField\n repository: DockerRepositoryField\n tags: DockerImageTagsField\n\n def format_tag(self, tag: str, version_context: DockerVersionContext) -> str:\n try:\n return tag.format(**version_context)\n except (KeyError, ValueError) as e:\n msg = (\n \"Invalid tag value for the `image_tags` field of the `docker_image` target at \"\n f\"{self.address}: {tag!r}.\\n\\n\"\n )\n if isinstance(e, KeyError):\n msg += f\"The placeholder {e} is unknown.\"\n if version_context:\n msg += f' Try with one of: {\", \".join(version_context.keys())}.'\n else:\n msg += (\n \" There are currently no known placeholders to use. These placeholders \"\n \"can come from `[docker].build_args` or parsed FROM instructions of \"\n \"your `Dockerfile`.\"\n )\n else:\n msg += str(e)\n raise DockerImageTagValueError(msg) from e\n\n def format_repository(self, default_repository: str) -> str:\n directory = path.basename(self.address.spec_path)\n parent_directory = path.basename(path.dirname(self.address.spec_path))\n repository_fmt = self.repository.value or default_repository\n try:\n return repository_fmt.format(\n name=self.address.target_name,\n directory=directory,\n parent_directory=parent_directory,\n )\n except KeyError as e:\n if self.repository.value:\n source = \"`repository` field of the `docker_image` target \" f\"at {self.address}\"\n else:\n source = \"`[docker].default_repository` configuration option\"\n\n raise DockerRepositoryNameError(\n f\"Invalid value for the {source}: {repository_fmt!r}. Unknown placeholder: {e}.\\n\\n\"\n f\"You may only reference any of `name`, `directory` or `parent_directory`.\"\n ) from e\n\n def image_refs(\n self,\n default_repository: str,\n registries: DockerRegistries,\n version_context: DockerVersionContext,\n ) -> tuple[str, ...]:\n \"\"\"The image refs are the full image name, including any registry and version tag.\n\n In the Docker world, the term `tag` is used both for what we here prefer to call the image\n `ref`, as well as for the image version, or tag, that is at the end of the image name\n separated with a colon. 
By introducing the image `ref` we can retain the use of `tag` for\n the version part of the image name.\n\n Returns all image refs to apply to the Docker image, on the form:\n\n [<registry>/]<repository-name>[:<tag>]\n\n Where the `<repository-name>` may have contain any number of separating slashes `/`,\n depending on the `default_repository` from configuration or the `repository` field\n on the target `docker_image`.\n\n This method will always return a non-empty tuple.\n \"\"\"\n repository = self.format_repository(default_repository)\n image_names = tuple(\n \":\".join(s for s in [repository, self.format_tag(tag, version_context)] if s)\n for tag in self.tags.value or ()\n )\n\n registries_options = tuple(registries.get(*(self.registries.value or [])))\n if not registries_options:\n # The image name is also valid as image ref without registry.\n return image_names\n\n return tuple(\n \"/\".join([registry.address, image_name])\n for image_name in image_names\n for registry in registries_options\n )\n\n\n@rule\nasync def build_docker_image(\n field_set: DockerFieldSet,\n options: DockerOptions,\n docker: DockerBinary,\n) -> BuiltPackage:\n context = await Get(\n DockerBuildContext,\n DockerBuildContextRequest(\n address=field_set.address,\n build_upstream_images=True,\n ),\n )\n\n tags = field_set.image_refs(\n default_repository=options.default_repository,\n registries=options.registries(),\n version_context=context.version_context,\n )\n\n result = await Get(\n ProcessResult,\n Process,\n docker.build_image(\n build_args=context.build_args,\n digest=context.digest,\n dockerfile=context.dockerfile,\n env=context.env,\n tags=tags,\n ),\n )\n\n logger.debug(\n f\"Docker build output for {tags[0]}:\\n\"\n f\"{result.stdout.decode()}\\n\"\n f\"{result.stderr.decode()}\"\n )\n\n return BuiltPackage(\n result.output_digest,\n (BuiltDockerImage.create(tags),),\n )\n\n\ndef rules():\n return [\n *collect_rules(),\n UnionRule(PackageFieldSet, DockerFieldSet),\n UnionRule(RunFieldSet, DockerFieldSet),\n ]\n", "path": "src/python/pants/backend/docker/goals/package_image.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\nfrom __future__ import annotations\n\nimport logging\nfrom dataclasses import dataclass\nfrom os import path\nfrom typing import Any, Mapping\n\nfrom pants.backend.docker.registries import DockerRegistries\nfrom pants.backend.docker.subsystems.docker_options import DockerOptions\nfrom pants.backend.docker.target_types import (\n DockerImageSourceField,\n DockerImageTagsField,\n DockerRegistriesField,\n DockerRepositoryField,\n)\nfrom pants.backend.docker.util_rules.docker_binary import DockerBinary\nfrom pants.backend.docker.util_rules.docker_build_context import (\n DockerBuildContext,\n DockerBuildContextRequest,\n DockerVersionContext,\n)\nfrom pants.core.goals.package import BuiltPackage, BuiltPackageArtifact, PackageFieldSet\nfrom pants.core.goals.run import RunFieldSet\nfrom pants.engine.process import Process, ProcessResult\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.unions import UnionRule\nfrom pants.util.strutil import bullet_list, pluralize\n\nlogger = logging.getLogger(__name__)\n\n\nclass DockerImageTagValueError(ValueError):\n pass\n\n\nclass DockerRepositoryNameError(ValueError):\n pass\n\n\n@dataclass(frozen=True)\nclass BuiltDockerImage(BuiltPackageArtifact):\n tags: tuple[str, ...] 
= ()\n\n @classmethod\n def create(cls, tags: tuple[str, ...]) -> BuiltDockerImage:\n tags_string = tags[0] if len(tags) == 1 else (f\"\\n{bullet_list(tags)}\")\n return cls(\n tags=tags,\n relpath=None,\n extra_log_lines=(\n f\"Built docker {pluralize(len(tags), 'image', False)}: {tags_string}\",\n ),\n )\n\n\n@dataclass(frozen=True)\nclass DockerFieldSet(PackageFieldSet, RunFieldSet):\n required_fields = (DockerImageSourceField,)\n\n registries: DockerRegistriesField\n repository: DockerRepositoryField\n tags: DockerImageTagsField\n\n def format_tag(self, tag: str, version_context: DockerVersionContext) -> str:\n try:\n return tag.format(**version_context)\n except (KeyError, ValueError) as e:\n msg = (\n \"Invalid tag value for the `image_tags` field of the `docker_image` target at \"\n f\"{self.address}: {tag!r}.\\n\\n\"\n )\n if isinstance(e, KeyError):\n msg += f\"The placeholder {e} is unknown.\"\n if version_context:\n msg += f' Try with one of: {\", \".join(sorted(version_context.keys()))}.'\n else:\n msg += (\n \" There are currently no known placeholders to use. These placeholders \"\n \"can come from `[docker].build_args` or parsed FROM instructions of \"\n \"your `Dockerfile`.\"\n )\n else:\n msg += str(e)\n raise DockerImageTagValueError(msg) from e\n\n def format_repository(\n self, default_repository: str, repository_context: Mapping[str, Any]\n ) -> str:\n fmt_context = dict(\n directory=path.basename(self.address.spec_path),\n name=self.address.target_name,\n parent_directory=path.basename(path.dirname(self.address.spec_path)),\n **repository_context,\n )\n repository_fmt = self.repository.value or default_repository\n\n try:\n return repository_fmt.format(**fmt_context)\n except (KeyError, ValueError) as e:\n if self.repository.value:\n source = f\"`repository` field of the `docker_image` target at {self.address}\"\n else:\n source = \"`[docker].default_repository` configuration option\"\n\n msg = f\"Invalid value for the {source}: {repository_fmt!r}.\\n\\n\"\n\n if isinstance(e, KeyError):\n msg += (\n f\"The placeholder {e} is unknown. \"\n f'Try with one of: {\", \".join(sorted(fmt_context.keys()))}.'\n )\n else:\n msg += str(e)\n raise DockerRepositoryNameError(msg) from e\n\n def image_refs(\n self,\n default_repository: str,\n registries: DockerRegistries,\n version_context: DockerVersionContext,\n ) -> tuple[str, ...]:\n \"\"\"The image refs are the full image name, including any registry and version tag.\n\n In the Docker world, the term `tag` is used both for what we here prefer to call the image\n `ref`, as well as for the image version, or tag, that is at the end of the image name\n separated with a colon. 
By introducing the image `ref` we can retain the use of `tag` for\n the version part of the image name.\n\n Returns all image refs to apply to the Docker image, on the form:\n\n [<registry>/]<repository-name>[:<tag>]\n\n Where the `<repository-name>` may contain any number of separating slashes `/`, depending on\n the `default_repository` from configuration or the `repository` field on the target\n `docker_image`.\n\n This method will always return a non-empty tuple.\n \"\"\"\n repository_context = {}\n if \"build_args\" in version_context:\n repository_context[\"build_args\"] = version_context[\"build_args\"]\n\n repository = self.format_repository(default_repository, repository_context)\n image_names = tuple(\n \":\".join(s for s in [repository, self.format_tag(tag, version_context)] if s)\n for tag in self.tags.value or ()\n )\n\n registries_options = tuple(registries.get(*(self.registries.value or [])))\n if not registries_options:\n # The image name is also valid as image ref without registry.\n return image_names\n\n return tuple(\n \"/\".join([registry.address, image_name])\n for image_name in image_names\n for registry in registries_options\n )\n\n\n@rule\nasync def build_docker_image(\n field_set: DockerFieldSet,\n options: DockerOptions,\n docker: DockerBinary,\n) -> BuiltPackage:\n context = await Get(\n DockerBuildContext,\n DockerBuildContextRequest(\n address=field_set.address,\n build_upstream_images=True,\n ),\n )\n\n tags = field_set.image_refs(\n default_repository=options.default_repository,\n registries=options.registries(),\n version_context=context.version_context,\n )\n\n result = await Get(\n ProcessResult,\n Process,\n docker.build_image(\n build_args=context.build_args,\n digest=context.digest,\n dockerfile=context.dockerfile,\n env=context.env,\n tags=tags,\n ),\n )\n\n logger.debug(\n f\"Docker build output for {tags[0]}:\\n\"\n f\"{result.stdout.decode()}\\n\"\n f\"{result.stderr.decode()}\"\n )\n\n return BuiltPackage(\n result.output_digest,\n (BuiltDockerImage.create(tags),),\n )\n\n\ndef rules():\n return [\n *collect_rules(),\n UnionRule(PackageFieldSet, DockerFieldSet),\n UnionRule(RunFieldSet, DockerFieldSet),\n ]\n", "path": "src/python/pants/backend/docker/goals/package_image.py"}]}
| 2,341 | 898 |
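The record above describes assembling image refs as `[<registry>/]<repository-name>[:<tag>]` from the repository, tags and configured registries. A stand-alone sketch of that joining logic, simplified away from the Pants field-set machinery and using hypothetical inputs:

```python
from typing import Iterable, Tuple


def image_refs(repository: str, tags: Iterable[str],
               registries: Iterable[str] = ()) -> Tuple[str, ...]:
    # Build [<registry>/]<repository>[:<tag>] for every tag, mirroring the
    # ":"-join and "/"-join described in the record above.
    names = tuple(":".join(s for s in (repository, tag) if s) for tag in tags)
    if not registries:
        # Without a configured registry, the bare image name is the ref.
        return names
    return tuple("/".join((registry, name))
                 for name in names for registry in registries)


print(image_refs("app/web", ["1.0", "latest"], ["registry.example.com"]))
# ('registry.example.com/app/web:1.0', 'registry.example.com/app/web:latest')
```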
gh_patches_debug_1544
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1653
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
image.tag does not return anything
There's no return statement in `images.tag`:
https://github.com/docker/docker-py/blob/master/docker/models/images.py#L99
[Readthedocs](https://docker-py.readthedocs.io/en/stable/images.html) (and the method comments) suggest it should return a bool for success.
I saw this running version 2.2.1 of the library
```
# pip freeze | grep docker
docker==2.2.1
docker-pycreds==0.2.1
```
**Repro code:**
```
import docker
def test_tag(id):
client = docker.DockerClient()
image = client.images.get(id)
tag_result = image.tag('test_image', tag='test_tag')
if tag_result is None:
print('oops')
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/models/images.py`
Content:
```
1 import re
2
3 import six
4
5 from ..api import APIClient
6 from ..errors import BuildError
7 from ..utils.json_stream import json_stream
8 from .resource import Collection, Model
9
10
11 class Image(Model):
12 """
13 An image on the server.
14 """
15 def __repr__(self):
16 return "<%s: '%s'>" % (self.__class__.__name__, "', '".join(self.tags))
17
18 @property
19 def labels(self):
20 """
21 The labels of an image as dictionary.
22 """
23 result = self.attrs['Config'].get('Labels')
24 return result or {}
25
26 @property
27 def short_id(self):
28 """
29 The ID of the image truncated to 10 characters, plus the ``sha256:``
30 prefix.
31 """
32 if self.id.startswith('sha256:'):
33 return self.id[:17]
34 return self.id[:10]
35
36 @property
37 def tags(self):
38 """
39 The image's tags.
40 """
41 tags = self.attrs.get('RepoTags')
42 if tags is None:
43 tags = []
44 return [tag for tag in tags if tag != '<none>:<none>']
45
46 def history(self):
47 """
48 Show the history of an image.
49
50 Returns:
51 (str): The history of the image.
52
53 Raises:
54 :py:class:`docker.errors.APIError`
55 If the server returns an error.
56 """
57 return self.client.api.history(self.id)
58
59 def save(self):
60 """
61 Get a tarball of an image. Similar to the ``docker save`` command.
62
63 Returns:
64 (urllib3.response.HTTPResponse object): The response from the
65 daemon.
66
67 Raises:
68 :py:class:`docker.errors.APIError`
69 If the server returns an error.
70
71 Example:
72
73 >>> image = cli.images.get("fedora:latest")
74 >>> resp = image.save()
75 >>> f = open('/tmp/fedora-latest.tar', 'w')
76 >>> for chunk in resp.stream():
77 >>> f.write(chunk)
78 >>> f.close()
79 """
80 return self.client.api.get_image(self.id)
81
82 def tag(self, repository, tag=None, **kwargs):
83 """
84 Tag this image into a repository. Similar to the ``docker tag``
85 command.
86
87 Args:
88 repository (str): The repository to set for the tag
89 tag (str): The tag name
90 force (bool): Force
91
92 Raises:
93 :py:class:`docker.errors.APIError`
94 If the server returns an error.
95
96 Returns:
97 (bool): ``True`` if successful
98 """
99 self.client.api.tag(self.id, repository, tag=tag, **kwargs)
100
101
102 class ImageCollection(Collection):
103 model = Image
104
105 def build(self, **kwargs):
106 """
107 Build an image and return it. Similar to the ``docker build``
108 command. Either ``path`` or ``fileobj`` must be set.
109
110 If you have a tar file for the Docker build context (including a
111 Dockerfile) already, pass a readable file-like object to ``fileobj``
112 and also pass ``custom_context=True``. If the stream is compressed
113 also, set ``encoding`` to the correct value (e.g ``gzip``).
114
115 If you want to get the raw output of the build, use the
116 :py:meth:`~docker.api.build.BuildApiMixin.build` method in the
117 low-level API.
118
119 Args:
120 path (str): Path to the directory containing the Dockerfile
121 fileobj: A file object to use as the Dockerfile. (Or a file-like
122 object)
123 tag (str): A tag to add to the final image
124 quiet (bool): Whether to return the status
125 nocache (bool): Don't use the cache when set to ``True``
126 rm (bool): Remove intermediate containers. The ``docker build``
127 command now defaults to ``--rm=true``, but we have kept the old
128 default of `False` to preserve backward compatibility
129 stream (bool): *Deprecated for API version > 1.8 (always True)*.
130 Return a blocking generator you can iterate over to retrieve
131 build output as it happens
132 timeout (int): HTTP timeout
133 custom_context (bool): Optional if using ``fileobj``
134 encoding (str): The encoding for a stream. Set to ``gzip`` for
135 compressing
136 pull (bool): Downloads any updates to the FROM image in Dockerfiles
137 forcerm (bool): Always remove intermediate containers, even after
138 unsuccessful builds
139 dockerfile (str): path within the build context to the Dockerfile
140 buildargs (dict): A dictionary of build arguments
141 container_limits (dict): A dictionary of limits applied to each
142 container created by the build process. Valid keys:
143
144 - memory (int): set memory limit for build
145 - memswap (int): Total memory (memory + swap), -1 to disable
146 swap
147 - cpushares (int): CPU shares (relative weight)
148 - cpusetcpus (str): CPUs in which to allow execution, e.g.,
149 ``"0-3"``, ``"0,1"``
150 decode (bool): If set to ``True``, the returned stream will be
151 decoded into dicts on the fly. Default ``False``.
152 cache_from (list): A list of images used for build cache
153 resolution.
154 target (str): Name of the build-stage to build in a multi-stage
155 Dockerfile.
156
157 Returns:
158 (:py:class:`Image`): The built image.
159
160 Raises:
161 :py:class:`docker.errors.BuildError`
162 If there is an error during the build.
163 :py:class:`docker.errors.APIError`
164 If the server returns any other error.
165 ``TypeError``
166 If neither ``path`` nor ``fileobj`` is specified.
167 """
168 resp = self.client.api.build(**kwargs)
169 if isinstance(resp, six.string_types):
170 return self.get(resp)
171 last_event = None
172 for chunk in json_stream(resp):
173 if 'error' in chunk:
174 raise BuildError(chunk['error'])
175 if 'stream' in chunk:
176 match = re.search(
177 r'(Successfully built |sha256:)([0-9a-f]+)',
178 chunk['stream']
179 )
180 if match:
181 image_id = match.group(2)
182 return self.get(image_id)
183 last_event = chunk
184
185 raise BuildError(last_event or 'Unknown')
186
187 def get(self, name):
188 """
189 Gets an image.
190
191 Args:
192 name (str): The name of the image.
193
194 Returns:
195 (:py:class:`Image`): The image.
196
197 Raises:
198 :py:class:`docker.errors.ImageNotFound`
199 If the image does not exist.
200 :py:class:`docker.errors.APIError`
201 If the server returns an error.
202 """
203 return self.prepare_model(self.client.api.inspect_image(name))
204
205 def list(self, name=None, all=False, filters=None):
206 """
207 List images on the server.
208
209 Args:
210 name (str): Only show images belonging to the repository ``name``
211 all (bool): Show intermediate image layers. By default, these are
212 filtered out.
213 filters (dict): Filters to be processed on the image list.
214 Available filters:
215 - ``dangling`` (bool)
216 - ``label`` (str): format either ``key`` or ``key=value``
217
218 Returns:
219 (list of :py:class:`Image`): The images.
220
221 Raises:
222 :py:class:`docker.errors.APIError`
223 If the server returns an error.
224 """
225 resp = self.client.api.images(name=name, all=all, filters=filters)
226 return [self.prepare_model(r) for r in resp]
227
228 def load(self, data):
229 """
230 Load an image that was previously saved using
231 :py:meth:`~docker.models.images.Image.save` (or ``docker save``).
232 Similar to ``docker load``.
233
234 Args:
235 data (binary): Image data to be loaded.
236
237 Raises:
238 :py:class:`docker.errors.APIError`
239 If the server returns an error.
240 """
241 return self.client.api.load_image(data)
242
243 def pull(self, name, tag=None, **kwargs):
244 """
245 Pull an image of the given name and return it. Similar to the
246 ``docker pull`` command.
247
248 If you want to get the raw pull output, use the
249 :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the
250 low-level API.
251
252 Args:
253 repository (str): The repository to pull
254 tag (str): The tag to pull
255 insecure_registry (bool): Use an insecure registry
256 auth_config (dict): Override the credentials that
257 :py:meth:`~docker.client.DockerClient.login` has set for
258 this request. ``auth_config`` should contain the ``username``
259 and ``password`` keys to be valid.
260
261 Returns:
262 (:py:class:`Image`): The image that has been pulled.
263
264 Raises:
265 :py:class:`docker.errors.APIError`
266 If the server returns an error.
267
268 Example:
269
270 >>> image = client.images.pull('busybox')
271 """
272 self.client.api.pull(name, tag=tag, **kwargs)
273 return self.get('{0}:{1}'.format(name, tag) if tag else name)
274
275 def push(self, repository, tag=None, **kwargs):
276 return self.client.api.push(repository, tag=tag, **kwargs)
277 push.__doc__ = APIClient.push.__doc__
278
279 def remove(self, *args, **kwargs):
280 self.client.api.remove_image(*args, **kwargs)
281 remove.__doc__ = APIClient.remove_image.__doc__
282
283 def search(self, *args, **kwargs):
284 return self.client.api.search(*args, **kwargs)
285 search.__doc__ = APIClient.search.__doc__
286
287 def prune(self, filters=None):
288 return self.client.api.prune_images(filters=filters)
289 prune.__doc__ = APIClient.prune_images.__doc__
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/models/images.py b/docker/models/images.py
--- a/docker/models/images.py
+++ b/docker/models/images.py
@@ -96,7 +96,7 @@
         Returns:
             (bool): ``True`` if successful
         """
-        self.client.api.tag(self.id, repository, tag=tag, **kwargs)
+        return self.client.api.tag(self.id, repository, tag=tag, **kwargs)
 
 
 class ImageCollection(Collection):
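A quick way to check the effect of this one-line change, assuming a running Docker daemon and a locally available `busybox:latest` image (both assumptions of this sketch, not part of the original report):

```python
import docker

client = docker.DockerClient()
image = client.images.get("busybox:latest")

# Before the patch this expression always evaluated to None; with the patch,
# the result of the low-level APIClient.tag call is propagated to the caller.
result = image.tag("test_image", tag="test_tag")
print(result)  # expected: True on success
```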
|
{"golden_diff": "diff --git a/docker/models/images.py b/docker/models/images.py\n--- a/docker/models/images.py\n+++ b/docker/models/images.py\n@@ -96,7 +96,7 @@\n Returns:\n (bool): ``True`` if successful\n \"\"\"\n- self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n+ return self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n \n \n class ImageCollection(Collection):\n", "issue": "image.tag does not return anything\nThere's no return statement in `images.tag`:\r\nhttps://github.com/docker/docker-py/blob/master/docker/models/images.py#L99\r\n\r\n[Readthedocs](https://docker-py.readthedocs.io/en/stable/images.html) (and the method comments) suggest it should return a bool for success.\r\n\r\nI saw this running version 2.2.1 of the library\r\n```\r\n# pip freeze | grep docker\r\ndocker==2.2.1\r\ndocker-pycreds==0.2.1\r\n```\r\n\r\n**Repro code:**\r\n```\r\nimport docker\r\ndef test_tag(id):\r\n client = docker.DockerClient()\r\n image = client.images.get(id)\r\n tag_result = image.tag('test_image', tag='test_tag')\r\n if tag_result is None:\r\n print('oops')\r\n```\n", "before_files": [{"content": "import re\n\nimport six\n\nfrom ..api import APIClient\nfrom ..errors import BuildError\nfrom ..utils.json_stream import json_stream\nfrom .resource import Collection, Model\n\n\nclass Image(Model):\n \"\"\"\n An image on the server.\n \"\"\"\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, \"', '\".join(self.tags))\n\n @property\n def labels(self):\n \"\"\"\n The labels of an image as dictionary.\n \"\"\"\n result = self.attrs['Config'].get('Labels')\n return result or {}\n\n @property\n def short_id(self):\n \"\"\"\n The ID of the image truncated to 10 characters, plus the ``sha256:``\n prefix.\n \"\"\"\n if self.id.startswith('sha256:'):\n return self.id[:17]\n return self.id[:10]\n\n @property\n def tags(self):\n \"\"\"\n The image's tags.\n \"\"\"\n tags = self.attrs.get('RepoTags')\n if tags is None:\n tags = []\n return [tag for tag in tags if tag != '<none>:<none>']\n\n def history(self):\n \"\"\"\n Show the history of an image.\n\n Returns:\n (str): The history of the image.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.history(self.id)\n\n def save(self):\n \"\"\"\n Get a tarball of an image. Similar to the ``docker save`` command.\n\n Returns:\n (urllib3.response.HTTPResponse object): The response from the\n daemon.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = cli.images.get(\"fedora:latest\")\n >>> resp = image.save()\n >>> f = open('/tmp/fedora-latest.tar', 'w')\n >>> for chunk in resp.stream():\n >>> f.write(chunk)\n >>> f.close()\n \"\"\"\n return self.client.api.get_image(self.id)\n\n def tag(self, repository, tag=None, **kwargs):\n \"\"\"\n Tag this image into a repository. Similar to the ``docker tag``\n command.\n\n Args:\n repository (str): The repository to set for the tag\n tag (str): The tag name\n force (bool): Force\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Returns:\n (bool): ``True`` if successful\n \"\"\"\n self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n\n\nclass ImageCollection(Collection):\n model = Image\n\n def build(self, **kwargs):\n \"\"\"\n Build an image and return it. Similar to the ``docker build``\n command. 
Either ``path`` or ``fileobj`` must be set.\n\n If you have a tar file for the Docker build context (including a\n Dockerfile) already, pass a readable file-like object to ``fileobj``\n and also pass ``custom_context=True``. If the stream is compressed\n also, set ``encoding`` to the correct value (e.g ``gzip``).\n\n If you want to get the raw output of the build, use the\n :py:meth:`~docker.api.build.BuildApiMixin.build` method in the\n low-level API.\n\n Args:\n path (str): Path to the directory containing the Dockerfile\n fileobj: A file object to use as the Dockerfile. (Or a file-like\n object)\n tag (str): A tag to add to the final image\n quiet (bool): Whether to return the status\n nocache (bool): Don't use the cache when set to ``True``\n rm (bool): Remove intermediate containers. The ``docker build``\n command now defaults to ``--rm=true``, but we have kept the old\n default of `False` to preserve backward compatibility\n stream (bool): *Deprecated for API version > 1.8 (always True)*.\n Return a blocking generator you can iterate over to retrieve\n build output as it happens\n timeout (int): HTTP timeout\n custom_context (bool): Optional if using ``fileobj``\n encoding (str): The encoding for a stream. Set to ``gzip`` for\n compressing\n pull (bool): Downloads any updates to the FROM image in Dockerfiles\n forcerm (bool): Always remove intermediate containers, even after\n unsuccessful builds\n dockerfile (str): path within the build context to the Dockerfile\n buildargs (dict): A dictionary of build arguments\n container_limits (dict): A dictionary of limits applied to each\n container created by the build process. Valid keys:\n\n - memory (int): set memory limit for build\n - memswap (int): Total memory (memory + swap), -1 to disable\n swap\n - cpushares (int): CPU shares (relative weight)\n - cpusetcpus (str): CPUs in which to allow execution, e.g.,\n ``\"0-3\"``, ``\"0,1\"``\n decode (bool): If set to ``True``, the returned stream will be\n decoded into dicts on the fly. Default ``False``.\n cache_from (list): A list of images used for build cache\n resolution.\n target (str): Name of the build-stage to build in a multi-stage\n Dockerfile.\n\n Returns:\n (:py:class:`Image`): The built image.\n\n Raises:\n :py:class:`docker.errors.BuildError`\n If there is an error during the build.\n :py:class:`docker.errors.APIError`\n If the server returns any other error.\n ``TypeError``\n If neither ``path`` nor ``fileobj`` is specified.\n \"\"\"\n resp = self.client.api.build(**kwargs)\n if isinstance(resp, six.string_types):\n return self.get(resp)\n last_event = None\n for chunk in json_stream(resp):\n if 'error' in chunk:\n raise BuildError(chunk['error'])\n if 'stream' in chunk:\n match = re.search(\n r'(Successfully built |sha256:)([0-9a-f]+)',\n chunk['stream']\n )\n if match:\n image_id = match.group(2)\n return self.get(image_id)\n last_event = chunk\n\n raise BuildError(last_event or 'Unknown')\n\n def get(self, name):\n \"\"\"\n Gets an image.\n\n Args:\n name (str): The name of the image.\n\n Returns:\n (:py:class:`Image`): The image.\n\n Raises:\n :py:class:`docker.errors.ImageNotFound`\n If the image does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_image(name))\n\n def list(self, name=None, all=False, filters=None):\n \"\"\"\n List images on the server.\n\n Args:\n name (str): Only show images belonging to the repository ``name``\n all (bool): Show intermediate image layers. 
By default, these are\n filtered out.\n filters (dict): Filters to be processed on the image list.\n Available filters:\n - ``dangling`` (bool)\n - ``label`` (str): format either ``key`` or ``key=value``\n\n Returns:\n (list of :py:class:`Image`): The images.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.images(name=name, all=all, filters=filters)\n return [self.prepare_model(r) for r in resp]\n\n def load(self, data):\n \"\"\"\n Load an image that was previously saved using\n :py:meth:`~docker.models.images.Image.save` (or ``docker save``).\n Similar to ``docker load``.\n\n Args:\n data (binary): Image data to be loaded.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.load_image(data)\n\n def pull(self, name, tag=None, **kwargs):\n \"\"\"\n Pull an image of the given name and return it. Similar to the\n ``docker pull`` command.\n\n If you want to get the raw pull output, use the\n :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the\n low-level API.\n\n Args:\n repository (str): The repository to pull\n tag (str): The tag to pull\n insecure_registry (bool): Use an insecure registry\n auth_config (dict): Override the credentials that\n :py:meth:`~docker.client.DockerClient.login` has set for\n this request. ``auth_config`` should contain the ``username``\n and ``password`` keys to be valid.\n\n Returns:\n (:py:class:`Image`): The image that has been pulled.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = client.images.pull('busybox')\n \"\"\"\n self.client.api.pull(name, tag=tag, **kwargs)\n return self.get('{0}:{1}'.format(name, tag) if tag else name)\n\n def push(self, repository, tag=None, **kwargs):\n return self.client.api.push(repository, tag=tag, **kwargs)\n push.__doc__ = APIClient.push.__doc__\n\n def remove(self, *args, **kwargs):\n self.client.api.remove_image(*args, **kwargs)\n remove.__doc__ = APIClient.remove_image.__doc__\n\n def search(self, *args, **kwargs):\n return self.client.api.search(*args, **kwargs)\n search.__doc__ = APIClient.search.__doc__\n\n def prune(self, filters=None):\n return self.client.api.prune_images(filters=filters)\n prune.__doc__ = APIClient.prune_images.__doc__\n", "path": "docker/models/images.py"}], "after_files": [{"content": "import re\n\nimport six\n\nfrom ..api import APIClient\nfrom ..errors import BuildError\nfrom ..utils.json_stream import json_stream\nfrom .resource import Collection, Model\n\n\nclass Image(Model):\n \"\"\"\n An image on the server.\n \"\"\"\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, \"', '\".join(self.tags))\n\n @property\n def labels(self):\n \"\"\"\n The labels of an image as dictionary.\n \"\"\"\n result = self.attrs['Config'].get('Labels')\n return result or {}\n\n @property\n def short_id(self):\n \"\"\"\n The ID of the image truncated to 10 characters, plus the ``sha256:``\n prefix.\n \"\"\"\n if self.id.startswith('sha256:'):\n return self.id[:17]\n return self.id[:10]\n\n @property\n def tags(self):\n \"\"\"\n The image's tags.\n \"\"\"\n tags = self.attrs.get('RepoTags')\n if tags is None:\n tags = []\n return [tag for tag in tags if tag != '<none>:<none>']\n\n def history(self):\n \"\"\"\n Show the history of an image.\n\n Returns:\n (str): The history of the image.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return 
self.client.api.history(self.id)\n\n def save(self):\n \"\"\"\n Get a tarball of an image. Similar to the ``docker save`` command.\n\n Returns:\n (urllib3.response.HTTPResponse object): The response from the\n daemon.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = cli.images.get(\"fedora:latest\")\n >>> resp = image.save()\n >>> f = open('/tmp/fedora-latest.tar', 'w')\n >>> for chunk in resp.stream():\n >>> f.write(chunk)\n >>> f.close()\n \"\"\"\n return self.client.api.get_image(self.id)\n\n def tag(self, repository, tag=None, **kwargs):\n \"\"\"\n Tag this image into a repository. Similar to the ``docker tag``\n command.\n\n Args:\n repository (str): The repository to set for the tag\n tag (str): The tag name\n force (bool): Force\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Returns:\n (bool): ``True`` if successful\n \"\"\"\n return self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n\n\nclass ImageCollection(Collection):\n model = Image\n\n def build(self, **kwargs):\n \"\"\"\n Build an image and return it. Similar to the ``docker build``\n command. Either ``path`` or ``fileobj`` must be set.\n\n If you have a tar file for the Docker build context (including a\n Dockerfile) already, pass a readable file-like object to ``fileobj``\n and also pass ``custom_context=True``. If the stream is compressed\n also, set ``encoding`` to the correct value (e.g ``gzip``).\n\n If you want to get the raw output of the build, use the\n :py:meth:`~docker.api.build.BuildApiMixin.build` method in the\n low-level API.\n\n Args:\n path (str): Path to the directory containing the Dockerfile\n fileobj: A file object to use as the Dockerfile. (Or a file-like\n object)\n tag (str): A tag to add to the final image\n quiet (bool): Whether to return the status\n nocache (bool): Don't use the cache when set to ``True``\n rm (bool): Remove intermediate containers. The ``docker build``\n command now defaults to ``--rm=true``, but we have kept the old\n default of `False` to preserve backward compatibility\n stream (bool): *Deprecated for API version > 1.8 (always True)*.\n Return a blocking generator you can iterate over to retrieve\n build output as it happens\n timeout (int): HTTP timeout\n custom_context (bool): Optional if using ``fileobj``\n encoding (str): The encoding for a stream. Set to ``gzip`` for\n compressing\n pull (bool): Downloads any updates to the FROM image in Dockerfiles\n forcerm (bool): Always remove intermediate containers, even after\n unsuccessful builds\n dockerfile (str): path within the build context to the Dockerfile\n buildargs (dict): A dictionary of build arguments\n container_limits (dict): A dictionary of limits applied to each\n container created by the build process. Valid keys:\n\n - memory (int): set memory limit for build\n - memswap (int): Total memory (memory + swap), -1 to disable\n swap\n - cpushares (int): CPU shares (relative weight)\n - cpusetcpus (str): CPUs in which to allow execution, e.g.,\n ``\"0-3\"``, ``\"0,1\"``\n decode (bool): If set to ``True``, the returned stream will be\n decoded into dicts on the fly. 
Default ``False``.\n cache_from (list): A list of images used for build cache\n resolution.\n target (str): Name of the build-stage to build in a multi-stage\n Dockerfile.\n\n Returns:\n (:py:class:`Image`): The built image.\n\n Raises:\n :py:class:`docker.errors.BuildError`\n If there is an error during the build.\n :py:class:`docker.errors.APIError`\n If the server returns any other error.\n ``TypeError``\n If neither ``path`` nor ``fileobj`` is specified.\n \"\"\"\n resp = self.client.api.build(**kwargs)\n if isinstance(resp, six.string_types):\n return self.get(resp)\n last_event = None\n for chunk in json_stream(resp):\n if 'error' in chunk:\n raise BuildError(chunk['error'])\n if 'stream' in chunk:\n match = re.search(\n r'(Successfully built |sha256:)([0-9a-f]+)',\n chunk['stream']\n )\n if match:\n image_id = match.group(2)\n return self.get(image_id)\n last_event = chunk\n\n raise BuildError(last_event or 'Unknown')\n\n def get(self, name):\n \"\"\"\n Gets an image.\n\n Args:\n name (str): The name of the image.\n\n Returns:\n (:py:class:`Image`): The image.\n\n Raises:\n :py:class:`docker.errors.ImageNotFound`\n If the image does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_image(name))\n\n def list(self, name=None, all=False, filters=None):\n \"\"\"\n List images on the server.\n\n Args:\n name (str): Only show images belonging to the repository ``name``\n all (bool): Show intermediate image layers. By default, these are\n filtered out.\n filters (dict): Filters to be processed on the image list.\n Available filters:\n - ``dangling`` (bool)\n - ``label`` (str): format either ``key`` or ``key=value``\n\n Returns:\n (list of :py:class:`Image`): The images.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.images(name=name, all=all, filters=filters)\n return [self.prepare_model(r) for r in resp]\n\n def load(self, data):\n \"\"\"\n Load an image that was previously saved using\n :py:meth:`~docker.models.images.Image.save` (or ``docker save``).\n Similar to ``docker load``.\n\n Args:\n data (binary): Image data to be loaded.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.load_image(data)\n\n def pull(self, name, tag=None, **kwargs):\n \"\"\"\n Pull an image of the given name and return it. Similar to the\n ``docker pull`` command.\n\n If you want to get the raw pull output, use the\n :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the\n low-level API.\n\n Args:\n repository (str): The repository to pull\n tag (str): The tag to pull\n insecure_registry (bool): Use an insecure registry\n auth_config (dict): Override the credentials that\n :py:meth:`~docker.client.DockerClient.login` has set for\n this request. 
``auth_config`` should contain the ``username``\n and ``password`` keys to be valid.\n\n Returns:\n (:py:class:`Image`): The image that has been pulled.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = client.images.pull('busybox')\n \"\"\"\n self.client.api.pull(name, tag=tag, **kwargs)\n return self.get('{0}:{1}'.format(name, tag) if tag else name)\n\n def push(self, repository, tag=None, **kwargs):\n return self.client.api.push(repository, tag=tag, **kwargs)\n push.__doc__ = APIClient.push.__doc__\n\n def remove(self, *args, **kwargs):\n self.client.api.remove_image(*args, **kwargs)\n remove.__doc__ = APIClient.remove_image.__doc__\n\n def search(self, *args, **kwargs):\n return self.client.api.search(*args, **kwargs)\n search.__doc__ = APIClient.search.__doc__\n\n def prune(self, filters=None):\n return self.client.api.prune_images(filters=filters)\n prune.__doc__ = APIClient.prune_images.__doc__\n", "path": "docker/models/images.py"}]}
| 3,379 | 98 |
gh_patches_debug_6005
|
rasdani/github-patches
|
git_diff
|
flairNLP__flair-664
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TextClassifier label predictions always have score 1.0
Hi guys,
I have trained my own TextClassifier following this tutorial: https://github.com/zalandoresearch/flair/blob/master/resources/docs/TUTORIAL_7_TRAINING_A_MODEL.md
I am using the option multi_label=False, as each sentence should be assigned only one label. In terms of embeddings, I use FlairEmbeddings mix-forward and mix-backward.
The issue is that every time I predict the label of a new unseen sentence, I get a label score = 1.0. It seems that it never has any different value between 0.0 and 1.0. It is always 1.0.
Is this the expected behavior? What am I doing wrong?
Thanks!
--- END ISSUE ---
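For readers who want to reproduce the symptom, a minimal sketch using the API shown in the file below; the model path is hypothetical and stands in for a classifier trained as in the tutorial:

```python
from flair.data import Sentence
from flair.models import TextClassifier

# Hypothetical path to the trained classifier.
classifier = TextClassifier.load_from_file("resources/taggers/my-classifier/best-model.pt")

sentence = Sentence("an unseen example sentence")
classifier.predict(sentence)

# Each predicted Label carries a value and a score; as reported above, the
# score of the single predicted label comes back as 1.0 for every sentence.
for label in sentence.labels:
    print(label.value, label.score)
```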
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flair/models/text_classification_model.py`
Content:
```
1 import warnings
2 import logging
3 from pathlib import Path
4 from typing import List, Union
5
6 import torch
7 import torch.nn as nn
8
9 import flair.nn
10 import flair.embeddings
11 from flair.data import Dictionary, Sentence, Label
12 from flair.file_utils import cached_path
13 from flair.training_utils import convert_labels_to_one_hot, clear_embeddings
14
15
16 log = logging.getLogger("flair")
17
18
19 class TextClassifier(flair.nn.Model):
20 """
21 Text Classification Model
22 The model takes word embeddings, puts them into an RNN to obtain a text representation, and puts the
23 text representation in the end into a linear layer to get the actual class label.
24 The model can handle single and multi class data sets.
25 """
26
27 def __init__(
28 self,
29 document_embeddings: flair.embeddings.DocumentEmbeddings,
30 label_dictionary: Dictionary,
31 multi_label: bool,
32 ):
33
34 super(TextClassifier, self).__init__()
35
36 self.document_embeddings: flair.embeddings.DocumentRNNEmbeddings = document_embeddings
37 self.label_dictionary: Dictionary = label_dictionary
38 self.multi_label = multi_label
39
40 self.decoder = nn.Linear(
41 self.document_embeddings.embedding_length, len(self.label_dictionary)
42 )
43
44 self._init_weights()
45
46 if multi_label:
47 self.loss_function = nn.BCELoss()
48 else:
49 self.loss_function = nn.CrossEntropyLoss()
50
51 # auto-spawn on GPU if available
52 self.to(flair.device)
53
54 def _init_weights(self):
55 nn.init.xavier_uniform_(self.decoder.weight)
56
57 def forward(self, sentences) -> List[List[float]]:
58 self.document_embeddings.embed(sentences)
59
60 text_embedding_list = [
61 sentence.get_embedding().unsqueeze(0) for sentence in sentences
62 ]
63 text_embedding_tensor = torch.cat(text_embedding_list, 0).to(flair.device)
64
65 label_scores = self.decoder(text_embedding_tensor)
66
67 return label_scores
68
69 def save(self, model_file: Union[str, Path]):
70 """
71 Saves the current model to the provided file.
72 :param model_file: the model file
73 """
74 model_state = {
75 "state_dict": self.state_dict(),
76 "document_embeddings": self.document_embeddings,
77 "label_dictionary": self.label_dictionary,
78 "multi_label": self.multi_label,
79 }
80 torch.save(model_state, str(model_file), pickle_protocol=4)
81
82 def save_checkpoint(
83 self,
84 model_file: Union[str, Path],
85 optimizer_state: dict,
86 scheduler_state: dict,
87 epoch: int,
88 loss: float,
89 ):
90 """
91 Saves the current model to the provided file.
92 :param model_file: the model file
93 """
94 model_state = {
95 "state_dict": self.state_dict(),
96 "document_embeddings": self.document_embeddings,
97 "label_dictionary": self.label_dictionary,
98 "multi_label": self.multi_label,
99 "optimizer_state_dict": optimizer_state,
100 "scheduler_state_dict": scheduler_state,
101 "epoch": epoch,
102 "loss": loss,
103 }
104 torch.save(model_state, str(model_file), pickle_protocol=4)
105
106 @classmethod
107 def load_from_file(cls, model_file: Union[str, Path]):
108 """
109 Loads the model from the given file.
110 :param model_file: the model file
111 :return: the loaded text classifier model
112 """
113 state = TextClassifier._load_state(model_file)
114
115 model = TextClassifier(
116 document_embeddings=state["document_embeddings"],
117 label_dictionary=state["label_dictionary"],
118 multi_label=state["multi_label"],
119 )
120 model.load_state_dict(state["state_dict"])
121 model.eval()
122 model.to(flair.device)
123
124 return model
125
126 @classmethod
127 def load_checkpoint(cls, model_file: Union[str, Path]):
128 state = TextClassifier._load_state(model_file)
129 model = TextClassifier.load_from_file(model_file)
130
131 epoch = state["epoch"] if "epoch" in state else None
132 loss = state["loss"] if "loss" in state else None
133 optimizer_state_dict = (
134 state["optimizer_state_dict"] if "optimizer_state_dict" in state else None
135 )
136 scheduler_state_dict = (
137 state["scheduler_state_dict"] if "scheduler_state_dict" in state else None
138 )
139
140 return {
141 "model": model,
142 "epoch": epoch,
143 "loss": loss,
144 "optimizer_state_dict": optimizer_state_dict,
145 "scheduler_state_dict": scheduler_state_dict,
146 }
147
148 @classmethod
149 def _load_state(cls, model_file: Union[str, Path]):
150 # ATTENTION: suppressing torch serialization warnings. This needs to be taken out once we sort out recursive
151 # serialization of torch objects
152 # https://docs.python.org/3/library/warnings.html#temporarily-suppressing-warnings
153 with warnings.catch_warnings():
154 warnings.filterwarnings("ignore")
155 # load_big_file is a workaround by https://github.com/highway11git to load models on some Mac/Windows setups
156 # see https://github.com/zalandoresearch/flair/issues/351
157 f = flair.file_utils.load_big_file(str(model_file))
158 state = torch.load(f, map_location=flair.device)
159 return state
160
161 def forward_loss(self, sentences: Union[List[Sentence], Sentence]) -> torch.tensor:
162 scores = self.forward(sentences)
163 return self._calculate_loss(scores, sentences)
164
165 def forward_labels_and_loss(
166 self, sentences: Union[Sentence, List[Sentence]]
167 ) -> (List[List[Label]], torch.tensor):
168 scores = self.forward(sentences)
169 labels = self._obtain_labels(scores)
170 loss = self._calculate_loss(scores, sentences)
171 return labels, loss
172
173 def predict(
174 self, sentences: Union[Sentence, List[Sentence]], mini_batch_size: int = 32
175 ) -> List[Sentence]:
176 """
177 Predicts the class labels for the given sentences. The labels are directly added to the sentences.
178 :param sentences: list of sentences
179 :param mini_batch_size: mini batch size to use
180 :return: the list of sentences containing the labels
181 """
182 with torch.no_grad():
183 if type(sentences) is Sentence:
184 sentences = [sentences]
185
186 filtered_sentences = self._filter_empty_sentences(sentences)
187
188 batches = [
189 filtered_sentences[x : x + mini_batch_size]
190 for x in range(0, len(filtered_sentences), mini_batch_size)
191 ]
192
193 for batch in batches:
194 scores = self.forward(batch)
195 predicted_labels = self._obtain_labels(scores)
196
197 for (sentence, labels) in zip(batch, predicted_labels):
198 sentence.labels = labels
199
200 clear_embeddings(batch)
201
202 return sentences
203
204 @staticmethod
205 def _filter_empty_sentences(sentences: List[Sentence]) -> List[Sentence]:
206 filtered_sentences = [sentence for sentence in sentences if sentence.tokens]
207 if len(sentences) != len(filtered_sentences):
208 log.warning(
209 "Ignore {} sentence(s) with no tokens.".format(
210 len(sentences) - len(filtered_sentences)
211 )
212 )
213 return filtered_sentences
214
215 def _calculate_loss(
216 self, scores: List[List[float]], sentences: List[Sentence]
217 ) -> float:
218 """
219 Calculates the loss.
220 :param scores: the prediction scores from the model
221 :param sentences: list of sentences
222 :return: loss value
223 """
224 if self.multi_label:
225 return self._calculate_multi_label_loss(scores, sentences)
226
227 return self._calculate_single_label_loss(scores, sentences)
228
229 def _obtain_labels(self, scores: List[List[float]]) -> List[List[Label]]:
230 """
231 Predicts the labels of sentences.
232 :param scores: the prediction scores from the model
233 :return: list of predicted labels
234 """
235
236 if self.multi_label:
237 return [self._get_multi_label(s) for s in scores]
238
239 return [self._get_single_label(s) for s in scores]
240
241 def _get_multi_label(self, label_scores) -> List[Label]:
242 labels = []
243
244 sigmoid = torch.nn.Sigmoid()
245
246 results = list(map(lambda x: sigmoid(x), label_scores))
247 for idx, conf in enumerate(results):
248 if conf > 0.5:
249 label = self.label_dictionary.get_item_for_index(idx)
250 labels.append(Label(label, conf.item()))
251
252 return labels
253
254 def _get_single_label(self, label_scores) -> List[Label]:
255 conf, idx = torch.max(label_scores, 0)
256 label = self.label_dictionary.get_item_for_index(idx.item())
257
258 return [Label(label, conf.item())]
259
260 def _calculate_multi_label_loss(
261 self, label_scores, sentences: List[Sentence]
262 ) -> float:
263 sigmoid = nn.Sigmoid()
264 return self.loss_function(
265 sigmoid(label_scores), self._labels_to_one_hot(sentences)
266 )
267
268 def _calculate_single_label_loss(
269 self, label_scores, sentences: List[Sentence]
270 ) -> float:
271 return self.loss_function(label_scores, self._labels_to_indices(sentences))
272
273 def _labels_to_one_hot(self, sentences: List[Sentence]):
274 label_list = [sentence.get_label_names() for sentence in sentences]
275 one_hot = convert_labels_to_one_hot(label_list, self.label_dictionary)
276 one_hot = [torch.FloatTensor(l).unsqueeze(0) for l in one_hot]
277 one_hot = torch.cat(one_hot, 0).to(flair.device)
278 return one_hot
279
280 def _labels_to_indices(self, sentences: List[Sentence]):
281 indices = [
282 torch.LongTensor(
283 [
284 self.label_dictionary.get_idx_for_item(label.value)
285 for label in sentence.labels
286 ]
287 )
288 for sentence in sentences
289 ]
290
291 vec = torch.cat(indices, 0).to(flair.device)
292
293 return vec
294
295 @staticmethod
296 def load(model: str):
297 model_file = None
298 aws_resource_path = (
299 "https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models-v0.4"
300 )
301 cache_dir = Path("models")
302
303 if model.lower() == "de-offensive-language":
304 base_path = "/".join(
305 [
306 aws_resource_path,
307 "TEXT-CLASSIFICATION_germ-eval-2018_task-1",
308 "germ-eval-2018-task-1.pt",
309 ]
310 )
311 model_file = cached_path(base_path, cache_dir=cache_dir)
312
313 elif model.lower() == "en-sentiment":
314 base_path = "/".join(
315 [aws_resource_path, "TEXT-CLASSIFICATION_imdb", "imdb.pt"]
316 )
317 model_file = cached_path(base_path, cache_dir=cache_dir)
318
319 if model_file is not None:
320 return TextClassifier.load_from_file(model_file)
321
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flair/models/text_classification_model.py b/flair/models/text_classification_model.py
--- a/flair/models/text_classification_model.py
+++ b/flair/models/text_classification_model.py
@@ -252,7 +252,8 @@
         return labels
 
     def _get_single_label(self, label_scores) -> List[Label]:
-        conf, idx = torch.max(label_scores, 0)
+        softmax = torch.nn.functional.softmax(label_scores, dim=0)
+        conf, idx = torch.max(softmax, 0)
         label = self.label_dictionary.get_item_for_index(idx.item())
 
         return [Label(label, conf.item())]
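To see what the added softmax changes, a small stand-alone sketch with made-up decoder logits (in the model above the real values come from the linear decoder):

```python
import torch

# Hypothetical raw scores from the decoder for a 3-class problem.
label_scores = torch.tensor([2.1, -0.3, 0.8])

# Old behaviour: the reported "confidence" is the raw max score, which is not
# constrained to [0, 1].
raw_conf, raw_idx = torch.max(label_scores, 0)

# Patched behaviour: normalise with softmax first, so the confidence of the
# winning class is a proper probability between 0 and 1.
softmax = torch.nn.functional.softmax(label_scores, dim=0)
conf, idx = torch.max(softmax, 0)

print(raw_conf.item(), conf.item())  # e.g. 2.1 vs. roughly 0.73
```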
|
{"golden_diff": "diff --git a/flair/models/text_classification_model.py b/flair/models/text_classification_model.py\n--- a/flair/models/text_classification_model.py\n+++ b/flair/models/text_classification_model.py\n@@ -252,7 +252,8 @@\n return labels\n \n def _get_single_label(self, label_scores) -> List[Label]:\n- conf, idx = torch.max(label_scores, 0)\n+ softmax = torch.nn.functional.softmax(label_scores, dim=0)\n+ conf, idx = torch.max(softmax, 0)\n label = self.label_dictionary.get_item_for_index(idx.item())\n \n return [Label(label, conf.item())]\n", "issue": "TextClassifier label predictions always have score 1.0\nHi guys,\r\n\r\nI have trained my own TextClassifier following this tutorial: https://github.com/zalandoresearch/flair/blob/master/resources/docs/TUTORIAL_7_TRAINING_A_MODEL.md\r\n\r\nI am using the option multi_label=False, as each sentence should be assigned only one label. In terms of embeddings, I use FlairEmbeddings mix-forward and mix-backward. \r\n\r\nThe issue is that every time I predict the label of a new unseen sentence, I get a label score = 1.0. It seems that it never has any different value between 0.0 and 1.0. It is always 1.0. \r\n\r\nIs this the expected behavior? What am I doing wrong?\r\n\r\nThanks!\n", "before_files": [{"content": "import warnings\nimport logging\nfrom pathlib import Path\nfrom typing import List, Union\n\nimport torch\nimport torch.nn as nn\n\nimport flair.nn\nimport flair.embeddings\nfrom flair.data import Dictionary, Sentence, Label\nfrom flair.file_utils import cached_path\nfrom flair.training_utils import convert_labels_to_one_hot, clear_embeddings\n\n\nlog = logging.getLogger(\"flair\")\n\n\nclass TextClassifier(flair.nn.Model):\n \"\"\"\n Text Classification Model\n The model takes word embeddings, puts them into an RNN to obtain a text representation, and puts the\n text representation in the end into a linear layer to get the actual class label.\n The model can handle single and multi class data sets.\n \"\"\"\n\n def __init__(\n self,\n document_embeddings: flair.embeddings.DocumentEmbeddings,\n label_dictionary: Dictionary,\n multi_label: bool,\n ):\n\n super(TextClassifier, self).__init__()\n\n self.document_embeddings: flair.embeddings.DocumentRNNEmbeddings = document_embeddings\n self.label_dictionary: Dictionary = label_dictionary\n self.multi_label = multi_label\n\n self.decoder = nn.Linear(\n self.document_embeddings.embedding_length, len(self.label_dictionary)\n )\n\n self._init_weights()\n\n if multi_label:\n self.loss_function = nn.BCELoss()\n else:\n self.loss_function = nn.CrossEntropyLoss()\n\n # auto-spawn on GPU if available\n self.to(flair.device)\n\n def _init_weights(self):\n nn.init.xavier_uniform_(self.decoder.weight)\n\n def forward(self, sentences) -> List[List[float]]:\n self.document_embeddings.embed(sentences)\n\n text_embedding_list = [\n sentence.get_embedding().unsqueeze(0) for sentence in sentences\n ]\n text_embedding_tensor = torch.cat(text_embedding_list, 0).to(flair.device)\n\n label_scores = self.decoder(text_embedding_tensor)\n\n return label_scores\n\n def save(self, model_file: Union[str, Path]):\n \"\"\"\n Saves the current model to the provided file.\n :param model_file: the model file\n \"\"\"\n model_state = {\n \"state_dict\": self.state_dict(),\n \"document_embeddings\": self.document_embeddings,\n \"label_dictionary\": self.label_dictionary,\n \"multi_label\": self.multi_label,\n }\n torch.save(model_state, str(model_file), pickle_protocol=4)\n\n def save_checkpoint(\n self,\n 
model_file: Union[str, Path],\n optimizer_state: dict,\n scheduler_state: dict,\n epoch: int,\n loss: float,\n ):\n \"\"\"\n Saves the current model to the provided file.\n :param model_file: the model file\n \"\"\"\n model_state = {\n \"state_dict\": self.state_dict(),\n \"document_embeddings\": self.document_embeddings,\n \"label_dictionary\": self.label_dictionary,\n \"multi_label\": self.multi_label,\n \"optimizer_state_dict\": optimizer_state,\n \"scheduler_state_dict\": scheduler_state,\n \"epoch\": epoch,\n \"loss\": loss,\n }\n torch.save(model_state, str(model_file), pickle_protocol=4)\n\n @classmethod\n def load_from_file(cls, model_file: Union[str, Path]):\n \"\"\"\n Loads the model from the given file.\n :param model_file: the model file\n :return: the loaded text classifier model\n \"\"\"\n state = TextClassifier._load_state(model_file)\n\n model = TextClassifier(\n document_embeddings=state[\"document_embeddings\"],\n label_dictionary=state[\"label_dictionary\"],\n multi_label=state[\"multi_label\"],\n )\n model.load_state_dict(state[\"state_dict\"])\n model.eval()\n model.to(flair.device)\n\n return model\n\n @classmethod\n def load_checkpoint(cls, model_file: Union[str, Path]):\n state = TextClassifier._load_state(model_file)\n model = TextClassifier.load_from_file(model_file)\n\n epoch = state[\"epoch\"] if \"epoch\" in state else None\n loss = state[\"loss\"] if \"loss\" in state else None\n optimizer_state_dict = (\n state[\"optimizer_state_dict\"] if \"optimizer_state_dict\" in state else None\n )\n scheduler_state_dict = (\n state[\"scheduler_state_dict\"] if \"scheduler_state_dict\" in state else None\n )\n\n return {\n \"model\": model,\n \"epoch\": epoch,\n \"loss\": loss,\n \"optimizer_state_dict\": optimizer_state_dict,\n \"scheduler_state_dict\": scheduler_state_dict,\n }\n\n @classmethod\n def _load_state(cls, model_file: Union[str, Path]):\n # ATTENTION: suppressing torch serialization warnings. This needs to be taken out once we sort out recursive\n # serialization of torch objects\n # https://docs.python.org/3/library/warnings.html#temporarily-suppressing-warnings\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n # load_big_file is a workaround by https://github.com/highway11git to load models on some Mac/Windows setups\n # see https://github.com/zalandoresearch/flair/issues/351\n f = flair.file_utils.load_big_file(str(model_file))\n state = torch.load(f, map_location=flair.device)\n return state\n\n def forward_loss(self, sentences: Union[List[Sentence], Sentence]) -> torch.tensor:\n scores = self.forward(sentences)\n return self._calculate_loss(scores, sentences)\n\n def forward_labels_and_loss(\n self, sentences: Union[Sentence, List[Sentence]]\n ) -> (List[List[Label]], torch.tensor):\n scores = self.forward(sentences)\n labels = self._obtain_labels(scores)\n loss = self._calculate_loss(scores, sentences)\n return labels, loss\n\n def predict(\n self, sentences: Union[Sentence, List[Sentence]], mini_batch_size: int = 32\n ) -> List[Sentence]:\n \"\"\"\n Predicts the class labels for the given sentences. 
The labels are directly added to the sentences.\n :param sentences: list of sentences\n :param mini_batch_size: mini batch size to use\n :return: the list of sentences containing the labels\n \"\"\"\n with torch.no_grad():\n if type(sentences) is Sentence:\n sentences = [sentences]\n\n filtered_sentences = self._filter_empty_sentences(sentences)\n\n batches = [\n filtered_sentences[x : x + mini_batch_size]\n for x in range(0, len(filtered_sentences), mini_batch_size)\n ]\n\n for batch in batches:\n scores = self.forward(batch)\n predicted_labels = self._obtain_labels(scores)\n\n for (sentence, labels) in zip(batch, predicted_labels):\n sentence.labels = labels\n\n clear_embeddings(batch)\n\n return sentences\n\n @staticmethod\n def _filter_empty_sentences(sentences: List[Sentence]) -> List[Sentence]:\n filtered_sentences = [sentence for sentence in sentences if sentence.tokens]\n if len(sentences) != len(filtered_sentences):\n log.warning(\n \"Ignore {} sentence(s) with no tokens.\".format(\n len(sentences) - len(filtered_sentences)\n )\n )\n return filtered_sentences\n\n def _calculate_loss(\n self, scores: List[List[float]], sentences: List[Sentence]\n ) -> float:\n \"\"\"\n Calculates the loss.\n :param scores: the prediction scores from the model\n :param sentences: list of sentences\n :return: loss value\n \"\"\"\n if self.multi_label:\n return self._calculate_multi_label_loss(scores, sentences)\n\n return self._calculate_single_label_loss(scores, sentences)\n\n def _obtain_labels(self, scores: List[List[float]]) -> List[List[Label]]:\n \"\"\"\n Predicts the labels of sentences.\n :param scores: the prediction scores from the model\n :return: list of predicted labels\n \"\"\"\n\n if self.multi_label:\n return [self._get_multi_label(s) for s in scores]\n\n return [self._get_single_label(s) for s in scores]\n\n def _get_multi_label(self, label_scores) -> List[Label]:\n labels = []\n\n sigmoid = torch.nn.Sigmoid()\n\n results = list(map(lambda x: sigmoid(x), label_scores))\n for idx, conf in enumerate(results):\n if conf > 0.5:\n label = self.label_dictionary.get_item_for_index(idx)\n labels.append(Label(label, conf.item()))\n\n return labels\n\n def _get_single_label(self, label_scores) -> List[Label]:\n conf, idx = torch.max(label_scores, 0)\n label = self.label_dictionary.get_item_for_index(idx.item())\n\n return [Label(label, conf.item())]\n\n def _calculate_multi_label_loss(\n self, label_scores, sentences: List[Sentence]\n ) -> float:\n sigmoid = nn.Sigmoid()\n return self.loss_function(\n sigmoid(label_scores), self._labels_to_one_hot(sentences)\n )\n\n def _calculate_single_label_loss(\n self, label_scores, sentences: List[Sentence]\n ) -> float:\n return self.loss_function(label_scores, self._labels_to_indices(sentences))\n\n def _labels_to_one_hot(self, sentences: List[Sentence]):\n label_list = [sentence.get_label_names() for sentence in sentences]\n one_hot = convert_labels_to_one_hot(label_list, self.label_dictionary)\n one_hot = [torch.FloatTensor(l).unsqueeze(0) for l in one_hot]\n one_hot = torch.cat(one_hot, 0).to(flair.device)\n return one_hot\n\n def _labels_to_indices(self, sentences: List[Sentence]):\n indices = [\n torch.LongTensor(\n [\n self.label_dictionary.get_idx_for_item(label.value)\n for label in sentence.labels\n ]\n )\n for sentence in sentences\n ]\n\n vec = torch.cat(indices, 0).to(flair.device)\n\n return vec\n\n @staticmethod\n def load(model: str):\n model_file = None\n aws_resource_path = (\n 
\"https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models-v0.4\"\n )\n cache_dir = Path(\"models\")\n\n if model.lower() == \"de-offensive-language\":\n base_path = \"/\".join(\n [\n aws_resource_path,\n \"TEXT-CLASSIFICATION_germ-eval-2018_task-1\",\n \"germ-eval-2018-task-1.pt\",\n ]\n )\n model_file = cached_path(base_path, cache_dir=cache_dir)\n\n elif model.lower() == \"en-sentiment\":\n base_path = \"/\".join(\n [aws_resource_path, \"TEXT-CLASSIFICATION_imdb\", \"imdb.pt\"]\n )\n model_file = cached_path(base_path, cache_dir=cache_dir)\n\n if model_file is not None:\n return TextClassifier.load_from_file(model_file)\n", "path": "flair/models/text_classification_model.py"}], "after_files": [{"content": "import warnings\nimport logging\nfrom pathlib import Path\nfrom typing import List, Union\n\nimport torch\nimport torch.nn as nn\n\nimport flair.nn\nimport flair.embeddings\nfrom flair.data import Dictionary, Sentence, Label\nfrom flair.file_utils import cached_path\nfrom flair.training_utils import convert_labels_to_one_hot, clear_embeddings\n\n\nlog = logging.getLogger(\"flair\")\n\n\nclass TextClassifier(flair.nn.Model):\n \"\"\"\n Text Classification Model\n The model takes word embeddings, puts them into an RNN to obtain a text representation, and puts the\n text representation in the end into a linear layer to get the actual class label.\n The model can handle single and multi class data sets.\n \"\"\"\n\n def __init__(\n self,\n document_embeddings: flair.embeddings.DocumentEmbeddings,\n label_dictionary: Dictionary,\n multi_label: bool,\n ):\n\n super(TextClassifier, self).__init__()\n\n self.document_embeddings: flair.embeddings.DocumentRNNEmbeddings = document_embeddings\n self.label_dictionary: Dictionary = label_dictionary\n self.multi_label = multi_label\n\n self.decoder = nn.Linear(\n self.document_embeddings.embedding_length, len(self.label_dictionary)\n )\n\n self._init_weights()\n\n if multi_label:\n self.loss_function = nn.BCELoss()\n else:\n self.loss_function = nn.CrossEntropyLoss()\n\n # auto-spawn on GPU if available\n self.to(flair.device)\n\n def _init_weights(self):\n nn.init.xavier_uniform_(self.decoder.weight)\n\n def forward(self, sentences) -> List[List[float]]:\n self.document_embeddings.embed(sentences)\n\n text_embedding_list = [\n sentence.get_embedding().unsqueeze(0) for sentence in sentences\n ]\n text_embedding_tensor = torch.cat(text_embedding_list, 0).to(flair.device)\n\n label_scores = self.decoder(text_embedding_tensor)\n\n return label_scores\n\n def save(self, model_file: Union[str, Path]):\n \"\"\"\n Saves the current model to the provided file.\n :param model_file: the model file\n \"\"\"\n model_state = {\n \"state_dict\": self.state_dict(),\n \"document_embeddings\": self.document_embeddings,\n \"label_dictionary\": self.label_dictionary,\n \"multi_label\": self.multi_label,\n }\n torch.save(model_state, str(model_file), pickle_protocol=4)\n\n def save_checkpoint(\n self,\n model_file: Union[str, Path],\n optimizer_state: dict,\n scheduler_state: dict,\n epoch: int,\n loss: float,\n ):\n \"\"\"\n Saves the current model to the provided file.\n :param model_file: the model file\n \"\"\"\n model_state = {\n \"state_dict\": self.state_dict(),\n \"document_embeddings\": self.document_embeddings,\n \"label_dictionary\": self.label_dictionary,\n \"multi_label\": self.multi_label,\n \"optimizer_state_dict\": optimizer_state,\n \"scheduler_state_dict\": scheduler_state,\n \"epoch\": epoch,\n \"loss\": loss,\n }\n torch.save(model_state, 
str(model_file), pickle_protocol=4)\n\n @classmethod\n def load_from_file(cls, model_file: Union[str, Path]):\n \"\"\"\n Loads the model from the given file.\n :param model_file: the model file\n :return: the loaded text classifier model\n \"\"\"\n state = TextClassifier._load_state(model_file)\n\n model = TextClassifier(\n document_embeddings=state[\"document_embeddings\"],\n label_dictionary=state[\"label_dictionary\"],\n multi_label=state[\"multi_label\"],\n )\n model.load_state_dict(state[\"state_dict\"])\n model.eval()\n model.to(flair.device)\n\n return model\n\n @classmethod\n def load_checkpoint(cls, model_file: Union[str, Path]):\n state = TextClassifier._load_state(model_file)\n model = TextClassifier.load_from_file(model_file)\n\n epoch = state[\"epoch\"] if \"epoch\" in state else None\n loss = state[\"loss\"] if \"loss\" in state else None\n optimizer_state_dict = (\n state[\"optimizer_state_dict\"] if \"optimizer_state_dict\" in state else None\n )\n scheduler_state_dict = (\n state[\"scheduler_state_dict\"] if \"scheduler_state_dict\" in state else None\n )\n\n return {\n \"model\": model,\n \"epoch\": epoch,\n \"loss\": loss,\n \"optimizer_state_dict\": optimizer_state_dict,\n \"scheduler_state_dict\": scheduler_state_dict,\n }\n\n @classmethod\n def _load_state(cls, model_file: Union[str, Path]):\n # ATTENTION: suppressing torch serialization warnings. This needs to be taken out once we sort out recursive\n # serialization of torch objects\n # https://docs.python.org/3/library/warnings.html#temporarily-suppressing-warnings\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n # load_big_file is a workaround by https://github.com/highway11git to load models on some Mac/Windows setups\n # see https://github.com/zalandoresearch/flair/issues/351\n f = flair.file_utils.load_big_file(str(model_file))\n state = torch.load(f, map_location=flair.device)\n return state\n\n def forward_loss(self, sentences: Union[List[Sentence], Sentence]) -> torch.tensor:\n scores = self.forward(sentences)\n return self._calculate_loss(scores, sentences)\n\n def forward_labels_and_loss(\n self, sentences: Union[Sentence, List[Sentence]]\n ) -> (List[List[Label]], torch.tensor):\n scores = self.forward(sentences)\n labels = self._obtain_labels(scores)\n loss = self._calculate_loss(scores, sentences)\n return labels, loss\n\n def predict(\n self, sentences: Union[Sentence, List[Sentence]], mini_batch_size: int = 32\n ) -> List[Sentence]:\n \"\"\"\n Predicts the class labels for the given sentences. 
The labels are directly added to the sentences.\n :param sentences: list of sentences\n :param mini_batch_size: mini batch size to use\n :return: the list of sentences containing the labels\n \"\"\"\n with torch.no_grad():\n if type(sentences) is Sentence:\n sentences = [sentences]\n\n filtered_sentences = self._filter_empty_sentences(sentences)\n\n batches = [\n filtered_sentences[x : x + mini_batch_size]\n for x in range(0, len(filtered_sentences), mini_batch_size)\n ]\n\n for batch in batches:\n scores = self.forward(batch)\n predicted_labels = self._obtain_labels(scores)\n\n for (sentence, labels) in zip(batch, predicted_labels):\n sentence.labels = labels\n\n clear_embeddings(batch)\n\n return sentences\n\n @staticmethod\n def _filter_empty_sentences(sentences: List[Sentence]) -> List[Sentence]:\n filtered_sentences = [sentence for sentence in sentences if sentence.tokens]\n if len(sentences) != len(filtered_sentences):\n log.warning(\n \"Ignore {} sentence(s) with no tokens.\".format(\n len(sentences) - len(filtered_sentences)\n )\n )\n return filtered_sentences\n\n def _calculate_loss(\n self, scores: List[List[float]], sentences: List[Sentence]\n ) -> float:\n \"\"\"\n Calculates the loss.\n :param scores: the prediction scores from the model\n :param sentences: list of sentences\n :return: loss value\n \"\"\"\n if self.multi_label:\n return self._calculate_multi_label_loss(scores, sentences)\n\n return self._calculate_single_label_loss(scores, sentences)\n\n def _obtain_labels(self, scores: List[List[float]]) -> List[List[Label]]:\n \"\"\"\n Predicts the labels of sentences.\n :param scores: the prediction scores from the model\n :return: list of predicted labels\n \"\"\"\n\n if self.multi_label:\n return [self._get_multi_label(s) for s in scores]\n\n return [self._get_single_label(s) for s in scores]\n\n def _get_multi_label(self, label_scores) -> List[Label]:\n labels = []\n\n sigmoid = torch.nn.Sigmoid()\n\n results = list(map(lambda x: sigmoid(x), label_scores))\n for idx, conf in enumerate(results):\n if conf > 0.5:\n label = self.label_dictionary.get_item_for_index(idx)\n labels.append(Label(label, conf.item()))\n\n return labels\n\n def _get_single_label(self, label_scores) -> List[Label]:\n softmax = torch.nn.functional.softmax(label_scores, dim=0)\n conf, idx = torch.max(softmax, 0)\n label = self.label_dictionary.get_item_for_index(idx.item())\n\n return [Label(label, conf.item())]\n\n def _calculate_multi_label_loss(\n self, label_scores, sentences: List[Sentence]\n ) -> float:\n sigmoid = nn.Sigmoid()\n return self.loss_function(\n sigmoid(label_scores), self._labels_to_one_hot(sentences)\n )\n\n def _calculate_single_label_loss(\n self, label_scores, sentences: List[Sentence]\n ) -> float:\n return self.loss_function(label_scores, self._labels_to_indices(sentences))\n\n def _labels_to_one_hot(self, sentences: List[Sentence]):\n label_list = [sentence.get_label_names() for sentence in sentences]\n one_hot = convert_labels_to_one_hot(label_list, self.label_dictionary)\n one_hot = [torch.FloatTensor(l).unsqueeze(0) for l in one_hot]\n one_hot = torch.cat(one_hot, 0).to(flair.device)\n return one_hot\n\n def _labels_to_indices(self, sentences: List[Sentence]):\n indices = [\n torch.LongTensor(\n [\n self.label_dictionary.get_idx_for_item(label.value)\n for label in sentence.labels\n ]\n )\n for sentence in sentences\n ]\n\n vec = torch.cat(indices, 0).to(flair.device)\n\n return vec\n\n @staticmethod\n def load(model: str):\n model_file = None\n aws_resource_path = (\n 
\"https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models-v0.4\"\n )\n cache_dir = Path(\"models\")\n\n if model.lower() == \"de-offensive-language\":\n base_path = \"/\".join(\n [\n aws_resource_path,\n \"TEXT-CLASSIFICATION_germ-eval-2018_task-1\",\n \"germ-eval-2018-task-1.pt\",\n ]\n )\n model_file = cached_path(base_path, cache_dir=cache_dir)\n\n elif model.lower() == \"en-sentiment\":\n base_path = \"/\".join(\n [aws_resource_path, \"TEXT-CLASSIFICATION_imdb\", \"imdb.pt\"]\n )\n model_file = cached_path(base_path, cache_dir=cache_dir)\n\n if model_file is not None:\n return TextClassifier.load_from_file(model_file)\n", "path": "flair/models/text_classification_model.py"}]}
| 3,607 | 141 |
gh_patches_debug_12674
|
rasdani/github-patches
|
git_diff
|
openfun__richie-1537
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve meta description on each type of page
## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
The meta description is filled via the "meta_description" field on the page. This field is rarely filled whereas we have some interesting information on each page that could be used to fill it.
**Describe the solution you'd like**
If the "meta_description" field is filled on a page, use it. Otherwise, use the following information:
- blogpost page : use first few words of the "excerpt" placeholder OR "body" placeholder
- category page : use first few words of the "description" placeholder
- course page : use first few words of the "course_introduction" placeholder OR "course_description" placeholder. Maybe start with the state of the course and the title of the main organization if any?
- organization page : use first few words of the "description" placeholder
- person page : use first few words of the "bio" placeholder OR "maincontent" placeholder
- program page : use first few words of the "program_excerpt" OR "program_body" placeholder
**Discovery, Documentation, Adoption, Migration Strategy**
Truncate the content used for the meta description to **200 characters** (150 recommended).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/richie/apps/courses/templatetags/extra_tags.py`
Content:
```
1 """Custom template tags for the courses application of Richie."""
2 import json
3
4 from django import template
5 from django.core.exceptions import ObjectDoesNotExist
6 from django.template.loader import render_to_string
7 from django.utils import timezone
8 from django.utils.translation import get_language
9 from django.utils.translation import gettext as _
10 from django.utils.translation import to_locale
11
12 import arrow
13 from classytags.arguments import Argument, MultiValueArgument
14 from classytags.core import Options, Tag
15 from classytags.utils import flatten_context
16 from cms.templatetags.cms_tags import (
17 Placeholder,
18 PlaceholderOptions,
19 _get_page_by_untyped_arg,
20 )
21 from cms.toolbar.utils import get_toolbar_from_request
22 from cms.utils import get_site_id
23 from cms.utils.plugins import get_plugins
24
25 from ..lms import LMSHandler
26
27 # pylint: disable=invalid-name
28 register = template.Library()
29
30
31 def get_plugins_render_tag(context, name, varname, nodelist, page_lookup=None):
32 """
33 Retrieve the placeholder's plugins and set them as a variable in the template context.
34 If the placeholder is empty, render the block as fallback content and return the
35 resulting HTML.
36 If the placeholder is editable and rendered on its own page, the edit script and markup
37 are added to the HTML content.
38 """
39 content = ""
40 request = context.get("request")
41
42 if request:
43
44 context[varname] = []
45 page = _get_page_by_untyped_arg(page_lookup, request, get_site_id(None))
46
47 if not page:
48 return ""
49
50 try:
51 placeholder = page.placeholders.get(slot=name)
52 except ObjectDoesNotExist:
53 return ""
54 else:
55 context[varname] = [
56 cms_plugin.get_plugin_instance()[0]
57 for cms_plugin in get_plugins(
58 request, placeholder, template=page.get_template()
59 )
60 ]
61
62 # Default content if there is no plugins in the placeholder
63 if not context[varname] and nodelist:
64 content = nodelist.render(context)
65
66 # Add the edit script and markup to the content, only if the placeholder is editable
67 # and the visited page is the one on which the placeholder is declared.
68 toolbar = get_toolbar_from_request(request)
69 if placeholder.page == request.current_page and toolbar.edit_mode_active:
70 renderer = toolbar.get_content_renderer()
71 data = renderer.get_editable_placeholder_context(placeholder, page=page)
72 data["content"] = content
73 content = renderer.placeholder_edit_template.format(**data)
74
75 return content
76
77
78 @register.tag("get_placeholder_plugins")
79 class GetPlaceholderPlugins(Placeholder):
80 """
81 A template tag that declares a placeholder and sets its plugins as a context variable
82 instead of rendering them eg:
83
84 {% get_placeholder_plugins "logo" as varname %}
85 {% get_placeholder_plugins "logo" as varname or %}
86 <div>No content</div>
87 {% endget_placeholder_plugins %}
88
89 This tag can typically be used in association with the block_plugin tag, to customize the
90 way it is rendered eg:
91
92 {% get_placeholder_plugins "logo" as plugins %}
93 {% blockplugin plugins.0 %}
94 <img src="{% thumbnail instance.picture 300x150 %}"/>
95 {% endblockplugin %}
96
97 Keyword arguments:
98 name: the name of the placeholder
99 varname: context variable name. Output will be added to template context as this variable
100 instead of being returned.
101 or: optional argument which if given will make the template tag a block
102 tag whose content is shown if the placeholder is empty
103
104 Note: We must derive from the Placeholder class so that the tag is recognized as a
105 placeholder and shown in the structure toolbar.
106 """
107
108 name = "get_placeholder_plugins"
109 options = PlaceholderOptions(
110 Argument("name", resolve=False),
111 "as",
112 Argument("varname", resolve=False),
113 MultiValueArgument("extra_bits", required=False, resolve=False),
114 blocks=[("endget_placeholder_plugins", "nodelist")],
115 )
116
117 # pylint: disable=arguments-differ,too-many-arguments
118 def render_tag(self, context, name, varname, extra_bits, nodelist=None):
119 return get_plugins_render_tag(context, name, varname, nodelist)
120
121
122 @register.tag("get_page_plugins")
123 class GetPagePlugins(Tag):
124 """
125 A template tag that gets plugins from a page's placeholder returns them as a context variable:
126
127 {% get_page_plugins "logo" page_lookup as varname %}
128 {% get_page_plugins "logo" page_lookup as varname or %}
129 <div>No content</div>
130 {% endget_page_plugins %}
131
132 This tag can typically be used in association with the block_plugin tag,
133 to render the retrieved plugins:
134
135 {% get_page_plugins "logo" page_lookup as plugins %}
136 {% blockplugin plugins.0 %}
137 <img src="{% thumbnail instance.picture 300x150 %}"/>
138 {% endblockplugin %}
139
140 Keyword arguments:
141 name: the name of the placeholder
142 page_lookup: lookup argument for Page. See `_get_page_by_untyped_arg()`
143 for detailed information on the allowed types and their interpretation for the
144 `page_lookup` argument.
145 varname: context variable name. Output will be added to template context as this variable
146 instead of being returned.
147 or: optional argument which if given will make the template tag a block
148 tag whose content is shown if the placeholder is empty
149 """
150
151 name = "get_page_plugins"
152 options = PlaceholderOptions(
153 Argument("name", resolve=False),
154 Argument("page_lookup"),
155 "as",
156 Argument("varname", resolve=False),
157 MultiValueArgument("extra_bits", required=False, resolve=False),
158 blocks=[("endget_page_plugins", "nodelist")],
159 )
160
161 # pylint: disable=arguments-differ,too-many-arguments, unused-argument
162 def render_tag(
163 self, context, name, page_lookup, varname, extra_bits, nodelist=None
164 ):
165 return get_plugins_render_tag(context, name, varname, nodelist, page_lookup)
166
167
168 @register.tag()
169 class BlockPlugin(Tag):
170 """
171 Like DjangoCMS 'render_plugin_block' but only includes the edit script and markup when
172 the related placeholder is editable.
173
174 This issue was raised to DjangoCMS and we need our own template tag until they find a way
175 to fix it in DjangoCMS (https://github.com/divio/django-cms/issues/6683).
176 """
177
178 name = "blockplugin"
179 template = "cms/toolbar/plugin.html"
180 options = Options(Argument("plugin"), blocks=[("endblockplugin", "nodelist")])
181
182 # pylint: disable=arguments-differ
183 def render_tag(self, context, plugin, nodelist):
184 """
185         Renders the block for the plugin and returns the resulting HTML leaving the template
186         context untouched.
187         If the placeholder is editable, the edit script and markup are added to the rendered HTML.
188 """
189 request = context.get("request")
190 if not plugin or not request:
191 return ""
192
193 # Add the plugin and its rendered content to an internal context
194 internal_context = flatten_context(context)
195 internal_context["instance"] = plugin
196 internal_context["content"] = nodelist.render(context.new(internal_context))
197
198 # Add the edit script and markup to the content, only if the placeholder is editable
199 # and the visited page is the one on which the plugin's placeholder is declared.
200 toolbar = get_toolbar_from_request(request)
201 if plugin.placeholder.page == request.current_page and toolbar.edit_mode_active:
202 return render_to_string(self.template, internal_context)
203
204 return internal_context["content"]
205
206
207 @register.filter()
208 def is_empty_placeholder(page, slot):
209 """A template filter to determine if a placeholder is empty.
210
211 This is useful when we don't want to include any wrapper markup in our template unless
212     the placeholder actually contains plugins.
213 """
214 placeholder = page.placeholders.get(slot=slot)
215 return not placeholder.cmsplugin_set.exists()
216
217
218 @register.filter()
219 def order_by(queryset, args):
220 """A template filter to force ordering on a queryset.
221
222 Taken from: https://djangosnippets.org/snippets/741/
223 This is useful for DjangoCMS page querysets because we don't have access to the view.
224 """
225 args = [x.strip() for x in args.split(",")]
226 return queryset.order_by(*args)
227
228
229 @register.filter()
230 def has_connected_lms(course_run):
231 """
232 Determine if the passed course run has a connected LMS (as determined through out LMSHandler
233 and settings).
234 This enables our templates to either use the <CourseRunEnrollment /> component or a simple
235 link to the course run.
236 """
237 return LMSHandler.select_lms(course_run.resource_link) is not None
238
239
240 @register.simple_tag(takes_context=True)
241 def course_enrollment_widget_props(context):
242 """
243 Return a json dumps which contains all course_run's properties required by
244 CourseEnrollment React widget
245 """
246 course_run = context["run"]
247
248 profile_urls = json.loads(
249 context.get("AUTHENTICATION", {}).get("profile_urls", "{}")
250 )
251 dashboard_link = profile_urls.get("dashboard", {}).get("action")
252
253 starts_in_message = None
254 if course_run.start > timezone.now():
255 course_start = arrow.get(course_run.start)
256 humanized_course_start = course_start.humanize(
257 arrow.now(), locale=to_locale(get_language())
258 )
259 # Translators: delay indicates when the course will start as a duration.
260 # In english the string will be "The course will start in 3 days"
261 starts_in_message = _("The course will start {delay:s}").format(
262 delay=humanized_course_start
263 )
264
265 return json.dumps(
266 {
267 "courseRun": {
268 "id": course_run.id,
269 "resource_link": course_run.resource_link,
270 "priority": course_run.state["priority"],
271 "starts_in_message": starts_in_message,
272 "dashboard_link": dashboard_link,
273 }
274 }
275 )
276
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/richie/apps/courses/templatetags/extra_tags.py b/src/richie/apps/courses/templatetags/extra_tags.py
--- a/src/richie/apps/courses/templatetags/extra_tags.py
+++ b/src/richie/apps/courses/templatetags/extra_tags.py
@@ -3,6 +3,7 @@
from django import template
from django.core.exceptions import ObjectDoesNotExist
+from django.template.defaultfilters import stringfilter
from django.template.loader import render_to_string
from django.utils import timezone
from django.utils.translation import get_language
@@ -273,3 +274,12 @@
}
}
)
+
+
[email protected]
+@stringfilter
+def trim(value):
+ """
+ Remove whitespaces before and after a string.
+ """
+ return value.strip()
|
{"golden_diff": "diff --git a/src/richie/apps/courses/templatetags/extra_tags.py b/src/richie/apps/courses/templatetags/extra_tags.py\n--- a/src/richie/apps/courses/templatetags/extra_tags.py\n+++ b/src/richie/apps/courses/templatetags/extra_tags.py\n@@ -3,6 +3,7 @@\n \n from django import template\n from django.core.exceptions import ObjectDoesNotExist\n+from django.template.defaultfilters import stringfilter\n from django.template.loader import render_to_string\n from django.utils import timezone\n from django.utils.translation import get_language\n@@ -273,3 +274,12 @@\n }\n }\n )\n+\n+\[email protected]\n+@stringfilter\n+def trim(value):\n+ \"\"\"\n+ Remove whitespaces before and after a string.\n+ \"\"\"\n+ return value.strip()\n", "issue": "Improve meta description on each type of page\n## Feature Request\r\n\r\n**Is your feature request related to a problem or unsupported use case? Please describe.**\r\nThe meta description is filled via the \"meta_description\" field on the page. This field is rarely filled whereas we have some interesting information on each page that could be used to fill it.\r\n\r\n**Describe the solution you'd like**\r\nIf the \"meta_description\" field is filled on a page, use it. Otherwise, use the following information:\r\n- blogpost page : use first few words of the \"excerpt\" placeholder OR \"body\" placeholder\r\n- category use first few words of the \"description\" placeholder\r\n- course page : use first few words of the \"course_introduction\" placeholder OR \"course_description\" placeholder. Maybe start by the state of the course and the title of the main organization if any?\r\n- organization page : use first few words of the \"description\" placeholder\r\n- person page : use first few words of the \"bio\" placeholder OR \"maincontent\" placeholder\r\n- program page : use first few words of the \"program_excerpt\" OR \"program_body\" placeholder\r\n\r\n**Discovery, Documentation, Adoption, Migration Strategy**\r\nTrunk the content used for the meta description to **200 characters** (150 recommended).\r\n\n", "before_files": [{"content": "\"\"\"Custom template tags for the courses application of Richie.\"\"\"\nimport json\n\nfrom django import template\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.template.loader import render_to_string\nfrom django.utils import timezone\nfrom django.utils.translation import get_language\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import to_locale\n\nimport arrow\nfrom classytags.arguments import Argument, MultiValueArgument\nfrom classytags.core import Options, Tag\nfrom classytags.utils import flatten_context\nfrom cms.templatetags.cms_tags import (\n Placeholder,\n PlaceholderOptions,\n _get_page_by_untyped_arg,\n)\nfrom cms.toolbar.utils import get_toolbar_from_request\nfrom cms.utils import get_site_id\nfrom cms.utils.plugins import get_plugins\n\nfrom ..lms import LMSHandler\n\n# pylint: disable=invalid-name\nregister = template.Library()\n\n\ndef get_plugins_render_tag(context, name, varname, nodelist, page_lookup=None):\n \"\"\"\n Retrieve the placeholder's plugins and set them as a variable in the template context.\n If the placeholder is empty, render the block as fallback content and return the\n resulting HTML.\n If the placeholder is editable and rendered on its own page, the edit script and markup\n are added to the HTML content.\n \"\"\"\n content = \"\"\n request = context.get(\"request\")\n\n if request:\n\n context[varname] = []\n page = 
_get_page_by_untyped_arg(page_lookup, request, get_site_id(None))\n\n if not page:\n return \"\"\n\n try:\n placeholder = page.placeholders.get(slot=name)\n except ObjectDoesNotExist:\n return \"\"\n else:\n context[varname] = [\n cms_plugin.get_plugin_instance()[0]\n for cms_plugin in get_plugins(\n request, placeholder, template=page.get_template()\n )\n ]\n\n # Default content if there is no plugins in the placeholder\n if not context[varname] and nodelist:\n content = nodelist.render(context)\n\n # Add the edit script and markup to the content, only if the placeholder is editable\n # and the visited page is the one on which the placeholder is declared.\n toolbar = get_toolbar_from_request(request)\n if placeholder.page == request.current_page and toolbar.edit_mode_active:\n renderer = toolbar.get_content_renderer()\n data = renderer.get_editable_placeholder_context(placeholder, page=page)\n data[\"content\"] = content\n content = renderer.placeholder_edit_template.format(**data)\n\n return content\n\n\[email protected](\"get_placeholder_plugins\")\nclass GetPlaceholderPlugins(Placeholder):\n \"\"\"\n A template tag that declares a placeholder and sets its plugins as a context variable\n instead of rendering them eg:\n\n {% get_placeholder_plugins \"logo\" as varname %}\n {% get_placeholder_plugins \"logo\" as varname or %}\n <div>No content</div>\n {% endget_placeholder_plugins %}\n\n This tag can typically be used in association with the block_plugin tag, to customize the\n way it is rendered eg:\n\n {% get_placeholder_plugins \"logo\" as plugins %}\n {% blockplugin plugins.0 %}\n <img src=\"{% thumbnail instance.picture 300x150 %}\"/>\n {% endblockplugin %}\n\n Keyword arguments:\n name: the name of the placeholder\n varname: context variable name. Output will be added to template context as this variable\n instead of being returned.\n or: optional argument which if given will make the template tag a block\n tag whose content is shown if the placeholder is empty\n\n Note: We must derive from the Placeholder class so that the tag is recognized as a\n placeholder and shown in the structure toolbar.\n \"\"\"\n\n name = \"get_placeholder_plugins\"\n options = PlaceholderOptions(\n Argument(\"name\", resolve=False),\n \"as\",\n Argument(\"varname\", resolve=False),\n MultiValueArgument(\"extra_bits\", required=False, resolve=False),\n blocks=[(\"endget_placeholder_plugins\", \"nodelist\")],\n )\n\n # pylint: disable=arguments-differ,too-many-arguments\n def render_tag(self, context, name, varname, extra_bits, nodelist=None):\n return get_plugins_render_tag(context, name, varname, nodelist)\n\n\[email protected](\"get_page_plugins\")\nclass GetPagePlugins(Tag):\n \"\"\"\n A template tag that gets plugins from a page's placeholder returns them as a context variable:\n\n {% get_page_plugins \"logo\" page_lookup as varname %}\n {% get_page_plugins \"logo\" page_lookup as varname or %}\n <div>No content</div>\n {% endget_page_plugins %}\n\n This tag can typically be used in association with the block_plugin tag,\n to render the retrieved plugins:\n\n {% get_page_plugins \"logo\" page_lookup as plugins %}\n {% blockplugin plugins.0 %}\n <img src=\"{% thumbnail instance.picture 300x150 %}\"/>\n {% endblockplugin %}\n\n Keyword arguments:\n name: the name of the placeholder\n page_lookup: lookup argument for Page. See `_get_page_by_untyped_arg()`\n for detailed information on the allowed types and their interpretation for the\n `page_lookup` argument.\n varname: context variable name. 
Output will be added to template context as this variable\n instead of being returned.\n or: optional argument which if given will make the template tag a block\n tag whose content is shown if the placeholder is empty\n \"\"\"\n\n name = \"get_page_plugins\"\n options = PlaceholderOptions(\n Argument(\"name\", resolve=False),\n Argument(\"page_lookup\"),\n \"as\",\n Argument(\"varname\", resolve=False),\n MultiValueArgument(\"extra_bits\", required=False, resolve=False),\n blocks=[(\"endget_page_plugins\", \"nodelist\")],\n )\n\n # pylint: disable=arguments-differ,too-many-arguments, unused-argument\n def render_tag(\n self, context, name, page_lookup, varname, extra_bits, nodelist=None\n ):\n return get_plugins_render_tag(context, name, varname, nodelist, page_lookup)\n\n\[email protected]()\nclass BlockPlugin(Tag):\n \"\"\"\n Like DjangoCMS 'render_plugin_block' but only includes the edit script and markup when\n the related placeholder is editable.\n\n This issue was raised to DjangoCMS and we need our own template tag until they find a way\n to fix it in DjangoCMS (https://github.com/divio/django-cms/issues/6683).\n \"\"\"\n\n name = \"blockplugin\"\n template = \"cms/toolbar/plugin.html\"\n options = Options(Argument(\"plugin\"), blocks=[(\"endblockplugin\", \"nodelist\")])\n\n # pylint: disable=arguments-differ\n def render_tag(self, context, plugin, nodelist):\n \"\"\"\n Renders the block for the plugin and returns the resulting HTML leaving the temmpate\n context untouched.\n If the placholder is editable, the edit script and markup are added to the rendered HTML.\n \"\"\"\n request = context.get(\"request\")\n if not plugin or not request:\n return \"\"\n\n # Add the plugin and its rendered content to an internal context\n internal_context = flatten_context(context)\n internal_context[\"instance\"] = plugin\n internal_context[\"content\"] = nodelist.render(context.new(internal_context))\n\n # Add the edit script and markup to the content, only if the placeholder is editable\n # and the visited page is the one on which the plugin's placeholder is declared.\n toolbar = get_toolbar_from_request(request)\n if plugin.placeholder.page == request.current_page and toolbar.edit_mode_active:\n return render_to_string(self.template, internal_context)\n\n return internal_context[\"content\"]\n\n\[email protected]()\ndef is_empty_placeholder(page, slot):\n \"\"\"A template filter to determine if a placeholder is empty.\n\n This is useful when we don't want to include any wrapper markup in our template unless\n the placeholder unless it actually contains plugins.\n \"\"\"\n placeholder = page.placeholders.get(slot=slot)\n return not placeholder.cmsplugin_set.exists()\n\n\[email protected]()\ndef order_by(queryset, args):\n \"\"\"A template filter to force ordering on a queryset.\n\n Taken from: https://djangosnippets.org/snippets/741/\n This is useful for DjangoCMS page querysets because we don't have access to the view.\n \"\"\"\n args = [x.strip() for x in args.split(\",\")]\n return queryset.order_by(*args)\n\n\[email protected]()\ndef has_connected_lms(course_run):\n \"\"\"\n Determine if the passed course run has a connected LMS (as determined through out LMSHandler\n and settings).\n This enables our templates to either use the <CourseRunEnrollment /> component or a simple\n link to the course run.\n \"\"\"\n return LMSHandler.select_lms(course_run.resource_link) is not None\n\n\[email protected]_tag(takes_context=True)\ndef course_enrollment_widget_props(context):\n \"\"\"\n Return a 
json dumps which contains all course_run's properties required by\n CourseEnrollment React widget\n \"\"\"\n course_run = context[\"run\"]\n\n profile_urls = json.loads(\n context.get(\"AUTHENTICATION\", {}).get(\"profile_urls\", \"{}\")\n )\n dashboard_link = profile_urls.get(\"dashboard\", {}).get(\"action\")\n\n starts_in_message = None\n if course_run.start > timezone.now():\n course_start = arrow.get(course_run.start)\n humanized_course_start = course_start.humanize(\n arrow.now(), locale=to_locale(get_language())\n )\n # Translators: delay indicates when the course will start as a duration.\n # In english the string will be \"The course will start in 3 days\"\n starts_in_message = _(\"The course will start {delay:s}\").format(\n delay=humanized_course_start\n )\n\n return json.dumps(\n {\n \"courseRun\": {\n \"id\": course_run.id,\n \"resource_link\": course_run.resource_link,\n \"priority\": course_run.state[\"priority\"],\n \"starts_in_message\": starts_in_message,\n \"dashboard_link\": dashboard_link,\n }\n }\n )\n", "path": "src/richie/apps/courses/templatetags/extra_tags.py"}], "after_files": [{"content": "\"\"\"Custom template tags for the courses application of Richie.\"\"\"\nimport json\n\nfrom django import template\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.template.defaultfilters import stringfilter\nfrom django.template.loader import render_to_string\nfrom django.utils import timezone\nfrom django.utils.translation import get_language\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import to_locale\n\nimport arrow\nfrom classytags.arguments import Argument, MultiValueArgument\nfrom classytags.core import Options, Tag\nfrom classytags.utils import flatten_context\nfrom cms.templatetags.cms_tags import (\n Placeholder,\n PlaceholderOptions,\n _get_page_by_untyped_arg,\n)\nfrom cms.toolbar.utils import get_toolbar_from_request\nfrom cms.utils import get_site_id\nfrom cms.utils.plugins import get_plugins\n\nfrom ..lms import LMSHandler\n\n# pylint: disable=invalid-name\nregister = template.Library()\n\n\ndef get_plugins_render_tag(context, name, varname, nodelist, page_lookup=None):\n \"\"\"\n Retrieve the placeholder's plugins and set them as a variable in the template context.\n If the placeholder is empty, render the block as fallback content and return the\n resulting HTML.\n If the placeholder is editable and rendered on its own page, the edit script and markup\n are added to the HTML content.\n \"\"\"\n content = \"\"\n request = context.get(\"request\")\n\n if request:\n\n context[varname] = []\n page = _get_page_by_untyped_arg(page_lookup, request, get_site_id(None))\n\n if not page:\n return \"\"\n\n try:\n placeholder = page.placeholders.get(slot=name)\n except ObjectDoesNotExist:\n return \"\"\n else:\n context[varname] = [\n cms_plugin.get_plugin_instance()[0]\n for cms_plugin in get_plugins(\n request, placeholder, template=page.get_template()\n )\n ]\n\n # Default content if there is no plugins in the placeholder\n if not context[varname] and nodelist:\n content = nodelist.render(context)\n\n # Add the edit script and markup to the content, only if the placeholder is editable\n # and the visited page is the one on which the placeholder is declared.\n toolbar = get_toolbar_from_request(request)\n if placeholder.page == request.current_page and toolbar.edit_mode_active:\n renderer = toolbar.get_content_renderer()\n data = renderer.get_editable_placeholder_context(placeholder, page=page)\n data[\"content\"] 
= content\n content = renderer.placeholder_edit_template.format(**data)\n\n return content\n\n\[email protected](\"get_placeholder_plugins\")\nclass GetPlaceholderPlugins(Placeholder):\n \"\"\"\n A template tag that declares a placeholder and sets its plugins as a context variable\n instead of rendering them eg:\n\n {% get_placeholder_plugins \"logo\" as varname %}\n {% get_placeholder_plugins \"logo\" as varname or %}\n <div>No content</div>\n {% endget_placeholder_plugins %}\n\n This tag can typically be used in association with the block_plugin tag, to customize the\n way it is rendered eg:\n\n {% get_placeholder_plugins \"logo\" as plugins %}\n {% blockplugin plugins.0 %}\n <img src=\"{% thumbnail instance.picture 300x150 %}\"/>\n {% endblockplugin %}\n\n Keyword arguments:\n name: the name of the placeholder\n varname: context variable name. Output will be added to template context as this variable\n instead of being returned.\n or: optional argument which if given will make the template tag a block\n tag whose content is shown if the placeholder is empty\n\n Note: We must derive from the Placeholder class so that the tag is recognized as a\n placeholder and shown in the structure toolbar.\n \"\"\"\n\n name = \"get_placeholder_plugins\"\n options = PlaceholderOptions(\n Argument(\"name\", resolve=False),\n \"as\",\n Argument(\"varname\", resolve=False),\n MultiValueArgument(\"extra_bits\", required=False, resolve=False),\n blocks=[(\"endget_placeholder_plugins\", \"nodelist\")],\n )\n\n # pylint: disable=arguments-differ,too-many-arguments\n def render_tag(self, context, name, varname, extra_bits, nodelist=None):\n return get_plugins_render_tag(context, name, varname, nodelist)\n\n\[email protected](\"get_page_plugins\")\nclass GetPagePlugins(Tag):\n \"\"\"\n A template tag that gets plugins from a page's placeholder returns them as a context variable:\n\n {% get_page_plugins \"logo\" page_lookup as varname %}\n {% get_page_plugins \"logo\" page_lookup as varname or %}\n <div>No content</div>\n {% endget_page_plugins %}\n\n This tag can typically be used in association with the block_plugin tag,\n to render the retrieved plugins:\n\n {% get_page_plugins \"logo\" page_lookup as plugins %}\n {% blockplugin plugins.0 %}\n <img src=\"{% thumbnail instance.picture 300x150 %}\"/>\n {% endblockplugin %}\n\n Keyword arguments:\n name: the name of the placeholder\n page_lookup: lookup argument for Page. See `_get_page_by_untyped_arg()`\n for detailed information on the allowed types and their interpretation for the\n `page_lookup` argument.\n varname: context variable name. 
Output will be added to template context as this variable\n instead of being returned.\n or: optional argument which if given will make the template tag a block\n tag whose content is shown if the placeholder is empty\n \"\"\"\n\n name = \"get_page_plugins\"\n options = PlaceholderOptions(\n Argument(\"name\", resolve=False),\n Argument(\"page_lookup\"),\n \"as\",\n Argument(\"varname\", resolve=False),\n MultiValueArgument(\"extra_bits\", required=False, resolve=False),\n blocks=[(\"endget_page_plugins\", \"nodelist\")],\n )\n\n # pylint: disable=arguments-differ,too-many-arguments, unused-argument\n def render_tag(\n self, context, name, page_lookup, varname, extra_bits, nodelist=None\n ):\n return get_plugins_render_tag(context, name, varname, nodelist, page_lookup)\n\n\[email protected]()\nclass BlockPlugin(Tag):\n \"\"\"\n Like DjangoCMS 'render_plugin_block' but only includes the edit script and markup when\n the related placeholder is editable.\n\n This issue was raised to DjangoCMS and we need our own template tag until they find a way\n to fix it in DjangoCMS (https://github.com/divio/django-cms/issues/6683).\n \"\"\"\n\n name = \"blockplugin\"\n template = \"cms/toolbar/plugin.html\"\n options = Options(Argument(\"plugin\"), blocks=[(\"endblockplugin\", \"nodelist\")])\n\n # pylint: disable=arguments-differ\n def render_tag(self, context, plugin, nodelist):\n \"\"\"\n Renders the block for the plugin and returns the resulting HTML leaving the temmpate\n context untouched.\n If the placholder is editable, the edit script and markup are added to the rendered HTML.\n \"\"\"\n request = context.get(\"request\")\n if not plugin or not request:\n return \"\"\n\n # Add the plugin and its rendered content to an internal context\n internal_context = flatten_context(context)\n internal_context[\"instance\"] = plugin\n internal_context[\"content\"] = nodelist.render(context.new(internal_context))\n\n # Add the edit script and markup to the content, only if the placeholder is editable\n # and the visited page is the one on which the plugin's placeholder is declared.\n toolbar = get_toolbar_from_request(request)\n if plugin.placeholder.page == request.current_page and toolbar.edit_mode_active:\n return render_to_string(self.template, internal_context)\n\n return internal_context[\"content\"]\n\n\[email protected]()\ndef is_empty_placeholder(page, slot):\n \"\"\"A template filter to determine if a placeholder is empty.\n\n This is useful when we don't want to include any wrapper markup in our template unless\n the placeholder unless it actually contains plugins.\n \"\"\"\n placeholder = page.placeholders.get(slot=slot)\n return not placeholder.cmsplugin_set.exists()\n\n\[email protected]()\ndef order_by(queryset, args):\n \"\"\"A template filter to force ordering on a queryset.\n\n Taken from: https://djangosnippets.org/snippets/741/\n This is useful for DjangoCMS page querysets because we don't have access to the view.\n \"\"\"\n args = [x.strip() for x in args.split(\",\")]\n return queryset.order_by(*args)\n\n\[email protected]()\ndef has_connected_lms(course_run):\n \"\"\"\n Determine if the passed course run has a connected LMS (as determined through out LMSHandler\n and settings).\n This enables our templates to either use the <CourseRunEnrollment /> component or a simple\n link to the course run.\n \"\"\"\n return LMSHandler.select_lms(course_run.resource_link) is not None\n\n\[email protected]_tag(takes_context=True)\ndef course_enrollment_widget_props(context):\n \"\"\"\n Return a 
json dumps which contains all course_run's properties required by\n CourseEnrollment React widget\n \"\"\"\n course_run = context[\"run\"]\n\n profile_urls = json.loads(\n context.get(\"AUTHENTICATION\", {}).get(\"profile_urls\", \"{}\")\n )\n dashboard_link = profile_urls.get(\"dashboard\", {}).get(\"action\")\n\n starts_in_message = None\n if course_run.start > timezone.now():\n course_start = arrow.get(course_run.start)\n humanized_course_start = course_start.humanize(\n arrow.now(), locale=to_locale(get_language())\n )\n # Translators: delay indicates when the course will start as a duration.\n # In english the string will be \"The course will start in 3 days\"\n starts_in_message = _(\"The course will start {delay:s}\").format(\n delay=humanized_course_start\n )\n\n return json.dumps(\n {\n \"courseRun\": {\n \"id\": course_run.id,\n \"resource_link\": course_run.resource_link,\n \"priority\": course_run.state[\"priority\"],\n \"starts_in_message\": starts_in_message,\n \"dashboard_link\": dashboard_link,\n }\n }\n )\n\n\[email protected]\n@stringfilter\ndef trim(value):\n \"\"\"\n Remove whitespaces before and after a string.\n \"\"\"\n return value.strip()\n", "path": "src/richie/apps/courses/templatetags/extra_tags.py"}]}
| 3,435 | 195 |
gh_patches_debug_37001
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-6110
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Prompt users to fix upgrade issue for Tails 4.14 - 4.18 (or fix it for them?)
## Description
Tails automatic upgrades were broken between versions 4.14 - 4.18 inclusive ([Tails announcement](https://tails.boum.org/doc/upgrade/error/check/index.en.html#4.18)), and some users on older versions of Tails may not realize that they are missing auto-updates, and that there is a manual step required to fix them.
We could:
- Make no code changes, and continue to use support messaging channels etc to remind folks of this issue
- Add text to the SecureDrop updater wizard, prompting users to update if their version of Tails is too old, or
- Perform the steps to fix automatic Tails updates ourselves, which (according to the above link) consist of
```
torsocks curl --silent https://tails.boum.org/isrg-root-x1-cross-signed.pem \
| sudo tee --append /usr/local/etc/ssl/certs/tails.boum.org-CA.pem \
&& systemctl --user restart tails-upgrade-frontend
```
I'm kind of in favour of the last option, and I can put in a PR for a check at the end of `securedrop_init` right before our GUI updater runs. What do others think? [edit: filing now so we can discuss inclusion in 2.1.0]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `install_files/ansible-base/roles/tails-config/files/securedrop_init.py`
Content:
```
1 #!/usr/bin/python3
2
3 import grp
4 import os
5 import io
6 import pwd
7 import sys
8 import subprocess
9
10 from shutil import copyfile
11
12
13 # check for root
14 if os.geteuid() != 0:
15 sys.exit('You need to run this as root')
16
17 # paths
18 path_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'
19 path_torrc_backup = '/etc/tor/torrc.bak'
20 path_torrc = '/etc/tor/torrc'
21 path_desktop = '/home/amnesia/Desktop/'
22 path_persistent_desktop = '/lib/live/mount/persistence/TailsData_unlocked/dotfiles/Desktop/' # noqa: E501
23 path_securedrop_root = '/home/amnesia/Persistent/securedrop'
24 path_securedrop_admin_venv = os.path.join(path_securedrop_root,
25 'admin/.venv3/bin/python')
26 path_securedrop_admin_init = os.path.join(path_securedrop_root,
27 'admin/securedrop_admin/__init__.py')
28 path_gui_updater = os.path.join(path_securedrop_root,
29 'journalist_gui/SecureDropUpdater')
30
31 paths_v3_authfiles = {
32 "app-journalist": os.path.join(path_securedrop_root,
33 'install_files/ansible-base/app-journalist.auth_private'),
34 "app-ssh": os.path.join(path_securedrop_root,
35 'install_files/ansible-base/app-ssh.auth_private'),
36 "mon-ssh": os.path.join(path_securedrop_root,
37 'install_files/ansible-base/mon-ssh.auth_private')
38 }
39 path_onion_auth_dir = '/var/lib/tor/onion_auth'
40
41 # load torrc_additions
42 if os.path.isfile(path_torrc_additions):
43 with io.open(path_torrc_additions) as f:
44 torrc_additions = f.read()
45 else:
46 sys.exit('Error opening {0} for reading'.format(path_torrc_additions))
47
48 # load torrc
49 if os.path.isfile(path_torrc_backup):
50 with io.open(path_torrc_backup) as f:
51 torrc = f.read()
52 else:
53 if os.path.isfile(path_torrc):
54 with io.open(path_torrc) as f:
55 torrc = f.read()
56 else:
57 sys.exit('Error opening {0} for reading'.format(path_torrc))
58
59 # save a backup
60 with io.open(path_torrc_backup, 'w') as f:
61 f.write(torrc)
62
63 # append the additions
64 with io.open(path_torrc, 'w') as f:
65 f.write(torrc + torrc_additions)
66
67 # check for v3 auth files
68 v3_authfiles_present = False
69 for f in paths_v3_authfiles.values():
70 if os.path.isfile(f):
71 v3_authfiles_present = True
72
73 # if there are v3 authfiles, make dir and copy them into place
74 debian_tor_uid = pwd.getpwnam("debian-tor").pw_uid
75 debian_tor_gid = grp.getgrnam("debian-tor").gr_gid
76
77 if not os.path.isdir(path_onion_auth_dir):
78 os.mkdir(path_onion_auth_dir)
79
80 os.chmod(path_onion_auth_dir, 0o700)
81 os.chown(path_onion_auth_dir, debian_tor_uid, debian_tor_gid)
82
83 for key, f in paths_v3_authfiles.items():
84 if os.path.isfile(f):
85 filename = os.path.basename(f)
86 new_f = os.path.join(path_onion_auth_dir, filename)
87 copyfile(f, new_f)
88 os.chmod(new_f, 0o400)
89 os.chown(new_f, debian_tor_uid, debian_tor_gid)
90
91 # restart tor
92 try:
93 subprocess.check_call(['systemctl', 'restart', '[email protected]'])
94 except subprocess.CalledProcessError:
95 sys.exit('Error restarting Tor')
96
97 # Set journalist.desktop and source.desktop links as trusted with Nautilus (see
98 # https://github.com/freedomofpress/securedrop/issues/2586)
99 # set euid and env variables to amnesia user
100 amnesia_gid = grp.getgrnam('amnesia').gr_gid
101 amnesia_uid = pwd.getpwnam('amnesia').pw_uid
102 os.setresgid(amnesia_gid, amnesia_gid, -1)
103 os.setresuid(amnesia_uid, amnesia_uid, -1)
104 env = os.environ.copy()
105 env['XDG_CURRENT_DESKTOP'] = 'GNOME'
106 env['DESKTOP_SESSION'] = 'default'
107 env['DISPLAY'] = ':1'
108 env['XDG_RUNTIME_DIR'] = '/run/user/{}'.format(amnesia_uid)
109 env['XDG_DATA_DIR'] = '/usr/share/gnome:/usr/local/share/:/usr/share/'
110 env['HOME'] = '/home/amnesia'
111 env['LOGNAME'] = 'amnesia'
112 env['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/{}/bus'.format(
113 amnesia_uid)
114
115 # remove existing shortcut, recreate symlink and change metadata attribute
116 # to trust .desktop
117 for shortcut in ['source.desktop', 'journalist.desktop']:
118 subprocess.call(['rm', path_desktop + shortcut], env=env)
119 subprocess.call(['ln', '-s', path_persistent_desktop + shortcut,
120 path_desktop + shortcut], env=env)
121 subprocess.call(['gio', 'set', path_desktop + shortcut,
122 'metadata::trusted', 'true'], env=env)
123
124 # in Tails 4, reload gnome-shell desktop icons extension to update with changes above
125 cmd = ["lsb_release", "--id", "--short"]
126 p = subprocess.check_output(cmd)
127 distro_id = p.rstrip()
128 if distro_id == 'Debian' and os.uname()[1] == 'amnesia':
129 subprocess.call(['gnome-shell-extension-tool', '-r', 'desktop-icons@csoriano'], env=env)
130
131 # reacquire uid0 and notify the user
132 os.setresuid(0, 0, -1)
133 os.setresgid(0, 0, -1)
134 success_message = 'You can now access the Journalist Interface.\nIf you are an admin, you can now SSH to the servers.' # noqa: E501
135 subprocess.call(['tails-notify-user',
136 'SecureDrop successfully auto-configured!',
137 success_message])
138
139 # As the amnesia user, check for SecureDrop workstation updates.
140 os.setresgid(amnesia_gid, amnesia_gid, -1)
141 os.setresuid(amnesia_uid, amnesia_uid, -1)
142 output = subprocess.check_output([path_securedrop_admin_venv,
143 path_securedrop_admin_init,
144 '--root', path_securedrop_root,
145 'check_for_updates'], env=env)
146
147 flag_location = "/home/amnesia/Persistent/.securedrop/securedrop_update.flag"
148 if b'Update needed' in output or os.path.exists(flag_location):
149 # Start the SecureDrop updater GUI.
150 subprocess.Popen(['python3', path_gui_updater], env=env)
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/install_files/ansible-base/roles/tails-config/files/securedrop_init.py b/install_files/ansible-base/roles/tails-config/files/securedrop_init.py
--- a/install_files/ansible-base/roles/tails-config/files/securedrop_init.py
+++ b/install_files/ansible-base/roles/tails-config/files/securedrop_init.py
@@ -7,7 +7,8 @@
import sys
import subprocess
-from shutil import copyfile
+import tempfile
+from shutil import copyfile, copyfileobj
# check for root
@@ -148,3 +149,70 @@
if b'Update needed' in output or os.path.exists(flag_location):
# Start the SecureDrop updater GUI.
subprocess.Popen(['python3', path_gui_updater], env=env)
+
+# Check for Tails < 4.19 and apply a fix to the auto-updater.
+# See https://tails.boum.org/news/version_4.18/
+# (Suggested removal: 2022/01)
+tails_4_min_version = 19
+needs_update = False
+tails_current_version = None
+
+with open('/etc/os-release') as file:
+ for line in file:
+ try:
+ k, v = line.strip().split("=")
+ if k == "TAILS_VERSION_ID":
+ tails_current_version = v.strip("\"").split(".")
+ except ValueError:
+ continue
+
+if tails_current_version:
+ try:
+ needs_update = (len(tails_current_version) >= 2 and
+ int(tails_current_version[1]) < tails_4_min_version)
+
+ except (TypeError, ValueError):
+ sys.exit(0) # Don't break tailsconfig trying to fix this
+
+ if needs_update:
+ cert_name = 'isrg-root-x1-cross-signed.pem'
+ pem_file = tempfile.NamedTemporaryFile(delete=True)
+
+ try:
+ subprocess.call(['torsocks', 'curl', '--silent',
+ 'https://tails.boum.org/' + cert_name],
+ stdout=pem_file, env=env)
+
+ # Verify against /etc/ssl/certs/DST_Root_CA_X3.pem, which cross-signs
+ # the new LetsEncrypt cert but is expiring
+ verify_proc = subprocess.check_output(['openssl', 'verify',
+ '-no_check_time', '-no-CApath',
+ '-CAfile',
+ '/etc/ssl/certs/DST_Root_CA_X3.pem',
+ pem_file.name],
+ universal_newlines=True, env=env)
+
+ if 'OK' in verify_proc:
+
+ # Updating the cert chain requires sudo privileges
+ os.setresgid(0, 0, -1)
+ os.setresuid(0, 0, -1)
+
+ with open('/usr/local/etc/ssl/certs/tails.boum.org-CA.pem', 'a') as chain:
+ pem_file.seek(0)
+ copyfileobj(pem_file, chain)
+
+ # As amnesia user, start updater GUI
+ os.setresgid(amnesia_gid, amnesia_gid, -1)
+ os.setresuid(amnesia_uid, amnesia_uid, -1)
+ restart_proc = subprocess.call(['systemctl', '--user', 'restart',
+ 'tails-upgrade-frontend'], env=env)
+
+ except subprocess.CalledProcessError:
+ sys.exit(0) # Don't break tailsconfig trying to fix this
+
+ except IOError:
+ sys.exit(0)
+
+ finally:
+ pem_file.close()
|
{"golden_diff": "diff --git a/install_files/ansible-base/roles/tails-config/files/securedrop_init.py b/install_files/ansible-base/roles/tails-config/files/securedrop_init.py\n--- a/install_files/ansible-base/roles/tails-config/files/securedrop_init.py\n+++ b/install_files/ansible-base/roles/tails-config/files/securedrop_init.py\n@@ -7,7 +7,8 @@\n import sys\n import subprocess\n \n-from shutil import copyfile\n+import tempfile\n+from shutil import copyfile, copyfileobj\n \n \n # check for root\n@@ -148,3 +149,70 @@\n if b'Update needed' in output or os.path.exists(flag_location):\n # Start the SecureDrop updater GUI.\n subprocess.Popen(['python3', path_gui_updater], env=env)\n+\n+# Check for Tails < 4.19 and apply a fix to the auto-updater.\n+# See https://tails.boum.org/news/version_4.18/\n+# (Suggested removal: 2022/01)\n+tails_4_min_version = 19\n+needs_update = False\n+tails_current_version = None\n+\n+with open('/etc/os-release') as file:\n+ for line in file:\n+ try:\n+ k, v = line.strip().split(\"=\")\n+ if k == \"TAILS_VERSION_ID\":\n+ tails_current_version = v.strip(\"\\\"\").split(\".\")\n+ except ValueError:\n+ continue\n+\n+if tails_current_version:\n+ try:\n+ needs_update = (len(tails_current_version) >= 2 and\n+ int(tails_current_version[1]) < tails_4_min_version)\n+\n+ except (TypeError, ValueError):\n+ sys.exit(0) # Don't break tailsconfig trying to fix this\n+\n+ if needs_update:\n+ cert_name = 'isrg-root-x1-cross-signed.pem'\n+ pem_file = tempfile.NamedTemporaryFile(delete=True)\n+\n+ try:\n+ subprocess.call(['torsocks', 'curl', '--silent',\n+ 'https://tails.boum.org/' + cert_name],\n+ stdout=pem_file, env=env)\n+\n+ # Verify against /etc/ssl/certs/DST_Root_CA_X3.pem, which cross-signs\n+ # the new LetsEncrypt cert but is expiring\n+ verify_proc = subprocess.check_output(['openssl', 'verify',\n+ '-no_check_time', '-no-CApath',\n+ '-CAfile',\n+ '/etc/ssl/certs/DST_Root_CA_X3.pem',\n+ pem_file.name],\n+ universal_newlines=True, env=env)\n+\n+ if 'OK' in verify_proc:\n+\n+ # Updating the cert chain requires sudo privileges\n+ os.setresgid(0, 0, -1)\n+ os.setresuid(0, 0, -1)\n+\n+ with open('/usr/local/etc/ssl/certs/tails.boum.org-CA.pem', 'a') as chain:\n+ pem_file.seek(0)\n+ copyfileobj(pem_file, chain)\n+\n+ # As amnesia user, start updater GUI\n+ os.setresgid(amnesia_gid, amnesia_gid, -1)\n+ os.setresuid(amnesia_uid, amnesia_uid, -1)\n+ restart_proc = subprocess.call(['systemctl', '--user', 'restart',\n+ 'tails-upgrade-frontend'], env=env)\n+\n+ except subprocess.CalledProcessError:\n+ sys.exit(0) # Don't break tailsconfig trying to fix this\n+\n+ except IOError:\n+ sys.exit(0)\n+\n+ finally:\n+ pem_file.close()\n", "issue": "Prompt users to fix upgrade issue for Tails 4.14 - 4.18 (or fix it for them?)\n## Description\r\n\r\nTails automatic upgrades were broken between versions 4.14 - 4.18 inclusive ([Tails announcement](https://tails.boum.org/doc/upgrade/error/check/index.en.html#4.18)), and some users on older versions of Tails may not realize that they are missing auto-updates, and that there is a manual step required to fix them. 
\r\n\r\nWe could:\r\n- Make no code changes, and continue to use support messaging channels etc to remind folks of this issue\r\n- Add text to the SecureDrop updater wizard, prompting users to update if their version of Tails is too old, or\r\n- Perform the steps to fix automatic Tails updates ourselves, which (according to the above link) consist of \r\n```\r\ntorsocks curl --silent https://tails.boum.org/isrg-root-x1-cross-signed.pem \\\r\n| sudo tee --append /usr/local/etc/ssl/certs/tails.boum.org-CA.pem \\\r\n&& systemctl --user restart tails-upgrade-frontend\r\n``` \r\n\r\nI'm kind of in favour of the last option, and I can put in a PR for a check at the end of `securedrop_init` right before our GUI updater runs. What do others think? [edit: filing now so we can discuss inclusion in 2.1.0]\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport grp\nimport os\nimport io\nimport pwd\nimport sys\nimport subprocess\n\nfrom shutil import copyfile\n\n\n# check for root\nif os.geteuid() != 0:\n sys.exit('You need to run this as root')\n\n# paths\npath_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'\npath_torrc_backup = '/etc/tor/torrc.bak'\npath_torrc = '/etc/tor/torrc'\npath_desktop = '/home/amnesia/Desktop/'\npath_persistent_desktop = '/lib/live/mount/persistence/TailsData_unlocked/dotfiles/Desktop/' # noqa: E501\npath_securedrop_root = '/home/amnesia/Persistent/securedrop'\npath_securedrop_admin_venv = os.path.join(path_securedrop_root,\n 'admin/.venv3/bin/python')\npath_securedrop_admin_init = os.path.join(path_securedrop_root,\n 'admin/securedrop_admin/__init__.py')\npath_gui_updater = os.path.join(path_securedrop_root,\n 'journalist_gui/SecureDropUpdater')\n\npaths_v3_authfiles = {\n \"app-journalist\": os.path.join(path_securedrop_root,\n 'install_files/ansible-base/app-journalist.auth_private'),\n \"app-ssh\": os.path.join(path_securedrop_root,\n 'install_files/ansible-base/app-ssh.auth_private'),\n \"mon-ssh\": os.path.join(path_securedrop_root,\n 'install_files/ansible-base/mon-ssh.auth_private')\n}\npath_onion_auth_dir = '/var/lib/tor/onion_auth'\n\n# load torrc_additions\nif os.path.isfile(path_torrc_additions):\n with io.open(path_torrc_additions) as f:\n torrc_additions = f.read()\nelse:\n sys.exit('Error opening {0} for reading'.format(path_torrc_additions))\n\n# load torrc\nif os.path.isfile(path_torrc_backup):\n with io.open(path_torrc_backup) as f:\n torrc = f.read()\nelse:\n if os.path.isfile(path_torrc):\n with io.open(path_torrc) as f:\n torrc = f.read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc))\n\n # save a backup\n with io.open(path_torrc_backup, 'w') as f:\n f.write(torrc)\n\n# append the additions\nwith io.open(path_torrc, 'w') as f:\n f.write(torrc + torrc_additions)\n\n# check for v3 aths files\nv3_authfiles_present = False\nfor f in paths_v3_authfiles.values():\n if os.path.isfile(f):\n v3_authfiles_present = True\n\n# if there are v3 authfiles, make dir and copy them into place\ndebian_tor_uid = pwd.getpwnam(\"debian-tor\").pw_uid\ndebian_tor_gid = grp.getgrnam(\"debian-tor\").gr_gid\n\nif not os.path.isdir(path_onion_auth_dir):\n os.mkdir(path_onion_auth_dir)\n\nos.chmod(path_onion_auth_dir, 0o700)\nos.chown(path_onion_auth_dir, debian_tor_uid, debian_tor_gid)\n\nfor key, f in paths_v3_authfiles.items():\n if os.path.isfile(f):\n filename = os.path.basename(f)\n new_f = os.path.join(path_onion_auth_dir, filename)\n copyfile(f, new_f)\n os.chmod(new_f, 0o400)\n os.chown(new_f, debian_tor_uid, 
debian_tor_gid)\n\n# restart tor\ntry:\n subprocess.check_call(['systemctl', 'restart', '[email protected]'])\nexcept subprocess.CalledProcessError:\n sys.exit('Error restarting Tor')\n\n# Set journalist.desktop and source.desktop links as trusted with Nautilus (see\n# https://github.com/freedomofpress/securedrop/issues/2586)\n# set euid and env variables to amnesia user\namnesia_gid = grp.getgrnam('amnesia').gr_gid\namnesia_uid = pwd.getpwnam('amnesia').pw_uid\nos.setresgid(amnesia_gid, amnesia_gid, -1)\nos.setresuid(amnesia_uid, amnesia_uid, -1)\nenv = os.environ.copy()\nenv['XDG_CURRENT_DESKTOP'] = 'GNOME'\nenv['DESKTOP_SESSION'] = 'default'\nenv['DISPLAY'] = ':1'\nenv['XDG_RUNTIME_DIR'] = '/run/user/{}'.format(amnesia_uid)\nenv['XDG_DATA_DIR'] = '/usr/share/gnome:/usr/local/share/:/usr/share/'\nenv['HOME'] = '/home/amnesia'\nenv['LOGNAME'] = 'amnesia'\nenv['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/{}/bus'.format(\n amnesia_uid)\n\n# remove existing shortcut, recreate symlink and change metadata attribute\n# to trust .desktop\nfor shortcut in ['source.desktop', 'journalist.desktop']:\n subprocess.call(['rm', path_desktop + shortcut], env=env)\n subprocess.call(['ln', '-s', path_persistent_desktop + shortcut,\n path_desktop + shortcut], env=env)\n subprocess.call(['gio', 'set', path_desktop + shortcut,\n 'metadata::trusted', 'true'], env=env)\n\n# in Tails 4, reload gnome-shell desktop icons extension to update with changes above\ncmd = [\"lsb_release\", \"--id\", \"--short\"]\np = subprocess.check_output(cmd)\ndistro_id = p.rstrip()\nif distro_id == 'Debian' and os.uname()[1] == 'amnesia':\n subprocess.call(['gnome-shell-extension-tool', '-r', 'desktop-icons@csoriano'], env=env)\n\n# reacquire uid0 and notify the user\nos.setresuid(0, 0, -1)\nos.setresgid(0, 0, -1)\nsuccess_message = 'You can now access the Journalist Interface.\\nIf you are an admin, you can now SSH to the servers.' 
# noqa: E501\nsubprocess.call(['tails-notify-user',\n 'SecureDrop successfully auto-configured!',\n success_message])\n\n# As the amnesia user, check for SecureDrop workstation updates.\nos.setresgid(amnesia_gid, amnesia_gid, -1)\nos.setresuid(amnesia_uid, amnesia_uid, -1)\noutput = subprocess.check_output([path_securedrop_admin_venv,\n path_securedrop_admin_init,\n '--root', path_securedrop_root,\n 'check_for_updates'], env=env)\n\nflag_location = \"/home/amnesia/Persistent/.securedrop/securedrop_update.flag\"\nif b'Update needed' in output or os.path.exists(flag_location):\n # Start the SecureDrop updater GUI.\n subprocess.Popen(['python3', path_gui_updater], env=env)\n", "path": "install_files/ansible-base/roles/tails-config/files/securedrop_init.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport grp\nimport os\nimport io\nimport pwd\nimport sys\nimport subprocess\n\nimport tempfile\nfrom shutil import copyfile, copyfileobj\n\n\n# check for root\nif os.geteuid() != 0:\n sys.exit('You need to run this as root')\n\n# paths\npath_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'\npath_torrc_backup = '/etc/tor/torrc.bak'\npath_torrc = '/etc/tor/torrc'\npath_desktop = '/home/amnesia/Desktop/'\npath_persistent_desktop = '/lib/live/mount/persistence/TailsData_unlocked/dotfiles/Desktop/' # noqa: E501\npath_securedrop_root = '/home/amnesia/Persistent/securedrop'\npath_securedrop_admin_venv = os.path.join(path_securedrop_root,\n 'admin/.venv3/bin/python')\npath_securedrop_admin_init = os.path.join(path_securedrop_root,\n 'admin/securedrop_admin/__init__.py')\npath_gui_updater = os.path.join(path_securedrop_root,\n 'journalist_gui/SecureDropUpdater')\n\npaths_v3_authfiles = {\n \"app-journalist\": os.path.join(path_securedrop_root,\n 'install_files/ansible-base/app-journalist.auth_private'),\n \"app-ssh\": os.path.join(path_securedrop_root,\n 'install_files/ansible-base/app-ssh.auth_private'),\n \"mon-ssh\": os.path.join(path_securedrop_root,\n 'install_files/ansible-base/mon-ssh.auth_private')\n}\npath_onion_auth_dir = '/var/lib/tor/onion_auth'\n\n# load torrc_additions\nif os.path.isfile(path_torrc_additions):\n with io.open(path_torrc_additions) as f:\n torrc_additions = f.read()\nelse:\n sys.exit('Error opening {0} for reading'.format(path_torrc_additions))\n\n# load torrc\nif os.path.isfile(path_torrc_backup):\n with io.open(path_torrc_backup) as f:\n torrc = f.read()\nelse:\n if os.path.isfile(path_torrc):\n with io.open(path_torrc) as f:\n torrc = f.read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc))\n\n # save a backup\n with io.open(path_torrc_backup, 'w') as f:\n f.write(torrc)\n\n# append the additions\nwith io.open(path_torrc, 'w') as f:\n f.write(torrc + torrc_additions)\n\n# check for v3 aths files\nv3_authfiles_present = False\nfor f in paths_v3_authfiles.values():\n if os.path.isfile(f):\n v3_authfiles_present = True\n\n# if there are v3 authfiles, make dir and copy them into place\ndebian_tor_uid = pwd.getpwnam(\"debian-tor\").pw_uid\ndebian_tor_gid = grp.getgrnam(\"debian-tor\").gr_gid\n\nif not os.path.isdir(path_onion_auth_dir):\n os.mkdir(path_onion_auth_dir)\n\nos.chmod(path_onion_auth_dir, 0o700)\nos.chown(path_onion_auth_dir, debian_tor_uid, debian_tor_gid)\n\nfor key, f in paths_v3_authfiles.items():\n if os.path.isfile(f):\n filename = os.path.basename(f)\n new_f = os.path.join(path_onion_auth_dir, filename)\n copyfile(f, new_f)\n os.chmod(new_f, 0o400)\n os.chown(new_f, debian_tor_uid, debian_tor_gid)\n\n# 
restart tor\ntry:\n subprocess.check_call(['systemctl', 'restart', '[email protected]'])\nexcept subprocess.CalledProcessError:\n sys.exit('Error restarting Tor')\n\n# Set journalist.desktop and source.desktop links as trusted with Nautilus (see\n# https://github.com/freedomofpress/securedrop/issues/2586)\n# set euid and env variables to amnesia user\namnesia_gid = grp.getgrnam('amnesia').gr_gid\namnesia_uid = pwd.getpwnam('amnesia').pw_uid\nos.setresgid(amnesia_gid, amnesia_gid, -1)\nos.setresuid(amnesia_uid, amnesia_uid, -1)\nenv = os.environ.copy()\nenv['XDG_CURRENT_DESKTOP'] = 'GNOME'\nenv['DESKTOP_SESSION'] = 'default'\nenv['DISPLAY'] = ':1'\nenv['XDG_RUNTIME_DIR'] = '/run/user/{}'.format(amnesia_uid)\nenv['XDG_DATA_DIR'] = '/usr/share/gnome:/usr/local/share/:/usr/share/'\nenv['HOME'] = '/home/amnesia'\nenv['LOGNAME'] = 'amnesia'\nenv['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/{}/bus'.format(\n amnesia_uid)\n\n# remove existing shortcut, recreate symlink and change metadata attribute\n# to trust .desktop\nfor shortcut in ['source.desktop', 'journalist.desktop']:\n subprocess.call(['rm', path_desktop + shortcut], env=env)\n subprocess.call(['ln', '-s', path_persistent_desktop + shortcut,\n path_desktop + shortcut], env=env)\n subprocess.call(['gio', 'set', path_desktop + shortcut,\n 'metadata::trusted', 'true'], env=env)\n\n# in Tails 4, reload gnome-shell desktop icons extension to update with changes above\ncmd = [\"lsb_release\", \"--id\", \"--short\"]\np = subprocess.check_output(cmd)\ndistro_id = p.rstrip()\nif distro_id == 'Debian' and os.uname()[1] == 'amnesia':\n subprocess.call(['gnome-shell-extension-tool', '-r', 'desktop-icons@csoriano'], env=env)\n\n# reacquire uid0 and notify the user\nos.setresuid(0, 0, -1)\nos.setresgid(0, 0, -1)\nsuccess_message = 'You can now access the Journalist Interface.\\nIf you are an admin, you can now SSH to the servers.' 
# noqa: E501\nsubprocess.call(['tails-notify-user',\n 'SecureDrop successfully auto-configured!',\n success_message])\n\n# As the amnesia user, check for SecureDrop workstation updates.\nos.setresgid(amnesia_gid, amnesia_gid, -1)\nos.setresuid(amnesia_uid, amnesia_uid, -1)\noutput = subprocess.check_output([path_securedrop_admin_venv,\n path_securedrop_admin_init,\n '--root', path_securedrop_root,\n 'check_for_updates'], env=env)\n\nflag_location = \"/home/amnesia/Persistent/.securedrop/securedrop_update.flag\"\nif b'Update needed' in output or os.path.exists(flag_location):\n # Start the SecureDrop updater GUI.\n subprocess.Popen(['python3', path_gui_updater], env=env)\n\n# Check for Tails < 4.19 and apply a fix to the auto-updater.\n# See https://tails.boum.org/news/version_4.18/\n# (Suggested removal: 2022/01)\ntails_4_min_version = 19\nneeds_update = False\ntails_current_version = None\n\nwith open('/etc/os-release') as file:\n for line in file:\n try:\n k, v = line.strip().split(\"=\")\n if k == \"TAILS_VERSION_ID\":\n tails_current_version = v.strip(\"\\\"\").split(\".\")\n except ValueError:\n continue\n\nif tails_current_version:\n try:\n needs_update = (len(tails_current_version) >= 2 and\n int(tails_current_version[1]) < tails_4_min_version)\n\n except (TypeError, ValueError):\n sys.exit(0) # Don't break tailsconfig trying to fix this\n\n if needs_update:\n cert_name = 'isrg-root-x1-cross-signed.pem'\n pem_file = tempfile.NamedTemporaryFile(delete=True)\n\n try:\n subprocess.call(['torsocks', 'curl', '--silent',\n 'https://tails.boum.org/' + cert_name],\n stdout=pem_file, env=env)\n\n # Verify against /etc/ssl/certs/DST_Root_CA_X3.pem, which cross-signs\n # the new LetsEncrypt cert but is expiring\n verify_proc = subprocess.check_output(['openssl', 'verify',\n '-no_check_time', '-no-CApath',\n '-CAfile',\n '/etc/ssl/certs/DST_Root_CA_X3.pem',\n pem_file.name],\n universal_newlines=True, env=env)\n\n if 'OK' in verify_proc:\n\n # Updating the cert chain requires sudo privileges\n os.setresgid(0, 0, -1)\n os.setresuid(0, 0, -1)\n\n with open('/usr/local/etc/ssl/certs/tails.boum.org-CA.pem', 'a') as chain:\n pem_file.seek(0)\n copyfileobj(pem_file, chain)\n\n # As amnesia user, start updater GUI\n os.setresgid(amnesia_gid, amnesia_gid, -1)\n os.setresuid(amnesia_uid, amnesia_uid, -1)\n restart_proc = subprocess.call(['systemctl', '--user', 'restart',\n 'tails-upgrade-frontend'], env=env)\n\n except subprocess.CalledProcessError:\n sys.exit(0) # Don't break tailsconfig trying to fix this\n\n except IOError:\n sys.exit(0)\n\n finally:\n pem_file.close()\n", "path": "install_files/ansible-base/roles/tails-config/files/securedrop_init.py"}]}
| 2,444 | 810 |
gh_patches_debug_13983
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1620
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IntegrityError if you try to create two records with the same last_modified value in a batch
```
demo:
collections:
demo:
records:
abc:
data:
last_modified: 123
efg:
data:
last_modified: 123
```
```
$ kinto-wizard load -s https://kinto.dev.mozaws.net/v1 -a admin:admin demo.yaml --force -b demo -c demo
kinto_http.exceptions.KintoException: POST /v1/batch - 503 503 -
{'message': 'Service temporary unavailable due to overloading or maintenance, please retry later.',
'code': 503, 'errno': 201, 'error': 'Service Unavailable'}
```
--- END ISSUE ---
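For context, the `kinto-wizard load` call above boils down to a single `POST /v1/batch` request. Below is a rough repro sketch using the `requests` library; the server URL and credentials are taken from the report, the bucket and collection are assumed to already exist, and the payload is an approximation of what kinto-wizard sends rather than a verbatim capture.

```python
import requests

# Two record creations in one batch, both forcing the same last_modified value.
batch_body = {
    "defaults": {"method": "PUT"},
    "requests": [
        {"path": "/buckets/demo/collections/demo/records/abc",
         "body": {"data": {"last_modified": 123}}},
        {"path": "/buckets/demo/collections/demo/records/efg",
         "body": {"data": {"last_modified": 123}}},
    ],
}

resp = requests.post(
    "https://kinto.dev.mozaws.net/v1/batch",  # demo server from the report
    json=batch_body,
    auth=("admin", "admin"),                  # credentials from the report
)
# Reported behaviour: a 503 instead of a per-request conflict response.
print(resp.status_code, resp.json())
```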
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/core/views/batch.py`
Content:
```
1 import logging
2
3 import colander
4 from cornice.validators import colander_validator
5 from pyramid import httpexceptions
6 from pyramid.security import NO_PERMISSION_REQUIRED
7
8 from kinto.core import errors
9 from kinto.core import Service
10 from kinto.core.errors import ErrorSchema
11 from kinto.core.utils import merge_dicts, build_request, build_response
12
13
14 subrequest_logger = logging.getLogger('subrequest.summary')
15
16 valid_http_method = colander.OneOf(('GET', 'HEAD', 'DELETE', 'TRACE',
17 'POST', 'PUT', 'PATCH'))
18
19
20 def string_values(node, cstruct):
21 """Validate that a ``colander.Mapping`` only has strings in its values.
22
23 .. warning::
24
25 Should be associated to a ``colander.Mapping`` schema node.
26 """
27 are_strings = [isinstance(v, str) for v in cstruct.values()]
28 if not all(are_strings):
29 error_msg = '{} contains non string value'.format(cstruct)
30 raise colander.Invalid(node, error_msg)
31
32
33 class BatchRequestSchema(colander.MappingSchema):
34 method = colander.SchemaNode(colander.String(),
35 validator=valid_http_method,
36 missing=colander.drop)
37 path = colander.SchemaNode(colander.String(),
38 validator=colander.Regex('^/'))
39 headers = colander.SchemaNode(colander.Mapping(unknown='preserve'),
40 validator=string_values,
41 missing=colander.drop)
42 body = colander.SchemaNode(colander.Mapping(unknown='preserve'),
43 missing=colander.drop)
44
45 @staticmethod
46 def schema_type():
47 return colander.Mapping(unknown='raise')
48
49
50 class BatchPayloadSchema(colander.MappingSchema):
51 defaults = BatchRequestSchema(missing=colander.drop).clone()
52 requests = colander.SchemaNode(colander.Sequence(),
53 BatchRequestSchema())
54
55 @staticmethod
56 def schema_type():
57 return colander.Mapping(unknown='raise')
58
59 def __init__(self, *args, **kwargs):
60 super().__init__(*args, **kwargs)
61 # On defaults, path is not mandatory.
62 self.get('defaults').get('path').missing = colander.drop
63
64 def deserialize(self, cstruct=colander.null):
65 """Preprocess received data to carefully merge defaults.
66 """
67 if cstruct is not colander.null:
68 defaults = cstruct.get('defaults')
69 requests = cstruct.get('requests')
70 if isinstance(defaults, dict) and isinstance(requests, list):
71 for request in requests:
72 if isinstance(request, dict):
73 merge_dicts(request, defaults)
74 return super().deserialize(cstruct)
75
76
77 class BatchRequest(colander.MappingSchema):
78 body = BatchPayloadSchema()
79
80
81 class BatchResponseSchema(colander.MappingSchema):
82 status = colander.SchemaNode(colander.Integer())
83 path = colander.SchemaNode(colander.String())
84 headers = colander.SchemaNode(colander.Mapping(unknown='preserve'),
85 validator=string_values,
86 missing=colander.drop)
87 body = colander.SchemaNode(colander.Mapping(unknown='preserve'),
88 missing=colander.drop)
89
90
91 class BatchResponseBodySchema(colander.MappingSchema):
92 responses = colander.SequenceSchema(BatchResponseSchema(missing=colander.drop))
93
94
95 class BatchResponse(colander.MappingSchema):
96 body = BatchResponseBodySchema()
97
98
99 class ErrorResponseSchema(colander.MappingSchema):
100 body = ErrorSchema()
101
102
103 batch_responses = {
104 '200': BatchResponse(description='Return a list of operation responses.'),
105 '400': ErrorResponseSchema(description='The request was badly formatted.'),
106 'default': ErrorResponseSchema(description='an unknown error occurred.')
107 }
108
109 batch = Service(name='batch', path='/batch',
110 description='Batch operations')
111
112
113 @batch.post(schema=BatchRequest,
114 validators=(colander_validator,),
115 permission=NO_PERMISSION_REQUIRED,
116 tags=['Batch'], operation_id='batch',
117 response_schemas=batch_responses)
118 def post_batch(request):
119 requests = request.validated['body']['requests']
120
121 request.log_context(batch_size=len(requests))
122
123 limit = request.registry.settings['batch_max_requests']
124 if limit and len(requests) > int(limit):
125 error_msg = 'Number of requests is limited to {}'.format(limit)
126 request.errors.add('body', 'requests', error_msg)
127 return
128
129 if any([batch.path in req['path'] for req in requests]):
130 error_msg = 'Recursive call on {} endpoint is forbidden.'.format(batch.path)
131 request.errors.add('body', 'requests', error_msg)
132 return
133
134 responses = []
135
136 for subrequest_spec in requests:
137 subrequest = build_request(request, subrequest_spec)
138
139 log_context = {**request.log_context(),
140 'path': subrequest.path,
141 'method': subrequest.method}
142 try:
143 # Invoke subrequest without individual transaction.
144 resp, subrequest = request.follow_subrequest(subrequest,
145 use_tweens=False)
146 except httpexceptions.HTTPException as e:
147 if e.content_type == 'application/json':
148 resp = e
149 else:
150 # JSONify raw Pyramid errors.
151 resp = errors.http_error(e)
152
153 subrequest_logger.info('subrequest.summary', extra=log_context)
154
155 dict_resp = build_response(resp, subrequest)
156 responses.append(dict_resp)
157
158 return {
159 'responses': responses
160 }
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/core/views/batch.py b/kinto/core/views/batch.py
--- a/kinto/core/views/batch.py
+++ b/kinto/core/views/batch.py
@@ -144,6 +144,13 @@
             resp, subrequest = request.follow_subrequest(subrequest,
                                                          use_tweens=False)
         except httpexceptions.HTTPException as e:
+            # Since some request in the batch failed, we need to stop the parent request
+            # through Pyramid's transaction manager. 5XX errors are already caught by
+            # pyramid_tm's commit_veto
+            # https://github.com/Kinto/kinto/issues/624
+            if e.status_code == 409:
+                request.tm.abort()
+
             if e.content_type == 'application/json':
                 resp = e
             else:
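
A short editorial note on the patch: because `post_batch` catches subrequest `HTTPException`s and keeps going, a conflict inside the batch never stopped the surrounding transaction, so the duplicate `last_modified` only failed at commit time and surfaced as a 503. The added `request.tm.abort()` rolls the whole batch back as soon as a 409 is reported. A hypothetical expectation check is sketched below, reusing the conflicting payload idea from the earlier repro sketch (assumed demo server and credentials from the issue; this is not an actual Kinto test case).

```python
import requests

# Same kind of conflicting payload as in the repro sketch near the issue text.
batch_body = {
    "defaults": {"method": "PUT", "body": {"data": {"last_modified": 123}}},
    "requests": [{"path": "/buckets/demo/collections/demo/records/{}".format(rid)}
                 for rid in ("abc", "efg")],
}

resp = requests.post("https://kinto.dev.mozaws.net/v1/batch",
                     json=batch_body, auth=("admin", "admin"))
assert resp.status_code != 503  # the commit-time 503 should no longer occur
```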
|
{"golden_diff": "diff --git a/kinto/core/views/batch.py b/kinto/core/views/batch.py\n--- a/kinto/core/views/batch.py\n+++ b/kinto/core/views/batch.py\n@@ -144,6 +144,13 @@\n resp, subrequest = request.follow_subrequest(subrequest,\n use_tweens=False)\n except httpexceptions.HTTPException as e:\n+ # Since some request in the batch failed, we need to stop the parent request\n+ # through Pyramid's transaction manager. 5XX errors are already caught by\n+ # pyramid_tm's commit_veto\n+ # https://github.com/Kinto/kinto/issues/624\n+ if e.status_code == 409:\n+ request.tm.abort()\n+\n if e.content_type == 'application/json':\n resp = e\n else:\n", "issue": "IntegrityError if you try to create two records with the same last_modified value in a batch\n```\r\ndemo:\r\n collections:\r\n demo:\r\n records:\r\n abc:\r\n data:\r\n last_modified: 123\r\n efg:\r\n data:\r\n last_modified: 123\r\n```\r\n\r\n```\r\n$ kinto-wizard load -s https://kinto.dev.mozaws.net/v1 -a admin:admin demo.yaml --force -b demo -c demo\r\nkinto_http.exceptions.KintoException: POST /v1/batch - 503 503 - \r\n{'message': 'Service temporary unavailable due to overloading or maintenance, please retry later.',\r\n 'code': 503, 'errno': 201, 'error': 'Service Unavailable'}\r\n```\n", "before_files": [{"content": "import logging\n\nimport colander\nfrom cornice.validators import colander_validator\nfrom pyramid import httpexceptions\nfrom pyramid.security import NO_PERMISSION_REQUIRED\n\nfrom kinto.core import errors\nfrom kinto.core import Service\nfrom kinto.core.errors import ErrorSchema\nfrom kinto.core.utils import merge_dicts, build_request, build_response\n\n\nsubrequest_logger = logging.getLogger('subrequest.summary')\n\nvalid_http_method = colander.OneOf(('GET', 'HEAD', 'DELETE', 'TRACE',\n 'POST', 'PUT', 'PATCH'))\n\n\ndef string_values(node, cstruct):\n \"\"\"Validate that a ``colander.Mapping`` only has strings in its values.\n\n .. 
warning::\n\n Should be associated to a ``colander.Mapping`` schema node.\n \"\"\"\n are_strings = [isinstance(v, str) for v in cstruct.values()]\n if not all(are_strings):\n error_msg = '{} contains non string value'.format(cstruct)\n raise colander.Invalid(node, error_msg)\n\n\nclass BatchRequestSchema(colander.MappingSchema):\n method = colander.SchemaNode(colander.String(),\n validator=valid_http_method,\n missing=colander.drop)\n path = colander.SchemaNode(colander.String(),\n validator=colander.Regex('^/'))\n headers = colander.SchemaNode(colander.Mapping(unknown='preserve'),\n validator=string_values,\n missing=colander.drop)\n body = colander.SchemaNode(colander.Mapping(unknown='preserve'),\n missing=colander.drop)\n\n @staticmethod\n def schema_type():\n return colander.Mapping(unknown='raise')\n\n\nclass BatchPayloadSchema(colander.MappingSchema):\n defaults = BatchRequestSchema(missing=colander.drop).clone()\n requests = colander.SchemaNode(colander.Sequence(),\n BatchRequestSchema())\n\n @staticmethod\n def schema_type():\n return colander.Mapping(unknown='raise')\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # On defaults, path is not mandatory.\n self.get('defaults').get('path').missing = colander.drop\n\n def deserialize(self, cstruct=colander.null):\n \"\"\"Preprocess received data to carefully merge defaults.\n \"\"\"\n if cstruct is not colander.null:\n defaults = cstruct.get('defaults')\n requests = cstruct.get('requests')\n if isinstance(defaults, dict) and isinstance(requests, list):\n for request in requests:\n if isinstance(request, dict):\n merge_dicts(request, defaults)\n return super().deserialize(cstruct)\n\n\nclass BatchRequest(colander.MappingSchema):\n body = BatchPayloadSchema()\n\n\nclass BatchResponseSchema(colander.MappingSchema):\n status = colander.SchemaNode(colander.Integer())\n path = colander.SchemaNode(colander.String())\n headers = colander.SchemaNode(colander.Mapping(unknown='preserve'),\n validator=string_values,\n missing=colander.drop)\n body = colander.SchemaNode(colander.Mapping(unknown='preserve'),\n missing=colander.drop)\n\n\nclass BatchResponseBodySchema(colander.MappingSchema):\n responses = colander.SequenceSchema(BatchResponseSchema(missing=colander.drop))\n\n\nclass BatchResponse(colander.MappingSchema):\n body = BatchResponseBodySchema()\n\n\nclass ErrorResponseSchema(colander.MappingSchema):\n body = ErrorSchema()\n\n\nbatch_responses = {\n '200': BatchResponse(description='Return a list of operation responses.'),\n '400': ErrorResponseSchema(description='The request was badly formatted.'),\n 'default': ErrorResponseSchema(description='an unknown error occurred.')\n}\n\nbatch = Service(name='batch', path='/batch',\n description='Batch operations')\n\n\[email protected](schema=BatchRequest,\n validators=(colander_validator,),\n permission=NO_PERMISSION_REQUIRED,\n tags=['Batch'], operation_id='batch',\n response_schemas=batch_responses)\ndef post_batch(request):\n requests = request.validated['body']['requests']\n\n request.log_context(batch_size=len(requests))\n\n limit = request.registry.settings['batch_max_requests']\n if limit and len(requests) > int(limit):\n error_msg = 'Number of requests is limited to {}'.format(limit)\n request.errors.add('body', 'requests', error_msg)\n return\n\n if any([batch.path in req['path'] for req in requests]):\n error_msg = 'Recursive call on {} endpoint is forbidden.'.format(batch.path)\n request.errors.add('body', 'requests', error_msg)\n return\n\n responses = 
[]\n\n for subrequest_spec in requests:\n subrequest = build_request(request, subrequest_spec)\n\n log_context = {**request.log_context(),\n 'path': subrequest.path,\n 'method': subrequest.method}\n try:\n # Invoke subrequest without individual transaction.\n resp, subrequest = request.follow_subrequest(subrequest,\n use_tweens=False)\n except httpexceptions.HTTPException as e:\n if e.content_type == 'application/json':\n resp = e\n else:\n # JSONify raw Pyramid errors.\n resp = errors.http_error(e)\n\n subrequest_logger.info('subrequest.summary', extra=log_context)\n\n dict_resp = build_response(resp, subrequest)\n responses.append(dict_resp)\n\n return {\n 'responses': responses\n }\n", "path": "kinto/core/views/batch.py"}], "after_files": [{"content": "import logging\n\nimport colander\nfrom cornice.validators import colander_validator\nfrom pyramid import httpexceptions\nfrom pyramid.security import NO_PERMISSION_REQUIRED\n\nfrom kinto.core import errors\nfrom kinto.core import Service\nfrom kinto.core.errors import ErrorSchema\nfrom kinto.core.utils import merge_dicts, build_request, build_response\n\n\nsubrequest_logger = logging.getLogger('subrequest.summary')\n\nvalid_http_method = colander.OneOf(('GET', 'HEAD', 'DELETE', 'TRACE',\n 'POST', 'PUT', 'PATCH'))\n\n\ndef string_values(node, cstruct):\n \"\"\"Validate that a ``colander.Mapping`` only has strings in its values.\n\n .. warning::\n\n Should be associated to a ``colander.Mapping`` schema node.\n \"\"\"\n are_strings = [isinstance(v, str) for v in cstruct.values()]\n if not all(are_strings):\n error_msg = '{} contains non string value'.format(cstruct)\n raise colander.Invalid(node, error_msg)\n\n\nclass BatchRequestSchema(colander.MappingSchema):\n method = colander.SchemaNode(colander.String(),\n validator=valid_http_method,\n missing=colander.drop)\n path = colander.SchemaNode(colander.String(),\n validator=colander.Regex('^/'))\n headers = colander.SchemaNode(colander.Mapping(unknown='preserve'),\n validator=string_values,\n missing=colander.drop)\n body = colander.SchemaNode(colander.Mapping(unknown='preserve'),\n missing=colander.drop)\n\n @staticmethod\n def schema_type():\n return colander.Mapping(unknown='raise')\n\n\nclass BatchPayloadSchema(colander.MappingSchema):\n defaults = BatchRequestSchema(missing=colander.drop).clone()\n requests = colander.SchemaNode(colander.Sequence(),\n BatchRequestSchema())\n\n @staticmethod\n def schema_type():\n return colander.Mapping(unknown='raise')\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # On defaults, path is not mandatory.\n self.get('defaults').get('path').missing = colander.drop\n\n def deserialize(self, cstruct=colander.null):\n \"\"\"Preprocess received data to carefully merge defaults.\n \"\"\"\n if cstruct is not colander.null:\n defaults = cstruct.get('defaults')\n requests = cstruct.get('requests')\n if isinstance(defaults, dict) and isinstance(requests, list):\n for request in requests:\n if isinstance(request, dict):\n merge_dicts(request, defaults)\n return super().deserialize(cstruct)\n\n\nclass BatchRequest(colander.MappingSchema):\n body = BatchPayloadSchema()\n\n\nclass BatchResponseSchema(colander.MappingSchema):\n status = colander.SchemaNode(colander.Integer())\n path = colander.SchemaNode(colander.String())\n headers = colander.SchemaNode(colander.Mapping(unknown='preserve'),\n validator=string_values,\n missing=colander.drop)\n body = colander.SchemaNode(colander.Mapping(unknown='preserve'),\n 
missing=colander.drop)\n\n\nclass BatchResponseBodySchema(colander.MappingSchema):\n responses = colander.SequenceSchema(BatchResponseSchema(missing=colander.drop))\n\n\nclass BatchResponse(colander.MappingSchema):\n body = BatchResponseBodySchema()\n\n\nclass ErrorResponseSchema(colander.MappingSchema):\n body = ErrorSchema()\n\n\nbatch_responses = {\n '200': BatchResponse(description='Return a list of operation responses.'),\n '400': ErrorResponseSchema(description='The request was badly formatted.'),\n 'default': ErrorResponseSchema(description='an unknown error occurred.')\n}\n\nbatch = Service(name='batch', path='/batch',\n description='Batch operations')\n\n\[email protected](schema=BatchRequest,\n validators=(colander_validator,),\n permission=NO_PERMISSION_REQUIRED,\n tags=['Batch'], operation_id='batch',\n response_schemas=batch_responses)\ndef post_batch(request):\n requests = request.validated['body']['requests']\n\n request.log_context(batch_size=len(requests))\n\n limit = request.registry.settings['batch_max_requests']\n if limit and len(requests) > int(limit):\n error_msg = 'Number of requests is limited to {}'.format(limit)\n request.errors.add('body', 'requests', error_msg)\n return\n\n if any([batch.path in req['path'] for req in requests]):\n error_msg = 'Recursive call on {} endpoint is forbidden.'.format(batch.path)\n request.errors.add('body', 'requests', error_msg)\n return\n\n responses = []\n\n for subrequest_spec in requests:\n subrequest = build_request(request, subrequest_spec)\n\n log_context = {**request.log_context(),\n 'path': subrequest.path,\n 'method': subrequest.method}\n try:\n # Invoke subrequest without individual transaction.\n resp, subrequest = request.follow_subrequest(subrequest,\n use_tweens=False)\n except httpexceptions.HTTPException as e:\n # Since some request in the batch failed, we need to stop the parent request\n # through Pyramid's transaction manager. 5XX errors are already caught by\n # pyramid_tm's commit_veto\n # https://github.com/Kinto/kinto/issues/624\n if e.status_code == 409:\n request.tm.abort()\n\n if e.content_type == 'application/json':\n resp = e\n else:\n # JSONify raw Pyramid errors.\n resp = errors.http_error(e)\n\n subrequest_logger.info('subrequest.summary', extra=log_context)\n\n dict_resp = build_response(resp, subrequest)\n responses.append(dict_resp)\n\n return {\n 'responses': responses\n }\n", "path": "kinto/core/views/batch.py"}]}
| 1,938 | 185 |
gh_patches_debug_2903
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-1517
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No field to edit Cluster comments
### Issue type
[ ] Feature request <!-- Requesting the implementation of a new feature -->
[X] Bug report <!-- Reporting unexpected or erroneous behavior -->
[ ] Documentation <!-- Proposing a modification to the documentation -->
### Environment
* Python version: 3.5.2
* NetBox version: origin/dev-2.2 e93129f1
### Description
When you view Clusters, you can see there is a 'comments' field. However, when you create a Cluster there is no field to enter the comments. The same applies when you click Edit on an existing cluster (e.g. `/virtualization/clusters/1/edit/`).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/virtualization/forms.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from mptt.forms import TreeNodeChoiceField
4
5 from django import forms
6 from django.db.models import Count
7
8 from dcim.constants import IFACE_FF_VIRTUAL, VIFACE_FF_CHOICES
9 from dcim.formfields import MACAddressFormField
10 from dcim.models import Device, Interface, Platform, Rack, Region, Site
11 from extras.forms import CustomFieldBulkEditForm, CustomFieldForm, CustomFieldFilterForm
12 from tenancy.forms import TenancyForm
13 from tenancy.models import Tenant
14 from utilities.forms import (
15 add_blank_choice, APISelect, APISelectMultiple, BootstrapMixin, BulkEditForm, BulkEditNullBooleanSelect,
16 ChainedFieldsMixin, ChainedModelChoiceField, ChainedModelMultipleChoiceField, CommentField, ComponentForm,
17 ConfirmationForm, CSVChoiceField, ExpandableNameField, FilterChoiceField, SlugField, SmallTextarea,
18 )
19 from .constants import STATUS_CHOICES
20 from .models import Cluster, ClusterGroup, ClusterType, VirtualMachine
21
22
23 #
24 # Cluster types
25 #
26
27 class ClusterTypeForm(BootstrapMixin, forms.ModelForm):
28 slug = SlugField()
29
30 class Meta:
31 model = ClusterType
32 fields = ['name', 'slug']
33
34
35 #
36 # Cluster groups
37 #
38
39 class ClusterGroupForm(BootstrapMixin, forms.ModelForm):
40 slug = SlugField()
41
42 class Meta:
43 model = ClusterGroup
44 fields = ['name', 'slug']
45
46
47 #
48 # Clusters
49 #
50
51 class ClusterForm(BootstrapMixin, CustomFieldForm):
52
53 class Meta:
54 model = Cluster
55 fields = ['name', 'type', 'group']
56
57
58 class ClusterCSVForm(forms.ModelForm):
59 type = forms.ModelChoiceField(
60 queryset=ClusterType.objects.all(),
61 to_field_name='name',
62 help_text='Name of cluster type',
63 error_messages={
64 'invalid_choice': 'Invalid cluster type name.',
65 }
66 )
67 group = forms.ModelChoiceField(
68 queryset=ClusterGroup.objects.all(),
69 to_field_name='name',
70 required=False,
71 help_text='Name of cluster group',
72 error_messages={
73 'invalid_choice': 'Invalid cluster group name.',
74 }
75 )
76
77 class Meta:
78 model = Cluster
79 fields = ['name', 'type', 'group', 'comments']
80
81
82 class ClusterBulkEditForm(BootstrapMixin, CustomFieldBulkEditForm):
83 pk = forms.ModelMultipleChoiceField(queryset=Cluster.objects.all(), widget=forms.MultipleHiddenInput)
84 type = forms.ModelChoiceField(queryset=ClusterType.objects.all(), required=False)
85 group = forms.ModelChoiceField(queryset=ClusterGroup.objects.all(), required=False)
86 comments = CommentField(widget=SmallTextarea)
87
88 class Meta:
89 nullable_fields = ['group', 'comments']
90
91
92 class ClusterFilterForm(BootstrapMixin, CustomFieldFilterForm):
93 model = Cluster
94 q = forms.CharField(required=False, label='Search')
95 group = FilterChoiceField(
96 queryset=ClusterGroup.objects.annotate(filter_count=Count('clusters')),
97 to_field_name='slug',
98 null_option=(0, 'None'),
99 required=False,
100 )
101 type = FilterChoiceField(
102 queryset=ClusterType.objects.annotate(filter_count=Count('clusters')),
103 to_field_name='slug',
104 required=False,
105 )
106
107
108 class ClusterAddDevicesForm(BootstrapMixin, ChainedFieldsMixin, forms.Form):
109 region = TreeNodeChoiceField(
110 queryset=Region.objects.all(),
111 required=False,
112 widget=forms.Select(
113 attrs={'filter-for': 'site', 'nullable': 'true'}
114 )
115 )
116 site = ChainedModelChoiceField(
117 queryset=Site.objects.all(),
118 chains=(
119 ('region', 'region'),
120 ),
121 required=False,
122 widget=APISelect(
123 api_url='/api/dcim/sites/?region_id={{region}}',
124 attrs={'filter-for': 'rack'}
125 )
126 )
127 rack = ChainedModelChoiceField(
128 queryset=Rack.objects.all(),
129 chains=(
130 ('site', 'site'),
131 ),
132 required=False,
133 widget=APISelect(
134 api_url='/api/dcim/racks/?site_id={{site}}',
135 attrs={'filter-for': 'devices', 'nullable': 'true'}
136 )
137 )
138 devices = ChainedModelMultipleChoiceField(
139 queryset=Device.objects.filter(cluster__isnull=True),
140 chains=(
141 ('site', 'site'),
142 ('rack', 'rack'),
143 ),
144 label='Device',
145 widget=APISelectMultiple(
146 api_url='/api/dcim/devices/?site_id={{site}}&rack_id={{rack}}',
147 display_field='display_name',
148 disabled_indicator='cluster'
149 )
150 )
151
152 class Meta:
153 fields = ['region', 'site', 'rack', 'devices']
154
155 def __init__(self, *args, **kwargs):
156
157 super(ClusterAddDevicesForm, self).__init__(*args, **kwargs)
158
159 self.fields['devices'].choices = []
160
161
162 class ClusterRemoveDevicesForm(ConfirmationForm):
163 pk = forms.ModelMultipleChoiceField(queryset=Device.objects.all(), widget=forms.MultipleHiddenInput)
164
165
166 #
167 # Virtual Machines
168 #
169
170 class VirtualMachineForm(BootstrapMixin, TenancyForm, CustomFieldForm):
171 cluster_group = forms.ModelChoiceField(
172 queryset=ClusterGroup.objects.all(),
173 required=False,
174 widget=forms.Select(
175 attrs={'filter-for': 'cluster', 'nullable': 'true'}
176 )
177 )
178 cluster = ChainedModelChoiceField(
179 queryset=Cluster.objects.all(),
180 chains=(
181 ('group', 'cluster_group'),
182 ),
183 widget=APISelect(
184 api_url='/api/virtualization/clusters/?group_id={{cluster_group}}'
185 )
186 )
187
188 class Meta:
189 model = VirtualMachine
190 fields = [
191 'name', 'status', 'cluster_group', 'cluster', 'tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments',
192 ]
193
194 def __init__(self, *args, **kwargs):
195
196 # Initialize helper selector
197 instance = kwargs.get('instance')
198 if instance.pk and instance.cluster is not None:
199 initial = kwargs.get('initial', {}).copy()
200 initial['cluster_group'] = instance.cluster.group
201 kwargs['initial'] = initial
202
203 super(VirtualMachineForm, self).__init__(*args, **kwargs)
204
205
206 class VirtualMachineCSVForm(forms.ModelForm):
207 status = CSVChoiceField(
208 choices=STATUS_CHOICES,
209 required=False,
210 help_text='Operational status of device'
211 )
212 cluster = forms.ModelChoiceField(
213 queryset=Cluster.objects.all(),
214 to_field_name='name',
215 help_text='Name of parent cluster',
216 error_messages={
217 'invalid_choice': 'Invalid cluster name.',
218 }
219 )
220 tenant = forms.ModelChoiceField(
221 queryset=Tenant.objects.all(),
222 required=False,
223 to_field_name='name',
224 help_text='Name of assigned tenant',
225 error_messages={
226 'invalid_choice': 'Tenant not found.'
227 }
228 )
229 platform = forms.ModelChoiceField(
230 queryset=Platform.objects.all(),
231 required=False,
232 to_field_name='name',
233 help_text='Name of assigned platform',
234 error_messages={
235 'invalid_choice': 'Invalid platform.',
236 }
237 )
238
239 class Meta:
240 model = VirtualMachine
241 fields = ['name', 'status', 'cluster', 'tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments']
242
243
244 class VirtualMachineBulkEditForm(BootstrapMixin, CustomFieldBulkEditForm):
245 pk = forms.ModelMultipleChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.MultipleHiddenInput)
246 status = forms.ChoiceField(choices=add_blank_choice(STATUS_CHOICES), required=False, initial='')
247 cluster = forms.ModelChoiceField(queryset=Cluster.objects.all(), required=False)
248 tenant = forms.ModelChoiceField(queryset=Tenant.objects.all(), required=False)
249 platform = forms.ModelChoiceField(queryset=Platform.objects.all(), required=False)
250 vcpus = forms.IntegerField(required=False, label='vCPUs')
251 memory = forms.IntegerField(required=False, label='Memory (MB)')
252 disk = forms.IntegerField(required=False, label='Disk (GB)')
253 comments = CommentField(widget=SmallTextarea)
254
255 class Meta:
256 nullable_fields = ['tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments']
257
258
259 def vm_status_choices():
260 status_counts = {}
261 for status in VirtualMachine.objects.values('status').annotate(count=Count('status')).order_by('status'):
262 status_counts[status['status']] = status['count']
263 return [(s[0], '{} ({})'.format(s[1], status_counts.get(s[0], 0))) for s in STATUS_CHOICES]
264
265
266 class VirtualMachineFilterForm(BootstrapMixin, CustomFieldFilterForm):
267 model = VirtualMachine
268 q = forms.CharField(required=False, label='Search')
269 cluster_group = FilterChoiceField(
270 queryset=ClusterGroup.objects.all(),
271 to_field_name='slug',
272 null_option=(0, 'None'),
273 )
274 cluster_id = FilterChoiceField(
275 queryset=Cluster.objects.annotate(filter_count=Count('virtual_machines')),
276 label='Cluster'
277 )
278 status = forms.MultipleChoiceField(choices=vm_status_choices, required=False)
279
280
281 #
282 # VM interfaces
283 #
284
285 class InterfaceForm(BootstrapMixin, forms.ModelForm):
286
287 class Meta:
288 model = Interface
289 fields = ['virtual_machine', 'name', 'form_factor', 'enabled', 'mac_address', 'mtu', 'description']
290 widgets = {
291 'virtual_machine': forms.HiddenInput(),
292 'form_factor': forms.HiddenInput(),
293 }
294
295
296 class InterfaceCreateForm(ComponentForm):
297 name_pattern = ExpandableNameField(label='Name')
298 form_factor = forms.ChoiceField(choices=VIFACE_FF_CHOICES, initial=IFACE_FF_VIRTUAL, widget=forms.HiddenInput())
299 enabled = forms.BooleanField(required=False)
300 mtu = forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')
301 mac_address = MACAddressFormField(required=False, label='MAC Address')
302 description = forms.CharField(max_length=100, required=False)
303
304 def __init__(self, *args, **kwargs):
305
306 # Set interfaces enabled by default
307 kwargs['initial'] = kwargs.get('initial', {}).copy()
308 kwargs['initial'].update({'enabled': True})
309
310 super(InterfaceCreateForm, self).__init__(*args, **kwargs)
311
312
313 class InterfaceBulkEditForm(BootstrapMixin, BulkEditForm):
314 pk = forms.ModelMultipleChoiceField(queryset=Interface.objects.all(), widget=forms.MultipleHiddenInput)
315 virtual_machine = forms.ModelChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.HiddenInput)
316 enabled = forms.NullBooleanField(required=False, widget=BulkEditNullBooleanSelect)
317 mtu = forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')
318 description = forms.CharField(max_length=100, required=False)
319
320 class Meta:
321 nullable_fields = ['mtu', 'description']
322
323
324 #
325 # Bulk VirtualMachine component creation
326 #
327
328 class VirtualMachineBulkAddComponentForm(BootstrapMixin, forms.Form):
329 pk = forms.ModelMultipleChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.MultipleHiddenInput)
330 name_pattern = ExpandableNameField(label='Name')
331
332
333 class VirtualMachineBulkAddInterfaceForm(VirtualMachineBulkAddComponentForm):
334 form_factor = forms.ChoiceField(choices=VIFACE_FF_CHOICES, initial=IFACE_FF_VIRTUAL, widget=forms.HiddenInput())
335 enabled = forms.BooleanField(required=False, initial=True)
336 mtu = forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')
337 description = forms.CharField(max_length=100, required=False)
338
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/netbox/virtualization/forms.py b/netbox/virtualization/forms.py
--- a/netbox/virtualization/forms.py
+++ b/netbox/virtualization/forms.py
@@ -49,10 +49,11 @@
 #
 
 class ClusterForm(BootstrapMixin, CustomFieldForm):
+    comments = CommentField(widget=SmallTextarea)
 
     class Meta:
         model = Cluster
-        fields = ['name', 'type', 'group']
+        fields = ['name', 'type', 'group', 'comments']
 
 
 class ClusterCSVForm(forms.ModelForm):
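
An editorial aside on the underlying Django pattern (not part of the patch itself): `Meta.fields` decides which model attributes a `ModelForm` manages and writes back on `save()`, so `comments` had to be added there; the explicit `CommentField(widget=SmallTextarea)` declaration only swaps in the nicer widget and behaviour. A minimal generic sketch is shown below, with a hypothetical `Widget` model standing in for `Cluster`.

```python
from django import forms

from myapp.models import Widget  # hypothetical model with 'name' and 'comments' fields


class WidgetForm(forms.ModelForm):
    # Optional override to get a textarea widget, mirroring NetBox's
    # CommentField(widget=SmallTextarea) on ClusterForm.
    comments = forms.CharField(widget=forms.Textarea, required=False)

    class Meta:
        model = Widget
        # 'comments' must be listed here as well: Meta.fields controls which
        # model attributes the form manages and saves back to the instance.
        fields = ['name', 'comments']
```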
|
{"golden_diff": "diff --git a/netbox/virtualization/forms.py b/netbox/virtualization/forms.py\n--- a/netbox/virtualization/forms.py\n+++ b/netbox/virtualization/forms.py\n@@ -49,10 +49,11 @@\n #\n \n class ClusterForm(BootstrapMixin, CustomFieldForm):\n+ comments = CommentField(widget=SmallTextarea)\n \n class Meta:\n model = Cluster\n- fields = ['name', 'type', 'group']\n+ fields = ['name', 'type', 'group', 'comments']\n \n \n class ClusterCSVForm(forms.ModelForm):\n", "issue": "No field to edit Cluster comments\n### Issue type\r\n[ ] Feature request <!-- Requesting the implementation of a new feature -->\r\n[X] Bug report <!-- Reporting unexpected or erroneous behavior -->\r\n[ ] Documentation <!-- Proposing a modification to the documentation -->\r\n\r\n### Environment\r\n\r\n* Python version: 3.5.2\r\n* NetBox version: origin/dev-2.2 e93129f1\r\n\r\n### Description\r\n\r\nWhen you view Clusters you can see there is a 'comments' field. However, when you create a Cluster there is no field to enter the comments. Ditto when you click Edit on an existing cluster (e.g. `/virtualization/clusters/1/edit/`).\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom mptt.forms import TreeNodeChoiceField\n\nfrom django import forms\nfrom django.db.models import Count\n\nfrom dcim.constants import IFACE_FF_VIRTUAL, VIFACE_FF_CHOICES\nfrom dcim.formfields import MACAddressFormField\nfrom dcim.models import Device, Interface, Platform, Rack, Region, Site\nfrom extras.forms import CustomFieldBulkEditForm, CustomFieldForm, CustomFieldFilterForm\nfrom tenancy.forms import TenancyForm\nfrom tenancy.models import Tenant\nfrom utilities.forms import (\n add_blank_choice, APISelect, APISelectMultiple, BootstrapMixin, BulkEditForm, BulkEditNullBooleanSelect,\n ChainedFieldsMixin, ChainedModelChoiceField, ChainedModelMultipleChoiceField, CommentField, ComponentForm,\n ConfirmationForm, CSVChoiceField, ExpandableNameField, FilterChoiceField, SlugField, SmallTextarea,\n)\nfrom .constants import STATUS_CHOICES\nfrom .models import Cluster, ClusterGroup, ClusterType, VirtualMachine\n\n\n#\n# Cluster types\n#\n\nclass ClusterTypeForm(BootstrapMixin, forms.ModelForm):\n slug = SlugField()\n\n class Meta:\n model = ClusterType\n fields = ['name', 'slug']\n\n\n#\n# Cluster groups\n#\n\nclass ClusterGroupForm(BootstrapMixin, forms.ModelForm):\n slug = SlugField()\n\n class Meta:\n model = ClusterGroup\n fields = ['name', 'slug']\n\n\n#\n# Clusters\n#\n\nclass ClusterForm(BootstrapMixin, CustomFieldForm):\n\n class Meta:\n model = Cluster\n fields = ['name', 'type', 'group']\n\n\nclass ClusterCSVForm(forms.ModelForm):\n type = forms.ModelChoiceField(\n queryset=ClusterType.objects.all(),\n to_field_name='name',\n help_text='Name of cluster type',\n error_messages={\n 'invalid_choice': 'Invalid cluster type name.',\n }\n )\n group = forms.ModelChoiceField(\n queryset=ClusterGroup.objects.all(),\n to_field_name='name',\n required=False,\n help_text='Name of cluster group',\n error_messages={\n 'invalid_choice': 'Invalid cluster group name.',\n }\n )\n\n class Meta:\n model = Cluster\n fields = ['name', 'type', 'group', 'comments']\n\n\nclass ClusterBulkEditForm(BootstrapMixin, CustomFieldBulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=Cluster.objects.all(), widget=forms.MultipleHiddenInput)\n type = forms.ModelChoiceField(queryset=ClusterType.objects.all(), required=False)\n group = forms.ModelChoiceField(queryset=ClusterGroup.objects.all(), required=False)\n comments = 
CommentField(widget=SmallTextarea)\n\n class Meta:\n nullable_fields = ['group', 'comments']\n\n\nclass ClusterFilterForm(BootstrapMixin, CustomFieldFilterForm):\n model = Cluster\n q = forms.CharField(required=False, label='Search')\n group = FilterChoiceField(\n queryset=ClusterGroup.objects.annotate(filter_count=Count('clusters')),\n to_field_name='slug',\n null_option=(0, 'None'),\n required=False,\n )\n type = FilterChoiceField(\n queryset=ClusterType.objects.annotate(filter_count=Count('clusters')),\n to_field_name='slug',\n required=False,\n )\n\n\nclass ClusterAddDevicesForm(BootstrapMixin, ChainedFieldsMixin, forms.Form):\n region = TreeNodeChoiceField(\n queryset=Region.objects.all(),\n required=False,\n widget=forms.Select(\n attrs={'filter-for': 'site', 'nullable': 'true'}\n )\n )\n site = ChainedModelChoiceField(\n queryset=Site.objects.all(),\n chains=(\n ('region', 'region'),\n ),\n required=False,\n widget=APISelect(\n api_url='/api/dcim/sites/?region_id={{region}}',\n attrs={'filter-for': 'rack'}\n )\n )\n rack = ChainedModelChoiceField(\n queryset=Rack.objects.all(),\n chains=(\n ('site', 'site'),\n ),\n required=False,\n widget=APISelect(\n api_url='/api/dcim/racks/?site_id={{site}}',\n attrs={'filter-for': 'devices', 'nullable': 'true'}\n )\n )\n devices = ChainedModelMultipleChoiceField(\n queryset=Device.objects.filter(cluster__isnull=True),\n chains=(\n ('site', 'site'),\n ('rack', 'rack'),\n ),\n label='Device',\n widget=APISelectMultiple(\n api_url='/api/dcim/devices/?site_id={{site}}&rack_id={{rack}}',\n display_field='display_name',\n disabled_indicator='cluster'\n )\n )\n\n class Meta:\n fields = ['region', 'site', 'rack', 'devices']\n\n def __init__(self, *args, **kwargs):\n\n super(ClusterAddDevicesForm, self).__init__(*args, **kwargs)\n\n self.fields['devices'].choices = []\n\n\nclass ClusterRemoveDevicesForm(ConfirmationForm):\n pk = forms.ModelMultipleChoiceField(queryset=Device.objects.all(), widget=forms.MultipleHiddenInput)\n\n\n#\n# Virtual Machines\n#\n\nclass VirtualMachineForm(BootstrapMixin, TenancyForm, CustomFieldForm):\n cluster_group = forms.ModelChoiceField(\n queryset=ClusterGroup.objects.all(),\n required=False,\n widget=forms.Select(\n attrs={'filter-for': 'cluster', 'nullable': 'true'}\n )\n )\n cluster = ChainedModelChoiceField(\n queryset=Cluster.objects.all(),\n chains=(\n ('group', 'cluster_group'),\n ),\n widget=APISelect(\n api_url='/api/virtualization/clusters/?group_id={{cluster_group}}'\n )\n )\n\n class Meta:\n model = VirtualMachine\n fields = [\n 'name', 'status', 'cluster_group', 'cluster', 'tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments',\n ]\n\n def __init__(self, *args, **kwargs):\n\n # Initialize helper selector\n instance = kwargs.get('instance')\n if instance.pk and instance.cluster is not None:\n initial = kwargs.get('initial', {}).copy()\n initial['cluster_group'] = instance.cluster.group\n kwargs['initial'] = initial\n\n super(VirtualMachineForm, self).__init__(*args, **kwargs)\n\n\nclass VirtualMachineCSVForm(forms.ModelForm):\n status = CSVChoiceField(\n choices=STATUS_CHOICES,\n required=False,\n help_text='Operational status of device'\n )\n cluster = forms.ModelChoiceField(\n queryset=Cluster.objects.all(),\n to_field_name='name',\n help_text='Name of parent cluster',\n error_messages={\n 'invalid_choice': 'Invalid cluster name.',\n }\n )\n tenant = forms.ModelChoiceField(\n queryset=Tenant.objects.all(),\n required=False,\n to_field_name='name',\n help_text='Name of assigned tenant',\n 
error_messages={\n 'invalid_choice': 'Tenant not found.'\n }\n )\n platform = forms.ModelChoiceField(\n queryset=Platform.objects.all(),\n required=False,\n to_field_name='name',\n help_text='Name of assigned platform',\n error_messages={\n 'invalid_choice': 'Invalid platform.',\n }\n )\n\n class Meta:\n model = VirtualMachine\n fields = ['name', 'status', 'cluster', 'tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments']\n\n\nclass VirtualMachineBulkEditForm(BootstrapMixin, CustomFieldBulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.MultipleHiddenInput)\n status = forms.ChoiceField(choices=add_blank_choice(STATUS_CHOICES), required=False, initial='')\n cluster = forms.ModelChoiceField(queryset=Cluster.objects.all(), required=False)\n tenant = forms.ModelChoiceField(queryset=Tenant.objects.all(), required=False)\n platform = forms.ModelChoiceField(queryset=Platform.objects.all(), required=False)\n vcpus = forms.IntegerField(required=False, label='vCPUs')\n memory = forms.IntegerField(required=False, label='Memory (MB)')\n disk = forms.IntegerField(required=False, label='Disk (GB)')\n comments = CommentField(widget=SmallTextarea)\n\n class Meta:\n nullable_fields = ['tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments']\n\n\ndef vm_status_choices():\n status_counts = {}\n for status in VirtualMachine.objects.values('status').annotate(count=Count('status')).order_by('status'):\n status_counts[status['status']] = status['count']\n return [(s[0], '{} ({})'.format(s[1], status_counts.get(s[0], 0))) for s in STATUS_CHOICES]\n\n\nclass VirtualMachineFilterForm(BootstrapMixin, CustomFieldFilterForm):\n model = VirtualMachine\n q = forms.CharField(required=False, label='Search')\n cluster_group = FilterChoiceField(\n queryset=ClusterGroup.objects.all(),\n to_field_name='slug',\n null_option=(0, 'None'),\n )\n cluster_id = FilterChoiceField(\n queryset=Cluster.objects.annotate(filter_count=Count('virtual_machines')),\n label='Cluster'\n )\n status = forms.MultipleChoiceField(choices=vm_status_choices, required=False)\n\n\n#\n# VM interfaces\n#\n\nclass InterfaceForm(BootstrapMixin, forms.ModelForm):\n\n class Meta:\n model = Interface\n fields = ['virtual_machine', 'name', 'form_factor', 'enabled', 'mac_address', 'mtu', 'description']\n widgets = {\n 'virtual_machine': forms.HiddenInput(),\n 'form_factor': forms.HiddenInput(),\n }\n\n\nclass InterfaceCreateForm(ComponentForm):\n name_pattern = ExpandableNameField(label='Name')\n form_factor = forms.ChoiceField(choices=VIFACE_FF_CHOICES, initial=IFACE_FF_VIRTUAL, widget=forms.HiddenInput())\n enabled = forms.BooleanField(required=False)\n mtu = forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')\n mac_address = MACAddressFormField(required=False, label='MAC Address')\n description = forms.CharField(max_length=100, required=False)\n\n def __init__(self, *args, **kwargs):\n\n # Set interfaces enabled by default\n kwargs['initial'] = kwargs.get('initial', {}).copy()\n kwargs['initial'].update({'enabled': True})\n\n super(InterfaceCreateForm, self).__init__(*args, **kwargs)\n\n\nclass InterfaceBulkEditForm(BootstrapMixin, BulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=Interface.objects.all(), widget=forms.MultipleHiddenInput)\n virtual_machine = forms.ModelChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.HiddenInput)\n enabled = forms.NullBooleanField(required=False, widget=BulkEditNullBooleanSelect)\n mtu = 
forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')\n description = forms.CharField(max_length=100, required=False)\n\n class Meta:\n nullable_fields = ['mtu', 'description']\n\n\n#\n# Bulk VirtualMachine component creation\n#\n\nclass VirtualMachineBulkAddComponentForm(BootstrapMixin, forms.Form):\n pk = forms.ModelMultipleChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.MultipleHiddenInput)\n name_pattern = ExpandableNameField(label='Name')\n\n\nclass VirtualMachineBulkAddInterfaceForm(VirtualMachineBulkAddComponentForm):\n form_factor = forms.ChoiceField(choices=VIFACE_FF_CHOICES, initial=IFACE_FF_VIRTUAL, widget=forms.HiddenInput())\n enabled = forms.BooleanField(required=False, initial=True)\n mtu = forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')\n description = forms.CharField(max_length=100, required=False)\n", "path": "netbox/virtualization/forms.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nfrom mptt.forms import TreeNodeChoiceField\n\nfrom django import forms\nfrom django.db.models import Count\n\nfrom dcim.constants import VIFACE_FF_CHOICES\nfrom dcim.formfields import MACAddressFormField\nfrom dcim.models import Device, Interface, Platform, Rack, Region, Site\nfrom extras.forms import CustomFieldBulkEditForm, CustomFieldForm, CustomFieldFilterForm\nfrom tenancy.forms import TenancyForm\nfrom tenancy.models import Tenant\nfrom utilities.forms import (\n add_blank_choice, APISelect, APISelectMultiple, BootstrapMixin, BulkEditForm, BulkEditNullBooleanSelect,\n ChainedFieldsMixin, ChainedModelChoiceField, ChainedModelMultipleChoiceField, CommentField, ComponentForm,\n ConfirmationForm, CSVChoiceField, ExpandableNameField, FilterChoiceField, SlugField, SmallTextarea,\n)\nfrom .constants import STATUS_CHOICES\nfrom .models import Cluster, ClusterGroup, ClusterType, VirtualMachine\n\n\n#\n# Cluster types\n#\n\nclass ClusterTypeForm(BootstrapMixin, forms.ModelForm):\n slug = SlugField()\n\n class Meta:\n model = ClusterType\n fields = ['name', 'slug']\n\n\n#\n# Cluster groups\n#\n\nclass ClusterGroupForm(BootstrapMixin, forms.ModelForm):\n slug = SlugField()\n\n class Meta:\n model = ClusterGroup\n fields = ['name', 'slug']\n\n\n#\n# Clusters\n#\n\nclass ClusterForm(BootstrapMixin, CustomFieldForm):\n comments = CommentField(widget=SmallTextarea)\n\n class Meta:\n model = Cluster\n fields = ['name', 'type', 'group', 'comments']\n\n\nclass ClusterCSVForm(forms.ModelForm):\n type = forms.ModelChoiceField(\n queryset=ClusterType.objects.all(),\n to_field_name='name',\n help_text='Name of cluster type',\n error_messages={\n 'invalid_choice': 'Invalid cluster type name.',\n }\n )\n group = forms.ModelChoiceField(\n queryset=ClusterGroup.objects.all(),\n to_field_name='name',\n required=False,\n help_text='Name of cluster group',\n error_messages={\n 'invalid_choice': 'Invalid cluster group name.',\n }\n )\n\n class Meta:\n model = Cluster\n fields = ['name', 'type', 'group', 'comments']\n\n\nclass ClusterBulkEditForm(BootstrapMixin, CustomFieldBulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=Cluster.objects.all(), widget=forms.MultipleHiddenInput)\n type = forms.ModelChoiceField(queryset=ClusterType.objects.all(), required=False)\n group = forms.ModelChoiceField(queryset=ClusterGroup.objects.all(), required=False)\n comments = CommentField(widget=SmallTextarea)\n\n class Meta:\n nullable_fields = ['group', 'comments']\n\n\nclass ClusterFilterForm(BootstrapMixin, 
CustomFieldFilterForm):\n model = Cluster\n q = forms.CharField(required=False, label='Search')\n group = FilterChoiceField(\n queryset=ClusterGroup.objects.annotate(filter_count=Count('clusters')),\n to_field_name='slug',\n null_option=(0, 'None'),\n required=False,\n )\n type = FilterChoiceField(\n queryset=ClusterType.objects.annotate(filter_count=Count('clusters')),\n to_field_name='slug',\n required=False,\n )\n\n\nclass ClusterAddDevicesForm(BootstrapMixin, ChainedFieldsMixin, forms.Form):\n region = TreeNodeChoiceField(\n queryset=Region.objects.all(),\n required=False,\n widget=forms.Select(\n attrs={'filter-for': 'site', 'nullable': 'true'}\n )\n )\n site = ChainedModelChoiceField(\n queryset=Site.objects.all(),\n chains=(\n ('region', 'region'),\n ),\n required=False,\n widget=APISelect(\n api_url='/api/dcim/sites/?region_id={{region}}',\n attrs={'filter-for': 'rack'}\n )\n )\n rack = ChainedModelChoiceField(\n queryset=Rack.objects.all(),\n chains=(\n ('site', 'site'),\n ),\n required=False,\n widget=APISelect(\n api_url='/api/dcim/racks/?site_id={{site}}',\n attrs={'filter-for': 'devices', 'nullable': 'true'}\n )\n )\n devices = ChainedModelMultipleChoiceField(\n queryset=Device.objects.filter(cluster__isnull=True),\n chains=(\n ('site', 'site'),\n ('rack', 'rack'),\n ),\n label='Device',\n widget=APISelectMultiple(\n api_url='/api/dcim/devices/?site_id={{site}}&rack_id={{rack}}',\n display_field='display_name',\n disabled_indicator='cluster'\n )\n )\n\n class Meta:\n fields = ['region', 'site', 'rack', 'devices']\n\n def __init__(self, *args, **kwargs):\n\n super(ClusterAddDevicesForm, self).__init__(*args, **kwargs)\n\n self.fields['devices'].choices = []\n\n\nclass ClusterRemoveDevicesForm(ConfirmationForm):\n pk = forms.ModelMultipleChoiceField(queryset=Device.objects.all(), widget=forms.MultipleHiddenInput)\n\n\n#\n# Virtual Machines\n#\n\nclass VirtualMachineForm(BootstrapMixin, TenancyForm, CustomFieldForm):\n cluster_group = forms.ModelChoiceField(\n queryset=ClusterGroup.objects.all(),\n required=False,\n widget=forms.Select(\n attrs={'filter-for': 'cluster', 'nullable': 'true'}\n )\n )\n cluster = ChainedModelChoiceField(\n queryset=Cluster.objects.all(),\n chains=(\n ('group', 'cluster_group'),\n ),\n widget=APISelect(\n api_url='/api/virtualization/clusters/?group_id={{cluster_group}}'\n )\n )\n\n class Meta:\n model = VirtualMachine\n fields = [\n 'name', 'status', 'cluster_group', 'cluster', 'tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments',\n ]\n\n def __init__(self, *args, **kwargs):\n\n # Initialize helper selector\n instance = kwargs.get('instance')\n if instance.pk and instance.cluster is not None:\n initial = kwargs.get('initial', {}).copy()\n initial['cluster_group'] = instance.cluster.group\n kwargs['initial'] = initial\n\n super(VirtualMachineForm, self).__init__(*args, **kwargs)\n\n\nclass VirtualMachineCSVForm(forms.ModelForm):\n status = CSVChoiceField(\n choices=STATUS_CHOICES,\n required=False,\n help_text='Operational status of device'\n )\n cluster = forms.ModelChoiceField(\n queryset=Cluster.objects.all(),\n to_field_name='name',\n help_text='Name of parent cluster',\n error_messages={\n 'invalid_choice': 'Invalid cluster name.',\n }\n )\n tenant = forms.ModelChoiceField(\n queryset=Tenant.objects.all(),\n required=False,\n to_field_name='name',\n help_text='Name of assigned tenant',\n error_messages={\n 'invalid_choice': 'Tenant not found.'\n }\n )\n platform = forms.ModelChoiceField(\n queryset=Platform.objects.all(),\n 
required=False,\n to_field_name='name',\n help_text='Name of assigned platform',\n error_messages={\n 'invalid_choice': 'Invalid platform.',\n }\n )\n\n class Meta:\n model = VirtualMachine\n fields = ['name', 'status', 'cluster', 'tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments']\n\n\nclass VirtualMachineBulkEditForm(BootstrapMixin, CustomFieldBulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.MultipleHiddenInput)\n status = forms.ChoiceField(choices=add_blank_choice(STATUS_CHOICES), required=False, initial='')\n cluster = forms.ModelChoiceField(queryset=Cluster.objects.all(), required=False)\n tenant = forms.ModelChoiceField(queryset=Tenant.objects.all(), required=False)\n platform = forms.ModelChoiceField(queryset=Platform.objects.all(), required=False)\n vcpus = forms.IntegerField(required=False, label='vCPUs')\n memory = forms.IntegerField(required=False, label='Memory (MB)')\n disk = forms.IntegerField(required=False, label='Disk (GB)')\n comments = CommentField(widget=SmallTextarea)\n\n class Meta:\n nullable_fields = ['tenant', 'platform', 'vcpus', 'memory', 'disk', 'comments']\n\n\ndef vm_status_choices():\n status_counts = {}\n for status in VirtualMachine.objects.values('status').annotate(count=Count('status')).order_by('status'):\n status_counts[status['status']] = status['count']\n return [(s[0], '{} ({})'.format(s[1], status_counts.get(s[0], 0))) for s in STATUS_CHOICES]\n\n\nclass VirtualMachineFilterForm(BootstrapMixin, CustomFieldFilterForm):\n model = VirtualMachine\n q = forms.CharField(required=False, label='Search')\n cluster_group = FilterChoiceField(\n queryset=ClusterGroup.objects.all(),\n to_field_name='slug',\n null_option=(0, 'None'),\n )\n cluster_id = FilterChoiceField(\n queryset=Cluster.objects.annotate(filter_count=Count('virtual_machines')),\n label='Cluster'\n )\n status = forms.MultipleChoiceField(choices=vm_status_choices, required=False)\n\n\n#\n# VM interfaces\n#\n\nclass InterfaceForm(BootstrapMixin, forms.ModelForm):\n\n class Meta:\n model = Interface\n fields = ['virtual_machine', 'name', 'form_factor', 'enabled', 'mac_address', 'mtu', 'description']\n widgets = {\n 'virtual_machine': forms.HiddenInput(),\n }\n\n\nclass InterfaceCreateForm(ComponentForm):\n name_pattern = ExpandableNameField(label='Name')\n form_factor = forms.ChoiceField(choices=VIFACE_FF_CHOICES)\n enabled = forms.BooleanField(required=False)\n mtu = forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')\n mac_address = MACAddressFormField(required=False, label='MAC Address')\n description = forms.CharField(max_length=100, required=False)\n\n def __init__(self, *args, **kwargs):\n\n # Set interfaces enabled by default\n kwargs['initial'] = kwargs.get('initial', {}).copy()\n kwargs['initial'].update({'enabled': True})\n\n super(InterfaceCreateForm, self).__init__(*args, **kwargs)\n\n\nclass InterfaceBulkEditForm(BootstrapMixin, BulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=Interface.objects.all(), widget=forms.MultipleHiddenInput)\n virtual_machine = forms.ModelChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.HiddenInput)\n enabled = forms.NullBooleanField(required=False, widget=BulkEditNullBooleanSelect)\n mtu = forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')\n description = forms.CharField(max_length=100, required=False)\n\n class Meta:\n nullable_fields = ['mtu', 'description']\n\n\n#\n# Bulk VirtualMachine component 
creation\n#\n\nclass VirtualMachineBulkAddComponentForm(BootstrapMixin, forms.Form):\n pk = forms.ModelMultipleChoiceField(queryset=VirtualMachine.objects.all(), widget=forms.MultipleHiddenInput)\n name_pattern = ExpandableNameField(label='Name')\n\n\nclass VirtualMachineBulkAddInterfaceForm(VirtualMachineBulkAddComponentForm):\n form_factor = forms.ChoiceField(choices=VIFACE_FF_CHOICES)\n enabled = forms.BooleanField(required=False, initial=True)\n mtu = forms.IntegerField(required=False, min_value=1, max_value=32767, label='MTU')\n description = forms.CharField(max_length=100, required=False)\n", "path": "netbox/virtualization/forms.py"}]}
| 3,842 | 125 |
gh_patches_debug_33644
|
rasdani/github-patches
|
git_diff
|
cisagov__manage.get.gov-1275
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement basic reporting functionality for MVP (besides Domain Growth report)
### Story
As an Admin, I want to quickly export domain data reports directly from the domains page (/admin/registrar/domain/) so that I can easily access and analyze the domain data.
### Acceptance Criteria
- [ ] Three reports are available to download on the domains page:
- [ ] [Domains by type](https://docs.google.com/spreadsheets/d/1_nMU2obW22U6NlOSC2ARxf3PpsJnSe2wMo5AyLSzXzk/edit?usp=sharing) (sorted by domain name)
- [ ] [current-full.csv](https://github.com/cisagov/dotgov-data/blob/main/current-full.csv) (sorted by domain name, then agency, then domain type)
- [ ] [current-federal.csv](https://github.com/cisagov/dotgov-data/blob/main/current-federal.csv) (sorted by domain name, then agency, then domain type)
- [ ] Each CSV report should contain accurate and up-to-date domain data from the database, sorted in the ways they are in the examples above.
- [ ] Single dropdown with the three report options which the user can select
- [ ] Clicking on each report triggers an immediate download of the relevant CSV report
- [ ] The UI components should be consistent with the existing design language of the admin portal.
### Additional Context
- This feature is a stop-gap measure, meant to provide immediate access to crucial reports while the ideal report interface is being developed. Future work is at #997.
- Security email may be pulled from the .gov database rather than through an EPP call to the registry.
### Issue Links
🔄 Relates to: #938 #143 #1075
--- END ISSUE ---
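For illustration only, a minimal sketch of how one of these downloads could be wired up in Django — the view name, URL hookup and filename are assumptions, while `export_data_type_to_csv` is the helper defined in the repository file quoted below:
```python
# Hedged sketch: only export_data_type_to_csv comes from src/registrar/utility/csv_export.py;
# the view name and attachment filename are assumptions made for illustration.
from django.http import HttpResponse

from registrar.utility.csv_export import export_data_type_to_csv


def download_domains_by_type(request):
    """Return the 'Domains by type' report as a CSV attachment."""
    response = HttpResponse(content_type="text/csv")
    response["Content-Disposition"] = 'attachment; filename="domains-by-type.csv"'
    # HttpResponse exposes write(), so the csv.writer inside the helper can treat it as a file.
    export_data_type_to_csv(response)
    return response
```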
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/utility/csv_export.py`
Content:
```
1 import csv
2 from registrar.models.domain import Domain
3 from registrar.models.domain_information import DomainInformation
4 from registrar.models.public_contact import PublicContact
5
6
7 def export_domains_to_writer(writer, columns, sort_fields, filter_condition):
8 # write columns headers to writer
9 writer.writerow(columns)
10
11 domainInfos = DomainInformation.objects.filter(**filter_condition).order_by(
12 *sort_fields
13 )
14 for domainInfo in domainInfos:
15 security_contacts = domainInfo.domain.contacts.filter(
16 contact_type=PublicContact.ContactTypeChoices.SECURITY
17 )
18
19 # create a dictionary of fields which can be included in output
20 FIELDS = {
21 "Domain name": domainInfo.domain.name,
22 "Domain type": domainInfo.get_organization_type_display()
23 + " - "
24 + domainInfo.get_federal_type_display()
25 if domainInfo.federal_type
26 else domainInfo.get_organization_type_display(),
27 "Agency": domainInfo.federal_agency,
28 "Organization name": domainInfo.organization_name,
29 "City": domainInfo.city,
30 "State": domainInfo.state_territory,
31 "AO": domainInfo.authorizing_official.first_name
32 + " "
33 + domainInfo.authorizing_official.last_name
34 if domainInfo.authorizing_official
35 else " ",
36 "AO email": domainInfo.authorizing_official.email
37 if domainInfo.authorizing_official
38 else " ",
39 "Security Contact Email": security_contacts[0].email
40 if security_contacts
41 else " ",
42 "Status": domainInfo.domain.state,
43 "Expiration Date": domainInfo.domain.expiration_date,
44 }
45 writer.writerow([FIELDS.get(column, "") for column in columns])
46
47
48 def export_data_type_to_csv(csv_file):
49 writer = csv.writer(csv_file)
50 # define columns to include in export
51 columns = [
52 "Domain name",
53 "Domain type",
54 "Agency",
55 "Organization name",
56 "City",
57 "State",
58 "AO",
59 "AO email",
60 "Security Contact Email",
61 "Status",
62 "Expiration Date",
63 ]
64 sort_fields = ["domain__name"]
65 filter_condition = {
66 "domain__state__in": [
67 Domain.State.READY,
68 Domain.State.DNS_NEEDED,
69 Domain.State.ON_HOLD,
70 ],
71 }
72 export_domains_to_writer(writer, columns, sort_fields, filter_condition)
73
74
75 def export_data_full_to_csv(csv_file):
76 writer = csv.writer(csv_file)
77 # define columns to include in export
78 columns = [
79 "Domain name",
80 "Domain type",
81 "Agency",
82 "Organization name",
83 "City",
84 "State",
85 "Security Contact Email",
86 ]
87 sort_fields = ["domain__name", "federal_agency", "organization_type"]
88 filter_condition = {
89 "domain__state__in": [
90 Domain.State.READY,
91 Domain.State.DNS_NEEDED,
92 Domain.State.ON_HOLD,
93 ],
94 }
95 export_domains_to_writer(writer, columns, sort_fields, filter_condition)
96
97
98 def export_data_federal_to_csv(csv_file):
99 writer = csv.writer(csv_file)
100 # define columns to include in export
101 columns = [
102 "Domain name",
103 "Domain type",
104 "Agency",
105 "Organization name",
106 "City",
107 "State",
108 "Security Contact Email",
109 ]
110 sort_fields = ["domain__name", "federal_agency", "organization_type"]
111 filter_condition = {
112 "organization_type__icontains": "federal",
113 "domain__state__in": [
114 Domain.State.READY,
115 Domain.State.DNS_NEEDED,
116 Domain.State.ON_HOLD,
117 ],
118 }
119 export_domains_to_writer(writer, columns, sort_fields, filter_condition)
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/registrar/utility/csv_export.py b/src/registrar/utility/csv_export.py
--- a/src/registrar/utility/csv_export.py
+++ b/src/registrar/utility/csv_export.py
@@ -2,6 +2,8 @@
from registrar.models.domain import Domain
from registrar.models.domain_information import DomainInformation
from registrar.models.public_contact import PublicContact
+from django.db.models import Value
+from django.db.models.functions import Coalesce
def export_domains_to_writer(writer, columns, sort_fields, filter_condition):
@@ -61,7 +63,13 @@
"Status",
"Expiration Date",
]
- sort_fields = ["domain__name"]
+ # Coalesce is used to replace federal_type of None with ZZZZZ
+ sort_fields = [
+ "organization_type",
+ Coalesce("federal_type", Value("ZZZZZ")),
+ "federal_agency",
+ "domain__name",
+ ]
filter_condition = {
"domain__state__in": [
Domain.State.READY,
@@ -84,7 +92,13 @@
"State",
"Security Contact Email",
]
- sort_fields = ["domain__name", "federal_agency", "organization_type"]
+ # Coalesce is used to replace federal_type of None with ZZZZZ
+ sort_fields = [
+ "organization_type",
+ Coalesce("federal_type", Value("ZZZZZ")),
+ "federal_agency",
+ "domain__name",
+ ]
filter_condition = {
"domain__state__in": [
Domain.State.READY,
@@ -107,7 +121,13 @@
"State",
"Security Contact Email",
]
- sort_fields = ["domain__name", "federal_agency", "organization_type"]
+ # Coalesce is used to replace federal_type of None with ZZZZZ
+ sort_fields = [
+ "organization_type",
+ Coalesce("federal_type", Value("ZZZZZ")),
+ "federal_agency",
+ "domain__name",
+ ]
filter_condition = {
"organization_type__icontains": "federal",
"domain__state__in": [
|
{"golden_diff": "diff --git a/src/registrar/utility/csv_export.py b/src/registrar/utility/csv_export.py\n--- a/src/registrar/utility/csv_export.py\n+++ b/src/registrar/utility/csv_export.py\n@@ -2,6 +2,8 @@\n from registrar.models.domain import Domain\n from registrar.models.domain_information import DomainInformation\n from registrar.models.public_contact import PublicContact\n+from django.db.models import Value\n+from django.db.models.functions import Coalesce\n \n \n def export_domains_to_writer(writer, columns, sort_fields, filter_condition):\n@@ -61,7 +63,13 @@\n \"Status\",\n \"Expiration Date\",\n ]\n- sort_fields = [\"domain__name\"]\n+ # Coalesce is used to replace federal_type of None with ZZZZZ\n+ sort_fields = [\n+ \"organization_type\",\n+ Coalesce(\"federal_type\", Value(\"ZZZZZ\")),\n+ \"federal_agency\",\n+ \"domain__name\",\n+ ]\n filter_condition = {\n \"domain__state__in\": [\n Domain.State.READY,\n@@ -84,7 +92,13 @@\n \"State\",\n \"Security Contact Email\",\n ]\n- sort_fields = [\"domain__name\", \"federal_agency\", \"organization_type\"]\n+ # Coalesce is used to replace federal_type of None with ZZZZZ\n+ sort_fields = [\n+ \"organization_type\",\n+ Coalesce(\"federal_type\", Value(\"ZZZZZ\")),\n+ \"federal_agency\",\n+ \"domain__name\",\n+ ]\n filter_condition = {\n \"domain__state__in\": [\n Domain.State.READY,\n@@ -107,7 +121,13 @@\n \"State\",\n \"Security Contact Email\",\n ]\n- sort_fields = [\"domain__name\", \"federal_agency\", \"organization_type\"]\n+ # Coalesce is used to replace federal_type of None with ZZZZZ\n+ sort_fields = [\n+ \"organization_type\",\n+ Coalesce(\"federal_type\", Value(\"ZZZZZ\")),\n+ \"federal_agency\",\n+ \"domain__name\",\n+ ]\n filter_condition = {\n \"organization_type__icontains\": \"federal\",\n \"domain__state__in\": [\n", "issue": "Implement basic reporting functionality for MVP (besides Domain Growth report)\n### Story\r\n\r\nAs an Admin, I want to quickly export domain data reports directly from the domains page (/admin/registrar/domain/) so that I can easily access and analyze the domain data.\r\n\r\n### Acceptance Criteria\r\n\r\n- [ ] Three reports are available to download on the domains page:\r\n - [ ] [Domains by type](https://docs.google.com/spreadsheets/d/1_nMU2obW22U6NlOSC2ARxf3PpsJnSe2wMo5AyLSzXzk/edit?usp=sharing) (sorted by domain name)\r\n - [ ] [current-full.csv](https://github.com/cisagov/dotgov-data/blob/main/current-full.csv) (sorted by domain name, then agency, then domain type)\r\n - [ ] [current-federal.csv](https://github.com/cisagov/dotgov-data/blob/main/current-federal.csv) (sorted by domain name, then agency, then domain type)\r\n- [ ] Each CSV report should contain accurate and up-to-date domain data from the database, sorted in the ways they are in the examples above.\r\n- [ ] Single dropdown with the three report options which the user can select\r\n- [ ] Clicking on each report triggers an immediate download of the relevant CSV report\r\n- [ ] The UI components should be consistent with the existing design language of the admin portal.\r\n\r\n### Additional Context\r\n\r\n- This feature is a stop-gap measure, meant to provide immediate access to crucial reports while the ideal report interface is being developed. 
Future work is at #997.\r\n\r\n- Security email may be pulled from .gov database rather than thru EPP call to registry.\r\n\r\n### Issue Links\r\n\r\n\ud83d\udd04 Relates to: #938 #143 #1075 \n", "before_files": [{"content": "import csv\nfrom registrar.models.domain import Domain\nfrom registrar.models.domain_information import DomainInformation\nfrom registrar.models.public_contact import PublicContact\n\n\ndef export_domains_to_writer(writer, columns, sort_fields, filter_condition):\n # write columns headers to writer\n writer.writerow(columns)\n\n domainInfos = DomainInformation.objects.filter(**filter_condition).order_by(\n *sort_fields\n )\n for domainInfo in domainInfos:\n security_contacts = domainInfo.domain.contacts.filter(\n contact_type=PublicContact.ContactTypeChoices.SECURITY\n )\n\n # create a dictionary of fields which can be included in output\n FIELDS = {\n \"Domain name\": domainInfo.domain.name,\n \"Domain type\": domainInfo.get_organization_type_display()\n + \" - \"\n + domainInfo.get_federal_type_display()\n if domainInfo.federal_type\n else domainInfo.get_organization_type_display(),\n \"Agency\": domainInfo.federal_agency,\n \"Organization name\": domainInfo.organization_name,\n \"City\": domainInfo.city,\n \"State\": domainInfo.state_territory,\n \"AO\": domainInfo.authorizing_official.first_name\n + \" \"\n + domainInfo.authorizing_official.last_name\n if domainInfo.authorizing_official\n else \" \",\n \"AO email\": domainInfo.authorizing_official.email\n if domainInfo.authorizing_official\n else \" \",\n \"Security Contact Email\": security_contacts[0].email\n if security_contacts\n else \" \",\n \"Status\": domainInfo.domain.state,\n \"Expiration Date\": domainInfo.domain.expiration_date,\n }\n writer.writerow([FIELDS.get(column, \"\") for column in columns])\n\n\ndef export_data_type_to_csv(csv_file):\n writer = csv.writer(csv_file)\n # define columns to include in export\n columns = [\n \"Domain name\",\n \"Domain type\",\n \"Agency\",\n \"Organization name\",\n \"City\",\n \"State\",\n \"AO\",\n \"AO email\",\n \"Security Contact Email\",\n \"Status\",\n \"Expiration Date\",\n ]\n sort_fields = [\"domain__name\"]\n filter_condition = {\n \"domain__state__in\": [\n Domain.State.READY,\n Domain.State.DNS_NEEDED,\n Domain.State.ON_HOLD,\n ],\n }\n export_domains_to_writer(writer, columns, sort_fields, filter_condition)\n\n\ndef export_data_full_to_csv(csv_file):\n writer = csv.writer(csv_file)\n # define columns to include in export\n columns = [\n \"Domain name\",\n \"Domain type\",\n \"Agency\",\n \"Organization name\",\n \"City\",\n \"State\",\n \"Security Contact Email\",\n ]\n sort_fields = [\"domain__name\", \"federal_agency\", \"organization_type\"]\n filter_condition = {\n \"domain__state__in\": [\n Domain.State.READY,\n Domain.State.DNS_NEEDED,\n Domain.State.ON_HOLD,\n ],\n }\n export_domains_to_writer(writer, columns, sort_fields, filter_condition)\n\n\ndef export_data_federal_to_csv(csv_file):\n writer = csv.writer(csv_file)\n # define columns to include in export\n columns = [\n \"Domain name\",\n \"Domain type\",\n \"Agency\",\n \"Organization name\",\n \"City\",\n \"State\",\n \"Security Contact Email\",\n ]\n sort_fields = [\"domain__name\", \"federal_agency\", \"organization_type\"]\n filter_condition = {\n \"organization_type__icontains\": \"federal\",\n \"domain__state__in\": [\n Domain.State.READY,\n Domain.State.DNS_NEEDED,\n Domain.State.ON_HOLD,\n ],\n }\n export_domains_to_writer(writer, columns, sort_fields, filter_condition)\n", "path": 
"src/registrar/utility/csv_export.py"}], "after_files": [{"content": "import csv\nfrom registrar.models.domain import Domain\nfrom registrar.models.domain_information import DomainInformation\nfrom registrar.models.public_contact import PublicContact\nfrom django.db.models import Value\nfrom django.db.models.functions import Coalesce\n\n\ndef export_domains_to_writer(writer, columns, sort_fields, filter_condition):\n # write columns headers to writer\n writer.writerow(columns)\n\n domainInfos = DomainInformation.objects.filter(**filter_condition).order_by(\n *sort_fields\n )\n for domainInfo in domainInfos:\n security_contacts = domainInfo.domain.contacts.filter(\n contact_type=PublicContact.ContactTypeChoices.SECURITY\n )\n\n # create a dictionary of fields which can be included in output\n FIELDS = {\n \"Domain name\": domainInfo.domain.name,\n \"Domain type\": domainInfo.get_organization_type_display()\n + \" - \"\n + domainInfo.get_federal_type_display()\n if domainInfo.federal_type\n else domainInfo.get_organization_type_display(),\n \"Agency\": domainInfo.federal_agency,\n \"Organization name\": domainInfo.organization_name,\n \"City\": domainInfo.city,\n \"State\": domainInfo.state_territory,\n \"AO\": domainInfo.authorizing_official.first_name\n + \" \"\n + domainInfo.authorizing_official.last_name\n if domainInfo.authorizing_official\n else \" \",\n \"AO email\": domainInfo.authorizing_official.email\n if domainInfo.authorizing_official\n else \" \",\n \"Security Contact Email\": security_contacts[0].email\n if security_contacts\n else \" \",\n \"Status\": domainInfo.domain.state,\n \"Expiration Date\": domainInfo.domain.expiration_date,\n }\n writer.writerow([FIELDS.get(column, \"\") for column in columns])\n\n\ndef export_data_type_to_csv(csv_file):\n writer = csv.writer(csv_file)\n # define columns to include in export\n columns = [\n \"Domain name\",\n \"Domain type\",\n \"Agency\",\n \"Organization name\",\n \"City\",\n \"State\",\n \"AO\",\n \"AO email\",\n \"Security Contact Email\",\n \"Status\",\n \"Expiration Date\",\n ]\n # Coalesce is used to replace federal_type of None with ZZZZZ\n sort_fields = [\n \"organization_type\",\n Coalesce(\"federal_type\", Value(\"ZZZZZ\")),\n \"federal_agency\",\n \"domain__name\",\n ]\n filter_condition = {\n \"domain__state__in\": [\n Domain.State.READY,\n Domain.State.DNS_NEEDED,\n Domain.State.ON_HOLD,\n ],\n }\n export_domains_to_writer(writer, columns, sort_fields, filter_condition)\n\n\ndef export_data_full_to_csv(csv_file):\n writer = csv.writer(csv_file)\n # define columns to include in export\n columns = [\n \"Domain name\",\n \"Domain type\",\n \"Agency\",\n \"Organization name\",\n \"City\",\n \"State\",\n \"Security Contact Email\",\n ]\n # Coalesce is used to replace federal_type of None with ZZZZZ\n sort_fields = [\n \"organization_type\",\n Coalesce(\"federal_type\", Value(\"ZZZZZ\")),\n \"federal_agency\",\n \"domain__name\",\n ]\n filter_condition = {\n \"domain__state__in\": [\n Domain.State.READY,\n Domain.State.DNS_NEEDED,\n Domain.State.ON_HOLD,\n ],\n }\n export_domains_to_writer(writer, columns, sort_fields, filter_condition)\n\n\ndef export_data_federal_to_csv(csv_file):\n writer = csv.writer(csv_file)\n # define columns to include in export\n columns = [\n \"Domain name\",\n \"Domain type\",\n \"Agency\",\n \"Organization name\",\n \"City\",\n \"State\",\n \"Security Contact Email\",\n ]\n # Coalesce is used to replace federal_type of None with ZZZZZ\n sort_fields = [\n \"organization_type\",\n 
Coalesce(\"federal_type\", Value(\"ZZZZZ\")),\n \"federal_agency\",\n \"domain__name\",\n ]\n filter_condition = {\n \"organization_type__icontains\": \"federal\",\n \"domain__state__in\": [\n Domain.State.READY,\n Domain.State.DNS_NEEDED,\n Domain.State.ON_HOLD,\n ],\n }\n export_domains_to_writer(writer, columns, sort_fields, filter_condition)\n", "path": "src/registrar/utility/csv_export.py"}]}
| 1,682 | 489 |
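As a standalone illustration of the ordering idiom the diff above introduces (the queryset variable is arbitrary; `DomainInformation` is the model already imported in the quoted file):
```python
# Coalesce() substitutes the sentinel "ZZZZZ" for a NULL federal_type so those rows sort last,
# mirroring the sort order added in the patch above.
from django.db.models import Value
from django.db.models.functions import Coalesce

from registrar.models.domain_information import DomainInformation

ordered = DomainInformation.objects.order_by(
    "organization_type",
    Coalesce("federal_type", Value("ZZZZZ")),
    "federal_agency",
    "domain__name",
)
```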
gh_patches_debug_7967
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-7861
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Slim sidebar always showing scrollbars
On Wagtail 2.15.1, with Firefox 93, scrollbars are always showing on the slim sidebar, causing the logo to be clipped:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/ui/sidebar.py`
Content:
```
1 from typing import List
2
3 from django import forms
4 from django.urls import reverse
5 from django.utils.functional import cached_property
6
7 from wagtail.admin.staticfiles import versioned_static
8 from wagtail.core.telepath import Adapter, adapter
9
10
11 class BaseSidebarAdapter(Adapter):
12 @cached_property
13 def media(self):
14 return forms.Media(js=[
15 versioned_static('wagtailadmin/js/sidebar.js'),
16 ])
17
18
19 # Main menu
20
21 class MenuItem:
22 def __init__(self, name: str, label: str, icon_name: str = '', classnames: str = ''):
23 self.name = name
24 self.label = label
25 self.icon_name = icon_name
26 self.classnames = classnames
27
28 def js_args(self):
29 return [
30 {
31 'name': self.name,
32 'label': self.label,
33 'icon_name': self.icon_name,
34 'classnames': self.classnames,
35 }
36 ]
37
38
39 @adapter('wagtail.sidebar.LinkMenuItem', base=BaseSidebarAdapter)
40 class LinkMenuItem(MenuItem):
41 def __init__(self, name: str, label: str, url: str, icon_name: str = '', classnames: str = ''):
42 super().__init__(name, label, icon_name=icon_name, classnames=classnames)
43 self.url = url
44
45 def js_args(self):
46 args = super().js_args()
47 args[0]['url'] = self.url
48 return args
49
50 def __eq__(self, other):
51 return (
52 self.__class__ == other.__class__
53 and self.name == other.name
54 and self.label == other.label
55 and self.url == other.url
56 and self.icon_name == other.icon_name
57 and self.classnames == other.classnames
58 )
59
60
61 @adapter('wagtail.sidebar.SubMenuItem', base=BaseSidebarAdapter)
62 class SubMenuItem(MenuItem):
63 def __init__(self, name: str, label: str, menu_items: List[MenuItem], icon_name: str = '', classnames: str = '', footer_text: str = ''):
64 super().__init__(name, label, icon_name=icon_name, classnames=classnames)
65 self.menu_items = menu_items
66 self.footer_text = footer_text
67
68 def js_args(self):
69 args = super().js_args()
70 args[0]['footer_text'] = self.footer_text
71 args.append(self.menu_items)
72 return args
73
74 def __eq__(self, other):
75 return (
76 self.__class__ == other.__class__
77 and self.name == other.name
78 and self.label == other.label
79 and self.menu_items == other.menu_items
80 and self.icon_name == other.icon_name
81 and self.classnames == other.classnames
82 and self.footer_text == other.footer_text
83 )
84
85
86 @adapter('wagtail.sidebar.PageExplorerMenuItem', base=BaseSidebarAdapter)
87 class PageExplorerMenuItem(LinkMenuItem):
88 def __init__(self, name: str, label: str, url: str, start_page_id: int, icon_name: str = '', classnames: str = ''):
89 super().__init__(name, label, url, icon_name=icon_name, classnames=classnames)
90 self.start_page_id = start_page_id
91
92 def js_args(self):
93 args = super().js_args()
94 args.append(self.start_page_id)
95 return args
96
97 def __eq__(self, other):
98 return (
99 self.__class__ == other.__class__
100 and self.name == other.name
101 and self.label == other.label
102 and self.url == other.url
103 and self.start_page_id == other.start_page_id
104 and self.icon_name == other.icon_name
105 and self.classnames == other.classnames
106 )
107
108
109 # Modules
110
111 @adapter('wagtail.sidebar.WagtailBrandingModule', base=BaseSidebarAdapter)
112 class WagtailBrandingModule:
113 def js_args(self):
114 return [
115 reverse('wagtailadmin_home'),
116 {
117 'mobileLogo': versioned_static('wagtailadmin/images/wagtail-logo.svg'),
118 'desktopLogoBody': versioned_static('wagtailadmin/images/logo-body.svg'),
119 'desktopLogoTail': versioned_static('wagtailadmin/images/logo-tail.svg'),
120 'desktopLogoEyeOpen': versioned_static('wagtailadmin/images/logo-eyeopen.svg'),
121 'desktopLogoEyeClosed': versioned_static('wagtailadmin/images/logo-eyeclosed.svg'),
122 }
123 ]
124
125
126 @adapter('wagtail.sidebar.CustomBrandingModule', base=BaseSidebarAdapter)
127 class CustomBrandingModule:
128 def __init__(self, html, collapsible=False):
129 self.html = html
130 self.collapsible = collapsible
131
132 def js_args(self):
133 return [
134 self.html,
135 self.collapsible,
136 ]
137
138
139 @adapter('wagtail.sidebar.SearchModule', base=BaseSidebarAdapter)
140 class SearchModule:
141 def __init__(self, search_area):
142 self.search_area = search_area
143
144 def js_args(self):
145 return [
146 self.search_area.url
147 ]
148
149
150 @adapter('wagtail.sidebar.MainMenuModule', base=BaseSidebarAdapter)
151 class MainMenuModule:
152 def __init__(self, menu_items: List[MenuItem], account_menu_items: List[MenuItem], user):
153 self.menu_items = menu_items
154 self.account_menu_items = account_menu_items
155 self.user = user
156
157 def js_args(self):
158 from wagtail.admin.templatetags.wagtailadmin_tags import avatar_url
159
160 try:
161 first_name = self.user.first_name
162 except AttributeError:
163 first_name = None
164
165 return [
166 self.menu_items,
167 self.account_menu_items,
168 {
169 'name': first_name or self.user.get_username(),
170 'avatarUrl': avatar_url(self.user, size=50),
171 }
172 ]
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/admin/ui/sidebar.py b/wagtail/admin/ui/sidebar.py
--- a/wagtail/admin/ui/sidebar.py
+++ b/wagtail/admin/ui/sidebar.py
@@ -123,19 +123,6 @@
]
-@adapter('wagtail.sidebar.CustomBrandingModule', base=BaseSidebarAdapter)
-class CustomBrandingModule:
- def __init__(self, html, collapsible=False):
- self.html = html
- self.collapsible = collapsible
-
- def js_args(self):
- return [
- self.html,
- self.collapsible,
- ]
-
-
@adapter('wagtail.sidebar.SearchModule', base=BaseSidebarAdapter)
class SearchModule:
def __init__(self, search_area):
|
{"golden_diff": "diff --git a/wagtail/admin/ui/sidebar.py b/wagtail/admin/ui/sidebar.py\n--- a/wagtail/admin/ui/sidebar.py\n+++ b/wagtail/admin/ui/sidebar.py\n@@ -123,19 +123,6 @@\n ]\n \n \n-@adapter('wagtail.sidebar.CustomBrandingModule', base=BaseSidebarAdapter)\n-class CustomBrandingModule:\n- def __init__(self, html, collapsible=False):\n- self.html = html\n- self.collapsible = collapsible\n-\n- def js_args(self):\n- return [\n- self.html,\n- self.collapsible,\n- ]\n-\n-\n @adapter('wagtail.sidebar.SearchModule', base=BaseSidebarAdapter)\n class SearchModule:\n def __init__(self, search_area):\n", "issue": "Slim sidebar always showing scrollbars\nOn Wagtail 2.15.1, with Firefox 93, scrollbars are always showing on the slim sidebar, causing the logo to be clipped:\r\n\r\n\nSlim sidebar always showing scrollbars\nOn Wagtail 2.15.1, with Firefox 93, scrollbars are always showing on the slim sidebar, causing the logo to be clipped:\r\n\r\n\n", "before_files": [{"content": "from typing import List\n\nfrom django import forms\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\n\nfrom wagtail.admin.staticfiles import versioned_static\nfrom wagtail.core.telepath import Adapter, adapter\n\n\nclass BaseSidebarAdapter(Adapter):\n @cached_property\n def media(self):\n return forms.Media(js=[\n versioned_static('wagtailadmin/js/sidebar.js'),\n ])\n\n\n# Main menu\n\nclass MenuItem:\n def __init__(self, name: str, label: str, icon_name: str = '', classnames: str = ''):\n self.name = name\n self.label = label\n self.icon_name = icon_name\n self.classnames = classnames\n\n def js_args(self):\n return [\n {\n 'name': self.name,\n 'label': self.label,\n 'icon_name': self.icon_name,\n 'classnames': self.classnames,\n }\n ]\n\n\n@adapter('wagtail.sidebar.LinkMenuItem', base=BaseSidebarAdapter)\nclass LinkMenuItem(MenuItem):\n def __init__(self, name: str, label: str, url: str, icon_name: str = '', classnames: str = ''):\n super().__init__(name, label, icon_name=icon_name, classnames=classnames)\n self.url = url\n\n def js_args(self):\n args = super().js_args()\n args[0]['url'] = self.url\n return args\n\n def __eq__(self, other):\n return (\n self.__class__ == other.__class__\n and self.name == other.name\n and self.label == other.label\n and self.url == other.url\n and self.icon_name == other.icon_name\n and self.classnames == other.classnames\n )\n\n\n@adapter('wagtail.sidebar.SubMenuItem', base=BaseSidebarAdapter)\nclass SubMenuItem(MenuItem):\n def __init__(self, name: str, label: str, menu_items: List[MenuItem], icon_name: str = '', classnames: str = '', footer_text: str = ''):\n super().__init__(name, label, icon_name=icon_name, classnames=classnames)\n self.menu_items = menu_items\n self.footer_text = footer_text\n\n def js_args(self):\n args = super().js_args()\n args[0]['footer_text'] = self.footer_text\n args.append(self.menu_items)\n return args\n\n def __eq__(self, other):\n return (\n self.__class__ == other.__class__\n and self.name == other.name\n and self.label == other.label\n and self.menu_items == other.menu_items\n and self.icon_name == other.icon_name\n and self.classnames == other.classnames\n and self.footer_text == other.footer_text\n )\n\n\n@adapter('wagtail.sidebar.PageExplorerMenuItem', base=BaseSidebarAdapter)\nclass PageExplorerMenuItem(LinkMenuItem):\n def __init__(self, name: str, label: str, url: str, start_page_id: int, icon_name: str = '', classnames: str = ''):\n super().__init__(name, label, url, icon_name=icon_name, 
classnames=classnames)\n self.start_page_id = start_page_id\n\n def js_args(self):\n args = super().js_args()\n args.append(self.start_page_id)\n return args\n\n def __eq__(self, other):\n return (\n self.__class__ == other.__class__\n and self.name == other.name\n and self.label == other.label\n and self.url == other.url\n and self.start_page_id == other.start_page_id\n and self.icon_name == other.icon_name\n and self.classnames == other.classnames\n )\n\n\n# Modules\n\n@adapter('wagtail.sidebar.WagtailBrandingModule', base=BaseSidebarAdapter)\nclass WagtailBrandingModule:\n def js_args(self):\n return [\n reverse('wagtailadmin_home'),\n {\n 'mobileLogo': versioned_static('wagtailadmin/images/wagtail-logo.svg'),\n 'desktopLogoBody': versioned_static('wagtailadmin/images/logo-body.svg'),\n 'desktopLogoTail': versioned_static('wagtailadmin/images/logo-tail.svg'),\n 'desktopLogoEyeOpen': versioned_static('wagtailadmin/images/logo-eyeopen.svg'),\n 'desktopLogoEyeClosed': versioned_static('wagtailadmin/images/logo-eyeclosed.svg'),\n }\n ]\n\n\n@adapter('wagtail.sidebar.CustomBrandingModule', base=BaseSidebarAdapter)\nclass CustomBrandingModule:\n def __init__(self, html, collapsible=False):\n self.html = html\n self.collapsible = collapsible\n\n def js_args(self):\n return [\n self.html,\n self.collapsible,\n ]\n\n\n@adapter('wagtail.sidebar.SearchModule', base=BaseSidebarAdapter)\nclass SearchModule:\n def __init__(self, search_area):\n self.search_area = search_area\n\n def js_args(self):\n return [\n self.search_area.url\n ]\n\n\n@adapter('wagtail.sidebar.MainMenuModule', base=BaseSidebarAdapter)\nclass MainMenuModule:\n def __init__(self, menu_items: List[MenuItem], account_menu_items: List[MenuItem], user):\n self.menu_items = menu_items\n self.account_menu_items = account_menu_items\n self.user = user\n\n def js_args(self):\n from wagtail.admin.templatetags.wagtailadmin_tags import avatar_url\n\n try:\n first_name = self.user.first_name\n except AttributeError:\n first_name = None\n\n return [\n self.menu_items,\n self.account_menu_items,\n {\n 'name': first_name or self.user.get_username(),\n 'avatarUrl': avatar_url(self.user, size=50),\n }\n ]\n", "path": "wagtail/admin/ui/sidebar.py"}], "after_files": [{"content": "from typing import List\n\nfrom django import forms\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\n\nfrom wagtail.admin.staticfiles import versioned_static\nfrom wagtail.core.telepath import Adapter, adapter\n\n\nclass BaseSidebarAdapter(Adapter):\n @cached_property\n def media(self):\n return forms.Media(js=[\n versioned_static('wagtailadmin/js/sidebar.js'),\n ])\n\n\n# Main menu\n\nclass MenuItem:\n def __init__(self, name: str, label: str, icon_name: str = '', classnames: str = ''):\n self.name = name\n self.label = label\n self.icon_name = icon_name\n self.classnames = classnames\n\n def js_args(self):\n return [\n {\n 'name': self.name,\n 'label': self.label,\n 'icon_name': self.icon_name,\n 'classnames': self.classnames,\n }\n ]\n\n\n@adapter('wagtail.sidebar.LinkMenuItem', base=BaseSidebarAdapter)\nclass LinkMenuItem(MenuItem):\n def __init__(self, name: str, label: str, url: str, icon_name: str = '', classnames: str = ''):\n super().__init__(name, label, icon_name=icon_name, classnames=classnames)\n self.url = url\n\n def js_args(self):\n args = super().js_args()\n args[0]['url'] = self.url\n return args\n\n def __eq__(self, other):\n return (\n self.__class__ == other.__class__\n and self.name == other.name\n and self.label 
== other.label\n and self.url == other.url\n and self.icon_name == other.icon_name\n and self.classnames == other.classnames\n )\n\n\n@adapter('wagtail.sidebar.SubMenuItem', base=BaseSidebarAdapter)\nclass SubMenuItem(MenuItem):\n def __init__(self, name: str, label: str, menu_items: List[MenuItem], icon_name: str = '', classnames: str = '', footer_text: str = ''):\n super().__init__(name, label, icon_name=icon_name, classnames=classnames)\n self.menu_items = menu_items\n self.footer_text = footer_text\n\n def js_args(self):\n args = super().js_args()\n args[0]['footer_text'] = self.footer_text\n args.append(self.menu_items)\n return args\n\n def __eq__(self, other):\n return (\n self.__class__ == other.__class__\n and self.name == other.name\n and self.label == other.label\n and self.menu_items == other.menu_items\n and self.icon_name == other.icon_name\n and self.classnames == other.classnames\n and self.footer_text == other.footer_text\n )\n\n\n@adapter('wagtail.sidebar.PageExplorerMenuItem', base=BaseSidebarAdapter)\nclass PageExplorerMenuItem(LinkMenuItem):\n def __init__(self, name: str, label: str, url: str, start_page_id: int, icon_name: str = '', classnames: str = ''):\n super().__init__(name, label, url, icon_name=icon_name, classnames=classnames)\n self.start_page_id = start_page_id\n\n def js_args(self):\n args = super().js_args()\n args.append(self.start_page_id)\n return args\n\n def __eq__(self, other):\n return (\n self.__class__ == other.__class__\n and self.name == other.name\n and self.label == other.label\n and self.url == other.url\n and self.start_page_id == other.start_page_id\n and self.icon_name == other.icon_name\n and self.classnames == other.classnames\n )\n\n\n# Modules\n\n@adapter('wagtail.sidebar.WagtailBrandingModule', base=BaseSidebarAdapter)\nclass WagtailBrandingModule:\n def js_args(self):\n return [\n reverse('wagtailadmin_home'),\n {\n 'mobileLogo': versioned_static('wagtailadmin/images/wagtail-logo.svg'),\n 'desktopLogoBody': versioned_static('wagtailadmin/images/logo-body.svg'),\n 'desktopLogoTail': versioned_static('wagtailadmin/images/logo-tail.svg'),\n 'desktopLogoEyeOpen': versioned_static('wagtailadmin/images/logo-eyeopen.svg'),\n 'desktopLogoEyeClosed': versioned_static('wagtailadmin/images/logo-eyeclosed.svg'),\n }\n ]\n\n\n@adapter('wagtail.sidebar.SearchModule', base=BaseSidebarAdapter)\nclass SearchModule:\n def __init__(self, search_area):\n self.search_area = search_area\n\n def js_args(self):\n return [\n self.search_area.url\n ]\n\n\n@adapter('wagtail.sidebar.MainMenuModule', base=BaseSidebarAdapter)\nclass MainMenuModule:\n def __init__(self, menu_items: List[MenuItem], account_menu_items: List[MenuItem], user):\n self.menu_items = menu_items\n self.account_menu_items = account_menu_items\n self.user = user\n\n def js_args(self):\n from wagtail.admin.templatetags.wagtailadmin_tags import avatar_url\n\n try:\n first_name = self.user.first_name\n except AttributeError:\n first_name = None\n\n return [\n self.menu_items,\n self.account_menu_items,\n {\n 'name': first_name or self.user.get_username(),\n 'avatarUrl': avatar_url(self.user, size=50),\n }\n ]\n", "path": "wagtail/admin/ui/sidebar.py"}]}
| 2,140 | 171 |
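For context, the `CustomBrandingModule` removed by the patch above follows the telepath adapter pattern used throughout `wagtail/admin/ui/sidebar.py`; a rough sketch of that pattern, with an invented `BookmarksMenuItem` that is not part of Wagtail:
```python
# Hypothetical menu item in the style of LinkMenuItem/SubMenuItem from
# wagtail/admin/ui/sidebar.py. The class name and bookmark_urls field are invented,
# and a matching JS definition would also be needed in sidebar.js to render it.
from wagtail.admin.ui.sidebar import BaseSidebarAdapter, MenuItem
from wagtail.core.telepath import adapter


@adapter('wagtail.sidebar.BookmarksMenuItem', base=BaseSidebarAdapter)
class BookmarksMenuItem(MenuItem):
    def __init__(self, name, label, bookmark_urls, icon_name='', classnames=''):
        super().__init__(name, label, icon_name=icon_name, classnames=classnames)
        self.bookmark_urls = bookmark_urls

    def js_args(self):
        args = super().js_args()
        args.append(self.bookmark_urls)  # extra positional argument for the JS component
        return args
```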
gh_patches_debug_43084
|
rasdani/github-patches
|
git_diff
|
mars-project__mars-771
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Plasma store changes the location of PlasmaObjectNonexistent and PlasmaStoreFull
Since ``PlasmaObjectNonexistent`` and ``PlasmaStoreFull`` were moved from ``pyarrow.lib`` into ``pyarrow.plasma`` in pyarrow 0.15.0, we need to add a try-except block around the import.
--- END ISSUE ---
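A minimal sketch of the compatibility import the issue asks for (the concrete placement inside Mars is what the file and diff below establish):
```python
# Try the pyarrow >= 0.15.0 location first, then fall back to the pre-0.15.0 one.
try:
    from pyarrow.plasma import PlasmaObjectNonexistent, PlasmaStoreFull
except ImportError:
    from pyarrow.lib import PlasmaObjectNonexistent, PlasmaStoreFull
```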
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/worker/storage/sharedstore.py`
Content:
```
1 # Copyright 1999-2018 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16
17 from ...actors import FunctionActor
18 from ...errors import StorageFull, StorageDataExists
19 from ...utils import calc_data_size
20
21 logger = logging.getLogger(__name__)
22
23
24 class PlasmaKeyMapActor(FunctionActor):
25 @classmethod
26 def default_uid(cls):
27 return 'w:0:' + cls.__name__
28
29 def __init__(self):
30 super(PlasmaKeyMapActor, self).__init__()
31 self._mapping = dict()
32
33 def put(self, session_id, chunk_key, obj_id):
34 session_chunk_key = (session_id, chunk_key)
35 if session_chunk_key in self._mapping:
36 raise StorageDataExists(session_chunk_key)
37 self._mapping[session_chunk_key] = obj_id
38
39 def get(self, session_id, chunk_key):
40 return self._mapping.get((session_id, chunk_key))
41
42 def delete(self, session_id, chunk_key):
43 try:
44 del self._mapping[(session_id, chunk_key)]
45 except KeyError:
46 pass
47
48
49 class PlasmaSharedStore(object):
50 """
51 Wrapper of plasma client for Mars objects
52 """
53 def __init__(self, plasma_client, mapper_ref):
54 from ...serialize.dataserializer import mars_serialize_context
55
56 self._plasma_client = plasma_client
57 self._actual_size = None
58 self._serialize_context = mars_serialize_context()
59
60 self._mapper_ref = mapper_ref
61
62 def get_actual_capacity(self, store_limit):
63 """
64 Get actual capacity of plasma store
65 :return: actual storage size in bytes
66 """
67 if self._actual_size is None:
68 from pyarrow import plasma, lib
69
70 bufs = []
71 left_size = store_limit
72 total_size = 0
73 alloc_fraction = 0.9
74 while left_size:
75 allocate_size = int(left_size * alloc_fraction)
76 if allocate_size < 1 * 1024 ** 2:
77 break
78
79 try:
80 obj_id = plasma.ObjectID.from_random()
81 bufs.append(self._plasma_client.create(obj_id, allocate_size))
82 self._plasma_client.seal(obj_id)
83 total_size += allocate_size
84 left_size -= allocate_size
85 alloc_fraction = 0.9
86 except lib.PlasmaStoreFull:
87 alloc_fraction -= 0.1
88 if alloc_fraction < 1e-6:
89 break
90 del bufs
91 self._plasma_client.evict(total_size)
92 self._actual_size = total_size
93 return self._actual_size
94
95 def _new_object_id(self, session_id, data_key):
96 """
97 Calc unique object id for chunks
98 """
99 from pyarrow.plasma import ObjectID
100 while True:
101 new_id = ObjectID.from_random()
102 if not self._plasma_client.contains(new_id):
103 break
104 self._mapper_ref.put(session_id, data_key, new_id)
105 return new_id
106
107 def _get_object_id(self, session_id, data_key):
108 obj_id = self._mapper_ref.get(session_id, data_key)
109 if obj_id is None:
110 raise KeyError((session_id, data_key))
111 return obj_id
112
113 def create(self, session_id, data_key, size):
114 from pyarrow.lib import PlasmaStoreFull
115 obj_id = self._new_object_id(session_id, data_key)
116
117 try:
118 self._plasma_client.evict(size)
119 buffer = self._plasma_client.create(obj_id, size)
120 return buffer
121 except PlasmaStoreFull:
122 exc_type = PlasmaStoreFull
123 self._mapper_ref.delete(session_id, data_key)
124 logger.warning('Data %s(%d) failed to store to plasma due to StorageFull',
125 data_key, size)
126 except: # noqa: E722
127 self._mapper_ref.delete(session_id, data_key)
128 raise
129
130 if exc_type is PlasmaStoreFull:
131 raise StorageFull(request_size=size, total_size=self._actual_size)
132
133 def seal(self, session_id, data_key):
134 from pyarrow.lib import PlasmaObjectNonexistent
135 obj_id = self._get_object_id(session_id, data_key)
136 try:
137 self._plasma_client.seal(obj_id)
138 except PlasmaObjectNonexistent:
139 self._mapper_ref.delete(session_id, data_key)
140 raise KeyError((session_id, data_key))
141
142 def get(self, session_id, data_key):
143 """
144 Get deserialized Mars object from plasma store
145 """
146 from pyarrow.plasma import ObjectNotAvailable
147
148 obj_id = self._get_object_id(session_id, data_key)
149 obj = self._plasma_client.get(obj_id, serialization_context=self._serialize_context, timeout_ms=10)
150 if obj is ObjectNotAvailable:
151 self._mapper_ref.delete(session_id, data_key)
152 raise KeyError((session_id, data_key))
153 return obj
154
155 def get_buffer(self, session_id, data_key):
156 """
157 Get raw buffer from plasma store
158 """
159 obj_id = self._get_object_id(session_id, data_key)
160 [buf] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)
161 if buf is None:
162 self._mapper_ref.delete(session_id, data_key)
163 raise KeyError((session_id, data_key))
164 return buf
165
166 def get_actual_size(self, session_id, data_key):
167 """
168 Get actual size of Mars object from plasma store
169 """
170 buf = None
171 try:
172 obj_id = self._get_object_id(session_id, data_key)
173 [buf] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)
174 if buf is None:
175 self._mapper_ref.delete(session_id, data_key)
176 raise KeyError((session_id, data_key))
177 return buf.size
178 finally:
179 del buf
180
181 def put(self, session_id, data_key, value):
182 """
183 Put a Mars object into plasma store
184 :param session_id: session id
185 :param data_key: chunk key
186 :param value: Mars object to be put
187 """
188 import pyarrow
189 from pyarrow.lib import PlasmaStoreFull
190
191 data_size = calc_data_size(value)
192
193 try:
194 obj_id = self._new_object_id(session_id, data_key)
195 except StorageDataExists:
196 obj_id = self._get_object_id(session_id, data_key)
197 if self._plasma_client.contains(obj_id):
198 logger.debug('Data %s already exists, returning existing', data_key)
199 [buffer] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)
200 del value
201 return buffer
202 else:
203 logger.warning('Data %s registered but no data found, reconstructed', data_key)
204 self._mapper_ref.delete(session_id, data_key)
205 obj_id = self._new_object_id(session_id, data_key)
206
207 try:
208 serialized = pyarrow.serialize(value, self._serialize_context)
209 del value
210 data_size = serialized.total_bytes
211 try:
212 buffer = self._plasma_client.create(obj_id, serialized.total_bytes)
213 stream = pyarrow.FixedSizeBufferWriter(buffer)
214 stream.set_memcopy_threads(6)
215 serialized.write_to(stream)
216 self._plasma_client.seal(obj_id)
217 finally:
218 del serialized
219 return buffer
220 except PlasmaStoreFull:
221 self._mapper_ref.delete(session_id, data_key)
222 logger.warning('Data %s(%d) failed to store to plasma due to StorageFull',
223 data_key, data_size)
224 exc = PlasmaStoreFull
225 except: # noqa: E722
226 self._mapper_ref.delete(session_id, data_key)
227 raise
228
229 if exc is PlasmaStoreFull:
230 raise StorageFull(request_size=data_size, total_size=self._actual_size)
231
232 def contains(self, session_id, data_key):
233 """
234 Check if given chunk key exists in current plasma store
235 """
236 try:
237 obj_id = self._get_object_id(session_id, data_key)
238 if self._plasma_client.contains(obj_id):
239 return True
240 else:
241 self._mapper_ref.delete(session_id, data_key)
242 return False
243 except KeyError:
244 return False
245
246 def delete(self, session_id, data_key):
247 self._mapper_ref.delete(session_id, data_key)
248
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mars/worker/storage/sharedstore.py b/mars/worker/storage/sharedstore.py
--- a/mars/worker/storage/sharedstore.py
+++ b/mars/worker/storage/sharedstore.py
@@ -18,6 +18,16 @@
from ...errors import StorageFull, StorageDataExists
from ...utils import calc_data_size
+try:
+ import pyarrow
+ from pyarrow import plasma
+ try:
+ from pyarrow.plasma import PlasmaObjectNonexistent, PlasmaStoreFull
+ except ImportError:
+ from pyarrow.lib import PlasmaObjectNonexistent, PlasmaStoreFull
+except ImportError: # pragma: no cover
+ pyarrow, plasma, PlasmaObjectNonexistent, PlasmaStoreFull = None, None, None, None
+
logger = logging.getLogger(__name__)
@@ -65,8 +75,6 @@
:return: actual storage size in bytes
"""
if self._actual_size is None:
- from pyarrow import plasma, lib
-
bufs = []
left_size = store_limit
total_size = 0
@@ -83,7 +91,7 @@
total_size += allocate_size
left_size -= allocate_size
alloc_fraction = 0.9
- except lib.PlasmaStoreFull:
+ except PlasmaStoreFull:
alloc_fraction -= 0.1
if alloc_fraction < 1e-6:
break
@@ -96,9 +104,8 @@
"""
Calc unique object id for chunks
"""
- from pyarrow.plasma import ObjectID
while True:
- new_id = ObjectID.from_random()
+ new_id = plasma.ObjectID.from_random()
if not self._plasma_client.contains(new_id):
break
self._mapper_ref.put(session_id, data_key, new_id)
@@ -111,7 +118,6 @@
return obj_id
def create(self, session_id, data_key, size):
- from pyarrow.lib import PlasmaStoreFull
obj_id = self._new_object_id(session_id, data_key)
try:
@@ -131,7 +137,6 @@
raise StorageFull(request_size=size, total_size=self._actual_size)
def seal(self, session_id, data_key):
- from pyarrow.lib import PlasmaObjectNonexistent
obj_id = self._get_object_id(session_id, data_key)
try:
self._plasma_client.seal(obj_id)
@@ -143,11 +148,9 @@
"""
Get deserialized Mars object from plasma store
"""
- from pyarrow.plasma import ObjectNotAvailable
-
obj_id = self._get_object_id(session_id, data_key)
obj = self._plasma_client.get(obj_id, serialization_context=self._serialize_context, timeout_ms=10)
- if obj is ObjectNotAvailable:
+ if obj is plasma.ObjectNotAvailable:
self._mapper_ref.delete(session_id, data_key)
raise KeyError((session_id, data_key))
return obj
@@ -185,9 +188,6 @@
:param data_key: chunk key
:param value: Mars object to be put
"""
- import pyarrow
- from pyarrow.lib import PlasmaStoreFull
-
data_size = calc_data_size(value)
try:
|
{"golden_diff": "diff --git a/mars/worker/storage/sharedstore.py b/mars/worker/storage/sharedstore.py\n--- a/mars/worker/storage/sharedstore.py\n+++ b/mars/worker/storage/sharedstore.py\n@@ -18,6 +18,16 @@\n from ...errors import StorageFull, StorageDataExists\n from ...utils import calc_data_size\n \n+try:\n+ import pyarrow\n+ from pyarrow import plasma\n+ try:\n+ from pyarrow.plasma import PlasmaObjectNonexistent, PlasmaStoreFull\n+ except ImportError:\n+ from pyarrow.lib import PlasmaObjectNonexistent, PlasmaStoreFull\n+except ImportError: # pragma: no cover\n+ pyarrow, plasma, PlasmaObjectNonexistent, PlasmaStoreFull = None, None, None, None\n+\n logger = logging.getLogger(__name__)\n \n \n@@ -65,8 +75,6 @@\n :return: actual storage size in bytes\n \"\"\"\n if self._actual_size is None:\n- from pyarrow import plasma, lib\n-\n bufs = []\n left_size = store_limit\n total_size = 0\n@@ -83,7 +91,7 @@\n total_size += allocate_size\n left_size -= allocate_size\n alloc_fraction = 0.9\n- except lib.PlasmaStoreFull:\n+ except PlasmaStoreFull:\n alloc_fraction -= 0.1\n if alloc_fraction < 1e-6:\n break\n@@ -96,9 +104,8 @@\n \"\"\"\n Calc unique object id for chunks\n \"\"\"\n- from pyarrow.plasma import ObjectID\n while True:\n- new_id = ObjectID.from_random()\n+ new_id = plasma.ObjectID.from_random()\n if not self._plasma_client.contains(new_id):\n break\n self._mapper_ref.put(session_id, data_key, new_id)\n@@ -111,7 +118,6 @@\n return obj_id\n \n def create(self, session_id, data_key, size):\n- from pyarrow.lib import PlasmaStoreFull\n obj_id = self._new_object_id(session_id, data_key)\n \n try:\n@@ -131,7 +137,6 @@\n raise StorageFull(request_size=size, total_size=self._actual_size)\n \n def seal(self, session_id, data_key):\n- from pyarrow.lib import PlasmaObjectNonexistent\n obj_id = self._get_object_id(session_id, data_key)\n try:\n self._plasma_client.seal(obj_id)\n@@ -143,11 +148,9 @@\n \"\"\"\n Get deserialized Mars object from plasma store\n \"\"\"\n- from pyarrow.plasma import ObjectNotAvailable\n-\n obj_id = self._get_object_id(session_id, data_key)\n obj = self._plasma_client.get(obj_id, serialization_context=self._serialize_context, timeout_ms=10)\n- if obj is ObjectNotAvailable:\n+ if obj is plasma.ObjectNotAvailable:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n return obj\n@@ -185,9 +188,6 @@\n :param data_key: chunk key\n :param value: Mars object to be put\n \"\"\"\n- import pyarrow\n- from pyarrow.lib import PlasmaStoreFull\n-\n data_size = calc_data_size(value)\n \n try:\n", "issue": "[BUG] Plasma store changes the location of PlasmaObjectNonexistent and PlasmaStoreFull\nAs ``PlasmaObjectNonexistent`` and ``PlasmaStoreFull`` are moved from ``pyarrow.lib`` into ``pyarrow.plasma`` in 0.15.0, we need to add a try-except block on import.\n", "before_files": [{"content": "# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\n\nfrom ...actors import FunctionActor\nfrom 
...errors import StorageFull, StorageDataExists\nfrom ...utils import calc_data_size\n\nlogger = logging.getLogger(__name__)\n\n\nclass PlasmaKeyMapActor(FunctionActor):\n @classmethod\n def default_uid(cls):\n return 'w:0:' + cls.__name__\n\n def __init__(self):\n super(PlasmaKeyMapActor, self).__init__()\n self._mapping = dict()\n\n def put(self, session_id, chunk_key, obj_id):\n session_chunk_key = (session_id, chunk_key)\n if session_chunk_key in self._mapping:\n raise StorageDataExists(session_chunk_key)\n self._mapping[session_chunk_key] = obj_id\n\n def get(self, session_id, chunk_key):\n return self._mapping.get((session_id, chunk_key))\n\n def delete(self, session_id, chunk_key):\n try:\n del self._mapping[(session_id, chunk_key)]\n except KeyError:\n pass\n\n\nclass PlasmaSharedStore(object):\n \"\"\"\n Wrapper of plasma client for Mars objects\n \"\"\"\n def __init__(self, plasma_client, mapper_ref):\n from ...serialize.dataserializer import mars_serialize_context\n\n self._plasma_client = plasma_client\n self._actual_size = None\n self._serialize_context = mars_serialize_context()\n\n self._mapper_ref = mapper_ref\n\n def get_actual_capacity(self, store_limit):\n \"\"\"\n Get actual capacity of plasma store\n :return: actual storage size in bytes\n \"\"\"\n if self._actual_size is None:\n from pyarrow import plasma, lib\n\n bufs = []\n left_size = store_limit\n total_size = 0\n alloc_fraction = 0.9\n while left_size:\n allocate_size = int(left_size * alloc_fraction)\n if allocate_size < 1 * 1024 ** 2:\n break\n\n try:\n obj_id = plasma.ObjectID.from_random()\n bufs.append(self._plasma_client.create(obj_id, allocate_size))\n self._plasma_client.seal(obj_id)\n total_size += allocate_size\n left_size -= allocate_size\n alloc_fraction = 0.9\n except lib.PlasmaStoreFull:\n alloc_fraction -= 0.1\n if alloc_fraction < 1e-6:\n break\n del bufs\n self._plasma_client.evict(total_size)\n self._actual_size = total_size\n return self._actual_size\n\n def _new_object_id(self, session_id, data_key):\n \"\"\"\n Calc unique object id for chunks\n \"\"\"\n from pyarrow.plasma import ObjectID\n while True:\n new_id = ObjectID.from_random()\n if not self._plasma_client.contains(new_id):\n break\n self._mapper_ref.put(session_id, data_key, new_id)\n return new_id\n\n def _get_object_id(self, session_id, data_key):\n obj_id = self._mapper_ref.get(session_id, data_key)\n if obj_id is None:\n raise KeyError((session_id, data_key))\n return obj_id\n\n def create(self, session_id, data_key, size):\n from pyarrow.lib import PlasmaStoreFull\n obj_id = self._new_object_id(session_id, data_key)\n\n try:\n self._plasma_client.evict(size)\n buffer = self._plasma_client.create(obj_id, size)\n return buffer\n except PlasmaStoreFull:\n exc_type = PlasmaStoreFull\n self._mapper_ref.delete(session_id, data_key)\n logger.warning('Data %s(%d) failed to store to plasma due to StorageFull',\n data_key, size)\n except: # noqa: E722\n self._mapper_ref.delete(session_id, data_key)\n raise\n\n if exc_type is PlasmaStoreFull:\n raise StorageFull(request_size=size, total_size=self._actual_size)\n\n def seal(self, session_id, data_key):\n from pyarrow.lib import PlasmaObjectNonexistent\n obj_id = self._get_object_id(session_id, data_key)\n try:\n self._plasma_client.seal(obj_id)\n except PlasmaObjectNonexistent:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n\n def get(self, session_id, data_key):\n \"\"\"\n Get deserialized Mars object from plasma store\n \"\"\"\n from 
pyarrow.plasma import ObjectNotAvailable\n\n obj_id = self._get_object_id(session_id, data_key)\n obj = self._plasma_client.get(obj_id, serialization_context=self._serialize_context, timeout_ms=10)\n if obj is ObjectNotAvailable:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n return obj\n\n def get_buffer(self, session_id, data_key):\n \"\"\"\n Get raw buffer from plasma store\n \"\"\"\n obj_id = self._get_object_id(session_id, data_key)\n [buf] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)\n if buf is None:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n return buf\n\n def get_actual_size(self, session_id, data_key):\n \"\"\"\n Get actual size of Mars object from plasma store\n \"\"\"\n buf = None\n try:\n obj_id = self._get_object_id(session_id, data_key)\n [buf] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)\n if buf is None:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n return buf.size\n finally:\n del buf\n\n def put(self, session_id, data_key, value):\n \"\"\"\n Put a Mars object into plasma store\n :param session_id: session id\n :param data_key: chunk key\n :param value: Mars object to be put\n \"\"\"\n import pyarrow\n from pyarrow.lib import PlasmaStoreFull\n\n data_size = calc_data_size(value)\n\n try:\n obj_id = self._new_object_id(session_id, data_key)\n except StorageDataExists:\n obj_id = self._get_object_id(session_id, data_key)\n if self._plasma_client.contains(obj_id):\n logger.debug('Data %s already exists, returning existing', data_key)\n [buffer] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)\n del value\n return buffer\n else:\n logger.warning('Data %s registered but no data found, reconstructed', data_key)\n self._mapper_ref.delete(session_id, data_key)\n obj_id = self._new_object_id(session_id, data_key)\n\n try:\n serialized = pyarrow.serialize(value, self._serialize_context)\n del value\n data_size = serialized.total_bytes\n try:\n buffer = self._plasma_client.create(obj_id, serialized.total_bytes)\n stream = pyarrow.FixedSizeBufferWriter(buffer)\n stream.set_memcopy_threads(6)\n serialized.write_to(stream)\n self._plasma_client.seal(obj_id)\n finally:\n del serialized\n return buffer\n except PlasmaStoreFull:\n self._mapper_ref.delete(session_id, data_key)\n logger.warning('Data %s(%d) failed to store to plasma due to StorageFull',\n data_key, data_size)\n exc = PlasmaStoreFull\n except: # noqa: E722\n self._mapper_ref.delete(session_id, data_key)\n raise\n\n if exc is PlasmaStoreFull:\n raise StorageFull(request_size=data_size, total_size=self._actual_size)\n\n def contains(self, session_id, data_key):\n \"\"\"\n Check if given chunk key exists in current plasma store\n \"\"\"\n try:\n obj_id = self._get_object_id(session_id, data_key)\n if self._plasma_client.contains(obj_id):\n return True\n else:\n self._mapper_ref.delete(session_id, data_key)\n return False\n except KeyError:\n return False\n\n def delete(self, session_id, data_key):\n self._mapper_ref.delete(session_id, data_key)\n", "path": "mars/worker/storage/sharedstore.py"}], "after_files": [{"content": "# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in 
writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\n\nfrom ...actors import FunctionActor\nfrom ...errors import StorageFull, StorageDataExists\nfrom ...utils import calc_data_size\n\ntry:\n import pyarrow\n from pyarrow import plasma\n try:\n from pyarrow.plasma import PlasmaObjectNonexistent, PlasmaStoreFull\n except ImportError:\n from pyarrow.lib import PlasmaObjectNonexistent, PlasmaStoreFull\nexcept ImportError: # pragma: no cover\n pyarrow, plasma, PlasmaObjectNonexistent, PlasmaStoreFull = None, None, None, None\n\nlogger = logging.getLogger(__name__)\n\n\nclass PlasmaKeyMapActor(FunctionActor):\n @classmethod\n def default_uid(cls):\n return 'w:0:' + cls.__name__\n\n def __init__(self):\n super(PlasmaKeyMapActor, self).__init__()\n self._mapping = dict()\n\n def put(self, session_id, chunk_key, obj_id):\n session_chunk_key = (session_id, chunk_key)\n if session_chunk_key in self._mapping:\n raise StorageDataExists(session_chunk_key)\n self._mapping[session_chunk_key] = obj_id\n\n def get(self, session_id, chunk_key):\n return self._mapping.get((session_id, chunk_key))\n\n def delete(self, session_id, chunk_key):\n try:\n del self._mapping[(session_id, chunk_key)]\n except KeyError:\n pass\n\n\nclass PlasmaSharedStore(object):\n \"\"\"\n Wrapper of plasma client for Mars objects\n \"\"\"\n def __init__(self, plasma_client, mapper_ref):\n from ...serialize.dataserializer import mars_serialize_context\n\n self._plasma_client = plasma_client\n self._actual_size = None\n self._serialize_context = mars_serialize_context()\n\n self._mapper_ref = mapper_ref\n\n def get_actual_capacity(self, store_limit):\n \"\"\"\n Get actual capacity of plasma store\n :return: actual storage size in bytes\n \"\"\"\n if self._actual_size is None:\n bufs = []\n left_size = store_limit\n total_size = 0\n alloc_fraction = 0.9\n while left_size:\n allocate_size = int(left_size * alloc_fraction)\n if allocate_size < 1 * 1024 ** 2:\n break\n\n try:\n obj_id = plasma.ObjectID.from_random()\n bufs.append(self._plasma_client.create(obj_id, allocate_size))\n self._plasma_client.seal(obj_id)\n total_size += allocate_size\n left_size -= allocate_size\n alloc_fraction = 0.9\n except PlasmaStoreFull:\n alloc_fraction -= 0.1\n if alloc_fraction < 1e-6:\n break\n del bufs\n self._plasma_client.evict(total_size)\n self._actual_size = total_size\n return self._actual_size\n\n def _new_object_id(self, session_id, data_key):\n \"\"\"\n Calc unique object id for chunks\n \"\"\"\n while True:\n new_id = plasma.ObjectID.from_random()\n if not self._plasma_client.contains(new_id):\n break\n self._mapper_ref.put(session_id, data_key, new_id)\n return new_id\n\n def _get_object_id(self, session_id, data_key):\n obj_id = self._mapper_ref.get(session_id, data_key)\n if obj_id is None:\n raise KeyError((session_id, data_key))\n return obj_id\n\n def create(self, session_id, data_key, size):\n obj_id = self._new_object_id(session_id, data_key)\n\n try:\n self._plasma_client.evict(size)\n buffer = self._plasma_client.create(obj_id, size)\n return buffer\n except PlasmaStoreFull:\n exc_type = PlasmaStoreFull\n self._mapper_ref.delete(session_id, data_key)\n logger.warning('Data %s(%d) failed to store to plasma due to StorageFull',\n data_key, size)\n except: # noqa: E722\n 
self._mapper_ref.delete(session_id, data_key)\n raise\n\n if exc_type is PlasmaStoreFull:\n raise StorageFull(request_size=size, total_size=self._actual_size)\n\n def seal(self, session_id, data_key):\n obj_id = self._get_object_id(session_id, data_key)\n try:\n self._plasma_client.seal(obj_id)\n except PlasmaObjectNonexistent:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n\n def get(self, session_id, data_key):\n \"\"\"\n Get deserialized Mars object from plasma store\n \"\"\"\n obj_id = self._get_object_id(session_id, data_key)\n obj = self._plasma_client.get(obj_id, serialization_context=self._serialize_context, timeout_ms=10)\n if obj is plasma.ObjectNotAvailable:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n return obj\n\n def get_buffer(self, session_id, data_key):\n \"\"\"\n Get raw buffer from plasma store\n \"\"\"\n obj_id = self._get_object_id(session_id, data_key)\n [buf] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)\n if buf is None:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n return buf\n\n def get_actual_size(self, session_id, data_key):\n \"\"\"\n Get actual size of Mars object from plasma store\n \"\"\"\n buf = None\n try:\n obj_id = self._get_object_id(session_id, data_key)\n [buf] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)\n if buf is None:\n self._mapper_ref.delete(session_id, data_key)\n raise KeyError((session_id, data_key))\n return buf.size\n finally:\n del buf\n\n def put(self, session_id, data_key, value):\n \"\"\"\n Put a Mars object into plasma store\n :param session_id: session id\n :param data_key: chunk key\n :param value: Mars object to be put\n \"\"\"\n data_size = calc_data_size(value)\n\n try:\n obj_id = self._new_object_id(session_id, data_key)\n except StorageDataExists:\n obj_id = self._get_object_id(session_id, data_key)\n if self._plasma_client.contains(obj_id):\n logger.debug('Data %s already exists, returning existing', data_key)\n [buffer] = self._plasma_client.get_buffers([obj_id], timeout_ms=10)\n del value\n return buffer\n else:\n logger.warning('Data %s registered but no data found, reconstructed', data_key)\n self._mapper_ref.delete(session_id, data_key)\n obj_id = self._new_object_id(session_id, data_key)\n\n try:\n serialized = pyarrow.serialize(value, self._serialize_context)\n del value\n data_size = serialized.total_bytes\n try:\n buffer = self._plasma_client.create(obj_id, serialized.total_bytes)\n stream = pyarrow.FixedSizeBufferWriter(buffer)\n stream.set_memcopy_threads(6)\n serialized.write_to(stream)\n self._plasma_client.seal(obj_id)\n finally:\n del serialized\n return buffer\n except PlasmaStoreFull:\n self._mapper_ref.delete(session_id, data_key)\n logger.warning('Data %s(%d) failed to store to plasma due to StorageFull',\n data_key, data_size)\n exc = PlasmaStoreFull\n except: # noqa: E722\n self._mapper_ref.delete(session_id, data_key)\n raise\n\n if exc is PlasmaStoreFull:\n raise StorageFull(request_size=data_size, total_size=self._actual_size)\n\n def contains(self, session_id, data_key):\n \"\"\"\n Check if given chunk key exists in current plasma store\n \"\"\"\n try:\n obj_id = self._get_object_id(session_id, data_key)\n if self._plasma_client.contains(obj_id):\n return True\n else:\n self._mapper_ref.delete(session_id, data_key)\n return False\n except KeyError:\n return False\n\n def delete(self, session_id, data_key):\n self._mapper_ref.delete(session_id, 
data_key)\n", "path": "mars/worker/storage/sharedstore.py"}]}
| 2,899 | 746 |
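For quick reference, the compatibility shim that the record above converges on can be sketched in isolation; it assumes pyarrow may be missing entirely, or older than 0.15.0 where the plasma exception classes still lived in `pyarrow.lib`:

```python
# Sketch of the version-tolerant import (mirrors the golden diff in the record above).
try:
    import pyarrow
    from pyarrow import plasma
    try:
        # pyarrow >= 0.15.0
        from pyarrow.plasma import PlasmaObjectNonexistent, PlasmaStoreFull
    except ImportError:
        # older pyarrow releases
        from pyarrow.lib import PlasmaObjectNonexistent, PlasmaStoreFull
except ImportError:
    # pyarrow not installed at all; callers must handle the None sentinels
    pyarrow = plasma = PlasmaObjectNonexistent = PlasmaStoreFull = None
```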
gh_patches_debug_2697 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-3553 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
VOCDataset object has no attribute dataset
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**
I tried to train my model on the Pascal VOC 2012 dataset and set the config for data as follows:
```python3
batch_size = 8
data = dict(
samples_per_gpu=batch_size,
workers_per_gpu=4,
train=dict(
type=dataset_type,
ann_file=data_root + 'VOC2012/ImageSets/Main/train.txt',
img_prefix=data_root + 'VOC2012/',
pipeline=train_pipeline,),
val=dict(
type=dataset_type,
ann_file=data_root + 'VOC2012/ImageSets/Main/val.txt',
img_prefix=data_root + 'VOC2012/',
pipeline=test_pipeline,),
)
evaluation=dict(interval=1, metric='mAP')
```
But during evaluation, it raised the following error:
```shell
File "train.py", line 166, in <module>
main()
File "train.py", line 162, in main
meta=meta)
File "/home/lfc199471/mmdetection/mmdet/apis/train.py", line 128, in train_detector
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 46, in train
self.call_hook('after_train_epoch')
File "/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 282, in call_hook
getattr(hook, fn_name)(self)
File "/home/lfc199471/mmdetection/mmdet/core/evaluation/eval_hooks.py", line 28, in after_train_epoch
self.evaluate(runner, results)
File "/home/lfc199471/mmdetection/mmdet/core/evaluation/eval_hooks.py", line 32, in evaluate
results, logger=runner.logger, **self.eval_kwargs)
File "/home/lfc199471/mmdetection/mmdet/datasets/voc.py", line 43, in evaluate
ds_name = self.dataset.CLASSES
AttributeError: 'VOCDataset' object has no attribute 'dataset'
```
I checked `voc.py` in `mmdet` and found that line 43 was
```python3
ds_name = self.dataset.CLASSES
```
but `VOCDataset` and its superclasses `XMLDataset` and `CustomDataset` don't have this attribute. Is it a bug, or did I make a mistake in the config?
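A quick sanity check of the attribute problem, without running training (a sketch assuming mmdet 2.x is importable):

```python
from mmdet.datasets import VOCDataset

# The class names needed by evaluate() live on the dataset class itself,
# so `self.CLASSES` is available, while no `dataset` attribute exists.
print(VOCDataset.CLASSES[:3])          # ('aeroplane', 'bicycle', 'bird')
print(hasattr(VOCDataset, 'dataset'))  # False -> self.dataset.CLASSES raises AttributeError
```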
**Reproduction**
1. What command or script did you run?
```
python tools/train.py --gpus 1 configs/<my_config_file>
```
2. Did you make any modifications on the code or config? Did you understand what you have modified?
Yes, please see above.
3. What dataset did you use?
Pascal VOC 2012 detection
**Environment**
1. Please run `python mmdet/utils/collect_env.py` to collect the necessary environment information and paste it here.
```shell
sys.platform: linux
Python: 3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GPU 0: Tesla P100-PCIE-16GB
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.5.1
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.6.0a0+35d732a
OpenCV: 4.2.0
MMCV: 0.6.1
MMDetection: 2.1.0+b44e78b
MMDetection Compiler: GCC 7.5
MMDetection CUDA Compiler: 10.2
```
2. You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source] : conda
If you need any log file or some source code from me, just let me know.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet/datasets/voc.py`
Content:
```
1 from mmdet.core import eval_map, eval_recalls
2 from .builder import DATASETS
3 from .xml_style import XMLDataset
4
5
6 @DATASETS.register_module()
7 class VOCDataset(XMLDataset):
8
9 CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',
10 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',
11 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train',
12 'tvmonitor')
13
14 def __init__(self, **kwargs):
15 super(VOCDataset, self).__init__(**kwargs)
16 if 'VOC2007' in self.img_prefix:
17 self.year = 2007
18 elif 'VOC2012' in self.img_prefix:
19 self.year = 2012
20 else:
21 raise ValueError('Cannot infer dataset year from img_prefix')
22
23 def evaluate(self,
24 results,
25 metric='mAP',
26 logger=None,
27 proposal_nums=(100, 300, 1000),
28 iou_thr=0.5,
29 scale_ranges=None):
30 """Evaluate in VOC protocol.
31
32 Args:
33 results (list[list | tuple]): Testing results of the dataset.
34 metric (str | list[str]): Metrics to be evaluated. Options are
35 'mAP', 'recall'.
36 logger (logging.Logger | str, optional): Logger used for printing
37 related information during evaluation. Default: None.
38 proposal_nums (Sequence[int]): Proposal number used for evaluating
39 recalls, such as recall@100, recall@1000.
40 Default: (100, 300, 1000).
41 iou_thr (float | list[float]): IoU threshold. It must be a float
42 when evaluating mAP, and can be a list when evaluating recall.
43 Default: 0.5.
44 scale_ranges (list[tuple], optional): Scale ranges for evaluating
45 mAP. If not specified, all bounding boxes would be included in
46 evaluation. Default: None.
47
48 Returns:
49 dict[str, float]: AP/recall metrics.
50 """
51
52 if not isinstance(metric, str):
53 assert len(metric) == 1
54 metric = metric[0]
55 allowed_metrics = ['mAP', 'recall']
56 if metric not in allowed_metrics:
57 raise KeyError(f'metric {metric} is not supported')
58 annotations = [self.get_ann_info(i) for i in range(len(self))]
59 eval_results = {}
60 if metric == 'mAP':
61 assert isinstance(iou_thr, float)
62 if self.year == 2007:
63 ds_name = 'voc07'
64 else:
65 ds_name = self.dataset.CLASSES
66 mean_ap, _ = eval_map(
67 results,
68 annotations,
69 scale_ranges=None,
70 iou_thr=iou_thr,
71 dataset=ds_name,
72 logger=logger)
73 eval_results['mAP'] = mean_ap
74 elif metric == 'recall':
75 gt_bboxes = [ann['bboxes'] for ann in annotations]
76 if isinstance(iou_thr, float):
77 iou_thr = [iou_thr]
78 recalls = eval_recalls(
79 gt_bboxes, results, proposal_nums, iou_thr, logger=logger)
80 for i, num in enumerate(proposal_nums):
81 for j, iou in enumerate(iou_thr):
82 eval_results[f'recall@{num}@{iou}'] = recalls[i, j]
83 if recalls.shape[1] > 1:
84 ar = recalls.mean(axis=1)
85 for i, num in enumerate(proposal_nums):
86 eval_results[f'AR@{num}'] = ar[i]
87 return eval_results
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmdet/datasets/voc.py b/mmdet/datasets/voc.py
--- a/mmdet/datasets/voc.py
+++ b/mmdet/datasets/voc.py
@@ -62,7 +62,7 @@
if self.year == 2007:
ds_name = 'voc07'
else:
- ds_name = self.dataset.CLASSES
+ ds_name = self.CLASSES
mean_ap, _ = eval_map(
results,
annotations,
|
{"golden_diff": "diff --git a/mmdet/datasets/voc.py b/mmdet/datasets/voc.py\n--- a/mmdet/datasets/voc.py\n+++ b/mmdet/datasets/voc.py\n@@ -62,7 +62,7 @@\n if self.year == 2007:\n ds_name = 'voc07'\n else:\n- ds_name = self.dataset.CLASSES\n+ ds_name = self.CLASSES\n mean_ap, _ = eval_map(\n results,\n annotations,\n", "issue": "VOCDataset object has no attribute dataset\nThanks for your error report and we appreciate it a lot.\r\n\r\n**Checklist**\r\n1. I have searched related issues but cannot get the expected help.\r\n2. The bug has not been fixed in the latest version.\r\n\r\n**Describe the bug**\r\nI tried to train my model on Pascal VOC 2012 dataset, and set the config for data as follows:\r\n```python3\r\nbatch_size = 8\r\n\r\ndata = dict(\r\n samples_per_gpu=batch_size,\r\n workers_per_gpu=4,\r\n train=dict(\r\n type=dataset_type,\r\n ann_file=data_root + 'VOC2012/ImageSets/Main/train.txt',\r\n img_prefix=data_root + 'VOC2012/',\r\n pipeline=train_pipeline,),\r\n val=dict(\r\n type=dataset_type,\r\n ann_file=data_root + 'VOC2012/ImageSets/Main/val.txt',\r\n img_prefix=data_root + 'VOC2012/',\r\n pipeline=test_pipeline,),\r\n)\r\n\r\nevaluation=dict(interval=1, metric='mAP')\r\n```\r\nBut during evaluation, it raised following error:\r\n```shell\r\nFile \"train.py\", line 166, in <module>\r\n main()\r\n File \"train.py\", line 162, in main\r\n meta=meta)\r\n File \"/home/lfc199471/mmdetection/mmdet/apis/train.py\", line 128, in train_detector\r\n runner.run(data_loaders, cfg.workflow, cfg.total_epochs)\r\n File \"/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py\", line 122, in run\r\n epoch_runner(data_loaders[i], **kwargs)\r\n File \"/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py\", line 46, in train\r\n self.call_hook('after_train_epoch')\r\n File \"/home/lfc199471/anaconda3/lib/python3.7/site-packages/mmcv/runner/base_runner.py\", line 282, in call_hook\r\n getattr(hook, fn_name)(self)\r\n File \"/home/lfc199471/mmdetection/mmdet/core/evaluation/eval_hooks.py\", line 28, in after_train_epoch\r\n self.evaluate(runner, results)\r\n File \"/home/lfc199471/mmdetection/mmdet/core/evaluation/eval_hooks.py\", line 32, in evaluate\r\n results, logger=runner.logger, **self.eval_kwargs)\r\n File \"/home/lfc199471/mmdetection/mmdet/datasets/voc.py\", line 43, in evaluate\r\n ds_name = self.dataset.CLASSES\r\nAttributeError: 'VOCDataset' object has no attribute 'dataset'\r\n```\r\nI checked the `voc.py` in `mmdet` and found that in line 43, it was\r\n```python3\r\nds_name = self.dataset.CLASSES\r\n```\r\nbut `VOCDataset` and its superclasses `XMLDataset` and `CustomDataset` don't have this attribute. Is it a bug or did I make some mistakes in the config?\r\n\r\n**Reproduction**\r\n1. What command or script did you run?\r\n```\r\npython tools/train.py --gpus 1 configs/<my_config_file>\r\n```\r\n2. Did you make any modifications on the code or config? Did you understand what you have modified?\r\nYes, please see above.\r\n\r\n3. What dataset did you use?\r\nPascal VOC 2012 detection\r\n**Environment**\r\n1. 
Please run `python mmdet/utils/collect_env.py` to collect necessary environment infomation and paste it here.\r\n```shell\r\nsys.platform: linux\r\nPython: 3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]\r\nCUDA available: True\r\nCUDA_HOME: /usr/local/cuda\r\nNVCC: Cuda compilation tools, release 10.2, V10.2.89\r\nGPU 0: Tesla P100-PCIE-16GB\r\nGCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nPyTorch: 1.5.1\r\nPyTorch compiling details: PyTorch built with:\r\n - GCC 7.3\r\n - C++ Version: 201402\r\n - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications\r\n - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)\r\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\r\n - NNPACK is enabled\r\n - CPU capability usage: AVX2\r\n - CUDA Runtime 10.2\r\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37\r\n - CuDNN 7.6.5\r\n - Magma 2.5.2\r\n - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, \r\n\r\nTorchVision: 0.6.0a0+35d732a\r\nOpenCV: 4.2.0\r\nMMCV: 0.6.1\r\nMMDetection: 2.1.0+b44e78b\r\nMMDetection Compiler: GCC 7.5\r\nMMDetection CUDA Compiler: 10.2\r\n```\r\n\r\n2. You may add addition that may be helpful for locating the problem, such as\r\n - How you installed PyTorch [e.g., pip, conda, source] : conda\r\n\r\n\r\nIf you need any log file or some source code from me, just let me know. 
\n", "before_files": [{"content": "from mmdet.core import eval_map, eval_recalls\nfrom .builder import DATASETS\nfrom .xml_style import XMLDataset\n\n\[email protected]_module()\nclass VOCDataset(XMLDataset):\n\n CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',\n 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',\n 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train',\n 'tvmonitor')\n\n def __init__(self, **kwargs):\n super(VOCDataset, self).__init__(**kwargs)\n if 'VOC2007' in self.img_prefix:\n self.year = 2007\n elif 'VOC2012' in self.img_prefix:\n self.year = 2012\n else:\n raise ValueError('Cannot infer dataset year from img_prefix')\n\n def evaluate(self,\n results,\n metric='mAP',\n logger=None,\n proposal_nums=(100, 300, 1000),\n iou_thr=0.5,\n scale_ranges=None):\n \"\"\"Evaluate in VOC protocol.\n\n Args:\n results (list[list | tuple]): Testing results of the dataset.\n metric (str | list[str]): Metrics to be evaluated. Options are\n 'mAP', 'recall'.\n logger (logging.Logger | str, optional): Logger used for printing\n related information during evaluation. Default: None.\n proposal_nums (Sequence[int]): Proposal number used for evaluating\n recalls, such as recall@100, recall@1000.\n Default: (100, 300, 1000).\n iou_thr (float | list[float]): IoU threshold. It must be a float\n when evaluating mAP, and can be a list when evaluating recall.\n Default: 0.5.\n scale_ranges (list[tuple], optional): Scale ranges for evaluating\n mAP. If not specified, all bounding boxes would be included in\n evaluation. Default: None.\n\n Returns:\n dict[str, float]: AP/recall metrics.\n \"\"\"\n\n if not isinstance(metric, str):\n assert len(metric) == 1\n metric = metric[0]\n allowed_metrics = ['mAP', 'recall']\n if metric not in allowed_metrics:\n raise KeyError(f'metric {metric} is not supported')\n annotations = [self.get_ann_info(i) for i in range(len(self))]\n eval_results = {}\n if metric == 'mAP':\n assert isinstance(iou_thr, float)\n if self.year == 2007:\n ds_name = 'voc07'\n else:\n ds_name = self.dataset.CLASSES\n mean_ap, _ = eval_map(\n results,\n annotations,\n scale_ranges=None,\n iou_thr=iou_thr,\n dataset=ds_name,\n logger=logger)\n eval_results['mAP'] = mean_ap\n elif metric == 'recall':\n gt_bboxes = [ann['bboxes'] for ann in annotations]\n if isinstance(iou_thr, float):\n iou_thr = [iou_thr]\n recalls = eval_recalls(\n gt_bboxes, results, proposal_nums, iou_thr, logger=logger)\n for i, num in enumerate(proposal_nums):\n for j, iou in enumerate(iou_thr):\n eval_results[f'recall@{num}@{iou}'] = recalls[i, j]\n if recalls.shape[1] > 1:\n ar = recalls.mean(axis=1)\n for i, num in enumerate(proposal_nums):\n eval_results[f'AR@{num}'] = ar[i]\n return eval_results\n", "path": "mmdet/datasets/voc.py"}], "after_files": [{"content": "from mmdet.core import eval_map, eval_recalls\nfrom .builder import DATASETS\nfrom .xml_style import XMLDataset\n\n\[email protected]_module()\nclass VOCDataset(XMLDataset):\n\n CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',\n 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',\n 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train',\n 'tvmonitor')\n\n def __init__(self, **kwargs):\n super(VOCDataset, self).__init__(**kwargs)\n if 'VOC2007' in self.img_prefix:\n self.year = 2007\n elif 'VOC2012' in self.img_prefix:\n self.year = 2012\n else:\n raise ValueError('Cannot infer dataset year from img_prefix')\n\n def evaluate(self,\n results,\n metric='mAP',\n logger=None,\n 
proposal_nums=(100, 300, 1000),\n iou_thr=0.5,\n scale_ranges=None):\n \"\"\"Evaluate in VOC protocol.\n\n Args:\n results (list[list | tuple]): Testing results of the dataset.\n metric (str | list[str]): Metrics to be evaluated. Options are\n 'mAP', 'recall'.\n logger (logging.Logger | str, optional): Logger used for printing\n related information during evaluation. Default: None.\n proposal_nums (Sequence[int]): Proposal number used for evaluating\n recalls, such as recall@100, recall@1000.\n Default: (100, 300, 1000).\n iou_thr (float | list[float]): IoU threshold. It must be a float\n when evaluating mAP, and can be a list when evaluating recall.\n Default: 0.5.\n scale_ranges (list[tuple], optional): Scale ranges for evaluating\n mAP. If not specified, all bounding boxes would be included in\n evaluation. Default: None.\n\n Returns:\n dict[str, float]: AP/recall metrics.\n \"\"\"\n\n if not isinstance(metric, str):\n assert len(metric) == 1\n metric = metric[0]\n allowed_metrics = ['mAP', 'recall']\n if metric not in allowed_metrics:\n raise KeyError(f'metric {metric} is not supported')\n annotations = [self.get_ann_info(i) for i in range(len(self))]\n eval_results = {}\n if metric == 'mAP':\n assert isinstance(iou_thr, float)\n if self.year == 2007:\n ds_name = 'voc07'\n else:\n ds_name = self.CLASSES\n mean_ap, _ = eval_map(\n results,\n annotations,\n scale_ranges=None,\n iou_thr=iou_thr,\n dataset=ds_name,\n logger=logger)\n eval_results['mAP'] = mean_ap\n elif metric == 'recall':\n gt_bboxes = [ann['bboxes'] for ann in annotations]\n if isinstance(iou_thr, float):\n iou_thr = [iou_thr]\n recalls = eval_recalls(\n gt_bboxes, results, proposal_nums, iou_thr, logger=logger)\n for i, num in enumerate(proposal_nums):\n for j, iou in enumerate(iou_thr):\n eval_results[f'recall@{num}@{iou}'] = recalls[i, j]\n if recalls.shape[1] > 1:\n ar = recalls.mean(axis=1)\n for i, num in enumerate(proposal_nums):\n eval_results[f'AR@{num}'] = ar[i]\n return eval_results\n", "path": "mmdet/datasets/voc.py"}]}
| 3,039 | 114 |
gh_patches_debug_6734 | rasdani/github-patches | git_diff | boto__botocore-1312 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when trying to read 0 byte from StreamingBody
Referring to the read method of `StreamingBody`:
https://github.com/boto/botocore/blob/c632931a6cc5eab4113c976d430bcb9c059f829f/botocore/response.py#L69-L81
If anyone asks for 0 bytes from a StreamingBody, the conditional on line 76 will pass because `chunk` is empty (since 0 bytes were asked for) and `amt` was set to 0 (not None). This triggers the content-length verification, which will fail because you've read 0 bytes so far out of the entire content.
It might be an odd use case, but I feel like it is a valid one.
In fact, I ran into this issue when trying to use the `ijson` package [link](https://pypi.python.org/pypi/ijson).
That library uses `.read(0)` in order to figure out what type of encoding the stream reader should use. Whether that's the best way to do it or not, I'm not entirely sure. But I feel like `.read(0)` should still be supported.
If you guys agree that it should be supported, maybe consider a condition like this:
```
if (not chunk and amt > 0) or amt is None:
```
--- END ISSUE ---
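To make the failure mode concrete, here is a small sketch against an in-memory stream (the payload and length are illustrative, and it assumes a botocore build with the proposed guard applied):

```python
import io
from botocore.response import StreamingBody

payload = b'{"key": "value"}'
body = StreamingBody(io.BytesIO(payload), content_length=len(payload))

# With the original `if not chunk or amt is None:` guard, read(0) returns b''
# and immediately runs the content-length check while _amount_read is still 0,
# raising IncompleteReadError. With the proposed guard it just returns b''.
first = body.read(0)
rest = body.read()
assert first == b'' and rest == payload
```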
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `botocore/response.py`
Content:
```
1 # Copyright (c) 2012-2013 Mitch Garnaat http://garnaat.org/
2 # Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License"). You
5 # may not use this file except in compliance with the License. A copy of
6 # the License is located at
7 #
8 # http://aws.amazon.com/apache2.0/
9 #
10 # or in the "license" file accompanying this file. This file is
11 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
12 # ANY KIND, either express or implied. See the License for the specific
13 # language governing permissions and limitations under the License.
14
15 import sys
16 import xml.etree.cElementTree
17 import logging
18
19 from botocore import ScalarTypes
20 from botocore.hooks import first_non_none_response
21 from botocore.compat import json, set_socket_timeout, XMLParseError
22 from botocore.exceptions import IncompleteReadError
23 from botocore import parsers
24
25
26 logger = logging.getLogger(__name__)
27
28
29 class StreamingBody(object):
30 """Wrapper class for an http response body.
31
32 This provides a few additional conveniences that do not exist
33 in the urllib3 model:
34
35 * Set the timeout on the socket (i.e read() timeouts)
36 * Auto validation of content length, if the amount of bytes
37 we read does not match the content length, an exception
38 is raised.
39
40 """
41 def __init__(self, raw_stream, content_length):
42 self._raw_stream = raw_stream
43 self._content_length = content_length
44 self._amount_read = 0
45
46 def set_socket_timeout(self, timeout):
47 """Set the timeout seconds on the socket."""
48 # The problem we're trying to solve is to prevent .read() calls from
49 # hanging. This can happen in rare cases. What we'd like to ideally
50 # do is set a timeout on the .read() call so that callers can retry
51 # the request.
52 # Unfortunately, this isn't currently possible in requests.
53 # See: https://github.com/kennethreitz/requests/issues/1803
54 # So what we're going to do is reach into the guts of the stream and
55 # grab the socket object, which we can set the timeout on. We're
56 # putting in a check here so in case this interface goes away, we'll
57 # know.
58 try:
59 # To further complicate things, the way to grab the
60 # underlying socket object from an HTTPResponse is different
61 # in py2 and py3. So this code has been pushed to botocore.compat.
62 set_socket_timeout(self._raw_stream, timeout)
63 except AttributeError:
64 logger.error("Cannot access the socket object of "
65 "a streaming response. It's possible "
66 "the interface has changed.", exc_info=True)
67 raise
68
69 def read(self, amt=None):
70 """Read at most amt bytes from the stream.
71
72 If the amt argument is omitted, read all data.
73 """
74 chunk = self._raw_stream.read(amt)
75 self._amount_read += len(chunk)
76 if not chunk or amt is None:
77 # If the server sends empty contents or
78 # we ask to read all of the contents, then we know
79 # we need to verify the content length.
80 self._verify_content_length()
81 return chunk
82
83 def _verify_content_length(self):
84 # See: https://github.com/kennethreitz/requests/issues/1855
85 # Basically, our http library doesn't do this for us, so we have
86 # to do this ourself.
87 if self._content_length is not None and \
88 self._amount_read != int(self._content_length):
89 raise IncompleteReadError(
90 actual_bytes=self._amount_read,
91 expected_bytes=int(self._content_length))
92
93 def close(self):
94 """Close the underlying http response stream."""
95 self._raw_stream.close()
96
97
98 def get_response(operation_model, http_response):
99 protocol = operation_model.metadata['protocol']
100 response_dict = {
101 'headers': http_response.headers,
102 'status_code': http_response.status_code,
103 }
104 # TODO: Unfortunately, we have to have error logic here.
105 # If it looks like an error, in the streaming response case we
106 # need to actually grab the contents.
107 if response_dict['status_code'] >= 300:
108 response_dict['body'] = http_response.content
109 elif operation_model.has_streaming_output:
110 response_dict['body'] = StreamingBody(
111 http_response.raw, response_dict['headers'].get('content-length'))
112 else:
113 response_dict['body'] = http_response.content
114
115 parser = parsers.create_parser(protocol)
116 return http_response, parser.parse(response_dict,
117 operation_model.output_shape)
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/botocore/response.py b/botocore/response.py
--- a/botocore/response.py
+++ b/botocore/response.py
@@ -73,7 +73,7 @@
"""
chunk = self._raw_stream.read(amt)
self._amount_read += len(chunk)
- if not chunk or amt is None:
+ if amt is None or (not chunk and amt > 0):
# If the server sends empty contents or
# we ask to read all of the contents, then we know
# we need to verify the content length.
|
{"golden_diff": "diff --git a/botocore/response.py b/botocore/response.py\n--- a/botocore/response.py\n+++ b/botocore/response.py\n@@ -73,7 +73,7 @@\n \"\"\"\n chunk = self._raw_stream.read(amt)\n self._amount_read += len(chunk)\n- if not chunk or amt is None:\n+ if amt is None or (not chunk and amt > 0):\n # If the server sends empty contents or\n # we ask to read all of the contents, then we know\n # we need to verify the content length.\n", "issue": "Error when trying to read 0 byte from StreamingBody\nReferring to the read method of `StreamingBody`:\r\nhttps://github.com/boto/botocore/blob/c632931a6cc5eab4113c976d430bcb9c059f829f/botocore/response.py#L69-L81\r\n\r\nIf anyone asks for 0 bytes from a StreamingBody, the conditional on line 76 will pass because chunk is empty (since 0 bytes were asked for) and amount was set to 0 (not None). This leads to the content length verification, which will fail because you've read 0 bytes so far out of the entire content.\r\n\r\nMight be an odd use case, but I feel like is a valid use case.\r\nIn fact, I ran into this issue when trying to use the `ijson` package [link](https://pypi.python.org/pypi/ijson).\r\nThat library uses `.read(0)` in order to figure out what type of encoding the stream reader should use. Whether that's the best way to do it or not, I'm not entirely sure. But I feel like `.read(0)` should still be supported.\r\n\r\nIf you guys agree that it should be supported, maybe considering a condition like this:\r\n```\r\nif (not chunk and amt > 0) or amt is None:\r\n```\n", "before_files": [{"content": "# Copyright (c) 2012-2013 Mitch Garnaat http://garnaat.org/\n# Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\nimport sys\nimport xml.etree.cElementTree\nimport logging\n\nfrom botocore import ScalarTypes\nfrom botocore.hooks import first_non_none_response\nfrom botocore.compat import json, set_socket_timeout, XMLParseError\nfrom botocore.exceptions import IncompleteReadError\nfrom botocore import parsers\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass StreamingBody(object):\n \"\"\"Wrapper class for an http response body.\n\n This provides a few additional conveniences that do not exist\n in the urllib3 model:\n\n * Set the timeout on the socket (i.e read() timeouts)\n * Auto validation of content length, if the amount of bytes\n we read does not match the content length, an exception\n is raised.\n\n \"\"\"\n def __init__(self, raw_stream, content_length):\n self._raw_stream = raw_stream\n self._content_length = content_length\n self._amount_read = 0\n\n def set_socket_timeout(self, timeout):\n \"\"\"Set the timeout seconds on the socket.\"\"\"\n # The problem we're trying to solve is to prevent .read() calls from\n # hanging. This can happen in rare cases. 
What we'd like to ideally\n # do is set a timeout on the .read() call so that callers can retry\n # the request.\n # Unfortunately, this isn't currently possible in requests.\n # See: https://github.com/kennethreitz/requests/issues/1803\n # So what we're going to do is reach into the guts of the stream and\n # grab the socket object, which we can set the timeout on. We're\n # putting in a check here so in case this interface goes away, we'll\n # know.\n try:\n # To further complicate things, the way to grab the\n # underlying socket object from an HTTPResponse is different\n # in py2 and py3. So this code has been pushed to botocore.compat.\n set_socket_timeout(self._raw_stream, timeout)\n except AttributeError:\n logger.error(\"Cannot access the socket object of \"\n \"a streaming response. It's possible \"\n \"the interface has changed.\", exc_info=True)\n raise\n\n def read(self, amt=None):\n \"\"\"Read at most amt bytes from the stream.\n\n If the amt argument is omitted, read all data.\n \"\"\"\n chunk = self._raw_stream.read(amt)\n self._amount_read += len(chunk)\n if not chunk or amt is None:\n # If the server sends empty contents or\n # we ask to read all of the contents, then we know\n # we need to verify the content length.\n self._verify_content_length()\n return chunk\n\n def _verify_content_length(self):\n # See: https://github.com/kennethreitz/requests/issues/1855\n # Basically, our http library doesn't do this for us, so we have\n # to do this ourself.\n if self._content_length is not None and \\\n self._amount_read != int(self._content_length):\n raise IncompleteReadError(\n actual_bytes=self._amount_read,\n expected_bytes=int(self._content_length))\n\n def close(self):\n \"\"\"Close the underlying http response stream.\"\"\"\n self._raw_stream.close()\n\n\ndef get_response(operation_model, http_response):\n protocol = operation_model.metadata['protocol']\n response_dict = {\n 'headers': http_response.headers,\n 'status_code': http_response.status_code,\n }\n # TODO: Unfortunately, we have to have error logic here.\n # If it looks like an error, in the streaming response case we\n # need to actually grab the contents.\n if response_dict['status_code'] >= 300:\n response_dict['body'] = http_response.content\n elif operation_model.has_streaming_output:\n response_dict['body'] = StreamingBody(\n http_response.raw, response_dict['headers'].get('content-length'))\n else:\n response_dict['body'] = http_response.content\n\n parser = parsers.create_parser(protocol)\n return http_response, parser.parse(response_dict,\n operation_model.output_shape)\n", "path": "botocore/response.py"}], "after_files": [{"content": "# Copyright (c) 2012-2013 Mitch Garnaat http://garnaat.org/\n# Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\n\nimport sys\nimport xml.etree.cElementTree\nimport logging\n\nfrom botocore import ScalarTypes\nfrom botocore.hooks import first_non_none_response\nfrom botocore.compat import json, set_socket_timeout, XMLParseError\nfrom botocore.exceptions import IncompleteReadError\nfrom botocore import parsers\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass StreamingBody(object):\n \"\"\"Wrapper class for an http response body.\n\n This provides a few additional conveniences that do not exist\n in the urllib3 model:\n\n * Set the timeout on the socket (i.e read() timeouts)\n * Auto validation of content length, if the amount of bytes\n we read does not match the content length, an exception\n is raised.\n\n \"\"\"\n def __init__(self, raw_stream, content_length):\n self._raw_stream = raw_stream\n self._content_length = content_length\n self._amount_read = 0\n\n def set_socket_timeout(self, timeout):\n \"\"\"Set the timeout seconds on the socket.\"\"\"\n # The problem we're trying to solve is to prevent .read() calls from\n # hanging. This can happen in rare cases. What we'd like to ideally\n # do is set a timeout on the .read() call so that callers can retry\n # the request.\n # Unfortunately, this isn't currently possible in requests.\n # See: https://github.com/kennethreitz/requests/issues/1803\n # So what we're going to do is reach into the guts of the stream and\n # grab the socket object, which we can set the timeout on. We're\n # putting in a check here so in case this interface goes away, we'll\n # know.\n try:\n # To further complicate things, the way to grab the\n # underlying socket object from an HTTPResponse is different\n # in py2 and py3. So this code has been pushed to botocore.compat.\n set_socket_timeout(self._raw_stream, timeout)\n except AttributeError:\n logger.error(\"Cannot access the socket object of \"\n \"a streaming response. 
It's possible \"\n \"the interface has changed.\", exc_info=True)\n raise\n\n def read(self, amt=None):\n \"\"\"Read at most amt bytes from the stream.\n\n If the amt argument is omitted, read all data.\n \"\"\"\n chunk = self._raw_stream.read(amt)\n self._amount_read += len(chunk)\n if amt is None or (not chunk and amt > 0):\n # If the server sends empty contents or\n # we ask to read all of the contents, then we know\n # we need to verify the content length.\n self._verify_content_length()\n return chunk\n\n def _verify_content_length(self):\n # See: https://github.com/kennethreitz/requests/issues/1855\n # Basically, our http library doesn't do this for us, so we have\n # to do this ourself.\n if self._content_length is not None and \\\n self._amount_read != int(self._content_length):\n raise IncompleteReadError(\n actual_bytes=self._amount_read,\n expected_bytes=int(self._content_length))\n\n def close(self):\n \"\"\"Close the underlying http response stream.\"\"\"\n self._raw_stream.close()\n\n\ndef get_response(operation_model, http_response):\n protocol = operation_model.metadata['protocol']\n response_dict = {\n 'headers': http_response.headers,\n 'status_code': http_response.status_code,\n }\n # TODO: Unfortunately, we have to have error logic here.\n # If it looks like an error, in the streaming response case we\n # need to actually grab the contents.\n if response_dict['status_code'] >= 300:\n response_dict['body'] = http_response.content\n elif operation_model.has_streaming_output:\n response_dict['body'] = StreamingBody(\n http_response.raw, response_dict['headers'].get('content-length'))\n else:\n response_dict['body'] = http_response.content\n\n parser = parsers.create_parser(protocol)\n return http_response, parser.parse(response_dict,\n operation_model.output_shape)\n", "path": "botocore/response.py"}]}
| 1,871 | 131 |
gh_patches_debug_38736 | rasdani/github-patches | git_diff | microsoft__AzureTRE-1656 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Service Bus Sessions never terminate
After a processor receives a message on a session, it holds onto that session indefinitely, blocking the thread so that other messages cannot be processed.
We need to terminate the session after each message has been processed / errored out.
--- END ISSUE ---
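One way to realise this against the async `azure-servicebus` API used by the runner below; the function name and the 1-second wait are illustrative, not part of the repository:

```python
from azure.servicebus import NEXT_AVAILABLE_SESSION
from azure.servicebus.aio import ServiceBusClient


async def drain_one_session(sb_client: ServiceBusClient, queue_name: str) -> None:
    # max_wait_time=1 makes the async iterator stop after ~1s with no new
    # messages, so the receiver exits the `async with` block and the session
    # lock is released instead of being held indefinitely.
    async with sb_client.get_queue_receiver(
            queue_name=queue_name,
            session_id=NEXT_AVAILABLE_SESSION,
            max_wait_time=1) as receiver:
        async for msg in receiver:
            print(str(msg))                      # placeholder for real message handling
            await receiver.complete_message(msg)
```

Leaving the `async with` block closes the receiver, so the session is dropped as soon as the queue goes quiet rather than for the life of the process.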
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `resource_processor/vmss_porter/runner.py`
Content:
```
1 import threading
2 from multiprocessing import Process
3 import json
4 import socket
5 import asyncio
6 import logging
7 import sys
8 from resources.commands import build_porter_command, build_porter_command_for_outputs
9 from shared.config import get_config
10 from resources.helpers import get_installation_id
11 from resources.httpserver import start_server
12
13 from shared.logging import disable_unwanted_loggers, initialize_logging, get_message_id_logger, shell_output_logger # pylint: disable=import-error # noqa
14 from resources import strings, statuses # pylint: disable=import-error # noqa
15 from contextlib import asynccontextmanager
16 from azure.servicebus import ServiceBusMessage, NEXT_AVAILABLE_SESSION
17 from azure.servicebus.exceptions import OperationTimeoutError, ServiceBusConnectionError
18 from azure.servicebus.aio import ServiceBusClient, AutoLockRenewer
19 from azure.identity.aio import DefaultAzureCredential
20
21
22 # Initialise logging
23 logger_adapter = initialize_logging(logging.INFO, socket.gethostname())
24 disable_unwanted_loggers()
25
26 # Initialise config
27 try:
28 config = get_config(logger_adapter)
29 except KeyError as e:
30 logger_adapter.error(f"Environment variable {e} is not set correctly...Exiting")
31 sys.exit(1)
32
33
34 @asynccontextmanager
35 async def default_credentials(msi_id):
36 """
37 Context manager which yields the default credentials.
38 """
39 credential = DefaultAzureCredential(managed_identity_client_id=msi_id) if msi_id else DefaultAzureCredential()
40 yield credential
41 await credential.close()
42
43
44 async def receive_message(service_bus_client):
45 """
46 This method is run per process. Each process will connect to service bus and try to establish a session.
47 If messages are there, the process will continue to receive all the messages associated with that session.
48 If no messages are there, the session connection will time out, sleep, and retry.
49 """
50 q_name = config["resource_request_queue"]
51
52 while True:
53 try:
54 logger_adapter.info("Looking for new session...")
55 async with service_bus_client.get_queue_receiver(queue_name=q_name, session_id=NEXT_AVAILABLE_SESSION) as receiver:
56 logger_adapter.info("Got a session containing messages")
57 async with AutoLockRenewer() as renewer:
58 # allow a message to be auto lock renewed for up to an hour
59 renewer.register(receiver, receiver.session, max_lock_renewal_duration=3600)
60
61 async for msg in receiver:
62 result = True
63 message = ""
64
65 try:
66 message = json.loads(str(msg))
67 logger_adapter.info(f"Message received for resource_id={message['id']}, operation_id={message['operationId']}")
68 message_logger_adapter = get_message_id_logger(message['operationId']) # correlate messages per operation
69 result = await invoke_porter_action(message, service_bus_client, message_logger_adapter)
70 except (json.JSONDecodeError) as e:
71 logging.error(f"Received bad service bus resource request message: {e}")
72
73 if result:
74 logging.info(f"Resource request for {message} is complete")
75 else:
76 logging.error('Message processing failed!')
77
78 logger_adapter.info(f"Message with id = {message['id']} processed as {result} and marked complete.")
79 await receiver.complete_message(msg)
80
81 except OperationTimeoutError:
82 # Timeout occurred whilst connecting to a session - this is expected and indicates no non-empty sessions are available
83 logger_adapter.info("No sessions for this process. Sleeping 30s then will look again...")
84
85 except ServiceBusConnectionError:
86 # Occasionally there will be a transient / network-level error in connecting to SB.
87 logger_adapter.info("Unknown Service Bus connection error. Sleeping and will retry...")
88
89 except Exception:
90 # Catch all other exceptions, log them via .exception to get the stack trace, sleep, and reconnect
91 logger_adapter.exception("Unknown exception. Sleeping and will retry...")
92
93 finally:
94 await asyncio.sleep(30)
95
96
97 async def run_porter(command):
98 """
99 Run a Porter command
100 """
101 proc = await asyncio.create_subprocess_shell(
102 ''.join(command),
103 stdout=asyncio.subprocess.PIPE,
104 stderr=asyncio.subprocess.PIPE,
105 env=config["porter_env"])
106
107 stdout, stderr = await proc.communicate()
108 logging.info(f'run porter exited with {proc.returncode}')
109 result_stdout = None
110 result_stderr = None
111
112 if stdout:
113 result_stdout = stdout.decode()
114 shell_output_logger(result_stdout, '[stdout]', logger_adapter, logging.INFO)
115
116 if stderr:
117 result_stderr = stderr.decode()
118 shell_output_logger(result_stderr, '[stderr]', logger_adapter, logging.WARN)
119
120 return (proc.returncode, result_stdout, result_stderr)
121
122
123 def service_bus_message_generator(sb_message, status, deployment_message, outputs=None):
124 """
125 Generate a resource request message
126 """
127 installation_id = get_installation_id(sb_message)
128 message_dict = {
129 "operationId": sb_message["operationId"],
130 "id": sb_message["id"],
131 "status": status,
132 "message": f"{installation_id}: {deployment_message}"}
133
134 if outputs is not None:
135 message_dict["outputs"] = outputs
136
137 resource_request_message = json.dumps(message_dict)
138 return resource_request_message
139
140
141 async def invoke_porter_action(msg_body, sb_client, message_logger_adapter) -> bool:
142 """
143 Handle resource message by invoking specified porter action (i.e. install, uninstall)
144 """
145 installation_id = get_installation_id(msg_body)
146 action = msg_body["action"]
147 message_logger_adapter.info(f"{installation_id}: {action} action starting...")
148 sb_sender = sb_client.get_queue_sender(queue_name=config["deployment_status_queue"])
149
150 # If the action is install/upgrade, post message on sb queue to start a deployment job
151 if action == "install" or action == "upgrade":
152 resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_DEPLOYING, "Deployment job starting")
153 await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body["id"]))
154
155 # Build and run porter command (flagging if its a built-in action or custom so we can adapt porter command appropriately)
156 is_custom_action = action not in ["install", "upgrade", "uninstall"]
157 porter_command = await build_porter_command(config, message_logger_adapter, msg_body, is_custom_action)
158 returncode, _, err = await run_porter(porter_command)
159
160 # Handle command output
161 if returncode != 0:
162 error_message = "Error context message = " + " ".join(err.split('\n')) + " ; Command executed: ".join(porter_command)
163 resource_request_message = service_bus_message_generator(msg_body, statuses.failed_status_string_for[action], error_message)
164
165 # Post message on sb queue to notify receivers of action failure
166 await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body["id"]))
167 message_logger_adapter.info(f"{installation_id}: Porter action failed with error = {error_message}")
168 return False
169
170 else:
171 # Get the outputs
172 # TODO: decide if this should "fail" the deployment
173 _, outputs = await get_porter_outputs(msg_body, message_logger_adapter)
174
175 success_message = f"{action} action completed successfully."
176 resource_request_message = service_bus_message_generator(msg_body, statuses.pass_status_string_for[action], success_message, outputs)
177
178 await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body["id"]))
179 message_logger_adapter.info(f"{installation_id}: {success_message}")
180 return True
181
182
183 async def get_porter_outputs(msg_body, message_logger_adapter):
184 """
185 Get outputs JSON from a Porter command
186 """
187 porter_command = await build_porter_command_for_outputs(msg_body)
188 returncode, stdout, err = await run_porter(porter_command)
189
190 if returncode != 0:
191 error_message = "Error context message = " + " ".join(err.split('\n'))
192 message_logger_adapter.info(f"{get_installation_id(msg_body)}: Failed to get outputs with error = {error_message}")
193 return False, ""
194 else:
195 outputs_json = {}
196 try:
197 outputs_json = json.loads(stdout)
198 message_logger_adapter.info(f"Got outputs as json: {outputs_json}")
199 except ValueError:
200 message_logger_adapter.error(f"Got outputs invalid json: {stdout}")
201
202 return True, outputs_json
203
204
205 async def runner():
206 async with default_credentials(config["vmss_msi_id"]) as credential:
207 service_bus_client = ServiceBusClient(config["service_bus_namespace"], credential)
208 await receive_message(service_bus_client)
209
210
211 def start_runner_process():
212 asyncio.ensure_future(runner())
213 event_loop = asyncio.get_event_loop()
214 event_loop.run_forever()
215 logger_adapter.info("Started resource processor")
216
217
218 if __name__ == "__main__":
219 httpserver_thread = threading.Thread(target=start_server)
220 httpserver_thread.start()
221 logger_adapter.info("Started http server")
222
223 logger_adapter.info(f'Starting {str(config["number_processes_int"])} processes...')
224 for i in range(config["number_processes_int"]):
225 logger_adapter.info(f'Starting process {str(i)}')
226 process = Process(target=start_runner_process)
227 process.start()
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/resource_processor/vmss_porter/runner.py b/resource_processor/vmss_porter/runner.py
--- a/resource_processor/vmss_porter/runner.py
+++ b/resource_processor/vmss_porter/runner.py
@@ -52,10 +52,11 @@
while True:
try:
logger_adapter.info("Looking for new session...")
- async with service_bus_client.get_queue_receiver(queue_name=q_name, session_id=NEXT_AVAILABLE_SESSION) as receiver:
+ # max_wait_time=1 -> don't hold the session open after processing of the message has finished
+ async with service_bus_client.get_queue_receiver(queue_name=q_name, max_wait_time=1, session_id=NEXT_AVAILABLE_SESSION) as receiver:
logger_adapter.info("Got a session containing messages")
async with AutoLockRenewer() as renewer:
- # allow a message to be auto lock renewed for up to an hour
+ # allow a session to be auto lock renewed for up to an hour - if it's processing a message
renewer.register(receiver, receiver.session, max_lock_renewal_duration=3600)
async for msg in receiver:
@@ -75,23 +76,23 @@
else:
logging.error('Message processing failed!')
- logger_adapter.info(f"Message with id = {message['id']} processed as {result} and marked complete.")
+ logger_adapter.info(f"Message for resource_id={message['id']}, operation_id={message['operationId']} processed as {result} and marked complete.")
await receiver.complete_message(msg)
+ logger_adapter.info("Closing session")
+ await renewer.close()
+
except OperationTimeoutError:
# Timeout occurred whilst connecting to a session - this is expected and indicates no non-empty sessions are available
- logger_adapter.info("No sessions for this process. Sleeping 30s then will look again...")
+ logger_adapter.info("No sessions for this process. Will look again...")
except ServiceBusConnectionError:
# Occasionally there will be a transient / network-level error in connecting to SB.
- logger_adapter.info("Unknown Service Bus connection error. Sleeping and will retry...")
+ logger_adapter.info("Unknown Service Bus connection error. Will retry...")
except Exception:
# Catch all other exceptions, log them via .exception to get the stack trace, sleep, and reconnect
- logger_adapter.exception("Unknown exception. Sleeping and will retry...")
-
- finally:
- await asyncio.sleep(30)
+ logger_adapter.exception("Unknown exception. Will retry...")
async def run_porter(command):
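
For orientation, the session-handling pattern this patch adopts can be sketched on its own as below; the queue name, lock-renewal window and message-processing body are placeholders rather than the script's real configuration.

```python
# Minimal sketch of the patched session handling; names and values are assumptions.
from azure.servicebus import NEXT_AVAILABLE_SESSION
from azure.servicebus.aio import AutoLockRenewer, ServiceBusClient


async def drain_one_session(service_bus_client: ServiceBusClient, queue_name: str):
    # max_wait_time=1 lets the async iterator stop once the session is drained,
    # so the worker does not hold the session open indefinitely.
    async with service_bus_client.get_queue_receiver(
        queue_name=queue_name,
        max_wait_time=1,
        session_id=NEXT_AVAILABLE_SESSION,
    ) as receiver:
        async with AutoLockRenewer() as renewer:
            # Keep the session lock alive only while messages are being processed.
            renewer.register(receiver, receiver.session, max_lock_renewal_duration=3600)
            async for msg in receiver:
                # ... process msg here ...
                await receiver.complete_message(msg)
            # Stop renewing so the session lock is released promptly for other workers.
            await renewer.close()
```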
|
{"golden_diff": "diff --git a/resource_processor/vmss_porter/runner.py b/resource_processor/vmss_porter/runner.py\n--- a/resource_processor/vmss_porter/runner.py\n+++ b/resource_processor/vmss_porter/runner.py\n@@ -52,10 +52,11 @@\n while True:\n try:\n logger_adapter.info(\"Looking for new session...\")\n- async with service_bus_client.get_queue_receiver(queue_name=q_name, session_id=NEXT_AVAILABLE_SESSION) as receiver:\n+ # max_wait_time=1 -> don't hold the session open after processing of the message has finished\n+ async with service_bus_client.get_queue_receiver(queue_name=q_name, max_wait_time=1, session_id=NEXT_AVAILABLE_SESSION) as receiver:\n logger_adapter.info(\"Got a session containing messages\")\n async with AutoLockRenewer() as renewer:\n- # allow a message to be auto lock renewed for up to an hour\n+ # allow a session to be auto lock renewed for up to an hour - if it's processing a message\n renewer.register(receiver, receiver.session, max_lock_renewal_duration=3600)\n \n async for msg in receiver:\n@@ -75,23 +76,23 @@\n else:\n logging.error('Message processing failed!')\n \n- logger_adapter.info(f\"Message with id = {message['id']} processed as {result} and marked complete.\")\n+ logger_adapter.info(f\"Message for resource_id={message['id']}, operation_id={message['operationId']} processed as {result} and marked complete.\")\n await receiver.complete_message(msg)\n \n+ logger_adapter.info(\"Closing session\")\n+ await renewer.close()\n+\n except OperationTimeoutError:\n # Timeout occurred whilst connecting to a session - this is expected and indicates no non-empty sessions are available\n- logger_adapter.info(\"No sessions for this process. Sleeping 30s then will look again...\")\n+ logger_adapter.info(\"No sessions for this process. Will look again...\")\n \n except ServiceBusConnectionError:\n # Occasionally there will be a transient / network-level error in connecting to SB.\n- logger_adapter.info(\"Unknown Service Bus connection error. Sleeping and will retry...\")\n+ logger_adapter.info(\"Unknown Service Bus connection error. Will retry...\")\n \n except Exception:\n # Catch all other exceptions, log them via .exception to get the stack trace, sleep, and reconnect\n- logger_adapter.exception(\"Unknown exception. Sleeping and will retry...\")\n-\n- finally:\n- await asyncio.sleep(30)\n+ logger_adapter.exception(\"Unknown exception. Will retry...\")\n \n \n async def run_porter(command):\n", "issue": "Service Bus Sessions never terminate\nAfter a processor receives a message on a session, it hangs onto that session indefinitely, blocking the thread and meaning other messages cannot be processed. \r\n\r\nWe need to terminate the session after each message has been processed / errored out. 
\n", "before_files": [{"content": "import threading\nfrom multiprocessing import Process\nimport json\nimport socket\nimport asyncio\nimport logging\nimport sys\nfrom resources.commands import build_porter_command, build_porter_command_for_outputs\nfrom shared.config import get_config\nfrom resources.helpers import get_installation_id\nfrom resources.httpserver import start_server\n\nfrom shared.logging import disable_unwanted_loggers, initialize_logging, get_message_id_logger, shell_output_logger # pylint: disable=import-error # noqa\nfrom resources import strings, statuses # pylint: disable=import-error # noqa\nfrom contextlib import asynccontextmanager\nfrom azure.servicebus import ServiceBusMessage, NEXT_AVAILABLE_SESSION\nfrom azure.servicebus.exceptions import OperationTimeoutError, ServiceBusConnectionError\nfrom azure.servicebus.aio import ServiceBusClient, AutoLockRenewer\nfrom azure.identity.aio import DefaultAzureCredential\n\n\n# Initialise logging\nlogger_adapter = initialize_logging(logging.INFO, socket.gethostname())\ndisable_unwanted_loggers()\n\n# Initialise config\ntry:\n config = get_config(logger_adapter)\nexcept KeyError as e:\n logger_adapter.error(f\"Environment variable {e} is not set correctly...Exiting\")\n sys.exit(1)\n\n\n@asynccontextmanager\nasync def default_credentials(msi_id):\n \"\"\"\n Context manager which yields the default credentials.\n \"\"\"\n credential = DefaultAzureCredential(managed_identity_client_id=msi_id) if msi_id else DefaultAzureCredential()\n yield credential\n await credential.close()\n\n\nasync def receive_message(service_bus_client):\n \"\"\"\n This method is run per process. Each process will connect to service bus and try to establish a session.\n If messages are there, the process will continue to receive all the messages associated with that session.\n If no messages are there, the session connection will time out, sleep, and retry.\n \"\"\"\n q_name = config[\"resource_request_queue\"]\n\n while True:\n try:\n logger_adapter.info(\"Looking for new session...\")\n async with service_bus_client.get_queue_receiver(queue_name=q_name, session_id=NEXT_AVAILABLE_SESSION) as receiver:\n logger_adapter.info(\"Got a session containing messages\")\n async with AutoLockRenewer() as renewer:\n # allow a message to be auto lock renewed for up to an hour\n renewer.register(receiver, receiver.session, max_lock_renewal_duration=3600)\n\n async for msg in receiver:\n result = True\n message = \"\"\n\n try:\n message = json.loads(str(msg))\n logger_adapter.info(f\"Message received for resource_id={message['id']}, operation_id={message['operationId']}\")\n message_logger_adapter = get_message_id_logger(message['operationId']) # correlate messages per operation\n result = await invoke_porter_action(message, service_bus_client, message_logger_adapter)\n except (json.JSONDecodeError) as e:\n logging.error(f\"Received bad service bus resource request message: {e}\")\n\n if result:\n logging.info(f\"Resource request for {message} is complete\")\n else:\n logging.error('Message processing failed!')\n\n logger_adapter.info(f\"Message with id = {message['id']} processed as {result} and marked complete.\")\n await receiver.complete_message(msg)\n\n except OperationTimeoutError:\n # Timeout occurred whilst connecting to a session - this is expected and indicates no non-empty sessions are available\n logger_adapter.info(\"No sessions for this process. 
Sleeping 30s then will look again...\")\n\n except ServiceBusConnectionError:\n # Occasionally there will be a transient / network-level error in connecting to SB.\n logger_adapter.info(\"Unknown Service Bus connection error. Sleeping and will retry...\")\n\n except Exception:\n # Catch all other exceptions, log them via .exception to get the stack trace, sleep, and reconnect\n logger_adapter.exception(\"Unknown exception. Sleeping and will retry...\")\n\n finally:\n await asyncio.sleep(30)\n\n\nasync def run_porter(command):\n \"\"\"\n Run a Porter command\n \"\"\"\n proc = await asyncio.create_subprocess_shell(\n ''.join(command),\n stdout=asyncio.subprocess.PIPE,\n stderr=asyncio.subprocess.PIPE,\n env=config[\"porter_env\"])\n\n stdout, stderr = await proc.communicate()\n logging.info(f'run porter exited with {proc.returncode}')\n result_stdout = None\n result_stderr = None\n\n if stdout:\n result_stdout = stdout.decode()\n shell_output_logger(result_stdout, '[stdout]', logger_adapter, logging.INFO)\n\n if stderr:\n result_stderr = stderr.decode()\n shell_output_logger(result_stderr, '[stderr]', logger_adapter, logging.WARN)\n\n return (proc.returncode, result_stdout, result_stderr)\n\n\ndef service_bus_message_generator(sb_message, status, deployment_message, outputs=None):\n \"\"\"\n Generate a resource request message\n \"\"\"\n installation_id = get_installation_id(sb_message)\n message_dict = {\n \"operationId\": sb_message[\"operationId\"],\n \"id\": sb_message[\"id\"],\n \"status\": status,\n \"message\": f\"{installation_id}: {deployment_message}\"}\n\n if outputs is not None:\n message_dict[\"outputs\"] = outputs\n\n resource_request_message = json.dumps(message_dict)\n return resource_request_message\n\n\nasync def invoke_porter_action(msg_body, sb_client, message_logger_adapter) -> bool:\n \"\"\"\n Handle resource message by invoking specified porter action (i.e. 
install, uninstall)\n \"\"\"\n installation_id = get_installation_id(msg_body)\n action = msg_body[\"action\"]\n message_logger_adapter.info(f\"{installation_id}: {action} action starting...\")\n sb_sender = sb_client.get_queue_sender(queue_name=config[\"deployment_status_queue\"])\n\n # If the action is install/upgrade, post message on sb queue to start a deployment job\n if action == \"install\" or action == \"upgrade\":\n resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_DEPLOYING, \"Deployment job starting\")\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body[\"id\"]))\n\n # Build and run porter command (flagging if its a built-in action or custom so we can adapt porter command appropriately)\n is_custom_action = action not in [\"install\", \"upgrade\", \"uninstall\"]\n porter_command = await build_porter_command(config, message_logger_adapter, msg_body, is_custom_action)\n returncode, _, err = await run_porter(porter_command)\n\n # Handle command output\n if returncode != 0:\n error_message = \"Error context message = \" + \" \".join(err.split('\\n')) + \" ; Command executed: \".join(porter_command)\n resource_request_message = service_bus_message_generator(msg_body, statuses.failed_status_string_for[action], error_message)\n\n # Post message on sb queue to notify receivers of action failure\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body[\"id\"]))\n message_logger_adapter.info(f\"{installation_id}: Porter action failed with error = {error_message}\")\n return False\n\n else:\n # Get the outputs\n # TODO: decide if this should \"fail\" the deployment\n _, outputs = await get_porter_outputs(msg_body, message_logger_adapter)\n\n success_message = f\"{action} action completed successfully.\"\n resource_request_message = service_bus_message_generator(msg_body, statuses.pass_status_string_for[action], success_message, outputs)\n\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body[\"id\"]))\n message_logger_adapter.info(f\"{installation_id}: {success_message}\")\n return True\n\n\nasync def get_porter_outputs(msg_body, message_logger_adapter):\n \"\"\"\n Get outputs JSON from a Porter command\n \"\"\"\n porter_command = await build_porter_command_for_outputs(msg_body)\n returncode, stdout, err = await run_porter(porter_command)\n\n if returncode != 0:\n error_message = \"Error context message = \" + \" \".join(err.split('\\n'))\n message_logger_adapter.info(f\"{get_installation_id(msg_body)}: Failed to get outputs with error = {error_message}\")\n return False, \"\"\n else:\n outputs_json = {}\n try:\n outputs_json = json.loads(stdout)\n message_logger_adapter.info(f\"Got outputs as json: {outputs_json}\")\n except ValueError:\n message_logger_adapter.error(f\"Got outputs invalid json: {stdout}\")\n\n return True, outputs_json\n\n\nasync def runner():\n async with default_credentials(config[\"vmss_msi_id\"]) as credential:\n service_bus_client = ServiceBusClient(config[\"service_bus_namespace\"], credential)\n await receive_message(service_bus_client)\n\n\ndef start_runner_process():\n asyncio.ensure_future(runner())\n event_loop = asyncio.get_event_loop()\n event_loop.run_forever()\n logger_adapter.info(\"Started resource processor\")\n\n\nif __name__ == \"__main__\":\n httpserver_thread = threading.Thread(target=start_server)\n httpserver_thread.start()\n logger_adapter.info(\"Started 
http server\")\n\n logger_adapter.info(f'Starting {str(config[\"number_processes_int\"])} processes...')\n for i in range(config[\"number_processes_int\"]):\n logger_adapter.info(f'Starting process {str(i)}')\n process = Process(target=start_runner_process)\n process.start()\n", "path": "resource_processor/vmss_porter/runner.py"}], "after_files": [{"content": "import threading\nfrom multiprocessing import Process\nimport json\nimport socket\nimport asyncio\nimport logging\nimport sys\nfrom resources.commands import build_porter_command, build_porter_command_for_outputs\nfrom shared.config import get_config\nfrom resources.helpers import get_installation_id\nfrom resources.httpserver import start_server\n\nfrom shared.logging import disable_unwanted_loggers, initialize_logging, get_message_id_logger, shell_output_logger # pylint: disable=import-error # noqa\nfrom resources import strings, statuses # pylint: disable=import-error # noqa\nfrom contextlib import asynccontextmanager\nfrom azure.servicebus import ServiceBusMessage, NEXT_AVAILABLE_SESSION\nfrom azure.servicebus.exceptions import OperationTimeoutError, ServiceBusConnectionError\nfrom azure.servicebus.aio import ServiceBusClient, AutoLockRenewer\nfrom azure.identity.aio import DefaultAzureCredential\n\n\n# Initialise logging\nlogger_adapter = initialize_logging(logging.INFO, socket.gethostname())\ndisable_unwanted_loggers()\n\n# Initialise config\ntry:\n config = get_config(logger_adapter)\nexcept KeyError as e:\n logger_adapter.error(f\"Environment variable {e} is not set correctly...Exiting\")\n sys.exit(1)\n\n\n@asynccontextmanager\nasync def default_credentials(msi_id):\n \"\"\"\n Context manager which yields the default credentials.\n \"\"\"\n credential = DefaultAzureCredential(managed_identity_client_id=msi_id) if msi_id else DefaultAzureCredential()\n yield credential\n await credential.close()\n\n\nasync def receive_message(service_bus_client):\n \"\"\"\n This method is run per process. 
Each process will connect to service bus and try to establish a session.\n If messages are there, the process will continue to receive all the messages associated with that session.\n If no messages are there, the session connection will time out, sleep, and retry.\n \"\"\"\n q_name = config[\"resource_request_queue\"]\n\n while True:\n try:\n logger_adapter.info(\"Looking for new session...\")\n # max_wait_time=1 -> don't hold the session open after processing of the message has finished\n async with service_bus_client.get_queue_receiver(queue_name=q_name, max_wait_time=1, session_id=NEXT_AVAILABLE_SESSION) as receiver:\n logger_adapter.info(\"Got a session containing messages\")\n async with AutoLockRenewer() as renewer:\n # allow a session to be auto lock renewed for up to an hour - if it's processing a message\n renewer.register(receiver, receiver.session, max_lock_renewal_duration=3600)\n\n async for msg in receiver:\n result = True\n message = \"\"\n\n try:\n message = json.loads(str(msg))\n logger_adapter.info(f\"Message received for resource_id={message['id']}, operation_id={message['operationId']}\")\n message_logger_adapter = get_message_id_logger(message['operationId']) # correlate messages per operation\n result = await invoke_porter_action(message, service_bus_client, message_logger_adapter)\n except (json.JSONDecodeError) as e:\n logging.error(f\"Received bad service bus resource request message: {e}\")\n\n if result:\n logging.info(f\"Resource request for {message} is complete\")\n else:\n logging.error('Message processing failed!')\n\n logger_adapter.info(f\"Message for resource_id={message['id']}, operation_id={message['operationId']} processed as {result} and marked complete.\")\n await receiver.complete_message(msg)\n\n logger_adapter.info(\"Closing session\")\n await renewer.close()\n\n except OperationTimeoutError:\n # Timeout occurred whilst connecting to a session - this is expected and indicates no non-empty sessions are available\n logger_adapter.info(\"No sessions for this process. Will look again...\")\n\n except ServiceBusConnectionError:\n # Occasionally there will be a transient / network-level error in connecting to SB.\n logger_adapter.info(\"Unknown Service Bus connection error. Will retry...\")\n\n except Exception:\n # Catch all other exceptions, log them via .exception to get the stack trace, sleep, and reconnect\n logger_adapter.exception(\"Unknown exception. 
Will retry...\")\n\n\nasync def run_porter(command):\n \"\"\"\n Run a Porter command\n \"\"\"\n proc = await asyncio.create_subprocess_shell(\n ''.join(command),\n stdout=asyncio.subprocess.PIPE,\n stderr=asyncio.subprocess.PIPE,\n env=config[\"porter_env\"])\n\n stdout, stderr = await proc.communicate()\n logging.info(f'run porter exited with {proc.returncode}')\n result_stdout = None\n result_stderr = None\n\n if stdout:\n result_stdout = stdout.decode()\n shell_output_logger(result_stdout, '[stdout]', logger_adapter, logging.INFO)\n\n if stderr:\n result_stderr = stderr.decode()\n shell_output_logger(result_stderr, '[stderr]', logger_adapter, logging.WARN)\n\n return (proc.returncode, result_stdout, result_stderr)\n\n\ndef service_bus_message_generator(sb_message, status, deployment_message, outputs=None):\n \"\"\"\n Generate a resource request message\n \"\"\"\n installation_id = get_installation_id(sb_message)\n message_dict = {\n \"operationId\": sb_message[\"operationId\"],\n \"id\": sb_message[\"id\"],\n \"status\": status,\n \"message\": f\"{installation_id}: {deployment_message}\"}\n\n if outputs is not None:\n message_dict[\"outputs\"] = outputs\n\n resource_request_message = json.dumps(message_dict)\n return resource_request_message\n\n\nasync def invoke_porter_action(msg_body, sb_client, message_logger_adapter) -> bool:\n \"\"\"\n Handle resource message by invoking specified porter action (i.e. install, uninstall)\n \"\"\"\n installation_id = get_installation_id(msg_body)\n action = msg_body[\"action\"]\n message_logger_adapter.info(f\"{installation_id}: {action} action starting...\")\n sb_sender = sb_client.get_queue_sender(queue_name=config[\"deployment_status_queue\"])\n\n # If the action is install/upgrade, post message on sb queue to start a deployment job\n if action == \"install\" or action == \"upgrade\":\n resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_DEPLOYING, \"Deployment job starting\")\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body[\"id\"]))\n\n # Build and run porter command (flagging if its a built-in action or custom so we can adapt porter command appropriately)\n is_custom_action = action not in [\"install\", \"upgrade\", \"uninstall\"]\n porter_command = await build_porter_command(config, message_logger_adapter, msg_body, is_custom_action)\n returncode, _, err = await run_porter(porter_command)\n\n # Handle command output\n if returncode != 0:\n error_message = \"Error context message = \" + \" \".join(err.split('\\n')) + \" ; Command executed: \".join(porter_command)\n resource_request_message = service_bus_message_generator(msg_body, statuses.failed_status_string_for[action], error_message)\n\n # Post message on sb queue to notify receivers of action failure\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body[\"id\"]))\n message_logger_adapter.info(f\"{installation_id}: Porter action failed with error = {error_message}\")\n return False\n\n else:\n # Get the outputs\n # TODO: decide if this should \"fail\" the deployment\n _, outputs = await get_porter_outputs(msg_body, message_logger_adapter)\n\n success_message = f\"{action} action completed successfully.\"\n resource_request_message = service_bus_message_generator(msg_body, statuses.pass_status_string_for[action], success_message, outputs)\n\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, 
correlation_id=msg_body[\"id\"]))\n message_logger_adapter.info(f\"{installation_id}: {success_message}\")\n return True\n\n\nasync def get_porter_outputs(msg_body, message_logger_adapter):\n \"\"\"\n Get outputs JSON from a Porter command\n \"\"\"\n porter_command = await build_porter_command_for_outputs(msg_body)\n returncode, stdout, err = await run_porter(porter_command)\n\n if returncode != 0:\n error_message = \"Error context message = \" + \" \".join(err.split('\\n'))\n message_logger_adapter.info(f\"{get_installation_id(msg_body)}: Failed to get outputs with error = {error_message}\")\n return False, \"\"\n else:\n outputs_json = {}\n try:\n outputs_json = json.loads(stdout)\n message_logger_adapter.info(f\"Got outputs as json: {outputs_json}\")\n except ValueError:\n message_logger_adapter.error(f\"Got outputs invalid json: {stdout}\")\n\n return True, outputs_json\n\n\nasync def runner():\n async with default_credentials(config[\"vmss_msi_id\"]) as credential:\n service_bus_client = ServiceBusClient(config[\"service_bus_namespace\"], credential)\n await receive_message(service_bus_client)\n\n\ndef start_runner_process():\n asyncio.ensure_future(runner())\n event_loop = asyncio.get_event_loop()\n event_loop.run_forever()\n logger_adapter.info(\"Started resource processor\")\n\n\nif __name__ == \"__main__\":\n httpserver_thread = threading.Thread(target=start_server)\n httpserver_thread.start()\n logger_adapter.info(\"Started http server\")\n\n logger_adapter.info(f'Starting {str(config[\"number_processes_int\"])} processes...')\n for i in range(config[\"number_processes_int\"]):\n logger_adapter.info(f'Starting process {str(i)}')\n process = Process(target=start_runner_process)\n process.start()\n", "path": "resource_processor/vmss_porter/runner.py"}]}
| 2,866 | 565 |
gh_patches_debug_31166 | rasdani/github-patches | git_diff | Pylons__pyramid-3265 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pyramid.scripting.get_root() triggers AttributeError
In Pyramid 1.9.1, get_root() triggers an AttributeError on threadlocal_manager:
```
# bin/pshell mything/development.ini
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help" for more information.
Environment:
app The WSGI application.
registry Active Pyramid registry.
request Active request object.
root Root of the default resource tree.
root_factory Default root factory used to create `root`.
>>> from pyramid.scripting import get_root
>>> x = get_root(app)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/shane/src/pyramidtest/eggs/pyramid-1.9.1-py3.5.egg/pyramid/scripting.py", line 30, in get_root
app.threadlocal_manager.push(threadlocals)
AttributeError: 'Router' object has no attribute 'threadlocal_manager'
>>>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/scripting.py`
Content:
```
1 from pyramid.config import global_registries
2 from pyramid.exceptions import ConfigurationError
3
4 from pyramid.interfaces import (
5 IRequestFactory,
6 IRootFactory,
7 )
8 from pyramid.request import Request
9 from pyramid.request import apply_request_extensions
10
11 from pyramid.threadlocal import manager as threadlocal_manager
12 from pyramid.traversal import DefaultRootFactory
13
14 def get_root(app, request=None):
15 """ Return a tuple composed of ``(root, closer)`` when provided a
16 :term:`router` instance as the ``app`` argument. The ``root``
17 returned is the application root object. The ``closer`` returned
18 is a callable (accepting no arguments) that should be called when
19 your scripting application is finished using the root.
20
21 ``request`` is passed to the :app:`Pyramid` application root
22 factory to compute the root. If ``request`` is None, a default
23 will be constructed using the registry's :term:`Request Factory`
24 via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.
25 """
26 registry = app.registry
27 if request is None:
28 request = _make_request('/', registry)
29 threadlocals = {'registry':registry, 'request':request}
30 app.threadlocal_manager.push(threadlocals)
31 def closer(request=request): # keep request alive via this function default
32 app.threadlocal_manager.pop()
33 root = app.root_factory(request)
34 return root, closer
35
36 def prepare(request=None, registry=None):
37 """ This function pushes data onto the Pyramid threadlocal stack
38 (request and registry), making those objects 'current'. It
39 returns a dictionary useful for bootstrapping a Pyramid
40 application in a scripting environment.
41
42 ``request`` is passed to the :app:`Pyramid` application root
43 factory to compute the root. If ``request`` is None, a default
44 will be constructed using the registry's :term:`Request Factory`
45 via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.
46
47 If ``registry`` is not supplied, the last registry loaded from
48 :attr:`pyramid.config.global_registries` will be used. If you
49 have loaded more than one :app:`Pyramid` application in the
50 current process, you may not want to use the last registry
51 loaded, thus you can search the ``global_registries`` and supply
52 the appropriate one based on your own criteria.
53
54 The function returns a dictionary composed of ``root``,
55 ``closer``, ``registry``, ``request`` and ``root_factory``. The
56 ``root`` returned is the application's root resource object. The
57 ``closer`` returned is a callable (accepting no arguments) that
58 should be called when your scripting application is finished
59 using the root. ``registry`` is the resolved registry object.
60 ``request`` is the request object passed or the constructed request
61 if no request is passed. ``root_factory`` is the root factory used
62 to construct the root.
63
64 This function may be used as a context manager to call the ``closer``
65 automatically:
66
67 .. code-block:: python
68
69 registry = config.registry
70 with prepare(registry) as env:
71 request = env['request']
72 # ...
73
74 .. versionchanged:: 1.8
75
76 Added the ability to use the return value as a context manager.
77
78 """
79 if registry is None:
80 registry = getattr(request, 'registry', global_registries.last)
81 if registry is None:
82 raise ConfigurationError('No valid Pyramid applications could be '
83 'found, make sure one has been created '
84 'before trying to activate it.')
85 if request is None:
86 request = _make_request('/', registry)
87 # NB: even though _make_request might have already set registry on
88 # request, we reset it in case someone has passed in their own
89 # request.
90 request.registry = registry
91 threadlocals = {'registry':registry, 'request':request}
92 threadlocal_manager.push(threadlocals)
93 apply_request_extensions(request)
94 def closer():
95 threadlocal_manager.pop()
96 root_factory = registry.queryUtility(IRootFactory,
97 default=DefaultRootFactory)
98 root = root_factory(request)
99 if getattr(request, 'context', None) is None:
100 request.context = root
101 return AppEnvironment(
102 root=root,
103 closer=closer,
104 registry=registry,
105 request=request,
106 root_factory=root_factory,
107 )
108
109 class AppEnvironment(dict):
110 def __enter__(self):
111 return self
112
113 def __exit__(self, type, value, traceback):
114 self['closer']()
115
116 def _make_request(path, registry=None):
117 """ Return a :meth:`pyramid.request.Request` object anchored at a
118 given path. The object returned will be generated from the supplied
119 registry's :term:`Request Factory` using the
120 :meth:`pyramid.interfaces.IRequestFactory.blank` method.
121
122 This request object can be passed to :meth:`pyramid.scripting.get_root`
123 or :meth:`pyramid.scripting.prepare` to initialize an application in
124 preparation for executing a script with a proper environment setup.
125 URLs can then be generated with the object, as well as rendering
126 templates.
127
128 If ``registry`` is not supplied, the last registry loaded from
129 :attr:`pyramid.config.global_registries` will be used. If you have
130 loaded more than one :app:`Pyramid` application in the current
131 process, you may not want to use the last registry loaded, thus
132 you can search the ``global_registries`` and supply the appropriate
133 one based on your own criteria.
134 """
135 if registry is None:
136 registry = global_registries.last
137 request_factory = registry.queryUtility(IRequestFactory, default=Request)
138 request = request_factory.blank(path)
139 request.registry = registry
140 return request
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyramid/scripting.py b/pyramid/scripting.py
--- a/pyramid/scripting.py
+++ b/pyramid/scripting.py
@@ -8,7 +8,7 @@
from pyramid.request import Request
from pyramid.request import apply_request_extensions
-from pyramid.threadlocal import manager as threadlocal_manager
+from pyramid.threadlocal import RequestContext
from pyramid.traversal import DefaultRootFactory
def get_root(app, request=None):
@@ -26,10 +26,11 @@
registry = app.registry
if request is None:
request = _make_request('/', registry)
- threadlocals = {'registry':registry, 'request':request}
- app.threadlocal_manager.push(threadlocals)
- def closer(request=request): # keep request alive via this function default
- app.threadlocal_manager.pop()
+ request.registry = registry
+ ctx = RequestContext(request)
+ ctx.begin()
+ def closer():
+ ctx.end()
root = app.root_factory(request)
return root, closer
@@ -87,12 +88,12 @@
# NB: even though _make_request might have already set registry on
# request, we reset it in case someone has passed in their own
# request.
- request.registry = registry
- threadlocals = {'registry':registry, 'request':request}
- threadlocal_manager.push(threadlocals)
+ request.registry = registry
+ ctx = RequestContext(request)
+ ctx.begin()
apply_request_extensions(request)
def closer():
- threadlocal_manager.pop()
+ ctx.end()
root_factory = registry.queryUtility(IRootFactory,
default=DefaultRootFactory)
root = root_factory(request)
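
As a quick sanity check, the patched helper can be exercised as in the following sketch; `app` is assumed to be a bootstrapped Pyramid router (for example the `app` object that pshell exposes).

```python
from pyramid.scripting import get_root

root, closer = get_root(app)   # now pushes a RequestContext instead of app.threadlocal_manager
try:
    print(root)                # the application's root resource
finally:
    closer()                   # ends the RequestContext, popping request/registry threadlocals
```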
|
{"golden_diff": "diff --git a/pyramid/scripting.py b/pyramid/scripting.py\n--- a/pyramid/scripting.py\n+++ b/pyramid/scripting.py\n@@ -8,7 +8,7 @@\n from pyramid.request import Request\n from pyramid.request import apply_request_extensions\n \n-from pyramid.threadlocal import manager as threadlocal_manager\n+from pyramid.threadlocal import RequestContext\n from pyramid.traversal import DefaultRootFactory\n \n def get_root(app, request=None):\n@@ -26,10 +26,11 @@\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n- threadlocals = {'registry':registry, 'request':request}\n- app.threadlocal_manager.push(threadlocals)\n- def closer(request=request): # keep request alive via this function default\n- app.threadlocal_manager.pop()\n+ request.registry = registry\n+ ctx = RequestContext(request)\n+ ctx.begin()\n+ def closer():\n+ ctx.end()\n root = app.root_factory(request)\n return root, closer\n \n@@ -87,12 +88,12 @@\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n- request.registry = registry \n- threadlocals = {'registry':registry, 'request':request}\n- threadlocal_manager.push(threadlocals)\n+ request.registry = registry\n+ ctx = RequestContext(request)\n+ ctx.begin()\n apply_request_extensions(request)\n def closer():\n- threadlocal_manager.pop()\n+ ctx.end()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n", "issue": "pyramid.scripting.get_root() triggers AttributeError \nIn Pyramid 1.9.1, get_root() triggers an AttributeError on threadlocal_manager:\r\n\r\n```\r\n# bin/pshell mything/development.ini \r\nPython 3.5.2 (default, Nov 23 2017, 16:37:01) \r\n[GCC 5.4.0 20160609] on linux\r\nType \"help\" for more information.\r\n\r\nEnvironment:\r\n app The WSGI application.\r\n registry Active Pyramid registry.\r\n request Active request object.\r\n root Root of the default resource tree.\r\n root_factory Default root factory used to create `root`.\r\n\r\n>>> from pyramid.scripting import get_root\r\n>>> x = get_root(app)\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/home/shane/src/pyramidtest/eggs/pyramid-1.9.1-py3.5.egg/pyramid/scripting.py\", line 30, in get_root\r\n app.threadlocal_manager.push(threadlocals)\r\nAttributeError: 'Router' object has no attribute 'threadlocal_manager'\r\n>>>\r\n```\n", "before_files": [{"content": "from pyramid.config import global_registries\nfrom pyramid.exceptions import ConfigurationError\n\nfrom pyramid.interfaces import (\n IRequestFactory,\n IRootFactory,\n )\nfrom pyramid.request import Request\nfrom pyramid.request import apply_request_extensions\n\nfrom pyramid.threadlocal import manager as threadlocal_manager\nfrom pyramid.traversal import DefaultRootFactory\n\ndef get_root(app, request=None):\n \"\"\" Return a tuple composed of ``(root, closer)`` when provided a\n :term:`router` instance as the ``app`` argument. The ``root``\n returned is the application root object. The ``closer`` returned\n is a callable (accepting no arguments) that should be called when\n your scripting application is finished using the root.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. 
If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n \"\"\"\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n threadlocals = {'registry':registry, 'request':request}\n app.threadlocal_manager.push(threadlocals)\n def closer(request=request): # keep request alive via this function default\n app.threadlocal_manager.pop()\n root = app.root_factory(request)\n return root, closer\n\ndef prepare(request=None, registry=None):\n \"\"\" This function pushes data onto the Pyramid threadlocal stack\n (request and registry), making those objects 'current'. It\n returns a dictionary useful for bootstrapping a Pyramid\n application in a scripting environment.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you\n have loaded more than one :app:`Pyramid` application in the\n current process, you may not want to use the last registry\n loaded, thus you can search the ``global_registries`` and supply\n the appropriate one based on your own criteria.\n\n The function returns a dictionary composed of ``root``,\n ``closer``, ``registry``, ``request`` and ``root_factory``. The\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n using the root. ``registry`` is the resolved registry object.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n\n This function may be used as a context manager to call the ``closer``\n automatically:\n\n .. code-block:: python\n\n registry = config.registry\n with prepare(registry) as env:\n request = env['request']\n # ...\n\n .. 
versionchanged:: 1.8\n\n Added the ability to use the return value as a context manager.\n\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n if registry is None:\n raise ConfigurationError('No valid Pyramid applications could be '\n 'found, make sure one has been created '\n 'before trying to activate it.')\n if request is None:\n request = _make_request('/', registry)\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n request.registry = registry \n threadlocals = {'registry':registry, 'request':request}\n threadlocal_manager.push(threadlocals)\n apply_request_extensions(request)\n def closer():\n threadlocal_manager.pop()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n return AppEnvironment(\n root=root,\n closer=closer,\n registry=registry,\n request=request,\n root_factory=root_factory,\n )\n\nclass AppEnvironment(dict):\n def __enter__(self):\n return self\n\n def __exit__(self, type, value, traceback):\n self['closer']()\n\ndef _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n given path. The object returned will be generated from the supplied\n registry's :term:`Request Factory` using the\n :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n This request object can be passed to :meth:`pyramid.scripting.get_root`\n or :meth:`pyramid.scripting.prepare` to initialize an application in\n preparation for executing a script with a proper environment setup.\n URLs can then be generated with the object, as well as rendering\n templates.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you have\n loaded more than one :app:`Pyramid` application in the current\n process, you may not want to use the last registry loaded, thus\n you can search the ``global_registries`` and supply the appropriate\n one based on your own criteria.\n \"\"\"\n if registry is None:\n registry = global_registries.last\n request_factory = registry.queryUtility(IRequestFactory, default=Request)\n request = request_factory.blank(path)\n request.registry = registry\n return request\n", "path": "pyramid/scripting.py"}], "after_files": [{"content": "from pyramid.config import global_registries\nfrom pyramid.exceptions import ConfigurationError\n\nfrom pyramid.interfaces import (\n IRequestFactory,\n IRootFactory,\n )\nfrom pyramid.request import Request\nfrom pyramid.request import apply_request_extensions\n\nfrom pyramid.threadlocal import RequestContext\nfrom pyramid.traversal import DefaultRootFactory\n\ndef get_root(app, request=None):\n \"\"\" Return a tuple composed of ``(root, closer)`` when provided a\n :term:`router` instance as the ``app`` argument. The ``root``\n returned is the application root object. The ``closer`` returned\n is a callable (accepting no arguments) that should be called when\n your scripting application is finished using the root.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. 
If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n \"\"\"\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n request.registry = registry\n ctx = RequestContext(request)\n ctx.begin()\n def closer():\n ctx.end()\n root = app.root_factory(request)\n return root, closer\n\ndef prepare(request=None, registry=None):\n \"\"\" This function pushes data onto the Pyramid threadlocal stack\n (request and registry), making those objects 'current'. It\n returns a dictionary useful for bootstrapping a Pyramid\n application in a scripting environment.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you\n have loaded more than one :app:`Pyramid` application in the\n current process, you may not want to use the last registry\n loaded, thus you can search the ``global_registries`` and supply\n the appropriate one based on your own criteria.\n\n The function returns a dictionary composed of ``root``,\n ``closer``, ``registry``, ``request`` and ``root_factory``. The\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n using the root. ``registry`` is the resolved registry object.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n\n This function may be used as a context manager to call the ``closer``\n automatically:\n\n .. code-block:: python\n\n registry = config.registry\n with prepare(registry) as env:\n request = env['request']\n # ...\n\n .. versionchanged:: 1.8\n\n Added the ability to use the return value as a context manager.\n\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n if registry is None:\n raise ConfigurationError('No valid Pyramid applications could be '\n 'found, make sure one has been created '\n 'before trying to activate it.')\n if request is None:\n request = _make_request('/', registry)\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n request.registry = registry\n ctx = RequestContext(request)\n ctx.begin()\n apply_request_extensions(request)\n def closer():\n ctx.end()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n return AppEnvironment(\n root=root,\n closer=closer,\n registry=registry,\n request=request,\n root_factory=root_factory,\n )\n\nclass AppEnvironment(dict):\n def __enter__(self):\n return self\n\n def __exit__(self, type, value, traceback):\n self['closer']()\n\ndef _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n given path. 
The object returned will be generated from the supplied\n registry's :term:`Request Factory` using the\n :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n This request object can be passed to :meth:`pyramid.scripting.get_root`\n or :meth:`pyramid.scripting.prepare` to initialize an application in\n preparation for executing a script with a proper environment setup.\n URLs can then be generated with the object, as well as rendering\n templates.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you have\n loaded more than one :app:`Pyramid` application in the current\n process, you may not want to use the last registry loaded, thus\n you can search the ``global_registries`` and supply the appropriate\n one based on your own criteria.\n \"\"\"\n if registry is None:\n registry = global_registries.last\n request_factory = registry.queryUtility(IRequestFactory, default=Request)\n request = request_factory.blank(path)\n request.registry = registry\n return request\n", "path": "pyramid/scripting.py"}]}
| 2,102 | 366 |
gh_patches_debug_42131 | rasdani/github-patches | git_diff | keras-team__keras-nlp-143 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use KerasTuner to hyper-parameter search for the BERT finetuning script
From the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf)...
```
For fine-tuning, most model hyperparameters are
the same as in pre-training, with the exception of
the batch size, learning rate, and number of training
epochs. The dropout probability was always
kept at 0.1. The optimal hyperparameter values
are task-specific, but we found the following range
of possible values to work well across all tasks:
• Batch size: 16, 32
• Learning rate (Adam): 5e-5, 3e-5, 2e-5
• Number of epochs: 2, 3, 4
```
We should allow our [BERT finetuning script](https://github.com/keras-team/keras-nlp/blob/master/examples/bert/run_glue_finetuning.py) to do this search automatically. [KerasTuner](https://keras.io/keras_tuner/) is a good fit for this.
Steps:
- [ ] Add a setup.py `examples` dependency on keras-tuner.
- [ ] Remove the epochs, batch size and learning rate arguments from run_glue_finetuning.py.
- [ ] Use KerasTuner to run a hyperparameter search over the above value ranges with the validation set (a minimal sketch follows the issue text).
--- END ISSUE ---
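
For context, a minimal sketch of the kind of KerasTuner search the steps above describe might look like the following; `model_fn` and the GLUE datasets are assumed to come from the existing finetuning script (and to be unbatched `(features, label)` pairs here), so the names are illustrative rather than the final implementation.

```python
import keras_tuner as kt
from tensorflow import keras


class FinetuningHyperModel(kt.HyperModel):
    """Searches the BERT-paper ranges for learning rate, batch size and epochs."""

    def __init__(self, model_fn):
        self.model_fn = model_fn  # assumed factory: builds the classification head over pretrained BERT

    def build(self, hp):
        model = self.model_fn()
        model.compile(
            optimizer=keras.optimizers.Adam(
                learning_rate=hp.Choice("learning_rate", [5e-5, 3e-5, 2e-5])
            ),
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"],
        )
        return model

    def fit(self, hp, model, train_ds, validation_data=None, **kwargs):
        # Batch size and epoch count are tuned here instead of being command-line flags.
        batch_size = hp.Choice("batch_size", [16, 32])
        if validation_data is not None:
            validation_data = validation_data.batch(batch_size)
        return model.fit(
            train_ds.batch(batch_size),
            epochs=hp.Choice("epochs", [2, 3, 4]),
            validation_data=validation_data,
            **kwargs,
        )


tuner = kt.RandomSearch(
    FinetuningHyperModel(model_fn),      # model_fn is an assumed helper, not real repo code
    objective="val_accuracy",
    max_trials=8,
    project_name="glue_finetuning_search",
)
tuner.search(train_ds, validation_data=validation_ds)
best_hp = tuner.get_best_hyperparameters(1)[0]
```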
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2021 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script."""
16
17 import pathlib
18
19 from setuptools import find_packages
20 from setuptools import setup
21
22 HERE = pathlib.Path(__file__).parent
23 README = (HERE / "README.md").read_text()
24
25 setup(
26 name="keras-nlp",
27 description=(
28 "Industry-strength Natural Language Processing extensions for Keras."
29 ),
30 long_description=README,
31 long_description_content_type="text/markdown",
32 version="0.2.0-dev.1",
33 url="https://github.com/keras-team/keras-nlp",
34 author="Keras team",
35 author_email="[email protected]",
36 license="Apache License 2.0",
37 install_requires=[
38 "absl-py",
39 "numpy",
40 "packaging",
41 "tensorflow",
42 "tensorflow_text",
43 ],
44 extras_require={
45 "tests": [
46 "black",
47 "flake8",
48 "isort",
49 "pytest",
50 "pytest-cov",
51 ],
52 "examples": [
53 "datasets", # For GLUE in BERT example.
54 "nltk",
55 "wikiextractor",
56 ],
57 },
58 classifiers=[
59 "Programming Language :: Python",
60 "Programming Language :: Python :: 3.7",
61 "Operating System :: Unix",
62 "Operating System :: Microsoft :: Windows",
63 "Operating System :: MacOS",
64 "Intended Audience :: Science/Research",
65 "Topic :: Scientific/Engineering",
66 "Topic :: Software Development",
67 ],
68 packages=find_packages(exclude=("*_test.py",)),
69 )
70
```
Path: `examples/bert/run_glue_finetuning.py`
Content:
```
1 # Copyright 2022 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Run finetuning on a GLUE task."""
15
16 import json
17
18 import datasets
19 import tensorflow as tf
20 import tensorflow_text as tftext
21 from absl import app
22 from absl import flags
23 from tensorflow import keras
24
25 FLAGS = flags.FLAGS
26
27 flags.DEFINE_string(
28 "bert_config_file",
29 None,
30 "The json config file for the bert model parameters.",
31 )
32
33 flags.DEFINE_string(
34 "vocab_file",
35 None,
36 "The vocabulary file that the BERT model was trained on.",
37 )
38
39 flags.DEFINE_string(
40 "saved_model_input",
41 None,
42 "The directory containing the input pretrained model to finetune.",
43 )
44
45 flags.DEFINE_string(
46 "saved_model_output", None, "The directory to save the finetuned model in."
47 )
48
49
50 flags.DEFINE_string(
51 "task_name", "mrpc", "The name of the GLUE task to finetune on."
52 )
53
54 flags.DEFINE_bool(
55 "do_lower_case",
56 True,
57 "Whether to lower case the input text. Should be True for uncased "
58 "models and False for cased models.",
59 )
60
61 flags.DEFINE_bool(
62 "do_evaluation",
63 True,
64 "Whether to run evaluation on the validation set for a given task.",
65 )
66
67 flags.DEFINE_integer("batch_size", 32, "The batch size.")
68
69 flags.DEFINE_integer("epochs", 3, "The number of training epochs.")
70
71 flags.DEFINE_float("learning_rate", 2e-5, "The initial learning rate for Adam.")
72
73 flags.DEFINE_integer("max_seq_length", 128, "Maximum sequence length.")
74
75
76 def pack_inputs(
77 inputs,
78 seq_length,
79 start_of_sequence_id,
80 end_of_segment_id,
81 padding_id,
82 ):
83 # In case inputs weren't truncated (as they should have been),
84 # fall back to some ad-hoc truncation.
85 trimmed_segments = tftext.RoundRobinTrimmer(
86 seq_length - len(inputs) - 1
87 ).trim(inputs)
88 # Combine segments.
89 segments_combined, segment_ids = tftext.combine_segments(
90 trimmed_segments,
91 start_of_sequence_id=start_of_sequence_id,
92 end_of_segment_id=end_of_segment_id,
93 )
94 # Pad to dense Tensors.
95 input_word_ids, _ = tftext.pad_model_inputs(
96 segments_combined, seq_length, pad_value=padding_id
97 )
98 input_type_ids, input_mask = tftext.pad_model_inputs(
99 segment_ids, seq_length, pad_value=0
100 )
101 # Assemble nest of input tensors as expected by BERT model.
102 return {
103 "input_ids": input_word_ids,
104 "input_mask": input_mask,
105 "segment_ids": input_type_ids,
106 }
107
108
109 def load_data(task_name):
110 if task_name in ("cola", "sst2"):
111 feature_names = ("sentence",)
112 elif task_name in ("mrpc", "stsb", "rte", "wnli"):
113 feature_names = ("sentence1", "sentence2")
114 elif task_name in ("mnli", "mnli_matched", "mnli_mismatched"):
115 feature_names = ("premise", "hypothesis")
116 elif task_name in "qnli":
117 feature_names = ("question", "sentence")
118 elif task_name in "qqp":
119 feature_names = ("question1", "question2")
120 else:
121 raise ValueError(f"Unkown task_name {task_name}.")
122
123 test_suffix = ""
124 if task_name in ("mnli", "mnli_matched"):
125 # For "mnli", just run default to "mnli_matched".
126 task_name = "mnli"
127 test_suffix = "_matched"
128 elif task_name in ("mnli_mismatched",):
129 task_name = "mnli"
130 test_suffix = "_mismatched"
131
132 def to_tf_dataset(split):
133 # Format each sample as a tuple of string features and an int label.
134 features = tuple([split[f] for f in feature_names])
135 label = tf.cast(split["label"], tf.int32)
136 return tf.data.Dataset.from_tensor_slices((features, label))
137
138 data = datasets.load_dataset("glue", task_name)
139 data.set_format(type="tensorflow")
140 train_ds = to_tf_dataset(data["train"])
141 test_ds = to_tf_dataset(data["test" + test_suffix])
142 validation_ds = to_tf_dataset(data["validation" + test_suffix])
143 return train_ds, test_ds, validation_ds
144
145
146 class BertClassificationFinetuner(keras.Model):
147 """Adds a classification head to a pre-trained BERT model for finetuning"""
148
149 def __init__(self, bert_model, hidden_size, num_classes, **kwargs):
150 super().__init__(**kwargs)
151 self.bert_model = bert_model
152 self._pooler_layer = keras.layers.Dense(
153 hidden_size,
154 activation="tanh",
155 name="pooler",
156 )
157 self._logit_layer = tf.keras.layers.Dense(
158 num_classes,
159 name="logits",
160 )
161
162 def call(self, inputs):
163 outputs = self.bert_model(inputs)
164 # Get the first [CLS] token from each output.
165 outputs = outputs[:, 0, :]
166 outputs = self._pooler_layer(outputs)
167 return self._logit_layer(outputs)
168
169
170 def main(_):
171 print(f"Reading input model from {FLAGS.saved_model_input}")
172 model = keras.models.load_model(FLAGS.saved_model_input)
173
174 vocab = []
175 with open(FLAGS.vocab_file, "r") as vocab_file:
176 for line in vocab_file:
177 vocab.append(line.strip())
178 tokenizer = tftext.BertTokenizer(
179 FLAGS.vocab_file,
180 lower_case=FLAGS.do_lower_case,
181 token_out_type=tf.int32,
182 )
183 start_id = vocab.index("[CLS]")
184 end_id = vocab.index("[SEP]")
185 pad_id = vocab.index("[PAD]")
186
187 with open(FLAGS.bert_config_file, "r") as bert_config_file:
188 bert_config = json.loads(bert_config_file.read())
189
190 def preprocess_data(inputs, labels):
191 inputs = [tokenizer.tokenize(x).merge_dims(1, -1) for x in inputs]
192 inputs = pack_inputs(
193 inputs,
194 FLAGS.max_seq_length,
195 start_of_sequence_id=start_id,
196 end_of_segment_id=end_id,
197 padding_id=pad_id,
198 )
199 return inputs, labels
200
201 # Read and preprocess GLUE task data.
202 train_ds, test_ds, validation_ds = load_data(FLAGS.task_name)
203 train_ds = train_ds.batch(FLAGS.batch_size).map(
204 preprocess_data, num_parallel_calls=tf.data.AUTOTUNE
205 )
206 validation_ds = validation_ds.batch(FLAGS.batch_size).map(
207 preprocess_data, num_parallel_calls=tf.data.AUTOTUNE
208 )
209 test_ds = test_ds.batch(FLAGS.batch_size).map(
210 preprocess_data, num_parallel_calls=tf.data.AUTOTUNE
211 )
212
213 finetuning_model = BertClassificationFinetuner(
214 bert_model=model,
215 hidden_size=bert_config["hidden_size"],
216 num_classes=3 if FLAGS.task_name in ("mnli", "ax") else 2,
217 )
218 finetuning_model.compile(
219 optimizer=keras.optimizers.Adam(learning_rate=FLAGS.learning_rate),
220 loss="sparse_categorical_crossentropy",
221 metrics=["accuracy"],
222 )
223 finetuning_model.fit(
224 train_ds, epochs=FLAGS.epochs, validation_data=validation_ds
225 )
226
227 if FLAGS.do_evaluation:
228 print("Evaluating on test set.")
229 finetuning_model.evaluate(test_ds)
230
231 # TODO(mattdangerw): After incorporating keras_nlp tokenization, save an
232 # end-to-end model includeing preprocessing that operates on raw strings.
233 if FLAGS.saved_model_output:
234 print(f"Saving to {FLAGS.saved_model_output}")
235 finetuning_model.save(FLAGS.saved_model_output)
236
237
238 if __name__ == "__main__":
239 flags.mark_flag_as_required("vocab_file")
240 flags.mark_flag_as_required("bert_config_file")
241 flags.mark_flag_as_required("saved_model_input")
242 app.run(main)
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/bert/run_glue_finetuning.py b/examples/bert/run_glue_finetuning.py
--- a/examples/bert/run_glue_finetuning.py
+++ b/examples/bert/run_glue_finetuning.py
@@ -16,6 +16,7 @@
import json
import datasets
+import keras_tuner
import tensorflow as tf
import tensorflow_text as tftext
from absl import app
@@ -68,8 +69,6 @@
flags.DEFINE_integer("epochs", 3, "The number of training epochs.")
-flags.DEFINE_float("learning_rate", 2e-5, "The initial learning rate for Adam.")
-
flags.DEFINE_integer("max_seq_length", 128, "Maximum sequence length.")
@@ -167,9 +166,32 @@
return self._logit_layer(outputs)
+class BertHyperModel(keras_tuner.HyperModel):
+ """Creates a hypermodel to help with the search space for finetuning."""
+
+ def __init__(self, bert_config):
+ self.bert_config = bert_config
+
+ def build(self, hp):
+ model = keras.models.load_model(FLAGS.saved_model_input, compile=False)
+ bert_config = self.bert_config
+ finetuning_model = BertClassificationFinetuner(
+ bert_model=model,
+ hidden_size=bert_config["hidden_size"],
+ num_classes=3 if FLAGS.task_name in ("mnli", "ax") else 2,
+ )
+ finetuning_model.compile(
+ optimizer=keras.optimizers.Adam(
+ learning_rate=hp.Choice("lr", [5e-5, 4e-5, 3e-5, 2e-5])
+ ),
+ loss="sparse_categorical_crossentropy",
+ metrics=["accuracy"],
+ )
+ return finetuning_model
+
+
def main(_):
print(f"Reading input model from {FLAGS.saved_model_input}")
- model = keras.models.load_model(FLAGS.saved_model_input)
vocab = []
with open(FLAGS.vocab_file, "r") as vocab_file:
@@ -200,6 +222,7 @@
# Read and preprocess GLUE task data.
train_ds, test_ds, validation_ds = load_data(FLAGS.task_name)
+
train_ds = train_ds.batch(FLAGS.batch_size).map(
preprocess_data, num_parallel_calls=tf.data.AUTOTUNE
)
@@ -210,18 +233,27 @@
preprocess_data, num_parallel_calls=tf.data.AUTOTUNE
)
- finetuning_model = BertClassificationFinetuner(
- bert_model=model,
- hidden_size=bert_config["hidden_size"],
- num_classes=3 if FLAGS.task_name in ("mnli", "ax") else 2,
- )
- finetuning_model.compile(
- optimizer=keras.optimizers.Adam(learning_rate=FLAGS.learning_rate),
- loss="sparse_categorical_crossentropy",
- metrics=["accuracy"],
+ # Create a hypermodel object for a RandomSearch.
+ hypermodel = BertHyperModel(bert_config)
+
+ # Initialize the random search over the 4 learning rate parameters, for 4
+ # trials and 3 epochs for each trial.
+ tuner = keras_tuner.RandomSearch(
+ hypermodel=hypermodel,
+ objective=keras_tuner.Objective("val_loss", direction="min"),
+ max_trials=4,
+ overwrite=True,
+ project_name="hyperparameter_tuner_results",
)
- finetuning_model.fit(
- train_ds, epochs=FLAGS.epochs, validation_data=validation_ds
+
+ tuner.search(train_ds, epochs=FLAGS.epochs, validation_data=validation_ds)
+
+ # Extract the best hyperparameters after the search.
+ best_hp = tuner.get_best_hyperparameters()[0]
+ finetuning_model = tuner.get_best_models()[0]
+
+ print(
+ f"The best hyperparameters found are:\nLearning Rate: {best_hp['lr']}"
)
if FLAGS.do_evaluation:
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,6 +53,7 @@
"datasets", # For GLUE in BERT example.
"nltk",
"wikiextractor",
+ "keras-tuner",
],
},
classifiers=[
|
{"golden_diff": "diff --git a/examples/bert/run_glue_finetuning.py b/examples/bert/run_glue_finetuning.py\n--- a/examples/bert/run_glue_finetuning.py\n+++ b/examples/bert/run_glue_finetuning.py\n@@ -16,6 +16,7 @@\n import json\n \n import datasets\n+import keras_tuner\n import tensorflow as tf\n import tensorflow_text as tftext\n from absl import app\n@@ -68,8 +69,6 @@\n \n flags.DEFINE_integer(\"epochs\", 3, \"The number of training epochs.\")\n \n-flags.DEFINE_float(\"learning_rate\", 2e-5, \"The initial learning rate for Adam.\")\n-\n flags.DEFINE_integer(\"max_seq_length\", 128, \"Maximum sequence length.\")\n \n \n@@ -167,9 +166,32 @@\n return self._logit_layer(outputs)\n \n \n+class BertHyperModel(keras_tuner.HyperModel):\n+ \"\"\"Creates a hypermodel to help with the search space for finetuning.\"\"\"\n+\n+ def __init__(self, bert_config):\n+ self.bert_config = bert_config\n+\n+ def build(self, hp):\n+ model = keras.models.load_model(FLAGS.saved_model_input, compile=False)\n+ bert_config = self.bert_config\n+ finetuning_model = BertClassificationFinetuner(\n+ bert_model=model,\n+ hidden_size=bert_config[\"hidden_size\"],\n+ num_classes=3 if FLAGS.task_name in (\"mnli\", \"ax\") else 2,\n+ )\n+ finetuning_model.compile(\n+ optimizer=keras.optimizers.Adam(\n+ learning_rate=hp.Choice(\"lr\", [5e-5, 4e-5, 3e-5, 2e-5])\n+ ),\n+ loss=\"sparse_categorical_crossentropy\",\n+ metrics=[\"accuracy\"],\n+ )\n+ return finetuning_model\n+\n+\n def main(_):\n print(f\"Reading input model from {FLAGS.saved_model_input}\")\n- model = keras.models.load_model(FLAGS.saved_model_input)\n \n vocab = []\n with open(FLAGS.vocab_file, \"r\") as vocab_file:\n@@ -200,6 +222,7 @@\n \n # Read and preprocess GLUE task data.\n train_ds, test_ds, validation_ds = load_data(FLAGS.task_name)\n+\n train_ds = train_ds.batch(FLAGS.batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n@@ -210,18 +233,27 @@\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n \n- finetuning_model = BertClassificationFinetuner(\n- bert_model=model,\n- hidden_size=bert_config[\"hidden_size\"],\n- num_classes=3 if FLAGS.task_name in (\"mnli\", \"ax\") else 2,\n- )\n- finetuning_model.compile(\n- optimizer=keras.optimizers.Adam(learning_rate=FLAGS.learning_rate),\n- loss=\"sparse_categorical_crossentropy\",\n- metrics=[\"accuracy\"],\n+ # Create a hypermodel object for a RandomSearch.\n+ hypermodel = BertHyperModel(bert_config)\n+\n+ # Initialize the random search over the 4 learning rate parameters, for 4\n+ # trials and 3 epochs for each trial.\n+ tuner = keras_tuner.RandomSearch(\n+ hypermodel=hypermodel,\n+ objective=keras_tuner.Objective(\"val_loss\", direction=\"min\"),\n+ max_trials=4,\n+ overwrite=True,\n+ project_name=\"hyperparameter_tuner_results\",\n )\n- finetuning_model.fit(\n- train_ds, epochs=FLAGS.epochs, validation_data=validation_ds\n+\n+ tuner.search(train_ds, epochs=FLAGS.epochs, validation_data=validation_ds)\n+\n+ # Extract the best hyperparameters after the search.\n+ best_hp = tuner.get_best_hyperparameters()[0]\n+ finetuning_model = tuner.get_best_models()[0]\n+\n+ print(\n+ f\"The best hyperparameters found are:\\nLearning Rate: {best_hp['lr']}\"\n )\n \n if FLAGS.do_evaluation:\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,6 +53,7 @@\n \"datasets\", # For GLUE in BERT example.\n \"nltk\",\n \"wikiextractor\",\n+ \"keras-tuner\",\n ],\n },\n classifiers=[\n", "issue": "Use KerasTuner to hyper-parameter search for the BERT finetuning script\nFrom the [BERT 
paper](https://arxiv.org/pdf/1810.04805.pdf)...\r\n\r\n```\r\nFor fine-tuning, most model hyperparameters are\r\nthe same as in pre-training, with the exception of\r\nthe batch size, learning rate, and number of training\r\nepochs. The dropout probability was always\r\nkept at 0.1. The optimal hyperparameter values\r\nare task-specific, but we found the following range\r\nof possible values to work well across all tasks:\r\n\r\n\u2022 Batch size: 16, 32\r\n\u2022 Learning rate (Adam): 5e-5, 3e-5, 2e-5\r\n\u2022 Number of epochs: 2, 3, 4\r\n```\r\n\r\nWe should allow our [BERT finetuning script](https://github.com/keras-team/keras-nlp/blob/master/examples/bert/run_glue_finetuning.py) to do this search automatically. [KerasTuner](https://keras.io/keras_tuner/) is a good fit for this.\r\n\r\nSteps:\r\n - [ ] Add an setup.py `examples` dependency on keras-tuner.\r\n - [ ] Remove epochs, batch size and learning rage arguments from run_glue_finetuning.py.\r\n - [ ] Use keras tuner to hyperparemeter search on the above value ranges with the validation set.\n", "before_files": [{"content": "# Copyright 2021 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script.\"\"\"\n\nimport pathlib\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nHERE = pathlib.Path(__file__).parent\nREADME = (HERE / \"README.md\").read_text()\n\nsetup(\n name=\"keras-nlp\",\n description=(\n \"Industry-strength Natural Language Processing extensions for Keras.\"\n ),\n long_description=README,\n long_description_content_type=\"text/markdown\",\n version=\"0.2.0-dev.1\",\n url=\"https://github.com/keras-team/keras-nlp\",\n author=\"Keras team\",\n author_email=\"[email protected]\",\n license=\"Apache License 2.0\",\n install_requires=[\n \"absl-py\",\n \"numpy\",\n \"packaging\",\n \"tensorflow\",\n \"tensorflow_text\",\n ],\n extras_require={\n \"tests\": [\n \"black\",\n \"flake8\",\n \"isort\",\n \"pytest\",\n \"pytest-cov\",\n ],\n \"examples\": [\n \"datasets\", # For GLUE in BERT example.\n \"nltk\",\n \"wikiextractor\",\n ],\n },\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: Unix\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development\",\n ],\n packages=find_packages(exclude=(\"*_test.py\",)),\n)\n", "path": "setup.py"}, {"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for 
the specific language governing permissions and\n# limitations under the License.\n\"\"\"Run finetuning on a GLUE task.\"\"\"\n\nimport json\n\nimport datasets\nimport tensorflow as tf\nimport tensorflow_text as tftext\nfrom absl import app\nfrom absl import flags\nfrom tensorflow import keras\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n \"bert_config_file\",\n None,\n \"The json config file for the bert model parameters.\",\n)\n\nflags.DEFINE_string(\n \"vocab_file\",\n None,\n \"The vocabulary file that the BERT model was trained on.\",\n)\n\nflags.DEFINE_string(\n \"saved_model_input\",\n None,\n \"The directory containing the input pretrained model to finetune.\",\n)\n\nflags.DEFINE_string(\n \"saved_model_output\", None, \"The directory to save the finetuned model in.\"\n)\n\n\nflags.DEFINE_string(\n \"task_name\", \"mrpc\", \"The name of the GLUE task to finetune on.\"\n)\n\nflags.DEFINE_bool(\n \"do_lower_case\",\n True,\n \"Whether to lower case the input text. Should be True for uncased \"\n \"models and False for cased models.\",\n)\n\nflags.DEFINE_bool(\n \"do_evaluation\",\n True,\n \"Whether to run evaluation on the validation set for a given task.\",\n)\n\nflags.DEFINE_integer(\"batch_size\", 32, \"The batch size.\")\n\nflags.DEFINE_integer(\"epochs\", 3, \"The number of training epochs.\")\n\nflags.DEFINE_float(\"learning_rate\", 2e-5, \"The initial learning rate for Adam.\")\n\nflags.DEFINE_integer(\"max_seq_length\", 128, \"Maximum sequence length.\")\n\n\ndef pack_inputs(\n inputs,\n seq_length,\n start_of_sequence_id,\n end_of_segment_id,\n padding_id,\n):\n # In case inputs weren't truncated (as they should have been),\n # fall back to some ad-hoc truncation.\n trimmed_segments = tftext.RoundRobinTrimmer(\n seq_length - len(inputs) - 1\n ).trim(inputs)\n # Combine segments.\n segments_combined, segment_ids = tftext.combine_segments(\n trimmed_segments,\n start_of_sequence_id=start_of_sequence_id,\n end_of_segment_id=end_of_segment_id,\n )\n # Pad to dense Tensors.\n input_word_ids, _ = tftext.pad_model_inputs(\n segments_combined, seq_length, pad_value=padding_id\n )\n input_type_ids, input_mask = tftext.pad_model_inputs(\n segment_ids, seq_length, pad_value=0\n )\n # Assemble nest of input tensors as expected by BERT model.\n return {\n \"input_ids\": input_word_ids,\n \"input_mask\": input_mask,\n \"segment_ids\": input_type_ids,\n }\n\n\ndef load_data(task_name):\n if task_name in (\"cola\", \"sst2\"):\n feature_names = (\"sentence\",)\n elif task_name in (\"mrpc\", \"stsb\", \"rte\", \"wnli\"):\n feature_names = (\"sentence1\", \"sentence2\")\n elif task_name in (\"mnli\", \"mnli_matched\", \"mnli_mismatched\"):\n feature_names = (\"premise\", \"hypothesis\")\n elif task_name in \"qnli\":\n feature_names = (\"question\", \"sentence\")\n elif task_name in \"qqp\":\n feature_names = (\"question1\", \"question2\")\n else:\n raise ValueError(f\"Unkown task_name {task_name}.\")\n\n test_suffix = \"\"\n if task_name in (\"mnli\", \"mnli_matched\"):\n # For \"mnli\", just run default to \"mnli_matched\".\n task_name = \"mnli\"\n test_suffix = \"_matched\"\n elif task_name in (\"mnli_mismatched\",):\n task_name = \"mnli\"\n test_suffix = \"_mismatched\"\n\n def to_tf_dataset(split):\n # Format each sample as a tuple of string features and an int label.\n features = tuple([split[f] for f in feature_names])\n label = tf.cast(split[\"label\"], tf.int32)\n return tf.data.Dataset.from_tensor_slices((features, label))\n\n data = datasets.load_dataset(\"glue\", task_name)\n 
data.set_format(type=\"tensorflow\")\n train_ds = to_tf_dataset(data[\"train\"])\n test_ds = to_tf_dataset(data[\"test\" + test_suffix])\n validation_ds = to_tf_dataset(data[\"validation\" + test_suffix])\n return train_ds, test_ds, validation_ds\n\n\nclass BertClassificationFinetuner(keras.Model):\n \"\"\"Adds a classification head to a pre-trained BERT model for finetuning\"\"\"\n\n def __init__(self, bert_model, hidden_size, num_classes, **kwargs):\n super().__init__(**kwargs)\n self.bert_model = bert_model\n self._pooler_layer = keras.layers.Dense(\n hidden_size,\n activation=\"tanh\",\n name=\"pooler\",\n )\n self._logit_layer = tf.keras.layers.Dense(\n num_classes,\n name=\"logits\",\n )\n\n def call(self, inputs):\n outputs = self.bert_model(inputs)\n # Get the first [CLS] token from each output.\n outputs = outputs[:, 0, :]\n outputs = self._pooler_layer(outputs)\n return self._logit_layer(outputs)\n\n\ndef main(_):\n print(f\"Reading input model from {FLAGS.saved_model_input}\")\n model = keras.models.load_model(FLAGS.saved_model_input)\n\n vocab = []\n with open(FLAGS.vocab_file, \"r\") as vocab_file:\n for line in vocab_file:\n vocab.append(line.strip())\n tokenizer = tftext.BertTokenizer(\n FLAGS.vocab_file,\n lower_case=FLAGS.do_lower_case,\n token_out_type=tf.int32,\n )\n start_id = vocab.index(\"[CLS]\")\n end_id = vocab.index(\"[SEP]\")\n pad_id = vocab.index(\"[PAD]\")\n\n with open(FLAGS.bert_config_file, \"r\") as bert_config_file:\n bert_config = json.loads(bert_config_file.read())\n\n def preprocess_data(inputs, labels):\n inputs = [tokenizer.tokenize(x).merge_dims(1, -1) for x in inputs]\n inputs = pack_inputs(\n inputs,\n FLAGS.max_seq_length,\n start_of_sequence_id=start_id,\n end_of_segment_id=end_id,\n padding_id=pad_id,\n )\n return inputs, labels\n\n # Read and preprocess GLUE task data.\n train_ds, test_ds, validation_ds = load_data(FLAGS.task_name)\n train_ds = train_ds.batch(FLAGS.batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n validation_ds = validation_ds.batch(FLAGS.batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n test_ds = test_ds.batch(FLAGS.batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n\n finetuning_model = BertClassificationFinetuner(\n bert_model=model,\n hidden_size=bert_config[\"hidden_size\"],\n num_classes=3 if FLAGS.task_name in (\"mnli\", \"ax\") else 2,\n )\n finetuning_model.compile(\n optimizer=keras.optimizers.Adam(learning_rate=FLAGS.learning_rate),\n loss=\"sparse_categorical_crossentropy\",\n metrics=[\"accuracy\"],\n )\n finetuning_model.fit(\n train_ds, epochs=FLAGS.epochs, validation_data=validation_ds\n )\n\n if FLAGS.do_evaluation:\n print(\"Evaluating on test set.\")\n finetuning_model.evaluate(test_ds)\n\n # TODO(mattdangerw): After incorporating keras_nlp tokenization, save an\n # end-to-end model includeing preprocessing that operates on raw strings.\n if FLAGS.saved_model_output:\n print(f\"Saving to {FLAGS.saved_model_output}\")\n finetuning_model.save(FLAGS.saved_model_output)\n\n\nif __name__ == \"__main__\":\n flags.mark_flag_as_required(\"vocab_file\")\n flags.mark_flag_as_required(\"bert_config_file\")\n flags.mark_flag_as_required(\"saved_model_input\")\n app.run(main)\n", "path": "examples/bert/run_glue_finetuning.py"}], "after_files": [{"content": "# Copyright 2021 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You 
may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script.\"\"\"\n\nimport pathlib\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nHERE = pathlib.Path(__file__).parent\nREADME = (HERE / \"README.md\").read_text()\n\nsetup(\n name=\"keras-nlp\",\n description=(\n \"Industry-strength Natural Language Processing extensions for Keras.\"\n ),\n long_description=README,\n long_description_content_type=\"text/markdown\",\n version=\"0.2.0-dev.1\",\n url=\"https://github.com/keras-team/keras-nlp\",\n author=\"Keras team\",\n author_email=\"[email protected]\",\n license=\"Apache License 2.0\",\n install_requires=[\n \"absl-py\",\n \"numpy\",\n \"packaging\",\n \"tensorflow\",\n \"tensorflow_text\",\n ],\n extras_require={\n \"tests\": [\n \"black\",\n \"flake8\",\n \"isort\",\n \"pytest\",\n \"pytest-cov\",\n ],\n \"examples\": [\n \"datasets\", # For GLUE in BERT example.\n \"nltk\",\n \"wikiextractor\",\n \"keras-tuner\",\n ],\n },\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: Unix\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development\",\n ],\n packages=find_packages(exclude=(\"*_test.py\",)),\n)\n", "path": "setup.py"}, {"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Run finetuning on a GLUE task.\"\"\"\n\nimport json\n\nimport datasets\nimport keras_tuner\nimport tensorflow as tf\nimport tensorflow_text as tftext\nfrom absl import app\nfrom absl import flags\nfrom tensorflow import keras\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n \"bert_config_file\",\n None,\n \"The json config file for the bert model parameters.\",\n)\n\nflags.DEFINE_string(\n \"vocab_file\",\n None,\n \"The vocabulary file that the BERT model was trained on.\",\n)\n\nflags.DEFINE_string(\n \"saved_model_input\",\n None,\n \"The directory containing the input pretrained model to finetune.\",\n)\n\nflags.DEFINE_string(\n \"saved_model_output\", None, \"The directory to save the finetuned model in.\"\n)\n\n\nflags.DEFINE_string(\n \"task_name\", \"mrpc\", \"The name of the GLUE task to finetune on.\"\n)\n\nflags.DEFINE_bool(\n \"do_lower_case\",\n True,\n \"Whether to lower case the input text. 
Should be True for uncased \"\n \"models and False for cased models.\",\n)\n\nflags.DEFINE_bool(\n \"do_evaluation\",\n True,\n \"Whether to run evaluation on the validation set for a given task.\",\n)\n\nflags.DEFINE_integer(\"batch_size\", 32, \"The batch size.\")\n\nflags.DEFINE_integer(\"epochs\", 3, \"The number of training epochs.\")\n\nflags.DEFINE_integer(\"max_seq_length\", 128, \"Maximum sequence length.\")\n\n\ndef pack_inputs(\n inputs,\n seq_length,\n start_of_sequence_id,\n end_of_segment_id,\n padding_id,\n):\n # In case inputs weren't truncated (as they should have been),\n # fall back to some ad-hoc truncation.\n trimmed_segments = tftext.RoundRobinTrimmer(\n seq_length - len(inputs) - 1\n ).trim(inputs)\n # Combine segments.\n segments_combined, segment_ids = tftext.combine_segments(\n trimmed_segments,\n start_of_sequence_id=start_of_sequence_id,\n end_of_segment_id=end_of_segment_id,\n )\n # Pad to dense Tensors.\n input_word_ids, _ = tftext.pad_model_inputs(\n segments_combined, seq_length, pad_value=padding_id\n )\n input_type_ids, input_mask = tftext.pad_model_inputs(\n segment_ids, seq_length, pad_value=0\n )\n # Assemble nest of input tensors as expected by BERT model.\n return {\n \"input_ids\": input_word_ids,\n \"input_mask\": input_mask,\n \"segment_ids\": input_type_ids,\n }\n\n\ndef load_data(task_name):\n if task_name in (\"cola\", \"sst2\"):\n feature_names = (\"sentence\",)\n elif task_name in (\"mrpc\", \"stsb\", \"rte\", \"wnli\"):\n feature_names = (\"sentence1\", \"sentence2\")\n elif task_name in (\"mnli\", \"mnli_matched\", \"mnli_mismatched\"):\n feature_names = (\"premise\", \"hypothesis\")\n elif task_name in \"qnli\":\n feature_names = (\"question\", \"sentence\")\n elif task_name in \"qqp\":\n feature_names = (\"question1\", \"question2\")\n else:\n raise ValueError(f\"Unkown task_name {task_name}.\")\n\n test_suffix = \"\"\n if task_name in (\"mnli\", \"mnli_matched\"):\n # For \"mnli\", just run default to \"mnli_matched\".\n task_name = \"mnli\"\n test_suffix = \"_matched\"\n elif task_name in (\"mnli_mismatched\",):\n task_name = \"mnli\"\n test_suffix = \"_mismatched\"\n\n def to_tf_dataset(split):\n # Format each sample as a tuple of string features and an int label.\n features = tuple([split[f] for f in feature_names])\n label = tf.cast(split[\"label\"], tf.int32)\n return tf.data.Dataset.from_tensor_slices((features, label))\n\n data = datasets.load_dataset(\"glue\", task_name)\n data.set_format(type=\"tensorflow\")\n train_ds = to_tf_dataset(data[\"train\"])\n test_ds = to_tf_dataset(data[\"test\" + test_suffix])\n validation_ds = to_tf_dataset(data[\"validation\" + test_suffix])\n return train_ds, test_ds, validation_ds\n\n\nclass BertClassificationFinetuner(keras.Model):\n \"\"\"Adds a classification head to a pre-trained BERT model for finetuning\"\"\"\n\n def __init__(self, bert_model, hidden_size, num_classes, **kwargs):\n super().__init__(**kwargs)\n self.bert_model = bert_model\n self._pooler_layer = keras.layers.Dense(\n hidden_size,\n activation=\"tanh\",\n name=\"pooler\",\n )\n self._logit_layer = tf.keras.layers.Dense(\n num_classes,\n name=\"logits\",\n )\n\n def call(self, inputs):\n outputs = self.bert_model(inputs)\n # Get the first [CLS] token from each output.\n outputs = outputs[:, 0, :]\n outputs = self._pooler_layer(outputs)\n return self._logit_layer(outputs)\n\n\nclass BertHyperModel(keras_tuner.HyperModel):\n \"\"\"Creates a hypermodel to help with the search space for finetuning.\"\"\"\n\n def __init__(self, 
bert_config):\n self.bert_config = bert_config\n\n def build(self, hp):\n model = keras.models.load_model(FLAGS.saved_model_input, compile=False)\n bert_config = self.bert_config\n finetuning_model = BertClassificationFinetuner(\n bert_model=model,\n hidden_size=bert_config[\"hidden_size\"],\n num_classes=3 if FLAGS.task_name in (\"mnli\", \"ax\") else 2,\n )\n finetuning_model.compile(\n optimizer=keras.optimizers.Adam(\n learning_rate=hp.Choice(\"lr\", [5e-5, 4e-5, 3e-5, 2e-5])\n ),\n loss=\"sparse_categorical_crossentropy\",\n metrics=[\"accuracy\"],\n )\n return finetuning_model\n\n\ndef main(_):\n print(f\"Reading input model from {FLAGS.saved_model_input}\")\n\n vocab = []\n with open(FLAGS.vocab_file, \"r\") as vocab_file:\n for line in vocab_file:\n vocab.append(line.strip())\n tokenizer = tftext.BertTokenizer(\n FLAGS.vocab_file,\n lower_case=FLAGS.do_lower_case,\n token_out_type=tf.int32,\n )\n start_id = vocab.index(\"[CLS]\")\n end_id = vocab.index(\"[SEP]\")\n pad_id = vocab.index(\"[PAD]\")\n\n with open(FLAGS.bert_config_file, \"r\") as bert_config_file:\n bert_config = json.loads(bert_config_file.read())\n\n def preprocess_data(inputs, labels):\n inputs = [tokenizer.tokenize(x).merge_dims(1, -1) for x in inputs]\n inputs = pack_inputs(\n inputs,\n FLAGS.max_seq_length,\n start_of_sequence_id=start_id,\n end_of_segment_id=end_id,\n padding_id=pad_id,\n )\n return inputs, labels\n\n # Read and preprocess GLUE task data.\n train_ds, test_ds, validation_ds = load_data(FLAGS.task_name)\n\n train_ds = train_ds.batch(FLAGS.batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n validation_ds = validation_ds.batch(FLAGS.batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n test_ds = test_ds.batch(FLAGS.batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n\n # Create a hypermodel object for a RandomSearch.\n hypermodel = BertHyperModel(bert_config)\n\n # Initialize the random search over the 4 learning rate parameters, for 4\n # trials and 3 epochs for each trial.\n tuner = keras_tuner.RandomSearch(\n hypermodel=hypermodel,\n objective=keras_tuner.Objective(\"val_loss\", direction=\"min\"),\n max_trials=4,\n overwrite=True,\n project_name=\"hyperparameter_tuner_results\",\n )\n\n tuner.search(train_ds, epochs=FLAGS.epochs, validation_data=validation_ds)\n\n # Extract the best hyperparameters after the search.\n best_hp = tuner.get_best_hyperparameters()[0]\n finetuning_model = tuner.get_best_models()[0]\n\n print(\n f\"The best hyperparameters found are:\\nLearning Rate: {best_hp['lr']}\"\n )\n\n if FLAGS.do_evaluation:\n print(\"Evaluating on test set.\")\n finetuning_model.evaluate(test_ds)\n\n # TODO(mattdangerw): After incorporating keras_nlp tokenization, save an\n # end-to-end model includeing preprocessing that operates on raw strings.\n if FLAGS.saved_model_output:\n print(f\"Saving to {FLAGS.saved_model_output}\")\n finetuning_model.save(FLAGS.saved_model_output)\n\n\nif __name__ == \"__main__\":\n flags.mark_flag_as_required(\"vocab_file\")\n flags.mark_flag_as_required(\"bert_config_file\")\n flags.mark_flag_as_required(\"saved_model_input\")\n app.run(main)\n", "path": "examples/bert/run_glue_finetuning.py"}]}
| 3,660 | 975 |
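For context on the tuning pattern this entry introduces, the `RandomSearch` flow from the golden diff can be exercised in isolation. The sketch below is an illustrative stand-alone example, not code from the keras-nlp repository: the toy `Dense` model and random data are assumptions added for demonstration, while the learning-rate grid, objective, and `max_trials` mirror the diff above. It assumes `keras-tuner` and TensorFlow are installed.

```python
import keras_tuner
import numpy as np
from tensorflow import keras


def build_model(hp):
    # Same learning-rate search space as the diff: hp.Choice("lr", [...]).
    lr = hp.Choice("lr", [5e-5, 4e-5, 3e-5, 2e-5])
    model = keras.Sequential([keras.layers.Dense(2, activation="softmax")])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=lr),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model


tuner = keras_tuner.RandomSearch(
    hypermodel=build_model,
    objective=keras_tuner.Objective("val_loss", direction="min"),
    max_trials=4,
    overwrite=True,
    project_name="lr_search_demo",
)

# Tiny random stand-in for the GLUE data used in the real script.
x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=(64,))

tuner.search(x, y, epochs=1, validation_split=0.25)
best_hp = tuner.get_best_hyperparameters()[0]
print("best lr:", best_hp["lr"])
```

As in the patched script, the best trained model itself can then be retrieved with `tuner.get_best_models()[0]`.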
gh_patches_debug_20536
|
rasdani/github-patches
|
git_diff
|
encode__httpx-237
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handle HEAD responses with Brotli decoder
Currently if you receive a response with `Content-Encoding: br` set and no body we get an error because Brotli doesn't like being called on an empty stream.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/decoders.py`
Content:
```
1 """
2 Handlers for Content-Encoding.
3
4 See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
5 """
6 import codecs
7 import typing
8 import zlib
9
10 import chardet
11
12 from .exceptions import DecodingError
13
14 try:
15 import brotli
16 except ImportError: # pragma: nocover
17 brotli = None
18
19
20 class Decoder:
21 def decode(self, data: bytes) -> bytes:
22 raise NotImplementedError() # pragma: nocover
23
24 def flush(self) -> bytes:
25 raise NotImplementedError() # pragma: nocover
26
27
28 class IdentityDecoder(Decoder):
29 """
30 Handle unencoded data.
31 """
32
33 def decode(self, data: bytes) -> bytes:
34 return data
35
36 def flush(self) -> bytes:
37 return b""
38
39
40 class DeflateDecoder(Decoder):
41 """
42 Handle 'deflate' decoding.
43
44 See: https://stackoverflow.com/questions/1838699
45 """
46
47 def __init__(self) -> None:
48 self.decompressor = zlib.decompressobj(-zlib.MAX_WBITS)
49
50 def decode(self, data: bytes) -> bytes:
51 try:
52 return self.decompressor.decompress(data)
53 except zlib.error as exc:
54 raise DecodingError from exc
55
56 def flush(self) -> bytes:
57 try:
58 return self.decompressor.flush()
59 except zlib.error as exc: # pragma: nocover
60 raise DecodingError from exc
61
62
63 class GZipDecoder(Decoder):
64 """
65 Handle 'gzip' decoding.
66
67 See: https://stackoverflow.com/questions/1838699
68 """
69
70 def __init__(self) -> None:
71 self.decompressor = zlib.decompressobj(zlib.MAX_WBITS | 16)
72
73 def decode(self, data: bytes) -> bytes:
74 try:
75 return self.decompressor.decompress(data)
76 except zlib.error as exc:
77 raise DecodingError from exc
78
79 def flush(self) -> bytes:
80 try:
81 return self.decompressor.flush()
82 except zlib.error as exc: # pragma: nocover
83 raise DecodingError from exc
84
85
86 class BrotliDecoder(Decoder):
87 """
88 Handle 'brotli' decoding.
89
90 Requires `pip install brotlipy`. See: https://brotlipy.readthedocs.io/
91 or `pip install brotli`. See https://github.com/google/brotli
92 Supports both 'brotlipy' and 'Brotli' packages since they share an import
93 name. The top branches are for 'brotlipy' and bottom branches for 'Brotli'
94 """
95
96 def __init__(self) -> None:
97 assert (
98 brotli is not None
99 ), "The 'brotlipy' or 'brotli' library must be installed to use 'BrotliDecoder'"
100 self.decompressor = brotli.Decompressor()
101
102 def decode(self, data: bytes) -> bytes:
103 try:
104 if hasattr(self.decompressor, "decompress"):
105 return self.decompressor.decompress(data)
106 return self.decompressor.process(data) # pragma: nocover
107 except brotli.error as exc:
108 raise DecodingError from exc
109
110 def flush(self) -> bytes:
111 try:
112 if hasattr(self.decompressor, "finish"):
113 self.decompressor.finish()
114 return b""
115 except brotli.error as exc: # pragma: nocover
116 raise DecodingError from exc
117
118
119 class MultiDecoder(Decoder):
120 """
121 Handle the case where multiple encodings have been applied.
122 """
123
124 def __init__(self, children: typing.Sequence[Decoder]) -> None:
125 """
126 'children' should be a sequence of decoders in the order in which
127 each was applied.
128 """
129 # Note that we reverse the order for decoding.
130 self.children = list(reversed(children))
131
132 def decode(self, data: bytes) -> bytes:
133 for child in self.children:
134 data = child.decode(data)
135 return data
136
137 def flush(self) -> bytes:
138 data = b""
139 for child in self.children:
140 data = child.decode(data) + child.flush()
141 return data
142
143
144 class TextDecoder:
145 """
146 Handles incrementally decoding bytes into text
147 """
148
149 def __init__(self, encoding: typing.Optional[str] = None):
150 self.decoder: typing.Optional[codecs.IncrementalDecoder] = (
151 None if encoding is None else codecs.getincrementaldecoder(encoding)()
152 )
153 self.detector = chardet.universaldetector.UniversalDetector()
154
155 # This buffer is only needed if 'decoder' is 'None'
156 # we want to trigger errors if data is getting added to
157 # our internal buffer for some silly reason while
158 # a decoder is discovered.
159 self.buffer: typing.Optional[bytearray] = None if self.decoder else bytearray()
160
161 def decode(self, data: bytes) -> str:
162 try:
163 if self.decoder is not None:
164 text = self.decoder.decode(data)
165 else:
166 assert self.buffer is not None
167 text = ""
168 self.detector.feed(data)
169 self.buffer += data
170
171 # Should be more than enough data to process, we don't
172 # want to buffer too long as chardet will wait until
173 # detector.close() is used to give back common
174 # encodings like 'utf-8'.
175 if len(self.buffer) >= 4096:
176 self.decoder = codecs.getincrementaldecoder(
177 self._detector_result()
178 )()
179 text = self.decoder.decode(bytes(self.buffer), False)
180 self.buffer = None
181
182 return text
183 except UnicodeDecodeError: # pragma: nocover
184 raise DecodingError() from None
185
186 def flush(self) -> str:
187 try:
188 if self.decoder is None:
189 # Empty string case as chardet is guaranteed to not have a guess.
190 assert self.buffer is not None
191 if len(self.buffer) == 0:
192 return ""
193 return bytes(self.buffer).decode(self._detector_result())
194
195 return self.decoder.decode(b"", True)
196 except UnicodeDecodeError: # pragma: nocover
197 raise DecodingError() from None
198
199 def _detector_result(self) -> str:
200 self.detector.close()
201 result = self.detector.result["encoding"]
202 if not result: # pragma: nocover
203 raise DecodingError("Unable to determine encoding of content")
204
205 return result
206
207
208 SUPPORTED_DECODERS = {
209 "identity": IdentityDecoder,
210 "gzip": GZipDecoder,
211 "deflate": DeflateDecoder,
212 "br": BrotliDecoder,
213 }
214
215
216 if brotli is None:
217 SUPPORTED_DECODERS.pop("br") # pragma: nocover
218
219
220 ACCEPT_ENCODING = ", ".join(
221 [key for key in SUPPORTED_DECODERS.keys() if key != "identity"]
222 )
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/httpx/decoders.py b/httpx/decoders.py
--- a/httpx/decoders.py
+++ b/httpx/decoders.py
@@ -98,8 +98,12 @@
brotli is not None
), "The 'brotlipy' or 'brotli' library must be installed to use 'BrotliDecoder'"
self.decompressor = brotli.Decompressor()
+ self.seen_data = False
def decode(self, data: bytes) -> bytes:
+ if not data:
+ return b""
+ self.seen_data = True
try:
if hasattr(self.decompressor, "decompress"):
return self.decompressor.decompress(data)
@@ -108,6 +112,8 @@
raise DecodingError from exc
def flush(self) -> bytes:
+ if not self.seen_data:
+ return b""
try:
if hasattr(self.decompressor, "finish"):
self.decompressor.finish()
|
{"golden_diff": "diff --git a/httpx/decoders.py b/httpx/decoders.py\n--- a/httpx/decoders.py\n+++ b/httpx/decoders.py\n@@ -98,8 +98,12 @@\n brotli is not None\n ), \"The 'brotlipy' or 'brotli' library must be installed to use 'BrotliDecoder'\"\n self.decompressor = brotli.Decompressor()\n+ self.seen_data = False\n \n def decode(self, data: bytes) -> bytes:\n+ if not data:\n+ return b\"\"\n+ self.seen_data = True\n try:\n if hasattr(self.decompressor, \"decompress\"):\n return self.decompressor.decompress(data)\n@@ -108,6 +112,8 @@\n raise DecodingError from exc\n \n def flush(self) -> bytes:\n+ if not self.seen_data:\n+ return b\"\"\n try:\n if hasattr(self.decompressor, \"finish\"):\n self.decompressor.finish()\n", "issue": "Handle HEAD responses with Brotli decoder\nCurrently if you receive a response with `Content-Encoding: br` set and no body we get an error because Brotli doesn't like being called on an empty stream.\n", "before_files": [{"content": "\"\"\"\nHandlers for Content-Encoding.\n\nSee: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding\n\"\"\"\nimport codecs\nimport typing\nimport zlib\n\nimport chardet\n\nfrom .exceptions import DecodingError\n\ntry:\n import brotli\nexcept ImportError: # pragma: nocover\n brotli = None\n\n\nclass Decoder:\n def decode(self, data: bytes) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n def flush(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n\nclass IdentityDecoder(Decoder):\n \"\"\"\n Handle unencoded data.\n \"\"\"\n\n def decode(self, data: bytes) -> bytes:\n return data\n\n def flush(self) -> bytes:\n return b\"\"\n\n\nclass DeflateDecoder(Decoder):\n \"\"\"\n Handle 'deflate' decoding.\n\n See: https://stackoverflow.com/questions/1838699\n \"\"\"\n\n def __init__(self) -> None:\n self.decompressor = zlib.decompressobj(-zlib.MAX_WBITS)\n\n def decode(self, data: bytes) -> bytes:\n try:\n return self.decompressor.decompress(data)\n except zlib.error as exc:\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n try:\n return self.decompressor.flush()\n except zlib.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass GZipDecoder(Decoder):\n \"\"\"\n Handle 'gzip' decoding.\n\n See: https://stackoverflow.com/questions/1838699\n \"\"\"\n\n def __init__(self) -> None:\n self.decompressor = zlib.decompressobj(zlib.MAX_WBITS | 16)\n\n def decode(self, data: bytes) -> bytes:\n try:\n return self.decompressor.decompress(data)\n except zlib.error as exc:\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n try:\n return self.decompressor.flush()\n except zlib.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass BrotliDecoder(Decoder):\n \"\"\"\n Handle 'brotli' decoding.\n\n Requires `pip install brotlipy`. See: https://brotlipy.readthedocs.io/\n or `pip install brotli`. See https://github.com/google/brotli\n Supports both 'brotlipy' and 'Brotli' packages since they share an import\n name. 
The top branches are for 'brotlipy' and bottom branches for 'Brotli'\n \"\"\"\n\n def __init__(self) -> None:\n assert (\n brotli is not None\n ), \"The 'brotlipy' or 'brotli' library must be installed to use 'BrotliDecoder'\"\n self.decompressor = brotli.Decompressor()\n\n def decode(self, data: bytes) -> bytes:\n try:\n if hasattr(self.decompressor, \"decompress\"):\n return self.decompressor.decompress(data)\n return self.decompressor.process(data) # pragma: nocover\n except brotli.error as exc:\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n try:\n if hasattr(self.decompressor, \"finish\"):\n self.decompressor.finish()\n return b\"\"\n except brotli.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass MultiDecoder(Decoder):\n \"\"\"\n Handle the case where multiple encodings have been applied.\n \"\"\"\n\n def __init__(self, children: typing.Sequence[Decoder]) -> None:\n \"\"\"\n 'children' should be a sequence of decoders in the order in which\n each was applied.\n \"\"\"\n # Note that we reverse the order for decoding.\n self.children = list(reversed(children))\n\n def decode(self, data: bytes) -> bytes:\n for child in self.children:\n data = child.decode(data)\n return data\n\n def flush(self) -> bytes:\n data = b\"\"\n for child in self.children:\n data = child.decode(data) + child.flush()\n return data\n\n\nclass TextDecoder:\n \"\"\"\n Handles incrementally decoding bytes into text\n \"\"\"\n\n def __init__(self, encoding: typing.Optional[str] = None):\n self.decoder: typing.Optional[codecs.IncrementalDecoder] = (\n None if encoding is None else codecs.getincrementaldecoder(encoding)()\n )\n self.detector = chardet.universaldetector.UniversalDetector()\n\n # This buffer is only needed if 'decoder' is 'None'\n # we want to trigger errors if data is getting added to\n # our internal buffer for some silly reason while\n # a decoder is discovered.\n self.buffer: typing.Optional[bytearray] = None if self.decoder else bytearray()\n\n def decode(self, data: bytes) -> str:\n try:\n if self.decoder is not None:\n text = self.decoder.decode(data)\n else:\n assert self.buffer is not None\n text = \"\"\n self.detector.feed(data)\n self.buffer += data\n\n # Should be more than enough data to process, we don't\n # want to buffer too long as chardet will wait until\n # detector.close() is used to give back common\n # encodings like 'utf-8'.\n if len(self.buffer) >= 4096:\n self.decoder = codecs.getincrementaldecoder(\n self._detector_result()\n )()\n text = self.decoder.decode(bytes(self.buffer), False)\n self.buffer = None\n\n return text\n except UnicodeDecodeError: # pragma: nocover\n raise DecodingError() from None\n\n def flush(self) -> str:\n try:\n if self.decoder is None:\n # Empty string case as chardet is guaranteed to not have a guess.\n assert self.buffer is not None\n if len(self.buffer) == 0:\n return \"\"\n return bytes(self.buffer).decode(self._detector_result())\n\n return self.decoder.decode(b\"\", True)\n except UnicodeDecodeError: # pragma: nocover\n raise DecodingError() from None\n\n def _detector_result(self) -> str:\n self.detector.close()\n result = self.detector.result[\"encoding\"]\n if not result: # pragma: nocover\n raise DecodingError(\"Unable to determine encoding of content\")\n\n return result\n\n\nSUPPORTED_DECODERS = {\n \"identity\": IdentityDecoder,\n \"gzip\": GZipDecoder,\n \"deflate\": DeflateDecoder,\n \"br\": BrotliDecoder,\n}\n\n\nif brotli is None:\n SUPPORTED_DECODERS.pop(\"br\") # pragma: 
nocover\n\n\nACCEPT_ENCODING = \", \".join(\n [key for key in SUPPORTED_DECODERS.keys() if key != \"identity\"]\n)\n", "path": "httpx/decoders.py"}], "after_files": [{"content": "\"\"\"\nHandlers for Content-Encoding.\n\nSee: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding\n\"\"\"\nimport codecs\nimport typing\nimport zlib\n\nimport chardet\n\nfrom .exceptions import DecodingError\n\ntry:\n import brotli\nexcept ImportError: # pragma: nocover\n brotli = None\n\n\nclass Decoder:\n def decode(self, data: bytes) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n def flush(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n\nclass IdentityDecoder(Decoder):\n \"\"\"\n Handle unencoded data.\n \"\"\"\n\n def decode(self, data: bytes) -> bytes:\n return data\n\n def flush(self) -> bytes:\n return b\"\"\n\n\nclass DeflateDecoder(Decoder):\n \"\"\"\n Handle 'deflate' decoding.\n\n See: https://stackoverflow.com/questions/1838699\n \"\"\"\n\n def __init__(self) -> None:\n self.decompressor = zlib.decompressobj(-zlib.MAX_WBITS)\n\n def decode(self, data: bytes) -> bytes:\n try:\n return self.decompressor.decompress(data)\n except zlib.error as exc:\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n try:\n return self.decompressor.flush()\n except zlib.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass GZipDecoder(Decoder):\n \"\"\"\n Handle 'gzip' decoding.\n\n See: https://stackoverflow.com/questions/1838699\n \"\"\"\n\n def __init__(self) -> None:\n self.decompressor = zlib.decompressobj(zlib.MAX_WBITS | 16)\n\n def decode(self, data: bytes) -> bytes:\n try:\n return self.decompressor.decompress(data)\n except zlib.error as exc:\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n try:\n return self.decompressor.flush()\n except zlib.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass BrotliDecoder(Decoder):\n \"\"\"\n Handle 'brotli' decoding.\n\n Requires `pip install brotlipy`. See: https://brotlipy.readthedocs.io/\n or `pip install brotli`. See https://github.com/google/brotli\n Supports both 'brotlipy' and 'Brotli' packages since they share an import\n name. 
The top branches are for 'brotlipy' and bottom branches for 'Brotli'\n \"\"\"\n\n def __init__(self) -> None:\n assert (\n brotli is not None\n ), \"The 'brotlipy' or 'brotli' library must be installed to use 'BrotliDecoder'\"\n self.decompressor = brotli.Decompressor()\n self.seen_data = False\n\n def decode(self, data: bytes) -> bytes:\n if not data:\n return b\"\"\n self.seen_data = True\n try:\n if hasattr(self.decompressor, \"decompress\"):\n return self.decompressor.decompress(data)\n return self.decompressor.process(data) # pragma: nocover\n except brotli.error as exc:\n raise DecodingError from exc\n\n def flush(self) -> bytes:\n if not self.seen_data:\n return b\"\"\n try:\n if hasattr(self.decompressor, \"finish\"):\n self.decompressor.finish()\n return b\"\"\n except brotli.error as exc: # pragma: nocover\n raise DecodingError from exc\n\n\nclass MultiDecoder(Decoder):\n \"\"\"\n Handle the case where multiple encodings have been applied.\n \"\"\"\n\n def __init__(self, children: typing.Sequence[Decoder]) -> None:\n \"\"\"\n 'children' should be a sequence of decoders in the order in which\n each was applied.\n \"\"\"\n # Note that we reverse the order for decoding.\n self.children = list(reversed(children))\n\n def decode(self, data: bytes) -> bytes:\n for child in self.children:\n data = child.decode(data)\n return data\n\n def flush(self) -> bytes:\n data = b\"\"\n for child in self.children:\n data = child.decode(data) + child.flush()\n return data\n\n\nclass TextDecoder:\n \"\"\"\n Handles incrementally decoding bytes into text\n \"\"\"\n\n def __init__(self, encoding: typing.Optional[str] = None):\n self.decoder: typing.Optional[codecs.IncrementalDecoder] = (\n None if encoding is None else codecs.getincrementaldecoder(encoding)()\n )\n self.detector = chardet.universaldetector.UniversalDetector()\n\n # This buffer is only needed if 'decoder' is 'None'\n # we want to trigger errors if data is getting added to\n # our internal buffer for some silly reason while\n # a decoder is discovered.\n self.buffer: typing.Optional[bytearray] = None if self.decoder else bytearray()\n\n def decode(self, data: bytes) -> str:\n try:\n if self.decoder is not None:\n text = self.decoder.decode(data)\n else:\n assert self.buffer is not None\n text = \"\"\n self.detector.feed(data)\n self.buffer += data\n\n # Should be more than enough data to process, we don't\n # want to buffer too long as chardet will wait until\n # detector.close() is used to give back common\n # encodings like 'utf-8'.\n if len(self.buffer) >= 4096:\n self.decoder = codecs.getincrementaldecoder(\n self._detector_result()\n )()\n text = self.decoder.decode(bytes(self.buffer), False)\n self.buffer = None\n\n return text\n except UnicodeDecodeError: # pragma: nocover\n raise DecodingError() from None\n\n def flush(self) -> str:\n try:\n if self.decoder is None:\n # Empty string case as chardet is guaranteed to not have a guess.\n assert self.buffer is not None\n if len(self.buffer) == 0:\n return \"\"\n return bytes(self.buffer).decode(self._detector_result())\n\n return self.decoder.decode(b\"\", True)\n except UnicodeDecodeError: # pragma: nocover\n raise DecodingError() from None\n\n def _detector_result(self) -> str:\n self.detector.close()\n result = self.detector.result[\"encoding\"]\n if not result: # pragma: nocover\n raise DecodingError(\"Unable to determine encoding of content\")\n\n return result\n\n\nSUPPORTED_DECODERS = {\n \"identity\": IdentityDecoder,\n \"gzip\": GZipDecoder,\n \"deflate\": DeflateDecoder,\n 
\"br\": BrotliDecoder,\n}\n\n\nif brotli is None:\n SUPPORTED_DECODERS.pop(\"br\") # pragma: nocover\n\n\nACCEPT_ENCODING = \", \".join(\n [key for key in SUPPORTED_DECODERS.keys() if key != \"identity\"]\n)\n", "path": "httpx/decoders.py"}]}
| 2,395 | 229 |
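The fix in this entry is a general guard pattern for streaming decompressors: never hand an empty byte string to Brotli, and never finalize a decompressor that was never fed. The sketch below restates that pattern outside httpx; `SafeBrotliDecoder` is an illustrative name rather than the httpx class, and it assumes either the `brotlipy` or `Brotli` package is installed (both expose the `brotli` module used here, as in the file above).

```python
import brotli


class SafeBrotliDecoder:
    def __init__(self) -> None:
        self.decompressor = brotli.Decompressor()
        self.seen_data = False  # tracks whether any real bytes were fed

    def decode(self, data: bytes) -> bytes:
        if not data:
            # Empty chunk (e.g. a bodyless HEAD response): nothing to decode.
            return b""
        self.seen_data = True
        if hasattr(self.decompressor, "decompress"):  # brotlipy
            return self.decompressor.decompress(data)
        return self.decompressor.process(data)        # Brotli

    def flush(self) -> bytes:
        if not self.seen_data:
            # The decompressor never saw input, so don't try to finalize it.
            return b""
        if hasattr(self.decompressor, "finish"):      # brotlipy
            self.decompressor.finish()
        return b""


# A response with no body now decodes cleanly instead of erroring on the
# empty stream.
decoder = SafeBrotliDecoder()
print(decoder.decode(b"") + decoder.flush())  # b""
```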
gh_patches_debug_5265
|
rasdani/github-patches
|
git_diff
|
obspy__obspy-2562
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix simple typo: whith -> with
There is a small typo in obspy/io/gcf/core.py.
Should read `with` rather than `whith`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `obspy/io/gcf/core.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 GCF bindings to ObsPy core module.
4 """
5 from __future__ import (absolute_import, division, print_function,
6 unicode_literals)
7 from future.builtins import * # NOQA
8
9 from obspy import Stream, Trace, UTCDateTime
10
11 from . import libgcf
12
13
14 def merge_gcf_stream(st):
15 """
16 Merges GCF stream (replacing Stream.merge(-1) for headonly=True)
17
18 :type st: :class:`~obspy.core.stream.Stream`
19 :param st: GCF Stream object whith no data
20 :rtype: :class:`~obspy.core.stream.Stream`
21 :returns: Stream object containing header and data.
22 """
23 traces = []
24 for tr in st:
25 delta = tr.stats.delta
26 starttime = tr.stats.starttime
27 endtime = tr.stats.endtime
28 for trace in traces:
29 if tr.id == trace.id and delta == trace.stats.delta \
30 and not starttime == trace.stats.starttime:
31 if 0 < starttime - trace.stats.endtime <= delta:
32 trace.stats.npts += tr.stats.npts
33 break
34 elif 0 < trace.stats.starttime - endtime <= delta:
35 trace.stats.starttime = UTCDateTime(starttime)
36 trace.stats.npts += tr.stats.npts
37 break
38 else:
39 traces.append(tr)
40 return Stream(traces=traces)
41
42
43 def _is_gcf(filename):
44 """
45 Checks whether a file is GCF or not.
46
47 :type filename: str
48 :param filename: GCF file to be checked.
49 :rtype: bool
50 :return: ``True`` if a GCF file.
51 """
52 try:
53 with open(filename, 'rb') as f:
54 libgcf.is_gcf(f)
55 except Exception:
56 return False
57 return True
58
59
60 def _read_gcf(filename, headonly=False, **kwargs): # @UnusedVariable
61 """
62 Reads a GCF file and returns a Stream object.
63
64 only GCF files containing data records are supported.
65
66 .. warning::
67 This function should NOT be called directly, it registers via the
68 ObsPy :func:`~obspy.core.stream.read` function, call this instead.
69
70 :type filename: str
71 :param filename: GCF file to be read.
72 :type headonly: bool, optional
73 :param headonly: If True read only head of GCF file.
74 :type channel_prefix: str, optional
75 :param channel_prefix: Channel band and instrument codes.
76 Defaults to ``HH``.
77 :rtype: :class:`~obspy.core.stream.Stream`
78 :returns: Stream object containing header and data.
79
80 .. rubric:: Example
81 >>> from obspy import read
82 >>> st = read("/path/to/20160603_1955n.gcf", format="GCF")
83 """
84 traces = []
85 with open(filename, 'rb') as f:
86 while True:
87 try:
88 if headonly:
89 header = libgcf.read_header(f, **kwargs)
90 if header:
91 traces.append(Trace(header=header))
92 else:
93 hd = libgcf.read(f, **kwargs)
94 if hd:
95 traces.append(Trace(header=hd[0], data=hd[1]))
96 except EOFError:
97 break
98 st = Stream(traces=traces)
99 if headonly:
100 st = merge_gcf_stream(st)
101 else:
102 st.merge(-1)
103 return st
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/obspy/io/gcf/core.py b/obspy/io/gcf/core.py
--- a/obspy/io/gcf/core.py
+++ b/obspy/io/gcf/core.py
@@ -16,7 +16,7 @@
Merges GCF stream (replacing Stream.merge(-1) for headonly=True)
:type st: :class:`~obspy.core.stream.Stream`
- :param st: GCF Stream object whith no data
+ :param st: GCF Stream object with no data
:rtype: :class:`~obspy.core.stream.Stream`
:returns: Stream object containing header and data.
"""
|
{"golden_diff": "diff --git a/obspy/io/gcf/core.py b/obspy/io/gcf/core.py\n--- a/obspy/io/gcf/core.py\n+++ b/obspy/io/gcf/core.py\n@@ -16,7 +16,7 @@\n Merges GCF stream (replacing Stream.merge(-1) for headonly=True)\n \n :type st: :class:`~obspy.core.stream.Stream`\n- :param st: GCF Stream object whith no data\n+ :param st: GCF Stream object with no data\n :rtype: :class:`~obspy.core.stream.Stream`\n :returns: Stream object containing header and data.\n \"\"\"\n", "issue": "Fix simple typo: whith -> with\nThere is a small typo in obspy/io/gcf/core.py.\nShould read `with` rather than `whith`.\n\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nGCF bindings to ObsPy core module.\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\n\nfrom obspy import Stream, Trace, UTCDateTime\n\nfrom . import libgcf\n\n\ndef merge_gcf_stream(st):\n \"\"\"\n Merges GCF stream (replacing Stream.merge(-1) for headonly=True)\n\n :type st: :class:`~obspy.core.stream.Stream`\n :param st: GCF Stream object whith no data\n :rtype: :class:`~obspy.core.stream.Stream`\n :returns: Stream object containing header and data.\n \"\"\"\n traces = []\n for tr in st:\n delta = tr.stats.delta\n starttime = tr.stats.starttime\n endtime = tr.stats.endtime\n for trace in traces:\n if tr.id == trace.id and delta == trace.stats.delta \\\n and not starttime == trace.stats.starttime:\n if 0 < starttime - trace.stats.endtime <= delta:\n trace.stats.npts += tr.stats.npts\n break\n elif 0 < trace.stats.starttime - endtime <= delta:\n trace.stats.starttime = UTCDateTime(starttime)\n trace.stats.npts += tr.stats.npts\n break\n else:\n traces.append(tr)\n return Stream(traces=traces)\n\n\ndef _is_gcf(filename):\n \"\"\"\n Checks whether a file is GCF or not.\n\n :type filename: str\n :param filename: GCF file to be checked.\n :rtype: bool\n :return: ``True`` if a GCF file.\n \"\"\"\n try:\n with open(filename, 'rb') as f:\n libgcf.is_gcf(f)\n except Exception:\n return False\n return True\n\n\ndef _read_gcf(filename, headonly=False, **kwargs): # @UnusedVariable\n \"\"\"\n Reads a GCF file and returns a Stream object.\n\n only GCF files containing data records are supported.\n\n .. warning::\n This function should NOT be called directly, it registers via the\n ObsPy :func:`~obspy.core.stream.read` function, call this instead.\n\n :type filename: str\n :param filename: GCF file to be read.\n :type headonly: bool, optional\n :param headonly: If True read only head of GCF file.\n :type channel_prefix: str, optional\n :param channel_prefix: Channel band and instrument codes.\n Defaults to ``HH``.\n :rtype: :class:`~obspy.core.stream.Stream`\n :returns: Stream object containing header and data.\n\n .. 
rubric:: Example\n >>> from obspy import read\n >>> st = read(\"/path/to/20160603_1955n.gcf\", format=\"GCF\")\n \"\"\"\n traces = []\n with open(filename, 'rb') as f:\n while True:\n try:\n if headonly:\n header = libgcf.read_header(f, **kwargs)\n if header:\n traces.append(Trace(header=header))\n else:\n hd = libgcf.read(f, **kwargs)\n if hd:\n traces.append(Trace(header=hd[0], data=hd[1]))\n except EOFError:\n break\n st = Stream(traces=traces)\n if headonly:\n st = merge_gcf_stream(st)\n else:\n st.merge(-1)\n return st\n", "path": "obspy/io/gcf/core.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nGCF bindings to ObsPy core module.\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\n\nfrom obspy import Stream, Trace, UTCDateTime\n\nfrom . import libgcf\n\n\ndef merge_gcf_stream(st):\n \"\"\"\n Merges GCF stream (replacing Stream.merge(-1) for headonly=True)\n\n :type st: :class:`~obspy.core.stream.Stream`\n :param st: GCF Stream object with no data\n :rtype: :class:`~obspy.core.stream.Stream`\n :returns: Stream object containing header and data.\n \"\"\"\n traces = []\n for tr in st:\n delta = tr.stats.delta\n starttime = tr.stats.starttime\n endtime = tr.stats.endtime\n for trace in traces:\n if tr.id == trace.id and delta == trace.stats.delta \\\n and not starttime == trace.stats.starttime:\n if 0 < starttime - trace.stats.endtime <= delta:\n trace.stats.npts += tr.stats.npts\n break\n elif 0 < trace.stats.starttime - endtime <= delta:\n trace.stats.starttime = UTCDateTime(starttime)\n trace.stats.npts += tr.stats.npts\n break\n else:\n traces.append(tr)\n return Stream(traces=traces)\n\n\ndef _is_gcf(filename):\n \"\"\"\n Checks whether a file is GCF or not.\n\n :type filename: str\n :param filename: GCF file to be checked.\n :rtype: bool\n :return: ``True`` if a GCF file.\n \"\"\"\n try:\n with open(filename, 'rb') as f:\n libgcf.is_gcf(f)\n except Exception:\n return False\n return True\n\n\ndef _read_gcf(filename, headonly=False, **kwargs): # @UnusedVariable\n \"\"\"\n Reads a GCF file and returns a Stream object.\n\n only GCF files containing data records are supported.\n\n .. warning::\n This function should NOT be called directly, it registers via the\n ObsPy :func:`~obspy.core.stream.read` function, call this instead.\n\n :type filename: str\n :param filename: GCF file to be read.\n :type headonly: bool, optional\n :param headonly: If True read only head of GCF file.\n :type channel_prefix: str, optional\n :param channel_prefix: Channel band and instrument codes.\n Defaults to ``HH``.\n :rtype: :class:`~obspy.core.stream.Stream`\n :returns: Stream object containing header and data.\n\n .. rubric:: Example\n >>> from obspy import read\n >>> st = read(\"/path/to/20160603_1955n.gcf\", format=\"GCF\")\n \"\"\"\n traces = []\n with open(filename, 'rb') as f:\n while True:\n try:\n if headonly:\n header = libgcf.read_header(f, **kwargs)\n if header:\n traces.append(Trace(header=header))\n else:\n hd = libgcf.read(f, **kwargs)\n if hd:\n traces.append(Trace(header=hd[0], data=hd[1]))\n except EOFError:\n break\n st = Stream(traces=traces)\n if headonly:\n st = merge_gcf_stream(st)\n else:\n st.merge(-1)\n return st\n", "path": "obspy/io/gcf/core.py"}]}
| 1,269 | 146 |
gh_patches_debug_30548
|
rasdani/github-patches
|
git_diff
|
pytorch__vision-4657
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Loading 16bit png images
ported here from https://github.com/pytorch/pytorch/issues/32971
Original description:
When I was trying to load a 16 bit .png grayscale image with torchvision.datasets.imagefolder, it was loading every image as white only. 
I solved this issue by doing the transformation operations outside the Compose function.
cc @pmeier @wanifarooq @choidongyeon
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/io/image.py`
Content:
```
1 from enum import Enum
2
3 import torch
4
5 from .._internally_replaced_utils import _get_extension_path
6
7
8 try:
9 lib_path = _get_extension_path("image")
10 torch.ops.load_library(lib_path)
11 except (ImportError, OSError):
12 pass
13
14
15 class ImageReadMode(Enum):
16 """
17 Support for various modes while reading images.
18
19 Use ``ImageReadMode.UNCHANGED`` for loading the image as-is,
20 ``ImageReadMode.GRAY`` for converting to grayscale,
21 ``ImageReadMode.GRAY_ALPHA`` for grayscale with transparency,
22 ``ImageReadMode.RGB`` for RGB and ``ImageReadMode.RGB_ALPHA`` for
23 RGB with transparency.
24 """
25
26 UNCHANGED = 0
27 GRAY = 1
28 GRAY_ALPHA = 2
29 RGB = 3
30 RGB_ALPHA = 4
31
32
33 def read_file(path: str) -> torch.Tensor:
34 """
35 Reads and outputs the bytes contents of a file as a uint8 Tensor
36 with one dimension.
37
38 Args:
39 path (str): the path to the file to be read
40
41 Returns:
42 data (Tensor)
43 """
44 data = torch.ops.image.read_file(path)
45 return data
46
47
48 def write_file(filename: str, data: torch.Tensor) -> None:
49 """
50 Writes the contents of a uint8 tensor with one dimension to a
51 file.
52
53 Args:
54 filename (str): the path to the file to be written
55 data (Tensor): the contents to be written to the output file
56 """
57 torch.ops.image.write_file(filename, data)
58
59
60 def decode_png(input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:
61 """
62 Decodes a PNG image into a 3 dimensional RGB or grayscale Tensor.
63 Optionally converts the image to the desired format.
64 The values of the output tensor are uint8 between 0 and 255.
65
66 Args:
67 input (Tensor[1]): a one dimensional uint8 tensor containing
68 the raw bytes of the PNG image.
69 mode (ImageReadMode): the read mode used for optionally
70 converting the image. Default: ``ImageReadMode.UNCHANGED``.
71 See `ImageReadMode` class for more information on various
72 available modes.
73
74 Returns:
75 output (Tensor[image_channels, image_height, image_width])
76 """
77 output = torch.ops.image.decode_png(input, mode.value)
78 return output
79
80
81 def encode_png(input: torch.Tensor, compression_level: int = 6) -> torch.Tensor:
82 """
83 Takes an input tensor in CHW layout and returns a buffer with the contents
84 of its corresponding PNG file.
85
86 Args:
87 input (Tensor[channels, image_height, image_width]): int8 image tensor of
88 ``c`` channels, where ``c`` must 3 or 1.
89 compression_level (int): Compression factor for the resulting file, it must be a number
90 between 0 and 9. Default: 6
91
92 Returns:
93 Tensor[1]: A one dimensional int8 tensor that contains the raw bytes of the
94 PNG file.
95 """
96 output = torch.ops.image.encode_png(input, compression_level)
97 return output
98
99
100 def write_png(input: torch.Tensor, filename: str, compression_level: int = 6):
101 """
102 Takes an input tensor in CHW layout (or HW in the case of grayscale images)
103 and saves it in a PNG file.
104
105 Args:
106 input (Tensor[channels, image_height, image_width]): int8 image tensor of
107 ``c`` channels, where ``c`` must be 1 or 3.
108 filename (str): Path to save the image.
109 compression_level (int): Compression factor for the resulting file, it must be a number
110 between 0 and 9. Default: 6
111 """
112 output = encode_png(input, compression_level)
113 write_file(filename, output)
114
115
116 def decode_jpeg(
117 input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED, device: str = "cpu"
118 ) -> torch.Tensor:
119 """
120 Decodes a JPEG image into a 3 dimensional RGB or grayscale Tensor.
121 Optionally converts the image to the desired format.
122 The values of the output tensor are uint8 between 0 and 255.
123
124 Args:
125 input (Tensor[1]): a one dimensional uint8 tensor containing
126 the raw bytes of the JPEG image. This tensor must be on CPU,
127 regardless of the ``device`` parameter.
128 mode (ImageReadMode): the read mode used for optionally
129 converting the image. Default: ``ImageReadMode.UNCHANGED``.
130 See ``ImageReadMode`` class for more information on various
131 available modes.
132 device (str or torch.device): The device on which the decoded image will
133 be stored. If a cuda device is specified, the image will be decoded
134 with `nvjpeg <https://developer.nvidia.com/nvjpeg>`_. This is only
135 supported for CUDA version >= 10.1
136
137 Returns:
138 output (Tensor[image_channels, image_height, image_width])
139 """
140 device = torch.device(device)
141 if device.type == "cuda":
142 output = torch.ops.image.decode_jpeg_cuda(input, mode.value, device)
143 else:
144 output = torch.ops.image.decode_jpeg(input, mode.value)
145 return output
146
147
148 def encode_jpeg(input: torch.Tensor, quality: int = 75) -> torch.Tensor:
149 """
150 Takes an input tensor in CHW layout and returns a buffer with the contents
151 of its corresponding JPEG file.
152
153 Args:
154 input (Tensor[channels, image_height, image_width])): int8 image tensor of
155 ``c`` channels, where ``c`` must be 1 or 3.
156 quality (int): Quality of the resulting JPEG file, it must be a number between
157 1 and 100. Default: 75
158
159 Returns:
160 output (Tensor[1]): A one dimensional int8 tensor that contains the raw bytes of the
161 JPEG file.
162 """
163 if quality < 1 or quality > 100:
164 raise ValueError("Image quality should be a positive number " "between 1 and 100")
165
166 output = torch.ops.image.encode_jpeg(input, quality)
167 return output
168
169
170 def write_jpeg(input: torch.Tensor, filename: str, quality: int = 75):
171 """
172 Takes an input tensor in CHW layout and saves it in a JPEG file.
173
174 Args:
175 input (Tensor[channels, image_height, image_width]): int8 image tensor of ``c``
176 channels, where ``c`` must be 1 or 3.
177 filename (str): Path to save the image.
178 quality (int): Quality of the resulting JPEG file, it must be a number
179 between 1 and 100. Default: 75
180 """
181 output = encode_jpeg(input, quality)
182 write_file(filename, output)
183
184
185 def decode_image(input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:
186 """
187 Detects whether an image is a JPEG or PNG and performs the appropriate
188 operation to decode the image into a 3 dimensional RGB or grayscale Tensor.
189
190 Optionally converts the image to the desired format.
191 The values of the output tensor are uint8 between 0 and 255.
192
193 Args:
194 input (Tensor): a one dimensional uint8 tensor containing the raw bytes of the
195 PNG or JPEG image.
196 mode (ImageReadMode): the read mode used for optionally converting the image.
197 Default: ``ImageReadMode.UNCHANGED``.
198 See ``ImageReadMode`` class for more information on various
199 available modes.
200
201 Returns:
202 output (Tensor[image_channels, image_height, image_width])
203 """
204 output = torch.ops.image.decode_image(input, mode.value)
205 return output
206
207
208 def read_image(path: str, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:
209 """
210 Reads a JPEG or PNG image into a 3 dimensional RGB or grayscale Tensor.
211 Optionally converts the image to the desired format.
212 The values of the output tensor are uint8 between 0 and 255.
213
214 Args:
215 path (str): path of the JPEG or PNG image.
216 mode (ImageReadMode): the read mode used for optionally converting the image.
217 Default: ``ImageReadMode.UNCHANGED``.
218 See ``ImageReadMode`` class for more information on various
219 available modes.
220
221 Returns:
222 output (Tensor[image_channels, image_height, image_width])
223 """
224 data = read_file(path)
225 return decode_image(data, mode)
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchvision/io/image.py b/torchvision/io/image.py
--- a/torchvision/io/image.py
+++ b/torchvision/io/image.py
@@ -61,7 +61,12 @@
"""
Decodes a PNG image into a 3 dimensional RGB or grayscale Tensor.
Optionally converts the image to the desired format.
- The values of the output tensor are uint8 between 0 and 255.
+ The values of the output tensor are uint8 in [0, 255], except for
+ 16-bits pngs which are int32 tensors in [0, 65535].
+
+ .. warning::
+ Should pytorch ever support the uint16 dtype natively, the dtype of the
+ output for 16-bits pngs will be updated from int32 to uint16.
Args:
input (Tensor[1]): a one dimensional uint8 tensor containing
@@ -188,7 +193,8 @@
operation to decode the image into a 3 dimensional RGB or grayscale Tensor.
Optionally converts the image to the desired format.
- The values of the output tensor are uint8 between 0 and 255.
+ The values of the output tensor are uint8 in [0, 255], except for
+ 16-bits pngs which are int32 tensors in [0, 65535].
Args:
input (Tensor): a one dimensional uint8 tensor containing the raw bytes of the
@@ -209,7 +215,8 @@
"""
Reads a JPEG or PNG image into a 3 dimensional RGB or grayscale Tensor.
Optionally converts the image to the desired format.
- The values of the output tensor are uint8 between 0 and 255.
+ The values of the output tensor are uint8 in [0, 255], except for
+ 16-bits pngs which are int32 tensors in [0, 65535].
Args:
path (str): path of the JPEG or PNG image.
|
{"golden_diff": "diff --git a/torchvision/io/image.py b/torchvision/io/image.py\n--- a/torchvision/io/image.py\n+++ b/torchvision/io/image.py\n@@ -61,7 +61,12 @@\n \"\"\"\n Decodes a PNG image into a 3 dimensional RGB or grayscale Tensor.\n Optionally converts the image to the desired format.\n- The values of the output tensor are uint8 between 0 and 255.\n+ The values of the output tensor are uint8 in [0, 255], except for\n+ 16-bits pngs which are int32 tensors in [0, 65535].\n+\n+ .. warning::\n+ Should pytorch ever support the uint16 dtype natively, the dtype of the\n+ output for 16-bits pngs will be updated from int32 to uint16.\n \n Args:\n input (Tensor[1]): a one dimensional uint8 tensor containing\n@@ -188,7 +193,8 @@\n operation to decode the image into a 3 dimensional RGB or grayscale Tensor.\n \n Optionally converts the image to the desired format.\n- The values of the output tensor are uint8 between 0 and 255.\n+ The values of the output tensor are uint8 in [0, 255], except for\n+ 16-bits pngs which are int32 tensors in [0, 65535].\n \n Args:\n input (Tensor): a one dimensional uint8 tensor containing the raw bytes of the\n@@ -209,7 +215,8 @@\n \"\"\"\n Reads a JPEG or PNG image into a 3 dimensional RGB or grayscale Tensor.\n Optionally converts the image to the desired format.\n- The values of the output tensor are uint8 between 0 and 255.\n+ The values of the output tensor are uint8 in [0, 255], except for\n+ 16-bits pngs which are int32 tensors in [0, 65535].\n \n Args:\n path (str): path of the JPEG or PNG image.\n", "issue": "Loading 16bit png images\nported here from https://github.com/pytorch/pytorch/issues/32971\r\n\r\nOriginal description:\r\n\r\nWhen I was trying to load 16 bit .png grayscale image with torchvision.datasets.imagefolder ,it is loading every image as white only. 
\r\nI solved this issue by doing transformation operations outside Compose function.\r\n\r\n\r\ncc @pmeier @wanifarooq @choidongyeon \n", "before_files": [{"content": "from enum import Enum\n\nimport torch\n\nfrom .._internally_replaced_utils import _get_extension_path\n\n\ntry:\n lib_path = _get_extension_path(\"image\")\n torch.ops.load_library(lib_path)\nexcept (ImportError, OSError):\n pass\n\n\nclass ImageReadMode(Enum):\n \"\"\"\n Support for various modes while reading images.\n\n Use ``ImageReadMode.UNCHANGED`` for loading the image as-is,\n ``ImageReadMode.GRAY`` for converting to grayscale,\n ``ImageReadMode.GRAY_ALPHA`` for grayscale with transparency,\n ``ImageReadMode.RGB`` for RGB and ``ImageReadMode.RGB_ALPHA`` for\n RGB with transparency.\n \"\"\"\n\n UNCHANGED = 0\n GRAY = 1\n GRAY_ALPHA = 2\n RGB = 3\n RGB_ALPHA = 4\n\n\ndef read_file(path: str) -> torch.Tensor:\n \"\"\"\n Reads and outputs the bytes contents of a file as a uint8 Tensor\n with one dimension.\n\n Args:\n path (str): the path to the file to be read\n\n Returns:\n data (Tensor)\n \"\"\"\n data = torch.ops.image.read_file(path)\n return data\n\n\ndef write_file(filename: str, data: torch.Tensor) -> None:\n \"\"\"\n Writes the contents of a uint8 tensor with one dimension to a\n file.\n\n Args:\n filename (str): the path to the file to be written\n data (Tensor): the contents to be written to the output file\n \"\"\"\n torch.ops.image.write_file(filename, data)\n\n\ndef decode_png(input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:\n \"\"\"\n Decodes a PNG image into a 3 dimensional RGB or grayscale Tensor.\n Optionally converts the image to the desired format.\n The values of the output tensor are uint8 between 0 and 255.\n\n Args:\n input (Tensor[1]): a one dimensional uint8 tensor containing\n the raw bytes of the PNG image.\n mode (ImageReadMode): the read mode used for optionally\n converting the image. Default: ``ImageReadMode.UNCHANGED``.\n See `ImageReadMode` class for more information on various\n available modes.\n\n Returns:\n output (Tensor[image_channels, image_height, image_width])\n \"\"\"\n output = torch.ops.image.decode_png(input, mode.value)\n return output\n\n\ndef encode_png(input: torch.Tensor, compression_level: int = 6) -> torch.Tensor:\n \"\"\"\n Takes an input tensor in CHW layout and returns a buffer with the contents\n of its corresponding PNG file.\n\n Args:\n input (Tensor[channels, image_height, image_width]): int8 image tensor of\n ``c`` channels, where ``c`` must 3 or 1.\n compression_level (int): Compression factor for the resulting file, it must be a number\n between 0 and 9. Default: 6\n\n Returns:\n Tensor[1]: A one dimensional int8 tensor that contains the raw bytes of the\n PNG file.\n \"\"\"\n output = torch.ops.image.encode_png(input, compression_level)\n return output\n\n\ndef write_png(input: torch.Tensor, filename: str, compression_level: int = 6):\n \"\"\"\n Takes an input tensor in CHW layout (or HW in the case of grayscale images)\n and saves it in a PNG file.\n\n Args:\n input (Tensor[channels, image_height, image_width]): int8 image tensor of\n ``c`` channels, where ``c`` must be 1 or 3.\n filename (str): Path to save the image.\n compression_level (int): Compression factor for the resulting file, it must be a number\n between 0 and 9. 
Default: 6\n \"\"\"\n output = encode_png(input, compression_level)\n write_file(filename, output)\n\n\ndef decode_jpeg(\n input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED, device: str = \"cpu\"\n) -> torch.Tensor:\n \"\"\"\n Decodes a JPEG image into a 3 dimensional RGB or grayscale Tensor.\n Optionally converts the image to the desired format.\n The values of the output tensor are uint8 between 0 and 255.\n\n Args:\n input (Tensor[1]): a one dimensional uint8 tensor containing\n the raw bytes of the JPEG image. This tensor must be on CPU,\n regardless of the ``device`` parameter.\n mode (ImageReadMode): the read mode used for optionally\n converting the image. Default: ``ImageReadMode.UNCHANGED``.\n See ``ImageReadMode`` class for more information on various\n available modes.\n device (str or torch.device): The device on which the decoded image will\n be stored. If a cuda device is specified, the image will be decoded\n with `nvjpeg <https://developer.nvidia.com/nvjpeg>`_. This is only\n supported for CUDA version >= 10.1\n\n Returns:\n output (Tensor[image_channels, image_height, image_width])\n \"\"\"\n device = torch.device(device)\n if device.type == \"cuda\":\n output = torch.ops.image.decode_jpeg_cuda(input, mode.value, device)\n else:\n output = torch.ops.image.decode_jpeg(input, mode.value)\n return output\n\n\ndef encode_jpeg(input: torch.Tensor, quality: int = 75) -> torch.Tensor:\n \"\"\"\n Takes an input tensor in CHW layout and returns a buffer with the contents\n of its corresponding JPEG file.\n\n Args:\n input (Tensor[channels, image_height, image_width])): int8 image tensor of\n ``c`` channels, where ``c`` must be 1 or 3.\n quality (int): Quality of the resulting JPEG file, it must be a number between\n 1 and 100. Default: 75\n\n Returns:\n output (Tensor[1]): A one dimensional int8 tensor that contains the raw bytes of the\n JPEG file.\n \"\"\"\n if quality < 1 or quality > 100:\n raise ValueError(\"Image quality should be a positive number \" \"between 1 and 100\")\n\n output = torch.ops.image.encode_jpeg(input, quality)\n return output\n\n\ndef write_jpeg(input: torch.Tensor, filename: str, quality: int = 75):\n \"\"\"\n Takes an input tensor in CHW layout and saves it in a JPEG file.\n\n Args:\n input (Tensor[channels, image_height, image_width]): int8 image tensor of ``c``\n channels, where ``c`` must be 1 or 3.\n filename (str): Path to save the image.\n quality (int): Quality of the resulting JPEG file, it must be a number\n between 1 and 100. 
Default: 75\n \"\"\"\n output = encode_jpeg(input, quality)\n write_file(filename, output)\n\n\ndef decode_image(input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:\n \"\"\"\n Detects whether an image is a JPEG or PNG and performs the appropriate\n operation to decode the image into a 3 dimensional RGB or grayscale Tensor.\n\n Optionally converts the image to the desired format.\n The values of the output tensor are uint8 between 0 and 255.\n\n Args:\n input (Tensor): a one dimensional uint8 tensor containing the raw bytes of the\n PNG or JPEG image.\n mode (ImageReadMode): the read mode used for optionally converting the image.\n Default: ``ImageReadMode.UNCHANGED``.\n See ``ImageReadMode`` class for more information on various\n available modes.\n\n Returns:\n output (Tensor[image_channels, image_height, image_width])\n \"\"\"\n output = torch.ops.image.decode_image(input, mode.value)\n return output\n\n\ndef read_image(path: str, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:\n \"\"\"\n Reads a JPEG or PNG image into a 3 dimensional RGB or grayscale Tensor.\n Optionally converts the image to the desired format.\n The values of the output tensor are uint8 between 0 and 255.\n\n Args:\n path (str): path of the JPEG or PNG image.\n mode (ImageReadMode): the read mode used for optionally converting the image.\n Default: ``ImageReadMode.UNCHANGED``.\n See ``ImageReadMode`` class for more information on various\n available modes.\n\n Returns:\n output (Tensor[image_channels, image_height, image_width])\n \"\"\"\n data = read_file(path)\n return decode_image(data, mode)\n", "path": "torchvision/io/image.py"}], "after_files": [{"content": "from enum import Enum\n\nimport torch\n\nfrom .._internally_replaced_utils import _get_extension_path\n\n\ntry:\n lib_path = _get_extension_path(\"image\")\n torch.ops.load_library(lib_path)\nexcept (ImportError, OSError):\n pass\n\n\nclass ImageReadMode(Enum):\n \"\"\"\n Support for various modes while reading images.\n\n Use ``ImageReadMode.UNCHANGED`` for loading the image as-is,\n ``ImageReadMode.GRAY`` for converting to grayscale,\n ``ImageReadMode.GRAY_ALPHA`` for grayscale with transparency,\n ``ImageReadMode.RGB`` for RGB and ``ImageReadMode.RGB_ALPHA`` for\n RGB with transparency.\n \"\"\"\n\n UNCHANGED = 0\n GRAY = 1\n GRAY_ALPHA = 2\n RGB = 3\n RGB_ALPHA = 4\n\n\ndef read_file(path: str) -> torch.Tensor:\n \"\"\"\n Reads and outputs the bytes contents of a file as a uint8 Tensor\n with one dimension.\n\n Args:\n path (str): the path to the file to be read\n\n Returns:\n data (Tensor)\n \"\"\"\n data = torch.ops.image.read_file(path)\n return data\n\n\ndef write_file(filename: str, data: torch.Tensor) -> None:\n \"\"\"\n Writes the contents of a uint8 tensor with one dimension to a\n file.\n\n Args:\n filename (str): the path to the file to be written\n data (Tensor): the contents to be written to the output file\n \"\"\"\n torch.ops.image.write_file(filename, data)\n\n\ndef decode_png(input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:\n \"\"\"\n Decodes a PNG image into a 3 dimensional RGB or grayscale Tensor.\n Optionally converts the image to the desired format.\n The values of the output tensor are uint8 in [0, 255], except for\n 16-bits pngs which are int32 tensors in [0, 65535].\n\n .. 
warning::\n Should pytorch ever support the uint16 dtype natively, the dtype of the\n output for 16-bits pngs will be updated from int32 to uint16.\n\n Args:\n input (Tensor[1]): a one dimensional uint8 tensor containing\n the raw bytes of the PNG image.\n mode (ImageReadMode): the read mode used for optionally\n converting the image. Default: ``ImageReadMode.UNCHANGED``.\n See `ImageReadMode` class for more information on various\n available modes.\n\n Returns:\n output (Tensor[image_channels, image_height, image_width])\n \"\"\"\n output = torch.ops.image.decode_png(input, mode.value)\n return output\n\n\ndef encode_png(input: torch.Tensor, compression_level: int = 6) -> torch.Tensor:\n \"\"\"\n Takes an input tensor in CHW layout and returns a buffer with the contents\n of its corresponding PNG file.\n\n Args:\n input (Tensor[channels, image_height, image_width]): int8 image tensor of\n ``c`` channels, where ``c`` must 3 or 1.\n compression_level (int): Compression factor for the resulting file, it must be a number\n between 0 and 9. Default: 6\n\n Returns:\n Tensor[1]: A one dimensional int8 tensor that contains the raw bytes of the\n PNG file.\n \"\"\"\n output = torch.ops.image.encode_png(input, compression_level)\n return output\n\n\ndef write_png(input: torch.Tensor, filename: str, compression_level: int = 6):\n \"\"\"\n Takes an input tensor in CHW layout (or HW in the case of grayscale images)\n and saves it in a PNG file.\n\n Args:\n input (Tensor[channels, image_height, image_width]): int8 image tensor of\n ``c`` channels, where ``c`` must be 1 or 3.\n filename (str): Path to save the image.\n compression_level (int): Compression factor for the resulting file, it must be a number\n between 0 and 9. Default: 6\n \"\"\"\n output = encode_png(input, compression_level)\n write_file(filename, output)\n\n\ndef decode_jpeg(\n input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED, device: str = \"cpu\"\n) -> torch.Tensor:\n \"\"\"\n Decodes a JPEG image into a 3 dimensional RGB or grayscale Tensor.\n Optionally converts the image to the desired format.\n The values of the output tensor are uint8 between 0 and 255.\n\n Args:\n input (Tensor[1]): a one dimensional uint8 tensor containing\n the raw bytes of the JPEG image. This tensor must be on CPU,\n regardless of the ``device`` parameter.\n mode (ImageReadMode): the read mode used for optionally\n converting the image. Default: ``ImageReadMode.UNCHANGED``.\n See ``ImageReadMode`` class for more information on various\n available modes.\n device (str or torch.device): The device on which the decoded image will\n be stored. If a cuda device is specified, the image will be decoded\n with `nvjpeg <https://developer.nvidia.com/nvjpeg>`_. This is only\n supported for CUDA version >= 10.1\n\n Returns:\n output (Tensor[image_channels, image_height, image_width])\n \"\"\"\n device = torch.device(device)\n if device.type == \"cuda\":\n output = torch.ops.image.decode_jpeg_cuda(input, mode.value, device)\n else:\n output = torch.ops.image.decode_jpeg(input, mode.value)\n return output\n\n\ndef encode_jpeg(input: torch.Tensor, quality: int = 75) -> torch.Tensor:\n \"\"\"\n Takes an input tensor in CHW layout and returns a buffer with the contents\n of its corresponding JPEG file.\n\n Args:\n input (Tensor[channels, image_height, image_width])): int8 image tensor of\n ``c`` channels, where ``c`` must be 1 or 3.\n quality (int): Quality of the resulting JPEG file, it must be a number between\n 1 and 100. 
Default: 75\n\n Returns:\n output (Tensor[1]): A one dimensional int8 tensor that contains the raw bytes of the\n JPEG file.\n \"\"\"\n if quality < 1 or quality > 100:\n raise ValueError(\"Image quality should be a positive number \" \"between 1 and 100\")\n\n output = torch.ops.image.encode_jpeg(input, quality)\n return output\n\n\ndef write_jpeg(input: torch.Tensor, filename: str, quality: int = 75):\n \"\"\"\n Takes an input tensor in CHW layout and saves it in a JPEG file.\n\n Args:\n input (Tensor[channels, image_height, image_width]): int8 image tensor of ``c``\n channels, where ``c`` must be 1 or 3.\n filename (str): Path to save the image.\n quality (int): Quality of the resulting JPEG file, it must be a number\n between 1 and 100. Default: 75\n \"\"\"\n output = encode_jpeg(input, quality)\n write_file(filename, output)\n\n\ndef decode_image(input: torch.Tensor, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:\n \"\"\"\n Detects whether an image is a JPEG or PNG and performs the appropriate\n operation to decode the image into a 3 dimensional RGB or grayscale Tensor.\n\n Optionally converts the image to the desired format.\n The values of the output tensor are uint8 in [0, 255], except for\n 16-bits pngs which are int32 tensors in [0, 65535].\n\n Args:\n input (Tensor): a one dimensional uint8 tensor containing the raw bytes of the\n PNG or JPEG image.\n mode (ImageReadMode): the read mode used for optionally converting the image.\n Default: ``ImageReadMode.UNCHANGED``.\n See ``ImageReadMode`` class for more information on various\n available modes.\n\n Returns:\n output (Tensor[image_channels, image_height, image_width])\n \"\"\"\n output = torch.ops.image.decode_image(input, mode.value)\n return output\n\n\ndef read_image(path: str, mode: ImageReadMode = ImageReadMode.UNCHANGED) -> torch.Tensor:\n \"\"\"\n Reads a JPEG or PNG image into a 3 dimensional RGB or grayscale Tensor.\n Optionally converts the image to the desired format.\n The values of the output tensor are uint8 in [0, 255], except for\n 16-bits pngs which are int32 tensors in [0, 65535].\n\n Args:\n path (str): path of the JPEG or PNG image.\n mode (ImageReadMode): the read mode used for optionally converting the image.\n Default: ``ImageReadMode.UNCHANGED``.\n See ``ImageReadMode`` class for more information on various\n available modes.\n\n Returns:\n output (Tensor[image_channels, image_height, image_width])\n \"\"\"\n data = read_file(path)\n return decode_image(data, mode)\n", "path": "torchvision/io/image.py"}]}
| 2,819 | 475 |
gh_patches_debug_24617
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-2048
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Empty circuits from transpiler fail qobj validation
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: master
- **Python version**:
- **Operating system**:
### What is the current behavior?
The compiler removes all the gates from this circuit and leaves an empty circuit that fails validation.
```
qr = QuantumRegister(2, 'qr')
circuit = QuantumCircuit(qr)
circuit.h(qr[0])
circuit.h(qr[0])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
coupling_map = [[0, 1]]
basis_gates = ['u1', 'u2', 'u3', 'cx', 'id']
backend = BasicAer.get_backend('qasm_simulator')
qobj = compile(circuit, backend=backend, coupling_map=coupling_map, basis_gates=basis_gates)
```
```
ModelValidationError: {'instructions': ['Shorter than minimum length 1.']}
```
### Steps to reproduce the problem
### What is the expected behavior?
I believe that a valid circuit returned by the transpiler, in this case an empty circuit, should result in a valid qobj.
### Suggested solutions
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/qobj/models/base.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """The generic qobj models."""
9
10 from marshmallow.validate import Length, Range
11
12 from qiskit.validation import BaseSchema, bind_schema, BaseModel
13 from qiskit.validation.fields import String, Nested, Integer
14
15
16 class QobjInstructionSchema(BaseSchema):
17 """Base Schema for QobjInstruction."""
18
19 # Required properties
20 name = String(required=True)
21
22
23 class QobjExperimentHeaderSchema(BaseSchema):
24 """Base Schema for QobjExperimentHeader."""
25 pass
26
27
28 class QobjExperimentConfigSchema(BaseSchema):
29 """Base Schema for QobjExperimentConfig."""
30 pass
31
32
33 class QobjExperimentSchema(BaseSchema):
34 """Base Schema for QobjExperiment."""
35
36 # Required properties.
37 instructions = Nested(QobjInstructionSchema, required=True, many=True,
38 validate=Length(min=1))
39
40 # Optional properties.
41 header = Nested(QobjExperimentHeaderSchema)
42 config = Nested(QobjExperimentConfigSchema)
43
44
45 class QobjConfigSchema(BaseSchema):
46 """Base Schema for QobjConfig."""
47
48 # Optional properties.
49 max_credits = Integer()
50 seed = Integer()
51 memory_slots = Integer(validate=Range(min=0))
52 shots = Integer(validate=Range(min=1))
53
54
55 class QobjHeaderSchema(BaseSchema):
56 """Base Schema for QobjHeader."""
57
58 # Optional properties.
59 backend_name = String()
60 backend_version = String()
61
62
63 @bind_schema(QobjInstructionSchema)
64 class QobjInstruction(BaseModel):
65 """Model for QobjInstruction.
66
67 Please note that this class only describes the required fields. For the
68 full description of the model, please check ``QobjInstructionSchema``.
69
70 Attributes:
71 name (str): name of the instruction
72 """
73 def __init__(self, name, **kwargs):
74 self.name = name
75
76 super().__init__(**kwargs)
77
78
79 @bind_schema(QobjExperimentHeaderSchema)
80 class QobjExperimentHeader(BaseModel):
81 """Model for QobjExperimentHeader.
82
83 Please note that this class only describes the required fields. For the
84 full description of the model, please check ``QobjExperimentHeaderSchema``.
85 """
86 pass
87
88
89 @bind_schema(QobjExperimentConfigSchema)
90 class QobjExperimentConfig(BaseModel):
91 """Model for QobjExperimentConfig.
92
93 Please note that this class only describes the required fields. For the
94 full description of the model, please check ``QobjExperimentConfigSchema``.
95 """
96 pass
97
98
99 @bind_schema(QobjExperimentSchema)
100 class QobjExperiment(BaseModel):
101 """Model for QobjExperiment.
102
103 Please note that this class only describes the required fields. For the
104 full description of the model, please check ``QobjExperimentSchema``.
105
106 Attributes:
107 instructions (list[QobjInstruction]): list of instructions.
108 """
109 def __init__(self, instructions, **kwargs):
110 self.instructions = instructions
111
112 super().__init__(**kwargs)
113
114
115 @bind_schema(QobjConfigSchema)
116 class QobjConfig(BaseModel):
117 """Model for QobjConfig.
118
119 Please note that this class only describes the required fields. For the
120 full description of the model, please check ``QobjConfigSchema``.
121 """
122 pass
123
124
125 @bind_schema(QobjHeaderSchema)
126 class QobjHeader(BaseModel):
127 """Model for QobjHeader.
128
129 Please note that this class only describes the required fields. For the
130 full description of the model, please check ``QobjHeaderSchema``.
131 """
132 pass
133
```
Path: `qiskit/qobj/models/qasm.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2019, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """The qasm qobj models."""
9
10 from marshmallow.validate import Range, Length, Regexp
11
12 from qiskit.validation import bind_schema, BaseSchema, BaseModel
13 from qiskit.validation.fields import List, Integer, InstructionParameter, Nested, String
14 from .base import (QobjInstructionSchema, QobjExperimentConfigSchema, QobjExperimentSchema,
15 QobjConfigSchema, QobjInstruction, QobjExperimentConfig,
16 QobjExperiment, QobjConfig)
17
18
19 class QobjConditionalSchema(BaseSchema):
20 """Schema for QobjConditional."""
21
22 # Required properties.
23 mask = String(required=True, validate=Regexp('^0x([0-9A-Fa-f])+$'))
24 type = String(required=True)
25 val = String(required=True, validate=Regexp('^0x([0-9A-Fa-f])+$'))
26
27
28 class QasmQobjInstructionSchema(QobjInstructionSchema):
29 """Schema for QasmQobjInstruction."""
30
31 # Optional properties.
32 qubits = List(Integer(validate=Range(min=0)),
33 validate=Length(min=1))
34 params = List(InstructionParameter())
35 memory = List(Integer(validate=Range(min=0)),
36 validate=Length(min=1))
37 conditional = Nested(QobjConditionalSchema)
38
39
40 class QasmQobjExperimentConfigSchema(QobjExperimentConfigSchema):
41 """Schema for QasmQobjExperimentConfig."""
42
43 # Optional properties.
44 memory_slots = Integer(validate=Range(min=0))
45 n_qubits = Integer(validate=Range(min=1))
46
47
48 class QasmQobjExperimentSchema(QobjExperimentSchema):
49 """Schema for QasmQobjExperiment."""
50
51 # Required properties.
52 instructions = Nested(QasmQobjInstructionSchema, required=True, many=True,
53 validate=Length(min=1))
54
55 # Optional properties.
56 config = Nested(QasmQobjExperimentConfigSchema)
57
58
59 class QasmQobjConfigSchema(QobjConfigSchema):
60 """Schema for QasmQobjConfig."""
61
62 # Optional properties.
63 n_qubits = Integer(validate=Range(min=1))
64
65
66 @bind_schema(QobjConditionalSchema)
67 class QobjConditional(BaseModel):
68 """Model for QobjConditional.
69
70 Please note that this class only describes the required fields. For the
71 full description of the model, please check ``QobjConditionalSchema``.
72
73 Attributes:
74 mask (str): hexadecimal mask of the conditional
75 type (str): type of the conditional
76 val (str): hexadecimal value of the conditional
77 """
78 def __init__(self, mask, type, val, **kwargs):
79 # pylint: disable=redefined-builtin
80 self.mask = mask
81 self.type = type
82 self.val = val
83
84 super().__init__(**kwargs)
85
86
87 @bind_schema(QasmQobjInstructionSchema)
88 class QasmQobjInstruction(QobjInstruction):
89 """Model for QasmQobjInstruction inherit from QobjInstruction.
90
91 Please note that this class only describes the required fields. For the
92 full description of the model, please check ``QasmQobjInstructionSchema``.
93
94 Attributes:
95 name (str): name of the instruction
96 """
97 def __init__(self, name, **kwargs):
98 super().__init__(name=name,
99 **kwargs)
100
101
102 @bind_schema(QasmQobjExperimentConfigSchema)
103 class QasmQobjExperimentConfig(QobjExperimentConfig):
104 """Model for QasmQobjExperimentConfig inherit from QobjExperimentConfig.
105
106 Please note that this class only describes the required fields. For the
107 full description of the model, please check ``QasmQobjExperimentConfigSchema``.
108 """
109 pass
110
111
112 @bind_schema(QasmQobjExperimentSchema)
113 class QasmQobjExperiment(QobjExperiment):
114 """Model for QasmQobjExperiment inherit from QobjExperiment.
115
116 Please note that this class only describes the required fields. For the
117 full description of the model, please check ``QasmQobjExperimentSchema``.
118
119 Attributes:
120 instructions (list[QasmQobjInstruction]): list of instructions.
121 """
122 def __init__(self, instructions, **kwargs):
123 super().__init__(instructions=instructions,
124 **kwargs)
125
126
127 @bind_schema(QasmQobjConfigSchema)
128 class QasmQobjConfig(QobjConfig):
129 """Model for QasmQobjConfig inherit from QobjConfig.
130
131 Please note that this class only describes the required fields. For the
132 full description of the model, please check ``QasmQobjConfigSchema``.
133 """
134 pass
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qiskit/qobj/models/base.py b/qiskit/qobj/models/base.py
--- a/qiskit/qobj/models/base.py
+++ b/qiskit/qobj/models/base.py
@@ -7,7 +7,7 @@
"""The generic qobj models."""
-from marshmallow.validate import Length, Range
+from marshmallow.validate import Range
from qiskit.validation import BaseSchema, bind_schema, BaseModel
from qiskit.validation.fields import String, Nested, Integer
@@ -34,8 +34,7 @@
"""Base Schema for QobjExperiment."""
# Required properties.
- instructions = Nested(QobjInstructionSchema, required=True, many=True,
- validate=Length(min=1))
+ instructions = Nested(QobjInstructionSchema, required=True, many=True)
# Optional properties.
header = Nested(QobjExperimentHeaderSchema)
diff --git a/qiskit/qobj/models/qasm.py b/qiskit/qobj/models/qasm.py
--- a/qiskit/qobj/models/qasm.py
+++ b/qiskit/qobj/models/qasm.py
@@ -49,8 +49,7 @@
"""Schema for QasmQobjExperiment."""
# Required properties.
- instructions = Nested(QasmQobjInstructionSchema, required=True, many=True,
- validate=Length(min=1))
+ instructions = Nested(QasmQobjInstructionSchema, required=True, many=True)
# Optional properties.
config = Nested(QasmQobjExperimentConfigSchema)
|
{"golden_diff": "diff --git a/qiskit/qobj/models/base.py b/qiskit/qobj/models/base.py\n--- a/qiskit/qobj/models/base.py\n+++ b/qiskit/qobj/models/base.py\n@@ -7,7 +7,7 @@\n \n \"\"\"The generic qobj models.\"\"\"\n \n-from marshmallow.validate import Length, Range\n+from marshmallow.validate import Range\n \n from qiskit.validation import BaseSchema, bind_schema, BaseModel\n from qiskit.validation.fields import String, Nested, Integer\n@@ -34,8 +34,7 @@\n \"\"\"Base Schema for QobjExperiment.\"\"\"\n \n # Required properties.\n- instructions = Nested(QobjInstructionSchema, required=True, many=True,\n- validate=Length(min=1))\n+ instructions = Nested(QobjInstructionSchema, required=True, many=True)\n \n # Optional properties.\n header = Nested(QobjExperimentHeaderSchema)\ndiff --git a/qiskit/qobj/models/qasm.py b/qiskit/qobj/models/qasm.py\n--- a/qiskit/qobj/models/qasm.py\n+++ b/qiskit/qobj/models/qasm.py\n@@ -49,8 +49,7 @@\n \"\"\"Schema for QasmQobjExperiment.\"\"\"\n \n # Required properties.\n- instructions = Nested(QasmQobjInstructionSchema, required=True, many=True,\n- validate=Length(min=1))\n+ instructions = Nested(QasmQobjInstructionSchema, required=True, many=True)\n \n # Optional properties.\n config = Nested(QasmQobjExperimentConfigSchema)\n", "issue": "Empty circuits from transpiler fail qobj validation\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: master\r\n- **Python version**:\r\n- **Operating system**:\r\n\r\n### What is the current behavior?\r\nThe compiler removes all the gates from this circuit and leaves an empty circuit that fails validation.\r\n\r\n```\r\nqr = QuantumRegister(2, 'qr')\r\ncircuit = QuantumCircuit(qr)\r\ncircuit.h(qr[0])\r\ncircuit.h(qr[0])\r\ncircuit.cx(qr[0], qr[1])\r\ncircuit.cx(qr[0], qr[1])\r\ncircuit.cx(qr[0], qr[1])\r\ncircuit.cx(qr[0], qr[1])\r\n\r\ncoupling_map = [[0, 1]]\r\nbasis_gates = ['u1', 'u2', 'u3', 'cx', 'id']\r\n\r\nbackend = BasicAer.get_backend('qasm_simulator')\r\n\r\nqobj = compile(circuit, backend=backend, coupling_map=coupling_map, basis_gates=basis_gates)\r\n```\r\n\r\n```\r\nModelValidationError: {'instructions': ['Shorter than minimum length 1.']}\r\n```\r\n### Steps to reproduce the problem\r\n\r\n\r\n\r\n### What is the expected behavior?\r\nI believe that a valid circuit returned by the transpiler, in this case an empty circuit, should result in a valid qobj.\r\n\r\n### Suggested solutions\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2019, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"The generic qobj models.\"\"\"\n\nfrom marshmallow.validate import Length, Range\n\nfrom qiskit.validation import BaseSchema, bind_schema, BaseModel\nfrom qiskit.validation.fields import String, Nested, Integer\n\n\nclass QobjInstructionSchema(BaseSchema):\n \"\"\"Base Schema for QobjInstruction.\"\"\"\n\n # Required properties\n name = String(required=True)\n\n\nclass QobjExperimentHeaderSchema(BaseSchema):\n \"\"\"Base Schema for QobjExperimentHeader.\"\"\"\n pass\n\n\nclass QobjExperimentConfigSchema(BaseSchema):\n \"\"\"Base Schema for QobjExperimentConfig.\"\"\"\n pass\n\n\nclass QobjExperimentSchema(BaseSchema):\n \"\"\"Base Schema for QobjExperiment.\"\"\"\n\n # Required properties.\n instructions = 
Nested(QobjInstructionSchema, required=True, many=True,\n validate=Length(min=1))\n\n # Optional properties.\n header = Nested(QobjExperimentHeaderSchema)\n config = Nested(QobjExperimentConfigSchema)\n\n\nclass QobjConfigSchema(BaseSchema):\n \"\"\"Base Schema for QobjConfig.\"\"\"\n\n # Optional properties.\n max_credits = Integer()\n seed = Integer()\n memory_slots = Integer(validate=Range(min=0))\n shots = Integer(validate=Range(min=1))\n\n\nclass QobjHeaderSchema(BaseSchema):\n \"\"\"Base Schema for QobjHeader.\"\"\"\n\n # Optional properties.\n backend_name = String()\n backend_version = String()\n\n\n@bind_schema(QobjInstructionSchema)\nclass QobjInstruction(BaseModel):\n \"\"\"Model for QobjInstruction.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjInstructionSchema``.\n\n Attributes:\n name (str): name of the instruction\n \"\"\"\n def __init__(self, name, **kwargs):\n self.name = name\n\n super().__init__(**kwargs)\n\n\n@bind_schema(QobjExperimentHeaderSchema)\nclass QobjExperimentHeader(BaseModel):\n \"\"\"Model for QobjExperimentHeader.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjExperimentHeaderSchema``.\n \"\"\"\n pass\n\n\n@bind_schema(QobjExperimentConfigSchema)\nclass QobjExperimentConfig(BaseModel):\n \"\"\"Model for QobjExperimentConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjExperimentConfigSchema``.\n \"\"\"\n pass\n\n\n@bind_schema(QobjExperimentSchema)\nclass QobjExperiment(BaseModel):\n \"\"\"Model for QobjExperiment.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjExperimentSchema``.\n\n Attributes:\n instructions (list[QobjInstruction]): list of instructions.\n \"\"\"\n def __init__(self, instructions, **kwargs):\n self.instructions = instructions\n\n super().__init__(**kwargs)\n\n\n@bind_schema(QobjConfigSchema)\nclass QobjConfig(BaseModel):\n \"\"\"Model for QobjConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjConfigSchema``.\n \"\"\"\n pass\n\n\n@bind_schema(QobjHeaderSchema)\nclass QobjHeader(BaseModel):\n \"\"\"Model for QobjHeader.\n\n Please note that this class only describes the required fields. 
For the\n full description of the model, please check ``QobjHeaderSchema``.\n \"\"\"\n pass\n", "path": "qiskit/qobj/models/base.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2019, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"The qasm qobj models.\"\"\"\n\nfrom marshmallow.validate import Range, Length, Regexp\n\nfrom qiskit.validation import bind_schema, BaseSchema, BaseModel\nfrom qiskit.validation.fields import List, Integer, InstructionParameter, Nested, String\nfrom .base import (QobjInstructionSchema, QobjExperimentConfigSchema, QobjExperimentSchema,\n QobjConfigSchema, QobjInstruction, QobjExperimentConfig,\n QobjExperiment, QobjConfig)\n\n\nclass QobjConditionalSchema(BaseSchema):\n \"\"\"Schema for QobjConditional.\"\"\"\n\n # Required properties.\n mask = String(required=True, validate=Regexp('^0x([0-9A-Fa-f])+$'))\n type = String(required=True)\n val = String(required=True, validate=Regexp('^0x([0-9A-Fa-f])+$'))\n\n\nclass QasmQobjInstructionSchema(QobjInstructionSchema):\n \"\"\"Schema for QasmQobjInstruction.\"\"\"\n\n # Optional properties.\n qubits = List(Integer(validate=Range(min=0)),\n validate=Length(min=1))\n params = List(InstructionParameter())\n memory = List(Integer(validate=Range(min=0)),\n validate=Length(min=1))\n conditional = Nested(QobjConditionalSchema)\n\n\nclass QasmQobjExperimentConfigSchema(QobjExperimentConfigSchema):\n \"\"\"Schema for QasmQobjExperimentConfig.\"\"\"\n\n # Optional properties.\n memory_slots = Integer(validate=Range(min=0))\n n_qubits = Integer(validate=Range(min=1))\n\n\nclass QasmQobjExperimentSchema(QobjExperimentSchema):\n \"\"\"Schema for QasmQobjExperiment.\"\"\"\n\n # Required properties.\n instructions = Nested(QasmQobjInstructionSchema, required=True, many=True,\n validate=Length(min=1))\n\n # Optional properties.\n config = Nested(QasmQobjExperimentConfigSchema)\n\n\nclass QasmQobjConfigSchema(QobjConfigSchema):\n \"\"\"Schema for QasmQobjConfig.\"\"\"\n\n # Optional properties.\n n_qubits = Integer(validate=Range(min=1))\n\n\n@bind_schema(QobjConditionalSchema)\nclass QobjConditional(BaseModel):\n \"\"\"Model for QobjConditional.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjConditionalSchema``.\n\n Attributes:\n mask (str): hexadecimal mask of the conditional\n type (str): type of the conditional\n val (str): hexadecimal value of the conditional\n \"\"\"\n def __init__(self, mask, type, val, **kwargs):\n # pylint: disable=redefined-builtin\n self.mask = mask\n self.type = type\n self.val = val\n\n super().__init__(**kwargs)\n\n\n@bind_schema(QasmQobjInstructionSchema)\nclass QasmQobjInstruction(QobjInstruction):\n \"\"\"Model for QasmQobjInstruction inherit from QobjInstruction.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QasmQobjInstructionSchema``.\n\n Attributes:\n name (str): name of the instruction\n \"\"\"\n def __init__(self, name, **kwargs):\n super().__init__(name=name,\n **kwargs)\n\n\n@bind_schema(QasmQobjExperimentConfigSchema)\nclass QasmQobjExperimentConfig(QobjExperimentConfig):\n \"\"\"Model for QasmQobjExperimentConfig inherit from QobjExperimentConfig.\n\n Please note that this class only describes the required fields. 
For the\n full description of the model, please check ``QasmQobjExperimentConfigSchema``.\n \"\"\"\n pass\n\n\n@bind_schema(QasmQobjExperimentSchema)\nclass QasmQobjExperiment(QobjExperiment):\n \"\"\"Model for QasmQobjExperiment inherit from QobjExperiment.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QasmQobjExperimentSchema``.\n\n Attributes:\n instructions (list[QasmQobjInstruction]): list of instructions.\n \"\"\"\n def __init__(self, instructions, **kwargs):\n super().__init__(instructions=instructions,\n **kwargs)\n\n\n@bind_schema(QasmQobjConfigSchema)\nclass QasmQobjConfig(QobjConfig):\n \"\"\"Model for QasmQobjConfig inherit from QobjConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QasmQobjConfigSchema``.\n \"\"\"\n pass\n", "path": "qiskit/qobj/models/qasm.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2019, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"The generic qobj models.\"\"\"\n\nfrom marshmallow.validate import Range\n\nfrom qiskit.validation import BaseSchema, bind_schema, BaseModel\nfrom qiskit.validation.fields import String, Nested, Integer\n\n\nclass QobjInstructionSchema(BaseSchema):\n \"\"\"Base Schema for QobjInstruction.\"\"\"\n\n # Required properties\n name = String(required=True)\n\n\nclass QobjExperimentHeaderSchema(BaseSchema):\n \"\"\"Base Schema for QobjExperimentHeader.\"\"\"\n pass\n\n\nclass QobjExperimentConfigSchema(BaseSchema):\n \"\"\"Base Schema for QobjExperimentConfig.\"\"\"\n pass\n\n\nclass QobjExperimentSchema(BaseSchema):\n \"\"\"Base Schema for QobjExperiment.\"\"\"\n\n # Required properties.\n instructions = Nested(QobjInstructionSchema, required=True, many=True)\n\n # Optional properties.\n header = Nested(QobjExperimentHeaderSchema)\n config = Nested(QobjExperimentConfigSchema)\n\n\nclass QobjConfigSchema(BaseSchema):\n \"\"\"Base Schema for QobjConfig.\"\"\"\n\n # Optional properties.\n max_credits = Integer()\n seed = Integer()\n memory_slots = Integer(validate=Range(min=0))\n shots = Integer(validate=Range(min=1))\n\n\nclass QobjHeaderSchema(BaseSchema):\n \"\"\"Base Schema for QobjHeader.\"\"\"\n\n # Optional properties.\n backend_name = String()\n backend_version = String()\n\n\n@bind_schema(QobjInstructionSchema)\nclass QobjInstruction(BaseModel):\n \"\"\"Model for QobjInstruction.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjInstructionSchema``.\n\n Attributes:\n name (str): name of the instruction\n \"\"\"\n def __init__(self, name, **kwargs):\n self.name = name\n\n super().__init__(**kwargs)\n\n\n@bind_schema(QobjExperimentHeaderSchema)\nclass QobjExperimentHeader(BaseModel):\n \"\"\"Model for QobjExperimentHeader.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjExperimentHeaderSchema``.\n \"\"\"\n pass\n\n\n@bind_schema(QobjExperimentConfigSchema)\nclass QobjExperimentConfig(BaseModel):\n \"\"\"Model for QobjExperimentConfig.\n\n Please note that this class only describes the required fields. 
For the\n full description of the model, please check ``QobjExperimentConfigSchema``.\n \"\"\"\n pass\n\n\n@bind_schema(QobjExperimentSchema)\nclass QobjExperiment(BaseModel):\n \"\"\"Model for QobjExperiment.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjExperimentSchema``.\n\n Attributes:\n instructions (list[QobjInstruction]): list of instructions.\n \"\"\"\n def __init__(self, instructions, **kwargs):\n self.instructions = instructions\n\n super().__init__(**kwargs)\n\n\n@bind_schema(QobjConfigSchema)\nclass QobjConfig(BaseModel):\n \"\"\"Model for QobjConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjConfigSchema``.\n \"\"\"\n pass\n\n\n@bind_schema(QobjHeaderSchema)\nclass QobjHeader(BaseModel):\n \"\"\"Model for QobjHeader.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QobjHeaderSchema``.\n \"\"\"\n pass\n", "path": "qiskit/qobj/models/base.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2019, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"The qasm qobj models.\"\"\"\n\nfrom marshmallow.validate import Range, Length, Regexp\n\nfrom qiskit.validation import bind_schema, BaseSchema, BaseModel\nfrom qiskit.validation.fields import List, Integer, InstructionParameter, Nested, String\nfrom .base import (QobjInstructionSchema, QobjExperimentConfigSchema, QobjExperimentSchema,\n QobjConfigSchema, QobjInstruction, QobjExperimentConfig,\n QobjExperiment, QobjConfig)\n\n\nclass QobjConditionalSchema(BaseSchema):\n \"\"\"Schema for QobjConditional.\"\"\"\n\n # Required properties.\n mask = String(required=True, validate=Regexp('^0x([0-9A-Fa-f])+$'))\n type = String(required=True)\n val = String(required=True, validate=Regexp('^0x([0-9A-Fa-f])+$'))\n\n\nclass QasmQobjInstructionSchema(QobjInstructionSchema):\n \"\"\"Schema for QasmQobjInstruction.\"\"\"\n\n # Optional properties.\n qubits = List(Integer(validate=Range(min=0)),\n validate=Length(min=1))\n params = List(InstructionParameter())\n memory = List(Integer(validate=Range(min=0)),\n validate=Length(min=1))\n conditional = Nested(QobjConditionalSchema)\n\n\nclass QasmQobjExperimentConfigSchema(QobjExperimentConfigSchema):\n \"\"\"Schema for QasmQobjExperimentConfig.\"\"\"\n\n # Optional properties.\n memory_slots = Integer(validate=Range(min=0))\n n_qubits = Integer(validate=Range(min=1))\n\n\nclass QasmQobjExperimentSchema(QobjExperimentSchema):\n \"\"\"Schema for QasmQobjExperiment.\"\"\"\n\n # Required properties.\n instructions = Nested(QasmQobjInstructionSchema, required=True, many=True)\n\n # Optional properties.\n config = Nested(QasmQobjExperimentConfigSchema)\n\n\nclass QasmQobjConfigSchema(QobjConfigSchema):\n \"\"\"Schema for QasmQobjConfig.\"\"\"\n\n # Optional properties.\n n_qubits = Integer(validate=Range(min=1))\n\n\n@bind_schema(QobjConditionalSchema)\nclass QobjConditional(BaseModel):\n \"\"\"Model for QobjConditional.\n\n Please note that this class only describes the required fields. 
For the\n full description of the model, please check ``QobjConditionalSchema``.\n\n Attributes:\n mask (str): hexadecimal mask of the conditional\n type (str): type of the conditional\n val (str): hexadecimal value of the conditional\n \"\"\"\n def __init__(self, mask, type, val, **kwargs):\n # pylint: disable=redefined-builtin\n self.mask = mask\n self.type = type\n self.val = val\n\n super().__init__(**kwargs)\n\n\n@bind_schema(QasmQobjInstructionSchema)\nclass QasmQobjInstruction(QobjInstruction):\n \"\"\"Model for QasmQobjInstruction inherit from QobjInstruction.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QasmQobjInstructionSchema``.\n\n Attributes:\n name (str): name of the instruction\n \"\"\"\n def __init__(self, name, **kwargs):\n super().__init__(name=name,\n **kwargs)\n\n\n@bind_schema(QasmQobjExperimentConfigSchema)\nclass QasmQobjExperimentConfig(QobjExperimentConfig):\n \"\"\"Model for QasmQobjExperimentConfig inherit from QobjExperimentConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QasmQobjExperimentConfigSchema``.\n \"\"\"\n pass\n\n\n@bind_schema(QasmQobjExperimentSchema)\nclass QasmQobjExperiment(QobjExperiment):\n \"\"\"Model for QasmQobjExperiment inherit from QobjExperiment.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QasmQobjExperimentSchema``.\n\n Attributes:\n instructions (list[QasmQobjInstruction]): list of instructions.\n \"\"\"\n def __init__(self, instructions, **kwargs):\n super().__init__(instructions=instructions,\n **kwargs)\n\n\n@bind_schema(QasmQobjConfigSchema)\nclass QasmQobjConfig(QobjConfig):\n \"\"\"Model for QasmQobjConfig inherit from QobjConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``QasmQobjConfigSchema``.\n \"\"\"\n pass\n", "path": "qiskit/qobj/models/qasm.py"}]}
| 3,029 | 330 |
gh_patches_debug_14426
|
rasdani/github-patches
|
git_diff
|
boto__boto-3045
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
boto.vpc.routetable.Route does not contain route origin
RouteSet responses will contain an origin for each route, but boto.vpc.routetable.Route is not populated with the route origin:
Example response before:
```
In [8]: vars(all_route_tables[1].routes[1])
Out[8]:
{'destination_cidr_block': u'0.0.0.0/0',
'gateway_id': None,
'instance_id': u'i-123',
'interface_id': u'eni-123',
'state': u'active',
'vpc_peering_connection_id': None}
```
After:
```
In [25]: vars(all_route_tables[1].routes[1])
Out[25]:
{'destination_cidr_block': u'0.0.0.0/0',
'gateway_id': None,
'instance_id': u'i-123',
'interface_id': u'eni-123',
'origin': u'CreateRoute',
'state': u'active',
'vpc_peering_connection_id': None}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `boto/vpc/routetable.py`
Content:
```
1 # Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the
5 # "Software"), to deal in the Software without restriction, including
6 # without limitation the rights to use, copy, modify, merge, publish, dis-
7 # tribute, sublicense, and/or sell copies of the Software, and to permit
8 # persons to whom the Software is furnished to do so, subject to the fol-
9 # lowing conditions:
10 #
11 # The above copyright notice and this permission notice shall be included
12 # in all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
15 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
16 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
17 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
18 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
20 # IN THE SOFTWARE.
21
22 """
23 Represents a Route Table
24 """
25
26 from boto.ec2.ec2object import TaggedEC2Object
27 from boto.resultset import ResultSet
28
29 class RouteTable(TaggedEC2Object):
30
31 def __init__(self, connection=None):
32 super(RouteTable, self).__init__(connection)
33 self.id = None
34 self.vpc_id = None
35 self.routes = []
36 self.associations = []
37
38 def __repr__(self):
39 return 'RouteTable:%s' % self.id
40
41 def startElement(self, name, attrs, connection):
42 result = super(RouteTable, self).startElement(name, attrs, connection)
43
44 if result is not None:
45 # Parent found an interested element, just return it
46 return result
47
48 if name == 'routeSet':
49 self.routes = ResultSet([('item', Route)])
50 return self.routes
51 elif name == 'associationSet':
52 self.associations = ResultSet([('item', RouteAssociation)])
53 return self.associations
54 else:
55 return None
56
57 def endElement(self, name, value, connection):
58 if name == 'routeTableId':
59 self.id = value
60 elif name == 'vpcId':
61 self.vpc_id = value
62 else:
63 setattr(self, name, value)
64
65 class Route(object):
66 def __init__(self, connection=None):
67 self.destination_cidr_block = None
68 self.gateway_id = None
69 self.instance_id = None
70 self.interface_id = None
71 self.vpc_peering_connection_id = None
72 self.state = None
73
74 def __repr__(self):
75 return 'Route:%s' % self.destination_cidr_block
76
77 def startElement(self, name, attrs, connection):
78 return None
79
80 def endElement(self, name, value, connection):
81 if name == 'destinationCidrBlock':
82 self.destination_cidr_block = value
83 elif name == 'gatewayId':
84 self.gateway_id = value
85 elif name == 'instanceId':
86 self.instance_id = value
87 elif name == 'networkInterfaceId':
88 self.interface_id = value
89 elif name == 'vpcPeeringConnectionId':
90 self.vpc_peering_connection_id = value
91 elif name == 'state':
92 self.state = value
93
94 class RouteAssociation(object):
95 def __init__(self, connection=None):
96 self.id = None
97 self.route_table_id = None
98 self.subnet_id = None
99 self.main = False
100
101 def __repr__(self):
102 return 'RouteAssociation:%s' % self.id
103
104 def startElement(self, name, attrs, connection):
105 return None
106
107 def endElement(self, name, value, connection):
108 if name == 'routeTableAssociationId':
109 self.id = value
110 elif name == 'routeTableId':
111 self.route_table_id = value
112 elif name == 'subnetId':
113 self.subnet_id = value
114 elif name == 'main':
115 self.main = value == 'true'
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/boto/vpc/routetable.py b/boto/vpc/routetable.py
--- a/boto/vpc/routetable.py
+++ b/boto/vpc/routetable.py
@@ -70,6 +70,7 @@
self.interface_id = None
self.vpc_peering_connection_id = None
self.state = None
+ self.origin = None
def __repr__(self):
return 'Route:%s' % self.destination_cidr_block
@@ -90,6 +91,8 @@
self.vpc_peering_connection_id = value
elif name == 'state':
self.state = value
+ elif name == 'origin':
+ self.origin = value
class RouteAssociation(object):
def __init__(self, connection=None):
|
{"golden_diff": "diff --git a/boto/vpc/routetable.py b/boto/vpc/routetable.py\n--- a/boto/vpc/routetable.py\n+++ b/boto/vpc/routetable.py\n@@ -70,6 +70,7 @@\n self.interface_id = None\n self.vpc_peering_connection_id = None\n self.state = None\n+ self.origin = None\n \n def __repr__(self):\n return 'Route:%s' % self.destination_cidr_block\n@@ -90,6 +91,8 @@\n self.vpc_peering_connection_id = value\n elif name == 'state':\n self.state = value\n+ elif name == 'origin':\n+ self.origin = value\n \n class RouteAssociation(object):\n def __init__(self, connection=None):\n", "issue": "boto.vpc.routetable.Route does not contain route origin\nRouteSet responses will contain an origin for each route, but boto.vpc.routetable.Route is not populated with the route origin:\n\nExample response before:\n\n```\nIn [8]: vars(all_route_tables[1].routes[1])\nOut[8]:\n{'destination_cidr_block': u'0.0.0.0/0',\n 'gateway_id': None,\n 'instance_id': u'i-123',\n 'interface_id': u'eni-123',\n 'state': u'active',\n 'vpc_peering_connection_id': None}\n```\n\nAfter:\n\n```\nIn [25]: vars(all_route_tables[1].routes[1])\nOut[25]:\n{'destination_cidr_block': u'0.0.0.0/0',\n 'gateway_id': None,\n 'instance_id': u'i-123',\n 'interface_id': u'eni-123',\n 'origin': u'CreateRoute',\n 'state': u'active',\n 'vpc_peering_connection_id': None}\n```\n\n", "before_files": [{"content": "# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\n\"\"\"\nRepresents a Route Table\n\"\"\"\n\nfrom boto.ec2.ec2object import TaggedEC2Object\nfrom boto.resultset import ResultSet\n\nclass RouteTable(TaggedEC2Object):\n\n def __init__(self, connection=None):\n super(RouteTable, self).__init__(connection)\n self.id = None\n self.vpc_id = None\n self.routes = []\n self.associations = []\n\n def __repr__(self):\n return 'RouteTable:%s' % self.id\n\n def startElement(self, name, attrs, connection):\n result = super(RouteTable, self).startElement(name, attrs, connection)\n\n if result is not None:\n # Parent found an interested element, just return it\n return result\n\n if name == 'routeSet':\n self.routes = ResultSet([('item', Route)])\n return self.routes\n elif name == 'associationSet':\n self.associations = ResultSet([('item', RouteAssociation)])\n return self.associations\n else:\n return None\n\n def endElement(self, name, value, connection):\n if name == 'routeTableId':\n self.id = value\n elif name == 'vpcId':\n self.vpc_id = value\n else:\n setattr(self, name, value)\n\nclass Route(object):\n def __init__(self, connection=None):\n self.destination_cidr_block = None\n self.gateway_id = None\n self.instance_id = None\n self.interface_id = None\n self.vpc_peering_connection_id = None\n self.state = None\n\n def __repr__(self):\n return 'Route:%s' % self.destination_cidr_block\n\n def startElement(self, name, attrs, connection):\n return None\n\n def endElement(self, name, value, connection):\n if name == 'destinationCidrBlock':\n self.destination_cidr_block = value\n elif name == 'gatewayId':\n self.gateway_id = value\n elif name == 'instanceId':\n self.instance_id = value\n elif name == 'networkInterfaceId':\n self.interface_id = value\n elif name == 'vpcPeeringConnectionId':\n self.vpc_peering_connection_id = value\n elif name == 'state':\n self.state = value\n\nclass RouteAssociation(object):\n def __init__(self, connection=None):\n self.id = None\n self.route_table_id = None\n self.subnet_id = None\n self.main = False\n\n def __repr__(self):\n return 'RouteAssociation:%s' % self.id\n\n def startElement(self, name, attrs, connection):\n return None\n\n def endElement(self, name, value, connection):\n if name == 'routeTableAssociationId':\n self.id = value\n elif name == 'routeTableId':\n self.route_table_id = value\n elif name == 'subnetId':\n self.subnet_id = value\n elif name == 'main':\n self.main = value == 'true'\n", "path": "boto/vpc/routetable.py"}], "after_files": [{"content": "# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# 
ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\n\"\"\"\nRepresents a Route Table\n\"\"\"\n\nfrom boto.ec2.ec2object import TaggedEC2Object\nfrom boto.resultset import ResultSet\n\nclass RouteTable(TaggedEC2Object):\n\n def __init__(self, connection=None):\n super(RouteTable, self).__init__(connection)\n self.id = None\n self.vpc_id = None\n self.routes = []\n self.associations = []\n\n def __repr__(self):\n return 'RouteTable:%s' % self.id\n\n def startElement(self, name, attrs, connection):\n result = super(RouteTable, self).startElement(name, attrs, connection)\n\n if result is not None:\n # Parent found an interested element, just return it\n return result\n\n if name == 'routeSet':\n self.routes = ResultSet([('item', Route)])\n return self.routes\n elif name == 'associationSet':\n self.associations = ResultSet([('item', RouteAssociation)])\n return self.associations\n else:\n return None\n\n def endElement(self, name, value, connection):\n if name == 'routeTableId':\n self.id = value\n elif name == 'vpcId':\n self.vpc_id = value\n else:\n setattr(self, name, value)\n\nclass Route(object):\n def __init__(self, connection=None):\n self.destination_cidr_block = None\n self.gateway_id = None\n self.instance_id = None\n self.interface_id = None\n self.vpc_peering_connection_id = None\n self.state = None\n self.origin = None\n\n def __repr__(self):\n return 'Route:%s' % self.destination_cidr_block\n\n def startElement(self, name, attrs, connection):\n return None\n\n def endElement(self, name, value, connection):\n if name == 'destinationCidrBlock':\n self.destination_cidr_block = value\n elif name == 'gatewayId':\n self.gateway_id = value\n elif name == 'instanceId':\n self.instance_id = value\n elif name == 'networkInterfaceId':\n self.interface_id = value\n elif name == 'vpcPeeringConnectionId':\n self.vpc_peering_connection_id = value\n elif name == 'state':\n self.state = value\n elif name == 'origin':\n self.origin = value\n\nclass RouteAssociation(object):\n def __init__(self, connection=None):\n self.id = None\n self.route_table_id = None\n self.subnet_id = None\n self.main = False\n\n def __repr__(self):\n return 'RouteAssociation:%s' % self.id\n\n def startElement(self, name, attrs, connection):\n return None\n\n def endElement(self, name, value, connection):\n if name == 'routeTableAssociationId':\n self.id = value\n elif name == 'routeTableId':\n self.route_table_id = value\n elif name == 'subnetId':\n self.subnet_id = value\n elif name == 'main':\n self.main = value == 'true'\n", "path": "boto/vpc/routetable.py"}]}
| 1,637 | 174 |
gh_patches_debug_24565
|
rasdani/github-patches
|
git_diff
|
borgbackup__borg-4393
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
update bundled zstd code
we have 1.3.4 bundled; for the current release, see: https://github.com/facebook/zstd/releases
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup_zstd.py`
Content:
```
1 # Support code for building a C extension with zstd files
2 #
3 # Copyright (c) 2016-present, Gregory Szorc
4 # 2017-present, Thomas Waldmann (mods to make it more generic)
5 # All rights reserved.
6 #
7 # This software may be modified and distributed under the terms
8 # of the BSD license. See the LICENSE file for details.
9
10 import os
11
12 # zstd files, structure as seen in zstd project repository:
13
14 zstd_sources = [
15 'lib/common/entropy_common.c',
16 'lib/common/error_private.c',
17 'lib/common/fse_decompress.c',
18 'lib/common/pool.c',
19 'lib/common/threading.c',
20 'lib/common/xxhash.c',
21 'lib/common/zstd_common.c',
22 'lib/compress/fse_compress.c',
23 'lib/compress/huf_compress.c',
24 'lib/compress/zstd_compress.c',
25 'lib/compress/zstd_double_fast.c',
26 'lib/compress/zstd_fast.c',
27 'lib/compress/zstd_lazy.c',
28 'lib/compress/zstd_ldm.c',
29 'lib/compress/zstd_opt.c',
30 'lib/compress/zstdmt_compress.c',
31 'lib/decompress/huf_decompress.c',
32 'lib/decompress/zstd_decompress.c',
33 'lib/dictBuilder/cover.c',
34 'lib/dictBuilder/divsufsort.c',
35 'lib/dictBuilder/zdict.c',
36 ]
37
38 zstd_sources_legacy = [
39 'lib/deprecated/zbuff_common.c',
40 'lib/deprecated/zbuff_compress.c',
41 'lib/deprecated/zbuff_decompress.c',
42 'lib/legacy/zstd_v01.c',
43 'lib/legacy/zstd_v02.c',
44 'lib/legacy/zstd_v03.c',
45 'lib/legacy/zstd_v04.c',
46 'lib/legacy/zstd_v05.c',
47 'lib/legacy/zstd_v06.c',
48 'lib/legacy/zstd_v07.c',
49 ]
50
51 zstd_includes = [
52 'lib',
53 'lib/common',
54 'lib/compress',
55 'lib/decompress',
56 'lib/dictBuilder',
57 ]
58
59 zstd_includes_legacy = [
60 'lib/deprecated',
61 'lib/legacy',
62 ]
63
64
65 def zstd_system_prefix(prefixes):
66 for prefix in prefixes:
67 filename = os.path.join(prefix, 'include', 'zstd.h')
68 if os.path.exists(filename):
69 with open(filename, 'rb') as fd:
70 if b'ZSTD_getFrameContentSize' in fd.read(): # checks for zstd >= 1.3.0
71 return prefix
72
73
74 def zstd_ext_kwargs(bundled_path, system_prefix=None, system=False, multithreaded=False, legacy=False, **kwargs):
75 """amend kwargs with zstd suff for a distutils.extension.Extension initialization.
76
77 bundled_path: relative (to this file) path to the bundled library source code files
78 system_prefix: where the system-installed library can be found
79 system: True: use the system-installed shared library, False: use the bundled library code
80 multithreaded: True: define ZSTD_MULTITHREAD
81 legacy: include legacy API support
82 kwargs: distutils.extension.Extension kwargs that should be amended
83 returns: amended kwargs
84 """
85 def multi_join(paths, *path_segments):
86 """apply os.path.join on a list of paths"""
87 return [os.path.join(*(path_segments + (path, ))) for path in paths]
88
89 use_system = system and system_prefix is not None
90
91 sources = kwargs.get('sources', [])
92 if not use_system:
93 sources += multi_join(zstd_sources, bundled_path)
94 if legacy:
95 sources += multi_join(zstd_sources_legacy, bundled_path)
96
97 include_dirs = kwargs.get('include_dirs', [])
98 if use_system:
99 include_dirs += multi_join(['include'], system_prefix)
100 else:
101 include_dirs += multi_join(zstd_includes, bundled_path)
102 if legacy:
103 include_dirs += multi_join(zstd_includes_legacy, bundled_path)
104
105 library_dirs = kwargs.get('library_dirs', [])
106 if use_system:
107 library_dirs += multi_join(['lib'], system_prefix)
108
109 libraries = kwargs.get('libraries', [])
110 if use_system:
111 libraries += ['zstd', ]
112
113 extra_compile_args = kwargs.get('extra_compile_args', [])
114 if multithreaded:
115 extra_compile_args += ['-DZSTD_MULTITHREAD', ]
116 if not use_system:
117 extra_compile_args += ['-DZSTDLIB_VISIBILITY=', '-DZDICTLIB_VISIBILITY=', '-DZSTDERRORLIB_VISIBILITY=', ]
118 # '-fvisibility=hidden' does not work, doesn't find PyInit_compress then
119 if legacy:
120 extra_compile_args += ['-DZSTD_LEGACY_SUPPORT=1', ]
121
122 ret = dict(**kwargs)
123 ret.update(dict(sources=sources, extra_compile_args=extra_compile_args,
124 include_dirs=include_dirs, library_dirs=library_dirs, libraries=libraries))
125 return ret
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup_zstd.py b/setup_zstd.py
--- a/setup_zstd.py
+++ b/setup_zstd.py
@@ -12,6 +12,7 @@
# zstd files, structure as seen in zstd project repository:
zstd_sources = [
+ 'lib/common/debug.c',
'lib/common/entropy_common.c',
'lib/common/error_private.c',
'lib/common/fse_decompress.c',
@@ -20,6 +21,7 @@
'lib/common/xxhash.c',
'lib/common/zstd_common.c',
'lib/compress/fse_compress.c',
+ 'lib/compress/hist.c',
'lib/compress/huf_compress.c',
'lib/compress/zstd_compress.c',
'lib/compress/zstd_double_fast.c',
@@ -29,9 +31,12 @@
'lib/compress/zstd_opt.c',
'lib/compress/zstdmt_compress.c',
'lib/decompress/huf_decompress.c',
+ 'lib/decompress/zstd_ddict.c',
'lib/decompress/zstd_decompress.c',
+ 'lib/decompress/zstd_decompress_block.c',
'lib/dictBuilder/cover.c',
'lib/dictBuilder/divsufsort.c',
+ 'lib/dictBuilder/fastcover.c',
'lib/dictBuilder/zdict.c',
]
|
{"golden_diff": "diff --git a/setup_zstd.py b/setup_zstd.py\n--- a/setup_zstd.py\n+++ b/setup_zstd.py\n@@ -12,6 +12,7 @@\n # zstd files, structure as seen in zstd project repository:\n \n zstd_sources = [\n+ 'lib/common/debug.c',\n 'lib/common/entropy_common.c',\n 'lib/common/error_private.c',\n 'lib/common/fse_decompress.c',\n@@ -20,6 +21,7 @@\n 'lib/common/xxhash.c',\n 'lib/common/zstd_common.c',\n 'lib/compress/fse_compress.c',\n+ 'lib/compress/hist.c',\n 'lib/compress/huf_compress.c',\n 'lib/compress/zstd_compress.c',\n 'lib/compress/zstd_double_fast.c',\n@@ -29,9 +31,12 @@\n 'lib/compress/zstd_opt.c',\n 'lib/compress/zstdmt_compress.c',\n 'lib/decompress/huf_decompress.c',\n+ 'lib/decompress/zstd_ddict.c',\n 'lib/decompress/zstd_decompress.c',\n+ 'lib/decompress/zstd_decompress_block.c',\n 'lib/dictBuilder/cover.c',\n 'lib/dictBuilder/divsufsort.c',\n+ 'lib/dictBuilder/fastcover.c',\n 'lib/dictBuilder/zdict.c',\n ]\n", "issue": "update bundled zstd code\nwe have 1.3.4 bundled, current see there: https://github.com/facebook/zstd/releases\n", "before_files": [{"content": "# Support code for building a C extension with zstd files\n#\n# Copyright (c) 2016-present, Gregory Szorc\n# 2017-present, Thomas Waldmann (mods to make it more generic)\n# All rights reserved.\n#\n# This software may be modified and distributed under the terms\n# of the BSD license. See the LICENSE file for details.\n\nimport os\n\n# zstd files, structure as seen in zstd project repository:\n\nzstd_sources = [\n 'lib/common/entropy_common.c',\n 'lib/common/error_private.c',\n 'lib/common/fse_decompress.c',\n 'lib/common/pool.c',\n 'lib/common/threading.c',\n 'lib/common/xxhash.c',\n 'lib/common/zstd_common.c',\n 'lib/compress/fse_compress.c',\n 'lib/compress/huf_compress.c',\n 'lib/compress/zstd_compress.c',\n 'lib/compress/zstd_double_fast.c',\n 'lib/compress/zstd_fast.c',\n 'lib/compress/zstd_lazy.c',\n 'lib/compress/zstd_ldm.c',\n 'lib/compress/zstd_opt.c',\n 'lib/compress/zstdmt_compress.c',\n 'lib/decompress/huf_decompress.c',\n 'lib/decompress/zstd_decompress.c',\n 'lib/dictBuilder/cover.c',\n 'lib/dictBuilder/divsufsort.c',\n 'lib/dictBuilder/zdict.c',\n]\n\nzstd_sources_legacy = [\n 'lib/deprecated/zbuff_common.c',\n 'lib/deprecated/zbuff_compress.c',\n 'lib/deprecated/zbuff_decompress.c',\n 'lib/legacy/zstd_v01.c',\n 'lib/legacy/zstd_v02.c',\n 'lib/legacy/zstd_v03.c',\n 'lib/legacy/zstd_v04.c',\n 'lib/legacy/zstd_v05.c',\n 'lib/legacy/zstd_v06.c',\n 'lib/legacy/zstd_v07.c',\n]\n\nzstd_includes = [\n 'lib',\n 'lib/common',\n 'lib/compress',\n 'lib/decompress',\n 'lib/dictBuilder',\n]\n\nzstd_includes_legacy = [\n 'lib/deprecated',\n 'lib/legacy',\n]\n\n\ndef zstd_system_prefix(prefixes):\n for prefix in prefixes:\n filename = os.path.join(prefix, 'include', 'zstd.h')\n if os.path.exists(filename):\n with open(filename, 'rb') as fd:\n if b'ZSTD_getFrameContentSize' in fd.read(): # checks for zstd >= 1.3.0\n return prefix\n\n\ndef zstd_ext_kwargs(bundled_path, system_prefix=None, system=False, multithreaded=False, legacy=False, **kwargs):\n \"\"\"amend kwargs with zstd suff for a distutils.extension.Extension initialization.\n\n bundled_path: relative (to this file) path to the bundled library source code files\n system_prefix: where the system-installed library can be found\n system: True: use the system-installed shared library, False: use the bundled library code\n multithreaded: True: define ZSTD_MULTITHREAD\n legacy: include legacy API support\n kwargs: distutils.extension.Extension kwargs that should be 
amended\n returns: amended kwargs\n \"\"\"\n def multi_join(paths, *path_segments):\n \"\"\"apply os.path.join on a list of paths\"\"\"\n return [os.path.join(*(path_segments + (path, ))) for path in paths]\n\n use_system = system and system_prefix is not None\n\n sources = kwargs.get('sources', [])\n if not use_system:\n sources += multi_join(zstd_sources, bundled_path)\n if legacy:\n sources += multi_join(zstd_sources_legacy, bundled_path)\n\n include_dirs = kwargs.get('include_dirs', [])\n if use_system:\n include_dirs += multi_join(['include'], system_prefix)\n else:\n include_dirs += multi_join(zstd_includes, bundled_path)\n if legacy:\n include_dirs += multi_join(zstd_includes_legacy, bundled_path)\n\n library_dirs = kwargs.get('library_dirs', [])\n if use_system:\n library_dirs += multi_join(['lib'], system_prefix)\n\n libraries = kwargs.get('libraries', [])\n if use_system:\n libraries += ['zstd', ]\n\n extra_compile_args = kwargs.get('extra_compile_args', [])\n if multithreaded:\n extra_compile_args += ['-DZSTD_MULTITHREAD', ]\n if not use_system:\n extra_compile_args += ['-DZSTDLIB_VISIBILITY=', '-DZDICTLIB_VISIBILITY=', '-DZSTDERRORLIB_VISIBILITY=', ]\n # '-fvisibility=hidden' does not work, doesn't find PyInit_compress then\n if legacy:\n extra_compile_args += ['-DZSTD_LEGACY_SUPPORT=1', ]\n\n ret = dict(**kwargs)\n ret.update(dict(sources=sources, extra_compile_args=extra_compile_args,\n include_dirs=include_dirs, library_dirs=library_dirs, libraries=libraries))\n return ret\n", "path": "setup_zstd.py"}], "after_files": [{"content": "# Support code for building a C extension with zstd files\n#\n# Copyright (c) 2016-present, Gregory Szorc\n# 2017-present, Thomas Waldmann (mods to make it more generic)\n# All rights reserved.\n#\n# This software may be modified and distributed under the terms\n# of the BSD license. 
See the LICENSE file for details.\n\nimport os\n\n# zstd files, structure as seen in zstd project repository:\n\nzstd_sources = [\n 'lib/common/debug.c',\n 'lib/common/entropy_common.c',\n 'lib/common/error_private.c',\n 'lib/common/fse_decompress.c',\n 'lib/common/pool.c',\n 'lib/common/threading.c',\n 'lib/common/xxhash.c',\n 'lib/common/zstd_common.c',\n 'lib/compress/fse_compress.c',\n 'lib/compress/hist.c',\n 'lib/compress/huf_compress.c',\n 'lib/compress/zstd_compress.c',\n 'lib/compress/zstd_double_fast.c',\n 'lib/compress/zstd_fast.c',\n 'lib/compress/zstd_lazy.c',\n 'lib/compress/zstd_ldm.c',\n 'lib/compress/zstd_opt.c',\n 'lib/compress/zstdmt_compress.c',\n 'lib/decompress/huf_decompress.c',\n 'lib/decompress/zstd_ddict.c',\n 'lib/decompress/zstd_decompress.c',\n 'lib/decompress/zstd_decompress_block.c',\n 'lib/dictBuilder/cover.c',\n 'lib/dictBuilder/divsufsort.c',\n 'lib/dictBuilder/fastcover.c',\n 'lib/dictBuilder/zdict.c',\n]\n\nzstd_sources_legacy = [\n 'lib/deprecated/zbuff_common.c',\n 'lib/deprecated/zbuff_compress.c',\n 'lib/deprecated/zbuff_decompress.c',\n 'lib/legacy/zstd_v01.c',\n 'lib/legacy/zstd_v02.c',\n 'lib/legacy/zstd_v03.c',\n 'lib/legacy/zstd_v04.c',\n 'lib/legacy/zstd_v05.c',\n 'lib/legacy/zstd_v06.c',\n 'lib/legacy/zstd_v07.c',\n]\n\nzstd_includes = [\n 'lib',\n 'lib/common',\n 'lib/compress',\n 'lib/decompress',\n 'lib/dictBuilder',\n]\n\nzstd_includes_legacy = [\n 'lib/deprecated',\n 'lib/legacy',\n]\n\n\ndef zstd_system_prefix(prefixes):\n for prefix in prefixes:\n filename = os.path.join(prefix, 'include', 'zstd.h')\n if os.path.exists(filename):\n with open(filename, 'rb') as fd:\n if b'ZSTD_getFrameContentSize' in fd.read(): # checks for zstd >= 1.3.0\n return prefix\n\n\ndef zstd_ext_kwargs(bundled_path, system_prefix=None, system=False, multithreaded=False, legacy=False, **kwargs):\n \"\"\"amend kwargs with zstd suff for a distutils.extension.Extension initialization.\n\n bundled_path: relative (to this file) path to the bundled library source code files\n system_prefix: where the system-installed library can be found\n system: True: use the system-installed shared library, False: use the bundled library code\n multithreaded: True: define ZSTD_MULTITHREAD\n legacy: include legacy API support\n kwargs: distutils.extension.Extension kwargs that should be amended\n returns: amended kwargs\n \"\"\"\n def multi_join(paths, *path_segments):\n \"\"\"apply os.path.join on a list of paths\"\"\"\n return [os.path.join(*(path_segments + (path, ))) for path in paths]\n\n use_system = system and system_prefix is not None\n\n sources = kwargs.get('sources', [])\n if not use_system:\n sources += multi_join(zstd_sources, bundled_path)\n if legacy:\n sources += multi_join(zstd_sources_legacy, bundled_path)\n\n include_dirs = kwargs.get('include_dirs', [])\n if use_system:\n include_dirs += multi_join(['include'], system_prefix)\n else:\n include_dirs += multi_join(zstd_includes, bundled_path)\n if legacy:\n include_dirs += multi_join(zstd_includes_legacy, bundled_path)\n\n library_dirs = kwargs.get('library_dirs', [])\n if use_system:\n library_dirs += multi_join(['lib'], system_prefix)\n\n libraries = kwargs.get('libraries', [])\n if use_system:\n libraries += ['zstd', ]\n\n extra_compile_args = kwargs.get('extra_compile_args', [])\n if multithreaded:\n extra_compile_args += ['-DZSTD_MULTITHREAD', ]\n if not use_system:\n extra_compile_args += ['-DZSTDLIB_VISIBILITY=', '-DZDICTLIB_VISIBILITY=', '-DZSTDERRORLIB_VISIBILITY=', ]\n # '-fvisibility=hidden' does not 
work, doesn't find PyInit_compress then\n if legacy:\n extra_compile_args += ['-DZSTD_LEGACY_SUPPORT=1', ]\n\n ret = dict(**kwargs)\n ret.update(dict(sources=sources, extra_compile_args=extra_compile_args,\n include_dirs=include_dirs, library_dirs=library_dirs, libraries=libraries))\n return ret\n", "path": "setup_zstd.py"}]}
| 1,660 | 301 |
gh_patches_debug_38372
|
rasdani/github-patches
|
git_diff
|
nerfstudio-project__nerfstudio-667
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Minor arg improvements
Minor TODOs, I can do these in the next day or two:
- All of the fixed args can be suppressed via `dcargs.conf.SuppressFixed[]`; this will remove all of the unnecessary `_type` args from the usage + helptext
- Formatting in the base config descriptions, https://github.com/nerfstudio-project/nerfstudio/blob/5a76b955cdd833fd59b90edff33875fa05894847/nerfstudio/configs/model_configs.py#L46-L56, currently exploits a bug in `dcargs`, which will be patched in the next release. The tags will need to be manually converted to ANSI sequences in nerfstudio
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nerfstudio/configs/method_configs.py`
Content:
```
1 # Copyright 2022 The Nerfstudio Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Put all the method implementations in one location.
17 """
18
19 from __future__ import annotations
20
21 from typing import Dict
22
23 import dcargs
24
25 from nerfstudio.configs.base_config import Config, TrainerConfig, ViewerConfig
26 from nerfstudio.data.datamanagers import VanillaDataManagerConfig
27 from nerfstudio.data.dataparsers.blender_dataparser import BlenderDataParserConfig
28 from nerfstudio.data.dataparsers.friends_dataparser import FriendsDataParserConfig
29 from nerfstudio.data.dataparsers.nerfstudio_dataparser import NerfstudioDataParserConfig
30 from nerfstudio.engine.optimizers import AdamOptimizerConfig, RAdamOptimizerConfig
31 from nerfstudio.models.base_model import VanillaModelConfig
32 from nerfstudio.models.instant_ngp import InstantNGPModelConfig
33 from nerfstudio.models.mipnerf import MipNerfModel
34 from nerfstudio.models.nerfacto import NerfactoModelConfig
35 from nerfstudio.models.semantic_nerfw import SemanticNerfWModelConfig
36 from nerfstudio.models.vanilla_nerf import NeRFModel
37 from nerfstudio.pipelines.base_pipeline import VanillaPipelineConfig
38 from nerfstudio.pipelines.dynamic_batch import DynamicBatchPipelineConfig
39
40 method_configs: Dict[str, Config] = {}
41 descriptions = {
42 "nerfacto": "[bold green]Recommended[/bold green] Real-time model tuned for real captures. "
43 + "This model will be continually updated.",
44 "instant-ngp": "Implementation of Instant-NGP. Recommended real-time model for bounded synthetic data.",
45 "mipnerf": "High quality model for bounded scenes. [red]*slow*",
46 "semantic-nerfw": "Predicts semantic segmentations and filters out transient objects.",
47 "vanilla-nerf": "Original NeRF model. [red]*slow*",
48 }
49
50 method_configs["nerfacto"] = Config(
51 method_name="nerfacto",
52 trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),
53 pipeline=VanillaPipelineConfig(
54 datamanager=VanillaDataManagerConfig(
55 dataparser=NerfstudioDataParserConfig(), train_num_rays_per_batch=4096, eval_num_rays_per_batch=8192
56 ),
57 model=NerfactoModelConfig(eval_num_rays_per_chunk=1 << 14),
58 ),
59 optimizers={
60 "proposal_networks": {
61 "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
62 "scheduler": None,
63 },
64 "fields": {
65 "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
66 "scheduler": None,
67 },
68 },
69 viewer=ViewerConfig(num_rays_per_chunk=1 << 14),
70 vis="viewer",
71 )
72
73 method_configs["instant-ngp"] = Config(
74 method_name="instant-ngp",
75 trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),
76 pipeline=DynamicBatchPipelineConfig(
77 datamanager=VanillaDataManagerConfig(dataparser=NerfstudioDataParserConfig(), train_num_rays_per_batch=8192),
78 model=InstantNGPModelConfig(eval_num_rays_per_chunk=8192),
79 ),
80 optimizers={
81 "fields": {
82 "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
83 "scheduler": None,
84 }
85 },
86 viewer=ViewerConfig(num_rays_per_chunk=64000),
87 vis="viewer",
88 )
89
90 method_configs["mipnerf"] = Config(
91 method_name="mipnerf",
92 pipeline=VanillaPipelineConfig(
93 datamanager=VanillaDataManagerConfig(dataparser=BlenderDataParserConfig(), train_num_rays_per_batch=8192),
94 model=VanillaModelConfig(
95 _target=MipNerfModel,
96 loss_coefficients={"rgb_loss_coarse": 0.1, "rgb_loss_fine": 1.0},
97 num_coarse_samples=128,
98 num_importance_samples=128,
99 eval_num_rays_per_chunk=8192,
100 ),
101 ),
102 optimizers={
103 "fields": {
104 "optimizer": RAdamOptimizerConfig(lr=5e-4, eps=1e-08),
105 "scheduler": None,
106 }
107 },
108 )
109
110 method_configs["semantic-nerfw"] = Config(
111 method_name="semantic-nerfw",
112 trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),
113 pipeline=VanillaPipelineConfig(
114 datamanager=VanillaDataManagerConfig(
115 dataparser=FriendsDataParserConfig(), train_num_rays_per_batch=4096, eval_num_rays_per_batch=8192
116 ),
117 model=SemanticNerfWModelConfig(eval_num_rays_per_chunk=1 << 16),
118 ),
119 optimizers={
120 "proposal_networks": {
121 "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
122 "scheduler": None,
123 },
124 "fields": {
125 "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
126 "scheduler": None,
127 },
128 },
129 viewer=ViewerConfig(num_rays_per_chunk=1 << 16),
130 vis="viewer",
131 )
132
133 method_configs["vanilla-nerf"] = Config(
134 method_name="vanilla-nerf",
135 pipeline=VanillaPipelineConfig(
136 datamanager=VanillaDataManagerConfig(
137 dataparser=BlenderDataParserConfig(),
138 ),
139 model=VanillaModelConfig(_target=NeRFModel),
140 ),
141 optimizers={
142 "fields": {
143 "optimizer": RAdamOptimizerConfig(lr=5e-4, eps=1e-08),
144 "scheduler": None,
145 }
146 },
147 )
148
149
150 AnnotatedBaseConfigUnion = dcargs.extras.subcommand_type_from_defaults(
151 defaults=method_configs, descriptions=descriptions
152 )
153 """Union[] type over config types, annotated with default instances for use with
154 dcargs.cli(). Allows the user to pick between one of several base configurations, and
155 then override values in it."""
156
```
Path: `nerfstudio/configs/config_utils.py`
Content:
```
1 # Copyright 2022 The Nerfstudio Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Some utility code for configs.
17 """
18
19 from __future__ import annotations
20
21 from dataclasses import field
22 from typing import Any, Dict
23
24 # pylint: disable=import-outside-toplevel
25
26 # cannot use mutable types directly within dataclass; abstracting default factory calls
27 def to_immutable_dict(d: Dict[str, Any]):
28 """Method to convert mutable dict to default factory dict
29
30 Args:
31 d: dictionary to convert into default factory dict for dataclass
32 """
33 return field(default_factory=lambda: dict(d))
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nerfstudio/configs/config_utils.py b/nerfstudio/configs/config_utils.py
--- a/nerfstudio/configs/config_utils.py
+++ b/nerfstudio/configs/config_utils.py
@@ -21,6 +21,8 @@
from dataclasses import field
from typing import Any, Dict
+from rich.console import Console
+
# pylint: disable=import-outside-toplevel
# cannot use mutable types directly within dataclass; abstracting default factory calls
@@ -31,3 +33,18 @@
d: dictionary to convert into default factory dict for dataclass
"""
return field(default_factory=lambda: dict(d))
+
+
+def convert_markup_to_ansi(markup_string: str) -> str:
+ """Convert rich-style markup to ANSI sequences for command-line formatting.
+
+ Args:
+ markup_string: Text with rich-style markup.
+
+ Returns:
+ Text formatted via ANSI sequences.
+ """
+ console = Console()
+ with console.capture() as out:
+ console.print(markup_string, soft_wrap=True)
+ return out.get()
diff --git a/nerfstudio/configs/method_configs.py b/nerfstudio/configs/method_configs.py
--- a/nerfstudio/configs/method_configs.py
+++ b/nerfstudio/configs/method_configs.py
@@ -21,8 +21,10 @@
from typing import Dict
import dcargs
+from rich.console import Console
from nerfstudio.configs.base_config import Config, TrainerConfig, ViewerConfig
+from nerfstudio.configs.config_utils import convert_markup_to_ansi
from nerfstudio.data.datamanagers import VanillaDataManagerConfig
from nerfstudio.data.dataparsers.blender_dataparser import BlenderDataParserConfig
from nerfstudio.data.dataparsers.friends_dataparser import FriendsDataParserConfig
@@ -46,6 +48,8 @@
"semantic-nerfw": "Predicts semantic segmentations and filters out transient objects.",
"vanilla-nerf": "Original NeRF model. [red]*slow*",
}
+descriptions = {k: convert_markup_to_ansi(v) for k, v in descriptions.items()}
+
method_configs["nerfacto"] = Config(
method_name="nerfacto",
@@ -147,9 +151,9 @@
)
-AnnotatedBaseConfigUnion = dcargs.extras.subcommand_type_from_defaults(
- defaults=method_configs, descriptions=descriptions
-)
+AnnotatedBaseConfigUnion = dcargs.conf.SuppressFixed[ # Don't show unparseable (fixed) arguments in helptext.
+ dcargs.extras.subcommand_type_from_defaults(defaults=method_configs, descriptions=descriptions)
+]
"""Union[] type over config types, annotated with default instances for use with
dcargs.cli(). Allows the user to pick between one of several base configurations, and
then override values in it."""
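The `convert_markup_to_ansi` helper added above relies on `rich`'s capture API; a standalone sketch of that pattern (with a made-up example string) looks like this:

```python
from rich.console import Console


def markup_to_ansi(markup: str) -> str:
    """Render rich-style markup such as '[bold green]...' via rich's capture API."""
    console = Console()
    with console.capture() as capture:
        # soft_wrap=True keeps long one-line descriptions from being hard-wrapped.
        console.print(markup, soft_wrap=True)
    return capture.get()


if __name__ == "__main__":
    ansi = markup_to_ansi("[bold green]Recommended[/bold green] real-time model")
    # When stdout is a color-capable terminal, the tags become \x1b[...m escape sequences.
    print(repr(ansi))
```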
|
{"golden_diff": "diff --git a/nerfstudio/configs/config_utils.py b/nerfstudio/configs/config_utils.py\n--- a/nerfstudio/configs/config_utils.py\n+++ b/nerfstudio/configs/config_utils.py\n@@ -21,6 +21,8 @@\n from dataclasses import field\n from typing import Any, Dict\n \n+from rich.console import Console\n+\n # pylint: disable=import-outside-toplevel\n \n # cannot use mutable types directly within dataclass; abstracting default factory calls\n@@ -31,3 +33,18 @@\n d: dictionary to convert into default factory dict for dataclass\n \"\"\"\n return field(default_factory=lambda: dict(d))\n+\n+\n+def convert_markup_to_ansi(markup_string: str) -> str:\n+ \"\"\"Convert rich-style markup to ANSI sequences for command-line formatting.\n+\n+ Args:\n+ markup_string: Text with rich-style markup.\n+\n+ Returns:\n+ Text formatted via ANSI sequences.\n+ \"\"\"\n+ console = Console()\n+ with console.capture() as out:\n+ console.print(markup_string, soft_wrap=True)\n+ return out.get()\ndiff --git a/nerfstudio/configs/method_configs.py b/nerfstudio/configs/method_configs.py\n--- a/nerfstudio/configs/method_configs.py\n+++ b/nerfstudio/configs/method_configs.py\n@@ -21,8 +21,10 @@\n from typing import Dict\n \n import dcargs\n+from rich.console import Console\n \n from nerfstudio.configs.base_config import Config, TrainerConfig, ViewerConfig\n+from nerfstudio.configs.config_utils import convert_markup_to_ansi\n from nerfstudio.data.datamanagers import VanillaDataManagerConfig\n from nerfstudio.data.dataparsers.blender_dataparser import BlenderDataParserConfig\n from nerfstudio.data.dataparsers.friends_dataparser import FriendsDataParserConfig\n@@ -46,6 +48,8 @@\n \"semantic-nerfw\": \"Predicts semantic segmentations and filters out transient objects.\",\n \"vanilla-nerf\": \"Original NeRF model. [red]*slow*\",\n }\n+descriptions = {k: convert_markup_to_ansi(v) for k, v in descriptions.items()}\n+\n \n method_configs[\"nerfacto\"] = Config(\n method_name=\"nerfacto\",\n@@ -147,9 +151,9 @@\n )\n \n \n-AnnotatedBaseConfigUnion = dcargs.extras.subcommand_type_from_defaults(\n- defaults=method_configs, descriptions=descriptions\n-)\n+AnnotatedBaseConfigUnion = dcargs.conf.SuppressFixed[ # Don't show unparseable (fixed) arguments in helptext.\n+ dcargs.extras.subcommand_type_from_defaults(defaults=method_configs, descriptions=descriptions)\n+]\n \"\"\"Union[] type over config types, annotated with default instances for use with\n dcargs.cli(). Allows the user to pick between one of several base configurations, and\n then override values in it.\"\"\"\n", "issue": "Minor arg improvements\nMinor TODOs, I can do these in the next day or two:\r\n- All of the fixed args can be suppressed via `dcargs.conf.SuppressFixed[]`, this will remove all of the unnecessary `_type` args from the usage + helptext\r\n- Formatting in the base config descriptions, https://github.com/nerfstudio-project/nerfstudio/blob/5a76b955cdd833fd59b90edff33875fa05894847/nerfstudio/configs/model_configs.py#L46-L56, currently exploits a bug in `dcargs`, which will be patched in the next release. The tags will need to be manually converted to ANSI sequences in nerfstudio\n", "before_files": [{"content": "# Copyright 2022 The Nerfstudio Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nPut all the method implementations in one location.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import Dict\n\nimport dcargs\n\nfrom nerfstudio.configs.base_config import Config, TrainerConfig, ViewerConfig\nfrom nerfstudio.data.datamanagers import VanillaDataManagerConfig\nfrom nerfstudio.data.dataparsers.blender_dataparser import BlenderDataParserConfig\nfrom nerfstudio.data.dataparsers.friends_dataparser import FriendsDataParserConfig\nfrom nerfstudio.data.dataparsers.nerfstudio_dataparser import NerfstudioDataParserConfig\nfrom nerfstudio.engine.optimizers import AdamOptimizerConfig, RAdamOptimizerConfig\nfrom nerfstudio.models.base_model import VanillaModelConfig\nfrom nerfstudio.models.instant_ngp import InstantNGPModelConfig\nfrom nerfstudio.models.mipnerf import MipNerfModel\nfrom nerfstudio.models.nerfacto import NerfactoModelConfig\nfrom nerfstudio.models.semantic_nerfw import SemanticNerfWModelConfig\nfrom nerfstudio.models.vanilla_nerf import NeRFModel\nfrom nerfstudio.pipelines.base_pipeline import VanillaPipelineConfig\nfrom nerfstudio.pipelines.dynamic_batch import DynamicBatchPipelineConfig\n\nmethod_configs: Dict[str, Config] = {}\ndescriptions = {\n \"nerfacto\": \"[bold green]Recommended[/bold green] Real-time model tuned for real captures. \"\n + \"This model will be continually updated.\",\n \"instant-ngp\": \"Implementation of Instant-NGP. Recommended real-time model for bounded synthetic data.\",\n \"mipnerf\": \"High quality model for bounded scenes. [red]*slow*\",\n \"semantic-nerfw\": \"Predicts semantic segmentations and filters out transient objects.\",\n \"vanilla-nerf\": \"Original NeRF model. 
[red]*slow*\",\n}\n\nmethod_configs[\"nerfacto\"] = Config(\n method_name=\"nerfacto\",\n trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),\n pipeline=VanillaPipelineConfig(\n datamanager=VanillaDataManagerConfig(\n dataparser=NerfstudioDataParserConfig(), train_num_rays_per_batch=4096, eval_num_rays_per_batch=8192\n ),\n model=NerfactoModelConfig(eval_num_rays_per_chunk=1 << 14),\n ),\n optimizers={\n \"proposal_networks\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n },\n \"fields\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n },\n },\n viewer=ViewerConfig(num_rays_per_chunk=1 << 14),\n vis=\"viewer\",\n)\n\nmethod_configs[\"instant-ngp\"] = Config(\n method_name=\"instant-ngp\",\n trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),\n pipeline=DynamicBatchPipelineConfig(\n datamanager=VanillaDataManagerConfig(dataparser=NerfstudioDataParserConfig(), train_num_rays_per_batch=8192),\n model=InstantNGPModelConfig(eval_num_rays_per_chunk=8192),\n ),\n optimizers={\n \"fields\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n }\n },\n viewer=ViewerConfig(num_rays_per_chunk=64000),\n vis=\"viewer\",\n)\n\nmethod_configs[\"mipnerf\"] = Config(\n method_name=\"mipnerf\",\n pipeline=VanillaPipelineConfig(\n datamanager=VanillaDataManagerConfig(dataparser=BlenderDataParserConfig(), train_num_rays_per_batch=8192),\n model=VanillaModelConfig(\n _target=MipNerfModel,\n loss_coefficients={\"rgb_loss_coarse\": 0.1, \"rgb_loss_fine\": 1.0},\n num_coarse_samples=128,\n num_importance_samples=128,\n eval_num_rays_per_chunk=8192,\n ),\n ),\n optimizers={\n \"fields\": {\n \"optimizer\": RAdamOptimizerConfig(lr=5e-4, eps=1e-08),\n \"scheduler\": None,\n }\n },\n)\n\nmethod_configs[\"semantic-nerfw\"] = Config(\n method_name=\"semantic-nerfw\",\n trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),\n pipeline=VanillaPipelineConfig(\n datamanager=VanillaDataManagerConfig(\n dataparser=FriendsDataParserConfig(), train_num_rays_per_batch=4096, eval_num_rays_per_batch=8192\n ),\n model=SemanticNerfWModelConfig(eval_num_rays_per_chunk=1 << 16),\n ),\n optimizers={\n \"proposal_networks\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n },\n \"fields\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n },\n },\n viewer=ViewerConfig(num_rays_per_chunk=1 << 16),\n vis=\"viewer\",\n)\n\nmethod_configs[\"vanilla-nerf\"] = Config(\n method_name=\"vanilla-nerf\",\n pipeline=VanillaPipelineConfig(\n datamanager=VanillaDataManagerConfig(\n dataparser=BlenderDataParserConfig(),\n ),\n model=VanillaModelConfig(_target=NeRFModel),\n ),\n optimizers={\n \"fields\": {\n \"optimizer\": RAdamOptimizerConfig(lr=5e-4, eps=1e-08),\n \"scheduler\": None,\n }\n },\n)\n\n\nAnnotatedBaseConfigUnion = dcargs.extras.subcommand_type_from_defaults(\n defaults=method_configs, descriptions=descriptions\n)\n\"\"\"Union[] type over config types, annotated with default instances for use with\ndcargs.cli(). Allows the user to pick between one of several base configurations, and\nthen override values in it.\"\"\"\n", "path": "nerfstudio/configs/method_configs.py"}, {"content": "# Copyright 2022 The Nerfstudio Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nSome utility code for configs.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import field\nfrom typing import Any, Dict\n\n# pylint: disable=import-outside-toplevel\n\n# cannot use mutable types directly within dataclass; abstracting default factory calls\ndef to_immutable_dict(d: Dict[str, Any]):\n \"\"\"Method to convert mutable dict to default factory dict\n\n Args:\n d: dictionary to convert into default factory dict for dataclass\n \"\"\"\n return field(default_factory=lambda: dict(d))\n", "path": "nerfstudio/configs/config_utils.py"}], "after_files": [{"content": "# Copyright 2022 The Nerfstudio Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nPut all the method implementations in one location.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import Dict\n\nimport dcargs\nfrom rich.console import Console\n\nfrom nerfstudio.configs.base_config import Config, TrainerConfig, ViewerConfig\nfrom nerfstudio.configs.config_utils import convert_markup_to_ansi\nfrom nerfstudio.data.datamanagers import VanillaDataManagerConfig\nfrom nerfstudio.data.dataparsers.blender_dataparser import BlenderDataParserConfig\nfrom nerfstudio.data.dataparsers.friends_dataparser import FriendsDataParserConfig\nfrom nerfstudio.data.dataparsers.nerfstudio_dataparser import NerfstudioDataParserConfig\nfrom nerfstudio.engine.optimizers import AdamOptimizerConfig, RAdamOptimizerConfig\nfrom nerfstudio.models.base_model import VanillaModelConfig\nfrom nerfstudio.models.instant_ngp import InstantNGPModelConfig\nfrom nerfstudio.models.mipnerf import MipNerfModel\nfrom nerfstudio.models.nerfacto import NerfactoModelConfig\nfrom nerfstudio.models.semantic_nerfw import SemanticNerfWModelConfig\nfrom nerfstudio.models.vanilla_nerf import NeRFModel\nfrom nerfstudio.pipelines.base_pipeline import VanillaPipelineConfig\nfrom nerfstudio.pipelines.dynamic_batch import DynamicBatchPipelineConfig\n\nmethod_configs: Dict[str, Config] = {}\ndescriptions = {\n \"nerfacto\": \"[bold green]Recommended[/bold green] Real-time model tuned for real captures. \"\n + \"This model will be continually updated.\",\n \"instant-ngp\": \"Implementation of Instant-NGP. Recommended real-time model for bounded synthetic data.\",\n \"mipnerf\": \"High quality model for bounded scenes. 
[red]*slow*\",\n \"semantic-nerfw\": \"Predicts semantic segmentations and filters out transient objects.\",\n \"vanilla-nerf\": \"Original NeRF model. [red]*slow*\",\n}\ndescriptions = {k: convert_markup_to_ansi(v) for k, v in descriptions.items()}\n\n\nmethod_configs[\"nerfacto\"] = Config(\n method_name=\"nerfacto\",\n trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),\n pipeline=VanillaPipelineConfig(\n datamanager=VanillaDataManagerConfig(\n dataparser=NerfstudioDataParserConfig(), train_num_rays_per_batch=4096, eval_num_rays_per_batch=8192\n ),\n model=NerfactoModelConfig(eval_num_rays_per_chunk=1 << 14),\n ),\n optimizers={\n \"proposal_networks\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n },\n \"fields\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n },\n },\n viewer=ViewerConfig(num_rays_per_chunk=1 << 14),\n vis=\"viewer\",\n)\n\nmethod_configs[\"instant-ngp\"] = Config(\n method_name=\"instant-ngp\",\n trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),\n pipeline=DynamicBatchPipelineConfig(\n datamanager=VanillaDataManagerConfig(dataparser=NerfstudioDataParserConfig(), train_num_rays_per_batch=8192),\n model=InstantNGPModelConfig(eval_num_rays_per_chunk=8192),\n ),\n optimizers={\n \"fields\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n }\n },\n viewer=ViewerConfig(num_rays_per_chunk=64000),\n vis=\"viewer\",\n)\n\nmethod_configs[\"mipnerf\"] = Config(\n method_name=\"mipnerf\",\n pipeline=VanillaPipelineConfig(\n datamanager=VanillaDataManagerConfig(dataparser=BlenderDataParserConfig(), train_num_rays_per_batch=8192),\n model=VanillaModelConfig(\n _target=MipNerfModel,\n loss_coefficients={\"rgb_loss_coarse\": 0.1, \"rgb_loss_fine\": 1.0},\n num_coarse_samples=128,\n num_importance_samples=128,\n eval_num_rays_per_chunk=8192,\n ),\n ),\n optimizers={\n \"fields\": {\n \"optimizer\": RAdamOptimizerConfig(lr=5e-4, eps=1e-08),\n \"scheduler\": None,\n }\n },\n)\n\nmethod_configs[\"semantic-nerfw\"] = Config(\n method_name=\"semantic-nerfw\",\n trainer=TrainerConfig(steps_per_eval_batch=500, steps_per_save=2000, mixed_precision=True),\n pipeline=VanillaPipelineConfig(\n datamanager=VanillaDataManagerConfig(\n dataparser=FriendsDataParserConfig(), train_num_rays_per_batch=4096, eval_num_rays_per_batch=8192\n ),\n model=SemanticNerfWModelConfig(eval_num_rays_per_chunk=1 << 16),\n ),\n optimizers={\n \"proposal_networks\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n },\n \"fields\": {\n \"optimizer\": AdamOptimizerConfig(lr=1e-2, eps=1e-15),\n \"scheduler\": None,\n },\n },\n viewer=ViewerConfig(num_rays_per_chunk=1 << 16),\n vis=\"viewer\",\n)\n\nmethod_configs[\"vanilla-nerf\"] = Config(\n method_name=\"vanilla-nerf\",\n pipeline=VanillaPipelineConfig(\n datamanager=VanillaDataManagerConfig(\n dataparser=BlenderDataParserConfig(),\n ),\n model=VanillaModelConfig(_target=NeRFModel),\n ),\n optimizers={\n \"fields\": {\n \"optimizer\": RAdamOptimizerConfig(lr=5e-4, eps=1e-08),\n \"scheduler\": None,\n }\n },\n)\n\n\nAnnotatedBaseConfigUnion = dcargs.conf.SuppressFixed[ # Don't show unparseable (fixed) arguments in helptext.\n dcargs.extras.subcommand_type_from_defaults(defaults=method_configs, descriptions=descriptions)\n]\n\"\"\"Union[] type over config types, annotated with default instances for use with\ndcargs.cli(). 
Allows the user to pick between one of several base configurations, and\nthen override values in it.\"\"\"\n", "path": "nerfstudio/configs/method_configs.py"}, {"content": "# Copyright 2022 The Nerfstudio Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nSome utility code for configs.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import field\nfrom typing import Any, Dict\n\nfrom rich.console import Console\n\n# pylint: disable=import-outside-toplevel\n\n# cannot use mutable types directly within dataclass; abstracting default factory calls\ndef to_immutable_dict(d: Dict[str, Any]):\n \"\"\"Method to convert mutable dict to default factory dict\n\n Args:\n d: dictionary to convert into default factory dict for dataclass\n \"\"\"\n return field(default_factory=lambda: dict(d))\n\n\ndef convert_markup_to_ansi(markup_string: str) -> str:\n \"\"\"Convert rich-style markup to ANSI sequences for command-line formatting.\n\n Args:\n markup_string: Text with rich-style markup.\n\n Returns:\n Text formatted via ANSI sequences.\n \"\"\"\n console = Console()\n with console.capture() as out:\n console.print(markup_string, soft_wrap=True)\n return out.get()\n", "path": "nerfstudio/configs/config_utils.py"}]}
| 2,638 | 639 |
gh_patches_debug_2881
|
rasdani/github-patches
|
git_diff
|
arviz-devs__arviz-1334
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix negative values in std
edit. There is an error in the numeric_utils.
This is a wrong order of operations
std_devs = np.diag(cov ** 0.5)
Correct order is
std_devs = np.diag(cov) ** 0.5
--- END ISSUE ---
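For illustration, a minimal NumPy sketch of the ordering problem described in the issue (the 2x2 covariance matrix below is made up, not taken from ArviZ): the element-wise power is applied to every entry, including possibly negative off-diagonal covariances, whereas the corrected expression only square-roots the non-negative variances on the diagonal.
```python
import numpy as np

# Made-up 2x2 covariance matrix with a negative off-diagonal covariance.
cov = np.array([[4.0, -1.5],
                [-1.5, 9.0]])

# Element-wise power hits every entry, so the negative covariances turn into
# NaN (and NumPy emits "invalid value encountered in power" warnings):
print(cov ** 0.5)           # roughly [[2., nan], [nan, 3.]]

# The intended standard deviations are the square roots of the variances,
# i.e. the diagonal of the covariance matrix, taken before the power:
print(np.diag(cov) ** 0.5)  # [2., 3.]
```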
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/numeric_utils.py`
Content:
```
1 """Numerical utility functions for ArviZ."""
2 import warnings
3 import numpy as np
4 from scipy.signal import convolve, convolve2d
5 from scipy.signal.windows import gaussian
6 from scipy.sparse import coo_matrix
7
8 from .stats.stats_utils import histogram
9 from .utils import _stack, _dot, _cov
10
11
12 def _fast_kde(x, cumulative=False, bw=4.5, xmin=None, xmax=None):
13 """Fast Fourier transform-based Gaussian kernel density estimate (KDE).
14
15 The code was adapted from https://github.com/mfouesneau/faststats
16
17 Parameters
18 ----------
19 x : Numpy array or list
20 cumulative : bool
21 If true, estimate the cdf instead of the pdf
22 bw : float
23 Bandwidth scaling factor for the KDE. Should be larger than 0. The higher this number the
24 smoother the KDE will be. Defaults to 4.5 which is essentially the same as the Scott's rule
25 of thumb (the default rule used by SciPy).
26 xmin : float
27 Manually set lower limit.
28 xmax : float
29 Manually set upper limit.
30
31 Returns
32 -------
33 density: A gridded 1D KDE of the input points (x)
34 xmin: minimum value of x
35 xmax: maximum value of x
36 """
37 x = np.asarray(x, dtype=float)
38 x = x[np.isfinite(x)]
39 if x.size == 0:
40 warnings.warn("kde plot failed, you may want to check your data")
41 return np.array([np.nan]), np.nan, np.nan
42
43 len_x = len(x)
44 n_points = 200 if (xmin or xmax) is None else 500
45
46 if xmin is None:
47 xmin = np.min(x)
48 if xmax is None:
49 xmax = np.max(x)
50
51 assert np.min(x) >= xmin
52 assert np.max(x) <= xmax
53
54 log_len_x = np.log(len_x) * bw
55
56 n_bins = min(int(len_x ** (1 / 3) * log_len_x * 2), n_points)
57 if n_bins < 2:
58 warnings.warn("kde plot failed, you may want to check your data")
59 return np.array([np.nan]), np.nan, np.nan
60
61 # hist, bin_edges = np.histogram(x, bins=n_bins, range=(xmin, xmax))
62 # grid = hist / (hist.sum() * np.diff(bin_edges))
63
64 _, grid, _ = histogram(x, n_bins, range_hist=(xmin, xmax))
65
66 scotts_factor = len_x ** (-0.2)
67 kern_nx = int(scotts_factor * 2 * np.pi * log_len_x)
68 kernel = gaussian(kern_nx, scotts_factor * log_len_x)
69
70 npad = min(n_bins, 2 * kern_nx)
71 grid = np.concatenate([grid[npad:0:-1], grid, grid[n_bins : n_bins - npad : -1]])
72 density = convolve(grid, kernel, mode="same", method="direct")[npad : npad + n_bins]
73 norm_factor = (2 * np.pi * log_len_x ** 2 * scotts_factor ** 2) ** 0.5
74
75 density /= norm_factor
76
77 if cumulative:
78 density = density.cumsum() / density.sum()
79
80 return density, xmin, xmax
81
82
83 def _fast_kde_2d(x, y, gridsize=(128, 128), circular=False):
84 """
85 2D fft-based Gaussian kernel density estimate (KDE).
86
87 The code was adapted from https://github.com/mfouesneau/faststats
88
89 Parameters
90 ----------
91 x : Numpy array or list
92 y : Numpy array or list
93 gridsize : tuple
94 Number of points used to discretize data. Use powers of 2 for fft optimization
95 circular: bool
96 If True, use circular boundaries. Defaults to False
97 Returns
98 -------
99 grid: A gridded 2D KDE of the input points (x, y)
100 xmin: minimum value of x
101 xmax: maximum value of x
102 ymin: minimum value of y
103 ymax: maximum value of y
104 """
105 x = np.asarray(x, dtype=float)
106 x = x[np.isfinite(x)]
107 y = np.asarray(y, dtype=float)
108 y = y[np.isfinite(y)]
109
110 xmin, xmax = x.min(), x.max()
111 ymin, ymax = y.min(), y.max()
112
113 len_x = len(x)
114 weights = np.ones(len_x)
115 n_x, n_y = gridsize
116
117 d_x = (xmax - xmin) / (n_x - 1)
118 d_y = (ymax - ymin) / (n_y - 1)
119
120 xyi = _stack(x, y).T
121 xyi -= [xmin, ymin]
122 xyi /= [d_x, d_y]
123 xyi = np.floor(xyi, xyi).T
124
125 scotts_factor = len_x ** (-1 / 6)
126 cov = _cov(xyi)
127 std_devs = np.diag(cov ** 0.5)
128 kern_nx, kern_ny = np.round(scotts_factor * 2 * np.pi * std_devs)
129
130 inv_cov = np.linalg.inv(cov * scotts_factor ** 2)
131
132 x_x = np.arange(kern_nx) - kern_nx / 2
133 y_y = np.arange(kern_ny) - kern_ny / 2
134 x_x, y_y = np.meshgrid(x_x, y_y)
135
136 kernel = _stack(x_x.flatten(), y_y.flatten())
137 kernel = _dot(inv_cov, kernel) * kernel
138 kernel = np.exp(-kernel.sum(axis=0) / 2)
139 kernel = kernel.reshape((int(kern_ny), int(kern_nx)))
140
141 boundary = "wrap" if circular else "symm"
142
143 grid = coo_matrix((weights, xyi), shape=(n_x, n_y)).toarray()
144 grid = convolve2d(grid, kernel, mode="same", boundary=boundary)
145
146 norm_factor = np.linalg.det(2 * np.pi * cov * scotts_factor ** 2)
147 norm_factor = len_x * d_x * d_y * norm_factor ** 0.5
148
149 grid /= norm_factor
150
151 return grid, xmin, xmax, ymin, ymax
152
153
154 def get_bins(values):
155 """
156 Automatically compute the number of bins for discrete variables.
157
158 Parameters
159 ----------
160 values = numpy array
161 values
162
163 Returns
164 -------
165 array with the bins
166
167 Notes
168 -----
169 Computes the width of the bins by taking the maximun of the Sturges and the Freedman-Diaconis
170 estimators. Acording to numpy `np.histogram` this provides good all around performance.
171
172 The Sturges is a very simplistic estimator based on the assumption of normality of the data.
173 This estimator has poor performance for non-normal data, which becomes especially obvious for
174 large data sets. The estimate depends only on size of the data.
175
176 The Freedman-Diaconis rule uses interquartile range (IQR) to estimate the binwidth.
177 It is considered a robusts version of the Scott rule as the IQR is less affected by outliers
178 than the standard deviation. However, the IQR depends on fewer points than the standard
179 deviation, so it is less accurate, especially for long tailed distributions.
180 """
181 x_min = values.min().astype(int)
182 x_max = values.max().astype(int)
183
184 # Sturges histogram bin estimator
185 bins_sturges = (x_max - x_min) / (np.log2(values.size) + 1)
186
187 # The Freedman-Diaconis histogram bin estimator.
188 iqr = np.subtract(*np.percentile(values, [75, 25])) # pylint: disable=assignment-from-no-return
189 bins_fd = 2 * iqr * values.size ** (-1 / 3)
190
191 width = np.round(np.max([1, bins_sturges, bins_fd])).astype(int)
192
193 return np.arange(x_min, x_max + width + 1, width)
194
195
196 def _sturges_formula(dataset, mult=1):
197 """Use Sturges' formula to determine number of bins.
198
199 See https://en.wikipedia.org/wiki/Histogram#Sturges'_formula
200 or https://doi.org/10.1080%2F01621459.1926.10502161
201
202 Parameters
203 ----------
204 dataset: xarray.DataSet
205 Must have the `draw` dimension
206
207 mult: float
208 Used to scale the number of bins up or down. Default is 1 for Sturges' formula.
209
210 Returns
211 -------
212 int
213 Number of bins to use
214 """
215 return int(np.ceil(mult * np.log2(dataset.draw.size)) + 1)
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/arviz/numeric_utils.py b/arviz/numeric_utils.py
--- a/arviz/numeric_utils.py
+++ b/arviz/numeric_utils.py
@@ -124,7 +124,7 @@
scotts_factor = len_x ** (-1 / 6)
cov = _cov(xyi)
- std_devs = np.diag(cov ** 0.5)
+ std_devs = np.diag(cov) ** 0.5
kern_nx, kern_ny = np.round(scotts_factor * 2 * np.pi * std_devs)
inv_cov = np.linalg.inv(cov * scotts_factor ** 2)
|
{"golden_diff": "diff --git a/arviz/numeric_utils.py b/arviz/numeric_utils.py\n--- a/arviz/numeric_utils.py\n+++ b/arviz/numeric_utils.py\n@@ -124,7 +124,7 @@\n \n scotts_factor = len_x ** (-1 / 6)\n cov = _cov(xyi)\n- std_devs = np.diag(cov ** 0.5)\n+ std_devs = np.diag(cov) ** 0.5\n kern_nx, kern_ny = np.round(scotts_factor * 2 * np.pi * std_devs)\n \n inv_cov = np.linalg.inv(cov * scotts_factor ** 2)\n", "issue": "Fix negative values in std\nedit. There is an error in the numeric_utils.\r\n\r\nThis is a wrong order of operations\r\n\r\n std_devs = np.diag(cov ** 0.5)\r\n\r\nCorrect order is\r\n\r\n std_devs = np.diag(cov) ** 0.5\n", "before_files": [{"content": "\"\"\"Numerical utility functions for ArviZ.\"\"\"\nimport warnings\nimport numpy as np\nfrom scipy.signal import convolve, convolve2d\nfrom scipy.signal.windows import gaussian\nfrom scipy.sparse import coo_matrix\n\nfrom .stats.stats_utils import histogram\nfrom .utils import _stack, _dot, _cov\n\n\ndef _fast_kde(x, cumulative=False, bw=4.5, xmin=None, xmax=None):\n \"\"\"Fast Fourier transform-based Gaussian kernel density estimate (KDE).\n\n The code was adapted from https://github.com/mfouesneau/faststats\n\n Parameters\n ----------\n x : Numpy array or list\n cumulative : bool\n If true, estimate the cdf instead of the pdf\n bw : float\n Bandwidth scaling factor for the KDE. Should be larger than 0. The higher this number the\n smoother the KDE will be. Defaults to 4.5 which is essentially the same as the Scott's rule\n of thumb (the default rule used by SciPy).\n xmin : float\n Manually set lower limit.\n xmax : float\n Manually set upper limit.\n\n Returns\n -------\n density: A gridded 1D KDE of the input points (x)\n xmin: minimum value of x\n xmax: maximum value of x\n \"\"\"\n x = np.asarray(x, dtype=float)\n x = x[np.isfinite(x)]\n if x.size == 0:\n warnings.warn(\"kde plot failed, you may want to check your data\")\n return np.array([np.nan]), np.nan, np.nan\n\n len_x = len(x)\n n_points = 200 if (xmin or xmax) is None else 500\n\n if xmin is None:\n xmin = np.min(x)\n if xmax is None:\n xmax = np.max(x)\n\n assert np.min(x) >= xmin\n assert np.max(x) <= xmax\n\n log_len_x = np.log(len_x) * bw\n\n n_bins = min(int(len_x ** (1 / 3) * log_len_x * 2), n_points)\n if n_bins < 2:\n warnings.warn(\"kde plot failed, you may want to check your data\")\n return np.array([np.nan]), np.nan, np.nan\n\n # hist, bin_edges = np.histogram(x, bins=n_bins, range=(xmin, xmax))\n # grid = hist / (hist.sum() * np.diff(bin_edges))\n\n _, grid, _ = histogram(x, n_bins, range_hist=(xmin, xmax))\n\n scotts_factor = len_x ** (-0.2)\n kern_nx = int(scotts_factor * 2 * np.pi * log_len_x)\n kernel = gaussian(kern_nx, scotts_factor * log_len_x)\n\n npad = min(n_bins, 2 * kern_nx)\n grid = np.concatenate([grid[npad:0:-1], grid, grid[n_bins : n_bins - npad : -1]])\n density = convolve(grid, kernel, mode=\"same\", method=\"direct\")[npad : npad + n_bins]\n norm_factor = (2 * np.pi * log_len_x ** 2 * scotts_factor ** 2) ** 0.5\n\n density /= norm_factor\n\n if cumulative:\n density = density.cumsum() / density.sum()\n\n return density, xmin, xmax\n\n\ndef _fast_kde_2d(x, y, gridsize=(128, 128), circular=False):\n \"\"\"\n 2D fft-based Gaussian kernel density estimate (KDE).\n\n The code was adapted from https://github.com/mfouesneau/faststats\n\n Parameters\n ----------\n x : Numpy array or list\n y : Numpy array or list\n gridsize : tuple\n Number of points used to discretize data. 
Use powers of 2 for fft optimization\n circular: bool\n If True, use circular boundaries. Defaults to False\n Returns\n -------\n grid: A gridded 2D KDE of the input points (x, y)\n xmin: minimum value of x\n xmax: maximum value of x\n ymin: minimum value of y\n ymax: maximum value of y\n \"\"\"\n x = np.asarray(x, dtype=float)\n x = x[np.isfinite(x)]\n y = np.asarray(y, dtype=float)\n y = y[np.isfinite(y)]\n\n xmin, xmax = x.min(), x.max()\n ymin, ymax = y.min(), y.max()\n\n len_x = len(x)\n weights = np.ones(len_x)\n n_x, n_y = gridsize\n\n d_x = (xmax - xmin) / (n_x - 1)\n d_y = (ymax - ymin) / (n_y - 1)\n\n xyi = _stack(x, y).T\n xyi -= [xmin, ymin]\n xyi /= [d_x, d_y]\n xyi = np.floor(xyi, xyi).T\n\n scotts_factor = len_x ** (-1 / 6)\n cov = _cov(xyi)\n std_devs = np.diag(cov ** 0.5)\n kern_nx, kern_ny = np.round(scotts_factor * 2 * np.pi * std_devs)\n\n inv_cov = np.linalg.inv(cov * scotts_factor ** 2)\n\n x_x = np.arange(kern_nx) - kern_nx / 2\n y_y = np.arange(kern_ny) - kern_ny / 2\n x_x, y_y = np.meshgrid(x_x, y_y)\n\n kernel = _stack(x_x.flatten(), y_y.flatten())\n kernel = _dot(inv_cov, kernel) * kernel\n kernel = np.exp(-kernel.sum(axis=0) / 2)\n kernel = kernel.reshape((int(kern_ny), int(kern_nx)))\n\n boundary = \"wrap\" if circular else \"symm\"\n\n grid = coo_matrix((weights, xyi), shape=(n_x, n_y)).toarray()\n grid = convolve2d(grid, kernel, mode=\"same\", boundary=boundary)\n\n norm_factor = np.linalg.det(2 * np.pi * cov * scotts_factor ** 2)\n norm_factor = len_x * d_x * d_y * norm_factor ** 0.5\n\n grid /= norm_factor\n\n return grid, xmin, xmax, ymin, ymax\n\n\ndef get_bins(values):\n \"\"\"\n Automatically compute the number of bins for discrete variables.\n\n Parameters\n ----------\n values = numpy array\n values\n\n Returns\n -------\n array with the bins\n\n Notes\n -----\n Computes the width of the bins by taking the maximun of the Sturges and the Freedman-Diaconis\n estimators. Acording to numpy `np.histogram` this provides good all around performance.\n\n The Sturges is a very simplistic estimator based on the assumption of normality of the data.\n This estimator has poor performance for non-normal data, which becomes especially obvious for\n large data sets. The estimate depends only on size of the data.\n\n The Freedman-Diaconis rule uses interquartile range (IQR) to estimate the binwidth.\n It is considered a robusts version of the Scott rule as the IQR is less affected by outliers\n than the standard deviation. However, the IQR depends on fewer points than the standard\n deviation, so it is less accurate, especially for long tailed distributions.\n \"\"\"\n x_min = values.min().astype(int)\n x_max = values.max().astype(int)\n\n # Sturges histogram bin estimator\n bins_sturges = (x_max - x_min) / (np.log2(values.size) + 1)\n\n # The Freedman-Diaconis histogram bin estimator.\n iqr = np.subtract(*np.percentile(values, [75, 25])) # pylint: disable=assignment-from-no-return\n bins_fd = 2 * iqr * values.size ** (-1 / 3)\n\n width = np.round(np.max([1, bins_sturges, bins_fd])).astype(int)\n\n return np.arange(x_min, x_max + width + 1, width)\n\n\ndef _sturges_formula(dataset, mult=1):\n \"\"\"Use Sturges' formula to determine number of bins.\n\n See https://en.wikipedia.org/wiki/Histogram#Sturges'_formula\n or https://doi.org/10.1080%2F01621459.1926.10502161\n\n Parameters\n ----------\n dataset: xarray.DataSet\n Must have the `draw` dimension\n\n mult: float\n Used to scale the number of bins up or down. 
Default is 1 for Sturges' formula.\n\n Returns\n -------\n int\n Number of bins to use\n \"\"\"\n return int(np.ceil(mult * np.log2(dataset.draw.size)) + 1)\n", "path": "arviz/numeric_utils.py"}], "after_files": [{"content": "\"\"\"Numerical utility functions for ArviZ.\"\"\"\nimport warnings\nimport numpy as np\nfrom scipy.signal import convolve, convolve2d\nfrom scipy.signal.windows import gaussian\nfrom scipy.sparse import coo_matrix\n\nfrom .stats.stats_utils import histogram\nfrom .utils import _stack, _dot, _cov\n\n\ndef _fast_kde(x, cumulative=False, bw=4.5, xmin=None, xmax=None):\n \"\"\"Fast Fourier transform-based Gaussian kernel density estimate (KDE).\n\n The code was adapted from https://github.com/mfouesneau/faststats\n\n Parameters\n ----------\n x : Numpy array or list\n cumulative : bool\n If true, estimate the cdf instead of the pdf\n bw : float\n Bandwidth scaling factor for the KDE. Should be larger than 0. The higher this number the\n smoother the KDE will be. Defaults to 4.5 which is essentially the same as the Scott's rule\n of thumb (the default rule used by SciPy).\n xmin : float\n Manually set lower limit.\n xmax : float\n Manually set upper limit.\n\n Returns\n -------\n density: A gridded 1D KDE of the input points (x)\n xmin: minimum value of x\n xmax: maximum value of x\n \"\"\"\n x = np.asarray(x, dtype=float)\n x = x[np.isfinite(x)]\n if x.size == 0:\n warnings.warn(\"kde plot failed, you may want to check your data\")\n return np.array([np.nan]), np.nan, np.nan\n\n len_x = len(x)\n n_points = 200 if (xmin or xmax) is None else 500\n\n if xmin is None:\n xmin = np.min(x)\n if xmax is None:\n xmax = np.max(x)\n\n assert np.min(x) >= xmin\n assert np.max(x) <= xmax\n\n log_len_x = np.log(len_x) * bw\n\n n_bins = min(int(len_x ** (1 / 3) * log_len_x * 2), n_points)\n if n_bins < 2:\n warnings.warn(\"kde plot failed, you may want to check your data\")\n return np.array([np.nan]), np.nan, np.nan\n\n # hist, bin_edges = np.histogram(x, bins=n_bins, range=(xmin, xmax))\n # grid = hist / (hist.sum() * np.diff(bin_edges))\n\n _, grid, _ = histogram(x, n_bins, range_hist=(xmin, xmax))\n\n scotts_factor = len_x ** (-0.2)\n kern_nx = int(scotts_factor * 2 * np.pi * log_len_x)\n kernel = gaussian(kern_nx, scotts_factor * log_len_x)\n\n npad = min(n_bins, 2 * kern_nx)\n grid = np.concatenate([grid[npad:0:-1], grid, grid[n_bins : n_bins - npad : -1]])\n density = convolve(grid, kernel, mode=\"same\", method=\"direct\")[npad : npad + n_bins]\n norm_factor = (2 * np.pi * log_len_x ** 2 * scotts_factor ** 2) ** 0.5\n\n density /= norm_factor\n\n if cumulative:\n density = density.cumsum() / density.sum()\n\n return density, xmin, xmax\n\n\ndef _fast_kde_2d(x, y, gridsize=(128, 128), circular=False):\n \"\"\"\n 2D fft-based Gaussian kernel density estimate (KDE).\n\n The code was adapted from https://github.com/mfouesneau/faststats\n\n Parameters\n ----------\n x : Numpy array or list\n y : Numpy array or list\n gridsize : tuple\n Number of points used to discretize data. Use powers of 2 for fft optimization\n circular: bool\n If True, use circular boundaries. 
Defaults to False\n Returns\n -------\n grid: A gridded 2D KDE of the input points (x, y)\n xmin: minimum value of x\n xmax: maximum value of x\n ymin: minimum value of y\n ymax: maximum value of y\n \"\"\"\n x = np.asarray(x, dtype=float)\n x = x[np.isfinite(x)]\n y = np.asarray(y, dtype=float)\n y = y[np.isfinite(y)]\n\n xmin, xmax = x.min(), x.max()\n ymin, ymax = y.min(), y.max()\n\n len_x = len(x)\n weights = np.ones(len_x)\n n_x, n_y = gridsize\n\n d_x = (xmax - xmin) / (n_x - 1)\n d_y = (ymax - ymin) / (n_y - 1)\n\n xyi = _stack(x, y).T\n xyi -= [xmin, ymin]\n xyi /= [d_x, d_y]\n xyi = np.floor(xyi, xyi).T\n\n scotts_factor = len_x ** (-1 / 6)\n cov = _cov(xyi)\n std_devs = np.diag(cov) ** 0.5\n kern_nx, kern_ny = np.round(scotts_factor * 2 * np.pi * std_devs)\n\n inv_cov = np.linalg.inv(cov * scotts_factor ** 2)\n\n x_x = np.arange(kern_nx) - kern_nx / 2\n y_y = np.arange(kern_ny) - kern_ny / 2\n x_x, y_y = np.meshgrid(x_x, y_y)\n\n kernel = _stack(x_x.flatten(), y_y.flatten())\n kernel = _dot(inv_cov, kernel) * kernel\n kernel = np.exp(-kernel.sum(axis=0) / 2)\n kernel = kernel.reshape((int(kern_ny), int(kern_nx)))\n\n boundary = \"wrap\" if circular else \"symm\"\n\n grid = coo_matrix((weights, xyi), shape=(n_x, n_y)).toarray()\n grid = convolve2d(grid, kernel, mode=\"same\", boundary=boundary)\n\n norm_factor = np.linalg.det(2 * np.pi * cov * scotts_factor ** 2)\n norm_factor = len_x * d_x * d_y * norm_factor ** 0.5\n\n grid /= norm_factor\n\n return grid, xmin, xmax, ymin, ymax\n\n\ndef get_bins(values):\n \"\"\"\n Automatically compute the number of bins for discrete variables.\n\n Parameters\n ----------\n values = numpy array\n values\n\n Returns\n -------\n array with the bins\n\n Notes\n -----\n Computes the width of the bins by taking the maximun of the Sturges and the Freedman-Diaconis\n estimators. Acording to numpy `np.histogram` this provides good all around performance.\n\n The Sturges is a very simplistic estimator based on the assumption of normality of the data.\n This estimator has poor performance for non-normal data, which becomes especially obvious for\n large data sets. The estimate depends only on size of the data.\n\n The Freedman-Diaconis rule uses interquartile range (IQR) to estimate the binwidth.\n It is considered a robusts version of the Scott rule as the IQR is less affected by outliers\n than the standard deviation. However, the IQR depends on fewer points than the standard\n deviation, so it is less accurate, especially for long tailed distributions.\n \"\"\"\n x_min = values.min().astype(int)\n x_max = values.max().astype(int)\n\n # Sturges histogram bin estimator\n bins_sturges = (x_max - x_min) / (np.log2(values.size) + 1)\n\n # The Freedman-Diaconis histogram bin estimator.\n iqr = np.subtract(*np.percentile(values, [75, 25])) # pylint: disable=assignment-from-no-return\n bins_fd = 2 * iqr * values.size ** (-1 / 3)\n\n width = np.round(np.max([1, bins_sturges, bins_fd])).astype(int)\n\n return np.arange(x_min, x_max + width + 1, width)\n\n\ndef _sturges_formula(dataset, mult=1):\n \"\"\"Use Sturges' formula to determine number of bins.\n\n See https://en.wikipedia.org/wiki/Histogram#Sturges'_formula\n or https://doi.org/10.1080%2F01621459.1926.10502161\n\n Parameters\n ----------\n dataset: xarray.DataSet\n Must have the `draw` dimension\n\n mult: float\n Used to scale the number of bins up or down. 
Default is 1 for Sturges' formula.\n\n Returns\n -------\n int\n Number of bins to use\n \"\"\"\n return int(np.ceil(mult * np.log2(dataset.draw.size)) + 1)\n", "path": "arviz/numeric_utils.py"}]}
| 2,849 | 149 |
gh_patches_debug_35254
|
rasdani/github-patches
|
git_diff
|
ibis-project__ibis-6454
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docs: move away from .execute() in favor of explicit methods
### What happened?
per https://github.com/ibis-project/ibis/issues/6351, opening this to track updating docs
### What version of ibis are you using?
n/a
### What backend(s) are you using, if any?
n/a
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
--- END ISSUE ---
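For context, a minimal sketch of the pattern the docs are moving toward: the explicit output method (`to_pandas()`, as used in the accompanying patch) instead of the generic `execute()`. The tiny in-memory table below is made up purely for illustration.
```python
import ibis

t = ibis.memtable({"x": [1, 2, 3]})  # small in-memory table, just for illustration
expr = t.filter(t.x > 1)

df_old = expr.execute()    # implicit style the docs are moving away from
df_new = expr.to_pandas()  # explicit method preferred in the updated docs
```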
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/example_streamlit_app/example_streamlit_app.py`
Content:
```
1 import requests
2 import streamlit as st
3
4 from ibis import _
5 from ibis.streamlit import IbisConnection
6
7 st.set_page_config(page_title="Yummy Data", layout="wide")
8 st.title("Yummy Data :bacon:")
9
10
11 @st.cache_data
12 def get_emoji():
13 resp = requests.get(
14 "https://raw.githubusercontent.com/omnidan/node-emoji/master/lib/emoji.json"
15 )
16 resp.raise_for_status()
17 emojis = resp.json()
18 return emojis
19
20
21 options = [1, 5, 10, 25, 50, 100]
22
23
24 @st.cache_data
25 def query():
26 return (
27 con.tables.recipes.relabel("snake_case")
28 .mutate(ner=_.ner.map(lambda n: n.lower()).unnest())
29 .ner.topk(max(options))
30 .relabel(dict(ner="ingredient"))
31 .execute()
32 .assign(
33 emoji=lambda df: df.ingredient.map(
34 lambda emoji: f"{emojis.get(emoji, '-')}"
35 )
36 )
37 .set_index("ingredient")
38 )
39
40
41 emojis = get_emoji()
42
43 con = st.experimental_connection("ch", type=IbisConnection)
44
45 if n := st.radio("Ingredients", options, index=1, horizontal=True):
46 table, whole = st.columns((2, 1))
47 idx = options.index(n)
48 k = 0
49 base = query()
50 for m in options[: idx + 1]:
51 df = base.iloc[k:m]
52 if not k:
53 word = "first"
54 elif m < n:
55 word = "next"
56 else:
57 word = "last"
58
59 uniq_emojis = " ".join(df.emoji[df.emoji != "-"].unique())
60 table.header(f"{word.title()} {m - k:d}")
61 table.subheader(uniq_emojis)
62
63 table.dataframe(df, use_container_width=True)
64 k = m
65
66 b = base.iloc[:n]
67 uniq_emojis = " ".join(b.emoji[b.emoji != "-"].unique())
68 whole.header(f"Top {n:d}")
69 whole.subheader(uniq_emojis)
70 whole.dataframe(b, use_container_width=True)
71
```
Path: `docs/backends/app/backend_info_app.py`
Content:
```
1 import datetime
2 import tempfile
3 from pathlib import Path
4 from typing import List, Optional
5
6 import pandas as pd
7 import requests
8 import sqlglot
9 import streamlit as st
10
11 import ibis
12 from ibis import _
13
14 ONE_HOUR_IN_SECONDS = datetime.timedelta(hours=1).total_seconds()
15
16 st.set_page_config(layout='wide')
17
18 # Track all queries. We display them at the bottom of the page.
19 ibis.options.verbose = True
20 sql_queries = []
21 ibis.options.verbose_log = lambda sql: sql_queries.append(sql)
22
23
24 @st.experimental_memo(ttl=ONE_HOUR_IN_SECONDS)
25 def support_matrix_df():
26 resp = requests.get("https://ibis-project.org/backends/raw_support_matrix.csv")
27 resp.raise_for_status()
28
29 with tempfile.NamedTemporaryFile() as f:
30 f.write(resp.content)
31 return (
32 ibis.read_csv(f.name)
33 .relabel({'FullOperation': 'full_operation'})
34 .mutate(
35 short_operation=_.full_operation.split(".")[-1],
36 operation_category=_.full_operation.split(".")[-2],
37 )
38 .execute()
39 )
40
41
42 @st.experimental_memo(ttl=ONE_HOUR_IN_SECONDS)
43 def backends_info_df():
44 return pd.DataFrame(
45 {
46 "bigquery": ["string", "sql"],
47 "clickhouse": ["string", "sql"],
48 "dask": ["dataframe"],
49 "datafusion": ["sql"],
50 "druid": ["sqlalchemy", "sql"],
51 "duckdb": ["sqlalchemy", "sql"],
52 "impala": ["string", "sql"],
53 "mssql": ["sqlalchemy", "sql"],
54 "mysql": ["sqlalchemy", "sql"],
55 "oracle": ["sqlalchemy", "sql"],
56 "pandas": ["dataframe"],
57 "polars": ["dataframe"],
58 "postgres": ["sqlalchemy", "sql"],
59 "pyspark": ["dataframe"],
60 "snowflake": ["sqlalchemy", "sql"],
61 "sqlite": ["sqlalchemy", "sql"],
62 "trino": ["sqlalchemy", "sql"],
63 }.items(),
64 columns=['backend_name', 'categories'],
65 )
66
67
68 backend_info_table = ibis.memtable(backends_info_df())
69 support_matrix_table = ibis.memtable(support_matrix_df())
70
71
72 @st.experimental_memo(ttl=ONE_HOUR_IN_SECONDS)
73 def get_all_backend_categories():
74 return (
75 backend_info_table.select(category=_.categories.unnest())
76 .distinct()
77 .order_by('category')['category']
78 .execute()
79 .tolist()
80 )
81
82
83 @st.experimental_memo(ttl=ONE_HOUR_IN_SECONDS)
84 def get_all_operation_categories():
85 return (
86 support_matrix_table.select(_.operation_category)
87 .distinct()['operation_category']
88 .execute()
89 .tolist()
90 )
91
92
93 @st.experimental_memo(ttl=ONE_HOUR_IN_SECONDS)
94 def get_backend_names(categories: Optional[List[str]] = None):
95 backend_expr = backend_info_table.mutate(category=_.categories.unnest())
96 if categories:
97 backend_expr = backend_expr.filter(_.category.isin(categories))
98 return (
99 backend_expr.select(_.backend_name).distinct().backend_name.execute().tolist()
100 )
101
102
103 def get_selected_backend_name():
104 backend_categories = get_all_backend_categories()
105 selected_categories_names = st.sidebar.multiselect(
106 'Backend category',
107 options=backend_categories,
108 default=None,
109 )
110 if not selected_categories_names:
111 return get_backend_names()
112 return get_backend_names(selected_categories_names)
113
114
115 def get_selected_operation_categories():
116 all_ops_categories = get_all_operation_categories()
117
118 selected_ops_categories = st.sidebar.multiselect(
119 'Operation category',
120 options=sorted(all_ops_categories),
121 default=None,
122 )
123 if not selected_ops_categories:
124 selected_ops_categories = all_ops_categories
125 show_geospatial = st.sidebar.checkbox('Include Geospatial ops', value=True)
126 if not show_geospatial and 'geospatial' in selected_ops_categories:
127 selected_ops_categories.remove("geospatial")
128 return selected_ops_categories
129
130
131 current_backend_names = get_selected_backend_name()
132 sort_by_coverage = st.sidebar.checkbox('Sort by API Coverage', value=False)
133 current_ops_categories = get_selected_operation_categories()
134
135 hide_supported_by_all_backends = st.sidebar.selectbox(
136 'Operation compatibility',
137 ['Show all', 'Show supported by all backends', 'Hide supported by all backends'],
138 0,
139 )
140 show_full_ops_name = st.sidebar.checkbox('Show full operation name', False)
141
142 # Start ibis expression
143 table_expr = support_matrix_table
144
145 # Add index to result
146 if show_full_ops_name:
147 table_expr = table_expr.mutate(index=_.full_operation)
148 else:
149 table_expr = table_expr.mutate(index=_.short_operation)
150 table_expr = table_expr.order_by(_.index)
151
152 # Filter operations by selected categories
153 table_expr = table_expr.filter(_.operation_category.isin(current_ops_categories))
154
155 # Filter operation by compatibility
156 supported_backend_count = sum(
157 getattr(table_expr, backend_name).ifelse(1, 0)
158 for backend_name in current_backend_names
159 )
160 if hide_supported_by_all_backends == 'Show supported by all backends':
161 table_expr = table_expr.filter(
162 supported_backend_count == len(current_backend_names)
163 )
164 elif hide_supported_by_all_backends == 'Hide supported by all backends':
165 table_expr = table_expr.filter(
166 supported_backend_count != len(current_backend_names)
167 )
168
169 # Show only selected backend
170 table_expr = table_expr[current_backend_names + ["index"]]
171
172 # Execute query
173 df = table_expr.execute()
174 df = df.set_index('index')
175
176 # Display result
177 all_visible_ops_count = len(df.index)
178 if all_visible_ops_count:
179 # Compute coverage
180 coverage = (
181 df.sum()
182 .sort_values(ascending=False)
183 .map(lambda n: f"{n} ({round(100 * n / all_visible_ops_count)}%)")
184 .to_frame(name="API Coverage")
185 .T
186 )
187
188 table = pd.concat([coverage, df.replace({True: "✔", False: "🚫"})]).loc[
189 :, slice(None) if sort_by_coverage else sorted(df.columns)
190 ]
191 st.dataframe(table)
192 else:
193 st.write("No data")
194
195 with st.expander("SQL queries"):
196 for sql_query in sql_queries:
197 pretty_sql_query = sqlglot.transpile(
198 sql_query, read='duckdb', write='duckdb', pretty=True
199 )[0]
200 st.code(pretty_sql_query, language='sql')
201
202 with st.expander("Source code"):
203 st.code(Path(__file__).read_text())
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/backends/app/backend_info_app.py b/docs/backends/app/backend_info_app.py
--- a/docs/backends/app/backend_info_app.py
+++ b/docs/backends/app/backend_info_app.py
@@ -35,7 +35,7 @@
short_operation=_.full_operation.split(".")[-1],
operation_category=_.full_operation.split(".")[-2],
)
- .execute()
+ .to_pandas()
)
@@ -75,7 +75,7 @@
backend_info_table.select(category=_.categories.unnest())
.distinct()
.order_by('category')['category']
- .execute()
+ .to_pandas()
.tolist()
)
@@ -85,7 +85,7 @@
return (
support_matrix_table.select(_.operation_category)
.distinct()['operation_category']
- .execute()
+ .to_pandas()
.tolist()
)
@@ -96,7 +96,7 @@
if categories:
backend_expr = backend_expr.filter(_.category.isin(categories))
return (
- backend_expr.select(_.backend_name).distinct().backend_name.execute().tolist()
+ backend_expr.select(_.backend_name).distinct().backend_name.to_pandas().tolist()
)
@@ -170,7 +170,7 @@
table_expr = table_expr[current_backend_names + ["index"]]
# Execute query
-df = table_expr.execute()
+df = table_expr.to_pandas()
df = df.set_index('index')
# Display result
diff --git a/docs/example_streamlit_app/example_streamlit_app.py b/docs/example_streamlit_app/example_streamlit_app.py
--- a/docs/example_streamlit_app/example_streamlit_app.py
+++ b/docs/example_streamlit_app/example_streamlit_app.py
@@ -28,7 +28,7 @@
.mutate(ner=_.ner.map(lambda n: n.lower()).unnest())
.ner.topk(max(options))
.relabel(dict(ner="ingredient"))
- .execute()
+ .to_pandas()
.assign(
emoji=lambda df: df.ingredient.map(
lambda emoji: f"{emojis.get(emoji, '-')}"
|
{"golden_diff": "diff --git a/docs/backends/app/backend_info_app.py b/docs/backends/app/backend_info_app.py\n--- a/docs/backends/app/backend_info_app.py\n+++ b/docs/backends/app/backend_info_app.py\n@@ -35,7 +35,7 @@\n short_operation=_.full_operation.split(\".\")[-1],\n operation_category=_.full_operation.split(\".\")[-2],\n )\n- .execute()\n+ .to_pandas()\n )\n \n \n@@ -75,7 +75,7 @@\n backend_info_table.select(category=_.categories.unnest())\n .distinct()\n .order_by('category')['category']\n- .execute()\n+ .to_pandas()\n .tolist()\n )\n \n@@ -85,7 +85,7 @@\n return (\n support_matrix_table.select(_.operation_category)\n .distinct()['operation_category']\n- .execute()\n+ .to_pandas()\n .tolist()\n )\n \n@@ -96,7 +96,7 @@\n if categories:\n backend_expr = backend_expr.filter(_.category.isin(categories))\n return (\n- backend_expr.select(_.backend_name).distinct().backend_name.execute().tolist()\n+ backend_expr.select(_.backend_name).distinct().backend_name.to_pandas().tolist()\n )\n \n \n@@ -170,7 +170,7 @@\n table_expr = table_expr[current_backend_names + [\"index\"]]\n \n # Execute query\n-df = table_expr.execute()\n+df = table_expr.to_pandas()\n df = df.set_index('index')\n \n # Display result\ndiff --git a/docs/example_streamlit_app/example_streamlit_app.py b/docs/example_streamlit_app/example_streamlit_app.py\n--- a/docs/example_streamlit_app/example_streamlit_app.py\n+++ b/docs/example_streamlit_app/example_streamlit_app.py\n@@ -28,7 +28,7 @@\n .mutate(ner=_.ner.map(lambda n: n.lower()).unnest())\n .ner.topk(max(options))\n .relabel(dict(ner=\"ingredient\"))\n- .execute()\n+ .to_pandas()\n .assign(\n emoji=lambda df: df.ingredient.map(\n lambda emoji: f\"{emojis.get(emoji, '-')}\"\n", "issue": "docs: move away from .execute() in favor of explicit methods\n### What happened?\n\nper https://github.com/ibis-project/ibis/issues/6351, opening this to track updating docs\n\n### What version of ibis are you using?\n\nn/a\n\n### What backend(s) are you using, if any?\n\nn/a\n\n### Relevant log output\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "import requests\nimport streamlit as st\n\nfrom ibis import _\nfrom ibis.streamlit import IbisConnection\n\nst.set_page_config(page_title=\"Yummy Data\", layout=\"wide\")\nst.title(\"Yummy Data :bacon:\")\n\n\[email protected]_data\ndef get_emoji():\n resp = requests.get(\n \"https://raw.githubusercontent.com/omnidan/node-emoji/master/lib/emoji.json\"\n )\n resp.raise_for_status()\n emojis = resp.json()\n return emojis\n\n\noptions = [1, 5, 10, 25, 50, 100]\n\n\[email protected]_data\ndef query():\n return (\n con.tables.recipes.relabel(\"snake_case\")\n .mutate(ner=_.ner.map(lambda n: n.lower()).unnest())\n .ner.topk(max(options))\n .relabel(dict(ner=\"ingredient\"))\n .execute()\n .assign(\n emoji=lambda df: df.ingredient.map(\n lambda emoji: f\"{emojis.get(emoji, '-')}\"\n )\n )\n .set_index(\"ingredient\")\n )\n\n\nemojis = get_emoji()\n\ncon = st.experimental_connection(\"ch\", type=IbisConnection)\n\nif n := st.radio(\"Ingredients\", options, index=1, horizontal=True):\n table, whole = st.columns((2, 1))\n idx = options.index(n)\n k = 0\n base = query()\n for m in options[: idx + 1]:\n df = base.iloc[k:m]\n if not k:\n word = \"first\"\n elif m < n:\n word = \"next\"\n else:\n word = \"last\"\n\n uniq_emojis = \" \".join(df.emoji[df.emoji != \"-\"].unique())\n table.header(f\"{word.title()} {m - k:d}\")\n table.subheader(uniq_emojis)\n\n table.dataframe(df, 
use_container_width=True)\n k = m\n\n b = base.iloc[:n]\n uniq_emojis = \" \".join(b.emoji[b.emoji != \"-\"].unique())\n whole.header(f\"Top {n:d}\")\n whole.subheader(uniq_emojis)\n whole.dataframe(b, use_container_width=True)\n", "path": "docs/example_streamlit_app/example_streamlit_app.py"}, {"content": "import datetime\nimport tempfile\nfrom pathlib import Path\nfrom typing import List, Optional\n\nimport pandas as pd\nimport requests\nimport sqlglot\nimport streamlit as st\n\nimport ibis\nfrom ibis import _\n\nONE_HOUR_IN_SECONDS = datetime.timedelta(hours=1).total_seconds()\n\nst.set_page_config(layout='wide')\n\n# Track all queries. We display them at the bottom of the page.\nibis.options.verbose = True\nsql_queries = []\nibis.options.verbose_log = lambda sql: sql_queries.append(sql)\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef support_matrix_df():\n resp = requests.get(\"https://ibis-project.org/backends/raw_support_matrix.csv\")\n resp.raise_for_status()\n\n with tempfile.NamedTemporaryFile() as f:\n f.write(resp.content)\n return (\n ibis.read_csv(f.name)\n .relabel({'FullOperation': 'full_operation'})\n .mutate(\n short_operation=_.full_operation.split(\".\")[-1],\n operation_category=_.full_operation.split(\".\")[-2],\n )\n .execute()\n )\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef backends_info_df():\n return pd.DataFrame(\n {\n \"bigquery\": [\"string\", \"sql\"],\n \"clickhouse\": [\"string\", \"sql\"],\n \"dask\": [\"dataframe\"],\n \"datafusion\": [\"sql\"],\n \"druid\": [\"sqlalchemy\", \"sql\"],\n \"duckdb\": [\"sqlalchemy\", \"sql\"],\n \"impala\": [\"string\", \"sql\"],\n \"mssql\": [\"sqlalchemy\", \"sql\"],\n \"mysql\": [\"sqlalchemy\", \"sql\"],\n \"oracle\": [\"sqlalchemy\", \"sql\"],\n \"pandas\": [\"dataframe\"],\n \"polars\": [\"dataframe\"],\n \"postgres\": [\"sqlalchemy\", \"sql\"],\n \"pyspark\": [\"dataframe\"],\n \"snowflake\": [\"sqlalchemy\", \"sql\"],\n \"sqlite\": [\"sqlalchemy\", \"sql\"],\n \"trino\": [\"sqlalchemy\", \"sql\"],\n }.items(),\n columns=['backend_name', 'categories'],\n )\n\n\nbackend_info_table = ibis.memtable(backends_info_df())\nsupport_matrix_table = ibis.memtable(support_matrix_df())\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef get_all_backend_categories():\n return (\n backend_info_table.select(category=_.categories.unnest())\n .distinct()\n .order_by('category')['category']\n .execute()\n .tolist()\n )\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef get_all_operation_categories():\n return (\n support_matrix_table.select(_.operation_category)\n .distinct()['operation_category']\n .execute()\n .tolist()\n )\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef get_backend_names(categories: Optional[List[str]] = None):\n backend_expr = backend_info_table.mutate(category=_.categories.unnest())\n if categories:\n backend_expr = backend_expr.filter(_.category.isin(categories))\n return (\n backend_expr.select(_.backend_name).distinct().backend_name.execute().tolist()\n )\n\n\ndef get_selected_backend_name():\n backend_categories = get_all_backend_categories()\n selected_categories_names = st.sidebar.multiselect(\n 'Backend category',\n options=backend_categories,\n default=None,\n )\n if not selected_categories_names:\n return get_backend_names()\n return get_backend_names(selected_categories_names)\n\n\ndef get_selected_operation_categories():\n all_ops_categories = get_all_operation_categories()\n\n selected_ops_categories = st.sidebar.multiselect(\n 'Operation category',\n 
options=sorted(all_ops_categories),\n default=None,\n )\n if not selected_ops_categories:\n selected_ops_categories = all_ops_categories\n show_geospatial = st.sidebar.checkbox('Include Geospatial ops', value=True)\n if not show_geospatial and 'geospatial' in selected_ops_categories:\n selected_ops_categories.remove(\"geospatial\")\n return selected_ops_categories\n\n\ncurrent_backend_names = get_selected_backend_name()\nsort_by_coverage = st.sidebar.checkbox('Sort by API Coverage', value=False)\ncurrent_ops_categories = get_selected_operation_categories()\n\nhide_supported_by_all_backends = st.sidebar.selectbox(\n 'Operation compatibility',\n ['Show all', 'Show supported by all backends', 'Hide supported by all backends'],\n 0,\n)\nshow_full_ops_name = st.sidebar.checkbox('Show full operation name', False)\n\n# Start ibis expression\ntable_expr = support_matrix_table\n\n# Add index to result\nif show_full_ops_name:\n table_expr = table_expr.mutate(index=_.full_operation)\nelse:\n table_expr = table_expr.mutate(index=_.short_operation)\ntable_expr = table_expr.order_by(_.index)\n\n# Filter operations by selected categories\ntable_expr = table_expr.filter(_.operation_category.isin(current_ops_categories))\n\n# Filter operation by compatibility\nsupported_backend_count = sum(\n getattr(table_expr, backend_name).ifelse(1, 0)\n for backend_name in current_backend_names\n)\nif hide_supported_by_all_backends == 'Show supported by all backends':\n table_expr = table_expr.filter(\n supported_backend_count == len(current_backend_names)\n )\nelif hide_supported_by_all_backends == 'Hide supported by all backends':\n table_expr = table_expr.filter(\n supported_backend_count != len(current_backend_names)\n )\n\n# Show only selected backend\ntable_expr = table_expr[current_backend_names + [\"index\"]]\n\n# Execute query\ndf = table_expr.execute()\ndf = df.set_index('index')\n\n# Display result\nall_visible_ops_count = len(df.index)\nif all_visible_ops_count:\n # Compute coverage\n coverage = (\n df.sum()\n .sort_values(ascending=False)\n .map(lambda n: f\"{n} ({round(100 * n / all_visible_ops_count)}%)\")\n .to_frame(name=\"API Coverage\")\n .T\n )\n\n table = pd.concat([coverage, df.replace({True: \"\u2714\", False: \"\ud83d\udeab\"})]).loc[\n :, slice(None) if sort_by_coverage else sorted(df.columns)\n ]\n st.dataframe(table)\nelse:\n st.write(\"No data\")\n\nwith st.expander(\"SQL queries\"):\n for sql_query in sql_queries:\n pretty_sql_query = sqlglot.transpile(\n sql_query, read='duckdb', write='duckdb', pretty=True\n )[0]\n st.code(pretty_sql_query, language='sql')\n\nwith st.expander(\"Source code\"):\n st.code(Path(__file__).read_text())\n", "path": "docs/backends/app/backend_info_app.py"}], "after_files": [{"content": "import requests\nimport streamlit as st\n\nfrom ibis import _\nfrom ibis.streamlit import IbisConnection\n\nst.set_page_config(page_title=\"Yummy Data\", layout=\"wide\")\nst.title(\"Yummy Data :bacon:\")\n\n\[email protected]_data\ndef get_emoji():\n resp = requests.get(\n \"https://raw.githubusercontent.com/omnidan/node-emoji/master/lib/emoji.json\"\n )\n resp.raise_for_status()\n emojis = resp.json()\n return emojis\n\n\noptions = [1, 5, 10, 25, 50, 100]\n\n\[email protected]_data\ndef query():\n return (\n con.tables.recipes.relabel(\"snake_case\")\n .mutate(ner=_.ner.map(lambda n: n.lower()).unnest())\n .ner.topk(max(options))\n .relabel(dict(ner=\"ingredient\"))\n .to_pandas()\n .assign(\n emoji=lambda df: df.ingredient.map(\n lambda emoji: f\"{emojis.get(emoji, '-')}\"\n 
)\n )\n .set_index(\"ingredient\")\n )\n\n\nemojis = get_emoji()\n\ncon = st.experimental_connection(\"ch\", type=IbisConnection)\n\nif n := st.radio(\"Ingredients\", options, index=1, horizontal=True):\n table, whole = st.columns((2, 1))\n idx = options.index(n)\n k = 0\n base = query()\n for m in options[: idx + 1]:\n df = base.iloc[k:m]\n if not k:\n word = \"first\"\n elif m < n:\n word = \"next\"\n else:\n word = \"last\"\n\n uniq_emojis = \" \".join(df.emoji[df.emoji != \"-\"].unique())\n table.header(f\"{word.title()} {m - k:d}\")\n table.subheader(uniq_emojis)\n\n table.dataframe(df, use_container_width=True)\n k = m\n\n b = base.iloc[:n]\n uniq_emojis = \" \".join(b.emoji[b.emoji != \"-\"].unique())\n whole.header(f\"Top {n:d}\")\n whole.subheader(uniq_emojis)\n whole.dataframe(b, use_container_width=True)\n", "path": "docs/example_streamlit_app/example_streamlit_app.py"}, {"content": "import datetime\nimport tempfile\nfrom pathlib import Path\nfrom typing import List, Optional\n\nimport pandas as pd\nimport requests\nimport sqlglot\nimport streamlit as st\n\nimport ibis\nfrom ibis import _\n\nONE_HOUR_IN_SECONDS = datetime.timedelta(hours=1).total_seconds()\n\nst.set_page_config(layout='wide')\n\n# Track all queries. We display them at the bottom of the page.\nibis.options.verbose = True\nsql_queries = []\nibis.options.verbose_log = lambda sql: sql_queries.append(sql)\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef support_matrix_df():\n resp = requests.get(\"https://ibis-project.org/backends/raw_support_matrix.csv\")\n resp.raise_for_status()\n\n with tempfile.NamedTemporaryFile() as f:\n f.write(resp.content)\n return (\n ibis.read_csv(f.name)\n .relabel({'FullOperation': 'full_operation'})\n .mutate(\n short_operation=_.full_operation.split(\".\")[-1],\n operation_category=_.full_operation.split(\".\")[-2],\n )\n .to_pandas()\n )\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef backends_info_df():\n return pd.DataFrame(\n {\n \"bigquery\": [\"string\", \"sql\"],\n \"clickhouse\": [\"string\", \"sql\"],\n \"dask\": [\"dataframe\"],\n \"datafusion\": [\"sql\"],\n \"druid\": [\"sqlalchemy\", \"sql\"],\n \"duckdb\": [\"sqlalchemy\", \"sql\"],\n \"impala\": [\"string\", \"sql\"],\n \"mssql\": [\"sqlalchemy\", \"sql\"],\n \"mysql\": [\"sqlalchemy\", \"sql\"],\n \"oracle\": [\"sqlalchemy\", \"sql\"],\n \"pandas\": [\"dataframe\"],\n \"polars\": [\"dataframe\"],\n \"postgres\": [\"sqlalchemy\", \"sql\"],\n \"pyspark\": [\"dataframe\"],\n \"snowflake\": [\"sqlalchemy\", \"sql\"],\n \"sqlite\": [\"sqlalchemy\", \"sql\"],\n \"trino\": [\"sqlalchemy\", \"sql\"],\n }.items(),\n columns=['backend_name', 'categories'],\n )\n\n\nbackend_info_table = ibis.memtable(backends_info_df())\nsupport_matrix_table = ibis.memtable(support_matrix_df())\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef get_all_backend_categories():\n return (\n backend_info_table.select(category=_.categories.unnest())\n .distinct()\n .order_by('category')['category']\n .to_pandas()\n .tolist()\n )\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef get_all_operation_categories():\n return (\n support_matrix_table.select(_.operation_category)\n .distinct()['operation_category']\n .to_pandas()\n .tolist()\n )\n\n\[email protected]_memo(ttl=ONE_HOUR_IN_SECONDS)\ndef get_backend_names(categories: Optional[List[str]] = None):\n backend_expr = backend_info_table.mutate(category=_.categories.unnest())\n if categories:\n backend_expr = backend_expr.filter(_.category.isin(categories))\n return (\n 
backend_expr.select(_.backend_name).distinct().backend_name.to_pandas().tolist()\n )\n\n\ndef get_selected_backend_name():\n backend_categories = get_all_backend_categories()\n selected_categories_names = st.sidebar.multiselect(\n 'Backend category',\n options=backend_categories,\n default=None,\n )\n if not selected_categories_names:\n return get_backend_names()\n return get_backend_names(selected_categories_names)\n\n\ndef get_selected_operation_categories():\n all_ops_categories = get_all_operation_categories()\n\n selected_ops_categories = st.sidebar.multiselect(\n 'Operation category',\n options=sorted(all_ops_categories),\n default=None,\n )\n if not selected_ops_categories:\n selected_ops_categories = all_ops_categories\n show_geospatial = st.sidebar.checkbox('Include Geospatial ops', value=True)\n if not show_geospatial and 'geospatial' in selected_ops_categories:\n selected_ops_categories.remove(\"geospatial\")\n return selected_ops_categories\n\n\ncurrent_backend_names = get_selected_backend_name()\nsort_by_coverage = st.sidebar.checkbox('Sort by API Coverage', value=False)\ncurrent_ops_categories = get_selected_operation_categories()\n\nhide_supported_by_all_backends = st.sidebar.selectbox(\n 'Operation compatibility',\n ['Show all', 'Show supported by all backends', 'Hide supported by all backends'],\n 0,\n)\nshow_full_ops_name = st.sidebar.checkbox('Show full operation name', False)\n\n# Start ibis expression\ntable_expr = support_matrix_table\n\n# Add index to result\nif show_full_ops_name:\n table_expr = table_expr.mutate(index=_.full_operation)\nelse:\n table_expr = table_expr.mutate(index=_.short_operation)\ntable_expr = table_expr.order_by(_.index)\n\n# Filter operations by selected categories\ntable_expr = table_expr.filter(_.operation_category.isin(current_ops_categories))\n\n# Filter operation by compatibility\nsupported_backend_count = sum(\n getattr(table_expr, backend_name).ifelse(1, 0)\n for backend_name in current_backend_names\n)\nif hide_supported_by_all_backends == 'Show supported by all backends':\n table_expr = table_expr.filter(\n supported_backend_count == len(current_backend_names)\n )\nelif hide_supported_by_all_backends == 'Hide supported by all backends':\n table_expr = table_expr.filter(\n supported_backend_count != len(current_backend_names)\n )\n\n# Show only selected backend\ntable_expr = table_expr[current_backend_names + [\"index\"]]\n\n# Execute query\ndf = table_expr.to_pandas()\ndf = df.set_index('index')\n\n# Display result\nall_visible_ops_count = len(df.index)\nif all_visible_ops_count:\n # Compute coverage\n coverage = (\n df.sum()\n .sort_values(ascending=False)\n .map(lambda n: f\"{n} ({round(100 * n / all_visible_ops_count)}%)\")\n .to_frame(name=\"API Coverage\")\n .T\n )\n\n table = pd.concat([coverage, df.replace({True: \"\u2714\", False: \"\ud83d\udeab\"})]).loc[\n :, slice(None) if sort_by_coverage else sorted(df.columns)\n ]\n st.dataframe(table)\nelse:\n st.write(\"No data\")\n\nwith st.expander(\"SQL queries\"):\n for sql_query in sql_queries:\n pretty_sql_query = sqlglot.transpile(\n sql_query, read='duckdb', write='duckdb', pretty=True\n )[0]\n st.code(pretty_sql_query, language='sql')\n\nwith st.expander(\"Source code\"):\n st.code(Path(__file__).read_text())\n", "path": "docs/backends/app/backend_info_app.py"}]}
| 2,923 | 475 |
gh_patches_debug_38017
|
rasdani/github-patches
|
git_diff
|
certbot__certbot-756
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Race condition in recent Travis builds
#726, #752 and #754 are affected by an annoying race condition that causes the Travis build to fail randomly (see https://travis-ci.org/letsencrypt/letsencrypt/builds/77715204, https://travis-ci.org/letsencrypt/letsencrypt/builds/78978888, https://travis-ci.org/letsencrypt/letsencrypt/builds/78990354, resp.).
It seems that the manual authenticator doesn't manage to bootstrap in time before we proceed to `simple_verify`.
--- END ISSUE ---
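One generic way to avoid this kind of bootstrap race, sketched here as a standalone helper rather than the project's actual fix, is to poll the spawned server's port until it accepts connections (with a timeout) instead of sleeping for a fixed number of seconds before verification.
```python
import socket
import time


def wait_for_port(port, host="localhost", timeout=30.0):
    """Poll until a TCP server accepts connections on host:port, or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=1.0)
        except socket.error:
            time.sleep(0.1)  # server not listening yet; retry shortly
        else:
            sock.close()
            return True
    return False


# Hypothetical usage in place of a fixed sleep after spawning the command:
# if not wait_for_port(simple_http_port):
#     raise errors.Error("Manual command did not start listening in time")
```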
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `letsencrypt/plugins/manual.py`
Content:
```
1 """Manual plugin."""
2 import os
3 import logging
4 import pipes
5 import shutil
6 import signal
7 import subprocess
8 import sys
9 import tempfile
10 import time
11
12 import zope.component
13 import zope.interface
14
15 from acme import challenges
16
17 from letsencrypt import errors
18 from letsencrypt import interfaces
19 from letsencrypt.plugins import common
20
21
22 logger = logging.getLogger(__name__)
23
24
25 class ManualAuthenticator(common.Plugin):
26 """Manual Authenticator.
27
28 .. todo:: Support for `~.challenges.DVSNI`.
29
30 """
31 zope.interface.implements(interfaces.IAuthenticator)
32 zope.interface.classProvides(interfaces.IPluginFactory)
33
34 description = "Manual Authenticator"
35
36 MESSAGE_TEMPLATE = """\
37 Make sure your web server displays the following content at
38 {uri} before continuing:
39
40 {achall.token}
41
42 Content-Type header MUST be set to {ct}.
43
44 If you don't have HTTP server configured, you can run the following
45 command on the target server (as root):
46
47 {command}
48 """
49
50 # "cd /tmp/letsencrypt" makes sure user doesn't serve /root,
51 # separate "public_html" ensures that cert.pem/key.pem are not
52 # served and makes it more obvious that Python command will serve
53 # anything recursively under the cwd
54
55 HTTP_TEMPLATE = """\
56 mkdir -p {root}/public_html/{response.URI_ROOT_PATH}
57 cd {root}/public_html
58 echo -n {validation} > {response.URI_ROOT_PATH}/{encoded_token}
59 # run only once per server:
60 $(command -v python2 || command -v python2.7 || command -v python2.6) -c \\
61 "import BaseHTTPServer, SimpleHTTPServer; \\
62 SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {{'': '{ct}'}}; \\
63 s = BaseHTTPServer.HTTPServer(('', {port}), SimpleHTTPServer.SimpleHTTPRequestHandler); \\
64 s.serve_forever()" """
65 """Non-TLS command template."""
66
67 # https://www.piware.de/2011/01/creating-an-https-server-in-python/
68 HTTPS_TEMPLATE = """\
69 mkdir -p {root}/public_html/{response.URI_ROOT_PATH}
70 cd {root}/public_html
71 echo -n {validation} > {response.URI_ROOT_PATH}/{encoded_token}
72 # run only once per server:
73 openssl req -new -newkey rsa:4096 -subj "/" -days 1 -nodes -x509 -keyout ../key.pem -out ../cert.pem
74 $(command -v python2 || command -v python2.7 || command -v python2.6) -c \\
75 "import BaseHTTPServer, SimpleHTTPServer, ssl; \\
76 SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {{'': '{ct}'}}; \\
77 s = BaseHTTPServer.HTTPServer(('', {port}), SimpleHTTPServer.SimpleHTTPRequestHandler); \\
78 s.socket = ssl.wrap_socket(s.socket, keyfile='../key.pem', certfile='../cert.pem'); \\
79 s.serve_forever()" """
80 """TLS command template.
81
82 According to the ACME specification, "the ACME server MUST ignore
83 the certificate provided by the HTTPS server", so the first command
84 generates temporary self-signed certificate.
85
86 """
87
88 def __init__(self, *args, **kwargs):
89 super(ManualAuthenticator, self).__init__(*args, **kwargs)
90 self.template = (self.HTTP_TEMPLATE if self.config.no_simple_http_tls
91 else self.HTTPS_TEMPLATE)
92 self._root = (tempfile.mkdtemp() if self.conf("test-mode")
93 else "/tmp/letsencrypt")
94 self._httpd = None
95
96 @classmethod
97 def add_parser_arguments(cls, add):
98 add("test-mode", action="store_true",
99 help="Test mode. Executes the manual command in subprocess. "
100 "Requires openssl to be installed unless --no-simple-http-tls.")
101
102 def prepare(self): # pylint: disable=missing-docstring,no-self-use
103 pass # pragma: no cover
104
105 def more_info(self): # pylint: disable=missing-docstring,no-self-use
106 return """\
107 This plugin requires user's manual intervention in setting up a HTTP
108 server for solving SimpleHTTP challenges and thus does not need to be
109 run as a privilidged process. Alternatively shows instructions on how
110 to use Python's built-in HTTP server and, in case of HTTPS, openssl
111 binary for temporary key/certificate generation.""".replace("\n", "")
112
113 def get_chall_pref(self, domain):
114 # pylint: disable=missing-docstring,no-self-use,unused-argument
115 return [challenges.SimpleHTTP]
116
117 def perform(self, achalls): # pylint: disable=missing-docstring
118 responses = []
119 # TODO: group achalls by the same socket.gethostbyname(_ex)
120 # and prompt only once per server (one "echo -n" per domain)
121 for achall in achalls:
122 responses.append(self._perform_single(achall))
123 return responses
124
125 def _perform_single(self, achall):
126 # same path for each challenge response would be easier for
127 # users, but will not work if multiple domains point at the
128 # same server: default command doesn't support virtual hosts
129 response, validation = achall.gen_response_and_validation(
130 tls=(not self.config.no_simple_http_tls))
131
132 command = self.template.format(
133 root=self._root, achall=achall, response=response,
134 validation=pipes.quote(validation.json_dumps()),
135 encoded_token=achall.chall.encode("token"),
136 ct=response.CONTENT_TYPE, port=(
137 response.port if self.config.simple_http_port is None
138 else self.config.simple_http_port))
139 if self.conf("test-mode"):
140 logger.debug("Test mode. Executing the manual command: %s", command)
141 try:
142 self._httpd = subprocess.Popen(
143 command,
144 # don't care about setting stdout and stderr,
145 # we're in test mode anyway
146 shell=True,
147 # "preexec_fn" is UNIX specific, but so is "command"
148 preexec_fn=os.setsid)
149 except OSError as error: # ValueError should not happen!
150 logger.debug(
151 "Couldn't execute manual command: %s", error, exc_info=True)
152 return False
153 logger.debug("Manual command running as PID %s.", self._httpd.pid)
154 # give it some time to bootstrap, before we try to verify
155 # (cert generation in case of simpleHttpS might take time)
156 time.sleep(4) # XXX
157 if self._httpd.poll() is not None:
158 raise errors.Error("Couldn't execute manual command")
159 else:
160 self._notify_and_wait(self.MESSAGE_TEMPLATE.format(
161 achall=achall, response=response,
162 uri=response.uri(achall.domain, achall.challb.chall),
163 ct=response.CONTENT_TYPE, command=command))
164
165 if response.simple_verify(
166 achall.chall, achall.domain,
167 achall.account_key.public_key(), self.config.simple_http_port):
168 return response
169 else:
170 if self.conf("test-mode") and self._httpd.poll() is not None:
171 # simply verify cause command failure...
172 return False
173 return None
174
175 def _notify_and_wait(self, message): # pylint: disable=no-self-use
176 # TODO: IDisplay wraps messages, breaking the command
177 #answer = zope.component.getUtility(interfaces.IDisplay).notification(
178 # message=message, height=25, pause=True)
179 sys.stdout.write(message)
180 raw_input("Press ENTER to continue")
181
182 def cleanup(self, achalls):
183 # pylint: disable=missing-docstring,no-self-use,unused-argument
184 if self.conf("test-mode"):
185 assert self._httpd is not None, (
186 "cleanup() must be called after perform()")
187 if self._httpd.poll() is None:
188 logger.debug("Terminating manual command process")
189 os.killpg(self._httpd.pid, signal.SIGTERM)
190 else:
191 logger.debug("Manual command process already terminated "
192 "with %s code", self._httpd.returncode)
193 shutil.rmtree(self._root)
194
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/letsencrypt/plugins/manual.py b/letsencrypt/plugins/manual.py
--- a/letsencrypt/plugins/manual.py
+++ b/letsencrypt/plugins/manual.py
@@ -4,6 +4,7 @@
import pipes
import shutil
import signal
+import socket
import subprocess
import sys
import tempfile
@@ -122,6 +123,20 @@
responses.append(self._perform_single(achall))
return responses
+ @classmethod
+ def _test_mode_busy_wait(cls, port):
+ while True:
+ time.sleep(1)
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ try:
+ sock.connect(("localhost", port))
+ except socket.error: # pragma: no cover
+ pass
+ else:
+ break
+ finally:
+ sock.close()
+
def _perform_single(self, achall):
# same path for each challenge response would be easier for
# users, but will not work if multiple domains point at the
@@ -129,13 +144,13 @@
response, validation = achall.gen_response_and_validation(
tls=(not self.config.no_simple_http_tls))
+ port = (response.port if self.config.simple_http_port is None
+ else int(self.config.simple_http_port))
command = self.template.format(
root=self._root, achall=achall, response=response,
validation=pipes.quote(validation.json_dumps()),
encoded_token=achall.chall.encode("token"),
- ct=response.CONTENT_TYPE, port=(
- response.port if self.config.simple_http_port is None
- else self.config.simple_http_port))
+ ct=response.CONTENT_TYPE, port=port)
if self.conf("test-mode"):
logger.debug("Test mode. Executing the manual command: %s", command)
try:
@@ -153,7 +168,7 @@
logger.debug("Manual command running as PID %s.", self._httpd.pid)
# give it some time to bootstrap, before we try to verify
# (cert generation in case of simpleHttpS might take time)
- time.sleep(4) # XXX
+ self._test_mode_busy_wait(port)
if self._httpd.poll() is not None:
raise errors.Error("Couldn't execute manual command")
else:
|
{"golden_diff": "diff --git a/letsencrypt/plugins/manual.py b/letsencrypt/plugins/manual.py\n--- a/letsencrypt/plugins/manual.py\n+++ b/letsencrypt/plugins/manual.py\n@@ -4,6 +4,7 @@\n import pipes\n import shutil\n import signal\n+import socket\n import subprocess\n import sys\n import tempfile\n@@ -122,6 +123,20 @@\n responses.append(self._perform_single(achall))\n return responses\n \n+ @classmethod\n+ def _test_mode_busy_wait(cls, port):\n+ while True:\n+ time.sleep(1)\n+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n+ try:\n+ sock.connect((\"localhost\", port))\n+ except socket.error: # pragma: no cover\n+ pass\n+ else:\n+ break\n+ finally:\n+ sock.close()\n+\n def _perform_single(self, achall):\n # same path for each challenge response would be easier for\n # users, but will not work if multiple domains point at the\n@@ -129,13 +144,13 @@\n response, validation = achall.gen_response_and_validation(\n tls=(not self.config.no_simple_http_tls))\n \n+ port = (response.port if self.config.simple_http_port is None\n+ else int(self.config.simple_http_port))\n command = self.template.format(\n root=self._root, achall=achall, response=response,\n validation=pipes.quote(validation.json_dumps()),\n encoded_token=achall.chall.encode(\"token\"),\n- ct=response.CONTENT_TYPE, port=(\n- response.port if self.config.simple_http_port is None\n- else self.config.simple_http_port))\n+ ct=response.CONTENT_TYPE, port=port)\n if self.conf(\"test-mode\"):\n logger.debug(\"Test mode. Executing the manual command: %s\", command)\n try:\n@@ -153,7 +168,7 @@\n logger.debug(\"Manual command running as PID %s.\", self._httpd.pid)\n # give it some time to bootstrap, before we try to verify\n # (cert generation in case of simpleHttpS might take time)\n- time.sleep(4) # XXX\n+ self._test_mode_busy_wait(port)\n if self._httpd.poll() is not None:\n raise errors.Error(\"Couldn't execute manual command\")\n else:\n", "issue": "Race condition in recent Travis builds\n#726, #752 and #754 are affected by annoying race condition that causes Travis build to fail randomly (see https://travis-ci.org/letsencrypt/letsencrypt/builds/77715204, https://travis-ci.org/letsencrypt/letsencrypt/builds/78978888, https://travis-ci.org/letsencrypt/letsencrypt/builds/78990354, resp.).\n\nIt seems that manual authenticator doesn't manage to bootstrap on time before we proceed to `simple_verify`.\n\n", "before_files": [{"content": "\"\"\"Manual plugin.\"\"\"\nimport os\nimport logging\nimport pipes\nimport shutil\nimport signal\nimport subprocess\nimport sys\nimport tempfile\nimport time\n\nimport zope.component\nimport zope.interface\n\nfrom acme import challenges\n\nfrom letsencrypt import errors\nfrom letsencrypt import interfaces\nfrom letsencrypt.plugins import common\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ManualAuthenticator(common.Plugin):\n \"\"\"Manual Authenticator.\n\n .. 
todo:: Support for `~.challenges.DVSNI`.\n\n \"\"\"\n zope.interface.implements(interfaces.IAuthenticator)\n zope.interface.classProvides(interfaces.IPluginFactory)\n\n description = \"Manual Authenticator\"\n\n MESSAGE_TEMPLATE = \"\"\"\\\nMake sure your web server displays the following content at\n{uri} before continuing:\n\n{achall.token}\n\nContent-Type header MUST be set to {ct}.\n\nIf you don't have HTTP server configured, you can run the following\ncommand on the target server (as root):\n\n{command}\n\"\"\"\n\n # \"cd /tmp/letsencrypt\" makes sure user doesn't serve /root,\n # separate \"public_html\" ensures that cert.pem/key.pem are not\n # served and makes it more obvious that Python command will serve\n # anything recursively under the cwd\n\n HTTP_TEMPLATE = \"\"\"\\\nmkdir -p {root}/public_html/{response.URI_ROOT_PATH}\ncd {root}/public_html\necho -n {validation} > {response.URI_ROOT_PATH}/{encoded_token}\n# run only once per server:\n$(command -v python2 || command -v python2.7 || command -v python2.6) -c \\\\\n\"import BaseHTTPServer, SimpleHTTPServer; \\\\\nSimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {{'': '{ct}'}}; \\\\\ns = BaseHTTPServer.HTTPServer(('', {port}), SimpleHTTPServer.SimpleHTTPRequestHandler); \\\\\ns.serve_forever()\" \"\"\"\n \"\"\"Non-TLS command template.\"\"\"\n\n # https://www.piware.de/2011/01/creating-an-https-server-in-python/\n HTTPS_TEMPLATE = \"\"\"\\\nmkdir -p {root}/public_html/{response.URI_ROOT_PATH}\ncd {root}/public_html\necho -n {validation} > {response.URI_ROOT_PATH}/{encoded_token}\n# run only once per server:\nopenssl req -new -newkey rsa:4096 -subj \"/\" -days 1 -nodes -x509 -keyout ../key.pem -out ../cert.pem\n$(command -v python2 || command -v python2.7 || command -v python2.6) -c \\\\\n\"import BaseHTTPServer, SimpleHTTPServer, ssl; \\\\\nSimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {{'': '{ct}'}}; \\\\\ns = BaseHTTPServer.HTTPServer(('', {port}), SimpleHTTPServer.SimpleHTTPRequestHandler); \\\\\ns.socket = ssl.wrap_socket(s.socket, keyfile='../key.pem', certfile='../cert.pem'); \\\\\ns.serve_forever()\" \"\"\"\n \"\"\"TLS command template.\n\n According to the ACME specification, \"the ACME server MUST ignore\n the certificate provided by the HTTPS server\", so the first command\n generates temporary self-signed certificate.\n\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super(ManualAuthenticator, self).__init__(*args, **kwargs)\n self.template = (self.HTTP_TEMPLATE if self.config.no_simple_http_tls\n else self.HTTPS_TEMPLATE)\n self._root = (tempfile.mkdtemp() if self.conf(\"test-mode\")\n else \"/tmp/letsencrypt\")\n self._httpd = None\n\n @classmethod\n def add_parser_arguments(cls, add):\n add(\"test-mode\", action=\"store_true\",\n help=\"Test mode. Executes the manual command in subprocess. \"\n \"Requires openssl to be installed unless --no-simple-http-tls.\")\n\n def prepare(self): # pylint: disable=missing-docstring,no-self-use\n pass # pragma: no cover\n\n def more_info(self): # pylint: disable=missing-docstring,no-self-use\n return \"\"\"\\\nThis plugin requires user's manual intervention in setting up a HTTP\nserver for solving SimpleHTTP challenges and thus does not need to be\nrun as a privilidged process. 
Alternatively shows instructions on how\nto use Python's built-in HTTP server and, in case of HTTPS, openssl\nbinary for temporary key/certificate generation.\"\"\".replace(\"\\n\", \"\")\n\n def get_chall_pref(self, domain):\n # pylint: disable=missing-docstring,no-self-use,unused-argument\n return [challenges.SimpleHTTP]\n\n def perform(self, achalls): # pylint: disable=missing-docstring\n responses = []\n # TODO: group achalls by the same socket.gethostbyname(_ex)\n # and prompt only once per server (one \"echo -n\" per domain)\n for achall in achalls:\n responses.append(self._perform_single(achall))\n return responses\n\n def _perform_single(self, achall):\n # same path for each challenge response would be easier for\n # users, but will not work if multiple domains point at the\n # same server: default command doesn't support virtual hosts\n response, validation = achall.gen_response_and_validation(\n tls=(not self.config.no_simple_http_tls))\n\n command = self.template.format(\n root=self._root, achall=achall, response=response,\n validation=pipes.quote(validation.json_dumps()),\n encoded_token=achall.chall.encode(\"token\"),\n ct=response.CONTENT_TYPE, port=(\n response.port if self.config.simple_http_port is None\n else self.config.simple_http_port))\n if self.conf(\"test-mode\"):\n logger.debug(\"Test mode. Executing the manual command: %s\", command)\n try:\n self._httpd = subprocess.Popen(\n command,\n # don't care about setting stdout and stderr,\n # we're in test mode anyway\n shell=True,\n # \"preexec_fn\" is UNIX specific, but so is \"command\"\n preexec_fn=os.setsid)\n except OSError as error: # ValueError should not happen!\n logger.debug(\n \"Couldn't execute manual command: %s\", error, exc_info=True)\n return False\n logger.debug(\"Manual command running as PID %s.\", self._httpd.pid)\n # give it some time to bootstrap, before we try to verify\n # (cert generation in case of simpleHttpS might take time)\n time.sleep(4) # XXX\n if self._httpd.poll() is not None:\n raise errors.Error(\"Couldn't execute manual command\")\n else:\n self._notify_and_wait(self.MESSAGE_TEMPLATE.format(\n achall=achall, response=response,\n uri=response.uri(achall.domain, achall.challb.chall),\n ct=response.CONTENT_TYPE, command=command))\n\n if response.simple_verify(\n achall.chall, achall.domain,\n achall.account_key.public_key(), self.config.simple_http_port):\n return response\n else:\n if self.conf(\"test-mode\") and self._httpd.poll() is not None:\n # simply verify cause command failure...\n return False\n return None\n\n def _notify_and_wait(self, message): # pylint: disable=no-self-use\n # TODO: IDisplay wraps messages, breaking the command\n #answer = zope.component.getUtility(interfaces.IDisplay).notification(\n # message=message, height=25, pause=True)\n sys.stdout.write(message)\n raw_input(\"Press ENTER to continue\")\n\n def cleanup(self, achalls):\n # pylint: disable=missing-docstring,no-self-use,unused-argument\n if self.conf(\"test-mode\"):\n assert self._httpd is not None, (\n \"cleanup() must be called after perform()\")\n if self._httpd.poll() is None:\n logger.debug(\"Terminating manual command process\")\n os.killpg(self._httpd.pid, signal.SIGTERM)\n else:\n logger.debug(\"Manual command process already terminated \"\n \"with %s code\", self._httpd.returncode)\n shutil.rmtree(self._root)\n", "path": "letsencrypt/plugins/manual.py"}], "after_files": [{"content": "\"\"\"Manual plugin.\"\"\"\nimport os\nimport logging\nimport pipes\nimport shutil\nimport signal\nimport 
socket\nimport subprocess\nimport sys\nimport tempfile\nimport time\n\nimport zope.component\nimport zope.interface\n\nfrom acme import challenges\n\nfrom letsencrypt import errors\nfrom letsencrypt import interfaces\nfrom letsencrypt.plugins import common\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ManualAuthenticator(common.Plugin):\n \"\"\"Manual Authenticator.\n\n .. todo:: Support for `~.challenges.DVSNI`.\n\n \"\"\"\n zope.interface.implements(interfaces.IAuthenticator)\n zope.interface.classProvides(interfaces.IPluginFactory)\n\n description = \"Manual Authenticator\"\n\n MESSAGE_TEMPLATE = \"\"\"\\\nMake sure your web server displays the following content at\n{uri} before continuing:\n\n{achall.token}\n\nContent-Type header MUST be set to {ct}.\n\nIf you don't have HTTP server configured, you can run the following\ncommand on the target server (as root):\n\n{command}\n\"\"\"\n\n # \"cd /tmp/letsencrypt\" makes sure user doesn't serve /root,\n # separate \"public_html\" ensures that cert.pem/key.pem are not\n # served and makes it more obvious that Python command will serve\n # anything recursively under the cwd\n\n HTTP_TEMPLATE = \"\"\"\\\nmkdir -p {root}/public_html/{response.URI_ROOT_PATH}\ncd {root}/public_html\necho -n {validation} > {response.URI_ROOT_PATH}/{encoded_token}\n# run only once per server:\n$(command -v python2 || command -v python2.7 || command -v python2.6) -c \\\\\n\"import BaseHTTPServer, SimpleHTTPServer; \\\\\nSimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {{'': '{ct}'}}; \\\\\ns = BaseHTTPServer.HTTPServer(('', {port}), SimpleHTTPServer.SimpleHTTPRequestHandler); \\\\\ns.serve_forever()\" \"\"\"\n \"\"\"Non-TLS command template.\"\"\"\n\n # https://www.piware.de/2011/01/creating-an-https-server-in-python/\n HTTPS_TEMPLATE = \"\"\"\\\nmkdir -p {root}/public_html/{response.URI_ROOT_PATH}\ncd {root}/public_html\necho -n {validation} > {response.URI_ROOT_PATH}/{encoded_token}\n# run only once per server:\nopenssl req -new -newkey rsa:4096 -subj \"/\" -days 1 -nodes -x509 -keyout ../key.pem -out ../cert.pem\n$(command -v python2 || command -v python2.7 || command -v python2.6) -c \\\\\n\"import BaseHTTPServer, SimpleHTTPServer, ssl; \\\\\nSimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {{'': '{ct}'}}; \\\\\ns = BaseHTTPServer.HTTPServer(('', {port}), SimpleHTTPServer.SimpleHTTPRequestHandler); \\\\\ns.socket = ssl.wrap_socket(s.socket, keyfile='../key.pem', certfile='../cert.pem'); \\\\\ns.serve_forever()\" \"\"\"\n \"\"\"TLS command template.\n\n According to the ACME specification, \"the ACME server MUST ignore\n the certificate provided by the HTTPS server\", so the first command\n generates temporary self-signed certificate.\n\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super(ManualAuthenticator, self).__init__(*args, **kwargs)\n self.template = (self.HTTP_TEMPLATE if self.config.no_simple_http_tls\n else self.HTTPS_TEMPLATE)\n self._root = (tempfile.mkdtemp() if self.conf(\"test-mode\")\n else \"/tmp/letsencrypt\")\n self._httpd = None\n\n @classmethod\n def add_parser_arguments(cls, add):\n add(\"test-mode\", action=\"store_true\",\n help=\"Test mode. Executes the manual command in subprocess. 
\"\n \"Requires openssl to be installed unless --no-simple-http-tls.\")\n\n def prepare(self): # pylint: disable=missing-docstring,no-self-use\n pass # pragma: no cover\n\n def more_info(self): # pylint: disable=missing-docstring,no-self-use\n return \"\"\"\\\nThis plugin requires user's manual intervention in setting up a HTTP\nserver for solving SimpleHTTP challenges and thus does not need to be\nrun as a privilidged process. Alternatively shows instructions on how\nto use Python's built-in HTTP server and, in case of HTTPS, openssl\nbinary for temporary key/certificate generation.\"\"\".replace(\"\\n\", \"\")\n\n def get_chall_pref(self, domain):\n # pylint: disable=missing-docstring,no-self-use,unused-argument\n return [challenges.SimpleHTTP]\n\n def perform(self, achalls): # pylint: disable=missing-docstring\n responses = []\n # TODO: group achalls by the same socket.gethostbyname(_ex)\n # and prompt only once per server (one \"echo -n\" per domain)\n for achall in achalls:\n responses.append(self._perform_single(achall))\n return responses\n\n @classmethod\n def _test_mode_busy_wait(cls, port):\n while True:\n time.sleep(1)\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n try:\n sock.connect((\"localhost\", port))\n except socket.error: # pragma: no cover\n pass\n else:\n break\n finally:\n sock.close()\n\n def _perform_single(self, achall):\n # same path for each challenge response would be easier for\n # users, but will not work if multiple domains point at the\n # same server: default command doesn't support virtual hosts\n response, validation = achall.gen_response_and_validation(\n tls=(not self.config.no_simple_http_tls))\n\n port = (response.port if self.config.simple_http_port is None\n else int(self.config.simple_http_port))\n command = self.template.format(\n root=self._root, achall=achall, response=response,\n validation=pipes.quote(validation.json_dumps()),\n encoded_token=achall.chall.encode(\"token\"),\n ct=response.CONTENT_TYPE, port=port)\n if self.conf(\"test-mode\"):\n logger.debug(\"Test mode. 
Executing the manual command: %s\", command)\n try:\n self._httpd = subprocess.Popen(\n command,\n # don't care about setting stdout and stderr,\n # we're in test mode anyway\n shell=True,\n # \"preexec_fn\" is UNIX specific, but so is \"command\"\n preexec_fn=os.setsid)\n except OSError as error: # ValueError should not happen!\n logger.debug(\n \"Couldn't execute manual command: %s\", error, exc_info=True)\n return False\n logger.debug(\"Manual command running as PID %s.\", self._httpd.pid)\n # give it some time to bootstrap, before we try to verify\n # (cert generation in case of simpleHttpS might take time)\n self._test_mode_busy_wait(port)\n if self._httpd.poll() is not None:\n raise errors.Error(\"Couldn't execute manual command\")\n else:\n self._notify_and_wait(self.MESSAGE_TEMPLATE.format(\n achall=achall, response=response,\n uri=response.uri(achall.domain, achall.challb.chall),\n ct=response.CONTENT_TYPE, command=command))\n\n if response.simple_verify(\n achall.chall, achall.domain,\n achall.account_key.public_key(), self.config.simple_http_port):\n return response\n else:\n if self.conf(\"test-mode\") and self._httpd.poll() is not None:\n # simply verify cause command failure...\n return False\n return None\n\n def _notify_and_wait(self, message): # pylint: disable=no-self-use\n # TODO: IDisplay wraps messages, breaking the command\n #answer = zope.component.getUtility(interfaces.IDisplay).notification(\n # message=message, height=25, pause=True)\n sys.stdout.write(message)\n raw_input(\"Press ENTER to continue\")\n\n def cleanup(self, achalls):\n # pylint: disable=missing-docstring,no-self-use,unused-argument\n if self.conf(\"test-mode\"):\n assert self._httpd is not None, (\n \"cleanup() must be called after perform()\")\n if self._httpd.poll() is None:\n logger.debug(\"Terminating manual command process\")\n os.killpg(self._httpd.pid, signal.SIGTERM)\n else:\n logger.debug(\"Manual command process already terminated \"\n \"with %s code\", self._httpd.returncode)\n shutil.rmtree(self._root)\n", "path": "letsencrypt/plugins/manual.py"}]}
| 2,615 | 517 |
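The golden diff above replaces the fixed `time.sleep(4)` with a busy wait that polls the challenge port until the spawned test server accepts connections. A minimal, self-contained sketch of that polling pattern follows; it is written for Python 3 (unlike the Python 2 plugin code), and the `wait_for_port` name and the 30-second timeout are illustrative additions, not part of the original patch.

```python
import socket
import time


def wait_for_port(port, host="localhost", timeout=30.0):
    """Block until a TCP server accepts connections on (host, port).

    Same polling idea as _test_mode_busy_wait in the patch above, with an
    explicit timeout added for illustration (the 30 s default is an
    assumption, not part of the original fix).
    """
    deadline = time.monotonic() + timeout
    while True:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect((host, port))
            return  # something is listening; verification can proceed
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"nothing listening on {host}:{port}")
            time.sleep(1)
        finally:
            sock.close()
```

Bounding the wait is a design choice for the sketch only: it keeps a subprocess that dies before binding the port from hanging the caller indefinitely, whereas the original patch relies on the later `poll()` check after the loop returns.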
gh_patches_debug_38802
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-6116
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove deprecated scrapy.utils.response.response_httprepr
Deprecated in 2.6.0.
Remove deprecated support from scrapy.utils.log.logformatter_adapter()
This helper was added in 1.0.0 (I think), it looks like the parts of it that don't show warnings are still useful, though I haven't looked closely at them.
--- END ISSUE ---
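If downstream code still needs the raw HTTP representation after the deprecated helper is removed, the byte assembly can simply be inlined. A minimal sketch, copying the body of the deprecated function shown in the files below into a local helper, might look like this:

```python
from twisted.web import http

from scrapy.utils.python import to_bytes


def response_httprepr(response):
    # Body copied from the deprecated helper in scrapy/utils/response.py
    # (shown below) so callers can keep it locally once the upstream
    # function is gone.
    values = [
        b"HTTP/1.1 ",
        to_bytes(str(response.status)),
        b" ",
        to_bytes(http.RESPONSES.get(response.status, b"")),
        b"\r\n",
    ]
    if response.headers:
        values.extend([response.headers.to_string(), b"\r\n"])
    values.extend([b"\r\n", response.body])
    return b"".join(values)
```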
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/log.py`
Content:
```
1 from __future__ import annotations
2
3 import logging
4 import sys
5 import warnings
6 from logging.config import dictConfig
7 from types import TracebackType
8 from typing import TYPE_CHECKING, Any, List, Optional, Tuple, Type, Union, cast
9
10 from twisted.python import log as twisted_log
11 from twisted.python.failure import Failure
12
13 import scrapy
14 from scrapy.exceptions import ScrapyDeprecationWarning
15 from scrapy.settings import Settings
16 from scrapy.utils.versions import scrapy_components_versions
17
18 if TYPE_CHECKING:
19 from scrapy.crawler import Crawler
20
21 logger = logging.getLogger(__name__)
22
23
24 def failure_to_exc_info(
25 failure: Failure,
26 ) -> Optional[Tuple[Type[BaseException], BaseException, Optional[TracebackType]]]:
27 """Extract exc_info from Failure instances"""
28 if isinstance(failure, Failure):
29 assert failure.type
30 assert failure.value
31 return (
32 failure.type,
33 failure.value,
34 cast(Optional[TracebackType], failure.getTracebackObject()),
35 )
36 return None
37
38
39 class TopLevelFormatter(logging.Filter):
40 """Keep only top level loggers's name (direct children from root) from
41 records.
42
43 This filter will replace Scrapy loggers' names with 'scrapy'. This mimics
44 the old Scrapy log behaviour and helps shortening long names.
45
46 Since it can't be set for just one logger (it won't propagate for its
47 children), it's going to be set in the root handler, with a parametrized
48 ``loggers`` list where it should act.
49 """
50
51 def __init__(self, loggers: Optional[List[str]] = None):
52 self.loggers: List[str] = loggers or []
53
54 def filter(self, record: logging.LogRecord) -> bool:
55 if any(record.name.startswith(logger + ".") for logger in self.loggers):
56 record.name = record.name.split(".", 1)[0]
57 return True
58
59
60 DEFAULT_LOGGING = {
61 "version": 1,
62 "disable_existing_loggers": False,
63 "loggers": {
64 "filelock": {
65 "level": "ERROR",
66 },
67 "hpack": {
68 "level": "ERROR",
69 },
70 "scrapy": {
71 "level": "DEBUG",
72 },
73 "twisted": {
74 "level": "ERROR",
75 },
76 },
77 }
78
79
80 def configure_logging(
81 settings: Union[Settings, dict, None] = None, install_root_handler: bool = True
82 ) -> None:
83 """
84 Initialize logging defaults for Scrapy.
85
86 :param settings: settings used to create and configure a handler for the
87 root logger (default: None).
88 :type settings: dict, :class:`~scrapy.settings.Settings` object or ``None``
89
90 :param install_root_handler: whether to install root logging handler
91 (default: True)
92 :type install_root_handler: bool
93
94 This function does:
95
96 - Route warnings and twisted logging through Python standard logging
97 - Assign DEBUG and ERROR level to Scrapy and Twisted loggers respectively
98 - Route stdout to log if LOG_STDOUT setting is True
99
100 When ``install_root_handler`` is True (default), this function also
101 creates a handler for the root logger according to given settings
102 (see :ref:`topics-logging-settings`). You can override default options
103 using ``settings`` argument. When ``settings`` is empty or None, defaults
104 are used.
105 """
106 if not sys.warnoptions:
107 # Route warnings through python logging
108 logging.captureWarnings(True)
109
110 observer = twisted_log.PythonLoggingObserver("twisted")
111 observer.start()
112
113 dictConfig(DEFAULT_LOGGING)
114
115 if isinstance(settings, dict) or settings is None:
116 settings = Settings(settings)
117
118 if settings.getbool("LOG_STDOUT"):
119 sys.stdout = StreamLogger(logging.getLogger("stdout")) # type: ignore[assignment]
120
121 if install_root_handler:
122 install_scrapy_root_handler(settings)
123
124
125 _scrapy_root_handler: Optional[logging.Handler] = None
126
127
128 def install_scrapy_root_handler(settings: Settings) -> None:
129 global _scrapy_root_handler
130
131 if (
132 _scrapy_root_handler is not None
133 and _scrapy_root_handler in logging.root.handlers
134 ):
135 logging.root.removeHandler(_scrapy_root_handler)
136 logging.root.setLevel(logging.NOTSET)
137 _scrapy_root_handler = _get_handler(settings)
138 logging.root.addHandler(_scrapy_root_handler)
139
140
141 def get_scrapy_root_handler() -> Optional[logging.Handler]:
142 return _scrapy_root_handler
143
144
145 def _get_handler(settings: Settings) -> logging.Handler:
146 """Return a log handler object according to settings"""
147 filename = settings.get("LOG_FILE")
148 handler: logging.Handler
149 if filename:
150 mode = "a" if settings.getbool("LOG_FILE_APPEND") else "w"
151 encoding = settings.get("LOG_ENCODING")
152 handler = logging.FileHandler(filename, mode=mode, encoding=encoding)
153 elif settings.getbool("LOG_ENABLED"):
154 handler = logging.StreamHandler()
155 else:
156 handler = logging.NullHandler()
157
158 formatter = logging.Formatter(
159 fmt=settings.get("LOG_FORMAT"), datefmt=settings.get("LOG_DATEFORMAT")
160 )
161 handler.setFormatter(formatter)
162 handler.setLevel(settings.get("LOG_LEVEL"))
163 if settings.getbool("LOG_SHORT_NAMES"):
164 handler.addFilter(TopLevelFormatter(["scrapy"]))
165 return handler
166
167
168 def log_scrapy_info(settings: Settings) -> None:
169 logger.info(
170 "Scrapy %(version)s started (bot: %(bot)s)",
171 {"version": scrapy.__version__, "bot": settings["BOT_NAME"]},
172 )
173 versions = [
174 f"{name} {version}"
175 for name, version in scrapy_components_versions()
176 if name != "Scrapy"
177 ]
178 logger.info("Versions: %(versions)s", {"versions": ", ".join(versions)})
179
180
181 def log_reactor_info() -> None:
182 from twisted.internet import reactor
183
184 logger.debug("Using reactor: %s.%s", reactor.__module__, reactor.__class__.__name__)
185 from twisted.internet import asyncioreactor
186
187 if isinstance(reactor, asyncioreactor.AsyncioSelectorReactor):
188 logger.debug(
189 "Using asyncio event loop: %s.%s",
190 reactor._asyncioEventloop.__module__,
191 reactor._asyncioEventloop.__class__.__name__,
192 )
193
194
195 class StreamLogger:
196 """Fake file-like stream object that redirects writes to a logger instance
197
198 Taken from:
199 https://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/
200 """
201
202 def __init__(self, logger: logging.Logger, log_level: int = logging.INFO):
203 self.logger: logging.Logger = logger
204 self.log_level: int = log_level
205 self.linebuf: str = ""
206
207 def write(self, buf: str) -> None:
208 for line in buf.rstrip().splitlines():
209 self.logger.log(self.log_level, line.rstrip())
210
211 def flush(self) -> None:
212 for h in self.logger.handlers:
213 h.flush()
214
215
216 class LogCounterHandler(logging.Handler):
217 """Record log levels count into a crawler stats"""
218
219 def __init__(self, crawler: Crawler, *args: Any, **kwargs: Any):
220 super().__init__(*args, **kwargs)
221 self.crawler: Crawler = crawler
222
223 def emit(self, record: logging.LogRecord) -> None:
224 sname = f"log_count/{record.levelname}"
225 assert self.crawler.stats
226 self.crawler.stats.inc_value(sname)
227
228
229 def logformatter_adapter(logkws: dict) -> Tuple[int, str, dict]:
230 """
231 Helper that takes the dictionary output from the methods in LogFormatter
232 and adapts it into a tuple of positional arguments for logger.log calls,
233 handling backward compatibility as well.
234 """
235 if not {"level", "msg", "args"} <= set(logkws):
236 warnings.warn("Missing keys in LogFormatter method", ScrapyDeprecationWarning)
237
238 if "format" in logkws:
239 warnings.warn(
240 "`format` key in LogFormatter methods has been "
241 "deprecated, use `msg` instead",
242 ScrapyDeprecationWarning,
243 )
244
245 level = logkws.get("level", logging.INFO)
246 message = logkws.get("format", logkws.get("msg"))
247 # NOTE: This also handles 'args' being an empty dict, that case doesn't
248 # play well in logger.log calls
249 args = logkws if not logkws.get("args") else logkws["args"]
250
251 return (level, message, args)
252
```
Path: `scrapy/utils/response.py`
Content:
```
1 """
2 This module provides some useful functions for working with
3 scrapy.http.Response objects
4 """
5 import os
6 import re
7 import tempfile
8 import webbrowser
9 from typing import Any, Callable, Iterable, Tuple, Union
10 from weakref import WeakKeyDictionary
11
12 from twisted.web import http
13 from w3lib import html
14
15 import scrapy
16 from scrapy.http.response import Response
17 from scrapy.utils.decorators import deprecated
18 from scrapy.utils.python import to_bytes, to_unicode
19
20 _baseurl_cache: "WeakKeyDictionary[Response, str]" = WeakKeyDictionary()
21
22
23 def get_base_url(response: "scrapy.http.response.text.TextResponse") -> str:
24 """Return the base url of the given response, joined with the response url"""
25 if response not in _baseurl_cache:
26 text = response.text[0:4096]
27 _baseurl_cache[response] = html.get_base_url(
28 text, response.url, response.encoding
29 )
30 return _baseurl_cache[response]
31
32
33 _metaref_cache: "WeakKeyDictionary[Response, Union[Tuple[None, None], Tuple[float, str]]]" = (
34 WeakKeyDictionary()
35 )
36
37
38 def get_meta_refresh(
39 response: "scrapy.http.response.text.TextResponse",
40 ignore_tags: Iterable[str] = ("script", "noscript"),
41 ) -> Union[Tuple[None, None], Tuple[float, str]]:
42 """Parse the http-equiv refresh parameter from the given response"""
43 if response not in _metaref_cache:
44 text = response.text[0:4096]
45 _metaref_cache[response] = html.get_meta_refresh(
46 text, response.url, response.encoding, ignore_tags=ignore_tags
47 )
48 return _metaref_cache[response]
49
50
51 def response_status_message(status: Union[bytes, float, int, str]) -> str:
52 """Return status code plus status text descriptive message"""
53 status_int = int(status)
54 message = http.RESPONSES.get(status_int, "Unknown Status")
55 return f"{status_int} {to_unicode(message)}"
56
57
58 @deprecated
59 def response_httprepr(response: Response) -> bytes:
60 """Return raw HTTP representation (as bytes) of the given response. This
61 is provided only for reference, since it's not the exact stream of bytes
62 that was received (that's not exposed by Twisted).
63 """
64 values = [
65 b"HTTP/1.1 ",
66 to_bytes(str(response.status)),
67 b" ",
68 to_bytes(http.RESPONSES.get(response.status, b"")),
69 b"\r\n",
70 ]
71 if response.headers:
72 values.extend([response.headers.to_string(), b"\r\n"])
73 values.extend([b"\r\n", response.body])
74 return b"".join(values)
75
76
77 def open_in_browser(
78 response: Union[
79 "scrapy.http.response.html.HtmlResponse",
80 "scrapy.http.response.text.TextResponse",
81 ],
82 _openfunc: Callable[[str], Any] = webbrowser.open,
83 ) -> Any:
84 """Open the given response in a local web browser, populating the <base>
85 tag for external links to work
86 """
87 from scrapy.http import HtmlResponse, TextResponse
88
89 # XXX: this implementation is a bit dirty and could be improved
90 body = response.body
91 if isinstance(response, HtmlResponse):
92 if b"<base" not in body:
93 repl = rf'\1<base href="{response.url}">'
94 body = re.sub(b"<!--.*?-->", b"", body, flags=re.DOTALL)
95 body = re.sub(rb"(<head(?:>|\s.*?>))", to_bytes(repl), body)
96 ext = ".html"
97 elif isinstance(response, TextResponse):
98 ext = ".txt"
99 else:
100 raise TypeError("Unsupported response type: " f"{response.__class__.__name__}")
101 fd, fname = tempfile.mkstemp(ext)
102 os.write(fd, body)
103 os.close(fd)
104 return _openfunc(f"file://{fname}")
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/utils/log.py b/scrapy/utils/log.py
--- a/scrapy/utils/log.py
+++ b/scrapy/utils/log.py
@@ -2,7 +2,6 @@
import logging
import sys
-import warnings
from logging.config import dictConfig
from types import TracebackType
from typing import TYPE_CHECKING, Any, List, Optional, Tuple, Type, Union, cast
@@ -11,7 +10,6 @@
from twisted.python.failure import Failure
import scrapy
-from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.settings import Settings
from scrapy.utils.versions import scrapy_components_versions
@@ -232,18 +230,9 @@
and adapts it into a tuple of positional arguments for logger.log calls,
handling backward compatibility as well.
"""
- if not {"level", "msg", "args"} <= set(logkws):
- warnings.warn("Missing keys in LogFormatter method", ScrapyDeprecationWarning)
-
- if "format" in logkws:
- warnings.warn(
- "`format` key in LogFormatter methods has been "
- "deprecated, use `msg` instead",
- ScrapyDeprecationWarning,
- )
level = logkws.get("level", logging.INFO)
- message = logkws.get("format", logkws.get("msg"))
+ message = logkws.get("msg") or ""
# NOTE: This also handles 'args' being an empty dict, that case doesn't
# play well in logger.log calls
args = logkws if not logkws.get("args") else logkws["args"]
diff --git a/scrapy/utils/response.py b/scrapy/utils/response.py
--- a/scrapy/utils/response.py
+++ b/scrapy/utils/response.py
@@ -14,7 +14,6 @@
import scrapy
from scrapy.http.response import Response
-from scrapy.utils.decorators import deprecated
from scrapy.utils.python import to_bytes, to_unicode
_baseurl_cache: "WeakKeyDictionary[Response, str]" = WeakKeyDictionary()
@@ -55,25 +54,6 @@
return f"{status_int} {to_unicode(message)}"
-@deprecated
-def response_httprepr(response: Response) -> bytes:
- """Return raw HTTP representation (as bytes) of the given response. This
- is provided only for reference, since it's not the exact stream of bytes
- that was received (that's not exposed by Twisted).
- """
- values = [
- b"HTTP/1.1 ",
- to_bytes(str(response.status)),
- b" ",
- to_bytes(http.RESPONSES.get(response.status, b"")),
- b"\r\n",
- ]
- if response.headers:
- values.extend([response.headers.to_string(), b"\r\n"])
- values.extend([b"\r\n", response.body])
- return b"".join(values)
-
-
def open_in_browser(
response: Union[
"scrapy.http.response.html.HtmlResponse",
|
{"golden_diff": "diff --git a/scrapy/utils/log.py b/scrapy/utils/log.py\n--- a/scrapy/utils/log.py\n+++ b/scrapy/utils/log.py\n@@ -2,7 +2,6 @@\n \n import logging\n import sys\n-import warnings\n from logging.config import dictConfig\n from types import TracebackType\n from typing import TYPE_CHECKING, Any, List, Optional, Tuple, Type, Union, cast\n@@ -11,7 +10,6 @@\n from twisted.python.failure import Failure\n \n import scrapy\n-from scrapy.exceptions import ScrapyDeprecationWarning\n from scrapy.settings import Settings\n from scrapy.utils.versions import scrapy_components_versions\n \n@@ -232,18 +230,9 @@\n and adapts it into a tuple of positional arguments for logger.log calls,\n handling backward compatibility as well.\n \"\"\"\n- if not {\"level\", \"msg\", \"args\"} <= set(logkws):\n- warnings.warn(\"Missing keys in LogFormatter method\", ScrapyDeprecationWarning)\n-\n- if \"format\" in logkws:\n- warnings.warn(\n- \"`format` key in LogFormatter methods has been \"\n- \"deprecated, use `msg` instead\",\n- ScrapyDeprecationWarning,\n- )\n \n level = logkws.get(\"level\", logging.INFO)\n- message = logkws.get(\"format\", logkws.get(\"msg\"))\n+ message = logkws.get(\"msg\") or \"\"\n # NOTE: This also handles 'args' being an empty dict, that case doesn't\n # play well in logger.log calls\n args = logkws if not logkws.get(\"args\") else logkws[\"args\"]\ndiff --git a/scrapy/utils/response.py b/scrapy/utils/response.py\n--- a/scrapy/utils/response.py\n+++ b/scrapy/utils/response.py\n@@ -14,7 +14,6 @@\n \n import scrapy\n from scrapy.http.response import Response\n-from scrapy.utils.decorators import deprecated\n from scrapy.utils.python import to_bytes, to_unicode\n \n _baseurl_cache: \"WeakKeyDictionary[Response, str]\" = WeakKeyDictionary()\n@@ -55,25 +54,6 @@\n return f\"{status_int} {to_unicode(message)}\"\n \n \n-@deprecated\n-def response_httprepr(response: Response) -> bytes:\n- \"\"\"Return raw HTTP representation (as bytes) of the given response. 
This\n- is provided only for reference, since it's not the exact stream of bytes\n- that was received (that's not exposed by Twisted).\n- \"\"\"\n- values = [\n- b\"HTTP/1.1 \",\n- to_bytes(str(response.status)),\n- b\" \",\n- to_bytes(http.RESPONSES.get(response.status, b\"\")),\n- b\"\\r\\n\",\n- ]\n- if response.headers:\n- values.extend([response.headers.to_string(), b\"\\r\\n\"])\n- values.extend([b\"\\r\\n\", response.body])\n- return b\"\".join(values)\n-\n-\n def open_in_browser(\n response: Union[\n \"scrapy.http.response.html.HtmlResponse\",\n", "issue": "Remove deprecated scrapy.utils.response.response_httprepr\nDeprecated in 2.6.0.\nRemove deprecated support from scrapy.utils.log.logformatter_adapter()\nThis helper was added in 1.0.0 (I think), it looks like the parts of it that don't show warnings are still useful, though I haven't looked closely at them.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport sys\nimport warnings\nfrom logging.config import dictConfig\nfrom types import TracebackType\nfrom typing import TYPE_CHECKING, Any, List, Optional, Tuple, Type, Union, cast\n\nfrom twisted.python import log as twisted_log\nfrom twisted.python.failure import Failure\n\nimport scrapy\nfrom scrapy.exceptions import ScrapyDeprecationWarning\nfrom scrapy.settings import Settings\nfrom scrapy.utils.versions import scrapy_components_versions\n\nif TYPE_CHECKING:\n from scrapy.crawler import Crawler\n\nlogger = logging.getLogger(__name__)\n\n\ndef failure_to_exc_info(\n failure: Failure,\n) -> Optional[Tuple[Type[BaseException], BaseException, Optional[TracebackType]]]:\n \"\"\"Extract exc_info from Failure instances\"\"\"\n if isinstance(failure, Failure):\n assert failure.type\n assert failure.value\n return (\n failure.type,\n failure.value,\n cast(Optional[TracebackType], failure.getTracebackObject()),\n )\n return None\n\n\nclass TopLevelFormatter(logging.Filter):\n \"\"\"Keep only top level loggers's name (direct children from root) from\n records.\n\n This filter will replace Scrapy loggers' names with 'scrapy'. 
This mimics\n the old Scrapy log behaviour and helps shortening long names.\n\n Since it can't be set for just one logger (it won't propagate for its\n children), it's going to be set in the root handler, with a parametrized\n ``loggers`` list where it should act.\n \"\"\"\n\n def __init__(self, loggers: Optional[List[str]] = None):\n self.loggers: List[str] = loggers or []\n\n def filter(self, record: logging.LogRecord) -> bool:\n if any(record.name.startswith(logger + \".\") for logger in self.loggers):\n record.name = record.name.split(\".\", 1)[0]\n return True\n\n\nDEFAULT_LOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"loggers\": {\n \"filelock\": {\n \"level\": \"ERROR\",\n },\n \"hpack\": {\n \"level\": \"ERROR\",\n },\n \"scrapy\": {\n \"level\": \"DEBUG\",\n },\n \"twisted\": {\n \"level\": \"ERROR\",\n },\n },\n}\n\n\ndef configure_logging(\n settings: Union[Settings, dict, None] = None, install_root_handler: bool = True\n) -> None:\n \"\"\"\n Initialize logging defaults for Scrapy.\n\n :param settings: settings used to create and configure a handler for the\n root logger (default: None).\n :type settings: dict, :class:`~scrapy.settings.Settings` object or ``None``\n\n :param install_root_handler: whether to install root logging handler\n (default: True)\n :type install_root_handler: bool\n\n This function does:\n\n - Route warnings and twisted logging through Python standard logging\n - Assign DEBUG and ERROR level to Scrapy and Twisted loggers respectively\n - Route stdout to log if LOG_STDOUT setting is True\n\n When ``install_root_handler`` is True (default), this function also\n creates a handler for the root logger according to given settings\n (see :ref:`topics-logging-settings`). You can override default options\n using ``settings`` argument. 
When ``settings`` is empty or None, defaults\n are used.\n \"\"\"\n if not sys.warnoptions:\n # Route warnings through python logging\n logging.captureWarnings(True)\n\n observer = twisted_log.PythonLoggingObserver(\"twisted\")\n observer.start()\n\n dictConfig(DEFAULT_LOGGING)\n\n if isinstance(settings, dict) or settings is None:\n settings = Settings(settings)\n\n if settings.getbool(\"LOG_STDOUT\"):\n sys.stdout = StreamLogger(logging.getLogger(\"stdout\")) # type: ignore[assignment]\n\n if install_root_handler:\n install_scrapy_root_handler(settings)\n\n\n_scrapy_root_handler: Optional[logging.Handler] = None\n\n\ndef install_scrapy_root_handler(settings: Settings) -> None:\n global _scrapy_root_handler\n\n if (\n _scrapy_root_handler is not None\n and _scrapy_root_handler in logging.root.handlers\n ):\n logging.root.removeHandler(_scrapy_root_handler)\n logging.root.setLevel(logging.NOTSET)\n _scrapy_root_handler = _get_handler(settings)\n logging.root.addHandler(_scrapy_root_handler)\n\n\ndef get_scrapy_root_handler() -> Optional[logging.Handler]:\n return _scrapy_root_handler\n\n\ndef _get_handler(settings: Settings) -> logging.Handler:\n \"\"\"Return a log handler object according to settings\"\"\"\n filename = settings.get(\"LOG_FILE\")\n handler: logging.Handler\n if filename:\n mode = \"a\" if settings.getbool(\"LOG_FILE_APPEND\") else \"w\"\n encoding = settings.get(\"LOG_ENCODING\")\n handler = logging.FileHandler(filename, mode=mode, encoding=encoding)\n elif settings.getbool(\"LOG_ENABLED\"):\n handler = logging.StreamHandler()\n else:\n handler = logging.NullHandler()\n\n formatter = logging.Formatter(\n fmt=settings.get(\"LOG_FORMAT\"), datefmt=settings.get(\"LOG_DATEFORMAT\")\n )\n handler.setFormatter(formatter)\n handler.setLevel(settings.get(\"LOG_LEVEL\"))\n if settings.getbool(\"LOG_SHORT_NAMES\"):\n handler.addFilter(TopLevelFormatter([\"scrapy\"]))\n return handler\n\n\ndef log_scrapy_info(settings: Settings) -> None:\n logger.info(\n \"Scrapy %(version)s started (bot: %(bot)s)\",\n {\"version\": scrapy.__version__, \"bot\": settings[\"BOT_NAME\"]},\n )\n versions = [\n f\"{name} {version}\"\n for name, version in scrapy_components_versions()\n if name != \"Scrapy\"\n ]\n logger.info(\"Versions: %(versions)s\", {\"versions\": \", \".join(versions)})\n\n\ndef log_reactor_info() -> None:\n from twisted.internet import reactor\n\n logger.debug(\"Using reactor: %s.%s\", reactor.__module__, reactor.__class__.__name__)\n from twisted.internet import asyncioreactor\n\n if isinstance(reactor, asyncioreactor.AsyncioSelectorReactor):\n logger.debug(\n \"Using asyncio event loop: %s.%s\",\n reactor._asyncioEventloop.__module__,\n reactor._asyncioEventloop.__class__.__name__,\n )\n\n\nclass StreamLogger:\n \"\"\"Fake file-like stream object that redirects writes to a logger instance\n\n Taken from:\n https://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/\n \"\"\"\n\n def __init__(self, logger: logging.Logger, log_level: int = logging.INFO):\n self.logger: logging.Logger = logger\n self.log_level: int = log_level\n self.linebuf: str = \"\"\n\n def write(self, buf: str) -> None:\n for line in buf.rstrip().splitlines():\n self.logger.log(self.log_level, line.rstrip())\n\n def flush(self) -> None:\n for h in self.logger.handlers:\n h.flush()\n\n\nclass LogCounterHandler(logging.Handler):\n \"\"\"Record log levels count into a crawler stats\"\"\"\n\n def __init__(self, crawler: Crawler, *args: Any, **kwargs: Any):\n super().__init__(*args, 
**kwargs)\n self.crawler: Crawler = crawler\n\n def emit(self, record: logging.LogRecord) -> None:\n sname = f\"log_count/{record.levelname}\"\n assert self.crawler.stats\n self.crawler.stats.inc_value(sname)\n\n\ndef logformatter_adapter(logkws: dict) -> Tuple[int, str, dict]:\n \"\"\"\n Helper that takes the dictionary output from the methods in LogFormatter\n and adapts it into a tuple of positional arguments for logger.log calls,\n handling backward compatibility as well.\n \"\"\"\n if not {\"level\", \"msg\", \"args\"} <= set(logkws):\n warnings.warn(\"Missing keys in LogFormatter method\", ScrapyDeprecationWarning)\n\n if \"format\" in logkws:\n warnings.warn(\n \"`format` key in LogFormatter methods has been \"\n \"deprecated, use `msg` instead\",\n ScrapyDeprecationWarning,\n )\n\n level = logkws.get(\"level\", logging.INFO)\n message = logkws.get(\"format\", logkws.get(\"msg\"))\n # NOTE: This also handles 'args' being an empty dict, that case doesn't\n # play well in logger.log calls\n args = logkws if not logkws.get(\"args\") else logkws[\"args\"]\n\n return (level, message, args)\n", "path": "scrapy/utils/log.py"}, {"content": "\"\"\"\nThis module provides some useful functions for working with\nscrapy.http.Response objects\n\"\"\"\nimport os\nimport re\nimport tempfile\nimport webbrowser\nfrom typing import Any, Callable, Iterable, Tuple, Union\nfrom weakref import WeakKeyDictionary\n\nfrom twisted.web import http\nfrom w3lib import html\n\nimport scrapy\nfrom scrapy.http.response import Response\nfrom scrapy.utils.decorators import deprecated\nfrom scrapy.utils.python import to_bytes, to_unicode\n\n_baseurl_cache: \"WeakKeyDictionary[Response, str]\" = WeakKeyDictionary()\n\n\ndef get_base_url(response: \"scrapy.http.response.text.TextResponse\") -> str:\n \"\"\"Return the base url of the given response, joined with the response url\"\"\"\n if response not in _baseurl_cache:\n text = response.text[0:4096]\n _baseurl_cache[response] = html.get_base_url(\n text, response.url, response.encoding\n )\n return _baseurl_cache[response]\n\n\n_metaref_cache: \"WeakKeyDictionary[Response, Union[Tuple[None, None], Tuple[float, str]]]\" = (\n WeakKeyDictionary()\n)\n\n\ndef get_meta_refresh(\n response: \"scrapy.http.response.text.TextResponse\",\n ignore_tags: Iterable[str] = (\"script\", \"noscript\"),\n) -> Union[Tuple[None, None], Tuple[float, str]]:\n \"\"\"Parse the http-equiv refresh parameter from the given response\"\"\"\n if response not in _metaref_cache:\n text = response.text[0:4096]\n _metaref_cache[response] = html.get_meta_refresh(\n text, response.url, response.encoding, ignore_tags=ignore_tags\n )\n return _metaref_cache[response]\n\n\ndef response_status_message(status: Union[bytes, float, int, str]) -> str:\n \"\"\"Return status code plus status text descriptive message\"\"\"\n status_int = int(status)\n message = http.RESPONSES.get(status_int, \"Unknown Status\")\n return f\"{status_int} {to_unicode(message)}\"\n\n\n@deprecated\ndef response_httprepr(response: Response) -> bytes:\n \"\"\"Return raw HTTP representation (as bytes) of the given response. 
This\n is provided only for reference, since it's not the exact stream of bytes\n that was received (that's not exposed by Twisted).\n \"\"\"\n values = [\n b\"HTTP/1.1 \",\n to_bytes(str(response.status)),\n b\" \",\n to_bytes(http.RESPONSES.get(response.status, b\"\")),\n b\"\\r\\n\",\n ]\n if response.headers:\n values.extend([response.headers.to_string(), b\"\\r\\n\"])\n values.extend([b\"\\r\\n\", response.body])\n return b\"\".join(values)\n\n\ndef open_in_browser(\n response: Union[\n \"scrapy.http.response.html.HtmlResponse\",\n \"scrapy.http.response.text.TextResponse\",\n ],\n _openfunc: Callable[[str], Any] = webbrowser.open,\n) -> Any:\n \"\"\"Open the given response in a local web browser, populating the <base>\n tag for external links to work\n \"\"\"\n from scrapy.http import HtmlResponse, TextResponse\n\n # XXX: this implementation is a bit dirty and could be improved\n body = response.body\n if isinstance(response, HtmlResponse):\n if b\"<base\" not in body:\n repl = rf'\\1<base href=\"{response.url}\">'\n body = re.sub(b\"<!--.*?-->\", b\"\", body, flags=re.DOTALL)\n body = re.sub(rb\"(<head(?:>|\\s.*?>))\", to_bytes(repl), body)\n ext = \".html\"\n elif isinstance(response, TextResponse):\n ext = \".txt\"\n else:\n raise TypeError(\"Unsupported response type: \" f\"{response.__class__.__name__}\")\n fd, fname = tempfile.mkstemp(ext)\n os.write(fd, body)\n os.close(fd)\n return _openfunc(f\"file://{fname}\")\n", "path": "scrapy/utils/response.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport sys\nfrom logging.config import dictConfig\nfrom types import TracebackType\nfrom typing import TYPE_CHECKING, Any, List, Optional, Tuple, Type, Union, cast\n\nfrom twisted.python import log as twisted_log\nfrom twisted.python.failure import Failure\n\nimport scrapy\nfrom scrapy.settings import Settings\nfrom scrapy.utils.versions import scrapy_components_versions\n\nif TYPE_CHECKING:\n from scrapy.crawler import Crawler\n\nlogger = logging.getLogger(__name__)\n\n\ndef failure_to_exc_info(\n failure: Failure,\n) -> Optional[Tuple[Type[BaseException], BaseException, Optional[TracebackType]]]:\n \"\"\"Extract exc_info from Failure instances\"\"\"\n if isinstance(failure, Failure):\n assert failure.type\n assert failure.value\n return (\n failure.type,\n failure.value,\n cast(Optional[TracebackType], failure.getTracebackObject()),\n )\n return None\n\n\nclass TopLevelFormatter(logging.Filter):\n \"\"\"Keep only top level loggers's name (direct children from root) from\n records.\n\n This filter will replace Scrapy loggers' names with 'scrapy'. 
This mimics\n the old Scrapy log behaviour and helps shortening long names.\n\n Since it can't be set for just one logger (it won't propagate for its\n children), it's going to be set in the root handler, with a parametrized\n ``loggers`` list where it should act.\n \"\"\"\n\n def __init__(self, loggers: Optional[List[str]] = None):\n self.loggers: List[str] = loggers or []\n\n def filter(self, record: logging.LogRecord) -> bool:\n if any(record.name.startswith(logger + \".\") for logger in self.loggers):\n record.name = record.name.split(\".\", 1)[0]\n return True\n\n\nDEFAULT_LOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"loggers\": {\n \"filelock\": {\n \"level\": \"ERROR\",\n },\n \"hpack\": {\n \"level\": \"ERROR\",\n },\n \"scrapy\": {\n \"level\": \"DEBUG\",\n },\n \"twisted\": {\n \"level\": \"ERROR\",\n },\n },\n}\n\n\ndef configure_logging(\n settings: Union[Settings, dict, None] = None, install_root_handler: bool = True\n) -> None:\n \"\"\"\n Initialize logging defaults for Scrapy.\n\n :param settings: settings used to create and configure a handler for the\n root logger (default: None).\n :type settings: dict, :class:`~scrapy.settings.Settings` object or ``None``\n\n :param install_root_handler: whether to install root logging handler\n (default: True)\n :type install_root_handler: bool\n\n This function does:\n\n - Route warnings and twisted logging through Python standard logging\n - Assign DEBUG and ERROR level to Scrapy and Twisted loggers respectively\n - Route stdout to log if LOG_STDOUT setting is True\n\n When ``install_root_handler`` is True (default), this function also\n creates a handler for the root logger according to given settings\n (see :ref:`topics-logging-settings`). You can override default options\n using ``settings`` argument. 
When ``settings`` is empty or None, defaults\n are used.\n \"\"\"\n if not sys.warnoptions:\n # Route warnings through python logging\n logging.captureWarnings(True)\n\n observer = twisted_log.PythonLoggingObserver(\"twisted\")\n observer.start()\n\n dictConfig(DEFAULT_LOGGING)\n\n if isinstance(settings, dict) or settings is None:\n settings = Settings(settings)\n\n if settings.getbool(\"LOG_STDOUT\"):\n sys.stdout = StreamLogger(logging.getLogger(\"stdout\")) # type: ignore[assignment]\n\n if install_root_handler:\n install_scrapy_root_handler(settings)\n\n\n_scrapy_root_handler: Optional[logging.Handler] = None\n\n\ndef install_scrapy_root_handler(settings: Settings) -> None:\n global _scrapy_root_handler\n\n if (\n _scrapy_root_handler is not None\n and _scrapy_root_handler in logging.root.handlers\n ):\n logging.root.removeHandler(_scrapy_root_handler)\n logging.root.setLevel(logging.NOTSET)\n _scrapy_root_handler = _get_handler(settings)\n logging.root.addHandler(_scrapy_root_handler)\n\n\ndef get_scrapy_root_handler() -> Optional[logging.Handler]:\n return _scrapy_root_handler\n\n\ndef _get_handler(settings: Settings) -> logging.Handler:\n \"\"\"Return a log handler object according to settings\"\"\"\n filename = settings.get(\"LOG_FILE\")\n handler: logging.Handler\n if filename:\n mode = \"a\" if settings.getbool(\"LOG_FILE_APPEND\") else \"w\"\n encoding = settings.get(\"LOG_ENCODING\")\n handler = logging.FileHandler(filename, mode=mode, encoding=encoding)\n elif settings.getbool(\"LOG_ENABLED\"):\n handler = logging.StreamHandler()\n else:\n handler = logging.NullHandler()\n\n formatter = logging.Formatter(\n fmt=settings.get(\"LOG_FORMAT\"), datefmt=settings.get(\"LOG_DATEFORMAT\")\n )\n handler.setFormatter(formatter)\n handler.setLevel(settings.get(\"LOG_LEVEL\"))\n if settings.getbool(\"LOG_SHORT_NAMES\"):\n handler.addFilter(TopLevelFormatter([\"scrapy\"]))\n return handler\n\n\ndef log_scrapy_info(settings: Settings) -> None:\n logger.info(\n \"Scrapy %(version)s started (bot: %(bot)s)\",\n {\"version\": scrapy.__version__, \"bot\": settings[\"BOT_NAME\"]},\n )\n versions = [\n f\"{name} {version}\"\n for name, version in scrapy_components_versions()\n if name != \"Scrapy\"\n ]\n logger.info(\"Versions: %(versions)s\", {\"versions\": \", \".join(versions)})\n\n\ndef log_reactor_info() -> None:\n from twisted.internet import reactor\n\n logger.debug(\"Using reactor: %s.%s\", reactor.__module__, reactor.__class__.__name__)\n from twisted.internet import asyncioreactor\n\n if isinstance(reactor, asyncioreactor.AsyncioSelectorReactor):\n logger.debug(\n \"Using asyncio event loop: %s.%s\",\n reactor._asyncioEventloop.__module__,\n reactor._asyncioEventloop.__class__.__name__,\n )\n\n\nclass StreamLogger:\n \"\"\"Fake file-like stream object that redirects writes to a logger instance\n\n Taken from:\n https://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/\n \"\"\"\n\n def __init__(self, logger: logging.Logger, log_level: int = logging.INFO):\n self.logger: logging.Logger = logger\n self.log_level: int = log_level\n self.linebuf: str = \"\"\n\n def write(self, buf: str) -> None:\n for line in buf.rstrip().splitlines():\n self.logger.log(self.log_level, line.rstrip())\n\n def flush(self) -> None:\n for h in self.logger.handlers:\n h.flush()\n\n\nclass LogCounterHandler(logging.Handler):\n \"\"\"Record log levels count into a crawler stats\"\"\"\n\n def __init__(self, crawler: Crawler, *args: Any, **kwargs: Any):\n super().__init__(*args, 
**kwargs)\n self.crawler: Crawler = crawler\n\n def emit(self, record: logging.LogRecord) -> None:\n sname = f\"log_count/{record.levelname}\"\n assert self.crawler.stats\n self.crawler.stats.inc_value(sname)\n\n\ndef logformatter_adapter(logkws: dict) -> Tuple[int, str, dict]:\n \"\"\"\n Helper that takes the dictionary output from the methods in LogFormatter\n and adapts it into a tuple of positional arguments for logger.log calls,\n handling backward compatibility as well.\n \"\"\"\n\n level = logkws.get(\"level\", logging.INFO)\n message = logkws.get(\"msg\") or \"\"\n # NOTE: This also handles 'args' being an empty dict, that case doesn't\n # play well in logger.log calls\n args = logkws if not logkws.get(\"args\") else logkws[\"args\"]\n\n return (level, message, args)\n", "path": "scrapy/utils/log.py"}, {"content": "\"\"\"\nThis module provides some useful functions for working with\nscrapy.http.Response objects\n\"\"\"\nimport os\nimport re\nimport tempfile\nimport webbrowser\nfrom typing import Any, Callable, Iterable, Tuple, Union\nfrom weakref import WeakKeyDictionary\n\nfrom twisted.web import http\nfrom w3lib import html\n\nimport scrapy\nfrom scrapy.http.response import Response\nfrom scrapy.utils.python import to_bytes, to_unicode\n\n_baseurl_cache: \"WeakKeyDictionary[Response, str]\" = WeakKeyDictionary()\n\n\ndef get_base_url(response: \"scrapy.http.response.text.TextResponse\") -> str:\n \"\"\"Return the base url of the given response, joined with the response url\"\"\"\n if response not in _baseurl_cache:\n text = response.text[0:4096]\n _baseurl_cache[response] = html.get_base_url(\n text, response.url, response.encoding\n )\n return _baseurl_cache[response]\n\n\n_metaref_cache: \"WeakKeyDictionary[Response, Union[Tuple[None, None], Tuple[float, str]]]\" = (\n WeakKeyDictionary()\n)\n\n\ndef get_meta_refresh(\n response: \"scrapy.http.response.text.TextResponse\",\n ignore_tags: Iterable[str] = (\"script\", \"noscript\"),\n) -> Union[Tuple[None, None], Tuple[float, str]]:\n \"\"\"Parse the http-equiv refresh parameter from the given response\"\"\"\n if response not in _metaref_cache:\n text = response.text[0:4096]\n _metaref_cache[response] = html.get_meta_refresh(\n text, response.url, response.encoding, ignore_tags=ignore_tags\n )\n return _metaref_cache[response]\n\n\ndef response_status_message(status: Union[bytes, float, int, str]) -> str:\n \"\"\"Return status code plus status text descriptive message\"\"\"\n status_int = int(status)\n message = http.RESPONSES.get(status_int, \"Unknown Status\")\n return f\"{status_int} {to_unicode(message)}\"\n\n\ndef open_in_browser(\n response: Union[\n \"scrapy.http.response.html.HtmlResponse\",\n \"scrapy.http.response.text.TextResponse\",\n ],\n _openfunc: Callable[[str], Any] = webbrowser.open,\n) -> Any:\n \"\"\"Open the given response in a local web browser, populating the <base>\n tag for external links to work\n \"\"\"\n from scrapy.http import HtmlResponse, TextResponse\n\n # XXX: this implementation is a bit dirty and could be improved\n body = response.body\n if isinstance(response, HtmlResponse):\n if b\"<base\" not in body:\n repl = rf'\\1<base href=\"{response.url}\">'\n body = re.sub(b\"<!--.*?-->\", b\"\", body, flags=re.DOTALL)\n body = re.sub(rb\"(<head(?:>|\\s.*?>))\", to_bytes(repl), body)\n ext = \".html\"\n elif isinstance(response, TextResponse):\n ext = \".txt\"\n else:\n raise TypeError(\"Unsupported response type: \" f\"{response.__class__.__name__}\")\n fd, fname = tempfile.mkstemp(ext)\n 
os.write(fd, body)\n os.close(fd)\n return _openfunc(f\"file://{fname}\")\n", "path": "scrapy/utils/response.py"}]}
| 3,942 | 667 |
gh_patches_debug_66594
|
rasdani/github-patches
|
git_diff
|
StackStorm__st2-5038
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Web Hook Rules check http headers in case sensitive manner
## SUMMARY
The case used for the header name in trigger.headers[<headername>] in a web-hook rule is treated in a case-sensitive manner. HTTP headers are case insensitive, so the case of the name in the headers should not be relevant.
### STACKSTORM VERSION
3.2.0
##### OS, environment, install method
Seen on one-line install and HA
## Steps to reproduce the problem
See https://github.com/StackStorm/st2/issues/4995 for initial case.
1. Configure webhookrule with trigger.headers['X-GitHub-Event']
2. Send in header via curl of X-GitHub-Event to webhook
3. Rule doesn't match
4. Change rule to be trigger.headers['X-Github-Event'] - rule matches
## Expected Results
As HTTP headers are case insensitive, it should not matter what case is used in the rule. Regardless of the casing of the incoming header or of the rule criterion, they should match.
## Actual Results
Only matched when rule defined as X-Github-Event
--- END ISSUE ---
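For context, the mismatch is reproducible with a plain Python dict, which is how the webhook payload stores headers. This snippet is illustrative only and not part of the original report; the header value is made up:

```python
# The webhook payload stores headers in a plain dict, and dict lookups are
# case sensitive -- unlike HTTP header semantics.
headers = {"X-Github-Event": "push"}   # the casing the rule had to use, per the report

print(headers.get("X-GitHub-Event"))   # None  -> a criterion written this way never matches
print(headers.get("X-Github-Event"))   # 'push' -> only the exact stored casing matches
```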
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2api/st2api/controllers/v1/webhooks.py`
Content:
```
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import six
17 import uuid
18 from six.moves.urllib import parse as urlparse # pylint: disable=import-error
19 from six.moves import http_client
20
21 from st2common import log as logging
22 from st2common.constants.auth import (
23 HEADER_API_KEY_ATTRIBUTE_NAME,
24 HEADER_ATTRIBUTE_NAME,
25 )
26 from st2common.constants.triggers import WEBHOOK_TRIGGER_TYPES
27 from st2common.models.api.trace import TraceContext
28 from st2common.models.api.trigger import TriggerAPI
29 from st2common.models.db.webhook import WebhookDB
30 import st2common.services.triggers as trigger_service
31 from st2common.rbac.types import PermissionType
32 from st2common.rbac.backends import get_rbac_backend
33 from st2common.services.triggerwatcher import TriggerWatcher
34 from st2common.services.trigger_dispatcher import TriggerDispatcherService
35 from st2common.router import abort
36 from st2common.router import Response
37 from st2common.util.jsonify import get_json_type_for_python_value
38
39 LOG = logging.getLogger(__name__)
40
41 TRACE_TAG_HEADER = "St2-Trace-Tag"
42
43
44 class HooksHolder(object):
45 """
46 Maintains a hook to TriggerDB mapping.
47 """
48
49 def __init__(self):
50 self._triggers_by_hook = {}
51
52 def __contains__(self, key):
53 return key in self._triggers_by_hook
54
55 def add_hook(self, hook, trigger):
56 if hook not in self._triggers_by_hook:
57 self._triggers_by_hook[hook] = []
58 self._triggers_by_hook[hook].append(trigger)
59
60 def remove_hook(self, hook, trigger):
61 if hook not in self._triggers_by_hook:
62 return False
63 remove_index = -1
64 for idx, item in enumerate(self._triggers_by_hook[hook]):
65 if item["id"] == trigger["id"]:
66 remove_index = idx
67 break
68 if remove_index < 0:
69 return False
70 self._triggers_by_hook[hook].pop(remove_index)
71 if not self._triggers_by_hook[hook]:
72 del self._triggers_by_hook[hook]
73 return True
74
75 def get_triggers_for_hook(self, hook):
76 return self._triggers_by_hook.get(hook, [])
77
78 def get_all(self):
79 triggers = []
80 for values in six.itervalues(self._triggers_by_hook):
81 triggers.extend(values)
82 return triggers
83
84
85 class WebhooksController(object):
86 def __init__(self, *args, **kwargs):
87 self._hooks = HooksHolder()
88 self._base_url = "/webhooks/"
89 self._trigger_types = list(WEBHOOK_TRIGGER_TYPES.keys())
90
91 self._trigger_dispatcher_service = TriggerDispatcherService(LOG)
92 queue_suffix = self.__class__.__name__
93 self._trigger_watcher = TriggerWatcher(
94 create_handler=self._handle_create_trigger,
95 update_handler=self._handle_update_trigger,
96 delete_handler=self._handle_delete_trigger,
97 trigger_types=self._trigger_types,
98 queue_suffix=queue_suffix,
99 exclusive=True,
100 )
101 self._trigger_watcher.start()
102 self._register_webhook_trigger_types()
103
104 def get_all(self):
105 # Return only the hooks known by this controller.
106 return self._hooks.get_all()
107
108 def get_one(self, url, requester_user):
109 triggers = self._hooks.get_triggers_for_hook(url)
110
111 if not triggers:
112 abort(http_client.NOT_FOUND)
113 return
114
115 permission_type = PermissionType.WEBHOOK_VIEW
116 rbac_utils = get_rbac_backend().get_utils_class()
117 rbac_utils.assert_user_has_resource_db_permission(
118 user_db=requester_user,
119 resource_db=WebhookDB(name=url),
120 permission_type=permission_type,
121 )
122
123 # For demonstration purpose return 1st
124 return triggers[0]
125
126 def post(self, hook, webhook_body_api, headers, requester_user):
127 body = webhook_body_api.data
128
129 permission_type = PermissionType.WEBHOOK_SEND
130 rbac_utils = get_rbac_backend().get_utils_class()
131 rbac_utils.assert_user_has_resource_db_permission(
132 user_db=requester_user,
133 resource_db=WebhookDB(name=hook),
134 permission_type=permission_type,
135 )
136
137 headers = self._get_headers_as_dict(headers)
138 headers = self._filter_authentication_headers(headers)
139
140 # If webhook contains a trace-tag use that else create create a unique trace-tag.
141 trace_context = self._create_trace_context(
142 trace_tag=headers.pop(TRACE_TAG_HEADER, None), hook=hook
143 )
144
145 if hook == "st2" or hook == "st2/":
146 # When using st2 or system webhook, body needs to always be a dict
147 if not isinstance(body, dict):
148 type_string = get_json_type_for_python_value(body)
149 msg = "Webhook body needs to be an object, got: %s" % (type_string)
150 raise ValueError(msg)
151
152 trigger = body.get("trigger", None)
153 payload = body.get("payload", None)
154
155 if not trigger:
156 msg = "Trigger not specified."
157 return abort(http_client.BAD_REQUEST, msg)
158
159 self._trigger_dispatcher_service.dispatch_with_context(
160 trigger=trigger,
161 payload=payload,
162 trace_context=trace_context,
163 throw_on_validation_error=True,
164 )
165 else:
166 if not self._is_valid_hook(hook):
167 self._log_request("Invalid hook.", headers, body)
168 msg = "Webhook %s not registered with st2" % hook
169 return abort(http_client.NOT_FOUND, msg)
170
171 triggers = self._hooks.get_triggers_for_hook(hook)
172 payload = {}
173
174 payload["headers"] = headers
175 payload["body"] = body
176
177 # Dispatch trigger instance for each of the trigger found
178 for trigger_dict in triggers:
179 # TODO: Instead of dispatching the whole dict we should just
180 # dispatch TriggerDB.ref or similar
181 self._trigger_dispatcher_service.dispatch_with_context(
182 trigger=trigger_dict,
183 payload=payload,
184 trace_context=trace_context,
185 throw_on_validation_error=True,
186 )
187
188 # NOTE: For url encoded request bodies, values will be bytes instead of unicode and this
189 # doesn't work with orjson so we first need to "cast" all the values from bytes to unicode
190
191 return Response(json=body, status=http_client.ACCEPTED)
192
193 def _is_valid_hook(self, hook):
194 # TODO: Validate hook payload with payload_schema.
195 return hook in self._hooks
196
197 def _register_webhook_trigger_types(self):
198 for trigger_type in WEBHOOK_TRIGGER_TYPES.values():
199 trigger_service.create_trigger_type_db(trigger_type)
200
201 def _create_trace_context(self, trace_tag, hook):
202 # if no trace_tag then create a unique one
203 if not trace_tag:
204 trace_tag = "webhook-%s-%s" % (hook, uuid.uuid4().hex)
205 return TraceContext(trace_tag=trace_tag)
206
207 def add_trigger(self, trigger):
208 # NOTE: trigger is a dictionary
209 # Note: Permission checking for creating and deleting a webhook is done during rule
210 # creation
211 url = self._get_normalized_url(trigger)
212 LOG.info("Listening to endpoint: %s", urlparse.urljoin(self._base_url, url))
213 self._hooks.add_hook(url, trigger)
214
215 def update_trigger(self, trigger):
216 pass
217
218 def remove_trigger(self, trigger):
219 # Note: Permission checking for creating and deleting a webhook is done during rule
220 # creation
221 url = self._get_normalized_url(trigger)
222
223 removed = self._hooks.remove_hook(url, trigger)
224 if removed:
225 LOG.info(
226 "Stop listening to endpoint: %s", urlparse.urljoin(self._base_url, url)
227 )
228
229 def _get_normalized_url(self, trigger):
230 """
231 remove the trailing and leading / so that the hook url and those coming
232 from trigger parameters end up being the same.
233 """
234 return trigger["parameters"]["url"].strip("/")
235
236 def _get_headers_as_dict(self, headers):
237 headers_dict = {}
238 for key, value in headers.items():
239 headers_dict[key] = value
240 return headers_dict
241
242 def _filter_authentication_headers(self, headers):
243 auth_headers = [HEADER_API_KEY_ATTRIBUTE_NAME, HEADER_ATTRIBUTE_NAME, "Cookie"]
244 return {key: value for key, value in headers.items() if key not in auth_headers}
245
246 def _log_request(self, msg, headers, body, log_method=LOG.debug):
247 headers = self._get_headers_as_dict(headers)
248 body = str(body)
249 log_method("%s\n\trequest.header: %s.\n\trequest.body: %s.", msg, headers, body)
250
251 ##############################################
252 # Event handler methods for the trigger events
253 ##############################################
254
255 def _handle_create_trigger(self, trigger):
256 LOG.debug('Calling "add_trigger" method (trigger.type=%s)' % (trigger.type))
257 trigger = self._sanitize_trigger(trigger=trigger)
258 self.add_trigger(trigger=trigger)
259
260 def _handle_update_trigger(self, trigger):
261 LOG.debug('Calling "update_trigger" method (trigger.type=%s)' % (trigger.type))
262 trigger = self._sanitize_trigger(trigger=trigger)
263 self.update_trigger(trigger=trigger)
264
265 def _handle_delete_trigger(self, trigger):
266 LOG.debug('Calling "remove_trigger" method (trigger.type=%s)' % (trigger.type))
267 trigger = self._sanitize_trigger(trigger=trigger)
268 self.remove_trigger(trigger=trigger)
269
270 def _sanitize_trigger(self, trigger):
271 sanitized = TriggerAPI.from_model(trigger).to_dict()
272 return sanitized
273
274
275 webhooks_controller = WebhooksController()
276
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/st2api/st2api/controllers/v1/webhooks.py b/st2api/st2api/controllers/v1/webhooks.py
--- a/st2api/st2api/controllers/v1/webhooks.py
+++ b/st2api/st2api/controllers/v1/webhooks.py
@@ -172,6 +172,7 @@
payload = {}
payload["headers"] = headers
+ payload["headers_lower"] = {k.lower(): v for k, v in headers.items()}
payload["body"] = body
# Dispatch trigger instance for each of the trigger found
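A standalone sketch of what this one-line change enables. The helper function below is hypothetical (st2 builds the payload inline in `WebhooksController.post()`); it is shown only to make the matching behaviour concrete:

```python
def build_webhook_payload(headers: dict, body: dict) -> dict:
    """Hypothetical helper mirroring the payload built in WebhooksController.post()."""
    return {
        "headers": headers,
        # Lower-cased copy so rule criteria such as
        # trigger.headers_lower['x-github-event'] match any sender casing.
        "headers_lower": {k.lower(): v for k, v in headers.items()},
        "body": body,
    }


payload = build_webhook_payload({"X-GitHub-Event": "push"}, {"ref": "refs/heads/main"})
assert payload["headers_lower"]["x-github-event"] == "push"
```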
|
{"golden_diff": "diff --git a/st2api/st2api/controllers/v1/webhooks.py b/st2api/st2api/controllers/v1/webhooks.py\n--- a/st2api/st2api/controllers/v1/webhooks.py\n+++ b/st2api/st2api/controllers/v1/webhooks.py\n@@ -172,6 +172,7 @@\n payload = {}\n \n payload[\"headers\"] = headers\n+ payload[\"headers_lower\"] = {k.lower(): v for k, v in headers.items()}\n payload[\"body\"] = body\n \n # Dispatch trigger instance for each of the trigger found\n", "issue": "Web Hook Rules check http headers in case sensitive manner\n## SUMMARY\r\n\r\nThe case used for the header name in trigger.headers[<headername>] in a web-hook rule is treated in a case sensitive manner. HTTP headers are case insensitive so the case of the name in the headers should not e relevant.\r\n\r\n### STACKSTORM VERSION\r\n\r\n3.2.0\r\n\r\n##### OS, environment, install method\r\n\r\nSeen on one-line install and HA\r\n\r\n## Steps to reproduce the problem\r\n\r\nSee https://github.com/StackStorm/st2/issues/4995 for initial case.\r\n1. Configure webhookrule with trigger.headers['X-GitHub-Event']\r\n2. Send in header via curl of X-GitHub-Event to webhook\r\n3. Rule doesn't match\r\n4. Change rule to be trigger.headers['X-Github-Event'] - rule matches\r\n\r\n## Expected Results\r\n\r\nAs http headers are case insensitive then it should not matter what case is used in the rule. Therefore no matter what case header is or case of rule then they should match.\r\n\r\n## Actual Results\r\n\r\nOnly matched when rule defined as X-Github-Event\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport six\nimport uuid\nfrom six.moves.urllib import parse as urlparse # pylint: disable=import-error\nfrom six.moves import http_client\n\nfrom st2common import log as logging\nfrom st2common.constants.auth import (\n HEADER_API_KEY_ATTRIBUTE_NAME,\n HEADER_ATTRIBUTE_NAME,\n)\nfrom st2common.constants.triggers import WEBHOOK_TRIGGER_TYPES\nfrom st2common.models.api.trace import TraceContext\nfrom st2common.models.api.trigger import TriggerAPI\nfrom st2common.models.db.webhook import WebhookDB\nimport st2common.services.triggers as trigger_service\nfrom st2common.rbac.types import PermissionType\nfrom st2common.rbac.backends import get_rbac_backend\nfrom st2common.services.triggerwatcher import TriggerWatcher\nfrom st2common.services.trigger_dispatcher import TriggerDispatcherService\nfrom st2common.router import abort\nfrom st2common.router import Response\nfrom st2common.util.jsonify import get_json_type_for_python_value\n\nLOG = logging.getLogger(__name__)\n\nTRACE_TAG_HEADER = \"St2-Trace-Tag\"\n\n\nclass HooksHolder(object):\n \"\"\"\n Maintains a hook to TriggerDB mapping.\n \"\"\"\n\n def __init__(self):\n self._triggers_by_hook = {}\n\n def __contains__(self, key):\n return key in self._triggers_by_hook\n\n def add_hook(self, hook, trigger):\n if hook not in self._triggers_by_hook:\n self._triggers_by_hook[hook] = []\n 
self._triggers_by_hook[hook].append(trigger)\n\n def remove_hook(self, hook, trigger):\n if hook not in self._triggers_by_hook:\n return False\n remove_index = -1\n for idx, item in enumerate(self._triggers_by_hook[hook]):\n if item[\"id\"] == trigger[\"id\"]:\n remove_index = idx\n break\n if remove_index < 0:\n return False\n self._triggers_by_hook[hook].pop(remove_index)\n if not self._triggers_by_hook[hook]:\n del self._triggers_by_hook[hook]\n return True\n\n def get_triggers_for_hook(self, hook):\n return self._triggers_by_hook.get(hook, [])\n\n def get_all(self):\n triggers = []\n for values in six.itervalues(self._triggers_by_hook):\n triggers.extend(values)\n return triggers\n\n\nclass WebhooksController(object):\n def __init__(self, *args, **kwargs):\n self._hooks = HooksHolder()\n self._base_url = \"/webhooks/\"\n self._trigger_types = list(WEBHOOK_TRIGGER_TYPES.keys())\n\n self._trigger_dispatcher_service = TriggerDispatcherService(LOG)\n queue_suffix = self.__class__.__name__\n self._trigger_watcher = TriggerWatcher(\n create_handler=self._handle_create_trigger,\n update_handler=self._handle_update_trigger,\n delete_handler=self._handle_delete_trigger,\n trigger_types=self._trigger_types,\n queue_suffix=queue_suffix,\n exclusive=True,\n )\n self._trigger_watcher.start()\n self._register_webhook_trigger_types()\n\n def get_all(self):\n # Return only the hooks known by this controller.\n return self._hooks.get_all()\n\n def get_one(self, url, requester_user):\n triggers = self._hooks.get_triggers_for_hook(url)\n\n if not triggers:\n abort(http_client.NOT_FOUND)\n return\n\n permission_type = PermissionType.WEBHOOK_VIEW\n rbac_utils = get_rbac_backend().get_utils_class()\n rbac_utils.assert_user_has_resource_db_permission(\n user_db=requester_user,\n resource_db=WebhookDB(name=url),\n permission_type=permission_type,\n )\n\n # For demonstration purpose return 1st\n return triggers[0]\n\n def post(self, hook, webhook_body_api, headers, requester_user):\n body = webhook_body_api.data\n\n permission_type = PermissionType.WEBHOOK_SEND\n rbac_utils = get_rbac_backend().get_utils_class()\n rbac_utils.assert_user_has_resource_db_permission(\n user_db=requester_user,\n resource_db=WebhookDB(name=hook),\n permission_type=permission_type,\n )\n\n headers = self._get_headers_as_dict(headers)\n headers = self._filter_authentication_headers(headers)\n\n # If webhook contains a trace-tag use that else create create a unique trace-tag.\n trace_context = self._create_trace_context(\n trace_tag=headers.pop(TRACE_TAG_HEADER, None), hook=hook\n )\n\n if hook == \"st2\" or hook == \"st2/\":\n # When using st2 or system webhook, body needs to always be a dict\n if not isinstance(body, dict):\n type_string = get_json_type_for_python_value(body)\n msg = \"Webhook body needs to be an object, got: %s\" % (type_string)\n raise ValueError(msg)\n\n trigger = body.get(\"trigger\", None)\n payload = body.get(\"payload\", None)\n\n if not trigger:\n msg = \"Trigger not specified.\"\n return abort(http_client.BAD_REQUEST, msg)\n\n self._trigger_dispatcher_service.dispatch_with_context(\n trigger=trigger,\n payload=payload,\n trace_context=trace_context,\n throw_on_validation_error=True,\n )\n else:\n if not self._is_valid_hook(hook):\n self._log_request(\"Invalid hook.\", headers, body)\n msg = \"Webhook %s not registered with st2\" % hook\n return abort(http_client.NOT_FOUND, msg)\n\n triggers = self._hooks.get_triggers_for_hook(hook)\n payload = {}\n\n payload[\"headers\"] = headers\n payload[\"body\"] = 
body\n\n # Dispatch trigger instance for each of the trigger found\n for trigger_dict in triggers:\n # TODO: Instead of dispatching the whole dict we should just\n # dispatch TriggerDB.ref or similar\n self._trigger_dispatcher_service.dispatch_with_context(\n trigger=trigger_dict,\n payload=payload,\n trace_context=trace_context,\n throw_on_validation_error=True,\n )\n\n # NOTE: For url encoded request bodies, values will be bytes instead of unicode and this\n # doesn't work with orjson so we first need to \"cast\" all the values from bytes to unicode\n\n return Response(json=body, status=http_client.ACCEPTED)\n\n def _is_valid_hook(self, hook):\n # TODO: Validate hook payload with payload_schema.\n return hook in self._hooks\n\n def _register_webhook_trigger_types(self):\n for trigger_type in WEBHOOK_TRIGGER_TYPES.values():\n trigger_service.create_trigger_type_db(trigger_type)\n\n def _create_trace_context(self, trace_tag, hook):\n # if no trace_tag then create a unique one\n if not trace_tag:\n trace_tag = \"webhook-%s-%s\" % (hook, uuid.uuid4().hex)\n return TraceContext(trace_tag=trace_tag)\n\n def add_trigger(self, trigger):\n # NOTE: trigger is a dictionary\n # Note: Permission checking for creating and deleting a webhook is done during rule\n # creation\n url = self._get_normalized_url(trigger)\n LOG.info(\"Listening to endpoint: %s\", urlparse.urljoin(self._base_url, url))\n self._hooks.add_hook(url, trigger)\n\n def update_trigger(self, trigger):\n pass\n\n def remove_trigger(self, trigger):\n # Note: Permission checking for creating and deleting a webhook is done during rule\n # creation\n url = self._get_normalized_url(trigger)\n\n removed = self._hooks.remove_hook(url, trigger)\n if removed:\n LOG.info(\n \"Stop listening to endpoint: %s\", urlparse.urljoin(self._base_url, url)\n )\n\n def _get_normalized_url(self, trigger):\n \"\"\"\n remove the trailing and leading / so that the hook url and those coming\n from trigger parameters end up being the same.\n \"\"\"\n return trigger[\"parameters\"][\"url\"].strip(\"/\")\n\n def _get_headers_as_dict(self, headers):\n headers_dict = {}\n for key, value in headers.items():\n headers_dict[key] = value\n return headers_dict\n\n def _filter_authentication_headers(self, headers):\n auth_headers = [HEADER_API_KEY_ATTRIBUTE_NAME, HEADER_ATTRIBUTE_NAME, \"Cookie\"]\n return {key: value for key, value in headers.items() if key not in auth_headers}\n\n def _log_request(self, msg, headers, body, log_method=LOG.debug):\n headers = self._get_headers_as_dict(headers)\n body = str(body)\n log_method(\"%s\\n\\trequest.header: %s.\\n\\trequest.body: %s.\", msg, headers, body)\n\n ##############################################\n # Event handler methods for the trigger events\n ##############################################\n\n def _handle_create_trigger(self, trigger):\n LOG.debug('Calling \"add_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.add_trigger(trigger=trigger)\n\n def _handle_update_trigger(self, trigger):\n LOG.debug('Calling \"update_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.update_trigger(trigger=trigger)\n\n def _handle_delete_trigger(self, trigger):\n LOG.debug('Calling \"remove_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.remove_trigger(trigger=trigger)\n\n def _sanitize_trigger(self, trigger):\n sanitized = 
TriggerAPI.from_model(trigger).to_dict()\n return sanitized\n\n\nwebhooks_controller = WebhooksController()\n", "path": "st2api/st2api/controllers/v1/webhooks.py"}], "after_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport six\nimport uuid\nfrom six.moves.urllib import parse as urlparse # pylint: disable=import-error\nfrom six.moves import http_client\n\nfrom st2common import log as logging\nfrom st2common.constants.auth import (\n HEADER_API_KEY_ATTRIBUTE_NAME,\n HEADER_ATTRIBUTE_NAME,\n)\nfrom st2common.constants.triggers import WEBHOOK_TRIGGER_TYPES\nfrom st2common.models.api.trace import TraceContext\nfrom st2common.models.api.trigger import TriggerAPI\nfrom st2common.models.db.webhook import WebhookDB\nimport st2common.services.triggers as trigger_service\nfrom st2common.rbac.types import PermissionType\nfrom st2common.rbac.backends import get_rbac_backend\nfrom st2common.services.triggerwatcher import TriggerWatcher\nfrom st2common.services.trigger_dispatcher import TriggerDispatcherService\nfrom st2common.router import abort\nfrom st2common.router import Response\nfrom st2common.util.jsonify import get_json_type_for_python_value\n\nLOG = logging.getLogger(__name__)\n\nTRACE_TAG_HEADER = \"St2-Trace-Tag\"\n\n\nclass HooksHolder(object):\n \"\"\"\n Maintains a hook to TriggerDB mapping.\n \"\"\"\n\n def __init__(self):\n self._triggers_by_hook = {}\n\n def __contains__(self, key):\n return key in self._triggers_by_hook\n\n def add_hook(self, hook, trigger):\n if hook not in self._triggers_by_hook:\n self._triggers_by_hook[hook] = []\n self._triggers_by_hook[hook].append(trigger)\n\n def remove_hook(self, hook, trigger):\n if hook not in self._triggers_by_hook:\n return False\n remove_index = -1\n for idx, item in enumerate(self._triggers_by_hook[hook]):\n if item[\"id\"] == trigger[\"id\"]:\n remove_index = idx\n break\n if remove_index < 0:\n return False\n self._triggers_by_hook[hook].pop(remove_index)\n if not self._triggers_by_hook[hook]:\n del self._triggers_by_hook[hook]\n return True\n\n def get_triggers_for_hook(self, hook):\n return self._triggers_by_hook.get(hook, [])\n\n def get_all(self):\n triggers = []\n for values in six.itervalues(self._triggers_by_hook):\n triggers.extend(values)\n return triggers\n\n\nclass WebhooksController(object):\n def __init__(self, *args, **kwargs):\n self._hooks = HooksHolder()\n self._base_url = \"/webhooks/\"\n self._trigger_types = list(WEBHOOK_TRIGGER_TYPES.keys())\n\n self._trigger_dispatcher_service = TriggerDispatcherService(LOG)\n queue_suffix = self.__class__.__name__\n self._trigger_watcher = TriggerWatcher(\n create_handler=self._handle_create_trigger,\n update_handler=self._handle_update_trigger,\n delete_handler=self._handle_delete_trigger,\n trigger_types=self._trigger_types,\n queue_suffix=queue_suffix,\n exclusive=True,\n )\n self._trigger_watcher.start()\n self._register_webhook_trigger_types()\n\n def 
get_all(self):\n # Return only the hooks known by this controller.\n return self._hooks.get_all()\n\n def get_one(self, url, requester_user):\n triggers = self._hooks.get_triggers_for_hook(url)\n\n if not triggers:\n abort(http_client.NOT_FOUND)\n return\n\n permission_type = PermissionType.WEBHOOK_VIEW\n rbac_utils = get_rbac_backend().get_utils_class()\n rbac_utils.assert_user_has_resource_db_permission(\n user_db=requester_user,\n resource_db=WebhookDB(name=url),\n permission_type=permission_type,\n )\n\n # For demonstration purpose return 1st\n return triggers[0]\n\n def post(self, hook, webhook_body_api, headers, requester_user):\n body = webhook_body_api.data\n\n permission_type = PermissionType.WEBHOOK_SEND\n rbac_utils = get_rbac_backend().get_utils_class()\n rbac_utils.assert_user_has_resource_db_permission(\n user_db=requester_user,\n resource_db=WebhookDB(name=hook),\n permission_type=permission_type,\n )\n\n headers = self._get_headers_as_dict(headers)\n headers = self._filter_authentication_headers(headers)\n\n # If webhook contains a trace-tag use that else create create a unique trace-tag.\n trace_context = self._create_trace_context(\n trace_tag=headers.pop(TRACE_TAG_HEADER, None), hook=hook\n )\n\n if hook == \"st2\" or hook == \"st2/\":\n # When using st2 or system webhook, body needs to always be a dict\n if not isinstance(body, dict):\n type_string = get_json_type_for_python_value(body)\n msg = \"Webhook body needs to be an object, got: %s\" % (type_string)\n raise ValueError(msg)\n\n trigger = body.get(\"trigger\", None)\n payload = body.get(\"payload\", None)\n\n if not trigger:\n msg = \"Trigger not specified.\"\n return abort(http_client.BAD_REQUEST, msg)\n\n self._trigger_dispatcher_service.dispatch_with_context(\n trigger=trigger,\n payload=payload,\n trace_context=trace_context,\n throw_on_validation_error=True,\n )\n else:\n if not self._is_valid_hook(hook):\n self._log_request(\"Invalid hook.\", headers, body)\n msg = \"Webhook %s not registered with st2\" % hook\n return abort(http_client.NOT_FOUND, msg)\n\n triggers = self._hooks.get_triggers_for_hook(hook)\n payload = {}\n\n payload[\"headers\"] = headers\n payload[\"headers_lower\"] = {k.lower(): v for k, v in headers.items()}\n payload[\"body\"] = body\n\n # Dispatch trigger instance for each of the trigger found\n for trigger_dict in triggers:\n # TODO: Instead of dispatching the whole dict we should just\n # dispatch TriggerDB.ref or similar\n self._trigger_dispatcher_service.dispatch_with_context(\n trigger=trigger_dict,\n payload=payload,\n trace_context=trace_context,\n throw_on_validation_error=True,\n )\n\n # NOTE: For url encoded request bodies, values will be bytes instead of unicode and this\n # doesn't work with orjson so we first need to \"cast\" all the values from bytes to unicode\n\n return Response(json=body, status=http_client.ACCEPTED)\n\n def _is_valid_hook(self, hook):\n # TODO: Validate hook payload with payload_schema.\n return hook in self._hooks\n\n def _register_webhook_trigger_types(self):\n for trigger_type in WEBHOOK_TRIGGER_TYPES.values():\n trigger_service.create_trigger_type_db(trigger_type)\n\n def _create_trace_context(self, trace_tag, hook):\n # if no trace_tag then create a unique one\n if not trace_tag:\n trace_tag = \"webhook-%s-%s\" % (hook, uuid.uuid4().hex)\n return TraceContext(trace_tag=trace_tag)\n\n def add_trigger(self, trigger):\n # NOTE: trigger is a dictionary\n # Note: Permission checking for creating and deleting a webhook is done during rule\n # creation\n 
url = self._get_normalized_url(trigger)\n LOG.info(\"Listening to endpoint: %s\", urlparse.urljoin(self._base_url, url))\n self._hooks.add_hook(url, trigger)\n\n def update_trigger(self, trigger):\n pass\n\n def remove_trigger(self, trigger):\n # Note: Permission checking for creating and deleting a webhook is done during rule\n # creation\n url = self._get_normalized_url(trigger)\n\n removed = self._hooks.remove_hook(url, trigger)\n if removed:\n LOG.info(\n \"Stop listening to endpoint: %s\", urlparse.urljoin(self._base_url, url)\n )\n\n def _get_normalized_url(self, trigger):\n \"\"\"\n remove the trailing and leading / so that the hook url and those coming\n from trigger parameters end up being the same.\n \"\"\"\n return trigger[\"parameters\"][\"url\"].strip(\"/\")\n\n def _get_headers_as_dict(self, headers):\n headers_dict = {}\n for key, value in headers.items():\n headers_dict[key] = value\n return headers_dict\n\n def _filter_authentication_headers(self, headers):\n auth_headers = [HEADER_API_KEY_ATTRIBUTE_NAME, HEADER_ATTRIBUTE_NAME, \"Cookie\"]\n return {key: value for key, value in headers.items() if key not in auth_headers}\n\n def _log_request(self, msg, headers, body, log_method=LOG.debug):\n headers = self._get_headers_as_dict(headers)\n body = str(body)\n log_method(\"%s\\n\\trequest.header: %s.\\n\\trequest.body: %s.\", msg, headers, body)\n\n ##############################################\n # Event handler methods for the trigger events\n ##############################################\n\n def _handle_create_trigger(self, trigger):\n LOG.debug('Calling \"add_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.add_trigger(trigger=trigger)\n\n def _handle_update_trigger(self, trigger):\n LOG.debug('Calling \"update_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.update_trigger(trigger=trigger)\n\n def _handle_delete_trigger(self, trigger):\n LOG.debug('Calling \"remove_trigger\" method (trigger.type=%s)' % (trigger.type))\n trigger = self._sanitize_trigger(trigger=trigger)\n self.remove_trigger(trigger=trigger)\n\n def _sanitize_trigger(self, trigger):\n sanitized = TriggerAPI.from_model(trigger).to_dict()\n return sanitized\n\n\nwebhooks_controller = WebhooksController()\n", "path": "st2api/st2api/controllers/v1/webhooks.py"}]}
| 3,439 | 128 |
gh_patches_debug_2260
|
rasdani/github-patches
|
git_diff
|
googleapis__python-bigquery-859
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Increase default timeout of retry objects to 10 minutes
Per internal issue 195337762, the general timeout for jobs.insert API is 4 minutes. We should increase our default deadline to 10 minutes to allow for at least 1 retry if the first request fails.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/cloud/bigquery/retry.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from google.api_core import exceptions
16 from google.api_core import retry
17 from google.auth import exceptions as auth_exceptions
18 import requests.exceptions
19
20
21 _RETRYABLE_REASONS = frozenset(
22 ["rateLimitExceeded", "backendError", "internalError", "badGateway"]
23 )
24
25 _UNSTRUCTURED_RETRYABLE_TYPES = (
26 ConnectionError,
27 exceptions.TooManyRequests,
28 exceptions.InternalServerError,
29 exceptions.BadGateway,
30 requests.exceptions.ChunkedEncodingError,
31 requests.exceptions.ConnectionError,
32 auth_exceptions.TransportError,
33 )
34
35
36 def _should_retry(exc):
37 """Predicate for determining when to retry.
38
39 We retry if and only if the 'reason' is 'backendError'
40 or 'rateLimitExceeded'.
41 """
42 if not hasattr(exc, "errors") or len(exc.errors) == 0:
43 # Check for unstructured error returns, e.g. from GFE
44 return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)
45
46 reason = exc.errors[0]["reason"]
47 return reason in _RETRYABLE_REASONS
48
49
50 DEFAULT_RETRY = retry.Retry(predicate=_should_retry)
51 """The default retry object.
52
53 Any method with a ``retry`` parameter will be retried automatically,
54 with reasonable defaults. To disable retry, pass ``retry=None``.
55 To modify the default retry behavior, call a ``with_XXX`` method
56 on ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,
57 pass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.
58 """
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py
--- a/google/cloud/bigquery/retry.py
+++ b/google/cloud/bigquery/retry.py
@@ -47,7 +47,7 @@
return reason in _RETRYABLE_REASONS
-DEFAULT_RETRY = retry.Retry(predicate=_should_retry)
+DEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=600.0)
"""The default retry object.
Any method with a ``retry`` parameter will be retried automatically,
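For callers nothing changes except the default budget; the patched `DEFAULT_RETRY` can still be tightened per call. A usage sketch, assuming the library is installed:

```python
from google.cloud.bigquery.retry import DEFAULT_RETRY

# With the ~4 minute (240 s) server-side limit on jobs.insert, a 600 s overall
# budget leaves room for at least one full retry (2 x 240 s = 480 s < 600 s),
# which a shorter default could not guarantee.
fail_fast_retry = DEFAULT_RETRY.with_deadline(120.0)  # per-call override, e.g. in tests
# ...and pass it explicitly, e.g. client.query(sql, retry=fail_fast_retry)
```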
|
{"golden_diff": "diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py\n--- a/google/cloud/bigquery/retry.py\n+++ b/google/cloud/bigquery/retry.py\n@@ -47,7 +47,7 @@\n return reason in _RETRYABLE_REASONS\n \n \n-DEFAULT_RETRY = retry.Retry(predicate=_should_retry)\n+DEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=600.0)\n \"\"\"The default retry object.\n \n Any method with a ``retry`` parameter will be retried automatically,\n", "issue": "Increase default timeout of retry objects to 10 minutes\nPer internal issue 195337762, the general timeout for jobs.insert API is 4 minutes. We should increase our default deadline to 10 minutes to allow for at least 1 retry if the first request fails.\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nfrom google.auth import exceptions as auth_exceptions\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ChunkedEncodingError,\n requests.exceptions.ConnectionError,\n auth_exceptions.TransportError,\n)\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. 
For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nfrom google.auth import exceptions as auth_exceptions\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ChunkedEncodingError,\n requests.exceptions.ConnectionError,\n auth_exceptions.TransportError,\n)\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=600.0)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py"}]}
| 893 | 118 |
gh_patches_debug_8422
|
rasdani/github-patches
|
git_diff
|
huggingface__diffusers-6737
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Tracker] change to posix for better Windows support
In https://github.com/huggingface/diffusers/pull/6564, @fabiorigano introduced the use of POSIX-style paths to improve Windows compatibility.
It'd be nice to change the instances of `os.path.join()` to `Path(...).as_posix()`.
Feel free to open PRs for this and tag me.
While opening PRs, please target only ONE script at a time.
Let's go 🚀
--- END ISSUE ---
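For context, the separator difference the tracker refers to, in isolation (illustrative, not part of the original tracker text):

```python
import os
from pathlib import Path

# os.path.join uses the platform separator, so on Windows it yields
# 'models\\unet\\model.onnx', which can trip code expecting forward slashes.
joined = os.path.join("models", "unet", "model.onnx")

# Path(...).as_posix() yields 'models/unet/model.onnx' on every platform.
posix = Path("models", "unet", "model.onnx").as_posix()

print(joined)
print(posix)
```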
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/diffusers/pipelines/onnx_utils.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2023 The HuggingFace Inc. team.
3 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17
18 import os
19 import shutil
20 from pathlib import Path
21 from typing import Optional, Union
22
23 import numpy as np
24 from huggingface_hub import hf_hub_download
25 from huggingface_hub.utils import validate_hf_hub_args
26
27 from ..utils import ONNX_EXTERNAL_WEIGHTS_NAME, ONNX_WEIGHTS_NAME, is_onnx_available, logging
28
29
30 if is_onnx_available():
31 import onnxruntime as ort
32
33
34 logger = logging.get_logger(__name__)
35
36 ORT_TO_NP_TYPE = {
37 "tensor(bool)": np.bool_,
38 "tensor(int8)": np.int8,
39 "tensor(uint8)": np.uint8,
40 "tensor(int16)": np.int16,
41 "tensor(uint16)": np.uint16,
42 "tensor(int32)": np.int32,
43 "tensor(uint32)": np.uint32,
44 "tensor(int64)": np.int64,
45 "tensor(uint64)": np.uint64,
46 "tensor(float16)": np.float16,
47 "tensor(float)": np.float32,
48 "tensor(double)": np.float64,
49 }
50
51
52 class OnnxRuntimeModel:
53 def __init__(self, model=None, **kwargs):
54 logger.info("`diffusers.OnnxRuntimeModel` is experimental and might change in the future.")
55 self.model = model
56 self.model_save_dir = kwargs.get("model_save_dir", None)
57 self.latest_model_name = kwargs.get("latest_model_name", ONNX_WEIGHTS_NAME)
58
59 def __call__(self, **kwargs):
60 inputs = {k: np.array(v) for k, v in kwargs.items()}
61 return self.model.run(None, inputs)
62
63 @staticmethod
64 def load_model(path: Union[str, Path], provider=None, sess_options=None):
65 """
66 Loads an ONNX Inference session with an ExecutionProvider. Default provider is `CPUExecutionProvider`
67
68 Arguments:
69 path (`str` or `Path`):
70 Directory from which to load
71 provider(`str`, *optional*):
72 Onnxruntime execution provider to use for loading the model, defaults to `CPUExecutionProvider`
73 """
74 if provider is None:
75 logger.info("No onnxruntime provider specified, using CPUExecutionProvider")
76 provider = "CPUExecutionProvider"
77
78 return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)
79
80 def _save_pretrained(self, save_directory: Union[str, Path], file_name: Optional[str] = None, **kwargs):
81 """
82 Save a model and its configuration file to a directory, so that it can be re-loaded using the
83 [`~optimum.onnxruntime.modeling_ort.ORTModel.from_pretrained`] class method. It will always save the
84 latest_model_name.
85
86 Arguments:
87 save_directory (`str` or `Path`):
88 Directory where to save the model file.
89 file_name(`str`, *optional*):
90 Overwrites the default model file name from `"model.onnx"` to `file_name`. This allows you to save the
91 model with a different name.
92 """
93 model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME
94
95 src_path = self.model_save_dir.joinpath(self.latest_model_name)
96 dst_path = Path(save_directory).joinpath(model_file_name)
97 try:
98 shutil.copyfile(src_path, dst_path)
99 except shutil.SameFileError:
100 pass
101
102 # copy external weights (for models >2GB)
103 src_path = self.model_save_dir.joinpath(ONNX_EXTERNAL_WEIGHTS_NAME)
104 if src_path.exists():
105 dst_path = Path(save_directory).joinpath(ONNX_EXTERNAL_WEIGHTS_NAME)
106 try:
107 shutil.copyfile(src_path, dst_path)
108 except shutil.SameFileError:
109 pass
110
111 def save_pretrained(
112 self,
113 save_directory: Union[str, os.PathLike],
114 **kwargs,
115 ):
116 """
117 Save a model to a directory, so that it can be re-loaded using the [`~OnnxModel.from_pretrained`] class
118 method.:
119
120 Arguments:
121 save_directory (`str` or `os.PathLike`):
122 Directory to which to save. Will be created if it doesn't exist.
123 """
124 if os.path.isfile(save_directory):
125 logger.error(f"Provided path ({save_directory}) should be a directory, not a file")
126 return
127
128 os.makedirs(save_directory, exist_ok=True)
129
130 # saving model weights/files
131 self._save_pretrained(save_directory, **kwargs)
132
133 @classmethod
134 @validate_hf_hub_args
135 def _from_pretrained(
136 cls,
137 model_id: Union[str, Path],
138 token: Optional[Union[bool, str, None]] = None,
139 revision: Optional[Union[str, None]] = None,
140 force_download: bool = False,
141 cache_dir: Optional[str] = None,
142 file_name: Optional[str] = None,
143 provider: Optional[str] = None,
144 sess_options: Optional["ort.SessionOptions"] = None,
145 **kwargs,
146 ):
147 """
148 Load a model from a directory or the HF Hub.
149
150 Arguments:
151 model_id (`str` or `Path`):
152 Directory from which to load
153 token (`str` or `bool`):
154 Is needed to load models from a private or gated repository
155 revision (`str`):
156 Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id
157 cache_dir (`Union[str, Path]`, *optional*):
158 Path to a directory in which a downloaded pretrained model configuration should be cached if the
159 standard cache should not be used.
160 force_download (`bool`, *optional*, defaults to `False`):
161 Whether or not to force the (re-)download of the model weights and configuration files, overriding the
162 cached versions if they exist.
163 file_name(`str`):
164 Overwrites the default model file name from `"model.onnx"` to `file_name`. This allows you to load
165 different model files from the same repository or directory.
166 provider(`str`):
167 The ONNX runtime provider, e.g. `CPUExecutionProvider` or `CUDAExecutionProvider`.
168 kwargs (`Dict`, *optional*):
169 kwargs will be passed to the model during initialization
170 """
171 model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME
172 # load model from local directory
173 if os.path.isdir(model_id):
174 model = OnnxRuntimeModel.load_model(
175 os.path.join(model_id, model_file_name), provider=provider, sess_options=sess_options
176 )
177 kwargs["model_save_dir"] = Path(model_id)
178 # load model from hub
179 else:
180 # download model
181 model_cache_path = hf_hub_download(
182 repo_id=model_id,
183 filename=model_file_name,
184 token=token,
185 revision=revision,
186 cache_dir=cache_dir,
187 force_download=force_download,
188 )
189 kwargs["model_save_dir"] = Path(model_cache_path).parent
190 kwargs["latest_model_name"] = Path(model_cache_path).name
191 model = OnnxRuntimeModel.load_model(model_cache_path, provider=provider, sess_options=sess_options)
192 return cls(model=model, **kwargs)
193
194 @classmethod
195 @validate_hf_hub_args
196 def from_pretrained(
197 cls,
198 model_id: Union[str, Path],
199 force_download: bool = True,
200 token: Optional[str] = None,
201 cache_dir: Optional[str] = None,
202 **model_kwargs,
203 ):
204 revision = None
205 if len(str(model_id).split("@")) == 2:
206 model_id, revision = model_id.split("@")
207
208 return cls._from_pretrained(
209 model_id=model_id,
210 revision=revision,
211 cache_dir=cache_dir,
212 force_download=force_download,
213 token=token,
214 **model_kwargs,
215 )
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/diffusers/pipelines/onnx_utils.py b/src/diffusers/pipelines/onnx_utils.py
--- a/src/diffusers/pipelines/onnx_utils.py
+++ b/src/diffusers/pipelines/onnx_utils.py
@@ -172,7 +172,7 @@
# load model from local directory
if os.path.isdir(model_id):
model = OnnxRuntimeModel.load_model(
- os.path.join(model_id, model_file_name), provider=provider, sess_options=sess_options
+ Path(model_id, model_file_name).as_posix(), provider=provider, sess_options=sess_options
)
kwargs["model_save_dir"] = Path(model_id)
# load model from hub
|
{"golden_diff": "diff --git a/src/diffusers/pipelines/onnx_utils.py b/src/diffusers/pipelines/onnx_utils.py\n--- a/src/diffusers/pipelines/onnx_utils.py\n+++ b/src/diffusers/pipelines/onnx_utils.py\n@@ -172,7 +172,7 @@\n # load model from local directory\n if os.path.isdir(model_id):\n model = OnnxRuntimeModel.load_model(\n- os.path.join(model_id, model_file_name), provider=provider, sess_options=sess_options\n+ Path(model_id, model_file_name).as_posix(), provider=provider, sess_options=sess_options\n )\n kwargs[\"model_save_dir\"] = Path(model_id)\n # load model from hub\n", "issue": "[Tracker] change to posix for better Windows support\nIn https://github.com/huggingface/diffusers/pull/6564, @fabiorigano introduced the use of Posix to better support Windows compatibility. \r\n\r\nIt'd be nice to change the instances of `os.path.join()` to `Path(...).as_posix()`. \r\n\r\nFeel free to open PRs for this and tag me. \r\n\r\nWhile opening PRs, please target only ONE script at a time. \r\n\r\nLet's go \ud83d\ude80 \r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2023 The HuggingFace Inc. team.\n# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport os\nimport shutil\nfrom pathlib import Path\nfrom typing import Optional, Union\n\nimport numpy as np\nfrom huggingface_hub import hf_hub_download\nfrom huggingface_hub.utils import validate_hf_hub_args\n\nfrom ..utils import ONNX_EXTERNAL_WEIGHTS_NAME, ONNX_WEIGHTS_NAME, is_onnx_available, logging\n\n\nif is_onnx_available():\n import onnxruntime as ort\n\n\nlogger = logging.get_logger(__name__)\n\nORT_TO_NP_TYPE = {\n \"tensor(bool)\": np.bool_,\n \"tensor(int8)\": np.int8,\n \"tensor(uint8)\": np.uint8,\n \"tensor(int16)\": np.int16,\n \"tensor(uint16)\": np.uint16,\n \"tensor(int32)\": np.int32,\n \"tensor(uint32)\": np.uint32,\n \"tensor(int64)\": np.int64,\n \"tensor(uint64)\": np.uint64,\n \"tensor(float16)\": np.float16,\n \"tensor(float)\": np.float32,\n \"tensor(double)\": np.float64,\n}\n\n\nclass OnnxRuntimeModel:\n def __init__(self, model=None, **kwargs):\n logger.info(\"`diffusers.OnnxRuntimeModel` is experimental and might change in the future.\")\n self.model = model\n self.model_save_dir = kwargs.get(\"model_save_dir\", None)\n self.latest_model_name = kwargs.get(\"latest_model_name\", ONNX_WEIGHTS_NAME)\n\n def __call__(self, **kwargs):\n inputs = {k: np.array(v) for k, v in kwargs.items()}\n return self.model.run(None, inputs)\n\n @staticmethod\n def load_model(path: Union[str, Path], provider=None, sess_options=None):\n \"\"\"\n Loads an ONNX Inference session with an ExecutionProvider. 
Default provider is `CPUExecutionProvider`\n\n Arguments:\n path (`str` or `Path`):\n Directory from which to load\n provider(`str`, *optional*):\n Onnxruntime execution provider to use for loading the model, defaults to `CPUExecutionProvider`\n \"\"\"\n if provider is None:\n logger.info(\"No onnxruntime provider specified, using CPUExecutionProvider\")\n provider = \"CPUExecutionProvider\"\n\n return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)\n\n def _save_pretrained(self, save_directory: Union[str, Path], file_name: Optional[str] = None, **kwargs):\n \"\"\"\n Save a model and its configuration file to a directory, so that it can be re-loaded using the\n [`~optimum.onnxruntime.modeling_ort.ORTModel.from_pretrained`] class method. It will always save the\n latest_model_name.\n\n Arguments:\n save_directory (`str` or `Path`):\n Directory where to save the model file.\n file_name(`str`, *optional*):\n Overwrites the default model file name from `\"model.onnx\"` to `file_name`. This allows you to save the\n model with a different name.\n \"\"\"\n model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME\n\n src_path = self.model_save_dir.joinpath(self.latest_model_name)\n dst_path = Path(save_directory).joinpath(model_file_name)\n try:\n shutil.copyfile(src_path, dst_path)\n except shutil.SameFileError:\n pass\n\n # copy external weights (for models >2GB)\n src_path = self.model_save_dir.joinpath(ONNX_EXTERNAL_WEIGHTS_NAME)\n if src_path.exists():\n dst_path = Path(save_directory).joinpath(ONNX_EXTERNAL_WEIGHTS_NAME)\n try:\n shutil.copyfile(src_path, dst_path)\n except shutil.SameFileError:\n pass\n\n def save_pretrained(\n self,\n save_directory: Union[str, os.PathLike],\n **kwargs,\n ):\n \"\"\"\n Save a model to a directory, so that it can be re-loaded using the [`~OnnxModel.from_pretrained`] class\n method.:\n\n Arguments:\n save_directory (`str` or `os.PathLike`):\n Directory to which to save. Will be created if it doesn't exist.\n \"\"\"\n if os.path.isfile(save_directory):\n logger.error(f\"Provided path ({save_directory}) should be a directory, not a file\")\n return\n\n os.makedirs(save_directory, exist_ok=True)\n\n # saving model weights/files\n self._save_pretrained(save_directory, **kwargs)\n\n @classmethod\n @validate_hf_hub_args\n def _from_pretrained(\n cls,\n model_id: Union[str, Path],\n token: Optional[Union[bool, str, None]] = None,\n revision: Optional[Union[str, None]] = None,\n force_download: bool = False,\n cache_dir: Optional[str] = None,\n file_name: Optional[str] = None,\n provider: Optional[str] = None,\n sess_options: Optional[\"ort.SessionOptions\"] = None,\n **kwargs,\n ):\n \"\"\"\n Load a model from a directory or the HF Hub.\n\n Arguments:\n model_id (`str` or `Path`):\n Directory from which to load\n token (`str` or `bool`):\n Is needed to load models from a private or gated repository\n revision (`str`):\n Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id\n cache_dir (`Union[str, Path]`, *optional*):\n Path to a directory in which a downloaded pretrained model configuration should be cached if the\n standard cache should not be used.\n force_download (`bool`, *optional*, defaults to `False`):\n Whether or not to force the (re-)download of the model weights and configuration files, overriding the\n cached versions if they exist.\n file_name(`str`):\n Overwrites the default model file name from `\"model.onnx\"` to `file_name`. 
This allows you to load\n different model files from the same repository or directory.\n provider(`str`):\n The ONNX runtime provider, e.g. `CPUExecutionProvider` or `CUDAExecutionProvider`.\n kwargs (`Dict`, *optional*):\n kwargs will be passed to the model during initialization\n \"\"\"\n model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME\n # load model from local directory\n if os.path.isdir(model_id):\n model = OnnxRuntimeModel.load_model(\n os.path.join(model_id, model_file_name), provider=provider, sess_options=sess_options\n )\n kwargs[\"model_save_dir\"] = Path(model_id)\n # load model from hub\n else:\n # download model\n model_cache_path = hf_hub_download(\n repo_id=model_id,\n filename=model_file_name,\n token=token,\n revision=revision,\n cache_dir=cache_dir,\n force_download=force_download,\n )\n kwargs[\"model_save_dir\"] = Path(model_cache_path).parent\n kwargs[\"latest_model_name\"] = Path(model_cache_path).name\n model = OnnxRuntimeModel.load_model(model_cache_path, provider=provider, sess_options=sess_options)\n return cls(model=model, **kwargs)\n\n @classmethod\n @validate_hf_hub_args\n def from_pretrained(\n cls,\n model_id: Union[str, Path],\n force_download: bool = True,\n token: Optional[str] = None,\n cache_dir: Optional[str] = None,\n **model_kwargs,\n ):\n revision = None\n if len(str(model_id).split(\"@\")) == 2:\n model_id, revision = model_id.split(\"@\")\n\n return cls._from_pretrained(\n model_id=model_id,\n revision=revision,\n cache_dir=cache_dir,\n force_download=force_download,\n token=token,\n **model_kwargs,\n )\n", "path": "src/diffusers/pipelines/onnx_utils.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2023 The HuggingFace Inc. team.\n# Copyright (c) 2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport os\nimport shutil\nfrom pathlib import Path\nfrom typing import Optional, Union\n\nimport numpy as np\nfrom huggingface_hub import hf_hub_download\nfrom huggingface_hub.utils import validate_hf_hub_args\n\nfrom ..utils import ONNX_EXTERNAL_WEIGHTS_NAME, ONNX_WEIGHTS_NAME, is_onnx_available, logging\n\n\nif is_onnx_available():\n import onnxruntime as ort\n\n\nlogger = logging.get_logger(__name__)\n\nORT_TO_NP_TYPE = {\n \"tensor(bool)\": np.bool_,\n \"tensor(int8)\": np.int8,\n \"tensor(uint8)\": np.uint8,\n \"tensor(int16)\": np.int16,\n \"tensor(uint16)\": np.uint16,\n \"tensor(int32)\": np.int32,\n \"tensor(uint32)\": np.uint32,\n \"tensor(int64)\": np.int64,\n \"tensor(uint64)\": np.uint64,\n \"tensor(float16)\": np.float16,\n \"tensor(float)\": np.float32,\n \"tensor(double)\": np.float64,\n}\n\n\nclass OnnxRuntimeModel:\n def __init__(self, model=None, **kwargs):\n logger.info(\"`diffusers.OnnxRuntimeModel` is experimental and might change in the future.\")\n self.model = model\n self.model_save_dir = kwargs.get(\"model_save_dir\", None)\n self.latest_model_name = kwargs.get(\"latest_model_name\", ONNX_WEIGHTS_NAME)\n\n def __call__(self, **kwargs):\n inputs = {k: np.array(v) for k, v in kwargs.items()}\n return self.model.run(None, inputs)\n\n @staticmethod\n def load_model(path: Union[str, Path], provider=None, sess_options=None):\n \"\"\"\n Loads an ONNX Inference session with an ExecutionProvider. Default provider is `CPUExecutionProvider`\n\n Arguments:\n path (`str` or `Path`):\n Directory from which to load\n provider(`str`, *optional*):\n Onnxruntime execution provider to use for loading the model, defaults to `CPUExecutionProvider`\n \"\"\"\n if provider is None:\n logger.info(\"No onnxruntime provider specified, using CPUExecutionProvider\")\n provider = \"CPUExecutionProvider\"\n\n return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)\n\n def _save_pretrained(self, save_directory: Union[str, Path], file_name: Optional[str] = None, **kwargs):\n \"\"\"\n Save a model and its configuration file to a directory, so that it can be re-loaded using the\n [`~optimum.onnxruntime.modeling_ort.ORTModel.from_pretrained`] class method. It will always save the\n latest_model_name.\n\n Arguments:\n save_directory (`str` or `Path`):\n Directory where to save the model file.\n file_name(`str`, *optional*):\n Overwrites the default model file name from `\"model.onnx\"` to `file_name`. 
This allows you to save the\n model with a different name.\n \"\"\"\n model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME\n\n src_path = self.model_save_dir.joinpath(self.latest_model_name)\n dst_path = Path(save_directory).joinpath(model_file_name)\n try:\n shutil.copyfile(src_path, dst_path)\n except shutil.SameFileError:\n pass\n\n # copy external weights (for models >2GB)\n src_path = self.model_save_dir.joinpath(ONNX_EXTERNAL_WEIGHTS_NAME)\n if src_path.exists():\n dst_path = Path(save_directory).joinpath(ONNX_EXTERNAL_WEIGHTS_NAME)\n try:\n shutil.copyfile(src_path, dst_path)\n except shutil.SameFileError:\n pass\n\n def save_pretrained(\n self,\n save_directory: Union[str, os.PathLike],\n **kwargs,\n ):\n \"\"\"\n Save a model to a directory, so that it can be re-loaded using the [`~OnnxModel.from_pretrained`] class\n method.:\n\n Arguments:\n save_directory (`str` or `os.PathLike`):\n Directory to which to save. Will be created if it doesn't exist.\n \"\"\"\n if os.path.isfile(save_directory):\n logger.error(f\"Provided path ({save_directory}) should be a directory, not a file\")\n return\n\n os.makedirs(save_directory, exist_ok=True)\n\n # saving model weights/files\n self._save_pretrained(save_directory, **kwargs)\n\n @classmethod\n @validate_hf_hub_args\n def _from_pretrained(\n cls,\n model_id: Union[str, Path],\n token: Optional[Union[bool, str, None]] = None,\n revision: Optional[Union[str, None]] = None,\n force_download: bool = False,\n cache_dir: Optional[str] = None,\n file_name: Optional[str] = None,\n provider: Optional[str] = None,\n sess_options: Optional[\"ort.SessionOptions\"] = None,\n **kwargs,\n ):\n \"\"\"\n Load a model from a directory or the HF Hub.\n\n Arguments:\n model_id (`str` or `Path`):\n Directory from which to load\n token (`str` or `bool`):\n Is needed to load models from a private or gated repository\n revision (`str`):\n Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id\n cache_dir (`Union[str, Path]`, *optional*):\n Path to a directory in which a downloaded pretrained model configuration should be cached if the\n standard cache should not be used.\n force_download (`bool`, *optional*, defaults to `False`):\n Whether or not to force the (re-)download of the model weights and configuration files, overriding the\n cached versions if they exist.\n file_name(`str`):\n Overwrites the default model file name from `\"model.onnx\"` to `file_name`. This allows you to load\n different model files from the same repository or directory.\n provider(`str`):\n The ONNX runtime provider, e.g. 
`CPUExecutionProvider` or `CUDAExecutionProvider`.\n kwargs (`Dict`, *optional*):\n kwargs will be passed to the model during initialization\n \"\"\"\n model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME\n # load model from local directory\n if os.path.isdir(model_id):\n model = OnnxRuntimeModel.load_model(\n Path(model_id, model_file_name).as_posix(), provider=provider, sess_options=sess_options\n )\n kwargs[\"model_save_dir\"] = Path(model_id)\n # load model from hub\n else:\n # download model\n model_cache_path = hf_hub_download(\n repo_id=model_id,\n filename=model_file_name,\n token=token,\n revision=revision,\n cache_dir=cache_dir,\n force_download=force_download,\n )\n kwargs[\"model_save_dir\"] = Path(model_cache_path).parent\n kwargs[\"latest_model_name\"] = Path(model_cache_path).name\n model = OnnxRuntimeModel.load_model(model_cache_path, provider=provider, sess_options=sess_options)\n return cls(model=model, **kwargs)\n\n @classmethod\n @validate_hf_hub_args\n def from_pretrained(\n cls,\n model_id: Union[str, Path],\n force_download: bool = True,\n token: Optional[str] = None,\n cache_dir: Optional[str] = None,\n **model_kwargs,\n ):\n revision = None\n if len(str(model_id).split(\"@\")) == 2:\n model_id, revision = model_id.split(\"@\")\n\n return cls._from_pretrained(\n model_id=model_id,\n revision=revision,\n cache_dir=cache_dir,\n force_download=force_download,\n token=token,\n **model_kwargs,\n )\n", "path": "src/diffusers/pipelines/onnx_utils.py"}]}
| 2,779 | 159 |
gh_patches_debug_24193
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-2243
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typos in cohens_kappa.py
This is a very minor issue but raising an issue here in order to submit a PR immediately after.
Issues:
1. `...tp be...` should be `...to be...` in a comment. See[ L45 in cohens_kappy.py](https://github.com/tensorflow/addons/blob/b13140719a3de5d1354b12cb73940acaa8dd4a79/tensorflow_addons/metrics/cohens_kappa.py#L45).
2. One row of a matrix is not aligned with the other other rows in an example. See [L80 in cohens_kappy.py](https://github.com/tensorflow/addons/blob/b13140719a3de5d1354b12cb73940acaa8dd4a79/tensorflow_addons/metrics/cohens_kappa.py#L80).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/metrics/cohens_kappa.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Implements Cohen's Kappa."""
16
17 import tensorflow as tf
18 import numpy as np
19 import tensorflow.keras.backend as K
20 from tensorflow.keras.metrics import Metric
21 from tensorflow_addons.utils.types import AcceptableDTypes, FloatTensorLike
22
23 from typeguard import typechecked
24 from typing import Optional
25
26
27 @tf.keras.utils.register_keras_serializable(package="Addons")
28 class CohenKappa(Metric):
29 """Computes Kappa score between two raters.
30
31 The score lies in the range `[-1, 1]`. A score of -1 represents
32 complete disagreement between two raters whereas a score of 1
33 represents complete agreement between the two raters.
34 A score of 0 means agreement by chance.
35
36 Note: As of now, this implementation considers all labels
37 while calculating the Cohen's Kappa score.
38
39 Args:
40 num_classes: Number of unique classes in your dataset.
41 weightage: (optional) Weighting to be considered for calculating
42 kappa statistics. A valid value is one of
43 [None, 'linear', 'quadratic']. Defaults to `None`
44 sparse_labels: (bool) Valid only for multi-class scenario.
45 If True, ground truth labels are expected tp be integers
46 and not one-hot encoded.
47 regression: (bool) If set, that means the problem is being treated
48 as a regression problem where you are regressing the predictions.
49 **Note:** If you are regressing for the values, the the output layer
50 should contain a single unit.
51 name: (optional) String name of the metric instance
52 dtype: (optional) Data type of the metric result. Defaults to `None`.
53
54 Raises:
55 ValueError: If the value passed for `weightage` is invalid
56 i.e. not any one of [None, 'linear', 'quadratic'].
57
58 Usage:
59
60 >>> y_true = np.array([4, 4, 3, 4, 2, 4, 1, 1], dtype=np.int32)
61 >>> y_pred = np.array([4, 4, 3, 4, 4, 2, 1, 1], dtype=np.int32)
62 >>> weights = np.array([1, 1, 2, 5, 10, 2, 3, 3], dtype=np.int32)
63 >>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
64 >>> metric.update_state(y_true , y_pred)
65 <tf.Tensor: shape=(5, 5), dtype=float32, numpy=
66 array([[0., 0., 0., 0., 0.],
67 [0., 2., 0., 0., 0.],
68 [0., 0., 0., 0., 1.],
69 [0., 0., 0., 1., 0.],
70 [0., 0., 1., 0., 3.]], dtype=float32)>
71 >>> result = metric.result()
72 >>> result.numpy()
73 0.61904764
74 >>> # To use this with weights, sample_weight argument can be used.
75 >>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
76 >>> metric.update_state(y_true , y_pred , sample_weight=weights)
77 <tf.Tensor: shape=(5, 5), dtype=float32, numpy=
78 array([[ 0., 0., 0., 0., 0.],
79 [ 0., 6., 0., 0., 0.],
80 [ 0., 0., 0., 0., 10.],
81 [ 0., 0., 0., 2., 0.],
82 [ 0., 0., 2., 0., 7.]], dtype=float32)>
83 >>> result = metric.result()
84 >>> result.numpy()
85 0.37209308
86
87 Usage with `tf.keras` API:
88
89 >>> inputs = tf.keras.Input(shape=(10,))
90 >>> x = tf.keras.layers.Dense(10)(inputs)
91 >>> outputs = tf.keras.layers.Dense(1)(x)
92 >>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
93 >>> model.compile('sgd', loss='mse', metrics=[tfa.metrics.CohenKappa(num_classes=3, sparse_labels=True)])
94 """
95
96 @typechecked
97 def __init__(
98 self,
99 num_classes: FloatTensorLike,
100 name: str = "cohen_kappa",
101 weightage: Optional[str] = None,
102 sparse_labels: bool = False,
103 regression: bool = False,
104 dtype: AcceptableDTypes = None,
105 ):
106 """Creates a `CohenKappa` instance."""
107 super().__init__(name=name, dtype=dtype)
108
109 if weightage not in (None, "linear", "quadratic"):
110 raise ValueError("Unknown kappa weighting type.")
111
112 if num_classes == 2:
113 self._update = self._update_binary_class_model
114 elif num_classes > 2:
115 self._update = self._update_multi_class_model
116 else:
117 raise ValueError(
118 """Number of classes must be
119 greater than or euqal to two"""
120 )
121
122 self.weightage = weightage
123 self.num_classes = num_classes
124 self.regression = regression
125 self.sparse_labels = sparse_labels
126 self.conf_mtx = self.add_weight(
127 "conf_mtx",
128 shape=(self.num_classes, self.num_classes),
129 initializer=tf.keras.initializers.zeros,
130 dtype=tf.float32,
131 )
132
133 def update_state(self, y_true, y_pred, sample_weight=None):
134 """Accumulates the confusion matrix condition statistics.
135
136 Args:
137 y_true: Labels assigned by the first annotator with shape
138 `[num_samples,]`.
139 y_pred: Labels assigned by the second annotator with shape
140 `[num_samples,]`. The kappa statistic is symmetric,
141 so swapping `y_true` and `y_pred` doesn't change the value.
142 sample_weight (optional): for weighting labels in confusion matrix
143 Defaults to `None`. The dtype for weights should be the same
144 as the dtype for confusion matrix. For more details,
145 please check `tf.math.confusion_matrix`.
146
147 Returns:
148 Update op.
149 """
150 return self._update(y_true, y_pred, sample_weight)
151
152 def _update_binary_class_model(self, y_true, y_pred, sample_weight=None):
153 y_true = tf.cast(y_true, dtype=tf.int64)
154 y_pred = tf.cast(y_pred, dtype=tf.float32)
155 y_pred = tf.cast(y_pred > 0.5, dtype=tf.int64)
156 return self._update_confusion_matrix(y_true, y_pred, sample_weight)
157
158 @tf.function
159 def _update_multi_class_model(self, y_true, y_pred, sample_weight=None):
160 v = tf.argmax(y_true, axis=1) if not self.sparse_labels else y_true
161 y_true = tf.cast(v, dtype=tf.int64)
162
163 y_pred = self._cast_ypred(y_pred)
164
165 return self._update_confusion_matrix(y_true, y_pred, sample_weight)
166
167 @tf.function
168 def _cast_ypred(self, y_pred):
169 if tf.rank(y_pred) > 1:
170 if not self.regression:
171 y_pred = tf.cast(tf.argmax(y_pred, axis=-1), dtype=tf.int64)
172 else:
173 y_pred = tf.math.round(tf.math.abs(y_pred))
174 y_pred = tf.cast(y_pred, dtype=tf.int64)
175 else:
176 y_pred = tf.cast(y_pred, dtype=tf.int64)
177 return y_pred
178
179 @tf.function
180 def _safe_squeeze(self, y):
181 y = tf.squeeze(y)
182
183 # Check for scalar result
184 if tf.rank(y) == 0:
185 y = tf.expand_dims(y, 0)
186
187 return y
188
189 def _update_confusion_matrix(self, y_true, y_pred, sample_weight):
190 y_true = self._safe_squeeze(y_true)
191 y_pred = self._safe_squeeze(y_pred)
192
193 new_conf_mtx = tf.math.confusion_matrix(
194 labels=y_true,
195 predictions=y_pred,
196 num_classes=self.num_classes,
197 weights=sample_weight,
198 dtype=tf.float32,
199 )
200
201 return self.conf_mtx.assign_add(new_conf_mtx)
202
203 def result(self):
204 nb_ratings = tf.shape(self.conf_mtx)[0]
205 weight_mtx = tf.ones([nb_ratings, nb_ratings], dtype=tf.float32)
206
207 # 2. Create a weight matrix
208 if self.weightage is None:
209 diagonal = tf.zeros([nb_ratings], dtype=tf.float32)
210 weight_mtx = tf.linalg.set_diag(weight_mtx, diagonal=diagonal)
211 else:
212 weight_mtx += tf.cast(tf.range(nb_ratings), dtype=tf.float32)
213 weight_mtx = tf.cast(weight_mtx, dtype=self.dtype)
214
215 if self.weightage == "linear":
216 weight_mtx = tf.abs(weight_mtx - tf.transpose(weight_mtx))
217 else:
218 weight_mtx = tf.pow((weight_mtx - tf.transpose(weight_mtx)), 2)
219
220 weight_mtx = tf.cast(weight_mtx, dtype=self.dtype)
221
222 # 3. Get counts
223 actual_ratings_hist = tf.reduce_sum(self.conf_mtx, axis=1)
224 pred_ratings_hist = tf.reduce_sum(self.conf_mtx, axis=0)
225
226 # 4. Get the outer product
227 out_prod = pred_ratings_hist[..., None] * actual_ratings_hist[None, ...]
228
229 # 5. Normalize the confusion matrix and outer product
230 conf_mtx = self.conf_mtx / tf.reduce_sum(self.conf_mtx)
231 out_prod = out_prod / tf.reduce_sum(out_prod)
232
233 conf_mtx = tf.cast(conf_mtx, dtype=self.dtype)
234 out_prod = tf.cast(out_prod, dtype=self.dtype)
235
236 # 6. Calculate Kappa score
237 numerator = tf.reduce_sum(conf_mtx * weight_mtx)
238 denominator = tf.reduce_sum(out_prod * weight_mtx)
239 return tf.cond(
240 tf.math.is_nan(denominator),
241 true_fn=lambda: 0.0,
242 false_fn=lambda: 1 - (numerator / denominator),
243 )
244
245 def get_config(self):
246 """Returns the serializable config of the metric."""
247
248 config = {
249 "num_classes": self.num_classes,
250 "weightage": self.weightage,
251 "sparse_labels": self.sparse_labels,
252 "regression": self.regression,
253 }
254 base_config = super().get_config()
255 return {**base_config, **config}
256
257 def reset_states(self):
258 """Resets all of the metric state variables."""
259
260 for v in self.variables:
261 K.set_value(
262 v,
263 np.zeros((self.num_classes, self.num_classes), v.dtype.as_numpy_dtype),
264 )
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensorflow_addons/metrics/cohens_kappa.py b/tensorflow_addons/metrics/cohens_kappa.py
--- a/tensorflow_addons/metrics/cohens_kappa.py
+++ b/tensorflow_addons/metrics/cohens_kappa.py
@@ -42,7 +42,7 @@
kappa statistics. A valid value is one of
[None, 'linear', 'quadratic']. Defaults to `None`
sparse_labels: (bool) Valid only for multi-class scenario.
- If True, ground truth labels are expected tp be integers
+ If True, ground truth labels are expected to be integers
and not one-hot encoded.
regression: (bool) If set, that means the problem is being treated
as a regression problem where you are regressing the predictions.
@@ -77,7 +77,7 @@
<tf.Tensor: shape=(5, 5), dtype=float32, numpy=
array([[ 0., 0., 0., 0., 0.],
[ 0., 6., 0., 0., 0.],
- [ 0., 0., 0., 0., 10.],
+ [ 0., 0., 0., 0., 10.],
[ 0., 0., 0., 2., 0.],
[ 0., 0., 2., 0., 7.]], dtype=float32)>
>>> result = metric.result()
|
{"golden_diff": "diff --git a/tensorflow_addons/metrics/cohens_kappa.py b/tensorflow_addons/metrics/cohens_kappa.py\n--- a/tensorflow_addons/metrics/cohens_kappa.py\n+++ b/tensorflow_addons/metrics/cohens_kappa.py\n@@ -42,7 +42,7 @@\n kappa statistics. A valid value is one of\n [None, 'linear', 'quadratic']. Defaults to `None`\n sparse_labels: (bool) Valid only for multi-class scenario.\n- If True, ground truth labels are expected tp be integers\n+ If True, ground truth labels are expected to be integers\n and not one-hot encoded.\n regression: (bool) If set, that means the problem is being treated\n as a regression problem where you are regressing the predictions.\n@@ -77,7 +77,7 @@\n <tf.Tensor: shape=(5, 5), dtype=float32, numpy=\n array([[ 0., 0., 0., 0., 0.],\n [ 0., 6., 0., 0., 0.],\n- [ 0., 0., 0., 0., 10.],\n+ [ 0., 0., 0., 0., 10.],\n [ 0., 0., 0., 2., 0.],\n [ 0., 0., 2., 0., 7.]], dtype=float32)>\n >>> result = metric.result()\n", "issue": "Typos in cohens_kappa.py\nThis is a very minor issue but raising an issue here in order to submit a PR immediately after.\r\n\r\nIssues:\r\n1. `...tp be...` should be `...to be...` in a comment. See[ L45 in cohens_kappy.py](https://github.com/tensorflow/addons/blob/b13140719a3de5d1354b12cb73940acaa8dd4a79/tensorflow_addons/metrics/cohens_kappa.py#L45).\r\n2. One row of a matrix is not aligned with the other other rows in an example. See [L80 in cohens_kappy.py](https://github.com/tensorflow/addons/blob/b13140719a3de5d1354b12cb73940acaa8dd4a79/tensorflow_addons/metrics/cohens_kappa.py#L80).\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements Cohen's Kappa.\"\"\"\n\nimport tensorflow as tf\nimport numpy as np\nimport tensorflow.keras.backend as K\nfrom tensorflow.keras.metrics import Metric\nfrom tensorflow_addons.utils.types import AcceptableDTypes, FloatTensorLike\n\nfrom typeguard import typechecked\nfrom typing import Optional\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass CohenKappa(Metric):\n \"\"\"Computes Kappa score between two raters.\n\n The score lies in the range `[-1, 1]`. A score of -1 represents\n complete disagreement between two raters whereas a score of 1\n represents complete agreement between the two raters.\n A score of 0 means agreement by chance.\n\n Note: As of now, this implementation considers all labels\n while calculating the Cohen's Kappa score.\n\n Args:\n num_classes: Number of unique classes in your dataset.\n weightage: (optional) Weighting to be considered for calculating\n kappa statistics. A valid value is one of\n [None, 'linear', 'quadratic']. 
Defaults to `None`\n sparse_labels: (bool) Valid only for multi-class scenario.\n If True, ground truth labels are expected tp be integers\n and not one-hot encoded.\n regression: (bool) If set, that means the problem is being treated\n as a regression problem where you are regressing the predictions.\n **Note:** If you are regressing for the values, the the output layer\n should contain a single unit.\n name: (optional) String name of the metric instance\n dtype: (optional) Data type of the metric result. Defaults to `None`.\n\n Raises:\n ValueError: If the value passed for `weightage` is invalid\n i.e. not any one of [None, 'linear', 'quadratic'].\n\n Usage:\n\n >>> y_true = np.array([4, 4, 3, 4, 2, 4, 1, 1], dtype=np.int32)\n >>> y_pred = np.array([4, 4, 3, 4, 4, 2, 1, 1], dtype=np.int32)\n >>> weights = np.array([1, 1, 2, 5, 10, 2, 3, 3], dtype=np.int32)\n >>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)\n >>> metric.update_state(y_true , y_pred)\n <tf.Tensor: shape=(5, 5), dtype=float32, numpy=\n array([[0., 0., 0., 0., 0.],\n [0., 2., 0., 0., 0.],\n [0., 0., 0., 0., 1.],\n [0., 0., 0., 1., 0.],\n [0., 0., 1., 0., 3.]], dtype=float32)>\n >>> result = metric.result()\n >>> result.numpy()\n 0.61904764\n >>> # To use this with weights, sample_weight argument can be used.\n >>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)\n >>> metric.update_state(y_true , y_pred , sample_weight=weights)\n <tf.Tensor: shape=(5, 5), dtype=float32, numpy=\n array([[ 0., 0., 0., 0., 0.],\n [ 0., 6., 0., 0., 0.],\n [ 0., 0., 0., 0., 10.],\n [ 0., 0., 0., 2., 0.],\n [ 0., 0., 2., 0., 7.]], dtype=float32)>\n >>> result = metric.result()\n >>> result.numpy()\n 0.37209308\n\n Usage with `tf.keras` API:\n\n >>> inputs = tf.keras.Input(shape=(10,))\n >>> x = tf.keras.layers.Dense(10)(inputs)\n >>> outputs = tf.keras.layers.Dense(1)(x)\n >>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)\n >>> model.compile('sgd', loss='mse', metrics=[tfa.metrics.CohenKappa(num_classes=3, sparse_labels=True)])\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n num_classes: FloatTensorLike,\n name: str = \"cohen_kappa\",\n weightage: Optional[str] = None,\n sparse_labels: bool = False,\n regression: bool = False,\n dtype: AcceptableDTypes = None,\n ):\n \"\"\"Creates a `CohenKappa` instance.\"\"\"\n super().__init__(name=name, dtype=dtype)\n\n if weightage not in (None, \"linear\", \"quadratic\"):\n raise ValueError(\"Unknown kappa weighting type.\")\n\n if num_classes == 2:\n self._update = self._update_binary_class_model\n elif num_classes > 2:\n self._update = self._update_multi_class_model\n else:\n raise ValueError(\n \"\"\"Number of classes must be\n greater than or euqal to two\"\"\"\n )\n\n self.weightage = weightage\n self.num_classes = num_classes\n self.regression = regression\n self.sparse_labels = sparse_labels\n self.conf_mtx = self.add_weight(\n \"conf_mtx\",\n shape=(self.num_classes, self.num_classes),\n initializer=tf.keras.initializers.zeros,\n dtype=tf.float32,\n )\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n \"\"\"Accumulates the confusion matrix condition statistics.\n\n Args:\n y_true: Labels assigned by the first annotator with shape\n `[num_samples,]`.\n y_pred: Labels assigned by the second annotator with shape\n `[num_samples,]`. The kappa statistic is symmetric,\n so swapping `y_true` and `y_pred` doesn't change the value.\n sample_weight (optional): for weighting labels in confusion matrix\n Defaults to `None`. 
The dtype for weights should be the same\n as the dtype for confusion matrix. For more details,\n please check `tf.math.confusion_matrix`.\n\n Returns:\n Update op.\n \"\"\"\n return self._update(y_true, y_pred, sample_weight)\n\n def _update_binary_class_model(self, y_true, y_pred, sample_weight=None):\n y_true = tf.cast(y_true, dtype=tf.int64)\n y_pred = tf.cast(y_pred, dtype=tf.float32)\n y_pred = tf.cast(y_pred > 0.5, dtype=tf.int64)\n return self._update_confusion_matrix(y_true, y_pred, sample_weight)\n\n @tf.function\n def _update_multi_class_model(self, y_true, y_pred, sample_weight=None):\n v = tf.argmax(y_true, axis=1) if not self.sparse_labels else y_true\n y_true = tf.cast(v, dtype=tf.int64)\n\n y_pred = self._cast_ypred(y_pred)\n\n return self._update_confusion_matrix(y_true, y_pred, sample_weight)\n\n @tf.function\n def _cast_ypred(self, y_pred):\n if tf.rank(y_pred) > 1:\n if not self.regression:\n y_pred = tf.cast(tf.argmax(y_pred, axis=-1), dtype=tf.int64)\n else:\n y_pred = tf.math.round(tf.math.abs(y_pred))\n y_pred = tf.cast(y_pred, dtype=tf.int64)\n else:\n y_pred = tf.cast(y_pred, dtype=tf.int64)\n return y_pred\n\n @tf.function\n def _safe_squeeze(self, y):\n y = tf.squeeze(y)\n\n # Check for scalar result\n if tf.rank(y) == 0:\n y = tf.expand_dims(y, 0)\n\n return y\n\n def _update_confusion_matrix(self, y_true, y_pred, sample_weight):\n y_true = self._safe_squeeze(y_true)\n y_pred = self._safe_squeeze(y_pred)\n\n new_conf_mtx = tf.math.confusion_matrix(\n labels=y_true,\n predictions=y_pred,\n num_classes=self.num_classes,\n weights=sample_weight,\n dtype=tf.float32,\n )\n\n return self.conf_mtx.assign_add(new_conf_mtx)\n\n def result(self):\n nb_ratings = tf.shape(self.conf_mtx)[0]\n weight_mtx = tf.ones([nb_ratings, nb_ratings], dtype=tf.float32)\n\n # 2. Create a weight matrix\n if self.weightage is None:\n diagonal = tf.zeros([nb_ratings], dtype=tf.float32)\n weight_mtx = tf.linalg.set_diag(weight_mtx, diagonal=diagonal)\n else:\n weight_mtx += tf.cast(tf.range(nb_ratings), dtype=tf.float32)\n weight_mtx = tf.cast(weight_mtx, dtype=self.dtype)\n\n if self.weightage == \"linear\":\n weight_mtx = tf.abs(weight_mtx - tf.transpose(weight_mtx))\n else:\n weight_mtx = tf.pow((weight_mtx - tf.transpose(weight_mtx)), 2)\n\n weight_mtx = tf.cast(weight_mtx, dtype=self.dtype)\n\n # 3. Get counts\n actual_ratings_hist = tf.reduce_sum(self.conf_mtx, axis=1)\n pred_ratings_hist = tf.reduce_sum(self.conf_mtx, axis=0)\n\n # 4. Get the outer product\n out_prod = pred_ratings_hist[..., None] * actual_ratings_hist[None, ...]\n\n # 5. Normalize the confusion matrix and outer product\n conf_mtx = self.conf_mtx / tf.reduce_sum(self.conf_mtx)\n out_prod = out_prod / tf.reduce_sum(out_prod)\n\n conf_mtx = tf.cast(conf_mtx, dtype=self.dtype)\n out_prod = tf.cast(out_prod, dtype=self.dtype)\n\n # 6. 
Calculate Kappa score\n numerator = tf.reduce_sum(conf_mtx * weight_mtx)\n denominator = tf.reduce_sum(out_prod * weight_mtx)\n return tf.cond(\n tf.math.is_nan(denominator),\n true_fn=lambda: 0.0,\n false_fn=lambda: 1 - (numerator / denominator),\n )\n\n def get_config(self):\n \"\"\"Returns the serializable config of the metric.\"\"\"\n\n config = {\n \"num_classes\": self.num_classes,\n \"weightage\": self.weightage,\n \"sparse_labels\": self.sparse_labels,\n \"regression\": self.regression,\n }\n base_config = super().get_config()\n return {**base_config, **config}\n\n def reset_states(self):\n \"\"\"Resets all of the metric state variables.\"\"\"\n\n for v in self.variables:\n K.set_value(\n v,\n np.zeros((self.num_classes, self.num_classes), v.dtype.as_numpy_dtype),\n )\n", "path": "tensorflow_addons/metrics/cohens_kappa.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements Cohen's Kappa.\"\"\"\n\nimport tensorflow as tf\nimport numpy as np\nimport tensorflow.keras.backend as K\nfrom tensorflow.keras.metrics import Metric\nfrom tensorflow_addons.utils.types import AcceptableDTypes, FloatTensorLike\n\nfrom typeguard import typechecked\nfrom typing import Optional\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass CohenKappa(Metric):\n \"\"\"Computes Kappa score between two raters.\n\n The score lies in the range `[-1, 1]`. A score of -1 represents\n complete disagreement between two raters whereas a score of 1\n represents complete agreement between the two raters.\n A score of 0 means agreement by chance.\n\n Note: As of now, this implementation considers all labels\n while calculating the Cohen's Kappa score.\n\n Args:\n num_classes: Number of unique classes in your dataset.\n weightage: (optional) Weighting to be considered for calculating\n kappa statistics. A valid value is one of\n [None, 'linear', 'quadratic']. Defaults to `None`\n sparse_labels: (bool) Valid only for multi-class scenario.\n If True, ground truth labels are expected to be integers\n and not one-hot encoded.\n regression: (bool) If set, that means the problem is being treated\n as a regression problem where you are regressing the predictions.\n **Note:** If you are regressing for the values, the the output layer\n should contain a single unit.\n name: (optional) String name of the metric instance\n dtype: (optional) Data type of the metric result. Defaults to `None`.\n\n Raises:\n ValueError: If the value passed for `weightage` is invalid\n i.e. 
not any one of [None, 'linear', 'quadratic'].\n\n Usage:\n\n >>> y_true = np.array([4, 4, 3, 4, 2, 4, 1, 1], dtype=np.int32)\n >>> y_pred = np.array([4, 4, 3, 4, 4, 2, 1, 1], dtype=np.int32)\n >>> weights = np.array([1, 1, 2, 5, 10, 2, 3, 3], dtype=np.int32)\n >>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)\n >>> metric.update_state(y_true , y_pred)\n <tf.Tensor: shape=(5, 5), dtype=float32, numpy=\n array([[0., 0., 0., 0., 0.],\n [0., 2., 0., 0., 0.],\n [0., 0., 0., 0., 1.],\n [0., 0., 0., 1., 0.],\n [0., 0., 1., 0., 3.]], dtype=float32)>\n >>> result = metric.result()\n >>> result.numpy()\n 0.61904764\n >>> # To use this with weights, sample_weight argument can be used.\n >>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)\n >>> metric.update_state(y_true , y_pred , sample_weight=weights)\n <tf.Tensor: shape=(5, 5), dtype=float32, numpy=\n array([[ 0., 0., 0., 0., 0.],\n [ 0., 6., 0., 0., 0.],\n [ 0., 0., 0., 0., 10.],\n [ 0., 0., 0., 2., 0.],\n [ 0., 0., 2., 0., 7.]], dtype=float32)>\n >>> result = metric.result()\n >>> result.numpy()\n 0.37209308\n\n Usage with `tf.keras` API:\n\n >>> inputs = tf.keras.Input(shape=(10,))\n >>> x = tf.keras.layers.Dense(10)(inputs)\n >>> outputs = tf.keras.layers.Dense(1)(x)\n >>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)\n >>> model.compile('sgd', loss='mse', metrics=[tfa.metrics.CohenKappa(num_classes=3, sparse_labels=True)])\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n num_classes: FloatTensorLike,\n name: str = \"cohen_kappa\",\n weightage: Optional[str] = None,\n sparse_labels: bool = False,\n regression: bool = False,\n dtype: AcceptableDTypes = None,\n ):\n \"\"\"Creates a `CohenKappa` instance.\"\"\"\n super().__init__(name=name, dtype=dtype)\n\n if weightage not in (None, \"linear\", \"quadratic\"):\n raise ValueError(\"Unknown kappa weighting type.\")\n\n if num_classes == 2:\n self._update = self._update_binary_class_model\n elif num_classes > 2:\n self._update = self._update_multi_class_model\n else:\n raise ValueError(\n \"\"\"Number of classes must be\n greater than or euqal to two\"\"\"\n )\n\n self.weightage = weightage\n self.num_classes = num_classes\n self.regression = regression\n self.sparse_labels = sparse_labels\n self.conf_mtx = self.add_weight(\n \"conf_mtx\",\n shape=(self.num_classes, self.num_classes),\n initializer=tf.keras.initializers.zeros,\n dtype=tf.float32,\n )\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n \"\"\"Accumulates the confusion matrix condition statistics.\n\n Args:\n y_true: Labels assigned by the first annotator with shape\n `[num_samples,]`.\n y_pred: Labels assigned by the second annotator with shape\n `[num_samples,]`. The kappa statistic is symmetric,\n so swapping `y_true` and `y_pred` doesn't change the value.\n sample_weight (optional): for weighting labels in confusion matrix\n Defaults to `None`. The dtype for weights should be the same\n as the dtype for confusion matrix. 
For more details,\n please check `tf.math.confusion_matrix`.\n\n Returns:\n Update op.\n \"\"\"\n return self._update(y_true, y_pred, sample_weight)\n\n def _update_binary_class_model(self, y_true, y_pred, sample_weight=None):\n y_true = tf.cast(y_true, dtype=tf.int64)\n y_pred = tf.cast(y_pred, dtype=tf.float32)\n y_pred = tf.cast(y_pred > 0.5, dtype=tf.int64)\n return self._update_confusion_matrix(y_true, y_pred, sample_weight)\n\n @tf.function\n def _update_multi_class_model(self, y_true, y_pred, sample_weight=None):\n v = tf.argmax(y_true, axis=1) if not self.sparse_labels else y_true\n y_true = tf.cast(v, dtype=tf.int64)\n\n y_pred = self._cast_ypred(y_pred)\n\n return self._update_confusion_matrix(y_true, y_pred, sample_weight)\n\n @tf.function\n def _cast_ypred(self, y_pred):\n if tf.rank(y_pred) > 1:\n if not self.regression:\n y_pred = tf.cast(tf.argmax(y_pred, axis=-1), dtype=tf.int64)\n else:\n y_pred = tf.math.round(tf.math.abs(y_pred))\n y_pred = tf.cast(y_pred, dtype=tf.int64)\n else:\n y_pred = tf.cast(y_pred, dtype=tf.int64)\n return y_pred\n\n @tf.function\n def _safe_squeeze(self, y):\n y = tf.squeeze(y)\n\n # Check for scalar result\n if tf.rank(y) == 0:\n y = tf.expand_dims(y, 0)\n\n return y\n\n def _update_confusion_matrix(self, y_true, y_pred, sample_weight):\n y_true = self._safe_squeeze(y_true)\n y_pred = self._safe_squeeze(y_pred)\n\n new_conf_mtx = tf.math.confusion_matrix(\n labels=y_true,\n predictions=y_pred,\n num_classes=self.num_classes,\n weights=sample_weight,\n dtype=tf.float32,\n )\n\n return self.conf_mtx.assign_add(new_conf_mtx)\n\n def result(self):\n nb_ratings = tf.shape(self.conf_mtx)[0]\n weight_mtx = tf.ones([nb_ratings, nb_ratings], dtype=tf.float32)\n\n # 2. Create a weight matrix\n if self.weightage is None:\n diagonal = tf.zeros([nb_ratings], dtype=tf.float32)\n weight_mtx = tf.linalg.set_diag(weight_mtx, diagonal=diagonal)\n else:\n weight_mtx += tf.cast(tf.range(nb_ratings), dtype=tf.float32)\n weight_mtx = tf.cast(weight_mtx, dtype=self.dtype)\n\n if self.weightage == \"linear\":\n weight_mtx = tf.abs(weight_mtx - tf.transpose(weight_mtx))\n else:\n weight_mtx = tf.pow((weight_mtx - tf.transpose(weight_mtx)), 2)\n\n weight_mtx = tf.cast(weight_mtx, dtype=self.dtype)\n\n # 3. Get counts\n actual_ratings_hist = tf.reduce_sum(self.conf_mtx, axis=1)\n pred_ratings_hist = tf.reduce_sum(self.conf_mtx, axis=0)\n\n # 4. Get the outer product\n out_prod = pred_ratings_hist[..., None] * actual_ratings_hist[None, ...]\n\n # 5. Normalize the confusion matrix and outer product\n conf_mtx = self.conf_mtx / tf.reduce_sum(self.conf_mtx)\n out_prod = out_prod / tf.reduce_sum(out_prod)\n\n conf_mtx = tf.cast(conf_mtx, dtype=self.dtype)\n out_prod = tf.cast(out_prod, dtype=self.dtype)\n\n # 6. 
Calculate Kappa score\n numerator = tf.reduce_sum(conf_mtx * weight_mtx)\n denominator = tf.reduce_sum(out_prod * weight_mtx)\n return tf.cond(\n tf.math.is_nan(denominator),\n true_fn=lambda: 0.0,\n false_fn=lambda: 1 - (numerator / denominator),\n )\n\n def get_config(self):\n \"\"\"Returns the serializable config of the metric.\"\"\"\n\n config = {\n \"num_classes\": self.num_classes,\n \"weightage\": self.weightage,\n \"sparse_labels\": self.sparse_labels,\n \"regression\": self.regression,\n }\n base_config = super().get_config()\n return {**base_config, **config}\n\n def reset_states(self):\n \"\"\"Resets all of the metric state variables.\"\"\"\n\n for v in self.variables:\n K.set_value(\n v,\n np.zeros((self.num_classes, self.num_classes), v.dtype.as_numpy_dtype),\n )\n", "path": "tensorflow_addons/metrics/cohens_kappa.py"}]}
| 3,705 | 360 |
gh_patches_debug_24323
|
rasdani/github-patches
|
git_diff
|
cltk__cltk-1116
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lexicon process for Latin fails on regex special characters
LatinLexiconProcess fails when regex special characters, e.g. single open parenthesis (i.e. ```(```) are included in tokenized input. Occurred while running MacOS 11.4; Python 3.9.5; CLTK 1.0.15; regex 2021.4.4 (but should fail in any case when this input is passed to the regex module). The solution is to escape the input ```lemma``` before running ```regex.match``` at https://github.com/cltk/cltk/blob/5dbfcf6fccade146d322cae036b35533aec59286/src/cltk/lexicon/lat.py#L70
I have written the patch and will make a PR soon.
Example and traceback:
```
from cltk import NLP
text = "Omnes igitur partes mundi (tangam autem maximas) calore fultae sustinentur." # Cic. Nat. D. 2.25
cltk_nlp = NLP(language="lat")
cltk_doc = cltk_nlp.analyze(text=test)
```
```
Traceback (most recent call last):
File "test.py", line 4, in <module>
cltk_doc = cltk_nlp.analyze(text=text)
File "[PATH]/lib/python3.9/site-packages/cltk/nlp.py", line 142, in analyze
doc = a_process.run(doc)
File "[PATH]/lib/python3.9/site-packages/cltk/lexicon/processes.py", line 45, in run
word.definition = lookup_algo.lookup(word.lemma)
File "[PATH]/lib/python3.9/site-packages/cltk/lexicon/lat.py", line 70, in lookup
matches = [key for key in keys if regex.match(rf"^{lemma}[0-9]?$", key)]
File "[PATH]/lib/python3.9/site-packages/cltk/lexicon/lat.py", line 70, in <listcomp>
matches = [key for key in keys if regex.match(rf"^{lemma}[0-9]?$", key)]
File "[PATH]/lib/python3.9/site-packages/regex/regex.py", line 253, in match
pat = _compile(pattern, flags, ignore_unused, kwargs, True)
File "[PATH]/lib/python3.9/site-packages/regex/regex.py", line 532, in _compile
raise error(caught_exception.msg, caught_exception.pattern,
regex._regex_core.error: missing ) at position 9
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cltk/lexicon/lat.py`
Content:
```
1 """Code for querying Latin language dictionaries/lexicons."""
2
3 import regex
4 import yaml
5
6 from cltk.core.exceptions import CLTKException
7 from cltk.data.fetch import FetchCorpus
8 from cltk.utils.file_operations import make_cltk_path
9 from cltk.utils.utils import query_yes_no
10
11 __author__ = ["Clément Besnier <[email protected]>"]
12
13
14 class LatinLewisLexicon:
15 """Access a digital form of Charlton T. Lewis's *An Elementary Latin Dictionary* (1890)."""
16
17 def __init__(self, interactive: bool = True):
18 self.interactive = interactive
19 self.lewis_yaml_fp = make_cltk_path(
20 "lat", "lexicon", "cltk_lat_lewis_elementary_lexicon", "lewis.yaml"
21 )
22 try:
23 self.entries = self._load_entries()
24 except FileNotFoundError:
25 if self.interactive:
26 dl_msg = f"This part of the CLTK depends upon Lewis's *An Elementary Latin Dictionary* (1890)."
27 print(dl_msg)
28 dl_question = "Do you want to download this?"
29 do_download = query_yes_no(question=dl_question)
30 else:
31 do_download = True
32 if do_download:
33 fetch_corpus = FetchCorpus(language="lat")
34 fetch_corpus.import_corpus(
35 corpus_name="cltk_lat_lewis_elementary_lexicon"
36 )
37 else:
38 raise CLTKException(
39 f"File '{self.lewis_yaml_fp}' is not found. It is required for this class."
40 )
41 self.entries = self._load_entries()
42
43 def lookup(self, lemma: str) -> str:
44 """Perform match of a lemma against headwords. If more than one match,
45 then return the concatenated entries. For example:
46
47 >>> lll = LatinLewisLexicon()
48 >>> lll.lookup("clemens")[:50]
49 'clēmēns entis (abl. -tī; rarely -te, L.), adj. wit'
50 >>> lll.lookup("omnia")
51 ''
52 >>> lll.lookup(".")
53 ''
54 >>> lll.lookup("123")
55 ''
56 >>> lll.lookup("175.")
57 ''
58 """
59 if not self.entries:
60 raise CLTKException(
61 "No lexicon entries found in the .yaml file. This should never happen."
62 )
63
64 if regex.match(r"^[0-9\.\?,\:;\!\<\>\-]*$", lemma) is not None:
65 return ""
66
67 lemma = lemma.lower()
68
69 keys = self.entries.keys()
70 matches = [key for key in keys if regex.match(rf"^{lemma}[0-9]?$", key)]
71 n_matches = len(matches)
72 if n_matches > 1:
73 return "\n".join([self.entries[key] for key in matches])
74 elif n_matches == 1:
75 return self.entries[matches[0]]
76 else:
77 return ""
78
79 def _load_entries(self):
80 """Read the yaml file of the lexion."""
81 with open(self.lewis_yaml_fp) as file_open:
82 entries = yaml.load(file_open, Loader=yaml.Loader)
83 return entries
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cltk/lexicon/lat.py b/src/cltk/lexicon/lat.py
--- a/src/cltk/lexicon/lat.py
+++ b/src/cltk/lexicon/lat.py
@@ -47,6 +47,8 @@
>>> lll = LatinLewisLexicon()
>>> lll.lookup("clemens")[:50]
'clēmēns entis (abl. -tī; rarely -te, L.), adj. wit'
+ >>> all(word in lll.lookup("levis") for word in ["levis","lēvis"]) # Test for concatenated entries
+ True
>>> lll.lookup("omnia")
''
>>> lll.lookup(".")
@@ -55,6 +57,8 @@
''
>>> lll.lookup("175.")
''
+ >>> lll.lookup("(") # Test for regex special character
+ ''
"""
if not self.entries:
raise CLTKException(
@@ -64,7 +68,7 @@
if regex.match(r"^[0-9\.\?,\:;\!\<\>\-]*$", lemma) is not None:
return ""
- lemma = lemma.lower()
+ lemma = regex.escape(lemma.lower())
keys = self.entries.keys()
matches = [key for key in keys if regex.match(rf"^{lemma}[0-9]?$", key)]
|
{"golden_diff": "diff --git a/src/cltk/lexicon/lat.py b/src/cltk/lexicon/lat.py\n--- a/src/cltk/lexicon/lat.py\n+++ b/src/cltk/lexicon/lat.py\n@@ -47,6 +47,8 @@\n >>> lll = LatinLewisLexicon()\n >>> lll.lookup(\"clemens\")[:50]\n 'cl\u0113m\u0113ns entis (abl. -t\u012b; rarely -te, L.), adj. wit'\n+ >>> all(word in lll.lookup(\"levis\") for word in [\"levis\",\"l\u0113vis\"]) # Test for concatenated entries\n+ True\n >>> lll.lookup(\"omnia\")\n ''\n >>> lll.lookup(\".\")\n@@ -55,6 +57,8 @@\n ''\n >>> lll.lookup(\"175.\")\n ''\n+ >>> lll.lookup(\"(\") # Test for regex special character\n+ ''\n \"\"\"\n if not self.entries:\n raise CLTKException(\n@@ -64,7 +68,7 @@\n if regex.match(r\"^[0-9\\.\\?,\\:;\\!\\<\\>\\-]*$\", lemma) is not None:\n return \"\"\n \n- lemma = lemma.lower()\n+ lemma = regex.escape(lemma.lower())\n \n keys = self.entries.keys()\n matches = [key for key in keys if regex.match(rf\"^{lemma}[0-9]?$\", key)]\n", "issue": "Lexicon process for Latin fails on regex special characters\nLatinLexiconProcess fails when regex special characters, e.g. single open parenthesis (i.e. ```(```) are included in tokenized input. Occurred while running MacOS 11.4; Python 3.9.5; CLTK 1.0.15; regex 2021.4.4 (but should fail in any case when this input is passed to the regex module). The solution is to escape the input ```lemma``` before running ```regex.match``` at https://github.com/cltk/cltk/blob/5dbfcf6fccade146d322cae036b35533aec59286/src/cltk/lexicon/lat.py#L70\r\n\r\nI have written the patch and will make a PR soon.\r\n\r\nExample and traceback:\r\n\r\n```\r\nfrom cltk import NLP\r\ntext = \"Omnes igitur partes mundi (tangam autem maximas) calore fultae sustinentur.\" # Cic. Nat. D. 2.25\r\ncltk_nlp = NLP(language=\"lat\")\r\ncltk_doc = cltk_nlp.analyze(text=test)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 4, in <module>\r\n cltk_doc = cltk_nlp.analyze(text=text)\r\n File \"[PATH]/lib/python3.9/site-packages/cltk/nlp.py\", line 142, in analyze\r\n doc = a_process.run(doc)\r\n File \"[PATH]/lib/python3.9/site-packages/cltk/lexicon/processes.py\", line 45, in run\r\n word.definition = lookup_algo.lookup(word.lemma)\r\n File \"[PATH]/lib/python3.9/site-packages/cltk/lexicon/lat.py\", line 70, in lookup\r\n matches = [key for key in keys if regex.match(rf\"^{lemma}[0-9]?$\", key)]\r\n File \"[PATH]/lib/python3.9/site-packages/cltk/lexicon/lat.py\", line 70, in <listcomp>\r\n matches = [key for key in keys if regex.match(rf\"^{lemma}[0-9]?$\", key)]\r\n File \"[PATH]/lib/python3.9/site-packages/regex/regex.py\", line 253, in match\r\n pat = _compile(pattern, flags, ignore_unused, kwargs, True)\r\n File \"[PATH]/lib/python3.9/site-packages/regex/regex.py\", line 532, in _compile\r\n raise error(caught_exception.msg, caught_exception.pattern,\r\nregex._regex_core.error: missing ) at position 9\r\n```\n", "before_files": [{"content": "\"\"\"Code for querying Latin language dictionaries/lexicons.\"\"\"\n\nimport regex\nimport yaml\n\nfrom cltk.core.exceptions import CLTKException\nfrom cltk.data.fetch import FetchCorpus\nfrom cltk.utils.file_operations import make_cltk_path\nfrom cltk.utils.utils import query_yes_no\n\n__author__ = [\"Cl\u00e9ment Besnier <[email protected]>\"]\n\n\nclass LatinLewisLexicon:\n \"\"\"Access a digital form of Charlton T. 
Lewis's *An Elementary Latin Dictionary* (1890).\"\"\"\n\n def __init__(self, interactive: bool = True):\n self.interactive = interactive\n self.lewis_yaml_fp = make_cltk_path(\n \"lat\", \"lexicon\", \"cltk_lat_lewis_elementary_lexicon\", \"lewis.yaml\"\n )\n try:\n self.entries = self._load_entries()\n except FileNotFoundError:\n if self.interactive:\n dl_msg = f\"This part of the CLTK depends upon Lewis's *An Elementary Latin Dictionary* (1890).\"\n print(dl_msg)\n dl_question = \"Do you want to download this?\"\n do_download = query_yes_no(question=dl_question)\n else:\n do_download = True\n if do_download:\n fetch_corpus = FetchCorpus(language=\"lat\")\n fetch_corpus.import_corpus(\n corpus_name=\"cltk_lat_lewis_elementary_lexicon\"\n )\n else:\n raise CLTKException(\n f\"File '{self.lewis_yaml_fp}' is not found. It is required for this class.\"\n )\n self.entries = self._load_entries()\n\n def lookup(self, lemma: str) -> str:\n \"\"\"Perform match of a lemma against headwords. If more than one match,\n then return the concatenated entries. For example:\n\n >>> lll = LatinLewisLexicon()\n >>> lll.lookup(\"clemens\")[:50]\n 'cl\u0113m\u0113ns entis (abl. -t\u012b; rarely -te, L.), adj. wit'\n >>> lll.lookup(\"omnia\")\n ''\n >>> lll.lookup(\".\")\n ''\n >>> lll.lookup(\"123\")\n ''\n >>> lll.lookup(\"175.\")\n ''\n \"\"\"\n if not self.entries:\n raise CLTKException(\n \"No lexicon entries found in the .yaml file. This should never happen.\"\n )\n\n if regex.match(r\"^[0-9\\.\\?,\\:;\\!\\<\\>\\-]*$\", lemma) is not None:\n return \"\"\n\n lemma = lemma.lower()\n\n keys = self.entries.keys()\n matches = [key for key in keys if regex.match(rf\"^{lemma}[0-9]?$\", key)]\n n_matches = len(matches)\n if n_matches > 1:\n return \"\\n\".join([self.entries[key] for key in matches])\n elif n_matches == 1:\n return self.entries[matches[0]]\n else:\n return \"\"\n\n def _load_entries(self):\n \"\"\"Read the yaml file of the lexion.\"\"\"\n with open(self.lewis_yaml_fp) as file_open:\n entries = yaml.load(file_open, Loader=yaml.Loader)\n return entries\n", "path": "src/cltk/lexicon/lat.py"}], "after_files": [{"content": "\"\"\"Code for querying Latin language dictionaries/lexicons.\"\"\"\n\nimport regex\nimport yaml\n\nfrom cltk.core.exceptions import CLTKException\nfrom cltk.data.fetch import FetchCorpus\nfrom cltk.utils.file_operations import make_cltk_path\nfrom cltk.utils.utils import query_yes_no\n\n__author__ = [\"Cl\u00e9ment Besnier <[email protected]>\"]\n\n\nclass LatinLewisLexicon:\n \"\"\"Access a digital form of Charlton T. Lewis's *An Elementary Latin Dictionary* (1890).\"\"\"\n\n def __init__(self, interactive: bool = True):\n self.interactive = interactive\n self.lewis_yaml_fp = make_cltk_path(\n \"lat\", \"lexicon\", \"cltk_lat_lewis_elementary_lexicon\", \"lewis.yaml\"\n )\n try:\n self.entries = self._load_entries()\n except FileNotFoundError:\n if self.interactive:\n dl_msg = f\"This part of the CLTK depends upon Lewis's *An Elementary Latin Dictionary* (1890).\"\n print(dl_msg)\n dl_question = \"Do you want to download this?\"\n do_download = query_yes_no(question=dl_question)\n else:\n do_download = True\n if do_download:\n fetch_corpus = FetchCorpus(language=\"lat\")\n fetch_corpus.import_corpus(\n corpus_name=\"cltk_lat_lewis_elementary_lexicon\"\n )\n else:\n raise CLTKException(\n f\"File '{self.lewis_yaml_fp}' is not found. 
It is required for this class.\"\n )\n self.entries = self._load_entries()\n\n def lookup(self, lemma: str) -> str:\n \"\"\"Perform match of a lemma against headwords. If more than one match,\n then return the concatenated entries. For example:\n\n >>> lll = LatinLewisLexicon()\n >>> lll.lookup(\"clemens\")[:50]\n 'cl\u0113m\u0113ns entis (abl. -t\u012b; rarely -te, L.), adj. wit'\n >>> all(word in lll.lookup(\"levis\") for word in [\"levis\",\"l\u0113vis\"]) # Test for concatenated entries\n True\n >>> lll.lookup(\"omnia\")\n ''\n >>> lll.lookup(\".\")\n ''\n >>> lll.lookup(\"123\")\n ''\n >>> lll.lookup(\"175.\")\n ''\n >>> lll.lookup(\"(\") # Test for regex special character\n ''\n \"\"\"\n if not self.entries:\n raise CLTKException(\n \"No lexicon entries found in the .yaml file. This should never happen.\"\n )\n\n if regex.match(r\"^[0-9\\.\\?,\\:;\\!\\<\\>\\-]*$\", lemma) is not None:\n return \"\"\n\n lemma = regex.escape(lemma.lower())\n\n keys = self.entries.keys()\n matches = [key for key in keys if regex.match(rf\"^{lemma}[0-9]?$\", key)]\n n_matches = len(matches)\n if n_matches > 1:\n return \"\\n\".join([self.entries[key] for key in matches])\n elif n_matches == 1:\n return self.entries[matches[0]]\n else:\n return \"\"\n\n def _load_entries(self):\n \"\"\"Read the yaml file of the lexion.\"\"\"\n with open(self.lewis_yaml_fp) as file_open:\n entries = yaml.load(file_open, Loader=yaml.Loader)\n return entries\n", "path": "src/cltk/lexicon/lat.py"}]}
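The record above turns on escaping a user-supplied lemma before it is interpolated into a pattern. A minimal standalone sketch of the difference, using the third-party `regex` module as in the patch (the lemma values here are made up for illustration):

```python
import regex

lemma = "("  # token containing a regex metacharacter, as in the reported crash

# Interpolating it raw produces an invalid pattern: "(" opens an unclosed group,
# so regex.match(rf"^{lemma}[0-9]?$", "levis") raises regex.error ("missing )").

# Escaping first keeps the lookup safe: the metacharacter is matched literally.
safe = regex.escape(lemma.lower())
print(regex.match(rf"^{safe}[0-9]?$", "levis"))                    # None -> lookup() returns ""
print(regex.match(rf"^{regex.escape('levis')}[0-9]?$", "levis"))   # match object
```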
num_tokens: 1,693
num_tokens_diff: 314

problem_id: gh_patches_debug_1348
source: rasdani/github-patches
task_type: git_diff
in_source_id: translate__pootle-5024

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception in terminology management view
When visiting https://mozilla.locamotion.org/eu/firefox/terminology/ the following exception is thrown:
`'SortedRelatedManager' object does not support indexing`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_terminology/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.core.urlresolvers import reverse
10 from django.shortcuts import render
11
12 from pootle.core.decorators import get_path_obj, permission_required
13 from pootle_app.views.admin import util
14 from pootle_store.models import Store, Unit
15
16 from .forms import term_unit_form_factory
17
18
19 def get_terminology_filename(translation_project):
20 try:
21 # See if a terminology store already exists
22 return translation_project.stores.live().filter(
23 name__startswith='pootle-terminology.',
24 ).values_list('name', flat=True)[0]
25 except IndexError:
26 pass
27
28 return (
29 'pootle-terminology.%s'
30 % translation_project.project.filetypes[0].extension)
31
32
33 def manage_store(request, ctx, language, term_store):
34 TermUnitForm = term_unit_form_factory(term_store)
35 template_name = 'translation_projects/terminology/manage.html'
36 return util.edit(request, template_name, Unit, ctx,
37 None, None, queryset=term_store.units, can_delete=True,
38 form=TermUnitForm)
39
40
41 @get_path_obj
42 @permission_required('administrate')
43 def manage(request, translation_project):
44 ctx = {
45 'page': 'admin-terminology',
46
47 'browse_url': reverse('pootle-tp-browse', kwargs={
48 'language_code': translation_project.language.code,
49 'project_code': translation_project.project.code,
50 }),
51 'translate_url': reverse('pootle-tp-translate', kwargs={
52 'language_code': translation_project.language.code,
53 'project_code': translation_project.project.code,
54 }),
55
56 'translation_project': translation_project,
57 'language': translation_project.language,
58 'project': translation_project.project,
59 'source_language': translation_project.project.source_language,
60 'directory': translation_project.directory,
61 }
62
63 if translation_project.project.is_terminology:
64 # Which file should we edit?
65 stores = list(Store.objects.live().filter(
66 translation_project=translation_project,
67 ))
68 if len(stores) == 1:
69 # There is only one, and we're not going to offer file-level
70 # activities, so let's just edit the one that is there.
71 return manage_store(request, ctx, ctx['language'], stores[0])
72 elif len(stores) > 1:
73 for store in stores:
74 path_length = len(translation_project.pootle_path)
75 store.nice_name = store.pootle_path[path_length:]
76
77 ctx['stores'] = stores
78 return render(request,
79 "translation_projects/terminology/stores.html", ctx)
80
81 try:
82 terminology_filename = get_terminology_filename(translation_project)
83 term_store = Store.objects.get(
84 pootle_path=translation_project.pootle_path + terminology_filename,
85 )
86 return manage_store(request, ctx, ctx['language'], term_store)
87 except Store.DoesNotExist:
88 return render(request, "translation_projects/terminology/manage.html",
89 ctx)
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/pootle/apps/pootle_terminology/views.py b/pootle/apps/pootle_terminology/views.py
--- a/pootle/apps/pootle_terminology/views.py
+++ b/pootle/apps/pootle_terminology/views.py
@@ -27,7 +27,7 @@
return (
'pootle-terminology.%s'
- % translation_project.project.filetypes[0].extension)
+ % translation_project.project.filetypes.first().extension)
def manage_store(request, ctx, language, term_store):
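The one-line change above works because Django-style related managers expose queryset methods such as `first()` but not positional indexing. A framework-free toy sketch of that distinction (the `FakeRelatedManager` class is invented here purely for illustration; it is not Pootle or Django code):

```python
class FakeRelatedManager:
    """Stand-in for a related manager: query methods, no __getitem__."""

    def __init__(self, items):
        self._items = list(items)

    def first(self):
        return self._items[0] if self._items else None


filetypes = FakeRelatedManager(["po", "xliff"])
print(filetypes.first())      # 'po' -- what the patched code relies on
try:
    filetypes[0]              # what the buggy line effectively attempted
except TypeError as err:
    print(err)                # object is not subscriptable / does not support indexing
```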
verification_info:
{"golden_diff": "diff --git a/pootle/apps/pootle_terminology/views.py b/pootle/apps/pootle_terminology/views.py\n--- a/pootle/apps/pootle_terminology/views.py\n+++ b/pootle/apps/pootle_terminology/views.py\n@@ -27,7 +27,7 @@\n \n return (\n 'pootle-terminology.%s'\n- % translation_project.project.filetypes[0].extension)\n+ % translation_project.project.filetypes.first().extension)\n \n \n def manage_store(request, ctx, language, term_store):\n", "issue": "Exception in terminology management view\nWhen visiting https://mozilla.locamotion.org/eu/firefox/terminology/ the following exception is thrown:\n\n`'SortedRelatedManager' object does not support indexing`\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.core.urlresolvers import reverse\nfrom django.shortcuts import render\n\nfrom pootle.core.decorators import get_path_obj, permission_required\nfrom pootle_app.views.admin import util\nfrom pootle_store.models import Store, Unit\n\nfrom .forms import term_unit_form_factory\n\n\ndef get_terminology_filename(translation_project):\n try:\n # See if a terminology store already exists\n return translation_project.stores.live().filter(\n name__startswith='pootle-terminology.',\n ).values_list('name', flat=True)[0]\n except IndexError:\n pass\n\n return (\n 'pootle-terminology.%s'\n % translation_project.project.filetypes[0].extension)\n\n\ndef manage_store(request, ctx, language, term_store):\n TermUnitForm = term_unit_form_factory(term_store)\n template_name = 'translation_projects/terminology/manage.html'\n return util.edit(request, template_name, Unit, ctx,\n None, None, queryset=term_store.units, can_delete=True,\n form=TermUnitForm)\n\n\n@get_path_obj\n@permission_required('administrate')\ndef manage(request, translation_project):\n ctx = {\n 'page': 'admin-terminology',\n\n 'browse_url': reverse('pootle-tp-browse', kwargs={\n 'language_code': translation_project.language.code,\n 'project_code': translation_project.project.code,\n }),\n 'translate_url': reverse('pootle-tp-translate', kwargs={\n 'language_code': translation_project.language.code,\n 'project_code': translation_project.project.code,\n }),\n\n 'translation_project': translation_project,\n 'language': translation_project.language,\n 'project': translation_project.project,\n 'source_language': translation_project.project.source_language,\n 'directory': translation_project.directory,\n }\n\n if translation_project.project.is_terminology:\n # Which file should we edit?\n stores = list(Store.objects.live().filter(\n translation_project=translation_project,\n ))\n if len(stores) == 1:\n # There is only one, and we're not going to offer file-level\n # activities, so let's just edit the one that is there.\n return manage_store(request, ctx, ctx['language'], stores[0])\n elif len(stores) > 1:\n for store in stores:\n path_length = len(translation_project.pootle_path)\n store.nice_name = store.pootle_path[path_length:]\n\n ctx['stores'] = stores\n return render(request,\n \"translation_projects/terminology/stores.html\", ctx)\n\n try:\n terminology_filename = get_terminology_filename(translation_project)\n term_store = Store.objects.get(\n pootle_path=translation_project.pootle_path + terminology_filename,\n )\n return manage_store(request, ctx, 
ctx['language'], term_store)\n except Store.DoesNotExist:\n return render(request, \"translation_projects/terminology/manage.html\",\n ctx)\n", "path": "pootle/apps/pootle_terminology/views.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.core.urlresolvers import reverse\nfrom django.shortcuts import render\n\nfrom pootle.core.decorators import get_path_obj, permission_required\nfrom pootle_app.views.admin import util\nfrom pootle_store.models import Store, Unit\n\nfrom .forms import term_unit_form_factory\n\n\ndef get_terminology_filename(translation_project):\n try:\n # See if a terminology store already exists\n return translation_project.stores.live().filter(\n name__startswith='pootle-terminology.',\n ).values_list('name', flat=True)[0]\n except IndexError:\n pass\n\n return (\n 'pootle-terminology.%s'\n % translation_project.project.filetypes.first().extension)\n\n\ndef manage_store(request, ctx, language, term_store):\n TermUnitForm = term_unit_form_factory(term_store)\n template_name = 'translation_projects/terminology/manage.html'\n return util.edit(request, template_name, Unit, ctx,\n None, None, queryset=term_store.units, can_delete=True,\n form=TermUnitForm)\n\n\n@get_path_obj\n@permission_required('administrate')\ndef manage(request, translation_project):\n ctx = {\n 'page': 'admin-terminology',\n\n 'browse_url': reverse('pootle-tp-browse', kwargs={\n 'language_code': translation_project.language.code,\n 'project_code': translation_project.project.code,\n }),\n 'translate_url': reverse('pootle-tp-translate', kwargs={\n 'language_code': translation_project.language.code,\n 'project_code': translation_project.project.code,\n }),\n\n 'translation_project': translation_project,\n 'language': translation_project.language,\n 'project': translation_project.project,\n 'source_language': translation_project.project.source_language,\n 'directory': translation_project.directory,\n }\n\n if translation_project.project.is_terminology:\n # Which file should we edit?\n stores = list(Store.objects.live().filter(\n translation_project=translation_project,\n ))\n if len(stores) == 1:\n # There is only one, and we're not going to offer file-level\n # activities, so let's just edit the one that is there.\n return manage_store(request, ctx, ctx['language'], stores[0])\n elif len(stores) > 1:\n for store in stores:\n path_length = len(translation_project.pootle_path)\n store.nice_name = store.pootle_path[path_length:]\n\n ctx['stores'] = stores\n return render(request,\n \"translation_projects/terminology/stores.html\", ctx)\n\n try:\n terminology_filename = get_terminology_filename(translation_project)\n term_store = Store.objects.get(\n pootle_path=translation_project.pootle_path + terminology_filename,\n )\n return manage_store(request, ctx, ctx['language'], term_store)\n except Store.DoesNotExist:\n return render(request, \"translation_projects/terminology/manage.html\",\n ctx)\n", "path": "pootle/apps/pootle_terminology/views.py"}]}
num_tokens: 1,183
num_tokens_diff: 125

problem_id: gh_patches_debug_17740
source: rasdani/github-patches
task_type: git_diff
in_source_id: carpentries__amy-359

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add administrative page
The application needs an administrative page so that we can add admin accounts, etc.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `workshops/admin.py`
Content:
```
1 from django.contrib import admin
2 from workshops.models import Airport
3 from workshops.models import Site
4
5 admin.site.register(Airport)
6 admin.site.register(Site)
7
```
Path: `amy/settings.py`
Content:
```
1 """
2 Django settings for amy project.
3
4 For more information on this file, see
5 https://docs.djangoproject.com/en/1.7/topics/settings/
6
7 For the full list of settings and their values, see
8 https://docs.djangoproject.com/en/1.7/ref/settings/
9 """
10
11 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
12 import os
13 import json
14
15 from django.utils.translation import ugettext_lazy as _
16
17 BASE_DIR = os.path.dirname(os.path.dirname(__file__))
18
19
20 # Quick-start development settings - unsuitable for production
21 # See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
22
23
24 # SECURITY WARNING: don't run with DEBUG turned on in production!
25 DEBUG = json.loads(os.environ.get('AMY_DEBUG', 'true'))
26 # For deployment in production:
27 # AMY_DEBUG=false AMY_SECRET_KEY="..." ./manage.py runserver ...
28
29 if DEBUG:
30 SECRET_KEY = '3l$35+@a%g!(^y^98oi%ei+%+yvtl3y0k^_7-fmx2oj09-ac5@'
31 else:
32 SECRET_KEY = None
33 SECRET_KEY = os.environ.get('AMY_SECRET_KEY', SECRET_KEY)
34
35
36 # New template settings (for Django >= 1.8)
37 TEMPLATES = [
38 {
39 'BACKEND': 'django.template.backends.django.DjangoTemplates',
40 'APP_DIRS': True,
41 'OPTIONS': {
42 'debug': DEBUG,
43
44 # default processors + a request processor + amy-version
45 'context_processors': [
46 'django.contrib.auth.context_processors.auth',
47 'django.template.context_processors.debug',
48 'django.template.context_processors.i18n',
49 'django.template.context_processors.media',
50 'django.template.context_processors.static',
51 'django.template.context_processors.tz',
52 'django.contrib.messages.context_processors.messages',
53 'django.core.context_processors.request',
54 'workshops.context_processors.version',
55 ],
56
57 # Warn viewers of invalid template strings
58 'string_if_invalid': 'XXX-unset-variable-XXX',
59 }
60 }
61 ]
62
63 ALLOWED_HOSTS = [
64 'software-carpentry.org',
65 'software-carpentry.org.',
66 'amy.software-carpentry.org',
67 'amy.software-carpentry.org.'
68 ]
69
70
71 # Application definition
72
73 INSTALLED_APPS = (
74 'django.contrib.auth',
75 'django.contrib.contenttypes',
76 'django.contrib.sessions',
77 'django.contrib.messages',
78 'django.contrib.staticfiles',
79 'workshops',
80 # this should be after 'workshops' because templates in
81 # 'templates/registration/' clash
82 'django.contrib.admin',
83 'crispy_forms',
84 'selectable',
85 'django_countries',
86 )
87
88 CRISPY_TEMPLATE_PACK = 'bootstrap3'
89
90 MIDDLEWARE_CLASSES = (
91 'django.contrib.sessions.middleware.SessionMiddleware',
92 'django.middleware.common.CommonMiddleware',
93 'django.middleware.csrf.CsrfViewMiddleware',
94 'django.contrib.auth.middleware.AuthenticationMiddleware',
95 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
96 'django.contrib.messages.middleware.MessageMiddleware',
97 'django.middleware.clickjacking.XFrameOptionsMiddleware',
98 )
99
100 ROOT_URLCONF = 'amy.urls'
101
102 WSGI_APPLICATION = 'amy.wsgi.application'
103
104 from django.contrib.messages import constants as message_constants
105 MESSAGE_TAGS = {
106 message_constants.INFO: 'alert-info',
107 message_constants.SUCCESS: 'alert-success',
108 message_constants.WARNING: 'alert-warning',
109 message_constants.ERROR: 'alert-danger',
110 }
111
112
113 # Database
114 # https://docs.djangoproject.com/en/1.7/ref/settings/#databases
115
116 DATABASES = {
117 'default': {
118 'ENGINE': 'django.db.backends.sqlite3',
119 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
120 }
121 }
122
123 # Authentication
124
125 AUTH_USER_MODEL = 'workshops.Person'
126
127 # Internationalization
128 # https://docs.djangoproject.com/en/1.7/topics/i18n/
129
130 LANGUAGE_CODE = 'en-us'
131
132 TIME_ZONE = 'EST'
133
134 USE_I18N = True
135
136 USE_L10N = True
137
138 USE_TZ = True
139
140
141 # Static files (CSS, JavaScript, Images)
142 # https://docs.djangoproject.com/en/1.7/howto/static-files/
143
144 STATIC_URL = '/static/'
145 STATIC_ROOT = os.path.join(BASE_DIR, 'static')
146 STATICFILES_DIRS = (
147 os.path.join(BASE_DIR, 'bower_components'),
148 )
149
150 # if "next" (or "?next") variable is not set when logging in, redirect to
151 # workshops
152 LOGIN_REDIRECT_URL = '/workshops/'
153
154 # here's where @login_required redirects to:
155 LOGIN_URL = '/account/login/'
156
157 # explicitely add European Union as a country
158 COUNTRIES_OVERRIDE = {
159 'EU': _('European Union'),
160 }
161
```
Path: `amy/urls.py`
Content:
```
1 from django.conf.urls import patterns, include, url
2 from django.contrib import admin
3
4 urlpatterns = patterns('',
5 url(r'^workshops/', include('workshops.urls')),
6 url(r'^admin/', include(admin.site.urls)),
7 # url(r'^account/', include('django.contrib.auth.urls')),
8
9 # django views for authentication
10 # taken in almost exact form from django.contrib.auth.views.urls:
11 url(r'^account/login/$', 'django.contrib.auth.views.login',
12 {"template_name": "account/login.html",
13 "extra_context": {"title": "Log in"}}, name='login'),
14 url(r'^account/logout/$', 'django.contrib.auth.views.logout',
15 {"template_name": "account/logged_out.html"}, name='logout'),
16
17 # TODO: implement URLs below (add templates, etc.)
18 # url(r'^account/password_change/$', 'django.contrib.auth.views.password_change', name='password_change'),
19 # url(r'^account/password_change/done/$', 'django.contrib.auth.views.password_change_done', name='password_change_done'),
20 # url(r'^account/password_reset/$', 'django.contrib.auth.views.password_reset', name='password_reset'),
21 # url(r'^account/password_reset/done/$', 'django.contrib.auth.views.password_reset_done', name='password_reset_done'),
22 # url(r'^account/reset/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',
23 # 'django.contrib.auth.views.password_reset_confirm',
24 # name='password_reset_confirm'),
25 # url(r'^account/reset/done/$', 'django.contrib.auth.views.password_reset_complete', name='password_reset_complete'),
26
27 url(r'^selectable/', include('selectable.urls')),
28 )
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/amy/settings.py b/amy/settings.py
--- a/amy/settings.py
+++ b/amy/settings.py
@@ -79,7 +79,6 @@
'workshops',
# this should be after 'workshops' because templates in
# 'templates/registration/' clash
- 'django.contrib.admin',
'crispy_forms',
'selectable',
'django_countries',
diff --git a/amy/urls.py b/amy/urls.py
--- a/amy/urls.py
+++ b/amy/urls.py
@@ -3,7 +3,6 @@
urlpatterns = patterns('',
url(r'^workshops/', include('workshops.urls')),
- url(r'^admin/', include(admin.site.urls)),
# url(r'^account/', include('django.contrib.auth.urls')),
# django views for authentication
diff --git a/workshops/admin.py b/workshops/admin.py
deleted file mode 100644
--- a/workshops/admin.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from django.contrib import admin
-from workshops.models import Airport
-from workshops.models import Site
-
-admin.site.register(Airport)
-admin.site.register(Site)
verification_info:
{"golden_diff": "diff --git a/amy/settings.py b/amy/settings.py\n--- a/amy/settings.py\n+++ b/amy/settings.py\n@@ -79,7 +79,6 @@\n 'workshops',\n # this should be after 'workshops' because templates in\n # 'templates/registration/' clash\n- 'django.contrib.admin',\n 'crispy_forms',\n 'selectable',\n 'django_countries',\ndiff --git a/amy/urls.py b/amy/urls.py\n--- a/amy/urls.py\n+++ b/amy/urls.py\n@@ -3,7 +3,6 @@\n \n urlpatterns = patterns('',\n url(r'^workshops/', include('workshops.urls')),\n- url(r'^admin/', include(admin.site.urls)),\n # url(r'^account/', include('django.contrib.auth.urls')),\n \n # django views for authentication\ndiff --git a/workshops/admin.py b/workshops/admin.py\ndeleted file mode 100644\n--- a/workshops/admin.py\n+++ /dev/null\n@@ -1,6 +0,0 @@\n-from django.contrib import admin\n-from workshops.models import Airport\n-from workshops.models import Site\n-\n-admin.site.register(Airport)\n-admin.site.register(Site)\n", "issue": "Add administrative page\nThe application needs an administrative page so that we can add admin accounts, etc.\n\n", "before_files": [{"content": "from django.contrib import admin\nfrom workshops.models import Airport\nfrom workshops.models import Site\n\nadmin.site.register(Airport)\nadmin.site.register(Site)\n", "path": "workshops/admin.py"}, {"content": "\"\"\"\nDjango settings for amy project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.7/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.7/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport os\nimport json\n\nfrom django.utils.translation import ugettext_lazy as _\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/\n\n\n# SECURITY WARNING: don't run with DEBUG turned on in production!\nDEBUG = json.loads(os.environ.get('AMY_DEBUG', 'true'))\n# For deployment in production:\n# AMY_DEBUG=false AMY_SECRET_KEY=\"...\" ./manage.py runserver ...\n\nif DEBUG:\n SECRET_KEY = '3l$35+@a%g!(^y^98oi%ei+%+yvtl3y0k^_7-fmx2oj09-ac5@'\nelse:\n SECRET_KEY = None\nSECRET_KEY = os.environ.get('AMY_SECRET_KEY', SECRET_KEY)\n\n\n# New template settings (for Django >= 1.8)\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'debug': DEBUG,\n\n # default processors + a request processor + amy-version\n 'context_processors': [\n 'django.contrib.auth.context_processors.auth',\n 'django.template.context_processors.debug',\n 'django.template.context_processors.i18n',\n 'django.template.context_processors.media',\n 'django.template.context_processors.static',\n 'django.template.context_processors.tz',\n 'django.contrib.messages.context_processors.messages',\n 'django.core.context_processors.request',\n 'workshops.context_processors.version',\n ],\n\n # Warn viewers of invalid template strings\n 'string_if_invalid': 'XXX-unset-variable-XXX',\n }\n }\n]\n\nALLOWED_HOSTS = [\n 'software-carpentry.org',\n 'software-carpentry.org.',\n 'amy.software-carpentry.org',\n 'amy.software-carpentry.org.'\n]\n\n\n# Application definition\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'workshops',\n # this should be after 'workshops' because templates 
in\n # 'templates/registration/' clash\n 'django.contrib.admin',\n 'crispy_forms',\n 'selectable',\n 'django_countries',\n)\n\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\nMIDDLEWARE_CLASSES = (\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n)\n\nROOT_URLCONF = 'amy.urls'\n\nWSGI_APPLICATION = 'amy.wsgi.application'\n\nfrom django.contrib.messages import constants as message_constants\nMESSAGE_TAGS = {\n message_constants.INFO: 'alert-info',\n message_constants.SUCCESS: 'alert-success',\n message_constants.WARNING: 'alert-warning',\n message_constants.ERROR: 'alert-danger',\n}\n\n\n# Database\n# https://docs.djangoproject.com/en/1.7/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n }\n}\n\n# Authentication\n\nAUTH_USER_MODEL = 'workshops.Person'\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.7/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'EST'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.7/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'bower_components'),\n)\n\n# if \"next\" (or \"?next\") variable is not set when logging in, redirect to\n# workshops\nLOGIN_REDIRECT_URL = '/workshops/'\n\n# here's where @login_required redirects to:\nLOGIN_URL = '/account/login/'\n\n# explicitely add European Union as a country\nCOUNTRIES_OVERRIDE = {\n 'EU': _('European Union'),\n}\n", "path": "amy/settings.py"}, {"content": "from django.conf.urls import patterns, include, url\nfrom django.contrib import admin\n\nurlpatterns = patterns('',\n url(r'^workshops/', include('workshops.urls')),\n url(r'^admin/', include(admin.site.urls)),\n # url(r'^account/', include('django.contrib.auth.urls')),\n\n # django views for authentication\n # taken in almost exact form from django.contrib.auth.views.urls:\n url(r'^account/login/$', 'django.contrib.auth.views.login',\n {\"template_name\": \"account/login.html\",\n \"extra_context\": {\"title\": \"Log in\"}}, name='login'),\n url(r'^account/logout/$', 'django.contrib.auth.views.logout',\n {\"template_name\": \"account/logged_out.html\"}, name='logout'),\n\n # TODO: implement URLs below (add templates, etc.)\n # url(r'^account/password_change/$', 'django.contrib.auth.views.password_change', name='password_change'),\n # url(r'^account/password_change/done/$', 'django.contrib.auth.views.password_change_done', name='password_change_done'),\n # url(r'^account/password_reset/$', 'django.contrib.auth.views.password_reset', name='password_reset'),\n # url(r'^account/password_reset/done/$', 'django.contrib.auth.views.password_reset_done', name='password_reset_done'),\n # url(r'^account/reset/(?P<uidb64>[0-9A-Za-z_\\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',\n # 'django.contrib.auth.views.password_reset_confirm',\n # name='password_reset_confirm'),\n # url(r'^account/reset/done/$', 'django.contrib.auth.views.password_reset_complete', name='password_reset_complete'),\n\n url(r'^selectable/', 
include('selectable.urls')),\n)\n", "path": "amy/urls.py"}], "after_files": [{"content": null, "path": "workshops/admin.py"}, {"content": "\"\"\"\nDjango settings for amy project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.7/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.7/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport os\nimport json\n\nfrom django.utils.translation import ugettext_lazy as _\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/\n\n\n# SECURITY WARNING: don't run with DEBUG turned on in production!\nDEBUG = json.loads(os.environ.get('AMY_DEBUG', 'true'))\n# For deployment in production:\n# AMY_DEBUG=false AMY_SECRET_KEY=\"...\" ./manage.py runserver ...\n\nif DEBUG:\n SECRET_KEY = '3l$35+@a%g!(^y^98oi%ei+%+yvtl3y0k^_7-fmx2oj09-ac5@'\nelse:\n SECRET_KEY = None\nSECRET_KEY = os.environ.get('AMY_SECRET_KEY', SECRET_KEY)\n\n\n# New template settings (for Django >= 1.8)\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'debug': DEBUG,\n\n # default processors + a request processor + amy-version\n 'context_processors': [\n 'django.contrib.auth.context_processors.auth',\n 'django.template.context_processors.debug',\n 'django.template.context_processors.i18n',\n 'django.template.context_processors.media',\n 'django.template.context_processors.static',\n 'django.template.context_processors.tz',\n 'django.contrib.messages.context_processors.messages',\n 'django.core.context_processors.request',\n 'workshops.context_processors.version',\n ],\n\n # Warn viewers of invalid template strings\n 'string_if_invalid': 'XXX-unset-variable-XXX',\n }\n }\n]\n\nALLOWED_HOSTS = [\n 'software-carpentry.org',\n 'software-carpentry.org.',\n 'amy.software-carpentry.org',\n 'amy.software-carpentry.org.'\n]\n\n\n# Application definition\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'workshops',\n # this should be after 'workshops' because templates in\n # 'templates/registration/' clash\n 'crispy_forms',\n 'selectable',\n 'django_countries',\n)\n\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\nMIDDLEWARE_CLASSES = (\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n)\n\nROOT_URLCONF = 'amy.urls'\n\nWSGI_APPLICATION = 'amy.wsgi.application'\n\nfrom django.contrib.messages import constants as message_constants\nMESSAGE_TAGS = {\n message_constants.INFO: 'alert-info',\n message_constants.SUCCESS: 'alert-success',\n message_constants.WARNING: 'alert-warning',\n message_constants.ERROR: 'alert-danger',\n}\n\n\n# Database\n# https://docs.djangoproject.com/en/1.7/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n }\n}\n\n# Authentication\n\nAUTH_USER_MODEL = 'workshops.Person'\n\n# 
Internationalization\n# https://docs.djangoproject.com/en/1.7/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'EST'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.7/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'bower_components'),\n)\n\n# if \"next\" (or \"?next\") variable is not set when logging in, redirect to\n# workshops\nLOGIN_REDIRECT_URL = '/workshops/'\n\n# here's where @login_required redirects to:\nLOGIN_URL = '/account/login/'\n\n# explicitely add European Union as a country\nCOUNTRIES_OVERRIDE = {\n 'EU': _('European Union'),\n}\n", "path": "amy/settings.py"}, {"content": "from django.conf.urls import patterns, include, url\nfrom django.contrib import admin\n\nurlpatterns = patterns('',\n url(r'^workshops/', include('workshops.urls')),\n # url(r'^account/', include('django.contrib.auth.urls')),\n\n # django views for authentication\n # taken in almost exact form from django.contrib.auth.views.urls:\n url(r'^account/login/$', 'django.contrib.auth.views.login',\n {\"template_name\": \"account/login.html\",\n \"extra_context\": {\"title\": \"Log in\"}}, name='login'),\n url(r'^account/logout/$', 'django.contrib.auth.views.logout',\n {\"template_name\": \"account/logged_out.html\"}, name='logout'),\n\n # TODO: implement URLs below (add templates, etc.)\n # url(r'^account/password_change/$', 'django.contrib.auth.views.password_change', name='password_change'),\n # url(r'^account/password_change/done/$', 'django.contrib.auth.views.password_change_done', name='password_change_done'),\n # url(r'^account/password_reset/$', 'django.contrib.auth.views.password_reset', name='password_reset'),\n # url(r'^account/password_reset/done/$', 'django.contrib.auth.views.password_reset_done', name='password_reset_done'),\n # url(r'^account/reset/(?P<uidb64>[0-9A-Za-z_\\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',\n # 'django.contrib.auth.views.password_reset_confirm',\n # name='password_reset_confirm'),\n # url(r'^account/reset/done/$', 'django.contrib.auth.views.password_reset_complete', name='password_reset_complete'),\n\n url(r'^selectable/', include('selectable.urls')),\n)\n", "path": "amy/urls.py"}]}
num_tokens: 2,174
num_tokens_diff: 266

problem_id: gh_patches_debug_23775
source: rasdani/github-patches
task_type: git_diff
in_source_id: bridgecrewio__checkov-1102

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error while checking Dockerfile USER set with env variable
**Describe the bug**
The **checkov** CLI return an error when analyzing a Dockerfile with USER set according to an ENV variable.
**To Reproduce**
Steps to reproduce the behavior:
1. Get this snippet :
```Dockerfile
FROM python:alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
ENV USERNAME=app
RUN addgroup -S ${USERNAME} && adduser -s /sbin/nologin -S ${USERNAME} -G ${USERNAME} && chown -R ${USERNAME} /app
USER ${USERNAME}
COPY --chown=${USERNAME} script.py .
CMD python3 script.py
```
2. Run cli command 'checkov -f Dockerfile'
3. See error
**Expected behavior**
No error.
**Screenshots**
<img width="750" alt="" src="https://user-images.githubusercontent.com/44492274/115271564-c380b080-a13d-11eb-9c4d-cb086e3bd9fd.png">
**Desktop (please complete the following information):**
- OS: macOS Big Sur 11.2.3
- Checkov Version 2.0.55
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/dockerfile/checks/RootUser.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories, CheckResult
2 from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
3
4
5 class RootUser(BaseDockerfileCheck):
6 def __init__(self):
7 name = "Ensure the last USER is not root"
8 id = "CKV_DOCKER_8"
9 supported_instructions = ["USER"]
10 categories = [CheckCategories.APPLICATION_SECURITY]
11 super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
12
13 def scan_entity_conf(self, conf):
14 contents = conf.get("USER")
15
16 if contents:
17 last_user = contents[-1]
18 if last_user["value"] == "root":
19 return CheckResult.FAILED, last_user
20
21 return CheckResult.PASSED, last_user
22
23 return CheckResult.UNKNOWN, None
24
25
26 check = RootUser()
27
```
Path: `checkov/dockerfile/checks/MaintainerExists.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories, CheckResult
2 from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
3
4
5 class MaintainerExists(BaseDockerfileCheck):
6 def __init__(self):
7 name = "Ensure that LABEL maintainer is used instead of MAINTAINER (deprecated)"
8 id = "CKV_DOCKER_6"
9 supported_instructions = ["MAINTAINER"]
10 categories = [CheckCategories.CONVENTION]
11 super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
12
13 def scan_entity_conf(self, conf):
14 for instruction, content in conf.items():
15 if instruction == "MAINTAINER":
16 return CheckResult.FAILED, content[0]
17 return CheckResult.PASSED, None
18
19
20 check = MaintainerExists()
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/checkov/dockerfile/checks/MaintainerExists.py b/checkov/dockerfile/checks/MaintainerExists.py
--- a/checkov/dockerfile/checks/MaintainerExists.py
+++ b/checkov/dockerfile/checks/MaintainerExists.py
@@ -11,10 +11,7 @@
super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
def scan_entity_conf(self, conf):
- for instruction, content in conf.items():
- if instruction == "MAINTAINER":
- return CheckResult.FAILED, content[0]
- return CheckResult.PASSED, None
+ return CheckResult.FAILED, conf[0]
check = MaintainerExists()
diff --git a/checkov/dockerfile/checks/RootUser.py b/checkov/dockerfile/checks/RootUser.py
--- a/checkov/dockerfile/checks/RootUser.py
+++ b/checkov/dockerfile/checks/RootUser.py
@@ -11,16 +11,11 @@
super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
def scan_entity_conf(self, conf):
- contents = conf.get("USER")
+ last_user = conf[-1]
+ if last_user["value"] == "root":
+ return CheckResult.FAILED, last_user
- if contents:
- last_user = contents[-1]
- if last_user["value"] == "root":
- return CheckResult.FAILED, last_user
-
- return CheckResult.PASSED, last_user
-
- return CheckResult.UNKNOWN, None
+ return CheckResult.PASSED, last_user
check = RootUser()
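The patched checks above treat `conf` as the list of parsed entries for their supported instruction rather than a dict keyed by instruction name. A rough standalone sketch of the resulting `USER` logic (the instruction dict below is illustrative, loosely modeled on `dockerfile-parse` output; it is not checkov's real plumbing):

```python
# Parsed entries for the USER instruction only, as the check now receives them.
user_conf = [
    {"instruction": "USER", "startline": 8, "endline": 8,
     "content": "USER ${USERNAME}\n", "value": "${USERNAME}"},
]

def last_user_is_root(conf):
    last_user = conf[-1]                # mirrors the patched RootUser check
    return last_user["value"] == "root"

print(last_user_is_root(user_conf))     # False -> CKV_DOCKER_8 passes
```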
verification_info:
{"golden_diff": "diff --git a/checkov/dockerfile/checks/MaintainerExists.py b/checkov/dockerfile/checks/MaintainerExists.py\n--- a/checkov/dockerfile/checks/MaintainerExists.py\n+++ b/checkov/dockerfile/checks/MaintainerExists.py\n@@ -11,10 +11,7 @@\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n \n def scan_entity_conf(self, conf):\n- for instruction, content in conf.items():\n- if instruction == \"MAINTAINER\":\n- return CheckResult.FAILED, content[0]\n- return CheckResult.PASSED, None\n+ return CheckResult.FAILED, conf[0]\n \n \n check = MaintainerExists()\ndiff --git a/checkov/dockerfile/checks/RootUser.py b/checkov/dockerfile/checks/RootUser.py\n--- a/checkov/dockerfile/checks/RootUser.py\n+++ b/checkov/dockerfile/checks/RootUser.py\n@@ -11,16 +11,11 @@\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n \n def scan_entity_conf(self, conf):\n- contents = conf.get(\"USER\")\n+ last_user = conf[-1]\n+ if last_user[\"value\"] == \"root\":\n+ return CheckResult.FAILED, last_user\n \n- if contents:\n- last_user = contents[-1]\n- if last_user[\"value\"] == \"root\":\n- return CheckResult.FAILED, last_user\n-\n- return CheckResult.PASSED, last_user\n-\n- return CheckResult.UNKNOWN, None\n+ return CheckResult.PASSED, last_user\n \n \n check = RootUser()\n", "issue": "Error while checking Dockerfile USER set with env variable\n**Describe the bug**\r\nThe **checkov** CLI return an error when analyzing a Dockerfile with USER set according to an ENV variable. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Get this snippet :\r\n```Dockerfile\r\nFROM python:alpine\r\n\r\nWORKDIR /app\r\nCOPY requirements.txt .\r\nRUN pip install -r requirements.txt\r\n\r\nENV USERNAME=app\r\nRUN addgroup -S ${USERNAME} && adduser -s /sbin/nologin -S ${USERNAME} -G ${USERNAME} && chown -R ${USERNAME} /app\r\nUSER ${USERNAME}\r\n\r\nCOPY --chown=${USERNAME} script.py .\r\n\r\nCMD python3 script.py\r\n```\r\n2. Run cli command 'checkov -f Dockerfile'\r\n3. 
See error\r\n\r\n**Expected behavior**\r\nNo error.\r\n\r\n**Screenshots**\r\n<img width=\"750\" alt=\"\" src=\"https://user-images.githubusercontent.com/44492274/115271564-c380b080-a13d-11eb-9c4d-cb086e3bd9fd.png\">\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: macOS Big Sur 11.2.3\r\n - Checkov Version 2.0.55\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\n\nclass RootUser(BaseDockerfileCheck):\n def __init__(self):\n name = \"Ensure the last USER is not root\"\n id = \"CKV_DOCKER_8\"\n supported_instructions = [\"USER\"]\n categories = [CheckCategories.APPLICATION_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_entity_conf(self, conf):\n contents = conf.get(\"USER\")\n\n if contents:\n last_user = contents[-1]\n if last_user[\"value\"] == \"root\":\n return CheckResult.FAILED, last_user\n\n return CheckResult.PASSED, last_user\n\n return CheckResult.UNKNOWN, None\n\n\ncheck = RootUser()\n", "path": "checkov/dockerfile/checks/RootUser.py"}, {"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\n\nclass MaintainerExists(BaseDockerfileCheck):\n def __init__(self):\n name = \"Ensure that LABEL maintainer is used instead of MAINTAINER (deprecated)\"\n id = \"CKV_DOCKER_6\"\n supported_instructions = [\"MAINTAINER\"]\n categories = [CheckCategories.CONVENTION]\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_entity_conf(self, conf):\n for instruction, content in conf.items():\n if instruction == \"MAINTAINER\":\n return CheckResult.FAILED, content[0]\n return CheckResult.PASSED, None\n\n\ncheck = MaintainerExists()\n", "path": "checkov/dockerfile/checks/MaintainerExists.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\n\nclass RootUser(BaseDockerfileCheck):\n def __init__(self):\n name = \"Ensure the last USER is not root\"\n id = \"CKV_DOCKER_8\"\n supported_instructions = [\"USER\"]\n categories = [CheckCategories.APPLICATION_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_entity_conf(self, conf):\n last_user = conf[-1]\n if last_user[\"value\"] == \"root\":\n return CheckResult.FAILED, last_user\n\n return CheckResult.PASSED, last_user\n\n\ncheck = RootUser()\n", "path": "checkov/dockerfile/checks/RootUser.py"}, {"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\n\nclass MaintainerExists(BaseDockerfileCheck):\n def __init__(self):\n name = \"Ensure that LABEL maintainer is used instead of MAINTAINER (deprecated)\"\n id = \"CKV_DOCKER_6\"\n supported_instructions = [\"MAINTAINER\"]\n categories = [CheckCategories.CONVENTION]\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_entity_conf(self, conf):\n return CheckResult.FAILED, conf[0]\n\n\ncheck = MaintainerExists()\n", "path": "checkov/dockerfile/checks/MaintainerExists.py"}]}
num_tokens: 1,019
num_tokens_diff: 382

problem_id: gh_patches_debug_14436
source: rasdani/github-patches
task_type: git_diff
in_source_id: mindspore-lab__mindnlp-107

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Trainer" doesn't take into account the case that "loss_fn" doesn't need to be passed in.
"Trainer" does not take into account the case where "loss" is already defined in the model, and there is no need to pass "loss_fn" to "Trainer".
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindnlp/engine/trainer.py`
Content:
```
1 # Copyright 2022 Huawei Technologies Co., Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ============================================================================
15 # pylint: disable=W0212
16 # pylint: disable=no-name-in-module, ungrouped-imports
17 """
18 Trainer for training.
19 """
20 from inspect import signature
21 from tqdm import tqdm
22 from mindspore import ops
23 from mindspore import log, mutable
24 from mindspore.ops import value_and_grad
25 from mindnlp import ms_jit
26 from mindnlp.abc.callback import Callback
27 from mindnlp.engine.callbacks.callback_manager import CallbackManager, RunContext
28 from mindnlp.engine.callbacks.earlystop_callback import EarlyStopCallback
29 from mindnlp.engine.callbacks.best_model_callback import BestModelCallback
30 from mindnlp.engine.evaluator import Evaluator
31
32 class Trainer:
33 r"""
34 Trainer to train the model.
35
36
37 Args:
38 network (Cell): A training network.
39 train_dataset (Dataset): A training dataset iterator. If `loss_fn` is defined, the data and label will be
40 passed to the `network` and the `loss_fn` respectively, so a tuple (data, label)
41 should be returned from dataset. If there is multiple data or labels, set `loss_fn`
42 to None and implement calculation of loss in `network`,
43 then a tuple (data1, data2, data3, ...) with all data returned from dataset will be
44 passed to the `network`.
45 eval_dataset (Dataset): A evaluating dataset iterator. If `loss_fn` is defined, the data and label will be
46 passed to the `network` and the `loss_fn` respectively, so a tuple (data, label)
47 should be returned from dataset. If there is multiple data or labels, set `loss_fn`
48 to None and implement calculation of loss in `network`,
49 then a tuple (data1, data2, data3, ...) with all data returned from dataset will be
50 passed to the `network`.
51 metrcis (Optional[list[Metrics], Metrics]): List of metrics objects which should be used
52 while evaluating. Default:None.
53 epochs (int): Total number of iterations on the data. Default: 10.
54 optimizer (Cell): Optimizer for updating the weights. If `optimizer` is None, the `network` needs to
55 do backpropagation and update weights. Default value: None.
56 loss_fn (Cell): Objective function. If `loss_fn` is None, the `network` should contain the calculation of loss
57 and parallel if needed. Default: None.
58 callbacks (Optional[list[Callback], Callback]): List of callback objects which should be executed
59 while training. Default: None.
60
61 """
62
63 def __init__(self, network=None, train_dataset=None, eval_dataset=None, metrics=None, epochs=10,
64 loss_fn=None, optimizer=None, callbacks=None):
65 self.network = network
66 self.train_dataset = train_dataset
67 self.eval_dataset = eval_dataset
68 self.metrics = metrics
69 self.epochs = epochs
70 self.loss_fn = loss_fn
71 self.optimizer = optimizer
72 self.callbacks = callbacks
73 self.cur_epoch_nums = 0
74 self.cur_step_nums = 0
75 self.earlystop = False
76 self.grad_fn = None
77 if callbacks:
78 self._prepare_callbacks(callbacks)
79 self._prepare_eval()
80 self.callback_manager = CallbackManager(callbacks=self.callbacks)
81
82 def _prepare_callbacks(self, callbacks):
83 self.callbacks = []
84 if isinstance(callbacks, Callback):
85 self.callbacks.append(callbacks)
86 elif isinstance(callbacks, list):
87 if all(isinstance(cb, Callback) for cb in callbacks) is True:
88 self.callbacks = callbacks
89 else:
90 obj = [not isinstance(cb, Callback) for cb in callbacks][0]
91 raise TypeError(f"Expect sub-classes of Callback. Got {type(obj)}")
92 else:
93 raise TypeError(f"Expect callbacks to be list or Callback. Got {type(callbacks)}.")
94
95 def _check_callbacks_type(self):
96 for callback in self.callbacks:
97 if isinstance(callback, EarlyStopCallback):
98 raise ValueError("EarlyStopCallback is not effective when eval_dataset is None.")
99 if isinstance(callback, BestModelCallback):
100 raise ValueError("BestModelCallback is not effective when eval_dataset is None.")
101
102 def _prepare_eval(self):
103 if self.eval_dataset is not None and self.metrics is not None:
104 self.evaluator = Evaluator(network=self.network, eval_dataset=self.eval_dataset, metrics=self.metrics,
105 callbacks=self.callbacks)
106 elif self.eval_dataset is None and self.metrics is None:
107 if self.callbacks:
108 self._check_callbacks_type()
109 self.evaluator = None
110 else:
111 raise ValueError("For evaluation in training process, both eval dataset and metrics should be not None.")
112
113 def _check_amp_level_arg(self, optimizer, amp_level):
114 """Check mixed-precision argument rules."""
115 raise NotImplementedError
116
117 def _check_for_graph_cell(self, kwargs):
118 """Check network rules of GraphCell."""
119 raise NotImplementedError
120
121 def _build_boost_network(self, *kwargs):
122 """Build boost network."""
123 raise NotImplementedError
124
125 def _check_reuse_dataset(self, dataset):
126 """Check if dataset is being used by other models under the data sink mode."""
127 if not hasattr(dataset, '__model_hash__'):
128 dataset.__model_hash__ = hash(self)
129 if hasattr(dataset, '__model_hash__') and dataset.__model_hash__ != hash(self):
130 raise RuntimeError("The dataset object had been used in other model by model.train(...), "
131 "please create a new dataset.")
132
133 def run(self, tgt_columns=None, jit=False):
134 """
135 Training process entry.
136
137 Args:
138 tgt_columns (Optional[list[str], str]): Target label column names for loss function.
139 jit (bool): Whether use Just-In-Time compile.
140
141 """
142
143 args_dict = vars(self)
144 run_context = RunContext(args_dict)
145 self.callback_manager.train_begin(run_context)
146 self._run(run_context, tgt_columns, jit)
147 self.callback_manager.train_end(run_context)
148
149 def _run(self, run_context, tgt_columns=None, jit=False):
150 """
151 Training process for non-data sinking mode. The data would be passed to network directly.
152 """
153 # forward function
154 net = self.network
155
156 loss_fn = self.loss_fn
157 optimizer = self.optimizer
158 def forward_fn(inputs, labels):
159 logits_list = ()
160 logits = net(*inputs)
161 if isinstance(logits, tuple):
162 logits_list += logits
163 else:
164 logits_list += (logits,)
165
166 loss = loss_fn(*logits_list, *labels)
167 return_list = (loss,) + logits_list
168 return return_list
169
170 grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
171
172 def _run_step(inputs, labels):
173 """Core process of each step, including the forward propagation process and back propagation of data."""
174 (loss, *_), grads = grad_fn(inputs, labels)
175 optimizer(grads)
176 return loss
177
178 @ms_jit
179 def _run_step_graph(inputs, labels):
180 """Core process of each step, including the forward propagation process and back propagation of data."""
181 (loss, _), grads = grad_fn(inputs, labels)
182 loss = ops.depend(loss, optimizer(grads))
183 return loss
184
185 total = self.train_dataset.get_dataset_size()
186 # train epoch begin
187 for epoch in range(0, self.epochs):
188 net.set_train()
189 self.cur_epoch_nums = epoch + 1
190 self.cur_step_nums = 0
191 run_context.cur_epoch_nums = self.cur_epoch_nums
192 run_context.cur_step_nums = 0
193 if self.earlystop is True:
194 break
195 self.callback_manager.train_epoch_begin(run_context)
196 with tqdm(total=total) as progress:
197 progress.set_description(f'Epoch {epoch}')
198 loss_total = 0
199 # step begin
200 for data in self.train_dataset.create_dict_iterator():
201 inputs, tgts = self._data_process(data, tgt_columns)
202 run_context.cur_step_nums += 1
203 self.cur_step_nums += 1
204 self.callback_manager.train_step_begin(run_context)
205 if jit:
206 loss = _run_step_graph(inputs, tgts)
207 else:
208 loss = _run_step(inputs, tgts)
209 loss_total += loss
210 progress.set_postfix(loss=loss_total/self.cur_step_nums)
211 progress.update(1)
212 # step end
213 self.callback_manager.train_step_end(run_context)
214 # train epoch end
215 progress.close()
216 self.callback_manager.train_epoch_end(run_context)
217 # do epoch evaluation
218 if self.evaluator is not None:
219 self._do_eval_epoch(run_context, tgt_columns, jit)
220
221 def _run_ds_sink(self, train_dataset, eval_dataset, list_callback,
222 cb_params, print_steps, eval_steps):
223 """Training process for data sinking mode."""
224 raise NotImplementedError
225
226 def _load_checkpoint(self, path):
227 """Load checkpoint."""
228 raise NotImplementedError
229
230 def _save_checkpoint(self, path):
231 """Save checkpoint."""
232 raise NotImplementedError
233
234 def _do_eval_steps(self, steps, eval_dataset):
235 """Evaluate the model after n steps."""
236 raise NotImplementedError
237
238 def _do_eval_epoch(self, run_context, tgt_columns=None, jit=False):
239 """Evaluate the model after an epoch."""
240 self.callback_manager.evaluate_begin(run_context)
241 self.evaluator.clear_metrics()
242 metrics_result, metrics_names, metrics_values = self.evaluator._run(tgt_columns, jit)
243 setattr(run_context, "metrics_values", metrics_values)
244 setattr(run_context, "metrics_result", metrics_result)
245 setattr(run_context, "metrics_names", metrics_names)
246 self.callback_manager.evaluate_end(run_context)
247 self.earlystop = run_context.earlystop
248
249 def _data_process(self, data, tgt_columns):
250 """Process data match the network construct"""
251 # prepare input dataset.
252 sig = signature(self.network.construct)
253 net_args = sig.parameters
254 inputs = ()
255 for arg in net_args:
256 if arg == 'self':
257 continue
258 if arg not in data.keys():
259 if str(net_args[arg])[-4:] == 'None':
260 continue
261 inputs = inputs + (data[arg],)
262 # process target dataset.
263 tgt_columns = self._prepare_tgt_columns(tgt_columns)
264 tgts = ()
265 for tgt_column in tgt_columns:
266 tgts = tgts + (data[tgt_column],)
267 return mutable(inputs), mutable(tgts)
268
269 def _prepare_tgt_columns(self, tgt_columns):
270 """Check and prepare target columns for training."""
271 out_columns = []
272 if tgt_columns is None:
273 log.warning("In the process of training model, tgt_column can not be None.")
274 return []
275 if isinstance(tgt_columns, str):
276 out_columns.append(tgt_columns)
277 elif isinstance(tgt_columns, list):
278 if all(isinstance(tgt_column, str) for tgt_column in tgt_columns) is True:
279 out_columns = tgt_columns
280 else:
281 obj = [not isinstance(tgt_column, str) for tgt_column in tgt_columns][0]
282 raise TypeError(f"Expect str of tgt_column. Got {type(obj)}")
283 else:
284 raise TypeError(f"Expect tgt_columns to be list or str. Got {type(tgt_columns)}.")
285 return out_columns
286
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mindnlp/engine/trainer.py b/mindnlp/engine/trainer.py
--- a/mindnlp/engine/trainer.py
+++ b/mindnlp/engine/trainer.py
@@ -167,7 +167,14 @@
return_list = (loss,) + logits_list
return return_list
- grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
+ def forward_without_loss_fn(inputs, labels):
+ loss_and_logits = net(*inputs, *labels)
+ return loss_and_logits
+
+ if self.loss_fn is None:
+ grad_fn = value_and_grad(forward_without_loss_fn, None, optimizer.parameters, has_aux=True)
+ else:
+ grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
def _run_step(inputs, labels):
"""Core process of each step, including the forward propagation process and back propagation of data."""
|
{"golden_diff": "diff --git a/mindnlp/engine/trainer.py b/mindnlp/engine/trainer.py\n--- a/mindnlp/engine/trainer.py\n+++ b/mindnlp/engine/trainer.py\n@@ -167,7 +167,14 @@\n return_list = (loss,) + logits_list\n return return_list\n \n- grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)\n+ def forward_without_loss_fn(inputs, labels):\n+ loss_and_logits = net(*inputs, *labels)\n+ return loss_and_logits\n+\n+ if self.loss_fn is None:\n+ grad_fn = value_and_grad(forward_without_loss_fn, None, optimizer.parameters, has_aux=True)\n+ else:\n+ grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)\n \n def _run_step(inputs, labels):\n \"\"\"Core process of each step, including the forward propagation process and back propagation of data.\"\"\"\n", "issue": "\"Trainer\" doesn't take into account the case that \"loss_fn\" doesn't need to be passed in.\n\"Trainer\" does not take into account the case where \"loss\" is already defined in the model, and there is no need to pass \"loss_fn\" to \"Trainer\".\n", "before_files": [{"content": "# Copyright 2022 Huawei Technologies Co., Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ============================================================================\n# pylint: disable=W0212\n# pylint: disable=no-name-in-module, ungrouped-imports\n\"\"\"\nTrainer for training.\n\"\"\"\nfrom inspect import signature\nfrom tqdm import tqdm\nfrom mindspore import ops\nfrom mindspore import log, mutable\nfrom mindspore.ops import value_and_grad\nfrom mindnlp import ms_jit\nfrom mindnlp.abc.callback import Callback\nfrom mindnlp.engine.callbacks.callback_manager import CallbackManager, RunContext\nfrom mindnlp.engine.callbacks.earlystop_callback import EarlyStopCallback\nfrom mindnlp.engine.callbacks.best_model_callback import BestModelCallback\nfrom mindnlp.engine.evaluator import Evaluator\n\nclass Trainer:\n r\"\"\"\n Trainer to train the model.\n\n\n Args:\n network (Cell): A training network.\n train_dataset (Dataset): A training dataset iterator. If `loss_fn` is defined, the data and label will be\n passed to the `network` and the `loss_fn` respectively, so a tuple (data, label)\n should be returned from dataset. If there is multiple data or labels, set `loss_fn`\n to None and implement calculation of loss in `network`,\n then a tuple (data1, data2, data3, ...) with all data returned from dataset will be\n passed to the `network`.\n eval_dataset (Dataset): A evaluating dataset iterator. If `loss_fn` is defined, the data and label will be\n passed to the `network` and the `loss_fn` respectively, so a tuple (data, label)\n should be returned from dataset. If there is multiple data or labels, set `loss_fn`\n to None and implement calculation of loss in `network`,\n then a tuple (data1, data2, data3, ...) with all data returned from dataset will be\n passed to the `network`.\n metrcis (Optional[list[Metrics], Metrics]): List of metrics objects which should be used\n while evaluating. 
Default:None.\n epochs (int): Total number of iterations on the data. Default: 10.\n optimizer (Cell): Optimizer for updating the weights. If `optimizer` is None, the `network` needs to\n do backpropagation and update weights. Default value: None.\n loss_fn (Cell): Objective function. If `loss_fn` is None, the `network` should contain the calculation of loss\n and parallel if needed. Default: None.\n callbacks (Optional[list[Callback], Callback]): List of callback objects which should be executed\n while training. Default: None.\n\n \"\"\"\n\n def __init__(self, network=None, train_dataset=None, eval_dataset=None, metrics=None, epochs=10,\n loss_fn=None, optimizer=None, callbacks=None):\n self.network = network\n self.train_dataset = train_dataset\n self.eval_dataset = eval_dataset\n self.metrics = metrics\n self.epochs = epochs\n self.loss_fn = loss_fn\n self.optimizer = optimizer\n self.callbacks = callbacks\n self.cur_epoch_nums = 0\n self.cur_step_nums = 0\n self.earlystop = False\n self.grad_fn = None\n if callbacks:\n self._prepare_callbacks(callbacks)\n self._prepare_eval()\n self.callback_manager = CallbackManager(callbacks=self.callbacks)\n\n def _prepare_callbacks(self, callbacks):\n self.callbacks = []\n if isinstance(callbacks, Callback):\n self.callbacks.append(callbacks)\n elif isinstance(callbacks, list):\n if all(isinstance(cb, Callback) for cb in callbacks) is True:\n self.callbacks = callbacks\n else:\n obj = [not isinstance(cb, Callback) for cb in callbacks][0]\n raise TypeError(f\"Expect sub-classes of Callback. Got {type(obj)}\")\n else:\n raise TypeError(f\"Expect callbacks to be list or Callback. Got {type(callbacks)}.\")\n\n def _check_callbacks_type(self):\n for callback in self.callbacks:\n if isinstance(callback, EarlyStopCallback):\n raise ValueError(\"EarlyStopCallback is not effective when eval_dataset is None.\")\n if isinstance(callback, BestModelCallback):\n raise ValueError(\"BestModelCallback is not effective when eval_dataset is None.\")\n\n def _prepare_eval(self):\n if self.eval_dataset is not None and self.metrics is not None:\n self.evaluator = Evaluator(network=self.network, eval_dataset=self.eval_dataset, metrics=self.metrics,\n callbacks=self.callbacks)\n elif self.eval_dataset is None and self.metrics is None:\n if self.callbacks:\n self._check_callbacks_type()\n self.evaluator = None\n else:\n raise ValueError(\"For evaluation in training process, both eval dataset and metrics should be not None.\")\n\n def _check_amp_level_arg(self, optimizer, amp_level):\n \"\"\"Check mixed-precision argument rules.\"\"\"\n raise NotImplementedError\n\n def _check_for_graph_cell(self, kwargs):\n \"\"\"Check network rules of GraphCell.\"\"\"\n raise NotImplementedError\n\n def _build_boost_network(self, *kwargs):\n \"\"\"Build boost network.\"\"\"\n raise NotImplementedError\n\n def _check_reuse_dataset(self, dataset):\n \"\"\"Check if dataset is being used by other models under the data sink mode.\"\"\"\n if not hasattr(dataset, '__model_hash__'):\n dataset.__model_hash__ = hash(self)\n if hasattr(dataset, '__model_hash__') and dataset.__model_hash__ != hash(self):\n raise RuntimeError(\"The dataset object had been used in other model by model.train(...), \"\n \"please create a new dataset.\")\n\n def run(self, tgt_columns=None, jit=False):\n \"\"\"\n Training process entry.\n\n Args:\n tgt_columns (Optional[list[str], str]): Target label column names for loss function.\n jit (bool): Whether use Just-In-Time compile.\n\n \"\"\"\n\n args_dict = vars(self)\n 
run_context = RunContext(args_dict)\n self.callback_manager.train_begin(run_context)\n self._run(run_context, tgt_columns, jit)\n self.callback_manager.train_end(run_context)\n\n def _run(self, run_context, tgt_columns=None, jit=False):\n \"\"\"\n Training process for non-data sinking mode. The data would be passed to network directly.\n \"\"\"\n # forward function\n net = self.network\n\n loss_fn = self.loss_fn\n optimizer = self.optimizer\n def forward_fn(inputs, labels):\n logits_list = ()\n logits = net(*inputs)\n if isinstance(logits, tuple):\n logits_list += logits\n else:\n logits_list += (logits,)\n\n loss = loss_fn(*logits_list, *labels)\n return_list = (loss,) + logits_list\n return return_list\n\n grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)\n\n def _run_step(inputs, labels):\n \"\"\"Core process of each step, including the forward propagation process and back propagation of data.\"\"\"\n (loss, *_), grads = grad_fn(inputs, labels)\n optimizer(grads)\n return loss\n\n @ms_jit\n def _run_step_graph(inputs, labels):\n \"\"\"Core process of each step, including the forward propagation process and back propagation of data.\"\"\"\n (loss, _), grads = grad_fn(inputs, labels)\n loss = ops.depend(loss, optimizer(grads))\n return loss\n\n total = self.train_dataset.get_dataset_size()\n # train epoch begin\n for epoch in range(0, self.epochs):\n net.set_train()\n self.cur_epoch_nums = epoch + 1\n self.cur_step_nums = 0\n run_context.cur_epoch_nums = self.cur_epoch_nums\n run_context.cur_step_nums = 0\n if self.earlystop is True:\n break\n self.callback_manager.train_epoch_begin(run_context)\n with tqdm(total=total) as progress:\n progress.set_description(f'Epoch {epoch}')\n loss_total = 0\n # step begin\n for data in self.train_dataset.create_dict_iterator():\n inputs, tgts = self._data_process(data, tgt_columns)\n run_context.cur_step_nums += 1\n self.cur_step_nums += 1\n self.callback_manager.train_step_begin(run_context)\n if jit:\n loss = _run_step_graph(inputs, tgts)\n else:\n loss = _run_step(inputs, tgts)\n loss_total += loss\n progress.set_postfix(loss=loss_total/self.cur_step_nums)\n progress.update(1)\n # step end\n self.callback_manager.train_step_end(run_context)\n # train epoch end\n progress.close()\n self.callback_manager.train_epoch_end(run_context)\n # do epoch evaluation\n if self.evaluator is not None:\n self._do_eval_epoch(run_context, tgt_columns, jit)\n\n def _run_ds_sink(self, train_dataset, eval_dataset, list_callback,\n cb_params, print_steps, eval_steps):\n \"\"\"Training process for data sinking mode.\"\"\"\n raise NotImplementedError\n\n def _load_checkpoint(self, path):\n \"\"\"Load checkpoint.\"\"\"\n raise NotImplementedError\n\n def _save_checkpoint(self, path):\n \"\"\"Save checkpoint.\"\"\"\n raise NotImplementedError\n\n def _do_eval_steps(self, steps, eval_dataset):\n \"\"\"Evaluate the model after n steps.\"\"\"\n raise NotImplementedError\n\n def _do_eval_epoch(self, run_context, tgt_columns=None, jit=False):\n \"\"\"Evaluate the model after an epoch.\"\"\"\n self.callback_manager.evaluate_begin(run_context)\n self.evaluator.clear_metrics()\n metrics_result, metrics_names, metrics_values = self.evaluator._run(tgt_columns, jit)\n setattr(run_context, \"metrics_values\", metrics_values)\n setattr(run_context, \"metrics_result\", metrics_result)\n setattr(run_context, \"metrics_names\", metrics_names)\n self.callback_manager.evaluate_end(run_context)\n self.earlystop = run_context.earlystop\n\n def _data_process(self, 
data, tgt_columns):\n \"\"\"Process data match the network construct\"\"\"\n # prepare input dataset.\n sig = signature(self.network.construct)\n net_args = sig.parameters\n inputs = ()\n for arg in net_args:\n if arg == 'self':\n continue\n if arg not in data.keys():\n if str(net_args[arg])[-4:] == 'None':\n continue\n inputs = inputs + (data[arg],)\n # process target dataset.\n tgt_columns = self._prepare_tgt_columns(tgt_columns)\n tgts = ()\n for tgt_column in tgt_columns:\n tgts = tgts + (data[tgt_column],)\n return mutable(inputs), mutable(tgts)\n\n def _prepare_tgt_columns(self, tgt_columns):\n \"\"\"Check and prepare target columns for training.\"\"\"\n out_columns = []\n if tgt_columns is None:\n log.warning(\"In the process of training model, tgt_column can not be None.\")\n return []\n if isinstance(tgt_columns, str):\n out_columns.append(tgt_columns)\n elif isinstance(tgt_columns, list):\n if all(isinstance(tgt_column, str) for tgt_column in tgt_columns) is True:\n out_columns = tgt_columns\n else:\n obj = [not isinstance(tgt_column, str) for tgt_column in tgt_columns][0]\n raise TypeError(f\"Expect str of tgt_column. Got {type(obj)}\")\n else:\n raise TypeError(f\"Expect tgt_columns to be list or str. Got {type(tgt_columns)}.\")\n return out_columns\n", "path": "mindnlp/engine/trainer.py"}], "after_files": [{"content": "# Copyright 2022 Huawei Technologies Co., Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ============================================================================\n# pylint: disable=W0212\n# pylint: disable=no-name-in-module, ungrouped-imports\n\"\"\"\nTrainer for training.\n\"\"\"\nfrom inspect import signature\nfrom tqdm import tqdm\nfrom mindspore import ops\nfrom mindspore import log, mutable\nfrom mindspore.ops import value_and_grad\nfrom mindnlp import ms_jit\nfrom mindnlp.abc.callback import Callback\nfrom mindnlp.engine.callbacks.callback_manager import CallbackManager, RunContext\nfrom mindnlp.engine.callbacks.earlystop_callback import EarlyStopCallback\nfrom mindnlp.engine.callbacks.best_model_callback import BestModelCallback\nfrom mindnlp.engine.evaluator import Evaluator\n\nclass Trainer:\n r\"\"\"\n Trainer to train the model.\n\n\n Args:\n network (Cell): A training network.\n train_dataset (Dataset): A training dataset iterator. If `loss_fn` is defined, the data and label will be\n passed to the `network` and the `loss_fn` respectively, so a tuple (data, label)\n should be returned from dataset. If there is multiple data or labels, set `loss_fn`\n to None and implement calculation of loss in `network`,\n then a tuple (data1, data2, data3, ...) with all data returned from dataset will be\n passed to the `network`.\n eval_dataset (Dataset): A evaluating dataset iterator. If `loss_fn` is defined, the data and label will be\n passed to the `network` and the `loss_fn` respectively, so a tuple (data, label)\n should be returned from dataset. 
If there is multiple data or labels, set `loss_fn`\n to None and implement calculation of loss in `network`,\n then a tuple (data1, data2, data3, ...) with all data returned from dataset will be\n passed to the `network`.\n metrcis (Optional[list[Metrics], Metrics]): List of metrics objects which should be used\n while evaluating. Default:None.\n epochs (int): Total number of iterations on the data. Default: 10.\n optimizer (Cell): Optimizer for updating the weights. If `optimizer` is None, the `network` needs to\n do backpropagation and update weights. Default value: None.\n loss_fn (Cell): Objective function. If `loss_fn` is None, the `network` should contain the calculation of loss\n and parallel if needed. Default: None.\n callbacks (Optional[list[Callback], Callback]): List of callback objects which should be executed\n while training. Default: None.\n\n \"\"\"\n\n def __init__(self, network=None, train_dataset=None, eval_dataset=None, metrics=None, epochs=10,\n loss_fn=None, optimizer=None, callbacks=None):\n self.network = network\n self.train_dataset = train_dataset\n self.eval_dataset = eval_dataset\n self.metrics = metrics\n self.epochs = epochs\n self.loss_fn = loss_fn\n self.optimizer = optimizer\n self.callbacks = callbacks\n self.cur_epoch_nums = 0\n self.cur_step_nums = 0\n self.earlystop = False\n self.grad_fn = None\n if callbacks:\n self._prepare_callbacks(callbacks)\n self._prepare_eval()\n self.callback_manager = CallbackManager(callbacks=self.callbacks)\n\n def _prepare_callbacks(self, callbacks):\n self.callbacks = []\n if isinstance(callbacks, Callback):\n self.callbacks.append(callbacks)\n elif isinstance(callbacks, list):\n if all(isinstance(cb, Callback) for cb in callbacks) is True:\n self.callbacks = callbacks\n else:\n obj = [not isinstance(cb, Callback) for cb in callbacks][0]\n raise TypeError(f\"Expect sub-classes of Callback. Got {type(obj)}\")\n else:\n raise TypeError(f\"Expect callbacks to be list or Callback. 
Got {type(callbacks)}.\")\n\n def _check_callbacks_type(self):\n for callback in self.callbacks:\n if isinstance(callback, EarlyStopCallback):\n raise ValueError(\"EarlyStopCallback is not effective when eval_dataset is None.\")\n if isinstance(callback, BestModelCallback):\n raise ValueError(\"BestModelCallback is not effective when eval_dataset is None.\")\n\n def _prepare_eval(self):\n if self.eval_dataset is not None and self.metrics is not None:\n self.evaluator = Evaluator(network=self.network, eval_dataset=self.eval_dataset, metrics=self.metrics,\n callbacks=self.callbacks)\n elif self.eval_dataset is None and self.metrics is None:\n if self.callbacks:\n self._check_callbacks_type()\n self.evaluator = None\n else:\n raise ValueError(\"For evaluation in training process, both eval dataset and metrics should be not None.\")\n\n def _check_amp_level_arg(self, optimizer, amp_level):\n \"\"\"Check mixed-precision argument rules.\"\"\"\n raise NotImplementedError\n\n def _check_for_graph_cell(self, kwargs):\n \"\"\"Check network rules of GraphCell.\"\"\"\n raise NotImplementedError\n\n def _build_boost_network(self, *kwargs):\n \"\"\"Build boost network.\"\"\"\n raise NotImplementedError\n\n def _check_reuse_dataset(self, dataset):\n \"\"\"Check if dataset is being used by other models under the data sink mode.\"\"\"\n if not hasattr(dataset, '__model_hash__'):\n dataset.__model_hash__ = hash(self)\n if hasattr(dataset, '__model_hash__') and dataset.__model_hash__ != hash(self):\n raise RuntimeError(\"The dataset object had been used in other model by model.train(...), \"\n \"please create a new dataset.\")\n\n def run(self, tgt_columns=None, jit=False):\n \"\"\"\n Training process entry.\n\n Args:\n tgt_columns (Optional[list[str], str]): Target label column names for loss function.\n jit (bool): Whether use Just-In-Time compile.\n\n \"\"\"\n\n args_dict = vars(self)\n run_context = RunContext(args_dict)\n self.callback_manager.train_begin(run_context)\n self._run(run_context, tgt_columns, jit)\n self.callback_manager.train_end(run_context)\n\n def _run(self, run_context, tgt_columns=None, jit=False):\n \"\"\"\n Training process for non-data sinking mode. 
The data would be passed to network directly.\n \"\"\"\n # forward function\n net = self.network\n\n loss_fn = self.loss_fn\n optimizer = self.optimizer\n def forward_fn(inputs, labels):\n logits_list = ()\n logits = net(*inputs)\n if isinstance(logits, tuple):\n logits_list += logits\n else:\n logits_list += (logits,)\n\n loss = loss_fn(*logits_list, *labels)\n return_list = (loss,) + logits_list\n return return_list\n\n def forward_without_loss_fn(inputs, labels):\n loss_and_logits = net(*inputs, *labels)\n return loss_and_logits\n\n if self.loss_fn is None:\n grad_fn = value_and_grad(forward_without_loss_fn, None, optimizer.parameters, has_aux=True)\n else:\n grad_fn = value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)\n\n def _run_step(inputs, labels):\n \"\"\"Core process of each step, including the forward propagation process and back propagation of data.\"\"\"\n (loss, *_), grads = grad_fn(inputs, labels)\n optimizer(grads)\n return loss\n\n @ms_jit\n def _run_step_graph(inputs, labels):\n \"\"\"Core process of each step, including the forward propagation process and back propagation of data.\"\"\"\n (loss, _), grads = grad_fn(inputs, labels)\n loss = ops.depend(loss, optimizer(grads))\n return loss\n\n total = self.train_dataset.get_dataset_size()\n # train epoch begin\n for epoch in range(0, self.epochs):\n net.set_train()\n self.cur_epoch_nums = epoch + 1\n self.cur_step_nums = 0\n run_context.cur_epoch_nums = self.cur_epoch_nums\n run_context.cur_step_nums = 0\n if self.earlystop is True:\n break\n self.callback_manager.train_epoch_begin(run_context)\n with tqdm(total=total) as progress:\n progress.set_description(f'Epoch {epoch}')\n loss_total = 0\n # step begin\n for data in self.train_dataset.create_dict_iterator():\n inputs, tgts = self._data_process(data, tgt_columns)\n run_context.cur_step_nums += 1\n self.cur_step_nums += 1\n self.callback_manager.train_step_begin(run_context)\n if jit:\n loss = _run_step_graph(inputs, tgts)\n else:\n loss = _run_step(inputs, tgts)\n loss_total += loss\n progress.set_postfix(loss=loss_total/self.cur_step_nums)\n progress.update(1)\n # step end\n self.callback_manager.train_step_end(run_context)\n # train epoch end\n progress.close()\n self.callback_manager.train_epoch_end(run_context)\n # do epoch evaluation\n if self.evaluator is not None:\n self._do_eval_epoch(run_context, tgt_columns, jit)\n\n def _run_ds_sink(self, train_dataset, eval_dataset, list_callback,\n cb_params, print_steps, eval_steps):\n \"\"\"Training process for data sinking mode.\"\"\"\n raise NotImplementedError\n\n def _load_checkpoint(self, path):\n \"\"\"Load checkpoint.\"\"\"\n raise NotImplementedError\n\n def _save_checkpoint(self, path):\n \"\"\"Save checkpoint.\"\"\"\n raise NotImplementedError\n\n def _do_eval_steps(self, steps, eval_dataset):\n \"\"\"Evaluate the model after n steps.\"\"\"\n raise NotImplementedError\n\n def _do_eval_epoch(self, run_context, tgt_columns=None, jit=False):\n \"\"\"Evaluate the model after an epoch.\"\"\"\n self.callback_manager.evaluate_begin(run_context)\n self.evaluator.clear_metrics()\n metrics_result, metrics_names, metrics_values = self.evaluator._run(tgt_columns, jit)\n setattr(run_context, \"metrics_values\", metrics_values)\n setattr(run_context, \"metrics_result\", metrics_result)\n setattr(run_context, \"metrics_names\", metrics_names)\n self.callback_manager.evaluate_end(run_context)\n self.earlystop = run_context.earlystop\n\n def _data_process(self, data, tgt_columns):\n \"\"\"Process data match 
the network construct\"\"\"\n # prepare input dataset.\n sig = signature(self.network.construct)\n net_args = sig.parameters\n inputs = ()\n for arg in net_args:\n if arg == 'self':\n continue\n if arg not in data.keys():\n if str(net_args[arg])[-4:] == 'None':\n continue\n inputs = inputs + (data[arg],)\n # process target dataset.\n tgt_columns = self._prepare_tgt_columns(tgt_columns)\n tgts = ()\n for tgt_column in tgt_columns:\n tgts = tgts + (data[tgt_column],)\n return mutable(inputs), mutable(tgts)\n\n def _prepare_tgt_columns(self, tgt_columns):\n \"\"\"Check and prepare target columns for training.\"\"\"\n out_columns = []\n if tgt_columns is None:\n log.warning(\"In the process of training model, tgt_column can not be None.\")\n return []\n if isinstance(tgt_columns, str):\n out_columns.append(tgt_columns)\n elif isinstance(tgt_columns, list):\n if all(isinstance(tgt_column, str) for tgt_column in tgt_columns) is True:\n out_columns = tgt_columns\n else:\n obj = [not isinstance(tgt_column, str) for tgt_column in tgt_columns][0]\n raise TypeError(f\"Expect str of tgt_column. Got {type(obj)}\")\n else:\n raise TypeError(f\"Expect tgt_columns to be list or str. Got {type(tgt_columns)}.\")\n return out_columns\n", "path": "mindnlp/engine/trainer.py"}]}
| 3,623 | 214 |
gh_patches_debug_38055 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-2580 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support JWT on cookies
private conversation: https://huggingface.slack.com/archives/D030YA5BW91/p1696507761676679
When the user goes to https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need, moonlanding can put a cookie on `datasets-server.huggingface.co` with name `hf_jwt_[sha256(/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)]` and with the JWT as the value
This cookie would be read on the datasets server when accessing a gated dataset
Doing so would greatly simplify the code on the Hub (moonlanding) by removing the need to refresh the JWT (one endpoint could be removed) and avoiding the refresh logic in the frontend code. It would be a security improvement too, because the Hub's frontend code (javascript) would no longer have access to the JWT (the browser adds the cookie to the HTTP request directly)
--- END ISSUE ---
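As a rough illustration of the naming scheme the issue describes (not code from this repository): the cookie name is derived from the dataset path via sha256. The hex encoding and the helper names below are assumptions; the issue does not pin down the exact encoding.

```python
import hashlib

def hf_jwt_cookie_name(dataset: str) -> str:
    # "hf_jwt_" + sha256 of the dataset path, as sketched in the issue.
    # Hex digest is an assumption; the issue does not specify the encoding.
    path = f"/datasets/{dataset}"
    return "hf_jwt_" + hashlib.sha256(path.encode("utf-8")).hexdigest()

def get_jwt_from_cookies(request, dataset: str):
    # Starlette requests expose parsed cookies via `request.cookies`.
    return request.cookies.get(hf_jwt_cookie_name(dataset))
```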
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libs/libapi/src/libapi/authentication.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 import logging
5 from collections.abc import Generator
6 from typing import Literal, Optional
7
8 import httpx
9 from libcommon.prometheus import StepProfiler
10 from starlette.requests import Request
11
12 from libapi.exceptions import (
13 AuthCheckHubRequestError,
14 ExternalAuthenticatedError,
15 ExternalUnauthenticatedError,
16 )
17 from libapi.jwt_token import validate_jwt
18
19
20 class RequestAuth(httpx.Auth):
21 """Attaches input Request authentication headers to the given Request object."""
22
23 def __init__(self, request: Optional[Request]) -> None:
24 self.cookie = request.headers.get("cookie") if request else None
25 self.authorization = request.headers.get("authorization") if request else None
26
27 def auth_flow(self, request: httpx.Request) -> Generator[httpx.Request, httpx.Response, None]:
28 # modify and yield the request
29 if self.cookie:
30 request.headers["cookie"] = self.cookie
31 if self.authorization:
32 request.headers["authorization"] = self.authorization
33 yield request
34
35
36 def get_jwt_token(request: Optional[Request] = None) -> Optional[str]:
37 if not request:
38 return None
39 # x-api-token is deprecated and will be removed in the future
40 if token := request.headers.get("x-api-key"):
41 return token
42 authorization = request.headers.get("authorization")
43 if not authorization:
44 return None
45 token = authorization.removeprefix("Bearer jwt:")
46 return None if token == authorization else token
47
48
49 async def auth_check(
50 dataset: str,
51 external_auth_url: Optional[str] = None,
52 request: Optional[Request] = None,
53 hf_jwt_public_keys: Optional[list[str]] = None,
54 hf_jwt_algorithm: Optional[str] = None,
55 hf_timeout_seconds: Optional[float] = None,
56 ) -> Literal[True]:
57 """check if the dataset is authorized for the request
58
59 It sends a request to the Hugging Face API to check if the dataset is authorized for the input request. The request
60 to the Hugging Face API is authenticated with the same authentication headers as the input request. It timeouts
61 after 200ms.
62
63 Args:
64 dataset (`str`): the dataset name
65 external_auth_url (`str`, *optional*): the URL of an external authentication service. The URL must contain `%s`,
66 which will be replaced with the dataset name, for example: https://huggingface.co/api/datasets/%s/auth-check
67 The authentication service must return 200, 401, 403 or 404.
68 If None, the dataset is always authorized.
69 request (`Request`, *optional*): the request which optionally bears authentication headers: "cookie",
70 "authorization" or "X-Api-Key"
71 hf_jwt_public_keys (`list[str]`, *optional*): the public keys to use to decode the JWT token
72 hf_jwt_algorithm (`str`): the algorithm to use to decode the JWT token
73 hf_timeout_seconds (`float`, *optional*): the timeout in seconds for the external authentication service. It
74 is used both for the connection timeout and the read timeout. If None, the request never timeouts.
75
76 Returns:
77 `Literal[True]`: the dataset is authorized for the request
78 """
79 with StepProfiler(method="auth_check", step="all"):
80 with StepProfiler(method="auth_check", step="check JWT"):
81 if (jwt_token := get_jwt_token(request)) and hf_jwt_public_keys and hf_jwt_algorithm:
82 validate_jwt(
83 dataset=dataset, token=jwt_token, public_keys=hf_jwt_public_keys, algorithm=hf_jwt_algorithm
84 )
85 logging.debug(
86 "By-passing the authentication step, because a valid JWT was passed in headers"
87 f" for dataset {dataset}. JWT was: {jwt_token}"
88 )
89 return True
90 with StepProfiler(method="auth_check", step="prepare parameters"):
91 if external_auth_url is None:
92 return True
93 try:
94 url = external_auth_url % dataset
95 except TypeError as e:
96 raise ValueError("external_auth_url must contain %s") from e
97 with StepProfiler(method="auth_check", step="create auth parameter"):
98 auth = RequestAuth(request)
99 with StepProfiler(
100 method="auth_check",
101 step="requests.get",
102 context=f"external_auth_url={external_auth_url} timeout={hf_timeout_seconds}",
103 ):
104 try:
105 logging.debug(
106 f"Checking authentication on the Hugging Face Hub for dataset {dataset}, url: {url}, timeout:"
107 f" {hf_timeout_seconds}, authorization: {auth.authorization}"
108 )
109 async with httpx.AsyncClient() as client:
110 response = await client.get(url, auth=auth, timeout=hf_timeout_seconds)
111 except Exception as err:
112 raise AuthCheckHubRequestError(
113 (
114 "Authentication check on the Hugging Face Hub failed or timed out. Please try again later,"
115 " it's a temporary internal issue."
116 ),
117 err,
118 ) from err
119 with StepProfiler(method="auth_check", step="return or raise"):
120 if response.status_code == 200:
121 return True
122 elif response.status_code == 401:
123 raise ExternalUnauthenticatedError(
124 "The dataset does not exist, or is not accessible without authentication (private or gated). Please"
125 " check the spelling of the dataset name or retry with authentication."
126 )
127 elif response.status_code in {403, 404}:
128 raise ExternalAuthenticatedError(
129 "The dataset does not exist, or is not accessible with the current credentials (private or gated)."
130 " Please check the spelling of the dataset name or retry with other authentication credentials."
131 )
132 else:
133 raise ValueError(f"Unexpected status code {response.status_code}")
134
```
Path: `services/admin/src/admin/authentication.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 from typing import Literal, Optional
5
6 import httpx
7 from libapi.authentication import RequestAuth
8 from libapi.exceptions import ExternalAuthenticatedError, ExternalUnauthenticatedError
9 from starlette.requests import Request
10
11
12 async def auth_check(
13 external_auth_url: Optional[str] = None,
14 request: Optional[Request] = None,
15 organization: Optional[str] = None,
16 hf_timeout_seconds: Optional[float] = None,
17 ) -> Literal[True]:
18 """check if the user is member of the organization
19
20 Args:
21 external_auth_url (`str`, *optional*): the URL of an external authentication service. If None, the dataset is always
22 authorized.
23 request (`Request`, *optional*): the request which optionally bears authentication headers: "cookie" or
24 "authorization"
25 organization (`str`, *optional*): the organization name. If None, the dataset is always
26 authorized.
27 hf_timeout_seconds (`float`, *optional*): the timeout in seconds for the HTTP request to the external authentication
28 service.
29
30 Returns:
31 `Literal[True]`: the user is authorized
32 """
33 if organization is None or external_auth_url is None:
34 return True
35 try:
36 async with httpx.AsyncClient() as client:
37 response = await client.get(external_auth_url, auth=RequestAuth(request), timeout=hf_timeout_seconds)
38 except Exception as err:
39 raise RuntimeError("External authentication check failed", err) from err
40 if response.status_code == 200:
41 try:
42 json = response.json()
43 if organization is None or organization in {org["name"] for org in json["orgs"]}:
44 return True
45 else:
46 raise ExternalAuthenticatedError("You are not member of the organization")
47 except Exception as err:
48 raise ExternalAuthenticatedError(
49 "Cannot access the route with the current credentials. Please retry with other authentication"
50 " credentials."
51 ) from err
52 elif response.status_code == 401:
53 raise ExternalUnauthenticatedError("Cannot access the route. Please retry with authentication.")
54 elif response.status_code in {403, 404}:
55 raise ExternalAuthenticatedError(
56 "Cannot access the route with the current credentials. Please retry with other authentication credentials."
57 )
58 else:
59 raise ValueError(f"Unexpected status code {response.status_code}")
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libs/libapi/src/libapi/authentication.py b/libs/libapi/src/libapi/authentication.py
--- a/libs/libapi/src/libapi/authentication.py
+++ b/libs/libapi/src/libapi/authentication.py
@@ -21,13 +21,10 @@
"""Attaches input Request authentication headers to the given Request object."""
def __init__(self, request: Optional[Request]) -> None:
- self.cookie = request.headers.get("cookie") if request else None
self.authorization = request.headers.get("authorization") if request else None
def auth_flow(self, request: httpx.Request) -> Generator[httpx.Request, httpx.Response, None]:
# modify and yield the request
- if self.cookie:
- request.headers["cookie"] = self.cookie
if self.authorization:
request.headers["authorization"] = self.authorization
yield request
@@ -66,7 +63,7 @@
which will be replaced with the dataset name, for example: https://huggingface.co/api/datasets/%s/auth-check
The authentication service must return 200, 401, 403 or 404.
If None, the dataset is always authorized.
- request (`Request`, *optional*): the request which optionally bears authentication headers: "cookie",
+ request (`Request`, *optional*): the request which optionally bears authentication headers:
"authorization" or "X-Api-Key"
hf_jwt_public_keys (`list[str]`, *optional*): the public keys to use to decode the JWT token
hf_jwt_algorithm (`str`): the algorithm to use to decode the JWT token
diff --git a/services/admin/src/admin/authentication.py b/services/admin/src/admin/authentication.py
--- a/services/admin/src/admin/authentication.py
+++ b/services/admin/src/admin/authentication.py
@@ -20,8 +20,7 @@
Args:
external_auth_url (`str`, *optional*): the URL of an external authentication service. If None, the dataset is always
authorized.
- request (`Request`, *optional*): the request which optionally bears authentication headers: "cookie" or
- "authorization"
+ request (`Request`, *optional*): the request which optionally bears authentication header: "authorization"
organization (`str`, *optional*): the organization name. If None, the dataset is always
authorized.
hf_timeout_seconds (`float`, *optional*): the timeout in seconds for the HTTP request to the external authentication
|
{"golden_diff": "diff --git a/libs/libapi/src/libapi/authentication.py b/libs/libapi/src/libapi/authentication.py\n--- a/libs/libapi/src/libapi/authentication.py\n+++ b/libs/libapi/src/libapi/authentication.py\n@@ -21,13 +21,10 @@\n \"\"\"Attaches input Request authentication headers to the given Request object.\"\"\"\n \n def __init__(self, request: Optional[Request]) -> None:\n- self.cookie = request.headers.get(\"cookie\") if request else None\n self.authorization = request.headers.get(\"authorization\") if request else None\n \n def auth_flow(self, request: httpx.Request) -> Generator[httpx.Request, httpx.Response, None]:\n # modify and yield the request\n- if self.cookie:\n- request.headers[\"cookie\"] = self.cookie\n if self.authorization:\n request.headers[\"authorization\"] = self.authorization\n yield request\n@@ -66,7 +63,7 @@\n which will be replaced with the dataset name, for example: https://huggingface.co/api/datasets/%s/auth-check\n The authentication service must return 200, 401, 403 or 404.\n If None, the dataset is always authorized.\n- request (`Request`, *optional*): the request which optionally bears authentication headers: \"cookie\",\n+ request (`Request`, *optional*): the request which optionally bears authentication headers:\n \"authorization\" or \"X-Api-Key\"\n hf_jwt_public_keys (`list[str]`, *optional*): the public keys to use to decode the JWT token\n hf_jwt_algorithm (`str`): the algorithm to use to decode the JWT token\ndiff --git a/services/admin/src/admin/authentication.py b/services/admin/src/admin/authentication.py\n--- a/services/admin/src/admin/authentication.py\n+++ b/services/admin/src/admin/authentication.py\n@@ -20,8 +20,7 @@\n Args:\n external_auth_url (`str`, *optional*): the URL of an external authentication service. If None, the dataset is always\n authorized.\n- request (`Request`, *optional*): the request which optionally bears authentication headers: \"cookie\" or\n- \"authorization\"\n+ request (`Request`, *optional*): the request which optionally bears authentication header: \"authorization\"\n organization (`str`, *optional*): the organization name. If None, the dataset is always\n authorized.\n hf_timeout_seconds (`float`, *optional*): the timeout in seconds for the HTTP request to the external authentication\n", "issue": "Support JWT on cookies\nprivate conversion: https://huggingface.slack.com/archives/D030YA5BW91/p1696507761676679\r\n\r\nWhen the users goes to https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need, moonlanding can put a cookie on `datasets-server.huggingface.co` with name `hf_jwt_[sha256(/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)]` and with the JWT as the value\r\n\r\nThis cookie would be read on datasets server when accessing a gated dataset\r\n\r\nDoing so would simplify a lot the code on the Hub (moonlanding) by removing the need to refresh the JWT (remove an endpoint), and avoid the logic in the frontend code that refreshes the JWT. 
It would be a security improvement too, because the Hub's frontend code (javascript) would no more have access to the JWT (the browser directly adds the cookie to the HTTP request)\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\nfrom collections.abc import Generator\nfrom typing import Literal, Optional\n\nimport httpx\nfrom libcommon.prometheus import StepProfiler\nfrom starlette.requests import Request\n\nfrom libapi.exceptions import (\n AuthCheckHubRequestError,\n ExternalAuthenticatedError,\n ExternalUnauthenticatedError,\n)\nfrom libapi.jwt_token import validate_jwt\n\n\nclass RequestAuth(httpx.Auth):\n \"\"\"Attaches input Request authentication headers to the given Request object.\"\"\"\n\n def __init__(self, request: Optional[Request]) -> None:\n self.cookie = request.headers.get(\"cookie\") if request else None\n self.authorization = request.headers.get(\"authorization\") if request else None\n\n def auth_flow(self, request: httpx.Request) -> Generator[httpx.Request, httpx.Response, None]:\n # modify and yield the request\n if self.cookie:\n request.headers[\"cookie\"] = self.cookie\n if self.authorization:\n request.headers[\"authorization\"] = self.authorization\n yield request\n\n\ndef get_jwt_token(request: Optional[Request] = None) -> Optional[str]:\n if not request:\n return None\n # x-api-token is deprecated and will be removed in the future\n if token := request.headers.get(\"x-api-key\"):\n return token\n authorization = request.headers.get(\"authorization\")\n if not authorization:\n return None\n token = authorization.removeprefix(\"Bearer jwt:\")\n return None if token == authorization else token\n\n\nasync def auth_check(\n dataset: str,\n external_auth_url: Optional[str] = None,\n request: Optional[Request] = None,\n hf_jwt_public_keys: Optional[list[str]] = None,\n hf_jwt_algorithm: Optional[str] = None,\n hf_timeout_seconds: Optional[float] = None,\n) -> Literal[True]:\n \"\"\"check if the dataset is authorized for the request\n\n It sends a request to the Hugging Face API to check if the dataset is authorized for the input request. The request\n to the Hugging Face API is authenticated with the same authentication headers as the input request. It timeouts\n after 200ms.\n\n Args:\n dataset (`str`): the dataset name\n external_auth_url (`str`, *optional*): the URL of an external authentication service. The URL must contain `%s`,\n which will be replaced with the dataset name, for example: https://huggingface.co/api/datasets/%s/auth-check\n The authentication service must return 200, 401, 403 or 404.\n If None, the dataset is always authorized.\n request (`Request`, *optional*): the request which optionally bears authentication headers: \"cookie\",\n \"authorization\" or \"X-Api-Key\"\n hf_jwt_public_keys (`list[str]`, *optional*): the public keys to use to decode the JWT token\n hf_jwt_algorithm (`str`): the algorithm to use to decode the JWT token\n hf_timeout_seconds (`float`, *optional*): the timeout in seconds for the external authentication service. It\n is used both for the connection timeout and the read timeout. 
If None, the request never timeouts.\n\n Returns:\n `Literal[True]`: the dataset is authorized for the request\n \"\"\"\n with StepProfiler(method=\"auth_check\", step=\"all\"):\n with StepProfiler(method=\"auth_check\", step=\"check JWT\"):\n if (jwt_token := get_jwt_token(request)) and hf_jwt_public_keys and hf_jwt_algorithm:\n validate_jwt(\n dataset=dataset, token=jwt_token, public_keys=hf_jwt_public_keys, algorithm=hf_jwt_algorithm\n )\n logging.debug(\n \"By-passing the authentication step, because a valid JWT was passed in headers\"\n f\" for dataset {dataset}. JWT was: {jwt_token}\"\n )\n return True\n with StepProfiler(method=\"auth_check\", step=\"prepare parameters\"):\n if external_auth_url is None:\n return True\n try:\n url = external_auth_url % dataset\n except TypeError as e:\n raise ValueError(\"external_auth_url must contain %s\") from e\n with StepProfiler(method=\"auth_check\", step=\"create auth parameter\"):\n auth = RequestAuth(request)\n with StepProfiler(\n method=\"auth_check\",\n step=\"requests.get\",\n context=f\"external_auth_url={external_auth_url} timeout={hf_timeout_seconds}\",\n ):\n try:\n logging.debug(\n f\"Checking authentication on the Hugging Face Hub for dataset {dataset}, url: {url}, timeout:\"\n f\" {hf_timeout_seconds}, authorization: {auth.authorization}\"\n )\n async with httpx.AsyncClient() as client:\n response = await client.get(url, auth=auth, timeout=hf_timeout_seconds)\n except Exception as err:\n raise AuthCheckHubRequestError(\n (\n \"Authentication check on the Hugging Face Hub failed or timed out. Please try again later,\"\n \" it's a temporary internal issue.\"\n ),\n err,\n ) from err\n with StepProfiler(method=\"auth_check\", step=\"return or raise\"):\n if response.status_code == 200:\n return True\n elif response.status_code == 401:\n raise ExternalUnauthenticatedError(\n \"The dataset does not exist, or is not accessible without authentication (private or gated). Please\"\n \" check the spelling of the dataset name or retry with authentication.\"\n )\n elif response.status_code in {403, 404}:\n raise ExternalAuthenticatedError(\n \"The dataset does not exist, or is not accessible with the current credentials (private or gated).\"\n \" Please check the spelling of the dataset name or retry with other authentication credentials.\"\n )\n else:\n raise ValueError(f\"Unexpected status code {response.status_code}\")\n", "path": "libs/libapi/src/libapi/authentication.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nfrom typing import Literal, Optional\n\nimport httpx\nfrom libapi.authentication import RequestAuth\nfrom libapi.exceptions import ExternalAuthenticatedError, ExternalUnauthenticatedError\nfrom starlette.requests import Request\n\n\nasync def auth_check(\n external_auth_url: Optional[str] = None,\n request: Optional[Request] = None,\n organization: Optional[str] = None,\n hf_timeout_seconds: Optional[float] = None,\n) -> Literal[True]:\n \"\"\"check if the user is member of the organization\n\n Args:\n external_auth_url (`str`, *optional*): the URL of an external authentication service. If None, the dataset is always\n authorized.\n request (`Request`, *optional*): the request which optionally bears authentication headers: \"cookie\" or\n \"authorization\"\n organization (`str`, *optional*): the organization name. 
If None, the dataset is always\n authorized.\n hf_timeout_seconds (`float`, *optional*): the timeout in seconds for the HTTP request to the external authentication\n service.\n\n Returns:\n `Literal[True]`: the user is authorized\n \"\"\"\n if organization is None or external_auth_url is None:\n return True\n try:\n async with httpx.AsyncClient() as client:\n response = await client.get(external_auth_url, auth=RequestAuth(request), timeout=hf_timeout_seconds)\n except Exception as err:\n raise RuntimeError(\"External authentication check failed\", err) from err\n if response.status_code == 200:\n try:\n json = response.json()\n if organization is None or organization in {org[\"name\"] for org in json[\"orgs\"]}:\n return True\n else:\n raise ExternalAuthenticatedError(\"You are not member of the organization\")\n except Exception as err:\n raise ExternalAuthenticatedError(\n \"Cannot access the route with the current credentials. Please retry with other authentication\"\n \" credentials.\"\n ) from err\n elif response.status_code == 401:\n raise ExternalUnauthenticatedError(\"Cannot access the route. Please retry with authentication.\")\n elif response.status_code in {403, 404}:\n raise ExternalAuthenticatedError(\n \"Cannot access the route with the current credentials. Please retry with other authentication credentials.\"\n )\n else:\n raise ValueError(f\"Unexpected status code {response.status_code}\")\n", "path": "services/admin/src/admin/authentication.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\nfrom collections.abc import Generator\nfrom typing import Literal, Optional\n\nimport httpx\nfrom libcommon.prometheus import StepProfiler\nfrom starlette.requests import Request\n\nfrom libapi.exceptions import (\n AuthCheckHubRequestError,\n ExternalAuthenticatedError,\n ExternalUnauthenticatedError,\n)\nfrom libapi.jwt_token import validate_jwt\n\n\nclass RequestAuth(httpx.Auth):\n \"\"\"Attaches input Request authentication headers to the given Request object.\"\"\"\n\n def __init__(self, request: Optional[Request]) -> None:\n self.authorization = request.headers.get(\"authorization\") if request else None\n\n def auth_flow(self, request: httpx.Request) -> Generator[httpx.Request, httpx.Response, None]:\n # modify and yield the request\n if self.authorization:\n request.headers[\"authorization\"] = self.authorization\n yield request\n\n\ndef get_jwt_token(request: Optional[Request] = None) -> Optional[str]:\n if not request:\n return None\n # x-api-token is deprecated and will be removed in the future\n if token := request.headers.get(\"x-api-key\"):\n return token\n authorization = request.headers.get(\"authorization\")\n if not authorization:\n return None\n token = authorization.removeprefix(\"Bearer jwt:\")\n return None if token == authorization else token\n\n\nasync def auth_check(\n dataset: str,\n external_auth_url: Optional[str] = None,\n request: Optional[Request] = None,\n hf_jwt_public_keys: Optional[list[str]] = None,\n hf_jwt_algorithm: Optional[str] = None,\n hf_timeout_seconds: Optional[float] = None,\n) -> Literal[True]:\n \"\"\"check if the dataset is authorized for the request\n\n It sends a request to the Hugging Face API to check if the dataset is authorized for the input request. The request\n to the Hugging Face API is authenticated with the same authentication headers as the input request. 
It timeouts\n after 200ms.\n\n Args:\n dataset (`str`): the dataset name\n external_auth_url (`str`, *optional*): the URL of an external authentication service. The URL must contain `%s`,\n which will be replaced with the dataset name, for example: https://huggingface.co/api/datasets/%s/auth-check\n The authentication service must return 200, 401, 403 or 404.\n If None, the dataset is always authorized.\n request (`Request`, *optional*): the request which optionally bears authentication headers:\n \"authorization\" or \"X-Api-Key\"\n hf_jwt_public_keys (`list[str]`, *optional*): the public keys to use to decode the JWT token\n hf_jwt_algorithm (`str`): the algorithm to use to decode the JWT token\n hf_timeout_seconds (`float`, *optional*): the timeout in seconds for the external authentication service. It\n is used both for the connection timeout and the read timeout. If None, the request never timeouts.\n\n Returns:\n `Literal[True]`: the dataset is authorized for the request\n \"\"\"\n with StepProfiler(method=\"auth_check\", step=\"all\"):\n with StepProfiler(method=\"auth_check\", step=\"check JWT\"):\n if (jwt_token := get_jwt_token(request)) and hf_jwt_public_keys and hf_jwt_algorithm:\n validate_jwt(\n dataset=dataset, token=jwt_token, public_keys=hf_jwt_public_keys, algorithm=hf_jwt_algorithm\n )\n logging.debug(\n \"By-passing the authentication step, because a valid JWT was passed in headers\"\n f\" for dataset {dataset}. JWT was: {jwt_token}\"\n )\n return True\n with StepProfiler(method=\"auth_check\", step=\"prepare parameters\"):\n if external_auth_url is None:\n return True\n try:\n url = external_auth_url % dataset\n except TypeError as e:\n raise ValueError(\"external_auth_url must contain %s\") from e\n with StepProfiler(method=\"auth_check\", step=\"create auth parameter\"):\n auth = RequestAuth(request)\n with StepProfiler(\n method=\"auth_check\",\n step=\"requests.get\",\n context=f\"external_auth_url={external_auth_url} timeout={hf_timeout_seconds}\",\n ):\n try:\n logging.debug(\n f\"Checking authentication on the Hugging Face Hub for dataset {dataset}, url: {url}, timeout:\"\n f\" {hf_timeout_seconds}, authorization: {auth.authorization}\"\n )\n async with httpx.AsyncClient() as client:\n response = await client.get(url, auth=auth, timeout=hf_timeout_seconds)\n except Exception as err:\n raise AuthCheckHubRequestError(\n (\n \"Authentication check on the Hugging Face Hub failed or timed out. Please try again later,\"\n \" it's a temporary internal issue.\"\n ),\n err,\n ) from err\n with StepProfiler(method=\"auth_check\", step=\"return or raise\"):\n if response.status_code == 200:\n return True\n elif response.status_code == 401:\n raise ExternalUnauthenticatedError(\n \"The dataset does not exist, or is not accessible without authentication (private or gated). 
Please\"\n \" check the spelling of the dataset name or retry with authentication.\"\n )\n elif response.status_code in {403, 404}:\n raise ExternalAuthenticatedError(\n \"The dataset does not exist, or is not accessible with the current credentials (private or gated).\"\n \" Please check the spelling of the dataset name or retry with other authentication credentials.\"\n )\n else:\n raise ValueError(f\"Unexpected status code {response.status_code}\")\n", "path": "libs/libapi/src/libapi/authentication.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nfrom typing import Literal, Optional\n\nimport httpx\nfrom libapi.authentication import RequestAuth\nfrom libapi.exceptions import ExternalAuthenticatedError, ExternalUnauthenticatedError\nfrom starlette.requests import Request\n\n\nasync def auth_check(\n external_auth_url: Optional[str] = None,\n request: Optional[Request] = None,\n organization: Optional[str] = None,\n hf_timeout_seconds: Optional[float] = None,\n) -> Literal[True]:\n \"\"\"check if the user is member of the organization\n\n Args:\n external_auth_url (`str`, *optional*): the URL of an external authentication service. If None, the dataset is always\n authorized.\n request (`Request`, *optional*): the request which optionally bears authentication header: \"authorization\"\n organization (`str`, *optional*): the organization name. If None, the dataset is always\n authorized.\n hf_timeout_seconds (`float`, *optional*): the timeout in seconds for the HTTP request to the external authentication\n service.\n\n Returns:\n `Literal[True]`: the user is authorized\n \"\"\"\n if organization is None or external_auth_url is None:\n return True\n try:\n async with httpx.AsyncClient() as client:\n response = await client.get(external_auth_url, auth=RequestAuth(request), timeout=hf_timeout_seconds)\n except Exception as err:\n raise RuntimeError(\"External authentication check failed\", err) from err\n if response.status_code == 200:\n try:\n json = response.json()\n if organization is None or organization in {org[\"name\"] for org in json[\"orgs\"]}:\n return True\n else:\n raise ExternalAuthenticatedError(\"You are not member of the organization\")\n except Exception as err:\n raise ExternalAuthenticatedError(\n \"Cannot access the route with the current credentials. Please retry with other authentication\"\n \" credentials.\"\n ) from err\n elif response.status_code == 401:\n raise ExternalUnauthenticatedError(\"Cannot access the route. Please retry with authentication.\")\n elif response.status_code in {403, 404}:\n raise ExternalAuthenticatedError(\n \"Cannot access the route with the current credentials. Please retry with other authentication credentials.\"\n )\n else:\n raise ValueError(f\"Unexpected status code {response.status_code}\")\n", "path": "services/admin/src/admin/authentication.py"}]}
| 2,683 | 535 |
gh_patches_debug_1886 | rasdani/github-patches | git_diff | beetbox__beets-535 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mpdstats: last_played is documented but not implemented
As pointed out [on the mailing list](https://groups.google.com/d/msg/beets-users/VW0pxtCVZG4/sq9gGsNS9zEJ), the mpdstats plugin (paging @pscn and @kljohann) does not seem to set the `last_played` field, even though the field is described in [the plugin's docs](http://beets.readthedocs.org/en/v1.3.2/plugins/mpdstats.html). Grepping in mpdstats.py for "last_played" shows that it doesn't seem to be implemented. We should probably either add it to the plugin or remove it from the docs.
--- END ISSUE ---
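For orientation only, a hedged sketch of what recording the field could look like; this is not the plugin's actual code, and it assumes beets' flexible item attributes and `Item.store()`:

```python
import time

def record_last_played(item):
    """Persist the current POSIX timestamp as `last_played` on a beets Item (illustrative)."""
    # Flexible attributes let plugins attach extra fields to an Item;
    # store() writes the change back to the library database.
    item.last_played = int(time.time())
    item.store()
```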
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/mpdstats.py`
Content:
```
1 # coding=utf-8
2 # This file is part of beets.
3 # Copyright 2013, Peter Schnebel and Johann Klähn.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 import logging
17 import mpd
18 import socket
19 import select
20 import time
21 import os
22
23 from beets import ui
24 from beets import config
25 from beets import plugins
26 from beets import library
27 from beets.util import displayable_path
28
29 log = logging.getLogger('beets')
30
31 # If we lose the connection, how many times do we want to retry and how
32 # much time should we wait between retries?
33 RETRIES = 10
34 RETRY_INTERVAL = 5
35
36
37 def is_url(path):
38 """Try to determine if the path is an URL.
39 """
40 return path.split('://', 1)[0] in ['http', 'https']
41
42
43 # Use the MPDClient internals to get unicode.
44 # see http://www.tarmack.eu/code/mpdunicode.py for the general idea
45 class MPDClient(mpd.MPDClient):
46 def _write_command(self, command, args=[]):
47 args = [unicode(arg).encode('utf-8') for arg in args]
48 super(MPDClient, self)._write_command(command, args)
49
50 def _read_line(self):
51 line = super(MPDClient, self)._read_line()
52 if line is not None:
53 return line.decode('utf-8')
54 return None
55
56
57 class MPDClientWrapper(object):
58 def __init__(self):
59 self.music_directory = (
60 config['mpdstats']['music_directory'].get(unicode))
61
62 self.client = MPDClient()
63
64 def connect(self):
65 """Connect to the MPD.
66 """
67 host = config['mpd']['host'].get(unicode)
68 port = config['mpd']['port'].get(int)
69
70 if host[0] in ['/', '~']:
71 host = os.path.expanduser(host)
72
73 log.info(u'mpdstats: connecting to {0}:{1}'.format(host, port))
74 try:
75 self.client.connect(host, port)
76 except socket.error as e:
77 raise ui.UserError('could not connect to MPD: {0}'.format(e))
78
79 password = config['mpd']['password'].get(unicode)
80 if password:
81 try:
82 self.client.password(password)
83 except mpd.CommandError as e:
84 raise ui.UserError(
85 'could not authenticate to MPD: {0}'.format(e)
86 )
87
88 def disconnect(self):
89 """Disconnect from the MPD.
90 """
91 self.client.close()
92 self.client.disconnect()
93
94 def get(self, command, retries=RETRIES):
95 """Wrapper for requests to the MPD server. Tries to re-connect if the
96 connection was lost (f.ex. during MPD's library refresh).
97 """
98 try:
99 return getattr(self.client, command)()
100 except (select.error, mpd.ConnectionError) as err:
101 log.error(u'mpdstats: {0}'.format(err))
102
103 if retries <= 0:
104 # if we exited without breaking, we couldn't reconnect in time :(
105 raise ui.UserError(u'communication with MPD server failed')
106
107 time.sleep(RETRY_INTERVAL)
108
109 try:
110 self.disconnect()
111 except mpd.ConnectionError:
112 pass
113
114 self.connect()
115 return self.get(command, retries=retries - 1)
116
117 def playlist(self):
118 """Return the currently active playlist. Prefixes paths with the
119 music_directory, to get the absolute path.
120 """
121 result = {}
122 for entry in self.get('playlistinfo'):
123 if not is_url(entry['file']):
124 result[entry['id']] = os.path.join(
125 self.music_directory, entry['file'])
126 else:
127 result[entry['id']] = entry['file']
128 return result
129
130 def status(self):
131 """Return the current status of the MPD.
132 """
133 return self.get('status')
134
135 def events(self):
136 """Return list of events. This may block a long time while waiting for
137 an answer from MPD.
138 """
139 return self.get('idle')
140
141
142 class MPDStats(object):
143 def __init__(self, lib):
144 self.lib = lib
145
146 self.do_rating = config['mpdstats']['rating'].get(bool)
147 self.rating_mix = config['mpdstats']['rating_mix'].get(float)
148 self.time_threshold = 10.0 # TODO: maybe add config option?
149
150 self.now_playing = None
151 self.mpd = MPDClientWrapper()
152
153 def rating(self, play_count, skip_count, rating, skipped):
154 """Calculate a new rating for a song based on play count, skip count,
155 old rating and the fact if it was skipped or not.
156 """
157 if skipped:
158 rolling = (rating - rating / 2.0)
159 else:
160 rolling = (rating + (1.0 - rating) / 2.0)
161 stable = (play_count + 1.0) / (play_count + skip_count + 2.0)
162 return (self.rating_mix * stable
163 + (1.0 - self.rating_mix) * rolling)
164
165 def get_item(self, path):
166 """Return the beets item related to path.
167 """
168 query = library.PathQuery('path', path)
169 item = self.lib.items(query).get()
170 if item:
171 return item
172 else:
173 log.info(u'mpdstats: item not found: {0}'.format(
174 displayable_path(path)
175 ))
176
177 @staticmethod
178 def update_item(item, attribute, value=None, increment=None):
179 """Update the beets item. Set attribute to value or increment the value
180 of attribute. If the increment argument is used the value is cast to the
181 corresponding type.
182 """
183 if item is None:
184 return
185
186 if increment is not None:
187 item.load()
188 value = type(increment)(item.get(attribute, 0)) + increment
189
190 if value is not None:
191 item[attribute] = value
192 item.store()
193
194 log.debug(u'mpdstats: updated: {0} = {1} [{2}]'.format(
195 attribute,
196 item[attribute],
197 displayable_path(item.path),
198 ))
199
200 def update_rating(self, item, skipped):
201 """Update the rating for a beets item.
202 """
203 item.load()
204 rating = self.rating(
205 int(item.get('play_count', 0)),
206 int(item.get('skip_count', 0)),
207 float(item.get('rating', 0.5)),
208 skipped)
209
210 self.update_item(item, 'rating', rating)
211
212 def handle_song_change(self, song):
213 """Determine if a song was skipped or not and update its attributes.
214 To this end the difference between the song's supposed end time
215 and the current time is calculated. If it's greater than a threshold,
216 the song is considered skipped.
217 """
218 diff = abs(song['remaining'] - (time.time() - song['started']))
219
220 skipped = diff >= self.time_threshold
221
222 if skipped:
223 self.handle_skipped(song)
224 else:
225 self.handle_played(song)
226
227 if self.do_rating:
228 self.update_rating(song['beets_item'], skipped)
229
230 def handle_played(self, song):
231 """Updates the play count of a song.
232 """
233 self.update_item(song['beets_item'], 'play_count', increment=1)
234 log.info(u'mpdstats: played {0}'.format(
235 displayable_path(song['path'])
236 ))
237
238 def handle_skipped(self, song):
239 """Updates the skip count of a song.
240 """
241 self.update_item(song['beets_item'], 'skip_count', increment=1)
242 log.info(u'mpdstats: skipped {0}'.format(
243 displayable_path(song['path'])
244 ))
245
246 def on_stop(self, status):
247 log.info(u'mpdstats: stop')
248 self.now_playing = None
249
250 def on_pause(self, status):
251 log.info(u'mpdstats: pause')
252 self.now_playing = None
253
254 def on_play(self, status):
255 playlist = self.mpd.playlist()
256 path = playlist.get(status['songid'])
257
258 if not path:
259 return
260
261 if is_url(path):
262 log.info(u'mpdstats: playing stream {0}'.format(
263 displayable_path(path)
264 ))
265 return
266
267 played, duration = map(int, status['time'].split(':', 1))
268 remaining = duration - played
269
270 if self.now_playing and self.now_playing['path'] != path:
271 self.handle_song_change(self.now_playing)
272
273 log.info(u'mpdstats: playing {0}'.format(
274 displayable_path(path)
275 ))
276
277 self.now_playing = {
278 'started': time.time(),
279 'remaining': remaining,
280 'path': path,
281 'beets_item': self.get_item(path),
282 }
283
284 def run(self):
285 self.mpd.connect()
286 events = ['player']
287
288 while True:
289 if 'player' in events:
290 status = self.mpd.status()
291
292 handler = getattr(self, 'on_' + status['state'], None)
293
294 if handler:
295 handler(status)
296 else:
297 log.debug(u'mpdstats: unhandled status "{0}"'.format(status))
298
299 events = self.mpd.events()
300
301
302 class MPDStatsPlugin(plugins.BeetsPlugin):
303 def __init__(self):
304 super(MPDStatsPlugin, self).__init__()
305 self.config.add({
306 'music_directory': config['directory'].as_filename(),
307 'rating': True,
308 'rating_mix': 0.75,
309 })
310 config['mpd'].add({
311 'host': u'localhost',
312 'port': 6600,
313 'password': u'',
314 })
315
316 def commands(self):
317 cmd = ui.Subcommand(
318 'mpdstats',
319 help='run a MPD client to gather play statistics')
320 cmd.parser.add_option(
321 '--host', dest='host', type='string',
322 help='set the hostname of the server to connect to')
323 cmd.parser.add_option(
324 '--port', dest='port', type='int',
325 help='set the port of the MPD server to connect to')
326 cmd.parser.add_option(
327 '--password', dest='password', type='string',
328 help='set the password of the MPD server to connect to')
329
330 def func(lib, opts, args):
331 self.config.set_args(opts)
332
333 # Overrides for MPD settings.
334 if opts.host:
335 config['mpd']['host'] = opts.host.decode('utf8')
336 if opts.port:
337 config['mpd']['host'] = int(opts.port)
338 if opts.password:
339 config['mpd']['password'] = opts.password.decode('utf8')
340
341 try:
342 MPDStats(lib).run()
343 except KeyboardInterrupt:
344 pass
345
346 cmd.func = func
347 return [cmd]
348
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/beetsplug/mpdstats.py b/beetsplug/mpdstats.py
--- a/beetsplug/mpdstats.py
+++ b/beetsplug/mpdstats.py
@@ -281,6 +281,9 @@
'beets_item': self.get_item(path),
}
+ self.update_item(self.now_playing['beets_item'],
+ 'last_played', value=int(time.time()))
+
def run(self):
self.mpd.connect()
events = ['player']
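The three added lines simply stamp the item with the Unix time at which playback started, using the plugin's existing `update_item` helper. Below is a minimal, self-contained sketch of that behaviour: the `update_item` logic is copied from the file above, while `FakeItem` is a hypothetical stand-in for a real beets `Item` (which would persist the change to the library database) and is not part of the project.

```python
import time


class FakeItem(dict):
    """Illustrative stand-in for a beets Item; a real Item writes to the library DB."""

    def store(self):
        pass  # a real Item would persist the updated fields here


def update_item(item, attribute, value=None, increment=None):
    # Mirrors MPDStats.update_item: set a value or bump a counter, then store.
    if item is None:
        return
    if increment is not None:
        value = type(increment)(item.get(attribute, 0)) + increment
    if value is not None:
        item[attribute] = value
        item.store()


song = {"beets_item": FakeItem(play_count=3)}
# What the patched on_play() now does when a track starts playing:
update_item(song["beets_item"], "last_played", value=int(time.time()))
print(song["beets_item"]["last_played"])  # e.g. 1388534400
```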
|
{"golden_diff": "diff --git a/beetsplug/mpdstats.py b/beetsplug/mpdstats.py\n--- a/beetsplug/mpdstats.py\n+++ b/beetsplug/mpdstats.py\n@@ -281,6 +281,9 @@\n 'beets_item': self.get_item(path),\n }\n \n+ self.update_item(self.now_playing['beets_item'],\n+ 'last_played', value=int(time.time()))\n+\n def run(self):\n self.mpd.connect()\n events = ['player']\n", "issue": "mpdstats: last_played is documented but not implemented\nAs pointed out [on the mailing list](https://groups.google.com/d/msg/beets-users/VW0pxtCVZG4/sq9gGsNS9zEJ), the mpdstats plugin (paging @pscn and @kljohann) does not seem to set the `last_played` field, even though the field is described in [the plugin's docs](http://beets.readthedocs.org/en/v1.3.2/plugins/mpdstats.html). Grepping in mpdstats.py for \"last_played\" shows that doesn't seem to be implemented. We should probably either add it to the plugin or remove it from the docs.\n\n", "before_files": [{"content": "# coding=utf-8\n# This file is part of beets.\n# Copyright 2013, Peter Schnebel and Johann Kl\u00e4hn.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\nimport logging\nimport mpd\nimport socket\nimport select\nimport time\nimport os\n\nfrom beets import ui\nfrom beets import config\nfrom beets import plugins\nfrom beets import library\nfrom beets.util import displayable_path\n\nlog = logging.getLogger('beets')\n\n# If we lose the connection, how many times do we want to retry and how\n# much time should we wait between retries?\nRETRIES = 10\nRETRY_INTERVAL = 5\n\n\ndef is_url(path):\n \"\"\"Try to determine if the path is an URL.\n \"\"\"\n return path.split('://', 1)[0] in ['http', 'https']\n\n\n# Use the MPDClient internals to get unicode.\n# see http://www.tarmack.eu/code/mpdunicode.py for the general idea\nclass MPDClient(mpd.MPDClient):\n def _write_command(self, command, args=[]):\n args = [unicode(arg).encode('utf-8') for arg in args]\n super(MPDClient, self)._write_command(command, args)\n\n def _read_line(self):\n line = super(MPDClient, self)._read_line()\n if line is not None:\n return line.decode('utf-8')\n return None\n\n\nclass MPDClientWrapper(object):\n def __init__(self):\n self.music_directory = (\n config['mpdstats']['music_directory'].get(unicode))\n\n self.client = MPDClient()\n\n def connect(self):\n \"\"\"Connect to the MPD.\n \"\"\"\n host = config['mpd']['host'].get(unicode)\n port = config['mpd']['port'].get(int)\n\n if host[0] in ['/', '~']:\n host = os.path.expanduser(host)\n\n log.info(u'mpdstats: connecting to {0}:{1}'.format(host, port))\n try:\n self.client.connect(host, port)\n except socket.error as e:\n raise ui.UserError('could not connect to MPD: {0}'.format(e))\n\n password = config['mpd']['password'].get(unicode)\n if password:\n try:\n self.client.password(password)\n except mpd.CommandError as e:\n raise ui.UserError(\n 'could not authenticate to MPD: {0}'.format(e)\n )\n\n def disconnect(self):\n \"\"\"Disconnect from the MPD.\n \"\"\"\n self.client.close()\n 
self.client.disconnect()\n\n def get(self, command, retries=RETRIES):\n \"\"\"Wrapper for requests to the MPD server. Tries to re-connect if the\n connection was lost (f.ex. during MPD's library refresh).\n \"\"\"\n try:\n return getattr(self.client, command)()\n except (select.error, mpd.ConnectionError) as err:\n log.error(u'mpdstats: {0}'.format(err))\n\n if retries <= 0:\n # if we exited without breaking, we couldn't reconnect in time :(\n raise ui.UserError(u'communication with MPD server failed')\n\n time.sleep(RETRY_INTERVAL)\n\n try:\n self.disconnect()\n except mpd.ConnectionError:\n pass\n\n self.connect()\n return self.get(command, retries=retries - 1)\n\n def playlist(self):\n \"\"\"Return the currently active playlist. Prefixes paths with the\n music_directory, to get the absolute path.\n \"\"\"\n result = {}\n for entry in self.get('playlistinfo'):\n if not is_url(entry['file']):\n result[entry['id']] = os.path.join(\n self.music_directory, entry['file'])\n else:\n result[entry['id']] = entry['file']\n return result\n\n def status(self):\n \"\"\"Return the current status of the MPD.\n \"\"\"\n return self.get('status')\n\n def events(self):\n \"\"\"Return list of events. This may block a long time while waiting for\n an answer from MPD.\n \"\"\"\n return self.get('idle')\n\n\nclass MPDStats(object):\n def __init__(self, lib):\n self.lib = lib\n\n self.do_rating = config['mpdstats']['rating'].get(bool)\n self.rating_mix = config['mpdstats']['rating_mix'].get(float)\n self.time_threshold = 10.0 # TODO: maybe add config option?\n\n self.now_playing = None\n self.mpd = MPDClientWrapper()\n\n def rating(self, play_count, skip_count, rating, skipped):\n \"\"\"Calculate a new rating for a song based on play count, skip count,\n old rating and the fact if it was skipped or not.\n \"\"\"\n if skipped:\n rolling = (rating - rating / 2.0)\n else:\n rolling = (rating + (1.0 - rating) / 2.0)\n stable = (play_count + 1.0) / (play_count + skip_count + 2.0)\n return (self.rating_mix * stable\n + (1.0 - self.rating_mix) * rolling)\n\n def get_item(self, path):\n \"\"\"Return the beets item related to path.\n \"\"\"\n query = library.PathQuery('path', path)\n item = self.lib.items(query).get()\n if item:\n return item\n else:\n log.info(u'mpdstats: item not found: {0}'.format(\n displayable_path(path)\n ))\n\n @staticmethod\n def update_item(item, attribute, value=None, increment=None):\n \"\"\"Update the beets item. Set attribute to value or increment the value\n of attribute. If the increment argument is used the value is cast to the\n corresponding type.\n \"\"\"\n if item is None:\n return\n\n if increment is not None:\n item.load()\n value = type(increment)(item.get(attribute, 0)) + increment\n\n if value is not None:\n item[attribute] = value\n item.store()\n\n log.debug(u'mpdstats: updated: {0} = {1} [{2}]'.format(\n attribute,\n item[attribute],\n displayable_path(item.path),\n ))\n\n def update_rating(self, item, skipped):\n \"\"\"Update the rating for a beets item.\n \"\"\"\n item.load()\n rating = self.rating(\n int(item.get('play_count', 0)),\n int(item.get('skip_count', 0)),\n float(item.get('rating', 0.5)),\n skipped)\n\n self.update_item(item, 'rating', rating)\n\n def handle_song_change(self, song):\n \"\"\"Determine if a song was skipped or not and update its attributes.\n To this end the difference between the song's supposed end time\n and the current time is calculated. 
If it's greater than a threshold,\n the song is considered skipped.\n \"\"\"\n diff = abs(song['remaining'] - (time.time() - song['started']))\n\n skipped = diff >= self.time_threshold\n\n if skipped:\n self.handle_skipped(song)\n else:\n self.handle_played(song)\n\n if self.do_rating:\n self.update_rating(song['beets_item'], skipped)\n\n def handle_played(self, song):\n \"\"\"Updates the play count of a song.\n \"\"\"\n self.update_item(song['beets_item'], 'play_count', increment=1)\n log.info(u'mpdstats: played {0}'.format(\n displayable_path(song['path'])\n ))\n\n def handle_skipped(self, song):\n \"\"\"Updates the skip count of a song.\n \"\"\"\n self.update_item(song['beets_item'], 'skip_count', increment=1)\n log.info(u'mpdstats: skipped {0}'.format(\n displayable_path(song['path'])\n ))\n\n def on_stop(self, status):\n log.info(u'mpdstats: stop')\n self.now_playing = None\n\n def on_pause(self, status):\n log.info(u'mpdstats: pause')\n self.now_playing = None\n\n def on_play(self, status):\n playlist = self.mpd.playlist()\n path = playlist.get(status['songid'])\n\n if not path:\n return\n\n if is_url(path):\n log.info(u'mpdstats: playing stream {0}'.format(\n displayable_path(path)\n ))\n return\n\n played, duration = map(int, status['time'].split(':', 1))\n remaining = duration - played\n\n if self.now_playing and self.now_playing['path'] != path:\n self.handle_song_change(self.now_playing)\n\n log.info(u'mpdstats: playing {0}'.format(\n displayable_path(path)\n ))\n\n self.now_playing = {\n 'started': time.time(),\n 'remaining': remaining,\n 'path': path,\n 'beets_item': self.get_item(path),\n }\n\n def run(self):\n self.mpd.connect()\n events = ['player']\n\n while True:\n if 'player' in events:\n status = self.mpd.status()\n\n handler = getattr(self, 'on_' + status['state'], None)\n\n if handler:\n handler(status)\n else:\n log.debug(u'mpdstats: unhandled status \"{0}\"'.format(status))\n\n events = self.mpd.events()\n\n\nclass MPDStatsPlugin(plugins.BeetsPlugin):\n def __init__(self):\n super(MPDStatsPlugin, self).__init__()\n self.config.add({\n 'music_directory': config['directory'].as_filename(),\n 'rating': True,\n 'rating_mix': 0.75,\n })\n config['mpd'].add({\n 'host': u'localhost',\n 'port': 6600,\n 'password': u'',\n })\n\n def commands(self):\n cmd = ui.Subcommand(\n 'mpdstats',\n help='run a MPD client to gather play statistics')\n cmd.parser.add_option(\n '--host', dest='host', type='string',\n help='set the hostname of the server to connect to')\n cmd.parser.add_option(\n '--port', dest='port', type='int',\n help='set the port of the MPD server to connect to')\n cmd.parser.add_option(\n '--password', dest='password', type='string',\n help='set the password of the MPD server to connect to')\n\n def func(lib, opts, args):\n self.config.set_args(opts)\n\n # Overrides for MPD settings.\n if opts.host:\n config['mpd']['host'] = opts.host.decode('utf8')\n if opts.port:\n config['mpd']['host'] = int(opts.port)\n if opts.password:\n config['mpd']['password'] = opts.password.decode('utf8')\n\n try:\n MPDStats(lib).run()\n except KeyboardInterrupt:\n pass\n\n cmd.func = func\n return [cmd]\n", "path": "beetsplug/mpdstats.py"}], "after_files": [{"content": "# coding=utf-8\n# This file is part of beets.\n# Copyright 2013, Peter Schnebel and Johann Kl\u00e4hn.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# 
without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\nimport logging\nimport mpd\nimport socket\nimport select\nimport time\nimport os\n\nfrom beets import ui\nfrom beets import config\nfrom beets import plugins\nfrom beets import library\nfrom beets.util import displayable_path\n\nlog = logging.getLogger('beets')\n\n# If we lose the connection, how many times do we want to retry and how\n# much time should we wait between retries?\nRETRIES = 10\nRETRY_INTERVAL = 5\n\n\ndef is_url(path):\n \"\"\"Try to determine if the path is an URL.\n \"\"\"\n return path.split('://', 1)[0] in ['http', 'https']\n\n\n# Use the MPDClient internals to get unicode.\n# see http://www.tarmack.eu/code/mpdunicode.py for the general idea\nclass MPDClient(mpd.MPDClient):\n def _write_command(self, command, args=[]):\n args = [unicode(arg).encode('utf-8') for arg in args]\n super(MPDClient, self)._write_command(command, args)\n\n def _read_line(self):\n line = super(MPDClient, self)._read_line()\n if line is not None:\n return line.decode('utf-8')\n return None\n\n\nclass MPDClientWrapper(object):\n def __init__(self):\n self.music_directory = (\n config['mpdstats']['music_directory'].get(unicode))\n\n self.client = MPDClient()\n\n def connect(self):\n \"\"\"Connect to the MPD.\n \"\"\"\n host = config['mpd']['host'].get(unicode)\n port = config['mpd']['port'].get(int)\n\n if host[0] in ['/', '~']:\n host = os.path.expanduser(host)\n\n log.info(u'mpdstats: connecting to {0}:{1}'.format(host, port))\n try:\n self.client.connect(host, port)\n except socket.error as e:\n raise ui.UserError('could not connect to MPD: {0}'.format(e))\n\n password = config['mpd']['password'].get(unicode)\n if password:\n try:\n self.client.password(password)\n except mpd.CommandError as e:\n raise ui.UserError(\n 'could not authenticate to MPD: {0}'.format(e)\n )\n\n def disconnect(self):\n \"\"\"Disconnect from the MPD.\n \"\"\"\n self.client.close()\n self.client.disconnect()\n\n def get(self, command, retries=RETRIES):\n \"\"\"Wrapper for requests to the MPD server. Tries to re-connect if the\n connection was lost (f.ex. during MPD's library refresh).\n \"\"\"\n try:\n return getattr(self.client, command)()\n except (select.error, mpd.ConnectionError) as err:\n log.error(u'mpdstats: {0}'.format(err))\n\n if retries <= 0:\n # if we exited without breaking, we couldn't reconnect in time :(\n raise ui.UserError(u'communication with MPD server failed')\n\n time.sleep(RETRY_INTERVAL)\n\n try:\n self.disconnect()\n except mpd.ConnectionError:\n pass\n\n self.connect()\n return self.get(command, retries=retries - 1)\n\n def playlist(self):\n \"\"\"Return the currently active playlist. Prefixes paths with the\n music_directory, to get the absolute path.\n \"\"\"\n result = {}\n for entry in self.get('playlistinfo'):\n if not is_url(entry['file']):\n result[entry['id']] = os.path.join(\n self.music_directory, entry['file'])\n else:\n result[entry['id']] = entry['file']\n return result\n\n def status(self):\n \"\"\"Return the current status of the MPD.\n \"\"\"\n return self.get('status')\n\n def events(self):\n \"\"\"Return list of events. 
This may block a long time while waiting for\n an answer from MPD.\n \"\"\"\n return self.get('idle')\n\n\nclass MPDStats(object):\n def __init__(self, lib):\n self.lib = lib\n\n self.do_rating = config['mpdstats']['rating'].get(bool)\n self.rating_mix = config['mpdstats']['rating_mix'].get(float)\n self.time_threshold = 10.0 # TODO: maybe add config option?\n\n self.now_playing = None\n self.mpd = MPDClientWrapper()\n\n def rating(self, play_count, skip_count, rating, skipped):\n \"\"\"Calculate a new rating for a song based on play count, skip count,\n old rating and the fact if it was skipped or not.\n \"\"\"\n if skipped:\n rolling = (rating - rating / 2.0)\n else:\n rolling = (rating + (1.0 - rating) / 2.0)\n stable = (play_count + 1.0) / (play_count + skip_count + 2.0)\n return (self.rating_mix * stable\n + (1.0 - self.rating_mix) * rolling)\n\n def get_item(self, path):\n \"\"\"Return the beets item related to path.\n \"\"\"\n query = library.PathQuery('path', path)\n item = self.lib.items(query).get()\n if item:\n return item\n else:\n log.info(u'mpdstats: item not found: {0}'.format(\n displayable_path(path)\n ))\n\n @staticmethod\n def update_item(item, attribute, value=None, increment=None):\n \"\"\"Update the beets item. Set attribute to value or increment the value\n of attribute. If the increment argument is used the value is cast to the\n corresponding type.\n \"\"\"\n if item is None:\n return\n\n if increment is not None:\n item.load()\n value = type(increment)(item.get(attribute, 0)) + increment\n\n if value is not None:\n item[attribute] = value\n item.store()\n\n log.debug(u'mpdstats: updated: {0} = {1} [{2}]'.format(\n attribute,\n item[attribute],\n displayable_path(item.path),\n ))\n\n def update_rating(self, item, skipped):\n \"\"\"Update the rating for a beets item.\n \"\"\"\n item.load()\n rating = self.rating(\n int(item.get('play_count', 0)),\n int(item.get('skip_count', 0)),\n float(item.get('rating', 0.5)),\n skipped)\n\n self.update_item(item, 'rating', rating)\n\n def handle_song_change(self, song):\n \"\"\"Determine if a song was skipped or not and update its attributes.\n To this end the difference between the song's supposed end time\n and the current time is calculated. 
If it's greater than a threshold,\n the song is considered skipped.\n \"\"\"\n diff = abs(song['remaining'] - (time.time() - song['started']))\n\n skipped = diff >= self.time_threshold\n\n if skipped:\n self.handle_skipped(song)\n else:\n self.handle_played(song)\n\n if self.do_rating:\n self.update_rating(song['beets_item'], skipped)\n\n def handle_played(self, song):\n \"\"\"Updates the play count of a song.\n \"\"\"\n self.update_item(song['beets_item'], 'play_count', increment=1)\n log.info(u'mpdstats: played {0}'.format(\n displayable_path(song['path'])\n ))\n\n def handle_skipped(self, song):\n \"\"\"Updates the skip count of a song.\n \"\"\"\n self.update_item(song['beets_item'], 'skip_count', increment=1)\n log.info(u'mpdstats: skipped {0}'.format(\n displayable_path(song['path'])\n ))\n\n def on_stop(self, status):\n log.info(u'mpdstats: stop')\n self.now_playing = None\n\n def on_pause(self, status):\n log.info(u'mpdstats: pause')\n self.now_playing = None\n\n def on_play(self, status):\n playlist = self.mpd.playlist()\n path = playlist.get(status['songid'])\n\n if not path:\n return\n\n if is_url(path):\n log.info(u'mpdstats: playing stream {0}'.format(\n displayable_path(path)\n ))\n return\n\n played, duration = map(int, status['time'].split(':', 1))\n remaining = duration - played\n\n if self.now_playing and self.now_playing['path'] != path:\n self.handle_song_change(self.now_playing)\n\n log.info(u'mpdstats: playing {0}'.format(\n displayable_path(path)\n ))\n\n self.now_playing = {\n 'started': time.time(),\n 'remaining': remaining,\n 'path': path,\n 'beets_item': self.get_item(path),\n }\n\n self.update_item(self.now_playing['beets_item'],\n 'last_played', value=int(time.time()))\n\n def run(self):\n self.mpd.connect()\n events = ['player']\n\n while True:\n if 'player' in events:\n status = self.mpd.status()\n\n handler = getattr(self, 'on_' + status['state'], None)\n\n if handler:\n handler(status)\n else:\n log.debug(u'mpdstats: unhandled status \"{0}\"'.format(status))\n\n events = self.mpd.events()\n\n\nclass MPDStatsPlugin(plugins.BeetsPlugin):\n def __init__(self):\n super(MPDStatsPlugin, self).__init__()\n self.config.add({\n 'music_directory': config['directory'].as_filename(),\n 'rating': True,\n 'rating_mix': 0.75,\n })\n config['mpd'].add({\n 'host': u'localhost',\n 'port': 6600,\n 'password': u'',\n })\n\n def commands(self):\n cmd = ui.Subcommand(\n 'mpdstats',\n help='run a MPD client to gather play statistics')\n cmd.parser.add_option(\n '--host', dest='host', type='string',\n help='set the hostname of the server to connect to')\n cmd.parser.add_option(\n '--port', dest='port', type='int',\n help='set the port of the MPD server to connect to')\n cmd.parser.add_option(\n '--password', dest='password', type='string',\n help='set the password of the MPD server to connect to')\n\n def func(lib, opts, args):\n self.config.set_args(opts)\n\n # Overrides for MPD settings.\n if opts.host:\n config['mpd']['host'] = opts.host.decode('utf8')\n if opts.port:\n config['mpd']['host'] = int(opts.port)\n if opts.password:\n config['mpd']['password'] = opts.password.decode('utf8')\n\n try:\n MPDStats(lib).run()\n except KeyboardInterrupt:\n pass\n\n cmd.func = func\n return [cmd]\n", "path": "beetsplug/mpdstats.py"}]}
| 3,857 | 111 |
gh_patches_debug_17449
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-nlp-876
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deberta tokenizer.detokenize() errors out with mask token
When working on the Deberta masked language model, we had to do some special treatment for the mask token in the tokenizer.
We left one outstanding bug on the main PR, which is that detokenize will error out with a mask token. See:
https://github.com/keras-team/keras-nlp/pull/732#issuecomment-1449746110
Here's a colab:
https://colab.research.google.com/gist/mattdangerw/5164a7cad80e9f5fcbb9a495264f80e1/deberta-detokenize-error.ipynb
We should either strip or properly render the mask token during detokenize so the call does not error out.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py`
Content:
```
1 # Copyright 2023 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """DeBERTa tokenizer."""
16
17 import copy
18
19 from keras_nlp.api_export import keras_nlp_export
20 from keras_nlp.models.deberta_v3.deberta_v3_presets import backbone_presets
21 from keras_nlp.tokenizers.sentence_piece_tokenizer import SentencePieceTokenizer
22 from keras_nlp.utils.python_utils import classproperty
23
24
25 @keras_nlp_export("keras_nlp.models.DebertaV3Tokenizer")
26 class DebertaV3Tokenizer(SentencePieceTokenizer):
27 """DeBERTa tokenizer layer based on SentencePiece.
28
29 This tokenizer class will tokenize raw strings into integer sequences and
30 is based on `keras_nlp.tokenizers.SentencePieceTokenizer`. Unlike the
31 underlying tokenizer, it will check for all special tokens needed by
32 DeBERTa models and provides a `from_preset()` method to automatically
33 download a matching vocabulary for a DeBERTa preset.
34
35 This tokenizer does not provide truncation or padding of inputs. It can be
36 combined with a `keras_nlp.models.DebertaV3Preprocessor` layer for input
37 packing.
38
39 If input is a batch of strings (rank > 0), the layer will output a
40 `tf.RaggedTensor` where the last dimension of the output is ragged.
41
42 If input is a scalar string (rank == 0), the layer will output a dense
43 `tf.Tensor` with static shape `[None]`.
44
45 Note: The mask token (`"[MASK]"`) is handled differently in this tokenizer.
46 If the token is not present in the provided SentencePiece vocabulary, the
47 token will be appended to the vocabulary. For example, if the vocabulary
48 size is 100, the mask token will be assigned the ID 100.
49
50 Args:
51 proto: Either a `string` path to a SentencePiece proto file, or a
52 `bytes` object with a serialized SentencePiece proto. See the
53 [SentencePiece repository](https://github.com/google/sentencepiece)
54 for more details on the format.
55
56 Examples:
57
58 ```python
59 tokenizer = keras_nlp.models.DebertaV3Tokenizer(proto="model.spm")
60
61 # Batched inputs.
62 tokenizer(["the quick brown fox", "the earth is round"])
63
64 # Unbatched inputs.
65 tokenizer("the quick brown fox")
66
67 # Detokenization.
68 tokenizer.detokenize(tf.constant([[1, 4, 9, 5, 7, 2]]))
69 ```
70 """
71
72 def __init__(self, proto, **kwargs):
73 super().__init__(proto=proto, **kwargs)
74
75 # Check for necessary special tokens.
76 cls_token = "[CLS]"
77 sep_token = "[SEP]"
78 pad_token = "[PAD]"
79 mask_token = "[MASK]"
80
81 # We do not throw an error if `mask_token` is not present in the
82 # vocabulary.
83 for token in [cls_token, pad_token, sep_token]:
84 if token not in super().get_vocabulary():
85 raise ValueError(
86 f"Cannot find token `'{token}'` in the provided "
87 f"`vocabulary`. Please provide `'{token}'` in your "
88 "`vocabulary` or use a pretrained `vocabulary` name."
89 )
90
91 self.cls_token_id = self.token_to_id(cls_token)
92 self.sep_token_id = self.token_to_id(sep_token)
93 self.pad_token_id = self.token_to_id(pad_token)
94 # If the mask token is not in the vocabulary, add it to the end of the
95 # vocabulary.
96 if mask_token in super().get_vocabulary():
97 self.mask_token_id = super().token_to_id(mask_token)
98 else:
99 self.mask_token_id = super().vocabulary_size()
100
101 def vocabulary_size(self):
102 sentence_piece_size = super().vocabulary_size()
103 if sentence_piece_size == self.mask_token_id:
104 return sentence_piece_size + 1
105 return sentence_piece_size
106
107 def get_vocabulary(self):
108 sentence_piece_vocabulary = super().get_vocabulary()
109 if self.mask_token_id < super().vocabulary_size():
110 return sentence_piece_vocabulary
111 return sentence_piece_vocabulary + ["[MASK]"]
112
113 def id_to_token(self, id):
114 if id == self.mask_token_id:
115 return "[MASK]"
116 return super().id_to_token(id)
117
118 def token_to_id(self, token):
119 if token == "[MASK]":
120 return self.mask_token_id
121 return super().token_to_id(token)
122
123 @classproperty
124 def presets(cls):
125 return copy.deepcopy(backbone_presets)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py b/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py
--- a/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py
+++ b/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py
@@ -16,6 +16,8 @@
import copy
+import tensorflow as tf
+
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.models.deberta_v3.deberta_v3_presets import backbone_presets
from keras_nlp.tokenizers.sentence_piece_tokenizer import SentencePieceTokenizer
@@ -120,6 +122,10 @@
return self.mask_token_id
return super().token_to_id(token)
+ def detokenize(self, ids):
+ ids = tf.ragged.boolean_mask(ids, tf.not_equal(ids, self.mask_token_id))
+ return super().detokenize(ids)
+
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
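The fix works by dropping every occurrence of the mask ID from the ID tensor before handing it to the parent class, so SentencePiece never sees an out-of-vocabulary ID. A small standalone illustration of that filtering step follows; the mask ID of 100 is an assumed example value (one past a 100-entry SentencePiece vocabulary), not taken from any real preset.

```python
import tensorflow as tf

mask_token_id = 100  # assumed: the ID appended one past the SentencePiece vocab
ids = tf.constant([[1, 4, 100, 5, 7, 2]])

# The same filtering the patched detokenize() applies before super().detokenize():
filtered = tf.ragged.boolean_mask(ids, tf.not_equal(ids, mask_token_id))
print(filtered)  # <tf.RaggedTensor [[1, 4, 5, 7, 2]]>
```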
|
{"golden_diff": "diff --git a/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py b/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py\n--- a/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py\n+++ b/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py\n@@ -16,6 +16,8 @@\n \n import copy\n \n+import tensorflow as tf\n+\n from keras_nlp.api_export import keras_nlp_export\n from keras_nlp.models.deberta_v3.deberta_v3_presets import backbone_presets\n from keras_nlp.tokenizers.sentence_piece_tokenizer import SentencePieceTokenizer\n@@ -120,6 +122,10 @@\n return self.mask_token_id\n return super().token_to_id(token)\n \n+ def detokenize(self, ids):\n+ ids = tf.ragged.boolean_mask(ids, tf.not_equal(ids, self.mask_token_id))\n+ return super().detokenize(ids)\n+\n @classproperty\n def presets(cls):\n return copy.deepcopy(backbone_presets)\n", "issue": "Deberta tokenizer.detokenize() errors out with mask token\nWhen working on the Deberta masked language model, we had to do some special treatment for the mask token in the tokenizer.\r\n\r\nWe left one outstanding bug on the main PR, which is that detokenize will error out with a mask token. See:\r\nhttps://github.com/keras-team/keras-nlp/pull/732#issuecomment-1449746110\r\n\r\nHere's a colab:\r\nhttps://colab.research.google.com/gist/mattdangerw/5164a7cad80e9f5fcbb9a495264f80e1/deberta-detokenize-error.ipynb\r\n\r\nWe should either strip or properly render the mask token during detokenize so the call does not error out.\n", "before_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"DeBERTa tokenizer.\"\"\"\n\nimport copy\n\nfrom keras_nlp.api_export import keras_nlp_export\nfrom keras_nlp.models.deberta_v3.deberta_v3_presets import backbone_presets\nfrom keras_nlp.tokenizers.sentence_piece_tokenizer import SentencePieceTokenizer\nfrom keras_nlp.utils.python_utils import classproperty\n\n\n@keras_nlp_export(\"keras_nlp.models.DebertaV3Tokenizer\")\nclass DebertaV3Tokenizer(SentencePieceTokenizer):\n \"\"\"DeBERTa tokenizer layer based on SentencePiece.\n\n This tokenizer class will tokenize raw strings into integer sequences and\n is based on `keras_nlp.tokenizers.SentencePieceTokenizer`. Unlike the\n underlying tokenizer, it will check for all special tokens needed by\n DeBERTa models and provides a `from_preset()` method to automatically\n download a matching vocabulary for a DeBERTa preset.\n\n This tokenizer does not provide truncation or padding of inputs. 
It can be\n combined with a `keras_nlp.models.DebertaV3Preprocessor` layer for input\n packing.\n\n If input is a batch of strings (rank > 0), the layer will output a\n `tf.RaggedTensor` where the last dimension of the output is ragged.\n\n If input is a scalar string (rank == 0), the layer will output a dense\n `tf.Tensor` with static shape `[None]`.\n\n Note: The mask token (`\"[MASK]\"`) is handled differently in this tokenizer.\n If the token is not present in the provided SentencePiece vocabulary, the\n token will be appended to the vocabulary. For example, if the vocabulary\n size is 100, the mask token will be assigned the ID 100.\n\n Args:\n proto: Either a `string` path to a SentencePiece proto file, or a\n `bytes` object with a serialized SentencePiece proto. See the\n [SentencePiece repository](https://github.com/google/sentencepiece)\n for more details on the format.\n\n Examples:\n\n ```python\n tokenizer = keras_nlp.models.DebertaV3Tokenizer(proto=\"model.spm\")\n\n # Batched inputs.\n tokenizer([\"the quick brown fox\", \"the earth is round\"])\n\n # Unbatched inputs.\n tokenizer(\"the quick brown fox\")\n\n # Detokenization.\n tokenizer.detokenize(tf.constant([[1, 4, 9, 5, 7, 2]]))\n ```\n \"\"\"\n\n def __init__(self, proto, **kwargs):\n super().__init__(proto=proto, **kwargs)\n\n # Check for necessary special tokens.\n cls_token = \"[CLS]\"\n sep_token = \"[SEP]\"\n pad_token = \"[PAD]\"\n mask_token = \"[MASK]\"\n\n # We do not throw an error if `mask_token` is not present in the\n # vocabulary.\n for token in [cls_token, pad_token, sep_token]:\n if token not in super().get_vocabulary():\n raise ValueError(\n f\"Cannot find token `'{token}'` in the provided \"\n f\"`vocabulary`. Please provide `'{token}'` in your \"\n \"`vocabulary` or use a pretrained `vocabulary` name.\"\n )\n\n self.cls_token_id = self.token_to_id(cls_token)\n self.sep_token_id = self.token_to_id(sep_token)\n self.pad_token_id = self.token_to_id(pad_token)\n # If the mask token is not in the vocabulary, add it to the end of the\n # vocabulary.\n if mask_token in super().get_vocabulary():\n self.mask_token_id = super().token_to_id(mask_token)\n else:\n self.mask_token_id = super().vocabulary_size()\n\n def vocabulary_size(self):\n sentence_piece_size = super().vocabulary_size()\n if sentence_piece_size == self.mask_token_id:\n return sentence_piece_size + 1\n return sentence_piece_size\n\n def get_vocabulary(self):\n sentence_piece_vocabulary = super().get_vocabulary()\n if self.mask_token_id < super().vocabulary_size():\n return sentence_piece_vocabulary\n return sentence_piece_vocabulary + [\"[MASK]\"]\n\n def id_to_token(self, id):\n if id == self.mask_token_id:\n return \"[MASK]\"\n return super().id_to_token(id)\n\n def token_to_id(self, token):\n if token == \"[MASK]\":\n return self.mask_token_id\n return super().token_to_id(token)\n\n @classproperty\n def presets(cls):\n return copy.deepcopy(backbone_presets)\n", "path": "keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py"}], "after_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See 
the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"DeBERTa tokenizer.\"\"\"\n\nimport copy\n\nimport tensorflow as tf\n\nfrom keras_nlp.api_export import keras_nlp_export\nfrom keras_nlp.models.deberta_v3.deberta_v3_presets import backbone_presets\nfrom keras_nlp.tokenizers.sentence_piece_tokenizer import SentencePieceTokenizer\nfrom keras_nlp.utils.python_utils import classproperty\n\n\n@keras_nlp_export(\"keras_nlp.models.DebertaV3Tokenizer\")\nclass DebertaV3Tokenizer(SentencePieceTokenizer):\n \"\"\"DeBERTa tokenizer layer based on SentencePiece.\n\n This tokenizer class will tokenize raw strings into integer sequences and\n is based on `keras_nlp.tokenizers.SentencePieceTokenizer`. Unlike the\n underlying tokenizer, it will check for all special tokens needed by\n DeBERTa models and provides a `from_preset()` method to automatically\n download a matching vocabulary for a DeBERTa preset.\n\n This tokenizer does not provide truncation or padding of inputs. It can be\n combined with a `keras_nlp.models.DebertaV3Preprocessor` layer for input\n packing.\n\n If input is a batch of strings (rank > 0), the layer will output a\n `tf.RaggedTensor` where the last dimension of the output is ragged.\n\n If input is a scalar string (rank == 0), the layer will output a dense\n `tf.Tensor` with static shape `[None]`.\n\n Note: The mask token (`\"[MASK]\"`) is handled differently in this tokenizer.\n If the token is not present in the provided SentencePiece vocabulary, the\n token will be appended to the vocabulary. For example, if the vocabulary\n size is 100, the mask token will be assigned the ID 100.\n\n Args:\n proto: Either a `string` path to a SentencePiece proto file, or a\n `bytes` object with a serialized SentencePiece proto. See the\n [SentencePiece repository](https://github.com/google/sentencepiece)\n for more details on the format.\n\n Examples:\n\n ```python\n tokenizer = keras_nlp.models.DebertaV3Tokenizer(proto=\"model.spm\")\n\n # Batched inputs.\n tokenizer([\"the quick brown fox\", \"the earth is round\"])\n\n # Unbatched inputs.\n tokenizer(\"the quick brown fox\")\n\n # Detokenization.\n tokenizer.detokenize(tf.constant([[1, 4, 9, 5, 7, 2]]))\n ```\n \"\"\"\n\n def __init__(self, proto, **kwargs):\n super().__init__(proto=proto, **kwargs)\n\n # Check for necessary special tokens.\n cls_token = \"[CLS]\"\n sep_token = \"[SEP]\"\n pad_token = \"[PAD]\"\n mask_token = \"[MASK]\"\n\n # We do not throw an error if `mask_token` is not present in the\n # vocabulary.\n for token in [cls_token, pad_token, sep_token]:\n if token not in super().get_vocabulary():\n raise ValueError(\n f\"Cannot find token `'{token}'` in the provided \"\n f\"`vocabulary`. 
Please provide `'{token}'` in your \"\n \"`vocabulary` or use a pretrained `vocabulary` name.\"\n )\n\n self.cls_token_id = self.token_to_id(cls_token)\n self.sep_token_id = self.token_to_id(sep_token)\n self.pad_token_id = self.token_to_id(pad_token)\n # If the mask token is not in the vocabulary, add it to the end of the\n # vocabulary.\n if mask_token in super().get_vocabulary():\n self.mask_token_id = super().token_to_id(mask_token)\n else:\n self.mask_token_id = super().vocabulary_size()\n\n def vocabulary_size(self):\n sentence_piece_size = super().vocabulary_size()\n if sentence_piece_size == self.mask_token_id:\n return sentence_piece_size + 1\n return sentence_piece_size\n\n def get_vocabulary(self):\n sentence_piece_vocabulary = super().get_vocabulary()\n if self.mask_token_id < super().vocabulary_size():\n return sentence_piece_vocabulary\n return sentence_piece_vocabulary + [\"[MASK]\"]\n\n def id_to_token(self, id):\n if id == self.mask_token_id:\n return \"[MASK]\"\n return super().id_to_token(id)\n\n def token_to_id(self, token):\n if token == \"[MASK]\":\n return self.mask_token_id\n return super().token_to_id(token)\n\n def detokenize(self, ids):\n ids = tf.ragged.boolean_mask(ids, tf.not_equal(ids, self.mask_token_id))\n return super().detokenize(ids)\n\n @classproperty\n def presets(cls):\n return copy.deepcopy(backbone_presets)\n", "path": "keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py"}]}
| 1,853 | 254 |
gh_patches_debug_39546
|
rasdani/github-patches
|
git_diff
|
canonical__snapcraft-4353
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
remote-build: add control logic for envvar `SNAPCRAFT_REMOTE_BUILD_STRATEGY`
### What needs to get done
This adds control logic to determine whether to execute the new or legacy remote-build code.
There are four possibilities with `SNAPCRAFT_REMOTE_BUILD_STRATEGY`:
- `disable-fallback` - use new remote-build code
- `force-fallback` - use legacy remote-build code
- unset - continue on to next control logic step
- unknown - raise an error

### Why it needs to get done
remote-build needs to be migrated because it does not leverage the new craft libraries, has issues with building core22 snaps, and has issues related to how the local project is bundled.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `snapcraft/commands/remote.py`
Content:
```
1 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
2 #
3 # Copyright 2022-2023 Canonical Ltd.
4 #
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License version 3 as
7 # published by the Free Software Foundation.
8 #
9 # This program is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with this program. If not, see <http://www.gnu.org/licenses/>.
16
17 """Snapcraft remote build command."""
18
19 import argparse
20 import os
21 import textwrap
22
23 from craft_cli import BaseCommand, emit
24 from craft_cli.helptexts import HIDDEN
25 from overrides import overrides
26
27 from snapcraft.errors import MaintenanceBase, SnapcraftError
28 from snapcraft.legacy_cli import run_legacy
29 from snapcraft.parts import yaml_utils
30 from snapcraft.utils import confirm_with_user
31 from snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError
32
33 _CONFIRMATION_PROMPT = (
34 "All data sent to remote builders will be publicly available. "
35 "Are you sure you want to continue?"
36 )
37
38
39 class RemoteBuildCommand(BaseCommand):
40 """Command passthrough for the remote-build command."""
41
42 name = "remote-build"
43 help_msg = "Dispatch a snap for remote build"
44 overview = textwrap.dedent(
45 """
46 Command remote-build sends the current project to be built
47 remotely. After the build is complete, packages for each
48 architecture are retrieved and will be available in the
49 local filesystem.
50
51 If not specified in the snapcraft.yaml file, the list of
52 architectures to build can be set using the --build-on option.
53 If both are specified, an error will occur.
54
55 Interrupted remote builds can be resumed using the --recover
56 option, followed by the build number informed when the remote
57 build was originally dispatched. The current state of the
58 remote build for each architecture can be checked using the
59 --status option."""
60 )
61
62 @overrides
63 def fill_parser(self, parser: argparse.ArgumentParser) -> None:
64 parser.add_argument(
65 "--recover", action="store_true", help="recover an interrupted build"
66 )
67 parser.add_argument(
68 "--status", action="store_true", help="display remote build status"
69 )
70 parser_target = parser.add_mutually_exclusive_group()
71 parser_target.add_argument(
72 "--build-on",
73 metavar="arch",
74 nargs="+",
75 help=HIDDEN,
76 )
77 parser_target.add_argument(
78 "--build-for",
79 metavar="arch",
80 nargs="+",
81 help="architecture to build for",
82 )
83 parser.add_argument(
84 "--build-id", metavar="build-id", help="specific build id to retrieve"
85 )
86 parser.add_argument(
87 "--launchpad-accept-public-upload",
88 action="store_true",
89 help="acknowledge that uploaded code will be publicly available.",
90 )
91
92 def _get_effective_base(self) -> str:
93 """Get a valid effective base from the project's snapcraft.yaml.
94
95 :returns: The project's effective base.
96
97 :raises SnapcraftError: If the base is unknown or missing or if the
98 snapcraft.yaml cannot be loaded.
99 :raises MaintenanceBase: If the base is not supported
100 """
101 snapcraft_yaml = yaml_utils.get_snap_project().project_file
102
103 with open(snapcraft_yaml, encoding="utf-8") as file:
104 base = yaml_utils.get_base(file)
105
106 if base is None:
107 raise SnapcraftError(
108 f"Could not determine base from {str(snapcraft_yaml)!r}."
109 )
110
111 emit.debug(f"Got base {base!r} from {str(snapcraft_yaml)!r}.")
112
113 if base in yaml_utils.ESM_BASES:
114 raise MaintenanceBase(base)
115
116 if base not in yaml_utils.BASES:
117 raise SnapcraftError(f"Unknown base {base!r} in {str(snapcraft_yaml)!r}.")
118
119 return base
120
121 def _run_remote_build(self, base: str) -> None:
122 # bases newer than core22 must use the new remote-build
123 if base in yaml_utils.CURRENT_BASES - {"core22"}:
124 emit.debug(
125 "Using fallback remote-build because new remote-build is not available."
126 )
127 # TODO: use new remote-build code (#4323)
128 run_legacy()
129 return
130
131 emit.debug("Running fallback remote-build.")
132 run_legacy()
133
134 @overrides
135 def run(self, parsed_args) -> None:
136 if os.getenv("SUDO_USER") and os.geteuid() == 0:
137 emit.message(
138 "Running with 'sudo' may cause permission errors and is discouraged."
139 )
140
141 emit.message(
142 "snapcraft remote-build is experimental and is subject to change "
143 "- use with caution."
144 )
145
146 if parsed_args.build_on:
147 emit.message("Use --build-for instead of --build-on")
148 parsed_args.build_for = parsed_args.build_on
149
150 if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(
151 _CONFIRMATION_PROMPT
152 ):
153 raise AcceptPublicUploadError()
154
155 base = self._get_effective_base()
156 self._run_remote_build(base)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/snapcraft/commands/remote.py b/snapcraft/commands/remote.py
--- a/snapcraft/commands/remote.py
+++ b/snapcraft/commands/remote.py
@@ -19,6 +19,8 @@
import argparse
import os
import textwrap
+from enum import Enum
+from typing import Optional
from craft_cli import BaseCommand, emit
from craft_cli.helptexts import HIDDEN
@@ -27,7 +29,7 @@
from snapcraft.errors import MaintenanceBase, SnapcraftError
from snapcraft.legacy_cli import run_legacy
from snapcraft.parts import yaml_utils
-from snapcraft.utils import confirm_with_user
+from snapcraft.utils import confirm_with_user, humanize_list
from snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError
_CONFIRMATION_PROMPT = (
@@ -36,6 +38,16 @@
)
+_STRATEGY_ENVVAR = "SNAPCRAFT_REMOTE_BUILD_STRATEGY"
+
+
+class _Strategies(Enum):
+ """Possible values of the build strategy."""
+
+ DISABLE_FALLBACK = "disable-fallback"
+ FORCE_FALLBACK = "force-fallback"
+
+
class RemoteBuildCommand(BaseCommand):
"""Command passthrough for the remote-build command."""
@@ -89,6 +101,29 @@
help="acknowledge that uploaded code will be publicly available.",
)
+ def _get_build_strategy(self) -> Optional[_Strategies]:
+ """Get the build strategy from the envvar `SNAPCRAFT_REMOTE_BUILD_STRATEGY`.
+
+ :returns: The strategy or None.
+
+ :raises SnapcraftError: If the variable is set to an invalid value.
+ """
+ strategy = os.getenv(_STRATEGY_ENVVAR)
+
+ if not strategy:
+ return None
+
+ try:
+ return _Strategies(strategy)
+ except ValueError as err:
+ valid_strategies = humanize_list(
+ (strategy.value for strategy in _Strategies), "and"
+ )
+ raise SnapcraftError(
+ f"Unknown value {strategy!r} in environment variable "
+ f"{_STRATEGY_ENVVAR!r}. Valid values are {valid_strategies}."
+ ) from err
+
def _get_effective_base(self) -> str:
"""Get a valid effective base from the project's snapcraft.yaml.
@@ -128,6 +163,25 @@
run_legacy()
return
+ strategy = self._get_build_strategy()
+
+ if strategy == _Strategies.DISABLE_FALLBACK:
+ emit.debug(
+ f"Environment variable {_STRATEGY_ENVVAR!r} is "
+ f"{_Strategies.DISABLE_FALLBACK.value!r} but running fallback "
+ "remote-build because new remote-build is not available."
+ )
+ run_legacy()
+ return
+
+ if strategy == _Strategies.FORCE_FALLBACK:
+ emit.debug(
+ "Running fallback remote-build because environment variable "
+ f"{_STRATEGY_ENVVAR!r} is {_Strategies.FORCE_FALLBACK.value!r}."
+ )
+ run_legacy()
+ return
+
emit.debug("Running fallback remote-build.")
run_legacy()
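
For orientation, the environment-variable handling this diff adds boils down to a small decision table. The sketch below is a simplified editorial reconstruction, not the actual snapcraft code; the variable name and the two accepted values come from the diff above, everything else is illustrative.

```python
# Simplified sketch of the added control logic (illustrative only).
import os
from typing import Optional

_STRATEGY_ENVVAR = "SNAPCRAFT_REMOTE_BUILD_STRATEGY"
_VALID = ("disable-fallback", "force-fallback")

def resolve_strategy() -> Optional[str]:
    value = os.getenv(_STRATEGY_ENVVAR)
    if not value:
        return None  # unset: continue to the next control-logic step
    if value not in _VALID:
        raise ValueError(f"Unknown value {value!r} in {_STRATEGY_ENVVAR!r}")
    # "disable-fallback" selects the new remote-build path,
    # "force-fallback" forces the legacy path.
    return value
```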
|
{"golden_diff": "diff --git a/snapcraft/commands/remote.py b/snapcraft/commands/remote.py\n--- a/snapcraft/commands/remote.py\n+++ b/snapcraft/commands/remote.py\n@@ -19,6 +19,8 @@\n import argparse\n import os\n import textwrap\n+from enum import Enum\n+from typing import Optional\n \n from craft_cli import BaseCommand, emit\n from craft_cli.helptexts import HIDDEN\n@@ -27,7 +29,7 @@\n from snapcraft.errors import MaintenanceBase, SnapcraftError\n from snapcraft.legacy_cli import run_legacy\n from snapcraft.parts import yaml_utils\n-from snapcraft.utils import confirm_with_user\n+from snapcraft.utils import confirm_with_user, humanize_list\n from snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError\n \n _CONFIRMATION_PROMPT = (\n@@ -36,6 +38,16 @@\n )\n \n \n+_STRATEGY_ENVVAR = \"SNAPCRAFT_REMOTE_BUILD_STRATEGY\"\n+\n+\n+class _Strategies(Enum):\n+ \"\"\"Possible values of the build strategy.\"\"\"\n+\n+ DISABLE_FALLBACK = \"disable-fallback\"\n+ FORCE_FALLBACK = \"force-fallback\"\n+\n+\n class RemoteBuildCommand(BaseCommand):\n \"\"\"Command passthrough for the remote-build command.\"\"\"\n \n@@ -89,6 +101,29 @@\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n \n+ def _get_build_strategy(self) -> Optional[_Strategies]:\n+ \"\"\"Get the build strategy from the envvar `SNAPCRAFT_REMOTE_BUILD_STRATEGY`.\n+\n+ :returns: The strategy or None.\n+\n+ :raises SnapcraftError: If the variable is set to an invalid value.\n+ \"\"\"\n+ strategy = os.getenv(_STRATEGY_ENVVAR)\n+\n+ if not strategy:\n+ return None\n+\n+ try:\n+ return _Strategies(strategy)\n+ except ValueError as err:\n+ valid_strategies = humanize_list(\n+ (strategy.value for strategy in _Strategies), \"and\"\n+ )\n+ raise SnapcraftError(\n+ f\"Unknown value {strategy!r} in environment variable \"\n+ f\"{_STRATEGY_ENVVAR!r}. 
Valid values are {valid_strategies}.\"\n+ ) from err\n+\n def _get_effective_base(self) -> str:\n \"\"\"Get a valid effective base from the project's snapcraft.yaml.\n \n@@ -128,6 +163,25 @@\n run_legacy()\n return\n \n+ strategy = self._get_build_strategy()\n+\n+ if strategy == _Strategies.DISABLE_FALLBACK:\n+ emit.debug(\n+ f\"Environment variable {_STRATEGY_ENVVAR!r} is \"\n+ f\"{_Strategies.DISABLE_FALLBACK.value!r} but running fallback \"\n+ \"remote-build because new remote-build is not available.\"\n+ )\n+ run_legacy()\n+ return\n+\n+ if strategy == _Strategies.FORCE_FALLBACK:\n+ emit.debug(\n+ \"Running fallback remote-build because environment variable \"\n+ f\"{_STRATEGY_ENVVAR!r} is {_Strategies.FORCE_FALLBACK.value!r}.\"\n+ )\n+ run_legacy()\n+ return\n+\n emit.debug(\"Running fallback remote-build.\")\n run_legacy()\n", "issue": "remote-build: add control logic for envvar `SNAPCRAFT_REMOTE_BUILD_STRATEGY`\n### What needs to get done\n\nThis adds control logic to determine whether to execute the new or legacy remote-build code.\r\n\r\nThere are four possibilities with `SNAPCRAFT_REMOTE_BUILD_STRATEGY`:\r\n\r\n- `disable-fallback` - use new remote-build code\r\n- `force-fallback` - use legacy remote-build code\r\n- unset - continue on to next control logic step\r\n- unknown - raise an error\r\n\r\n\r\n\n\n### Why it needs to get done\n\nremote-build needs to be migrated because it does not leverage the new craft libraries, has issues with building core22 snaps, and has issues related to how the local project is bundled.\n", "before_files": [{"content": "# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-\n#\n# Copyright 2022-2023 Canonical Ltd.\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Snapcraft remote build command.\"\"\"\n\nimport argparse\nimport os\nimport textwrap\n\nfrom craft_cli import BaseCommand, emit\nfrom craft_cli.helptexts import HIDDEN\nfrom overrides import overrides\n\nfrom snapcraft.errors import MaintenanceBase, SnapcraftError\nfrom snapcraft.legacy_cli import run_legacy\nfrom snapcraft.parts import yaml_utils\nfrom snapcraft.utils import confirm_with_user\nfrom snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError\n\n_CONFIRMATION_PROMPT = (\n \"All data sent to remote builders will be publicly available. \"\n \"Are you sure you want to continue?\"\n)\n\n\nclass RemoteBuildCommand(BaseCommand):\n \"\"\"Command passthrough for the remote-build command.\"\"\"\n\n name = \"remote-build\"\n help_msg = \"Dispatch a snap for remote build\"\n overview = textwrap.dedent(\n \"\"\"\n Command remote-build sends the current project to be built\n remotely. 
After the build is complete, packages for each\n architecture are retrieved and will be available in the\n local filesystem.\n\n If not specified in the snapcraft.yaml file, the list of\n architectures to build can be set using the --build-on option.\n If both are specified, an error will occur.\n\n Interrupted remote builds can be resumed using the --recover\n option, followed by the build number informed when the remote\n build was originally dispatched. The current state of the\n remote build for each architecture can be checked using the\n --status option.\"\"\"\n )\n\n @overrides\n def fill_parser(self, parser: argparse.ArgumentParser) -> None:\n parser.add_argument(\n \"--recover\", action=\"store_true\", help=\"recover an interrupted build\"\n )\n parser.add_argument(\n \"--status\", action=\"store_true\", help=\"display remote build status\"\n )\n parser_target = parser.add_mutually_exclusive_group()\n parser_target.add_argument(\n \"--build-on\",\n metavar=\"arch\",\n nargs=\"+\",\n help=HIDDEN,\n )\n parser_target.add_argument(\n \"--build-for\",\n metavar=\"arch\",\n nargs=\"+\",\n help=\"architecture to build for\",\n )\n parser.add_argument(\n \"--build-id\", metavar=\"build-id\", help=\"specific build id to retrieve\"\n )\n parser.add_argument(\n \"--launchpad-accept-public-upload\",\n action=\"store_true\",\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n\n def _get_effective_base(self) -> str:\n \"\"\"Get a valid effective base from the project's snapcraft.yaml.\n\n :returns: The project's effective base.\n\n :raises SnapcraftError: If the base is unknown or missing or if the\n snapcraft.yaml cannot be loaded.\n :raises MaintenanceBase: If the base is not supported\n \"\"\"\n snapcraft_yaml = yaml_utils.get_snap_project().project_file\n\n with open(snapcraft_yaml, encoding=\"utf-8\") as file:\n base = yaml_utils.get_base(file)\n\n if base is None:\n raise SnapcraftError(\n f\"Could not determine base from {str(snapcraft_yaml)!r}.\"\n )\n\n emit.debug(f\"Got base {base!r} from {str(snapcraft_yaml)!r}.\")\n\n if base in yaml_utils.ESM_BASES:\n raise MaintenanceBase(base)\n\n if base not in yaml_utils.BASES:\n raise SnapcraftError(f\"Unknown base {base!r} in {str(snapcraft_yaml)!r}.\")\n\n return base\n\n def _run_remote_build(self, base: str) -> None:\n # bases newer than core22 must use the new remote-build\n if base in yaml_utils.CURRENT_BASES - {\"core22\"}:\n emit.debug(\n \"Using fallback remote-build because new remote-build is not available.\"\n )\n # TODO: use new remote-build code (#4323)\n run_legacy()\n return\n\n emit.debug(\"Running fallback remote-build.\")\n run_legacy()\n\n @overrides\n def run(self, parsed_args) -> None:\n if os.getenv(\"SUDO_USER\") and os.geteuid() == 0:\n emit.message(\n \"Running with 'sudo' may cause permission errors and is discouraged.\"\n )\n\n emit.message(\n \"snapcraft remote-build is experimental and is subject to change \"\n \"- use with caution.\"\n )\n\n if parsed_args.build_on:\n emit.message(\"Use --build-for instead of --build-on\")\n parsed_args.build_for = parsed_args.build_on\n\n if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(\n _CONFIRMATION_PROMPT\n ):\n raise AcceptPublicUploadError()\n\n base = self._get_effective_base()\n self._run_remote_build(base)\n", "path": "snapcraft/commands/remote.py"}], "after_files": [{"content": "# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-\n#\n# Copyright 2022-2023 Canonical Ltd.\n#\n# This program is free software: you 
can redistribute it and/or modify\n# it under the terms of the GNU General Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Snapcraft remote build command.\"\"\"\n\nimport argparse\nimport os\nimport textwrap\nfrom enum import Enum\nfrom typing import Optional\n\nfrom craft_cli import BaseCommand, emit\nfrom craft_cli.helptexts import HIDDEN\nfrom overrides import overrides\n\nfrom snapcraft.errors import MaintenanceBase, SnapcraftError\nfrom snapcraft.legacy_cli import run_legacy\nfrom snapcraft.parts import yaml_utils\nfrom snapcraft.utils import confirm_with_user, humanize_list\nfrom snapcraft_legacy.internal.remote_build.errors import AcceptPublicUploadError\n\n_CONFIRMATION_PROMPT = (\n \"All data sent to remote builders will be publicly available. \"\n \"Are you sure you want to continue?\"\n)\n\n\n_STRATEGY_ENVVAR = \"SNAPCRAFT_REMOTE_BUILD_STRATEGY\"\n\n\nclass _Strategies(Enum):\n \"\"\"Possible values of the build strategy.\"\"\"\n\n DISABLE_FALLBACK = \"disable-fallback\"\n FORCE_FALLBACK = \"force-fallback\"\n\n\nclass RemoteBuildCommand(BaseCommand):\n \"\"\"Command passthrough for the remote-build command.\"\"\"\n\n name = \"remote-build\"\n help_msg = \"Dispatch a snap for remote build\"\n overview = textwrap.dedent(\n \"\"\"\n Command remote-build sends the current project to be built\n remotely. After the build is complete, packages for each\n architecture are retrieved and will be available in the\n local filesystem.\n\n If not specified in the snapcraft.yaml file, the list of\n architectures to build can be set using the --build-on option.\n If both are specified, an error will occur.\n\n Interrupted remote builds can be resumed using the --recover\n option, followed by the build number informed when the remote\n build was originally dispatched. 
The current state of the\n remote build for each architecture can be checked using the\n --status option.\"\"\"\n )\n\n @overrides\n def fill_parser(self, parser: argparse.ArgumentParser) -> None:\n parser.add_argument(\n \"--recover\", action=\"store_true\", help=\"recover an interrupted build\"\n )\n parser.add_argument(\n \"--status\", action=\"store_true\", help=\"display remote build status\"\n )\n parser_target = parser.add_mutually_exclusive_group()\n parser_target.add_argument(\n \"--build-on\",\n metavar=\"arch\",\n nargs=\"+\",\n help=HIDDEN,\n )\n parser_target.add_argument(\n \"--build-for\",\n metavar=\"arch\",\n nargs=\"+\",\n help=\"architecture to build for\",\n )\n parser.add_argument(\n \"--build-id\", metavar=\"build-id\", help=\"specific build id to retrieve\"\n )\n parser.add_argument(\n \"--launchpad-accept-public-upload\",\n action=\"store_true\",\n help=\"acknowledge that uploaded code will be publicly available.\",\n )\n\n def _get_build_strategy(self) -> Optional[_Strategies]:\n \"\"\"Get the build strategy from the envvar `SNAPCRAFT_REMOTE_BUILD_STRATEGY`.\n\n :returns: The strategy or None.\n\n :raises SnapcraftError: If the variable is set to an invalid value.\n \"\"\"\n strategy = os.getenv(_STRATEGY_ENVVAR)\n\n if not strategy:\n return None\n\n try:\n return _Strategies(strategy)\n except ValueError as err:\n valid_strategies = humanize_list(\n (strategy.value for strategy in _Strategies), \"and\"\n )\n raise SnapcraftError(\n f\"Unknown value {strategy!r} in environment variable \"\n f\"{_STRATEGY_ENVVAR!r}. Valid values are {valid_strategies}.\"\n ) from err\n\n def _get_effective_base(self) -> str:\n \"\"\"Get a valid effective base from the project's snapcraft.yaml.\n\n :returns: The project's effective base.\n\n :raises SnapcraftError: If the base is unknown or missing or if the\n snapcraft.yaml cannot be loaded.\n :raises MaintenanceBase: If the base is not supported\n \"\"\"\n snapcraft_yaml = yaml_utils.get_snap_project().project_file\n\n with open(snapcraft_yaml, encoding=\"utf-8\") as file:\n base = yaml_utils.get_base(file)\n\n if base is None:\n raise SnapcraftError(\n f\"Could not determine base from {str(snapcraft_yaml)!r}.\"\n )\n\n emit.debug(f\"Got base {base!r} from {str(snapcraft_yaml)!r}.\")\n\n if base in yaml_utils.ESM_BASES:\n raise MaintenanceBase(base)\n\n if base not in yaml_utils.BASES:\n raise SnapcraftError(f\"Unknown base {base!r} in {str(snapcraft_yaml)!r}.\")\n\n return base\n\n def _run_remote_build(self, base: str) -> None:\n # bases newer than core22 must use the new remote-build\n if base in yaml_utils.CURRENT_BASES - {\"core22\"}:\n emit.debug(\n \"Using fallback remote-build because new remote-build is not available.\"\n )\n # TODO: use new remote-build code (#4323)\n run_legacy()\n return\n\n strategy = self._get_build_strategy()\n\n if strategy == _Strategies.DISABLE_FALLBACK:\n emit.debug(\n f\"Environment variable {_STRATEGY_ENVVAR!r} is \"\n f\"{_Strategies.DISABLE_FALLBACK.value!r} but running fallback \"\n \"remote-build because new remote-build is not available.\"\n )\n run_legacy()\n return\n\n if strategy == _Strategies.FORCE_FALLBACK:\n emit.debug(\n \"Running fallback remote-build because environment variable \"\n f\"{_STRATEGY_ENVVAR!r} is {_Strategies.FORCE_FALLBACK.value!r}.\"\n )\n run_legacy()\n return\n\n emit.debug(\"Running fallback remote-build.\")\n run_legacy()\n\n @overrides\n def run(self, parsed_args) -> None:\n if os.getenv(\"SUDO_USER\") and os.geteuid() == 0:\n emit.message(\n \"Running with 
'sudo' may cause permission errors and is discouraged.\"\n )\n\n emit.message(\n \"snapcraft remote-build is experimental and is subject to change \"\n \"- use with caution.\"\n )\n\n if parsed_args.build_on:\n emit.message(\"Use --build-for instead of --build-on\")\n parsed_args.build_for = parsed_args.build_on\n\n if not parsed_args.launchpad_accept_public_upload and not confirm_with_user(\n _CONFIRMATION_PROMPT\n ):\n raise AcceptPublicUploadError()\n\n base = self._get_effective_base()\n self._run_remote_build(base)\n", "path": "snapcraft/commands/remote.py"}]}
| 2,020 | 723 |
gh_patches_debug_22036
|
rasdani/github-patches
|
git_diff
|
piskvorky__gensim-2687
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unreasonable Query Result
<!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
 **The query result does not seem correct. The code below is self-explanatory. Thank you!**
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
```
from gensim.summarization.bm25 import BM25, get_bm25_weights
text1 = "A constellation is a group of stars that are considered to form imaginary outlines or meaningful patterns on the celestial sphere."
text2 = "The 88 modern constellations are formally defined regions of the sky together covering the entire celestial sphere."
text = [text1, text2]
corpus = [text1.split(" "), text2.split(" ")]
print(f'corpus: {corpus}')
query = text2.split(" ")
bm25 = BM25(corpus)
scores = bm25.get_scores(query)
scores = [(s, i) for i, s in enumerate(scores)]
scores.sort(key=lambda t: t[0], reverse=True)
print(f'scores: {scores}')
for s, idx in scores:
print(f'{s}\t{idx}: {text[idx]}')
```
Output:
```
-0.3601521710456333 0: A constellation is a group of stars that are considered to form imaginary outlines or meaningful patterns on the celestial sphere.
-0.44989406787023367 1: The 88 modern constellations are formally defined regions of the sky together covering the entire celestial sphere.
```
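
A short illustration of why the scores come out negative for this corpus (editorial context, not part of the original report): with only two documents, every term that occurs in both of them gets a negative inverse document frequency, and the epsilon fallback inherits that sign.

```python
# Tiny demonstration, assuming nothing beyond the idf formula used in bm25.py.
import math

corpus_size = 2
for df in (1, 2):  # df = number of documents containing the term
    idf = math.log(corpus_size - df + 0.5) - math.log(df + 0.5)
    print(f"df={df}: idf={idf:.3f}")
# df=1 -> 0.000, df=2 -> -1.609; terms shared by both sentences
# (e.g. "celestial") drag the average idf, and hence EPSILON * average_idf,
# below zero, so every matching query term contributes a negative score.
```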
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
Output:
```
macOS-10.14.6-x86_64-i386-64bit
Python 3.8.0 (default, Nov 6 2019, 15:49:01)
[Clang 4.0.1 (tags/RELEASE_401/final)]
NumPy 1.17.4
SciPy 1.3.3
gensim 3.8.1
FAST_VERSION 0
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gensim/summarization/bm25.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
5
6 """This module contains function of computing rank scores for documents in
7 corpus and helper class `BM25` used in calculations. Original algorithm
8 descibed in [1]_, also you may check Wikipedia page [2]_.
9
10
11 .. [1] Robertson, Stephen; Zaragoza, Hugo (2009). The Probabilistic Relevance Framework: BM25 and Beyond,
12 http://www.staff.city.ac.uk/~sb317/papers/foundations_bm25_review.pdf
13 .. [2] Okapi BM25 on Wikipedia, https://en.wikipedia.org/wiki/Okapi_BM25
14
15
16
17 Examples
18 --------
19
20 .. sourcecode:: pycon
21
22 >>> from gensim.summarization.bm25 import get_bm25_weights
23 >>> corpus = [
24 ... ["black", "cat", "white", "cat"],
25 ... ["cat", "outer", "space"],
26 ... ["wag", "dog"]
27 ... ]
28 >>> result = get_bm25_weights(corpus, n_jobs=-1)
29
30
31 Data:
32 -----
33 .. data:: PARAM_K1 - Free smoothing parameter for BM25.
34 .. data:: PARAM_B - Free smoothing parameter for BM25.
35 .. data:: EPSILON - Constant used for negative idf of document in corpus.
36
37 """
38
39
40 import math
41 from six import iteritems
42 from six.moves import range
43 from functools import partial
44 from multiprocessing import Pool
45 from ..utils import effective_n_jobs
46
47 PARAM_K1 = 1.5
48 PARAM_B = 0.75
49 EPSILON = 0.25
50
51
52 class BM25(object):
53 """Implementation of Best Matching 25 ranking function.
54
55 Attributes
56 ----------
57 corpus_size : int
58 Size of corpus (number of documents).
59 avgdl : float
60 Average length of document in `corpus`.
61 doc_freqs : list of dicts of int
62 Dictionary with terms frequencies for each document in `corpus`. Words used as keys and frequencies as values.
63 idf : dict
64 Dictionary with inversed documents frequencies for whole `corpus`. Words used as keys and frequencies as values.
65 doc_len : list of int
66 List of document lengths.
67 """
68
69 def __init__(self, corpus):
70 """
71 Parameters
72 ----------
73 corpus : list of list of str
74 Given corpus.
75
76 """
77 self.corpus_size = 0
78 self.avgdl = 0
79 self.doc_freqs = []
80 self.idf = {}
81 self.doc_len = []
82 self._initialize(corpus)
83
84 def _initialize(self, corpus):
85 """Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies."""
86 nd = {} # word -> number of documents with word
87 num_doc = 0
88 for document in corpus:
89 self.corpus_size += 1
90 self.doc_len.append(len(document))
91 num_doc += len(document)
92
93 frequencies = {}
94 for word in document:
95 if word not in frequencies:
96 frequencies[word] = 0
97 frequencies[word] += 1
98 self.doc_freqs.append(frequencies)
99
100 for word, freq in iteritems(frequencies):
101 if word not in nd:
102 nd[word] = 0
103 nd[word] += 1
104
105 self.avgdl = float(num_doc) / self.corpus_size
106 # collect idf sum to calculate an average idf for epsilon value
107 idf_sum = 0
108 # collect words with negative idf to set them a special epsilon value.
109 # idf can be negative if word is contained in more than half of documents
110 negative_idfs = []
111 for word, freq in iteritems(nd):
112 idf = math.log(self.corpus_size - freq + 0.5) - math.log(freq + 0.5)
113 self.idf[word] = idf
114 idf_sum += idf
115 if idf < 0:
116 negative_idfs.append(word)
117 self.average_idf = float(idf_sum) / len(self.idf)
118
119 eps = EPSILON * self.average_idf
120 for word in negative_idfs:
121 self.idf[word] = eps
122
123 def get_score(self, document, index):
124 """Computes BM25 score of given `document` in relation to item of corpus selected by `index`.
125
126 Parameters
127 ----------
128 document : list of str
129 Document to be scored.
130 index : int
131 Index of document in corpus selected to score with `document`.
132
133 Returns
134 -------
135 float
136 BM25 score.
137
138 """
139 score = 0
140 doc_freqs = self.doc_freqs[index]
141 for word in document:
142 if word not in doc_freqs:
143 continue
144 score += (self.idf[word] * doc_freqs[word] * (PARAM_K1 + 1)
145 / (doc_freqs[word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * self.doc_len[index] / self.avgdl)))
146 return score
147
148 def get_scores(self, document):
149 """Computes and returns BM25 scores of given `document` in relation to
150 every item in corpus.
151
152 Parameters
153 ----------
154 document : list of str
155 Document to be scored.
156
157 Returns
158 -------
159 list of float
160 BM25 scores.
161
162 """
163 scores = [self.get_score(document, index) for index in range(self.corpus_size)]
164 return scores
165
166 def get_scores_bow(self, document):
167 """Computes and returns BM25 scores of given `document` in relation to
168 every item in corpus.
169
170 Parameters
171 ----------
172 document : list of str
173 Document to be scored.
174
175 Returns
176 -------
177 list of float
178 BM25 scores.
179
180 """
181 scores = []
182 for index in range(self.corpus_size):
183 score = self.get_score(document, index)
184 if score > 0:
185 scores.append((index, score))
186 return scores
187
188
189 def _get_scores_bow(bm25, document):
190 """Helper function for retrieving bm25 scores of given `document` in parallel
191 in relation to every item in corpus.
192
193 Parameters
194 ----------
195 bm25 : BM25 object
196 BM25 object fitted on the corpus where documents are retrieved.
197 document : list of str
198 Document to be scored.
199
200 Returns
201 -------
202 list of (index, float)
203 BM25 scores in a bag of weights format.
204
205 """
206 return bm25.get_scores_bow(document)
207
208
209 def _get_scores(bm25, document):
210 """Helper function for retrieving bm25 scores of given `document` in parallel
211 in relation to every item in corpus.
212
213 Parameters
214 ----------
215 bm25 : BM25 object
216 BM25 object fitted on the corpus where documents are retrieved.
217 document : list of str
218 Document to be scored.
219
220 Returns
221 -------
222 list of float
223 BM25 scores.
224
225 """
226 return bm25.get_scores(document)
227
228
229 def iter_bm25_bow(corpus, n_jobs=1):
230 """Yield BM25 scores (weights) of documents in corpus.
231 Each document has to be weighted with every document in given corpus.
232
233 Parameters
234 ----------
235 corpus : list of list of str
236 Corpus of documents.
237 n_jobs : int
238 The number of processes to use for computing bm25.
239
240 Yields
241 -------
242 list of (index, float)
243 BM25 scores in bag of weights format.
244
245 Examples
246 --------
247 .. sourcecode:: pycon
248
249 >>> from gensim.summarization.bm25 import iter_bm25_weights
250 >>> corpus = [
251 ... ["black", "cat", "white", "cat"],
252 ... ["cat", "outer", "space"],
253 ... ["wag", "dog"]
254 ... ]
255 >>> result = iter_bm25_weights(corpus, n_jobs=-1)
256
257 """
258 bm25 = BM25(corpus)
259
260 n_processes = effective_n_jobs(n_jobs)
261 if n_processes == 1:
262 for doc in corpus:
263 yield bm25.get_scores_bow(doc)
264 return
265
266 get_score = partial(_get_scores_bow, bm25)
267 pool = Pool(n_processes)
268
269 for bow in pool.imap(get_score, corpus):
270 yield bow
271 pool.close()
272 pool.join()
273
274
275 def get_bm25_weights(corpus, n_jobs=1):
276 """Returns BM25 scores (weights) of documents in corpus.
277 Each document has to be weighted with every document in given corpus.
278
279 Parameters
280 ----------
281 corpus : list of list of str
282 Corpus of documents.
283 n_jobs : int
284 The number of processes to use for computing bm25.
285
286 Returns
287 -------
288 list of list of float
289 BM25 scores.
290
291 Examples
292 --------
293 .. sourcecode:: pycon
294
295 >>> from gensim.summarization.bm25 import get_bm25_weights
296 >>> corpus = [
297 ... ["black", "cat", "white", "cat"],
298 ... ["cat", "outer", "space"],
299 ... ["wag", "dog"]
300 ... ]
301 >>> result = get_bm25_weights(corpus, n_jobs=-1)
302
303 """
304 bm25 = BM25(corpus)
305
306 n_processes = effective_n_jobs(n_jobs)
307 if n_processes == 1:
308 weights = [bm25.get_scores(doc) for doc in corpus]
309 return weights
310
311 get_score = partial(_get_scores, bm25)
312 pool = Pool(n_processes)
313 weights = pool.map(get_score, corpus)
314 pool.close()
315 pool.join()
316 return weights
317
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gensim/summarization/bm25.py b/gensim/summarization/bm25.py
--- a/gensim/summarization/bm25.py
+++ b/gensim/summarization/bm25.py
@@ -37,6 +37,7 @@
"""
+import logging
import math
from six import iteritems
from six.moves import range
@@ -48,6 +49,8 @@
PARAM_B = 0.75
EPSILON = 0.25
+logger = logging.getLogger(__name__)
+
class BM25(object):
"""Implementation of Best Matching 25 ranking function.
@@ -116,6 +119,13 @@
negative_idfs.append(word)
self.average_idf = float(idf_sum) / len(self.idf)
+ if self.average_idf < 0:
+ logger.warning(
+ 'Average inverse document frequency is less than zero. Your corpus of {} documents'
+ ' is either too small or it does not originate from natural text. BM25 may produce'
+ ' unintuitive results.'.format(self.corpus_size)
+ )
+
eps = EPSILON * self.average_idf
for word in negative_idfs:
self.idf[word] = eps
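
Note that the patch does not change the scoring itself; it only logs a warning when `average_idf` ends up negative. A caller-side check along the same lines might look like the sketch below (the import path comes from the issue, the `average_idf` attribute from the listing above; the rest is hypothetical).

```python
# Hypothetical caller-side guard mirroring the new warning.
from gensim.summarization.bm25 import BM25

corpus = [["black", "cat", "white", "cat"], ["cat", "outer", "space"]]
bm25 = BM25(corpus)
if bm25.average_idf < 0:
    print("corpus is likely too small for meaningful BM25 rankings")
```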
|
{"golden_diff": "diff --git a/gensim/summarization/bm25.py b/gensim/summarization/bm25.py\n--- a/gensim/summarization/bm25.py\n+++ b/gensim/summarization/bm25.py\n@@ -37,6 +37,7 @@\n \"\"\"\n \n \n+import logging\n import math\n from six import iteritems\n from six.moves import range\n@@ -48,6 +49,8 @@\n PARAM_B = 0.75\n EPSILON = 0.25\n \n+logger = logging.getLogger(__name__)\n+\n \n class BM25(object):\n \"\"\"Implementation of Best Matching 25 ranking function.\n@@ -116,6 +119,13 @@\n negative_idfs.append(word)\n self.average_idf = float(idf_sum) / len(self.idf)\n \n+ if self.average_idf < 0:\n+ logger.warning(\n+ 'Average inverse document frequency is less than zero. Your corpus of {} documents'\n+ ' is either too small or it does not originate from natural text. BM25 may produce'\n+ ' unintuitive results.'.format(self.corpus_size)\n+ )\n+\n eps = EPSILON * self.average_idf\n for word in negative_idfs:\n self.idf[word] = eps\n", "issue": "Unreasonable Query Result\n<!--\r\n**IMPORTANT**:\r\n\r\n- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.\r\n- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.\r\n\r\nGithub bug reports that do not include relevant information and context will be closed without an answer. Thanks!\r\n-->\r\n\r\n#### Problem description\r\n\r\n **The query result seems not correct. The code is self-explained. Thank you!**\r\n\r\n#### Steps/code/corpus to reproduce\r\n\r\nInclude full tracebacks, logs and datasets if necessary. Please keep the examples minimal (\"minimal reproducible example\").\r\n\r\n```\r\nfrom gensim.summarization.bm25 import BM25, get_bm25_weights\r\n\r\n\r\ntext1 = \"A constellation is a group of stars that are considered to form imaginary outlines or meaningful patterns on the celestial sphere.\"\r\ntext2 = \"The 88 modern constellations are formally defined regions of the sky together covering the entire celestial sphere.\"\r\ntext = [text1, text2]\r\n\r\ncorpus = [text1.split(\" \"), text2.split(\" \")]\r\nprint(f'corpus: {corpus}')\r\n\r\nquery = text2.split(\" \")\r\n\r\nbm25 = BM25(corpus)\r\nscores = bm25.get_scores(query)\r\nscores = [(s, i) for i, s in enumerate(scores)]\r\nscores.sort(key=lambda t: t[0], reverse=True)\r\nprint(f'scores: {scores}')\r\n\r\nfor s, idx in scores:\r\n print(f'{s}\\t{idx}: {text[idx]}')\r\n```\r\n\r\nOutput:\r\n\r\n```\r\n-0.3601521710456333 0: A constellation is a group of stars that are considered to form imaginary outlines or meaningful patterns on the celestial sphere.\r\n-0.44989406787023367 1: The 88 modern constellations are formally defined regions of the sky together covering the entire celestial sphere.\r\n```\r\n\r\n#### Versions\r\n\r\nPlease provide the output of:\r\n\r\n```python\r\nimport platform; print(platform.platform())\r\nimport sys; print(\"Python\", sys.version)\r\nimport numpy; print(\"NumPy\", numpy.__version__)\r\nimport scipy; print(\"SciPy\", scipy.__version__)\r\nimport gensim; print(\"gensim\", gensim.__version__)\r\nfrom gensim.models import word2vec;print(\"FAST_VERSION\", word2vec.FAST_VERSION)\r\n```\r\n\r\nOutput:\r\n```\r\nmacOS-10.14.6-x86_64-i386-64bit\r\nPython 3.8.0 (default, Nov 6 2019, 15:49:01)\r\n[Clang 4.0.1 (tags/RELEASE_401/final)]\r\nNumPy 1.17.4\r\nSciPy 1.3.3\r\ngensim 3.8.1\r\nFAST_VERSION 0\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed 
under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"This module contains function of computing rank scores for documents in\ncorpus and helper class `BM25` used in calculations. Original algorithm\ndescibed in [1]_, also you may check Wikipedia page [2]_.\n\n\n.. [1] Robertson, Stephen; Zaragoza, Hugo (2009). The Probabilistic Relevance Framework: BM25 and Beyond,\n http://www.staff.city.ac.uk/~sb317/papers/foundations_bm25_review.pdf\n.. [2] Okapi BM25 on Wikipedia, https://en.wikipedia.org/wiki/Okapi_BM25\n\n\n\nExamples\n--------\n\n.. sourcecode:: pycon\n\n >>> from gensim.summarization.bm25 import get_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... ]\n >>> result = get_bm25_weights(corpus, n_jobs=-1)\n\n\nData:\n-----\n.. data:: PARAM_K1 - Free smoothing parameter for BM25.\n.. data:: PARAM_B - Free smoothing parameter for BM25.\n.. data:: EPSILON - Constant used for negative idf of document in corpus.\n\n\"\"\"\n\n\nimport math\nfrom six import iteritems\nfrom six.moves import range\nfrom functools import partial\nfrom multiprocessing import Pool\nfrom ..utils import effective_n_jobs\n\nPARAM_K1 = 1.5\nPARAM_B = 0.75\nEPSILON = 0.25\n\n\nclass BM25(object):\n \"\"\"Implementation of Best Matching 25 ranking function.\n\n Attributes\n ----------\n corpus_size : int\n Size of corpus (number of documents).\n avgdl : float\n Average length of document in `corpus`.\n doc_freqs : list of dicts of int\n Dictionary with terms frequencies for each document in `corpus`. Words used as keys and frequencies as values.\n idf : dict\n Dictionary with inversed documents frequencies for whole `corpus`. Words used as keys and frequencies as values.\n doc_len : list of int\n List of document lengths.\n \"\"\"\n\n def __init__(self, corpus):\n \"\"\"\n Parameters\n ----------\n corpus : list of list of str\n Given corpus.\n\n \"\"\"\n self.corpus_size = 0\n self.avgdl = 0\n self.doc_freqs = []\n self.idf = {}\n self.doc_len = []\n self._initialize(corpus)\n\n def _initialize(self, corpus):\n \"\"\"Calculates frequencies of terms in documents and in corpus. 
Also computes inverse document frequencies.\"\"\"\n nd = {} # word -> number of documents with word\n num_doc = 0\n for document in corpus:\n self.corpus_size += 1\n self.doc_len.append(len(document))\n num_doc += len(document)\n\n frequencies = {}\n for word in document:\n if word not in frequencies:\n frequencies[word] = 0\n frequencies[word] += 1\n self.doc_freqs.append(frequencies)\n\n for word, freq in iteritems(frequencies):\n if word not in nd:\n nd[word] = 0\n nd[word] += 1\n\n self.avgdl = float(num_doc) / self.corpus_size\n # collect idf sum to calculate an average idf for epsilon value\n idf_sum = 0\n # collect words with negative idf to set them a special epsilon value.\n # idf can be negative if word is contained in more than half of documents\n negative_idfs = []\n for word, freq in iteritems(nd):\n idf = math.log(self.corpus_size - freq + 0.5) - math.log(freq + 0.5)\n self.idf[word] = idf\n idf_sum += idf\n if idf < 0:\n negative_idfs.append(word)\n self.average_idf = float(idf_sum) / len(self.idf)\n\n eps = EPSILON * self.average_idf\n for word in negative_idfs:\n self.idf[word] = eps\n\n def get_score(self, document, index):\n \"\"\"Computes BM25 score of given `document` in relation to item of corpus selected by `index`.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n index : int\n Index of document in corpus selected to score with `document`.\n\n Returns\n -------\n float\n BM25 score.\n\n \"\"\"\n score = 0\n doc_freqs = self.doc_freqs[index]\n for word in document:\n if word not in doc_freqs:\n continue\n score += (self.idf[word] * doc_freqs[word] * (PARAM_K1 + 1)\n / (doc_freqs[word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * self.doc_len[index] / self.avgdl)))\n return score\n\n def get_scores(self, document):\n \"\"\"Computes and returns BM25 scores of given `document` in relation to\n every item in corpus.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n scores = [self.get_score(document, index) for index in range(self.corpus_size)]\n return scores\n\n def get_scores_bow(self, document):\n \"\"\"Computes and returns BM25 scores of given `document` in relation to\n every item in corpus.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n scores = []\n for index in range(self.corpus_size):\n score = self.get_score(document, index)\n if score > 0:\n scores.append((index, score))\n return scores\n\n\ndef _get_scores_bow(bm25, document):\n \"\"\"Helper function for retrieving bm25 scores of given `document` in parallel\n in relation to every item in corpus.\n\n Parameters\n ----------\n bm25 : BM25 object\n BM25 object fitted on the corpus where documents are retrieved.\n document : list of str\n Document to be scored.\n\n Returns\n -------\n list of (index, float)\n BM25 scores in a bag of weights format.\n\n \"\"\"\n return bm25.get_scores_bow(document)\n\n\ndef _get_scores(bm25, document):\n \"\"\"Helper function for retrieving bm25 scores of given `document` in parallel\n in relation to every item in corpus.\n\n Parameters\n ----------\n bm25 : BM25 object\n BM25 object fitted on the corpus where documents are retrieved.\n document : list of str\n Document to be scored.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n return bm25.get_scores(document)\n\n\ndef iter_bm25_bow(corpus, n_jobs=1):\n \"\"\"Yield BM25 scores (weights) of documents 
in corpus.\n Each document has to be weighted with every document in given corpus.\n\n Parameters\n ----------\n corpus : list of list of str\n Corpus of documents.\n n_jobs : int\n The number of processes to use for computing bm25.\n\n Yields\n -------\n list of (index, float)\n BM25 scores in bag of weights format.\n\n Examples\n --------\n .. sourcecode:: pycon\n\n >>> from gensim.summarization.bm25 import iter_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... ]\n >>> result = iter_bm25_weights(corpus, n_jobs=-1)\n\n \"\"\"\n bm25 = BM25(corpus)\n\n n_processes = effective_n_jobs(n_jobs)\n if n_processes == 1:\n for doc in corpus:\n yield bm25.get_scores_bow(doc)\n return\n\n get_score = partial(_get_scores_bow, bm25)\n pool = Pool(n_processes)\n\n for bow in pool.imap(get_score, corpus):\n yield bow\n pool.close()\n pool.join()\n\n\ndef get_bm25_weights(corpus, n_jobs=1):\n \"\"\"Returns BM25 scores (weights) of documents in corpus.\n Each document has to be weighted with every document in given corpus.\n\n Parameters\n ----------\n corpus : list of list of str\n Corpus of documents.\n n_jobs : int\n The number of processes to use for computing bm25.\n\n Returns\n -------\n list of list of float\n BM25 scores.\n\n Examples\n --------\n .. sourcecode:: pycon\n\n >>> from gensim.summarization.bm25 import get_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... ]\n >>> result = get_bm25_weights(corpus, n_jobs=-1)\n\n \"\"\"\n bm25 = BM25(corpus)\n\n n_processes = effective_n_jobs(n_jobs)\n if n_processes == 1:\n weights = [bm25.get_scores(doc) for doc in corpus]\n return weights\n\n get_score = partial(_get_scores, bm25)\n pool = Pool(n_processes)\n weights = pool.map(get_score, corpus)\n pool.close()\n pool.join()\n return weights\n", "path": "gensim/summarization/bm25.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"This module contains function of computing rank scores for documents in\ncorpus and helper class `BM25` used in calculations. Original algorithm\ndescibed in [1]_, also you may check Wikipedia page [2]_.\n\n\n.. [1] Robertson, Stephen; Zaragoza, Hugo (2009). The Probabilistic Relevance Framework: BM25 and Beyond,\n http://www.staff.city.ac.uk/~sb317/papers/foundations_bm25_review.pdf\n.. [2] Okapi BM25 on Wikipedia, https://en.wikipedia.org/wiki/Okapi_BM25\n\n\n\nExamples\n--------\n\n.. sourcecode:: pycon\n\n >>> from gensim.summarization.bm25 import get_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... ]\n >>> result = get_bm25_weights(corpus, n_jobs=-1)\n\n\nData:\n-----\n.. data:: PARAM_K1 - Free smoothing parameter for BM25.\n.. data:: PARAM_B - Free smoothing parameter for BM25.\n.. 
data:: EPSILON - Constant used for negative idf of document in corpus.\n\n\"\"\"\n\n\nimport logging\nimport math\nfrom six import iteritems\nfrom six.moves import range\nfrom functools import partial\nfrom multiprocessing import Pool\nfrom ..utils import effective_n_jobs\n\nPARAM_K1 = 1.5\nPARAM_B = 0.75\nEPSILON = 0.25\n\nlogger = logging.getLogger(__name__)\n\n\nclass BM25(object):\n \"\"\"Implementation of Best Matching 25 ranking function.\n\n Attributes\n ----------\n corpus_size : int\n Size of corpus (number of documents).\n avgdl : float\n Average length of document in `corpus`.\n doc_freqs : list of dicts of int\n Dictionary with terms frequencies for each document in `corpus`. Words used as keys and frequencies as values.\n idf : dict\n Dictionary with inversed documents frequencies for whole `corpus`. Words used as keys and frequencies as values.\n doc_len : list of int\n List of document lengths.\n \"\"\"\n\n def __init__(self, corpus):\n \"\"\"\n Parameters\n ----------\n corpus : list of list of str\n Given corpus.\n\n \"\"\"\n self.corpus_size = 0\n self.avgdl = 0\n self.doc_freqs = []\n self.idf = {}\n self.doc_len = []\n self._initialize(corpus)\n\n def _initialize(self, corpus):\n \"\"\"Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies.\"\"\"\n nd = {} # word -> number of documents with word\n num_doc = 0\n for document in corpus:\n self.corpus_size += 1\n self.doc_len.append(len(document))\n num_doc += len(document)\n\n frequencies = {}\n for word in document:\n if word not in frequencies:\n frequencies[word] = 0\n frequencies[word] += 1\n self.doc_freqs.append(frequencies)\n\n for word, freq in iteritems(frequencies):\n if word not in nd:\n nd[word] = 0\n nd[word] += 1\n\n self.avgdl = float(num_doc) / self.corpus_size\n # collect idf sum to calculate an average idf for epsilon value\n idf_sum = 0\n # collect words with negative idf to set them a special epsilon value.\n # idf can be negative if word is contained in more than half of documents\n negative_idfs = []\n for word, freq in iteritems(nd):\n idf = math.log(self.corpus_size - freq + 0.5) - math.log(freq + 0.5)\n self.idf[word] = idf\n idf_sum += idf\n if idf < 0:\n negative_idfs.append(word)\n self.average_idf = float(idf_sum) / len(self.idf)\n\n if self.average_idf < 0:\n logger.warning(\n 'Average inverse document frequency is less than zero. Your corpus of {} documents'\n ' is either too small or it does not originate from natural text. 
BM25 may produce'\n ' unintuitive results.'.format(self.corpus_size)\n )\n\n eps = EPSILON * self.average_idf\n for word in negative_idfs:\n self.idf[word] = eps\n\n def get_score(self, document, index):\n \"\"\"Computes BM25 score of given `document` in relation to item of corpus selected by `index`.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n index : int\n Index of document in corpus selected to score with `document`.\n\n Returns\n -------\n float\n BM25 score.\n\n \"\"\"\n score = 0\n doc_freqs = self.doc_freqs[index]\n for word in document:\n if word not in doc_freqs:\n continue\n score += (self.idf[word] * doc_freqs[word] * (PARAM_K1 + 1)\n / (doc_freqs[word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * self.doc_len[index] / self.avgdl)))\n return score\n\n def get_scores(self, document):\n \"\"\"Computes and returns BM25 scores of given `document` in relation to\n every item in corpus.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n scores = [self.get_score(document, index) for index in range(self.corpus_size)]\n return scores\n\n def get_scores_bow(self, document):\n \"\"\"Computes and returns BM25 scores of given `document` in relation to\n every item in corpus.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n scores = []\n for index in range(self.corpus_size):\n score = self.get_score(document, index)\n if score > 0:\n scores.append((index, score))\n return scores\n\n\ndef _get_scores_bow(bm25, document):\n \"\"\"Helper function for retrieving bm25 scores of given `document` in parallel\n in relation to every item in corpus.\n\n Parameters\n ----------\n bm25 : BM25 object\n BM25 object fitted on the corpus where documents are retrieved.\n document : list of str\n Document to be scored.\n\n Returns\n -------\n list of (index, float)\n BM25 scores in a bag of weights format.\n\n \"\"\"\n return bm25.get_scores_bow(document)\n\n\ndef _get_scores(bm25, document):\n \"\"\"Helper function for retrieving bm25 scores of given `document` in parallel\n in relation to every item in corpus.\n\n Parameters\n ----------\n bm25 : BM25 object\n BM25 object fitted on the corpus where documents are retrieved.\n document : list of str\n Document to be scored.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n return bm25.get_scores(document)\n\n\ndef iter_bm25_bow(corpus, n_jobs=1):\n \"\"\"Yield BM25 scores (weights) of documents in corpus.\n Each document has to be weighted with every document in given corpus.\n\n Parameters\n ----------\n corpus : list of list of str\n Corpus of documents.\n n_jobs : int\n The number of processes to use for computing bm25.\n\n Yields\n -------\n list of (index, float)\n BM25 scores in bag of weights format.\n\n Examples\n --------\n .. sourcecode:: pycon\n\n >>> from gensim.summarization.bm25 import iter_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... 
]\n >>> result = iter_bm25_weights(corpus, n_jobs=-1)\n\n \"\"\"\n bm25 = BM25(corpus)\n\n n_processes = effective_n_jobs(n_jobs)\n if n_processes == 1:\n for doc in corpus:\n yield bm25.get_scores_bow(doc)\n return\n\n get_score = partial(_get_scores_bow, bm25)\n pool = Pool(n_processes)\n\n for bow in pool.imap(get_score, corpus):\n yield bow\n pool.close()\n pool.join()\n\n\ndef get_bm25_weights(corpus, n_jobs=1):\n \"\"\"Returns BM25 scores (weights) of documents in corpus.\n Each document has to be weighted with every document in given corpus.\n\n Parameters\n ----------\n corpus : list of list of str\n Corpus of documents.\n n_jobs : int\n The number of processes to use for computing bm25.\n\n Returns\n -------\n list of list of float\n BM25 scores.\n\n Examples\n --------\n .. sourcecode:: pycon\n\n >>> from gensim.summarization.bm25 import get_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... ]\n >>> result = get_bm25_weights(corpus, n_jobs=-1)\n\n \"\"\"\n bm25 = BM25(corpus)\n\n n_processes = effective_n_jobs(n_jobs)\n if n_processes == 1:\n weights = [bm25.get_scores(doc) for doc in corpus]\n return weights\n\n get_score = partial(_get_scores, bm25)\n pool = Pool(n_processes)\n weights = pool.map(get_score, corpus)\n pool.close()\n pool.join()\n return weights\n", "path": "gensim/summarization/bm25.py"}]}
| 3,961 | 292 |
gh_patches_debug_5608
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-43032
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception with `ansible_port` on delegate task result
##### SUMMARY
If we have a task that delegates to localhost, an exception is thrown:
````
ERROR! Unexpected Exception, this is probably a bug: 'ansible_port'
````
##### ISSUE TYPE
This seems to be related to #42577. Reverting this commit fixes the issue.
##### COMPONENT NAME
delegate_to
##### ANSIBLE VERSION
```
ansible 2.6.1.post0
config file = /opt/monitoring/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0]
```
##### CONFIGURATION
````
DEFAULT_ROLES_PATH(/opt/monitoring/ansible/ansible.cfg) = [u'/opt/monitoring/ansible/.galaxy_roles']
````
##### OS / ENVIRONMENT
Ubuntu 18.04
##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used. -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: '*'
tasks:
- name: Write gossip encryption key locally for use with new servers
copy:
content: "{{ consul_raw_key }}"
dest: '/tmp/consul_raw.key'
become: no
no_log: true
run_once: true
register: consul_local_key
delegate_to: localhost
changed_when: false
when: consul_raw_key is defined
```
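
To make the failure mode concrete (editorial reconstruction, not part of the original report): `clean_copy()` copies a fixed tuple of keys out of `_ansible_delegated_vars`, but for a task delegated to localhost that sub-dict is not guaranteed to contain all of them. The exact keys present depend on the connection and inventory, so the dictionary below is only an assumed example.

```python
# Assumed example of the failing copy loop from task_result.py (listed below).
_SUB_PRESERVE = {'_ansible_delegated_vars': ('ansible_host', 'ansible_port',
                                             'ansible_user', 'ansible_connection')}
result = {'_ansible_delegated_vars': {'ansible_host': 'localhost'}}  # no 'ansible_port'

x = {}
for sub, keys in _SUB_PRESERVE.items():
    if sub in result:
        x[sub] = {}
        for key in keys:
            x[sub][key] = result[sub][key]  # raises KeyError: 'ansible_port'
```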
##### EXPECTED RESULTS
Something working, not an exception
##### ACTUAL RESULTS
```
TASK [Write gossip encryption key locally for use with new servers] ******************************************************************************************************
task path: /opt/monitoring/ansible/playbook.yml:8
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818 `" && echo ansible-tmp-1532011875.95-252544437995818="` echo /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818 `" ) && sleep 0'
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/files/stat.py
<localhost> PUT /root/.ansible/tmp/ansible-local-18955F96q_C/tmplKd6qB TO /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/stat.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/ /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/stat.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/stat.py && sleep 0'
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/files/file.py
<localhost> PUT /root/.ansible/tmp/ansible-local-18955F96q_C/tmpFYSMGp TO /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/file.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/ /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/file.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/file.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/ > /dev/null 2>&1 && sleep 0'
ERROR! Unexpected Exception, this is probably a bug: 'ansible_port'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 118, in <module>
exit_code = cli.run()
File "/usr/local/lib/python2.7/dist-packages/ansible/cli/playbook.py", line 122, in run
results = pbex.run()
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/playbook_executor.py", line 159, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_queue_manager.py", line 289, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/linear.py", line 323, in run
results += self._wait_on_pending_results(iterator)
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py", line 674, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py", line 117, in inner
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py", line 636, in _process_pending_results
self._tqm.send_callback('v2_runner_on_ok', task_result)
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_queue_manager.py", line 366, in send_callback
new_args.append(arg.clean_copy())
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_result.py", line 127, in clean_copy
x[sub][key] = self._result[sub][key]
KeyError: 'ansible_port'
```
The error is in #42577
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/executor/task_result.py`
Content:
```
1 # Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
2
3 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
4
5 from __future__ import (absolute_import, division, print_function)
6 __metaclass__ = type
7
8 from copy import deepcopy
9
10 from ansible import constants as C
11 from ansible.parsing.dataloader import DataLoader
12 from ansible.vars.clean import strip_internal_keys
13
14 _IGNORE = ('failed', 'skipped')
15 _PRESERVE = ('attempts', 'changed', 'retries')
16 _SUB_PRESERVE = {'_ansible_delegated_vars': ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection')}
17
18
19 class TaskResult:
20 '''
21 This class is responsible for interpreting the resulting data
22 from an executed task, and provides helper methods for determining
23 the result of a given task.
24 '''
25
26 def __init__(self, host, task, return_data, task_fields=None):
27 self._host = host
28 self._task = task
29
30 if isinstance(return_data, dict):
31 self._result = return_data.copy()
32 else:
33 self._result = DataLoader().load(return_data)
34
35 if task_fields is None:
36 self._task_fields = dict()
37 else:
38 self._task_fields = task_fields
39
40 @property
41 def task_name(self):
42 return self._task_fields.get('name', None) or self._task.get_name()
43
44 def is_changed(self):
45 return self._check_key('changed')
46
47 def is_skipped(self):
48 # loop results
49 if 'results' in self._result:
50 results = self._result['results']
51 # Loop tasks are only considered skipped if all items were skipped.
52 # some squashed results (eg, yum) are not dicts and can't be skipped individually
53 if results and all(isinstance(res, dict) and res.get('skipped', False) for res in results):
54 return True
55
56 # regular tasks and squashed non-dict results
57 return self._result.get('skipped', False)
58
59 def is_failed(self):
60 if 'failed_when_result' in self._result or \
61 'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:
62 return self._check_key('failed_when_result')
63 else:
64 return self._check_key('failed')
65
66 def is_unreachable(self):
67 return self._check_key('unreachable')
68
69 def needs_debugger(self, globally_enabled=False):
70 _debugger = self._task_fields.get('debugger')
71 _ignore_errors = C.TASK_DEBUGGER_IGNORE_ERRORS and self._task_fields.get('ignore_errors')
72
73 ret = False
74 if globally_enabled and ((self.is_failed() and not _ignore_errors) or self.is_unreachable()):
75 ret = True
76
77 if _debugger in ('always',):
78 ret = True
79 elif _debugger in ('never',):
80 ret = False
81 elif _debugger in ('on_failed',) and self.is_failed() and not _ignore_errors:
82 ret = True
83 elif _debugger in ('on_unreachable',) and self.is_unreachable():
84 ret = True
85 elif _debugger in('on_skipped',) and self.is_skipped():
86 ret = True
87
88 return ret
89
90 def _check_key(self, key):
91 '''get a specific key from the result or its items'''
92
93 if isinstance(self._result, dict) and key in self._result:
94 return self._result.get(key, False)
95 else:
96 flag = False
97 for res in self._result.get('results', []):
98 if isinstance(res, dict):
99 flag |= res.get(key, False)
100 return flag
101
102 def clean_copy(self):
103
104 ''' returns 'clean' taskresult object '''
105
106 # FIXME: clean task_fields, _task and _host copies
107 result = TaskResult(self._host, self._task, {}, self._task_fields)
108
109 # statuses are already reflected on the event type
110 if result._task and result._task.action in ['debug']:
111 # debug is verbose by default to display vars, no need to add invocation
112 ignore = _IGNORE + ('invocation',)
113 else:
114 ignore = _IGNORE
115
116 if self._task.no_log or self._result.get('_ansible_no_log', False):
117 x = {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}
118
119 # preserve full
120 for preserve in _PRESERVE:
121 if preserve in self._result:
122 x[preserve] = self._result[preserve]
123
124 # preserve subset
125 for sub in _SUB_PRESERVE:
126 if sub in self._result:
127 x[sub] = {}
128 for key in _SUB_PRESERVE[sub]:
129 x[sub][key] = self._result[sub][key]
130
131 result._result = x
132 elif self._result:
133 result._result = deepcopy(self._result)
134
135 # actualy remove
136 for remove_key in ignore:
137 if remove_key in result._result:
138 del result._result[remove_key]
139
140 # remove almost ALL internal keys, keep ones relevant to callback
141 strip_internal_keys(result._result, exceptions=('_ansible_verbose_always', '_ansible_item_label', '_ansible_no_log'))
142
143 return result
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/ansible/executor/task_result.py b/lib/ansible/executor/task_result.py
--- a/lib/ansible/executor/task_result.py
+++ b/lib/ansible/executor/task_result.py
@@ -126,7 +126,8 @@
if sub in self._result:
x[sub] = {}
for key in _SUB_PRESERVE[sub]:
- x[sub][key] = self._result[sub][key]
+ if key in self._result[sub]:
+ x[sub][key] = self._result[sub][key]
result._result = x
elif self._result:
|
{"golden_diff": "diff --git a/lib/ansible/executor/task_result.py b/lib/ansible/executor/task_result.py\n--- a/lib/ansible/executor/task_result.py\n+++ b/lib/ansible/executor/task_result.py\n@@ -126,7 +126,8 @@\n if sub in self._result:\n x[sub] = {}\n for key in _SUB_PRESERVE[sub]:\n- x[sub][key] = self._result[sub][key]\n+ if key in self._result[sub]:\n+ x[sub][key] = self._result[sub][key]\n \n result._result = x\n elif self._result:\n", "issue": "Exception with `ansible_port` on delegate task result\n##### SUMMARY\r\nIf we have a task that delegate to localhost, we have an exception thrown : \r\n\r\n````\r\nERROR! Unexpected Exception, this is probably a bug: 'ansible_port'\r\n````\r\n\r\n##### ISSUE TYPE\r\nThis seems to be related to #42577. Reverting this commit fix the issue.\r\n\r\n##### COMPONENT NAME\r\ndelegate_to\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.6.1.post0\r\n config file = /opt/monitoring/ansible/ansible.cfg\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0]\r\n```\r\n\r\n##### CONFIGURATION\r\n````\r\nDEFAULT_ROLES_PATH(/opt/monitoring/ansible/ansible.cfg) = [u'/opt/monitoring/ansible/.galaxy_roles']\r\n````\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 18.04\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used. -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- hosts: '*'\r\n tasks:\r\n - name: Write gossip encryption key locally for use with new servers\r\n copy:\r\n content: \"{{ consul_raw_key }}\"\r\n dest: '/tmp/consul_raw.key'\r\n become: no\r\n no_log: true\r\n run_once: true\r\n register: consul_local_key\r\n delegate_to: localhost\r\n changed_when: false\r\n when: consul_raw_key is defined\r\n```\r\n\r\n##### EXPECTED RESULTS\r\nSomething working, not an exception\r\n##### ACTUAL RESULTS\r\n```\r\nTASK [Write gossip encryption key locally for use with new servers] ******************************************************************************************************\r\ntask path: /opt/monitoring/ansible/playbook.yml:8\r\n<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root\r\n<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'\r\n<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818 `\" && echo ansible-tmp-1532011875.95-252544437995818=\"` echo /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818 `\" ) && sleep 0'\r\nUsing module file /usr/local/lib/python2.7/dist-packages/ansible/modules/files/stat.py\r\n<localhost> PUT /root/.ansible/tmp/ansible-local-18955F96q_C/tmplKd6qB TO /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/stat.py\r\n<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/ /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/stat.py && sleep 0'\r\n<localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/stat.py && sleep 0'\r\nUsing module file /usr/local/lib/python2.7/dist-packages/ansible/modules/files/file.py\r\n<localhost> PUT /root/.ansible/tmp/ansible-local-18955F96q_C/tmpFYSMGp TO 
/root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/file.py\r\n<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/ /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/file.py && sleep 0'\r\n<localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/file.py && sleep 0'\r\n<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1532011875.95-252544437995818/ > /dev/null 2>&1 && sleep 0'\r\nERROR! Unexpected Exception, this is probably a bug: 'ansible_port'\r\nthe full traceback was:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/ansible-playbook\", line 118, in <module>\r\n exit_code = cli.run()\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/cli/playbook.py\", line 122, in run\r\n results = pbex.run()\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/executor/playbook_executor.py\", line 159, in run\r\n result = self._tqm.run(play=play)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/executor/task_queue_manager.py\", line 289, in run\r\n play_return = strategy.run(iterator, play_context)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/linear.py\", line 323, in run\r\n results += self._wait_on_pending_results(iterator)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py\", line 674, in _wait_on_pending_results\r\n results = self._process_pending_results(iterator)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py\", line 117, in inner\r\n results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py\", line 636, in _process_pending_results\r\n self._tqm.send_callback('v2_runner_on_ok', task_result)\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/executor/task_queue_manager.py\", line 366, in send_callback\r\n new_args.append(arg.clean_copy())\r\n File \"/usr/local/lib/python2.7/dist-packages/ansible/executor/task_result.py\", line 127, in clean_copy\r\n x[sub][key] = self._result[sub][key]\r\nKeyError: 'ansible_port'\r\n```\r\n\r\n\r\nThe error is in #42577\n", "before_files": [{"content": "# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>\n\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nfrom copy import deepcopy\n\nfrom ansible import constants as C\nfrom ansible.parsing.dataloader import DataLoader\nfrom ansible.vars.clean import strip_internal_keys\n\n_IGNORE = ('failed', 'skipped')\n_PRESERVE = ('attempts', 'changed', 'retries')\n_SUB_PRESERVE = {'_ansible_delegated_vars': ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection')}\n\n\nclass TaskResult:\n '''\n This class is responsible for interpreting the resulting data\n from an executed task, and provides helper methods for determining\n the result of a given task.\n '''\n\n def __init__(self, host, task, return_data, task_fields=None):\n self._host = host\n self._task = task\n\n if isinstance(return_data, dict):\n self._result = return_data.copy()\n else:\n self._result = DataLoader().load(return_data)\n\n if task_fields is None:\n self._task_fields = dict()\n else:\n self._task_fields = task_fields\n\n @property\n def task_name(self):\n return self._task_fields.get('name', None) or 
self._task.get_name()\n\n def is_changed(self):\n return self._check_key('changed')\n\n def is_skipped(self):\n # loop results\n if 'results' in self._result:\n results = self._result['results']\n # Loop tasks are only considered skipped if all items were skipped.\n # some squashed results (eg, yum) are not dicts and can't be skipped individually\n if results and all(isinstance(res, dict) and res.get('skipped', False) for res in results):\n return True\n\n # regular tasks and squashed non-dict results\n return self._result.get('skipped', False)\n\n def is_failed(self):\n if 'failed_when_result' in self._result or \\\n 'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:\n return self._check_key('failed_when_result')\n else:\n return self._check_key('failed')\n\n def is_unreachable(self):\n return self._check_key('unreachable')\n\n def needs_debugger(self, globally_enabled=False):\n _debugger = self._task_fields.get('debugger')\n _ignore_errors = C.TASK_DEBUGGER_IGNORE_ERRORS and self._task_fields.get('ignore_errors')\n\n ret = False\n if globally_enabled and ((self.is_failed() and not _ignore_errors) or self.is_unreachable()):\n ret = True\n\n if _debugger in ('always',):\n ret = True\n elif _debugger in ('never',):\n ret = False\n elif _debugger in ('on_failed',) and self.is_failed() and not _ignore_errors:\n ret = True\n elif _debugger in ('on_unreachable',) and self.is_unreachable():\n ret = True\n elif _debugger in('on_skipped',) and self.is_skipped():\n ret = True\n\n return ret\n\n def _check_key(self, key):\n '''get a specific key from the result or its items'''\n\n if isinstance(self._result, dict) and key in self._result:\n return self._result.get(key, False)\n else:\n flag = False\n for res in self._result.get('results', []):\n if isinstance(res, dict):\n flag |= res.get(key, False)\n return flag\n\n def clean_copy(self):\n\n ''' returns 'clean' taskresult object '''\n\n # FIXME: clean task_fields, _task and _host copies\n result = TaskResult(self._host, self._task, {}, self._task_fields)\n\n # statuses are already reflected on the event type\n if result._task and result._task.action in ['debug']:\n # debug is verbose by default to display vars, no need to add invocation\n ignore = _IGNORE + ('invocation',)\n else:\n ignore = _IGNORE\n\n if self._task.no_log or self._result.get('_ansible_no_log', False):\n x = {\"censored\": \"the output has been hidden due to the fact that 'no_log: true' was specified for this result\"}\n\n # preserve full\n for preserve in _PRESERVE:\n if preserve in self._result:\n x[preserve] = self._result[preserve]\n\n # preserve subset\n for sub in _SUB_PRESERVE:\n if sub in self._result:\n x[sub] = {}\n for key in _SUB_PRESERVE[sub]:\n x[sub][key] = self._result[sub][key]\n\n result._result = x\n elif self._result:\n result._result = deepcopy(self._result)\n\n # actualy remove\n for remove_key in ignore:\n if remove_key in result._result:\n del result._result[remove_key]\n\n # remove almost ALL internal keys, keep ones relevant to callback\n strip_internal_keys(result._result, exceptions=('_ansible_verbose_always', '_ansible_item_label', '_ansible_no_log'))\n\n return result\n", "path": "lib/ansible/executor/task_result.py"}], "after_files": [{"content": "# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>\n\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = 
type\n\nfrom copy import deepcopy\n\nfrom ansible import constants as C\nfrom ansible.parsing.dataloader import DataLoader\nfrom ansible.vars.clean import strip_internal_keys\n\n_IGNORE = ('failed', 'skipped')\n_PRESERVE = ('attempts', 'changed', 'retries')\n_SUB_PRESERVE = {'_ansible_delegated_vars': ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection')}\n\n\nclass TaskResult:\n '''\n This class is responsible for interpreting the resulting data\n from an executed task, and provides helper methods for determining\n the result of a given task.\n '''\n\n def __init__(self, host, task, return_data, task_fields=None):\n self._host = host\n self._task = task\n\n if isinstance(return_data, dict):\n self._result = return_data.copy()\n else:\n self._result = DataLoader().load(return_data)\n\n if task_fields is None:\n self._task_fields = dict()\n else:\n self._task_fields = task_fields\n\n @property\n def task_name(self):\n return self._task_fields.get('name', None) or self._task.get_name()\n\n def is_changed(self):\n return self._check_key('changed')\n\n def is_skipped(self):\n # loop results\n if 'results' in self._result:\n results = self._result['results']\n # Loop tasks are only considered skipped if all items were skipped.\n # some squashed results (eg, yum) are not dicts and can't be skipped individually\n if results and all(isinstance(res, dict) and res.get('skipped', False) for res in results):\n return True\n\n # regular tasks and squashed non-dict results\n return self._result.get('skipped', False)\n\n def is_failed(self):\n if 'failed_when_result' in self._result or \\\n 'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:\n return self._check_key('failed_when_result')\n else:\n return self._check_key('failed')\n\n def is_unreachable(self):\n return self._check_key('unreachable')\n\n def needs_debugger(self, globally_enabled=False):\n _debugger = self._task_fields.get('debugger')\n _ignore_errors = C.TASK_DEBUGGER_IGNORE_ERRORS and self._task_fields.get('ignore_errors')\n\n ret = False\n if globally_enabled and ((self.is_failed() and not _ignore_errors) or self.is_unreachable()):\n ret = True\n\n if _debugger in ('always',):\n ret = True\n elif _debugger in ('never',):\n ret = False\n elif _debugger in ('on_failed',) and self.is_failed() and not _ignore_errors:\n ret = True\n elif _debugger in ('on_unreachable',) and self.is_unreachable():\n ret = True\n elif _debugger in('on_skipped',) and self.is_skipped():\n ret = True\n\n return ret\n\n def _check_key(self, key):\n '''get a specific key from the result or its items'''\n\n if isinstance(self._result, dict) and key in self._result:\n return self._result.get(key, False)\n else:\n flag = False\n for res in self._result.get('results', []):\n if isinstance(res, dict):\n flag |= res.get(key, False)\n return flag\n\n def clean_copy(self):\n\n ''' returns 'clean' taskresult object '''\n\n # FIXME: clean task_fields, _task and _host copies\n result = TaskResult(self._host, self._task, {}, self._task_fields)\n\n # statuses are already reflected on the event type\n if result._task and result._task.action in ['debug']:\n # debug is verbose by default to display vars, no need to add invocation\n ignore = _IGNORE + ('invocation',)\n else:\n ignore = _IGNORE\n\n if self._task.no_log or self._result.get('_ansible_no_log', False):\n x = {\"censored\": \"the output has been hidden due to the fact that 'no_log: true' was specified for this result\"}\n\n # preserve full\n for 
preserve in _PRESERVE:\n if preserve in self._result:\n x[preserve] = self._result[preserve]\n\n # preserve subset\n for sub in _SUB_PRESERVE:\n if sub in self._result:\n x[sub] = {}\n for key in _SUB_PRESERVE[sub]:\n if key in self._result[sub]:\n x[sub][key] = self._result[sub][key]\n\n result._result = x\n elif self._result:\n result._result = deepcopy(self._result)\n\n # actualy remove\n for remove_key in ignore:\n if remove_key in result._result:\n del result._result[remove_key]\n\n # remove almost ALL internal keys, keep ones relevant to callback\n strip_internal_keys(result._result, exceptions=('_ansible_verbose_always', '_ansible_item_label', '_ansible_no_log'))\n\n return result\n", "path": "lib/ansible/executor/task_result.py"}]}
| 3,468 | 137 |
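A minimal standalone sketch of the guarded subset copy that the golden diff above introduces, using plain dicts in place of a real `TaskResult`; the `_SUB_PRESERVE` table is taken from the file shown, while the `preserve_subset` helper and the sample result are illustrative only:

```python
# Sketch of the patched clean_copy() subset handling, reduced to plain dicts.
_SUB_PRESERVE = {'_ansible_delegated_vars': ('ansible_host', 'ansible_port',
                                             'ansible_user', 'ansible_connection')}


def preserve_subset(result):
    """Copy only whitelisted sub-keys, skipping any that are absent (the fix)."""
    preserved = {}
    for sub, keys in _SUB_PRESERVE.items():
        if sub in result:
            preserved[sub] = {}
            for key in keys:
                if key in result[sub]:  # guard added by the patch
                    preserved[sub][key] = result[sub][key]
    return preserved


# A task delegated to localhost often has no 'ansible_port' in its delegated
# vars; before the guard this lookup raised KeyError: 'ansible_port'.
delegated = {'_ansible_delegated_vars': {'ansible_host': 'localhost',
                                         'ansible_connection': 'local'}}
print(preserve_subset(delegated))
# -> {'_ansible_delegated_vars': {'ansible_host': 'localhost',
#                                 'ansible_connection': 'local'}}
```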
gh_patches_debug_6411
|
rasdani/github-patches
|
git_diff
|
SeldonIO__MLServer-625
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
starting mlserver using `mlserver start .` is not consistent with `mlserver start $PWD`
When I started mlserver using `mlserver start .` in directory tree
```
└── iris1
└── 1
├── model.joblib
└── model-settings.json
```
and settings `{"name":"iris1","implementation":"mlserver_sklearn.SKLearnModel","parameters":{"version":"1"}}`
results in an error:
```
mlserver.errors.InvalidModelURI: Invalid URI specified for model iris1 (iris1/1/iris1/1)
```
However using
`mlserver start $PWD` is successful.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlserver/repository.py`
Content:
```
1 import os
2 import glob
3
4 from typing import List
5
6 from .settings import ModelParameters, ModelSettings
7 from .errors import ModelNotFound
8 from .logging import logger
9
10 DEFAULT_MODEL_SETTINGS_FILENAME = "model-settings.json"
11
12
13 class ModelRepository:
14 """
15 Model repository, responsible of the discovery of models which can be
16 loaded onto the model registry.
17 """
18
19 def __init__(self, root: str = None):
20 self._root = root
21
22 async def list(self) -> List[ModelSettings]:
23 all_model_settings = []
24
25 # TODO: Use an async alternative for filesys ops
26 if self._root:
27 pattern = os.path.join(self._root, "**", DEFAULT_MODEL_SETTINGS_FILENAME)
28 matches = glob.glob(pattern, recursive=True)
29
30 for model_settings_path in matches:
31 model_settings = self._load_model_settings(model_settings_path)
32 all_model_settings.append(model_settings)
33
34 # If there were no matches, try to load model from environment
35 if not all_model_settings:
36 # return default
37 model_settings = ModelSettings()
38 model_settings.parameters = ModelParameters()
39 all_model_settings.append(model_settings)
40
41 return all_model_settings
42
43 def _load_model_settings(self, model_settings_path: str) -> ModelSettings:
44 model_settings = ModelSettings.parse_file(model_settings_path)
45 model_settings._source = model_settings_path
46
47 # If name not present, default to folder name
48 model_settings_folder = os.path.dirname(model_settings_path)
49 folder_name = os.path.basename(model_settings_folder)
50 if model_settings.name:
51 if not self._folder_matches(folder_name, model_settings):
52 # Raise warning if name is different than folder's name
53 logger.warning(
54 f"Model name '{model_settings.name}' is different than "
55 f"model's folder name '{folder_name}'."
56 )
57 else:
58 model_settings.name = folder_name
59
60 if not model_settings.parameters:
61 model_settings.parameters = ModelParameters()
62
63 if not model_settings.parameters.uri:
64 # If not specified, default to its own folder
65 default_model_uri = os.path.dirname(model_settings_path)
66 model_settings.parameters.uri = default_model_uri
67
68 return model_settings
69
70 def _folder_matches(self, folder_name: str, model_settings: ModelSettings) -> bool:
71 if model_settings.name == folder_name:
72 return True
73
74 # To be compatible with Triton, check whether the folder name matches
75 # with the model's version
76 if model_settings.parameters and model_settings.parameters.version:
77 model_version = model_settings.parameters.version
78 if model_version == folder_name:
79 return True
80
81 return False
82
83 async def find(self, name: str) -> List[ModelSettings]:
84 all_settings = await self.list()
85 selected = []
86 for model_settings in all_settings:
87 # TODO: Implement other version policies (e.g. "Last N")
88 if model_settings.name == name:
89 selected.append(model_settings)
90
91 if len(selected) == 0:
92 raise ModelNotFound(name)
93
94 return selected
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlserver/repository.py b/mlserver/repository.py
--- a/mlserver/repository.py
+++ b/mlserver/repository.py
@@ -24,7 +24,8 @@
# TODO: Use an async alternative for filesys ops
if self._root:
- pattern = os.path.join(self._root, "**", DEFAULT_MODEL_SETTINGS_FILENAME)
+ abs_root = os.path.abspath(self._root)
+ pattern = os.path.join(abs_root, "**", DEFAULT_MODEL_SETTINGS_FILENAME)
matches = glob.glob(pattern, recursive=True)
for model_settings_path in matches:
|
{"golden_diff": "diff --git a/mlserver/repository.py b/mlserver/repository.py\n--- a/mlserver/repository.py\n+++ b/mlserver/repository.py\n@@ -24,7 +24,8 @@\n \n # TODO: Use an async alternative for filesys ops\n if self._root:\n- pattern = os.path.join(self._root, \"**\", DEFAULT_MODEL_SETTINGS_FILENAME)\n+ abs_root = os.path.abspath(self._root)\n+ pattern = os.path.join(abs_root, \"**\", DEFAULT_MODEL_SETTINGS_FILENAME)\n matches = glob.glob(pattern, recursive=True)\n \n for model_settings_path in matches:\n", "issue": "starting mlserver using `mlserver start .` is not consistent with `mlserver start $PWD`\nWhen I started mlserver using `mlserver start .` in directory tree \r\n```\r\n\u2514\u2500\u2500 iris1\r\n \u2514\u2500\u2500 1\r\n \u251c\u2500\u2500 model.joblib\r\n \u2514\u2500\u2500 model-settings.json\r\n```\r\nand settings `{\"name\":\"iris1\",\"implementation\":\"mlserver_sklearn.SKLearnModel\",\"parameters\":{\"version\":\"1\"}}`\r\n\r\nresults in an error:\r\n```\r\nmlserver.errors.InvalidModelURI: Invalid URI specified for model iris1 (iris1/1/iris1/1)\r\n```\r\n\r\nHowever using\r\n`mlserver start $PWD` is successful.\n", "before_files": [{"content": "import os\nimport glob\n\nfrom typing import List\n\nfrom .settings import ModelParameters, ModelSettings\nfrom .errors import ModelNotFound\nfrom .logging import logger\n\nDEFAULT_MODEL_SETTINGS_FILENAME = \"model-settings.json\"\n\n\nclass ModelRepository:\n \"\"\"\n Model repository, responsible of the discovery of models which can be\n loaded onto the model registry.\n \"\"\"\n\n def __init__(self, root: str = None):\n self._root = root\n\n async def list(self) -> List[ModelSettings]:\n all_model_settings = []\n\n # TODO: Use an async alternative for filesys ops\n if self._root:\n pattern = os.path.join(self._root, \"**\", DEFAULT_MODEL_SETTINGS_FILENAME)\n matches = glob.glob(pattern, recursive=True)\n\n for model_settings_path in matches:\n model_settings = self._load_model_settings(model_settings_path)\n all_model_settings.append(model_settings)\n\n # If there were no matches, try to load model from environment\n if not all_model_settings:\n # return default\n model_settings = ModelSettings()\n model_settings.parameters = ModelParameters()\n all_model_settings.append(model_settings)\n\n return all_model_settings\n\n def _load_model_settings(self, model_settings_path: str) -> ModelSettings:\n model_settings = ModelSettings.parse_file(model_settings_path)\n model_settings._source = model_settings_path\n\n # If name not present, default to folder name\n model_settings_folder = os.path.dirname(model_settings_path)\n folder_name = os.path.basename(model_settings_folder)\n if model_settings.name:\n if not self._folder_matches(folder_name, model_settings):\n # Raise warning if name is different than folder's name\n logger.warning(\n f\"Model name '{model_settings.name}' is different than \"\n f\"model's folder name '{folder_name}'.\"\n )\n else:\n model_settings.name = folder_name\n\n if not model_settings.parameters:\n model_settings.parameters = ModelParameters()\n\n if not model_settings.parameters.uri:\n # If not specified, default to its own folder\n default_model_uri = os.path.dirname(model_settings_path)\n model_settings.parameters.uri = default_model_uri\n\n return model_settings\n\n def _folder_matches(self, folder_name: str, model_settings: ModelSettings) -> bool:\n if model_settings.name == folder_name:\n return True\n\n # To be compatible with Triton, check whether the folder name matches\n # with the model's 
version\n if model_settings.parameters and model_settings.parameters.version:\n model_version = model_settings.parameters.version\n if model_version == folder_name:\n return True\n\n return False\n\n async def find(self, name: str) -> List[ModelSettings]:\n all_settings = await self.list()\n selected = []\n for model_settings in all_settings:\n # TODO: Implement other version policies (e.g. \"Last N\")\n if model_settings.name == name:\n selected.append(model_settings)\n\n if len(selected) == 0:\n raise ModelNotFound(name)\n\n return selected\n", "path": "mlserver/repository.py"}], "after_files": [{"content": "import os\nimport glob\n\nfrom typing import List\n\nfrom .settings import ModelParameters, ModelSettings\nfrom .errors import ModelNotFound\nfrom .logging import logger\n\nDEFAULT_MODEL_SETTINGS_FILENAME = \"model-settings.json\"\n\n\nclass ModelRepository:\n \"\"\"\n Model repository, responsible of the discovery of models which can be\n loaded onto the model registry.\n \"\"\"\n\n def __init__(self, root: str = None):\n self._root = root\n\n async def list(self) -> List[ModelSettings]:\n all_model_settings = []\n\n # TODO: Use an async alternative for filesys ops\n if self._root:\n abs_root = os.path.abspath(self._root)\n pattern = os.path.join(abs_root, \"**\", DEFAULT_MODEL_SETTINGS_FILENAME)\n matches = glob.glob(pattern, recursive=True)\n\n for model_settings_path in matches:\n model_settings = self._load_model_settings(model_settings_path)\n all_model_settings.append(model_settings)\n\n # If there were no matches, try to load model from environment\n if not all_model_settings:\n # return default\n model_settings = ModelSettings()\n model_settings.parameters = ModelParameters()\n all_model_settings.append(model_settings)\n\n return all_model_settings\n\n def _load_model_settings(self, model_settings_path: str) -> ModelSettings:\n model_settings = ModelSettings.parse_file(model_settings_path)\n model_settings._source = model_settings_path\n\n # If name not present, default to folder name\n model_settings_folder = os.path.dirname(model_settings_path)\n folder_name = os.path.basename(model_settings_folder)\n if model_settings.name:\n if not self._folder_matches(folder_name, model_settings):\n # Raise warning if name is different than folder's name\n logger.warning(\n f\"Model name '{model_settings.name}' is different than \"\n f\"model's folder name '{folder_name}'.\"\n )\n else:\n model_settings.name = folder_name\n\n if not model_settings.parameters:\n model_settings.parameters = ModelParameters()\n\n if not model_settings.parameters.uri:\n # If not specified, default to its own folder\n default_model_uri = os.path.dirname(model_settings_path)\n model_settings.parameters.uri = default_model_uri\n\n return model_settings\n\n def _folder_matches(self, folder_name: str, model_settings: ModelSettings) -> bool:\n if model_settings.name == folder_name:\n return True\n\n # To be compatible with Triton, check whether the folder name matches\n # with the model's version\n if model_settings.parameters and model_settings.parameters.version:\n model_version = model_settings.parameters.version\n if model_version == folder_name:\n return True\n\n return False\n\n async def find(self, name: str) -> List[ModelSettings]:\n all_settings = await self.list()\n selected = []\n for model_settings in all_settings:\n # TODO: Implement other version policies (e.g. 
\"Last N\")\n if model_settings.name == name:\n selected.append(model_settings)\n\n if len(selected) == 0:\n raise ModelNotFound(name)\n\n return selected\n", "path": "mlserver/repository.py"}]}
| 1,236 | 126 |
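The golden diff above fixes the inconsistency by resolving the repository root with `os.path.abspath`, so the discovered `model-settings.json` paths (and the default model URIs derived from their directories) are absolute regardless of whether the user passed `.` or `$PWD`. A small sketch of the difference, using a throwaway directory tree shaped like the one in the issue; the temporary paths are illustrative only:

```python
# Sketch: globbing from "." vs. an absolute root changes the paths that
# ModelRepository.list() would hand on as default model URIs.
import glob
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "iris1", "1"))
open(os.path.join(root, "iris1", "1", "model-settings.json"), "w").close()
os.chdir(root)

relative = glob.glob(os.path.join(".", "**", "model-settings.json"), recursive=True)
absolute = glob.glob(os.path.join(os.path.abspath("."), "**", "model-settings.json"),
                     recursive=True)

print(relative)  # ['./iris1/1/model-settings.json']      -> relative URI downstream
print(absolute)  # ['<tmpdir>/iris1/1/model-settings.json'] -> unambiguous URI
```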
gh_patches_debug_17480
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-3915
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Offset in Y coordinate with Windows 7 and Multitouch HID Device
Hi all,
It seems there is a bug with Windows 7 (not tested on others Windows) and Kivy about the Y position of the cursor. There is a constant offset between the Windows Cursor position and the one Kivy uses. (See attached pictures). Note that the offset is bigger in fullscreen that in windowed mode.
After having a quick look at the code, it seems that this offset is due to the caption size which is substracted to calculate the Y coordinate (line 165 in file wm_touch.py).
I can try to run additional tests if needed.
Regards.
Touchtracer in windowed mode:

Touchtracer in fullscreen mode:

## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/6477040-offset-in-y-coordinate-with-windows-7-and-multitouch-hid-device?utm_campaign=plugin&utm_content=tracker%2F42681&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F42681&utm_medium=issues&utm_source=github).
</bountysource-plugin>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kivy/input/providers/wm_touch.py`
Content:
```
1 '''
2 Support for WM_TOUCH messages (Windows platform)
3 ================================================
4 '''
5
6 __all__ = ('WM_MotionEventProvider', 'WM_MotionEvent')
7
8 import os
9 from kivy.input.providers.wm_common import (
10 WM_TABLET_QUERYSYSTEMGESTURE,
11 GWL_WNDPROC, QUERYSYSTEMGESTURE_WNDPROC, WM_TOUCH, WM_MOUSEMOVE,
12 WM_MOUSELAST, PEN_OR_TOUCH_MASK, PEN_OR_TOUCH_SIGNATURE,
13 PEN_EVENT_TOUCH_MASK, TOUCHEVENTF_UP, TOUCHEVENTF_DOWN,
14 TOUCHEVENTF_MOVE, SM_CYCAPTION)
15 from kivy.input.motionevent import MotionEvent
16 from kivy.input.shape import ShapeRect
17
18
19 class WM_MotionEvent(MotionEvent):
20 '''MotionEvent representing the WM_MotionEvent event.
21 Supports pos, shape and size profiles.
22 '''
23 __attrs__ = ('size', )
24
25 def depack(self, args):
26 self.is_touch = True
27 self.shape = ShapeRect()
28 self.sx, self.sy = args[0], args[1]
29 self.shape.width = args[2][0]
30 self.shape.height = args[2][1]
31 self.size = self.shape.width * self.shape.height
32 self.profile = ('pos', 'shape', 'size')
33
34 super(WM_MotionEvent, self).depack(args)
35
36 def __str__(self):
37 args = (self.id, self.uid, str(self.spos), self.device)
38 return '<WMMotionEvent id:%d uid:%d pos:%s device:%s>' % args
39
40 if 'KIVY_DOC' in os.environ:
41 # documentation hack
42 WM_MotionEventProvider = None
43
44 else:
45 from ctypes.wintypes import (ULONG, HANDLE, DWORD, LONG, UINT,
46 WPARAM, LPARAM, BOOL)
47 from ctypes import (windll, WINFUNCTYPE, POINTER,
48 c_int, Structure, sizeof, byref)
49 from collections import deque
50 from kivy.input.provider import MotionEventProvider
51 from kivy.input.factory import MotionEventFactory
52
53 # check availability of RegisterTouchWindow
54 if not hasattr(windll.user32, 'RegisterTouchWindow'):
55 raise Exception('Unsupported Window version')
56
57 LRESULT = LPARAM
58 WNDPROC = WINFUNCTYPE(LRESULT, HANDLE, UINT, WPARAM, LPARAM)
59
60 class TOUCHINPUT(Structure):
61 _fields_ = [
62 ('x', LONG),
63 ('y', LONG),
64 ('pSource', HANDLE),
65 ('id', DWORD),
66 ('flags', DWORD),
67 ('mask', DWORD),
68 ('time', DWORD),
69 ('extraInfo', POINTER(ULONG)),
70 ('size_x', DWORD),
71 ('size_y', DWORD)]
72
73 def size(self):
74 return (self.size_x, self.size_y)
75
76 def screen_x(self):
77 return self.x / 100.0
78
79 def screen_y(self):
80 return self.y / 100.0
81
82 def _event_type(self):
83 if self.flags & TOUCHEVENTF_MOVE:
84 return 'update'
85 if self.flags & TOUCHEVENTF_DOWN:
86 return 'begin'
87 if self.flags & TOUCHEVENTF_UP:
88 return 'end'
89 event_type = property(_event_type)
90
91 class RECT(Structure):
92 _fields_ = [
93 ('left', LONG),
94 ('top', LONG),
95 ('right', LONG),
96 ('bottom', LONG)]
97
98 x = property(lambda self: self.left)
99 y = property(lambda self: self.top)
100 w = property(lambda self: self.right - self.left)
101 h = property(lambda self: self.bottom - self.top)
102
103 try:
104 windll.user32.SetWindowLongPtrW.restype = WNDPROC
105 windll.user32.SetWindowLongPtrW.argtypes = [HANDLE, c_int, WNDPROC]
106 SetWindowLong_wrapper = windll.user32.SetWindowLongPtrW
107 except AttributeError:
108 windll.user32.SetWindowLongW.restype = WNDPROC
109 windll.user32.SetWindowLongW.argtypes = [HANDLE, c_int, WNDPROC]
110 SetWindowLong_wrapper = windll.user32.SetWindowLongW
111
112 windll.user32.GetMessageExtraInfo.restype = LPARAM
113 windll.user32.GetMessageExtraInfo.argtypes = []
114 windll.user32.GetClientRect.restype = BOOL
115 windll.user32.GetClientRect.argtypes = [HANDLE, POINTER(RECT)]
116 windll.user32.GetWindowRect.restype = BOOL
117 windll.user32.GetWindowRect.argtypes = [HANDLE, POINTER(RECT)]
118 windll.user32.CallWindowProcW.restype = LRESULT
119 windll.user32.CallWindowProcW.argtypes = [WNDPROC, HANDLE, UINT, WPARAM,
120 LPARAM]
121 windll.user32.GetActiveWindow.restype = HANDLE
122 windll.user32.GetActiveWindow.argtypes = []
123 windll.user32.RegisterTouchWindow.restype = BOOL
124 windll.user32.RegisterTouchWindow.argtypes = [HANDLE, ULONG]
125 windll.user32.UnregisterTouchWindow.restype = BOOL
126 windll.user32.UnregisterTouchWindow.argtypes = [HANDLE]
127 windll.user32.GetTouchInputInfo.restype = BOOL
128 windll.user32.GetTouchInputInfo.argtypes = [HANDLE, UINT,
129 POINTER(TOUCHINPUT), c_int]
130 windll.user32.GetSystemMetrics.restype = c_int
131 windll.user32.GetSystemMetrics.argtypes = [c_int]
132
133 class WM_MotionEventProvider(MotionEventProvider):
134
135 def start(self):
136 self.touch_events = deque()
137 self.touches = {}
138 self.uid = 0
139
140 # get window handle, and register to recive WM_TOUCH messages
141 self.hwnd = windll.user32.GetActiveWindow()
142 windll.user32.RegisterTouchWindow(self.hwnd, 1)
143
144 # inject our own wndProc to handle messages
145 # before window manager does
146 self.new_windProc = WNDPROC(self._touch_wndProc)
147 self.old_windProc = SetWindowLong_wrapper(
148 self.hwnd, GWL_WNDPROC, self.new_windProc)
149
150 self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)
151
152 def update(self, dispatch_fn):
153 win_rect = RECT()
154 windll.user32.GetWindowRect(self.hwnd, byref(win_rect))
155 caption = self.caption_size
156
157 while True:
158 try:
159 t = self.touch_events.pop()
160 except:
161 break
162
163 # adjust x,y to window coordinates (0.0 to 1.0)
164 x = (t.screen_x() - win_rect.x) / float(win_rect.w)
165 y = 1.0 - (t.screen_y() - win_rect.y - caption
166 ) / float(win_rect.h)
167
168 # actually dispatch input
169 if t.event_type == 'begin':
170 self.uid += 1
171 self.touches[t.id] = WM_MotionEvent(
172 self.device, self.uid, [x, y, t.size()])
173 dispatch_fn('begin', self.touches[t.id])
174
175 if t.event_type == 'update' and t.id in self.touches:
176 self.touches[t.id].move([x, y, t.size()])
177 dispatch_fn('update', self.touches[t.id])
178
179 if t.event_type == 'end' and t.id in self.touches:
180 touch = self.touches[t.id]
181 touch.move([x, y, t.size()])
182 touch.update_time_end()
183 dispatch_fn('end', touch)
184 del self.touches[t.id]
185
186 def stop(self):
187 windll.user32.UnregisterTouchWindow(self.hwnd)
188 self.new_windProc = SetWindowLong_wrapper(
189 self.hwnd, GWL_WNDPROC, self.old_windProc)
190
191 # we inject this wndProc into our main window, to process
192 # WM_TOUCH and mouse messages before the window manager does
193 def _touch_wndProc(self, hwnd, msg, wParam, lParam):
194 done = False
195 if msg == WM_TABLET_QUERYSYSTEMGESTURE:
196 return QUERYSYSTEMGESTURE_WNDPROC
197
198 if msg == WM_TOUCH:
199 done = self._touch_handler(msg, wParam, lParam)
200
201 if msg >= WM_MOUSEMOVE and msg <= WM_MOUSELAST:
202 done = self._mouse_handler(msg, wParam, lParam)
203
204 if not done:
205 return windll.user32.CallWindowProcW(self.old_windProc,
206 hwnd, msg, wParam,
207 lParam)
208 return 1
209
210 # this on pushes WM_TOUCH messages onto our event stack
211 def _touch_handler(self, msg, wParam, lParam):
212 touches = (TOUCHINPUT * wParam)()
213 windll.user32.GetTouchInputInfo(HANDLE(lParam),
214 wParam,
215 touches,
216 sizeof(TOUCHINPUT))
217 for i in range(wParam):
218 self.touch_events.appendleft(touches[i])
219 windll.user32.CloseTouchInputHandle(HANDLE(lParam))
220 return True
221
222 # filter fake mouse events, because touch and stylus
223 # also make mouse events
224 def _mouse_handler(self, msg, wparam, lParam):
225 info = windll.user32.GetMessageExtraInfo()
226 # its a touch or a pen
227 if (info & PEN_OR_TOUCH_MASK) == PEN_OR_TOUCH_SIGNATURE:
228 if info & PEN_EVENT_TOUCH_MASK:
229 return True
230
231 MotionEventFactory.register('wm_touch', WM_MotionEventProvider)
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kivy/input/providers/wm_touch.py b/kivy/input/providers/wm_touch.py
--- a/kivy/input/providers/wm_touch.py
+++ b/kivy/input/providers/wm_touch.py
@@ -14,6 +14,7 @@
TOUCHEVENTF_MOVE, SM_CYCAPTION)
from kivy.input.motionevent import MotionEvent
from kivy.input.shape import ShapeRect
+from kivy.core.window import Window
class WM_MotionEvent(MotionEvent):
@@ -147,7 +148,10 @@
self.old_windProc = SetWindowLong_wrapper(
self.hwnd, GWL_WNDPROC, self.new_windProc)
- self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)
+ if Window.borderless or Window.fullscreen:
+ self.caption_size = 0
+ else:
+ self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)
def update(self, dispatch_fn):
win_rect = RECT()
|
{"golden_diff": "diff --git a/kivy/input/providers/wm_touch.py b/kivy/input/providers/wm_touch.py\n--- a/kivy/input/providers/wm_touch.py\n+++ b/kivy/input/providers/wm_touch.py\n@@ -14,6 +14,7 @@\n TOUCHEVENTF_MOVE, SM_CYCAPTION)\n from kivy.input.motionevent import MotionEvent\n from kivy.input.shape import ShapeRect\n+from kivy.core.window import Window\n \n \n class WM_MotionEvent(MotionEvent):\n@@ -147,7 +148,10 @@\n self.old_windProc = SetWindowLong_wrapper(\n self.hwnd, GWL_WNDPROC, self.new_windProc)\n \n- self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)\n+ if Window.borderless or Window.fullscreen:\n+ self.caption_size = 0\n+ else:\n+ self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)\n \n def update(self, dispatch_fn):\n win_rect = RECT()\n", "issue": "Offset in Y coordinate with Windows 7 and Multitouch HID Device\nHi all,\n\nIt seems there is a bug with Windows 7 (not tested on others Windows) and Kivy about the Y position of the cursor. There is a constant offset between the Windows Cursor position and the one Kivy uses. (See attached pictures). Note that the offset is bigger in fullscreen that in windowed mode.\n\nAfter having a quick look at the code, it seems that this offset is due to the caption size which is substracted to calculate the Y coordinate (line 165 in file wm_touch.py).\n\nI can try to run additional tests if needed.\n\nRegards.\n\nTouchtracer in windowed mode:\n\n\nTouchtracer in fullscreen mode:\n\n## <bountysource-plugin>\n\nWant to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/6477040-offset-in-y-coordinate-with-windows-7-and-multitouch-hid-device?utm_campaign=plugin&utm_content=tracker%2F42681&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F42681&utm_medium=issues&utm_source=github).\n</bountysource-plugin>\n\n", "before_files": [{"content": "'''\nSupport for WM_TOUCH messages (Windows platform)\n================================================\n'''\n\n__all__ = ('WM_MotionEventProvider', 'WM_MotionEvent')\n\nimport os\nfrom kivy.input.providers.wm_common import (\n WM_TABLET_QUERYSYSTEMGESTURE,\n GWL_WNDPROC, QUERYSYSTEMGESTURE_WNDPROC, WM_TOUCH, WM_MOUSEMOVE,\n WM_MOUSELAST, PEN_OR_TOUCH_MASK, PEN_OR_TOUCH_SIGNATURE,\n PEN_EVENT_TOUCH_MASK, TOUCHEVENTF_UP, TOUCHEVENTF_DOWN,\n TOUCHEVENTF_MOVE, SM_CYCAPTION)\nfrom kivy.input.motionevent import MotionEvent\nfrom kivy.input.shape import ShapeRect\n\n\nclass WM_MotionEvent(MotionEvent):\n '''MotionEvent representing the WM_MotionEvent event.\n Supports pos, shape and size profiles.\n '''\n __attrs__ = ('size', )\n\n def depack(self, args):\n self.is_touch = True\n self.shape = ShapeRect()\n self.sx, self.sy = args[0], args[1]\n self.shape.width = args[2][0]\n self.shape.height = args[2][1]\n self.size = self.shape.width * self.shape.height\n self.profile = ('pos', 'shape', 'size')\n\n super(WM_MotionEvent, self).depack(args)\n\n def __str__(self):\n args = (self.id, self.uid, str(self.spos), self.device)\n return '<WMMotionEvent id:%d uid:%d pos:%s device:%s>' % args\n\nif 'KIVY_DOC' in os.environ:\n # documentation hack\n WM_MotionEventProvider = None\n\nelse:\n from ctypes.wintypes import (ULONG, HANDLE, DWORD, LONG, UINT,\n WPARAM, LPARAM, BOOL)\n from ctypes import (windll, WINFUNCTYPE, POINTER,\n c_int, Structure, sizeof, byref)\n from collections import deque\n from kivy.input.provider import MotionEventProvider\n from kivy.input.factory 
import MotionEventFactory\n\n # check availability of RegisterTouchWindow\n if not hasattr(windll.user32, 'RegisterTouchWindow'):\n raise Exception('Unsupported Window version')\n\n LRESULT = LPARAM\n WNDPROC = WINFUNCTYPE(LRESULT, HANDLE, UINT, WPARAM, LPARAM)\n\n class TOUCHINPUT(Structure):\n _fields_ = [\n ('x', LONG),\n ('y', LONG),\n ('pSource', HANDLE),\n ('id', DWORD),\n ('flags', DWORD),\n ('mask', DWORD),\n ('time', DWORD),\n ('extraInfo', POINTER(ULONG)),\n ('size_x', DWORD),\n ('size_y', DWORD)]\n\n def size(self):\n return (self.size_x, self.size_y)\n\n def screen_x(self):\n return self.x / 100.0\n\n def screen_y(self):\n return self.y / 100.0\n\n def _event_type(self):\n if self.flags & TOUCHEVENTF_MOVE:\n return 'update'\n if self.flags & TOUCHEVENTF_DOWN:\n return 'begin'\n if self.flags & TOUCHEVENTF_UP:\n return 'end'\n event_type = property(_event_type)\n\n class RECT(Structure):\n _fields_ = [\n ('left', LONG),\n ('top', LONG),\n ('right', LONG),\n ('bottom', LONG)]\n\n x = property(lambda self: self.left)\n y = property(lambda self: self.top)\n w = property(lambda self: self.right - self.left)\n h = property(lambda self: self.bottom - self.top)\n\n try:\n windll.user32.SetWindowLongPtrW.restype = WNDPROC\n windll.user32.SetWindowLongPtrW.argtypes = [HANDLE, c_int, WNDPROC]\n SetWindowLong_wrapper = windll.user32.SetWindowLongPtrW\n except AttributeError:\n windll.user32.SetWindowLongW.restype = WNDPROC\n windll.user32.SetWindowLongW.argtypes = [HANDLE, c_int, WNDPROC]\n SetWindowLong_wrapper = windll.user32.SetWindowLongW\n\n windll.user32.GetMessageExtraInfo.restype = LPARAM\n windll.user32.GetMessageExtraInfo.argtypes = []\n windll.user32.GetClientRect.restype = BOOL\n windll.user32.GetClientRect.argtypes = [HANDLE, POINTER(RECT)]\n windll.user32.GetWindowRect.restype = BOOL\n windll.user32.GetWindowRect.argtypes = [HANDLE, POINTER(RECT)]\n windll.user32.CallWindowProcW.restype = LRESULT\n windll.user32.CallWindowProcW.argtypes = [WNDPROC, HANDLE, UINT, WPARAM,\n LPARAM]\n windll.user32.GetActiveWindow.restype = HANDLE\n windll.user32.GetActiveWindow.argtypes = []\n windll.user32.RegisterTouchWindow.restype = BOOL\n windll.user32.RegisterTouchWindow.argtypes = [HANDLE, ULONG]\n windll.user32.UnregisterTouchWindow.restype = BOOL\n windll.user32.UnregisterTouchWindow.argtypes = [HANDLE]\n windll.user32.GetTouchInputInfo.restype = BOOL\n windll.user32.GetTouchInputInfo.argtypes = [HANDLE, UINT,\n POINTER(TOUCHINPUT), c_int]\n windll.user32.GetSystemMetrics.restype = c_int\n windll.user32.GetSystemMetrics.argtypes = [c_int]\n\n class WM_MotionEventProvider(MotionEventProvider):\n\n def start(self):\n self.touch_events = deque()\n self.touches = {}\n self.uid = 0\n\n # get window handle, and register to recive WM_TOUCH messages\n self.hwnd = windll.user32.GetActiveWindow()\n windll.user32.RegisterTouchWindow(self.hwnd, 1)\n\n # inject our own wndProc to handle messages\n # before window manager does\n self.new_windProc = WNDPROC(self._touch_wndProc)\n self.old_windProc = SetWindowLong_wrapper(\n self.hwnd, GWL_WNDPROC, self.new_windProc)\n\n self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)\n\n def update(self, dispatch_fn):\n win_rect = RECT()\n windll.user32.GetWindowRect(self.hwnd, byref(win_rect))\n caption = self.caption_size\n\n while True:\n try:\n t = self.touch_events.pop()\n except:\n break\n\n # adjust x,y to window coordinates (0.0 to 1.0)\n x = (t.screen_x() - win_rect.x) / float(win_rect.w)\n y = 1.0 - (t.screen_y() - win_rect.y - caption\n ) 
/ float(win_rect.h)\n\n # actually dispatch input\n if t.event_type == 'begin':\n self.uid += 1\n self.touches[t.id] = WM_MotionEvent(\n self.device, self.uid, [x, y, t.size()])\n dispatch_fn('begin', self.touches[t.id])\n\n if t.event_type == 'update' and t.id in self.touches:\n self.touches[t.id].move([x, y, t.size()])\n dispatch_fn('update', self.touches[t.id])\n\n if t.event_type == 'end' and t.id in self.touches:\n touch = self.touches[t.id]\n touch.move([x, y, t.size()])\n touch.update_time_end()\n dispatch_fn('end', touch)\n del self.touches[t.id]\n\n def stop(self):\n windll.user32.UnregisterTouchWindow(self.hwnd)\n self.new_windProc = SetWindowLong_wrapper(\n self.hwnd, GWL_WNDPROC, self.old_windProc)\n\n # we inject this wndProc into our main window, to process\n # WM_TOUCH and mouse messages before the window manager does\n def _touch_wndProc(self, hwnd, msg, wParam, lParam):\n done = False\n if msg == WM_TABLET_QUERYSYSTEMGESTURE:\n return QUERYSYSTEMGESTURE_WNDPROC\n\n if msg == WM_TOUCH:\n done = self._touch_handler(msg, wParam, lParam)\n\n if msg >= WM_MOUSEMOVE and msg <= WM_MOUSELAST:\n done = self._mouse_handler(msg, wParam, lParam)\n\n if not done:\n return windll.user32.CallWindowProcW(self.old_windProc,\n hwnd, msg, wParam,\n lParam)\n return 1\n\n # this on pushes WM_TOUCH messages onto our event stack\n def _touch_handler(self, msg, wParam, lParam):\n touches = (TOUCHINPUT * wParam)()\n windll.user32.GetTouchInputInfo(HANDLE(lParam),\n wParam,\n touches,\n sizeof(TOUCHINPUT))\n for i in range(wParam):\n self.touch_events.appendleft(touches[i])\n windll.user32.CloseTouchInputHandle(HANDLE(lParam))\n return True\n\n # filter fake mouse events, because touch and stylus\n # also make mouse events\n def _mouse_handler(self, msg, wparam, lParam):\n info = windll.user32.GetMessageExtraInfo()\n # its a touch or a pen\n if (info & PEN_OR_TOUCH_MASK) == PEN_OR_TOUCH_SIGNATURE:\n if info & PEN_EVENT_TOUCH_MASK:\n return True\n\n MotionEventFactory.register('wm_touch', WM_MotionEventProvider)\n", "path": "kivy/input/providers/wm_touch.py"}], "after_files": [{"content": "'''\nSupport for WM_TOUCH messages (Windows platform)\n================================================\n'''\n\n__all__ = ('WM_MotionEventProvider', 'WM_MotionEvent')\n\nimport os\nfrom kivy.input.providers.wm_common import (\n WM_TABLET_QUERYSYSTEMGESTURE,\n GWL_WNDPROC, QUERYSYSTEMGESTURE_WNDPROC, WM_TOUCH, WM_MOUSEMOVE,\n WM_MOUSELAST, PEN_OR_TOUCH_MASK, PEN_OR_TOUCH_SIGNATURE,\n PEN_EVENT_TOUCH_MASK, TOUCHEVENTF_UP, TOUCHEVENTF_DOWN,\n TOUCHEVENTF_MOVE, SM_CYCAPTION)\nfrom kivy.input.motionevent import MotionEvent\nfrom kivy.input.shape import ShapeRect\nfrom kivy.core.window import Window\n\n\nclass WM_MotionEvent(MotionEvent):\n '''MotionEvent representing the WM_MotionEvent event.\n Supports pos, shape and size profiles.\n '''\n __attrs__ = ('size', )\n\n def depack(self, args):\n self.is_touch = True\n self.shape = ShapeRect()\n self.sx, self.sy = args[0], args[1]\n self.shape.width = args[2][0]\n self.shape.height = args[2][1]\n self.size = self.shape.width * self.shape.height\n self.profile = ('pos', 'shape', 'size')\n\n super(WM_MotionEvent, self).depack(args)\n\n def __str__(self):\n args = (self.id, self.uid, str(self.spos), self.device)\n return '<WMMotionEvent id:%d uid:%d pos:%s device:%s>' % args\n\nif 'KIVY_DOC' in os.environ:\n # documentation hack\n WM_MotionEventProvider = None\n\nelse:\n from ctypes.wintypes import (ULONG, HANDLE, DWORD, LONG, UINT,\n WPARAM, LPARAM, BOOL)\n from ctypes 
import (windll, WINFUNCTYPE, POINTER,\n c_int, Structure, sizeof, byref)\n from collections import deque\n from kivy.input.provider import MotionEventProvider\n from kivy.input.factory import MotionEventFactory\n\n # check availability of RegisterTouchWindow\n if not hasattr(windll.user32, 'RegisterTouchWindow'):\n raise Exception('Unsupported Window version')\n\n LRESULT = LPARAM\n WNDPROC = WINFUNCTYPE(LRESULT, HANDLE, UINT, WPARAM, LPARAM)\n\n class TOUCHINPUT(Structure):\n _fields_ = [\n ('x', LONG),\n ('y', LONG),\n ('pSource', HANDLE),\n ('id', DWORD),\n ('flags', DWORD),\n ('mask', DWORD),\n ('time', DWORD),\n ('extraInfo', POINTER(ULONG)),\n ('size_x', DWORD),\n ('size_y', DWORD)]\n\n def size(self):\n return (self.size_x, self.size_y)\n\n def screen_x(self):\n return self.x / 100.0\n\n def screen_y(self):\n return self.y / 100.0\n\n def _event_type(self):\n if self.flags & TOUCHEVENTF_MOVE:\n return 'update'\n if self.flags & TOUCHEVENTF_DOWN:\n return 'begin'\n if self.flags & TOUCHEVENTF_UP:\n return 'end'\n event_type = property(_event_type)\n\n class RECT(Structure):\n _fields_ = [\n ('left', LONG),\n ('top', LONG),\n ('right', LONG),\n ('bottom', LONG)]\n\n x = property(lambda self: self.left)\n y = property(lambda self: self.top)\n w = property(lambda self: self.right - self.left)\n h = property(lambda self: self.bottom - self.top)\n\n try:\n windll.user32.SetWindowLongPtrW.restype = WNDPROC\n windll.user32.SetWindowLongPtrW.argtypes = [HANDLE, c_int, WNDPROC]\n SetWindowLong_wrapper = windll.user32.SetWindowLongPtrW\n except AttributeError:\n windll.user32.SetWindowLongW.restype = WNDPROC\n windll.user32.SetWindowLongW.argtypes = [HANDLE, c_int, WNDPROC]\n SetWindowLong_wrapper = windll.user32.SetWindowLongW\n\n windll.user32.GetMessageExtraInfo.restype = LPARAM\n windll.user32.GetMessageExtraInfo.argtypes = []\n windll.user32.GetClientRect.restype = BOOL\n windll.user32.GetClientRect.argtypes = [HANDLE, POINTER(RECT)]\n windll.user32.GetWindowRect.restype = BOOL\n windll.user32.GetWindowRect.argtypes = [HANDLE, POINTER(RECT)]\n windll.user32.CallWindowProcW.restype = LRESULT\n windll.user32.CallWindowProcW.argtypes = [WNDPROC, HANDLE, UINT, WPARAM,\n LPARAM]\n windll.user32.GetActiveWindow.restype = HANDLE\n windll.user32.GetActiveWindow.argtypes = []\n windll.user32.RegisterTouchWindow.restype = BOOL\n windll.user32.RegisterTouchWindow.argtypes = [HANDLE, ULONG]\n windll.user32.UnregisterTouchWindow.restype = BOOL\n windll.user32.UnregisterTouchWindow.argtypes = [HANDLE]\n windll.user32.GetTouchInputInfo.restype = BOOL\n windll.user32.GetTouchInputInfo.argtypes = [HANDLE, UINT,\n POINTER(TOUCHINPUT), c_int]\n windll.user32.GetSystemMetrics.restype = c_int\n windll.user32.GetSystemMetrics.argtypes = [c_int]\n\n class WM_MotionEventProvider(MotionEventProvider):\n\n def start(self):\n self.touch_events = deque()\n self.touches = {}\n self.uid = 0\n\n # get window handle, and register to recive WM_TOUCH messages\n self.hwnd = windll.user32.GetActiveWindow()\n windll.user32.RegisterTouchWindow(self.hwnd, 1)\n\n # inject our own wndProc to handle messages\n # before window manager does\n self.new_windProc = WNDPROC(self._touch_wndProc)\n self.old_windProc = SetWindowLong_wrapper(\n self.hwnd, GWL_WNDPROC, self.new_windProc)\n\n if Window.borderless or Window.fullscreen:\n self.caption_size = 0\n else:\n self.caption_size = windll.user32.GetSystemMetrics(SM_CYCAPTION)\n\n def update(self, dispatch_fn):\n win_rect = RECT()\n windll.user32.GetWindowRect(self.hwnd, 
byref(win_rect))\n caption = self.caption_size\n\n while True:\n try:\n t = self.touch_events.pop()\n except:\n break\n\n # adjust x,y to window coordinates (0.0 to 1.0)\n x = (t.screen_x() - win_rect.x) / float(win_rect.w)\n y = 1.0 - (t.screen_y() - win_rect.y - caption\n ) / float(win_rect.h)\n\n # actually dispatch input\n if t.event_type == 'begin':\n self.uid += 1\n self.touches[t.id] = WM_MotionEvent(\n self.device, self.uid, [x, y, t.size()])\n dispatch_fn('begin', self.touches[t.id])\n\n if t.event_type == 'update' and t.id in self.touches:\n self.touches[t.id].move([x, y, t.size()])\n dispatch_fn('update', self.touches[t.id])\n\n if t.event_type == 'end' and t.id in self.touches:\n touch = self.touches[t.id]\n touch.move([x, y, t.size()])\n touch.update_time_end()\n dispatch_fn('end', touch)\n del self.touches[t.id]\n\n def stop(self):\n windll.user32.UnregisterTouchWindow(self.hwnd)\n self.new_windProc = SetWindowLong_wrapper(\n self.hwnd, GWL_WNDPROC, self.old_windProc)\n\n # we inject this wndProc into our main window, to process\n # WM_TOUCH and mouse messages before the window manager does\n def _touch_wndProc(self, hwnd, msg, wParam, lParam):\n done = False\n if msg == WM_TABLET_QUERYSYSTEMGESTURE:\n return QUERYSYSTEMGESTURE_WNDPROC\n\n if msg == WM_TOUCH:\n done = self._touch_handler(msg, wParam, lParam)\n\n if msg >= WM_MOUSEMOVE and msg <= WM_MOUSELAST:\n done = self._mouse_handler(msg, wParam, lParam)\n\n if not done:\n return windll.user32.CallWindowProcW(self.old_windProc,\n hwnd, msg, wParam,\n lParam)\n return 1\n\n # this on pushes WM_TOUCH messages onto our event stack\n def _touch_handler(self, msg, wParam, lParam):\n touches = (TOUCHINPUT * wParam)()\n windll.user32.GetTouchInputInfo(HANDLE(lParam),\n wParam,\n touches,\n sizeof(TOUCHINPUT))\n for i in range(wParam):\n self.touch_events.appendleft(touches[i])\n windll.user32.CloseTouchInputHandle(HANDLE(lParam))\n return True\n\n # filter fake mouse events, because touch and stylus\n # also make mouse events\n def _mouse_handler(self, msg, wparam, lParam):\n info = windll.user32.GetMessageExtraInfo()\n # its a touch or a pen\n if (info & PEN_OR_TOUCH_MASK) == PEN_OR_TOUCH_SIGNATURE:\n if info & PEN_EVENT_TOUCH_MASK:\n return True\n\n MotionEventFactory.register('wm_touch', WM_MotionEventProvider)\n", "path": "kivy/input/providers/wm_touch.py"}]}
| 3,333 | 226 |
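The golden diff above zeroes the caption correction for borderless and fullscreen windows, where no title bar is present. A minimal sketch of the y-normalisation involved, with invented window metrics standing in for the real Win32 values (`SM_CYCAPTION` is assumed to be 23 px here, a plausible Windows 7 value; the helper name is illustrative):

```python
# Sketch of the y mapping from WM_MotionEventProvider.update(), fake metrics only.
def normalised_y(screen_y, win_top, win_height, caption):
    return 1.0 - (screen_y - win_top - caption) / float(win_height)

win_top, win_height = 0, 800   # pretend 800 px tall fullscreen window
caption = 23                   # assumed SM_CYCAPTION; no title bar actually exists

touch_y = 400.0                # touch dead centre of the screen
print(normalised_y(touch_y, win_top, win_height, caption))  # 0.52875 -> visible offset
print(normalised_y(touch_y, win_top, win_height, 0))        # 0.5     -> corrected
```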
gh_patches_debug_2870
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-753
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug for replay feature from pwd
Running the following command inside of a template repo:
`$ cookiecutter -o tmp .`
Will cause `replay.dump` to write files like this:
`~/.cookiecutter_replay/..json`
Identified by @eliasdorneles
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/main.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.main
6 -----------------
7
8 Main entry point for the `cookiecutter` command.
9
10 The code in this module is also a good example of how to use Cookiecutter as a
11 library rather than a script.
12 """
13
14 from __future__ import unicode_literals
15 import logging
16 import os
17 import re
18
19 from .config import get_user_config, USER_CONFIG_PATH
20 from .exceptions import InvalidModeException, RepositoryNotFound
21 from .prompt import prompt_for_config
22 from .generate import generate_context, generate_files
23 from .vcs import clone
24 from .replay import dump, load
25
26 logger = logging.getLogger(__name__)
27
28 builtin_abbreviations = {
29 'gh': 'https://github.com/{0}.git',
30 'bb': 'https://bitbucket.org/{0}',
31 }
32
33 REPO_REGEX = re.compile(r"""
34 (?x)
35 ((((git|hg)\+)?(git|ssh|https?):(//)?) # something like git:// ssh:// etc.
36 | # or
37 (\w+@[\w\.]+) # something like user@...
38 )
39 """)
40
41
42 def is_repo_url(value):
43 """Return True if value is a repository URL."""
44 return bool(REPO_REGEX.match(value))
45
46
47 def expand_abbreviations(template, config_dict):
48 """
49 Expand abbreviations in a template name.
50
51 :param template: The project template name.
52 :param config_dict: The user config, which will contain abbreviation
53 definitions.
54 """
55
56 abbreviations = builtin_abbreviations.copy()
57 abbreviations.update(config_dict.get('abbreviations', {}))
58
59 if template in abbreviations:
60 return abbreviations[template]
61
62 # Split on colon. If there is no colon, rest will be empty
63 # and prefix will be the whole template
64 prefix, sep, rest = template.partition(':')
65 if prefix in abbreviations:
66 return abbreviations[prefix].format(rest)
67
68 return template
69
70
71 def cookiecutter(
72 template, checkout=None, no_input=False, extra_context=None,
73 replay=False, overwrite_if_exists=False, output_dir='.',
74 config_file=USER_CONFIG_PATH):
75 """
76 API equivalent to using Cookiecutter at the command line.
77
78 :param template: A directory containing a project template directory,
79 or a URL to a git repository.
80 :param checkout: The branch, tag or commit ID to checkout after clone.
81 :param no_input: Prompt the user at command line for manual configuration?
82 :param extra_context: A dictionary of context that overrides default
83 and user configuration.
84 :param: overwrite_if_exists: Overwrite the contents of output directory
85 if it exists
86 :param output_dir: Where to output the generated project dir into.
87 :param config_file: User configuration file path.
88 """
89 if replay and ((no_input is not False) or (extra_context is not None)):
90 err_msg = (
91 "You can not use both replay and no_input or extra_context "
92 "at the same time."
93 )
94 raise InvalidModeException(err_msg)
95
96 # Get user config from ~/.cookiecutterrc or equivalent
97 # If no config file, sensible defaults from config.DEFAULT_CONFIG are used
98 config_dict = get_user_config(config_file=config_file)
99
100 template = expand_abbreviations(template, config_dict)
101
102 if is_repo_url(template):
103 repo_dir = clone(
104 repo_url=template,
105 checkout=checkout,
106 clone_to_dir=config_dict['cookiecutters_dir'],
107 no_input=no_input
108 )
109 else:
110 # If it's a local repo, no need to clone or copy to your
111 # cookiecutters_dir
112 repo_dir = template
113
114 if not os.path.isdir(repo_dir):
115 raise RepositoryNotFound(
116 'The repository {0} could not be located.'.format(template)
117 )
118
119 template_name = os.path.basename(template)
120
121 if replay:
122 context = load(config_dict['replay_dir'], template_name)
123 else:
124 context_file = os.path.join(repo_dir, 'cookiecutter.json')
125 logging.debug('context_file is {0}'.format(context_file))
126
127 context = generate_context(
128 context_file=context_file,
129 default_context=config_dict['default_context'],
130 extra_context=extra_context,
131 )
132
133 # prompt the user to manually configure at the command line.
134 # except when 'no-input' flag is set
135 context['cookiecutter'] = prompt_for_config(context, no_input)
136
137 dump(config_dict['replay_dir'], template_name, context)
138
139 # Create project from local context and project template.
140 return generate_files(
141 repo_dir=repo_dir,
142 context=context,
143 overwrite_if_exists=overwrite_if_exists,
144 output_dir=output_dir
145 )
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cookiecutter/main.py b/cookiecutter/main.py
--- a/cookiecutter/main.py
+++ b/cookiecutter/main.py
@@ -116,7 +116,7 @@
'The repository {0} could not be located.'.format(template)
)
- template_name = os.path.basename(template)
+ template_name = os.path.basename(os.path.abspath(template))
if replay:
context = load(config_dict['replay_dir'], template_name)
|
{"golden_diff": "diff --git a/cookiecutter/main.py b/cookiecutter/main.py\n--- a/cookiecutter/main.py\n+++ b/cookiecutter/main.py\n@@ -116,7 +116,7 @@\n 'The repository {0} could not be located.'.format(template)\n )\n \n- template_name = os.path.basename(template)\n+ template_name = os.path.basename(os.path.abspath(template))\n \n if replay:\n context = load(config_dict['replay_dir'], template_name)\n", "issue": "Bug for replay feature from pwd\nRunning the following command inside of a template repo:\n\n`$ cookiecutter -o tmp .`\n\nWill cause `replay.dump` to files like this:\n\n`~/.cookiecutter_replay/..json`\n\nIdentified by @eliasdorneles \n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.main\n-----------------\n\nMain entry point for the `cookiecutter` command.\n\nThe code in this module is also a good example of how to use Cookiecutter as a\nlibrary rather than a script.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\nimport os\nimport re\n\nfrom .config import get_user_config, USER_CONFIG_PATH\nfrom .exceptions import InvalidModeException, RepositoryNotFound\nfrom .prompt import prompt_for_config\nfrom .generate import generate_context, generate_files\nfrom .vcs import clone\nfrom .replay import dump, load\n\nlogger = logging.getLogger(__name__)\n\nbuiltin_abbreviations = {\n 'gh': 'https://github.com/{0}.git',\n 'bb': 'https://bitbucket.org/{0}',\n}\n\nREPO_REGEX = re.compile(r\"\"\"\n(?x)\n((((git|hg)\\+)?(git|ssh|https?):(//)?) # something like git:// ssh:// etc.\n | # or\n (\\w+@[\\w\\.]+) # something like user@...\n)\n\"\"\")\n\n\ndef is_repo_url(value):\n \"\"\"Return True if value is a repository URL.\"\"\"\n return bool(REPO_REGEX.match(value))\n\n\ndef expand_abbreviations(template, config_dict):\n \"\"\"\n Expand abbreviations in a template name.\n\n :param template: The project template name.\n :param config_dict: The user config, which will contain abbreviation\n definitions.\n \"\"\"\n\n abbreviations = builtin_abbreviations.copy()\n abbreviations.update(config_dict.get('abbreviations', {}))\n\n if template in abbreviations:\n return abbreviations[template]\n\n # Split on colon. 
If there is no colon, rest will be empty\n # and prefix will be the whole template\n prefix, sep, rest = template.partition(':')\n if prefix in abbreviations:\n return abbreviations[prefix].format(rest)\n\n return template\n\n\ndef cookiecutter(\n template, checkout=None, no_input=False, extra_context=None,\n replay=False, overwrite_if_exists=False, output_dir='.',\n config_file=USER_CONFIG_PATH):\n \"\"\"\n API equivalent to using Cookiecutter at the command line.\n\n :param template: A directory containing a project template directory,\n or a URL to a git repository.\n :param checkout: The branch, tag or commit ID to checkout after clone.\n :param no_input: Prompt the user at command line for manual configuration?\n :param extra_context: A dictionary of context that overrides default\n and user configuration.\n :param: overwrite_if_exists: Overwrite the contents of output directory\n if it exists\n :param output_dir: Where to output the generated project dir into.\n :param config_file: User configuration file path.\n \"\"\"\n if replay and ((no_input is not False) or (extra_context is not None)):\n err_msg = (\n \"You can not use both replay and no_input or extra_context \"\n \"at the same time.\"\n )\n raise InvalidModeException(err_msg)\n\n # Get user config from ~/.cookiecutterrc or equivalent\n # If no config file, sensible defaults from config.DEFAULT_CONFIG are used\n config_dict = get_user_config(config_file=config_file)\n\n template = expand_abbreviations(template, config_dict)\n\n if is_repo_url(template):\n repo_dir = clone(\n repo_url=template,\n checkout=checkout,\n clone_to_dir=config_dict['cookiecutters_dir'],\n no_input=no_input\n )\n else:\n # If it's a local repo, no need to clone or copy to your\n # cookiecutters_dir\n repo_dir = template\n\n if not os.path.isdir(repo_dir):\n raise RepositoryNotFound(\n 'The repository {0} could not be located.'.format(template)\n )\n\n template_name = os.path.basename(template)\n\n if replay:\n context = load(config_dict['replay_dir'], template_name)\n else:\n context_file = os.path.join(repo_dir, 'cookiecutter.json')\n logging.debug('context_file is {0}'.format(context_file))\n\n context = generate_context(\n context_file=context_file,\n default_context=config_dict['default_context'],\n extra_context=extra_context,\n )\n\n # prompt the user to manually configure at the command line.\n # except when 'no-input' flag is set\n context['cookiecutter'] = prompt_for_config(context, no_input)\n\n dump(config_dict['replay_dir'], template_name, context)\n\n # Create project from local context and project template.\n return generate_files(\n repo_dir=repo_dir,\n context=context,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir\n )\n", "path": "cookiecutter/main.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.main\n-----------------\n\nMain entry point for the `cookiecutter` command.\n\nThe code in this module is also a good example of how to use Cookiecutter as a\nlibrary rather than a script.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\nimport os\nimport re\n\nfrom .config import get_user_config, USER_CONFIG_PATH\nfrom .exceptions import InvalidModeException, RepositoryNotFound\nfrom .prompt import prompt_for_config\nfrom .generate import generate_context, generate_files\nfrom .vcs import clone\nfrom .replay import dump, load\n\nlogger = logging.getLogger(__name__)\n\nbuiltin_abbreviations = {\n 'gh': 'https://github.com/{0}.git',\n 'bb': 
'https://bitbucket.org/{0}',\n}\n\nREPO_REGEX = re.compile(r\"\"\"\n(?x)\n((((git|hg)\\+)?(git|ssh|https?):(//)?) # something like git:// ssh:// etc.\n | # or\n (\\w+@[\\w\\.]+) # something like user@...\n)\n\"\"\")\n\n\ndef is_repo_url(value):\n \"\"\"Return True if value is a repository URL.\"\"\"\n return bool(REPO_REGEX.match(value))\n\n\ndef expand_abbreviations(template, config_dict):\n \"\"\"\n Expand abbreviations in a template name.\n\n :param template: The project template name.\n :param config_dict: The user config, which will contain abbreviation\n definitions.\n \"\"\"\n\n abbreviations = builtin_abbreviations.copy()\n abbreviations.update(config_dict.get('abbreviations', {}))\n\n if template in abbreviations:\n return abbreviations[template]\n\n # Split on colon. If there is no colon, rest will be empty\n # and prefix will be the whole template\n prefix, sep, rest = template.partition(':')\n if prefix in abbreviations:\n return abbreviations[prefix].format(rest)\n\n return template\n\n\ndef cookiecutter(\n template, checkout=None, no_input=False, extra_context=None,\n replay=False, overwrite_if_exists=False, output_dir='.',\n config_file=USER_CONFIG_PATH):\n \"\"\"\n API equivalent to using Cookiecutter at the command line.\n\n :param template: A directory containing a project template directory,\n or a URL to a git repository.\n :param checkout: The branch, tag or commit ID to checkout after clone.\n :param no_input: Prompt the user at command line for manual configuration?\n :param extra_context: A dictionary of context that overrides default\n and user configuration.\n :param: overwrite_if_exists: Overwrite the contents of output directory\n if it exists\n :param output_dir: Where to output the generated project dir into.\n :param config_file: User configuration file path.\n \"\"\"\n if replay and ((no_input is not False) or (extra_context is not None)):\n err_msg = (\n \"You can not use both replay and no_input or extra_context \"\n \"at the same time.\"\n )\n raise InvalidModeException(err_msg)\n\n # Get user config from ~/.cookiecutterrc or equivalent\n # If no config file, sensible defaults from config.DEFAULT_CONFIG are used\n config_dict = get_user_config(config_file=config_file)\n\n template = expand_abbreviations(template, config_dict)\n\n if is_repo_url(template):\n repo_dir = clone(\n repo_url=template,\n checkout=checkout,\n clone_to_dir=config_dict['cookiecutters_dir'],\n no_input=no_input\n )\n else:\n # If it's a local repo, no need to clone or copy to your\n # cookiecutters_dir\n repo_dir = template\n\n if not os.path.isdir(repo_dir):\n raise RepositoryNotFound(\n 'The repository {0} could not be located.'.format(template)\n )\n\n template_name = os.path.basename(os.path.abspath(template))\n\n if replay:\n context = load(config_dict['replay_dir'], template_name)\n else:\n context_file = os.path.join(repo_dir, 'cookiecutter.json')\n logging.debug('context_file is {0}'.format(context_file))\n\n context = generate_context(\n context_file=context_file,\n default_context=config_dict['default_context'],\n extra_context=extra_context,\n )\n\n # prompt the user to manually configure at the command line.\n # except when 'no-input' flag is set\n context['cookiecutter'] = prompt_for_config(context, no_input)\n\n dump(config_dict['replay_dir'], template_name, context)\n\n # Create project from local context and project template.\n return generate_files(\n repo_dir=repo_dir,\n context=context,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir\n )\n", 
"path": "cookiecutter/main.py"}]}
| 1,681 | 110 |
gh_patches_debug_13322
|
rasdani/github-patches
|
git_diff
|
digitalfabrik__integreat-cms-285
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove save button on disabled forms
Even if objects are archived and the corresponding forms are disabled, the save buttons are still visible, leading to errors when submitting.
Remove the buttons for:
- [ ] Pages
- [ ] Events
- [x] POIs
--- END ISSUE ---
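As an editorial aside, the fix recorded further down disables the page forms when a page is archived; the snippet below is a minimal, self-contained sketch of the kind of `disabled` handling such a form constructor can implement (the class name and field set are hypothetical; only the `disabled` kwarg is taken from the view code in this record):

```python
from django import forms


class ArchivableForm(forms.Form):
    """Hypothetical form showing one way to honour a `disabled` constructor flag."""

    title = forms.CharField()

    def __init__(self, *args, disabled=False, **kwargs):
        super().__init__(*args, **kwargs)
        if disabled:
            for field in self.fields.values():
                # Django renders the widget read-only and ignores posted values
                field.disabled = True
```

With such a flag in place, the template can hide the save button by checking the same condition that was used to build the form.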
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/cms/views/pages/page_view.py`
Content:
```
1 """
2
3 Returns:
4 [type]: [description]
5 """
6 import logging
7
8 from django.contrib import messages
9 from django.contrib.auth.decorators import login_required
10 from django.contrib.auth.mixins import PermissionRequiredMixin
11 from django.core.exceptions import PermissionDenied
12 from django.shortcuts import render, redirect
13 from django.utils.decorators import method_decorator
14 from django.utils.translation import ugettext as _
15 from django.views.generic import TemplateView
16
17 from ...constants import status
18 from ...decorators import region_permission_required
19 from ...forms.pages import PageForm, PageTranslationForm
20 from ...models import Page, PageTranslation, Region, Language
21
22 logger = logging.getLogger(__name__)
23
24
25 @method_decorator(login_required, name='dispatch')
26 @method_decorator(region_permission_required, name='dispatch')
27 class PageView(PermissionRequiredMixin, TemplateView):
28 permission_required = 'cms.view_pages'
29 raise_exception = True
30
31 template_name = 'pages/page_form.html'
32 base_context = {
33 'current_menu_item': 'pages',
34 'PUBLIC': status.PUBLIC
35 }
36
37 def get(self, request, *args, **kwargs):
38
39 region = Region.objects.get(slug=kwargs.get('region_slug'))
40
41 language = Language.objects.get(code=kwargs.get('language_code'))
42
43 # get page and translation objects if they exist
44 page = Page.objects.filter(id=kwargs.get('page_id')).first()
45 page_translation = PageTranslation.objects.filter(
46 page=page,
47 language=language,
48 ).first()
49
50 # Make form disabled if user has no permission to edit the page
51 disabled = not request.user.has_perm('cms.edit_page', page)
52 if disabled:
53 messages.warning(request, _("You don't have the permission to edit this page."))
54
55 page_form = PageForm(
56 instance=page,
57 region=region,
58 language=language,
59 disabled=disabled
60 )
61 page_translation_form = PageTranslationForm(
62 instance=page_translation,
63 disabled=disabled
64 )
65
66 return render(request, self.template_name, {
67 **self.base_context,
68 'page_form': page_form,
69 'page_translation_form': page_translation_form,
70 'page': page,
71 'language': language,
72 # Languages for tab view
73 'languages': region.languages if page else [language],
74 })
75
76 # pylint: disable=too-many-branches,unused-argument
77 def post(self, request, *args, **kwargs):
78
79 region = Region.objects.get(slug=kwargs.get('region_slug'))
80 language = Language.objects.get(code=kwargs.get('language_code'))
81
82 page_instance = Page.objects.filter(id=kwargs.get('page_id')).first()
83 page_translation_instance = PageTranslation.objects.filter(
84 page=page_instance,
85 language=language,
86 ).first()
87
88 if not request.user.has_perm('cms.edit_page', page_instance):
89 raise PermissionDenied
90
91 page_form = PageForm(
92 request.POST,
93 instance=page_instance,
94 region=region,
95 language=language,
96 )
97 page_translation_form = PageTranslationForm(
98 request.POST,
99 instance=page_translation_instance,
100 region=region,
101 language=language,
102 )
103
104 if page_translation_form.data.get('public') and 'public' in page_translation_form.changed_data:
105 if not request.user.has_perm('cms.publish_page', page_instance):
106 raise PermissionDenied
107
108 # TODO: error handling
109 if not page_form.is_valid() or not page_translation_form.is_valid():
110 messages.error(request, _('Errors have occurred.'))
111 return render(request, self.template_name, {
112 **self.base_context,
113 'page_form': page_form,
114 'page_translation_form': page_translation_form,
115 'page': page_instance,
116 'language': language,
117 # Languages for tab view
118 'languages': region.languages if page_instance else [language],
119 })
120
121 if not page_form.has_changed() and not page_translation_form.has_changed():
122 messages.info(request, _('No changes detected.'))
123 return render(request, self.template_name, {
124 **self.base_context,
125 'page_form': page_form,
126 'page_translation_form': page_translation_form,
127 'page': page_instance,
128 'language': language,
129 # Languages for tab view
130 'languages': region.languages if page_instance else [language],
131 })
132
133 page = page_form.save()
134 page_translation = page_translation_form.save(
135 page=page,
136 user=request.user,
137 )
138
139 published = page_translation.status == status.PUBLIC
140 if not page_instance:
141 if published:
142 messages.success(request, _('Page was successfully created and published.'))
143 else:
144 messages.success(request, _('Page was successfully created.'))
145 elif not page_translation_instance:
146 if published:
147 messages.success(request, _('Translation was successfully created and published.'))
148 else:
149 messages.success(request, _('Translation was successfully created.'))
150 else:
151 if published:
152 messages.success(request, _('Translation was successfully published.'))
153 else:
154 messages.success(request, _('Translation was successfully saved.'))
155
156 return redirect('edit_page', **{
157 'page_id': page.id,
158 'region_slug': region.slug,
159 'language_code': language.code,
160 })
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/backend/cms/views/pages/page_view.py b/backend/cms/views/pages/page_view.py
--- a/backend/cms/views/pages/page_view.py
+++ b/backend/cms/views/pages/page_view.py
@@ -48,9 +48,14 @@
).first()
# Make form disabled if user has no permission to edit the page
- disabled = not request.user.has_perm('cms.edit_page', page)
- if disabled:
+ if not request.user.has_perm('cms.edit_page', page):
+ disabled = True
messages.warning(request, _("You don't have the permission to edit this page."))
+ elif page and page.archived:
+ disabled = True
+ messages.warning(request, _("You cannot edit this page because it is archived."))
+ else:
+ disabled = False
page_form = PageForm(
instance=page,
|
{"golden_diff": "diff --git a/backend/cms/views/pages/page_view.py b/backend/cms/views/pages/page_view.py\n--- a/backend/cms/views/pages/page_view.py\n+++ b/backend/cms/views/pages/page_view.py\n@@ -48,9 +48,14 @@\n ).first()\n \n # Make form disabled if user has no permission to edit the page\n- disabled = not request.user.has_perm('cms.edit_page', page)\n- if disabled:\n+ if not request.user.has_perm('cms.edit_page', page):\n+ disabled = True\n messages.warning(request, _(\"You don't have the permission to edit this page.\"))\n+ elif page and page.archived:\n+ disabled = True\n+ messages.warning(request, _(\"You cannot edit this page because it is archived.\"))\n+ else:\n+ disabled = False\n \n page_form = PageForm(\n instance=page,\n", "issue": "Remove save button on disabled forms\nEven if objects are archived and the corresponding forms are disabled, the save buttons are still visible, leading to errors when submitting.\r\nRemove the buttons for:\r\n- [ ] Pages\r\n- [ ] Events\r\n- [x] POIs\n", "before_files": [{"content": "\"\"\"\n\nReturns:\n [type]: [description]\n\"\"\"\nimport logging\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import render, redirect\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import TemplateView\n\nfrom ...constants import status\nfrom ...decorators import region_permission_required\nfrom ...forms.pages import PageForm, PageTranslationForm\nfrom ...models import Page, PageTranslation, Region, Language\n\nlogger = logging.getLogger(__name__)\n\n\n@method_decorator(login_required, name='dispatch')\n@method_decorator(region_permission_required, name='dispatch')\nclass PageView(PermissionRequiredMixin, TemplateView):\n permission_required = 'cms.view_pages'\n raise_exception = True\n\n template_name = 'pages/page_form.html'\n base_context = {\n 'current_menu_item': 'pages',\n 'PUBLIC': status.PUBLIC\n }\n\n def get(self, request, *args, **kwargs):\n\n region = Region.objects.get(slug=kwargs.get('region_slug'))\n\n language = Language.objects.get(code=kwargs.get('language_code'))\n\n # get page and translation objects if they exist\n page = Page.objects.filter(id=kwargs.get('page_id')).first()\n page_translation = PageTranslation.objects.filter(\n page=page,\n language=language,\n ).first()\n\n # Make form disabled if user has no permission to edit the page\n disabled = not request.user.has_perm('cms.edit_page', page)\n if disabled:\n messages.warning(request, _(\"You don't have the permission to edit this page.\"))\n\n page_form = PageForm(\n instance=page,\n region=region,\n language=language,\n disabled=disabled\n )\n page_translation_form = PageTranslationForm(\n instance=page_translation,\n disabled=disabled\n )\n\n return render(request, self.template_name, {\n **self.base_context,\n 'page_form': page_form,\n 'page_translation_form': page_translation_form,\n 'page': page,\n 'language': language,\n # Languages for tab view\n 'languages': region.languages if page else [language],\n })\n\n # pylint: disable=too-many-branches,unused-argument\n def post(self, request, *args, **kwargs):\n\n region = Region.objects.get(slug=kwargs.get('region_slug'))\n language = Language.objects.get(code=kwargs.get('language_code'))\n\n page_instance = 
Page.objects.filter(id=kwargs.get('page_id')).first()\n page_translation_instance = PageTranslation.objects.filter(\n page=page_instance,\n language=language,\n ).first()\n\n if not request.user.has_perm('cms.edit_page', page_instance):\n raise PermissionDenied\n\n page_form = PageForm(\n request.POST,\n instance=page_instance,\n region=region,\n language=language,\n )\n page_translation_form = PageTranslationForm(\n request.POST,\n instance=page_translation_instance,\n region=region,\n language=language,\n )\n\n if page_translation_form.data.get('public') and 'public' in page_translation_form.changed_data:\n if not request.user.has_perm('cms.publish_page', page_instance):\n raise PermissionDenied\n\n # TODO: error handling\n if not page_form.is_valid() or not page_translation_form.is_valid():\n messages.error(request, _('Errors have occurred.'))\n return render(request, self.template_name, {\n **self.base_context,\n 'page_form': page_form,\n 'page_translation_form': page_translation_form,\n 'page': page_instance,\n 'language': language,\n # Languages for tab view\n 'languages': region.languages if page_instance else [language],\n })\n\n if not page_form.has_changed() and not page_translation_form.has_changed():\n messages.info(request, _('No changes detected.'))\n return render(request, self.template_name, {\n **self.base_context,\n 'page_form': page_form,\n 'page_translation_form': page_translation_form,\n 'page': page_instance,\n 'language': language,\n # Languages for tab view\n 'languages': region.languages if page_instance else [language],\n })\n\n page = page_form.save()\n page_translation = page_translation_form.save(\n page=page,\n user=request.user,\n )\n\n published = page_translation.status == status.PUBLIC\n if not page_instance:\n if published:\n messages.success(request, _('Page was successfully created and published.'))\n else:\n messages.success(request, _('Page was successfully created.'))\n elif not page_translation_instance:\n if published:\n messages.success(request, _('Translation was successfully created and published.'))\n else:\n messages.success(request, _('Translation was successfully created.'))\n else:\n if published:\n messages.success(request, _('Translation was successfully published.'))\n else:\n messages.success(request, _('Translation was successfully saved.'))\n\n return redirect('edit_page', **{\n 'page_id': page.id,\n 'region_slug': region.slug,\n 'language_code': language.code,\n })\n", "path": "backend/cms/views/pages/page_view.py"}], "after_files": [{"content": "\"\"\"\n\nReturns:\n [type]: [description]\n\"\"\"\nimport logging\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import render, redirect\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import TemplateView\n\nfrom ...constants import status\nfrom ...decorators import region_permission_required\nfrom ...forms.pages import PageForm, PageTranslationForm\nfrom ...models import Page, PageTranslation, Region, Language\n\nlogger = logging.getLogger(__name__)\n\n\n@method_decorator(login_required, name='dispatch')\n@method_decorator(region_permission_required, name='dispatch')\nclass PageView(PermissionRequiredMixin, TemplateView):\n permission_required = 'cms.view_pages'\n raise_exception = True\n\n template_name = 
'pages/page_form.html'\n base_context = {\n 'current_menu_item': 'pages',\n 'PUBLIC': status.PUBLIC\n }\n\n def get(self, request, *args, **kwargs):\n\n region = Region.objects.get(slug=kwargs.get('region_slug'))\n\n language = Language.objects.get(code=kwargs.get('language_code'))\n\n # get page and translation objects if they exist\n page = Page.objects.filter(id=kwargs.get('page_id')).first()\n page_translation = PageTranslation.objects.filter(\n page=page,\n language=language,\n ).first()\n\n # Make form disabled if user has no permission to edit the page\n if not request.user.has_perm('cms.edit_page', page):\n disabled = True\n messages.warning(request, _(\"You don't have the permission to edit this page.\"))\n elif page and page.archived:\n disabled = True\n messages.warning(request, _(\"You cannot edit this page because it is archived.\"))\n else:\n disabled = False\n\n page_form = PageForm(\n instance=page,\n region=region,\n language=language,\n disabled=disabled\n )\n page_translation_form = PageTranslationForm(\n instance=page_translation,\n disabled=disabled\n )\n\n return render(request, self.template_name, {\n **self.base_context,\n 'page_form': page_form,\n 'page_translation_form': page_translation_form,\n 'page': page,\n 'language': language,\n # Languages for tab view\n 'languages': region.languages if page else [language],\n })\n\n # pylint: disable=too-many-branches,unused-argument\n def post(self, request, *args, **kwargs):\n\n region = Region.objects.get(slug=kwargs.get('region_slug'))\n language = Language.objects.get(code=kwargs.get('language_code'))\n\n page_instance = Page.objects.filter(id=kwargs.get('page_id')).first()\n page_translation_instance = PageTranslation.objects.filter(\n page=page_instance,\n language=language,\n ).first()\n\n if not request.user.has_perm('cms.edit_page', page_instance):\n raise PermissionDenied\n\n page_form = PageForm(\n request.POST,\n instance=page_instance,\n region=region,\n language=language,\n )\n page_translation_form = PageTranslationForm(\n request.POST,\n instance=page_translation_instance,\n region=region,\n language=language,\n )\n\n if page_translation_form.data.get('public') and 'public' in page_translation_form.changed_data:\n if not request.user.has_perm('cms.publish_page', page_instance):\n raise PermissionDenied\n\n # TODO: error handling\n if not page_form.is_valid() or not page_translation_form.is_valid():\n messages.error(request, _('Errors have occurred.'))\n return render(request, self.template_name, {\n **self.base_context,\n 'page_form': page_form,\n 'page_translation_form': page_translation_form,\n 'page': page_instance,\n 'language': language,\n # Languages for tab view\n 'languages': region.languages if page_instance else [language],\n })\n\n if not page_form.has_changed() and not page_translation_form.has_changed():\n messages.info(request, _('No changes detected.'))\n return render(request, self.template_name, {\n **self.base_context,\n 'page_form': page_form,\n 'page_translation_form': page_translation_form,\n 'page': page_instance,\n 'language': language,\n # Languages for tab view\n 'languages': region.languages if page_instance else [language],\n })\n\n page = page_form.save()\n page_translation = page_translation_form.save(\n page=page,\n user=request.user,\n )\n\n published = page_translation.status == status.PUBLIC\n if not page_instance:\n if published:\n messages.success(request, _('Page was successfully created and published.'))\n else:\n messages.success(request, _('Page was successfully 
created.'))\n elif not page_translation_instance:\n if published:\n messages.success(request, _('Translation was successfully created and published.'))\n else:\n messages.success(request, _('Translation was successfully created.'))\n else:\n if published:\n messages.success(request, _('Translation was successfully published.'))\n else:\n messages.success(request, _('Translation was successfully saved.'))\n\n return redirect('edit_page', **{\n 'page_id': page.id,\n 'region_slug': region.slug,\n 'language_code': language.code,\n })\n", "path": "backend/cms/views/pages/page_view.py"}]}
| 1,788 | 188 |
gh_patches_debug_59100
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-2343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KServe 0.9 release tracking
/kind feature
**Describe the solution you'd like**
KServe 0.9 release tracking:
RC release Date: 6/13/2022
Release Date: 6/27/2022
## KServe Model Serving:
- [X] Storage spec for unifying single model serving and model mesh
- https://github.com/kserve/kserve/pull/1899 @Tomcli
- [x] Transformer ModelMesh support
- https://github.com/kserve/kserve/pull/2136 @chinhuang007
- [x] Model Status API for unifying single model serving and model mesh
- https://github.com/kserve/kserve/pull/2084 @pvaneck
- https://github.com/kserve/kserve/pull/2088 @Suresh-Nakkeran
- [x] Inference Graph v1alpha1 API and implementation
- https://github.com/kserve/kserve/pull/1910 @yuzisun @Iamlovingit
- [X] KServe control plane HA
- https://github.com/kserve/kserve/pull/2160 @Suresh-Nakkeran
- [X] Enable inference protocol version auto selection for servingruntime
- https://github.com/kserve/kserve/pull/2118 @Suresh-Nakkeran
- [x] Webhdfs storage uri support
- https://github.com/kserve/kserve/pull/2077 @markwinter
- [x] Azure file share support for storage initializer
- https://github.com/kserve/kserve/pull/1985 @laozc
- [x] KServe Autoscaling spec API
- https://github.com/kserve/kserve/pull/2082 @andyi2it
- [X] KServe ingress class and domain template support for raw deployment mode
- https://github.com/kserve/kserve/pull/2054 @pradithya
- https://github.com/kserve/kserve/pull/2049 @pradithya
## ModelMesh:
- [X] OpenVINO model server support
- https://github.com/kserve/modelmesh-runtime-adapter/pull/18 @tjohnson31415
- [x] Import ServingRuntime and InferenceService types from KServe
- https://github.com/kserve/modelmesh-serving/pull/146 @tjohnson31415
- https://github.com/kserve/modelmesh-serving/pull/140 @pvaneck
- [x] Azure storage support for ModelMesh
- https://github.com/kserve/modelmesh-runtime-adapter/pull/23 @pvaneck
## Models UI:
- [x] Models Web App KServe 0.8 release support
- https://github.com/kserve/models-web-app/pull/35 @DavidSpek
## Website:
- [x] Website doc update
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
--- END ISSUE ---
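As a brief editorial note, the patch recorded below is only a version bump (`0.9.0rc0` to `0.9.0`) in `python/kserve/setup.py`; a quick, hedged way to confirm which version actually got installed is the standard-library metadata lookup (assuming the package is installed in the current environment):

```python
from importlib.metadata import version  # Python 3.8+

print(version("kserve"))  # expected to print "0.9.0" once the release bump lands
```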
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/kserve/setup.py`
Content:
```
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import setuptools
16
17 TESTS_REQUIRES = [
18 'pytest',
19 'pytest-xdist',
20 'pytest-cov',
21 'pytest-asyncio',
22 'pytest-tornasync',
23 'mypy'
24 ]
25
26 with open('requirements.txt') as f:
27 REQUIRES = f.readlines()
28
29 setuptools.setup(
30 name='kserve',
31 version='0.9.0rc0',
32 author="The KServe Authors",
33 author_email='[email protected], [email protected], [email protected]',
34 license="Apache License Version 2.0",
35 url="https://github.com/kserve/kserve/tree/master/python/kserve",
36 description="KServe Python SDK",
37 long_description="Python SDK for KServe Server and Client.",
38 python_requires='>=3.7',
39 packages=[
40 'kserve',
41 'kserve.api',
42 'kserve.constants',
43 'kserve.models',
44 'kserve.handlers',
45 'kserve.utils',
46 ],
47 package_data={'': ['requirements.txt']},
48 include_package_data=True,
49 zip_safe=False,
50 classifiers=[
51 'Intended Audience :: Developers',
52 'Intended Audience :: Education',
53 'Intended Audience :: Science/Research',
54 'Programming Language :: Python :: 3',
55 'Programming Language :: Python :: 3.7',
56 'Programming Language :: Python :: 3.8',
57 'Programming Language :: Python :: 3.9',
58 "License :: OSI Approved :: Apache Software License",
59 "Operating System :: OS Independent",
60 'Topic :: Scientific/Engineering',
61 'Topic :: Scientific/Engineering :: Artificial Intelligence',
62 'Topic :: Software Development',
63 'Topic :: Software Development :: Libraries',
64 'Topic :: Software Development :: Libraries :: Python Modules',
65 ],
66 install_requires=REQUIRES,
67 tests_require=TESTS_REQUIRES,
68 extras_require={'test': TESTS_REQUIRES}
69 )
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/kserve/setup.py b/python/kserve/setup.py
--- a/python/kserve/setup.py
+++ b/python/kserve/setup.py
@@ -28,7 +28,7 @@
setuptools.setup(
name='kserve',
- version='0.9.0rc0',
+ version='0.9.0',
author="The KServe Authors",
author_email='[email protected], [email protected], [email protected]',
license="Apache License Version 2.0",
|
{"golden_diff": "diff --git a/python/kserve/setup.py b/python/kserve/setup.py\n--- a/python/kserve/setup.py\n+++ b/python/kserve/setup.py\n@@ -28,7 +28,7 @@\n \n setuptools.setup(\n name='kserve',\n- version='0.9.0rc0',\n+ version='0.9.0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n", "issue": "KServe 0.9 release tracking\n/kind feature\r\n\r\n**Describe the solution you'd like**\r\nKServe 0.9 release tracking:\r\nRC release Date: 6/13/2022\r\nRelease Date: 6/27/2022\r\n\r\n## KServe Model Serving:\r\n- [X] Storage spec for unifying single model serving and model mesh\r\n - https://github.com/kserve/kserve/pull/1899 @Tomcli \r\n- [x] Transformer ModelMesh support\r\n - https://github.com/kserve/kserve/pull/2136 @chinhuang007 \r\n- [x] Model Status API for unifying single model serving and model mesh\r\n - https://github.com/kserve/kserve/pull/2084 @pvaneck \r\n - https://github.com/kserve/kserve/pull/2088 @Suresh-Nakkeran \r\n- [x] Inferece Graph v1alpha1 API and impmentation\r\n - https://github.com/kserve/kserve/pull/1910 @yuzisun @Iamlovingit \r\n- [X] KServe control plane HA\r\n - https://github.com/kserve/kserve/pull/2160 @Suresh-Nakkeran \r\n- [X] Enable inference protocol version auto selection for servingruntime \r\n - https://github.com/kserve/kserve/pull/2118 @Suresh-Nakkeran \r\n- [x] Webhdfs storage uri support\r\n - https://github.com/kserve/kserve/pull/2077 @markwinter \r\n- [x] Azure file share support for storage initializer \r\n - https://github.com/kserve/kserve/pull/1985 @laozc \r\n- [x] KServe Autoscaling spec API\r\n - https://github.com/kserve/kserve/pull/2082 @andyi2it \r\n- [X] KServe ingress class and domain template support for raw deployment mode\r\n - https://github.com/kserve/kserve/pull/2054 @pradithya \r\n - https://github.com/kserve/kserve/pull/2049 @pradithya \r\n\r\n## ModelMesh:\r\n- [X] OpenVINO model server support\r\n - https://github.com/kserve/modelmesh-runtime-adapter/pull/18 @tjohnson31415\r\n- [x] Import ServingRuntime and InferenceService types from KServe \r\n - https://github.com/kserve/modelmesh-serving/pull/146 @tjohnson31415 \r\n - https://github.com/kserve/modelmesh-serving/pull/140 @pvaneck \r\n- [x] Azure storage support for ModelMesh\r\n - https://github.com/kserve/modelmesh-runtime-adapter/pull/23 @pvaneck \r\n\r\n## Models UI:\r\n- [x] Models Web App KServe 0.8 release support \r\n - https://github.com/kserve/models-web-app/pull/35 @DavidSpek \r\n\r\n \r\n## Website: \r\n- [x] Website doc update\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nTESTS_REQUIRES = [\n 'pytest',\n 'pytest-xdist',\n 'pytest-cov',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nwith open('requirements.txt') as f:\n REQUIRES = 
f.readlines()\n\nsetuptools.setup(\n name='kserve',\n version='0.9.0rc0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kserve/kserve/tree/master/python/kserve\",\n description=\"KServe Python SDK\",\n long_description=\"Python SDK for KServe Server and Client.\",\n python_requires='>=3.7',\n packages=[\n 'kserve',\n 'kserve.api',\n 'kserve.constants',\n 'kserve.models',\n 'kserve.handlers',\n 'kserve.utils',\n ],\n package_data={'': ['requirements.txt']},\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRES,\n extras_require={'test': TESTS_REQUIRES}\n)\n", "path": "python/kserve/setup.py"}], "after_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nTESTS_REQUIRES = [\n 'pytest',\n 'pytest-xdist',\n 'pytest-cov',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nwith open('requirements.txt') as f:\n REQUIRES = f.readlines()\n\nsetuptools.setup(\n name='kserve',\n version='0.9.0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kserve/kserve/tree/master/python/kserve\",\n description=\"KServe Python SDK\",\n long_description=\"Python SDK for KServe Server and Client.\",\n python_requires='>=3.7',\n packages=[\n 'kserve',\n 'kserve.api',\n 'kserve.constants',\n 'kserve.models',\n 'kserve.handlers',\n 'kserve.utils',\n ],\n package_data={'': ['requirements.txt']},\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n 
install_requires=REQUIRES,\n tests_require=TESTS_REQUIRES,\n extras_require={'test': TESTS_REQUIRES}\n)\n", "path": "python/kserve/setup.py"}]}
| 1,639 | 124 |
gh_patches_debug_28851
|
rasdani/github-patches
|
git_diff
|
webkom__lego-2560
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RestrictedMail notification
> Restricted mail is used when sending mails to multiple users at once by selecting users/events/meetings, and then sending the email to <[email protected]> together with the token.
The `restricted mail sent` notification should be sent to the proper email address, not the `user.email` field. The address `user.email_address` should be used instead.
If the `from_address` is not the same as the `user.email_address`, both should receive the mail.
https://github.com/webkom/lego/blob/ccab14fbee223f16842ace6ca2ba0c2f3ac3ac86/lego/apps/restricted/notifications.py#L9
--- END ISSUE ---
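As an illustrative aside, the issue above implies a small recipient-selection rule: always notify the user's registered address, and additionally the `from_address` when it differs. The helper below is a hypothetical, self-contained sketch of that rule (the merged patch further down only switches the single recipient to `user.email.address`; it does not add the second recipient):

```python
def restricted_mail_recipients(user_email_address: str, from_address: str) -> list:
    """Recipients for the 'restricted mail sent' notification, per the issue text."""
    recipients = [user_email_address]
    if from_address and from_address.lower() != user_email_address.lower():
        recipients.append(from_address)  # sender gets a copy when it is a different address
    return recipients


# Example:
# restricted_mail_recipients("[email protected]", "[email protected]")
# -> ["[email protected]", "[email protected]"]
```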
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lego/apps/restricted/serializers.py`
Content:
```
1 from lego.apps.events.fields import PublicEventListField
2 from lego.apps.meetings.fields import MeetingListField
3 from lego.apps.restricted.models import RestrictedMail
4 from lego.apps.users.fields import AbakusGroupListField, PublicUserListField
5 from lego.utils.serializers import BasisModelSerializer
6
7
8 class RestrictedMailListSerializer(BasisModelSerializer):
9 class Meta:
10 model = RestrictedMail
11 fields = ("id", "from_address", "hide_sender", "used", "created_at", "weekly")
12 read_only_fields = ("used", "created_at")
13
14 def save(self, **kwargs):
15 kwargs["token"] = RestrictedMail.create_token()
16 return super().save(**kwargs)
17
18
19 class RestrictedMailSerializer(RestrictedMailListSerializer):
20 class Meta(RestrictedMailListSerializer.Meta):
21 fields = RestrictedMailListSerializer.Meta.fields + (
22 "users",
23 "groups",
24 "events",
25 "meetings",
26 "raw_addresses",
27 "weekly",
28 "hide_sender",
29 )
30
31
32 class RestrictedMailDetailSerializer(RestrictedMailSerializer):
33 users = PublicUserListField({"read_only": True})
34 groups = AbakusGroupListField({"read_only": True})
35 events = PublicEventListField({"read_only": True})
36 meetings = MeetingListField({"read_only": True})
37
38 class Meta(RestrictedMailSerializer.Meta):
39 fields = RestrictedMailSerializer.Meta.fields + ("token_query_param",)
40
```
Path: `lego/apps/restricted/notifications.py`
Content:
```
1 from lego.apps.notifications.constants import RESTRICTED_MAIL_SENT
2 from lego.apps.notifications.notification import Notification
3
4
5 class RestrictedMailSentNotification(Notification):
6
7 name = RESTRICTED_MAIL_SENT
8
9 def generate_mail(self):
10 return self._delay_mail(
11 to_email=self.user.email,
12 context={"first_name": self.user.first_name},
13 subject="Begrenset epost sendt ut",
14 plain_template="restricted/email/process_success.txt",
15 html_template="restricted/email/process_success.html",
16 )
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lego/apps/restricted/notifications.py b/lego/apps/restricted/notifications.py
--- a/lego/apps/restricted/notifications.py
+++ b/lego/apps/restricted/notifications.py
@@ -8,7 +8,7 @@
def generate_mail(self):
return self._delay_mail(
- to_email=self.user.email,
+ to_email=self.user.email.address,
context={"first_name": self.user.first_name},
subject="Begrenset epost sendt ut",
plain_template="restricted/email/process_success.txt",
diff --git a/lego/apps/restricted/serializers.py b/lego/apps/restricted/serializers.py
--- a/lego/apps/restricted/serializers.py
+++ b/lego/apps/restricted/serializers.py
@@ -1,3 +1,7 @@
+from functools import reduce
+
+from rest_framework import exceptions
+
from lego.apps.events.fields import PublicEventListField
from lego.apps.meetings.fields import MeetingListField
from lego.apps.restricted.models import RestrictedMail
@@ -28,6 +32,18 @@
"hide_sender",
)
+ def create(self, validated_data):
+ groups = validated_data["groups"]
+ events = validated_data["events"]
+ MaxPermittedAmout = 500
+ num = reduce((lambda a, b: a + b.number_of_users), groups, 0)
+ num += reduce((lambda a, b: a + b.registration_count), events, 0)
+ if num > MaxPermittedAmout:
+ raise exceptions.ValidationError(
+ f"The number of students in selected groups/events exceed the permitted amount which is {MaxPermittedAmout}"
+ )
+ return super().create(validated_data)
+
class RestrictedMailDetailSerializer(RestrictedMailSerializer):
users = PublicUserListField({"read_only": True})
|
{"golden_diff": "diff --git a/lego/apps/restricted/notifications.py b/lego/apps/restricted/notifications.py\n--- a/lego/apps/restricted/notifications.py\n+++ b/lego/apps/restricted/notifications.py\n@@ -8,7 +8,7 @@\n \n def generate_mail(self):\n return self._delay_mail(\n- to_email=self.user.email,\n+ to_email=self.user.email.address,\n context={\"first_name\": self.user.first_name},\n subject=\"Begrenset epost sendt ut\",\n plain_template=\"restricted/email/process_success.txt\",\ndiff --git a/lego/apps/restricted/serializers.py b/lego/apps/restricted/serializers.py\n--- a/lego/apps/restricted/serializers.py\n+++ b/lego/apps/restricted/serializers.py\n@@ -1,3 +1,7 @@\n+from functools import reduce\n+\n+from rest_framework import exceptions\n+\n from lego.apps.events.fields import PublicEventListField\n from lego.apps.meetings.fields import MeetingListField\n from lego.apps.restricted.models import RestrictedMail\n@@ -28,6 +32,18 @@\n \"hide_sender\",\n )\n \n+ def create(self, validated_data):\n+ groups = validated_data[\"groups\"]\n+ events = validated_data[\"events\"]\n+ MaxPermittedAmout = 500\n+ num = reduce((lambda a, b: a + b.number_of_users), groups, 0)\n+ num += reduce((lambda a, b: a + b.registration_count), events, 0)\n+ if num > MaxPermittedAmout:\n+ raise exceptions.ValidationError(\n+ f\"The number of students in selected groups/events exceed the permitted amount which is {MaxPermittedAmout}\"\n+ )\n+ return super().create(validated_data)\n+\n \n class RestrictedMailDetailSerializer(RestrictedMailSerializer):\n users = PublicUserListField({\"read_only\": True})\n", "issue": "RestrictedMail notification\n> Restricted mail is used when sending mails to multiple users at once by selecting users/events/meetings, and then send the email to <[email protected]> together with the token.\r\n\r\nThe `restricted mail sent` should be sent to the proper email, not the `user.email` field. 
The address `user.email_address` should be used instead.\r\n\r\nIf the `from_address` is not the same as the `user.email_address`, both should receive the mail.\r\n\r\nhttps://github.com/webkom/lego/blob/ccab14fbee223f16842ace6ca2ba0c2f3ac3ac86/lego/apps/restricted/notifications.py#L9\n", "before_files": [{"content": "from lego.apps.events.fields import PublicEventListField\nfrom lego.apps.meetings.fields import MeetingListField\nfrom lego.apps.restricted.models import RestrictedMail\nfrom lego.apps.users.fields import AbakusGroupListField, PublicUserListField\nfrom lego.utils.serializers import BasisModelSerializer\n\n\nclass RestrictedMailListSerializer(BasisModelSerializer):\n class Meta:\n model = RestrictedMail\n fields = (\"id\", \"from_address\", \"hide_sender\", \"used\", \"created_at\", \"weekly\")\n read_only_fields = (\"used\", \"created_at\")\n\n def save(self, **kwargs):\n kwargs[\"token\"] = RestrictedMail.create_token()\n return super().save(**kwargs)\n\n\nclass RestrictedMailSerializer(RestrictedMailListSerializer):\n class Meta(RestrictedMailListSerializer.Meta):\n fields = RestrictedMailListSerializer.Meta.fields + (\n \"users\",\n \"groups\",\n \"events\",\n \"meetings\",\n \"raw_addresses\",\n \"weekly\",\n \"hide_sender\",\n )\n\n\nclass RestrictedMailDetailSerializer(RestrictedMailSerializer):\n users = PublicUserListField({\"read_only\": True})\n groups = AbakusGroupListField({\"read_only\": True})\n events = PublicEventListField({\"read_only\": True})\n meetings = MeetingListField({\"read_only\": True})\n\n class Meta(RestrictedMailSerializer.Meta):\n fields = RestrictedMailSerializer.Meta.fields + (\"token_query_param\",)\n", "path": "lego/apps/restricted/serializers.py"}, {"content": "from lego.apps.notifications.constants import RESTRICTED_MAIL_SENT\nfrom lego.apps.notifications.notification import Notification\n\n\nclass RestrictedMailSentNotification(Notification):\n\n name = RESTRICTED_MAIL_SENT\n\n def generate_mail(self):\n return self._delay_mail(\n to_email=self.user.email,\n context={\"first_name\": self.user.first_name},\n subject=\"Begrenset epost sendt ut\",\n plain_template=\"restricted/email/process_success.txt\",\n html_template=\"restricted/email/process_success.html\",\n )\n", "path": "lego/apps/restricted/notifications.py"}], "after_files": [{"content": "from functools import reduce\n\nfrom rest_framework import exceptions\n\nfrom lego.apps.events.fields import PublicEventListField\nfrom lego.apps.meetings.fields import MeetingListField\nfrom lego.apps.restricted.models import RestrictedMail\nfrom lego.apps.users.fields import AbakusGroupListField, PublicUserListField\nfrom lego.utils.serializers import BasisModelSerializer\n\n\nclass RestrictedMailListSerializer(BasisModelSerializer):\n class Meta:\n model = RestrictedMail\n fields = (\"id\", \"from_address\", \"hide_sender\", \"used\", \"created_at\", \"weekly\")\n read_only_fields = (\"used\", \"created_at\")\n\n def save(self, **kwargs):\n kwargs[\"token\"] = RestrictedMail.create_token()\n return super().save(**kwargs)\n\n\nclass RestrictedMailSerializer(RestrictedMailListSerializer):\n class Meta(RestrictedMailListSerializer.Meta):\n fields = RestrictedMailListSerializer.Meta.fields + (\n \"users\",\n \"groups\",\n \"events\",\n \"meetings\",\n \"raw_addresses\",\n \"weekly\",\n \"hide_sender\",\n )\n\n def create(self, validated_data):\n groups = validated_data[\"groups\"]\n events = validated_data[\"events\"]\n MaxPermittedAmout = 500\n num = reduce((lambda a, b: a + b.number_of_users), 
groups, 0)\n num += reduce((lambda a, b: a + b.registration_count), events, 0)\n if num > MaxPermittedAmout:\n raise exceptions.ValidationError(\n f\"The number of students in selected groups/events exceed the permitted amount which is {MaxPermittedAmout}\"\n )\n return super().create(validated_data)\n\n\nclass RestrictedMailDetailSerializer(RestrictedMailSerializer):\n users = PublicUserListField({\"read_only\": True})\n groups = AbakusGroupListField({\"read_only\": True})\n events = PublicEventListField({\"read_only\": True})\n meetings = MeetingListField({\"read_only\": True})\n\n class Meta(RestrictedMailSerializer.Meta):\n fields = RestrictedMailSerializer.Meta.fields + (\"token_query_param\",)\n", "path": "lego/apps/restricted/serializers.py"}, {"content": "from lego.apps.notifications.constants import RESTRICTED_MAIL_SENT\nfrom lego.apps.notifications.notification import Notification\n\n\nclass RestrictedMailSentNotification(Notification):\n\n name = RESTRICTED_MAIL_SENT\n\n def generate_mail(self):\n return self._delay_mail(\n to_email=self.user.email.address,\n context={\"first_name\": self.user.first_name},\n subject=\"Begrenset epost sendt ut\",\n plain_template=\"restricted/email/process_success.txt\",\n html_template=\"restricted/email/process_success.html\",\n )\n", "path": "lego/apps/restricted/notifications.py"}]}
| 952 | 422 |
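The serializer patch in the record above bounds the recipient count by folding the selected groups and events with `functools.reduce` before falling back to the normal create path. The sketch below illustrates only that accumulate-and-validate pattern; the `Group`/`Event` stand-in classes and the 500 cap are assumptions for the example, not the Django models from the record.

```python
from functools import reduce

class Group:
    """Stand-in for a group object that exposes a member counter."""
    def __init__(self, number_of_users):
        self.number_of_users = number_of_users

class Event:
    """Stand-in for an event object that exposes a registration counter."""
    def __init__(self, registration_count):
        self.registration_count = registration_count

MAX_PERMITTED = 500  # mirrors MaxPermittedAmout in the patched create()

def check_recipient_count(groups, events):
    # Fold each collection into a running total, the same reduce shape as the patch.
    total = reduce(lambda acc, g: acc + g.number_of_users, groups, 0)
    total += reduce(lambda acc, e: acc + e.registration_count, events, 0)
    if total > MAX_PERMITTED:
        raise ValueError(
            f"The number of recipients ({total}) exceeds the permitted amount ({MAX_PERMITTED})"
        )
    return total

check_recipient_count([Group(120), Group(300)], [Event(80)])  # 500 -> accepted; 501 would raise
```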
gh_patches_debug_33047 | rasdani/github-patches | git_diff | adap__flower-1753 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pytorch RayClientProxy suppresses errors
### Describe the bug
When running in PyTorch simulated and an error occurs on the client side, the error message gets buried under `DEBUG` and may never be shown to failing clients and thus the simulated_server process.
I assume that the following lines should change their loglevel flags to `ERROR`:
https://github.com/adap/flower/blob/3918d62ce8c346dc9c8cbf7471d0c8912c1e9e8f/src/py/flwr/simulation/ray_transport/ray_client_proxy.py#L55
https://github.com/adap/flower/blob/3918d62ce8c346dc9c8cbf7471d0c8912c1e9e8f/src/py/flwr/simulation/ray_transport/ray_client_proxy.py#L72
https://github.com/adap/flower/blob/3918d62ce8c346dc9c8cbf7471d0c8912c1e9e8f/src/py/flwr/simulation/ray_transport/ray_client_proxy.py#L87
https://github.com/adap/flower/blob/3918d62ce8c346dc9c8cbf7471d0c8912c1e9e8f/src/py/flwr/simulation/ray_transport/ray_client_proxy.py#L104
### Steps/Code to Reproduce
Taking the code from `examples/simulation_pytorch/main.py`
```python
import argparse
import flwr as fl
from flwr.common.typing import Scalar
import ray
import torch
import torchvision
import numpy as np
from collections import OrderedDict
from pathlib import Path
from typing import Dict, Callable, Optional, Tuple, List
from dataset_utils import get_cifar_10, do_fl_partitioning, get_dataloader
from utils import Net, train, test
import logging
from flwr.common.logger import logger as flwr_logger
flwr_logger.setLevel(logging.INFO)
parser = argparse.ArgumentParser(description="Flower Simulation with PyTorch")
parser.add_argument("--num_client_cpus", type=int, default=1)
parser.add_argument("--num_rounds", type=int, default=5)
# Flower client, adapted from Pytorch quickstart example
class FlowerClient(fl.client.NumPyClient):
def __init__(self, cid: str, fed_dir_data: str):
self.cid = cid
self.fed_dir = Path(fed_dir_data)
self.properties: Dict[str, Scalar] = {"tensor_type": "numpy.ndarray"}
# Instantiate model
self.net = Net()
# Determine device
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def get_parameters(self, config):
assert(1 == 0) # This will be hidden
return get_params(self.net)
def fit(self, parameters, config):
set_params(self.net, parameters)
assert(1 == 0) # This will be hidden
# Load data for this client and get trainloader
num_workers = int(ray.get_runtime_context().get_assigned_resources()["CPU"])
trainloader = get_dataloader(
self.fed_dir,
self.cid,
is_train=True,
batch_size=config["batch_size"],
workers=num_workers,
)
# Send model to device
self.net.to(self.device)
# Train
train(self.net, trainloader, epochs=config["epochs"], device=self.device)
# Return local model and statistics
return get_params(self.net), len(trainloader.dataset), {}
def evaluate(self, parameters, config):
set_params(self.net, parameters)
assert(1 == 0) # This will be hidden
# Load data for this client and get trainloader
num_workers = int(ray.get_runtime_context().get_assigned_resources()["CPU"])
valloader = get_dataloader(
self.fed_dir, self.cid, is_train=False, batch_size=50, workers=num_workers
)
# Send model to device
self.net.to(self.device)
# Evaluate
loss, accuracy = test(self.net, valloader, device=self.device)
# Return statistics
return float(loss), len(valloader.dataset), {"accuracy": float(accuracy)}
def fit_config(server_round: int) -> Dict[str, Scalar]:
"""Return a configuration with static batch size and (local) epochs."""
config = {
"epochs": 5, # number of local epochs
"batch_size": 64,
}
return config
def get_params(model: torch.nn.ModuleList) -> List[np.ndarray]:
"""Get model weights as a list of NumPy ndarrays."""
return [val.cpu().numpy() for _, val in model.state_dict().items()]
def set_params(model: torch.nn.ModuleList, params: List[np.ndarray]):
"""Set model weights from a list of NumPy ndarrays."""
params_dict = zip(model.state_dict().keys(), params)
state_dict = OrderedDict({k: torch.from_numpy(np.copy(v)) for k, v in params_dict})
model.load_state_dict(state_dict, strict=True)
def get_evaluate_fn(
testset: torchvision.datasets.CIFAR10,
) -> Callable[[fl.common.NDArrays], Optional[Tuple[float, float]]]:
"""Return an evaluation function for centralized evaluation."""
def evaluate(
server_round: int, parameters: fl.common.NDArrays, config: Dict[str, Scalar]
) -> Optional[Tuple[float, float]]:
"""Use the entire CIFAR-10 test set for evaluation."""
# determine device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = Net()
set_params(model, parameters)
model.to(device)
testloader = torch.utils.data.DataLoader(testset, batch_size=50)
loss, accuracy = test(model, testloader, device=device)
# return statistics
return loss, {"accuracy": accuracy}
return evaluate
# Start simulation (a _default server_ will be created)
# This example does:
# 1. Downloads CIFAR-10
# 2. Partitions the dataset into N splits, where N is the total number of
# clients. We refere to this as `pool_size`. The partition can be IID or non-IID
# 3. Starts a simulation where a % of clients are sample each round.
# 4. After the M rounds end, the global model is evaluated on the entire testset.
# Also, the global model is evaluated on the valset partition residing in each
# client. This is useful to get a sense on how well the global model can generalise
# to each client's data.
if __name__ == "__main__":
# parse input arguments
args = parser.parse_args()
pool_size = 100 # number of dataset partions (= number of total clients)
client_resources = {
"num_cpus": args.num_client_cpus
} # each client will get allocated 1 CPUs
# Download CIFAR-10 dataset
train_path, testset = get_cifar_10()
# partition dataset (use a large `alpha` to make it IID;
# a small value (e.g. 1) will make it non-IID)
# This will create a new directory called "federated": in the directory where
# CIFAR-10 lives. Inside it, there will be N=pool_size sub-directories each with
# its own train/set split.
fed_dir = do_fl_partitioning(
train_path, pool_size=pool_size, alpha=1000, num_classes=10, val_ratio=0.1
)
# configure the strategy
strategy = fl.server.strategy.FedAvg(
fraction_fit=0.1,
fraction_evaluate=0.1,
min_fit_clients=10,
min_evaluate_clients=10,
min_available_clients=pool_size, # All clients should be available
on_fit_config_fn=fit_config,
evaluate_fn=get_evaluate_fn(testset), # centralised evaluation of global model
)
def client_fn(cid: str):
# create a single client instance
return FlowerClient(cid, fed_dir)
# (optional) specify Ray config
ray_init_args = {"include_dashboard": False}
# start simulation
fl.simulation.start_simulation(
client_fn=client_fn,
num_clients=pool_size,
client_resources=client_resources,
config=fl.server.ServerConfig(num_rounds=args.num_rounds),
strategy=strategy,
ray_init_args=ray_init_args,
)
```
And running `examples/simulation_pytorch/run.sh`.
### Expected Results
Expectation is to see the assertions errors, as this is unintended behaviour.
### Actual Results
Errors are suppressed because of `flwr_logger.setLevel(logging.INFO)`, but should not.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/py/flwr/simulation/ray_transport/ray_client_proxy.py`
Content:
```
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Ray-based Flower ClientProxy implementation."""
16
17
18 from logging import DEBUG
19 from typing import Callable, Dict, Optional, cast
20
21 import ray
22
23 from flwr import common
24 from flwr.client import Client, ClientLike, to_client
25 from flwr.client.client import (
26 maybe_call_evaluate,
27 maybe_call_fit,
28 maybe_call_get_parameters,
29 maybe_call_get_properties,
30 )
31 from flwr.common.logger import log
32 from flwr.server.client_proxy import ClientProxy
33
34 ClientFn = Callable[[str], ClientLike]
35
36
37 class RayClientProxy(ClientProxy):
38 """Flower client proxy which delegates work using Ray."""
39
40 def __init__(self, client_fn: ClientFn, cid: str, resources: Dict[str, float]):
41 super().__init__(cid)
42 self.client_fn = client_fn
43 self.resources = resources
44
45 def get_properties(
46 self, ins: common.GetPropertiesIns, timeout: Optional[float]
47 ) -> common.GetPropertiesRes:
48 """Returns client's properties."""
49 future_get_properties_res = launch_and_get_properties.options( # type: ignore
50 **self.resources,
51 ).remote(self.client_fn, self.cid, ins)
52 try:
53 res = ray.get(future_get_properties_res, timeout=timeout)
54 except Exception as ex:
55 log(DEBUG, ex)
56 raise ex
57 return cast(
58 common.GetPropertiesRes,
59 res,
60 )
61
62 def get_parameters(
63 self, ins: common.GetParametersIns, timeout: Optional[float]
64 ) -> common.GetParametersRes:
65 """Return the current local model parameters."""
66 future_paramseters_res = launch_and_get_parameters.options( # type: ignore
67 **self.resources,
68 ).remote(self.client_fn, self.cid, ins)
69 try:
70 res = ray.get(future_paramseters_res, timeout=timeout)
71 except Exception as ex:
72 log(DEBUG, ex)
73 raise ex
74 return cast(
75 common.GetParametersRes,
76 res,
77 )
78
79 def fit(self, ins: common.FitIns, timeout: Optional[float]) -> common.FitRes:
80 """Train model parameters on the locally held dataset."""
81 future_fit_res = launch_and_fit.options( # type: ignore
82 **self.resources,
83 ).remote(self.client_fn, self.cid, ins)
84 try:
85 res = ray.get(future_fit_res, timeout=timeout)
86 except Exception as ex:
87 log(DEBUG, ex)
88 raise ex
89 return cast(
90 common.FitRes,
91 res,
92 )
93
94 def evaluate(
95 self, ins: common.EvaluateIns, timeout: Optional[float]
96 ) -> common.EvaluateRes:
97 """Evaluate model parameters on the locally held dataset."""
98 future_evaluate_res = launch_and_evaluate.options( # type: ignore
99 **self.resources,
100 ).remote(self.client_fn, self.cid, ins)
101 try:
102 res = ray.get(future_evaluate_res, timeout=timeout)
103 except Exception as ex:
104 log(DEBUG, ex)
105 raise ex
106 return cast(
107 common.EvaluateRes,
108 res,
109 )
110
111 def reconnect(
112 self, ins: common.ReconnectIns, timeout: Optional[float]
113 ) -> common.DisconnectRes:
114 """Disconnect and (optionally) reconnect later."""
115 return common.DisconnectRes(reason="") # Nothing to do here (yet)
116
117
118 @ray.remote
119 def launch_and_get_properties(
120 client_fn: ClientFn, cid: str, get_properties_ins: common.GetPropertiesIns
121 ) -> common.GetPropertiesRes:
122 """Exectue get_properties remotely."""
123 client: Client = _create_client(client_fn, cid)
124 return maybe_call_get_properties(
125 client=client,
126 get_properties_ins=get_properties_ins,
127 )
128
129
130 @ray.remote
131 def launch_and_get_parameters(
132 client_fn: ClientFn, cid: str, get_parameters_ins: common.GetParametersIns
133 ) -> common.GetParametersRes:
134 """Exectue get_parameters remotely."""
135 client: Client = _create_client(client_fn, cid)
136 return maybe_call_get_parameters(
137 client=client,
138 get_parameters_ins=get_parameters_ins,
139 )
140
141
142 @ray.remote
143 def launch_and_fit(
144 client_fn: ClientFn, cid: str, fit_ins: common.FitIns
145 ) -> common.FitRes:
146 """Exectue fit remotely."""
147 client: Client = _create_client(client_fn, cid)
148 return maybe_call_fit(
149 client=client,
150 fit_ins=fit_ins,
151 )
152
153
154 @ray.remote
155 def launch_and_evaluate(
156 client_fn: ClientFn, cid: str, evaluate_ins: common.EvaluateIns
157 ) -> common.EvaluateRes:
158 """Exectue evaluate remotely."""
159 client: Client = _create_client(client_fn, cid)
160 return maybe_call_evaluate(
161 client=client,
162 evaluate_ins=evaluate_ins,
163 )
164
165
166 def _create_client(client_fn: ClientFn, cid: str) -> Client:
167 """Create a client instance."""
168 client_like: ClientLike = client_fn(cid)
169 return to_client(client_like=client_like)
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/py/flwr/simulation/ray_transport/ray_client_proxy.py b/src/py/flwr/simulation/ray_transport/ray_client_proxy.py
--- a/src/py/flwr/simulation/ray_transport/ray_client_proxy.py
+++ b/src/py/flwr/simulation/ray_transport/ray_client_proxy.py
@@ -15,7 +15,7 @@
"""Ray-based Flower ClientProxy implementation."""
-from logging import DEBUG
+from logging import ERROR
from typing import Callable, Dict, Optional, cast
import ray
@@ -52,7 +52,7 @@
try:
res = ray.get(future_get_properties_res, timeout=timeout)
except Exception as ex:
- log(DEBUG, ex)
+ log(ERROR, ex)
raise ex
return cast(
common.GetPropertiesRes,
@@ -69,7 +69,7 @@
try:
res = ray.get(future_paramseters_res, timeout=timeout)
except Exception as ex:
- log(DEBUG, ex)
+ log(ERROR, ex)
raise ex
return cast(
common.GetParametersRes,
@@ -84,7 +84,7 @@
try:
res = ray.get(future_fit_res, timeout=timeout)
except Exception as ex:
- log(DEBUG, ex)
+ log(ERROR, ex)
raise ex
return cast(
common.FitRes,
@@ -101,7 +101,7 @@
try:
res = ray.get(future_evaluate_res, timeout=timeout)
except Exception as ex:
- log(DEBUG, ex)
+ log(ERROR, ex)
raise ex
return cast(
common.EvaluateRes,
|
{"golden_diff": "diff --git a/src/py/flwr/simulation/ray_transport/ray_client_proxy.py b/src/py/flwr/simulation/ray_transport/ray_client_proxy.py\n--- a/src/py/flwr/simulation/ray_transport/ray_client_proxy.py\n+++ b/src/py/flwr/simulation/ray_transport/ray_client_proxy.py\n@@ -15,7 +15,7 @@\n \"\"\"Ray-based Flower ClientProxy implementation.\"\"\"\n \n \n-from logging import DEBUG\n+from logging import ERROR\n from typing import Callable, Dict, Optional, cast\n \n import ray\n@@ -52,7 +52,7 @@\n try:\n res = ray.get(future_get_properties_res, timeout=timeout)\n except Exception as ex:\n- log(DEBUG, ex)\n+ log(ERROR, ex)\n raise ex\n return cast(\n common.GetPropertiesRes,\n@@ -69,7 +69,7 @@\n try:\n res = ray.get(future_paramseters_res, timeout=timeout)\n except Exception as ex:\n- log(DEBUG, ex)\n+ log(ERROR, ex)\n raise ex\n return cast(\n common.GetParametersRes,\n@@ -84,7 +84,7 @@\n try:\n res = ray.get(future_fit_res, timeout=timeout)\n except Exception as ex:\n- log(DEBUG, ex)\n+ log(ERROR, ex)\n raise ex\n return cast(\n common.FitRes,\n@@ -101,7 +101,7 @@\n try:\n res = ray.get(future_evaluate_res, timeout=timeout)\n except Exception as ex:\n- log(DEBUG, ex)\n+ log(ERROR, ex)\n raise ex\n return cast(\n common.EvaluateRes,\n", "issue": "Pytorch RayClientProxy suppresses errors\n### Describe the bug\n\nWhen running in PyTorch simulated and an error occurs on the client side, the error message gets buried under `DEBUG` and may never be shown to failing clients and thus the simulated_server process.\r\n\r\nI assume that the following lines should change their loglevel flags to `ERROR`:\r\n\r\nhttps://github.com/adap/flower/blob/3918d62ce8c346dc9c8cbf7471d0c8912c1e9e8f/src/py/flwr/simulation/ray_transport/ray_client_proxy.py#L55\r\nhttps://github.com/adap/flower/blob/3918d62ce8c346dc9c8cbf7471d0c8912c1e9e8f/src/py/flwr/simulation/ray_transport/ray_client_proxy.py#L72\r\nhttps://github.com/adap/flower/blob/3918d62ce8c346dc9c8cbf7471d0c8912c1e9e8f/src/py/flwr/simulation/ray_transport/ray_client_proxy.py#L87\r\nhttps://github.com/adap/flower/blob/3918d62ce8c346dc9c8cbf7471d0c8912c1e9e8f/src/py/flwr/simulation/ray_transport/ray_client_proxy.py#L104\n\n### Steps/Code to Reproduce\n\nTaking the code from `examples/simulation_pytorch/main.py`\r\n\r\n```python\r\nimport argparse\r\nimport flwr as fl\r\nfrom flwr.common.typing import Scalar\r\nimport ray\r\nimport torch\r\nimport torchvision\r\nimport numpy as np\r\nfrom collections import OrderedDict\r\nfrom pathlib import Path\r\nfrom typing import Dict, Callable, Optional, Tuple, List\r\nfrom dataset_utils import get_cifar_10, do_fl_partitioning, get_dataloader\r\nfrom utils import Net, train, test\r\n\r\nimport logging\r\nfrom flwr.common.logger import logger as flwr_logger\r\nflwr_logger.setLevel(logging.INFO)\r\n\r\nparser = argparse.ArgumentParser(description=\"Flower Simulation with PyTorch\")\r\n\r\nparser.add_argument(\"--num_client_cpus\", type=int, default=1)\r\nparser.add_argument(\"--num_rounds\", type=int, default=5)\r\n\r\n\r\n# Flower client, adapted from Pytorch quickstart example\r\nclass FlowerClient(fl.client.NumPyClient):\r\n def __init__(self, cid: str, fed_dir_data: str):\r\n self.cid = cid\r\n self.fed_dir = Path(fed_dir_data)\r\n self.properties: Dict[str, Scalar] = {\"tensor_type\": \"numpy.ndarray\"}\r\n\r\n # Instantiate model\r\n self.net = Net()\r\n\r\n # Determine device\r\n self.device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\r\n\r\n def get_parameters(self, config):\r\n assert(1 == 
0) # This will be hidden\r\n return get_params(self.net)\r\n\r\n def fit(self, parameters, config):\r\n set_params(self.net, parameters)\r\n assert(1 == 0) # This will be hidden \r\n \r\n # Load data for this client and get trainloader\r\n num_workers = int(ray.get_runtime_context().get_assigned_resources()[\"CPU\"])\r\n trainloader = get_dataloader(\r\n self.fed_dir,\r\n self.cid,\r\n is_train=True,\r\n batch_size=config[\"batch_size\"],\r\n workers=num_workers,\r\n )\r\n\r\n # Send model to device\r\n self.net.to(self.device)\r\n\r\n # Train\r\n train(self.net, trainloader, epochs=config[\"epochs\"], device=self.device)\r\n\r\n # Return local model and statistics\r\n return get_params(self.net), len(trainloader.dataset), {}\r\n\r\n def evaluate(self, parameters, config):\r\n set_params(self.net, parameters)\r\n assert(1 == 0) # This will be hidden\r\n\r\n # Load data for this client and get trainloader\r\n num_workers = int(ray.get_runtime_context().get_assigned_resources()[\"CPU\"])\r\n valloader = get_dataloader(\r\n self.fed_dir, self.cid, is_train=False, batch_size=50, workers=num_workers\r\n )\r\n\r\n # Send model to device\r\n self.net.to(self.device)\r\n\r\n # Evaluate\r\n loss, accuracy = test(self.net, valloader, device=self.device)\r\n\r\n # Return statistics\r\n return float(loss), len(valloader.dataset), {\"accuracy\": float(accuracy)}\r\n\r\n\r\ndef fit_config(server_round: int) -> Dict[str, Scalar]:\r\n \"\"\"Return a configuration with static batch size and (local) epochs.\"\"\"\r\n config = {\r\n \"epochs\": 5, # number of local epochs\r\n \"batch_size\": 64,\r\n }\r\n return config\r\n\r\n\r\ndef get_params(model: torch.nn.ModuleList) -> List[np.ndarray]:\r\n \"\"\"Get model weights as a list of NumPy ndarrays.\"\"\"\r\n return [val.cpu().numpy() for _, val in model.state_dict().items()]\r\n\r\n\r\ndef set_params(model: torch.nn.ModuleList, params: List[np.ndarray]):\r\n \"\"\"Set model weights from a list of NumPy ndarrays.\"\"\"\r\n params_dict = zip(model.state_dict().keys(), params)\r\n state_dict = OrderedDict({k: torch.from_numpy(np.copy(v)) for k, v in params_dict})\r\n model.load_state_dict(state_dict, strict=True)\r\n\r\n\r\ndef get_evaluate_fn(\r\n testset: torchvision.datasets.CIFAR10,\r\n) -> Callable[[fl.common.NDArrays], Optional[Tuple[float, float]]]:\r\n \"\"\"Return an evaluation function for centralized evaluation.\"\"\"\r\n\r\n def evaluate(\r\n server_round: int, parameters: fl.common.NDArrays, config: Dict[str, Scalar]\r\n ) -> Optional[Tuple[float, float]]:\r\n \"\"\"Use the entire CIFAR-10 test set for evaluation.\"\"\"\r\n\r\n # determine device\r\n device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\r\n\r\n model = Net()\r\n set_params(model, parameters)\r\n model.to(device)\r\n\r\n testloader = torch.utils.data.DataLoader(testset, batch_size=50)\r\n loss, accuracy = test(model, testloader, device=device)\r\n\r\n # return statistics\r\n return loss, {\"accuracy\": accuracy}\r\n\r\n return evaluate\r\n\r\n\r\n# Start simulation (a _default server_ will be created)\r\n# This example does:\r\n# 1. Downloads CIFAR-10\r\n# 2. Partitions the dataset into N splits, where N is the total number of\r\n# clients. We refere to this as `pool_size`. The partition can be IID or non-IID\r\n# 3. Starts a simulation where a % of clients are sample each round.\r\n# 4. After the M rounds end, the global model is evaluated on the entire testset.\r\n# Also, the global model is evaluated on the valset partition residing in each\r\n# client. 
This is useful to get a sense on how well the global model can generalise\r\n# to each client's data.\r\nif __name__ == \"__main__\":\r\n # parse input arguments\r\n args = parser.parse_args()\r\n\r\n pool_size = 100 # number of dataset partions (= number of total clients)\r\n client_resources = {\r\n \"num_cpus\": args.num_client_cpus\r\n } # each client will get allocated 1 CPUs\r\n\r\n # Download CIFAR-10 dataset\r\n train_path, testset = get_cifar_10()\r\n\r\n # partition dataset (use a large `alpha` to make it IID;\r\n # a small value (e.g. 1) will make it non-IID)\r\n # This will create a new directory called \"federated\": in the directory where\r\n # CIFAR-10 lives. Inside it, there will be N=pool_size sub-directories each with\r\n # its own train/set split.\r\n fed_dir = do_fl_partitioning(\r\n train_path, pool_size=pool_size, alpha=1000, num_classes=10, val_ratio=0.1\r\n )\r\n\r\n # configure the strategy\r\n strategy = fl.server.strategy.FedAvg(\r\n fraction_fit=0.1,\r\n fraction_evaluate=0.1,\r\n min_fit_clients=10,\r\n min_evaluate_clients=10,\r\n min_available_clients=pool_size, # All clients should be available\r\n on_fit_config_fn=fit_config,\r\n evaluate_fn=get_evaluate_fn(testset), # centralised evaluation of global model\r\n )\r\n\r\n def client_fn(cid: str):\r\n # create a single client instance\r\n return FlowerClient(cid, fed_dir)\r\n\r\n # (optional) specify Ray config\r\n ray_init_args = {\"include_dashboard\": False}\r\n\r\n # start simulation\r\n fl.simulation.start_simulation(\r\n client_fn=client_fn,\r\n num_clients=pool_size,\r\n client_resources=client_resources,\r\n config=fl.server.ServerConfig(num_rounds=args.num_rounds),\r\n strategy=strategy,\r\n ray_init_args=ray_init_args,\r\n )\r\n```\r\n\r\nAnd running `examples/simulation_pytorch/run.sh`.\n\n### Expected Results\n\nExpectation is to see the assertions errors, as this is unintended behaviour.\n\n### Actual Results\n\nErrors are suppressed because of `flwr_logger.setLevel(logging.INFO)`, but should not.\n", "before_files": [{"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Ray-based Flower ClientProxy implementation.\"\"\"\n\n\nfrom logging import DEBUG\nfrom typing import Callable, Dict, Optional, cast\n\nimport ray\n\nfrom flwr import common\nfrom flwr.client import Client, ClientLike, to_client\nfrom flwr.client.client import (\n maybe_call_evaluate,\n maybe_call_fit,\n maybe_call_get_parameters,\n maybe_call_get_properties,\n)\nfrom flwr.common.logger import log\nfrom flwr.server.client_proxy import ClientProxy\n\nClientFn = Callable[[str], ClientLike]\n\n\nclass RayClientProxy(ClientProxy):\n \"\"\"Flower client proxy which delegates work using Ray.\"\"\"\n\n def __init__(self, client_fn: ClientFn, cid: str, resources: Dict[str, float]):\n super().__init__(cid)\n self.client_fn = client_fn\n self.resources = resources\n\n def get_properties(\n self, ins: common.GetPropertiesIns, timeout: Optional[float]\n ) -> common.GetPropertiesRes:\n \"\"\"Returns client's properties.\"\"\"\n future_get_properties_res = launch_and_get_properties.options( # type: ignore\n **self.resources,\n ).remote(self.client_fn, self.cid, ins)\n try:\n res = ray.get(future_get_properties_res, timeout=timeout)\n except Exception as ex:\n log(DEBUG, ex)\n raise ex\n return cast(\n common.GetPropertiesRes,\n res,\n )\n\n def get_parameters(\n self, ins: common.GetParametersIns, timeout: Optional[float]\n ) -> common.GetParametersRes:\n \"\"\"Return the current local model parameters.\"\"\"\n future_paramseters_res = launch_and_get_parameters.options( # type: ignore\n **self.resources,\n ).remote(self.client_fn, self.cid, ins)\n try:\n res = ray.get(future_paramseters_res, timeout=timeout)\n except Exception as ex:\n log(DEBUG, ex)\n raise ex\n return cast(\n common.GetParametersRes,\n res,\n )\n\n def fit(self, ins: common.FitIns, timeout: Optional[float]) -> common.FitRes:\n \"\"\"Train model parameters on the locally held dataset.\"\"\"\n future_fit_res = launch_and_fit.options( # type: ignore\n **self.resources,\n ).remote(self.client_fn, self.cid, ins)\n try:\n res = ray.get(future_fit_res, timeout=timeout)\n except Exception as ex:\n log(DEBUG, ex)\n raise ex\n return cast(\n common.FitRes,\n res,\n )\n\n def evaluate(\n self, ins: common.EvaluateIns, timeout: Optional[float]\n ) -> common.EvaluateRes:\n \"\"\"Evaluate model parameters on the locally held dataset.\"\"\"\n future_evaluate_res = launch_and_evaluate.options( # type: ignore\n **self.resources,\n ).remote(self.client_fn, self.cid, ins)\n try:\n res = ray.get(future_evaluate_res, timeout=timeout)\n except Exception as ex:\n log(DEBUG, ex)\n raise ex\n return cast(\n common.EvaluateRes,\n res,\n )\n\n def reconnect(\n self, ins: common.ReconnectIns, timeout: Optional[float]\n ) -> common.DisconnectRes:\n \"\"\"Disconnect and (optionally) reconnect later.\"\"\"\n return common.DisconnectRes(reason=\"\") # Nothing to do here (yet)\n\n\[email protected]\ndef 
launch_and_get_properties(\n client_fn: ClientFn, cid: str, get_properties_ins: common.GetPropertiesIns\n) -> common.GetPropertiesRes:\n \"\"\"Exectue get_properties remotely.\"\"\"\n client: Client = _create_client(client_fn, cid)\n return maybe_call_get_properties(\n client=client,\n get_properties_ins=get_properties_ins,\n )\n\n\[email protected]\ndef launch_and_get_parameters(\n client_fn: ClientFn, cid: str, get_parameters_ins: common.GetParametersIns\n) -> common.GetParametersRes:\n \"\"\"Exectue get_parameters remotely.\"\"\"\n client: Client = _create_client(client_fn, cid)\n return maybe_call_get_parameters(\n client=client,\n get_parameters_ins=get_parameters_ins,\n )\n\n\[email protected]\ndef launch_and_fit(\n client_fn: ClientFn, cid: str, fit_ins: common.FitIns\n) -> common.FitRes:\n \"\"\"Exectue fit remotely.\"\"\"\n client: Client = _create_client(client_fn, cid)\n return maybe_call_fit(\n client=client,\n fit_ins=fit_ins,\n )\n\n\[email protected]\ndef launch_and_evaluate(\n client_fn: ClientFn, cid: str, evaluate_ins: common.EvaluateIns\n) -> common.EvaluateRes:\n \"\"\"Exectue evaluate remotely.\"\"\"\n client: Client = _create_client(client_fn, cid)\n return maybe_call_evaluate(\n client=client,\n evaluate_ins=evaluate_ins,\n )\n\n\ndef _create_client(client_fn: ClientFn, cid: str) -> Client:\n \"\"\"Create a client instance.\"\"\"\n client_like: ClientLike = client_fn(cid)\n return to_client(client_like=client_like)\n", "path": "src/py/flwr/simulation/ray_transport/ray_client_proxy.py"}], "after_files": [{"content": "# Copyright 2020 Adap GmbH. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Ray-based Flower ClientProxy implementation.\"\"\"\n\n\nfrom logging import ERROR\nfrom typing import Callable, Dict, Optional, cast\n\nimport ray\n\nfrom flwr import common\nfrom flwr.client import Client, ClientLike, to_client\nfrom flwr.client.client import (\n maybe_call_evaluate,\n maybe_call_fit,\n maybe_call_get_parameters,\n maybe_call_get_properties,\n)\nfrom flwr.common.logger import log\nfrom flwr.server.client_proxy import ClientProxy\n\nClientFn = Callable[[str], ClientLike]\n\n\nclass RayClientProxy(ClientProxy):\n \"\"\"Flower client proxy which delegates work using Ray.\"\"\"\n\n def __init__(self, client_fn: ClientFn, cid: str, resources: Dict[str, float]):\n super().__init__(cid)\n self.client_fn = client_fn\n self.resources = resources\n\n def get_properties(\n self, ins: common.GetPropertiesIns, timeout: Optional[float]\n ) -> common.GetPropertiesRes:\n \"\"\"Returns client's properties.\"\"\"\n future_get_properties_res = launch_and_get_properties.options( # type: ignore\n **self.resources,\n ).remote(self.client_fn, self.cid, ins)\n try:\n res = ray.get(future_get_properties_res, timeout=timeout)\n except Exception as ex:\n log(ERROR, ex)\n raise ex\n return cast(\n common.GetPropertiesRes,\n res,\n )\n\n def get_parameters(\n self, ins: 
common.GetParametersIns, timeout: Optional[float]\n ) -> common.GetParametersRes:\n \"\"\"Return the current local model parameters.\"\"\"\n future_paramseters_res = launch_and_get_parameters.options( # type: ignore\n **self.resources,\n ).remote(self.client_fn, self.cid, ins)\n try:\n res = ray.get(future_paramseters_res, timeout=timeout)\n except Exception as ex:\n log(ERROR, ex)\n raise ex\n return cast(\n common.GetParametersRes,\n res,\n )\n\n def fit(self, ins: common.FitIns, timeout: Optional[float]) -> common.FitRes:\n \"\"\"Train model parameters on the locally held dataset.\"\"\"\n future_fit_res = launch_and_fit.options( # type: ignore\n **self.resources,\n ).remote(self.client_fn, self.cid, ins)\n try:\n res = ray.get(future_fit_res, timeout=timeout)\n except Exception as ex:\n log(ERROR, ex)\n raise ex\n return cast(\n common.FitRes,\n res,\n )\n\n def evaluate(\n self, ins: common.EvaluateIns, timeout: Optional[float]\n ) -> common.EvaluateRes:\n \"\"\"Evaluate model parameters on the locally held dataset.\"\"\"\n future_evaluate_res = launch_and_evaluate.options( # type: ignore\n **self.resources,\n ).remote(self.client_fn, self.cid, ins)\n try:\n res = ray.get(future_evaluate_res, timeout=timeout)\n except Exception as ex:\n log(ERROR, ex)\n raise ex\n return cast(\n common.EvaluateRes,\n res,\n )\n\n def reconnect(\n self, ins: common.ReconnectIns, timeout: Optional[float]\n ) -> common.DisconnectRes:\n \"\"\"Disconnect and (optionally) reconnect later.\"\"\"\n return common.DisconnectRes(reason=\"\") # Nothing to do here (yet)\n\n\[email protected]\ndef launch_and_get_properties(\n client_fn: ClientFn, cid: str, get_properties_ins: common.GetPropertiesIns\n) -> common.GetPropertiesRes:\n \"\"\"Exectue get_properties remotely.\"\"\"\n client: Client = _create_client(client_fn, cid)\n return maybe_call_get_properties(\n client=client,\n get_properties_ins=get_properties_ins,\n )\n\n\[email protected]\ndef launch_and_get_parameters(\n client_fn: ClientFn, cid: str, get_parameters_ins: common.GetParametersIns\n) -> common.GetParametersRes:\n \"\"\"Exectue get_parameters remotely.\"\"\"\n client: Client = _create_client(client_fn, cid)\n return maybe_call_get_parameters(\n client=client,\n get_parameters_ins=get_parameters_ins,\n )\n\n\[email protected]\ndef launch_and_fit(\n client_fn: ClientFn, cid: str, fit_ins: common.FitIns\n) -> common.FitRes:\n \"\"\"Exectue fit remotely.\"\"\"\n client: Client = _create_client(client_fn, cid)\n return maybe_call_fit(\n client=client,\n fit_ins=fit_ins,\n )\n\n\[email protected]\ndef launch_and_evaluate(\n client_fn: ClientFn, cid: str, evaluate_ins: common.EvaluateIns\n) -> common.EvaluateRes:\n \"\"\"Exectue evaluate remotely.\"\"\"\n client: Client = _create_client(client_fn, cid)\n return maybe_call_evaluate(\n client=client,\n evaluate_ins=evaluate_ins,\n )\n\n\ndef _create_client(client_fn: ClientFn, cid: str) -> Client:\n \"\"\"Create a client instance.\"\"\"\n client_like: ClientLike = client_fn(cid)\n return to_client(client_like=client_like)\n", "path": "src/py/flwr/simulation/ray_transport/ray_client_proxy.py"}]}
| 3,869 | 371 |
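The flower record above turns on Python's standard logging level filtering: a record is emitted only when its level is at or above the logger's threshold. The following standalone sketch (stdlib `logging` only; the logger name and handler setup are assumptions for the example) shows why the patched `log(ERROR, ex)` calls surface a client failure that `log(DEBUG, ex)` silently drops once the logger is set to INFO, as in the reproduction script.

```python
import logging

logger = logging.getLogger("flwr-sketch")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)  # mirrors flwr_logger.setLevel(logging.INFO) in the issue

try:
    assert 1 == 0  # stand-in for an assertion failing inside a simulated client
except AssertionError as ex:
    logger.log(logging.DEBUG, "client failed: %r", ex)  # filtered: DEBUG < INFO, nothing printed
    logger.log(logging.ERROR, "client failed: %r", ex)  # emitted: ERROR >= INFO, failure is visible
```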
gh_patches_debug_3626 | rasdani/github-patches | git_diff | ivy-llc__ivy-25492 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
multinomial
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/random.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_supported_dtypes
4 from ivy.func_wrapper import with_supported_device_and_dtypes, with_unsupported_dtypes
5 from ivy.functional.frontends.paddle.func_wrapper import (
6 to_ivy_arrays_and_back,
7 )
8
9
10 @with_supported_dtypes(
11 {"2.5.1 and below": ("float32", "float64")},
12 "paddle",
13 )
14 @to_ivy_arrays_and_back
15 def normal(mean=0.0, std=1.0, shape=None, name=None):
16 return ivy.random_normal(mean=mean, std=std, shape=shape)
17
18
19 @with_supported_dtypes(
20 {"2.5.1 and below": ("float32", "float64")},
21 "paddle",
22 )
23 @to_ivy_arrays_and_back
24 def poisson(x, name=None):
25 return ivy.poisson(x, shape=None, device=None, dtype=None, seed=None, out=None)
26
27
28 @with_supported_device_and_dtypes(
29 {
30 "2.5.1 and above": {
31 "cpu": (
32 "bfloat16",
33 "float32",
34 "float64",
35 ),
36 "gpu": (
37 "bfloat16",
38 "float16",
39 "float32",
40 "float64",
41 ),
42 },
43 "2.4.2 and below": {
44 "cpu": (
45 "float32",
46 "float64",
47 ),
48 "gpu": (
49 "float16",
50 "float32",
51 "float64",
52 ),
53 },
54 },
55 "paddle",
56 )
57 @to_ivy_arrays_and_back
58 def rand(shape, dtype=None, name=None):
59 return ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=None)
60
61
62 @to_ivy_arrays_and_back
63 def randint(low=0, high=None, shape=[1], dtype=None, name=None):
64 return ivy.randint(low, high, shape=shape, dtype=dtype)
65
66
67 @with_unsupported_dtypes(
68 {"2.5.1 and below": ("int16", "float16", "bfloat16", "uint8")},
69 "paddle",
70 )
71 @to_ivy_arrays_and_back
72 def randint_like(x, low=0, high=None, dtype=None, name=None):
73 if high is None:
74 high = low
75 low = 0
76 if high <= 0:
77 raise ivy.exceptions.IvyError(
78 "If high is None, low must be greater than 0, but received low = 0."
79 )
80 return ivy.randint(low, high, shape=x.shape, dtype=dtype, seed=None)
81
82
83 def randn(shape, dtype=None, name=None):
84 if dtype not in ["float32", "float64"]:
85 raise ivy.exceptions.IvyError(
86 "Unsupported dtype for randn, only float32 and float64 are supported, "
87 )
88 return ivy.random_normal(shape=shape, dtype=dtype, seed=None)
89
90
91 @with_supported_dtypes(
92 {"2.5.1 and below": ("float32", "float64")},
93 "paddle",
94 )
95 @to_ivy_arrays_and_back
96 def standard_normal(shape, dtype=None, name=None):
97 return ivy.random_normal(mean=0, std=1, shape=shape, dtype=dtype)
98
99
100 @with_supported_dtypes(
101 {"2.5.1 and below": ("float32", "float64")},
102 "paddle",
103 )
104 @to_ivy_arrays_and_back
105 def uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None):
106 return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed)
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ivy/functional/frontends/paddle/random.py b/ivy/functional/frontends/paddle/random.py
--- a/ivy/functional/frontends/paddle/random.py
+++ b/ivy/functional/frontends/paddle/random.py
@@ -7,6 +7,16 @@
)
+@with_supported_dtypes(
+ {"2.5.1 and below": ("float32", "float64")},
+ "paddle",
+)
+@to_ivy_arrays_and_back
+def multinomial(x, num_samples=1, replacement=False, name=None):
+ n = num_samples + 1
+ return ivy.multinomial(n, num_samples, probs=x, replace=replacement)
+
+
@with_supported_dtypes(
{"2.5.1 and below": ("float32", "float64")},
"paddle",
|
{"golden_diff": "diff --git a/ivy/functional/frontends/paddle/random.py b/ivy/functional/frontends/paddle/random.py\n--- a/ivy/functional/frontends/paddle/random.py\n+++ b/ivy/functional/frontends/paddle/random.py\n@@ -7,6 +7,16 @@\n )\n \n \n+@with_supported_dtypes(\n+ {\"2.5.1 and below\": (\"float32\", \"float64\")},\n+ \"paddle\",\n+)\n+@to_ivy_arrays_and_back\n+def multinomial(x, num_samples=1, replacement=False, name=None):\n+ n = num_samples + 1\n+ return ivy.multinomial(n, num_samples, probs=x, replace=replacement)\n+\n+\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n", "issue": "multinomial\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.func_wrapper import with_supported_device_and_dtypes, with_unsupported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef normal(mean=0.0, std=1.0, shape=None, name=None):\n return ivy.random_normal(mean=mean, std=std, shape=shape)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef poisson(x, name=None):\n return ivy.poisson(x, shape=None, device=None, dtype=None, seed=None, out=None)\n\n\n@with_supported_device_and_dtypes(\n {\n \"2.5.1 and above\": {\n \"cpu\": (\n \"bfloat16\",\n \"float32\",\n \"float64\",\n ),\n \"gpu\": (\n \"bfloat16\",\n \"float16\",\n \"float32\",\n \"float64\",\n ),\n },\n \"2.4.2 and below\": {\n \"cpu\": (\n \"float32\",\n \"float64\",\n ),\n \"gpu\": (\n \"float16\",\n \"float32\",\n \"float64\",\n ),\n },\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef rand(shape, dtype=None, name=None):\n return ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=None)\n\n\n@to_ivy_arrays_and_back\ndef randint(low=0, high=None, shape=[1], dtype=None, name=None):\n return ivy.randint(low, high, shape=shape, dtype=dtype)\n\n\n@with_unsupported_dtypes(\n {\"2.5.1 and below\": (\"int16\", \"float16\", \"bfloat16\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef randint_like(x, low=0, high=None, dtype=None, name=None):\n if high is None:\n high = low\n low = 0\n if high <= 0:\n raise ivy.exceptions.IvyError(\n \"If high is None, low must be greater than 0, but received low = 0.\"\n )\n return ivy.randint(low, high, shape=x.shape, dtype=dtype, seed=None)\n\n\ndef randn(shape, dtype=None, name=None):\n if dtype not in [\"float32\", \"float64\"]:\n raise ivy.exceptions.IvyError(\n \"Unsupported dtype for randn, only float32 and float64 are supported, \"\n )\n return ivy.random_normal(shape=shape, dtype=dtype, seed=None)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef standard_normal(shape, dtype=None, name=None):\n return ivy.random_normal(mean=0, std=1, shape=shape, dtype=dtype)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None):\n return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed)\n", "path": "ivy/functional/frontends/paddle/random.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.func_wrapper import 
with_supported_device_and_dtypes, with_unsupported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef multinomial(x, num_samples=1, replacement=False, name=None):\n n = num_samples + 1\n return ivy.multinomial(n, num_samples, probs=x, replace=replacement)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef normal(mean=0.0, std=1.0, shape=None, name=None):\n return ivy.random_normal(mean=mean, std=std, shape=shape)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef poisson(x, name=None):\n return ivy.poisson(x, shape=None, device=None, dtype=None, seed=None, out=None)\n\n\n@with_supported_device_and_dtypes(\n {\n \"2.5.1 and above\": {\n \"cpu\": (\n \"bfloat16\",\n \"float32\",\n \"float64\",\n ),\n \"gpu\": (\n \"bfloat16\",\n \"float16\",\n \"float32\",\n \"float64\",\n ),\n },\n \"2.4.2 and below\": {\n \"cpu\": (\n \"float32\",\n \"float64\",\n ),\n \"gpu\": (\n \"float16\",\n \"float32\",\n \"float64\",\n ),\n },\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef rand(shape, dtype=None, name=None):\n return ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=None)\n\n\n@to_ivy_arrays_and_back\ndef randint(low=0, high=None, shape=[1], dtype=None, name=None):\n return ivy.randint(low, high, shape=shape, dtype=dtype)\n\n\n@with_unsupported_dtypes(\n {\"2.5.1 and below\": (\"int16\", \"float16\", \"bfloat16\", \"uint8\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef randint_like(x, low=0, high=None, dtype=None, name=None):\n if high is None:\n high = low\n low = 0\n if high <= 0:\n raise ivy.exceptions.IvyError(\n \"If high is None, low must be greater than 0, but received low = 0.\"\n )\n return ivy.randint(low, high, shape=x.shape, dtype=dtype, seed=None)\n\n\ndef randn(shape, dtype=None, name=None):\n if dtype not in [\"float32\", \"float64\"]:\n raise ivy.exceptions.IvyError(\n \"Unsupported dtype for randn, only float32 and float64 are supported, \"\n )\n return ivy.random_normal(shape=shape, dtype=dtype, seed=None)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef standard_normal(shape, dtype=None, name=None):\n return ivy.random_normal(mean=0, std=1, shape=shape, dtype=dtype)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef uniform(shape, dtype=None, min=-1.0, max=1.0, seed=0, name=None):\n return ivy.random_uniform(low=min, high=max, shape=shape, dtype=dtype, seed=seed)\n", "path": "ivy/functional/frontends/paddle/random.py"}]}
| 1,324 | 190 |
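The ivy record above only names the missing `multinomial` frontend, so as a reminder of what that operation computes, here is a small NumPy sketch of drawing category indices in proportion to a probability vector. It is illustrative only and does not use the ivy or paddle APIs touched by the patch; the probabilities, seed, and sample count are arbitrary choices for the example.

```python
import numpy as np

probs = np.array([0.1, 0.3, 0.6])
probs = probs / probs.sum()            # samplers expect a normalized distribution
rng = np.random.default_rng(seed=0)

# Draw 4 category indices with replacement; index 2 should dominate over many draws.
samples = rng.choice(len(probs), size=4, replace=True, p=probs)
print(samples)
```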
gh_patches_debug_36920 | rasdani/github-patches | git_diff | falconry__falcon-723 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Path segment in one route's URI template masks the field expression in another route
The following test demonstrates the issue. The assertion fails since the resulting status code is 404, rather than 200. "/v2.0/thing" should route to `/{version}/thing` instead of `/v2.0`.
``` py
def test_string_vs_var(self):
self.api.add_route('/v2.0', self.resource)
self.simulate_request('/v2.0')
self.api.add_route('/{version}/thing', testing.TestResource())
self.simulate_request('/v2.0/thing')
self.assertEqual(self.srmock.status, falcon.HTTP_200)
```
```
/{version}/foo/bar
/v1.0
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `falcon/routing/compiled.py`
Content:
```
1 # Copyright 2013 by Richard Olsson
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import re
16
17
18 TAB_STR = ' ' * 4
19
20
21 class CompiledRouter(object):
22 """Fast URI router which compiles its routing logic to Python code.
23
24 Generally you do not need to use this router class directly, as an
25 instance is created by default when the falcon.API class is initialized.
26
27 The router treats URI paths as a tree of URI segments and searches by
28 checking the URI one segment at a time. Instead of interpreting the route
29 tree for each look-up, it generates inlined, bespoke Python code to
30 perform the search, then compiles that code. This makes the route
31 processing quite fast.
32 """
33
34 def __init__(self):
35 self._roots = []
36 self._find = self._compile()
37 self._code_lines = None
38 self._src = None
39 self._expressions = None
40 self._return_values = None
41
42 def add_route(self, uri_template, method_map, resource):
43 """Adds a route between URI path template and resource."""
44 # Can't start with a number, since these eventually get passed as
45 # args to on_* responders
46 if re.search('{\d', uri_template):
47 raise ValueError('Field names may not start with a digit.')
48
49 if re.search('\s', uri_template):
50 raise ValueError('URI templates may not include whitespace.')
51
52 path = uri_template.strip('/').split('/')
53
54 def insert(nodes, path_index=0):
55 for node in nodes:
56 segment = path[path_index]
57 if node.matches(segment):
58 path_index += 1
59 if path_index == len(path):
60 # NOTE(kgriffs): Override previous node
61 node.method_map = method_map
62 node.resource = resource
63 else:
64 insert(node.children, path_index)
65
66 return
67
68 if node.conflicts_with(segment):
69 raise ValueError('The URI template for this route '
70 "conflicts with another route's "
71 'template.')
72
73 # NOTE(richardolsson): If we got this far, the node doesn't already
74 # exist and needs to be created. This builds a new branch of the
75 # routing tree recursively until it reaches the new node leaf.
76 new_node = CompiledRouterNode(path[path_index])
77 nodes.append(new_node)
78 if path_index == len(path) - 1:
79 new_node.method_map = method_map
80 new_node.resource = resource
81 else:
82 insert(new_node.children, path_index + 1)
83
84 insert(self._roots)
85 self._find = self._compile()
86
87 def find(self, uri):
88 """Finds resource and method map for a URI, or returns None."""
89 path = uri.lstrip('/').split('/')
90 params = {}
91 node = self._find(path, self._return_values, self._expressions, params)
92
93 if node is not None:
94 return node.resource, node.method_map, params
95 else:
96 return None, None, None
97
98 def _compile_tree(self, nodes, indent=1, level=0):
99 """Generates Python code for a routing tree or subtree."""
100
101 def line(text, indent_offset=0):
102 pad = TAB_STR * (indent + indent_offset)
103 self._code_lines.append(pad + text)
104
105 # NOTE(kgriffs): Base case
106 if not nodes:
107 return
108
109 line('if path_len > %d:' % level)
110 indent += 1
111
112 level_indent = indent
113 found_simple = False
114
115 # NOTE(kgriffs & philiptzou): Sort nodes in this sequence:
116 # static nodes(0), complex var nodes(1) and simple var nodes(2).
117 # so that none of them get masked.
118 nodes = sorted(
119 nodes, key=lambda node: node.is_var + (node.is_var and
120 not node.is_complex))
121
122 for node in nodes:
123 if node.is_var:
124 if node.is_complex:
125 # NOTE(richardolsson): Complex nodes are nodes which
126 # contain anything more than a single literal or variable,
127 # and they need to be checked using a pre-compiled regular
128 # expression.
129 expression_idx = len(self._expressions)
130 self._expressions.append(node.var_regex)
131
132 line('match = expressions[%d].match(path[%d]) # %s' % (
133 expression_idx, level, node.var_regex.pattern))
134
135 line('if match is not None:')
136 indent += 1
137 line('params.update(match.groupdict())')
138
139 else:
140 # NOTE(kgriffs): Simple nodes just capture the entire path
141 # segment as the value for the param.
142 line('params["%s"] = path[%d]' % (node.var_name, level))
143
144 # NOTE(kgriffs): We don't allow multiple simple var nodes
145 # to exist at the same level, e.g.:
146 #
147 # /foo/{id}/bar
148 # /foo/{name}/bar
149 #
150 assert len([_node for _node in nodes
151 if _node.is_var and not _node.is_complex]) == 1
152 found_simple = True
153
154 else:
155 # NOTE(kgriffs): Not a param, so must match exactly
156 line('if path[%d] == "%s":' % (level, node.raw_segment))
157 indent += 1
158
159 if node.resource is not None:
160 # NOTE(kgriffs): This is a valid route, so we will want to
161 # return the relevant information.
162 resource_idx = len(self._return_values)
163 self._return_values.append(node)
164
165 self._compile_tree(node.children, indent, level + 1)
166
167 if node.resource is None:
168 line('return None')
169 else:
170 # NOTE(kgriffs): Make sure that we have consumed all of
171 # the segments for the requested route; otherwise we could
172 # mistakenly match "/foo/23/bar" against "/foo/{id}".
173 line('if path_len == %d:' % (level + 1))
174 line('return return_values[%d]' % resource_idx, 1)
175
176 line('return None')
177
178 indent = level_indent
179
180 if not found_simple:
181 line('return None')
182
183 def _compile(self):
184 """Generates Python code for entire routing tree.
185
186 The generated code is compiled and the resulting Python method is
187 returned.
188 """
189 self._return_values = []
190 self._expressions = []
191 self._code_lines = [
192 'def find(path, return_values, expressions, params):',
193 TAB_STR + 'path_len = len(path)',
194 ]
195
196 self._compile_tree(self._roots)
197
198 self._code_lines.append(
199 # PERF(kgriffs): Explicit return of None is faster than implicit
200 TAB_STR + 'return None'
201 )
202
203 self._src = '\n'.join(self._code_lines)
204
205 scope = {}
206 exec(compile(self._src, '<string>', 'exec'), scope)
207
208 return scope['find']
209
210
211 class CompiledRouterNode(object):
212 """Represents a single URI segment in a URI."""
213
214 _regex_vars = re.compile('{([-_a-zA-Z0-9]+)}')
215
216 def __init__(self, raw_segment, method_map=None, resource=None):
217 self.children = []
218
219 self.raw_segment = raw_segment
220 self.method_map = method_map
221 self.resource = resource
222
223 self.is_var = False
224 self.is_complex = False
225 self.var_name = None
226
227 seg = raw_segment.replace('.', '\\.')
228
229 matches = list(self._regex_vars.finditer(seg))
230 if matches:
231 self.is_var = True
232 # NOTE(richardolsson): if there is a single variable and it spans
233 # the entire segment, the segment is uncomplex and the variable
234 # name is simply the string contained within curly braces.
235 if len(matches) == 1 and matches[0].span() == (0, len(seg)):
236 self.is_complex = False
237 self.var_name = raw_segment[1:-1]
238 else:
239 # NOTE(richardolsson): Complex segments need to be converted
240 # into regular expressions will be used to match and extract
241 # variable values. The regular expressions contain both
242 # literal spans and named group expressions for the variables.
243 self.is_complex = True
244 seg_fields = []
245 prev_end_idx = 0
246 for match in matches:
247 var_start_idx, var_end_idx = match.span()
248 seg_fields.append(seg[prev_end_idx:var_start_idx])
249
250 var_name = match.groups()[0].replace('-', '_')
251 seg_fields.append('(?P<%s>[^/]+)' % var_name)
252
253 prev_end_idx = var_end_idx
254
255 seg_fields.append(seg[prev_end_idx:])
256 seg_pattern = ''.join(seg_fields)
257 self.var_regex = re.compile(seg_pattern)
258 else:
259 self.is_var = False
260
261 def matches(self, segment):
262 """Returns True if this node matches the supplied template segment."""
263
264 return segment == self.raw_segment
265
266 def conflicts_with(self, segment):
267 """Returns True if this node conflicts with a given template segment."""
268
269 # NOTE(kgriffs): This method assumes that the caller has already
270 # checked if the segment matches. By definition, only unmatched
271 # segments may conflict, so there isn't any sense in calling
272 # conflicts_with in that case.
273 assert not self.matches(segment)
274
275 # NOTE(kgriffs): Possible combinations are as follows.
276 #
277 # simple, simple ==> True
278 # simple, complex ==> False
279 # simple, string ==> False
280 # complex, simple ==> False
281 # complex, complex ==> (Depend)
282 # complex, string ==> False
283 # string, simple ==> False
284 # string, complex ==> False
285 # string, string ==> False
286 #
287 other = CompiledRouterNode(segment)
288
289 if self.is_var:
290 # NOTE(kgriffs & philiptzou): Falcon does not accept multiple
291 # simple var nodes exist at the same level as following:
292 #
293 # /foo/{thing1}
294 # /foo/{thing2}
295 #
296 # Nor two complex nodes like this:
297 #
298 # /foo/{thing1}.{ext}
299 # /foo/{thing2}.{ext}
300 #
301 # On the other hand, those are all OK:
302 #
303 # /foo/{thing1}
304 # /foo/all
305 # /foo/{thing1}.{ext}
306 # /foo/{thing2}.detail.{ext}
307 #
308 if self.is_complex:
309 if other.is_complex:
310 return (self._regex_vars.sub('v', self.raw_segment) ==
311 self._regex_vars.sub('v', segment))
312
313 return False
314 else:
315 return other.is_var and not other.is_complex
316
317 # NOTE(kgriffs): If self is a static string match, then all the cases
318 # for other are False, so no need to check.
319 return False
320
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/falcon/routing/compiled.py b/falcon/routing/compiled.py
--- a/falcon/routing/compiled.py
+++ b/falcon/routing/compiled.py
@@ -95,7 +95,7 @@
else:
return None, None, None
- def _compile_tree(self, nodes, indent=1, level=0):
+ def _compile_tree(self, nodes, indent=1, level=0, fast_return=True):
"""Generates Python code for a routing tree or subtree."""
def line(text, indent_offset=0):
@@ -119,6 +119,18 @@
nodes, key=lambda node: node.is_var + (node.is_var and
not node.is_complex))
+ # NOTE(kgriffs): Down to this branch in the tree, we can do a
+ # fast 'return None'. See if the nodes at this branch are
+ # all still simple, meaning there is only one possible path.
+ if fast_return:
+ if len(nodes) > 1:
+ # NOTE(kgriffs): There's the possibility of more than
+ # one path.
+ var_nodes = [node for node in nodes if node.is_var]
+ found_var_nodes = bool(var_nodes)
+
+ fast_return = not found_var_nodes
+
for node in nodes:
if node.is_var:
if node.is_complex:
@@ -162,10 +174,11 @@
resource_idx = len(self._return_values)
self._return_values.append(node)
- self._compile_tree(node.children, indent, level + 1)
+ self._compile_tree(node.children, indent, level + 1, fast_return)
if node.resource is None:
- line('return None')
+ if fast_return:
+ line('return None')
else:
# NOTE(kgriffs): Make sure that we have consumed all of
# the segments for the requested route; otherwise we could
@@ -173,11 +186,12 @@
line('if path_len == %d:' % (level + 1))
line('return return_values[%d]' % resource_idx, 1)
- line('return None')
+ if fast_return:
+ line('return None')
indent = level_indent
- if not found_simple:
+ if not found_simple and fast_return:
line('return None')
def _compile(self):
|
{"golden_diff": "diff --git a/falcon/routing/compiled.py b/falcon/routing/compiled.py\n--- a/falcon/routing/compiled.py\n+++ b/falcon/routing/compiled.py\n@@ -95,7 +95,7 @@\n else:\n return None, None, None\n \n- def _compile_tree(self, nodes, indent=1, level=0):\n+ def _compile_tree(self, nodes, indent=1, level=0, fast_return=True):\n \"\"\"Generates Python code for a routing tree or subtree.\"\"\"\n \n def line(text, indent_offset=0):\n@@ -119,6 +119,18 @@\n nodes, key=lambda node: node.is_var + (node.is_var and\n not node.is_complex))\n \n+ # NOTE(kgriffs): Down to this branch in the tree, we can do a\n+ # fast 'return None'. See if the nodes at this branch are\n+ # all still simple, meaning there is only one possible path.\n+ if fast_return:\n+ if len(nodes) > 1:\n+ # NOTE(kgriffs): There's the possibility of more than\n+ # one path.\n+ var_nodes = [node for node in nodes if node.is_var]\n+ found_var_nodes = bool(var_nodes)\n+\n+ fast_return = not found_var_nodes\n+\n for node in nodes:\n if node.is_var:\n if node.is_complex:\n@@ -162,10 +174,11 @@\n resource_idx = len(self._return_values)\n self._return_values.append(node)\n \n- self._compile_tree(node.children, indent, level + 1)\n+ self._compile_tree(node.children, indent, level + 1, fast_return)\n \n if node.resource is None:\n- line('return None')\n+ if fast_return:\n+ line('return None')\n else:\n # NOTE(kgriffs): Make sure that we have consumed all of\n # the segments for the requested route; otherwise we could\n@@ -173,11 +186,12 @@\n line('if path_len == %d:' % (level + 1))\n line('return return_values[%d]' % resource_idx, 1)\n \n- line('return None')\n+ if fast_return:\n+ line('return None')\n \n indent = level_indent\n \n- if not found_simple:\n+ if not found_simple and fast_return:\n line('return None')\n \n def _compile(self):\n", "issue": "Path segment in one route's URI template masks the field expression in another route\nThe following test demonstrates the issue. The assertion fails since the resulting status code is 404, rather than 200. \"/v2.0/thing\" should route to `/{version}/thing` instead of `/v2.0`.\n\n``` py\ndef test_string_vs_var(self):\n self.api.add_route('/v2.0', self.resource)\n self.simulate_request('/v2.0')\n\n self.api.add_route('/{version}/thing', testing.TestResource())\n self.simulate_request('/v2.0/thing')\n self.assertEqual(self.srmock.status, falcon.HTTP_200)\n```\n\n```\n/{version}/foo/bar\n/v1.0\n```\n\n", "before_files": [{"content": "# Copyright 2013 by Richard Olsson\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport re\n\n\nTAB_STR = ' ' * 4\n\n\nclass CompiledRouter(object):\n \"\"\"Fast URI router which compiles its routing logic to Python code.\n\n Generally you do not need to use this router class directly, as an\n instance is created by default when the falcon.API class is initialized.\n\n The router treats URI paths as a tree of URI segments and searches by\n checking the URI one segment at a time. 
Instead of interpreting the route\n tree for each look-up, it generates inlined, bespoke Python code to\n perform the search, then compiles that code. This makes the route\n processing quite fast.\n \"\"\"\n\n def __init__(self):\n self._roots = []\n self._find = self._compile()\n self._code_lines = None\n self._src = None\n self._expressions = None\n self._return_values = None\n\n def add_route(self, uri_template, method_map, resource):\n \"\"\"Adds a route between URI path template and resource.\"\"\"\n # Can't start with a number, since these eventually get passed as\n # args to on_* responders\n if re.search('{\\d', uri_template):\n raise ValueError('Field names may not start with a digit.')\n\n if re.search('\\s', uri_template):\n raise ValueError('URI templates may not include whitespace.')\n\n path = uri_template.strip('/').split('/')\n\n def insert(nodes, path_index=0):\n for node in nodes:\n segment = path[path_index]\n if node.matches(segment):\n path_index += 1\n if path_index == len(path):\n # NOTE(kgriffs): Override previous node\n node.method_map = method_map\n node.resource = resource\n else:\n insert(node.children, path_index)\n\n return\n\n if node.conflicts_with(segment):\n raise ValueError('The URI template for this route '\n \"conflicts with another route's \"\n 'template.')\n\n # NOTE(richardolsson): If we got this far, the node doesn't already\n # exist and needs to be created. This builds a new branch of the\n # routing tree recursively until it reaches the new node leaf.\n new_node = CompiledRouterNode(path[path_index])\n nodes.append(new_node)\n if path_index == len(path) - 1:\n new_node.method_map = method_map\n new_node.resource = resource\n else:\n insert(new_node.children, path_index + 1)\n\n insert(self._roots)\n self._find = self._compile()\n\n def find(self, uri):\n \"\"\"Finds resource and method map for a URI, or returns None.\"\"\"\n path = uri.lstrip('/').split('/')\n params = {}\n node = self._find(path, self._return_values, self._expressions, params)\n\n if node is not None:\n return node.resource, node.method_map, params\n else:\n return None, None, None\n\n def _compile_tree(self, nodes, indent=1, level=0):\n \"\"\"Generates Python code for a routing tree or subtree.\"\"\"\n\n def line(text, indent_offset=0):\n pad = TAB_STR * (indent + indent_offset)\n self._code_lines.append(pad + text)\n\n # NOTE(kgriffs): Base case\n if not nodes:\n return\n\n line('if path_len > %d:' % level)\n indent += 1\n\n level_indent = indent\n found_simple = False\n\n # NOTE(kgriffs & philiptzou): Sort nodes in this sequence:\n # static nodes(0), complex var nodes(1) and simple var nodes(2).\n # so that none of them get masked.\n nodes = sorted(\n nodes, key=lambda node: node.is_var + (node.is_var and\n not node.is_complex))\n\n for node in nodes:\n if node.is_var:\n if node.is_complex:\n # NOTE(richardolsson): Complex nodes are nodes which\n # contain anything more than a single literal or variable,\n # and they need to be checked using a pre-compiled regular\n # expression.\n expression_idx = len(self._expressions)\n self._expressions.append(node.var_regex)\n\n line('match = expressions[%d].match(path[%d]) # %s' % (\n expression_idx, level, node.var_regex.pattern))\n\n line('if match is not None:')\n indent += 1\n line('params.update(match.groupdict())')\n\n else:\n # NOTE(kgriffs): Simple nodes just capture the entire path\n # segment as the value for the param.\n line('params[\"%s\"] = path[%d]' % (node.var_name, level))\n\n # NOTE(kgriffs): We don't allow multiple 
simple var nodes\n # to exist at the same level, e.g.:\n #\n # /foo/{id}/bar\n # /foo/{name}/bar\n #\n assert len([_node for _node in nodes\n if _node.is_var and not _node.is_complex]) == 1\n found_simple = True\n\n else:\n # NOTE(kgriffs): Not a param, so must match exactly\n line('if path[%d] == \"%s\":' % (level, node.raw_segment))\n indent += 1\n\n if node.resource is not None:\n # NOTE(kgriffs): This is a valid route, so we will want to\n # return the relevant information.\n resource_idx = len(self._return_values)\n self._return_values.append(node)\n\n self._compile_tree(node.children, indent, level + 1)\n\n if node.resource is None:\n line('return None')\n else:\n # NOTE(kgriffs): Make sure that we have consumed all of\n # the segments for the requested route; otherwise we could\n # mistakenly match \"/foo/23/bar\" against \"/foo/{id}\".\n line('if path_len == %d:' % (level + 1))\n line('return return_values[%d]' % resource_idx, 1)\n\n line('return None')\n\n indent = level_indent\n\n if not found_simple:\n line('return None')\n\n def _compile(self):\n \"\"\"Generates Python code for entire routing tree.\n\n The generated code is compiled and the resulting Python method is\n returned.\n \"\"\"\n self._return_values = []\n self._expressions = []\n self._code_lines = [\n 'def find(path, return_values, expressions, params):',\n TAB_STR + 'path_len = len(path)',\n ]\n\n self._compile_tree(self._roots)\n\n self._code_lines.append(\n # PERF(kgriffs): Explicit return of None is faster than implicit\n TAB_STR + 'return None'\n )\n\n self._src = '\\n'.join(self._code_lines)\n\n scope = {}\n exec(compile(self._src, '<string>', 'exec'), scope)\n\n return scope['find']\n\n\nclass CompiledRouterNode(object):\n \"\"\"Represents a single URI segment in a URI.\"\"\"\n\n _regex_vars = re.compile('{([-_a-zA-Z0-9]+)}')\n\n def __init__(self, raw_segment, method_map=None, resource=None):\n self.children = []\n\n self.raw_segment = raw_segment\n self.method_map = method_map\n self.resource = resource\n\n self.is_var = False\n self.is_complex = False\n self.var_name = None\n\n seg = raw_segment.replace('.', '\\\\.')\n\n matches = list(self._regex_vars.finditer(seg))\n if matches:\n self.is_var = True\n # NOTE(richardolsson): if there is a single variable and it spans\n # the entire segment, the segment is uncomplex and the variable\n # name is simply the string contained within curly braces.\n if len(matches) == 1 and matches[0].span() == (0, len(seg)):\n self.is_complex = False\n self.var_name = raw_segment[1:-1]\n else:\n # NOTE(richardolsson): Complex segments need to be converted\n # into regular expressions will be used to match and extract\n # variable values. 
The regular expressions contain both\n # literal spans and named group expressions for the variables.\n self.is_complex = True\n seg_fields = []\n prev_end_idx = 0\n for match in matches:\n var_start_idx, var_end_idx = match.span()\n seg_fields.append(seg[prev_end_idx:var_start_idx])\n\n var_name = match.groups()[0].replace('-', '_')\n seg_fields.append('(?P<%s>[^/]+)' % var_name)\n\n prev_end_idx = var_end_idx\n\n seg_fields.append(seg[prev_end_idx:])\n seg_pattern = ''.join(seg_fields)\n self.var_regex = re.compile(seg_pattern)\n else:\n self.is_var = False\n\n def matches(self, segment):\n \"\"\"Returns True if this node matches the supplied template segment.\"\"\"\n\n return segment == self.raw_segment\n\n def conflicts_with(self, segment):\n \"\"\"Returns True if this node conflicts with a given template segment.\"\"\"\n\n # NOTE(kgriffs): This method assumes that the caller has already\n # checked if the segment matches. By definition, only unmatched\n # segments may conflict, so there isn't any sense in calling\n # conflicts_with in that case.\n assert not self.matches(segment)\n\n # NOTE(kgriffs): Possible combinations are as follows.\n #\n # simple, simple ==> True\n # simple, complex ==> False\n # simple, string ==> False\n # complex, simple ==> False\n # complex, complex ==> (Depend)\n # complex, string ==> False\n # string, simple ==> False\n # string, complex ==> False\n # string, string ==> False\n #\n other = CompiledRouterNode(segment)\n\n if self.is_var:\n # NOTE(kgriffs & philiptzou): Falcon does not accept multiple\n # simple var nodes exist at the same level as following:\n #\n # /foo/{thing1}\n # /foo/{thing2}\n #\n # Nor two complex nodes like this:\n #\n # /foo/{thing1}.{ext}\n # /foo/{thing2}.{ext}\n #\n # On the other hand, those are all OK:\n #\n # /foo/{thing1}\n # /foo/all\n # /foo/{thing1}.{ext}\n # /foo/{thing2}.detail.{ext}\n #\n if self.is_complex:\n if other.is_complex:\n return (self._regex_vars.sub('v', self.raw_segment) ==\n self._regex_vars.sub('v', segment))\n\n return False\n else:\n return other.is_var and not other.is_complex\n\n # NOTE(kgriffs): If self is a static string match, then all the cases\n # for other are False, so no need to check.\n return False\n", "path": "falcon/routing/compiled.py"}], "after_files": [{"content": "# Copyright 2013 by Richard Olsson\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport re\n\n\nTAB_STR = ' ' * 4\n\n\nclass CompiledRouter(object):\n \"\"\"Fast URI router which compiles its routing logic to Python code.\n\n Generally you do not need to use this router class directly, as an\n instance is created by default when the falcon.API class is initialized.\n\n The router treats URI paths as a tree of URI segments and searches by\n checking the URI one segment at a time. Instead of interpreting the route\n tree for each look-up, it generates inlined, bespoke Python code to\n perform the search, then compiles that code. 
This makes the route\n processing quite fast.\n \"\"\"\n\n def __init__(self):\n self._roots = []\n self._find = self._compile()\n self._code_lines = None\n self._src = None\n self._expressions = None\n self._return_values = None\n\n def add_route(self, uri_template, method_map, resource):\n \"\"\"Adds a route between URI path template and resource.\"\"\"\n # Can't start with a number, since these eventually get passed as\n # args to on_* responders\n if re.search('{\\d', uri_template):\n raise ValueError('Field names may not start with a digit.')\n\n if re.search('\\s', uri_template):\n raise ValueError('URI templates may not include whitespace.')\n\n path = uri_template.strip('/').split('/')\n\n def insert(nodes, path_index=0):\n for node in nodes:\n segment = path[path_index]\n if node.matches(segment):\n path_index += 1\n if path_index == len(path):\n # NOTE(kgriffs): Override previous node\n node.method_map = method_map\n node.resource = resource\n else:\n insert(node.children, path_index)\n\n return\n\n if node.conflicts_with(segment):\n raise ValueError('The URI template for this route '\n \"conflicts with another route's \"\n 'template.')\n\n # NOTE(richardolsson): If we got this far, the node doesn't already\n # exist and needs to be created. This builds a new branch of the\n # routing tree recursively until it reaches the new node leaf.\n new_node = CompiledRouterNode(path[path_index])\n nodes.append(new_node)\n if path_index == len(path) - 1:\n new_node.method_map = method_map\n new_node.resource = resource\n else:\n insert(new_node.children, path_index + 1)\n\n insert(self._roots)\n self._find = self._compile()\n\n def find(self, uri):\n \"\"\"Finds resource and method map for a URI, or returns None.\"\"\"\n path = uri.lstrip('/').split('/')\n params = {}\n node = self._find(path, self._return_values, self._expressions, params)\n\n if node is not None:\n return node.resource, node.method_map, params\n else:\n return None, None, None\n\n def _compile_tree(self, nodes, indent=1, level=0, fast_return=True):\n \"\"\"Generates Python code for a routing tree or subtree.\"\"\"\n\n def line(text, indent_offset=0):\n pad = TAB_STR * (indent + indent_offset)\n self._code_lines.append(pad + text)\n\n # NOTE(kgriffs): Base case\n if not nodes:\n return\n\n line('if path_len > %d:' % level)\n indent += 1\n\n level_indent = indent\n found_simple = False\n\n # NOTE(kgriffs & philiptzou): Sort nodes in this sequence:\n # static nodes(0), complex var nodes(1) and simple var nodes(2).\n # so that none of them get masked.\n nodes = sorted(\n nodes, key=lambda node: node.is_var + (node.is_var and\n not node.is_complex))\n\n # NOTE(kgriffs): Down to this branch in the tree, we can do a\n # fast 'return None'. 
See if the nodes at this branch are\n # all still simple, meaning there is only one possible path.\n if fast_return:\n if len(nodes) > 1:\n # NOTE(kgriffs): There's the possibility of more than\n # one path.\n var_nodes = [node for node in nodes if node.is_var]\n found_var_nodes = bool(var_nodes)\n\n fast_return = not found_var_nodes\n\n for node in nodes:\n if node.is_var:\n if node.is_complex:\n # NOTE(richardolsson): Complex nodes are nodes which\n # contain anything more than a single literal or variable,\n # and they need to be checked using a pre-compiled regular\n # expression.\n expression_idx = len(self._expressions)\n self._expressions.append(node.var_regex)\n\n line('match = expressions[%d].match(path[%d]) # %s' % (\n expression_idx, level, node.var_regex.pattern))\n\n line('if match is not None:')\n indent += 1\n line('params.update(match.groupdict())')\n\n else:\n # NOTE(kgriffs): Simple nodes just capture the entire path\n # segment as the value for the param.\n line('params[\"%s\"] = path[%d]' % (node.var_name, level))\n\n # NOTE(kgriffs): We don't allow multiple simple var nodes\n # to exist at the same level, e.g.:\n #\n # /foo/{id}/bar\n # /foo/{name}/bar\n #\n assert len([_node for _node in nodes\n if _node.is_var and not _node.is_complex]) == 1\n found_simple = True\n\n else:\n # NOTE(kgriffs): Not a param, so must match exactly\n line('if path[%d] == \"%s\":' % (level, node.raw_segment))\n indent += 1\n\n if node.resource is not None:\n # NOTE(kgriffs): This is a valid route, so we will want to\n # return the relevant information.\n resource_idx = len(self._return_values)\n self._return_values.append(node)\n\n self._compile_tree(node.children, indent, level + 1, fast_return)\n\n if node.resource is None:\n if fast_return:\n line('return None')\n else:\n # NOTE(kgriffs): Make sure that we have consumed all of\n # the segments for the requested route; otherwise we could\n # mistakenly match \"/foo/23/bar\" against \"/foo/{id}\".\n line('if path_len == %d:' % (level + 1))\n line('return return_values[%d]' % resource_idx, 1)\n\n if fast_return:\n line('return None')\n\n indent = level_indent\n\n if not found_simple and fast_return:\n line('return None')\n\n def _compile(self):\n \"\"\"Generates Python code for entire routing tree.\n\n The generated code is compiled and the resulting Python method is\n returned.\n \"\"\"\n self._return_values = []\n self._expressions = []\n self._code_lines = [\n 'def find(path, return_values, expressions, params):',\n TAB_STR + 'path_len = len(path)',\n ]\n\n self._compile_tree(self._roots)\n\n self._code_lines.append(\n # PERF(kgriffs): Explicit return of None is faster than implicit\n TAB_STR + 'return None'\n )\n\n self._src = '\\n'.join(self._code_lines)\n\n scope = {}\n exec(compile(self._src, '<string>', 'exec'), scope)\n\n return scope['find']\n\n\nclass CompiledRouterNode(object):\n \"\"\"Represents a single URI segment in a URI.\"\"\"\n\n _regex_vars = re.compile('{([-_a-zA-Z0-9]+)}')\n\n def __init__(self, raw_segment, method_map=None, resource=None):\n self.children = []\n\n self.raw_segment = raw_segment\n self.method_map = method_map\n self.resource = resource\n\n self.is_var = False\n self.is_complex = False\n self.var_name = None\n\n seg = raw_segment.replace('.', '\\\\.')\n\n matches = list(self._regex_vars.finditer(seg))\n if matches:\n self.is_var = True\n # NOTE(richardolsson): if there is a single variable and it spans\n # the entire segment, the segment is uncomplex and the variable\n # name is simply the string 
contained within curly braces.\n if len(matches) == 1 and matches[0].span() == (0, len(seg)):\n self.is_complex = False\n self.var_name = raw_segment[1:-1]\n else:\n # NOTE(richardolsson): Complex segments need to be converted\n # into regular expressions will be used to match and extract\n # variable values. The regular expressions contain both\n # literal spans and named group expressions for the variables.\n self.is_complex = True\n seg_fields = []\n prev_end_idx = 0\n for match in matches:\n var_start_idx, var_end_idx = match.span()\n seg_fields.append(seg[prev_end_idx:var_start_idx])\n\n var_name = match.groups()[0].replace('-', '_')\n seg_fields.append('(?P<%s>[^/]+)' % var_name)\n\n prev_end_idx = var_end_idx\n\n seg_fields.append(seg[prev_end_idx:])\n seg_pattern = ''.join(seg_fields)\n self.var_regex = re.compile(seg_pattern)\n else:\n self.is_var = False\n\n def matches(self, segment):\n \"\"\"Returns True if this node matches the supplied template segment.\"\"\"\n\n return segment == self.raw_segment\n\n def conflicts_with(self, segment):\n \"\"\"Returns True if this node conflicts with a given template segment.\"\"\"\n\n # NOTE(kgriffs): This method assumes that the caller has already\n # checked if the segment matches. By definition, only unmatched\n # segments may conflict, so there isn't any sense in calling\n # conflicts_with in that case.\n assert not self.matches(segment)\n\n # NOTE(kgriffs): Possible combinations are as follows.\n #\n # simple, simple ==> True\n # simple, complex ==> False\n # simple, string ==> False\n # complex, simple ==> False\n # complex, complex ==> (Depend)\n # complex, string ==> False\n # string, simple ==> False\n # string, complex ==> False\n # string, string ==> False\n #\n other = CompiledRouterNode(segment)\n\n if self.is_var:\n # NOTE(kgriffs & philiptzou): Falcon does not accept multiple\n # simple var nodes exist at the same level as following:\n #\n # /foo/{thing1}\n # /foo/{thing2}\n #\n # Nor two complex nodes like this:\n #\n # /foo/{thing1}.{ext}\n # /foo/{thing2}.{ext}\n #\n # On the other hand, those are all OK:\n #\n # /foo/{thing1}\n # /foo/all\n # /foo/{thing1}.{ext}\n # /foo/{thing2}.detail.{ext}\n #\n if self.is_complex:\n if other.is_complex:\n return (self._regex_vars.sub('v', self.raw_segment) ==\n self._regex_vars.sub('v', segment))\n\n return False\n else:\n return other.is_var and not other.is_complex\n\n # NOTE(kgriffs): If self is a static string match, then all the cases\n # for other are False, so no need to check.\n return False\n", "path": "falcon/routing/compiled.py"}]}
| 3,827 | 550 |
gh_patches_debug_25453
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-113
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`python3.7` raises `flake8` warning
It is a bug in `flake8`:
- https://github.com/PyCQA/pycodestyle/issues/728
We currently allow `python3.7` build to fail.
```
=============================== warnings summary ===============================
tests/test_visitors/test_wrong_class/test_base_class.py::FLAKE8
/home/travis/virtualenv/python3.7.0/lib/python3.7/site-packages/pycodestyle.py:113: FutureWarning: Possible nested set at position 1
EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=================== 1514 passed, 1 warnings in 27.96 seconds ===================
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/compat.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 This module contains ugly hacks and fixes for version compat issues.
5
6 Do not be over-exited to add anything here.
7 """
8
9 import ast
10
11
12 def maybe_set_parent(tree: ast.AST) -> ast.AST:
13 """Sets parents for all nodes that do not have this prop."""
14 for statement in ast.walk(tree):
15 for child in ast.iter_child_nodes(statement):
16 if not hasattr(child, 'parent'): # noqa: Z113
17 setattr(child, 'parent', statement)
18
19 return tree
20
```
Path: `wemake_python_styleguide/checker.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from ast import Module
4 from typing import Generator
5
6 from flake8.options.manager import OptionManager
7
8 from wemake_python_styleguide import constants
9 from wemake_python_styleguide.compat import maybe_set_parent
10 from wemake_python_styleguide.options.config import Configuration
11 from wemake_python_styleguide.types import (
12 CheckerSequence,
13 CheckResult,
14 ConfigurationOptions,
15 )
16 from wemake_python_styleguide.version import version
17 from wemake_python_styleguide.visitors.ast.complexity.counts import (
18 MethodMembersVisitor,
19 ModuleMembersVisitor,
20 )
21 from wemake_python_styleguide.visitors.ast.complexity.function import (
22 FunctionComplexityVisitor,
23 )
24 from wemake_python_styleguide.visitors.ast.complexity.nested import (
25 NestedComplexityVisitor,
26 )
27 from wemake_python_styleguide.visitors.ast.complexity.offset import (
28 OffsetVisitor,
29 )
30 from wemake_python_styleguide.visitors.ast.wrong_class import WrongClassVisitor
31 from wemake_python_styleguide.visitors.ast.wrong_contents import (
32 WrongContentsVisitor,
33 )
34 from wemake_python_styleguide.visitors.ast.wrong_function_call import (
35 WrongFunctionCallVisitor,
36 )
37 from wemake_python_styleguide.visitors.ast.wrong_import import (
38 WrongImportVisitor,
39 )
40 from wemake_python_styleguide.visitors.ast.wrong_keyword import (
41 WrongKeywordVisitor,
42 WrongRaiseVisitor,
43 )
44 from wemake_python_styleguide.visitors.ast.wrong_name import (
45 WrongModuleMetadataVisitor,
46 WrongNameVisitor,
47 )
48 from wemake_python_styleguide.visitors.ast.wrong_string import (
49 WrongStringVisitor,
50 )
51 from wemake_python_styleguide.visitors.filenames.wrong_module_name import (
52 WrongModuleNameVisitor,
53 )
54
55 #: Visitors that should be working by default:
56 ENABLED_VISITORS: CheckerSequence = [
57 # Styling and correctness:
58 WrongRaiseVisitor,
59 WrongFunctionCallVisitor,
60 WrongImportVisitor,
61 WrongKeywordVisitor,
62 WrongNameVisitor,
63 WrongModuleMetadataVisitor,
64 WrongClassVisitor,
65 WrongStringVisitor,
66 WrongContentsVisitor,
67
68 # Complexity:
69 FunctionComplexityVisitor,
70 NestedComplexityVisitor,
71 OffsetVisitor,
72 ModuleMembersVisitor,
73 MethodMembersVisitor,
74
75 # Modules:
76 WrongModuleNameVisitor,
77 ]
78
79
80 class Checker(object):
81 """
82 Main checker class.
83
84 Runs all checks that are bundled with this package.
85 If you want to add new checks they should be added to ``ENABLED_VISITORS``.
86 """
87
88 name = 'wemake-python-styleguide'
89 version = version
90
91 config = Configuration()
92 options: ConfigurationOptions
93
94 def __init__(self, tree: Module, filename: str = constants.STDIN) -> None:
95 """Creates new checker instance."""
96 self.tree = maybe_set_parent(tree)
97 self.filename = filename
98
99 @classmethod
100 def add_options(cls, parser: OptionManager) -> None:
101 """Calls Configuration instance method for registering options."""
102 cls.config.register_options(parser)
103
104 @classmethod
105 def parse_options(cls, options: ConfigurationOptions) -> None:
106 """Parses registered options for providing to the visitor."""
107 cls.options = options
108
109 def run(self) -> Generator[CheckResult, None, None]:
110 """
111 Runs the checker.
112
113 This method is used by `flake8` API.
114 After all configuration is parsed and passed.
115 """
116 for visitor_class in ENABLED_VISITORS:
117 visitor = visitor_class(
118 self.options,
119 tree=self.tree,
120 filename=self.filename,
121 )
122 visitor.run()
123
124 for error in visitor.errors:
125 yield (*error.node_items(), type(self))
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wemake_python_styleguide/checker.py b/wemake_python_styleguide/checker.py
--- a/wemake_python_styleguide/checker.py
+++ b/wemake_python_styleguide/checker.py
@@ -6,7 +6,6 @@
from flake8.options.manager import OptionManager
from wemake_python_styleguide import constants
-from wemake_python_styleguide.compat import maybe_set_parent
from wemake_python_styleguide.options.config import Configuration
from wemake_python_styleguide.types import (
CheckerSequence,
@@ -93,7 +92,7 @@
def __init__(self, tree: Module, filename: str = constants.STDIN) -> None:
"""Creates new checker instance."""
- self.tree = maybe_set_parent(tree)
+ self.tree = tree
self.filename = filename
@classmethod
diff --git a/wemake_python_styleguide/compat.py b/wemake_python_styleguide/compat.py
deleted file mode 100644
--- a/wemake_python_styleguide/compat.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# -*- coding: utf-8 -*-
-
-"""
-This module contains ugly hacks and fixes for version compat issues.
-
-Do not be over-exited to add anything here.
-"""
-
-import ast
-
-
-def maybe_set_parent(tree: ast.AST) -> ast.AST:
- """Sets parents for all nodes that do not have this prop."""
- for statement in ast.walk(tree):
- for child in ast.iter_child_nodes(statement):
- if not hasattr(child, 'parent'): # noqa: Z113
- setattr(child, 'parent', statement)
-
- return tree
|
{"golden_diff": "diff --git a/wemake_python_styleguide/checker.py b/wemake_python_styleguide/checker.py\n--- a/wemake_python_styleguide/checker.py\n+++ b/wemake_python_styleguide/checker.py\n@@ -6,7 +6,6 @@\n from flake8.options.manager import OptionManager\n \n from wemake_python_styleguide import constants\n-from wemake_python_styleguide.compat import maybe_set_parent\n from wemake_python_styleguide.options.config import Configuration\n from wemake_python_styleguide.types import (\n CheckerSequence,\n@@ -93,7 +92,7 @@\n \n def __init__(self, tree: Module, filename: str = constants.STDIN) -> None:\n \"\"\"Creates new checker instance.\"\"\"\n- self.tree = maybe_set_parent(tree)\n+ self.tree = tree\n self.filename = filename\n \n @classmethod\ndiff --git a/wemake_python_styleguide/compat.py b/wemake_python_styleguide/compat.py\ndeleted file mode 100644\n--- a/wemake_python_styleguide/compat.py\n+++ /dev/null\n@@ -1,19 +0,0 @@\n-# -*- coding: utf-8 -*-\n-\n-\"\"\"\n-This module contains ugly hacks and fixes for version compat issues.\n-\n-Do not be over-exited to add anything here.\n-\"\"\"\n-\n-import ast\n-\n-\n-def maybe_set_parent(tree: ast.AST) -> ast.AST:\n- \"\"\"Sets parents for all nodes that do not have this prop.\"\"\"\n- for statement in ast.walk(tree):\n- for child in ast.iter_child_nodes(statement):\n- if not hasattr(child, 'parent'): # noqa: Z113\n- setattr(child, 'parent', statement)\n-\n- return tree\n", "issue": "`python3.7` raises `flake8` warning\nIt is a bug in `flake8`:\r\n- https://github.com/PyCQA/pycodestyle/issues/728\r\n\r\nWe currently allow `python3.7` build to fail.\r\n\r\n```\r\n=============================== warnings summary ===============================\r\ntests/test_visitors/test_wrong_class/test_base_class.py::FLAKE8\r\n /home/travis/virtualenv/python3.7.0/lib/python3.7/site-packages/pycodestyle.py:113: FutureWarning: Possible nested set at position 1\r\n EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')\r\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\r\n=================== 1514 passed, 1 warnings in 27.96 seconds ===================\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThis module contains ugly hacks and fixes for version compat issues.\n\nDo not be over-exited to add anything here.\n\"\"\"\n\nimport ast\n\n\ndef maybe_set_parent(tree: ast.AST) -> ast.AST:\n \"\"\"Sets parents for all nodes that do not have this prop.\"\"\"\n for statement in ast.walk(tree):\n for child in ast.iter_child_nodes(statement):\n if not hasattr(child, 'parent'): # noqa: Z113\n setattr(child, 'parent', statement)\n\n return tree\n", "path": "wemake_python_styleguide/compat.py"}, {"content": "# -*- coding: utf-8 -*-\n\nfrom ast import Module\nfrom typing import Generator\n\nfrom flake8.options.manager import OptionManager\n\nfrom wemake_python_styleguide import constants\nfrom wemake_python_styleguide.compat import maybe_set_parent\nfrom wemake_python_styleguide.options.config import Configuration\nfrom wemake_python_styleguide.types import (\n CheckerSequence,\n CheckResult,\n ConfigurationOptions,\n)\nfrom wemake_python_styleguide.version import version\nfrom wemake_python_styleguide.visitors.ast.complexity.counts import (\n MethodMembersVisitor,\n ModuleMembersVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.complexity.function import (\n FunctionComplexityVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.complexity.nested import (\n NestedComplexityVisitor,\n)\nfrom 
wemake_python_styleguide.visitors.ast.complexity.offset import (\n OffsetVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_class import WrongClassVisitor\nfrom wemake_python_styleguide.visitors.ast.wrong_contents import (\n WrongContentsVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_function_call import (\n WrongFunctionCallVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_import import (\n WrongImportVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_keyword import (\n WrongKeywordVisitor,\n WrongRaiseVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_name import (\n WrongModuleMetadataVisitor,\n WrongNameVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_string import (\n WrongStringVisitor,\n)\nfrom wemake_python_styleguide.visitors.filenames.wrong_module_name import (\n WrongModuleNameVisitor,\n)\n\n#: Visitors that should be working by default:\nENABLED_VISITORS: CheckerSequence = [\n # Styling and correctness:\n WrongRaiseVisitor,\n WrongFunctionCallVisitor,\n WrongImportVisitor,\n WrongKeywordVisitor,\n WrongNameVisitor,\n WrongModuleMetadataVisitor,\n WrongClassVisitor,\n WrongStringVisitor,\n WrongContentsVisitor,\n\n # Complexity:\n FunctionComplexityVisitor,\n NestedComplexityVisitor,\n OffsetVisitor,\n ModuleMembersVisitor,\n MethodMembersVisitor,\n\n # Modules:\n WrongModuleNameVisitor,\n]\n\n\nclass Checker(object):\n \"\"\"\n Main checker class.\n\n Runs all checks that are bundled with this package.\n If you want to add new checks they should be added to ``ENABLED_VISITORS``.\n \"\"\"\n\n name = 'wemake-python-styleguide'\n version = version\n\n config = Configuration()\n options: ConfigurationOptions\n\n def __init__(self, tree: Module, filename: str = constants.STDIN) -> None:\n \"\"\"Creates new checker instance.\"\"\"\n self.tree = maybe_set_parent(tree)\n self.filename = filename\n\n @classmethod\n def add_options(cls, parser: OptionManager) -> None:\n \"\"\"Calls Configuration instance method for registering options.\"\"\"\n cls.config.register_options(parser)\n\n @classmethod\n def parse_options(cls, options: ConfigurationOptions) -> None:\n \"\"\"Parses registered options for providing to the visitor.\"\"\"\n cls.options = options\n\n def run(self) -> Generator[CheckResult, None, None]:\n \"\"\"\n Runs the checker.\n\n This method is used by `flake8` API.\n After all configuration is parsed and passed.\n \"\"\"\n for visitor_class in ENABLED_VISITORS:\n visitor = visitor_class(\n self.options,\n tree=self.tree,\n filename=self.filename,\n )\n visitor.run()\n\n for error in visitor.errors:\n yield (*error.node_items(), type(self))\n", "path": "wemake_python_styleguide/checker.py"}], "after_files": [{"content": null, "path": "wemake_python_styleguide/compat.py"}, {"content": "# -*- coding: utf-8 -*-\n\nfrom ast import Module\nfrom typing import Generator\n\nfrom flake8.options.manager import OptionManager\n\nfrom wemake_python_styleguide import constants\nfrom wemake_python_styleguide.options.config import Configuration\nfrom wemake_python_styleguide.types import (\n CheckerSequence,\n CheckResult,\n ConfigurationOptions,\n)\nfrom wemake_python_styleguide.version import version\nfrom wemake_python_styleguide.visitors.ast.complexity.counts import (\n MethodMembersVisitor,\n ModuleMembersVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.complexity.function import (\n FunctionComplexityVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.complexity.nested import (\n 
NestedComplexityVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.complexity.offset import (\n OffsetVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_class import WrongClassVisitor\nfrom wemake_python_styleguide.visitors.ast.wrong_contents import (\n WrongContentsVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_function_call import (\n WrongFunctionCallVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_import import (\n WrongImportVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_keyword import (\n WrongKeywordVisitor,\n WrongRaiseVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_name import (\n WrongModuleMetadataVisitor,\n WrongNameVisitor,\n)\nfrom wemake_python_styleguide.visitors.ast.wrong_string import (\n WrongStringVisitor,\n)\nfrom wemake_python_styleguide.visitors.filenames.wrong_module_name import (\n WrongModuleNameVisitor,\n)\n\n#: Visitors that should be working by default:\nENABLED_VISITORS: CheckerSequence = [\n # Styling and correctness:\n WrongRaiseVisitor,\n WrongFunctionCallVisitor,\n WrongImportVisitor,\n WrongKeywordVisitor,\n WrongNameVisitor,\n WrongModuleMetadataVisitor,\n WrongClassVisitor,\n WrongStringVisitor,\n WrongContentsVisitor,\n\n # Complexity:\n FunctionComplexityVisitor,\n NestedComplexityVisitor,\n OffsetVisitor,\n ModuleMembersVisitor,\n MethodMembersVisitor,\n\n # Modules:\n WrongModuleNameVisitor,\n]\n\n\nclass Checker(object):\n \"\"\"\n Main checker class.\n\n Runs all checks that are bundled with this package.\n If you want to add new checks they should be added to ``ENABLED_VISITORS``.\n \"\"\"\n\n name = 'wemake-python-styleguide'\n version = version\n\n config = Configuration()\n options: ConfigurationOptions\n\n def __init__(self, tree: Module, filename: str = constants.STDIN) -> None:\n \"\"\"Creates new checker instance.\"\"\"\n self.tree = tree\n self.filename = filename\n\n @classmethod\n def add_options(cls, parser: OptionManager) -> None:\n \"\"\"Calls Configuration instance method for registering options.\"\"\"\n cls.config.register_options(parser)\n\n @classmethod\n def parse_options(cls, options: ConfigurationOptions) -> None:\n \"\"\"Parses registered options for providing to the visitor.\"\"\"\n cls.options = options\n\n def run(self) -> Generator[CheckResult, None, None]:\n \"\"\"\n Runs the checker.\n\n This method is used by `flake8` API.\n After all configuration is parsed and passed.\n \"\"\"\n for visitor_class in ENABLED_VISITORS:\n visitor = visitor_class(\n self.options,\n tree=self.tree,\n filename=self.filename,\n )\n visitor.run()\n\n for error in visitor.errors:\n yield (*error.node_items(), type(self))\n", "path": "wemake_python_styleguide/checker.py"}]}
| 1,678 | 381 |
gh_patches_debug_28846
|
rasdani/github-patches
|
git_diff
|
mozilla__pontoon-2416
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove aurora redirects
I just looked at our root urls.py, and saw a bunch of aurora-related redirects.
It's been ... a decade or so, let's get rid of them.
CC @flodolo
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pontoon/urls.py`
Content:
```
1 from django.urls import include, path, register_converter
2 from django.urls.converters import StringConverter
3 from django.contrib import admin
4 from django.contrib.auth import logout
5 from django.views.generic import RedirectView, TemplateView
6
7 from pontoon.teams.views import team
8
9
10 class LocaleConverter(StringConverter):
11 regex = r"[A-Za-z0-9\-\@\.]+"
12
13
14 register_converter(LocaleConverter, "locale")
15
16 pontoon_js_view = TemplateView.as_view(
17 template_name="js/pontoon.js", content_type="text/javascript"
18 )
19
20 permission_denied_view = TemplateView.as_view(template_name="403.html")
21 page_not_found_view = TemplateView.as_view(template_name="404.html")
22 server_error_view = TemplateView.as_view(template_name="500.html")
23
24 urlpatterns = [
25 # Redirect legacy Aurora projects
26 path(
27 "projects/firefox-aurora/<path:url>",
28 RedirectView.as_view(url="/projects/firefox/%(url)s", permanent=True),
29 ),
30 path(
31 "projects/firefox-for-android-aurora/<path:url>",
32 RedirectView.as_view(
33 url="/projects/firefox-for-android/%(url)s", permanent=True
34 ),
35 ),
36 path(
37 "projects/thunderbird-aurora/<path:url>",
38 RedirectView.as_view(url="/projects/thunderbird/%(url)s", permanent=True),
39 ),
40 path(
41 "projects/lightning-aurora/<path:url>",
42 RedirectView.as_view(url="/projects/lightning/%(url)s", permanent=True),
43 ),
44 path(
45 "projects/seamonkey-aurora/<path:url>",
46 RedirectView.as_view(url="/projects/seamonkey/%(url)s", permanent=True),
47 ),
48 path(
49 "<locale:locale>/firefox-aurora/<path:url>",
50 RedirectView.as_view(url="/%(locale)s/firefox/%(url)s", permanent=True),
51 ),
52 path(
53 "<locale:locale>/firefox-for-android-aurora/<path:url>",
54 RedirectView.as_view(
55 url="/%(locale)s/firefox-for-android/%(url)s", permanent=True
56 ),
57 ),
58 path(
59 "<locale:locale>/thunderbird-aurora/<path:url>",
60 RedirectView.as_view(url="/%(locale)s/thunderbird/%(url)s", permanent=True),
61 ),
62 path(
63 "<locale:locale>/lightning-aurora/<path:url>",
64 RedirectView.as_view(url="/%(locale)s/lightning/%(url)s", permanent=True),
65 ),
66 path(
67 "<locale:locale>/seamonkey-aurora/<path:url>",
68 RedirectView.as_view(url="/%(locale)s/seamonkey/%(url)s", permanent=True),
69 ),
70 # Accounts
71 path("accounts/", include("pontoon.allauth_urls")),
72 # Admin
73 path("admin/", include("pontoon.administration.urls")),
74 # Django admin: Disable the login form
75 path("a/login/", permission_denied_view),
76 # Django admin
77 path("a/", admin.site.urls),
78 # Logout
79 path("signout/", logout, {"next_page": "/"}, name="signout"),
80 # Error pages
81 path("403/", permission_denied_view),
82 path("404/", page_not_found_view),
83 path("500/", server_error_view),
84 # Robots.txt
85 path(
86 "robots.txt",
87 TemplateView.as_view(template_name="robots.txt", content_type="text/plain"),
88 ),
89 # contribute.json
90 path(
91 "contribute.json",
92 TemplateView.as_view(
93 template_name="contribute.json", content_type="text/plain"
94 ),
95 ),
96 # Favicon
97 path(
98 "favicon.ico",
99 RedirectView.as_view(url="/static/img/favicon.ico", permanent=True),
100 ),
101 # Include script
102 path("pontoon.js", pontoon_js_view),
103 path("static/js/pontoon.js", pontoon_js_view),
104 # Include URL configurations from installed apps
105 path("terminology/", include("pontoon.terminology.urls")),
106 path("translations/", include("pontoon.translations.urls")),
107 path("", include("pontoon.teams.urls")),
108 path("", include("pontoon.tour.urls")),
109 path("", include("pontoon.tags.urls")),
110 path("", include("pontoon.sync.urls")),
111 path("", include("pontoon.projects.urls")),
112 path("", include("pontoon.machinery.urls")),
113 path("", include("pontoon.contributors.urls")),
114 path("", include("pontoon.localizations.urls")),
115 path("", include("pontoon.base.urls")),
116 path("", include("pontoon.translate.urls")),
117 path("", include("pontoon.batch.urls")),
118 path("", include("pontoon.api.urls")),
119 path("", include("pontoon.homepage.urls")),
120 path("", include("pontoon.in_context.urls")),
121 path("", include("pontoon.uxactionlog.urls")),
122 # Team page: Must be at the end
123 path("<locale:locale>/", team, name="pontoon.teams.team"),
124 ]
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pontoon/urls.py b/pontoon/urls.py
--- a/pontoon/urls.py
+++ b/pontoon/urls.py
@@ -22,51 +22,6 @@
server_error_view = TemplateView.as_view(template_name="500.html")
urlpatterns = [
- # Redirect legacy Aurora projects
- path(
- "projects/firefox-aurora/<path:url>",
- RedirectView.as_view(url="/projects/firefox/%(url)s", permanent=True),
- ),
- path(
- "projects/firefox-for-android-aurora/<path:url>",
- RedirectView.as_view(
- url="/projects/firefox-for-android/%(url)s", permanent=True
- ),
- ),
- path(
- "projects/thunderbird-aurora/<path:url>",
- RedirectView.as_view(url="/projects/thunderbird/%(url)s", permanent=True),
- ),
- path(
- "projects/lightning-aurora/<path:url>",
- RedirectView.as_view(url="/projects/lightning/%(url)s", permanent=True),
- ),
- path(
- "projects/seamonkey-aurora/<path:url>",
- RedirectView.as_view(url="/projects/seamonkey/%(url)s", permanent=True),
- ),
- path(
- "<locale:locale>/firefox-aurora/<path:url>",
- RedirectView.as_view(url="/%(locale)s/firefox/%(url)s", permanent=True),
- ),
- path(
- "<locale:locale>/firefox-for-android-aurora/<path:url>",
- RedirectView.as_view(
- url="/%(locale)s/firefox-for-android/%(url)s", permanent=True
- ),
- ),
- path(
- "<locale:locale>/thunderbird-aurora/<path:url>",
- RedirectView.as_view(url="/%(locale)s/thunderbird/%(url)s", permanent=True),
- ),
- path(
- "<locale:locale>/lightning-aurora/<path:url>",
- RedirectView.as_view(url="/%(locale)s/lightning/%(url)s", permanent=True),
- ),
- path(
- "<locale:locale>/seamonkey-aurora/<path:url>",
- RedirectView.as_view(url="/%(locale)s/seamonkey/%(url)s", permanent=True),
- ),
# Accounts
path("accounts/", include("pontoon.allauth_urls")),
# Admin
|
{"golden_diff": "diff --git a/pontoon/urls.py b/pontoon/urls.py\n--- a/pontoon/urls.py\n+++ b/pontoon/urls.py\n@@ -22,51 +22,6 @@\n server_error_view = TemplateView.as_view(template_name=\"500.html\")\n \n urlpatterns = [\n- # Redirect legacy Aurora projects\n- path(\n- \"projects/firefox-aurora/<path:url>\",\n- RedirectView.as_view(url=\"/projects/firefox/%(url)s\", permanent=True),\n- ),\n- path(\n- \"projects/firefox-for-android-aurora/<path:url>\",\n- RedirectView.as_view(\n- url=\"/projects/firefox-for-android/%(url)s\", permanent=True\n- ),\n- ),\n- path(\n- \"projects/thunderbird-aurora/<path:url>\",\n- RedirectView.as_view(url=\"/projects/thunderbird/%(url)s\", permanent=True),\n- ),\n- path(\n- \"projects/lightning-aurora/<path:url>\",\n- RedirectView.as_view(url=\"/projects/lightning/%(url)s\", permanent=True),\n- ),\n- path(\n- \"projects/seamonkey-aurora/<path:url>\",\n- RedirectView.as_view(url=\"/projects/seamonkey/%(url)s\", permanent=True),\n- ),\n- path(\n- \"<locale:locale>/firefox-aurora/<path:url>\",\n- RedirectView.as_view(url=\"/%(locale)s/firefox/%(url)s\", permanent=True),\n- ),\n- path(\n- \"<locale:locale>/firefox-for-android-aurora/<path:url>\",\n- RedirectView.as_view(\n- url=\"/%(locale)s/firefox-for-android/%(url)s\", permanent=True\n- ),\n- ),\n- path(\n- \"<locale:locale>/thunderbird-aurora/<path:url>\",\n- RedirectView.as_view(url=\"/%(locale)s/thunderbird/%(url)s\", permanent=True),\n- ),\n- path(\n- \"<locale:locale>/lightning-aurora/<path:url>\",\n- RedirectView.as_view(url=\"/%(locale)s/lightning/%(url)s\", permanent=True),\n- ),\n- path(\n- \"<locale:locale>/seamonkey-aurora/<path:url>\",\n- RedirectView.as_view(url=\"/%(locale)s/seamonkey/%(url)s\", permanent=True),\n- ),\n # Accounts\n path(\"accounts/\", include(\"pontoon.allauth_urls\")),\n # Admin\n", "issue": "Remove aurora redirects\nI just looked at our root urls.py, and saw a bunch of aurora-related redirects.\r\n\r\nIt's been ... 
a decade or so, let's get rid of them.\r\n\r\nCC @flodolo \n", "before_files": [{"content": "from django.urls import include, path, register_converter\nfrom django.urls.converters import StringConverter\nfrom django.contrib import admin\nfrom django.contrib.auth import logout\nfrom django.views.generic import RedirectView, TemplateView\n\nfrom pontoon.teams.views import team\n\n\nclass LocaleConverter(StringConverter):\n regex = r\"[A-Za-z0-9\\-\\@\\.]+\"\n\n\nregister_converter(LocaleConverter, \"locale\")\n\npontoon_js_view = TemplateView.as_view(\n template_name=\"js/pontoon.js\", content_type=\"text/javascript\"\n)\n\npermission_denied_view = TemplateView.as_view(template_name=\"403.html\")\npage_not_found_view = TemplateView.as_view(template_name=\"404.html\")\nserver_error_view = TemplateView.as_view(template_name=\"500.html\")\n\nurlpatterns = [\n # Redirect legacy Aurora projects\n path(\n \"projects/firefox-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/firefox/%(url)s\", permanent=True),\n ),\n path(\n \"projects/firefox-for-android-aurora/<path:url>\",\n RedirectView.as_view(\n url=\"/projects/firefox-for-android/%(url)s\", permanent=True\n ),\n ),\n path(\n \"projects/thunderbird-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/thunderbird/%(url)s\", permanent=True),\n ),\n path(\n \"projects/lightning-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/lightning/%(url)s\", permanent=True),\n ),\n path(\n \"projects/seamonkey-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/seamonkey/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/firefox-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/firefox/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/firefox-for-android-aurora/<path:url>\",\n RedirectView.as_view(\n url=\"/%(locale)s/firefox-for-android/%(url)s\", permanent=True\n ),\n ),\n path(\n \"<locale:locale>/thunderbird-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/thunderbird/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/lightning-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/lightning/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/seamonkey-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/seamonkey/%(url)s\", permanent=True),\n ),\n # Accounts\n path(\"accounts/\", include(\"pontoon.allauth_urls\")),\n # Admin\n path(\"admin/\", include(\"pontoon.administration.urls\")),\n # Django admin: Disable the login form\n path(\"a/login/\", permission_denied_view),\n # Django admin\n path(\"a/\", admin.site.urls),\n # Logout\n path(\"signout/\", logout, {\"next_page\": \"/\"}, name=\"signout\"),\n # Error pages\n path(\"403/\", permission_denied_view),\n path(\"404/\", page_not_found_view),\n path(\"500/\", server_error_view),\n # Robots.txt\n path(\n \"robots.txt\",\n TemplateView.as_view(template_name=\"robots.txt\", content_type=\"text/plain\"),\n ),\n # contribute.json\n path(\n \"contribute.json\",\n TemplateView.as_view(\n template_name=\"contribute.json\", content_type=\"text/plain\"\n ),\n ),\n # Favicon\n path(\n \"favicon.ico\",\n RedirectView.as_view(url=\"/static/img/favicon.ico\", permanent=True),\n ),\n # Include script\n path(\"pontoon.js\", pontoon_js_view),\n path(\"static/js/pontoon.js\", pontoon_js_view),\n # Include URL configurations from installed apps\n path(\"terminology/\", include(\"pontoon.terminology.urls\")),\n path(\"translations/\", include(\"pontoon.translations.urls\")),\n path(\"\", 
include(\"pontoon.teams.urls\")),\n path(\"\", include(\"pontoon.tour.urls\")),\n path(\"\", include(\"pontoon.tags.urls\")),\n path(\"\", include(\"pontoon.sync.urls\")),\n path(\"\", include(\"pontoon.projects.urls\")),\n path(\"\", include(\"pontoon.machinery.urls\")),\n path(\"\", include(\"pontoon.contributors.urls\")),\n path(\"\", include(\"pontoon.localizations.urls\")),\n path(\"\", include(\"pontoon.base.urls\")),\n path(\"\", include(\"pontoon.translate.urls\")),\n path(\"\", include(\"pontoon.batch.urls\")),\n path(\"\", include(\"pontoon.api.urls\")),\n path(\"\", include(\"pontoon.homepage.urls\")),\n path(\"\", include(\"pontoon.in_context.urls\")),\n path(\"\", include(\"pontoon.uxactionlog.urls\")),\n # Team page: Must be at the end\n path(\"<locale:locale>/\", team, name=\"pontoon.teams.team\"),\n]\n", "path": "pontoon/urls.py"}], "after_files": [{"content": "from django.urls import include, path, register_converter\nfrom django.urls.converters import StringConverter\nfrom django.contrib import admin\nfrom django.contrib.auth import logout\nfrom django.views.generic import RedirectView, TemplateView\n\nfrom pontoon.teams.views import team\n\n\nclass LocaleConverter(StringConverter):\n regex = r\"[A-Za-z0-9\\-\\@\\.]+\"\n\n\nregister_converter(LocaleConverter, \"locale\")\n\npontoon_js_view = TemplateView.as_view(\n template_name=\"js/pontoon.js\", content_type=\"text/javascript\"\n)\n\npermission_denied_view = TemplateView.as_view(template_name=\"403.html\")\npage_not_found_view = TemplateView.as_view(template_name=\"404.html\")\nserver_error_view = TemplateView.as_view(template_name=\"500.html\")\n\nurlpatterns = [\n # Accounts\n path(\"accounts/\", include(\"pontoon.allauth_urls\")),\n # Admin\n path(\"admin/\", include(\"pontoon.administration.urls\")),\n # Django admin: Disable the login form\n path(\"a/login/\", permission_denied_view),\n # Django admin\n path(\"a/\", admin.site.urls),\n # Logout\n path(\"signout/\", logout, {\"next_page\": \"/\"}, name=\"signout\"),\n # Error pages\n path(\"403/\", permission_denied_view),\n path(\"404/\", page_not_found_view),\n path(\"500/\", server_error_view),\n # Robots.txt\n path(\n \"robots.txt\",\n TemplateView.as_view(template_name=\"robots.txt\", content_type=\"text/plain\"),\n ),\n # contribute.json\n path(\n \"contribute.json\",\n TemplateView.as_view(\n template_name=\"contribute.json\", content_type=\"text/plain\"\n ),\n ),\n # Favicon\n path(\n \"favicon.ico\",\n RedirectView.as_view(url=\"/static/img/favicon.ico\", permanent=True),\n ),\n # Include script\n path(\"pontoon.js\", pontoon_js_view),\n path(\"static/js/pontoon.js\", pontoon_js_view),\n # Include URL configurations from installed apps\n path(\"terminology/\", include(\"pontoon.terminology.urls\")),\n path(\"translations/\", include(\"pontoon.translations.urls\")),\n path(\"\", include(\"pontoon.teams.urls\")),\n path(\"\", include(\"pontoon.tour.urls\")),\n path(\"\", include(\"pontoon.tags.urls\")),\n path(\"\", include(\"pontoon.sync.urls\")),\n path(\"\", include(\"pontoon.projects.urls\")),\n path(\"\", include(\"pontoon.machinery.urls\")),\n path(\"\", include(\"pontoon.contributors.urls\")),\n path(\"\", include(\"pontoon.localizations.urls\")),\n path(\"\", include(\"pontoon.base.urls\")),\n path(\"\", include(\"pontoon.translate.urls\")),\n path(\"\", include(\"pontoon.batch.urls\")),\n path(\"\", include(\"pontoon.api.urls\")),\n path(\"\", include(\"pontoon.homepage.urls\")),\n path(\"\", include(\"pontoon.in_context.urls\")),\n path(\"\", 
include(\"pontoon.uxactionlog.urls\")),\n # Team page: Must be at the end\n path(\"<locale:locale>/\", team, name=\"pontoon.teams.team\"),\n]\n", "path": "pontoon/urls.py"}]}
| 1,661 | 540 |
gh_patches_debug_12343 | rasdani/github-patches | git_diff | conan-io__conan-center-index-4080 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] xmlsec/1.2.30: Android build failed
<!--
Please don't forget to update the issue title.
Include all applicable information to help us reproduce your problem.
-->
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **xmlsec/1.2.30**
* Operating System+version: **macOS Catalina (Android cross-build)**
* Compiler+version: **clang 9**
* Conan version: **conan 1.31.3**
* Python version: **Python 3.7.4**
### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)
```
[settings]
os_build=Macos
arch_build=x86_64
compiler=clang
compiler.version=9
compiler.libcxx=libc++
os=Android
os.api_level=21
arch=armv7
build_type=Release
[build_requires]
android_ndk_installer/r21d@bincrafters/stable
cmake_installer/3.15.5@conan/stable
```
### Steps to reproduce (Include if Applicable)
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
ERROR: m4/1.4.18: Error in build() method, line 90
autotools.make()
ConanException: Error 2 while executing make -j12
In file included from ../source_subfolder/lib/freading.c:22:
../source_subfolder/lib/stdio-impl.h:72:22: error: field has incomplete type 'struct __sbuf'
struct __sbuf _ub; /* ungetc buffer */
^
../source_subfolder/lib/stdio-impl.h:72:15: note: forward declaration of 'struct __sbuf'
struct __sbuf _ub; /* ungetc buffer */
^
../source_subfolder/lib/freading.c:41:16: error: no member named '_flags' in 'struct __sFILE'
return (fp_->_flags & __SRD) != 0;
~~~ ^
../source_subfolder/lib/freading.c:41:25: error: use of undeclared identifier '__SRD'
return (fp_->_flags & __SRD) != 0;
^
3 errors generated.
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/libxml2/all/conanfile.py`
Content:
```
1 from conans import ConanFile, tools, AutoToolsBuildEnvironment, VisualStudioBuildEnvironment
2 from contextlib import contextmanager
3 import glob
4 import os
5
6
7 class Libxml2Conan(ConanFile):
8 name = "libxml2"
9 url = "https://github.com/conan-io/conan-center-index"
10 description = "libxml2 is a software library for parsing XML documents"
11 topics = ("XML", "parser", "validation")
12 homepage = "https://xmlsoft.org"
13 license = "MIT"
14 settings = "os", "arch", "compiler", "build_type"
15 generators = "pkg_config"
16
17 # from ./configure and ./win32/configure.js
18 default_options = {'shared': False,
19 'fPIC': True,
20 'include_utils': True,
21 "c14n": True,
22 "catalog": True,
23 "docbook": True,
24 "ftp": True,
25 "http": True,
26 "html": True,
27 "iconv": True,
28 "icu": False,
29 "iso8859x": True,
30 "legacy": True,
31 "mem-debug": False,
32 "output": True,
33 "pattern": True,
34 "push": True,
35 "python": False,
36 "reader": True,
37 "regexps": True,
38 "run-debug": False,
39 "sax1": True,
40 "schemas": True,
41 "schematron": True,
42 "threads": True,
43 "tree": True,
44 "valid": True,
45 "writer": True,
46 "xinclude": True,
47 "xpath": True,
48 "xptr": True,
49 "zlib": True,
50 "lzma": False,
51 }
52
53 options = {name: [True, False] for name in default_options.keys()}
54 _option_names = [name for name in default_options.keys() if name not in ["shared", "fPIC", "include_utils"]]
55
56 _autotools = None
57 _source_subfolder = "source_subfolder"
58
59 def requirements(self):
60 if self.options.zlib:
61 self.requires("zlib/1.2.11")
62 if self.options.lzma:
63 self.requires("xz_utils/5.2.5")
64 if self.options.iconv:
65 self.requires("libiconv/1.16")
66 if self.options.icu:
67 self.requires("icu/68.1")
68
69 def build_requirements(self):
70 if self.settings.compiler != "Visual Studio" and tools.os_info.is_windows and os.environ.get("CONAN_BASH_PATH", None) is None:
71 self.build_requires("msys2/20190524")
72
73 @property
74 def _is_msvc(self):
75 return self.settings.compiler == 'Visual Studio'
76
77 @property
78 def _is_mingw(self):
79 return self.settings.compiler == 'gcc' and self.settings.os == 'Windows'
80
81 def source(self):
82 tools.get(**self.conan_data["sources"][self.version])
83 os.rename("libxml2-{0}".format(self.version), self._source_subfolder)
84
85 def config_options(self):
86 if self.settings.os == "Windows":
87 del self.options.fPIC
88
89 def configure(self):
90 del self.settings.compiler.libcxx
91 del self.settings.compiler.cppstd
92
93 @contextmanager
94 def _msvc_build_environment(self):
95 with tools.chdir(os.path.join(self._source_subfolder, 'win32')):
96 with tools.vcvars(self.settings):
97 with tools.environment_append(VisualStudioBuildEnvironment(self).vars):
98 yield
99
100 def _build_msvc(self):
101 with self._msvc_build_environment():
102 debug = "yes" if self.settings.build_type == "Debug" else "no"
103 static = "no" if self.options.shared else "yes"
104
105 args = ["cscript",
106 "configure.js",
107 "compiler=msvc",
108 "prefix=%s" % self.package_folder,
109 "cruntime=/%s" % self.settings.compiler.runtime,
110 "debug=%s" % debug,
111 "static=%s" % static,
112 'include="%s"' % ";".join(self.deps_cpp_info.include_paths),
113 'lib="%s"' % ";".join(self.deps_cpp_info.lib_paths)]
114
115 for name in self._option_names:
116 cname = {"mem-debug": "mem_debug",
117 "run-debug": "run_debug",
118 "docbook": "docb"}.get(name, name)
119 value = getattr(self.options, name)
120 value = "yes" if value else "no"
121 args.append("%s=%s" % (cname, value))
122
123 configure_command = ' '.join(args)
124 self.output.info(configure_command)
125 self.run(configure_command)
126
127 # Fix library names because they can be not just zlib.lib
128 def fix_library(option, package, old_libname):
129 if option:
130 libs = []
131 for lib in self.deps_cpp_info[package].libs:
132 libname = lib
133 if not libname.endswith('.lib'):
134 libname += '.lib'
135 libs.append(libname)
136 tools.replace_in_file("Makefile.msvc",
137 "LIBS = $(LIBS) %s" % old_libname,
138 "LIBS = $(LIBS) %s" % ' '.join(libs))
139
140 fix_library(self.options.zlib, 'zlib', 'zlib.lib')
141 fix_library(self.options.lzma, 'lzma', 'liblzma.lib')
142 fix_library(self.options.iconv, 'libiconv', 'iconv.lib')
143 fix_library(self.options.icu, 'icu', 'advapi32.lib sicuuc.lib sicuin.lib sicudt.lib')
144 fix_library(self.options.icu, 'icu', 'icuuc.lib icuin.lib icudt.lib')
145
146 self.run("nmake /f Makefile.msvc libxml libxmla libxmladll")
147
148 if self.options.include_utils:
149 self.run("nmake /f Makefile.msvc utils")
150
151
152 def _package_msvc(self):
153 with self._msvc_build_environment():
154 self.run("nmake /f Makefile.msvc install-libs")
155
156 if self.options.include_utils:
157 self.run("nmake /f Makefile.msvc install-dist")
158
159 def _configure_autotools(self):
160 if self._autotools:
161 return self._autotools
162 self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
163 self._autotools.libs = []
164 if not tools.os_info.is_windows:
165 self._autotools.fpic = self.options.fPIC
166 full_install_subfolder = tools.unix_path(self.package_folder) if tools.os_info.is_windows else self.package_folder
167 # fix rpath
168 if self.settings.os == "Macos":
169 tools.replace_in_file(os.path.join(self._source_subfolder, "configure"), r"-install_name \$rpath/", "-install_name ")
170 configure_args = ['--prefix=%s' % full_install_subfolder]
171 if self._autotools.fpic:
172 configure_args.extend(['--with-pic'])
173 if self.options.shared:
174 configure_args.extend(['--enable-shared', '--disable-static'])
175 else:
176 configure_args.extend(['--enable-static', '--disable-shared'])
177
178 for name in self._option_names:
179 value = getattr(self.options, name)
180 value = ("--with-%s" % name) if value else ("--without-%s" % name)
181 configure_args.append(value)
182
183 # Disable --build when building for iPhoneSimulator. The configure script halts on
184 # not knowing if it should cross-compile.
185 build = None
186 if self.settings.os == "iOS" and self.settings.arch == "x86_64":
187 build = False
188
189 self._autotools.configure(args=configure_args, build=build, configure_dir=self._source_subfolder)
190 return self._autotools
191
192 def _patch_sources(self):
193 # Break dependency of install on build
194 for makefile in ("Makefile.mingw", "Makefile.msvc"):
195 tools.replace_in_file(os.path.join(self._source_subfolder, "win32", makefile),
196 "install-libs : all",
197 "install-libs :")
198
199 def build(self):
200 self._patch_sources()
201 if self._is_msvc:
202 self._build_msvc()
203 else:
204 autotools = self._configure_autotools()
205 autotools.make(["libxml2.la"])
206
207 if self.options.include_utils:
208 autotools.make(["xmllint", "xmlcatalog", "xml2-config"])
209
210 def package(self):
211 # copy package license
212 self.copy("COPYING", src=self._source_subfolder, dst="licenses", ignore_case=True, keep_path=False)
213 if self._is_msvc:
214 self._package_msvc()
215 else:
216 autotools = self._configure_autotools()
217 autotools.make(["install-libLTLIBRARIES", "install-data"])
218
219 if self.options.include_utils:
220 autotools.make(["install","xmllint", "xmlcatalog", "xml2-config"])
221
222 os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2.la'))
223
224 for prefix in ["run", "test"]:
225 for test in glob.glob("%s/bin/%s*" % (self.package_folder, prefix)):
226 os.remove(test)
227 for header in ["win32config.h", "wsockcompat.h"]:
228 self.copy(pattern=header, src=os.path.join(self._source_subfolder, "include"),
229 dst=os.path.join("include", "libxml2"), keep_path=False)
230 if self._is_msvc:
231 # remove redundant libraries to avoid confusion
232 if not self.options.shared:
233 os.unlink(os.path.join(self.package_folder, "bin", "libxml2.dll"))
234 os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2_a_dll.lib'))
235 os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2_a.lib' if self.options.shared else 'libxml2.lib'))
236
237 pdb_files = glob.glob(os.path.join(self.package_folder, 'bin', '*.pdb'), recursive=True)
238 for pdb in pdb_files:
239 os.unlink(pdb)
240
241 tools.rmdir(os.path.join(self.package_folder, 'share'))
242 tools.rmdir(os.path.join(self.package_folder, 'lib', 'cmake'))
243 tools.rmdir(os.path.join(self.package_folder, 'lib', 'pkgconfig'))
244
245 def package_info(self):
246 if self._is_msvc:
247 self.cpp_info.libs = ['libxml2' if self.options.shared else 'libxml2_a']
248 else:
249 self.cpp_info.libs = ['xml2']
250 self.cpp_info.includedirs.append(os.path.join("include", "libxml2"))
251 if not self.options.shared:
252 self.cpp_info.defines = ["LIBXML_STATIC"]
253 if self.options.include_utils:
254 bindir = os.path.join(self.package_folder, "bin")
255 self.output.info("Appending PATH environment variable: {}".format(bindir))
256 self.env_info.PATH.append(bindir)
257
258 if self.settings.os in ["Linux", "Macos", "FreeBSD"]:
259 self.cpp_info.system_libs.append('m')
260 if self.settings.os == "Windows":
261 self.cpp_info.system_libs.append('ws2_32')
262 self.cpp_info.names["cmake_find_package"] = "LibXml2"
263 self.cpp_info.names["cmake_find_package_multi"] = "LibXml2"
264 self.cpp_info.names["pkg_config"] = "libxml-2.0"
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/libxml2/all/conanfile.py b/recipes/libxml2/all/conanfile.py
--- a/recipes/libxml2/all/conanfile.py
+++ b/recipes/libxml2/all/conanfile.py
@@ -254,8 +254,7 @@
bindir = os.path.join(self.package_folder, "bin")
self.output.info("Appending PATH environment variable: {}".format(bindir))
self.env_info.PATH.append(bindir)
-
- if self.settings.os in ["Linux", "Macos", "FreeBSD"]:
+ if self.settings.os in ["Linux", "Macos", "FreeBSD", "Android"]:
self.cpp_info.system_libs.append('m')
if self.settings.os == "Windows":
self.cpp_info.system_libs.append('ws2_32')
|
{"golden_diff": "diff --git a/recipes/libxml2/all/conanfile.py b/recipes/libxml2/all/conanfile.py\n--- a/recipes/libxml2/all/conanfile.py\n+++ b/recipes/libxml2/all/conanfile.py\n@@ -254,8 +254,7 @@\n bindir = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n self.env_info.PATH.append(bindir)\n-\n- if self.settings.os in [\"Linux\", \"Macos\", \"FreeBSD\"]:\n+ if self.settings.os in [\"Linux\", \"Macos\", \"FreeBSD\", \"Android\"]:\n self.cpp_info.system_libs.append('m')\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs.append('ws2_32')\n", "issue": "[package] xmlsec/1.2.30: Android build failed\n<!-- \r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n-->\r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **xmlsec/1.2.30**\r\n * Operating System+version: **macOS Catalina (Android cross-build)**\r\n * Compiler+version: **clang 9**\r\n * Conan version: **conan 1.31.3**\r\n * Python version: **Python 3.7.4**\r\n\r\n\r\n### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)\r\n```\r\n[settings]\r\nos_build=Macos\r\narch_build=x86_64\r\n\r\ncompiler=clang\r\ncompiler.version=9\r\ncompiler.libcxx=libc++\r\n\r\nos=Android\r\nos.api_level=21\r\narch=armv7\r\nbuild_type=Release\r\n\r\n[build_requires]\r\nandroid_ndk_installer/r21d@bincrafters/stable\r\ncmake_installer/3.15.5@conan/stable\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nERROR: m4/1.4.18: Error in build() method, line 90\r\n autotools.make()\r\n ConanException: Error 2 while executing make -j12\r\n\r\n\r\nIn file included from ../source_subfolder/lib/freading.c:22:\r\n../source_subfolder/lib/stdio-impl.h:72:22: error: field has incomplete type 'struct __sbuf'\r\n struct __sbuf _ub; /* ungetc buffer */\r\n ^\r\n../source_subfolder/lib/stdio-impl.h:72:15: note: forward declaration of 'struct __sbuf'\r\n struct __sbuf _ub; /* ungetc buffer */\r\n ^\r\n../source_subfolder/lib/freading.c:41:16: error: no member named '_flags' in 'struct __sFILE'\r\n return (fp_->_flags & __SRD) != 0;\r\n ~~~ ^\r\n../source_subfolder/lib/freading.c:41:25: error: use of undeclared identifier '__SRD'\r\n return (fp_->_flags & __SRD) != 0;\r\n ^\r\n3 errors generated.\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "from conans import ConanFile, tools, AutoToolsBuildEnvironment, VisualStudioBuildEnvironment\nfrom contextlib import contextmanager\nimport glob\nimport os\n\n\nclass Libxml2Conan(ConanFile):\n name = \"libxml2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"libxml2 is a software library for parsing XML documents\"\n topics = (\"XML\", \"parser\", \"validation\")\n homepage = \"https://xmlsoft.org\"\n license = \"MIT\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n generators = \"pkg_config\"\n\n # from ./configure and ./win32/configure.js\n default_options = {'shared': False,\n 'fPIC': True,\n 'include_utils': True,\n \"c14n\": True,\n \"catalog\": True,\n \"docbook\": True,\n \"ftp\": True,\n \"http\": True,\n \"html\": True,\n \"iconv\": True,\n \"icu\": False,\n \"iso8859x\": True,\n \"legacy\": True,\n \"mem-debug\": False,\n \"output\": True,\n \"pattern\": 
True,\n \"push\": True,\n \"python\": False,\n \"reader\": True,\n \"regexps\": True,\n \"run-debug\": False,\n \"sax1\": True,\n \"schemas\": True,\n \"schematron\": True,\n \"threads\": True,\n \"tree\": True,\n \"valid\": True,\n \"writer\": True,\n \"xinclude\": True,\n \"xpath\": True,\n \"xptr\": True,\n \"zlib\": True,\n \"lzma\": False,\n }\n\n options = {name: [True, False] for name in default_options.keys()}\n _option_names = [name for name in default_options.keys() if name not in [\"shared\", \"fPIC\", \"include_utils\"]]\n\n _autotools = None\n _source_subfolder = \"source_subfolder\"\n\n def requirements(self):\n if self.options.zlib:\n self.requires(\"zlib/1.2.11\")\n if self.options.lzma:\n self.requires(\"xz_utils/5.2.5\")\n if self.options.iconv:\n self.requires(\"libiconv/1.16\")\n if self.options.icu:\n self.requires(\"icu/68.1\")\n\n def build_requirements(self):\n if self.settings.compiler != \"Visual Studio\" and tools.os_info.is_windows and os.environ.get(\"CONAN_BASH_PATH\", None) is None:\n self.build_requires(\"msys2/20190524\")\n\n @property\n def _is_msvc(self):\n return self.settings.compiler == 'Visual Studio'\n \n @property\n def _is_mingw(self):\n return self.settings.compiler == 'gcc' and self.settings.os == 'Windows'\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"libxml2-{0}\".format(self.version), self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n @contextmanager\n def _msvc_build_environment(self):\n with tools.chdir(os.path.join(self._source_subfolder, 'win32')):\n with tools.vcvars(self.settings):\n with tools.environment_append(VisualStudioBuildEnvironment(self).vars):\n yield\n\n def _build_msvc(self):\n with self._msvc_build_environment():\n debug = \"yes\" if self.settings.build_type == \"Debug\" else \"no\"\n static = \"no\" if self.options.shared else \"yes\"\n\n args = [\"cscript\",\n \"configure.js\",\n \"compiler=msvc\",\n \"prefix=%s\" % self.package_folder,\n \"cruntime=/%s\" % self.settings.compiler.runtime,\n \"debug=%s\" % debug,\n \"static=%s\" % static,\n 'include=\"%s\"' % \";\".join(self.deps_cpp_info.include_paths),\n 'lib=\"%s\"' % \";\".join(self.deps_cpp_info.lib_paths)]\n\n for name in self._option_names:\n cname = {\"mem-debug\": \"mem_debug\",\n \"run-debug\": \"run_debug\",\n \"docbook\": \"docb\"}.get(name, name)\n value = getattr(self.options, name)\n value = \"yes\" if value else \"no\"\n args.append(\"%s=%s\" % (cname, value))\n\n configure_command = ' '.join(args)\n self.output.info(configure_command)\n self.run(configure_command)\n\n # Fix library names because they can be not just zlib.lib\n def fix_library(option, package, old_libname):\n if option:\n libs = []\n for lib in self.deps_cpp_info[package].libs:\n libname = lib\n if not libname.endswith('.lib'):\n libname += '.lib'\n libs.append(libname)\n tools.replace_in_file(\"Makefile.msvc\",\n \"LIBS = $(LIBS) %s\" % old_libname,\n \"LIBS = $(LIBS) %s\" % ' '.join(libs))\n\n fix_library(self.options.zlib, 'zlib', 'zlib.lib')\n fix_library(self.options.lzma, 'lzma', 'liblzma.lib')\n fix_library(self.options.iconv, 'libiconv', 'iconv.lib')\n fix_library(self.options.icu, 'icu', 'advapi32.lib sicuuc.lib sicuin.lib sicudt.lib')\n fix_library(self.options.icu, 'icu', 'icuuc.lib icuin.lib icudt.lib')\n\n self.run(\"nmake /f Makefile.msvc libxml libxmla 
libxmladll\")\n\n if self.options.include_utils:\n self.run(\"nmake /f Makefile.msvc utils\")\n\n\n def _package_msvc(self):\n with self._msvc_build_environment():\n self.run(\"nmake /f Makefile.msvc install-libs\")\n\n if self.options.include_utils:\n self.run(\"nmake /f Makefile.msvc install-dist\")\n\n def _configure_autotools(self):\n if self._autotools:\n return self._autotools\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n self._autotools.libs = []\n if not tools.os_info.is_windows:\n self._autotools.fpic = self.options.fPIC\n full_install_subfolder = tools.unix_path(self.package_folder) if tools.os_info.is_windows else self.package_folder\n # fix rpath\n if self.settings.os == \"Macos\":\n tools.replace_in_file(os.path.join(self._source_subfolder, \"configure\"), r\"-install_name \\$rpath/\", \"-install_name \")\n configure_args = ['--prefix=%s' % full_install_subfolder]\n if self._autotools.fpic:\n configure_args.extend(['--with-pic'])\n if self.options.shared:\n configure_args.extend(['--enable-shared', '--disable-static'])\n else:\n configure_args.extend(['--enable-static', '--disable-shared'])\n\n for name in self._option_names:\n value = getattr(self.options, name)\n value = (\"--with-%s\" % name) if value else (\"--without-%s\" % name)\n configure_args.append(value)\n\n # Disable --build when building for iPhoneSimulator. The configure script halts on\n # not knowing if it should cross-compile.\n build = None\n if self.settings.os == \"iOS\" and self.settings.arch == \"x86_64\":\n build = False\n\n self._autotools.configure(args=configure_args, build=build, configure_dir=self._source_subfolder)\n return self._autotools\n\n def _patch_sources(self):\n # Break dependency of install on build\n for makefile in (\"Makefile.mingw\", \"Makefile.msvc\"):\n tools.replace_in_file(os.path.join(self._source_subfolder, \"win32\", makefile),\n \"install-libs : all\",\n \"install-libs :\")\n\n def build(self):\n self._patch_sources()\n if self._is_msvc:\n self._build_msvc()\n else:\n autotools = self._configure_autotools()\n autotools.make([\"libxml2.la\"])\n\n if self.options.include_utils:\n autotools.make([\"xmllint\", \"xmlcatalog\", \"xml2-config\"])\n\n def package(self):\n # copy package license\n self.copy(\"COPYING\", src=self._source_subfolder, dst=\"licenses\", ignore_case=True, keep_path=False)\n if self._is_msvc:\n self._package_msvc()\n else:\n autotools = self._configure_autotools()\n autotools.make([\"install-libLTLIBRARIES\", \"install-data\"])\n\n if self.options.include_utils:\n autotools.make([\"install\",\"xmllint\", \"xmlcatalog\", \"xml2-config\"])\n\n os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2.la'))\n\n for prefix in [\"run\", \"test\"]:\n for test in glob.glob(\"%s/bin/%s*\" % (self.package_folder, prefix)):\n os.remove(test)\n for header in [\"win32config.h\", \"wsockcompat.h\"]:\n self.copy(pattern=header, src=os.path.join(self._source_subfolder, \"include\"),\n dst=os.path.join(\"include\", \"libxml2\"), keep_path=False)\n if self._is_msvc:\n # remove redundant libraries to avoid confusion\n if not self.options.shared:\n os.unlink(os.path.join(self.package_folder, \"bin\", \"libxml2.dll\"))\n os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2_a_dll.lib'))\n os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2_a.lib' if self.options.shared else 'libxml2.lib'))\n\n pdb_files = glob.glob(os.path.join(self.package_folder, 'bin', '*.pdb'), recursive=True)\n for pdb in pdb_files:\n 
os.unlink(pdb)\n\n tools.rmdir(os.path.join(self.package_folder, 'share'))\n tools.rmdir(os.path.join(self.package_folder, 'lib', 'cmake'))\n tools.rmdir(os.path.join(self.package_folder, 'lib', 'pkgconfig'))\n\n def package_info(self):\n if self._is_msvc:\n self.cpp_info.libs = ['libxml2' if self.options.shared else 'libxml2_a']\n else:\n self.cpp_info.libs = ['xml2']\n self.cpp_info.includedirs.append(os.path.join(\"include\", \"libxml2\"))\n if not self.options.shared:\n self.cpp_info.defines = [\"LIBXML_STATIC\"]\n if self.options.include_utils:\n bindir = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n self.env_info.PATH.append(bindir)\n\n if self.settings.os in [\"Linux\", \"Macos\", \"FreeBSD\"]:\n self.cpp_info.system_libs.append('m')\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs.append('ws2_32')\n self.cpp_info.names[\"cmake_find_package\"] = \"LibXml2\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"LibXml2\"\n self.cpp_info.names[\"pkg_config\"] = \"libxml-2.0\"\n", "path": "recipes/libxml2/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, tools, AutoToolsBuildEnvironment, VisualStudioBuildEnvironment\nfrom contextlib import contextmanager\nimport glob\nimport os\n\n\nclass Libxml2Conan(ConanFile):\n name = \"libxml2\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"libxml2 is a software library for parsing XML documents\"\n topics = (\"XML\", \"parser\", \"validation\")\n homepage = \"https://xmlsoft.org\"\n license = \"MIT\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n generators = \"pkg_config\"\n\n # from ./configure and ./win32/configure.js\n default_options = {'shared': False,\n 'fPIC': True,\n 'include_utils': True,\n \"c14n\": True,\n \"catalog\": True,\n \"docbook\": True,\n \"ftp\": True,\n \"http\": True,\n \"html\": True,\n \"iconv\": True,\n \"icu\": False,\n \"iso8859x\": True,\n \"legacy\": True,\n \"mem-debug\": False,\n \"output\": True,\n \"pattern\": True,\n \"push\": True,\n \"python\": False,\n \"reader\": True,\n \"regexps\": True,\n \"run-debug\": False,\n \"sax1\": True,\n \"schemas\": True,\n \"schematron\": True,\n \"threads\": True,\n \"tree\": True,\n \"valid\": True,\n \"writer\": True,\n \"xinclude\": True,\n \"xpath\": True,\n \"xptr\": True,\n \"zlib\": True,\n \"lzma\": False,\n }\n\n options = {name: [True, False] for name in default_options.keys()}\n _option_names = [name for name in default_options.keys() if name not in [\"shared\", \"fPIC\", \"include_utils\"]]\n\n _autotools = None\n _source_subfolder = \"source_subfolder\"\n\n def requirements(self):\n if self.options.zlib:\n self.requires(\"zlib/1.2.11\")\n if self.options.lzma:\n self.requires(\"xz_utils/5.2.5\")\n if self.options.iconv:\n self.requires(\"libiconv/1.16\")\n if self.options.icu:\n self.requires(\"icu/68.1\")\n\n def build_requirements(self):\n if self.settings.compiler != \"Visual Studio\" and tools.os_info.is_windows and os.environ.get(\"CONAN_BASH_PATH\", None) is None:\n self.build_requires(\"msys2/20190524\")\n\n @property\n def _is_msvc(self):\n return self.settings.compiler == 'Visual Studio'\n \n @property\n def _is_mingw(self):\n return self.settings.compiler == 'gcc' and self.settings.os == 'Windows'\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"libxml2-{0}\".format(self.version), self._source_subfolder)\n\n def config_options(self):\n if 
self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n @contextmanager\n def _msvc_build_environment(self):\n with tools.chdir(os.path.join(self._source_subfolder, 'win32')):\n with tools.vcvars(self.settings):\n with tools.environment_append(VisualStudioBuildEnvironment(self).vars):\n yield\n\n def _build_msvc(self):\n with self._msvc_build_environment():\n debug = \"yes\" if self.settings.build_type == \"Debug\" else \"no\"\n static = \"no\" if self.options.shared else \"yes\"\n\n args = [\"cscript\",\n \"configure.js\",\n \"compiler=msvc\",\n \"prefix=%s\" % self.package_folder,\n \"cruntime=/%s\" % self.settings.compiler.runtime,\n \"debug=%s\" % debug,\n \"static=%s\" % static,\n 'include=\"%s\"' % \";\".join(self.deps_cpp_info.include_paths),\n 'lib=\"%s\"' % \";\".join(self.deps_cpp_info.lib_paths)]\n\n for name in self._option_names:\n cname = {\"mem-debug\": \"mem_debug\",\n \"run-debug\": \"run_debug\",\n \"docbook\": \"docb\"}.get(name, name)\n value = getattr(self.options, name)\n value = \"yes\" if value else \"no\"\n args.append(\"%s=%s\" % (cname, value))\n\n configure_command = ' '.join(args)\n self.output.info(configure_command)\n self.run(configure_command)\n\n # Fix library names because they can be not just zlib.lib\n def fix_library(option, package, old_libname):\n if option:\n libs = []\n for lib in self.deps_cpp_info[package].libs:\n libname = lib\n if not libname.endswith('.lib'):\n libname += '.lib'\n libs.append(libname)\n tools.replace_in_file(\"Makefile.msvc\",\n \"LIBS = $(LIBS) %s\" % old_libname,\n \"LIBS = $(LIBS) %s\" % ' '.join(libs))\n\n fix_library(self.options.zlib, 'zlib', 'zlib.lib')\n fix_library(self.options.lzma, 'lzma', 'liblzma.lib')\n fix_library(self.options.iconv, 'libiconv', 'iconv.lib')\n fix_library(self.options.icu, 'icu', 'advapi32.lib sicuuc.lib sicuin.lib sicudt.lib')\n fix_library(self.options.icu, 'icu', 'icuuc.lib icuin.lib icudt.lib')\n\n self.run(\"nmake /f Makefile.msvc libxml libxmla libxmladll\")\n\n if self.options.include_utils:\n self.run(\"nmake /f Makefile.msvc utils\")\n\n\n def _package_msvc(self):\n with self._msvc_build_environment():\n self.run(\"nmake /f Makefile.msvc install-libs\")\n\n if self.options.include_utils:\n self.run(\"nmake /f Makefile.msvc install-dist\")\n\n def _configure_autotools(self):\n if self._autotools:\n return self._autotools\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n self._autotools.libs = []\n if not tools.os_info.is_windows:\n self._autotools.fpic = self.options.fPIC\n full_install_subfolder = tools.unix_path(self.package_folder) if tools.os_info.is_windows else self.package_folder\n # fix rpath\n if self.settings.os == \"Macos\":\n tools.replace_in_file(os.path.join(self._source_subfolder, \"configure\"), r\"-install_name \\$rpath/\", \"-install_name \")\n configure_args = ['--prefix=%s' % full_install_subfolder]\n if self._autotools.fpic:\n configure_args.extend(['--with-pic'])\n if self.options.shared:\n configure_args.extend(['--enable-shared', '--disable-static'])\n else:\n configure_args.extend(['--enable-static', '--disable-shared'])\n\n for name in self._option_names:\n value = getattr(self.options, name)\n value = (\"--with-%s\" % name) if value else (\"--without-%s\" % name)\n configure_args.append(value)\n\n # Disable --build when building for iPhoneSimulator. 
The configure script halts on\n # not knowing if it should cross-compile.\n build = None\n if self.settings.os == \"iOS\" and self.settings.arch == \"x86_64\":\n build = False\n\n self._autotools.configure(args=configure_args, build=build, configure_dir=self._source_subfolder)\n return self._autotools\n\n def _patch_sources(self):\n # Break dependency of install on build\n for makefile in (\"Makefile.mingw\", \"Makefile.msvc\"):\n tools.replace_in_file(os.path.join(self._source_subfolder, \"win32\", makefile),\n \"install-libs : all\",\n \"install-libs :\")\n\n def build(self):\n self._patch_sources()\n if self._is_msvc:\n self._build_msvc()\n else:\n autotools = self._configure_autotools()\n autotools.make([\"libxml2.la\"])\n\n if self.options.include_utils:\n autotools.make([\"xmllint\", \"xmlcatalog\", \"xml2-config\"])\n\n def package(self):\n # copy package license\n self.copy(\"COPYING\", src=self._source_subfolder, dst=\"licenses\", ignore_case=True, keep_path=False)\n if self._is_msvc:\n self._package_msvc()\n else:\n autotools = self._configure_autotools()\n autotools.make([\"install-libLTLIBRARIES\", \"install-data\"])\n\n if self.options.include_utils:\n autotools.make([\"install\",\"xmllint\", \"xmlcatalog\", \"xml2-config\"])\n\n os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2.la'))\n\n for prefix in [\"run\", \"test\"]:\n for test in glob.glob(\"%s/bin/%s*\" % (self.package_folder, prefix)):\n os.remove(test)\n for header in [\"win32config.h\", \"wsockcompat.h\"]:\n self.copy(pattern=header, src=os.path.join(self._source_subfolder, \"include\"),\n dst=os.path.join(\"include\", \"libxml2\"), keep_path=False)\n if self._is_msvc:\n # remove redundant libraries to avoid confusion\n if not self.options.shared:\n os.unlink(os.path.join(self.package_folder, \"bin\", \"libxml2.dll\"))\n os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2_a_dll.lib'))\n os.unlink(os.path.join(self.package_folder, 'lib', 'libxml2_a.lib' if self.options.shared else 'libxml2.lib'))\n\n pdb_files = glob.glob(os.path.join(self.package_folder, 'bin', '*.pdb'), recursive=True)\n for pdb in pdb_files:\n os.unlink(pdb)\n\n tools.rmdir(os.path.join(self.package_folder, 'share'))\n tools.rmdir(os.path.join(self.package_folder, 'lib', 'cmake'))\n tools.rmdir(os.path.join(self.package_folder, 'lib', 'pkgconfig'))\n\n def package_info(self):\n if self._is_msvc:\n self.cpp_info.libs = ['libxml2' if self.options.shared else 'libxml2_a']\n else:\n self.cpp_info.libs = ['xml2']\n self.cpp_info.includedirs.append(os.path.join(\"include\", \"libxml2\"))\n if not self.options.shared:\n self.cpp_info.defines = [\"LIBXML_STATIC\"]\n if self.options.include_utils:\n bindir = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n self.env_info.PATH.append(bindir)\n if self.settings.os in [\"Linux\", \"Macos\", \"FreeBSD\", \"Android\"]:\n self.cpp_info.system_libs.append('m')\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs.append('ws2_32')\n self.cpp_info.names[\"cmake_find_package\"] = \"LibXml2\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"LibXml2\"\n self.cpp_info.names[\"pkg_config\"] = \"libxml-2.0\"\n", "path": "recipes/libxml2/all/conanfile.py"}]}
| 4,051 | 180 |
gh_patches_debug_1707 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5247 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing AWS RDS CA in CKV_AWS_211
**Describe the issue**
In check CKV_AWS_211, checkov currently only checks for one possible CA on AWS RDS instances, namely `rds-ca-2019` (see [associated code](https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py#L24)) whereas RDS supports several (see [AWS docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.RegionCertificateAuthorities)). The check should accept those CAs: `rds-ca-rsa2048-g1`, `rds-ca-rsa4096-g1` and `rds-ca-ecc384-g1`.
**Examples**
Terraform code on which the check should pass:
```terraform
resource "aws_db_instance" "pass3" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
db_name = "mydb"
username = "foo"
password = "foobarbaz"
iam_database_authentication_enabled = true
storage_encrypted = true
ca_cert_identifier = "rds-ca-rsa2048-g1"
}
```
When I run checkov on this Terraform example, I get an error whereas the test should pass:
```
Check: CKV_AWS_211: "Ensure RDS uses a modern CaCert"
FAILED for resource: aws_db_instance.pass3
File: /main.tf:43-55
Guide: https://docs.paloaltonetworks.com/content/techdocs/en_US/prisma/prisma-cloud/prisma-cloud-code-security-policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-uses-a-modern-cacert.html
43 | resource "aws_db_instance" "pass3" {
44 | allocated_storage = 20
45 | storage_type = "gp2"
46 | engine = "mysql"
47 | engine_version = "5.7"
48 | instance_class = "db.t2.micro"
49 | db_name = "mydb"
50 | username = "foo"
51 | password = "foobarbaz"
52 | iam_database_authentication_enabled = true
53 | storage_encrypted = true
54 | ca_cert_identifier = "rds-ca-rsa2048-g1"
55 | }
```
**Version (please complete the following information):**
- Checkov Version 2.0.930
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories, CheckResult
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3 from typing import Any, List
4
5
6 class RDSCACertIsRecent(BaseResourceValueCheck):
7 def __init__(self):
8 name = "Ensure RDS uses a modern CaCert"
9 id = "CKV_AWS_211"
10 supported_resources = ["aws_db_instance"]
11 categories = [CheckCategories.GENERAL_SECURITY]
12 super().__init__(
13 name=name,
14 id=id,
15 categories=categories,
16 supported_resources=supported_resources,
17 missing_block_result=CheckResult.PASSED
18 )
19
20 def get_inspected_key(self) -> str:
21 return "ca_cert_identifier"
22
23 def get_expected_values(self) -> List[Any]:
24 return ["rds-ca-2019"]
25
26
27 check = RDSCACertIsRecent()
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py b/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py
--- a/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py
+++ b/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py
@@ -21,7 +21,7 @@
return "ca_cert_identifier"
def get_expected_values(self) -> List[Any]:
- return ["rds-ca-2019"]
+ return ["rds-ca-rsa2048-g1", "rds-ca-rsa4096-g1", "rds-ca-ecc384-g1"]
check = RDSCACertIsRecent()
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py b/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py\n--- a/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py\n+++ b/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py\n@@ -21,7 +21,7 @@\n return \"ca_cert_identifier\"\n \n def get_expected_values(self) -> List[Any]:\n- return [\"rds-ca-2019\"]\n+ return [\"rds-ca-rsa2048-g1\", \"rds-ca-rsa4096-g1\", \"rds-ca-ecc384-g1\"]\n \n \n check = RDSCACertIsRecent()\n", "issue": "Missing AWS RDS CA in CKV_AWS_211\n**Describe the issue**\r\nIn check CKV_AWS_211, checkov currently only checks for one possible CA on AWS RDS instances, namely `rds-ca-2019` (see [associated code](https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py#L24)) whereas RDS supports several (see [AWS docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.RegionCertificateAuthorities)). The check should accept those CAs: `rds-ca-rsa2048-g1`, `rds-ca-rsa4096-g1` and `rds-ca-ecc384-g1`.\r\n\r\n**Examples**\r\nTerraform code on which the check should pass:\r\n```terraform\r\nresource \"aws_db_instance\" \"pass3\" {\r\n allocated_storage = 20\r\n storage_type = \"gp2\"\r\n engine = \"mysql\"\r\n engine_version = \"5.7\"\r\n instance_class = \"db.t2.micro\"\r\n db_name = \"mydb\"\r\n username = \"foo\"\r\n password = \"foobarbaz\"\r\n iam_database_authentication_enabled = true\r\n storage_encrypted = true\r\n ca_cert_identifier = \"rds-ca-rsa2048-g1\"\r\n}\r\n```\r\nWhen I run checkov on this Terraform example, I get an error whereas the test should pass:\r\n\r\n```\r\nCheck: CKV_AWS_211: \"Ensure RDS uses a modern CaCert\"\r\n\tFAILED for resource: aws_db_instance.pass3\r\n\tFile: /main.tf:43-55\r\n\tGuide: https://docs.paloaltonetworks.com/content/techdocs/en_US/prisma/prisma-cloud/prisma-cloud-code-security-policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-uses-a-modern-cacert.html\r\n\r\n\t\t43 | resource \"aws_db_instance\" \"pass3\" {\r\n\t\t44 | allocated_storage = 20\r\n\t\t45 | storage_type = \"gp2\"\r\n\t\t46 | engine = \"mysql\"\r\n\t\t47 | engine_version = \"5.7\"\r\n\t\t48 | instance_class = \"db.t2.micro\"\r\n\t\t49 | db_name = \"mydb\"\r\n\t\t50 | username = \"foo\"\r\n\t\t51 | password = \"foobarbaz\"\r\n\t\t52 | iam_database_authentication_enabled = true\r\n\t\t53 | storage_encrypted = true\r\n\t\t54 | ca_cert_identifier = \"rds-ca-rsa2048-g1\"\r\n\t\t55 | }\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.0.930\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom typing import Any, List\n\n\nclass RDSCACertIsRecent(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure RDS uses a modern CaCert\"\n id = \"CKV_AWS_211\"\n supported_resources = [\"aws_db_instance\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED\n )\n\n def get_inspected_key(self) -> str:\n return \"ca_cert_identifier\"\n\n def get_expected_values(self) -> List[Any]:\n return [\"rds-ca-2019\"]\n\n\ncheck = RDSCACertIsRecent()\n", "path": "checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py"}], 
"after_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom typing import Any, List\n\n\nclass RDSCACertIsRecent(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure RDS uses a modern CaCert\"\n id = \"CKV_AWS_211\"\n supported_resources = [\"aws_db_instance\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED\n )\n\n def get_inspected_key(self) -> str:\n return \"ca_cert_identifier\"\n\n def get_expected_values(self) -> List[Any]:\n return [\"rds-ca-rsa2048-g1\", \"rds-ca-rsa4096-g1\", \"rds-ca-ecc384-g1\"]\n\n\ncheck = RDSCACertIsRecent()\n", "path": "checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py"}]}
| 1,166 | 171 |
gh_patches_debug_15023 | rasdani/github-patches | git_diff | sonic-net__sonic-mgmt-3458 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SLB Test Cases
Step | Goal | Expected results
-- | -- | --
Create peering session from the SLB to Active ToR | SLB | Verify session is established
Create peering session from the SLB to Standby ToR | SLB | Verify session is established
| |
Announce routes from SLB to Active ToR | SLB | Verify routes in Active ToR
Announce routes from SLB to Standby ToR | SLB | Verify routes in Standby ToR
| |
Run PTF tests on Active ToR | SLB | Verify packets forwarded directly to active SLB port
Run PTF tests on Standby ToR | SLB | Verify packets forwarded via tunnel to Active ToR
| |
Withdraw routes from SLB to Active ToR | SLB | Verify routes removed in Active ToR
Withdraw routes from SLB to Standby ToR | SLB | Verify routes removed in Standby ToR
| |
Repeat PTF tests as above | SLB | Verify no packets forwarded
| |
Simulate a mux state change for the SLB port | SLB | Verify both sessions stays established and not disrupted
| |
Announce routes from SLB to new Active ToR | SLB | Verify routes in Active ToR
Announce routes from SLB to new Standby ToR | SLB | Verify routes in Standby ToR
| |
Repeat PTF tests as above | SLB | Verify packet forwarding based on mux state
| |
Verify teardown by shutting peering session one by one | SLB | After one session is down, verify other peering session is active and routes present
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ansible/library/dual_tor_facts.py`
Content:
```
1 from collections import defaultdict
2 class DualTorParser:
3
4 def __init__(self, hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs):
5 self.hostname = hostname
6 self.testbed_facts = testbed_facts
7 self.host_vars = host_vars
8 self.vm_config = vm_config
9 self.port_alias = port_alias
10 self.vlan_intfs = vlan_intfs
11 self.dual_tor_facts = {}
12
13 def parse_neighbor_tor(self):
14 '''
15 Parses information about the other ToR in a dual ToR pair
16 '''
17 neighbor = {}
18 neighbor['hostname'] = [dut for dut in self.testbed_facts['duts'] if dut != self.hostname][0]
19 neighbor['ip'] = self.host_vars[neighbor['hostname']]['ansible_host']
20 neighbor['hwsku'] = self.host_vars[neighbor['hostname']]['hwsku']
21
22 self.dual_tor_facts['neighbor'] = neighbor
23
24 def parse_tor_position(self):
25 '''
26 Determines the position ('U' for upper and 'L' for lower) of the ToR.
27
28 The upper ToR is always the first ToR listed in the testbed file
29 '''
30 self.dual_tor_facts['positions'] = {'upper': self.testbed_facts['duts'][0], 'lower': self.testbed_facts['duts'][1]}
31
32 def parse_loopback_ips(self):
33 '''
34 Parses the IPv4 and IPv6 loopback IPs for the DUTs
35
36 Similar to `parse_tor_position`, the ToR which comes first in the testbed file is always assigned the first IP
37 '''
38
39 loopback_ips = defaultdict(dict)
40 addl_loopback_ips = defaultdict(dict)
41
42 for dut_num, dut in enumerate(self.testbed_facts['duts']):
43 loopback_ips[dut]['ipv4'] = self.vm_config['DUT']['loopback']['ipv4'][dut_num]
44 loopback_ips[dut]['ipv6'] = self.vm_config['DUT']['loopback']['ipv6'][dut_num]
45
46 for loopback_num in range(1, 3): # Generate two additional loopback IPs, Loopback1 and Loopback2
47 loopback_key = 'loopback{}'.format(loopback_num)
48 loopback_dict = {}
49 loopback_dict['ipv4'] = self.vm_config['DUT'][loopback_key]['ipv4'][dut_num]
50 loopback_dict['ipv6'] = self.vm_config['DUT'][loopback_key]['ipv6'][dut_num]
51 loopback_dict['host_ip_base_index'] = loopback_num * 2
52 addl_loopback_ips[dut][loopback_num] = loopback_dict
53
54 self.dual_tor_facts['loopback'] = loopback_ips
55 self.dual_tor_facts['addl_loopbacks'] = addl_loopback_ips
56
57 def generate_cable_names(self):
58 cables = []
59
60 for server_num, dut_intf in enumerate(self.vlan_intfs):
61 name = '{}-Servers{}-SC'.format(self.hostname, server_num)
62 cable = {"hostname": name, "dut_intf": dut_intf}
63 cables.append(cable)
64
65 self.dual_tor_facts['cables'] = cables
66
67 def get_dual_tor_facts(self):
68 '''
69 Gathers facts related to a dual ToR configuration
70 '''
71 if 'dualtor' in self.testbed_facts['topo']:
72 self.parse_neighbor_tor()
73 self.parse_tor_position()
74 self.generate_cable_names()
75 self.parse_loopback_ips()
76
77 return self.dual_tor_facts
78
79
80 def main():
81 module = AnsibleModule(
82 argument_spec=dict(
83 hostname=dict(required=True, default=None, type='str'),
84 testbed_facts=dict(required=True, default=None, type='dict'),
85 hostvars=dict(required=True, default=None, type='dict'),
86 vm_config=dict(required=True, default=None, type='dict'),
87 port_alias=dict(required=True, default=None, type='list'),
88 vlan_intfs=dict(required=True, default=None, type='list')
89 ),
90 supports_check_mode=True
91 )
92 m_args = module.params
93 # testbed_facts ={u'comment': u'Dual-TOR testbed', u'conf-name': u'vms-kvm-dual-t0', u'ptf_ip': u'10.250.0.109', u'ptf_netmask': u'255.255.255.0', u'ptf_ipv6': u'fec0::ffff:afa:9', u'vm_base': u'VM0108', u'server': u'server_1', u'topo': u'dualtor', u'group-name': u'vms6-4', u'ptf': u'ptf-04', u'duts_map': {u'vlab-06': 1, u'vlab-05': 0}, u'ptf_netmask_v6': u'ffff:ffff:ffff:ffff::', u'ptf_image_name': u'docker-ptf', u'duts': [u'vlab-05', u'vlab-06']}
94 hostname = m_args['hostname']
95 testbed_facts = m_args['testbed_facts']
96 host_vars = m_args['hostvars']
97 vm_config = m_args['vm_config']
98 port_alias = m_args['port_alias']
99 vlan_intfs = m_args['vlan_intfs']
100 try:
101 dual_tor_parser = DualTorParser(hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs)
102 module.exit_json(ansible_facts={'dual_tor_facts': dual_tor_parser.get_dual_tor_facts()})
103 except Exception as e:
104 module.fail_json(msg=traceback.format_exc())
105
106 from ansible.module_utils.basic import *
107 if __name__== "__main__":
108 main()
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ansible/library/dual_tor_facts.py b/ansible/library/dual_tor_facts.py
--- a/ansible/library/dual_tor_facts.py
+++ b/ansible/library/dual_tor_facts.py
@@ -43,7 +43,7 @@
loopback_ips[dut]['ipv4'] = self.vm_config['DUT']['loopback']['ipv4'][dut_num]
loopback_ips[dut]['ipv6'] = self.vm_config['DUT']['loopback']['ipv6'][dut_num]
- for loopback_num in range(1, 3): # Generate two additional loopback IPs, Loopback1 and Loopback2
+ for loopback_num in range(1, 4): # Generate two additional loopback IPs, Loopback1, Loopback2, and Loopback3
loopback_key = 'loopback{}'.format(loopback_num)
loopback_dict = {}
loopback_dict['ipv4'] = self.vm_config['DUT'][loopback_key]['ipv4'][dut_num]
|
{"golden_diff": "diff --git a/ansible/library/dual_tor_facts.py b/ansible/library/dual_tor_facts.py\n--- a/ansible/library/dual_tor_facts.py\n+++ b/ansible/library/dual_tor_facts.py\n@@ -43,7 +43,7 @@\n loopback_ips[dut]['ipv4'] = self.vm_config['DUT']['loopback']['ipv4'][dut_num]\n loopback_ips[dut]['ipv6'] = self.vm_config['DUT']['loopback']['ipv6'][dut_num] \n \n- for loopback_num in range(1, 3): # Generate two additional loopback IPs, Loopback1 and Loopback2\n+ for loopback_num in range(1, 4): # Generate two additional loopback IPs, Loopback1, Loopback2, and Loopback3\n loopback_key = 'loopback{}'.format(loopback_num)\n loopback_dict = {}\n loopback_dict['ipv4'] = self.vm_config['DUT'][loopback_key]['ipv4'][dut_num]\n", "issue": "SLB Test Cases\n\r\nStep | Goal | Expected results\r\n-- | -- | --\r\nCreate peering session from the SLB to Active ToR | SLB | Verify session is established\r\nCreate peering session from the SLB to Standby ToR | SLB | Verify session is established\r\n\u00a0 | \u00a0 | \u00a0\r\nAnnounce routes from SLB to Active ToR | SLB | Verify routes in Active ToR\r\nAnnounce routes from SLB to Standby ToR | SLB | Verify routes in Standby ToR\r\n\u00a0 | \u00a0 | \u00a0\r\nRun PTF tests on Active ToR | SLB | Verify packets forwarded directly to active SLB port\r\nRun PTF tests on Standby ToR | SLB | Verify packets forwarded via tunnel to Active ToR\r\n\u00a0 | \u00a0 | \u00a0\r\nWithdraw routes from SLB to Active ToR | SLB | Verify routes removed in Active ToR\r\nWithdraw routes from SLB to Standby ToR | SLB | Verify routes removed in Standby ToR\r\n\u00a0 | \u00a0 | \u00a0\r\nRepeat PTF tests as above | SLB | Verify no packets forwarded\r\n\u00a0 | \u00a0 | \u00a0\r\nSimulate a mux state change for the SLB port | SLB | Verify both sessions stays established and not disrupted\r\n\u00a0 | \u00a0 | \u00a0\r\nAnnounce routes from SLB to new Active ToR | SLB | Verify routes in Active ToR\r\nAnnounce routes from SLB to new Standby ToR | SLB | Verify routes in Standby ToR\r\n\u00a0 | \u00a0 | \u00a0\r\nRepeat PTF tests as above | SLB | Verify packet forwarding based on mux state\r\n\u00a0 | \u00a0 | \u00a0\r\nVerify teardown by shutting peering session one by one | SLB | After one session is down, verify other peering session is active and routes present\r\n\r\n\n", "before_files": [{"content": "from collections import defaultdict\nclass DualTorParser:\n\n def __init__(self, hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs):\n self.hostname = hostname\n self.testbed_facts = testbed_facts\n self.host_vars = host_vars\n self.vm_config = vm_config\n self.port_alias = port_alias\n self.vlan_intfs = vlan_intfs\n self.dual_tor_facts = {}\n\n def parse_neighbor_tor(self):\n '''\n Parses information about the other ToR in a dual ToR pair\n '''\n neighbor = {}\n neighbor['hostname'] = [dut for dut in self.testbed_facts['duts'] if dut != self.hostname][0]\n neighbor['ip'] = self.host_vars[neighbor['hostname']]['ansible_host']\n neighbor['hwsku'] = self.host_vars[neighbor['hostname']]['hwsku']\n\n self.dual_tor_facts['neighbor'] = neighbor\n\n def parse_tor_position(self):\n '''\n Determines the position ('U' for upper and 'L' for lower) of the ToR.\n\n The upper ToR is always the first ToR listed in the testbed file\n '''\n self.dual_tor_facts['positions'] = {'upper': self.testbed_facts['duts'][0], 'lower': self.testbed_facts['duts'][1]}\n\n def parse_loopback_ips(self):\n '''\n Parses the IPv4 and IPv6 loopback IPs for the DUTs\n\n Similar to `parse_tor_position`, 
the ToR which comes first in the testbed file is always assigned the first IP\n '''\n\n loopback_ips = defaultdict(dict)\n addl_loopback_ips = defaultdict(dict)\n\n for dut_num, dut in enumerate(self.testbed_facts['duts']):\n loopback_ips[dut]['ipv4'] = self.vm_config['DUT']['loopback']['ipv4'][dut_num]\n loopback_ips[dut]['ipv6'] = self.vm_config['DUT']['loopback']['ipv6'][dut_num] \n\n for loopback_num in range(1, 3): # Generate two additional loopback IPs, Loopback1 and Loopback2\n loopback_key = 'loopback{}'.format(loopback_num)\n loopback_dict = {}\n loopback_dict['ipv4'] = self.vm_config['DUT'][loopback_key]['ipv4'][dut_num]\n loopback_dict['ipv6'] = self.vm_config['DUT'][loopback_key]['ipv6'][dut_num]\n loopback_dict['host_ip_base_index'] = loopback_num * 2\n addl_loopback_ips[dut][loopback_num] = loopback_dict\n\n self.dual_tor_facts['loopback'] = loopback_ips \n self.dual_tor_facts['addl_loopbacks'] = addl_loopback_ips\n\n def generate_cable_names(self):\n cables = []\n\n for server_num, dut_intf in enumerate(self.vlan_intfs):\n name = '{}-Servers{}-SC'.format(self.hostname, server_num)\n cable = {\"hostname\": name, \"dut_intf\": dut_intf}\n cables.append(cable)\n\n self.dual_tor_facts['cables'] = cables\n\n def get_dual_tor_facts(self):\n '''\n Gathers facts related to a dual ToR configuration\n '''\n if 'dualtor' in self.testbed_facts['topo']:\n self.parse_neighbor_tor()\n self.parse_tor_position()\n self.generate_cable_names()\n self.parse_loopback_ips()\n\n return self.dual_tor_facts\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n hostname=dict(required=True, default=None, type='str'),\n testbed_facts=dict(required=True, default=None, type='dict'),\n hostvars=dict(required=True, default=None, type='dict'),\n vm_config=dict(required=True, default=None, type='dict'),\n port_alias=dict(required=True, default=None, type='list'),\n vlan_intfs=dict(required=True, default=None, type='list')\n ),\n supports_check_mode=True\n )\n m_args = module.params\n # testbed_facts ={u'comment': u'Dual-TOR testbed', u'conf-name': u'vms-kvm-dual-t0', u'ptf_ip': u'10.250.0.109', u'ptf_netmask': u'255.255.255.0', u'ptf_ipv6': u'fec0::ffff:afa:9', u'vm_base': u'VM0108', u'server': u'server_1', u'topo': u'dualtor', u'group-name': u'vms6-4', u'ptf': u'ptf-04', u'duts_map': {u'vlab-06': 1, u'vlab-05': 0}, u'ptf_netmask_v6': u'ffff:ffff:ffff:ffff::', u'ptf_image_name': u'docker-ptf', u'duts': [u'vlab-05', u'vlab-06']}\n hostname = m_args['hostname']\n testbed_facts = m_args['testbed_facts']\n host_vars = m_args['hostvars']\n vm_config = m_args['vm_config']\n port_alias = m_args['port_alias']\n vlan_intfs = m_args['vlan_intfs']\n try:\n dual_tor_parser = DualTorParser(hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs)\n module.exit_json(ansible_facts={'dual_tor_facts': dual_tor_parser.get_dual_tor_facts()})\n except Exception as e:\n module.fail_json(msg=traceback.format_exc())\n\nfrom ansible.module_utils.basic import *\nif __name__== \"__main__\":\n main()\n", "path": "ansible/library/dual_tor_facts.py"}], "after_files": [{"content": "from collections import defaultdict\nclass DualTorParser:\n\n def __init__(self, hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs):\n self.hostname = hostname\n self.testbed_facts = testbed_facts\n self.host_vars = host_vars\n self.vm_config = vm_config\n self.port_alias = port_alias\n self.vlan_intfs = vlan_intfs\n self.dual_tor_facts = {}\n\n def parse_neighbor_tor(self):\n '''\n Parses information about the 
other ToR in a dual ToR pair\n '''\n neighbor = {}\n neighbor['hostname'] = [dut for dut in self.testbed_facts['duts'] if dut != self.hostname][0]\n neighbor['ip'] = self.host_vars[neighbor['hostname']]['ansible_host']\n neighbor['hwsku'] = self.host_vars[neighbor['hostname']]['hwsku']\n\n self.dual_tor_facts['neighbor'] = neighbor\n\n def parse_tor_position(self):\n '''\n Determines the position ('U' for upper and 'L' for lower) of the ToR.\n\n The upper ToR is always the first ToR listed in the testbed file\n '''\n self.dual_tor_facts['positions'] = {'upper': self.testbed_facts['duts'][0], 'lower': self.testbed_facts['duts'][1]}\n\n def parse_loopback_ips(self):\n '''\n Parses the IPv4 and IPv6 loopback IPs for the DUTs\n\n Similar to `parse_tor_position`, the ToR which comes first in the testbed file is always assigned the first IP\n '''\n\n loopback_ips = defaultdict(dict)\n addl_loopback_ips = defaultdict(dict)\n\n for dut_num, dut in enumerate(self.testbed_facts['duts']):\n loopback_ips[dut]['ipv4'] = self.vm_config['DUT']['loopback']['ipv4'][dut_num]\n loopback_ips[dut]['ipv6'] = self.vm_config['DUT']['loopback']['ipv6'][dut_num] \n\n for loopback_num in range(1, 4): # Generate two additional loopback IPs, Loopback1, Loopback2, and Loopback3\n loopback_key = 'loopback{}'.format(loopback_num)\n loopback_dict = {}\n loopback_dict['ipv4'] = self.vm_config['DUT'][loopback_key]['ipv4'][dut_num]\n loopback_dict['ipv6'] = self.vm_config['DUT'][loopback_key]['ipv6'][dut_num]\n loopback_dict['host_ip_base_index'] = loopback_num * 2\n addl_loopback_ips[dut][loopback_num] = loopback_dict\n\n self.dual_tor_facts['loopback'] = loopback_ips \n self.dual_tor_facts['addl_loopbacks'] = addl_loopback_ips\n\n def generate_cable_names(self):\n cables = []\n\n for server_num, dut_intf in enumerate(self.vlan_intfs):\n name = '{}-Servers{}-SC'.format(self.hostname, server_num)\n cable = {\"hostname\": name, \"dut_intf\": dut_intf}\n cables.append(cable)\n\n self.dual_tor_facts['cables'] = cables\n\n def get_dual_tor_facts(self):\n '''\n Gathers facts related to a dual ToR configuration\n '''\n if 'dualtor' in self.testbed_facts['topo']:\n self.parse_neighbor_tor()\n self.parse_tor_position()\n self.generate_cable_names()\n self.parse_loopback_ips()\n\n return self.dual_tor_facts\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n hostname=dict(required=True, default=None, type='str'),\n testbed_facts=dict(required=True, default=None, type='dict'),\n hostvars=dict(required=True, default=None, type='dict'),\n vm_config=dict(required=True, default=None, type='dict'),\n port_alias=dict(required=True, default=None, type='list'),\n vlan_intfs=dict(required=True, default=None, type='list')\n ),\n supports_check_mode=True\n )\n m_args = module.params\n # testbed_facts ={u'comment': u'Dual-TOR testbed', u'conf-name': u'vms-kvm-dual-t0', u'ptf_ip': u'10.250.0.109', u'ptf_netmask': u'255.255.255.0', u'ptf_ipv6': u'fec0::ffff:afa:9', u'vm_base': u'VM0108', u'server': u'server_1', u'topo': u'dualtor', u'group-name': u'vms6-4', u'ptf': u'ptf-04', u'duts_map': {u'vlab-06': 1, u'vlab-05': 0}, u'ptf_netmask_v6': u'ffff:ffff:ffff:ffff::', u'ptf_image_name': u'docker-ptf', u'duts': [u'vlab-05', u'vlab-06']}\n hostname = m_args['hostname']\n testbed_facts = m_args['testbed_facts']\n host_vars = m_args['hostvars']\n vm_config = m_args['vm_config']\n port_alias = m_args['port_alias']\n vlan_intfs = m_args['vlan_intfs']\n try:\n dual_tor_parser = DualTorParser(hostname, testbed_facts, host_vars, vm_config, 
port_alias, vlan_intfs)\n module.exit_json(ansible_facts={'dual_tor_facts': dual_tor_parser.get_dual_tor_facts()})\n except Exception as e:\n module.fail_json(msg=traceback.format_exc())\n\nfrom ansible.module_utils.basic import *\nif __name__== \"__main__\":\n main()\n", "path": "ansible/library/dual_tor_facts.py"}]}
| 2,180 | 233 |
gh_patches_debug_42294
|
rasdani/github-patches
|
git_diff
|
lightly-ai__lightly-491
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lightly-Crop: memory leak
When using lightly-crop some users experience a memory leak.
- [ ] Try to reproduce it.
- [ ] Fix it
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightly/utils/cropping/crop_image_by_bounding_boxes.py`
Content:
```
1 import os.path
2 import warnings
3 from pathlib import Path
4 from typing import List
5
6 from PIL import Image
7 from tqdm import tqdm
8
9 from lightly.active_learning.utils import BoundingBox
10 from lightly.data import LightlyDataset
11
12
13 def crop_image_by_bounding_boxes(image_filepath: str, bounding_boxes: List[BoundingBox]) -> List[Image.Image]:
14 image = Image.open(image_filepath)
15 cropped_images = []
16 for bbox in bounding_boxes:
17 w, h = image.size
18 crop_box = (w * bbox.x0, h * bbox.y0, w * bbox.x1, h * bbox.y1)
19 crop_box = tuple(int(i) for i in crop_box)
20 cropped_image = image.crop(crop_box)
21 cropped_images.append(cropped_image)
22 return cropped_images
23
24
25 def crop_dataset_by_bounding_boxes_and_save(dataset: LightlyDataset,
26 output_dir: str,
27 bounding_boxes_list_list: List[List[BoundingBox]],
28 class_indices_list_list: List[List[int]],
29 class_names: List[str] = None
30 ) -> List[List[str]]:
31 """Crops all images in a dataset by the bounding boxes and saves them in the output dir
32
33 Args:
34 dataset:
35 The dataset with the images to be cropped. Must contain M images.
36 output_dir:
37 The output directory to saved the cropped images to.
38 bounding_boxes_list_list:
39 The bounding boxes of the detections for each image. Must have M sublists, one for each image.
40 Each sublist contains the bounding boxes for each detection, thus N_m elements.
41 class_indices_list_list:
42 The object class ids of the detections for each image. Must have M sublists, one for each image.
43 Each sublist contains the bounding boxes for each detection, thus N_m elements.
44 class_names:
45 The names of the classes, used to map the class id to the class name.
46
47
48 Returns:
49 The filepaths to all saved cropped images. Has M sublists, one for each image.
50 Each sublist contains the filepath of the crop each detection, thus N_m elements.
51
52 """
53 filenames_images = dataset.get_filenames()
54 if len(filenames_images) != len(bounding_boxes_list_list) or len(filenames_images) != len(class_indices_list_list):
55 raise ValueError("There must be one bounding box and class index list for each image in the datasets,"
56 "but the lengths dont align.")
57
58 cropped_image_filepath_list_list: List[List[Image]] = []
59
60
61 print(f"Cropping objects out of {len(filenames_images)} images...")
62 for filename_image, class_indices, bounding_boxes in \
63 tqdm(zip(filenames_images, class_indices_list_list, bounding_boxes_list_list)):
64
65 if not len(class_indices) == len(bounding_boxes):
66 warnings.warn(UserWarning(f"Length of class indices ({len(class_indices)} does not equal length of bounding boxes"
67 f"({len(bounding_boxes)}. This is an error in the input arguments. "
68 f"Skipping this image {filename_image}."))
69 continue
70
71 filepath_image = dataset.get_filepath_from_filename(filename_image)
72 filepath_image_base, image_extension = os.path.splitext(filepath_image)
73
74 filepath_out_dir = os.path.join(output_dir, filename_image).replace(image_extension, '')
75 Path(filepath_out_dir).mkdir(parents=True, exist_ok=True)
76
77 cropped_images = crop_image_by_bounding_boxes(filepath_image, bounding_boxes)
78 cropped_images_filepaths = []
79 for index, (class_index, cropped_image) in enumerate((zip(class_indices, cropped_images))):
80 if class_names:
81 class_name = class_names[class_index]
82 else:
83 class_name = f"class{class_index}"
84 cropped_image_last_filename = f'{index}_{class_name}{image_extension}'
85 cropped_image_filepath = os.path.join(filepath_out_dir, cropped_image_last_filename)
86 cropped_image.save(cropped_image_filepath)
87
88 cropped_image_filename = os.path.join(filename_image.replace(image_extension, ''), cropped_image_last_filename)
89 cropped_images_filepaths.append(cropped_image_filename)
90
91 cropped_image_filepath_list_list.append(cropped_images_filepaths)
92
93 return cropped_image_filepath_list_list
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lightly/utils/cropping/crop_image_by_bounding_boxes.py b/lightly/utils/cropping/crop_image_by_bounding_boxes.py
--- a/lightly/utils/cropping/crop_image_by_bounding_boxes.py
+++ b/lightly/utils/cropping/crop_image_by_bounding_boxes.py
@@ -10,18 +10,6 @@
from lightly.data import LightlyDataset
-def crop_image_by_bounding_boxes(image_filepath: str, bounding_boxes: List[BoundingBox]) -> List[Image.Image]:
- image = Image.open(image_filepath)
- cropped_images = []
- for bbox in bounding_boxes:
- w, h = image.size
- crop_box = (w * bbox.x0, h * bbox.y0, w * bbox.x1, h * bbox.y1)
- crop_box = tuple(int(i) for i in crop_box)
- cropped_image = image.crop(crop_box)
- cropped_images.append(cropped_image)
- return cropped_images
-
-
def crop_dataset_by_bounding_boxes_and_save(dataset: LightlyDataset,
output_dir: str,
bounding_boxes_list_list: List[List[BoundingBox]],
@@ -55,7 +43,7 @@
raise ValueError("There must be one bounding box and class index list for each image in the datasets,"
"but the lengths dont align.")
- cropped_image_filepath_list_list: List[List[Image]] = []
+ cropped_image_filepath_list_list: List[List[str]] = []
print(f"Cropping objects out of {len(filenames_images)} images...")
@@ -71,21 +59,38 @@
filepath_image = dataset.get_filepath_from_filename(filename_image)
filepath_image_base, image_extension = os.path.splitext(filepath_image)
- filepath_out_dir = os.path.join(output_dir, filename_image).replace(image_extension, '')
+ filepath_out_dir = os.path.join(output_dir, filename_image)\
+ .replace(image_extension, '')
Path(filepath_out_dir).mkdir(parents=True, exist_ok=True)
- cropped_images = crop_image_by_bounding_boxes(filepath_image, bounding_boxes)
+ image = Image.open(filepath_image)
+
cropped_images_filepaths = []
- for index, (class_index, cropped_image) in enumerate((zip(class_indices, cropped_images))):
+ # For every image, crop out multiple cropped images, one for each
+ # bounding box
+ for index, (class_index, bbox) in \
+ enumerate((zip(class_indices, bounding_boxes))):
+
+ # determine the filename and filepath of the cropped image
if class_names:
class_name = class_names[class_index]
else:
class_name = f"class{class_index}"
cropped_image_last_filename = f'{index}_{class_name}{image_extension}'
cropped_image_filepath = os.path.join(filepath_out_dir, cropped_image_last_filename)
+
+ # crop out the image and save it
+ w, h = image.size
+ crop_box = (w * bbox.x0, h * bbox.y0, w * bbox.x1, h * bbox.y1)
+ crop_box = tuple(int(i) for i in crop_box)
+ cropped_image = image.crop(crop_box)
cropped_image.save(cropped_image_filepath)
- cropped_image_filename = os.path.join(filename_image.replace(image_extension, ''), cropped_image_last_filename)
+ # add the filename of the cropped image to the corresponding list
+ cropped_image_filename: str = os.path.join(
+ filename_image.replace(image_extension, ''),
+ cropped_image_last_filename
+ )
cropped_images_filepaths.append(cropped_image_filename)
cropped_image_filepath_list_list.append(cropped_images_filepaths)
|
{"golden_diff": "diff --git a/lightly/utils/cropping/crop_image_by_bounding_boxes.py b/lightly/utils/cropping/crop_image_by_bounding_boxes.py\n--- a/lightly/utils/cropping/crop_image_by_bounding_boxes.py\n+++ b/lightly/utils/cropping/crop_image_by_bounding_boxes.py\n@@ -10,18 +10,6 @@\n from lightly.data import LightlyDataset\n \n \n-def crop_image_by_bounding_boxes(image_filepath: str, bounding_boxes: List[BoundingBox]) -> List[Image.Image]:\n- image = Image.open(image_filepath)\n- cropped_images = []\n- for bbox in bounding_boxes:\n- w, h = image.size\n- crop_box = (w * bbox.x0, h * bbox.y0, w * bbox.x1, h * bbox.y1)\n- crop_box = tuple(int(i) for i in crop_box)\n- cropped_image = image.crop(crop_box)\n- cropped_images.append(cropped_image)\n- return cropped_images\n-\n-\n def crop_dataset_by_bounding_boxes_and_save(dataset: LightlyDataset,\n output_dir: str,\n bounding_boxes_list_list: List[List[BoundingBox]],\n@@ -55,7 +43,7 @@\n raise ValueError(\"There must be one bounding box and class index list for each image in the datasets,\"\n \"but the lengths dont align.\")\n \n- cropped_image_filepath_list_list: List[List[Image]] = []\n+ cropped_image_filepath_list_list: List[List[str]] = []\n \n \n print(f\"Cropping objects out of {len(filenames_images)} images...\")\n@@ -71,21 +59,38 @@\n filepath_image = dataset.get_filepath_from_filename(filename_image)\n filepath_image_base, image_extension = os.path.splitext(filepath_image)\n \n- filepath_out_dir = os.path.join(output_dir, filename_image).replace(image_extension, '')\n+ filepath_out_dir = os.path.join(output_dir, filename_image)\\\n+ .replace(image_extension, '')\n Path(filepath_out_dir).mkdir(parents=True, exist_ok=True)\n \n- cropped_images = crop_image_by_bounding_boxes(filepath_image, bounding_boxes)\n+ image = Image.open(filepath_image)\n+ \n cropped_images_filepaths = []\n- for index, (class_index, cropped_image) in enumerate((zip(class_indices, cropped_images))):\n+ # For every image, crop out multiple cropped images, one for each\n+ # bounding box\n+ for index, (class_index, bbox) in \\\n+ enumerate((zip(class_indices, bounding_boxes))):\n+\n+ # determine the filename and filepath of the cropped image\n if class_names:\n class_name = class_names[class_index]\n else:\n class_name = f\"class{class_index}\"\n cropped_image_last_filename = f'{index}_{class_name}{image_extension}'\n cropped_image_filepath = os.path.join(filepath_out_dir, cropped_image_last_filename)\n+\n+ # crop out the image and save it\n+ w, h = image.size\n+ crop_box = (w * bbox.x0, h * bbox.y0, w * bbox.x1, h * bbox.y1)\n+ crop_box = tuple(int(i) for i in crop_box)\n+ cropped_image = image.crop(crop_box)\n cropped_image.save(cropped_image_filepath)\n \n- cropped_image_filename = os.path.join(filename_image.replace(image_extension, ''), cropped_image_last_filename)\n+ # add the filename of the cropped image to the corresponding list\n+ cropped_image_filename: str = os.path.join(\n+ filename_image.replace(image_extension, ''),\n+ cropped_image_last_filename\n+ )\n cropped_images_filepaths.append(cropped_image_filename)\n \n cropped_image_filepath_list_list.append(cropped_images_filepaths)\n", "issue": "Lightly-Crop: memory leak\nWhen using lightly-crop some users experience a memory leak.\r\n\r\n- [ ] Try to reproduce it.\r\n- [ ] Fix it\n", "before_files": [{"content": "import os.path\nimport warnings\nfrom pathlib import Path\nfrom typing import List\n\nfrom PIL import Image\nfrom tqdm import tqdm\n\nfrom lightly.active_learning.utils import BoundingBox\nfrom 
lightly.data import LightlyDataset\n\n\ndef crop_image_by_bounding_boxes(image_filepath: str, bounding_boxes: List[BoundingBox]) -> List[Image.Image]:\n image = Image.open(image_filepath)\n cropped_images = []\n for bbox in bounding_boxes:\n w, h = image.size\n crop_box = (w * bbox.x0, h * bbox.y0, w * bbox.x1, h * bbox.y1)\n crop_box = tuple(int(i) for i in crop_box)\n cropped_image = image.crop(crop_box)\n cropped_images.append(cropped_image)\n return cropped_images\n\n\ndef crop_dataset_by_bounding_boxes_and_save(dataset: LightlyDataset,\n output_dir: str,\n bounding_boxes_list_list: List[List[BoundingBox]],\n class_indices_list_list: List[List[int]],\n class_names: List[str] = None\n ) -> List[List[str]]:\n \"\"\"Crops all images in a dataset by the bounding boxes and saves them in the output dir\n\n Args:\n dataset:\n The dataset with the images to be cropped. Must contain M images.\n output_dir:\n The output directory to saved the cropped images to.\n bounding_boxes_list_list:\n The bounding boxes of the detections for each image. Must have M sublists, one for each image.\n Each sublist contains the bounding boxes for each detection, thus N_m elements.\n class_indices_list_list:\n The object class ids of the detections for each image. Must have M sublists, one for each image.\n Each sublist contains the bounding boxes for each detection, thus N_m elements.\n class_names:\n The names of the classes, used to map the class id to the class name.\n\n\n Returns:\n The filepaths to all saved cropped images. Has M sublists, one for each image.\n Each sublist contains the filepath of the crop each detection, thus N_m elements.\n\n \"\"\"\n filenames_images = dataset.get_filenames()\n if len(filenames_images) != len(bounding_boxes_list_list) or len(filenames_images) != len(class_indices_list_list):\n raise ValueError(\"There must be one bounding box and class index list for each image in the datasets,\"\n \"but the lengths dont align.\")\n\n cropped_image_filepath_list_list: List[List[Image]] = []\n\n\n print(f\"Cropping objects out of {len(filenames_images)} images...\")\n for filename_image, class_indices, bounding_boxes in \\\n tqdm(zip(filenames_images, class_indices_list_list, bounding_boxes_list_list)):\n\n if not len(class_indices) == len(bounding_boxes):\n warnings.warn(UserWarning(f\"Length of class indices ({len(class_indices)} does not equal length of bounding boxes\"\n f\"({len(bounding_boxes)}. This is an error in the input arguments. 
\"\n f\"Skipping this image {filename_image}.\"))\n continue\n\n filepath_image = dataset.get_filepath_from_filename(filename_image)\n filepath_image_base, image_extension = os.path.splitext(filepath_image)\n\n filepath_out_dir = os.path.join(output_dir, filename_image).replace(image_extension, '')\n Path(filepath_out_dir).mkdir(parents=True, exist_ok=True)\n\n cropped_images = crop_image_by_bounding_boxes(filepath_image, bounding_boxes)\n cropped_images_filepaths = []\n for index, (class_index, cropped_image) in enumerate((zip(class_indices, cropped_images))):\n if class_names:\n class_name = class_names[class_index]\n else:\n class_name = f\"class{class_index}\"\n cropped_image_last_filename = f'{index}_{class_name}{image_extension}'\n cropped_image_filepath = os.path.join(filepath_out_dir, cropped_image_last_filename)\n cropped_image.save(cropped_image_filepath)\n\n cropped_image_filename = os.path.join(filename_image.replace(image_extension, ''), cropped_image_last_filename)\n cropped_images_filepaths.append(cropped_image_filename)\n\n cropped_image_filepath_list_list.append(cropped_images_filepaths)\n\n return cropped_image_filepath_list_list\n", "path": "lightly/utils/cropping/crop_image_by_bounding_boxes.py"}], "after_files": [{"content": "import os.path\nimport warnings\nfrom pathlib import Path\nfrom typing import List\n\nfrom PIL import Image\nfrom tqdm import tqdm\n\nfrom lightly.active_learning.utils import BoundingBox\nfrom lightly.data import LightlyDataset\n\n\ndef crop_dataset_by_bounding_boxes_and_save(dataset: LightlyDataset,\n output_dir: str,\n bounding_boxes_list_list: List[List[BoundingBox]],\n class_indices_list_list: List[List[int]],\n class_names: List[str] = None\n ) -> List[List[str]]:\n \"\"\"Crops all images in a dataset by the bounding boxes and saves them in the output dir\n\n Args:\n dataset:\n The dataset with the images to be cropped. Must contain M images.\n output_dir:\n The output directory to saved the cropped images to.\n bounding_boxes_list_list:\n The bounding boxes of the detections for each image. Must have M sublists, one for each image.\n Each sublist contains the bounding boxes for each detection, thus N_m elements.\n class_indices_list_list:\n The object class ids of the detections for each image. Must have M sublists, one for each image.\n Each sublist contains the bounding boxes for each detection, thus N_m elements.\n class_names:\n The names of the classes, used to map the class id to the class name.\n\n\n Returns:\n The filepaths to all saved cropped images. Has M sublists, one for each image.\n Each sublist contains the filepath of the crop each detection, thus N_m elements.\n\n \"\"\"\n filenames_images = dataset.get_filenames()\n if len(filenames_images) != len(bounding_boxes_list_list) or len(filenames_images) != len(class_indices_list_list):\n raise ValueError(\"There must be one bounding box and class index list for each image in the datasets,\"\n \"but the lengths dont align.\")\n\n cropped_image_filepath_list_list: List[List[str]] = []\n\n\n print(f\"Cropping objects out of {len(filenames_images)} images...\")\n for filename_image, class_indices, bounding_boxes in \\\n tqdm(zip(filenames_images, class_indices_list_list, bounding_boxes_list_list)):\n\n if not len(class_indices) == len(bounding_boxes):\n warnings.warn(UserWarning(f\"Length of class indices ({len(class_indices)} does not equal length of bounding boxes\"\n f\"({len(bounding_boxes)}. This is an error in the input arguments. 
\"\n f\"Skipping this image {filename_image}.\"))\n continue\n\n filepath_image = dataset.get_filepath_from_filename(filename_image)\n filepath_image_base, image_extension = os.path.splitext(filepath_image)\n\n filepath_out_dir = os.path.join(output_dir, filename_image)\\\n .replace(image_extension, '')\n Path(filepath_out_dir).mkdir(parents=True, exist_ok=True)\n\n image = Image.open(filepath_image)\n \n cropped_images_filepaths = []\n # For every image, crop out multiple cropped images, one for each\n # bounding box\n for index, (class_index, bbox) in \\\n enumerate((zip(class_indices, bounding_boxes))):\n\n # determine the filename and filepath of the cropped image\n if class_names:\n class_name = class_names[class_index]\n else:\n class_name = f\"class{class_index}\"\n cropped_image_last_filename = f'{index}_{class_name}{image_extension}'\n cropped_image_filepath = os.path.join(filepath_out_dir, cropped_image_last_filename)\n\n # crop out the image and save it\n w, h = image.size\n crop_box = (w * bbox.x0, h * bbox.y0, w * bbox.x1, h * bbox.y1)\n crop_box = tuple(int(i) for i in crop_box)\n cropped_image = image.crop(crop_box)\n cropped_image.save(cropped_image_filepath)\n\n # add the filename of the cropped image to the corresponding list\n cropped_image_filename: str = os.path.join(\n filename_image.replace(image_extension, ''),\n cropped_image_last_filename\n )\n cropped_images_filepaths.append(cropped_image_filename)\n\n cropped_image_filepath_list_list.append(cropped_images_filepaths)\n\n return cropped_image_filepath_list_list\n", "path": "lightly/utils/cropping/crop_image_by_bounding_boxes.py"}]}
| 1,350 | 793 |
gh_patches_debug_23013
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-2015
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Project update endpoint gives internal server error
See `http://rsr.akvo.org/rest/v1/project_update/`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/views/project_update.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3
4 See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 """
7
8 from akvo.rsr.models import ProjectUpdate
9
10 from ..serializers import ProjectUpdateSerializer, ProjectUpdateExtraSerializer
11 from ..viewsets import PublicProjectViewSet
12
13 from rest_framework.decorators import api_view, permission_classes
14 from rest_framework.permissions import IsAuthenticated
15 from rest_framework.response import Response
16
17
18 class ProjectUpdateViewSet(PublicProjectViewSet):
19
20 """."""
21 queryset = ProjectUpdate.objects.select_related('project',
22 'user').prefetch_related('locations')
23 serializer_class = ProjectUpdateSerializer
24 filter_fields = {
25 'project': ['exact', ],
26 'indicator_period': ['exact', ],
27 'user': ['exact', ],
28 'uuid': ['exact', 'icontains', ],
29 'period_update': ['exact', 'gt', 'gte', 'lt', 'lte', ],
30 }
31
32 paginate_by_param = 'limit'
33 max_paginate_by = 1000
34
35 def get_queryset(self):
36 """
37 Allow simple filtering on selected fields.
38 We don't use the default filter_fields, because Up filters on
39 datetime for last_modified_at, and they only support a date, not datetime.
40 """
41 created_at__gt = self.request.QUERY_PARAMS.get('created_at__gt', None)
42 if created_at__gt is not None:
43 self.queryset = self.queryset.filter(created_at__gt=created_at__gt)
44 created_at__lt = self.request.QUERY_PARAMS.get('created_at__lt', None)
45 if created_at__lt is not None:
46 self.queryset = self.queryset.filter(created_at__lt=created_at__lt)
47 last_modified_at__gt = self.request.QUERY_PARAMS.get('last_modified_at__gt', None)
48 if last_modified_at__gt is not None:
49 self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)
50 last_modified_at__lt = self.request.QUERY_PARAMS.get('last_modified_at__lt', None)
51 if last_modified_at__lt is not None:
52 self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)
53 # Get updates per organisation
54 project__partners = self.request.QUERY_PARAMS.get('project__partners', None)
55 if project__partners:
56 self.queryset = self.queryset.filter(project__partners=project__partners)
57 user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)
58 if user__organisations:
59 self.queryset = self.queryset.filter(user__organisations=user__organisations)
60 return super(ProjectUpdateViewSet, self).get_queryset()
61
62
63 class ProjectUpdateExtraViewSet(PublicProjectViewSet):
64
65 """Project update extra resource."""
66
67 max_paginate_by = 30
68 paginate_by = 10
69
70 queryset = ProjectUpdate.objects.select_related(
71 'primary_location',
72 'primary_location__location_target',
73 'primary_location__location_target__project',
74 'primary_location__location_target__user',
75 'primary_location__location_target__primary_location',
76 'primary_location__location_target__country',
77 'project',
78 'user',
79 'user__organisation',
80 'user__organisation__primary_location',
81 'user__organisation__primary_location__country',
82 'user__organisation__primary_location__location_target',
83 'user__organisation__primary_location__location_target__internal_org_ids',
84
85 ).prefetch_related(
86 'user__organisations',
87 'user__organisations__primary_location',
88 'user__organisations__primary_location__country',
89 'user__organisations__primary_location__location_target')
90 serializer_class = ProjectUpdateExtraSerializer
91 filter_fields = {
92 'project': ['exact', ],
93 'indicator_period': ['exact', ],
94 'user': ['exact', ],
95 'uuid': ['exact', 'icontains', ],
96 'period_update': ['exact', 'gt', 'gte', 'lt', 'lte', ],
97 # These filters only accept a date, not a datetime
98 # 'created_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],
99 # 'last_modified_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],
100 }
101
102 def get_queryset(self):
103 """
104 Allow simple filtering on selected fields.
105 We don't use the default filter_fields, because Up filters on
106 datetime for last_modified_at, and they only support a date, not datetime.
107 """
108 created_at__gt = self.request.QUERY_PARAMS.get('created_at__gt', None)
109 if created_at__gt is not None:
110 self.queryset = self.queryset.filter(created_at__gt=created_at__gt)
111 created_at__lt = self.request.QUERY_PARAMS.get('created_at__lt', None)
112 if created_at__lt is not None:
113 self.queryset = self.queryset.filter(created_at__lt=created_at__lt)
114 last_modified_at__gt = self.request.QUERY_PARAMS.get('last_modified_at__gt', None)
115 if last_modified_at__gt is not None:
116 self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)
117 last_modified_at__lt = self.request.QUERY_PARAMS.get('last_modified_at__lt', None)
118 if last_modified_at__lt is not None:
119 self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)
120 # Get updates per organisation
121 project__partners = self.request.QUERY_PARAMS.get('project__partners', None)
122 if project__partners:
123 self.queryset = self.queryset.filter(project__partners=project__partners)
124 user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)
125 if user__organisations:
126 self.queryset = self.queryset.filter(user__organisations=user__organisations)
127 return super(ProjectUpdateExtraViewSet, self).get_queryset()
128
129
130 @api_view(['POST'])
131 @permission_classes((IsAuthenticated, ))
132 def upload_indicator_update_photo(request, pk=None):
133 update = ProjectUpdate.objects.get(pk=pk)
134 user = request.user
135
136 # TODO: permissions
137
138 files = request.FILES
139
140 if 'photo' in files.keys():
141 update.photo = files['photo']
142 update.save(update_fields=['photo'])
143
144 return Response(ProjectUpdateExtraSerializer(update).data)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/akvo/rest/views/project_update.py b/akvo/rest/views/project_update.py
--- a/akvo/rest/views/project_update.py
+++ b/akvo/rest/views/project_update.py
@@ -23,10 +23,8 @@
serializer_class = ProjectUpdateSerializer
filter_fields = {
'project': ['exact', ],
- 'indicator_period': ['exact', ],
'user': ['exact', ],
'uuid': ['exact', 'icontains', ],
- 'period_update': ['exact', 'gt', 'gte', 'lt', 'lte', ],
}
paginate_by_param = 'limit'
@@ -90,10 +88,8 @@
serializer_class = ProjectUpdateExtraSerializer
filter_fields = {
'project': ['exact', ],
- 'indicator_period': ['exact', ],
'user': ['exact', ],
'uuid': ['exact', 'icontains', ],
- 'period_update': ['exact', 'gt', 'gte', 'lt', 'lte', ],
# These filters only accept a date, not a datetime
# 'created_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],
# 'last_modified_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],
|
{"golden_diff": "diff --git a/akvo/rest/views/project_update.py b/akvo/rest/views/project_update.py\n--- a/akvo/rest/views/project_update.py\n+++ b/akvo/rest/views/project_update.py\n@@ -23,10 +23,8 @@\n serializer_class = ProjectUpdateSerializer\n filter_fields = {\n 'project': ['exact', ],\n- 'indicator_period': ['exact', ],\n 'user': ['exact', ],\n 'uuid': ['exact', 'icontains', ],\n- 'period_update': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n }\n \n paginate_by_param = 'limit'\n@@ -90,10 +88,8 @@\n serializer_class = ProjectUpdateExtraSerializer\n filter_fields = {\n 'project': ['exact', ],\n- 'indicator_period': ['exact', ],\n 'user': ['exact', ],\n 'uuid': ['exact', 'icontains', ],\n- 'period_update': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n # These filters only accept a date, not a datetime\n # 'created_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n # 'last_modified_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n", "issue": "Project update endpoint gives internal server error\nSee `http://rsr.akvo.org/rest/v1/project_update/`\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rsr.models import ProjectUpdate\n\nfrom ..serializers import ProjectUpdateSerializer, ProjectUpdateExtraSerializer\nfrom ..viewsets import PublicProjectViewSet\n\nfrom rest_framework.decorators import api_view, permission_classes\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\n\n\nclass ProjectUpdateViewSet(PublicProjectViewSet):\n\n \"\"\".\"\"\"\n queryset = ProjectUpdate.objects.select_related('project',\n 'user').prefetch_related('locations')\n serializer_class = ProjectUpdateSerializer\n filter_fields = {\n 'project': ['exact', ],\n 'indicator_period': ['exact', ],\n 'user': ['exact', ],\n 'uuid': ['exact', 'icontains', ],\n 'period_update': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n }\n\n paginate_by_param = 'limit'\n max_paginate_by = 1000\n\n def get_queryset(self):\n \"\"\"\n Allow simple filtering on selected fields.\n We don't use the default filter_fields, because Up filters on\n datetime for last_modified_at, and they only support a date, not datetime.\n \"\"\"\n created_at__gt = self.request.QUERY_PARAMS.get('created_at__gt', None)\n if created_at__gt is not None:\n self.queryset = self.queryset.filter(created_at__gt=created_at__gt)\n created_at__lt = self.request.QUERY_PARAMS.get('created_at__lt', None)\n if created_at__lt is not None:\n self.queryset = self.queryset.filter(created_at__lt=created_at__lt)\n last_modified_at__gt = self.request.QUERY_PARAMS.get('last_modified_at__gt', None)\n if last_modified_at__gt is not None:\n self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)\n last_modified_at__lt = self.request.QUERY_PARAMS.get('last_modified_at__lt', None)\n if last_modified_at__lt is not None:\n self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)\n # Get updates per organisation\n project__partners = self.request.QUERY_PARAMS.get('project__partners', None)\n if project__partners:\n self.queryset = self.queryset.filter(project__partners=project__partners)\n user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)\n if user__organisations:\n self.queryset = 
self.queryset.filter(user__organisations=user__organisations)\n return super(ProjectUpdateViewSet, self).get_queryset()\n\n\nclass ProjectUpdateExtraViewSet(PublicProjectViewSet):\n\n \"\"\"Project update extra resource.\"\"\"\n\n max_paginate_by = 30\n paginate_by = 10\n\n queryset = ProjectUpdate.objects.select_related(\n 'primary_location',\n 'primary_location__location_target',\n 'primary_location__location_target__project',\n 'primary_location__location_target__user',\n 'primary_location__location_target__primary_location',\n 'primary_location__location_target__country',\n 'project',\n 'user',\n 'user__organisation',\n 'user__organisation__primary_location',\n 'user__organisation__primary_location__country',\n 'user__organisation__primary_location__location_target',\n 'user__organisation__primary_location__location_target__internal_org_ids',\n\n ).prefetch_related(\n 'user__organisations',\n 'user__organisations__primary_location',\n 'user__organisations__primary_location__country',\n 'user__organisations__primary_location__location_target')\n serializer_class = ProjectUpdateExtraSerializer\n filter_fields = {\n 'project': ['exact', ],\n 'indicator_period': ['exact', ],\n 'user': ['exact', ],\n 'uuid': ['exact', 'icontains', ],\n 'period_update': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n # These filters only accept a date, not a datetime\n # 'created_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n # 'last_modified_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n }\n\n def get_queryset(self):\n \"\"\"\n Allow simple filtering on selected fields.\n We don't use the default filter_fields, because Up filters on\n datetime for last_modified_at, and they only support a date, not datetime.\n \"\"\"\n created_at__gt = self.request.QUERY_PARAMS.get('created_at__gt', None)\n if created_at__gt is not None:\n self.queryset = self.queryset.filter(created_at__gt=created_at__gt)\n created_at__lt = self.request.QUERY_PARAMS.get('created_at__lt', None)\n if created_at__lt is not None:\n self.queryset = self.queryset.filter(created_at__lt=created_at__lt)\n last_modified_at__gt = self.request.QUERY_PARAMS.get('last_modified_at__gt', None)\n if last_modified_at__gt is not None:\n self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)\n last_modified_at__lt = self.request.QUERY_PARAMS.get('last_modified_at__lt', None)\n if last_modified_at__lt is not None:\n self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)\n # Get updates per organisation\n project__partners = self.request.QUERY_PARAMS.get('project__partners', None)\n if project__partners:\n self.queryset = self.queryset.filter(project__partners=project__partners)\n user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)\n if user__organisations:\n self.queryset = self.queryset.filter(user__organisations=user__organisations)\n return super(ProjectUpdateExtraViewSet, self).get_queryset()\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef upload_indicator_update_photo(request, pk=None):\n update = ProjectUpdate.objects.get(pk=pk)\n user = request.user\n\n # TODO: permissions\n\n files = request.FILES\n\n if 'photo' in files.keys():\n update.photo = files['photo']\n update.save(update_fields=['photo'])\n\n return Response(ProjectUpdateExtraSerializer(update).data)\n", "path": "akvo/rest/views/project_update.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the 
license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rsr.models import ProjectUpdate\n\nfrom ..serializers import ProjectUpdateSerializer, ProjectUpdateExtraSerializer\nfrom ..viewsets import PublicProjectViewSet\n\nfrom rest_framework.decorators import api_view, permission_classes\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\n\n\nclass ProjectUpdateViewSet(PublicProjectViewSet):\n\n \"\"\".\"\"\"\n queryset = ProjectUpdate.objects.select_related('project',\n 'user').prefetch_related('locations')\n serializer_class = ProjectUpdateSerializer\n filter_fields = {\n 'project': ['exact', ],\n 'user': ['exact', ],\n 'uuid': ['exact', 'icontains', ],\n }\n\n paginate_by_param = 'limit'\n max_paginate_by = 1000\n\n def get_queryset(self):\n \"\"\"\n Allow simple filtering on selected fields.\n We don't use the default filter_fields, because Up filters on\n datetime for last_modified_at, and they only support a date, not datetime.\n \"\"\"\n created_at__gt = self.request.QUERY_PARAMS.get('created_at__gt', None)\n if created_at__gt is not None:\n self.queryset = self.queryset.filter(created_at__gt=created_at__gt)\n created_at__lt = self.request.QUERY_PARAMS.get('created_at__lt', None)\n if created_at__lt is not None:\n self.queryset = self.queryset.filter(created_at__lt=created_at__lt)\n last_modified_at__gt = self.request.QUERY_PARAMS.get('last_modified_at__gt', None)\n if last_modified_at__gt is not None:\n self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)\n last_modified_at__lt = self.request.QUERY_PARAMS.get('last_modified_at__lt', None)\n if last_modified_at__lt is not None:\n self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)\n # Get updates per organisation\n project__partners = self.request.QUERY_PARAMS.get('project__partners', None)\n if project__partners:\n self.queryset = self.queryset.filter(project__partners=project__partners)\n user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)\n if user__organisations:\n self.queryset = self.queryset.filter(user__organisations=user__organisations)\n return super(ProjectUpdateViewSet, self).get_queryset()\n\n\nclass ProjectUpdateExtraViewSet(PublicProjectViewSet):\n\n \"\"\"Project update extra resource.\"\"\"\n\n max_paginate_by = 30\n paginate_by = 10\n\n queryset = ProjectUpdate.objects.select_related(\n 'primary_location',\n 'primary_location__location_target',\n 'primary_location__location_target__project',\n 'primary_location__location_target__user',\n 'primary_location__location_target__primary_location',\n 'primary_location__location_target__country',\n 'project',\n 'user',\n 'user__organisation',\n 'user__organisation__primary_location',\n 'user__organisation__primary_location__country',\n 'user__organisation__primary_location__location_target',\n 'user__organisation__primary_location__location_target__internal_org_ids',\n\n ).prefetch_related(\n 'user__organisations',\n 'user__organisations__primary_location',\n 'user__organisations__primary_location__country',\n 'user__organisations__primary_location__location_target')\n serializer_class = ProjectUpdateExtraSerializer\n filter_fields = {\n 'project': ['exact', ],\n 'user': ['exact', ],\n 'uuid': ['exact', 'icontains', ],\n # These filters only accept a date, not a datetime\n # 'created_at': ['exact', 'gt', 'gte', 
'lt', 'lte', ],\n # 'last_modified_at': ['exact', 'gt', 'gte', 'lt', 'lte', ],\n }\n\n def get_queryset(self):\n \"\"\"\n Allow simple filtering on selected fields.\n We don't use the default filter_fields, because Up filters on\n datetime for last_modified_at, and they only support a date, not datetime.\n \"\"\"\n created_at__gt = self.request.QUERY_PARAMS.get('created_at__gt', None)\n if created_at__gt is not None:\n self.queryset = self.queryset.filter(created_at__gt=created_at__gt)\n created_at__lt = self.request.QUERY_PARAMS.get('created_at__lt', None)\n if created_at__lt is not None:\n self.queryset = self.queryset.filter(created_at__lt=created_at__lt)\n last_modified_at__gt = self.request.QUERY_PARAMS.get('last_modified_at__gt', None)\n if last_modified_at__gt is not None:\n self.queryset = self.queryset.filter(last_modified_at__gt=last_modified_at__gt)\n last_modified_at__lt = self.request.QUERY_PARAMS.get('last_modified_at__lt', None)\n if last_modified_at__lt is not None:\n self.queryset = self.queryset.filter(last_modified_at__lt=last_modified_at__lt)\n # Get updates per organisation\n project__partners = self.request.QUERY_PARAMS.get('project__partners', None)\n if project__partners:\n self.queryset = self.queryset.filter(project__partners=project__partners)\n user__organisations = self.request.QUERY_PARAMS.get('user__organisations', None)\n if user__organisations:\n self.queryset = self.queryset.filter(user__organisations=user__organisations)\n return super(ProjectUpdateExtraViewSet, self).get_queryset()\n\n\n@api_view(['POST'])\n@permission_classes((IsAuthenticated, ))\ndef upload_indicator_update_photo(request, pk=None):\n update = ProjectUpdate.objects.get(pk=pk)\n user = request.user\n\n # TODO: permissions\n\n files = request.FILES\n\n if 'photo' in files.keys():\n update.photo = files['photo']\n update.save(update_fields=['photo'])\n\n return Response(ProjectUpdateExtraSerializer(update).data)\n", "path": "akvo/rest/views/project_update.py"}]}
| 2,035 | 284 |
gh_patches_debug_3391
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2636
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mitmdump does not apply filter to saved data
##### Steps to reproduce the problem:
1. I captured some traffic, and ran the following to filter it:
```
$ mitmdump -r traffic.mitm -w out.mitm '~u main.css'
Proxy server listening at http://[::]:8080
172.16.122.1:51049: GET https://www.sjoerdlangkemper.nl/css/main.css
<< 304 Not Modified 0b
$
```
It displays only the matched URL, but it saves all traffic. When done, out.mitm contains the same requests and responses as traffic.mitm. I.e. `mitmproxy -r out.mitm` shows a lot of requests, where I would expect only the request for main.css.
##### Any other comments? What have you tried so far?
I tried this with release 2.0.2, and there it worked as expected. This issue seems to be similar to #1089.
##### System information
```
$ mitmdump --version
Mitmproxy version: 3.0.0 (2.0.0dev0965-0x168c72a)
Python version: 3.5.2
Platform: Linux-4.4.0-98-generic-x86_64-with-Ubuntu-16.04-xenial
SSL version: OpenSSL 1.1.0f 25 May 2017
Linux distro: Ubuntu 16.04 xenial
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/addons/save.py`
Content:
```
1 import os.path
2 import typing
3
4 from mitmproxy import exceptions
5 from mitmproxy import flowfilter
6 from mitmproxy import io
7 from mitmproxy import ctx
8 from mitmproxy import flow
9
10
11 class Save:
12 def __init__(self):
13 self.stream = None
14 self.filt = None
15 self.active_flows = set() # type: Set[flow.Flow]
16
17 def open_file(self, path):
18 if path.startswith("+"):
19 path = path[1:]
20 mode = "ab"
21 else:
22 mode = "wb"
23 path = os.path.expanduser(path)
24 return open(path, mode)
25
26 def start_stream_to_path(self, path, flt):
27 try:
28 f = self.open_file(path)
29 except IOError as v:
30 raise exceptions.OptionsError(str(v))
31 self.stream = io.FilteredFlowWriter(f, flt)
32 self.active_flows = set()
33
34 def configure(self, updated):
35 # We're already streaming - stop the previous stream and restart
36 if "save_stream_filter" in updated:
37 if ctx.options.save_stream_filter:
38 self.filt = flowfilter.parse(ctx.options.save_stream_filter)
39 if not self.filt:
40 raise exceptions.OptionsError(
41 "Invalid filter specification: %s" % ctx.options.save_stream_filter
42 )
43 else:
44 self.filt = None
45 if "save_stream_file" in updated:
46 if self.stream:
47 self.done()
48 if ctx.options.save_stream_file:
49 self.start_stream_to_path(ctx.options.save_stream_file, self.filt)
50
51 def save(self, flows: typing.Sequence[flow.Flow], path: str) -> None:
52 """
53 Save flows to a file. If the path starts with a +, flows are
54 appended to the file, otherwise it is over-written.
55 """
56 try:
57 f = self.open_file(path)
58 except IOError as v:
59 raise exceptions.CommandError(v) from v
60 stream = io.FlowWriter(f)
61 for i in flows:
62 stream.add(i)
63 f.close()
64 ctx.log.alert("Saved %s flows." % len(flows))
65
66 def load(self, l):
67 l.add_command("save.file", self.save)
68
69 def tcp_start(self, flow):
70 if self.stream:
71 self.active_flows.add(flow)
72
73 def tcp_end(self, flow):
74 if self.stream:
75 self.stream.add(flow)
76 self.active_flows.discard(flow)
77
78 def response(self, flow):
79 if self.stream:
80 self.stream.add(flow)
81 self.active_flows.discard(flow)
82
83 def request(self, flow):
84 if self.stream:
85 self.active_flows.add(flow)
86
87 def done(self):
88 if self.stream:
89 for f in self.active_flows:
90 self.stream.add(f)
91 self.active_flows = set([])
92 self.stream.fo.close()
93 self.stream = None
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/addons/save.py b/mitmproxy/addons/save.py
--- a/mitmproxy/addons/save.py
+++ b/mitmproxy/addons/save.py
@@ -42,7 +42,7 @@
)
else:
self.filt = None
- if "save_stream_file" in updated:
+ if "save_stream_file" in updated or "save_stream_filter" in updated:
if self.stream:
self.done()
if ctx.options.save_stream_file:
|
{"golden_diff": "diff --git a/mitmproxy/addons/save.py b/mitmproxy/addons/save.py\n--- a/mitmproxy/addons/save.py\n+++ b/mitmproxy/addons/save.py\n@@ -42,7 +42,7 @@\n )\n else:\n self.filt = None\n- if \"save_stream_file\" in updated:\n+ if \"save_stream_file\" in updated or \"save_stream_filter\" in updated:\n if self.stream:\n self.done()\n if ctx.options.save_stream_file:\n", "issue": "mitmdump does not apply filter to saved data\n##### Steps to reproduce the problem:\r\n\r\n1. I captured some traffic, and ran the following to filter it:\r\n\r\n```\r\n$ mitmdump -r traffic.mitm -w out.mitm '~u main.css'\r\nProxy server listening at http://[::]:8080\r\n172.16.122.1:51049: GET https://www.sjoerdlangkemper.nl/css/main.css\r\n << 304 Not Modified 0b\r\n$\r\n```\r\n\r\nIt displays only the matched URL, but it saves all traffic. When done, out.mitm contains the same requests and responses as traffic.mitm. I.e. `mitmproxy -r out.mitm` shows a lot of requests, where I would expect only the request for main.css.\r\n\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nI tried this with release 2.0.2, and there it worked as expected. This issue seems to be similar to #1089.\r\n\r\n##### System information\r\n\r\n```\r\n$ mitmdump --version\r\nMitmproxy version: 3.0.0 (2.0.0dev0965-0x168c72a) \r\nPython version: 3.5.2\r\nPlatform: Linux-4.4.0-98-generic-x86_64-with-Ubuntu-16.04-xenial\r\nSSL version: OpenSSL 1.1.0f 25 May 2017\r\nLinux distro: Ubuntu 16.04 xenial\r\n```\r\n\n", "before_files": [{"content": "import os.path\nimport typing\n\nfrom mitmproxy import exceptions\nfrom mitmproxy import flowfilter\nfrom mitmproxy import io\nfrom mitmproxy import ctx\nfrom mitmproxy import flow\n\n\nclass Save:\n def __init__(self):\n self.stream = None\n self.filt = None\n self.active_flows = set() # type: Set[flow.Flow]\n\n def open_file(self, path):\n if path.startswith(\"+\"):\n path = path[1:]\n mode = \"ab\"\n else:\n mode = \"wb\"\n path = os.path.expanduser(path)\n return open(path, mode)\n\n def start_stream_to_path(self, path, flt):\n try:\n f = self.open_file(path)\n except IOError as v:\n raise exceptions.OptionsError(str(v))\n self.stream = io.FilteredFlowWriter(f, flt)\n self.active_flows = set()\n\n def configure(self, updated):\n # We're already streaming - stop the previous stream and restart\n if \"save_stream_filter\" in updated:\n if ctx.options.save_stream_filter:\n self.filt = flowfilter.parse(ctx.options.save_stream_filter)\n if not self.filt:\n raise exceptions.OptionsError(\n \"Invalid filter specification: %s\" % ctx.options.save_stream_filter\n )\n else:\n self.filt = None\n if \"save_stream_file\" in updated:\n if self.stream:\n self.done()\n if ctx.options.save_stream_file:\n self.start_stream_to_path(ctx.options.save_stream_file, self.filt)\n\n def save(self, flows: typing.Sequence[flow.Flow], path: str) -> None:\n \"\"\"\n Save flows to a file. 
If the path starts with a +, flows are\n appended to the file, otherwise it is over-written.\n \"\"\"\n try:\n f = self.open_file(path)\n except IOError as v:\n raise exceptions.CommandError(v) from v\n stream = io.FlowWriter(f)\n for i in flows:\n stream.add(i)\n f.close()\n ctx.log.alert(\"Saved %s flows.\" % len(flows))\n\n def load(self, l):\n l.add_command(\"save.file\", self.save)\n\n def tcp_start(self, flow):\n if self.stream:\n self.active_flows.add(flow)\n\n def tcp_end(self, flow):\n if self.stream:\n self.stream.add(flow)\n self.active_flows.discard(flow)\n\n def response(self, flow):\n if self.stream:\n self.stream.add(flow)\n self.active_flows.discard(flow)\n\n def request(self, flow):\n if self.stream:\n self.active_flows.add(flow)\n\n def done(self):\n if self.stream:\n for f in self.active_flows:\n self.stream.add(f)\n self.active_flows = set([])\n self.stream.fo.close()\n self.stream = None\n", "path": "mitmproxy/addons/save.py"}], "after_files": [{"content": "import os.path\nimport typing\n\nfrom mitmproxy import exceptions\nfrom mitmproxy import flowfilter\nfrom mitmproxy import io\nfrom mitmproxy import ctx\nfrom mitmproxy import flow\n\n\nclass Save:\n def __init__(self):\n self.stream = None\n self.filt = None\n self.active_flows = set() # type: Set[flow.Flow]\n\n def open_file(self, path):\n if path.startswith(\"+\"):\n path = path[1:]\n mode = \"ab\"\n else:\n mode = \"wb\"\n path = os.path.expanduser(path)\n return open(path, mode)\n\n def start_stream_to_path(self, path, flt):\n try:\n f = self.open_file(path)\n except IOError as v:\n raise exceptions.OptionsError(str(v))\n self.stream = io.FilteredFlowWriter(f, flt)\n self.active_flows = set()\n\n def configure(self, updated):\n # We're already streaming - stop the previous stream and restart\n if \"save_stream_filter\" in updated:\n if ctx.options.save_stream_filter:\n self.filt = flowfilter.parse(ctx.options.save_stream_filter)\n if not self.filt:\n raise exceptions.OptionsError(\n \"Invalid filter specification: %s\" % ctx.options.save_stream_filter\n )\n else:\n self.filt = None\n if \"save_stream_file\" in updated or \"save_stream_filter\" in updated:\n if self.stream:\n self.done()\n if ctx.options.save_stream_file:\n self.start_stream_to_path(ctx.options.save_stream_file, self.filt)\n\n def save(self, flows: typing.Sequence[flow.Flow], path: str) -> None:\n \"\"\"\n Save flows to a file. If the path starts with a +, flows are\n appended to the file, otherwise it is over-written.\n \"\"\"\n try:\n f = self.open_file(path)\n except IOError as v:\n raise exceptions.CommandError(v) from v\n stream = io.FlowWriter(f)\n for i in flows:\n stream.add(i)\n f.close()\n ctx.log.alert(\"Saved %s flows.\" % len(flows))\n\n def load(self, l):\n l.add_command(\"save.file\", self.save)\n\n def tcp_start(self, flow):\n if self.stream:\n self.active_flows.add(flow)\n\n def tcp_end(self, flow):\n if self.stream:\n self.stream.add(flow)\n self.active_flows.discard(flow)\n\n def response(self, flow):\n if self.stream:\n self.stream.add(flow)\n self.active_flows.discard(flow)\n\n def request(self, flow):\n if self.stream:\n self.active_flows.add(flow)\n\n def done(self):\n if self.stream:\n for f in self.active_flows:\n self.stream.add(f)\n self.active_flows = set([])\n self.stream.fo.close()\n self.stream = None\n", "path": "mitmproxy/addons/save.py"}]}
| 1,417 | 111 |
gh_patches_debug_29098
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-2815
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
windows.compile_resources() can't be used with custom targets
```meson
rc_target = custom_target('Windows resource file',
command : [preprocess_command, rcdefs, '@INPUT@', '@OUTPUT@'],
build_always : true,
input : 'taisei.rc.in',
output : 'taisei.rc',
)
version_deps += winmod.compile_resources(rc_target)
```
```
Meson encountered an error in file src/meson.build, line 59, column 4:
Windows resource arguments must be strings or files not <CustomTargetHolder Windows resource file@cus: ['/data/git/taisei/scripts/configure-file.py', '--rootdir', '/data/git/taisei', '--fallback-version', 'v1.1.0-9999', '-DMESON_BUILD_TYPE=release', '-DICONS_DIR=/data/git/taisei/misc/icons', '-DBUILDTYPE_DEFINE=#define RELEASE_BUILD', '@INPUT@', '@OUTPUT@']>
```
This bug makes it impossible to reliably regenerate the `.rc` source on every rebuild.
Add something like depend_files to windows.compile_resources()
Resource script can include various other files (bitmap, cursor, font, html, icon, message table, binary data, manifest), it would be nice if it were possible to declare the resource script depends on these.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesonbuild/modules/windows.py`
Content:
```
1 # Copyright 2015 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 from .. import mlog
18 from .. import mesonlib, dependencies, build
19 from ..mesonlib import MesonException, extract_as_list
20 from . import get_include_args
21 from . import ModuleReturnValue
22 from . import ExtensionModule
23 from ..interpreterbase import permittedKwargs
24
25 class WindowsModule(ExtensionModule):
26
27 def detect_compiler(self, compilers):
28 for l in ('c', 'cpp'):
29 if l in compilers:
30 return compilers[l]
31 raise MesonException('Resource compilation requires a C or C++ compiler.')
32
33 @permittedKwargs({'args', 'include_directories'})
34 def compile_resources(self, state, args, kwargs):
35 comp = self.detect_compiler(state.compilers)
36
37 extra_args = mesonlib.stringlistify(kwargs.get('args', []))
38 inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)
39 for incd in inc_dirs:
40 if not isinstance(incd.held_object, (str, build.IncludeDirs)):
41 raise MesonException('Resource include dirs should be include_directories().')
42 extra_args += get_include_args(inc_dirs)
43
44 if comp.id == 'msvc':
45 rescomp = dependencies.ExternalProgram('rc', silent=True)
46 res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']
47 suffix = 'res'
48 else:
49 m = 'Argument {!r} has a space which may not work with windres due to ' \
50 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'
51 for arg in extra_args:
52 if ' ' in arg:
53 mlog.warning(m.format(arg))
54 rescomp_name = None
55 # FIXME: Does not handle `native: true` executables, see
56 # https://github.com/mesonbuild/meson/issues/1531
57 if state.environment.is_cross_build():
58 # If cross compiling see if windres has been specified in the
59 # cross file before trying to find it another way.
60 rescomp_name = state.environment.cross_info.config['binaries'].get('windres')
61 if rescomp_name is None:
62 # Pick-up env var WINDRES if set. This is often used for
63 # specifying an arch-specific windres.
64 rescomp_name = os.environ.get('WINDRES', 'windres')
65 rescomp = dependencies.ExternalProgram(rescomp_name, silent=True)
66 res_args = extra_args + ['@INPUT@', '@OUTPUT@']
67 suffix = 'o'
68 if not rescomp.found():
69 raise MesonException('Could not find Windows resource compiler %s.' % ' '.join(rescomp.get_command()))
70 res_kwargs = {'output': '@BASENAME@.' + suffix,
71 'arguments': res_args}
72 res_gen = build.Generator([rescomp], res_kwargs)
73 res_output = res_gen.process_files('Windows resource', args, state)
74 return ModuleReturnValue(res_output, [res_output])
75
76 def initialize():
77 return WindowsModule()
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py
--- a/mesonbuild/modules/windows.py
+++ b/mesonbuild/modules/windows.py
@@ -67,11 +67,42 @@
suffix = 'o'
if not rescomp.found():
raise MesonException('Could not find Windows resource compiler %s.' % ' '.join(rescomp.get_command()))
- res_kwargs = {'output': '@BASENAME@.' + suffix,
- 'arguments': res_args}
- res_gen = build.Generator([rescomp], res_kwargs)
- res_output = res_gen.process_files('Windows resource', args, state)
- return ModuleReturnValue(res_output, [res_output])
+
+ res_targets = []
+
+ def add_target(src):
+ if isinstance(src, list):
+ for subsrc in src:
+ add_target(subsrc)
+ return
+
+ if hasattr(src, 'held_object'):
+ src = src.held_object
+
+ res_kwargs = {
+ 'output': '@BASENAME@.' + suffix,
+ 'input': [src],
+ 'command': [rescomp] + res_args,
+ }
+
+ if isinstance(src, (str, mesonlib.File)):
+ name = 'file {!r}'.format(str(src))
+ elif isinstance(src, build.CustomTarget):
+ if len(src.get_outputs()) > 1:
+ raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')
+
+ name = 'target {!r}'.format(src.get_id())
+ else:
+ raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))
+
+ # Path separators are not allowed in target names
+ name = name.replace('/', '_').replace('\\', '_')
+
+ res_targets.append(build.CustomTarget('Windows resource for ' + name, state.subdir, state.subproject, res_kwargs))
+
+ add_target(args)
+
+ return ModuleReturnValue(res_targets, [res_targets])
def initialize():
return WindowsModule()
|
{"golden_diff": "diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py\n--- a/mesonbuild/modules/windows.py\n+++ b/mesonbuild/modules/windows.py\n@@ -67,11 +67,42 @@\n suffix = 'o'\n if not rescomp.found():\n raise MesonException('Could not find Windows resource compiler %s.' % ' '.join(rescomp.get_command()))\n- res_kwargs = {'output': '@BASENAME@.' + suffix,\n- 'arguments': res_args}\n- res_gen = build.Generator([rescomp], res_kwargs)\n- res_output = res_gen.process_files('Windows resource', args, state)\n- return ModuleReturnValue(res_output, [res_output])\n+\n+ res_targets = []\n+\n+ def add_target(src):\n+ if isinstance(src, list):\n+ for subsrc in src:\n+ add_target(subsrc)\n+ return\n+\n+ if hasattr(src, 'held_object'):\n+ src = src.held_object\n+\n+ res_kwargs = {\n+ 'output': '@BASENAME@.' + suffix,\n+ 'input': [src],\n+ 'command': [rescomp] + res_args,\n+ }\n+\n+ if isinstance(src, (str, mesonlib.File)):\n+ name = 'file {!r}'.format(str(src))\n+ elif isinstance(src, build.CustomTarget):\n+ if len(src.get_outputs()) > 1:\n+ raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')\n+\n+ name = 'target {!r}'.format(src.get_id())\n+ else:\n+ raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))\n+\n+ # Path separators are not allowed in target names\n+ name = name.replace('/', '_').replace('\\\\', '_')\n+\n+ res_targets.append(build.CustomTarget('Windows resource for ' + name, state.subdir, state.subproject, res_kwargs))\n+\n+ add_target(args)\n+\n+ return ModuleReturnValue(res_targets, [res_targets])\n \n def initialize():\n return WindowsModule()\n", "issue": "windows.compile_resources() can't be used with custom targets\n```meson\r\n rc_target = custom_target('Windows resource file',\r\n command : [preprocess_command, rcdefs, '@INPUT@', '@OUTPUT@'],\r\n build_always : true,\r\n input : 'taisei.rc.in',\r\n output : 'taisei.rc',\r\n )\r\n\r\n version_deps += winmod.compile_resources(rc_target)\r\n```\r\n\r\n```\r\nMeson encountered an error in file src/meson.build, line 59, column 4:\r\nWindows resource arguments must be strings or files not <CustomTargetHolder Windows resource file@cus: ['/data/git/taisei/scripts/configure-file.py', '--rootdir', '/data/git/taisei', '--fallback-version', 'v1.1.0-9999', '-DMESON_BUILD_TYPE=release', '-DICONS_DIR=/data/git/taisei/misc/icons', '-DBUILDTYPE_DEFINE=#define RELEASE_BUILD', '@INPUT@', '@OUTPUT@']>\r\n```\r\n\r\nThis bug makes it impossible to reliably regenerate the `.rc` source on every rebuild.\nAdd something like depend_files to windows.compile_resources()\nResource script can include various other files (bitmap, cursor, font, html, icon, message table, binary data, manifest), it would be nice if it were possible to declare the resource script depends on these.\n", "before_files": [{"content": "# Copyright 2015 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under 
the License.\n\nimport os\n\nfrom .. import mlog\nfrom .. import mesonlib, dependencies, build\nfrom ..mesonlib import MesonException, extract_as_list\nfrom . import get_include_args\nfrom . import ModuleReturnValue\nfrom . import ExtensionModule\nfrom ..interpreterbase import permittedKwargs\n\nclass WindowsModule(ExtensionModule):\n\n def detect_compiler(self, compilers):\n for l in ('c', 'cpp'):\n if l in compilers:\n return compilers[l]\n raise MesonException('Resource compilation requires a C or C++ compiler.')\n\n @permittedKwargs({'args', 'include_directories'})\n def compile_resources(self, state, args, kwargs):\n comp = self.detect_compiler(state.compilers)\n\n extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)\n for incd in inc_dirs:\n if not isinstance(incd.held_object, (str, build.IncludeDirs)):\n raise MesonException('Resource include dirs should be include_directories().')\n extra_args += get_include_args(inc_dirs)\n\n if comp.id == 'msvc':\n rescomp = dependencies.ExternalProgram('rc', silent=True)\n res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']\n suffix = 'res'\n else:\n m = 'Argument {!r} has a space which may not work with windres due to ' \\\n 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'\n for arg in extra_args:\n if ' ' in arg:\n mlog.warning(m.format(arg))\n rescomp_name = None\n # FIXME: Does not handle `native: true` executables, see\n # https://github.com/mesonbuild/meson/issues/1531\n if state.environment.is_cross_build():\n # If cross compiling see if windres has been specified in the\n # cross file before trying to find it another way.\n rescomp_name = state.environment.cross_info.config['binaries'].get('windres')\n if rescomp_name is None:\n # Pick-up env var WINDRES if set. This is often used for\n # specifying an arch-specific windres.\n rescomp_name = os.environ.get('WINDRES', 'windres')\n rescomp = dependencies.ExternalProgram(rescomp_name, silent=True)\n res_args = extra_args + ['@INPUT@', '@OUTPUT@']\n suffix = 'o'\n if not rescomp.found():\n raise MesonException('Could not find Windows resource compiler %s.' % ' '.join(rescomp.get_command()))\n res_kwargs = {'output': '@BASENAME@.' + suffix,\n 'arguments': res_args}\n res_gen = build.Generator([rescomp], res_kwargs)\n res_output = res_gen.process_files('Windows resource', args, state)\n return ModuleReturnValue(res_output, [res_output])\n\ndef initialize():\n return WindowsModule()\n", "path": "mesonbuild/modules/windows.py"}], "after_files": [{"content": "# Copyright 2015 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom .. import mlog\nfrom .. import mesonlib, dependencies, build\nfrom ..mesonlib import MesonException, extract_as_list\nfrom . import get_include_args\nfrom . import ModuleReturnValue\nfrom . 
import ExtensionModule\nfrom ..interpreterbase import permittedKwargs\n\nclass WindowsModule(ExtensionModule):\n\n def detect_compiler(self, compilers):\n for l in ('c', 'cpp'):\n if l in compilers:\n return compilers[l]\n raise MesonException('Resource compilation requires a C or C++ compiler.')\n\n @permittedKwargs({'args', 'include_directories'})\n def compile_resources(self, state, args, kwargs):\n comp = self.detect_compiler(state.compilers)\n\n extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)\n for incd in inc_dirs:\n if not isinstance(incd.held_object, (str, build.IncludeDirs)):\n raise MesonException('Resource include dirs should be include_directories().')\n extra_args += get_include_args(inc_dirs)\n\n if comp.id == 'msvc':\n rescomp = dependencies.ExternalProgram('rc', silent=True)\n res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']\n suffix = 'res'\n else:\n m = 'Argument {!r} has a space which may not work with windres due to ' \\\n 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'\n for arg in extra_args:\n if ' ' in arg:\n mlog.warning(m.format(arg))\n rescomp_name = None\n # FIXME: Does not handle `native: true` executables, see\n # https://github.com/mesonbuild/meson/issues/1531\n if state.environment.is_cross_build():\n # If cross compiling see if windres has been specified in the\n # cross file before trying to find it another way.\n rescomp_name = state.environment.cross_info.config['binaries'].get('windres')\n if rescomp_name is None:\n # Pick-up env var WINDRES if set. This is often used for\n # specifying an arch-specific windres.\n rescomp_name = os.environ.get('WINDRES', 'windres')\n rescomp = dependencies.ExternalProgram(rescomp_name, silent=True)\n res_args = extra_args + ['@INPUT@', '@OUTPUT@']\n suffix = 'o'\n if not rescomp.found():\n raise MesonException('Could not find Windows resource compiler %s.' % ' '.join(rescomp.get_command()))\n\n res_targets = []\n\n def add_target(src):\n if isinstance(src, list):\n for subsrc in src:\n add_target(subsrc)\n return\n\n if hasattr(src, 'held_object'):\n src = src.held_object\n\n res_kwargs = {\n 'output': '@BASENAME@.' + suffix,\n 'input': [src],\n 'command': [rescomp] + res_args,\n }\n\n if isinstance(src, (str, mesonlib.File)):\n name = 'file {!r}'.format(str(src))\n elif isinstance(src, build.CustomTarget):\n if len(src.get_outputs()) > 1:\n raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')\n\n name = 'target {!r}'.format(src.get_id())\n else:\n raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))\n\n # Path separators are not allowed in target names\n name = name.replace('/', '_').replace('\\\\', '_')\n\n res_targets.append(build.CustomTarget('Windows resource for ' + name, state.subdir, state.subproject, res_kwargs))\n\n add_target(args)\n\n return ModuleReturnValue(res_targets, [res_targets])\n\ndef initialize():\n return WindowsModule()\n", "path": "mesonbuild/modules/windows.py"}]}
| 1,482 | 469 |
gh_patches_debug_29464 | rasdani/github-patches | git_diff | getsentry__sentry-python-470 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Recursive serializer stackoverflow error when serializing an object that logs on iteration
I'm currently having a problem with the SDK's integrated serialisation of local variables for sending with events.
I have, in my local variables, an object that is a `Mapping` but that sometimes triggers an event when keys are accessed. This is normal behavior, since keys are lazy-loaded and my code handles this correctly. However, when an event occurs and this variable is serialized for sending to Sentry, the SDK iterates over all my keys and causes an infinite recursion.
Minimal code te reproduce :
```python
import logging
from collections.abc import Sequence
import sentry_sdk
logger = logging.getLogger(__name__)
class MyGenerator(Sequence):
def __init__(self):
self.values = [1, 2, 3]
Sequence.__init__(self)
def __iter__(self):
for value in self.values:
yield value
logger.error(f"No values left", exc_info=True)
def __len__(self):
"""List length"""
return len(self.values)
def __getitem__(self, ii):
"""Get a list item"""
return self.values[ii]
if __name__ == "__main__":
sentry_sdk.init(
dsn="[DSN]"
)
gen = MyGenerator()
logger.error("Testing", exc_info=True)
```
The only quick solution I found is to set `with_locals=False` on the sdk but it's a pity to lack all this functionality because of this case.
On the other hand, I'm not sure how the SDK could anticipate such problems, here's what I'm thinking:
* The SDK could check how deep in recursion it is when serializing and stop when it notices it's handling a log that was generated while handling another log.
* In the same way that exceptions that occur when deserializing trigger `<failed to serialize, use init(debug=True) to see error logs>`, all logs could be disregarded
* In some cases it could be problematic that sentry iterates over those variables altogether (for example, emptying a generator when it should still have values available) and for those maybe a `serialize_ignore_classes=[MyDangerousClass]` on the SDK init could avoid interference with such classes.
Happy to open a PR to follow-up on this when a proper solution is found !
--- END ISSUE ---
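The first bullet in the issue (stop when the SDK notices it is handling a log generated while handling another log) amounts to a re-entrancy guard around event capture. Below is a minimal sketch of that pattern, independent of the real `sentry_sdk` internals (the `capture_event` and `serialize` here are toys); the approach actually adopted is the `ContextVar` guard shown in the golden diff later in this record.
```python
# Toy illustration of guarding event capture against recursive invocation.
from contextvars import ContextVar

_in_capture = ContextVar('in_capture', default=False)


def serialize(value):
    # Pretend serialization that, like the Sequence in the report,
    # tries to capture another event while being iterated.
    capture_event({'message': 'log emitted during serialization'})
    return repr(value)


def capture_event(event):
    if _in_capture.get():
        return None  # already capturing: drop the nested event instead of recursing
    _in_capture.set(True)
    try:
        event['serialized'] = serialize(event.get('extra'))
        return 'event-id'
    finally:
        _in_capture.set(False)


if __name__ == '__main__':
    # Finishes instead of hitting a RecursionError.
    print(capture_event({'message': 'outer', 'extra': [1, 2, 3]}))
```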
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/client.py`
Content:
```
1 import os
2 import uuid
3 import random
4 from datetime import datetime
5 import socket
6
7 from sentry_sdk._compat import string_types, text_type, iteritems
8 from sentry_sdk.utils import (
9 handle_in_app,
10 get_type_name,
11 capture_internal_exceptions,
12 current_stacktrace,
13 logger,
14 )
15 from sentry_sdk.serializer import Serializer
16 from sentry_sdk.transport import make_transport
17 from sentry_sdk.consts import DEFAULT_OPTIONS, SDK_INFO, ClientConstructor
18 from sentry_sdk.integrations import setup_integrations
19 from sentry_sdk.utils import ContextVar
20
21 from sentry_sdk._types import MYPY
22
23 if MYPY:
24 from typing import Any
25 from typing import Callable
26 from typing import Dict
27 from typing import Optional
28
29 from sentry_sdk.scope import Scope
30 from sentry_sdk._types import Event, Hint
31
32
33 _client_init_debug = ContextVar("client_init_debug")
34
35
36 def _get_options(*args, **kwargs):
37 # type: (*Optional[str], **Any) -> Dict[str, Any]
38 if args and (isinstance(args[0], (text_type, bytes, str)) or args[0] is None):
39 dsn = args[0] # type: Optional[str]
40 args = args[1:]
41 else:
42 dsn = None
43
44 rv = dict(DEFAULT_OPTIONS)
45 options = dict(*args, **kwargs) # type: ignore
46 if dsn is not None and options.get("dsn") is None:
47 options["dsn"] = dsn # type: ignore
48
49 for key, value in iteritems(options):
50 if key not in rv:
51 raise TypeError("Unknown option %r" % (key,))
52 rv[key] = value
53
54 if rv["dsn"] is None:
55 rv["dsn"] = os.environ.get("SENTRY_DSN")
56
57 if rv["release"] is None:
58 rv["release"] = os.environ.get("SENTRY_RELEASE")
59
60 if rv["environment"] is None:
61 rv["environment"] = os.environ.get("SENTRY_ENVIRONMENT")
62
63 if rv["server_name"] is None and hasattr(socket, "gethostname"):
64 rv["server_name"] = socket.gethostname()
65
66 return rv # type: ignore
67
68
69 class _Client(object):
70 """The client is internally responsible for capturing the events and
71 forwarding them to sentry through the configured transport. It takes
72 the client options as keyword arguments and optionally the DSN as first
73 argument.
74 """
75
76 def __init__(self, *args, **kwargs):
77 # type: (*Any, **Any) -> None
78 self.options = get_options(*args, **kwargs) # type: Dict[str, Any]
79 self._init_impl()
80
81 def __getstate__(self):
82 # type: () -> Any
83 return {"options": self.options}
84
85 def __setstate__(self, state):
86 # type: (Any) -> None
87 self.options = state["options"]
88 self._init_impl()
89
90 def _init_impl(self):
91 # type: () -> None
92 old_debug = _client_init_debug.get(False)
93 try:
94 _client_init_debug.set(self.options["debug"])
95 self.transport = make_transport(self.options)
96
97 request_bodies = ("always", "never", "small", "medium")
98 if self.options["request_bodies"] not in request_bodies:
99 raise ValueError(
100 "Invalid value for request_bodies. Must be one of {}".format(
101 request_bodies
102 )
103 )
104
105 self.integrations = setup_integrations(
106 self.options["integrations"],
107 with_defaults=self.options["default_integrations"],
108 )
109 finally:
110 _client_init_debug.set(old_debug)
111
112 @property
113 def dsn(self):
114 # type: () -> Optional[str]
115 """Returns the configured DSN as string."""
116 return self.options["dsn"]
117
118 def _prepare_event(
119 self,
120 event, # type: Event
121 hint, # type: Optional[Hint]
122 scope, # type: Optional[Scope]
123 ):
124 # type: (...) -> Optional[Event]
125 if event.get("timestamp") is None:
126 event["timestamp"] = datetime.utcnow()
127
128 hint = dict(hint or ()) # type: Hint
129
130 if scope is not None:
131 event_ = scope.apply_to_event(event, hint)
132 if event_ is None:
133 return None
134 event = event_
135
136 if (
137 self.options["attach_stacktrace"]
138 and "exception" not in event
139 and "stacktrace" not in event
140 and "threads" not in event
141 ):
142 with capture_internal_exceptions():
143 event["threads"] = {
144 "values": [
145 {
146 "stacktrace": current_stacktrace(
147 self.options["with_locals"]
148 ),
149 "crashed": False,
150 "current": True,
151 }
152 ]
153 }
154
155 for key in "release", "environment", "server_name", "dist":
156 if event.get(key) is None and self.options[key] is not None: # type: ignore
157 event[key] = text_type(self.options[key]).strip() # type: ignore
158 if event.get("sdk") is None:
159 sdk_info = dict(SDK_INFO)
160 sdk_info["integrations"] = sorted(self.integrations.keys())
161 event["sdk"] = sdk_info
162
163 if event.get("platform") is None:
164 event["platform"] = "python"
165
166 event = handle_in_app(
167 event, self.options["in_app_exclude"], self.options["in_app_include"]
168 )
169
170 # Postprocess the event here so that annotated types do
171 # generally not surface in before_send
172 if event is not None:
173 event = Serializer().serialize_event(event)
174
175 before_send = self.options["before_send"]
176 if before_send is not None:
177 new_event = None
178 with capture_internal_exceptions():
179 new_event = before_send(event, hint or {})
180 if new_event is None:
181 logger.info("before send dropped event (%s)", event)
182 event = new_event # type: ignore
183
184 return event
185
186 def _is_ignored_error(self, event, hint):
187 # type: (Event, Hint) -> bool
188 exc_info = hint.get("exc_info")
189 if exc_info is None:
190 return False
191
192 type_name = get_type_name(exc_info[0])
193 full_name = "%s.%s" % (exc_info[0].__module__, type_name)
194
195 for errcls in self.options["ignore_errors"]:
196 # String types are matched against the type name in the
197 # exception only
198 if isinstance(errcls, string_types):
199 if errcls == full_name or errcls == type_name:
200 return True
201 else:
202 if issubclass(exc_info[0], errcls): # type: ignore
203 return True
204
205 return False
206
207 def _should_capture(
208 self,
209 event, # type: Event
210 hint, # type: Hint
211 scope=None, # type: Optional[Scope]
212 ):
213 # type: (...) -> bool
214 if scope is not None and not scope._should_capture:
215 return False
216
217 if (
218 self.options["sample_rate"] < 1.0
219 and random.random() >= self.options["sample_rate"]
220 ):
221 return False
222
223 if self._is_ignored_error(event, hint):
224 return False
225
226 return True
227
228 def capture_event(
229 self,
230 event, # type: Event
231 hint=None, # type: Optional[Hint]
232 scope=None, # type: Optional[Scope]
233 ):
234 # type: (...) -> Optional[str]
235 """Captures an event.
236
237 :param event: A ready-made event that can be directly sent to Sentry.
238
239 :param hint: Contains metadata about the event that can be read from `before_send`, such as the original exception object or a HTTP request object.
240
241 :returns: An event ID. May be `None` if there is no DSN set or of if the SDK decided to discard the event for other reasons. In such situations setting `debug=True` on `init()` may help.
242 """
243 if self.transport is None:
244 return None
245 if hint is None:
246 hint = {}
247 rv = event.get("event_id")
248 if rv is None:
249 event["event_id"] = rv = uuid.uuid4().hex
250 if not self._should_capture(event, hint, scope):
251 return None
252 event = self._prepare_event(event, hint, scope)
253 if event is None:
254 return None
255 self.transport.capture_event(event)
256 return rv
257
258 def close(
259 self,
260 timeout=None, # type: Optional[float]
261 callback=None, # type: Optional[Callable[[int, float], None]]
262 ):
263 # type: (...) -> None
264 """
265 Close the client and shut down the transport. Arguments have the same
266 semantics as :py:meth:`Client.flush`.
267 """
268 if self.transport is not None:
269 self.flush(timeout=timeout, callback=callback)
270 self.transport.kill()
271 self.transport = None
272
273 def flush(
274 self,
275 timeout=None, # type: Optional[float]
276 callback=None, # type: Optional[Callable[[int, float], None]]
277 ):
278 # type: (...) -> None
279 """
280 Wait for the current events to be sent.
281
282 :param timeout: Wait for at most `timeout` seconds. If no `timeout` is provided, the `shutdown_timeout` option value is used.
283
284 :param callback: Is invoked with the number of pending events and the configured timeout.
285 """
286 if self.transport is not None:
287 if timeout is None:
288 timeout = self.options["shutdown_timeout"]
289 self.transport.flush(timeout=timeout, callback=callback)
290
291 def __enter__(self):
292 # type: () -> _Client
293 return self
294
295 def __exit__(self, exc_type, exc_value, tb):
296 # type: (Any, Any, Any) -> None
297 self.close()
298
299
300 from sentry_sdk._types import MYPY
301
302 if MYPY:
303 # Make mypy, PyCharm and other static analyzers think `get_options` is a
304 # type to have nicer autocompletion for params.
305 #
306 # Use `ClientConstructor` to define the argument types of `init` and
307 # `Dict[str, Any]` to tell static analyzers about the return type.
308
309 class get_options(ClientConstructor, Dict[str, Any]):
310 pass
311
312 class Client(ClientConstructor, _Client):
313 pass
314
315
316 else:
317 # Alias `get_options` for actual usage. Go through the lambda indirection
318 # to throw PyCharm off of the weakly typed signature (it would otherwise
319 # discover both the weakly typed signature of `_init` and our faked `init`
320 # type).
321
322 get_options = (lambda: _get_options)()
323 Client = (lambda: _Client)()
324
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/client.py b/sentry_sdk/client.py
--- a/sentry_sdk/client.py
+++ b/sentry_sdk/client.py
@@ -31,6 +31,7 @@
_client_init_debug = ContextVar("client_init_debug")
+_client_in_capture_event = ContextVar("client_in_capture_event")
def _get_options(*args, **kwargs):
@@ -240,20 +241,29 @@
:returns: An event ID. May be `None` if there is no DSN set or of if the SDK decided to discard the event for other reasons. In such situations setting `debug=True` on `init()` may help.
"""
- if self.transport is None:
+ is_recursive = _client_in_capture_event.get(False)
+ if is_recursive:
return None
- if hint is None:
- hint = {}
- rv = event.get("event_id")
- if rv is None:
- event["event_id"] = rv = uuid.uuid4().hex
- if not self._should_capture(event, hint, scope):
- return None
- event = self._prepare_event(event, hint, scope)
- if event is None:
- return None
- self.transport.capture_event(event)
- return rv
+
+ _client_in_capture_event.set(True)
+
+ try:
+ if self.transport is None:
+ return None
+ if hint is None:
+ hint = {}
+ event_id = event.get("event_id")
+ if event_id is None:
+ event["event_id"] = event_id = uuid.uuid4().hex
+ if not self._should_capture(event, hint, scope):
+ return None
+ event_opt = self._prepare_event(event, hint, scope)
+ if event_opt is None:
+ return None
+ self.transport.capture_event(event_opt)
+ return event_id
+ finally:
+ _client_in_capture_event.set(False)
def close(
self,
|
{"golden_diff": "diff --git a/sentry_sdk/client.py b/sentry_sdk/client.py\n--- a/sentry_sdk/client.py\n+++ b/sentry_sdk/client.py\n@@ -31,6 +31,7 @@\n \n \n _client_init_debug = ContextVar(\"client_init_debug\")\n+_client_in_capture_event = ContextVar(\"client_in_capture_event\")\n \n \n def _get_options(*args, **kwargs):\n@@ -240,20 +241,29 @@\n \n :returns: An event ID. May be `None` if there is no DSN set or of if the SDK decided to discard the event for other reasons. In such situations setting `debug=True` on `init()` may help.\n \"\"\"\n- if self.transport is None:\n+ is_recursive = _client_in_capture_event.get(False)\n+ if is_recursive:\n return None\n- if hint is None:\n- hint = {}\n- rv = event.get(\"event_id\")\n- if rv is None:\n- event[\"event_id\"] = rv = uuid.uuid4().hex\n- if not self._should_capture(event, hint, scope):\n- return None\n- event = self._prepare_event(event, hint, scope)\n- if event is None:\n- return None\n- self.transport.capture_event(event)\n- return rv\n+\n+ _client_in_capture_event.set(True)\n+\n+ try:\n+ if self.transport is None:\n+ return None\n+ if hint is None:\n+ hint = {}\n+ event_id = event.get(\"event_id\")\n+ if event_id is None:\n+ event[\"event_id\"] = event_id = uuid.uuid4().hex\n+ if not self._should_capture(event, hint, scope):\n+ return None\n+ event_opt = self._prepare_event(event, hint, scope)\n+ if event_opt is None:\n+ return None\n+ self.transport.capture_event(event_opt)\n+ return event_id\n+ finally:\n+ _client_in_capture_event.set(False)\n \n def close(\n self,\n", "issue": "Recursive serializer stackoverflow error when serializing an object that logs on iteration\nI'm currently having a problem with the SDKs integrated serialisation of local variables for sending with events.\r\nI have in my local variables, an object that is a `Mapping` but that sometimes raises an event when keys are accessed. This is a normal behavior, since keys are lazy loaded and my code handles these correctly. 
However, when an event occurs and this variable is serialized for sending to sentry, the sdk iterates over all my keys and causes an infinite recursion.\r\n\r\nMinimal code te reproduce :\r\n```python\r\nimport logging\r\nfrom collections.abc import Sequence\r\nimport sentry_sdk\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\nclass MyGenerator(Sequence):\r\n def __init__(self):\r\n self.values = [1, 2, 3]\r\n Sequence.__init__(self)\r\n \r\n def __iter__(self):\r\n for value in self.values:\r\n yield value\r\n logger.error(f\"No values left\", exc_info=True)\r\n\r\n def __len__(self):\r\n \"\"\"List length\"\"\"\r\n return len(self.values)\r\n \r\n def __getitem__(self, ii):\r\n \"\"\"Get a list item\"\"\"\r\n return self.values[ii]\r\n \r\n\r\nif __name__ == \"__main__\":\r\n sentry_sdk.init(\r\n dsn=\"[DSN]\"\r\n )\r\n\r\n gen = MyGenerator()\r\n\r\n logger.error(\"Testing\", exc_info=True)\r\n```\r\n\r\nThe only quick solution I found is to set `with_locals=False` on the sdk but it's a pity to lack all this functionality because of this case.\r\nOn the other hand, I'm not sure how the SDK could anticipate such problems, here's what I'm thinking:\r\n* The SDK could check how deep in recursion it is when serializing and stop when it notices it's handling a log that was generated while handling an other log.\r\n* In the same way that exceptions that occur when deserializing trigger `<failed to serialize, use init(debug=True) to see error logs>`, all logs could be disregarded\r\n* In some cases it could be problematic that sentry iterates over those variables altogether (for example, emptying a generator when it should still have values available) and for those maybe a `serialize_ignore_classes=[MyDangerousClass]` on the SDK init could avoid interference with such classes.\r\n\r\nHappy to open a PR to follow-up on this when a proper solution is found !\n", "before_files": [{"content": "import os\nimport uuid\nimport random\nfrom datetime import datetime\nimport socket\n\nfrom sentry_sdk._compat import string_types, text_type, iteritems\nfrom sentry_sdk.utils import (\n handle_in_app,\n get_type_name,\n capture_internal_exceptions,\n current_stacktrace,\n logger,\n)\nfrom sentry_sdk.serializer import Serializer\nfrom sentry_sdk.transport import make_transport\nfrom sentry_sdk.consts import DEFAULT_OPTIONS, SDK_INFO, ClientConstructor\nfrom sentry_sdk.integrations import setup_integrations\nfrom sentry_sdk.utils import ContextVar\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Callable\n from typing import Dict\n from typing import Optional\n\n from sentry_sdk.scope import Scope\n from sentry_sdk._types import Event, Hint\n\n\n_client_init_debug = ContextVar(\"client_init_debug\")\n\n\ndef _get_options(*args, **kwargs):\n # type: (*Optional[str], **Any) -> Dict[str, Any]\n if args and (isinstance(args[0], (text_type, bytes, str)) or args[0] is None):\n dsn = args[0] # type: Optional[str]\n args = args[1:]\n else:\n dsn = None\n\n rv = dict(DEFAULT_OPTIONS)\n options = dict(*args, **kwargs) # type: ignore\n if dsn is not None and options.get(\"dsn\") is None:\n options[\"dsn\"] = dsn # type: ignore\n\n for key, value in iteritems(options):\n if key not in rv:\n raise TypeError(\"Unknown option %r\" % (key,))\n rv[key] = value\n\n if rv[\"dsn\"] is None:\n rv[\"dsn\"] = os.environ.get(\"SENTRY_DSN\")\n\n if rv[\"release\"] is None:\n rv[\"release\"] = os.environ.get(\"SENTRY_RELEASE\")\n\n if rv[\"environment\"] is None:\n rv[\"environment\"] = 
os.environ.get(\"SENTRY_ENVIRONMENT\")\n\n if rv[\"server_name\"] is None and hasattr(socket, \"gethostname\"):\n rv[\"server_name\"] = socket.gethostname()\n\n return rv # type: ignore\n\n\nclass _Client(object):\n \"\"\"The client is internally responsible for capturing the events and\n forwarding them to sentry through the configured transport. It takes\n the client options as keyword arguments and optionally the DSN as first\n argument.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n self.options = get_options(*args, **kwargs) # type: Dict[str, Any]\n self._init_impl()\n\n def __getstate__(self):\n # type: () -> Any\n return {\"options\": self.options}\n\n def __setstate__(self, state):\n # type: (Any) -> None\n self.options = state[\"options\"]\n self._init_impl()\n\n def _init_impl(self):\n # type: () -> None\n old_debug = _client_init_debug.get(False)\n try:\n _client_init_debug.set(self.options[\"debug\"])\n self.transport = make_transport(self.options)\n\n request_bodies = (\"always\", \"never\", \"small\", \"medium\")\n if self.options[\"request_bodies\"] not in request_bodies:\n raise ValueError(\n \"Invalid value for request_bodies. Must be one of {}\".format(\n request_bodies\n )\n )\n\n self.integrations = setup_integrations(\n self.options[\"integrations\"],\n with_defaults=self.options[\"default_integrations\"],\n )\n finally:\n _client_init_debug.set(old_debug)\n\n @property\n def dsn(self):\n # type: () -> Optional[str]\n \"\"\"Returns the configured DSN as string.\"\"\"\n return self.options[\"dsn\"]\n\n def _prepare_event(\n self,\n event, # type: Event\n hint, # type: Optional[Hint]\n scope, # type: Optional[Scope]\n ):\n # type: (...) -> Optional[Event]\n if event.get(\"timestamp\") is None:\n event[\"timestamp\"] = datetime.utcnow()\n\n hint = dict(hint or ()) # type: Hint\n\n if scope is not None:\n event_ = scope.apply_to_event(event, hint)\n if event_ is None:\n return None\n event = event_\n\n if (\n self.options[\"attach_stacktrace\"]\n and \"exception\" not in event\n and \"stacktrace\" not in event\n and \"threads\" not in event\n ):\n with capture_internal_exceptions():\n event[\"threads\"] = {\n \"values\": [\n {\n \"stacktrace\": current_stacktrace(\n self.options[\"with_locals\"]\n ),\n \"crashed\": False,\n \"current\": True,\n }\n ]\n }\n\n for key in \"release\", \"environment\", \"server_name\", \"dist\":\n if event.get(key) is None and self.options[key] is not None: # type: ignore\n event[key] = text_type(self.options[key]).strip() # type: ignore\n if event.get(\"sdk\") is None:\n sdk_info = dict(SDK_INFO)\n sdk_info[\"integrations\"] = sorted(self.integrations.keys())\n event[\"sdk\"] = sdk_info\n\n if event.get(\"platform\") is None:\n event[\"platform\"] = \"python\"\n\n event = handle_in_app(\n event, self.options[\"in_app_exclude\"], self.options[\"in_app_include\"]\n )\n\n # Postprocess the event here so that annotated types do\n # generally not surface in before_send\n if event is not None:\n event = Serializer().serialize_event(event)\n\n before_send = self.options[\"before_send\"]\n if before_send is not None:\n new_event = None\n with capture_internal_exceptions():\n new_event = before_send(event, hint or {})\n if new_event is None:\n logger.info(\"before send dropped event (%s)\", event)\n event = new_event # type: ignore\n\n return event\n\n def _is_ignored_error(self, event, hint):\n # type: (Event, Hint) -> bool\n exc_info = hint.get(\"exc_info\")\n if exc_info is None:\n return False\n\n type_name = 
get_type_name(exc_info[0])\n full_name = \"%s.%s\" % (exc_info[0].__module__, type_name)\n\n for errcls in self.options[\"ignore_errors\"]:\n # String types are matched against the type name in the\n # exception only\n if isinstance(errcls, string_types):\n if errcls == full_name or errcls == type_name:\n return True\n else:\n if issubclass(exc_info[0], errcls): # type: ignore\n return True\n\n return False\n\n def _should_capture(\n self,\n event, # type: Event\n hint, # type: Hint\n scope=None, # type: Optional[Scope]\n ):\n # type: (...) -> bool\n if scope is not None and not scope._should_capture:\n return False\n\n if (\n self.options[\"sample_rate\"] < 1.0\n and random.random() >= self.options[\"sample_rate\"]\n ):\n return False\n\n if self._is_ignored_error(event, hint):\n return False\n\n return True\n\n def capture_event(\n self,\n event, # type: Event\n hint=None, # type: Optional[Hint]\n scope=None, # type: Optional[Scope]\n ):\n # type: (...) -> Optional[str]\n \"\"\"Captures an event.\n\n :param event: A ready-made event that can be directly sent to Sentry.\n\n :param hint: Contains metadata about the event that can be read from `before_send`, such as the original exception object or a HTTP request object.\n\n :returns: An event ID. May be `None` if there is no DSN set or of if the SDK decided to discard the event for other reasons. In such situations setting `debug=True` on `init()` may help.\n \"\"\"\n if self.transport is None:\n return None\n if hint is None:\n hint = {}\n rv = event.get(\"event_id\")\n if rv is None:\n event[\"event_id\"] = rv = uuid.uuid4().hex\n if not self._should_capture(event, hint, scope):\n return None\n event = self._prepare_event(event, hint, scope)\n if event is None:\n return None\n self.transport.capture_event(event)\n return rv\n\n def close(\n self,\n timeout=None, # type: Optional[float]\n callback=None, # type: Optional[Callable[[int, float], None]]\n ):\n # type: (...) -> None\n \"\"\"\n Close the client and shut down the transport. Arguments have the same\n semantics as :py:meth:`Client.flush`.\n \"\"\"\n if self.transport is not None:\n self.flush(timeout=timeout, callback=callback)\n self.transport.kill()\n self.transport = None\n\n def flush(\n self,\n timeout=None, # type: Optional[float]\n callback=None, # type: Optional[Callable[[int, float], None]]\n ):\n # type: (...) -> None\n \"\"\"\n Wait for the current events to be sent.\n\n :param timeout: Wait for at most `timeout` seconds. If no `timeout` is provided, the `shutdown_timeout` option value is used.\n\n :param callback: Is invoked with the number of pending events and the configured timeout.\n \"\"\"\n if self.transport is not None:\n if timeout is None:\n timeout = self.options[\"shutdown_timeout\"]\n self.transport.flush(timeout=timeout, callback=callback)\n\n def __enter__(self):\n # type: () -> _Client\n return self\n\n def __exit__(self, exc_type, exc_value, tb):\n # type: (Any, Any, Any) -> None\n self.close()\n\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n # Make mypy, PyCharm and other static analyzers think `get_options` is a\n # type to have nicer autocompletion for params.\n #\n # Use `ClientConstructor` to define the argument types of `init` and\n # `Dict[str, Any]` to tell static analyzers about the return type.\n\n class get_options(ClientConstructor, Dict[str, Any]):\n pass\n\n class Client(ClientConstructor, _Client):\n pass\n\n\nelse:\n # Alias `get_options` for actual usage. 
Go through the lambda indirection\n # to throw PyCharm off of the weakly typed signature (it would otherwise\n # discover both the weakly typed signature of `_init` and our faked `init`\n # type).\n\n get_options = (lambda: _get_options)()\n Client = (lambda: _Client)()\n", "path": "sentry_sdk/client.py"}], "after_files": [{"content": "import os\nimport uuid\nimport random\nfrom datetime import datetime\nimport socket\n\nfrom sentry_sdk._compat import string_types, text_type, iteritems\nfrom sentry_sdk.utils import (\n handle_in_app,\n get_type_name,\n capture_internal_exceptions,\n current_stacktrace,\n logger,\n)\nfrom sentry_sdk.serializer import Serializer\nfrom sentry_sdk.transport import make_transport\nfrom sentry_sdk.consts import DEFAULT_OPTIONS, SDK_INFO, ClientConstructor\nfrom sentry_sdk.integrations import setup_integrations\nfrom sentry_sdk.utils import ContextVar\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Callable\n from typing import Dict\n from typing import Optional\n\n from sentry_sdk.scope import Scope\n from sentry_sdk._types import Event, Hint\n\n\n_client_init_debug = ContextVar(\"client_init_debug\")\n_client_in_capture_event = ContextVar(\"client_in_capture_event\")\n\n\ndef _get_options(*args, **kwargs):\n # type: (*Optional[str], **Any) -> Dict[str, Any]\n if args and (isinstance(args[0], (text_type, bytes, str)) or args[0] is None):\n dsn = args[0] # type: Optional[str]\n args = args[1:]\n else:\n dsn = None\n\n rv = dict(DEFAULT_OPTIONS)\n options = dict(*args, **kwargs) # type: ignore\n if dsn is not None and options.get(\"dsn\") is None:\n options[\"dsn\"] = dsn # type: ignore\n\n for key, value in iteritems(options):\n if key not in rv:\n raise TypeError(\"Unknown option %r\" % (key,))\n rv[key] = value\n\n if rv[\"dsn\"] is None:\n rv[\"dsn\"] = os.environ.get(\"SENTRY_DSN\")\n\n if rv[\"release\"] is None:\n rv[\"release\"] = os.environ.get(\"SENTRY_RELEASE\")\n\n if rv[\"environment\"] is None:\n rv[\"environment\"] = os.environ.get(\"SENTRY_ENVIRONMENT\")\n\n if rv[\"server_name\"] is None and hasattr(socket, \"gethostname\"):\n rv[\"server_name\"] = socket.gethostname()\n\n return rv # type: ignore\n\n\nclass _Client(object):\n \"\"\"The client is internally responsible for capturing the events and\n forwarding them to sentry through the configured transport. It takes\n the client options as keyword arguments and optionally the DSN as first\n argument.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n self.options = get_options(*args, **kwargs) # type: Dict[str, Any]\n self._init_impl()\n\n def __getstate__(self):\n # type: () -> Any\n return {\"options\": self.options}\n\n def __setstate__(self, state):\n # type: (Any) -> None\n self.options = state[\"options\"]\n self._init_impl()\n\n def _init_impl(self):\n # type: () -> None\n old_debug = _client_init_debug.get(False)\n try:\n _client_init_debug.set(self.options[\"debug\"])\n self.transport = make_transport(self.options)\n\n request_bodies = (\"always\", \"never\", \"small\", \"medium\")\n if self.options[\"request_bodies\"] not in request_bodies:\n raise ValueError(\n \"Invalid value for request_bodies. 
Must be one of {}\".format(\n request_bodies\n )\n )\n\n self.integrations = setup_integrations(\n self.options[\"integrations\"],\n with_defaults=self.options[\"default_integrations\"],\n )\n finally:\n _client_init_debug.set(old_debug)\n\n @property\n def dsn(self):\n # type: () -> Optional[str]\n \"\"\"Returns the configured DSN as string.\"\"\"\n return self.options[\"dsn\"]\n\n def _prepare_event(\n self,\n event, # type: Event\n hint, # type: Optional[Hint]\n scope, # type: Optional[Scope]\n ):\n # type: (...) -> Optional[Event]\n if event.get(\"timestamp\") is None:\n event[\"timestamp\"] = datetime.utcnow()\n\n hint = dict(hint or ()) # type: Hint\n\n if scope is not None:\n event_ = scope.apply_to_event(event, hint)\n if event_ is None:\n return None\n event = event_\n\n if (\n self.options[\"attach_stacktrace\"]\n and \"exception\" not in event\n and \"stacktrace\" not in event\n and \"threads\" not in event\n ):\n with capture_internal_exceptions():\n event[\"threads\"] = {\n \"values\": [\n {\n \"stacktrace\": current_stacktrace(\n self.options[\"with_locals\"]\n ),\n \"crashed\": False,\n \"current\": True,\n }\n ]\n }\n\n for key in \"release\", \"environment\", \"server_name\", \"dist\":\n if event.get(key) is None and self.options[key] is not None: # type: ignore\n event[key] = text_type(self.options[key]).strip() # type: ignore\n if event.get(\"sdk\") is None:\n sdk_info = dict(SDK_INFO)\n sdk_info[\"integrations\"] = sorted(self.integrations.keys())\n event[\"sdk\"] = sdk_info\n\n if event.get(\"platform\") is None:\n event[\"platform\"] = \"python\"\n\n event = handle_in_app(\n event, self.options[\"in_app_exclude\"], self.options[\"in_app_include\"]\n )\n\n # Postprocess the event here so that annotated types do\n # generally not surface in before_send\n if event is not None:\n event = Serializer().serialize_event(event)\n\n before_send = self.options[\"before_send\"]\n if before_send is not None:\n new_event = None\n with capture_internal_exceptions():\n new_event = before_send(event, hint or {})\n if new_event is None:\n logger.info(\"before send dropped event (%s)\", event)\n event = new_event # type: ignore\n\n return event\n\n def _is_ignored_error(self, event, hint):\n # type: (Event, Hint) -> bool\n exc_info = hint.get(\"exc_info\")\n if exc_info is None:\n return False\n\n type_name = get_type_name(exc_info[0])\n full_name = \"%s.%s\" % (exc_info[0].__module__, type_name)\n\n for errcls in self.options[\"ignore_errors\"]:\n # String types are matched against the type name in the\n # exception only\n if isinstance(errcls, string_types):\n if errcls == full_name or errcls == type_name:\n return True\n else:\n if issubclass(exc_info[0], errcls): # type: ignore\n return True\n\n return False\n\n def _should_capture(\n self,\n event, # type: Event\n hint, # type: Hint\n scope=None, # type: Optional[Scope]\n ):\n # type: (...) -> bool\n if scope is not None and not scope._should_capture:\n return False\n\n if (\n self.options[\"sample_rate\"] < 1.0\n and random.random() >= self.options[\"sample_rate\"]\n ):\n return False\n\n if self._is_ignored_error(event, hint):\n return False\n\n return True\n\n def capture_event(\n self,\n event, # type: Event\n hint=None, # type: Optional[Hint]\n scope=None, # type: Optional[Scope]\n ):\n # type: (...) 
-> Optional[str]\n \"\"\"Captures an event.\n\n :param event: A ready-made event that can be directly sent to Sentry.\n\n :param hint: Contains metadata about the event that can be read from `before_send`, such as the original exception object or a HTTP request object.\n\n :returns: An event ID. May be `None` if there is no DSN set or of if the SDK decided to discard the event for other reasons. In such situations setting `debug=True` on `init()` may help.\n \"\"\"\n is_recursive = _client_in_capture_event.get(False)\n if is_recursive:\n return None\n\n _client_in_capture_event.set(True)\n\n try:\n if self.transport is None:\n return None\n if hint is None:\n hint = {}\n event_id = event.get(\"event_id\")\n if event_id is None:\n event[\"event_id\"] = event_id = uuid.uuid4().hex\n if not self._should_capture(event, hint, scope):\n return None\n event_opt = self._prepare_event(event, hint, scope)\n if event_opt is None:\n return None\n self.transport.capture_event(event_opt)\n return event_id\n finally:\n _client_in_capture_event.set(False)\n\n def close(\n self,\n timeout=None, # type: Optional[float]\n callback=None, # type: Optional[Callable[[int, float], None]]\n ):\n # type: (...) -> None\n \"\"\"\n Close the client and shut down the transport. Arguments have the same\n semantics as :py:meth:`Client.flush`.\n \"\"\"\n if self.transport is not None:\n self.flush(timeout=timeout, callback=callback)\n self.transport.kill()\n self.transport = None\n\n def flush(\n self,\n timeout=None, # type: Optional[float]\n callback=None, # type: Optional[Callable[[int, float], None]]\n ):\n # type: (...) -> None\n \"\"\"\n Wait for the current events to be sent.\n\n :param timeout: Wait for at most `timeout` seconds. If no `timeout` is provided, the `shutdown_timeout` option value is used.\n\n :param callback: Is invoked with the number of pending events and the configured timeout.\n \"\"\"\n if self.transport is not None:\n if timeout is None:\n timeout = self.options[\"shutdown_timeout\"]\n self.transport.flush(timeout=timeout, callback=callback)\n\n def __enter__(self):\n # type: () -> _Client\n return self\n\n def __exit__(self, exc_type, exc_value, tb):\n # type: (Any, Any, Any) -> None\n self.close()\n\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n # Make mypy, PyCharm and other static analyzers think `get_options` is a\n # type to have nicer autocompletion for params.\n #\n # Use `ClientConstructor` to define the argument types of `init` and\n # `Dict[str, Any]` to tell static analyzers about the return type.\n\n class get_options(ClientConstructor, Dict[str, Any]):\n pass\n\n class Client(ClientConstructor, _Client):\n pass\n\n\nelse:\n # Alias `get_options` for actual usage. Go through the lambda indirection\n # to throw PyCharm off of the weakly typed signature (it would otherwise\n # discover both the weakly typed signature of `_init` and our faked `init`\n # type).\n\n get_options = (lambda: _get_options)()\n Client = (lambda: _Client)()\n", "path": "sentry_sdk/client.py"}]}
| 4,034 | 445 |
gh_patches_debug_35058 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-29303 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Egghead still broken
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.03.03. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [X] I'm reporting a broken site support
- [X] I've verified that I'm running youtube-dl version **2021.03.03**
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [X] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2021.03.03
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
$ youtube-dl -v "https://egghead.io/courses/write-your-first-program-with-the-rust-language"
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://egghead.io/courses/write-your-first-program-with-the-rust-language']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] youtube-dl version 2021.03.03
[debug] Python version 3.8.5 (CPython) - Linux-5.4.0-66-generic-x86_64-with-glibc2.29
[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4
[debug] Proxy map: {}
[egghead:course] write-your-first-program-with-the-rust-language: Downloading course lessons JSON
[egghead:course] write-your-first-program-with-the-rust-language: Downloading course JSON
[download] Downloading playlist: Write Your First Program with the Rust Language
[egghead:course] playlist Write Your First Program with the Rust Language: Collected 15 video ids (downloading 15 of them)
[download] Downloading video 1 of 15
ERROR: no suitable InfoExtractor for URL https://app.egghead.io/lessons/rust-install-rust
File "/home/user/.local/bin/youtube-dl", line 8, in <module>
sys.exit(main())
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/__init__.py", line 475, in main
_real_main(argv)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/__init__.py", line 465, in _real_main
retcode = ydl.download(all_urls)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 2055, in download
res = self.extract_info(
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 799, in extract_info
return self.__extract_info(url, ie, download, extra_info, process)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 806, in wrapper
return func(self, *args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 838, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 924, in process_ie_result
return self.__process_playlist(ie_result, download)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 1058, in __process_playlist
entry_result = self.__process_iterable_entry(entry, download, extra)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 806, in wrapper
return func(self, *args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 1067, in __process_iterable_entry
return self.process_ie_result(
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 876, in process_ie_result
return self.extract_info(ie_result['url'],
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 801, in extract_info
self.report_error('no suitable InfoExtractor for URL %s' % url)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 628, in report_error
self.trouble(error_message, tb)
File "/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 590, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
```
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
https://github.com/ytdl-org/youtube-dl/pull/28038 fixed the URL, but I assume somewhere it's still not changed to the new URL.
--- END ISSUE ---
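A small sketch to make the failure above concrete (not part of the original report): the lesson extractor's current `_VALID_URL`, copied from the file dump below, only accepts `egghead.io`, so the `app.egghead.io` lesson URLs returned by the course API fall through to "no suitable InfoExtractor":

```python
import re

# Pattern as it currently appears in youtube_dl/extractor/egghead.py (EggheadLessonIE)
LESSON_VALID_URL = r'https://egghead\.io/(?:api/v1/)?lessons/(?P<id>[^/?#&]+)'

old_url = 'https://egghead.io/lessons/rust-install-rust'
new_url = 'https://app.egghead.io/lessons/rust-install-rust'

print(bool(re.match(LESSON_VALID_URL, old_url)))  # True
print(bool(re.match(LESSON_VALID_URL, new_url)))  # False -> "no suitable InfoExtractor"
```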
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/egghead.py`
Content:
```
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 from .common import InfoExtractor
5 from ..compat import compat_str
6 from ..utils import (
7 determine_ext,
8 int_or_none,
9 try_get,
10 unified_timestamp,
11 url_or_none,
12 )
13
14
15 class EggheadBaseIE(InfoExtractor):
16 def _call_api(self, path, video_id, resource, fatal=True):
17 return self._download_json(
18 'https://app.egghead.io/api/v1/' + path,
19 video_id, 'Downloading %s JSON' % resource, fatal=fatal)
20
21
22 class EggheadCourseIE(EggheadBaseIE):
23 IE_DESC = 'egghead.io course'
24 IE_NAME = 'egghead:course'
25 _VALID_URL = r'https://egghead\.io/courses/(?P<id>[^/?#&]+)'
26 _TEST = {
27 'url': 'https://egghead.io/courses/professor-frisby-introduces-composable-functional-javascript',
28 'playlist_count': 29,
29 'info_dict': {
30 'id': '72',
31 'title': 'Professor Frisby Introduces Composable Functional JavaScript',
32 'description': 're:(?s)^This course teaches the ubiquitous.*You\'ll start composing functionality before you know it.$',
33 },
34 }
35
36 def _real_extract(self, url):
37 playlist_id = self._match_id(url)
38 series_path = 'series/' + playlist_id
39 lessons = self._call_api(
40 series_path + '/lessons', playlist_id, 'course lessons')
41
42 entries = []
43 for lesson in lessons:
44 lesson_url = url_or_none(lesson.get('http_url'))
45 if not lesson_url:
46 continue
47 lesson_id = lesson.get('id')
48 if lesson_id:
49 lesson_id = compat_str(lesson_id)
50 entries.append(self.url_result(
51 lesson_url, ie=EggheadLessonIE.ie_key(), video_id=lesson_id))
52
53 course = self._call_api(
54 series_path, playlist_id, 'course', False) or {}
55
56 playlist_id = course.get('id')
57 if playlist_id:
58 playlist_id = compat_str(playlist_id)
59
60 return self.playlist_result(
61 entries, playlist_id, course.get('title'),
62 course.get('description'))
63
64
65 class EggheadLessonIE(EggheadBaseIE):
66 IE_DESC = 'egghead.io lesson'
67 IE_NAME = 'egghead:lesson'
68 _VALID_URL = r'https://egghead\.io/(?:api/v1/)?lessons/(?P<id>[^/?#&]+)'
69 _TESTS = [{
70 'url': 'https://egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',
71 'info_dict': {
72 'id': '1196',
73 'display_id': 'javascript-linear-data-flow-with-container-style-types-box',
74 'ext': 'mp4',
75 'title': 'Create linear data flow with container style types (Box)',
76 'description': 'md5:9aa2cdb6f9878ed4c39ec09e85a8150e',
77 'thumbnail': r're:^https?:.*\.jpg$',
78 'timestamp': 1481296768,
79 'upload_date': '20161209',
80 'duration': 304,
81 'view_count': 0,
82 'tags': 'count:2',
83 },
84 'params': {
85 'skip_download': True,
86 'format': 'bestvideo',
87 },
88 }, {
89 'url': 'https://egghead.io/api/v1/lessons/react-add-redux-to-a-react-application',
90 'only_matching': True,
91 }]
92
93 def _real_extract(self, url):
94 display_id = self._match_id(url)
95
96 lesson = self._call_api(
97 'lessons/' + display_id, display_id, 'lesson')
98
99 lesson_id = compat_str(lesson['id'])
100 title = lesson['title']
101
102 formats = []
103 for _, format_url in lesson['media_urls'].items():
104 format_url = url_or_none(format_url)
105 if not format_url:
106 continue
107 ext = determine_ext(format_url)
108 if ext == 'm3u8':
109 formats.extend(self._extract_m3u8_formats(
110 format_url, lesson_id, 'mp4', entry_protocol='m3u8',
111 m3u8_id='hls', fatal=False))
112 elif ext == 'mpd':
113 formats.extend(self._extract_mpd_formats(
114 format_url, lesson_id, mpd_id='dash', fatal=False))
115 else:
116 formats.append({
117 'url': format_url,
118 })
119 self._sort_formats(formats)
120
121 return {
122 'id': lesson_id,
123 'display_id': display_id,
124 'title': title,
125 'description': lesson.get('summary'),
126 'thumbnail': lesson.get('thumb_nail'),
127 'timestamp': unified_timestamp(lesson.get('published_at')),
128 'duration': int_or_none(lesson.get('duration')),
129 'view_count': int_or_none(lesson.get('plays_count')),
130 'tags': try_get(lesson, lambda x: x['tag_list'], list),
131 'series': try_get(
132 lesson, lambda x: x['series']['title'], compat_str),
133 'formats': formats,
134 }
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/youtube_dl/extractor/egghead.py b/youtube_dl/extractor/egghead.py
--- a/youtube_dl/extractor/egghead.py
+++ b/youtube_dl/extractor/egghead.py
@@ -22,16 +22,19 @@
class EggheadCourseIE(EggheadBaseIE):
IE_DESC = 'egghead.io course'
IE_NAME = 'egghead:course'
- _VALID_URL = r'https://egghead\.io/courses/(?P<id>[^/?#&]+)'
- _TEST = {
+ _VALID_URL = r'https://(?:app\.)?egghead\.io/(?:course|playlist)s/(?P<id>[^/?#&]+)'
+ _TESTS = [{
'url': 'https://egghead.io/courses/professor-frisby-introduces-composable-functional-javascript',
'playlist_count': 29,
'info_dict': {
- 'id': '72',
+ 'id': '432655',
'title': 'Professor Frisby Introduces Composable Functional JavaScript',
'description': 're:(?s)^This course teaches the ubiquitous.*You\'ll start composing functionality before you know it.$',
},
- }
+ }, {
+ 'url': 'https://app.egghead.io/playlists/professor-frisby-introduces-composable-functional-javascript',
+ 'only_matching': True,
+ }]
def _real_extract(self, url):
playlist_id = self._match_id(url)
@@ -65,7 +68,7 @@
class EggheadLessonIE(EggheadBaseIE):
IE_DESC = 'egghead.io lesson'
IE_NAME = 'egghead:lesson'
- _VALID_URL = r'https://egghead\.io/(?:api/v1/)?lessons/(?P<id>[^/?#&]+)'
+ _VALID_URL = r'https://(?:app\.)?egghead\.io/(?:api/v1/)?lessons/(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'https://egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',
'info_dict': {
@@ -88,6 +91,9 @@
}, {
'url': 'https://egghead.io/api/v1/lessons/react-add-redux-to-a-react-application',
'only_matching': True,
+ }, {
+ 'url': 'https://app.egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',
+ 'only_matching': True,
}]
def _real_extract(self, url):
|
{"golden_diff": "diff --git a/youtube_dl/extractor/egghead.py b/youtube_dl/extractor/egghead.py\n--- a/youtube_dl/extractor/egghead.py\n+++ b/youtube_dl/extractor/egghead.py\n@@ -22,16 +22,19 @@\n class EggheadCourseIE(EggheadBaseIE):\n IE_DESC = 'egghead.io course'\n IE_NAME = 'egghead:course'\n- _VALID_URL = r'https://egghead\\.io/courses/(?P<id>[^/?#&]+)'\n- _TEST = {\n+ _VALID_URL = r'https://(?:app\\.)?egghead\\.io/(?:course|playlist)s/(?P<id>[^/?#&]+)'\n+ _TESTS = [{\n 'url': 'https://egghead.io/courses/professor-frisby-introduces-composable-functional-javascript',\n 'playlist_count': 29,\n 'info_dict': {\n- 'id': '72',\n+ 'id': '432655',\n 'title': 'Professor Frisby Introduces Composable Functional JavaScript',\n 'description': 're:(?s)^This course teaches the ubiquitous.*You\\'ll start composing functionality before you know it.$',\n },\n- }\n+ }, {\n+ 'url': 'https://app.egghead.io/playlists/professor-frisby-introduces-composable-functional-javascript',\n+ 'only_matching': True,\n+ }]\n \n def _real_extract(self, url):\n playlist_id = self._match_id(url)\n@@ -65,7 +68,7 @@\n class EggheadLessonIE(EggheadBaseIE):\n IE_DESC = 'egghead.io lesson'\n IE_NAME = 'egghead:lesson'\n- _VALID_URL = r'https://egghead\\.io/(?:api/v1/)?lessons/(?P<id>[^/?#&]+)'\n+ _VALID_URL = r'https://(?:app\\.)?egghead\\.io/(?:api/v1/)?lessons/(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'https://egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',\n 'info_dict': {\n@@ -88,6 +91,9 @@\n }, {\n 'url': 'https://egghead.io/api/v1/lessons/react-add-redux-to-a-react-application',\n 'only_matching': True,\n+ }, {\n+ 'url': 'https://app.egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',\n+ 'only_matching': True,\n }]\n \n def _real_extract(self, url):\n", "issue": "Egghead still broken\n<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.03.03. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.\r\n- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.\r\n- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.\r\n- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. 
DO NOT post duplicates.\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [X] I'm reporting a broken site support\r\n- [X] I've verified that I'm running youtube-dl version **2021.03.03**\r\n- [X] I've checked that all provided URLs are alive and playable in a browser\r\n- [X] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [X] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n\r\n<!--\r\nProvide the complete verbose output of youtube-dl that clearly demonstrates the problem.\r\nAdd the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:\r\n [debug] System config: []\r\n [debug] User config: []\r\n [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']\r\n [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251\r\n [debug] youtube-dl version 2021.03.03\r\n [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2\r\n [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4\r\n [debug] Proxy map: {}\r\n <more lines>\r\n-->\r\n\r\n```\r\n$ youtube-dl -v \"https://egghead.io/courses/write-your-first-program-with-the-rust-language\"\r\n\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['-v', 'https://egghead.io/courses/write-your-first-program-with-the-rust-language']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8\r\n[debug] youtube-dl version 2021.03.03\r\n[debug] Python version 3.8.5 (CPython) - Linux-5.4.0-66-generic-x86_64-with-glibc2.29\r\n[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4\r\n[debug] Proxy map: {}\r\n[egghead:course] write-your-first-program-with-the-rust-language: Downloading course lessons JSON\r\n[egghead:course] write-your-first-program-with-the-rust-language: Downloading course JSON\r\n[download] Downloading playlist: Write Your First Program with the Rust Language\r\n[egghead:course] playlist Write Your First Program with the Rust Language: Collected 15 video ids (downloading 15 of them)\r\n[download] Downloading video 1 of 15\r\nERROR: no suitable InfoExtractor for URL https://app.egghead.io/lessons/rust-install-rust\r\n File \"/home/user/.local/bin/youtube-dl\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/__init__.py\", line 475, in main\r\n _real_main(argv)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/__init__.py\", line 465, in _real_main\r\n retcode = ydl.download(all_urls)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 2055, in download\r\n res = self.extract_info(\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 799, in extract_info\r\n return self.__extract_info(url, ie, download, extra_info, process)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 806, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 838, in __extract_info\r\n return self.process_ie_result(ie_result, download, extra_info)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 924, in process_ie_result\r\n return self.__process_playlist(ie_result, download)\r\n File 
\"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 1058, in __process_playlist\r\n entry_result = self.__process_iterable_entry(entry, download, extra)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 806, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 1067, in __process_iterable_entry\r\n return self.process_ie_result(\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 876, in process_ie_result\r\n return self.extract_info(ie_result['url'],\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 801, in extract_info\r\n self.report_error('no suitable InfoExtractor for URL %s' % url)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 628, in report_error\r\n self.trouble(error_message, tb)\r\n File \"/home/user/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py\", line 590, in trouble\r\n tb_data = traceback.format_list(traceback.extract_stack())\r\n```\r\n\r\n\r\n## Description\r\n\r\n<!--\r\nProvide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.\r\nIf work on your issue requires account credentials please provide them or explain how one can obtain them.\r\n-->\r\n\r\nhttps://github.com/ytdl-org/youtube-dl/pull/28038 fixed the URL, but I assume somewhere it's still not changed to the new URL.\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..compat import compat_str\nfrom ..utils import (\n determine_ext,\n int_or_none,\n try_get,\n unified_timestamp,\n url_or_none,\n)\n\n\nclass EggheadBaseIE(InfoExtractor):\n def _call_api(self, path, video_id, resource, fatal=True):\n return self._download_json(\n 'https://app.egghead.io/api/v1/' + path,\n video_id, 'Downloading %s JSON' % resource, fatal=fatal)\n\n\nclass EggheadCourseIE(EggheadBaseIE):\n IE_DESC = 'egghead.io course'\n IE_NAME = 'egghead:course'\n _VALID_URL = r'https://egghead\\.io/courses/(?P<id>[^/?#&]+)'\n _TEST = {\n 'url': 'https://egghead.io/courses/professor-frisby-introduces-composable-functional-javascript',\n 'playlist_count': 29,\n 'info_dict': {\n 'id': '72',\n 'title': 'Professor Frisby Introduces Composable Functional JavaScript',\n 'description': 're:(?s)^This course teaches the ubiquitous.*You\\'ll start composing functionality before you know it.$',\n },\n }\n\n def _real_extract(self, url):\n playlist_id = self._match_id(url)\n series_path = 'series/' + playlist_id\n lessons = self._call_api(\n series_path + '/lessons', playlist_id, 'course lessons')\n\n entries = []\n for lesson in lessons:\n lesson_url = url_or_none(lesson.get('http_url'))\n if not lesson_url:\n continue\n lesson_id = lesson.get('id')\n if lesson_id:\n lesson_id = compat_str(lesson_id)\n entries.append(self.url_result(\n lesson_url, ie=EggheadLessonIE.ie_key(), video_id=lesson_id))\n\n course = self._call_api(\n series_path, playlist_id, 'course', False) or {}\n\n playlist_id = course.get('id')\n if playlist_id:\n playlist_id = compat_str(playlist_id)\n\n return self.playlist_result(\n entries, playlist_id, course.get('title'),\n course.get('description'))\n\n\nclass EggheadLessonIE(EggheadBaseIE):\n IE_DESC = 'egghead.io lesson'\n IE_NAME = 'egghead:lesson'\n _VALID_URL = 
r'https://egghead\\.io/(?:api/v1/)?lessons/(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'https://egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',\n 'info_dict': {\n 'id': '1196',\n 'display_id': 'javascript-linear-data-flow-with-container-style-types-box',\n 'ext': 'mp4',\n 'title': 'Create linear data flow with container style types (Box)',\n 'description': 'md5:9aa2cdb6f9878ed4c39ec09e85a8150e',\n 'thumbnail': r're:^https?:.*\\.jpg$',\n 'timestamp': 1481296768,\n 'upload_date': '20161209',\n 'duration': 304,\n 'view_count': 0,\n 'tags': 'count:2',\n },\n 'params': {\n 'skip_download': True,\n 'format': 'bestvideo',\n },\n }, {\n 'url': 'https://egghead.io/api/v1/lessons/react-add-redux-to-a-react-application',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n display_id = self._match_id(url)\n\n lesson = self._call_api(\n 'lessons/' + display_id, display_id, 'lesson')\n\n lesson_id = compat_str(lesson['id'])\n title = lesson['title']\n\n formats = []\n for _, format_url in lesson['media_urls'].items():\n format_url = url_or_none(format_url)\n if not format_url:\n continue\n ext = determine_ext(format_url)\n if ext == 'm3u8':\n formats.extend(self._extract_m3u8_formats(\n format_url, lesson_id, 'mp4', entry_protocol='m3u8',\n m3u8_id='hls', fatal=False))\n elif ext == 'mpd':\n formats.extend(self._extract_mpd_formats(\n format_url, lesson_id, mpd_id='dash', fatal=False))\n else:\n formats.append({\n 'url': format_url,\n })\n self._sort_formats(formats)\n\n return {\n 'id': lesson_id,\n 'display_id': display_id,\n 'title': title,\n 'description': lesson.get('summary'),\n 'thumbnail': lesson.get('thumb_nail'),\n 'timestamp': unified_timestamp(lesson.get('published_at')),\n 'duration': int_or_none(lesson.get('duration')),\n 'view_count': int_or_none(lesson.get('plays_count')),\n 'tags': try_get(lesson, lambda x: x['tag_list'], list),\n 'series': try_get(\n lesson, lambda x: x['series']['title'], compat_str),\n 'formats': formats,\n }\n", "path": "youtube_dl/extractor/egghead.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..compat import compat_str\nfrom ..utils import (\n determine_ext,\n int_or_none,\n try_get,\n unified_timestamp,\n url_or_none,\n)\n\n\nclass EggheadBaseIE(InfoExtractor):\n def _call_api(self, path, video_id, resource, fatal=True):\n return self._download_json(\n 'https://app.egghead.io/api/v1/' + path,\n video_id, 'Downloading %s JSON' % resource, fatal=fatal)\n\n\nclass EggheadCourseIE(EggheadBaseIE):\n IE_DESC = 'egghead.io course'\n IE_NAME = 'egghead:course'\n _VALID_URL = r'https://(?:app\\.)?egghead\\.io/(?:course|playlist)s/(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'https://egghead.io/courses/professor-frisby-introduces-composable-functional-javascript',\n 'playlist_count': 29,\n 'info_dict': {\n 'id': '432655',\n 'title': 'Professor Frisby Introduces Composable Functional JavaScript',\n 'description': 're:(?s)^This course teaches the ubiquitous.*You\\'ll start composing functionality before you know it.$',\n },\n }, {\n 'url': 'https://app.egghead.io/playlists/professor-frisby-introduces-composable-functional-javascript',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n playlist_id = self._match_id(url)\n series_path = 'series/' + playlist_id\n lessons = self._call_api(\n series_path + '/lessons', playlist_id, 'course lessons')\n\n entries = []\n for lesson in lessons:\n lesson_url = 
url_or_none(lesson.get('http_url'))\n if not lesson_url:\n continue\n lesson_id = lesson.get('id')\n if lesson_id:\n lesson_id = compat_str(lesson_id)\n entries.append(self.url_result(\n lesson_url, ie=EggheadLessonIE.ie_key(), video_id=lesson_id))\n\n course = self._call_api(\n series_path, playlist_id, 'course', False) or {}\n\n playlist_id = course.get('id')\n if playlist_id:\n playlist_id = compat_str(playlist_id)\n\n return self.playlist_result(\n entries, playlist_id, course.get('title'),\n course.get('description'))\n\n\nclass EggheadLessonIE(EggheadBaseIE):\n IE_DESC = 'egghead.io lesson'\n IE_NAME = 'egghead:lesson'\n _VALID_URL = r'https://(?:app\\.)?egghead\\.io/(?:api/v1/)?lessons/(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'https://egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',\n 'info_dict': {\n 'id': '1196',\n 'display_id': 'javascript-linear-data-flow-with-container-style-types-box',\n 'ext': 'mp4',\n 'title': 'Create linear data flow with container style types (Box)',\n 'description': 'md5:9aa2cdb6f9878ed4c39ec09e85a8150e',\n 'thumbnail': r're:^https?:.*\\.jpg$',\n 'timestamp': 1481296768,\n 'upload_date': '20161209',\n 'duration': 304,\n 'view_count': 0,\n 'tags': 'count:2',\n },\n 'params': {\n 'skip_download': True,\n 'format': 'bestvideo',\n },\n }, {\n 'url': 'https://egghead.io/api/v1/lessons/react-add-redux-to-a-react-application',\n 'only_matching': True,\n }, {\n 'url': 'https://app.egghead.io/lessons/javascript-linear-data-flow-with-container-style-types-box',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n display_id = self._match_id(url)\n\n lesson = self._call_api(\n 'lessons/' + display_id, display_id, 'lesson')\n\n lesson_id = compat_str(lesson['id'])\n title = lesson['title']\n\n formats = []\n for _, format_url in lesson['media_urls'].items():\n format_url = url_or_none(format_url)\n if not format_url:\n continue\n ext = determine_ext(format_url)\n if ext == 'm3u8':\n formats.extend(self._extract_m3u8_formats(\n format_url, lesson_id, 'mp4', entry_protocol='m3u8',\n m3u8_id='hls', fatal=False))\n elif ext == 'mpd':\n formats.extend(self._extract_mpd_formats(\n format_url, lesson_id, mpd_id='dash', fatal=False))\n else:\n formats.append({\n 'url': format_url,\n })\n self._sort_formats(formats)\n\n return {\n 'id': lesson_id,\n 'display_id': display_id,\n 'title': title,\n 'description': lesson.get('summary'),\n 'thumbnail': lesson.get('thumb_nail'),\n 'timestamp': unified_timestamp(lesson.get('published_at')),\n 'duration': int_or_none(lesson.get('duration')),\n 'view_count': int_or_none(lesson.get('plays_count')),\n 'tags': try_get(lesson, lambda x: x['tag_list'], list),\n 'series': try_get(\n lesson, lambda x: x['series']['title'], compat_str),\n 'formats': formats,\n }\n", "path": "youtube_dl/extractor/egghead.py"}]}
| 3,353 | 590 |
gh_patches_debug_41074
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-2081
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
search --asm: look for assembly instruction[s] bytes
The `search` command should have a way to search for a given assembly instruction[s] bytes.
We can assemble the given instructions through pwntools (its `pwnlib` import) similarly as we do in the `asm` command implementation: https://github.com/pwndbg/pwndbg/blob/c0d785565b499ba32d674c9e84a27e4967aee315/pwndbg/commands/asm.py#L69
This can probably be implemented as a `-a --asm` or `-t|--type=asm` option, where, if provided, we would search for the assembled bytes.
E.g. in x86-64 programs `search --asm "xor rax, rax"` should assemble it through pwnlib to the following bytes:
```py
In [4]: asm('xor rax, rax', arch='amd64')
Out[4]: b'H1\xc0'
```
And then search the memory for those bytes (`b'H1\xc0'`). Ofc it should work with all the other options in `search` command like `--writable` or the ability to pass in a mapping name as the last argument of the search.
--- END ISSUE ---
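A minimal sketch of the approach described in the issue (illustrative only — `find_instruction` and the byte blob are made up here and are not pwndbg's actual API; it needs pwntools plus binutils for the target architecture):

```python
from pwnlib.asm import asm

def find_instruction(memory: bytes, instruction: str, arch: str = 'amd64'):
    """Assemble `instruction` for `arch` and yield every offset where its bytes occur."""
    needle = asm(instruction, arch=arch)   # e.g. 'xor rax, rax' -> b'H1\xc0'
    offset = memory.find(needle)
    while offset != -1:
        yield offset
        offset = memory.find(needle, offset + 1)

# Toy "memory" blob containing the encoding of `xor rax, rax` twice.
blob = b'\x90\x90H1\xc0\xcc\xccH1\xc0'
print(list(find_instruction(blob, 'xor rax, rax')))   # [2, 7]
```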
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/search.py`
Content:
```
1 from __future__ import annotations
2
3 import argparse
4 import binascii
5 import codecs
6 import os
7 import struct
8
9 import pwndbg.color.memory as M
10 import pwndbg.commands
11 import pwndbg.enhance
12 import pwndbg.gdblib.arch
13 import pwndbg.gdblib.config
14 import pwndbg.gdblib.vmmap
15 import pwndbg.search
16 from pwndbg.color import message
17 from pwndbg.commands import CommandCategory
18
19 saved: set[int] = set()
20
21
22 def print_search_hit(address) -> None:
23 """Prints out a single search hit.
24
25 Arguments:
26 address(int): Address to print
27 """
28 if not address:
29 return
30
31 vmmap = pwndbg.gdblib.vmmap.find(address)
32 if vmmap:
33 region = os.path.basename(vmmap.objfile)
34 else:
35 region = "[mapped]"
36
37 region = region.ljust(15)
38
39 region = M.get(address, region)
40 addr = M.get(address)
41 display = pwndbg.enhance.enhance(address)
42 print(region, addr, display)
43
44
45 auto_save = pwndbg.gdblib.config.add_param(
46 "auto-save-search", False, 'automatically pass --save to "search" command'
47 )
48 parser = argparse.ArgumentParser(
49 formatter_class=argparse.RawTextHelpFormatter,
50 description="""Search memory for byte sequences, strings, pointers, and integer values.
51
52 By default search results are cached. If you want to cache all results, but only print a subset, use --trunc-out. If you want to cache only a subset of results, and print the results immediately, use --limit. The latter is specially useful if you're searching a huge section of memory.
53
54 """,
55 )
56 parser.add_argument(
57 "-t",
58 "--type",
59 choices=["byte", "short", "word", "dword", "qword", "pointer", "string", "bytes"],
60 help="Size of search target",
61 default="bytes",
62 type=str,
63 )
64 parser.add_argument(
65 "-1",
66 "--byte",
67 dest="type",
68 action="store_const",
69 const="byte",
70 help="Search for a 1-byte integer",
71 )
72 parser.add_argument(
73 "-2",
74 "--word",
75 "--short",
76 dest="type",
77 action="store_const",
78 const="word",
79 help="Search for a 2-byte integer",
80 )
81 parser.add_argument(
82 "-4",
83 "--dword",
84 dest="type",
85 action="store_const",
86 const="dword",
87 help="Search for a 4-byte integer",
88 )
89 parser.add_argument(
90 "-8",
91 "--qword",
92 dest="type",
93 action="store_const",
94 const="qword",
95 help="Search for an 8-byte integer",
96 )
97 parser.add_argument(
98 "-p",
99 "--pointer",
100 dest="type",
101 action="store_const",
102 const="pointer",
103 help="Search for a pointer-width integer",
104 )
105 parser.add_argument(
106 "-x", "--hex", action="store_true", help="Target is a hex-encoded (for bytes/strings)"
107 )
108 parser.add_argument(
109 "-e", "--executable", action="store_true", help="Search executable segments only"
110 )
111 parser.add_argument("-w", "--writable", action="store_true", help="Search writable segments only")
112 parser.add_argument(
113 "-s",
114 "--step",
115 default=None,
116 type=str,
117 help="Step search address forward to next alignment after each hit (ex: 0x1000)",
118 )
119 parser.add_argument(
120 "-l",
121 "--limit",
122 default=None,
123 type=str,
124 help="Max results before quitting the search. Differs from --trunc-out in that it will not save all search results before quitting",
125 )
126 parser.add_argument(
127 "-a", "--aligned", default=None, type=str, help="Result must be aligned to this byte boundary"
128 )
129 parser.add_argument("value", type=str, help="Value to search for")
130 parser.add_argument(
131 "mapping_name", type=str, nargs="?", default=None, help="Mapping to search [e.g. libc]"
132 )
133 parser.add_argument(
134 "--save",
135 action="store_true",
136 default=None,
137 help="Save results for further searches with --next. Default comes from config %r"
138 % auto_save.name,
139 )
140 parser.add_argument(
141 "--no-save", action="store_false", default=None, dest="save", help="Invert --save"
142 )
143 parser.add_argument(
144 "-n",
145 "--next",
146 action="store_true",
147 help="Search only locations returned by previous search with --save",
148 )
149 parser.add_argument(
150 "--trunc-out",
151 action="store_true",
152 default=False,
153 help="Truncate the output to 20 results. Differs from --limit in that it will first save all search results",
154 )
155
156
157 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.MEMORY)
158 @pwndbg.commands.OnlyWhenRunning
159 def search(
160 type,
161 hex,
162 executable,
163 writable,
164 step,
165 limit,
166 aligned,
167 value,
168 mapping_name,
169 save,
170 next,
171 trunc_out,
172 ) -> None:
173 global saved
174 if next and not saved:
175 print(
176 "WARNING: cannot filter previous search results as they were empty. Performing new search saving results."
177 )
178 next = False
179 save = True
180
181 # Adjust pointer sizes to the local architecture
182 if type == "pointer":
183 type = {4: "dword", 8: "qword"}[pwndbg.gdblib.arch.ptrsize]
184
185 if save is None:
186 save = bool(pwndbg.gdblib.config.auto_save_search)
187
188 if hex:
189 try:
190 value = codecs.decode(value, "hex")
191 except binascii.Error as e:
192 print(f"invalid input for type hex: {e}")
193 return
194
195 if step:
196 step = pwndbg.commands.fix_int(step)
197
198 if aligned:
199 aligned = pwndbg.commands.fix_int(aligned)
200
201 if limit:
202 limit = pwndbg.commands.fix_int(limit)
203 # Convert to an integer if needed, and pack to bytes
204 if type not in ("string", "bytes"):
205 value = pwndbg.commands.fix_int(value)
206 value &= pwndbg.gdblib.arch.ptrmask
207 fmt = {"little": "<", "big": ">"}[pwndbg.gdblib.arch.endian] + {
208 "byte": "B",
209 "short": "H",
210 "word": "H",
211 "dword": "L",
212 "qword": "Q",
213 }[type]
214
215 try:
216 value = struct.pack(fmt, value)
217 except struct.error as e:
218 print(f"invalid input for type {type}: {e}")
219 return
220
221 # Null-terminate strings
222 elif type == "string":
223 value = value.encode()
224 value += b"\x00"
225
226 # Find the mappings that we're looking for
227 mappings = pwndbg.gdblib.vmmap.get()
228
229 if mapping_name:
230 mappings = [m for m in mappings if mapping_name in m.objfile]
231
232 if not mappings:
233 print(message.error("Could not find mapping %r" % mapping_name))
234 return
235
236 # If next is passed, only perform a manual search over previously saved addresses
237 print("Searching for value: " + repr(value))
238 if next:
239 val_len = len(value)
240 new_saved = set()
241
242 i = 0
243 for addr in saved:
244 try:
245 val = pwndbg.gdblib.memory.read(addr, val_len)
246 except Exception:
247 continue
248 if val == value:
249 new_saved.add(addr)
250 if not trunc_out or i < 20:
251 print_search_hit(addr)
252 i += 1
253
254 print("Search found %d items" % i)
255 saved = new_saved
256 return
257
258 # Prep the saved set if necessary
259 if save:
260 saved = set()
261
262 # Perform the search
263 i = 0
264 for address in pwndbg.search.search(
265 value,
266 mappings=mappings,
267 executable=executable,
268 writable=writable,
269 step=step,
270 aligned=aligned,
271 limit=limit,
272 ):
273 if save:
274 saved.add(address)
275
276 if not trunc_out or i < 20:
277 print_search_hit(address)
278 i += 1
279
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwndbg/commands/search.py b/pwndbg/commands/search.py
--- a/pwndbg/commands/search.py
+++ b/pwndbg/commands/search.py
@@ -6,6 +6,8 @@
import os
import struct
+import pwnlib
+
import pwndbg.color.memory as M
import pwndbg.commands
import pwndbg.enhance
@@ -56,7 +58,7 @@
parser.add_argument(
"-t",
"--type",
- choices=["byte", "short", "word", "dword", "qword", "pointer", "string", "bytes"],
+ choices=["byte", "short", "word", "dword", "qword", "pointer", "string", "bytes", "asm"],
help="Size of search target",
default="bytes",
type=str,
@@ -102,6 +104,19 @@
const="pointer",
help="Search for a pointer-width integer",
)
+parser.add_argument(
+ "--asm",
+ dest="type",
+ action="store_const",
+ const="asm",
+ help="Search for an assembly instruction",
+)
+parser.add_argument(
+ "--arch",
+ choices=pwnlib.context.context.architectures.keys(),
+ type=str,
+ help="Target architecture",
+)
parser.add_argument(
"-x", "--hex", action="store_true", help="Target is a hex-encoded (for bytes/strings)"
)
@@ -158,6 +173,7 @@
@pwndbg.commands.OnlyWhenRunning
def search(
type,
+ arch,
hex,
executable,
writable,
@@ -178,6 +194,9 @@
next = False
save = True
+ if not arch:
+ arch = pwnlib.context.context.arch
+
# Adjust pointer sizes to the local architecture
if type == "pointer":
type = {4: "dword", 8: "qword"}[pwndbg.gdblib.arch.ptrsize]
@@ -201,7 +220,7 @@
if limit:
limit = pwndbg.commands.fix_int(limit)
# Convert to an integer if needed, and pack to bytes
- if type not in ("string", "bytes"):
+ if type not in ("string", "bytes", "asm"):
value = pwndbg.commands.fix_int(value)
value &= pwndbg.gdblib.arch.ptrmask
fmt = {"little": "<", "big": ">"}[pwndbg.gdblib.arch.endian] + {
@@ -223,6 +242,10 @@
value = value.encode()
value += b"\x00"
+ elif type == "asm":
+ bits_for_arch = pwnlib.context.context.architectures.get(arch, {}).get("bits")
+ value = pwnlib.asm.asm(value, arch=arch, bits=bits_for_arch)
+
# Find the mappings that we're looking for
mappings = pwndbg.gdblib.vmmap.get()
@@ -234,7 +257,11 @@
return
# If next is passed, only perform a manual search over previously saved addresses
- print("Searching for value: " + repr(value))
+ if type == "asm":
+ print("Searching for instruction (assembled value): " + repr(value))
+ else:
+ print("Searching for value: " + repr(value))
+
if next:
val_len = len(value)
new_saved = set()
|
{"golden_diff": "diff --git a/pwndbg/commands/search.py b/pwndbg/commands/search.py\n--- a/pwndbg/commands/search.py\n+++ b/pwndbg/commands/search.py\n@@ -6,6 +6,8 @@\n import os\n import struct\n \n+import pwnlib\n+\n import pwndbg.color.memory as M\n import pwndbg.commands\n import pwndbg.enhance\n@@ -56,7 +58,7 @@\n parser.add_argument(\n \"-t\",\n \"--type\",\n- choices=[\"byte\", \"short\", \"word\", \"dword\", \"qword\", \"pointer\", \"string\", \"bytes\"],\n+ choices=[\"byte\", \"short\", \"word\", \"dword\", \"qword\", \"pointer\", \"string\", \"bytes\", \"asm\"],\n help=\"Size of search target\",\n default=\"bytes\",\n type=str,\n@@ -102,6 +104,19 @@\n const=\"pointer\",\n help=\"Search for a pointer-width integer\",\n )\n+parser.add_argument(\n+ \"--asm\",\n+ dest=\"type\",\n+ action=\"store_const\",\n+ const=\"asm\",\n+ help=\"Search for an assembly instruction\",\n+)\n+parser.add_argument(\n+ \"--arch\",\n+ choices=pwnlib.context.context.architectures.keys(),\n+ type=str,\n+ help=\"Target architecture\",\n+)\n parser.add_argument(\n \"-x\", \"--hex\", action=\"store_true\", help=\"Target is a hex-encoded (for bytes/strings)\"\n )\n@@ -158,6 +173,7 @@\n @pwndbg.commands.OnlyWhenRunning\n def search(\n type,\n+ arch,\n hex,\n executable,\n writable,\n@@ -178,6 +194,9 @@\n next = False\n save = True\n \n+ if not arch:\n+ arch = pwnlib.context.context.arch\n+\n # Adjust pointer sizes to the local architecture\n if type == \"pointer\":\n type = {4: \"dword\", 8: \"qword\"}[pwndbg.gdblib.arch.ptrsize]\n@@ -201,7 +220,7 @@\n if limit:\n limit = pwndbg.commands.fix_int(limit)\n # Convert to an integer if needed, and pack to bytes\n- if type not in (\"string\", \"bytes\"):\n+ if type not in (\"string\", \"bytes\", \"asm\"):\n value = pwndbg.commands.fix_int(value)\n value &= pwndbg.gdblib.arch.ptrmask\n fmt = {\"little\": \"<\", \"big\": \">\"}[pwndbg.gdblib.arch.endian] + {\n@@ -223,6 +242,10 @@\n value = value.encode()\n value += b\"\\x00\"\n \n+ elif type == \"asm\":\n+ bits_for_arch = pwnlib.context.context.architectures.get(arch, {}).get(\"bits\")\n+ value = pwnlib.asm.asm(value, arch=arch, bits=bits_for_arch)\n+\n # Find the mappings that we're looking for\n mappings = pwndbg.gdblib.vmmap.get()\n \n@@ -234,7 +257,11 @@\n return\n \n # If next is passed, only perform a manual search over previously saved addresses\n- print(\"Searching for value: \" + repr(value))\n+ if type == \"asm\":\n+ print(\"Searching for instruction (assembled value): \" + repr(value))\n+ else:\n+ print(\"Searching for value: \" + repr(value))\n+\n if next:\n val_len = len(value)\n new_saved = set()\n", "issue": "search --asm: look for assembly instruction[s] bytes\nThe `search` command should have a way to search for a given assembly instruction[s] bytes.\r\n\r\nWe can assemble the given instructions through pwntools (its `pwnlib` import) similarly as we do in the `asm` command implementation: https://github.com/pwndbg/pwndbg/blob/c0d785565b499ba32d674c9e84a27e4967aee315/pwndbg/commands/asm.py#L69\r\n\r\nThis can probably be implemented as a `-a --asm` or `-t|--type=asm` option, where if provided, we would search for assebmled bytes.\r\n\r\nE.g. in x86-64 programs `search --asm \"xor rax, rax\"` should assemble it through pwnlib to the following bytes:\r\n ```py\r\n In [4]: asm('xor rax, rax', arch='amd64')\r\n Out[4]: b'H1\\xc0'\r\n ```\r\nAnd then search the memory for those bytes (`b'H1\\xc0'`). 
Ofc it should work with all the other options in `search` command like `--writable` or the ability to pass in a mapping name as the last argument of the search.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport argparse\nimport binascii\nimport codecs\nimport os\nimport struct\n\nimport pwndbg.color.memory as M\nimport pwndbg.commands\nimport pwndbg.enhance\nimport pwndbg.gdblib.arch\nimport pwndbg.gdblib.config\nimport pwndbg.gdblib.vmmap\nimport pwndbg.search\nfrom pwndbg.color import message\nfrom pwndbg.commands import CommandCategory\n\nsaved: set[int] = set()\n\n\ndef print_search_hit(address) -> None:\n \"\"\"Prints out a single search hit.\n\n Arguments:\n address(int): Address to print\n \"\"\"\n if not address:\n return\n\n vmmap = pwndbg.gdblib.vmmap.find(address)\n if vmmap:\n region = os.path.basename(vmmap.objfile)\n else:\n region = \"[mapped]\"\n\n region = region.ljust(15)\n\n region = M.get(address, region)\n addr = M.get(address)\n display = pwndbg.enhance.enhance(address)\n print(region, addr, display)\n\n\nauto_save = pwndbg.gdblib.config.add_param(\n \"auto-save-search\", False, 'automatically pass --save to \"search\" command'\n)\nparser = argparse.ArgumentParser(\n formatter_class=argparse.RawTextHelpFormatter,\n description=\"\"\"Search memory for byte sequences, strings, pointers, and integer values.\n\nBy default search results are cached. If you want to cache all results, but only print a subset, use --trunc-out. If you want to cache only a subset of results, and print the results immediately, use --limit. The latter is specially useful if you're searching a huge section of memory.\n\n\"\"\",\n)\nparser.add_argument(\n \"-t\",\n \"--type\",\n choices=[\"byte\", \"short\", \"word\", \"dword\", \"qword\", \"pointer\", \"string\", \"bytes\"],\n help=\"Size of search target\",\n default=\"bytes\",\n type=str,\n)\nparser.add_argument(\n \"-1\",\n \"--byte\",\n dest=\"type\",\n action=\"store_const\",\n const=\"byte\",\n help=\"Search for a 1-byte integer\",\n)\nparser.add_argument(\n \"-2\",\n \"--word\",\n \"--short\",\n dest=\"type\",\n action=\"store_const\",\n const=\"word\",\n help=\"Search for a 2-byte integer\",\n)\nparser.add_argument(\n \"-4\",\n \"--dword\",\n dest=\"type\",\n action=\"store_const\",\n const=\"dword\",\n help=\"Search for a 4-byte integer\",\n)\nparser.add_argument(\n \"-8\",\n \"--qword\",\n dest=\"type\",\n action=\"store_const\",\n const=\"qword\",\n help=\"Search for an 8-byte integer\",\n)\nparser.add_argument(\n \"-p\",\n \"--pointer\",\n dest=\"type\",\n action=\"store_const\",\n const=\"pointer\",\n help=\"Search for a pointer-width integer\",\n)\nparser.add_argument(\n \"-x\", \"--hex\", action=\"store_true\", help=\"Target is a hex-encoded (for bytes/strings)\"\n)\nparser.add_argument(\n \"-e\", \"--executable\", action=\"store_true\", help=\"Search executable segments only\"\n)\nparser.add_argument(\"-w\", \"--writable\", action=\"store_true\", help=\"Search writable segments only\")\nparser.add_argument(\n \"-s\",\n \"--step\",\n default=None,\n type=str,\n help=\"Step search address forward to next alignment after each hit (ex: 0x1000)\",\n)\nparser.add_argument(\n \"-l\",\n \"--limit\",\n default=None,\n type=str,\n help=\"Max results before quitting the search. 
Differs from --trunc-out in that it will not save all search results before quitting\",\n)\nparser.add_argument(\n \"-a\", \"--aligned\", default=None, type=str, help=\"Result must be aligned to this byte boundary\"\n)\nparser.add_argument(\"value\", type=str, help=\"Value to search for\")\nparser.add_argument(\n \"mapping_name\", type=str, nargs=\"?\", default=None, help=\"Mapping to search [e.g. libc]\"\n)\nparser.add_argument(\n \"--save\",\n action=\"store_true\",\n default=None,\n help=\"Save results for further searches with --next. Default comes from config %r\"\n % auto_save.name,\n)\nparser.add_argument(\n \"--no-save\", action=\"store_false\", default=None, dest=\"save\", help=\"Invert --save\"\n)\nparser.add_argument(\n \"-n\",\n \"--next\",\n action=\"store_true\",\n help=\"Search only locations returned by previous search with --save\",\n)\nparser.add_argument(\n \"--trunc-out\",\n action=\"store_true\",\n default=False,\n help=\"Truncate the output to 20 results. Differs from --limit in that it will first save all search results\",\n)\n\n\[email protected](parser, category=CommandCategory.MEMORY)\[email protected]\ndef search(\n type,\n hex,\n executable,\n writable,\n step,\n limit,\n aligned,\n value,\n mapping_name,\n save,\n next,\n trunc_out,\n) -> None:\n global saved\n if next and not saved:\n print(\n \"WARNING: cannot filter previous search results as they were empty. Performing new search saving results.\"\n )\n next = False\n save = True\n\n # Adjust pointer sizes to the local architecture\n if type == \"pointer\":\n type = {4: \"dword\", 8: \"qword\"}[pwndbg.gdblib.arch.ptrsize]\n\n if save is None:\n save = bool(pwndbg.gdblib.config.auto_save_search)\n\n if hex:\n try:\n value = codecs.decode(value, \"hex\")\n except binascii.Error as e:\n print(f\"invalid input for type hex: {e}\")\n return\n\n if step:\n step = pwndbg.commands.fix_int(step)\n\n if aligned:\n aligned = pwndbg.commands.fix_int(aligned)\n\n if limit:\n limit = pwndbg.commands.fix_int(limit)\n # Convert to an integer if needed, and pack to bytes\n if type not in (\"string\", \"bytes\"):\n value = pwndbg.commands.fix_int(value)\n value &= pwndbg.gdblib.arch.ptrmask\n fmt = {\"little\": \"<\", \"big\": \">\"}[pwndbg.gdblib.arch.endian] + {\n \"byte\": \"B\",\n \"short\": \"H\",\n \"word\": \"H\",\n \"dword\": \"L\",\n \"qword\": \"Q\",\n }[type]\n\n try:\n value = struct.pack(fmt, value)\n except struct.error as e:\n print(f\"invalid input for type {type}: {e}\")\n return\n\n # Null-terminate strings\n elif type == \"string\":\n value = value.encode()\n value += b\"\\x00\"\n\n # Find the mappings that we're looking for\n mappings = pwndbg.gdblib.vmmap.get()\n\n if mapping_name:\n mappings = [m for m in mappings if mapping_name in m.objfile]\n\n if not mappings:\n print(message.error(\"Could not find mapping %r\" % mapping_name))\n return\n\n # If next is passed, only perform a manual search over previously saved addresses\n print(\"Searching for value: \" + repr(value))\n if next:\n val_len = len(value)\n new_saved = set()\n\n i = 0\n for addr in saved:\n try:\n val = pwndbg.gdblib.memory.read(addr, val_len)\n except Exception:\n continue\n if val == value:\n new_saved.add(addr)\n if not trunc_out or i < 20:\n print_search_hit(addr)\n i += 1\n\n print(\"Search found %d items\" % i)\n saved = new_saved\n return\n\n # Prep the saved set if necessary\n if save:\n saved = set()\n\n # Perform the search\n i = 0\n for address in pwndbg.search.search(\n value,\n mappings=mappings,\n executable=executable,\n 
writable=writable,\n step=step,\n aligned=aligned,\n limit=limit,\n ):\n if save:\n saved.add(address)\n\n if not trunc_out or i < 20:\n print_search_hit(address)\n i += 1\n", "path": "pwndbg/commands/search.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport argparse\nimport binascii\nimport codecs\nimport os\nimport struct\n\nimport pwnlib\n\nimport pwndbg.color.memory as M\nimport pwndbg.commands\nimport pwndbg.enhance\nimport pwndbg.gdblib.arch\nimport pwndbg.gdblib.config\nimport pwndbg.gdblib.vmmap\nimport pwndbg.search\nfrom pwndbg.color import message\nfrom pwndbg.commands import CommandCategory\n\nsaved: set[int] = set()\n\n\ndef print_search_hit(address) -> None:\n \"\"\"Prints out a single search hit.\n\n Arguments:\n address(int): Address to print\n \"\"\"\n if not address:\n return\n\n vmmap = pwndbg.gdblib.vmmap.find(address)\n if vmmap:\n region = os.path.basename(vmmap.objfile)\n else:\n region = \"[mapped]\"\n\n region = region.ljust(15)\n\n region = M.get(address, region)\n addr = M.get(address)\n display = pwndbg.enhance.enhance(address)\n print(region, addr, display)\n\n\nauto_save = pwndbg.gdblib.config.add_param(\n \"auto-save-search\", False, 'automatically pass --save to \"search\" command'\n)\nparser = argparse.ArgumentParser(\n formatter_class=argparse.RawTextHelpFormatter,\n description=\"\"\"Search memory for byte sequences, strings, pointers, and integer values.\n\nBy default search results are cached. If you want to cache all results, but only print a subset, use --trunc-out. If you want to cache only a subset of results, and print the results immediately, use --limit. The latter is specially useful if you're searching a huge section of memory.\n\n\"\"\",\n)\nparser.add_argument(\n \"-t\",\n \"--type\",\n choices=[\"byte\", \"short\", \"word\", \"dword\", \"qword\", \"pointer\", \"string\", \"bytes\", \"asm\"],\n help=\"Size of search target\",\n default=\"bytes\",\n type=str,\n)\nparser.add_argument(\n \"-1\",\n \"--byte\",\n dest=\"type\",\n action=\"store_const\",\n const=\"byte\",\n help=\"Search for a 1-byte integer\",\n)\nparser.add_argument(\n \"-2\",\n \"--word\",\n \"--short\",\n dest=\"type\",\n action=\"store_const\",\n const=\"word\",\n help=\"Search for a 2-byte integer\",\n)\nparser.add_argument(\n \"-4\",\n \"--dword\",\n dest=\"type\",\n action=\"store_const\",\n const=\"dword\",\n help=\"Search for a 4-byte integer\",\n)\nparser.add_argument(\n \"-8\",\n \"--qword\",\n dest=\"type\",\n action=\"store_const\",\n const=\"qword\",\n help=\"Search for an 8-byte integer\",\n)\nparser.add_argument(\n \"-p\",\n \"--pointer\",\n dest=\"type\",\n action=\"store_const\",\n const=\"pointer\",\n help=\"Search for a pointer-width integer\",\n)\nparser.add_argument(\n \"--asm\",\n dest=\"type\",\n action=\"store_const\",\n const=\"asm\",\n help=\"Search for an assembly instruction\",\n)\nparser.add_argument(\n \"--arch\",\n choices=pwnlib.context.context.architectures.keys(),\n type=str,\n help=\"Target architecture\",\n)\nparser.add_argument(\n \"-x\", \"--hex\", action=\"store_true\", help=\"Target is a hex-encoded (for bytes/strings)\"\n)\nparser.add_argument(\n \"-e\", \"--executable\", action=\"store_true\", help=\"Search executable segments only\"\n)\nparser.add_argument(\"-w\", \"--writable\", action=\"store_true\", help=\"Search writable segments only\")\nparser.add_argument(\n \"-s\",\n \"--step\",\n default=None,\n type=str,\n help=\"Step search address forward to next alignment after each hit (ex: 
0x1000)\",\n)\nparser.add_argument(\n \"-l\",\n \"--limit\",\n default=None,\n type=str,\n help=\"Max results before quitting the search. Differs from --trunc-out in that it will not save all search results before quitting\",\n)\nparser.add_argument(\n \"-a\", \"--aligned\", default=None, type=str, help=\"Result must be aligned to this byte boundary\"\n)\nparser.add_argument(\"value\", type=str, help=\"Value to search for\")\nparser.add_argument(\n \"mapping_name\", type=str, nargs=\"?\", default=None, help=\"Mapping to search [e.g. libc]\"\n)\nparser.add_argument(\n \"--save\",\n action=\"store_true\",\n default=None,\n help=\"Save results for further searches with --next. Default comes from config %r\"\n % auto_save.name,\n)\nparser.add_argument(\n \"--no-save\", action=\"store_false\", default=None, dest=\"save\", help=\"Invert --save\"\n)\nparser.add_argument(\n \"-n\",\n \"--next\",\n action=\"store_true\",\n help=\"Search only locations returned by previous search with --save\",\n)\nparser.add_argument(\n \"--trunc-out\",\n action=\"store_true\",\n default=False,\n help=\"Truncate the output to 20 results. Differs from --limit in that it will first save all search results\",\n)\n\n\[email protected](parser, category=CommandCategory.MEMORY)\[email protected]\ndef search(\n type,\n arch,\n hex,\n executable,\n writable,\n step,\n limit,\n aligned,\n value,\n mapping_name,\n save,\n next,\n trunc_out,\n) -> None:\n global saved\n if next and not saved:\n print(\n \"WARNING: cannot filter previous search results as they were empty. Performing new search saving results.\"\n )\n next = False\n save = True\n\n if not arch:\n arch = pwnlib.context.context.arch\n\n # Adjust pointer sizes to the local architecture\n if type == \"pointer\":\n type = {4: \"dword\", 8: \"qword\"}[pwndbg.gdblib.arch.ptrsize]\n\n if save is None:\n save = bool(pwndbg.gdblib.config.auto_save_search)\n\n if hex:\n try:\n value = codecs.decode(value, \"hex\")\n except binascii.Error as e:\n print(f\"invalid input for type hex: {e}\")\n return\n\n if step:\n step = pwndbg.commands.fix_int(step)\n\n if aligned:\n aligned = pwndbg.commands.fix_int(aligned)\n\n if limit:\n limit = pwndbg.commands.fix_int(limit)\n # Convert to an integer if needed, and pack to bytes\n if type not in (\"string\", \"bytes\", \"asm\"):\n value = pwndbg.commands.fix_int(value)\n value &= pwndbg.gdblib.arch.ptrmask\n fmt = {\"little\": \"<\", \"big\": \">\"}[pwndbg.gdblib.arch.endian] + {\n \"byte\": \"B\",\n \"short\": \"H\",\n \"word\": \"H\",\n \"dword\": \"L\",\n \"qword\": \"Q\",\n }[type]\n\n try:\n value = struct.pack(fmt, value)\n except struct.error as e:\n print(f\"invalid input for type {type}: {e}\")\n return\n\n # Null-terminate strings\n elif type == \"string\":\n value = value.encode()\n value += b\"\\x00\"\n\n elif type == \"asm\":\n bits_for_arch = pwnlib.context.context.architectures.get(arch, {}).get(\"bits\")\n value = pwnlib.asm.asm(value, arch=arch, bits=bits_for_arch)\n\n # Find the mappings that we're looking for\n mappings = pwndbg.gdblib.vmmap.get()\n\n if mapping_name:\n mappings = [m for m in mappings if mapping_name in m.objfile]\n\n if not mappings:\n print(message.error(\"Could not find mapping %r\" % mapping_name))\n return\n\n # If next is passed, only perform a manual search over previously saved addresses\n if type == \"asm\":\n print(\"Searching for instruction (assembled value): \" + repr(value))\n else:\n print(\"Searching for value: \" + repr(value))\n\n if next:\n val_len = len(value)\n new_saved = 
set()\n\n i = 0\n for addr in saved:\n try:\n val = pwndbg.gdblib.memory.read(addr, val_len)\n except Exception:\n continue\n if val == value:\n new_saved.add(addr)\n if not trunc_out or i < 20:\n print_search_hit(addr)\n i += 1\n\n print(\"Search found %d items\" % i)\n saved = new_saved\n return\n\n # Prep the saved set if necessary\n if save:\n saved = set()\n\n # Perform the search\n i = 0\n for address in pwndbg.search.search(\n value,\n mappings=mappings,\n executable=executable,\n writable=writable,\n step=step,\n aligned=aligned,\n limit=limit,\n ):\n if save:\n saved.add(address)\n\n if not trunc_out or i < 20:\n print_search_hit(address)\n i += 1\n", "path": "pwndbg/commands/search.py"}]}
| 3,117 | 794 |
gh_patches_debug_4298
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-773
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Regex error in authentication
https://github.com/dotKom/onlineweb4/blob/develop/apps/authentication/views.py#L121
The "."s should be changed to "."
--- END ISSUE ---
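A quick sketch of why the unescaped dots matter (not part of the original report; the look-alike domain is made up): in a regex, a bare `.` matches any character, so the unescaped pattern also accepts addresses that are not really `@stud.ntnu.no`:

```python
import re

UNESCAPED = r'[^@]+@stud.ntnu.no'    # '.' matches any character
ESCAPED = r'[^@]+@stud\.ntnu\.no'    # '\.' matches a literal dot

legit = '[email protected]'
spoofed = '[email protected]'       # hypothetical look-alike domain

print(bool(re.match(UNESCAPED, legit)))    # True
print(bool(re.match(UNESCAPED, spoofed)))  # True  <- the bug
print(bool(re.match(ESCAPED, legit)))      # True
print(bool(re.match(ESCAPED, spoofed)))    # False
```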
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/authentication/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import uuid
4 import re
5
6 from django.contrib import auth
7 from django.contrib import messages
8 from django.core.mail import send_mail
9 from django.shortcuts import render, redirect, get_object_or_404
10 from django.http import HttpResponseRedirect
11 from django.utils.translation import ugettext as _
12 from django.views.decorators.debug import sensitive_post_parameters
13
14 from django.conf import settings
15 from apps.authentication.forms import (LoginForm, RegisterForm,
16 RecoveryForm, ChangePasswordForm)
17 from apps.authentication.models import OnlineUser as User, RegisterToken, Email
18
19
20 @sensitive_post_parameters()
21 def login(request):
22 redirect_url = request.REQUEST.get('next', '')
23 if request.method == 'POST':
24 form = LoginForm(request.POST)
25 if form.login(request):
26 messages.success(request, _(u'Du er nå logget inn.'))
27 if redirect_url:
28 return HttpResponseRedirect(redirect_url)
29 return HttpResponseRedirect('/')
30 else: form = LoginForm(request.POST, auto_id=True)
31 else:
32 form = LoginForm()
33
34 response_dict = { 'form' : form, 'next' : redirect_url}
35 return render(request, 'auth/login.html', response_dict)
36
37
38 def logout(request):
39 auth.logout(request)
40 messages.success(request, _(u'Du er nå logget ut.'))
41 return HttpResponseRedirect('/')
42
43
44 @sensitive_post_parameters()
45 def register(request):
46 if request.user.is_authenticated():
47 messages.error(request, _(u'Registrering av ny konto krever at du er logget ut.'))
48 return HttpResponseRedirect('/')
49 else:
50 if request.method == 'POST':
51 form = RegisterForm(request.POST)
52 if form.is_valid():
53 cleaned = form.cleaned_data
54
55 # Create user
56 user = User(
57 username=cleaned['username'],
58 first_name=cleaned['first_name'].title(),
59 last_name=cleaned['last_name'].title(),
60 )
61 # Set remaining fields
62 user.phone_number=cleaned['phone']
63 user.address=cleaned['address'].title()
64 user.zip_code=cleaned['zip_code']
65 # Store password properly
66 user.set_password(cleaned['password'])
67 # Users need to be manually activated
68 user.is_active = False
69 user.save()
70
71 # Set email address
72 email = Email(
73 user=user,
74 email=cleaned['email'].lower(),
75 )
76 email.primary = True
77 email.save()
78
79 # Create the registration token
80 token = uuid.uuid4().hex
81 rt = RegisterToken(user=user, email=cleaned['email'], token=token)
82 rt.save()
83
84 email_message = _(u"""
85 En konto har blitt registrert på online.ntnu.no med denne epostadressen. Dersom du ikke
86 har utført denne handlingen ber vi deg se bort fra denne eposten.
87
88 For å bruke denne kontoen kreves det at du verifiserer epostadressen. Du kan gjøre
89 dette ved å besøke linken under.
90
91 http://%s/auth/verify/%s/
92
93 Denne lenken vil være gyldig i 24 timer. Dersom du behøver å få tilsendt en ny lenke
94 kan dette gjøres med funksjonen for å gjenopprette passord.
95 """) % (request.META['HTTP_HOST'], token)
96
97 send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])
98
99 messages.success(request, _(u'Registreringen var vellykket. Se tilsendt epost for verifiseringsinstrukser.'))
100
101 return HttpResponseRedirect('/')
102 else:
103 form = RegisterForm(request.POST, auto_id=True)
104 else:
105 form = RegisterForm()
106
107 return render(request, 'auth/register.html', {'form': form, })
108
109
110 def verify(request, token):
111 rt = get_object_or_404(RegisterToken, token=token)
112
113 if rt.is_valid:
114 email = get_object_or_404(Email, email=rt.email)
115 email.verified = True
116 email.save()
117
118 user = getattr(rt, 'user')
119
120 # If it is a stud email, set the ntnu_username for user
121 if re.match(r'[^@]+@stud.ntnu.no', rt.email):
122 user.ntnu_username = rt.email.split("@")[0]
123
124 user_activated = False
125 if not user.is_active:
126 user.is_active = True
127 user_activated = True
128
129 user.save()
130 rt.delete()
131
132 if user_activated:
133 messages.success(request, _(u'Bruker %s ble aktivert. Du kan nå logge inn.') % user.username)
134 return redirect('auth_login')
135 else:
136 messages.success(request, _(u'Eposten %s er nå verifisert.') % email)
137 return redirect('profiles')
138 else:
139 messages.error(request, _(u'Denne lenken er utløpt. Bruk gjenopprett passord for å få tilsendt en ny lenke.'))
140 return HttpResponseRedirect('/')
141
142
143 def recover(request):
144 if request.user.is_authenticated():
145 messages.error(request, _(u'Gjenoppretning av passord krever at du er logget ut.'))
146 return HttpResponseRedirect('/')
147 else:
148 if request.method == 'POST':
149 form = RecoveryForm(request.POST)
150 if form.is_valid():
151 email_string = form.cleaned_data['email']
152 emails = Email.objects.filter(email=email_string)
153
154 if len(emails) == 0:
155 messages.error(request, _(u'Denne eposten er ikke registrert i våre systemer.'))
156 return HttpResponseRedirect('/')
157
158 email = emails[0]
159
160 # Create the registration token
161 token = uuid.uuid4().hex
162 rt = RegisterToken(user=email.user, email=email.email, token=token)
163 rt.save()
164
165 email_message = _(u"""
166 Vi har mottat forespørsel om å gjenopprette passordet for kontoen bundet til %s.
167 Dersom du ikke har bedt om denne handlingen ber vi deg se bort fra denne eposten.
168
169 Brukernavn: %s
170
171 Hvis du ønsker å gjennomføre en gjenoppretning av passord, bruk lenken under.
172
173 http://%s/auth/set_password/%s/
174
175 Denne lenken vil være gyldig i 24 timer. Dersom du behøver å få tilsendt en ny lenke
176 kan dette gjøres med funksjonen for å gjenopprette passord.
177 """) % (email.email, email.user.username, request.META['HTTP_HOST'], token)
178
179 send_mail(_(u'Gjenoppretning av passord'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])
180
181 messages.success(request, _(u'En lenke for gjenoppretning har blitt sendt til %s.') % email.email)
182
183 return HttpResponseRedirect('/')
184 else:
185 form = RecoveryForm(request.POST, auto_id=True)
186 else:
187 form = RecoveryForm()
188
189 return render(request, 'auth/recover.html', {'form': form})
190
191
192 @sensitive_post_parameters()
193 def set_password(request, token=None):
194 if request.user.is_authenticated():
195 return HttpResponseRedirect('/')
196 else:
197 tokens = RegisterToken.objects.filter(token=token)
198
199 if tokens.count() == 1:
200 rt = tokens[0]
201 if rt.is_valid:
202 if request.method == 'POST':
203 form = ChangePasswordForm(request.POST, auto_id=True)
204 if form.is_valid():
205 user = getattr(rt, 'user')
206
207 user.is_active = True
208 user.set_password(form.cleaned_data['new_password'])
209 user.save()
210
211 rt.delete()
212
213 messages.success(request, _(u'Bruker %s har gjennomført vellykket gjenoppretning av passord. Du kan nå logge inn.') % user.username)
214
215 return HttpResponseRedirect('/')
216 else:
217 form = ChangePasswordForm()
218
219 messages.success(request, _(u'Lenken er akseptert. Vennligst skriv inn ønsket passord.'))
220
221 return render(request, 'auth/set_password.html', {'form': form, 'token': token})
222
223 else:
224 messages.error(request, _(u'Lenken er ugyldig. Vennligst bruk gjenoppretning av passord for å få tilsendt en ny lenke.'))
225 return HttpResponseRedirect('/')
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/authentication/views.py b/apps/authentication/views.py
--- a/apps/authentication/views.py
+++ b/apps/authentication/views.py
@@ -118,7 +118,7 @@
user = getattr(rt, 'user')
# If it is a stud email, set the ntnu_username for user
- if re.match(r'[^@]+@stud.ntnu.no', rt.email):
+ if re.match(r'[^@]+@stud\.ntnu\.no', rt.email):
user.ntnu_username = rt.email.split("@")[0]
user_activated = False
|
{"golden_diff": "diff --git a/apps/authentication/views.py b/apps/authentication/views.py\n--- a/apps/authentication/views.py\n+++ b/apps/authentication/views.py\n@@ -118,7 +118,7 @@\n user = getattr(rt, 'user')\n \n # If it is a stud email, set the ntnu_username for user\n- if re.match(r'[^@][email protected]', rt.email):\n+ if re.match(r'[^@]+@stud\\.ntnu\\.no', rt.email):\n user.ntnu_username = rt.email.split(\"@\")[0]\n \n user_activated = False\n", "issue": "Regex error in authentication\nhttps://github.com/dotKom/onlineweb4/blob/develop/apps/authentication/views.py#L121\n\nThe \".\"s should be changed to \".\"\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport uuid\nimport re\n\nfrom django.contrib import auth\nfrom django.contrib import messages\nfrom django.core.mail import send_mail\nfrom django.shortcuts import render, redirect, get_object_or_404\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.debug import sensitive_post_parameters\n\nfrom django.conf import settings\nfrom apps.authentication.forms import (LoginForm, RegisterForm, \n RecoveryForm, ChangePasswordForm)\nfrom apps.authentication.models import OnlineUser as User, RegisterToken, Email\n\n\n@sensitive_post_parameters()\ndef login(request):\n redirect_url = request.REQUEST.get('next', '')\n if request.method == 'POST':\n form = LoginForm(request.POST)\n if form.login(request):\n messages.success(request, _(u'Du er n\u00e5 logget inn.'))\n if redirect_url:\n return HttpResponseRedirect(redirect_url)\n return HttpResponseRedirect('/')\n else: form = LoginForm(request.POST, auto_id=True)\n else:\n form = LoginForm()\n\n response_dict = { 'form' : form, 'next' : redirect_url}\n return render(request, 'auth/login.html', response_dict)\n\n\ndef logout(request):\n auth.logout(request)\n messages.success(request, _(u'Du er n\u00e5 logget ut.'))\n return HttpResponseRedirect('/')\n\n\n@sensitive_post_parameters()\ndef register(request):\n if request.user.is_authenticated():\n messages.error(request, _(u'Registrering av ny konto krever at du er logget ut.'))\n return HttpResponseRedirect('/')\n else:\n if request.method == 'POST':\n form = RegisterForm(request.POST)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n # Create user\n user = User(\n username=cleaned['username'], \n first_name=cleaned['first_name'].title(), \n last_name=cleaned['last_name'].title(),\n )\n # Set remaining fields\n user.phone_number=cleaned['phone']\n user.address=cleaned['address'].title()\n user.zip_code=cleaned['zip_code']\n # Store password properly\n user.set_password(cleaned['password'])\n # Users need to be manually activated\n user.is_active = False\n user.save()\n\n # Set email address\n email = Email(\n user=user,\n email=cleaned['email'].lower(),\n )\n email.primary = True\n email.save() \n\n # Create the registration token\n token = uuid.uuid4().hex\n rt = RegisterToken(user=user, email=cleaned['email'], token=token)\n rt.save()\n\n email_message = _(u\"\"\"\nEn konto har blitt registrert p\u00e5 online.ntnu.no med denne epostadressen. Dersom du ikke\nhar utf\u00f8rt denne handlingen ber vi deg se bort fra denne eposten.\n\nFor \u00e5 bruke denne kontoen kreves det at du verifiserer epostadressen. Du kan gj\u00f8re\ndette ved \u00e5 bes\u00f8ke linken under.\n\nhttp://%s/auth/verify/%s/\n\nDenne lenken vil v\u00e6re gyldig i 24 timer. 
Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\nkan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n\"\"\") % (request.META['HTTP_HOST'], token)\n\n send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n\n messages.success(request, _(u'Registreringen var vellykket. Se tilsendt epost for verifiseringsinstrukser.'))\n\n return HttpResponseRedirect('/') \n else:\n form = RegisterForm(request.POST, auto_id=True)\n else:\n form = RegisterForm()\n\n return render(request, 'auth/register.html', {'form': form, })\n\n\ndef verify(request, token):\n rt = get_object_or_404(RegisterToken, token=token)\n \n if rt.is_valid:\n email = get_object_or_404(Email, email=rt.email)\n email.verified = True\n email.save()\n \n user = getattr(rt, 'user')\n\n # If it is a stud email, set the ntnu_username for user\n if re.match(r'[^@][email protected]', rt.email):\n user.ntnu_username = rt.email.split(\"@\")[0]\n\n user_activated = False\n if not user.is_active:\n user.is_active = True\n user_activated = True\n\n user.save()\n rt.delete()\n\n if user_activated:\n messages.success(request, _(u'Bruker %s ble aktivert. Du kan n\u00e5 logge inn.') % user.username)\n return redirect('auth_login')\n else:\n messages.success(request, _(u'Eposten %s er n\u00e5 verifisert.') % email)\n return redirect('profiles')\n else:\n messages.error(request, _(u'Denne lenken er utl\u00f8pt. Bruk gjenopprett passord for \u00e5 f\u00e5 tilsendt en ny lenke.'))\n return HttpResponseRedirect('/') \n \n\ndef recover(request):\n if request.user.is_authenticated():\n messages.error(request, _(u'Gjenoppretning av passord krever at du er logget ut.'))\n return HttpResponseRedirect('/')\n else:\n if request.method == 'POST':\n form = RecoveryForm(request.POST)\n if form.is_valid():\n email_string = form.cleaned_data['email']\n emails = Email.objects.filter(email=email_string)\n\n if len(emails) == 0:\n messages.error(request, _(u'Denne eposten er ikke registrert i v\u00e5re systemer.'))\n return HttpResponseRedirect('/') \n\n email = emails[0]\n \n # Create the registration token\n token = uuid.uuid4().hex\n rt = RegisterToken(user=email.user, email=email.email, token=token)\n rt.save()\n\n email_message = _(u\"\"\"\nVi har mottat foresp\u00f8rsel om \u00e5 gjenopprette passordet for kontoen bundet til %s.\nDersom du ikke har bedt om denne handlingen ber vi deg se bort fra denne eposten.\n\nBrukernavn: %s\n\nHvis du \u00f8nsker \u00e5 gjennomf\u00f8re en gjenoppretning av passord, bruk lenken under.\n\nhttp://%s/auth/set_password/%s/\n\nDenne lenken vil v\u00e6re gyldig i 24 timer. 
Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\nkan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n\"\"\") % (email.email, email.user.username, request.META['HTTP_HOST'], token)\n\n send_mail(_(u'Gjenoppretning av passord'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n\n messages.success(request, _(u'En lenke for gjenoppretning har blitt sendt til %s.') % email.email)\n\n return HttpResponseRedirect('/') \n else:\n form = RecoveryForm(request.POST, auto_id=True)\n else:\n form = RecoveryForm()\n\n return render(request, 'auth/recover.html', {'form': form})\n\n\n@sensitive_post_parameters()\ndef set_password(request, token=None): \n if request.user.is_authenticated():\n return HttpResponseRedirect('/')\n else:\n tokens = RegisterToken.objects.filter(token=token)\n \n if tokens.count() == 1:\n rt = tokens[0]\n if rt.is_valid:\n if request.method == 'POST':\n form = ChangePasswordForm(request.POST, auto_id=True)\n if form.is_valid():\n user = getattr(rt, 'user')\n\n user.is_active = True\n user.set_password(form.cleaned_data['new_password'])\n user.save()\n \n rt.delete()\n\n messages.success(request, _(u'Bruker %s har gjennomf\u00f8rt vellykket gjenoppretning av passord. Du kan n\u00e5 logge inn.') % user.username)\n \n return HttpResponseRedirect('/') \n else:\n form = ChangePasswordForm()\n\n messages.success(request, _(u'Lenken er akseptert. Vennligst skriv inn \u00f8nsket passord.'))\n\n return render(request, 'auth/set_password.html', {'form': form, 'token': token})\n\n else:\n messages.error(request, _(u'Lenken er ugyldig. Vennligst bruk gjenoppretning av passord for \u00e5 f\u00e5 tilsendt en ny lenke.'))\n return HttpResponseRedirect('/') \n", "path": "apps/authentication/views.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport uuid\nimport re\n\nfrom django.contrib import auth\nfrom django.contrib import messages\nfrom django.core.mail import send_mail\nfrom django.shortcuts import render, redirect, get_object_or_404\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.debug import sensitive_post_parameters\n\nfrom django.conf import settings\nfrom apps.authentication.forms import (LoginForm, RegisterForm, \n RecoveryForm, ChangePasswordForm)\nfrom apps.authentication.models import OnlineUser as User, RegisterToken, Email\n\n\n@sensitive_post_parameters()\ndef login(request):\n redirect_url = request.REQUEST.get('next', '')\n if request.method == 'POST':\n form = LoginForm(request.POST)\n if form.login(request):\n messages.success(request, _(u'Du er n\u00e5 logget inn.'))\n if redirect_url:\n return HttpResponseRedirect(redirect_url)\n return HttpResponseRedirect('/')\n else: form = LoginForm(request.POST, auto_id=True)\n else:\n form = LoginForm()\n\n response_dict = { 'form' : form, 'next' : redirect_url}\n return render(request, 'auth/login.html', response_dict)\n\n\ndef logout(request):\n auth.logout(request)\n messages.success(request, _(u'Du er n\u00e5 logget ut.'))\n return HttpResponseRedirect('/')\n\n\n@sensitive_post_parameters()\ndef register(request):\n if request.user.is_authenticated():\n messages.error(request, _(u'Registrering av ny konto krever at du er logget ut.'))\n return HttpResponseRedirect('/')\n else:\n if request.method == 'POST':\n form = RegisterForm(request.POST)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n # Create user\n user = User(\n username=cleaned['username'], \n 
first_name=cleaned['first_name'].title(), \n last_name=cleaned['last_name'].title(),\n )\n # Set remaining fields\n user.phone_number=cleaned['phone']\n user.address=cleaned['address'].title()\n user.zip_code=cleaned['zip_code']\n # Store password properly\n user.set_password(cleaned['password'])\n # Users need to be manually activated\n user.is_active = False\n user.save()\n\n # Set email address\n email = Email(\n user=user,\n email=cleaned['email'].lower(),\n )\n email.primary = True\n email.save() \n\n # Create the registration token\n token = uuid.uuid4().hex\n rt = RegisterToken(user=user, email=cleaned['email'], token=token)\n rt.save()\n\n email_message = _(u\"\"\"\nEn konto har blitt registrert p\u00e5 online.ntnu.no med denne epostadressen. Dersom du ikke\nhar utf\u00f8rt denne handlingen ber vi deg se bort fra denne eposten.\n\nFor \u00e5 bruke denne kontoen kreves det at du verifiserer epostadressen. Du kan gj\u00f8re\ndette ved \u00e5 bes\u00f8ke linken under.\n\nhttp://%s/auth/verify/%s/\n\nDenne lenken vil v\u00e6re gyldig i 24 timer. Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\nkan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n\"\"\") % (request.META['HTTP_HOST'], token)\n\n send_mail(_(u'Verifiser din konto'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n\n messages.success(request, _(u'Registreringen var vellykket. Se tilsendt epost for verifiseringsinstrukser.'))\n\n return HttpResponseRedirect('/') \n else:\n form = RegisterForm(request.POST, auto_id=True)\n else:\n form = RegisterForm()\n\n return render(request, 'auth/register.html', {'form': form, })\n\n\ndef verify(request, token):\n rt = get_object_or_404(RegisterToken, token=token)\n \n if rt.is_valid:\n email = get_object_or_404(Email, email=rt.email)\n email.verified = True\n email.save()\n \n user = getattr(rt, 'user')\n\n # If it is a stud email, set the ntnu_username for user\n if re.match(r'[^@]+@stud\\.ntnu\\.no', rt.email):\n user.ntnu_username = rt.email.split(\"@\")[0]\n\n user_activated = False\n if not user.is_active:\n user.is_active = True\n user_activated = True\n\n user.save()\n rt.delete()\n\n if user_activated:\n messages.success(request, _(u'Bruker %s ble aktivert. Du kan n\u00e5 logge inn.') % user.username)\n return redirect('auth_login')\n else:\n messages.success(request, _(u'Eposten %s er n\u00e5 verifisert.') % email)\n return redirect('profiles')\n else:\n messages.error(request, _(u'Denne lenken er utl\u00f8pt. 
Bruk gjenopprett passord for \u00e5 f\u00e5 tilsendt en ny lenke.'))\n return HttpResponseRedirect('/') \n \n\ndef recover(request):\n if request.user.is_authenticated():\n messages.error(request, _(u'Gjenoppretning av passord krever at du er logget ut.'))\n return HttpResponseRedirect('/')\n else:\n if request.method == 'POST':\n form = RecoveryForm(request.POST)\n if form.is_valid():\n email_string = form.cleaned_data['email']\n emails = Email.objects.filter(email=email_string)\n\n if len(emails) == 0:\n messages.error(request, _(u'Denne eposten er ikke registrert i v\u00e5re systemer.'))\n return HttpResponseRedirect('/') \n\n email = emails[0]\n \n # Create the registration token\n token = uuid.uuid4().hex\n rt = RegisterToken(user=email.user, email=email.email, token=token)\n rt.save()\n\n email_message = _(u\"\"\"\nVi har mottat foresp\u00f8rsel om \u00e5 gjenopprette passordet for kontoen bundet til %s.\nDersom du ikke har bedt om denne handlingen ber vi deg se bort fra denne eposten.\n\nBrukernavn: %s\n\nHvis du \u00f8nsker \u00e5 gjennomf\u00f8re en gjenoppretning av passord, bruk lenken under.\n\nhttp://%s/auth/set_password/%s/\n\nDenne lenken vil v\u00e6re gyldig i 24 timer. Dersom du beh\u00f8ver \u00e5 f\u00e5 tilsendt en ny lenke\nkan dette gj\u00f8res med funksjonen for \u00e5 gjenopprette passord.\n\"\"\") % (email.email, email.user.username, request.META['HTTP_HOST'], token)\n\n send_mail(_(u'Gjenoppretning av passord'), email_message, settings.DEFAULT_FROM_EMAIL, [email.email,])\n\n messages.success(request, _(u'En lenke for gjenoppretning har blitt sendt til %s.') % email.email)\n\n return HttpResponseRedirect('/') \n else:\n form = RecoveryForm(request.POST, auto_id=True)\n else:\n form = RecoveryForm()\n\n return render(request, 'auth/recover.html', {'form': form})\n\n\n@sensitive_post_parameters()\ndef set_password(request, token=None): \n if request.user.is_authenticated():\n return HttpResponseRedirect('/')\n else:\n tokens = RegisterToken.objects.filter(token=token)\n \n if tokens.count() == 1:\n rt = tokens[0]\n if rt.is_valid:\n if request.method == 'POST':\n form = ChangePasswordForm(request.POST, auto_id=True)\n if form.is_valid():\n user = getattr(rt, 'user')\n\n user.is_active = True\n user.set_password(form.cleaned_data['new_password'])\n user.save()\n \n rt.delete()\n\n messages.success(request, _(u'Bruker %s har gjennomf\u00f8rt vellykket gjenoppretning av passord. Du kan n\u00e5 logge inn.') % user.username)\n \n return HttpResponseRedirect('/') \n else:\n form = ChangePasswordForm()\n\n messages.success(request, _(u'Lenken er akseptert. Vennligst skriv inn \u00f8nsket passord.'))\n\n return render(request, 'auth/set_password.html', {'form': form, 'token': token})\n\n else:\n messages.error(request, _(u'Lenken er ugyldig. Vennligst bruk gjenoppretning av passord for \u00e5 f\u00e5 tilsendt en ny lenke.'))\n return HttpResponseRedirect('/') \n", "path": "apps/authentication/views.py"}]}
| 2,732 | 131 |
gh_patches_debug_57622
|
rasdani/github-patches
|
git_diff
|
AnalogJ__lexicon-164
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Namecheap support not optional
Unlike route53 or softlayer and unlike what setup.py suggests, the namecheap provider is not optional in 2.1.17.
--- END ISSUE ---
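For background, optional provider dependencies are usually guarded at import time so the package can be imported even when the extra is not installed. The sketch below is illustrative only (the helper name is made up, not the repository's code) and shows the common pattern of failing at use time rather than import time:
```python
import logging

logger = logging.getLogger(__name__)

try:
    import namecheap  # optional dependency, only needed for this provider
except ImportError:
    namecheap = None


def _require_namecheap():
    # Raise a clear error only when the Namecheap provider is actually used.
    if namecheap is None:
        raise ImportError(
            "The 'namecheap' package is required to use the Namecheap provider"
        )
```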
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lexicon/providers/namecheap.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import print_function
3
4 import logging
5
6 import namecheap
7
8 from .base import Provider as BaseProvider
9
10 logger = logging.getLogger(__name__)
11
12
13 def ProviderParser(subparser):
14 subparser.add_argument(
15 '--auth-token',
16 help='specify api token used to authenticate'
17 )
18 subparser.add_argument(
19 '--auth-username',
20 help='specify email address used to authenticate'
21 )
22 # FIXME What is the client IP used for?
23 subparser.add_argument(
24 '--auth-client-ip',
25 help='Client IP address to send to Namecheap API calls',
26 default='127.0.0.1'
27 )
28 subparser.add_argument(
29 '--auth-sandbox',
30 help='Whether to use the sandbox server',
31 action='store_true'
32 )
33
34 class Provider(BaseProvider):
35
36 def __init__(self, options, engine_overrides=None):
37 super(Provider, self).__init__(options, engine_overrides)
38 self.options = options
39 self.client = namecheap.Api(
40 ApiUser=options.get('auth_username',''),
41 ApiKey=options.get('auth_token',''),
42 UserName=options.get('auth_username',''),
43 ClientIP=options.get('auth_client_ip',''),
44 sandbox=options.get('auth_sandbox', False),
45 debug=False
46 )
47 self.domain = self.options['domain']
48 self.domain_id = None
49
50 def authenticate(self):
51 try:
52 domain_names = [x['Name'] for x in self.client.domains_getList()]
53 except namecheap.ApiError:
54 raise Exception('Authentication failed')
55 if self.domain not in domain_names:
56 raise Exception('The domain {} is not controlled by this Namecheap '
57 'account'.format(self.domain))
58 # FIXME What is this for?
59 self.domain_id = self.domain
60
61 # Create record. If record already exists with the same content, do nothing
62 def create_record(self, type, name, content):
63 record = {
64 # required
65 'Type': type,
66 'Name': self._relative_name(name),
67 'Address': content
68 }
69 # logger.debug('create_record: %s', 'id' in payload)
70 # return 'id' in payload
71 self.client.domains_dns_addHost(self.domain, record)
72 return True
73
74 # List all records. Return an empty list if no records found.
75 # type, name and content are used to filter records.
76 # If possible filter during the query, otherwise filter after response is
77 # received.
78 def list_records(self, type=None, name=None, content=None, id=None):
79
80
81 records = []
82 raw_records = self.client.domains_dns_getHosts(self.domain)
83 for record in raw_records:
84 records.append(self._convert_to_lexicon(record))
85
86 if id:
87 records = [record for record in records if record['id'] == id]
88 if type:
89 records = [record for record in records if record['type'] == type]
90 if name:
91 if name.endswith('.'):
92 name = name[:-1]
93 records = [record for record in records if name in record['name'] ]
94 if content:
95 records = [record for record in records if record['content'].lower() == content.lower()]
96
97 logger.debug('list_records: %s', records)
98 return records
99
100 # Create or update a record.
101 def update_record(self, identifier, type=None, name=None, content=None):
102 # Delete record if it exists
103 self.delete_record(identifier, type, name, content)
104 return self.create_record(type, name, content)
105
106 # Delete an existing record.
107 # If record does not exist, do nothing.
108 def delete_record(self, identifier=None, type=None, name=None, content=None):
109
110 record = self.list_records(type=type, name=name, content=content, id=identifier)
111 if record:
112 self.client.domains_dns_delHost(self.domain, self._convert_to_namecheap(record[0]))
113 return True
114 else:
115 return False
116
117 def _convert_to_namecheap(self, record):
118 """ converts from lexicon format record to namecheap format record,
119 suitable to sending through the api to namecheap"""
120
121 name = record['name']
122 if name.endswith('.'):
123 name = name[:-1]
124
125 short_name = name[:name.find(self.domain)-1]
126 processed_record = {
127 'Type': record['type'],
128 'Name': short_name,
129 'TTL': record['ttl'],
130 'Address': record['content'],
131 'HostId': record['id']
132 }
133
134 return processed_record
135
136 def _convert_to_lexicon(self, record):
137 """ converts from namecheap raw record format to lexicon format record
138 """
139
140 name = record['Name']
141 if self.domain not in name:
142 name = "{}.{}".format(name,self.domain)
143
144 processed_record = {
145 'type': record['Type'],
146 'name': '{0}.{1}'.format(record['Name'], self.domain),
147 'ttl': record['TTL'],
148 'content': record['Address'],
149 'id': record['HostId']
150 }
151
152 return processed_record
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lexicon/providers/namecheap.py b/lexicon/providers/namecheap.py
--- a/lexicon/providers/namecheap.py
+++ b/lexicon/providers/namecheap.py
@@ -3,10 +3,14 @@
import logging
-import namecheap
from .base import Provider as BaseProvider
+try:
+ import namecheap #optional dep
+except ImportError:
+ pass
+
logger = logging.getLogger(__name__)
|
{"golden_diff": "diff --git a/lexicon/providers/namecheap.py b/lexicon/providers/namecheap.py\n--- a/lexicon/providers/namecheap.py\n+++ b/lexicon/providers/namecheap.py\n@@ -3,10 +3,14 @@\n \n import logging\n \n-import namecheap\n \n from .base import Provider as BaseProvider\n \n+try:\n+ import namecheap #optional dep\n+except ImportError:\n+ pass\n+\n logger = logging.getLogger(__name__)\n", "issue": "Namecheap support not optional\nUnlike route53 or softlayer and unlike what setup.py suggests, the namecheap provider is not optional in 2.1.17.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\n\nimport logging\n\nimport namecheap\n\nfrom .base import Provider as BaseProvider\n\nlogger = logging.getLogger(__name__)\n\n\ndef ProviderParser(subparser):\n subparser.add_argument(\n '--auth-token',\n help='specify api token used to authenticate'\n )\n subparser.add_argument(\n '--auth-username',\n help='specify email address used to authenticate'\n )\n # FIXME What is the client IP used for?\n subparser.add_argument(\n '--auth-client-ip',\n help='Client IP address to send to Namecheap API calls',\n default='127.0.0.1'\n )\n subparser.add_argument(\n '--auth-sandbox',\n help='Whether to use the sandbox server',\n action='store_true'\n )\n\nclass Provider(BaseProvider):\n\n def __init__(self, options, engine_overrides=None):\n super(Provider, self).__init__(options, engine_overrides)\n self.options = options\n self.client = namecheap.Api(\n ApiUser=options.get('auth_username',''),\n ApiKey=options.get('auth_token',''),\n UserName=options.get('auth_username',''),\n ClientIP=options.get('auth_client_ip',''),\n sandbox=options.get('auth_sandbox', False),\n debug=False\n )\n self.domain = self.options['domain']\n self.domain_id = None\n\n def authenticate(self):\n try:\n domain_names = [x['Name'] for x in self.client.domains_getList()]\n except namecheap.ApiError:\n raise Exception('Authentication failed')\n if self.domain not in domain_names:\n raise Exception('The domain {} is not controlled by this Namecheap '\n 'account'.format(self.domain))\n # FIXME What is this for?\n self.domain_id = self.domain\n\n # Create record. If record already exists with the same content, do nothing\n def create_record(self, type, name, content):\n record = {\n # required\n 'Type': type,\n 'Name': self._relative_name(name),\n 'Address': content\n }\n # logger.debug('create_record: %s', 'id' in payload)\n # return 'id' in payload\n self.client.domains_dns_addHost(self.domain, record)\n return True\n\n # List all records. 
Return an empty list if no records found.\n # type, name and content are used to filter records.\n # If possible filter during the query, otherwise filter after response is\n # received.\n def list_records(self, type=None, name=None, content=None, id=None):\n\n\n records = []\n raw_records = self.client.domains_dns_getHosts(self.domain)\n for record in raw_records:\n records.append(self._convert_to_lexicon(record))\n\n if id:\n records = [record for record in records if record['id'] == id]\n if type:\n records = [record for record in records if record['type'] == type]\n if name:\n if name.endswith('.'):\n name = name[:-1]\n records = [record for record in records if name in record['name'] ]\n if content:\n records = [record for record in records if record['content'].lower() == content.lower()]\n\n logger.debug('list_records: %s', records)\n return records\n\n # Create or update a record.\n def update_record(self, identifier, type=None, name=None, content=None):\n # Delete record if it exists\n self.delete_record(identifier, type, name, content)\n return self.create_record(type, name, content)\n\n # Delete an existing record.\n # If record does not exist, do nothing.\n def delete_record(self, identifier=None, type=None, name=None, content=None):\n\n record = self.list_records(type=type, name=name, content=content, id=identifier)\n if record:\n self.client.domains_dns_delHost(self.domain, self._convert_to_namecheap(record[0]))\n return True\n else:\n return False\n\n def _convert_to_namecheap(self, record):\n \"\"\" converts from lexicon format record to namecheap format record,\n suitable to sending through the api to namecheap\"\"\"\n\n name = record['name']\n if name.endswith('.'):\n name = name[:-1]\n\n short_name = name[:name.find(self.domain)-1]\n processed_record = {\n 'Type': record['type'],\n 'Name': short_name,\n 'TTL': record['ttl'],\n 'Address': record['content'],\n 'HostId': record['id']\n }\n\n return processed_record\n\n def _convert_to_lexicon(self, record):\n \"\"\" converts from namecheap raw record format to lexicon format record\n \"\"\"\n\n name = record['Name']\n if self.domain not in name:\n name = \"{}.{}\".format(name,self.domain)\n\n processed_record = {\n 'type': record['Type'],\n 'name': '{0}.{1}'.format(record['Name'], self.domain),\n 'ttl': record['TTL'],\n 'content': record['Address'],\n 'id': record['HostId']\n }\n\n return processed_record\n", "path": "lexicon/providers/namecheap.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\n\nimport logging\n\n\nfrom .base import Provider as BaseProvider\n\ntry:\n import namecheap #optional dep\nexcept ImportError:\n pass\n\nlogger = logging.getLogger(__name__)\n\n\ndef ProviderParser(subparser):\n subparser.add_argument(\n '--auth-token',\n help='specify api token used to authenticate'\n )\n subparser.add_argument(\n '--auth-username',\n help='specify email address used to authenticate'\n )\n # FIXME What is the client IP used for?\n subparser.add_argument(\n '--auth-client-ip',\n help='Client IP address to send to Namecheap API calls',\n default='127.0.0.1'\n )\n subparser.add_argument(\n '--auth-sandbox',\n help='Whether to use the sandbox server',\n action='store_true'\n )\n\nclass Provider(BaseProvider):\n\n def __init__(self, options, engine_overrides=None):\n super(Provider, self).__init__(options, engine_overrides)\n self.options = options\n self.client = namecheap.Api(\n ApiUser=options.get('auth_username',''),\n 
ApiKey=options.get('auth_token',''),\n UserName=options.get('auth_username',''),\n ClientIP=options.get('auth_client_ip',''),\n sandbox=options.get('auth_sandbox', False),\n debug=False\n )\n self.domain = self.options['domain']\n self.domain_id = None\n\n def authenticate(self):\n try:\n domain_names = [x['Name'] for x in self.client.domains_getList()]\n except namecheap.ApiError:\n raise Exception('Authentication failed')\n if self.domain not in domain_names:\n raise Exception('The domain {} is not controlled by this Namecheap '\n 'account'.format(self.domain))\n # FIXME What is this for?\n self.domain_id = self.domain\n\n # Create record. If record already exists with the same content, do nothing\n def create_record(self, type, name, content):\n record = {\n # required\n 'Type': type,\n 'Name': self._relative_name(name),\n 'Address': content\n }\n # logger.debug('create_record: %s', 'id' in payload)\n # return 'id' in payload\n self.client.domains_dns_addHost(self.domain, record)\n return True\n\n # List all records. Return an empty list if no records found.\n # type, name and content are used to filter records.\n # If possible filter during the query, otherwise filter after response is\n # received.\n def list_records(self, type=None, name=None, content=None, id=None):\n\n\n records = []\n raw_records = self.client.domains_dns_getHosts(self.domain)\n for record in raw_records:\n records.append(self._convert_to_lexicon(record))\n\n if id:\n records = [record for record in records if record['id'] == id]\n if type:\n records = [record for record in records if record['type'] == type]\n if name:\n if name.endswith('.'):\n name = name[:-1]\n records = [record for record in records if name in record['name'] ]\n if content:\n records = [record for record in records if record['content'].lower() == content.lower()]\n\n logger.debug('list_records: %s', records)\n return records\n\n # Create or update a record.\n def update_record(self, identifier, type=None, name=None, content=None):\n # Delete record if it exists\n self.delete_record(identifier, type, name, content)\n return self.create_record(type, name, content)\n\n # Delete an existing record.\n # If record does not exist, do nothing.\n def delete_record(self, identifier=None, type=None, name=None, content=None):\n\n record = self.list_records(type=type, name=name, content=content, id=identifier)\n if record:\n self.client.domains_dns_delHost(self.domain, self._convert_to_namecheap(record[0]))\n return True\n else:\n return False\n\n def _convert_to_namecheap(self, record):\n \"\"\" converts from lexicon format record to namecheap format record,\n suitable to sending through the api to namecheap\"\"\"\n\n name = record['name']\n if name.endswith('.'):\n name = name[:-1]\n\n short_name = name[:name.find(self.domain)-1]\n processed_record = {\n 'Type': record['type'],\n 'Name': short_name,\n 'TTL': record['ttl'],\n 'Address': record['content'],\n 'HostId': record['id']\n }\n\n return processed_record\n\n def _convert_to_lexicon(self, record):\n \"\"\" converts from namecheap raw record format to lexicon format record\n \"\"\"\n\n name = record['Name']\n if self.domain not in name:\n name = \"{}.{}\".format(name,self.domain)\n\n processed_record = {\n 'type': record['Type'],\n 'name': '{0}.{1}'.format(record['Name'], self.domain),\n 'ttl': record['TTL'],\n 'content': record['Address'],\n 'id': record['HostId']\n }\n\n return processed_record\n", "path": "lexicon/providers/namecheap.py"}]}
| 1,769 | 97 |
gh_patches_debug_10924
|
rasdani/github-patches
|
git_diff
|
googleapis__google-auth-library-python-640
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TODO: undo pin of 'aiohttp' once 'aioresponses' releases a fix
Environment details
- OS: $ sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G6020
- Python version: 3.6, 3.7, 3.8
- pip version: pip 20.2.4
- `google-auth` version: 5906c8583ca351b5385a079a30521a9a8a0c7c59
#### Steps to reproduce
1. nox -s unit
There are 9 tests that fail, all with the same error:
`TypeError: __init__() missing 1 required positional argument: 'limit'`
```
====================================================== short test summary info =======================================================
FAILED tests_async/transport/test_aiohttp_requests.py::TestCombinedResponse::test_content_compressed - TypeError: __init__() missin...
FAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_headers_prop - TypeError: __init__() missing 1 required p...
FAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_status_prop - TypeError: __init__() missing 1 required po...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request - TypeError: __init__() missing 1 requir...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_ctx - TypeError: __init__() missing 1 required p...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_http_headers - TypeError: __init__() missing 1 r...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_regexp_example - TypeError: __init__() missing 1...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_no_refresh - TypeError: __init__() missi...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_refresh - TypeError: __init__() missing ...
============================================ 9 failed, 609 passed, 12 warnings in 33.41s =============================================
```
Here is the traceback for one of the failing tests:
```
____________________________________________ TestCombinedResponse.test_content_compressed ____________________________________________
self = <tests_async.transport.test_aiohttp_requests.TestCombinedResponse object at 0x108803160>
urllib3_mock = <function decompress at 0x10880a820>
@mock.patch(
"google.auth.transport._aiohttp_requests.urllib3.response.MultiDecoder.decompress",
return_value="decompressed",
autospec=True,
)
@pytest.mark.asyncio
async def test_content_compressed(self, urllib3_mock):
rm = core.RequestMatch(
"url", headers={"Content-Encoding": "gzip"}, payload="compressed"
)
> response = await rm.build_response(core.URL("url"))
tests_async/transport/test_aiohttp_requests.py:72:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:192: in build_response
resp = self._build_response(
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:173: in _build_response
resp.content = stream_reader_factory(loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
loop = <Mock id='4437587472'>
def stream_reader_factory( # noqa
loop: 'Optional[asyncio.AbstractEventLoop]' = None
):
protocol = ResponseHandler(loop=loop)
> return StreamReader(protocol, loop=loop)
E TypeError: __init__() missing 1 required positional argument: 'limit'
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/compat.py:48: TypeError
========================================================== warnings summary ==========================================================
```
The root cause is a change in aiohttp version 3.7.0 which was released a few hours ago. The signature for StreamReader has changed, making the optional argument `limit` a required argument.
https://github.com/aio-libs/aiohttp/blob/56e78836aa7c67292ace9e256711699d51d57285/aiohttp/streams.py#L106
This change breaks aioresponses:
https://github.com/pnuckowski/aioresponses/blob/e61977f42a0164e0c572031dfb18ae95ba198df0/aioresponses/compat.py#L44
--- END ISSUE ---
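To make the incompatibility concrete, the standalone sketch below (not project code; it assumes Python 3.7+ and aiohttp >= 3.7 are installed) shows the constructor change described above: the read-buffer `limit` is now a required positional argument of `StreamReader`.
```python
import asyncio

from aiohttp.client_proto import ResponseHandler
from aiohttp.streams import StreamReader


async def main() -> None:
    loop = asyncio.get_running_loop()
    protocol = ResponseHandler(loop=loop)
    # aiohttp < 3.7 accepted StreamReader(protocol, loop=loop); since 3.7 the
    # buffer limit (aiohttp's default is 2 ** 16) must be passed explicitly.
    reader = StreamReader(protocol, 2 ** 16, loop=loop)
    reader.feed_data(b"ok")
    reader.feed_eof()
    print(await reader.read())  # b'ok'


asyncio.run(main())
```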
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import nox
16
17 TEST_DEPENDENCIES = [
18 "flask",
19 "freezegun",
20 "mock",
21 "oauth2client",
22 "pyopenssl",
23 "pytest",
24 "pytest-cov",
25 "pytest-localserver",
26 "requests",
27 "urllib3",
28 "cryptography",
29 "responses",
30 "grpcio",
31 ]
32
33 ASYNC_DEPENDENCIES = [
34 "pytest-asyncio",
35 "aiohttp < 3.7.0dev",
36 "aioresponses",
37 "asynctest",
38 ]
39
40 BLACK_VERSION = "black==19.3b0"
41 BLACK_PATHS = [
42 "google",
43 "tests",
44 "tests_async",
45 "noxfile.py",
46 "setup.py",
47 "docs/conf.py",
48 ]
49
50
51 @nox.session(python="3.7")
52 def lint(session):
53 session.install("flake8", "flake8-import-order", "docutils", BLACK_VERSION)
54 session.install(".")
55 session.run("black", "--check", *BLACK_PATHS)
56 session.run(
57 "flake8",
58 "--import-order-style=google",
59 "--application-import-names=google,tests,system_tests",
60 "google",
61 "tests",
62 "tests_async",
63 )
64 session.run(
65 "python", "setup.py", "check", "--metadata", "--restructuredtext", "--strict"
66 )
67
68
69 @nox.session(python="3.6")
70 def blacken(session):
71 """Run black.
72
73 Format code to uniform standard.
74
75 This currently uses Python 3.6 due to the automated Kokoro run of synthtool.
76 That run uses an image that doesn't have 3.6 installed. Before updating this
77 check the state of the `gcp_ubuntu_config` we use for that Kokoro run.
78 """
79 session.install(BLACK_VERSION)
80 session.run("black", *BLACK_PATHS)
81
82
83 @nox.session(python=["3.6", "3.7", "3.8"])
84 def unit(session):
85 session.install(*TEST_DEPENDENCIES)
86 session.install(*(ASYNC_DEPENDENCIES))
87 session.install(".")
88 session.run(
89 "pytest",
90 "--cov=google.auth",
91 "--cov=google.oauth2",
92 "--cov=tests",
93 "tests",
94 "tests_async",
95 )
96
97
98 @nox.session(python=["2.7", "3.5"])
99 def unit_prev_versions(session):
100 session.install(*TEST_DEPENDENCIES)
101 session.install(".")
102 session.run(
103 "pytest", "--cov=google.auth", "--cov=google.oauth2", "--cov=tests", "tests"
104 )
105
106
107 @nox.session(python="3.7")
108 def cover(session):
109 session.install(*TEST_DEPENDENCIES)
110 session.install(*(ASYNC_DEPENDENCIES))
111 session.install(".")
112 session.run(
113 "pytest",
114 "--cov=google.auth",
115 "--cov=google.oauth2",
116 "--cov=tests",
117 "--cov=tests_async",
118 "--cov-report=",
119 "tests",
120 "tests_async",
121 )
122 session.run("coverage", "report", "--show-missing", "--fail-under=100")
123
124
125 @nox.session(python="3.7")
126 def docgen(session):
127 session.env["SPHINX_APIDOC_OPTIONS"] = "members,inherited-members,show-inheritance"
128 session.install(*TEST_DEPENDENCIES)
129 session.install("sphinx")
130 session.install(".")
131 session.run("rm", "-r", "docs/reference")
132 session.run(
133 "sphinx-apidoc",
134 "--output-dir",
135 "docs/reference",
136 "--separate",
137 "--module-first",
138 "google",
139 )
140
141
142 @nox.session(python="3.7")
143 def docs(session):
144 session.install("sphinx", "-r", "docs/requirements-docs.txt")
145 session.install(".")
146 session.run("make", "-C", "docs", "html")
147
148
149 @nox.session(python="pypy")
150 def pypy(session):
151 session.install(*TEST_DEPENDENCIES)
152 session.install(*ASYNC_DEPENDENCIES)
153 session.install(".")
154 session.run(
155 "pytest",
156 "--cov=google.auth",
157 "--cov=google.oauth2",
158 "--cov=tests",
159 "tests",
160 "tests_async",
161 )
162
```
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16
17 from setuptools import find_packages
18 from setuptools import setup
19
20
21 DEPENDENCIES = (
22 "cachetools>=2.0.0,<5.0",
23 "pyasn1-modules>=0.2.1",
24 # rsa==4.5 is the last version to support 2.7
25 # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233
26 'rsa<4.6; python_version < "3.5"',
27 'rsa>=3.1.4,<5; python_version >= "3.5"',
28 "setuptools>=40.3.0",
29 "six>=1.9.0",
30 )
31
32 extras = {"aiohttp": "aiohttp >= 3.6.2, < 3.7.0dev; python_version>='3.6'"}
33
34 with io.open("README.rst", "r") as fh:
35 long_description = fh.read()
36
37 version = "1.22.1"
38
39 setup(
40 name="google-auth",
41 version=version,
42 author="Google Cloud Platform",
43 author_email="[email protected]",
44 description="Google Authentication Library",
45 long_description=long_description,
46 url="https://github.com/googleapis/google-auth-library-python",
47 packages=find_packages(exclude=("tests*", "system_tests*")),
48 namespace_packages=("google",),
49 install_requires=DEPENDENCIES,
50 extras_require=extras,
51 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
52 license="Apache 2.0",
53 keywords="google auth oauth client",
54 classifiers=[
55 "Programming Language :: Python :: 2",
56 "Programming Language :: Python :: 2.7",
57 "Programming Language :: Python :: 3",
58 "Programming Language :: Python :: 3.5",
59 "Programming Language :: Python :: 3.6",
60 "Programming Language :: Python :: 3.7",
61 "Programming Language :: Python :: 3.8",
62 "Development Status :: 5 - Production/Stable",
63 "Intended Audience :: Developers",
64 "License :: OSI Approved :: Apache Software License",
65 "Operating System :: POSIX",
66 "Operating System :: Microsoft :: Windows",
67 "Operating System :: MacOS :: MacOS X",
68 "Operating System :: OS Independent",
69 "Topic :: Internet :: WWW/HTTP",
70 ],
71 )
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -30,12 +30,7 @@
"grpcio",
]
-ASYNC_DEPENDENCIES = [
- "pytest-asyncio",
- "aiohttp < 3.7.0dev",
- "aioresponses",
- "asynctest",
-]
+ASYNC_DEPENDENCIES = ["pytest-asyncio", "aioresponses", "asynctest"]
BLACK_VERSION = "black==19.3b0"
BLACK_PATHS = [
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,7 @@
"six>=1.9.0",
)
-extras = {"aiohttp": "aiohttp >= 3.6.2, < 3.7.0dev; python_version>='3.6'"}
+extras = {"aiohttp": "aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'"}
with io.open("README.rst", "r") as fh:
long_description = fh.read()
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -30,12 +30,7 @@\n \"grpcio\",\n ]\n \n-ASYNC_DEPENDENCIES = [\n- \"pytest-asyncio\",\n- \"aiohttp < 3.7.0dev\",\n- \"aioresponses\",\n- \"asynctest\",\n-]\n+ASYNC_DEPENDENCIES = [\"pytest-asyncio\", \"aioresponses\", \"asynctest\"]\n \n BLACK_VERSION = \"black==19.3b0\"\n BLACK_PATHS = [\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,7 +29,7 @@\n \"six>=1.9.0\",\n )\n \n-extras = {\"aiohttp\": \"aiohttp >= 3.6.2, < 3.7.0dev; python_version>='3.6'\"}\n+extras = {\"aiohttp\": \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\"}\n \n with io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n", "issue": "TODO: undo pin of 'aiohttp' once 'aioresponses' releases a fix\nEnvironment details\r\n\r\n - OS: $ sw_vers\r\nProductName: Mac OS X\r\nProductVersion: 10.14.6\r\nBuildVersion: 18G6020\r\n\r\n - Python version: 3.6, 3.7, 3.8\r\n - pip version: pip 20.2.4\r\n - `google-auth` version: 5906c8583ca351b5385a079a30521a9a8a0c7c59\r\n\r\n#### Steps to reproduce\r\n\r\n 1. nox -s unit\r\n\r\n\r\nThere are 9 tests that fail, all with the same error:\r\n\r\n`TypeError: __init__() missing 1 required positional argument: 'limit'`\r\n\r\n\r\n```\r\n====================================================== short test summary info =======================================================\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestCombinedResponse::test_content_compressed - TypeError: __init__() missin...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_headers_prop - TypeError: __init__() missing 1 required p...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_status_prop - TypeError: __init__() missing 1 required po...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request - TypeError: __init__() missing 1 requir...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_ctx - TypeError: __init__() missing 1 required p...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_http_headers - TypeError: __init__() missing 1 r...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_regexp_example - TypeError: __init__() missing 1...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_no_refresh - TypeError: __init__() missi...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_refresh - TypeError: __init__() missing ...\r\n============================================ 9 failed, 609 passed, 12 warnings in 33.41s =============================================\r\n```\r\n\r\nHere is the traceback for one of the failing tests:\r\n\r\n\r\n```\r\n____________________________________________ TestCombinedResponse.test_content_compressed ____________________________________________\r\n\r\nself = <tests_async.transport.test_aiohttp_requests.TestCombinedResponse object at 0x108803160>\r\nurllib3_mock = <function decompress at 0x10880a820>\r\n\r\n @mock.patch(\r\n \"google.auth.transport._aiohttp_requests.urllib3.response.MultiDecoder.decompress\",\r\n return_value=\"decompressed\",\r\n autospec=True,\r\n )\r\n @pytest.mark.asyncio\r\n async def test_content_compressed(self, urllib3_mock):\r\n rm = core.RequestMatch(\r\n \"url\", headers={\"Content-Encoding\": \"gzip\"}, payload=\"compressed\"\r\n )\r\n> 
response = await rm.build_response(core.URL(\"url\"))\r\n\r\ntests_async/transport/test_aiohttp_requests.py:72: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:192: in build_response\r\n resp = self._build_response(\r\n../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:173: in _build_response\r\n resp.content = stream_reader_factory(loop)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nloop = <Mock id='4437587472'>\r\n\r\n def stream_reader_factory( # noqa\r\n loop: 'Optional[asyncio.AbstractEventLoop]' = None\r\n ):\r\n protocol = ResponseHandler(loop=loop)\r\n> return StreamReader(protocol, loop=loop)\r\nE TypeError: __init__() missing 1 required positional argument: 'limit'\r\n\r\n../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/compat.py:48: TypeError\r\n========================================================== warnings summary ==========================================================\r\n```\r\n\r\nThe root cause is a change in aiohttp version 3.7.0 which was released a few hours ago. The signature for StreamReader has changed, making the optional argument `limit` a required argument.\r\n\r\nhttps://github.com/aio-libs/aiohttp/blob/56e78836aa7c67292ace9e256711699d51d57285/aiohttp/streams.py#L106\r\n\r\nThis change breaks aioresponses:\r\n\r\nhttps://github.com/pnuckowski/aioresponses/blob/e61977f42a0164e0c572031dfb18ae95ba198df0/aioresponses/compat.py#L44\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport nox\n\nTEST_DEPENDENCIES = [\n \"flask\",\n \"freezegun\",\n \"mock\",\n \"oauth2client\",\n \"pyopenssl\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-localserver\",\n \"requests\",\n \"urllib3\",\n \"cryptography\",\n \"responses\",\n \"grpcio\",\n]\n\nASYNC_DEPENDENCIES = [\n \"pytest-asyncio\",\n \"aiohttp < 3.7.0dev\",\n \"aioresponses\",\n \"asynctest\",\n]\n\nBLACK_VERSION = \"black==19.3b0\"\nBLACK_PATHS = [\n \"google\",\n \"tests\",\n \"tests_async\",\n \"noxfile.py\",\n \"setup.py\",\n \"docs/conf.py\",\n]\n\n\[email protected](python=\"3.7\")\ndef lint(session):\n session.install(\"flake8\", \"flake8-import-order\", \"docutils\", BLACK_VERSION)\n session.install(\".\")\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n session.run(\n \"flake8\",\n \"--import-order-style=google\",\n \"--application-import-names=google,tests,system_tests\",\n \"google\",\n \"tests\",\n \"tests_async\",\n )\n session.run(\n \"python\", \"setup.py\", \"check\", \"--metadata\", \"--restructuredtext\", \"--strict\"\n )\n\n\[email protected](python=\"3.6\")\ndef blacken(session):\n \"\"\"Run black.\n\n Format code to uniform standard.\n\n This 
currently uses Python 3.6 due to the automated Kokoro run of synthtool.\n That run uses an image that doesn't have 3.6 installed. Before updating this\n check the state of the `gcp_ubuntu_config` we use for that Kokoro run.\n \"\"\"\n session.install(BLACK_VERSION)\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\"])\ndef unit(session):\n session.install(*TEST_DEPENDENCIES)\n session.install(*(ASYNC_DEPENDENCIES))\n session.install(\".\")\n session.run(\n \"pytest\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"tests\",\n \"tests_async\",\n )\n\n\[email protected](python=[\"2.7\", \"3.5\"])\ndef unit_prev_versions(session):\n session.install(*TEST_DEPENDENCIES)\n session.install(\".\")\n session.run(\n \"pytest\", \"--cov=google.auth\", \"--cov=google.oauth2\", \"--cov=tests\", \"tests\"\n )\n\n\[email protected](python=\"3.7\")\ndef cover(session):\n session.install(*TEST_DEPENDENCIES)\n session.install(*(ASYNC_DEPENDENCIES))\n session.install(\".\")\n session.run(\n \"pytest\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"--cov=tests_async\",\n \"--cov-report=\",\n \"tests\",\n \"tests_async\",\n )\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n\n\[email protected](python=\"3.7\")\ndef docgen(session):\n session.env[\"SPHINX_APIDOC_OPTIONS\"] = \"members,inherited-members,show-inheritance\"\n session.install(*TEST_DEPENDENCIES)\n session.install(\"sphinx\")\n session.install(\".\")\n session.run(\"rm\", \"-r\", \"docs/reference\")\n session.run(\n \"sphinx-apidoc\",\n \"--output-dir\",\n \"docs/reference\",\n \"--separate\",\n \"--module-first\",\n \"google\",\n )\n\n\[email protected](python=\"3.7\")\ndef docs(session):\n session.install(\"sphinx\", \"-r\", \"docs/requirements-docs.txt\")\n session.install(\".\")\n session.run(\"make\", \"-C\", \"docs\", \"html\")\n\n\[email protected](python=\"pypy\")\ndef pypy(session):\n session.install(*TEST_DEPENDENCIES)\n session.install(*ASYNC_DEPENDENCIES)\n session.install(\".\")\n session.run(\n \"pytest\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"tests\",\n \"tests_async\",\n )\n", "path": "noxfile.py"}, {"content": "# Copyright 2014 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\nDEPENDENCIES = (\n \"cachetools>=2.0.0,<5.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n 'rsa<4.6; python_version < \"3.5\"',\n 'rsa>=3.1.4,<5; python_version >= \"3.5\"',\n \"setuptools>=40.3.0\",\n \"six>=1.9.0\",\n)\n\nextras = {\"aiohttp\": \"aiohttp >= 3.6.2, < 3.7.0dev; python_version>='3.6'\"}\n\nwith io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n\nversion = \"1.22.1\"\n\nsetup(\n name=\"google-auth\",\n version=version,\n author=\"Google Cloud 
Platform\",\n author_email=\"[email protected]\",\n description=\"Google Authentication Library\",\n long_description=long_description,\n url=\"https://github.com/googleapis/google-auth-library-python\",\n packages=find_packages(exclude=(\"tests*\", \"system_tests*\")),\n namespace_packages=(\"google\",),\n install_requires=DEPENDENCIES,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n license=\"Apache 2.0\",\n keywords=\"google auth oauth client\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport nox\n\nTEST_DEPENDENCIES = [\n \"flask\",\n \"freezegun\",\n \"mock\",\n \"oauth2client\",\n \"pyopenssl\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-localserver\",\n \"requests\",\n \"urllib3\",\n \"cryptography\",\n \"responses\",\n \"grpcio\",\n]\n\nASYNC_DEPENDENCIES = [\"pytest-asyncio\", \"aioresponses\", \"asynctest\"]\n\nBLACK_VERSION = \"black==19.3b0\"\nBLACK_PATHS = [\n \"google\",\n \"tests\",\n \"tests_async\",\n \"noxfile.py\",\n \"setup.py\",\n \"docs/conf.py\",\n]\n\n\[email protected](python=\"3.7\")\ndef lint(session):\n session.install(\"flake8\", \"flake8-import-order\", \"docutils\", BLACK_VERSION)\n session.install(\".\")\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n session.run(\n \"flake8\",\n \"--import-order-style=google\",\n \"--application-import-names=google,tests,system_tests\",\n \"google\",\n \"tests\",\n \"tests_async\",\n )\n session.run(\n \"python\", \"setup.py\", \"check\", \"--metadata\", \"--restructuredtext\", \"--strict\"\n )\n\n\[email protected](python=\"3.6\")\ndef blacken(session):\n \"\"\"Run black.\n\n Format code to uniform standard.\n\n This currently uses Python 3.6 due to the automated Kokoro run of synthtool.\n That run uses an image that doesn't have 3.6 installed. 
Before updating this\n check the state of the `gcp_ubuntu_config` we use for that Kokoro run.\n \"\"\"\n session.install(BLACK_VERSION)\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\"])\ndef unit(session):\n session.install(*TEST_DEPENDENCIES)\n session.install(*(ASYNC_DEPENDENCIES))\n session.install(\".\")\n session.run(\n \"pytest\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"tests\",\n \"tests_async\",\n )\n\n\[email protected](python=[\"2.7\", \"3.5\"])\ndef unit_prev_versions(session):\n session.install(*TEST_DEPENDENCIES)\n session.install(\".\")\n session.run(\n \"pytest\", \"--cov=google.auth\", \"--cov=google.oauth2\", \"--cov=tests\", \"tests\"\n )\n\n\[email protected](python=\"3.7\")\ndef cover(session):\n session.install(*TEST_DEPENDENCIES)\n session.install(*(ASYNC_DEPENDENCIES))\n session.install(\".\")\n session.run(\n \"pytest\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"--cov=tests_async\",\n \"--cov-report=\",\n \"tests\",\n \"tests_async\",\n )\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n\n\[email protected](python=\"3.7\")\ndef docgen(session):\n session.env[\"SPHINX_APIDOC_OPTIONS\"] = \"members,inherited-members,show-inheritance\"\n session.install(*TEST_DEPENDENCIES)\n session.install(\"sphinx\")\n session.install(\".\")\n session.run(\"rm\", \"-r\", \"docs/reference\")\n session.run(\n \"sphinx-apidoc\",\n \"--output-dir\",\n \"docs/reference\",\n \"--separate\",\n \"--module-first\",\n \"google\",\n )\n\n\[email protected](python=\"3.7\")\ndef docs(session):\n session.install(\"sphinx\", \"-r\", \"docs/requirements-docs.txt\")\n session.install(\".\")\n session.run(\"make\", \"-C\", \"docs\", \"html\")\n\n\[email protected](python=\"pypy\")\ndef pypy(session):\n session.install(*TEST_DEPENDENCIES)\n session.install(*ASYNC_DEPENDENCIES)\n session.install(\".\")\n session.run(\n \"pytest\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"tests\",\n \"tests_async\",\n )\n", "path": "noxfile.py"}, {"content": "# Copyright 2014 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\nDEPENDENCIES = (\n \"cachetools>=2.0.0,<5.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n 'rsa<4.6; python_version < \"3.5\"',\n 'rsa>=3.1.4,<5; python_version >= \"3.5\"',\n \"setuptools>=40.3.0\",\n \"six>=1.9.0\",\n)\n\nextras = {\"aiohttp\": \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\"}\n\nwith io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n\nversion = \"1.22.1\"\n\nsetup(\n name=\"google-auth\",\n version=version,\n author=\"Google Cloud Platform\",\n author_email=\"[email protected]\",\n description=\"Google Authentication Library\",\n 
long_description=long_description,\n url=\"https://github.com/googleapis/google-auth-library-python\",\n packages=find_packages(exclude=(\"tests*\", \"system_tests*\")),\n namespace_packages=(\"google\",),\n install_requires=DEPENDENCIES,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n license=\"Apache 2.0\",\n keywords=\"google auth oauth client\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}]}
| 3,694 | 277 |
gh_patches_debug_60787 | rasdani/github-patches | git_diff | liqd__a4-product-1090 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
get_newsletters during normal register is broken
If the newsletter checkbox is ticked during normal registration, the user still ends up with get_newsletters = False. Changing the option afterwards in the account settings works as expected.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/users/forms.py`
Content:
```
1 from allauth.socialaccount.adapter import get_adapter
2 from allauth.utils import email_address_exists
3 from django import forms
4 from django.contrib.auth import forms as auth_forms
5 from django.contrib.auth import get_user_model
6 from django.utils.translation import ugettext_lazy as _
7
8 User = get_user_model()
9
10
11 class TermsSignupForm(auth_forms.UserCreationForm):
12 terms_of_use = forms.BooleanField(label=_('Terms of use'), error_messages={
13 'required': _('Please accept the terms of use.')
14 })
15
16 def signup(self, request, user):
17 user.signup(
18 self.cleaned_data['username'],
19 self.cleaned_data['email'],
20 )
21
22 class Meta:
23 model = User
24 fields = ('email', 'username', 'password1', 'password2',
25 'terms_of_use', 'get_newsletters')
26
27 # Tried to add form as described in allauth documentation:
28 # https://django-allauth.readthedocs.io/en/latest/forms.html#socialaccount-forms
29 # ran into the following error:
30 # https://stackoverflow.com/questions/57254251/custom-form-with-socialaccount-in-django-allauth
31 # added this solution, maybe not the best
32
33
34 class SignupForm(forms.Form):
35 terms_of_use = forms.BooleanField(label=_('Terms of use'), error_messages={
36 'required': _('Please accept the terms of use.')
37 })
38 get_newsletters = forms.BooleanField(
39 label=_('Send me newsletters'), required=False)
40 email = forms.EmailField(widget=forms.HiddenInput())
41 username = forms.CharField(widget=forms.HiddenInput())
42
43 def __init__(self, *args, **kwargs):
44 self.sociallogin = kwargs.pop('sociallogin')
45 initial = get_adapter().get_signup_form_initial_data(
46 self.sociallogin)
47 kwargs.update({
48 'initial': initial})
49 super().__init__(*args, **kwargs)
50
51 def save(self, request):
52 adapter = get_adapter(request)
53 user = adapter.save_user(request, self.sociallogin, form=self)
54 user.get_newsletters = self.cleaned_data['get_newsletters']
55 user.save()
56 user.signup(
57 user.username,
58 user.email
59 )
60 return user
61
62 def clean(self):
63 email = self.cleaned_data['email']
64 if email_address_exists(email):
65 raise forms.ValidationError(
66 get_adapter().error_messages['email_taken']
67 % self.sociallogin.account.get_provider().name)
68 return super().clean()
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/users/forms.py b/apps/users/forms.py
--- a/apps/users/forms.py
+++ b/apps/users/forms.py
@@ -14,6 +14,7 @@
})
def signup(self, request, user):
+ user.get_newsletters = self.cleaned_data["get_newsletters"]
user.signup(
self.cleaned_data['username'],
self.cleaned_data['email'],
|
{"golden_diff": "diff --git a/apps/users/forms.py b/apps/users/forms.py\n--- a/apps/users/forms.py\n+++ b/apps/users/forms.py\n@@ -14,6 +14,7 @@\n })\n \n def signup(self, request, user):\n+ user.get_newsletters = self.cleaned_data[\"get_newsletters\"]\n user.signup(\n self.cleaned_data['username'],\n self.cleaned_data['email'],\n", "issue": "get_newsletters during normal register is broken\nIf checked, the user still has get_newsletters = False. But when changed in the account settings, it's changed.\n", "before_files": [{"content": "from allauth.socialaccount.adapter import get_adapter\nfrom allauth.utils import email_address_exists\nfrom django import forms\nfrom django.contrib.auth import forms as auth_forms\nfrom django.contrib.auth import get_user_model\nfrom django.utils.translation import ugettext_lazy as _\n\nUser = get_user_model()\n\n\nclass TermsSignupForm(auth_forms.UserCreationForm):\n terms_of_use = forms.BooleanField(label=_('Terms of use'), error_messages={\n 'required': _('Please accept the terms of use.')\n })\n\n def signup(self, request, user):\n user.signup(\n self.cleaned_data['username'],\n self.cleaned_data['email'],\n )\n\n class Meta:\n model = User\n fields = ('email', 'username', 'password1', 'password2',\n 'terms_of_use', 'get_newsletters')\n\n# Tried to add form as described in allauth documentation:\n# https://django-allauth.readthedocs.io/en/latest/forms.html#socialaccount-forms\n# ran into the following error:\n# https://stackoverflow.com/questions/57254251/custom-form-with-socialaccount-in-django-allauth\n# added this solution, maybe not the best\n\n\nclass SignupForm(forms.Form):\n terms_of_use = forms.BooleanField(label=_('Terms of use'), error_messages={\n 'required': _('Please accept the terms of use.')\n })\n get_newsletters = forms.BooleanField(\n label=_('Send me newsletters'), required=False)\n email = forms.EmailField(widget=forms.HiddenInput())\n username = forms.CharField(widget=forms.HiddenInput())\n\n def __init__(self, *args, **kwargs):\n self.sociallogin = kwargs.pop('sociallogin')\n initial = get_adapter().get_signup_form_initial_data(\n self.sociallogin)\n kwargs.update({\n 'initial': initial})\n super().__init__(*args, **kwargs)\n\n def save(self, request):\n adapter = get_adapter(request)\n user = adapter.save_user(request, self.sociallogin, form=self)\n user.get_newsletters = self.cleaned_data['get_newsletters']\n user.save()\n user.signup(\n user.username,\n user.email\n )\n return user\n\n def clean(self):\n email = self.cleaned_data['email']\n if email_address_exists(email):\n raise forms.ValidationError(\n get_adapter().error_messages['email_taken']\n % self.sociallogin.account.get_provider().name)\n return super().clean()\n", "path": "apps/users/forms.py"}], "after_files": [{"content": "from allauth.socialaccount.adapter import get_adapter\nfrom allauth.utils import email_address_exists\nfrom django import forms\nfrom django.contrib.auth import forms as auth_forms\nfrom django.contrib.auth import get_user_model\nfrom django.utils.translation import ugettext_lazy as _\n\nUser = get_user_model()\n\n\nclass TermsSignupForm(auth_forms.UserCreationForm):\n terms_of_use = forms.BooleanField(label=_('Terms of use'), error_messages={\n 'required': _('Please accept the terms of use.')\n })\n\n def signup(self, request, user):\n user.get_newsletters = self.cleaned_data[\"get_newsletters\"]\n user.signup(\n self.cleaned_data['username'],\n self.cleaned_data['email'],\n )\n\n class Meta:\n model = User\n fields = ('email', 'username', 
'password1', 'password2',\n 'terms_of_use', 'get_newsletters')\n\n# Tried to add form as described in allauth documentation:\n# https://django-allauth.readthedocs.io/en/latest/forms.html#socialaccount-forms\n# ran into the following error:\n# https://stackoverflow.com/questions/57254251/custom-form-with-socialaccount-in-django-allauth\n# added this solution, maybe not the best\n\n\nclass SignupForm(forms.Form):\n terms_of_use = forms.BooleanField(label=_('Terms of use'), error_messages={\n 'required': _('Please accept the terms of use.')\n })\n get_newsletters = forms.BooleanField(\n label=_('Send me newsletters'), required=False)\n email = forms.EmailField(widget=forms.HiddenInput())\n username = forms.CharField(widget=forms.HiddenInput())\n\n def __init__(self, *args, **kwargs):\n self.sociallogin = kwargs.pop('sociallogin')\n initial = get_adapter().get_signup_form_initial_data(\n self.sociallogin)\n kwargs.update({\n 'initial': initial})\n super().__init__(*args, **kwargs)\n\n def save(self, request):\n adapter = get_adapter(request)\n user = adapter.save_user(request, self.sociallogin, form=self)\n user.get_newsletters = self.cleaned_data['get_newsletters']\n user.save()\n user.signup(\n user.username,\n user.email\n )\n return user\n\n def clean(self):\n email = self.cleaned_data['email']\n if email_address_exists(email):\n raise forms.ValidationError(\n get_adapter().error_messages['email_taken']\n % self.sociallogin.account.get_provider().name)\n return super().clean()\n", "path": "apps/users/forms.py"}]}
| 926 | 85 |
gh_patches_debug_24231 | rasdani/github-patches | git_diff | MycroftAI__mycroft-core-1023 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import useful classes in mycroft package
This is open for discussion, but I think it is worth importing a few classes/functions in `mycroft/__init__.py` so that skills can do things like:
```Python
from mycroft import MycroftSkill # vs. from mycroft.skills.core import MycroftSkill
class MySkill(MycroftSkill):
# ...
```
Or:
```Python
from mycroft import FallbackSkill, play_wav
class MediaFallback(FallbackSkill):
def handle_fallback(self, message):
# ...
play_wav('my_song.wav')
        # ...
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mycroft/__init__.py`
Content:
```
1 from os.path import abspath, dirname, join
2
3 __author__ = 'seanfitz'
4
5 MYCROFT_ROOT_PATH = abspath(join(dirname(__file__), '..'))
6
```
Path: `mycroft/util/__init__.py`
Content:
```
1 # Copyright 2016 Mycroft AI, Inc.
2 #
3 # This file is part of Mycroft Core.
4 #
5 # Mycroft Core is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Mycroft Core is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Mycroft Core. If not, see <http://www.gnu.org/licenses/>.
17
18
19 import socket
20 import subprocess
21 import tempfile
22 import time
23
24 import os
25 import os.path
26 import time
27 from stat import S_ISREG, ST_MTIME, ST_MODE, ST_SIZE
28 import psutil
29 from mycroft.util.log import getLogger
30 from mycroft.util.signal import *
31 import mycroft.configuration
32 import mycroft.audio
33
34 __author__ = 'jdorleans'
35
36 logger = getLogger(__name__)
37
38
39 def resolve_resource_file(res_name):
40 """Convert a resource into an absolute filename.
41
42 Resource names are in the form: 'filename.ext'
43 or 'path/filename.ext'
44
45 The system wil look for ~/.mycroft/res_name first, and
46 if not found will look at /opt/mycroft/res_name,
47 then finally it will look for res_name in the 'mycroft/res'
48 folder of the source code package.
49
50 Example:
51 With mycroft running as the user 'bob', if you called
52 resolve_resource_file('snd/beep.wav')
53 it would return either '/home/bob/.mycroft/snd/beep.wav' or
54 '/opt/mycroft/snd/beep.wav' or '.../mycroft/res/snd/beep.wav',
55 where the '...' is replaced by the path where the package has
56 been installed.
57
58 Args:
59 res_name (str): a resource path/name
60 """
61
62 # First look for fully qualified file (e.g. a user setting)
63 if os.path.isfile(res_name):
64 return res_name
65
66 # Now look for ~/.mycroft/res_name (in user folder)
67 filename = os.path.expanduser("~/.mycroft/" + res_name)
68 if os.path.isfile(filename):
69 return filename
70
71 # Next look for /opt/mycroft/res/res_name
72 filename = os.path.expanduser("/opt/mycroft/" + res_name)
73 if os.path.isfile(filename):
74 return filename
75
76 # Finally look for it in the source package
77 filename = os.path.join(os.path.dirname(__file__), '..', 'res', res_name)
78 filename = os.path.abspath(os.path.normpath(filename))
79 if os.path.isfile(filename):
80 return filename
81
82 return None # Resource cannot be resolved
83
84
85 def play_wav(uri):
86 config = mycroft.configuration.ConfigurationManager.instance()
87 play_cmd = config.get("play_wav_cmdline")
88 play_wav_cmd = str(play_cmd).split(" ")
89 for index, cmd in enumerate(play_wav_cmd):
90 if cmd == "%1":
91 play_wav_cmd[index] = (get_http(uri))
92 return subprocess.Popen(play_wav_cmd)
93
94
95 def play_mp3(uri):
96 config = mycroft.configuration.ConfigurationManager.instance()
97 play_cmd = config.get("play_mp3_cmdline")
98 play_mp3_cmd = str(play_cmd).split(" ")
99 for index, cmd in enumerate(play_mp3_cmd):
100 if cmd == "%1":
101 play_mp3_cmd[index] = (get_http(uri))
102 return subprocess.Popen(play_mp3_cmd)
103
104
105 def record(file_path, duration, rate, channels):
106 if duration > 0:
107 return subprocess.Popen(
108 ["arecord", "-r", str(rate), "-c", str(channels), "-d",
109 str(duration), file_path])
110 else:
111 return subprocess.Popen(
112 ["arecord", "-r", str(rate), "-c", str(channels), file_path])
113
114
115 def get_http(uri):
116 return uri.replace("https://", "http://")
117
118
119 def remove_last_slash(url):
120 if url and url.endswith('/'):
121 url = url[:-1]
122 return url
123
124
125 def read_stripped_lines(filename):
126 with open(filename, 'r') as f:
127 return [line.strip() for line in f]
128
129
130 def read_dict(filename, div='='):
131 d = {}
132 with open(filename, 'r') as f:
133 for line in f:
134 (key, val) = line.split(div)
135 d[key.strip()] = val.strip()
136 return d
137
138
139 def connected(host="8.8.8.8", port=53, timeout=3):
140 """
141 Thanks to 7h3rAm on
142 Host: 8.8.8.8 (google-public-dns-a.google.com)
143 OpenPort: 53/tcp
144 Service: domain (DNS/TCP)
145
146 NOTE:
147 This is no longer in use by this version
148 New method checks for a connection using ConnectionError only when
149 a question is asked
150 """
151 try:
152 socket.setdefaulttimeout(timeout)
153 socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect((host, port))
154 return True
155 except IOError:
156 try:
157 socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(
158 ("8.8.4.4", port))
159 return True
160 except IOError:
161 return False
162
163
164 def curate_cache(dir, min_free_percent=5.0):
165 """Clear out the directory if needed
166
167 This assumes all the files in the directory can be deleted as freely
168
169 Args:
170 dir (str): directory path that holds cached files
171 min_free_percent (float): percentage (0.0-100.0) of drive to keep free
172 """
173
174 # Simpleminded implementation -- keep a certain percentage of the
175 # disk available.
176 # TODO: Would be easy to add more options, like whitelisted files, etc.
177 space = psutil.disk_usage(dir)
178
179 # space.percent = space.used/space.total*100.0
180 percent_free = 100.0-space.percent
181 if percent_free < min_free_percent:
182 # calculate how many bytes we need to delete
183 bytes_needed = (min_free_percent - percent_free) / 100.0 * space.total
184 bytes_needed = int(bytes_needed + 1.0)
185
186 # get all entries in the directory w/ stats
187 entries = (os.path.join(dir, fn) for fn in os.listdir(dir))
188 entries = ((os.stat(path), path) for path in entries)
189
190 # leave only regular files, insert modification date
191 entries = ((stat[ST_MTIME], stat[ST_SIZE], path)
192 for stat, path in entries if S_ISREG(stat[ST_MODE]))
193
194 # delete files with oldest modification date until space is freed
195 space_freed = 0
196 for moddate, fsize, path in sorted(entries):
197 try:
198 os.remove(path)
199 space_freed += fsize
200 except:
201 pass
202
203 if space_freed > bytes_needed:
204 return # deleted enough!
205
206
207 def get_cache_directory(domain=None):
208 """Get a directory for caching data
209
210 This directory can be used to hold temporary caches of data to
211 speed up performance. This directory will likely be part of a
212 small RAM disk and may be cleared at any time. So code that
213 uses these cached files must be able to fallback and regenerate
214 the file.
215
216 Args:
217 domain (str): The cache domain. Basically just a subdirectory.
218
219 Return:
220 str: a path to the directory where you can cache data
221 """
222 config = mycroft.configuration.ConfigurationManager.instance()
223 dir = config.get("cache_path")
224 if not dir:
225 # If not defined, use /tmp/mycroft/cache
226 dir = os.path.join(tempfile.gettempdir(), "mycroft", "cache")
227 return ensure_directory_exists(dir, domain)
228
229
230 def validate_param(value, name):
231 if not value:
232 raise ValueError("Missing or empty %s in mycroft.conf " % name)
233
234
235 def is_speaking():
236 """Determine if Text to Speech is occurring
237
238 Returns:
239 bool: True while still speaking
240 """
241 logger.info("mycroft.utils.is_speaking() is depreciated, use "
242 "mycroft.audio.is_speaking() instead.")
243 return mycroft.audio.is_speaking()
244
245
246 def wait_while_speaking():
247 """Pause as long as Text to Speech is still happening
248
249 Pause while Text to Speech is still happening. This always pauses
250 briefly to ensure that any preceeding request to speak has time to
251 begin.
252 """
253 logger.info("mycroft.utils.wait_while_speaking() is depreciated, use "
254 "mycroft.audio.wait_while_speaking() instead.")
255 return mycroft.audio.wait_while_speaking()
256
257
258 def stop_speaking():
259 # TODO: Less hacky approach to this once Audio Manager is implemented
260 # Skills should only be able to stop speech they've initiated
261 logger.info("mycroft.utils.stop_speaking() is depreciated, use "
262 "mycroft.audio.stop_speaking() instead.")
263 mycroft.audio.stop_speaking()
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mycroft/__init__.py b/mycroft/__init__.py
--- a/mycroft/__init__.py
+++ b/mycroft/__init__.py
@@ -1,3 +1,9 @@
+from mycroft.api import Api
+from mycroft.skills.core import MycroftSkill, FallbackSkill, \
+ intent_handler, intent_file_handler
+from mycroft.skills.context import adds_context, removes_context
+from mycroft.messagebus.message import Message
+
from os.path import abspath, dirname, join
__author__ = 'seanfitz'
diff --git a/mycroft/util/__init__.py b/mycroft/util/__init__.py
--- a/mycroft/util/__init__.py
+++ b/mycroft/util/__init__.py
@@ -15,6 +15,12 @@
# You should have received a copy of the GNU General Public License
# along with Mycroft Core. If not, see <http://www.gnu.org/licenses/>.
+# Officially exported methods from this file:
+# play_wav, play_mp3, get_cache_directory,
+# resolve_resource_file, wait_while_speaking
+from mycroft.util.log import getLogger
+from mycroft.util.parse import extract_datetime, extractnumber, normalize
+from mycroft.util.format import nice_number, convert_number
import socket
import subprocess
@@ -26,7 +32,6 @@
import time
from stat import S_ISREG, ST_MTIME, ST_MODE, ST_SIZE
import psutil
-from mycroft.util.log import getLogger
from mycroft.util.signal import *
import mycroft.configuration
import mycroft.audio
|
{"golden_diff": "diff --git a/mycroft/__init__.py b/mycroft/__init__.py\n--- a/mycroft/__init__.py\n+++ b/mycroft/__init__.py\n@@ -1,3 +1,9 @@\n+from mycroft.api import Api\n+from mycroft.skills.core import MycroftSkill, FallbackSkill, \\\n+ intent_handler, intent_file_handler\n+from mycroft.skills.context import adds_context, removes_context\n+from mycroft.messagebus.message import Message\n+\n from os.path import abspath, dirname, join\n \n __author__ = 'seanfitz'\ndiff --git a/mycroft/util/__init__.py b/mycroft/util/__init__.py\n--- a/mycroft/util/__init__.py\n+++ b/mycroft/util/__init__.py\n@@ -15,6 +15,12 @@\n # You should have received a copy of the GNU General Public License\n # along with Mycroft Core. If not, see <http://www.gnu.org/licenses/>.\n \n+# Officially exported methods from this file:\n+# play_wav, play_mp3, get_cache_directory,\n+# resolve_resource_file, wait_while_speaking\n+from mycroft.util.log import getLogger\n+from mycroft.util.parse import extract_datetime, extractnumber, normalize\n+from mycroft.util.format import nice_number, convert_number\n \n import socket\n import subprocess\n@@ -26,7 +32,6 @@\n import time\n from stat import S_ISREG, ST_MTIME, ST_MODE, ST_SIZE\n import psutil\n-from mycroft.util.log import getLogger\n from mycroft.util.signal import *\n import mycroft.configuration\n import mycroft.audio\n", "issue": "Import useful classes in mycroft package\nThis is open for discussion, but I think it is worth importing a few classes/functions in `mycroft/__init__.py` so that skills can do things like:\r\n```Python\r\nfrom mycroft import MycroftSkill # vs. from mycroft.skills.core import MycroftSkil\r\nclass MySkill(MycroftSkill):\r\n# ...\r\n```\r\n\r\nOr:\r\n```Python\r\nfrom mycroft import FallbackSkill, play_wav\r\nclass MediaFallback(FallbackSkill):\r\n def handle_fallback(self, message):\r\n # ...\r\n play_wav('my_song.wav')\r\n # ...\n", "before_files": [{"content": "from os.path import abspath, dirname, join\n\n__author__ = 'seanfitz'\n\nMYCROFT_ROOT_PATH = abspath(join(dirname(__file__), '..'))\n", "path": "mycroft/__init__.py"}, {"content": "# Copyright 2016 Mycroft AI, Inc.\n#\n# This file is part of Mycroft Core.\n#\n# Mycroft Core is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Mycroft Core is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Mycroft Core. 
If not, see <http://www.gnu.org/licenses/>.\n\n\nimport socket\nimport subprocess\nimport tempfile\nimport time\n\nimport os\nimport os.path\nimport time\nfrom stat import S_ISREG, ST_MTIME, ST_MODE, ST_SIZE\nimport psutil\nfrom mycroft.util.log import getLogger\nfrom mycroft.util.signal import *\nimport mycroft.configuration\nimport mycroft.audio\n\n__author__ = 'jdorleans'\n\nlogger = getLogger(__name__)\n\n\ndef resolve_resource_file(res_name):\n \"\"\"Convert a resource into an absolute filename.\n\n Resource names are in the form: 'filename.ext'\n or 'path/filename.ext'\n\n The system wil look for ~/.mycroft/res_name first, and\n if not found will look at /opt/mycroft/res_name,\n then finally it will look for res_name in the 'mycroft/res'\n folder of the source code package.\n\n Example:\n With mycroft running as the user 'bob', if you called\n resolve_resource_file('snd/beep.wav')\n it would return either '/home/bob/.mycroft/snd/beep.wav' or\n '/opt/mycroft/snd/beep.wav' or '.../mycroft/res/snd/beep.wav',\n where the '...' is replaced by the path where the package has\n been installed.\n\n Args:\n res_name (str): a resource path/name\n \"\"\"\n\n # First look for fully qualified file (e.g. a user setting)\n if os.path.isfile(res_name):\n return res_name\n\n # Now look for ~/.mycroft/res_name (in user folder)\n filename = os.path.expanduser(\"~/.mycroft/\" + res_name)\n if os.path.isfile(filename):\n return filename\n\n # Next look for /opt/mycroft/res/res_name\n filename = os.path.expanduser(\"/opt/mycroft/\" + res_name)\n if os.path.isfile(filename):\n return filename\n\n # Finally look for it in the source package\n filename = os.path.join(os.path.dirname(__file__), '..', 'res', res_name)\n filename = os.path.abspath(os.path.normpath(filename))\n if os.path.isfile(filename):\n return filename\n\n return None # Resource cannot be resolved\n\n\ndef play_wav(uri):\n config = mycroft.configuration.ConfigurationManager.instance()\n play_cmd = config.get(\"play_wav_cmdline\")\n play_wav_cmd = str(play_cmd).split(\" \")\n for index, cmd in enumerate(play_wav_cmd):\n if cmd == \"%1\":\n play_wav_cmd[index] = (get_http(uri))\n return subprocess.Popen(play_wav_cmd)\n\n\ndef play_mp3(uri):\n config = mycroft.configuration.ConfigurationManager.instance()\n play_cmd = config.get(\"play_mp3_cmdline\")\n play_mp3_cmd = str(play_cmd).split(\" \")\n for index, cmd in enumerate(play_mp3_cmd):\n if cmd == \"%1\":\n play_mp3_cmd[index] = (get_http(uri))\n return subprocess.Popen(play_mp3_cmd)\n\n\ndef record(file_path, duration, rate, channels):\n if duration > 0:\n return subprocess.Popen(\n [\"arecord\", \"-r\", str(rate), \"-c\", str(channels), \"-d\",\n str(duration), file_path])\n else:\n return subprocess.Popen(\n [\"arecord\", \"-r\", str(rate), \"-c\", str(channels), file_path])\n\n\ndef get_http(uri):\n return uri.replace(\"https://\", \"http://\")\n\n\ndef remove_last_slash(url):\n if url and url.endswith('/'):\n url = url[:-1]\n return url\n\n\ndef read_stripped_lines(filename):\n with open(filename, 'r') as f:\n return [line.strip() for line in f]\n\n\ndef read_dict(filename, div='='):\n d = {}\n with open(filename, 'r') as f:\n for line in f:\n (key, val) = line.split(div)\n d[key.strip()] = val.strip()\n return d\n\n\ndef connected(host=\"8.8.8.8\", port=53, timeout=3):\n \"\"\"\n Thanks to 7h3rAm on\n Host: 8.8.8.8 (google-public-dns-a.google.com)\n OpenPort: 53/tcp\n Service: domain (DNS/TCP)\n\n NOTE:\n This is no longer in use by this version\n New method checks for a connection 
using ConnectionError only when\n a question is asked\n \"\"\"\n try:\n socket.setdefaulttimeout(timeout)\n socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect((host, port))\n return True\n except IOError:\n try:\n socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(\n (\"8.8.4.4\", port))\n return True\n except IOError:\n return False\n\n\ndef curate_cache(dir, min_free_percent=5.0):\n \"\"\"Clear out the directory if needed\n\n This assumes all the files in the directory can be deleted as freely\n\n Args:\n dir (str): directory path that holds cached files\n min_free_percent (float): percentage (0.0-100.0) of drive to keep free\n \"\"\"\n\n # Simpleminded implementation -- keep a certain percentage of the\n # disk available.\n # TODO: Would be easy to add more options, like whitelisted files, etc.\n space = psutil.disk_usage(dir)\n\n # space.percent = space.used/space.total*100.0\n percent_free = 100.0-space.percent\n if percent_free < min_free_percent:\n # calculate how many bytes we need to delete\n bytes_needed = (min_free_percent - percent_free) / 100.0 * space.total\n bytes_needed = int(bytes_needed + 1.0)\n\n # get all entries in the directory w/ stats\n entries = (os.path.join(dir, fn) for fn in os.listdir(dir))\n entries = ((os.stat(path), path) for path in entries)\n\n # leave only regular files, insert modification date\n entries = ((stat[ST_MTIME], stat[ST_SIZE], path)\n for stat, path in entries if S_ISREG(stat[ST_MODE]))\n\n # delete files with oldest modification date until space is freed\n space_freed = 0\n for moddate, fsize, path in sorted(entries):\n try:\n os.remove(path)\n space_freed += fsize\n except:\n pass\n\n if space_freed > bytes_needed:\n return # deleted enough!\n\n\ndef get_cache_directory(domain=None):\n \"\"\"Get a directory for caching data\n\n This directory can be used to hold temporary caches of data to\n speed up performance. This directory will likely be part of a\n small RAM disk and may be cleared at any time. So code that\n uses these cached files must be able to fallback and regenerate\n the file.\n\n Args:\n domain (str): The cache domain. Basically just a subdirectory.\n\n Return:\n str: a path to the directory where you can cache data\n \"\"\"\n config = mycroft.configuration.ConfigurationManager.instance()\n dir = config.get(\"cache_path\")\n if not dir:\n # If not defined, use /tmp/mycroft/cache\n dir = os.path.join(tempfile.gettempdir(), \"mycroft\", \"cache\")\n return ensure_directory_exists(dir, domain)\n\n\ndef validate_param(value, name):\n if not value:\n raise ValueError(\"Missing or empty %s in mycroft.conf \" % name)\n\n\ndef is_speaking():\n \"\"\"Determine if Text to Speech is occurring\n\n Returns:\n bool: True while still speaking\n \"\"\"\n logger.info(\"mycroft.utils.is_speaking() is depreciated, use \"\n \"mycroft.audio.is_speaking() instead.\")\n return mycroft.audio.is_speaking()\n\n\ndef wait_while_speaking():\n \"\"\"Pause as long as Text to Speech is still happening\n\n Pause while Text to Speech is still happening. 
This always pauses\n briefly to ensure that any preceeding request to speak has time to\n begin.\n \"\"\"\n logger.info(\"mycroft.utils.wait_while_speaking() is depreciated, use \"\n \"mycroft.audio.wait_while_speaking() instead.\")\n return mycroft.audio.wait_while_speaking()\n\n\ndef stop_speaking():\n # TODO: Less hacky approach to this once Audio Manager is implemented\n # Skills should only be able to stop speech they've initiated\n logger.info(\"mycroft.utils.stop_speaking() is depreciated, use \"\n \"mycroft.audio.stop_speaking() instead.\")\n mycroft.audio.stop_speaking()\n", "path": "mycroft/util/__init__.py"}], "after_files": [{"content": "from mycroft.api import Api\nfrom mycroft.skills.core import MycroftSkill, FallbackSkill, \\\n intent_handler, intent_file_handler\nfrom mycroft.skills.context import adds_context, removes_context\nfrom mycroft.messagebus.message import Message\n\nfrom os.path import abspath, dirname, join\n\n__author__ = 'seanfitz'\n\nMYCROFT_ROOT_PATH = abspath(join(dirname(__file__), '..'))\n", "path": "mycroft/__init__.py"}, {"content": "# Copyright 2016 Mycroft AI, Inc.\n#\n# This file is part of Mycroft Core.\n#\n# Mycroft Core is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Mycroft Core is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Mycroft Core. If not, see <http://www.gnu.org/licenses/>.\n\n# Officially exported methods from this file:\n# play_wav, play_mp3, get_cache_directory,\n# resolve_resource_file, wait_while_speaking\nfrom mycroft.util.log import getLogger\nfrom mycroft.util.parse import extract_datetime, extractnumber, normalize\nfrom mycroft.util.format import nice_number, convert_number\n\nimport socket\nimport subprocess\nimport tempfile\nimport time\n\nimport os\nimport os.path\nimport time\nfrom stat import S_ISREG, ST_MTIME, ST_MODE, ST_SIZE\nimport psutil\nfrom mycroft.util.signal import *\nimport mycroft.configuration\nimport mycroft.audio\n\n__author__ = 'jdorleans'\n\nlogger = getLogger(__name__)\n\n\ndef resolve_resource_file(res_name):\n \"\"\"Convert a resource into an absolute filename.\n\n Resource names are in the form: 'filename.ext'\n or 'path/filename.ext'\n\n The system wil look for ~/.mycroft/res_name first, and\n if not found will look at /opt/mycroft/res_name,\n then finally it will look for res_name in the 'mycroft/res'\n folder of the source code package.\n\n Example:\n With mycroft running as the user 'bob', if you called\n resolve_resource_file('snd/beep.wav')\n it would return either '/home/bob/.mycroft/snd/beep.wav' or\n '/opt/mycroft/snd/beep.wav' or '.../mycroft/res/snd/beep.wav',\n where the '...' is replaced by the path where the package has\n been installed.\n\n Args:\n res_name (str): a resource path/name\n \"\"\"\n\n # First look for fully qualified file (e.g. 
a user setting)\n if os.path.isfile(res_name):\n return res_name\n\n # Now look for ~/.mycroft/res_name (in user folder)\n filename = os.path.expanduser(\"~/.mycroft/\" + res_name)\n if os.path.isfile(filename):\n return filename\n\n # Next look for /opt/mycroft/res/res_name\n filename = os.path.expanduser(\"/opt/mycroft/\" + res_name)\n if os.path.isfile(filename):\n return filename\n\n # Finally look for it in the source package\n filename = os.path.join(os.path.dirname(__file__), '..', 'res', res_name)\n filename = os.path.abspath(os.path.normpath(filename))\n if os.path.isfile(filename):\n return filename\n\n return None # Resource cannot be resolved\n\n\ndef play_wav(uri):\n config = mycroft.configuration.ConfigurationManager.instance()\n play_cmd = config.get(\"play_wav_cmdline\")\n play_wav_cmd = str(play_cmd).split(\" \")\n for index, cmd in enumerate(play_wav_cmd):\n if cmd == \"%1\":\n play_wav_cmd[index] = (get_http(uri))\n return subprocess.Popen(play_wav_cmd)\n\n\ndef play_mp3(uri):\n config = mycroft.configuration.ConfigurationManager.instance()\n play_cmd = config.get(\"play_mp3_cmdline\")\n play_mp3_cmd = str(play_cmd).split(\" \")\n for index, cmd in enumerate(play_mp3_cmd):\n if cmd == \"%1\":\n play_mp3_cmd[index] = (get_http(uri))\n return subprocess.Popen(play_mp3_cmd)\n\n\ndef record(file_path, duration, rate, channels):\n if duration > 0:\n return subprocess.Popen(\n [\"arecord\", \"-r\", str(rate), \"-c\", str(channels), \"-d\",\n str(duration), file_path])\n else:\n return subprocess.Popen(\n [\"arecord\", \"-r\", str(rate), \"-c\", str(channels), file_path])\n\n\ndef get_http(uri):\n return uri.replace(\"https://\", \"http://\")\n\n\ndef remove_last_slash(url):\n if url and url.endswith('/'):\n url = url[:-1]\n return url\n\n\ndef read_stripped_lines(filename):\n with open(filename, 'r') as f:\n return [line.strip() for line in f]\n\n\ndef read_dict(filename, div='='):\n d = {}\n with open(filename, 'r') as f:\n for line in f:\n (key, val) = line.split(div)\n d[key.strip()] = val.strip()\n return d\n\n\ndef connected(host=\"8.8.8.8\", port=53, timeout=3):\n \"\"\"\n Thanks to 7h3rAm on\n Host: 8.8.8.8 (google-public-dns-a.google.com)\n OpenPort: 53/tcp\n Service: domain (DNS/TCP)\n\n NOTE:\n This is no longer in use by this version\n New method checks for a connection using ConnectionError only when\n a question is asked\n \"\"\"\n try:\n socket.setdefaulttimeout(timeout)\n socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect((host, port))\n return True\n except IOError:\n try:\n socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(\n (\"8.8.4.4\", port))\n return True\n except IOError:\n return False\n\n\ndef curate_cache(dir, min_free_percent=5.0):\n \"\"\"Clear out the directory if needed\n\n This assumes all the files in the directory can be deleted as freely\n\n Args:\n dir (str): directory path that holds cached files\n min_free_percent (float): percentage (0.0-100.0) of drive to keep free\n \"\"\"\n\n # Simpleminded implementation -- keep a certain percentage of the\n # disk available.\n # TODO: Would be easy to add more options, like whitelisted files, etc.\n space = psutil.disk_usage(dir)\n\n # space.percent = space.used/space.total*100.0\n percent_free = 100.0-space.percent\n if percent_free < min_free_percent:\n # calculate how many bytes we need to delete\n bytes_needed = (min_free_percent - percent_free) / 100.0 * space.total\n bytes_needed = int(bytes_needed + 1.0)\n\n # get all entries in the directory w/ stats\n entries = 
(os.path.join(dir, fn) for fn in os.listdir(dir))\n entries = ((os.stat(path), path) for path in entries)\n\n # leave only regular files, insert modification date\n entries = ((stat[ST_MTIME], stat[ST_SIZE], path)\n for stat, path in entries if S_ISREG(stat[ST_MODE]))\n\n # delete files with oldest modification date until space is freed\n space_freed = 0\n for moddate, fsize, path in sorted(entries):\n try:\n os.remove(path)\n space_freed += fsize\n except:\n pass\n\n if space_freed > bytes_needed:\n return # deleted enough!\n\n\ndef get_cache_directory(domain=None):\n \"\"\"Get a directory for caching data\n\n This directory can be used to hold temporary caches of data to\n speed up performance. This directory will likely be part of a\n small RAM disk and may be cleared at any time. So code that\n uses these cached files must be able to fallback and regenerate\n the file.\n\n Args:\n domain (str): The cache domain. Basically just a subdirectory.\n\n Return:\n str: a path to the directory where you can cache data\n \"\"\"\n config = mycroft.configuration.ConfigurationManager.instance()\n dir = config.get(\"cache_path\")\n if not dir:\n # If not defined, use /tmp/mycroft/cache\n dir = os.path.join(tempfile.gettempdir(), \"mycroft\", \"cache\")\n return ensure_directory_exists(dir, domain)\n\n\ndef validate_param(value, name):\n if not value:\n raise ValueError(\"Missing or empty %s in mycroft.conf \" % name)\n\n\ndef is_speaking():\n \"\"\"Determine if Text to Speech is occurring\n\n Returns:\n bool: True while still speaking\n \"\"\"\n logger.info(\"mycroft.utils.is_speaking() is depreciated, use \"\n \"mycroft.audio.is_speaking() instead.\")\n return mycroft.audio.is_speaking()\n\n\ndef wait_while_speaking():\n \"\"\"Pause as long as Text to Speech is still happening\n\n Pause while Text to Speech is still happening. This always pauses\n briefly to ensure that any preceeding request to speak has time to\n begin.\n \"\"\"\n logger.info(\"mycroft.utils.wait_while_speaking() is depreciated, use \"\n \"mycroft.audio.wait_while_speaking() instead.\")\n return mycroft.audio.wait_while_speaking()\n\n\ndef stop_speaking():\n # TODO: Less hacky approach to this once Audio Manager is implemented\n # Skills should only be able to stop speech they've initiated\n logger.info(\"mycroft.utils.stop_speaking() is depreciated, use \"\n \"mycroft.audio.stop_speaking() instead.\")\n mycroft.audio.stop_speaking()\n", "path": "mycroft/util/__init__.py"}]}
| 3,199 | 349 |
gh_patches_debug_6976 | rasdani/github-patches | git_diff | svthalia__concrexit-1369 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Searching in photos api is broken
### Describe the bug
Searching in photos api is broken
### How to reproduce
Steps to reproduce the behaviour:
1. Go to https://thalia.nu/api/v1/photos/albums/?search=Test
### Expected behaviour
This should not crash.
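
A minimal regression test for the crash (a sketch only — the user setup below is an assumption, and the URL is taken from the reproduction step above):

```python
# Sketch: any authenticated user should be able to hit the search endpoint
# without the server returning a 500.
from django.contrib.auth import get_user_model
from django.test import TestCase
from rest_framework.test import APIClient


class AlbumSearchRegressionTest(TestCase):
    def test_search_does_not_crash(self):
        user = get_user_model().objects.create_user(username="tester", password="x")
        client = APIClient()
        client.force_authenticate(user=user)  # the viewset requires authentication
        response = client.get("/api/v1/photos/albums/", {"search": "Test"})
        self.assertEqual(response.status_code, 200)
```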
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/photos/api/viewsets.py`
Content:
```
1 from rest_framework import permissions, filters
2 from rest_framework.exceptions import PermissionDenied
3 from rest_framework.mixins import CreateModelMixin, UpdateModelMixin
4 from rest_framework.viewsets import ModelViewSet, GenericViewSet
5
6 from photos import services
7 from photos.api import serializers
8 from photos.models import Album, Photo
9
10
11 class AlbumsViewSet(ModelViewSet):
12 """ViewSet class for an Album object."""
13
14 permission_classes = (permissions.IsAuthenticated,)
15 queryset = Album.objects.all()
16 filter_backends = (filters.SearchFilter,)
17 search_fields = ("title_en", "title_nl", "date", "slug")
18
19 def get_queryset(self):
20 """Return albums that are annotated to be accessible by the request user."""
21 return services.get_annotated_accessible_albums(
22 self.request, Album.objects.all()
23 )
24
25 def create(self, request, *args, **kwargs):
26 """Create album if the request user is allowed to."""
27 if self.request.user.has_perm("photos.create_album"):
28 return super().create(request, *args, **kwargs)
29 raise PermissionDenied
30
31 def update(self, request, *args, **kwargs):
32 """Create album if the request user is allowed to."""
33 if self.request.user.has_perm("photos.change_album"):
34 return super().update(request, *args, **kwargs)
35 raise PermissionDenied
36
37 def get_serializer_class(self):
38 """Return AlbumListSerializer if the current action is list else return AlbumSerializer."""
39 if self.action == "list":
40 return serializers.AlbumListSerializer
41 return serializers.AlbumSerializer
42
43
44 class PhotosViewSet(GenericViewSet, CreateModelMixin, UpdateModelMixin):
45 """ViewSet class for a Photo object."""
46
47 queryset = Photo.objects.all()
48 permission_classes = (permissions.IsAuthenticated,)
49 serializer_class = serializers.PhotoCreateSerializer
50
51 def create(self, request, *args, **kwargs):
52 """Create photo if the request user is allowed to."""
53 if self.request.user.has_perm("photos.create_photo"):
54 return super().create(request, *args, **kwargs)
55 raise PermissionDenied
56
57 def update(self, request, *args, **kwargs):
58 """Update photo if the request user is allowed to."""
59 if self.request.user.has_perm("photos.change_photo"):
60 return super().update(request, *args, **kwargs)
61 raise PermissionDenied
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/photos/api/viewsets.py b/website/photos/api/viewsets.py
--- a/website/photos/api/viewsets.py
+++ b/website/photos/api/viewsets.py
@@ -14,7 +14,7 @@
permission_classes = (permissions.IsAuthenticated,)
queryset = Album.objects.all()
filter_backends = (filters.SearchFilter,)
- search_fields = ("title_en", "title_nl", "date", "slug")
+ search_fields = ("title_en", "date", "slug")
def get_queryset(self):
"""Return albums that are annotated to be accessible by the request user."""
|
{"golden_diff": "diff --git a/website/photos/api/viewsets.py b/website/photos/api/viewsets.py\n--- a/website/photos/api/viewsets.py\n+++ b/website/photos/api/viewsets.py\n@@ -14,7 +14,7 @@\n permission_classes = (permissions.IsAuthenticated,)\n queryset = Album.objects.all()\n filter_backends = (filters.SearchFilter,)\n- search_fields = (\"title_en\", \"title_nl\", \"date\", \"slug\")\n+ search_fields = (\"title_en\", \"date\", \"slug\")\n \n def get_queryset(self):\n \"\"\"Return albums that are annotated to be accessible by the request user.\"\"\"\n", "issue": "Searching in photos api is broken\n### Describe the bug\r\nSearching in photos api is broken\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to https://thalia.nu/api/v1/photos/albums/?search=Test\r\n\r\n### Expected behaviour\r\nThis should not crash.\r\n\r\n\n", "before_files": [{"content": "from rest_framework import permissions, filters\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.mixins import CreateModelMixin, UpdateModelMixin\nfrom rest_framework.viewsets import ModelViewSet, GenericViewSet\n\nfrom photos import services\nfrom photos.api import serializers\nfrom photos.models import Album, Photo\n\n\nclass AlbumsViewSet(ModelViewSet):\n \"\"\"ViewSet class for an Album object.\"\"\"\n\n permission_classes = (permissions.IsAuthenticated,)\n queryset = Album.objects.all()\n filter_backends = (filters.SearchFilter,)\n search_fields = (\"title_en\", \"title_nl\", \"date\", \"slug\")\n\n def get_queryset(self):\n \"\"\"Return albums that are annotated to be accessible by the request user.\"\"\"\n return services.get_annotated_accessible_albums(\n self.request, Album.objects.all()\n )\n\n def create(self, request, *args, **kwargs):\n \"\"\"Create album if the request user is allowed to.\"\"\"\n if self.request.user.has_perm(\"photos.create_album\"):\n return super().create(request, *args, **kwargs)\n raise PermissionDenied\n\n def update(self, request, *args, **kwargs):\n \"\"\"Create album if the request user is allowed to.\"\"\"\n if self.request.user.has_perm(\"photos.change_album\"):\n return super().update(request, *args, **kwargs)\n raise PermissionDenied\n\n def get_serializer_class(self):\n \"\"\"Return AlbumListSerializer if the current action is list else return AlbumSerializer.\"\"\"\n if self.action == \"list\":\n return serializers.AlbumListSerializer\n return serializers.AlbumSerializer\n\n\nclass PhotosViewSet(GenericViewSet, CreateModelMixin, UpdateModelMixin):\n \"\"\"ViewSet class for a Photo object.\"\"\"\n\n queryset = Photo.objects.all()\n permission_classes = (permissions.IsAuthenticated,)\n serializer_class = serializers.PhotoCreateSerializer\n\n def create(self, request, *args, **kwargs):\n \"\"\"Create photo if the request user is allowed to.\"\"\"\n if self.request.user.has_perm(\"photos.create_photo\"):\n return super().create(request, *args, **kwargs)\n raise PermissionDenied\n\n def update(self, request, *args, **kwargs):\n \"\"\"Update photo if the request user is allowed to.\"\"\"\n if self.request.user.has_perm(\"photos.change_photo\"):\n return super().update(request, *args, **kwargs)\n raise PermissionDenied\n", "path": "website/photos/api/viewsets.py"}], "after_files": [{"content": "from rest_framework import permissions, filters\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.mixins import CreateModelMixin, UpdateModelMixin\nfrom rest_framework.viewsets import ModelViewSet, GenericViewSet\n\nfrom photos import 
services\nfrom photos.api import serializers\nfrom photos.models import Album, Photo\n\n\nclass AlbumsViewSet(ModelViewSet):\n \"\"\"ViewSet class for an Album object.\"\"\"\n\n permission_classes = (permissions.IsAuthenticated,)\n queryset = Album.objects.all()\n filter_backends = (filters.SearchFilter,)\n search_fields = (\"title_en\", \"date\", \"slug\")\n\n def get_queryset(self):\n \"\"\"Return albums that are annotated to be accessible by the request user.\"\"\"\n return services.get_annotated_accessible_albums(\n self.request, Album.objects.all()\n )\n\n def create(self, request, *args, **kwargs):\n \"\"\"Create album if the request user is allowed to.\"\"\"\n if self.request.user.has_perm(\"photos.create_album\"):\n return super().create(request, *args, **kwargs)\n raise PermissionDenied\n\n def update(self, request, *args, **kwargs):\n \"\"\"Create album if the request user is allowed to.\"\"\"\n if self.request.user.has_perm(\"photos.change_album\"):\n return super().update(request, *args, **kwargs)\n raise PermissionDenied\n\n def get_serializer_class(self):\n \"\"\"Return AlbumListSerializer if the current action is list else return AlbumSerializer.\"\"\"\n if self.action == \"list\":\n return serializers.AlbumListSerializer\n return serializers.AlbumSerializer\n\n\nclass PhotosViewSet(GenericViewSet, CreateModelMixin, UpdateModelMixin):\n \"\"\"ViewSet class for a Photo object.\"\"\"\n\n queryset = Photo.objects.all()\n permission_classes = (permissions.IsAuthenticated,)\n serializer_class = serializers.PhotoCreateSerializer\n\n def create(self, request, *args, **kwargs):\n \"\"\"Create photo if the request user is allowed to.\"\"\"\n if self.request.user.has_perm(\"photos.create_photo\"):\n return super().create(request, *args, **kwargs)\n raise PermissionDenied\n\n def update(self, request, *args, **kwargs):\n \"\"\"Update photo if the request user is allowed to.\"\"\"\n if self.request.user.has_perm(\"photos.change_photo\"):\n return super().update(request, *args, **kwargs)\n raise PermissionDenied\n", "path": "website/photos/api/viewsets.py"}]}
| 928 | 134 |
gh_patches_debug_36886 | rasdani/github-patches | git_diff | conan-io__conan-center-index-411 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] gmp/6.1.2: can't be built with mingw
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **gmp/6.1.2**
* Operating System+version: **Windows 10 1903**
* Compiler+version: **GCC 8**
* Conan version: **conan 1.19.1**
* Python version: **Python 3.7.3**
### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)
```
[settings]
os=Windows
os_build=Windows
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=8
compiler.exception=seh
compiler.libcxx=libstdc++11
compiler.threads=posix
build_type=Release
[options]
[build_requires]
*: mingw_installer/1.0@conan/stable, msys2_installer/latest@bincrafters/stable
[env]
```
### Steps to reproduce (Include if Applicable)
conanfile.txt
```
[requires]
gmp/6.1.2@bincrafters/stable
[generators]
txt
```
Running from Git bash
```
conan install --profile mingw -o gmp:disable_assembly=False -o gmp:enable_cxx=False -o gmp:fPIC=True -o gmp:run_checks=True -o gmp:shared=True --build gmp ../build_gmp
```
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
gmp/6.1.2@bincrafters/stable: Copying sources to build folder
gmp/6.1.2@bincrafters/stable: Building your package in C:\.conan\data\gmp\6.1.2\bincrafters\stable\build\92709a555eae5613e66076e2183cc3e52e0cd0e5
gmp/6.1.2@bincrafters/stable: Generator txt created conanbuildinfo.txt
gmp/6.1.2@bincrafters/stable: Calling build()
gmp/6.1.2@bincrafters/stable: WARN: Error running `configure --help`: Error 1 while executing source_subfolder/configure --help
gmp/6.1.2@bincrafters/stable: Calling:
> source_subfolder/configure --enable-shared --disable-static --prefix=C:/.conan/data/gmp/6.1.2/bincrafters/stable/package/92709a555eae5613e66076e2183cc3e52e0cd0e5
'source_subfolder' is not recognized as an internal or external command,
operable program or batch file.
gmp/6.1.2@bincrafters/stable:
gmp/6.1.2@bincrafters/stable: ERROR: Package '92709a555eae5613e66076e2183cc3e52e0cd0e5' build failed
gmp/6.1.2@bincrafters/stable: WARN: Build folder C:\.conan\data\gmp\6.1.2\bincrafters\stable\build\92709a555eae5613e66076e2183cc3e52e0cd0e5
ERROR: gmp/6.1.2@bincrafters/stable: Error in build() method, line 66
autotools = self._configure_autotools()
while calling '_configure_autotools', line 62
self._autotools.configure(args=configure_args, configure_dir=self._source_subfolder)
ConanException: Error 1 while executing source_subfolder/configure --enable-shared --disable-static --prefix=C:/.conan/data/gmp/6.1.2/bincrafters/stable/package/92709a555eae5613e66076e2183cc3e52e0cd0e5
```
</details>
Replacing
```self._autotools = AutoToolsBuildEnvironment(self)```
with
```self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)```
allows to go further.
<details><summary>Click to expand log</summary>
```
gmp/6.1.2@bincrafters/stable: checking if the .align directive accepts an 0x90 fill in .text... yes
gmp/6.1.2@bincrafters/stable: checking size of void *... 8
gmp/6.1.2@bincrafters/stable: checking size of unsigned short... 2
gmp/6.1.2@bincrafters/stable: checking size of unsigned... 4
gmp/6.1.2@bincrafters/stable: checking size of unsigned long... 4
gmp/6.1.2@bincrafters/stable: checking size of mp_limb_t... 0
gmp/6.1.2@bincrafters/stable: configure: error: Oops, mp_limb_t doesn't seem to work
gmp/6.1.2@bincrafters/stable:
gmp/6.1.2@bincrafters/stable: ERROR: Package '92709a555eae5613e66076e2183cc3e52e0cd0e5' build failed
gmp/6.1.2@bincrafters/stable: WARN: Build folder C:\.conan\data\gmp\6.1.2\bincrafters\stable\build\92709a555eae5613e66076e2183cc3e52e0cd0e5
ERROR: gmp/6.1.2@bincrafters/stable: Error in build() method, line 66
autotools = self._configure_autotools()
while calling '_configure_autotools', line 62
self._autotools.configure(args=configure_args, configure_dir=self._source_subfolder)
ConanException: Error 1 while executing /c/.conan/data/gmp/6.1.2/bincrafters/stable/build/92709a555eae5613e66076e2183cc3e52e0cd0e5/source_subfolder/configure --enable-shared --disable-static --prefix=C:/.conan/data/gmp/6.1.2/bincrafters/stable/package/92709a555eae5613e66076e2183cc3e52e0cd0e5
```
</details>
Next I've tried adding
```with tools.chdir(self._source_subfolder):```
to build and package steps for autotools. And removed ```configure_dir=self._source_subfolder``` from ```self._autotools.configure```
This has allowed me to build gmp but I'm not sure that it is the right way.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/gmp/all/conanfile.py`
Content:
```
1 import os
2 import stat
3 from conans import ConanFile, AutoToolsBuildEnvironment, tools
4 from conans.errors import ConanInvalidConfiguration
5
6
7 class GmpConan(ConanFile):
8 name = "gmp"
9 description = "GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers."
10 url = "https://github.com/conan-io/conan-center-index"
11 topics = ("conan", "gmp", "math")
12 license = ("LGPL-3.0", "GPL-2.0")
13 homepage = "https://gmplib.org"
14 settings = "os", "arch", "compiler", "build_type"
15 options = {"shared": [True, False], "fPIC": [True, False], "disable_assembly": [True, False],
16 "run_checks": [True, False], "enable_cxx" : [True, False]}
17 default_options = {'shared': False, 'fPIC': True, 'disable_assembly': True, 'run_checks': False, "enable_cxx" : True}
18
19 _source_subfolder = "source_subfolder"
20 _autotools = None
21
22 def config_options(self):
23 if self.settings.os == "Windows":
24 del self.options.fPIC
25
26 def configure(self):
27 if self.settings.compiler == 'Visual Studio':
28 raise ConanInvalidConfiguration("The gmp package cannot be built on Visual Studio.")
29
30 if not self.options.enable_cxx:
31 del self.settings.compiler.libcxx
32 del self.settings.compiler.cppstd
33
34 def source(self):
35 tools.get(**self.conan_data["sources"][self.version])
36 os.rename("gmp-" + self.version, self._source_subfolder)
37
38 def _configure_autotools(self):
39 if not self._autotools:
40 self._autotools = AutoToolsBuildEnvironment(self)
41 if self.settings.os == "Macos":
42 configure_file = os.path.join(self._source_subfolder, "configure")
43 tools.replace_in_file(configure_file, r"-install_name \$rpath/", "-install_name ")
44 configure_stats = os.stat(configure_file)
45 os.chmod(configure_file, configure_stats.st_mode | stat.S_IEXEC)
46 configure_args = []
47 if self.options.disable_assembly:
48 configure_args.append('--disable-assembly')
49 if self.options.shared:
50 configure_args.extend(["--enable-shared", "--disable-static"])
51 else:
52 configure_args.extend(["--disable-shared", "--enable-static"])
53 if self.options.enable_cxx:
54 configure_args.append('--enable-cxx')
55 self._autotools.configure(args=configure_args, configure_dir=self._source_subfolder)
56 return self._autotools
57
58 def build(self):
59 autotools = self._configure_autotools()
60 autotools.make()
61 # INFO: According to the gmp readme file, make check should not be omitted, but it causes timeouts on the CI server.
62 if self.options.run_checks:
63 autotools.make(args=['check'])
64
65 def package(self):
66 self.copy("COPYINGv2", dst="licenses", src=self._source_subfolder)
67 self.copy("COPYING.LESSERv3", dst="licenses", src=self._source_subfolder)
68 autotools = self._configure_autotools()
69 autotools.install()
70 tools.rmdir(os.path.join(self.package_folder, "share"))
71 # remove la files
72 for la_name in ['libgmp.la', 'libgmpxx.la']:
73 la = os.path.join(self.package_folder, "lib", la_name)
74 if os.path.isfile(la):
75 os.unlink(la)
76
77 def package_id(self):
78 del self.info.options.run_checks # run_checks doesn't affect package's ID
79
80 def package_info(self):
81 self.cpp_info.libs = tools.collect_libs(self)
82 self.cpp_info.name = "GMP"
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/gmp/all/conanfile.py b/recipes/gmp/all/conanfile.py
--- a/recipes/gmp/all/conanfile.py
+++ b/recipes/gmp/all/conanfile.py
@@ -37,9 +37,9 @@
def _configure_autotools(self):
if not self._autotools:
- self._autotools = AutoToolsBuildEnvironment(self)
+ self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
if self.settings.os == "Macos":
- configure_file = os.path.join(self._source_subfolder, "configure")
+ configure_file = "configure"
tools.replace_in_file(configure_file, r"-install_name \$rpath/", "-install_name ")
configure_stats = os.stat(configure_file)
os.chmod(configure_file, configure_stats.st_mode | stat.S_IEXEC)
@@ -52,12 +52,13 @@
configure_args.extend(["--disable-shared", "--enable-static"])
if self.options.enable_cxx:
configure_args.append('--enable-cxx')
- self._autotools.configure(args=configure_args, configure_dir=self._source_subfolder)
+ self._autotools.configure(args=configure_args)
return self._autotools
def build(self):
- autotools = self._configure_autotools()
- autotools.make()
+ with tools.chdir(self._source_subfolder):
+ autotools = self._configure_autotools()
+ autotools.make()
# INFO: According to the gmp readme file, make check should not be omitted, but it causes timeouts on the CI server.
if self.options.run_checks:
autotools.make(args=['check'])
@@ -65,8 +66,9 @@
def package(self):
self.copy("COPYINGv2", dst="licenses", src=self._source_subfolder)
self.copy("COPYING.LESSERv3", dst="licenses", src=self._source_subfolder)
- autotools = self._configure_autotools()
- autotools.install()
+ with tools.chdir(self._source_subfolder):
+ autotools = self._configure_autotools()
+ autotools.install()
tools.rmdir(os.path.join(self.package_folder, "share"))
# remove la files
for la_name in ['libgmp.la', 'libgmpxx.la']:
|
{"golden_diff": "diff --git a/recipes/gmp/all/conanfile.py b/recipes/gmp/all/conanfile.py\n--- a/recipes/gmp/all/conanfile.py\n+++ b/recipes/gmp/all/conanfile.py\n@@ -37,9 +37,9 @@\n \n def _configure_autotools(self):\n if not self._autotools:\n- self._autotools = AutoToolsBuildEnvironment(self)\n+ self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n if self.settings.os == \"Macos\":\n- configure_file = os.path.join(self._source_subfolder, \"configure\")\n+ configure_file = \"configure\"\n tools.replace_in_file(configure_file, r\"-install_name \\$rpath/\", \"-install_name \")\n configure_stats = os.stat(configure_file)\n os.chmod(configure_file, configure_stats.st_mode | stat.S_IEXEC)\n@@ -52,12 +52,13 @@\n configure_args.extend([\"--disable-shared\", \"--enable-static\"])\n if self.options.enable_cxx:\n configure_args.append('--enable-cxx')\n- self._autotools.configure(args=configure_args, configure_dir=self._source_subfolder)\n+ self._autotools.configure(args=configure_args)\n return self._autotools\n \n def build(self):\n- autotools = self._configure_autotools()\n- autotools.make()\n+ with tools.chdir(self._source_subfolder):\n+ autotools = self._configure_autotools()\n+ autotools.make()\n # INFO: According to the gmp readme file, make check should not be omitted, but it causes timeouts on the CI server.\n if self.options.run_checks:\n autotools.make(args=['check'])\n@@ -65,8 +66,9 @@\n def package(self):\n self.copy(\"COPYINGv2\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(\"COPYING.LESSERv3\", dst=\"licenses\", src=self._source_subfolder)\n- autotools = self._configure_autotools()\n- autotools.install()\n+ with tools.chdir(self._source_subfolder):\n+ autotools = self._configure_autotools()\n+ autotools.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n # remove la files\n for la_name in ['libgmp.la', 'libgmpxx.la']:\n", "issue": "[package] gmp/6.1.2: can't be built with mingw\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **gmp/6.1.2**\r\n * Operating System+version: **Windows 10 1903**\r\n * Compiler+version: **GCC 8**\r\n * Conan version: **conan 1.19.1**\r\n * Python version: **Python 3.7.3**\r\n\r\n### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)\r\n```\r\n[settings]\r\nos=Windows\r\nos_build=Windows\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=gcc\r\ncompiler.version=8\r\ncompiler.exception=seh\r\ncompiler.libcxx=libstdc++11\r\ncompiler.threads=posix\r\nbuild_type=Release\r\n[options]\r\n[build_requires]\r\n*: mingw_installer/1.0@conan/stable, msys2_installer/latest@bincrafters/stable\r\n[env]\r\n```\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nconanfile.txt\r\n```\r\n[requires]\r\ngmp/6.1.2@bincrafters/stable\r\n\r\n[generators]\r\ntxt\r\n```\r\n\r\nRunning from Git bash\r\n```\r\nconan install --profile mingw -o gmp:disable_assembly=False -o gmp:enable_cxx=False -o gmp:fPIC=True -o gmp:run_checks=True -o gmp:shared=True --build gmp ../build_gmp\r\n```\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\ngmp/6.1.2@bincrafters/stable: Copying sources to build folder\r\ngmp/6.1.2@bincrafters/stable: Building your package in C:\\.conan\\data\\gmp\\6.1.2\\bincrafters\\stable\\build\\92709a555eae5613e66076e2183cc3e52e0cd0e5\r\ngmp/6.1.2@bincrafters/stable: Generator txt created 
conanbuildinfo.txt\r\ngmp/6.1.2@bincrafters/stable: Calling build()\r\ngmp/6.1.2@bincrafters/stable: WARN: Error running `configure --help`: Error 1 while executing source_subfolder/configure --help\r\ngmp/6.1.2@bincrafters/stable: Calling:\r\n > source_subfolder/configure --enable-shared --disable-static --prefix=C:/.conan/data/gmp/6.1.2/bincrafters/stable/package/92709a555eae5613e66076e2183cc3e52e0cd0e5\r\n'source_subfolder' is not recognized as an internal or external command,\r\noperable program or batch file.\r\ngmp/6.1.2@bincrafters/stable:\r\ngmp/6.1.2@bincrafters/stable: ERROR: Package '92709a555eae5613e66076e2183cc3e52e0cd0e5' build failed\r\ngmp/6.1.2@bincrafters/stable: WARN: Build folder C:\\.conan\\data\\gmp\\6.1.2\\bincrafters\\stable\\build\\92709a555eae5613e66076e2183cc3e52e0cd0e5\r\nERROR: gmp/6.1.2@bincrafters/stable: Error in build() method, line 66\r\n autotools = self._configure_autotools()\r\nwhile calling '_configure_autotools', line 62\r\n self._autotools.configure(args=configure_args, configure_dir=self._source_subfolder)\r\n ConanException: Error 1 while executing source_subfolder/configure --enable-shared --disable-static --prefix=C:/.conan/data/gmp/6.1.2/bincrafters/stable/package/92709a555eae5613e66076e2183cc3e52e0cd0e5\r\n```\r\n</details>\r\n\r\nReplacing\r\n```self._autotools = AutoToolsBuildEnvironment(self)```\r\nwith\r\n```self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)```\r\nallows to go further.\r\n\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\ngmp/6.1.2@bincrafters/stable: checking if the .align directive accepts an 0x90 fill in .text... yes\r\ngmp/6.1.2@bincrafters/stable: checking size of void *... 8\r\ngmp/6.1.2@bincrafters/stable: checking size of unsigned short... 2\r\ngmp/6.1.2@bincrafters/stable: checking size of unsigned... 4\r\ngmp/6.1.2@bincrafters/stable: checking size of unsigned long... 4\r\ngmp/6.1.2@bincrafters/stable: checking size of mp_limb_t... 0\r\ngmp/6.1.2@bincrafters/stable: configure: error: Oops, mp_limb_t doesn't seem to work\r\ngmp/6.1.2@bincrafters/stable:\r\ngmp/6.1.2@bincrafters/stable: ERROR: Package '92709a555eae5613e66076e2183cc3e52e0cd0e5' build failed\r\ngmp/6.1.2@bincrafters/stable: WARN: Build folder C:\\.conan\\data\\gmp\\6.1.2\\bincrafters\\stable\\build\\92709a555eae5613e66076e2183cc3e52e0cd0e5\r\nERROR: gmp/6.1.2@bincrafters/stable: Error in build() method, line 66\r\n autotools = self._configure_autotools()\r\nwhile calling '_configure_autotools', line 62\r\n self._autotools.configure(args=configure_args, configure_dir=self._source_subfolder)\r\n ConanException: Error 1 while executing /c/.conan/data/gmp/6.1.2/bincrafters/stable/build/92709a555eae5613e66076e2183cc3e52e0cd0e5/source_subfolder/configure --enable-shared --disable-static --prefix=C:/.conan/data/gmp/6.1.2/bincrafters/stable/package/92709a555eae5613e66076e2183cc3e52e0cd0e5\r\n```\r\n</details>\r\n\r\nNext I've tried adding \r\n```with tools.chdir(self._source_subfolder):```\r\nto build and package steps for autotools. 
And removed ```configure_dir=self._source_subfolder``` from ```self._autotools.configure```\r\n\r\nThis has allowed me to build gmp but I'm not sure that it is the right way.\r\n\n", "before_files": [{"content": "import os\nimport stat\nfrom conans import ConanFile, AutoToolsBuildEnvironment, tools\nfrom conans.errors import ConanInvalidConfiguration\n\n\nclass GmpConan(ConanFile):\n name = \"gmp\"\n description = \"GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n topics = (\"conan\", \"gmp\", \"math\")\n license = (\"LGPL-3.0\", \"GPL-2.0\")\n homepage = \"https://gmplib.org\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"shared\": [True, False], \"fPIC\": [True, False], \"disable_assembly\": [True, False],\n \"run_checks\": [True, False], \"enable_cxx\" : [True, False]}\n default_options = {'shared': False, 'fPIC': True, 'disable_assembly': True, 'run_checks': False, \"enable_cxx\" : True}\n\n _source_subfolder = \"source_subfolder\"\n _autotools = None\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.settings.compiler == 'Visual Studio':\n raise ConanInvalidConfiguration(\"The gmp package cannot be built on Visual Studio.\")\n\n if not self.options.enable_cxx:\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"gmp-\" + self.version, self._source_subfolder)\n\n def _configure_autotools(self):\n if not self._autotools:\n self._autotools = AutoToolsBuildEnvironment(self)\n if self.settings.os == \"Macos\":\n configure_file = os.path.join(self._source_subfolder, \"configure\")\n tools.replace_in_file(configure_file, r\"-install_name \\$rpath/\", \"-install_name \")\n configure_stats = os.stat(configure_file)\n os.chmod(configure_file, configure_stats.st_mode | stat.S_IEXEC)\n configure_args = []\n if self.options.disable_assembly:\n configure_args.append('--disable-assembly')\n if self.options.shared:\n configure_args.extend([\"--enable-shared\", \"--disable-static\"])\n else:\n configure_args.extend([\"--disable-shared\", \"--enable-static\"])\n if self.options.enable_cxx:\n configure_args.append('--enable-cxx')\n self._autotools.configure(args=configure_args, configure_dir=self._source_subfolder)\n return self._autotools\n\n def build(self):\n autotools = self._configure_autotools()\n autotools.make()\n # INFO: According to the gmp readme file, make check should not be omitted, but it causes timeouts on the CI server.\n if self.options.run_checks:\n autotools.make(args=['check'])\n\n def package(self):\n self.copy(\"COPYINGv2\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(\"COPYING.LESSERv3\", dst=\"licenses\", src=self._source_subfolder)\n autotools = self._configure_autotools()\n autotools.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n # remove la files\n for la_name in ['libgmp.la', 'libgmpxx.la']:\n la = os.path.join(self.package_folder, \"lib\", la_name)\n if os.path.isfile(la):\n os.unlink(la)\n\n def package_id(self):\n del self.info.options.run_checks # run_checks doesn't affect package's ID\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n self.cpp_info.name = \"GMP\"\n", "path": "recipes/gmp/all/conanfile.py"}], "after_files": [{"content": "import 
os\nimport stat\nfrom conans import ConanFile, AutoToolsBuildEnvironment, tools\nfrom conans.errors import ConanInvalidConfiguration\n\n\nclass GmpConan(ConanFile):\n name = \"gmp\"\n description = \"GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n topics = (\"conan\", \"gmp\", \"math\")\n license = (\"LGPL-3.0\", \"GPL-2.0\")\n homepage = \"https://gmplib.org\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"shared\": [True, False], \"fPIC\": [True, False], \"disable_assembly\": [True, False],\n \"run_checks\": [True, False], \"enable_cxx\" : [True, False]}\n default_options = {'shared': False, 'fPIC': True, 'disable_assembly': True, 'run_checks': False, \"enable_cxx\" : True}\n\n _source_subfolder = \"source_subfolder\"\n _autotools = None\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.settings.compiler == 'Visual Studio':\n raise ConanInvalidConfiguration(\"The gmp package cannot be built on Visual Studio.\")\n\n if not self.options.enable_cxx:\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"gmp-\" + self.version, self._source_subfolder)\n\n def _configure_autotools(self):\n if not self._autotools:\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n if self.settings.os == \"Macos\":\n configure_file = \"configure\"\n tools.replace_in_file(configure_file, r\"-install_name \\$rpath/\", \"-install_name \")\n configure_stats = os.stat(configure_file)\n os.chmod(configure_file, configure_stats.st_mode | stat.S_IEXEC)\n configure_args = []\n if self.options.disable_assembly:\n configure_args.append('--disable-assembly')\n if self.options.shared:\n configure_args.extend([\"--enable-shared\", \"--disable-static\"])\n else:\n configure_args.extend([\"--disable-shared\", \"--enable-static\"])\n if self.options.enable_cxx:\n configure_args.append('--enable-cxx')\n self._autotools.configure(args=configure_args)\n return self._autotools\n\n def build(self):\n with tools.chdir(self._source_subfolder):\n autotools = self._configure_autotools()\n autotools.make()\n # INFO: According to the gmp readme file, make check should not be omitted, but it causes timeouts on the CI server.\n if self.options.run_checks:\n autotools.make(args=['check'])\n\n def package(self):\n self.copy(\"COPYINGv2\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(\"COPYING.LESSERv3\", dst=\"licenses\", src=self._source_subfolder)\n with tools.chdir(self._source_subfolder):\n autotools = self._configure_autotools()\n autotools.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n # remove la files\n for la_name in ['libgmp.la', 'libgmpxx.la']:\n la = os.path.join(self.package_folder, \"lib\", la_name)\n if os.path.isfile(la):\n os.unlink(la)\n\n def package_id(self):\n del self.info.options.run_checks # run_checks doesn't affect package's ID\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n self.cpp_info.name = \"GMP\"\n", "path": "recipes/gmp/all/conanfile.py"}]}
| 2,969 | 538 |
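Editor's note on the gmp row above: the accepted diff combines both workarounds described in the issue. It builds the `AutoToolsBuildEnvironment` with `win_bash=tools.os_info.is_windows` so that `configure` runs under the MSYS2 bash, and it runs configure/make/install from inside the source tree with `tools.chdir` instead of passing `configure_dir=`, which avoids mixing Windows and POSIX path styles. A minimal sketch of that Conan 1.x pattern; `GmpLikeRecipe` and the configure argument are illustrative placeholders, not the published recipe:

```python
from conans import ConanFile, AutoToolsBuildEnvironment, tools


class GmpLikeRecipe(ConanFile):
    # Sketch only: name/settings/options/source() omitted, see the full recipe above.
    _source_subfolder = "source_subfolder"
    _autotools = None

    def _configure_autotools(self):
        if not self._autotools:
            # win_bash routes ./configure and make through bash (MSYS2) on Windows.
            self._autotools = AutoToolsBuildEnvironment(
                self, win_bash=tools.os_info.is_windows)
            self._autotools.configure(args=["--disable-assembly"])  # illustrative args
        return self._autotools

    def build(self):
        # Run from inside the unpacked sources instead of passing configure_dir=,
        # so the configure script path never mixes Windows and POSIX separators.
        with tools.chdir(self._source_subfolder):
            self._configure_autotools().make()

    def package(self):
        with tools.chdir(self._source_subfolder):
            self._configure_autotools().install()
```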
gh_patches_debug_17201 | rasdani/github-patches | git_diff | kivy__kivy-3451 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CodeInput doesn't work with IniLexer
https://gist.github.com/aron-bordin/df00122f90231d5081d4
It's not possible to add [ in the first column:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kivy/uix/codeinput.py`
Content:
```
1 '''
2 Code Input
3 ==========
4
5 .. versionadded:: 1.5.0
6
7 .. image:: images/codeinput.jpg
8
9
10 The :class:`CodeInput` provides a box of editable highlighted text like the one
11 shown in the image.
12
13 It supports all the features provided by the :class:`~kivy.uix.textinput` as
14 well as code highlighting for `languages supported by pygments
15 <http://pygments.org/docs/lexers/>`_ along with `KivyLexer` for
16 :mod:`kivy.lang` highlighting.
17
18 Usage example
19 -------------
20
21 To create a CodeInput with highlighting for `KV language`::
22
23 from kivy.uix.codeinput import CodeInput
24 from kivy.extras.highlight import KivyLexer
25 codeinput = CodeInput(lexer=KivyLexer())
26
27 To create a CodeInput with highlighting for `Cython`::
28
29 from kivy.uix.codeinput import CodeInput
30 from pygments.lexers import CythonLexer
31 codeinput = CodeInput(lexer=CythonLexer())
32
33 '''
34
35 __all__ = ('CodeInput', )
36
37 from pygments import highlight
38 from pygments import lexers
39 from pygments import styles
40 from pygments.formatters import BBCodeFormatter
41
42 from kivy.uix.textinput import TextInput
43 from kivy.core.text.markup import MarkupLabel as Label
44 from kivy.cache import Cache
45 from kivy.properties import ObjectProperty, OptionProperty
46 from kivy.utils import get_hex_from_color
47
48 Cache_get = Cache.get
49 Cache_append = Cache.append
50
51 # TODO: color chooser for keywords/strings/...
52
53
54 class CodeInput(TextInput):
55 '''CodeInput class, used for displaying highlighted code.
56 '''
57
58 lexer = ObjectProperty(None)
59 '''This holds the selected Lexer used by pygments to highlight the code.
60
61
62 :attr:`lexer` is an :class:`~kivy.properties.ObjectProperty` and
63 defaults to `PythonLexer`.
64 '''
65
66 style_name = OptionProperty(
67 'default', options=list(styles.get_all_styles())
68 )
69 '''Name of the pygments style to use for formatting.
70
71 :attr:`style_name` is an :class:`~kivy.properties.OptionProperty`
72 and defaults to ``'default'``.
73
74 '''
75
76 style = ObjectProperty(None)
77 '''The pygments style object to use for formatting.
78
79 When ``style_name`` is set, this will be changed to the
80 corresponding style object.
81
82 :attr:`style` is a :class:`~kivy.properties.ObjectProperty` and
83 defaults to ``None``
84
85 '''
86
87 def __init__(self, **kwargs):
88 stylename = kwargs.get('style_name', 'default')
89 style = kwargs['style'] if 'style' in kwargs \
90 else styles.get_style_by_name(stylename)
91 self.formatter = BBCodeFormatter(style=style)
92 self.lexer = lexers.PythonLexer()
93 self.text_color = '#000000'
94 self._label_cached = Label()
95 self.use_text_color = True
96
97 super(CodeInput, self).__init__(**kwargs)
98
99 self._line_options = kw = self._get_line_options()
100 self._label_cached = Label(**kw)
101 # use text_color as foreground color
102 text_color = kwargs.get('foreground_color')
103 if text_color:
104 self.text_color = get_hex_from_color(text_color)
105 # set foreground to white to allow text colors to show
106 # use text_color as the default color in bbcodes
107 self.use_text_color = False
108 self.foreground_color = [1, 1, 1, .999]
109 if not kwargs.get('background_color'):
110 self.background_color = [.9, .92, .92, 1]
111
112 def on_style_name(self, *args):
113 self.style = styles.get_style_by_name(self.style_name)
114
115 def on_style(self, *args):
116 self.formatter = BBCodeFormatter(style=self.style)
117 self._trigger_update_graphics()
118
119 def _create_line_label(self, text, hint=False):
120 # Create a label from a text, using line options
121 ntext = text.replace(u'\n', u'').replace(u'\t', u' ' * self.tab_width)
122 if self.password and not hint: # Don't replace hint_text with *
123 ntext = u'*' * len(ntext)
124 ntext = self._get_bbcode(ntext)
125 kw = self._get_line_options()
126 cid = u'{}\0{}\0{}'.format(ntext, self.password, kw)
127 texture = Cache_get('textinput.label', cid)
128
129 if texture is None:
130 # FIXME right now, we can't render very long line...
131 # if we move on "VBO" version as fallback, we won't need to
132 # do this.
133 # try to find the maximum text we can handle
134 label = Label(text=ntext, **kw)
135 if text.find(u'\n') > 0:
136 label.text = u''
137 else:
138 label.text = ntext
139 label.refresh()
140
141 # ok, we found it.
142 texture = label.texture
143 Cache_append('textinput.label', cid, texture)
144 label.text = ''
145 return texture
146
147 def _get_line_options(self):
148 kw = super(CodeInput, self)._get_line_options()
149 kw['markup'] = True
150 kw['valign'] = 'top'
151 kw['codeinput'] = repr(self.lexer)
152 return kw
153
154 def _get_text_width(self, text, tab_width, _label_cached):
155 # Return the width of a text, according to the current line options.
156 cid = u'{}\0{}\0{}'.format(text, self.password,
157 self._get_line_options())
158 width = Cache_get('textinput.width', cid)
159 if width is not None:
160 return width
161 lbl = self._create_line_label(text)
162 width = lbl.width
163 Cache_append('textinput.width', cid, width)
164 return width
165
166 def _get_bbcode(self, ntext):
167 # get bbcoded text for python
168 try:
169 ntext[0]
170 # replace brackets with special chars that aren't highlighted
171 # by pygment. can't use &bl; ... cause & is highlighted
172 ntext = ntext.replace(u'[', u'\x01;').replace(u']', u'\x02;')
173 ntext = highlight(ntext, self.lexer, self.formatter)
174 ntext = ntext.replace(u'\x01;', u'&bl;').replace(u'\x02;', u'&br;')
175 # replace special chars with &bl; and &br;
176 ntext = ''.join((u'[color=', str(self.text_color), u']',
177 ntext, u'[/color]'))
178 ntext = ntext.replace(u'\n', u'')
179 return ntext
180 except IndexError:
181 return ''
182
183 # overriden to prevent cursor position off screen
184 def _cursor_offset(self):
185 '''Get the cursor x offset on the current line
186 '''
187 offset = 0
188 try:
189 if self.cursor_col:
190 offset = self._get_text_width(
191 self._lines[self.cursor_row][:self.cursor_col])
192 return offset
193 except:
194 pass
195 finally:
196 return offset
197
198 def on_lexer(self, instance, value):
199 self._trigger_refresh_text()
200
201 def on_foreground_color(self, instance, text_color):
202 if not self.use_text_color:
203 self.use_text_color = True
204 return
205 self.text_color = get_hex_from_color(text_color)
206 self.use_text_color = False
207 self.foreground_color = (1, 1, 1, .999)
208 self._trigger_refresh_text()
209
210
211 if __name__ == '__main__':
212 from kivy.extras.highlight import KivyLexer
213 from kivy.app import App
214
215 class CodeInputTest(App):
216 def build(self):
217 return CodeInput(lexer=KivyLexer(),
218 font_name='data/fonts/DroidSansMono.ttf',
219 font_size=12,
220 text='''
221 #:kivy 1.0
222
223 <YourWidget>:
224 canvas:
225 Color:
226 rgb: .5, .5, .5
227 Rectangle:
228 pos: self.pos
229 size: self.size''')
230
231 CodeInputTest().run()
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kivy/uix/codeinput.py b/kivy/uix/codeinput.py
--- a/kivy/uix/codeinput.py
+++ b/kivy/uix/codeinput.py
@@ -169,9 +169,9 @@
ntext[0]
# replace brackets with special chars that aren't highlighted
# by pygment. can't use &bl; ... cause & is highlighted
- ntext = ntext.replace(u'[', u'\x01;').replace(u']', u'\x02;')
+ ntext = ntext.replace(u'[', u'\x01').replace(u']', u'\x02')
ntext = highlight(ntext, self.lexer, self.formatter)
- ntext = ntext.replace(u'\x01;', u'&bl;').replace(u'\x02;', u'&br;')
+ ntext = ntext.replace(u'\x01', u'&bl;').replace(u'\x02', u'&br;')
# replace special chars with &bl; and &br;
ntext = ''.join((u'[color=', str(self.text_color), u']',
ntext, u'[/color]'))
|
{"golden_diff": "diff --git a/kivy/uix/codeinput.py b/kivy/uix/codeinput.py\n--- a/kivy/uix/codeinput.py\n+++ b/kivy/uix/codeinput.py\n@@ -169,9 +169,9 @@\n ntext[0]\n # replace brackets with special chars that aren't highlighted\n # by pygment. can't use &bl; ... cause & is highlighted\n- ntext = ntext.replace(u'[', u'\\x01;').replace(u']', u'\\x02;')\n+ ntext = ntext.replace(u'[', u'\\x01').replace(u']', u'\\x02')\n ntext = highlight(ntext, self.lexer, self.formatter)\n- ntext = ntext.replace(u'\\x01;', u'&bl;').replace(u'\\x02;', u'&br;')\n+ ntext = ntext.replace(u'\\x01', u'&bl;').replace(u'\\x02', u'&br;')\n # replace special chars with &bl; and &br;\n ntext = ''.join((u'[color=', str(self.text_color), u']',\n ntext, u'[/color]'))\n", "issue": "CodeInput doesn't work with IniLexer\nhttps://gist.github.com/aron-bordin/df00122f90231d5081d4\n\nIt's not possible to add [ in the first column:\n\n\n\n", "before_files": [{"content": "'''\nCode Input\n==========\n\n.. versionadded:: 1.5.0\n\n.. image:: images/codeinput.jpg\n\n\nThe :class:`CodeInput` provides a box of editable highlighted text like the one\nshown in the image.\n\nIt supports all the features provided by the :class:`~kivy.uix.textinput` as\nwell as code highlighting for `languages supported by pygments\n<http://pygments.org/docs/lexers/>`_ along with `KivyLexer` for\n:mod:`kivy.lang` highlighting.\n\nUsage example\n-------------\n\nTo create a CodeInput with highlighting for `KV language`::\n\n from kivy.uix.codeinput import CodeInput\n from kivy.extras.highlight import KivyLexer\n codeinput = CodeInput(lexer=KivyLexer())\n\nTo create a CodeInput with highlighting for `Cython`::\n\n from kivy.uix.codeinput import CodeInput\n from pygments.lexers import CythonLexer\n codeinput = CodeInput(lexer=CythonLexer())\n\n'''\n\n__all__ = ('CodeInput', )\n\nfrom pygments import highlight\nfrom pygments import lexers\nfrom pygments import styles\nfrom pygments.formatters import BBCodeFormatter\n\nfrom kivy.uix.textinput import TextInput\nfrom kivy.core.text.markup import MarkupLabel as Label\nfrom kivy.cache import Cache\nfrom kivy.properties import ObjectProperty, OptionProperty\nfrom kivy.utils import get_hex_from_color\n\nCache_get = Cache.get\nCache_append = Cache.append\n\n# TODO: color chooser for keywords/strings/...\n\n\nclass CodeInput(TextInput):\n '''CodeInput class, used for displaying highlighted code.\n '''\n\n lexer = ObjectProperty(None)\n '''This holds the selected Lexer used by pygments to highlight the code.\n\n\n :attr:`lexer` is an :class:`~kivy.properties.ObjectProperty` and\n defaults to `PythonLexer`.\n '''\n\n style_name = OptionProperty(\n 'default', options=list(styles.get_all_styles())\n )\n '''Name of the pygments style to use for formatting.\n\n :attr:`style_name` is an :class:`~kivy.properties.OptionProperty`\n and defaults to ``'default'``.\n\n '''\n\n style = ObjectProperty(None)\n '''The pygments style object to use for formatting.\n\n When ``style_name`` is set, this will be changed to the\n corresponding style object.\n\n :attr:`style` is a :class:`~kivy.properties.ObjectProperty` and\n defaults to ``None``\n\n '''\n\n def __init__(self, **kwargs):\n stylename = kwargs.get('style_name', 'default')\n style = kwargs['style'] if 'style' in kwargs \\\n else styles.get_style_by_name(stylename)\n self.formatter = BBCodeFormatter(style=style)\n self.lexer = lexers.PythonLexer()\n self.text_color = '#000000'\n self._label_cached = Label()\n self.use_text_color = True\n\n super(CodeInput, 
self).__init__(**kwargs)\n\n self._line_options = kw = self._get_line_options()\n self._label_cached = Label(**kw)\n # use text_color as foreground color\n text_color = kwargs.get('foreground_color')\n if text_color:\n self.text_color = get_hex_from_color(text_color)\n # set foreground to white to allow text colors to show\n # use text_color as the default color in bbcodes\n self.use_text_color = False\n self.foreground_color = [1, 1, 1, .999]\n if not kwargs.get('background_color'):\n self.background_color = [.9, .92, .92, 1]\n\n def on_style_name(self, *args):\n self.style = styles.get_style_by_name(self.style_name)\n\n def on_style(self, *args):\n self.formatter = BBCodeFormatter(style=self.style)\n self._trigger_update_graphics()\n\n def _create_line_label(self, text, hint=False):\n # Create a label from a text, using line options\n ntext = text.replace(u'\\n', u'').replace(u'\\t', u' ' * self.tab_width)\n if self.password and not hint: # Don't replace hint_text with *\n ntext = u'*' * len(ntext)\n ntext = self._get_bbcode(ntext)\n kw = self._get_line_options()\n cid = u'{}\\0{}\\0{}'.format(ntext, self.password, kw)\n texture = Cache_get('textinput.label', cid)\n\n if texture is None:\n # FIXME right now, we can't render very long line...\n # if we move on \"VBO\" version as fallback, we won't need to\n # do this.\n # try to find the maximum text we can handle\n label = Label(text=ntext, **kw)\n if text.find(u'\\n') > 0:\n label.text = u''\n else:\n label.text = ntext\n label.refresh()\n\n # ok, we found it.\n texture = label.texture\n Cache_append('textinput.label', cid, texture)\n label.text = ''\n return texture\n\n def _get_line_options(self):\n kw = super(CodeInput, self)._get_line_options()\n kw['markup'] = True\n kw['valign'] = 'top'\n kw['codeinput'] = repr(self.lexer)\n return kw\n\n def _get_text_width(self, text, tab_width, _label_cached):\n # Return the width of a text, according to the current line options.\n cid = u'{}\\0{}\\0{}'.format(text, self.password,\n self._get_line_options())\n width = Cache_get('textinput.width', cid)\n if width is not None:\n return width\n lbl = self._create_line_label(text)\n width = lbl.width\n Cache_append('textinput.width', cid, width)\n return width\n\n def _get_bbcode(self, ntext):\n # get bbcoded text for python\n try:\n ntext[0]\n # replace brackets with special chars that aren't highlighted\n # by pygment. can't use &bl; ... 
cause & is highlighted\n ntext = ntext.replace(u'[', u'\\x01;').replace(u']', u'\\x02;')\n ntext = highlight(ntext, self.lexer, self.formatter)\n ntext = ntext.replace(u'\\x01;', u'&bl;').replace(u'\\x02;', u'&br;')\n # replace special chars with &bl; and &br;\n ntext = ''.join((u'[color=', str(self.text_color), u']',\n ntext, u'[/color]'))\n ntext = ntext.replace(u'\\n', u'')\n return ntext\n except IndexError:\n return ''\n\n # overriden to prevent cursor position off screen\n def _cursor_offset(self):\n '''Get the cursor x offset on the current line\n '''\n offset = 0\n try:\n if self.cursor_col:\n offset = self._get_text_width(\n self._lines[self.cursor_row][:self.cursor_col])\n return offset\n except:\n pass\n finally:\n return offset\n\n def on_lexer(self, instance, value):\n self._trigger_refresh_text()\n\n def on_foreground_color(self, instance, text_color):\n if not self.use_text_color:\n self.use_text_color = True\n return\n self.text_color = get_hex_from_color(text_color)\n self.use_text_color = False\n self.foreground_color = (1, 1, 1, .999)\n self._trigger_refresh_text()\n\n\nif __name__ == '__main__':\n from kivy.extras.highlight import KivyLexer\n from kivy.app import App\n\n class CodeInputTest(App):\n def build(self):\n return CodeInput(lexer=KivyLexer(),\n font_name='data/fonts/DroidSansMono.ttf',\n font_size=12,\n text='''\n#:kivy 1.0\n\n<YourWidget>:\n canvas:\n Color:\n rgb: .5, .5, .5\n Rectangle:\n pos: self.pos\n size: self.size''')\n\n CodeInputTest().run()\n", "path": "kivy/uix/codeinput.py"}], "after_files": [{"content": "'''\nCode Input\n==========\n\n.. versionadded:: 1.5.0\n\n.. image:: images/codeinput.jpg\n\n\nThe :class:`CodeInput` provides a box of editable highlighted text like the one\nshown in the image.\n\nIt supports all the features provided by the :class:`~kivy.uix.textinput` as\nwell as code highlighting for `languages supported by pygments\n<http://pygments.org/docs/lexers/>`_ along with `KivyLexer` for\n:mod:`kivy.lang` highlighting.\n\nUsage example\n-------------\n\nTo create a CodeInput with highlighting for `KV language`::\n\n from kivy.uix.codeinput import CodeInput\n from kivy.extras.highlight import KivyLexer\n codeinput = CodeInput(lexer=KivyLexer())\n\nTo create a CodeInput with highlighting for `Cython`::\n\n from kivy.uix.codeinput import CodeInput\n from pygments.lexers import CythonLexer\n codeinput = CodeInput(lexer=CythonLexer())\n\n'''\n\n__all__ = ('CodeInput', )\n\nfrom pygments import highlight\nfrom pygments import lexers\nfrom pygments import styles\nfrom pygments.formatters import BBCodeFormatter\n\nfrom kivy.uix.textinput import TextInput\nfrom kivy.core.text.markup import MarkupLabel as Label\nfrom kivy.cache import Cache\nfrom kivy.properties import ObjectProperty, OptionProperty\nfrom kivy.utils import get_hex_from_color\n\nCache_get = Cache.get\nCache_append = Cache.append\n\n# TODO: color chooser for keywords/strings/...\n\n\nclass CodeInput(TextInput):\n '''CodeInput class, used for displaying highlighted code.\n '''\n\n lexer = ObjectProperty(None)\n '''This holds the selected Lexer used by pygments to highlight the code.\n\n\n :attr:`lexer` is an :class:`~kivy.properties.ObjectProperty` and\n defaults to `PythonLexer`.\n '''\n\n style_name = OptionProperty(\n 'default', options=list(styles.get_all_styles())\n )\n '''Name of the pygments style to use for formatting.\n\n :attr:`style_name` is an :class:`~kivy.properties.OptionProperty`\n and defaults to ``'default'``.\n\n '''\n\n style = ObjectProperty(None)\n '''The 
pygments style object to use for formatting.\n\n When ``style_name`` is set, this will be changed to the\n corresponding style object.\n\n :attr:`style` is a :class:`~kivy.properties.ObjectProperty` and\n defaults to ``None``\n\n '''\n\n def __init__(self, **kwargs):\n stylename = kwargs.get('style_name', 'default')\n style = kwargs['style'] if 'style' in kwargs \\\n else styles.get_style_by_name(stylename)\n self.formatter = BBCodeFormatter(style=style)\n self.lexer = lexers.PythonLexer()\n self.text_color = '#000000'\n self._label_cached = Label()\n self.use_text_color = True\n\n super(CodeInput, self).__init__(**kwargs)\n\n self._line_options = kw = self._get_line_options()\n self._label_cached = Label(**kw)\n # use text_color as foreground color\n text_color = kwargs.get('foreground_color')\n if text_color:\n self.text_color = get_hex_from_color(text_color)\n # set foreground to white to allow text colors to show\n # use text_color as the default color in bbcodes\n self.use_text_color = False\n self.foreground_color = [1, 1, 1, .999]\n if not kwargs.get('background_color'):\n self.background_color = [.9, .92, .92, 1]\n\n def on_style_name(self, *args):\n self.style = styles.get_style_by_name(self.style_name)\n\n def on_style(self, *args):\n self.formatter = BBCodeFormatter(style=self.style)\n self._trigger_update_graphics()\n\n def _create_line_label(self, text, hint=False):\n # Create a label from a text, using line options\n ntext = text.replace(u'\\n', u'').replace(u'\\t', u' ' * self.tab_width)\n if self.password and not hint: # Don't replace hint_text with *\n ntext = u'*' * len(ntext)\n ntext = self._get_bbcode(ntext)\n kw = self._get_line_options()\n cid = u'{}\\0{}\\0{}'.format(ntext, self.password, kw)\n texture = Cache_get('textinput.label', cid)\n\n if texture is None:\n # FIXME right now, we can't render very long line...\n # if we move on \"VBO\" version as fallback, we won't need to\n # do this.\n # try to find the maximum text we can handle\n label = Label(text=ntext, **kw)\n if text.find(u'\\n') > 0:\n label.text = u''\n else:\n label.text = ntext\n label.refresh()\n\n # ok, we found it.\n texture = label.texture\n Cache_append('textinput.label', cid, texture)\n label.text = ''\n return texture\n\n def _get_line_options(self):\n kw = super(CodeInput, self)._get_line_options()\n kw['markup'] = True\n kw['valign'] = 'top'\n kw['codeinput'] = repr(self.lexer)\n return kw\n\n def _get_text_width(self, text, tab_width, _label_cached):\n # Return the width of a text, according to the current line options.\n cid = u'{}\\0{}\\0{}'.format(text, self.password,\n self._get_line_options())\n width = Cache_get('textinput.width', cid)\n if width is not None:\n return width\n lbl = self._create_line_label(text)\n width = lbl.width\n Cache_append('textinput.width', cid, width)\n return width\n\n def _get_bbcode(self, ntext):\n # get bbcoded text for python\n try:\n ntext[0]\n # replace brackets with special chars that aren't highlighted\n # by pygment. can't use &bl; ... 
cause & is highlighted\n ntext = ntext.replace(u'[', u'\\x01').replace(u']', u'\\x02')\n ntext = highlight(ntext, self.lexer, self.formatter)\n ntext = ntext.replace(u'\\x01', u'&bl;').replace(u'\\x02', u'&br;')\n # replace special chars with &bl; and &br;\n ntext = ''.join((u'[color=', str(self.text_color), u']',\n ntext, u'[/color]'))\n ntext = ntext.replace(u'\\n', u'')\n return ntext\n except IndexError:\n return ''\n\n # overriden to prevent cursor position off screen\n def _cursor_offset(self):\n '''Get the cursor x offset on the current line\n '''\n offset = 0\n try:\n if self.cursor_col:\n offset = self._get_text_width(\n self._lines[self.cursor_row][:self.cursor_col])\n return offset\n except:\n pass\n finally:\n return offset\n\n def on_lexer(self, instance, value):\n self._trigger_refresh_text()\n\n def on_foreground_color(self, instance, text_color):\n if not self.use_text_color:\n self.use_text_color = True\n return\n self.text_color = get_hex_from_color(text_color)\n self.use_text_color = False\n self.foreground_color = (1, 1, 1, .999)\n self._trigger_refresh_text()\n\n\nif __name__ == '__main__':\n from kivy.extras.highlight import KivyLexer\n from kivy.app import App\n\n class CodeInputTest(App):\n def build(self):\n return CodeInput(lexer=KivyLexer(),\n font_name='data/fonts/DroidSansMono.ttf',\n font_size=12,\n text='''\n#:kivy 1.0\n\n<YourWidget>:\n canvas:\n Color:\n rgb: .5, .5, .5\n Rectangle:\n pos: self.pos\n size: self.size''')\n\n CodeInputTest().run()\n", "path": "kivy/uix/codeinput.py"}]}
| 2,815 | 270 |
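Editor's note on the CodeInput row above: the fix shrinks the bracket placeholders from `'\x01;'` / `'\x02;'` to the bare control characters before handing the text to pygments. A plausible reading (an assumption, not stated in the thread) is that the trailing `;` is itself meaningful to some lexers, for instance it starts a comment in INI syntax, so the two-character placeholder could be split across tokens and the reverse replacement then failed to match. The round-trip, isolated as a standalone helper for illustration (`bbcode_highlight` is a made-up name, not Kivy API):

```python
from pygments import highlight
from pygments.lexers import IniLexer
from pygments.formatters import BBCodeFormatter


def bbcode_highlight(text, lexer, formatter, color="#000000"):
    # Hide literal brackets from the lexer behind single control characters...
    text = text.replace("[", "\x01").replace("]", "\x02")
    text = highlight(text, lexer, formatter)
    # ...then turn them into Kivy markup entities once highlighting is done.
    text = text.replace("\x01", "&bl;").replace("\x02", "&br;")
    return "[color={}]{}[/color]".format(color, text.replace("\n", ""))


print(bbcode_highlight("[section]\nkey = value", IniLexer(), BBCodeFormatter()))
```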
gh_patches_debug_43593 | rasdani/github-patches | git_diff | deepchecks__deepchecks-1211 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEAT] Mixed Nulls - catch mixed nulls on non string columns (np.NaN/pd.NaN/Null/None/etc.)
**Docs**
API Reference should include what are the DEFAULT NULLS that are checks (currently this can be found only by going into source)
Example notebook can also print out these nulls for convenience.
**Null Types**
1. NaT nulls not caught @chelseatroy can you elaborate?
3. Seems that list currently includes only strings (and null character). Does this catch also null objects? (e.g. the python None. Numpy and pandas nulls. or any other null that is likely to find it's way due to multiple feature engineering backends)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/tabular/checks/integrity/mixed_nulls.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """Module contains Mixed Nulls check."""
12 from collections import defaultdict
13 from typing import Union, Dict, List, Iterable
14
15 import numpy as np
16 import pandas as pd
17
18 from deepchecks.tabular import Context, SingleDatasetCheck
19 from deepchecks.core import CheckResult, ConditionResult, ConditionCategory
20 from deepchecks.core.errors import DeepchecksValueError
21 from deepchecks.utils.dataframes import select_from_dataframe
22 from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df
23 from deepchecks.utils.strings import string_baseform, format_percent
24 from deepchecks.utils.typing import Hashable
25
26
27 __all__ = ['MixedNulls']
28
29
30 DEFAULT_NULL_VALUES = {'none', 'null', 'nan', 'na', '', '\x00', '\x00\x00'}
31
32
33 class MixedNulls(SingleDatasetCheck):
34 """Search for various types of null values in a string column(s), including string representations of null.
35
36 Parameters
37 ----------
38 null_string_list : Iterable[str] , default: None
39 List of strings to be considered alternative null representations
40 check_nan : bool , default: True
41 Whether to add to null list to check also NaN values
42 columns : Union[Hashable, List[Hashable]] , default: None
43 Columns to check, if none are given checks all columns except ignored ones.
44 ignore_columns : Union[Hashable, List[Hashable]] , default: None
45 Columns to ignore, if none given checks based on columns variable
46 n_top_columns : int , optional
47 amount of columns to show ordered by feature importance (date, index, label are first)
48 """
49
50 def __init__(
51 self,
52 null_string_list: Iterable[str] = None,
53 check_nan: bool = True,
54 columns: Union[Hashable, List[Hashable], None] = None,
55 ignore_columns: Union[Hashable, List[Hashable], None] = None,
56 n_top_columns: int = 10,
57 **kwargs
58 ):
59 super().__init__(**kwargs)
60 self.null_string_list = null_string_list
61 self.check_nan = check_nan
62 self.columns = columns
63 self.ignore_columns = ignore_columns
64 self.n_top_columns = n_top_columns
65
66 def run_logic(self, context: Context, dataset_type: str = 'train') -> CheckResult:
67 """Run check.
68
69 Returns
70 -------
71 CheckResult
72 DataFrame with columns ('Column Name', 'Value', 'Count', 'Percentage') for any column which
73 have more than 1 null values.
74 """
75 if dataset_type == 'train':
76 dataset = context.train
77 else:
78 dataset = context.test
79 df = dataset.data
80
81 df = select_from_dataframe(df, self.columns, self.ignore_columns)
82 null_string_list: set = self._validate_null_string_list(self.null_string_list, self.check_nan)
83
84 # Result value
85 display_array = []
86 result_dict = defaultdict(dict)
87
88 for column_name in list(df.columns):
89 column_data = df[column_name]
90 # TODO: Modify this once Dataset type casting mechanism is done
91 if column_data.dtype != pd.StringDtype:
92 continue
93 # Get counts of all values in series including NaNs, in sorted order of count
94 column_counts: pd.Series = column_data.value_counts(dropna=False)
95 # Filter out values not in the nulls list
96 null_counts = {value: count for value, count in column_counts.items()
97 if string_baseform(value) in null_string_list}
98 if len(null_counts) < 2:
99 continue
100 # Save the column info
101 for null_value, count in null_counts.items():
102 percent = count / len(column_data)
103 display_array.append([column_name, null_value, count, format_percent(percent)])
104 result_dict[column_name][null_value] = {'count': count, 'percent': percent}
105
106 # Create dataframe to display table
107 if display_array:
108 df_graph = pd.DataFrame(display_array, columns=['Column Name', 'Value', 'Count', 'Percent of data'])
109 df_graph = df_graph.set_index(['Column Name', 'Value'])
110 df_graph = column_importance_sorter_df(df_graph, dataset, context.features_importance,
111 self.n_top_columns, col='Column Name')
112 display = [N_TOP_MESSAGE % self.n_top_columns, df_graph]
113 else:
114 display = None
115
116 return CheckResult(result_dict, display=display)
117
118 def _validate_null_string_list(self, nsl, check_nan: bool) -> set:
119 """Validate the object given is a list of strings. If null is given return default list of null values.
120
121 Parameters
122 ----------
123 nsl
124 Object to validate
125 check_nan : bool
126 Whether to add to null list to check also NaN values
127 Returns
128 -------
129 set
130 Returns list of null values as set object
131 """
132 result: set
133 if nsl:
134 if not isinstance(nsl, Iterable):
135 raise DeepchecksValueError('null_string_list must be an iterable')
136 if len(nsl) == 0:
137 raise DeepchecksValueError("null_string_list can't be empty list")
138 if any((not isinstance(string, str) for string in nsl)):
139 raise DeepchecksValueError("null_string_list must contain only items of type 'str'")
140 result = set(nsl)
141 else:
142 # Default values
143 result = set(DEFAULT_NULL_VALUES)
144 if check_nan is None or check_nan is True:
145 result.add(np.NaN)
146
147 return result
148
149 def add_condition_different_nulls_not_more_than(self, max_allowed_null_types: int = 1):
150 """Add condition - require column not to have more than given number of different null values.
151
152 Parameters
153 ----------
154 max_allowed_null_types : int , default: 1
155 Number of different null value types which is the maximum allowed.
156 """
157 def condition(result: Dict) -> ConditionResult:
158 not_passing_columns = {}
159 for column in result.keys():
160 nulls = result[column]
161 num_nulls = len(nulls)
162 if num_nulls > max_allowed_null_types:
163 not_passing_columns[column] = num_nulls
164 if not_passing_columns:
165 return ConditionResult(ConditionCategory.FAIL,
166 'Found columns with amount of null types above threshold: '
167 f'{not_passing_columns}')
168 else:
169 return ConditionResult(ConditionCategory.PASS)
170
171 return self.add_condition(f'Not more than {max_allowed_null_types} different null types',
172 condition)
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepchecks/tabular/checks/integrity/mixed_nulls.py b/deepchecks/tabular/checks/integrity/mixed_nulls.py
--- a/deepchecks/tabular/checks/integrity/mixed_nulls.py
+++ b/deepchecks/tabular/checks/integrity/mixed_nulls.py
@@ -9,10 +9,9 @@
# ----------------------------------------------------------------------------
#
"""Module contains Mixed Nulls check."""
-from collections import defaultdict
+from collections import defaultdict, Counter
from typing import Union, Dict, List, Iterable
-import numpy as np
import pandas as pd
from deepchecks.tabular import Context, SingleDatasetCheck
@@ -22,6 +21,7 @@
from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df
from deepchecks.utils.strings import string_baseform, format_percent
from deepchecks.utils.typing import Hashable
+from pkg_resources import parse_version
__all__ = ['MixedNulls']
@@ -31,7 +31,7 @@
class MixedNulls(SingleDatasetCheck):
- """Search for various types of null values in a string column(s), including string representations of null.
+ """Search for various types of null values, including string representations of null.
Parameters
----------
@@ -79,7 +79,7 @@
df = dataset.data
df = select_from_dataframe(df, self.columns, self.ignore_columns)
- null_string_list: set = self._validate_null_string_list(self.null_string_list, self.check_nan)
+ null_string_list: set = self._validate_null_string_list(self.null_string_list)
# Result value
display_array = []
@@ -87,14 +87,17 @@
for column_name in list(df.columns):
column_data = df[column_name]
- # TODO: Modify this once Dataset type casting mechanism is done
- if column_data.dtype != pd.StringDtype:
- continue
- # Get counts of all values in series including NaNs, in sorted order of count
- column_counts: pd.Series = column_data.value_counts(dropna=False)
+ # Pandas version 1.3.X and lower doesn't support counting separate NaN values in value_counts
+ if parse_version(pd.__version__) < parse_version('1.4.0'):
+ column_counts = Counter(column_data)
+ else:
+ # Get counts of all values in series including NaNs
+ column_counts: pd.Series = column_data.value_counts(dropna=False)
+
# Filter out values not in the nulls list
null_counts = {value: count for value, count in column_counts.items()
- if string_baseform(value) in null_string_list}
+ if (self.check_nan and pd.isnull(value)) or (string_baseform(value) in null_string_list)}
+
if len(null_counts) < 2:
continue
# Save the column info
@@ -115,15 +118,14 @@
return CheckResult(result_dict, display=display)
- def _validate_null_string_list(self, nsl, check_nan: bool) -> set:
+ def _validate_null_string_list(self, nsl) -> set:
"""Validate the object given is a list of strings. If null is given return default list of null values.
Parameters
----------
nsl
Object to validate
- check_nan : bool
- Whether to add to null list to check also NaN values
+
Returns
-------
set
@@ -141,8 +143,6 @@
else:
# Default values
result = set(DEFAULT_NULL_VALUES)
- if check_nan is None or check_nan is True:
- result.add(np.NaN)
return result
|
{"golden_diff": "diff --git a/deepchecks/tabular/checks/integrity/mixed_nulls.py b/deepchecks/tabular/checks/integrity/mixed_nulls.py\n--- a/deepchecks/tabular/checks/integrity/mixed_nulls.py\n+++ b/deepchecks/tabular/checks/integrity/mixed_nulls.py\n@@ -9,10 +9,9 @@\n # ----------------------------------------------------------------------------\n #\n \"\"\"Module contains Mixed Nulls check.\"\"\"\n-from collections import defaultdict\n+from collections import defaultdict, Counter\n from typing import Union, Dict, List, Iterable\n \n-import numpy as np\n import pandas as pd\n \n from deepchecks.tabular import Context, SingleDatasetCheck\n@@ -22,6 +21,7 @@\n from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df\n from deepchecks.utils.strings import string_baseform, format_percent\n from deepchecks.utils.typing import Hashable\n+from pkg_resources import parse_version\n \n \n __all__ = ['MixedNulls']\n@@ -31,7 +31,7 @@\n \n \n class MixedNulls(SingleDatasetCheck):\n- \"\"\"Search for various types of null values in a string column(s), including string representations of null.\n+ \"\"\"Search for various types of null values, including string representations of null.\n \n Parameters\n ----------\n@@ -79,7 +79,7 @@\n df = dataset.data\n \n df = select_from_dataframe(df, self.columns, self.ignore_columns)\n- null_string_list: set = self._validate_null_string_list(self.null_string_list, self.check_nan)\n+ null_string_list: set = self._validate_null_string_list(self.null_string_list)\n \n # Result value\n display_array = []\n@@ -87,14 +87,17 @@\n \n for column_name in list(df.columns):\n column_data = df[column_name]\n- # TODO: Modify this once Dataset type casting mechanism is done\n- if column_data.dtype != pd.StringDtype:\n- continue\n- # Get counts of all values in series including NaNs, in sorted order of count\n- column_counts: pd.Series = column_data.value_counts(dropna=False)\n+ # Pandas version 1.3.X and lower doesn't support counting separate NaN values in value_counts\n+ if parse_version(pd.__version__) < parse_version('1.4.0'):\n+ column_counts = Counter(column_data)\n+ else:\n+ # Get counts of all values in series including NaNs\n+ column_counts: pd.Series = column_data.value_counts(dropna=False)\n+\n # Filter out values not in the nulls list\n null_counts = {value: count for value, count in column_counts.items()\n- if string_baseform(value) in null_string_list}\n+ if (self.check_nan and pd.isnull(value)) or (string_baseform(value) in null_string_list)}\n+\n if len(null_counts) < 2:\n continue\n # Save the column info\n@@ -115,15 +118,14 @@\n \n return CheckResult(result_dict, display=display)\n \n- def _validate_null_string_list(self, nsl, check_nan: bool) -> set:\n+ def _validate_null_string_list(self, nsl) -> set:\n \"\"\"Validate the object given is a list of strings. 
If null is given return default list of null values.\n \n Parameters\n ----------\n nsl\n Object to validate\n- check_nan : bool\n- Whether to add to null list to check also NaN values\n+\n Returns\n -------\n set\n@@ -141,8 +143,6 @@\n else:\n # Default values\n result = set(DEFAULT_NULL_VALUES)\n- if check_nan is None or check_nan is True:\n- result.add(np.NaN)\n \n return result\n", "issue": "[FEAT] Mixed Nulls - catch mixed nulls on non string columns (np.NaN/pd.NaN/Null/None/etc.)\n**Docs**\r\n\r\nAPI Reference should include what are the DEFAULT NULLS that are checks (currently this can be found only by going into source)\r\nExample notebook can also print out these nulls for convenience.\r\n\r\n**Null Types**\r\n1. NaT nulls not caught @chelseatroy can you elaborate?\r\n3. Seems that list currently includes only strings (and null character). Does this catch also null objects? (e.g. the python None. Numpy and pandas nulls. or any other null that is likely to find it's way due to multiple feature engineering backends)\r\n\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Module contains Mixed Nulls check.\"\"\"\nfrom collections import defaultdict\nfrom typing import Union, Dict, List, Iterable\n\nimport numpy as np\nimport pandas as pd\n\nfrom deepchecks.tabular import Context, SingleDatasetCheck\nfrom deepchecks.core import CheckResult, ConditionResult, ConditionCategory\nfrom deepchecks.core.errors import DeepchecksValueError\nfrom deepchecks.utils.dataframes import select_from_dataframe\nfrom deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df\nfrom deepchecks.utils.strings import string_baseform, format_percent\nfrom deepchecks.utils.typing import Hashable\n\n\n__all__ = ['MixedNulls']\n\n\nDEFAULT_NULL_VALUES = {'none', 'null', 'nan', 'na', '', '\\x00', '\\x00\\x00'}\n\n\nclass MixedNulls(SingleDatasetCheck):\n \"\"\"Search for various types of null values in a string column(s), including string representations of null.\n\n Parameters\n ----------\n null_string_list : Iterable[str] , default: None\n List of strings to be considered alternative null representations\n check_nan : bool , default: True\n Whether to add to null list to check also NaN values\n columns : Union[Hashable, List[Hashable]] , default: None\n Columns to check, if none are given checks all columns except ignored ones.\n ignore_columns : Union[Hashable, List[Hashable]] , default: None\n Columns to ignore, if none given checks based on columns variable\n n_top_columns : int , optional\n amount of columns to show ordered by feature importance (date, index, label are first)\n \"\"\"\n\n def __init__(\n self,\n null_string_list: Iterable[str] = None,\n check_nan: bool = True,\n columns: Union[Hashable, List[Hashable], None] = None,\n ignore_columns: Union[Hashable, List[Hashable], None] = None,\n n_top_columns: int = 10,\n **kwargs\n ):\n super().__init__(**kwargs)\n self.null_string_list = null_string_list\n self.check_nan = check_nan\n self.columns = columns\n 
self.ignore_columns = ignore_columns\n self.n_top_columns = n_top_columns\n\n def run_logic(self, context: Context, dataset_type: str = 'train') -> CheckResult:\n \"\"\"Run check.\n\n Returns\n -------\n CheckResult\n DataFrame with columns ('Column Name', 'Value', 'Count', 'Percentage') for any column which\n have more than 1 null values.\n \"\"\"\n if dataset_type == 'train':\n dataset = context.train\n else:\n dataset = context.test\n df = dataset.data\n\n df = select_from_dataframe(df, self.columns, self.ignore_columns)\n null_string_list: set = self._validate_null_string_list(self.null_string_list, self.check_nan)\n\n # Result value\n display_array = []\n result_dict = defaultdict(dict)\n\n for column_name in list(df.columns):\n column_data = df[column_name]\n # TODO: Modify this once Dataset type casting mechanism is done\n if column_data.dtype != pd.StringDtype:\n continue\n # Get counts of all values in series including NaNs, in sorted order of count\n column_counts: pd.Series = column_data.value_counts(dropna=False)\n # Filter out values not in the nulls list\n null_counts = {value: count for value, count in column_counts.items()\n if string_baseform(value) in null_string_list}\n if len(null_counts) < 2:\n continue\n # Save the column info\n for null_value, count in null_counts.items():\n percent = count / len(column_data)\n display_array.append([column_name, null_value, count, format_percent(percent)])\n result_dict[column_name][null_value] = {'count': count, 'percent': percent}\n\n # Create dataframe to display table\n if display_array:\n df_graph = pd.DataFrame(display_array, columns=['Column Name', 'Value', 'Count', 'Percent of data'])\n df_graph = df_graph.set_index(['Column Name', 'Value'])\n df_graph = column_importance_sorter_df(df_graph, dataset, context.features_importance,\n self.n_top_columns, col='Column Name')\n display = [N_TOP_MESSAGE % self.n_top_columns, df_graph]\n else:\n display = None\n\n return CheckResult(result_dict, display=display)\n\n def _validate_null_string_list(self, nsl, check_nan: bool) -> set:\n \"\"\"Validate the object given is a list of strings. 
If null is given return default list of null values.\n\n Parameters\n ----------\n nsl\n Object to validate\n check_nan : bool\n Whether to add to null list to check also NaN values\n Returns\n -------\n set\n Returns list of null values as set object\n \"\"\"\n result: set\n if nsl:\n if not isinstance(nsl, Iterable):\n raise DeepchecksValueError('null_string_list must be an iterable')\n if len(nsl) == 0:\n raise DeepchecksValueError(\"null_string_list can't be empty list\")\n if any((not isinstance(string, str) for string in nsl)):\n raise DeepchecksValueError(\"null_string_list must contain only items of type 'str'\")\n result = set(nsl)\n else:\n # Default values\n result = set(DEFAULT_NULL_VALUES)\n if check_nan is None or check_nan is True:\n result.add(np.NaN)\n\n return result\n\n def add_condition_different_nulls_not_more_than(self, max_allowed_null_types: int = 1):\n \"\"\"Add condition - require column not to have more than given number of different null values.\n\n Parameters\n ----------\n max_allowed_null_types : int , default: 1\n Number of different null value types which is the maximum allowed.\n \"\"\"\n def condition(result: Dict) -> ConditionResult:\n not_passing_columns = {}\n for column in result.keys():\n nulls = result[column]\n num_nulls = len(nulls)\n if num_nulls > max_allowed_null_types:\n not_passing_columns[column] = num_nulls\n if not_passing_columns:\n return ConditionResult(ConditionCategory.FAIL,\n 'Found columns with amount of null types above threshold: '\n f'{not_passing_columns}')\n else:\n return ConditionResult(ConditionCategory.PASS)\n\n return self.add_condition(f'Not more than {max_allowed_null_types} different null types',\n condition)\n", "path": "deepchecks/tabular/checks/integrity/mixed_nulls.py"}], "after_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Module contains Mixed Nulls check.\"\"\"\nfrom collections import defaultdict, Counter\nfrom typing import Union, Dict, List, Iterable\n\nimport pandas as pd\n\nfrom deepchecks.tabular import Context, SingleDatasetCheck\nfrom deepchecks.core import CheckResult, ConditionResult, ConditionCategory\nfrom deepchecks.core.errors import DeepchecksValueError\nfrom deepchecks.utils.dataframes import select_from_dataframe\nfrom deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df\nfrom deepchecks.utils.strings import string_baseform, format_percent\nfrom deepchecks.utils.typing import Hashable\nfrom pkg_resources import parse_version\n\n\n__all__ = ['MixedNulls']\n\n\nDEFAULT_NULL_VALUES = {'none', 'null', 'nan', 'na', '', '\\x00', '\\x00\\x00'}\n\n\nclass MixedNulls(SingleDatasetCheck):\n \"\"\"Search for various types of null values, including string representations of null.\n\n Parameters\n ----------\n null_string_list : Iterable[str] , default: None\n List of strings to be considered alternative null representations\n check_nan : bool , default: True\n Whether to add to null list to check also NaN values\n columns : Union[Hashable, List[Hashable]] , default: None\n Columns to check, if none are given checks all columns except ignored ones.\n ignore_columns : Union[Hashable, List[Hashable]] , default: None\n Columns to ignore, if none given checks based on columns variable\n n_top_columns : int , optional\n amount of columns to show ordered by feature importance (date, index, label are first)\n \"\"\"\n\n def __init__(\n self,\n null_string_list: Iterable[str] = None,\n check_nan: bool = True,\n columns: Union[Hashable, List[Hashable], None] = None,\n ignore_columns: Union[Hashable, List[Hashable], None] = None,\n n_top_columns: int = 10,\n **kwargs\n ):\n super().__init__(**kwargs)\n self.null_string_list = null_string_list\n self.check_nan = check_nan\n self.columns = columns\n self.ignore_columns = ignore_columns\n self.n_top_columns = n_top_columns\n\n def run_logic(self, context: Context, dataset_type: str = 'train') -> CheckResult:\n \"\"\"Run check.\n\n Returns\n -------\n CheckResult\n DataFrame with columns ('Column Name', 'Value', 'Count', 'Percentage') for any column which\n have more than 1 null values.\n \"\"\"\n if dataset_type == 'train':\n dataset = context.train\n else:\n dataset = context.test\n df = dataset.data\n\n df = select_from_dataframe(df, self.columns, self.ignore_columns)\n null_string_list: set = self._validate_null_string_list(self.null_string_list)\n\n # Result value\n display_array = []\n result_dict = defaultdict(dict)\n\n for column_name in list(df.columns):\n column_data = df[column_name]\n # Pandas version 1.3.X and lower doesn't support counting separate NaN values in value_counts\n if parse_version(pd.__version__) < parse_version('1.4.0'):\n column_counts = Counter(column_data)\n else:\n # Get counts of all values in series including NaNs\n column_counts: pd.Series = column_data.value_counts(dropna=False)\n\n # Filter out values not in the nulls list\n null_counts = {value: count for value, count in column_counts.items()\n if (self.check_nan and pd.isnull(value)) or (string_baseform(value) in null_string_list)}\n\n if len(null_counts) < 2:\n continue\n # Save the column info\n for null_value, count in null_counts.items():\n percent = count / len(column_data)\n display_array.append([column_name, 
null_value, count, format_percent(percent)])\n result_dict[column_name][null_value] = {'count': count, 'percent': percent}\n\n # Create dataframe to display table\n if display_array:\n df_graph = pd.DataFrame(display_array, columns=['Column Name', 'Value', 'Count', 'Percent of data'])\n df_graph = df_graph.set_index(['Column Name', 'Value'])\n df_graph = column_importance_sorter_df(df_graph, dataset, context.features_importance,\n self.n_top_columns, col='Column Name')\n display = [N_TOP_MESSAGE % self.n_top_columns, df_graph]\n else:\n display = None\n\n return CheckResult(result_dict, display=display)\n\n def _validate_null_string_list(self, nsl) -> set:\n \"\"\"Validate the object given is a list of strings. If null is given return default list of null values.\n\n Parameters\n ----------\n nsl\n Object to validate\n\n Returns\n -------\n set\n Returns list of null values as set object\n \"\"\"\n result: set\n if nsl:\n if not isinstance(nsl, Iterable):\n raise DeepchecksValueError('null_string_list must be an iterable')\n if len(nsl) == 0:\n raise DeepchecksValueError(\"null_string_list can't be empty list\")\n if any((not isinstance(string, str) for string in nsl)):\n raise DeepchecksValueError(\"null_string_list must contain only items of type 'str'\")\n result = set(nsl)\n else:\n # Default values\n result = set(DEFAULT_NULL_VALUES)\n\n return result\n\n def add_condition_different_nulls_not_more_than(self, max_allowed_null_types: int = 1):\n \"\"\"Add condition - require column not to have more than given number of different null values.\n\n Parameters\n ----------\n max_allowed_null_types : int , default: 1\n Number of different null value types which is the maximum allowed.\n \"\"\"\n def condition(result: Dict) -> ConditionResult:\n not_passing_columns = {}\n for column in result.keys():\n nulls = result[column]\n num_nulls = len(nulls)\n if num_nulls > max_allowed_null_types:\n not_passing_columns[column] = num_nulls\n if not_passing_columns:\n return ConditionResult(ConditionCategory.FAIL,\n 'Found columns with amount of null types above threshold: '\n f'{not_passing_columns}')\n else:\n return ConditionResult(ConditionCategory.PASS)\n\n return self.add_condition(f'Not more than {max_allowed_null_types} different null types',\n condition)\n", "path": "deepchecks/tabular/checks/integrity/mixed_nulls.py"}]}
| 2,333 | 822 |
gh_patches_debug_64869
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-2345
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release Kedro `0.18.5`
### Description
Release Kedro `0.18.5` which will contain lots of new features for configuration. The release depends on the following tickets to be finished:
- [x] BLOCKER: https://github.com/kedro-org/kedro/issues/2255
- [x] #1909 (Docs)
- [x] #2148
- [x] #2170
- [x] #2225
Initially we wanted to include the below issues as well, but the implementation turned out to be trickier than expected, so we'll take more time to investigate a solution and won't let it block the release.
- [x] #2146
- [x] #2212
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/__init__.py`
Content:
```
1 """Kedro is a framework that makes it easy to build robust and scalable
2 data pipelines by providing uniform project templates, data abstraction,
3 configuration and pipeline assembly.
4 """
5
6 __version__ = "0.18.4"
7
8
9 import logging
10
11 logging.getLogger(__name__).addHandler(logging.NullHandler())
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kedro/__init__.py b/kedro/__init__.py
--- a/kedro/__init__.py
+++ b/kedro/__init__.py
@@ -3,7 +3,7 @@
configuration and pipeline assembly.
"""
-__version__ = "0.18.4"
+__version__ = "0.18.5"
import logging
|
{"golden_diff": "diff --git a/kedro/__init__.py b/kedro/__init__.py\n--- a/kedro/__init__.py\n+++ b/kedro/__init__.py\n@@ -3,7 +3,7 @@\n configuration and pipeline assembly.\n \"\"\"\n \n-__version__ = \"0.18.4\"\n+__version__ = \"0.18.5\"\n \n \n import logging\n", "issue": "Release Kedro `0.18.5`\n### Description\r\n\r\nRelease Kedro `0.18.5` which will contain lots of new features for configuration. The release depends on the following tickets to be finished:\r\n\r\n- [x] BLOCKER: https://github.com/kedro-org/kedro/issues/2255\r\n- [x] #1909 (Docs)\r\n- [x] #2148 \r\n- [x] #2170\r\n- [x] #2225 \r\n\r\nInitially we wanted to include the below issues as well, but the implementation turned out to be trickier than expected, so we'll take more time to investigate a solution and won't let it block the release.\r\n- [x] #2146 \r\n- [x] #2212 \r\n\n", "before_files": [{"content": "\"\"\"Kedro is a framework that makes it easy to build robust and scalable\ndata pipelines by providing uniform project templates, data abstraction,\nconfiguration and pipeline assembly.\n\"\"\"\n\n__version__ = \"0.18.4\"\n\n\nimport logging\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "kedro/__init__.py"}], "after_files": [{"content": "\"\"\"Kedro is a framework that makes it easy to build robust and scalable\ndata pipelines by providing uniform project templates, data abstraction,\nconfiguration and pipeline assembly.\n\"\"\"\n\n__version__ = \"0.18.5\"\n\n\nimport logging\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "kedro/__init__.py"}]}
| 518 | 87 |
gh_patches_debug_9959
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-6781
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
An editing error in the Tutorial Documentation
Thank you for your contribution. I am a newbie to this project, and I may have found an editing error in the latest [Tutorial Documentation](https://mmdetection.readthedocs.io/en/latest/2_new_data_model.html). The detailed description is shown in the figure.

it should be `python tools/test.py configs/balloon/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_balloon.py work_dirs/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_balloon/latest.pth --eval bbox segm`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/train.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import argparse
3 import copy
4 import os
5 import os.path as osp
6 import time
7 import warnings
8
9 import mmcv
10 import torch
11 from mmcv import Config, DictAction
12 from mmcv.runner import get_dist_info, init_dist
13 from mmcv.utils import get_git_hash
14
15 from mmdet import __version__
16 from mmdet.apis import init_random_seed, set_random_seed, train_detector
17 from mmdet.datasets import build_dataset
18 from mmdet.models import build_detector
19 from mmdet.utils import collect_env, get_root_logger
20
21
22 def parse_args():
23 parser = argparse.ArgumentParser(description='Train a detector')
24 parser.add_argument('config', help='train config file path')
25 parser.add_argument('--work-dir', help='the dir to save logs and models')
26 parser.add_argument(
27 '--resume-from', help='the checkpoint file to resume from')
28 parser.add_argument(
29 '--no-validate',
30 action='store_true',
31 help='whether not to evaluate the checkpoint during training')
32 group_gpus = parser.add_mutually_exclusive_group()
33 group_gpus.add_argument(
34 '--gpus',
35 type=int,
36 help='number of gpus to use '
37 '(only applicable to non-distributed training)')
38 group_gpus.add_argument(
39 '--gpu-ids',
40 type=int,
41 nargs='+',
42 help='ids of gpus to use '
43 '(only applicable to non-distributed training)')
44 parser.add_argument('--seed', type=int, default=None, help='random seed')
45 parser.add_argument(
46 '--deterministic',
47 action='store_true',
48 help='whether to set deterministic options for CUDNN backend.')
49 parser.add_argument(
50 '--options',
51 nargs='+',
52 action=DictAction,
53 help='override some settings in the used config, the key-value pair '
54 'in xxx=yyy format will be merged into config file (deprecate), '
55 'change to --cfg-options instead.')
56 parser.add_argument(
57 '--cfg-options',
58 nargs='+',
59 action=DictAction,
60 help='override some settings in the used config, the key-value pair '
61 'in xxx=yyy format will be merged into config file. If the value to '
62 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
63 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
64 'Note that the quotation marks are necessary and that no white space '
65 'is allowed.')
66 parser.add_argument(
67 '--launcher',
68 choices=['none', 'pytorch', 'slurm', 'mpi'],
69 default='none',
70 help='job launcher')
71 parser.add_argument('--local_rank', type=int, default=0)
72 args = parser.parse_args()
73 if 'LOCAL_RANK' not in os.environ:
74 os.environ['LOCAL_RANK'] = str(args.local_rank)
75
76 if args.options and args.cfg_options:
77 raise ValueError(
78 '--options and --cfg-options cannot be both '
79 'specified, --options is deprecated in favor of --cfg-options')
80 if args.options:
81 warnings.warn('--options is deprecated in favor of --cfg-options')
82 args.cfg_options = args.options
83
84 return args
85
86
87 def main():
88 args = parse_args()
89
90 cfg = Config.fromfile(args.config)
91 if args.cfg_options is not None:
92 cfg.merge_from_dict(args.cfg_options)
93 # set cudnn_benchmark
94 if cfg.get('cudnn_benchmark', False):
95 torch.backends.cudnn.benchmark = True
96
97 # work_dir is determined in this priority: CLI > segment in file > filename
98 if args.work_dir is not None:
99 # update configs according to CLI args if args.work_dir is not None
100 cfg.work_dir = args.work_dir
101 elif cfg.get('work_dir', None) is None:
102 # use config filename as default work_dir if cfg.work_dir is None
103 cfg.work_dir = osp.join('./work_dirs',
104 osp.splitext(osp.basename(args.config))[0])
105 if args.resume_from is not None:
106 cfg.resume_from = args.resume_from
107 if args.gpu_ids is not None:
108 cfg.gpu_ids = args.gpu_ids
109 else:
110 cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)
111
112 # init distributed env first, since logger depends on the dist info.
113 if args.launcher == 'none':
114 distributed = False
115 else:
116 distributed = True
117 init_dist(args.launcher, **cfg.dist_params)
118 # re-set gpu_ids with distributed training mode
119 _, world_size = get_dist_info()
120 cfg.gpu_ids = range(world_size)
121
122 # create work_dir
123 mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
124 # dump config
125 cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config)))
126 # init the logger before other steps
127 timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
128 log_file = osp.join(cfg.work_dir, f'{timestamp}.log')
129 logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)
130
131 # init the meta dict to record some important information such as
132 # environment info and seed, which will be logged
133 meta = dict()
134 # log env info
135 env_info_dict = collect_env()
136 env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])
137 dash_line = '-' * 60 + '\n'
138 logger.info('Environment info:\n' + dash_line + env_info + '\n' +
139 dash_line)
140 meta['env_info'] = env_info
141 meta['config'] = cfg.pretty_text
142 # log some basic info
143 logger.info(f'Distributed training: {distributed}')
144 logger.info(f'Config:\n{cfg.pretty_text}')
145
146 # set random seeds
147 seed = init_random_seed(args.seed)
148 logger.info(f'Set random seed to {seed}, '
149 f'deterministic: {args.deterministic}')
150 set_random_seed(seed, deterministic=args.deterministic)
151 cfg.seed = seed
152 meta['seed'] = seed
153 meta['exp_name'] = osp.basename(args.config)
154
155 model = build_detector(
156 cfg.model,
157 train_cfg=cfg.get('train_cfg'),
158 test_cfg=cfg.get('test_cfg'))
159 model.init_weights()
160
161 datasets = [build_dataset(cfg.data.train)]
162 if len(cfg.workflow) == 2:
163 val_dataset = copy.deepcopy(cfg.data.val)
164 val_dataset.pipeline = cfg.data.train.pipeline
165 datasets.append(build_dataset(val_dataset))
166 if cfg.checkpoint_config is not None:
167 # save mmdet version, config file content and class names in
168 # checkpoints as meta data
169 cfg.checkpoint_config.meta = dict(
170 mmdet_version=__version__ + get_git_hash()[:7],
171 CLASSES=datasets[0].CLASSES)
172 # add an attribute for visualization convenience
173 model.CLASSES = datasets[0].CLASSES
174 train_detector(
175 model,
176 datasets,
177 cfg,
178 distributed=distributed,
179 validate=(not args.no_validate),
180 timestamp=timestamp,
181 meta=meta)
182
183
184 if __name__ == '__main__':
185 main()
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/train.py b/tools/train.py
--- a/tools/train.py
+++ b/tools/train.py
@@ -112,6 +112,12 @@
# init distributed env first, since logger depends on the dist info.
if args.launcher == 'none':
distributed = False
+ if len(cfg.gpu_ids) > 1:
+ warnings.warn(
+ f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '
+ f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '
+ 'non-distribute training time.')
+ cfg.gpu_ids = cfg.gpu_ids[0:1]
else:
distributed = True
init_dist(args.launcher, **cfg.dist_params)
|
{"golden_diff": "diff --git a/tools/train.py b/tools/train.py\n--- a/tools/train.py\n+++ b/tools/train.py\n@@ -112,6 +112,12 @@\n # init distributed env first, since logger depends on the dist info.\n if args.launcher == 'none':\n distributed = False\n+ if len(cfg.gpu_ids) > 1:\n+ warnings.warn(\n+ f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '\n+ f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '\n+ 'non-distribute training time.')\n+ cfg.gpu_ids = cfg.gpu_ids[0:1]\n else:\n distributed = True\n init_dist(args.launcher, **cfg.dist_params)\n", "issue": "An editing error in the Tutorial Documentation\nThank you for ur contribution, I am a newbie to this project, maybe I found an editing error in the latest [Tutorial Documentation](https://mmdetection.readthedocs.io/en/latest/2_new_data_model.html). The detailed description is shown in the figure.\r\n\r\n\r\n\r\nit should be `python tools/test.py configs/balloon/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_balloon.py work_dirs/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_balloon/latest.pth --eval bbox segm`\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport argparse\nimport copy\nimport os\nimport os.path as osp\nimport time\nimport warnings\n\nimport mmcv\nimport torch\nfrom mmcv import Config, DictAction\nfrom mmcv.runner import get_dist_info, init_dist\nfrom mmcv.utils import get_git_hash\n\nfrom mmdet import __version__\nfrom mmdet.apis import init_random_seed, set_random_seed, train_detector\nfrom mmdet.datasets import build_dataset\nfrom mmdet.models import build_detector\nfrom mmdet.utils import collect_env, get_root_logger\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='Train a detector')\n parser.add_argument('config', help='train config file path')\n parser.add_argument('--work-dir', help='the dir to save logs and models')\n parser.add_argument(\n '--resume-from', help='the checkpoint file to resume from')\n parser.add_argument(\n '--no-validate',\n action='store_true',\n help='whether not to evaluate the checkpoint during training')\n group_gpus = parser.add_mutually_exclusive_group()\n group_gpus.add_argument(\n '--gpus',\n type=int,\n help='number of gpus to use '\n '(only applicable to non-distributed training)')\n group_gpus.add_argument(\n '--gpu-ids',\n type=int,\n nargs='+',\n help='ids of gpus to use '\n '(only applicable to non-distributed training)')\n parser.add_argument('--seed', type=int, default=None, help='random seed')\n parser.add_argument(\n '--deterministic',\n action='store_true',\n help='whether to set deterministic options for CUDNN backend.')\n parser.add_argument(\n '--options',\n nargs='+',\n action=DictAction,\n help='override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file (deprecate), '\n 'change to --cfg-options instead.')\n parser.add_argument(\n '--cfg-options',\n nargs='+',\n action=DictAction,\n help='override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file. If the value to '\n 'be overwritten is a list, it should be like key=\"[a,b]\" or key=a,b '\n 'It also allows nested list/tuple values, e.g. 
key=\"[(a,b),(c,d)]\" '\n 'Note that the quotation marks are necessary and that no white space '\n 'is allowed.')\n parser.add_argument(\n '--launcher',\n choices=['none', 'pytorch', 'slurm', 'mpi'],\n default='none',\n help='job launcher')\n parser.add_argument('--local_rank', type=int, default=0)\n args = parser.parse_args()\n if 'LOCAL_RANK' not in os.environ:\n os.environ['LOCAL_RANK'] = str(args.local_rank)\n\n if args.options and args.cfg_options:\n raise ValueError(\n '--options and --cfg-options cannot be both '\n 'specified, --options is deprecated in favor of --cfg-options')\n if args.options:\n warnings.warn('--options is deprecated in favor of --cfg-options')\n args.cfg_options = args.options\n\n return args\n\n\ndef main():\n args = parse_args()\n\n cfg = Config.fromfile(args.config)\n if args.cfg_options is not None:\n cfg.merge_from_dict(args.cfg_options)\n # set cudnn_benchmark\n if cfg.get('cudnn_benchmark', False):\n torch.backends.cudnn.benchmark = True\n\n # work_dir is determined in this priority: CLI > segment in file > filename\n if args.work_dir is not None:\n # update configs according to CLI args if args.work_dir is not None\n cfg.work_dir = args.work_dir\n elif cfg.get('work_dir', None) is None:\n # use config filename as default work_dir if cfg.work_dir is None\n cfg.work_dir = osp.join('./work_dirs',\n osp.splitext(osp.basename(args.config))[0])\n if args.resume_from is not None:\n cfg.resume_from = args.resume_from\n if args.gpu_ids is not None:\n cfg.gpu_ids = args.gpu_ids\n else:\n cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)\n\n # init distributed env first, since logger depends on the dist info.\n if args.launcher == 'none':\n distributed = False\n else:\n distributed = True\n init_dist(args.launcher, **cfg.dist_params)\n # re-set gpu_ids with distributed training mode\n _, world_size = get_dist_info()\n cfg.gpu_ids = range(world_size)\n\n # create work_dir\n mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))\n # dump config\n cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config)))\n # init the logger before other steps\n timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())\n log_file = osp.join(cfg.work_dir, f'{timestamp}.log')\n logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)\n\n # init the meta dict to record some important information such as\n # environment info and seed, which will be logged\n meta = dict()\n # log env info\n env_info_dict = collect_env()\n env_info = '\\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])\n dash_line = '-' * 60 + '\\n'\n logger.info('Environment info:\\n' + dash_line + env_info + '\\n' +\n dash_line)\n meta['env_info'] = env_info\n meta['config'] = cfg.pretty_text\n # log some basic info\n logger.info(f'Distributed training: {distributed}')\n logger.info(f'Config:\\n{cfg.pretty_text}')\n\n # set random seeds\n seed = init_random_seed(args.seed)\n logger.info(f'Set random seed to {seed}, '\n f'deterministic: {args.deterministic}')\n set_random_seed(seed, deterministic=args.deterministic)\n cfg.seed = seed\n meta['seed'] = seed\n meta['exp_name'] = osp.basename(args.config)\n\n model = build_detector(\n cfg.model,\n train_cfg=cfg.get('train_cfg'),\n test_cfg=cfg.get('test_cfg'))\n model.init_weights()\n\n datasets = [build_dataset(cfg.data.train)]\n if len(cfg.workflow) == 2:\n val_dataset = copy.deepcopy(cfg.data.val)\n val_dataset.pipeline = cfg.data.train.pipeline\n datasets.append(build_dataset(val_dataset))\n if cfg.checkpoint_config is not None:\n # save 
mmdet version, config file content and class names in\n # checkpoints as meta data\n cfg.checkpoint_config.meta = dict(\n mmdet_version=__version__ + get_git_hash()[:7],\n CLASSES=datasets[0].CLASSES)\n # add an attribute for visualization convenience\n model.CLASSES = datasets[0].CLASSES\n train_detector(\n model,\n datasets,\n cfg,\n distributed=distributed,\n validate=(not args.no_validate),\n timestamp=timestamp,\n meta=meta)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/train.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport argparse\nimport copy\nimport os\nimport os.path as osp\nimport time\nimport warnings\n\nimport mmcv\nimport torch\nfrom mmcv import Config, DictAction\nfrom mmcv.runner import get_dist_info, init_dist\nfrom mmcv.utils import get_git_hash\n\nfrom mmdet import __version__\nfrom mmdet.apis import init_random_seed, set_random_seed, train_detector\nfrom mmdet.datasets import build_dataset\nfrom mmdet.models import build_detector\nfrom mmdet.utils import collect_env, get_root_logger\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='Train a detector')\n parser.add_argument('config', help='train config file path')\n parser.add_argument('--work-dir', help='the dir to save logs and models')\n parser.add_argument(\n '--resume-from', help='the checkpoint file to resume from')\n parser.add_argument(\n '--no-validate',\n action='store_true',\n help='whether not to evaluate the checkpoint during training')\n group_gpus = parser.add_mutually_exclusive_group()\n group_gpus.add_argument(\n '--gpus',\n type=int,\n help='number of gpus to use '\n '(only applicable to non-distributed training)')\n group_gpus.add_argument(\n '--gpu-ids',\n type=int,\n nargs='+',\n help='ids of gpus to use '\n '(only applicable to non-distributed training)')\n parser.add_argument('--seed', type=int, default=None, help='random seed')\n parser.add_argument(\n '--deterministic',\n action='store_true',\n help='whether to set deterministic options for CUDNN backend.')\n parser.add_argument(\n '--options',\n nargs='+',\n action=DictAction,\n help='override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file (deprecate), '\n 'change to --cfg-options instead.')\n parser.add_argument(\n '--cfg-options',\n nargs='+',\n action=DictAction,\n help='override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file. If the value to '\n 'be overwritten is a list, it should be like key=\"[a,b]\" or key=a,b '\n 'It also allows nested list/tuple values, e.g. 
key=\"[(a,b),(c,d)]\" '\n 'Note that the quotation marks are necessary and that no white space '\n 'is allowed.')\n parser.add_argument(\n '--launcher',\n choices=['none', 'pytorch', 'slurm', 'mpi'],\n default='none',\n help='job launcher')\n parser.add_argument('--local_rank', type=int, default=0)\n args = parser.parse_args()\n if 'LOCAL_RANK' not in os.environ:\n os.environ['LOCAL_RANK'] = str(args.local_rank)\n\n if args.options and args.cfg_options:\n raise ValueError(\n '--options and --cfg-options cannot be both '\n 'specified, --options is deprecated in favor of --cfg-options')\n if args.options:\n warnings.warn('--options is deprecated in favor of --cfg-options')\n args.cfg_options = args.options\n\n return args\n\n\ndef main():\n args = parse_args()\n\n cfg = Config.fromfile(args.config)\n if args.cfg_options is not None:\n cfg.merge_from_dict(args.cfg_options)\n # set cudnn_benchmark\n if cfg.get('cudnn_benchmark', False):\n torch.backends.cudnn.benchmark = True\n\n # work_dir is determined in this priority: CLI > segment in file > filename\n if args.work_dir is not None:\n # update configs according to CLI args if args.work_dir is not None\n cfg.work_dir = args.work_dir\n elif cfg.get('work_dir', None) is None:\n # use config filename as default work_dir if cfg.work_dir is None\n cfg.work_dir = osp.join('./work_dirs',\n osp.splitext(osp.basename(args.config))[0])\n if args.resume_from is not None:\n cfg.resume_from = args.resume_from\n if args.gpu_ids is not None:\n cfg.gpu_ids = args.gpu_ids\n else:\n cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)\n\n # init distributed env first, since logger depends on the dist info.\n if args.launcher == 'none':\n distributed = False\n if len(cfg.gpu_ids) > 1:\n warnings.warn(\n f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '\n f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '\n 'non-distribute training time.')\n cfg.gpu_ids = cfg.gpu_ids[0:1]\n else:\n distributed = True\n init_dist(args.launcher, **cfg.dist_params)\n # re-set gpu_ids with distributed training mode\n _, world_size = get_dist_info()\n cfg.gpu_ids = range(world_size)\n\n # create work_dir\n mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))\n # dump config\n cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config)))\n # init the logger before other steps\n timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())\n log_file = osp.join(cfg.work_dir, f'{timestamp}.log')\n logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)\n\n # init the meta dict to record some important information such as\n # environment info and seed, which will be logged\n meta = dict()\n # log env info\n env_info_dict = collect_env()\n env_info = '\\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])\n dash_line = '-' * 60 + '\\n'\n logger.info('Environment info:\\n' + dash_line + env_info + '\\n' +\n dash_line)\n meta['env_info'] = env_info\n meta['config'] = cfg.pretty_text\n # log some basic info\n logger.info(f'Distributed training: {distributed}')\n logger.info(f'Config:\\n{cfg.pretty_text}')\n\n # set random seeds\n seed = init_random_seed(args.seed)\n logger.info(f'Set random seed to {seed}, '\n f'deterministic: {args.deterministic}')\n set_random_seed(seed, deterministic=args.deterministic)\n cfg.seed = seed\n meta['seed'] = seed\n meta['exp_name'] = osp.basename(args.config)\n\n model = build_detector(\n cfg.model,\n train_cfg=cfg.get('train_cfg'),\n test_cfg=cfg.get('test_cfg'))\n model.init_weights()\n\n datasets = 
[build_dataset(cfg.data.train)]\n if len(cfg.workflow) == 2:\n val_dataset = copy.deepcopy(cfg.data.val)\n val_dataset.pipeline = cfg.data.train.pipeline\n datasets.append(build_dataset(val_dataset))\n if cfg.checkpoint_config is not None:\n # save mmdet version, config file content and class names in\n # checkpoints as meta data\n cfg.checkpoint_config.meta = dict(\n mmdet_version=__version__ + get_git_hash()[:7],\n CLASSES=datasets[0].CLASSES)\n # add an attribute for visualization convenience\n model.CLASSES = datasets[0].CLASSES\n train_detector(\n model,\n datasets,\n cfg,\n distributed=distributed,\n validate=(not args.no_validate),\n timestamp=timestamp,\n meta=meta)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/train.py"}]}
| 2,477 | 169 |
gh_patches_debug_24683
|
rasdani/github-patches
|
git_diff
|
ietf-tools__datatracker-5620
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Case mismatch for fragment identifiers between menus and page heading anchor
### Describe the issue
The menu item "Groups / Ops and Management" sends you off to https://datatracker.ietf.org/wg/#ops but "#ops" is not recognised on the page because the heading anchor is "#OPS" and so that menu item takes you to the top of the page not the Ops heading.
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/doc/templatetags/wg_menu.py`
Content:
```
1 # Copyright The IETF Trust 2009-2022, All Rights Reserved
2
3 # Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).
4 # All rights reserved. Contact: Pasi Eronen <[email protected]>
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions
8 # are met:
9 #
10 # * Redistributions of source code must retain the above copyright
11 # notice, this list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above
14 # copyright notice, this list of conditions and the following
15 # disclaimer in the documentation and/or other materials provided
16 # with the distribution.
17 #
18 # * Neither the name of the Nokia Corporation and/or its
19 # subsidiary(-ies) nor the names of its contributors may be used
20 # to endorse or promote products derived from this software
21 # without specific prior written permission.
22 #
23 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
24 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
25 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
26 # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
27 # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
28 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
29 # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
30 # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
31 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
32 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
33 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
34
35 from django import template
36 from django.template.loader import render_to_string
37 from django.db import models
38
39 from ietf.group.models import Group
40
41 register = template.Library()
42
43 parent_short_names = {
44 "ops": "Ops & Management",
45 "rai": "RAI",
46 "iab": "IAB",
47 "art": "Apps & Realtime",
48 "ietfadminllc": "IETF LLC",
49 }
50
51 parents = Group.objects.filter(
52 models.Q(type="area")
53 | models.Q(type="irtf", acronym="irtf")
54 | models.Q(acronym="iab")
55 | models.Q(acronym="ietfadminllc")
56 | models.Q(acronym="rfceditor"),
57 state="active",
58 ).order_by("type__order", "type_id", "acronym")
59
60
61 @register.simple_tag
62 def wg_menu(flavor):
63 global parents
64
65 for p in parents:
66 p.short_name = parent_short_names.get(p.acronym) or p.name
67 if p.short_name.endswith(" Area"):
68 p.short_name = p.short_name[: -len(" Area")]
69
70 if p.type_id == "area":
71 p.menu_url = "/wg/#" + p.acronym
72 elif p.acronym == "irtf":
73 p.menu_url = "/rg/"
74 elif p.acronym == "iab":
75 p.menu_url = "/program/"
76 elif p.acronym == "ietfadminllc":
77 p.menu_url = "/adm/"
78 elif p.acronym == "rfceditor":
79 p.menu_url = "/rfcedtyp/"
80
81 return render_to_string(
82 "base/menu_wg.html", {"parents": parents, "flavor": flavor}
83 )
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ietf/doc/templatetags/wg_menu.py b/ietf/doc/templatetags/wg_menu.py
--- a/ietf/doc/templatetags/wg_menu.py
+++ b/ietf/doc/templatetags/wg_menu.py
@@ -1,4 +1,4 @@
-# Copyright The IETF Trust 2009-2022, All Rights Reserved
+# Copyright The IETF Trust 2009-2023, All Rights Reserved
# Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).
# All rights reserved. Contact: Pasi Eronen <[email protected]>
@@ -32,6 +32,8 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+import debug # pyflakes: ignore
+
from django import template
from django.template.loader import render_to_string
from django.db import models
@@ -68,7 +70,7 @@
p.short_name = p.short_name[: -len(" Area")]
if p.type_id == "area":
- p.menu_url = "/wg/#" + p.acronym
+ p.menu_url = "/wg/#" + p.acronym.upper()
elif p.acronym == "irtf":
p.menu_url = "/rg/"
elif p.acronym == "iab":
|
{"golden_diff": "diff --git a/ietf/doc/templatetags/wg_menu.py b/ietf/doc/templatetags/wg_menu.py\n--- a/ietf/doc/templatetags/wg_menu.py\n+++ b/ietf/doc/templatetags/wg_menu.py\n@@ -1,4 +1,4 @@\n-# Copyright The IETF Trust 2009-2022, All Rights Reserved\n+# Copyright The IETF Trust 2009-2023, All Rights Reserved\n \n # Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).\n # All rights reserved. Contact: Pasi Eronen <[email protected]>\n@@ -32,6 +32,8 @@\n # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n \n+import debug # pyflakes: ignore\n+\n from django import template\n from django.template.loader import render_to_string\n from django.db import models\n@@ -68,7 +70,7 @@\n p.short_name = p.short_name[: -len(\" Area\")]\n \n if p.type_id == \"area\":\n- p.menu_url = \"/wg/#\" + p.acronym\n+ p.menu_url = \"/wg/#\" + p.acronym.upper()\n elif p.acronym == \"irtf\":\n p.menu_url = \"/rg/\"\n elif p.acronym == \"iab\":\n", "issue": "Case mismatch for fragment identifiers between menus and page heading anchor\n### Describe the issue\n\nThe menu item \"Groups / Ops and Management\" sends you off to https://datatracker.ietf.org/wg/#ops but \"#ops\" is not recognised on the page because the heading anchor is \"#OPS\" and so that menu item takes you to the top of the page not the Ops heading.\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2009-2022, All Rights Reserved\n\n# Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).\n# All rights reserved. Contact: Pasi Eronen <[email protected]>\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n#\n# * Neither the name of the Nokia Corporation and/or its\n# subsidiary(-ies) nor the names of its contributors may be used\n# to endorse or promote products derived from this software\n# without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom django import template\nfrom django.template.loader import render_to_string\nfrom django.db import models\n\nfrom ietf.group.models import Group\n\nregister = template.Library()\n\nparent_short_names = {\n \"ops\": \"Ops & Management\",\n \"rai\": \"RAI\",\n \"iab\": \"IAB\",\n \"art\": \"Apps & Realtime\",\n \"ietfadminllc\": \"IETF LLC\",\n}\n\nparents = Group.objects.filter(\n models.Q(type=\"area\")\n | models.Q(type=\"irtf\", acronym=\"irtf\")\n | models.Q(acronym=\"iab\")\n | models.Q(acronym=\"ietfadminllc\")\n | models.Q(acronym=\"rfceditor\"),\n state=\"active\",\n).order_by(\"type__order\", \"type_id\", \"acronym\")\n\n\[email protected]_tag\ndef wg_menu(flavor):\n global parents\n\n for p in parents:\n p.short_name = parent_short_names.get(p.acronym) or p.name\n if p.short_name.endswith(\" Area\"):\n p.short_name = p.short_name[: -len(\" Area\")]\n\n if p.type_id == \"area\":\n p.menu_url = \"/wg/#\" + p.acronym\n elif p.acronym == \"irtf\":\n p.menu_url = \"/rg/\"\n elif p.acronym == \"iab\":\n p.menu_url = \"/program/\"\n elif p.acronym == \"ietfadminllc\":\n p.menu_url = \"/adm/\"\n elif p.acronym == \"rfceditor\":\n p.menu_url = \"/rfcedtyp/\"\n\n return render_to_string(\n \"base/menu_wg.html\", {\"parents\": parents, \"flavor\": flavor}\n )\n", "path": "ietf/doc/templatetags/wg_menu.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2009-2023, All Rights Reserved\n\n# Copyright (C) 2009-2010 Nokia Corporation and/or its subsidiary(-ies).\n# All rights reserved. Contact: Pasi Eronen <[email protected]>\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n#\n# * Neither the name of the Nokia Corporation and/or its\n# subsidiary(-ies) nor the names of its contributors may be used\n# to endorse or promote products derived from this software\n# without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport debug # pyflakes: ignore\n\nfrom django import template\nfrom django.template.loader import render_to_string\nfrom django.db import models\n\nfrom ietf.group.models import Group\n\nregister = template.Library()\n\nparent_short_names = {\n \"ops\": \"Ops & Management\",\n \"rai\": \"RAI\",\n \"iab\": \"IAB\",\n \"art\": \"Apps & Realtime\",\n \"ietfadminllc\": \"IETF LLC\",\n}\n\nparents = Group.objects.filter(\n models.Q(type=\"area\")\n | models.Q(type=\"irtf\", acronym=\"irtf\")\n | models.Q(acronym=\"iab\")\n | models.Q(acronym=\"ietfadminllc\")\n | models.Q(acronym=\"rfceditor\"),\n state=\"active\",\n).order_by(\"type__order\", \"type_id\", \"acronym\")\n\n\[email protected]_tag\ndef wg_menu(flavor):\n global parents\n\n for p in parents:\n p.short_name = parent_short_names.get(p.acronym) or p.name\n if p.short_name.endswith(\" Area\"):\n p.short_name = p.short_name[: -len(\" Area\")]\n\n if p.type_id == \"area\":\n p.menu_url = \"/wg/#\" + p.acronym.upper()\n elif p.acronym == \"irtf\":\n p.menu_url = \"/rg/\"\n elif p.acronym == \"iab\":\n p.menu_url = \"/program/\"\n elif p.acronym == \"ietfadminllc\":\n p.menu_url = \"/adm/\"\n elif p.acronym == \"rfceditor\":\n p.menu_url = \"/rfcedtyp/\"\n\n return render_to_string(\n \"base/menu_wg.html\", {\"parents\": parents, \"flavor\": flavor}\n )\n", "path": "ietf/doc/templatetags/wg_menu.py"}]}
| 1,293 | 331 |
gh_patches_debug_33213
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-738
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consolidate serialization helpers to be static and shared
In the teams_helper there are 2 serialization helper methods. Currently they each build a big dict of all the Model objects that exist in Teams and BF on every call. We should optimize by building the big dict once, and update the 2 helpers to use the shared dict.
--- END ISSUE ---
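One way to read the request, before looking at the repository file below: hoist the expensive `getmembers()` scan to module import time and let both helpers share the result. A minimal sketch, assuming the msrest-based helpers shown further down (names such as `_MODEL_CLASSES` are illustrative, not the project's final choice):

```python
from enum import Enum
from inspect import getmembers
from typing import Type

from msrest.serialization import Model, Deserializer, Serializer

import botbuilder.schema as schema
import botbuilder.schema.teams as teams_schema

# Build the class lookup once, at import time, and share it between both helpers.
_MODEL_CLASSES = {
    name: cls
    for module in (schema, teams_schema)
    for name, cls in getmembers(module)
    if isinstance(cls, type) and issubclass(cls, (Model, Enum))
}


def deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:
    return Deserializer(_MODEL_CLASSES)(msrest_cls.__name__, dict_to_deserialize)


def serializer_helper(object_to_serialize: Model) -> dict:
    if object_to_serialize is None:
        return None
    # pylint: disable=protected-access
    return Serializer(_MODEL_CLASSES)._serialize(object_to_serialize)
```

The golden diff further down takes essentially this route, keeping both public helper signatures unchanged.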
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from inspect import getmembers
5 from typing import Type
6 from enum import Enum
7
8 from msrest.serialization import Model, Deserializer, Serializer
9
10 import botbuilder.schema as schema
11 import botbuilder.schema.teams as teams_schema
12
13 # Optimization: The dependencies dictionary could be cached here,
14 # and shared between the two methods.
15
16
17 def deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:
18 dependencies = [
19 schema_cls
20 for key, schema_cls in getmembers(schema)
21 if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
22 ]
23 dependencies += [
24 schema_cls
25 for key, schema_cls in getmembers(teams_schema)
26 if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
27 ]
28 dependencies_dict = {dependency.__name__: dependency for dependency in dependencies}
29 deserializer = Deserializer(dependencies_dict)
30 return deserializer(msrest_cls.__name__, dict_to_deserialize)
31
32
33 def serializer_helper(object_to_serialize: Model) -> dict:
34 if object_to_serialize is None:
35 return None
36
37 dependencies = [
38 schema_cls
39 for key, schema_cls in getmembers(schema)
40 if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
41 ]
42 dependencies += [
43 schema_cls
44 for key, schema_cls in getmembers(teams_schema)
45 if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
46 ]
47 dependencies_dict = {dependency.__name__: dependency for dependency in dependencies}
48 serializer = Serializer(dependencies_dict)
49 # pylint: disable=protected-access
50 return serializer._serialize(object_to_serialize)
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py b/libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py
--- a/libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py
+++ b/libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py
@@ -10,23 +10,21 @@
import botbuilder.schema as schema
import botbuilder.schema.teams as teams_schema
-# Optimization: The dependencies dictionary could be cached here,
-# and shared between the two methods.
+DEPENDICIES = [
+ schema_cls
+ for key, schema_cls in getmembers(schema)
+ if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
+]
+DEPENDICIES += [
+ schema_cls
+ for key, schema_cls in getmembers(teams_schema)
+ if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
+]
+DEPENDICIES_DICT = {dependency.__name__: dependency for dependency in DEPENDICIES}
def deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:
- dependencies = [
- schema_cls
- for key, schema_cls in getmembers(schema)
- if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
- ]
- dependencies += [
- schema_cls
- for key, schema_cls in getmembers(teams_schema)
- if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
- ]
- dependencies_dict = {dependency.__name__: dependency for dependency in dependencies}
- deserializer = Deserializer(dependencies_dict)
+ deserializer = Deserializer(DEPENDICIES_DICT)
return deserializer(msrest_cls.__name__, dict_to_deserialize)
@@ -34,17 +32,6 @@
if object_to_serialize is None:
return None
- dependencies = [
- schema_cls
- for key, schema_cls in getmembers(schema)
- if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
- ]
- dependencies += [
- schema_cls
- for key, schema_cls in getmembers(teams_schema)
- if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
- ]
- dependencies_dict = {dependency.__name__: dependency for dependency in dependencies}
- serializer = Serializer(dependencies_dict)
+ serializer = Serializer(DEPENDICIES_DICT)
# pylint: disable=protected-access
return serializer._serialize(object_to_serialize)
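For completeness, a rough usage sketch of the consolidated helpers; the `Activity` round trip is illustrative and not taken from the repository's tests:

```python
from botbuilder.schema import Activity
from botbuilder.core.teams.teams_helper import deserializer_helper, serializer_helper

activity = Activity(type="message", text="hello")
as_dict = serializer_helper(activity)                # reuses the shared class dict
round_tripped = deserializer_helper(Activity, as_dict)
assert round_tripped.text == "hello"
```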
|
{"golden_diff": "diff --git a/libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py b/libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py\n--- a/libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py\n+++ b/libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py\n@@ -10,23 +10,21 @@\n import botbuilder.schema as schema\n import botbuilder.schema.teams as teams_schema\n \n-# Optimization: The dependencies dictionary could be cached here,\n-# and shared between the two methods.\n+DEPENDICIES = [\n+ schema_cls\n+ for key, schema_cls in getmembers(schema)\n+ if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n+]\n+DEPENDICIES += [\n+ schema_cls\n+ for key, schema_cls in getmembers(teams_schema)\n+ if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n+]\n+DEPENDICIES_DICT = {dependency.__name__: dependency for dependency in DEPENDICIES}\n \n \n def deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:\n- dependencies = [\n- schema_cls\n- for key, schema_cls in getmembers(schema)\n- if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n- ]\n- dependencies += [\n- schema_cls\n- for key, schema_cls in getmembers(teams_schema)\n- if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n- ]\n- dependencies_dict = {dependency.__name__: dependency for dependency in dependencies}\n- deserializer = Deserializer(dependencies_dict)\n+ deserializer = Deserializer(DEPENDICIES_DICT)\n return deserializer(msrest_cls.__name__, dict_to_deserialize)\n \n \n@@ -34,17 +32,6 @@\n if object_to_serialize is None:\n return None\n \n- dependencies = [\n- schema_cls\n- for key, schema_cls in getmembers(schema)\n- if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n- ]\n- dependencies += [\n- schema_cls\n- for key, schema_cls in getmembers(teams_schema)\n- if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n- ]\n- dependencies_dict = {dependency.__name__: dependency for dependency in dependencies}\n- serializer = Serializer(dependencies_dict)\n+ serializer = Serializer(DEPENDICIES_DICT)\n # pylint: disable=protected-access\n return serializer._serialize(object_to_serialize)\n", "issue": "Consolidate serialization helpers to be static and shared\nIn the teams_helper there are 2 serialization helper methods. Currently they both create a big dict of all the Model objects that exist in Teams and BF. We should make the optimization to make the big dict once, and update the 2 helpers to use the new dict.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom inspect import getmembers\nfrom typing import Type\nfrom enum import Enum\n\nfrom msrest.serialization import Model, Deserializer, Serializer\n\nimport botbuilder.schema as schema\nimport botbuilder.schema.teams as teams_schema\n\n# Optimization: The dependencies dictionary could be cached here,\n# and shared between the two methods.\n\n\ndef deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:\n dependencies = [\n schema_cls\n for key, schema_cls in getmembers(schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n ]\n dependencies += [\n schema_cls\n for key, schema_cls in getmembers(teams_schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n ]\n dependencies_dict = {dependency.__name__: dependency for dependency in dependencies}\n deserializer = Deserializer(dependencies_dict)\n return deserializer(msrest_cls.__name__, dict_to_deserialize)\n\n\ndef serializer_helper(object_to_serialize: Model) -> dict:\n if object_to_serialize is None:\n return None\n\n dependencies = [\n schema_cls\n for key, schema_cls in getmembers(schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n ]\n dependencies += [\n schema_cls\n for key, schema_cls in getmembers(teams_schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n ]\n dependencies_dict = {dependency.__name__: dependency for dependency in dependencies}\n serializer = Serializer(dependencies_dict)\n # pylint: disable=protected-access\n return serializer._serialize(object_to_serialize)\n", "path": "libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom inspect import getmembers\nfrom typing import Type\nfrom enum import Enum\n\nfrom msrest.serialization import Model, Deserializer, Serializer\n\nimport botbuilder.schema as schema\nimport botbuilder.schema.teams as teams_schema\n\nDEPENDICIES = [\n schema_cls\n for key, schema_cls in getmembers(schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n]\nDEPENDICIES += [\n schema_cls\n for key, schema_cls in getmembers(teams_schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n]\nDEPENDICIES_DICT = {dependency.__name__: dependency for dependency in DEPENDICIES}\n\n\ndef deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:\n deserializer = Deserializer(DEPENDICIES_DICT)\n return deserializer(msrest_cls.__name__, dict_to_deserialize)\n\n\ndef serializer_helper(object_to_serialize: Model) -> dict:\n if object_to_serialize is None:\n return None\n\n serializer = Serializer(DEPENDICIES_DICT)\n # pylint: disable=protected-access\n return serializer._serialize(object_to_serialize)\n", "path": "libraries/botbuilder-core/botbuilder/core/teams/teams_helper.py"}]}
| 811 | 582 |
gh_patches_debug_37314
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-2095
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AWS_192 raises an error when run it with terraform_plan framework flag
**Describe the bug**
When I run checkov with terraform_plan framework I receive this error:
```
Traceback (most recent call last):
File "/usr/bin/checkov", line 9, in <module>
sys.exit(run())
File "/usr/lib/python3.9/site-packages/checkov/main.py", line 208, in run
scan_reports = runner_registry.run(root_folder=root_folder, external_checks_dir=external_checks_dir,
File "/usr/lib/python3.9/site-packages/checkov/common/runners/runner_registry.py", line 59, in run
reports = [self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,
File "/usr/lib/python3.9/site-packages/checkov/terraform/plan_runner.py", line 67, in run
self.check_tf_definition(report, runner_filter)
File "/usr/lib/python3.9/site-packages/checkov/terraform/plan_runner.py", line 93, in check_tf_definition
self.run_block(definition[block_type], full_file_path, report, scanned_file,
File "/usr/lib/python3.9/site-packages/checkov/terraform/plan_runner.py", line 109, in run_block
results = registry.scan(scanned_file, entity, [], runner_filter)
File "/usr/lib/python3.9/site-packages/checkov/common/checks/base_check_registry.py", line 121, in scan
result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)
File "/usr/lib/python3.9/site-packages/checkov/common/checks/base_check_registry.py", line 135, in run_check
result = check.run(
File "/usr/lib/python3.9/site-packages/checkov/common/checks/base_check.py", line 75, in run
raise e
File "/usr/lib/python3.9/site-packages/checkov/common/checks/base_check.py", line 62, in run
check_result["result"] = self.scan_entity_conf(entity_configuration, entity_type)
File "/usr/lib/python3.9/site-packages/checkov/terraform/checks/resource/base_resource_check.py", line 27, in scan_entity_conf
return self.scan_resource_conf(conf)
File "/usr/lib/python3.9/site-packages/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py", line 27, in scan_resource_conf
if managed_group[0].get("name") == ["AWSManagedRulesKnownBadInputsRuleSet"]:
File "/usr/lib/python3.9/site-packages/checkov/common/parsers/node.py", line 183, in __getattr__
raise TemplateAttributeError(f'{self.__name__}.{name} is invalid')
checkov.common.parsers.node.TemplateAttributeError: <function ListNode.__name__ at 0x7f295099e1f0>.get is invalid
```
**To Reproduce**
You can use this snippet in order to do that:
```
resource "aws_wafv2_web_acl" "main" {
name = "${local.common_vars.environment}-${local.common_vars.country}-main"
scope = "REGIONAL"
custom_response_body {
key = "main-response-body"
content = "BLOCKED BY AWS WAF"
content_type = "TEXT_PLAIN"
}
default_action {
# Allow traffic unless it is blocked by a rule
allow {}
}
rule {
name = "aws-managed-known-bad-inputs"
priority = 1
override_action {
none {}
}
statement {
managed_rule_group_statement {
name = "AWSManagedRulesKnownBadInputsRuleSet"
vendor_name = "AWS"
}
}
visibility_config {
cloudwatch_metrics_enabled = true
metric_name = "aws-managed-known-bad-inputs"
sampled_requests_enabled = true
}
}
rule {
name = "aws-managed-common-rule-set"
priority = 2
override_action {
none {}
}
statement {
managed_rule_group_statement {
name = "AWSManagedRulesCommonRuleSet"
vendor_name = "AWS"
excluded_rule {
name = "SizeRestrictions_BODY"
}
excluded_rule {
name = "CrossSiteScripting_COOKIE"
}
}
}
visibility_config {
cloudwatch_metrics_enabled = true
metric_name = "aws-managed-common-rule-set"
sampled_requests_enabled = true
}
}
rule {
name = "rate-limit-ip"
priority = 3
action {
block {}
}
statement {
rate_based_statement {
limit = 1000
aggregate_key_type = "IP"
}
}
visibility_config {
cloudwatch_metrics_enabled = true
metric_name = "rate-limit-ip"
sampled_requests_enabled = true
}
}
visibility_config {
cloudwatch_metrics_enabled = true
metric_name = "all"
sampled_requests_enabled = false
}
tags = {
Name = "${local.common_vars.environment}-${local.common_vars.country}-main"
Description = "rules derived from AWSManagedRulesCommonRuleSet"
}
}
```
1. terraform plan -out test_output
2. terraform show -json test_output | jq '.' > test_output.json
3. checkov --framework=terraform_plan -d .
**Expected behavior**
Failed or Passed not raising python error
**Desktop (please complete the following information):**
- Linux 09d2041af498 5.11.0-40-generic 44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 UTC 2021 x86_64 Linux
- Checkov Version 2.0.654
- Terraform Version 1.0.9
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py`
Content:
```
1 from typing import Dict, Any
2
3 from checkov.common.models.enums import CheckCategories, CheckResult
4 from checkov.common.util.type_forcers import force_list
5 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
6
7
8 class WAFACLCVE202144228(BaseResourceCheck):
9 def __init__(self) -> None:
10 name = "Ensure WAF prevents message lookup in Log4j2. See CVE-2021-44228 aka log4jshell"
11 id = "CKV_AWS_192"
12 supported_resources = ["aws_wafv2_web_acl"]
13 categories = [CheckCategories.APPLICATION_SECURITY]
14 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
15
16 def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:
17 self.evaluated_keys = ["rule"]
18 rules = conf.get("rule") or []
19 for idx_rule, rule in enumerate(force_list(rules)):
20 self.evaluated_keys = [f"rule/[{idx_rule}]/statement"]
21 statement = rule.get("statement")
22 if statement:
23 self.evaluated_keys = [f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement"]
24 managed_group = statement[0].get("managed_rule_group_statement")
25 if managed_group:
26 self.evaluated_keys = [f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name"]
27 if managed_group[0].get("name") == ["AWSManagedRulesKnownBadInputsRuleSet"]:
28 self.evaluated_keys.append(
29 f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule"
30 )
31 excluded_rules = managed_group[0].get("excluded_rule") or []
32 # rule 'Log4JRCE' should not be set to count
33 for idx_excluded_rule, excluded_rule in enumerate(force_list(excluded_rules)):
34 if excluded_rule.get("name") == ["Log4JRCE"]:
35 self.evaluated_keys = [
36 f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name",
37 f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule/[{idx_excluded_rule}]/name",
38 ]
39 return CheckResult.FAILED
40
41 self.evaluated_keys.append(
42 f"rule/[{idx_rule}]/override_action/[0]/none"
43 )
44 override_action = rule.get("override_action")
45 # check for group override
46 if override_action and next(iter(override_action[0].keys())) != "none":
47 return CheckResult.FAILED
48
49 return CheckResult.PASSED
50
51 return CheckResult.FAILED
52
53
54 check = WAFACLCVE202144228()
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py b/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py
--- a/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py
+++ b/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py
@@ -24,14 +24,14 @@
managed_group = statement[0].get("managed_rule_group_statement")
if managed_group:
self.evaluated_keys = [f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name"]
- if managed_group[0].get("name") == ["AWSManagedRulesKnownBadInputsRuleSet"]:
+ if managed_group[0] and managed_group[0].get("name") == ["AWSManagedRulesKnownBadInputsRuleSet"]:
self.evaluated_keys.append(
f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule"
)
excluded_rules = managed_group[0].get("excluded_rule") or []
# rule 'Log4JRCE' should not be set to count
for idx_excluded_rule, excluded_rule in enumerate(force_list(excluded_rules)):
- if excluded_rule.get("name") == ["Log4JRCE"]:
+ if excluded_rule and excluded_rule.get("name") == ["Log4JRCE"]:
self.evaluated_keys = [
f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name",
f"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule/[{idx_excluded_rule}]/name",
@@ -43,7 +43,9 @@
)
override_action = rule.get("override_action")
# check for group override
- if override_action and next(iter(override_action[0].keys())) != "none":
+ override_action_none = override_action[0].get("none")
+ # Terraform plan includes both keys, but one is a dict and the not chosen one a list
+ if not override_action_none or not isinstance(override_action_none[0], dict):
return CheckResult.FAILED
return CheckResult.PASSED
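As a quick illustration of why the extra guards matter (the values below are made up to mimic the shapes, not captured from a real plan): blocks rendered from a terraform plan can surface as empty list nodes rather than dicts, so `.get()` must only be called after a truthiness check.

```python
# Illustrative only: an empty, plan-rendered managed_rule_group_statement block
# can show up as an empty list instead of a dict of attributes.
managed_group = [[]]

name_matches = bool(
    managed_group[0]
    and managed_group[0].get("name") == ["AWSManagedRulesKnownBadInputsRuleSet"]
)
print(name_matches)  # False, and no attribute error on the empty list
```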
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py b/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py\n--- a/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py\n+++ b/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py\n@@ -24,14 +24,14 @@\n managed_group = statement[0].get(\"managed_rule_group_statement\")\n if managed_group:\n self.evaluated_keys = [f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name\"]\n- if managed_group[0].get(\"name\") == [\"AWSManagedRulesKnownBadInputsRuleSet\"]:\n+ if managed_group[0] and managed_group[0].get(\"name\") == [\"AWSManagedRulesKnownBadInputsRuleSet\"]:\n self.evaluated_keys.append(\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule\"\n )\n excluded_rules = managed_group[0].get(\"excluded_rule\") or []\n # rule 'Log4JRCE' should not be set to count\n for idx_excluded_rule, excluded_rule in enumerate(force_list(excluded_rules)):\n- if excluded_rule.get(\"name\") == [\"Log4JRCE\"]:\n+ if excluded_rule and excluded_rule.get(\"name\") == [\"Log4JRCE\"]:\n self.evaluated_keys = [\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name\",\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule/[{idx_excluded_rule}]/name\",\n@@ -43,7 +43,9 @@\n )\n override_action = rule.get(\"override_action\")\n # check for group override\n- if override_action and next(iter(override_action[0].keys())) != \"none\":\n+ override_action_none = override_action[0].get(\"none\")\n+ # Terraform plan includes both keys, but one is a dict and the not chosen one a list\n+ if not override_action_none or not isinstance(override_action_none[0], dict):\n return CheckResult.FAILED\n \n return CheckResult.PASSED\n", "issue": "CKV_AWS_192 raises an error when run it with terraform_plan framework flag\n**Describe the bug**\r\nWhen I run checkov with terraform_plan framework I receive this error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/bin/checkov\", line 9, in <module>\r\n sys.exit(run())\r\n File \"/usr/lib/python3.9/site-packages/checkov/main.py\", line 208, in run\r\n scan_reports = runner_registry.run(root_folder=root_folder, external_checks_dir=external_checks_dir,\r\n File \"/usr/lib/python3.9/site-packages/checkov/common/runners/runner_registry.py\", line 59, in run\r\n reports = [self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,\r\n File \"/usr/lib/python3.9/site-packages/checkov/terraform/plan_runner.py\", line 67, in run\r\n self.check_tf_definition(report, runner_filter)\r\n File \"/usr/lib/python3.9/site-packages/checkov/terraform/plan_runner.py\", line 93, in check_tf_definition\r\n self.run_block(definition[block_type], full_file_path, report, scanned_file,\r\n File \"/usr/lib/python3.9/site-packages/checkov/terraform/plan_runner.py\", line 109, in run_block\r\n results = registry.scan(scanned_file, entity, [], runner_filter)\r\n File \"/usr/lib/python3.9/site-packages/checkov/common/checks/base_check_registry.py\", line 121, in scan\r\n result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)\r\n File \"/usr/lib/python3.9/site-packages/checkov/common/checks/base_check_registry.py\", line 135, in run_check\r\n result = check.run(\r\n File \"/usr/lib/python3.9/site-packages/checkov/common/checks/base_check.py\", line 75, in run\r\n raise e\r\n File 
\"/usr/lib/python3.9/site-packages/checkov/common/checks/base_check.py\", line 62, in run\r\n check_result[\"result\"] = self.scan_entity_conf(entity_configuration, entity_type)\r\n File \"/usr/lib/python3.9/site-packages/checkov/terraform/checks/resource/base_resource_check.py\", line 27, in scan_entity_conf\r\n return self.scan_resource_conf(conf)\r\n File \"/usr/lib/python3.9/site-packages/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py\", line 27, in scan_resource_conf\r\n if managed_group[0].get(\"name\") == [\"AWSManagedRulesKnownBadInputsRuleSet\"]:\r\n File \"/usr/lib/python3.9/site-packages/checkov/common/parsers/node.py\", line 183, in __getattr__\r\n raise TemplateAttributeError(f'{self.__name__}.{name} is invalid')\r\ncheckov.common.parsers.node.TemplateAttributeError: <function ListNode.__name__ at 0x7f295099e1f0>.get is invalid\r\n```\r\n\r\n**To Reproduce**\r\nYou can use this snippet in order to do that:\r\n```\r\nresource \"aws_wafv2_web_acl\" \"main\" {\r\n name = \"${local.common_vars.environment}-${local.common_vars.country}-main\"\r\n scope = \"REGIONAL\"\r\n custom_response_body {\r\n key = \"main-response-body\"\r\n content = \"BLOCKED BY AWS WAF\"\r\n content_type = \"TEXT_PLAIN\"\r\n }\r\n default_action {\r\n # Allow traffic unless it is blocked by a rule\r\n allow {}\r\n }\r\n\r\n rule {\r\n name = \"aws-managed-known-bad-inputs\"\r\n priority = 1\r\n override_action {\r\n none {}\r\n }\r\n statement {\r\n managed_rule_group_statement {\r\n name = \"AWSManagedRulesKnownBadInputsRuleSet\"\r\n vendor_name = \"AWS\"\r\n }\r\n }\r\n visibility_config {\r\n cloudwatch_metrics_enabled = true\r\n metric_name = \"aws-managed-known-bad-inputs\"\r\n sampled_requests_enabled = true\r\n }\r\n }\r\n\r\n rule {\r\n name = \"aws-managed-common-rule-set\"\r\n priority = 2\r\n override_action {\r\n none {}\r\n }\r\n statement {\r\n managed_rule_group_statement {\r\n name = \"AWSManagedRulesCommonRuleSet\"\r\n vendor_name = \"AWS\"\r\n excluded_rule {\r\n name = \"SizeRestrictions_BODY\"\r\n }\r\n excluded_rule {\r\n name = \"CrossSiteScripting_COOKIE\"\r\n }\r\n }\r\n }\r\n visibility_config {\r\n cloudwatch_metrics_enabled = true\r\n metric_name = \"aws-managed-common-rule-set\"\r\n sampled_requests_enabled = true\r\n }\r\n }\r\n\r\n rule {\r\n name = \"rate-limit-ip\"\r\n priority = 3\r\n\r\n action {\r\n block {}\r\n }\r\n\r\n statement {\r\n rate_based_statement {\r\n limit = 1000\r\n aggregate_key_type = \"IP\"\r\n }\r\n }\r\n\r\n visibility_config {\r\n cloudwatch_metrics_enabled = true\r\n metric_name = \"rate-limit-ip\"\r\n sampled_requests_enabled = true\r\n }\r\n }\r\n\r\n visibility_config {\r\n cloudwatch_metrics_enabled = true\r\n metric_name = \"all\"\r\n sampled_requests_enabled = false\r\n }\r\n\r\n tags = {\r\n Name = \"${local.common_vars.environment}-${local.common_vars.country}-main\"\r\n Description = \"rules derived from AWSManagedRulesCommonRuleSet\"\r\n }\r\n}\r\n\r\n```\r\n1. terraform plan -out test_output\r\n2. terrform show -json test_output | jq '.' > test_output.json\r\n3. 
checkov --framework=terraform_plan -d .\r\n\r\n\r\n**Expected behavior**\r\nFailed or Passed not raising python error\r\n\r\n**Desktop (please complete the following information):**\r\n - Linux 09d2041af498 5.11.0-40-generic 44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 UTC 2021 x86_64 Linux\r\n - Checkov Version 2.0.654\r\n - Terraform Version 1.0.9\r\n\n", "before_files": [{"content": "from typing import Dict, Any\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass WAFACLCVE202144228(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure WAF prevents message lookup in Log4j2. See CVE-2021-44228 aka log4jshell\"\n id = \"CKV_AWS_192\"\n supported_resources = [\"aws_wafv2_web_acl\"]\n categories = [CheckCategories.APPLICATION_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:\n self.evaluated_keys = [\"rule\"]\n rules = conf.get(\"rule\") or []\n for idx_rule, rule in enumerate(force_list(rules)):\n self.evaluated_keys = [f\"rule/[{idx_rule}]/statement\"]\n statement = rule.get(\"statement\")\n if statement:\n self.evaluated_keys = [f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement\"]\n managed_group = statement[0].get(\"managed_rule_group_statement\")\n if managed_group:\n self.evaluated_keys = [f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name\"]\n if managed_group[0].get(\"name\") == [\"AWSManagedRulesKnownBadInputsRuleSet\"]:\n self.evaluated_keys.append(\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule\"\n )\n excluded_rules = managed_group[0].get(\"excluded_rule\") or []\n # rule 'Log4JRCE' should not be set to count\n for idx_excluded_rule, excluded_rule in enumerate(force_list(excluded_rules)):\n if excluded_rule.get(\"name\") == [\"Log4JRCE\"]:\n self.evaluated_keys = [\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name\",\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule/[{idx_excluded_rule}]/name\",\n ]\n return CheckResult.FAILED\n\n self.evaluated_keys.append(\n f\"rule/[{idx_rule}]/override_action/[0]/none\"\n )\n override_action = rule.get(\"override_action\")\n # check for group override\n if override_action and next(iter(override_action[0].keys())) != \"none\":\n return CheckResult.FAILED\n\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = WAFACLCVE202144228()\n", "path": "checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py"}], "after_files": [{"content": "from typing import Dict, Any\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass WAFACLCVE202144228(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure WAF prevents message lookup in Log4j2. 
See CVE-2021-44228 aka log4jshell\"\n id = \"CKV_AWS_192\"\n supported_resources = [\"aws_wafv2_web_acl\"]\n categories = [CheckCategories.APPLICATION_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:\n self.evaluated_keys = [\"rule\"]\n rules = conf.get(\"rule\") or []\n for idx_rule, rule in enumerate(force_list(rules)):\n self.evaluated_keys = [f\"rule/[{idx_rule}]/statement\"]\n statement = rule.get(\"statement\")\n if statement:\n self.evaluated_keys = [f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement\"]\n managed_group = statement[0].get(\"managed_rule_group_statement\")\n if managed_group:\n self.evaluated_keys = [f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name\"]\n if managed_group[0] and managed_group[0].get(\"name\") == [\"AWSManagedRulesKnownBadInputsRuleSet\"]:\n self.evaluated_keys.append(\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule\"\n )\n excluded_rules = managed_group[0].get(\"excluded_rule\") or []\n # rule 'Log4JRCE' should not be set to count\n for idx_excluded_rule, excluded_rule in enumerate(force_list(excluded_rules)):\n if excluded_rule and excluded_rule.get(\"name\") == [\"Log4JRCE\"]:\n self.evaluated_keys = [\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/name\",\n f\"rule/[{idx_rule}]/statement/[0]/managed_rule_group_statement/[0]/excluded_rule/[{idx_excluded_rule}]/name\",\n ]\n return CheckResult.FAILED\n\n self.evaluated_keys.append(\n f\"rule/[{idx_rule}]/override_action/[0]/none\"\n )\n override_action = rule.get(\"override_action\")\n # check for group override\n override_action_none = override_action[0].get(\"none\")\n # Terraform plan includes both keys, but one is a dict and the not chosen one a list\n if not override_action_none or not isinstance(override_action_none[0], dict):\n return CheckResult.FAILED\n\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = WAFACLCVE202144228()\n", "path": "checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py"}]}
| 2,332 | 526 |
gh_patches_debug_18661
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-4713
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to disable user config with Jupyter Server
The hub administrator is supposed to be able to prevent per-user notebook configuration scripts from running by setting
```
c.Spawner.disable_user_config = True
```
In the `jupyterhub_config.py` config. This sets the environment variable `JUPYTERHUB_DISABLE_USER_CONFIG=1` for the spawned notebook server. However, this seems to be ignored?
<details>
<summary>Using this Dockerfile</summary>
```
FROM jupyterhub/jupyterhub:2
RUN python3 -m pip install --no-cache jupyterlab
RUN \
adduser -q --gecos "" --disabled-password user1 && \
echo user1:user1 | chpasswd
ADD jupyterhub_config.py .
RUN mkdir -p /home/user1/.jupyter
ADD jupyter_notebook_config.py /home/user1/.jupyter/.
RUN chown -R user1:user1 /home/user1/.jupyter
CMD ["jupyterhub"]
```
</details>
<details><summary>
with this `jupyterhub_config.py` and example notebook config for `user1`:
</summary>
```
c.Spawner.disable_user_config = True
```
```
import os
print("HELLO FROM THE NOTEBOOK CONFIG")
print(os.getenv("JUPYTERHUB_DISABLE_USER_CONFIG"))
c.ServerApp.shutdown_no_activity_timeout = 600
c.MappingKernelManager.cull_idle_timeout = 600
c.TerminalManager.cull_inactive_timeout = 600
```
</details>
I see the "HELLO" message and the value 1 printed when the notebook starts up, and the timeout message indicating that my config setting is in effect:
```
[I 2022-02-22 22:35:23.167 SingleUserLabApp serverapp:2161] Will shut down after 600 seconds with no kernels or terminals.
```
Am I misunderstanding exactly which config files are excluded? I see there's a test for this, but I wonder whether it is actually verifying that the config is being ignored?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/singleuser/_disable_user_config.py`
Content:
```
1 """
2 Disable user-controlled config for single-user servers
3
4 Applies patches to prevent loading configuration from the user's home directory.
5
6 Only used when launching a single-user server with disable_user_config=True.
7
8 This is where we still have some monkeypatches,
9 because we want to prevent loading configuration from user directories,
10 and `jupyter_core` functions don't allow that.
11
12 Due to extensions, we aren't able to apply patches in one place on the ServerApp,
13 we have to insert the patches at the lowest-level
14 on function objects themselves,
15 to ensure we modify calls to e.g. `jupyter_core.jupyter_path`
16 that may have been imported already!
17
18 We should perhaps ask for the necessary hooks to modify this in jupyter_core,
19 rather than keeing these monkey patches around.
20 """
21
22 import os
23
24 from jupyter_core import paths
25
26
27 def _exclude_home(path_list):
28 """Filter out any entries in a path list that are in my home directory.
29
30 Used to disable per-user configuration.
31 """
32 home = os.path.expanduser('~/')
33 for p in path_list:
34 if not p.startswith(home):
35 yield p
36
37
38 # record patches
39 _original_jupyter_paths = None
40 _jupyter_paths_without_home = None
41
42
43 def _disable_user_config(serverapp):
44 """
45 disable user-controlled sources of configuration
46 by excluding directories in their home from paths.
47
48 This _does not_ disable frontend config,
49 such as UI settings persistence.
50
51 1. Python config file paths
52 2. Search paths for extensions, etc.
53 3. import path
54 """
55 original_jupyter_path = paths.jupyter_path()
56 jupyter_path_without_home = list(_exclude_home(original_jupyter_path))
57
58 # config_file_paths is a property without a setter
59 # can't override on the instance
60 default_config_file_paths = serverapp.config_file_paths
61 config_file_paths = list(_exclude_home(default_config_file_paths))
62 serverapp.__class__.config_file_paths = property(
63 lambda self: config_file_paths,
64 )
65 # verify patch applied
66 assert serverapp.config_file_paths == config_file_paths
67
68 # patch jupyter_path to exclude $HOME
69 global _original_jupyter_paths, _jupyter_paths_without_home, _original_jupyter_config_dir
70 _original_jupyter_paths = paths.jupyter_path()
71 _jupyter_paths_without_home = list(_exclude_home(_original_jupyter_paths))
72
73 def get_jupyter_path_without_home(*subdirs):
74 # reimport because of our `__code__` patch
75 # affects what is resolved as the parent namespace
76 from jupyterhub.singleuser._disable_user_config import (
77 _jupyter_paths_without_home,
78 )
79
80 paths = list(_jupyter_paths_without_home)
81 if subdirs:
82 paths = [os.path.join(p, *subdirs) for p in paths]
83 return paths
84
85 # patch `jupyter_path.__code__` to ensure all callers are patched,
86 # even if they've already imported
87 # this affects e.g. nbclassic.nbextension_paths
88 paths.jupyter_path.__code__ = get_jupyter_path_without_home.__code__
89
90 # same thing for config_dir,
91 # which applies to some things like ExtensionApp config paths
92 # and nbclassic.static_custom_path
93
94 # allows explicit override if $JUPYTER_CONFIG_DIR is set
95 # or config dir is otherwise not in $HOME
96
97 if not os.getenv("JUPYTER_CONFIG_DIR") and not list(
98 _exclude_home([paths.jupyter_config_dir()])
99 ):
100 # patch specifically Application.config_dir
101 # this affects ServerApp and ExtensionApp,
102 # but does not affect JupyterLab's user-settings, etc.
103 # patching the traitlet directly affects all instances,
104 # already-created or future
105 from jupyter_core.application import JupyterApp
106
107 def get_env_config_dir(obj, cls=None):
108 return paths.ENV_CONFIG_PATH[0]
109
110 JupyterApp.config_dir.get = get_env_config_dir
111
112 # record disabled state on app object
113 serverapp.disable_user_config = True
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/jupyterhub/singleuser/_disable_user_config.py b/jupyterhub/singleuser/_disable_user_config.py
--- a/jupyterhub/singleuser/_disable_user_config.py
+++ b/jupyterhub/singleuser/_disable_user_config.py
@@ -20,19 +20,35 @@
"""
import os
+from pathlib import Path
from jupyter_core import paths
+def _is_relative_to(path, prefix):
+ """
+ Backport Path.is_relative_to for Python < 3.9
+
+ added in Python 3.9
+ """
+ if hasattr(path, "is_relative_to"):
+ # Python >= 3.9
+ return path.is_relative_to(prefix)
+ else:
+ return path == prefix or prefix in path.parents
+
+
def _exclude_home(path_list):
"""Filter out any entries in a path list that are in my home directory.
Used to disable per-user configuration.
"""
- home = os.path.expanduser('~/')
- for p in path_list:
- if not p.startswith(home):
- yield p
+ # resolve paths before comparison
+ # so we do the right thing when $HOME is a symlink
+ home = Path.home().resolve()
+ for path in path_list:
+ if not _is_relative_to(Path(path).resolve(), home):
+ yield path
# record patches
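To see why resolving paths before the comparison matters, here is a small standalone illustration; the directory layout (a home directory reached through a symlink) is hypothetical and only meant to show the failure mode the patch guards against:

```python
from pathlib import Path

def excluded_by_prefix(path, home="/home/user1/"):
    # old behaviour: plain string prefix test against ~/
    return path.startswith(home)

def excluded_by_resolution(path, resolved_home=Path("/data/user1")):
    # new behaviour (simplified): compare against the fully resolved home
    p = Path(path)
    return p == resolved_home or resolved_home in p.parents

# Assume /home/user1 is a symlink to /data/user1.
candidate = "/data/user1/.jupyter"        # the same directory via its real path
print(excluded_by_prefix(candidate))      # False -> user config sneaks back in
print(excluded_by_resolution(candidate))  # True  -> correctly filtered out
```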
|
{"golden_diff": "diff --git a/jupyterhub/singleuser/_disable_user_config.py b/jupyterhub/singleuser/_disable_user_config.py\n--- a/jupyterhub/singleuser/_disable_user_config.py\n+++ b/jupyterhub/singleuser/_disable_user_config.py\n@@ -20,19 +20,35 @@\n \"\"\"\n \n import os\n+from pathlib import Path\n \n from jupyter_core import paths\n \n \n+def _is_relative_to(path, prefix):\n+ \"\"\"\n+ Backport Path.is_relative_to for Python < 3.9\n+\n+ added in Python 3.9\n+ \"\"\"\n+ if hasattr(path, \"is_relative_to\"):\n+ # Python >= 3.9\n+ return path.is_relative_to(prefix)\n+ else:\n+ return path == prefix or prefix in path.parents\n+\n+\n def _exclude_home(path_list):\n \"\"\"Filter out any entries in a path list that are in my home directory.\n \n Used to disable per-user configuration.\n \"\"\"\n- home = os.path.expanduser('~/')\n- for p in path_list:\n- if not p.startswith(home):\n- yield p\n+ # resolve paths before comparison\n+ # so we do the right thing when $HOME is a symlink\n+ home = Path.home().resolve()\n+ for path in path_list:\n+ if not _is_relative_to(Path(path).resolve(), home):\n+ yield path\n \n \n # record patches\n", "issue": "Unable to disable user config with Jupyter Server\nThe hub administrator is supposed to be able to prevent per-user notebook configuration scripts from running by setting\r\n\r\n```\r\nc.Spawner.disable_user_config = True\r\n```\r\n\r\nIn the `jupyterhub_config.py` config. This sets the environment variable `JUPYTERHUB_DISABLE_USER_CONFIG=1` for the spawned notebook server. However this seems to be being ignored?\r\n\r\n<details>\r\n<summary>Using this Dockerfile</summary>\r\n\r\n```\r\nFROM jupyterhub/jupyterhub:2\r\n\r\nRUN python3 -m pip install --no-cache jupyterlab\r\n\r\nRUN \\\r\n adduser -q --gecos \"\" --disabled-password user1 && \\\r\n echo user1:user1 | chpasswd\r\n\r\nADD jupyterhub_config.py .\r\n\r\nRUN mkdir -p /home/user1/.jupyter\r\nADD jupyter_notebook_config.py /home/user1/.jupyter/.\r\nRUN chown -R user1:user1 /home/user1/.jupyter\r\n\r\nCMD [\"jupyterhub\"]\r\n```\r\n\r\n</details>\r\n\r\n<details><summary>\r\nwith this `jupyterhub_config.py` and example notebook config for `user1`:\r\n</summary>\r\n\r\n```\r\nc.Spawner.disable_user_config = True\r\n```\r\n\r\n```\r\nimport os\r\n\r\nprint(\"HELLO FROM THE NOTEBOOK CONFIG\")\r\nprint(os.getenv(\"JUPYTERHUB_DISABLE_USER_CONFIG\"))\r\n\r\nc.ServerApp.shutdown_no_activity_timeout = 600\r\nc.MappingKernelManager.cull_idle_timeout = 600\r\nc.TerminalManager.cull_inactive_timeout = 600\r\n```\r\n\r\n</details>\r\n\r\nI see the \"HELLO\" message and the value 1 printed when the notebook starts up, and the timeout message indicating that my config setting is in effect:\r\n\r\n```\r\n[I 2022-02-22 22:35:23.167 SingleUserLabApp serverapp:2161] Will shut down after 600 seconds with no kernels or terminals.\r\n```\r\n\r\nAm I misunderstanding exactly what config files are excluded? 
I see there's a test for this but I wonder is it actually verifying that the config is being ignored?\n", "before_files": [{"content": "\"\"\"\nDisable user-controlled config for single-user servers\n\nApplies patches to prevent loading configuration from the user's home directory.\n\nOnly used when launching a single-user server with disable_user_config=True.\n\nThis is where we still have some monkeypatches,\nbecause we want to prevent loading configuration from user directories,\nand `jupyter_core` functions don't allow that.\n\nDue to extensions, we aren't able to apply patches in one place on the ServerApp,\nwe have to insert the patches at the lowest-level\non function objects themselves,\nto ensure we modify calls to e.g. `jupyter_core.jupyter_path`\nthat may have been imported already!\n\nWe should perhaps ask for the necessary hooks to modify this in jupyter_core,\nrather than keeing these monkey patches around.\n\"\"\"\n\nimport os\n\nfrom jupyter_core import paths\n\n\ndef _exclude_home(path_list):\n \"\"\"Filter out any entries in a path list that are in my home directory.\n\n Used to disable per-user configuration.\n \"\"\"\n home = os.path.expanduser('~/')\n for p in path_list:\n if not p.startswith(home):\n yield p\n\n\n# record patches\n_original_jupyter_paths = None\n_jupyter_paths_without_home = None\n\n\ndef _disable_user_config(serverapp):\n \"\"\"\n disable user-controlled sources of configuration\n by excluding directories in their home from paths.\n\n This _does not_ disable frontend config,\n such as UI settings persistence.\n\n 1. Python config file paths\n 2. Search paths for extensions, etc.\n 3. import path\n \"\"\"\n original_jupyter_path = paths.jupyter_path()\n jupyter_path_without_home = list(_exclude_home(original_jupyter_path))\n\n # config_file_paths is a property without a setter\n # can't override on the instance\n default_config_file_paths = serverapp.config_file_paths\n config_file_paths = list(_exclude_home(default_config_file_paths))\n serverapp.__class__.config_file_paths = property(\n lambda self: config_file_paths,\n )\n # verify patch applied\n assert serverapp.config_file_paths == config_file_paths\n\n # patch jupyter_path to exclude $HOME\n global _original_jupyter_paths, _jupyter_paths_without_home, _original_jupyter_config_dir\n _original_jupyter_paths = paths.jupyter_path()\n _jupyter_paths_without_home = list(_exclude_home(_original_jupyter_paths))\n\n def get_jupyter_path_without_home(*subdirs):\n # reimport because of our `__code__` patch\n # affects what is resolved as the parent namespace\n from jupyterhub.singleuser._disable_user_config import (\n _jupyter_paths_without_home,\n )\n\n paths = list(_jupyter_paths_without_home)\n if subdirs:\n paths = [os.path.join(p, *subdirs) for p in paths]\n return paths\n\n # patch `jupyter_path.__code__` to ensure all callers are patched,\n # even if they've already imported\n # this affects e.g. 
nbclassic.nbextension_paths\n paths.jupyter_path.__code__ = get_jupyter_path_without_home.__code__\n\n # same thing for config_dir,\n # which applies to some things like ExtensionApp config paths\n # and nbclassic.static_custom_path\n\n # allows explicit override if $JUPYTER_CONFIG_DIR is set\n # or config dir is otherwise not in $HOME\n\n if not os.getenv(\"JUPYTER_CONFIG_DIR\") and not list(\n _exclude_home([paths.jupyter_config_dir()])\n ):\n # patch specifically Application.config_dir\n # this affects ServerApp and ExtensionApp,\n # but does not affect JupyterLab's user-settings, etc.\n # patching the traitlet directly affects all instances,\n # already-created or future\n from jupyter_core.application import JupyterApp\n\n def get_env_config_dir(obj, cls=None):\n return paths.ENV_CONFIG_PATH[0]\n\n JupyterApp.config_dir.get = get_env_config_dir\n\n # record disabled state on app object\n serverapp.disable_user_config = True\n", "path": "jupyterhub/singleuser/_disable_user_config.py"}], "after_files": [{"content": "\"\"\"\nDisable user-controlled config for single-user servers\n\nApplies patches to prevent loading configuration from the user's home directory.\n\nOnly used when launching a single-user server with disable_user_config=True.\n\nThis is where we still have some monkeypatches,\nbecause we want to prevent loading configuration from user directories,\nand `jupyter_core` functions don't allow that.\n\nDue to extensions, we aren't able to apply patches in one place on the ServerApp,\nwe have to insert the patches at the lowest-level\non function objects themselves,\nto ensure we modify calls to e.g. `jupyter_core.jupyter_path`\nthat may have been imported already!\n\nWe should perhaps ask for the necessary hooks to modify this in jupyter_core,\nrather than keeing these monkey patches around.\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom jupyter_core import paths\n\n\ndef _is_relative_to(path, prefix):\n \"\"\"\n Backport Path.is_relative_to for Python < 3.9\n\n added in Python 3.9\n \"\"\"\n if hasattr(path, \"is_relative_to\"):\n # Python >= 3.9\n return path.is_relative_to(prefix)\n else:\n return path == prefix or prefix in path.parents\n\n\ndef _exclude_home(path_list):\n \"\"\"Filter out any entries in a path list that are in my home directory.\n\n Used to disable per-user configuration.\n \"\"\"\n # resolve paths before comparison\n # so we do the right thing when $HOME is a symlink\n home = Path.home().resolve()\n for path in path_list:\n if not _is_relative_to(Path(path).resolve(), home):\n yield path\n\n\n# record patches\n_original_jupyter_paths = None\n_jupyter_paths_without_home = None\n\n\ndef _disable_user_config(serverapp):\n \"\"\"\n disable user-controlled sources of configuration\n by excluding directories in their home from paths.\n\n This _does not_ disable frontend config,\n such as UI settings persistence.\n\n 1. Python config file paths\n 2. Search paths for extensions, etc.\n 3. 
import path\n \"\"\"\n original_jupyter_path = paths.jupyter_path()\n jupyter_path_without_home = list(_exclude_home(original_jupyter_path))\n\n # config_file_paths is a property without a setter\n # can't override on the instance\n default_config_file_paths = serverapp.config_file_paths\n config_file_paths = list(_exclude_home(default_config_file_paths))\n serverapp.__class__.config_file_paths = property(\n lambda self: config_file_paths,\n )\n # verify patch applied\n assert serverapp.config_file_paths == config_file_paths\n\n # patch jupyter_path to exclude $HOME\n global _original_jupyter_paths, _jupyter_paths_without_home, _original_jupyter_config_dir\n _original_jupyter_paths = paths.jupyter_path()\n _jupyter_paths_without_home = list(_exclude_home(_original_jupyter_paths))\n\n def get_jupyter_path_without_home(*subdirs):\n # reimport because of our `__code__` patch\n # affects what is resolved as the parent namespace\n from jupyterhub.singleuser._disable_user_config import (\n _jupyter_paths_without_home,\n )\n\n paths = list(_jupyter_paths_without_home)\n if subdirs:\n paths = [os.path.join(p, *subdirs) for p in paths]\n return paths\n\n # patch `jupyter_path.__code__` to ensure all callers are patched,\n # even if they've already imported\n # this affects e.g. nbclassic.nbextension_paths\n paths.jupyter_path.__code__ = get_jupyter_path_without_home.__code__\n\n # same thing for config_dir,\n # which applies to some things like ExtensionApp config paths\n # and nbclassic.static_custom_path\n\n # allows explicit override if $JUPYTER_CONFIG_DIR is set\n # or config dir is otherwise not in $HOME\n\n if not os.getenv(\"JUPYTER_CONFIG_DIR\") and not list(\n _exclude_home([paths.jupyter_config_dir()])\n ):\n # patch specifically Application.config_dir\n # this affects ServerApp and ExtensionApp,\n # but does not affect JupyterLab's user-settings, etc.\n # patching the traitlet directly affects all instances,\n # already-created or future\n from jupyter_core.application import JupyterApp\n\n def get_env_config_dir(obj, cls=None):\n return paths.ENV_CONFIG_PATH[0]\n\n JupyterApp.config_dir.get = get_env_config_dir\n\n # record disabled state on app object\n serverapp.disable_user_config = True\n", "path": "jupyterhub/singleuser/_disable_user_config.py"}]}
| 1,843 | 309 |
gh_patches_debug_30267
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-7768
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reconnecting error messages with master(-ish)
Version: ee6223ab74eb
My situation is that I set up a remote notebook on a server that I can only reach when on a VPN. Now, my Wifi is shaky for some reason, and that's possibly why my VPN gets shaky too.
So I needed to close my notebook after a VPN reconnect and the dashboard still reported it as 'running'. When I clicked on the 'running' notebook, I got this error in the console:
``` python
RunTimeError: Method not supported by Web Sockets
```
Full error log here:
https://gist.github.com/23ed2e252897d96804a5
Working on OSX 10.9.5 using conda and Python 3.4
--- END ISSUE ---
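Worth noting while reading the handler code below: it already implements a websocket ping keep-alive precisely so that connections that die silently (VPN drops and the like) get closed on the server side. A hypothetical tuning sketch follows; the two setting names are the ones the handler reads from `self.settings`, but wiring them in through `NotebookApp.tornado_settings` is an assumption on my part and not a verified fix for the traceback above:

```python
# In the notebook config file (ipython_notebook_config.py at the time of this
# issue) -- hypothetical keep-alive tuning, not a fix for the RunTimeError itself.
c.NotebookApp.tornado_settings = {
    'ws_ping_interval': 10000,   # ping the browser every 10 s
    'ws_ping_timeout': 30000,    # drop the socket if no pong arrives within 30 s
}
```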
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/html/base/zmqhandlers.py`
Content:
```
1 # coding: utf-8
2 """Tornado handlers for WebSocket <-> ZMQ sockets."""
3
4 # Copyright (c) IPython Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7 import os
8 import json
9 import struct
10 import warnings
11
12 try:
13 from urllib.parse import urlparse # Py 3
14 except ImportError:
15 from urlparse import urlparse # Py 2
16
17 import tornado
18 from tornado import gen, ioloop, web
19 from tornado.websocket import WebSocketHandler
20
21 from IPython.kernel.zmq.session import Session
22 from IPython.utils.jsonutil import date_default, extract_dates
23 from IPython.utils.py3compat import cast_unicode
24
25 from .handlers import IPythonHandler
26
27 def serialize_binary_message(msg):
28 """serialize a message as a binary blob
29
30 Header:
31
32 4 bytes: number of msg parts (nbufs) as 32b int
33 4 * nbufs bytes: offset for each buffer as integer as 32b int
34
35 Offsets are from the start of the buffer, including the header.
36
37 Returns
38 -------
39
40 The message serialized to bytes.
41
42 """
43 # don't modify msg or buffer list in-place
44 msg = msg.copy()
45 buffers = list(msg.pop('buffers'))
46 bmsg = json.dumps(msg, default=date_default).encode('utf8')
47 buffers.insert(0, bmsg)
48 nbufs = len(buffers)
49 offsets = [4 * (nbufs + 1)]
50 for buf in buffers[:-1]:
51 offsets.append(offsets[-1] + len(buf))
52 offsets_buf = struct.pack('!' + 'I' * (nbufs + 1), nbufs, *offsets)
53 buffers.insert(0, offsets_buf)
54 return b''.join(buffers)
55
56
57 def deserialize_binary_message(bmsg):
58 """deserialize a message from a binary blog
59
60 Header:
61
62 4 bytes: number of msg parts (nbufs) as 32b int
63 4 * nbufs bytes: offset for each buffer as integer as 32b int
64
65 Offsets are from the start of the buffer, including the header.
66
67 Returns
68 -------
69
70 message dictionary
71 """
72 nbufs = struct.unpack('!i', bmsg[:4])[0]
73 offsets = list(struct.unpack('!' + 'I' * nbufs, bmsg[4:4*(nbufs+1)]))
74 offsets.append(None)
75 bufs = []
76 for start, stop in zip(offsets[:-1], offsets[1:]):
77 bufs.append(bmsg[start:stop])
78 msg = json.loads(bufs[0].decode('utf8'))
79 msg['header'] = extract_dates(msg['header'])
80 msg['parent_header'] = extract_dates(msg['parent_header'])
81 msg['buffers'] = bufs[1:]
82 return msg
83
84 # ping interval for keeping websockets alive (30 seconds)
85 WS_PING_INTERVAL = 30000
86
87 if os.environ.get('IPYTHON_ALLOW_DRAFT_WEBSOCKETS_FOR_PHANTOMJS', False):
88 warnings.warn("""Allowing draft76 websocket connections!
89 This should only be done for testing with phantomjs!""")
90 from IPython.html import allow76
91 WebSocketHandler = allow76.AllowDraftWebSocketHandler
92 # draft 76 doesn't support ping
93 WS_PING_INTERVAL = 0
94
95 class ZMQStreamHandler(WebSocketHandler):
96
97 def check_origin(self, origin):
98 """Check Origin == Host or Access-Control-Allow-Origin.
99
100 Tornado >= 4 calls this method automatically, raising 403 if it returns False.
101 We call it explicitly in `open` on Tornado < 4.
102 """
103 if self.allow_origin == '*':
104 return True
105
106 host = self.request.headers.get("Host")
107
108 # If no header is provided, assume we can't verify origin
109 if origin is None:
110 self.log.warn("Missing Origin header, rejecting WebSocket connection.")
111 return False
112 if host is None:
113 self.log.warn("Missing Host header, rejecting WebSocket connection.")
114 return False
115
116 origin = origin.lower()
117 origin_host = urlparse(origin).netloc
118
119 # OK if origin matches host
120 if origin_host == host:
121 return True
122
123 # Check CORS headers
124 if self.allow_origin:
125 allow = self.allow_origin == origin
126 elif self.allow_origin_pat:
127 allow = bool(self.allow_origin_pat.match(origin))
128 else:
129 # No CORS headers deny the request
130 allow = False
131 if not allow:
132 self.log.warn("Blocking Cross Origin WebSocket Attempt. Origin: %s, Host: %s",
133 origin, host,
134 )
135 return allow
136
137 def clear_cookie(self, *args, **kwargs):
138 """meaningless for websockets"""
139 pass
140
141 def _reserialize_reply(self, msg_list, channel=None):
142 """Reserialize a reply message using JSON.
143
144 This takes the msg list from the ZMQ socket, deserializes it using
145 self.session and then serializes the result using JSON. This method
146 should be used by self._on_zmq_reply to build messages that can
147 be sent back to the browser.
148 """
149 idents, msg_list = self.session.feed_identities(msg_list)
150 msg = self.session.deserialize(msg_list)
151 if channel:
152 msg['channel'] = channel
153 if msg['buffers']:
154 buf = serialize_binary_message(msg)
155 return buf
156 else:
157 smsg = json.dumps(msg, default=date_default)
158 return cast_unicode(smsg)
159
160 def _on_zmq_reply(self, stream, msg_list):
161 # Sometimes this gets triggered when the on_close method is scheduled in the
162 # eventloop but hasn't been called.
163 if stream.closed(): return
164 channel = getattr(stream, 'channel', None)
165 try:
166 msg = self._reserialize_reply(msg_list, channel=channel)
167 except Exception:
168 self.log.critical("Malformed message: %r" % msg_list, exc_info=True)
169 else:
170 self.write_message(msg, binary=isinstance(msg, bytes))
171
172 class AuthenticatedZMQStreamHandler(ZMQStreamHandler, IPythonHandler):
173 ping_callback = None
174 last_ping = 0
175 last_pong = 0
176
177 @property
178 def ping_interval(self):
179 """The interval for websocket keep-alive pings.
180
181 Set ws_ping_interval = 0 to disable pings.
182 """
183 return self.settings.get('ws_ping_interval', WS_PING_INTERVAL)
184
185 @property
186 def ping_timeout(self):
187 """If no ping is received in this many milliseconds,
188 close the websocket connection (VPNs, etc. can fail to cleanly close ws connections).
189 Default is max of 3 pings or 30 seconds.
190 """
191 return self.settings.get('ws_ping_timeout',
192 max(3 * self.ping_interval, WS_PING_INTERVAL)
193 )
194
195 def set_default_headers(self):
196 """Undo the set_default_headers in IPythonHandler
197
198 which doesn't make sense for websockets
199 """
200 pass
201
202 def pre_get(self):
203 """Run before finishing the GET request
204
205 Extend this method to add logic that should fire before
206 the websocket finishes completing.
207 """
208 # authenticate the request before opening the websocket
209 if self.get_current_user() is None:
210 self.log.warn("Couldn't authenticate WebSocket connection")
211 raise web.HTTPError(403)
212
213 if self.get_argument('session_id', False):
214 self.session.session = cast_unicode(self.get_argument('session_id'))
215 else:
216 self.log.warn("No session ID specified")
217
218 @gen.coroutine
219 def get(self, *args, **kwargs):
220 # pre_get can be a coroutine in subclasses
221 # assign and yield in two step to avoid tornado 3 issues
222 res = self.pre_get()
223 yield gen.maybe_future(res)
224 super(AuthenticatedZMQStreamHandler, self).get(*args, **kwargs)
225
226 def initialize(self):
227 self.log.debug("Initializing websocket connection %s", self.request.path)
228 self.session = Session(config=self.config)
229
230 def open(self, *args, **kwargs):
231 self.log.debug("Opening websocket %s", self.request.path)
232
233 # start the pinging
234 if self.ping_interval > 0:
235 loop = ioloop.IOLoop.current()
236 self.last_ping = loop.time() # Remember time of last ping
237 self.last_pong = self.last_ping
238 self.ping_callback = ioloop.PeriodicCallback(
239 self.send_ping, self.ping_interval, io_loop=loop,
240 )
241 self.ping_callback.start()
242
243 def send_ping(self):
244 """send a ping to keep the websocket alive"""
245 if self.stream.closed() and self.ping_callback is not None:
246 self.ping_callback.stop()
247 return
248
249 # check for timeout on pong. Make sure that we really have sent a recent ping in
250 # case the machine with both server and client has been suspended since the last ping.
251 now = ioloop.IOLoop.current().time()
252 since_last_pong = 1e3 * (now - self.last_pong)
253 since_last_ping = 1e3 * (now - self.last_ping)
254 if since_last_ping < 2*self.ping_interval and since_last_pong > self.ping_timeout:
255 self.log.warn("WebSocket ping timeout after %i ms.", since_last_pong)
256 self.close()
257 return
258
259 self.ping(b'')
260 self.last_ping = now
261
262 def on_pong(self, data):
263 self.last_pong = ioloop.IOLoop.current().time()
264
```
--- END FILES ---
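As a quick aid to reading `serialize_binary_message` / `deserialize_binary_message` in the file above, here is a toy round-trip of the same framing using only the standard library. It is an illustration written for this write-up, not part of the IPython source: the header is the 32-bit part count followed by one 32-bit offset per part, and part 0 is the JSON-encoded message.

```python
import json
import struct

def pack(msg, buffers):
    # parts[0] is the JSON-encoded message, followed by the raw buffers
    parts = [json.dumps(msg).encode("utf8")] + list(buffers)
    nbufs = len(parts)
    offsets = [4 * (nbufs + 1)]  # header size: the count field plus one offset per part
    for part in parts[:-1]:
        offsets.append(offsets[-1] + len(part))
    header = struct.pack("!" + "I" * (nbufs + 1), nbufs, *offsets)
    return b"".join([header] + parts)

def unpack(blob):
    nbufs = struct.unpack("!I", blob[:4])[0]
    offsets = list(struct.unpack("!" + "I" * nbufs, blob[4:4 * (nbufs + 1)])) + [None]
    parts = [blob[a:b] for a, b in zip(offsets[:-1], offsets[1:])]
    return json.loads(parts[0].decode("utf8")), parts[1:]

msg, bufs = unpack(pack({"msg_type": "demo"}, [b"\x00\x01"]))
assert msg == {"msg_type": "demo"} and bufs == [b"\x00\x01"]
```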
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/html/base/zmqhandlers.py b/IPython/html/base/zmqhandlers.py
--- a/IPython/html/base/zmqhandlers.py
+++ b/IPython/html/base/zmqhandlers.py
@@ -94,11 +94,23 @@
class ZMQStreamHandler(WebSocketHandler):
+ if tornado.version_info < (4,1):
+ """Backport send_error from tornado 4.1 to 4.0"""
+ def send_error(self, *args, **kwargs):
+ if self.stream is None:
+ super(WebSocketHandler, self).send_error(*args, **kwargs)
+ else:
+ # If we get an uncaught exception during the handshake,
+ # we have no choice but to abruptly close the connection.
+ # TODO: for uncaught exceptions after the handshake,
+ # we can close the connection more gracefully.
+ self.stream.close()
+
+
def check_origin(self, origin):
"""Check Origin == Host or Access-Control-Allow-Origin.
Tornado >= 4 calls this method automatically, raising 403 if it returns False.
- We call it explicitly in `open` on Tornado < 4.
"""
if self.allow_origin == '*':
return True
@@ -160,7 +172,10 @@
def _on_zmq_reply(self, stream, msg_list):
# Sometimes this gets triggered when the on_close method is scheduled in the
# eventloop but hasn't been called.
- if stream.closed(): return
+ if self.stream.closed() or stream.closed():
+ self.log.warn("zmq message arrived on closed channel")
+ self.close()
+ return
channel = getattr(stream, 'channel', None)
try:
msg = self._reserialize_reply(msg_list, channel=channel)
|
{"golden_diff": "diff --git a/IPython/html/base/zmqhandlers.py b/IPython/html/base/zmqhandlers.py\n--- a/IPython/html/base/zmqhandlers.py\n+++ b/IPython/html/base/zmqhandlers.py\n@@ -94,11 +94,23 @@\n \n class ZMQStreamHandler(WebSocketHandler):\n \n+ if tornado.version_info < (4,1):\n+ \"\"\"Backport send_error from tornado 4.1 to 4.0\"\"\"\n+ def send_error(self, *args, **kwargs):\n+ if self.stream is None:\n+ super(WebSocketHandler, self).send_error(*args, **kwargs)\n+ else:\n+ # If we get an uncaught exception during the handshake,\n+ # we have no choice but to abruptly close the connection.\n+ # TODO: for uncaught exceptions after the handshake,\n+ # we can close the connection more gracefully.\n+ self.stream.close()\n+\n+ \n def check_origin(self, origin):\n \"\"\"Check Origin == Host or Access-Control-Allow-Origin.\n \n Tornado >= 4 calls this method automatically, raising 403 if it returns False.\n- We call it explicitly in `open` on Tornado < 4.\n \"\"\"\n if self.allow_origin == '*':\n return True\n@@ -160,7 +172,10 @@\n def _on_zmq_reply(self, stream, msg_list):\n # Sometimes this gets triggered when the on_close method is scheduled in the\n # eventloop but hasn't been called.\n- if stream.closed(): return\n+ if self.stream.closed() or stream.closed():\n+ self.log.warn(\"zmq message arrived on closed channel\")\n+ self.close()\n+ return\n channel = getattr(stream, 'channel', None)\n try:\n msg = self._reserialize_reply(msg_list, channel=channel)\n", "issue": "Reconnecting error messages with master(-ish)\nVersion: ee6223ab74eb\n\nMy situation is that I set up a remote notebook on a server that I only can reach when in a VPN. Now, my Wifi is shaky for some reason, that's possibly how my VPN get's shaky.\nSo I needed to close my notebook after a VPN reconnect and the dashboard still reported it as 'running'. When I clicked on the 'running' notebook, I got this error in the console:\n\n``` python\nRunTimeError: Method not supported by Web Sockets\n```\n\nFull error log here:\nhttps://gist.github.com/23ed2e252897d96804a5\n\nWorking on OSX 10.9.5 using conda and Python 3.4\n\nReconnecting error messages with master(-ish)\nVersion: ee6223ab74eb\n\nMy situation is that I set up a remote notebook on a server that I only can reach when in a VPN. Now, my Wifi is shaky for some reason, that's possibly how my VPN get's shaky.\nSo I needed to close my notebook after a VPN reconnect and the dashboard still reported it as 'running'. 
When I clicked on the 'running' notebook, I got this error in the console:\n\n``` python\nRunTimeError: Method not supported by Web Sockets\n```\n\nFull error log here:\nhttps://gist.github.com/23ed2e252897d96804a5\n\nWorking on OSX 10.9.5 using conda and Python 3.4\n\n", "before_files": [{"content": "# coding: utf-8\n\"\"\"Tornado handlers for WebSocket <-> ZMQ sockets.\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\nimport json\nimport struct\nimport warnings\n\ntry:\n from urllib.parse import urlparse # Py 3\nexcept ImportError:\n from urlparse import urlparse # Py 2\n\nimport tornado\nfrom tornado import gen, ioloop, web\nfrom tornado.websocket import WebSocketHandler\n\nfrom IPython.kernel.zmq.session import Session\nfrom IPython.utils.jsonutil import date_default, extract_dates\nfrom IPython.utils.py3compat import cast_unicode\n\nfrom .handlers import IPythonHandler\n\ndef serialize_binary_message(msg):\n \"\"\"serialize a message as a binary blob\n\n Header:\n\n 4 bytes: number of msg parts (nbufs) as 32b int\n 4 * nbufs bytes: offset for each buffer as integer as 32b int\n\n Offsets are from the start of the buffer, including the header.\n\n Returns\n -------\n\n The message serialized to bytes.\n\n \"\"\"\n # don't modify msg or buffer list in-place\n msg = msg.copy()\n buffers = list(msg.pop('buffers'))\n bmsg = json.dumps(msg, default=date_default).encode('utf8')\n buffers.insert(0, bmsg)\n nbufs = len(buffers)\n offsets = [4 * (nbufs + 1)]\n for buf in buffers[:-1]:\n offsets.append(offsets[-1] + len(buf))\n offsets_buf = struct.pack('!' + 'I' * (nbufs + 1), nbufs, *offsets)\n buffers.insert(0, offsets_buf)\n return b''.join(buffers)\n\n\ndef deserialize_binary_message(bmsg):\n \"\"\"deserialize a message from a binary blog\n\n Header:\n\n 4 bytes: number of msg parts (nbufs) as 32b int\n 4 * nbufs bytes: offset for each buffer as integer as 32b int\n\n Offsets are from the start of the buffer, including the header.\n\n Returns\n -------\n\n message dictionary\n \"\"\"\n nbufs = struct.unpack('!i', bmsg[:4])[0]\n offsets = list(struct.unpack('!' 
+ 'I' * nbufs, bmsg[4:4*(nbufs+1)]))\n offsets.append(None)\n bufs = []\n for start, stop in zip(offsets[:-1], offsets[1:]):\n bufs.append(bmsg[start:stop])\n msg = json.loads(bufs[0].decode('utf8'))\n msg['header'] = extract_dates(msg['header'])\n msg['parent_header'] = extract_dates(msg['parent_header'])\n msg['buffers'] = bufs[1:]\n return msg\n\n# ping interval for keeping websockets alive (30 seconds)\nWS_PING_INTERVAL = 30000\n\nif os.environ.get('IPYTHON_ALLOW_DRAFT_WEBSOCKETS_FOR_PHANTOMJS', False):\n warnings.warn(\"\"\"Allowing draft76 websocket connections!\n This should only be done for testing with phantomjs!\"\"\")\n from IPython.html import allow76\n WebSocketHandler = allow76.AllowDraftWebSocketHandler\n # draft 76 doesn't support ping\n WS_PING_INTERVAL = 0\n\nclass ZMQStreamHandler(WebSocketHandler):\n \n def check_origin(self, origin):\n \"\"\"Check Origin == Host or Access-Control-Allow-Origin.\n \n Tornado >= 4 calls this method automatically, raising 403 if it returns False.\n We call it explicitly in `open` on Tornado < 4.\n \"\"\"\n if self.allow_origin == '*':\n return True\n\n host = self.request.headers.get(\"Host\")\n\n # If no header is provided, assume we can't verify origin\n if origin is None:\n self.log.warn(\"Missing Origin header, rejecting WebSocket connection.\")\n return False\n if host is None:\n self.log.warn(\"Missing Host header, rejecting WebSocket connection.\")\n return False\n \n origin = origin.lower()\n origin_host = urlparse(origin).netloc\n \n # OK if origin matches host\n if origin_host == host:\n return True\n \n # Check CORS headers\n if self.allow_origin:\n allow = self.allow_origin == origin\n elif self.allow_origin_pat:\n allow = bool(self.allow_origin_pat.match(origin))\n else:\n # No CORS headers deny the request\n allow = False\n if not allow:\n self.log.warn(\"Blocking Cross Origin WebSocket Attempt. Origin: %s, Host: %s\",\n origin, host,\n )\n return allow\n\n def clear_cookie(self, *args, **kwargs):\n \"\"\"meaningless for websockets\"\"\"\n pass\n\n def _reserialize_reply(self, msg_list, channel=None):\n \"\"\"Reserialize a reply message using JSON.\n\n This takes the msg list from the ZMQ socket, deserializes it using\n self.session and then serializes the result using JSON. 
This method\n should be used by self._on_zmq_reply to build messages that can\n be sent back to the browser.\n \"\"\"\n idents, msg_list = self.session.feed_identities(msg_list)\n msg = self.session.deserialize(msg_list)\n if channel:\n msg['channel'] = channel\n if msg['buffers']:\n buf = serialize_binary_message(msg)\n return buf\n else:\n smsg = json.dumps(msg, default=date_default)\n return cast_unicode(smsg)\n\n def _on_zmq_reply(self, stream, msg_list):\n # Sometimes this gets triggered when the on_close method is scheduled in the\n # eventloop but hasn't been called.\n if stream.closed(): return\n channel = getattr(stream, 'channel', None)\n try:\n msg = self._reserialize_reply(msg_list, channel=channel)\n except Exception:\n self.log.critical(\"Malformed message: %r\" % msg_list, exc_info=True)\n else:\n self.write_message(msg, binary=isinstance(msg, bytes))\n\nclass AuthenticatedZMQStreamHandler(ZMQStreamHandler, IPythonHandler):\n ping_callback = None\n last_ping = 0\n last_pong = 0\n \n @property\n def ping_interval(self):\n \"\"\"The interval for websocket keep-alive pings.\n \n Set ws_ping_interval = 0 to disable pings.\n \"\"\"\n return self.settings.get('ws_ping_interval', WS_PING_INTERVAL)\n \n @property\n def ping_timeout(self):\n \"\"\"If no ping is received in this many milliseconds,\n close the websocket connection (VPNs, etc. can fail to cleanly close ws connections).\n Default is max of 3 pings or 30 seconds.\n \"\"\"\n return self.settings.get('ws_ping_timeout',\n max(3 * self.ping_interval, WS_PING_INTERVAL)\n )\n\n def set_default_headers(self):\n \"\"\"Undo the set_default_headers in IPythonHandler\n \n which doesn't make sense for websockets\n \"\"\"\n pass\n \n def pre_get(self):\n \"\"\"Run before finishing the GET request\n \n Extend this method to add logic that should fire before\n the websocket finishes completing.\n \"\"\"\n # authenticate the request before opening the websocket\n if self.get_current_user() is None:\n self.log.warn(\"Couldn't authenticate WebSocket connection\")\n raise web.HTTPError(403)\n \n if self.get_argument('session_id', False):\n self.session.session = cast_unicode(self.get_argument('session_id'))\n else:\n self.log.warn(\"No session ID specified\")\n \n @gen.coroutine\n def get(self, *args, **kwargs):\n # pre_get can be a coroutine in subclasses\n # assign and yield in two step to avoid tornado 3 issues\n res = self.pre_get()\n yield gen.maybe_future(res)\n super(AuthenticatedZMQStreamHandler, self).get(*args, **kwargs)\n \n def initialize(self):\n self.log.debug(\"Initializing websocket connection %s\", self.request.path)\n self.session = Session(config=self.config)\n \n def open(self, *args, **kwargs):\n self.log.debug(\"Opening websocket %s\", self.request.path)\n \n # start the pinging\n if self.ping_interval > 0:\n loop = ioloop.IOLoop.current()\n self.last_ping = loop.time() # Remember time of last ping\n self.last_pong = self.last_ping\n self.ping_callback = ioloop.PeriodicCallback(\n self.send_ping, self.ping_interval, io_loop=loop,\n )\n self.ping_callback.start()\n\n def send_ping(self):\n \"\"\"send a ping to keep the websocket alive\"\"\"\n if self.stream.closed() and self.ping_callback is not None:\n self.ping_callback.stop()\n return\n \n # check for timeout on pong. 
Make sure that we really have sent a recent ping in\n # case the machine with both server and client has been suspended since the last ping.\n now = ioloop.IOLoop.current().time()\n since_last_pong = 1e3 * (now - self.last_pong)\n since_last_ping = 1e3 * (now - self.last_ping)\n if since_last_ping < 2*self.ping_interval and since_last_pong > self.ping_timeout:\n self.log.warn(\"WebSocket ping timeout after %i ms.\", since_last_pong)\n self.close()\n return\n\n self.ping(b'')\n self.last_ping = now\n\n def on_pong(self, data):\n self.last_pong = ioloop.IOLoop.current().time()\n", "path": "IPython/html/base/zmqhandlers.py"}], "after_files": [{"content": "# coding: utf-8\n\"\"\"Tornado handlers for WebSocket <-> ZMQ sockets.\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\nimport json\nimport struct\nimport warnings\n\ntry:\n from urllib.parse import urlparse # Py 3\nexcept ImportError:\n from urlparse import urlparse # Py 2\n\nimport tornado\nfrom tornado import gen, ioloop, web\nfrom tornado.websocket import WebSocketHandler\n\nfrom IPython.kernel.zmq.session import Session\nfrom IPython.utils.jsonutil import date_default, extract_dates\nfrom IPython.utils.py3compat import cast_unicode\n\nfrom .handlers import IPythonHandler\n\ndef serialize_binary_message(msg):\n \"\"\"serialize a message as a binary blob\n\n Header:\n\n 4 bytes: number of msg parts (nbufs) as 32b int\n 4 * nbufs bytes: offset for each buffer as integer as 32b int\n\n Offsets are from the start of the buffer, including the header.\n\n Returns\n -------\n\n The message serialized to bytes.\n\n \"\"\"\n # don't modify msg or buffer list in-place\n msg = msg.copy()\n buffers = list(msg.pop('buffers'))\n bmsg = json.dumps(msg, default=date_default).encode('utf8')\n buffers.insert(0, bmsg)\n nbufs = len(buffers)\n offsets = [4 * (nbufs + 1)]\n for buf in buffers[:-1]:\n offsets.append(offsets[-1] + len(buf))\n offsets_buf = struct.pack('!' + 'I' * (nbufs + 1), nbufs, *offsets)\n buffers.insert(0, offsets_buf)\n return b''.join(buffers)\n\n\ndef deserialize_binary_message(bmsg):\n \"\"\"deserialize a message from a binary blog\n\n Header:\n\n 4 bytes: number of msg parts (nbufs) as 32b int\n 4 * nbufs bytes: offset for each buffer as integer as 32b int\n\n Offsets are from the start of the buffer, including the header.\n\n Returns\n -------\n\n message dictionary\n \"\"\"\n nbufs = struct.unpack('!i', bmsg[:4])[0]\n offsets = list(struct.unpack('!' 
+ 'I' * nbufs, bmsg[4:4*(nbufs+1)]))\n offsets.append(None)\n bufs = []\n for start, stop in zip(offsets[:-1], offsets[1:]):\n bufs.append(bmsg[start:stop])\n msg = json.loads(bufs[0].decode('utf8'))\n msg['header'] = extract_dates(msg['header'])\n msg['parent_header'] = extract_dates(msg['parent_header'])\n msg['buffers'] = bufs[1:]\n return msg\n\n# ping interval for keeping websockets alive (30 seconds)\nWS_PING_INTERVAL = 30000\n\nif os.environ.get('IPYTHON_ALLOW_DRAFT_WEBSOCKETS_FOR_PHANTOMJS', False):\n warnings.warn(\"\"\"Allowing draft76 websocket connections!\n This should only be done for testing with phantomjs!\"\"\")\n from IPython.html import allow76\n WebSocketHandler = allow76.AllowDraftWebSocketHandler\n # draft 76 doesn't support ping\n WS_PING_INTERVAL = 0\n\nclass ZMQStreamHandler(WebSocketHandler):\n \n if tornado.version_info < (4,1):\n \"\"\"Backport send_error from tornado 4.1 to 4.0\"\"\"\n def send_error(self, *args, **kwargs):\n if self.stream is None:\n super(WebSocketHandler, self).send_error(*args, **kwargs)\n else:\n # If we get an uncaught exception during the handshake,\n # we have no choice but to abruptly close the connection.\n # TODO: for uncaught exceptions after the handshake,\n # we can close the connection more gracefully.\n self.stream.close()\n\n \n def check_origin(self, origin):\n \"\"\"Check Origin == Host or Access-Control-Allow-Origin.\n \n Tornado >= 4 calls this method automatically, raising 403 if it returns False.\n \"\"\"\n if self.allow_origin == '*':\n return True\n\n host = self.request.headers.get(\"Host\")\n\n # If no header is provided, assume we can't verify origin\n if origin is None:\n self.log.warn(\"Missing Origin header, rejecting WebSocket connection.\")\n return False\n if host is None:\n self.log.warn(\"Missing Host header, rejecting WebSocket connection.\")\n return False\n \n origin = origin.lower()\n origin_host = urlparse(origin).netloc\n \n # OK if origin matches host\n if origin_host == host:\n return True\n \n # Check CORS headers\n if self.allow_origin:\n allow = self.allow_origin == origin\n elif self.allow_origin_pat:\n allow = bool(self.allow_origin_pat.match(origin))\n else:\n # No CORS headers deny the request\n allow = False\n if not allow:\n self.log.warn(\"Blocking Cross Origin WebSocket Attempt. Origin: %s, Host: %s\",\n origin, host,\n )\n return allow\n\n def clear_cookie(self, *args, **kwargs):\n \"\"\"meaningless for websockets\"\"\"\n pass\n\n def _reserialize_reply(self, msg_list, channel=None):\n \"\"\"Reserialize a reply message using JSON.\n\n This takes the msg list from the ZMQ socket, deserializes it using\n self.session and then serializes the result using JSON. 
This method\n should be used by self._on_zmq_reply to build messages that can\n be sent back to the browser.\n \"\"\"\n idents, msg_list = self.session.feed_identities(msg_list)\n msg = self.session.deserialize(msg_list)\n if channel:\n msg['channel'] = channel\n if msg['buffers']:\n buf = serialize_binary_message(msg)\n return buf\n else:\n smsg = json.dumps(msg, default=date_default)\n return cast_unicode(smsg)\n\n def _on_zmq_reply(self, stream, msg_list):\n # Sometimes this gets triggered when the on_close method is scheduled in the\n # eventloop but hasn't been called.\n if self.stream.closed() or stream.closed():\n self.log.warn(\"zmq message arrived on closed channel\")\n self.close()\n return\n channel = getattr(stream, 'channel', None)\n try:\n msg = self._reserialize_reply(msg_list, channel=channel)\n except Exception:\n self.log.critical(\"Malformed message: %r\" % msg_list, exc_info=True)\n else:\n self.write_message(msg, binary=isinstance(msg, bytes))\n\nclass AuthenticatedZMQStreamHandler(ZMQStreamHandler, IPythonHandler):\n ping_callback = None\n last_ping = 0\n last_pong = 0\n \n @property\n def ping_interval(self):\n \"\"\"The interval for websocket keep-alive pings.\n \n Set ws_ping_interval = 0 to disable pings.\n \"\"\"\n return self.settings.get('ws_ping_interval', WS_PING_INTERVAL)\n \n @property\n def ping_timeout(self):\n \"\"\"If no ping is received in this many milliseconds,\n close the websocket connection (VPNs, etc. can fail to cleanly close ws connections).\n Default is max of 3 pings or 30 seconds.\n \"\"\"\n return self.settings.get('ws_ping_timeout',\n max(3 * self.ping_interval, WS_PING_INTERVAL)\n )\n\n def set_default_headers(self):\n \"\"\"Undo the set_default_headers in IPythonHandler\n \n which doesn't make sense for websockets\n \"\"\"\n pass\n \n def pre_get(self):\n \"\"\"Run before finishing the GET request\n \n Extend this method to add logic that should fire before\n the websocket finishes completing.\n \"\"\"\n # authenticate the request before opening the websocket\n if self.get_current_user() is None:\n self.log.warn(\"Couldn't authenticate WebSocket connection\")\n raise web.HTTPError(403)\n \n if self.get_argument('session_id', False):\n self.session.session = cast_unicode(self.get_argument('session_id'))\n else:\n self.log.warn(\"No session ID specified\")\n \n @gen.coroutine\n def get(self, *args, **kwargs):\n # pre_get can be a coroutine in subclasses\n # assign and yield in two step to avoid tornado 3 issues\n res = self.pre_get()\n yield gen.maybe_future(res)\n super(AuthenticatedZMQStreamHandler, self).get(*args, **kwargs)\n \n def initialize(self):\n self.log.debug(\"Initializing websocket connection %s\", self.request.path)\n self.session = Session(config=self.config)\n \n def open(self, *args, **kwargs):\n self.log.debug(\"Opening websocket %s\", self.request.path)\n \n # start the pinging\n if self.ping_interval > 0:\n loop = ioloop.IOLoop.current()\n self.last_ping = loop.time() # Remember time of last ping\n self.last_pong = self.last_ping\n self.ping_callback = ioloop.PeriodicCallback(\n self.send_ping, self.ping_interval, io_loop=loop,\n )\n self.ping_callback.start()\n\n def send_ping(self):\n \"\"\"send a ping to keep the websocket alive\"\"\"\n if self.stream.closed() and self.ping_callback is not None:\n self.ping_callback.stop()\n return\n \n # check for timeout on pong. 
Make sure that we really have sent a recent ping in\n # case the machine with both server and client has been suspended since the last ping.\n now = ioloop.IOLoop.current().time()\n since_last_pong = 1e3 * (now - self.last_pong)\n since_last_ping = 1e3 * (now - self.last_ping)\n if since_last_ping < 2*self.ping_interval and since_last_pong > self.ping_timeout:\n self.log.warn(\"WebSocket ping timeout after %i ms.\", since_last_pong)\n self.close()\n return\n\n self.ping(b'')\n self.last_ping = now\n\n def on_pong(self, data):\n self.last_pong = ioloop.IOLoop.current().time()\n", "path": "IPython/html/base/zmqhandlers.py"}]}
| 3,389 | 402 |
gh_patches_debug_4699
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-3607
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Prefect logger doesn't use local Python logger timezone configuration
## Opened from the [Prefect Public Slack Community](https://join.slack.com/t/prefect-public/shared_invite/enQtNzE5OTU3OTQwNzc1LTQ5M2FkZmQzZjI0ODg1ZTBmOTc0ZjVjYWFjMWExZDAyYzBmYjVmMTE1NTQ1Y2IxZTllOTc4MmI3NzYxMDlhYWU)
**dcshah88**: Hello Everyone,
Have started looking into prefect core library recently and exploring it to use for simple use cases.
One thing I am struggling with right now is to provide a different timezone to prefect instead of default UTC.
By default, prefect is generating UTC timestamps in logs but wanted to change it to my local timezone.
Could you please help ?
**znicholasbrown**: Hi <@U01CWF05D6Y> - Prefect uses Python's built-in logging module; please look at the <https://docs.python.org/3/library/logging.html|Python logging documentation> for info on how to configure your local Python logger.
**dcshah88**: Thanks <@UN6FTLFAS> : what I am confused about is, if I use python logging without prefect, it prints logs using local timezone by default but when used with prefect it prints timestamps in UTC.
looks like prefect is overriding default python logging behavior.
So was wondering if prefect has any config option to provide which timezone to use while printing logs ?
**znicholasbrown**: Ah, that's a good flag - it looks like we're explicitly defaulting to UTC instead of local in the logger, which is probably unintentional. Let me see if there's a reason for that and if not we can PR it :slightly_smiling_face:
**znicholasbrown**: I think this is something we can open a ticket for <@U01CWF05D6Y> - <https://github.com/PrefectHQ/prefect/blob/master/src/prefect/utilities/logging.py#L220|this line> is where the timezone is set in the logger; I'll use <@ULVA73B9P> to open the ticket from this thread :slightly_smiling_face:
**znicholasbrown**: <@ULVA73B9P> open "Prefect logger doesn't use local Python logger timezone configuration"
Original thread can be found [here](https://prefect-community.slack.com/archives/CL09KU1K7/p1603234095357600?thread_ts=1603234095.357600&cid=CL09KU1K7).
--- END ISSUE ---
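A side note for readers of the thread above: the UTC timestamps come from Prefect assigning `time.gmtime` as the formatter's time converter (the `logging.py` line linked in the thread). The snippet below is a minimal standard-library illustration of that knob, not Prefect code; the logger names are made up for the demo.

```python
import logging
import sys
import time

def make_logger(name, utc=False):
    handler = logging.StreamHandler(sys.stdout)
    formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    if utc:
        # logging.Formatter.converter defaults to time.localtime;
        # overriding it with time.gmtime is what forces UTC timestamps.
        formatter.converter = time.gmtime
    handler.setFormatter(formatter)
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

make_logger("local-demo").info("asctime uses local time (the logging default)")
make_logger("utc-demo", utc=True).info("asctime uses UTC, as described above")
```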
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/utilities/logging.py`
Content:
```
1 """
2 Utility functions for interacting with and configuring logging. The main entrypoint for
3 retrieving loggers for customization is the `get_logger` utility.
4
5 Note that Prefect Tasks come equipped with their own loggers. These can be accessed via:
6 - `self.logger` if implementing a Task class
7 - `prefect.context.get("logger")` if using the `task` decorator
8
9 When running locally, log levels and message formatting are set via your Prefect configuration file.
10 """
11 import atexit
12 import json
13 import logging
14 import sys
15 import threading
16 import time
17 from queue import Empty, Queue
18 from typing import Any
19
20 import pendulum
21
22 import prefect
23 from prefect.utilities.context import context
24
25 _original_log_record_factory = logging.getLogRecordFactory()
26
27 PREFECT_LOG_RECORD_ATTRIBUTES = (
28 "flow_name",
29 "flow_run_id",
30 "task_name",
31 "task_slug",
32 "task_run_id",
33 )
34
35
36 class CloudHandler(logging.StreamHandler):
37 def __init__(self) -> None:
38 super().__init__(sys.stdout)
39 self.client = None
40 self.logger = logging.getLogger("CloudHandler")
41 handler = logging.StreamHandler(sys.stdout)
42 formatter = logging.Formatter(
43 context.config.logging.format, context.config.logging.datefmt
44 )
45 formatter.converter = time.gmtime # type: ignore
46 handler.setFormatter(formatter)
47 self.logger.addHandler(handler)
48 self.logger.setLevel(context.config.logging.level)
49
50 @property
51 def queue(self) -> Queue:
52 if not hasattr(self, "_queue"):
53 self._queue = Queue() # type: Queue
54 self._flush = False
55 self.start()
56 return self._queue
57
58 def flush(self) -> None:
59 self._flush = True
60 if self.client is not None:
61 self.batch_upload()
62 self._thread.join()
63
64 def batch_upload(self) -> None:
65 logs = []
66 try:
67 while True:
68 log = self.queue.get(False)
69 logs.append(log)
70 except Empty:
71 pass
72
73 if logs:
74 try:
75 assert self.client is not None
76 self.client.write_run_logs(logs)
77 except Exception as exc:
78 message = "Failed to write log with error: {}".format(str(exc))
79 self.logger.critical(message)
80
81 # Attempt to write batch error log otherwise log invalid cloud communication
82 try:
83 assert self.client is not None
84 self.client.write_run_logs([self._make_error_log(message)])
85 except Exception:
86 self.logger.critical("Unable to write logs to Prefect Cloud")
87
88 def _monitor(self) -> None:
89 while not self._flush:
90 self.batch_upload()
91 time.sleep(self.heartbeat)
92
93 def __del__(self) -> None:
94 if hasattr(self, "_thread"):
95 self.flush()
96 atexit.unregister(self.flush)
97
98 def start(self) -> None:
99 if not hasattr(self, "_thread"):
100 self.heartbeat = context.config.cloud.logging_heartbeat
101 self._thread = t = threading.Thread(
102 target=self._monitor, name="PrefectCloudLoggingThread"
103 )
104 t.daemon = True
105 t.start()
106 atexit.register(self.flush)
107
108 def put(self, log: dict) -> None:
109 try:
110 json.dumps(log) # make sure the payload is serializable
111 self.queue.put(log)
112 except TypeError as exc:
113 message = "Failed to write log with error: {}".format(str(exc))
114 self.logger.critical(message)
115
116 self.queue.put(self._make_error_log(message))
117
118 def emit(self, record) -> None: # type: ignore
119 # if we shouldn't log to cloud, don't emit
120 if not prefect.context.config.logging.log_to_cloud:
121 return
122
123 try:
124 from prefect.client import Client
125
126 if self.client is None:
127 self.client = Client() # type: ignore
128
129 assert isinstance(self.client, Client) # mypy assert
130
131 record_dict = record.__dict__.copy()
132
133 # ensures emitted logs respect configured logging level
134 config_level = getattr(
135 logging, prefect.context.config.logging.level, logging.INFO
136 )
137
138 if record_dict["levelno"] < config_level:
139 return
140
141 # remove potentially non-json serializable formatting args
142 record_dict.pop("args", None)
143
144 log = dict()
145 log["flow_run_id"] = prefect.context.get("flow_run_id", None)
146 log["task_run_id"] = prefect.context.get("task_run_id", None)
147 log["timestamp"] = pendulum.from_timestamp(
148 record_dict.pop("created", time.time())
149 ).isoformat()
150 log["name"] = record_dict.pop("name", None)
151 log["message"] = record_dict.pop("message", None)
152 log["level"] = record_dict.pop("levelname", None)
153
154 if record_dict.get("exc_text") is not None:
155 log["message"] += "\n" + record_dict.pop("exc_text", "")
156 record_dict.pop("exc_info", None)
157
158 log["info"] = record_dict
159 self.put(log)
160 except Exception as exc:
161 message = "Failed to write log with error: {}".format(str(exc))
162 self.logger.critical(message)
163
164 self.put(self._make_error_log(message))
165
166 def _make_error_log(self, message: str) -> dict:
167 log = dict()
168 log["flow_run_id"] = prefect.context.get("flow_run_id", None)
169 log["timestamp"] = pendulum.from_timestamp(time.time()).isoformat()
170 log["name"] = self.logger.name
171 log["message"] = message
172 log["level"] = "CRITICAL"
173 log["info"] = {}
174
175 return log
176
177
178 def _log_record_context_injector(*args: Any, **kwargs: Any) -> logging.LogRecord:
179 """
180 A custom logger LogRecord Factory that injects selected context parameters into newly
181 created logs.
182
183 Args:
184 - *args: arguments to pass to the original LogRecord Factory
185 - **kwargs: keyword arguments to pass to the original LogRecord Factory
186
187 Returns:
188 - logging.LogRecord: the newly created LogRecord
189 """
190 record = _original_log_record_factory(*args, **kwargs)
191
192 additional_attrs = context.config.logging.get("log_attributes", [])
193
194 for attr in PREFECT_LOG_RECORD_ATTRIBUTES + tuple(additional_attrs):
195 value = prefect.context.get(attr, None)
196 if value or attr in additional_attrs:
197 setattr(record, attr, value)
198
199 return record
200
201
202 def _create_logger(name: str) -> logging.Logger:
203 """
204 Creates a logger with a `StreamHandler` that has level and formatting
205 set from `prefect.config`.
206
207 Args:
208 - name (str): Name to use for logger.
209
210 Returns:
211 - logging.Logger: a configured logging object
212 """
213 logging.setLogRecordFactory(_log_record_context_injector)
214
215 logger = logging.getLogger(name)
216 handler = logging.StreamHandler(sys.stdout)
217 formatter = logging.Formatter(
218 context.config.logging.format, context.config.logging.datefmt
219 )
220 formatter.converter = time.gmtime # type: ignore
221 handler.setFormatter(formatter)
222 logger.addHandler(handler)
223 logger.setLevel(context.config.logging.level)
224
225 # we set the cloud handler to DEBUG level
226 # but the handler itself will dynamically respond
227 # to the configured level in the emit() method call
228 cloud_handler = CloudHandler()
229 cloud_handler.setLevel("DEBUG")
230 logger.addHandler(cloud_handler)
231 return logger
232
233
234 def configure_logging(testing: bool = False) -> logging.Logger:
235 """
236 Creates a "prefect" root logger with a `StreamHandler` that has level and formatting
237 set from `prefect.config`.
238
239 Args:
240 - testing (bool, optional): a boolean specifying whether this configuration
241 is for testing purposes only; this helps us isolate any global state during testing
242 by configuring a "prefect-test-logger" instead of the standard "prefect" logger
243
244 Returns:
245 - logging.Logger: a configured logging object
246 """
247 name = "prefect-test-logger" if testing else "prefect"
248 return _create_logger(name)
249
250
251 context.logger = prefect_logger = configure_logging()
252
253
254 def configure_extra_loggers() -> None:
255 """
256 Creates a "Prefect" configured logger for all strings in extra_loggers config list.
257 The logging.extra_loggers config defaults to an empty list.
258 """
259 loggers = context.config.logging.get("extra_loggers", [])
260 for l in loggers:
261 _create_logger(l)
262
263
264 configure_extra_loggers()
265
266
267 def create_diagnostic_logger(name: str) -> logging.Logger:
268 """
269 Create a logger that does not use the `CloudHandler` but preserves all other
270 Prefect logging configuration. For diagnostic / debugging / internal use only.
271 """
272 logger = _create_logger(name)
273 logger.handlers = [h for h in logger.handlers if not isinstance(h, CloudHandler)]
274 return logger
275
276
277 def get_logger(name: str = None) -> logging.Logger:
278 """
279 Returns a "prefect" logger.
280
281 Args:
282 - name (str): if `None`, the root Prefect logger is returned. If provided, a child
283 logger of the name `"prefect.{name}"` is returned. The child logger inherits
284 the root logger's settings.
285
286 Returns:
287 - logging.Logger: a configured logging object with the appropriate name
288 """
289
290 if name is None:
291 return prefect_logger
292 else:
293 return prefect_logger.getChild(name)
294
295
296 class RedirectToLog:
297 """
298 Custom redirect of stdout messages to logs
299
300 Args:
301 - logger (logging.Logger, optional): an optional logger to redirect stdout. If
302 not provided a logger names `stdout` will be created.
303 """
304
305 def __init__(self, logger: logging.Logger = None) -> None:
306 self.stdout_logger = logger or get_logger("stdout")
307
308 def write(self, s: str) -> None:
309 """
310 Write message from stdout to a prefect logger.
311 Note: blank newlines will not be logged.
312
313 Args:
314 s (str): the message from stdout to be logged
315 """
316 if not isinstance(s, str):
317 # stdout is expecting str
318 raise TypeError(f"string argument expected, got {type(s)}")
319
320 if s.strip():
321 self.stdout_logger.info(s)
322
323 def flush(self) -> None:
324 """
325 Implemented flush operation for logger handler
326 """
327 for handler in self.stdout_logger.handlers:
328 handler.flush()
329
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/utilities/logging.py b/src/prefect/utilities/logging.py
--- a/src/prefect/utilities/logging.py
+++ b/src/prefect/utilities/logging.py
@@ -217,7 +217,6 @@
formatter = logging.Formatter(
context.config.logging.format, context.config.logging.datefmt
)
- formatter.converter = time.gmtime # type: ignore
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(context.config.logging.level)
|
{"golden_diff": "diff --git a/src/prefect/utilities/logging.py b/src/prefect/utilities/logging.py\n--- a/src/prefect/utilities/logging.py\n+++ b/src/prefect/utilities/logging.py\n@@ -217,7 +217,6 @@\n formatter = logging.Formatter(\n context.config.logging.format, context.config.logging.datefmt\n )\n- formatter.converter = time.gmtime # type: ignore\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(context.config.logging.level)\n", "issue": "Prefect logger doesn't use local Python logger timezone configuration\n## Opened from the [Prefect Public Slack Community](https://join.slack.com/t/prefect-public/shared_invite/enQtNzE5OTU3OTQwNzc1LTQ5M2FkZmQzZjI0ODg1ZTBmOTc0ZjVjYWFjMWExZDAyYzBmYjVmMTE1NTQ1Y2IxZTllOTc4MmI3NzYxMDlhYWU)\n\n**dcshah88**: Hello Everyone, \n\nHave started looking into prefect core library recently and exploring it to use for simple use cases.\n\nOne thing I am struggling with right now is to provide a different timezone to prefect instead of default UTC.\n\nBy default, prefect is generating UTC timestamps in logs but wanted to change it to my local timezone. \n\nCould you please help ?\n\n**znicholasbrown**: Hi <@U01CWF05D6Y> - Prefect uses Python's built-in logging module; please look at the <https://docs.python.org/3/library/logging.html|Python logging documentation> for info on how to configure your local Python logger.\n\n**dcshah88**: Thanks <@UN6FTLFAS> : what I am confused about is, if I use python logging without prefect, it prints logs using local timezone by default but when used with prefect it prints timestamps in UTC.\n\nlooks like prefect is overriding default python logging behavior.\n\nSo was wondering if prefect has any config option to provide which timezone to use while printing logs ?\n\n**znicholasbrown**: Ah, that's a good flag - it looks like we're explicitly defaulting to UTC instead of local in the logger, which is probably unintentional. Let me see if there's a reason for that and if not we can PR it :slightly_smiling_face:\n\n**znicholasbrown**: I think this is something we can open a ticket for <@U01CWF05D6Y> - <https://github.com/PrefectHQ/prefect/blob/master/src/prefect/utilities/logging.py#L220|this line> is where the timezone is set in the logger; I'll use <@ULVA73B9P> to open the ticket from this thread :slightly_smiling_face:\n\n**znicholasbrown**: <@ULVA73B9P> open \"Prefect logger doesn't use local Python logger timezone configuration\"\n\nOriginal thread can be found [here](https://prefect-community.slack.com/archives/CL09KU1K7/p1603234095357600?thread_ts=1603234095.357600&cid=CL09KU1K7).\n\n\n", "before_files": [{"content": "\"\"\"\nUtility functions for interacting with and configuring logging. The main entrypoint for\nretrieving loggers for customization is the `get_logger` utility.\n\nNote that Prefect Tasks come equipped with their own loggers. 
These can be accessed via:\n - `self.logger` if implementing a Task class\n - `prefect.context.get(\"logger\")` if using the `task` decorator\n\nWhen running locally, log levels and message formatting are set via your Prefect configuration file.\n\"\"\"\nimport atexit\nimport json\nimport logging\nimport sys\nimport threading\nimport time\nfrom queue import Empty, Queue\nfrom typing import Any\n\nimport pendulum\n\nimport prefect\nfrom prefect.utilities.context import context\n\n_original_log_record_factory = logging.getLogRecordFactory()\n\nPREFECT_LOG_RECORD_ATTRIBUTES = (\n \"flow_name\",\n \"flow_run_id\",\n \"task_name\",\n \"task_slug\",\n \"task_run_id\",\n)\n\n\nclass CloudHandler(logging.StreamHandler):\n def __init__(self) -> None:\n super().__init__(sys.stdout)\n self.client = None\n self.logger = logging.getLogger(\"CloudHandler\")\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\n context.config.logging.format, context.config.logging.datefmt\n )\n formatter.converter = time.gmtime # type: ignore\n handler.setFormatter(formatter)\n self.logger.addHandler(handler)\n self.logger.setLevel(context.config.logging.level)\n\n @property\n def queue(self) -> Queue:\n if not hasattr(self, \"_queue\"):\n self._queue = Queue() # type: Queue\n self._flush = False\n self.start()\n return self._queue\n\n def flush(self) -> None:\n self._flush = True\n if self.client is not None:\n self.batch_upload()\n self._thread.join()\n\n def batch_upload(self) -> None:\n logs = []\n try:\n while True:\n log = self.queue.get(False)\n logs.append(log)\n except Empty:\n pass\n\n if logs:\n try:\n assert self.client is not None\n self.client.write_run_logs(logs)\n except Exception as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n # Attempt to write batch error log otherwise log invalid cloud communication\n try:\n assert self.client is not None\n self.client.write_run_logs([self._make_error_log(message)])\n except Exception:\n self.logger.critical(\"Unable to write logs to Prefect Cloud\")\n\n def _monitor(self) -> None:\n while not self._flush:\n self.batch_upload()\n time.sleep(self.heartbeat)\n\n def __del__(self) -> None:\n if hasattr(self, \"_thread\"):\n self.flush()\n atexit.unregister(self.flush)\n\n def start(self) -> None:\n if not hasattr(self, \"_thread\"):\n self.heartbeat = context.config.cloud.logging_heartbeat\n self._thread = t = threading.Thread(\n target=self._monitor, name=\"PrefectCloudLoggingThread\"\n )\n t.daemon = True\n t.start()\n atexit.register(self.flush)\n\n def put(self, log: dict) -> None:\n try:\n json.dumps(log) # make sure the payload is serializable\n self.queue.put(log)\n except TypeError as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n self.queue.put(self._make_error_log(message))\n\n def emit(self, record) -> None: # type: ignore\n # if we shouldn't log to cloud, don't emit\n if not prefect.context.config.logging.log_to_cloud:\n return\n\n try:\n from prefect.client import Client\n\n if self.client is None:\n self.client = Client() # type: ignore\n\n assert isinstance(self.client, Client) # mypy assert\n\n record_dict = record.__dict__.copy()\n\n # ensures emitted logs respect configured logging level\n config_level = getattr(\n logging, prefect.context.config.logging.level, logging.INFO\n )\n\n if record_dict[\"levelno\"] < config_level:\n return\n\n # remove potentially non-json serializable formatting args\n 
record_dict.pop(\"args\", None)\n\n log = dict()\n log[\"flow_run_id\"] = prefect.context.get(\"flow_run_id\", None)\n log[\"task_run_id\"] = prefect.context.get(\"task_run_id\", None)\n log[\"timestamp\"] = pendulum.from_timestamp(\n record_dict.pop(\"created\", time.time())\n ).isoformat()\n log[\"name\"] = record_dict.pop(\"name\", None)\n log[\"message\"] = record_dict.pop(\"message\", None)\n log[\"level\"] = record_dict.pop(\"levelname\", None)\n\n if record_dict.get(\"exc_text\") is not None:\n log[\"message\"] += \"\\n\" + record_dict.pop(\"exc_text\", \"\")\n record_dict.pop(\"exc_info\", None)\n\n log[\"info\"] = record_dict\n self.put(log)\n except Exception as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n self.put(self._make_error_log(message))\n\n def _make_error_log(self, message: str) -> dict:\n log = dict()\n log[\"flow_run_id\"] = prefect.context.get(\"flow_run_id\", None)\n log[\"timestamp\"] = pendulum.from_timestamp(time.time()).isoformat()\n log[\"name\"] = self.logger.name\n log[\"message\"] = message\n log[\"level\"] = \"CRITICAL\"\n log[\"info\"] = {}\n\n return log\n\n\ndef _log_record_context_injector(*args: Any, **kwargs: Any) -> logging.LogRecord:\n \"\"\"\n A custom logger LogRecord Factory that injects selected context parameters into newly\n created logs.\n\n Args:\n - *args: arguments to pass to the original LogRecord Factory\n - **kwargs: keyword arguments to pass to the original LogRecord Factory\n\n Returns:\n - logging.LogRecord: the newly created LogRecord\n \"\"\"\n record = _original_log_record_factory(*args, **kwargs)\n\n additional_attrs = context.config.logging.get(\"log_attributes\", [])\n\n for attr in PREFECT_LOG_RECORD_ATTRIBUTES + tuple(additional_attrs):\n value = prefect.context.get(attr, None)\n if value or attr in additional_attrs:\n setattr(record, attr, value)\n\n return record\n\n\ndef _create_logger(name: str) -> logging.Logger:\n \"\"\"\n Creates a logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Args:\n - name (str): Name to use for logger.\n\n Returns:\n - logging.Logger: a configured logging object\n \"\"\"\n logging.setLogRecordFactory(_log_record_context_injector)\n\n logger = logging.getLogger(name)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\n context.config.logging.format, context.config.logging.datefmt\n )\n formatter.converter = time.gmtime # type: ignore\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(context.config.logging.level)\n\n # we set the cloud handler to DEBUG level\n # but the handler itself will dynamically respond\n # to the configured level in the emit() method call\n cloud_handler = CloudHandler()\n cloud_handler.setLevel(\"DEBUG\")\n logger.addHandler(cloud_handler)\n return logger\n\n\ndef configure_logging(testing: bool = False) -> logging.Logger:\n \"\"\"\n Creates a \"prefect\" root logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Args:\n - testing (bool, optional): a boolean specifying whether this configuration\n is for testing purposes only; this helps us isolate any global state during testing\n by configuring a \"prefect-test-logger\" instead of the standard \"prefect\" logger\n\n Returns:\n - logging.Logger: a configured logging object\n \"\"\"\n name = \"prefect-test-logger\" if testing else \"prefect\"\n return _create_logger(name)\n\n\ncontext.logger = prefect_logger = 
configure_logging()\n\n\ndef configure_extra_loggers() -> None:\n \"\"\"\n Creates a \"Prefect\" configured logger for all strings in extra_loggers config list.\n The logging.extra_loggers config defaults to an empty list.\n \"\"\"\n loggers = context.config.logging.get(\"extra_loggers\", [])\n for l in loggers:\n _create_logger(l)\n\n\nconfigure_extra_loggers()\n\n\ndef create_diagnostic_logger(name: str) -> logging.Logger:\n \"\"\"\n Create a logger that does not use the `CloudHandler` but preserves all other\n Prefect logging configuration. For diagnostic / debugging / internal use only.\n \"\"\"\n logger = _create_logger(name)\n logger.handlers = [h for h in logger.handlers if not isinstance(h, CloudHandler)]\n return logger\n\n\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Returns a \"prefect\" logger.\n\n Args:\n - name (str): if `None`, the root Prefect logger is returned. If provided, a child\n logger of the name `\"prefect.{name}\"` is returned. The child logger inherits\n the root logger's settings.\n\n Returns:\n - logging.Logger: a configured logging object with the appropriate name\n \"\"\"\n\n if name is None:\n return prefect_logger\n else:\n return prefect_logger.getChild(name)\n\n\nclass RedirectToLog:\n \"\"\"\n Custom redirect of stdout messages to logs\n\n Args:\n - logger (logging.Logger, optional): an optional logger to redirect stdout. If\n not provided a logger names `stdout` will be created.\n \"\"\"\n\n def __init__(self, logger: logging.Logger = None) -> None:\n self.stdout_logger = logger or get_logger(\"stdout\")\n\n def write(self, s: str) -> None:\n \"\"\"\n Write message from stdout to a prefect logger.\n Note: blank newlines will not be logged.\n\n Args:\n s (str): the message from stdout to be logged\n \"\"\"\n if not isinstance(s, str):\n # stdout is expecting str\n raise TypeError(f\"string argument expected, got {type(s)}\")\n\n if s.strip():\n self.stdout_logger.info(s)\n\n def flush(self) -> None:\n \"\"\"\n Implemented flush operation for logger handler\n \"\"\"\n for handler in self.stdout_logger.handlers:\n handler.flush()\n", "path": "src/prefect/utilities/logging.py"}], "after_files": [{"content": "\"\"\"\nUtility functions for interacting with and configuring logging. The main entrypoint for\nretrieving loggers for customization is the `get_logger` utility.\n\nNote that Prefect Tasks come equipped with their own loggers. 
These can be accessed via:\n - `self.logger` if implementing a Task class\n - `prefect.context.get(\"logger\")` if using the `task` decorator\n\nWhen running locally, log levels and message formatting are set via your Prefect configuration file.\n\"\"\"\nimport atexit\nimport json\nimport logging\nimport sys\nimport threading\nimport time\nfrom queue import Empty, Queue\nfrom typing import Any\n\nimport pendulum\n\nimport prefect\nfrom prefect.utilities.context import context\n\n_original_log_record_factory = logging.getLogRecordFactory()\n\nPREFECT_LOG_RECORD_ATTRIBUTES = (\n \"flow_name\",\n \"flow_run_id\",\n \"task_name\",\n \"task_slug\",\n \"task_run_id\",\n)\n\n\nclass CloudHandler(logging.StreamHandler):\n def __init__(self) -> None:\n super().__init__(sys.stdout)\n self.client = None\n self.logger = logging.getLogger(\"CloudHandler\")\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\n context.config.logging.format, context.config.logging.datefmt\n )\n formatter.converter = time.gmtime # type: ignore\n handler.setFormatter(formatter)\n self.logger.addHandler(handler)\n self.logger.setLevel(context.config.logging.level)\n\n @property\n def queue(self) -> Queue:\n if not hasattr(self, \"_queue\"):\n self._queue = Queue() # type: Queue\n self._flush = False\n self.start()\n return self._queue\n\n def flush(self) -> None:\n self._flush = True\n if self.client is not None:\n self.batch_upload()\n self._thread.join()\n\n def batch_upload(self) -> None:\n logs = []\n try:\n while True:\n log = self.queue.get(False)\n logs.append(log)\n except Empty:\n pass\n\n if logs:\n try:\n assert self.client is not None\n self.client.write_run_logs(logs)\n except Exception as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n # Attempt to write batch error log otherwise log invalid cloud communication\n try:\n assert self.client is not None\n self.client.write_run_logs([self._make_error_log(message)])\n except Exception:\n self.logger.critical(\"Unable to write logs to Prefect Cloud\")\n\n def _monitor(self) -> None:\n while not self._flush:\n self.batch_upload()\n time.sleep(self.heartbeat)\n\n def __del__(self) -> None:\n if hasattr(self, \"_thread\"):\n self.flush()\n atexit.unregister(self.flush)\n\n def start(self) -> None:\n if not hasattr(self, \"_thread\"):\n self.heartbeat = context.config.cloud.logging_heartbeat\n self._thread = t = threading.Thread(\n target=self._monitor, name=\"PrefectCloudLoggingThread\"\n )\n t.daemon = True\n t.start()\n atexit.register(self.flush)\n\n def put(self, log: dict) -> None:\n try:\n json.dumps(log) # make sure the payload is serializable\n self.queue.put(log)\n except TypeError as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n self.queue.put(self._make_error_log(message))\n\n def emit(self, record) -> None: # type: ignore\n # if we shouldn't log to cloud, don't emit\n if not prefect.context.config.logging.log_to_cloud:\n return\n\n try:\n from prefect.client import Client\n\n if self.client is None:\n self.client = Client() # type: ignore\n\n assert isinstance(self.client, Client) # mypy assert\n\n record_dict = record.__dict__.copy()\n\n # ensures emitted logs respect configured logging level\n config_level = getattr(\n logging, prefect.context.config.logging.level, logging.INFO\n )\n\n if record_dict[\"levelno\"] < config_level:\n return\n\n # remove potentially non-json serializable formatting args\n 
record_dict.pop(\"args\", None)\n\n log = dict()\n log[\"flow_run_id\"] = prefect.context.get(\"flow_run_id\", None)\n log[\"task_run_id\"] = prefect.context.get(\"task_run_id\", None)\n log[\"timestamp\"] = pendulum.from_timestamp(\n record_dict.pop(\"created\", time.time())\n ).isoformat()\n log[\"name\"] = record_dict.pop(\"name\", None)\n log[\"message\"] = record_dict.pop(\"message\", None)\n log[\"level\"] = record_dict.pop(\"levelname\", None)\n\n if record_dict.get(\"exc_text\") is not None:\n log[\"message\"] += \"\\n\" + record_dict.pop(\"exc_text\", \"\")\n record_dict.pop(\"exc_info\", None)\n\n log[\"info\"] = record_dict\n self.put(log)\n except Exception as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n self.put(self._make_error_log(message))\n\n def _make_error_log(self, message: str) -> dict:\n log = dict()\n log[\"flow_run_id\"] = prefect.context.get(\"flow_run_id\", None)\n log[\"timestamp\"] = pendulum.from_timestamp(time.time()).isoformat()\n log[\"name\"] = self.logger.name\n log[\"message\"] = message\n log[\"level\"] = \"CRITICAL\"\n log[\"info\"] = {}\n\n return log\n\n\ndef _log_record_context_injector(*args: Any, **kwargs: Any) -> logging.LogRecord:\n \"\"\"\n A custom logger LogRecord Factory that injects selected context parameters into newly\n created logs.\n\n Args:\n - *args: arguments to pass to the original LogRecord Factory\n - **kwargs: keyword arguments to pass to the original LogRecord Factory\n\n Returns:\n - logging.LogRecord: the newly created LogRecord\n \"\"\"\n record = _original_log_record_factory(*args, **kwargs)\n\n additional_attrs = context.config.logging.get(\"log_attributes\", [])\n\n for attr in PREFECT_LOG_RECORD_ATTRIBUTES + tuple(additional_attrs):\n value = prefect.context.get(attr, None)\n if value or attr in additional_attrs:\n setattr(record, attr, value)\n\n return record\n\n\ndef _create_logger(name: str) -> logging.Logger:\n \"\"\"\n Creates a logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Args:\n - name (str): Name to use for logger.\n\n Returns:\n - logging.Logger: a configured logging object\n \"\"\"\n logging.setLogRecordFactory(_log_record_context_injector)\n\n logger = logging.getLogger(name)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(\n context.config.logging.format, context.config.logging.datefmt\n )\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(context.config.logging.level)\n\n # we set the cloud handler to DEBUG level\n # but the handler itself will dynamically respond\n # to the configured level in the emit() method call\n cloud_handler = CloudHandler()\n cloud_handler.setLevel(\"DEBUG\")\n logger.addHandler(cloud_handler)\n return logger\n\n\ndef configure_logging(testing: bool = False) -> logging.Logger:\n \"\"\"\n Creates a \"prefect\" root logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Args:\n - testing (bool, optional): a boolean specifying whether this configuration\n is for testing purposes only; this helps us isolate any global state during testing\n by configuring a \"prefect-test-logger\" instead of the standard \"prefect\" logger\n\n Returns:\n - logging.Logger: a configured logging object\n \"\"\"\n name = \"prefect-test-logger\" if testing else \"prefect\"\n return _create_logger(name)\n\n\ncontext.logger = prefect_logger = configure_logging()\n\n\ndef configure_extra_loggers() -> 
None:\n \"\"\"\n Creates a \"Prefect\" configured logger for all strings in extra_loggers config list.\n The logging.extra_loggers config defaults to an empty list.\n \"\"\"\n loggers = context.config.logging.get(\"extra_loggers\", [])\n for l in loggers:\n _create_logger(l)\n\n\nconfigure_extra_loggers()\n\n\ndef create_diagnostic_logger(name: str) -> logging.Logger:\n \"\"\"\n Create a logger that does not use the `CloudHandler` but preserves all other\n Prefect logging configuration. For diagnostic / debugging / internal use only.\n \"\"\"\n logger = _create_logger(name)\n logger.handlers = [h for h in logger.handlers if not isinstance(h, CloudHandler)]\n return logger\n\n\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Returns a \"prefect\" logger.\n\n Args:\n - name (str): if `None`, the root Prefect logger is returned. If provided, a child\n logger of the name `\"prefect.{name}\"` is returned. The child logger inherits\n the root logger's settings.\n\n Returns:\n - logging.Logger: a configured logging object with the appropriate name\n \"\"\"\n\n if name is None:\n return prefect_logger\n else:\n return prefect_logger.getChild(name)\n\n\nclass RedirectToLog:\n \"\"\"\n Custom redirect of stdout messages to logs\n\n Args:\n - logger (logging.Logger, optional): an optional logger to redirect stdout. If\n not provided a logger names `stdout` will be created.\n \"\"\"\n\n def __init__(self, logger: logging.Logger = None) -> None:\n self.stdout_logger = logger or get_logger(\"stdout\")\n\n def write(self, s: str) -> None:\n \"\"\"\n Write message from stdout to a prefect logger.\n Note: blank newlines will not be logged.\n\n Args:\n s (str): the message from stdout to be logged\n \"\"\"\n if not isinstance(s, str):\n # stdout is expecting str\n raise TypeError(f\"string argument expected, got {type(s)}\")\n\n if s.strip():\n self.stdout_logger.info(s)\n\n def flush(self) -> None:\n \"\"\"\n Implemented flush operation for logger handler\n \"\"\"\n for handler in self.stdout_logger.handlers:\n handler.flush()\n", "path": "src/prefect/utilities/logging.py"}]}
| 4,026 | 111 |
gh_patches_debug_43131
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-5940
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MN parser down
## Description
The Mongolia parser is down, but the bot hasn't opened an issue. I am opening this one to alert the maintainers.
It seems this time the parser is down because of a change in the JSON that was being parsed:
> raise ParserException(
parsers.lib.exceptions.ParserException: MN.py Parser: Fetched keys from source dict_keys(['date', 'syssum', 'tpp', 'sumnar', 'sums', 'energyimport', 't']) do not match expected keys dict_values(['date', 'syssum', 'sumnar', 'sums', 'energyimport', 't']).
A new key called tpp (thermal power plants?) has been added. The value of this new key doesn't match the previously calculated unknown production (so tpp plus the other keys don't add up to consumption). What should be done to fix this? It seems an unknown source is being added.
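For illustration, the strict equality check in `parse_json` fails purely because of that extra key. The snippet below is a standalone sketch with the key sets copied from the exception message above, not parser code:

```python
# Standalone sketch of the failing check, with keys taken from the exception above.
expected = {"date", "syssum", "sumnar", "sums", "energyimport", "t"}
fetched = {"date", "syssum", "tpp", "sumnar", "sums", "energyimport", "t"}

print(expected != fetched)  # True -> ParserException is raised
print(fetched - expected)   # {'tpp'} -> the only unexpected key
```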
By the way, a bit off-topic, but I have noticed that the Mongolia parser outputs global exchange data. We currently get the exchange with Russia from its parser, so we could calculate the exchange with China by subtracting the other exchange. Is this possible?
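To illustrate the subtraction idea (the function name and the sample numbers below are assumptions for the sketch, not part of any existing parser):

```python
# Illustrative sketch: estimate the MN->CN exchange by removing the known
# RU->MN flow from the total import reported by the Mongolian NDC endpoint.
def estimate_cn_exchange(total_import_mw: float, ru_to_mn_mw: float) -> float:
    # positive values mean import into Mongolia
    return total_import_mw - ru_to_mn_mw


print(estimate_cn_exchange(49.58, 30.0))  # 19.58, with made-up numbers
```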
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/MN.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from datetime import datetime
4 from logging import Logger, getLogger
5 from typing import Any
6 from zoneinfo import ZoneInfo
7
8 from requests import Response, Session
9
10 from electricitymap.contrib.config import ZoneKey
11 from electricitymap.contrib.lib.models.event_lists import (
12 ProductionBreakdownList,
13 TotalConsumptionList,
14 )
15 from electricitymap.contrib.lib.models.events import ProductionMix
16 from parsers.lib.exceptions import ParserException
17
18 NDC_GENERATION = "https://disnews.energy.mn/test/convert.php"
19 TZ = ZoneInfo("Asia/Ulaanbaatar") # UTC+8
20
21 # Query fields to web API fields
22 JSON_QUERY_TO_SRC = {
23 "time": "date",
24 "consumptionMW": "syssum",
25 "solarMW": "sumnar",
26 "windMW": "sums",
27 "importMW": "energyimport", # positive = import
28 "temperatureC": "t", # current temperature
29 }
30
31
32 def parse_json(web_json: dict) -> dict[str, Any]:
33 """
34 Parse the fetched JSON data to our query format according to JSON_QUERY_TO_SRC.
35 Example of expected JSON format present at URL:
36 {"date":"2023-06-27 18:00:00","syssum":"869.37","sumnar":42.34,"sums":119.79,"energyimport":"49.58","t":"17"}
37 """
38
39 # Validate first if keys in fetched dict match expected keys
40 if set(JSON_QUERY_TO_SRC.values()) != set(web_json.keys()):
41 raise ParserException(
42 parser="MN.py",
43 message=f"Fetched keys from source {web_json.keys()} do not match expected keys {JSON_QUERY_TO_SRC.values()}.",
44 )
45
46 if None in web_json.values():
47 raise ParserException(
48 parser="MN.py",
49 message=f"Fetched values contain null. Fetched data: {web_json}.",
50 )
51
52 # Then we can safely parse them
53 query_data = dict()
54 for query_key, src_key in JSON_QUERY_TO_SRC.items():
55 if query_key == "time":
56 # convert to datetime
57 query_data[query_key] = datetime.fromisoformat(web_json[src_key]).replace(
58 tzinfo=TZ
59 )
60 else:
61 # or convert to float, might also be string
62 query_data[query_key] = float(web_json[src_key])
63
64 return query_data
65
66
67 def query(session: Session) -> dict[str, Any]:
68 """
69 Query the JSON endpoint and parse it.
70 """
71
72 target_response: Response = session.get(NDC_GENERATION)
73
74 if not target_response.ok:
75 raise ParserException(
76 parser="MN.py",
77 message=f"Data request did not succeed: {target_response.status_code}",
78 )
79
80 # Read as JSON
81 response_json = target_response.json()
82 query_result = parse_json(response_json)
83
84 return query_result
85
86
87 def fetch_production(
88 zone_key: ZoneKey,
89 session: Session = Session(),
90 target_datetime: datetime | None = None,
91 logger: Logger = getLogger(__name__),
92 ):
93 if target_datetime:
94 raise NotImplementedError("This parser is not yet able to parse past dates.")
95
96 query_data = query(session)
97
98 # Calculated 'unknown' production from available data (consumption, import, solar, wind).
99 # 'unknown' consists of 92.8% coal, 5.8% oil and 1.4% hydro as per 2020; sources: IEA and IRENA statistics.
100 query_data["unknownMW"] = round(
101 query_data["consumptionMW"]
102 - query_data["importMW"]
103 - query_data["solarMW"]
104 - query_data["windMW"],
105 13,
106 )
107
108 prod_mix = ProductionMix(
109 solar=query_data["solarMW"],
110 wind=query_data["windMW"],
111 unknown=query_data["unknownMW"],
112 )
113
114 prod_breakdown_list = ProductionBreakdownList(logger)
115 prod_breakdown_list.append(
116 datetime=query_data["time"],
117 zoneKey=zone_key,
118 source="https://ndc.energy.mn/",
119 production=prod_mix,
120 )
121
122 return prod_breakdown_list.to_list()
123
124
125 def fetch_consumption(
126 zone_key: ZoneKey,
127 session: Session = Session(),
128 target_datetime: datetime | None = None,
129 logger: Logger = getLogger(__name__),
130 ):
131 if target_datetime:
132 raise NotImplementedError("This parser is not yet able to parse past dates.")
133
134 query_data = query(session)
135
136 consumption_list = TotalConsumptionList(logger)
137 consumption_list.append(
138 datetime=query_data["time"],
139 zoneKey=zone_key,
140 consumption=query_data["consumptionMW"],
141 source="https://ndc.energy.mn/",
142 )
143
144 return consumption_list.to_list()
145
146
147 if __name__ == "__main__":
148 print("fetch_production() ->")
149 print(fetch_production(ZoneKey("MN")))
150 print("fetch_consumption() ->")
151 print(fetch_consumption(ZoneKey("MN")))
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/MN.py b/parsers/MN.py
--- a/parsers/MN.py
+++ b/parsers/MN.py
@@ -29,18 +29,18 @@
}
-def parse_json(web_json: dict) -> dict[str, Any]:
+def parse_json(web_json: dict, logger: Logger, zone_key: ZoneKey) -> dict[str, Any]:
"""
Parse the fetched JSON data to our query format according to JSON_QUERY_TO_SRC.
Example of expected JSON format present at URL:
- {"date":"2023-06-27 18:00:00","syssum":"869.37","sumnar":42.34,"sums":119.79,"energyimport":"49.58","t":"17"}
+ {"date":"2023-06-27 18:00:00","syssum":"869.37","sumnar":42.34,"sums":119.79,"energyimport":"49.58"}
"""
# Validate first if keys in fetched dict match expected keys
if set(JSON_QUERY_TO_SRC.values()) != set(web_json.keys()):
- raise ParserException(
- parser="MN.py",
- message=f"Fetched keys from source {web_json.keys()} do not match expected keys {JSON_QUERY_TO_SRC.values()}.",
+ logger.error(
+ msg=f"Fetched keys from source {web_json.keys()} do not match expected keys {JSON_QUERY_TO_SRC.values()}.",
+ extra={"zone_key": zone_key, "parser": "MN.py"},
)
if None in web_json.values():
@@ -52,7 +52,7 @@
# Then we can safely parse them
query_data = dict()
for query_key, src_key in JSON_QUERY_TO_SRC.items():
- if query_key == "time":
+ if "time" in query_key:
# convert to datetime
query_data[query_key] = datetime.fromisoformat(web_json[src_key]).replace(
tzinfo=TZ
@@ -64,7 +64,7 @@
return query_data
-def query(session: Session) -> dict[str, Any]:
+def query(session: Session, logger: Logger, zone_key: ZoneKey) -> dict[str, Any]:
"""
Query the JSON endpoint and parse it.
"""
@@ -79,7 +79,7 @@
# Read as JSON
response_json = target_response.json()
- query_result = parse_json(response_json)
+ query_result = parse_json(response_json, logger, zone_key)
return query_result
@@ -93,11 +93,11 @@
if target_datetime:
raise NotImplementedError("This parser is not yet able to parse past dates.")
- query_data = query(session)
+ query_data = query(session, logger, zone_key)
- # Calculated 'unknown' production from available data (consumption, import, solar, wind).
+ # Calculated 'unknown' production from available data (consumption, import, solar, wind, tpp).
# 'unknown' consists of 92.8% coal, 5.8% oil and 1.4% hydro as per 2020; sources: IEA and IRENA statistics.
- query_data["unknownMW"] = round(
+ query_data["leftoverMW"] = round(
query_data["consumptionMW"]
- query_data["importMW"]
- query_data["solarMW"]
@@ -105,11 +105,10 @@
13,
)
- prod_mix = ProductionMix(
- solar=query_data["solarMW"],
- wind=query_data["windMW"],
- unknown=query_data["unknownMW"],
- )
+ prod_mix = ProductionMix()
+ prod_mix.add_value("solar", query_data["solarMW"])
+ prod_mix.add_value("wind", query_data["windMW"])
+ prod_mix.add_value("unknown", query_data["leftoverMW"])
prod_breakdown_list = ProductionBreakdownList(logger)
prod_breakdown_list.append(
@@ -131,7 +130,7 @@
if target_datetime:
raise NotImplementedError("This parser is not yet able to parse past dates.")
- query_data = query(session)
+ query_data = query(session, logger, zone_key)
consumption_list = TotalConsumptionList(logger)
consumption_list.append(
|
{"golden_diff": "diff --git a/parsers/MN.py b/parsers/MN.py\n--- a/parsers/MN.py\n+++ b/parsers/MN.py\n@@ -29,18 +29,18 @@\n }\n \n \n-def parse_json(web_json: dict) -> dict[str, Any]:\n+def parse_json(web_json: dict, logger: Logger, zone_key: ZoneKey) -> dict[str, Any]:\n \"\"\"\n Parse the fetched JSON data to our query format according to JSON_QUERY_TO_SRC.\n Example of expected JSON format present at URL:\n- {\"date\":\"2023-06-27 18:00:00\",\"syssum\":\"869.37\",\"sumnar\":42.34,\"sums\":119.79,\"energyimport\":\"49.58\",\"t\":\"17\"}\n+ {\"date\":\"2023-06-27 18:00:00\",\"syssum\":\"869.37\",\"sumnar\":42.34,\"sums\":119.79,\"energyimport\":\"49.58\"}\n \"\"\"\n \n # Validate first if keys in fetched dict match expected keys\n if set(JSON_QUERY_TO_SRC.values()) != set(web_json.keys()):\n- raise ParserException(\n- parser=\"MN.py\",\n- message=f\"Fetched keys from source {web_json.keys()} do not match expected keys {JSON_QUERY_TO_SRC.values()}.\",\n+ logger.error(\n+ msg=f\"Fetched keys from source {web_json.keys()} do not match expected keys {JSON_QUERY_TO_SRC.values()}.\",\n+ extra={\"zone_key\": zone_key, \"parser\": \"MN.py\"},\n )\n \n if None in web_json.values():\n@@ -52,7 +52,7 @@\n # Then we can safely parse them\n query_data = dict()\n for query_key, src_key in JSON_QUERY_TO_SRC.items():\n- if query_key == \"time\":\n+ if \"time\" in query_key:\n # convert to datetime\n query_data[query_key] = datetime.fromisoformat(web_json[src_key]).replace(\n tzinfo=TZ\n@@ -64,7 +64,7 @@\n return query_data\n \n \n-def query(session: Session) -> dict[str, Any]:\n+def query(session: Session, logger: Logger, zone_key: ZoneKey) -> dict[str, Any]:\n \"\"\"\n Query the JSON endpoint and parse it.\n \"\"\"\n@@ -79,7 +79,7 @@\n \n # Read as JSON\n response_json = target_response.json()\n- query_result = parse_json(response_json)\n+ query_result = parse_json(response_json, logger, zone_key)\n \n return query_result\n \n@@ -93,11 +93,11 @@\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates.\")\n \n- query_data = query(session)\n+ query_data = query(session, logger, zone_key)\n \n- # Calculated 'unknown' production from available data (consumption, import, solar, wind).\n+ # Calculated 'unknown' production from available data (consumption, import, solar, wind, tpp).\n # 'unknown' consists of 92.8% coal, 5.8% oil and 1.4% hydro as per 2020; sources: IEA and IRENA statistics.\n- query_data[\"unknownMW\"] = round(\n+ query_data[\"leftoverMW\"] = round(\n query_data[\"consumptionMW\"]\n - query_data[\"importMW\"]\n - query_data[\"solarMW\"]\n@@ -105,11 +105,10 @@\n 13,\n )\n \n- prod_mix = ProductionMix(\n- solar=query_data[\"solarMW\"],\n- wind=query_data[\"windMW\"],\n- unknown=query_data[\"unknownMW\"],\n- )\n+ prod_mix = ProductionMix()\n+ prod_mix.add_value(\"solar\", query_data[\"solarMW\"])\n+ prod_mix.add_value(\"wind\", query_data[\"windMW\"])\n+ prod_mix.add_value(\"unknown\", query_data[\"leftoverMW\"])\n \n prod_breakdown_list = ProductionBreakdownList(logger)\n prod_breakdown_list.append(\n@@ -131,7 +130,7 @@\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates.\")\n \n- query_data = query(session)\n+ query_data = query(session, logger, zone_key)\n \n consumption_list = TotalConsumptionList(logger)\n consumption_list.append(\n", "issue": "MN parser down\n## Description\r\nMongolia parser is down, but the bot hasn't open an issue. 
I am opening this one to alert the maintainers.\r\nIt seems this time the parser is down because of a change in the json that was being parsed:\r\n> raise ParserException(\r\nparsers.lib.exceptions.ParserException: MN.py Parser: Fetched keys from source dict_keys(['date', 'syssum', 'tpp', 'sumnar', 'sums', 'energyimport', 't']) do not match expected keys dict_values(['date', 'syssum', 'sumnar', 'sums', 'energyimport', 't']).\r\n\r\nA new key called tpp (thermal power plants?) has being added. The value of this new key doesn't match the previously calculated unknown production (so tpp plus other keys don't add up to consumption). What should be done to fix this? It seems an unknown source is being added.\r\n\r\nBy the way, a bit off-topic, but I have noticed that the Mongolia parser outputs global exchange data. We currently get the exchange with Russia from its parser, so we could calculate the exchange with China by substracting the other exchange. Is this possible?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom datetime import datetime\nfrom logging import Logger, getLogger\nfrom typing import Any\nfrom zoneinfo import ZoneInfo\n\nfrom requests import Response, Session\n\nfrom electricitymap.contrib.config import ZoneKey\nfrom electricitymap.contrib.lib.models.event_lists import (\n ProductionBreakdownList,\n TotalConsumptionList,\n)\nfrom electricitymap.contrib.lib.models.events import ProductionMix\nfrom parsers.lib.exceptions import ParserException\n\nNDC_GENERATION = \"https://disnews.energy.mn/test/convert.php\"\nTZ = ZoneInfo(\"Asia/Ulaanbaatar\") # UTC+8\n\n# Query fields to web API fields\nJSON_QUERY_TO_SRC = {\n \"time\": \"date\",\n \"consumptionMW\": \"syssum\",\n \"solarMW\": \"sumnar\",\n \"windMW\": \"sums\",\n \"importMW\": \"energyimport\", # positive = import\n \"temperatureC\": \"t\", # current temperature\n}\n\n\ndef parse_json(web_json: dict) -> dict[str, Any]:\n \"\"\"\n Parse the fetched JSON data to our query format according to JSON_QUERY_TO_SRC.\n Example of expected JSON format present at URL:\n {\"date\":\"2023-06-27 18:00:00\",\"syssum\":\"869.37\",\"sumnar\":42.34,\"sums\":119.79,\"energyimport\":\"49.58\",\"t\":\"17\"}\n \"\"\"\n\n # Validate first if keys in fetched dict match expected keys\n if set(JSON_QUERY_TO_SRC.values()) != set(web_json.keys()):\n raise ParserException(\n parser=\"MN.py\",\n message=f\"Fetched keys from source {web_json.keys()} do not match expected keys {JSON_QUERY_TO_SRC.values()}.\",\n )\n\n if None in web_json.values():\n raise ParserException(\n parser=\"MN.py\",\n message=f\"Fetched values contain null. 
Fetched data: {web_json}.\",\n )\n\n # Then we can safely parse them\n query_data = dict()\n for query_key, src_key in JSON_QUERY_TO_SRC.items():\n if query_key == \"time\":\n # convert to datetime\n query_data[query_key] = datetime.fromisoformat(web_json[src_key]).replace(\n tzinfo=TZ\n )\n else:\n # or convert to float, might also be string\n query_data[query_key] = float(web_json[src_key])\n\n return query_data\n\n\ndef query(session: Session) -> dict[str, Any]:\n \"\"\"\n Query the JSON endpoint and parse it.\n \"\"\"\n\n target_response: Response = session.get(NDC_GENERATION)\n\n if not target_response.ok:\n raise ParserException(\n parser=\"MN.py\",\n message=f\"Data request did not succeed: {target_response.status_code}\",\n )\n\n # Read as JSON\n response_json = target_response.json()\n query_result = parse_json(response_json)\n\n return query_result\n\n\ndef fetch_production(\n zone_key: ZoneKey,\n session: Session = Session(),\n target_datetime: datetime | None = None,\n logger: Logger = getLogger(__name__),\n):\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates.\")\n\n query_data = query(session)\n\n # Calculated 'unknown' production from available data (consumption, import, solar, wind).\n # 'unknown' consists of 92.8% coal, 5.8% oil and 1.4% hydro as per 2020; sources: IEA and IRENA statistics.\n query_data[\"unknownMW\"] = round(\n query_data[\"consumptionMW\"]\n - query_data[\"importMW\"]\n - query_data[\"solarMW\"]\n - query_data[\"windMW\"],\n 13,\n )\n\n prod_mix = ProductionMix(\n solar=query_data[\"solarMW\"],\n wind=query_data[\"windMW\"],\n unknown=query_data[\"unknownMW\"],\n )\n\n prod_breakdown_list = ProductionBreakdownList(logger)\n prod_breakdown_list.append(\n datetime=query_data[\"time\"],\n zoneKey=zone_key,\n source=\"https://ndc.energy.mn/\",\n production=prod_mix,\n )\n\n return prod_breakdown_list.to_list()\n\n\ndef fetch_consumption(\n zone_key: ZoneKey,\n session: Session = Session(),\n target_datetime: datetime | None = None,\n logger: Logger = getLogger(__name__),\n):\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates.\")\n\n query_data = query(session)\n\n consumption_list = TotalConsumptionList(logger)\n consumption_list.append(\n datetime=query_data[\"time\"],\n zoneKey=zone_key,\n consumption=query_data[\"consumptionMW\"],\n source=\"https://ndc.energy.mn/\",\n )\n\n return consumption_list.to_list()\n\n\nif __name__ == \"__main__\":\n print(\"fetch_production() ->\")\n print(fetch_production(ZoneKey(\"MN\")))\n print(\"fetch_consumption() ->\")\n print(fetch_consumption(ZoneKey(\"MN\")))\n", "path": "parsers/MN.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom datetime import datetime\nfrom logging import Logger, getLogger\nfrom typing import Any\nfrom zoneinfo import ZoneInfo\n\nfrom requests import Response, Session\n\nfrom electricitymap.contrib.config import ZoneKey\nfrom electricitymap.contrib.lib.models.event_lists import (\n ProductionBreakdownList,\n TotalConsumptionList,\n)\nfrom electricitymap.contrib.lib.models.events import ProductionMix\nfrom parsers.lib.exceptions import ParserException\n\nNDC_GENERATION = \"https://disnews.energy.mn/test/convert.php\"\nTZ = ZoneInfo(\"Asia/Ulaanbaatar\") # UTC+8\n\n# Query fields to web API fields\nJSON_QUERY_TO_SRC = {\n \"time\": \"date\",\n \"consumptionMW\": \"syssum\",\n \"solarMW\": \"sumnar\",\n \"windMW\": \"sums\",\n \"importMW\": \"energyimport\", # positive = import\n 
\"temperatureC\": \"t\", # current temperature\n}\n\n\ndef parse_json(web_json: dict, logger: Logger, zone_key: ZoneKey) -> dict[str, Any]:\n \"\"\"\n Parse the fetched JSON data to our query format according to JSON_QUERY_TO_SRC.\n Example of expected JSON format present at URL:\n {\"date\":\"2023-06-27 18:00:00\",\"syssum\":\"869.37\",\"sumnar\":42.34,\"sums\":119.79,\"energyimport\":\"49.58\"}\n \"\"\"\n\n # Validate first if keys in fetched dict match expected keys\n if set(JSON_QUERY_TO_SRC.values()) != set(web_json.keys()):\n logger.error(\n msg=f\"Fetched keys from source {web_json.keys()} do not match expected keys {JSON_QUERY_TO_SRC.values()}.\",\n extra={\"zone_key\": zone_key, \"parser\": \"MN.py\"},\n )\n\n if None in web_json.values():\n raise ParserException(\n parser=\"MN.py\",\n message=f\"Fetched values contain null. Fetched data: {web_json}.\",\n )\n\n # Then we can safely parse them\n query_data = dict()\n for query_key, src_key in JSON_QUERY_TO_SRC.items():\n if \"time\" in query_key:\n # convert to datetime\n query_data[query_key] = datetime.fromisoformat(web_json[src_key]).replace(\n tzinfo=TZ\n )\n else:\n # or convert to float, might also be string\n query_data[query_key] = float(web_json[src_key])\n\n return query_data\n\n\ndef query(session: Session, logger: Logger, zone_key: ZoneKey) -> dict[str, Any]:\n \"\"\"\n Query the JSON endpoint and parse it.\n \"\"\"\n\n target_response: Response = session.get(NDC_GENERATION)\n\n if not target_response.ok:\n raise ParserException(\n parser=\"MN.py\",\n message=f\"Data request did not succeed: {target_response.status_code}\",\n )\n\n # Read as JSON\n response_json = target_response.json()\n query_result = parse_json(response_json, logger, zone_key)\n\n return query_result\n\n\ndef fetch_production(\n zone_key: ZoneKey,\n session: Session = Session(),\n target_datetime: datetime | None = None,\n logger: Logger = getLogger(__name__),\n):\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates.\")\n\n query_data = query(session, logger, zone_key)\n\n # Calculated 'unknown' production from available data (consumption, import, solar, wind, tpp).\n # 'unknown' consists of 92.8% coal, 5.8% oil and 1.4% hydro as per 2020; sources: IEA and IRENA statistics.\n query_data[\"leftoverMW\"] = round(\n query_data[\"consumptionMW\"]\n - query_data[\"importMW\"]\n - query_data[\"solarMW\"]\n - query_data[\"windMW\"],\n 13,\n )\n\n prod_mix = ProductionMix()\n prod_mix.add_value(\"solar\", query_data[\"solarMW\"])\n prod_mix.add_value(\"wind\", query_data[\"windMW\"])\n prod_mix.add_value(\"unknown\", query_data[\"leftoverMW\"])\n\n prod_breakdown_list = ProductionBreakdownList(logger)\n prod_breakdown_list.append(\n datetime=query_data[\"time\"],\n zoneKey=zone_key,\n source=\"https://ndc.energy.mn/\",\n production=prod_mix,\n )\n\n return prod_breakdown_list.to_list()\n\n\ndef fetch_consumption(\n zone_key: ZoneKey,\n session: Session = Session(),\n target_datetime: datetime | None = None,\n logger: Logger = getLogger(__name__),\n):\n if target_datetime:\n raise NotImplementedError(\"This parser is not yet able to parse past dates.\")\n\n query_data = query(session, logger, zone_key)\n\n consumption_list = TotalConsumptionList(logger)\n consumption_list.append(\n datetime=query_data[\"time\"],\n zoneKey=zone_key,\n consumption=query_data[\"consumptionMW\"],\n source=\"https://ndc.energy.mn/\",\n )\n\n return consumption_list.to_list()\n\n\nif __name__ == \"__main__\":\n 
print(\"fetch_production() ->\")\n print(fetch_production(ZoneKey(\"MN\")))\n print(\"fetch_consumption() ->\")\n print(fetch_consumption(ZoneKey(\"MN\")))\n", "path": "parsers/MN.py"}]}
| 1,965 | 1,006 |
gh_patches_debug_18550
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-13076
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Webauthn allows me to register my phone as security key but not login with it
<!--
NOTE: This issue should be for problems with PyPI itself, including:
* pypi.org
* test.pypi.org
* files.pythonhosted.org
This issue should NOT be for a project installed from PyPI. If you are
having an issue with a specific package, you should reach out to the
maintainers of that project directly instead.
Furthermore, this issue should NOT be for any non-PyPI properties (like
python.org, docs.python.org, etc.)
If your problem is related to search (a new or updated project doesn't
appear in the PyPI search results), please wait for a couple of hours
and check again before reporting it. The search index may take some
time to be updated.
-->
**Describe the bug**
WebAuthn allows me to register my phone as a security key but not log in with it.
**Expected behavior**
After closing the native security key prompt, a Chrome prompt like this should pop up and allow me to select my phone to use as a security key:

**To Reproduce**
Add an Android phone as a security key by visiting your profile, clicking "add security key", and following the expected behavior.
Then log out and try to log in, following the same expected behavior.
**My Platform**
Windows 10 and Chrome Version 110.0.5481.177 (Official Build) (64-bit)
**Additional context**
None
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/utils/webauthn.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import base64
14 import json
15
16 import webauthn as pywebauthn
17
18 from webauthn.helpers import base64url_to_bytes, generate_challenge
19 from webauthn.helpers.exceptions import (
20 InvalidAuthenticationResponse,
21 InvalidRegistrationResponse,
22 )
23 from webauthn.helpers.options_to_json import options_to_json
24 from webauthn.helpers.structs import (
25 AttestationConveyancePreference,
26 AuthenticationCredential,
27 AuthenticatorSelectionCriteria,
28 AuthenticatorTransport,
29 PublicKeyCredentialDescriptor,
30 RegistrationCredential,
31 UserVerificationRequirement,
32 )
33
34
35 class AuthenticationRejectedError(Exception):
36 pass
37
38
39 class RegistrationRejectedError(Exception):
40 pass
41
42
43 def _get_webauthn_user_public_key_credential_descriptors(user, *, rp_id):
44 """
45 Returns a webauthn.WebAuthnUser instance corresponding
46 to the given user model, with properties suitable for
47 usage within the webauthn API.
48 """
49 return [
50 PublicKeyCredentialDescriptor(
51 id=base64url_to_bytes(credential.credential_id),
52 transports=[
53 AuthenticatorTransport.USB,
54 AuthenticatorTransport.NFC,
55 AuthenticatorTransport.BLE,
56 AuthenticatorTransport.INTERNAL,
57 ],
58 )
59 for credential in user.webauthn
60 ]
61
62
63 def _get_webauthn_user_public_keys(user, *, rp_id):
64 return [
65 (
66 base64url_to_bytes(credential.public_key),
67 credential.sign_count,
68 )
69 for credential in user.webauthn
70 ]
71
72
73 def _webauthn_b64encode(source):
74 return base64.urlsafe_b64encode(source).rstrip(b"=")
75
76
77 def generate_webauthn_challenge():
78 """
79 Returns a random challenge suitable for use within
80 Webauthn's credential and configuration option objects.
81
82 See: https://w3c.github.io/webauthn/#cryptographic-challenges
83 """
84 return generate_challenge()
85
86
87 def get_credential_options(user, *, challenge, rp_name, rp_id):
88 """
89 Returns a dictionary of options for credential creation
90 on the client side.
91 """
92 _authenticator_selection = AuthenticatorSelectionCriteria()
93 _authenticator_selection.user_verification = UserVerificationRequirement.DISCOURAGED
94 options = pywebauthn.generate_registration_options(
95 rp_id=rp_id,
96 rp_name=rp_name,
97 user_id=str(user.id),
98 user_name=user.username,
99 user_display_name=user.name or user.username,
100 challenge=challenge,
101 attestation=AttestationConveyancePreference.NONE,
102 authenticator_selection=_authenticator_selection,
103 )
104 return json.loads(options_to_json(options))
105
106
107 def get_assertion_options(user, *, challenge, rp_id):
108 """
109 Returns a dictionary of options for assertion retrieval
110 on the client side.
111 """
112 options = pywebauthn.generate_authentication_options(
113 rp_id=rp_id,
114 challenge=challenge,
115 allow_credentials=_get_webauthn_user_public_key_credential_descriptors(
116 user, rp_id=rp_id
117 ),
118 user_verification=UserVerificationRequirement.DISCOURAGED,
119 )
120 return json.loads(options_to_json(options))
121
122
123 def verify_registration_response(response, challenge, *, rp_id, origin):
124 """
125 Validates the challenge and attestation information
126 sent from the client during device registration.
127
128 Returns a WebAuthnCredential on success.
129 Raises RegistrationRejectedError on failire.
130 """
131 # NOTE: We re-encode the challenge below, because our
132 # response's clientData.challenge is encoded twice:
133 # first for the entire clientData payload, and then again
134 # for the individual challenge.
135 encoded_challenge = _webauthn_b64encode(challenge)
136 try:
137 _credential = RegistrationCredential.parse_raw(response)
138 return pywebauthn.verify_registration_response(
139 credential=_credential,
140 expected_challenge=encoded_challenge,
141 expected_rp_id=rp_id,
142 expected_origin=origin,
143 require_user_verification=False,
144 )
145 except InvalidRegistrationResponse as e:
146 raise RegistrationRejectedError(str(e))
147
148
149 def verify_assertion_response(assertion, *, challenge, user, origin, rp_id):
150 """
151 Validates the challenge and assertion information
152 sent from the client during authentication.
153
154 Returns an updated signage count on success.
155 Raises AuthenticationRejectedError on failure.
156 """
157 # NOTE: We re-encode the challenge below, because our
158 # response's clientData.challenge is encoded twice:
159 # first for the entire clientData payload, and then again
160 # for the individual challenge.
161 encoded_challenge = _webauthn_b64encode(challenge)
162 webauthn_user_public_keys = _get_webauthn_user_public_keys(user, rp_id=rp_id)
163
164 for public_key, current_sign_count in webauthn_user_public_keys:
165 try:
166 _credential = AuthenticationCredential.parse_raw(assertion)
167 return pywebauthn.verify_authentication_response(
168 credential=_credential,
169 expected_challenge=encoded_challenge,
170 expected_rp_id=rp_id,
171 expected_origin=origin,
172 credential_public_key=public_key,
173 credential_current_sign_count=current_sign_count,
174 require_user_verification=False,
175 )
176 except InvalidAuthenticationResponse:
177 pass
178
179 # If we exit the loop, then we've failed to verify the assertion against
180 # any of the user's WebAuthn credentials. Fail.
181 raise AuthenticationRejectedError("Invalid WebAuthn credential")
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/warehouse/utils/webauthn.py b/warehouse/utils/webauthn.py
--- a/warehouse/utils/webauthn.py
+++ b/warehouse/utils/webauthn.py
@@ -25,7 +25,6 @@
AttestationConveyancePreference,
AuthenticationCredential,
AuthenticatorSelectionCriteria,
- AuthenticatorTransport,
PublicKeyCredentialDescriptor,
RegistrationCredential,
UserVerificationRequirement,
@@ -47,15 +46,7 @@
usage within the webauthn API.
"""
return [
- PublicKeyCredentialDescriptor(
- id=base64url_to_bytes(credential.credential_id),
- transports=[
- AuthenticatorTransport.USB,
- AuthenticatorTransport.NFC,
- AuthenticatorTransport.BLE,
- AuthenticatorTransport.INTERNAL,
- ],
- )
+ PublicKeyCredentialDescriptor(id=base64url_to_bytes(credential.credential_id))
for credential in user.webauthn
]
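The change above drops the hard-coded `transports` list. Below is a minimal sketch of the resulting descriptor, using the same py_webauthn helpers already imported in `webauthn.py` (the credential id is a made-up base64url value): with no `transports` list, the browser is free to offer any transport it supports, including the hybrid phone-as-security-key flow from the issue.

```python
# Minimal sketch: build a descriptor without constraining transports, so the
# browser can still offer hybrid (phone) authenticators at login time.
from webauthn.helpers import base64url_to_bytes
from webauthn.helpers.structs import PublicKeyCredentialDescriptor

descriptor = PublicKeyCredentialDescriptor(
    id=base64url_to_bytes("dGVzdC1jcmVkZW50aWFsLWlk"),  # made-up credential id
)
print(descriptor)
```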
|
{"golden_diff": "diff --git a/warehouse/utils/webauthn.py b/warehouse/utils/webauthn.py\n--- a/warehouse/utils/webauthn.py\n+++ b/warehouse/utils/webauthn.py\n@@ -25,7 +25,6 @@\n AttestationConveyancePreference,\n AuthenticationCredential,\n AuthenticatorSelectionCriteria,\n- AuthenticatorTransport,\n PublicKeyCredentialDescriptor,\n RegistrationCredential,\n UserVerificationRequirement,\n@@ -47,15 +46,7 @@\n usage within the webauthn API.\n \"\"\"\n return [\n- PublicKeyCredentialDescriptor(\n- id=base64url_to_bytes(credential.credential_id),\n- transports=[\n- AuthenticatorTransport.USB,\n- AuthenticatorTransport.NFC,\n- AuthenticatorTransport.BLE,\n- AuthenticatorTransport.INTERNAL,\n- ],\n- )\n+ PublicKeyCredentialDescriptor(id=base64url_to_bytes(credential.credential_id))\n for credential in user.webauthn\n ]\n", "issue": "Webauthn allows me to register my phone as security key but not login with it\n<!--\r\n NOTE: This issue should be for problems with PyPI itself, including:\r\n * pypi.org\r\n * test.pypi.org\r\n * files.pythonhosted.org\r\n\r\n This issue should NOT be for a project installed from PyPI. If you are\r\n having an issue with a specific package, you should reach out to the\r\n maintainers of that project directly instead.\r\n\r\n Furthermore, this issue should NOT be for any non-PyPI properties (like\r\n python.org, docs.python.org, etc.)\r\n\r\n If your problem is related to search (a new or updated project doesn't\r\n appear in the PyPI search results), please wait for a couple of hours\r\n and check again before reporting it. The search index may take some\r\n time to be updated.\r\n-->\r\n\r\n**Describe the bug**\r\nWebauthn allows me to register my phone as security key but not login with it\r\n\r\n**Expected behavior**\r\nAfter closing the native is security key prompt, A chrome prompt like this should pop up and allows me to select my phone to use as a security key\r\n\r\n\r\n\r\n**To Reproduce**\r\nAdd a android phone as a security key by visiting your profile and clicking add security key and follow the expected behavior\r\nThen logout and try to login with the same expected behaviour\r\n\r\n**My Platform**\r\nWindows 10 and chrome version Version 110.0.5481.177 (Official Build) (64-bit)\r\n\r\n**Additional context**\r\nNone\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport json\n\nimport webauthn as pywebauthn\n\nfrom webauthn.helpers import base64url_to_bytes, generate_challenge\nfrom webauthn.helpers.exceptions import (\n InvalidAuthenticationResponse,\n InvalidRegistrationResponse,\n)\nfrom webauthn.helpers.options_to_json import options_to_json\nfrom webauthn.helpers.structs import (\n AttestationConveyancePreference,\n AuthenticationCredential,\n AuthenticatorSelectionCriteria,\n AuthenticatorTransport,\n PublicKeyCredentialDescriptor,\n RegistrationCredential,\n UserVerificationRequirement,\n)\n\n\nclass AuthenticationRejectedError(Exception):\n pass\n\n\nclass 
RegistrationRejectedError(Exception):\n pass\n\n\ndef _get_webauthn_user_public_key_credential_descriptors(user, *, rp_id):\n \"\"\"\n Returns a webauthn.WebAuthnUser instance corresponding\n to the given user model, with properties suitable for\n usage within the webauthn API.\n \"\"\"\n return [\n PublicKeyCredentialDescriptor(\n id=base64url_to_bytes(credential.credential_id),\n transports=[\n AuthenticatorTransport.USB,\n AuthenticatorTransport.NFC,\n AuthenticatorTransport.BLE,\n AuthenticatorTransport.INTERNAL,\n ],\n )\n for credential in user.webauthn\n ]\n\n\ndef _get_webauthn_user_public_keys(user, *, rp_id):\n return [\n (\n base64url_to_bytes(credential.public_key),\n credential.sign_count,\n )\n for credential in user.webauthn\n ]\n\n\ndef _webauthn_b64encode(source):\n return base64.urlsafe_b64encode(source).rstrip(b\"=\")\n\n\ndef generate_webauthn_challenge():\n \"\"\"\n Returns a random challenge suitable for use within\n Webauthn's credential and configuration option objects.\n\n See: https://w3c.github.io/webauthn/#cryptographic-challenges\n \"\"\"\n return generate_challenge()\n\n\ndef get_credential_options(user, *, challenge, rp_name, rp_id):\n \"\"\"\n Returns a dictionary of options for credential creation\n on the client side.\n \"\"\"\n _authenticator_selection = AuthenticatorSelectionCriteria()\n _authenticator_selection.user_verification = UserVerificationRequirement.DISCOURAGED\n options = pywebauthn.generate_registration_options(\n rp_id=rp_id,\n rp_name=rp_name,\n user_id=str(user.id),\n user_name=user.username,\n user_display_name=user.name or user.username,\n challenge=challenge,\n attestation=AttestationConveyancePreference.NONE,\n authenticator_selection=_authenticator_selection,\n )\n return json.loads(options_to_json(options))\n\n\ndef get_assertion_options(user, *, challenge, rp_id):\n \"\"\"\n Returns a dictionary of options for assertion retrieval\n on the client side.\n \"\"\"\n options = pywebauthn.generate_authentication_options(\n rp_id=rp_id,\n challenge=challenge,\n allow_credentials=_get_webauthn_user_public_key_credential_descriptors(\n user, rp_id=rp_id\n ),\n user_verification=UserVerificationRequirement.DISCOURAGED,\n )\n return json.loads(options_to_json(options))\n\n\ndef verify_registration_response(response, challenge, *, rp_id, origin):\n \"\"\"\n Validates the challenge and attestation information\n sent from the client during device registration.\n\n Returns a WebAuthnCredential on success.\n Raises RegistrationRejectedError on failire.\n \"\"\"\n # NOTE: We re-encode the challenge below, because our\n # response's clientData.challenge is encoded twice:\n # first for the entire clientData payload, and then again\n # for the individual challenge.\n encoded_challenge = _webauthn_b64encode(challenge)\n try:\n _credential = RegistrationCredential.parse_raw(response)\n return pywebauthn.verify_registration_response(\n credential=_credential,\n expected_challenge=encoded_challenge,\n expected_rp_id=rp_id,\n expected_origin=origin,\n require_user_verification=False,\n )\n except InvalidRegistrationResponse as e:\n raise RegistrationRejectedError(str(e))\n\n\ndef verify_assertion_response(assertion, *, challenge, user, origin, rp_id):\n \"\"\"\n Validates the challenge and assertion information\n sent from the client during authentication.\n\n Returns an updated signage count on success.\n Raises AuthenticationRejectedError on failure.\n \"\"\"\n # NOTE: We re-encode the challenge below, because our\n # response's clientData.challenge is 
encoded twice:\n # first for the entire clientData payload, and then again\n # for the individual challenge.\n encoded_challenge = _webauthn_b64encode(challenge)\n webauthn_user_public_keys = _get_webauthn_user_public_keys(user, rp_id=rp_id)\n\n for public_key, current_sign_count in webauthn_user_public_keys:\n try:\n _credential = AuthenticationCredential.parse_raw(assertion)\n return pywebauthn.verify_authentication_response(\n credential=_credential,\n expected_challenge=encoded_challenge,\n expected_rp_id=rp_id,\n expected_origin=origin,\n credential_public_key=public_key,\n credential_current_sign_count=current_sign_count,\n require_user_verification=False,\n )\n except InvalidAuthenticationResponse:\n pass\n\n # If we exit the loop, then we've failed to verify the assertion against\n # any of the user's WebAuthn credentials. Fail.\n raise AuthenticationRejectedError(\"Invalid WebAuthn credential\")\n", "path": "warehouse/utils/webauthn.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport json\n\nimport webauthn as pywebauthn\n\nfrom webauthn.helpers import base64url_to_bytes, generate_challenge\nfrom webauthn.helpers.exceptions import (\n InvalidAuthenticationResponse,\n InvalidRegistrationResponse,\n)\nfrom webauthn.helpers.options_to_json import options_to_json\nfrom webauthn.helpers.structs import (\n AttestationConveyancePreference,\n AuthenticationCredential,\n AuthenticatorSelectionCriteria,\n PublicKeyCredentialDescriptor,\n RegistrationCredential,\n UserVerificationRequirement,\n)\n\n\nclass AuthenticationRejectedError(Exception):\n pass\n\n\nclass RegistrationRejectedError(Exception):\n pass\n\n\ndef _get_webauthn_user_public_key_credential_descriptors(user, *, rp_id):\n \"\"\"\n Returns a webauthn.WebAuthnUser instance corresponding\n to the given user model, with properties suitable for\n usage within the webauthn API.\n \"\"\"\n return [\n PublicKeyCredentialDescriptor(id=base64url_to_bytes(credential.credential_id))\n for credential in user.webauthn\n ]\n\n\ndef _get_webauthn_user_public_keys(user, *, rp_id):\n return [\n (\n base64url_to_bytes(credential.public_key),\n credential.sign_count,\n )\n for credential in user.webauthn\n ]\n\n\ndef _webauthn_b64encode(source):\n return base64.urlsafe_b64encode(source).rstrip(b\"=\")\n\n\ndef generate_webauthn_challenge():\n \"\"\"\n Returns a random challenge suitable for use within\n Webauthn's credential and configuration option objects.\n\n See: https://w3c.github.io/webauthn/#cryptographic-challenges\n \"\"\"\n return generate_challenge()\n\n\ndef get_credential_options(user, *, challenge, rp_name, rp_id):\n \"\"\"\n Returns a dictionary of options for credential creation\n on the client side.\n \"\"\"\n _authenticator_selection = AuthenticatorSelectionCriteria()\n _authenticator_selection.user_verification = UserVerificationRequirement.DISCOURAGED\n options = pywebauthn.generate_registration_options(\n rp_id=rp_id,\n rp_name=rp_name,\n user_id=str(user.id),\n 
user_name=user.username,\n user_display_name=user.name or user.username,\n challenge=challenge,\n attestation=AttestationConveyancePreference.NONE,\n authenticator_selection=_authenticator_selection,\n )\n return json.loads(options_to_json(options))\n\n\ndef get_assertion_options(user, *, challenge, rp_id):\n \"\"\"\n Returns a dictionary of options for assertion retrieval\n on the client side.\n \"\"\"\n options = pywebauthn.generate_authentication_options(\n rp_id=rp_id,\n challenge=challenge,\n allow_credentials=_get_webauthn_user_public_key_credential_descriptors(\n user, rp_id=rp_id\n ),\n user_verification=UserVerificationRequirement.DISCOURAGED,\n )\n return json.loads(options_to_json(options))\n\n\ndef verify_registration_response(response, challenge, *, rp_id, origin):\n \"\"\"\n Validates the challenge and attestation information\n sent from the client during device registration.\n\n Returns a WebAuthnCredential on success.\n Raises RegistrationRejectedError on failire.\n \"\"\"\n # NOTE: We re-encode the challenge below, because our\n # response's clientData.challenge is encoded twice:\n # first for the entire clientData payload, and then again\n # for the individual challenge.\n encoded_challenge = _webauthn_b64encode(challenge)\n try:\n _credential = RegistrationCredential.parse_raw(response)\n return pywebauthn.verify_registration_response(\n credential=_credential,\n expected_challenge=encoded_challenge,\n expected_rp_id=rp_id,\n expected_origin=origin,\n require_user_verification=False,\n )\n except InvalidRegistrationResponse as e:\n raise RegistrationRejectedError(str(e))\n\n\ndef verify_assertion_response(assertion, *, challenge, user, origin, rp_id):\n \"\"\"\n Validates the challenge and assertion information\n sent from the client during authentication.\n\n Returns an updated signage count on success.\n Raises AuthenticationRejectedError on failure.\n \"\"\"\n # NOTE: We re-encode the challenge below, because our\n # response's clientData.challenge is encoded twice:\n # first for the entire clientData payload, and then again\n # for the individual challenge.\n encoded_challenge = _webauthn_b64encode(challenge)\n webauthn_user_public_keys = _get_webauthn_user_public_keys(user, rp_id=rp_id)\n\n for public_key, current_sign_count in webauthn_user_public_keys:\n try:\n _credential = AuthenticationCredential.parse_raw(assertion)\n return pywebauthn.verify_authentication_response(\n credential=_credential,\n expected_challenge=encoded_challenge,\n expected_rp_id=rp_id,\n expected_origin=origin,\n credential_public_key=public_key,\n credential_current_sign_count=current_sign_count,\n require_user_verification=False,\n )\n except InvalidAuthenticationResponse:\n pass\n\n # If we exit the loop, then we've failed to verify the assertion against\n # any of the user's WebAuthn credentials. Fail.\n raise AuthenticationRejectedError(\"Invalid WebAuthn credential\")\n", "path": "warehouse/utils/webauthn.py"}]}
| 2,357 | 211 |
gh_patches_debug_41161
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-1313
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError thrown and Incorrect display_options present when altering a column type from TEXT to DATE
## Description
When patching a column from `TEXT` to `DATE`,
* An irrelevant display option, `show_as_percentage`, is present on the column.
* Further requests to the `columns/` and `tables/` endpoints throw a 500 with a `KeyError`.
## Reproduction
* Create a TEXT column
* PATCH it to DATE. (Alternatively, change type to DATE on the frontend).
```
{
type: "DATE",
display_options: {},
type_options: {},
}
```
* Notice that the table fails to load on the frontend.
* Requests to the `columns/` and `tables/` endpoints result in a 500 with:
> Got KeyError when attempting to get a value for field `format` on serializer `TimeFormatDisplayOptionSerializer`.
> The serializer field might be named incorrectly and not match any attribute or key on the `dict` instance.
> Original exception text was: 'format'.
* Check the preloaded tables list by executing the following on the developer tools:
```javascript
JSON.parse(document.querySelector('#common-data').textContent).tables
```
* Notice that incorrect fields are present in the display_options of the modified column:
```javascript
{
id: 34,
name: "Date Shared",
type: "DATE",
type_options :null,
display_options:{
show_as_percentage: false
}
}
```
## Expected behavior
* `display_options` field should not contain invalid entries.
* The 500 with a `KeyError` should not be thrown.
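The 500 can be reproduced outside Mathesar with a bare DRF serializer. This is a standalone sketch: the class below only mirrors the `format` field of `TimeFormatDisplayOptionSerializer` from the file listing, and `stale_options` stands in for the display options left over from the old column type.

```python
# Standalone sketch (not Mathesar code): a DATE column's serializer expects a
# `format` key, but the stored display_options still hold the old options.
import django
from django.conf import settings

settings.configure()  # minimal stand-in for a real project's settings
django.setup()

from rest_framework import serializers


class TimeFormatDisplayOptionSketch(serializers.Serializer):
    format = serializers.CharField(max_length=255)


stale_options = {"show_as_percentage": False}
print(TimeFormatDisplayOptionSketch(stale_options).data)  # raises the KeyError above
```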
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/api/exceptions/mixins.py`
Content:
```
1 from rest_framework.serializers import Serializer
2 from rest_framework.utils.serializer_helpers import ReturnList
3 from rest_framework_friendly_errors.mixins import FriendlyErrorMessagesMixin
4 from django.core.exceptions import ValidationError as DjangoValidationError
5 from rest_framework.exceptions import ValidationError as RestValidationError
6 from mathesar.api.exceptions.generic_exceptions.base_exceptions import ErrorBody
7
8
9 class MathesarErrorMessageMixin(FriendlyErrorMessagesMixin):
10 def is_pretty(self, error):
11 return isinstance(error, dict) and tuple(error.keys()) == ErrorBody._fields
12
13 def build_pretty_errors(self, errors, serializer=None):
14
15 """
16 This method build on top of `build_pretty_errors` method of the superclass
17 It provides the following additional features
18 1. Avoids processing prettified exceptions
19 2. Add field to the pretty exception body if raised by field validation method
20 """
21 pretty = []
22 for error_type in errors:
23 error = errors[error_type]
24 if error_type == 'non_field_errors':
25 if self.is_pretty(error):
26 pretty.append(error)
27 else:
28 pretty.extend(self.get_non_field_error_entries(errors[error_type]))
29 else:
30 field = self.fields.fields[error_type]
31 if isinstance(field, Serializer) and type(errors[error_type]) == dict:
32 field.initial_data = self.initial_data[error_type]
33 child_errors = field.build_pretty_errors(errors[error_type])
34 pretty += child_errors
35 continue
36 if self.is_pretty(error):
37 if 'field' not in error or error['field'] is None or str(error['field']) == 'None':
38 error['field'] = error_type
39 pretty.append(error)
40 else:
41 pretty.extend(self.get_field_error_entries(errors[error_type], field))
42 if pretty:
43 return pretty
44 return []
45
46 def _run_validator(self, validator, field, message):
47 """
48 This method build on top of `_run_validator` method of the superclass
49 It provides the following additional features
50 1. Includes serializer if `required_context` is True similar to the behaviour of drf
51 """
52 try:
53 args = []
54 if getattr(validator, 'requires_context', False):
55 args.append(field)
56 validator(self.initial_data[field.field_name], *args)
57 except (DjangoValidationError, RestValidationError) as err:
58 err_message = err.detail[0] if hasattr(err, 'detail') else err.message
59 return err_message == message
60
61 @property
62 def errors(self):
63 """
64 This method build on top of `errors` property of the superclass to return a list instead of a dictionary
65 """
66 ugly_errors = super(FriendlyErrorMessagesMixin, self).errors
67 pretty_errors = self.build_pretty_errors(ugly_errors)
68 return ReturnList(pretty_errors, serializer=self)
69
70 @property
71 def field_map(self):
72 """
73 This method build on top of `field_map` property of the superclass
74 It provides the following additional features
75 1. Adds `ListSerializer` to `relation` field list
76 """
77 parent_field_map = super(FriendlyErrorMessagesMixin, self).field_map
78 # Add missing `ListSerializer to existing relation list`
79 parent_field_map['relation'].append('ListSerializer')
80 return parent_field_map
81
82 def get_field_kwargs(self, field, field_data):
83 """
84 This method build on top of `get_field_kwargs` method of the superclass
85 It provides the following fixes
86 1. Fixes file type length value to use name of the file instead of the size of the file,
87 matching the default behaviour of drf
88 """
89 field_type = field.__class__.__name__
90 kwargs = {
91 'data_type': type(field_data).__name__
92 }
93 if field_type in self.field_map['boolean']:
94 kwargs.update({'input': field_data})
95 elif field_type in self.field_map['string']:
96 kwargs.update(
97 {
98 'max_length': getattr(field, 'max_length', None),
99 'min_length': getattr(field, 'min_length', None),
100 'value': field_data
101 }
102 )
103 elif field_type in self.field_map['numeric']:
104
105 kwargs.update(
106 {
107 'min_value': field.min_value,
108 'max_value': field.max_value,
109 'decimal_places': getattr(field, 'decimal_places', None),
110 'max_decimal_places': getattr(field, 'decimal_places', None),
111 'max_digits': getattr(field, 'max_digits', None)
112 }
113 )
114 max_digits = kwargs['max_digits']
115 decimal_places = kwargs['decimal_places']
116 if max_digits is not None and decimal_places is not None:
117 whole_digits = max_digits - decimal_places
118 kwargs.update({'max_whole_digits': whole_digits})
119 elif field_type in self.field_map['date'].keys():
120 kwargs.update({'format': self.field_map['date'][field_type]})
121 elif field_type in self.field_map['choice']:
122 kwargs.update(
123 {
124 'input': field_data,
125 'input_type': type(field_data).__name__
126 }
127 )
128 elif field_type in self.field_map['file']:
129 kwargs.update(
130 {
131 'max_length': field.max_length,
132 # Parent method calculates the length of the file instead of the filename,
133 # we are changing it to calculate length of the file name
134 'length': len(field.parent.data.get(field.source, '').name)
135 }
136 )
137 elif field_type in self.field_map['composite']:
138 kwargs.update(
139 {
140 'input_type': type(field_data).__name__,
141 'max_length': getattr(field, 'max_length', None),
142 'min_length': getattr(field, 'min_length', None)
143 }
144 )
145 elif field_type in self.field_map['relation']:
146 kwargs.update(
147 {
148 'pk_value': field_data,
149 'data_type': type(field_data).__name__,
150 'input_type': type(field_data).__name__,
151 'slug_name': getattr(field, 'slug_field', None),
152 'value': field_data
153 }
154 )
155 else:
156 kwargs.update({'max_length': getattr(field, 'max_length', None)})
157 return kwargs
158
```
Path: `mathesar/api/serializers/shared_serializers.py`
Content:
```
1 from django.core.exceptions import ImproperlyConfigured
2 from rest_framework import serializers
3
4 from mathesar.api.exceptions.mixins import MathesarErrorMessageMixin
5 from mathesar.database.types import MathesarTypeIdentifier, get_mathesar_type_from_db_type
6
7
8 class ReadOnlyPolymorphicSerializerMappingMixin:
9 """
10 This serializer mixin is helpful in serializing polymorphic models,
11 by switching to correct serializer based on the mapping field value.
12 """
13
14 def __new__(cls, *args, **kwargs):
15 if cls.serializers_mapping is None:
16 raise ImproperlyConfigured(
17 '`{cls}` is missing a '
18 '`{cls}.model_serializer_mapping` attribute'.format(cls=cls.__name__)
19 )
20 return super().__new__(cls, *args, **kwargs)
21
22 def __init__(self, *args, **kwargs):
23 super().__init__(*args, **kwargs)
24 self.serializers_cls_mapping = {}
25 serializers_mapping = self.serializers_mapping
26 self.serializers_mapping = {}
27 for identifier, serializer_cls in serializers_mapping.items():
28 if callable(serializer_cls):
29 serializer = serializer_cls(*args, **kwargs)
30 serializer.parent = self
31 else:
32 serializer = serializer_cls
33 self.serializers_mapping[identifier] = serializer
34 self.serializers_cls_mapping[identifier] = serializer_cls
35
36 def to_representation(self, instance):
37 serializer = self.serializers_mapping.get(self.get_mapping_field(), None)
38 if serializer is not None:
39 self.__class__ = self.serializers_cls_mapping.get(self.get_mapping_field())
40 return serializer.to_representation(instance)
41 else:
42 raise Exception(f"Cannot find a matching serializer for the specified type {self.get_mapping_field()}")
43
44 def get_mapping_field(self):
45 mapping_field = getattr(self, "mapping_field", None)
46 if mapping_field is None:
47 raise Exception(
48 "Add a `mapping_field` to be used as a identifier"
49 "or override this method to return a identifier to identify a proper serializer"
50 )
51 return mapping_field
52
53
54 class ReadWritePolymorphicSerializerMappingMixin(ReadOnlyPolymorphicSerializerMappingMixin):
55 def to_internal_value(self, data):
56 serializer = self.serializers_mapping.get(self.get_mapping_field())
57 if serializer is not None:
58 self.__class__ = self.serializers_cls_mapping.get(self.get_mapping_field())
59 return serializer.to_internal_value(data=data)
60 else:
61 raise Exception(f"Cannot find a matching serializer for the specified type {self.get_mapping_field()}")
62
63
64 class MonkeyPatchPartial:
65 """
66 Work around bug #3847 in djangorestframework by monkey-patching the partial
67 attribute of the root serializer during the call to validate_empty_values.
68 https://github.com/encode/django-rest-framework/issues/3847
69 """
70
71 def __init__(self, root):
72 self._root = root
73
74 def __enter__(self):
75 self._old = getattr(self._root, 'partial')
76 setattr(self._root, 'partial', False)
77
78 def __exit__(self, *args):
79 setattr(self._root, 'partial', self._old)
80
81
82 class OverrideRootPartialMixin:
83 """
84 This mixin is used to convert a serializer into a partial serializer,
85 based on the serializer `partial` property rather than the parent's `partial` property.
86 Refer to the issue
87 https://github.com/encode/django-rest-framework/issues/3847
88 """
89
90 def run_validation(self, *args, **kwargs):
91 if not self.partial:
92 with MonkeyPatchPartial(self.root):
93 return super().run_validation(*args, **kwargs)
94 return super().run_validation(*args, **kwargs)
95
96
97 class CustomBooleanLabelSerializer(MathesarErrorMessageMixin, serializers.Serializer):
98 TRUE = serializers.CharField()
99 FALSE = serializers.CharField()
100
101
102 DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY = 'db_type'
103
104
105 class BooleanDisplayOptionSerializer(MathesarErrorMessageMixin, OverrideRootPartialMixin, serializers.Serializer):
106 input = serializers.ChoiceField(choices=[("dropdown", "dropdown"), ("checkbox", "checkbox")])
107 custom_labels = CustomBooleanLabelSerializer(required=False)
108
109
110 class AbstractNumberDisplayOptionSerializer(serializers.Serializer):
111 number_format = serializers.ChoiceField(allow_null=True, required=False, choices=['english', 'german', 'french', 'hindi', 'swiss'])
112
113
114 class NumberDisplayOptionSerializer(
115 MathesarErrorMessageMixin,
116 OverrideRootPartialMixin,
117 AbstractNumberDisplayOptionSerializer
118 ):
119 show_as_percentage = serializers.BooleanField(default=False)
120
121
122 class TimeFormatDisplayOptionSerializer(
123 MathesarErrorMessageMixin,
124 OverrideRootPartialMixin,
125 serializers.Serializer
126 ):
127 format = serializers.CharField(max_length=255)
128
129
130 class DurationDisplayOptionSerializer(MathesarErrorMessageMixin, OverrideRootPartialMixin, serializers.Serializer):
131 min = serializers.CharField(max_length=255)
132 max = serializers.CharField(max_length=255)
133 show_units = serializers.BooleanField()
134
135
136 class DisplayOptionsMappingSerializer(
137 MathesarErrorMessageMixin,
138 OverrideRootPartialMixin,
139 ReadWritePolymorphicSerializerMappingMixin,
140 serializers.Serializer
141 ):
142 serializers_mapping = {
143 MathesarTypeIdentifier.BOOLEAN.value: BooleanDisplayOptionSerializer,
144 MathesarTypeIdentifier.NUMBER.value: NumberDisplayOptionSerializer,
145 MathesarTypeIdentifier.DATETIME.value: TimeFormatDisplayOptionSerializer,
146 MathesarTypeIdentifier.DATE.value: TimeFormatDisplayOptionSerializer,
147 MathesarTypeIdentifier.TIME.value: TimeFormatDisplayOptionSerializer,
148 MathesarTypeIdentifier.DURATION.value: DurationDisplayOptionSerializer,
149 }
150
151 def get_mapping_field(self):
152 db_type = self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY]
153 mathesar_type = get_mathesar_type_from_db_type(db_type)
154 return mathesar_type
155
```
--- END FILES ---
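One pattern in the serializers above worth isolating is the `MonkeyPatchPartial` / `OverrideRootPartialMixin` pair: an attribute on another object is swapped out for the duration of a single call and restored afterwards. Below is a minimal, framework-free sketch of that idea; `temporary_attr` and `Config` are made-up names for illustration and are not part of the Mathesar codebase.

```python
# Generic "temporarily override an attribute" context manager, mirroring the
# MonkeyPatchPartial idea above. temporary_attr and Config are hypothetical.
class temporary_attr:
    def __init__(self, obj, name, value):
        self._obj, self._name, self._value = obj, name, value

    def __enter__(self):
        self._old = getattr(self._obj, self._name)
        setattr(self._obj, self._name, self._value)
        return self._obj

    def __exit__(self, *exc):
        # Always restore the original value, even if the body raised.
        setattr(self._obj, self._name, self._old)
        return False  # do not swallow exceptions


class Config:
    partial = True


cfg = Config()
with temporary_attr(cfg, "partial", False):
    print(cfg.partial)   # False while the wrapped call runs
print(cfg.partial)       # True again afterwards
```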
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mathesar/api/exceptions/mixins.py b/mathesar/api/exceptions/mixins.py
--- a/mathesar/api/exceptions/mixins.py
+++ b/mathesar/api/exceptions/mixins.py
@@ -27,7 +27,7 @@
else:
pretty.extend(self.get_non_field_error_entries(errors[error_type]))
else:
- field = self.fields.fields[error_type]
+ field = self.get_serializer_fields().fields[error_type]
if isinstance(field, Serializer) and type(errors[error_type]) == dict:
field.initial_data = self.initial_data[error_type]
child_errors = field.build_pretty_errors(errors[error_type])
@@ -43,6 +43,9 @@
return pretty
return []
+ def get_serializer_fields(self):
+ return self.fields
+
def _run_validator(self, validator, field, message):
"""
This method build on top of `_run_validator` method of the superclass
diff --git a/mathesar/api/serializers/shared_serializers.py b/mathesar/api/serializers/shared_serializers.py
--- a/mathesar/api/serializers/shared_serializers.py
+++ b/mathesar/api/serializers/shared_serializers.py
@@ -36,7 +36,6 @@
def to_representation(self, instance):
serializer = self.serializers_mapping.get(self.get_mapping_field(), None)
if serializer is not None:
- self.__class__ = self.serializers_cls_mapping.get(self.get_mapping_field())
return serializer.to_representation(instance)
else:
raise Exception(f"Cannot find a matching serializer for the specified type {self.get_mapping_field()}")
@@ -55,7 +54,6 @@
def to_internal_value(self, data):
serializer = self.serializers_mapping.get(self.get_mapping_field())
if serializer is not None:
- self.__class__ = self.serializers_cls_mapping.get(self.get_mapping_field())
return serializer.to_internal_value(data=data)
else:
raise Exception(f"Cannot find a matching serializer for the specified type {self.get_mapping_field()}")
@@ -94,6 +92,11 @@
return super().run_validation(*args, **kwargs)
+class MathesarPolymorphicErrorMixin(MathesarErrorMessageMixin):
+ def get_serializer_fields(self):
+ return self.serializers_mapping[self.get_mapping_field()].fields
+
+
class CustomBooleanLabelSerializer(MathesarErrorMessageMixin, serializers.Serializer):
TRUE = serializers.CharField()
FALSE = serializers.CharField()
@@ -134,8 +137,8 @@
class DisplayOptionsMappingSerializer(
- MathesarErrorMessageMixin,
OverrideRootPartialMixin,
+ MathesarPolymorphicErrorMixin,
ReadWritePolymorphicSerializerMappingMixin,
serializers.Serializer
):
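The patch routes pretty-error field lookup through a `get_serializer_fields()` hook, and the new `MathesarPolymorphicErrorMixin` overrides that hook to return the fields of whichever child serializer is currently mapped. The sketch below is a simplified, framework-free illustration of that lookup path; plain dicts stand in for DRF's field containers, and these classes are stand-ins, not the project's real serializers.

```python
# Simplified stand-ins for the patched lookup path; not the real DRF classes.
class ErrorMessageMixin:
    def __init__(self, fields=None):
        self.fields = fields or {}          # field name -> field description

    def get_serializer_fields(self):
        # Hook added by the patch; the default keeps the old behaviour.
        return self.fields

    def describe_error(self, field_name):
        field = self.get_serializer_fields()[field_name]
        return f"error on {field_name!r}: {field}"


class PolymorphicErrorMixin(ErrorMessageMixin):
    def __init__(self, serializers_mapping, mapping_field):
        super().__init__()
        self.serializers_mapping = serializers_mapping
        self.mapping_field = mapping_field

    def get_mapping_field(self):
        return self.mapping_field

    def get_serializer_fields(self):
        # Resolve fields on the child serializer chosen for the current type,
        # not on the (empty) polymorphic parent.
        return self.serializers_mapping[self.get_mapping_field()].fields


time_format_child = ErrorMessageMixin(fields={"format": "CharField(max_length=255)"})
display_options = PolymorphicErrorMixin(
    serializers_mapping={"datetime": time_format_child},
    mapping_field="datetime",
)
print(display_options.describe_error("format"))
# Before the patch, the parent looked in its own fields and raised KeyError.
```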
|
{"golden_diff": "diff --git a/mathesar/api/exceptions/mixins.py b/mathesar/api/exceptions/mixins.py\n--- a/mathesar/api/exceptions/mixins.py\n+++ b/mathesar/api/exceptions/mixins.py\n@@ -27,7 +27,7 @@\n else:\n pretty.extend(self.get_non_field_error_entries(errors[error_type]))\n else:\n- field = self.fields.fields[error_type]\n+ field = self.get_serializer_fields().fields[error_type]\n if isinstance(field, Serializer) and type(errors[error_type]) == dict:\n field.initial_data = self.initial_data[error_type]\n child_errors = field.build_pretty_errors(errors[error_type])\n@@ -43,6 +43,9 @@\n return pretty\n return []\n \n+ def get_serializer_fields(self):\n+ return self.fields\n+\n def _run_validator(self, validator, field, message):\n \"\"\"\n This method build on top of `_run_validator` method of the superclass\ndiff --git a/mathesar/api/serializers/shared_serializers.py b/mathesar/api/serializers/shared_serializers.py\n--- a/mathesar/api/serializers/shared_serializers.py\n+++ b/mathesar/api/serializers/shared_serializers.py\n@@ -36,7 +36,6 @@\n def to_representation(self, instance):\n serializer = self.serializers_mapping.get(self.get_mapping_field(), None)\n if serializer is not None:\n- self.__class__ = self.serializers_cls_mapping.get(self.get_mapping_field())\n return serializer.to_representation(instance)\n else:\n raise Exception(f\"Cannot find a matching serializer for the specified type {self.get_mapping_field()}\")\n@@ -55,7 +54,6 @@\n def to_internal_value(self, data):\n serializer = self.serializers_mapping.get(self.get_mapping_field())\n if serializer is not None:\n- self.__class__ = self.serializers_cls_mapping.get(self.get_mapping_field())\n return serializer.to_internal_value(data=data)\n else:\n raise Exception(f\"Cannot find a matching serializer for the specified type {self.get_mapping_field()}\")\n@@ -94,6 +92,11 @@\n return super().run_validation(*args, **kwargs)\n \n \n+class MathesarPolymorphicErrorMixin(MathesarErrorMessageMixin):\n+ def get_serializer_fields(self):\n+ return self.serializers_mapping[self.get_mapping_field()].fields\n+\n+\n class CustomBooleanLabelSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n TRUE = serializers.CharField()\n FALSE = serializers.CharField()\n@@ -134,8 +137,8 @@\n \n \n class DisplayOptionsMappingSerializer(\n- MathesarErrorMessageMixin,\n OverrideRootPartialMixin,\n+ MathesarPolymorphicErrorMixin,\n ReadWritePolymorphicSerializerMappingMixin,\n serializers.Serializer\n ):\n", "issue": "KeyError thrown and Incorrect display_options present when altering a column type from TEXT to DATE\n## Description\r\nWhen patching a column from `TEXT` to `DATE`,\r\n* Irrelevant `display_option \"show_as_percentage\"` is present on the column\r\n* Further requests to `columns/` and `tables/` endpoints, throws a 500 with keyError.\r\n\r\n## Reproduction\r\n* Create a TEXT column\r\n* PATCH it to DATE. 
(Alternatively, change type to DATE on the frontend).\r\n ```\r\n {\r\n type: \"DATE\",\r\n display_options: {},\r\n type_options: {},\r\n }\r\n ```\r\n* Notice that the table fails to loads on the frontend.\r\n* Requests to `columns/` and `tables/` endpoints result in a 500 with,\r\n > Got KeyError when attempting to get a value for field `format` on serializer `TimeFormatDisplayOptionSerializer`.\\nThe serializer field might be named incorrectly and not match any attribute or key on the `dict` instance.\\nOriginal exception text was: 'format'.\r\n* Check the preloaded tables list by executing the following on the developer tools:\r\n ```javascript\r\n JSON.parse(document.querySelector('#common-data').textContent).tables\r\n ```\r\n* Notice that incorrect fields are present in the display_options of the modified column: \r\n ```javascript\r\n {\r\n id: 34,\r\n name: \"Date Shared\",\r\n type: \"DATE\",\r\n type_options :null,\r\n display_options:{\r\n show_as_percentage: false\r\n }\r\n }\r\n ```\r\n\r\n## Expected behavior\r\n* `display_options` field should not contain invalid entries.\r\n* The 500 with keyError should not be thrown.\n", "before_files": [{"content": "from rest_framework.serializers import Serializer\nfrom rest_framework.utils.serializer_helpers import ReturnList\nfrom rest_framework_friendly_errors.mixins import FriendlyErrorMessagesMixin\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom rest_framework.exceptions import ValidationError as RestValidationError\nfrom mathesar.api.exceptions.generic_exceptions.base_exceptions import ErrorBody\n\n\nclass MathesarErrorMessageMixin(FriendlyErrorMessagesMixin):\n def is_pretty(self, error):\n return isinstance(error, dict) and tuple(error.keys()) == ErrorBody._fields\n\n def build_pretty_errors(self, errors, serializer=None):\n\n \"\"\"\n This method build on top of `build_pretty_errors` method of the superclass\n It provides the following additional features\n 1. Avoids processing prettified exceptions\n 2. Add field to the pretty exception body if raised by field validation method\n \"\"\"\n pretty = []\n for error_type in errors:\n error = errors[error_type]\n if error_type == 'non_field_errors':\n if self.is_pretty(error):\n pretty.append(error)\n else:\n pretty.extend(self.get_non_field_error_entries(errors[error_type]))\n else:\n field = self.fields.fields[error_type]\n if isinstance(field, Serializer) and type(errors[error_type]) == dict:\n field.initial_data = self.initial_data[error_type]\n child_errors = field.build_pretty_errors(errors[error_type])\n pretty += child_errors\n continue\n if self.is_pretty(error):\n if 'field' not in error or error['field'] is None or str(error['field']) == 'None':\n error['field'] = error_type\n pretty.append(error)\n else:\n pretty.extend(self.get_field_error_entries(errors[error_type], field))\n if pretty:\n return pretty\n return []\n\n def _run_validator(self, validator, field, message):\n \"\"\"\n This method build on top of `_run_validator` method of the superclass\n It provides the following additional features\n 1. 
Includes serializer if `required_context` is True similar to the behaviour of drf\n \"\"\"\n try:\n args = []\n if getattr(validator, 'requires_context', False):\n args.append(field)\n validator(self.initial_data[field.field_name], *args)\n except (DjangoValidationError, RestValidationError) as err:\n err_message = err.detail[0] if hasattr(err, 'detail') else err.message\n return err_message == message\n\n @property\n def errors(self):\n \"\"\"\n This method build on top of `errors` property of the superclass to return a list instead of a dictionary\n \"\"\"\n ugly_errors = super(FriendlyErrorMessagesMixin, self).errors\n pretty_errors = self.build_pretty_errors(ugly_errors)\n return ReturnList(pretty_errors, serializer=self)\n\n @property\n def field_map(self):\n \"\"\"\n This method build on top of `field_map` property of the superclass\n It provides the following additional features\n 1. Adds `ListSerializer` to `relation` field list\n \"\"\"\n parent_field_map = super(FriendlyErrorMessagesMixin, self).field_map\n # Add missing `ListSerializer to existing relation list`\n parent_field_map['relation'].append('ListSerializer')\n return parent_field_map\n\n def get_field_kwargs(self, field, field_data):\n \"\"\"\n This method build on top of `get_field_kwargs` method of the superclass\n It provides the following fixes\n 1. Fixes file type length value to use name of the file instead of the size of the file,\n matching the default behaviour of drf\n \"\"\"\n field_type = field.__class__.__name__\n kwargs = {\n 'data_type': type(field_data).__name__\n }\n if field_type in self.field_map['boolean']:\n kwargs.update({'input': field_data})\n elif field_type in self.field_map['string']:\n kwargs.update(\n {\n 'max_length': getattr(field, 'max_length', None),\n 'min_length': getattr(field, 'min_length', None),\n 'value': field_data\n }\n )\n elif field_type in self.field_map['numeric']:\n\n kwargs.update(\n {\n 'min_value': field.min_value,\n 'max_value': field.max_value,\n 'decimal_places': getattr(field, 'decimal_places', None),\n 'max_decimal_places': getattr(field, 'decimal_places', None),\n 'max_digits': getattr(field, 'max_digits', None)\n }\n )\n max_digits = kwargs['max_digits']\n decimal_places = kwargs['decimal_places']\n if max_digits is not None and decimal_places is not None:\n whole_digits = max_digits - decimal_places\n kwargs.update({'max_whole_digits': whole_digits})\n elif field_type in self.field_map['date'].keys():\n kwargs.update({'format': self.field_map['date'][field_type]})\n elif field_type in self.field_map['choice']:\n kwargs.update(\n {\n 'input': field_data,\n 'input_type': type(field_data).__name__\n }\n )\n elif field_type in self.field_map['file']:\n kwargs.update(\n {\n 'max_length': field.max_length,\n # Parent method calculates the length of the file instead of the filename,\n # we are changing it to calculate length of the file name\n 'length': len(field.parent.data.get(field.source, '').name)\n }\n )\n elif field_type in self.field_map['composite']:\n kwargs.update(\n {\n 'input_type': type(field_data).__name__,\n 'max_length': getattr(field, 'max_length', None),\n 'min_length': getattr(field, 'min_length', None)\n }\n )\n elif field_type in self.field_map['relation']:\n kwargs.update(\n {\n 'pk_value': field_data,\n 'data_type': type(field_data).__name__,\n 'input_type': type(field_data).__name__,\n 'slug_name': getattr(field, 'slug_field', None),\n 'value': field_data\n }\n )\n else:\n kwargs.update({'max_length': getattr(field, 'max_length', None)})\n 
return kwargs\n", "path": "mathesar/api/exceptions/mixins.py"}, {"content": "from django.core.exceptions import ImproperlyConfigured\nfrom rest_framework import serializers\n\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.database.types import MathesarTypeIdentifier, get_mathesar_type_from_db_type\n\n\nclass ReadOnlyPolymorphicSerializerMappingMixin:\n \"\"\"\n This serializer mixin is helpful in serializing polymorphic models,\n by switching to correct serializer based on the mapping field value.\n \"\"\"\n\n def __new__(cls, *args, **kwargs):\n if cls.serializers_mapping is None:\n raise ImproperlyConfigured(\n '`{cls}` is missing a '\n '`{cls}.model_serializer_mapping` attribute'.format(cls=cls.__name__)\n )\n return super().__new__(cls, *args, **kwargs)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.serializers_cls_mapping = {}\n serializers_mapping = self.serializers_mapping\n self.serializers_mapping = {}\n for identifier, serializer_cls in serializers_mapping.items():\n if callable(serializer_cls):\n serializer = serializer_cls(*args, **kwargs)\n serializer.parent = self\n else:\n serializer = serializer_cls\n self.serializers_mapping[identifier] = serializer\n self.serializers_cls_mapping[identifier] = serializer_cls\n\n def to_representation(self, instance):\n serializer = self.serializers_mapping.get(self.get_mapping_field(), None)\n if serializer is not None:\n self.__class__ = self.serializers_cls_mapping.get(self.get_mapping_field())\n return serializer.to_representation(instance)\n else:\n raise Exception(f\"Cannot find a matching serializer for the specified type {self.get_mapping_field()}\")\n\n def get_mapping_field(self):\n mapping_field = getattr(self, \"mapping_field\", None)\n if mapping_field is None:\n raise Exception(\n \"Add a `mapping_field` to be used as a identifier\"\n \"or override this method to return a identifier to identify a proper serializer\"\n )\n return mapping_field\n\n\nclass ReadWritePolymorphicSerializerMappingMixin(ReadOnlyPolymorphicSerializerMappingMixin):\n def to_internal_value(self, data):\n serializer = self.serializers_mapping.get(self.get_mapping_field())\n if serializer is not None:\n self.__class__ = self.serializers_cls_mapping.get(self.get_mapping_field())\n return serializer.to_internal_value(data=data)\n else:\n raise Exception(f\"Cannot find a matching serializer for the specified type {self.get_mapping_field()}\")\n\n\nclass MonkeyPatchPartial:\n \"\"\"\n Work around bug #3847 in djangorestframework by monkey-patching the partial\n attribute of the root serializer during the call to validate_empty_values.\n https://github.com/encode/django-rest-framework/issues/3847\n \"\"\"\n\n def __init__(self, root):\n self._root = root\n\n def __enter__(self):\n self._old = getattr(self._root, 'partial')\n setattr(self._root, 'partial', False)\n\n def __exit__(self, *args):\n setattr(self._root, 'partial', self._old)\n\n\nclass OverrideRootPartialMixin:\n \"\"\"\n This mixin is used to convert a serializer into a partial serializer,\n based on the serializer `partial` property rather than the parent's `partial` property.\n Refer to the issue\n https://github.com/encode/django-rest-framework/issues/3847\n \"\"\"\n\n def run_validation(self, *args, **kwargs):\n if not self.partial:\n with MonkeyPatchPartial(self.root):\n return super().run_validation(*args, **kwargs)\n return super().run_validation(*args, **kwargs)\n\n\nclass 
CustomBooleanLabelSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n TRUE = serializers.CharField()\n FALSE = serializers.CharField()\n\n\nDISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY = 'db_type'\n\n\nclass BooleanDisplayOptionSerializer(MathesarErrorMessageMixin, OverrideRootPartialMixin, serializers.Serializer):\n input = serializers.ChoiceField(choices=[(\"dropdown\", \"dropdown\"), (\"checkbox\", \"checkbox\")])\n custom_labels = CustomBooleanLabelSerializer(required=False)\n\n\nclass AbstractNumberDisplayOptionSerializer(serializers.Serializer):\n number_format = serializers.ChoiceField(allow_null=True, required=False, choices=['english', 'german', 'french', 'hindi', 'swiss'])\n\n\nclass NumberDisplayOptionSerializer(\n MathesarErrorMessageMixin,\n OverrideRootPartialMixin,\n AbstractNumberDisplayOptionSerializer\n):\n show_as_percentage = serializers.BooleanField(default=False)\n\n\nclass TimeFormatDisplayOptionSerializer(\n MathesarErrorMessageMixin,\n OverrideRootPartialMixin,\n serializers.Serializer\n):\n format = serializers.CharField(max_length=255)\n\n\nclass DurationDisplayOptionSerializer(MathesarErrorMessageMixin, OverrideRootPartialMixin, serializers.Serializer):\n min = serializers.CharField(max_length=255)\n max = serializers.CharField(max_length=255)\n show_units = serializers.BooleanField()\n\n\nclass DisplayOptionsMappingSerializer(\n MathesarErrorMessageMixin,\n OverrideRootPartialMixin,\n ReadWritePolymorphicSerializerMappingMixin,\n serializers.Serializer\n):\n serializers_mapping = {\n MathesarTypeIdentifier.BOOLEAN.value: BooleanDisplayOptionSerializer,\n MathesarTypeIdentifier.NUMBER.value: NumberDisplayOptionSerializer,\n MathesarTypeIdentifier.DATETIME.value: TimeFormatDisplayOptionSerializer,\n MathesarTypeIdentifier.DATE.value: TimeFormatDisplayOptionSerializer,\n MathesarTypeIdentifier.TIME.value: TimeFormatDisplayOptionSerializer,\n MathesarTypeIdentifier.DURATION.value: DurationDisplayOptionSerializer,\n }\n\n def get_mapping_field(self):\n db_type = self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY]\n mathesar_type = get_mathesar_type_from_db_type(db_type)\n return mathesar_type\n", "path": "mathesar/api/serializers/shared_serializers.py"}], "after_files": [{"content": "from rest_framework.serializers import Serializer\nfrom rest_framework.utils.serializer_helpers import ReturnList\nfrom rest_framework_friendly_errors.mixins import FriendlyErrorMessagesMixin\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom rest_framework.exceptions import ValidationError as RestValidationError\nfrom mathesar.api.exceptions.generic_exceptions.base_exceptions import ErrorBody\n\n\nclass MathesarErrorMessageMixin(FriendlyErrorMessagesMixin):\n def is_pretty(self, error):\n return isinstance(error, dict) and tuple(error.keys()) == ErrorBody._fields\n\n def build_pretty_errors(self, errors, serializer=None):\n\n \"\"\"\n This method build on top of `build_pretty_errors` method of the superclass\n It provides the following additional features\n 1. Avoids processing prettified exceptions\n 2. 
Add field to the pretty exception body if raised by field validation method\n \"\"\"\n pretty = []\n for error_type in errors:\n error = errors[error_type]\n if error_type == 'non_field_errors':\n if self.is_pretty(error):\n pretty.append(error)\n else:\n pretty.extend(self.get_non_field_error_entries(errors[error_type]))\n else:\n field = self.get_serializer_fields().fields[error_type]\n if isinstance(field, Serializer) and type(errors[error_type]) == dict:\n field.initial_data = self.initial_data[error_type]\n child_errors = field.build_pretty_errors(errors[error_type])\n pretty += child_errors\n continue\n if self.is_pretty(error):\n if 'field' not in error or error['field'] is None or str(error['field']) == 'None':\n error['field'] = error_type\n pretty.append(error)\n else:\n pretty.extend(self.get_field_error_entries(errors[error_type], field))\n if pretty:\n return pretty\n return []\n\n def get_serializer_fields(self):\n return self.fields\n\n def _run_validator(self, validator, field, message):\n \"\"\"\n This method build on top of `_run_validator` method of the superclass\n It provides the following additional features\n 1. Includes serializer if `required_context` is True similar to the behaviour of drf\n \"\"\"\n try:\n args = []\n if getattr(validator, 'requires_context', False):\n args.append(field)\n validator(self.initial_data[field.field_name], *args)\n except (DjangoValidationError, RestValidationError) as err:\n err_message = err.detail[0] if hasattr(err, 'detail') else err.message\n return err_message == message\n\n @property\n def errors(self):\n \"\"\"\n This method build on top of `errors` property of the superclass to return a list instead of a dictionary\n \"\"\"\n ugly_errors = super(FriendlyErrorMessagesMixin, self).errors\n pretty_errors = self.build_pretty_errors(ugly_errors)\n return ReturnList(pretty_errors, serializer=self)\n\n @property\n def field_map(self):\n \"\"\"\n This method build on top of `field_map` property of the superclass\n It provides the following additional features\n 1. Adds `ListSerializer` to `relation` field list\n \"\"\"\n parent_field_map = super(FriendlyErrorMessagesMixin, self).field_map\n # Add missing `ListSerializer to existing relation list`\n parent_field_map['relation'].append('ListSerializer')\n return parent_field_map\n\n def get_field_kwargs(self, field, field_data):\n \"\"\"\n This method build on top of `get_field_kwargs` method of the superclass\n It provides the following fixes\n 1. 
Fixes file type length value to use name of the file instead of the size of the file,\n matching the default behaviour of drf\n \"\"\"\n field_type = field.__class__.__name__\n kwargs = {\n 'data_type': type(field_data).__name__\n }\n if field_type in self.field_map['boolean']:\n kwargs.update({'input': field_data})\n elif field_type in self.field_map['string']:\n kwargs.update(\n {\n 'max_length': getattr(field, 'max_length', None),\n 'min_length': getattr(field, 'min_length', None),\n 'value': field_data\n }\n )\n elif field_type in self.field_map['numeric']:\n\n kwargs.update(\n {\n 'min_value': field.min_value,\n 'max_value': field.max_value,\n 'decimal_places': getattr(field, 'decimal_places', None),\n 'max_decimal_places': getattr(field, 'decimal_places', None),\n 'max_digits': getattr(field, 'max_digits', None)\n }\n )\n max_digits = kwargs['max_digits']\n decimal_places = kwargs['decimal_places']\n if max_digits is not None and decimal_places is not None:\n whole_digits = max_digits - decimal_places\n kwargs.update({'max_whole_digits': whole_digits})\n elif field_type in self.field_map['date'].keys():\n kwargs.update({'format': self.field_map['date'][field_type]})\n elif field_type in self.field_map['choice']:\n kwargs.update(\n {\n 'input': field_data,\n 'input_type': type(field_data).__name__\n }\n )\n elif field_type in self.field_map['file']:\n kwargs.update(\n {\n 'max_length': field.max_length,\n # Parent method calculates the length of the file instead of the filename,\n # we are changing it to calculate length of the file name\n 'length': len(field.parent.data.get(field.source, '').name)\n }\n )\n elif field_type in self.field_map['composite']:\n kwargs.update(\n {\n 'input_type': type(field_data).__name__,\n 'max_length': getattr(field, 'max_length', None),\n 'min_length': getattr(field, 'min_length', None)\n }\n )\n elif field_type in self.field_map['relation']:\n kwargs.update(\n {\n 'pk_value': field_data,\n 'data_type': type(field_data).__name__,\n 'input_type': type(field_data).__name__,\n 'slug_name': getattr(field, 'slug_field', None),\n 'value': field_data\n }\n )\n else:\n kwargs.update({'max_length': getattr(field, 'max_length', None)})\n return kwargs\n", "path": "mathesar/api/exceptions/mixins.py"}, {"content": "from django.core.exceptions import ImproperlyConfigured\nfrom rest_framework import serializers\n\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.database.types import MathesarTypeIdentifier, get_mathesar_type_from_db_type\n\n\nclass ReadOnlyPolymorphicSerializerMappingMixin:\n \"\"\"\n This serializer mixin is helpful in serializing polymorphic models,\n by switching to correct serializer based on the mapping field value.\n \"\"\"\n\n def __new__(cls, *args, **kwargs):\n if cls.serializers_mapping is None:\n raise ImproperlyConfigured(\n '`{cls}` is missing a '\n '`{cls}.model_serializer_mapping` attribute'.format(cls=cls.__name__)\n )\n return super().__new__(cls, *args, **kwargs)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.serializers_cls_mapping = {}\n serializers_mapping = self.serializers_mapping\n self.serializers_mapping = {}\n for identifier, serializer_cls in serializers_mapping.items():\n if callable(serializer_cls):\n serializer = serializer_cls(*args, **kwargs)\n serializer.parent = self\n else:\n serializer = serializer_cls\n self.serializers_mapping[identifier] = serializer\n self.serializers_cls_mapping[identifier] = serializer_cls\n\n def 
to_representation(self, instance):\n serializer = self.serializers_mapping.get(self.get_mapping_field(), None)\n if serializer is not None:\n return serializer.to_representation(instance)\n else:\n raise Exception(f\"Cannot find a matching serializer for the specified type {self.get_mapping_field()}\")\n\n def get_mapping_field(self):\n mapping_field = getattr(self, \"mapping_field\", None)\n if mapping_field is None:\n raise Exception(\n \"Add a `mapping_field` to be used as a identifier\"\n \"or override this method to return a identifier to identify a proper serializer\"\n )\n return mapping_field\n\n\nclass ReadWritePolymorphicSerializerMappingMixin(ReadOnlyPolymorphicSerializerMappingMixin):\n def to_internal_value(self, data):\n serializer = self.serializers_mapping.get(self.get_mapping_field())\n if serializer is not None:\n return serializer.to_internal_value(data=data)\n else:\n raise Exception(f\"Cannot find a matching serializer for the specified type {self.get_mapping_field()}\")\n\n\nclass MonkeyPatchPartial:\n \"\"\"\n Work around bug #3847 in djangorestframework by monkey-patching the partial\n attribute of the root serializer during the call to validate_empty_values.\n https://github.com/encode/django-rest-framework/issues/3847\n \"\"\"\n\n def __init__(self, root):\n self._root = root\n\n def __enter__(self):\n self._old = getattr(self._root, 'partial')\n setattr(self._root, 'partial', False)\n\n def __exit__(self, *args):\n setattr(self._root, 'partial', self._old)\n\n\nclass OverrideRootPartialMixin:\n \"\"\"\n This mixin is used to convert a serializer into a partial serializer,\n based on the serializer `partial` property rather than the parent's `partial` property.\n Refer to the issue\n https://github.com/encode/django-rest-framework/issues/3847\n \"\"\"\n\n def run_validation(self, *args, **kwargs):\n if not self.partial:\n with MonkeyPatchPartial(self.root):\n return super().run_validation(*args, **kwargs)\n return super().run_validation(*args, **kwargs)\n\n\nclass MathesarPolymorphicErrorMixin(MathesarErrorMessageMixin):\n def get_serializer_fields(self):\n return self.serializers_mapping[self.get_mapping_field()].fields\n\n\nclass CustomBooleanLabelSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n TRUE = serializers.CharField()\n FALSE = serializers.CharField()\n\n\nDISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY = 'db_type'\n\n\nclass BooleanDisplayOptionSerializer(MathesarErrorMessageMixin, OverrideRootPartialMixin, serializers.Serializer):\n input = serializers.ChoiceField(choices=[(\"dropdown\", \"dropdown\"), (\"checkbox\", \"checkbox\")])\n custom_labels = CustomBooleanLabelSerializer(required=False)\n\n\nclass AbstractNumberDisplayOptionSerializer(serializers.Serializer):\n number_format = serializers.ChoiceField(allow_null=True, required=False, choices=['english', 'german', 'french', 'hindi', 'swiss'])\n\n\nclass NumberDisplayOptionSerializer(\n MathesarErrorMessageMixin,\n OverrideRootPartialMixin,\n AbstractNumberDisplayOptionSerializer\n):\n show_as_percentage = serializers.BooleanField(default=False)\n\n\nclass TimeFormatDisplayOptionSerializer(\n MathesarErrorMessageMixin,\n OverrideRootPartialMixin,\n serializers.Serializer\n):\n format = serializers.CharField(max_length=255)\n\n\nclass DurationDisplayOptionSerializer(MathesarErrorMessageMixin, OverrideRootPartialMixin, serializers.Serializer):\n min = serializers.CharField(max_length=255)\n max = serializers.CharField(max_length=255)\n show_units = serializers.BooleanField()\n\n\nclass 
DisplayOptionsMappingSerializer(\n OverrideRootPartialMixin,\n MathesarPolymorphicErrorMixin,\n ReadWritePolymorphicSerializerMappingMixin,\n serializers.Serializer\n):\n serializers_mapping = {\n MathesarTypeIdentifier.BOOLEAN.value: BooleanDisplayOptionSerializer,\n MathesarTypeIdentifier.NUMBER.value: NumberDisplayOptionSerializer,\n MathesarTypeIdentifier.DATETIME.value: TimeFormatDisplayOptionSerializer,\n MathesarTypeIdentifier.DATE.value: TimeFormatDisplayOptionSerializer,\n MathesarTypeIdentifier.TIME.value: TimeFormatDisplayOptionSerializer,\n MathesarTypeIdentifier.DURATION.value: DurationDisplayOptionSerializer,\n }\n\n def get_mapping_field(self):\n db_type = self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY]\n mathesar_type = get_mathesar_type_from_db_type(db_type)\n return mathesar_type\n", "path": "mathesar/api/serializers/shared_serializers.py"}]}
| 3,894 | 597 |
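A side note on the two deleted `self.__class__ = ...` lines in the patch above: reassigning an object's class is a sticky, object-wide change, so a serializer that swaps its own class during one `to_representation` call keeps behaving as that class for every later call. The snippet below only demonstrates that Python behaviour with throwaway classes; it is not Mathesar code.

```python
# Demonstrates why per-call __class__ reassignment is sticky.
class Parent:
    def label(self):
        return "parent"


class Child(Parent):
    def label(self):
        return "child"


obj = Parent()
print(obj.label())      # "parent"
obj.__class__ = Child   # the pattern removed by the patch
print(obj.label())      # "child" -- and it stays "child" for all later calls
```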
gh_patches_debug_2872
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-1599
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GB-NIR invalid data in database
The latest observation seems to be problematic:

therefore it is not shown on the map. However it is inserted in the database.
We should add proper validations to make sure coal/gas are present and that load > 0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/GB_NIR.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from collections import defaultdict
4 from datetime import datetime
5 from io import StringIO
6 from operator import itemgetter
7
8 import logging
9 import pandas as pd
10 import requests
11 from bs4 import BeautifulSoup
12 from dateutil import parser, tz
13
14 from .lib.validation import validate
15
16 thermal_url = 'http://ws.soni.ltd.uk/DownloadCentre/aspx/FuelMix.aspx'
17 wind_url = 'http://ws.soni.ltd.uk/DownloadCentre/aspx/SystemOutput.aspx'
18 exchange_url = 'http://ws.soni.ltd.uk/DownloadCentre/aspx/MoyleTie.aspx'
19 # Positive values represent imports to Northern Ireland.
20 # Negative value represent exports from Northern Ireland.
21
22
23 def get_data(url, session=None):
24 """
25 Requests data from a specified url in CSV format.
26 Returns a response.text object.
27 """
28
29 s = session or requests.Session()
30
31 headers = {
32 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0',
33 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
34 }
35
36 pagereq = requests.get(url, headers=headers)
37 soup = BeautifulSoup(pagereq.text, 'html.parser')
38
39 # Find and define parameters needed to send a POST request for the actual data.
40 viewstategenerator = soup.find("input", attrs={'id': '__VIEWSTATEGENERATOR'})['value']
41 viewstate = soup.find("input", attrs={'id': '__VIEWSTATE'})['value']
42 eventvalidation = soup.find("input", attrs={'id': '__EVENTVALIDATION'})['value']
43
44 # Set date for post request.
45 current_date = datetime.now().date()
46 month = current_date.month
47 day = current_date.day
48 year = current_date.year
49
50 FromDatePicker_clientState = '|0|01%s-%s-%s-0-0-0-0||[[[[]],[],[]],[{%s},[]],"01%s-%s-%s-0-0-0-0"]' % (year, month, day, '', year, month, day)
51 ToDatePicker_clientState = '|0|01%s-%s-%s-0-0-0-0||[[[[]],[],[]],[{%s},[]],"01%s-%s-%s-0-0-0-0"]' % (year, month, day, '', year, month, day)
52 btnDownloadCSV = 'Download+CSV'
53 ig_def_dp_cal_clientState = '|0|15,2017,09,2017,%s,%s||[[null,[],null],[{%s},[]],"11,2017,09,2017,%s,%s"]' % (month, day, '', month, day)
54 IG_CSS_LINKS_ = 'ig_res/default/ig_monthcalendar.css|ig_res/default/ig_texteditor.css|ig_res/default/ig_shared.css'
55
56 postdata = {'__VIEWSTATE': viewstate,
57 '__VIEWSTATEGENERATOR': viewstategenerator,
58 '__EVENTVALIDATION': eventvalidation,
59 'FromDatePicker_clientState': FromDatePicker_clientState,
60 'ToDatePicker_clientState': ToDatePicker_clientState,
61 'btnDownloadCSV': btnDownloadCSV,
62 '_ig_def_dp_cal_clientState': ig_def_dp_cal_clientState,
63 '_IG_CSS_LINKS_': IG_CSS_LINKS_
64 }
65
66 postheaders = {
67 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0',
68 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
69 'Content-Type': 'application/x-www-form-urlencoded'
70 }
71
72 datareq = s.post(url, headers=postheaders, data=postdata)
73
74 return datareq.text
75
76
77 def add_default_tz(timestamp):
78 """
79 Adds Northern Ireland timezone to datetime object if tz = None.
80 """
81
82 NIR = tz.gettz('Europe/Belfast')
83 modified_timestamp = timestamp.replace(tzinfo=timestamp.tzinfo or NIR)
84
85 return modified_timestamp
86
87
88 def create_thermal_df(text_data):
89 """
90 Turns thermal csv data into a usable dataframe.
91 """
92
93 cols_to_use = [0, 1, 2, 3, 4, 5]
94 df_thermal = pd.read_csv(StringIO(text_data),
95 usecols=cols_to_use)
96 df_thermal.fillna(0.0, inplace=True)
97
98 return df_thermal
99
100
101 def create_wind_df(text_data):
102 """
103 Turns wind csv data into a usable dataframe.
104 """
105
106 cols_to_use = [0, 1]
107 df_wind = pd.read_csv(StringIO(text_data),
108 usecols=cols_to_use)
109 df_wind.fillna(0.0, inplace=True)
110
111 return df_wind
112
113
114 def create_exchange_df(text_data):
115 """
116 Turns exchange csv data into a usable dataframe.
117 """
118
119 df_exchange = pd.read_csv(StringIO(text_data))
120 df_exchange.fillna(0.0, inplace=True)
121
122 return df_exchange
123
124
125 def thermal_processor(df):
126 """
127 Creates quarter hour datapoints for thermal production.
128 Returns a list.
129 """
130
131 datapoints = []
132 for index, row in df.iterrows():
133 snapshot = {}
134 snapshot['datetime'] = row['TimeStamp']
135 snapshot['gas'] = row['Gas_MW']
136 snapshot['coal'] = row['Coal_MW']
137 snapshot['oil'] = row['Distillate_MW'] + row['Diesel_MW']
138 datapoints.append(snapshot)
139
140 return datapoints
141
142
143 def wind_processor(df):
144 """
145 Creates quarter hour datapoints for wind production.
146 Returns a list.
147 """
148
149 datapoints = []
150 for index, row in df.iterrows():
151 snapshot = {}
152 snapshot['datetime'] = row['TimeStamp']
153 snapshot['wind'] = row['Total_Wind_Generated_MW']
154 if snapshot['wind'] > -20:
155 snapshot['wind'] = max(snapshot['wind'], 0)
156 datapoints.append(snapshot)
157
158 return datapoints
159
160
161 def moyle_processor(df):
162 """
163 Creates quarter hour datapoints for GB exchange.
164 Returns a list.
165 """
166
167 datapoints = []
168 for index, row in df.iterrows():
169 snapshot = {}
170 snapshot['datetime'] = add_default_tz(parser.parse(row['TimeStamp'],
171 dayfirst=True))
172 snapshot['netFlow'] = row['Total_Moyle_Load_MW']
173 snapshot['source'] = 'soni.ltd.uk'
174 snapshot['sortedZoneKeys'] = 'GB->GB-NIR'
175 datapoints.append(snapshot)
176
177 return datapoints
178
179
180 def IE_processor(df):
181 """
182 Creates quarter hour datapoints for IE exchange.
183 Returns a list.
184 """
185
186 datapoints = []
187 for index, row in df.iterrows():
188 snapshot = {}
189 snapshot['datetime'] = add_default_tz(parser.parse(row['TimeStamp'],
190 dayfirst=True))
191 netFlow = (row['Total_Str_Let_Load_MW'] +
192 row['Total_Enn_Cor_Load_MW'] +
193 row['Total_Tan_Lou_Load_MW'])
194 snapshot['netFlow'] = -1 * (netFlow)
195 snapshot['source'] = 'soni.ltd.uk'
196 snapshot['sortedZoneKeys'] = 'GB-NIR->IE'
197 datapoints.append(snapshot)
198
199 return datapoints
200
201
202 def merge_production(thermal_data, wind_data):
203 """
204 Joins thermal and wind production data on shared datetime key.
205 Returns a list.
206 """
207
208 total_production = thermal_data + wind_data
209
210 # Join thermal and wind dicts on 'datetime' key.
211 d = defaultdict(dict)
212 for elem in total_production:
213 d[elem['datetime']].update(elem)
214
215 joined_data = sorted(d.values(), key=itemgetter("datetime"))
216
217 for datapoint in joined_data:
218 datapoint['datetime'] = add_default_tz(parser.parse(datapoint['datetime'], dayfirst=True))
219
220 return joined_data
221
222
223 def fetch_production(zone_key='GB-NIR', session=None, target_datetime=None,
224 logger=logging.getLogger(__name__)):
225 """
226 Requests the last known production mix (in MW) of a given country
227 Arguments:
228 zone_key (optional) -- used in case a parser is able to fetch multiple countries
229 session (optional) -- request session passed in order to re-use an existing session
230 Return:
231 A dictionary in the form:
232 {
233 'zoneKey': 'FR',
234 'datetime': '2017-01-01T00:00:00Z',
235 'production': {
236 'biomass': 0.0,
237 'coal': 0.0,
238 'gas': 0.0,
239 'hydro': 0.0,
240 'nuclear': null,
241 'oil': 0.0,
242 'solar': 0.0,
243 'wind': 0.0,
244 'geothermal': 0.0,
245 'unknown': 0.0
246 },
247 'storage': {
248 'hydro': -10.0,
249 },
250 'source': 'mysource.com'
251 }
252 """
253 if target_datetime:
254 raise NotImplementedError('This parser is not yet able to parse past dates')
255
256 thermal_data = get_data(thermal_url)
257 wind_data = get_data(wind_url)
258 thermal_df = create_thermal_df(thermal_data)
259 wind_df = create_wind_df(wind_data)
260 thermal = thermal_processor(thermal_df)
261 wind = wind_processor(wind_df)
262 merge = merge_production(thermal, wind)
263
264 production_mix_by_quarter_hour = []
265
266 for datapoint in merge:
267 production_mix = {
268 'zoneKey': zone_key,
269 'datetime': datapoint.get('datetime', 0.0),
270 'production': {
271 'coal': datapoint.get('coal', 0.0),
272 'gas': datapoint.get('gas', 0.0),
273 'oil': datapoint.get('oil', 0.0),
274 'solar': None,
275 'wind': datapoint.get('wind', 0.0)
276 },
277 'source': 'soni.ltd.uk'
278 }
279 production_mix_by_quarter_hour.append(
280 validate(production_mix, logger=logger, required=['gas', 'coal']))
281
282 return production_mix_by_quarter_hour
283
284
285 def fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None):
286 """Requests the last known power exchange (in MW) between two countries
287 Arguments:
288 zone_key (optional) -- used in case a parser is able to fetch multiple countries
289 session (optional) -- request session passed in order to re-use an existing session
290 Return:
291 A dictionary in the form:
292 {
293 'sortedZoneKeys': 'DK->NO',
294 'datetime': '2017-01-01T00:00:00Z',
295 'netFlow': 0.0,
296 'source': 'mysource.com'
297 }
298 """
299 if target_datetime:
300 raise NotImplementedError('This parser is not yet able to parse past dates')
301
302 exchange_data = get_data(exchange_url)
303 exchange_dataframe = create_exchange_df(exchange_data)
304 if '->'.join(sorted([zone_key1, zone_key2])) == 'GB->GB-NIR':
305 moyle = moyle_processor(exchange_dataframe)
306 return moyle
307 elif '->'.join(sorted([zone_key1, zone_key2])) == 'GB-NIR->IE':
308 IE = IE_processor(exchange_dataframe)
309 return IE
310 else:
311 raise NotImplementedError('This exchange pair is not implemented')
312
313
314 if __name__ == '__main__':
315 """Main method, never used by the Electricity Map backend, but handy for testing."""
316
317 print('fetch_production() ->')
318 print(fetch_production())
319 print('fetch_exchange(GB-NIR, GB) ->')
320 print(fetch_exchange('GB-NIR', 'GB'))
321 print('fetch_exchange(GB-NIR, IE) ->')
322 print(fetch_exchange('GB-NIR', 'IE'))
323
```
--- END FILES ---
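One reusable idea in `merge_production` above is how the thermal and wind series are combined: two lists of partial records that share a timestamp key are folded into a single dict-of-dicts and then sorted. The standalone sketch below shows just that pattern with toy data; the records here are illustrative values, not real SONI output.

```python
# Standalone sketch of the datetime-keyed merge used by merge_production;
# the toy records below are illustrative, not real SONI data.
from collections import defaultdict
from operator import itemgetter

thermal = [{"datetime": "2018-08-01 00:15", "gas": 310.0, "coal": 120.0}]
wind = [{"datetime": "2018-08-01 00:15", "wind": 95.0},
        {"datetime": "2018-08-01 00:30", "wind": 102.0}]

merged = defaultdict(dict)
for record in thermal + wind:
    merged[record["datetime"]].update(record)

joined = sorted(merged.values(), key=itemgetter("datetime"))
print(joined)
# [{'datetime': '2018-08-01 00:15', 'gas': 310.0, 'coal': 120.0, 'wind': 95.0},
#  {'datetime': '2018-08-01 00:30', 'wind': 102.0}]
```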
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/GB_NIR.py b/parsers/GB_NIR.py
--- a/parsers/GB_NIR.py
+++ b/parsers/GB_NIR.py
@@ -277,7 +277,7 @@
'source': 'soni.ltd.uk'
}
production_mix_by_quarter_hour.append(
- validate(production_mix, logger=logger, required=['gas', 'coal']))
+ validate(production_mix, logger=logger, required=['gas', 'coal'], floor=1.0))
return production_mix_by_quarter_hour
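Note on the one-line fix above: `required=['gas', 'coal']` was already being passed, so the added `floor=1.0` is what covers the "load > 0" half of the issue — assuming the validation helper treats `floor` as a minimum on total reported generation, an all-zero observation like the one in the issue screenshot is now discarded instead of being written to the database.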
|
{"golden_diff": "diff --git a/parsers/GB_NIR.py b/parsers/GB_NIR.py\n--- a/parsers/GB_NIR.py\n+++ b/parsers/GB_NIR.py\n@@ -277,7 +277,7 @@\n 'source': 'soni.ltd.uk'\n }\n production_mix_by_quarter_hour.append(\n- validate(production_mix, logger=logger, required=['gas', 'coal']))\n+ validate(production_mix, logger=logger, required=['gas', 'coal'], floor=1.0))\n \n return production_mix_by_quarter_hour\n", "issue": "GB-NIR invalid data in database\nThe latest observation seems to be problematic:\r\n\r\n\r\n\r\ntherefore it is not shown on the map. However it is inserted in the database.\r\nWe should add proper validations to make sure coal/gas are present and that load > 0\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom collections import defaultdict\nfrom datetime import datetime\nfrom io import StringIO\nfrom operator import itemgetter\n\nimport logging\nimport pandas as pd\nimport requests\nfrom bs4 import BeautifulSoup\nfrom dateutil import parser, tz\n\nfrom .lib.validation import validate\n\nthermal_url = 'http://ws.soni.ltd.uk/DownloadCentre/aspx/FuelMix.aspx'\nwind_url = 'http://ws.soni.ltd.uk/DownloadCentre/aspx/SystemOutput.aspx'\nexchange_url = 'http://ws.soni.ltd.uk/DownloadCentre/aspx/MoyleTie.aspx'\n# Positive values represent imports to Northern Ireland.\n# Negative value represent exports from Northern Ireland.\n\n\ndef get_data(url, session=None):\n \"\"\"\n Requests data from a specified url in CSV format.\n Returns a response.text object.\n \"\"\"\n\n s = session or requests.Session()\n\n headers = {\n 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0',\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'\n }\n\n pagereq = requests.get(url, headers=headers)\n soup = BeautifulSoup(pagereq.text, 'html.parser')\n\n # Find and define parameters needed to send a POST request for the actual data.\n viewstategenerator = soup.find(\"input\", attrs={'id': '__VIEWSTATEGENERATOR'})['value']\n viewstate = soup.find(\"input\", attrs={'id': '__VIEWSTATE'})['value']\n eventvalidation = soup.find(\"input\", attrs={'id': '__EVENTVALIDATION'})['value']\n\n # Set date for post request.\n current_date = datetime.now().date()\n month = current_date.month\n day = current_date.day\n year = current_date.year\n\n FromDatePicker_clientState = '|0|01%s-%s-%s-0-0-0-0||[[[[]],[],[]],[{%s},[]],\"01%s-%s-%s-0-0-0-0\"]' % (year, month, day, '', year, month, day)\n ToDatePicker_clientState = '|0|01%s-%s-%s-0-0-0-0||[[[[]],[],[]],[{%s},[]],\"01%s-%s-%s-0-0-0-0\"]' % (year, month, day, '', year, month, day)\n btnDownloadCSV = 'Download+CSV'\n ig_def_dp_cal_clientState = '|0|15,2017,09,2017,%s,%s||[[null,[],null],[{%s},[]],\"11,2017,09,2017,%s,%s\"]' % (month, day, '', month, day)\n IG_CSS_LINKS_ = 'ig_res/default/ig_monthcalendar.css|ig_res/default/ig_texteditor.css|ig_res/default/ig_shared.css'\n\n postdata = {'__VIEWSTATE': viewstate,\n '__VIEWSTATEGENERATOR': viewstategenerator,\n '__EVENTVALIDATION': eventvalidation,\n 'FromDatePicker_clientState': FromDatePicker_clientState,\n 'ToDatePicker_clientState': ToDatePicker_clientState,\n 'btnDownloadCSV': btnDownloadCSV,\n '_ig_def_dp_cal_clientState': ig_def_dp_cal_clientState,\n '_IG_CSS_LINKS_': IG_CSS_LINKS_\n }\n\n postheaders = {\n 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0',\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n 'Content-Type': 'application/x-www-form-urlencoded'\n }\n\n datareq 
= s.post(url, headers=postheaders, data=postdata)\n\n return datareq.text\n\n\ndef add_default_tz(timestamp):\n \"\"\"\n Adds Northern Ireland timezone to datetime object if tz = None.\n \"\"\"\n\n NIR = tz.gettz('Europe/Belfast')\n modified_timestamp = timestamp.replace(tzinfo=timestamp.tzinfo or NIR)\n\n return modified_timestamp\n\n\ndef create_thermal_df(text_data):\n \"\"\"\n Turns thermal csv data into a usable dataframe.\n \"\"\"\n\n cols_to_use = [0, 1, 2, 3, 4, 5]\n df_thermal = pd.read_csv(StringIO(text_data),\n usecols=cols_to_use)\n df_thermal.fillna(0.0, inplace=True)\n\n return df_thermal\n\n\ndef create_wind_df(text_data):\n \"\"\"\n Turns wind csv data into a usable dataframe.\n \"\"\"\n\n cols_to_use = [0, 1]\n df_wind = pd.read_csv(StringIO(text_data),\n usecols=cols_to_use)\n df_wind.fillna(0.0, inplace=True)\n\n return df_wind\n\n\ndef create_exchange_df(text_data):\n \"\"\"\n Turns exchange csv data into a usable dataframe.\n \"\"\"\n\n df_exchange = pd.read_csv(StringIO(text_data))\n df_exchange.fillna(0.0, inplace=True)\n\n return df_exchange\n\n\ndef thermal_processor(df):\n \"\"\"\n Creates quarter hour datapoints for thermal production.\n Returns a list.\n \"\"\"\n\n datapoints = []\n for index, row in df.iterrows():\n snapshot = {}\n snapshot['datetime'] = row['TimeStamp']\n snapshot['gas'] = row['Gas_MW']\n snapshot['coal'] = row['Coal_MW']\n snapshot['oil'] = row['Distillate_MW'] + row['Diesel_MW']\n datapoints.append(snapshot)\n\n return datapoints\n\n\ndef wind_processor(df):\n \"\"\"\n Creates quarter hour datapoints for wind production.\n Returns a list.\n \"\"\"\n\n datapoints = []\n for index, row in df.iterrows():\n snapshot = {}\n snapshot['datetime'] = row['TimeStamp']\n snapshot['wind'] = row['Total_Wind_Generated_MW']\n if snapshot['wind'] > -20:\n snapshot['wind'] = max(snapshot['wind'], 0)\n datapoints.append(snapshot)\n\n return datapoints\n\n\ndef moyle_processor(df):\n \"\"\"\n Creates quarter hour datapoints for GB exchange.\n Returns a list.\n \"\"\"\n\n datapoints = []\n for index, row in df.iterrows():\n snapshot = {}\n snapshot['datetime'] = add_default_tz(parser.parse(row['TimeStamp'],\n dayfirst=True))\n snapshot['netFlow'] = row['Total_Moyle_Load_MW']\n snapshot['source'] = 'soni.ltd.uk'\n snapshot['sortedZoneKeys'] = 'GB->GB-NIR'\n datapoints.append(snapshot)\n\n return datapoints\n\n\ndef IE_processor(df):\n \"\"\"\n Creates quarter hour datapoints for IE exchange.\n Returns a list.\n \"\"\"\n\n datapoints = []\n for index, row in df.iterrows():\n snapshot = {}\n snapshot['datetime'] = add_default_tz(parser.parse(row['TimeStamp'],\n dayfirst=True))\n netFlow = (row['Total_Str_Let_Load_MW'] +\n row['Total_Enn_Cor_Load_MW'] +\n row['Total_Tan_Lou_Load_MW'])\n snapshot['netFlow'] = -1 * (netFlow)\n snapshot['source'] = 'soni.ltd.uk'\n snapshot['sortedZoneKeys'] = 'GB-NIR->IE'\n datapoints.append(snapshot)\n\n return datapoints\n\n\ndef merge_production(thermal_data, wind_data):\n \"\"\"\n Joins thermal and wind production data on shared datetime key.\n Returns a list.\n \"\"\"\n\n total_production = thermal_data + wind_data\n\n # Join thermal and wind dicts on 'datetime' key.\n d = defaultdict(dict)\n for elem in total_production:\n d[elem['datetime']].update(elem)\n\n joined_data = sorted(d.values(), key=itemgetter(\"datetime\"))\n\n for datapoint in joined_data:\n datapoint['datetime'] = add_default_tz(parser.parse(datapoint['datetime'], dayfirst=True))\n\n return joined_data\n\n\ndef fetch_production(zone_key='GB-NIR', session=None, 
target_datetime=None,\n logger=logging.getLogger(__name__)):\n \"\"\"\n Requests the last known production mix (in MW) of a given country\n Arguments:\n zone_key (optional) -- used in case a parser is able to fetch multiple countries\n session (optional) -- request session passed in order to re-use an existing session\n Return:\n A dictionary in the form:\n {\n 'zoneKey': 'FR',\n 'datetime': '2017-01-01T00:00:00Z',\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': null,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {\n 'hydro': -10.0,\n },\n 'source': 'mysource.com'\n }\n \"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n thermal_data = get_data(thermal_url)\n wind_data = get_data(wind_url)\n thermal_df = create_thermal_df(thermal_data)\n wind_df = create_wind_df(wind_data)\n thermal = thermal_processor(thermal_df)\n wind = wind_processor(wind_df)\n merge = merge_production(thermal, wind)\n\n production_mix_by_quarter_hour = []\n\n for datapoint in merge:\n production_mix = {\n 'zoneKey': zone_key,\n 'datetime': datapoint.get('datetime', 0.0),\n 'production': {\n 'coal': datapoint.get('coal', 0.0),\n 'gas': datapoint.get('gas', 0.0),\n 'oil': datapoint.get('oil', 0.0),\n 'solar': None,\n 'wind': datapoint.get('wind', 0.0)\n },\n 'source': 'soni.ltd.uk'\n }\n production_mix_by_quarter_hour.append(\n validate(production_mix, logger=logger, required=['gas', 'coal']))\n\n return production_mix_by_quarter_hour\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None):\n \"\"\"Requests the last known power exchange (in MW) between two countries\n Arguments:\n zone_key (optional) -- used in case a parser is able to fetch multiple countries\n session (optional) -- request session passed in order to re-use an existing session\n Return:\n A dictionary in the form:\n {\n 'sortedZoneKeys': 'DK->NO',\n 'datetime': '2017-01-01T00:00:00Z',\n 'netFlow': 0.0,\n 'source': 'mysource.com'\n }\n \"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n exchange_data = get_data(exchange_url)\n exchange_dataframe = create_exchange_df(exchange_data)\n if '->'.join(sorted([zone_key1, zone_key2])) == 'GB->GB-NIR':\n moyle = moyle_processor(exchange_dataframe)\n return moyle\n elif '->'.join(sorted([zone_key1, zone_key2])) == 'GB-NIR->IE':\n IE = IE_processor(exchange_dataframe)\n return IE\n else:\n raise NotImplementedError('This exchange pair is not implemented')\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n print('fetch_production() ->')\n print(fetch_production())\n print('fetch_exchange(GB-NIR, GB) ->')\n print(fetch_exchange('GB-NIR', 'GB'))\n print('fetch_exchange(GB-NIR, IE) ->')\n print(fetch_exchange('GB-NIR', 'IE'))\n", "path": "parsers/GB_NIR.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom collections import defaultdict\nfrom datetime import datetime\nfrom io import StringIO\nfrom operator import itemgetter\n\nimport logging\nimport pandas as pd\nimport requests\nfrom bs4 import BeautifulSoup\nfrom dateutil import parser, tz\n\nfrom .lib.validation import validate\n\nthermal_url = 'http://ws.soni.ltd.uk/DownloadCentre/aspx/FuelMix.aspx'\nwind_url = 'http://ws.soni.ltd.uk/DownloadCentre/aspx/SystemOutput.aspx'\nexchange_url = 
'http://ws.soni.ltd.uk/DownloadCentre/aspx/MoyleTie.aspx'\n# Positive values represent imports to Northern Ireland.\n# Negative value represent exports from Northern Ireland.\n\n\ndef get_data(url, session=None):\n \"\"\"\n Requests data from a specified url in CSV format.\n Returns a response.text object.\n \"\"\"\n\n s = session or requests.Session()\n\n headers = {\n 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0',\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'\n }\n\n pagereq = requests.get(url, headers=headers)\n soup = BeautifulSoup(pagereq.text, 'html.parser')\n\n # Find and define parameters needed to send a POST request for the actual data.\n viewstategenerator = soup.find(\"input\", attrs={'id': '__VIEWSTATEGENERATOR'})['value']\n viewstate = soup.find(\"input\", attrs={'id': '__VIEWSTATE'})['value']\n eventvalidation = soup.find(\"input\", attrs={'id': '__EVENTVALIDATION'})['value']\n\n # Set date for post request.\n current_date = datetime.now().date()\n month = current_date.month\n day = current_date.day\n year = current_date.year\n\n FromDatePicker_clientState = '|0|01%s-%s-%s-0-0-0-0||[[[[]],[],[]],[{%s},[]],\"01%s-%s-%s-0-0-0-0\"]' % (year, month, day, '', year, month, day)\n ToDatePicker_clientState = '|0|01%s-%s-%s-0-0-0-0||[[[[]],[],[]],[{%s},[]],\"01%s-%s-%s-0-0-0-0\"]' % (year, month, day, '', year, month, day)\n btnDownloadCSV = 'Download+CSV'\n ig_def_dp_cal_clientState = '|0|15,2017,09,2017,%s,%s||[[null,[],null],[{%s},[]],\"11,2017,09,2017,%s,%s\"]' % (month, day, '', month, day)\n IG_CSS_LINKS_ = 'ig_res/default/ig_monthcalendar.css|ig_res/default/ig_texteditor.css|ig_res/default/ig_shared.css'\n\n postdata = {'__VIEWSTATE': viewstate,\n '__VIEWSTATEGENERATOR': viewstategenerator,\n '__EVENTVALIDATION': eventvalidation,\n 'FromDatePicker_clientState': FromDatePicker_clientState,\n 'ToDatePicker_clientState': ToDatePicker_clientState,\n 'btnDownloadCSV': btnDownloadCSV,\n '_ig_def_dp_cal_clientState': ig_def_dp_cal_clientState,\n '_IG_CSS_LINKS_': IG_CSS_LINKS_\n }\n\n postheaders = {\n 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0',\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n 'Content-Type': 'application/x-www-form-urlencoded'\n }\n\n datareq = s.post(url, headers=postheaders, data=postdata)\n\n return datareq.text\n\n\ndef add_default_tz(timestamp):\n \"\"\"\n Adds Northern Ireland timezone to datetime object if tz = None.\n \"\"\"\n\n NIR = tz.gettz('Europe/Belfast')\n modified_timestamp = timestamp.replace(tzinfo=timestamp.tzinfo or NIR)\n\n return modified_timestamp\n\n\ndef create_thermal_df(text_data):\n \"\"\"\n Turns thermal csv data into a usable dataframe.\n \"\"\"\n\n cols_to_use = [0, 1, 2, 3, 4, 5]\n df_thermal = pd.read_csv(StringIO(text_data),\n usecols=cols_to_use)\n df_thermal.fillna(0.0, inplace=True)\n\n return df_thermal\n\n\ndef create_wind_df(text_data):\n \"\"\"\n Turns wind csv data into a usable dataframe.\n \"\"\"\n\n cols_to_use = [0, 1]\n df_wind = pd.read_csv(StringIO(text_data),\n usecols=cols_to_use)\n df_wind.fillna(0.0, inplace=True)\n\n return df_wind\n\n\ndef create_exchange_df(text_data):\n \"\"\"\n Turns exchange csv data into a usable dataframe.\n \"\"\"\n\n df_exchange = pd.read_csv(StringIO(text_data))\n df_exchange.fillna(0.0, inplace=True)\n\n return df_exchange\n\n\ndef thermal_processor(df):\n \"\"\"\n Creates quarter hour datapoints for thermal production.\n 
Returns a list.\n \"\"\"\n\n datapoints = []\n for index, row in df.iterrows():\n snapshot = {}\n snapshot['datetime'] = row['TimeStamp']\n snapshot['gas'] = row['Gas_MW']\n snapshot['coal'] = row['Coal_MW']\n snapshot['oil'] = row['Distillate_MW'] + row['Diesel_MW']\n datapoints.append(snapshot)\n\n return datapoints\n\n\ndef wind_processor(df):\n \"\"\"\n Creates quarter hour datapoints for wind production.\n Returns a list.\n \"\"\"\n\n datapoints = []\n for index, row in df.iterrows():\n snapshot = {}\n snapshot['datetime'] = row['TimeStamp']\n snapshot['wind'] = row['Total_Wind_Generated_MW']\n if snapshot['wind'] > -20:\n snapshot['wind'] = max(snapshot['wind'], 0)\n datapoints.append(snapshot)\n\n return datapoints\n\n\ndef moyle_processor(df):\n \"\"\"\n Creates quarter hour datapoints for GB exchange.\n Returns a list.\n \"\"\"\n\n datapoints = []\n for index, row in df.iterrows():\n snapshot = {}\n snapshot['datetime'] = add_default_tz(parser.parse(row['TimeStamp'],\n dayfirst=True))\n snapshot['netFlow'] = row['Total_Moyle_Load_MW']\n snapshot['source'] = 'soni.ltd.uk'\n snapshot['sortedZoneKeys'] = 'GB->GB-NIR'\n datapoints.append(snapshot)\n\n return datapoints\n\n\ndef IE_processor(df):\n \"\"\"\n Creates quarter hour datapoints for IE exchange.\n Returns a list.\n \"\"\"\n\n datapoints = []\n for index, row in df.iterrows():\n snapshot = {}\n snapshot['datetime'] = add_default_tz(parser.parse(row['TimeStamp'],\n dayfirst=True))\n netFlow = (row['Total_Str_Let_Load_MW'] +\n row['Total_Enn_Cor_Load_MW'] +\n row['Total_Tan_Lou_Load_MW'])\n snapshot['netFlow'] = -1 * (netFlow)\n snapshot['source'] = 'soni.ltd.uk'\n snapshot['sortedZoneKeys'] = 'GB-NIR->IE'\n datapoints.append(snapshot)\n\n return datapoints\n\n\ndef merge_production(thermal_data, wind_data):\n \"\"\"\n Joins thermal and wind production data on shared datetime key.\n Returns a list.\n \"\"\"\n\n total_production = thermal_data + wind_data\n\n # Join thermal and wind dicts on 'datetime' key.\n d = defaultdict(dict)\n for elem in total_production:\n d[elem['datetime']].update(elem)\n\n joined_data = sorted(d.values(), key=itemgetter(\"datetime\"))\n\n for datapoint in joined_data:\n datapoint['datetime'] = add_default_tz(parser.parse(datapoint['datetime'], dayfirst=True))\n\n return joined_data\n\n\ndef fetch_production(zone_key='GB-NIR', session=None, target_datetime=None,\n logger=logging.getLogger(__name__)):\n \"\"\"\n Requests the last known production mix (in MW) of a given country\n Arguments:\n zone_key (optional) -- used in case a parser is able to fetch multiple countries\n session (optional) -- request session passed in order to re-use an existing session\n Return:\n A dictionary in the form:\n {\n 'zoneKey': 'FR',\n 'datetime': '2017-01-01T00:00:00Z',\n 'production': {\n 'biomass': 0.0,\n 'coal': 0.0,\n 'gas': 0.0,\n 'hydro': 0.0,\n 'nuclear': null,\n 'oil': 0.0,\n 'solar': 0.0,\n 'wind': 0.0,\n 'geothermal': 0.0,\n 'unknown': 0.0\n },\n 'storage': {\n 'hydro': -10.0,\n },\n 'source': 'mysource.com'\n }\n \"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n thermal_data = get_data(thermal_url)\n wind_data = get_data(wind_url)\n thermal_df = create_thermal_df(thermal_data)\n wind_df = create_wind_df(wind_data)\n thermal = thermal_processor(thermal_df)\n wind = wind_processor(wind_df)\n merge = merge_production(thermal, wind)\n\n production_mix_by_quarter_hour = []\n\n for datapoint in merge:\n production_mix = {\n 'zoneKey': zone_key,\n 
'datetime': datapoint.get('datetime', 0.0),\n 'production': {\n 'coal': datapoint.get('coal', 0.0),\n 'gas': datapoint.get('gas', 0.0),\n 'oil': datapoint.get('oil', 0.0),\n 'solar': None,\n 'wind': datapoint.get('wind', 0.0)\n },\n 'source': 'soni.ltd.uk'\n }\n production_mix_by_quarter_hour.append(\n validate(production_mix, logger=logger, required=['gas', 'coal'], floor=1.0))\n\n return production_mix_by_quarter_hour\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None):\n \"\"\"Requests the last known power exchange (in MW) between two countries\n Arguments:\n zone_key (optional) -- used in case a parser is able to fetch multiple countries\n session (optional) -- request session passed in order to re-use an existing session\n Return:\n A dictionary in the form:\n {\n 'sortedZoneKeys': 'DK->NO',\n 'datetime': '2017-01-01T00:00:00Z',\n 'netFlow': 0.0,\n 'source': 'mysource.com'\n }\n \"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n exchange_data = get_data(exchange_url)\n exchange_dataframe = create_exchange_df(exchange_data)\n if '->'.join(sorted([zone_key1, zone_key2])) == 'GB->GB-NIR':\n moyle = moyle_processor(exchange_dataframe)\n return moyle\n elif '->'.join(sorted([zone_key1, zone_key2])) == 'GB-NIR->IE':\n IE = IE_processor(exchange_dataframe)\n return IE\n else:\n raise NotImplementedError('This exchange pair is not implemented')\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n print('fetch_production() ->')\n print(fetch_production())\n print('fetch_exchange(GB-NIR, GB) ->')\n print(fetch_exchange('GB-NIR', 'GB'))\n print('fetch_exchange(GB-NIR, IE) ->')\n print(fetch_exchange('GB-NIR', 'IE'))\n", "path": "parsers/GB_NIR.py"}]}
| 3,962 | 128 |
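The patched `after_files` version of `parsers/GB_NIR.py` above adds a `floor=1.0` argument to the `validate` call in `fetch_production`. As a rough illustration of what such a floor check does, here is a minimal sketch of a validation helper; the real implementation lives in `parsers/lib/validation.py` and may apply different or additional checks, and the name `validate_floor` is used only to avoid implying this is the actual function.

```python
import logging
from typing import Iterable, Optional


def validate_floor(datapoint: dict, logger: logging.Logger,
                   required: Iterable[str] = (),
                   floor: Optional[float] = None) -> Optional[dict]:
    """Illustrative stand-in for parsers.lib.validation.validate (assumption)."""
    production = datapoint.get('production', {})

    # Drop the datapoint if any required fuel type is missing or None.
    for key in required:
        if production.get(key) is None:
            logger.warning("missing required key %r, discarding datapoint", key)
            return None

    # Drop the datapoint if total reported production is implausibly low.
    if floor is not None:
        total = sum(v for v in production.values() if isinstance(v, (int, float)))
        if total < floor:
            logger.warning("total production %.2f MW is below floor %.2f MW, discarding",
                           total, floor)
            return None

    return datapoint


log = logging.getLogger(__name__)
suspicious = {'production': {'gas': 0.0, 'coal': 0.0, 'oil': 0.0, 'wind': 0.4}}
print(validate_floor(suspicious, log, required=['gas', 'coal'], floor=1.0))  # None
```

With `floor=1.0`, quarter-hour datapoints that report essentially zero total generation appear to be filtered out rather than passed upstream.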
gh_patches_debug_22068 | rasdani/github-patches | git_diff | psf__black-3773 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Internal error on type hinted comment with trailing space
**Describe the bug**
Black is encountering an `INTERNAL ERROR` when a type hinted comment has a trailing space.
**To Reproduce**
For example, take this code, which has a trailing space after `# type: dict[str, str] `:
```python
d = {} # type: dict[str, str] 
```
Black encounters an internal error on it; remove the trailing whitespace character (leaving `# type: dict[str, str]`) and the file is formatted.
And run it with these arguments:
```sh
$ black t.py
```
The resulting error is:
> error: cannot format t.py: INTERNAL ERROR: Black produced code that is not equivalent to the source. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /tmp/blk_66snb7vb.log
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Environment**
- Black's version: black==22.12.0
- OS and Python version: WSL Ubuntu 22.04 Python 3.10.6
**Additional context**
here's the log
```
--- src
+++ dst
@@ -8,11 +8,11 @@
) # /Store
id=
'd', # str
) # /Name
type_comment=
- 'dict[str, str] ', # str
+ 'dict[str, str]', # str
value=
Dict(
keys=
values=
) # /Dict
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/black/parsing.py`
Content:
```
1 """
2 Parse Python code and perform AST validation.
3 """
4 import ast
5 import sys
6 from typing import Final, Iterable, Iterator, List, Set, Tuple
7
8 from black.mode import VERSION_TO_FEATURES, Feature, TargetVersion, supports_feature
9 from black.nodes import syms
10 from blib2to3 import pygram
11 from blib2to3.pgen2 import driver
12 from blib2to3.pgen2.grammar import Grammar
13 from blib2to3.pgen2.parse import ParseError
14 from blib2to3.pgen2.tokenize import TokenError
15 from blib2to3.pytree import Leaf, Node
16
17 PY2_HINT: Final = "Python 2 support was removed in version 22.0."
18
19
20 class InvalidInput(ValueError):
21 """Raised when input source code fails all parse attempts."""
22
23
24 def get_grammars(target_versions: Set[TargetVersion]) -> List[Grammar]:
25 if not target_versions:
26 # No target_version specified, so try all grammars.
27 return [
28 # Python 3.7-3.9
29 pygram.python_grammar_no_print_statement_no_exec_statement_async_keywords,
30 # Python 3.0-3.6
31 pygram.python_grammar_no_print_statement_no_exec_statement,
32 # Python 3.10+
33 pygram.python_grammar_soft_keywords,
34 ]
35
36 grammars = []
37 # If we have to parse both, try to parse async as a keyword first
38 if not supports_feature(
39 target_versions, Feature.ASYNC_IDENTIFIERS
40 ) and not supports_feature(target_versions, Feature.PATTERN_MATCHING):
41 # Python 3.7-3.9
42 grammars.append(
43 pygram.python_grammar_no_print_statement_no_exec_statement_async_keywords
44 )
45 if not supports_feature(target_versions, Feature.ASYNC_KEYWORDS):
46 # Python 3.0-3.6
47 grammars.append(pygram.python_grammar_no_print_statement_no_exec_statement)
48 if any(Feature.PATTERN_MATCHING in VERSION_TO_FEATURES[v] for v in target_versions):
49 # Python 3.10+
50 grammars.append(pygram.python_grammar_soft_keywords)
51
52 # At least one of the above branches must have been taken, because every Python
53 # version has exactly one of the two 'ASYNC_*' flags
54 return grammars
55
56
57 def lib2to3_parse(src_txt: str, target_versions: Iterable[TargetVersion] = ()) -> Node:
58 """Given a string with source, return the lib2to3 Node."""
59 if not src_txt.endswith("\n"):
60 src_txt += "\n"
61
62 grammars = get_grammars(set(target_versions))
63 errors = {}
64 for grammar in grammars:
65 drv = driver.Driver(grammar)
66 try:
67 result = drv.parse_string(src_txt, True)
68 break
69
70 except ParseError as pe:
71 lineno, column = pe.context[1]
72 lines = src_txt.splitlines()
73 try:
74 faulty_line = lines[lineno - 1]
75 except IndexError:
76 faulty_line = "<line number missing in source>"
77 errors[grammar.version] = InvalidInput(
78 f"Cannot parse: {lineno}:{column}: {faulty_line}"
79 )
80
81 except TokenError as te:
82 # In edge cases these are raised; and typically don't have a "faulty_line".
83 lineno, column = te.args[1]
84 errors[grammar.version] = InvalidInput(
85 f"Cannot parse: {lineno}:{column}: {te.args[0]}"
86 )
87
88 else:
89 # Choose the latest version when raising the actual parsing error.
90 assert len(errors) >= 1
91 exc = errors[max(errors)]
92
93 if matches_grammar(src_txt, pygram.python_grammar) or matches_grammar(
94 src_txt, pygram.python_grammar_no_print_statement
95 ):
96 original_msg = exc.args[0]
97 msg = f"{original_msg}\n{PY2_HINT}"
98 raise InvalidInput(msg) from None
99
100 raise exc from None
101
102 if isinstance(result, Leaf):
103 result = Node(syms.file_input, [result])
104 return result
105
106
107 def matches_grammar(src_txt: str, grammar: Grammar) -> bool:
108 drv = driver.Driver(grammar)
109 try:
110 drv.parse_string(src_txt, True)
111 except (ParseError, TokenError, IndentationError):
112 return False
113 else:
114 return True
115
116
117 def lib2to3_unparse(node: Node) -> str:
118 """Given a lib2to3 node, return its string representation."""
119 code = str(node)
120 return code
121
122
123 def parse_single_version(
124 src: str, version: Tuple[int, int], *, type_comments: bool
125 ) -> ast.AST:
126 filename = "<unknown>"
127 return ast.parse(
128 src, filename, feature_version=version, type_comments=type_comments
129 )
130
131
132 def parse_ast(src: str) -> ast.AST:
133 # TODO: support Python 4+ ;)
134 versions = [(3, minor) for minor in range(3, sys.version_info[1] + 1)]
135
136 first_error = ""
137 for version in sorted(versions, reverse=True):
138 try:
139 return parse_single_version(src, version, type_comments=True)
140 except SyntaxError as e:
141 if not first_error:
142 first_error = str(e)
143
144 # Try to parse without type comments
145 for version in sorted(versions, reverse=True):
146 try:
147 return parse_single_version(src, version, type_comments=False)
148 except SyntaxError:
149 pass
150
151 raise SyntaxError(first_error)
152
153
154 def _normalize(lineend: str, value: str) -> str:
155 # To normalize, we strip any leading and trailing space from
156 # each line...
157 stripped: List[str] = [i.strip() for i in value.splitlines()]
158 normalized = lineend.join(stripped)
159 # ...and remove any blank lines at the beginning and end of
160 # the whole string
161 return normalized.strip()
162
163
164 def stringify_ast(node: ast.AST, depth: int = 0) -> Iterator[str]:
165 """Simple visitor generating strings to compare ASTs by content."""
166
167 if (
168 isinstance(node, ast.Constant)
169 and isinstance(node.value, str)
170 and node.kind == "u"
171 ):
172 # It's a quirk of history that we strip the u prefix over here. We used to
173 # rewrite the AST nodes for Python version compatibility and we never copied
174 # over the kind
175 node.kind = None
176
177 yield f"{' ' * depth}{node.__class__.__name__}("
178
179 for field in sorted(node._fields): # noqa: F402
180 # TypeIgnore has only one field 'lineno' which breaks this comparison
181 if isinstance(node, ast.TypeIgnore):
182 break
183
184 try:
185 value: object = getattr(node, field)
186 except AttributeError:
187 continue
188
189 yield f"{' ' * (depth+1)}{field}="
190
191 if isinstance(value, list):
192 for item in value:
193 # Ignore nested tuples within del statements, because we may insert
194 # parentheses and they change the AST.
195 if (
196 field == "targets"
197 and isinstance(node, ast.Delete)
198 and isinstance(item, ast.Tuple)
199 ):
200 for elt in item.elts:
201 yield from stringify_ast(elt, depth + 2)
202
203 elif isinstance(item, ast.AST):
204 yield from stringify_ast(item, depth + 2)
205
206 elif isinstance(value, ast.AST):
207 yield from stringify_ast(value, depth + 2)
208
209 else:
210 normalized: object
211 # Constant strings may be indented across newlines, if they are
212 # docstrings; fold spaces after newlines when comparing. Similarly,
213 # trailing and leading space may be removed.
214 if (
215 isinstance(node, ast.Constant)
216 and field == "value"
217 and isinstance(value, str)
218 ):
219 normalized = _normalize("\n", value)
220 else:
221 normalized = value
222 yield f"{' ' * (depth+2)}{normalized!r}, # {value.__class__.__name__}"
223
224 yield f"{' ' * depth}) # /{node.__class__.__name__}"
225
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/black/parsing.py b/src/black/parsing.py
--- a/src/black/parsing.py
+++ b/src/black/parsing.py
@@ -208,15 +208,18 @@
else:
normalized: object
- # Constant strings may be indented across newlines, if they are
- # docstrings; fold spaces after newlines when comparing. Similarly,
- # trailing and leading space may be removed.
if (
isinstance(node, ast.Constant)
and field == "value"
and isinstance(value, str)
):
+ # Constant strings may be indented across newlines, if they are
+ # docstrings; fold spaces after newlines when comparing. Similarly,
+ # trailing and leading space may be removed.
normalized = _normalize("\n", value)
+ elif field == "type_comment" and isinstance(value, str):
+ # Trailing whitespace in type comments is removed.
+ normalized = value.rstrip()
else:
normalized = value
yield f"{' ' * (depth+2)}{normalized!r}, # {value.__class__.__name__}"
|
{"golden_diff": "diff --git a/src/black/parsing.py b/src/black/parsing.py\n--- a/src/black/parsing.py\n+++ b/src/black/parsing.py\n@@ -208,15 +208,18 @@\n \n else:\n normalized: object\n- # Constant strings may be indented across newlines, if they are\n- # docstrings; fold spaces after newlines when comparing. Similarly,\n- # trailing and leading space may be removed.\n if (\n isinstance(node, ast.Constant)\n and field == \"value\"\n and isinstance(value, str)\n ):\n+ # Constant strings may be indented across newlines, if they are\n+ # docstrings; fold spaces after newlines when comparing. Similarly,\n+ # trailing and leading space may be removed.\n normalized = _normalize(\"\\n\", value)\n+ elif field == \"type_comment\" and isinstance(value, str):\n+ # Trailing whitespace in type comments is removed.\n+ normalized = value.rstrip()\n else:\n normalized = value\n yield f\"{' ' * (depth+2)}{normalized!r}, # {value.__class__.__name__}\"\n", "issue": "Internal error on type hinted comment with trailing space\n**Describe the bug**\r\n\r\nBlack is encountering an `INTERNAL ERROR` when a type hinted comment has a trailing space.\r\n\r\n**To Reproduce**\r\n\r\n\r\nFor example, take this code:\r\nWith the trailing space after `# type: dict[str, str] ` black encounters an internal error\r\n```python\r\nd = {} # type: dict[str, str] \r\n```\r\nremove the white space character like `# type: dict[str, str]` and the file is formatted.\r\n\r\nAnd run it with these arguments:\r\n\r\n```sh\r\n$ black t.py \r\n\r\n```\r\n\r\nThe resulting error is:\r\n\r\n> error: cannot format t.py: INTERNAL ERROR: Black produced code that is not equivalent to the source. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /tmp/blk_66snb7vb.log\r\n\r\n**Expected behavior**\r\n\r\n<!-- A clear and concise description of what you expected to happen. 
-->\r\n\r\n**Environment**\r\n\r\n- Black's version: black==22.12.0\r\n- OS and Python version: WSL Ubuntu 22.04 Python 3.10.6\r\n\r\n**Additional context**\r\nhere's the log\r\n```\r\n--- src\r\n+++ dst\r\n@@ -8,11 +8,11 @@\r\n ) # /Store\r\n id=\r\n 'd', # str\r\n ) # /Name\r\n type_comment=\r\n- 'dict[str, str] ', # str\r\n+ 'dict[str, str]', # str\r\n value=\r\n Dict(\r\n keys=\r\n values=\r\n ) # /Dict\r\n\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nParse Python code and perform AST validation.\n\"\"\"\nimport ast\nimport sys\nfrom typing import Final, Iterable, Iterator, List, Set, Tuple\n\nfrom black.mode import VERSION_TO_FEATURES, Feature, TargetVersion, supports_feature\nfrom black.nodes import syms\nfrom blib2to3 import pygram\nfrom blib2to3.pgen2 import driver\nfrom blib2to3.pgen2.grammar import Grammar\nfrom blib2to3.pgen2.parse import ParseError\nfrom blib2to3.pgen2.tokenize import TokenError\nfrom blib2to3.pytree import Leaf, Node\n\nPY2_HINT: Final = \"Python 2 support was removed in version 22.0.\"\n\n\nclass InvalidInput(ValueError):\n \"\"\"Raised when input source code fails all parse attempts.\"\"\"\n\n\ndef get_grammars(target_versions: Set[TargetVersion]) -> List[Grammar]:\n if not target_versions:\n # No target_version specified, so try all grammars.\n return [\n # Python 3.7-3.9\n pygram.python_grammar_no_print_statement_no_exec_statement_async_keywords,\n # Python 3.0-3.6\n pygram.python_grammar_no_print_statement_no_exec_statement,\n # Python 3.10+\n pygram.python_grammar_soft_keywords,\n ]\n\n grammars = []\n # If we have to parse both, try to parse async as a keyword first\n if not supports_feature(\n target_versions, Feature.ASYNC_IDENTIFIERS\n ) and not supports_feature(target_versions, Feature.PATTERN_MATCHING):\n # Python 3.7-3.9\n grammars.append(\n pygram.python_grammar_no_print_statement_no_exec_statement_async_keywords\n )\n if not supports_feature(target_versions, Feature.ASYNC_KEYWORDS):\n # Python 3.0-3.6\n grammars.append(pygram.python_grammar_no_print_statement_no_exec_statement)\n if any(Feature.PATTERN_MATCHING in VERSION_TO_FEATURES[v] for v in target_versions):\n # Python 3.10+\n grammars.append(pygram.python_grammar_soft_keywords)\n\n # At least one of the above branches must have been taken, because every Python\n # version has exactly one of the two 'ASYNC_*' flags\n return grammars\n\n\ndef lib2to3_parse(src_txt: str, target_versions: Iterable[TargetVersion] = ()) -> Node:\n \"\"\"Given a string with source, return the lib2to3 Node.\"\"\"\n if not src_txt.endswith(\"\\n\"):\n src_txt += \"\\n\"\n\n grammars = get_grammars(set(target_versions))\n errors = {}\n for grammar in grammars:\n drv = driver.Driver(grammar)\n try:\n result = drv.parse_string(src_txt, True)\n break\n\n except ParseError as pe:\n lineno, column = pe.context[1]\n lines = src_txt.splitlines()\n try:\n faulty_line = lines[lineno - 1]\n except IndexError:\n faulty_line = \"<line number missing in source>\"\n errors[grammar.version] = InvalidInput(\n f\"Cannot parse: {lineno}:{column}: {faulty_line}\"\n )\n\n except TokenError as te:\n # In edge cases these are raised; and typically don't have a \"faulty_line\".\n lineno, column = te.args[1]\n errors[grammar.version] = InvalidInput(\n f\"Cannot parse: {lineno}:{column}: {te.args[0]}\"\n )\n\n else:\n # Choose the latest version when raising the actual parsing error.\n assert len(errors) >= 1\n exc = errors[max(errors)]\n\n if matches_grammar(src_txt, pygram.python_grammar) or matches_grammar(\n src_txt, 
pygram.python_grammar_no_print_statement\n ):\n original_msg = exc.args[0]\n msg = f\"{original_msg}\\n{PY2_HINT}\"\n raise InvalidInput(msg) from None\n\n raise exc from None\n\n if isinstance(result, Leaf):\n result = Node(syms.file_input, [result])\n return result\n\n\ndef matches_grammar(src_txt: str, grammar: Grammar) -> bool:\n drv = driver.Driver(grammar)\n try:\n drv.parse_string(src_txt, True)\n except (ParseError, TokenError, IndentationError):\n return False\n else:\n return True\n\n\ndef lib2to3_unparse(node: Node) -> str:\n \"\"\"Given a lib2to3 node, return its string representation.\"\"\"\n code = str(node)\n return code\n\n\ndef parse_single_version(\n src: str, version: Tuple[int, int], *, type_comments: bool\n) -> ast.AST:\n filename = \"<unknown>\"\n return ast.parse(\n src, filename, feature_version=version, type_comments=type_comments\n )\n\n\ndef parse_ast(src: str) -> ast.AST:\n # TODO: support Python 4+ ;)\n versions = [(3, minor) for minor in range(3, sys.version_info[1] + 1)]\n\n first_error = \"\"\n for version in sorted(versions, reverse=True):\n try:\n return parse_single_version(src, version, type_comments=True)\n except SyntaxError as e:\n if not first_error:\n first_error = str(e)\n\n # Try to parse without type comments\n for version in sorted(versions, reverse=True):\n try:\n return parse_single_version(src, version, type_comments=False)\n except SyntaxError:\n pass\n\n raise SyntaxError(first_error)\n\n\ndef _normalize(lineend: str, value: str) -> str:\n # To normalize, we strip any leading and trailing space from\n # each line...\n stripped: List[str] = [i.strip() for i in value.splitlines()]\n normalized = lineend.join(stripped)\n # ...and remove any blank lines at the beginning and end of\n # the whole string\n return normalized.strip()\n\n\ndef stringify_ast(node: ast.AST, depth: int = 0) -> Iterator[str]:\n \"\"\"Simple visitor generating strings to compare ASTs by content.\"\"\"\n\n if (\n isinstance(node, ast.Constant)\n and isinstance(node.value, str)\n and node.kind == \"u\"\n ):\n # It's a quirk of history that we strip the u prefix over here. We used to\n # rewrite the AST nodes for Python version compatibility and we never copied\n # over the kind\n node.kind = None\n\n yield f\"{' ' * depth}{node.__class__.__name__}(\"\n\n for field in sorted(node._fields): # noqa: F402\n # TypeIgnore has only one field 'lineno' which breaks this comparison\n if isinstance(node, ast.TypeIgnore):\n break\n\n try:\n value: object = getattr(node, field)\n except AttributeError:\n continue\n\n yield f\"{' ' * (depth+1)}{field}=\"\n\n if isinstance(value, list):\n for item in value:\n # Ignore nested tuples within del statements, because we may insert\n # parentheses and they change the AST.\n if (\n field == \"targets\"\n and isinstance(node, ast.Delete)\n and isinstance(item, ast.Tuple)\n ):\n for elt in item.elts:\n yield from stringify_ast(elt, depth + 2)\n\n elif isinstance(item, ast.AST):\n yield from stringify_ast(item, depth + 2)\n\n elif isinstance(value, ast.AST):\n yield from stringify_ast(value, depth + 2)\n\n else:\n normalized: object\n # Constant strings may be indented across newlines, if they are\n # docstrings; fold spaces after newlines when comparing. 
Similarly,\n # trailing and leading space may be removed.\n if (\n isinstance(node, ast.Constant)\n and field == \"value\"\n and isinstance(value, str)\n ):\n normalized = _normalize(\"\\n\", value)\n else:\n normalized = value\n yield f\"{' ' * (depth+2)}{normalized!r}, # {value.__class__.__name__}\"\n\n yield f\"{' ' * depth}) # /{node.__class__.__name__}\"\n", "path": "src/black/parsing.py"}], "after_files": [{"content": "\"\"\"\nParse Python code and perform AST validation.\n\"\"\"\nimport ast\nimport sys\nfrom typing import Final, Iterable, Iterator, List, Set, Tuple\n\nfrom black.mode import VERSION_TO_FEATURES, Feature, TargetVersion, supports_feature\nfrom black.nodes import syms\nfrom blib2to3 import pygram\nfrom blib2to3.pgen2 import driver\nfrom blib2to3.pgen2.grammar import Grammar\nfrom blib2to3.pgen2.parse import ParseError\nfrom blib2to3.pgen2.tokenize import TokenError\nfrom blib2to3.pytree import Leaf, Node\n\nPY2_HINT: Final = \"Python 2 support was removed in version 22.0.\"\n\n\nclass InvalidInput(ValueError):\n \"\"\"Raised when input source code fails all parse attempts.\"\"\"\n\n\ndef get_grammars(target_versions: Set[TargetVersion]) -> List[Grammar]:\n if not target_versions:\n # No target_version specified, so try all grammars.\n return [\n # Python 3.7-3.9\n pygram.python_grammar_no_print_statement_no_exec_statement_async_keywords,\n # Python 3.0-3.6\n pygram.python_grammar_no_print_statement_no_exec_statement,\n # Python 3.10+\n pygram.python_grammar_soft_keywords,\n ]\n\n grammars = []\n # If we have to parse both, try to parse async as a keyword first\n if not supports_feature(\n target_versions, Feature.ASYNC_IDENTIFIERS\n ) and not supports_feature(target_versions, Feature.PATTERN_MATCHING):\n # Python 3.7-3.9\n grammars.append(\n pygram.python_grammar_no_print_statement_no_exec_statement_async_keywords\n )\n if not supports_feature(target_versions, Feature.ASYNC_KEYWORDS):\n # Python 3.0-3.6\n grammars.append(pygram.python_grammar_no_print_statement_no_exec_statement)\n if any(Feature.PATTERN_MATCHING in VERSION_TO_FEATURES[v] for v in target_versions):\n # Python 3.10+\n grammars.append(pygram.python_grammar_soft_keywords)\n\n # At least one of the above branches must have been taken, because every Python\n # version has exactly one of the two 'ASYNC_*' flags\n return grammars\n\n\ndef lib2to3_parse(src_txt: str, target_versions: Iterable[TargetVersion] = ()) -> Node:\n \"\"\"Given a string with source, return the lib2to3 Node.\"\"\"\n if not src_txt.endswith(\"\\n\"):\n src_txt += \"\\n\"\n\n grammars = get_grammars(set(target_versions))\n errors = {}\n for grammar in grammars:\n drv = driver.Driver(grammar)\n try:\n result = drv.parse_string(src_txt, True)\n break\n\n except ParseError as pe:\n lineno, column = pe.context[1]\n lines = src_txt.splitlines()\n try:\n faulty_line = lines[lineno - 1]\n except IndexError:\n faulty_line = \"<line number missing in source>\"\n errors[grammar.version] = InvalidInput(\n f\"Cannot parse: {lineno}:{column}: {faulty_line}\"\n )\n\n except TokenError as te:\n # In edge cases these are raised; and typically don't have a \"faulty_line\".\n lineno, column = te.args[1]\n errors[grammar.version] = InvalidInput(\n f\"Cannot parse: {lineno}:{column}: {te.args[0]}\"\n )\n\n else:\n # Choose the latest version when raising the actual parsing error.\n assert len(errors) >= 1\n exc = errors[max(errors)]\n\n if matches_grammar(src_txt, pygram.python_grammar) or matches_grammar(\n src_txt, pygram.python_grammar_no_print_statement\n 
):\n original_msg = exc.args[0]\n msg = f\"{original_msg}\\n{PY2_HINT}\"\n raise InvalidInput(msg) from None\n\n raise exc from None\n\n if isinstance(result, Leaf):\n result = Node(syms.file_input, [result])\n return result\n\n\ndef matches_grammar(src_txt: str, grammar: Grammar) -> bool:\n drv = driver.Driver(grammar)\n try:\n drv.parse_string(src_txt, True)\n except (ParseError, TokenError, IndentationError):\n return False\n else:\n return True\n\n\ndef lib2to3_unparse(node: Node) -> str:\n \"\"\"Given a lib2to3 node, return its string representation.\"\"\"\n code = str(node)\n return code\n\n\ndef parse_single_version(\n src: str, version: Tuple[int, int], *, type_comments: bool\n) -> ast.AST:\n filename = \"<unknown>\"\n return ast.parse(\n src, filename, feature_version=version, type_comments=type_comments\n )\n\n\ndef parse_ast(src: str) -> ast.AST:\n # TODO: support Python 4+ ;)\n versions = [(3, minor) for minor in range(3, sys.version_info[1] + 1)]\n\n first_error = \"\"\n for version in sorted(versions, reverse=True):\n try:\n return parse_single_version(src, version, type_comments=True)\n except SyntaxError as e:\n if not first_error:\n first_error = str(e)\n\n # Try to parse without type comments\n for version in sorted(versions, reverse=True):\n try:\n return parse_single_version(src, version, type_comments=False)\n except SyntaxError:\n pass\n\n raise SyntaxError(first_error)\n\n\ndef _normalize(lineend: str, value: str) -> str:\n # To normalize, we strip any leading and trailing space from\n # each line...\n stripped: List[str] = [i.strip() for i in value.splitlines()]\n normalized = lineend.join(stripped)\n # ...and remove any blank lines at the beginning and end of\n # the whole string\n return normalized.strip()\n\n\ndef stringify_ast(node: ast.AST, depth: int = 0) -> Iterator[str]:\n \"\"\"Simple visitor generating strings to compare ASTs by content.\"\"\"\n\n if (\n isinstance(node, ast.Constant)\n and isinstance(node.value, str)\n and node.kind == \"u\"\n ):\n # It's a quirk of history that we strip the u prefix over here. We used to\n # rewrite the AST nodes for Python version compatibility and we never copied\n # over the kind\n node.kind = None\n\n yield f\"{' ' * depth}{node.__class__.__name__}(\"\n\n for field in sorted(node._fields): # noqa: F402\n # TypeIgnore has only one field 'lineno' which breaks this comparison\n if isinstance(node, ast.TypeIgnore):\n break\n\n try:\n value: object = getattr(node, field)\n except AttributeError:\n continue\n\n yield f\"{' ' * (depth+1)}{field}=\"\n\n if isinstance(value, list):\n for item in value:\n # Ignore nested tuples within del statements, because we may insert\n # parentheses and they change the AST.\n if (\n field == \"targets\"\n and isinstance(node, ast.Delete)\n and isinstance(item, ast.Tuple)\n ):\n for elt in item.elts:\n yield from stringify_ast(elt, depth + 2)\n\n elif isinstance(item, ast.AST):\n yield from stringify_ast(item, depth + 2)\n\n elif isinstance(value, ast.AST):\n yield from stringify_ast(value, depth + 2)\n\n else:\n normalized: object\n if (\n isinstance(node, ast.Constant)\n and field == \"value\"\n and isinstance(value, str)\n ):\n # Constant strings may be indented across newlines, if they are\n # docstrings; fold spaces after newlines when comparing. 
Similarly,\n # trailing and leading space may be removed.\n normalized = _normalize(\"\\n\", value)\n elif field == \"type_comment\" and isinstance(value, str):\n # Trailing whitespace in type comments is removed.\n normalized = value.rstrip()\n else:\n normalized = value\n yield f\"{' ' * (depth+2)}{normalized!r}, # {value.__class__.__name__}\"\n\n yield f\"{' ' * depth}) # /{node.__class__.__name__}\"\n", "path": "src/black/parsing.py"}]}
| 2,979 | 251 |
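To see the failure mode concretely, the following sketch mimics Black's AST equivalence check on the example from the issue. It assumes the behaviour shown in the issue log (the reporter used CPython 3.10), where `ast.parse(..., type_comments=True)` preserves the trailing space in the type comment; the helper name `collect_type_comments` is just for illustration.

```python
import ast

SRC = "d = {}  # type: dict[str, str] \n"   # trailing space, as in the bug report
DST = "d = {}  # type: dict[str, str]\n"    # Black's output with the space stripped


def collect_type_comments(source: str) -> list:
    # Parse with type comments enabled (Python 3.8+) and collect every
    # node's type_comment attribute that is set.
    tree = ast.parse(source, type_comments=True)
    return [node.type_comment for node in ast.walk(tree)
            if getattr(node, "type_comment", None) is not None]


before = collect_type_comments(SRC)
after = collect_type_comments(DST)
print(before, after)     # per the issue log: ['dict[str, str] '] vs ['dict[str, str]']
print(before == after)   # False -> the "produced code is not equivalent" error

# Normalizing type comments before comparing, mirroring the patch above:
print([c.rstrip() for c in before] == after)   # True
```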
gh_patches_debug_5273 | rasdani/github-patches | git_diff | crytic__slither-1339 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
README is not correctly rendered on PyPi
### Describe the desired feature
The description on https://pypi.org/project/slither-analyzer/ is not being rendered as markdown. Add the line `long_description_content_type="text/markdown",` to the `setup.py` for it to render correctly in future releases.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2
3 with open("README.md", "r", encoding="utf-8") as f:
4 long_description = f.read()
5
6 setup(
7 name="slither-analyzer",
8 description="Slither is a Solidity static analysis framework written in Python 3.",
9 url="https://github.com/crytic/slither",
10 author="Trail of Bits",
11 version="0.8.3",
12 packages=find_packages(),
13 python_requires=">=3.8",
14 install_requires=[
15 "prettytable>=0.7.2",
16 "pysha3>=1.0.2",
17 # "crytic-compile>=0.2.3",
18 "crytic-compile",
19 ],
20 extras_require={
21 "dev": [
22 "black==22.3.0",
23 "pylint==2.13.4",
24 "pytest",
25 "pytest-cov",
26 "deepdiff",
27 "numpy",
28 "solc-select>=v1.0.0b1",
29 ]
30 },
31 dependency_links=["git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile"],
32 license="AGPL-3.0",
33 long_description=long_description,
34 entry_points={
35 "console_scripts": [
36 "slither = slither.__main__:main",
37 "slither-check-upgradeability = slither.tools.upgradeability.__main__:main",
38 "slither-find-paths = slither.tools.possible_paths.__main__:main",
39 "slither-simil = slither.tools.similarity.__main__:main",
40 "slither-flat = slither.tools.flattening.__main__:main",
41 "slither-format = slither.tools.slither_format.__main__:main",
42 "slither-check-erc = slither.tools.erc_conformance.__main__:main",
43 "slither-check-kspec = slither.tools.kspec_coverage.__main__:main",
44 "slither-prop = slither.tools.properties.__main__:main",
45 "slither-mutate = slither.tools.mutator.__main__:main",
46 "slither-read-storage = slither.tools.read_storage.__main__:main",
47 ]
48 },
49 )
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,6 +31,7 @@
dependency_links=["git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile"],
license="AGPL-3.0",
long_description=long_description,
+ long_description_content_type="text/markdown",
entry_points={
"console_scripts": [
"slither = slither.__main__:main",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,6 +31,7 @@\n dependency_links=[\"git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile\"],\n license=\"AGPL-3.0\",\n long_description=long_description,\n+ long_description_content_type=\"text/markdown\",\n entry_points={\n \"console_scripts\": [\n \"slither = slither.__main__:main\",\n", "issue": "README is not correctly rendered on PyPi\n### Describe the desired feature\n\nThe description on https://pypi.org/project/slither-analyzer/ is not being rendered as markdown. Add the line `long_description_content_type=\"text/markdown\",` to the `setup.py` for it to render correctly in future releases.\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=\"slither-analyzer\",\n description=\"Slither is a Solidity static analysis framework written in Python 3.\",\n url=\"https://github.com/crytic/slither\",\n author=\"Trail of Bits\",\n version=\"0.8.3\",\n packages=find_packages(),\n python_requires=\">=3.8\",\n install_requires=[\n \"prettytable>=0.7.2\",\n \"pysha3>=1.0.2\",\n # \"crytic-compile>=0.2.3\",\n \"crytic-compile\",\n ],\n extras_require={\n \"dev\": [\n \"black==22.3.0\",\n \"pylint==2.13.4\",\n \"pytest\",\n \"pytest-cov\",\n \"deepdiff\",\n \"numpy\",\n \"solc-select>=v1.0.0b1\",\n ]\n },\n dependency_links=[\"git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile\"],\n license=\"AGPL-3.0\",\n long_description=long_description,\n entry_points={\n \"console_scripts\": [\n \"slither = slither.__main__:main\",\n \"slither-check-upgradeability = slither.tools.upgradeability.__main__:main\",\n \"slither-find-paths = slither.tools.possible_paths.__main__:main\",\n \"slither-simil = slither.tools.similarity.__main__:main\",\n \"slither-flat = slither.tools.flattening.__main__:main\",\n \"slither-format = slither.tools.slither_format.__main__:main\",\n \"slither-check-erc = slither.tools.erc_conformance.__main__:main\",\n \"slither-check-kspec = slither.tools.kspec_coverage.__main__:main\",\n \"slither-prop = slither.tools.properties.__main__:main\",\n \"slither-mutate = slither.tools.mutator.__main__:main\",\n \"slither-read-storage = slither.tools.read_storage.__main__:main\",\n ]\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=\"slither-analyzer\",\n description=\"Slither is a Solidity static analysis framework written in Python 3.\",\n url=\"https://github.com/crytic/slither\",\n author=\"Trail of Bits\",\n version=\"0.8.3\",\n packages=find_packages(),\n python_requires=\">=3.8\",\n install_requires=[\n \"prettytable>=0.7.2\",\n \"pysha3>=1.0.2\",\n # \"crytic-compile>=0.2.3\",\n \"crytic-compile\",\n ],\n extras_require={\n \"dev\": [\n \"black==22.3.0\",\n \"pylint==2.13.4\",\n \"pytest\",\n \"pytest-cov\",\n \"deepdiff\",\n \"numpy\",\n \"solc-select>=v1.0.0b1\",\n ]\n },\n dependency_links=[\"git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile\"],\n license=\"AGPL-3.0\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n entry_points={\n \"console_scripts\": [\n \"slither = slither.__main__:main\",\n \"slither-check-upgradeability = slither.tools.upgradeability.__main__:main\",\n 
\"slither-find-paths = slither.tools.possible_paths.__main__:main\",\n \"slither-simil = slither.tools.similarity.__main__:main\",\n \"slither-flat = slither.tools.flattening.__main__:main\",\n \"slither-format = slither.tools.slither_format.__main__:main\",\n \"slither-check-erc = slither.tools.erc_conformance.__main__:main\",\n \"slither-check-kspec = slither.tools.kspec_coverage.__main__:main\",\n \"slither-prop = slither.tools.properties.__main__:main\",\n \"slither-mutate = slither.tools.mutator.__main__:main\",\n \"slither-read-storage = slither.tools.read_storage.__main__:main\",\n ]\n },\n)\n", "path": "setup.py"}]}
| 894 | 111 |
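For reference, the field the patch adds is standard setuptools metadata. A minimal, self-contained `setup.py` sketch (package name and version are placeholders) showing where it goes:

```python
# minimal illustrative setup.py -- names and versions are placeholders
from setuptools import setup, find_packages

with open("README.md", "r", encoding="utf-8") as f:
    long_description = f.read()

setup(
    name="example-package",
    version="0.0.1",
    packages=find_packages(),
    long_description=long_description,
    # Without this, PyPI does not know the README is Markdown and renders it
    # as unformatted text (it defaults to reStructuredText/plain text).
    long_description_content_type="text/markdown",
)
```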
gh_patches_debug_25106 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-128 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pymongo trace exception
We're noticing this exception - using latest `pymongo==3.4.0` driver:
```exceptions.AttributeError: 'long' object has no attribute 'items'
Traceback (most recent call last):
File "/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py", line 105, in send_message_with_response
span.resource = _resource_from_cmd(cmd)
File "/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py", line 220, in _resource_from_cmd
nq = normalize_filter(cmd.query)
File "/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py", line 207, in normalize_filter
out[k] = normalize_filter(v)
File "/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py", line 207, in normalize_filter
out[k] = normalize_filter(v)
File "/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py", line 199, in normalize_filter
return [normalize_filter(s) for s in f]
File "/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py", line 204, in normalize_filter
for k, v in iteritems(f):
File "/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/compat.py", line 32, in iteritems
func = obj.items
AttributeError: 'long' object has no attribute 'items'
```
--- END ISSUE ---
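For context, the crash reproduces with any filter that nests scalars inside a list, such as an `$in` query. The sketch below is a simplified copy of the recursion in the traced client quoted further down (not ddtrace's exact code), run on Python 3, where the scalar is an `int` rather than Python 2's `long`:

```python
def normalize_filter(f=None):
    # simplified version of ddtrace.contrib.pymongo.client.normalize_filter
    if f is None:
        return {}
    elif isinstance(f, list):
        # e.g. the list inside {"$in": [...]} -- recurses into each element
        return [normalize_filter(s) for s in f]
    else:
        # every non-list, non-None value falls through here, including ints,
        # so .items() ends up being called on a scalar
        return {k: (normalize_filter(v) if isinstance(v, (list, dict)) else '?')
                for k, v in f.items()}


query = {"user_id": {"$in": [1234567890, 9876543210]}}
normalize_filter(query)   # AttributeError: 'int' object has no attribute 'items'
```

The golden diff below special-cases `$in`/`$nin` keys and guards the dict branch with an `isinstance(f, dict)` check so scalars never reach `.items()`.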
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/pymongo/client.py`
Content:
```
1 # stdlib
2 import contextlib
3 import logging
4 import json
5
6 # 3p
7 import pymongo
8 from wrapt import ObjectProxy
9
10 # project
11 import ddtrace
12 from ...compat import iteritems
13 from ...ext import AppTypes
14 from ...ext import mongo as mongox
15 from ...ext import net as netx
16 from ...util import deprecated
17 from .parse import parse_spec, parse_query, parse_msg
18
19 # Original Client class
20 _MongoClient = pymongo.MongoClient
21
22 log = logging.getLogger(__name__)
23
24
25 @deprecated(message='Use patching instead (see the docs).', version='0.6.0')
26 def trace_mongo_client(client, tracer, service=mongox.TYPE):
27 tracer.set_service_info(
28 service=service,
29 app=mongox.TYPE,
30 app_type=AppTypes.db,
31 )
32 traced_client = TracedMongoClient(client)
33 ddtrace.Pin(service=service, tracer=tracer).onto(traced_client)
34 return traced_client
35
36
37 class TracedMongoClient(ObjectProxy):
38
39 def __init__(self, client=None, *args, **kwargs):
40 # To support the former trace_mongo_client interface, we have to keep this old interface
41 # TODO(Benjamin): drop it in a later version
42 if not isinstance(client, _MongoClient):
43 # Patched interface, instanciate the client
44 # Note that, in that case, the client argument isn't a client, it's just the first arg
45 client = _MongoClient(client, *args, **kwargs)
46
47 super(TracedMongoClient, self).__init__(client)
48 # Default Pin
49 ddtrace.Pin(service=mongox.TYPE).onto(self)
50 # NOTE[matt] the TracedMongoClient attempts to trace all of the network
51 # calls in the trace library. This is good because it measures the
52 # actual network time. It's bad because it uses a private API which
53 # could change. We'll see how this goes.
54 client._topology = TracedTopology(client._topology)
55
56 def __setddpin__(self, pin):
57 pin.onto(self._topology)
58
59 def __getddpin__(self):
60 return ddtrace.Pin.get_from(self._topology)
61
62
63 class TracedTopology(ObjectProxy):
64
65 def __init__(self, topology):
66 super(TracedTopology, self).__init__(topology)
67
68 def select_server(self, *args, **kwargs):
69 s = self.__wrapped__.select_server(*args, **kwargs)
70 if not isinstance(s, TracedServer):
71 s = TracedServer(s)
72 # Reattach the pin every time in case it changed since the initial patching
73 ddtrace.Pin.get_from(self).onto(s)
74 return s
75
76
77 class TracedServer(ObjectProxy):
78
79 def __init__(self, server):
80 super(TracedServer, self).__init__(server)
81
82 def send_message_with_response(self, operation, *args, **kwargs):
83 cmd = None
84 # Only try to parse something we think is a query.
85 if self._is_query(operation):
86 try:
87 cmd = parse_query(operation)
88 except Exception:
89 log.exception("error parsing query")
90
91 pin = ddtrace.Pin.get_from(self)
92
93 # if we couldn't parse or shouldn't trace the message, just go.
94 if not cmd or not pin or not pin.enabled():
95 return self.__wrapped__.send_message_with_response(
96 operation,
97 *args,
98 **kwargs)
99
100 with pin.tracer.trace(
101 "pymongo.cmd",
102 span_type=mongox.TYPE,
103 service=pin.service) as span:
104
105 span.resource = _resource_from_cmd(cmd)
106 span.set_tag(mongox.DB, cmd.db)
107 span.set_tag(mongox.COLLECTION, cmd.coll)
108 span.set_tags(cmd.tags)
109
110 result = self.__wrapped__.send_message_with_response(
111 operation,
112 *args,
113 **kwargs)
114
115 if result and result.address:
116 _set_address_tags(span, result.address)
117 return result
118
119 @contextlib.contextmanager
120 def get_socket(self, *args, **kwargs):
121 with self.__wrapped__.get_socket(*args, **kwargs) as s:
122 if not isinstance(s, TracedSocket):
123 s = TracedSocket(s)
124 ddtrace.Pin.get_from(self).onto(s)
125 yield s
126
127 @staticmethod
128 def _is_query(op):
129 # NOTE: _Query should alwyas have a spec field
130 return hasattr(op, 'spec')
131
132
133 class TracedSocket(ObjectProxy):
134
135 def __init__(self, socket):
136 super(TracedSocket, self).__init__(socket)
137
138 def command(self, dbname, spec, *args, **kwargs):
139 cmd = None
140 try:
141 cmd = parse_spec(spec, dbname)
142 except Exception:
143 log.exception("error parsing spec. skipping trace")
144
145 pin = ddtrace.Pin.get_from(self)
146 # skip tracing if we don't have a piece of data we need
147 if not dbname or not cmd or not pin or not pin.enabled():
148 return self.__wrapped__.command(dbname, spec, *args, **kwargs)
149
150 cmd.db = dbname
151 with self.__trace(cmd):
152 return self.__wrapped__.command(dbname, spec, *args, **kwargs)
153
154 def write_command(self, request_id, msg):
155 cmd = None
156 try:
157 cmd = parse_msg(msg)
158 except Exception:
159 log.exception("error parsing msg")
160
161 pin = ddtrace.Pin.get_from(self)
162 # if we couldn't parse it, don't try to trace it.
163 if not cmd or not pin or not pin.enabled():
164 return self.__wrapped__.write_command(request_id, msg)
165
166 with self.__trace(cmd) as s:
167 s.resource = _resource_from_cmd(cmd)
168 result = self.__wrapped__.write_command(request_id, msg)
169 if result:
170 s.set_metric(mongox.ROWS, result.get("n", -1))
171 return result
172
173 def __trace(self, cmd):
174 pin = ddtrace.Pin.get_from(self)
175 s = pin.tracer.trace(
176 "pymongo.cmd",
177 span_type=mongox.TYPE,
178 service=pin.service)
179
180 if cmd.db:
181 s.set_tag(mongox.DB, cmd.db)
182 if cmd:
183 s.set_tag(mongox.COLLECTION, cmd.coll)
184 s.set_tags(cmd.tags)
185 s.set_metrics(cmd.metrics)
186
187 s.resource = _resource_from_cmd(cmd)
188 if self.address:
189 _set_address_tags(s, self.address)
190 return s
191
192
193 def normalize_filter(f=None):
194 if f is None:
195 return {}
196 elif isinstance(f, list):
197 # normalize lists of filters
198 # e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]}
199 return [normalize_filter(s) for s in f]
200 else:
201 # normalize dicts of filters
202 # e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]})
203 out = {}
204 for k, v in iteritems(f):
205 if isinstance(v, list) or isinstance(v, dict):
206 # RECURSION ALERT: needs to move to the agent
207 out[k] = normalize_filter(v)
208 else:
209 out[k] = '?'
210 return out
211
212 def _set_address_tags(span, address):
213 # the address is only set after the cursor is done.
214 if address:
215 span.set_tag(netx.TARGET_HOST, address[0])
216 span.set_tag(netx.TARGET_PORT, address[1])
217
218 def _resource_from_cmd(cmd):
219 if cmd.query is not None:
220 nq = normalize_filter(cmd.query)
221 # needed to dump json so we don't get unicode
222 # dict keys like {u'foo':'bar'}
223 q = json.dumps(nq)
224 return "%s %s %s" % (cmd.name, cmd.coll, q)
225 else:
226 return "%s %s" % (cmd.name, cmd.coll)
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ddtrace/contrib/pymongo/client.py b/ddtrace/contrib/pymongo/client.py
--- a/ddtrace/contrib/pymongo/client.py
+++ b/ddtrace/contrib/pymongo/client.py
@@ -197,17 +197,25 @@
# normalize lists of filters
# e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]}
return [normalize_filter(s) for s in f]
- else:
+ elif isinstance(f, dict):
# normalize dicts of filters
- # e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]})
+ # {$or: [ { age: { $lt: 30 } }, { type: 1 } ]})
out = {}
for k, v in iteritems(f):
- if isinstance(v, list) or isinstance(v, dict):
+ if k == "$in" or k == "$nin":
+ # special case $in queries so we don't loop over lists.
+ out[k] = "?"
+ elif isinstance(v, list) or isinstance(v, dict):
# RECURSION ALERT: needs to move to the agent
out[k] = normalize_filter(v)
else:
+ # NOTE: this shouldn't happen, but let's have a safeguard.
out[k] = '?'
return out
+ else:
+ # FIXME[matt] unexpected type. not sure this should ever happen, but at
+ # least it won't crash.
+ return {}
def _set_address_tags(span, address):
# the address is only set after the cursor is done.
|
{"golden_diff": "diff --git a/ddtrace/contrib/pymongo/client.py b/ddtrace/contrib/pymongo/client.py\n--- a/ddtrace/contrib/pymongo/client.py\n+++ b/ddtrace/contrib/pymongo/client.py\n@@ -197,17 +197,25 @@\n # normalize lists of filters\n # e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]}\n return [normalize_filter(s) for s in f]\n- else:\n+ elif isinstance(f, dict):\n # normalize dicts of filters\n- # e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]})\n+ # {$or: [ { age: { $lt: 30 } }, { type: 1 } ]})\n out = {}\n for k, v in iteritems(f):\n- if isinstance(v, list) or isinstance(v, dict):\n+ if k == \"$in\" or k == \"$nin\":\n+ # special case $in queries so we don't loop over lists.\n+ out[k] = \"?\"\n+ elif isinstance(v, list) or isinstance(v, dict):\n # RECURSION ALERT: needs to move to the agent\n out[k] = normalize_filter(v)\n else:\n+ # NOTE: this shouldn't happen, but let's have a safeguard.\n out[k] = '?'\n return out\n+ else:\n+ # FIXME[matt] unexpected type. not sure this should ever happen, but at\n+ # least it won't crash.\n+ return {}\n \n def _set_address_tags(span, address):\n # the address is only set after the cursor is done.\n", "issue": "pymongo trace exception\nWe're noticing this exception - using latest `pymongo==3.4.0` driver:\r\n\r\n```exceptions.AttributeError: 'long' object has no attribute 'items'\r\nTraceback (most recent call last):\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 105, in send_message_with_response\r\n span.resource = _resource_from_cmd(cmd)\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 220, in _resource_from_cmd\r\n nq = normalize_filter(cmd.query)\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 207, in normalize_filter\r\n out[k] = normalize_filter(v)\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 207, in normalize_filter\r\n out[k] = normalize_filter(v)\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 199, in normalize_filter\r\n return [normalize_filter(s) for s in f]\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 204, in normalize_filter\r\n for k, v in iteritems(f):\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/compat.py\", line 32, in iteritems\r\n func = obj.items\r\nAttributeError: 'long' object has no attribute 'items'\r\n```\npymongo trace exception\nWe're noticing this exception - using latest `pymongo==3.4.0` driver:\r\n\r\n```exceptions.AttributeError: 'long' object has no attribute 'items'\r\nTraceback (most recent call last):\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 105, in send_message_with_response\r\n span.resource = _resource_from_cmd(cmd)\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 220, in _resource_from_cmd\r\n nq = normalize_filter(cmd.query)\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 207, in normalize_filter\r\n out[k] = normalize_filter(v)\r\n File 
\"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 207, in normalize_filter\r\n out[k] = normalize_filter(v)\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 199, in normalize_filter\r\n return [normalize_filter(s) for s in f]\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/contrib/pymongo/client.py\", line 204, in normalize_filter\r\n for k, v in iteritems(f):\r\n File \"/home/deploy/virtualenvs/discord/local/lib/python2.7/site-packages/ddtrace/compat.py\", line 32, in iteritems\r\n func = obj.items\r\nAttributeError: 'long' object has no attribute 'items'\r\n```\n", "before_files": [{"content": "# stdlib\nimport contextlib\nimport logging\nimport json\n\n# 3p\nimport pymongo\nfrom wrapt import ObjectProxy\n\n# project\nimport ddtrace\nfrom ...compat import iteritems\nfrom ...ext import AppTypes\nfrom ...ext import mongo as mongox\nfrom ...ext import net as netx\nfrom ...util import deprecated\nfrom .parse import parse_spec, parse_query, parse_msg\n\n# Original Client class\n_MongoClient = pymongo.MongoClient\n\nlog = logging.getLogger(__name__)\n\n\n@deprecated(message='Use patching instead (see the docs).', version='0.6.0')\ndef trace_mongo_client(client, tracer, service=mongox.TYPE):\n tracer.set_service_info(\n service=service,\n app=mongox.TYPE,\n app_type=AppTypes.db,\n )\n traced_client = TracedMongoClient(client)\n ddtrace.Pin(service=service, tracer=tracer).onto(traced_client)\n return traced_client\n\n\nclass TracedMongoClient(ObjectProxy):\n\n def __init__(self, client=None, *args, **kwargs):\n # To support the former trace_mongo_client interface, we have to keep this old interface\n # TODO(Benjamin): drop it in a later version\n if not isinstance(client, _MongoClient):\n # Patched interface, instanciate the client\n # Note that, in that case, the client argument isn't a client, it's just the first arg\n client = _MongoClient(client, *args, **kwargs)\n\n super(TracedMongoClient, self).__init__(client)\n # Default Pin\n ddtrace.Pin(service=mongox.TYPE).onto(self)\n # NOTE[matt] the TracedMongoClient attempts to trace all of the network\n # calls in the trace library. This is good because it measures the\n # actual network time. It's bad because it uses a private API which\n # could change. 
We'll see how this goes.\n client._topology = TracedTopology(client._topology)\n\n def __setddpin__(self, pin):\n pin.onto(self._topology)\n\n def __getddpin__(self):\n return ddtrace.Pin.get_from(self._topology)\n\n\nclass TracedTopology(ObjectProxy):\n\n def __init__(self, topology):\n super(TracedTopology, self).__init__(topology)\n\n def select_server(self, *args, **kwargs):\n s = self.__wrapped__.select_server(*args, **kwargs)\n if not isinstance(s, TracedServer):\n s = TracedServer(s)\n # Reattach the pin every time in case it changed since the initial patching\n ddtrace.Pin.get_from(self).onto(s)\n return s\n\n\nclass TracedServer(ObjectProxy):\n\n def __init__(self, server):\n super(TracedServer, self).__init__(server)\n\n def send_message_with_response(self, operation, *args, **kwargs):\n cmd = None\n # Only try to parse something we think is a query.\n if self._is_query(operation):\n try:\n cmd = parse_query(operation)\n except Exception:\n log.exception(\"error parsing query\")\n\n pin = ddtrace.Pin.get_from(self)\n\n # if we couldn't parse or shouldn't trace the message, just go.\n if not cmd or not pin or not pin.enabled():\n return self.__wrapped__.send_message_with_response(\n operation,\n *args,\n **kwargs)\n\n with pin.tracer.trace(\n \"pymongo.cmd\",\n span_type=mongox.TYPE,\n service=pin.service) as span:\n\n span.resource = _resource_from_cmd(cmd)\n span.set_tag(mongox.DB, cmd.db)\n span.set_tag(mongox.COLLECTION, cmd.coll)\n span.set_tags(cmd.tags)\n\n result = self.__wrapped__.send_message_with_response(\n operation,\n *args,\n **kwargs)\n\n if result and result.address:\n _set_address_tags(span, result.address)\n return result\n\n @contextlib.contextmanager\n def get_socket(self, *args, **kwargs):\n with self.__wrapped__.get_socket(*args, **kwargs) as s:\n if not isinstance(s, TracedSocket):\n s = TracedSocket(s)\n ddtrace.Pin.get_from(self).onto(s)\n yield s\n\n @staticmethod\n def _is_query(op):\n # NOTE: _Query should alwyas have a spec field\n return hasattr(op, 'spec')\n\n\nclass TracedSocket(ObjectProxy):\n\n def __init__(self, socket):\n super(TracedSocket, self).__init__(socket)\n\n def command(self, dbname, spec, *args, **kwargs):\n cmd = None\n try:\n cmd = parse_spec(spec, dbname)\n except Exception:\n log.exception(\"error parsing spec. 
skipping trace\")\n\n pin = ddtrace.Pin.get_from(self)\n # skip tracing if we don't have a piece of data we need\n if not dbname or not cmd or not pin or not pin.enabled():\n return self.__wrapped__.command(dbname, spec, *args, **kwargs)\n\n cmd.db = dbname\n with self.__trace(cmd):\n return self.__wrapped__.command(dbname, spec, *args, **kwargs)\n\n def write_command(self, request_id, msg):\n cmd = None\n try:\n cmd = parse_msg(msg)\n except Exception:\n log.exception(\"error parsing msg\")\n\n pin = ddtrace.Pin.get_from(self)\n # if we couldn't parse it, don't try to trace it.\n if not cmd or not pin or not pin.enabled():\n return self.__wrapped__.write_command(request_id, msg)\n\n with self.__trace(cmd) as s:\n s.resource = _resource_from_cmd(cmd)\n result = self.__wrapped__.write_command(request_id, msg)\n if result:\n s.set_metric(mongox.ROWS, result.get(\"n\", -1))\n return result\n\n def __trace(self, cmd):\n pin = ddtrace.Pin.get_from(self)\n s = pin.tracer.trace(\n \"pymongo.cmd\",\n span_type=mongox.TYPE,\n service=pin.service)\n\n if cmd.db:\n s.set_tag(mongox.DB, cmd.db)\n if cmd:\n s.set_tag(mongox.COLLECTION, cmd.coll)\n s.set_tags(cmd.tags)\n s.set_metrics(cmd.metrics)\n\n s.resource = _resource_from_cmd(cmd)\n if self.address:\n _set_address_tags(s, self.address)\n return s\n\n\ndef normalize_filter(f=None):\n if f is None:\n return {}\n elif isinstance(f, list):\n # normalize lists of filters\n # e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]}\n return [normalize_filter(s) for s in f]\n else:\n # normalize dicts of filters\n # e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]})\n out = {}\n for k, v in iteritems(f):\n if isinstance(v, list) or isinstance(v, dict):\n # RECURSION ALERT: needs to move to the agent\n out[k] = normalize_filter(v)\n else:\n out[k] = '?'\n return out\n\ndef _set_address_tags(span, address):\n # the address is only set after the cursor is done.\n if address:\n span.set_tag(netx.TARGET_HOST, address[0])\n span.set_tag(netx.TARGET_PORT, address[1])\n\ndef _resource_from_cmd(cmd):\n if cmd.query is not None:\n nq = normalize_filter(cmd.query)\n # needed to dump json so we don't get unicode\n # dict keys like {u'foo':'bar'}\n q = json.dumps(nq)\n return \"%s %s %s\" % (cmd.name, cmd.coll, q)\n else:\n return \"%s %s\" % (cmd.name, cmd.coll)\n", "path": "ddtrace/contrib/pymongo/client.py"}], "after_files": [{"content": "# stdlib\nimport contextlib\nimport logging\nimport json\n\n# 3p\nimport pymongo\nfrom wrapt import ObjectProxy\n\n# project\nimport ddtrace\nfrom ...compat import iteritems\nfrom ...ext import AppTypes\nfrom ...ext import mongo as mongox\nfrom ...ext import net as netx\nfrom ...util import deprecated\nfrom .parse import parse_spec, parse_query, parse_msg\n\n# Original Client class\n_MongoClient = pymongo.MongoClient\n\nlog = logging.getLogger(__name__)\n\n\n@deprecated(message='Use patching instead (see the docs).', version='0.6.0')\ndef trace_mongo_client(client, tracer, service=mongox.TYPE):\n tracer.set_service_info(\n service=service,\n app=mongox.TYPE,\n app_type=AppTypes.db,\n )\n traced_client = TracedMongoClient(client)\n ddtrace.Pin(service=service, tracer=tracer).onto(traced_client)\n return traced_client\n\n\nclass TracedMongoClient(ObjectProxy):\n\n def __init__(self, client=None, *args, **kwargs):\n # To support the former trace_mongo_client interface, we have to keep this old interface\n # TODO(Benjamin): drop it in a later version\n if not isinstance(client, _MongoClient):\n # Patched interface, instanciate the client\n 
# Note that, in that case, the client argument isn't a client, it's just the first arg\n client = _MongoClient(client, *args, **kwargs)\n\n super(TracedMongoClient, self).__init__(client)\n # Default Pin\n ddtrace.Pin(service=mongox.TYPE).onto(self)\n # NOTE[matt] the TracedMongoClient attempts to trace all of the network\n # calls in the trace library. This is good because it measures the\n # actual network time. It's bad because it uses a private API which\n # could change. We'll see how this goes.\n client._topology = TracedTopology(client._topology)\n\n def __setddpin__(self, pin):\n pin.onto(self._topology)\n\n def __getddpin__(self):\n return ddtrace.Pin.get_from(self._topology)\n\n\nclass TracedTopology(ObjectProxy):\n\n def __init__(self, topology):\n super(TracedTopology, self).__init__(topology)\n\n def select_server(self, *args, **kwargs):\n s = self.__wrapped__.select_server(*args, **kwargs)\n if not isinstance(s, TracedServer):\n s = TracedServer(s)\n # Reattach the pin every time in case it changed since the initial patching\n ddtrace.Pin.get_from(self).onto(s)\n return s\n\n\nclass TracedServer(ObjectProxy):\n\n def __init__(self, server):\n super(TracedServer, self).__init__(server)\n\n def send_message_with_response(self, operation, *args, **kwargs):\n cmd = None\n # Only try to parse something we think is a query.\n if self._is_query(operation):\n try:\n cmd = parse_query(operation)\n except Exception:\n log.exception(\"error parsing query\")\n\n pin = ddtrace.Pin.get_from(self)\n\n # if we couldn't parse or shouldn't trace the message, just go.\n if not cmd or not pin or not pin.enabled():\n return self.__wrapped__.send_message_with_response(\n operation,\n *args,\n **kwargs)\n\n with pin.tracer.trace(\n \"pymongo.cmd\",\n span_type=mongox.TYPE,\n service=pin.service) as span:\n\n span.resource = _resource_from_cmd(cmd)\n span.set_tag(mongox.DB, cmd.db)\n span.set_tag(mongox.COLLECTION, cmd.coll)\n span.set_tags(cmd.tags)\n\n result = self.__wrapped__.send_message_with_response(\n operation,\n *args,\n **kwargs)\n\n if result and result.address:\n _set_address_tags(span, result.address)\n return result\n\n @contextlib.contextmanager\n def get_socket(self, *args, **kwargs):\n with self.__wrapped__.get_socket(*args, **kwargs) as s:\n if not isinstance(s, TracedSocket):\n s = TracedSocket(s)\n ddtrace.Pin.get_from(self).onto(s)\n yield s\n\n @staticmethod\n def _is_query(op):\n # NOTE: _Query should alwyas have a spec field\n return hasattr(op, 'spec')\n\n\nclass TracedSocket(ObjectProxy):\n\n def __init__(self, socket):\n super(TracedSocket, self).__init__(socket)\n\n def command(self, dbname, spec, *args, **kwargs):\n cmd = None\n try:\n cmd = parse_spec(spec, dbname)\n except Exception:\n log.exception(\"error parsing spec. 
skipping trace\")\n\n pin = ddtrace.Pin.get_from(self)\n # skip tracing if we don't have a piece of data we need\n if not dbname or not cmd or not pin or not pin.enabled():\n return self.__wrapped__.command(dbname, spec, *args, **kwargs)\n\n cmd.db = dbname\n with self.__trace(cmd):\n return self.__wrapped__.command(dbname, spec, *args, **kwargs)\n\n def write_command(self, request_id, msg):\n cmd = None\n try:\n cmd = parse_msg(msg)\n except Exception:\n log.exception(\"error parsing msg\")\n\n pin = ddtrace.Pin.get_from(self)\n # if we couldn't parse it, don't try to trace it.\n if not cmd or not pin or not pin.enabled():\n return self.__wrapped__.write_command(request_id, msg)\n\n with self.__trace(cmd) as s:\n s.resource = _resource_from_cmd(cmd)\n result = self.__wrapped__.write_command(request_id, msg)\n if result:\n s.set_metric(mongox.ROWS, result.get(\"n\", -1))\n return result\n\n def __trace(self, cmd):\n pin = ddtrace.Pin.get_from(self)\n s = pin.tracer.trace(\n \"pymongo.cmd\",\n span_type=mongox.TYPE,\n service=pin.service)\n\n if cmd.db:\n s.set_tag(mongox.DB, cmd.db)\n if cmd:\n s.set_tag(mongox.COLLECTION, cmd.coll)\n s.set_tags(cmd.tags)\n s.set_metrics(cmd.metrics)\n\n s.resource = _resource_from_cmd(cmd)\n if self.address:\n _set_address_tags(s, self.address)\n return s\n\n\ndef normalize_filter(f=None):\n if f is None:\n return {}\n elif isinstance(f, list):\n # normalize lists of filters\n # e.g. {$or: [ { age: { $lt: 30 } }, { type: 1 } ]}\n return [normalize_filter(s) for s in f]\n elif isinstance(f, dict):\n # normalize dicts of filters\n # {$or: [ { age: { $lt: 30 } }, { type: 1 } ]})\n out = {}\n for k, v in iteritems(f):\n if k == \"$in\" or k == \"$nin\":\n # special case $in queries so we don't loop over lists.\n out[k] = \"?\"\n elif isinstance(v, list) or isinstance(v, dict):\n # RECURSION ALERT: needs to move to the agent\n out[k] = normalize_filter(v)\n else:\n # NOTE: this shouldn't happen, but let's have a safeguard.\n out[k] = '?'\n return out\n else:\n # FIXME[matt] unexpected type. not sure this should ever happen, but at\n # least it won't crash.\n return {}\n\ndef _set_address_tags(span, address):\n # the address is only set after the cursor is done.\n if address:\n span.set_tag(netx.TARGET_HOST, address[0])\n span.set_tag(netx.TARGET_PORT, address[1])\n\ndef _resource_from_cmd(cmd):\n if cmd.query is not None:\n nq = normalize_filter(cmd.query)\n # needed to dump json so we don't get unicode\n # dict keys like {u'foo':'bar'}\n q = json.dumps(nq)\n return \"%s %s %s\" % (cmd.name, cmd.coll, q)\n else:\n return \"%s %s\" % (cmd.name, cmd.coll)\n", "path": "ddtrace/contrib/pymongo/client.py"}]}
| 3,412 | 382 |
gh_patches_debug_43347 | rasdani/github-patches | git_diff | buildbot__buildbot-4358 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
#4268 breaks Buildbot on Git step workers with different filesystem layout than the master
Resolving `abspath` in 33682e89057349fed6b72ca7613944b2687633f9 on the Buildbot master does not work in scenarios where the Buildbot worker is using a different working directory than the master.
[Master WORKDIR: /var/lib/buildbot](https://github.com/buildbot/buildbot/blob/master/master/Dockerfile#L93)
[Worker WORKDIR: /buildbot](https://github.com/buildbot/buildbot/blob/master/worker/Dockerfile#L51)
This was rather tricky to track down and I'm going to revert this commit locally and look at fixing it in a subsequent PR.
--- END ISSUE ---
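As a minimal sketch of the failure mode described in the issue (the two WORKDIR values come from the linked Dockerfiles; the relative build workdir name is hypothetical): a path resolved relative to the master's working directory is not the path the worker will actually use.

```python
import posixpath

master_cwd = "/var/lib/buildbot"   # master image WORKDIR (linked above)
worker_basedir = "/buildbot"       # worker image WORKDIR (linked above)
rel_workdir = "wkdir"              # hypothetical relative build workdir

# Anchoring the relative workdir on the master yields a master-local path ...
seen_by_master = posixpath.join(master_cwd, rel_workdir)      # '/var/lib/buildbot/wkdir'
# ... but the git step runs on the worker, under the worker's own basedir:
seen_by_worker = posixpath.join(worker_basedir, rel_workdir)  # '/buildbot/wkdir'

assert seen_by_master != seen_by_worker  # the mismatch the issue describes
```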
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/util/git.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import print_function
18 from future.utils import iteritems
19
20 from distutils.version import LooseVersion
21
22 from twisted.internet import defer
23 from twisted.python import log
24
25 from buildbot import config as bbconfig
26 from buildbot.process import buildstep
27 from buildbot.process import remotecommand
28 from buildbot.process.properties import Properties
29
30 RC_SUCCESS = 0
31
32
33 def getSshCommand(keyPath, knownHostsPath):
34 command = ['ssh']
35 if keyPath is not None:
36 command += ['-i', '"{0}"'.format(keyPath)]
37 if knownHostsPath is not None:
38 command += ['-o', '"UserKnownHostsFile={0}"'.format(knownHostsPath)]
39 return ' '.join(command)
40
41
42 class GitMixin(object):
43
44 def setupGit(self):
45 self.gitInstalled = False
46 self.supportsBranch = False
47 self.supportsSubmoduleForce = False
48 self.supportsSubmoduleCheckout = False
49 self.supportsSshPrivateKeyAsEnvOption = False
50 self.supportsSshPrivateKeyAsConfigOption = False
51
52 def parseGitFeatures(self, version_stdout):
53
54 if 'git' not in version_stdout:
55 return
56
57 try:
58 version = version_stdout.strip().split(' ')[2]
59 except IndexError:
60 return
61
62 self.gitInstalled = True
63 if LooseVersion(version) >= LooseVersion("1.6.5"):
64 self.supportsBranch = True
65 if LooseVersion(version) >= LooseVersion("1.7.6"):
66 self.supportsSubmoduleForce = True
67 if LooseVersion(version) >= LooseVersion("1.7.8"):
68 self.supportsSubmoduleCheckout = True
69 if LooseVersion(version) >= LooseVersion("2.3.0"):
70 self.supportsSshPrivateKeyAsEnvOption = True
71 if LooseVersion(version) >= LooseVersion("2.10.0"):
72 self.supportsSshPrivateKeyAsConfigOption = True
73
74 def adjustCommandParamsForSshPrivateKey(self, command, env,
75 keyPath, sshWrapperPath=None,
76 knownHostsPath=None):
77 ssh_command = getSshCommand(keyPath, knownHostsPath)
78
79 if self.supportsSshPrivateKeyAsConfigOption:
80 command.append('-c')
81 command.append('core.sshCommand={0}'.format(ssh_command))
82 elif self.supportsSshPrivateKeyAsEnvOption:
83 env['GIT_SSH_COMMAND'] = ssh_command
84 else:
85 if sshWrapperPath is None:
86 raise Exception('Only SSH wrapper script is supported but path '
87 'not given')
88 env['GIT_SSH'] = sshWrapperPath
89
90
91 def getSshWrapperScriptContents(keyPath, knownHostsPath=None):
92 ssh_command = getSshCommand(keyPath, knownHostsPath)
93
94 # note that this works on windows if using git with MINGW embedded.
95 return '#!/bin/sh\n{0} "$@"\n'.format(ssh_command)
96
97
98 def getSshKnownHostsContents(hostKey):
99 host_name = '*'
100 return '{0} {1}'.format(host_name, hostKey)
101
102
103 class GitStepMixin(GitMixin):
104
105 def setupGitStep(self):
106 self.didDownloadSshPrivateKey = False
107 self.setupGit()
108
109 if self.sshHostKey is not None and self.sshPrivateKey is None:
110 bbconfig.error('Git: sshPrivateKey must be provided in order '
111 'use sshHostKey')
112 self.sshPrivateKey = None
113
114 if not self.repourl:
115 bbconfig.error("Git: must provide repourl.")
116
117 def _isSshPrivateKeyNeededForGitCommand(self, command):
118 if not command or self.sshPrivateKey is None:
119 return False
120
121 gitCommandsThatNeedSshKey = [
122 'clone', 'submodule', 'fetch', 'push'
123 ]
124 if command[0] in gitCommandsThatNeedSshKey:
125 return True
126 return False
127
128 def _getSshDataPath(self):
129 # we can't use the workdir for temporary ssh-related files, because
130 # it's needed when cloning repositories and git does not like the
131 # destination directory being non-empty. We have to use separate
132 # temporary directory for that data to ensure the confidentiality of it.
133 # So instead of
134 # '{path}/{to}/{workdir}/.buildbot-ssh-key' we put the key at
135 # '{path}/{to}/.{workdir}.buildbot/ssh-key'.
136
137 # basename and dirname interpret the last element being empty for paths
138 # ending with a slash
139 path_module = self.build.path_module
140
141 workdir = self._getSshDataWorkDir().rstrip('/\\')
142 parent_path = path_module.dirname(workdir)
143
144 basename = '.{0}.buildbot'.format(path_module.basename(workdir))
145 return path_module.join(parent_path, basename)
146
147 def _getSshPrivateKeyPath(self):
148 return self.build.path_module.join(self._getSshDataPath(), 'ssh-key')
149
150 def _getSshHostKeyPath(self):
151 return self.build.path_module.join(self._getSshDataPath(), 'ssh-known-hosts')
152
153 def _getSshWrapperScriptPath(self):
154 return self.build.path_module.join(self._getSshDataPath(), 'ssh-wrapper.sh')
155
156 def _getSshWrapperScript(self):
157 rel_key_path = self.build.path_module.relpath(
158 self._getSshPrivateKeyPath(), self._getSshDataWorkDir())
159
160 return getSshWrapperScriptContents(rel_key_path)
161
162 def _adjustCommandParamsForSshPrivateKey(self, full_command, full_env):
163
164 rel_key_path = self.build.path_module.relpath(
165 self._getSshPrivateKeyPath(), self.workdir)
166 rel_ssh_wrapper_path = self.build.path_module.relpath(
167 self._getSshWrapperScriptPath(), self.workdir)
168 rel_host_key_path = None
169 if self.sshHostKey is not None:
170 rel_host_key_path = self.build.path_module.relpath(
171 self._getSshHostKeyPath(), self.workdir)
172
173 self.adjustCommandParamsForSshPrivateKey(full_command, full_env,
174 rel_key_path,
175 rel_ssh_wrapper_path,
176 rel_host_key_path)
177
178 @defer.inlineCallbacks
179 def _dovccmd(self, command, abandonOnFailure=True, collectStdout=False, initialStdin=None):
180 full_command = ['git']
181 full_env = self.env.copy() if self.env else {}
182
183 if self.config is not None:
184 for name, value in iteritems(self.config):
185 full_command.append('-c')
186 full_command.append('%s=%s' % (name, value))
187
188 if self._isSshPrivateKeyNeededForGitCommand(command):
189 self._adjustCommandParamsForSshPrivateKey(full_command, full_env)
190
191 full_command.extend(command)
192
193 # check for the interruptSignal flag
194 sigtermTime = None
195 interruptSignal = None
196
197 # If possible prefer to send a SIGTERM to git before we send a SIGKILL.
198 # If we send a SIGKILL, git is prone to leaving around stale lockfiles.
199 # By priming it with a SIGTERM first we can ensure that it has a chance to shut-down gracefully
200 # before getting terminated
201 if not self.workerVersionIsOlderThan("shell", "2.16"):
202 # git should shut-down quickly on SIGTERM. If it doesn't don't let it
203 # stick around for too long because this is on top of any timeout
204 # we have hit.
205 sigtermTime = 1
206 else:
207 # Since sigtermTime is unavailable try to just use SIGTERM by itself instead of
208 # killing. This should be safe.
209 if self.workerVersionIsOlderThan("shell", "2.15"):
210 log.msg(
211 "NOTE: worker does not allow master to specify "
212 "interruptSignal. This may leave a stale lockfile around "
213 "if the command is interrupted/times out\n")
214 else:
215 interruptSignal = 'TERM'
216
217 cmd = remotecommand.RemoteShellCommand(self.workdir,
218 full_command,
219 env=full_env,
220 logEnviron=self.logEnviron,
221 timeout=self.timeout,
222 sigtermTime=sigtermTime,
223 interruptSignal=interruptSignal,
224 collectStdout=collectStdout,
225 initialStdin=initialStdin)
226 cmd.useLog(self.stdio_log, False)
227 yield self.runCommand(cmd)
228
229 if abandonOnFailure and cmd.didFail():
230 log.msg("Source step failed while running command %s" % cmd)
231 raise buildstep.BuildStepFailed()
232 if collectStdout:
233 defer.returnValue(cmd.stdout)
234 return
235 defer.returnValue(cmd.rc)
236
237 @defer.inlineCallbacks
238 def checkBranchSupport(self):
239 stdout = yield self._dovccmd(['--version'], collectStdout=True)
240
241 self.parseGitFeatures(stdout)
242
243 defer.returnValue(self.gitInstalled)
244
245 @defer.inlineCallbacks
246 def _downloadSshPrivateKeyIfNeeded(self):
247 if self.sshPrivateKey is None:
248 defer.returnValue(RC_SUCCESS)
249
250 p = Properties()
251 p.master = self.master
252 private_key = yield p.render(self.sshPrivateKey)
253 host_key = yield p.render(self.sshHostKey)
254
255 # not using self.workdir because it may be changed depending on step
256 # options
257 workdir = self._getSshDataWorkDir()
258
259 rel_key_path = self.build.path_module.relpath(
260 self._getSshPrivateKeyPath(), workdir)
261 rel_host_key_path = self.build.path_module.relpath(
262 self._getSshHostKeyPath(), workdir)
263 rel_wrapper_script_path = self.build.path_module.relpath(
264 self._getSshWrapperScriptPath(), workdir)
265
266 yield self.runMkdir(self._getSshDataPath())
267
268 if not self.supportsSshPrivateKeyAsEnvOption:
269 yield self.downloadFileContentToWorker(rel_wrapper_script_path,
270 self._getSshWrapperScript(),
271 workdir=workdir, mode=0o700)
272
273 yield self.downloadFileContentToWorker(rel_key_path, private_key,
274 workdir=workdir, mode=0o400)
275
276 if self.sshHostKey is not None:
277 known_hosts_contents = getSshKnownHostsContents(host_key)
278 yield self.downloadFileContentToWorker(rel_host_key_path,
279 known_hosts_contents,
280 workdir=workdir, mode=0o400)
281
282 self.didDownloadSshPrivateKey = True
283 defer.returnValue(RC_SUCCESS)
284
285 @defer.inlineCallbacks
286 def _removeSshPrivateKeyIfNeeded(self):
287 if not self.didDownloadSshPrivateKey:
288 defer.returnValue(RC_SUCCESS)
289
290 yield self.runRmdir(self._getSshDataPath())
291 defer.returnValue(RC_SUCCESS)
292
```
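A standalone sketch of what `_getSshDataPath()` in the listing above computes (the sample workdir values are made up): the ssh material is placed in a hidden sibling directory of the workdir, and when the workdir is relative the result stays relative, which is where the master/worker layout mismatch bites.

```python
import posixpath

def ssh_data_path(workdir):
    # mirrors _getSshDataPath() from the listing above
    workdir = workdir.rstrip('/\\')
    parent = posixpath.dirname(workdir)
    return posixpath.join(parent, '.{0}.buildbot'.format(posixpath.basename(workdir)))

print(ssh_data_path('/buildbot/myproj/build'))  # /buildbot/myproj/.build.buildbot
print(ssh_data_path('build'))                   # .build.buildbot  (still relative -- resolved against whatever cwd applies)
```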
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/master/buildbot/util/git.py b/master/buildbot/util/git.py
--- a/master/buildbot/util/git.py
+++ b/master/buildbot/util/git.py
@@ -139,7 +139,11 @@
path_module = self.build.path_module
workdir = self._getSshDataWorkDir().rstrip('/\\')
- parent_path = path_module.dirname(workdir)
+ if path_module.isabs(workdir):
+ parent_path = path_module.dirname(workdir)
+ else:
+ parent_path = path_module.join(self.worker.worker_basedir,
+ path_module.dirname(workdir))
basename = '.{0}.buildbot'.format(path_module.basename(workdir))
return path_module.join(parent_path, basename)
@@ -154,26 +158,19 @@
return self.build.path_module.join(self._getSshDataPath(), 'ssh-wrapper.sh')
def _getSshWrapperScript(self):
- rel_key_path = self.build.path_module.relpath(
- self._getSshPrivateKeyPath(), self._getSshDataWorkDir())
-
- return getSshWrapperScriptContents(rel_key_path)
+ return getSshWrapperScriptContents(self._getSshPrivateKeyPath)
def _adjustCommandParamsForSshPrivateKey(self, full_command, full_env):
- rel_key_path = self.build.path_module.relpath(
- self._getSshPrivateKeyPath(), self.workdir)
- rel_ssh_wrapper_path = self.build.path_module.relpath(
- self._getSshWrapperScriptPath(), self.workdir)
- rel_host_key_path = None
+ key_path = self._getSshPrivateKeyPath()
+ ssh_wrapper_path = self._getSshWrapperScriptPath()
+ host_key_path = None
if self.sshHostKey is not None:
- rel_host_key_path = self.build.path_module.relpath(
- self._getSshHostKeyPath(), self.workdir)
+ host_key_path = self._getSshHostKeyPath()
self.adjustCommandParamsForSshPrivateKey(full_command, full_env,
- rel_key_path,
- rel_ssh_wrapper_path,
- rel_host_key_path)
+ key_path, ssh_wrapper_path,
+ host_key_path)
@defer.inlineCallbacks
def _dovccmd(self, command, abandonOnFailure=True, collectStdout=False, initialStdin=None):
@@ -256,26 +253,20 @@
# options
workdir = self._getSshDataWorkDir()
- rel_key_path = self.build.path_module.relpath(
- self._getSshPrivateKeyPath(), workdir)
- rel_host_key_path = self.build.path_module.relpath(
- self._getSshHostKeyPath(), workdir)
- rel_wrapper_script_path = self.build.path_module.relpath(
- self._getSshWrapperScriptPath(), workdir)
-
yield self.runMkdir(self._getSshDataPath())
if not self.supportsSshPrivateKeyAsEnvOption:
- yield self.downloadFileContentToWorker(rel_wrapper_script_path,
+ yield self.downloadFileContentToWorker(self._getSshWrapperScriptPath(),
self._getSshWrapperScript(),
workdir=workdir, mode=0o700)
- yield self.downloadFileContentToWorker(rel_key_path, private_key,
+ yield self.downloadFileContentToWorker(self._getSshPrivateKeyPath(),
+ private_key,
workdir=workdir, mode=0o400)
if self.sshHostKey is not None:
known_hosts_contents = getSshKnownHostsContents(host_key)
- yield self.downloadFileContentToWorker(rel_host_key_path,
+ yield self.downloadFileContentToWorker(self._getSshHostKeyPath(),
known_hosts_contents,
workdir=workdir, mode=0o400)
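The core of the patch, sketched in isolation (`worker_basedir` here stands in for `self.worker.worker_basedir`): absolute workdirs keep their own parent directory, while relative ones are now anchored to the worker's base directory instead of being resolved on the master.

```python
import posixpath

def ssh_parent_dir(workdir, worker_basedir):
    # mirrors the patched _getSshDataPath() above
    workdir = workdir.rstrip('/\\')
    if posixpath.isabs(workdir):
        return posixpath.dirname(workdir)
    return posixpath.join(worker_basedir, posixpath.dirname(workdir))

print(ssh_parent_dir('builds/proj', '/buildbot'))       # /buildbot/builds
print(ssh_parent_dir('/srv/builds/proj', '/buildbot'))  # /srv/builds
```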
|
{"golden_diff": "diff --git a/master/buildbot/util/git.py b/master/buildbot/util/git.py\n--- a/master/buildbot/util/git.py\n+++ b/master/buildbot/util/git.py\n@@ -139,7 +139,11 @@\n path_module = self.build.path_module\n \n workdir = self._getSshDataWorkDir().rstrip('/\\\\')\n- parent_path = path_module.dirname(workdir)\n+ if path_module.isabs(workdir):\n+ parent_path = path_module.dirname(workdir)\n+ else:\n+ parent_path = path_module.join(self.worker.worker_basedir,\n+ path_module.dirname(workdir))\n \n basename = '.{0}.buildbot'.format(path_module.basename(workdir))\n return path_module.join(parent_path, basename)\n@@ -154,26 +158,19 @@\n return self.build.path_module.join(self._getSshDataPath(), 'ssh-wrapper.sh')\n \n def _getSshWrapperScript(self):\n- rel_key_path = self.build.path_module.relpath(\n- self._getSshPrivateKeyPath(), self._getSshDataWorkDir())\n-\n- return getSshWrapperScriptContents(rel_key_path)\n+ return getSshWrapperScriptContents(self._getSshPrivateKeyPath)\n \n def _adjustCommandParamsForSshPrivateKey(self, full_command, full_env):\n \n- rel_key_path = self.build.path_module.relpath(\n- self._getSshPrivateKeyPath(), self.workdir)\n- rel_ssh_wrapper_path = self.build.path_module.relpath(\n- self._getSshWrapperScriptPath(), self.workdir)\n- rel_host_key_path = None\n+ key_path = self._getSshPrivateKeyPath()\n+ ssh_wrapper_path = self._getSshWrapperScriptPath()\n+ host_key_path = None\n if self.sshHostKey is not None:\n- rel_host_key_path = self.build.path_module.relpath(\n- self._getSshHostKeyPath(), self.workdir)\n+ host_key_path = self._getSshHostKeyPath()\n \n self.adjustCommandParamsForSshPrivateKey(full_command, full_env,\n- rel_key_path,\n- rel_ssh_wrapper_path,\n- rel_host_key_path)\n+ key_path, ssh_wrapper_path,\n+ host_key_path)\n \n @defer.inlineCallbacks\n def _dovccmd(self, command, abandonOnFailure=True, collectStdout=False, initialStdin=None):\n@@ -256,26 +253,20 @@\n # options\n workdir = self._getSshDataWorkDir()\n \n- rel_key_path = self.build.path_module.relpath(\n- self._getSshPrivateKeyPath(), workdir)\n- rel_host_key_path = self.build.path_module.relpath(\n- self._getSshHostKeyPath(), workdir)\n- rel_wrapper_script_path = self.build.path_module.relpath(\n- self._getSshWrapperScriptPath(), workdir)\n-\n yield self.runMkdir(self._getSshDataPath())\n \n if not self.supportsSshPrivateKeyAsEnvOption:\n- yield self.downloadFileContentToWorker(rel_wrapper_script_path,\n+ yield self.downloadFileContentToWorker(self._getSshWrapperScriptPath(),\n self._getSshWrapperScript(),\n workdir=workdir, mode=0o700)\n \n- yield self.downloadFileContentToWorker(rel_key_path, private_key,\n+ yield self.downloadFileContentToWorker(self._getSshPrivateKeyPath(),\n+ private_key,\n workdir=workdir, mode=0o400)\n \n if self.sshHostKey is not None:\n known_hosts_contents = getSshKnownHostsContents(host_key)\n- yield self.downloadFileContentToWorker(rel_host_key_path,\n+ yield self.downloadFileContentToWorker(self._getSshHostKeyPath(),\n known_hosts_contents,\n workdir=workdir, mode=0o400)\n", "issue": "#4268 breaks Buildbot on Git step workers with different filesystem layout than the master\nResolving `abspath` in 33682e89057349fed6b72ca7613944b2687633f9 on the Buildbot master does not work in scenarios that the Buildbot worker is using a different working directory than the master.\r\n\r\n[Master WORKDIR: /var/lib/buildbot](https://github.com/buildbot/buildbot/blob/master/master/Dockerfile#L93)\r\n[Worker WORKDIR: 
/buildbot](https://github.com/buildbot/buildbot/blob/master/worker/Dockerfile#L51)\r\n\r\nThis was rather tricky to track down and I'm going to revert this commit locally and look at fixing it in a subsequent PR.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom future.utils import iteritems\n\nfrom distutils.version import LooseVersion\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nfrom buildbot import config as bbconfig\nfrom buildbot.process import buildstep\nfrom buildbot.process import remotecommand\nfrom buildbot.process.properties import Properties\n\nRC_SUCCESS = 0\n\n\ndef getSshCommand(keyPath, knownHostsPath):\n command = ['ssh']\n if keyPath is not None:\n command += ['-i', '\"{0}\"'.format(keyPath)]\n if knownHostsPath is not None:\n command += ['-o', '\"UserKnownHostsFile={0}\"'.format(knownHostsPath)]\n return ' '.join(command)\n\n\nclass GitMixin(object):\n\n def setupGit(self):\n self.gitInstalled = False\n self.supportsBranch = False\n self.supportsSubmoduleForce = False\n self.supportsSubmoduleCheckout = False\n self.supportsSshPrivateKeyAsEnvOption = False\n self.supportsSshPrivateKeyAsConfigOption = False\n\n def parseGitFeatures(self, version_stdout):\n\n if 'git' not in version_stdout:\n return\n\n try:\n version = version_stdout.strip().split(' ')[2]\n except IndexError:\n return\n\n self.gitInstalled = True\n if LooseVersion(version) >= LooseVersion(\"1.6.5\"):\n self.supportsBranch = True\n if LooseVersion(version) >= LooseVersion(\"1.7.6\"):\n self.supportsSubmoduleForce = True\n if LooseVersion(version) >= LooseVersion(\"1.7.8\"):\n self.supportsSubmoduleCheckout = True\n if LooseVersion(version) >= LooseVersion(\"2.3.0\"):\n self.supportsSshPrivateKeyAsEnvOption = True\n if LooseVersion(version) >= LooseVersion(\"2.10.0\"):\n self.supportsSshPrivateKeyAsConfigOption = True\n\n def adjustCommandParamsForSshPrivateKey(self, command, env,\n keyPath, sshWrapperPath=None,\n knownHostsPath=None):\n ssh_command = getSshCommand(keyPath, knownHostsPath)\n\n if self.supportsSshPrivateKeyAsConfigOption:\n command.append('-c')\n command.append('core.sshCommand={0}'.format(ssh_command))\n elif self.supportsSshPrivateKeyAsEnvOption:\n env['GIT_SSH_COMMAND'] = ssh_command\n else:\n if sshWrapperPath is None:\n raise Exception('Only SSH wrapper script is supported but path '\n 'not given')\n env['GIT_SSH'] = sshWrapperPath\n\n\ndef getSshWrapperScriptContents(keyPath, knownHostsPath=None):\n ssh_command = getSshCommand(keyPath, knownHostsPath)\n\n # note that this works on windows if using git with MINGW embedded.\n return '#!/bin/sh\\n{0} \"$@\"\\n'.format(ssh_command)\n\n\ndef getSshKnownHostsContents(hostKey):\n host_name = '*'\n return '{0} {1}'.format(host_name, 
hostKey)\n\n\nclass GitStepMixin(GitMixin):\n\n def setupGitStep(self):\n self.didDownloadSshPrivateKey = False\n self.setupGit()\n\n if self.sshHostKey is not None and self.sshPrivateKey is None:\n bbconfig.error('Git: sshPrivateKey must be provided in order '\n 'use sshHostKey')\n self.sshPrivateKey = None\n\n if not self.repourl:\n bbconfig.error(\"Git: must provide repourl.\")\n\n def _isSshPrivateKeyNeededForGitCommand(self, command):\n if not command or self.sshPrivateKey is None:\n return False\n\n gitCommandsThatNeedSshKey = [\n 'clone', 'submodule', 'fetch', 'push'\n ]\n if command[0] in gitCommandsThatNeedSshKey:\n return True\n return False\n\n def _getSshDataPath(self):\n # we can't use the workdir for temporary ssh-related files, because\n # it's needed when cloning repositories and git does not like the\n # destination directory being non-empty. We have to use separate\n # temporary directory for that data to ensure the confidentiality of it.\n # So instead of\n # '{path}/{to}/{workdir}/.buildbot-ssh-key' we put the key at\n # '{path}/{to}/.{workdir}.buildbot/ssh-key'.\n\n # basename and dirname interpret the last element being empty for paths\n # ending with a slash\n path_module = self.build.path_module\n\n workdir = self._getSshDataWorkDir().rstrip('/\\\\')\n parent_path = path_module.dirname(workdir)\n\n basename = '.{0}.buildbot'.format(path_module.basename(workdir))\n return path_module.join(parent_path, basename)\n\n def _getSshPrivateKeyPath(self):\n return self.build.path_module.join(self._getSshDataPath(), 'ssh-key')\n\n def _getSshHostKeyPath(self):\n return self.build.path_module.join(self._getSshDataPath(), 'ssh-known-hosts')\n\n def _getSshWrapperScriptPath(self):\n return self.build.path_module.join(self._getSshDataPath(), 'ssh-wrapper.sh')\n\n def _getSshWrapperScript(self):\n rel_key_path = self.build.path_module.relpath(\n self._getSshPrivateKeyPath(), self._getSshDataWorkDir())\n\n return getSshWrapperScriptContents(rel_key_path)\n\n def _adjustCommandParamsForSshPrivateKey(self, full_command, full_env):\n\n rel_key_path = self.build.path_module.relpath(\n self._getSshPrivateKeyPath(), self.workdir)\n rel_ssh_wrapper_path = self.build.path_module.relpath(\n self._getSshWrapperScriptPath(), self.workdir)\n rel_host_key_path = None\n if self.sshHostKey is not None:\n rel_host_key_path = self.build.path_module.relpath(\n self._getSshHostKeyPath(), self.workdir)\n\n self.adjustCommandParamsForSshPrivateKey(full_command, full_env,\n rel_key_path,\n rel_ssh_wrapper_path,\n rel_host_key_path)\n\n @defer.inlineCallbacks\n def _dovccmd(self, command, abandonOnFailure=True, collectStdout=False, initialStdin=None):\n full_command = ['git']\n full_env = self.env.copy() if self.env else {}\n\n if self.config is not None:\n for name, value in iteritems(self.config):\n full_command.append('-c')\n full_command.append('%s=%s' % (name, value))\n\n if self._isSshPrivateKeyNeededForGitCommand(command):\n self._adjustCommandParamsForSshPrivateKey(full_command, full_env)\n\n full_command.extend(command)\n\n # check for the interruptSignal flag\n sigtermTime = None\n interruptSignal = None\n\n # If possible prefer to send a SIGTERM to git before we send a SIGKILL.\n # If we send a SIGKILL, git is prone to leaving around stale lockfiles.\n # By priming it with a SIGTERM first we can ensure that it has a chance to shut-down gracefully\n # before getting terminated\n if not self.workerVersionIsOlderThan(\"shell\", \"2.16\"):\n # git should shut-down quickly on SIGTERM. 
If it doesn't don't let it\n # stick around for too long because this is on top of any timeout\n # we have hit.\n sigtermTime = 1\n else:\n # Since sigtermTime is unavailable try to just use SIGTERM by itself instead of\n # killing. This should be safe.\n if self.workerVersionIsOlderThan(\"shell\", \"2.15\"):\n log.msg(\n \"NOTE: worker does not allow master to specify \"\n \"interruptSignal. This may leave a stale lockfile around \"\n \"if the command is interrupted/times out\\n\")\n else:\n interruptSignal = 'TERM'\n\n cmd = remotecommand.RemoteShellCommand(self.workdir,\n full_command,\n env=full_env,\n logEnviron=self.logEnviron,\n timeout=self.timeout,\n sigtermTime=sigtermTime,\n interruptSignal=interruptSignal,\n collectStdout=collectStdout,\n initialStdin=initialStdin)\n cmd.useLog(self.stdio_log, False)\n yield self.runCommand(cmd)\n\n if abandonOnFailure and cmd.didFail():\n log.msg(\"Source step failed while running command %s\" % cmd)\n raise buildstep.BuildStepFailed()\n if collectStdout:\n defer.returnValue(cmd.stdout)\n return\n defer.returnValue(cmd.rc)\n\n @defer.inlineCallbacks\n def checkBranchSupport(self):\n stdout = yield self._dovccmd(['--version'], collectStdout=True)\n\n self.parseGitFeatures(stdout)\n\n defer.returnValue(self.gitInstalled)\n\n @defer.inlineCallbacks\n def _downloadSshPrivateKeyIfNeeded(self):\n if self.sshPrivateKey is None:\n defer.returnValue(RC_SUCCESS)\n\n p = Properties()\n p.master = self.master\n private_key = yield p.render(self.sshPrivateKey)\n host_key = yield p.render(self.sshHostKey)\n\n # not using self.workdir because it may be changed depending on step\n # options\n workdir = self._getSshDataWorkDir()\n\n rel_key_path = self.build.path_module.relpath(\n self._getSshPrivateKeyPath(), workdir)\n rel_host_key_path = self.build.path_module.relpath(\n self._getSshHostKeyPath(), workdir)\n rel_wrapper_script_path = self.build.path_module.relpath(\n self._getSshWrapperScriptPath(), workdir)\n\n yield self.runMkdir(self._getSshDataPath())\n\n if not self.supportsSshPrivateKeyAsEnvOption:\n yield self.downloadFileContentToWorker(rel_wrapper_script_path,\n self._getSshWrapperScript(),\n workdir=workdir, mode=0o700)\n\n yield self.downloadFileContentToWorker(rel_key_path, private_key,\n workdir=workdir, mode=0o400)\n\n if self.sshHostKey is not None:\n known_hosts_contents = getSshKnownHostsContents(host_key)\n yield self.downloadFileContentToWorker(rel_host_key_path,\n known_hosts_contents,\n workdir=workdir, mode=0o400)\n\n self.didDownloadSshPrivateKey = True\n defer.returnValue(RC_SUCCESS)\n\n @defer.inlineCallbacks\n def _removeSshPrivateKeyIfNeeded(self):\n if not self.didDownloadSshPrivateKey:\n defer.returnValue(RC_SUCCESS)\n\n yield self.runRmdir(self._getSshDataPath())\n defer.returnValue(RC_SUCCESS)\n", "path": "master/buildbot/util/git.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom future.utils import iteritems\n\nfrom distutils.version import LooseVersion\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nfrom buildbot import config as bbconfig\nfrom buildbot.process import buildstep\nfrom buildbot.process import remotecommand\nfrom buildbot.process.properties import Properties\n\nRC_SUCCESS = 0\n\n\ndef getSshCommand(keyPath, knownHostsPath):\n command = ['ssh']\n if keyPath is not None:\n command += ['-i', '\"{0}\"'.format(keyPath)]\n if knownHostsPath is not None:\n command += ['-o', '\"UserKnownHostsFile={0}\"'.format(knownHostsPath)]\n return ' '.join(command)\n\n\nclass GitMixin(object):\n\n def setupGit(self):\n self.gitInstalled = False\n self.supportsBranch = False\n self.supportsSubmoduleForce = False\n self.supportsSubmoduleCheckout = False\n self.supportsSshPrivateKeyAsEnvOption = False\n self.supportsSshPrivateKeyAsConfigOption = False\n\n def parseGitFeatures(self, version_stdout):\n\n if 'git' not in version_stdout:\n return\n\n try:\n version = version_stdout.strip().split(' ')[2]\n except IndexError:\n return\n\n self.gitInstalled = True\n if LooseVersion(version) >= LooseVersion(\"1.6.5\"):\n self.supportsBranch = True\n if LooseVersion(version) >= LooseVersion(\"1.7.6\"):\n self.supportsSubmoduleForce = True\n if LooseVersion(version) >= LooseVersion(\"1.7.8\"):\n self.supportsSubmoduleCheckout = True\n if LooseVersion(version) >= LooseVersion(\"2.3.0\"):\n self.supportsSshPrivateKeyAsEnvOption = True\n if LooseVersion(version) >= LooseVersion(\"2.10.0\"):\n self.supportsSshPrivateKeyAsConfigOption = True\n\n def adjustCommandParamsForSshPrivateKey(self, command, env,\n keyPath, sshWrapperPath=None,\n knownHostsPath=None):\n ssh_command = getSshCommand(keyPath, knownHostsPath)\n\n if self.supportsSshPrivateKeyAsConfigOption:\n command.append('-c')\n command.append('core.sshCommand={0}'.format(ssh_command))\n elif self.supportsSshPrivateKeyAsEnvOption:\n env['GIT_SSH_COMMAND'] = ssh_command\n else:\n if sshWrapperPath is None:\n raise Exception('Only SSH wrapper script is supported but path '\n 'not given')\n env['GIT_SSH'] = sshWrapperPath\n\n\ndef getSshWrapperScriptContents(keyPath, knownHostsPath=None):\n ssh_command = getSshCommand(keyPath, knownHostsPath)\n\n # note that this works on windows if using git with MINGW embedded.\n return '#!/bin/sh\\n{0} \"$@\"\\n'.format(ssh_command)\n\n\ndef getSshKnownHostsContents(hostKey):\n host_name = '*'\n return '{0} {1}'.format(host_name, hostKey)\n\n\nclass GitStepMixin(GitMixin):\n\n def setupGitStep(self):\n self.didDownloadSshPrivateKey = False\n self.setupGit()\n\n if self.sshHostKey is not None and self.sshPrivateKey is None:\n bbconfig.error('Git: sshPrivateKey must be provided in order '\n 'use sshHostKey')\n self.sshPrivateKey = None\n\n if not self.repourl:\n bbconfig.error(\"Git: must provide repourl.\")\n\n def _isSshPrivateKeyNeededForGitCommand(self, command):\n if not command or self.sshPrivateKey is None:\n return False\n\n gitCommandsThatNeedSshKey = [\n 'clone', 'submodule', 'fetch', 'push'\n ]\n if command[0] in gitCommandsThatNeedSshKey:\n return True\n return False\n\n 
def _getSshDataPath(self):\n # we can't use the workdir for temporary ssh-related files, because\n # it's needed when cloning repositories and git does not like the\n # destination directory being non-empty. We have to use separate\n # temporary directory for that data to ensure the confidentiality of it.\n # So instead of\n # '{path}/{to}/{workdir}/.buildbot-ssh-key' we put the key at\n # '{path}/{to}/.{workdir}.buildbot/ssh-key'.\n\n # basename and dirname interpret the last element being empty for paths\n # ending with a slash\n path_module = self.build.path_module\n\n workdir = self._getSshDataWorkDir().rstrip('/\\\\')\n if path_module.isabs(workdir):\n parent_path = path_module.dirname(workdir)\n else:\n parent_path = path_module.join(self.worker.worker_basedir,\n path_module.dirname(workdir))\n\n basename = '.{0}.buildbot'.format(path_module.basename(workdir))\n return path_module.join(parent_path, basename)\n\n def _getSshPrivateKeyPath(self):\n return self.build.path_module.join(self._getSshDataPath(), 'ssh-key')\n\n def _getSshHostKeyPath(self):\n return self.build.path_module.join(self._getSshDataPath(), 'ssh-known-hosts')\n\n def _getSshWrapperScriptPath(self):\n return self.build.path_module.join(self._getSshDataPath(), 'ssh-wrapper.sh')\n\n def _getSshWrapperScript(self):\n return getSshWrapperScriptContents(self._getSshPrivateKeyPath)\n\n def _adjustCommandParamsForSshPrivateKey(self, full_command, full_env):\n\n key_path = self._getSshPrivateKeyPath()\n ssh_wrapper_path = self._getSshWrapperScriptPath()\n host_key_path = None\n if self.sshHostKey is not None:\n host_key_path = self._getSshHostKeyPath()\n\n self.adjustCommandParamsForSshPrivateKey(full_command, full_env,\n key_path, ssh_wrapper_path,\n host_key_path)\n\n @defer.inlineCallbacks\n def _dovccmd(self, command, abandonOnFailure=True, collectStdout=False, initialStdin=None):\n full_command = ['git']\n full_env = self.env.copy() if self.env else {}\n\n if self.config is not None:\n for name, value in iteritems(self.config):\n full_command.append('-c')\n full_command.append('%s=%s' % (name, value))\n\n if self._isSshPrivateKeyNeededForGitCommand(command):\n self._adjustCommandParamsForSshPrivateKey(full_command, full_env)\n\n full_command.extend(command)\n\n # check for the interruptSignal flag\n sigtermTime = None\n interruptSignal = None\n\n # If possible prefer to send a SIGTERM to git before we send a SIGKILL.\n # If we send a SIGKILL, git is prone to leaving around stale lockfiles.\n # By priming it with a SIGTERM first we can ensure that it has a chance to shut-down gracefully\n # before getting terminated\n if not self.workerVersionIsOlderThan(\"shell\", \"2.16\"):\n # git should shut-down quickly on SIGTERM. If it doesn't don't let it\n # stick around for too long because this is on top of any timeout\n # we have hit.\n sigtermTime = 1\n else:\n # Since sigtermTime is unavailable try to just use SIGTERM by itself instead of\n # killing. This should be safe.\n if self.workerVersionIsOlderThan(\"shell\", \"2.15\"):\n log.msg(\n \"NOTE: worker does not allow master to specify \"\n \"interruptSignal. 
This may leave a stale lockfile around \"\n \"if the command is interrupted/times out\\n\")\n else:\n interruptSignal = 'TERM'\n\n cmd = remotecommand.RemoteShellCommand(self.workdir,\n full_command,\n env=full_env,\n logEnviron=self.logEnviron,\n timeout=self.timeout,\n sigtermTime=sigtermTime,\n interruptSignal=interruptSignal,\n collectStdout=collectStdout,\n initialStdin=initialStdin)\n cmd.useLog(self.stdio_log, False)\n yield self.runCommand(cmd)\n\n if abandonOnFailure and cmd.didFail():\n log.msg(\"Source step failed while running command %s\" % cmd)\n raise buildstep.BuildStepFailed()\n if collectStdout:\n defer.returnValue(cmd.stdout)\n return\n defer.returnValue(cmd.rc)\n\n @defer.inlineCallbacks\n def checkBranchSupport(self):\n stdout = yield self._dovccmd(['--version'], collectStdout=True)\n\n self.parseGitFeatures(stdout)\n\n defer.returnValue(self.gitInstalled)\n\n @defer.inlineCallbacks\n def _downloadSshPrivateKeyIfNeeded(self):\n if self.sshPrivateKey is None:\n defer.returnValue(RC_SUCCESS)\n\n p = Properties()\n p.master = self.master\n private_key = yield p.render(self.sshPrivateKey)\n host_key = yield p.render(self.sshHostKey)\n\n # not using self.workdir because it may be changed depending on step\n # options\n workdir = self._getSshDataWorkDir()\n\n yield self.runMkdir(self._getSshDataPath())\n\n if not self.supportsSshPrivateKeyAsEnvOption:\n yield self.downloadFileContentToWorker(self._getSshWrapperScriptPath(),\n self._getSshWrapperScript(),\n workdir=workdir, mode=0o700)\n\n yield self.downloadFileContentToWorker(self._getSshPrivateKeyPath(),\n private_key,\n workdir=workdir, mode=0o400)\n\n if self.sshHostKey is not None:\n known_hosts_contents = getSshKnownHostsContents(host_key)\n yield self.downloadFileContentToWorker(self._getSshHostKeyPath(),\n known_hosts_contents,\n workdir=workdir, mode=0o400)\n\n self.didDownloadSshPrivateKey = True\n defer.returnValue(RC_SUCCESS)\n\n @defer.inlineCallbacks\n def _removeSshPrivateKeyIfNeeded(self):\n if not self.didDownloadSshPrivateKey:\n defer.returnValue(RC_SUCCESS)\n\n yield self.runRmdir(self._getSshDataPath())\n defer.returnValue(RC_SUCCESS)\n", "path": "master/buildbot/util/git.py"}]}
| 3,753 | 864 |
gh_patches_debug_7316 | rasdani/github-patches | git_diff | liqd__a4-opin-567 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dashboard: Elements in organization/user switch are astray
The arrow and the label for the organization/user name should be on one line and vertically centered in the switch area. Keep in mind that there can be long names spanning two lines.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/dashboard/views.py`
Content:
```
1 from allauth.account import views as account_views
2 from allauth.socialaccount import views as socialaccount_views
3 from django.contrib import messages
4 from django.contrib.messages.views import SuccessMessageMixin
5 from django.core.urlresolvers import reverse
6 from django.shortcuts import get_object_or_404, redirect
7 from django.utils import functional
8 from django.utils.translation import ugettext as _
9 from django.views import generic
10 from rules.compat import access_mixins as mixins
11 from rules.contrib import views as rules_views
12
13 from adhocracy4.phases import models as phase_models
14 from adhocracy4.projects import models as project_models
15 from euth.memberships import models as member_models
16 from euth.organisations import models as org_models
17 from euth.users import models as user_models
18
19 from . import blueprints, emails, forms
20
21
22 def dashboard(request):
23 return redirect('dashboard-profile')
24
25
26 class DashboardBaseMixin(mixins.LoginRequiredMixin,
27 generic.base.ContextMixin,):
28
29 @functional.cached_property
30 def user_has_organisation(self):
31 return bool(self.request.user.organisation_set.all())
32
33 @functional.cached_property
34 def organisation(self):
35 if 'organisation_slug' in self.kwargs:
36 slug = self.kwargs['organisation_slug']
37 return get_object_or_404(org_models.Organisation, slug=slug)
38 else:
39 return self.request.user.organisation_set.first()
40
41 @functional.cached_property
42 def other_organisations_of_user(self):
43 user = self.request.user
44 if self.organisation:
45 return user.organisation_set.exclude(pk=self.organisation.pk)
46 else:
47 return None
48
49 @property
50 def raise_exception(self):
51 return self.request.user.is_authenticated()
52
53
54 class DashboardEmailView(DashboardBaseMixin, account_views.EmailView):
55 menu_item = 'email'
56
57
58 class DashboardAccountView(DashboardBaseMixin,
59 socialaccount_views.ConnectionsView):
60 menu_item = 'connections'
61
62
63 class DashboardProfileView(DashboardBaseMixin,
64 SuccessMessageMixin,
65 generic.UpdateView):
66
67 model = user_models.User
68 template_name = "euth_dashboard/profile_detail.html"
69 form_class = forms.ProfileForm
70 success_message = _("Your profile was successfully updated.")
71 menu_item = 'profile'
72
73 def get_object(self):
74 return get_object_or_404(user_models.User, pk=self.request.user.id)
75
76 def get_success_url(self):
77 return self.request.path
78
79
80 class ChangePasswordView(DashboardBaseMixin,
81 account_views.PasswordChangeView):
82 menu_item = 'password'
83
84 def get_success_url(self):
85 return reverse('dashboard-password')
86
87
88 class DashboardOrganisationUpdateView(DashboardBaseMixin,
89 rules_views.PermissionRequiredMixin,
90 SuccessMessageMixin,
91 generic.UpdateView):
92 model = org_models.Organisation
93 form_class = forms.OrganisationForm
94 slug_url_kwarg = 'organisation_slug'
95 template_name = 'euth_dashboard/organisation_form.html'
96 success_message = _('Organisation successfully updated.')
97 permission_required = 'euth_organisations.modify_organisation'
98 menu_item = 'organisation'
99
100 def get_success_url(self):
101 return self.request.path
102
103
104 class DashboardProjectListView(DashboardBaseMixin,
105 rules_views.PermissionRequiredMixin,
106 generic.ListView):
107 model = project_models.Project
108 template_name = 'euth_dashboard/project_list.html'
109 permission_required = 'euth_organisations.modify_organisation'
110 menu_item = 'project'
111
112 def get_queryset(self):
113 return self.model.objects.filter(
114 organisation=self.organisation
115 )
116
117 def get_permission_object(self):
118 return self.organisation
119
120 def get_success_url(self):
121 return reverse('dashboard-project-list')
122
123
124 class DashboardBlueprintListView(DashboardBaseMixin,
125 rules_views.PermissionRequiredMixin,
126 generic.TemplateView):
127 template_name = 'euth_dashboard/blueprint_list.html'
128 blueprints = blueprints.blueprints
129 permission_required = 'euth_organisations.initiate_project'
130 menu_item = 'project'
131
132 def get_permission_object(self):
133 return self.organisation
134
135
136 class DashboardProjectCreateView(DashboardBaseMixin,
137 rules_views.PermissionRequiredMixin,
138 SuccessMessageMixin,
139 blueprints.BlueprintMixin,
140 generic.CreateView):
141 model = project_models.Project
142 form_class = forms.ProjectCreateForm
143 template_name = 'euth_dashboard/project_form.html'
144 success_message = _('Project succesfully created.')
145 permission_required = 'euth_organisations.initiate_project'
146 menu_item = 'project'
147
148 def get_context_data(self, **kwargs):
149 context = super().get_context_data(**kwargs)
150 context['heading'] = _("New project based on")
151 return context
152
153 def get_permission_object(self):
154 return self.organisation
155
156 def get_form_kwargs(self):
157 kwargs = super().get_form_kwargs()
158 kwargs['blueprint'] = self.blueprint
159 kwargs['organisation'] = self.organisation
160 kwargs['creator'] = self.request.user
161 return kwargs
162
163 def get_success_url(self):
164 return reverse('dashboard-project-list',
165 kwargs={
166 'organisation_slug': self.organisation.slug,
167 })
168
169
170 class DashboardProjectUpdateView(DashboardBaseMixin,
171 rules_views.PermissionRequiredMixin,
172 SuccessMessageMixin,
173 generic.UpdateView):
174 model = project_models.Project
175 form_class = forms.ProjectUpdateForm
176 template_name = 'euth_dashboard/project_form.html'
177 success_message = _('Project successfully updated.')
178 permission_required = 'euth_organisations.initiate_project'
179 menu_item = 'project'
180
181 def get_context_data(self, **kwargs):
182 context = super().get_context_data(**kwargs)
183 context['heading'] = _("Update project: " + self.object.name)
184 return context
185
186 def get_permission_object(self):
187 return self.organisation
188
189 def get_success_url(self):
190 return reverse('dashboard-project-edit',
191 kwargs={
192 'organisation_slug': self.organisation.slug,
193 'slug': self.get_object().slug
194 })
195
196 def get_form_kwargs(self):
197 kwargs = super().get_form_kwargs()
198 qs = phase_models.Phase.objects.filter(module__project=self.object)
199 kwargs['phases__queryset'] = qs
200
201 if qs.first().module.settings_instance:
202 settings_instance = qs.first().module.settings_instance
203 kwargs['module_settings__instance'] = settings_instance
204
205 return kwargs
206
207
208 class DashboardProjectDeleteView(DashboardBaseMixin,
209 rules_views.PermissionRequiredMixin,
210 generic.DeleteView):
211 model = project_models.Project
212 form_class = forms.ProjectUpdateForm
213 permission_required = 'euth_organisations.initiate_project'
214 success_message = _('Your project has been deleted.')
215 menu_item = 'project'
216
217 @property
218 def raise_exception(self):
219 return self.request.user.is_authenticated()
220
221 def delete(self, *args, **kwargs):
222 response = super().delete(*args, **kwargs)
223 emails.ProjectDeletedEmail.send(
224 self.object,
225 action_user=self.request.user
226 )
227 success_message = self.success_message
228 messages.success(self.request, success_message)
229 return response
230
231 def get_success_url(self):
232 return reverse('dashboard-project-list',
233 kwargs={
234 'organisation_slug': self.organisation.slug
235 })
236
237
238 class DashboardProjectInviteView(DashboardBaseMixin,
239 rules_views.PermissionRequiredMixin,
240 SuccessMessageMixin,
241 generic.FormView):
242 form_class = forms.ProjectInviteForm
243 template_name = 'euth_dashboard/project_invites.html'
244 success_message = _("Invitations successfully sent.")
245 permission_required = 'euth_organisations.initiate_project'
246 menu_item = 'project'
247
248 def get_permission_object(self):
249 return self.organisation
250
251 @functional.cached_property
252 def project(self):
253 return project_models.Project.objects.get(
254 slug=self.kwargs['slug']
255 )
256
257 def get_form_kwargs(self):
258 kwargs = super().get_form_kwargs()
259 kwargs['project'] = self.project
260 return kwargs
261
262 def form_valid(self, form):
263 emails = form.cleaned_data['emails']
264 user = self.request.user
265 project = self.project
266 for email in emails:
267 member_models.Invite.objects.invite(user, project, email)
268 return super().form_valid(form)
269
270 def get_success_url(self):
271 return reverse('dashboard-project-users',
272 kwargs={
273 'organisation_slug': self.organisation.slug,
274 'slug': self.project.slug
275 })
276
277
278 class DashboardProjectUserView(DashboardBaseMixin,
279 rules_views.PermissionRequiredMixin,
280 SuccessMessageMixin,
281 generic.FormView):
282
283 form_class = forms.ProjectUserForm
284 template_name = 'euth_dashboard/project_users.html'
285 success_message = _("User request successfully updated.")
286 permission_required = 'euth_organisations.initiate_project'
287 menu_item = 'project'
288
289 def get_permission_object(self):
290 return self.organisation
291
292 def get_form_kwargs(self):
293 kwargs = super().get_form_kwargs()
294 qs = member_models.Request.objects.order_by('created').filter(
295 project__slug=self.kwargs['slug']
296 )
297 kwargs['requests__queryset'] = qs
298 qs = member_models.Invite.objects.order_by('created').filter(
299 project__slug=self.kwargs['slug']
300 )
301 kwargs['invites__queryset'] = qs
302 qs = user_models.User.objects.order_by('email').filter(
303 project_participant__slug=self.kwargs['slug']
304 )
305 kwargs['users__queryset'] = qs
306 kwargs['project'] = self.project
307 return kwargs
308
309 @functional.cached_property
310 def project(self):
311 return project_models.Project.objects.get(
312 slug=self.kwargs['slug']
313 )
314
315 def get_context_data(self, **kwargs):
316 context = super().get_context_data(**kwargs)
317 context['project'] = self.project
318 return context
319
320 def get_success_url(self):
321 return self.request.path
322
323 def form_valid(self, form):
324 form.save()
325 return super().form_valid(form)
326
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
|
diff --git a/euth/dashboard/views.py b/euth/dashboard/views.py
--- a/euth/dashboard/views.py
+++ b/euth/dashboard/views.py
@@ -187,10 +187,9 @@
         return self.organisation
 
     def get_success_url(self):
-        return reverse('dashboard-project-edit',
+        return reverse('dashboard-project-list',
                        kwargs={
                            'organisation_slug': self.organisation.slug,
-                           'slug': self.get_object().slug
                        })
 
    def get_form_kwargs(self):
|
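The patch holds together because Django's `reverse()` only accepts the kwargs declared in the matched URL pattern, and the project-list route is keyed by the organisation alone (the create and delete views above reverse it the same way). A minimal illustration; the resolved path is invented for the example:

```python
from django.core.urlresolvers import reverse

# The list route only knows the organisation slug ...
reverse('dashboard-project-list', kwargs={'organisation_slug': 'acme'})
# -> e.g. '/dashboard/acme/projects/'  (illustrative path)

# ... so keeping the extra project slug would fail:
# reverse('dashboard-project-list',
#         kwargs={'organisation_slug': 'acme', 'slug': 'my-project'})
# raises django.core.urlresolvers.NoReverseMatch
```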
{"golden_diff": "diff --git a/euth/dashboard/views.py b/euth/dashboard/views.py\n--- a/euth/dashboard/views.py\n+++ b/euth/dashboard/views.py\n@@ -187,10 +187,9 @@\n return self.organisation\n \n def get_success_url(self):\n- return reverse('dashboard-project-edit',\n+ return reverse('dashboard-project-list',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n- 'slug': self.get_object().slug\n })\n \n def get_form_kwargs(self):\n", "issue": "Dashboard: Elements in organization/user switch are astray\nThe arrow and the label for the organization/user name should be in one line and vertically centered in the switch area. Keep in mind that there can be long names of two lines. \r\n\r\n\n", "before_files": [{"content": "from allauth.account import views as account_views\nfrom allauth.socialaccount import views as socialaccount_views\nfrom django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.urlresolvers import reverse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import functional\nfrom django.utils.translation import ugettext as _\nfrom django.views import generic\nfrom rules.compat import access_mixins as mixins\nfrom rules.contrib import views as rules_views\n\nfrom adhocracy4.phases import models as phase_models\nfrom adhocracy4.projects import models as project_models\nfrom euth.memberships import models as member_models\nfrom euth.organisations import models as org_models\nfrom euth.users import models as user_models\n\nfrom . import blueprints, emails, forms\n\n\ndef dashboard(request):\n return redirect('dashboard-profile')\n\n\nclass DashboardBaseMixin(mixins.LoginRequiredMixin,\n generic.base.ContextMixin,):\n\n @functional.cached_property\n def user_has_organisation(self):\n return bool(self.request.user.organisation_set.all())\n\n @functional.cached_property\n def organisation(self):\n if 'organisation_slug' in self.kwargs:\n slug = self.kwargs['organisation_slug']\n return get_object_or_404(org_models.Organisation, slug=slug)\n else:\n return self.request.user.organisation_set.first()\n\n @functional.cached_property\n def other_organisations_of_user(self):\n user = self.request.user\n if self.organisation:\n return user.organisation_set.exclude(pk=self.organisation.pk)\n else:\n return None\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n\nclass DashboardEmailView(DashboardBaseMixin, account_views.EmailView):\n menu_item = 'email'\n\n\nclass DashboardAccountView(DashboardBaseMixin,\n socialaccount_views.ConnectionsView):\n menu_item = 'connections'\n\n\nclass DashboardProfileView(DashboardBaseMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n\n model = user_models.User\n template_name = \"euth_dashboard/profile_detail.html\"\n form_class = forms.ProfileForm\n success_message = _(\"Your profile was successfully updated.\")\n menu_item = 'profile'\n\n def get_object(self):\n return get_object_or_404(user_models.User, pk=self.request.user.id)\n\n def get_success_url(self):\n return self.request.path\n\n\nclass ChangePasswordView(DashboardBaseMixin,\n account_views.PasswordChangeView):\n menu_item = 'password'\n\n def get_success_url(self):\n return reverse('dashboard-password')\n\n\nclass DashboardOrganisationUpdateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n model = org_models.Organisation\n form_class = forms.OrganisationForm\n slug_url_kwarg = 'organisation_slug'\n 
template_name = 'euth_dashboard/organisation_form.html'\n success_message = _('Organisation successfully updated.')\n permission_required = 'euth_organisations.modify_organisation'\n menu_item = 'organisation'\n\n def get_success_url(self):\n return self.request.path\n\n\nclass DashboardProjectListView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n generic.ListView):\n model = project_models.Project\n template_name = 'euth_dashboard/project_list.html'\n permission_required = 'euth_organisations.modify_organisation'\n menu_item = 'project'\n\n def get_queryset(self):\n return self.model.objects.filter(\n organisation=self.organisation\n )\n\n def get_permission_object(self):\n return self.organisation\n\n def get_success_url(self):\n return reverse('dashboard-project-list')\n\n\nclass DashboardBlueprintListView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n generic.TemplateView):\n template_name = 'euth_dashboard/blueprint_list.html'\n blueprints = blueprints.blueprints\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_permission_object(self):\n return self.organisation\n\n\nclass DashboardProjectCreateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n blueprints.BlueprintMixin,\n generic.CreateView):\n model = project_models.Project\n form_class = forms.ProjectCreateForm\n template_name = 'euth_dashboard/project_form.html'\n success_message = _('Project succesfully created.')\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['heading'] = _(\"New project based on\")\n return context\n\n def get_permission_object(self):\n return self.organisation\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['blueprint'] = self.blueprint\n kwargs['organisation'] = self.organisation\n kwargs['creator'] = self.request.user\n return kwargs\n\n def get_success_url(self):\n return reverse('dashboard-project-list',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n })\n\n\nclass DashboardProjectUpdateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n model = project_models.Project\n form_class = forms.ProjectUpdateForm\n template_name = 'euth_dashboard/project_form.html'\n success_message = _('Project successfully updated.')\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['heading'] = _(\"Update project: \" + self.object.name)\n return context\n\n def get_permission_object(self):\n return self.organisation\n\n def get_success_url(self):\n return reverse('dashboard-project-edit',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n 'slug': self.get_object().slug\n })\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n qs = phase_models.Phase.objects.filter(module__project=self.object)\n kwargs['phases__queryset'] = qs\n\n if qs.first().module.settings_instance:\n settings_instance = qs.first().module.settings_instance\n kwargs['module_settings__instance'] = settings_instance\n\n return kwargs\n\n\nclass DashboardProjectDeleteView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n generic.DeleteView):\n model = project_models.Project\n form_class = forms.ProjectUpdateForm\n permission_required = 
'euth_organisations.initiate_project'\n success_message = _('Your project has been deleted.')\n menu_item = 'project'\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n def delete(self, *args, **kwargs):\n response = super().delete(*args, **kwargs)\n emails.ProjectDeletedEmail.send(\n self.object,\n action_user=self.request.user\n )\n success_message = self.success_message\n messages.success(self.request, success_message)\n return response\n\n def get_success_url(self):\n return reverse('dashboard-project-list',\n kwargs={\n 'organisation_slug': self.organisation.slug\n })\n\n\nclass DashboardProjectInviteView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.FormView):\n form_class = forms.ProjectInviteForm\n template_name = 'euth_dashboard/project_invites.html'\n success_message = _(\"Invitations successfully sent.\")\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_permission_object(self):\n return self.organisation\n\n @functional.cached_property\n def project(self):\n return project_models.Project.objects.get(\n slug=self.kwargs['slug']\n )\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['project'] = self.project\n return kwargs\n\n def form_valid(self, form):\n emails = form.cleaned_data['emails']\n user = self.request.user\n project = self.project\n for email in emails:\n member_models.Invite.objects.invite(user, project, email)\n return super().form_valid(form)\n\n def get_success_url(self):\n return reverse('dashboard-project-users',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n 'slug': self.project.slug\n })\n\n\nclass DashboardProjectUserView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.FormView):\n\n form_class = forms.ProjectUserForm\n template_name = 'euth_dashboard/project_users.html'\n success_message = _(\"User request successfully updated.\")\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_permission_object(self):\n return self.organisation\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n qs = member_models.Request.objects.order_by('created').filter(\n project__slug=self.kwargs['slug']\n )\n kwargs['requests__queryset'] = qs\n qs = member_models.Invite.objects.order_by('created').filter(\n project__slug=self.kwargs['slug']\n )\n kwargs['invites__queryset'] = qs\n qs = user_models.User.objects.order_by('email').filter(\n project_participant__slug=self.kwargs['slug']\n )\n kwargs['users__queryset'] = qs\n kwargs['project'] = self.project\n return kwargs\n\n @functional.cached_property\n def project(self):\n return project_models.Project.objects.get(\n slug=self.kwargs['slug']\n )\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['project'] = self.project\n return context\n\n def get_success_url(self):\n return self.request.path\n\n def form_valid(self, form):\n form.save()\n return super().form_valid(form)\n", "path": "euth/dashboard/views.py"}], "after_files": [{"content": "from allauth.account import views as account_views\nfrom allauth.socialaccount import views as socialaccount_views\nfrom django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.urlresolvers import reverse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import functional\nfrom 
django.utils.translation import ugettext as _\nfrom django.views import generic\nfrom rules.compat import access_mixins as mixins\nfrom rules.contrib import views as rules_views\n\nfrom adhocracy4.phases import models as phase_models\nfrom adhocracy4.projects import models as project_models\nfrom euth.memberships import models as member_models\nfrom euth.organisations import models as org_models\nfrom euth.users import models as user_models\n\nfrom . import blueprints, emails, forms\n\n\ndef dashboard(request):\n return redirect('dashboard-profile')\n\n\nclass DashboardBaseMixin(mixins.LoginRequiredMixin,\n generic.base.ContextMixin,):\n\n @functional.cached_property\n def user_has_organisation(self):\n return bool(self.request.user.organisation_set.all())\n\n @functional.cached_property\n def organisation(self):\n if 'organisation_slug' in self.kwargs:\n slug = self.kwargs['organisation_slug']\n return get_object_or_404(org_models.Organisation, slug=slug)\n else:\n return self.request.user.organisation_set.first()\n\n @functional.cached_property\n def other_organisations_of_user(self):\n user = self.request.user\n if self.organisation:\n return user.organisation_set.exclude(pk=self.organisation.pk)\n else:\n return None\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n\nclass DashboardEmailView(DashboardBaseMixin, account_views.EmailView):\n menu_item = 'email'\n\n\nclass DashboardAccountView(DashboardBaseMixin,\n socialaccount_views.ConnectionsView):\n menu_item = 'connections'\n\n\nclass DashboardProfileView(DashboardBaseMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n\n model = user_models.User\n template_name = \"euth_dashboard/profile_detail.html\"\n form_class = forms.ProfileForm\n success_message = _(\"Your profile was successfully updated.\")\n menu_item = 'profile'\n\n def get_object(self):\n return get_object_or_404(user_models.User, pk=self.request.user.id)\n\n def get_success_url(self):\n return self.request.path\n\n\nclass ChangePasswordView(DashboardBaseMixin,\n account_views.PasswordChangeView):\n menu_item = 'password'\n\n def get_success_url(self):\n return reverse('dashboard-password')\n\n\nclass DashboardOrganisationUpdateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n model = org_models.Organisation\n form_class = forms.OrganisationForm\n slug_url_kwarg = 'organisation_slug'\n template_name = 'euth_dashboard/organisation_form.html'\n success_message = _('Organisation successfully updated.')\n permission_required = 'euth_organisations.modify_organisation'\n menu_item = 'organisation'\n\n def get_success_url(self):\n return self.request.path\n\n\nclass DashboardProjectListView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n generic.ListView):\n model = project_models.Project\n template_name = 'euth_dashboard/project_list.html'\n permission_required = 'euth_organisations.modify_organisation'\n menu_item = 'project'\n\n def get_queryset(self):\n return self.model.objects.filter(\n organisation=self.organisation\n )\n\n def get_permission_object(self):\n return self.organisation\n\n def get_success_url(self):\n return reverse('dashboard-project-list')\n\n\nclass DashboardBlueprintListView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n generic.TemplateView):\n template_name = 'euth_dashboard/blueprint_list.html'\n blueprints = blueprints.blueprints\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n 
def get_permission_object(self):\n return self.organisation\n\n\nclass DashboardProjectCreateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n blueprints.BlueprintMixin,\n generic.CreateView):\n model = project_models.Project\n form_class = forms.ProjectCreateForm\n template_name = 'euth_dashboard/project_form.html'\n success_message = _('Project succesfully created.')\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['heading'] = _(\"New project based on\")\n return context\n\n def get_permission_object(self):\n return self.organisation\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['blueprint'] = self.blueprint\n kwargs['organisation'] = self.organisation\n kwargs['creator'] = self.request.user\n return kwargs\n\n def get_success_url(self):\n return reverse('dashboard-project-list',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n })\n\n\nclass DashboardProjectUpdateView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.UpdateView):\n model = project_models.Project\n form_class = forms.ProjectUpdateForm\n template_name = 'euth_dashboard/project_form.html'\n success_message = _('Project successfully updated.')\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['heading'] = _(\"Update project: \" + self.object.name)\n return context\n\n def get_permission_object(self):\n return self.organisation\n\n def get_success_url(self):\n return reverse('dashboard-project-list',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n })\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n qs = phase_models.Phase.objects.filter(module__project=self.object)\n kwargs['phases__queryset'] = qs\n\n if qs.first().module.settings_instance:\n settings_instance = qs.first().module.settings_instance\n kwargs['module_settings__instance'] = settings_instance\n\n return kwargs\n\n\nclass DashboardProjectDeleteView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n generic.DeleteView):\n model = project_models.Project\n form_class = forms.ProjectUpdateForm\n permission_required = 'euth_organisations.initiate_project'\n success_message = _('Your project has been deleted.')\n menu_item = 'project'\n\n @property\n def raise_exception(self):\n return self.request.user.is_authenticated()\n\n def delete(self, *args, **kwargs):\n response = super().delete(*args, **kwargs)\n emails.ProjectDeletedEmail.send(\n self.object,\n action_user=self.request.user\n )\n success_message = self.success_message\n messages.success(self.request, success_message)\n return response\n\n def get_success_url(self):\n return reverse('dashboard-project-list',\n kwargs={\n 'organisation_slug': self.organisation.slug\n })\n\n\nclass DashboardProjectInviteView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.FormView):\n form_class = forms.ProjectInviteForm\n template_name = 'euth_dashboard/project_invites.html'\n success_message = _(\"Invitations successfully sent.\")\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_permission_object(self):\n return self.organisation\n\n @functional.cached_property\n def project(self):\n return 
project_models.Project.objects.get(\n slug=self.kwargs['slug']\n )\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['project'] = self.project\n return kwargs\n\n def form_valid(self, form):\n emails = form.cleaned_data['emails']\n user = self.request.user\n project = self.project\n for email in emails:\n member_models.Invite.objects.invite(user, project, email)\n return super().form_valid(form)\n\n def get_success_url(self):\n return reverse('dashboard-project-users',\n kwargs={\n 'organisation_slug': self.organisation.slug,\n 'slug': self.project.slug\n })\n\n\nclass DashboardProjectUserView(DashboardBaseMixin,\n rules_views.PermissionRequiredMixin,\n SuccessMessageMixin,\n generic.FormView):\n\n form_class = forms.ProjectUserForm\n template_name = 'euth_dashboard/project_users.html'\n success_message = _(\"User request successfully updated.\")\n permission_required = 'euth_organisations.initiate_project'\n menu_item = 'project'\n\n def get_permission_object(self):\n return self.organisation\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n qs = member_models.Request.objects.order_by('created').filter(\n project__slug=self.kwargs['slug']\n )\n kwargs['requests__queryset'] = qs\n qs = member_models.Invite.objects.order_by('created').filter(\n project__slug=self.kwargs['slug']\n )\n kwargs['invites__queryset'] = qs\n qs = user_models.User.objects.order_by('email').filter(\n project_participant__slug=self.kwargs['slug']\n )\n kwargs['users__queryset'] = qs\n kwargs['project'] = self.project\n return kwargs\n\n @functional.cached_property\n def project(self):\n return project_models.Project.objects.get(\n slug=self.kwargs['slug']\n )\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['project'] = self.project\n return context\n\n def get_success_url(self):\n return self.request.path\n\n def form_valid(self, form):\n form.save()\n return super().form_valid(form)\n", "path": "euth/dashboard/views.py"}]}
| 3,399 | 112 |
gh_patches_debug_22810 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-45 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement service_account.Credentials.to_jwt_credentials()
(Context: #29)
--- END ISSUE ---
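For orientation before the code below: the request is for a converter from OAuth 2.0 service-account credentials to self-signed-JWT credentials backed by the same key. A rough usage sketch; the method name comes from the issue title and the file name is a placeholder:

```python
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_file(
    'service_account.json')             # placeholder path
jwt_creds = creds.to_jwt_credentials()  # the helper this issue asks for
```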
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/oauth2/service_account.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Service Accounts: JSON Web Token (JWT) Profile for OAuth 2.0
16
17 This module implements the JWT Profile for OAuth 2.0 Authorization Grants
18 as defined by `RFC 7523`_ with particular support for how this RFC is
19 implemented in Google's infrastructure. Google refers to these credentials
20 as *Service Accounts*.
21
22 Service accounts are used for server-to-server communication, such as
23 interactions between a web application server and a Google service. The
24 service account belongs to your application instead of to an individual end
25 user. In contrast to other OAuth 2.0 profiles, no users are involved and your
26 application "acts" as the service account.
27
28 Typically an application uses a service account when the application uses
29 Google APIs to work with its own data rather than a user's data. For example,
30 an application that uses Google Cloud Datastore for data persistence would use
31 a service account to authenticate its calls to the Google Cloud Datastore API.
32 However, an application that needs to access a user's Drive documents would
33 use the normal OAuth 2.0 profile.
34
35 Additionally, Google Apps domain administrators can grant service accounts
36 `domain-wide delegation`_ authority to access user data on behalf of users in
37 the domain.
38
39 This profile uses a JWT to acquire an OAuth 2.0 access token. The JWT is used
40 in place of the usual authorization token returned during the standard
41 OAuth 2.0 Authorization Code grant. The JWT is only used for this purpose, as
42 the acquired access token is used as the bearer token when making requests
43 using these credentials.
44
45 This profile differs from normal OAuth 2.0 profile because no user consent
46 step is required. The use of the private key allows this profile to assert
47 identity directly.
48
49 This profile also differs from the :mod:`google.auth.jwt` authentication
50 because the JWT credentials use the JWT directly as the bearer token. This
51 profile instead only uses the JWT to obtain an OAuth 2.0 access token. The
52 obtained OAuth 2.0 access token is used as the bearer token.
53
54 Domain-wide delegation
55 ----------------------
56
57 Domain-wide delegation allows a service account to access user data on
58 behalf of any user in a Google Apps domain without consent from the user.
59 For example, an application that uses the Google Calendar API to add events to
60 the calendars of all users in a Google Apps domain would use a service account
61 to access the Google Calendar API on behalf of users.
62
63 The Google Apps administrator must explicitly authorize the service account to
64 do this. This authorization step is referred to as "delegating domain-wide
65 authority" to a service account.
66
67 You can use domain-wise delegation by creating a set of credentials with a
68 specific subject using :meth:`~Credentials.with_subject`.
69
70 .. _RFC 7523: https://tools.ietf.org/html/rfc7523
71 """
72
73 import datetime
74
75 from google.auth import _helpers
76 from google.auth import _service_account_info
77 from google.auth import credentials
78 from google.auth import jwt
79 from google.oauth2 import _client
80
81 _DEFAULT_TOKEN_LIFETIME_SECS = 3600 # 1 hour in sections
82
83
84 class Credentials(credentials.Signing,
85 credentials.Scoped,
86 credentials.Credentials):
87 """Service account credentials
88
89 Usually, you'll create these credentials with one of the helper
90 constructors. To create credentials using a Google service account
91 private key JSON file::
92
93 credentials = service_account.Credentials.from_service_account_file(
94 'service-account.json')
95
96 Or if you already have the service account file loaded::
97
98 service_account_info = json.load(open('service_account.json'))
99 credentials = service_account.Credentials.from_service_account_info(
100 service_account_info)
101
102 Both helper methods pass on arguments to the constructor, so you can
103 specify additional scopes and a subject if necessary::
104
105 credentials = service_account.Credentials.from_service_account_file(
106 'service-account.json',
107 scopes=['email'],
108 subject='[email protected]')
109
110 The credentials are considered immutable. If you want to modify the scopes
111 or the subject used for delegation, use :meth:`with_scopes` or
112 :meth:`with_subject`::
113
114 scoped_credentials = credentials.with_scopes(['email'])
115 delegated_credentials = credentials.with_subject(subject)
116 """
117
118 def __init__(self, signer, service_account_email, token_uri, scopes=None,
119 subject=None, additional_claims=None):
120 """
121 Args:
122 signer (google.auth.crypt.Signer): The signer used to sign JWTs.
123 service_account_email (str): The service account's email.
124 scopes (Sequence[str]): Scopes to request during the authorization
125 grant.
126 token_uri (str): The OAuth 2.0 Token URI.
127 subject (str): For domain-wide delegation, the email address of the
128 user to for which to request delegated access.
129 additional_claims (Mapping[str, str]): Any additional claims for
130 the JWT assertion used in the authorization grant.
131
132 .. note:: Typically one of the helper constructors
133 :meth:`from_service_account_file` or
134 :meth:`from_service_account_info` are used instead of calling the
135 constructor directly.
136 """
137 super(Credentials, self).__init__()
138
139 self._scopes = scopes
140 self._signer = signer
141 self._service_account_email = service_account_email
142 self._subject = subject
143 self._token_uri = token_uri
144
145 if additional_claims is not None:
146 self._additional_claims = additional_claims
147 else:
148 self._additional_claims = {}
149
150 @classmethod
151 def _from_signer_and_info(cls, signer, info, **kwargs):
152 """Creates a Credentials instance from a signer and service account
153 info.
154
155 Args:
156 signer (google.auth.crypt.Signer): The signer used to sign JWTs.
157 info (Mapping[str, str]): The service account info.
158 kwargs: Additional arguments to pass to the constructor.
159
160 Returns:
161 google.auth.jwt.Credentials: The constructed credentials.
162
163 Raises:
164 ValueError: If the info is not in the expected format.
165 """
166 return cls(
167 signer,
168 service_account_email=info['client_email'],
169 token_uri=info['token_uri'], **kwargs)
170
171 @classmethod
172 def from_service_account_info(cls, info, **kwargs):
173 """Creates a Credentials instance from parsed service account info.
174
175 Args:
176 info (Mapping[str, str]): The service account info in Google
177 format.
178 kwargs: Additional arguments to pass to the constructor.
179
180 Returns:
181 google.auth.service_account.Credentials: The constructed
182 credentials.
183
184 Raises:
185 ValueError: If the info is not in the expected format.
186 """
187 signer = _service_account_info.from_dict(
188 info, require=['client_email', 'token_uri'])
189 return cls._from_signer_and_info(signer, info, **kwargs)
190
191 @classmethod
192 def from_service_account_file(cls, filename, **kwargs):
193 """Creates a Credentials instance from a service account json file.
194
195 Args:
196 filename (str): The path to the service account json file.
197 kwargs: Additional arguments to pass to the constructor.
198
199 Returns:
200 google.auth.service_account.Credentials: The constructed
201 credentials.
202 """
203 info, signer = _service_account_info.from_filename(
204 filename, require=['client_email', 'token_uri'])
205 return cls._from_signer_and_info(signer, info, **kwargs)
206
207 @property
208 def requires_scopes(self):
209 """Checks if the credentials requires scopes.
210
211 Returns:
212 bool: True if there are no scopes set otherwise False.
213 """
214 return True if not self._scopes else False
215
216 @_helpers.copy_docstring(credentials.Scoped)
217 def with_scopes(self, scopes):
218 return Credentials(
219 self._signer,
220 service_account_email=self._service_account_email,
221 scopes=scopes,
222 token_uri=self._token_uri,
223 subject=self._subject,
224 additional_claims=self._additional_claims.copy())
225
226 def with_subject(self, subject):
227 """Create a copy of these credentials with the specified subject.
228
229 Args:
230 subject (str): The subject claim.
231
232 Returns:
233 google.auth.service_account.Credentials: A new credentials
234 instance.
235 """
236 return Credentials(
237 self._signer,
238 service_account_email=self._service_account_email,
239 scopes=self._scopes,
240 token_uri=self._token_uri,
241 subject=subject,
242 additional_claims=self._additional_claims.copy())
243
244 def _make_authorization_grant_assertion(self):
245 """Create the OAuth 2.0 assertion.
246
247 This assertion is used during the OAuth 2.0 grant to acquire an
248 access token.
249
250 Returns:
251 bytes: The authorization grant assertion.
252 """
253 now = _helpers.utcnow()
254 lifetime = datetime.timedelta(seconds=_DEFAULT_TOKEN_LIFETIME_SECS)
255 expiry = now + lifetime
256
257 payload = {
258 'iat': _helpers.datetime_to_secs(now),
259 'exp': _helpers.datetime_to_secs(expiry),
260 # The issuer must be the service account email.
261 'iss': self._service_account_email,
262 # The audience must be the auth token endpoint's URI
263 'aud': self._token_uri,
264 'scope': _helpers.scopes_to_string(self._scopes or ())
265 }
266
267 payload.update(self._additional_claims)
268
269 # The subject can be a user email for domain-wide delegation.
270 if self._subject:
271 payload.setdefault('sub', self._subject)
272
273 token = jwt.encode(self._signer, payload)
274
275 return token
276
277 @_helpers.copy_docstring(credentials.Credentials)
278 def refresh(self, request):
279 assertion = self._make_authorization_grant_assertion()
280 access_token, expiry, _ = _client.jwt_grant(
281 request, self._token_uri, assertion)
282 self.token = access_token
283 self.expiry = expiry
284
285 @_helpers.copy_docstring(credentials.Signing)
286 def sign_bytes(self, message):
287 return self._signer.sign(message)
288
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
|
diff --git a/google/oauth2/service_account.py b/google/oauth2/service_account.py
--- a/google/oauth2/service_account.py
+++ b/google/oauth2/service_account.py
@@ -204,6 +204,33 @@
             filename, require=['client_email', 'token_uri'])
         return cls._from_signer_and_info(signer, info, **kwargs)
 
+    def to_jwt_credentials(self):
+        """Creates a :cls:`google.auth.jwt.Credentials` instance from this
+        instance.
+
+        The new instance will use the same private key as this instance and
+        will use this instance's service account email as the issuer and
+        subject.
+
+        This is the same as calling
+        :meth:`jwt.Credentials.from_service_account_file` with the same
+        file used to create these credentials::
+
+            svc_creds = service_account.Credentials.from_service_account_file(
+                'service_account.json')
+            jwt_from_svc = svc_credentials.to_jwt_credentials()
+            # is the same as:
+            jwt_creds = jwt.Credentials.from_service_account_file(
+                'service_account.json')
+
+        Returns:
+            google.auth.jwt.Credentials: A new Credentials instance.
+        """
+        return jwt.Credentials(
+            self._signer,
+            issuer=self._service_account_email,
+            subject=self._service_account_email)
+
     @property
     def requires_scopes(self):
         """Checks if the credentials requires scopes.
|
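As the docstring in the patch spells out, the new helper is meant to be interchangeable with building JWT credentials straight from the key file. A short sketch of both paths, with the file name as a placeholder:

```python
from google.auth import jwt
from google.oauth2 import service_account

svc_creds = service_account.Credentials.from_service_account_file(
    'service_account.json')
jwt_from_svc = svc_creds.to_jwt_credentials()

# Equivalent, per the docstring in the patch above:
jwt_creds = jwt.Credentials.from_service_account_file('service_account.json')
```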
{"golden_diff": "diff --git a/google/oauth2/service_account.py b/google/oauth2/service_account.py\n--- a/google/oauth2/service_account.py\n+++ b/google/oauth2/service_account.py\n@@ -204,6 +204,33 @@\n filename, require=['client_email', 'token_uri'])\n return cls._from_signer_and_info(signer, info, **kwargs)\n \n+ def to_jwt_credentials(self):\n+ \"\"\"Creates a :cls:`google.auth.jwt.Credentials` instance from this\n+ instance.\n+\n+ The new instance will use the same private key as this instance and\n+ will use this instance's service account email as the issuer and\n+ subject.\n+\n+ This is the same as calling\n+ :meth:`jwt.Credentials.from_service_account_file` with the same\n+ file used to create these credentials::\n+\n+ svc_creds = service_account.Credentials.from_service_account_file(\n+ 'service_account.json')\n+ jwt_from_svc = svc_credentials.to_jwt_credentials()\n+ # is the same as:\n+ jwt_creds = jwt.Credentials.from_service_account_file(\n+ 'service_account.json')\n+\n+ Returns:\n+ google.auth.jwt.Credentials: A new Credentials instance.\n+ \"\"\"\n+ return jwt.Credentials(\n+ self._signer,\n+ issuer=self._service_account_email,\n+ subject=self._service_account_email)\n+\n @property\n def requires_scopes(self):\n \"\"\"Checks if the credentials requires scopes.\n", "issue": "Implement service_account.Credentials.to_jwt_credentials()\n(Context: #29)\n\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Service Accounts: JSON Web Token (JWT) Profile for OAuth 2.0\n\nThis module implements the JWT Profile for OAuth 2.0 Authorization Grants\nas defined by `RFC 7523`_ with particular support for how this RFC is\nimplemented in Google's infrastructure. Google refers to these credentials\nas *Service Accounts*.\n\nService accounts are used for server-to-server communication, such as\ninteractions between a web application server and a Google service. The\nservice account belongs to your application instead of to an individual end\nuser. In contrast to other OAuth 2.0 profiles, no users are involved and your\napplication \"acts\" as the service account.\n\nTypically an application uses a service account when the application uses\nGoogle APIs to work with its own data rather than a user's data. For example,\nan application that uses Google Cloud Datastore for data persistence would use\na service account to authenticate its calls to the Google Cloud Datastore API.\nHowever, an application that needs to access a user's Drive documents would\nuse the normal OAuth 2.0 profile.\n\nAdditionally, Google Apps domain administrators can grant service accounts\n`domain-wide delegation`_ authority to access user data on behalf of users in\nthe domain.\n\nThis profile uses a JWT to acquire an OAuth 2.0 access token. The JWT is used\nin place of the usual authorization token returned during the standard\nOAuth 2.0 Authorization Code grant. 
The JWT is only used for this purpose, as\nthe acquired access token is used as the bearer token when making requests\nusing these credentials.\n\nThis profile differs from normal OAuth 2.0 profile because no user consent\nstep is required. The use of the private key allows this profile to assert\nidentity directly.\n\nThis profile also differs from the :mod:`google.auth.jwt` authentication\nbecause the JWT credentials use the JWT directly as the bearer token. This\nprofile instead only uses the JWT to obtain an OAuth 2.0 access token. The\nobtained OAuth 2.0 access token is used as the bearer token.\n\nDomain-wide delegation\n----------------------\n\nDomain-wide delegation allows a service account to access user data on\nbehalf of any user in a Google Apps domain without consent from the user.\nFor example, an application that uses the Google Calendar API to add events to\nthe calendars of all users in a Google Apps domain would use a service account\nto access the Google Calendar API on behalf of users.\n\nThe Google Apps administrator must explicitly authorize the service account to\ndo this. This authorization step is referred to as \"delegating domain-wide\nauthority\" to a service account.\n\nYou can use domain-wise delegation by creating a set of credentials with a\nspecific subject using :meth:`~Credentials.with_subject`.\n\n.. _RFC 7523: https://tools.ietf.org/html/rfc7523\n\"\"\"\n\nimport datetime\n\nfrom google.auth import _helpers\nfrom google.auth import _service_account_info\nfrom google.auth import credentials\nfrom google.auth import jwt\nfrom google.oauth2 import _client\n\n_DEFAULT_TOKEN_LIFETIME_SECS = 3600 # 1 hour in sections\n\n\nclass Credentials(credentials.Signing,\n credentials.Scoped,\n credentials.Credentials):\n \"\"\"Service account credentials\n\n Usually, you'll create these credentials with one of the helper\n constructors. To create credentials using a Google service account\n private key JSON file::\n\n credentials = service_account.Credentials.from_service_account_file(\n 'service-account.json')\n\n Or if you already have the service account file loaded::\n\n service_account_info = json.load(open('service_account.json'))\n credentials = service_account.Credentials.from_service_account_info(\n service_account_info)\n\n Both helper methods pass on arguments to the constructor, so you can\n specify additional scopes and a subject if necessary::\n\n credentials = service_account.Credentials.from_service_account_file(\n 'service-account.json',\n scopes=['email'],\n subject='[email protected]')\n\n The credentials are considered immutable. If you want to modify the scopes\n or the subject used for delegation, use :meth:`with_scopes` or\n :meth:`with_subject`::\n\n scoped_credentials = credentials.with_scopes(['email'])\n delegated_credentials = credentials.with_subject(subject)\n \"\"\"\n\n def __init__(self, signer, service_account_email, token_uri, scopes=None,\n subject=None, additional_claims=None):\n \"\"\"\n Args:\n signer (google.auth.crypt.Signer): The signer used to sign JWTs.\n service_account_email (str): The service account's email.\n scopes (Sequence[str]): Scopes to request during the authorization\n grant.\n token_uri (str): The OAuth 2.0 Token URI.\n subject (str): For domain-wide delegation, the email address of the\n user to for which to request delegated access.\n additional_claims (Mapping[str, str]): Any additional claims for\n the JWT assertion used in the authorization grant.\n\n .. 
note:: Typically one of the helper constructors\n :meth:`from_service_account_file` or\n :meth:`from_service_account_info` are used instead of calling the\n constructor directly.\n \"\"\"\n super(Credentials, self).__init__()\n\n self._scopes = scopes\n self._signer = signer\n self._service_account_email = service_account_email\n self._subject = subject\n self._token_uri = token_uri\n\n if additional_claims is not None:\n self._additional_claims = additional_claims\n else:\n self._additional_claims = {}\n\n @classmethod\n def _from_signer_and_info(cls, signer, info, **kwargs):\n \"\"\"Creates a Credentials instance from a signer and service account\n info.\n\n Args:\n signer (google.auth.crypt.Signer): The signer used to sign JWTs.\n info (Mapping[str, str]): The service account info.\n kwargs: Additional arguments to pass to the constructor.\n\n Returns:\n google.auth.jwt.Credentials: The constructed credentials.\n\n Raises:\n ValueError: If the info is not in the expected format.\n \"\"\"\n return cls(\n signer,\n service_account_email=info['client_email'],\n token_uri=info['token_uri'], **kwargs)\n\n @classmethod\n def from_service_account_info(cls, info, **kwargs):\n \"\"\"Creates a Credentials instance from parsed service account info.\n\n Args:\n info (Mapping[str, str]): The service account info in Google\n format.\n kwargs: Additional arguments to pass to the constructor.\n\n Returns:\n google.auth.service_account.Credentials: The constructed\n credentials.\n\n Raises:\n ValueError: If the info is not in the expected format.\n \"\"\"\n signer = _service_account_info.from_dict(\n info, require=['client_email', 'token_uri'])\n return cls._from_signer_and_info(signer, info, **kwargs)\n\n @classmethod\n def from_service_account_file(cls, filename, **kwargs):\n \"\"\"Creates a Credentials instance from a service account json file.\n\n Args:\n filename (str): The path to the service account json file.\n kwargs: Additional arguments to pass to the constructor.\n\n Returns:\n google.auth.service_account.Credentials: The constructed\n credentials.\n \"\"\"\n info, signer = _service_account_info.from_filename(\n filename, require=['client_email', 'token_uri'])\n return cls._from_signer_and_info(signer, info, **kwargs)\n\n @property\n def requires_scopes(self):\n \"\"\"Checks if the credentials requires scopes.\n\n Returns:\n bool: True if there are no scopes set otherwise False.\n \"\"\"\n return True if not self._scopes else False\n\n @_helpers.copy_docstring(credentials.Scoped)\n def with_scopes(self, scopes):\n return Credentials(\n self._signer,\n service_account_email=self._service_account_email,\n scopes=scopes,\n token_uri=self._token_uri,\n subject=self._subject,\n additional_claims=self._additional_claims.copy())\n\n def with_subject(self, subject):\n \"\"\"Create a copy of these credentials with the specified subject.\n\n Args:\n subject (str): The subject claim.\n\n Returns:\n google.auth.service_account.Credentials: A new credentials\n instance.\n \"\"\"\n return Credentials(\n self._signer,\n service_account_email=self._service_account_email,\n scopes=self._scopes,\n token_uri=self._token_uri,\n subject=subject,\n additional_claims=self._additional_claims.copy())\n\n def _make_authorization_grant_assertion(self):\n \"\"\"Create the OAuth 2.0 assertion.\n\n This assertion is used during the OAuth 2.0 grant to acquire an\n access token.\n\n Returns:\n bytes: The authorization grant assertion.\n \"\"\"\n now = _helpers.utcnow()\n lifetime = 
datetime.timedelta(seconds=_DEFAULT_TOKEN_LIFETIME_SECS)\n expiry = now + lifetime\n\n payload = {\n 'iat': _helpers.datetime_to_secs(now),\n 'exp': _helpers.datetime_to_secs(expiry),\n # The issuer must be the service account email.\n 'iss': self._service_account_email,\n # The audience must be the auth token endpoint's URI\n 'aud': self._token_uri,\n 'scope': _helpers.scopes_to_string(self._scopes or ())\n }\n\n payload.update(self._additional_claims)\n\n # The subject can be a user email for domain-wide delegation.\n if self._subject:\n payload.setdefault('sub', self._subject)\n\n token = jwt.encode(self._signer, payload)\n\n return token\n\n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n assertion = self._make_authorization_grant_assertion()\n access_token, expiry, _ = _client.jwt_grant(\n request, self._token_uri, assertion)\n self.token = access_token\n self.expiry = expiry\n\n @_helpers.copy_docstring(credentials.Signing)\n def sign_bytes(self, message):\n return self._signer.sign(message)\n", "path": "google/oauth2/service_account.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Service Accounts: JSON Web Token (JWT) Profile for OAuth 2.0\n\nThis module implements the JWT Profile for OAuth 2.0 Authorization Grants\nas defined by `RFC 7523`_ with particular support for how this RFC is\nimplemented in Google's infrastructure. Google refers to these credentials\nas *Service Accounts*.\n\nService accounts are used for server-to-server communication, such as\ninteractions between a web application server and a Google service. The\nservice account belongs to your application instead of to an individual end\nuser. In contrast to other OAuth 2.0 profiles, no users are involved and your\napplication \"acts\" as the service account.\n\nTypically an application uses a service account when the application uses\nGoogle APIs to work with its own data rather than a user's data. For example,\nan application that uses Google Cloud Datastore for data persistence would use\na service account to authenticate its calls to the Google Cloud Datastore API.\nHowever, an application that needs to access a user's Drive documents would\nuse the normal OAuth 2.0 profile.\n\nAdditionally, Google Apps domain administrators can grant service accounts\n`domain-wide delegation`_ authority to access user data on behalf of users in\nthe domain.\n\nThis profile uses a JWT to acquire an OAuth 2.0 access token. The JWT is used\nin place of the usual authorization token returned during the standard\nOAuth 2.0 Authorization Code grant. The JWT is only used for this purpose, as\nthe acquired access token is used as the bearer token when making requests\nusing these credentials.\n\nThis profile differs from normal OAuth 2.0 profile because no user consent\nstep is required. 
The use of the private key allows this profile to assert\nidentity directly.\n\nThis profile also differs from the :mod:`google.auth.jwt` authentication\nbecause the JWT credentials use the JWT directly as the bearer token. This\nprofile instead only uses the JWT to obtain an OAuth 2.0 access token. The\nobtained OAuth 2.0 access token is used as the bearer token.\n\nDomain-wide delegation\n----------------------\n\nDomain-wide delegation allows a service account to access user data on\nbehalf of any user in a Google Apps domain without consent from the user.\nFor example, an application that uses the Google Calendar API to add events to\nthe calendars of all users in a Google Apps domain would use a service account\nto access the Google Calendar API on behalf of users.\n\nThe Google Apps administrator must explicitly authorize the service account to\ndo this. This authorization step is referred to as \"delegating domain-wide\nauthority\" to a service account.\n\nYou can use domain-wise delegation by creating a set of credentials with a\nspecific subject using :meth:`~Credentials.with_subject`.\n\n.. _RFC 7523: https://tools.ietf.org/html/rfc7523\n\"\"\"\n\nimport datetime\n\nfrom google.auth import _helpers\nfrom google.auth import _service_account_info\nfrom google.auth import credentials\nfrom google.auth import jwt\nfrom google.oauth2 import _client\n\n_DEFAULT_TOKEN_LIFETIME_SECS = 3600 # 1 hour in sections\n\n\nclass Credentials(credentials.Signing,\n credentials.Scoped,\n credentials.Credentials):\n \"\"\"Service account credentials\n\n Usually, you'll create these credentials with one of the helper\n constructors. To create credentials using a Google service account\n private key JSON file::\n\n credentials = service_account.Credentials.from_service_account_file(\n 'service-account.json')\n\n Or if you already have the service account file loaded::\n\n service_account_info = json.load(open('service_account.json'))\n credentials = service_account.Credentials.from_service_account_info(\n service_account_info)\n\n Both helper methods pass on arguments to the constructor, so you can\n specify additional scopes and a subject if necessary::\n\n credentials = service_account.Credentials.from_service_account_file(\n 'service-account.json',\n scopes=['email'],\n subject='[email protected]')\n\n The credentials are considered immutable. If you want to modify the scopes\n or the subject used for delegation, use :meth:`with_scopes` or\n :meth:`with_subject`::\n\n scoped_credentials = credentials.with_scopes(['email'])\n delegated_credentials = credentials.with_subject(subject)\n \"\"\"\n\n def __init__(self, signer, service_account_email, token_uri, scopes=None,\n subject=None, additional_claims=None):\n \"\"\"\n Args:\n signer (google.auth.crypt.Signer): The signer used to sign JWTs.\n service_account_email (str): The service account's email.\n scopes (Sequence[str]): Scopes to request during the authorization\n grant.\n token_uri (str): The OAuth 2.0 Token URI.\n subject (str): For domain-wide delegation, the email address of the\n user to for which to request delegated access.\n additional_claims (Mapping[str, str]): Any additional claims for\n the JWT assertion used in the authorization grant.\n\n .. 
note:: Typically one of the helper constructors\n :meth:`from_service_account_file` or\n :meth:`from_service_account_info` are used instead of calling the\n constructor directly.\n \"\"\"\n super(Credentials, self).__init__()\n\n self._scopes = scopes\n self._signer = signer\n self._service_account_email = service_account_email\n self._subject = subject\n self._token_uri = token_uri\n\n if additional_claims is not None:\n self._additional_claims = additional_claims\n else:\n self._additional_claims = {}\n\n @classmethod\n def _from_signer_and_info(cls, signer, info, **kwargs):\n \"\"\"Creates a Credentials instance from a signer and service account\n info.\n\n Args:\n signer (google.auth.crypt.Signer): The signer used to sign JWTs.\n info (Mapping[str, str]): The service account info.\n kwargs: Additional arguments to pass to the constructor.\n\n Returns:\n google.auth.jwt.Credentials: The constructed credentials.\n\n Raises:\n ValueError: If the info is not in the expected format.\n \"\"\"\n return cls(\n signer,\n service_account_email=info['client_email'],\n token_uri=info['token_uri'], **kwargs)\n\n @classmethod\n def from_service_account_info(cls, info, **kwargs):\n \"\"\"Creates a Credentials instance from parsed service account info.\n\n Args:\n info (Mapping[str, str]): The service account info in Google\n format.\n kwargs: Additional arguments to pass to the constructor.\n\n Returns:\n google.auth.service_account.Credentials: The constructed\n credentials.\n\n Raises:\n ValueError: If the info is not in the expected format.\n \"\"\"\n signer = _service_account_info.from_dict(\n info, require=['client_email', 'token_uri'])\n return cls._from_signer_and_info(signer, info, **kwargs)\n\n @classmethod\n def from_service_account_file(cls, filename, **kwargs):\n \"\"\"Creates a Credentials instance from a service account json file.\n\n Args:\n filename (str): The path to the service account json file.\n kwargs: Additional arguments to pass to the constructor.\n\n Returns:\n google.auth.service_account.Credentials: The constructed\n credentials.\n \"\"\"\n info, signer = _service_account_info.from_filename(\n filename, require=['client_email', 'token_uri'])\n return cls._from_signer_and_info(signer, info, **kwargs)\n\n def to_jwt_credentials(self):\n \"\"\"Creates a :cls:`google.auth.jwt.Credentials` instance from this\n instance.\n\n The new instance will use the same private key as this instance and\n will use this instance's service account email as the issuer and\n subject.\n\n This is the same as calling\n :meth:`jwt.Credentials.from_service_account_file` with the same\n file used to create these credentials::\n\n svc_creds = service_account.Credentials.from_service_account_file(\n 'service_account.json')\n jwt_from_svc = svc_credentials.to_jwt_credentials()\n # is the same as:\n jwt_creds = jwt.Credentials.from_service_account_file(\n 'service_account.json')\n\n Returns:\n google.auth.jwt.Credentials: A new Credentials instance.\n \"\"\"\n return jwt.Credentials(\n self._signer,\n issuer=self._service_account_email,\n subject=self._service_account_email)\n\n @property\n def requires_scopes(self):\n \"\"\"Checks if the credentials requires scopes.\n\n Returns:\n bool: True if there are no scopes set otherwise False.\n \"\"\"\n return True if not self._scopes else False\n\n @_helpers.copy_docstring(credentials.Scoped)\n def with_scopes(self, scopes):\n return Credentials(\n self._signer,\n service_account_email=self._service_account_email,\n scopes=scopes,\n 
token_uri=self._token_uri,\n subject=self._subject,\n additional_claims=self._additional_claims.copy())\n\n def with_subject(self, subject):\n \"\"\"Create a copy of these credentials with the specified subject.\n\n Args:\n subject (str): The subject claim.\n\n Returns:\n google.auth.service_account.Credentials: A new credentials\n instance.\n \"\"\"\n return Credentials(\n self._signer,\n service_account_email=self._service_account_email,\n scopes=self._scopes,\n token_uri=self._token_uri,\n subject=subject,\n additional_claims=self._additional_claims.copy())\n\n def _make_authorization_grant_assertion(self):\n \"\"\"Create the OAuth 2.0 assertion.\n\n This assertion is used during the OAuth 2.0 grant to acquire an\n access token.\n\n Returns:\n bytes: The authorization grant assertion.\n \"\"\"\n now = _helpers.utcnow()\n lifetime = datetime.timedelta(seconds=_DEFAULT_TOKEN_LIFETIME_SECS)\n expiry = now + lifetime\n\n payload = {\n 'iat': _helpers.datetime_to_secs(now),\n 'exp': _helpers.datetime_to_secs(expiry),\n # The issuer must be the service account email.\n 'iss': self._service_account_email,\n # The audience must be the auth token endpoint's URI\n 'aud': self._token_uri,\n 'scope': _helpers.scopes_to_string(self._scopes or ())\n }\n\n payload.update(self._additional_claims)\n\n # The subject can be a user email for domain-wide delegation.\n if self._subject:\n payload.setdefault('sub', self._subject)\n\n token = jwt.encode(self._signer, payload)\n\n return token\n\n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n assertion = self._make_authorization_grant_assertion()\n access_token, expiry, _ = _client.jwt_grant(\n request, self._token_uri, assertion)\n self.token = access_token\n self.expiry = expiry\n\n @_helpers.copy_docstring(credentials.Signing)\n def sign_bytes(self, message):\n return self._signer.sign(message)\n", "path": "google/oauth2/service_account.py"}]}
| 3,310 | 321 |